
I'm running a program on Hadoop 1.0.1 and I get the following exception:

Exception in thread "main" java.lang.ClassNotFoundException: -i
        at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:251)
        at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:319)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:247)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:149)

I should mention that the same code works on Hadoop 1.2.1 but fails on Hadoop 1.0.1.

This is my main class:
public static void main(String[] args) throws Exception {
    readArguments(args);
    JobConf conf = new JobConf(Embed.class);
    conf.setJobName("embed");

    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(LongWritable.class);

    conf.setMapperClass(Map.class);
    //conf.setCombinerClass(Reduce.class);
    conf.setReducerClass(Reduce.class);

    conf.setInputFormat(TextInputFormat.class);
    conf.setOutputFormat(TextOutputFormat.class);

    FileInputFormat.setInputPaths(conf, new Path(input));
    FileOutputFormat.setOutputPath(conf, new Path(output));

    JobClient.runJob(conf);
}
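
For context, here is a minimal sketch of what a `readArguments` helper like the one called above might look like, assuming the job is launched with `-i`/`-o` flags. The flag names and the static `input`/`output` fields are assumptions on my part; the question does not show the actual implementation:

```java
// Hypothetical sketch of an argument parser for a Hadoop driver class.
// The -i/-o flag names and the static input/output fields are assumptions;
// the original readArguments implementation is not shown in the question.
public class ArgSketch {
    static String input;
    static String output;

    static void readArguments(String[] args) {
        for (int i = 0; i < args.length - 1; i++) {
            if ("-i".equals(args[i])) {
                input = args[++i];   // path to the input directory
            } else if ("-o".equals(args[i])) {
                output = args[++i];  // path to the output directory
            }
        }
    }

    public static void main(String[] args) {
        readArguments(new String[] {"-i", "in", "-o", "out"});
        System.out.println(input + " " + output); // prints "in out"
    }
}
```

Note that such flags are only parsed inside `main`; `hadoop jar` itself knows nothing about them, so they must come after the class name on the command line.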

The mapper and reducer class signatures are:

public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, LongWritable> {
     public void map(LongWritable key, Text value, OutputCollector<Text, LongWritable> output, Reporter reporter) throws IOException {
          ........
     }
}

public static class Reduce extends MapReduceBase implements Reducer<Text, LongWritable, Text, LongWritable> {
    public void reduce(Text key, Iterator<LongWritable> values, OutputCollector<Text, LongWritable> output, Reporter reporter) throws IOException {
        ..........
    }
}
Any idea why it doesn't work?
