I am trying to write some simple code with the MapReduce framework. Previously I implemented it with the old mapred package, where I was able to set the input format class to KeyValueTextInputFormat. In the new mapreduce API, however, that class does not exist, so I tried TextInputFormat.class instead, but I still get the following exception:
- job_local_0001
java.lang.ClassCastException: org.apache.hadoop.io.LongWritable cannot be cast to org.apache.hadoop.io.Text
at com.hp.hpl.mapReduceprocessing.MapReduceWrapper$HitFileProccesorMapper_internal.map(MapReduceWrapper.java:1)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:621)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
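For reference, the mapper named in that stack trace is declared roughly as follows (a simplified sketch; the actual processing logic is omitted, only the type parameters matter here):

// Simplified sketch of HitFileProccesorMapper_internal: the input key type is Text,
// while TextInputFormat actually delivers LongWritable byte offsets as keys.
public static class HitFileProccesorMapper_internal
        extends Mapper<Text, Text, Text, Text> {

    @Override
    protected void map(Text key, Text value, Context context)
            throws IOException, InterruptedException {
        // ... record parsing omitted ...
        context.write(key, value);
    }
}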
Here is a sample snippet of the code:
Configuration conf = new Configuration();
conf.set("key.value.separator.output.line", ",");

Job job = new Job(conf, "Result Aggregation");
job.setJarByClass(ProcessInputFile.class);

// Input/output formats: TextInputFormat produces LongWritable keys and Text values.
job.setInputFormatClass(TextInputFormat.class);
job.setOutputFormatClass(TextOutputFormat.class);

// Map output types
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(Text.class);

// Run the actual mapper through MultithreadedMapper with 3 threads.
job.setMapperClass(MultithreadedMapper.class);
MultithreadedMapper.setMapperClass(job, HitFileProccesorMapper_internal.class);
MultithreadedMapper.setNumberOfThreads(job, 3);
//job.setMapperClass(HitFileProccesorMapper_internal.class);

job.setReducerClass(HitFileReducer_internal.class);

// Final (reducer) output types
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);

FileInputFormat.addInputPath(job, new Path(inputFileofhits.getName()));
FileOutputFormat.setOutputPath(job, new Path(ProcessInputFile.resultAggProps
        .getProperty("OUTPUT_DIRECTORY")));

try {
    job.waitForCompletion(true);
} catch (InterruptedException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
} catch (ClassNotFoundException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}
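For comparison, the earlier version built on the old mapred package, which worked, was set up roughly like this (reconstructed from memory; the mapper/reducer names and the output directory variable here are placeholders):

// Old-API (org.apache.hadoop.mapred) driver, roughly as it was before the port:
JobConf conf = new JobConf(ProcessInputFile.class);
conf.set("key.value.separator.in.input.line", ",");

conf.setInputFormat(KeyValueTextInputFormat.class);   // available in the old mapred package
conf.setOutputFormat(TextOutputFormat.class);

conf.setMapOutputKeyClass(Text.class);
conf.setMapOutputValueClass(Text.class);
conf.setOutputKeyClass(Text.class);
conf.setOutputValueClass(Text.class);

conf.setMapperClass(HitFileProcessorMapper.class);    // placeholder name for the old-API mapper
conf.setReducerClass(HitFileReducer.class);           // placeholder name for the old-API reducer

FileInputFormat.addInputPath(conf, new Path(inputFileofhits.getName()));
FileOutputFormat.setOutputPath(conf, new Path(outputDir));   // placeholder for the output directory

JobClient.runJob(conf);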
Please let me know what configuration changes I need to make to avoid this ClassCastException.