I used the HBase Export utility tool to export an HBase table as a SequenceFile.
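For reference, the export was run with something like the following (the table name `mytable` and the output path are placeholders for my actual values):

```shell
# HBase's bundled Export MapReduce job dumps a table to HDFS as a
# SequenceFile whose keys are ImmutableBytesWritable and whose values
# are org.apache.hadoop.hbase.client.Result.
hbase org.apache.hadoop.hbase.mapreduce.Export mytable /input
```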
Now I want to process this file with a MapReduce job:
public class MapSequencefile {

    public static class MyMapper extends Mapper<LongWritable, Text, Text, Text> {
        @Override
        protected void map(LongWritable key, Text value,
                Mapper<LongWritable, Text, Text, Text>.Context context)
                throws IOException, InterruptedException {
            System.out.println(key + "...." + value);
        }
    }

    public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, MapSequencefile.class.getSimpleName());
        job.setJarByClass(MapSequencefile.class);
        job.setNumReduceTasks(0);
        job.setMapperClass(MyMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Text.class);
        job.setInputFormatClass(SequenceFileInputFormat.class); // use SequenceFileInputFormat
        FileInputFormat.setInputPaths(job, "hdfs://192.16.31.10:8020/input/");
        FileOutputFormat.setOutputPath(job, new Path("hdfs://192.16.31.10:8020/out/"));
        job.waitForCompletion(true);
    }
}
But it always throws this exception:
Caused by: java.io.IOException: Could not find a deserializer for the Value class: 'org.apache.hadoop.hbase.client.Result'. Please ensure that the configuration 'io.serializations' is properly configured, if you're using custom serialization.
at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1964)
at org.apache.hadoop.io.SequenceFile$Reader.initialize(SequenceFile.java:1811)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1760)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1774)
at org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.initialize(SequenceFileRecordReader.java:54)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:548)
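The key and value classes that the SequenceFile declares can be checked directly, since the class names are stored as plain strings near the start of the file (the file name `part-m-00000` below is an assumed name for one of the Export output files under my input directory):

```shell
# Peek at the SequenceFile header bytes; the writer's key and value
# class names appear as readable strings at the beginning of the file.
hdfs dfs -cat /input/part-m-00000 | head -c 200
# For a file written by HBase Export, this typically shows
# org.apache.hadoop.hbase.io.ImmutableBytesWritable and
# org.apache.hadoop.hbase.client.Result
```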
What can I do to fix this error?