I'm playing with and learning Hadoop MapReduce.
I'm trying to map data from a VCF file ( http://en.wikipedia.org/wiki/Variant_Call_Format ): VCF is a tab-delimited format that starts with a (possibly large) header. This header is required to decode the semantics of the records in the body.
I want to create a Mapper that consumes this data. The header must be accessible from the Mapper in order to decode the lines.
Following http://jayunit100.blogspot.fr/2013/07/hadoop-processing-headers-in-mappers.html , I created this InputFormat with a custom RecordReader:
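For reference, a minimal VCF fragment looks roughly like this (abridged; `##` meta lines, then the `#CHROM` column line, then tab-delimited records):

```text
##fileformat=VCFv4.1
##INFO=<ID=DP,Number=1,Type=Integer,Description="Total Depth">
#CHROM	POS	ID	REF	ALT	QUAL	FILTER	INFO
1	100	.	A	G	50	PASS	DP=14
```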
public static class VcfInputFormat extends FileInputFormat<LongWritable, Text> {
    /* the VCF header is stored here */
    private List<String> headerLines = new ArrayList<String>();

    @Override
    public RecordReader<LongWritable, Text> createRecordReader(InputSplit split,
            TaskAttemptContext context) throws IOException, InterruptedException {
        return new VcfRecordReader();
    }

    @Override
    protected boolean isSplitable(JobContext context, Path filename) {
        return false;
    }

    private class VcfRecordReader extends LineRecordReader {
        /* reads all lines starting with '#' */
        @Override
        public void initialize(InputSplit genericSplit,
                TaskAttemptContext context) throws IOException {
            super.initialize(genericSplit, context);
            /* append to the enclosing VcfInputFormat's headerLines field */
            while (super.nextKeyValue()) {
                String row = super.getCurrentValue().toString();
                if (!row.startsWith("#")) throw new IOException("Bad VCF header");
                headerLines.add(row);
                if (row.startsWith("#CHROM")) break;
            }
        }
    }
}
Now, in the Mapper, is there a way to get a pointer to VcfInputFormat.this.headerLines
so those header lines can be used to decode the records?
public static class VcfMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {
    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        my.VcfCodec codec = new my.VcfCodec(???????.headerLines);
        my.Variant variant = codec.decode(value.toString());
        //(....)
    }
}
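The `my.VcfCodec` / `my.Variant` classes are not shown; as a rough stdlib-only sketch of what such a codec could do with the stored header lines (the class name and the name-to-value map representation here are assumptions, not the actual `my.*` API):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/** Hypothetical codec: maps the #CHROM header columns onto the fields of one data line. */
public class VcfCodec {
    private final String[] columns;

    public VcfCodec(List<String> headerLines) {
        // the last header line ("#CHROM\tPOS\t...") names the columns
        String chromLine = headerLines.get(headerLines.size() - 1);
        if (!chromLine.startsWith("#CHROM")) {
            throw new IllegalArgumentException("Bad VCF header");
        }
        this.columns = chromLine.substring(1).split("\t"); // drop the leading '#'
    }

    /** Decode one tab-delimited data line into a column-name -> value map. */
    public Map<String, String> decode(String line) {
        String[] tokens = line.split("\t");
        Map<String, String> variant = new LinkedHashMap<String, String>();
        for (int i = 0; i < columns.length && i < tokens.length; i++) {
            variant.put(columns[i], tokens[i]);
        }
        return variant;
    }
}
```

This is exactly why the Mapper needs the header: without the `#CHROM` line there is no way to know which column a token belongs to.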