I wrote a custom record reader to read plain-text and gzip files in Hadoop, because I have a special requirement: the key should be the file name and the value should be the complete contents of the file. The source is below:
import java.io.IOException;
import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class WholeFileRecordReader extends RecordReader<Text, BytesWritable> {

    private CompressionCodecFactory compressionCodecs = null;
    private FileSplit fileSplit;
    private Configuration conf;
    private InputStream in;
    private Text key = new Text("");
    private BytesWritable value = new BytesWritable();
    private boolean processed = false;

    @Override
    public void initialize(InputSplit split, TaskAttemptContext context)
            throws IOException, InterruptedException {
        this.fileSplit = (FileSplit) split;
        this.conf = context.getConfiguration();
        final Path file = fileSplit.getPath();
        compressionCodecs = new CompressionCodecFactory(conf);
        final CompressionCodec codec = compressionCodecs.getCodec(file);
        System.out.println(codec); // debug: which codec (if any) was detected
        FileSystem fs = file.getFileSystem(conf);
        in = fs.open(file);
        if (codec != null) {
            // wrap the raw stream so reads return decompressed bytes
            in = codec.createInputStream(in);
        }
    }

    @Override
    public boolean nextKeyValue() throws IOException, InterruptedException {
        if (!processed) {
            // buffer sized from the split, i.e. the on-disk length of the file
            byte[] contents = new byte[(int) fileSplit.getLength()];
            Path file = fileSplit.getPath();
            key.set(file.getName());
            try {
                IOUtils.readFully(in, contents, 0, contents.length);
                value.set(contents, 0, contents.length);
            } finally {
                IOUtils.closeStream(in);
            }
            processed = true;
            return true;
        }
        return false;
    }

    @Override
    public Text getCurrentKey() throws IOException, InterruptedException {
        return key;
    }

    @Override
    public BytesWritable getCurrentValue() throws IOException, InterruptedException {
        return value;
    }

    @Override
    public float getProgress() throws IOException {
        return processed ? 1.0f : 0.0f;
    }

    @Override
    public void close() throws IOException {
        // Do nothing
    }
}
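For context, the reader is plugged in through a non-splittable FileInputFormat, roughly along these lines (a simplified sketch, not my exact code):

public class WholeFileInputFormat extends FileInputFormat<Text, BytesWritable> {

    // keep each file in a single split so one reader sees the whole file
    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        return false;
    }

    @Override
    public RecordReader<Text, BytesWritable> createRecordReader(InputSplit split,
            TaskAttemptContext context) throws IOException, InterruptedException {
        return new WholeFileRecordReader();
    }
}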
The problem is that my code reads incomplete file data. This is probably because I use the fileSplit (which points to the compressed file) to determine the length of the contents, so the length I get is the compressed size, which is smaller than the uncompressed data. As a result, incomplete data is passed to the Mapper.
Can someone point out how to get the actual (uncompressed) length of the gzipped file data, or how to modify the RecordReader so that it reads the complete data?
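One direction I have been considering (an untested sketch, not a confirmed fix): stop trusting fileSplit.getLength() for compressed input and instead read the already-decompressed stream until EOF into a growing buffer, for example with Hadoop's IOUtils.copyBytes:

@Override
public boolean nextKeyValue() throws IOException, InterruptedException {
    if (!processed) {
        key.set(fileSplit.getPath().getName());
        // read until EOF so the buffer grows to the full uncompressed size,
        // instead of stopping at the compressed split length
        java.io.ByteArrayOutputStream out = new java.io.ByteArrayOutputStream();
        try {
            IOUtils.copyBytes(in, out, 4096, false);
            byte[] contents = out.toByteArray();
            value.set(contents, 0, contents.length);
        } finally {
            IOUtils.closeStream(in);
        }
        processed = true;
        return true;
    }
    return false;
}

Would something along these lines be the right approach, or is there a better way to get the real uncompressed length up front?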