
I am running a simple map-reduce job. The job uses 250 files from the Common Crawl data.

For example: s3://aws-publicdatasets/common-crawl/parse-output/segment/1341690169105/

If I use 50 or 100 files, everything works fine, but with 250 files I get this error:

java.io.IOException: Attempted read from closed stream.
    at org.apache.commons.httpclient.ContentLengthInputStream.read(ContentLengthInputStream.java:159)
    at java.io.FilterInputStream.read(FilterInputStream.java:116)
    at org.apache.commons.httpclient.AutoCloseInputStream.read(AutoCloseInputStream.java:107)
    at org.jets3t.service.io.InterruptableInputStream.read(InterruptableInputStream.java:76)
    at org.jets3t.service.impl.rest.httpclient.HttpMethodReleaseInputStream.read(HttpMethodReleaseInputStream.java:136)
    at org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.read(NativeS3FileSystem.java:111)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
    at java.io.DataInputStream.readByte(DataInputStream.java:248)
    at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:299)
    at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:320)
    at org.apache.hadoop.io.SequenceFile$Reader.readBuffer(SequenceFile.java:1707)
    at org.apache.hadoop.io.SequenceFile$Reader.seekToCurrentValue(SequenceFile.java:1773)
    at org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:1849)
    at org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.nextKeyValue(SequenceFileRecordReader.java:74)
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:532)
    at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
    at org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper$SubMapRecordReader.nextKeyValue(MultithreadedMapper.java:180)
    at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
    at org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper$MapRunner.run(MultithreadedMapper.java:268)

Any clues?
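For reference, my driver is set up roughly along the lines of the sketch below (class names, key/value types, and the thread count are illustrative, not my exact code): SequenceFileInputFormat over the s3n:// segment path, with the real mapper wrapped in MultithreadedMapper, which matches the frames in the stack trace.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
    import org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class CrawlJob {
        // Placeholder map logic; the real mapper parses the crawl records.
        public static class CrawlMapper extends Mapper<Text, Text, Text, Text> {
            @Override
            protected void map(Text key, Text value, Context context)
                    throws java.io.IOException, InterruptedException {
                context.write(key, value);
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = new Job(new Configuration(), "crawl-job");
            job.setJarByClass(CrawlJob.class);

            // The segment files are SequenceFiles read over s3n://.
            job.setInputFormatClass(SequenceFileInputFormat.class);
            FileInputFormat.addInputPath(job, new Path(
                "s3n://aws-publicdatasets/common-crawl/parse-output/segment/1341690169105/"));

            // The real mapper is wrapped in MultithreadedMapper, as in the trace.
            job.setMapperClass(MultithreadedMapper.class);
            MultithreadedMapper.setMapperClass(job, CrawlMapper.class);
            MultithreadedMapper.setNumberOfThreads(job, 10);

            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(Text.class);
            FileOutputFormat.setOutputPath(job, new Path(args[0]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }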


1 Answer


How many map slots do you have for processing the input? Is it close to 100?

This is a guess, but the connections to S3 may be timing out while you process the first batch of files, so that by the time slots free up to handle more files, those connections are no longer open. I believe timeout errors from NativeS3FileSystem surface as IOExceptions.
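If that is what's happening, one workaround is to reopen the file and seek back to the last good offset when a read fails. A minimal sketch of the idea using the standard Hadoop FileSystem API (the class name and retry count are made up for illustration):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReopeningReader {
        // Open the file fresh and seek to where the closed stream stopped,
        // retrying a few times in case the new connection also drops.
        public static FSDataInputStream openAt(Path path, long offset,
                                               Configuration conf) throws IOException {
            IOException last = null;
            for (int attempt = 0; attempt < 3; attempt++) {
                try {
                    FileSystem fs = path.getFileSystem(conf);
                    FSDataInputStream in = fs.open(path);
                    in.seek(offset);
                    return in;
                } catch (IOException e) {
                    last = e; // stale connection; try a fresh one
                }
            }
            throw last;
        }
    }

Alternatively, since the trace shows MultithreadedMapper, lowering the thread count via MultithreadedMapper.setNumberOfThreads would keep fewer S3 connections open and idle at any one time.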

answered 2013-01-08T04:06:45.327