I am trying to distcp a directory that contains hundreds of small files with the .avro extension.

But it fails for some of the files with the following error:

14/09/18 13:05:19 INFO mapred.JobClient:  map 99% reduce 0%
14/09/18 13:05:22 INFO mapred.JobClient:  map 100% reduce 0%
14/09/18 13:05:24 INFO mapred.JobClient: Task Id : attempt_201408291204_35665_m_000000_0, Status : FAILED
java.io.IOException: Copied: 32 Skipped: 0 Failed: 1
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.close(DistCp.java:584)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:417)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
    at org.apache.hadoop.mapred.Child.main(Child.java:262)

14/09/18 13:05:25 INFO mapred.JobClient:  map 83% reduce 0%
14/09/18 13:05:32 INFO mapred.JobClient:  map 100% reduce 0%
14/09/18 13:05:32 INFO mapred.JobClient: Task Id : attempt_201408291204_35665_m_000005_0, Status : FAILED
java.io.IOException: Copied: 20 Skipped: 0 Failed: 1
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.close(DistCp.java:584)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:417)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
    at org.apache.hadoop.mapred.Child.main(Child.java:262)

14/09/18 13:05:33 INFO mapred.JobClient:  map 83% reduce 0%
14/09/18 13:05:41 INFO mapred.JobClient:  map 93% reduce 0%
14/09/18 13:05:48 INFO mapred.JobClient:  map 100% reduce 0%
14/09/18 13:05:51 INFO mapred.JobClient: Job complete: job_201408291204_35665
14/09/18 13:05:51 INFO mapred.JobClient: Counters: 33
14/09/18 13:05:51 INFO mapred.JobClient:   File System Counters
14/09/18 13:05:51 INFO mapred.JobClient:     FILE: Number of bytes read=0
14/09/18 13:05:51 INFO mapred.JobClient:     FILE: Number of bytes written=1050200
14/09/18 13:05:51 INFO mapred.JobClient:     FILE: Number of read operations=0
14/09/18 13:05:51 INFO mapred.JobClient:     FILE: Number of large read operations=0
14/09/18 13:05:51 INFO mapred.JobClient:     FILE: Number of write operations=0
14/09/18 13:05:51 INFO mapred.JobClient:     HDFS: Number of bytes read=782797980
14/09/18 13:05:51 INFO mapred.JobClient:     HDFS: Number of bytes written=0
14/09/18 13:05:51 INFO mapred.JobClient:     HDFS: Number of read operations=88
14/09/18 13:05:51 INFO mapred.JobClient:     HDFS: Number of large read operations=0
14/09/18 13:05:51 INFO mapred.JobClient:     HDFS: Number of write operations=0
14/09/18 13:05:51 INFO mapred.JobClient:     S3: Number of bytes read=0
14/09/18 13:05:51 INFO mapred.JobClient:     S3: Number of bytes written=782775062
14/09/18 13:05:51 INFO mapred.JobClient:     S3: Number of read operations=0
14/09/18 13:05:51 INFO mapred.JobClient:     S3: Number of large read operations=0
14/09/18 13:05:51 INFO mapred.JobClient:     S3: Number of write operations=0
14/09/18 13:05:51 INFO mapred.JobClient:   Job Counters
14/09/18 13:05:51 INFO mapred.JobClient:     Launched map tasks=8
14/09/18 13:05:51 INFO mapred.JobClient:     Total time spent by all maps in occupied slots (ms)=454335
14/09/18 13:05:51 INFO mapred.JobClient:     Total time spent by all reduces in occupied slots (ms)=0
14/09/18 13:05:51 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
14/09/18 13:05:51 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
14/09/18 13:05:51 INFO mapred.JobClient:   Map-Reduce Framework
14/09/18 13:05:51 INFO mapred.JobClient:     Map input records=125
14/09/18 13:05:51 INFO mapred.JobClient:     Map output records=53
14/09/18 13:05:51 INFO mapred.JobClient:     Input split bytes=798
14/09/18 13:05:51 INFO mapred.JobClient:     Spilled Records=0
14/09/18 13:05:51 INFO mapred.JobClient:     CPU time spent (ms)=50250
14/09/18 13:05:51 INFO mapred.JobClient:     Physical memory (bytes) snapshot=1930326016
14/09/18 13:05:51 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=9781469184
14/09/18 13:05:51 INFO mapred.JobClient:     Total committed heap usage (bytes)=5631639552
14/09/18 13:05:51 INFO mapred.JobClient:   org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter
14/09/18 13:05:51 INFO mapred.JobClient:     BYTES_READ=22883
14/09/18 13:05:51 INFO mapred.JobClient:   distcp
14/09/18 13:05:51 INFO mapred.JobClient:     Bytes copied=782769559
14/09/18 13:05:51 INFO mapred.JobClient:     Bytes expected=782769559
14/09/18 13:05:51 INFO mapred.JobClient:     Files copied=70
14/09/18 13:05:51 INFO mapred.JobClient:     Files skipped=53

Here is a further snippet from the JobTracker UI:

2014-09-18 13:04:24,381 INFO org.apache.hadoop.fs.s3native.NativeS3FileSystem: OutputStream for key '09/01/01/SEARCHES/_distcp_tmp_hrb8ba/part-m-00005.avro' upload complete
2014-09-18 13:04:25,136 INFO org.apache.hadoop.tools.DistCp: FAIL part-m-00005.avro : java.io.IOException: Fail to rename tmp file (=s3://magnetic-test/09/01/01/SEARCHES/_distcp_tmp_hrb8ba/part-m-00005.avro) to destination file (=s3://abc/09/01/01/SEARCHES/part-m-00005.avro)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.rename(DistCp.java:494)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.copy(DistCp.java:463)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:549)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:316)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:417)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
    at org.apache.hadoop.mapred.Child.main(Child.java:262)
Caused by: java.io.IOException
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.rename(DistCp.java:490)
    ... 11 more

Has anyone run into this problem?

2 Answers

I solved this problem by adding `-D mapred.task.timeout=60000000` to the distcp command.
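For reference, a minimal sketch of what such an invocation might look like. The source and destination paths below are placeholders, not the OP's actual ones, and note that `-D` generic options must come before the source/target arguments:

```shell
# Hypothetical paths; substitute your own HDFS source and S3 destination.
# mapred.task.timeout is in milliseconds: 60000000 ms is roughly 16.6 hours,
# which effectively stops long-running copy tasks from being killed as hung.
SRC="hdfs:///data/searches"
DST="s3://my-bucket/09/01/01/SEARCHES"
CMD="hadoop distcp -D mapred.task.timeout=60000000 $SRC $DST"
echo "$CMD"
```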

Answered 2016-01-28T17:22:28.263

I tried the suggested answer, but had no luck. I ran into this issue while copying many small files (on the order of thousands, totaling no more than half a GB). I could not get the distcp command to work (I got the same error the OP posted), so switching to `hadoop fs -cp` was my solution. As a side note, on the same cluster, copying other, larger files with distcp worked fine.
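For comparison, the fallback looks roughly like this (paths are placeholders). Unlike distcp, `hadoop fs -cp` copies sequentially from the client rather than via MapReduce tasks, so it sidesteps the per-mapper tmp-file-then-rename step that was failing, at the cost of being much slower for large volumes of data:

```shell
# Hypothetical paths; hadoop fs -cp performs a single-threaded client-side
# copy, with no distcp map tasks and no tmp-file rename on the destination.
SRC="hdfs:///data/small-files"
DST="s3://my-bucket/small-files"
CMD="hadoop fs -cp $SRC $DST"
echo "$CMD"
```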

Answered 2016-02-25T21:02:53.593