I am running a simple wordcount job on a 1GB text file. My cluster has 8 datanodes and 1 namenode, each with 3GB of storage capacity.
When I run wordcount, the map phase always completes successfully, while the reducers throw errors and fail. Please find the error messages below.
14/10/05 15:42:02 INFO mapred.JobClient: map 100% reduce 31%
14/10/05 15:42:07 INFO mapred.JobClient: Task Id : attempt_201410051534_0002_m_000016_0, Status : FAILED
FSError: java.io.IOException: No space left on device
14/10/05 15:42:14 INFO mapred.JobClient: Task Id : attempt_201410051534_0002_r_000000_0, Status : FAILED
java.io.IOException: Task: attempt_201410051534_0002_r_000000_0 - The reduce copier failed
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:390)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for file:/app/hadoop/tmp/mapred/local/taskTracker/hduser/jobcache/job_201410051534_0002/attempt_201410051534_0002_r_000000_0/output/map_18.out
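For reference, the path in the last line sits under my Hadoop scratch area. If it helps, this is roughly how that directory is configured on my nodes (a sketch of core-site.xml; the value /app/hadoop/tmp is taken from the error path above, and as far as I understand mapred.local.dir defaults to ${hadoop.tmp.dir}/mapred/local, which matches the /app/hadoop/tmp/mapred/local path in the trace):

<!-- core-site.xml (sketch): the scratch area the reduce copier spills map output into.
     mapred.local.dir is not set explicitly, so it falls back to ${hadoop.tmp.dir}/mapred/local. -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
</property>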
Could you tell me how to resolve this issue?
Thanks, Nawaz