
When I run the simple wordcount example on my 3-node Hadoop cluster, I get the error below. I have checked all read/write permissions on the necessary folders (the kind of check I mean is sketched after the log). The error does not stop the MapReduce job, but all of the workload ends up on a single machine in the cluster, and the other two machines fail with the same error whenever a task is assigned to them.

12/09/13 09:38:37 INFO mapred.JobClient: Task Id : attempt_201209121718_0006_m_000008_0, Status : FAILED
java.lang.Throwable: Child Error
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
Caused by: java.io.IOException: Creation of symlink from /hadoop/libexec/../logs/userlogs/job_201209121718_0006/attempt_201209121718_0006_m_000008_0 to /hadoop/hadoop-datastore/mapred/local/userlogs/job_201209121718_0006/attempt_201209121718_0006_m_000008_0 failed.
    at org.apache.hadoop.mapred.TaskLog.createTaskAttemptLogDir(TaskLog.java:110)
    at org.apache.hadoop.mapred.DefaultTaskController.createLogDir(DefaultTaskController.java:71)
    at org.apache.hadoop.mapred.TaskRunner.prepareLogFiles(TaskRunner.java:316)
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:228)

12/09/13 09:38:37 WARN mapred.JobClient: Error reading task outputhttp://peter:50060/tasklog?plaintext=true&attemptid=attempt_201209121718_0006_m_000008_0&filter=stdout
12/09/13 09:38:37 WARN mapred.JobClient: Error reading task outputhttp://peter:50060/tasklog?plaintext=true&attemptid=attempt_201209121718_0006_m_000008_0&filter=stderr
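
For reference, the kind of permission check I mean is a small standalone probe along the lines of the sketch below (the class name is made up; the two paths are the ones from the stack trace and may differ per node):

import java.io.File;

// Minimal probe: confirms that the two log directories from the error exist
// and are readable/writable by the user that runs the TaskTracker.
public class LogDirCheck {
    public static void main(String[] args) {
        String[] dirs = {
            "/hadoop/libexec/../logs/userlogs",
            "/hadoop/hadoop-datastore/mapred/local/userlogs"
        };
        for (String d : dirs) {
            File f = new File(d);
            System.out.println(d + "  exists=" + f.exists()
                    + "  canRead=" + f.canRead()
                    + "  canWrite=" + f.canWrite());
        }
    }
}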

What is that error about?


1 Answer


java.lang.Throwable: Child Error
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)

It looks like the memory allocated for the TaskTracker's tasks exceeds the node's actual physical memory. Check this link for an explanation.
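
In MRv1 the worst-case task memory on a node is roughly (map slots + reduce slots) × the child JVM -Xmx, on top of the TaskTracker and DataNode daemons themselves. A minimal sketch of how to read the relevant Hadoop 1.x settings for comparison against the node's RAM (the class name is made up; the fallback values shown are just the stock defaults):

import org.apache.hadoop.mapred.JobConf;

// Prints the MRv1 settings that bound task memory on one node.
// JobConf picks up mapred-default.xml and mapred-site.xml from the classpath.
public class SlotMemoryReport {
    public static void main(String[] args) {
        JobConf conf = new JobConf();
        int mapSlots     = conf.getInt("mapred.tasktracker.map.tasks.maximum", 2);
        int reduceSlots  = conf.getInt("mapred.tasktracker.reduce.tasks.maximum", 2);
        String childOpts = conf.get("mapred.child.java.opts", "-Xmx200m");

        System.out.println("map slots      = " + mapSlots);
        System.out.println("reduce slots   = " + reduceSlots);
        System.out.println("child JVM opts = " + childOpts);
    }
}

If that worst case comes out larger than the node's physical memory, lowering the slot counts or the -Xmx value in mapred-site.xml on the affected nodes is the usual adjustment.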

Answered on 2013-11-29T03:47:35.543