I ran a simple sort program, but I ran into the following errors.
12/06/15 01:13:17 WARN mapred.JobClient: Error reading task outputServer returned HTTP response code: 403 for URL: _http://192.168.1.106:50060/tasklog?plaintext=true&attemptid=attempt_201206150102_0002_m_000001_1&filter=stdout
12/06/15 01:13:18 WARN mapred.JobClient: Error reading task outputServer returned HTTP response code: 403 for URL: _http://192.168.1.106:50060/tasklog?plaintext=true&attemptid=attempt_201206150102_0002_m_000001_1&filter=stderr
12/06/15 01:13:20 INFO mapred.JobClient: map 50% reduce 0%
12/06/15 01:13:23 INFO mapred.JobClient: map 100% reduce 0%
12/06/15 01:14:19 INFO mapred.JobClient: Task Id : attempt_201206150102_0002_m_000000_2, Status : FAILED
Too many fetch-failures
12/06/15 01:14:20 WARN mapred.JobClient: Error reading task outputServer returned HTTP response code: 403 for URL: _http://192.168.1.106:50060/tasklog?plaintext=true&attemptid=attempt_201206150102_0002_m_000000_2&filter=stdout
Does anyone know what is causing this and how to fix it?
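In case it matters, the job is submitted with a driver along these lines. This is only a sketch that delegates to the stock Sort example shipped with Hadoop; the class name and HDFS paths are placeholders for my actual setup:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.examples.Sort;
    import org.apache.hadoop.util.ToolRunner;

    // Rough sketch of the driver: it just delegates to the Sort example
    // bundled with Hadoop. The input/output directories are placeholders.
    public class SortDriver {
        public static void main(String[] args) throws Exception {
            int exitCode = ToolRunner.run(new Configuration(), new Sort(),
                    new String[] { "/user/hadoop/sort-in",     // placeholder HDFS input dir
                                   "/user/hadoop/sort-out" }); // placeholder HDFS output dir
            System.exit(exitCode);
        }
    }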
-------- Update: more log information --------
2012-06-15 19:56:07,039 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2012-06-15 19:56:07,258 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2012-06-15 19:56:07,339 INFO org.apache.hadoop.mapred.Task: Using ResourceCalculatorPlugin : null
2012-06-15 19:56:07,346 INFO org.apache.hadoop.mapred.ReduceTask: ShuffleRamManager: MemoryLimit=144965632, MaxSingleShuffleLimit=36241408
2012-06-15 19:56:07,351 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201206151954_0001_r_000000_0 Thread started: Thread for merging on-disk files
2012-06-15 19:56:07 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201206151954_0001_r_000000_0 Thread started: Thread for merging in memory files
2012-06-15 19:56:07,
2012-06-15 19:56:32,077 INFO org.apache.hadoop.mapred.ReduceTask: Task attempt_201206151954_0001_r_000000_0: Failed fetch #1 from attempt_201206151954_0001_m_000000_0
2012-06-15 19:56:32,077 INFO org.apache.hadoop.mapred.ReduceTask: Failed to fetch map-output from attempt_201206151954_0001_m_000000_0 even after MAX_FETCH_RETRIES_PER_MAP retries... or it is a read error, reporting to the JobTracker
2012-06-15 19:56:32,077 WARN org.apache.hadoop.mapred.ReduceTask: attempt_201206151954_0001_r_000000_0 adding host 192.168.1.106 to penalty box, next contact in 12 seconds
2012-06-15 19:56:32,077 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201206151954_0001_r_000000_0: Got 1 map-outputs from previous failures
2012-06-15 19:56:47,080 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201206151954_0001_r_000000_0 Scheduled 1 outputs (0 slow hosts and 0 dup hosts)
2012-06-15 19:56:56,048 WARN org.apache.hadoop.mapred.ReduceTask: attempt_201206151954_0001_r_000000_0 copy failed: attempt_201206151954_0001_m_000000_0 from 192.168.1.106
2012-06-15 19:56:56,049 WARN org.apache.hadoop.mapred.ReduceTask: java.io.IOException: Server returned HTTP response code: 403 for URL: _http://192.168.1.106:50060/mapOutput?job=job_201206151954_0001&map=attempt_201206151954_0001_m_000000_0&reduce=0
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1436)
    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.getInputStream(ReduceTask.java:1639)
    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.setupSecureConnection(ReduceTask.java:1575)
    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.getMapOutput(ReduceTask.java:1483)
    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.copyOutput(ReduceTask.java:1394)
    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.run(ReduceTask.java:1326)
attempt_201206151954_0001_r_000000_0 Need another 2 map output(s) where 0 is already in progress
2012-06-15 19:57:11,053 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201206151954_0001_r_000000_0 Scheduled 0 outputs (1 slow hosts and 0 dup hosts)
2012-06-15 19:57:11,053 INFO org.apache.hadoop.mapred.ReduceTask: Penalized(slow) Hosts:
2012-06-15 19:57:11,053 INFO org.apache.hadoop.mapred.ReduceTask: 192.168.1.106 Will be considered after: 1 seconds.
2012-06-15 19:57:16,055 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201206151954_0001_r_000000_0 Scheduled 1 outputs (0 slow hosts and 0 dup hosts)
2012-06-15 19:57:25,984 WARN org.apache.hadoop.mapred.ReduceTask: attempt_201206151954_0001_r_000000_0 copy failed: attempt_201206151954_0001_m_000000_0 from 192.168.1.106
2012-06-15 19:57:25,984 WARN org.apache.hadoop.mapred.ReduceTask:
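The copy failures above all die inside HttpURLConnection.getInputStream while the reduce copier pulls map output from the TaskTracker on port 50060. As a minimal sketch (the class name is made up; host, port, and attempt id are simply copied from the first log excerpt, and the mapOutput URL can be probed the same way), the response code the TaskTracker returns for one of those URLs can be checked outside of the job like this:

    import java.net.HttpURLConnection;
    import java.net.URL;

    // Quick probe: open the same TaskTracker URL the logs complain about and
    // print the HTTP status. Host, port, and attempt id are copied from the
    // log above; adjust them for another cluster.
    public class TaskTrackerProbe {
        public static void main(String[] args) throws Exception {
            String url = "http://192.168.1.106:50060/tasklog?plaintext=true"
                    + "&attemptid=attempt_201206150102_0002_m_000001_1&filter=stdout";
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setRequestMethod("GET");
            System.out.println("HTTP " + conn.getResponseCode() + " " + conn.getResponseMessage());
            conn.disconnect();
        }
    }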
Best regards,