
I noticed my reducer is stuck because of a dead host, and the logs show a lot of retry messages. Is it possible to tell the jobtracker to give up on the dead node and resume the work? There were 323 mappers and only 1 reducer. I am on hadoop-1.0.3.

2012-08-08 11:52:19,903 INFO org.apache.hadoop.mapred.ReduceTask: 192.168.1.23 Will be considered after: 65 seconds.
2012-08-08 11:53:19,905 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201207191440_0203_r_000000_0 Need another 63 map output(s) where 0 is already in progress
2012-08-08 11:53:19,905 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201207191440_0203_r_000000_0 Scheduled 0 outputs (1 slow hosts and0 dup hosts)
2012-08-08 11:53:19,905 INFO org.apache.hadoop.mapred.ReduceTask: Penalized(slow) Hosts: 
2012-08-08 11:53:19,905 INFO org.apache.hadoop.mapred.ReduceTask: 192.168.1.23 Will be considered after: 5 seconds.
2012-08-08 11:53:29,906 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201207191440_0203_r_000000_0 Scheduled 1 outputs (0 slow hosts and0 dup hosts)
2012-08-08 11:53:47,907 WARN org.apache.hadoop.mapred.ReduceTask: attempt_201207191440_0203_r_000000_0 copy failed: attempt_201207191440_0203_m_000001_0 from 192.168.1.23
2012-08-08 11:53:47,907 WARN org.apache.hadoop.mapred.ReduceTask: java.net.NoRouteToHostException: No route to host
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
    at java.net.Socket.connect(Socket.java:546)
    at sun.net.NetworkClient.doConnect(NetworkClient.java:173)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:409)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:530)
    at sun.net.www.http.HttpClient.<init>(HttpClient.java:240)
    at sun.net.www.http.HttpClient.New(HttpClient.java:321)
    at sun.net.www.http.HttpClient.New(HttpClient.java:338)
    at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:935)
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:876)
    at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:801)
    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.getInputStream(ReduceTask.java:1618)
    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.setupSecureConnection(ReduceTask.java:1575)
    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.getMapOutput(ReduceTask.java:1483)
    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.copyOutput(ReduceTask.java:1394)
    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.run(ReduceTask.java:1326)

2012-08-08 11:53:47,907 INFO org.apache.hadoop.mapred.ReduceTask: Task attempt_201207191440_0203_r_000000_0: Failed fetch #18 from attempt_201207191440_0203_m_000001_0
2012-08-08 11:53:47,907 WARN org.apache.hadoop.mapred.ReduceTask: attempt_201207191440_0203_r_000000_0 adding host 192.168.1.23 to penalty box, next contact in 1124 seconds
2012-08-08 11:53:47,907 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201207191440_0203_r_000000_0: Got 1 map-outputs from previous failures
2012-08-08 11:54:22,909 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201207191440_0203_r_000000_0 Need another 63 map output(s) where 0 is already in progress
2012-08-08 11:54:22,909 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201207191440_0203_r_000000_0 Scheduled 0 outputs (1 slow hosts and0 dup hosts)
2012-08-08 11:54:22,909 INFO org.apache.hadoop.mapred.ReduceTask: Penalized(slow) Hosts: 
2012-08-08 11:54:22,909 INFO org.apache.hadoop.mapred.ReduceTask: 192.168.1.23 Will be considered after: 1089 seconds.

I left it alone, and it retried for a while, eventually gave up on the dead host, reran the mapper, and succeeded. The root cause was that the host had two IP addresses and I had intentionally turned one off, which turned out to be the one Hadoop was using.

My question is whether there is a way to tell Hadoop to give up on the dead host without retrying.


1 Answer


From your log, you can see that one of the tasktrackers that ran map tasks cannot be reached. The tasktracker running the reducer tries to retrieve the intermediate map results over HTTP, and it fails because the tasktracker holding those results is dead.

The default behavior when a tasktracker fails is this:

If they belong to an incomplete job, map tasks that ran and completed successfully on the failed tasktracker are rescheduled by the jobtracker, because their intermediate output resides on the failed tasktracker's local filesystem and may be inaccessible to the reduce task. Any tasks still in progress are also rescheduled.
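
If your goal is for the jobtracker to give up on the dead host sooner, note that it only declares a tasktracker lost after it has missed heartbeats for a configurable interval (10 minutes by default). A minimal sketch for conf/mapred-site.xml, assuming the Hadoop 1.x property name mapred.tasktracker.expiry.interval:

  <property>
     <name>mapred.tasktracker.expiry.interval</name>
     <!-- milliseconds without a heartbeat before the jobtracker
          marks a tasktracker as lost; the default is 600000 (10 minutes) -->
     <value>60000</value>
  </property>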

The problem is that if a task (map or reduce) fails too many times (4 times, I believe), it will not be rescheduled again and the job will fail. In your case, the map seems to have completed successfully, but the reducer could not connect to the mapper to retrieve the intermediate results. It tried 4 times, and then the job failed.
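
That per-task attempt limit is configurable. A sketch, assuming the Hadoop 1.x (old API) property names mapred.map.max.attempts and mapred.reduce.max.attempts:

  <property>
     <name>mapred.map.max.attempts</name>
     <!-- attempts before the jobtracker gives up on a map task; default 4 -->
     <value>4</value>
  </property>
  <property>
     <name>mapred.reduce.max.attempts</name>
     <!-- same limit for reduce task attempts; default 4 -->
     <value>4</value>
  </property>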

A failed task cannot be ignored entirely, because it is part of the job, and the job itself does not succeed unless all of its tasks succeed.
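
One caveat: the all-tasks-must-succeed rule can be relaxed for maps, although it would not have helped here, because the reducer still needs that map's output. A sketch, assuming the Hadoop 1.x property mapred.max.map.failures.percent:

  <property>
     <name>mapred.max.map.failures.percent</name>
     <!-- percentage of map tasks allowed to fail without
          failing the whole job; the default is 0 -->
     <value>5</value>
  </property>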

Try to find the URL the reducer is attempting to access and paste it into a browser to see what error you get.
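
In Hadoop 1.x the reducer fetches map output from the serving tasktracker's HTTP port (50060 by default) via its /mapOutput servlet. For the failing attempt in your log, the URL should look roughly like the following; the exact query-string layout is my reconstruction from the ReduceTask source, so treat it as an assumption:

  # hypothetical reconstruction of the fetch URL the reducer is retrying
  curl -v "http://192.168.1.23:50060/mapOutput?job=job_201207191440_0203&map=attempt_201207191440_0203_m_000001_0&reduce=0"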

You can also blacklist a node and exclude it completely from the list of nodes Hadoop uses:

In conf/mapred-site.xml:

  <property>
     <name>mapred.hosts.exclude</name>
     <value>/full/path/of/host/exclude/file</value>
  </property>

Then, to make the jobtracker re-read the node lists:

  bin/hadoop mradmin -refreshNodes
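
For completeness, a sketch of putting the two together for the host in your log; the exclude file is a plain list of one hostname or IP per line, and the path is whatever you configured above:

  # add the dead host to the exclude file (hypothetical path from the config above)
  echo "192.168.1.23" >> /full/path/of/host/exclude/file

  # tell the jobtracker to re-read the include/exclude lists
  bin/hadoop mradmin -refreshNodes
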
answered 2012-08-08 at 19:14