
I am trying to run the wordcount example on a Hadoop 2.2.0 cluster. Many of the map tasks fail with this exception:

2014-01-07 05:07:12,544 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.net.ConnectException: Call From slave2-machine/127.0.1.1 to slave2-machine:49222 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
    at org.apache.hadoop.ipc.Client.call(Client.java:1351)
    at org.apache.hadoop.ipc.Client.call(Client.java:1300)
    at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:231)
    at com.sun.proxy.$Proxy6.getTask(Unknown Source)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:133)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:708)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:547)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:642)
    at org.apache.hadoop.ipc.Client$Connection.access$2600(Client.java:314)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1399)
    at org.apache.hadoop.ipc.Client.call(Client.java:1318)
    ... 4 more

The offending port changes every time I run the job, but the map tasks keep failing. I don't know which process is supposed to be listening on that port. I also tried watching the netstat -ntlp output while the job was running, and no process ever listens on that port.
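
Roughly, something like this, where 49222 is just the port from the failure above (it changes on every run):

# Refresh once a second and look for a listener on the failing port
# (may need sudo to see process names for other users' processes)
watch -n 1 "netstat -ntlp | grep 49222"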

Update: the contents of /etc/hosts on the master node are:

127.0.0.1   localhost
127.0.1.1   master-machine

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.1.101 slave1 slave1-machine
192.168.1.102 slave2 slave2-machine
192.168.1.1 master

And for slave1:

127.0.0.1   localhost
127.0.1.1   slave1-machine

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.1.1 master
192.168.1.101 slave1
192.168.1.102 slave2 slave2-machine

For slave2 it is the same as slave1 with the small changes you can guess. Finally, the contents of yarn/hadoop/etc/hadoop/slaves on the master are:

slave1
slave2

1 Answer


1. Check that the Hadoop nodes can ssh to each other.
2. Check that the addresses and ports of the Hadoop daemons are the same in the configuration files on every node.
3. Check /etc/hosts on every node (see the sketch below).

Here is a useful link for checking whether you have started the cluster correctly: Cluster Setup
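
A minimal sketch of those checks, run from the master (the hostnames and the yarn/hadoop install path are taken from the question; adjust them to your setup):

# 1. Nodes should be able to ssh to each other without a password prompt
ssh slave1 hostname
ssh slave2 hostname

# 2. Daemon addresses/ports should be identical in the config files on every node
grep -A1 fs.defaultFS ~/yarn/hadoop/etc/hadoop/core-site.xml

# 3. Hostname resolution should be consistent on every node
getent hosts master slave1 slave2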

I got it! Your /etc/hosts files are incorrect. You should remove the 127.0.1.1 lines. I mean they should look like this:

127.0.0.1       localhost
192.168.1.101    master
192.168.1.103    slave1
192.168.1.104    slave2
192.168.1.105    slave3

And copy/paste the same file to all the slaves. Also, the slaves should be able to ssh to each other.
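
A quick sketch to confirm the fix on each node (hostnames from above): the machine's own hostname must resolve to its 192.168.1.x address rather than 127.0.1.1, and the slaves must be able to reach each other:

# Should print the node's 192.168.1.x address, not 127.0.1.1
getent hosts $(hostname)

# Run on slave1, for example: slaves should be able to ssh to each other
ssh slave2 echo ok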

Answered 2014-01-16T20:28:19.757