0

All of my settings were correct and I was able to run Hadoop (1.1.2) on a single node. However, after making changes to the relevant files (/etc/hosts, *-site.xml), I am unable to add a datanode to the cluster, and I keep getting the following error on the slave.

Does anyone know how to fix this?

2013-05-13 15:36:10,135 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-05-13 15:36:11,137 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-05-13 15:36:12,140 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
4

2 Answers

0

Try replacing localhost with the namenode's IP address or network name.

answered 2013-07-07T00:43:38.327
0

Check the value of fs.default.name in the core-site.xml conf file (on each node in the cluster). This needs to be the network name of the namenode, and I suspect you have it as hdfs://localhost:54310.
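As a minimal sketch, core-site.xml on every node would point at the namenode's actual hostname rather than localhost (the hostname `master` below is a placeholder, substitute your namenode's real network name):

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- Must be a name the slaves can resolve, NOT localhost -->
    <value>hdfs://master:54310</value>
  </property>
</configuration>
```

With localhost here, each datanode tries to connect to the namenode on its own machine (127.0.0.1), which is exactly the retry loop shown in the log.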

Failing that, check for any mention of localhost in your hadoop configuration files on all nodes in the cluster:

grep localhost $HADOOP_HOME/conf/*.xml
answered 2013-05-13T10:38:49.873