
I am installing Hadoop following this tutorial: http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/ and I am now stuck at the "Copying local example data to HDFS" step.

The connection error I get:

12/10/26 17:29:16 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 0 time(s).
12/10/26 17:29:17 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 1 time(s).
12/10/26 17:29:18 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 2 time(s).
12/10/26 17:29:19 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 3 time(s).
12/10/26 17:29:20 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 4 time(s).
12/10/26 17:29:21 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 5 time(s).
12/10/26 17:29:22 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 6 time(s).
12/10/26 17:29:23 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 7 time(s).
12/10/26 17:29:24 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 8 time(s).
12/10/26 17:29:25 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 9 time(s).
Bad connection to FS. command aborted. exception: Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused

This is pretty much the same issue as this question: Errors while running hadoop

The point is that I have already disabled IPv6, as described there and in the tutorial above, but it didn't help. Is there something I've been missing?
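"Connection refused" on 127.0.0.1:54310 usually means nothing is listening on that port. A quick way to check is to inspect a netstat listing; a minimal sketch (the helper name listening_on_port is made up for illustration, and the port 54310 is taken from the log above):

```shell
#!/bin/sh
# Check a netstat-style listing for a given port.
# In practice you would feed it the output of: sudo netstat -tlnp
listening_on_port() {
    # $1: netstat listing, $2: port number
    echo "$1" | grep -q ":$2 " && echo "listening" || echo "not listening"
}

# Example with a sample listing in which the NameNode port is absent
sample='tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 1234/cupsd'
listening_on_port "$sample" 54310   # prints: not listening
```

If nothing is listening on 54310, the NameNode itself is not running, which points away from IPv6 as the cause.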

Edit:

I repeated the tutorial on a second machine with a freshly installed Ubuntu and compared it step by step. It turned out there were some mistakes in the .bashrc configuration of the hduser. Afterwards it worked fine...
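For reference, the environment entries that the tutorial places in hduser's ~/.bashrc look roughly like this (the exact paths are assumptions and depend on where Hadoop and the JDK were installed; adjust them to your system):

```shell
# Example ~/.bashrc entries for the hduser account.
# Paths below are assumptions based on a typical single-node setup;
# replace them with your actual Hadoop and Java install locations.
export HADOOP_HOME=/usr/local/hadoop
export JAVA_HOME=/usr/lib/jvm/java-6-sun   # adjust to your installed JDK
export PATH=$PATH:$HADOOP_HOME/bin
```

A typo in any of these (especially JAVA_HOME) can prevent the daemons from starting, which then produces exactly the "Connection refused" error shown above.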


3 Answers


I get that exact error message if I try hadoop fs <anything> while the DataNode/NameNode is not running, so I would guess the same is happening for you.

Type jps in your terminal. If everything is running, it should look like this:

16022 DataNode
16524 Jps
15434 TaskTracker
15223 JobTracker
15810 NameNode
16229 SecondaryNameNode

I would bet that your DataNode or NameNode is not running. If anything is missing from the jps printout, start it again.
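The check above can be scripted; a minimal sketch, assuming the five daemons in the jps listing above are the ones expected (the helper name missing_daemons is made up for illustration):

```shell
#!/bin/sh
# Print which of the expected Hadoop 1.x daemons are missing from a `jps` listing.
# Daemon names are taken from the jps output shown above.
missing_daemons() {
    # $1: output of `jps`
    for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
        # -w matches whole words, so "NameNode" does not match "SecondaryNameNode"
        echo "$1" | grep -qw "$d" || echo "$d"
    done
}

# Example: a jps listing in which the NameNode is not running
sample='16022 DataNode
16524 Jps
15434 TaskTracker
15223 JobTracker
16229 SecondaryNameNode'
missing_daemons "$sample"   # prints: NameNode
```

In a real session you would call it as missing_daemons "$(jps)" and restart whichever daemons it names.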

answered 2012-10-26T17:59:14.063

After the whole configuration, run this command:

hadoop namenode -format

and start all the services with this command:

start-all.sh

This will solve your problem.

answered 2013-09-26T08:38:53.750
  1. Go to your etc/hadoop/core-site.xml. Check that the value of fs.default.name looks like this: { fs.default.name hdfs://localhost:54310 }
  2. After the whole configuration, run this command:

hadoop namenode -format

  3. Start all the services with this command:

start-all.sh

This will solve your problem.
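The fs.default.name setting from step 1 would look like this inside core-site.xml (the value and port are taken from the question's log; the surrounding XML follows Hadoop's standard property format):

```xml
<!-- conf/core-site.xml (Hadoop 1.x); port 54310 matches the log in the question -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
</configuration>
```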

Your NameNode may be in safe mode; run bin/hdfs dfsadmin -safemode leave or bin/hadoop dfsadmin -safemode leave, then follow step 2 and step 3.

answered 2015-05-29T09:09:01.900