
I'm new to Hadoop. I start Hadoop using the following command...

[gpadmin@BigData1-ahandler root]$ /usr/local/hadoop-0.20.1/bin/start-all.sh
starting namenode, logging to /usr/local/hadoop-0.20.1/logs/hadoop-gpadmin-namenode-BigData1-ahandler.out
localhost: starting datanode, logging to /usr/local/hadoop-0.20.1/logs/hadoop-gpadmin-datanode-BigData1-ahandler.out
localhost: starting secondarynamenode, logging to /usr/local/hadoop-0.20.1/logs/hadoop-gpadmin-secondarynamenode-BigData1-ahandler.out
starting jobtracker, logging to /usr/local/hadoop-0.20.1/logs/hadoop-gpadmin-jobtracker-BigData1-ahandler.out
localhost: starting tasktracker, logging to /usr/local/hadoop-0.20.1/logs/hadoop-gpadmin-tasktracker-BigData1-ahandler.out

When I try to -cat the output from the directory below, I get the error "No node available". What does this error mean, and how can I fix it, or at least start debugging it?

[gpadmin@BigData1-ahandler root]$ hadoop fs -cat output/d*/part-*
13/11/13 15:33:09 INFO hdfs.DFSClient: No node available for block: blk_-5883966349607013512_1099 file=/user/gpadmin/output/d15795/part-00000
13/11/13 15:33:09 INFO hdfs.DFSClient: Could not obtain block blk_-5883966349607013512_1099 from any node:  java.io.IOException: No live nodes contain current block

1 Answer


This happens when the datanodes are started before the namenode.

When the datanodes come up before the namenode, the datanode service tries to check in with the namenode and fails with "namenode not found". Then, once the namenode does start, no datanodes have checked in with it, so it cannot find the nodes holding the blocks you are trying to read.
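
You can confirm this from the client side. As a quick check (assuming the same /usr/local/hadoop-0.20.1 install path as in the question), ask the namenode how many datanodes it knows about and list the Java daemons actually running on the box:

[gpadmin@BigData1-ahandler root]$ /usr/local/hadoop-0.20.1/bin/hadoop dfsadmin -report
[gpadmin@BigData1-ahandler root]$ jps

If dfsadmin -report shows 0 datanodes available, or jps shows no DataNode process, the namenode has no registered datanodes and every read will fail with "No live nodes contain current block".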

You should look through the start-all.sh script and make sure the namenode is started before the datanodes.
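
A simple way out of the current state is to restart the daemons in the right order. As a minimal sketch (again assuming the same install path; hadoop-daemon.sh starts a daemon on the local machine, hadoop-daemons.sh starts it on the nodes listed in conf/slaves):

[gpadmin@BigData1-ahandler root]$ /usr/local/hadoop-0.20.1/bin/stop-all.sh
[gpadmin@BigData1-ahandler root]$ /usr/local/hadoop-0.20.1/bin/hadoop-daemon.sh start namenode
[gpadmin@BigData1-ahandler root]$ /usr/local/hadoop-0.20.1/bin/hadoop-daemons.sh start datanode
[gpadmin@BigData1-ahandler root]$ /usr/local/hadoop-0.20.1/bin/start-mapred.sh

Once the datanodes have re-registered with the namenode, the hadoop fs -cat command above should be able to locate its blocks again.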

Answered 2013-11-13T23:31:41.697