
I am getting the following log on my NameNode, which then removes my DataNode from the cluster:

2013-02-08 03:25:54,345 WARN  namenode.NameNode (NameNodeRpcServer.java:errorReport(825)) - Fatal disk error on xxx.xxx.xxx.xxx:50010: DataNode failed volumes:/home/srikmvm/hadoop-0.23.0/tmp/current;
2013-02-08 03:25:54,349 INFO  net.NetworkTopology (NetworkTopology.java:remove(367)) - Removing a node: /default-rack/xxx.xxx.xxx.xxx:50010

Can anyone suggest how to fix this?

DataNode log:

2013-02-08 03:25:54,718 WARN datanode.DataNode (FSDataset.java:checkDirs(871)) - Removing failed volume /home/srikmvm/hadoop-0.23.0/tmp/current:   
    org.apache.hadoop.util.DiskChecker$DiskErrorException: can not create directory: /home/srikmvm/hadoop-0.23.0/tmp/current/BP-876979163-137.132.153.125-1360241194423/current/finalized
      at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:87)

1 Answer


What causes this error message:

  • Does the directory / directory path exist?
  • Does the user running the datanode process have permission to create and write to that directory? (See the sketch after this list.)

    /home/srikmvm/hadoop-0.23.0/tmp/current/BP-876979163-137.132.153.125-1360241194423/current/finalized
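A minimal sketch to verify both conditions, assuming the volume path from the log above (adjust it to whatever your dfs.datanode.data.dir points at) and run as the same user that starts the DataNode process:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class VolumeDirCheck {
        public static void main(String[] args) throws IOException {
            // Path taken from the DataNode log above; change to your own data dir
            Path dir = Paths.get("/home/srikmvm/hadoop-0.23.0/tmp/current");

            if (!Files.exists(dir)) {
                // Cause 1: the directory (or one of its parents) is missing
                System.out.println("Missing directory, creating: " + dir);
                Files.createDirectories(dir);
            }

            if (!Files.isWritable(dir)) {
                // Cause 2: the user running the DataNode cannot write here
                System.out.println("Not writable by the current user: " + dir);
            } else {
                System.out.println("Directory exists and is writable: " + dir);
            }
        }
    }

If the directory turns out to be missing or not writable, create it and change its ownership to the DataNode user, then restart the DataNode so it picks the volume up again.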

Answered on 2013-02-09T02:10:51.317