
I am getting an error when appending to a file on HDFS (Cloudera 2.0.0-cdh4.2.0). The use case that triggers the error is:

  • Create a file on the file system (DistributedFileSystem). OK
  • Append to the previously created file. ERROR

    FSDataOutputStream stream = fileSystem.append(filePath);
    stream.write(fileContents);

    Then this error is thrown:

Exception in thread "main" java.io.IOException: Failed to add a datanode.
User may turn off this feature by setting dfs.client.block.write.replace-datanode-on-failure.policy in configuration, where the current policy is DEFAULT. (Nodes: current=[host1:50010, host2:50010], original=[host1:50010, host2:50010])
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:792)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:852)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:958)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:469)
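
For reference, here is a minimal, self-contained sketch of the create-then-append sequence described above (the class name, path and file contents are made up for illustration; the cluster configuration files are assumed to be on the classpath):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AppendExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();         // reads core-site.xml / hdfs-site.xml from the classpath
            FileSystem fs = FileSystem.get(conf);             // DistributedFileSystem when fs.defaultFS points at hdfs://
            Path filePath = new Path("/tmp/append-test.txt");  // illustrative path

            // Step 1: create the file -- this works.
            FSDataOutputStream out = fs.create(filePath, true);
            out.write("first line\n".getBytes("UTF-8"));
            out.close();

            // Step 2: append to the same file -- this is where the IOException above is thrown.
            FSDataOutputStream appendOut = fs.append(filePath);
            appendOut.write("appended line\n".getBytes("UTF-8"));
            appendOut.close();

            fs.close();
        }
    }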

Some relevant HDFS configuration:

dfs.replication is set to 2

dfs.client.block.write.replace-datanode-on-failure.enable is set to true

dfs.client.block.write.replace-datanode-on-failure.policy is set to DEFAULT
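
The error message indicates these are client-side settings; as a hedged sketch (it simply restates the values listed above, applied programmatically instead of via hdfs-site.xml), they map onto a client Configuration like this:

    import org.apache.hadoop.conf.Configuration;

    public class ClientConf {
        // Sketch only: the settings listed above, set on the client-side Configuration.
        static Configuration buildConf() {
            Configuration conf = new Configuration();
            conf.setInt("dfs.replication", 2);
            // Whether the client asks for a replacement datanode when one in the write pipeline fails.
            conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", true);
            // Policy used when the feature is enabled: DEFAULT, ALWAYS or NEVER.
            conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "DEFAULT");
            return conf;
        }
    }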

Any ideas? Thanks!


1 Answer


The problem was solved by running the following on the file system:

hadoop dfs -setrep -R -w 2 /

Old files on the file system had their replication factor set to 3; setting dfs.replication to 2 in hdfs-site.xml does not solve the problem, because that setting does not apply to files that already exist.

So, if you remove machines from the cluster, it is worth checking the replication factor of the existing files as well as the system-wide setting (a sketch for doing this programmatically follows).
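
As a hedged illustration (the directory path is made up), the per-file replication factor can also be read through the same FileSystem API the question uses:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CheckReplication {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            // List each entry directly under the directory and print its replication factor.
            for (FileStatus status : fs.listStatus(new Path("/user/someuser"))) {
                System.out.println(status.getPath() + " replication=" + status.getReplication());
            }
            fs.close();
        }
    }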

Answered 2013-03-11T21:17:30.613