I'm hitting an error when appending to a file on HDFS (Cloudera 2.0.0-cdh4.2.0). The use case that triggers it is:
- Create a file on the file system (DistributedFileSystem). OK.
- Append to the previously created file. ERROR.

FSDataOutputStream stream = fs.append(filePath);
stream.write(fileContents);
The following error is then thrown:
Exception in thread "main" java.io.IOException: Failed to add a datanode.
User may turn off this feature by setting dfs.client.block.write.replace-datanode-on-failure.policy in configuration, where the current policy is DEFAULT. (Nodes: current=[host1:50010, host2:50010], original=[host1:50010, host2:50010])
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:792)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:852)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:958)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:469)
Some relevant HDFS configuration:
dfs.replication
is set to 2
dfs.client.block.write.replace-datanode-on-failure.enable
is set to true
dfs.client.block.write.replace-datanode-on-failure.policy
is set to DEFAULT
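For reference, these are client-side settings, so they would typically go in the hdfs-site.xml used by the client; the fragment below mirrors the values listed above (property names follow the standard HDFS client configuration keys):

```
<!-- hdfs-site.xml (client side) -->
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
<property>
  <!-- whether to replace a failed datanode in the write pipeline -->
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>
<property>
  <!-- DEFAULT tries to add a replacement datanode, which can fail on
       small clusters; NEVER skips the replacement attempt entirely -->
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>DEFAULT</value>
</property>
```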
Any ideas? Thanks!