I can't seem to get hadoop to start properly. I am using hadoop 0.23.9:
[msknapp@localhost sbin]$ hadoop namenode -format
...
[msknapp@localhost sbin]$ ./start-dfs.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/cloud/hadoop-0.23.9/logs/hadoop-msknapp-namenode-localhost.localdomain.out
localhost: starting datanode, logging to /usr/local/cloud/hadoop-0.23.9/logs/hadoop-msknapp-datanode-localhost.localdomain.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/cloud/hadoop-0.23.9/logs/hadoop-msknapp-secondarynamenode-localhost.localdomain.out
[msknapp@localhost sbin]$ ./start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/local/cloud/hadoop-0.23.9/logs/yarn-msknapp-resourcemanager-localhost.localdomain.out
localhost: starting nodemanager, logging to /usr/local/cloud/hadoop-0.23.9/logs/yarn-msknapp-nodemanager-localhost.localdomain.out
[msknapp@localhost sbin]$ cd /var/local/stock/data
[msknapp@localhost data]$ hadoop fs -ls /
[msknapp@localhost data]$ hadoop fs -mkdir /stock
[msknapp@localhost data]$ ls
companies.csv raw slf_series.txt
[msknapp@localhost data]$ hadoop fs -put companies.csv /stock/companies.csv
13/12/08 11:10:40 WARN hdfs.DFSClient: DataStreamer Exception
java.io.IOException: File /stock/companies.csv._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1180)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1536)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:414)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:394)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1571)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1567)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1262)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1565)
at org.apache.hadoop.ipc.Client.call(Client.java:1094)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:195)
at com.sun.proxy.$Proxy6.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:102)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:67)
at com.sun.proxy.$Proxy6.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1130)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1006)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:458)
put: File /stock/companies.csv._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
13/12/08 11:10:40 ERROR hdfs.DFSClient: Failed to close file /stock/companies.csv._COPYING_
java.io.IOException: File /stock/companies.csv._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
(same stack trace as above)
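The error says there are 0 datanodes running, so I suspect the datanode process died right after ./start-dfs.sh reported it was starting. A quick check I can run (assuming jps from the JDK is on the PATH, and that the daemon writes its real log to the .log twin of the .out file listed above):

# A healthy pseudo-distributed HDFS should list NameNode, DataNode,
# and SecondaryNameNode; if DataNode is missing, it crashed on startup.
jps
# The datanode's log should contain the exception that killed it:
tail -n 50 /usr/local/cloud/hadoop-0.23.9/logs/hadoop-msknapp-datanode-localhost.localdomain.log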
Here is my core-site.xml:
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost/</value>
</property>
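(If it matters: I believe leaving the port off fs.default.name just falls back to the default namenode RPC port, 8020, so the explicit equivalent would be:)

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:8020/</value>
</property>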
and my hdfs-site.xml:
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
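I have not set any storage directories, so I assume the namenode and datanode are using the defaults under hadoop.tmp.dir (/tmp/hadoop-msknapp for my user). If pinning them down explicitly would help, I could add something like this (the paths below are placeholders I made up):

<property>
  <name>dfs.namenode.name.dir</name>
  <value>/usr/local/cloud/hadoop-data/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/usr/local/cloud/hadoop-data/data</value>
</property>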
and mapred-site.xml:
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:8021</value>
</property>
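(I copied that property out of my 0.22 book; since 0.23 runs MapReduce on YARN, I'm not even sure mapred.job.tracker is still read. If I understand the docs correctly, the YARN-era equivalent would be:)

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>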
I have been through all the documentation I have and I still can't figure out how to start hadoop correctly. I can't find any documentation online for hadoop-0.23.9 specifically. My Hadoop book was written for 0.22, and the online documentation covers 2.1.1, which, coincidentally, I can't get to work either.
Can somebody please tell me how to start hadoop correctly?
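In case it's relevant: I ran hadoop namenode -format, and I've read that re-formatting after a datanode has already initialized its storage can leave the datanode with a mismatched namespace ID, making it exit immediately on startup. The next thing I was planning to try (acceptable on my empty install, since it wipes all HDFS data) is:

# Stop HDFS, wipe the default storage location for my user
# (/tmp/hadoop-msknapp -- I never changed hadoop.tmp.dir),
# re-format, and start again. WARNING: destroys all HDFS data.
./stop-dfs.sh
rm -rf /tmp/hadoop-msknapp
hadoop namenode -format
./start-dfs.sh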