
I have written the Hadoop WordCount code as a Java application in Eclipse to test Hadoop, but when I try to run it as the hdfs user, I get this error:

./hadoop jar /home/masi/eclipse_workspace/WordCount_apacheSample/bin/test2.jar WordCountApacheSample /user/hdfs/wordCountInput /user/hdfs/wordCountOutput
13/10/02 17:14:50 INFO service.AbstractService: Service:org.apache.hadoop.yarn.client.YarnClientImpl is inited.
13/10/02 17:14:50 INFO service.AbstractService: Service:org.apache.hadoop.yarn.client.YarnClientImpl is started.
13/10/02 17:14:50 ERROR security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.net.ConnectException: Call From virtual-machine/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
Exception in thread "main" java.net.ConnectException: Call From virtual-machine/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:532)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:780)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:727)
    at org.apache.hadoop.ipc.Client.call(Client.java:1239)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
    at sun.proxy.$Proxy9.getFileInfo(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
    at sun.proxy.$Proxy9.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:630)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1559)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:811)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1345)
    at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:140)
    at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:418)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:333)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1218)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1215)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:416)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1215)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1236)
    at WordCountApacheSample.main(WordCountApacheSample.java:71)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:597)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:508)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:603)
    at org.apache.hadoop.ipc.Client$Connection.access$2100(Client.java:253)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1288)
    at org.apache.hadoop.ipc.Client.call(Client.java:1206)
    ... 29 more

I have also tested the input and output paths with hdfs://localhost:9000/, but it makes no difference. By the way, I have read many posts related to my problem, but they were not helpful.
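That is, I also ran the job with fully qualified URIs, along these lines:

./hadoop jar /home/masi/eclipse_workspace/WordCount_apacheSample/bin/test2.jar WordCountApacheSample hdfs://localhost:9000/user/hdfs/wordCountInput hdfs://localhost:9000/user/hdfs/wordCountOutput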

Any help is appreciated. Thanks.


2 Answers


Finally I solved the problem myself, and decided to write down the cause here to help others :) The reason sounds a bit silly, but here it is: the Hadoop daemons had stopped! My virtual machine shut down unexpectedly, and after restarting it I forgot to start the daemons (datanode, namenode, ...) again. So the cause of the problem was simply that the daemons, such as the datanode and namenode, were not running.
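A quick way to verify and recover from this (a minimal sketch, assuming a standard pseudo-distributed Hadoop 2.x install with its sbin directory on the PATH):

jps              # should list NameNode, DataNode, ResourceManager, NodeManager, ...
start-dfs.sh     # restart the HDFS daemons if NameNode/DataNode are missing
start-yarn.sh    # restart the YARN daemons

If jps shows no NameNode, nothing is listening on localhost:9000, which is exactly the ConnectionRefused in the trace above.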

answered 2015-08-02T12:07:21.443

If you find that your HDFS is corrupt, you can do the following:

sudo -su hdfs
hadoop fsck /
hadoop dfsadmin -safemode leave
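Before deleting anything, you can also have fsck list which files are actually corrupt (assuming the Hadoop 2.x-era fsck options):

hadoop fsck / -list-corruptfileblocks    # print the blocks HDFS considers missing/corrupt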

... then remove the corrupt files, if any, using the following:

hadoop fs -rmr -skipTrash <folder with your files>
hadoop fsck / -files -delete

Check the status:

hadoop fsck /
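and, if safe mode was involved, confirm that it is off (again a 2.x-era dfsadmin call):

hadoop dfsadmin -safemode get    # should report: Safe mode is OFF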

After this the status should be HEALTHY. Then restart everything in Ambari manually.

I tried this on a small cluster and, after hitting an error similar to the one above, managed to get it back up and running.

answered 2016-07-07T13:08:22.787