
While starting a single-node cluster on my machine, I am getting an error when starting the DataNode:

2013-02-18 20:21:32,300 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = somnath-laptop/127.0.1.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.0.4
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
************************************************************/
2013-02-18 20:21:32,593 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-02-18 20:21:32,618 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-02-18 20:21:32,620 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-02-18 20:21:32,620 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2013-02-18 20:21:33,052 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-02-18 20:21:33,056 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-02-18 20:21:37,890 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Call to localhost/127.0.0.1:54310 failed on local exception: java.io.IOException: Connection reset by peer
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1107)
    at org.apache.hadoop.ipc.Client.call(Client.java:1075)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at sun.proxy.$Proxy5.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:370)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:429)
    at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:331)
    at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:296)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:356)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:299)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1582)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1521)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1539)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1665)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1682)
Caused by: java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcher.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251)
    at sun.nio.ch.IOUtil.read(IOUtil.java:224)

Any ideas on how to resolve this error?


1 Answer


OK, problem solved.

Since I had been using my single-node cluster through a network proxy, I had added the property below to $HADOOP_HOME/conf/mapred-site.xml so that communication between the Hadoop daemons would bypass the proxy server.

This time, however, I was connecting to the internet directly, so I had to comment out the property I had added to mapred-site.xml.

Here is the property in mapred-site.xml that I commented out:

<!-- 
<property>
<name>hadoop.rpc.socket.factory.class.default</name>
<value>org.apache.hadoop.net.StandardSocketFactory</value>
<final>true</final>
<description>
  Prevent proxy settings set up by clients in their job configs from affecting our connectivity.
</description>
</property> 
-->
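The change only takes effect once the daemons are restarted. A typical way to apply it and confirm the DataNode comes up, assuming the standard Hadoop 1.x scripts in $HADOOP_HOME/bin are on your PATH:

# Restart all Hadoop daemons so the updated mapred-site.xml is picked up
stop-all.sh
start-all.sh

# List running JVMs; NameNode, DataNode, JobTracker and TaskTracker should all appear
jps

If DataNode is still missing from the jps output, its log will show whether it is still failing to reach the NameNode at localhost:54310.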
Answered 2013-02-19T02:48:02.700