
I have 8 slave machines and 1 master machine running Hadoop (version 0.21).

When I run my MapReduce code on 10 GB of data, some of the cluster's datanodes suddenly disconnect. After all the mappers have finished and roughly 80% of the reducers have been processed, one or more datanodes drop off the network at random. Then other datanodes start vanishing from the network as well, even though I kill the MapReduce job as soon as I notice datanodes disconnecting.

I have tried changing dfs.datanode.max.xcievers to 4096, turning off the firewall on every compute node, disabling SELinux, and raising the open-file limit to 20000, but none of these have helped.

Does anyone have an idea how to solve this problem?
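Since the errors below are all connection timeouts between datanodes, one cheap sanity check is to probe the datanode transfer port from each slave to every other slave. This is a minimal sketch, not part of the original setup: the port 20010 comes from dfs.datanode.address in the hdfs-site.xml below, and the hostnames are placeholders you would replace with your own.

```python
# Minimal sketch of a peer-to-peer port probe for diagnosing the
# "Connection timed out" errors between datanodes. The slave hostnames
# are hypothetical; the port (20010) is taken from dfs.datanode.address.
import socket

def port_reachable(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused, timed out, and DNS failures
        return False

if __name__ == "__main__":
    datanode_port = 20010            # value of dfs.datanode.address
    slaves = ["slave1", "slave2"]    # hypothetical hostnames -- use your own
    for host in slaves:
        status = "open" if port_reachable(host, datanode_port) else "UNREACHABLE"
        print(f"{host}:{datanode_port} {status}")
```

Running this from every node in both directions can rule out (or confirm) an asymmetric firewall or routing problem before touching any Hadoop settings.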

Here is the error log from MapReduce:

12/06/01 12:31:29 INFO mapreduce.Job: Task Id : attempt_201206011227_0001_r_000006_0, Status : FAILED
java.io.IOException: Bad connect ack with firstBadLink as ***.***.***.148:20010
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:889)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:820)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:427)

Here is the log from the datanode:

2012-06-01 13:01:01,118 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-5549263231281364844_3453 src: /*.*.*.147:56205 dest: /*.*.*.142:20010
2012-06-01 13:01:01,136 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020) Starting thread to transfer block blk_-3849519151985279385_5906 to *.*.*.147:20010
2012-06-01 13:01:19,135 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-5797481564121417802_3453 to *.*.*.146:20010 got java.net.ConnectException: Connection timed out
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:373)
    at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1257)
    at java.lang.Thread.run(Thread.java:722)

2012-06-01 13:06:20,342 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_6674438989226364081_3453
2012-06-01 13:09:01,781 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-3849519151985279385_5906 to *.*.*.147:20010 got java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/*.*.*.142:60057 remote=/*.*.*.147:20010]
    at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
    at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:164)
    at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:203)
    at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:388)
    at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:476)
    at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1284)
    at java.lang.Thread.run(Thread.java:722)
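The second failure above is a 480000 ms (8 minute) write timeout, which usually points at a peer that has stopped reading, not just a slow link. If the network itself checks out, one hedged option is to raise the HDFS socket timeouts. The property names below (dfs.socket.timeout for reads, dfs.datanode.socket.write.timeout for writes) are the ones used by Hadoop of this era; verify them against your 0.21 defaults before relying on them, and treat this as a workaround rather than a root-cause fix:

```
<!-- Hypothetical additions to hdfs-site.xml: raise the read and write
     socket timeouts (values in milliseconds). -->
<property>
    <name>dfs.socket.timeout</name>
    <value>600000</value>
</property>
<property>
    <name>dfs.datanode.socket.write.timeout</name>
    <value>600000</value>
</property>
```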

hdfs-site.xml

<configuration>
    <property>
        <name>dfs.name.dir</name>
        <value>/home/hadoop/data/name</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/home/hadoop/data/hdfs1,/home/hadoop/data/hdfs2,/home/hadoop/data/hdfs3,/home/hadoop/data/hdfs4,/home/hadoop/data/hdfs5</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>

    <property>
        <name>dfs.datanode.max.xcievers</name>
        <value>4096</value>
    </property>

    <property>
            <name>dfs.http.address</name>
            <value>0.0.0.0:20070</value>
            <description>50070
      The address and the base port where the dfs namenode web ui will listen on.
      If the port is 0 then the server will start on a free port.
            </description>
    </property>

    <property>
            <name>dfs.datanode.http.address</name>
            <value>0.0.0.0:20075</value>
            <description>50075
      The datanode http server address and port.
      If the port is 0 then the server will start on a free port.
            </description>
     </property>

    <property>
      <name>dfs.secondary.http.address</name>
      <value>0.0.0.0:20090</value>
      <description>50090
      The secondary namenode http server address and port.
      If the port is 0 then the server will start on a free port.
      </description>
    </property>

    <property>
      <name>dfs.datanode.address</name>
      <value>0.0.0.0:20010</value>
      <description>50010
      The address where the datanode server will listen to.
      If the port is 0 then the server will start on a free port.
      </description>
    </property>

 <property>
      <name>dfs.datanode.ipc.address</name>
      <value>0.0.0.0:20020</value>
      <description>50020
      The datanode ipc server address and port.
      If the port is 0 then the server will start on a free port.
      </description>
    </property>

    <property>
      <name>dfs.datanode.https.address</name>
      <value>0.0.0.0:20475</value>
    </property>

        <property>
         <name>dfs.https.address</name>
          <value>0.0.0.0:20470</value>
        </property>
</configuration>

mapred-site.xml

<configuration>
    <property>
            <name>mapred.job.tracker</name>
            <value>masternode:29001</value>
    </property>
    <property>
            <name>mapred.system.dir</name>
            <value>/home/hadoop/data/mapreduce/system</value>
    </property>
    <property>
            <name>mapred.local.dir</name>
            <value>/home/hadoop/data/mapreduce/local</value>
    </property>
    <property>
            <name>mapred.map.tasks</name>
            <value>32</value>
            <description> default number of map tasks per job.</description>
    </property>
    <property>
            <name>mapred.tasktracker.map.tasks.maximum</name>
            <value>4</value>
    </property>
    <property>
            <name>mapred.reduce.tasks</name>
            <value>8</value>
            <description> default number of reduce tasks per job.</description>
    </property>
    <property>
            <name>mapred.map.child.java.opts</name>
            <value>-Xmx2048M</value>
    </property>
    <property>
            <name>io.sort.mb</name>
            <value>500</value>
    </property>
    <property>
            <name>mapred.task.timeout</name>
            <value>1800000</value> <!-- 30 minutes -->
    </property>


    <property>
            <name>mapred.job.tracker.http.address</name>
            <value>0.0.0.0:20030</value>
            <description> 50030
            The job tracker http server address and port the server will listen on.
            If the port is 0 then the server will start on a free port.
            </description>
        </property>

        <property>
                <name>mapred.task.tracker.http.address</name>
                <value>0.0.0.0:20060</value>
                <description> 50060
                The task tracker http server address and port.
                If the port is 0 then the server will start on a free port.
                </description>
        </property>

</configuration>
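One thing worth sanity-checking in this mapred-site.xml is the aggregate heap the slot settings permit per slave: 4 map slots at -Xmx2048M each, plus reduce slots, can exceed the physical RAM of a small node and push it into swap, which can look exactly like nodes "disappearing" under load. A rough back-of-envelope, where the reduce-slot count (2) and the default child heap (200 MB) are assumptions not present in the config above:

```python
# Back-of-envelope worst-case task heap per slave implied by the
# mapred-site.xml above. Reduce-slot count and default child heap are
# assumptions -- check mapred.tasktracker.reduce.tasks.maximum and
# mapred.child.java.opts on the actual cluster.
map_slots = 4          # mapred.tasktracker.map.tasks.maximum
map_heap_mb = 2048     # mapred.map.child.java.opts = -Xmx2048M
reduce_slots = 2       # assumed default
reduce_heap_mb = 200   # assumed default -Xmx200m

worst_case_mb = map_slots * map_heap_mb + reduce_slots * reduce_heap_mb
print(f"worst-case task heap per node: {worst_case_mb} MB")
```

If that total approaches or exceeds a slave's RAM (remember the DataNode and TaskTracker JVMs also need memory), lowering the slot counts or heap sizes is worth trying before blaming the network.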

1 Answer


Try configuring max.xcievers in conf/hdfs-site.xml, as described in http://hbase.apache.org/book.html#dfs.datanode.max.xcievers :

<property>
        <name>dfs.datanode.max.xcievers</name>
        <value>4096</value>
</property>
answered 2014-04-04T10:10:59.487