I installed Oozie 4.1.0 on a Linux machine by following the steps at http://gauravkohli.com/2014/08/26/apache-oozie-installation-on-hadoop-2-4-1/
Hadoop version - 2.6.0
Maven - 3.0.4
Pig - 0.12.0
Cluster setup -
Master node runs - NameNode, ResourceManager, ProxyServer.
Slave node runs - DataNode, NodeManager.
When I run a single workflow job, it succeeds. But when I try to run more than one workflow job at the same time, both jobs get stuck in the ACCEPTED state.
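For reference, I submit the jobs with the standard Oozie CLI, roughly like this (the host name and properties file below are placeholders for my actual values), and the stuck state shows up in the YARN application list:

oozie job -oozie http://&lt;oozie-host&gt;:11000/oozie -config job.properties -run
yarn application -list -appStates ACCEPTED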
Checking the error logs, I drilled the issue down to the following:
2014-12-24 21:00:36,758 [JobControl] INFO org.apache.hadoop.ipc.Client - Retrying connect to server: 172.16.***.***/172.16.***.***:8032. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2014-12-25 09:30:39,145 [communication thread] INFO org.apache.hadoop.ipc.Client - Retrying connect to server: 172.16.***.***/172.16.***.***:52406. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2014-12-25 09:30:39,199 [communication thread] INFO org.apache.hadoop.mapred.Task - Communication exception: java.io.IOException: Failed on local exception: java.net.SocketException: Network is unreachable: no further information; Host Details : local host is: "SystemName/127.0.0.1"; destination host is: "172.16.***.***":52406;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
at org.apache.hadoop.ipc.Client.call(Client.java:1415)
at org.apache.hadoop.ipc.Client.call(Client.java:1364)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:231)
at $Proxy9.ping(Unknown Source)
at org.apache.hadoop.mapred.Task$TaskReporter.run(Task.java:742)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.net.SocketException: Network is unreachable: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:606)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:700)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1463)
at org.apache.hadoop.ipc.Client.call(Client.java:1382)
... 5 more
Heart beat
Heart beat
.
.
In the above situation, if I manually kill any one of the launcher jobs (hadoop job -kill <launcher-job-id>), then all the remaining jobs succeed. So I think the problem is that when more than one launcher job runs at the same time, the jobs run into a deadlock.
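If it matters, my guess is that the concurrent launcher jobs are holding on to the containers that the actual actions need. One setting I am wondering about is the ApplicationMaster resource limit in capacity-scheduler.xml, something along these lines (the value shown is only an illustration, not my actual configuration):

<property>
  <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
  <value>0.5</value>
</property>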
If anyone knows the cause of and solution to the above problem, please do me a favor and help as soon as possible.