
I'm new to Spark and I'm having trouble submitting an application. I have set up one master node and two slave nodes running Spark, plus a single node running ZooKeeper and a single node running Kafka. I want to launch a modified version of the Kafka wordcount example in Python, using Spark Streaming.

To submit the application, I ssh into the Spark master node and run <path to spark home>/bin/spark-submit. If I specify the master by its IP, everything works fine: the application correctly consumes messages from Kafka, and I can see from the Spark UI that it runs correctly on both slave nodes:

./bin/spark-submit --master spark://<spark master ip>:7077 --jars ./external/spark-streaming-kafka-assembly_2.10-1.3.1.jar ./examples/src/main/python/streaming/kafka_wordcount.py <zookeeper ip>:2181 test

But if I specify the master by its hostname:

./bin/spark-submit --master spark://spark-master01:7077 --jars ./external/spark-streaming-kafka-assembly_2.10-1.3.1.jar ./examples/src/main/python/streaming/kafka_wordcount.py zookeeper01:2181 test

then it hangs with these logs:

15/05/27 02:01:58 INFO AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster@spark-master01:7077/user/Master...
15/05/27 02:02:18 INFO AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster@spark-master01:7077/user/Master...
15/05/27 02:02:38 INFO AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster@spark-master01:7077/user/Master...
15/05/27 02:02:58 ERROR SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
15/05/27 02:02:58 ERROR TaskSchedulerImpl: Exiting due to error from cluster scheduler: All masters are unresponsive! Giving up.
15/05/27 02:02:58 WARN SparkDeploySchedulerBackend: Application ID is not initialized yet.

My /etc/hosts file looks like this:

<spark master ip> spark-master01
127.0.0.1 localhost

::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
<spark slave-01 ip> spark-slave01
<spark slave-02 ip> spark-slave02
<kafka01 ip> kafka01
<zookeeper ip> zookeeper01

Update

Here is the first part of the output of netstat -n -a:

Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address           State
tcp        0      0 0.0.0.0:22              0.0.0.0:*                 LISTEN
tcp        0      0 <spark master ip>:22    <my laptop ip>:60113      ESTABLISHED
tcp        0    260 <spark master ip>:22    <my laptop ip>:60617      ESTABLISHED
tcp6       0      0 :::22                   :::*                      LISTEN
tcp6       0      0 <spark master ip>:7077  :::*                      LISTEN
tcp6       0      0 :::8080                 :::*                      LISTEN
tcp6       0      0 <spark master ip>:6066  :::*                      LISTEN
tcp6       0      0 127.0.0.1:60105         127.0.0.1:44436           TIME_WAIT
tcp6       0      0 <spark master ip>:43874 <spark master ip>:7077    TIME_WAIT
tcp6       0      0 127.0.0.1:51220         127.0.0.1:55029           TIME_WAIT
tcp6       0      0 <spark master ip>:7077  <spark slave 01 ip>:37061 ESTABLISHED
tcp6       0      0 <spark master ip>:7077  <spark slave 02 ip>:47516 ESTABLISHED
tcp6       0      0 127.0.0.1:51220         127.0.0.1:55026           TIME_WAIT

2 Answers


Since you are using hostnames instead of IP addresses, you should list those hostnames in the /etc/hosts file of every node, not just the master. Then it will work.
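For example, a minimal sketch of what /etc/hosts could look like on each node (master, both slaves, the Kafka node, and the ZooKeeper node), reusing the placeholder IPs from the question:

127.0.0.1             localhost
<spark master ip>     spark-master01
<spark slave-01 ip>   spark-slave01
<spark slave-02 ip>   spark-slave02
<kafka01 ip>          kafka01
<zookeeper ip>        zookeeper01

With the same entries present on every node, the driver and both workers resolve spark-master01 to the same address that the master advertises in its spark://spark-master01:7077 URL.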

answered 2015-05-27T07:02:26.323

You can first try ping spark-master01 to see what IP spark-master01 resolves to. Then you can run netstat -n -a to check whether port 7077 of your Spark master is actually bound to the Spark master node's IP.
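A minimal sketch of those two checks, run on the node where spark-submit hangs (the grep filter is just for convenience):

# See what IP the hostname resolves to on this node
ping -c 3 spark-master01

# See whether the master is listening on 7077, and on which address
netstat -n -a | grep 7077

Judging from the netstat output in the question, the master is listening on <spark master ip>:7077, so the connection should only fail if spark-master01 resolves to a different address (for example 127.0.0.1) on the node that submits the job.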

answered 2015-05-27T06:28:28.580