When stopping the whole cluster in Spark (0.7.0) with
$SPARK_HOME/bin/stop-all.sh
not all workers are stopped correctly. More specifically, if I then want to restart the cluster with
$SPARK_HOME/bin/start-all.sh
I get:
host1: starting spark.deploy.worker.Worker, logging to [...]
host3: starting spark.deploy.worker.Worker, logging to [...]
host2: starting spark.deploy.worker.Worker, logging to [...]
host5: starting spark.deploy.worker.Worker, logging to [...]
host4: spark.deploy.worker.Worker running as process 8104. Stop it first.
host7: spark.deploy.worker.Worker running as process 32452. Stop it first.
host6: starting spark.deploy.worker.Worker, logging to [...]
On host4 and host7, there is indeed a StandaloneExecutorBackend still running:
$ jps
27703 Worker
27763 StandaloneExecutorBackend
28601 Jps
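I can get rid of these by hand with a one-liner along the following lines (a rough sketch; the grep pattern is just my guess at matching the relevant JVM names that jps reports):

$ jps | grep -E 'Worker|StandaloneExecutorBackend' | awk '{print $1}' | xargs kill

but that has to be repeated on every affected host and obviously doesn't address the underlying problem.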
Simply repeating
$SPARK_HOME/bin/stop-all.sh
unfortunately does not stop the workers either. Spark just tells me that the workers are about to be stopped:
host2: no spark.deploy.worker.Worker to stop
host7: stopping spark.deploy.worker.Worker
host1: no spark.deploy.worker.Worker to stop
host4: stopping spark.deploy.worker.Worker
host6: no spark.deploy.worker.Worker to stop
host5: no spark.deploy.worker.Worker to stop
host3: no spark.deploy.worker.Worker to stop
no spark.deploy.master.Master to stop
However,
$ jps
27703 Worker
27763 StandaloneExecutorBackend
28601 Jps
says otherwise. Does anyone know how to get stop-all.sh to work properly? Thanks.
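For completeness: my current stopgap is to run a loop like the one below from the master before restarting (a sketch; the hostnames match my cluster above, and passwordless ssh to each host is assumed). It kills any leftover executor backends so that stop-all.sh and start-all.sh behave again, but it is clearly just a workaround:

$ for h in host1 host2 host3 host4 host5 host6 host7; do
>   ssh "$h" "jps | grep StandaloneExecutorBackend | awk '{print \$1}' | xargs kill"
> done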