I am running a Spark v2.0.0 YARN cluster, with Livy running alongside the Spark master.

I have set up a Jupyter notebook with a Python 3 kernel, installed sparkmagic, and followed the necessary instructions to connect sparkmagic to Livy; however, when I create a session I get an error message in the notebook.
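
For reference, sparkmagic reads the Livy endpoint from ~/.sparkmagic/config.json; a minimal sketch of the relevant section (field names follow sparkmagic's example config, and everything except the URL is assumed here):

{
  "kernel_python_credentials": {
    "username": "",
    "password": "",
    "url": "http://spark-master:8998"
  }
}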

Added endpoint http://spark-master:8998
Starting Spark application

ID  YARN Application ID Kind    State   Spark UI    Driver log  Current session?
0   None    pyspark idle            ✔
---------------------------------------------------------------------------
LivyUnexpectedStatusException             Traceback (most recent call last)
/opt/conda/lib/python3.5/site-packages/hdijupyterutils/ipywidgetfactory.py in submit_clicked(self, button)
     63 
     64     def submit_clicked(self, button):
---> 65         self.parent_widget.run()

/opt/conda/lib/python3.5/site-packages/sparkmagic/controllerwidget/createsessionwidget.py in run(self)
     56 
     57         try:
---> 58             self.spark_controller.add_session(alias, endpoint, skip, properties)
     59         except ValueError as e:
     60             self.ipython_display.send_error("""Could not add session with

/opt/conda/lib/python3.5/site-packages/sparkmagic/livyclientlib/sparkcontroller.py in add_session(self, name, endpoint, skip_if_exists, properties)
     79         session = self._livy_session(http_client, properties, self.ipython_display)
     80         self.session_manager.add_session(name, session)
---> 81         session.start()
     82 
     83     def get_session_id_for_client(self, name):

/opt/conda/lib/python3.5/site-packages/sparkmagic/livyclientlib/livysession.py in start(self)
    148             else:
    149                 command = Command("sqlContext")
--> 150                 (success, out) = command.execute(self)
    151                 if success:
    152                     self.ipython_display.writeln(u"SparkContext available as 'sc'.")

/opt/conda/lib/python3.5/site-packages/sparkmagic/livyclientlib/command.py in execute(self, session)
     29         statement_id = -1
     30         try:
---> 31             session.wait_for_idle()
     32             data = {u"code": self.code}
     33             response = session.http_client.post_statement(session.id, data)

/opt/conda/lib/python3.5/site-packages/sparkmagic/livyclientlib/livysession.py in wait_for_idle(self, seconds_to_wait)
    238                     .format(self.id, self.status)
    239                 self.logger.error(error)
--> 240                 raise LivyUnexpectedStatusException(u'{} See logs:\n{}'.format(error, self.get_logs()))
    241 
    242             if seconds_to_wait <= 0.0:

LivyUnexpectedStatusException: Session 0 unexpectedly reached final status 'error'. See logs:

When creating a new session through the Manage Spark section in Jupyter, I get the error above and the following output in the Livy logs:

17/02/10 13:06:08 INFO StateStore$: Using BlackholeStateStore for recovery.
17/02/10 13:06:08 INFO BatchSessionManager: Recovered 0 batch sessions. Next session id: 0
17/02/10 13:06:08 INFO InteractiveSessionManager: Recovered 0 interactive sessions. Next session id: 0
17/02/10 13:06:08 INFO InteractiveSessionManager: Heartbeat watchdog thread started.
17/02/10 13:06:08 INFO WebServer: Starting server on http://spark-master:8998
17/02/10 13:06:34 INFO InteractiveSession$: Creating LivyClient for sessionId: 0
17/02/10 13:06:34 WARN RSCConf: Your hostname, spark-master, resolves to a loopback address, but we couldn't find any external IP address!
17/02/10 13:06:34 WARN RSCConf: Set livy.rsc.rpc.server.address if you need to bind to another address.
17/02/10 13:06:35 INFO InteractiveSessionManager: Registering new session 0
17/02/10 13:06:35 INFO ContextLauncher: 17/02/10 13:06:35 INFO driver.RSCDriver: Starting RPC server...
17/02/10 13:06:35 INFO ContextLauncher: 17/02/10 13:06:35 WARN rsc.RSCConf: Set livy.rsc.rpc.server.address if you need to bind to another address.
17/02/10 13:06:35 INFO ContextLauncher: 17/02/10 13:06:35 INFO driver.RSCDriver: Received job request 3ca8a52b-8dd5-41f0-8151-a8201d72d422
17/02/10 13:06:35 INFO ContextLauncher: 17/02/10 13:06:35 INFO driver.RSCDriver: SparkContext not yet up, queueing job request.
17/02/10 13:06:36 INFO ContextLauncher: Setting default log level to "WARN".
17/02/10 13:06:36 INFO ContextLauncher: To adjust logging level use sc.setLogLevel(newLevel).
17/02/10 13:06:36 INFO ContextLauncher: 17/02/10 13:06:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/02/10 13:06:37 INFO ContextLauncher: 17/02/10 13:06:37 ERROR repl.PythonInterpreter: Process has died with 1
17/02/10 13:06:37 INFO RSCClient: Received result for 3ca8a52b-8dd5-41f0-8151-a8201d72d422

I cannot work out what the exact problem or fix is. If I set the session to use Scala instead of Python, I am able to create the connection successfully; the error only appears when the session language is set to Python. If anyone knows how to get a livy-repl pyspark session connected from Jupyter, please let me know!
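
For comparison, a Scala-kind session can be requested directly from Livy with a request like the sketch below ("spark" is Livy's Scala kind; the endpoint is the same placeholder used in the update that follows):

curl -v -X POST --data '{"kind": "spark"}' -H "Content-Type: application/json" example.com/sessions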

UPDATE

Livy is still unable to create a PySpark session:

curl -v -X POST --data '{"kind": "pyspark"}' -H "Content-Type: application/json" example.com/sessions
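
The session's state and log can then be polled through the Livy REST API; a sketch, assuming the new session gets id 0:

curl example.com/sessions/0        # current state: "starting", "idle", "dead", ...
curl example.com/sessions/0/log    # the session's log lines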

The session state goes straight from 'starting' to 'dead'. The YARN logs on the resource manager show the following output right before the Livy session fails.

To adjust logging level use sc.setLogLevel(newLevel).
17/02/26 05:02:25 WARN rsc.RSCConf: Your hostname, yarn-slave1, resolves to a loopback address, but we couldn't find any external IP address!
17/02/26 05:02:25 WARN rsc.RSCConf: Set livy.rsc.rpc.server.address if you need to bind to another address.
17/02/26 05:02:32 ERROR repl.PythonInterpreter: Process has died with 1
17/02/26 05:02:33 WARN yarn.YarnAllocator: Container marked as failed: container_1488085279373_0001_01_000002 on host: yarn-slave1. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1488085279373_0001_01_000002
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
    at org.apache.hadoop.util.Shell.run(Shell.java:479)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
17/02/26 05:02:33 WARN yarn.ApplicationMaster: Reporter thread fails 1 time(s) in a row.
java.lang.IllegalStateException: RpcEnv already stopped.
    at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:159)
    at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131)
    at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:185)
    at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:508)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1$$anonfun$apply$7.apply(YarnAllocator.scala:531)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1$$anonfun$apply$7.apply(YarnAllocator.scala:512)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1.apply(YarnAllocator.scala:512)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1.apply(YarnAllocator.scala:442)
    at scala.collection.Iterator$class.foreach(Iterator.scala:742)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at org.apache.spark.deploy.yarn.YarnAllocator.processCompletedContainers(YarnAllocator.scala:442)
    at org.apache.spark.deploy.yarn.YarnAllocator.allocateResources(YarnAllocator.scala:242)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$1.run(ApplicationMaster.scala:372)
17/02/26 05:02:40 WARN yarn.YarnAllocator: Container marked as failed: container_1488085279373_0001_01_000005 on host: yarn-slave1. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1488085279373_0001_01_000005
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
    at org.apache.hadoop.util.Shell.run(Shell.java:479)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
17/02/26 05:02:40 WARN yarn.ApplicationMaster: Reporter thread fails 1 time(s) in a row.
java.lang.IllegalStateException: RpcEnv already stopped.
    at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:159)
    at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131)
    at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:185)
    at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:508)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1$$anonfun$apply$7.apply(YarnAllocator.scala:531)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1$$anonfun$apply$7.apply(YarnAllocator.scala:512)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1.apply(YarnAllocator.scala:512)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1.apply(YarnAllocator.scala:442)
    at scala.collection.Iterator$class.foreach(Iterator.scala:742)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at org.apache.spark.deploy.yarn.YarnAllocator.processCompletedContainers(YarnAllocator.scala:442)
    at org.apache.spark.deploy.yarn.YarnAllocator.allocateResources(YarnAllocator.scala:242)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$1.run(ApplicationMaster.scala:372)
17/02/26 05:02:47 WARN yarn.YarnAllocator: Container marked as failed: container_1488085279373_0001_01_000006 on host: yarn-slave1. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1488085279373_0001_01_000006
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
    at org.apache.hadoop.util.Shell.run(Shell.java:479)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
17/02/26 05:02:47 WARN yarn.ApplicationMaster: Reporter thread fails 1 time(s) in a row.
java.lang.IllegalStateException: RpcEnv already stopped.
    at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:159)
    at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131)
    at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:185)
    at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:508)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1$$anonfun$apply$7.apply(YarnAllocator.scala:531)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1$$anonfun$apply$7.apply(YarnAllocator.scala:512)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1.apply(YarnAllocator.scala:512)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1.apply(YarnAllocator.scala:442)
    at scala.collection.Iterator$class.foreach(Iterator.scala:742)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at org.apache.spark.deploy.yarn.YarnAllocator.processCompletedContainers(YarnAllocator.scala:442)
    at org.apache.spark.deploy.yarn.YarnAllocator.allocateResources(YarnAllocator.scala:242)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$1.run(ApplicationMaster.scala:372)
17/02/26 05:02:53 WARN yarn.YarnAllocator: Container marked as failed: container_1488085279373_0001_01_000007 on host: yarn-slave1. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1488085279373_0001_01_000007
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
    at org.apache.hadoop.util.Shell.run(Shell.java:479)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
17/02/26 05:02:53 WARN yarn.ApplicationMaster: Reporter thread fails 1 time(s) in a row.
java.lang.IllegalStateException: RpcEnv already stopped.
    at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:159)
    at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131)
    at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:185)
    at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:508)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1$$anonfun$apply$7.apply(YarnAllocator.scala:531)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1$$anonfun$apply$7.apply(YarnAllocator.scala:512)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1.apply(YarnAllocator.scala:512)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1.apply(YarnAllocator.scala:442)
    at scala.collection.Iterator$class.foreach(Iterator.scala:742)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at org.apache.spark.deploy.yarn.YarnAllocator.processCompletedContainers(YarnAllocator.scala:442)
    at org.apache.spark.deploy.yarn.YarnAllocator.allocateResources(YarnAllocator.scala:242)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$1.run(ApplicationMaster.scala:372)

spark-defaults.conf

spark.yarn.appMasterEnv.PYSPARK_PYTHON python2

core-site.xml

<property>
  <name>hadoop.proxyuser.livy.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.livy.hosts</name>
  <value>*</value>
</property>

livy.conf

livy.server.host = 0.0.0.0
livy.server.port = 8998
livy.spark.master = yarn
livy.spark.deployMode = cluster

3 Answers


I was able to reproduce the issue.

The problem appears to be that the pyspark shipped with Spark 2.0.0 is not compatible with Livy. If you update Spark to the latest release (currently 2.1.0), the pyspark versions can talk to each other and the Spark session is created without a problem.
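
If Livy itself is configured with an explicit SPARK_HOME, it also has to be pointed at the new installation; a minimal sketch with illustrative paths (conf/livy-env.sh is where Livy picks these up):

# conf/livy-env.sh
export SPARK_HOME=/opt/spark-2.1.0-bin-hadoop2.7
export HADOOP_CONF_DIR=/etc/hadoop/conf
# restart the Livy server afterwards so it picks up the new SPARK_HOME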

Answered 2017-02-26T06:15:25.197

I ran into a similar problem even with Spark 2.1.1 and Livy: the Livy session state went from 'starting' to 'error'. It turned out I was running Java 7, while Livy and Spark need Java 8. Switching to Java 8 solved my problem.
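
A quick way to check which Java the Spark/Livy processes will use, and to point them at a Java 8 install (the JDK path below is only an example):

java -version                                        # should report a 1.8.x runtime
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64   # e.g. in spark-env.sh / livy-env.sh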

Answered 2017-09-15T05:10:21.287

I faced a similar issue. The culprit turned out to be the Livy version: after replacing the Cloudera Livy build with the Apache livy-0.6.0-incubating release, the problem was resolved and I was able to create pyspark sessions on Livy.
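
A rough sketch of swapping in the Apache release (the binary zip comes from the Apache download archives; paths are illustrative):

unzip apache-livy-0.6.0-incubating-bin.zip
cd apache-livy-0.6.0-incubating-bin
cp /path/to/old/conf/livy.conf conf/   # reuse the existing livy.conf / livy-env.sh
./bin/livy-server start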

Answered 2019-08-05T12:02:59.873