
I have been trying to get the spark-deep-learning library running on my EMR cluster so that I can read images in parallel with Python 2.7. I have been searching for a solution for quite some time but have not found one. I have tried setting different configuration options in the conf for the SparkSession, but I get the following error when trying to create the SparkSession object:

ERROR SparkContext:91 - Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
   at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:89)
   at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:63)
   at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
   at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
   at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
   at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
   at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
   at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
   at py4j.Gateway.invoke(Gateway.java:238)
   at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
   at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
   at py4j.GatewayConnection.run(GatewayConnection.java:214)
   at java.lang.Thread.run(Thread.java:748)

The above is the result when using a Jupyter notebook. I also tried submitting the .py file with spark-submit, adding the jar I need as the value of --jars, --driver-class-path, and --conf spark.executor.extraClassPath, as described in this link. Here is the command I submitted, along with the resulting import error:

bin/spark-submit --jars /home/hadoop/spark-deep-learning-0.2.0-spark2.1-s_2.11.jar \
--driver-class-path /home/hadoop/spark-deep-learning-0.2.0-spark2.1-s_2.11.jar \
--conf spark.executor.extraClassPath=/home/hadoop/spark-deep-learning-0.2.0-spark2.1-s_2.11.jar \
/home/hadoop/RunningCode6.py

Traceback (most recent call last):
  File "/home/hadoop/RunningCode6.py", line 74, in <module>
  from sparkdl import KerasImageFileTransformer
ImportError: No module named sparkdl
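For context, a minimal sketch of an alternative invocation, under the assumption (not verified here) that --jars only places the jar on the JVM classpath while the ImportError means the Python sparkdl package never reaches the driver's PYTHONPATH; spark-submit's --py-files option is the documented way to ship Python code (.py, .zip, .egg) to the driver and executors. Paths are the same ones used above:

```shell
# Hedged sketch: pass the same jar via --py-files as well, so the Python
# 'sparkdl' package bundled inside it is added to PYTHONPATH on the
# driver and executors. The paths are assumptions carried over from the
# command above, not a verified fix.
bin/spark-submit \
  --jars /home/hadoop/spark-deep-learning-0.2.0-spark2.1-s_2.11.jar \
  --py-files /home/hadoop/spark-deep-learning-0.2.0-spark2.1-s_2.11.jar \
  --driver-class-path /home/hadoop/spark-deep-learning-0.2.0-spark2.1-s_2.11.jar \
  --conf spark.executor.extraClassPath=/home/hadoop/spark-deep-learning-0.2.0-spark2.1-s_2.11.jar \
  /home/hadoop/RunningCode6.py
```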

The library runs fine in standalone mode, but when I use cluster mode I keep getting one of the errors above.

I really hope someone can help me solve this, because I have been staring at it for weeks now and I need to get it working.

Thanks!

