
My cluster runs Spark 0.7.2 + Mesos 0.9. I wrote a Spark program in Python that runs fine in local mode, but when I run it on Mesos some errors occur. Here is the error message:

13/09/30 15:40:13 INFO TaskSetManager: Finished TID 13 in 242 ms (progress: 2/3)
13/09/30 15:40:13 INFO DAGScheduler: Completed ResultTask(4, 1)
send
Exception in thread "DAGScheduler" spark.SparkException: EOF reached before Python server acknowledged
        at spark.api.python.PythonAccumulatorParam.addInPlace(PythonRDD.scala:303)
        at spark.api.python.PythonAccumulatorParam.addInPlace(PythonRDD.scala:278)
        at spark.Accumulable.$plus$plus$eq(Accumulators.scala:52)
        at spark.Accumulators$$anonfun$add$2.apply(Accumulators.scala:235)
        at spark.Accumulators$$anonfun$add$2.apply(Accumulators.scala:233)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:93)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:93)
        at scala.collection.Iterator$class.foreach(Iterator.scala:660)
        at scala.collection.mutable.HashTable$$anon$1.foreach(HashTable.scala:157)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:190)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:43)
        at scala.collection.mutable.HashMap.foreach(HashMap.scala:93)
        at spark.Accumulators$.add(Accumulators.scala:233)
        at spark.scheduler.DAGScheduler.handleTaskCompletion(DAGScheduler.scala:494)
        at spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:300)
        at spark.scheduler.DAGScheduler.spark$scheduler$DAGScheduler$$run(DAGScheduler.scala:364)
        at spark.scheduler.DAGScheduler$$anon$1.run(DAGScheduler.scala:107)
13/09/30 15:40:13 INFO TaskSetManager: Finished TID 12 in 407 ms (progress: 3/3)

This does not happen every time; the socket connection seems to be unstable. Can anyone suggest how to fix this?


1 Answer


I solved this problem by updating Java 8 to u91.

answered 2016-06-06T16:42:23.053