
I'm trying out JobServer and would like to use it in our production environment.

I want to use MLlib together with spark-jobserver, but I'm running into an error on spark-jobserver when the job is submitted:

job-server[ERROR] Uncaught error from thread [JobServer-akka.actor.default-dispatcher-3] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[JobServer]
job-server[ERROR] java.lang.NoClassDefFoundError: org/apache/spark/mllib/stat/Statistics$
job-server[ERROR]   at SparkCorrelation$.getCorrelation(SparkCorrelation.scala:50)
job-server[ERROR]   at SparkCorrelation$.runJob(SparkCorrelation.scala:28)
job-server[ERROR]   at SparkCorrelation$.runJob(SparkCorrelation.scala:11)
job-server[ERROR]   at spark.jobserver.JobManagerActor$$anonfun$spark$jobserver$JobManagerActor$$getJobFuture$4.apply(JobManagerActor.scala:234)

I'm using spark-jobserver 0.5.0 and Spark 1.2.

Any ideas?

Code:

import com.typesafe.config.Config
import org.apache.spark.SparkContext
import org.apache.spark.mllib.stat.Statistics

def getCorrelation(sc: SparkContext): Double = {
  val pathFile = "hdfs://localhost:9000/user/hduser/correlacion.csv"
  val fileData = getFileData(sc, pathFile)   // helper defined elsewhere in the job
  val colX = getDoubleColumn(fileData, 1)    // column as RDD[Double]
  val colY = getDoubleColumn(fileData, 2)    // column as RDD[Double]
  Statistics.corr(colX, colY, "pearson")     // Pearson correlation via MLlib
}

override def runJob(sc: SparkContext, config: Config): Any = {
  // val dd = sc.parallelize(config.getString("input.string").split(" ").toSeq)
  // dd.map((_, 1)).reduceByKey(_ + _).collect().toMap
  getCorrelation(sc)
}
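
For context, getFileData and getDoubleColumn are not shown in the question. A minimal sketch of what they might look like, assuming a plain comma-separated file of numeric columns (these bodies are an assumption, not the asker's actual code):

import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD

// Hypothetical helper: read the CSV and split each line into its fields.
def getFileData(sc: SparkContext, path: String): RDD[Array[String]] =
  sc.textFile(path).map(_.split(","))

// Hypothetical helper: pick one column by index and parse it as Double.
def getDoubleColumn(fileData: RDD[Array[String]], index: Int): RDD[Double] =
  fileData.map(row => row(index).toDouble)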

1 Answer


In case you still want to know: just use SPARK_CLASSPATH to link to MLlib when running in local mode.

Alternatively, just modify Dependencies.scala to get access to MLlib: simply add it to the lazy val sparkDeps, as sketched below.
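
For the second option, a minimal sketch of the change, assuming the dependency list in project/Dependencies.scala is called sparkDeps and a Spark 1.2.0 build (exact names and versions vary between spark-jobserver releases):

// project/Dependencies.scala (excerpt, sketch only -- the real file has more entries)
lazy val sparkDeps = Seq(
  "org.apache.spark" %% "spark-core"  % "1.2.0",
  // Adding spark-mllib makes org.apache.spark.mllib.stat.Statistics
  // available to jobs run by the job server:
  "org.apache.spark" %% "spark-mllib" % "1.2.0"
)

After editing, rebuild and redeploy the job server so the new dependency actually ends up on its classpath.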

Both solutions were found here:

https://github.com/spark-jobserver/spark-jobserver/issues/341

https://github.com/spark-jobserver/spark-jobserver/issues/138

answered 2016-01-27T13:57:34.913