
I am new to Sparkling Water. I have been trying to develop a project for it in IntelliJ but haven't been able to, and I can't find many resources on this on the internet. Could anyone show how to develop a simple project using H2O and Spark in Scala with IntelliJ?

I tried this code:

import org.apache.spark.h2o.H2OContext
import org.apache.spark.sql.DataFrame
import org.apache.spark.{h2o, SparkConf, SparkContext}
import water.H2OClientApp
import water.fvec._
import org.apache.spark.h2o._
object test {
  def main(args: Array[String]) {

    val conf = new SparkConf().setMaster("local[*]").setAppName("testing")
    val sc = new SparkContext(conf)

    val source = getClass.getResource("data.txt")
    val distF = sc.textFile(source.getFile)
    val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    import sqlContext.implicits._
    val table1 = distF.map(_.split(",")).map(p => Person(p(0), p(1),p(2),p(3),p(4),p(5),p(6))).toDF()


    import org.apache.spark.h2o._
    val h2oContext = new H2OContext(sc).start()
    import h2oContext._
    import org.apache.spark.rdd.RDD

    val mydf2:h2o.RDD[Person] = h2oContext.createH2ORDD(table1)
    println("Count of mydf2================>>>>>>>>"+mydf2.count())

  }
}

case class Person(Country: String, ASN: String,Time_Stamp: String,Metric_A: String,Co_Server: String,Bytes: String,Send_Time:String);

I get an error when I run this. The error portion of the generated log is:

15/12/24 03:45:53 WARN TaskSetManager: Lost task 1.0 in stage 5.0 (TID 17, localhost): java.lang.IllegalArgumentException: argument type mismatch
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.spark.rdd.H2ORDD$$anon$1.next(H2ORDD.scala:106)
    at org.apache.spark.rdd.H2ORDD$$anon$1.next(H2ORDD.scala:64)
    at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1555)
    at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1121)
    at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1121)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1848)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1848)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

15/12/24 03:45:53 ERROR TaskSetManager: Task 1 in stage 5.0 failed 1 times; aborting job
15/12/24 03:45:53 INFO TaskSchedulerImpl: Removed TaskSet 5.0, whose tasks have all completed, from pool 
15/12/24 03:45:53 INFO TaskSetManager: Lost task 0.0 in stage 5.0 (TID 16) on executor localhost: java.lang.IllegalArgumentException (argument type mismatch) [duplicate 1]
15/12/24 03:45:53 INFO TaskSchedulerImpl: Removed TaskSet 5.0, whose tasks have all completed, from pool 
15/12/24 03:45:53 INFO TaskSchedulerImpl: Cancelling stage 5
15/12/24 03:45:53 INFO DAGScheduler: ResultStage 5 (count at test.scala:32) failed in 0.038 s
15/12/24 03:45:53 INFO DAGScheduler: Job 5 failed: count at test.scala:32, took 0.050463 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 5.0 failed 1 times, most recent failure: Lost task 1.0 in stage 5.0 (TID 17, localhost): java.lang.IllegalArgumentException: argument type mismatch
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.spark.rdd.H2ORDD$$anon$1.next(H2ORDD.scala:106)
    at org.apache.spark.rdd.H2ORDD$$anon$1.next(H2ORDD.scala:64)
    at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1555)
    at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1121)
    at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1121)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1848)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1848)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1496)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1447)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1822)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1835)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1848)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1919)
    at org.apache.spark.rdd.RDD.count(RDD.scala:1121)
    at test$.main(test.scala:32)
    at test.main(test.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
Caused by: java.lang.IllegalArgumentException: argument type mismatch
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.spark.rdd.H2ORDD$$anon$1.next(H2ORDD.scala:106)
    at org.apache.spark.rdd.H2ORDD$$anon$1.next(H2ORDD.scala:64)
    at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1555)
    at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1121)
    at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1121)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1848)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1848)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Please let me know where I went wrong, what changes I need to make, and why.

3 Answers

Also take a look at https://github.com/h2oai/h2o-droplets/tree/master/sparkling-water-droplet

It provides skeleton code for a simple Sparkling Water project. Also look at these lines: https://github.com/h2oai/h2o-droplets/blob/master/sparkling-water-droplet/build.gradle#L34-L43 which let you configure the dependencies on H2O and Spark.

I suggest using the latest version of Sparkling Water, 1.5.9.

Regarding opening the project in IDEA: just open build.gradle in IDEA and follow the Gradle project import wizard.

Another update: the droplet now also contains an sbt definition: https://github.com/h2oai/h2o-droplets/blob/master/sparkling-water-droplet/build.sbt
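
Pinning the recommended 1.5.9 release in such an sbt definition would look roughly like the fragment below; the artifact coordinates are assumed from the usual Maven Central naming, so verify them there:

// Assumed build.sbt fragment - check the exact coordinates on Maven Central.
// Sparkling Water 1.5.x is built against Spark 1.5 and Scala 2.10.
libraryDependencies ++= Seq(
  "org.apache.spark" % "spark-core_2.10" % "1.5.1",
  "ai.h2o" % "sparkling-water-core_2.10" % "1.5.9"
)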

Answered 2015-12-24T01:09:16.167

The first thing is to create a Scala project in IntelliJ. Then you have to set up your dependencies in the build.sbt file. Specifically:

name := "Your Project Name"

version := "1.0-SNAPSHOT"

scalaVersion := "2.10.4"

libraryDependencies ++= Seq(
  "org.apache.spark" % "spark-core_2.10" % "1.5.1",
  "org.scalaz" %% "scalaz-core" % "7.1.5",
  "javax.servlet" % "javax.servlet-api" % "3.0.1",
  "junit" % "junit" % "4.12",
  "ai.h2o" % "sparkling-water-core_2.10" % "1.4.8"
)

// Exclude the Scala library from the assembled jar; Spark provides it at runtime.
assemblyOption in assembly := (assemblyOption in assembly).value.copy(includeScala = false)

Depending on your Spark version and your H2O version, you can search the Maven Central repository, check which releases of the two packages are compatible with each other, and pull in the respective versions.

In your case you probably don't need the javax.servlet package.

Additionally, for the assembly plugin you have to declare the following in the /project/plugins.sbt file:

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.13.0")

Then open the SBT tab (on the right-hand side of the IntelliJ window) and press the refresh button (top left).

Finally, verify that everything works by going through step 4 in the following link: http://h2o-release.s3.amazonaws.com/sparkling-water/rel-1.4/9/index.html
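
As a rough sketch of what that verification amounts to (the exact commands are on the linked page): start an H2OContext on top of a running SparkContext and check that the H2O cloud comes up.

// Run inside sparkling-shell, or in an app with Sparkling Water on the classpath.
import org.apache.spark.h2o._
val h2oContext = new H2OContext(sc).start()  // launches H2O on the Spark cluster
println(h2oContext)                          // prints the H2O cluster status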

Hope the above helps.

Answered 2015-12-23T17:07:55.790

This is mostly caused by a type mismatch between the input data and the case class.
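
More specifically: the all-String Person case class in the question likely does not match the column types coming back from H2O, so the reflective constructor call in H2ORDD fails with "argument type mismatch" (for example, columns that H2O parsed as numeric are read back as Doubles, which cannot be passed into String fields). A minimal sketch of a fix, assuming Metric_A, Bytes and Send_Time are actually numeric columns in data.txt (adjust to the real types):

// Hypothetical revision of the question's case class: field types now
// match what H2O is assumed to parse the columns as, instead of all Strings.
case class Person(Country: String, ASN: String, Time_Stamp: String,
                  Metric_A: Double, Co_Server: String,
                  Bytes: Double, Send_Time: Double)

// Parse the numeric fields explicitly when building the DataFrame.
val table1 = distF.map(_.split(","))
  .map(p => Person(p(0), p(1), p(2), p(3).toDouble, p(4),
                   p(5).toDouble, p(6).toDouble))
  .toDF()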

Answered 2016-03-08T22:06:14.243