
I'm new to Scala and Spark.

I'm practicing with the SparkHdfsLR.scala example code, but I ran into a problem in this snippet:

60    val lines = sc.textFile(inputPath)
61    val points = lines.map(parsePoint _).cache()
62    val ITERATIONS = args(2).toInt

Line 61 doesn't work. After I changed it to this:

60    val lines = sc.textFile(inputPath)
61    val points = lines.take(149800).map(parsePoint _)  //149800 is the total number of lines
62    val ITERATIONS = args(2).toInt
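For reference, here is my understanding of the difference between the two versions, as a rough sketch reusing sc, inputPath and parsePoint from above (the comments are my own reading of the API, not part of the example code):

val lines = sc.textFile(inputPath)                 // RDD[String], partitioned across the cluster

val points = lines.map(parsePoint _).cache()       // transformation: still an RDD, evaluated on the executors
val local  = lines.take(149800).map(parsePoint _)  // take is an action returning Array[String] on the driver,
                                                   // so this map runs locally, not on the cluster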

The error message from sbt run is:

[error] (run-main) org.apache.spark.SparkException: Job failed: Task 0.0:1 failed more than 4 times
org.apache.spark.SparkException: Job failed: Task 0.0:1 failed more than 4 times
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:760)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:758)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:60)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:758)
at org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:379)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$run(DAGScheduler.scala:441)
at org.apache.spark.scheduler.DAGScheduler$$anon$1.run(DAGScheduler.scala:149)
java.lang.RuntimeException: Nonzero exit code: 1
at scala.sys.package$.error(package.scala:27)
[error] {file:/var/sdb/home/tim.tan/workspace/spark/}default-d3d73f/compile:run: Nonzero exit code: 1
[error] Total time: 52 s, completed Dec 20, 2013 5:42:18 PM

The stderr on the task node is:

13/12/20 17:42:16 INFO slf4j.Slf4jEventHandler: Slf4jEventHandler started
13/12/20 17:42:16 INFO executor.StandaloneExecutorBackend: Connecting to driver: akka://spark@SHXJ-H07-SDB06:38975/user/StandaloneScheduler
13/12/20 17:42:17 INFO executor.StandaloneExecutorBackend: Successfully registered with driver
13/12/20 17:42:17 INFO slf4j.Slf4jEventHandler: Slf4jEventHandler started
13/12/20 17:42:17 INFO spark.SparkEnv: Connecting to BlockManagerMaster: akka://spark@SHXJ-H07-SDB06:38975/user/BlockManagerMaster
13/12/20 17:42:17 INFO storage.MemoryStore: MemoryStore started with capacity 323.9 MB.
13/12/20 17:42:17 INFO storage.DiskStore: Created local directory at /tmp/spark-local-20131220174217-be8e
13/12/20 17:42:17 INFO network.ConnectionManager: Bound socket to port 52043 with id = ConnectionManagerId(TS-BH90,52043)
13/12/20 17:42:17 INFO storage.BlockManagerMaster: Trying to register BlockManager
13/12/20 17:42:17 INFO storage.BlockManagerMaster: Registered BlockManager
13/12/20 17:42:17 INFO spark.SparkEnv: Connecting to MapOutputTracker: akka://spark@SHXJ-H07-SDB06:38975/user/MapOutputTracker
13/12/20 17:42:17 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-1b1a6c0b-965e-4834-a3d3-554c95442041
13/12/20 17:42:17 INFO server.Server: jetty-7.x.y-SNAPSHOT
13/12/20 17:42:17 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:41811
13/12/20 17:42:18 ERROR executor.StandaloneExecutorBackend: Driver terminated or disconnected! Shutting down.

The worker log is as follows:

13/12/19 17:49:26 INFO worker.Worker: Asked to launch executor app-20131219174926-0001/2 for SparkHdfsLR
13/12/19 17:49:26 INFO worker.ExecutorRunner: Launch command: "java" "-cp" ":/var/bh/spark/conf:/var/bh/spark/assembly/target/scala-2.9.3/spark-assembly-0.8.0-incubating-hadoop1.0.3.jar:/var/bh/spark/core/target/scala-2.9.3/test-classes:/var/bh/spark/repl/target/scala-2.9.3/test-classes:/var/bh/spark/mllib/target/scala-2.9.3/test-classes:/var/bh/spark/bagel/target/scala-2.9.3/test-classes:/var/bh/spark/streaming/target/scala-2.9.3/test-classes" "-Djava.library.path=/var/bh/hadoop/lib/native/Linux-amd64-64/" "-Xms512M" "-Xmx512M" "org.apache.spark.executor.StandaloneExecutorBackend" "akka://spark@SHXJ-H07-SDB06:56158/user/StandaloneScheduler" "2" "TS-BH87" "8"
13/12/19 17:49:30 INFO worker.Worker: Asked to kill executor app-20131219174926-0001/2
13/12/19 17:49:30 INFO worker.ExecutorRunner: Runner thread for executor app-20131219174926-0001/2 interrupted
13/12/19 17:49:30 INFO worker.ExecutorRunner: Killing process!

It looks like the executor was never launched successfully.

I don't know why. Can anyone give me a suggestion?


1 Answer


I found out why it didn't work.

Because of some bad configuration, Spark could only work in standalone mode. After correcting the configuration, if you want the code to run in distributed mode, the last two parameters must be passed to the SparkContext constructor:

new SparkContext(master, jobName, [sparkHome], [jars])

If the last two parameters are not given explicitly, the Scala script will only work in standalone mode.
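For example, a minimal sketch (the master URL, sparkHome and jar path below are placeholders for my own cluster layout; substitute your own values):

import org.apache.spark.SparkContext

val sc = new SparkContext(
  "spark://SHXJ-H07-SDB06:7077",          // master: the cluster URL instead of "local"
  "SparkHdfsLR",                          // jobName, shown in the master web UI
  "/var/bh/spark",                        // sparkHome: where Spark is installed on the workers
  List("target/scala-2.9.3/my-job.jar"))  // jars: job code shipped to every executor (placeholder path)

My understanding is that without the jars argument the executors never receive the compiled job classes, which matches the repeated task failures above.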

Answered 2013-12-23T02:53:21.603