
First, let me say that I am new to Spark, SparkR, Hadoop, etc. I am a .NET developer tasked with integrating our .NET application with Apache Spark and, eventually, Apache SparkR. I can currently run the samples locally, but when I point at my Linux cluster (master: spark01, workers: spark02-spark05) I cannot run the Pi example. When I use the script below, I get the error that follows.

My Client Mode Command:
C:\MyData\Apache_Spark\SparkCLR-master\build\runtime>scripts\sparkclr-submit.cmd --proxy-user miadmin --total-executor-cores 2 --master spark://spark01:7077 --exe Pi.exe C:\MyData\Apache_Spark\SparkCLR-master\examples\pi\bin\Debug spark.local.dir %temp%

Error:

"C:\MyData\Apache_Spark\SparkCLR-master\build\tools\spark-1.6.0-bin-hadoop2.6\conf\spark-env.cmd"
SPARKCLR_JAR=spark-clr_2.10-1.6.0-SNAPSHOT.jar
Zip driver directory C:\MyData\Apache_Spark\SparkCLR-master\examples\pi\bin\Debug to C:\Users\shunley\AppData\Local\Temp\Debug_1453925538545.zip
[sparkclr-submit.cmd] Command to run --proxy-user miadmin --total-executor-cores 2 --master spark://spark01:7077 --name Pi --files C:\Users\shunley\AppData\Local\Temp\Debug_1453925538545.zip --class org.apache.spark.deploy.csharp.CSharpRunner C:\MyData\Apache_Spark\SparkCLR-master\build\runtime\lib\spark-clr_2.10-1.6.0-SNAPSHOT.jar C:\MyData\Apache_Spark\SparkCLR-master\examples\pi\bin\Debug C:\MyData\Apache_Spark\SparkCLR-master\examples\pi\bin\Debug\Pi.exe spark.local.dir C:\Users\shunley\AppData\Local\Temp
[CSharpRunner.main] Starting CSharpBackend!
[CSharpRunner.main] Port number used by CSharpBackend is 4485
[CSharpRunner.main] adding key=spark.jars and value=file:/C:/MyData/Apache_Spark/SparkCLR-master/build/runtime/lib/spark-clr_2.10-1.6.0-SNAPSHOT.jar to environment
[CSharpRunner.main] adding key=spark.app.name and value=Pi to environment
[CSharpRunner.main] adding key=spark.cores.max and value=2 to environment
[CSharpRunner.main] adding key=spark.files and value=file:/C:/Users/shunley/AppData/Local/Temp/Debug_1453925538545.zip to environment
[CSharpRunner.main] adding key=spark.submit.deployMode and value=client to environment
[CSharpRunner.main] adding key=spark.master and value=spark://spark01:7077 to environment
[2016-01-27T20:12:19.7218665Z] [SHUNLEY10] [Info] [ConfigurationService] ConfigurationService runMode is CLUSTER
[2016-01-27T20:12:19.7228674Z] [SHUNLEY10] [Info] [SparkCLRConfiguration] CSharpBackend successfully read from environment variable CSHARPBACKEND_PORT
[2016-01-27T20:12:19.7228674Z] [SHUNLEY10] [Info] [SparkCLRIpcProxy] CSharpBackend port number to be used in JvMBridge is 4485
[2016-01-27 15:12:19,866] [1] [DEBUG] [Microsoft.Spark.CSharp.Examples.PiExample] - spark.local.dir is set to C:\Users\shunley\AppData\Local\Temp\
[2016-01-27 15:12:21,467] [1] [INFO ] [Microsoft.Spark.CSharp.Examples.PiExample] - ----- Running Pi example -----
collectAndServe on object of type NullObject failed
null
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.api.csharp.CSharpBackendHandler.handleMethodCall(CSharpBackendHandler.scala:153)
at org.apache.spark.api.csharp.CSharpBackendHandler.channelRead0(CSharpBackendHandler.scala:94)
at org.apache.spark.api.csharp.CSharpBackendHandler.channelRead0(CSharpBackendHandler.scala:27)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 9, spark02): java.io.IOException: Cannot run program "CSharpWorker.exe": error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1047)
at org.apache.spark.api.python.PythonWorkerFactory.startDaemon(PythonWorkerFactory.scala:161)
at org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:87)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:63)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:134)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:101)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.api.csharp.CSharpRDD.compute(CSharpRDD.scala:62)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: error=2, No such file or directory
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:187)
at java.lang.ProcessImpl.start(ProcessImpl.java:130)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1028)
... 15 more

Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:927)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.collect(RDD.scala:926)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:405)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
... 25 more
Caused by: java.io.IOException: Cannot run program "CSharpWorker.exe": error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1047)
at org.apache.spark.api.python.PythonWorkerFactory.startDaemon(PythonWorkerFactory.scala:161)
at org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:87)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:63)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:134)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:101)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.api.csharp.CSharpRDD.compute(CSharpRDD.scala:62)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
... 1 more
Caused by: java.io.IOException: error=2, No such file or directory
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:187)
at java.lang.ProcessImpl.start(ProcessImpl.java:130)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1028)
... 15 more
()
methods:
public static int org.apache.spark.api.python.PythonRDD.collectAndServe(org.apache.spark.rdd.RDD)
args:
argType: org.apache.spark.api.csharp.CSharpRDD, argValue: CSharpRDD[1] at RDD at PythonRDD.scala:43
[2016-01-27T20:12:28.0995397Z] [SHUNLEY10] [Error] [JvmBridge] JVM method execution failed: Static method collectAndServe failed for class org.apache.spark.api.python.PythonRDD when called with 1 parameters ([Index=1, Type=JvmObjectReference, Value=12], )
[2016-01-27T20:12:28.0995397Z] [SHUNLEY10] [Error] [JvmBridge] org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 9, spark02): java.io.IOException: Cannot run program "CSharpWorker.exe": error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1047)
at org.apache.spark.api.python.PythonWorkerFactory.startDaemon(PythonWorkerFactory.scala:161)
at org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:87)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:63)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:134)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:101)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.api.csharp.CSharpRDD.compute(CSharpRDD.scala:62)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: error=2, No such file or directory
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:187)
at java.lang.ProcessImpl.start(ProcessImpl.java:130)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1028)
... 15 more

Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:927)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.collect(RDD.scala:926)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:405)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.api.csharp.CSharpBackendHandler.handleMethodCall(CSharpBackendHandler.scala:153)
at org.apache.spark.api.csharp.CSharpBackendHandler.channelRead0(CSharpBackendHandler.scala:94)
at org.apache.spark.api.csharp.CSharpBackendHandler.channelRead0(CSharpBackendHandler.scala:27)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Cannot run program "CSharpWorker.exe": error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1047)
at org.apache.spark.api.python.PythonWorkerFactory.startDaemon(PythonWorkerFactory.scala:161)
at org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:87)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:63)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:134)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:101)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.api.csharp.CSharpRDD.compute(CSharpRDD.scala:62)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
... 1 more
Caused by: java.io.IOException: error=2, No such file or directory
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:187)
at java.lang.ProcessImpl.start(ProcessImpl.java:130)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1028)
... 15 more

[2016-01-27T20:12:28.1296129Z] [SHUNLEY10] [Exception] [JvmBridge] JVM method execution failed: Static method collectAndServe failed for class org.apache.spark.api.python.PythonRDD when called with 1 parameters ([Index=1, Type=JvmObjectReference, Value=12], )
at Microsoft.Spark.CSharp.Interop.Ipc.JvmBridge.CallJavaMethod(Boolean isStatic, Object classNameOrJvmObjectReference, String methodName, Object[] parameters)
[2016-01-27 15:12:28,130] [1] [INFO ] [Microsoft.Spark.CSharp.Examples.PiExample] - ----- Error running Pi example (duration=00:00:06.6599877) -----
System.Exception: JVM method execution failed: Static method collectAndServe failed for class org.apache.spark.api.python.PythonRDD when called with 1 parameters ([Index=1, Type=JvmObjectReference, Value=12], )
at Microsoft.Spark.CSharp.Interop.Ipc.JvmBridge.CallJavaMethod(Boolean isStatic, Object classNameOrJvmObjectReference, String methodName, Object[] parameters)
at Microsoft.Spark.CSharp.Interop.Ipc.JvmBridge.CallStaticJavaMethod(String className, String methodName, Object[] parameters)
at Microsoft.Spark.CSharp.Proxy.Ipc.RDDIpcProxy.CollectAndServe()
at Microsoft.Spark.CSharp.Core.RDD`1.Collect()
at Microsoft.Spark.CSharp.Core.RDD`1.Reduce(Func`3 f)
at Microsoft.Spark.CSharp.Examples.PiExample.Pi() in C:\MyData\Apache_Spark\SparkCLR-master\examples\Pi\Program.cs:line 76
at Microsoft.Spark.CSharp.Examples.PiExample.Main(String[] args) in C:\MyData\Apache_Spark\SparkCLR-master\examples\Pi\Program.cs:line 35
[2016-01-27 15:12:28,131] [1] [INFO ] [Microsoft.Spark.CSharp.Examples.PiExample] - Completed running examples. Calling SparkContext.Stop() to tear down ...
[2016-01-27 15:12:28,131] [1] [INFO ] [Microsoft.Spark.CSharp.Examples.PiExample] - If this program (SparkCLRExamples.exe) does not terminate in 10 seconds, please manually terminate java process launched by this program!!!
Requesting to close all call back sockets.
[CSharpRunner.main] closing CSharpBackend
Requesting to close all call back sockets.
[CSharpRunner.main] Return CSharpBackend code 1
Utils.exit() with status: 1, maxDelayMillis: 1000

As for documentation, I have a few questions, since the Quick Start (https://github.com/Microsoft/SparkCLR/wiki/Quick-Start) doesn't really cover this.

When the Quick Start says to use the following command for a standalone cluster environment:

cd \path\to\runtime

scripts\sparkclr-submit.cmd ^
--total-executor-cores 2 ^
--master spark://host:port ^ 
--exe Pi.exe ^
\path\to\Pi\bin[debug|release] ^
spark.local.dir %temp%

I understand that the first line navigates to the runtime folder (locally, or on the submit server). I specify the master so it knows which Spark cluster to run against (the remote Spark cluster). Now, here is what confuses me: do we still point at the local (Windows) file system for the Pi executable and the temp directory? Can we also specify a data directory? If we specify a Linux directory on the cluster for the data, what is the format (especially if we are not using Hadoop)? Something like user@spark url:/path/to/sparkclr/runtime/samples/Pi/bin?

We are currently looking at using Spark and SparkR for processing from within our application, and I'm just trying to understand how your API interacts with Spark to submit work, retrieve results, and so on.

Any help getting the cluster examples up and running (in both client and cluster mode) would be greatly appreciated.

Thanks,

Scott


1 Answer


Judging by the error message, CSharpWorker.exe appears to be missing. Please double-check that it exists in the directory C:\MyData\Apache_Spark\SparkCLR-master\examples\pi\bin\Debug.

For reference, here is a typical file listing for the Pi example:

01/25/2016  02:36 PM    <DIR>          .
01/25/2016  02:36 PM    <DIR>          ..
01/21/2016  11:58 AM            16,384 CSharpWorker.exe
01/21/2016  11:58 AM             1,737 CSharpWorker.exe.config
01/13/2016  09:55 PM           304,640 log4net.dll
01/13/2016  09:55 PM         1,533,153 log4net.xml
01/21/2016  11:58 AM           233,472 Microsoft.Spark.CSharp.Adapter.dll
01/13/2016  09:55 PM           520,192 Newtonsoft.Json.dll
01/13/2016  09:55 PM           501,178 Newtonsoft.Json.xml
01/21/2016  12:42 PM             8,704 Pi.exe
01/13/2016  10:00 PM             1,673 Pi.exe.config
01/21/2016  12:42 PM            17,920 Pi.pdb
01/25/2016  02:36 PM            24,216 Pi.vshost.exe
01/13/2016  10:00 PM             1,673 Pi.vshost.exe.config
07/10/2015  07:01 PM               490 Pi.vshost.exe.manifest
01/13/2016  09:55 PM            74,240 Razorvine.Pyrolite.dll
01/13/2016  09:55 PM            40,960 Razorvine.Serpent.dll

To answer your other questions:

Question 1: What's confusing here is, do we still point at the local (Windows) file system for the Pi executable and the temp directory?

That depends on the deploy mode you use. In client mode, the driver runs locally, so the Pi executable and its dependencies need to be on the local file system. In cluster mode, you need to package the executable and its dependencies into a zip file and upload it to HDFS, put spark-clr_2.10-1.5.200.jar on HDFS as well, and then submit the application with the following command:

sparkclr-submit.cmd --proxy-user miadmin --total-executor-cores 20 --master spark://spark01:7077 --remote-sparkclr-jar hdfs://path/to/spark-clr_2.10-1.5.200.jar --exe Pi.exe hdfs://path/to/Pi.zip
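
For the upload step, here is a minimal sketch, assuming a Hadoop client is configured on the submit machine and that the zip (containing Pi.exe, CSharpWorker.exe and the other dependencies from the listing above) has already been built; the HDFS directory and user are placeholders, not taken from your setup:

rem sketch only: adjust the HDFS directory to your environment
hadoop fs -mkdir -p /user/miadmin/sparkclr
hadoop fs -put Pi.zip /user/miadmin/sparkclr/Pi.zip
hadoop fs -put spark-clr_2.10-1.5.200.jar /user/miadmin/sparkclr/spark-clr_2.10-1.5.200.jar

The hdfs://path/to/... URIs in the submit command above would then point at those uploaded files.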

Question 2: Can we also specify a data directory? If we specify a Linux directory on the cluster for the data, what is the format (especially if we are not using Hadoop)? user@spark url:/path/to/sparkclr/runtime/samples/Pi/bin

If I understand you correctly, the data directory you mention here would be used by your driver application. If so, it is entirely up to your driver whether it can handle that format. Everything specified after the driver directory or zip file in the submit command is passed straight through to the driver as program arguments.
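
To illustrate that pass-through, here is a hedged sketch of a client-mode submit that appends one extra data path after the spark.local.dir pair; the hdfs:// URI and namenode port are hypothetical, and the Pi example itself would have to be modified to read such an argument:

scripts\sparkclr-submit.cmd ^
--total-executor-cores 2 ^
--master spark://spark01:7077 ^
--exe Pi.exe ^
C:\MyData\Apache_Spark\SparkCLR-master\examples\pi\bin\Debug ^
spark.local.dir %temp% ^
hdfs://spark01:8020/data/pi-input

Everything after the Debug directory (spark.local.dir, %temp%, and the extra hdfs:// path) arrives in the driver's Main(string[] args). Whether a plain Linux path works without Hadoop depends on whether every worker node can read it, for example via a shared mount referenced as file:///mnt/shared/..., which again is only an assumption about your environment.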

Answered 2016-01-28T02:40:46.833