
I am building a model with RandomForestClassifier. Here is my code:

from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext

conf = SparkConf()
conf.setAppName('spark-nltk')
sc = SparkContext(conf=conf)
sqlContext = SQLContext(sc)

# each line is "<label> <sentence>", so split only on the first space
m = sc.textFile("Question_Type_Classification_testing_purpose/data/TREC_10.txt").map(lambda s: s.split(" ", 1))
df = m.toDF()
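
For reference, a quick way to inspect the parsed data (illustration only, not part of the actual job):

df.printSchema()                  # two string columns: _1 (label) and _2 (sentence)
df.groupBy("_1").count().show()   # distribution of question-type labels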

The resulting DataFrame has the two default column names "_1" and "_2". Column "_1" holds the labels and column "_2" holds the training data, which are plain-text sentences. I then run the following steps to build the model:

from pyspark.ml.feature import Tokenizer, HashingTF, StringIndexer
from pyspark.ml.classification import RandomForestClassifier

tokenizer = Tokenizer(inputCol="_2", outputCol="words")
tok = tokenizer.transform(df)
hashingTF = HashingTF(inputCol="words", outputCol="raw_features")
h = hashingTF.transform(tok)
indexer = StringIndexer(inputCol="_1", outputCol="idxlabel").fit(df)
idx = indexer.transform(h)
lr = RandomForestClassifier(labelCol="idxlabel", featuresCol="raw_features")
model = lr.fit(idx)

I know I could use Pipeline() instead of calling transform() at every step, but I also need my custom POS tagger, and there are issues with persisting a pipeline that contains a custom transformer, so I decided to stick with the standard library transformers.
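
For reference, a rough sketch of what the equivalent Pipeline version would look like (illustrative only; unfitted stages, and it does not include the custom POS tagger):

from pyspark.ml import Pipeline

pipeline = Pipeline(stages=[
    Tokenizer(inputCol="_2", outputCol="words"),
    HashingTF(inputCol="words", outputCol="raw_features"),
    StringIndexer(inputCol="_1", outputCol="idxlabel"),
    RandomForestClassifier(labelCol="idxlabel", featuresCol="raw_features"),
])
# model = pipeline.fit(df)

I submit the Spark job with the following command: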

spark-submit --driver-memory 5g Question_Type_Classification_testing_purpose/spark-nltk.py

When running the job locally I set only the driver memory (to 5g), because in local mode the executors run inside the driver process.

17/04/01 02:59:19 INFO Executor: Running task 1.0 in stage 15.0 (TID 25)
17/04/01 02:59:19 INFO Executor: Running task 0.0 in stage 15.0 (TID 24)
17/04/01 02:59:19 INFO BlockManager: Found block rdd_38_1 locally
17/04/01 02:59:19 INFO BlockManager: Found block rdd_38_0 locally
17/04/01 02:59:19 INFO Executor: Finished task 1.0 in stage 15.0 (TID 25). 2432 bytes result sent to driver
17/04/01 02:59:19 INFO Executor: Finished task 0.0 in stage 15.0 (TID 24). 2432 bytes result sent to driver
17/04/01 02:59:19 INFO TaskSetManager: Finished task 1.0 in stage 15.0 (TID 25) in 390 ms on localhost (executor driver) (1/2)
17/04/01 02:59:19 INFO TaskSetManager: Finished task 0.0 in stage 15.0 (TID 24) in 390 ms on localhost (executor driver) (2/2)
17/04/01 02:59:19 INFO TaskSchedulerImpl: Removed TaskSet 15.0, whose tasks have all completed, from pool
17/04/01 02:59:19 INFO DAGScheduler: ShuffleMapStage 15 (mapPartitions at RandomForest.scala:534) finished in 0.390 s
17/04/01 02:59:19 INFO DAGScheduler: looking for newly runnable stages
17/04/01 02:59:19 INFO DAGScheduler: running: Set()
17/04/01 02:59:19 INFO DAGScheduler: waiting: Set(ResultStage 16)
17/04/01 02:59:19 INFO DAGScheduler: failed: Set()
17/04/01 02:59:19 INFO DAGScheduler: Submitting ResultStage 16 (MapPartitionsRDD[47] at map at RandomForest.scala:553), which has no missing parents
17/04/01 02:59:19 INFO MemoryStore: Block broadcast_20 stored as values in memory (estimated size 2.6 MB, free 1728.4 MB)
17/04/01 02:59:19 INFO MemoryStore: Block broadcast_20_piece0 stored as bytes in memory (estimated size 57.0 KB, free 1728.4 MB)
17/04/01 02:59:19 INFO BlockManagerInfo: Added broadcast_20_piece0 in memory on 192.168.56.1:55850 (size: 57.0 KB, free: 1757.9 MB)
17/04/01 02:59:19 INFO SparkContext: Created broadcast 20 from broadcast at DAGScheduler.scala:996
17/04/01 02:59:19 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 16 (MapPartitionsRDD[47] at map at RandomForest.scala:553)
17/04/01 02:59:19 INFO TaskSchedulerImpl: Adding task set 16.0 with 2 tasks
17/04/01 02:59:19 INFO TaskSetManager: Starting task 0.0 in stage 16.0 (TID 26, localhost, executor driver, partition 0, ANY, 5848 bytes)
17/04/01 02:59:19 INFO TaskSetManager: Starting task 1.0 in stage 16.0 (TID 27, localhost, executor driver, partition 1, ANY, 5848 bytes)
17/04/01 02:59:19 INFO Executor: Running task 0.0 in stage 16.0 (TID 26)
17/04/01 02:59:19 INFO Executor: Running task 1.0 in stage 16.0 (TID 27)
17/04/01 02:59:19 INFO ShuffleBlockFetcherIterator: Getting 2 non-empty blocks out of 2 blocks
17/04/01 02:59:19 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
17/04/01 02:59:19 INFO ShuffleBlockFetcherIterator: Getting 2 non-empty blocks out of 2 blocks
17/04/01 02:59:19 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
17/04/01 02:59:19 INFO Executor: Finished task 1.0 in stage 16.0 (TID 27). 14434 bytes result sent to driver
17/04/01 02:59:19 INFO TaskSetManager: Finished task 1.0 in stage 16.0 (TID 27) in 78 ms on localhost (executor driver) (1/2)
17/04/01 02:59:19 ERROR Executor: Exception in task 0.0 in stage 16.0 (TID 26)
java.lang.UnsupportedOperationException: empty.maxBy
    at scala.collection.TraversableOnce$class.maxBy(TraversableOnce.scala:236)
    at scala.collection.SeqViewLike$AbstractTransformed.maxBy(SeqViewLike.scala:37)
    at org.apache.spark.ml.tree.impl.RandomForest$.binsToBestSplit(RandomForest.scala:831)
    at org.apache.spark.ml.tree.impl.RandomForest$$anonfun$14.apply(RandomForest.scala:561)
    at org.apache.spark.ml.tree.impl.RandomForest$$anonfun$14.apply(RandomForest.scala:553)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
    at scala.collection.AbstractIterator.to(Iterator.scala:1336)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1336)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:935)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:935)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1944)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1944)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
17/04/01 02:59:19 WARN TaskSetManager: Lost task 0.0 in stage 16.0 (TID 26, localhost, executor driver): java.lang.UnsupportedOperationException: empty.maxBy
    at scala.collection.TraversableOnce$class.maxBy(TraversableOnce.scala:236)
    at scala.collection.SeqViewLike$AbstractTransformed.maxBy(SeqViewLike.scala:37)
    at org.apache.spark.ml.tree.impl.RandomForest$.binsToBestSplit(RandomForest.scala:831)
    at org.apache.spark.ml.tree.impl.RandomForest$$anonfun$14.apply(RandomForest.scala:561)
    at org.apache.spark.ml.tree.impl.RandomForest$$anonfun$14.apply(RandomForest.scala:553)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
    at scala.collection.AbstractIterator.to(Iterator.scala:1336)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1336)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:935)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:935)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1944)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1944)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

17/04/01 02:59:19 ERROR TaskSetManager: Task 0 in stage 16.0 failed 1 times; aborting job
17/04/01 02:59:19 INFO TaskSchedulerImpl: Removed TaskSet 16.0, whose tasks have all completed, from pool
17/04/01 02:59:19 INFO TaskSchedulerImpl: Cancelling stage 16
17/04/01 02:59:19 INFO DAGScheduler: ResultStage 16 (collectAsMap at RandomForest.scala:563) failed in 0.328 s due to Job aborted due to stage failure: Task 0 in stage 16.0 failed 1 times, most recent failure: Lost task 0.0 in stage 16.0 (TID 26, localhost, executor driver): java.lang.UnsupportedOperationException: empty.maxBy
    at scala.collection.TraversableOnce$class.maxBy(TraversableOnce.scala:236)
    at scala.collection.SeqViewLike$AbstractTransformed.maxBy(SeqViewLike.scala:37)
    at org.apache.spark.ml.tree.impl.RandomForest$.binsToBestSplit(RandomForest.scala:831)
    at org.apache.spark.ml.tree.impl.RandomForest$$anonfun$14.apply(RandomForest.scala:561)
    at org.apache.spark.ml.tree.impl.RandomForest$$anonfun$14.apply(RandomForest.scala:553)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
    at scala.collection.AbstractIterator.to(Iterator.scala:1336)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1336)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:935)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:935)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1944)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1944)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
17/04/01 02:59:19 INFO DAGScheduler: Job 11 failed: collectAsMap at RandomForest.scala:563, took 1.057042 s
Traceback (most recent call last):
  File "C:/SPARK2.0/bin/Question_Type_Classification_testing_purpose/spark-nltk2.py", line 178, in <module>
    model=lr.fit(idx)
  File "C:\SPARK2.0\python\lib\pyspark.zip\pyspark\ml\base.py", line 64, in fit
  File "C:\SPARK2.0\python\lib\pyspark.zip\pyspark\ml\wrapper.py", line 236, in _fit
  File "C:\SPARK2.0\python\lib\pyspark.zip\pyspark\ml\wrapper.py", line 233, in _fit_java
  File "C:\SPARK2.0\python\lib\py4j-0.10.4-src.zip\py4j\java_gateway.py", line 1133, in __call__
  File "C:\SPARK2.0\python\lib\pyspark.zip\pyspark\sql\utils.py", line 63, in deco
  File "C:\SPARK2.0\python\lib\py4j-0.10.4-src.zip\py4j\protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o86.fit.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 16.0 failed 1 times, most recent failure: Lost task 0.0 in stage 16.0 (TID 26, localhost, executor driver): java.lang.UnsupportedOperationException: empty.maxBy
    at scala.collection.TraversableOnce$class.maxBy(TraversableOnce.scala:236)
    at scala.collection.SeqViewLike$AbstractTransformed.maxBy(SeqViewLike.scala:37)
    at org.apache.spark.ml.tree.impl.RandomForest$.binsToBestSplit(RandomForest.scala:831)
    at org.apache.spark.ml.tree.impl.RandomForest$$anonfun$14.apply(RandomForest.scala:561)
    at org.apache.spark.ml.tree.impl.RandomForest$$anonfun$14.apply(RandomForest.scala:553)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
    at scala.collection.AbstractIterator.to(Iterator.scala:1336)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1336)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:935)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:935)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1944)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1944)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1918)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1931)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1944)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1958)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:935)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:934)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$collectAsMap$1.apply(PairRDDFunctions.scala:748)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$collectAsMap$1.apply(PairRDDFunctions.scala:747)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
    at org.apache.spark.rdd.PairRDDFunctions.collectAsMap(PairRDDFunctions.scala:747)
    at org.apache.spark.ml.tree.impl.RandomForest$.findBestSplits(RandomForest.scala:563)
    at org.apache.spark.ml.tree.impl.RandomForest$.run(RandomForest.scala:198)
    at org.apache.spark.ml.classification.RandomForestClassifier.train(RandomForestClassifier.scala:137)
    at org.apache.spark.ml.classification.RandomForestClassifier.train(RandomForestClassifier.scala:45)
    at org.apache.spark.ml.Predictor.fit(Predictor.scala:96)
    at org.apache.spark.ml.Predictor.fit(Predictor.scala:72)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:280)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:745)

Caused by: java.lang.UnsupportedOperationException: empty.maxBy
    at scala.collection.TraversableOnce$class.maxBy(TraversableOnce.scala:236)
    at scala.collection.SeqViewLike$AbstractTransformed.maxBy(SeqViewLike.scala:37)
    at org.apache.spark.ml.tree.impl.RandomForest$.binsToBestSplit(RandomForest.scala:831)
    at org.apache.spark.ml.tree.impl.RandomForest$$anonfun$14.apply(RandomForest.scala:561)
    at org.apache.spark.ml.tree.impl.RandomForest$$anonfun$14.apply(RandomForest.scala:553)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
    at scala.collection.AbstractIterator.to(Iterator.scala:1336)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1336)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:935)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:935)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1944)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1944)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    ... 1 more

17/04/01 02:59:20 INFO SparkContext: Invoking stop() from shutdown hook
17/04/01 02:59:20 INFO SparkUI: Stopped Spark web UI at http://192.168.56.1:4040

When I submit the Spark job, about 6 GB of my 8 GB of physical RAM is free, so I set the driver memory to 5g. Initially I gave it 4g, but got an "OutOfMemory" error, so I changed it to 5. My dataset is very small: about 900 records of plain-text sentences, roughly 50 KB in file size.

What could be causing this error? I have tried both reducing and increasing the data size, but nothing changes. Can anyone tell me what I am doing wrong? Do I need to set any other conf variables? Is it specific to RandomForest? Any help is much appreciated. I am using PySpark 2.1 with Python 2.7 on a 4-core Windows machine.


3 Answers


I also ran into this problem in Spark 2.1 when training a random forest classifier. I did not have this issue before when I was using Spark 2.0, so I tried Spark 2.0.2 and my program ran without any problem.

As for the error, my guess is that the data is not evenly partitioned. Maybe try repartitioning your data before training, as sketched below. Also, do not create too many partitions; otherwise each partition may hold only a few data points, which can trigger edge cases during training.
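
For example, something along these lines (the partition count of 4 is just a placeholder; pick something close to your number of cores):

# repartition the feature DataFrame into a small number of partitions before training
idx = idx.repartition(4)
model = lr.fit(idx)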

answered 2017-05-17T09:36:06.460

This is a bug in Spark and will be fixed in version 2.2. See https://issues.apache.org/jira/browse/SPARK-18036 for details.

answered 2017-06-27T19:54:37.980

I am not sure what the problem is with RandomForest, but the code above works if I use DecisionTreeClassifier instead (see the sketch below). Possibly, because RF is an ensemble method, it does something extra with the dataset that triggers the error; with DT, it works fine.
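
For example, a minimal swap of the classifier, keeping the rest of the question's code unchanged:

from pyspark.ml.classification import DecisionTreeClassifier

# use a single decision tree instead of the random forest
dt = DecisionTreeClassifier(labelCol="idxlabel", featuresCol="raw_features")
model = dt.fit(idx)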

For now this is the only workaround I can think of. If anyone figures out a proper solution to the problem above, please let me know.

answered 2017-04-02T08:10:23.670