
After calling logistic regression with sparklyr and Spark 2.0.2, I get the following error from Spark.

ml_logistic_regression(Data, ml_formula)

The dataset I read into Spark is fairly large (2.2 GB). Here is the error message:

Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 
13 in stage 64.0 failed 1 times, most recent failure: 
Lost task 13.0 in stage 64.0 (TID 1132, localhost):    
java.util.concurrent.ExecutionException: 
java.lang.Exception: 
failed to compile: org.codehaus.janino.JaninoRuntimeException: 
Code of method "(Lorg/apache/spark/sql/catalyst/InternalRow;)Z" 
of class "org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificPredicate" 
grows beyond 64 KB

Others have had similar issues: https://github.com/rstudio/sparklyr/issues/298 but I cannot find a solution. Any ideas?


1 Answer


What happens when you subset the data and try to run the model (see the sampling sketch after the configuration example below)? You may also need to change your configuration settings to handle the size of the data:

library(dplyr)
library(sparklyr)

# Configure the Spark session and connect
config <- spark_config()
config$`sparklyr.shell.driver-memory`   <- "XXG"  # change depending on the size of the data
config$`sparklyr.shell.executor-memory` <- "XXG"

sc <- spark_connect(master = "yarn-client", spark_home = "/XXXX/XXXX/XXXX", config = config)
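To subset the data, one option is sparklyr's sdf_sample(). This is a minimal sketch, assuming the data has already been loaded into Spark as a table reference; the name my_tbl, the sampling fraction, and ml_formula are placeholders rather than values from the question:

library(dplyr)
library(sparklyr)

# Take a 10% sample of the Spark DataFrame (my_tbl is a placeholder name)
data_sample <- my_tbl %>%
  sdf_sample(fraction = 0.1, replacement = FALSE, seed = 1099)

# Try fitting on the smaller sample first (ml_formula as in the question)
fit <- ml_logistic_regression(data_sample, ml_formula)

If the model fits on a sample but fails on the full dataset, that points toward resource limits rather than the model specification.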

There are other settings in spark_config() that you can change to tune performance as well; these are just a couple of examples.
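As a rough sketch of what else can be adjusted, standard Spark properties can be set through the same config object; the values below are placeholders, not recommendations, and the specific properties are assumptions rather than anything prescribed by this answer:

config <- spark_config()

# Driver/executor memory, as above
config$`sparklyr.shell.driver-memory`   <- "XXG"
config$`sparklyr.shell.executor-memory` <- "XXG"

# Generic Spark properties can be passed the same way
config$`spark.executor.cores`          <- 4      # cores per executor
config$`spark.executor.instances`      <- 4      # number of executors (YARN)
config$`spark.driver.maxResultSize`    <- "2G"   # cap on results collected back to the driver

sc <- spark_connect(master = "yarn-client", spark_home = "/XXXX/XXXX/XXXX", config = config)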

Answered 2017-04-05T14:17:26.030