I get the following error from Spark when calling logistic regression with sparklyr against Spark 2.0.2:
ml_logistic_regression(Data, ml_formula)
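For context, the workflow looks roughly like this (a minimal sketch; the connection settings, file path, and column names are placeholders, not my exact setup):

library(sparklyr)

# Connect to Spark 2.0.2 (local master is a placeholder for my actual cluster config)
sc <- spark_connect(master = "local", version = "2.0.2")

# Read the large dataset into Spark; table name and path are placeholders
Data <- spark_read_csv(sc, name = "mydata", path = "data.csv")

# Formula over the dataset's columns; label/feature names are hypothetical
ml_formula <- label ~ feature1 + feature2

# This is the call that fails
model <- ml_logistic_regression(Data, ml_formula)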
The dataset I read into Spark is fairly large (2.2 GB). Here is the error message:
Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task
13 in stage 64.0 failed 1 times, most recent failure:
Lost task 13.0 in stage 64.0 (TID 1132, localhost):
java.util.concurrent.ExecutionException:
java.lang.Exception:
failed to compile: org.codehaus.janino.JaninoRuntimeException:
Code of method "(Lorg/apache/spark/sql/catalyst/InternalRow;)Z"
of class "org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificPredicate"
grows beyond 64 KB
Others have run into a similar problem: https://github.com/rstudio/sparklyr/issues/298 but I could not find a solution there. Any ideas?