
I am trying to convert a Spark DataFrame into an H2O DataFrame.

For the Spark setup, I am using

 .setMaster("local[1]")
 .set("spark.driver.memory", "4g")
 .set("spark.executor.memory", "4g")

I have tried H2O 2.0.2 and H2O 1.6.4. I get the same error at the following lines:

 val trainsetH2O: H2OFrame = trainsetH
 val testsetH2O: H2OFrame = testsetH

The error message is:

 ERROR Executor: Exception in task 49.0 in stage 3.0 (TID 62)
 java.lang.OutOfMemoryError: PermGen space
     at sun.misc.Unsafe.defineClass(Native Method)
     at sun.reflect.ClassDefiner.defineClass(ClassDefiner.java:63)
     at sun.reflect.MethodAccessorGenerator$1.run(MethodAccessorGenerator.java:399)
     at sun.reflect.MethodAccessorGenerator$1.run(MethodAccessorGenerator.java:396)
     at java.security.AccessController.doPrivileged(Native Method)
     at sun.reflect.MethodAccessorGenerator.generate(MethodAccessorGenerator.java:395)
     at sun.reflect.MethodAccessorGenerator.generateSerializationConstructor(MethodAccessorGenerator.java:113)
     at sun.reflect.ReflectionFactory.newConstructorForSerialization(ReflectionFactory.java:331)
     at java.io.ObjectStreamClass.getSerializableConstructor(ObjectStreamClass.java:1376)
     at java.io.ObjectStreamClass.access$1500(ObjectStreamClass.java:72)
     at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:493)
     at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:468)
     at java.security.AccessController.doPrivileged(Native Method)
     at java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:468)
     at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:365)
     at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:602)
     at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1622)
     at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)
     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
     at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
     at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
     at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
     at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
     at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
     at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)

What is going wrong? The data in both trainset and testset is under 10K rows, so it is actually quite small.


1 Answer


The problem is that you are running out of PermGen memory, which is separate from the memory you usually configure for the driver and executors with

 .set("spark.driver.memory", "4g")
 .set("spark.executor.memory", "4g")

PermGen is the region of JVM memory that holds loaded classes. To increase it for the Spark driver and executors, invoke spark-submit or spark-shell with the following parameters:

 --conf spark.driver.extraJavaOptions="-XX:MaxPermSize=384m"
 --conf spark.executor.extraJavaOptions="-XX:MaxPermSize=384m"
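For example, a full spark-submit invocation with these flags might look like the sketch below (the class name `MyApp` and JAR name `my-app.jar` are placeholders for your own application):

```shell
# Raise PermGen for both the driver and executor JVMs.
# In local[1] mode everything runs in the driver JVM, so the
# driver setting is the one that actually takes effect.
spark-submit \
  --master "local[1]" \
  --driver-memory 4g \
  --conf spark.driver.extraJavaOptions="-XX:MaxPermSize=384m" \
  --conf spark.executor.extraJavaOptions="-XX:MaxPermSize=384m" \
  --class MyApp \
  my-app.jar
```

Note that `-XX:MaxPermSize` only applies to Java 7 and earlier; Java 8 removed the permanent generation in favor of Metaspace, where the corresponding option is `-XX:MaxMetaspaceSize`.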

Answered on 2016-12-20T10:04:56.300