
I'm trying to use H2O Sparkling Water on Google DataProc. I've run Sparkling Water successfully on standalone Spark, and moved on to trying it on DataProc. Initially I got an error about spark.dynamicAllocation.enabled not being supported, so I logged into the master node and started it like this:

pyspark \
   --conf spark.ext.h2o.fail.on.unsupported.spark.param=false \
   --conf spark.dynamicAllocation.enabled=false

The interaction to start Sparkling Water looks like this; once the stage reaches around 30000 it starts to grind, and then after roughly 30 minutes a stream of errors appears:

>>> from pysparkling import *
>>> import h2o
>>> hc = H2OContext.getOrCreate(spark)
18/04/11 11:56:08 WARN org.apache.spark.h2o.backends.internal.InternalH2OBackend: Increasing 'spark.locality.wait' to value 30000
18/04/11 11:56:08 WARN org.apache.spark.h2o.backends.internal.InternalH2OBackend: Due to non-deterministic behavior of Spark broadcast-based joins
We recommend to disable them by
configuring `spark.sql.autoBroadcastJoinThreshold` variable to value `-1`:
sqlContext.sql("SET spark.sql.autoBroadcastJoinThreshold=-1")
[Stage 0:=================>                               (35346 + 11) / 100001]

I've tried various things, such as:

- Deploying a small (3-node) cluster.
- Deploying a 30-worker cluster.
- Running DataProc images 1.1 (Spark 2.0), 1.2 (Spark 2.2), and preview (Spark 2.2).

I also tried various Spark options:

spark.ext.h2o.fail.on.unsupported.spark.param=false
spark.ext.h2o.nthreads=2
spark.ext.h2o.cluster.size=2
spark.ext.h2o.default.cluster.size=2
spark.ext.h2o.hadoop.memory=50m
spark.ext.h2o.repl.enabled=false
spark.ext.h2o.flatfile=false
spark.dynamicAllocation.enabled=false
spark.executor.memory=700m
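For reference, the options above were passed as `--conf` flags on a single pyspark invocation, roughly like this (same values as listed; this is just the combined form, not a recommended configuration):

```shell
pyspark \
   --conf spark.ext.h2o.fail.on.unsupported.spark.param=false \
   --conf spark.ext.h2o.nthreads=2 \
   --conf spark.ext.h2o.cluster.size=2 \
   --conf spark.ext.h2o.default.cluster.size=2 \
   --conf spark.ext.h2o.hadoop.memory=50m \
   --conf spark.ext.h2o.repl.enabled=false \
   --conf spark.ext.h2o.flatfile=false \
   --conf spark.dynamicAllocation.enabled=false \
   --conf spark.executor.memory=700m
```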

Has anyone had any luck with H2O on Google DataProc?

The detailed errors are as follows:

18/04/11 12:08:40 WARN org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Container marked as failed: container_1523445048432_0005_01_000006 on host: cluster-dev-w-0.c.trust-networks.internal. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1523445048432_0005_01_000006
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:972)
    at org.apache.hadoop.util.Shell.run(Shell.java:869)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1170)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:236)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:305)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:84)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

Container exited with a non-zero exit code 1

18/04/11 12:08:48 ERROR org.apache.spark.network.server.TransportRequestHandler: Error sending result RpcResponse{requestId=5571077381947066483, body=NioManagedBuffer{buf=java.nio.HeapByteBuffer[pos=0 lim=81 cap=156]}} to /10.154.0.12:59387; closing connection
java.nio.channels.ClosedChannelException
    at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)

And then:

Exception in thread "task-result-getter-3" java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.lang.Class.newReflectionData(Class.java:2513)
    at java.lang.Class.reflectionData(Class.java:2503)
    at java.lang.Class.privateGetDeclaredConstructors(Class.java:2660)
    at java.lang.Class.getConstructor0(Class.java:3075)
    at java.lang.Class.newInstance(Class.java:412)
    at sun.reflect.MethodAccessorGenerator$1.run(MethodAccessorGenerator.java:403)
    at sun.reflect.MethodAccessorGenerator$1.run(MethodAccessorGenerator.java:394)
    at java.security.AccessController.doPrivileged(Native Method)
    at sun.reflect.MethodAccessorGenerator.generate(MethodAccessorGenerator.java:393)
    at sun.reflect.MethodAccessorGenerator.generateSerializationConstructor(MethodAccessorGenerator.java:112)

2 Answers


OK, I think I solved this one myself. Sparkling Water allocates resources based on some settings that are non-default in Google DataProc.

I edited /etc/spark/conf/spark-defaults.conf, changed spark.dynamicAllocation.enabled to false, and changed spark.ext.h2o.dummy.rdd.mul.factor to 1, which let the H2O cluster start in about 3 minutes with roughly a tenth of the resources.
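A sketch of the changed lines in /etc/spark/conf/spark-defaults.conf (property names as above; spark-defaults.conf takes whitespace-separated key/value pairs, one per line):

```
spark.dynamicAllocation.enabled    false
spark.ext.h2o.dummy.rdd.mul.factor 1
```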

If startup is still too slow for you, try reducing spark.executor.instances from 10000 to 5000 or 1000, although this setting affects the performance of everything else you run on the Spark cluster.

Answered 2018-04-11T16:03:35.860

You're getting java.lang.OutOfMemoryError. Give it more memory.

Answered 2018-04-11T14:41:41.327