
I’m trying to train a GBM on an EMR cluster with 60 c4.8xlarge nodes using Sparkling Water. The process runs successfully up to a certain data size (number of training examples); beyond that it freezes in the collect stage in SpreadRDDBuilder.scala and dies after an hour. While this is happening, memory usage keeps climbing toward capacity while the Spark stages make no progress (see below) and there is very little CPU usage or network traffic. I’ve tried increasing the executor memory, the driver memory, and num-executors, but I see the exact same behavior under every configuration.
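For context, the driver does nothing unusual. A minimal sketch of the flow, assuming the standard Sparkling Water API (the S3 path, column name, and GBM parameters below are placeholders, not my actual code); if I read the stages right, the SpreadRDDBuilder collects belong to the H2OContext setup step:

import org.apache.spark.sql.SparkSession
import org.apache.spark.h2o._
import hex.tree.gbm.GBM
import hex.tree.gbm.GBMModel.GBMParameters

val spark = SparkSession.builder().appName("gbm-training").getOrCreate()

// Read the training data (path is a placeholder)
val trainingDF = spark.read.parquet("s3a://bucket/training-data")

// H2OContext setup spreads H2O across the executors; the hang shows up
// around this point in my runs
val hc = H2OContext.getOrCreate(spark.sparkContext)

// Convert the Spark DataFrame into an H2OFrame
val train: H2OFrame = hc.asH2OFrame(trainingDF)

// Train the GBM (parameters are illustrative)
val params = new GBMParameters()
params._train = train._key
params._response_column = "label"
params._ntrees = 100
val model = new GBM(params).trainModel().get()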

Thanks for looking at this. It’s my first time posting here, so please let me know if you need any more information.

Parameters

spark-submit --num-executors 355 --driver-class-path h2o-genmodel-3.10.1.2.jar:/usr/lib/hadoop-lzo/lib/*:/usr/lib/hadoop/hadoop-aws.jar:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/security/conf:/usr/share/aws/emr/security/lib/* --driver-memory 20G --executor-memory 10G --conf spark.sql.shuffle.partitions=10000 --conf spark.serializer=org.apache.spark.serializer.KryoSerializer --driver-java-options -Dlog4j.configuration=file:${PWD}/log4j.xml --conf spark.ext.h2o.repl.enabled=false --conf spark.dynamicAllocation.enabled=false --conf spark.locality.wait=3000 --class com.X.X.X.Main X.jar -i s3a://x

Other parameters that I’ve tried with no success:

 --conf spark.ext.h2o.topology.change.listener.enabled=false
 --conf spark.scheduler.minRegisteredResourcesRatio=1
 --conf spark.task.maxFailures=1
 --conf spark.yarn.max.executor.failures=1

Spark UI

collect at SpreadRDDBuilder.scala:105 118/3551
collect at SpreadRDDBuilder.scala:105 109/3551
collect at SpreadRDDBuilder.scala:105 156/3551
collect at SpreadRDDBuilder.scala:105 151/3551
collect at SpreadRDDBuilder.scala:105 641/3551

Driver logs

17/02/13 22:43:39 WARN LiveListenerBus: Dropped 49459 SparkListenerEvents since Mon Feb 13 22:42:39 UTC 2017
 [Stage 9:(641 + 1043) / 3551][Stage 10:(151 + 236) / 3551][Stage 11:(156 + 195) / 3551]

stderr for YARN containers

     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
     at java.lang.Thread.run(Thread.java:745)
 Caused by: java.util.concurrent.TimeoutException: Futures timed out after [10 seconds]
     at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
     at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
     at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190)
     at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
     at scala.concurrent.Await$.result(package.scala:190)
     at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:81)
     ... 14 more
 17/02/13 22:56:34 WARN Executor: Issue communicating with driver in heartbeater
 org.apache.spark.SparkException: Error sending message [message = Heartbeat(222,[Lscala.Tuple2;@c7ac58,BlockManagerId(222, ip-172-31-25-18.ec2.internal, 36644))]
     at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:119)
     at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$reportHeartBeat(Executor.scala:518)
     at org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply$mcV$sp(Executor.scala:547)
     at org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply(Executor.scala:547)
     at org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply(Executor.scala:547)
     at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1953)
     at org.apache.spark.executor.Executor$$anon$1.run(Executor.scala:547)
     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
     at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
     at java.lang.Thread.run(Thread.java:745)
 Caused by: org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [10 seconds]. This timeout is controlled by spark.executor.heartbeatInterval
     at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:48)
     at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:63)
     at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
     at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
     at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:83)
     at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:102)
     ... 13 more
 Caused by: java.util.concurrent.TimeoutException: Futures timed out after [10 seconds]
     at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
     at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
     at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190)
     at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
     at scala.concurrent.Await$.result(package.scala:190)
     at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:81)
     ... 14 more
 17/02/13 22:56:41 WARN TransportResponseHandler: Ignoring response for RPC 8189382742475673817 from /172.31.27.164:37563 (81 bytes) since it is not outstanding
 17/02/13 22:56:41 WARN TransportResponseHandler: Ignoring response for RPC 7998046565668775240 from /172.31.27.164:37563 (81 bytes) since it is not outstanding
 17/02/13 22:56:41 WARN TransportResponseHandler: Ignoring response for RPC 8944638230411142855 from /172.31.27.164:37563 (81 bytes) since it is not outstanding
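The RpcTimeoutException above names spark.executor.heartbeatInterval as the setting behind the 10-second timeout. To rule the heartbeat timeout itself out, it could be raised on the submit command (the value below is an arbitrary example), though the hang looks like the underlying problem rather than the timeout:

 --conf spark.executor.heartbeatInterval=60s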

1 Answer


The problem was the conversion of string columns with very high cardinality (hundreds of millions of unique values) to enums. Dropping these columns from the data frame solved the problem. For more details, see: https://community.h2o.ai/questions/1747/gbm-training-with-sparkling-water-on-emr-failing-w.html
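A hedged sketch of that workaround: count the distinct values of the string columns and drop the extreme ones before handing the DataFrame to Sparkling Water. The helper name and threshold are made up for illustration.

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{col, countDistinct}
import org.apache.spark.sql.types.StringType

// Drop string columns whose distinct-value count exceeds maxUnique.
// countDistinct is exact and can be slow on huge data; an approximate
// count would also do for a cut-off like this.
def dropHighCardinalityStrings(df: DataFrame, maxUnique: Long = 1000000L): DataFrame = {
  val stringCols = df.schema.fields.collect { case f if f.dataType == StringType => f.name }
  if (stringCols.isEmpty) {
    df
  } else {
    val exprs = stringCols.map(c => countDistinct(col(c)).alias(c))
    val counts = df.agg(exprs.head, exprs.tail: _*).first()
    val toDrop = stringCols.filter(c => counts.getAs[Long](c) > maxUnique)
    df.drop(toDrop: _*)
  }
}

// Usage: val cleaned = dropHighCardinalityStrings(trainingDF)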

Answered 2017-02-18T23:47:09.343