
With SparkR, as a PoC I'm trying to collect an RDD that I created from text files containing around 4M lines.

My Spark cluster runs on Google Cloud, was deployed with bdutil, and consists of 1 master and 2 workers with 15 GB of RAM and 4 cores each. My HDFS storage is backed by Google Cloud Storage via gcs-connector 1.4.0. SparkR is installed on each machine, and basic tests work on small files.

Here is the script I use:

Sys.setenv("SPARK_MEM" = "1g")
sc <- sparkR.init("spark://xxxx:7077", sparkEnvir=list(spark.executor.memory="1g"))
lines <- textFile(sc, "gs://xxxx/dir/")
test <- collect(lines)

The first time I run this, it seems to work fine: all the tasks run successfully and Spark's UI says the job completed, but I never get the R prompt back:

15/06/04 13:36:59 WARN SparkConf: Setting 'spark.executor.extraClassPath' to ':/home/hadoop/hadoop-install/lib/gcs-connector-1.4.0-hadoop1.jar' as a work-around.
15/06/04 13:36:59 WARN SparkConf: Setting 'spark.driver.extraClassPath' to ':/home/hadoop/hadoop-install/lib/gcs-connector-1.4.0-hadoop1.jar' as a work-around.
15/06/04 13:36:59 INFO Slf4jLogger: Slf4jLogger started
15/06/04 13:37:00 INFO Server: jetty-8.y.z-SNAPSHOT
15/06/04 13:37:00 INFO AbstractConnector: Started SocketConnector@0.0.0.0:52439
15/06/04 13:37:00 INFO Server: jetty-8.y.z-SNAPSHOT
15/06/04 13:37:00 INFO AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040

15/06/04 13:37:54 INFO GoogleHadoopFileSystemBase: GHFS version: 1.4.0-hadoop1
15/06/04 13:37:55 WARN LoadSnappy: Snappy native library is available
15/06/04 13:37:55 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/06/04 13:37:55 WARN LoadSnappy: Snappy native library not loaded
15/06/04 13:37:55 INFO FileInputFormat: Total input paths to process : 68
[Stage 0:=======================================================>                                                                                     (27 + 10) / 68]

Then, after a CTRL-C to get the R prompt back, I try to run the collect method again; here is the result:

[Stage 1:==========================================================>                                                                                   (28 + 9) / 68]15/06/04 13:42:08 ERROR ActorSystemImpl: Uncaught fatal error from thread [sparkDriver-akka.remote.default-remote-dispatcher-5] shutting down ActorSystem [sparkDriver]
java.lang.OutOfMemoryError: Java heap space
        at org.spark_project.protobuf.ByteString.toByteArray(ByteString.java:515)
        at akka.remote.serialization.MessageContainerSerializer.fromBinary(MessageContainerSerializer.scala:64)
        at akka.serialization.Serialization$$anonfun$deserialize$1.apply(Serialization.scala:104)
        at scala.util.Try$.apply(Try.scala:161)
        at akka.serialization.Serialization.deserialize(Serialization.scala:98)
        at akka.remote.MessageSerializer$.deserialize(MessageSerializer.scala:23)
        at akka.remote.DefaultMessageDispatcher.payload$lzycompute$1(Endpoint.scala:58)
        at akka.remote.DefaultMessageDispatcher.payload$1(Endpoint.scala:58)
        at akka.remote.DefaultMessageDispatcher.dispatch(Endpoint.scala:76)
        at akka.remote.EndpointReader$$anonfun$receive$2.applyOrElse(Endpoint.scala:937)
        at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
        at akka.remote.EndpointActor.aroundReceive(Endpoint.scala:415)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
        at akka.actor.ActorCell.invoke(ActorCell.scala:487)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
        at akka.dispatch.Mailbox.run(Mailbox.scala:220)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

I understand the exception message, but I don't understand why I'm getting it only the second time. Also, why does the collect never return even after completing in Spark?

I've Googled every piece of information I have, but had no luck finding a solution. Any help or hint would be greatly appreciated!

Thanks


1 Answer


This appears to be a simple combination of Java's inefficient in-memory object representation and some apparently long-lived object references that keep certain collections from being garbage-collected in time for the new collect() call to overwrite the old one in place.

I tried out a few options, and for a sample 256MB file containing ~4M lines I did reproduce your behavior: the first collect is fine, but the second one hits the OOM when using SPARK_MEM=1g. I then set SPARK_MEM=4g instead, and after that I could ctrl+c and re-run test <- collect(lines) as many times as I wanted.
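
In other words, the only change from your original script is the memory setting passed in before sparkR.init; a minimal sketch (the master URL and GCS path are just the placeholders from your post):

Sys.setenv("SPARK_MEM" = "4g")   # 4g instead of 1g; this is the JVM heap that was hitting OOM
sc <- sparkR.init("spark://xxxx:7077", sparkEnvir=list(spark.executor.memory="1g"))
lines <- textFile(sc, "gs://xxxx/dir/")
test <- collect(lines)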

For one thing, even if the references weren't leaking, note that after your first run of test <- collect(lines), the variable test is holding that gigantic array of lines; the second time you call it, collect(lines) executes before it is finally assigned to the test variable, so under any straightforward instruction ordering there is no way to garbage-collect the old contents of test first. This means the second run makes the SparkRBackend process hold two copies of the entire collection at the same time, leading to the OOM you saw.
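
To make that ordering concrete, a partial mitigation (sketch only) is to drop the old binding explicitly before collecting again; note, though, that as the heap dumps below show, this by itself didn't release the backend's copy for me:

rm(test)                 # drop the R-side reference from the first run
gc()                     # ask R to garbage-collect it before the next attempt
test <- collect(lines)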

To diagnose this, I started SparkR on the master and first ran

dhuo@dhuo-sparkr-m:~$ jps | grep SparkRBackend
8709 SparkRBackend

I also checked top, and it was using about 22MB of memory. I then grabbed a heap profile with jmap:

jmap -heap:format=b 8709
mv heap.bin heap0.bin
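
(If your JDK doesn't accept the -heap:format=b form, the standard dump flag should produce an equivalent snapshot:)

jmap -dump:format=b,file=heap0.bin 8709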

Then I ran one round of test <- collect(lines), after which top showed it using about 1.7g of RES memory, and I grabbed another heap dump. Finally, I also tried test <- {} to drop the reference and allow garbage collection; after doing that, printing test showed it was empty, but when I grabbed yet another heap dump I noticed RES was still at 1.7g. I used jhat heap0.bin to analyze the original heap dump and got:

Heap Histogram

All Classes (excluding platform)

Class                                               Instance Count    Total Size
class [B                                                     25126      14174163
class [C                                                     19183       1576884
class [<other>                                               11841       1067424
class [Lscala.concurrent.forkjoin.ForkJoinTask;                 16       1048832
class [I                                                      1524        769384
...

After running the collect, I had:

Heap Histogram

All Classes (excluding platform)

Class                                               Instance Count    Total Size
class [C                                                   2784858     579458804
class [B                                                     27768      70519801
class java.lang.String                                     2782732      44523712
class [Ljava.lang.Object;                                     2567      22380840
class [I                                                      1538       8460152
class [Lscala.concurrent.forkjoin.ForkJoinTask;                 27       1769904

Even after I cleared out test, it stayed roughly the same. That shows us 2784858 instances of char[] with a total size of 579MB, and 2782732 instances of String, presumably holding those char[]s above them. I followed the reference graph all the way up and got something like

char[] -> String -> String[] -> ... -> class scala.collection.mutable.DefaultEntry -> class [Lscala.collection.mutable.HashEntry; -> class scala.collection.mutable.HashMap -> class edu.berkeley.cs.amplab.sparkr.JVMObjectTracker$ -> java.util.Vector@0x785b48cd8 (36 bytes) -> sun.misc.Launcher$AppClassLoader@0x7855c31a8 (138 bytes)

And that AppClassLoader then had thousands of inbound references. So somewhere along that chain something should have been dropping its reference but failed to do so, which is what keeps the entire collected array sitting in memory while we try to fetch a second copy of it.
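
For reference, the way I walked that graph was jhat's web UI; a rough sketch, where heap1.bin is just my name for the post-collect dump and 7000 is jhat's default port:

jhat -port 7000 heap1.bin
# browse http://localhost:7000, open a char[] instance from the histogram,
# then keep following the inbound "references to this object" links upward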

Finally, to answer your question about the hang after the collect completes, it appears to be related to the data not fitting in the R process's memory; here is a thread related to that issue: https://www.mail-archive.com/user@spark.apache.org/msg29155.html

I confirmed that with a smaller file containing only a handful of lines, running collect does indeed not hang.
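
If the point of the PoC is just to inspect the data rather than to materialize all ~4M lines in R, a safer pattern is to count on the cluster and pull back only a sample; this is a sketch assuming your SparkR build exposes count() and take() on RDDs, as the AMPLab package does:

count(lines)                          # number of records, computed on the workers
head_of_lines <- take(lines, 100)     # bring back only the first 100 lines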

answered 2015-06-06T01:37:54.857