
I'm running a standalone application with Apache Spark, and when I load all of my data into an RDD as text files I get the following error:

15/02/27 20:34:40 ERROR Utils: Uncaught exception in thread stdout writer for python
java.lang.OutOfMemoryError: Java heap space
   at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
   at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
   at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFSInputStream.<init>(GoogleHadoopFSInputStream.java:81)
   at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.open(GoogleHadoopFileSystemBase.java:764)
   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:427)
   at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:78)
   at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:51)
   at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:233)
   at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:210)
   at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:99)
   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
   at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
   at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
   at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
   at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply$mcV$sp(PythonRDD.scala:242)
   at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
   at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
   at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1460)
   at org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:203)
Exception in thread "stdout writer for python" java.lang.OutOfMemoryError: Java heap space
   at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
   at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
   at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFSInputStream.<init>(GoogleHadoopFSInputStream.java:81)
   at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.open(GoogleHadoopFileSystemBase.java:764)
   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:427)
   at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:78)
   at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:51)
   at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:233)
   at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:210)
   at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:99)
   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
   at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
   at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
   at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
   at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply$mcV$sp(PythonRDD.scala:242)
   at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
   at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
   at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1460)
   at org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:203)

I thought this had to do with the fact that I cache the entire RDD in memory with the cache function, but I didn't notice any change when I removed that call from my code, so I keep getting this error.

My RDD is built from several text files in a directory inside a Google Cloud Storage bucket.
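
For reference, here is a minimal PySpark sketch of the kind of setup described above (the application name and the gs:// path are placeholders, not the actual code from this question):

from pyspark import SparkConf, SparkContext

# Placeholder app name; the gs:// path below stands in for the real bucket/directory.
conf = SparkConf().setAppName("gcs-text-load")
sc = SparkContext(conf=conf)

# Load every text file in the GCS directory into a single RDD and cache it in memory.
lines = sc.textFile("gs://my-bucket/my-data-dir/")
lines.cache()

# Any action (e.g. count) triggers the actual read; with large inputs this is where the OOM appears.
print(lines.count())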

Could you help me resolve this error?


1 Answer


Spark requires a fair amount of configuration tuning for your cluster size, shape, and workload, and out of the box it will likely not work for realistically sized workloads.

When deploying with bdutil, the best way to get Spark is actually to use the officially supported bdutil Spark plugin, which is simply:

./bdutil -e extensions/spark/spark_env.sh deploy

or, equivalently, the shorthand:

./bdutil -e spark deploy

This ensures that the gcs-connector, memory settings, and so on are all configured correctly in Spark.

In theory, you can also use bdutil to install Spark directly on an existing cluster, although this path is less thoroughly tested:

# After you've already deployed the cluster with ./bdutil deploy:
./bdutil -e spark run_command_group install_spark -t all
./bdutil -e spark run_command_group spark_configure_startup -t all
./bdutil -e spark run_command_group start_spark -t master

This should be the same as if you had run ./bdutil -e spark deploy in the first place. If you originally deployed with ./bdutil -e my_custom_env.sh deploy, then all of the above commands actually need to start with ./bdutil -e my_custom_env.sh -e spark run_command_group.
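
For example, if your original deployment used my_custom_env.sh (whatever env file you originally deployed with), the three commands above become:

# Same install steps as above, but carrying your original custom env file:
./bdutil -e my_custom_env.sh -e spark run_command_group install_spark -t all
./bdutil -e my_custom_env.sh -e spark run_command_group spark_configure_startup -t all
./bdutil -e my_custom_env.sh -e spark run_command_group start_spark -t master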

In your case, the relevant Spark memory settings are most likely spark.executor.memory and/or SPARK_WORKER_MEMORY and/or SPARK_DAEMON_MEMORY.
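
As a rough sketch (the values are illustrative and should be tuned to your machines), spark.executor.memory is usually set in conf/spark-defaults.conf or per job, while SPARK_WORKER_MEMORY and SPARK_DAEMON_MEMORY are environment variables set in conf/spark-env.sh:

# conf/spark-defaults.conf -- illustrative value; tune to your workers' RAM
spark.executor.memory   4g

# conf/spark-env.sh -- also illustrative
export SPARK_WORKER_MEMORY=6g    # total memory a worker can hand out to executors on that node
export SPARK_DAEMON_MEMORY=1g    # memory for the master/worker daemons themselves

You can also override the executor memory for a single job, e.g. spark-submit --conf spark.executor.memory=4g your_app.py.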

EDIT: On a related note, we just released bdutil-1.2.0, which defaults to Spark 1.2.1 and also adds improved Spark driver memory settings and YARN support.

Answered 2015-02-28T01:14:32.143