I have 1.2 GB of ORC data on S3, and I am trying to do the following with it:

1) Cache the data on a Snappy cluster [SnappyData 0.9]

2) Run a group-by query against the cached dataset

3) Compare the performance with Spark 2.0.0

I am using a 64 GB / 8-core machine, and the Snappy cluster is configured as follows:

$ cat locators
localhost

$ cat leads
localhost -heap-size=4096m -spark.executor.cores=1

$ cat servers
localhost -heap-size=6144m
localhost -heap-size=6144m
localhost -heap-size=6144m
localhost -heap-size=6144m
localhost -heap-size=6144m
localhost -heap-size=6144m

Now, I have written a small Python script that caches the ORC data from S3 and runs a simple group-by query, as shown below:

from pyspark.sql.snappy import SnappyContext
from pyspark import SparkContext,SparkConf
conf = SparkConf().setAppName('snappy_sample')
sc = SparkContext(conf=conf)
sqlContext = SnappyContext(sc)

sqlContext.sql("CREATE EXTERNAL TABLE if not exists my_schema.my_table using orc options(path 's3a://access_key:secret_key@bucket_name/path')")
sqlContext.cacheTable("my_schema.my_table")

out = sqlContext.sql("select *  from my_schema.my_table where (WeekId = '1') order by sum_viewcount desc limit 25")
out.collect()

I execute the script above with the following command:

spark-submit --master local[*] snappy_sample.py

I get the following error:

17/10/04 02:50:32 WARN memory.MemoryStore: Not enough space to cache rdd_2_5 in memory! (computed 21.2 MB so far)
17/10/04 02:50:32 WARN memory.MemoryStore: Not enough space to cache rdd_2_0 in memory! (computed 21.2 MB so far)
17/10/04 02:50:32 WARN storage.BlockManager: Persisting block rdd_2_5 to disk instead.
17/10/04 02:50:32 WARN storage.BlockManager: Persisting block rdd_2_0 to disk instead.
17/10/04 02:50:47 WARN storage.BlockManager: Putting block rdd_2_2 failed due to an exception
17/10/04 02:50:47 WARN storage.BlockManager: Block rdd_2_2 could not be removed as it was not found on disk or in memory
17/10/04 02:50:47 ERROR executor.Executor: Exception in task 2.0 in stage 0.0 (TID 2)
java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
    at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
    at org.apache.spark.sql.execution.columnar.compression.CompressibleColumnBuilder$class.build(CompressibleColumnBuilder.scala:96)
    at org.apache.spark.sql.execution.columnar.NativeColumnBuilder.build(ColumnBuilder.scala:97)
    at org.apache.spark.sql.execution.columnar.InMemoryRelation$$anonfun$1$$anon$1$$anonfun$next$2.apply(InMemoryRelation.scala:135)
    at org.apache.spark.sql.execution.columnar.InMemoryRelation$$anonfun$1$$anon$1$$anonfun$next$2.apply(InMemoryRelation.scala:134)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186)
    at org.apache.spark.sql.execution.columnar.InMemoryRelation$$anonfun$1$$anon$1.next(InMemoryRelation.scala:134)
    at org.apache.spark.sql.execution.columnar.InMemoryRelation$$anonfun$1$$anon$1.next(InMemoryRelation.scala:98)
    at org.apache.spark.storage.memory.MemoryStore.putIteratorAsValues(MemoryStore.scala:232)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:935)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:926)
    at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:866)
    at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:926)
    at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:670)
    at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:331)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:282)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:320)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:284)
    at org.apache.spark.sql.execution.WholeStageCodegenRDD.compute(WholeStageCodegenExec.scala:496)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:320)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:284)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:320)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:284)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
17/10/04 02:50:47 ERROR util.SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker-2,5,main]
java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
    at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
    at org.apache.spark.sql.execution.columnar.compression.CompressibleColumnBuilder$class.build(CompressibleColumnBuilder.scala:96)
    at org.apache.spark.sql.execution.columnar.NativeColumnBuilder.build(ColumnBuilder.scala:97)
    at org.apache.spark.sql.execution.columnar.InMemoryRelation$$anonfun$1$$anon$1$$anonfun$next$2.apply(InMemoryRelation.scala:135)
    at org.apache.spark.sql.execution.columnar.InMemoryRelation$$anonfun$1$$anon$1$$anonfun$next$2.apply(InMemoryRelation.scala:134)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186)
    at org.apache.spark.sql.execution.columnar.InMemoryRelation$$anonfun$1$$anon$1.next(InMemoryRelation.scala:134)
    at org.apache.spark.sql.execution.columnar.InMemoryRelation$$anonfun$1$$anon$1.next(InMemoryRelation.scala:98)
    at org.apache.spark.storage.memory.MemoryStore.putIteratorAsValues(MemoryStore.scala:232)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:935)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:926)
    at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:866)
    at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:926)
    at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:670)
    at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:331)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:282)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:320)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:284)
    at org.apache.spark.sql.execution.WholeStageCodegenRDD.compute(WholeStageCodegenExec.scala:496)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:320)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:284)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:320)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:284)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
17/10/04 02:50:48 INFO snappystore: VM is exiting - shutting down distributed system

Apart from the error above, how do I check whether the data has actually been cached in the Snappy cluster?

1 Answer

1) First of all, it does not look like your Python script is connecting to the SnappyData cluster at all; rather, it is running in local mode. In that case, the JVM launched by the Python script failing with an OOM is expected. To connect to the SnappyData cluster from Python in "smart connector" mode:

spark-submit --master local[*] --conf snappydata.connection=locator:1527 snappy_sample.py

The host:port above is the locator host and the port on which its thrift server is running (1527 by default).
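With the cluster configuration shown in the question (the locators file contains only localhost) and the default client port, the command would presumably be:

spark-submit --master local[*] --conf snappydata.connection=localhost:1527 snappy_sample.py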

2) Secondly, your example will use Spark caching. If you want to use SnappyData, load the data into a column table:

from pyspark.sql.snappy import SnappySession
from pyspark import SparkContext,SparkConf
conf = SparkConf().setAppName('snappy_sample')
sc = SparkContext(conf=conf)
session = SnappySession(sc)

session.sql("CREATE EXTERNAL TABLE if not exists my_table using orc options(path 's3a://access_key:secret_key@bucket_name/path')")
session.table("my_table").write.format("column").saveAsTable("my_column_table")

out = session.sql("select *  from my_column_table where (WeekId = '1') order by sum_viewcount desc limit 25")
out.collect()

Also note the use of "SnappySession" rather than the contexts, which have been deprecated since Spark 2.0.x. When comparing against Spark caching, you can use "cacheTable" in a separate script and run it against upstream Spark, as sketched below. Note that "cacheTable" caches lazily, meaning the first query does the actual caching, so the first query run with Spark caching will be quite slow, but subsequent queries should be faster.
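A minimal sketch of such a separate comparison script (not from the original answer; it assumes a plain Spark 2.x SparkSession, reuses the path and column names from the question, and still needs the S3 jars on the classpath) could look like this:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('spark_cache_sample').getOrCreate()

# register the ORC data as a temporary view and mark it for caching
df = spark.read.orc("s3a://access_key:secret_key@bucket_name/path")
df.createOrReplaceTempView("my_table")
spark.catalog.cacheTable("my_table")

# caching is lazy, so this first query does the actual caching and will be slow
out = spark.sql("select * from my_table where (WeekId = '1') order by sum_viewcount desc limit 25")
out.collect()

# True once the view is marked as cached; the Storage tab of the Spark UI
# shows how much of it has actually been materialized in memory or on disk
print(spark.catalog.isCached("my_table"))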

3) Update to the 1.0 release, which has many improvements, instead of using 0.9. You will also need to add hadoop-aws-2.7.3 and aws-java-sdk-1.7.4 to the "-classpath" entries in conf/leads and conf/servers (or drop them into the product's jars directory) before launching the cluster.
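For example (the jar locations are only illustrative; point -classpath at wherever those jars actually live on your machine), each entry in conf/servers would become something like:

localhost -heap-size=6144m -classpath=/path/to/hadoop-aws-2.7.3.jar:/path/to/aws-java-sdk-1.7.4.jar

and similarly for the entry in conf/leads.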

Answered 2017-10-04T11:44:52.367