I'd like to know whether it is possible to write SimpleFeatures to Cassandra from within a Spark context. I'm trying to map my data's SimpleFeatures onto a Spark RDD, but I'm running into problems. The createFeature() function called below works fine in a standalone unit test, and I have another unit test that calls it and successfully writes to Cassandra through the GeoMesa API using the SimpleFeature it produces:

import org.locationtech.geomesa.spark.GeoMesaSparkKryoRegistrator

. . .

private val sparkConf = new SparkConf(true)
  .set("spark.cassandra.connection.host", "localhost")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryo.registrator", classOf[GeoMesaSparkKryoRegistrator].getName)
  .setAppName(appName)
  .setMaster(master)

. . .                                            

val rowsRDD = processedRDD.map(r => {
  ...
  println("** NAME VALUE MAP **")
  for ((k, v) <- featureNamesValues) printf("key: %s, value: %s\n", k, v)
  val feature = MyGeoMesaManager.createFeature(featureTypeConfig.asJava, featureNamesValues.asJava)
  feature
})

rowsRDD.print()

However, the fact that I'm now calling the function inside the map() of an RDD in a Spark context causes a serialization error on SimpleFeatureImpl, because of Spark partitioning:

18/02/12 08:00:46 ERROR Executor: Exception in task 0.0 in stage 19.0 (TID 9)
java.io.NotSerializableException: org.geotools.feature.simple.SimpleFeatureImpl
Serialization stack:
- object not serializable (class: org.geotools.feature.simple.SimpleFeatureImpl, value: SimpleFeatureImpl:myfeature=[SimpleFeatureImpl.Attribute: . . ., SimpleFeatureImpl.Attribute: . . .])
- element of array (index: 0)
- array (class [Lorg.opengis.feature.simple.SimpleFeature;, size 4)

OK, so then I added the Kryo dependencies mentioned on the GeoMesa Spark Core page to mitigate this, but now when the map function executes I get a NoClassDefFoundError on the GeoMesaSparkKryoRegistrator class, even though, as you can see, the geomesa-spark-core dependency is present on the classpath and I can import the class:

18/02/12 08:08:37 ERROR Executor: Exception in task 0.0 in stage 26.0 (TID 11)
java.lang.NoClassDefFoundError: Could not initialize class org.locationtech.geomesa.spark.GeoMesaSparkKryoRegistrator$
at org.locationtech.geomesa.spark.GeoMesaSparkKryoRegistrator$$anon$1.write(GeoMesaSparkKryoRegistrator.scala:36)
at org.locationtech.geomesa.spark.GeoMesaSparkKryoRegistrator$$anon$1.write(GeoMesaSparkKryoRegistrator.scala:32)
at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:568)
at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:318)
at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:293)
at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:568)
at org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:315)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:383)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

Finally, I tried adding the com.esotericsoftware.kryo dependency to the classpath as well, but I got the same error.
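For reference, the geomesa-spark-core entry I'm referring to looks roughly like this (Scala 2.11 artifact; version omitted here):

<!-- geomesa-spark-core is the module that provides GeoMesaSparkKryoRegistrator -->
<dependency>
  <groupId>org.locationtech.geomesa</groupId>
  <artifactId>geomesa-spark-core_2.11</artifactId>
</dependency>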

Is what I'm trying to do possible with GeoMesa, Spark, and Cassandra? It feels like I'm on the 1-yard line but just can't punch it in.

1 Answer

The simplest way to set up the classpath is to use Maven with the Maven Shade plugin. Add dependencies on the geomesa-cassandra-datastore and geomesa-spark-geotools modules:

<dependency>
  <groupId>org.locationtech.geomesa</groupId>
  <artifactId>geomesa-cassandra-datastore_2.11</artifactId>
</dependency>
<dependency>
  <groupId>org.locationtech.geomesa</groupId>
  <artifactId>geomesa-spark-geotools_2.11</artifactId>
</dependency>

Then add a Maven Shade plugin, similar to the one used here for Accumulo. Submit your Spark job with the shaded jar, and the classpath should contain everything you need.
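If that link isn't handy, a minimal sketch of such a shade configuration might look like the following (the plugin version and filters are illustrative; the ServicesResourceTransformer is there because GeoTools-based libraries register their factories via META-INF/services files, which need to be merged into the shaded jar):

<!-- Sketch of a minimal maven-shade-plugin setup; adjust version and filters to your build -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.1.0</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <transformers>
          <!-- merge META-INF/services entries so GeoTools/GeoMesa factory lookups work from the shaded jar -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
        </transformers>
        <filters>
          <filter>
            <!-- strip signature files that would otherwise invalidate the shaded jar -->
            <artifact>*:*</artifact>
            <excludes>
              <exclude>META-INF/*.SF</exclude>
              <exclude>META-INF/*.DSA</exclude>
              <exclude>META-INF/*.RSA</exclude>
            </excludes>
          </filter>
        </filters>
      </configuration>
    </execution>
  </executions>
</plugin>

Running mvn package should then produce a single shaded jar under target/ that you can hand to spark-submit.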

Answered 2018-02-12T14:12:55.903