
I already outlined Spark's problem of not providing a schema for a UDF in spark custom kryo encoder, but have now created a minimal example: https://gist.github.com/geoHeil/dc9cfb8eca5c06fca01fc9fc03431b2f

class SomeOtherClass(foo: Int)
case class FooWithSomeOtherClass(a: Int, b: String, bar: SomeOtherClass)
case class FooWithoutOtherClass(a: Int, b: String, bar: Int)
case class Foo(a: Int)
implicit val someOtherClassEncoder: Encoder[SomeOtherClass] = Encoders.kryo[SomeOtherClass]
val df2 = Seq(FooWithSomeOtherClass(1, "one", new SomeOtherClass(4))).toDS
val df3 = Seq(FooWithoutOtherClass(1, "one", 1), FooWithoutOtherClass(2, "two", 2)).toDS
val df4 = df3.map(d => FooWithSomeOtherClass(d.a, d.b, new SomeOtherClass(d.bar)))

Here, even the createDataSet statement already fails with:

java.lang.UnsupportedOperationException: No Encoder found for SomeOtherClass
- field (class: "SomeOtherClass", name: "bar")
- root class: "FooWithSomeOtherClass"

Why is the encoder not in scope, or at least not in the right scope?

Also, trying to specify an explicit encoder like:

df3.map(d => {FooWithSomeOtherClass(d.a, d.b, new SomeOtherClass(d.bar))}, (Int, String, Encoders.kryo[SomeOtherClass]))

does not work.
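For reference, a sketch of how an explicit encoder can actually be passed (assuming the case classes and spark-shell session from above): Dataset.map takes the Encoder for the result type in a separate implicit parameter list, not as part of a tuple, so it can be supplied explicitly like this:

```scala
// Sketch only, assuming df3 and the case classes defined above in a
// spark-shell session. Dataset.map[U](func)(implicit encoder: Encoder[U])
// takes the result encoder as a second argument list, so it can be
// passed explicitly instead of bundling it into a tuple.
import org.apache.spark.sql.Encoders

val df4 = df3.map(
  d => FooWithSomeOtherClass(d.a, d.b, new SomeOtherClass(d.bar))
)(Encoders.kryo[FooWithSomeOtherClass])
```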


1 Answer


This happens because you are supposed to use the Kryo encoder through the whole serialization stack, meaning that your top-level object should have a Kryo encoder. The following runs successfully on a local Spark shell (the change you are interested in is on the first line):

  implicit val topLevelObjectEncoder: Encoder[FooWithSomeOtherClass] = Encoders.kryo[FooWithSomeOtherClass]

  val df1 = Seq(Foo(1), Foo(2)).toDF

  val df2 = Seq(FooWithSomeOtherClass(1, "one", new SomeOtherClass(4))).toDS

  val df3 = Seq(FooWithoutOtherClass(1, "one", 1), FooWithoutOtherClass(2, "two", 2)).toDS
  df3.printSchema
  df3.show

  val df4 = df3.map(d => FooWithSomeOtherClass(d.a, d.b, new SomeOtherClass(d.bar)))
  df4.printSchema
  df4.show
  df4.collect
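Note that with Encoders.kryo the resulting Dataset's schema collapses to a single binary column named value, so column-level operations on the fields are lost. As an alternative sketch, assuming the class definition can be changed: declaring SomeOtherClass as a case class lets Spark derive its default product encoder and keep the full nested schema.

```scala
// Alternative sketch (only applicable if SomeOtherClass can be changed):
// a case class gets Spark's derived product encoder, preserving a nested
// struct schema instead of one opaque Kryo-serialized binary column.
case class SomeOtherClass(foo: Int)
case class FooWithSomeOtherClass(a: Int, b: String, bar: SomeOtherClass)

val ds = Seq(FooWithSomeOtherClass(1, "one", SomeOtherClass(4))).toDS
ds.printSchema  // a: int, b: string, bar: struct<foo: int>
```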
Answered 2017-07-10T15:15:38.223