I am trying to test Apache Beam code that uses KafkaIO with the Spark Runner. The code works fine with the Direct Runner, but as soon as I add the following line, it throws an error:
options.setRunner(SparkRunner.class);
Error:
ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 2.0 (TID 0)
java.lang.StackOverflowError
at java.base/java.io.ObjectInputStream$BlockDataInputStream.readByte(ObjectInputStream.java:3307)
at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2135)
at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1668)
at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:482)
at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:440)
at scala.collection.immutable.List$SerializationProxy.readObject(List.scala:488)
at jdk.internal.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
Versions I am using:
<beam.version>2.33.0</beam.version>
<spark.version>3.1.2</spark.version>
<kafka.version>3.0.0</kafka.version>
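For reference, here is a minimal sketch of the kind of pipeline I am running; the bootstrap server address, topic name, and class name below are placeholders rather than my actual values:

```java
import org.apache.beam.runners.spark.SparkRunner;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.kafka.KafkaIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.kafka.common.serialization.StringDeserializer;

public class KafkaSparkTest {
  public static void main(String[] args) {
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
    // This is the line that triggers the StackOverflowError;
    // without it the pipeline runs fine on the Direct Runner.
    options.setRunner(SparkRunner.class);

    Pipeline p = Pipeline.create(options);

    // "localhost:9092" and "my-topic" are placeholder values.
    p.apply(KafkaIO.<String, String>read()
        .withBootstrapServers("localhost:9092")
        .withTopic("my-topic")
        .withKeyDeserializer(StringDeserializer.class)
        .withValueDeserializer(StringDeserializer.class)
        .withoutMetadata()); // yields PCollection<KV<String, String>>

    p.run().waitUntilFinish();
  }
}
```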