What is the best option for reading, once a day, the latest messages from a Kafka topic in a Spark batch job running on EMR? I don't want to use Spark Streaming, because the cluster is not up 24/7. I found the kafka-utils option: https://mvnrepository.com/artifact/org.apache.spark/spark-streaming-kafka_2.11 but I see its last release was in 2016. Is it still the best option?
Thanks!
----------- EDIT -----------
Thanks for the replies. I tried this JAR:
group: 'org.apache.spark', name: 'spark-sql-kafka-0-10_2.12', version: '2.4.4'
running it on EMR with scalaVersion = '2.12.11' and sparkVersion = '2.4.4',
using the following code:
val df = spark
  .read
  .format("kafka")
  .option("kafka.bootstrap.servers", "kafka-utl")
  .option("subscribe", "mytopic")
  .option("startingOffsets", "earliest")
  .option("kafka.partition.assignment.strategy", "range") // added due to an error about a missing default value for this param
  .load()
df.show()
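For reference, what I'm aiming for is the bounded batch-read pattern from the Structured Streaming Kafka integration guide. This is only a sketch of that pattern, not the exact code I ran; for batch queries, `endingOffsets` defaults to `latest`, and both offset options also accept a per-partition JSON map:

```scala
// Sketch of a bounded batch read of a Kafka topic.
// startingOffsets / endingOffsets accept "earliest" / "latest"
// or a JSON map of per-partition offsets.
val batchDf = spark
  .read
  .format("kafka")
  .option("kafka.bootstrap.servers", "kafka-utl")
  .option("subscribe", "mytopic")
  .option("startingOffsets", "earliest") // or """{"mytopic":{"0":1234}}"""
  .option("endingOffsets", "latest")     // the batch query stops at these offsets
  .load()
  .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
```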
I want each batch run to read all the messages available in Kafka. The program fails with the following error:
21/08/18 16:29:50 WARN ConsumerConfig: The configuration auto.offset.reset = earliest was supplied but isn't a known config.
Exception in thread "Kafka Offset Reader" java.lang.NoSuchMethodError: org.apache.kafka.clients.consumer.KafkaConsumer.subscribe(Ljava/util/Collection;)V
at org.apache.spark.sql.kafka010.SubscribeStrategy.createConsumer(ConsumerStrategy.scala:63)
at org.apache.spark.sql.kafka010.KafkaOffsetReader.consumer(KafkaOffsetReader.scala:86)
at org.apache.spark.sql.kafka010.KafkaOffsetReader.$anonfun$fetchTopicPartitions$1(KafkaOffsetReader.scala:119)
at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
at scala.util.Success.$anonfun$map$1(Try.scala:255)
at scala.util.Success.map(Try.scala:213)
at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.spark.sql.kafka010.KafkaOffsetReader$$anon$1$$anon$2.run(KafkaOffsetReader.scala:59)
What am I doing wrong? Thanks.