We are facing the same problem as described in this thread here: Samza is requesting an offset for a Kafka partition that is too old (i.e. the Kafka log has already moved ahead of it). We have set the property consumer.auto.offset.reset to smallest, and therefore expected Samza to reset its checkpoint to the earliest available partition offset in this situation. But that is not happening, and we keep getting exceptions of the following form:
INFO [2018-08-21 19:26:20,924] [U:669,F:454,T:1,123,M:2,658]
kafka.producer.SyncProducer:[Logging_class:info:66] - [main] -
Disconnecting from vrni-platform-release:9092
INFO [2018-08-21 19:26:20,924] [U:669,F:454,T:1,123,M:2,658]
system.kafka.GetOffset:[Logging_class:info:63] - [main] - Validating offset
56443499 for topic and partition Topic3-0
WARN [2018-08-21 19:26:20,925] [U:669,F:454,T:1,123,M:2,658]
system.kafka.KafkaSystemConsumer:[Logging_class:warn:74] - [main] - While
refreshing brokers for Topic3-0:
org.apache.kafka.common.errors.OffsetOutOfRangeException: The requested
offset is not within the range of offsets maintained by the server..
Retrying
Version details:
- Samza: 2.11-0.14.1
- Kafka client: 1.1.0
- Kafka server: 1.1.0, Scala 2.11
Browsing the code, it looks like GetOffset::isValidOffset should catch the OffsetOutOfRangeException and convert it into a false return value. But that does not seem to be happening. Could this be a package mismatch on the Exception? The GetOffset class catches the exception brought in via import kafka.common.OffsetOutOfRangeException, but judging from the log, the exception actually being thrown comes from a different package (org.apache.kafka.common.errors.OffsetOutOfRangeException). Could that be the cause?
def isValidOffset(consumer: DefaultFetchSimpleConsumer, topicAndPartition: TopicAndPartition, offset: String) = {
  info("Validating offset %s for topic and partition %s" format (offset, topicAndPartition))
  try {
    val messages = consumer.defaultFetch((topicAndPartition, offset.toLong))
    if (messages.hasError) {
      KafkaUtil.maybeThrowException(messages.error(topicAndPartition.topic, topicAndPartition.partition).exception())
    }
    info("Able to successfully read from offset %s for topic and partition %s. Using it to instantiate consumer." format (offset, topicAndPartition))
    true
  } catch {
    case e: OffsetOutOfRangeException => false
  }
}
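
To make the suspected package mismatch concrete, here is a minimal, self-contained sketch (using stand-in exception classes, not the real Kafka ones): a catch clause written against one OffsetOutOfRangeException does not match an exception that merely shares the simple name but lives in a different package, so the latter escapes the method instead of being translated into false.

object PackageMismatchSketch {
  // Stand-in exceptions that only share the simple name OffsetOutOfRangeException.
  object oldpkg { class OffsetOutOfRangeException(msg: String) extends RuntimeException(msg) }
  object newpkg { class OffsetOutOfRangeException(msg: String) extends RuntimeException(msg) }

  // Simulates the fetch path raising the "new" flavour of the exception.
  def simulatedFetch(): Unit =
    throw new newpkg.OffsetOutOfRangeException("The requested offset is not within the range of offsets maintained by the server.")

  // Mirrors the shape of isValidOffset: catch one exception type, turn it into false.
  def isValidOffset(): Boolean =
    try {
      simulatedFetch()
      true
    } catch {
      // Matches only oldpkg's type, so newpkg's exception propagates out of this method.
      case _: oldpkg.OffsetOutOfRangeException => false
    }

  def main(args: Array[String]): Unit =
    try {
      isValidOffset()
    } catch {
      case e: newpkg.OffsetOutOfRangeException =>
        println("Propagated uncaught: " + e.getClass.getName)
    }
}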
Also, the BrokerProxy class, the caller of GetOffset, should print the "It appears that..." log line when it gets back a false value, but it is not logging that line (which indicates that some exception raised inside the GetOffset method is not being caught and is propagating out):
def addTopicPartition(tp: TopicAndPartition, nextOffset: Option[String]) = {
  debug("Adding new topic and partition %s to queue for %s" format (tp, host))
  if (nextOffsets.asJava.containsKey(tp)) {
    toss("Already consuming TopicPartition %s" format tp)
  }
  val offset = if (nextOffset.isDefined && offsetGetter.isValidOffset(simpleConsumer, tp, nextOffset.get)) {
    nextOffset
      .get
      .toLong
  } else {
    warn("It appears that we received an invalid or empty offset %s for %s. Attempting to use Kafka's auto.offset.reset setting. This can result in data loss if processing continues." format (nextOffset, tp))
    offsetGetter.getResetOffset(simpleConsumer, tp)
  }
  debug("Got offset %s for new topic and partition %s." format (offset, tp))
  nextOffsets += tp -> offset
  metrics.topicPartitions.get((host, port)).set(nextOffsets.size)
}
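
And to illustrate the caller-side behaviour we think we are seeing: if the validity check throws rather than returning false, the else branch that would log the "It appears that..." warning and fall back to the reset offset is never reached, and the exception propagates past the caller. A minimal sketch with stand-in names (not the actual BrokerProxy/GetOffset code):

object CallerSketch {
  // Stand-in for GetOffset.isValidOffset that throws instead of returning false.
  def isValidOffset(offset: Long): Boolean =
    throw new RuntimeException("validation failed with an exception that was not caught")

  // Mirrors the shape of addTopicPartition's offset selection.
  def chooseOffset(nextOffset: Option[Long]): Long =
    if (nextOffset.isDefined && isValidOffset(nextOffset.get)) {
      nextOffset.get
    } else {
      // Skipped entirely when isValidOffset throws, so the warning never shows up in the log.
      println("It appears that we received an invalid or empty offset ...")
      0L // stand-in for offsetGetter.getResetOffset(...)
    }

  def main(args: Array[String]): Unit =
    try {
      chooseOffset(Some(56443499L))
    } catch {
      case e: RuntimeException =>
        println("Exception propagated past the caller: " + e.getMessage)
    }
}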
Could this be caused by a mismatch in the Kafka client library version we are using? Is there a recommended Kafka client version to use with Samza 0.14.1 (given that the Kafka server is 1.x)?
Any help with this would be greatly appreciated.