I am a beginner with Confluent and Kafka.
While I am running Confluent Platform on a worker node (distributed mode, but on a single server), the Confluent server goes down from time to time (only the server; Kafka itself keeps working fine). Since I am new to this, I have made mistakes when creating sources and sinks. Could that be related to the failures?
Here is my configuration:
# Sample configuration for a distributed Kafka Connect worker that uses Avro serialization and
# integrates with the Schema Registry. This sample configuration assumes a local installation of
# Confluent Platform with all services running on their default ports.
# Bootstrap Kafka servers. If multiple servers are specified, they should be comma-separated.
bootstrap.servers=localhost:9092
# The group ID is a unique identifier for the set of workers that form a single Kafka Connect
# cluster
group.id=connect-cluster
# The converters specify the format of data in Kafka and how to translate it into Connect data.
# Every Connect user will need to configure these based on the format they want their data in
# when loaded from or stored into Kafka
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:18081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:18081
# The internal converter used for offsets and config data is configurable and must be specified,
# but most users will always want to use the built-in default. Offset and config data is never
# visible outside of Connect in this format.
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
# Kafka topic where connector configuration will be persisted. You should create this topic with a
# single partition and high replication factor (e.g. 3)
config.storage.topic=connect-configs
# Kafka topic where connector offset data will be persisted. You should create this topic with many
# partitions (e.g. 25) and high replication factor (e.g. 3)
offset.storage.topic=connect-offsets
# Kafka topic where connector status data will be persisted. You should create this topic with many
# partitions (e.g. 25) and high replication factor (e.g. 3)
status.storage.topic=connect-statuses
# Confluent Control Center Integration -- uncomment these lines to enable Kafka client interceptors
# that will report audit data that can be displayed and analyzed in Confluent Control Center
producer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor
consumer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor
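For what it's worth, everything here runs on a single broker, so I am not sure the internal topics can actually be created with the high replication factor (e.g. 3) that the comments above recommend. A minimal sketch of the extra settings I am considering, assuming the *.storage.replication.factor worker properties are the right way to control this on one broker:
# Assumption: with only one broker available, the internal Connect topics
# cannot be replicated three times, so pin their replication factor to 1.
config.storage.replication.factor=1
offset.storage.replication.factor=1
status.storage.replication.factor=1
I am not sure whether a mismatch between the requested replication factor and what a single broker can provide could by itself explain the worker going down.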
I am curious about this, because Confluent Platform is a well-designed project backed by many experts, and, more importantly, it is a commercial product.
飞然