
I am using the Strimzi operator to run Kafka clusters on Kubernetes. I want to use Kafka MirrorMaker, so I deployed it with the CRD yml, but my KMM pod is stuck in CrashLoopBackOff and I can't figure out what the problem is. Here is my Kafka MirrorMaker yml:

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  version: 2.6.0
  replicas: 1
  consumer:
    bootstrapServers: my-cluster-kafka-bootstrap:9092
    groupId: my-source-group-id
  producer:
    bootstrapServers: my-cluster2-kafka-bootstrap:9092
  whitelist: ".*"

And here is my kafka-cluster yml:

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 2.6.0
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      log.message.format.version: "2.6"
    storage:
      type: jbod
      volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}

My second Kafka cluster:

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster2
spec:
  kafka:
    version: 2.6.0
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      log.message.format.version: "2.6"
    storage:
      type: jbod
      volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}

My list of pods and their statuses:

strimzi       my-bridge-bridge-684df9fc64-d7gqg              1/1     Running   2          10m
strimzi       my-cluster-entity-operator-7b546bddfd-4622z    3/3     Running   0          6m51s
strimzi       my-cluster-kafka-0                             1/1     Running   0          9m26s
strimzi       my-cluster-kafka-1                             1/1     Running   2          9m26s
strimzi       my-cluster-kafka-2                             1/1     Running   2          9m26s
strimzi       my-cluster-zookeeper-0                         1/1     Running   0          10m
strimzi       my-cluster-zookeeper-1                         1/1     Running   1          10m
strimzi       my-cluster-zookeeper-2                         1/1     Running   0          10m
strimzi       my-cluster2-entity-operator-74f6f4dbc4-7jhvh   3/3     Running   0          7m52s
strimzi       my-cluster2-kafka-0                            1/1     Running   0          9m39s
strimzi       my-cluster2-kafka-1                            1/1     Running   0          9m39s
strimzi       my-cluster2-kafka-2                            1/1     Running   0          9m39s
strimzi       my-cluster2-zookeeper-0                        1/1     Running   0          10m
strimzi       my-cluster2-zookeeper-1                        1/1     Running   0          10m
strimzi       my-cluster2-zookeeper-2                        1/1     Running   0          10m
strimzi       my-connect-cluster-connect-6cdb6cd79d-qlnhg    1/1     Running   4          10m
strimzi       strimzi-cluster-operator-54ff55979f-sxrzq      1/1     Running   0          11m

Pod logs:

ist@ist-1207:~$ kubectl logs -f my-mirror-maker-mirror-maker-78544b8c8-rz5ms -n strimzi
Kafka Mirror Maker consumer configuration:
# Bootstrap servers
bootstrap.servers=my-cluster-kafka-bootstrap:9092
# Consumer group
group.id=my-source-group-id
# Provided configuration
security.protocol=PLAINTEXT

Kafka Mirror Maker producer configuration:
# Bootstrap servers
bootstrap.servers=my-cluster2-cluster-kafka-bootstrap:9092
# Provided configuration
security.protocol=PLAINTEXT

2020-11-20 11:41:38,990 INFO Starting readiness poller (io.strimzi.mirrormaker.agent.MirrorMakerAgent) [main]
2020-11-20 11:41:39,176 INFO Starting liveness poller (io.strimzi.mirrormaker.agent.MirrorMakerAgent) [main]
2020-11-20 11:41:39,604 INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) [main]
2020-11-20 11:41:40,128 INFO Starting mirror maker (kafka.tools.MirrorMaker$) [main]
WARNING: The default partition assignment strategy of the mirror maker will change from 'range' to 'roundrobin' in an upcoming release (so that better load balancing can be achieved). If you prefer to make this switch in advance of that release add the following to the corresponding config: 'partition.assignment.strategy=org.apache.kafka.clients.consumer.RoundRobinAssignor'
2020-11-20 11:41:40,301 INFO ProducerConfig values: 
    acks = -1
    batch.size = 16384
    bootstrap.servers = [my-cluster2-cluster-kafka-bootstrap:9092]
    buffer.memory = 33554432
    client.dns.lookup = use_all_dns_ips
    client.id = producer-1
    compression.type = none
    connections.max.idle.ms = 540000
    delivery.timeout.ms = 2147483647
    enable.idempotence = false
    interceptor.classes = []
    internal.auto.downgrade.txn.commit = false
    key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
    linger.ms = 0
    max.block.ms = 9223372036854775807
    max.in.flight.requests.per.connection = 1
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metadata.max.idle.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 2147483647
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    ssl.endpoint.identification.algorithm = https
    ssl.engine.factory.class = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.3
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = null
    value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
 (org.apache.kafka.clients.producer.ProducerConfig) [main]
2020-11-20 11:41:40,392 WARN Couldn't resolve server my-cluster2-cluster-kafka-bootstrap:9092 from bootstrap.servers as DNS resolution failed for my-cluster2-cluster-kafka-bootstrap (org.apache.kafka.clients.ClientUtils) [main]
2020-11-20 11:41:40,393 INFO [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 0 ms. (org.apache.kafka.clients.producer.KafkaProducer) [main]
2020-11-20 11:41:40,400 ERROR Exception when starting mirror maker. (kafka.tools.MirrorMaker$) [main]
org.apache.kafka.common.KafkaException: Failed to construct kafka producer
    at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:441)
    at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:301)
    at kafka.tools.MirrorMaker$MirrorMakerProducer.<init>(MirrorMaker.scala:370)
    at kafka.tools.MirrorMaker$MirrorMakerOptions.checkArgs(MirrorMaker.scala:536)
    at kafka.tools.MirrorMaker$.main(MirrorMaker.scala:87)
    at kafka.tools.MirrorMaker.main(MirrorMaker.scala)
Caused by: org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers
    at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:89)
    at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:48)
    at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:415)
    ... 5 more
Exception in thread "main" java.lang.NullPointerException
    at kafka.tools.MirrorMaker$.main(MirrorMaker.scala:94)
    at kafka.tools.MirrorMaker.main(MirrorMaker.scala)
2020-11-20 11:41:40,410 INFO Start clean shutdown. (kafka.tools.MirrorMaker$) [MirrorMakerShutdownHook]
2020-11-20 11:41:40,413 INFO Shutting down consumer threads. (kafka.tools.MirrorMaker$) [MirrorMakerShutdownHook]
2020-11-20 11:41:40,413 INFO Closing producer. (kafka.tools.MirrorMaker$) [MirrorMakerShutdownHook]
2020-11-20 11:41:40,414 ERROR Uncaught exception in thread 'MirrorMakerShutdownHook': (org.apache.kafka.common.utils.KafkaThread) [MirrorMakerShutdownHook]
java.lang.NullPointerException
    at kafka.tools.MirrorMaker$.cleanShutdown(MirrorMaker.scala:172)
    at kafka.tools.MirrorMaker$MirrorMakerOptions.$anonfun$checkArgs$2(MirrorMaker.scala:522)
    at kafka.utils.Exit$.$anonfun$addShutdownHook$1(Exit.scala:38)
    at java.base/java.lang.Thread.run(Thread.java:834)

And here are the Services (svc) for my Kafka clusters:

NAMESPACE     NAME                             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
default       kubernetes                       ClusterIP   10.96.0.1        <none>        443/TCP                      24m
kube-system   kube-dns                         ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP       24m
strimzi       my-bridge-bridge-service         ClusterIP   10.108.118.142   <none>        8080/TCP                     11m
strimzi       my-cluster-kafka-bootstrap       ClusterIP   10.109.128.192   <none>        9091/TCP,9092/TCP,9093/TCP   10m
strimzi       my-cluster-kafka-brokers         ClusterIP   None             <none>        9091/TCP,9092/TCP,9093/TCP   10m
strimzi       my-cluster-zookeeper-client      ClusterIP   10.110.172.185   <none>        2181/TCP                     11m
strimzi       my-cluster-zookeeper-nodes       ClusterIP   None             <none>        2181/TCP,2888/TCP,3888/TCP   11m
strimzi       my-cluster2-kafka-bootstrap      ClusterIP   10.105.92.74     <none>        9091/TCP,9092/TCP,9093/TCP   10m
strimzi       my-cluster2-kafka-brokers        ClusterIP   None             <none>        9091/TCP,9092/TCP,9093/TCP   10m
strimzi       my-cluster2-zookeeper-client     ClusterIP   10.98.76.46      <none>        2181/TCP                     11m
strimzi       my-cluster2-zookeeper-nodes      ClusterIP   None             <none>        2181/TCP,2888/TCP,3888/TCP   11m
strimzi       my-connect-cluster-connect-api   ClusterIP   10.101.136.97    <none>        8083/TCP                     11m

1 Answer


Of course, MM needs both clusters running in order to mirror between them. I can see only one cluster, named my-cluster, while in the MM resource you are mirroring two clusters, named my-source-cluster and my-target-cluster, as referenced in the bootstrap servers. The only bootstrap server you actually have is my-cluster-kafka-bootstrap, and in any case a single cluster is not enough for mirroring.
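
For illustration only (this sketch is not part of the original answer): whichever two clusters are being mirrored, the consumer and producer bootstrapServers in the KafkaMirrorMaker spec have to point at bootstrap Services that actually exist and resolve. Based on the Service listing in the question, that would look roughly like the following; note that the producer log above shows my-cluster2-cluster-kafka-bootstrap, which does not match any listed Service.

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  version: 2.6.0
  replicas: 1
  consumer:
    # source cluster bootstrap Service (present in the svc listing above)
    bootstrapServers: my-cluster-kafka-bootstrap:9092
    groupId: my-source-group-id
  producer:
    # target cluster bootstrap Service (present in the svc listing above);
    # the producer log shows my-cluster2-cluster-kafka-bootstrap instead,
    # which has no Service and therefore fails DNS resolution
    bootstrapServers: my-cluster2-kafka-bootstrap:9092
  whitelist: ".*"

A quick way to confirm that a given bootstrap name resolves inside the cluster is a throwaway lookup pod, e.g. kubectl run dns-test -n strimzi --rm -it --restart=Never --image=busybox -- nslookup my-cluster2-kafka-bootstrap.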

Answered on 2020-11-20T10:02:52.790