
I have a Kubernetes cluster running on several local (bare-metal/physical) machines. I want to deploy Kafka on the cluster, but I can't figure out how to use Strimzi with my setup.

I tried to follow the tutorial on the quick start page (https://strimzi.io/docs/quickstart/master/, section 2.4 "Creating a cluster"), but my ZooKeeper pod is stuck in the Pending state:

Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  pod has unbound immediate PersistentVolumeClaims
  Warning  FailedScheduling  <unknown>  default-scheduler  pod has unbound immediate PersistentVolumeClaims

I usually use hostPath for my volumes, so I don't understand what is going wrong here...

Edit: I tried to create a StorageClass with Arghya Sadhu's commands, but the problem persists.
The description of my PVC:

kubectl describe -n my-kafka-project persistentvolumeclaim/data-my-cluster-zookeeper-0
Name:          data-my-cluster-zookeeper-0
Namespace:     my-kafka-project
StorageClass:  local-storage
Status:        Pending
Volume:        
Labels:        app.kubernetes.io/instance=my-cluster
               app.kubernetes.io/managed-by=strimzi-cluster-operator
               app.kubernetes.io/name=strimzi
               strimzi.io/cluster=my-cluster
               strimzi.io/kind=Kafka
               strimzi.io/name=my-cluster-zookeeper
Annotations:   strimzi.io/delete-claim: false
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Mounted By:    my-cluster-zookeeper-0
Events:
  Type    Reason                Age                 From                         Message
  ----    ------                ----                ----                         -------
  Normal  WaitForFirstConsumer  72s (x66 over 16m)  persistentvolume-controller  waiting for first consumer to be created before binding

And of my pod:

kubectl describe -n my-kafka-project pod/my-cluster-zookeeper-0
Name:           my-cluster-zookeeper-0
Namespace:      my-kafka-project
Priority:       0
Node:           <none>
Labels:         app.kubernetes.io/instance=my-cluster
                app.kubernetes.io/managed-by=strimzi-cluster-operator
                app.kubernetes.io/name=strimzi
                controller-revision-hash=my-cluster-zookeeper-7f698cf9b5
                statefulset.kubernetes.io/pod-name=my-cluster-zookeeper-0
                strimzi.io/cluster=my-cluster
                strimzi.io/kind=Kafka
                strimzi.io/name=my-cluster-zookeeper
Annotations:    strimzi.io/cluster-ca-cert-generation: 0
                strimzi.io/generation: 0
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  StatefulSet/my-cluster-zookeeper
Containers:
  zookeeper:
    Image:      strimzi/kafka:0.15.0-kafka-2.3.1
    Port:       <none>
    Host Port:  <none>
    Command:
      /opt/kafka/zookeeper_run.sh
    Liveness:   exec [/opt/kafka/zookeeper_healthcheck.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
    Readiness:  exec [/opt/kafka/zookeeper_healthcheck.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
    Environment:
      ZOOKEEPER_NODE_COUNT:          1
      ZOOKEEPER_METRICS_ENABLED:     false
      STRIMZI_KAFKA_GC_LOG_ENABLED:  false
      KAFKA_HEAP_OPTS:               -Xms128M
      ZOOKEEPER_CONFIGURATION:       autopurge.purgeInterval=1
                                     tickTime=2000
                                     initLimit=5
                                     syncLimit=2

    Mounts:
      /opt/kafka/custom-config/ from zookeeper-metrics-and-logging (rw)
      /var/lib/zookeeper from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from my-cluster-zookeeper-token-hgk2b (ro)
  tls-sidecar:
    Image:       strimzi/kafka:0.15.0-kafka-2.3.1
    Ports:       2888/TCP, 3888/TCP, 2181/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP
    Command:
      /opt/stunnel/zookeeper_stunnel_run.sh
    Liveness:   exec [/opt/stunnel/stunnel_healthcheck.sh 2181] delay=15s timeout=5s period=10s #success=1 #failure=3
    Readiness:  exec [/opt/stunnel/stunnel_healthcheck.sh 2181] delay=15s timeout=5s period=10s #success=1 #failure=3
    Environment:
      ZOOKEEPER_NODE_COUNT:   1
      TLS_SIDECAR_LOG_LEVEL:  notice
    Mounts:
      /etc/tls-sidecar/cluster-ca-certs/ from cluster-ca-certs (rw)
      /etc/tls-sidecar/zookeeper-nodes/ from zookeeper-nodes (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from my-cluster-zookeeper-token-hgk2b (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-my-cluster-zookeeper-0
    ReadOnly:   false
  zookeeper-metrics-and-logging:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      my-cluster-zookeeper-config
    Optional:  false
  zookeeper-nodes:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-zookeeper-nodes
    Optional:    false
  cluster-ca-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-cluster-ca-cert
    Optional:    false
  my-cluster-zookeeper-token-hgk2b:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-zookeeper-token-hgk2b
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.
  Warning  FailedScheduling  <unknown>  default-scheduler  0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.

3 Answers


You need to have a PersistentVolume that satisfies the constraints of the PersistentVolumeClaim.

Use local storage. With a local StorageClass:

$ cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
EOF

You need to configure a default StorageClass in your cluster so that the PersistentVolumeClaim can take its storage from there:

$ kubectl patch storageclass local-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
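Note that the kubernetes.io/no-provisioner provisioner does not create volumes dynamically, so a matching PersistentVolume still has to exist before the PVC can bind. A minimal sketch of a local PersistentVolume (the name, the 10Gi size, the /mnt/data path, and the node1 hostname are assumptions; the path must already exist on that node, and the size must cover what the PVC requests):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-pv
spec:
  capacity:
    storage: 10Gi              # must be >= the PVC's request
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/data            # assumed path; must exist on the node
  nodeAffinity:                # required for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node1        # assumed hostname; replace with your node's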
Answered 2020-01-07T16:47:05.827

Yes, in my opinion Kubernetes is missing something here at the infrastructure level. You should either provision PersistentVolumes for static assignment to the PVCs, or, as Arghya already mentioned, provide StorageClasses for dynamic assignment.
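For the static route, once such a PersistentVolume and StorageClass exist, the Strimzi Kafka resource can request them explicitly via persistent-claim storage. A hedged excerpt of the relevant spec sections (the local-storage class name and the 10Gi size are assumptions, not values from the question):

spec:
  kafka:
    storage:
      type: persistent-claim
      size: 10Gi
      class: local-storage
      deleteClaim: false
  zookeeper:
    storage:
      type: persistent-claim
      size: 10Gi
      class: local-storage
      deleteClaim: false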

Answered 2020-01-08T09:17:03.373

In my case, I had created the Kafka cluster in another namespace, my-cluster-kafka, but the Strimzi operator was running in the kafka namespace.

So I simply created it in the same namespace. For testing purposes I use ephemeral storage.

Here is the kafka.yaml:

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 1
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: tls
      - name: external
        port: 9094
        type: nodeport
        tls: false
    storage:
      type: ephemeral
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
  zookeeper:
    replicas: 1
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
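Assuming the operator watches the kafka namespace as in the quickstart, the resource can then be applied there and the rollout watched:

$ kubectl apply -f kafka.yaml -n kafka
$ kubectl get pods -n kafka -w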
Answered 2021-03-10T04:06:26.327