
I'm running a dev Linux machine and setting up a local Kafka for development on Kubernetes (moving away from docker-compose for learning and practice purposes). Everything works, but I'm now trying to map the Kafka and Zookeeper volumes to the host, and only the Kafka volume works. For Zookeeper, I configured and mapped the data and log paths to a volume, but the internal directories are not exposed on the host (which does happen with the Kafka mapping): only the data and log folders show up, with no actual content reaching the host, so restarting Zookeeper resets its state.

I'd like to know whether there is a limitation, or a different approach needed, when using Kind and mapping multiple directories from different pods. What am I missing? Why is only the Kafka volume successfully persisted on the host?

The full setup, together with a README on how to run it, is on GitHub under the pv-pvc-setup folder.

The relevant Zookeeper configuration. Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: zookeeper
  name: zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      service: zookeeper
  strategy: {}
  template:
    metadata:
      labels:
        network/kafka-network: "true"
        service: zookeeper
    spec:
      containers:
        - env:
            - name: TZ
            - name: ZOOKEEPER_CLIENT_PORT
              value: "2181"
            - name: ZOOKEEPER_DATA_DIR
              value: "/var/lib/zookeeper/data"
            - name: ZOOKEEPER_LOG_DIR
              value: "/var/lib/zookeeper/log"
            - name: ZOOKEEPER_SERVER_ID
              value: "1"
          image: confluentinc/cp-zookeeper:7.0.1
          name: zookeeper
          ports:
            - containerPort: 2181
          resources: {}
          volumeMounts:
            - mountPath: /var/lib/zookeeper
              name: zookeeper-data
      hostname: zookeeper
      restartPolicy: Always
      volumes:
        - name: zookeeper-data
          persistentVolumeClaim:
            claimName: zookeeper-pvc

PersistentVolumeClaim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: zookeeper-local-storage
  resources:
    requests:
      storage: 5Gi

PersistentVolume:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-pv
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: zookeeper-local-storage
  capacity:
    storage: 5Gi
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /var/lib/zookeeper

Kind config:

apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
  - role: control-plane
  - role: worker
    extraPortMappings:
      - containerPort: 30092 # internal kafka nodeport
        hostPort: 9092 # port exposed on "host" machine for kafka
      - containerPort: 30081 # internal schema-registry nodeport
        hostPort: 8081 # port exposed on "host" machine for schema-registry
    extraMounts:
      - hostPath: ./tmp/kafka-data
        containerPath: /var/lib/kafka/data
        readOnly: false
        selinuxRelabel: false
        propagation: Bidirectional
      - hostPath: ./tmp/zookeeper-data
        containerPath: /var/lib/zookeeper
        readOnly: false
        selinuxRelabel: false
        propagation: Bidirectional

As I mentioned, the setup works; I'm now just trying to make sure the relevant Kafka and Zookeeper volumes are mapped to persistent external storage (in this case, the local disk).


1 Answer


I finally sorted it out. There were two main problems with my initial setup, both now fixed.

The folders used to persist data on the local host need to be created beforehand, so that they have the same uid:gid as the user who creates the initial Kind cluster; if they don't exist, the folders won't persist data correctly.
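As a concrete sketch of that first fix (the paths match the extraMounts in the Kind config below; adjust them to your layout), the host folders can be pre-created before running `kind create cluster`:

```shell
# Create the host folders Kind will mount, before creating the cluster.
mkdir -p ./tmp/kafka-data ./tmp/zookeeper-data/data ./tmp/zookeeper-data/log

# Make sure they are owned by the same uid:gid as the user who will run
# "kind create cluster"; otherwise writes from the node container will
# not be persisted correctly on the host.
chown -R "$(id -u):$(id -g)" ./tmp/kafka-data ./tmp/zookeeper-data
```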

Create a specific PersistentVolume and PersistentVolumeClaim for each persisted Zookeeper folder (data and log), and configure them in the Kind config. This is the final Kind config:

apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
  - role: control-plane
  - role: worker
    extraPortMappings:
      - containerPort: 30092 # internal kafka nodeport
        hostPort: 9092 # port exposed on "host" machine for kafka
      - containerPort: 30081 # internal schema-registry nodeport
        hostPort: 8081 # port exposed on "host" machine for schema-registry
    extraMounts:
      - hostPath: ./tmp/kafka-data
        containerPath: /var/lib/kafka/data
        readOnly: false
        selinuxRelabel: false
        propagation: Bidirectional
      - hostPath: ./tmp/zookeeper-data/data
        containerPath: /var/lib/zookeeper/data
        readOnly: false
        selinuxRelabel: false
        propagation: Bidirectional
      - hostPath: ./tmp/zookeeper-data/log
        containerPath: /var/lib/zookeeper/log
        readOnly: false
        selinuxRelabel: false
        propagation: Bidirectional
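To match the split mounts above, the single Zookeeper PV/PVC from the question becomes one PersistentVolume and one PersistentVolumeClaim per directory, each mounted at its own path in the Deployment. A sketch for the data directory (the names and storage class below are illustrative; the repo may use different ones):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-data-pv
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: zookeeper-data-local-storage
  capacity:
    storage: 5Gi
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /var/lib/zookeeper/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: zookeeper-data-local-storage
  resources:
    requests:
      storage: 5Gi
```

The log directory gets an identical pair pointing at /var/lib/zookeeper/log, and the Deployment's single volumeMount of /var/lib/zookeeper is replaced by two mounts, one per claim.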

The full setup using persistent volumes and persistent volume claims, along with further instructions, is available in this repo if you want to run it for fun: https://github.com/mmaia/kafka-local-kubernetes

answered 2022-01-09T16:42:31.580