
I installed the cass-operator and set up a Cassandra datacenter/cluster with 3 nodes. I created a sample keyspace and table and inserted some data. I can see it created 3 PVCs in my Storage section. When I delete the datacenter, it also deletes the associated PVCs, so when I set up a datacenter/cluster with the same configuration, it comes up brand new, with none of the earlier keyspaces or tables. How can I make them persist for future use? I am using the example YAML from https://github.com/datastax/cass-operator/tree/master/operator/example-cassdc-yaml/cassandra-3.11.x

I did not find any persistentVolumeClaim configuration in it; it only has storageConfig: cassandraDataVolumeClaimSpec:. Has anyone come across this before?

Edit: StorageClass details:

allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass 
metadata:
  annotations:
    description: Provides RWO and RWX Filesystem volumes with Retain Policy
  storageclass.kubernetes.io/is-default-class: "false"
  name: ocs-storagecluster-cephfs-retain
parameters:
  clusterID: openshift-storage
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner 
  csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  fsName: ocs-storagecluster-cephfilesystem
provisioner: openshift-storage.cephfs.csi.ceph.com
reclaimPolicy: Retain
volumeBindingMode: Immediate
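
A quick way to double-check which policy the provisioned PVs actually carry (a PV copies the reclaim policy from its StorageClass only at provisioning time, so any volumes created before switching to this -retain class would still have Delete):

oc get pv -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy,STATUS:.status.phase,CLAIM:.spec.claimRef.name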

Here is the Cassandra cluster YAML:

apiVersion: cassandra.datastax.com/v1beta1
kind: CassandraDatacenter
metadata:
  name: dc
  generation: 2
spec:
  size: 3
  config:
    cassandra-yaml:
      authenticator: AllowAllAuthenticator
      authorizer: AllowAllAuthorizer
      role_manager: CassandraRoleManager
    jvm-options:
      additional-jvm-opts:
        - '-Ddse.system_distributed_replication_dc_names=dc1'
        - '-Ddse.system_distributed_replication_per_dc=1'
      initial_heap_size: 800M
      max_heap_size: 800M
  resources: {}
  clusterName: cassandra
  systemLoggerResources: {}
  configBuilderResources: {}
  serverVersion: 3.11.7
  serverType: cassandra
  storageConfig:
    cassandraDataVolumeClaimSpec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: ocs-storagecluster-cephfs-retain
  managementApiAuth:
    insecure: {}

Edit: PV details:

oc get pv pvc-15def0ca-6cbc-4569-a560-7b9e89a7b7a7 -o yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: openshift-storage.cephfs.csi.ceph.com
  creationTimestamp: "2022-02-23T20:52:54Z"
  finalizers:
  - kubernetes.io/pv-protection
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:pv.kubernetes.io/provisioned-by: {}
      f:spec:
        f:accessModes: {}
        f:capacity:
          .: {}
          f:storage: {}
        f:claimRef:
          .: {}
          f:apiVersion: {}
          f:kind: {}
          f:name: {}
          f:namespace: {}
          f:resourceVersion: {}
          f:uid: {}
        f:csi:
          .: {}
          f:controllerExpandSecretRef:
            .: {}
            f:name: {}
            f:namespace: {}
          f:driver: {}
          f:nodeStageSecretRef:
            .: {}
            f:name: {}
            f:namespace: {}
          f:volumeAttributes:
            .: {}
            f:clusterID: {}
            f:fsName: {}
            f:storage.kubernetes.io/csiProvisionerIdentity: {}
            f:subvolumeName: {}
          f:volumeHandle: {}
        f:persistentVolumeReclaimPolicy: {}
        f:storageClassName: {}
        f:volumeMode: {}
    manager: csi-provisioner
    operation: Update
    time: "2022-02-23T20:52:54Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:phase: {}
    manager: kube-controller-manager
    operation: Update
    time: "2022-02-23T20:52:54Z"
  name: pvc-15def0ca-6cbc-4569-a560-7b9e89a7b7a7
  resourceVersion: "51684941"
  selfLink: /api/v1/persistentvolumes/pvc-15def0ca-6cbc-4569-a560-7b9e89a7b7a7
  uid: 8ded2de5-6d4e-45a1-9b89-a385d74d6d4a
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: server-data-cstone-cassandra-cstone-dc-default-sts-1
    namespace: dv01-cornerstone
    resourceVersion: "51684914"
    uid: 15def0ca-6cbc-4569-a560-7b9e89a7b7a7
  csi:
    controllerExpandSecretRef:
      name: rook-csi-cephfs-provisioner
      namespace: openshift-storage
    driver: openshift-storage.cephfs.csi.ceph.com
    nodeStageSecretRef:
      name: rook-csi-cephfs-node
      namespace: openshift-storage
    volumeAttributes:
      clusterID: openshift-storage
      fsName: ocs-storagecluster-cephfilesystem
      storage.kubernetes.io/csiProvisionerIdentity: 1645064620191-8081-openshift-storage.cephfs.csi.ceph.com
      subvolumeName: csi-vol-92d5e07d-94ea-11ec-92e8-0a580a20028c
    volumeHandle: 0001-0011-openshift-storage-0000000000000001-92d5e07d-94ea-11ec-92e8-0a580a20028c
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ocs-storagecluster-cephfs-retain
  volumeMode: Filesystem
status:
  phase: Bound

1 Answer


According to the spec:

Storage configuration. This sets up a 100GB volume at /var/lib/cassandra on each server pod. The user is left to create the server-storage storage class by following these directions... https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/ssd-pd

Before deploying the Cassandra spec, first make sure your cluster has a working CSI driver installed, then create the StorageClass as needed:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: server-storage
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Retain
parameters:
  type: pd-ssd
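
The CassandraDatacenter then needs to reference that class in its storageConfig. A minimal sketch, reusing the block from the question with the class name swapped to the server-storage example above (the 100Gi size mirrors the 100GB volume mentioned in the spec comment):

# excerpt from the CassandraDatacenter spec
storageConfig:
  cassandraDataVolumeClaimSpec:
    storageClassName: server-storage
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 100Gi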

Redeploy your Cassandra cluster; the data disks should now be retained when the datacenter is deleted.
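
One caveat, based on standard Kubernetes behavior rather than anything cass-operator does for you: with Retain, the PV and its data survive the PVC's deletion, but the PV moves to the Released phase and will not bind to a new claim by itself, so a redeployed datacenter simply provisions fresh volumes. To reattach the old data, you can clear the stale claimRef and pre-create a PVC with the name the recreated StatefulSet expects. A sketch using the PV and claim names from the question:

# The retained PV is Released; drop the stale claim reference so it can bind again
oc patch pv pvc-15def0ca-6cbc-4569-a560-7b9e89a7b7a7 \
  --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'

# Pre-create the claim the recreated StatefulSet will look for,
# pinned to the retained PV via volumeName
oc apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: server-data-cstone-cassandra-cstone-dc-default-sts-1
  namespace: dv01-cornerstone
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: ocs-storagecluster-cephfs-retain
  volumeName: pvc-15def0ca-6cbc-4569-a560-7b9e89a7b7a7
EOF

Create these PVCs before bringing the datacenter back up, repeating for each retained volume; the controller binds a claim to the named PV as long as the storage class, access mode, and size all match.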

Answered 2022-02-12T03:27:48.183