
I have my own k8s cluster. I am trying to link the cluster to OpenStack / Cinder.

When I create a PVC, I can see the PV in k8s and the volume in OpenStack. But when I link a pod to the PVC, I get this message from k8s - Cinder: "0/x nodes are available: x node(s) had volume node affinity conflict".

My test yml:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: classic
provisioner: kubernetes.io/cinder
parameters:
  type: classic

---


kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-infra-consuldata4
  namespace: infra
spec:
  storageClassName: classic
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: consul
  namespace: infra
  labels:
    app: consul
spec:
  replicas: 1
  selector:
    matchLabels:
      app: consul
  template:
    metadata:
      labels:
        app: consul
    spec:
      containers:
      - name: consul
        image: consul:1.4.3
        volumeMounts:
        - name: data
          mountPath: /consul
        resources:
          requests:
            cpu: 100m
          limits:
            cpu: 500m
        command: ["consul", "agent", "-server", "-bootstrap", "-ui", "-bind", "0.0.0.0", "-client", "0.0.0.0", "-data-dir", "/consul"]
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: pvc-infra-consuldata4

Result:

kpro describe pvc -n infra
Name:          pvc-infra-consuldata4
Namespace:     infra
StorageClass:  classic
Status:        Bound
Volume:        pvc-76bfdaf1-40bb-11e9-98de-fa163e53311c
Labels:        
Annotations:   kubectl.kubernetes.io/last-applied-configuration:
                 {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"pvc-infra-consuldata4","namespace":"infra"},"spec":...
               pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/cinder
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Events:
  Type       Reason                 Age   From                         Message
  ----       ------                 ----  ----                         -------
  Normal     ProvisioningSucceeded  61s   persistentvolume-controller  Successfully provisioned volume pvc-76bfdaf1-40bb-11e9-98de-fa163e53311c using kubernetes.io/cinder
Mounted By:  consul-85684dd7fc-j84v7

kpro describe po -n infra consul-85684dd7fc-j84v7
Name:               consul-85684dd7fc-j84v7
Namespace:          infra
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             app=consul
                    pod-template-hash=85684dd7fc
Annotations:        <none>
Status:             Pending
IP:                 
Controlled By:      ReplicaSet/consul-85684dd7fc
Containers:
  consul:
    Image:      consul:1.4.3
    Port:       <none>
    Host Port:  <none>
    Command:
      consul
      agent
      -server
      -bootstrap
      -ui
      -bind
      0.0.0.0
      -client
      0.0.0.0
      -data-dir
      /consul
    Limits:
      cpu:  2
    Requests:
      cpu:        500m
    Environment:  <none>
    Mounts:
      /consul from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-nxchv (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-infra-consuldata4
    ReadOnly:   false
  default-token-nxchv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-nxchv
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  36s (x6 over 2m40s)  default-scheduler  0/6 nodes are available: 6 node(s) had volume node affinity conflict. 

Why does K8s successfully create the Cinder volume but fail to schedule the pod?

3 Answers

Try to find out the nodeAffinity of the persistent volume:

$ kubectl describe pv pvc-76bfdaf1-40bb-11e9-98de-fa163e53311c
Node Affinity:     
  Required Terms:  
    Term 0:        kubernetes.io/hostname in [xxx]

Then check whether that xxx matches the label of the node (yyy) on which your pod is supposed to run:

$ kubectl get nodes
NAME      STATUS   ROLES               AGE   VERSION
yyy       Ready    worker              8d    v1.15.3

If they do not match, you will get the "x node(s) had volume node affinity conflict" error, and you need to re-create the persistent volume with the correct nodeAffinity configuration.
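
For orientation, here is a minimal, hedged sketch of a Cinder-backed PersistentVolume with an explicit nodeAffinity; the PV name, the Cinder volume ID and the hostname value are placeholders (not taken from the output above), and the point is only that the value under kubernetes.io/hostname must match the label of a node the scheduler can actually place the pod on:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-example                      # placeholder name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  cinder:
    volumeID: <cinder-volume-id>        # placeholder Cinder volume UUID
    fsType: ext4
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname   # the "Term 0" key from the describe output
          operator: In
          values:
          - yyy                         # must match the hostname label of an existing worker node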

Answered 2020-01-22T11:39:18.073

I also ran into this problem when I forgot to deploy the EBS CSI driver before trying to get my pod to attach to it.

kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master"

Answered 2021-07-06T14:15:03.087

You have set provisioner: kubernetes.io/cinder, about which the Kubernetes documentation on storage classes - OpenStack Cinder says:

Note:

FEATURE STATE: Kubernetes 1.11 deprecated

This internal provisioner of OpenStack is deprecated. Please use the external cloud provider for OpenStack.

Based on the OpenStack GitHub, you should set provisioner: openstack.org/standalone-cinder.

Please check persistent-volume-provisioning cinder for detailed usage and yaml files.
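
As a rough sketch only (this assumes the standalone-cinder external provisioner is actually deployed and running in your cluster, and the parameters depend on your OpenStack setup), the StorageClass would then look roughly like this:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: classic
provisioner: openstack.org/standalone-cinder   # external provisioner; must be running in the cluster
parameters:
  type: classic                                # Cinder volume type, as in the original class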

You might also be interested in reading these StackOverflow questions:

Kubernetes Cinder volumes do not mount with cloud-provider=openstack

How to create a storage class and dynamically provision a persistent volume in a Kubernetes cluster with OpenStack Cinder

Answered 2019-03-07T11:59:04.593