Kubernetes version: 1.13.4 (same issue on 1.13.2).

I self-host the cluster on DigitalOcean.

OS: CoreOS 2023.4.0

I have 2 volumes on one node:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: prometheus-pv-volume
  labels:
    type: local
    name: prometheus-pv-volume
spec:
  storageClassName: local-storage
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
  hostPath:
    path: "/prometheus-volume"
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: node-role.kubernetes.io/monitoring
          operator: Exists
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: grafana-pv-volume
  labels:
    type: local
    name: grafana-pv-volume
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/grafana-volume"
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: node-role.kubernetes.io/monitoring
          operator: Exists
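
As an aside, this node-pinned pattern is usually expressed with the local volume source rather than hostPath. A minimal sketch for the Grafana volume, assuming the same path and node label (this rewrite is mine, not part of the original manifests):

kind: PersistentVolume
apiVersion: v1
metadata:
  name: grafana-pv-volume
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  local:                # local volume source instead of hostPath
    path: "/grafana-volume"
  nodeAffinity:         # nodeAffinity is mandatory for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: node-role.kubernetes.io/monitoring
          operator: Exists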

And 2 PVCs on the same node that use them. Here is one of them:

  storage:
    volumeClaimTemplate:
      spec:
        storageClassName: local-storage
        selector:
          matchLabels:
            name: prometheus-pv-volume
        resources:
          requests:
            storage: 100Gi
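
For context, this fragment lives under spec.storage of a Prometheus custom resource managed by the Prometheus Operator; the surrounding fields below are an assumption inferred from the PVC name prometheus-k8s-db-prometheus-k8s-0, not something stated in the original:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
  namespace: monitoring
spec:
  storage:
    volumeClaimTemplate:
      spec:
        storageClassName: local-storage
        selector:
          matchLabels:
            name: prometheus-pv-volume
        resources:
          requests:
            storage: 100Gi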

Everything works fine.

Output of kubectl get pv --all-namespaces:

NAME                   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                           STORAGECLASS    REASON   AGE
grafana-pv-volume      1Gi        RWO            Retain           Bound    monitoring/grafana-storage                      local-storage            16m
prometheus-pv-volume   100Gi      RWO            Retain           Bound    monitoring/prometheus-k8s-db-prometheus-k8s-0   local-storage            16m

Output of kubectl get pvc --all-namespaces:

NAMESPACE    NAME                                 STATUS   VOLUME                 CAPACITY   ACCESS MODES   STORAGECLASS    AGE
monitoring   grafana-storage                      Bound    grafana-pv-volume      1Gi        RWO            local-storage   10m
monitoring   prometheus-k8s-db-prometheus-k8s-0   Bound    prometheus-pv-volume   100Gi      RWO            local-storage   10m

The problem is that every 2 minutes kube-controller-manager emits these log messages:

W0302 17:16:07.877212       1 plugins.go:845] FindExpandablePluginBySpec(prometheus-pv-volume) -> err:no volume plugin matched
W0302 17:16:07.877164       1 plugins.go:845] FindExpandablePluginBySpec(grafana-pv-volume) -> err:no volume plugin matched

Why do they appear, and how can I fix this?

1 Answer

It seems safe to ignore these messages; the log call was removed recently (Feb 20) and will not appear in future releases: https://github.com/kubernetes/kubernetes/pull/73901
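
Until the cluster runs a release that includes that change, the messages can simply be filtered out when reading the logs; a sketch, assuming the controller manager runs as a static pod named after the node:

kubectl -n kube-system logs kube-controller-manager-<node-name> | grep -v FindExpandablePluginBySpec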

Answered 2019-03-02T19:14:41.457