
I am trying to make a k8s pod able to use PMEM without privileged mode. The approach I am trying is to create a local PV in k8s on top of an fsdax directory and let my pod use it through a PVC. However, I always get a MountVolume.NewMounter initialization failed ... : path does not exist error.
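
For reference, the fsdax mount and the vol1 directory backing the local PV were prepared roughly like this (a sketch; the exact ndctl/mkfs/mount flags are assumptions, but the device and paths match the lsblk and ls output further below):

# create an fsdax namespace, format it, and mount it with DAX enabled (assumed flags)
$ sudo ndctl create-namespace --mode=fsdax
$ sudo mkfs.ext4 /dev/pmem0
$ sudo mount -o dax /dev/pmem0 /mnt/pmem0
# directory used as the local PV path
$ sudo mkdir -p /mnt/pmem0/vol1
$ sudo chmod 777 /mnt/pmem0/vol1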

Here are my yaml files and the PMEM status:

Storage class yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

PV yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pmem-pv-volume
spec:
  capacity:
    storage: 50Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/pmem0/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: disktype
          operator: In
          values:
          - pmem

PVC yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pmem-pv-claim
spec:
  storageClassName: local-storage
  volumeName: pmem-pv-volume
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Pod yaml:

apiVersion: v1
kind: Pod
metadata:
  name: daemon
  labels:
    env: test
spec:
  hostNetwork: true
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - pmem
  containers:
  - name: daemon-container
    command: ["/usr/bin/bash", "-c", "sleep 3600"]
    image: mm:v2
    imagePullPolicy: Never
    volumeMounts:
    - mountPath: /mnt/pmem
      name: pmem-pv-storage
    - mountPath: /tmp
      name: tmp
    - mountPath: /var/log/memverge
      name: log
    - mountPath: /var/memverge/data
      name: data
  volumes:
    - name: pmem-pv-storage
      persistentVolumeClaim:
        claimName: pmem-pv-claim
    - name: tmp
      hostPath:
        path: /tmp
    - name: log
      hostPath:
        path: /var/log/memverge
    - name: data
      hostPath:
        path: /var/memverge/data

Some status and k8s output:

$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    0 745.2G  0 disk
├─sda1        8:1    0     1G  0 part /boot
└─sda2        8:2    0   740G  0 part
  ├─cl-root 253:0    0   188G  0 lvm  /
  ├─cl-swap 253:1    0    32G  0 lvm  [SWAP]
  └─cl-home 253:2    0   520G  0 lvm  /home
sr0          11:0    1  1024M  0 rom
nvme0n1     259:0    0     7T  0 disk
└─nvme0n1p1 259:1    0     7T  0 part /mnt/nvme
pmem0       259:2    0 100.4G  0 disk /mnt/pmem0
$ kubectl get pv
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS    REASON   AGE
pmem-pv-volume   50Gi       RWO            Delete           Bound    default/pmem-pv-claim   local-storage            20h
$ kubectl get pvc
NAME            STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS    AGE
pmem-pv-claim   Bound    pmem-pv-volume   50Gi       RWO            local-storage   20h
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                               READY   STATUS              RESTARTS   AGE
default       daemon                             0/1     ContainerCreating   0          20h
kube-system   coredns-74ff55c5b-5crgg            1/1     Running             0          20h
kube-system   etcd-minikube                      1/1     Running             0          20h
kube-system   kube-apiserver-minikube            1/1     Running             0          20h
kube-system   kube-controller-manager-minikube   1/1     Running             0          20h
kube-system   kube-proxy-2m7p6                   1/1     Running             0          20h
kube-system   kube-scheduler-minikube            1/1     Running             0          20h
kube-system   storage-provisioner                1/1     Running             0          20h
$ kubectl get events
LAST SEEN   TYPE      REASON        OBJECT       MESSAGE
108s        Warning   FailedMount   pod/daemon   MountVolume.NewMounter initialization failed for volume "pmem-pv-volume" : path "/mnt/pmem0/vol1" does not exist
47m         Warning   FailedMount   pod/daemon   Unable to attach or mount volumes: unmounted volumes=[pmem-pv-storage], unattached volumes=[tmp log data default-token-4t8sv pmem-pv-storage]: timed out waiting for the condition
37m         Warning   FailedMount   pod/daemon   Unable to attach or mount volumes: unmounted volumes=[pmem-pv-storage], unattached volumes=[default-token-4t8sv pmem-pv-storage tmp log data]: timed out waiting for the condition
13m         Warning   FailedMount   pod/daemon   Unable to attach or mount volumes: unmounted volumes=[pmem-pv-storage], unattached volumes=[pmem-pv-storage tmp log data default-token-4t8sv]: timed out waiting for the condition
$ ls -l /mnt/pmem0
total 20
drwx------ 2 root root 16384 Jan 20 15:35 lost+found
drwxrwxrwx 2 root root  4096 Jan 21 17:56 vol1

It is complaining that path "/mnt/pmem0/vol1" does not exist, but as the ls output above shows, it does exist.

Besides using a local PV, I have also tried:

  1. PMEM-CSI. But the PMEM-CSI approach is blocked for me by a container/kernel issue: https://github.com/containerd/containerd/issues/3221

  2. PV. When I tried to create a PV backed by PMEM, the pod could not claim the PMEM storage correctly; it always got mounted as an overlay fs on top of the host's /.

Can anyone offer some help? Thanks a lot!

1 Answer

As discussed in the comments:

Using minikube, rancher, or any other containerized version of the kubelet will result in MountVolume.NewMounter initialization failed for volume, stating that the path does not exist.

If the kubelet is running inside a container, it cannot access the host filesystem at the same path. You have to adjust the hostDir to the correct path inside the kubelet container.
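
For example, since this setup runs on minikube (see the pod list above), the path has to exist inside the minikube node, not only on the bare host. A quick check, as a sketch:

# the local PV path must be visible to the node where the kubelet runs
$ minikube ssh -- ls -ld /mnt/pmem0/vol1
# if it is missing inside the node, the mount fails even though the host has it
$ minikube ssh -- sudo mkdir -p /mnt/pmem0/vol1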

You can also add a bind for the local volume, as suggested on github. Please adjust the copy-pasted example below to your needs if you are going to use it:

    "HostConfig": {
        "Binds": [
            "/mnt/local:/mnt/local"
        ],
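
With minikube specifically, an equivalent way to get such a bind is to mount the host directory into the minikube node, for example (a sketch; note that minikube's 9p mount may not preserve DAX semantics, so this mainly addresses the path-visibility error):

# one-off foreground mount into the running node
$ minikube mount /mnt/pmem0:/mnt/pmem0
# or configure the mount when starting the cluster
$ minikube start --mount --mount-string="/mnt/pmem0:/mnt/pmem0"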

A regular (non-containerized) installation such as kubeadm does not behave this way, and you will not get this kind of error.

Answered 2021-01-26T13:31:25.727