I followed the example to use rbd in Kubernetes, but it does not work. Can anybody help me?! The error:

Nov 09 17:58:03 core-1-97 kubelet[1254]: E1109 17:58:03.289702    1254 volumes.go:114] Could not create volume builder for pod 5df3610e-86c8-11e5-bc34-002590fdf95c: can't use volume plugins for (volume.Spec){Name:(string)rbdpd VolumeSource:(api.VolumeSource){HostPath:(*api.HostPathVolumeSource)<nil> EmptyDir:(*api.EmptyDirVolumeSource)<nil> GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> GitRepo:(*api.GitRepoVolumeSource)<nil> Secret:(*api.SecretVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> PersistentVolumeClaim:(*api.PersistentVolumeClaimVolumeSource)<nil> RBD:(*api.RBDVolumeSource){CephMonitors:([]string)[10.14.1.33:6789 10.14.1.35:6789 10.14.1.36:6789] RBDImage:(string)foo FSType:(string)ext4 RBDPool:(string)rbd RadosUser:(string)admin Keyring:(string) SecretRef:(*api.LocalObjectReference){Name:(string)ceph-secret} ReadOnly:(bool)true}} PersistentVolumeSource:(api.PersistentVolumeSource){GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> HostPath:(*api.HostPathVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> RBD:(*api.RBDVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil>}}: no volume plugin matched
Nov 09 17:58:03 core-1-97 kubelet[1254]: E1109 17:58:03.289770    1254 kubelet.go:1210] Unable to mount volumes for pod "rbd2_default": can't use volume plugins for (volume.Spec){Name:(string)rbdpd VolumeSource:(api.VolumeSource){HostPath:(*api.HostPathVolumeSource)<nil> EmptyDir:(*api.EmptyDirVolumeSource)<nil> GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> GitRepo:(*api.GitRepoVolumeSource)<nil> Secret:(*api.SecretVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> PersistentVolumeClaim:(*api.PersistentVolumeClaimVolumeSource)<nil> RBD:(*api.RBDVolumeSource){CephMonitors:([]string)[10.14.1.33:6789 10.14.1.35:6789 10.14.1.36:6789] RBDImage:(string)foo FSType:(string)ext4 RBDPool:(string)rbd RadosUser:(string)admin Keyring:(string) SecretRef:(*api.LocalObjectReference){Name:(string)ceph-secret} ReadOnly:(bool)true}} PersistentVolumeSource:(api.PersistentVolumeSource){GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> HostPath:(*api.HostPathVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> RBD:(*api.RBDVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil>}}: no volume plugin matched; skipping pod
Nov 09 17:58:03 core-1-97 kubelet[1254]: E1109 17:58:03.299458    1254 pod_workers.go:111] Error syncing pod 5df3610e-86c8-11e5-bc34-002590fdf95c, skipping: can't use volume plugins for (volume.Spec){Name:(string)rbdpd VolumeSource:(api.VolumeSource){HostPath:(*api.HostPathVolumeSource)<nil> EmptyDir:(*api.EmptyDirVolumeSource)<nil> GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> GitRepo:(*api.GitRepoVolumeSource)<nil> Secret:(*api.SecretVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> PersistentVolumeClaim:(*api.PersistentVolumeClaimVolumeSource)<nil> RBD:(*api.RBDVolumeSource){CephMonitors:([]string)[10.14.1.33:6789 10.14.1.35:6789 10.14.1.36:6789] RBDImage:(string)foo FSType:(string)ext4 RBDPool:(string)rbd RadosUser:(string)admin Keyring:(string) SecretRef:(*api.LocalObjectReference){Name:(string)ceph-secret} ReadOnly:(bool)true}} PersistentVolumeSource:(api.PersistentVolumeSource){GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> HostPath:(*api.HostPathVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> RBD:(*api.RBDVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil>}}: no volume plugin matched

I used the rbd-with-secret.json template file:

core@core-1-94 ~/kubernetes/examples/rbd $ cat rbd-with-secret.json
{
"apiVersion": "v1",
"id": "rbdpd2",
"kind": "Pod",
"metadata": {
    "name": "rbd2"
},
"spec": {
    "nodeSelector": {"kubernetes.io/hostname" :"10.12.1.97"},
    "containers": [
        {
            "name": "rbd-rw",
            "image": "kubernetes/pause",
            "volumeMounts": [
                {
                    "mountPath": "/mnt/rbd",
                    "name": "rbdpd"
                }
            ]
        }
    ],
    "volumes": [
        {
            "name": "rbdpd",
            "rbd": {
                "monitors": [
            "10.14.1.33:6789",
        "10.14.1.35:6789",
            "10.14.1.36:6789"
            ],
                "pool": "rbd",
                "image": "foo",
                "user": "admin",
                "secretRef": {"name": "ceph-secret"},
                "fsType": "ext4",
                "readOnly": true
            }
        }
    ]
}
}
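
For reference, a pod like this is created with the standard kubectl command (not shown in the question):

    kubectl create -f rbd-with-secret.json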

The secret:

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFBemV6bFdZTXdXQWhBQThxeG1IT2NKa0QrYnE0K3RZUmtsVncK
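
For reference, the data.key value is the base64-encoded Ceph key; a minimal sketch of producing it, assuming the ceph CLI is available on an admin host:

    ceph auth get-key client.admin | base64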

The ceph configuration is in /etc/ceph/:

core@core-1-94 ~/kubernetes/examples/rbd $ ls -alh /etc/ceph
total 20K
drwxr-xr-x  2 root root 4.0K Nov  6 18:38 .
drwxr-xr-x 26 root root 4.0K Nov  9 17:07 ..
-rw-------  1 root root   63 Nov  4 11:27 ceph.client.admin.keyring
-rw-r--r--  1 root root  264 Nov  6 18:38 ceph.conf
-rw-r--r--  1 root root  384 Nov  6 14:35 ceph.conf.orig
-rw-------  1 root root    0 Nov  4 11:27 tmpkqDKwf

The key is:

    core@core-1-94 ~/kubernetes/examples/rbd $ sudo cat /etc/ceph/ceph.client.admin.keyring
    [client.admin]
          key = AQAzezlWYMwWAhAA8qxmHOcJkD+bq4+tYRklVw==

1 Answer

You will get "no volume plugin matched" if the rbd command is not installed and in the path.

As the example says, you need to make sure ceph is installed on the Kubernetes nodes. For example, on Fedora:

    $ sudo yum -y install ceph-common
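
A quick sanity check (a sketch, assuming a shell on each node that will map the image, such as core-1-97 here):

    $ which rbd       # should print a path, e.g. /usr/bin/rbd
    $ rbd --version   # should print the installed ceph version

If "which rbd" prints nothing, the kubelet cannot match the rbd volume plugin, which produces exactly the "no volume plugin matched" error shown above.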

I will file an issue to clarify the error message.

Answered on 2015-11-16T23:22:50.840