
I am trying to use the Kubernetes Cinder plugin to create a pod volume, but I see no activity at all between my cluster and Cinder to attach the device.

Kubernetes version:

kubectl version
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.0", GitCommit:"a16c0a7f71a6f93c7e0f222d961f4675cd97a46b", GitTreeState:"clean", BuildDate:"2016-09-26T18:16:57Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.0", GitCommit:"a16c0a7f71a6f93c7e0f222d961f4675cd97a46b", GitTreeState:"clean", BuildDate:"2016-09-26T18:10:32Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}

How kubelet is started and its current status:

systemctl status kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Tue 2016-12-06 05:18:54 EST; 16h ago
     Docs: http://kubernetes.io/docs/
 Main PID: 3677 (kubelet)
    Tasks: 34
   Memory: 38.8M
      CPU: 14min 3.458s
   CGroup: /system.slice/kubelet.service
           ├─3677 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true --cloud-provider=openstack --cloud-config=/home/tcluser/yyf/config/cloud.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-pr
           └─3707 journalctl -k -f

Here is my cloud.conf file:

cat /home/<user>/config/cloud.conf 
[Global]
auth-url=http://<openstack-url>:35357/v3
username=<user>
password=<password>
region=<region>
tenant-name=<tenant-name>
tenant-id=<tenant-id>
domain-name=default
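
A quick way to sanity-check these exact credentials against Keystone is the OpenStack CLI (just a sketch; the OS_* variable mapping below is my assumption about how the cloud.conf keys translate):

export OS_AUTH_URL=http://<openstack-url>:35357/v3   # auth-url
export OS_IDENTITY_API_VERSION=3
export OS_USERNAME=<user>                            # username
export OS_PASSWORD=<password>                        # password
export OS_PROJECT_NAME=<tenant-name>                 # tenant-name
export OS_USER_DOMAIN_NAME=default                   # domain-name
export OS_PROJECT_DOMAIN_NAME=default
openstack token issue                                # prints a token if the credentials are valid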

It looks like Kubernetes can talk to OpenStack successfully. From /var/log/syslog:

openstack.go:215] Got instance id from http://169.254.169.254/openstack/2012-08-10/meta_data.json: 5ff7824e-b201-4c69-a422-44712953407f
server.go:355] Successfully initialized cloud provider: "openstack" from the config file: "/home/<user>/config/cloud.conf"

My pod YAML file and the output of cinder list:

cat pod_cinder.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    name: busybox
spec:
  containers:
    - resources:
        limits:
          cpu: 0.5
      image: busybox:latest
      name: busybox
      command: ["/bin/sh", "-c", "sleep 3600"]
      volumeMounts:
        - mountPath: /mydata
          name: datavo1
  volumes:
    - name: datavo1
      cinder:
        volumeID: d94db4e5-0274-4a74-8464-9651e6af31d9
        fsType: ext4

cinder list 
+--------------------------------------+-----------+------------------+-----------+------+-------------+----------+-------------+-------------+
|                  ID                  |   Status  | Migration Status |    Name   | Size | Volume Type | Bootable | Multiattach | Attached to |
+--------------------------------------+-----------+------------------+-----------+------+-------------+----------+-------------+-------------+
| d94db4e5-0274-4a74-8464-9651e6af31d9 | available |        -         | kube-test |  10  |   storage   |  false   |    False    |             |
+--------------------------------------+-----------+------------------+-----------+------+-------------+----------+-------------+-------------+
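
As a separate sanity check, the Cinder/Nova attach path to a node can be exercised by hand, outside of Kubernetes (a sketch only; the instance ID placeholder is an assumption on my part, the volume ID is the one listed above):

openstack server list                     # find the Nova instance backing the worker node
nova volume-attach <node-instance-id> d94db4e5-0274-4a74-8464-9651e6af31d9
cinder list                               # "Attached to" should now show that instance
nova volume-detach <node-instance-id> d94db4e5-0274-4a74-8464-9651e6af31d9   # detach again before retrying with Kubernetes

If this works, the problem is on the Kubernetes side rather than between Nova and Cinder.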

Then I created the pod. After a while, kubectl describe pod shows:

kubectl describe pod busybox
Name:           busybox
Namespace:      default
Node:           k8s-compute02/10.10.10.122
Start Time:     Tue, 06 Dec 2016 04:10:00 -0500
Labels:         name=busybox
Status:         Pending
IP:
Controllers:    <none>
Containers:
  busybox:
    Container ID:
    Image:              busybox:latest
    Image ID:
    Port:
    Command:
      /bin/sh
      -c
      sleep 3600
    Limits:
      cpu:      500m
    Requests:
      cpu:              500m
    State:              Waiting
      Reason:           ContainerCreating
    Ready:              False
    Restart Count:      0
    Volume Mounts:
      /mydata from datavo1 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-zfvqo (ro)
    Environment Variables:      <none>
Conditions:
  Type          Status
  Initialized   True 
  Ready         False 
  PodScheduled  True 
Volumes:
  datavo1:
  <unknown>
  default-token-zfvqo:
    Type:       Secret (a volume populated by a Secret)
    SecretName: default-token-zfvqo
QoS Class:      Burstable
Tolerations:    <none>
Events:
  FirstSeen     LastSeen        Count   From                    SubobjectPath   Type            Reason          Message
  ---------     --------        -----   ----                    -------------   --------        ------          -------
  17h           8m              465     {kubelet k8s-compute02}                 Warning         FailedMount     Unable to mount volumes for pod "busybox_default(c419f17d-bb93-11e6-a1bd-fa163e1181a8)": timeout expired waiting for volumes to attach/mount for pod "busybox"/"default". list of unattached/unmounted volumes=[datavo1]
  17h           8m              465     {kubelet k8s-compute02}                 Warning         FailedSync      Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "busybox"/"default". list of unattached/unmounted volumes=[datavo1]
  3m            55s             2       {kubelet k8s-compute02}                 Warning         FailedMount     Unable to mount volumes for pod "busybox_default(c419f17d-bb93-11e6-a1bd-fa163e1181a8)": timeout expired waiting for volumes to attach/mount for pod "busybox"/"default". list of unattached/unmounted volumes=[datavo1]
  3m            55s             2       {kubelet k8s-compute02}                 Warning         FailedSync      Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "busybox"/"default". list of unattached/unmounted volumes=[datavo1]

Also, I cannot find anything about attacher.go in /var/log/syslog: when I run "grep attacher.go /var/log/syslog", I get nothing.

The errors can be found below:

tail -f /var/log/syslog
Dec  5 04:52:42 k8s-compute01 kubelet[2060]: I1205 04:52:42.726182    2060 reflector.go:403] pkg/kubelet/kubelet.go:406: Watch close - *api.Node
total 24 items received
Dec  5 04:52:42 k8s-compute01 kubelet[2060]: I1205 04:52:42.736581    2060 reconciler.go:201] Attempting to start VerifyControllerAttachedVolume
for volume "kubernetes.io/cinder/d94db4e5-0274-4a74-8464-9651e6af31d9" (spec.Name: "datavo1") pod "4b5a15b0-bad0-11e6-a1bd-fa163e1181a8" (UID:
"4b5a15b0-bad0-11e6-a1bd-fa163e1181a8")
Dec  5 04:52:42 k8s-compute01 kubelet[2060]: I1205 04:52:42.736833    2060 reconciler.go:225] VerifyControllerAttachedVolume operation started
for volume "kubernetes.io/cinder/d94db4e5-0274-4a74-8464-9651e6af31d9" (spec.Name: "datavo1") pod "4b5a15b0-bad0-11e6-a1bd-fa163e1181a8" (UID:
"4b5a15b0-bad0-11e6-a1bd-fa163e1181a8")
Dec  5 04:52:42 k8s-compute01 kubelet[2060]: I1205 04:52:42.736991    2060 reconciler.go:201] Attempting to start VerifyControllerAttachedVolume
for volume "kubernetes.io/secret/4b5a15b0-bad0-11e6-a1bd-fa163e1181a8-default-token-zfvqo" (spec.Name: "default-token-zfvqo") pod "4b5a15b0-bad0
-11e6-a1bd-fa163e1181a8" (UID: "4b5a15b0-bad0-11e6-a1bd-fa163e1181a8")
Dec  5 04:52:42 k8s-compute01 kubelet[2060]: I1205 04:52:42.737073    2060 reconciler.go:225] VerifyControllerAttachedVolume operation started
for volume "kubernetes.io/secret/4b5a15b0-bad0-11e6-a1bd-fa163e1181a8-default-token-zfvqo" (spec.Name: "default-token-zfvqo") pod "4b5a15b0-bad0
-11e6-a1bd-fa163e1181a8" (UID: "4b5a15b0-bad0-11e6-a1bd-fa163e1181a8")
Dec  5 04:52:42 k8s-compute01 kubelet[2060]: I1205 04:52:42.737152    2060 volume_stat_calculator.go:103] Failed to calculate volume metrics for
pod kube-proxy-amd64-ca6kp_kube-system(09ab318e-bab2-11e6-a1bd-fa163e1181a8) volume dbus: metrics are not supported for MetricsNil Volumes
Dec  5 04:52:42 k8s-compute01 kubelet[2060]: I1205 04:52:42.737207    2060 config.go:98] Looking for [api file], have seen map[file:{} api:{}]
Dec  5 04:52:42 k8s-compute01 kubelet[2060]: I1205 04:52:42.737242    2060 volume_stat_calculator.go:103] Failed to calculate volume metrics for
pod kube-proxy-amd64-ca6kp_kube-system(09ab318e-bab2-11e6-a1bd-fa163e1181a8) volume kubeconfig: metrics are not supported for MetricsNil Volumes
Dec  5 04:52:42 k8s-compute01 kubelet[2060]: I1205 04:52:42.737246    2060 reconciler.go:142] Sources are all ready, starting reconstruct state
function
Dec  5 04:52:42 k8s-compute01 kubelet[2060]: E1205 04:52:42.737479    2060 nestedpendingoperations.go:253] Operation for
"\"kubernetes.io/cinder/d94db4e5-0274-4a74-8464-9651e6af31d9\"" failed. No retries permitted until 2016-12-05 04:54:42.737442277 -0500 EST
(durationBeforeRetry 2m0s). Error: Volume "kubernetes.io/cinder/d94db4e5-0274-4a74-8464-9651e6af31d9" (spec.Name: "datavo1") pod "4b5a15b0-bad0-
11e6-a1bd-fa163e1181a8" (UID: "4b5a15b0-bad0-11e6-a1bd-fa163e1181a8") has not yet been added to the list of VolumesInUse in the node's volume
status.
Dec  5 04:52:42 k8s-compute01 kubelet[2060]: I1205 04:52:42.737575    2060 volume_stat_calculator.go:103] Failed to calculate volume metrics for
pod calico-node-s08on_kube-system(09ab29c4-bab2-11e6-a1bd-fa163e1181a8) volume cni-net-dir: metrics are not supported for MetricsNil Volumes
Dec  5 04:52:42 k8s-compute01 kubelet[2060]: I1205 04:52:42.737591    2060 volume_stat_calculator.go:103] Failed to calculate volume metrics for
pod calico-node-s08on_kube-system(09ab29c4-bab2-11e6-a1bd-fa163e1181a8) volume var-run-calico: metrics are not supported for MetricsNil Volumes
Dec  5 04:52:42 k8s-compute01 kubelet[2060]: I1205 04:52:42.737601    2060 volume_stat_calculator.go:103] Failed to calculate volume metrics for
pod calico-node-s08on_kube-system(09ab29c4-bab2-11e6-a1bd-fa163e1181a8) volume cni-bin-dir: metrics are not supported for MetricsNil Volumes
Dec  5 04:52:42 k8s-compute01 kubelet[2060]: I1205 04:52:42.737612    2060 volume_stat_calculator.go:103] Failed to calculate volume metrics for
pod calico-node-s08on_kube-system(09ab29c4-bab2-11e6-a1bd-fa163e1181a8) volume lib-modules: metrics are not supported for MetricsNil Volumes
Dec  5 04:52:42 k8s-compute01 kubelet[2060]: I1205 04:52:42.737676    2060 reconciler.go:556] Get volumes from pod directory
"/var/lib/kubelet/pods" [{podName:09ab29c4-bab2-11e6-a1bd-fa163e1181a8 volumeSpecName:default-token-1ayim
mountPath:/var/lib/kubelet/pods/09ab29c4-bab2-11e6-a1bd-fa163e1181a8/volumes/kubernetes.io~secret/default-token-1ayim
pluginName:kubernetes.io/secret} {podName:09ab318e-bab2-11e6-a1bd-fa163e1181a8 volumeSpecName:default-token-1ayim
mountPath:/var/lib/kubelet/pods/09ab318e-bab2-11e6-a1bd-fa163e1181a8/volumes/kubernetes.io~secret/default-token-1ayim
pluginName:kubernetes.io/secret} {podName:7233f0a2-b6ae-11e6-84a8-fa163e1181a8 volumeSpecName:default-token-1ayim
mountPath:/var/lib/kubelet/pods/7233f0a2-b6ae-11e6-84a8-fa163e1181a8/volumes/kubernetes.io~secret/default-token-1ayim
pluginName:kubernetes.io/secret}]
Dec  5 04:52:42 k8s-compute01 kubelet[2060]: I1205 04:52:42.737787    2060 kubelet.go:2350] SyncLoop (SYNC): 2 pods; kube-proxy-amd64-
ca6kp_kube-system(09ab318e-bab2-11e6-a1bd-fa163e1181a8), calico-node-s08on_kube-system(09ab29c4-bab2-11e6-a1bd-fa163e1181a8)
Dec  5 04:52:42 k8s-compute01 kubelet[2060]: I1205 04:52:42.737859    2060 openstack_instances.go:131] NodeAddresses(k8s-compute01) =>
[{InternalIP 10.10.10.121}]
Dec  5 04:52:42 k8s-compute01 kubelet[2060]: I1205 04:52:42.737902    2060 generic.go:177] GenericPLEG: Relisting
Dec  5 04:52:42 k8s-compute01 kubelet[2060]: I1205 04:52:42.737902    2060 config.go:98] Looking for [api file], have seen map[file:{} api:{}]
Dec  5 04:52:42 k8s-compute01 kubelet[2060]: I1205 04:52:42.738016    2060 kubelet.go:2373] SyncLoop (housekeeping)

The key log line is:

Attempting to start VerifyControllerAttachedVolume for volume "kubernetes.io/cinder/d94db4e5-0274-4a74-8464-9651e6af31d9" (spec.Name: "datavo1") pod "4b5a15b0-bad0-11e6-a1bd-fa163e1181a8" (UID: "4b5a15b0-bad0-11e6-a1bd-fa163e1181a8")

Then I read the Kubernetes source code:

cat /kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler.go
func (rc *reconciler) reconcile() {
    ......
    // Ensure volumes that should be attached/mounted are attached/mounted.
    for _, volumeToMount := range rc.desiredStateOfWorld.GetVolumesToMount() {
        volMounted, devicePath, err := rc.actualStateOfWorld.PodExistsInVolume(volumeToMount.PodName, volumeToMount.VolumeName)
        volumeToMount.DevicePath = devicePath
        if cache.IsVolumeNotAttachedError(err) {
            if rc.controllerAttachDetachEnabled || !volumeToMount.PluginIsAttachable {
                // Volume is not attached (or doesn't implement attacher), kubelet attach is disabled, wait
                // for controller to finish attaching volume.
                glog.V(12).Infof("Attempting to start VerifyControllerAttachedVolume for volume %q (spec.Name: %q) pod %q (UID: %q)",
                    volumeToMount.VolumeName,
                    volumeToMount.VolumeSpec.Name(),
                    volumeToMount.PodName,
                    volumeToMount.Pod.UID)
                ......
            } else {
                // Volume is not attached to node, kubelet attach is enabled, volume implements an attacher,
                // so attach it
                volumeToAttach := operationexecutor.VolumeToAttach{
                    VolumeName: volumeToMount.VolumeName,
                    VolumeSpec: volumeToMount.VolumeSpec,
                    NodeName:   rc.hostName,
                }
                glog.V(12).Infof("Attempting to start AttachVolume for volume %q (spec.Name: %q)  pod %q (UID: %q)",
                    volumeToMount.VolumeName,
                    volumeToMount.VolumeSpec.Name(),
                    volumeToMount.PodName,
                    volumeToMount.Pod.UID)
                ......
            }

I expected the code to take the "else" branch, but it takes the "if" branch instead. Why?
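
My (possibly wrong) reading is that rc.controllerAttachDetachEnabled is driven by the kubelet flag --enable-controller-attach-detach, which I believe defaults to true, so the first half of the condition is already true regardless of PluginIsAttachable and the kubelet only runs VerifyControllerAttachedVolume, waiting for the controller side to do the attach. If that is right, the attach should come from the controller-manager rather than the kubelet, which might also explain why grepping the node's syslog for attacher.go finds nothing. A sketch of how I would check and change this on the node (the flag name and its default are my assumption, not something I have verified):

grep -r enable-controller-attach-detach /etc/systemd/system/kubelet.service.d /lib/systemd/system/kubelet.service
# If nothing is set, the (presumed) default of true applies and the "if" branch runs.
# To have the kubelet attach the volume itself, one would add to the kubelet arguments:
#   --enable-controller-attach-detach=false
# and then reload and restart:
systemctl daemon-reload
systemctl restart kubelet.service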

Can anyone help me? Thanks a lot!
