
I have completed an ICP installation with a VA node: 1 Master, 1 Proxy, 1 Management, 1 VA, and 3 Workers with GlusterFS.


The Kubernetes Pods are running.

I checked the Pods using "kubectl" to get the Pod list.

Storage: GlusterFS PersistentVolumes on ICP.
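
For reference, these are roughly the commands behind the checks above (the kube-system namespace and the wide output format are my assumptions, not taken from the screenshots):

    # List the Pods of the ICP system components
    kubectl get pods -n kube-system -o wide

    # List the GlusterFS-backed PersistentVolumes and their claims
    kubectl get pv
    kubectl get pvc -n kube-system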

The following describes the error events for the Kubernetes Pods.


custom-metrics-adapter

Events:
      Type    Reason     Age   From                     Message
      ----    ------     ----  ----                     -------
      Normal  Scheduled  17m   default-scheduler        Successfully assigned kube-system/custom-metrics-adapter-5d5b694df7-cggz8 to 192.168.10.126
      Normal  Pulled     17m   kubelet, 192.168.10.126  Container image "swgcluster.icp:8500/ibmcom/curl:4.0.0" already present on machine
      Normal  Created    17m   kubelet, 192.168.10.126  Created container
      Normal  Started    17m   kubelet, 192.168.10.126  Started container

monitoring-grafana

Events:
      Type     Reason       Age   From                     Message
      ----     ------       ----  ----                     -------
      Normal   Scheduled    18m   default-scheduler        Successfully assigned kube-system/monitoring-grafana-799d7fcf97-sj64j to 192.168.10.126
      Warning  FailedMount  1m (x8 over 16m)  kubelet, 192.168.10.126  (combined from similar events): MountVolume.SetUp failed for volume "pvc-251f69e3-fd60-11e8-9779-000c2914ff99" : mount failed: mount failed: exit status 32
    Mounting command: systemd-run
    Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/e2c85434-fd67-11e8-822b-000c2914ff99/volumes/kubernetes.io~glusterfs/pvc-251f69e3-fd60-11e8-9779-000c2914ff99 --scope -- mount -t glusterfs -o log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/pvc-251f69e3-fd60-11e8-9779-000c2914ff99/monitoring-grafana-799d7fcf97-sj64j-glusterfs.log,backup-volfile-servers=192.168.10.115:192.168.10.116:192.168.10.119,auto_unmount,log-level=ERROR 192.168.10.115:vol_946f98c8a92ce2930acd3181d803943c /var/lib/kubelet/pods/e2c85434-fd67-11e8-822b-000c2914ff99/volumes/kubernetes.io~glusterfs/pvc-251f69e3-fd60-11e8-9779-000c2914ff99
    Output: Running scope as unit run-r6ba2425d0e7f437d922dbe0830cd5a97.scope.
    mount: unknown filesystem type 'glusterfs'

     the following error information was pulled from the glusterfs log to help diagnose this issue: could not open log file for pod monitoring-grafana-799d7fcf97-sj64j
      Warning  FailedMount  50s (x8 over 16m)  kubelet, 192.168.10.126  Unable to mount volumes for pod "monitoring-grafana-799d7fcf97-sj64j_kube-system(e2c85434-fd67-11e8-822b-000c2914ff99)": timeout expired waiting for volumes to attach or mount for pod "kube-system"/"monitoring-grafana-799d7fcf97-sj64j". list of unmounted volumes=[grafana-storage]. list of unattached volumes=[grafana-storage config-volume dashboard-volume dashboard-config ds-job-config router-config monitoring-ca-certs monitoring-certs router-entry default-token-f6d9q]

monitoring-prometheus

Events:
  Type     Reason       Age   From                     Message
  ----     ------       ----  ----                     -------
  Normal   Scheduled    19m   default-scheduler        Successfully assigned kube-system/monitoring-prometheus-85546d8575-jr89h to 192.168.10.126
  Warning  FailedMount  4m (x6 over 17m)    kubelet, 192.168.10.126  Unable to mount volumes for pod "monitoring-prometheus-85546d8575-jr89h_kube-system(e2ca91a8-fd67-11e8-822b-000c2914ff99)": timeout expired waiting for volumes to attach or mount for pod "kube-system"/"monitoring-prometheus-85546d8575-jr89h". list of unmounted volumes=[storage-volume]. list of unattached volumes=[config-volume rules-volume etcd-certs storage-volume router-config monitoring-ca-certs monitoring-certs monitoring-client-certs router-entry lua-scripts-config-config default-token-f6d9q]
  Warning  FailedMount  55s (x11 over 17m)  kubelet, 192.168.10.126  (combined from similar events): MountVolume.SetUp failed for volume "pvc-252001ed-fd60-11e8-9779-000c2914ff99" : mount failed: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/e2ca91a8-fd67-11e8-822b-000c2914ff99/volumes/kubernetes.io~glusterfs/pvc-252001ed-fd60-11e8-9779-000c2914ff99 --scope -- mount -t glusterfs -o auto_unmount,log-level=ERROR,log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/pvc-252001ed-fd60-11e8-9779-000c2914ff99/monitoring-prometheus-85546d8575-jr89h-glusterfs.log,backup-volfile-servers=192.168.10.115:192.168.10.116:192.168.10.119 192.168.10.115:vol_f101b55d8b1dc3021ec7689713a74e8c /var/lib/kubelet/pods/e2ca91a8-fd67-11e8-822b-000c2914ff99/volumes/kubernetes.io~glusterfs/pvc-252001ed-fd60-11e8-9779-000c2914ff99
Output: Running scope as unit run-r638272b55bca4869b271e8e4b1ef45cf.scope.
mount: unknown filesystem type 'glusterfs'

 the following error information was pulled from the glusterfs log to help diagnose this issue: could not open log file for pod monitoring-prometheus-85546d8575-jr89h

monitoring-prometheus-alertmanager

Events:
  Type     Reason       Age   From                     Message
  ----     ------       ----  ----                     -------
  Normal   Scheduled    20m   default-scheduler        Successfully assigned kube-system/monitoring-prometheus-alertmanager-65445b66bd-6bfpn to 192.168.10.126
  Warning  FailedMount  1m (x9 over 18m)  kubelet, 192.168.10.126  (combined from similar events): MountVolume.SetUp failed for volume "pvc-251ed00f-fd60-11e8-9779-000c2914ff99" : mount failed: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/e2cbe5e7-fd67-11e8-822b-000c2914ff99/volumes/kubernetes.io~glusterfs/pvc-251ed00f-fd60-11e8-9779-000c2914ff99 --scope -- mount -t glusterfs -o backup-volfile-servers=192.168.10.115:192.168.10.116:192.168.10.119,auto_unmount,log-level=ERROR,log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/pvc-251ed00f-fd60-11e8-9779-000c2914ff99/monitoring-prometheus-alertmanager-65445b66bd-6bfpn-glusterfs.log 192.168.10.115:vol_7766e36a77cbd2c0afe3bd18626bd2c4 /var/lib/kubelet/pods/e2cbe5e7-fd67-11e8-822b-000c2914ff99/volumes/kubernetes.io~glusterfs/pvc-251ed00f-fd60-11e8-9779-000c2914ff99
Output: Running scope as unit run-r35994e15064e48e2a36f69a88009aa5d.scope.
mount: unknown filesystem type 'glusterfs'

 the following error information was pulled from the glusterfs log to help diagnose this issue: could not open log file for pod monitoring-prometheus-alertmanager-65445b66bd-6bfpn
  Warning  FailedMount  23s (x9 over 18m)  kubelet, 192.168.10.126  Unable to mount volumes for pod "monitoring-prometheus-alertmanager-65445b66bd-6bfpn_kube-system(e2cbe5e7-fd67-11e8-822b-000c2914ff99)": timeout expired waiting for volumes to attach or mount for pod "kube-system"/"monitoring-prometheus-alertmanager-65445b66bd-6bfpn". list of unmounted volumes=[storage-volume]. list of unattached volumes=[config-volume storage-volume router-config monitoring-ca-certs monitoring-certs router-entry default-token-f6d9q]
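
The key line in every failing Pod is "mount: unknown filesystem type 'glusterfs'": the kubelet on 192.168.10.126 runs mount -t glusterfs, and mount cannot find the mount.glusterfs helper, which is shipped with the GlusterFS client package. A quick way to confirm this on the affected worker is sketched below (Ubuntu/Debian commands assumed, not taken from the question):

    # Run on the worker node that reports the error (192.168.10.126 here)

    # Is the GlusterFS client package installed?
    dpkg -l | grep glusterfs-client

    # Does the FUSE mount helper exist?
    ls -l /sbin/mount.glusterfs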

1 Answer


I just resolved this problem after reinstalling ICP (IBM Cloud Private).

I checked several possible causes and found that the GlusterFS client had not been fully installed on some of the nodes.

I installed the GlusterFS client on all nodes with the following command (using Ubuntu as the operating system):

sudo apt-get install glusterfs-client -y
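
After installing the client on every node, it may also help to confirm that the mount helper is present and then let the kubelet retry the mounts; the commands below are only a sketch (the Pod names are the ones from the events above):

    # Verify the mount helper now exists on each worker
    ls -l /sbin/mount.glusterfs

    # Delete the stuck monitoring Pods so their Deployments recreate them
    # and the kubelet retries the GlusterFS mounts
    kubectl -n kube-system delete pod \
      monitoring-grafana-799d7fcf97-sj64j \
      monitoring-prometheus-85546d8575-jr89h \
      monitoring-prometheus-alertmanager-65445b66bd-6bfpn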
Answered on 2018-12-12T10:04:15.347