
Question

What the kubectl (1.8.3 on CentOS 7) error messages below actually mean, and how to resolve them.

Nov 19 22:32:24 master kubelet[4425]: E1119 22:32:24.269786 4425 summary.go:92] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get con

Nov 19 22:32:24 master kubelet[4425]: E1119 22:32:24.269802 4425 summary.go:92] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get conta

Research

Found reports of the same error and followed the suggested workaround of updating the kubelet service unit as below, but it did not work.

/etc/systemd/system/kubelet.service

[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/

[Service]
ExecStart=/usr/bin/kubelet --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
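For reference, an edited unit file only takes effect after systemd re-reads it; the standard apply-and-verify sequence (not specific to this issue) is:

```shell
# Re-read unit files, restart the kubelet, and confirm the new flags took effect.
systemctl daemon-reload
systemctl restart kubelet
systemctl status kubelet --no-pager   # ExecStart should now show the cgroup flags
```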

Background

Followed Installing kubeadm to set up the Kubernetes cluster. The Installing Docker section of that document explains how to align the cgroup drivers, as below.

Note: Make sure that the cgroup driver used by kubelet is the same as the one used by Docker. To ensure compatibility you can update Docker, like so:

cat << EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
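One sanity check worth doing at this point (my addition, not from the kubeadm docs): confirm the file written by the heredoc is valid JSON before restarting Docker, since a malformed daemon.json is another common reason the daemon fails to start. `python -m json.tool` is available on a stock CentOS 7 install.

```shell
# Validate /etc/docker/daemon.json; prints the parsed JSON on success.
python -m json.tool /etc/docker/daemon.json \
    && echo "daemon.json: valid JSON" \
    || echo "daemon.json: missing or malformed"
```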

However, doing so caused the docker service to fail to start:

unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives are specified both as a flag
Nov 19 16:55:56 localhost.localdomain systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
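That "specified both as a flag" error means the same daemon option (here the cgroup driver) is being set in two places at once, typically an `--exec-opt` flag in Docker's systemd unit plus the new entry in daemon.json. A quick way to locate the duplicate; the paths below are the usual CentOS 7 locations, adjust as needed:

```shell
# Look for the cgroup-driver option everywhere Docker might read it.
# It must be set in exactly ONE place, or dockerd refuses to start.
grep -rn 'native.cgroupdriver' \
    /usr/lib/systemd/system/docker.service \
    /etc/systemd/system/docker.service.d/ \
    /etc/docker/daemon.json 2>/dev/null || true
```

After removing the duplicate from the unit file (keeping the setting only in daemon.json), run `systemctl daemon-reload` followed by `systemctl restart docker`.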

The master node is Ready and all the system pods are running.

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY     STATUS    RESTARTS   AGE
kube-system   etcd-master                      1/1       Running   0          39m
kube-system   kube-apiserver-master            1/1       Running   0          39m
kube-system   kube-controller-manager-master   1/1       Running   0          39m
kube-system   kube-dns-545bc4bfd4-mqqqk        3/3       Running   0          40m
kube-system   kube-flannel-ds-fclcs            1/1       Running   2          13m
kube-system   kube-flannel-ds-hqlnb            1/1       Running   0          39m
kube-system   kube-proxy-t7z5w                 1/1       Running   0          40m
kube-system   kube-proxy-xdw42                 1/1       Running   0          13m
kube-system   kube-scheduler-master            1/1       Running   0          39m

Environment

Kubernetes 1.8.3 with Flannel on CentOS.

$ kubectl version -o json | python -m json.tool
{
    "clientVersion": {
        "buildDate": "2017-11-08T18:39:33Z",
        "compiler": "gc",
        "gitCommit": "f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd",
        "gitTreeState": "clean",
        "gitVersion": "v1.8.3",
        "goVersion": "go1.8.3",
        "major": "1",
        "minor": "8",
        "platform": "linux/amd64"
    },
    "serverVersion": {
        "buildDate": "2017-11-08T18:27:48Z",
        "compiler": "gc",
        "gitCommit": "f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd",
        "gitTreeState": "clean",
        "gitVersion": "v1.8.3",
        "goVersion": "go1.8.3",
        "major": "1",
        "minor": "8",
        "platform": "linux/amd64"
    }
}


$ kubectl describe node master
Name:               master
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=master
                    node-role.kubernetes.io/master=
Annotations:        flannel.alpha.coreos.com/backend-data={"VtepMAC":"86:b6:7a:d6:7b:b3"}
                    flannel.alpha.coreos.com/backend-type=vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager=true
                    flannel.alpha.coreos.com/public-ip=10.0.2.15
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:             node-role.kubernetes.io/master:NoSchedule
CreationTimestamp:  Sun, 19 Nov 2017 22:27:17 +1100
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Sun, 19 Nov 2017 23:04:56 +1100   Sun, 19 Nov 2017 22:27:13 +1100   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Sun, 19 Nov 2017 23:04:56 +1100   Sun, 19 Nov 2017 22:27:13 +1100   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sun, 19 Nov 2017 23:04:56 +1100   Sun, 19 Nov 2017 22:27:13 +1100   KubeletHasNoDiskPressure     kubelet has no disk pressure
  Ready            True    Sun, 19 Nov 2017 23:04:56 +1100   Sun, 19 Nov 2017 22:32:24 +1100   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.99.10
  Hostname:    master
Capacity:
 cpu:     1
 memory:  3881880Ki
 pods:    110
Allocatable:
 cpu:     1
 memory:  3779480Ki
 pods:    110
System Info:
 Machine ID:                 ca0a351004604dd49e43f8a6258ddd77
 System UUID:                CA0A3510-0460-4DD4-9E43-F8A6258DDD77
 Boot ID:                    e9060efa-42be-498d-8cb8-8b785b51b247
 Kernel Version:             3.10.0-693.el7.x86_64
 OS Image:                   CentOS Linux 7 (Core)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://1.12.6
 Kubelet Version:            v1.8.3
 Kube-Proxy Version:         v1.8.3
PodCIDR:                     10.244.0.0/24
ExternalID:                  master
Non-terminated Pods:         (7 in total)
  Namespace                  Name                              CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                              ------------  ----------  ---------------  -------------
  kube-system                etcd-master                       0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-apiserver-master             250m (25%)    0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-controller-manager-master    200m (20%)    0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-dns-545bc4bfd4-mqqqk         260m (26%)    0 (0%)      110Mi (2%)       170Mi (4%)
  kube-system                kube-flannel-ds-hqlnb             0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-proxy-t7z5w                  0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-scheduler-master             100m (10%)    0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  810m (81%)    0 (0%)      110Mi (2%)       170Mi (4%)
Events:
  Type    Reason                   Age                From                Message
  ----    ------                   ----               ----                -------
  Normal  Starting                 38m                kubelet, master     Starting kubelet.
  Normal  NodeAllocatableEnforced  38m                kubelet, master     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientDisk    37m (x8 over 38m)  kubelet, master     Node master status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  37m (x8 over 38m)  kubelet, master     Node master status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    37m (x7 over 38m)  kubelet, master     Node master status is now: NodeHasNoDiskPressure
  Normal  Starting                 37m                kube-proxy, master  Starting kube-proxy.
  Normal  Starting                 32m                kubelet, master     Starting kubelet.
  Normal  NodeAllocatableEnforced  32m                kubelet, master     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientDisk    32m                kubelet, master     Node master status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  32m                kubelet, master     Node master status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    32m                kubelet, master     Node master status is now: NodeHasNoDiskPressure
  Normal  NodeNotReady             32m                kubelet, master     Node master status is now: NodeNotReady
  Normal  NodeReady                32m                kubelet, master     Node master status is now: NodeReady


2 Answers


The cause of this problem is that the docker version on the node differs from the docker version Kubernetes requires.

You can simply uninstall docker and reinstall the specified docker version on each node, then restart docker; the node comes back online immediately.

The docker images and pods already installed are not affected by this procedure, because the physical folders are still there.

yum remove -y docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine

yum install -y docker-ce-18.09.7 docker-ce-cli-18.09.7 containerd.io
systemctl enable docker
systemctl start docker
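To confirm the reinstall worked before re-checking the node, a few checks (the version string shown is simply what the yum command above installed):

```shell
systemctl is-active docker                      # expect: active
docker version --format '{{.Server.Version}}'   # expect: 18.09.7
kubectl get nodes                               # the node should return to Ready
```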
Answered on 2019-12-11T04:22:01.010

I had exactly the same problem. I added the ExecStart parameters mentioned above but still got the same error. Then I did a kubeadm reset and systemctl daemon-reload and re-created the cluster. The error seems to have gone away. Testing now...
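A rough sketch of that reset-and-recreate sequence. Note that kubeadm reset is destructive (it wipes the cluster state on the node), and the pod CIDR shown is Flannel's default, which is an assumption about this cluster's network setup:

```shell
# On the master node -- this destroys the existing cluster state.
kubeadm reset
systemctl daemon-reload
systemctl restart kubelet
kubeadm init --pod-network-cidr=10.244.0.0/16   # Flannel's default pod CIDR
```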

Answered on 2017-11-19T17:24:41.320