
I have microk8s running on two nodes. Recently it got into a state where the master node cannot reach Ready status because the microk8s.daemon-containerd service fails to start. This started happening after I tried to get a cert-manager configuration running in the k8s cluster.

As far as I can tell, the cert-manager-webhook pod is running on the second node.

I have tried microk8s stop / microk8s start. At this point I have even tried microk8s reset, but containerd keeps showing the same error.

Output:

$ kubectl get node
NAME        STATUS     ROLES    AGE   VERSION
pi-k8s-00   NotReady   <none>   77d   v1.18.6-1+b4f4cb0b7fe3c1
pi-k8s-01   Ready      <none>   77d   v1.19.2-34+37bbd8cebecb60
$ kubectl get pod -n cert-manager
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-676b755d5f-6bjxv              1/1     Running   0          12m
cert-manager-cainjector-795f67b984-tsmw9   1/1     Running   3          12m
cert-manager-webhook-86c4dcd4b5-bgrmb      1/1     Running   0          12m
$ sudo journalctl -u snap.microk8s.daemon-containerd
...
Oct 17 10:42:33 pi-k8s-00 microk8s.daemon-containerd[44363]: time="2020-10-17T10:42:33.848409047Z" level=fatal msg="Failed to run CRI service" error="failed to recover state: failed to reserve sandbox name \"cert-manager-webhook>
Oct 17 10:42:33 pi-k8s-00 systemd[1]: snap.microk8s.daemon-containerd.service: Main process exited, code=exited, status=1/FAILURE
Oct 17 10:42:33 pi-k8s-00 systemd[1]: snap.microk8s.daemon-containerd.service: Failed with result 'exit-code'.
Oct 17 10:42:34 pi-k8s-00 systemd[1]: snap.microk8s.daemon-containerd.service: Scheduled restart job, restart counter is at 5.
Oct 17 10:42:34 pi-k8s-00 systemd[1]: Stopped Service for snap application microk8s.daemon-containerd.
Oct 17 10:42:34 pi-k8s-00 systemd[1]: snap.microk8s.daemon-containerd.service: Start request repeated too quickly.
Oct 17 10:42:34 pi-k8s-00 systemd[1]: snap.microk8s.daemon-containerd.service: Failed with result 'exit-code'.
Oct 17 10:42:34 pi-k8s-00 systemd[1]: Failed to start Service for snap application microk8s.daemon-containerd.
$ uname -a
Linux pi-k8s-00 5.4.0-1021-raspi #24-Ubuntu SMP PREEMPT Mon Oct 5 09:59:23 UTC 2020 aarch64 aarch64 aarch64 GNU/Linux

How can I get the master node back into a good running/Ready state?

-- UPDATE --

Output:

$ less /var/snap/microk8s/current/inspection-report/snap.microk8s.daemon-containerd/journal.log
Oct 18 14:48:03 pi-k8s-00 microk8s.daemon-containerd[239043]: time="2020-10-18T14:48:03.936439781Z" level=fatal msg="Failed to run CRI service" error="failed to recover state: failed to reserve sandbox name \"cert-manager-webhook-64b9b4fdfd-9d6tm_cert-manager_81fb08ac-7e87-42bd-9123-b0b8b098fe50_3\": name \"cert-manager-webhook-64b9b4fdfd-9d6tm_cert-manager_81fb08ac-7e87-42bd-9123-b0b8b098fe50_3\" is reserved for \"149b0aa92e3eb042f87353ead44a7247e756c8071f804bfbec3b781a5565e52c\""

The last log line shows that the sandbox name is reserved for a given ID.

What would that ID be? Where should I look, and what should I do to release whatever is holding it?
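For what it's worth, the 64-character hex string after `is reserved for` in the log line is the ID containerd is complaining about. It can be pulled out of the log text mechanically; a minimal sketch using the log line quoted above:

```shell
# Extract the container ID that the sandbox name is reserved for,
# i.e. the 64-hex-digit string following 'is reserved for' in the
# containerd fatal log line shown above.
log='name "cert-manager-webhook-64b9b4fdfd-9d6tm_cert-manager_81fb08ac-7e87-42bd-9123-b0b8b098fe50_3" is reserved for "149b0aa92e3eb042f87353ead44a7247e756c8071f804bfbec3b781a5565e52c"'

id=$(printf '%s' "$log" \
  | grep -oE 'is reserved for "[0-9a-f]{64}"' \
  | grep -oE '[0-9a-f]{64}')

echo "$id"
```

The same two-step grep works directly against the journal log file if you pipe it in instead of the `$log` variable.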

Following a comment in "failed to reserve sandbox name" error after hard reboot #1014, I tried:

$ sudo ctr -n=k8s.io containers info 149b0aa92e3eb042f87353ead44a7247e756c8071f804bfbec3b781a5565e52c
ctr: container "149b0aa92e3eb042f87353ead44a7247e756c8071f804bfbec3b781a5565e52c" in namespace "k8s.io": not found

But as the output shows, doesn't that mean no container with that ID exists?


1 Answer


It seems the containerd data had become corrupted, so the fix was to have containerd recreate its data by doing the following:

$ microk8s.stop
$ mv /var/snap/microk8s/common/var/lib/containerd /var/snap/microk8s/common/var/lib/_containerd
$ microk8s.start
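For reference, those three steps can be wrapped in a small helper script. This is just a sketch (the function names are mine, and the `_containerd` backup naming simply mirrors the `mv` above):

```shell
# Given a data directory path, return the sibling backup path with
# a leading underscore, matching the rename used in the fix above.
backup_path() {
  local dir=$1
  printf '%s/_%s\n' "$(dirname "$dir")" "$(basename "$dir")"
}

# Stop microk8s, move the containerd state aside so it gets
# recreated on start, then bring microk8s back up.
reset_containerd_state() {
  local data_dir=/var/snap/microk8s/common/var/lib/containerd
  microk8s.stop
  sudo mv "$data_dir" "$(backup_path "$data_dir")"
  microk8s.start
}

backup_path /var/snap/microk8s/common/var/lib/containerd
# prints /var/snap/microk8s/common/var/lib/_containerd
```

Keeping the old directory around (rather than deleting it) means you can diff or restore it later if the restart does not help.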

The Kubernetes master node shows status Ready again:

$ kubectl get node
NAME        STATUS   ROLES    AGE   VERSION
pi-k8s-00   Ready    <none>   84d   v1.19.2-34+37bbd8cebecb60
pi-k8s-01   Ready    <none>   84d   v1.19.2-34+37bbd8cebecb60
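If you want to verify this mechanically rather than by eye, the STATUS column of `kubectl get node` can be checked with a short awk pipeline; a sketch, here fed the table above as sample input:

```shell
# Succeed (exit 0) only if every node row in `kubectl get node`
# output reports STATUS "Ready"; skip the header row (NR > 1).
all_nodes_ready() {
  awk 'NR > 1 && $2 != "Ready" { bad = 1 } END { exit bad }'
}

printf '%s\n' \
  'NAME        STATUS   ROLES    AGE   VERSION' \
  'pi-k8s-00   Ready    <none>   84d   v1.19.2-34+37bbd8cebecb60' \
  'pi-k8s-01   Ready    <none>   84d   v1.19.2-34+37bbd8cebecb60' \
  | all_nodes_ready && echo "all nodes Ready"
```

In a live cluster you would pipe `kubectl get node` into the function instead of the sample text.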

For more details, see my post on the microk8s GitHub issue page, Failed to Reserve Sandbox Name.

answered 2020-10-24T10:59:00.500