I have a cluster with one control plane and two worker nodes. I am facing an issue where pods get stuck in the Pending state, and describing the pod or the nodes shows no events at all. The problem is not there at the moment I create the cluster; it only appears a few days later. Below are the outputs and messages I have looked at. At this point I have run out of troubleshooting leads.
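
For context, this is a kind cluster named dev (see the ProviderID fields further down). Roughly how it was created, reconstructed from the node list below (a sketch; the exact config and options used may have differed):

cat > kind-dev.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF
kind create cluster --name dev --config kind-dev.yaml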

kubectl get nodes -o wide
NAME                STATUS   ROLES                  AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION      CONTAINER-RUNTIME
dev-control-plane   Ready    control-plane,master   4d6h   v1.20.7   172.18.0.2    <none>        Ubuntu 21.04   5.11.0-16-generic   containerd://1.5.2
dev-worker          Ready    <none>                 4d6h   v1.20.7   172.18.0.3    <none>        Ubuntu 21.04   5.11.0-16-generic   containerd://1.5.2
dev-worker2         Ready    <none>                 4d6h   v1.20.7   172.18.0.4    <none>        Ubuntu 21.04   5.11.0-16-generic   containerd://1.5.2

When I describe the nodes, no events are recorded:

kubectl describe nodes 
Name:               dev-control-plane
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=dev-control-plane
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 09 Dec 2021 18:03:10 +0530
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  dev-control-plane
  AcquireTime:     <unset>
  RenewTime:       Tue, 14 Dec 2021 00:18:16 +0530
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Tue, 14 Dec 2021 00:15:15 +0530   Thu, 09 Dec 2021 18:03:07 +0530   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Tue, 14 Dec 2021 00:15:15 +0530   Thu, 09 Dec 2021 18:03:07 +0530   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Tue, 14 Dec 2021 00:15:15 +0530   Thu, 09 Dec 2021 18:03:07 +0530   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Tue, 14 Dec 2021 00:15:15 +0530   Thu, 09 Dec 2021 18:03:34 +0530   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  172.18.0.2
  Hostname:    dev-control-plane
Capacity:
  cpu:                12
  ephemeral-storage:  490691512Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             15237756Ki
  pods:               110
Allocatable:
  cpu:                12
  ephemeral-storage:  490691512Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             15237756Ki
  pods:               110
System Info:
  Machine ID:                 71683ce055cf4961b8a3ee1c84333375
  System UUID:                1afd3039-3bfc-4ae0-9f08-a07c860b766e
  Boot ID:                    71486fa9-9e88-4991-b71b-0cabb5682524
  Kernel Version:             5.11.0-16-generic
  OS Image:                   Ubuntu 21.04
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.5.2
  Kubelet Version:            v1.20.7
  Kube-Proxy Version:         v1.20.7
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
ProviderID:                   kind://docker/dev/dev-control-plane
Non-terminated Pods:          (7 in total)
  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
  kube-system                 etcd-dev-control-plane                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         13h
  kube-system                 kindnet-kczjl                                100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      4d6h
  kube-system                 kube-apiserver-dev-control-plane             250m (2%)     0 (0%)      0 (0%)           0 (0%)         7h2m
  kube-system                 kube-controller-manager-dev-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         4h59m
  kube-system                 kube-proxy-zpqk9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4d6h
  kube-system                 kube-scheduler-dev-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         7h
  metallb-system              speaker-7sr4c                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4d6h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                750m (6%)   100m (0%)
  memory             150Mi (1%)  50Mi (0%)
  ephemeral-storage  100Mi (0%)  0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:              <none>


Name:               dev-worker
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=dev-worker
                    kubernetes.io/os=linux
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 09 Dec 2021 18:03:38 +0530
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  dev-worker
  AcquireTime:     <unset>
  RenewTime:       Tue, 14 Dec 2021 00:18:16 +0530
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Tue, 14 Dec 2021 00:17:56 +0530   Thu, 09 Dec 2021 18:03:38 +0530   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Tue, 14 Dec 2021 00:17:56 +0530   Thu, 09 Dec 2021 18:03:38 +0530   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Tue, 14 Dec 2021 00:17:56 +0530   Thu, 09 Dec 2021 18:03:38 +0530   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Tue, 14 Dec 2021 00:17:56 +0530   Mon, 13 Dec 2021 17:51:06 +0530   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  172.18.0.3
  Hostname:    dev-worker
Capacity:
  cpu:                12
  ephemeral-storage:  490691512Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             15237756Ki
  pods:               110
Allocatable:
  cpu:                12
  ephemeral-storage:  490691512Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             15237756Ki
  pods:               110
System Info:
  Machine ID:                 850286600982428ba831af865181ed72
  System UUID:                8f9a2778-c12d-4c6f-89aa-e35a1ab4f630
  Boot ID:                    71486fa9-9e88-4991-b71b-0cabb5682524
  Kernel Version:             5.11.0-16-generic
  OS Image:                   Ubuntu 21.04
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.5.2
  Kubelet Version:            v1.20.7
  Kube-Proxy Version:         v1.20.7
PodCIDR:                      10.244.2.0/24
PodCIDRs:                     10.244.2.0/24
ProviderID:                   kind://docker/dev/dev-worker
Non-terminated Pods:          (3 in total)
  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                ------------  ----------  ---------------  -------------  ---
  kube-system                 kindnet-vrqsz       100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      4d6h
  kube-system                 kube-proxy-2j2jj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4d6h
  metallb-system              speaker-8c4rk       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4d6h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests   Limits
  --------           --------   ------
  cpu                100m (0%)  100m (0%)
  memory             50Mi (0%)  50Mi (0%)
  ephemeral-storage  0 (0%)     0 (0%)
  hugepages-1Gi      0 (0%)     0 (0%)
  hugepages-2Mi      0 (0%)     0 (0%)
Events:              <none>


Name:               dev-worker2
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=dev-worker2
                    kubernetes.io/os=linux
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 09 Dec 2021 18:03:38 +0530
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  dev-worker2
  AcquireTime:     <unset>
  RenewTime:       Tue, 14 Dec 2021 00:18:16 +0530
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Tue, 14 Dec 2021 00:16:15 +0530   Thu, 09 Dec 2021 18:03:38 +0530   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Tue, 14 Dec 2021 00:16:15 +0530   Thu, 09 Dec 2021 18:03:38 +0530   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Tue, 14 Dec 2021 00:16:15 +0530   Thu, 09 Dec 2021 18:03:38 +0530   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Tue, 14 Dec 2021 00:16:15 +0530   Thu, 09 Dec 2021 18:03:48 +0530   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  172.18.0.4
  Hostname:    dev-worker2
Capacity:
  cpu:                12
  ephemeral-storage:  490691512Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             15237756Ki
  pods:               110
Allocatable:
  cpu:                12
  ephemeral-storage:  490691512Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             15237756Ki
  pods:               110
System Info:
  Machine ID:                 e68f0d5d1ff542d5a9e822d04a4c65ea
  System UUID:                68068b61-c0ce-449f-a60a-a7ebb54674b7
  Boot ID:                    71486fa9-9e88-4991-b71b-0cabb5682524
  Kernel Version:             5.11.0-16-generic
  OS Image:                   Ubuntu 21.04
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.5.2
  Kubelet Version:            v1.20.7
  Kube-Proxy Version:         v1.20.7
PodCIDR:                      10.244.1.0/24
PodCIDRs:                     10.244.1.0/24
ProviderID:                   kind://docker/dev/dev-worker2
Non-terminated Pods:          (3 in total)
  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                ------------  ----------  ---------------  -------------  ---
  kube-system                 kindnet-dqgpn       100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      4d6h
  kube-system                 kube-proxy-bghtx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4d6h
  metallb-system              speaker-wd7ft       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4d6h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests   Limits
  --------           --------   ------
  cpu                100m (0%)  100m (0%)
  memory             50Mi (0%)  50Mi (0%)
  ephemeral-storage  0 (0%)     0 (0%)
  hugepages-1Gi      0 (0%)     0 (0%)
  hugepages-2Mi      0 (0%)     0 (0%)
Events:              <none>
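
To summarize the scheduling-relevant parts of the output above: only the control plane carries a NoSchedule taint, both workers are untainted and Ready, and allocatable CPU/memory are nowhere near exhausted. For reference, a quick way to list just the taints (standard kubectl):

kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints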

The same applies when describing the pending pod:

kubectl describe pod nginx3
Name:         nginx3
Namespace:    default
Priority:     0
Node:         <none>
Labels:       run=nginx3
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
  nginx3:
    Image:        nginx
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-g26tr (ro)
Volumes:
  default-token-g26tr:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-g26tr
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>
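
Nothing in the spec should block scheduling: no nodeSelector, no affinity, no resource requests. Judging by the run=nginx3 label, the pod was created imperatively with something like:

kubectl run nginx3 --image=nginx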

All pods in the kube-system namespace look fine:

kubectl get pods  -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
etcd-dev-control-plane                      1/1     Running   0          13h
kindnet-dqgpn                               1/1     Running   4          4d6h
kindnet-kczjl                               1/1     Running   4          4d6h
kindnet-vrqsz                               1/1     Running   4          4d6h
kube-apiserver-dev-control-plane            1/1     Running   0          7h5m
kube-controller-manager-dev-control-plane   1/1     Running   4          5h1m
kube-proxy-2j2jj                            1/1     Running   4          4d6h
kube-proxy-bghtx                            1/1     Running   4          4d6h
kube-proxy-zpqk9                            1/1     Running   4          4d6h
kube-scheduler-dev-control-plane            1/1     Running   4          7h3m

MetalLB is also deployed:

kubectl get pods -n  metallb-system
NAME            READY   STATUS    RESTARTS   AGE
speaker-7sr4c   1/1     Running   4          4d6h
speaker-8c4rk   1/1     Running   4          4d6h
speaker-wd7ft   1/1     Running   6          4d6h
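
One detail that seems odd: nearly every daemon pod is 4d6h old yet shows 4+ restarts, so the containers have been restarted without the pods being recreated. If it helps, logs from the previous container instance can be pulled with --previous, e.g. (pod name from the listing above):

kubectl logs -n metallb-system speaker-wd7ft --previous --tail=20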

I did notice some suspicious logs in the kube-scheduler pod, complaining that connections to the API server are being refused:
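
(For reference, these were pulled with the standard logs command; the pod name comes from the kube-system listing above.)

kubectl logs -n kube-system kube-scheduler-dev-control-plane --tail=100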

E1213 18:49:11.055015       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://172.18.0.3:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:49:18.462234       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://172.18.0.3:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:49:20.875619       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://172.18.0.3:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:49:21.398879       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.18.0.3:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:49:24.216140       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://172.18.0.3:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:49:30.550908       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://172.18.0.3:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:49:33.391076       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://172.18.0.3:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:49:40.663384       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://172.18.0.3:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:49:42.069543       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://172.18.0.3:6443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:49:46.514854       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.18.0.3:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:49:49.125063       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://172.18.0.3:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:49:56.713987       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://172.18.0.3:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:49:57.622639       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://172.18.0.3:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:49:58.085948       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://172.18.0.3:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:50:04.678999       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://172.18.0.3:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:50:07.131961       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.18.0.3:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:50:16.841002       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://172.18.0.3:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:50:17.867246       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://172.18.0.3:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:50:18.073130       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://172.18.0.3:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:50:18.200523       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://172.18.0.3:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:50:26.614618       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://172.18.0.3:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:50:28.496243       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.18.0.3:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:50:32.498903       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://172.18.0.3:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:50:32.996472       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://172.18.0.3:6443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:50:46.853901       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://172.18.0.3:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:50:52.536527       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://172.18.0.3:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:50:53.265195       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.18.0.3:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:50:55.692606       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://172.18.0.3:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:50:59.711252       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://172.18.0.3:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:50:59.819107       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://172.18.0.3:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:51:07.978263       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://172.18.0.3:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:51:10.578197       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.18.0.3:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:51:12.975560       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://172.18.0.3:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:51:14.441638       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://172.18.0.3:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:51:15.840423       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://172.18.0.3:6443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:51:17.413701       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://172.18.0.3:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:51:23.459527       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://172.18.0.3:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:51:34.259630       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://172.18.0.3:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:51:40.883489       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://172.18.0.3:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
E1213 18:51:42.076899       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://172.18.0.3:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
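
One thing stands out in these errors: the scheduler is dialing https://172.18.0.3:6443, but per kubectl get nodes above, 172.18.0.3 is now dev-worker's InternalIP; the control plane sits at 172.18.0.2. To compare where the scheduler's kubeconfig points against the containers' current addresses, something like the following should work (assuming the standard kubeadm config path inside the kind node container):

docker exec dev-control-plane grep 'server:' /etc/kubernetes/scheduler.conf
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' dev-control-plane dev-worker dev-worker2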

I have no idea what this problem is; I would be grateful if someone could give me some pointers to troubleshoot further.
