
I have a kube cluster set up with kubeadm init (mostly default settings). Everything works as expected, except that when one of my nodes goes offline while pods are running on it, the pods stay in the Running state indefinitely. From what I've read, they should transition to the Unknown or Failed state, and after the --pod-eviction-timeout (default 5m) they should be rescheduled onto another healthy node.
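
For context, my understanding is that this timeout is a kube-controller-manager flag; on a kubeadm control plane it should live in the static pod manifest on the master (the path and snippet below assume kubeadm defaults):

# /etc/kubernetes/manifests/kube-controller-manager.yaml (kubeadm default path)
# The kubelet watches this directory, so saving an edit restarts the
# controller manager with the new flag value.
spec:
  containers:
  - command:
    - kube-controller-manager
    - --pod-eviction-timeout=5m0s   # the 5m default mentioned above
    # ...other flags unchanged...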

Here are my pods after node7 had been offline for more than 20 minutes (I also left it for over two days, and nothing was rescheduled):

kubectl get pods -o wide
NAME                                   READY   STATUS    RESTARTS   AGE   IP             NODE    NOMINATED NODE   READINESS GATES
workshop-30000-77b95f456c-sxkp5        1/1     Running   0          20m   REDACTED       node7   <none>           <none>
workshop-operator-657b45b6b8-hrcxr     2/2     Running   0          23m   REDACTED       node7   <none>           <none>

kubectl get deployments -o wide
NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS         IMAGES                                                                                          SELECTOR
deployment.apps/workshop-30000      0/1     1            0           21m   workshop-ubuntu    REDACTED                                                            client=30000
deployment.apps/workshop-operator   0/1     1            0           17h   ansible,operator   REDACTED   name=workshop-operator

You can see the pods are still marked Running, while their deployments show Ready: 0/1.

Here are my nodes:

kubectl get nodes -o wide
NAME                STATUS     ROLES    AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION     CONTAINER-RUNTIME
kubernetes-master   Ready      master   34d    v1.17.3   REDACTED      <none>        Ubuntu 19.10   5.3.0-42-generic   docker://19.3.2
kubernetes-worker   NotReady   <none>   34d    v1.17.3   REDACTED      <none>        Ubuntu 19.10   5.3.0-29-generic   docker://19.3.2
node3               NotReady   worker   21d    v1.17.3   REDACTED      <none>        Ubuntu 19.10   5.3.0-40-generic   docker://19.3.2
node4               Ready      <none>   19d    v1.17.3   REDACTED      <none>        Ubuntu 19.10   5.3.0-40-generic   docker://19.3.2
node6               NotReady   <none>   5d7h   v1.17.4   REDACTED      <none>        Ubuntu 19.10   5.3.0-42-generic   docker://19.3.6
node7               NotReady   <none>   5d6h   v1.17.4   REDACTED      <none>        Ubuntu 19.10   5.3.0-42-generic   docker://19.3.6

What could the problem be? All of my containers have readiness and liveness probes. I have searched the documentation and elsewhere, but I can't find anything that addresses this.

Currently, if a node goes down, the only way I can get its pods rescheduled onto live nodes is to delete them manually with --force and --grace-period=0, which defeats some of Kubernetes' main goals: automation and self-healing.
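
For completeness, the manual workaround looks like this (pod name taken from the listing above; note that this only removes the API object, it cannot actually stop the container on the unreachable node):

kubectl delete pod workshop-30000-77b95f456c-sxkp5 --force --grace-period=0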

According to the docs (https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-lifetime): "If a node dies or is disconnected from the rest of the cluster, Kubernetes applies a policy for setting the phase of all Pods on the lost node to Failed."

-------- Additional information --------

kubectl describe pods workshop-30000-77b95f456c-sxkp5
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  <unknown>          default-scheduler  Successfully assigned workshop-operator/workshop-30000-77b95f456c-sxkp5 to node7
  Normal   Pulling    37m                kubelet, node7     Pulling image "REDACTED"
  Normal   Pulled     37m                kubelet, node7     Successfully pulled image "REDACTED"
  Normal   Created    37m                kubelet, node7     Created container workshop-ubuntu
  Normal   Started    37m                kubelet, node7     Started container workshop-ubuntu
  Warning  Unhealthy  36m (x2 over 36m)  kubelet, node7     Liveness probe failed: Get http://REDACTED:8080/healthz: dial tcp REDACTED:8000: connect: connection refused
  Warning  Unhealthy  36m (x3 over 36m)  kubelet, node7     Readiness probe failed: Get http://REDACTED:8000/readyz: dial tcp REDACTED:8000: connect: connection refused

I believe those liveness and readiness probe failures were just due to a slow startup. There appear to be no liveness/readiness checks after the node went down (the last check was 37 minutes ago).
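
In case it is relevant: as far as I understand, eviction is driven by the node controller tainting the lost node, so this is how one could verify the taint is actually applied to node7 (the taint keys are from the docs; the exact output may differ):

kubectl describe node node7 | grep -A 2 Taints
# expected on an unreachable node, something like:
#   Taints: node.kubernetes.io/unreachable:NoExecute
#           node.kubernetes.io/unreachable:NoSchedule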

This is a self-hosted cluster with the following versions:

Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:14:22Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:07:13Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}

Thanks to anyone who can help.

Edit: This may have been a bug, or possibly a misconfiguration from when the cluster was originally bootstrapped with kubeadm. Completely reinstalling the kubernetes cluster and updating from 1.17.4 to 1.18 solved the problem, and pods are now rescheduled off dead nodes.


1 Answer


Since Kubernetes 1.13, with the TaintBasedEvictions feature gate set to true (the default), you can set the pod eviction time in the pod spec under tolerations:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      tolerations:
      - key: "node.kubernetes.io/unreachable"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 2
      - key: "node.kubernetes.io/not-ready"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 2
      containers:
      - image: busybox
        command:
        - sleep
        - "3600"
        imagePullPolicy: IfNotPresent
        name: busybox
      restartPolicy: Always

If the pods are not rescheduled after 300 seconds (the default) or the 2 seconds set in the tolerations above, you need to run kubectl delete node, which triggers rescheduling of the pods that were on that node.
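
For example, with the dead node from the question (this deletes the Node object from the API server; the pods bound to it are then garbage-collected and their controllers recreate them on other nodes):

kubectl delete node node7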

Answered 2020-03-23T04:48:45.403