Question
The Kubernetes Monitor Node Health docs mention node-problem-detector. How do we use it if we're not on GCE? Does it report information to the dashboard, or provide API metrics?
"This tool aims to make various node problems visible to the upstream layers in the cluster management stack. It is a daemon that runs on each node, detects node problems and reports them to the apiserver."
OK, but... what does that actually mean? How can I tell whether anything made it into the apiserver?
What does it look like before and after? Knowing that would help me understand what it's doing.
Before installing node-problem-detector, I see:
Bash# kubectl describe node ip-10-40-22-166.ec2.internal | grep -i condition -A 20 | grep Ready -B 20
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Thu, 20 Jun 2019 12:30:05 -0400 Thu, 20 Jun 2019 12:30:05 -0400 WeaveIsUp Weave pod has set this
OutOfDisk False Thu, 20 Jun 2019 18:27:39 -0400 Thu, 20 Jun 2019 12:29:44 -0400 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 20 Jun 2019 18:27:39 -0400 Thu, 20 Jun 2019 12:29:44 -0400 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 20 Jun 2019 18:27:39 -0400 Thu, 20 Jun 2019 12:29:44 -0400 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 20 Jun 2019 18:27:39 -0400 Thu, 20 Jun 2019 12:29:44 -0400 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 20 Jun 2019 18:27:39 -0400 Thu, 20 Jun 2019 12:30:14 -0400 KubeletReady kubelet is posting ready status
After installing node-problem-detector, I see:
Bash# helm upgrade --install npd stable/node-problem-detector -f node-problem-detector.values.yaml
Bash# kubectl rollout status daemonset npd-node-problem-detector # (wait for the daemonset to come up)
Bash# kubectl describe node ip-10-40-22-166.ec2.internal | grep -i condition -A 20 | grep Ready -B 20
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
DockerDaemon False Thu, 20 Jun 2019 22:06:17 -0400 Thu, 20 Jun 2019 22:04:14 -0400 DockerDaemonHealthy Docker daemon is healthy
EBSHealth False Thu, 20 Jun 2019 22:06:17 -0400 Thu, 20 Jun 2019 22:04:14 -0400 NoVolumeErrors Volumes are attaching successfully
KernelDeadlock False Thu, 20 Jun 2019 22:06:17 -0400 Thu, 20 Jun 2019 22:04:14 -0400 KernelHasNoDeadlock kernel has no deadlock
ReadonlyFilesystem False Thu, 20 Jun 2019 22:06:17 -0400 Thu, 20 Jun 2019 22:04:14 -0400 FilesystemIsNotReadOnly Filesystem is not read-only
NetworkUnavailable False Thu, 20 Jun 2019 12:30:05 -0400 Thu, 20 Jun 2019 12:30:05 -0400 WeaveIsUp Weave pod has set this
OutOfDisk False Thu, 20 Jun 2019 22:07:10 -0400 Thu, 20 Jun 2019 12:29:44 -0400 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 20 Jun 2019 22:07:10 -0400 Thu, 20 Jun 2019 12:29:44 -0400 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 20 Jun 2019 22:07:10 -0400 Thu, 20 Jun 2019 12:29:44 -0400 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 20 Jun 2019 22:07:10 -0400 Thu, 20 Jun 2019 12:29:44 -0400 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 20 Jun 2019 22:07:10 -0400 Thu, 20 Jun 2019 12:30:14 -0400 KubeletReady kubelet is posting ready status
Note: I asked for help coming up with a way to see this for every node, and Kenna Ofoegbu came up with this super useful and readable gem:
zsh# nodes=$(kubectl get nodes | sed '1d' | awk '{print $1}') && for node in $nodes; do; kubectl describe node $node | sed -n '/Conditions/,/Ready/p'; done
Bash# (the same command errors in bash; drop the semicolon after "do" and it runs there too)
OK, so now I know what node-problem-detector does, but... what good does adding a condition to a node do, and how can I use conditions to do something useful?
Question: How do I use the Kubernetes node-problem-detector?
Use case #1: Auto-heal broken nodes
Step 1.) Install node-problem-detector so it can attach new condition metadata to nodes.
Step 2.) Leverage Planetlabs/draino to cordon and drain nodes with bad conditions (see the sketch after this list).
Step 3.) Leverage https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler to auto-heal. (When a node is cordoned and drained, it's marked unschedulable, which triggers a new node to be provisioned; the bad node's resource utilization then drops so low that it gets deprovisioned.)
Source: https://github.com/kubernetes/node-problem-detector#remedy-systems
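To make Step 2 concrete, here's a minimal bash sketch of the kind of logic draino automates: cordon any node whose KernelDeadlock condition (one of the conditions node-problem-detector adds) reports True. KernelDeadlock is just an example condition, and draino itself adds draining, rate limiting, and safety checks on top of this:
Bash# for node in $(kubectl get nodes | sed '1d' | awk '{print $1}'); do if [ "$(kubectl get node $node -o jsonpath='{.status.conditions[?(@.type=="KernelDeadlock")].status}')" = "True" ]; then echo "cordoning $node"; kubectl cordon $node; fi; done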
Use case #2: Surface unhealthy node events so Kubernetes can detect them, then feed them into your monitoring stack so you have an auditable historical record of what events occurred and when.
These unhealthy node events are logged somewhere on the host node, but usually the host node generates so much noisy/useless log data that the events aren't collected by default.
node-problem-detector knows where to look for these events on the host node, filters out the noise, and when it sees a signal of a negative outcome, publishes it to its pod log, which is not noisy.
The pod log can then be ingested into an ELK or Prometheus Operator stack, where it can be detected, alerted on, stored, and graphed.
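Before wiring the pod log into a monitoring stack, you can spot-check what node-problem-detector is publishing; a quick sketch, assuming the daemonset name npd-node-problem-detector from the helm install above:
Bash# kubectl logs daemonset/npd-node-problem-detector | tail -20
Bash# kubectl describe node ip-10-40-22-166.ec2.internal | grep Events -A 20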
Also, note that nothing stops you from implementing both use cases.
Update: per a request in the comments, here's a snippet of the node-problem-detector.helm-values.yaml file:
log_monitors:
#https://github.com/kubernetes/node-problem-detector/tree/master/config contains the full list, you can exec into the pod and ls /config/ to see these as well.
- /config/abrt-adaptor.json #Adds ABRT Node Events (ABRT: automatic bug reporting tool), exceptions will show up under "kubectl describe node $NODENAME | grep Events -A 20"
- /config/kernel-monitor.json #Adds 2 new Node Health Condition Checks "KernelDeadlock" and "ReadonlyFilesystem"
- /config/docker-monitor.json #Adds new Node Health Condition Check "DockerDaemon" (Checks if Docker is unhealthy as a result of corrupt image)
# - /config/docker-monitor-filelog.json #Error: "/var/log/docker.log: no such file or directory", the file doesn't exist in the pod; I think you'd have to mount a node hostPath to get it to work, and the gain doesn't seem worth the effort.
# - /config/kernel-monitor-filelog.json #Should add to existing Node Health Check "KernelDeadlock", more thorough detection, but silently fails in NPD pod logs for me.
custom_plugin_monitors: #[]
# Someone said all *-counter plugins are custom plugins; if you put them under log_monitors, you'll get #Error: "Failed to unmarshal configuration file "/config/kernel-monitor-counter.json""
- /config/kernel-monitor-counter.json #Adds new Node Health Condition Check "FrequentUnregisteredNetDevice"
- /config/docker-monitor-counter.json #Adds new Node Health Condition Check "CorruptDockerOverlay2"
- /config/systemd-monitor-counter.json #Adds 3 new Node Health Condition Checks "FrequentKubeletRestart", "FrequentDockerRestart", and "FrequentContainerdRestart"
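After applying a values file like this one, you can verify that each enabled monitor registered its condition; a quick sketch reusing the node name from the examples above (the jsonpath just tabulates each condition's type and status):
Bash# helm upgrade --install npd stable/node-problem-detector -f node-problem-detector.helm-values.yaml
Bash# kubectl get node ip-10-40-22-166.ec2.internal -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\n"}{end}'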
Do you mean: how do I install it?
kubectl create -f https://k8s.io/examples/debug/node-problem-detector.yaml
Considering that node-problem-detector is a Kubernetes addon, you need to install the addon on your own Kubernetes cluster.
A Kubernetes cluster has an addon manager that will make use of it.
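To verify the addon landed, a quick sketch, assuming the manifest's defaults deploy a daemonset into kube-system (adjust the namespace if yours differs):
Bash# kubectl get daemonset --all-namespaces | grep node-problem-detector
Bash# kubectl get pods --all-namespaces -o wide | grep node-problem-detector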