I'm trying to understand how to correctly configure kube-dns on Kubernetes 1.10 with flannel, using containerd as the CRI.
kube-dns fails to run, with the following errors:
kubectl -n kube-system logs kube-dns-595fdb6c46-9tvn9 -c kubedns
I0424 14:56:34.944476 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver...
I0424 14:56:35.444469 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver...
E0424 14:56:35.815863 1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:192: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=0: dial tcp 10.96.0.1:443: getsockopt: no route to host
E0424 14:56:35.815863 1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:189: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 10.96.0.1:443: getsockopt: no route to host
I0424 14:56:35.944444 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver...
I0424 14:56:36.444462 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver...
I0424 14:56:36.944507 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver...
F0424 14:56:37.444434 1 dns.go:209] Timeout waiting for initialization
kubectl -n kube-system describe pod kube-dns-595fdb6c46-9tvn9
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 47m (x181 over 3h) kubelet, worker1 Readiness probe failed: Get http://10.244.0.2:8081/readiness: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Warning BackOff 27m (x519 over 3h) kubelet, worker1 Back-off restarting failed container
Normal Killing 17m (x44 over 3h) kubelet, worker1 Killing container with id containerd://dnsmasq:Container failed liveness probe.. Container will be killed and recreated.
Warning Unhealthy 12m (x178 over 3h) kubelet, worker1 Liveness probe failed: Get http://10.244.0.2:10054/metrics: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Warning BackOff 2m (x855 over 3h) kubelet, worker1 Back-off restarting failed container
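To rule out the probe configuration itself, the health endpoints can be queried directly from the worker. A minimal check, assuming the pod IP 10.244.0.2 reported in the events above:
# Run on worker1; these are the same URLs the kubelet probes.
# A timeout here would point at the pod network path, not the probe config.
curl -m 5 http://10.244.0.2:8081/readiness
curl -m 5 http://10.244.0.2:10054/metrics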
There is indeed no route to the 10.96.0.1 endpoint:
ip route
default via 10.240.0.254 dev ens160
10.240.0.0/24 dev ens160 proto kernel scope link src 10.240.0.21
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink
10.244.0.0/16 dev cni0 proto kernel scope link src 10.244.0.1
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink
10.244.4.0/24 via 10.244.4.0 dev flannel.1 onlink
10.244.5.0/24 via 10.244.5.0 dev flannel.1 onlink
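As I understand it, though, service IPs like 10.96.0.1 are virtual and never appear in the host routing table; kube-proxy is supposed to intercept them. A quick check of where the kernel would currently send that traffic:
# Without kube-proxy rules in the way, this falls through to the default route.
ip route get 10.96.0.1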
What is responsible for configuring the cluster service address range and the associated routes? Is it the container runtime, the overlay network (flannel in this case), or something else? Where should it be configured?
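For reference, the only place I'm aware of that defines the service range is the API server itself. On my controllers it would presumably be set with something like the following (10.96.0.0/12 is the kubeadm default, which would explain the 10.96.0.1 address above):
# kube-apiserver flag defining the service ClusterIP range (assumed value)
kube-apiserver --service-cluster-ip-range=10.96.0.0/12 ...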
10-containerd-net.conflist configures the bridge between the host and my pod network. Can the service network be configured there too?
cat /etc/cni/net.d/10-containerd-net.conflist
{
"cniVersion": "0.3.1",
"name": "containerd-net",
"plugins": [
{
"type": "bridge",
"bridge": "cni0",
"isGateway": true,
"ipMasq": true,
"promiscMode": true,
"ipam": {
"type": "host-local",
"subnet": "10.244.0.0/16",
"routes": [
{ "dst": "0.0.0.0/0" }
]
}
},
{
"type": "portmap",
"capabilities": {"portMappings": true}
}
]
}
EDIT:
I found this from 2016:
As of a couple of weeks ago (I forget the release, but it was a 1.2.x where x != 0) (#24429) we fixed the routing so that any traffic that arrives at a node destined for a service IP will be handled as if it came to a node port. This means you should be able to set up static routes for your service cluster IP range to one or more nodes, and those nodes will act as bridges. This is the same trick most people do with flannel to splice in the overlay.
It's imperfect but it works. In the future we will need to get more precise with the routing if you want optimal behavior (i.e. not losing the client IP), or else we'll see more non-kube-proxy implementations of services.
Is this still relevant? Do I need to set up a static route for the service CIDR? Or is the problem actually with kube-proxy rather than flannel or containerd?
My flannel config:
cat /etc/cni/net.d/10-flannel.conflist
{
"name": "cbr0",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
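Note that both conflists above come from /etc/cni/net.d/. If they really are present on the same node, then as far as I know only the lexicographically first file in that directory is loaded, so the two configs may be fighting each other:
ls /etc/cni/net.d/
# 10-containerd-net.conflist   <- sorts first, so this would be the config in effect
# 10-flannel.conflist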
And kube-proxy:
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-proxy \
--cluster-cidr=10.244.0.0/16 \
--feature-gates=SupportIPVSProxyMode=true \
--ipvs-min-sync-period=5s \
--ipvs-sync-period=5s \
--ipvs-scheduler=rr \
--kubeconfig=/etc/kubernetes/kube-proxy.conf \
--logtostderr=true \
--master=https://192.168.160.1:6443 \
--proxy-mode=ipvs \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
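Since kube-proxy runs in IPVS mode, the service VIPs it programs can be inspected with ipvsadm (assuming it is installed on the node):
# List IPVS virtual servers; if kube-proxy is working, 10.96.0.1:443
# should appear here, forwarding to the apiserver endpoints.
ipvsadm -Ln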
EDIT:
After going through the kube-proxy debugging steps, it appears that kube-proxy cannot reach the master. I suspect this is a large part of the problem. I have 3 controller/master nodes behind a HAProxy load balancer, which is bound to 192.168.160.1:6443 and round-robins to 10.240.0.1[1|2|3]:6443, as can be seen in the output/config above.
In kube-proxy.service I have specified --master=192.168.160.1:6443. Why are connections being attempted to port 443? Can I change this? There doesn't seem to be a port flag. Does it need to use port 443 for some reason?
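For what it's worth, the 10.96.0.1:443 in the kube-dns logs looks like the ClusterIP of the default kubernetes service rather than kube-proxy dialing the master directly; that service listens on 443 and should be translated to the apiserver endpoints on 6443. A sketch of how to confirm the mapping (the output shown is an assumption):
kubectl -n default get svc kubernetes
# NAME         TYPE        CLUSTER-IP   PORT(S)
# kubernetes   ClusterIP   10.96.0.1    443/TCP
kubectl -n default get endpoints kubernetes
# Should list the apiserver addresses on port 6443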