I deployed a k8s cluster on Ubuntu 16.04.3 virtual machines. The cluster consists of 1 master and 3 nodes. The overlay network is flannel.
# kubectl get no
NAME       STATUS                     ROLES     AGE       VERSION
buru       Ready                      <none>    70d       v1.8.4
fraser     Ready,SchedulingDisabled   <none>    2h        v1.8.4
tasmania   Ready                      <none>    1d        v1.8.4
whiddy     Ready,SchedulingDisabled   master    244d      v1.8.4
Despite being configured exactly the same way, two of my nodes (buru and tasmania) work fine, while the third one (fraser) simply refuses to cooperate.
If I ssh into the fraser server, I can reach the overlay network just fine:
root@fraser:~# ifconfig flannel.1
flannel.1 Link encap:Ethernet HWaddr 52:4a:da:84:8a:7b
inet addr:10.244.3.0 Bcast:0.0.0.0 Mask:255.255.255.255
inet6 addr: fe80::504a:daff:fe84:8a7b/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:11 errors:0 dropped:0 overruns:0 frame:0
TX packets:11 errors:0 dropped:8 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:756 (756.0 B) TX bytes:756 (756.0 B)
root@fraser:~# ping 10.244.0.1
PING 10.244.0.1 (10.244.0.1) 56(84) bytes of data.
64 bytes from 10.244.0.1: icmp_seq=1 ttl=64 time=0.764 ms
^C
--- 10.244.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.764/0.764/0.764/0.000 ms
root@fraser:~# ping 10.244.0.1
PING 10.244.0.1 (10.244.0.1) 56(84) bytes of data.
64 bytes from 10.244.0.1: icmp_seq=1 ttl=64 time=0.447 ms
64 bytes from 10.244.0.1: icmp_seq=2 ttl=64 time=1.20 ms
64 bytes from 10.244.0.1: icmp_seq=3 ttl=64 time=0.560 ms
^C
--- 10.244.0.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.447/0.736/1.203/0.334 ms
But the pods clearly cannot reach the overlay network:
# kubectl --all-namespaces=true get po -o wide | grep fraser
kube-system test-fraser 1/1 Running 0 20m 10.244.3.7 fraser
# kubectl -n kube-system exec -ti test-fraser ash
/ # ping 10.244.0.1
PING 10.244.0.1 (10.244.0.1): 56 data bytes
^C
--- 10.244.0.1 ping statistics ---
12 packets transmitted, 0 packets received, 100% packet loss
The test-fraser pod is just an Alpine static pod that I use for troubleshooting.
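For reference, the static pod is just a manifest dropped into the kubelet's static pod directory. A minimal sketch of it follows (the file path, pod name and image tag here are illustrative, not copied from my setup; with the usual static pod naming, the kubelet appends the node name, which is why it shows up as test-fraser):

root@fraser:~# cat <<EOF > /etc/kubernetes/manifests/test.yaml
# Minimal Alpine pod that just sleeps, so I can kubectl exec into it
# for network tests. Name and image tag are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: test
  namespace: kube-system
spec:
  containers:
  - name: alpine
    image: alpine:3.6
    command: ["sleep", "86400"]
EOF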
The same pod, deployed the same way on another node (buru), works just fine.
Since the overlay network works from the host itself, I would say flannel is doing its job here. Yet, for some reason, networking from inside the pods does not work.
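To make "networking inside the pod" concrete, these are the host-side checks I would compare between fraser and a working node like buru. This is a sketch assuming the standard flannel CNI setup, where pods attach to a cni0 bridge whose address must sit inside the node's PodCIDR (10.244.3.0/24 here); a stale cni0 left over from an earlier flannel lease is a known cause of exactly this symptom:

root@fraser:~# ip addr show cni0           # bridge should be 10.244.3.1/24, inside the PodCIDR
root@fraser:~# ip route                    # expect 10.244.3.0/24 dev cni0, plus one route per remote PodCIDR via flannel.1
root@fraser:~# ip link show master cni0    # the pod's veth endpoint should be enslaved to the bridge
root@fraser:~# iptables -S FORWARD         # forwarding between cni0 and flannel.1 must not be dropped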
Additional notes:
- No firewall is enabled on any of the servers
- The Docker version is identical on all nodes (1.13.1); since Docker 1.13 touches iptables forwarding, see the check after this list
- All nodes are up to date as far as Ubuntu updates are concerned
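One thing I still plan to rule out, since it is not covered by "no firewall enabled": Docker 1.13 changed the default policy of the iptables FORWARD chain to DROP, which can black-hole forwarded pod traffic even with ufw off. A quick, test-only check would be:

root@fraser:~# iptables -S FORWARD | head -n1   # if this prints '-P FORWARD DROP', forwarded traffic is being dropped
root@fraser:~# iptables -P FORWARD ACCEPT       # temporary test only, to see whether the pod pings start working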
Can someone help me figure this out?
EDIT
# kubectl describe no fraser
Name: fraser
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=fraser
Annotations: flannel.alpha.coreos.com/backend-data={"VtepMAC":"52:4a:da:84:8a:7b"}
flannel.alpha.coreos.com/backend-type=vxlan
flannel.alpha.coreos.com/kube-subnet-manager=true
flannel.alpha.coreos.com/public-ip=80.211.157.110
node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
Taints: <none>
CreationTimestamp: Thu, 07 Dec 2017 12:51:22 +0100
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Thu, 07 Dec 2017 15:27:27 +0100 Thu, 07 Dec 2017 12:51:22 +0100 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 07 Dec 2017 15:27:27 +0100 Thu, 07 Dec 2017 14:47:57 +0100 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 07 Dec 2017 15:27:27 +0100 Thu, 07 Dec 2017 14:47:57 +0100 KubeletHasNoDiskPressure kubelet has no disk pressure
Ready True Thu, 07 Dec 2017 15:27:27 +0100 Thu, 07 Dec 2017 14:48:07 +0100 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 80.211.157.110
Hostname: fraser
Capacity:
cpu: 4
memory: 8171244Ki
pods: 110
Allocatable:
cpu: 4
memory: 8068844Ki
pods: 110
System Info:
Machine ID: cb102c57fd539a2fb8ffab52578f27bd
System UUID: 423E50F4-C4EF-23F0-F300-B568F4B4B8B1
Boot ID: ca80d640-380a-4851-bab0-ee1fffd20bb2
Kernel Version: 4.4.0-92-generic
OS Image: Ubuntu 16.04.3 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://1.13.1
Kubelet Version: v1.8.4
Kube-Proxy Version: v1.8.4
PodCIDR: 10.244.3.0/24
ExternalID: fraser
Non-terminated Pods: (5 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system filebeat-mghqx 100m (2%) 0 (0%) 100Mi (1%) 200Mi (2%)
kube-system kube-flannel-ds-gvw4s 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-proxy-62vts 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system test-fraser 0 (0%) 0 (0%) 0 (0%) 0 (0%)
prometheus prometheus-prometheus-node-exporter-mwq67 0 (0%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
100m (2%) 0 (0%) 100Mi (1%) 200Mi (2%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 48m kubelet, fraser Starting kubelet.
Normal NodeAllocatableEnforced 48m kubelet, fraser Updated Node Allocatable limit across pods
Normal NodeHasSufficientDisk 48m kubelet, fraser Node fraser status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 48m kubelet, fraser Node fraser status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 48m kubelet, fraser Node fraser status is now: NodeHasNoDiskPressure
Normal NodeNotReady 48m kubelet, fraser Node fraser status is now: NodeNotReady
Normal NodeNotSchedulable 48m kubelet, fraser Node fraser status is now: NodeNotSchedulable
Normal NodeReady 48m kubelet, fraser Node fraser status is now: NodeReady
Normal NodeNotSchedulable 48m kubelet, fraser Node fraser status is now: NodeNotSchedulable
Normal NodeAllocatableEnforced 48m kubelet, fraser Updated Node Allocatable limit across pods
Normal NodeHasSufficientDisk 48m kubelet, fraser Node fraser status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 48m kubelet, fraser Node fraser status is now: NodeHasSufficientMemory
Normal Starting 48m kubelet, fraser Starting kubelet.
Normal NodeNotReady 48m kubelet, fraser Node fraser status is now: NodeNotReady
Normal NodeHasNoDiskPressure 48m kubelet, fraser Node fraser status is now: NodeHasNoDiskPressure
Normal NodeReady 48m kubelet, fraser Node fraser status is now: NodeReady
Normal Starting 39m kubelet, fraser Starting kubelet.
Normal NodeAllocatableEnforced 39m kubelet, fraser Updated Node Allocatable limit across pods
Normal NodeHasSufficientDisk 39m kubelet, fraser Node fraser status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 39m (x2 over 39m) kubelet, fraser Node fraser status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 39m (x2 over 39m) kubelet, fraser Node fraser status is now: NodeHasNoDiskPressure
Normal NodeNotReady 39m kubelet, fraser Node fraser status is now: NodeNotReady
Normal NodeNotSchedulable 39m kubelet, fraser Node fraser status is now: NodeNotSchedulable
Normal NodeReady 39m kubelet, fraser Node fraser status is now: NodeReady
Normal Starting 39m kube-proxy, fraser Starting kube-proxy.