I am running Kubernetes on four CentOS 7 boxes, a master and minions. I have also installed flannel and SkyDNS. The flannel overlay network is 172.17.0.0/16 and my service cluster IP range is 10.254.0.0/16. I am running Spinnaker pods on the k8s cluster, and what I see is that the Spinnaker services cannot find each other. Each pod gets an IP from the 172.17 range, and I can ping a pod from any node using that IP. The services themselves, however, use cluster IPs and cannot talk to each other. Since kube-proxy is supposed to forward this traffic, I looked at the iptables rules, and this is what I see:
[root@MultiNode4 ~$]iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
DOCKER-ISOLATION  all  --  anywhere             anywhere
DOCKER     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  anywhere             anywhere             /* kubernetes service portals */

Chain DOCKER (1 references)
target     prot opt source               destination

Chain DOCKER-ISOLATION (1 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere

Chain KUBE-SERVICES (1 references)
target     prot opt source               destination
REJECT     tcp  --  anywhere             10.254.206.105       /* spinnaker/spkr-clouddriver: has no endpoints */ tcp dpt:afs3-prserver reject-with icmp-port-unreachable
REJECT     tcp  --  anywhere             10.254.162.75        /* spinnaker/spkr-orca: has no endpoints */ tcp dpt:us-srv reject-with icmp-port-unreachable
REJECT     tcp  --  anywhere             10.254.62.109        /* spinnaker/spkr-rush: has no endpoints */ tcp dpt:8085 reject-with icmp-port-unreachable
REJECT     tcp  --  anywhere             10.254.68.125        /* spinnaker/spkr-echo: has no endpoints */ tcp dpt:8089 reject-with icmp-port-unreachable
REJECT     tcp  --  anywhere             10.254.123.127       /* spinnaker/spkr-front50: has no endpoints */ tcp dpt:webcache reject-with icmp-port-unreachable
REJECT     tcp  --  anywhere             10.254.36.197        /* spinnaker/spkr-gate: has no endpoints */ tcp dpt:8084 reject-with icmp-port-unreachable
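Those REJECT rules all say "has no endpoints", so it also seems worth confirming whether the services resolve to any pods at all. A minimal way to check (a sketch; spinnaker and spkr-gate are the namespace and one service name taken from the rule comments above):

# list the endpoints behind each spinnaker service; an empty ENDPOINTS
# column means the service selector matches no ready pods
kubectl get endpoints --namespace=spinnaker

# compare a service's Selector against the actual pod labels
kubectl describe svc spkr-gate --namespace=spinnaker
kubectl get pods --namespace=spinnaker --show-labels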
It looks like kube-proxy is unable to forward traffic. kube-proxy starts up without errors:
[root@MultiNode4 ~$]systemctl status kube-proxy -l
kube-proxy.service - Kubernetes Kube-Proxy Server
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2016-07-07 02:54:54 EDT; 1h 10min ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 7866 (kube-proxy)
Memory: 3.6M
CGroup: /system.slice/kube-proxy.service
└─7866 /usr/bin/kube-proxy --logtostderr=true --v=0 --master=http://centos-master:8080
Jul 07 02:54:54 clm-aus-015349.bmc.com systemd[1]: Started Kubernetes Kube-Proxy Server.
Jul 07 02:54:54 clm-aus-015349.bmc.com systemd[1]: Starting Kubernetes Kube-Proxy Server...
Jul 07 02:54:54 clm-aus-015349.bmc.com kube-proxy[7866]: E0707 02:54:54.754845 7866 server.go:340] Can't get Node "multiNode4", assuming iptables proxy: nodes "MultiNode4" not found
Jul 07 02:54:54 clm-aus-015349.bmc.com kube-proxy[7866]: I0707 02:54:54.756460 7866 server.go:200] Using iptables Proxier.
Jul 07 02:54:54 clm-aus-015349.bmc.com kube-proxy[7866]: I0707 02:54:54.756527 7866 proxier.go:208] missing br-netfilter module or unset br-nf-call-iptables; proxy may not work as intended
Jul 07 02:54:54 clm-aus-015349.bmc.com kube-proxy[7866]: I0707 02:54:54.756551 7866 server.go:213] Tearing down userspace rules.
Jul 07 02:54:54 clm-aus-015349.bmc.com kube-proxy[7866]: I0707 02:54:54.770100 7866 conntrack.go:36] Setting nf_conntrack_max to 262144
Jul 07 02:54:54 clm-aus-015349.bmc.com kube-proxy[7866]: I0707 02:54:54.770145 7866 conntrack.go:41] Setting conntrack hashsize to 65536
Jul 07 02:54:54 clm-aus-015349.bmc.com kube-proxy[7866]: I0707 02:54:54.771445 7866 conntrack.go:46] Setting nf_conntrack_tcp_timeout_established to 86400
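The one suspicious line above is the br-netfilter warning from proxier.go. In case that is the problem, the bridge netfilter setting can be checked and enabled like this (a sketch using the standard kernel module and sysctl names; on some older kernels this code is built into the bridge module instead of a separate br_netfilter module):

# confirm the module is loaded, and load it if not
lsmod | grep br_netfilter
modprobe br_netfilter

# bridged container traffic only traverses iptables when this is 1
sysctl net.bridge.bridge-nf-call-iptables
sysctl -w net.bridge.bridge-nf-call-iptables=1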
What am I missing?