Pod IP can only be pinged from the same node.

When I try to ping a pod IP from another node/worker, the ping fails.

master2@master2:~$ kubectl get pods --namespace=kube-system -o wide
NAME                                       READY   STATUS    RESTARTS   AGE     IP                NODE      NOMINATED NODE   READINESS GATES
calico-kube-controllers-6ff8cbb789-lxwqq   1/1     Running   0          6d21h   192.168.180.2     master2   <none>           <none>
calico-node-4mnfk                          1/1     Running   0          4d20h   10.10.41.165      node3     <none>           <none>
calico-node-c4rjb                          1/1     Running   0          6d21h   10.10.41.159      master2   <none>           <none>
calico-node-dgqwx                          1/1     Running   0          4d20h   10.10.41.153      master1   <none>           <none>
calico-node-fhtvz                          1/1     Running   0          6d21h   10.10.41.161      node2     <none>           <none>
calico-node-mhd7w                          1/1     Running   0          4d21h   10.10.41.155      node1     <none>           <none>
coredns-8b5d5b85f-fjq72                    1/1     Running   0          45m     192.168.135.11    node3     <none>           <none>
coredns-8b5d5b85f-hgg94                    1/1     Running   0          45m     192.168.166.136   node1     <none>           <none>
etcd-master1                               1/1     Running   0          4d20h   10.10.41.153      master1   <none>           <none>
etcd-master2                               1/1     Running   0          6d21h   10.10.41.159      master2   <none>           <none>
kube-apiserver-master1                     1/1     Running   0          4d20h   10.10.41.153      master1   <none>           <none>
kube-apiserver-master2                     1/1     Running   0          6d21h   10.10.41.159      master2   <none>           <none>
kube-controller-manager-master1            1/1     Running   0          4d20h   10.10.41.153      master1   <none>           <none>
kube-controller-manager-master2            1/1     Running   2          6d21h   10.10.41.159      master2   <none>           <none>
kube-proxy-66nxz                           1/1     Running   0          6d21h   10.10.41.159      master2   <none>           <none>
kube-proxy-fnrrz                           1/1     Running   0          4d20h   10.10.41.153      master1   <none>           <none>
kube-proxy-lq5xp                           1/1     Running   0          6d21h   10.10.41.161      node2     <none>           <none>
kube-proxy-vxhwm                           1/1     Running   0          4d21h   10.10.41.155      node1     <none>           <none>
kube-proxy-zgwzq                           1/1     Running   0          4d20h   10.10.41.165      node3     <none>           <none>
kube-scheduler-master1                     1/1     Running   0          4d20h   10.10.41.153      master1   <none>           <none>
kube-scheduler-master2                     1/1     Running   1          6d21h   10.10.41.159      master2   <none>           <none>

When I try to ping the pod with IP 192.168.104.8 on node2 from node3, the ping fails with 100% packet loss (a rough sketch of the test follows the pod listing below).

master1@master1:~/cluster$ sudo kubectl get pods  -o wide
NAME                         READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
contentms-cb475f569-t54c2    1/1     Running   0          6d21h   192.168.104.1    node2   <none>           <none>
nav-6f67d5bd79-9khmm         1/1     Running   0          6d8h    192.168.104.8    node2   <none>           <none>
react                        1/1     Running   0          7m24s   192.168.135.12   node3   <none>           <none>
statistics-5668cd7dd-thqdf   1/1     Running   0          6d15h   192.168.104.4    node2   <none>           <none>
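
For reference, the failing test was a plain ICMP ping of the node2 pod IP, along these lines (a sketch; the node prompts are assumed, the pod IP is taken from the listing above):

node2@node2:~$ ping -c 4 192.168.104.8   # from the pod's own node (node2) this works
node3@node3:~$ ping -c 4 192.168.104.8   # from node3 this reports 100% packet loss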

1 Answer


It was a routing issue.

I was using two IPs on each node, one on eth0 and one on eth1.

In the routing table, the eth1 IP was being used instead of the eth0 IP.
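
Which interface and source address a node actually uses to reach the pod network can be checked against the routing table, e.g. (a sketch; run on the node doing the pinging, pod IP taken from the listing above):

node3@node3:~$ ip route get 192.168.104.8    # shows the outgoing device (dev ...) and source IP (src ...)
node3@node3:~$ ip route | grep 192.168.104   # the route Calico programs for node2's pod CIDR block

If the dev/src shown here point at eth1 rather than eth0 (the 10.10.41.x network the nodes registered with), cross-node pod traffic leaves through the wrong interface.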

I disabled the eth1 IPs and everything worked fine.
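
An alternative to disabling the eth1 addresses is to tell Calico explicitly which interface to use for routing, via its IP autodetection setting, e.g. (a sketch; assumes the standard calico-node DaemonSet in kube-system and that eth0 carries the 10.10.41.x node addresses):

master1@master1:~$ kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=interface=eth0
master1@master1:~$ kubectl rollout status daemonset/calico-node -n kube-system   # calico-node pods restart and re-register with the eth0 address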

Answered on 2020-01-22T09:30:29.577