
I'm having CoreDNS issues on some nodes: the pods are in CrashLoopBackOff because they get errors when trying to reach the internal kubernetes service.

This is a new K8s cluster deployed with Kubespray; the network layer is Weave, the Kubernetes version is 1.12.5, and it runs on OpenStack. I have tested connectivity to the endpoints, e.g. reaching 10.2.70.14:6443 works fine, but telnet from a pod to 10.233.0.1:443 fails.
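For anyone reproducing this, the failing check can be run from a throwaway pod; these commands need a live cluster, and the pod names and busybox image tag are just placeholders:

```shell
# Spin up a temporary pod and probe the kubernetes ClusterIP (10.233.0.1:443)
kubectl run nettest --rm -it --restart=Never --image=busybox:1.30 -- \
  nc -zv -w 3 10.233.0.1 443

# For comparison, probe one of the apiserver endpoints directly (this one succeeds)
kubectl run nettest2 --rm -it --restart=Never --image=busybox:1.30 -- \
  nc -zv -w 3 10.2.70.14 6443
```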

Thanks in advance for your help.

kubectl describe svc kubernetes
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP:                10.233.0.1
Port:              https  443/TCP
TargetPort:        6443/TCP
Endpoints:         10.2.70.14:6443,10.2.70.18:6443,10.2.70.27:6443 + 2 more...
Session Affinity:  None
Events:            <none>

From the CoreDNS logs:

E0415 17:47:05.453762       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:311: Failed to list *v1.Service: Get https://10.233.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
E0415 17:47:05.456909       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:313: Failed to list *v1.Endpoints: Get https://10.233.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
E0415 17:47:06.453258       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:318: Failed to list *v1.Namespace: Get https://10.233.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
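Logs like the above can be pulled as follows (the k8s-app=kube-dns label assumes a standard kubespray/kubeadm CoreDNS deployment, and the pod name is a placeholder to substitute):

```shell
# List the CoreDNS pods, then fetch logs from one of them
kubectl -n kube-system get pods -l k8s-app=kube-dns
kubectl -n kube-system logs coredns-xxxxxxxxxx-xxxxx   # substitute an actual pod name
```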

In addition, checking the kube-proxy logs on one of the problematic nodes shows the following errors:

I0415 19:14:32.162909       1 graceful_termination.go:160] Trying to delete rs: 10.233.0.1:443/TCP/10.2.70.36:6443
I0415 19:14:32.162979       1 graceful_termination.go:171] Not deleting, RS 10.233.0.1:443/TCP/10.2.70.36:6443: 1 ActiveConn, 0 InactiveConn
I0415 19:14:32.162989       1 graceful_termination.go:160] Trying to delete rs: 10.233.0.1:443/TCP/10.2.70.18:6443
I0415 19:14:32.163017       1 graceful_termination.go:171] Not deleting, RS 10.233.0.1:443/TCP/10.2.70.18:6443: 1 ActiveConn, 0 InactiveConn
E0415 19:14:32.215707       1 proxier.go:430] Failed to execute iptables-restore for nat: exit status 1 (iptables-restore: line 7 failed
)

1 Answer


I had exactly the same problem, and it turned out to be a misconfiguration in my kubespray setup, specifically the nginx ingress setting ingress_nginx_host_network.

As it turns out, you have to set ingress_nginx_host_network: true (it defaults to false).
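In a kubespray inventory this setting goes in the addon group vars; the file path below follows the sample inventory layout and may differ in your setup:

```yaml
# inventory/mycluster/group_vars/k8s-cluster/addons.yml (path is illustrative)
ingress_nginx_enabled: true
ingress_nginx_host_network: true   # defaults to false
```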

If you don't want to re-run the whole kubespray playbook, edit the nginx ingress DaemonSet:

$ kubectl -n ingress-nginx edit ds ingress-nginx-controller

  1. Add --report-node-internal-ip-address to the args:
spec:
  containers:
      args:
       - /nginx-ingress-controller
       - --configmap=$(POD_NAMESPACE)/ingress-nginx
       - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
       - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
       - --annotations-prefix=nginx.ingress.kubernetes.io
       - --report-node-internal-ip-address # <- new
  2. Set the following two properties at the same level as e.g. serviceAccountName: ingress-nginx:
serviceAccountName: ingress-nginx
hostNetwork: true # <- new
dnsPolicy: ClusterFirstWithHostNet  # <- new

Then save and exit with :wq, and check the pod status with kubectl get pods --all-namespaces.
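If the CoreDNS pods remain in CrashLoopBackOff after the DaemonSet change, restarting them and watching them come back up may help; the label and namespace below assume a standard kubespray deployment:

```shell
# Restart CoreDNS by deleting its pods (the Deployment recreates them)
kubectl -n kube-system delete pod -l k8s-app=kube-dns

# Watch the replacement pods come up
kubectl -n kube-system get pods -l k8s-app=kube-dns -w
```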

Source: https://github.com/kubernetes-sigs/kubespray/issues/4357

Answered 2019-11-23T22:20:18.133