
I have set up a k8s cluster on 2 bare-metal servers (1 master and 1 worker) with kubespray, using the default settings (kube_proxy_mode: iptables, dns_mode: coredns), and I would like to run a BIND DNS server inside it to manage a couple of domain names.
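
For reference, these are the relevant kubespray variables (normally set in group_vars/k8s-cluster/k8s-cluster.yml; shown here just for context, the exact file layout can differ between kubespray releases):

kube_proxy_mode: iptables
dns_mode: coredns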

I deployed a helloworld web application with helm 3 for testing purposes. Everything works like a charm (HTTP, HTTPS, Let's Encrypt through cert-manager).

kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T21:03:42Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.7", GitCommit:"be3d344ed06bff7a4fc60656200a93c74f31f9a4", GitTreeState:"clean", BuildDate:"2020-02-11T19:24:46Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}

kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
k8smaster   Ready    master   22d   v1.16.7
k8sslave    Ready    <none>   21d   v1.16.7

I deployed the image of my BIND DNS server (named) in the default namespace with a Helm 3 chart, together with a Service exposing port 53 of the bind application container.
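
For context, the Service the chart creates looks roughly like this (a sketch reconstructed from the kubectl get svc output below; the selector labels come from that output, the port names are my guess):

apiVersion: v1
kind: Service
metadata:
  name: bind
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: bind
    release: bind
  ports:
    - name: dns-tcp     # assumed name
      port: 53
      protocol: TCP
    - name: dns-udp     # assumed name
      port: 53
      protocol: UDP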

I have tested DNS resolution with a pod and the bind Service; it works fine. Here is a test of the bind k8s Service from the master node:

kubectl -n default get svc bind -o wide
NAME   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE    SELECTOR
bind   ClusterIP   10.233.31.255   <none>        53/TCP,53/UDP   4m5s   app=bind,release=bind

kubectl get endpoints bind
NAME   ENDPOINTS                                                        AGE
bind   10.233.75.239:53,10.233.93.245:53,10.233.75.239:53 + 1 more...   4m12s

export SERVICE_IP=`kubectl get services bind -o go-template='{{.spec.clusterIP}}{{"\n"}}'`
nslookup www.example.com ${SERVICE_IP}
Server:     10.233.31.255
Address:    10.233.31.255#53

Name:   www.example.com
Address: 176.31.XXX.XXX

So the bind DNS application is deployed and works fine through the bind k8s Service.

Next step: following the https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/ documentation, I set up the Nginx ingress controller (ConfigMap and Service) to handle TCP/UDP requests on port 53 and redirect them to the bind DNS application.
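
Following that guide, the setup boils down to ConfigMaps like these, plus matching 53/TCP and 53/UDP ports on the ingress-nginx controller Service (a reconstruction rather than my exact manifests; the bind-tcp/bind-udp port names appear in the kube-proxy errors further down):

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "53": "default/bind:53"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  "53": "default/bind:53"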

When I test name resolution from an external machine, it does not work:

nslookup www.example.com <IP of the k8s master>
;; connection timed out; no servers could be reached

I dug into the k8s configuration, logs, etc., and found a warning message in the kube-proxy logs:

ps auxw | grep kube-proxy
root     19984  0.0  0.2 141160 41848 ?        Ssl  Mar26  19:39 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=k8smaster

journalctl --since "2 days ago" | grep kube-proxy
<NOTHING RETURNED>

KUBEPROXY_FIRST_POD=`kubectl get pods -n kube-system -l k8s-app=kube-proxy -o go-template='{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | head -n 1`
kubectl logs -n kube-system ${KUBEPROXY_FIRST_POD}

I0326 22:26:03.491900       1 node.go:135] Successfully retrieved node IP: 91.121.XXX.XXX
I0326 22:26:03.491957       1 server_others.go:150] Using iptables Proxier.
I0326 22:26:03.492453       1 server.go:529] Version: v1.16.7
I0326 22:26:03.493179       1 conntrack.go:52] Setting nf_conntrack_max to 262144
I0326 22:26:03.493647       1 config.go:131] Starting endpoints config controller
I0326 22:26:03.493663       1 config.go:313] Starting service config controller
I0326 22:26:03.493669       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I0326 22:26:03.493679       1 shared_informer.go:197] Waiting for caches to sync for service config
I0326 22:26:03.593986       1 shared_informer.go:204] Caches are synced for endpoints config 
I0326 22:26:03.593992       1 shared_informer.go:204] Caches are synced for service config 
E0411 17:02:48.113935       1 proxier.go:927] can't open "externalIP for ingress-nginx/ingress-nginx:bind-udp" (91.121.XXX.XXX:53/udp), skipping this externalIP: listen udp 91.121.XXX.XXX:53: bind: address already in use
E0411 17:02:48.119378       1 proxier.go:927] can't open "externalIP for ingress-nginx/ingress-nginx:bind-tcp" (91.121.XXX.XXX:53/tcp), skipping this externalIP: listen tcp 91.121.XXX.XXX:53: bind: address already in use

So I looked for what was already using port 53...

netstat -lpnt | grep 53
tcp        0      0 0.0.0.0:5355            0.0.0.0:*               LISTEN      1682/systemd-resolv 
tcp        0      0 87.98.XXX.XXX:53        0.0.0.0:*               LISTEN      19984/kube-proxy    
tcp        0      0 169.254.25.10:53        0.0.0.0:*               LISTEN      14448/node-cache    
tcp6       0      0 :::9253                 :::*                    LISTEN      14448/node-cache    
tcp6       0      0 :::9353                 :::*                    LISTEN      14448/node-cache

Looking at process 14448 (node-cache):

cat /proc/14448/cmdline | tr '\0' ' '
/node-cache -localip 169.254.25.10 -conf /etc/coredns/Corefile -upstreamsvc coredns

So coredns (through its node-cache / nodelocaldns instance) is already handling port 53, which is expected since it is the internal k8s DNS service.

In the coredns documentation (https://github.com/coredns/coredns/blob/master/README.md) they mention a -dns.port option to use a different port... but when I look at kubespray (it has 3 jinja templates, https://github.com/kubernetes-sigs/kubespray/tree/release-2.12/roles/kubernetes-apps/ansible/templates, for creating the coredns ConfigMap, Service, etc., similar to https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#coredns), everything is hard-coded with port 53.
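
For illustration, this is the shape of the server block in a stock coredns ConfigMap (a typical Corefile, not my exact one) -- the port is baked into the .:53 server declaration:

.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}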

So my question is: is there a k8s cluster configuration/workaround that would let me run my own DNS server and expose it on port 53?

Maybe:

  • Configure coredns to use a port other than 53? That seems hard, and I'm really not sure it even makes sense!
  • I could set up my bind k8s Service to expose port 5353, and configure the nginx ingress controller to handle this port 5353 and redirect to the application's port 53. But that would require iptables rules to route external DNS requests received on port 53 to my bind k8s Service on port 5353 (see the sketch after this list). What would that iptables configuration be (INPUT / PREROUTING or FORWARD)? And would such a network setup break coredns?
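
To make the second idea concrete, here is an untested sketch of the iptables rules I have in mind (assumptions: eth0 is the public interface, and something on the node, e.g. the ingress controller, actually listens on 5353):

# rewrite inbound DNS traffic from port 53 to 5353 before routing
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 53 -j REDIRECT --to-ports 5353
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 53 -j REDIRECT --to-ports 5353

REDIRECT in the nat PREROUTING chain rewrites the destination port before the routing decision and only matches inbound traffic on eth0, so in principle it should leave cluster-internal coredns traffic alone -- but that is exactly the part I would like confirmed.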

Regards,

Chris


1 Answer


I suppose your nginx-ingress doesn't work as expected. You need a load-balancer provider, such as MetalLB, on your bare-metal k8s cluster in order to receive external connections on ports like 53. And you don't need nginx-ingress in front of bind: just change the bind Service type from ClusterIP to LoadBalancer and make sure you get an external IP on that Service. Your Helm chart's documentation may help with switching to LoadBalancer.
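
For example, with the ConfigMap-based MetalLB configuration that was current at the time (v0.9.x; the address pool below is a placeholder, use addresses you actually own):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 91.121.XXX.XXX/32   # placeholder: an IP routed to your nodes

Then switch the Service type, either through your chart's values or directly:

kubectl patch svc bind -n default -p '{"spec":{"type":"LoadBalancer"}}'
kubectl get svc bind -n default    # wait until EXTERNAL-IP is populated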

Answered on 2020-04-12T16:38:47.713