
I am trying to set up a Kubernetes cluster, but I cannot get CoreDNS to run. I ran the following commands to start the cluster:

sudo swapoff -a
sudo sysctl net.bridge.bridge-nf-call-iptables=1
sudo kubeadm init

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

Checking the pods with kubectl get pods --all-namespaces, I get

NAMESPACE     NAME                                    READY   STATUS             RESTARTS   AGE
kube-system   coredns-68fb79bcf6-6s5bp                0/1     CrashLoopBackOff   6          10m
kube-system   coredns-68fb79bcf6-hckxq                0/1     CrashLoopBackOff   6          10m
kube-system   etcd-myserver                           1/1     Running            0          79m
kube-system   kube-apiserver-myserver                 1/1     Running            0          79m
kube-system   kube-controller-manager-myserver        1/1     Running            0          79m
kube-system   kube-proxy-9ls64                        1/1     Running            0          80m
kube-system   kube-scheduler-myserver                 1/1     Running            0          79m
kube-system   kubernetes-dashboard-77fd78f978-tqt8m   1/1     Running            0          80m
kube-system   weave-net-zmhwg                         2/2     Running            0          80m

So CoreDNS keeps crashing. The only error message I could find is from /var/log/syslog:

Oct  4 18:06:44 myserver kubelet[16397]: E1004 18:06:44.961409   16397 pod_workers.go:186] Error syncing pod c456a48b-c7c3-11e8-bf23-02426706c77f ("coredns-68fb79bcf6-6s5bp_kube-system(c456a48b-c7c3-11e8-bf23-02426706c77f)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-68fb79bcf6-6s5bp_kube-system(c456a48b-c7c3-11e8-bf23-02426706c77f)"

and from kubectl logs coredns-68fb79bcf6-6s5bp -n kube-system:

.:53
2018/10/04 11:04:55 [INFO] CoreDNS-1.2.2
2018/10/04 11:04:55 [INFO] linux/amd64, go1.11, eb51e8b
CoreDNS-1.2.2
linux/amd64, go1.11, eb51e8b
2018/10/04 11:04:55 [INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769
2018/10/04 11:04:55 [FATAL] plugin/loop: Seen "HINFO IN 3256902131464476443.1309143030470211725." more than twice, loop detected

Some solutions I found suggest issuing

kubectl -n kube-system get deployment coredns -o yaml | \
sed 's/allowPrivilegeEscalation: false/allowPrivilegeEscalation: true/g' | \
kubectl apply -f -

and modifying /etc/resolv.conf to point to the actual DNS server instead of localhost, which I also tried.
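For reference, the loop usually comes from a nameserver entry in /etc/resolv.conf that points back at localhost (e.g. the dnsmasq or systemd-resolved stub). A minimal check, sketched here against a demo file rather than the real /etc/resolv.conf:

```shell
# Demo copy standing in for /etc/resolv.conf on a host that runs a
# local DNS stub (dnsmasq or systemd-resolved).
cat > /tmp/resolv.conf.demo <<'EOF'
nameserver 127.0.0.53
search localdomain
EOF

# A localhost nameserver is what makes CoreDNS forward queries back to
# itself and trips the loop detector.
if grep -qE 'nameserver (127\.|::1)' /tmp/resolv.conf.demo; then
    echo "loop risk: resolv.conf points at localhost"
fi
```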

The problem is described at https://kubernetes.io/docs/setup/independent/troubleshooting-kubeadm/#pods-in-runcontainererror-crashloopbackoff-or-error-state, and I have tried many different pod networks, but nothing helped.

I have run sudo kubeadm reset && rm -rf ~/.kube/ && sudo kubeadm init several times.

I am running Ubuntu 16.04, Kubernetes 1.12, and Docker 17.03. Any ideas?


6 Answers


I had the same problem.

I solved it by deleting the plugin 'loop' within the CoreDNS ConfigMap, though I don't know whether this might cause other problems.

1. kubectl edit cm coredns -n kube-system
2. Delete 'loop' (screenshot: https://i.stack.imgur.com/NsYL1.png), save, and exit
3. Restart the coredns pods with: kubectl delete pod coredns.... -n kube-system
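A non-interactive version of step 2 can be sketched with sed; shown here against a local sample Corefile rather than the live ConfigMap (in the cluster you would still go through kubectl edit cm coredns -n kube-system):

```shell
# Sample Corefile, modeled on what kubeadm ships for CoreDNS 1.2.x.
cat > /tmp/Corefile <<'EOF'
.:53 {
    errors
    health
    loop
    proxy . /etc/resolv.conf
    cache 30
}
EOF

# Drop the line that contains only the loop plugin.
sed -i '/^[[:space:]]*loop[[:space:]]*$/d' /tmp/Corefile

# prints 0: no loop lines remain
grep -c '^[[:space:]]*loop[[:space:]]*$' /tmp/Corefile || true
```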

Answered 2018-10-09T13:41:13.153

Solved the problem by using this approach:

  1. Open and edit the configmap of coredns:

    kubectl edit cm coredns -n kube-system

  2. "Replace proxy . /etc/resolv.conf with the IP address of your upstream DNS, for example proxy . 8.8.8.8," per the link given in the coredns log output (at the end of the page).
  3. Save and exit.
  4. kubectl get pods -n kube-system -oname |grep coredns |xargs kubectl delete -n kube-system
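Step 2 can also be expressed as a one-line substitution; simulated here on a local copy of the Corefile (8.8.8.8 is just an example upstream):

```shell
# Local sample of the coredns Corefile before the edit.
cat > /tmp/Corefile.demo <<'EOF'
.:53 {
    errors
    loop
    proxy . /etc/resolv.conf
    cache 30
}
EOF

# Point CoreDNS straight at an upstream resolver instead of the host's
# resolv.conf (which may loop back to localhost).
sed -i 's#proxy \. /etc/resolv\.conf#proxy . 8.8.8.8#' /tmp/Corefile.demo

grep 'proxy' /tmp/Corefile.demo   # shows the new upstream line
```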

The cause of the problem is explained in a link. You can find that link in the output of this command:

kubectl logs coredns-7d9cd4f75b-cpwxp -n kube-system

The link appears in the output of CoreDNS-1.2.4.

I upgraded CoreDNS using this command:

kubectl patch deployment -n=kube-system coredns -p '{"spec": {"template": {"spec":{"containers":[{"image":"k8s.gcr.io/coredns:1.2.4", "name":"coredns","resources":{"limits":{"memory":"1Gi"},"requests":{"cpu":"100m","memory":"70Mi"}}}]}}}}'

Answered 2018-10-21T02:57:29.497

I don't think simply deleting the loop plugin from CoreDNS is a clean way to fix it. The CoreDNS GitHub actually provides guidelines for troubleshooting this issue.

They suggest three approaches in the guide:

  • Add the following to your kubelet flags: --resolv-conf <path-to-your-real-resolv.conf>. Your "real" resolv.conf is the one that contains the actual IPs of your upstream servers and no local/loopback address. This flag tells kubelet to pass an alternate resolv.conf to Pods. For systems using systemd-resolved, /run/systemd/resolve/resolv.conf is typically the location of the "real" resolv.conf, although this can vary depending on your distribution.
  • Disable the local DNS cache on the host node and restore /etc/resolv.conf to the original.
  • A quick and dirty fix is to edit your Corefile, replacing proxy . /etc/resolv.conf with the IP address of your upstream DNS, for example proxy . 8.8.8.8. But this only fixes the issue for CoreDNS; kubelet will continue to forward the invalid resolv.conf to all default dnsPolicy Pods, leaving them unable to resolve DNS.
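The first approach can be sketched as follows for a kubeadm install on Ubuntu. The drop-in path /etc/default/kubelet is an assumption that varies by distro and version, so the snippet writes to a temp path to stay side-effect-free:

```shell
# On a real node this would be /etc/default/kubelet (assumed location for
# kubeadm on Ubuntu; other distros differ).
KUBELET_DEFAULTS=/tmp/kubelet-defaults.demo

# Tell kubelet to hand Pods the systemd-resolved "real" resolv.conf,
# which lists the actual upstream servers rather than 127.0.0.53.
printf '%s\n' \
  'KUBELET_EXTRA_ARGS=--resolv-conf=/run/systemd/resolve/resolv.conf' \
  > "$KUBELET_DEFAULTS"

# On the actual host you would then run:
#   sudo systemctl daemon-reload && sudo systemctl restart kubelet
cat "$KUBELET_DEFAULTS"
```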
Answered 2019-01-30T21:19:04.907

My solution was to delete --network-plugin=cni from /var/lib/kubelet/kubeadm-flags.env.
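That removal can be simulated with sed on a copy of the file (the flag set shown is just an example of what kubeadm typically writes there):

```shell
# Example contents modeled on a kubeadm-generated file; exact flags vary.
cat > /tmp/kubeadm-flags.env.demo <<'EOF'
KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --network-plugin=cni --resolv-conf=/run/systemd/resolve/resolv.conf
EOF

# Strip the flag; on a real node you would edit
# /var/lib/kubelet/kubeadm-flags.env and restart kubelet (or reboot).
sed -i 's/ *--network-plugin=cni//' /tmp/kubeadm-flags.env.demo

cat /tmp/kubeadm-flags.env.demo
```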

Answered 2019-09-24T09:31:22.530

Yes, you are right. This issue is described on GitHub. The workarounds are to upgrade Docker, disable SELinux, or change allowPrivilegeEscalation to true. However, today I tried to reproduce your problem and could not. Here are my commands and output; maybe it will help you create a working setup from scratch.

Docker version 17.03.2-ce, Kubernetes v1.12.0, Ubuntu 16.04, CoreDNS-1.2.2, instance created in GCP.

#apt-get update && apt-get install -y mc ebtables ethtool docker.io apt-transport-https curl
#curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

#cat <<EOF >/etc/apt/sources.list.d/kubernetes.list \
deb http://apt.kubernetes.io/ kubernetes-xenial main \
EOF

#apt-get update && apt-get install -y kubelet kubeadm kubectl

#kubeadm init
$mkdir -p $HOME/.kube
$sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$sudo chown $(id -u):$(id -g) $HOME/.kube/config
$kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

serviceaccount/weave-net created

clusterrole.rbac.authorization.k8s.io/weave-net created

clusterrolebinding.rbac.authorization.k8s.io/weave-net created

role.rbac.authorization.k8s.io/weave-net created

rolebinding.rbac.authorization.k8s.io/weave-net created

daemonset.extensions/weave-net created

$kubectl get pods --all-namespaces
NAMESPACE     NAME                                              READY   STATUS              RESTARTS   AGE
kube-system   pod/coredns-576cbf47c7-6qbtq                      0/1     Pending             0          79s
kube-system   pod/coredns-576cbf47c7-jr6hb                      0/1     Pending             0          79s
kube-system   pod/etcd-kube-weave-master-1                      1/1     Running             0          38s
kube-system   pod/kube-apiserver-kube-weave-master-1            1/1     Running             0          28s
kube-system   pod/kube-controller-manager-kube-weave-master-1   1/1     Running             0          30s
kube-system   pod/kube-proxy-4p9l5                              1/1     Running             0          79s
kube-system   pod/kube-scheduler-kube-weave-master-1            1/1     Running             0          34s
kube-system   pod/weave-net-z6mhw                               0/2     ContainerCreating   0          8s

And a minute later:

$kubectl get pods --all-namespaces
NAMESPACE     NAME                                              READY   STATUS    RESTARTS   AGE
kube-system   pod/coredns-576cbf47c7-6qbtq                      1/1     Running   0          98s
kube-system   pod/coredns-576cbf47c7-jr6hb                      1/1     Running   0          98s
kube-system   pod/etcd-kube-weave-master-1                      1/1     Running   0          57s
kube-system   pod/kube-apiserver-kube-weave-master-1            1/1     Running   0          47s
kube-system   pod/kube-controller-manager-kube-weave-master-1   1/1     Running   0          49s
kube-system   pod/kube-proxy-4p9l5                              1/1     Running   0          98s
kube-system   pod/kube-scheduler-kube-weave-master-1            1/1     Running   0          53s
kube-system   pod/weave-net-z6mhw                               2/2     Running   0          27s

Description of the coredns pod:

kubectl describe pod/coredns-576cbf47c7-6qbtq -n kube-system
Name:               coredns-576cbf47c7-6qbtq
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               kube-weave-master-1/10.154.0.8
Start Time:         Fri, 05 Oct 2018 11:06:54 +0000
Labels:             k8s-app=kube-dns
                    pod-template-hash=576cbf47c7
Annotations:        <none>
Status:             Running
IP:                 10.32.0.3
Controlled By:      ReplicaSet/coredns-576cbf47c7
Containers:
  coredns:
    Container ID:  docker://db1712600b4c927b99063fa41bc36c3346c55572bd63730fc993f03379fa457b
    Image:         k8s.gcr.io/coredns:1.2.2
    Image ID:      docker-pullable://k8s.gcr.io/coredns@sha256:3e2be1cec87aca0b74b7668bbe8c02964a95a402e45ceb51b2252629d608d03a
    Ports:         53/UDP, 53/TCP, 9153/TCP
    Host Ports:    0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Running
      Started:      Fri, 05 Oct 2018 11:06:57 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-wp7tm (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-wp7tm:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-wp7tm
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                 From                          Message
  ----     ------            ----                ----                          -------
  Warning  FailedScheduling  23m (x12 over 24m)  default-scheduler             0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
  Normal   Scheduled         23m                 default-scheduler             Successfully assigned kube-system/coredns-576cbf47c7-6qbtq to kube-weave-master-1
  Normal   Pulled            23m                 kubelet, kube-weave-master-1  Container image "k8s.gcr.io/coredns:1.2.2" already present on machine
  Normal   Created           23m                 kubelet, kube-weave-master-1  Created container
  Normal   Started           23m                 kubelet, kube-weave-master-1  Started container

Also, please provide the config.yaml you used with kubeadm init --config config.yaml, to better understand your problem with specifying the configuration file.

Answered 2018-10-05T12:09:54.403

My solution was to delete --network-plugin=cni in /var/lib/kubelet/kubeadm-flags.env and then restart the machine; CoreDNS will run. Good luck.

Answered 2020-01-28T04:49:50.413