
I'm new to k8s and trying to run a 3-node (master + 2 workers) cluster (v1.9.6) from scratch in Vagrant (Ubuntu 16.04) without any automation. I believe this is the right way for a beginner like me to get hands-on experience. Honestly, I've already spent more than a week on this and I'm getting desperate.

My problem is that the coredns pods (same with kube-dns) cannot reach kube-apiserver via its ClusterIP. It looks like this:

vagrant@master-0:~$ kubectl get svc --all-namespaces
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
default       kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP         2d
kube-system   kube-dns     ClusterIP   10.0.30.1    <none>        53/UDP,53/TCP   2h

vagrant@master-0:~$ kubectl logs coredns-5c6d9fdb86-mffzk -n kube-system
E0330 15:40:45.476465       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:319: Failed to list *v1.Namespace: Get https://10.0.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.0.0.1:443: i/o timeout
E0330 15:40:45.478241       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:312: Failed to list *v1.Service: Get https://10.0.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.0.0.1:443: i/o timeout
E0330 15:40:45.478289       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:314: Failed to list *v1.Endpoints: Get https://10.0.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.0.0.1:443: i/o timeout

Meanwhile, I can ping 10.0.0.1 from any machine and from inside pods (tested with busybox), but curl does not work.
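To illustrate what curl is doing that ping isn't, here is a rough sketch of the probe I used (plain Python; `tcp_check` is just a helper name I made up). Ping only proves ICMP reachability; a TCP connect that times out, rather than being refused, suggests the packets are being silently dropped or never DNAT'ed:

```python
import socket

def tcp_check(host, port, timeout=2.0):
    """Classify a TCP connect attempt as 'open', 'refused', or 'timeout'.

    'refused' means the host answered with a RST (reachable, but nothing
    listening on that port). 'timeout' means packets were dropped somewhere
    along the way, which matches the coredns 'i/o timeout' errors above.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except ConnectionRefusedError:
        return "refused"
    except (socket.timeout, OSError):
        # treat host-unreachable errors like a timeout for this purpose
        return "timeout"

# From inside a pod: tcp_check("10.0.0.1", 443) returns "timeout" for me,
# even though ping 10.0.0.1 succeeds.
```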

Master

Interfaces

br-e468013fba9d Link encap:Ethernet  HWaddr 02:42:8f:da:d3:35
          inet addr:172.18.0.1  Bcast:172.18.255.255  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

docker0   Link encap:Ethernet  HWaddr 02:42:d7:91:fd:9b
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

enp0s3    Link encap:Ethernet  HWaddr 02:74:f2:80:ad:a4
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::74:f2ff:fe80:ada4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3521 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2116 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:784841 (784.8 KB)  TX bytes:221888 (221.8 KB)

enp0s8    Link encap:Ethernet  HWaddr 08:00:27:45:ed:ec
          inet addr:192.168.0.1  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe45:edec/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:322839 errors:0 dropped:0 overruns:0 frame:0
          TX packets:329938 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:45879993 (45.8 MB)  TX bytes:89279972 (89.2 MB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:249239 errors:0 dropped:0 overruns:0 frame:0
          TX packets:249239 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:75677355 (75.6 MB)  TX bytes:75677355 (75.6 MB)

iptables

-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION
-N DOCKER-USER
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -o br-e468013fba9d -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-e468013fba9d -j DOCKER
-A FORWARD -i br-e468013fba9d ! -o br-e468013fba9d -j ACCEPT
-A FORWARD -i br-e468013fba9d -o br-e468013fba9d -j ACCEPT
-A DOCKER-ISOLATION -i br-e468013fba9d -o docker0 -j DROP
-A DOCKER-ISOLATION -i docker0 -o br-e468013fba9d -j DROP
-A DOCKER-ISOLATION -j RETURN
-A DOCKER-USER -j RETURN

Routes

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.2.2        0.0.0.0         UG    0      0        0 enp0s3
10.0.2.0        0.0.0.0         255.255.255.0   U     0      0        0 enp0s3
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-e468013fba9d
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 enp0s8

kube-apiserver (docker-compose)

version: '3'
services:
  kube_apiserver:
    image: gcr.io/google-containers/hyperkube:v1.9.6
    restart: always
    network_mode: host
    container_name: kube-apiserver
    ports:
      - "8080"
    volumes:
      - "/var/lib/kubernetes/ca-key.pem:/var/lib/kubernetes/ca-key.pem"
      - "/var/lib/kubernetes/ca.pem:/var/lib/kubernetes/ca.pem"
      - "/var/lib/kubernetes/kubernetes.pem:/var/lib/kubernetes/kubernetes.pem"
      - "/var/lib/kubernetes/kubernetes-key.pem:/var/lib/kubernetes/kubernetes-key.pem"
    command: ["/usr/local/bin/kube-apiserver",
              "--admission-control", "Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota",
              "--advertise-address", "192.168.0.1",
              "--etcd-servers", "http://192.168.0.1:2379,http://192.168.0.2:2379,http://192.168.0.3:2379",
              "--insecure-bind-address", "127.0.0.1",
              "--insecure-port", "8080",
              "--kubelet-https", "true",
              "--service-cluster-ip-range", "10.0.0.0/16",
              "--allow-privileged", "true",
              "--runtime-config", "api/all",
              "--service-account-key-file", "/var/lib/kubernetes/ca-key.pem",
              "--client-ca-file", "/var/lib/kubernetes/ca.pem",
              "--tls-ca-file", "/var/lib/kubernetes/ca.pem",
              "--tls-cert-file", "/var/lib/kubernetes/kubernetes.pem",
              "--tls-private-key-file", "/var/lib/kubernetes/kubernetes-key.pem",
              "--kubelet-certificate-authority", "/var/lib/kubernetes/ca.pem",
              "--kubelet-client-certificate", "/var/lib/kubernetes/kubernetes.pem",
              "--kubelet-client-key", "/var/lib/kubernetes/kubernetes-key.pem"]

kube-controller-manager (docker-compose)

version: '3'
services:
  kube_controller_manager:
    image: gcr.io/google-containers/hyperkube:v1.9.6
    restart: always
    network_mode: host
    container_name: kube-controller-manager
    ports:
      - "10252"
    volumes:
      - "/var/lib/kubernetes/ca-key.pem:/var/lib/kubernetes/ca-key.pem"
      - "/var/lib/kubernetes/ca.pem:/var/lib/kubernetes/ca.pem"
    command: ["/usr/local/bin/kube-controller-manager",
              "--allocate-node-cidrs", "true",
              "--cluster-cidr", "10.10.0.0/16",
              "--master", "http://127.0.0.1:8080",
              "--port", "10252",
              "--service-cluster-ip-range", "10.0.0.0/16",
              "--leader-elect", "false",
              "--service-account-private-key-file", "/var/lib/kubernetes/ca-key.pem",
              "--root-ca-file", "/var/lib/kubernetes/ca.pem"]

kube-scheduler (docker-compose)

version: '3'
services:
  kube_scheduler:
    image: gcr.io/google-containers/hyperkube:v1.9.6
    restart: always
    network_mode: host
    container_name: kube-scheduler
    ports:
      - "10252"
    command: ["/usr/local/bin/kube-scheduler",
              "--master", "http://127.0.0.1:8080",
              "--port", "10251"]

Worker0

Interfaces

br-c5e101440189 Link encap:Ethernet  HWaddr 02:42:60:ba:c9:81
          inet addr:172.18.0.1  Bcast:172.18.255.255  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

cbr0      Link encap:Ethernet  HWaddr ae:48:89:15:60:fd
          inet addr:10.10.0.1  Bcast:10.10.0.255  Mask:255.255.255.0
          inet6 addr: fe80::a406:b0ff:fe1d:1d85/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1149 errors:0 dropped:0 overruns:0 frame:0
          TX packets:409 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:72487 (72.4 KB)  TX bytes:35650 (35.6 KB)

enp0s3    Link encap:Ethernet  HWaddr 02:74:f2:80:ad:a4
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::74:f2ff:fe80:ada4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3330 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2269 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:770147 (770.1 KB)  TX bytes:246770 (246.7 KB)

enp0s8    Link encap:Ethernet  HWaddr 08:00:27:07:69:06
          inet addr:192.168.0.2  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe07:6906/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:268762 errors:0 dropped:0 overruns:0 frame:0
          TX packets:258080 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:48488207 (48.4 MB)  TX bytes:25791040 (25.7 MB)

flannel.1 Link encap:Ethernet  HWaddr 86:8e:2f:c4:98:82
          inet addr:10.10.0.0  Bcast:0.0.0.0  Mask:255.255.255.255
          inet6 addr: fe80::848e:2fff:fec4:9882/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:8 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:2955 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2955 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:218772 (218.7 KB)  TX bytes:218772 (218.7 KB)

vethe5d2604 Link encap:Ethernet  HWaddr ae:48:89:15:60:fd
          inet6 addr: fe80::ac48:89ff:fe15:60fd/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:828 (828.0 B)

iptables

-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION
-N DOCKER-USER
-N KUBE-FIREWALL
-N KUBE-FORWARD
-N KUBE-SERVICES
-A INPUT -j KUBE-FIREWALL
-A INPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -j DOCKER-USER
-A FORWARD -m comment --comment "kubernetes forward rules" -j KUBE-FORWARD
-A FORWARD -s 10.0.0.0/16 -j ACCEPT
-A FORWARD -d 10.0.0.0/16 -j ACCEPT
-A OUTPUT -j KUBE-FIREWALL
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A DOCKER-USER -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -s 10.10.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -d 10.10.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

Routes

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.2.2        0.0.0.0         UG    0      0        0 enp0s3
10.0.2.0        0.0.0.0         255.255.255.0   U     0      0        0 enp0s3
10.10.0.0       0.0.0.0         255.255.255.0   U     0      0        0 cbr0
10.10.1.0       10.10.1.0       255.255.255.0   UG    0      0        0 flannel.1
172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-c5e101440189
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 enp0s8

kubelet (systemd service)

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
#After=docker.service
#Requires=docker.service

[Service]
ExecStart=/usr/local/bin/kubelet \
  --allow-privileged=true \
  --anonymous-auth=false \
  --authorization-mode=AlwaysAllow \
  --cloud-provider= \
  --cluster-dns=10.0.30.1 \
  --cluster-domain=cluster.local \
  --node-ip=192.168.0.2 \
  --pod-cidr=10.10.0.0/24 \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --runtime-request-timeout=15m \
  --hostname-override=worker0 \
#  --read-only-port=10255 \
  --client-ca-file=/var/lib/kubernetes/ca.pem \
  --tls-cert-file=/var/lib/kubelet/worker0.pem \
  --tls-private-key-file=/var/lib/kubelet/worker0-key.pem
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

kube-proxy (systemd service)

[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
#After=docker.service
#Requires=docker.service

[Service]
ExecStart=/usr/local/bin/kube-proxy \
  --cluster-cidr=10.10.0.0/16 \
  --kubeconfig=/var/lib/kube-proxy/kubeconfig \
  --v=5
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Worker1's configuration is almost identical to worker0's.

Let me know if any other information is needed.


3 Answers


According to the kube-apiserver documentation:

--bind-address ip     The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank, all interfaces will be used (0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 0.0.0.0)
--secure-port int     The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all. (default 6443)

As far as I can see, the --bind-address and --secure-port flags are not defined in your kube-apiserver configuration, so by default kube-apiserver listens on 0.0.0.0:6443.

So to solve your problem, just add the --secure-port flag to your kube-apiserver configuration:

"--secure-port", "443",
Answered 2018-03-31T10:48:01.427

Please make sure that iptables on the host where your apiserver pod runs accepts your pods' CIDR range, e.g.:

-A INPUT -s 10.32.0.0/12 -j ACCEPT

I think this is related to iptables not using the translated address as the source address when a service is accessed from the same host.

Answered 2018-08-22T04:18:28.333

Change from:

--service-cluster-ip-range", "10.0.0.0/16

to:

--service-cluster-ip-range", "10.10.0.0/16

so that the --service-cluster-ip-range value matches the flannel CIDR.
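Before changing anything, it is worth double-checking how the two ranges in the question relate; a quick sketch using only Python's stdlib ipaddress module:

```python
import ipaddress

# The two CIDRs from the question's configs (assumption: taken verbatim
# from the kube-apiserver and kube-controller-manager command lines).
service_range = ipaddress.ip_network("10.0.0.0/16")   # --service-cluster-ip-range
pod_range     = ipaddress.ip_network("10.10.0.0/16")  # --cluster-cidr / flannel

# The ClusterIP coredns is trying to reach falls in the service range ...
print(ipaddress.ip_address("10.0.0.1") in service_range)  # True

# ... which is currently a completely separate network from the pod CIDR:
print(service_range.overlaps(pod_range))                  # False
```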

Answered 2018-04-01T12:57:57.523