
I intend to test a substantial Kubernetes setup as part of CI and want to bring up the full system before CD. I cannot run --privileged containers, so I am running the docker containers as siblings of the host via docker run -v /var/run/docker.sock:/var/run/docker.sock
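
For reference, a minimal sketch of how such a sibling CI container can be launched (the image name is a placeholder for my actual CI image; only the socket mount matters here):

# Mount the host's docker socket so anything started inside talks to the host daemon,
# i.e. new containers become siblings of this one rather than children.
docker run -it --name bpt-ci \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-ci-image /bin/bash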

The basic docker setup seems to work inside the container:

linuxbrew@03091f71a10b:~$ docker run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

However, minikube fails to start inside the docker container, reporting connection problems:

linuxbrew@03091f71a10b:~$ minikube start --alsologtostderr -v=7
I1029 15:07:41.274378    2183 out.go:298] Setting OutFile to fd 1 ...
I1029 15:07:41.274538    2183 out.go:345] TERM=xterm,COLORTERM=, which probably does not support color
...
...
...
I1029 15:20:27.040213     197 main.go:130] libmachine: Using SSH client type: native
I1029 15:20:27.040541     197 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1e20] 0x7a4f00 <nil>  [] 0s} 127.0.0.1 49350 <nil> <nil>}
I1029 15:20:27.040593     197 main.go:130] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I1029 15:20:27.040992     197 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:49350: connect: connection refused                                                  

This happens even though the network is connected and the ports are forwarded correctly:

linuxbrew@51fbce78731e:~$ docker container ls
CONTAINER ID   IMAGE                                 COMMAND                  CREATED         STATUS         PORTS                                                                                                                                  NAMES
93c35cec7e6f   gcr.io/k8s-minikube/kicbase:v0.0.27   "/usr/local/bin/entr…"   2 minutes ago   Up 2 minutes   127.0.0.1:49350->22/tcp, 127.0.0.1:49351->2376/tcp, 127.0.0.1:49348->5000/tcp, 127.0.0.1:49349->8443/tcp, 127.0.0.1:49347->32443/tcp   minikube
51fbce78731e   7f7ba6fd30dd                          "/bin/bash"              8 minutes ago   Up 8 minutes                                                                                                                                          bpt-ci
linuxbrew@51fbce78731e:~$ docker network ls
NETWORK ID     NAME       DRIVER    SCOPE
1e800987d562   bridge     bridge    local
aa6b2909aa87   host       host      local
d4db150f928b   kind       bridge    local
a781cb9345f4   minikube   bridge    local
0a8c35a505fb   none       null      local
linuxbrew@51fbce78731e:~$ docker network connect a781cb9345f4 93c35cec7e6f
Error response from daemon: endpoint with name minikube already exists in network minikube

The minikube container seems to be alive when probed from the host with curl, and ssh even responds:

mastercook@linuxkitchen:~$ curl https://127.0.0.1:49350
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to 127.0.0.1:49350 

mastercook@linuxkitchen:~$ ssh root@127.0.0.1 -p 49350
The authenticity of host '[127.0.0.1]:49350 ([127.0.0.1]:49350)' can't be established.
ED25519 key fingerprint is SHA256:0E41lExrrezFK1QXULaGHgk9gMM7uCQpLbNPVQcR2Ec.
This key is not known by any other names

What am I missing, and how can I get minikube to correctly discover the working minikube container?


1 Answer


Since minikube does not complete the cluster creation here, running Kubernetes in a (sibling) Docker container works better with kind.

Given that the (sibling) container does not know enough about its own setup, the network connectivity it sees is somewhat flawed. Specifically, kind (and minikube) pick the loopback IP when creating the cluster, even though the actual container sits on a different IP in the host's docker.
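
After a cluster has been created (as in the steps below), this loopback address can be seen directly in the kubeconfig that kind writes; the port is the random host-forwarded one from this particular run and will differ for you:

# Show the API server endpoint recorded in the kubeconfig
grep 'server:' $HOME/.kube/config
    server: https://127.0.0.1:36779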

To correct the networking, the (sibling) container needs to be connected to the network that actually hosts the Kubernetes image. The process is as follows:

1.) Create a Kubernetes cluster:
linuxbrew@324ba0f819d7:~$ kind create cluster --name acluster
Creating cluster "acluster" ...
 ✓ Ensuring node image (kindest/node:v1.21.1)
 ✓ Preparing nodes
 ✓ Writing configuration
 ✓ Starting control-plane
 ✓ Installing CNI
 ✓ Installing StorageClass
Set kubectl context to "kind-acluster"
You can now use your cluster with:

kubectl cluster-info --context kind-acluster

Thanks for using kind! 
2.) Verify the cluster is reachable:
linuxbrew@324ba0f819d7:~$ kubectl cluster-info --context kind-acluster

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server 127.0.0.1:36779 was refused - did you specify the right host or port?

3.) Since the cluster cannot be reached, retrieve the control plane's master IP. Note the "-control-plane" suffix appended to the cluster name:

linuxbrew@324ba0f819d7:~$ export MASTER_IP=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' acluster-control-plane)
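
A quick sanity check that the variable actually holds the node's address (the IP below is just what this run produced; yours will differ):

echo $MASTER_IP
172.18.0.4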

4.) Update the kube config with the actual master IP:

linuxbrew@324ba0f819d7:~$ sed -i "s/^    server:.*/    server: https:\/\/$MASTER_IP:6443/" $HOME/.kube/config
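
If you prefer not to edit the file with sed, the same change can be made through kubectl, assuming kind named the cluster entry kind-acluster (kind prefixes the name with "kind-"):

kubectl config set-cluster kind-acluster --server=https://$MASTER_IP:6443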

5.) This IP is still unreachable from the (sibling) container. To connect the container to the correct network, retrieve the docker network ID:

linuxbrew@324ba0f819d7:~$ export MASTER_NET=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.NetworkID}}{{end}}' acluster-control-plane)

6.) Finally, connect the (sibling) container ID (which should be stored in the $HOSTNAME environment variable) to the cluster's docker network:

linuxbrew@324ba0f819d7:~$ docker network connect $MASTER_NET $HOSTNAME
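
To double-check the attachment, list which containers are now on that network (a sketch; the sibling container should appear alongside the control-plane node):

docker network inspect $MASTER_NET --format '{{range .Containers}}{{.Name}} {{end}}'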

7.) Verify the control plane is reachable after the change:

linuxbrew@324ba0f819d7:~$ kubectl cluster-info --context kind-acluster
Kubernetes control plane is running at https://172.18.0.4:6443
CoreDNS is running at https://172.18.0.4:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

If kubectl returns the Kubernetes control plane and CoreDNS URLs, as shown in the previous step, the configuration was successful.
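
For a CI pipeline, steps 3 through 6 can be collapsed into a single script; a sketch assuming the cluster name acluster used above:

#!/usr/bin/env bash
set -euo pipefail

CLUSTER=acluster
NODE="${CLUSTER}-control-plane"

# IP and docker network of the control-plane node, as seen by the host daemon
MASTER_IP=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$NODE")
MASTER_NET=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.NetworkID}}{{end}}' "$NODE")

# Point the kubeconfig at the real node IP instead of the loopback address
sed -i "s/^    server:.*/    server: https:\/\/$MASTER_IP:6443/" "$HOME/.kube/config"

# Attach this (sibling) container to the cluster's network
docker network connect "$MASTER_NET" "$HOSTNAME"

# Fail the job early if the control plane is still unreachable
kubectl cluster-info --context "kind-${CLUSTER}"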

Answered 2021-10-31T11:14:46.670