I'm new to the Kubernetes world, so forgive me if I've got something wrong. I'm trying to deploy the Kubernetes dashboard.
My cluster has three master nodes and three worker nodes; the workers have been drained and are unschedulable, so that the dashboard gets installed on a master node:
[root@pp-tmp-test20 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
pp-tmp-test20 Ready master 2d2h v1.15.2
pp-tmp-test21 Ready master 37h v1.15.2
pp-tmp-test22 Ready master 37h v1.15.2
pp-tmp-test23 Ready,SchedulingDisabled worker 36h v1.15.2
pp-tmp-test24 Ready,SchedulingDisabled worker 36h v1.15.2
pp-tmp-test25 Ready,SchedulingDisabled worker 36h v1.15.2
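For reference, the `SchedulingDisabled` state on the workers would have been produced by something like the following (a sketch; the node names are taken from the listing above):

```shell
# Mark each worker unschedulable so that new pods (such as the dashboard)
# can only be placed on the masters. `kubectl drain` additionally evicts
# the pods already running there.
for node in pp-tmp-test23 pp-tmp-test24 pp-tmp-test25; do
  kubectl drain "$node" --ignore-daemonsets
done
```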
I'm deploying the Kubernetes dashboard from this URL:
[root@pp-tmp-test20 ~]# kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
After that, a pod kubernetes-dashboard-5698d5bc9-ql6q8 was scheduled on my master node pp-tmp-test20/172.31.68.220.
The pod:
kube-system kubernetes-dashboard-5698d5bc9-ql6q8 1/1 Running 1 7m11s 10.244.0.7 pp-tmp-test20 <none> <none>
- The pod's logs
[root@pp-tmp-test20 ~]# kubectl logs kubernetes-dashboard-5698d5bc9-ql6q8 -n kube-system
2019/08/14 10:14:57 Starting overwatch
2019/08/14 10:14:57 Using in-cluster config to connect to apiserver
2019/08/14 10:14:57 Using service account token for csrf signing
2019/08/14 10:14:58 Successful initial request to the apiserver, version: v1.15.2
2019/08/14 10:14:58 Generating JWE encryption key
2019/08/14 10:14:58 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2019/08/14 10:14:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2019/08/14 10:14:59 Initializing JWE encryption key from synchronized object
2019/08/14 10:14:59 Creating in-cluster Heapster client
2019/08/14 10:14:59 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/08/14 10:14:59 Auto-generating certificates
2019/08/14 10:14:59 Successfully created certificates
2019/08/14 10:14:59 Serving securely on HTTPS port: 8443
2019/08/14 10:15:29 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/08/14 10:15:59 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
- The pod's description
[root@pp-tmp-test20 ~]# kubectl describe pod kubernetes-dashboard-5698d5bc9-ql6q8 -n kube-system
Name: kubernetes-dashboard-5698d5bc9-ql6q8
Namespace: kube-system
Priority: 0
Node: pp-tmp-test20/172.31.68.220
Start Time: Wed, 14 Aug 2019 16:58:39 +0200
Labels: k8s-app=kubernetes-dashboard
pod-template-hash=5698d5bc9
Annotations: <none>
Status: Running
IP: 10.244.0.7
Controlled By: ReplicaSet/kubernetes-dashboard-5698d5bc9
Containers:
kubernetes-dashboard:
Container ID: docker://40edddf7a9102d15e3b22f4bc6f08b3a07a19e4841f09360daefbce0486baf0e
Image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
Image ID: docker-pullable://k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747
Port: 8443/TCP
Host Port: 0/TCP
Args:
--auto-generate-certificates
State: Running
Started: Wed, 14 Aug 2019 16:58:43 +0200
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 14 Aug 2019 16:58:41 +0200
Finished: Wed, 14 Aug 2019 16:58:42 +0200
Ready: True
Restart Count: 1
Liveness: http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/certs from kubernetes-dashboard-certs (rw)
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-ptw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kubernetes-dashboard-certs:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-certs
Optional: false
tmp-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kubernetes-dashboard-token-ptw78:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-token-ptw78
Optional: false
QoS Class: BestEffort
Node-Selectors: dashboard=true
Tolerations: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
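The `Node-Selectors: dashboard=true` and the master toleration shown above would have come from the dashboard Deployment's pod template, roughly like this (a sketch; it assumes the master was labeled beforehand with `kubectl label node pp-tmp-test20 dashboard=true`):

```yaml
# Excerpt of the Deployment pod template (sketch, not the full manifest).
spec:
  template:
    spec:
      nodeSelector:
        dashboard: "true"
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
```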
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m41s default-scheduler Successfully assigned kube-system/kubernetes-dashboard-5698d5bc9-ql6q8 to pp-tmp-test20.tec.prj.in.phm.education.gouv.fr
Normal Pulled 2m38s (x2 over 2m40s) kubelet, pp-tmp-test20 Container image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1" already present on machine
Normal Created 2m37s (x2 over 2m39s) kubelet, pp-tmp-test20 Created container kubernetes-dashboard
Normal Started 2m37s (x2 over 2m39s) kubelet, pp-tmp-test20 Started container kubernetes-dashboard
- The dashboard service's description
[root@pp-tmp-test20 ~]# kubectl describe svc/kubernetes-dashboard -n kube-system
Name: kubernetes-dashboard
Namespace: kube-system
Labels: k8s-app=kubernetes-dashboard
Annotations: <none>
Selector: k8s-app=kubernetes-dashboard
Type: ClusterIP
IP: 10.110.236.88
Port: <unset> 443/TCP
TargetPort: 8443/TCP
Endpoints: 10.244.0.7:8443
Session Affinity: None
Events: <none>
- docker ps on the master running the pod
[root@pp-tmp-test20 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
40edddf7a910 f9aed6605b81 "/dashboard --inse..." 7 minutes ago Up 7 minutes k8s_kubernetes-dashboard_kubernetes-dashboard-5698d5bc9-ql6q8_kube-system_f785d4bd-2e67-4daa-9f6c-19f98582fccb_1
e7f3820f1cf2 k8s.gcr.io/pause:3.1 "/pause" 7 minutes ago Up 7 minutes k8s_POD_kubernetes-dashboard-5698d5bc9-ql6q8_kube-system_f785d4bd-2e67-4daa-9f6c-19f98582fccb_0
[root@pp-tmp-test20 ~]# docker logs 40edddf7a910
2019/08/14 14:58:43 Starting overwatch
2019/08/14 14:58:43 Using in-cluster config to connect to apiserver
2019/08/14 14:58:43 Using service account token for csrf signing
2019/08/14 14:58:44 Successful initial request to the apiserver, version: v1.15.2
2019/08/14 14:58:44 Generating JWE encryption key
2019/08/14 14:58:44 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2019/08/14 14:58:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2019/08/14 14:58:44 Initializing JWE encryption key from synchronized object
2019/08/14 14:58:44 Creating in-cluster Heapster client
2019/08/14 14:58:44 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/08/14 14:58:44 Auto-generating certificates
2019/08/14 14:58:44 Successfully created certificates
2019/08/14 14:58:44 Serving securely on HTTPS port: 8443
2019/08/14 14:59:14 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/08/14 14:59:44 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/08/14 15:00:14 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
1/ On my master node, I start the proxy:
[root@pp-tmp-test20 ~]# kubectl proxy
Starting to serve on 127.0.0.1:8001
2/ From my master, I launch Firefox with X11 forwarding and open this URL:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
This is the error message I get in the browser:
Error: 'dial tcp 10.244.0.7:8443: connect: no route to host'
Trying to reach: 'https://10.244.0.7:8443/'
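A "no route to host" toward a pod IP usually points at host firewalling or the overlay network (flannel, judging by the 10.244.0.0/16 range) rather than at the dashboard itself. A few checks worth running on the master (a sketch; paths and tool availability depend on the distribution):

```shell
# Can the node reach the pod IP directly, bypassing the proxy chain?
curl -k --max-time 5 https://10.244.0.7:8443/

# Is firewalld active? It is a common cause of intermittent
# "no route to host" with flannel on CentOS/RHEL hosts.
systemctl status firewalld

# Are REJECT rules interfering with pod traffic?
iptables -L -n | grep -i reject

# Is there a route for the pod network?
ip route | grep 10.244
```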
Meanwhile, I get these errors in the console where the proxy was started:
I0814 16:10:05.836114 20240 log.go:172] http: proxy error: context canceled
I0814 16:10:06.198701 20240 log.go:172] http: proxy error: context canceled
I0814 16:13:21.708190 20240 log.go:172] http: proxy error: unexpected EOF
I0814 16:13:21.708229 20240 log.go:172] http: proxy error: unexpected EOF
I0814 16:13:21.708270 20240 log.go:172] http: proxy error: unexpected EOF
I0814 16:13:39.335483 20240 log.go:172] http: proxy error: context canceled
I0814 16:13:39.716360 20240 log.go:172] http: proxy error: context canceled
But after refreshing the browser n times (at random), I can reach the login screen and enter the token (created earlier).
But... the same error occurs again.
After clicking the "Sign in" button n times, I am able to reach the dashboard... for a few seconds.
After that, while I explore the UI, the dashboard starts producing the same errors again:
I looked at the pod logs, and we can see some traffic:
[root@pp-tmp-test20 ~]# kubectl logs kubernetes-dashboard-5698d5bc9-ql6q8 -n kube-system
2019/08/14 14:16:56 Getting list of all services in the cluster
2019/08/14 14:16:56 [2019-08-14T14:16:56Z] Outcoming response to 10.244.0.1:56140 with 200 status code
2019/08/14 14:17:01 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Incoming HTTP/2.0 GET /api/v1/login/status request from 10.244.0.1:56140: {}
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Outcoming response to 10.244.0.1:56140 with 200 status code
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Incoming HTTP/2.0 GET /api/v1/csrftoken/token request from 10.244.0.1:56140: {}
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Outcoming response to 10.244.0.1:56140 with 200 status code
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Incoming HTTP/2.0 POST /api/v1/token/refresh request from 10.244.0.1:56140: { contents hidden }
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Outcoming response to 10.244.0.1:56140 with 200 status code
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Incoming HTTP/2.0 GET /api/v1/settings/global/cani request from 10.244.0.1:56140: {}
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Outcoming response to 10.244.0.1:56140 with 200 status code
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Incoming HTTP/2.0 GET /api/v1/settings/global request from 10.244.0.1:56140: {}
2019/08/14 14:17:22 Cannot find settings config map: configmaps "kubernetes-dashboard-settings" not found
The pod logs again:
[root@pp-tmp-test20 ~]# kubectl logs kubernetes-dashboard-5698d5bc9-ql6q8 -n kube-system
Error from server: Get https://172.31.68.220:10250/containerLogs/kube-system/kubernetes-dashboard-5698d5bc9-ql6q8/kubernetes-dashboard: Forbidden
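The Forbidden on port 10250 means the kubelet refused the API server's request for container logs, which is an authentication/authorization question between those two components, separate from the dashboard problem. One place to look (a sketch; the path below is the kubeadm default and may differ on other installs):

```shell
# Inspect how the kubelet authenticates and authorizes incoming
# API-server requests (e.g. anonymous auth, webhook authorization).
grep -A5 -E 'authentication|authorization' /var/lib/kubelet/config.yaml
```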
What am I doing wrong? Could you suggest some ways to investigate?
EDIT:
The service account I used:
# cat dashboard-adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kube-system
# cat dashboard-adminuser-ClusterRoleBinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kube-system
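With the admin-user ServiceAccount above, the login token is read from the secret that Kubernetes generates for it; something like (a sketch, this is the usual retrieval pattern for v1.x clusters where ServiceAccount token secrets are auto-created):

```shell
# Print the bearer token for the admin-user service account.
# The secret name has a generated suffix, hence the grep.
kubectl -n kube-system describe secret \
  "$(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')"
```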