
I was able to set up a K8s cluster successfully. Later I wanted to allow insecure access to the kube-apiserver, so I added the following parameters to /kube-apiserver.yaml:

- --insecure-bind-address=0.0.0.0
- --insecure-port=8080
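
For reference, flags like these go in the `command` list of the kube-apiserver static pod manifest. A minimal sketch, assuming a CoreOS/hyperkube-style setup of that era (the image and the other flags are illustrative placeholders, not taken from the original manifest):

```yaml
# kube-apiserver.yaml (static pod manifest) - illustrative sketch.
# Only the two insecure-access flags are from the question above;
# everything else is a placeholder for a typical manifest.
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: quay.io/coreos/hyperkube:v1.6.2_coreos.0  # placeholder image/version
    command:
    - /hyperkube
    - apiserver
    - --etcd-servers=http://192.168.57.13:2379       # etcd address from the logs below
    - --insecure-bind-address=0.0.0.0                # flag added in the question
    - --insecure-port=8080                           # flag added in the question
```

Note that binding the insecure port to 0.0.0.0 exposes an unauthenticated API endpoint, so it is generally only appropriate on isolated test networks.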

But when I restarted the service, the apiserver failed to come up. So I reverted to the original configuration, but the service still fails with the errors below. I get all sorts of errors, and I think the root cause is that the kubelet cannot start the API server.

    ● kubelet.service
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2017-05-14 01:57:40 UTC; 1min 16s ago
  Process: 4055 ExecStartPre=/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid (code=exited, status=0/SUCCESS)
  Process: 4050 ExecStartPre=/usr/bin/mkdir -p /var/log/containers (code=exited, status=0/SUCCESS)
  Process: 4045 ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests (code=exited, status=0/SUCCESS)
 Main PID: 4082 (kubelet)
    Tasks: 15 (limit: 32768)
   Memory: 55.0M
      CPU: 7.876s
   CGroup: /system.slice/kubelet.service
           ├─4082 /kubelet --api-servers=http://127.0.0.1:8080 --register-schedulable=false --cni-conf-dir=/etc/kubernetes/cni/net.d --network-plugin= --container-runtime=docker --allow-privileged=true --pod-manifest-path=/etc/kubernetes/manifests --hostname-override=192.168.57.12 --cluster_dns=10.3.0.10 --cluster_domain=cluster.local
           └─4126 journalctl -k -f

May 14 01:58:42 a-test-1772868e-a036-4392-bbfc-d7c811967e88.novalocal kubelet-wrapper[4082]: E0514 01:58:42.403056    4082 kubelet_node_status.go:101] Unable to register node "192.168.57.12" with API server: Post http://127.0.0.1:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 01:58:46 a-test-1772868e-a036-4392-bbfc-d7c811967e88.novalocal kubelet-wrapper[4082]: E0514 01:58:46.565119    4082 eviction_manager.go:214] eviction manager: unexpected err: failed GetNode: node '192.168.57.12' not found
May 14 01:58:49 a-test-1772868e-a036-4392-bbfc-d7c811967e88.novalocal kubelet-wrapper[4082]: I0514 01:58:49.403315    4082 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
May 14 01:58:49 a-test-1772868e-a036-4392-bbfc-d7c811967e88.novalocal kubelet-wrapper[4082]: I0514 01:58:49.406572    4082 kubelet_node_status.go:77] Attempting to register node 192.168.57.12
May 14 01:58:49 a-test-1772868e-a036-4392-bbfc-d7c811967e88.novalocal kubelet-wrapper[4082]: E0514 01:58:49.467143    4082 kubelet_node_status.go:101] Unable to register node "192.168.57.12" with API server: rpc error: code = 13 desc = transport is closing
May 14 01:58:53 a-test-1772868e-a036-4392-bbfc-d7c811967e88.novalocal kubelet-wrapper[4082]: I0514 01:58:53.717328    4082 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
May 14 01:58:56 a-test-1772868e-a036-4392-bbfc-d7c811967e88.novalocal kubelet-wrapper[4082]: I0514 01:58:56.467325    4082 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
May 14 01:58:56 a-test-1772868e-a036-4392-bbfc-d7c811967e88.novalocal kubelet-wrapper[4082]: I0514 01:58:56.469607    4082 kubelet_node_status.go:77] Attempting to register node 192.168.57.12
May 14 01:58:56 a-test-1772868e-a036-4392-bbfc-d7c811967e88.novalocal kubelet-wrapper[4082]: E0514 01:58:56.540698    4082 kubelet_node_status.go:101] Unable to register node "192.168.57.12" with API server: rpc error: code = 13 desc = transport is closing
May 14 01:58:56 a-test-1772868e-a036-4392-bbfc-d7c811967e88.novalocal kubelet-wrapper[4082]: E0514 01:58:56.624800    4082 eviction_manager.go:214] eviction manager: unexpected err: failed GetNode: node '192.168.57.12' not found

How can I overcome this problem? Is there a way to wipe everything and start from scratch? I suspect some stale metadata is still lingering.

EDIT

Full logs from /var/log/pods:

{"log":"[restful] 2017/05/14 02:13:39 log.go:30: [restful/swagger] listing is available at https://192.168.57.12:443/swaggerapi/\n","stream":"stderr","time":"2017-05-14T02:13:39.793102449Z"}
{"log":"[restful] 2017/05/14 02:13:39 log.go:30: [restful/swagger] https://192.168.57.12:443/swaggerui/ is mapped to folder /swagger-ui/\n","stream":"stderr","time":"2017-05-14T02:13:39.79318582Z"}
{"log":"E0514 02:13:39.808436       1 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.LimitRange: Get https://localhost:443/api/v1/limitranges?resourceVersion=0: dial tcp [::1]:443: getsockopt: connection refused\n","stream":"stderr","time":"2017-05-14T02:13:39.808684379Z"}
{"log":"E0514 02:13:39.827225       1 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.ServiceAccount: Get https://localhost:443/api/v1/serviceaccounts?resourceVersion=0: dial tcp [::1]:443: getsockopt: connection refused\n","stream":"stderr","time":"2017-05-14T02:13:39.827488516Z"}
{"log":"E0514 02:13:39.827352       1 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *storage.StorageClass: Get https://localhost:443/apis/storage.k8s.io/v1beta1/storageclasses?resourceVersion=0: dial tcp [::1]:443: getsockopt: connection refused\n","stream":"stderr","time":"2017-05-14T02:13:39.827527463Z"}
{"log":"E0514 02:13:39.836498       1 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.ResourceQuota: Get https://localhost:443/api/v1/resourcequotas?resourceVersion=0: dial tcp [::1]:443: getsockopt: connection refused\n","stream":"stderr","time":"2017-05-14T02:13:39.85392487Z"}
{"log":"E0514 02:13:39.836599       1 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.Secret: Get https://localhost:443/api/v1/secrets?resourceVersion=0: dial tcp [::1]:443: getsockopt: connection refused\n","stream":"stderr","time":"2017-05-14T02:13:39.853986447Z"}
{"log":"E0514 02:13:39.836878       1 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.Namespace: Get https://localhost:443/api/v1/namespaces?resourceVersion=0: dial tcp [::1]:443: getsockopt: connection refused\n","stream":"stderr","time":"2017-05-14T02:13:39.853997731Z"}
{"log":"I0514 02:13:40.063564       1 serve.go:79] Serving securely on 0.0.0.0:443\n","stream":"stderr","time":"2017-05-14T02:13:40.063882848Z"}
{"log":"I0514 02:13:40.063699       1 serve.go:94] Serving insecurely on 127.0.0.1:8080\n","stream":"stderr","time":"2017-05-14T02:13:40.063934866Z"}
{"log":"E0514 02:13:40.290119       1 status.go:62] apiserver received an error that is not an metav1.Status: rpc error: code = 13 desc = transport: write tcp 192.168.57.12:34040-\u003e192.168.57.13:2379: write: broken pipe\n","stream":"stderr","time":"2017-05-14T02:13:40.290393332Z"}
{"log":"E0514 02:13:40.425110       1 client_ca_hook.go:58] rpc error: code = 13 desc = transport: write tcp 192.168.57.12:34040-\u003e192.168.57.13:2379: write: broken pipe\n","stream":"stderr","time":"2017-05-14T02:13:40.425345333Z"}
{"log":"E0514 02:13:41.169712       1 status.go:62] apiserver received an error that is not an metav1.Status: rpc error: code = 13 desc = transport: write tcp 192.168.57.12:36072-\u003e192.168.57.13:2379: write: connection reset by peer\n","stream":"stderr","time":"2017-05-14T02:13:41.169945414Z"}
{"log":"E0514 02:13:42.597820       1 status.go:62] apiserver received an error that is not an metav1.Status: rpc error: code = 13 desc = transport is closing\n","stream":"stderr","time":"2017-05-14T02:13:42.598129559Z"}
{"log":"E0514 02:13:44.957615       1 status.go:62] apiserver received an error that is not an metav1.Status: rpc error: code = 13 desc = transport: write tcp 192.168.57.12:43412-\u003e192.168.57.13:2379: write: broken pipe\n","stream":"stderr","time":"2017-05-14T02:13:44.957912009Z"}
{"log":"E0514 02:13:48.209202       1 status.go:62] apiserver received an error that is not an metav1.Status: rpc error: code = 13 desc = transport: write tcp 192.168.57.12:49898-\u003e192.168.57.13:2379: write: broken pipe\n","stream":"stderr","time":"2017-05-14T02:13:48.209484622Z"}
{"log":"E0514 02:13:49.791540       1 status.go:62] apiserver received an error that is not an metav1.Status: rpc error: code = 13 desc = transport is closing\n","stream":"stderr","time":"2017-05-14T02:13:49.79181274Z"}
{"log":"I0514 02:13:50.925762       1 trace.go:61] Trace \"Create /api/v1/namespaces/kube-system/pods\" (started 2017-05-14 02:13:40.914013106 +0000 UTC):\n","stream":"stderr","time":"2017-05-14T02:13:50.926040257Z"}
{"log":"[33.749µs] [33.749µs]

2 Answers

1

This is caused by the etcd version; you solved it by setting the storage version in the apiserver configuration file.

You could also solve this by upgrading your etcd. I would suggest reading this document on how to upgrade etcd.

answered 2017-05-21T05:55:11.583
0

I was able to solve this by setting the following two parameters in kube-apiserver.yaml. The problem was that, by default, the API server was configured to talk to an etcd3 backend, so I had to set the etcd storage version explicitly.

--storage-backend=etcd2
--storage-media-type=application/json
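
In manifest form, the fix looks roughly like this. Only the two storage flags are from the answer; the surrounding lines are illustrative. Kubernetes releases of this era default to the etcd v3 API with protobuf storage, so both flags are needed when the backing store is still etcd2:

```yaml
# Relevant fragment of kube-apiserver.yaml - illustrative sketch.
    command:
    - /hyperkube
    - apiserver
    - --etcd-servers=http://192.168.57.13:2379   # etcd address from the logs
    - --storage-backend=etcd2                    # use the etcd v2 API
    - --storage-media-type=application/json      # etcd2 data is stored as JSON
```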
answered 2017-05-17T03:09:29.023