
I'm trying to start the Kubernetes API server with an etcd config file (Kubernetes uses go-etcd, which has a way to read all of its parameters from a config file):

{
  "cluster": {
    "machines": [ "https://my-public-hostname:2379" ]
  },
  "config": {
    "certFile": "/etc/ssl/etcd/client.pem",
    "keyFile": "/etc/ssl/etcd/client.key.pem",
    "caCertFiles": [
      "/etc/ssl/etcd/ca.pem"
    ],
    "timeout": 5,
    "consistency": "WEAK"
  }
}
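
For reference, this is a rough sketch of how I understand go-etcd consumes a file like this (my own illustration, not the actual kube-apiserver code; I'm assuming go-etcd's NewClientFromFile helper and reusing the /etc/kubernetes/ssl/etcd.json path that I pass via --etcd_config further down):

package main

import (
    "log"

    "github.com/coreos/go-etcd/etcd"
)

func main() {
    // Parse the JSON file above (machines, certFile, keyFile,
    // caCertFiles, timeout, consistency) into a TLS-enabled client.
    client, err := etcd.NewClientFromFile("/etc/kubernetes/ssl/etcd.json")
    if err != nil {
        log.Fatalf("cannot load etcd config: %v", err)
    }

    // A simple read as a connectivity check, roughly `etcdctl ls /`.
    if _, err := client.Get("/", false, false); err != nil {
        log.Fatalf("etcd not reachable: %v", err)
    }
}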

But kube-apiserver fails because it cannot reach etcd successfully. I think it's because it tries to sync the cluster... but I'm not sure.

I have created an (etcd) cluster that uses internal IPs for the advertise and client addresses, except for listen-client-urls, which is set to 0.0.0.0. Also, the whole cluster sits behind a load balancer that is reachable via my-public-hostname.

Inside the container (I'm using hyperkube), etcdctl does not work unless I pass the "--no-sync" flag. If I run etcdctl without that flag, it fails, probably in the same way kube-apiserver does. But I haven't been able to find the code in Kubernetes that performs the cluster sync...
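
To illustrate what I mean by "cluster sync", here is a rough sketch based on my reading of go-etcd's API (not the actual etcdctl or kube-apiserver code; the endpoint and cert paths are the ones from my setup above):

package main

import (
    "log"

    "github.com/coreos/go-etcd/etcd"
)

func main() {
    // Start from the public load-balancer endpoint, with client certs.
    client, err := etcd.NewTLSClient(
        []string{"https://my-public-hostname:2379"},
        "/etc/ssl/etcd/client.pem",
        "/etc/ssl/etcd/client.key.pem",
        "/etc/ssl/etcd/ca.pem",
    )
    if err != nil {
        log.Fatal(err)
    }

    // This is what happens when --no-sync is NOT given: the member list
    // advertised by the cluster (the internal 10.1.0.x URLs) replaces the
    // public endpoint, and those URLs are unreachable from the container.
    if !client.SyncCluster() {
        log.Println("cluster sync failed")
    }
    log.Println("endpoints after sync:", client.GetCluster())
}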

Any ideas?

Thanks in advance.

EDIT:

This seems to be a bug related to the etcd client currently used in Kubernetes (https://github.com/coreos/go-etcd), which is not the latest one (https://github.com/coreos/etcd/client). I tested this empirically: "etcd/client" works but "go-etcd" does not. You can see that test here: https://github.com/glerchundi/etcd-go-clients-test.

It's worth noting that there is ongoing work to migrate from go-etcd to etcd/client in Kubernetes: https://github.com/kubernetes/kubernetes/issues/11962.

Can anyone from the Kubernetes team confirm this?

ADDENDUM 1

I'm trying to run Kubernetes on CoreOS, and: flannel works, locksmithd works, fleet works (and they all access etcd with the same client credentials), so this probably has something to do with how Kubernetes accesses the etcd endpoints.

ADDENDUM 2 (these commands were executed inside the hyperkube container, specifically gcr.io/google_containers/hyperkube:v1.0.6):

etcdctl without --no-sync fails with this output:

root@98b2524464f1:/# etcdctl --cert-file="/etc/ssl/etcd/client.pem" --key-file="/etc/ssl/etcd/client.key.pem" --ca-file="/etc/ssl/etcd/ca.pem" --peers="https://my-public-hostname:2379" ls /
Error: 501: All the given peers are not reachable (failed to propose on members [https://10.1.0.1:2379 https://10.1.0.0:2379 https://10.1.0.2:2379] twice [last error: Get https://10.1.0.0:2379/v2/keys/?quorum=false&recursive=false&sorted=false: dial tcp 10.1.0.0:2379: i/o timeout]) [0]

and kube-apiserver with this:

root@98b2524464f1:/# /hyperkube \
apiserver \
--bind-address=0.0.0.0 \
--etcd_config=/etc/kubernetes/ssl/etcd.json \
--allow-privileged=true \
--service-cluster-ip-range=10.3.0.0/24 \
--secure_port=443 \
--advertise-address=10.0.0.2 \
--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \
--tls-cert-file=/etc/kubernetes/ssl/apiserver.pem \
--tls-private-key-file=/etc/kubernetes/ssl/apiserver.key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-key-file=/etc/kubernetes/ssl/apiserver.key.pem

F1002 09:47:29.348527 384 controller.go:80] Unable to perform initial IP allocation check: unable to refresh the service IP block: 501: All the given peers are not reachable (failed to propose on members [https://my-public-hostname:2379] twice [last error: Get https://my-public-hostname:2379/v2/keys/registry/ranges/serviceips?quorum=false&recursive=false&sorted=false: dial tcp: i/o timeout]) [0]

ADDENDUM 3

etcd #0:
  etcd2:
    name: etcd0
    initial-cluster-state: new
    initial-cluster: etcd0=http://10.1.0.0:2380,etcd1=http://10.1.0.1:2380,etcd2=http://10.1.0.2:2380
    data-dir: /var/lib/etcd2
    advertise-client-urls: https://10.1.0.0:2379
    initial-advertise-peer-urls: http://10.1.0.0:2380
    listen-client-urls: https://0.0.0.0:2379
    listen-peer-urls: http://10.1.0.0:2380
    client-cert-auth: true
    trusted-ca-file: /etc/ssl/etcd/certs/ca-chain.cert.pem
    cert-file: /etc/ssl/etcd/certs/etcd-server.cert.pem
    key-file: /etc/ssl/etcd/private/etcd-server.key.pem

etcd #1:
  etcd2:
    name: etcd1
    initial-cluster-state: new
    initial-cluster: etcd0=http://10.1.0.0:2380,etcd1=http://10.1.0.1:2380,etcd2=http://10.1.0.2:2380
    data-dir: /var/lib/etcd2
    advertise-client-urls: https://10.1.0.1:2379
    initial-advertise-peer-urls: http://10.1.0.1:2380
    listen-client-urls: https://0.0.0.0:2379
    listen-peer-urls: http://10.1.0.1:2380
    client-cert-auth: true
    trusted-ca-file: /etc/ssl/etcd/certs/ca-chain.cert.pem
    cert-file: /etc/ssl/etcd/certs/etcd-server.cert.pem
    key-file: /etc/ssl/etcd/private/etcd-server.key.pem

etcd #2:
  etcd2:
    name: etcd2
    initial-cluster-state: new
    initial-cluster: etcd0=http://10.1.0.0:2380,etcd1=http://10.1.0.1:2380,etcd2=http://10.1.0.2:2380
    data-dir: /var/lib/etcd2
    advertise-client-urls: https://10.1.0.2:2379
    initial-advertise-peer-urls: http://10.1.0.2:2380
    listen-client-urls: https://0.0.0.0:2379
    listen-peer-urls: http://10.1.0.2:2380
    client-cert-auth: true
    trusted-ca-file: /etc/ssl/etcd/certs/ca-chain.cert.pem
    cert-file: /etc/ssl/etcd/certs/etcd-server.cert.pem
    key-file: /etc/ssl/etcd/private/etcd-server.key.pem

1 Answer


Finally I found out what was causing this issue. The timeout was not being set correctly: go-etcd unmarshals the JSON timeout value into a time.Duration, whose base unit is the nanosecond. So for a timeout of 1s you have to write 1000000000.

Following the example above:

{ 
  "cluster": {
    "machines": [ "https://my-public-hostname:2379" ] 
  }, 
  "config": { 
    "certFile": "/etc/ssl/etcd/client.pem", 
    "keyFile": "/etc/ssl/etcd/client.key.pem", 
    "caCertFiles": [ 
      "/etc/ssl/etcd/ca.pem" 
    ], 
    "timeout": 5000000000, 
    "consistency": "WEAK" 
  } 
}
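
To see why the raw number ends up being read as nanoseconds, here is a tiny standalone demo using only the standard library (the struct and the "timeout" tag just mirror the config key above; they are not go-etcd's actual types):

package main

import (
    "encoding/json"
    "fmt"
    "time"
)

type config struct {
    Timeout time.Duration `json:"timeout"`
}

func main() {
    var wrong, right config
    // time.Duration is an int64 counted in nanoseconds, so a plain JSON
    // number lands in it without any unit conversion.
    json.Unmarshal([]byte(`{"timeout": 5}`), &wrong)
    json.Unmarshal([]byte(`{"timeout": 5000000000}`), &right)

    fmt.Println(wrong.Timeout) // 5ns -> every dial times out almost immediately
    fmt.Println(right.Timeout) // 5s  -> the intended dial timeout
}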
Answered 2015-10-04T17:11:15.