
For my e2e tests I'm spinning up a separate cluster into which I'd like to import my production TLS certificate. I'm having trouble switching the context between the two clusters (export/get from one and import/apply (in)to the other) because the clusters don't seem to be visible.
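Conceptually, what I'm trying to do is just this (the context names are placeholders; letsencrypt-prod is my production certificate secret):

$ kubectl --context <production-context> get secret letsencrypt-prod -o yaml > secret.yml
$ kubectl --context <e2e-context> apply -f secret.yml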

I extracted an MVCE using GitLab CI and the following .gitlab-ci.yml, in which I create a secret for demonstration purposes:

stages:
  - main
  - tear-down

main:
  image: google/cloud-sdk
  stage: main
  script:
    - echo "$GOOGLE_KEY" > key.json
    - gcloud config set project secret-transfer
    - gcloud auth activate-service-account --key-file key.json --project secret-transfer
    - gcloud config set compute/zone us-central1-a
    - gcloud container clusters create secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID --project secret-transfer --machine-type=f1-micro
    - kubectl create secret generic secret-1 --from-literal=key=value
    - gcloud container clusters create secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID --project secret-transfer --machine-type=f1-micro
    - gcloud config set container/use_client_certificate True
    - gcloud config set container/cluster secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
    - kubectl get secret secret-1 --cluster=secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID -o yaml > secret-1.yml
    - gcloud config set container/cluster secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
    - kubectl apply --cluster=secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID -f secret-1.yml

tear-down:
  image: google/cloud-sdk
  stage: tear-down
  when: always
  script:
    - echo "$GOOGLE_KEY" > key.json
    - gcloud config set project secret-transfer
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone us-central1-a
    - gcloud container clusters delete --quiet secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
    - gcloud container clusters delete --quiet secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID

I added the secret-transfer-[1/2]-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID cluster names before the kubectl statements to avoid error: no server found for cluster "secret-transfer-1-...-...", but it doesn't change the outcome.
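One way to check which names kubectl itself would accept here is to list what is actually in the kubeconfig (shown only as a sketch; I haven't added this to the pipeline):

$ kubectl config get-clusters
$ kubectl config get-contexts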

I created a project secret-transfer, activated the Kubernetes API, and obtained a JSON key for the Compute Engine service account, which I provide in the environment variable GOOGLE_KEY. The output after checkout is:

$ echo "$GOOGLE_KEY" > key.json

$ gcloud config set project secret-transfer
Updated property [core/project].

$ gcloud auth activate-service-account --key-file key.json --project secret-transfer
Activated service account credentials for: [131478687181-compute@developer.gserviceaccount.com]

$ gcloud config set compute/zone us-central1-a
Updated property [compute/zone].

$ gcloud container clusters create secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID --project secret-transfer --machine-type=f1-micro
WARNING: In June 2019, node auto-upgrade will be enabled by default for newly created clusters and node pools. To disable it, use the `--no-enable-autoupgrade` flag.
WARNING: Starting in 1.12, new clusters will have basic authentication disabled by default. Basic authentication can be enabled (or disabled) manually using the `--[no-]enable-basic-auth` flag.
WARNING: Starting in 1.12, new clusters will not have a client certificate issued. You can manually enable (or disable) the issuance of the client certificate using the `--[no-]issue-client-certificate` flag.
WARNING: Currently VPC-native is not the default mode during cluster creation. In the future, this will become the default mode and can be disabled using `--no-enable-ip-alias` flag. Use `--[no-]enable-ip-alias` flag to suppress this warning.
WARNING: Starting in 1.12, default node pools in new clusters will have their legacy Compute Engine instance metadata endpoints disabled by default. To create a cluster with legacy instance metadata endpoints disabled in the default node pool, run `clusters create` with the flag `--metadata disable-legacy-endpoints=true`.
WARNING: Your Pod address range (`--cluster-ipv4-cidr`) can accommodate at most 1008 node(s). 
This will enable the autorepair feature for nodes. Please see https://cloud.google.com/kubernetes-engine/docs/node-auto-repair for more information on node autorepairs.
Creating cluster secret-transfer-1-9b219ea8-9 in us-central1-a...
...done.
Created [https://container.googleapis.com/v1/projects/secret-transfer/zones/us-central1-a/clusters/secret-transfer-1-9b219ea8-9].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/us-central1-a/secret-transfer-1-9b219ea8-9?project=secret-transfer
kubeconfig entry generated for secret-transfer-1-9b219ea8-9.
NAME                          LOCATION       MASTER_VERSION  MASTER_IP      MACHINE_TYPE  NODE_VERSION   NUM_NODES  STATUS
secret-transfer-1-9b219ea8-9  us-central1-a  1.12.8-gke.10   34.68.118.165  f1-micro      1.12.8-gke.10  3          RUNNING

$ kubectl create secret generic secret-1 --from-literal=key=value
secret/secret-1 created

$ gcloud container clusters create secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID --project secret-transfer --machine-type=f1-micro
WARNING: In June 2019, node auto-upgrade will be enabled by default for newly created clusters and node pools. To disable it, use the `--no-enable-autoupgrade` flag.
WARNING: Starting in 1.12, new clusters will have basic authentication disabled by default. Basic authentication can be enabled (or disabled) manually using the `--[no-]enable-basic-auth` flag.
WARNING: Starting in 1.12, new clusters will not have a client certificate issued. You can manually enable (or disable) the issuance of the client certificate using the `--[no-]issue-client-certificate` flag.
WARNING: Currently VPC-native is not the default mode during cluster creation. In the future, this will become the default mode and can be disabled using `--no-enable-ip-alias` flag. Use `--[no-]enable-ip-alias` flag to suppress this warning.
WARNING: Starting in 1.12, default node pools in new clusters will have their legacy Compute Engine instance metadata endpoints disabled by default. To create a cluster with legacy instance metadata endpoints disabled in the default node pool, run `clusters create` with the flag `--metadata disable-legacy-endpoints=true`.
WARNING: Your Pod address range (`--cluster-ipv4-cidr`) can accommodate at most 1008 node(s). 
This will enable the autorepair feature for nodes. Please see https://cloud.google.com/kubernetes-engine/docs/node-auto-repair for more information on node autorepairs.
Creating cluster secret-transfer-2-9b219ea8-9 in us-central1-a...
...done.
Created [https://container.googleapis.com/v1/projects/secret-transfer/zones/us-central1-a/clusters/secret-transfer-2-9b219ea8-9].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/us-central1-a/secret-transfer-2-9b219ea8-9?project=secret-transfer
kubeconfig entry generated for secret-transfer-2-9b219ea8-9.
NAME                          LOCATION       MASTER_VERSION  MASTER_IP      MACHINE_TYPE  NODE_VERSION   NUM_NODES  STATUS
secret-transfer-2-9b219ea8-9  us-central1-a  1.12.8-gke.10   104.198.37.21  f1-micro      1.12.8-gke.10  3          RUNNING

$ gcloud config set container/use_client_certificate True
Updated property [container/use_client_certificate].

$ gcloud config set container/cluster secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
Updated property [container/cluster].

$ kubectl get secret secret-1 --cluster=secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID -o yaml > secret-1.yml
error: no server found for cluster "secret-transfer-1-9b219ea8-9"

I expected kubectl get secret to work because both clusters exist and the --cluster argument points to the right cluster.


2 Answers


In general, gcloud commands are used to manage gcloud resources and to handle how you authenticate with gcloud, whereas kubectl commands affect how you interact with Kubernetes clusters. As such, I would avoid doing:

$ gcloud config set container/use_client_certificate True
Updated property [container/use_client_certificate].

$ gcloud config set container/cluster \
  secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
Updated property [container/cluster].

It's not doing what you probably think it's doing (namely, changing anything about how kubectl targets a cluster), and it may mess with how future gcloud commands work.
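If those properties have already been set, they can be cleared again with gcloud config unset (a sketch; double-check the property names against gcloud config list before relying on it):

$ gcloud config unset container/use_client_certificate
$ gcloud config unset container/cluster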

Another consequence of gcloud and kubectl being separate, and of kubectl not knowing intimately about your gcloud settings, is that the cluster name from the gcloud perspective is not the same as the cluster name from the kubectl perspective. When you do things like gcloud config set compute/zone, kubectl knows nothing about that, so it has to be able to uniquely identify clusters which may have the same name but live in different projects and zones, and may not even be in GKE (like minikube or some other cloud provider). That's why kubectl --cluster=<gke-cluster-name> <some_command> doesn't work, and that's the reason you see the error message:

error: no server found for cluster "secret-transfer-1-9b219ea8-9"

As @coderanger pointed out, the cluster name that gcloud container clusters create ... generates in your ~/.kube/config file has a more complex form; currently its pattern is something like gke_[project]_[region]_[name].

So you could run commands with kubectl --cluster gke_[project]_[region]_[name] ... (or kubectl --context gke_[project]_[region]_[name] ..., which is more idiomatic, although both will work in this case since you're using the same service account for both clusters), but that requires knowing how gcloud generates these strings for context and cluster names.
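Spelled out for the clusters in this question, that would presumably look like the following (a sketch assuming the gke_[project]_[zone]_[name] pattern holds for this gcloud version; I haven't run it against this exact pipeline):

$ kubectl --cluster gke_secret-transfer_us-central1-a_secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID \
  get secret secret-1 -o yaml > secret-1.yml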

An alternative is to do something like this:

$ KUBECONFIG=~/.kube/config1 gcloud container clusters create \
  secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID \
  --project secret-transfer --machine-type=f1-micro

$ KUBECONFIG=~/.kube/config1 kubectl create secret generic secret-1 --from-literal=key=value

$ KUBECONFIG=~/.kube/config2 gcloud container clusters create \
  secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID \
  --project secret-transfer --machine-type=f1-micro

$ KUBECONFIG=~/.kube/config1 kubectl get secret secret-1 -o yaml > secret-1.yml

$ KUBECONFIG=~/.kube/config2 kubectl apply -f secret-1.yml

By having separate KUBECONFIG files that you control, you don't have to guess any strings. Setting the KUBECONFIG variable when creating a cluster will result in that file being created and in gcloud putting the credentials for kubectl to access that cluster into that file. Setting the KUBECONFIG environment variable when running a kubectl command will ensure the context set in that particular file is used.
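Adapted to the .gitlab-ci.yml from the question, the relevant part of the main job's script (after the gcloud auth/config lines) could look roughly like this. This is a sketch that keeps the original cluster names and flags; the kubeconfig file names are arbitrary paths in the build directory:

    - KUBECONFIG=kubeconfig-1 gcloud container clusters create secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID --project secret-transfer --machine-type=f1-micro
    - KUBECONFIG=kubeconfig-1 kubectl create secret generic secret-1 --from-literal=key=value
    - KUBECONFIG=kubeconfig-2 gcloud container clusters create secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID --project secret-transfer --machine-type=f1-micro
    - KUBECONFIG=kubeconfig-1 kubectl get secret secret-1 -o yaml > secret-1.yml
    - KUBECONFIG=kubeconfig-2 kubectl apply -f secret-1.yml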

Answered 2019-08-18T22:30:56.410

You probably mean to use --context instead of --cluster. The context sets both the cluster and the user to use. Also, the context and cluster (and user) names that GKE creates are not just the cluster identifier; they have the form gke_[project]_[region]_[name].
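Put together, transferring the secret with --context would look something like this (a sketch; the context names follow the gke_[project]_[zone]_[name] pattern described above, so verify them with kubectl config get-contexts first):

$ kubectl --context gke_secret-transfer_us-central1-a_secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID get secret secret-1 -o yaml > secret-1.yml
$ kubectl --context gke_secret-transfer_us-central1-a_secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID apply -f secret-1.yml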

Answered 2019-08-18T21:44:08.770