
I'm trying to set up a Node.js app on GKE with a Cloud SQL Postgres database via the sidecar proxy. I'm following the documentation but can't get it to work. The proxy container doesn't seem to start (the app container does start). I don't know why the proxy container fails to start, nor how to debug it (e.g., how do I even get an error message!?).

mysecret.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: [base64_username]
  password: [base64_password]
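For reference, the [base64_username] and [base64_password] placeholders must hold base64-encoded values, not plain text. A minimal sketch, using the hypothetical credentials myuser / mypassword:

```shell
# Secret manifests take base64-encoded values; -n avoids encoding a trailing newline
echo -n 'myuser' | base64       # bXl1c2Vy
echo -n 'mypassword' | base64   # bXlwYXNzd29yZA==
```

Alternatively, kubectl create secret generic mysecret --from-literal=username=... --from-literal=password=... does the encoding for you.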

Output of kubectl get secrets:

NAME                      TYPE                                  DATA   AGE
default-token-tbgsv       kubernetes.io/service-account-token   3      5d
mysecret                  Opaque                                2      7h

app-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: app
          image: gcr.io/myproject/firstapp:v2
          ports:
            - containerPort: 8080
          env:
            - name: POSTGRES_DB_HOST
              value: 127.0.0.1:5432
            - name: POSTGRES_DB_USER
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: username
            - name: POSTGRES_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: password
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy",
                    "-instances=myproject:europe-west4:databasename=tcp:5432",
                    "-credential_file=/secrets/cloudsql/mysecret.json"]
          securityContext:
            runAsUser: 2
            allowPrivilegeEscalation: false
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: mysecret

Output of kubectl create -f ./kubernetes/app-deployment.json:

deployment.apps/myapp created

Output of kubectl get deployments:

NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
myapp   1         1         1            0           5s

Output of kubectl get pods:

NAME                     READY   STATUS             RESTARTS   AGE
myapp-5bc965f688-5rxwp   1/2     CrashLoopBackOff   1          10s
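To get an error message out of a crashing sidecar, the container's own logs are usually the quickest route. A sketch, using the pod name from the output above (the -c flag selects the container within the pod):

```shell
# Logs of the crashing sidecar container
kubectl logs myapp-5bc965f688-5rxwp -c cloudsql-proxy

# If the container has just restarted, --previous shows the last terminated instance
kubectl logs myapp-5bc965f688-5rxwp -c cloudsql-proxy --previous
```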

Output of kubectl describe pod/myapp-5bc955f688-5rxwp -n default:

Name:               myapp-5bc955f688-5rxwp
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               gke-standard-cluster-1-default-pool-1ec52705-186n/10.164.0.4
Start Time:         Sat, 15 Dec 2018 21:46:03 +0100
Labels:             app=myapp
                    pod-template-hash=1675219244
Annotations:        kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container app; cpu request for container cloudsql-proxy
Status:             Running
IP:                 10.44.1.9
Controlled By:      ReplicaSet/myapp-5bc965f688
Containers:
  app:
    Container ID:   docker://d3ba7ff9c581534a4d55a5baef2d020413643e0c2361555eac6beba91b38b120
    Image:          gcr.io/myproject/firstapp:v2
    Image ID:       docker-pullable://gcr.io/myproject/firstapp@sha256:80168b43e3d0cce6d3beda6c3d1c679cdc42e88b0b918e225e7679252a59a73b
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sat, 15 Dec 2018 21:46:04 +0100
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  100m
    Environment:
      POSTGRES_DB_HOST:      127.0.0.1:5432
      POSTGRES_DB_USER:      <set to the key 'username' in secret 'mysecret'>  Optional: false
      POSTGRES_DB_PASSWORD:  <set to the key 'password' in secret 'mysecret'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tbgsv (ro)
  cloudsql-proxy:
    Container ID:  docker://96e2ed0de8fca21ecd51462993b7083bec2a31f6000bc2136c85842daf17435d
    Image:         gcr.io/cloudsql-docker/gce-proxy:1.11
    Image ID:      docker-pullable://gcr.io/cloudsql-docker/gce-proxy@sha256:5c690349ad8041e8b21eaa63cb078cf13188568e0bfac3b5a914da3483079e2b
    Port:          <none>
    Host Port:     <none>
    Command:
      /cloud_sql_proxy
      -instances=myproject:europe-west4:databasename=tcp:5432
      -credential_file=/secrets/cloudsql/mysecret.json
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sat, 15 Dec 2018 22:43:37 +0100
      Finished:     Sat, 15 Dec 2018 22:43:37 +0100
    Ready:          False
    Restart Count:  16
    Requests:
      cpu:        100m
    Environment:  <none>
    Mounts:
      /secrets/cloudsql from cloudsql-instance-credentials (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tbgsv (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  cloudsql-instance-credentials:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  mysecret
    Optional:    false
  default-token-tbgsv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-tbgsv
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                    From                                                        Message
  ----     ------     ----                   ----                                                        -------
  Normal   Scheduled  59m                    default-scheduler                                           Successfully assigned default/myapp-5bc955f688-5rxwp to gke-standard-cluster-1-default-pool-1ec52705-186n
  Normal   Pulled     59m                    kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n  Container image "gcr.io/myproject/firstapp:v2" already present on machine
  Normal   Created    59m                    kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n  Created container
  Normal   Started    59m                    kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n  Started container
  Normal   Started    59m (x4 over 59m)      kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n  Started container
  Normal   Pulled     58m (x5 over 59m)      kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n  Container image "gcr.io/cloudsql-docker/gce-proxy:1.11" already present on machine
  Normal   Created    58m (x5 over 59m)      kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n  Created container
  Warning  BackOff    4m46s (x252 over 59m)  kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n  Back-off restarting failed container

EDIT: Something seems to be wrong with my secret, because with kubectl logs 5bc955f688-5rxwp cloudsql-proxy I get:

2018/12/16 22:26:28 invalid json file "/secrets/cloudsql/mysecret.json": open /secrets/cloudsql/mysecret.json: no such file or directory

I created the secret with:

kubectl create -f ./kubernetes/mysecret.yaml

I assumed the secret would turn into JSON... and when I change mysecret.json to mysecret.yaml in app-deployment.yaml, I still get a similar error...
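Each key under data: in a Secret becomes a separate file when the Secret is mounted as a volume, so mounting mysecret at /secrets/cloudsql produces the files username and password, never a mysecret.json or mysecret.yaml. A sketch of how to check which file names a mount would expose:

```shell
# The keys under .data are exactly the file names a Secret volume projects;
# for mysecret this prints the "username" and "password" keys, which is why
# the proxy cannot find /secrets/cloudsql/mysecret.json
kubectl get secret mysecret -o jsonpath='{.data}'
```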


1 Answer


I was missing the correct key file (credentials.json). It has to be the key you generate from a service account; you then turn that into a secret. See also this question.
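A sketch of that fix, assuming a service account that already has the Cloud SQL Client role (the account and secret names here are placeholders):

```shell
# Generate a JSON key for the service account
gcloud iam service-accounts keys create credentials.json \
  --iam-account=proxy-user@myproject.iam.gserviceaccount.com

# Put the key file into its own Secret; the key name becomes the mounted file name
kubectl create secret generic cloudsql-instance-credentials \
  --from-file=credentials.json=credentials.json
```

The deployment's volume then references this secret (secretName: cloudsql-instance-credentials) and the proxy flag becomes -credential_file=/secrets/cloudsql/credentials.json, while mysecret stays dedicated to the database username/password environment variables.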

Answered on 2018-12-17T16:17:57.847