
I'm currently stuck connecting to a ClusterIP service in Kubernetes. The main goal is to connect one pod (a microservice) over gRPC with another pod (the client, a Node app). I'm using the service name to expose the products microservice and to connect to it, but when the client tries to call the microservice I get this error:

"Error: 14 UNAVAILABLE: failed to connect to all addresses",
            "    at Object.exports.createStatusError (/usr/src/app/node_modules/grpc/src/common.js:91:15)",
            "    at Object.onReceiveStatus (/usr/src/app/node_modules/grpc/src/client_interceptors.js:1209:28)",
            "    at InterceptingListener._callNext (/usr/src/app/node_modules/grpc/src/client_interceptors.js:568:42)",
            "    at InterceptingListener.onReceiveStatus (/usr/src/app/node_modules/grpc/src/client_interceptors.js:618:8)",
            "    at callback (/usr/src/app/node_modules/grpc/src/client_interceptors.js:847:24)"

I checked the Docker image I built: it points to the address url: '0.0.0.0:50051', but it is not working, as this article suggests: https://kubernetes.io/blog/2018/11/07/grpc-load-balancing-on-kubernetes-without-tears/. So far I only have one microservice, for products, which contains the product-management logic and is built with Node.js and gRPC (it runs perfectly locally). In k8s I named it xxx-microservice-products-deployment, and its definition looks like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pinebox-microservice-products-deployment
  labels:
    app: pinebox
    type: microservice
    domain: products
spec:
  template:
    metadata:
      name: pinebox-microservice-products-pod
      labels:
        app: pinebox
        type: microservice
        domain: products
    spec:
      containers:
        - name: pg-container
          image: postgres
          env:
            - name: POSTGRES_USER
              value: testuser
            - name: POSTGRES_PASSWORD
              value: testpass
            - name: POSTGRES_DB
              value: db_development
          ports:
            - containerPort: 5432
        - name: microservice-container
          image: registry.digitalocean.com/pinebox/pinebox-microservices-products:latest
      imagePullSecrets:
        - name: regcred
  replicas: 1
  selector:
    matchLabels:
      app: pinebox
      type: microservice
      domain: products

Then, to connect to it, we created a ClusterIP service that exposes 50051. Its definition in k8s looks like this:

kind: Service
apiVersion: v1
metadata:
  name: pinebox-products-microservice
spec:
  selector:
    app: pinebox
    type: microservice
    domain: products
  ports:
    - targetPort: 50051
      port: 50051

Now, we also created a client in Node that contains the API methods (GET, POST) which connect to the microservice behind the scenes. I named the client xxx-api-main-app-deployment, and its definition in k8s looks like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pinebox-api-main-app-deployment
  labels:
    app: pinebox
    type: api
    domain: main-app
    role: users-service
spec:
  template:
    metadata:
      name: pinebox-api-main-app-pod
      labels:
        app: pinebox
        type: api
        domain: main-app
        role: products-service
    spec:
      containers:
        - name: pinebox-api-main-app-container
          image: registry.digitalocean.com/pinebox/pinebox-main-app:latest
      imagePullSecrets:
        - name: regcred
  replicas: 1
  selector:
    matchLabels:
      app: pinebox
      type: api
      domain: main-app
      role: products-service

I also created a service to expose the API; its k8s definition looks like this:

kind: Service
apiVersion: v1
metadata:
  name: pinebox-api-main-app-service
spec:
  selector:
    app: pinebox
    type: api
    domain: main-app
    role: products-service
  type: NodePort
  ports:
    - name: name-of-the-port
      port: 3333
      targetPort: 3333
      nodePort: 30003

Up to this point, everything looks fine. So I tried to connect to the service, but I get this error:

"Error: 14 UNAVAILABLE: failed to connect to all addresses",
            "    at Object.exports.createStatusError (/usr/src/app/node_modules/grpc/src/common.js:91:15)",
            "    at Object.onReceiveStatus (/usr/src/app/node_modules/grpc/src/client_interceptors.js:1209:28)",
            "    at InterceptingListener._callNext (/usr/src/app/node_modules/grpc/src/client_interceptors.js:568:42)",
            "    at InterceptingListener.onReceiveStatus (/usr/src/app/node_modules/grpc/src/client_interceptors.js:618:8)",
            "    at callback (/usr/src/app/node_modules/grpc/src/client_interceptors.js:847:24)"

I haven't found anything useful to get it working. Does anyone have a clue?

So, after digging into this problem, I found that the Kubernetes team suggests using Linkerd to effectively turn the connection into HTTP, since plain k8s does not handle gRPC load balancing in this scenario. I followed this article https://kubernetes.io/blog/2018/11/07/grpc-load-balancing-on-kubernetes-without-tears/, went to the Linkerd guide, and followed the installation steps. I can now see the Linkerd dashboard, but the client still cannot communicate with the microservice. So I checked whether the port is exposed in the client pod, verifying it with the following commands:

$ kubectl exec -i -t pod/pinebox-api-main-app-deployment-5fb5d4bf9f-ttwn5 --container pinebox-api-main-app-container -- /bin/bash
$ printenv

This is the output:

PINEBOX_PRODUCTS_MICROSERVICE_PORT_50051_TCP_PORT=50051
KUBERNETES_SERVICE_PORT_HTTPS=443
PINEBOX_PRODUCTS_MICROSERVICE_SERVICE_PORT=50051
KUBERNETES_PORT_443_TCP_PORT=443
PINEBOX_API_MAIN_APP_SERVICE_SERVICE_PORT_NAME_OF_THE_PORT=3333
PORT=3000
NODE_VERSION=12.18.2
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
PINEBOX_API_MAIN_APP_SERVICE_PORT_3333_TCP_PORT=3333
PINEBOX_PRODUCTS_MICROSERVICE_SERVICE_HOST=10.105.230.111
TERM=xterm
PINEBOX_API_MAIN_APP_SERVICE_PORT=tcp://10.106.81.212:3333
SHLVL=1
PINEBOX_PRODUCTS_MICROSERVICE_PORT=tcp://10.105.230.111:50051
KUBERNETES_SERVICE_PORT=443
PINEBOX_PRODUCTS_MICROSERVICE_PORT_50051_TCP=tcp://10.105.230.111:50051
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PINEBOX_API_MAIN_APP_SERVICE_SERVICE_PORT=3333
KUBERNETES_SERVICE_HOST=10.96.0.1
_=/usr/bin/printenv
root@pinebox-api-main-app-deployment-5fb5d4bf9f-ttwn5:/usr/src/app# 

As you can see, the environment variables containing the service host and port are there, so the Service itself is working. I'm not using the IP directly because it would stop working as soon as I scale the deployment to more replicas.
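Just for illustration (this is an assumption, not what my client currently does), those injected variables could be used to build the gRPC URL instead of hard-coding an address:

// Illustration only: build the gRPC URL from the variables Kubernetes injects
// for the pinebox-products-microservice Service (see the printenv output above).
// Node 12 in the pod, so || is used instead of the newer ?? operator.
const host = process.env.PINEBOX_PRODUCTS_MICROSERVICE_SERVICE_HOST || 'localhost';
const port = process.env.PINEBOX_PRODUCTS_MICROSERVICE_SERVICE_PORT || '50051';
const grpcUrl = `${host}:${port}`; // e.g. "10.105.230.111:50051"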

Then I verified that the microservice is running, using:

kubectl logs pod/xxx-microservice-products-deployment-78df57c96d-tlvvj -c microservice-container

This is the output:

[Nest] 1   - 07/25/2020, 4:23:22 PM   [NestFactory] Starting Nest application...
[Nest] 1   - 07/25/2020, 4:23:22 PM   [InstanceLoader] PineboxMicroservicesProductsDataAccessModule dependencies initialized +12ms
[Nest] 1   - 07/25/2020, 4:23:22 PM   [InstanceLoader] PineboxMicroservicesProductsFeatureShellModule dependencies initialized +0ms
[Nest] 1   - 07/25/2020, 4:23:22 PM   [InstanceLoader] AppModule dependencies initialized +0ms       
[Nest] 1   - 07/25/2020, 4:23:22 PM   [NestMicroservice] Nest microservice successfully started +22ms
[Nest] 1   - 07/25/2020, 4:23:22 PM   Microservice Products is listening +15ms

Everything looks good. Then I double-checked the ports used in the code:

  • Microservice:
const microservicesOptions = {
  transport: Transport.GRPC,
  options: {
    url: '0.0.0.0:50051',
    credentials: ServerCredentials.createInsecure(),
    package: 'grpc.health.v1',
    protoPath: join(__dirname, 'assets/health.proto'),
  },
};
  • Client:
ClientsModule.register([
  {
    name: 'HERO_PACKAGE',
    transport: Transport.GRPC,
    options: {
      url: '0.0.0.0:50051',
      package: 'grpc.health.v1',
      protoPath: join(__dirname, 'assets/health.proto'),
      // credentials: credentials.createInsecure()
    },
  },
])
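For completeness, here is a sketch of what I understand the article to be suggesting for the client side; it assumes the ClusterIP service name (pinebox-products-microservice) resolves through cluster DNS from the client pod, and it is not what I have deployed right now:

ClientsModule.register([
  {
    name: 'HERO_PACKAGE',
    transport: Transport.GRPC,
    options: {
      // Sketch/assumption: target the Service by its DNS name instead of
      // 0.0.0.0, which only refers to the client pod itself. The fully
      // qualified form would be
      // 'pinebox-products-microservice.<namespace>.svc.cluster.local:50051'.
      url: 'pinebox-products-microservice:50051',
      package: 'grpc.health.v1',
      protoPath: join(__dirname, 'assets/health.proto'),
    },
  },
])

I include it only to make the intended wiring clear; the deployed client still uses url: '0.0.0.0:50051' as shown above.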

Then I decided to check the logs that Linkerd produces inside the client's pod:

kubectl logs pod/xxx-api-main-app-deployment-5fb5d4bf9f-ttwn5 -c linkerd-init

The output is this:

2020/07/25 16:37:50 Tracing this script execution as [1595695070]
2020/07/25 16:37:50 State of iptables rules before run:
2020/07/25 16:37:50 > iptables -t nat -vnL
2020/07/25 16:37:50 < Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
2020/07/25 16:37:50 > iptables -t nat -F PROXY_INIT_REDIRECT
2020/07/25 16:37:50 < iptables: No chain/target/match by that name.
2020/07/25 16:37:50 > iptables -t nat -X PROXY_INIT_REDIRECT
2020/07/25 16:37:50 < iptables: No chain/target/match by that name.
2020/07/25 16:37:50 Will ignore port(s) [4190 4191] on chain PROXY_INIT_REDIRECT
2020/07/25 16:37:50 Will redirect all INPUT ports to proxy
2020/07/25 16:37:50 > iptables -t nat -F PROXY_INIT_OUTPUT
2020/07/25 16:37:50 < iptables: No chain/target/match by that name.
2020/07/25 16:37:50 > iptables -t nat -X PROXY_INIT_OUTPUT
2020/07/25 16:37:50 < iptables: No chain/target/match by that name.
2020/07/25 16:37:50 Ignoring uid 2102
2020/07/25 16:37:50 Redirecting all OUTPUT to 4140
2020/07/25 16:37:50 Executing commands:
2020/07/25 16:37:50 > iptables -t nat -N PROXY_INIT_REDIRECT -m comment --comment proxy-init/redirect-common-chain/1595695070
2020/07/25 16:37:50 <
2020/07/25 16:37:50 > iptables -t nat -A PROXY_INIT_REDIRECT -p tcp --match multiport --dports 4190,4191 -j RETURN -m comment --comment proxy-init/ignore-port-4190,4191/1595695070
2020/07/25 16:37:50 <
2020/07/25 16:37:50 > iptables -t nat -A PROXY_INIT_REDIRECT -p tcp -j REDIRECT --to-port 4143 -m comment --comment proxy-init/redirect-all-incoming-to-proxy-port/1595695070
2020/07/25 16:37:50 <
2020/07/25 16:37:50 > iptables -t nat -A PREROUTING -j PROXY_INIT_REDIRECT -m comment --comment proxy-init/install-proxy-init-prerouting/1595695070
2020/07/25 16:37:50 <
2020/07/25 16:37:50 > iptables -t nat -N PROXY_INIT_OUTPUT -m comment --comment proxy-init/redirect-common-chain/1595695070
2020/07/25 16:37:50 <
2020/07/25 16:37:50 > iptables -t nat -A PROXY_INIT_OUTPUT -m owner --uid-owner 2102 -o lo ! -d 127.0.0.1/32 -j PROXY_INIT_REDIRECT -m comment --comment proxy-init/redirect-non-loopback-local-traffic/1595695070
2020/07/25 16:37:51 <
2020/07/25 16:37:51 > iptables -t nat -A PROXY_INIT_OUTPUT -m owner --uid-owner 2102 -j RETURN -m comment --comment proxy-init/ignore-proxy-user-id/1595695070
2020/07/25 16:37:51 <
2020/07/25 16:37:51 > iptables -t nat -A PROXY_INIT_OUTPUT -o lo -j RETURN -m comment --comment proxy-init/ignore-loopback/1595695070
2020/07/25 16:37:51 <
2020/07/25 16:37:51 > iptables -t nat -A PROXY_INIT_OUTPUT -p tcp -j REDIRECT --to-port 4140 -m comment --comment proxy-init/redirect-all-outgoing-to-proxy-port/1595695070
2020/07/25 16:37:51 <
2020/07/25 16:37:51 > iptables -t nat -A OUTPUT -j PROXY_INIT_OUTPUT -m comment --comment proxy-init/install-proxy-init-output/1595695070
2020/07/25 16:37:51 <
2020/07/25 16:37:51 > iptables -t nat -vnL
2020/07/25 16:37:51 < Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 PROXY_INIT_REDIRECT  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* proxy-init/install-proxy-init-prerouting/1595695070 */
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 PROXY_INIT_OUTPUT  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* proxy-init/install-proxy-init-output/1595695070 */
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
Chain PROXY_INIT_OUTPUT (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 PROXY_INIT_REDIRECT  all  --  *      lo      0.0.0.0/0           !127.0.0.1            owner UID match 2102 /* proxy-init/redirect-non-loopback-local-traffic/1595695070 */
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            owner UID match 2102 /* proxy-init/ignore-proxy-user-id/1595695070 */
    0     0 RETURN     all  --  *      lo      0.0.0.0/0            0.0.0.0/0            /* proxy-init/ignore-loopback/1595695070 */
    0     0 REDIRECT   tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* proxy-init/redirect-all-outgoing-to-proxy-port/1595695070 */ redir ports 4140
Chain PROXY_INIT_REDIRECT (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            multiport dports 4190,4191 /* proxy-init/ignore-port-4190,4191/1595695070 */
    0     0 REDIRECT   tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* proxy-init/redirect-all-incoming-to-proxy-port/1595695070 */ redir ports 4143
I'm not sure where the problem is, and thanks in advance for your help.
Hopefully this gives you more context so you can point me in the right direction.

1 Answer


The iptables output from Linkerd's proxy-init looks fine.

Have you checked the logs inside the linkerd-proxy container? That could help figure out what is going on.

The port-forward test recommended by @KoopaKiller is also worth trying.
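For example, assuming you forward the service locally with kubectl port-forward svc/pinebox-products-microservice 50051:50051 and have the same health.proto from the question available, a minimal standalone client sketch (an illustration, not tested against your cluster) could exercise the Check RPC directly:

import { join } from 'path';
import { ClientGrpc, ClientProxyFactory, Transport } from '@nestjs/microservices';
import { Observable } from 'rxjs';

// Shape of the standard grpc.health.v1 Health service; this assumes the
// health.proto in the question follows the upstream definition.
interface HealthService {
  check(req: { service: string }): Observable<{ status: number }>;
}

async function main() {
  // Reaches the microservice through the forwarded local port.
  const client = ClientProxyFactory.create({
    transport: Transport.GRPC,
    options: {
      url: 'localhost:50051',
      package: 'grpc.health.v1',
      protoPath: join(__dirname, 'assets/health.proto'),
    },
  }) as unknown as ClientGrpc;

  const health = client.getService<HealthService>('Health');
  const res = await health.check({ service: '' }).toPromise();
  console.log('health status:', res);
}

main().catch(console.error);

If the call succeeds through the forwarded port but not from inside the client pod, that narrows the problem down to how the client addresses the service within the cluster rather than the microservice itself.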

answered 2020-09-02T04:19:12.070