
I'm setting up a local Kubernetes cluster. For testing I use a single-node cluster on a VM, set up with kubeadm. My requirements include running an MQTT cluster (VerneMQ) in k8s with external access via Ingress (Istio).

Without deploying an ingress, I can connect (mosquitto_sub) through a NodePort or LoadBalancer service.

Istio was installed with istioctl install --set profile=demo

The problem

I'm trying to access the VerneMQ broker from outside the cluster. Ingress (Istio Gateway) seemed like the perfect solution here, but I cannot establish a TCP connection to the broker (neither via the ingress IP, nor directly via the svc/vernemq IP).

So, how do I establish this TCP connection from an external client through Istio ingress?

What I tried

I created two namespaces:

  • exposed-with-istio – with istio-proxy injection
  • exposed-with-loadbalancer – without istio-proxy

In the exposed-with-loadbalancer namespace I deployed VerneMQ with a LoadBalancer service. It works, which is how I know VerneMQ is reachable (using mosquitto_sub -h <host> -p 1883 -t hello, where host is the ClusterIP or ExternalIP of svc/vernemq). The dashboard is reachable at host:8888/status, and "Clients online" increments on the dashboard.
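
For reference, those checks look roughly like this (<host> is the ClusterIP or ExternalIP of svc/vernemq; hitting the dashboard with curl is just one way to check it):

mosquitto_sub -h <host> -p 1883 -t hello
curl http://<host>:8888/status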

In exposed-with-istio I deployed VerneMQ with a ClusterIP service, an Istio Gateway and a VirtualService. With istio-proxy injected, mosquitto_sub cannot subscribe, neither through the svc/vernemq IP nor through the Istio ingress (gateway) IP. The command just hangs forever, constantly retrying. Meanwhile, the VerneMQ dashboard endpoint is reachable both through the service IP and through the Istio gateway.

I guess the istio-proxy has to be configured somehow for MQTT to work.

Here is the istio-ingressgateway service:

kubectl describe svc/istio-ingressgateway -n istio-system

Name:                     istio-ingressgateway
Namespace:                istio-system
Labels:                   app=istio-ingressgateway
                          install.operator.istio.io/owning-resource=installed-state
                          install.operator.istio.io/owning-resource-namespace=istio-system
                          istio=ingressgateway
                          istio.io/rev=default
                          operator.istio.io/component=IngressGateways
                          operator.istio.io/managed=Reconcile
                          operator.istio.io/version=1.7.0
                          release=istio
Annotations:              Selector:  app=istio-ingressgateway,istio=ingressgateway
Type:                     LoadBalancer
IP:                       10.100.213.45
LoadBalancer Ingress:     192.168.100.240
Port:                     status-port  15021/TCP
TargetPort:               15021/TCP
Port:                     http2  80/TCP
TargetPort:               8080/TCP
Port:                     https  443/TCP
TargetPort:               8443/TCP
Port:                     tcp  31400/TCP
TargetPort:               31400/TCP
Port:                     tls  15443/TCP
TargetPort:               15443/TCP
Session Affinity:         None
External Traffic Policy:  Cluster
...

Here are the debug logs from istio-proxy (kubectl logs svc/vernemq -n test istio-proxy):

2020-08-24T07:57:52.294477Z debug   envoy filter    original_dst: New connection accepted
2020-08-24T07:57:52.294516Z debug   envoy filter    tls inspector: new connection accepted
2020-08-24T07:57:52.294532Z debug   envoy filter    http inspector: new connection accepted
2020-08-24T07:57:52.294580Z debug   envoy filter    [C5645] new tcp proxy session
2020-08-24T07:57:52.294614Z debug   envoy filter    [C5645] Creating connection to cluster inbound|1883|mqtt|vernemq.test.svc.cluster.local
2020-08-24T07:57:52.294638Z debug   envoy pool  creating a new connection
2020-08-24T07:57:52.294671Z debug   envoy pool  [C5646] connecting
2020-08-24T07:57:52.294684Z debug   envoy connection    [C5646] connecting to 127.0.0.1:1883
2020-08-24T07:57:52.294725Z debug   envoy connection    [C5646] connection in progress
2020-08-24T07:57:52.294746Z debug   envoy pool  queueing request due to no available connections
2020-08-24T07:57:52.294750Z debug   envoy conn_handler  [C5645] new connection
2020-08-24T07:57:52.294768Z debug   envoy connection    [C5646] delayed connection error: 111
2020-08-24T07:57:52.294772Z debug   envoy connection    [C5646] closing socket: 0
2020-08-24T07:57:52.294783Z debug   envoy pool  [C5646] client disconnected
2020-08-24T07:57:52.294790Z debug   envoy filter    [C5645] Creating connection to cluster inbound|1883|mqtt|vernemq.test.svc.cluster.local
2020-08-24T07:57:52.294794Z debug   envoy connection    [C5645] closing data_to_write=0 type=1
2020-08-24T07:57:52.294796Z debug   envoy connection    [C5645] closing socket: 1
2020-08-24T07:57:52.294864Z debug   envoy wasm  wasm log: [extensions/stats/plugin.cc:609]::report() metricKey cache hit , stat=12
2020-08-24T07:57:52.294882Z debug   envoy wasm  wasm log: [extensions/stats/plugin.cc:609]::report() metricKey cache hit , stat=16
2020-08-24T07:57:52.294885Z debug   envoy wasm  wasm log: [extensions/stats/plugin.cc:609]::report() metricKey cache hit , stat=20
2020-08-24T07:57:52.294887Z debug   envoy wasm  wasm log: [extensions/stats/plugin.cc:609]::report() metricKey cache hit , stat=24
2020-08-24T07:57:52.294891Z debug   envoy conn_handler  [C5645] adding to cleanup list
2020-08-24T07:57:52.294949Z debug   envoy pool  [C5646] connection destroyed

Here are the logs from istio-ingressgateway. The IP 10.244.243.205 belongs to the VerneMQ pod, not to the service (probably as intended).

2020-08-24T08:48:31.536593Z debug   envoy filter    [C13236] new tcp proxy session
2020-08-24T08:48:31.536702Z debug   envoy filter    [C13236] Creating connection to cluster outbound|1883||vernemq.test.svc.cluster.local
2020-08-24T08:48:31.536728Z debug   envoy pool  creating a new connection
2020-08-24T08:48:31.536778Z debug   envoy pool  [C13237] connecting
2020-08-24T08:48:31.536784Z debug   envoy connection    [C13237] connecting to 10.244.243.205:1883
2020-08-24T08:48:31.537074Z debug   envoy connection    [C13237] connection in progress
2020-08-24T08:48:31.537116Z debug   envoy pool  queueing request due to no available connections
2020-08-24T08:48:31.537138Z debug   envoy conn_handler  [C13236] new connection
2020-08-24T08:48:31.537181Z debug   envoy connection    [C13237] connected
2020-08-24T08:48:31.537204Z debug   envoy pool  [C13237] assigning connection
2020-08-24T08:48:31.537221Z debug   envoy filter    TCP:onUpstreamEvent(), requestedServerName: 
2020-08-24T08:48:31.537880Z debug   envoy misc  Unknown error code 104 details Connection reset by peer
2020-08-24T08:48:31.537907Z debug   envoy connection    [C13237] remote close
2020-08-24T08:48:31.537913Z debug   envoy connection    [C13237] closing socket: 0
2020-08-24T08:48:31.537938Z debug   envoy pool  [C13237] client disconnected
2020-08-24T08:48:31.537953Z debug   envoy connection    [C13236] closing data_to_write=0 type=0
2020-08-24T08:48:31.537958Z debug   envoy connection    [C13236] closing socket: 1
2020-08-24T08:48:31.538156Z debug   envoy conn_handler  [C13236] adding to cleanup list
2020-08-24T08:48:31.538191Z debug   envoy pool  [C13237] connection destroyed

My configuration

vernemq-istio-ingress.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: exposed-with-istio
  labels:
    istio-injection: enabled
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vernemq
  namespace: exposed-with-istio
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: endpoint-reader
  namespace: exposed-with-istio
rules:
  - apiGroups: ["", "extensions", "apps"]
    resources: ["endpoints", "deployments", "replicasets", "pods"]
    verbs: ["get", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: endpoint-reader
  namespace: exposed-with-istio
subjects:
  - kind: ServiceAccount
    name: vernemq
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: endpoint-reader
---
apiVersion: v1
kind: Service
metadata:
  name: vernemq
  namespace: exposed-with-istio
  labels:
    app: vernemq
spec:
  selector:
    app: vernemq
  type: ClusterIP
  ports:
    - port: 4369
      name: empd
    - port: 44053
      name: vmq
    - port: 8888
      name: http-dashboard
    - port: 1883
      name: tcp-mqtt
      targetPort: 1883
    - port: 9001
      name: tcp-mqtt-ws
      targetPort: 9001
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vernemq
  namespace: exposed-with-istio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vernemq
  template:
    metadata:
      labels:
        app: vernemq
    spec:
      serviceAccountName: vernemq
      containers:
        - name: vernemq
          image: vernemq/vernemq
          ports:
            - containerPort: 1883
              name: tcp-mqtt
              protocol: TCP
            - containerPort: 8080
              name: tcp-mqtt-ws
            - containerPort: 8888
              name: http-dashboard
            - containerPort: 4369
              name: epmd
            - containerPort: 44053
              name: vmq
            - containerPort: 9100-9109 # shortened
          env:
            - name: DOCKER_VERNEMQ_ACCEPT_EULA
              value: "yes"
            - name: DOCKER_VERNEMQ_ALLOW_ANONYMOUS
              value: "on"
            - name: DOCKER_VERNEMQ_listener__tcp__allowed_protocol_versions
              value: "3,4,5"
            - name: DOCKER_VERNEMQ_allow_register_during_netsplit
              value: "on"
            - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
              value: "1"
            - name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
              value: "vernemq"
            - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MINIMUM
              value: "9100"
            - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MAXIMUM
              value: "9109"
            - name: DOCKER_VERNEMQ_KUBERNETES_INSECURE
              value: "1"
vernemq-loadbalancer-service.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: exposed-with-loadbalancer
---
... the rest is the same except for the namespace and the service type ...
istio.yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: vernemq-destination
  namespace: exposed-with-istio
spec:
  host: vernemq.exposed-with-istio.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: vernemq-gateway
  namespace: exposed-with-istio
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: TCP
    hosts:
    - "*"
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: vernemq-virtualservice
  namespace: exposed-with-istio
spec:
  hosts:
  - "*"
  gateways:
  - vernemq-gateway
  http:
  - match:
    - uri:
        prefix: /status
    route:
    - destination:
        host: vernemq.exposed-with-istio.svc.cluster.local
        port:
          number: 8888
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: vernemq.exposed-with-istio.svc.cluster.local
        port:
          number: 1883

Does the Kiali screenshot suggest that the ingressgateway only forwards HTTP traffic to the service and eats all the TCP? (Kiali graph tab screenshot)

UPD

Following the suggestions, here is the output:

** But your envoy logs reveal a problem: envoy misc Unknown error code 104 details Connection reset by peer and envoy pool [C5648] client disconnected.

istioctl proxy-config listeners vernemq-c945876f-tvvz7.exposed-with-istio

First piped through | grep 8888 and | grep 1883:

0.0.0.0        8888  App: HTTP                                               Route: 8888
0.0.0.0        8888  ALL                                                     PassthroughCluster
10.107.205.214 1883  ALL                                                     Cluster: outbound|1883||vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 ALL                                                     Cluster: inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
...                                            Cluster: outbound|853||istiod.istio-system.svc.cluster.local
10.107.205.214 1883  ALL                                                     Cluster: outbound|1883||vernemq.exposed-with-istio.svc.cluster.local
10.108.218.134 3000  App: HTTP                                               Route: grafana.istio-system.svc.cluster.local:3000
10.108.218.134 3000  ALL                                                     Cluster: outbound|3000||grafana.istio-system.svc.cluster.local
10.107.205.214 4369  App: HTTP                                               Route: vernemq.exposed-with-istio.svc.cluster.local:4369
10.107.205.214 4369  ALL                                                     Cluster: outbound|4369||vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        8888  App: HTTP                                               Route: 8888
0.0.0.0        8888  ALL                                                     PassthroughCluster
10.107.205.214 9001  ALL                                                     Cluster: outbound|9001||vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        9090  App: HTTP                                               Route: 9090
0.0.0.0        9090  ALL                                                     PassthroughCluster
10.96.0.10     9153  App: HTTP                                               Route: kube-dns.kube-system.svc.cluster.local:9153
10.96.0.10     9153  ALL                                                     Cluster: outbound|9153||kube-dns.kube-system.svc.cluster.local
0.0.0.0        9411  App: HTTP                                               ...
0.0.0.0        15006 Trans: tls; App: TCP TLS; Addr: 0.0.0.0/0               InboundPassthroughClusterIpv4
0.0.0.0        15006 Addr: 0.0.0.0/0                                         InboundPassthroughClusterIpv4
0.0.0.0        15006 App: TCP TLS                                            Cluster: inbound|9001|tcp-mqtt-ws|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 ALL                                                     Cluster: inbound|9001|tcp-mqtt-ws|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 Trans: tls; App: TCP TLS                                Cluster: inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 ALL                                                     Cluster: inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 Trans: tls                                              Cluster: inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 ALL                                                     Cluster: inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 Trans: tls; App: TCP TLS                                Cluster: inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 Trans: tls                                              Cluster: inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 ALL                                                     Cluster: inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 App: TCP TLS                                            Cluster: inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15010 App: HTTP                                               Route: 15010
0.0.0.0        15010 ALL                                                     PassthroughCluster
10.106.166.154 15012 ALL                                                     Cluster: outbound|15012||istiod.istio-system.svc.cluster.local
0.0.0.0        15014 App: HTTP                                               Route: 15014
0.0.0.0        15014 ALL                                                     PassthroughCluster
0.0.0.0        15021 ALL                                                     Inline Route: /healthz/ready*
10.100.213.45  15021 App: HTTP                                               Route: istio-ingressgateway.istio-system.svc.cluster.local:15021
10.100.213.45  15021 ALL                                                     Cluster: outbound|15021||istio-ingressgateway.istio-system.svc.cluster.local
0.0.0.0        15090 ALL                                                     Inline Route: /stats/prometheus*
10.100.213.45  15443 ALL                                                     Cluster: outbound|15443||istio-ingressgateway.istio-system.svc.cluster.local
10.105.193.108 15443 ALL                                                     Cluster: outbound|15443||istio-egressgateway.istio-system.svc.cluster.local
0.0.0.0        20001 App: HTTP                                               Route: 20001
0.0.0.0        20001 ALL                                                     PassthroughCluster
10.100.213.45  31400 ALL                                                     Cluster: outbound|31400||istio-ingressgateway.istio-system.svc.cluster.local
10.107.205.214 44053 App: HTTP                                               Route: vernemq.exposed-with-istio.svc.cluster.local:44053
10.107.205.214 44053 ALL                                                     Cluster: outbound|44053||vernemq.exposed-with-istio.svc.cluster.local

** Additionally, please run: istioctl proxy-config endpoints and istioctl proxy-config routes.

istioctl proxy-config endpoints vernemq-c945876f-tvvz7.exposed-with-istio | grep 1883

10.244.243.206:1883              HEALTHY     OK                outbound|1883||vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:1883                   HEALTHY     OK                inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
ENDPOINT                         STATUS      OUTLIER CHECK     CLUSTER
10.101.200.113:9411              HEALTHY     OK                zipkin
10.106.166.154:15012             HEALTHY     OK                xds-grpc
10.211.55.14:6443                HEALTHY     OK                outbound|443||kubernetes.default.svc.cluster.local
10.244.243.193:53                HEALTHY     OK                outbound|53||kube-dns.kube-system.svc.cluster.local
10.244.243.193:9153              HEALTHY     OK                outbound|9153||kube-dns.kube-system.svc.cluster.local
10.244.243.195:53                HEALTHY     OK                outbound|53||kube-dns.kube-system.svc.cluster.local
10.244.243.195:9153              HEALTHY     OK                outbound|9153||kube-dns.kube-system.svc.cluster.local
10.244.243.197:15010             HEALTHY     OK                outbound|15010||istiod.istio-system.svc.cluster.local
10.244.243.197:15012             HEALTHY     OK                outbound|15012||istiod.istio-system.svc.cluster.local
10.244.243.197:15014             HEALTHY     OK                outbound|15014||istiod.istio-system.svc.cluster.local
10.244.243.197:15017             HEALTHY     OK                outbound|443||istiod.istio-system.svc.cluster.local
10.244.243.197:15053             HEALTHY     OK                outbound|853||istiod.istio-system.svc.cluster.local
10.244.243.198:8080              HEALTHY     OK                outbound|80||istio-egressgateway.istio-system.svc.cluster.local
10.244.243.198:8443              HEALTHY     OK                outbound|443||istio-egressgateway.istio-system.svc.cluster.local
10.244.243.198:15443             HEALTHY     OK                outbound|15443||istio-egressgateway.istio-system.svc.cluster.local
10.244.243.199:8080              HEALTHY     OK                outbound|80||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.199:8443              HEALTHY     OK                outbound|443||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.199:15021             HEALTHY     OK                outbound|15021||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.199:15443             HEALTHY     OK                outbound|15443||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.199:31400             HEALTHY     OK                outbound|31400||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.201:3000              HEALTHY     OK                outbound|3000||grafana.istio-system.svc.cluster.local
10.244.243.202:9411              HEALTHY     OK                outbound|9411||zipkin.istio-system.svc.cluster.local
10.244.243.202:16686             HEALTHY     OK                outbound|80||tracing.istio-system.svc.cluster.local
10.244.243.203:9090              HEALTHY     OK                outbound|9090||kiali.istio-system.svc.cluster.local
10.244.243.203:20001             HEALTHY     OK                outbound|20001||kiali.istio-system.svc.cluster.local
10.244.243.204:9090              HEALTHY     OK                outbound|9090||prometheus.istio-system.svc.cluster.local
10.244.243.206:1883              HEALTHY     OK                outbound|1883||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:4369              HEALTHY     OK                outbound|4369||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:8888              HEALTHY     OK                outbound|8888||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:9001              HEALTHY     OK                outbound|9001||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:44053             HEALTHY     OK                outbound|44053||vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:1883                   HEALTHY     OK                inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:4369                   HEALTHY     OK                inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:8888                   HEALTHY     OK                inbound|8888|http-dashboard|vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:9001                   HEALTHY     OK                inbound|9001|tcp-mqtt-ws|vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:15000                  HEALTHY     OK                prometheus_stats
127.0.0.1:15020                  HEALTHY     OK                agent
127.0.0.1:44053                  HEALTHY     OK                inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local
unix://./etc/istio/proxy/SDS     HEALTHY     OK                sds-grpc

istioctl proxy-config routes vernemq-c945876f-tvvz7.exposed-with-istio

NOTE: This output only contains routes loaded via RDS.
NAME                                                                         DOMAINS                               MATCH                  VIRTUAL SERVICE
istio-ingressgateway.istio-system.svc.cluster.local:15021                    istio-ingressgateway.istio-system     /*                     
istiod.istio-system.svc.cluster.local:853                                    istiod.istio-system                   /*                     
20001                                                                        kiali.istio-system                    /*                     
15010                                                                        istiod.istio-system                   /*                     
15014                                                                        istiod.istio-system                   /*                     
vernemq.exposed-with-istio.svc.cluster.local:4369                            vernemq                               /*                     
vernemq.exposed-with-istio.svc.cluster.local:44053                           vernemq                               /*                     
kube-dns.kube-system.svc.cluster.local:9153                                  kube-dns.kube-system                  /*                     
8888                                                                         vernemq                               /*                     
80                                                                           istio-egressgateway.istio-system      /*                     
80                                                                           istio-ingressgateway.istio-system     /*                     
80                                                                           tracing.istio-system                  /*                     
grafana.istio-system.svc.cluster.local:3000                                  grafana.istio-system                  /*                     
9411                                                                         zipkin.istio-system                   /*                     
9090                                                                         kiali.istio-system                    /*                     
9090                                                                         prometheus.istio-system               /*                     
inbound|8888|http-dashboard|vernemq.exposed-with-istio.svc.cluster.local     *                                     /*                     
inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local               *                                     /*                     
inbound|8888|http-dashboard|vernemq.exposed-with-istio.svc.cluster.local     *                                     /*                     
inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local               *                                     /*                     
inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local               *                                     /*                     
                                                                             *                                     /stats/prometheus*     
InboundPassthroughClusterIpv4                                                *                                     /*                     
inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local               *                                     /*                     
InboundPassthroughClusterIpv4                                                *                                     /*                     
                                                                             *                                     /healthz/ready*    

4 Answers


First of all, I would recommend enabling envoy trace logging for the pod:

kubectl exec -it <pod-name> -c istio-proxy -- curl -X POST http://localhost:15000/logging?level=trace

Now follow the Istio sidecar logs:

kubectl logs <pod-name> -c istio-proxy -f

Update

Since your envoy proxies log the problem on both sides, the connection reaches the proxies but cannot be established to the broker.

Regarding port 15006: in Istio, all traffic is routed through the envoy proxy (the istio-sidecar). To do that, Istio maps every port to 15006 for inbound (meaning all traffic coming from somewhere into the sidecar) and to 15001 for outbound (meaning from the sidecar to somewhere). More on that: https://istio.io/latest/docs/ops/diagnostic-tools/proxy-cmd/#deep-dive-into-envoy-configuration
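
For example, to look only at that inbound capture listener you can filter the listener dump by port (a sketch, using your pod name):

istioctl proxy-config listeners vernemq-c945876f-tvvz7.exposed-with-istio --port 15006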

The istioctl proxy-config listeners <pod-name> output looks fine so far. Let's try to track down the error.

Istio can be very strict about its configuration. To rule that out, could you first adjust your service of type: ClusterIP and add a targetPort for the mqtt port:

- port: 1883
  name: tcp-mqtt
  targetPort: 1883

Additionally, please run istioctl proxy-config endpoints <pod-name> and istioctl proxy-config routes <pod-name>.

Answered 2020-08-21T20:44:12.177

I ran into the same problem when using VerneMQ behind an Istio gateway. The issue is that the VerneMQ process resets the TCP connection if listener.tcp.default is left at its default value of 127.0.0.1:1883. I fixed it by setting DOCKER_VERNEMQ_LISTENER__TCP__DEFAULT to "0.0.0.0:1883".
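
In terms of the Deployment from the question, that boils down to one extra entry in the env list of the vernemq container (a sketch; only the value matters):

            # make VerneMQ listen on all interfaces instead of 127.0.0.1
            - name: DOCKER_VERNEMQ_LISTENER__TCP__DEFAULT
              value: "0.0.0.0:1883"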

Answered 2020-12-27T12:46:05.813

Based on your configuration, I would say that you have to use a ServiceEntry to enable communication between pods inside the mesh and pods outside the mesh.

ServiceEntry enables adding additional entries into Istio's internal service registry, so that auto-discovered services in the mesh can access/route to these manually specified services. A service entry describes the properties of a service (DNS name, VIPs, ports, protocols, endpoints). These services could be external to the mesh or mesh-internal services that are not part of the platform's service registry. In addition, the endpoints of a service entry can also be dynamically selected by using the workloadSelector field. These endpoints can be VM workloads declared using the WorkloadEntry object or Kubernetes pods. The ability to select both pods and VMs under a single service allows for migration of services from VMs to Kubernetes without having to change the existing DNS names associated with the services.

You use a service entry to add an entry to the service registry that Istio maintains internally. After you add the service entry, the Envoy proxies can send traffic to the service as if it were a service in your mesh. Configuring service entries allows you to manage traffic for services running outside of the mesh.

For more information and more examples, see the Istio documentation on ServiceEntry.
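
For illustration only, a minimal ServiceEntry for an MQTT endpoint outside the mesh could look like the sketch below (the resource name and the external host are placeholders, not taken from your setup):

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-mqtt            # placeholder name
  namespace: exposed-with-istio
spec:
  hosts:
  - mqtt.external.example.com    # placeholder DNS name of the external broker
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
  - number: 1883
    name: tcp-mqtt
    protocol: TCP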


Let me know if you have any more questions.

Answered 2020-08-24T09:32:51.277

Here are my Gateway, VirtualService and Service configurations, which work for me:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mqtt-domain-tld-gw
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: TCP
    hosts:
    - mqtt.domain.tld
  - port:
      number: 15443
      name: tls
      protocol: TLS
    hosts:
    - mqtt.domain.tld
    tls:
      mode: PASSTHROUGH
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: mqtt-domain-tld-vs
spec:
  hosts:
  - mqtt.domain.tld
  gateways:
  - mqtt-domain-tld-gw
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: mqtt
        port:
          number: 1883
  tls:
  - match:
    - port: 15443
      sniHosts:
      - mqtt.domain.tld
    route:
    - destination:
        host: mqtt
        port:
          number: 8883
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mqtt
  name: mqtt
spec:
  ports:
  - name: tcp-mqtt
    port: 1883
    protocol: TCP
    targetPort: 1883
    appProtocol: tcp
  - name: tls-mqtt
    port: 8883
    protocol: TCP
    targetPort: 8883
    appProtocol: tls
  selector:
    app: mqtt
  type: LoadBalancer
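
With this in place, an external client connects to the ingress gateway host on the plain TCP port, roughly like this (reusing the mosquitto_sub test from the question; mqtt.domain.tld stands for my gateway's address):

mosquitto_sub -h mqtt.domain.tld -p 31400 -t hello
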
Answered 2021-06-06T21:53:21.823