I have an AWS EKS cluster in a VPC with CIDR 172.20.0.0/16, and Istio 1.0.2
installed with helm:
helm upgrade -i istio install/kubernetes/helm/istio \
--namespace istio-system \
--set tracing.enabled=true \
--set grafana.enabled=true \
--set telemetry-gateway.grafanaEnabled=true \
--set telemetry-gateway.prometheusEnabled=true \
--set global.proxy.includeIPRanges="172.20.0.0/16" \
--set servicegraph.enabled=true \
--set galley.enabled=false
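One sanity check that may help (a sketch, assuming the Istio 1.0 chart writes the default outbound ranges into the istio-sidecar-injector ConfigMap, which istioctl kube-inject reads its template from) is to confirm the CIDR actually made it into the injection template:

# Look for the include ranges in the injector template; the default for the
# traffic.sidecar.istio.io/includeOutboundIPRanges annotation should show 172.20.0.0/16.
kubectl -n istio-system get configmap istio-sidecar-injector -o yaml \
  | grep -n -e includeOutboundIPRanges -e 172.20.0.0/16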
Then I deploy some pods for testing:
apiVersion: v1
kind: Service
metadata:
  name: service-one
  labels:
    app: service-one
spec:
  ports:
  - port: 80
    targetPort: 8080
    name: http
  selector:
    app: service-one
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: service-one
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: service-one
    spec:
      containers:
      - name: app
        image: gcr.io/google_containers/echoserver:1.4
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: service-two
  labels:
    app: service-two
spec:
  ports:
  - port: 80
    targetPort: 8080
    name: http-status
  selector:
    app: service-two
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: service-two
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: service-two
    spec:
      containers:
      - name: app
        image: gcr.io/google_containers/echoserver:1.4
        ports:
        - containerPort: 8080
And deploy it:
kubectl apply -f <(istioctl kube-inject -f app.yaml)
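It may also be worth checking what the injected istio-init container was actually told to intercept, since its arguments are what program the pod's iptables redirect rules (a sketch; <service-one-pod> is a placeholder for the real pod name):

# Print the init container arguments of the injected pod; I would expect to see
# "-i" "172.20.0.0/16" among them (istio-init's -i flag carries the include IP ranges).
kubectl get pod <service-one-pod> -o jsonpath='{.spec.initContainers[*].args}'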
Then, from inside the service-one pod, I make a request to service-two, and there are no logs about the outgoing request in service-one's istio-proxy container. But if I reconfigure Istio without setting global.proxy.includeIPRanges, it works as expected (I do need that setting, though, to allow several external connections). How can I debug what is going on?
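For reference, a sketch of the kind of inspection I can run (the pod name is a placeholder, the admin port 15000 is the Istio 1.0 default, and this assumes curl is available in the proxy image):

# Follow the sidecar logs of the calling pod.
kubectl logs -f <service-one-pod> -c istio-proxy

# In another terminal, switch Envoy to debug logging via its admin API,
# then repeat the request to service-two from the app container.
kubectl exec <service-one-pod> -c istio-proxy -- curl -s -X POST 'localhost:15000/logging?level=debug'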