Upgrading our Azure AKS Kubernetes environment to Kubernetes 1.19.3 forced me to also upgrade my Nginx helm.sh/chart to nginx-ingress-0.7.1. As a result, I had to change my API version definitions to networking.k8s.io/v1, because my DevOps pipeline was failing otherwise (the deprecation warnings for the old API were treated as errors). However, my problem now is that my session-affinity annotation is ignored and no session cookie is set in the response.
I have desperately tried renaming things and following various unrelated blog posts to somehow work around this, without success.
Any help would be greatly appreciated.
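For context, the API change that triggered all of this is just the Ingress apiVersion; the old group is still served by 1.19, but its deprecation warnings are what my pipeline turns into errors:

# before (deprecated, still served in 1.19)
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
# after (what the pipeline now expects)
apiVersion: networking.k8s.io/v1
kind: Ingress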
My current nginx yaml (I have removed the status/managedFields sections for better readability):
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx-ingress-infra-nginx-ingress
  namespace: ingress-infra
  labels:
    app.kubernetes.io/instance: nginx-ingress-infra
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nginx-ingress-infra-nginx-ingress
    helm.sh/chart: nginx-ingress-0.7.1
  annotations:
    deployment.kubernetes.io/revision: '1'
    meta.helm.sh/release-name: nginx-ingress-infra
    meta.helm.sh/release-namespace: ingress-infra
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-ingress-infra-nginx-ingress
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-ingress-infra-nginx-ingress
      annotations:
        prometheus.io/port: '9113'
        prometheus.io/scrape: 'true'
    spec:
      containers:
        - name: nginx-ingress-infra-nginx-ingress
          image: 'nginx/nginx-ingress:1.9.1'
          args:
            - '-nginx-plus=false'
            - '-nginx-reload-timeout=0'
            - '-enable-app-protect=false'
            - >-
              -nginx-configmaps=$(POD_NAMESPACE)/nginx-ingress-infra-nginx-ingress
            - >-
              -default-server-tls-secret=$(POD_NAMESPACE)/nginx-ingress-infra-nginx-ingress-default-server-secret
            - '-ingress-class=infra'
            - '-health-status=false'
            - '-health-status-uri=/nginx-health'
            - '-nginx-debug=false'
            - '-v=1'
            - '-nginx-status=true'
            - '-nginx-status-port=8080'
            - '-nginx-status-allow-cidrs=127.0.0.1'
            - '-report-ingress-status'
            - '-external-service=nginx-ingress-infra-nginx-ingress'
            - '-enable-leader-election=true'
            - >-
              -leader-election-lock-name=nginx-ingress-infra-nginx-ingress-leader-election
            - '-enable-prometheus-metrics=true'
            - '-prometheus-metrics-listen-port=9113'
            - '-enable-custom-resources=true'
            - '-enable-tls-passthrough=false'
            - '-enable-snippets=false'
            - '-ready-status=true'
            - '-ready-status-port=8081'
            - '-enable-latency-metrics=false'
The Ingress configuration for my service named "account":
kind: Ingress
apiVersion: networking.k8s.io/v1beta1
metadata:
  name: account
  namespace: infra
  resourceVersion: '194790'
  labels:
    app.kubernetes.io/managed-by: Helm
  annotations:
    kubernetes.io/ingress.class: infra
    meta.helm.sh/release-name: infra
    meta.helm.sh/release-namespace: infra
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/proxy-buffer-size: 128k
    nginx.ingress.kubernetes.io/proxy-buffering: 'on'
    nginx.ingress.kubernetes.io/proxy-buffers-number: '4'
spec:
  tls:
    - hosts:
        - account.infra.mydomain.com
      secretName: my-default-cert  # self-signed certificate with CN=account.infra.mydomain.com
  rules:
    - host: account.infra.mydomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              serviceName: account
              servicePort: 80
status:
  loadBalancer:
    ingress:
      - ip: 123.123.123.123  # redacted
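For reference, the manifest above still uses networking.k8s.io/v1beta1 (which 1.19 still serves). My understanding of the same Ingress written against networking.k8s.io/v1, with the affinity annotations unchanged, is roughly this (untested sketch; only the backend syntax changes):

kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: account
  namespace: infra
  annotations:
    kubernetes.io/ingress.class: infra
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/proxy-buffer-size: 128k
    nginx.ingress.kubernetes.io/proxy-buffering: 'on'
    nginx.ingress.kubernetes.io/proxy-buffers-number: '4'
spec:
  tls:
    - hosts:
        - account.infra.mydomain.com
      secretName: my-default-cert
  rules:
    - host: account.infra.mydomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:        # v1 replaces serviceName/servicePort with a service object
                name: account
                port:
                  number: 80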
My "account" Service yaml:
kind: Service
apiVersion: v1
metadata:
  name: account
  namespace: infra
  labels:
    app.kubernetes.io/instance: infra
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: account
    app.kubernetes.io/version: latest
    helm.sh/chart: account-0.1.0
  annotations:
    meta.helm.sh/release-name: infra
    meta.helm.sh/release-namespace: infra
spec:
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app.kubernetes.io/instance: infra
    app.kubernetes.io/name: account
  clusterIP: 10.0.242.212
  type: ClusterIP
  sessionAffinity: ClientIP  # just tried to add this setting to the service, but does not work either
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
status:
  loadBalancer: {}
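In case it is relevant: the fuller set of cookie-affinity annotations I have seen documented is for the community kubernetes/ingress-nginx controller and would look roughly like the fragment below; my chart is NGINX Inc's nginx-ingress, so I am not sure whether these names apply to it at all.

metadata:
  annotations:
    nginx.ingress.kubernetes.io/affinity: cookie
    # the names below come from the community ingress-nginx docs; values are examples
    nginx.ingress.kubernetes.io/session-cookie-name: route
    nginx.ingress.kubernetes.io/session-cookie-max-age: '172800'
    nginx.ingress.kubernetes.io/session-cookie-expires: '172800'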