
I am seeing intermittent response failures on outbound connections such as RPC calls, logged by my (Java) application as follows:

org.apache.http.NoHttpResponseException: RPC_SERVER.com:443 failed to respond !

Outbound connection flow

Kubernetes Node -> ELB for internal NGINX -> internal NGINX ->[Upstream To]-> ELB RPC server -> RPC server instance

The problem does not occur on plain EC2 (AWS) instances.

I can reproduce it on my local machine as follows (a consolidated command sketch follows the list):

  1. Run the main application (acting as the client) on port 9200
  2. Run the RPC server on port 9205
  3. The client establishes its connection to the server through port 9202
  4. Run $ socat TCP4-LISTEN:9202,reuseaddr TCP4:localhost:9205 so that it listens on port 9202 and forwards to port 9205 (the RPC server)
  5. Add an iptables rule: $ sudo iptables -A INPUT -p tcp --dport 9202 -j DROP
  6. Trigger an RPC call; it returns the same error message described above
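
For reference, steps 4 to 6 boil down to the commands below (same values as in the list; the cleanup line at the end is an addition so the DROP rule does not linger after the test):

# Forward local port 9202 to the RPC server on port 9205
$ socat TCP4-LISTEN:9202,reuseaddr TCP4:localhost:9205 &

# Silently drop inbound packets to port 9202, so the client sees no RST/FIN
$ sudo iptables -A INPUT -p tcp --dport 9202 -j DROP

# ... trigger the RPC call from the client and observe NoHttpResponseException ...

# Remove the rule afterwards
$ sudo iptables -D INPUT -p tcp --dport 9202 -j DROP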

Hypothesis

It is caused by NAT on Kubernetes: as far as I know, NAT relies on conntrack, and conntrack drops a TCP connection's tracking entry if the connection is idle for some time, while the client still assumes the connection is established even though it no longer is. (Correct me if I'm wrong.)
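
One way to probe this hypothesis is to watch the conntrack table on the node while connections sit idle; this is a sketch that assumes conntrack-tools is installed on the node, and the port filter (443, taken from the stack trace) is only an example:

# List tracked TCP connections towards the RPC endpoint
$ sudo conntrack -L -p tcp --dport 443

# Stream conntrack events (NEW/UPDATE/DESTROY) to see entries expire
$ sudo conntrack -E -p tcp --dport 443

# Timeouts conntrack applies per TCP state (same values as the dump below)
$ sysctl net.netfilter.nf_conntrack_tcp_timeout_established net.netfilter.nf_conntrack_tcp_timeout_close_wait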

I have also tried scaling kube-dns to 10 replicas, but the problem persists.

Node spec

Using Calico as the network plugin

$ sysctl -a | grep conntrack

net.netfilter.nf_conntrack_acct = 0
net.netfilter.nf_conntrack_buckets = 65536
net.netfilter.nf_conntrack_checksum = 1
net.netfilter.nf_conntrack_count = 1585
net.netfilter.nf_conntrack_events = 1
net.netfilter.nf_conntrack_expect_max = 1024
net.netfilter.nf_conntrack_generic_timeout = 600
net.netfilter.nf_conntrack_helper = 1
net.netfilter.nf_conntrack_icmp_timeout = 30
net.netfilter.nf_conntrack_log_invalid = 0
net.netfilter.nf_conntrack_max = 262144
net.netfilter.nf_conntrack_tcp_be_liberal = 0
net.netfilter.nf_conntrack_tcp_loose = 1
net.netfilter.nf_conntrack_tcp_max_retrans = 3
net.netfilter.nf_conntrack_tcp_timeout_close = 10
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 3600
net.netfilter.nf_conntrack_tcp_timeout_established = 86400
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_last_ack = 30
net.netfilter.nf_conntrack_tcp_timeout_max_retrans = 300
net.netfilter.nf_conntrack_tcp_timeout_syn_recv = 60
net.netfilter.nf_conntrack_tcp_timeout_syn_sent = 120
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_unacknowledged = 300
net.netfilter.nf_conntrack_timestamp = 0
net.netfilter.nf_conntrack_udp_timeout = 30
net.netfilter.nf_conntrack_udp_timeout_stream = 180
net.nf_conntrack_max = 262144

Kubelet configuration

[Service]
Restart=always
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CLOUD_ARGS=--cloud-provider=aws"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_EXTRA_ARGS $KUBELET_CLOUD_ARGS

Kubectl version

Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T09:14:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.7", GitCommit:"8e1552342355496b62754e61ad5f802a0f3f1fa7", GitTreeState:"clean", BuildDate:"2017-09-28T23:56:03Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Kube-proxy logs

W1004 05:34:17.400700       8 server.go:190] WARNING: all flags other than --config, --write-config-to, and --cleanup-iptables are deprecated. Please begin using a config file ASAP.
I1004 05:34:17.405871       8 server.go:478] Using iptables Proxier.
W1004 05:34:17.414111       8 server.go:787] Failed to retrieve node info: nodes "ip-172-30-1-20" not found
W1004 05:34:17.414174       8 proxier.go:483] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
I1004 05:34:17.414288       8 server.go:513] Tearing down userspace rules.
I1004 05:34:17.443472       8 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_max' to 262144
I1004 05:34:17.443518       8 conntrack.go:52] Setting nf_conntrack_max to 262144
I1004 05:34:17.443555       8 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1004 05:34:17.443584       8 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1004 05:34:17.443851       8 config.go:102] Starting endpoints config controller
I1004 05:34:17.443888       8 config.go:202] Starting service config controller
I1004 05:34:17.443890       8 controller_utils.go:994] Waiting for caches to sync for endpoints config controller
I1004 05:34:17.443916       8 controller_utils.go:994] Waiting for caches to sync for service config controller
I1004 05:34:17.544155       8 controller_utils.go:1001] Caches are synced for service config controller
I1004 05:34:17.544155       8 controller_utils.go:1001] Caches are synced for endpoints config controller

$ lsb_release -s -d
Ubuntu 16.04.3 LTS


1 Answer


Check the value of sysctl net.netfilter.nf_conntrack_tcp_timeout_close_wait inside the pod that contains your program. The value you listed on the node (3600) may differ from the value inside the pod.
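
A quick way to compare the two values (the pod name is a placeholder; reading /proc avoids depending on a sysctl binary inside the image, and if the file does not exist in the pod, the conntrack sysctls are not namespaced on that kernel and the node value applies):

# Value inside the pod's network namespace
$ kubectl exec <your-app-pod> -- cat /proc/sys/net/netfilter/nf_conntrack_tcp_timeout_close_wait

# Value on the node the pod runs on
$ sysctl net.netfilter.nf_conntrack_tcp_timeout_close_wait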

If the value inside the pod is too small (for example 60), your Java client half-closes the TCP connection with a FIN once it has finished sending, and the response takes longer than the close_wait timeout, then nf_conntrack loses the connection state and your client program never receives the response.

You may need to change the client program's behavior so it does not use TCP half-close, or raise the value of net.netfilter.nf_conntrack_tcp_timeout_close_wait. See https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/
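
For a one-off test on a single node, one way to raise the value inside one pod's network namespace from the host is sketched below. It assumes a Docker runtime, root on the node, and a kernel that namespaces these conntrack sysctls (which the per-pod difference above implies); the container ID is a placeholder, and this is not a persistent fix, for which see the sysctl configuration described in the linked page:

# Find the container's PID, then write the sysctl inside its network namespace
$ PID=$(docker inspect --format '{{.State.Pid}}' <container-id>)
$ sudo nsenter -t "$PID" -n sysctl -w net.netfilter.nf_conntrack_tcp_timeout_close_wait=3600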

Answered 2019-12-20T04:15:16.640