
I run Cloud Run for Anthos services on a Kubernetes cluster managed in Google Kubernetes Engine.

All deployed services suddenly stopped responding. The cause is that the queue-proxy container in the services' pods has started crash looping.

I'm not familiar with Knative, and I couldn't find anything similar online related to GKE and this specific container.

The logs of the queue-proxy container don't really help me, since I'm not familiar with Knative:


{"level":"info","ts":1626091452.0552812,"logger":"fallback-logger","caller":"logging/config.go:78","msg":"Fetch GitHub commit ID from kodata failed","error":"open /var/run/ko/HEAD: no such file or directory"}
{"level":"info","ts":1626091452.055691,"logger":"fallback-logger.queueproxy","caller":"queue/main.go:347","msg":"Queue container is starting with queue.BreakerParams{QueueDepth:800, MaxConcurrency:80, InitialCapacity:80}","knative.dev/key":"default/test-nginx-00002-kuw","knative.dev/pod":"test-nginx-00002-kuw-deployment-5b94b4c464-8pwdq"}
{"level":"info","ts":1626091452.0628664,"logger":"fallback-logger.queueproxy","caller":"metrics/exporter.go:160","msg":"Flushing the existing exporter before setting up the new exporter.","knative.dev/key":"default/test-nginx-00002-kuw","knative.dev/pod":"test-nginx-00002-kuw-deployment-5b94b4c464-8pwdq"}
{"level":"info","ts":1626091452.0656931,"logger":"fallback-logger.queueproxy","caller":"metrics/stackdriver_exporter.go:203","msg":"Created Opencensus Stackdriver exporter with config &{knative.dev/internal/serving revision stackdriver 60000000000 0x163f700 <nil>  false 0  true knative.dev/internal/serving/revision custom.googleapis.com/knative.dev/revision {   false}}","knative.dev/key":"default/test-nginx-00002-kuw","knative.dev/pod":"test-nginx-00002-kuw-deployment-5b94b4c464-8pwdq"}
{"level":"info","ts":1626091452.065773,"logger":"fallback-logger.queueproxy","caller":"metrics/exporter.go:173","msg":"Successfully updated the metrics exporter; old config: <nil>; new config &{knative.dev/internal/serving revision stackdriver 60000000000 0x163f700 <nil>  false 0  true knative.dev/internal/serving/revision custom.googleapis.com/knative.dev/revision {   false}}","knative.dev/key":"default/test-nginx-00002-kuw","knative.dev/pod":"test-nginx-00002-kuw-deployment-5b94b4c464-8pwdq"}
{"level":"info","ts":1626091453.9316235,"logger":"fallback-logger.queueproxy","caller":"queue/main.go:234","msg":"Received TERM signal, attempting to gracefully shutdown servers.","knative.dev/key":"default/test-nginx-00002-kuw","knative.dev/pod":"test-nginx-00002-kuw-deployment-5b94b4c464-8pwdq"}
{"level":"info","ts":1626091453.9317243,"logger":"fallback-logger.queueproxy","caller":"queue/main.go:236","msg":"Sleeping 45s to allow K8s propagation of non-ready state","knative.dev/key":"default/test-nginx-00002-kuw","knative.dev/pod":"test-nginx-00002-kuw-deployment-5b94b4c464-8pwdq"}
{"level":"info","ts":1626091498.93187,"logger":"fallback-logger.queueproxy","caller":"queue/main.go:241","msg":"Shutting down main server","knative.dev/key":"default/test-nginx-00002-kuw","knative.dev/pod":"test-nginx-00002-kuw-deployment-5b94b4c464-8pwdq"}
{"level":"info","ts":1626091499.4330895,"logger":"fallback-logger.queueproxy","caller":"queue/main.go:250","msg":"Shutting down server: admin","knative.dev/key":"default/test-nginx-00002-kuw","knative.dev/pod":"test-nginx-00002-kuw-deployment-5b94b4c464-8pwdq"}
{"level":"info","ts":1626091499.9333868,"logger":"fallback-logger.queueproxy","caller":"queue/main.go:250","msg":"Shutting down server: metrics","knative.dev/key":"default/test-nginx-00002-kuw","knative.dev/pod":"test-nginx-00002-kuw-deployment-5b94b4c464-8pwdq"}
{"level":"info","ts":1626091500.433703,"logger":"fallback-logger.queueproxy","caller":"queue/main.go:255","msg":"Shutdown complete, exiting...","knative.dev/key":"default/test-nginx-00002-kuw","knative.dev/pod":"test-nginx-00002-kuw-deployment-5b94b4c464-8pwdq"}

It restarts a few times before its state changes from Ready to CrashLoopBackOff, which affects the pod's readiness and makes it unavailable. GKE creates another container, user-container, for my actual application, and that one always runs fine. In the intervals between CrashLoopBackOff restarts, the service is reachable and works normally.
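
To see why the container is being restarted, I have been checking its last termination state and exit code roughly like this (same pod as above; the jsonpath expression just picks the queue-proxy entry out of the container statuses):

# full pod description, including restart count and recent events
kubectl describe pod test-nginx-00002-kuw-deployment-5b94b4c464-8pwdq -n default

# last state (reason / exit code) of the queue-proxy container only
kubectl get pod test-nginx-00002-kuw-deployment-5b94b4c464-8pwdq -n default -o jsonpath='{.status.containerStatuses[?(@.name=="queue-proxy")].lastState}'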

The cluster configuration hasn't changed. I tried upgrading the nodes' version, but the problem persists.

I'm starting to think this container is misleading me and the real cause is somewhere else, but I don't know where to look, since I haven't actually changed anything.

Do you have any suggestions on how to troubleshoot this?

EDIT: the cluster was running version 1.18.17-gke.1901. I tried 1.19.9-gke.1700 and 1.20.7-gke.2200, but the problem persists.

EDIT2: I just came across this in the release notes: Version 1.18.18-gke.1700 is no longer available in the Stable channel. Could it be that, because my cluster was running on this channel, it was upgraded automatically?
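
If that theory is right, I suppose something like the following should show the cluster's release channel and any recent automatic upgrade operations (CLUSTER_NAME and ZONE are placeholders for my actual cluster):

# release channel and current master/node versions
gcloud container clusters describe CLUSTER_NAME --zone ZONE --format='value(releaseChannel.channel, currentMasterVersion, currentNodeVersion)'

# recent upgrade operations on the cluster, oldest first
gcloud container operations list --zone ZONE --filter='operationType=UPGRADE_MASTER OR operationType=UPGRADE_NODES' --sort-by=startTime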

