My pods restart frequently after I added resource requests/limits.
Before adding resources, the pods did not restart at all, or it happened maybe once or twice a day.
I am not sure whether the resource settings affect the health checks and that is why the pods restart so often. Here is my Deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testservice-dpm
  labels:
    app: testservice-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testservice-api
  template:
    metadata:
      labels:
        app: testservice-api
    spec:
      containers:
      - name: testservice
        image: testservice:v6.0.0
        env:
        - name: MSSQL_PORT
          value: "1433"
        resources:
          limits:
            cpu: 500m
            memory: 1000Mi
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
          name: test-p
        volumeMounts:
        - name: test-v
          mountPath: /app/appsettings.json
          subPath: appsettings.json
        livenessProbe:
          httpGet:
            path: /api/ServiceHealth/CheckLiveness
            port: 80
            scheme: HTTP
          initialDelaySeconds: 3
          timeoutSeconds: 1
          periodSeconds: 3
          successThreshold: 1
          failureThreshold: 1
        readinessProbe:
          httpGet:
            path: /api/ServiceHealth/CheckReadiness
            port: 80
            scheme: HTTP
          initialDelaySeconds: 3
          timeoutSeconds: 1
          periodSeconds: 3
          successThreshold: 1
          failureThreshold: 1
      volumes:
      - name: test-v
        configMap:
          name: testservice-config
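Looking at the numbers, these probes leave the app almost no slack: failureThreshold: 1 together with timeoutSeconds: 1 and periodSeconds: 3 means a single response slower than one second counts as a failure, and one failed liveness probe is already enough for the kubelet to restart the container. For comparison, below is a more tolerant liveness probe block I am considering testing; the exact values are my own guesses, not verified:

# Hypothetical, more tolerant settings (untested guesses on my part):
livenessProbe:
  httpGet:
    path: /api/ServiceHealth/CheckLiveness
    port: 80
    scheme: HTTP
  initialDelaySeconds: 10  # give the app more time to boot under the 100m CPU request
  timeoutSeconds: 5        # tolerate slow responses while the container is CPU-throttled
  periodSeconds: 10
  successThreshold: 1
  failureThreshold: 3      # restart only after three consecutive failures

If the slowness only happens at startup, a separate startupProbe might also be worth trying, but I have not tested that either.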
Below are the results of kubectl describe for all the testservice pods.
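(Collected with something like kubectl describe pod -n testapi -l app=testservice-api; the label selector matches the Deployment above.)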
- testservice-dpm-d7979cc69-rwxr4 (restarted 7 times within 10 minutes and is still in back-off restarting the failed container)
Name:         testservice-dpm-d7979cc69-rwxr4
Namespace:    testapi
Priority:     0
Node:         workernode3/yyy.yyy.yy.yy
Start Time:   Thu, 30 Dec 2021 12:48:50 +0700
Labels:       app=testservice-api
              pod-template-hash=d7979cc69
Annotations:  kubectl.kubernetes.io/restartedAt: 2021-12-29T20:02:45Z
Status:       Running
IP:           xx.xxx.x.xxx
IPs:
  IP:  xx.xxx.x.xxx
Controlled By:  ReplicaSet/testservice-dpm-d7979cc69
Containers:
  testservice:
    Container ID:   docker://86a50f98b48bcf8bfa209a478c1127e998e36c1c7bcece71599f50feabb89834
    Image:          testservice:v6.0.0
    Image ID:       docker-pullable://testservice@sha256:57a3955d07febf4636eeda1bc6a18468aacf66e883d7f6d8d3fdcb5163724a84
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 30 Dec 2021 12:55:13 +0700
      Finished:     Thu, 30 Dec 2021 12:55:19 +0700
    Ready:          False
    Restart Count:  7
    Limits:
      cpu:     500m
      memory:  1000Mi
    Requests:
      cpu:     100m
      memory:  100Mi
    Liveness:   http-get http://:80/api/ServiceHealth/CheckLiveness delay=3s timeout=1s period=3s #success=1 #failure=1
    Readiness:  http-get http://:80/api/ServiceHealth/CheckReadiness delay=3s timeout=1s period=3s #success=1 #failure=1
    Environment:
      MSSQL_PORT:  1433
    Mounts:
      /app/appsettings.json from authen-v (rw,path="appsettings.json")
      /etc/localtime from tz-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fd9bt (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  authen-v:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      testservice-config
    Optional:  false
  tz-config:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/share/zoneinfo/Asia/Bangkok
    HostPathType:  File
  kube-api-access-fd9bt:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  11m                 default-scheduler  Successfully assigned testapi/testservice-dpm-d7979cc69-rwxr4 to workernode3
  Warning  Unhealthy  11m (x2 over 11m)   kubelet            Readiness probe failed: Get "http://xx.xxx.x.xxx:80/api/ServiceHealth/CheckReadiness": dial tcp xx.xxx.x.xxx:80: connect: connection refused
  Warning  Unhealthy  11m (x3 over 11m)   kubelet            Readiness probe failed: Get "http://xx.xxx.x.xxx:80/api/ServiceHealth/CheckReadiness": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  11m (x3 over 11m)   kubelet            Liveness probe failed: Get "http://xx.xxx.x.xxx:80/api/ServiceHealth/CheckLiveness": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Normal   Killing    11m (x3 over 11m)   kubelet            Container testservice failed liveness probe, will be restarted
  Normal   Created    10m (x4 over 11m)   kubelet            Created container testservice
  Normal   Started    10m (x4 over 11m)   kubelet            Started container testservice
  Normal   Pulled     10m (x4 over 11m)   kubelet            Container image "testservice:v6.0.0" already present on machine
  Warning  BackOff    80s (x51 over 11m)  kubelet            Back-off restarting failed container
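Note that the last state is Terminated with Reason: Completed and Exit Code: 0, so the container seems to exit cleanly when killed rather than crash. To rule out the app failing on its own rather than just responding slowly, my next step is to check the previous container's logs and the live resource usage (kubectl top assumes metrics-server is installed):

kubectl logs testservice-dpm-d7979cc69-rwxr4 -n testapi --previous
kubectl top pod -n testapi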
- testservice-dpm-d7979cc69-7nq28 (restarted 4 times within 10 minutes, now running)
Name:         testservice-dpm-d7979cc69-7nq28
Namespace:    testapi
Priority:     0
Node:         workernode3/yyy.yyy.yy.yy
Start Time:   Thu, 30 Dec 2021 12:47:37 +0700
Labels:       app=testservice-api
              pod-template-hash=d7979cc69
Annotations:  kubectl.kubernetes.io/restartedAt: 2021-12-29T20:02:45Z
Status:       Running
IP:           xx.xxx.x.xxx
IPs:
  IP:  xx.xxx.x.xxx
Controlled By:  ReplicaSet/testservice-dpm-d7979cc69
Containers:
  testservice:
    Container ID:   docker://03739fc1694370abda202ba56928b46fb5f3ef7545f527c2dd73764e55f725cd
    Image:          testservice:v6.0.0
    Image ID:       docker-pullable://testservice@sha256:57a3955d07febf4636eeda1bc6a18468aacf66e883d7f6d8d3fdcb5163724a84
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 30 Dec 2021 12:48:44 +0700
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 30 Dec 2021 12:48:10 +0700
      Finished:     Thu, 30 Dec 2021 12:48:14 +0700
    Ready:          True
    Restart Count:  4
    Limits:
      cpu:     500m
      memory:  1000Mi
    Requests:
      cpu:     100m
      memory:  100Mi
    Liveness:   http-get http://:80/api/ServiceHealth/CheckLiveness delay=3s timeout=1s period=3s #success=1 #failure=1
    Readiness:  http-get http://:80/api/ServiceHealth/CheckReadiness delay=3s timeout=1s period=3s #success=1 #failure=1
    Environment:
      MSSQL_PORT:  1433
    Mounts:
      /app/appsettings.json from authen-v (rw,path="appsettings.json")
      /etc/localtime from tz-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-slz4b (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  authen-v:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      testservice-config
    Optional:  false
  tz-config:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/share/zoneinfo/Asia/Bangkok
    HostPathType:  File
  kube-api-access-slz4b:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  14m                default-scheduler  Successfully assigned testapi/testservice-dpm-d7979cc69-7nq28 to workernode3
  Warning  Unhealthy  14m (x2 over 14m)  kubelet            Readiness probe failed: Get "http://xx.xxx.x.xxx:80/api/ServiceHealth/CheckReadiness": dial tcp xx.xxx.x.xxx:80: connect: connection refused
  Warning  Unhealthy  14m (x3 over 14m)  kubelet            Readiness probe failed: Get "http://xx.xxx.x.xxx:80/api/ServiceHealth/CheckReadiness": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  14m (x3 over 14m)  kubelet            Liveness probe failed: Get "http://xx.xxx.x.xxx:80/api/ServiceHealth/CheckLiveness": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Normal   Killing    14m (x3 over 14m)  kubelet            Container testservice failed liveness probe, will be restarted
  Warning  BackOff    14m (x2 over 14m)  kubelet            Back-off restarting failed container
  Normal   Started    14m (x4 over 14m)  kubelet            Started container testservice
  Normal   Pulled     14m (x4 over 14m)  kubelet            Container image "testservice:v6.0.0" already present on machine
  Normal   Created    14m (x4 over 14m)  kubelet            Created container testservice
- testservice-dpm-d7979cc69-z566c (no restarts within 10 minutes, now running)
Name:         testservice-dpm-d7979cc69-z566c
Namespace:    testapi
Priority:     0
Node:         workernode3/yyy.yyy.yy.yy
Start Time:   Thu, 30 Dec 2021 12:47:30 +0700
Labels:       app=testservice-api
              pod-template-hash=d7979cc69
Annotations:  kubectl.kubernetes.io/restartedAt: 2021-12-29T20:02:45Z
Status:       Running
IP:           xx.xxx.x.xxx
IPs:
  IP:  xx.xxx.x.xxx
Controlled By:  ReplicaSet/testservice-dpm-d7979cc69
Containers:
  testservice:
    Container ID:   docker://19c3a672cd8453e1c5526454ffb0fbdec67fa5b17d6d8166fae38930319ed247
    Image:          testservice:v6.0.0
    Image ID:       docker-pullable://testservice@sha256:57a3955d07febf4636eeda1bc6a18468aacf66e883d7f6d8d3fdcb5163724a84
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 30 Dec 2021 12:47:31 +0700
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     500m
      memory:  1000Mi
    Requests:
      cpu:     100m
      memory:  100Mi
    Liveness:   http-get http://:80/api/ServiceHealth/CheckLiveness delay=3s timeout=1s period=3s #success=1 #failure=1
    Readiness:  http-get http://:80/api/ServiceHealth/CheckReadiness delay=3s timeout=1s period=3s #success=1 #failure=1
    Environment:
      MSSQL_PORT:  1433
    Mounts:
      /app/appsettings.json from authen-v (rw,path="appsettings.json")
      /etc/localtime from tz-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cpdnc (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  authen-v:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      testservice-config
    Optional:  false
  tz-config:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/share/zoneinfo/Asia/Bangkok
    HostPathType:  File
  kube-api-access-cpdnc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  16m   default-scheduler  Successfully assigned testapi/testservice-dpm-d7979cc69-z566c to workernode3
  Normal  Pulled     16m   kubelet            Container image "testservice:v6.0.0" already present on machine
  Normal  Created    16m   kubelet            Created container testservice
  Normal  Started    16m   kubelet            Started container testservice
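All three pods run the same image on the same node with the same probe settings, yet one restarted 7 times, one 4 times, and one not at all. Could the new CPU settings (100m request / 500m limit) be slowing the container down at startup enough to trip these tight probes, and if so, what is the recommended way to tune resources and probe settings together?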