I'm having problems with a Kubernetes cluster on Fedora servers. I have 1 master and 2 nodes, with etcd, flannel, docker and kubernetes configured.
I run:
kubectl run busybox --image=busybox --port 8080 \
-- sh -c "while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; \
env | grep HOSTNAME | sed 's/.*=//g'; } | nc -l -p 8080; done"
and this works fine.
kubectl expose deployment busybox --type=NodePort
Now:
kubectl autoscale deployment busybox --min=1 --max=4 --cpu-percent=20
deployment "busybox" autoscaled
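One thing I'm unsure about: as far as I understand, the HPA can only compute a CPU percentage if the busybox pods declare a CPU request, and the kubectl run above does not set one. A sketch of how I think a request could be added (the 100m/200m values are just placeholders):
kubectl set resources deployment busybox --requests=cpu=100m --limits=cpu=200m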
But when I check the hpa, the metric shows as unknown:
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
busybox Deployment/busybox <unknown>/20% 1 4 1 1h
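To see whether the resource metrics API is reachable at all (my assumption is that <unknown> means the HPA simply can't query it), these should return data once a metrics server is healthy:
kubectl top pods
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/pods"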
So I tried this: https://github.com/kubernetes-incubator/metrics-server
git clone https://github.com/kubernetes-incubator/metrics-server.git
kubectl create -f metrics-server/deploy/1.8+/
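The manifests put everything in kube-system, so the rollout can be checked there (the pod name below is the one from my cluster):
kubectl -n kube-system get pods | grep metrics-server
kubectl -n kube-system describe pod metrics-server-6fbfb84cdd-5gkth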
but the metrics-server pod ends up in CrashLoopBackOff:
kubectl logs metrics-server-6fbfb84cdd-5gkth --namespace=kube-system
I0618 18:23:36.725579 1 heapster.go:71] /metrics-server --source=kubernetes.summary_api:''
I0618 18:23:36.741334 1 heapster.go:72] Metrics Server version v0.2.1
F0618 18:23:36.752641 1 heapster.go:112] Failed to create source provide: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
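Since the error is about the serviceaccount token mount, a way to check whether a token Secret was ever generated for the metrics-server ServiceAccount (my understanding is it should show up under secrets and be referenced in the ServiceAccount):
kubectl -n kube-system get serviceaccount metrics-server -o yaml
kubectl -n kube-system get secrets | grep metrics-server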
and
kubectl describe hpa busybox
Name: busybox
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Mon, 18 Jun 2018 12:55:28 -0400
Reference: Deployment/busybox
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): <unknown> / 20%
Min replicas: 1
Max replicas: 4
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale
ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedComputeMetricsReplicas 1h (x13 over 1h) horizontal-pod-autoscaler failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
Warning FailedGetResourceMetric 49m (x91 over 1h) horizontal-pod-autoscaler unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
Warning FailedGetResourceMetric 44m (x9 over 48m) horizontal-pod-autoscaler unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
Warning FailedComputeMetricsReplicas 33m (x13 over 39m) horizontal-pod-autoscaler failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
Warning FailedGetResourceMetric 4m (x71 over 39m) horizontal-pod-autoscaler unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
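I also understand the HPA reaches metrics-server through an APIService; the deploy manifests register v1beta1.metrics.k8s.io (if I read them right), so its availability can be checked with:
kubectl get apiservice v1beta1.metrics.k8s.io -o yaml
I'd expect its Available condition to be False while the backing pod is crash-looping.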
I had removed ServiceAccount from KUBE_ADMISSION_CONTROL in /etc/kubernetes/apiserver.
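My guess is that this is the root of the token error: with the ServiceAccount admission plugin disabled, no token Secret gets mounted into pods, so metrics-server finds nothing at /var/run/secrets/kubernetes.io/serviceaccount/token. A sketch of what I think the line would look like if I put it back (the other plugins are just the ones from the stock Fedora file and presumably need to stay whatever they were before I removed ServiceAccount):
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
As far as I know the token controller also needs a signing key pair if one isn't configured already (--service-account-key-file on the apiserver, --service-account-private-key-file on the controller manager). Is re-enabling ServiceAccount the right fix, or can metrics-server work without it?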