
I have a HorizontalPodAutoscaler that scales my pods based on CPU. Here minReplicas is set to 5:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-web
  minReplicas: 5 
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50

Then I added CronJobs to scale my HorizontalPodAutoscaler up/down based on the time of day:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: production
  name: cron-runner
rules:
- apiGroups: ["autoscaling"]
  resources: ["horizontalpodautoscalers"]
  verbs: ["patch", "get"]

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cron-runner
  namespace: production
subjects:
- kind: ServiceAccount
  name: sa-cron-runner
  namespace: production
roleRef:
  kind: Role
  name: cron-runner
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-cron-runner
  namespace: production
---

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: django-scale-up-job
  namespace: production
spec:
  schedule: "56 11 * * 1-6"
  successfulJobsHistoryLimit: 0 # Remove after successful completion
  failedJobsHistoryLimit: 1 # Retain failed so that we see it
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sa-cron-runner
          containers:
          - name: django-scale-up-job
            image: bitnami/kubectl:latest
            command:
            - /bin/sh
            - -c
            - kubectl patch hpa myapp-web --patch '{"spec":{"minReplicas":8}}'
          restartPolicy: OnFailure
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: django-scale-down-job
  namespace: production
spec:
  schedule: "30 20 * * 1-6"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 0 # Remove after successful completion
  failedJobsHistoryLimit: 1 # Retain failed so that we see it
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sa-cron-runner
          containers:
          - name: django-scale-down-job
            image: bitnami/kubectl:latest
            command:
            - /bin/sh
            - -c
            - kubectl patch hpa myapp-web --patch '{"spec":{"minReplicas":5}}'
          restartPolicy: OnFailure

This works really well, except that now when I deploy, this value gets overwritten by the minReplicas in the HorizontalPodAutoscaler spec (in my case, it's set to 5).

I'm deploying my HPA with kubectl apply -f ~/autoscale.yaml

Is there a way to handle this? Do I need to create some kind of shared logic so that my deployment scripts can work out what the minReplicas value should be? Or is there a simpler way to handle it?


1 Answer


I think you could also consider the following two options:


Manage your application's lifecycle with Helm, using its lookup function:

The main idea behind this solution is to query the state of the specific cluster resource (here, the HPA) before trying to create/recreate it with the helm install/upgrade command.

By that I mean checking the current minReplicas value each time before upgrading your application stack.
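
A minimal sketch of that check, assuming the HPA above is templated in your chart (note that lookup requires Helm 3.2+ and returns an empty result during helm template or --dry-run, so the fallback value is used there):

{{- $hpa := lookup "autoscaling/v2beta2" "HorizontalPodAutoscaler" .Release.Namespace "myapp-web" }}
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-web
  # Keep whatever minReplicas the CronJobs last patched in;
  # fall back to 5 on first install
  {{- if $hpa }}
  minReplicas: {{ $hpa.spec.minReplicas }}
  {{- else }}
  minReplicas: 5
  {{- end }}
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50

This way helm upgrade re-renders the HPA with the live value instead of the hard-coded default.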


Manage the HPA resource separately from your application manifest files

Here you can hand this task over to a dedicated HPA operator, which can coexist with your CronJobs that adjust minReplicas according to the specific schedule.
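
If a dedicated operator is more than you need, the same separation can be approximated with a small deploy wrapper. This is a hypothetical sketch (the script and its flow are my assumption, not part of your setup): it reads the live minReplicas back before re-applying the manifest, so the apply never clobbers what the CronJobs set.

#!/bin/sh
# deploy.sh (hypothetical): re-apply autoscale.yaml without resetting the
# minReplicas that the scale-up/scale-down CronJobs may have patched in.
set -e

# Read the live value, if the HPA already exists
current=$(kubectl -n production get hpa myapp-web \
  -o jsonpath='{.spec.minReplicas}' 2>/dev/null || true)

kubectl apply -f ~/autoscale.yaml

# Restore the live value so the apply doesn't revert it to the file's default
if [ -n "$current" ]; then
  kubectl -n production patch hpa myapp-web \
    --patch "{\"spec\":{\"minReplicas\":$current}}"
fi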

Answered 2021-02-23T12:18:55.237