This article explains how to use HPA (Horizontal Pod Autoscaler) and walks through the details that deserve attention along the way.
autoscaling/v1 in practice
The v1 template is probably the one you see most often, and it is also the simplest. The v1 version of HPA supports only one metric: CPU. Traditionally, autoscaling is expected to support at least the two metrics CPU and memory, so why does Kubernetes expose only CPU here? The earliest HPA was in fact planned to support both, but development and testing showed that memory is not a good signal for scaling decisions. Unlike CPU, many memory-heavy applications do not release memory quickly just because HPA spins up new containers; their memory is managed by a language-level VM, which means reclamation is decided by the VM's garbage collector. Differences in GC timing could cause HPA to oscillate at inopportune moments, so the v1 version supports only the single CPU metric.
A standard v1 template looks roughly like this:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
Here scaleTargetRef identifies the object to be scaled; in this example it is a Deployment of apiVersion apps/v1. targetCPUUtilizationPercentage means that a scale-out is triggered once overall CPU utilization exceeds 50%.
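Incidentally, the same v1 HPA can also be created imperatively with kubectl rather than from a template; a minimal equivalent of the spec above:

kubectl autoscale deployment php-apache --min=1 --max=10 --cpu-percent=50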
Next, let's run through a simple demo. Log in to the Container Service console, create an application deployment, and choose creation from a template. The template content is as follows:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: php-apache
  labels:
    app: php-apache
spec:
  replicas: 1
  selector:
    matchLabels:
      app: php-apache
  template:
    metadata:
      labels:
        app: php-apache
    spec:
      containers:
      - name: php-apache
        image: registry.cn-hangzhou.aliyuncs.com/ringtail/hpa-example:v1.0
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "300Mi"
            cpu: "250m"
---
apiVersion: v1
kind: Service
metadata:
  name: php-apache
  labels:
    app: php-apache
spec:
  selector:
    app: php-apache
  ports:
  - protocol: TCP
    name: http
    port: 80
    targetPort: 80
  type: ClusterIP
Deploy the HPA template
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
Start the load test
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: load-generator
  labels:
    app: load-generator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: load-generator
  template:
    metadata:
      labels:
        app: load-generator
    spec:
      containers:
      - name: load-generator
        image: busybox
        command:
        - "sh"
        - "-c"
        - "while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done"
Check the scale-out status
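From the command line, the scale-out can be watched with standard kubectl (no cluster-specific assumptions):

# Watch the HPA's current/target utilization and replica count
kubectl get hpa php-apache -w
# Watch the Pods being added as the Deployment scales out
kubectl get pods -l app=php-apache -w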
Stop the load-testing application
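Assuming the load generator was created from the Deployment above, stopping the pressure is just a matter of deleting it, or scaling it to zero:

kubectl delete deployment load-generator
# or: kubectl scale deployment load-generator --replicas=0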
Check the scale-in status
Note that scale-in is intentionally conservative: by default the controller waits several minutes of sustained low utilization before removing replicas, so do not expect the replica count to drop immediately.
With that, an HPA using autoscaling/v1 is complete. This version of HPA is currently the simplest; it works whether or not you upgrade to Metrics-Server.
autoscaling/v2beta1 in practice
As mentioned earlier, HPA also has the autoscaling/v2beta1 and autoscaling/v2beta2 versions. The difference between the two is that autoscaling/v2beta1 supports Resource Metrics and Custom Metrics, while autoscaling/v2beta2 additionally adds support for External Metrics. We will not dwell on External Metrics in this article, because the community has few mature implementations of it so far; the comparatively mature implementation is Prometheus Custom Metrics.
The figure above shows how, once Metrics Server is enabled, HPA consumes the different types of Metrics. If you need Custom Metrics, you must install and configure the corresponding Custom Metrics Adapter. In what follows, we mainly walk through an example of autoscaling based on QPS.
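Before diving into the custom-metrics setup, for reference, here is a sketch of how the earlier v1 CPU example is expressed in autoscaling/v2beta1 syntax, where CPU becomes one entry of type Resource in the metrics list (same php-apache target as before, not a new example):

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50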
Install Metrics Server and enable it in kube-controller-manager
At present, Alibaba Cloud Container Service Kubernetes clusters still use Heapster by default; Container Service plans to switch to Metrics Server in version 1.12. One point deserves special mention here: although the community has begun to deprecate Heapster, many components still depend heavily on Heapster's API. Alibaba Cloud has therefore made Metrics Server fully Heapster-compatible, so developers can use the new Metrics Server features without worrying about other components breaking.
Before deploying the new Metrics Server, first back up some of Heapster's startup parameters, because they will be reused directly in the Metrics Server template shortly. The two sinks are what matter most: developers who need Influxdb should keep the first sink, and developers who want to keep the CloudMonitor integration should keep the second sink.
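One quick way to capture those parameters is to read them off the running workload. This assumes Heapster runs as a Deployment named heapster in kube-system, which is typical but worth verifying in your own cluster:

kubectl -n kube-system get deployment heapster -o yaml | grep -- '--sink'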
Copy these two parameters into the Metrics Server startup template. In this example both are kept, and the template is deployed.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/name: "Metrics-server"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: 443
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: admin
      containers:
      - name: metrics-server
        image: registry.cn-hangzhou.aliyuncs.com/ringtail/metrics-server:1.1
        imagePullPolicy: Always
        command:
        - /metrics-server
        - '--source=kubernetes:https://kubernetes.default'
        - '--sink=influxdb:http://monitoring-influxdb:8086'
        - '--sink=socket:tcp://monitor.csk.[region_id].aliyuncs.com:8093?clusterId=[cluster_id]&public=true'
Next, modify the Heapster Service so that its backend switches from Heapster to Metrics Server.
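One possible way to do the switch is to repoint the Service's selector at the Metrics Server Pods. This is only a sketch: it assumes the Heapster Service selects on the k8s-app label, so check the actual selector in your cluster first:

kubectl -n kube-system patch svc heapster \
  -p '{"spec":{"selector":{"k8s-app":"metrics-server"}}}'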
If the monitoring information on the right can now be retrieved from the console's node page, Metrics Server is fully compatible with Heapster.
Now run kubectl get apiservice; if the registered v1beta1.metrics.k8s.io API is visible, registration has succeeded.
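Both of the following are standard kubectl checks; if kubectl top returns node metrics, the Resource Metrics path through metrics.k8s.io is working end to end:

kubectl get apiservice v1beta1.metrics.k8s.io
kubectl top nodes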
Next we need to switch the Metrics data source in kube-controller-manager. kube-controller-manager is deployed on each master and is handed to the kubelet as a Static Pod, so modifying its configuration file is enough for the kubelet to pick up the change automatically. On the host, the kube-controller-manager manifest lives at /etc/kubernetes/manifests/kube-controller-manager.yaml.
Set --horizontal-pod-autoscaler-use-rest-clients=true. One caveat: if you edit with vim, vim automatically creates a backup file that can affect the final result, so the recommended approach is to move the configuration file to another directory, edit it there, and then move it back. With this, Metrics Server is ready to serve HPA; next we turn to the custom metrics part.
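A sketch of where the flag lands in the static Pod manifest; the surrounding flags are abbreviated, and only the one added line matters here:

# /etc/kubernetes/manifests/kube-controller-manager.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-controller-manager
    - --horizontal-pod-autoscaler-use-rest-clients=true
    # ...keep all existing flags unchanged...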
Deploy the Custom Metrics Adapter
If Prometheus is not yet deployed in the cluster, you can first deploy it by following "Alibaba Cloud Container Kubernetes Monitoring (7) - Deploying a Prometheus Monitoring Solution". Next we deploy the Custom Metrics Adapter.
kind: Namespace
apiVersion: v1
metadata:
  name: custom-metrics
---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: custom-metrics-apiserver
  namespace: custom-metrics
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: custom-metrics:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: custom-metrics-apiserver
  namespace: custom-metrics
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: custom-metrics-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: custom-metrics-apiserver
  namespace: custom-metrics
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: custom-metrics-resource-reader
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  - pods
  - services
  verbs:
  - get
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: custom-metrics-apiserver-resource-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: custom-metrics-resource-reader
subjects:
- kind: ServiceAccount
  name: custom-metrics-apiserver
  namespace: custom-metrics
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: custom-metrics-getter
rules:
- apiGroups:
  - custom.metrics.k8s.io
  resources:
  - "*"
  verbs:
  - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: hpa-custom-metrics-getter
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: custom-metrics-getter
subjects:
- kind: ServiceAccount
  name: horizontal-pod-autoscaler
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: custom-metrics-apiserver
  namespace: custom-metrics
  labels:
    app: custom-metrics-apiserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: custom-metrics-apiserver
  template:
    metadata:
      labels:
        app: custom-metrics-apiserver
    spec:
      tolerations:
      - key: beta.kubernetes.io/arch
        value: arm
        effect: NoSchedule
      - key: beta.kubernetes.io/arch
        value: arm64
        effect: NoSchedule
      serviceAccountName: custom-metrics-apiserver
      containers:
      - name: custom-metrics-server
        image: luxas/k8s-prometheus-adapter:v0.2.0-beta.0
        args:
        - --prometheus-url=http://prometheus-k8s.monitoring.svc:9090
        - --metrics-relist-interval=30s
        - --rate-interval=60s
        - --v=10
        - --logtostderr=true
        ports:
        - containerPort: 443
        securityContext:
          runAsUser: 0
---
apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: custom-metrics
spec:
  ports:
  - port: 443
    targetPort: 443
  selector:
    app: custom-metrics-apiserver
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.custom.metrics.k8s.io
spec:
  insecureSkipTLSVerify: true
  group: custom.metrics.k8s.io
  groupPriorityMinimum: 1000
  versionPriority: 5
  service:
    name: api
    namespace: custom-metrics
  version: v1beta1
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: custom-metrics-server-resources
rules:
- apiGroups:
  - custom-metrics.metrics.k8s.io
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: hpa-controller-custom-metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: custom-metrics-server-resources
subjects:
- kind: ServiceAccount
  name: horizontal-pod-autoscaler
  namespace: kube-system
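Once the adapter comes up, the custom metrics API group should be registered alongside the resource metrics one. A quick sanity check (the second command just dumps the API discovery document; piping through jq is optional):

kubectl get apiservice v1beta1.custom.metrics.k8s.io
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .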
Deploy the sample application and the HPA template
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sample-metrics-app
  name: sample-metrics-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-metrics-app
  template:
    metadata:
      labels:
        app: sample-metrics-app
    spec:
      tolerations:
      - key: beta.kubernetes.io/arch
        value: arm
        effect: NoSchedule
      - key: beta.kubernetes.io/arch
        value: arm64
        effect: NoSchedule
      - key: node.alpha.kubernetes.io/unreachable
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 0
      - key: node.alpha.kubernetes.io/notReady
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 0
      containers:
      - image: luxas/autoscale-demo:v0.1.2
        name: sample-metrics-app
        ports:
        - name: web
          containerPort: 8080
        readinessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 3
          periodSeconds: 5
        livenessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 3
          periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: sample-metrics-app
  labels:
    app: sample-metrics-app
spec:
  ports:
  - name: web
    port: 80
    targetPort: 8080
  selector:
    app: sample-metrics-app
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: sample-metrics-app
  labels:
    service-monitor: sample-metrics-app
spec:
  selector:
    matchLabels:
      app: sample-metrics-app
  endpoints:
  - port: web
---
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
  name: sample-metrics-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sample-metrics-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Object
    object:
      target:
        kind: Service
        name: sample-metrics-app
      metricName: http_requests
      targetValue: 100
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sample-metrics-app
  namespace: default
  annotations:
    traefik.frontend.rule.type: PathPrefixStrip
spec:
  rules:
  - http:
      paths:
      - path: /sample-app
        backend:
          serviceName: sample-metrics-app
          servicePort: 80
This application exposes a Prometheus metrics endpoint. The data at the endpoint is shown below; the http_requests_total metric is the custom metric we will use for scaling.
[root@iZwz99zrzfnfq8wllk0dvcZ manifests]# curl 172.16.1.160:8080/metrics
# HELP http_requests_total The amount of requests served by the server in total
# TYPE http_requests_total counter
http_requests_total 3955684
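Once Prometheus has scraped the endpoint and the adapter has relisted, the rate-computed metric should also be readable through the aggregated API. This is a sketch; the path mirrors the Object metric in the HPA above (http_requests on the sample-metrics-app Service in the default namespace):

kubectl get --raw \
  "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/services/sample-metrics-app/http_requests"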
Deploy the load-testing application
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: load-generator
  labels:
    app: load-generator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: load-generator
  template:
    metadata:
      labels:
        app: load-generator
    spec:
      containers:
      - name: load-generator
        image: busybox
        command:
        - "sh"
        - "-c"
        - "while true; do wget -q -O- http://sample-metrics-app.default.svc.cluster.local; done"
Check the HPA status and the scaling behavior; after a few minutes the Pods have scaled out successfully. (In the TARGETS column below, Kubernetes prints fractional metric values in milli-units, so 538133m means a current value of roughly 538 against the target of 100.)
$ kubectl get hpa
NAME                     REFERENCE                       TARGETS       MINPODS   MAXPODS   REPLICAS   AGE
php-apache               Deployment/php-apache           0%/50%        1         10        1          21d
sample-metrics-app-hpa   Deployment/sample-metrics-app   538133m/100   2         10        10         15h