
Autoscaling in Kubernetes


Autoscaling of workloads in the Kubernetes environment: a slide deck about Pod and Node autoscaling and the machinery that makes it happen, with a few recommendations for implementing Pod and Node autoscaling.

Published in: Technology


  1. Autoscaling in Kubernetes: an implementation primer. ContainerConf 2018, Bangalore
  2. Hrishikesh Deodhar, Director of Engineering, InfraCloud Technologies. Containers | DevOps | Cloud | Kubernetes
  3. Auto of Scale ● Autoscaling: why & what? ● What to scale in Kubernetes ○ Pods ○ Nodes ● Pod scaling ○ Metrics Server ○ HPA controller ○ Monitoring pipeline ● Node scaling ○ Kubernetes Autoscaler ○ Escalator
  4. As explained to a 5-year-old. [Diagram (image source elided): user requests arriving at your current VM/Pod/Container, which is at the capacity of a single instance; a spare instance is added so that the load can be shared.]
  5. Autoscaling: what? ● Horizontally scale ○ applications to meet user demand ○ nodes to meet infrastructure demand (of applications)
  6. Autoscaling: why? ● Match actual usage to provisioned capacity ● Use the elasticity of the cloud effectively ● Optimize cost
  7. Pod Autoscaling: let's start with a demo
  8. Pod Autoscaling (what just happened earlier...). [Architecture diagram: the kubelet/cAdvisor on each node expose resource metrics from Pods; the Metrics Server aggregates them; Prometheus and the Prometheus Adapter supply custom metrics; the Horizontal Pod Autoscaler controller reads these metrics and resizes the Deployment's ReplicaSet.]
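The demo pipeline on this slide is driven by an HPA object like the one below; this is a minimal sketch, and the Deployment name `web`, replica bounds, and CPU target are illustrative assumptions (2018-era clusters used `autoscaling/v2beta1`; current clusters use `autoscaling/v2`):

```yaml
# Minimal HPA sketch: scales a hypothetical "web" Deployment on the
# average CPU utilization reported by the Metrics Server.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web        # illustrative target
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above ~70% average CPU
```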
  9. Node Autoscaling
  10. Node Autoscaling ● Kubernetes Autoscaler ( ● Escalator ( We will talk about the Kubernetes Autoscaler
  11. Kubernetes Autoscaler: basics ● A controller inside the Kubernetes cluster ● Increases cluster size when: ○ Pods are in the pending state due to insufficient resources ● Decreases cluster size when: ○ Cluster resource consumption is low for a sufficient duration
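What counts as "low" consumption and "sufficient" duration is tunable on the cluster-autoscaler binary itself. The sketch below shows the relevant flags as container args; the image tag is illustrative, and the values shown are the upstream defaults:

```yaml
# Sketch of cluster-autoscaler container args that control scale-down.
containers:
  - name: cluster-autoscaler
    image: k8s.gcr.io/cluster-autoscaler:v1.2.2    # version is illustrative
    command:
      - ./cluster-autoscaler
      - --scale-down-utilization-threshold=0.5   # node counts as "low usage" below 50%
      - --scale-down-unneeded-time=10m           # must stay unneeded this long before removal
      - --scale-down-delay-after-add=10m         # cool-down after a scale-up
```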
  12. Kubernetes Autoscaler: safety first! Won't evict nodes if: ● the PodDisruptionBudget is restrictive ● a Pod cannot be moved because of affinity/node-selector rules ● kube-system pods are running on the node ● Pods use local storage ● Pods carry the annotation "": "false"
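Two of these safeguards are driven by objects you supply. A sketch of both follows; the names and labels are hypothetical, and the annotation key, which did not survive the slide export, is the one documented upstream by the Cluster Autoscaler project:

```yaml
# A restrictive PodDisruptionBudget blocks node drain (labels are illustrative).
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: web
---
# Per-pod opt-out from autoscaler eviction (annotation key per upstream docs;
# it was elided in the slide text).
apiVersion: v1
kind: Pod
metadata:
  name: keep-me
  annotations:
    cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
spec:
  containers:
    - name: app
      image: nginx   # illustrative
```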
  13. Clusters can be a complicated business! ● Scaling happens at the node pool level ● Can be done across AZs ● "Expanders" can be used for different strategies [Diagram: Node Pool 1 (CPU-intensive), Node Pool 2 (memory-intensive), and a GPU pool for ML/DL, spread across Availability Zones 1-3.]
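In the autoscaler's configuration, each node pool becomes a `--nodes=min:max:name` group, and the expander strategy is a flag. A sketch, with made-up group names:

```yaml
# One autoscaled group per node pool, plus an expander strategy.
command:
  - ./cluster-autoscaler
  - --nodes=1:10:cpu-intensive-pool   # min:max:group-name (names are hypothetical)
  - --nodes=1:6:mem-intensive-pool
  - --nodes=0:4:gpu-pool
  - --expander=least-waste            # alternatives: random, most-pods, price, priority
```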
  14. HPA & Autoscaler: a marriage made in heaven. HPA scales Pods: based on the HPA definition, more pods are scheduled and some of them go into the pending state. Autoscaler kicks in: the cluster autoscaler adds nodes for the pending pods, and the pods start running. Pods & nodes scale down: after the load goes down, pods are evicted; this leaves nodes under-utilized, and the unneeded nodes are removed. HPA and cluster scaling work together.
  15. Recommendations from a real-world implementation
  16. Scaling speed: the delay between two consecutive scale-up/scale-down operations in HPA is configured at the cluster level (the current upscale delay is 3m). Configure it based on how fast you need to scale. Similar controls exist for the node autoscaler.
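In the Kubernetes versions current at the time, these delays were kube-controller-manager flags, which is why they apply cluster-wide; newer releases replace them with a per-HPA `behavior` stanza. A sketch of the 2018-era flags (the downscale value is illustrative; the upscale value matches the slide):

```yaml
# kube-controller-manager args controlling HPA reaction speed (2018-era flags).
- --horizontal-pod-autoscaler-upscale-delay=3m     # wait between consecutive scale-ups
- --horizontal-pod-autoscaler-downscale-delay=5m   # wait between consecutive scale-downs
```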
  17. Scaling metric: every workload is different; some should be scaled on CPU, some on the number of messages in a queue, and some on a metric outside the application. Choose your "scaling metric" carefully!
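Scaling on queue depth, for example, uses an External metric in the HPA. This is a sketch; the metric name `queue_messages_ready` and the target value are hypothetical and depend on what your metrics adapter exposes:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: worker     # illustrative consumer Deployment
  minReplicas: 1
  maxReplicas: 20
  metrics:
    - type: External
      external:
        metric:
          name: queue_messages_ready   # hypothetical metric from your adapter
        target:
          type: AverageValue
          averageValue: "30"           # aim for ~30 messages per worker pod
```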
  18. Monitoring adapters: if you are using a commercial monitoring tool, you will have to route its metrics into the metrics pipeline, so check that an adapter from the monitoring tool to the custom metrics API exists. You can also build the pipeline so that scaling does not depend on an outage of a SaaS monitoring tool!
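With the Prometheus Adapter shown earlier, the routing is a rules file that maps Prometheus series to API metrics. A sketch, assuming a counter named `http_requests_total` exists in your Prometheus:

```yaml
# prometheus-adapter rule: expose http_requests_total as a per-second rate.
rules:
  - seriesQuery: 'http_requests_total{namespace!="",pod!=""}'
    resources:
      overrides:
        namespace: {resource: "namespace"}
        pod: {resource: "pod"}
    name:
      matches: "^(.*)_total$"
      as: "${1}_per_second"
    metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'
```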
  19. State & scaling: StatefulSet scaling is very different; you need to provision volumes, and depending on the underlying datastore you might need initial data bootstrapping.
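The reason is visible in the spec: each StatefulSet replica gets its own PersistentVolumeClaim from `volumeClaimTemplates`, so every scale-up provisions a fresh, empty volume, and the application may still need to bootstrap data onto it. Names, image, and sizes below are illustrative:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:15           # illustrative datastore
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:                # one PVC per replica, created on scale-up
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```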
  20. Further reading ● Metrics Server (
  21. Thank you! Demo code: