Kubernetes is an excellent container orchestration platform, but out of the box it can only autoscale applications on CPU and memory metrics. This talk introduces how the combination of KEDA and Azure Kubernetes Service can autoscale applications based on metrics from various sources, e.g. Prometheus, Kafka, etc.
▸ This type of scaling depends on human action to scale the deployment.
$ kubectl scale deployment/my-app --replicas=2
A similar operation can be performed for cluster autoscaling using the az CLI.
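As a sketch, manually scaling the node count of an AKS cluster with the az CLI might look like this (the resource group and cluster names are placeholders):

```shell
# Manually scale the cluster's default node pool to 3 nodes.
# "myResourceGroup" and "myAKSCluster" are placeholder names.
az aks scale \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 3
```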
▸ This type of scaling depends on system metrics to scale the deployment.
$ kubectl autoscale deployment/my-app --max=5 --cpu-percent=80
Limited by the metric types that can be used out of the box (CPU and memory).
Horizontal Pod Autoscaler (HPA)
▸ This type of scaling implements Virtual Kubelet in an AKS cluster.
▸ Azure Container Instances (ACI) are responsible for running extra
pods without any additional infrastructure required.
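As a sketch, wiring ACI into an existing AKS cluster is done by enabling the virtual nodes add-on; the resource group, cluster, and subnet names below are placeholders:

```shell
# Enable the virtual-node add-on, which deploys Virtual Kubelet so that
# extra pods can be scheduled onto Azure Container Instances.
# Resource group, cluster, and subnet names are placeholders.
az aks enable-addons \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --addons virtual-node \
    --subnet-name myVirtualNodeSubnet
```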
KEDA is a Kubernetes-based Event-Driven Autoscaler.
KEDA works alongside standard Kubernetes components like the Horizontal
Pod Autoscaler and can extend functionality without overwriting or
duplication.
Roles of the KEDA operator:
▸ Agent - KEDA activates and deactivates Kubernetes Deployments to
scale to and from zero when there are no events.
▸ Metrics - KEDA acts as a Kubernetes metrics server that exposes rich
event data like queue length or stream lag to the Horizontal Pod
Autoscaler to drive scale-out.
How does KEDA work?
ScaledObjects: Desired mapping
between an event source and the
Kubernetes Deployment to scale.
External trigger source:
Prometheus, Kafka, RabbitMQ, etc
Scaler: Detects whether a deployment
should be activated or deactivated,
and feeds custom metrics for a
specific event source.
Metrics adapter: Presents metrics
from external sources to the
Horizontal Pod Autoscaler
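Putting these pieces together, a minimal ScaledObject using a Prometheus trigger might look like the sketch below (the deployment name, Prometheus server address, query, and threshold are all illustrative):

```yaml
# A minimal ScaledObject sketch; names, URL, and query are placeholders.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-app-scaler
spec:
  scaleTargetRef:
    name: my-app            # the Deployment to scale
  minReplicaCount: 0        # allows scale-to-zero
  maxReplicaCount: 10
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring:9090
        query: sum(rate(http_requests_total[2m]))
        threshold: "100"
```

With a mapping like this, KEDA activates the Deployment from zero when the query result crosses the threshold and exposes the metric to the Horizontal Pod Autoscaler for further scale-out.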