KUBERNETES WITH KEDA &
AZURE FUNCTIONS
A WAY TO HAVE SERVERLESS ON KUBERNETES
EDUARD TOMÀS
@EIXIMENIS
#netcoreconf
Who am I?
• Principal Tech Lead @ PlainConcepts BCN
• Proud father
• Beer drinker
• Keyboard basher, and proud of it
• Microsoft MVP since 2012
INDEX
• How can I run Azure Functions on Kubernetes?
• What is KEDA
• Why KEDA
• Some examples
WHY SERVERLESS ON KUBERNETES?
1. Easier adoption of hybrid / multicloud
2. Less lock-in
3. Single platform to focus on
4. Unified operations with other workloads
5. More h/w control (i.e. GPU enabled clusters)
6. Run AFs alongside other apps (access to service mesh,
custom shared environment, …)
SERVERLESS VS KUBERNETES
• Not a fight really
• You can run serverless workloads on Kubernetes
• Also there are some serverless kubernetes implementations (AKS virtual nodes, EKS Fargate)
• So, you can have
• Serverless on Kubernetes
• A serverless Kubernetes
• And serverless on a serverless Kubernetes!
THE FUTURE OF K8S IS SERVERLESS
• Serverless container infrastructure already exists (ACI,
Fargate, …)
• It needs to be orchestrated in some way
• The Kubernetes orchestration API is the current de-facto
standard
• In the near future we will see a mix of nodes and serverless
infrastructure orchestrated under the k8s API
• The Kubernetes community is aware of this and its API is
evolving to support these scenarios
https://thenewstack.io/the-future-of-kubernetes-is-serverless/
CAN I RUN AZURE FUNCTIONS IN KUBERNETES?
• If you can dockerize them, you can run them in
Kubernetes.
• func init --docker-only
• Let’s see it!
DEPLOYING ON KUBERNETES
• You only need a deployment to run the Azure Function
• A Secret to store the secrets (connection strings)
• And your AF is up and running! :)
• Again: let’s see it!
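A minimal sketch of such a deployment, assuming a dockerized function; the image name, secret value, and the `AzureWebJobsStorage` key are illustrative placeholders:

```yaml
# Secret holding the connection strings the function needs
apiVersion: v1
kind: Secret
metadata:
  name: my-function-secrets
type: Opaque
stringData:
  AzureWebJobsStorage: "<storage-connection-string>"   # placeholder
---
# Deployment running the dockerized Azure Function
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-function
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-function
  template:
    metadata:
      labels:
        app: my-function
    spec:
      containers:
        - name: my-function
          image: myregistry.azurecr.io/my-function:v1   # placeholder image
          envFrom:
            - secretRef:
                name: my-function-secrets   # injects the connection strings as env vars
```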
So, running Azure Functions on Kubernetes is not
really the issue…
The real issue is… scaling them appropriately
KUBERNETES (POD) AUTOSCALING 101
To auto scale a deployment you need two things:
1. A metric on which to scale (like %CPU)
2. An HPA bound to that metric
KUBERNETES (POD) AUTOSCALING 101
• HPA pulls metrics exposed by the metrics server
• Out of the box (OOB), the metrics server exposes only CPU & Mem
• So, OOB you can auto scale an Azure Function
based on CPU usage or memory consumption
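The OOB path can be sketched as a plain CPU-based HPA; this assumes a Deployment named `my-function` with CPU requests set (the API version depends on your cluster, here `autoscaling/v2`):

```yaml
# HPA scaling the function Deployment on average CPU utilization
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-function-cpu
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-function        # assumed Deployment name
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% of requested CPU
```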
AUTOSCALING AZURE FUNCTIONS
• Usually using CPU or Mem to scale an AF is not the best
strategy
• You are focusing on symptoms rather than causes
• You should scale based on these causes
• pending messages to read
• pending records to process
• …
So, KEDA is not about running Azure Functions on
Kubernetes
KEDA is about scaling them
Kubernetes Event Driven Autoscaler
WHAT EXACTLY DOES KEDA DO?
• KEDA is able to read external metrics…
• … exposing them to the metrics server…
• … allowing the use of an HPA to scale on those
metrics.
WHAT EXACTLY DOES KEDA DO?
• KEDA does not auto scale your Azure Functions
• But it provides everything the HPA needs to auto
scale them based on external metrics.
• Using KEDA you can auto scale your AFs based on the
real causes, not on the symptoms
HOW DOES KEDA DO ITS JOB?
A scaler watches for
external triggers (like a
new message in a
specific queue)
HOW DOES KEDA DO ITS JOB?
The trigger updates a
metric which is exposed
through the metrics
server.
HOW DOES KEDA DO ITS JOB?
A standard HPA bound
to this metric scales the
AF deployment if
needed
THE KEDA SCALERS
• Currently KEDA provides several scalers for different
technologies
• More scalers are added over time
• https://keda.sh/docs/2.0/scalers/
THE SCALEDOBJECT CRD
• To “plug” a scaler into Kubernetes we use the
ScaledObject CRD provided by KEDA
• Each ScaledObject configures one scaler to look for
external events
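A ScaledObject sketch for an Azure Storage queue trigger (KEDA v2 syntax); the queue name, target Deployment, and env var are illustrative:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-function-scaler
spec:
  scaleTargetRef:
    name: my-function          # Deployment to scale (assumed name)
  minReplicaCount: 0           # scale to zero when the queue is empty
  maxReplicaCount: 10
  triggers:
    - type: azure-queue
      metadata:
        queueName: orders                       # illustrative queue name
        queueLength: "5"                        # target messages per replica
        connectionFromEnv: AzureWebJobsStorage  # env var with the connection string
```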
AUTO SCALING USING KEDA
• So, I have an AF deployed to Kubernetes that I want to auto scale
• I need to create a ScaledObject to get the metric on which to
scale (like pending messages on a SQS queue)
• Then I need to create an HPA bound to this metric
• And the magic will happen!
• Let’s see it!
SCALING JOBS
• Scaling jobs is an alternative approach to run FaaS-like
workloads
• Instead of processing N events in a single pod, a new job
(which ends up creating a pod) is scheduled for each event
• Once again… Let’s see it!
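The job-per-event approach can be sketched with KEDA’s ScaledJob CRD; the image, queue name, and env var are illustrative placeholders:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: my-function-jobs
spec:
  jobTargetRef:
    template:                # pod template for each spawned job
      spec:
        containers:
          - name: my-function
            image: myregistry.azurecr.io/my-function:v1  # placeholder image
        restartPolicy: Never
  maxReplicaCount: 20        # cap on concurrent jobs
  triggers:
    - type: azure-queue
      metadata:
        queueName: orders                       # illustrative queue name
        connectionFromEnv: AzureWebJobsStorage  # env var with the connection string
```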
HPA COULD BE THE MOST POWERFUL VILLAIN
• Beware of workloads scaled
through the HPA
• If a scale down is triggered, the
HPA will just… snap its fingers
• A pod can be killed while
processing!
DEFENDING PODS FROM HPA
1. Using pod lifecycles
1. Ask for “additional” time when Kubernetes wants to
kill the pod.
2. Works, but it’s… ugly (the pod could remain in
Terminating for a long time)
2. Using jobs
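The “additional time” trick above can be sketched as a preStop hook plus a long grace period; the drain script and timings are hypothetical:

```yaml
# Pod spec fragment: delay termination so in-flight work can finish
spec:
  terminationGracePeriodSeconds: 600   # allow up to 10 minutes to drain
  containers:
    - name: my-function
      image: myregistry.azurecr.io/my-function:v1  # placeholder image
      lifecycle:
        preStop:
          exec:
            # hypothetical script that blocks until the current message is processed
            command: ["/bin/sh", "-c", "/app/wait-for-drain.sh"]
```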
THANKS!
HAVE A WONDERFUL DAY AND STAY SAFE!!!
EDUARD TOMÀS
@EIXIMENIS

CollabDays 2020 Barcelona - Serverless Kubernetes with KEDA
