Advanced Scheduling in Kubernetes
Oleg Chunikhin | CTO, Kublr
Introductions
Oleg Chunikhin
CTO, Kublr
• 20 years in software architecture & development
• Working w/ Kubernetes since its release in 2015
• Software architect behind Kublr, an enterprise-ready container management platform
• Twitter @olgch
Enterprise Kubernetes Needs
Developers:
• Self-service
• Compatible
• Conformant
• Configurable
• Open & Flexible
SRE/Ops/DevOps/SecOps:
• Governance
  • Org multi-tenancy
  • Single pane of glass
• Operations
  • Monitoring
  • Log collection
  • Image management
  • Identity management
• Security
• Reliability
• Performance
• Portability
@olgch; @kublr
[Diagram: Kublr platform. Operations: automation, ingress, custom clusters, infrastructure, logging, monitoring, observability, API, usage reporting; Security & Governance: RBAC, IAM, air gap, TLS certificate rotation, audit; plus storage, networking, container registry, CI/CD, and app management on top of infrastructure, container runtime, and Kubernetes]
What’s in the slides
• Kubernetes overview
• Scheduling algorithm
• Scheduling controls
• Advanced scheduling techniques
• Examples, use cases, and recommendations
Kubernetes | Nodes and Pods
[Diagram: Node 1 hosts Pod A-1 (10.0.0.3) with containers Cnt1 and Cnt2, and Pod B-1 (10.0.0.8) with container Cnt3; Node 2 hosts Pod A-2 (10.0.1.5) with containers Cnt1 and Cnt2]
Kubernetes | Container Orchestration
[Diagram: User and K8S Master API with scheduler(s) and controller(s); each node runs Kubelet and Docker]
1. It all starts empty.
2. Kubelet registers a node object in the master.
3. The user creates (unscheduled) Pod objects in the master.
4. The scheduler notices unscheduled pods...
5. ...identifies the best node to run them on...
6. ...and marks the pods as scheduled on the corresponding nodes.
7. Kubelet notices pods scheduled to its node...
8. ...starts the pods' containers...
9. ...and reports the pods as "running" to the master.
How does the scheduler find the best node to run pods?
Kubernetes | Scheduling Algorithm
For each pod that needs scheduling:
1. Filter nodes
2. Calculate node priorities
3. Schedule pod if possible
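The loop above can be sketched as a generic filter-then-prioritize function. This is a toy model, not the real kube-scheduler code; `filters` and `priorities` stand in for the predicate and priority functions described on the following slides, and the dict-based pod/node shapes are assumptions for illustration:

```python
# Toy model of the scheduling loop: filter nodes, then pick the best-scoring one.

def schedule_one(pod, nodes, filters, priorities):
    """Return the name of the best feasible node for `pod`, or None."""
    feasible = [n for n in nodes if all(f(pod, n) for f in filters)]
    if not feasible:
        return None  # pod remains Pending

    def score(node):
        # weighted sum of priority functions
        return sum(weight * p(pod, node) for p, weight in priorities)

    return max(feasible, key=score)["name"]

# Example predicate: does the node have enough free CPU (millicores)?
def fits_cpu(pod, node):
    return node["freeCpuMilli"] >= pod["cpuRequestMilli"]

# Example priority: prefer the least-utilized node.
def free_cpu(pod, node):
    return node["freeCpuMilli"]
```

With one predicate and one priority function this already reproduces the "filter, then rank" behavior described above.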
Kubernetes | Scheduling Algorithm
Volume filters
• Do the zones of the pod's requested volumes fit the node's zone?
• Can the node attach the volumes?
• Are there conflicts with already-mounted volumes?
• Are there additional volume topology constraints?
Filter pipeline: volume filters → resource filters → topology filters → prioritization
Kubernetes | Scheduling Algorithm
Resource filters
• Do the pod's requested resources (CPU, RAM, GPU, etc.) fit the node's available resources?
• Can the pod's requested ports be opened on the node?
• Is the node free of memory and disk pressure?
Kubernetes | Scheduling Algorithm
Topology filters
• Is the pod requested to run on this specific node?
• Are inter-pod affinity constraints satisfied?
• Does the node match the pod's node selector?
• Can the pod tolerate the node's taints?
Kubernetes | Scheduling Algorithm
Prioritize with weights for:
• Pod replicas distribution
• Least (or most) node utilization
• Balanced resource usage
• Inter-pod affinity priority
• Node affinity priority
• Taint toleration priority
Scheduling | Controlling Pod Destination
• Resource requirements
• Be aware of volumes
• Node constraints
• Affinity and anti-affinity
• Priorities and Priority Classes
• Scheduler configuration
• Custom / multiple schedulers
Scheduling Controlled | Resources
• CPU, RAM, other (GPU)
• Requests and limits
• Reserved resources
kind: Node
status:
  allocatable:
    cpu: "4"
    memory: 8070796Ki
    pods: "110"
  capacity:
    cpu: "4"
    memory: 8Gi
    pods: "110"

kind: Pod
spec:
  containers:
  - name: main
    resources:
      requests:
        cpu: 100m
        memory: 1Gi
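A container can also carry limits alongside requests; the deck's later advice is to keep requests equal to limits for non-elastic resources such as memory. A hedged sketch (the container name and values are illustrative, not from the original deck):

```yaml
kind: Pod
spec:
  containers:
  - name: main
    resources:
      requests:
        cpu: 100m
        memory: 1Gi
      limits:
        cpu: 500m        # CPU is elastic: excess usage is throttled
        memory: 1Gi      # memory limit == request avoids surprise OOM kills
```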
Scheduling Controlled | Volumes
• Request volumes in the right
zones
• Make sure node can attach
enough volumes
• Avoid volume location conflicts
• Use volume topology constraints
[Diagram: Pod A runs on Node 1 and Pod B on Node 2 with Volume 2 in zone A; Pod C, requesting a volume in zone B, is unschedulable]
[Diagrams: further volume cases, showing node volume-attachment limits and pods placed on the nodes holding their volumes]
Scheduling Controlled | Volumes

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  ...
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node
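Volume topology constraints can also be handled at provisioning time: a StorageClass with `volumeBindingMode: WaitForFirstConsumer` delays volume binding until a pod is scheduled, so the volume is created in the pod's zone. A sketch (the class name and provisioner choice are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware
provisioner: kubernetes.io/aws-ebs
volumeBindingMode: WaitForFirstConsumer   # bind only after the pod is scheduled
```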
Scheduling Controlled | Node Constraints
• Host constraints
• Labels and node selectors
• Taints and tolerations

[Diagram: Pod A pinned directly to Node 1]

kind: Node
metadata:
  name: node1

kind: Pod
spec:
  nodeName: node1
Scheduling Controlled | Node Constraints

[Diagram: Pod A with a node selector lands on Node 1, the only node labeled tier: backend]

kind: Node
metadata:
  labels:
    tier: backend

kind: Pod
spec:
  nodeSelector:
    tier: backend
Scheduling Controlled | Node Constraints

[Diagram: Node 1 is tainted; Pod A tolerates the taint and can be scheduled there, Pod B cannot]

kind: Node
spec:
  taints:
  - effect: NoSchedule
    key: error
    value: disk
    timeAdded: null

kind: Pod
spec:
  tolerations:
  - key: error
    value: disk
    operator: Equal
    effect: NoSchedule
Scheduling Controlled | Taints
Taints communicate node conditions
• Key – condition category
• Value – specific condition
• Operator – how a toleration matches the taint's value
  • Equal – value equality
  • Exists – key existence (value ignored)
• Effect
  • NoSchedule – filter at scheduling time
  • PreferNoSchedule – deprioritize at scheduling time
  • NoExecute – filter at scheduling time, evict if already executing
• tolerationSeconds – how long to tolerate a "NoExecute" taint

kind: Pod
spec:
  tolerations:
  - key: <taint key>
    value: <taint value>
    operator: <match operator>
    effect: <taint effect>
    tolerationSeconds: 60
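For instance, a pool of special-purpose nodes can be reserved with a taint that only selected pods tolerate. The `dedicated=gpu` key/value below is a hypothetical example, not from the original deck:

```yaml
kind: Node
spec:
  taints:
  - key: dedicated        # hypothetical taint reserving the node
    value: gpu
    effect: NoSchedule
---
kind: Pod
spec:
  tolerations:
  - key: dedicated
    operator: Exists      # tolerate any value of the "dedicated" key
    effect: NoSchedule
```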
Scheduling Controlled | Affinity
• Node affinity
• Inter-pod affinity
• Inter-pod anti-affinity
kind: Pod
spec:
  affinity:
    nodeAffinity: { ... }
    podAffinity: { ... }
    podAntiAffinity: { ... }
Scheduling Controlled | Node Affinity
Scope
• Preferred during scheduling, ignored during execution
• Required during scheduling, ignored during execution
kind: Pod
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 10
        preference: { <node selector term> }
      - ...
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - { <node selector term> }
        - ...
Interlude | Node Selector vs Selector Term
...
nodeSelector:
  <label 1 key>: <label 1 value>
...

...
<node selector term>:
  matchExpressions:
  - key: <label key>
    operator: In | NotIn | Exists | DoesNotExist | Gt | Lt
    values:
    - <label value 1>
    ...
...
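Putting the two forms together, a concrete node-affinity rule might require an instance type and prefer a zone. The label values below are illustrative; the keys are the standard well-known node labels:

```yaml
kind: Pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: node.kubernetes.io/instance-type
            operator: In
            values: [ "m5.xlarge", "m5.2xlarge" ]
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 10
        preference:
          matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values: [ "us-east-1a" ]
```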
Scheduling Controlled | Inter-pod Affinity
Scope
• Preferred during scheduling, ignored during execution
• Required during scheduling, ignored during execution
kind: Pod
spec:
  affinity:
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 10
        podAffinityTerm: { <pod affinity term> }
      - ...
      requiredDuringSchedulingIgnoredDuringExecution:
      - { <pod affinity term> }
      - ...
Scheduling Controlled | Inter-pod Anti-affinity
Scope
• Preferred during scheduling, ignored during execution
• Required during scheduling, ignored during execution
kind: Pod
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 10
        podAffinityTerm: { <pod affinity term> }
      - ...
      requiredDuringSchedulingIgnoredDuringExecution:
      - { <pod affinity term> }
      - ...
Scheduling Controlled | Pod Affinity Terms
• topologyKey – nodes’ label key defining co-location
• labelSelector and namespaces – select group of pods
<pod affinity term>:
  topologyKey: <topology label key>
  namespaces: [ <namespace>, ... ]
  labelSelector:
    matchLabels:
      <label key>: <label value>
      ...
    matchExpressions:
    - key: <label key>
      operator: In | NotIn | Exists | DoesNotExist
      values: [ <value 1>, ... ]
    ...
Scheduling Controlled | Affinity Example
affinity:
  topologyKey: tier
  labelSelector:
    matchLabels:
      group: a

[Diagram: four nodes labeled tier: a or tier: b; pods labeled group: a are co-located on nodes sharing the same tier label value]
Scheduling Controlled | Scheduler Configuration
• Algorithm Provider
• Scheduling Policies and Profiles (alpha)
• Scheduler WebHook
Default Scheduler | Algorithm Provider
kube-scheduler
--scheduler-name=default-scheduler
--algorithm-provider=DefaultProvider
--algorithm-provider=ClusterAutoscalerProvider
Default Scheduler | Custom Policy Config
kube-scheduler
--scheduler-name=default-scheduler
--policy-config-file=<file>
--use-legacy-policy-config=<true|false>
--policy-configmap=<config map name>
--policy-configmap-namespace=<config map ns>
Default Scheduler | Custom Policy Config
{
"kind" : "Policy",
"apiVersion" : "v1",
"predicates" : [
{"name" : "PodFitsHostPorts"},
...
{"name" : "HostName"}
],
"priorities" : [
{"name" : "LeastRequestedPriority", "weight" : 1},
...
{"name" : "EqualPriority", "weight" : 1}
],
"hardPodAffinitySymmetricWeight" : 10,
"alwaysCheckAllPredicates" : false
}
Default Scheduler | Scheduler WebHook
{
"kind" : "Policy",
"apiVersion" : "v1",
"predicates" : [...],
"priorities" : [...],
"extenders" : [{
"urlPrefix": "http://127.0.0.1:12346/scheduler",
"filterVerb": "filter",
"bindVerb": "bind",
"prioritizeVerb": "prioritize",
"weight": 5,
"enableHttps": false,
"nodeCacheCapable": false
}],
"hardPodAffinitySymmetricWeight" : 10,
"alwaysCheckAllPredicates" : false
}
Default Scheduler | Scheduler WebHook
func filter(pod, nodes) api.NodeList
func prioritize(pod, nodes) HostPriorityList
func bind(pod, node)
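A minimal sketch of the two read-only extender callbacks, operating on simplified dicts rather than real API objects (a real extender exchanges ExtenderArgs/ExtenderFilterResult JSON over HTTP; the "zone" label rule here is a hypothetical example policy):

```python
# Sketch of scheduler-extender filter/prioritize callbacks on plain dicts.

def filter_nodes(pod, nodes):
    """Keep only nodes in the zone the pod requests via a (hypothetical) label."""
    wanted = pod.get("metadata", {}).get("labels", {}).get("zone")
    if wanted is None:
        return nodes  # no constraint: every node passes
    return [n for n in nodes
            if n.get("metadata", {}).get("labels", {}).get("zone") == wanted]

def prioritize_nodes(pod, nodes):
    """Score nodes by free CPU; returns a HostPriorityList-like list of dicts."""
    return [{"Host": n["metadata"]["name"],
             "Score": n.get("status", {}).get("freeCpuMilli", 0) // 100}
            for n in nodes]
```

The default scheduler combines the extender's filtered node list and scores (scaled by the configured `weight`) with its own predicates and priorities.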
Scheduling Controlled | Multiple Schedulers
kind: Pod
metadata:
  name: pod1
spec:
  ...

kind: Pod
metadata:
  name: pod2
spec:
  schedulerName: my-scheduler
Scheduling Controlled | Custom Scheduler
Naive implementation
• In an infinite loop:
• Get list of Nodes: /api/v1/nodes
• Get list of Pods: /api/v1/pods
• Select Pods with
status.phase == Pending and
spec.schedulerName == our-name
• For each pod:
• Calculate target Node
• Create a new Binding object: POST /api/v1/bindings
apiVersion: v1
kind: Binding
metadata:
  namespace: default
  name: pod1
target:
  apiVersion: v1
  kind: Node
  name: node1
Scheduling Controlled | Custom Scheduler
Better implementation
• Watch Pods: /api/v1/pods
• On each Pod event:
• Process if the Pod with
status.phase == Pending and
spec.schedulerName == our-name
• Get list of Nodes: /api/v1/nodes
• Calculate target Node
• Create a new Binding object: POST /api/v1/bindings
Scheduling Controlled | Custom Scheduler
Even better implementation
• Watch Nodes: /api/v1/nodes
• On each Node event:
• Update Node cache
• Watch Pods: /api/v1/pods
• On each Pod event:
• Process if the Pod with
status.phase == Pending and
spec.schedulerName == our-name
• Calculate target Node
• Create a new Binding object: POST /api/v1/bindings
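All three variants reduce to the same core step: picking a target node for a pending pod, then POSTing a Binding. A toy selection function might look like this (simplified dicts; a real scheduler would read pods and nodes from /api/v1/pods and /api/v1/nodes, and the least-utilized-node rule is just one possible policy):

```python
def choose_node(pod, nodes):
    """Pick the node with the most free CPU that can fit the pod's request.

    `pod` and `nodes` are simplified dicts; in a real scheduler they would be
    objects watched from the API server, and the result would be posted
    as a Binding object to /api/v1/bindings.
    """
    request = pod.get("cpuRequestMilli", 0)
    feasible = [n for n in nodes if n["freeCpuMilli"] >= request]
    if not feasible:
        return None  # no fit: the pod stays Pending
    best = max(feasible, key=lambda n: n["freeCpuMilli"])
    return best["name"]
```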
Use Case | Distributed Pods
apiVersion: v1
kind: Pod
metadata:
  name: db-replica-3
  labels:
    component: db
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - topologyKey: kubernetes.io/hostname
        labelSelector:
          matchExpressions:
          - key: component
            operator: In
            values: [ "db" ]

[Diagram: db-replica-1, db-replica-2, and db-replica-3 each land on a different node]
Use Case | Co-located Pods
apiVersion: v1
kind: Pod
metadata:
  name: app-replica-1
  labels:
    component: web
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - topologyKey: kubernetes.io/hostname
        labelSelector:
          matchExpressions:
          - key: component
            operator: In
            values: [ "db" ]

[Diagram: app-replica-1 is scheduled onto the node already running db-replica-1]
Use Case | Reliable Service on Spot Nodes
• "fixed" node group: expensive, more reliable, fixed size; labeled nodeGroup: fixed
• "spot" node group: inexpensive, unreliable, auto-scaled; labeled nodeGroup: spot
• Scheduling rules:
  • At least two pods on "fixed" nodes
  • All other pods favor "spot" nodes
• Implemented with a custom scheduler or multiple Deployments
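The multiple-Deployments approach can be sketched as two Deployments of the same pod template: one pinned to the "fixed" group, one preferring "spot". The names, `app: web` labels, replica counts, and image are assumptions for illustration:

```yaml
# Two replicas pinned to the reliable "fixed" group
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-fixed
spec:
  replicas: 2
  selector:
    matchLabels: { app: web }
  template:
    metadata:
      labels: { app: web }
    spec:
      nodeSelector:
        nodeGroup: fixed
      containers:
      - name: web
        image: example/web:1.0
---
# Remaining replicas prefer (but do not require) "spot" nodes
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-spot
spec:
  replicas: 8
  selector:
    matchLabels: { app: web }
  template:
    metadata:
      labels: { app: web }
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 10
            preference:
              matchExpressions:
              - key: nodeGroup
                operator: In
                values: [ "spot" ]
      containers:
      - name: web
        image: example/web:1.0
```

A preferred (rather than required) affinity on the spot Deployment lets its pods fall back to fixed nodes when the spot pool is exhausted.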
Scheduling | Dos and Don’ts
DO
• Prefer scheduling based on resources and pod affinity over node constraints and node affinity
• Specify resource requests
• Keep requests == limits
  • Especially for non-elastic resources
  • Memory is non-elastic!
• Safeguard against missing resource specs
  • Namespace default limits
  • Admission controllers
• Plan the architecture of localized volumes (EBS, local)

DON'T
• ... assign pods to nodes directly
• ... use node affinity or node constraints
• ... run pods with no resource requests
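"Namespace default limits" refers to a LimitRange object, which fills in requests and limits for containers that omit them. A hedged sketch (the namespace and values are assumptions):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-resources
  namespace: team-a
spec:
  limits:
  - type: Container
    defaultRequest:      # applied when a container omits requests
      cpu: 100m
      memory: 256Mi
    default:             # applied when a container omits limits
      cpu: 500m
      memory: 256Mi
```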
Scheduling | Key Takeaways
• Scheduling filters and priorities
• Resource requests and availability
• Inter-pod affinity/anti-affinity
• Volume localization (availability zones)
• Node labels and selectors
• Node affinity/anti-affinity
• Node taints and tolerations
• Scheduler(s) tweaking and customization
Next steps
• Pod priority, preemption, and eviction
• Pod Overhead
• Scheduler Profiles
• Scheduler performance considerations
• Admission Controllers and dynamic admission control
• Dynamic policies and OPA
References
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
https://kubernetes.io/docs/concepts/configuration/resource-bin-packing/
https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/
https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/
https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
https://kubernetes.io/docs/reference/scheduling/policies/
https://kubernetes.io/docs/reference/scheduling/profiles/
https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/scheduler_extender.md
Q&A
Oleg Chunikhin
CTO
oleg@kublr.com
@olgch
Kublr | kublr.com
@kublr
Sign up for our newsletter at kublr.com
 
Kubernetes 101
Kubernetes 101Kubernetes 101
Kubernetes 101
 
Setting up CI/CD Pipeline with Kubernetes and Kublr step by-step
Setting up CI/CD Pipeline with Kubernetes and Kublr step by-stepSetting up CI/CD Pipeline with Kubernetes and Kublr step by-step
Setting up CI/CD Pipeline with Kubernetes and Kublr step by-step
 
Canary Releases on Kubernetes with Spinnaker, Istio, & Prometheus (2020)
Canary Releases on Kubernetes with Spinnaker, Istio, & Prometheus (2020)Canary Releases on Kubernetes with Spinnaker, Istio, & Prometheus (2020)
Canary Releases on Kubernetes with Spinnaker, Istio, & Prometheus (2020)
 
How to Run Kubernetes in Restrictive Environments
How to Run Kubernetes in Restrictive EnvironmentsHow to Run Kubernetes in Restrictive Environments
How to Run Kubernetes in Restrictive Environments
 
Kubernetes as Infrastructure Abstraction
Kubernetes as Infrastructure AbstractionKubernetes as Infrastructure Abstraction
Kubernetes as Infrastructure Abstraction
 
Centralizing Kubernetes Management in Restrictive Environments
Centralizing Kubernetes Management in Restrictive EnvironmentsCentralizing Kubernetes Management in Restrictive Environments
Centralizing Kubernetes Management in Restrictive Environments
 
Canary Releases on Kubernetes w/ Spinnaker, Istio, and Prometheus
Canary Releases on Kubernetes w/ Spinnaker, Istio, and PrometheusCanary Releases on Kubernetes w/ Spinnaker, Istio, and Prometheus
Canary Releases on Kubernetes w/ Spinnaker, Istio, and Prometheus
 
Kubernetes data science and machine learning
Kubernetes data science and machine learningKubernetes data science and machine learning
Kubernetes data science and machine learning
 

Recently uploaded

Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 
Benefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksBenefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksSoftradix Technologies
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonetsnaman860154
 
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your BudgetHyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your BudgetEnjoy Anytime
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountPuma Security, LLC
 
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsSnow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsHyundai Motor Group
 
SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024Scott Keck-Warren
 
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | DelhiFULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhisoniya singh
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreternaman860154
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):comworks
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Alan Dix
 
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationBeyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationSafe Software
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure servicePooja Nehwal
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitecturePixlogix Infotech
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking MenDelhi Call girls
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsMemoori
 
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking MenDelhi Call girls
 
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...HostedbyConfluent
 

Recently uploaded (20)

Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 
Benefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksBenefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other Frameworks
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonets
 
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your BudgetHyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsSnow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
 
SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024
 
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | DelhiFULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreter
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
 
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationBeyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC Architecture
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial Buildings
 
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
 
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
 

Advanced Scheduling in Kubernetes

  • 1. Advanced Scheduling in Kubernetes Oleg Chunikhin | CTO, Kublr
  • 2. Introductions Oleg Chunikhin CTO, Kublr • 20 years in software architecture & development • Working w/ Kubernetes since its release in 2015 • Software architect behind Kublr—an enterprise ready container management platform • Twitter @olgch
  • 3. Enterprise Kubernetes Needs Developers SRE/Ops/DevOps/SecOps • Self-service • Compatible • Conformant • Configurable • Open & Flexible • Governance • Org multi-tenancy • Single pane of glass • Operations • Monitoring • Log collection • Image management • Identity management • Security • Reliability • Performance • Portability @olgch; @kublr
  • 4. @olgch; @kublr Automation Ingress Custom Clusters Infrastructure Logging Monitoring Observability API Usage Reporting RBAC IAM Air Gap TLS Certificate Rotation Audit Storage Networking Container Registry CI / CD App Mgmt Infrastructure Container Runtime Kubernetes OPERATIONS SECURITY & GOVERNANCE
  • 5. What’s in the slides • Kubernetes overview • Scheduling algorithm • Scheduling controls • Advanced scheduling techniques • Examples, use cases, and recommendations @olgch; @kublr
  • 6. Kubernetes | Nodes and Pods Node 2 Pod A-2 10.0.1.5 Cnt1 Cnt2 Node 1 Pod A-1 10.0.0.3 Cnt1 Cnt2 Pod B-1 10.0.0.8 Cnt3 @olgch; @kublr
  • 7. Kubernetes | Container Orchestration Node 1 Docker Kubelet K8S Master API K8S Scheduler(s) Pod A Pod B K8S Controller(s) User Node 1 Pod A Pod B Node 2 Pod C @olgch; @kublr
  • 8. Kubernetes | Container Orchestration Node 1 Docker Kubelet K8S Master API K8S Scheduler(s) K8S Controller(s) User It all starts empty @olgch; @kublr
  • 9. Kubernetes | Container Orchestration Node 1 Docker Kubelet K8S Master API K8S Scheduler(s) K8S Controller(s) User Kubelet registers node object in master @olgch; @kublr
  • 10. Kubernetes | Container Orchestration Node 1 Docker Kubelet K8S Master API K8S Scheduler(s) K8S Controller(s) User Node 1 Node 2 @olgch; @kublr
  • 11. Kubernetes | Container Orchestration Node 1 Docker Kubelet K8S Master API K8S Scheduler(s) K8S Controller(s) User Node 1 Node 2 User creates (unscheduled) Pod object(s) in Master Pod A Pod B Pod C @olgch; @kublr
  • 12. Kubernetes | Container Orchestration Node 1 Docker Kubelet K8S Master API K8S Scheduler(s) K8S Controller(s) User Node 1 Node 2 Scheduler notices unscheduled Pods ... Pod A Pod B Pod C @olgch; @kublr
  • 13. Kubernetes | Container Orchestration Node 1 Docker Kubelet K8S Master API K8S Scheduler(s) K8S Controller(s) User Node 1 Node 2 …identifies the best node to run them on… Pod A Pod B Pod C @olgch; @kublr
  • 14. Kubernetes | Container Orchestration Node 1 Docker Kubelet K8S Master API K8S Scheduler(s) K8S Controller(s) User Node 1 Node 2 …and marks the pods as scheduled on corresponding nodes. Pod A Pod B Pod C @olgch; @kublr
  • 15. Kubernetes | Container Orchestration Node 1 Docker Kubelet K8S Master API K8S Scheduler(s) K8S Controller(s) User Node 1 Node 2 Kubelet notices pods scheduled to its nodes… Pod A Pod B Pod C @olgch; @kublr
  • 16. Kubernetes | Container Orchestration Node 1 Docker Kubelet K8S Master API K8S Scheduler(s) K8S Controller(s) User Node 1 Node 2 … starts pods’ containers. Pod A Pod B Pod C Pod A Pod B @olgch; @kublr
  • 17. Kubernetes | Container Orchestration Node 1 Docker Kubelet K8S Master API K8S Scheduler(s) K8S Controller(s) User Node 1 Node 2 … and reports pods as “running” to master. Pod A Pod B Pod C Pod A Pod B @olgch; @kublr
  • 18. Kubernetes | Container Orchestration Node 1 Docker Kubelet K8S Master API K8S Scheduler(s) K8S Controller(s) User Node 1 Node 2 Scheduler finds the best node to run pods. HOW? Pod A Pod B Pod C Pod A Pod B @olgch; @kublr
  • 19. Kubernetes | Scheduling Algorithm For each pod that needs scheduling: 1. Filter nodes 2. Calculate node priorities 3. Schedule the pod if possible @olgch; @kublr
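The three steps above can be sketched as a loop. This is an illustrative Python sketch of the filter-then-score pattern, not the actual kube-scheduler code; the predicate and priority functions stand in for the real filter and priority plugins:

```python
def schedule(pod, nodes, predicates, priorities):
    """Pick the best node for a pod: filter, score, choose."""
    # 1. Filter: keep only nodes that pass every predicate
    #    (volume, resource, and topology filters).
    feasible = [n for n in nodes if all(p(pod, n) for p in predicates)]
    if not feasible:
        return None  # no fit: the pod stays Pending

    # 2. Prioritize: each priority function returns a score for a node,
    #    scaled by its configured weight; scores are summed.
    def score(node):
        return sum(w * f(pod, node) for w, f in priorities)

    # 3. Schedule: bind the pod to the highest-scoring node.
    return max(feasible, key=score)
```

For example, with a single resource-fit predicate and a "least requested" style priority, a pod requesting 1 CPU lands on the node with the most free CPU, and a pod requesting more CPU than any node has stays unscheduled.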
  • 20. Kubernetes | Scheduling Algorithm Volume filters • Do the pod's requested volumes' zones fit the node's zone? • Can the node attach the volumes? • Are there mounted volume conflicts? • Are there additional volume topology constraints? Volume filters Resource filters Topology filters Prioritization @olgch; @kublr
  • 21. Kubernetes | Scheduling Algorithm Resource filters • Do the pod's requested resources (CPU, RAM, GPU, etc.) fit the node's available resources? • Can the pod's requested ports be opened on the node? • Is the node free of memory or disk pressure? Volume filters Resource filters Topology filters Prioritization @olgch; @kublr
  • 22. Kubernetes | Scheduling Algorithm Topology filters • Is the pod requested to run on this node? • Are there inter-pod affinity constraints? • Does the node match the pod's node selector? • Can the pod tolerate the node's taints? Volume filters Resource filters Topology filters Prioritization @olgch; @kublr
  • 23. Kubernetes | Scheduling Algorithm Prioritize with weights for: • Pod replicas distribution • Least (or most) node utilization • Balanced resource usage • Inter-pod affinity priority • Node affinity priority • Taint toleration priority Volume filters Resource filters Topology filters Prioritization @olgch; @kublr
  • 24. Scheduling | Controlling Pods Destination • Resource requirements • Be aware of volumes • Node constraints • Affinity and anti-affinity • Priorities and Priority Classes • Scheduler configuration • Custom / multiple schedulers @olgch; @kublr
  • 25. Scheduling Controlled | Resources • CPU, RAM, other (GPU) • Requests and limits • Reserved resources kind: Node status: allocatable: cpu: "4" memory: 8070796Ki pods: "110" capacity: cpu: "4" memory: 8Gi pods: "110" kind: Pod spec: containers: - name: main resources: requests: cpu: 100m memory: 1Gi @olgch; @kublr
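Written out as a full manifest, a container that declares both requests (used by the scheduler for filtering) and limits (enforced at runtime) might look like this; the pod name and image are arbitrary placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: main
    image: nginx:1.25
    resources:
      requests:     # scheduler filters nodes against these
        cpu: 100m
        memory: 1Gi
      limits:       # kubelet/runtime enforce these at execution time
        cpu: 500m
        memory: 1Gi
```

Keeping requests equal to limits for memory (as recommended later in the deck) avoids overcommitting a non-elastic resource.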
  • 26. Scheduling Controlled | Volumes • Request volumes in the right zones • Make sure node can attach enough volumes • Avoid volume location conflicts • Use volume topology constraints Node 1 Pod A Node 2 Volume 2 Pod B Unschedulable Zone A Pod C Requested Volume Zone B @olgch; @kublr
  • 27. Scheduling Controlled | Volumes • Request volumes in the right zones • Make sure node can attach enough volumes • Avoid volume location conflicts • Use volume topology constraints Node 1 Pod A Volume 2Pod B Pod C Requested Volume Volume 1 @olgch; @kublr
  • 28. Scheduling Controlled | Volumes • Request volumes in the right zones • Make sure node can attach enough volumes • Avoid volume location conflicts • Use volume topology constraints Node 1 Volume 1Pod A Node 2 Volume 2Pod B Pod C @olgch; @kublr
  • 29. Scheduling Controlled | Volumes • Request volumes in the right zones • Make sure node can attach enough volumes • Avoid volume location conflicts • Use volume topology constraints apiVersion: v1 kind: PersistentVolume metadata: name: pv spec: ... nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node @olgch; @kublr
  • 30. Scheduling Controlled | Constraints • Host constraints • Labels and node selectors • Taints and tolerations Node 1Pod A kind: Pod spec: nodeName: node1 kind: Node metadata: name: node1 @olgch; @kublr
  • 31. Scheduling Controlled | Node Constraints • Host constraints • Labels and node selectors • Taints and tolerations Node 1 Pod A Node 2 Node 3 label: tier: backend kind: Node metadata: labels: tier: backend kind: Pod spec: nodeSelector: tier: backend @olgch; @kublr
  • 32. Scheduling Controlled | Node Constraints • Host constraints • Labels and node selectors • Taints and tolerations kind: Pod spec: tolerations: - key: error value: disk operator: Equal effect: NoExecute tolerationSeconds: 60 kind: Node spec: taints: - effect: NoSchedule key: error value: disk timeAdded: null Pod B Node 1 tainted Pod A tolerate @olgch; @kublr
  • 33. Scheduling Controlled | Taints Taints communicate node conditions • Key – condition category • Value – specific condition • Operator – value wildcard • Equal – value equality • Exists – key existence • Effect • NoSchedule – filter at scheduling time • PreferNoSchedule – prioritize at scheduling time • NoExecute – filter at scheduling time, evict if executing • TolerationSeconds – time to tolerate “NoExecute” taint kind: Pod spec: tolerations: - key: <taint key> value: <taint value> operator: <match operator> effect: <taint effect> tolerationSeconds: 60 @olgch; @kublr
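Putting the taint fields together: a toleration matching the `error=disk` taint from the previous slide could be declared as below (illustrative fragment; the taint itself would typically be applied with `kubectl taint nodes node1 error=disk:NoExecute`):

```yaml
apiVersion: v1
kind: Pod
spec:
  tolerations:
  - key: error
    operator: Equal        # value must match exactly; Exists matches any value
    value: disk
    effect: NoExecute
    tolerationSeconds: 60  # pod is evicted 60s after the taint appears
```

Without this toleration, a `NoExecute` taint both filters the node at scheduling time and evicts any pods already running on it.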
  • 34. Scheduling Controlled | Affinity • Node affinity • Inter-pod affinity • Inter-pod anti-affinity kind: Pod spec: affinity: nodeAffinity: { ... } podAffinity: { ... } podAntiAffinity: { ... } @olgch; @kublr
  • 35. Scheduling Controlled | Node Affinity Scope • Preferred during scheduling, ignored during execution • Required during scheduling, ignored during execution kind: Pod spec: affinity: nodeAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 10 preference: { <node selector term> } - ... requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - { <node selector term> } - ... @olgch; @kublr
  • 36. Interlude | Node Selector vs Selector Term ... nodeSelector: <label 1 key>: <label 1 value> ... ... <node selector term>: matchExpressions: - key: <label key> operator: In | NotIn | Exists | DoesNotExist | Gt | Lt values: - <label value 1> ... ... @olgch; @kublr
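Concretely, the same placement rule can be expressed in both forms; the `disktype: ssd` label here is a hypothetical example:

```yaml
# Simple equality form: every listed label must match exactly.
spec:
  nodeSelector:
    disktype: ssd
---
# Expression form, used inside a node selector term; supports
# In, NotIn, Exists, DoesNotExist, Gt, and Lt operators.
matchExpressions:
- key: disktype
  operator: In
  values: ["ssd"]
```

The expression form is strictly more powerful: `nodeSelector` can only express AND-ed equality, while selector terms can express set membership, absence, and numeric comparisons.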
  • 37. Scheduling Controlled | Inter-pod Affinity Scope • Preferred during scheduling, ignored during execution • Required during scheduling, ignored during execution kind: Pod spec: affinity: podAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 10 podAffinityTerm: { <pod affinity term> } - ... requiredDuringSchedulingIgnoredDuringExecution: - { <pod affinity term> } - ... @olgch; @kublr
  • 38. Scheduling Controlled | Inter-pod Anti-affinity Scope • Preferred during scheduling, ignored during execution • Required during scheduling, ignored during execution kind: Pod spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 10 podAffinityTerm: { <pod affinity term> } - ... requiredDuringSchedulingIgnoredDuringExecution: - { <pod affinity term> } - ... @olgch; @kublr
  • 39. Scheduling Controlled | Pod Affinity Terms • topologyKey – nodes’ label key defining co-location • labelSelector and namespaces – select group of pods <pod affinity term>: topologyKey: <topology label key> namespaces: [ <namespace>, ... ] labelSelector: matchLabels: <label key>: <label value> ... matchExpressions: - key: <label key> operator: In | NotIn | Exists | DoesNotExist values: [ <value 1>, ... ] ... @olgch; @kublr
  • 40. Scheduling Controlled | Affinity Example affinity: topologyKey: tier labelSelector: matchLabels: group: a Node 1 tier: a Pod B group: a Node 3 tier: b tier: a Node 4 tier: b tier: b Pod B group: a Node 1 tier: a @olgch; @kublr
  • 41. Scheduling Controlled | Scheduler Configuration • Algorithm Provider • Scheduling Policies and Profiles (alpha) • Scheduler WebHook @olgch; @kublr
  • 42. Default Scheduler | Algorithm Provider kube-scheduler --scheduler-name=default-scheduler --algorithm-provider=DefaultProvider --algorithm-provider=ClusterAutoscalerProvider @olgch; @kublr
  • 43. Default Scheduler | Custom Policy Config kube-scheduler --scheduler-name=default-scheduler --policy-config-file=<file> --use-legacy-policy-config=<true|false> --policy-configmap=<config map name> --policy-configmap-namespace=<config map ns> @olgch; @kublr
  • 44. Default Scheduler | Custom Policy Config { "kind" : "Policy", "apiVersion" : "v1", "predicates" : [ {"name" : "PodFitsHostPorts"}, ... {"name" : "HostName"} ], "priorities" : [ {"name" : "LeastRequestedPriority", "weight" : 1}, ... {"name" : "EqualPriority", "weight" : 1} ], "hardPodAffinitySymmetricWeight" : 10, "alwaysCheckAllPredicates" : false } @olgch; @kublr
  • 45. Default Scheduler | Scheduler WebHook { "kind" : "Policy", "apiVersion" : "v1", "predicates" : [...], "priorities" : [...], "extenders" : [{ "urlPrefix": "http://127.0.0.1:12346/scheduler", "filterVerb": "filter", "bindVerb": "bind", "prioritizeVerb": "prioritize", "weight": 5, "enableHttps": false, "nodeCacheCapable": false }], "hardPodAffinitySymmetricWeight" : 10, "alwaysCheckAllPredicates" : false } @olgch; @kublr
  • 46. Default Scheduler | Scheduler WebHook func fiter(pod, nodes) api.NodeList func prioritize(pod, nodes) HostPriorityList func bind(pod, node) @olgch; @kublr
  • 47. Scheduling Controlled | Multiple Schedulers kind: Pod Metadata: name: pod2 spec: schedulerName: my-scheduler kind: Pod Metadata: name: pod1 spec: ... @olgch; @kublr
  • 48. Scheduling Controlled | Custom Scheduler Naive implementation • In an infinite loop: • Get list of Nodes: /api/v1/nodes • Get list of Pods: /api/v1/pods • Select Pods with status.phase == Pending and spec.schedulerName == our-name • For each pod: • Calculate target Node • Create a new Binding object: POST /api/v1/bindings apiVersion: v1 kind: Binding Metadata: namespace: default name: pod1 target: apiVersion: v1 kind: Node name: node1 @olgch; @kublr
  • 49. Scheduling Controlled | Custom Scheduler Better implementation • Watch Pods: /api/v1/pods • On each Pod event: • Process if the Pod with status.phase == Pending and spec.schedulerName == our-name • Get list of Nodes: /api/v1/nodes • Calculate target Node • Create a new Binding object: POST /api/v1/bindings apiVersion: v1 kind: Binding Metadata: namespace: default name: pod1 target: apiVersion: v1 kind: Node name: node1 @olgch; @kublr
  • 50. Scheduling Controlled | Custom Scheduler Even better implementation • Watch Nodes: /api/v1/nodes • On each Node event: • Update Node cache • Watch Pods: /api/v1/pods • On each Pod event: • Process if the Pod with status.phase == Pending and spec.schedulerName == our-name • Calculate target Node • Create a new Binding object: POST /api/v1/bindings apiVersion: v1 kind: Binding Metadata: namespace: default name: pod1 target: apiVersion: v1 kind: Node name: node1 @olgch; @kublr
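The naive loop above can be sketched in Python against the raw API paths shown on the slide. This is an illustrative sketch only: it assumes the API is reachable unauthenticated on localhost:8001 (e.g. via `kubectl proxy`), uses only the standard library, and leaves the actual placement policy (`pick_node`) to the caller. Binding is done through the pod's `binding` subresource:

```python
import json
import urllib.request

API = "http://localhost:8001"       # assumption: `kubectl proxy` endpoint
SCHEDULER_NAME = "my-scheduler"     # must match the pod's spec.schedulerName

def is_ours(pod, scheduler_name=SCHEDULER_NAME):
    """Pending pods that name our scheduler are ours to place."""
    return (pod["status"].get("phase") == "Pending"
            and pod["spec"].get("schedulerName") == scheduler_name)

def make_binding(pod, node_name):
    """Binding object assigning the pod to the chosen node."""
    return {
        "apiVersion": "v1",
        "kind": "Binding",
        "metadata": {"name": pod["metadata"]["name"],
                     "namespace": pod["metadata"]["namespace"]},
        "target": {"apiVersion": "v1", "kind": "Node", "name": node_name},
    }

def _get(path):
    with urllib.request.urlopen(API + path) as resp:
        return json.load(resp)

def schedule_once(pick_node):
    """One pass of the naive loop: list nodes and pods, select, bind."""
    nodes = _get("/api/v1/nodes")["items"]
    pods = _get("/api/v1/pods")["items"]
    for pod in filter(is_ours, pods):
        node = pick_node(pod, nodes)  # placement policy goes here
        ns, name = pod["metadata"]["namespace"], pod["metadata"]["name"]
        body = json.dumps(make_binding(pod, node)).encode()
        req = urllib.request.Request(
            f"{API}/api/v1/namespaces/{ns}/pods/{name}/binding",
            data=body, method="POST",
            headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)
```

A production scheduler would use watches instead of polling, as the "better" and "even better" variants on these slides describe, and would need proper authentication against the API server.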
  • 51. Use Case | Distributed Pods apiVersion: v1 kind: Pod metadata: name: db-replica-3 labels: component: db spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - topologyKey: kubernetes.io/hostname labelSelector: matchExpressions: - key: component operator: In values: [ "db" ] Node 2 db-replica-2 Node 1 Node 3 db-replica-1 db-replica-3 @olgch; @kublr
  • 52. Use Case | Co-located Pods apiVersion: v1 kind: Pod metadata: name: app-replica-1 labels: component: web spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - topologyKey: kubernetes.io/hostname labelSelector: matchExpressions: - key: component operator: In values: [ "db" ] Node 2 db-replica-2 Node 1 Node 3 db-replica-1 app-replica-1 @olgch; @kublr
  • 53. Use Case | Reliable Service on Spot Nodes • “fixed” node group Expensive, more reliable, fixed number Tagged with label nodeGroup: fixed • “spot” node group Inexpensive, unreliable, auto-scaled Tagged with label nodeGroup: spot • Scheduling rules: • At least two pods on “fixed” nodes • All other pods favor “spot” nodes • Custom scheduler or multiple Deployments @olgch; @kublr
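One way to express the "at least two pods on fixed nodes, the rest favor spot" rule without a custom scheduler is the multiple-Deployments option mentioned above. This sketch assumes the node labels from the slide; names, images, and replica counts are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: svc-fixed
spec:
  replicas: 2                      # the guaranteed pods
  selector: {matchLabels: {app: svc}}
  template:
    metadata: {labels: {app: svc}}
    spec:
      nodeSelector:
        nodeGroup: fixed           # hard requirement: reliable nodes only
      containers:
      - {name: main, image: my-svc:1.0}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: svc-spot
spec:
  replicas: 4                      # elastic capacity
  selector: {matchLabels: {app: svc}}
  template:
    metadata: {labels: {app: svc}}
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 10             # soft preference: spot if available
            preference:
              matchExpressions:
              - {key: nodeGroup, operator: In, values: [spot]}
      containers:
      - {name: main, image: my-svc:1.0}
```

Both Deployments share the `app: svc` label, so a single Service can load-balance across the fixed and spot replicas.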
  • 54. Scheduling | Dos and Don’ts DO • Prefer scheduling based on resources and pod affinity to node constraints and affinity • Specify resource requests • Keep requests == limits • Especially for non-elastic resources • Memory is non-elastic! • Safeguard against missing resource specs • Namespace default limits • Admission controllers • Plan architecture of localized volumes (EBS, local) DON’T • ... assign pod to nodes directly • ... use node-affinity or node constraints • ... use pods with no resource requests @olgch; @kublr
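"Namespace default limits" in the safeguards above refers to a LimitRange object, which backfills requests and limits for containers that omit them. A minimal illustrative example (namespace and values are placeholders):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: defaults
  namespace: my-namespace
spec:
  limits:
  - type: Container
    defaultRequest:    # injected when a container omits resources.requests
      cpu: 100m
      memory: 256Mi
    default:           # injected when a container omits resources.limits
      cpu: 500m
      memory: 256Mi
```

With this in place, even pods submitted without resource specs get requests the scheduler can reason about, avoiding the "no resource requests" anti-pattern from the don'ts list.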
  • 55. Scheduling | Key Takeaways • Scheduling filters and priorities • Resource requests and availability • Inter-pod affinity/anti-affinity • Volumes localization (AZ) • Node labels and selectors • Node affinity/anti-affinity • Node taints and tolerations • Scheduler(s) tweaking and customization @olgch; @kublr
  • 56. Next steps • Pod priority, preemption, and eviction • Pod Overhead • Scheduler Profiles • Scheduler performance considerations • Admission Controllers and dynamic admission control • Dynamic policies and OPA @olgch; @kublr
  • 56. References https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ https://kubernetes.io/docs/concepts/configuration/resource-bin-packing/ https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/ https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/ https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/ https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/ https://kubernetes.io/docs/reference/scheduling/policies/ https://kubernetes.io/docs/reference/scheduling/profiles/ https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/scheduler_extender.md @olgch; @kublr
  • 59. Oleg Chunikhin CTO oleg@kublr.com @olgch Kublr | kublr.com @kublr Signup for our newsletter at kublr.com

Editor's Notes

  1. “If you like something you hear today, please tweet at me @olgch”
  2. I will spend a few minutes reintroducing docker and kubernetes architecture concepts… before we dig into kubernetes scheduling. Talking about scheduling, I’ll try to explain capabilities, … controls available to cluster users and administrators, … and extension points We’ll also look at a couple of examples and… Some recommendations
  3. Nodes register with the master Pods are assigned to nodes Pod IP addresses are allocated from the overlay network pool given to the node at registration Containers in a pod are started together Containers in a pod share the network address space and data volumes The pod defines the overall life cycle of its containers The pod life cycle itself is very simple: a pod cannot be moved or modified, it must be re-created
  4. Master API maintains the general picture – vision of desired and current known state Master relies on other components – controllers, kubelet – to update current known state User modifies to-be state and reads current state Controllers “clarify” to-be state Kubelet perform actions to achieve to-be state, and reports current state Scheduler is just one of the controllers, responsible for assigning unassigned pods to specific nodes
  5. First there was nothing
  6. Master API maintains the general picture User modifies to-be state and reads current state Controllers “clarify” to-be state Kubelet perform actions to achieve to-be state, and reports current state Scheduler is just one of the controllers, responsible for assigning unassigned pods to specific nodes
  16. Does the pod request new volumes? Can they be created in a zone where they can be attached to the node? If the requested volumes already exist, can they be attached to the node? If the volumes are already attached/mounted, can they be mounted on this node? Are there any other user-specified constraints?
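The "can the volume be created where the pod lands" question is what delayed volume binding addresses: with `volumeBindingMode: WaitForFirstConsumer`, provisioning waits until a pod using the claim is scheduled, so the volume is created in that node's zone. A hedged sketch (the class name is illustrative):

```yaml
# Illustrative StorageClass: volume provisioning is delayed until a pod
# using the claim is scheduled, so the volume is created in the zone of
# the node the scheduler picked, not in an arbitrary zone
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: wait-for-consumer
provisioner: kubernetes.io/aws-ebs
volumeBindingMode: WaitForFirstConsumer
```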
  17. This most often happens in AWS, where an EBS volume can only be attached to instances in the same AZ where the volume is located.
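This zonal constraint is expressed to the scheduler as node affinity on the PersistentVolume. A hedged sketch (the PV name, volume ID, and zone are hypothetical):

```yaml
# Illustrative PV for an EBS volume in us-east-1a; the scheduler will only
# place pods using this volume on nodes in that zone
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ebs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce"]
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0
    fsType: ext4
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values: ["us-east-1a"]
```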
  18. This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey: For PreferredDuringScheduling pod anti-affinity, empty topologyKey is interpreted as "all topologies" ("all topologies" here means all the topologyKeys indicated by scheduler command-line argument --failure-domains); For affinity and for RequiredDuringScheduling pod anti-affinity, empty topologyKey is not allowed.
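The co-location rule described in this note can be sketched as a pod manifest (pod name, label, and image are illustrative): anti-affinity with `topologyKey: kubernetes.io/hostname` forbids two `app=web` pods from sharing a node.

```yaml
# Illustrative anti-affinity: do not co-locate this pod with any other pod
# labeled app=web, where "co-located" means on a node with the same value
# of the kubernetes.io/hostname label (i.e. the same node)
apiVersion: v1
kind: Pod
metadata:
  name: web-1
  labels:
    app: web
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values: ["web"]
          topologyKey: kubernetes.io/hostname
  containers:
    - name: web
      image: nginx
```

Swapping the topologyKey for a zone label would instead spread the pods across zones rather than across individual nodes.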
  20. A unified application delivery and operations platform was wanted: monitoring, logs, security, multiple environments, etc. Where the project comes from; company overview. Kubernetes as a solution: a standardized delivery platform. Kubernetes is great for managing containers, but who manages Kubernetes? How do you streamline monitoring and log collection across multiple Kubernetes clusters?