Google Cloud Platform
What’s new in Kubernetes
Docker & Bay Area OpenSource meetup
February 16, 2016
Daniel Smith <dbsmith@google.com>
Senior Software Engineer
Google Cloud Platform
Kubernetes
Greek for “Helmsman”; also the root of the
words “governor” and “cybernetic”
• Runs and manages containers
• Inspired and informed by Google’s experiences
and internal systems
• Supports multiple cloud and bare-metal
environments
• Supports multiple container runtimes
• 100% Open source, written in Go
Manage applications, not machines
Google Cloud Platform
Google has been developing
and using containers to
manage applications for
over 10 years.
Images by Connie Zhou
Google Cloud Platform
Review: What’s old in Kubernetes?
Google Cloud Platform
The 10,000-foot view
(Diagram: users reach the cluster through the UI, CLI, and API; the master runs the apiserver, scheduler, controllers, and etcd; each node runs a kubelet.)
Google Cloud Platform
Pods
Google Cloud Platform
Pods
Small group of containers & volumes
Tightly coupled
The atom of scheduling & placement
Shared namespace
• share IP address & localhost
• share IPC, etc.
Managed lifecycle
• bound to a node, restart in place
• can die, cannot be reborn with same ID
Example: data puller & web server
(Diagram: a File Puller container and a Web Server container share a Volume inside one Pod; the puller fetches content from a Content Manager and the web server serves it to Consumers.)
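A minimal sketch of what such a two-container pod could look like (image names and paths are illustrative, not from the talk):

apiVersion: v1
kind: Pod
metadata:
  name: web-with-puller
spec:
  volumes:
  - name: content
    emptyDir: {}                # shared scratch space, lives as long as the pod
  containers:
  - name: file-puller
    image: example/file-puller  # hypothetical image that pulls content into /data
    volumeMounts:
    - name: content
      mountPath: /data
  - name: web-server
    image: nginx                # serves the pulled content over HTTP
    volumeMounts:
    - name: content
      mountPath: /usr/share/nginx/html
    ports:
    - containerPort: 80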
Google Cloud Platform
Volumes
Very similar to Docker’s concept
Pod scoped storage
Share the pod’s lifetime & fate
Support many types of volume plugins
• Empty dir (and tmpfs)
• Host path
• Git repository
• GCE Persistent Disk
• AWS Elastic Block Store
• Azure File Storage
• iSCSI
• Flocker
• NFS
• GlusterFS
• Ceph File and RBD
• Cinder
• FibreChannel
• Secret, ConfigMap, DownwardAPI
• Flex (exec a binary)
• ...
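For illustration, a hedged sketch of how a couple of these plugin types are declared in a pod spec (the GCE disk name is a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory            # tmpfs-backed empty dir
  - name: data
    gcePersistentDisk:
      pdName: my-data-disk      # placeholder GCE Persistent Disk name
      fsType: ext4
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: scratch
      mountPath: /scratch
    - name: data
      mountPath: /data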
Google Cloud Platform
ReplicationControllers
Google Cloud Platform
ReplicationControllers
A simple control loop
Runs out-of-process wrt API server
Has 1 job: ensure N copies of a pod
• if too few, start some
• if too many, kill some
• grouped by a selector
Cleanly layered on top of the core
• all access is by public APIs
Replicated pods are fungible
• No implied order or identity
ReplicationController
- name = “my-rc”
- selector = {“App”: “MyApp”}
- podTemplate = { ... }
- replicas = 4
(Diagram: the ReplicationController asks the API Server "How many?", hears "3", starts 1 more, gets an OK, and on the next check hears "4".)
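As a sketch (not from the slides), the "my-rc" object above could be written as a manifest like this, with the pod template abbreviated:

apiVersion: v1
kind: ReplicationController
metadata:
  name: my-rc
spec:
  replicas: 4
  selector:
    App: MyApp
  template:
    metadata:
      labels:
        App: MyApp              # must match the selector
    spec:
      containers:
      - name: myapp
        image: myapp:v1         # placeholder image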
Google Cloud Platform
Services
Google Cloud Platform
Services
A group of pods that work together
• grouped by a selector
Defines access policy
• “load balanced” or “headless”
Gets a stable virtual IP and port
• sometimes called the service portal
• also a DNS name
VIP is managed by kube-proxy
• watches all services
• updates iptables when backends change
Hides complexity - ideal for non-native apps
(Diagram: a Client connects to the Service's Virtual IP, which forwards to the backing pods.)
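A hedged sketch of a Service selecting the pods above (ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    App: MyApp                  # group pods by label
  ports:
  - port: 80                    # the stable VIP port
    targetPort: 8080            # the port the containers listen on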
Google Cloud Platform
External Services
Service IPs are only available inside the cluster
Need to receive traffic from “the outside world”
Builtin: Service “type”
• NodePort: expose on a port on every node
• LoadBalancer: provision a cloud load-balancer
DIY load-balancer solutions
• socat (for nodePort remapping)
• haproxy
• nginx
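Sketch of exposing a Service externally via its “type” (again illustrative, not from the slides):

apiVersion: v1
kind: Service
metadata:
  name: myapp-public
spec:
  type: LoadBalancer            # or NodePort to expose a port on every node
  selector:
    App: MyApp
  ports:
  - port: 80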
Google Cloud Platform
What’s new in Kubernetes?
Google Cloud Platform
Ingress (L7)
Services are assumed L3/L4
Lots of apps want HTTP/HTTPS
Ingress maps incoming traffic to backend
services
• by HTTP host headers
• by HTTP URL paths
HAProxy, NGINX, AWS and GCE
implementations in progress
Now with SSL!
Status: BETA in Kubernetes v1.2
(Diagram: a Client reaches api.company.com at 24.7.8.9; the Ingress API's URL Map routes http://api.company.com/foo to Service-foo at 10.0.0.1 and http://api.company.com/bar to Service-bar at 10.0.0.2.)
Ingress (L7)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: k8s.io
    http:
      paths:
      - path: /foo
        backend:
          serviceName: fooSvc
          servicePort: 80
      - path: /bar
        backend:
          serviceName: barSvc
          servicePort: 80
(Diagram: http://k8s.io/foo goes to fooSvc, http://k8s.io/bar goes to barSvc.)
Ingress (L7)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: asdf.io
    http:
      paths:
      - backend:
          serviceName: qwertySvc
          servicePort: 80
  - host: aoeu.io
    http:
      paths:
      - backend:
          serviceName: dvorakSvc
          servicePort: 80
(Diagram: http://asdf.io/* goes to qwertySvc, http://aoeu.io/* goes to dvorakSvc.)
Ingress (L7)
An Ingress Object is realized by an Ingress Controller:
● GCE
● HAProxy
● ...
Google Cloud Platform
kube-proxy
Google Cloud Platform
iptables kube-proxy
(Animation sequence: on Node X, kube-proxy watches the apiserver for services & endpoints. A "kubectl run ..." creates pods, which get scheduled onto nodes. A "kubectl expose ..." creates a new service; kube-proxy sees it and configures iptables rules for the service VIP. When the new endpoints appear, kube-proxy updates those rules to point at the backend pods. A Client connecting to the VIP is then redirected by iptables, in the kernel, straight to a backend pod.)
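To make that concrete, a rough, illustrative sketch (chain names and addresses are invented for this example) of the NAT rules kube-proxy programs for a VIP 10.0.0.10:80 backed by two pods:

-A KUBE-SERVICES -d 10.0.0.10/32 -p tcp --dport 80 -j KUBE-SVC-EXAMPLE
-A KUBE-SVC-EXAMPLE -m statistic --mode random --probability 0.5 -j KUBE-SEP-POD1
-A KUBE-SVC-EXAMPLE -j KUBE-SEP-POD2
-A KUBE-SEP-POD1 -p tcp -j DNAT --to-destination 10.244.1.5:80
-A KUBE-SEP-POD2 -p tcp -j DNAT --to-destination 10.244.2.7:80

Connections to the VIP are rewritten to a randomly chosen backend; no packet passes through the kube-proxy process itself.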
Google Cloud Platform
ConfigMaps (and Secrets)
Google Cloud Platform
ConfigMaps
Problem: how to manage app configuration
• ...without making overly-brittle container images
12-factor says config comes from the
environment
• Kubernetes is the environment
Manage config via the Kubernetes API
Inject config as a virtual volume into your Pods
• late-binding, live-updated (atomic)
• also available as env vars
Status: GA in Kubernetes v1.2
(Diagram: the ConfigMap lives in the API; it is delivered into the Pod on its node.)
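A hedged sketch of the pattern (names and keys are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  app.properties: |
    log-level=info
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  volumes:
  - name: config
    configMap:
      name: app-config          # mounted as files, updated live
  containers:
  - name: app
    image: myapp:v1             # placeholder image
    volumeMounts:
    - name: config
      mountPath: /etc/config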
Google Cloud Platform
Secrets
Problem: how to grant a pod access to a
secured something?
• don’t put secrets in the container image!
12-factor says config comes from the
environment
• Kubernetes is the environment
Manage secrets via the Kubernetes API
Inject secrets as virtual volumes into your Pods
• late-binding, tmpfs - never touches disk
• also available as env vars
(Diagram: the Secret lives in the API; it is delivered into the Pod on its node.)
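Similarly, a hedged sketch (name, key, and value are illustrative):

apiVersion: v1
kind: Secret
metadata:
  name: db-pass
type: Opaque
data:
  password: cGFzc3dvcmQ=        # base64 of "password", for illustration only
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  volumes:
  - name: creds
    secret:
      secretName: db-pass       # backed by tmpfs on the node
  containers:
  - name: app
    image: myapp:v1             # placeholder image
    volumeMounts:
    - name: creds
      mountPath: /etc/creds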
Google Cloud Platform
Rolling updates
Google Cloud Platform
Rolling Updates
ReplicationController
- replicas: 3
- selector:
- app: MyApp
- version: v1
Service
- app: MyApp
Google Cloud Platform
Rolling Updates
ReplicationController
- replicas: 3
- selector:
- app: MyApp
- version: v1
Service
- app: MyApp
# Update pods of frontend-v1 using new replication controller data in frontend-v2.json.
$ kubectl rolling-update frontend-v1 -f frontend-v2.json
# Update pods of frontend-v1 using JSON data passed into stdin.
$ cat frontend-v2.json | kubectl rolling-update frontend-v1 -f -
# Update the pods of frontend-v1 to frontend-v2 by just changing the image, and switching the
# name of the replication controller.
$ kubectl rolling-update frontend-v1 frontend-v2 --image=image:v2
# Update the pods of frontend by just changing the image, and keeping the old name
$ kubectl rolling-update frontend --image=image:v2
Google Cloud Platform
ReplicationController
- replicas: 3
- selector:
- app: MyApp
- version: v1
ReplicationController
- replicas: 0
- selector:
- app: MyApp
- version: v2
Service
- app: MyApp
Google Cloud Platform
ReplicationController
- replicas: 3
- selector:
- app: MyApp
- version: v1
ReplicationController
- replicas: 1
- selector:
- app: MyApp
- version: v2
Service
- app: MyApp
Google Cloud Platform
ReplicationController
- replicas: 2
- selector:
- app: MyApp
- version: v1
ReplicationController
- replicas: 1
- selector:
- app: MyApp
- version: v2
Service
- app: MyApp
Google Cloud Platform
ReplicationController
- replicas: 2
- selector:
- app: MyApp
- version: v1
ReplicationController
- replicas: 2
- selector:
- app: MyApp
- version: v2
Service
- app: MyApp
Google Cloud Platform
ReplicationController
- replicas: 1
- selector:
- app: MyApp
- version: v1
ReplicationController
- replicas: 2
- selector:
- app: MyApp
- version: v2
Service
- app: MyApp
Google Cloud Platform
ReplicationController
- replicas: 1
- selector:
- app: MyApp
- version: v1
ReplicationController
- replicas: 3
- selector:
- app: MyApp
- version: v2
Service
- app: MyApp
Google Cloud Platform
ReplicationController
- replicas: 0
- selector:
- app: MyApp
- version: v1
ReplicationController
- replicas: 3
- selector:
- app: MyApp
- version: v2
Service
- app: MyApp
Google Cloud Platform
ReplicationController
- replicas: 3
- selector:
- app: MyApp
- version: v2
Service
- app: MyApp
Google Cloud Platform
Deployments
Google Cloud Platform
Deployments
Rolling update is too imperative
Deployment manages RC changes for you
• stable object name
• updates are done server-side rather than client-side
• kubectl edit or kubectl apply is all you need
Aggregates stats
Can have multiple updates in flight
Status: BETA in Kubernetes v1.2
...
Google Cloud Platform
Deployments
...
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
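A hedged usage sketch (filename assumed): once that manifest is saved, kubectl drives the whole update:

$ kubectl apply -f nginx-deployment.yaml
$ kubectl edit deployment/nginx-deployment   # e.g. bump the image tag; the change rolls out server-side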
Google Cloud Platform
Jobs
Google Cloud Platform
Jobs
Run-to-completion, as opposed to run-forever
• Express parallelism vs. required completions
• Workflow: restart on failure
• Build/test: don’t restart on failure
Aggregates success/failure counts
Built for batch and big-data work
Status: GA in Kubernetes v1.2
...
apiVersion: extensions/v1beta1
kind: Job
metadata:
  name: ffmpeg
spec:
  selector:
    matchLabels:
      app: ffmpeg
  template:
    metadata:
      labels:
        app: ffmpeg
    spec:
      containers:
      - name: ffmpeg
        image: ffmpeg
      restartPolicy: OnFailure
Jobs
apiVersion: extensions/v1beta1
kind: Job
metadata:
  name: ffmpeg
spec:
  selector:
    matchLabels:
      app: ffmpeg
  # run 5 times before done
  completions: 5
  ...
Jobs
apiVersion: extensions/v1beta1
kind: Job
metadata:
  name: ffmpeg
spec:
  selector:
    matchLabels:
      app: ffmpeg
  # run 5 times before done
  completions: 5
  parallelism: 2
  ...
Jobs
Google Cloud Platform
DaemonSets
Google Cloud Platform
DaemonSets
Problem: how to run a Pod on every node
• or a subset of nodes
Similar to ReplicationController
• principle: do one thing, don’t overload
“Which nodes?” is a selector
Use familiar tools and patterns
Status: BETA in Kubernetes v1.2
(Diagram: one Pod running on every node.)
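A hedged sketch of a DaemonSet manifest (image and node label are placeholders):

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      nodeSelector:
        role: logging            # optional: only run on a subset of nodes
      containers:
      - name: agent
        image: example/log-agent # hypothetical per-node agent image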
Google Cloud Platform
Graceful Termination
Google Cloud Platform
Graceful Termination
Give pods time to clean up
• finish in-flight operations
• log state
• flush to disk
• 30 seconds by default
Catch SIGTERM, cleanup, exit ASAP
Pod status “Terminating”
Declarative: ‘DELETE’ manifests as an object
field in the API
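A hedged sketch of tuning the grace period in a pod spec (image is a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: graceful-app
spec:
  terminationGracePeriodSeconds: 60   # override the 30-second default
  containers:
  - name: app
    image: myapp:v1                   # should catch SIGTERM, clean up, exit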
Google Cloud Platform
HorizontalPodAutoscalers
Google Cloud Platform
HorizontalPodAutoscalers
Automatically scale ReplicationControllers to a
target utilization
• CPU utilization for now
• Probably more later
Operates within user-defined min/max bounds
Set it and forget it
Status: GA in Kubernetes v1.2
...
(Diagram: pod Stats feed the autoscaler, which resizes the ReplicationController.)
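One way to set this up (a sketch; the RC name and thresholds are illustrative):

$ kubectl autoscale rc my-rc --min=2 --max=10 --cpu-percent=80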
Google Cloud Platform
Cluster Auto-Scaling
Google Cloud Platform
Cluster Scaling
Add nodes when needed
• e.g. CPU usage too high
• nodes self-register with API server
Remove nodes when not needed
• e.g. CPU usage too low
Status: Works on GCE; other implementations needed
...
Google Cloud Platform
New and coming soon
• Cron (scheduled jobs)
• Custom metrics
• “Apply” a config (even more declarative)
• Interactive containers
• Bandwidth shaping
• Third-party API objects
• Scalability: 1000 nodes, 100+ pods/node
• Performance
• Machine-generated Go clients (less deps!)
• Volume usage stats
• Multi-zone (AZ) support
• Multi-scheduler support
• Node affinity and anti-affinity
• Multi-cluster federation
• API federation
• More volume types
• Private Docker registry
• External DNS integration
• Volume classes and auto-provisioning
• Node fencing
• DIY Cloud Provider plugins
• More container runtimes (e.g. Hyper)
• Better auth{n,z}
• Network policy (micro-segmentation)
• Big data integrations
• Device scheduling (e.g. GPUs)
Google Cloud Platform
Kubernetes status & plans
Open sourced in June, 2014
• v1.0 in July, 2015
• v1.1 in November, 2015
• v1.2 ... soon!
Google Container Engine (GKE)
• hosted Kubernetes - don’t think about cluster setup
PaaSes:
• Red Hat OpenShift, Deis, Stratos
Distros:
• CoreOS Tectonic, Mirantis Murano (OpenStack), Red Hat Atomic, Mesos
Hitting a ~3 month release cadence
Google Cloud Platform
The Goal: Read-write open source
Containers are a new way of working
Requires new concepts and new tools
Google has a lot of experience...
...but we are listening to users!
Your input does make a difference!
Google Cloud Platform
Kubernetes is Open
- open community
- open design
- open source
- open to ideas
http://kubernetes.io
https://github.com/kubernetes/kubernetes
slack: kubernetes
twitter: @kubernetesio
