Kubernetes is a great tool for running (Docker) containers in a clustered production environment. When deploying to production often, we need fully automated blue-green deployments, which make it possible to deploy without any downtime. We also need to handle external HTTP requests and SSL offloading, which requires integration with a load balancer such as HAProxy. Another concern is (semi-)automatic scaling of the Kubernetes cluster itself when running in a cloud environment, e.g. partially scaling down the cluster at night.
In this technical deep dive you will learn how to set up Kubernetes together with other open source components to achieve a production-ready environment that takes code from git commit to production without downtime.
8. Pod
(diagram: a pod running an nginx container next to a web files container)
• May contain multiple containers
• Lifecycle of these containers is bound together
• Containers in a pod see each other on localhost
• Env vars for services:
REDIS_SERVICE_HOST=10.201.159.165
REDIS_PORT_6379_TCP_PORT=6379
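A pod like the one in the diagram can be written as a manifest such as the sketch below. The image names are placeholders; the point is that both containers share the pod's network namespace, so nginx can reach the web app on localhost.

```yaml
# Hypothetical pod manifest with two containers in one pod.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: nginx
      image: nginx            # serves/proxies on port 80
      ports:
        - containerPort: 80
    - name: webapp
      image: myregistry/webapp:1.0   # placeholder application image
      ports:
        - containerPort: 8080        # nginx reaches this via localhost:8080
```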
9. Networking
• We run many pods on a single machine
• Pods may expose the same ports
• How to avoid conflicts!?
24. HTTP load balancing
• Expose Kubernetes services to the outside world
• SSL offloading
• Gzip
• Redirects
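These three responsibilities map to a handful of HAProxy directives. A sketch, with the certificate path and backend address as placeholders:

```
# Sketch of an HAProxy frontend doing SSL offloading, gzip compression,
# and an HTTP-to-HTTPS redirect. Paths and addresses are placeholders.
frontend https-in
    bind *:443 ssl crt /etc/haproxy/certs/site.pem
    bind *:80
    http-request redirect scheme https unless { ssl_fc }
    compression algo gzip
    compression type text/html text/css application/json
    default_backend kubernetes-services

backend kubernetes-services
    balance roundrobin
    server svc1 10.200.0.10:8080 check   # placeholder service address
```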
25. Kubernetes ingress
• Built-in support for GCE load balancers
• Future support for extensions (not quite there yet)
• What about your own environment!?
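On GCE, the built-in controller works from an Ingress resource; a sketch in the extensions/v1beta1 API of the time (host and service names are placeholders):

```yaml
# Hypothetical Ingress routing a hostname to a service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webshop
spec:
  rules:
    - host: shop.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: webshop
              servicePort: 8080
```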
26. Using a custom load balancer
• Use HAProxy in front of Kubernetes
• Configure HAProxy dynamically
• The same works for nginx, Apache…
32. Using the API
A REST API gives access to everything:
• /api/v1/namespaces/mynamespace/pods
• /api/v1/namespaces/mynamespace/services
• /api/v1/namespaces/mynamespace/replicationcontrollers
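Because it is plain REST plus JSON, these endpoints can also be consumed without any client library. A sketch that decodes a pod list; the payload here is an inline sample mirroring the v1 PodList shape, where in practice it would come from an HTTP GET against the API server:

```go
// Decode the JSON returned by /api/v1/namespaces/<ns>/pods.
package main

import (
	"encoding/json"
	"fmt"
)

// PodList models only the fields we care about; the real objects
// carry far more metadata.
type PodList struct {
	Kind  string `json:"kind"`
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
	} `json:"items"`
}

// Sample payload; pod names are made up.
const sample = `{
  "kind": "PodList",
  "items": [
    {"metadata": {"name": "webshop-x1f2"}},
    {"metadata": {"name": "redis-9k3a"}}
  ]
}`

// podNames extracts the pod names from a PodList JSON document.
func podNames(data []byte) ([]string, error) {
	var list PodList
	if err := json.Unmarshal(data, &list); err != nil {
		return nil, err
	}
	names := make([]string, 0, len(list.Items))
	for _, item := range list.Items {
		names = append(names, item.Metadata.Name)
	}
	return names, nil
}

func main() {
	names, err := podNames([]byte(sample))
	if err != nil {
		panic(err)
	}
	for _, n := range names {
		fmt.Println(n)
	}
}
```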
33. Client libraries
• Amdatu Kubernetes OSGi
• Amdatu Kubernetes Go
• Clojure, Node, Python etc…
Java:
kubernetes.listNodes().subscribe(nodes -> {
    nodes.getItems()
         .forEach(System.out::println);
});

Go:
pods, err := kubernetes.ListPods(TEST_NAMESPACE)
if err != nil {
    panic(err)
}
for _, pod := range pods.Items {
    log.Println(pod.Name)
}
63. How to scale a Kubernetes cluster?
(diagram: a single Kubernetes node running many pods)
64. How to scale a Kubernetes cluster?
(diagram: three Kubernetes nodes with pods spread across them)
65. How to scale a Kubernetes cluster?
(diagram: three nodes; the pods of one node relocated onto the others)
66. How to scale a Kubernetes cluster?
(diagram: two remaining nodes running all of the pods)
67. How to scale a Kubernetes cluster?
(diagram: three empty Kubernetes nodes)
68. Scaling up
1. Use AWS API to start new nodes (ScalingGroup)
2. Cloud-init to register node to Kubernetes cluster
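Step 1 is typically a single AWS call, e.g. `aws autoscaling set-desired-capacity --auto-scaling-group-name k8s-nodes --desired-capacity 5`. Step 2 can be handled with instance user data; a cloud-config sketch, where the master address, paths, and kubelet flags are placeholders and depend on the Kubernetes version in use:

```yaml
#cloud-config
# Sketch: register a freshly booted instance as a Kubernetes node.
runcmd:
  - systemctl start docker
  - /opt/kubernetes/bin/kubelet
      --api-servers=https://k8s-master.internal:6443
      --register-node=true
```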
69. Scaling down
1. Set node to “unschedulable”
2. Drain node (relocate pods to other machines)
3. Remove node from Kubernetes
4. Use AWS API to terminate nodes (ScalingGroup)
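The four steps above map onto a short command sequence. A sketch, where the node name and instance id are placeholders and `DRY_RUN=1` (the default here) only prints the commands instead of running them:

```shell
#!/usr/bin/env bash
# Sketch of the scale-down sequence: cordon, drain, remove, terminate.
set -u

NODE="${NODE:-worker-1}"                 # hypothetical node name
INSTANCE_ID="${INSTANCE_ID:-i-0abc123}"  # hypothetical EC2 instance id
DRY_RUN="${DRY_RUN:-1}"

CMDS=""
run() {
  CMDS="${CMDS}$*"$'\n'
  if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}

# 1. Mark the node unschedulable so no new pods are placed on it
run kubectl cordon "$NODE"
# 2. Drain: evict running pods so they are rescheduled elsewhere
run kubectl drain "$NODE" --grace-period=60
# 3. Remove the node object from the Kubernetes cluster
run kubectl delete node "$NODE"
# 4. Terminate the instance and shrink the AWS scaling group
run aws autoscaling terminate-instance-in-auto-scaling-group \
  --instance-id "$INSTANCE_ID" --should-decrement-desired-capacity
```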
70. Amdatu scalerd
• CLI to add/remove nodes to a cluster
• Node draining to prevent downtime
• Scheduled automated scaling
75. Datastores in Kubernetes
• Kubernetes does have persistent volumes
• Most data stores require lots of tuning
• … don't auto-scale
• … require manual steps to configure a cluster