Kubernetes automation in production

Kubernetes is a great tool for running (Docker) containers in a clustered production environment. When deploying to production often, we need fully automated blue-green deployments, which make it possible to deploy without any downtime. We also need to handle external HTTP requests and SSL offloading, which requires integration with a load balancer such as HAProxy. Another concern is (semi-)automatic scaling of the Kubernetes cluster itself when running in a cloud environment, for example partially scaling down the cluster at night.

In this technical deep dive you will learn how to set up Kubernetes together with other open source components to achieve a production-ready environment that takes code from git commit to production without downtime.

  1. 1. @pbakker#Kubernetes Kubernetes Automation Paul Bakker @pbakker paulbakker.io
  2. 2. @pbakker Paul Bakker Software architect at Luminis Technologies
  3. 3. @pbakker Paul Bakker Software architect at Luminis Technologies
  4. 4. Why Kubernetes • Run Docker in clusters • scheduling containers on machines • networking • storage • automation
  5. 5. The basics
  6. 6. [Diagram] The basics: a master node running the API server and an etcd cluster, plus worker nodes that each run pods of Docker containers
  7. 7. [Diagram] A replication controller tells the master how many pod replicas to run; the master schedules the pods onto the nodes
  8. 8. Pod • May contain multiple containers (e.g. an nginx container plus a web files container) • Lifecycle of these containers bound together • Containers in a pod see each other on localhost • Env vars for services: REDIS_SERVICE_HOST=10.201.159.165 REDIS_PORT_6379_TCP_PORT=6379
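A minimal sketch of such a two-container pod, not taken from the deck itself; the pod name and the web-files image are made up for illustration:

      apiVersion: v1
      kind: Pod
      metadata:
        name: web
      spec:
        containers:
        - name: nginx              # serves HTTP on port 80
          image: nginx
          ports:
          - containerPort: 80
        - name: web-files          # hypothetical sidecar holding the static content
          image: example/web-files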
  9. 9. Networking • We run many pods on a single machine • Pods may expose the same ports • How to avoid conflicts!?
  10. 10. Dynamic IP addresses • Each pod gets a virtual IP • Ports not shared with other pods
  11. 11. Services [Diagram] A service has a fixed, virtual IP address and routes to pods, which have dynamic IP addresses
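A minimal Service manifest for the nginx pods used later in the deck could look like this (a sketch; the port values are assumptions):

      apiVersion: v1
      kind: Service
      metadata:
        name: nginx
      spec:
        selector:
          app: nginx          # matches the label used in the replication controller below
        ports:
        - port: 80            # the fixed service port
          targetPort: 80      # the container port in the pods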
  12. 12. Multi component deployments • Each component deployed as a pod • Individually update and scale pods • Use services for component communication
  13. 13. Multi component deployments [Diagram] A frontend pod, backend service 1 pods, backend service 2 pods and a Redis pod, communicating through services
  14. 14. Multi component deployments [Diagram] The same picture: all pods and services together form the application
  15. 15. Multi component deployments [Diagram] The same picture: each pod with the service in front of it is one component / service
  16. 16. Namespaces [Diagram] Namespace A, Namespace B and Namespace C, each with their own pods, replication controllers and services
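Working with a namespace from kubectl is straightforward; a small sketch with current kubectl (the namespace name is just an example):

      kubectl create namespace team-a
      kubectl create -f my-rc.yml --namespace=team-a
      kubectl get pods --namespace=team-a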
  17. 17. kubectl kubectl create -f my-rc.yml kubectl create -f my-service.yml
  18. 18. apiVersion: v1
          kind: ReplicationController
          metadata:
            name: nginx
          spec:
            replicas: 3
            selector:
              app: nginx
            template:
              metadata:
                name: nginx
                labels:
                  app: nginx
              spec:
                containers:
                - name: nginx
                  image: nginx
                  ports:
                  - containerPort: 80
  19. 19. The same manifest, annotated: replicas answers "On how many nodes should this run?"
  20. 20. The same manifest, annotated: the containers section describes our Docker container (ports, storage needs, etc.)
  21. 21. The same manifest, annotated: labels (loosely) couple controllers, pods and services together
  22. 22. DEMO
  23. 23. HTTP Load balancing
  24. 24. HTTP load balancing • Expose Kubernetes services to the outside world • SSL offloading • Gzip • Redirects
  25. 25. Kubernetes ingress • Built-in support for GCE load balancers • Future support for extensions (not quite there yet) • What about your own environment!?
  26. 26. Using a custom load balancer • Use Ha-proxy in front of Kubernetes • Configure Ha-proxy dynamically • The same works for nginx, apache…
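As a rough idea of the static side of that configuration, a small HAProxy fragment covering SSL offloading, gzip and an HTTP-to-HTTPS redirect (the certificate path and backend name are made up for illustration):

      frontend http-in
          bind *:80
          redirect scheme https code 301
      frontend https-in
          bind *:443 ssl crt /etc/haproxy/certs/example.pem
          compression algo gzip
          compression type text/html text/css application/json
          default_backend k8s-service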
  27. 27. [Diagram] HTTPS traffic reaches ha-proxy on a load balancer node, which does SSL offloading and forwards to the service in front of the pods
  28. 28. [Diagram] The same picture with an AWS ELB in front of the ha-proxy load balancer node (SSL offloading)
  29. 29. [Diagram] The same picture: everything behind the ELB runs inside a virtual private network
  30. 30. How does ha-proxy know about our services? • Ha-proxy uses a static config file • Auto-generate it based on data in etcd • Confd
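confd is the glue here: it watches etcd and regenerates the ha-proxy config whenever the backend data changes. A sketch of what such a template resource could look like (the paths and key names are assumptions, not the exact setup from the talk):

      # /etc/confd/conf.d/haproxy.toml
      [template]
      src        = "haproxy.cfg.tmpl"
      dest       = "/etc/haproxy/haproxy.cfg"
      keys       = ["/proxy/frontends"]
      reload_cmd = "service haproxy reload"

      # haproxy.cfg.tmpl (fragment): one server line per value stored under the key
      backend k8s-service
      {{range getvs "/proxy/frontends/*"}}    server {{.}} check
      {{end}}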
  31. 31. Automation
  32. 32. Using the API • /v1/namespaces/mynamespace/pods • /v1/namespaces/mynamespace/services • /v1/namespaces/mynamespace/replicationcontrollers REST API that gives access to everything
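The quickest way to poke at these endpoints is through kubectl proxy; a small sketch, assuming the standard /api prefix in front of the v1 paths and a namespace called mynamespace:

      kubectl proxy --port=8001 &
      curl http://localhost:8001/api/v1/namespaces/mynamespace/pods
      curl http://localhost:8001/api/v1/namespaces/mynamespace/services
      curl http://localhost:8001/api/v1/namespaces/mynamespace/replicationcontrollers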
  33. 33. Client libraries • Amdatu Kubernetes OSGi • Amdatu Kubernetes Go • Clojure, Node, Python etc… Java:
          kubernetes.listNodes().subscribe(nodes -> {
              nodes.getItems().forEach(System.out::println);
          });
          Go:
          pods, err := kubernetes.ListPods(TEST_NAMESPACE)
          if err != nil {
              panic(err)
          }
          for _, pod := range pods.Items {
              log.Println(pod.Name)
          }
  34. 34. Blue-green deployment • Deployment without downtime • Only one version is active at a time • Rolls back on failed deployment
  35. 35. [Diagram] v1 pods (Docker containers) serving HTTPS traffic through ha-proxy
  36. 36. [Diagram] The same picture: v1 serving traffic through ha-proxy
  37. 37. [Diagram] The deployer deploys a new version: v2 pods are started next to the running v1 pods
  38. 38. [Diagram] The deployer runs health checks against the new v2 pods
  39. 39. [Diagram] The health checks continue until v2 is healthy
  40. 40. [Diagram] The deployer updates the config and confd regenerates the ha-proxy configuration
  41. 41. [Diagram] ha-proxy now sends HTTPS traffic to the v2 pods; the v1 pods are still present
  42. 42. [Diagram] The v1 pods are removed; only v2 remains behind ha-proxy
  43. 43. The Deployer
  44. 44. The Deployer [Diagram] The deployer calls the Kubernetes API to create a replication controller
  45. 45. The Deployer [Diagram] Kubernetes creates the pods and the service
  46. 46. The Deployer [Diagram] The deployer polls GET /health on the new pods
  47. 47. The Deployer [Diagram] The deployer switches the load balancer backend in etcd, which confd watches
  48. 48. The Deployer [Diagram] confd generates a new HAProxy config
  49. 49. Deployer
  50. 50. Deployer [Diagram] 1: create a replication controller via the Kubernetes API
  51. 51. Deployer [Diagram] 2: Kubernetes creates the pods and the service
  52. 52. Deployer [Diagram] 3: GET /health on the new pods
  53. 53. Deployer [Diagram] 4: switch the load balancer backend in etcd, 5: confd watches etcd
  54. 54. Deployer [Diagram] 6: confd generates the HAProxy config
  55. 55. Amdatu Kubernetes Deployer • Kubernetes deployment orchestration • Load balancer configuration • Blue-green deployment • Apache licensed • Go
  56. 56. {
            "deploymentType": "blue-green",
            "namespace": "default",
            "useHealthCheck": true,
            "newVersion": "#",
            "appName": "cloudrti-demo",
            "replicas": 2,
            "frontend": "cloud-rti-demo.amdatu.com",
            "podspec": {}
          }
  57. 57. Amdatu Deploymentctl • UI for setting up deployments • Deployment history • Webhooks for triggering from external events • OSGi / Vert.x / Angular 2
  58. 58. DEMO
  59. 59. Build / deploy pipelines [Diagram] The build server builds the (alpha) image and pushes it to Docker Hub; a webhook triggers the deployer, which deploys the new version
  60. 60. Scaling
  61. 61. How to scale a Kubernetes cluster? [Diagram] A single Kubernetes node
  62. 62. How to scale a Kubernetes cluster? [Diagram] The node runs a handful of pods
  63. 63. How to scale a Kubernetes cluster? [Diagram] The node fills up with pods
  64. 64. How to scale a Kubernetes cluster? [Diagram] The pods are spread over three Kubernetes nodes
  65. 65. How to scale a Kubernetes cluster? [Diagram] Three Kubernetes nodes running the pods
  66. 66. How to scale a Kubernetes cluster? [Diagram] The same pods packed onto two Kubernetes nodes
  67. 67. How to scale a Kubernetes cluster? [Diagram] The Kubernetes nodes without any pods
  68. 68. Scaling up 1. Use AWS API to start new nodes (ScalingGroup) 2. Cloud-init to register node to Kubernetes cluster
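A sketch of the scale-up call with the AWS Go SDK; the Auto Scaling group name is an example, and registering the new node via cloud-init is a separate step outside this snippet:

      package main

      import (
          "log"

          "github.com/aws/aws-sdk-go/aws"
          "github.com/aws/aws-sdk-go/aws/session"
          "github.com/aws/aws-sdk-go/service/autoscaling"
      )

      func main() {
          svc := autoscaling.New(session.New())
          // Ask the scaling group for one more node; cloud-init on the new
          // instance then registers it with the Kubernetes cluster.
          _, err := svc.SetDesiredCapacity(&autoscaling.SetDesiredCapacityInput{
              AutoScalingGroupName: aws.String("kubernetes-nodes"), // example group name
              DesiredCapacity:      aws.Int64(4),
          })
          if err != nil {
              log.Fatal(err)
          }
      }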
  69. 69. Scaling down 1. Set node to “unschedulable” 2. Drain node (relocate pods to other machines) 3. Remove node from Kubernetes 4. Use AWS API to terminate nodes (ScalingGroup)
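With current kubectl the first three steps map to roughly the following commands (the node name is an example; the scalerd tool described next automates this):

      kubectl cordon node-3                      # 1. mark the node unschedulable
      kubectl drain node-3 --ignore-daemonsets   # 2. relocate pods to other machines
      kubectl delete node node-3                 # 3. remove the node from Kubernetes
      # 4. terminate the instance through the AWS Auto Scaling group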
  70. 70. Amdatu scalerd • CLI to add/remove nodes to a cluster • Node draining to prevent downtime • Scheduled automated scaling
  71. 71. {
            "name": "night",
            "cron": "0 0 21 * * *",
            "description": "Switch to half capacity at night",
            "desiredCapacity": 2,
            "appScaleTemplates": [
              {
                "app": "demo",
                "replicationControllerScaleTemplates": [
                  {
                    "replicationController": "*",
                    "replicas": 1
                  }
                ]
              }
            ]
          }
          scalerctl create nighttime.json
  72. 72. How and where to run these tools? • In Kubernetes of course! • Bootstrap using kubectl scripts
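Bootstrapping can then be little more than a few kubectl calls against manifests for the deployer and scaler themselves; a purely illustrative sketch (the namespace and file names are made up):

      kubectl create namespace infra
      kubectl create -f deployer-rc.yml --namespace=infra
      kubectl create -f deployer-service.yml --namespace=infra
      kubectl create -f scalerd-rc.yml --namespace=infra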
  73. 73. [Diagram] The full setup: a master (API server plus an etcd cluster), four Kubernetes nodes and an HA-Proxy node, all inside a VPN
  74. 74. [Diagram] The same setup, with the question: What about my database!?
  75. 75. Datastores in Kubernetes • Kubernetes does have persistent volumes • Most data stores require lots of tuning • … don’t auto scale • … require manual steps to configure cluster
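For the cases where you do keep data inside the cluster, a minimal persistent volume claim in the v1 API looks roughly like this (the name and size are illustrative):

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: mongo-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi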
  76. 76. [Diagram] The same setup, with a mongo cluster and a Kafka cluster running next to the Kubernetes nodes inside the VPN
  77. 77. • Fully managed Kubernetes • Centralised logging • Application / cluster monitoring
  78. 78. @pbakker#Kubernetes Q & A Open source projects: https://bitbucket.org/amdatulabs
