
Zero downtime deployment of micro-services with Kubernetes


Talk on deployment strategies with Kubernetes, covering the Kubernetes configuration files and the actual implementation of your service in Go and .NET Core.

You will find demos for recreate, rolling updates, blue-green, and canary deployments.

Source code and demos are available on GitHub: https://github.com/wojciech12/talk_zero_downtime_deployment_with_kubernetes



  1. ZERO DOWNTIME DEPLOYMENT STRATEGIES WITH K8S + HOW TO PREPARE YOUR SERVICE Wojciech Barczynski - SMACC.io | Hypatos.ai November 2018
  2. WOJCIECH BARCZYŃSKI Lead Software Engineer & System Engineer Interests: working software Hobby: teaching software engineering and programming
  3. STORY Go + Kubernetes SMACC - Fintech / ML - [10.2017 - ...] Lyke - Mobile Fashion app - [12.2016 - 07.2017]
  4. KUBERNETES Learn-as-you-go Batteries for 12-factor apps ...but the app must be smarter
  5. KUBERNETES [Diagram: you push an image to a repository (make docker_push), then kubectl create -f app-srv-dpl.yaml schedules the App behind an Ingress Controller across the cluster nodes.]
  6. SCALE UP! SCALE DOWN! [Diagram: the App scaled to three replicas behind the Ingress Controller.] kubectl scale --replicas=3 -f app-srv-dpl.yaml
  7. DEPLOYMENT AND PODS [Diagram: the master schedules a Deployment's pods, each running Docker containers, onto the nodes.]
  8. DEPLOYMENT.YML
       apiVersion: apps/v1
       kind: Deployment
       metadata:
         name: demo-api
         labels:
           app: demo-api
       spec:
         replicas: 3
         strategy:
           type: Recreate
         selector:
           matchLabels:
             app: demo-api
         template:
           metadata:
             labels:
               app: demo-api
           spec:
             containers:
             - name: app
               image: wojciech11/api-status:1.0.0
  9. SERVICE AND PODS [Diagram: a Service in front of a Deployment's pods across nodes.] Services give a fixed virtual address and a fixed DNS entry; a Service matches pods based on labels.
  10. SERVICE.YML
       apiVersion: v1
       kind: Service
       metadata:
         name: demo-api
       spec:
         ports:
         - port: 80
           protocol: TCP
         selector:
           app: demo-api
         type: LoadBalancer
  11. BASIC CONCEPTS
       Name       | Purpose
       Service    | Interface - entry point (service name)
       Deployment | Factory - how many pods, which pods
       Pod        | Implementation - 1+ Docker containers running
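Once the manifests are applied, the three concepts are easy to inspect side by side; a quick check (label value taken from the demo manifests above):

    kubectl get service,deployment,pods -l app=demo-api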
  12. HOW DO USER REQUESTS GET IN? [Diagram: internet traffic for API.DOMAIN.COM, DOMAIN.COM/WEB, and BACKOFFICE.DOMAIN.COM hits the Ingress Controller, which routes into the orchestrator's private network (Docker, Swarm, Mesos...) to the API, WEB, ADMIN, DATA, and BACKOFFICE 1-3 services.]
  13. INGRESS
       Pattern               | Target App Service
       api.smacc.io/v1/users | users-v1
       api.smacc.io/v2/users | users-v2
       smacc.io              | web
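A hedged sketch of how routing rules like these might look as an Ingress manifest (the host/path layout is taken from the table; service ports are illustrative):

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: api-routing
    spec:
      rules:
      - host: api.smacc.io
        http:
          paths:
          - path: /v1/users
            backend:
              serviceName: users-v1
              servicePort: 80
          - path: /v2/users
            backend:
              serviceName: users-v2
              servicePort: 80
      - host: smacc.io
        http:
          paths:
          - backend:
              serviceName: web
              servicePort: 80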
  14. SERVICE DISCOVERY AND LABELING Service names are in DNS: curl http://users/list Labels: name=value Annotations: prometheus.io/scrape: "true"
  15. DEPLOYMENT STRATEGIES
  16. STRATEGIES We will see: Recreate (downtime visible) Rolling updates Blue-green Canary
  17. OTHER We will not cover: Feature toggles A/B testing Shadow deployments
  18. FIRST, THE HOMEWORK Your service needs to support: liveness - am I dead? readiness - can I serve requests?
  19. KUBE LIVENESS PROBE
       livenessProbe:
         httpGet:
           path: /model
           port: 8000
           httpHeaders:
           - name: X-Custom-Header
             value: Awesome
         initialDelaySeconds: 600
         periodSeconds: 5
         timeoutSeconds: 18
         successThreshold: 1
         failureThreshold: 3
  20. LIVENESS PROBE If the probe fails, our pod gets restarted; too many restarts -> CrashLoopBackOff
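When that happens, the restart count and the CrashLoopBackOff status are visible directly from kubectl (deployment label from the demo above; the pod name is a placeholder):

    kubectl get pods -l app=demo-api
    kubectl describe pod <pod-name>   # the Events section shows the failed probes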
  21. KUBE READINESS PROBE
       readinessProbe:
         exec:
           command:
           - cat
           - /tmp/healthy
         initialDelaySeconds: 5
         periodSeconds: 5
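The probes can also be HTTP endpoints served by the application itself. A minimal Go sketch of the two handlers (the /healthz and /ready paths and the ready flag are illustrative, not from the talk):

    package main

    import (
        "net/http"
        "sync/atomic"
    )

    var ready atomic.Value // set to false while starting up or shutting down

    func main() {
        ready.Store(true)

        // liveness: the process is up and able to answer
        http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
            w.WriteHeader(http.StatusOK)
        })

        // readiness: only accept traffic while the ready flag is set
        http.HandleFunc("/ready", func(w http.ResponseWriter, r *http.Request) {
            if ready.Load() == true {
                w.WriteHeader(http.StatusOK)
                return
            }
            w.WriteHeader(http.StatusServiceUnavailable)
        })

        http.ListenAndServe(":8080", nil)
    }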
  22-26. YOUR APP SHOULD, ON STOP: 1. we get the SIGTERM signal 2. the app starts answering 500 on the readinessProbe 3. k8s stops sending new requests 4. the app shuts down gracefully 5. Kubernetes force-kills it if the 30s grace period is exceeded
  27. ALWAYS implement readiness for ML model-based components and anything with a slow start-up time
  28. DEMO SERVICE IMPLEMENTATION graceful shutdown demo service
  29. GRACEFUL SHUTDOWN From :missy
       func (s *Service) prepareShutdown(h Server) {
           // block until SIGINT/SIGTERM arrives
           signal.Notify(s.Stop, os.Interrupt, syscall.SIGTERM)
           <-s.Stop
           // flip the readiness endpoint to not-ready first,
           // then drain and stop the HTTP server
           s.StatusNotReady()
           shutdown(h)
       }
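The missy helper compresses steps 1-4 from slides 22-26. A self-contained sketch of the same sequence with plain net/http (the ready flag mirrors the readiness handler above; all names are illustrative):

    package main

    import (
        "context"
        "net/http"
        "os"
        "os/signal"
        "sync/atomic"
        "syscall"
        "time"
    )

    func main() {
        var ready atomic.Value
        ready.Store(true)

        mux := http.NewServeMux()
        mux.HandleFunc("/ready", func(w http.ResponseWriter, r *http.Request) {
            if ready.Load() != true {
                w.WriteHeader(http.StatusServiceUnavailable) // step 2: fail readiness
                return
            }
            w.WriteHeader(http.StatusOK)
        })

        srv := &http.Server{Addr: ":8080", Handler: mux}
        go srv.ListenAndServe()

        // step 1: wait for SIGTERM from the kubelet
        stop := make(chan os.Signal, 1)
        signal.Notify(stop, os.Interrupt, syscall.SIGTERM)
        <-stop

        // step 2: report not-ready, so k8s takes us out of the Service (step 3)
        ready.Store(false)

        // step 4: drain in-flight requests, staying under the 30s grace period
        ctx, cancel := context.WithTimeout(context.Background(), 25*time.Second)
        defer cancel()
        srv.Shutdown(ctx)
    }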
  30. DEMO - RECREATE [Diagram: the Service switches from three v1-labeled pods to three v2-labeled pods; all v1 pods are killed before the v2 pods come up.]
  31. DEMO - RECREATE
       spec:
         replicas: 3
         strategy:
           type: Recreate
       kubectl set image deployment/demo-api app=wojciech11/api-status:2.0.0
  32. DEMO - RECREATE quick, but the downtime is visible
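The downtime is easy to observe during the rollout; a hedged one-liner (the URL is a placeholder, and the /status path is only a guess based on the demo image name):

    while true; do curl -s -o /dev/null -w "%{http_code}\n" http://<service-url>/status; sleep 0.5; done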
  33. DEMO - ROLLING UPDATES [Diagram: the Service gradually swaps v1-labeled pods for v2-labeled pods, one batch at a time.]
  34. DEMO - ROLLING UPDATES
       strategy:
         type: RollingUpdate
         rollingUpdate:
           maxSurge: 2
           maxUnavailable: 0
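With maxSurge: 2 and maxUnavailable: 0, Kubernetes starts up to two extra v2 pods and never drops below three ready ones. The summary slide also mentions minReadySeconds; a sketch of how it might be combined here (the 10s value is an arbitrary example):

    spec:
      minReadySeconds: 10   # a new pod must stay Ready this long before it counts as available
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 2
          maxUnavailable: 0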
  35. DEMO - ROLLING UPDATES kubectl set image deployment/demo-api app=wojciech11/api-status:2.0.0
  36. DEMO - ROLLING UPDATES the most popular strategy
  37-38. DEMO - GREEN/BLUE [Diagram: two pod sets, blue and green; the Service points at blue, then is switched over to green.]
  39. DEMO - GREEN/BLUE kubectl patch service api-status -p '{"spec":{"selector": {"label": "green"} }}'
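For the patch to work, the two deployments have to carry distinguishing labels that the Service selects on. A sketch of the layout this demo implies (the label key "label" with values "blue"/"green" comes from the patch command; everything else is assumed):

    # service.yaml - starts out pointing at blue
    apiVersion: v1
    kind: Service
    metadata:
      name: api-status
    spec:
      selector:
        label: blue
      ports:
      - port: 80

    # the green deployment's pod template carries the other label value
    template:
      metadata:
        labels:
          label: green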
  40. DEMO - GREEN/BLUE for big changes; less common; might be implemented with Ingress
  41-44. DEMO - CANARY [Diagram: the Service spans a production pod set and a small canary pod set; the canary's share grows step by step.]
  45. DEMO - CANARY done manually:
  46. kubectl scale --replicas=3 deploy/api-status-nginx-blue
       kubectl scale --replicas=1 deploy/api-status-nginx-green
       # no errors, let's continue
       kubectl scale --replicas=2 deploy/api-status-nginx-blue
       kubectl scale --replicas=2 deploy/api-status-nginx-green
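With 3 blue and 1 green replica behind one Service, roughly 25% of requests hit the canary; 2 and 2 moves it to about half. While shifting, it helps to watch which pods back the Service; a hedged check (assumes both deployments' pods share an app=api-status-nginx label, which is a guess from the deployment names):

    kubectl get pods -l app=api-status-nginx -w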
  47. TRAEFIK traffic-splitting
       apiVersion: extensions/v1beta1
       kind: Ingress
       metadata:
         annotations:
           traefik.ingress.kubernetes.io/service-weights: |
             my-app: 99%
             my-app-canary: 1%
         name: my-app
       spec:
         rules:
         - http:
             paths:
             - backend:
                 serviceName: my-app
                 servicePort: 80
  48. ISTIO traffic shifting with Istio
       apiVersion: networking.istio.io/v1alpha3
       kind: VirtualService
       metadata:
         name: helloworld
       spec:
         hosts:
         - helloworld
         http:
         - route:
           - destination:
               host: helloworld
               subset: v1
             weight: 90
           - destination:
               host: helloworld
               subset: v2
             weight: 10
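The subsets the VirtualService routes to are defined separately in an Istio DestinationRule; a sketch matching the example above (the version label values are assumptions):

    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: helloworld
    spec:
      host: helloworld
      subsets:
      - name: v1
        labels:
          version: v1
      - name: v2
        labels:
          version: v2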
  49. SUMMARY learn-as-you-go <3 easy to implement complex strategies; see kubectl rollout and minReadySeconds for more control
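The kubectl rollout subcommands mentioned here cover monitoring and rollback; the common ones (deployment name from the demos):

    kubectl rollout status deployment/demo-api    # wait for the rollout to finish
    kubectl rollout history deployment/demo-api   # list previous revisions
    kubectl rollout undo deployment/demo-api      # roll back to the previous revision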
  50. THANK YOU. QUESTIONS? https://github.com/wojciech12/talk_zero_downtime_deployment_with_kubernetes
  51. BACKUP SLIDES
  52. STORY Lyke - [12.2016 - 07.2017] SMACC - [10.2017 - present]
  53. LYKE Now JollyChic Indonesia E-commerce Mobile-only 50k+ users 2M downloads Top 10 fashion app in the Google Play Store http://www.news.getlyke.com/single-post/2016/12/02/Introducing-the-New-Beautiful-LYKE
  54. GOOD PARTS Fast growth A/B testing Data-driven Product manager, UI designer, mobile dev, and tester - all in one person
  55. CHALLENGES 50+ VMs in Amazon, 1 VM - 1 app, idle machines Puppet, hilarious (manual) deployment process Fear Forgotten components Occasional performance issues
  56. SMACC Machine Learning FinTech SaaS and API platform From Enterprise (Deutsche Bank, AoK) to SME Well-known FinTech startup in Germany
  57. STORY Legacy on AWS, experiments with AWS ECS :/ Self-hosted K8S on ProfitBricks Got into the Microsoft ScaleUp program, welcome Azure Luckily - Azure Kubernetes Service
  58. DIFFERENCE ☠ Two teams, in Berlin and Warsaw Me in Warsaw
  59. APPROACH Simplify, simplify Hide the K8S magic git-tag-driven Continuous Deployment
  60. KUBERNETES CONCEPTS+
  61. PODS Containers in a pod see each other on localhost, live and die together, and can expose multiple ports [Diagram: a pod with an nginx container and a WebFiles container; ENV: HOSTNAME=0.0.0.0 LISTENPORT=8080]
  62. SIDE-CARS [Diagram: a pod with memcached (11211) plus a prometheus exporter (9150); a pod with an app (8080) plus swagger-ui (80)]
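A sketch of the first side-car example as a manifest (the image names and tags are assumptions):

    apiVersion: v1
    kind: Pod
    metadata:
      name: memcached
      annotations:
        prometheus.io/scrape: "true"
    spec:
      containers:
      - name: memcached
        image: memcached
        ports:
        - containerPort: 11211
      - name: exporter            # side-car: exposes memcached metrics for Prometheus
        image: prom/memcached-exporter
        ports:
        - containerPort: 9150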
  63. LOAD BALANCING [Diagram: a Load Balancer spreads user requests across the Kubernetes workers on NodePort 30000, which route them to the user-* pods.]
