Tectonic Summit 2016: Kubernetes 1.5 and Beyond



David Aronchick, Senior Product Manager, Google, talks about Kubernetes 1.5 and beyond.

Published on 12/12/16



  1. 1. Kubernetes 1.5 and Beyond. David Aronchick, Product Manager at Google Container Engine & Kubernetes
  2. 2. Velocity: total commits since July 2014, with release markers for 1.0 through 1.5 (chart)
  3. 3. Adoption: ~4k commits in 1.5, +25% unique contributors, top 0.01% of all GitHub projects, 3500+ external projects based on K8s, with charts of companies contributing and companies using
  4. 4. Give Everyone the Power to Run Agile, Reliable, Distributed Systems at Scale
  5. 5. Introducing Kubernetes 1.5
  6. 6. Kubernetes 1.5 Enterprise Highlights: simple setup (including multiple clusters!), sophisticated scheduling, network policy, and Helm for application installation
  7. 7. Simplified Setup. Problem: setting up a Kubernetes cluster is hard. Today: use kube-up.sh (and hope you don’t have to customize), compile from HEAD and manually address security, or use a third-party tool (some of which are great!)
  8. 8. Solution: kubeadm! Simplified Setup
  9. 9. Solution: kubeadm! Simplified Setup master.myco.com# apt-get install -y kubelet kubeadm kubectl kubernetes-cni master.myco.com# kubeadm init
  10. 10. Solution: kubeadm! Simplified Setup master.myco.com# apt-get install -y kubelet kubeadm kubectl kubernetes-cni master.myco.com# kubeadm init Kubernetes master initialized successfully! You can now join any number of nodes by running the following command: kubeadm join --token 48b69e.b61e2d0dd5c 10.140.0.3
  11. 11. Solution: kubeadm! Simplified Setup master.myco.com# apt-get install -y kubelet kubeadm kubectl kubernetes-cni master.myco.com# kubeadm init Kubernetes master initialized successfully! You can now join any number of nodes by running the following command: kubeadm join --token 48b69e.b61e2d0dd5c 10.140.0.3 node-01.myco.com# apt-get install -y kubelet kubeadm kubectl kubernetes-cni node-01.myco.com# kubeadm join --token 48b69e.b61e2d0dd5c 10.140.0.3
  12. 12. Solution: kubeadm! Simplified Setup master.myco.com# apt-get install -y kubelet kubeadm kubectl kubernetes-cni master.myco.com# kubeadm init Kubernetes master initialized successfully! You can now join any number of nodes by running the following command: kubeadm join --token 48b69e.b61e2d0dd5c 10.140.0.3 node-01.myco.com# apt-get install -y kubelet kubeadm kubectl kubernetes-cni node-01.myco.com# kubeadm join --token 48b69e.b61e2d0dd5c 10.140.0.3 Node join complete.
  13. 13. Solution: kubeadm! Simplified Setup master.myco.com# apt-get install -y kubelet kubeadm kubectl kubernetes-cni master.myco.com# kubeadm init Kubernetes master initialized successfully! You can now join any number of nodes by running the following command: kubeadm join --token 48b69e.b61e2d0dd5c 10.140.0.3 node-01.myco.com# apt-get install -y kubelet kubeadm kubectl kubernetes-cni node-01.myco.com# kubeadm join --token 48b69e.b61e2d0dd5c 10.140.0.3 Node join complete. master.myco.com# kubectl apply -f https://git.io/weave-kube
  14. 14. Solution: kubeadm! Simplified Setup master.myco.com# apt-get install -y kubelet kubeadm kubectl kubernetes-cni master.myco.com# kubeadm init Kubernetes master initialized successfully! You can now join any number of nodes by running the following command: kubeadm join --token 48b69e.b61e2d0dd5c 10.140.0.3 node-01.myco.com# apt-get install -y kubelet kubeadm kubectl kubernetes-cni node-01.myco.com# kubeadm join --token 48b69e.b61e2d0dd5c 10.140.0.3 Node join complete. master.myco.com# kubectl apply -f https://git.io/weave-kube Network setup complete.
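A verification step the slides skip: after the network add-on is applied, the joined nodes should report Ready. The command below is standard kubectl; the output shown is illustrative and its exact columns vary by version.

    master.myco.com# kubectl get nodes
    NAME               STATUS    AGE
    master.myco.com    Ready     10m
    node-01.myco.com   Ready     2m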
  15. 15. Simplified Setup: Federation Edition. Problem: using multiple clusters is hard. Today: clusters are run as multiple independent silos, or you wire up Kubernetes federation from scratch
  16. 16. Solution: kubefed! Simplified Setup: Federation Edition
  17. 17. Solution: kubefed! Simplified Setup: Federation Edition dc1.example.com# kubefed init fellowship --host-cluster-context=rivendell --dns-zone-name="example.com"
  18. 18. Solution: kubefed! Simplified Setup: Federation Edition dc1.example.com# kubefed init fellowship --host-cluster-context=rivendell --dns-zone-name="example.com" Federation "fellowship" created.
  19. 19. Solution: kubefed! Simplified Setup: Federation Edition dc1.example.com# kubefed init fellowship --host-cluster-context=rivendell --dns-zone-name="example.com" Federation "fellowship" created. dc1.example.com# kubectl config use-context fellowship
  20. 20. Solution: kubefed! Simplified Setup: Federation Edition dc1.example.com# kubefed init fellowship --host-cluster-context=rivendell --dns-zone-name="example.com" Federation "fellowship" created. dc1.example.com# kubectl config use-context fellowship switched to context "fellowship"
  21. 21. Solution: kubefed! Simplified Setup: Federation Edition dc1.example.com# kubefed init fellowship --host-cluster-context=rivendell --dns-zone-name="example.com" Federation "fellowship" created. dc1.example.com# kubectl config use-context fellowship switched to context "fellowship" dc1.example.com# kubefed join gondor --host-cluster-context=fellowship
  22. 22. Solution: kubefed! Simplified Setup: Federation Edition dc1.example.com# kubefed init fellowship --host-cluster-context=rivendell --dns-zone-name="example.com" Federation "fellowship" created. dc1.example.com# kubectl config use-context fellowship switched to context "fellowship" dc1.example.com# kubefed join gondor --host-cluster-context=fellowship Cluster "gondor" joined to federation "fellowship".
  23. 23. Solution: kubefed! Simplified Setup: Federation Edition dc1.example.com# kubefed init fellowship --host-cluster-context=rivendell --dns-zone-name="example.com" Federation "fellowship" created. dc1.example.com# kubectl config use-context fellowship switched to context "fellowship" dc1.example.com# kubefed join gondor --host-cluster-context=fellowship Cluster "gondor" joined to federation "fellowship". dc1.example.com# kubectl create -f multi-cluster-deployment.yml
  24. 24. Solution: kubefed! Simplified Setup: Federation Edition dc1.example.com# kubefed init fellowship --host-cluster-context=rivendell --dns-zone-name="example.com" Federation "fellowship" created. dc1.example.com# kubectl config use-context fellowship switched to context "fellowship" dc1.example.com# kubefed join gondor --host-cluster-context=fellowship Cluster "gondor" joined to federation "fellowship". dc1.example.com# kubectl create -f multi-cluster-deployment.yml deployment "multi-cluster-deployment" created
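The slides run "kubectl create -f multi-cluster-deployment.yml" against the federation context but never show the file. A minimal sketch of what it might contain, assuming an ordinary Deployment that the v1 federation control plane propagates to every joined cluster; the API group reflects the 1.5 era, and the labels and image are placeholders, not from the deck:

    apiVersion: extensions/v1beta1        # Deployment API group in the Kubernetes 1.5 era
    kind: Deployment
    metadata:
      name: multi-cluster-deployment
    spec:
      replicas: 6                         # federation spreads replicas across member clusters
      template:
        metadata:
          labels:
            app: frontend                 # hypothetical label
        spec:
          containers:
          - name: frontend
            image: nginx:1.11             # hypothetical image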
  25. 25. Sophisticated Scheduling. Problem: deploying and managing workloads on large, heterogeneous clusters is hard. Today: liberal use of labels (and keeping your team in sync), manual tooling. Didn’t you use Kubernetes to avoid this?
  26. 26. Sophisticated Scheduling. Solution: sophisticated scheduling! Taints/tolerations, forgiveness, and disruption budgets
  27. 27. Sophisticated Scheduling: Taints/Toleration Node 1 (4GB + 2 GPU) Node 2 (4GB) Kubernetes Cluster Node 3 (4GB) SCENARIO: Specialized Hardware
  28. 28. Sophisticated Scheduling: Taints/Toleration Node 1 (4GB + 2 GPU) Node 2 (4GB) Kubernetes Cluster Pod 1 (Need 4 GB) Node 3 (4GB) SCENARIO: Specialized Hardware
  29. 29. Sophisticated Scheduling: Taints/Toleration Node 1 (4GB + 2 GPU) Node 2 (4GB) Kubernetes Cluster Pod 1 (Need 4 GB) Node 3 (4GB) SCENARIO: Specialized Hardware
  30. 30. Sophisticated Scheduling: Taints/Toleration Node 1 (4GB + 2 GPU) Node 2 (4GB) Kubernetes Cluster Pod 1 (Need 4 GB) Node 3 (4GB) Any node with 4GB is good with me! SCENARIO: Specialized Hardware
  31. 31. Sophisticated Scheduling: Taints/Toleration Node 1 (4GB + 2 GPU) Node 2 (4GB) Kubernetes Cluster Node 3 (4GB) Pod 1 (Need 4 GB) SCENARIO: Specialized Hardware
  32. 32. Sophisticated Scheduling: Taints/Toleration Node 1 (4GB + 2 GPU) Node 2 (4GB) Kubernetes Cluster Pod 2 (Need 4 GB + 2 GPU) Node 3 (4GB) Pod 1 (Need 4 GB) SCENARIO: Specialized Hardware
  33. 33. Sophisticated Scheduling: Taints/Toleration Node 1 (4GB + 2 GPU) Node 2 (4GB) Kubernetes Cluster Pod 2 (Need 4 GB + 2 GPU) Node 3 (4GB) Oh noes! I guess I’ll have to give up. Pod 1 (Need 4 GB) SCENARIO: Specialized Hardware
  34. 34. Sophisticated Scheduling: Taints/Toleration Node 1 (4GB + 2 GPU) Node 2 (4GB) Kubernetes Cluster Pod 2 (Need 4 GB + 2 GPU) Node 3 (4GB) I guess I’ll go with one of these nodes. Pod 1 (Need 4 GB) SCENARIO: Specialized Hardware
  35. 35. Sophisticated Scheduling: Taints/Toleration Node 1 (4GB + 2 GPU) Node 2 (4GB) Kubernetes Cluster Node 3 (4GB) Pod 1 (Need 4 GB) Pod 2 (Need 4 GB + 2 GPU) SCENARIO: Specialized Hardware
  36. 36. Sophisticated Scheduling: Taints/Toleration Node 1 (4GB + 2 GPU) Node 2 (4GB) Kubernetes Cluster Node 3 (4GB) I am very unhappy. Pod 1 (Need 4 GB) Pod 2 (Need 4 GB + 2 GPU) SCENARIO: Specialized Hardware
  37. 37. Sophisticated Scheduling: Taints/Toleration Node 1 (4GB + 2 GPU) Node 2 (4GB) Kubernetes Cluster Node 3 (4GB) I am very unhappy. Pod 1 (Need 4 GB) Pod 2 (Need 4 GB + 2 GPU) SCENARIO: Specialized Hardware
  38. 38. Sophisticated Scheduling: Taints/Toleration Node 1 (4GB + 2 GPU) Node 2 (4GB) Kubernetes Cluster Node 3 (4GB) taint: key: GPU effect: PreferNoSchedule SCENARIO: Specialized Hardware
  39. 39. Sophisticated Scheduling: Taints/Toleration Node 1 (4GB + 2 GPU) Node 2 (4GB) Kubernetes Cluster Node 3 (4GB) taint: key: GPU effect: PreferNoSchedule SCENARIO: Specialized Hardware
  40. 40. Sophisticated Scheduling: Taints/Toleration Node 1 (4GB + 2 GPU) Node 2 (4GB) Kubernetes Cluster Pod 1 (Need 4 GB) Node 3 (4GB) taint: key: GPU effect: PreferNoSchedule SCENARIO: Specialized Hardware
  41. 41. Sophisticated Scheduling: Taints/Toleration Node 1 (4GB + 2 GPU) Node 2 (4GB) Kubernetes Cluster Pod 1 (Need 4 GB) Node 3 (4GB) taint: key: GPU effect: PreferNoSchedule SCENARIO: Specialized Hardware
  42. 42. Sophisticated Scheduling: Taints/Toleration Node 1 (4GB + 2 GPU) Node 2 (4GB) Kubernetes Cluster Pod 1 (Need 4 GB) Node 3 (4GB) I’ll try to avoid nodes with GPUs (but may end up there anyway) taint: key: GPU effect: PreferNoSchedule SCENARIO: Specialized Hardware
  43. 43. Sophisticated Scheduling: Taints/Toleration Node 1 (4GB + 2 GPU) Node 2 (4GB) Kubernetes Cluster Node 3 (4GB) Pod 1 (Need 4 GB)
  44. 44. Sophisticated Scheduling: Taints/Toleration Node 1 (4GB + 2 GPU) Node 2 (4GB) Kubernetes Cluster Node 3 (4GB) Pod 1 (Need 4 GB) SCENARIO: Specialized Hardware Pod 2 (Need 4 GB + 2 GPU) toleration: key: GPU effect: PreferNoSchedule
  45. 45. Sophisticated Scheduling: Taints/Toleration Node 1 (4GB + 2 GPU) Node 2 (4GB) Kubernetes Cluster Node 3 (4GB) Pod 1 (Need 4 GB) Pod 2 (Need 4 GB + 2 GPU) toleration: key: GPU effect: PreferNoSchedule SCENARIO: Specialized Hardware
  46. 46. Sophisticated Scheduling: Taints/Toleration Node 1 (4GB + 2 GPU) Node 2 (4GB) Kubernetes Cluster Node 3 (4GB) Pod 1 (Need 4 GB) Yay! There’s a spot that’s a perfect fit! Pod 2 (Need 4 GB + 2 GPU) toleration: key: GPU effect: PreferNoSchedule SCENARIO: Specialized Hardware
  47. 47. Sophisticated Scheduling: Taints/Toleration Node 1 (4GB + 2 GPU) Node 2 (4GB) Kubernetes Cluster Node 3 (4GB) Pod 1 (Need 4 GB) Pod 2 (Need 4 GB + 2 GPU) SCENARIO: Specialized Hardware
  48. 48. Sophisticated Scheduling: Taints/Toleration Node 1 (4GB + 2 GPU) Node 2 (4GB) Kubernetes Cluster Node 3 (4GB) Pod 1 (Need 4 GB) Pod 2 (Need 4 GB + 2 GPU) SCENARIO: Specialized Hardware We are both happy! We are both happy!
  49. 49. Sophisticated Scheduling: Taints/Toleration Node 1 (4GB + 2 GPU) Node 2 (4GB) Kubernetes Cluster Node 3 (4GB) Pod 1 (Need 4 GB) Pod 2 (Need 4 GB + 2 GPU) We are both happy! We are both happy! SCENARIO: Specialized Hardware
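Roughly what the specialized-hardware scenario looks like in configuration. In the 1.5 era taints and tolerations were alpha and expressed through annotations; the sketch below uses the field-based syntax that later became standard, and the node name, pod name, and image are placeholders:

    # Taint the GPU node so ordinary pods prefer to stay away from it
    kubectl taint nodes node-1 GPU=true:PreferNoSchedule

    # Pod 2 declares a matching toleration, so the GPU node is back in play for it
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-2
    spec:
      containers:
      - name: gpu-workload
        image: example/gpu-workload       # hypothetical image
        resources:
          requests:
            memory: "4Gi"
      tolerations:
      - key: "GPU"
        operator: "Exists"
        effect: "PreferNoSchedule"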
  50. 50. Sophisticated Scheduling: Taints/Toleration Node 1 (Premium) Node 2 (Premium) Kubernetes Cluster Node 3 (Regular) SCENARIO: Reserved instances
  51. 51. Sophisticated Scheduling: Taints/Toleration Node 1 (Premium) Node 2 (Premium) Kubernetes Cluster Node 3 (Regular) taint: key: user value: specialTeam effect: NoSchedule SCENARIO: Reserved instances
  52. 52. Sophisticated Scheduling: Taints/Toleration Node 1 (Premium) Node 2 (Premium) Kubernetes Cluster Node 3 (Regular) taint: key: user value: specialTeam effect: NoSchedule SCENARIO: Reserved instances
  53. 53. Sophisticated Scheduling: Taints/Toleration Node 1 (Premium) Node 2 (Premium) Kubernetes Cluster Node 3 (Regular) taint: key: user value: specialTeam effect: NoSchedule SCENARIO: Reserved instances Premium Pod toleration: key: “user” value: specialTeam effect: NoSchedule Premium Pod
  54. 54. Sophisticated Scheduling: Taints/Toleration Node 1 (Premium) Node 2 (Premium) Kubernetes Cluster Node 3 (Regular) taint: key: user value: specialTeam effect: NoSchedule SCENARIO: Reserved instances Premium Pod toleration: key: “user” value: specialTeam effect: NoSchedule Premium Pod
  55. 55. Sophisticated Scheduling: Taints/Toleration Node 1 (Premium) Node 2 (Premium) Kubernetes Cluster Node 3 (Regular) We can go anywhere! taint: key: user value: specialTeam effect: NoSchedule SCENARIO: Reserved instances Premium Pod toleration: key: “user” value: specialTeam effect: NoSchedule Premium Pod
  56. 56. Sophisticated Scheduling: Taints/Toleration Node 1 (Premium) Node 2 (Premium) Kubernetes Cluster Node 3 (Regular) Premium Pod Regular Pod Premium Pod SCENARIO: Reserved instances
  57. 57. Sophisticated Scheduling: Taints/Toleration Node 1 (Premium) Node 2 (Premium) Kubernetes Cluster Node 3 (Regular) Premium Pod Regular Pod Premium Pod SCENARIO: Reserved instances
  58. 58. Sophisticated Scheduling: Taints/Toleration Node 1 (Premium) Node 2 (Premium) Kubernetes Cluster Node 3 (Regular) I will fail to schedule even though there’s a spot for me. Premium Pod Regular Pod Premium Pod SCENARIO: Reserved instances
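The reserved-instances scenario adds a value to the taint so only the special team's pods match. A hedged sketch in the same later-standard syntax (node names and the label value mirror the diagram; everything else is a placeholder):

    # Reserve the premium nodes: anything without a matching toleration is refused outright
    kubectl taint nodes node-1 user=specialTeam:NoSchedule
    kubectl taint nodes node-2 user=specialTeam:NoSchedule

    # Toleration carried by the premium pods
    tolerations:
    - key: "user"
      operator: "Equal"
      value: "specialTeam"
      effect: "NoSchedule"

Regular pods carry no toleration, so they can only land on the regular node; as the last slide shows, they fail to schedule once that capacity is gone, even though the premium nodes have room.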
  59. 59. Sophisticated Scheduling: Taints/Toleration Node 1 Node 2 Kubernetes Cluster Node 3 TPM SCENARIO: Ensuring node meets spec
  60. 60. Sophisticated Scheduling: Taints/Toleration Node 1 Node 2 Kubernetes Cluster Pod Node 3 TPM SCENARIO: Ensuring node meets spec
  61. 61. Sophisticated Scheduling: Taints/Toleration Node 1 Node 2 Kubernetes Cluster Pod Node 3 TPM SCENARIO: Ensuring node meets spec
  62. 62. Sophisticated Scheduling: Taints/Toleration Node 1 Node 2 Kubernetes Cluster Pod Node 3 I must wait until a node is available and trusted. TPM SCENARIO: Ensuring node meets spec
  63. 63. Sophisticated Scheduling: Taints/Toleration Node 1 Node 2 Kubernetes Cluster Pod Node 3 TPM I must wait until a node is available and trusted. SCENARIO: Ensuring node meets spec
  64. 64. Sophisticated Scheduling: Taints/Toleration Node 1 Node 2 Kubernetes Cluster Pod Node 3 TPM I must wait until a node is available and trusted. SCENARIO: Ensuring node meets spec
  65. 65. Sophisticated Scheduling: Taints/Toleration Node 1 Node 2 Kubernetes Cluster Pod Node 3 TPM I must wait until a node is available and trusted. SCENARIO: Ensuring node meets spec
  66. 66. Sophisticated Scheduling: Taints/Toleration Node 1 Node 2 Kubernetes Cluster Pod Node 3 TPM I must wait until a node is available and trusted. SCENARIO: Ensuring node meets spec
  67. 67. Sophisticated Scheduling: Taints/Toleration Node 1 Node 2 Kubernetes Cluster Pod Node 3 I can be scheduled! TPM SCENARIO: Ensuring node meets spec
  68. 68. Sophisticated Scheduling: Taints/Toleration Node 1 Node 2 Kubernetes Cluster Pod Node 3 I can be scheduled! TPM Pod SCENARIO: Ensuring node meets spec
  69. 69. Sophisticated Scheduling: Taints/Toleration Node 1 Node 2 Kubernetes Cluster Node 3 TPM Pod SCENARIO: Ensuring node meets spec
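One way the trusted-node scenario can be modeled (a sketch, not necessarily how the deck's setup works): nodes register with a NoSchedule taint and the taint is removed once TPM attestation succeeds, at which point the waiting pod schedules. The taint key is hypothetical:

    # New nodes come up tainted and accept no work
    kubectl taint nodes node-3 tpm-verified=false:NoSchedule

    # After attestation succeeds, remove the taint so pending pods can land
    kubectl taint nodes node-3 tpm-verified:NoSchedule-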
  70. 70. Sophisticated Scheduling: Taints/Toleration Node 1 Node 2 Kubernetes Cluster Node 3 Pod API Server SCENARIO: Hardware failing (but not failed)
  71. 71. Sophisticated Scheduling: Taints/Toleration Node 1 Node 2 Kubernetes Cluster Node 3 Pod API Server SCENARIO: Hardware failing (but not failed)
  72. 72. Sophisticated Scheduling: Taints/Toleration Node 1 Node 2 Kubernetes Cluster Node 3 Pod API Server SCENARIO: Hardware failing (but not failed)
  73. 73. Sophisticated Scheduling: Taints/Toleration Node 1 Node 2 Kubernetes Cluster Node 3 Pod API Server This node’s disk is failing! SCENARIO: Hardware failing (but not failed)
  74. 74. Node 1 Pod Kubernetes Cluster Node 2 Node 3 Sophisticated Scheduling: Taints/Toleration API Server Taint the node SCENARIO: Hardware failing (but not failed)
  75. 75. Node 1 Pod Kubernetes Cluster Node 2 Node 3 Sophisticated Scheduling: Taints/Toleration API Server Taint the node SCENARIO: Hardware failing (but not failed)
  76. 76. Node 1 Pod Kubernetes Cluster Node 2 Node 3 Sophisticated Scheduling: Taints/Toleration API Server SCENARIO: Hardware failing (but not failed) Taint the node
  77. 77. Node 1 Pod Kubernetes Cluster Node 2 Node 3 Sophisticated Scheduling: Taints/Toleration Schedule new pod and kill the old one API Server SCENARIO: Hardware failing (but not failed)
  78. 78. Node 1 Pod Kubernetes Cluster Node 2 Node 3 Sophisticated Scheduling: Taints/Toleration Schedule new pod and kill the old one New Pod API Server SCENARIO: Hardware failing (but not failed)
  79. 79. Node 1 Pod Kubernetes Cluster Node 2 Node 3 Sophisticated Scheduling: Taints/Toleration Schedule new pod and kill the old one New Pod API Server SCENARIO: Hardware failing (but not failed)
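A sketch of the failing-hardware flow, assuming a monitoring agent reacts to the disk warning; the taint key and pod name are placeholders, and later Kubernetes releases added a NoExecute effect that evicts running pods from a tainted node automatically:

    # Mark the suspect node so no new pods land on it
    kubectl taint nodes node-1 disk=failing:NoSchedule

    # Delete the old pod; its controller reschedules a replacement onto a healthy node
    kubectl delete pod my-app-pod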
  80. 80. Sophisticated Scheduling: Forgiveness Node 1 Node 2 Kubernetes Cluster Node 3 Pod (t=5m) All is well. Pod (t=30m) API Server SCENARIO: Supporting network failure
  81. 81. Sophisticated Scheduling: Forgiveness Node 1 Node 2 Kubernetes Cluster Node 3 It’s been 1m since I heard from Node 1 Pod (t=5m) Pod (t=30m) API Server SCENARIO: Supporting network failure
  82. 82. Sophisticated Scheduling: Forgiveness Node 1 Node 2 Kubernetes Cluster Node 3 It’s been 2m since I heard from Node 1 Pod (t=5m) Pod (t=30m) API Server SCENARIO: Supporting network failure
  83. 83. Sophisticated Scheduling: Forgiveness Node 1 Node 2 Kubernetes Cluster Node 3 It’s been 3m since I heard from Node 1 Pod (t=5m) Pod (t=30m) API Server SCENARIO: Supporting network failure
  84. 84. Sophisticated Scheduling: Forgiveness Node 1 Node 2 Kubernetes Cluster Node 3 It’s been 4m since I heard from Node 1 Pod (t=5m) Pod (t=30m) API Server SCENARIO: Supporting network failure
  85. 85. Sophisticated Scheduling: Forgiveness Node 1 Node 2 Kubernetes Cluster Node 3 It’s been 5m since I heard from Node 1 Pod (t=5m) Pod (t=30m) API Server SCENARIO: Supporting network failure
  86. 86. Sophisticated Scheduling: Forgiveness Node 1 Node 2 Kubernetes Cluster Node 3 !!! Pod (t=5m) Pod (t=30m) API Server SCENARIO: Supporting network failure
  87. 87. Sophisticated Scheduling: Forgiveness Node 1 Node 2 Kubernetes Cluster Node 3 Treat pod as dead & schedule new 5m Pod Pod (t=30m) API Server Pod (t=5m) SCENARIO: Supporting network failure
  88. 88. Sophisticated Scheduling: Forgiveness Node 1 Node 2 Kubernetes Cluster Node 3 Treat pod as dead & schedule new 5m Pod Pod (t=30m) API Server Pod (t=5m) SCENARIO: Supporting network failure
  89. 89. Sophisticated Scheduling: Forgiveness Node 1 Node 2 Kubernetes Cluster Node 3 Treat pod as dead & schedule new 5m Pod Pod (t=30m) API Server Pod (t=5m) Pod (t=5m) SCENARIO: Supporting network failure
  90. 90. Sophisticated Scheduling: Forgiveness Node 1 Node 2 Kubernetes Cluster Node 3 Treat pod as dead & schedule new 5m Pod Pod (t=30m) API Server Pod (t=5m) Pod (t=5m) SCENARIO: Supporting network failure
  91. 91. Sophisticated Scheduling: Forgiveness Node 1 Node 2 Kubernetes Cluster Node 3 It’s been 30m since I heard from Node 1 Pod (t=30m) API Server Pod (t=5m) Pod (t=5m) SCENARIO: Supporting network failure
  92. 92. Sophisticated Scheduling: Forgiveness Node 1 Node 2 Kubernetes Cluster Node 3 Pod (t=30m) API Server Pod (t=5m) !!! Pod (t=5m) SCENARIO: Supporting network failure
  93. 93. Sophisticated Scheduling: Forgiveness Node 1 Node 2 Kubernetes Cluster Node 3 API Server Pod (t=5m) Treat pod as dead & schedule a new 30m pod Pod (t=5m) Pod (t=30m) SCENARIO: Supporting network failure
  94. 94. Sophisticated Scheduling: Forgiveness Node 1 Node 2 Kubernetes Cluster Node 3 API Server Pod (t=5m) Treat pod as dead & schedule a new 30m pod Pod (t=5m) Pod (t=30m) SCENARIO: Supporting network failure
  95. 95. Sophisticated Scheduling: Forgiveness Node 1 Node 2 Kubernetes Cluster Node 3 API Server Pod (t=5m) Treat pod as dead & schedule a new 30m pod Pod (t=5m) Pod (t=30m) SCENARIO: Supporting network failure
  96. 96. Sophisticated Scheduling: Forgiveness Node 1 Node 2 Kubernetes Cluster Node 3 API Server Pod (t=5m) Treat pod as dead & schedule a new 30m pod Pod (t=5m) Pod (t=30m) Pod (t=30m) SCENARIO: Supporting network failure
  97. 97. Sophisticated Scheduling: Forgiveness Node 1 Node 2 Kubernetes Cluster Node 3 API Server Pod (t=5m) Treat pod as dead & schedule a new 30m pod Pod (t=5m) Pod (t=30m) Pod (t=30m) SCENARIO: Supporting network failure
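Forgiveness is the toleration's time dimension: how long a pod is willing to stay bound to a node the control plane can no longer reach. The exact taint keys and API shape changed between the 1.5-era alpha and later releases; this sketch uses the form that later became standard:

    # The "5m" pod: give up on an unreachable node after five minutes
    tolerations:
    - key: "node.kubernetes.io/unreachable"
      operator: "Exists"
      effect: "NoExecute"
      tolerationSeconds: 300

    # The "30m" pod tolerates the same condition for half an hour
    tolerations:
    - key: "node.kubernetes.io/unreachable"
      operator: "Exists"
      effect: "NoExecute"
      tolerationSeconds: 1800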
  98. 98. Sophisticated Scheduling: Disruption Budget Node 1 Node 2 Kubernetes Cluster Node 3 Two Pod Set (A) API Server Time to upgrade to Kubernetes 1.5! Two Pod Set (B) SCENARIO: Cluster upgrades with stateful workloads
  99. 99. Sophisticated Scheduling: Disruption Budget Node 1 Node 2 Kubernetes Cluster Node 3 Two Pod Set (A) API Server Two Pod Set (B) “Evict A!” SCENARIO: Cluster upgrades with stateful workloads
  100. 100. Sophisticated Scheduling: Disruption Budget Node 1 Node 2 Kubernetes Cluster Node 3 Two Pod Set (A) API Server Two Pod Set (B) “Shut down” SCENARIO: Cluster upgrades with stateful workloads
  101. 101. Sophisticated Scheduling: Disruption Budget Node 1 Node 2 Kubernetes Cluster Node 3 Two Pod Set (A) API Server Two Pod Set (B) SCENARIO: Cluster upgrades with stateful workloads
  102. 102. Sophisticated Scheduling: Disruption Budget Node 1 Node 2 Kubernetes Cluster Node 3 Two Pod Set (A) API Server Two Pod Set (B) “Evict B!” SCENARIO: Cluster upgrades with stateful workloads
  103. 103. Sophisticated Scheduling: Disruption Budget Node 1 Node 2 Kubernetes Cluster Node 3 Two Pod Set (A) API Server Two Pod Set (B) “Sorry, can’t!” SCENARIO: Cluster upgrades with stateful workloads
  104. 104. Sophisticated Scheduling: Disruption Budget Node 1 Node 2 Kubernetes Cluster Node 3 Two Pod Set (A) API Server Two Pod Set (B) SCENARIO: Cluster upgrades with stateful workloads
  105. 105. Sophisticated Scheduling: Disruption Budget Node 1 Node 2 Kubernetes Cluster Node 3 Two Pod Set (A) API Server Two Pod Set (B) SCENARIO: Cluster upgrades with stateful workloads
  106. 106. Sophisticated Scheduling: Disruption Budget Node 1 Node 2 Kubernetes Cluster Node 3 Two Pod Set (A) API Server Two Pod Set (B) “Ok, now Evict B!” SCENARIO: Cluster upgrades with stateful workloads
  107. 107. Sophisticated Scheduling: Disruption Budget Node 1 Node 2 Kubernetes Cluster Node 3 API Server Two Pod Set (B) “OK!” Two Pod Set (A) SCENARIO: Cluster upgrades with stateful workloads
  108. 108. Sophisticated Scheduling: Disruption Budget Node 1 Node 2 Kubernetes Cluster Node 3 API Server Two Pod Set (B) Two Pod Set (A) “Shutdown” SCENARIO: Cluster upgrades with stateful workloads
  109. 109. Sophisticated Scheduling: Disruption Budget Node 1 Node 2 Kubernetes Cluster Node 3 API Server Two Pod Set (B) Two Pod Set (A) SCENARIO: Cluster upgrades with stateful workloads
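One way to express the upgrade scenario as a PodDisruptionBudget (a sketch; the label is hypothetical, and in Kubernetes 1.5 the resource lived in policy/v1beta1): with at least two of the application's four pods required to stay up, the eviction of set B is refused while set A's pods are still coming back, and allowed once they are running again.

    apiVersion: policy/v1beta1            # policy/v1 in current releases
    kind: PodDisruptionBudget
    metadata:
      name: stateful-app-pdb
    spec:
      minAvailable: 2                     # at least half of the pods must stay up
      selector:
        matchLabels:
          app: stateful-app               # hypothetical label shared by pod sets A and B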
  110. 110. Network Policy. Problem: network policy is complicated! Today: use VM tooling to support security (but limit VM utilization), manage port-level security, or proxy everything
  111. 111. Solution: Network Policy Object! Network Policy
  112. 112. Network Policy Object VM 1 VM 2 VM 3 SCENARIO: Two-tier app needs to be locked down
  113. 113. Network Policy Object VM 1 VM 2 VM 3 SCENARIO: Two-tier app needs to be locked down
  114. 114. Network Policy Object VM 1 VM 2 VM 3 SCENARIO: Two-tier app needs to be locked down
  115. 115. Network Policy Object VM 1 VM 2 VM 3 SCENARIO: Two-tier app needs to be locked down ✓
  116. 116. Network Policy Object VM 1 VM 2 VM 3 SCENARIO: Two-tier app needs to be locked down ✓
  117. 117. Kubernetes Cluster Network Policy Object SCENARIO: Two-tier app needs to be locked down VM 1 VM 2 VM 3
  118. 118. Kubernetes Cluster Network Policy Object SCENARIO: Two-tier app needs to be locked down VM 1 VM 2 VM 3
  119. 119. Kubernetes Cluster Network Policy Object SCENARIO: Two-tier app needs to be locked down VM 1 VM 2 VM 3 ??
  120. 120. Kubernetes Cluster Network Policy Object SCENARIO: Two-tier app needs to be locked down VM 1 VM 2 VM 3 ??
  121. 121. Kubernetes Cluster Network Policy Object SCENARIO: Two-tier app needs to be locked down VM 1 VM 2 VM 3 ? ? ??
  122. 122. Kubernetes Cluster Network Policy Object SCENARIO: Two-tier app needs to be locked down VM 1 VM 2 VM 3 ? ? ?? Nothing can talk to anything!
  123. 123. Kubernetes Cluster Network Policy Object SCENARIO: Two-tier app needs to be locked down VM 1 VM 2 VM 3 Nothing can talk to anything!
  124. 124. Kubernetes Cluster Network Policy Object SCENARIO: Two-tier app needs to be locked down VM 1 VM 2 VM 3 “Green” can talk to “Red”
  125. 125. Kubernetes Cluster Network Policy Object SCENARIO: Two-tier app needs to be locked down VM 1 VM 2 VM 3 “Green” can talk to “Red” ✓ ✓
  126. 126. Kubernetes Cluster Network Policy Object SCENARIO: Two-tier app needs to be locked down VM 1 VM 2 VM 3 “Green” can talk to “Red” ✓
  127. 127. Kubernetes Cluster Network Policy Object SCENARIO: Two-tier app needs to be locked down VM 1 VM 2 VM 3 “Green” can talk to “Red” ✓ ✓
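In configuration terms (a sketch): first isolate the namespace so nothing can talk to anything, then allow only the "green" tier to reach the "red" tier. In the 1.5-era beta API this lived in extensions/v1beta1 with isolation switched on per namespace; the manifests below use the networking.k8s.io/v1 form that later became standard, and the tier labels are stand-ins for the diagram's colors:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny
    spec:
      podSelector: {}                     # every pod in the namespace
      policyTypes:
      - Ingress                           # no ingress rules listed: deny all inbound traffic
    ---
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: green-to-red
    spec:
      podSelector:
        matchLabels:
          tier: red                       # applies to the "red" pods
      ingress:
      - from:
        - podSelector:
            matchLabels:
              tier: green                 # only "green" pods may connect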
  128. 128. Helm. Problem: I need to deploy complicated apps! Today: manually deploy applications once per cluster, manually publish global endpoints and load balance, and build a control plane for monitoring applications
  129. 129. Helm. Solution: Helm, the package manager for Kubernetes. Think "apt-get/yum". Supports Kubernetes objects natively: Deployments, DaemonSets, Secrets & config, multi-tier apps, upgrades
  130. 130. Helm DaemonSets: DataDog Node 1 Node 2 Kubernetes Cluster Node 3
  131. 131. Helm DaemonSets: DataDog Node 1 Node 2 Kubernetes Cluster Node 3 helm install --name datadog --set datadog.apiKey=<APIKEY> stable/datadog
  132. 132. Helm DaemonSets: DataDog Node 1 Node 2 Kubernetes Cluster Node 3 helm install --name datadog --set datadog.apiKey=<APIKEY> stable/datadog
  133. 133. Solution: Helm - The Package manager for Kubernetes Helm
  134. 134. Solution: Helm - The Package manager for Kubernetes Helm
  135. 135. Solution: Helm - The Package manager for Kubernetes Helm
  136. 136. Solution: Helm - The Package manager for Kubernetes Helm
  137. 137. Solution: Helm - The Package manager for Kubernetes Helm helm install sapho
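The deck stops at installation; the rest of a release's lifecycle uses the same tool. These are ordinary Helm v2-era commands, shown against the DataDog release from the earlier slide:

    helm list                             # releases installed in the cluster
    helm status datadog                   # resources and notes for one release
    helm upgrade datadog stable/datadog --set datadog.apiKey=<APIKEY>
    helm delete datadog                   # remove the release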
  138. 138. Accelerating Stateful Applications
  139. 139. Accelerating Stateful Applications Management of storage and data for stateful applications on Kubernetes
  140. 140. Accelerating Stateful Applications Management of storage and data for stateful applications on Kubernetes Management of Kubernetes at enterprise scale
  141. 141. Accelerating Stateful Applications Container-optimized servers for compute and storage Management of storage and data for stateful applications on Kubernetes Management of Kubernetes at enterprise scale
  142. 142. Accelerating Stateful Applications Container-optimized servers for compute and storage Management of storage and data for stateful applications on Kubernetes Management of Kubernetes at enterprise scale +
  143. 143. Accelerating Stateful Applications Container-optimized servers for compute and storage Management of storage and data for stateful applications on Kubernetes Management of Kubernetes at enterprise scale + Automated Stateful Apps on K8S
  144. 144. What’s Next
  145. 145. What’s Next
  146. 146. What’s Next Nothing!*
  147. 147. What’s Next Nothing!* * for large values of “Nothing”
  148. 148. What’s Next: Nothing!* (* for large values of “Nothing”) Bringing many features from alpha to beta & GA, including: federated deployments and daemon sets, improved RBAC, StatefulSet upgrades, improved scaling & etcd 3, easy cluster setup for high-availability configurations, and an integrated Metrics API
  149. 149. Kubernetes is Open • open community • open design • open source • open to ideas Twitter: @aronchick Email: aronchick@google.com • kubernetes.io • github.com/kubernetes/kubernetes • slack.kubernetes.io • twitter: @kubernetesio
