
Running Production-Grade Kubernetes on AWS


Kubernetes has been a key component for many companies to reduce technical debt in infrastructure by:

• Fostering the Adoption of Docker
• Simplifying Container Management
• Onboarding Developers onto Infrastructure
• Unlocking Continuous Integration and Delivery

During this meetup we are going to discuss the following topics and share some best practices:

• What's new with Kubernetes 1.3
• Generate Cluster Configuration using CloudFormation
• Deploy Kubernetes Clusters on AWS
• Scaling the Cluster
• Integrating Ingress with Elastic Load Balancer
• Using Internal ELBs as Kubernetes Services
• Using EBS for persistent volumes
• Integrating Route53



  1. Running Production-Grade Kubernetes on AWS
  2. <>
  3. Let’s Play: join with Game PIN 728274
  4. Agenda ● What’s new in Kubernetes v1.3 ● Bootstrapping a K8s cluster on AWS ● Watchouts & Limitations
  5. Kubernetes 101 (Copyright 2015 Google Inc) ● Pods are ephemeral units used to manage one or more tightly coupled containers; they enable data sharing and communication among their constituent components. ● Labels are metadata attached to objects, such as pods; they enable organization and selection of subsets of objects within a cluster. ● Replication controllers create new pod "replicas" from a template and ensure that a configurable number of those pods are running. ● Services provide a bridge based on an IP and port pair for client applications to access backends without needing to write Kubernetes-specific code.
  6. What's new in Kubernetes 1.3
  7. Release Highlights ● Init Containers (alpha) ● Fixed PDs ● Cluster Federation (alpha) ● Optional HTTP/2 ● Pod-Level QoS Policy ● TLS Secrets ● kubectl set command ● UI ● Jobs ● RBAC (alpha, experimental) ● Garbage Collector (alpha) ● Pet Sets ● rkt runtime ● Network Policies ● kubectl auto-complete
  8. Init Containers
  9. Init Container: register a pod to an external service
  10. Init Container: clone a git repo into a volume
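The two init-container use cases above can be sketched in a single pod spec. In v1.3 init containers were alpha and were declared through the pod.alpha.kubernetes.io/init-containers annotation; the image name, repo URL and paths below are illustrative, not from the deck:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  annotations:
    # alpha syntax in 1.3: init containers are a JSON array in an annotation
    pod.alpha.kubernetes.io/init-containers: '[
      {
        "name": "clone-repo",
        "image": "alpine/git",
        "command": ["git", "clone", "https://github.com/example/app.git", "/work"],
        "volumeMounts": [{"name": "workdir", "mountPath": "/work"}]
      }
    ]'
spec:
  containers:
  - name: nginx
    image: nginx:1.9.1
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  volumes:
  - name: workdir
    emptyDir: {}
```

The init container runs to completion before the main container starts, so nginx only comes up once the volume is populated.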
  11. Jobs (pods are *expected* to terminate) A Job creates 1..n pods and ensures that a specified number of them run to completion. 3 job types: ● Non-parallel (normally only one pod is started, unless the pod fails) ● Parallel with a fixed count (complete when there is one successful pod for each value in the range 1 to .spec.completions) ● Parallel with a work queue
  12. Job: Work Queue with a Pod per Work Item
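A fixed-count parallel Job of the kind described above might look like this (the image, command and counts are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: process-items
spec:
  completions: 5     # the Job is done after 5 successful pod completions
  parallelism: 2     # at most 2 pods run at any one time
  template:
    metadata:
      name: process-items
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo processing one work item && sleep 5"]
      restartPolicy: Never
```

Omitting completions and parallelism gives the non-parallel case; a work-queue Job sets parallelism but leaves completions unset and lets the workers decide when the queue is drained.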
  13. Increased Scale ● Up to 2,000 nodes per cluster ● Up to 60,000 pods per cluster Under the bonnet, the biggest change behind the scalability improvements is the use of Protocol Buffer-based serialization in the API instead of JSON.
  14. Multi-Zone Clusters Deploy clusters across multiple availability zones to increase availability. ● Multiple zones can be configured at cluster creation or added to a cluster after the fact.
  15. Heterogeneous Clusters Customers can now add different types of nodes to the same cluster. ● NodePools allow different types of nodes to be joined to a single master, minimizing administrative overhead ● Built-in scheduler changes allow scheduling to node types with only a configuration change
  16. Cluster Federation Deploy a service to multiple clusters simultaneously (including external load balancer configuration) via a single Federated API. ● Federated Services span multiple clusters (possibly running on different cloud providers, or on premises) and are created with a single API call. ● The federation service automatically: ○ deploys the service across multiple clusters in the federation ○ monitors the health of these services ○ manages DNS records to ensure that clients are always directed to the closest healthy instance of the federated service. More info: sneak peek video
  17. New kubectl Commands The new kubectl set command allows the container image to be updated in a single one-line command: $ kubectl set image deployment/web nginx=nginx:1.9.1 To watch the update roll out and verify it succeeds, there is a new convenient command, rollout status. For example, to watch the rollout from nginx:1.7.9 to nginx:1.9.1: $ kubectl rollout status deployment/web Waiting for rollout to finish: 2 out of 4 new replicas have been updated... Waiting for rollout to finish: 3 out of 4 new replicas have been updated... deployment "web" successfully rolled out
  18. Cluster Autoscaling (alpha) Clusters can now automatically request more compute when they have scheduled more work than there is CPU or memory available. ● If there are no resources in the cluster to schedule a recently created pod, a new node is added. ● If a node is underutilized and all pods running on it can easily be moved elsewhere, the node can be drained and deleted. ● Pay only for resources that are actually needed, and get new resources when demand increases.
  19. Improved Dashboard Manage Kubernetes almost entirely through a web browser. ● All workload types are now supported, including DaemonSets, Deployments and rolling updates
  20. Minikube Minikube is a new local development platform for Kubernetes, so customers can begin developing on their desktop or laptop. ● Packages and configures a Linux VM, Docker and all Kubernetes components, optimized for local development ● Can be installed with a single command ● Alongside the regular pods, services and controllers, supports advanced Kubernetes features: ○ DNS ○ NodePorts ○ ConfigMaps and Secrets ○ Dashboards
  21. Stateful Workload Support (Pet Sets, alpha in Kubernetes 1.3) The new "PetSet" object provides a raft of features for supporting containers that run stateful workloads (such as databases or key-value stores), including: ● Permanent hostnames that persist across restarts ● Automatically provisioned persistent disks per container that live beyond the life of a container ● Unique identities in a group, to allow for clustering and leader election ● Initialization containers, which are critical for starting up clustered applications
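A minimal sketch of the alpha PetSet API as it existed in 1.3 (names, image and sizes are placeholders; field names changed in later releases, where PetSet became StatefulSet):

```yaml
apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: db
spec:
  serviceName: db        # a headless Service gives each pet a stable hostname: db-0, db-1, ...
  replicas: 2
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: mysql:5.6
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:    # one PersistentVolumeClaim is provisioned per pet
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

Each replica keeps its identity (hostname and volume) across rescheduling, which is what makes clustering and leader election workable.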
  22. What's coming next
  23. New Features Coming in Kubernetes 1.4 ● Full cross-cluster federation, including: ○ A single universal API ○ A global load balancer ○ Replica sets that span multiple clusters ● Granular permissions for clusters ● Simplified installation for common applications: one-line install of simple applications in fully tested configurations ● Universal setup: greatly simplified on-prem and complex cloud deployments ● Integrated external DNS (including Route53): simplified integration with external DNS providers Expected release date for 1.4 is 16 September
  24. Deploying K8s to AWS
  25. What we wanted to achieve...
  26. 4.5-Step Deployment into an Existing VPC Based on the CoreOS kube-aws project: $ kube-aws init (then adjust your cluster.yaml) $ kube-aws render (generates the CloudFormation stack) $ kube-aws validate $ kube-aws up (deploys the CF stack)
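The cluster.yaml you adjust between init and render looks roughly like this; every value below is a placeholder, and the exact set of supported keys depends on the kube-aws version:

```yaml
# cluster.yaml (minimal sketch; all values are placeholders)
clusterName: prod-k8s
externalDNSName: k8s.example.com          # Route53 A record for the controller
keyName: my-ec2-keypair                   # existing EC2 key pair for SSH access
region: eu-west-1
availabilityZone: eu-west-1a
kmsKeyArn: "arn:aws:kms:eu-west-1:123456789012:key/xxxxxxxx"
workerCount: 3
workerInstanceType: m3.medium
```

kube-aws render turns this into the CloudFormation template, so the YAML is the single place where the cluster's shape is declared.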
  27. What you get... A CloudFormation stack with: ● Controller (master) node with an EIP ● Auto Scaling group/launch configuration for worker nodes (fixed scaling) ● An A record in Route53 for the controller ● Security groups allowing traffic between the controller and workers ● IAM roles for both the controller and workers ● AWS add-ons (ELB and EBS integration)
  28. Watchouts! ● etcd high availability: build your own etcd cluster and expose it with an internal ELB (CF stack) ● Default TLS keys expire after 90 days: replace the generated TLS assets with your own ● Master/controller sizing: m3.xlarge for < 100 nodes, m3.2xlarge for < 250 nodes, c4.4xlarge for > 500 nodes
  29. Limitations ● Can't deploy the cluster into existing subnets (a fix is on the way in 0.9) ● PVs/PVCs are available only within a single zone, because EBS volumes live in a single AZ
  30. Scaling the cluster
  31. Exposing Services Externally with ELB (NodePort implementation): $ kubectl expose deployment nginx --port=80 --type=LoadBalancer Internally with ELB, via a Service annotation: kind: Service apiVersion: v1 metadata: name: nginx annotations:
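The internal-ELB manifest on the slide is truncated after annotations:. A complete version would plausibly look like the following, using the AWS cloud provider's internal load balancer annotation; the selector and port are illustrative:

```yaml
kind: Service
apiVersion: v1
metadata:
  name: nginx
  annotations:
    # asks the AWS cloud provider for an internal-facing ELB
    # instead of an internet-facing one
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 80
```

Without the annotation, type: LoadBalancer provisions an internet-facing ELB; with it, the ELB is only reachable from inside the VPC.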
  32. Persistent Volumes/Claims ● EBS volumes (available in a single AZ) ● EFS volumes (multi-AZ, but require manual recovery)
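A pre-created EBS volume can be exposed to pods through a PersistentVolume and matching claim, along these lines (the volume ID and size are placeholders; the volume, and hence any pod mounting it, is tied to one AZ):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-ebs
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0   # existing EBS volume in the cluster's AZ
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```

Pods then reference the claim by name; the scheduler must place them in the same AZ as the backing volume, which is exactly the limitation called out on slide 29.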
  33. Spot Instances Import the ASG into Spotinst's Elastigroup
  34. Next meetups: