AMAZON EKS DEEP DIVE
ANDRZEJ KOMARNICKI – DEVOPS ARCHITECT
Current and recent Amazon EKS platform versions are described below:

Kubernetes version 1.10 (patch 1.10.3), EKS platform version eks.2
Enabled admission controllers: Initializers, NamespaceLifecycle, LimitRanger, ServiceAccount, DefaultStorageClass, ResourceQuota, DefaultTolerationSeconds, NodeRestriction, MutatingAdmissionWebhook, ValidatingAdmissionWebhook
Release notes:
• Added support for the Kubernetes aggregation layer.
• Added support for the Kubernetes Horizontal Pod Autoscaler (HPA).
• Kubernetes Metrics Server 0.3.0 or greater is compatible with EKS platform version eks.2.

Kubernetes version 1.10 (patch 1.10.3), EKS platform version eks.1
Enabled admission controllers: Initializers, NamespaceLifecycle, LimitRanger, ServiceAccount, DefaultStorageClass, ResourceQuota, DefaultTolerationSeconds, NodeRestriction
Release notes: initial launch of Amazon EKS.
EKS CUSTOMERS
EKS – KUBERNETES MASTERS
EKS ARCHITECTURE
Amazon EKS Shared Responsibility Model
For Amazon EKS, AWS is responsible for the Kubernetes control plane, which includes the control
plane nodes and etcd database.
You assume responsibility and management of the following:
• The security configuration of the data plane, including the configuration of the security groups that
allow traffic to pass from the Amazon EKS control plane into the customer VPC
• The configuration of the worker nodes and the containers themselves
• The worker node guest operating system (including updates and security patches)
• Other associated application software:
  • Setting up and managing network controls, such as firewall rules
  • Managing platform-level identity and access management, either with or in addition to IAM
EKS NETWORKING
CNI PLUGIN
Any Kubernetes cluster on AWS
• EKS
• BYOK8s
DaemonSet deployment
• kubectl create -f eks-cni.yaml
CNI INFRASTRUCTURE
VPC CNI NETWORKING INTERNALS
VPC CNI PLUGIN ARCHITECTURE
Kubernetes + AWS IAM
• AWS native access management
• In collaboration with Heptio
• Kubectl and worker nodes
• Works with Kubernetes RBAC
IAM Auth Support == Upstream in 1.10
https://github.com/kubernetes-sigs/aws-iam-authenticator
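On the client side, kubectl obtains a token by calling aws-iam-authenticator through an exec entry in the kubeconfig. A minimal sketch; the user name and the cluster name `my-cluster` are placeholders:

```yaml
# kubeconfig fragment: kubectl invokes aws-iam-authenticator for a token.
# "my-cluster" is a placeholder for your EKS cluster name.
users:
- name: eks-admin
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - token
        - -i
        - my-cluster
```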
IAM AUTHENTICATION + KUBECTL
https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html
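On the cluster side, IAM entities are mapped to Kubernetes users and groups through the aws-auth ConfigMap in the kube-system namespace, which Kubernetes RBAC then acts on. A sketch, where the account ID, role, and user names are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Worker node instance role: lets nodes register with the cluster
    # (the role ARN is a placeholder).
    - rolearn: arn:aws:iam::111122223333:role/eks-worker-node-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    # Grant an IAM user cluster-admin via the RBAC system:masters group.
    - userarn: arn:aws:iam::111122223333:user/alice
      username: alice
      groups:
        - system:masters
```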
WORKER PROVISIONING
Load Balancing - Classic/NLB
Amazon EKS supports the Network Load Balancer and the Classic Load Balancer through the Kubernetes
service of type LoadBalancer. The configuration of your load balancer is controlled by annotations that are
added to the manifest for your service.
By default, Classic Load Balancers are used for LoadBalancer type services. To use the Network Load
Balancer instead, apply the following annotation to your service:
service.beta.kubernetes.io/aws-load-balancer-type: nlb
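Put together, a Service manifest requesting an NLB might look like the sketch below; the service name, selector, and ports are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    # Without this annotation, EKS provisions a Classic Load Balancer.
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```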
Load Balancing - ALB
• CoreOS ALB Ingress Controller: Supported by AWS (in beta)
• Exposes ALB functionality to Kubernetes via Ingress Resources
• Layer 7 load balancing, supports content-based routing by host
or path
https://github.com/kubernetes-sigs/aws-alb-ingress-controller
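A minimal Ingress for this controller might look like the following sketch. The `kubernetes.io/ingress.class: alb` and `alb.ingress.kubernetes.io/scheme` annotations are the controller's hooks; the host, paths, and service names are placeholders:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    # internet-facing or internal
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  rules:
    - host: example.com
      http:
        paths:
          # Content-based routing by path: /api and everything else
          # go to different backend Services.
          - path: /api
            backend:
              serviceName: api-svc
              servicePort: 80
          - path: /
            backend:
              serviceName: web-svc
              servicePort: 80
```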
The following diagram details the AWS components this controller creates. It also demonstrates the route
ingress traffic takes from the ALB to the Kubernetes cluster.
Ingress Creation
This section describes each numbered step (circle) in the diagram above. The example demonstrates satisfying a single ingress resource.
[1]: The controller watches for ingress events from the API server. When it finds ingress resources that satisfy its
requirements, it begins the creation of AWS resources.
[2]: An ALB (ELBv2) is created in AWS for the new ingress resource. This ALB can be internet-facing or internal.
You can also specify the subnets it's created in using annotations.
[3]: Target Groups are created in AWS for each unique Kubernetes service described in the ingress resource.
[4]: Listeners are created for every port detailed in your ingress resource annotations. When no port is specified,
sensible defaults (80 or 443) are used. Certificates may also be attached via annotations.
[5]: Rules are created for each path specified in your ingress resource. This ensures traffic to a specific path is
routed to the correct Kubernetes Service.
Along with the above, the controller also:
• deletes AWS components when ingress resources are removed from k8s.
• modifies AWS components when ingress resources change in k8s.
• assembles a list of existing ingress-related AWS components on start-up, allowing you to recover if the controller were to be restarted.
VISIBILITY THROUGHOUT YOUR KUBERNETES CLUSTER
LOG AGGREGATION IN CLOUDWATCH LOGS VIA FLUENTD
https://github.com/kubernetes/charts/tree/master/incubator/fluentd-cloudwatch
METRICS
CI/CD for apps on Kubernetes - options
Jenkins
AWS CodePipeline, AWS CodeCommit, AWS CodeBuild
AWS partners
• GitLab
• Shippable
• CircleCI
• Codeship
https://github.com/aws-samples/aws-kube-codesuite
Spot Instances
Amazon EC2 Spot Instances are spare EC2 capacity offered at discounts of 70-90% relative to On-Demand prices. The Spot price is determined by long-term trends in supply and demand and the amount of unused On-Demand capacity for a particular instance size, family, Availability Zone, and AWS Region.
If the available On-Demand capacity of a particular instance type is depleted, the Spot Instance is sent an interruption notice two minutes in advance so the workload can wrap up gracefully. I recommend a diversified fleet of instances, with multiple instance types, created by Spot Fleets or EC2 Fleets.
You can use Spot Instances for various fault-tolerant and flexible applications. In a workload that
uses container orchestration and management platforms like EKS or Amazon Elastic Container
Service (Amazon ECS), the schedulers have built-in mechanisms to identify any pods or
containers on these interrupted EC2 instances. The interrupted pods or containers are then
replaced on other EC2 instances in the cluster.
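A node-level handler typically polls the instance metadata endpoint `http://169.254.169.254/latest/meta-data/spot/instance-action`, which returns a small JSON document once an interruption is scheduled. A hedged sketch of the parsing side only (the JSON shape matches the documented metadata format; the polling and drain logic are left out):

```python
import json
from datetime import datetime, timezone

# Metadata endpoint that announces a pending Spot interruption.
METADATA_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"

def parse_interruption_notice(body):
    """Parse the JSON body from the spot instance-action endpoint.

    Returns (action, termination_time), where action is e.g. "terminate"
    and termination_time is a timezone-aware datetime in UTC.
    """
    notice = json.loads(body)
    when = datetime.strptime(
        notice["time"], "%Y-%m-%dT%H:%M:%SZ"
    ).replace(tzinfo=timezone.utc)
    return notice["action"], when

def seconds_until(termination_time, now=None):
    """How long the workload has left to drain before reclamation."""
    now = now or datetime.now(timezone.utc)
    return max(0.0, (termination_time - now).total_seconds())
```

In practice a handler would poll `METADATA_URL`, and on a non-404 response call `parse_interruption_notice` and start draining the node while `seconds_until` is still positive.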
Solution architecture
There are three goals to accomplish with this solution:
1. The cluster must scale automatically to match the demands of an application.
2. Optimize for cost by using Spot Instances.
3. The cluster must be resilient to Spot Instance interruptions.
These goals are accomplished with the following components:

Cluster Autoscaler
  Role in solution: scales EC2 instances in or out
  Code: open source
  Deployment: K8s pod DaemonSet on On-Demand Instances

Auto Scaling group
  Role in solution: provisions Spot or On-Demand Instances
  Code: AWS
  Deployment: via CloudFormation

Spot Instance interrupt handler
  Role in solution: sets K8s nodes to drain state when the Spot Instance is interrupted
  Code: open source
  Deployment: K8s pod DaemonSet on all K8s nodes with the label lifecycle=EC2Spot
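Restricting the interrupt handler to Spot nodes can be expressed with a nodeSelector on the lifecycle=EC2Spot label mentioned above. A sketch, where the image name is a placeholder:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: spot-interrupt-handler
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: spot-interrupt-handler
  template:
    metadata:
      labels:
        app: spot-interrupt-handler
    spec:
      # Run only on nodes launched with the label lifecycle=EC2Spot.
      nodeSelector:
        lifecycle: EC2Spot
      containers:
        - name: spot-interrupt-handler
          image: example/spot-interrupt-handler:latest  # placeholder image
```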
EKS Deep Dive Complete
http://www.linkedin.com/in/andrzejkomarnicki/
