In this talk, we provide an introduction to Amazon Elastic Container Service for Kubernetes (Amazon EKS). Learn the basics of managing, deploying, and scaling containerized applications using Kubernetes on AWS. We first provide a quick introduction to containers, Kubernetes, and Amazon EKS. Then we dive into a hands-on demonstration of Amazon EKS.
Flexible: Even the most complex applications can be containerized.
Lightweight: Containers leverage and share the host kernel.
Interchangeable: You can deploy updates and upgrades on-the-fly.
Portable: You can build locally, deploy to the cloud, and run anywhere.
Scalable: You can increase and automatically distribute container replicas.
Stackable: You can stack services vertically and on-the-fly.
Hardware-efficient: Containers improve hardware utilization.
Cost-effective: Higher utilization lowers infrastructure costs.
Docker is a Linux utility that allows for easy creation, distribution and execution of containerized applications.
Great for managing a small number of containers across a few physical/virtual servers.
A Dockerfile is a plain text file that specifies the components to include when assembling the Image.
An Image is a template for creating a Container. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries, and settings.
Images are stored in a Registry, such as Docker Hub or Amazon ECR (Elastic Container Registry).
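As an illustration, a minimal Dockerfile for a small Python web app might look like the sketch below (the app.py and requirements.txt files are hypothetical):

    # Start from an official Python base image
    FROM python:3.9-slim
    # Set the working directory and copy the application code into the image
    WORKDIR /app
    COPY . .
    # Install the dependencies listed in requirements.txt (hypothetical file)
    RUN pip install -r requirements.txt
    # Command to run when a container starts from this image
    CMD ["python", "app.py"]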
Tanya: Containers have been around for a very long time, but it wasn't until Docker was created that building, distributing, and running containerized applications became easy. If you have heard of containers, you have probably heard of Docker; the two are almost synonymous now. Docker's three main components are the Docker Engine, which lets you run containers on a single host; the Docker Registry, which lets you store and distribute images; and command line tools to manage containers and view logs. This is great for managing a handful of containers on a few hosts. But what happens when you start expanding? You need to scale out quickly, and doing this by hand becomes very tedious. That is where container orchestration comes into play: it manages a large number of containers distributed across many hosts running the Docker Engine.
Package apps into a unit.
Run the package the same on any platform.
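For example, assuming the hypothetical Dockerfile above, the basic build-ship-run workflow with the Docker CLI looks like this (the image name and registry are placeholders):

    docker build -t myapp:1.0 .                  # assemble an image from the Dockerfile
    docker run -p 8080:8080 myapp:1.0            # run the same package locally
    docker tag myapp:1.0 <registry>/myapp:1.0    # tag the image for your registry
    docker push <registry>/myapp:1.0             # push it so any host can pull and run it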
A production application can involve dozens of containers running across hundreds of machines.
Orchestration lets teams treat their data center as one massive computer.
Master Node:
The main machine that controls the nodes
Main entrypoint for all administrative tasks
It handles the orchestration of the worker nodes
Worker Node:
It is a worker machine in Kubernetes (formerly known as a minion)
This machine performs the requested tasks. Each Node is controlled by the Master Node.
Runs containers inside pods
This is where the Docker engine runs and takes care of downloading images and starting containers
Master: The machine that controls Kubernetes nodes. This is where all task assignments originate.
Node: These machines perform the requested, assigned tasks. The Kubernetes master controls them.
Pod: A group of one or more containers deployed to a single node. All containers in a pod share an IP address, IPC, hostname, and other resources. Pods abstract network and storage away from the underlying container. This lets you move containers around the cluster more easily.
Replication controller: This controls how many identical copies of a pod should be running somewhere on the cluster.
Service: This decouples work definitions from the pods. Kubernetes service proxies automatically get service requests to the right pod—no matter where it moves to in the cluster or even if it’s been replaced.
Kubelet: This service runs on nodes and reads the container manifests and ensures the defined containers are started and running.
kubectl: This is the command line configuration tool for Kubernetes.
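To make these pieces concrete, here is a minimal Pod manifest (the names and image are hypothetical, carried over from the Docker example above); in practice you would usually create pods through a controller rather than directly:

    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp-pod
      labels:
        app: myapp            # label that a Service can select on
    spec:
      containers:
      - name: myapp
        image: myapp:1.0      # hypothetical image from the earlier Docker example
        ports:
        - containerPort: 8080 # port the application listens on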
How are you using containers in your environment?
A rudimentary application of Linux containers treats them as efficient, fast virtual machines. Once you scale this to a production environment and multiple applications, it's clear that you need multiple, colocated containers working together to deliver the individual services. This significantly multiplies the number of containers in your environment and as those containers accumulate, the complexity also grows.
Kubernetes fixes a lot of common problems with container proliferation—sorting containers together into a "pod." Pods add a layer of abstraction to grouped containers, which helps you schedule workloads and provide necessary services—like networking and storage—to those containers. Other parts of Kubernetes help you load balance across these pods and ensure you have the right number of containers running to support your workloads.
With the right implementation of Kubernetes—and with the help of other open source projects like Atomic Registry, Open vSwitch, heapster, OAuth, and SELinux— you can orchestrate all parts of your container infrastructure.
Each node runs three main components that maintain running pods and provide the Kubernetes runtime environment. The kubelet is an agent that runs on each node and ensures that the containers in a pod are running. kube-proxy maintains the networking abstraction layer by managing network rules on the host node and performing the required port forwarding. Finally, each node needs container runtime software; we will be using Docker, but other runtimes such as rkt (rocket) and runc are supported.
Node agent that interprets the YAML manifests to run the containers as defined
The kubelet node agent periodically checks the health of the containers in a pod. In addition, it ensures that volumes are mounted as specified in the manifest, and it downloads the sensitive information required to run the container.
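For instance, a liveness probe in the container's spec tells the kubelet how to check container health; a sketch (the endpoint and port are hypothetical):

    livenessProbe:
      httpGet:
        path: /healthz          # hypothetical health-check endpoint
        port: 8080
      initialDelaySeconds: 5    # wait before the first check
      periodSeconds: 10         # check every 10 seconds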
Pod placement depends on each node's resource availability and on each pod's resource requirements.
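The scheduler works from the resource requests and limits declared in each container's spec; a sketch with illustrative values:

    resources:
      requests:             # minimum the scheduler must find on a node
        cpu: "250m"         # a quarter of a CPU core
        memory: "128Mi"
      limits:               # maximum the container may consume
        cpu: "500m"
        memory: "256Mi"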
A Service defines a logical set of Pods and a policy for how to access them.
https://kubernetes.io/docs/concepts/services-networking/service/
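A minimal Service manifest that selects the pods from the earlier example might look like this (names and ports are illustrative):

    apiVersion: v1
    kind: Service
    metadata:
      name: myapp-service
    spec:
      selector:
        app: myapp          # matches the pod label from the earlier manifest
      ports:
      - protocol: TCP
        port: 80            # port the Service exposes inside the cluster
        targetPort: 8080    # port the container actually listens on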
kubectl is a command line interface for running commands against Kubernetes clusters. This overview covers kubectl syntax, describes the command operations, and provides common examples.
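A few common examples (the manifest file name is hypothetical):

    kubectl apply -f myapp-pod.yaml    # create or update resources from a manifest
    kubectl get pods                   # list pods in the current namespace
    kubectl describe pod myapp-pod     # show detailed state and events for a pod
    kubectl logs myapp-pod             # print a pod's container logs
    kubectl delete -f myapp-pod.yaml   # remove the resources defined in the manifest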
Amazon EKS runs upstream Kubernetes, so applications are compatible whether they run in on-premises data centers or public clouds.
This means that you can easily migrate any standard Kubernetes application to Amazon EKS without any code modification required.
Deploy and manage applications on your Amazon EKS cluster the same way that you would with any other Kubernetes environment.
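For example, once your kubeconfig points at the EKS cluster, the standard tooling works unchanged (the cluster name and region are placeholders):

    aws eks update-kubeconfig --name my-cluster --region us-west-2   # configure kubectl for EKS
    kubectl get nodes                 # same commands as on any other Kubernetes cluster
    kubectl apply -f myapp-pod.yaml   # deploy the same hypothetical manifest as before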