6. You can start packaging your
applications into containers and
run them using Docker. This gives
you a clean way to define and
distribute applications, but once
the number of containers and nodes
starts growing you can no longer
manage them manually or just script
the docker run command: you need an
orchestrator.
7. Container
Orchestration
Running a container on a single local machine
is easy, but in a production environment you
will find yourself running hundreds of containers
across hundreds of different servers. You will
need to replace a container that was running
on a server that failed, manage the networking
between containers, scale them horizontally,
roll out updates, and so on.
This is why orchestrators came into play.
10. Docker and Kubernetes
Docker and Kubernetes have the largest
communities and the broadest adoption
Fully supported by all major Cloud providers
Fully supported in on-premises configurations
Part of the Open Container Initiative
Part of the Cloud Native Computing Foundation
Docker supports Kubernetes (now part of the
Enterprise Edition)
Docker supports migration from Swarm to
Kubernetes
Google Borg is the foundation of Kubernetes
12. Docker Basics
Dockerfile
Source code of an image
Image
Immutable package of an application and its dependencies
Composed of multiple layers
Container
Running instance of an image
Registry
Repository of images
Docker Daemon
Builds images
Runs containers
Docker CLI
Command-line client that talks to the daemon
13. Dockerfile
Image Build Instructions
A Dockerfile contains the instructions that
tell the docker build process how to
create a new image
Building an image is done by executing
commands inside a container
A container is the execution of an
image
Multi-stage builds should be used to
optimise the image creation process and
the image size
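As a minimal sketch of the idea (the application, file names and image tag are hypothetical), a Dockerfile lists the instructions the build process executes, each one adding a layer to the image:

```dockerfile
# Every image starts FROM a base image
FROM python:3.11-slim

# Each instruction below adds a layer to the new image
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# Default command executed when a container is started from this image
CMD ["python", "app.py"]
```

Running `docker build -t myapp:1.0 .` then produces the image, and `docker run myapp:1.0` starts a container from it.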
16. Master Node
Primary control plane for Kubernetes
Etcd
The etcd project, developed by the team at CoreOS, is a lightweight, distributed
key-value store that can be configured to span across multiple nodes.
Kubernetes uses etcd to store configuration data that can be accessed by each of the
nodes in the cluster.
Kube-apiserver
This is the main management point of the entire cluster as it allows a user to
configure Kubernetes' workloads and organizational units
The API server implements a RESTful interface
Kube-controller-manager
It manages different controllers that regulate the state of the cluster, manage
workload life cycles, and perform routine tasks.
When a change is seen, the controller reads the new information and implements
the procedure that fulfills the desired state.
Kube-scheduler
The process that actually assigns workloads to specific nodes
The scheduler is responsible for tracking available capacity on each host to make
sure that workloads are not scheduled in excess of the available resources.
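The capacity tracking described above is driven by the resource requests declared in pod specs; a sketch with illustrative names and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web            # hypothetical pod name
spec:
  containers:
  - name: web
    image: nginx:1.25
    resources:
      requests:        # what kube-scheduler reserves on the chosen node
        cpu: "250m"
        memory: "128Mi"
      limits:          # hard cap enforced at runtime
        cpu: "500m"
        memory: "256Mi"
```

The scheduler only places the pod on a node whose remaining allocatable capacity covers the requests, which is how workloads are kept from being scheduled in excess of the available resources.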
17. Worker Node
Hosts to run containers
Container Runtime
Typically Docker
Rkt and runC supported
Kubelet
The kubelet service communicates with the master components
to authenticate to the cluster and receive commands and work
The kubelet process then assumes responsibility for maintaining
the state of the work on the node server.
Kube-Proxy
To manage individual host subnetting and make services
available to other components.
18. Objects in Kubernetes
Pod
A pod generally represents one or more containers that should be controlled as a single application
Replication Controller and Replica Set
The replication controller is responsible for ensuring that the number of pods running in the cluster matches the number specified in its configuration, including the ability to increase or
decrease that number
Deployment
Deployments are high-level objects designed to ease the life cycle management of replicated pods
In case of configuration changes Kubernetes will adjust the replica sets, manage transitions between different application versions, and optionally maintain event history and undo
capabilities automatically
Stateful Set
Stateful sets provide a stable networking identifier by creating a unique, number-based name for each pod that will persist even if the pod needs to be moved to another node. Persistent
storage volumes can be transferred with a pod when rescheduling is necessary
Daemon Set
A specialized form of pod controller that runs a copy of a pod on each node
Services
A service groups together logical collections of pods that perform the same function to present them as a single entity
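The objects above are declared in YAML manifests. A minimal sketch (all names and the image are hypothetical) combining a deployment and the service that exposes its pods:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3          # the underlying replica set keeps three pods running
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello     # every pod created from this template carries the label
    spec:
      containers:
      - name: hello
        image: hello-app:1.0
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello         # the service groups all pods carrying this label
  ports:
  - port: 80
    targetPort: 8080
```

If a pod dies or its node fails, the replica set creates a replacement and the service automatically routes to it, since membership is by label rather than by individual pod.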
21. HELM
Package Manager
Part of the Cloud Native Computing Foundation
Designed to simplify management of
dependencies in Kubernetes deployments
CHARTS: Helm packages, a few YAML
configuration files
Mostly standard Kubernetes YAML format
Templates and values YAML files used to
abstract composition of Kubernetes YAML files
with variables (e.g. by environment)
requirements.yaml used to define
dependencies
22. ISTIO
Service Mesh
Traffic Management
Decouples traffic flow and infrastructure scaling, letting you
specify via Pilot what rules you want traffic to follow rather
than which specific pods/VMs
Security
Strong identity, powerful policy, transparent TLS
encryption, and authentication, authorization and audit
(AAA) tools
Policy and Telemetry
A flexible model to enforce authorization policies and
collect telemetry for the services in a mesh
Performance and Scalability
Support for Horizontal Pod Autoscaling
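Istio's traffic rules are themselves expressed as Kubernetes-style YAML objects. A sketch of a weighted routing rule (service name and subsets are illustrative; the subsets would be defined in a companion DestinationRule):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews        # hypothetical service name
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90       # 90% of traffic stays on v1
    - destination:
        host: reviews
        subset: v2
      weight: 10       # 10% canary traffic to v2
```

This is the decoupling mentioned above: the rule states how traffic should be split, and Pilot maps it onto whatever pods currently back each subset.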
This presentation is not intended as a full technical guide to containers and orchestrators; it aims to provide the context you need to understand what containers and orchestrators are about, as a first step towards further technical training.
Virtual machines (VMs) are an abstraction of physical hardware turning one server into many servers. The hypervisor allows multiple VMs to run on a single machine. Each VM includes a full copy of an operating system, the application, necessary binaries and libraries - taking up tens of GBs. VMs can also be slow to boot.
Containers are an abstraction at the app layer that packages code and dependencies together. Multiple containers can run on the same machine and share the OS kernel with other containers, each running as isolated processes in user space. Containers take up less space than VMs (container images are typically tens of MBs in size), can handle more applications and require fewer VMs and operating systems.
Docker is the most famous and by far the most widely adopted container technology, but it is not the only one.
Containers are based on capabilities of the OS kernel, such as kernel namespaces, cgroups and chroot. Container management software such as Docker provides a control plane, APIs and a CLI to more easily manage, in the form of pre-defined packages, the build and execution of images and containers.
Docker is an application container, like rkt and runC, while LXC (and the Ubuntu variant named LXD), Linux-VServer and OpenVZ are full-system containers (running a complete OS user space rather than a single application). For Microsoft Windows the alternatives are Hyper-V Containers (where each container runs in a lightweight VM with its own kernel) or Docker.
runC is not really a different container manager: it is the container runtime initially developed by Docker and donated to the Open Container Initiative (see: https://www.opencontainers.org/about/members ).
Each container technology defines its own format for the image package, although rkt is also able to run Docker images.
Kubernetes is considered the standard among container orchestrators, but it is not the only option.
Docker Swarm, Nomad, Kontena and Mesos are still possible alternatives.
Please remember that the image is an application image, so in order to run it has to be compatible with the OS kernel of the host.
Multi-stage builds are a new feature requiring Docker 17.05 or higher on the daemon and client. Multi-stage builds are useful to anyone who has struggled to optimise Dockerfiles while keeping them easy to read and maintain. One of the most challenging things about building images is keeping the image size down. Each instruction in the Dockerfile adds a layer to the image, and you need to remember to clean up any artefacts you don't need before moving on to the next layer. With multi-stage builds, you use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artefacts from one stage to another, leaving behind everything you don't want in the final image. By default, the stages are not named, and you refer to them by their integer number, starting with 0 for the first FROM instruction. However, you can name your stages by adding as <NAME> to the FROM instruction.
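A minimal sketch of a multi-stage Dockerfile (the Go application and paths are hypothetical): the first stage carries the full build toolchain, the second copies out only the compiled artefact, so the final image stays small:

```dockerfile
# Stage 1: build environment, named with AS so later stages can reference it
FROM golang:1.21 AS builder
WORKDIR /src
COPY . .
RUN go build -o /out/app .

# Stage 2: minimal runtime image; only the compiled binary is copied over,
# leaving the Go toolchain and sources behind in the discarded builder stage
FROM alpine:3.19
COPY --from=builder /out/app /usr/local/bin/app
CMD ["app"]
```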
Kubernetes, at its basic level, is a system for running and coordinating containerised applications across a cluster of machines. It is a platform designed to completely manage the life cycle of containerised applications and services using methods that provide predictability, scalability, and high availability.
Kubernetes brings together individual physical or virtual machines into a cluster using a shared network to communicate between each server. This cluster is the physical platform where all Kubernetes components, capabilities, and workloads are configured.
The machines in the cluster are each given a role within the Kubernetes ecosystem. One server (or a small group in highly available deployments) functions as the master server. This server acts as a gateway and brain for the cluster by exposing an API for users and clients, health checking other servers, deciding how best to split up and assign work (known as "scheduling"), and orchestrating communication between other components. The master server acts as the primary point of contact with the cluster and is responsible for most of the centralised logic Kubernetes provides. The other machines in the cluster are designated as nodes: servers responsible for accepting and running workloads using local and external resources. To help with isolation, management, and flexibility, Kubernetes runs applications and services in containers, so each node needs to be equipped with a container runtime (like Docker or Rkt). The node receives work instructions from the master server and creates or destroys containers accordingly, adjusting networking rules to route and forward traffic appropriately.
Additionally the “Cloud-Controller-Manager” is used in Cloud deployments.
Cloud controller managers act as the glue that allows Kubernetes to interact with providers that have different capabilities, features, and APIs, while maintaining relatively generic constructs internally. This allows Kubernetes to update its state information according to information gathered from the cloud provider, adjust cloud resources as changes are needed in the system, and create and use additional cloud services to satisfy the work requirements submitted to the cluster.
Helm can: Install software; Automatically install software dependencies; Upgrade software; Configure software deployments; Fetch software packages from repositories.
Helm provides this functionality through the following components:
A command line tool, helm, which provides the user interface to all Helm functionality.
A companion server component, tiller, that runs on your Kubernetes cluster, listens for commands from helm, and handles the configuration and deployment of software releases on the cluster.
The Helm packaging format, called charts.
During the installation of a chart, Helm combines the chart's templates with the configuration specified by the user and the defaults in values.yaml. These are rendered into Kubernetes manifests that are then deployed via the Kubernetes API. This creates a release, a specific configuration and deployment of a particular chart.
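A sketch of how the pieces fit together (chart contents are hypothetical): the template is standard Kubernetes YAML with placeholders that Helm fills in from values.yaml, or from user overrides, at install time:

```yaml
# values.yaml -- per-chart defaults, overridable per release/environment
replicaCount: 2
image: nginx:1.25

# templates/deployment.yaml -- rendered into a plain Kubernetes manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-web
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-web
    spec:
      containers:
      - name: web
        image: {{ .Values.image }}
```

Because the release name is part of every rendered object name, installing the same chart twice under different release names yields two independent deployments, which is what makes the per-release upgrades described below possible.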
This concept of releases is important, because you may want to deploy the same application more than once on a cluster. For instance, you may need multiple RabbitMQ servers with different configurations. You also will probably want to upgrade different instances of a chart individually. Perhaps one application is ready for an updated RabbitMQ server but another is not. With Helm, you upgrade each release individually.