4. Why
Scenario
● 100,000s of jobs (batch, cron, etc.)
● 1000s of different applications (stateless & stateful)
● Across n clusters
● Each with 10,000s of machines
7. Why
● High utilization
● High availability
● Minimize fault recovery time
● Reduce the probability of correlated failures
with
● declarative job specification language
● name service integration
● real-time job monitoring
● analyze and simulate system behavior
● APIs & dashboard
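To illustrate the declarative style, here is a minimal sketch of a Kubernetes Deployment manifest; the names and image are placeholders, not from the talk. The user declares the desired state (three replicas of a container) and the system converges to it:

```yaml
# Hypothetical example of a declarative specification:
# the user states "3 replicas of this container should be running"
# and Kubernetes continuously reconciles toward that state.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app          # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx:1.25  # placeholder image
        ports:
        - containerPort: 80
```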
8. What
A. a container platform
B. a microservices platform
C. a portable cloud platform
D. a platform for managing containerized workloads and services
a. Portable
b. Extensible
c. Open source
E. provides a container-centric management environment
The name Kubernetes originates from Greek, meaning helmsman or pilot, and is the root of governor and cybernetic. K8s is an abbreviation derived by replacing the 8 letters “ubernete” with “8”, in the same fashion as a11y and i18n.
Succinctly, the layers comprise:
The Nucleus which provides standardized API and execution machinery, including basic REST mechanics, security, individual Pod, container, network interface and storage volume management, all of which are extensible via well-defined interfaces. The Nucleus is non-optional and expected to be the most stable part of the system.
The Application Management Layer which provides basic deployment and routing, including self-healing, scaling, service discovery, load balancing and traffic routing. This is often referred to as orchestration and the service fabric. Default implementations of all functions are provided, but conformant replacements are permitted.
The Governance Layer which provides higher level automation and policy enforcement, including single- and multi-tenancy, metrics, intelligent autoscaling and provisioning, and schemes for authorization, quota, network, and storage policy expression and enforcement. These are optional, and achievable via other solutions.
The Interface Layer which provides commonly used libraries, tools, UIs and systems used to interact with the Kubernetes API.
The Ecosystem which includes everything else associated with Kubernetes, and is not really "part of" Kubernetes at all. This is where most of the development happens, and includes CI/CD, middleware, logging, monitoring, data processing, PaaS, serverless/FaaS systems, workflow, container runtimes, image registries, node and cloud provider management, and many others.
The kubelet is the primary implementer of the Pod and Node APIs that drive the container execution layer. It is an agent that runs on each node in the cluster and makes sure that containers are running in a pod.
The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy.
It is the lowest-level component in Kubernetes and supports Docker and rkt as container runtimes.
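As a sketch of what the kubelet acts on, here is a minimal Pod manifest (name and image are placeholders): the kubelet on the scheduled node pulls the image, starts the container, and restarts it if the liveness probe reports it unhealthy.

```yaml
# Minimal PodSpec sketch. The kubelet ensures this container is
# running, and "healthy" here means the HTTP liveness probe succeeds.
apiVersion: v1
kind: Pod
metadata:
  name: web            # placeholder name
spec:
  containers:
  - name: web
    image: nginx:1.25  # placeholder image
    livenessProbe:
      httpGet:
        path: /
        port: 80
```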
A group of one or more containers (such as Docker or rkt), with shared storage/network, and information about how to run the containers. A pod’s contents are always co-located and co-scheduled, and run in a shared context.
Containers within a pod share an IP address and port space, and can find each other via localhost
Applications within a pod also have access to shared volumes
Containers should only be scheduled together in a single Pod if they are tightly coupled and need to share resources such as disk
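A sketch of such a tightly coupled pairing, with placeholder names and images: the two containers share the pod's IP (so they can reach each other via localhost) and an `emptyDir` volume, which is the kind of shared-disk coupling that justifies co-scheduling.

```yaml
# Hypothetical two-container Pod: an app writes logs to a shared
# emptyDir volume and a sidecar reads them from the same volume.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar       # placeholder name
spec:
  volumes:
  - name: shared-logs
    emptyDir: {}               # scratch volume shared by both containers
  containers:
  - name: app
    image: nginx:1.25          # placeholder image
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  - name: log-shipper          # hypothetical sidecar
    image: busybox:1.36        # placeholder image
    command: ["sh", "-c", "tail -F /logs/access.log"]
    volumeMounts:
    - name: shared-logs
      mountPath: /logs
```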
A node is a worker machine in Kubernetes and may be a VM or physical machine, depending on the cluster. Multiple Pods can run on one Node.
Every Kubernetes Node runs at least:
Kubelet, a process responsible for communication between the Kubernetes Master and the Node; it manages the Pods and the containers running on a machine.
A container runtime (like Docker, rkt) responsible for pulling the container image from a registry, unpacking the container, and running the application.
Pods represent running processes on nodes in the cluster.
"Preferentially run my workloads in my on-premise cluster(s), but automatically overflow to my cloud-hosted cluster(s) if I run out of on-premise capacity".
"Most of my workloads should run in my preferred cloud-hosted cluster(s), but some are privacy-sensitive, and should be automatically diverted to run in my secure, on-premise cluster(s)".
"I want to avoid vendor lock-in, so I want my workloads to run across multiple cloud providers all the time. I change my set of such cloud providers, and my pricing contracts with them, periodically".
"I want to be immune to any single data centre or cloud availability zone outage, so I want to spread my service across multiple such zones (and ideally even across multiple cloud providers)."
Kubectl
kubectl is the command line tool for Kubernetes. It controls the Kubernetes cluster manager.
Kubeadm
kubeadm is the command line tool for easily provisioning a secure Kubernetes cluster on top of physical or cloud servers or virtual machines (currently in alpha).
Kubefed
kubefed is the command line tool to help you administer your federated clusters.
Minikube
minikube is a tool that makes it easy to run a single-node Kubernetes cluster locally on your workstation for development and testing purposes.
Dashboard
Dashboard, the web-based user interface of Kubernetes, allows you to deploy containerized applications to a Kubernetes cluster, troubleshoot them, and manage the cluster itself and its resources.
Helm
Kubernetes Helm is a tool for managing packages of pre-configured Kubernetes resources, aka Kubernetes charts.
Use Helm to:
Find and use popular software packaged as Kubernetes charts
Share your own applications as Kubernetes charts
Create reproducible builds of your Kubernetes applications
Intelligently manage your Kubernetes manifest files
Manage releases of Helm packages
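A chart is a directory of templated Kubernetes manifests plus metadata. As a sketch, the metadata file `Chart.yaml` at the chart's root (all values here are hypothetical) looks like:

```yaml
# Hypothetical Chart.yaml: the metadata Helm reads to identify
# and version a chart; the templated manifests live alongside it.
apiVersion: v1
name: my-app                 # placeholder chart name
version: 0.1.0               # chart version, bumped on each release
description: A hypothetical chart packaging an application's manifests
```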
Kompose
Kompose is a tool to help Docker Compose users move to Kubernetes.
Use Kompose to:
Translate a Docker Compose file into Kubernetes objects
Go from local Docker development to managing your application via Kubernetes
Convert v1 or v2 Docker Compose YAML files or Distributed Application Bundles
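For instance, a small v2 Compose file like the following (service name and image are placeholders) is the kind of input Kompose translates into Kubernetes objects such as a Deployment and a Service:

```yaml
# Hypothetical docker-compose.yml (v2) as Kompose input.
version: "2"
services:
  web:
    image: nginx:1.25   # placeholder image
    ports:
    - "80:80"           # published port becomes a Service port
```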
Skaffold is a command line tool that facilitates continuous development for Kubernetes applications. You can iterate on your application source code locally then deploy to local or remote Kubernetes clusters. Skaffold handles the workflow for building, pushing and deploying your application. It can also be used in an automated context such as a CI/CD pipeline to leverage the same workflow and tooling when moving applications to production.
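Skaffold is driven by a config file; as a sketch (image name and manifest paths are placeholders), a minimal `skaffold.yaml` tells it which image to build and which manifests to redeploy on each source change:

```yaml
# Hypothetical skaffold.yaml: on every source change, rebuild the
# image and re-apply the listed manifests to the target cluster.
apiVersion: skaffold/v1
kind: Config
build:
  artifacts:
  - image: my-app      # placeholder image name
deploy:
  kubectl:
    manifests:
    - k8s/*.yaml       # placeholder manifest path
```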