Aniekan Akpaffiong
Updated May 2017
The Absolute Best Compendium of Docker
The ABC of Docker
Systems Administrator | Instructor | Presenter | Principal Systems Engineer | Project Manager | Account Support Manager
ACC CAPITAL HOLDINGS
Presentation Introduction
Goal
• Codify my experience with Docker around:
  – Docker technologies
  – Containers vs. virtualization
  – Critical concepts
  – Usage examples
• Present lessons learned
• Promote the use of the Docker Container Management platform
Consider this a work-in-progress
Table of Contents
• Introduction
• Docker Technology
• Containers vs. Virtual Machines
• Deployment Model
• Docker Components
• Docker Command Line
• Linux Command Line
• Relevant Linux Features
• Docker Commands
• Terms
Module 1
Docker Introduction
• Docker enables the creation and management of lightweight, self-
contained, immutable runtime environments, called Containers.
• The container packages an application workload (and its
dependencies) in a compute environment with its own CPU,
memory, and I/O resources.
• Docker enables the efficient management and friction-less
deployment of containers onto any Docker platform, and at any
software lifecycle phase from development to production.
Introduction
• Docker promises to encapsulate an application, deploy it in a
repeatable manner across any Docker-enabled platform, and
manage it efficiently at scale
Introduction
• At a high level, Docker helps make the development, distribution
and execution of applications (packaged as Containers) as
frictionless as possible
• Docker provides a management framework for application
virtualization
• A Docker environment is configurable, manually via command-line
tools such as the Docker Client and programmatically via a REST API
Introduction
• Docker and Container are sometimes used interchangeably;
however, Docker is essentially a Container management solution
Introduction
• Containers offer an environment as close as possible to that of a
virtual machine (VM) without the overhead that comes with
running a separate kernel and simulating the hardware
• A Container could be correctly described as operating system
virtualization
– it facilitates running multiple, isolated, user-space operating environments
(containers) on top of a single kernel
Introduction
The Docker ecosystem includes:
Object Layer
• Container
• Image
Docker Layer
• Docker Host (daemon,
REST, clients)
• Drivers and Plug-ins
(storage, networking)
• Docker Registry
• Tools (Swarm mode,
Compose)
Host Layer
• Linux
• Mac OS
• Windows
Platform Layer
• Bare metal
• Virtual machine
• Cloud
Introduction - Docker Ecosystem
• Object layer:
  – Docker runs applications, packaged as containers
  – Applications are deployed from remote or local registries
• Docker layer:
  – Docker Host (daemon, API, clients)
  – Drivers and Plug-ins (storage, networking)
  – Docker Registry (Hub and Store)
  – Tools such as Swarm, Compose
Introduction - Docker Ecosystem
• Host OS layer:
  – Docker is available on Linux, Mac OS, Windows hosts
Introduction - Docker Ecosystem
• Platform layer:
  – Docker host is deployable on any platform, from local physical hosts to virtual machines and the cloud
Introduction – Putting it all together: Build, Ship, Run
• Docker's Container as a Service (CaaS) workflow, i.e. how applications move from development to
deployment.
• Build: Docker facilitates the dev/test environment. The developer creates the
application. The finished application is bundled as a Docker image
• Ship: The Docker image is pushed to a registry (an image distribution portal,
e.g. Docker Trusted Registry) or Docker Hub by the DEV team. OPS accesses
and pulls down the image from the registry
• Run: The image is instantiated (i.e. run in a container), managed and scaled on any
Docker-enabled platform
[Diagram: Dev Team → Build (Development) → Ship (Content & Collaboration) → Run (Deploy & Manage) → Ops Team]
Introduction – Putting it all together: Build, Ship, Run
• Docker provides the mechanism to build, ship, and run any app, on any OS, on any platform¹
• Build an app via Docker CLI or
orchestration tools such as docker build,
docker create, docker-compose.
• Ship the app by uploading it to a Docker
Registry via docker push
• Run the app by pulling its image from
the registry, docker pull, and start it via
docker run.
• Leverage additional Docker tools
(compose, swarm mode, datacenter) to
orchestrate and secure the environment
¹ With limitations.
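A minimal command-line sketch of this workflow (the image name myapp and the account myuser are illustrative, not from the deck):
# Build: create an image from the Dockerfile in the current directory
$ docker build -t myuser/myapp:1.0 .
# Ship: push the image to a registry (Docker Hub in this sketch)
$ docker push myuser/myapp:1.0
# Run: on any Docker-enabled host, pull the image and start it
$ docker pull myuser/myapp:1.0
$ docker run -d --name myapp myuser/myapp:1.0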
Introduction – Use Case
Use Cases Enabled by Docker CaaS
Cloud
• Cloud Migration
• Hybrid Cloud
• Multi-Cloud
Apps
• Containerization
• Microservices
• CI/CD - Continuous Integration,
Continuous Deployment
• DevOps
• Self-service Portal
Data
• Data Processing
• Pipelines
Introduction – Use Case
• Docker affords developers:
– assurance that locally developed apps run unmodified on any Docker platform
– application portability across platforms: physical, virtual, cloud
– consistent deployment model from Development to Production
– focus on writing code instead of micro-managing systems setup
– access to an ecosystem of apps and easy app integration model
– freedom to rebuild/redeploy instead of upgrading when deploying new app versions
• Docker allows operations:
– flexibility to choose a deployment model that best suits the workload
– reduction in number of systems under management relative to the workload
– built-in tools for management, clustering, orchestration
Module 2
Docker Technology
Technology
• Docker containers wrap an application in an environment that
contains everything it needs to run: code, runtime, system tools,
system libraries
• The Docker container can be executed on any Docker-enabled
platform with the guarantee that the execution environment
exposed to the application will be the same in development and
production
Technology
• The goal of the container is to guarantee, with as much efficiency
as possible, that the application will run the same, regardless of the
platform
Technology
Containers provide benefits to both the infrastructure and the application
Infrastructure
• Isolates application processes on a shared OS kernel
• Creates light, dense execution environments
• Enables portability across platforms
Application
• Application and dependencies packaged as a portable, immutable environment
• Facilitates continuous integration and continuous development (CI/CD)
• Eases access to and sharing of containerized components
From: Containers for the Enterprise: A Red Hat Virtual Event
Technology
• Containers transform Applications,
Infrastructure and Processes
– Applications: decomposing development into
services that can be developed independently,
improving efficiency, agility and innovation
– Infrastructure: moving from the traditional datacenter
to Cloud to a flexible Hybrid model
– Processes: enables easy adoption of Agile and
DevOps processes over the traditional Waterfall
model, the goal being improved flexibility,
innovation and go-to-market speed
From: Why containers - Beginning of the buyer’s journey -- IT Leader audience by Red Hat
Container Runtime Format
• A container format describes how a container is packaged
• Docker deployed several runtime formats before settling on containerd:
– Linux Container (LXC)
• LXC is an operating system-level virtualization solution for running multiple isolated Linux
systems (containers) on top of a single kernel.
• Available in Docker up until Docker v1.8 (optional as of Docker v0.9)
– Libcontainer
• Unifies and standardizes the way apps are packaged, delivered, and run in software
containers.
• A library that provides Docker direct access to Linux container APIs, in a consistent and
predictable way, and without depending on LXC or any other user-space packages
• Introduced as the default in Docker 0.9
Container Runtime Format
• Current Docker container format:
– runC
• runC is a lightweight, portable container runtime
• an API used by Docker to interact with system containment features
• benefits include a consistent interface to containment features across Linux
distributions
• is based on libcontainer
– containerd
• the latest Universal Runtime on Linux
• responsible for running and monitoring Docker Containers
• has multiple components including Executor, Supervisor and runC
Execution Environment
• Docker combines:
– kernel features (such as cgroups, namespaces, etc.)
– a Union File System
– a unified, low-level container format (runC)
– a management framework
to build, ship and run portable, immutable and efficient computing environments called containers.
Resource Allocation & Isolation
• Cgroups - resource allocation - limits usage
– limits an application to a specific set of resources (CPU, memory, I/O,
network, etc.)
– allows Docker to share available system resources to containers and enforce
limits and constraints
• Namespaces - resource isolation - limits access
– a feature of the Linux kernel that isolates and virtualizes system resources
and applies them to a workload or a set of processes.
– allows an application to have its own view and control of shared system
resources such as network stack, process space, mount point, etc.
Resource Allocation & Isolation
• Cgroups and Namespaces are capabilities of the Linux kernel which
sandbox processes from one another, and control their resource
consumption
Linux namespaces
Namespace Description
UTS Unix Timesharing System - isolates two system identifiers: nodename and domainname –
e.g. allows a container to have its own hostname independently of the host and other
containers.
IPC Inter-Process Communication - manages access to IPC resources: queues, semaphores, and
shared memory - processes/groups can have their own IPC resources.
PID Process ID – PID isolation - segments the PIDs that one container can view and manipulate
MNT Mount - filesystem mount points - processes can have their own root FS. The mount
namespace gives you a scoped view of the mounts on your system.
NET Network - manages network interfaces: IP, routes, devices, etc. - provides a logical copy of
the network stack, with its own routing tables, firewall rules and network devices
USER UID, GID - User namespaces allow containers to have a different view of the UID and GID
space than the host system.
Linux namespaces
• A namespace wraps a global system resource in an abstraction that
makes it appear to the processes within the namespace that they
have their own isolated instance of the global resource
• Namespaces provide a form of isolation for the Docker container
– It prevents processes running in a container from seeing or affecting
processes in another container or in the host system
– It limits what a container can see and how it presents itself to the rest of the
system
• Namespaces create a "wall" around a container
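As a quick, hedged illustration of namespace isolation (any small image works; alpine is used here):
# The PID namespace hides host processes from the container;
# ps inside the container reports only the container's own process
$ docker run --rm alpine ps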
Control Groups
• Control groups or CGroups implement resource accounting, resource
limiting and process prioritization
– track resource usage and help ensure that each container gets its (fair)
share of system resources (memory, CPU, disk I/O)
– A benefit of cgroups is that it prevents a single container from bringing
down a host by consuming more system resources than it should.
• CGroup use cases include:
– fending off certain types of denial-of-service attacks
– creating good citizens in multi-tenant platform-as-a-service (PaaS)
environments
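Docker surfaces cgroup limits through docker run flags; a minimal sketch (the image and limit values are illustrative):
# Cap the container at 512 MB of memory and one CPU,
# and lower its CPU shares relative to other containers
$ docker run -d --memory=512m --cpus=1 --cpu-shares=512 nginx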
[Diagram: Hardware → Host Operating System → Docker → Container, bounded by Namespaces and CGroups]
If Namespaces create a wall around a container, CGroups form the floor and ceiling of each container.
Containerization Timeline
[Diagram: container technology timeline]
Container Security
• An image pushed to a public registry might inadvertently expose
sensitive private data
• Be cautious that Dockerfile instructions such as COPY, ADD
or ENV do not inadvertently expose sensitive information
– If sensitive information is needed, consider incorporating it at runtime, in
the docker run command.
– Docker Compose provides an improvement, keeping the Dockerfile
clean of sensitive information and avoiding exposing it at runtime, via the use
of the docker-compose.yml file
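One hedged illustration of keeping a secret out of the image: supply it only at runtime with -e (the variable and image names are hypothetical):
# Baking ENV DB_PASSWORD=... into the Dockerfile stores the secret in an image layer;
# injecting it when the container starts keeps it out of the image
$ docker run -d -e DB_PASSWORD="$DB_PASSWORD" myapp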
Container Security
• “Effective security is pervasive. It should be taken into account at every point in
the lifecycle”
• “Leverage security best practices such as:
– minimizing attack surface
– securing the borders
– trusted sources
– continuous scans
– timely patching
– defense-in-depth
– separation of controls e.g.
• middle-ware (software architects)
• applications (developers)
• base image (administrator)”
From: Containers for the Enterprise: A Red Hat Virtual Event, March 2017
Container Security
• Docker suggests several areas to consider with respect to security:
– the intrinsic and applied security of the kernel
• Kernel namespaces
• Control Groups
– Attack Surface of Docker Daemon
– Security Configuration and Best Practices
Container Security
• Do not relax your security posture just because you use Docker
Containers
Module 3
Containers vs. Virtual Machines
Containers vs. Virtual Machines
• In Hardware Virtualization, a physical
computer can be turned into one or
more logical computers, called
Virtual Machines (VMs)
– Hardware Virtualization decouples the
application from the underlying
hardware
– Hardware Virtualization partitions a
physical computer
– Virtual machines present a supporting
environment for applications to run
Type 1 Hypervisor
[Diagram: Hardware → Hypervisor → VM1, VM2, VM3]
Bare Metal
[Diagram: Hardware → Host Operating System → Application (monolith or micro-services)]
• The OS is tightly integrated with the hardware: device drivers, CPU, disk, etc.
• The application is tightly integrated with the OS
• Moving an application between systems is complex; moving a running application is very complex
• Moving an OS between systems is very complex
Virtualization
[Diagram: Hardware → Hypervisor → Virtual Machine (Guest OS + Application, monolith or micro-services)]
• The Hypervisor is tightly integrated with the hardware
• The application, guest OS and VM are integrated
• Moving a VM (with integrated guest OS and application) between hypervisors is routine
Containerization
[Diagram: Hardware → Host Operating System → Docker → Container (Application, monolith or micro-services)]
• The Operating System is tightly integrated with the hardware; Docker is tightly integrated with the OS kernel
• The application and container are integrated
• Moving a Container (with integrated application) between Docker platforms is routine
Containers vs. Virtual Machines
• A Docker container is similar to a virtual machine, however:
– Containers, operating at a higher level, decouple the application from the
underlying operating system
– Containers partition processes running on a single operating system
– Containers share the host OS kernel. Virtual machines share the hypervisor
[Diagram: side-by-side comparison —
Virtualization: Hardware → Hypervisor (tightly integrated with hardware) → Guest OS + Application (application, guest OS and VM are integrated);
Containerization: Hardware → Host OS Kernel (Operating System tightly integrated with hardware) → Docker (tightly integrated with the OS kernel) → Container + Application (application and container are integrated)]
Containers vs. Virtual Machines
Footprint
• Each VM runs a complete (guest) operating system
• Containers share the host’s operating system kernel
• Advantage… to the container, as sharing the kernel allows for more efficiency, e.g. a reduction in maintenance
Process
• Each VM hosts an operating system, with a full complement of native applications and processes
• A Docker container runs by default a single application
• Advantage… to the container. A single-application system provides improved agility
Setup
• Setting up a VM requires subject matter expertise and system resources
• A container is a user-space process and demands fewer resources
• Advantage… to the container. It is a more developer-friendly environment
Portability
• A VM is set up as a standalone environment with the full execution environment needed by its main applications
• The container is a single-application environment. For multi-tier applications, multiple containers are typically used
• Advantage… to the container. Both are portable, however the container provides a higher-level abstraction
Containers vs. Virtual Machines: Similarities
• Containers package an application with all of its dependencies and allow it to run
the same on any platform.
• Virtual machines package an operating system with all its dependencies and
allow it to run the same independent of the hardware platform
Containers vs. Virtual Machines: Similarities
Feature – Benefit (compared across Container and Virtual Machine):
• Lightweight – Leverage resources more efficiently than bare-metal single-server implementations
• Shell access – Connect to the shell remotely or via console
• Has own process space – Run in a partitioned environment
• Has own network interface – Ability to create its own network access
• Root access – Log in as, or escalate privileges to, ‘administrator’
• Install and update services – Independently update the environment
• Leverages host’s kernel – Optimized space and memory utilization
• Optimized for single workloads – Enhanced portability
• Minimum system processes – Efficiency through reduced footprint and management, by eliminating unneeded libraries and services
• Runs as a process on the host OS – Management flexibility and improved resource utilization
• Boot a different OS – Flexibility to choose the right OS for any particular workload
• Maturity – Robust feature set for resiliency, management and support
Containers vs. Virtual Machines
• “Containers are to Virtual Machines as Threads are to Processes.”
Containers vs. Virtual Machines: Complementary
• The decision to use Containers or Virtual Machines should not be
considered a zero-sum game
• There are cases where:
– Containers are a better fit, e.g. an agile software lifecycle
– Virtual Machines are a better fit, e.g. a hostile multi-tenant environment
• Containers and Virtual Machines
can be complementary, e.g.
– a Virtual Machine hosting a Container
environment
Containers vs. Virtual Machines: Complementary
• Containers and VMs can complement each other
• Containers require a compatible host operating system
• A VM provides extreme isolation (e.g. in multi-tenant environments)
• Containers start in seconds or less; a VM can take minutes to boot
• Containers can be deployed inside a VM to leverage the best features of each platform
Module 4
Docker Deployment Model
Docker and the Host OS Kernel
[Diagram: host Kernel → Docker Engine → containers built from base images such as Debian and Alpine (e.g. nginx), each topped by its own writable layer]
• Docker uses the host operating system’s kernel as a base.
• The kernel contains a core set of components required by containers on the host.
• Needed resources not in the kernel (e.g. binaries, libraries, etc.) are supplied by the container’s base image or by subsequent layers.
Docker Deployment Model
[Diagram: the relationship between the components on a Docker host — Docker Host → Docker Engine (Docker Daemon, REST API, CLI Tools) → Container(s)]
• You can install Docker, or more specifically the Docker Engine, on top of a Linux, Mac or Windows host
Docker can be installed as an
application on older Windows or Mac
systems via the Docker Toolbox.
Toolbox uses docker-machine to
provision a VirtualBox VM, which runs
the boot2docker Linux distribution,
and manages containers via the
Docker daemon.
Docker Deployment Model
Docker Toolbox
Minimum System Requirements (Docker Toolbox has less rigorous requirements):
• Windows: 64-bit Windows 7 (or higher), Hardware-Assisted Virtualization
• Mac: macOS 10.8 “Mountain Lion” or newer
Included components: Docker Machine, Docker Engine, Docker Compose, Kitematic, Boot2Docker, VirtualBox
Docker Deployment Model
Docker for Windows & Docker for Mac
• Docker can install natively either on a Windows OS using a Hyper-V VM, or on a Mac OS using the HyperKit VM.
• Runs Linux containers only
Minimum System Requirements:
• Mac: Mac must be a 2010 or newer model; OS X El Capitan 10.11 or later
• Windows: 64-bit Windows 10 Pro, Enterprise and Education (1511 November update, Build 10586 or later)
Included components: Docker Engine, Docker Registry, Docker Compose, Docker Machine
Docker Deployment Model
Native Linux
• Docker can be deployed natively on a Linux Operating System.
• The Docker engine is installed on the system, with the Docker daemon managing the containers and the Docker client providing access to the Docker daemon.
Minimum System Requirements:
• Linux: 64-bit version of distributions running version 3.10+ of the Linux kernel
Docker Deployment Model
Docker on Windows
• Docker can be deployed natively on Windows Server 2016 and Windows 10.
• Use the Docker CLI or PowerShell to manage containers; there is no need for a virtual machine or Linux.
• Run any Windows application inside a Docker container
Minimum System Requirements:
• Windows: Windows Server 2016 and Windows 10
Included components: Docker Engine, Docker Registry, Docker Compose, Docker Machine
Docker Variants
Docker Community Edition (CE)
• Tiers: Edge, Stable
• Platforms: CentOS, Debian, Fedora, Ubuntu, Mac, Windows 10, Cloud (AWS, Azure, etc.)
Docker Enterprise Edition (EE)
• Tiers: Basic, Standard, Advanced
• Platforms: CentOS, Red Hat Enterprise Linux (RHEL), Ubuntu, SUSE Linux Enterprise Server (SLES), Oracle Linux, Windows Server 2016, Cloud (AWS, Azure, etc.)
Module 5
Docker Components
Docker Components
• Docker is a Container management tool.
• It consists of:
– core technologies such as images and union filesystems, plus administration and
management software such as the Docker engine and Swarm
– concepts such as layers and tags, and supporting plug-ins for volumes and
networks
– and more
Docker Objects
(14 slides)
Docker: A Layered Environment
[Diagram: host Kernel → Docker Engine → image layers (e.g. Debian and Alpine base images, nginx), each topped by a writable container layer]
• A Docker image is built up from a series of layers.
• Each layer represents an instruction in the image’s Dockerfile.
• Each layer except the top-most is read-only.
• Each layer adds to or replaces (overlays) the layer below it.
• Finally, to instantiate a Container, a writable layer is added.
Docker: A Layered Environment
• Kernel
o this is the kernel of the host operating system
o shared across all containers on host
• Bootfs
o boot filesystem (with bootloader and kernel)
o same across different Linux distributions
• Rootfs
o root filesystem directories, e.g. /bin, /boot, /dev, /lib, …
o different across Linux distributions
• Base image
o binaries/libraries
o functionality not in the host OS kernel
• Image(s)
o deployed on top of the base image
o (optional) read-only layer(s)
• Container
o a single writeable layer
o changed container data exists here
[Diagram: layer stack — Container layer on top of Image layer(s), Base Image layer, and Bootfs/rootfs]
Docker: A Layered Environment
A Container object is instantiated by loading the image
layers into memory and adding a writable top layer.
A container creates a run-time environment on top of the
underlying host kernel.
Note: The run-time environment includes a set of binaries
and libraries needed by the application running in the
container and a writeable layer where updates are stored.
Dockerfile
• A Docker Image is built from a simple, descriptive set of steps called instructions, which are
stored in a text file called a Dockerfile.
• To create an image, the Docker daemon reads the Dockerfile and the "context", which is the
set of files in the directory in which the image is built, to build and output an image.
Dockerfile
• Can be described as the source code of the image or an
artifact that describes how a Docker image is created
• Is a text file with two types of entries:
– # Comment
• a line beginning with a hash symbol; used to insert a comment
– INSTRUCTION
• provides instructions to the docker build command
• executed in the order listed; each one creating a layer of the image
– Example Dockerfile:
# Start with ubuntu 16.04
FROM ubuntu:16.04
MAINTAINER neokobo.blogspot.com
# Instruction with three components
RUN apt-get update && apt-get install -y emacs24 && apt-get clean
CMD ["/bin/bash"]
Dockerfile Instructions include:
o FROM - Specify the base image
o MAINTAINER - Specify the maintainer
o LABEL - A key-value pair adds metadata to an
image
o RUN - Run a command
o ADD - Add a file or directory
o ENV - Create an environment variable
o COPY - Copy files/directories from a source to
a destination
o VOLUME - enable access to a directory
o CMD - process to run when executing the
container
o ENTRYPOINT - sets the primary command for
the image
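Building from a Dockerfile like the example above is a single command; a sketch assuming the file sits in the current directory (the tag myemacs:1.0 is illustrative):
# Build an image from the Dockerfile in the current directory (the build "context")
$ docker build -t myemacs:1.0 .
# List the resulting image
$ docker images myemacs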
Image
• A Docker Image is a read-only template from which a Docker run-
time environment (or Container) is instantiated
• Docker composes images from layers, where each represents a
change to a base image.
Image
• Similar in concept to a class in object-
oriented programming
• Can be built from scratch or an existing
image can be pulled from a registry
• Images can be thought of as golden images.
They cannot be modified except by:
– instantiating a container
– modifying the resulting container
– committing the changes to a new image
• Docker images are stored as a series of
read-only layers
Container
• When a container is instantiated, Docker
adds a read-write layer on top of the
read-only layer(s)
• Docker uses storage drivers to manage
the contents of the image layers and the
writable container layer
• The storage driver:
– is responsible for stacking layers and providing
a single unified filesystem view
– manages the filesystems within images and
containers
Container
• A container is a lightweight, portable
encapsulation of an environment in which to run
applications
– shares the kernel of the host system and is isolated
from other containers in the system
– is a running instance of a Docker image
• Following the programming analogy, if an image
is a class, a container is an instance of a class—a
runtime object
• To create a Container, the Docker daemon
instantiates the image, adds a writable layer, and
initializes settings such as network
ports, container name, ID and resource limits
Layers
• Docker images are read-only templates from
which Docker containers are instantiated
• Each image consists of one or more layers
• Layers are discrete entities, promoting modularity
and reuse of resources
• Each layer results from an instruction in the
Dockerfile.
• Below is repository information for an nginx image on GitHub:
ADD file:89ecb642d662ee7edbb868340551106d51336c7e589fdaca4111725ec64da957 in /
CMD ["/bin/bash"]
MAINTAINER NGINX Docker Maintainers "docker-maint@nginx.com"
ENV NGINX_VERSION=1.11.10-1~jessie
RUN apt-key adv --keyserver hkp://pgp.mit.edu:80 --recv-keys 573BFD6B3D8FBC641079A6ABABF…
RUN ln -sf /dev/stdout /var/log/nginx/access.log && ln -sf /dev/stderr /var/log/nginx/error.log
EXPOSE 443/tcp 80/tcp
CMD ["nginx" "-g" "daemon off;"]
Layers
• The eight Dockerfile instructions above result in the eight layers of the docker history output below.
[Screenshot: docker history output for the nginx image]
• Each instruction in the Dockerfile creates a new layer
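The layer list can be reproduced on any host that has pulled the image; a brief sketch:
# Show the layers (one per Dockerfile instruction) that make up the image
$ docker history nginx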
Copy on Write (CoW)
• A container consists of two main parts:
– one or more read-only layers
– a read-write layer
• To modify a file at a read-only layer, that file is first copied up to the
read-write layer.
– This strategy preserves the unmodified read-only layers which can be shared
with multiple images, optimizing disk space usage
• All storage drivers use stackable image layers and the Copy-on-Write
strategy
Union File System
• “A Union File System implementation handles the amalgamation of
different file systems and directories into a single logical file
system. It allows separate file systems, to be transparently overlaid,
forming a single coherent file system”
-- https://en.wikipedia.org/wiki/UnionFS
Union File System
• Docker uses a Union File System to combine multiple layers that
make up an image into a single Docker image
– Enables implementation of a modular image that can be constructed and deconstructed as
needed
• Layers are read top-to-bottom
– If an object is found both in a top layer and a subsequent lower layer, only
the higher layer object is used
• If an object to be modified is only in a lower, read-only layer, it is
copied up using Copy-on-Write
Identifiers & Tags
(11 slides)
Identifiers
• A Docker Container has both a Name and a Universally Unique
Identifier (UUID)
– A name can be manually assigned by the user or automatically generated
by the Docker daemon
– A UUID is an automatically generated 12- or 64-character hexadecimal identifier
• Identifiers prevent naming conflicts and facilitate automation
Identifiers
• Name
– Manually-assigned, via either:
• --name option
• --tag option
– Automatically-assigned
• has the following format: <adjective>_<notable names>
– Adjective - a list of approximately 90 adjectives
– Notable Names - a list of approximately 150 "notable" scientists and hackers
Identifiers
Adjectives:
admiring
adoring
affectionate
agitated
amazing
angry
awesome
blissful
boring
brave
clever
cocky
compassionate
competent
condescending
confident
cranky
dazzling
determined
distracted
Names:
albattani
allen
almeida
agnesi
archimedes
ardinghelli
aryabhata
austin
babbage
banach
bardeen
bartik
bassi
beaver
bell
benz
bhabha
bhaskara
blackwell
bohr
... (both lists continue)
https://github.com/moby/moby/blob/master/pkg/namesgenerator/names-generator.go
e.g. cranky_bell
Identifiers
• UUID
– Universally Unique Identifier
– Assigned at container creation
– Automatically generated and applied by the Docker daemon
– A UUID is a set of hexadecimal numbers and comes in two forms:
• 64-character long form, e.g.
– “f78375b1c487e03c9438c729345e54db9d20cfa2ac1fc3494b6eb60872e74778”
• 12-character short form, e.g.
– “f78375b1c487”
Identifiers
• Images and containers may be identified in one of the following ways:
• Identifiers are commonly displayed in the truncated 12-character form
Identifier Type Example Value Length
UUID long identifier f78375b1c887e03c9438c729345e54db9d20cfa2ac1fc3494b6eb60872e74778 64-character
UUID short identifier f78375b1c887 12-character
Name Manual or pseudo-randomly generated Variable
Tag String identifying a version of an image Variable
Digest Calculated SHA value of an image 64-character
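A hedged sketch of assigning and using identifiers (the container name web01 is illustrative):
# Assign a name at creation time; otherwise Docker generates one (e.g. cranky_bell)
$ docker run -d --name web01 nginx
# The name, the 12-character short UUID, or the 64-character long UUID can all reference the container
$ docker stop web01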
Docker Tag
• A tag is an alphanumeric identifier attached to the image. It is used to distinguish one
image from another
• A tag name must be valid ASCII and may contain lower and uppercase letters, digits,
underscores, periods and dashes
• The more complete format of an image name is shown here:
– [REGISTRYHOST[:PORT]/][USERNAME/]REGISTRYNAME[:TAG]
• Here are some examples:
Command What Gets Downloaded
docker pull localhost:5000/hello-world hello-world image from the local registry
docker pull nginx nginx image from the official Docker Hub registry
docker pull nginx:1.11 nginx image with tag 1.11 from the official Docker Hub registry
docker pull registry.access.redhat.com/rhel-atomic rhel-atomic image from the official Red Hat registry
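Tags are applied with docker build -t or docker tag; a brief sketch (image and user names are illustrative):
# Add a version tag to an existing local image under a user namespace
$ docker tag myapp:1.0 myuser/myapp:1.0
# The tagged name is what docker push and docker pull operate on
$ docker push myuser/myapp:1.0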
Docker Tag
• The nginx repository on the official Docker registry contains multiple images.
• The same image may have multiple tags, e.g. the alpine stable image has three tags – :1.10.3, :stable, :1.10 – that all point to the same image.
Docker Tags – Docker Hub
• To see a list of tags or version identifiers associated with an <image>, connect to Docker Hub and navigate to the Description
• In this example, Debian version 16.04 is tagged latest
Docker Registry
[Diagram: Client (docker commands, Dockerfile) → Docker Host (Docker Engine: Docker Daemon, REST API, CLI tools; Images, Containers) → Registry (Application Images)]
• A Registry is a Docker toolset to pack, store, and deliver content.
• It hosts image repositories and provides an HTTP API to a distribution service where Docker images can be uploaded (push) and downloaded (pull).
Docker Registry, cont’d
• Docker allows the following registry types: hub, store, private and third-
party registries
• Docker Hub
– An online repository of available Docker images
– API used to upload and download images and implements version control
– Official site is hub.docker.com
– Marked deprecated
• Docker Store
– online repository of official Docker images
– Self-service portal where Docker partners publish images and users deploy them
– Official site is store.docker.com
Docker Registry, cont’d
• Private Registry
– Local repository
– Docker Trusted Registry (DTR) is the enterprise-grade image storage
solution from Docker
– Installed on-premise or on a virtual private cloud (VPC)
• Third-Party Registry
– Providers may create their own registry sites, e.g.
• Red Hat: https://access.redhat.com/containers/
• Amazon EC2 Container Registry (ECR):
https://console.aws.amazon.com/console/home
• Google Container Registry (GCR): https://cloud.google.com/container-registry/
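A minimal sketch of a private local registry using the official registry image (the port and image names are illustrative):
# Start a local registry listening on port 5000
$ docker run -d -p 5000:5000 --name registry registry:2
# Tag an image for that registry and push it
$ docker tag nginx localhost:5000/nginx
$ docker push localhost:5000/nginx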
Docker Host
(7 slides)
Docker Host
• The Docker Host runs the Docker Engine
– can also host containers
– can be deployed on physical servers, virtual machines or in the cloud
• OSes that can run the Docker Host include: Linux, Mac OS, Windows
Docker Engine
• Consists of:
– A server called the Docker daemon
– A REST API – the interface through which applications talk to the daemon
– A CLI client – interacts with the Docker daemon through scripting or CLI commands
Docker Engine
• Sets up the management
environment for containers
• Manages (builds, ships and runs)
Docker containers deployable on a
physical or virtual host, or in the
cloud.
https://docs.docker.com/engine/understanding-docker/
Docker Daemon
• A service running on the host
• Creates and manages Docker objects, such as images, containers, networks, and data volumes
• The Docker client and daemon communicate via a REST API
[Diagram: Client (docker commands) → Docker Host (Docker Engine: Docker Daemon, REST API, CLI tools; Images, Containers) → Registry (Application Images)]
Docker Daemon/Client
[Diagram: Docker Client (docker commands) → REST over a TCP or UNIX socket → Docker Daemon on the Docker Host → runC/libcontainer → Linux Kernel (namespaces, cgroups) → Container 1 … Container n]
• The Docker Client and daemon communicate using a REST API, UNIX sockets or a network interface
• runC is a wrapper around libcontainer
• libcontainer is an interface to various Linux kernel isolation features, like namespaces and cgroups
• The Docker Daemon:
– communicates directly with the containers
– enables container encapsulation and isolation
Docker Client
• The Docker client, in the form of the docker binary, is the primary
user interface to Docker
• accepts commands and configuration flags from the user and
communicates with a Docker daemon
• One client can communicate with multiple local or remote
daemons
• Other tools include: docker, docker-machine, docker-compose
Docker Networking
(23 slides)
Docker Networking
• Containers are isolated, single-application environments
• A network connects containers to each other, the host and the external
network
• Docker Networking design themes include:
– Portability – portability across diverse network environments
– Service discovery – locate services even as they are scaled and migrated
– Load balancing – dynamically share load across services
– Security – segmentation and access control
– Performance – minimize latency and maximize bandwidth
– Scalability – maintain linearity of characteristics as applications scale across hosts
See https://github.com/docker/labs/tree/master/networking for more information
Docker Networking
• Container Network Model (CNM) provides the forwarding rules,
network segmentation, and management tools for complex
network policies
• It formalizes the steps required to enable networking for containers
while providing an abstraction that can be used to support multiple
network drivers
• Docker uses several networking technologies to implement the
CNM network drivers including Linux bridges, network
namespaces, veth pairs, and iptables.
Docker Networking
• CNM is built on three components, sandbox, endpoint, network:
• Sandbox
– container's network stack configuration, e.g.
• interface management
• routing table, DNS settings
– implemented as a Linux Network Namespace
– may contain multiple endpoints from multiple networks
– local scope - associated with a specific host
• Endpoint
– joins a Sandbox to a Network
– Endpoint can be a veth pair
• Network
– group of Endpoints that can directly communicate with one another
– implemented as a Linux bridge, a VLAN, etc.
Docker Networking
[Diagram: Container Network Model — sandboxes, endpoints and networks]
Docker Networking – Exposing Ports
• To expose a port:
– use the EXPOSE instruction in the Dockerfile, or
– --expose=x to expose a specific port, or
– --expose=x-y to expose a range of ports
• Exposing a container port announces the container accepts
incoming connections on that port
– e.g. the web service container listening on port 80.
– EXPOSE documents, however does not create any mapping on the host
– --expose exposes port at runtime, however does not create any host
mapping
Docker Networking – Exposing Ports
• The EXPOSE instruction informs Docker that the container listens on the
specified network port(s) at runtime
– e.g. EXPOSE 80 443 indicates the container listens for connections on two ports: 80 and 443
• EXPOSE does not make the ports of the container accessible to the host
– To do that, publish the port with either:
• -p flag to publish a range of ports OR
• -P flag to publish all of the exposed ports
• command line option --expose exposes a port or a range of ports at runtime
Docker Networking – Publishing Port
• Exposing and publishing ports allows containers communicate with
each other and externally
• The difference between an exposed port and a published port is
that the published port is bound on the host
• Publishing either:
– binds all container ports to random ports on the host (via -P) OR
– binds a specific port or port range from container to host (via -p)
Docker Networking – Publishing Port
• $ docker run -d -P redis
• Run redis detached and publish all exposed ports to random ports (-P)
– container port, 6379, is exposed at the random port, 32768, to the host
– 6379 is the default port of the redis application
• Docker communicates through the random port to the exposed,
default port in the container
– The container listens on the exposed port
Docker Networking – Publishing Port
• Publish all exposed ports to random ports
– -P or --publish-all
• Publish or bind a container port or group of ports to the host
– -p, --publish list
• Syntax examples:
– Publish or bind to specific port (<hostPort>:<containerPort>)
• e.g. -p 8080:80
• Container port 80 is published to the host as port 8080
– Publish or bind to random port (<containerPort>)
• e.g. -p 80
• This binds container port 80 to a random host port, e.g. port 32768
• Specify which IP to bind on as in: <host interface>:<hostPort>:<containerPort>
– e.g. 127.0.0.1:6379:6379
– This limits the exposure of this port, 6379, to connections on IP 127.0.0.1
Docker Networking – Publishing Port
• $ docker run -d -P nginx
• Run nginx server, detached and publish all
exposed ports
– Application’s default ports, 80 and 443 are
published and available through random port(s),
32770 and 32769 respectively
– telnet to test connection to the application
listening on container port 80, by connecting to
bound random host port 32770
Docker Networking – Publishing Port
• $ docker run -d -p 8080:80 nginx
• Syntax
– -p <host port>:<container port>
• Container port 80 is published as
port 8080 to the host
• A connection to port 8080 on the
host is mapped to port 80 in the
container
• Note: <host port> is optional, if
left off, port is published to a
random host port, instead of 8080
as in this example
Docker Networking – Built-In Network Drivers
• The Docker built-in network drivers facilitate the containers' ability
to communicate on a network
– built into the Docker Engine
– invoked and used through standard docker network commands
• Network drivers:
– None
– Host
– Bridge
Docker Networking – Host Network Driver
• The host network driver has access to the host’s network interfaces
and makes them available to the containers
– In host mode the container shares the networking namespace of the host,
directly exposing the container to the outside world
• The advantages of the host network driver include higher
performance and a NAT-free environment
• A disadvantage is that it is susceptible to port conflicts
• Use the --net host option to run a container on a host network
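A brief, hedged example (the image is illustrative):
# Run a container directly on the host's network stack: no NAT, no port mapping
$ docker run -d --net host nginx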
Docker Networking – Bridge Network Driver
• The bridge network driver provides a single-host network on top of
which containers may communicate.
– In bridge mode, Docker automatically assigns port mappings. Bridge
networking leverages these port mappings and NAT to communicate
outside the host
• The IP address is private and not accessible from outside the host
• Use the --net bridge option to manually run a container on a bridge
network
Docker Networking – Bridge Network Driver
• By default, Docker creates a local bridge network named docker0,
using the bridge network driver
• Unless otherwise specified, containers will be created on this network:
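A quick way to see this (a sketch; output omitted):
# List the built-in networks; the default bridge network is backed by docker0
$ docker network ls
# Inspect the default bridge network and the containers attached to it
$ docker network inspect bridge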
Docker Networking – none Network Driver
• The none driver gives a container its own networking stack and network
namespace
– No external network interface; it cannot communicate outside the container
• The none network driver is an unmanaged networking option
– Docker Engine will not:
• create interfaces inside the container
• establish port mapping
• install routes for connectivity
– Guarantees container network isolation between any containers and the host
• I/O may be initiated through volumes or STDIN and STDOUT
Docker Networking – none Network Driver
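A hedged illustration of the none driver (alpine is used here because its BusyBox userland includes the ip applet):
# The container gets only a loopback interface; it cannot reach anything outside itself
$ docker run --rm --net none alpine ip addr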
Docker Networking – Overlay
• Overlay network driver creates networking tunnels
– enabling communication between hosts
• Containers on this network behave as if they are on the same host
by tunneling network subnets between hosts
– spans a network across multiple hosts
• Several tunneling technologies are supported
– e.g. virtual extensible local area network (VXLAN)
• Created when a Swarm is instantiated
Docker Networking – Underlay
• Underlay network drivers expose host interfaces, e.g. eth0, directly
to containers running on the host
– e.g. the Media Access Control virtual local area network (MACvlan).
• Allows direct connection to the hosts' physical interface
– Provides routable IP addresses to containers on the physical network
• MACvlan establishes a connection between container interfaces
and the host interface (or sub-interfaces)
• MACvlan eliminates the need for the Linux bridge, NAT and port-
mapping
Docker Networking – Plug-In Network Drivers
• Plug-In Network Drivers:
– created by users, the community and other vendors
– provide integration with incumbent software and hardware
– add specific functionality
• Network driver plugins are supported via the LibNetwork project
– The goal of libnetwork includes:
• Modularize networking logic in Docker into a single, reusable library
• Provide a consistent API and required network abstractions for applications
Docker Networking – Plug-In Network Drivers
• User-Defined Network
– You can create a new bridge network that is isolated from the host’s bridge
network (see the sketch below)
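A minimal sketch (the network and container names are illustrative):
# Create a user-defined bridge network, isolated from the default docker0 bridge
$ docker network create --driver bridge appnet
# Attach a container to it at run time
$ docker run -d --net appnet --name web nginx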
Docker Networking – Plug-In Network Drivers
• Community- and vendor-created
– Network drivers created by third-party vendors or the community
– Enables integration with incumbent software and hardware
– Provides functionality not available in standard or existing network drivers
– e.g. Weave Network Plugin – creates a virtual network that connects your Docker
containers across hosts or clouds
• IPAM Drivers
– IP Address Management (IPAM) Driver
– Built-in or Plug-in IPAM drivers
– Provides default subnets or IP addresses for Networks and Endpoints if they are
not specified
• IP addressing can be manually created/assigned
Docker Networking – Network Scope
• Network driver concept of scope is the domain of the driver: local or swarm
– Local scope drivers provide connectivity and network services within the scope of the host
– Swarm scope drivers provide connectivity and network services across a swarm cluster
• Local scope networks will have a unique network ID on each host
• Swarm scope networks have the same network ID across the cluster
• Scope is identified via the docker network ls command:
Docker Orchestrate
(8 slides)
Docker Swarm Mode
• Swarm is Docker's native clustering tool
– enables orchestration of services in a pool of Docker engines
– schedules containers on to the swarm cluster based on resource availability
– Docker engines participating in a cluster are running in swarm mode
• Docker tools, APIs and services can be used in Swarm mode, enabling
scaling of the Docker ecosystem
• The tools for container management and orchestration include:
– Docker Compose
– Docker Swarm mode
– Apache Mesos
– Google Kubernetes
Docker Swarm Mode
• A node is an instance of the Docker engine participating in the swarm
– Two types of Docker nodes:
• Manager
– deploys applications to the swarm
– dispatches tasks (units of work) to worker nodes
– performs the orchestration and cluster management functions
• Worker
– receives and executes tasks dispatched from manager nodes
– runs agents which report on tasks to the manager node
– A service is the definition of the tasks to execute on the worker nodes
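A minimal swarm sketch (the service name and replica count are illustrative):
# Turn the current engine into a swarm manager
$ docker swarm init
# Create a replicated service across the swarm and publish port 80
$ docker service create --name web --replicas 3 -p 80:80 nginx
# List services and their task counts
$ docker service ls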
Docker Compose
• Dockerfile and runtime commands get increasingly complex
– Particularly with multi-tiered applications
• Docker Compose is a tool to streamline the definition and instantiation of
multi-tier, multi-container Docker applications
– docker run starts a container; Compose manages containers as a service
– A service codifies a container’s behavior in a Compose configuration file
– Use the configuration file and docker stack deploy to organize and spin up the containers
• The Compose file provides a way to:
– document and configure the application’s service dependencies (databases, caches,
web service APIs, etc.)
– scale, limit, and redeploy the container
• Enhances security and manageability by moving docker run
commands to a YAML file
Docker Compose
• Docker Compose defines and runs complex services:
– define single containers via Dockerfile
– describe a multi-container application via a single configuration file (docker-compose.yml)
– manage the application stack via a single binary (docker stack deploy)
• The Docker Compose configuration file specifies the services, networks, and volumes to compose:
– services – the equivalent of passing command-line parameters to docker run
– networks – analogous to definitions from docker network create
– volumes – analogous to definitions from docker volume create
version: "3"
services:
  web:
    build: .
    volumes:
      - web-data:/var/www/data
  redis:
    image: redis:alpine
    ports:
      - "6379"
    networks:
      - default
# named volumes are declared at the top level
volumes:
  web-data:
Docker Compose
docker-compose up – Launches all containers
docker-compose stop – Stops all containers
docker-compose kill – Kills all containers
docker-compose exec <service> <command> – Executes a command in the container
Docker Q&A
• You have just inherited a Docker environment and come across the
following in a script, what does it do?
sudo docker run -v /home/user1/foo:/home/user2/src -v /projects/foo:/home/user2/data \
-p 127.0.0.1:40180:80 -p 127.0.0.1:48000:8000 -p 45820:5820 -t -i user2/foo bash
Docker Q&A
• Taking each CLI parameter in turn:
Parameter Description
sudo used to run docker as the super user if not previously setup
docker run docker run command
-v <host path>:<container path> maps a host volume into a container
-p <host IP>:<host port>:<container port> binds a container port to a host port from a specific host IP
-p <host port>:<container port> binds a container port to a host port from any host IP
-t attaches a terminal to the container
-i enables interactive mode
user2/foo image identifier
bash container startup command
Docker Q&A
• docker run, starts a container from the image, user2/foo and runs the bash executable
in the container.
• Persistent data (-v) is enabled by mounting the host directories /home/user1/foo and /projects/foo as the
mount points /home/user2/src and /home/user2/data inside the container.
• The container exposes three container ports 80, 8000, 5820 as host ports 40180,
48000, 45820 respectively (-p). Additionally, container ports 80 and 8000 can only be
accessed on the host via the local interface, 127.0.0.1.
• Finally -i and -t are used to enable interactive access to the standard input and output
of the container
sudo docker run -v /home/user1/foo:/home/user2/src -v /projects/foo:/home/user2/data \
-p 127.0.0.1:40180:80 -p 127.0.0.1:48000:8000 -p 45820:5820 -t -i user2/foo bash
Docker Volume
(13 slides)
Named Volumes: Host and Container Data Volumes
• A named volume is a mechanism for decoupling persistent data needed
by your container from the image used to create the container
• Volumes are directories stored outside of the container’s filesystem and
hold reusable and shareable data that persists even after a container is
terminated
• There are three ways to create volumes with Docker:
– Create a Docker data volume (-v option with docker create or docker run)
– Add a new volume via the VOLUME instruction in a Dockerfile
– Mount a host directory or file as a data volume to a container directory using the -v
option
• Volumes are not a part of the containers' Union File System
Named Volumes
• Container data is discarded when the container is removed. As such
critical data should be kept outside the container
– Note: simply exiting a container will preserve the data
• A container’s file system is composed of layers and traversing the
layers for data decreases performance
– Data with high I/O requirements should be stored in a volume outside the
container.
Container volumes
• Docker volumes manage storage which can be shared among
containers, while storage drivers enable access to the container’s
writable layer
• A data volume is a directory or file in the Docker host’s filesystem
that is mounted directly into a container
Container volumes
• Container volumes are instantiated via docker volume create or the
VOLUME instruction in a Dockerfile
• Use docker volume create to create a volume at the command line:
– $ docker volume create --name vol44
Container volumes
• The volume can be attached to a container at run-time:
– $ docker run --rm -it -v vol44:/cvol44 alpine sh
Container Data Volumes
• Docker data volumes allow data to:
– persist after the container is removed
– be shared between the host and the Docker container
– be shared with other Docker containers
• It allows directories of the host system, managed by Docker, to be
mounted by one or more containers. It's simple to setup as you
don't need to pick a specific directory on the host system
Container Data Volumes
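The command captured in this slide's screenshot was likely along these lines (a sketch; the image name is illustrative):
# Create an anonymous data volume mounted at /data/vol01 inside the container
$ docker run -d -v /data/vol01 nginx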
• This creates a volume /data/vol01 and makes it available to the container
• The container volume, /data/vol01, maps to a directory on the host file system. You can get the location
via the $ docker inspect <containerID> command. Look in the Mounts section for the Source name/value
pair:
Container Data Volumes
"Mounts": [
{
"Type": "volume",
"Name": "dd517d905c98c74dc0c10370a46dd8445d67dbf84162dc0d9076b4040c395134",
"Source": "/var/lib/docker/volumes/dd517d905c98c74...dbf84162dc0d9076b4040c395134/_data",
"Destination": "/data/vol01",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
],
Mount host directory as a Data Volume
• Docker allows you to mount a directory from the Docker host into a
container
• Using the -v option, host directories can be mounted in two ways:
– using an existing host volume, e.g. /home/john/app01, or
– new auto-generated volume on the host, e.g.
/var/lib/docker/volumes/53404f432f0…
• You can assign the volume a name using the --name option, otherwise
Docker assigns it a 64-character volume identifier
• The advantage of Docker created host volumes is portability between
hosts. It does not require a specific volume to be available on any host
that will make the mount
Mount host directory as a Data Volume
• $ docker run -v <host_dir>:<container_dir>:ro -i -t <image> <default
executable>
– <host_dir> is the source directory
– <container_dir> is the container directory
– Add :ro to make the mount read-only
• In addition to directories, single files can also be mounted between
the host and container
Mount host directory as a Data Volume
• Mount a volume from the host filesystem in the container:
– $ docker run -v /home/john/app01:/app01 -i -t busybox
• In this example, the -v parameters are:
– /home/john/app01 – host directory
– : – colon delimiter
– /app01 – container mount for host directory
• Any existing files in the host volume (/home/john/app01) are
automatically available in the container mount
Container Data Volumes
• Volume Use Cases:
– Improved performance as it bypasses the storage driver, e.g. AUFS
– Enables data sharing between containers
– Enables data sharing between the host and the container
Docker Volume – Q&A
• Are modifications to the filesystem discarded when container exits?
– No
– Note the difference between exiting and removing the container
– Modifications only discarded once the container is removed
– In that case, use Volumes to keep data if the container is removed
Module 6
Docker Command Line
Docker Command Line
• docker
– A self-sufficient runtime for containers
– Usage:
• docker COMMAND [OPTIONS] [arg...]
• docker [ --help | -v | --version ]
• docker-machine
– Create and manage machines running Docker
– Usage:
• $ docker-machine [OPTIONS] COMMAND [arg…]
• docker-compose
– Define and run multi-container applications with Docker
– Usage:
• $ docker-compose [-f <arg>...] [options] [COMMAND] [ARGS…]
• $ docker-compose -h|--help
Docker Command Line
Docker Command Line – Combining Options
• (generally) Short-form, or single character, command line options
can be combined, e.g.:
– docker run -i -t --name test busybox sh
can be replaced with
– docker run -it --name test busybox sh
Docker Command Line – Getting Help
• Append the --help option to a Docker command, e.g.:
– docker --help
– docker <command> --help
Docker Command Line – Getting Help
• If you enter an incomplete command line, Docker will attempt to
provide useful syntax hints:
Module 7
Linux Command Line
Linux Command Line
• The Linux command line provides a way to manually interact with the operating
system
– The shell is a program that acts as an interface between the user and the operating system
– The shell displays one of two prompts:
• For the root user, the prompt is the hash or pound (#) symbol (£ on UK character sets)
• For non-root users, the prompt is the $ symbol
Linux Command Line
• The command line ends when you hit the Enter key.
• A command line however can be extended beyond a single line at
the command line or in a file
– I.e. if the command is longer than one line, the backslash can be used to
extend the command line to two or more lines, e.g.
– When the shell encounters a backslash, it ignores any Enter key, and
expects the command line to continue
– The backslash is mainly cosmetic; to improve readability
sudo docker run -v /home/user1/foo:/home/user2/src -v /projects/foo:/home/user2/data \
-p 127.0.0.1:40180:80 -p 127.0.0.1:48000:8000 -p 45820:5820 -t -i user2/foo bash
Linux Command Line
• There are many shells in Linux
• A commonly used shell is bash, the Bourne Again Shell
• When you start a Linux container in Docker, you can specify which
shell it should run, e.g.
– $ docker run --rm -it debian bash
– This starts the debian container running with the bash shell
Linux Command Line
• The Linux command line consists of three main object types:
command, argument(s), option(s).
– command
• the program to run, e.g. ls, curl, docker, etc.
• command is always the first object on the command line
– argument
• a parameter or sub-command used to provide command with additional information
• e.g. by itself, the ls command lists the files or directories in the current directory. To list files
in another directory, enter that directory as an argument, e.g. ls /opt/bin
• zero or more arguments
– option
• used to modify the behavior of the command
• e.g. the ls command will display visible files/directories. Given the -a option, e.g. ls -a, it will
display both visible and non-visible files
• zero or more options
Linux Command Line
• Options come in two forms:
– short-form
• typically prepended with a single dash
• ls -a or docker ps -a
• options can (typically) be concatenated, instead of ls -a -F -l, enter ls -aFl
– long-form:
• prepended with two dashes. E.g.:
• ls --all or docker ps --all
• Use white-space to separate multiple options
• Can mix and match short-form and long-form options on the same
command line: ls --all -l
Module 8
Relevant Linux Features
Relevant Linux Features – I/O Stream
• Standard streams are communication channels between a program and the shell
• Linux recognizes three standard streams: stdin, stdout, stderr
• STDIN – standard input
– stream data into a program
– by default input to a command comes from the keyboard
• STDOUT – standard output
– stream data out of a program
– by default, output of a command is sent to the terminal
• STDERR – standard error
– stream error output from a program
– by default, error from a command is sent to the terminal
Relevant Linux Features – Redirection
• Linux allows I/O to be redirected away from the default source/target
• The default source of STDIN is the keyboard
– i.e. by default a command expects
to get its input from the keyboard
– To force input to come from
another location, e.g. a file, use the
< redirection symbol
• e.g. a pr command that indents its input five spaces, but reads that input from the file file001 instead of the keyboard (a reconstruction is sketched below)
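• A minimal reconstruction of that example (assuming a text file named file001 exists; pr -o 5 offsets each output line by five spaces):
$ pr -o 5 < file001         # pr reads its input from file001 via stdin, not from the keyboard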
Relevant Linux Features
• The default target of STDOUT is the
terminal or screen
– by default a command expects to send
its output to the screen
– To direct its output elsewhere, use the
> symbol
• This example “redirects” the output of the
docker images -q command to a file, instead
of the default target, the screen
– Note: To append output to an existing file, instead of overwriting it, use >> instead
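• For example:
$ docker images -q > image_ids.txt    # write the image IDs to image_ids.txt instead of the screen (overwrites the file)
$ docker images -q >> image_ids.txt   # append the image IDs to the end of image_ids.txt instead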
Relevant Linux Features
• The default target of STDERR is the screen
– by default a command expects to send its error output to the screen
• To redirect it elsewhere, use the "2>" symbol:
Note: "command 2> file" send the output to a file, file. If file already exists, any existing content is overwritten.
To append output to an existing file, use 2>> instead, i.e. "command 2>> file".
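• An illustrative example (the container name is deliberately one that does not exist, so the command fails):
$ docker inspect no_such_container 2> errors.txt    # the error message goes to errors.txt, not the screen
$ docker inspect no_such_container 2>> errors.txt   # append subsequent error messages to the same file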
Relevant Linux Features – Pipe
• The pipe is implemented with the "|" symbol
• It takes the output (stdout) of the command on the left and sends it
as input (stdin) for the command on the right
Relevant Linux Features – Pipe
• In the example below, docker run --help is the first command. Its output is used
as input to the more command, which displays the output, one screen at a
time:
Note: by default only stdout is passed through the pipe; stderr is not (unless it is explicitly redirected into stdout, e.g. with 2>&1).
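• Two illustrative pipelines:
$ docker run --help | more      # page through the help text one screen at a time
$ docker images | grep alpine   # keep only the lines of the image list that mention alpine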
Relevant Linux Features – Command Substitution
• In command substitution, the shell runs command and replaces the $(command) expression with the command's output
– That output can be captured in a variable or passed directly as arguments to another command.
• The syntax of command substitution is $(command) or the older
`command`, using back-ticks.
Relevant Linux Features – Command Substitution
• Let's say you want to remove the most recently created container
– Use docker ps -a, which lists all containers by ID (most recent first),
then copy the Container ID into the docker rm <Container ID> command:
Relevant Linux Features – Command Substitution
• Alternatively, use Command Substitution, letting the shell do the work:
– $ docker rm $(docker ps -lq)
• docker ps -lq first gets the ID of the most recent container, then passes it to the docker rm
command:
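• A few more illustrative uses of command substitution (use the last one with care, it removes every local image):
$ docker rm $(docker ps -lq)        # remove the most recently created container
$ docker stop $(docker ps -q)       # stop every running container
$ docker rmi $(docker images -q)    # remove all local images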
Relevant Linux Features – Control Operator
• A Control Operator is a token that performs a control function
• It is one of the following symbols: || & && ; ;; ( ) | |& <newline>
– Let's focus on the ;, && and || control operators
• On occasion you might need to group Docker commands. Let's see
a few ways to do this in Linux with three of the control operators
Relevant Linux Features – Control Operator
Control operators Description
; Semicolon - delimits commands in a sequence
Used to run multiple commands one after the other
Similar to hitting ENTER after each command
$ docker run --rm -it debian bash -c "ls /var; sleep 1; ls /"
Run the container and execute the three commands one after the other, separated by ;
(semicolon)
Relevant Linux Features – Control Operator
Control operators Description
&& AND - runs commands conditionally, on success
has the form A && B where B is run IF AND ONLY IF A succeeds
i.e. if A returns an exit status of zero
Example: $ apt-get update && apt-get install -y openssh-server
This runs the 2nd command, apt-get install -y openssh-server, IF AND ONLY IF the 1st command, apt-get update, succeeded.
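• A Docker-flavored sketch of the same idea (myapp is a hypothetical tag; a Dockerfile is assumed to exist in the current directory):
$ docker build -t myapp . && docker run --rm myapp    # run the image IF AND ONLY IF the build succeeds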
Relevant Linux Features – Control Operator
Control operators Description
|| OR - runs command conditionally, on failure
has the form A || B where B is run IF AND ONLY IF A fails
i.e. if A returns a non-zero exit status
Example: $ false || true
This runs the second command IF AND ONLY IF the first command fails. In this example, since the first
command, false, always fails (returns a non-zero exit status), the second command, true, runs and leaves a zero exit status.
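• An illustrative example:
$ false || echo "first command failed"      # echo runs because false returns a non-zero exit status
$ true || echo "this is never printed"      # echo is skipped because true succeeds
$ docker pull alpine || echo "pull failed"  # react only when the pull fails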
Relevant Linux Features – Exit Status
• When a command ends, it returns an exit status (also known as return status
or exit code)
• Exit status is an integer value ranging from 0 to 255.
– By default, a command that ends successfully has an exit status of zero, 0.
– A command that ends with an error has a non-zero (1 - 255) exit status.
• Commands are free to choose which value to use to reflect success or failure.
However some values are reserved: http://www.tldp.org/LDP/abs/html/exitcodes.html
0 the exit status of a command on success
1 - 255 the exit status of a command on failure
? holds the exit status of the last command executed
$? reads the exit status of the last command executed
• A command writes its exit status into the ? shell variable, accessible via $?
– ? holds one value at a time; overwritten by
the exit status of the next command
– To read the command's exit status, display
the variable $?, e.g. echo $?
Relevant Linux Features – Exit Status
• By default if a command succeeds, on exit it sets a zero, 0, exit status
– If directory /var/log/apt exists, the command ls /var/log/apt succeeds with a zero exit status
– If the directory is not accessible, the ls command fails with a non-zero exit status:
Success results in a zero exit status; commands are free to choose which non-zero integer (1 - 255) to use to
reflect an error. In the example above, ls uses exit status 2 to indicate that a directory is not accessible, and
docker uses an exit status of 125 to indicate that it is “Unable to find image” locally.
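• To see exit statuses for yourself:
$ ls /var/log/apt; echo $?      # prints 0 if the directory exists and is accessible
$ ls /no/such/dir; echo $?      # ls fails and its non-zero status is printed (2 on GNU ls)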
Relevant Linux Features – Signals
• A Linux signal is a type of inter-process communication
• The operating system uses it to send an action item to a process
• The action taken depends on the signal received
• A signal can come from various sources:
– Keyboard – e.g. by entering CTRL-C
– Function – e.g. kill() system call from an application
– Processes – e.g. a child process sends SIGCHLD when it exits
– Command – e.g. kill -s <SIGNAL Name> <processID>
Relevant Linux Features – Signals
• Signal names start with SIG and have an associated positive integer:
• Processes do one of three things upon receiving a signal:
– Ignore the signal
– Take a different action
– Take the default action
SIGINT 2 Interrupt from keyboard
SIGKILL 9 Kill signal
SIGTERM 15 Terminate signal
SIGSTOP 19 Stop process
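• Illustrative ways of sending signals (1234 and web are hypothetical; docker kill sends SIGKILL unless --signal is given):
$ kill -s SIGTERM 1234                 # ask process 1234 to terminate gracefully
$ docker kill --signal=SIGTERM web     # send SIGTERM to the main process of the container named web
$ docker kill web                      # no --signal, so the default SIGKILL is sent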
Relevant Linux Features – Docker and Sudo
• Docker is a privileged command, reserved for the system administrator
• To use docker, you must be root or have system administrator
privileges
– From a security point of view it's best to log in as a non-root user and only
elevate privileges as needed
• The sudo command allows a non-root user to run commands
reserved only for root
• Depending on your host configuration, you may be required to
prepend docker commands with sudo
Relevant Linux Features – Docker and Sudo
Relevant Linux Features – Docker and Sudo
• Users that are part of the docker group can use docker without
having to prepend sudo
– E.g. edit the /etc/group file and update the line:
• docker:x:999: to docker:x:999:user
– where user is the username of a user on the system; that user can then run
docker without prepending sudo
– Note: This is not a best practice
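• A sketch of the usual way to do this without editing /etc/group by hand (user is a placeholder username):
$ sudo usermod -aG docker user    # append user to the docker group
$ groups user                     # verify the membership; the user must log out and back in for it to take effect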
Relevant Linux Features – UNIX Domain Socket
• UNIX domain socket
– also known as IPC (inter-process communication) socket
– a data communications endpoint for exchanging data between processes
on the same host
– implemented as a file, /var/run/docker.sock in Docker
• /var/run/docker.sock is owned by the root user of the Docker Host
– as such it represents a potential security risk
Relevant Linux Features – UNIX Domain Socket
• The Docker daemon listens on /var/run/docker.sock, as a server process, for requests from client processes such as the docker command-line client
• It can also be used to facilitate communication between the Docker daemon and containers that mount the socket
• UNIX domain socket is bi-
directional, i.e. it enables a two-
way communications channel
Relevant Linux Features – UNIX Domain Socket
• Summary:
– UNIX domain socket allows processes on the same host to communicate
– All communication occurs entirely within the operating system kernel
– Unix domain sockets use the file system as their address name space
– A UNIX domain socket is known by a pathname
– Security implications should be considered
• The /var/run/docker.sock is an implementation of the UNIX domain socket
• In Linux it is a special socket file.
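• For illustration (assumes curl 7.40 or later and the default daemon socket):
$ ls -l /var/run/docker.sock                                          # a socket file (type s) owned by root
$ curl --unix-socket /var/run/docker.sock http://localhost/version    # talk to the daemon's API over the socket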
Relevant Linux Features – Similar data exchange concepts:
• TCP Sockets
– Enables bi-directional communication channel between two endpoints
– The endpoints can be on the same computer or separated by a network
– Client/server implementation; the server listens on a port and the client connects to that port
• Pipes
– One-way communication channel between commands on the local host
– A sequence of processes chained together by their standard streams
• FIFO
– First In First Out
– Also known as a Named Pipe
– Unidirectional communication channel between two processes on the local host
– Can be accessed by two processes, one to write data, the other to read data
– Implemented as a specially formatted file on the local host
– Can be created and named by commands: mkfifo or mknod
Module 9
Docker Commands
Docker Commands – docker ps
• docker ps -a
– List all containers (running or not)
• docker ps
– lists any currently running containers
Docker Commands – docker pull
• docker pull <image>
– Docker will connect to the Docker Hub and attempt to pull, i.e. download
and install an <image> locally
– E.g. docker pull ubuntu downloads and installs the latest version of the
image named ubuntu from Docker Hub
Note: The above command downloads the most up-to-date version of ubuntu image, or to be
technically correct, it pulls the ubuntu image that has the tag latest from the Docker Hub.
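• For example:
$ docker pull ubuntu          # pulls ubuntu:latest
$ docker pull ubuntu:16.04    # pulls a specific tagged version instead of latest
$ docker images ubuntu        # list the local ubuntu images and their tags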
Docker Commands – docker images
• Lists all images on the local host
Docker Commands – docker help
• docker run --help
– See a list of all flags supported by the run argument.
• You can append the --help option to any Docker command
– e.g. docker <command> --help
Docker Commands – docker run
• docker run debian ls -ls
• With the run argument, Docker daemon finds the image (debian), creates the container and
runs ls -ls in that container.
• In this case, ls -ls is the command passed to the container created from the debian image (it overrides the image's default command), and you see
the following:
• Note: If the image does not exist locally, an
attempt is made to download it from the
repository:
Docker Commands – docker run
• docker run -it alpine /bin/sh
• When you run this command, Docker daemon does the following:
– Runs the alpine image: If the image exists locally, Docker daemon uses it for the new container.
Otherwise, Docker Engine pulls it from a registry, e.g. Docker Hub
– Creates a new container: Docker allocates a filesystem and mounts a read-write layer on top of
the image.
– Configures the container: Creates a network bridge interface and attaches an available IP
address from a pool
– Executes the starting command: Runs the default executable or in this case, /bin/sh from the
command line
– Manages the data I/O stream: Connects and logs standard input, output and error streams
• Running the run command with the -it flags attaches you to an interactive TTY in the
container. You can now run as many commands in the container as you want.
Docker Commands – docker run
• docker run alpine echo "hello from alpine"
• In this case, the Docker daemon starts the alpine container, which
runs the echo command with the "hello from alpine" argument. The
container then immediately exits.
Docker Commands – docker run
• docker run --name web01 -d -p 8080:80 nginx
– Starts the nginx web server in detached mode and names the container web01
– Maps port 80 of the container to port 8080 of the host machine, exposing port 8080
– Access it via http://localhost:8080 or http://<ip_address>:8080
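• A quick way to verify the example above:
$ docker run --name web01 -d -p 8080:80 nginx
$ docker port web01             # shows the mapping, e.g. 80/tcp -> 0.0.0.0:8080
$ curl http://localhost:8080    # fetches the default nginx welcome page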
Docker Commands – docker run
• Running docker ps will show if any containers are currently active (running)
• docker images lists images available on the local host: nginx, ubuntu, debian, alpine
• With docker run, Docker Engine starts the local alpine image running as a container, in
interactive mode (-i) and attaches a TTY device (-t) for I/O. After the container starts, it runs
the application, in this case the Linux shell, /bin/sh.
• Behind the scenes, before the prompt appears:
– A filesystem is allocated and a read-write layer is mounted on top of the read-only image
– The default bridge network driver interface is created
– An IP address is allocated from a pool
– The default executable, /bin/sh, is run
– The standard input, output and error streams are attached
Docker Commands – docker rmi
• docker rmi <image ID>
– Remove one (or more) images
Docker Commands – docker rm
• docker rm <container ID>
– Remove one (or more) containers
• Note: You can identify the container(s) to remove using either CONTAINER ID or NAMES
Docker Commands – docker run
• docker run --rm
– Creates a transient container, i.e. the container is removed after it exits.
Runs the equivalent of $ docker rm <containerID> after the container exits.
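• For example:
$ docker run --rm alpine echo "transient"   # the container is removed automatically when echo exits
$ docker ps -a | grep alpine                # no leftover container from the run above is listed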
Docker Commands – docker attach
• docker attach <container>
– Attach to a running container.
– The container must be running. If it's stopped, start it, then attach to it.
Docker Commands – docker exec
• docker exec
– Start additional process in a running container
• Let's say the nginx container is running in detached (-d) mode; you can use docker exec to start another process in that container.
• Note: If the container is stopped, it must first be started with docker start.
• In the example, the process status command, ps, is run inside the nginx container.
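• A sketch of the same workflow (web is a hypothetical container name; the exec'd program, e.g. ps or bash, must exist inside the image):
$ docker run -d --name web nginx    # nginx running in detached mode
$ docker exec web ps aux            # run ps as an additional process inside the running container
$ docker exec -it web /bin/bash     # open an interactive shell as another additional process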
Docker Commands – docker search
• docker search <ImageName>
– The command-line equivalent of a Docker Hub search
NAME is the image name.
Names in the format <UserID>/<ImageName> represent
images uploaded by non-official sources.
STARS represent the number of likes for a specific image.
OFFICIAL identifies official vendor images.
Docker Commands – docker build
• docker build -t <DockerID>/<ImageName> <PATH | URL>
– Builds a new image from a Dockerfile; <PATH | URL> is the build context, and the Dockerfile is read from its root unless -f is used
Docker Commands – docker build
• Using docker build is the preferred way to build a Docker Image
• The build instructions are laid out in the Dockerfile, which allows an automated,
documented and repeatable way to generate a specific Docker image.
• Associated with the docker build command is its context. The build's context is the
files at a specified location: PATH or URL.
PATH is a directory on your local filesystem
URL is a Git repository location
• By default the build instructions are read from a file called Dockerfile at
the root (or top level) of your context
– E.g. if the docker build command is run from a subdirectory called Files, this
becomes its context
– The Docker daemon searches this directory and any subdirectories for objects it
needs, e.g. Dockerfile.
Docker Commands – docker build
• By default the build instructions are
read from a file called Dockerfile at the
root (or top level) of your context
– E.g. if the docker build command is run
from a subdirectory called Files, this
becomes its context
– The Docker daemon searches this directory
and any subdirectories for objects it needs,
e.g. Dockerfile.
• Note: if the Dockerfile is located
outside the context, use the -f option
to specify the Dockerfile
– e.g. $ docker build -f /path/to/a/Dockerfile .
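• A minimal, hypothetical end-to-end example (the Dockerfile and the mydockerid/hello tag are illustrative):
$ cat Dockerfile                       # a tiny image definition at the root of the context
FROM alpine
CMD ["echo", "hello from this image"]
$ docker build -t mydockerid/hello .   # '.' (the current directory) is the build context
$ docker run --rm mydockerid/hello     # prints: hello from this image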
Docker Commands – docker commit
• docker commit <container ID>
• Containers are by design ephemeral and stateless
– Changes made while in the container are discarded when the container is
removed
– One way to make container updates or configuration changes persistent, is
to freeze the container, i.e. convert it into an image.
Docker Commands – docker commit
• The docker commit command is used to create a new image based
on changes made in a container.
• I.e. start a container, configure it to taste, then
commit those changes into a new Docker image.
• Afterwards, confirm that the changes made in the original
container were successfully committed:
Note: Building an image via docker commit is not considered a best
practice as it is not repeatable or self-documenting like using docker
build and the Dockerfile.
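• A hedged sketch of that workflow (base and mydockerid/debian-curl are hypothetical names):
$ docker run -it --name base debian bash                   # start a container and make changes inside it
root@<container-id>:/# apt-get update && apt-get install -y curl
root@<container-id>:/# exit
$ docker commit base mydockerid/debian-curl                # freeze the modified container into a new image
$ docker run --rm mydockerid/debian-curl curl --version    # confirm the change persisted in the new image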
Docker Commands – docker info
• docker info
– Display system-wide Docker information
Docker Commands – docker history
• docker history <image_name>
– Show the history of an image. In effect, it identifies the "layers" in an image.
Docker Commands – docker inspect
• docker inspect
– Return low-level information on Docker objects
• The target of this command is an object that can be identified via a
Name or an ID, e.g. image, container, network, service, etc.
• The output of the command is information about the object
displayed as a JSON array
Docker Commands – docker inspect
@ubuntu:~$ docker inspect wizardly_jang
[
{
"Id": "c794e33bda6bfa60cdc039795ad7712c62df68ca5f8a6d14b906a6a06bc08e43",
"Created": "2017-04-01T06:02:04.840341671Z",
"Path": "nginx",
"Args": [
"-g",
"daemon off;"
],
"State": {
"Status": "running",
"Running": true,
. . .
To output a specific field, use the --format or -f option.
docker inspect --format "{{.NetworkSettings.IPAddress}}" <container ID>
to view the IP Address section of the docker inspect output:
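• A few more illustrative --format queries (web01 is a hypothetical container name):
$ docker inspect --format "{{.NetworkSettings.IPAddress}}" web01   # just the container's IP address
$ docker inspect --format "{{.State.Status}}" web01                # e.g. running
$ docker inspect --format "{{json .Config.Env}}" web01             # render a sub-tree as JSON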
Docker Commands – docker diff
• docker diff <container ID>
– Inspect changes to a container's filesystem
A Added File
C Changed File
D Deleted File
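• For example (scratch is a hypothetical container name; the output shown is typical):
$ docker run --name scratch debian bash -c "touch /tmp/newfile"
$ docker diff scratch
C /tmp
A /tmp/newfile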
Docker Commands – docker network
• docker network connect
– Connect a running container to a network
Use docker inspect 00db80208c35 to confirm the
container is connected to the network
The container is now connected to both the bridge and myNeto1 networks
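• A hedged sketch of the commands behind that example (myNet01 and web01 are hypothetical names):
$ docker network create myNet01          # create a user-defined bridge network
$ docker network connect myNet01 web01   # connect the running web01 container to it
$ docker inspect --format "{{json .NetworkSettings.Networks}}" web01   # now lists both bridge and myNet01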
Docker Commands
[Diagram: map of Docker commands and the objects they act on — Registry, Dockerfile, Daemon, Container, Image, Volume, Network, Tar Archive — covering attach, build, commit, cp, create, diff, exec, export, history, images, import, info, inspect, kill, load, login, logout, logs, pause/unpause, port, ps, pull, push, rename, restart, rm, rmi, run, save, search, start/stop, stats, tag, top, update, version, wait, plus the docker volume sub-commands create, inspect, ls, prune, rm. Source: neokobo.blogspot.com]
Module 10
Docker & Container Terms
JSON – JavaScript Object Notation
• JSON is short for JavaScript Object
Notation
– implements a lightweight data
interchange format based on a subset
of JavaScript language
– provides a way to store information
such that it is easy for machines to
parse and generate
– a way to store information in an
organized, easy-to-access manner
– used primarily to transmit data, as an
alternative to XML
• Docker uses a JSON-based logging driver (json-file) by default, and returns JSON from commands such as docker inspect.
JSON – JavaScript Object Notation
• Example of how Docker leverages JSON
$ docker inspect 978d
[
{
"Id": "sha256:978d85d02b87aea199e4ae8664f6abf32fdea331884818e46b8a01106b114cee",
"RepoTags": [
"debian:latest"
],
"Container": "4799c1aee3356a0d8e51a1e6e48edc1c4ca224e55750e26916f917bdecd96079",
"ContainerConfig": {
"Hostname": "ed11f485244a",
"Cmd": [
"/bin/sh",
"-c",
"#(nop) ",
"CMD ["/bin/bash"]"
],
},
}
]
JSON – JavaScript Object Notation
• JSON is built on two structures:
– Name/Value pairs, delimited by comma
• NAME:VALUE, NAME:VALUE,…
• e.g. "Hostname": "ed11f485244a"
– Ordered list of values
• realized as an array, vector, list, or
sequence
• e.g. ["/bin/sh","-c","#(nop) ","CMD
["/bin/bash"]"]
In JSON, data structures include:
• Array
– An ordered collection of values
– begins with [ (left bracket) and ends with ]
(right bracket)
– Values are separated by , (comma)
• Object
– Begins with { (left curly brace) and ends with }
(right curly brace)
– An unordered set of name/value pairs
– Name and value separated by : (colon)
– Name/Value pairs delimited by , (comma)
– Object
• {string : value,…}
• Value
– string
– number
– object
– array
– true
– false
– null
Docker Terms
• Microservices Architecture
– The application is built up of a modular set of interconnected services
instead of a single monolithic application.
– Services can be developed and deployed independently of one another,
enabling innovation, agility and efficiency
– The services are independently deployable and updateable, with minimal
dependencies
Docker Terms
• Microservices vs. Monolithic applications
– An application consists of a set of services.
• For monolithic applications, these services are tightly integrated into the application
• For microservices, these services are deployed as modular, standalone apps with standard
interfaces
– Multiple applications on a system might leverage a set of common services (e.g.
Authentication, Logging, Messaging, etc.)
• In a monolithic application environment, each application has built into it a copy of these
common services
• In a Microservices environment, these services are decoupled from the application, enabling
agility and efficiency, e.g. the same service can be shared between applications
• For example, Authentication is a service. In the monolithic environment, a separate instance
of the Authentication service might be built into each application needing authentication. In
a microservices environment, there might be just one Authentication service, created as a
microservice. Every application needing Authentication services would simply "link" to it.
Docker Terms
• Runtime
– Docker Container Runtime is the instantiation of a Docker Image
– /usr/bin/docker-containerd is the core container runtime on Linux
• Containerd spins up runC (or other OCI compliant runtime) to run and monitor Containers
• Docker architecture is broken into four components:
– Docker engine
– Containerd
– containerd-shim
– runC
• runC then runs the container
Docker Terms
• Universal Control Plane (UCP)
– Manage multi-container applications on a custom host installation (on-premises or on a cloud provider)
– Manage a cluster of Docker hosts like a single machine
– Docker Enterprise Edition Add-on
Docker Terms
• Docker Trusted Registry (DTR)
– An enterprise image repository solution installable behind a firewall to
manage images and access
– Runs a private repository of container images and makes them available to
a UCP instance
– Can be installed on-premises or on a cloud infrastructure
– Docker Enterprise Edition Add-on
Docker Terms
• Composable
– units that are well integrated, yet independent and modular
Docker Terms
• Sandbox
– A Network Sandbox is a concept within the Docker Container Networking
Model (CNM)
– It contains the configuration of a container's network stack
• This includes the container's interfaces, routing table, and DNS settings.
Docker Terms
• Linux Bridge
– A Linux bridge is a Layer 2 device that is the virtual implementation of a
physical switch inside the Linux kernel
– It forwards traffic based on MAC addresses which it learns dynamically by
inspecting traffic
– A Linux bridge is not to be confused with the bridge Docker network driver
which is a higher level implementation of the Linux bridge.
Docker Terms
• Network Namespaces
– A Linux network namespace is an isolated network stack in the kernel with its own
interfaces, routes, and firewall rules
– It is a security aspect of containers and Linux; it is used to isolate containers
– Similar to Virtual Routing and Forwarding (VRF), which segments the network control and
data plane inside the host, Network Namespaces provide the construct that gives a unique
network experience to different processes running on the host
– Network namespaces ensure that two containers on the same host will not be able to
communicate with each other or the host unless configured to do so via Docker networks
– Typically, Container Network Model (CNM) network drivers implement separate
namespaces for each container. However, containers can share the same network
namespace or even be a part of the host's network namespace
– The host network namespace contains the host interfaces and host routing table. This
network namespace is called the global network namespace.
Docker Terms
• Virtual Ethernet Devices
– A virtual Ethernet device (veth) is a Linux networking interface that acts as a
connecting wire between two network namespaces
– A veth is a full duplex link that has a single interface in each namespace.
Traffic in one interface is directed out the other interface
– Docker network drivers utilize veths to provide explicit connections
between namespaces when Docker networks are created
– When a container is attached to a Docker network, one end of the veth is
placed inside the container (usually seen as the ethX interface) while the
other is attached to the Docker network.
Docker Terms
• Iptables
– iptables is an L3/L4 firewall that provides rule chains for packet marking,
masquerading, and dropping
– It is the native packet filtering system that is part of the Linux kernel
– The built-in Docker network
drivers utilize iptables extensively
to segment network traffic,
provide host port mapping, and to
mark traffic for load balancing
decisions.
Docker Terms
• Red Hat Atomic Host
– Optimized for running containerized environments
Docker Terms
• Orchestration
– Orchestration is an important part of the Container ecosystem
– Docker Swarm, Kubernetes (from Google) and Apache Mesos are some of the
orchestration solutions
Docker Terms
• User-Space vs. Kernel-Space
– User-space is that portion of system memory in which user processes (i.e.,
everything other than the kernel) run
– This contrasts with kernel-space, which is that portion of memory in which the
kernel executes and provides its services
– User-space processes are allowed
to access the kernel-space only
through the use of system calls
Docker Terms
• Default Executable
– The entry point to the container is an executable, specifically the default
executable. It is the process running with PID 1 in the container
– The entry point to a virtual machine is the kernel or the init program
– In a VM (or a standalone Linux server), the init process has PID 1 and it is
the parent of all other processes on the system.
Docker Terms
• Unikernels
– Also called Library Operating System or Cloud Operating System
– Unikernels are specialized, single-address-space machine images constructed by
using library operating systems, intended to be run within a Virtual Machine
– Developer selects a minimal set of libraries required for the app or service to run
• libraries are compiled with the app and configuration code to build sealed, fixed-purpose
images (unikernels)
• images run directly on hypervisor without an intervening OS such as Linux or Windows
– Benefits include:
• Security and efficiency as a result of the smaller attack surface and resource footprint
• Performance as they are built by compiling high-level languages directly into specialized
machine images that run directly on a hypervisor, or bare metal.
• Portability as hypervisors are ubiquitous and they also run on bare metal
• Cost is minimized as the framework lends itself to pay-per-use and "as a service" model
http://unikernel.org/
Docker Terms
[Diagram: side-by-side stacks comparing Linux Containers (Hardware → Kernel → Docker → Containers), Unikernels (Hardware → Hypervisor → libOS + Application per image), and Virtual Machines (Hardware → Hypervisor → guest OS per VM), contrasting Isolation, Agility and Specialization]
Docker Superseded Products and Tools
• Docker Hub → Docker Store, Docker Cloud
• Docker Swarm → Swarm mode
• Docker Toolbox → Docker for Mac, Docker for Windows
Topics for upcoming update:
• Windows on Docker
• Networking Introduction
• Library OS
• Unikernels
• More Docker Commands
• build, ship, run Commands
• New Release Cadence
A. Akpaffiong, 2017
References
• Intro/Review:
– https://neokobo.blogspot.com/
– https://docs.docker.com/engine/understanding-docker/
– https://docs.docker.com/get-started/
– https://veggiemonk.github.io/awesome-docker/
– http://training.play-with-docker.com/
• Unikernels:
– https://en.wikipedia.org/wiki/Unikernel
– http://unikernel.org/
– https://wiki.xenproject.org/wiki/Unikernels
• Misc:
– http://www.linfo.org/user_space.html
– https://github.com/docker/labs/blob/master/networking/concept
s/03-linux-networking.md
– https://github.com/docker/labs/blob/master/networking/concept
s/01-cnm.md
– https://github.com/containerd/containerd/blob/master/design/ar
chitecture.md
– https://blog.docker.com/2016/12/containerd-core-runtime-
component/
– http://man7.org/linux/man-pages/man7/signal.7.html
– https://docs.docker.com/engine/tutorials/dockervolumes/#moun
t-a-host-directory-as-a-data-volume
– h20195.www2.hpe.com/V2/GetDocument.aspx?docname=4AA6-
2761ENW
– https://docs.docker.com/docker-cloud/apps/volumes/
– https://docs.docker.com/get-started/part3/#docker-composeyml
– https://github.com/docker/labs/blob/master/networking/concept
s/02-drivers.md#userdefined
– https://github.com/docker/labs/blob/master/networking/A1-
network-basics.md
– https://github.com/docker/libnetwork
– https://github.com/docker/labs/blob/master/networking/concept
s/07-macvlan.md
References
• Misc:
– http://www.nuagenetworks.net/blog/docker-networking-overview/
– https://www.ctl.io/developers/blog/post/docker-networking-rules/
– https://github.com/docker/labs/tree/master/networking
– https://medium.com/aws-activate-startup-blog/a-better-dev-test-
experience-docker-and-aws-291da5ab1238
– https://docs.docker.com/engine/understanding-docker/
– https://en.wikipedia.org/wiki/UnionFS
– https://github.com/moby/moby/blob/master/pkg/namesgenerator/names-
generator.go
– https://docs.docker.com/datacenter/dtr/2.1/guides/
– https://blog.octo.com/en/docker-registry-first-steps/
– https://docs.docker.com/engine/faq/
– https://clearlinux.org/sites/default/files/vmscontainers_wp_v5.pdf
– https://www.youtube.com/watch?v=qILu3vc8tBk&feature=youtu.be
– http://man7.org/linux/man-pages/man7/cgroups.7.html
– http://man7.org/linux/man-pages/man7/namespaces.7.html
– http://runc.net/index.html
– https://en.wikipedia.org/wiki/Cgroups
– https://docs.docker.com/engine/installation/
– https://blog.docker.com/2017/03/docker-enterprise-edition/
– https://www.docker.com/pricing
– https://www.hpe.com/h20195/v2/GetPDF.aspx/c05164344.pdf
– https://techcrunch.com/2017/03/02/dockers-new-enterprise-edition-gives-
containers-an-out-of-the-box-experience/
– https://www.nginx.com/blog/deploying-microservices/
– https://opensource.com/resources/what-docker
– http://h20195.www2.hpe.com/V2/GetDocument.aspx?docname=a0000141
4enw
– https://docs.docker.com/engine/installation/
– https://thenewstack.io/container-networking-breakdown-explanation-
analysis/
– https://github.com/docker/labs/blob/master/networking/concepts/06-
overlay-networks.md
– https://github.com/docker/labs/blob/master/networking/A3-overlay-
networking.md
Execution Environment
• Containerization, the ability to run multiple isolated compute
environments on a single kernel, relies on two kernel features:
cgroups and namespaces
– Along with other runtime technologies such as libcontainer and runC,
these form the foundation of Docker's ability to host multiple isolated
containers under a single kernel.
• Docker facilitates the packaging of an application image with all its
dependencies, and running it in a software container, on any
supported Docker platform
– The mantra is: “build once, run anywhere.”
The ABC of Docker: The Absolute Best Compendium of Docker

More Related Content

What's hot

Getting started with Docker
Getting started with DockerGetting started with Docker
Getting started with DockerRavindu Fernando
 
Introduction to Docker
Introduction to DockerIntroduction to Docker
Introduction to DockerLuong Vo
 
Docker Containers Deep Dive
Docker Containers Deep DiveDocker Containers Deep Dive
Docker Containers Deep DiveWill Kinard
 
Docker intro
Docker introDocker intro
Docker introOleg Z
 
Kubernetes
KubernetesKubernetes
KubernetesHenry He
 
Introduction to Docker storage, volume and image
Introduction to Docker storage, volume and imageIntroduction to Docker storage, volume and image
Introduction to Docker storage, volume and imageejlp12
 
Docker and kubernetes
Docker and kubernetesDocker and kubernetes
Docker and kubernetesDongwon Kim
 
Dockers and containers basics
Dockers and containers basicsDockers and containers basics
Dockers and containers basicsSourabh Saxena
 
Introduction to Docker Compose
Introduction to Docker ComposeIntroduction to Docker Compose
Introduction to Docker ComposeAjeet Singh Raina
 
Why Docker
Why DockerWhy Docker
Why DockerdotCloud
 
Docker swarm introduction
Docker swarm introductionDocker swarm introduction
Docker swarm introductionEvan Lin
 
Introduction to docker
Introduction to dockerIntroduction to docker
Introduction to dockerJohn Willis
 
Docker: From Zero to Hero
Docker: From Zero to HeroDocker: From Zero to Hero
Docker: From Zero to Herofazalraja
 
What is Docker | Docker Tutorial for Beginners | Docker Container | DevOps To...
What is Docker | Docker Tutorial for Beginners | Docker Container | DevOps To...What is Docker | Docker Tutorial for Beginners | Docker Container | DevOps To...
What is Docker | Docker Tutorial for Beginners | Docker Container | DevOps To...Edureka!
 
Open shift 4 infra deep dive
Open shift 4    infra deep diveOpen shift 4    infra deep dive
Open shift 4 infra deep diveWinton Winton
 
Introduction to Docker - VIT Campus
Introduction to Docker - VIT CampusIntroduction to Docker - VIT Campus
Introduction to Docker - VIT CampusAjeet Singh Raina
 

What's hot (20)

Getting started with Docker
Getting started with DockerGetting started with Docker
Getting started with Docker
 
Introduction to container based virtualization with docker
Introduction to container based virtualization with dockerIntroduction to container based virtualization with docker
Introduction to container based virtualization with docker
 
Introduction to Docker
Introduction to DockerIntroduction to Docker
Introduction to Docker
 
Docker Containers Deep Dive
Docker Containers Deep DiveDocker Containers Deep Dive
Docker Containers Deep Dive
 
Docker intro
Docker introDocker intro
Docker intro
 
Kubernetes
KubernetesKubernetes
Kubernetes
 
Introduction to Docker storage, volume and image
Introduction to Docker storage, volume and imageIntroduction to Docker storage, volume and image
Introduction to Docker storage, volume and image
 
Docker and kubernetes
Docker and kubernetesDocker and kubernetes
Docker and kubernetes
 
Dockers and containers basics
Dockers and containers basicsDockers and containers basics
Dockers and containers basics
 
Introduction to Docker Compose
Introduction to Docker ComposeIntroduction to Docker Compose
Introduction to Docker Compose
 
Docker Introduction
Docker IntroductionDocker Introduction
Docker Introduction
 
Why Docker
Why DockerWhy Docker
Why Docker
 
Docker swarm introduction
Docker swarm introductionDocker swarm introduction
Docker swarm introduction
 
Introduction to docker
Introduction to dockerIntroduction to docker
Introduction to docker
 
Docker: From Zero to Hero
Docker: From Zero to HeroDocker: From Zero to Hero
Docker: From Zero to Hero
 
What is Docker | Docker Tutorial for Beginners | Docker Container | DevOps To...
What is Docker | Docker Tutorial for Beginners | Docker Container | DevOps To...What is Docker | Docker Tutorial for Beginners | Docker Container | DevOps To...
What is Docker | Docker Tutorial for Beginners | Docker Container | DevOps To...
 
Open shift 4 infra deep dive
Open shift 4    infra deep diveOpen shift 4    infra deep dive
Open shift 4 infra deep dive
 
Docker and Devops
Docker and DevopsDocker and Devops
Docker and Devops
 
presentation on Docker
presentation on Dockerpresentation on Docker
presentation on Docker
 
Introduction to Docker - VIT Campus
Introduction to Docker - VIT CampusIntroduction to Docker - VIT Campus
Introduction to Docker - VIT Campus
 

Similar to The ABC of Docker: The Absolute Best Compendium of Docker

Kubernetes Certification Training Course | Docker and Kubernetes Training
Kubernetes Certification Training Course |  Docker and Kubernetes TrainingKubernetes Certification Training Course |  Docker and Kubernetes Training
Kubernetes Certification Training Course | Docker and Kubernetes Trainingnavyatejavisualpath
 
A curtain-raiser to the container world Docker & Kubernetes
A curtain-raiser to the container world Docker & KubernetesA curtain-raiser to the container world Docker & Kubernetes
A curtain-raiser to the container world Docker & KuberneteszekeLabs Technologies
 
Intro to docker and kubernetes
Intro to docker and kubernetesIntro to docker and kubernetes
Intro to docker and kubernetesMohit Chhabra
 
Dockers and kubernetes
Dockers and kubernetesDockers and kubernetes
Dockers and kubernetesDr Ganesh Iyer
 
Docker - Portable Deployment
Docker - Portable DeploymentDocker - Portable Deployment
Docker - Portable Deploymentjavaonfly
 
Docker Datacenter Overview and Production Setup Slides
Docker Datacenter Overview and Production Setup SlidesDocker Datacenter Overview and Production Setup Slides
Docker Datacenter Overview and Production Setup SlidesDocker, Inc.
 
Week 8 lecture material
Week 8 lecture materialWeek 8 lecture material
Week 8 lecture materialAnkit Gupta
 
Introduction to Docker
Introduction to DockerIntroduction to Docker
Introduction to DockerAditya Konarde
 
Cloud foundry Docker Openstack - Leading Open Source Triumvirate
Cloud foundry Docker Openstack - Leading Open Source TriumvirateCloud foundry Docker Openstack - Leading Open Source Triumvirate
Cloud foundry Docker Openstack - Leading Open Source TriumvirateAnimesh Singh
 
Docker - the what why and hows
Docker - the what why and howsDocker - the what why and hows
Docker - the what why and howsSouvik Maji
 
DockerCon EU 2015 Barcelona
DockerCon EU 2015 BarcelonaDockerCon EU 2015 Barcelona
DockerCon EU 2015 BarcelonaRoman Dembitsky
 
Containers in depth – Understanding how containers work to better work with c...
Containers in depth – Understanding how containers work to better work with c...Containers in depth – Understanding how containers work to better work with c...
Containers in depth – Understanding how containers work to better work with c...All Things Open
 
Devoxx 2016 - Docker Nuts and Bolts
Devoxx 2016 - Docker Nuts and BoltsDevoxx 2016 - Docker Nuts and Bolts
Devoxx 2016 - Docker Nuts and BoltsPatrick Chanezon
 
Docker - A curtain raiser to the Container world
Docker - A curtain raiser to the Container worldDocker - A curtain raiser to the Container world
Docker - A curtain raiser to the Container worldzekeLabs Technologies
 

Similar to The ABC of Docker: The Absolute Best Compendium of Docker (20)

Docker slides
Docker slidesDocker slides
Docker slides
 
Docker
DockerDocker
Docker
 
What is Docker?
What is Docker?What is Docker?
What is Docker?
 
Kubernetes Certification Training Course | Docker and Kubernetes Training
Kubernetes Certification Training Course |  Docker and Kubernetes TrainingKubernetes Certification Training Course |  Docker and Kubernetes Training
Kubernetes Certification Training Course | Docker and Kubernetes Training
 
A curtain-raiser to the container world Docker & Kubernetes
A curtain-raiser to the container world Docker & KubernetesA curtain-raiser to the container world Docker & Kubernetes
A curtain-raiser to the container world Docker & Kubernetes
 
Intro to docker and kubernetes
Intro to docker and kubernetesIntro to docker and kubernetes
Intro to docker and kubernetes
 
Dockers and kubernetes
Dockers and kubernetesDockers and kubernetes
Dockers and kubernetes
 
Docker - Portable Deployment
Docker - Portable DeploymentDocker - Portable Deployment
Docker - Portable Deployment
 
Docker Datacenter Overview and Production Setup Slides
Docker Datacenter Overview and Production Setup SlidesDocker Datacenter Overview and Production Setup Slides
Docker Datacenter Overview and Production Setup Slides
 
Week 8 lecture material
Week 8 lecture materialWeek 8 lecture material
Week 8 lecture material
 
Introduction to Docker
Introduction to DockerIntroduction to Docker
Introduction to Docker
 
Docker.pptx
Docker.pptxDocker.pptx
Docker.pptx
 
Cloud foundry Docker Openstack - Leading Open Source Triumvirate
Cloud foundry Docker Openstack - Leading Open Source TriumvirateCloud foundry Docker Openstack - Leading Open Source Triumvirate
Cloud foundry Docker Openstack - Leading Open Source Triumvirate
 
Docker - the what why and hows
Docker - the what why and howsDocker - the what why and hows
Docker - the what why and hows
 
Containers and Docker
Containers and DockerContainers and Docker
Containers and Docker
 
DockerCon EU 2015 Barcelona
DockerCon EU 2015 BarcelonaDockerCon EU 2015 Barcelona
DockerCon EU 2015 Barcelona
 
Containers in depth – Understanding how containers work to better work with c...
Containers in depth – Understanding how containers work to better work with c...Containers in depth – Understanding how containers work to better work with c...
Containers in depth – Understanding how containers work to better work with c...
 
Microservices, Containers and Docker
Microservices, Containers and DockerMicroservices, Containers and Docker
Microservices, Containers and Docker
 
Devoxx 2016 - Docker Nuts and Bolts
Devoxx 2016 - Docker Nuts and BoltsDevoxx 2016 - Docker Nuts and Bolts
Devoxx 2016 - Docker Nuts and Bolts
 
Docker - A curtain raiser to the Container world
Docker - A curtain raiser to the Container worldDocker - A curtain raiser to the Container world
Docker - A curtain raiser to the Container world
 

Recently uploaded

IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsEnterprise Knowledge
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonetsnaman860154
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure servicePooja Nehwal
 
SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024Scott Keck-Warren
 
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersEnhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersThousandEyes
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 3652toLead Limited
 
Benefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksBenefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksSoftradix Technologies
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking MenDelhi Call girls
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...shyamraj55
 
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | DelhiFULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhisoniya singh
 
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptxMaking_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptxnull - The Open Security Community
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitecturePixlogix Infotech
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Allon Mureinik
 
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024BookNet Canada
 
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your BudgetHyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your BudgetEnjoy Anytime
 
How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?XfilesPro
 
Maximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxMaximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxOnBoard
 
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphSIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphNeo4j
 

Recently uploaded (20)

IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI Solutions
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonets
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
 
SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024
 
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersEnhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
 
Benefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksBenefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other Frameworks
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
 
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | DelhiFULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
 
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptxMaking_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
 
The transition to renewables in India.pdf
The transition to renewables in India.pdfThe transition to renewables in India.pdf
The transition to renewables in India.pdf
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC Architecture
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)
 
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
 
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your BudgetHyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
 
How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?
 
Maximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxMaximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptx
 
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphSIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
 

The ABC of Docker: The Absolute Best Compendium of Docker

  • 1. Aniekan Akpaffiong Updated May 2017 The Absolute Best Compendium of Docker The ABC of Docker
  • 3. Presentation Introduction Codify my experience with Docker around: Docker technologies Containers vs. virtualization Critical concepts Usage examples Present lessons learned Promote the use of the Docker Container Management platform 2 Goal Consider this a work-in-progress
  • 4. Table of Contents Introduction Docker Technology Containers vs. Virtual Machines Deployment Model Docker Components Docker Command Line Linux Command Line Relevant Linux Features Docker Commands Terms 2
  • 6. • Docker enables the creation and management of lightweight, self- contained, immutable runtime environments, called Containers. • The container packages an application workload (and its dependencies) in a compute environment with its own CPU, memory, and I/O resources. • Docker enables the efficient management and friction-less deployment of containers onto any Docker platform, and at any software lifecycle phase from development to production. Introduction
  • 7. Introduction • Docker promises to encapsulate an application, deploy it in a repeatable manner across any Docker-enabled platform, and manage it efficiently at scale
  • 8. Introduction • At a high-level, Docker helps makes the development, distribution and execution of applications (packaged as Containers) as frictionless as possible • Docker provides a management framework for application virtualization • A Docker environment is configurable; manually via command line tools such as Docker Client and programmatically via REST API
  • 9. Introduction • Docker and Container are sometimes used interchangeably, however Docker is essentially a Container management solution
  • 10. Introduction • Containers offer an environment as close as possible to that of a virtual machine (VM) without the overhead that comes with running a separate kernel and simulating the hardware • A Container could be correctly described as operating system virtualization – it facilitates running multiple, isolated, user-space operating environments (containers) on top of a single kernel
  • 11. Introduction The Docker ecosystem includes: Object Layer • Container • Image Docker Layer • Docker Host (daemon, REST, clients) • Drivers and Plug-ins (storage, networking) • Docker Registry • Tools (Swarm mode, Compose) Host Layer • Linux • Mac OS • Windows Platform Layer • Bare metal • Virtual machine • Cloud
  • 12. Introduction - Docker Ecosystem • Object layer: – Docker runs application, packaged as containers – Applications are deployed from remote or local registries
  • 13. Introduction - Docker Ecosystem • Docker layer: – Docker Host (daemon, API, clients) – Drivers and Plug-ins (storage, networking) – Docker Registry (Hub and Store) – Tools such as Swarm, Compose
  • 14. Introduction - Docker Ecosystem • Host OS layer: – Docker is available on Linux, Mac OS,Windows hosts
  • 15. Introduction - Docker Ecosystem • Platform layer: – Docker host is deployable on any platform from local physical hosts, to virtual machines and the cloud
  • 16. Introduction – Putting it all together: Build, Ship, Run • Docker's Container as a Service (CaaS) workflow, i.e. how applications move from development to deployment. • Build: Docker facilitates dev/test environment. Developer creates application. Finished application is bundled as a Docker image • Ship: Docker image is pushed to a registry (an image distribution portal, e.g. DockerTrusted Registry) or Docker Hub by the DEV team. OPS accesses, and pulls down image from registry • Run: Image is instantiated (i.e. run in a container), managed and scaled on any Docker-enabled platform DevTeam OpsTeam Run Deploy & Manage Build Development Ship Content & Collaboration gettyimages.co.uk
  • 17. Introduction – Putting it all together: Build, Ship, Run • Docker provides the mechanism to build, ship, and run any app, on any OS, on any platform1 • Build an app via Docker CLI or orchestration tools such as docker build, docker create, docker-compose. • Ship the app by uploading it to a Docker Registry via docker push • Run the app by pulling its image from the registry, docker pull, and start it via docker run. • Leverage additional Docker tools (compose, swarm mode, datacenter) to orchestrate and secure the environment 1With limitations.
  • 18. Introduction – Use Case Use Cases Enabled by Docker CaaS Cloud • Cloud Migration • Hybrid Cloud • Multi-Cloud Apps • Containerization • Microservices • CI/CD - Continuous Integration, Continuous Deployment • DevOps • Self-service Portal Data • Data Processing • Pipelines
  • 19. Introduction – Use Case • Docker affords developers: – assurance that locally developed apps run unmodified on any Docker platform – application portability across platforms: physical, virtual, cloud – consistent deployment model from Development to Production – focus on writing code instead of micro-managing systems setup – access to an ecosystem of apps and easy app integration model – freedom to use rebuild/redeploy instead of upgrade deploying new app versions • Docker allows operations: – flexibility to choose a deployment model that best suites the workload – reduction in number of systems under management relative to the workload – built-in tools for management, clustering, orchestration
  • 21. Technology • Docker containers wrap, an application in an environment that contains everything it needs to run: code, runtime, system tools, system libraries • The Docker container can be executed in any Docker-enabled platform with the guarantee that the execution environment exposed to the application will be the same in development, and production
  • 22. Technology • The goal of the container is to guarantee, with as much efficiency as possible, that the application will run the same, regardless of the platform
  • 23. Technology Containers provide benefits to both the infrastructure and the application. Infrastructure: isolates application processes on a shared OS kernel; creates light, dense execution environments; enables portability across platforms. Application: application and dependencies packaged as a portable, immutable environment; facilitates continuous integration and continuous development (CI/CD); eases access to and sharing of containerized components. From: Containers for the Enterprise: A Red Hat Virtual Event
  • 24. Technology • Containers Transform Applications, Infrastructure and Processes – Applications: decomposing development into services that can be developed independently, improving efficiency, agility and innovation – Infrastructure: moving from the traditional datacenter to Cloud to a flexible Hybrid model – Processes: enables easy adoption of Agile and DevOps processes over the traditional Waterfall model, the goal being improved flexibility, innovation and go-to-market speed From: Why containers - Beginning of the buyer’s journey -- IT Leader audience by Red Hat
  • 25. Container Runtime Format • A container format describes how a container is packaged • Docker deployed several runtime formats before settling on containerd: – Linux Container (LXC) • LXC is an operating system-level virtualization solution for running multiple isolated Linux systems (containers) on top of a single kernel. • Available in Docker up until Docker v1.8 (optional as of Docker v0.9) – Libcontainer • Unify and standardize the way apps are packaged, delivered, and run in software containers. • a library that provides direct access for Docker to Linux container APIs, in a consistent and predictable way, and without depending on LXC or any other user-space packages • Introduced as the default at Docker 0.9
  • 26. Container Runtime Format • Current Docker container format: – runC • runC is a lightweight, portable container runtime • an API used by Docker to interact with system containment features • benefits include a consistent interface to containment features across Linux distributions • is based on libcontainer – Containerd • the latest Universal Runtime on Linux • responsible for running and monitoring Docker Containers • has multiple components including Executor, Supervisor and runC
  • 27. Execution Environment • Docker combines: – kernel features (such as cgroups, namespaces, etc.) – a Union File System – a unified, low-level container format (runC) – a management framework to build, ship and run portable, immutable and efficient computing environments called containers.
  • 28. Resource Allocation & Isolation • Cgroups - resource allocation - limits usage – limits an application to a specific set of resources (CPU, memory, I/O, network, etc.) – allows Docker to share available system resources to containers and enforce limits and constraints • Namespaces - resource isolation - limits access – a feature of the Linux kernel that isolates and virtualizes system resources and applies it to a workload or a set of processes. – allows an application to have its own view and control of shared system resources such as network stack, process space, mount point, etc.
  • 29. Resource Allocation & Isolation • Cgroups and Namespaces are capabilities of the Linux kernel which sandbox processes from one another, and control their resource consumption
  • 30. Linux namespaces
Namespace – Description
UTS – Unix Timesharing System - isolates two system identifiers: nodename and domainname – e.g. allows a container to have its own hostname independently of the host and other containers.
IPC – Inter-Process Communication - manages access to IPC resources: queues, semaphores, and shared memory - processes/groups can have their own IPC resources.
PID – Process ID - PID isolation - segments the PIDs that one container can view and manipulate.
MNT – Mount - filesystem mount points - processes can have their own root FS. The mount namespace gives you a scoped view of the mounts on your system.
NET – Network - manages network interfaces: IP, routes, devices, etc. - provides a logical copy of the network stack, with its own routing tables, firewall rules and network devices.
USER – UID, GID - user namespaces allow containers to have a different view of the UID and GID space than the host system.
  • 31. Linux namespaces • A namespace wraps a global system resource in an abstraction that makes it appear to the processes within the namespace that they have their own isolated instance of the global resource • Namespaces provide a form of isolation for the Docker container – It prevents processes running in a container from seeing or affecting processes in another container or in the host system – It limits what a container can see and how it presents itself to the rest of the system • Namespaces create a "wall" around a container
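These isolation properties are easy to observe from the command line; a minimal sketch, assuming the alpine image is available locally or pullable from Docker Hub:
$ docker run --rm alpine hostname                    # UTS namespace: the container has its own hostname (its short ID by default)
$ docker run --rm --hostname web01 alpine hostname   # the container sees "web01"; the host keeps its own name
$ docker run --rm alpine ps                          # PID namespace: only the container's own processes are visible, starting at PID 1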
  • 32. Control Groups • Control groups or CGroups implement resource accounting and limiting and process prioritization – track resource usage and help ensure that each container gets its (fair) share of system resources (memory, CPU, disk I/O) – A benefit of cgroups is that it prevents a single container from bringing down a host by consuming more system resources than it should. • CGroup use cases include: – fending off certain types of denial-of-service attacks – Creating good citizens in a multi-tenant platform-as-a-service (PaaS) environments
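Docker exposes cgroup limits as flags on docker run; a minimal sketch, assuming the nginx image is available and the container name "limited" is arbitrary:
$ docker run -d --name limited --memory 256m --cpus 0.5 nginx   # cap memory at 256 MB and CPU at half a core
$ docker stats --no-stream limited                              # current usage against those limits
$ docker inspect --format '{{.HostConfig.Memory}}' limited      # the memory limit, in bytes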
  • 33. If Namespaces create a wall around a container, CGroups form the floor and ceiling of each container. [Diagram: a container on the Host Operating System, bounded by Namespaces and CGroups, running on the hardware]
  • 35. Container Security • An image pushed to a public registry might inadvertently expose sensitive private data • Be careful that Dockerfile instructions such as COPY, ADD or ENV do not inadvertently expose sensitive information – If sensitive information is needed, consider incorporating it at runtime, in the docker run command. – Docker Compose provides an improvement for keeping the Dockerfile clean of sensitive information and avoids exposing it at runtime via the use of the docker-compose.yml file
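One way to keep secrets out of the image itself is to supply them only when the container starts; a minimal sketch, where myorg/myapp, DB_PASSWORD and app.env are placeholders:
$ docker run -d --name app -e DB_PASSWORD=changeme myorg/myapp   # value supplied at runtime, never baked into a layer
$ docker run -d --name app2 --env-file ./app.env myorg/myapp     # or keep the values in a file outside the build context
Note that runtime environment variables are still visible via docker inspect, so this reduces, rather than eliminates, exposure.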
  • 36. Container Security • “Effective security is pervasive. It should be taken into account at every point in the lifecycle” • “Leverage security best practices such as: – minimizing attack surface – securing the borders – trusted sources – continuous scans – timely patching – defense-in-depth – separation of controls e.g. • middle-ware (software architects) • applications (developers) • base image (administrator)” From: Containers for the Enterprise: A Red Hat Virtual Event, March 2017
  • 37. Container Security • Docker suggests several areas to consider with respect to security: – the intrinsic and applied security of the kernel • Kernel namespaces • Control Groups – Attack surface of the Docker daemon – Security configuration and best practices
  • 38. Container Security • Do not relax your security posture just because you use Docker Containers
  • 40. Containers vs. Virtual Machines • In Hardware Virtualization, a physical computer can be turned into one or more logical computers, called Virtual Machines (VMs) – Hardware Virtualization decouples the application from the underlying hardware – Hardware Virtualization partitions a physical computer – Virtual machines present a supporting environment for applications to run [Diagram: a Type 1 Hypervisor running VM1, VM2 and VM3 on the hardware; each VM holds a guest OS and an application]
  • 41. Bare Metal (monolith or micro-services): the OS is tightly integrated with the hardware (device drivers, CPU, disk, etc.) and the application is tightly integrated with the OS. Moving an application between systems is complex; moving a running application is very complex; moving an OS between systems is very complex.
  • 42. Virtualization (monolith or micro-services): the hypervisor is tightly integrated with the hardware, and the application, guest OS and VM are integrated. Moving a VM (with its integrated guest OS and application) between hypervisors is routine.
  • 43. Containerization (monolith or micro-services): the operating system is tightly integrated with the hardware, Docker is tightly integrated with the OS kernel, and the application and container are integrated. Moving a container (with its integrated application) between Docker platforms is routine.
  • 44. Containers vs. Virtual Machines • A Docker container is similar to a virtual machine, however: – Containers, operating at a higher level, decouple the application from the underlying operating system – Containers partition processes running on a single operating system – Containers share the host OS kernel; virtual machines share the hypervisor
  • 45. Containers vs. Virtual Machines Footprint • Each VM runs a complete (guest) operating system • Containers share the host’s operating system kernel • Advantage… to the container, as sharing the kernel allows for more efficiency, e.g. a reduction in maintenance Process • Each VM hosts an operating system, with a full complement of native applications and processes • A Docker container runs, by default, a single application • Advantage… to the container. A single-application system provides improved agility Setup • Setting up a VM requires subject matter expertise and system resources • A container is a user-space process and demands fewer resources • Advantage… to the container. It is a more developer-friendly environment Portability • A VM is set up as a standalone environment with the full execution environment needed by its main applications • The container is a single-application environment. For multi-tier applications, multiple containers are typically used • Advantage… to the container. Both are portable, however the container provides a higher level of abstraction
  • 46. Containers vs. Virtual Machines: Similarities • Containers package an application with all of its dependencies and allow it to run the same on any platform. • Virtual machines package an operating system with all its dependencies and allow it to run the same independent of the hardware platform
  • 47. Containers vs.Virtual Machines: Similarities Features Container Virtual Machine Benefit Lightweight   Leverage resources more efficiently than bare-metal single server implementations Shell access   Connection to the shell remotely or via console Has own process space   Run in a partitioned environment Has own network interface   Ability to create its own network access Root access   Login as or escalate privileges to ‘administrator’ Install and update services   Independently update environment Leverage hosts’ kernel   Optimized space and memory utilization Optimized for single workloads   Enhanced portability Minimum system processes   Efficiency through reduced footprint and management, by eliminating unneeded libraries and services Runs as a process on the host OS   Management flexibility and improved resource utilization Boot a different OS   Flexibility to choose the right OS for any particular workload Maturity   Robust feature set for resiliency, management and support
  • 49. Containers vs. Virtual Machines • “Containers are to Virtual Machines as Threads are to Processes.”
  • 50. Containers vs. Virtual Machines: Complementary • The decision to use Containers or Virtual Machines should not be considered a zero-sum game • There are cases where: – Containers are a better fit, e.g. an agile software lifecycle – Virtual Machines are a better fit, e.g. a hostile multi-tenant environment • Containers and Virtual Machines can be complementary, e.g. – a Virtual Machine hosting a Container environment
  • 51. Containers vs. Virtual Machines: Complementary Containers and VMs can complement each other: Containers require a compatible host operating system; a VM provides extreme isolation (e.g. in multi-tenant environments). Containers start in seconds or less; a VM can take minutes to boot. Containers can be deployed inside a VM to leverage the best features of each platform
  • 53. Docker and the Host OS Kernel Docker uses the host operating system's kernel as a base. The kernel contains a core set of components required by containers on the host. Needed resources not in the kernel (e.g. binaries, libraries, etc.) are supplied by the container's base image or by subsequent layers. [Diagram: Debian, nginx and Alpine containers, each with its own writable layer, sharing the kernel through the Docker Engine]
  • 54. Docker Deployment Model You can install Docker, or more specifically the Docker Engine (Docker daemon, REST API, CLI tools), on top of a Linux, Mac or Windows host. [Diagram: the relationship between the components on a Docker host - the Docker Engine running containers on the Docker Host]
  • 55. Docker can be installed as an application on older Windows or Mac systems via the Docker Toolbox. Toolbox uses docker-machine to provision a VirtualBox VM, which runs the boot2docker Linux distribution, and manages containers via the Docker daemon. Docker Deployment Model Docker Toolbox Minimum System Requirements: Docker Toolbox has less rigorous requirements. Windows 64-bit Windows 7 (or higher) Hardware-Assisted Virtualization Mac macOS 10.8 “Mountain Lion” or newer Included components: Docker Machine Docker Engine Docker Compose Kitematic Boot2Docker VirtualBox
  • 56. Docker Deployment Model Docker can install natively either on a Windows OS, using a Hyper-V VM, or on a Mac OS, using the HyperKit VM. Runs Linux containers only Docker for Windows & Docker for Mac Minimum System Requirements Mac Mac must be a 2010 or newer model OS X El Capitan 10.11 or later Windows 64-bit Windows 10 Pro, Enterprise and Education (1511 November update, Build 10586 or later) Included components: Docker Engine Docker Registry Docker Compose Docker Machine
  • 57. Docker can be deployed natively on a Linux Operating System. The Docker engine is installed on the system with the Docker daemon managing the containers and the Docker client providing access to the Docker daemon. Docker Deployment Model Minimum System Requirements Linux 64-bit version of distributions running version 3.10+ of the Linux kernel Native Linux
  • 58. Docker can be deployed natively on Windows Server 2016 and Windows 10. Can use Docker CLI or PowerShell to manage containers. There is no need for a virtual machine or Linux. Run any Windows application inside a Docker container Docker Deployment Model Minimum System Requirements Windows Windows Server 2016 and Windows 10 Included components: Docker Engine Docker Registry Docker Compose Docker Machine Docker on Windows
  • 59. DockerVariants Docker Community Edition (CE) Tiers Edge Stable Platform CentOS Debian Fedora, Ubuntu Mac Windows 10 Cloud: AWS, Azure, etc. Docker Enterprise Edition (EE) Tiers Basic Standard Advanced Platform CentOS Red Hat Enterprise Linux (RHEL) Ubuntu SUSE Linux Enterprise Server (SLES) Oracle Linux Windows Server 2016 Cloud: AWS, Azure, etc.
  • 61. Docker Components • Docker is a Container management tool. • It consists of: – core technologies such as images, union filesystems, administration and management software such as the Docker engine and Swarm – concepts such as layers, and tags, supporting plug-ins for volumes and networks – and more
  • 63. Docker: A Layered Environment Kernel Docker Engine Debian nginx Alpine writable layer writable layer Finally, to instantiate a Container, a writable layer is added. A Docker image is built up from a series of layers. Each layer represents an instruction in the image’s Dockerfile. Each layer except the top-most is read-only. Each layer adds to or replaces (overlays) the layer below it.
  • 64. Docker: A Layered Environment • Kernel o this is the kernel of the host operating system o shared across all containers on host • Bootfs o boot filesystem (with bootloader and kernel) o same across different Linux distributions • Rootfs o root filesystem directories: e.g. /bin, /boot, /dev, /lib, …) o different across Linux distributions • Base image o binaries/libraries o functionality not in the host OS kernel • Image(s) o deployed on top of the base image o (optional) read-only layer(s) • Container o a single writeable layer o changed container data exists here Bootfs/rootfs Base Image layer Image layer Container layer ...
  • 65. Docker: A Layered Environment A Container object is instantiated by loading the image layers into memory and adding a writable top layer. A container creates a run-time environment on top of the underlying host kernel. Note:The run-time environment includes a set of binaries and libraries needed by the application running in the container and a writeable layer where updates are stored. Bootfs/rootfs Base Image layer Image layer Container layer ...
  • 66. Dockerfile • A Docker Image is built from a simple, descriptive set of steps called instructions, which are stored in a text file called a Dockerfile. • To create an image, the Docker daemon reads the Dockerfile and the "context", which is the set of files in the directory in which the image is built, to build and output an image.
  • 67. Dockerfile • Can be described as the source code of the image or an artifact that describes how a Docker image is created • Is a text file with two types of entries: – # Comment • a line beginning with a hash symbol; used to insert a comment – INSTRUCTION • provides instructions to the docker build command • executed in the order listed; each one creating a layer of the image – Example Dockerfile:
# Start with ubuntu 16.04
FROM ubuntu:16.04
MAINTAINER neokobo.blogspot.com
# Instruction with three components
RUN apt-get update && apt-get install -y emacs24 && apt-get clean
CMD ["/bin/bash"]
Dockerfile Instructions include: o FROM - specify the base image o MAINTAINER - specify the maintainer o LABEL - a key-value pair that adds metadata to an image o RUN - run a command o ADD - add a file or directory o ENV - create an environment variable o COPY - copy files/directories from a source to a destination o VOLUME - enable access to a directory o CMD - process to run when executing the container o ENTRYPOINT - sets the primary command for the image
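Building and running an image from this Dockerfile is straightforward; a minimal sketch, assuming the file above is saved as Dockerfile in the current directory and the image name "myemacs" is arbitrary:
$ docker build -t myemacs:1.0 .      # build the image; "." is the build context
$ docker images myemacs              # confirm the image and its tag
$ docker run --rm -it myemacs:1.0    # start a container; CMD drops you into /bin/bash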
  • 68. Image • A Docker Image is a read-only template from which a Docker run- time environment (or Container) is instantiated • Docker composes images from layers, where each represents a change to a base image.
  • 69. Image • Similar in concept to a class in object- oriented programming • Can be built from scratch or an existing image can be pulled from a registry • Images can be thought of as golden images. They cannot be modified except by: – instantiating a container – modifying the resulting container – committing the changes to a new image • Docker images are stored as a series of read-only layers Bootfs/rootfs Base Image layer Image layer ...
  • 70. Container • When a container is instantiated, Docker adds a read-write layer on top of the read-only layer(s) • Docker uses storage drivers to manage the contents of the image layers and the writable container layer • The storage driver: – is responsible for stacking layers and providing a single unified filesystem view – manages the filesystems within images and containers Bootfs/rootfs Base Image layer Image layer Container layer ...
  • 71. Container • A container is a lightweight, portable encapsulation of an environment in which to run applications – shares the kernel of the host system and is isolated from other containers in the system – is a running instance of a Docker image • Following the programming analogy, if an image is a class, a container is an instance of a class, i.e. a runtime object • To create a container, the Docker daemon instantiates the image, adds a writable layer, and initializes settings such as network ports, container name, ID and resource limits Bootfs/rootfs Base Image layer Image layer Container layer ...
  • 72. Layers • Docker images are read-only templates from which Docker containers are instantiated • Each image consists of one or more layers • Layers are discrete entities, promoting modularity and reuse of resources • Each layer results from an instruction in the Dockerfile. Bootfs/rootfs Base Image layer Image layer Container layer ...
  • 73. Layers • Below is the repository information for an nginx image on GitHub:
ADD file: 89ecb642d662ee7edbb868340551106d51336c7e589fdaca4111725ec64da957 in /
CMD ["/bin/bash"]
MAINTAINER NGINX Docker Maintainers "docker-maint@nginx.com"
ENV NGINX_VERSION=1.11.10-1~jessie
RUN apt-key adv --keyserver hkp://pgp.mit.edu:80 --recv-keys 573BFD6B3D8FBC641079A6ABABF…
RUN ln -sf /dev/stdout /var/log/nginx/access.log && ln -sf /dev/stderr /var/log/nginx/error.log
EXPOSE 443/tcp 80/tcp
CMD ["nginx", "-g", "daemon off;"]
• Each instruction in the Dockerfile creates a new layer. The eight Dockerfile instructions above result in the eight layers of the docker history output below.
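To see these layers on your own system, docker history lists them for any local image; a minimal sketch, assuming the nginx image can be pulled from Docker Hub:
$ docker pull nginx
$ docker history nginx             # one row per layer, newest (top-most) layer first
$ docker history --no-trunc nginx  # show the full instruction that created each layer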
  • 74. Copy on Write (CoW) • A container consists of two main parts: – one or more read-only layers – a read-write layer • To modify a file at a read-only layer, that file is first copied up to the read-write layer. – This strategy preserves the unmodified read-only layers which can be shared with multiple images, optimizing disk space usage • All storage drivers use stackable image layers and the Copy-on-Write strategy
  • 75. Union File System • “A Union File System implementation handles the amalgamation of different file systems and directories into a single logical file system. It allows separate file systems, to be transparently overlaid, forming a single coherent file system” -- https://en.wikipedia.org/wiki/UnionFS
  • 76. Union File System • Docker uses a Union File System to combine multiple layers that make up an image into a single Docker image – Enables implementation of a modular image, that can be de/constructed as needed • Layers are read top-to-bottom – If an object is found both in a top layer and a subsequent lower layer, only the higher layer object is used • If an object to be modified is only in a lower, read-only layer, it is copied up using Copy-on-Write
  • 78. Identifiers • A Docker Container has both a Name and a Universally Unique Identifier (UUID) – A name can be manually assigned by the user or automatically generated by the Docker daemon – A UUID is an automatically generated hexadecimal string, in a 12-character short form or a 64-character long form • Identifiers prevent naming conflicts and facilitate automation
  • 79. Identifiers • Name – Manually-assigned, via either: • --name option • --tag option – Automatically-assigned • has the following format: <adjective>_<notable names> – Adjective - a list of approximately 90 adjectives – Notable Names - a list of approximately 150 "notable" scientists and hackers
  • 81. Identifiers • UUID – Universally Unique Identifier – Assigned at container creation. – automatically generated and applied by the Docker daemon – UUID is a set of hexadecimal numbers and come in two forms: • 64-character long form, e.g. – “f78375b1c487e03c9438c729345e54db9d20cfa2ac1 fc3494b6eb60872e74778” • 12-character short form, e.g. – “f78375b1c487”
  • 82. Identifiers • Images and containers may be identified in one of the following ways: – UUID long identifier, e.g. f78375b1c887e03c9438c729345e54db9d20cfa2ac1fc3494b6eb60872e74778 (64-character) – UUID short identifier, e.g. f78375b1c887 (12-character) – Name: manual or pseudo-randomly generated (variable length) – Tag: string identifying a version of an image (variable length) – Digest: calculated SHA value of an image (64-character) • Identifiers are commonly displayed in the truncated 12-character form
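A quick way to see these identifiers side by side; a minimal sketch, assuming the nginx image is available and the name "web" is arbitrary:
$ docker run -d --name web nginx                  # manually assigned name
$ docker run -d nginx                             # Docker generates a name such as "focused_turing"
$ docker ps --format "table {{.ID}}\t{{.Names}}"  # short (12-character) IDs next to names
$ docker inspect --format '{{.Id}}' web           # the full 64-character UUID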
  • 83. Docker Tag • A tag is an alphanumeric identifier attached to an image. It is used to distinguish one image from another • A tag name must be valid ASCII and may contain lower and uppercase letters, digits, underscores, periods and dashes • The more complete format of an image name is: – [REGISTRYHOST[:PORT]/][USERNAME/]NAME[:TAG] • Here are some examples:
docker pull localhost:5000/hello-world – pulls the hello-world image from the local registry
docker pull nginx – pulls the nginx image from the official Docker Hub registry
docker pull nginx:1.11 – pulls the nginx image with tag 1.11 from the official Docker Hub registry
docker pull registry.access.redhat.com/rhel-atomic – pulls the rhel-atomic image from the official Red Hat registry
  • 84. DockerTag The nginx repository on the official Docker registry contains multiple images. The same image may have multiple tags, e.g. the alpine stable image has three tags – :1.10.3 – :stable – :1.10 that all point to the same image.
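Tags can also be added locally with docker tag, which gives an existing image an additional name without copying any data; a minimal sketch, assuming nginx:1.11 is available and localhost:5000 is a reachable registry:
$ docker pull nginx:1.11
$ docker tag nginx:1.11 localhost:5000/nginx:1.11   # same image ID, new repository:tag
$ docker images | grep nginx                        # both names point at the same image ID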
  • 85. Docker Tags – Docker Hub To see a list of tags or version identifiers associated with an <image>, connect to Docker Hub and navigate to the image's Description. In this example, Ubuntu version 16.04 is tagged latest
  • 86. Docker Registry • A Registry is a Docker toolset to pack, store, and deliver content. • It hosts image repositories and provides an HTTP API to a distribution service where Docker images can be uploaded to (push) and downloaded from (pull). [Diagram: the Docker client issuing docker commands to the Docker daemon, which pushes and pulls images between the Docker Host and a Registry]
  • 87. Docker Registry, cont’d • Docker allows the following registry types: hub, store, private and third- party registries • Docker Hub – An online repository of available Docker images – API used to upload and download images and implements version control – Official site is hub.docker.com – Marked deprecated • Docker Store – online repository of official Docker images – Self-service portal where Docker partners publish images and users deploy them – Official site is store.docker.com
  • 88. Docker Registry, cont’d • Private Registry – Local repository – Docker Trusted Registry (DTR) is the enterprise-grade image storage solution from Docker – Installed on-premise or on a virtual private cloud (VPC) • Third-Party Registry – Providers may create their own registry sites, e.g. • Red Hat: https://access.redhat.com/containers/ • Amazon EC2 Container Registry (ECR): https://console.aws.amazon.com/console/home • Google Container Registry (GCR): https://cloud.google.com/container-registry/
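The push/pull workflow can be tried end to end with the open-source registry image; a minimal sketch, assuming port 5000 is free on the host and the repository name nginx:test is arbitrary:
$ docker run -d -p 5000:5000 --name registry registry:2   # start a private registry on localhost:5000
$ docker pull nginx
$ docker tag nginx localhost:5000/nginx:test              # name the image for the private registry
$ docker push localhost:5000/nginx:test                   # upload it
$ docker pull localhost:5000/nginx:test                   # download it again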
  • 90. Docker Host • Docker Host runs Docker Engine – can also host containers – can be deployed on physical servers, virtual machines or in the cloud • OS that can run Docker Host include: – Linux, Mac OS,Windows
  • 91. Docker Engine • Consists of: – A server called a Docker daemon – A REST API – interface through which applications talk to the daemon – CLI client – interacts with the Docker daemon through scripting or CLI commands
  • 92. Docker Engine • Sets up the management environment for containers • Manages (builds, ships and runs) Docker containers deployable on a physical or virtual host, or in the cloud. https://docs.docker.com/engine/understanding-docker/
  • 93. Docker Daemon • service running on the host • creates and manages Docker objects, such as images, containers, networks, and data volumes • The Docker client and daemon communicate via a REST API
  • 94. Docker Daemon/Client The Docker Client and daemon communicate using a REST API, UNIX sockets or a network interface. runC is a wrapper around libcontainer. Libcontainer is an interface to various Linux kernel isolation features, like namespaces and cgroups. The Docker Daemon: • communicates directly with the containers • enables container encapsulation and isolation [Diagram: the Docker Client sending docker commands over REST / TCP socket to the Docker daemon, which manages containers via runC and libcontainer on top of the Linux kernel (namespaces, cgroups)]
  • 95. Docker Client • The Docker client, in the form of the docker binary, is the primary user interface to Docker • accepts commands and configuration flags from the user and communicates with a Docker daemon • One client can communicate with multiple local or remote daemons • Other tools include: docker, docker-machine, docker-compose
  • 98. Docker Networking • Containers are isolated, single-application environments • A network connects containers to each other, the host and the external network • Docker Networking design themes include: – Portability – portability across diverse network environments – Service discovery – locate services even as they are scaled and migrated – Load balancing – dynamically share load across services – Security – segmentation and access control – Performance – minimize latency and maximize bandwidth – Scalability – maintain linearity of characteristics as applications scale across hosts See https://github.com/docker/labs/tree/master/networking for more information
  • 99. Docker Networking • Container Network Model (CNM) provides the forwarding rules, network segmentation, and management tools for complex network policies • It formalizes the steps required to enable networking for containers while providing an abstraction that can be used to support multiple network drivers • Docker uses several networking technologies to implement the CNM network drivers including Linux bridges, network namespaces, veth pairs, and iptables.
  • 100. Docker Networking • CNM is built on three components: sandbox, endpoint, network: • Sandbox – the container's network stack configuration, e.g. • interface management • routing table, DNS settings – implemented as a Linux Network Namespace – may contain multiple endpoints from multiple networks – local scope - associated with a specific host • Endpoint – joins a Sandbox to a Network – an Endpoint can be a veth pair • Network – a group of Endpoints that can directly communicate with one another – implemented as a Linux bridge, a VLAN, etc.
  • 102. Docker Networking – Exposing Ports • To expose a port: – Use the EXPOSE instruction in the Dockerfile or – --expose=x to expose a specific port – --expose=x-y to expose a range of ports • Exposing a container port announces that the container accepts incoming connections on that port – e.g. the web service container listening on port 80. – EXPOSE documents, however does not create, any mapping on the host – --expose exposes a port at runtime, however does not create any host mapping
  • 103. Docker Networking – Exposing Ports • The EXPOSE instruction informs Docker that the container listens on the specified network port(s) at runtime – e.g. EXPOSE 80 443 indicates the container listens for connections on two ports: 80 and 443 • EXPOSE does not make the ports of the container accessible to the host – To do that, publish the port with either: • -p flag to publish a range of ports OR • -P flag to publish all of the exposed ports • command line option --expose exposes a port or a range of ports at runtime
  • 104. Docker Networking – Publishing Port • Exposing and publishing ports allows containers to communicate with each other and externally • The difference between an exposed port and a published port is that the published port is bound on the host • Publishing either: – binds all container ports to random ports on the host (via -P) OR – binds a specific port or port range from container to host (via -p)
  • 105. Docker Networking – Publishing Port • $ docker run -d -P redis • Run redis detached and publish all exposed ports to random ports (-P) – container port, 6379, is exposed at the random port, 32768, to the host – 6379 is the default port of the redis application • Docker communicates through the random port to the exposed, default port in the container – The container listens on the exposed port
  • 106. Docker Networking – Publishing Port • Publish all exposed ports to random ports – -P or --publish-all • Publish or bind a container port or group of ports to the host – -p, --publish list • Syntax examples: – Publish or bind to specific port (<hostPort>:<containerPort>) • e.g. -p 8080:80 • Container port 80 is published to the host as port 8080 – Publish or bind to random port (<containerPort>) • e.g. -p 80 • This binds container port 80 to a random host port, e.g. port 32768 • Specify which IP to bind on as in: <host interface>:<hostPort>:<containerPort> – e.g. 127.0.0.1:6379:6379 – This limits the exposure of this port, 6379, to connections on IP 127.0.0.1
  • 107. Docker Networking – Publishing Port • $ docker run -d -P nginx • Run nginx server, detached and publish all exposed ports – Application’s default ports, 80 and 443 are published and available through random port(s), 32770 and 32769 respectively – telnet to test connection to the application listening on container port 80, by connecting to bound random host port 32770
  • 108. Docker Networking – Publishing Port • $ docker run -d -p 8080:80 nginx • Syntax – -p <host port>:<container port> • Container port 80 is published as port 8080 to the host • A connection to port 8080 on the host is mapped to port 80 in the container • Note: <host port> is optional, if left off, port is published to a random host port, instead of 8080 as in this example
  • 109. Docker Networking – Built-In Network Drivers • The Docker built-in network drivers facilitate the containers' ability to communicate on a network – built into the Docker Engine – invoked and used through standard docker network commands • Network drivers: – None – Host – Bridge
  • 110. Docker Networking – Host Network Driver • The host network driver has access to the hosts' network interfaces and makes that available to the containers – In host mode the container shares the networking namespace of the host, directly exposing the container to the outside world • The advantage of the host network driver includes higher performance, and a NAT-free environment • A disadvantage is that it is susceptible to port conflict • Use the --net host option to run a container on a host network
  • 111. Docker Networking – Bridge Network Driver • The bridge network driver provides a single-host network on top of which containers may communicate. – In bridge mode, Docker automatically assigns port mappings. Bridge networking leverages these port mappings and NAT to communicate outside the host • The IP address is private and not accessible from outside the host • Use the --net bridge option to manually run a container on a bridge network
  • 112. Docker Networking – Bridge Network Driver • By default, Docker creates a local bridge network named docker0, using the bridge network driver • Unless otherwise specified, containers will be created on this network:
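The default bridge network can be inspected directly; a minimal sketch, assuming the nginx image is available and the container name "web" is arbitrary:
$ docker network ls              # docker0 appears as the network named "bridge"
$ docker network inspect bridge  # subnet, gateway and currently attached containers
$ docker run -d --name web nginx # attaches to the bridge network by default
$ docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' web   # the container's private IP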
  • 113. Docker Networking – none Network Driver • The none driver gives a container its own networking stack and network namespace – No external network interface; it cannot communicate outside the container • The none network driver is an unmanaged networking option – Docker Engine will not: • create interfaces inside the container • establish port mapping • install routes for connectivity – Guarantees container network isolation between any containers and the host • I/O may be initiated through volumes or STDIN and STDOUT
  • 114. Docker Networking – none Network Driver
  • 115. Docker Networking – Overlay • Overlay network driver creates networking tunnels – enabling communication between hosts • Containers on this network behave as if they are on the same host by tunneling network subnets between hosts – spans a network across multiple hosts • Several tunneling technologies are supported – e.g. virtual extensible local area network (VXLAN) • Created when a Swarm is instantiated
  • 116. Docker Networking – Underlay • Underlay network drivers expose host interfaces, e.g. eth0, directly to containers running on the host – e.g. the Media Access Control virtual local area network (MACvlan). • Allows direct connection to the hosts' physical interface – Provides routable IP addresses to containers on the physical network • MACvlan establishes a connection between container interfaces and the host interface (or sub-interfaces) • MACvlan eliminates the need for the Linux bridge, NAT and port- mapping
  • 117. Docker Networking – Plug-In Network Drivers • Plug-In Network Drivers: – created by users, the community and other vendors – provide integration with incumbent software and hardware – add specific functionality • Network driver plugins are supported via the LibNetwork project – The goal of libnetwork includes: • Modularize networking logic in Docker into a single, reusable library • Provide a consistentAPI and required network abstractions for applications
  • 118. Docker Networking – Plug-In Network Drivers • User-Defined Network – You can create a new bridge network that is isolated from the hosts' bridge network
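Creating and using a user-defined bridge network; a minimal sketch, assuming the redis and alpine images are available and the names appnet and db are arbitrary. On a user-defined network, Docker's embedded DNS resolves container names:
$ docker network create --driver bridge appnet
$ docker run -d --name db --network appnet redis
$ docker run --rm --network appnet alpine ping -c 1 db   # "db" resolves to the redis container's IP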
  • 119. Docker Networking – Plug-In Network Drivers • Community- and vendor-created – Network drivers created by third-party vendors or the community – Enables integration with incumbent software and hardware – Provides functionality not available in standard or existing network drivers – e.g.Weave Network Plugin – creates a virtual network that connects your Docker containers across hosts or clouds • IPAM Drivers – IP Address Management (IPAM) Driver – Built-in or Plug-in IPAM drivers – Provides default subnets or IP addresses for Networks and Endpoints if they are not specified • IP addressing can be manually created/assigned
  • 120. Docker Networking – Network Scope • Network driver concept of scope is the domain of the driver: local or swarm – Local scope drivers provide connectivity and network services within the scope of the host – Swarm scope drivers provide connectivity and network services across a swarm cluster • Local scope networks will have a unique network ID on each host • Swarm scope networks have the same network ID across the cluster • Scope is identified via the docker network ls command:
  • 122. Docker Swarm Mode • Swarm is Docker's native clustering tool – enables orchestration of services in a pool of Docker engines – schedules containers on to the swarm cluster based on resource availability – Docker engines participating in a cluster are running in swarm mode • Docker tools, APIs and services can be used in Swarm mode, enabling scaling of the Docker ecosystem • The tools for container management and orchestration include: – Docker Compose – Docker Swarm mode – Apache Mesos – Google Kubernetes
  • 123. Docker Swarm Mode – Two types of Docker nodes: • Manager – deploys applications to the swarm – dispatches tasks (units of work) to worker nodes – performs the orchestration and cluster management functions • Worker – receives and executes tasks dispatched from manager nodes – runs agents which report on tasks to the manager node – A service is the definition of the tasks to execute on the worker nodes • A node is an instance of the Docker engine participating in the swarm
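The basic swarm workflow in commands; a minimal sketch, assuming Docker 1.12+ engines with network connectivity between them and that the service name "web" is arbitrary:
$ docker swarm init                # run on the first node; it becomes a manager
$ docker swarm join-token worker   # prints the docker swarm join command a worker runs to join
$ docker service create --name web --replicas 3 -p 80:80 nginx
$ docker service ls                # services and their replica counts
$ docker node ls                   # run on a manager; lists managers and workers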
  • 124. Docker Compose • Dockerfiles and runtime commands get increasingly complex – particularly with multi-tiered applications • Docker Compose is a tool to streamline the definition and instantiation of multi-tier, multi-container Docker applications – docker run starts a container; Compose manages containers as a service – A service codifies a container’s behavior in a Compose configuration file – Use the configuration file and docker stack deploy to organize and spin up the containers • The Compose file provides a way to: – document and configure the application’s service dependencies (databases, caches, web service APIs, etc.) – scale, limit, and redeploy the container • Enhances security and manageability by moving docker run commands to a YAML file
  • 125. Docker Compose • Docker Compose defines and runs complex services: – define single containers via a Dockerfile – describe a multi-container application via a single configuration file (docker-compose.yml) – manage the application stack via a single binary (docker stack deploy) • The Docker Compose configuration file specifies the services, networks, and volumes to compose: – services – the equivalent of passing command-line parameters to docker run – networks – analogous to definitions from docker network create – volumes – analogous to definitions from docker volume create
version: "3"
services:
  web:
    build: .
    volumes:
      - web-data:/var/www/data
  redis:
    image: redis:alpine
    ports:
      - "6379"
    networks:
      - default
volumes:
  web-data:
  • 126. Docker Compose
docker-compose up – launches all containers
docker-compose stop – stops all containers
docker-compose kill – kills all containers
docker-compose exec <service> <command> – executes a command in the container
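The same Compose file drives both workflows; a minimal sketch, assuming a docker-compose.yml in the current directory and that the stack name "mystack" is arbitrary (docker stack deploy requires swarm mode and the version "3" file format):
$ docker-compose up -d                               # single host: start the services in the background
$ docker-compose ps                                  # list the running service containers
$ docker stack deploy -c docker-compose.yml mystack  # swarm: deploy the same file as a stack
$ docker stack services mystack                      # list the stack's services and replicas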
  • 127. Docker Q&A • You have just inherited a Docker environment and come across the following in a script, what does it do? sudo docker run -v /home/user1/foo:/home/user2/src -v /projects/foo:/home/user2/data -p 127.0.0.1:40180:80 -p 127.0.0.1:48000:8000 -p 45820:5820 -t -i user2/foo bash
  • 128. Docker Q&A • Taking each CLI parameter in turn:
sudo – used to run docker as the super user if not previously set up
docker run – the docker run command
-v <host path>:<container path> – maps a host volume into a container
-p <host IP>:<host port>:<container port> – binds a container port to a host port on a specific host IP
-p <host port>:<container port> – binds a container port to a host port on any host IP
-t – attaches a terminal to the container
-i – enables interactive mode
user2/foo – image identifier
bash – container startup command
  • 129. Docker Q&A • docker run starts a container from the image user2/foo and runs the bash executable in the container. • Persistent data (-v) is enabled by mounting the host directories /home/user1/foo and /projects/foo as the mount points /home/user2/src and /home/user2/data inside the container. • The container publishes three container ports, 80, 8000 and 5820, to host ports 40180, 48000 and 45820 respectively (-p). Additionally, container ports 80 and 8000 can only be accessed on the host via the local interface, 127.0.0.1. • Finally, -i and -t are used to enable interactive access to the standard input and output of the container sudo docker run -v /home/user1/foo:/home/user2/src -v /projects/foo:/home/user2/data -p 127.0.0.1:40180:80 -p 127.0.0.1:48000:8000 -p 45820:5820 -t -i user2/foo bash
  • 131. Named Volumes: Host and Container Data Volumes • A named volume is a mechanism for decoupling persistent data needed by your container from the image used to create the container • Volumes are directories stored outside of the container’s filesystem and hold reusable and shareable data that persists even after a container is terminated • There are three ways to create volumes with Docker: – Create a Docker data volume (-v option with docker create or docker run) – Add a new volume via VOLUME instructions in a Dockerfile – Mount a host directory or file as a data volume to a container directory using the -v option • Volumes are not a part of the containers' Union File System
  • 132. Named Volumes • Container data is discarded when the container is removed. As such, critical data should be kept outside the container – Note: simply exiting a container will preserve the data • A container’s file system is composed of layers and traversing the layers for data decreases performance – Data with high I/O requirements should be stored in a volume outside the container.
  • 133. Container volumes • Docker volumes manage storage which can be shared among containers, while storage drivers enables access to the container’s writable layer • A data volume is a directory or file in the Docker host’s filesystem that is mounted directly into a container
  • 134. Container volumes • Container volumes are instantiated via docker volume create or the VOLUME instruction in a Dockerfile • Use docker volume create to create a volume at the command line: – $ docker volume create --name vol44
  • 135. Container volumes • The volume can be attached to a container at run-time: – $ docker run --rm -it -v vol44:/cvol44 alpine sh
  • 136. Container Data Volumes • Docker data volumes allow data to: – persist after the container is removed – be shared between the host and the Docker container – be shared with other Docker containers • It allows directories of the host system, managed by Docker, to be mounted by one or more containers. It's simple to set up as you don't need to pick a specific directory on the host system
  • 137. Container DataVolumes • This creates a volume /data/vol01 and makes it available to the container • The container volume, /data/vol01, maps to a directory on the host file system.You can get the location via the $ docker inspect <containerID> command. Look in the Mount section for the Source name/value pair:
  • 138. Container Data Volumes
"Mounts": [
    {
        "Type": "volume",
        "Name": "dd517d905c98c74dc0c10370a46dd8445d67dbf84162dc0d9076b4040c395134",
        "Source": "/var/lib/docker/volumes/dd517d905c98c74...dbf84162dc0d9076b4040c395134/_data",
        "Destination": "/data/vol01",
        "Driver": "local",
        "Mode": "",
        "RW": true,
        "Propagation": ""
    }
],
  • 139. Mount host directory as a DataVolume • Docker allows you to mount a directory from the Docker host into a container • Using the -v option, host directories can be mounted in two ways: – using an existing host volume, e.g. /home/john/app01, or – new auto-generated volume on the host, e.g. /var/lib/docker/volumes/53404f432f0… • You can assign the volume a name using the --name option, otherwise Docker assigns it a 64-character volume identifier • The advantage of Docker created host volumes is portability between hosts. It does not require a specific volume to be available on any host that will make the mount
  • 140. Mount host directory as a DataVolume • $ docker run -v <host_dir>:<container_dir>:ro -i -t <image> <default executable> – <host_dir> is the source directory – <container_dir> is the container directory – Add :ro to make the mount read-only • In addition to directories, single files can also be mounted between the host and container
  • 141. Mount host directory as a DataVolume • Mount a volume from the host filesystem in the container: – $ docker run -v /home/john/app01:/app01 -i -t busybox • In this example, the -v parameters are: – /home/john/app01 – host directory – : – colon delimiter – /app01 – container mount for host directory • Any existing files in the host volume (/home/john/app01) are automatically available in the container mount
  • 142. Container DataVolumes • Volume Use Cases: – Improved performance as it bypasses the storage driver, e.g. AUFS – Enables data sharing between containers – Enables data sharing between the host and the container
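Sharing data between containers through a named volume; a minimal sketch, assuming the alpine image is available and the names shared-data and writer are arbitrary:
$ docker volume create shared-data
$ docker run -d --name writer -v shared-data:/data alpine sh -c 'echo hello > /data/msg && sleep 3600'
$ docker run --rm -v shared-data:/data alpine cat /data/msg   # prints: hello
$ docker rm -f writer                                         # the volume and its data survive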
  • 143. Docker Volume – Q&A • Are modifications to the filesystem discarded when a container exits? – No – Note the difference between exiting and removing the container – Modifications are only discarded once the container is removed – In that case, use Volumes to keep data if the container is removed
  • 145. Docker Command Line • docker – A self-sufficient runtime for containers – Usage: • docker COMMAND [OPTIONS] [arg...] • docker [ --help | -v | --version ] • docker-machine – Create and manage machines running Docker – Usage: • $ docker-machine [OPTIONS] COMMAND [arg…] • docker-compose – Define and run multi-container applications with Docker – Usage: • $ docker-compose [-f <arg>...] [options] [COMMAND] [ARGS…] • $ docker-compose -h|--help
  • 147. Docker Command Line – Combining Options • (generally) Short-form, or single character, command line options can be combined, e.g.: – docker run -i -t --name test busybox sh can be replaced with – docker run -it --name test busybox sh
  • 148. Docker Command Line – Getting Help • Append the --help option to a Docker command, e.g.: – docker --help – docker <command> --help
  • 149. Docker Command Line – Getting Help • If you enter an incomplete command line, Docker will attempt to provide useful syntax hints:
  • 151. Linux Command Line • The Linux command line provides a way to manually interact with the operating system – The shell is a program that acts as an interface between the user and the operating system – The shell displays one of two prompts • For the root user, the prompt is the hash or pound (#) symbol (£ on UK character sets) • For non-root users, the prompt is the $ symbol
  • 152. Linux Command Line • The command line ends when you hit the Enter key. • A command line however can be extended beyond a single line at the command line or in a file – I.e. if the command is longer than one line, the backslash can be used to extend the command line to two or more lines, e.g. – When the shell encounters a backslash, it ignores any Enter key, and expects the command line to continue – The backslash is mainly cosmetic; to improve readability sudo docker run -v /home/user1/foo:/home/user2/src -v /projects/foo:/home/user2/data -p 127.0.0.1:40180:80 -p 127.0.0.1:48000:8000 -p 45820:5820 -t -i user2/foo bash
  • 153. Linux Command Line • There are many shells in Linux • A commonly used shell is bash, the Bourne Again Shell • When you start a Linux container in Docker, you can specify which shell it should run, e.g. – $ docker run --rm -it debian bash – This starts the debian container running with the bash shell
  • 154. Linux Command Line • The Linux command line consists of three main object types: command, argument(s), option(s). – command • the program to run, e.g. ls, curl, docker, etc. • the command is always the first object on the command line – argument • a parameter or sub-command used to provide the command with additional information • e.g. by itself, the ls command lists the files or directories in the current directory. To list files in another directory, enter that directory as an argument, e.g. ls /opt/bin • zero or more arguments – option • used to modify the behavior of the command • e.g. the ls command will display visible files/directories. Given the -a option, e.g. ls -a, it will display both visible and non-visible files • zero or more options
  • 155. Linux Command Line • Options come in two forms: – short-form • typically prepended with a single dash • ls -a or docker ps -a • options can (typically) be concatenated, instead of ls -a -F -l, enter ls -aFl – long-form: • prepended with two dashes, e.g.: • ls --all or docker ps --all • Use white-space to separate multiple options • Can mix and match short-form and long-form options on the same command line: ls --all -l
  • 157. Relevant Linux Features – I/O Stream • Standard streams are communication channels between a program and the shell • Linux recognizes three standard streams: stdin, stdout, stderr • STDIN – standard input – stream data into a program – by default input to a command comes from the keyboard • STDOUT – standard output – stream data out of a program – by default, output of a command is sent to the terminal • STDERR – standard error – stream error output from a program – by default, error from a command is sent to the terminal
  • 158. Relevant Linux Features – Redirection • Linux allows I/O to be redirected away from the default source/target • The default source of STDIN is the keyboard – i.e. by default a command expects to get its input from the keyboard – To force input to come from another location, e.g. a file, use the < redirection symbol • e.g. this pr command indents input five spaces, however, the input data is sent from file001, instead of the keyboard
  • 159. Relevant Linux Features • The default target of STDOUT is the terminal or screen – by default a command expects to send its output to the screen – To direct its output elsewhere, use the > symbol • This example “redirects” the output of the docker images -q command to a file, instead of the default target, the screen – Note:To append output to an existing file, instead of overwriting it, use >> instead
  • 160. Relevant Linux Features • The default target of STDERR is the screen – by default a command expects to send its error output to the screen • To redirect it elsewhere, use the "2>" symbol: Note: "command 2> file" sends the error output to a file, file. If file already exists, any existing content is overwritten. To append output to an existing file, use 2>> instead, i.e. "command 2>> file".
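Applied to Docker commands, the same redirections look like this; a minimal sketch, where the file names are arbitrary and no-such-image is deliberately an image that does not exist:
$ docker images -q > image-ids.txt               # stdout overwrites the file
$ docker images -q >> image-ids.txt              # stdout is appended to the file
$ docker pull no-such-image 2> pull-errors.log   # the error message goes to the file, not the screen
$ docker run --rm alpine ls /missing > out.log 2> err.log   # split stdout and stderr into separate files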
  • 161. Relevant Linux Features – Pipe • The pipe is implemented with the "|" symbol • It takes the output (stdout) of the command on the left and sends it as input (stdin) for the command on the right
  • 162. Relevant Linux Features – Pipe • In the example below, docker run --help is the first command. Its output is used as input to the more command, which displays the output, one screen at a time: Note: stderr (standard error) cannot be passed through the pipe, only stdout.
  • 163. Relevant Linux Features – Command Substitution • In command substitution, the shell runs command, however instead of displaying the output of command, it stores the output in a variable – You can then pass that variable as input to another command. • The syntax of command substitution is $(command) or the older `command`, using back-ticks.
  • 164. Relevant Linux Features – Command Substitution • Let's say you want to remove the most recent container running – Use docker ps -a which lists all containers by ID, starting with most recent, then copy the Container ID into the docker rm <Container ID> command:
  • 165. Relevant Linux Features – Command Substitution • Alternatively, use Command Substitution, letting the shell do the work: – $ docker rm $(docker ps -lq) • docker ps -lq first gets the ID of the most recent container, then passes it to the docker rm command:
  • 166. Relevant Linux Features – Control Operator • A Control Operator is a token that performs a control function • It is one of the following symbols: || & && ; ;; ( ) | |& <newline> – Let’s focus on the && and || control operators • On occasion you might need to group Docker commands. Let's see a few ways to do this in Linux with three of the control operators
  • 167. Relevant Linux Features – Control Operator Control operators Description ; Semicolon - delimits commands in a sequence Used to run multiple commands one after the other Similar to hitting ENTER after each command $ docker run --rm -it debian bash -c "ls /var; sleep 1; ls /" Run the container and execute the three commands one after the other, separated by ; (semicolon)
  • 168. Relevant Linux Features – Control Operator Control operators Description && AND - runs commands conditionally, on success has the form A && B where B is run IF AND ONLY IF A succeeds i.e. if A returns an exit status of zero Example: $ apt-get update && apt-get install -y openssh-server This runs the 2nd command, apt-get install -y openssh-server, IF AND ONLY IF the 1st command, apt-get update, succeeded.
  • 169. Relevant Linux Features – Control Operator Control operators Description || OR - runs command conditionally, on failure has the form A || B where B is run IF AND ONLY IF A fails i.e. if A returns a non-zero exit status This runs the second command, IF AND ONLY IF, the first command fails. In this example, since the first command, false will always fail, i.e. return a non-zero exit status, the second command, true, runs and sets the zero exit status
  • 170. Relevant Linux Features – Exit Status • When a command ends, it returns an exit status (also known as return status or exit code) • Exit status is an integer value ranging from 0 to 255. – By default, a command that ends successfully has an exit status of zero, 0. – A command that ends with an error has a non-zero (1 - 255) exit status. • Commands are free to choose which value to use to reflect success or failure. However some values are reserved: http://www.tldp.org/LDP/abs/html/exitcodes.html
0 – the exit status of a command on success
1 - 255 – the exit status of a command on failure
? – holds the exit status of the last command executed
$? – reads the exit status of the last command executed
• A command writes its exit status into the ? shell variable, accessible via $? – ? holds one value at a time; it is overwritten by the exit status of the next command – To read the command's exit status, display the variable $?, e.g. echo $?
  • 171. Relevant Linux Features – Exit Status • By default if a command succeeds, on exit it sets a zero, 0, exit status – If the directory /var/log/apt exists, the command ls /var/log/apt succeeds with a zero exit status – If the directory is not accessible, the ls command will fail with a non-zero exit status: Success results in a zero exit status, however commands can decide what non-zero integer, between 1 and 255, to use to reflect an error. In the above example, ls uses exit status 2 to reflect that a directory is not accessible, and docker chooses an exit status of 125 to reflect that it is “Unable to find image” locally
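Checking the exit status of docker commands from the shell; a minimal sketch, assuming the alpine image is available and no_such_image does not exist locally or on Docker Hub:
$ docker run --rm alpine true
$ echo $?        # 0 - the command inside the container succeeded
$ docker run --rm alpine false
$ echo $?        # 1 - the command inside the container failed
$ docker run --rm no_such_image
$ echo $?        # non-zero, e.g. 125, when the daemon cannot find or run the image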
  • 172. Relevant Linux Features – Signals • A Linux signal is a type of inter-process communication • The operating system uses it to send an action item to a process • The action taken depends on the signal received • A signal can come from various sources: – Keyboard – e.g. by entering CTRL-C – Function – e.g. the kill() system call from an application – Processes – e.g. a child process sends SIGCHLD when it exits – Command – e.g. kill -s <SIGNAL Name> <processID>
  • 173. Relevant Linux Features – Signals • Signal names start with SIG and have an associated positive integer:
  SIGINT (2) – Interrupt from keyboard
  SIGKILL (9) – Kill signal
  SIGTERM (15) – Terminate signal
  SIGSTOP (19) – Stop process
  • Processes do one of three things upon receiving a signal: – Ignore the signal – Take a different action – Take the default action
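  Docker relies on these signals when stopping containers; a sketch (web01 is a hypothetical container name):
  $ docker stop web01              # sends SIGTERM, then SIGKILL after a grace period (default 10 seconds)
  $ docker stop -t 30 web01        # extend the grace period to 30 seconds
  $ docker kill web01              # sends SIGKILL immediately
  $ docker kill -s SIGHUP web01    # send a specific signal instead of SIGKILL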
  • 174. Relevant Linux Features – Docker and Sudo • docker is a privileged command reserved for the system administrator • To use docker, you must be root or have system administrator privileges – From a security point of view it's best to log in as a non-root user and only elevate privileges as needed • The sudo command allows a non-root user to run commands reserved for root • Depending on your host configuration, you may be required to prepend docker commands with sudo
  • 175. Relevant Linux Features – Docker and Sudo
  • 176. Relevant Linux Features – Docker and Sudo • Users that are part of the docker group can use docker without having to prepend sudo – E.g. edit the /etc/group file and update the line: • docker:x:999: to docker:x:999:user – where user is an existing username on the system; that user can then run docker without prepending sudo – Note: This is not considered a best practice
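  A sketch of the more common way to make the same group change (run via sudo; the user must log out and back in for it to take effect):
  $ sudo groupadd docker            # only needed if the docker group does not already exist
  $ sudo usermod -aG docker user    # append user to the docker group
  The same security caveat applies: membership in the docker group is effectively root-equivalent on the host.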
  • 177. Relevant Linux Features – UNIX Domain Socket • UNIX domain socket – also known as IPC (inter-process communication) socket – a data communications endpoint for exchanging data between processes on the same host – implemented as a file, /var/run/docker.sock in Docker • /var/run/docker.sock is owned by the root user of the Docker Host – as such it represents a potential security risk
  • 178. Relevant Linux Features – UNIX Domain Socket • The Docker daemon listens on /var/run/docker.sock, as a server process, for requests from Docker client processes (e.g. the docker command-line client) • It is used to facilitate communication between the Docker client and the Docker daemon on the same host • A UNIX domain socket is bi-directional, i.e. it enables a two-way communications channel
  • 179. Relevant Linux Features – UNIX Domain Socket • Summary: – UNIX domain socket allows processes on the same host to communicate – All communication occurs entirely within the operating system kernel – Unix domain sockets use the file system as their address name space – A UNIX domain socket is known by a pathname – Security implications should be considered • The /var/run/docker.sock is an implementation of the UNIX domain socket • In Linux it is a special socket file.
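  A sketch of talking to the daemon directly over the socket (assumes curl 7.40 or newer and permission to read /var/run/docker.sock, typically root or docker group membership):
  $ curl --unix-socket /var/run/docker.sock http://localhost/version
  $ curl --unix-socket /var/run/docker.sock http://localhost/containers/json
  These Engine API endpoints return roughly the same information that docker version and docker ps display.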
  • 180. Relevant Linux Features – Similar data exchange concepts: • TCP Sockets – Enable a bi-directional communication channel between two endpoints – The endpoints can be on the same computer or separated by a network – Client/server implementation; the server listens at a port and the client talks on that port • Pipes – One-way communication channel between commands on the local host – A sequence of processes chained together by their standard streams • FIFO – First In First Out – Also known as a Named Pipe – Unidirectional communication channel between two processes on the local host – Can be accessed by two processes, one to write data, the other to read data – Implemented as a specially formatted file on the local host – Can be created and named with the mkfifo or mknod commands
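  A sketch of a pipe in a Docker context, plus creating a FIFO (acts on whatever containers exist, so treat it as an example rather than something to paste blindly):
  $ docker ps -q | xargs docker stop     # pipe the IDs of running containers into docker stop (prints a usage error, harmlessly, if none are running)
  $ mkfifo /tmp/demo.fifo                # create a named pipe (FIFO) as a special file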
  • 182. Docker Commands – docker ps • docker ps -a – Lists all containers (running or stopped) • docker ps – Lists only the currently running containers
  • 183. Docker Commands – docker pull • docker pull <image> – Docker will connect to Docker Hub and attempt to pull, i.e. download and install, an <image> locally – E.g. docker pull ubuntu downloads and installs the latest version of the image named ubuntu from Docker Hub. Note: The above command downloads the most up-to-date version of the ubuntu image, or to be technically correct, it pulls the ubuntu image that has the tag latest from Docker Hub.
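  A sketch of pulling a specific tag instead of latest (the tags shown are common ones; availability depends on the registry):
  $ docker pull ubuntu:16.04
  $ docker pull nginx:alpine
  The <image>:<tag> form pins an exact version rather than whatever latest currently points to.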
  • 184. Docker Commands – docker images • Lists all images on the local host
  • 185. Docker Commands – docker help • docker run --help – See a list of all flags supported by the run argument. • You can append the --help option to any Docker command – e.g. docker <command> --help
  • 186. Docker Commands – docker run • docker run debian ls -ls • With the run argument, the Docker daemon finds the image (debian), creates the container and runs ls -ls in that container. • In this case, ls -ls overrides the image's default command and is executed inside the new container, and you see the following: • Note: If the image does not exist locally, an attempt is made to download it from the repository:
  • 187. Docker Commands – docker run • docker run -it alpine /bin/sh • When you run this command, Docker daemon does the following: – Runs the alpine image: If the image exists locally, Docker daemon uses it for the new container. Otherwise, Docker Engine pulls it from a registry, e.g. Docker Hub – Creates a new container: Docker allocates a filesystem and mounts a read-write layer on top of the image. – Configures the container: Creates a network bridge interface and attaches an available IP address from a pool – Executes the starting command: Runs the default executable or in this case, /bin/sh from the command line – Manages the data I/O stream: Connects and logs standard input, output and error streams • Running the run command with the -it flags attaches us to an interactive TTY in the container. Now you can run as many commands in the container as you want.
  • 188. Docker Commands – docker run • docker run alpine echo "hello from alpine" • In this case, the Docker daemon starts the alpine container, which runs the echo command with the "hello from alpine" argument. The container then immediately exits.
  • 189. Docker Commands – docker run • docker run --name web01 -d -p 8080:80 nginx – Starts the nginx web server in detached mode and names the container web01 – Maps port 80 of the container to port 8080 of the host machine, publishing the service on host port 8080 – Access it via http://localhost:8080 or http://<ip_address>:8080
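  A quick way to verify the mapping from the host, assuming the web01 container above is running (typical output shown):
  $ docker port web01
  80/tcp -> 0.0.0.0:8080
  $ curl -I http://localhost:8080       # should return an HTTP 200 OK response header from nginx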
  • 190. Docker Commands – docker run • Running docker ps will show if any containers are currently active (running) • docker images lists images available on the local host: nginx, ubuntu, debian, alpine • With docker run, Docker Engine starts the local alpine image running as a container, in interactive mode (-i) and attaches a TTY device (-t) for I/O. After the container starts, it runs the application, in this case the Linux shell, /bin/sh. • Behind the scenes, before the prompt: – Image layers are mounted read-only, with a read-write layer on top – The default bridge network driver interface is created – An IP address is allocated from a pool – The default executable, /bin/sh, is run – The standard input, output and error streams are attached
  • 191. Docker Commands – docker rmi • docker rmi <image ID> – Remove one (or more) images
  • 192. Docker Commands – docker rm • docker rm <container ID> – Remove one (or more) containers • Note: You can identify the container(s) to remove using either CONTAINER ID or NAMES
  • 193. Docker Commands – docker run • docker run --rm – Creates a transient container, i.e. the container is removed after it exits. Runs the equivalent of $ docker rm <containerID> after the container exits.
  • 194. Docker Commands – docker attach • docker attach <container> – Attach to a running container. – The container must be running; if it's stopped, start it first, then attach to it.
  • 195. Docker Commands – docker exec • docker exec – Start an additional process in a running container. If the nginx container is running in detached (-d) mode, you can use docker exec to start another process inside it. Note: If the container is stopped, it must first be started with docker start. In the example, the process status command, ps, is run inside the nginx container.
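  A sketch, assuming the detached nginx container web01 from the earlier run example and that its image provides bash:
  $ docker exec web01 nginx -v          # run a one-off command inside the running container
  $ docker exec -it web01 bash          # open an interactive shell alongside the main nginx process
  Exiting that shell does not stop the container, since nginx remains the container's main process.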
  • 196. Docker Commands – docker search • docker search <ImageName> – Effectively the command-line version of a Docker Hub search. NAME is the image name; names in the format <UserID>/<ImageName> represent images uploaded by non-official sources. STARS is the number of likes for a specific image. OFFICIAL identifies official vendor images.
  • 197. Docker Commands – docker build • docker build -t <DockerID>/<ImageName> <PATH> – Builds a new image using a Dockerfile; the final argument is the build context (a PATH such as . or a URL), not the Dockerfile itself
  • 198. Docker Commands – docker build • Using docker build is the preferred way to build a Docker image • The build instructions are laid out in the Dockerfile, which allows an automated, documented and repeatable way to generate a specific Docker image. • Associated with the docker build command is its context. The build's context is the set of files at a specified location: PATH or URL. PATH is a directory on your local filesystem; URL is a Git repository location • By default the build instructions are read from a file called Dockerfile at the root (or top level) of your context – E.g. if the docker build command is run from a subdirectory called Files, this becomes its context – The Docker daemon searches this directory and any subdirectories for objects it needs, e.g. the Dockerfile.
  • 199. Docker Commands – docker build • Note: if the Dockerfile is located outside the context, use the -f option to specify the Dockerfile – e.g. $ docker build -f /path/to/a/Dockerfile .
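  A minimal sketch of this workflow (the image name myid/static-web, and the assumption that an index.html sits next to the Dockerfile, are illustrative):
  $ cat Dockerfile
  FROM nginx:alpine
  COPY index.html /usr/share/nginx/html/
  $ docker build -t myid/static-web .
  $ docker run -d -p 8080:80 myid/static-web
  The trailing . sets the build context to the current directory, where Docker expects to find the Dockerfile.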
  • 200. Docker Commands – docker commit • docker commit <container ID> • Containers are by design ephemeral and stateless – Changes made while in the container are discarded when the container is removed – One way to make container updates or configuration changes persistent is to freeze the container, i.e. convert it into an image.
  • 201. Docker Commands – docker commit • The docker commit command is used to create a new image based on changes made in a container. • Starting a container from the newly generated image confirms that the changes made in the original container were successfully committed. • I.e. start a container, configure it to taste, then commit those changes into a new Docker image: Note: Building an image via docker commit is not considered a best practice as it is not repeatable or self-documenting the way docker build and a Dockerfile are.
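  A sketch of the freeze-and-commit workflow (the container name, image name and package choice are illustrative; the prompt is shortened):
  $ docker run -it --name base debian bash
  root@<container>:/# apt-get update && apt-get install -y curl
  root@<container>:/# exit
  $ docker commit base myid/debian-curl:v1
  $ docker run --rm myid/debian-curl:v1 curl --version     # the committed change is present in the new image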
  • 202. Docker Commands – docker info • docker info – Display system-wide Docker information
  • 203. Docker Commands – docker history • docker history <image_name> – Show the history of an image. In effect, it identifies the "layers" in an image.
  • 204. Docker Commands – docker inspect • docker inspect – Return low-level information on Docker objects • The target of this command is an object that can be identified via a Name or an ID, e.g. image, container, network, service, etc. • The output of the command is information about the object displayed as a JSON array
  • 205. Docker Commands – docker inspect
  @ubuntu:~$ docker inspect wizardly_jang
  [
      {
          "Id": "c794e33bda6bfa60cdc039795ad7712c62df68ca5f8a6d14b906a6a06bc08e43",
          "Created": "2017-04-01T06:02:04.840341671Z",
          "Path": "nginx",
          "Args": [
              "-g",
              "daemon off;"
          ],
          "State": {
              "Status": "running",
              "Running": true,
  . . .
  To output a specific field, use the --format or -f option, e.g. docker inspect --format "{{.NetworkSettings.IPAddress}}" <container ID> to view just the IP address section of the docker inspect output.
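  A few more --format sketches (web01 is a hypothetical running container); the argument is a Go template applied to the JSON shown above:
  $ docker inspect -f '{{.State.Status}}' web01
  $ docker inspect -f '{{.Config.Image}}' web01
  $ docker inspect -f '{{json .NetworkSettings.Ports}}' web01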
  • 206. Docker Commands – docker diff • docker diff <container ID> – Inspect changes to a container's filesystem. Each entry in the output is prefixed with: A (added file), C (changed file), D (deleted file)
  • 207. Docker Commands – docker network • docker network connect – Connect a running container to a network. Use docker inspect 00db80208c35 to confirm the container is connected; in the example the container is connected to both the bridge and myNeto1 networks.
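  A sketch of the full sequence (the container name web01 and the network name myNeto1 from the slide are used for illustration):
  $ docker network create myNeto1
  $ docker run -d --name web01 nginx
  $ docker network connect myNeto1 web01
  $ docker inspect -f '{{json .NetworkSettings.Networks}}' web01    # should list both bridge and myNeto1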
  • 208. [Flowchart relating Docker objects (Dockerfile, Image, Container, Tar Archive, Registry, Network, Volume, Daemon) to the commands that act on them, e.g. build, commit, push, pull, run, create, exec, attach, logs, pause/unpause, rename, start/stop, kill, wait, restart, rm, rmi, save, load, export, import, tag, history, images, inspect, diff, cp, ps, stats, top, port, update, search, login, logout, info, version, and the docker volume/network create, ls, inspect, prune, rm subcommands. Source: neokobo.blogspot.com]
  • 209. Module 10 Docker & Container Terms
  • 210. JSON – JavaScript Object Notation • JSON is short for JavaScript Object Notation – implements a lightweight data-interchange format based on a subset of the JavaScript language – provides a way to store information such that it is easy for machines to parse and generate – a way to store information in an organized, easy-to-access manner – used primarily to transmit data, as an alternative to XML • Docker uses JSON throughout, e.g. docker inspect returns JSON and the default logging driver (json-file) stores container output as JSON.
  • 211. JSON – JavaScript Object Notation • Example of how Docker leverages JSON
  $ docker inspect 978d
  [
      {
          "Id": "sha256:978d85d02b87aea199e4ae8664f6abf32fdea331884818e46b8a01106b114cee",
          "RepoTags": [
              "debian:latest"
          ],
          "Container": "4799c1aee3356a0d8e51a1e6e48edc1c4ca224e55750e26916f917bdecd96079",
          "ContainerConfig": {
              "Hostname": "ed11f485244a",
              "Cmd": [
                  "/bin/sh",
                  "-c",
                  "#(nop) ",
                  "CMD ["/bin/bash"]"
              ],
          },
      }
  ]
  • 212. JSON – JavaScript Object Notation • JSON is built on two structures:
  – Name/Value pairs, delimited by commas • NAME:VALUE, NAME:VALUE,… • e.g. "Hostname": "ed11f485244a"
  – Ordered list of values • realized as an array, vector, list, or sequence • e.g. ["/bin/sh","-c","#(nop) ","CMD ["/bin/bash"]"]
  In JSON, the data structures are:
  • Array – An ordered collection of values – begins with [ (left bracket) and ends with ] (right bracket) – Values are separated by , (comma)
  • Object – An unordered set of name/value pairs – Begins with { (left curly brace) and ends with } (right curly brace) – Name and value are separated by : (colon) – Name/Value pairs are delimited by , (comma) – Written as {string : value,…}
  • Value – string, number, object, array, true, false, or null
  • 213. Docker Terms • Microservices Architecture – The application is built up of a modular set of interconnected services instead of a single monolithic application. – Services can be developed and deployed independently of one another, enabling innovation, agility and efficiency – The services are independently deployable and updateable, with minimal dependencies
  • 214. Docker Terms • Microservices vs. Monolithic applications – An application consists of a set of services. • For monolithic applications, these services are tightly integrated into the application • For microservices, these services are deployed as modular, standalone apps with standard interfaces – Multiple applications on a system might leverage a set of common services (e.g. Authentication, Logging, Messaging, etc.) • In a monolithic application environment, each application has built into it a copy of these common services • In a Microservices environment, these services are decoupled from the application, enabling agility and efficiency, e.g. the same service can be shared between applications • For example, Authentication is a service. In the monolithic environment, a separate instance of the Authentication service might be built into each application needing authentication. In a microservices environment, there might be just one Authentication service, created as a microservice. Every application needing Authentication services would simply "link" to it.
  • 215. Docker Terms • Runtime – Docker Container Runtime is the instantiation of a Docker Image – /usr/bin/docker-containerd is the core container runtime on Linux • Containerd spins up runC (or other OCI compliant runtime) to run and monitor Containers • Docker architecture is broken into four components: – Docker engine – Containerd – containerd-shim – runC • runC then runs the container
  • 216. Docker Terms • Universal Control Plane (UCP) – Manage multi-container applications on a custom host installation (on-premise, on a cloud provider) – Manage a cluster of Docker hosts like a single machine – Docker Enterprise Edition Add-on
  • 217. Docker Terms • Docker Trusted Registry (DTR) – An enterprise image repository solution installable behind a firewall to manage images and access – Runs a private repository of container images and makes them available to a UCP instance – Can be installed on-premises or on a cloud infrastructure – Docker Enterprise Edition Add-on
  • 218. Docker Terms • Composable – units that are well integrated, yet independent and modular
  • 219. Docker Terms • Sandbox – A Network Sandbox is a concept within the Docker Container Networking Model (CNM) – It contains the configuration of a container's network stack • This includes the container's interfaces, routing table, and DNS settings.
  • 220. Docker Terms • Linux Bridge – A Linux bridge is a Layer 2 device that is the virtual implementation of a physical switch inside the Linux kernel – It forwards traffic based on MAC addresses which it learns dynamically by inspecting traffic – A Linux bridge is not to be confused with the bridge Docker network driver which is a higher level implementation of the Linux bridge.
  • 221. Docker Terms • Network Namespaces – A Linux network namespace is an isolated network stack in the kernel with its own interfaces, routes, and firewall rules – It is a security aspect of containers and Linux; it is used to isolate containers – Similar to Virtual Routing and Forwarding (VRF) that segments the network control and data plane inside the host, Network Namespaces provide the construct to provide a unique network experience to different processes running on the host – Network namespaces ensure that two containers on the same host will not be able to communicate with each other or the host unless configured to do so via Docker networks – Typically, Container Network Model (CNM) network drivers implement separate namespaces for each container. However, containers can share the same network namespace or even be a part of the host's network namespace – The host network namespace contains the host interfaces and host routing table. This network namespace is called the global network namespace.
  • 222. Docker Terms • Virtual Ethernet Devices – A virtual Ethernet device (veth) is a Linux networking interface that acts as a connecting wire between two network namespaces – A veth is a full duplex link that has a single interface in each namespace. Traffic in one interface is directed out the other interface – Docker network drivers utilize veths to provide explicit connections between namespaces when Docker networks are created – When a container is attached to a Docker network, one end of the veth is placed inside the container (usually seen as the ethX interface) while the other is attached to the Docker network.
  • 223. Docker Terms • Iptables – iptables is an L3/L4 firewall that provides rule chains for packet marking, masquerading, and dropping – It is the native packet filtering system that is part of the Linux kernel – The built-in Docker network drivers utilize iptables extensively to segment network traffic, provide host port mapping, and to mark traffic for load balancing decisions.
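  A sketch of peeking at the rules the built-in drivers install (requires root; chain contents depend on your published ports and networks):
  $ sudo iptables -t nat -L DOCKER -n    # DNAT rules created for -p/-P port mappings
  $ sudo iptables -L FORWARD -n          # forwarding rules between containers and the outside world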
  • 224. Docker Terms • Red Hat Atomic Host – Optimized for running containerized environments
  • 225. Docker Terms • Orchestration – Orchestration is an important part of the Container ecosystem – Docker Swarm, Google Kubernetes, and Apache Mesos are some of the orchestration solutions
  • 226. Docker Terms • User-Space vs. Kernel-Space – User-space is that portion of system memory in which user processes (i.e., everything other than the kernel) run – This contrasts with kernel-space, which is that portion of memory in which the kernel executes and provides its services – User-space processes are allowed to access the kernel-space only through the use of system calls [Diagram: system memory divided into user space and kernel space]
  • 227. Docker Terms • Default Executable – The entry point to the container is an executable, specifically the default executable. It is the process running with PID 1 in the container – The entry point to a virtual machine is the kernel or the init program – In a VM (or the standalone Linux server), the init process has PID 1 and it is the parent of all other processes on the system.
  • 228. Docker Terms • Unikernels – Also called Library Operating System or Cloud Operating System – Unikernels are specialized, single-address-space machine images constructed by using library operating systems, intended to be run within a Virtual Machine – The developer selects a minimal set of libraries required for the app or service to run • libraries are compiled with the app and configuration code to build sealed, fixed-purpose images (unikernels) • images run directly on a hypervisor without an intervening OS such as Linux or Windows – Benefits include: • Security and efficiency as a result of the smaller attack surface and resource footprint • Performance, as they are built by compiling high-level languages directly into specialized machine images that run directly on a hypervisor, or bare metal • Portability, as hypervisors are ubiquitous and they also run on bare metal • Cost is minimized as the framework lends itself to pay-per-use and "as a service" models http://unikernel.org/
  • 229. Docker Terms [Diagram comparing three stacks: Linux Containers (Hardware, Kernel, Docker, Containers), Unikernels (Hardware, Hypervisor, libOS + Application images), and Virtual Machines (Hardware, Hypervisor, guest OS per VM), positioned against the trade-offs of Isolation, Agility, and Specialization]
  • 230. Docker Superseded Products and Tools – Docker Hub, Docker Store, Docker Cloud, Docker Swarm, Swarm mode, Docker Toolbox, Docker for Mac, Docker for Windows
  • 231. Topics for upcoming update: • Windows on Docker • Networking Introduction • Library OS • Unikernels • More Docker Commands • build, ship, run Commands • New Release Cadence (A. Akpaffiong, 2017)
  • 232. References
  • Intro/Review:
  – https://neokobo.blogspot.com/
  – https://docs.docker.com/engine/understanding-docker/
  – https://docs.docker.com/get-started/
  – https://veggiemonk.github.io/awesome-docker/
  – http://training.play-with-docker.com/
  • Unikernels:
  – https://en.wikipedia.org/wiki/Unikernel
  – http://unikernel.org/
  – https://wiki.xenproject.org/wiki/Unikernels
  • Misc:
  – http://www.linfo.org/user_space.html
  – https://github.com/docker/labs/blob/master/networking/concepts/03-linux-networking.md
  – https://github.com/docker/labs/blob/master/networking/concepts/01-cnm.md
  – https://github.com/containerd/containerd/blob/master/design/architecture.md
  – https://blog.docker.com/2016/12/containerd-core-runtime-component/
  – http://man7.org/linux/man-pages/man7/signal.7.html
  – https://docs.docker.com/engine/tutorials/dockervolumes/#mount-a-host-directory-as-a-data-volume
  – h20195.www2.hpe.com/V2/GetDocument.aspx?docname=4AA6-2761ENW
  – https://docs.docker.com/docker-cloud/apps/volumes/
  – https://docs.docker.com/get-started/part3/#docker-composeyml
  – https://github.com/docker/labs/blob/master/networking/concepts/02-drivers.md#userdefined
  – https://github.com/docker/labs/blob/master/networking/A1-network-basics.md
  – https://github.com/docker/libnetwork
  – https://github.com/docker/labs/blob/master/networking/concepts/07-macvlan.md
  • 233. References
  • Misc:
  – http://www.nuagenetworks.net/blog/docker-networking-overview/
  – https://www.ctl.io/developers/blog/post/docker-networking-rules/
  – https://github.com/docker/labs/tree/master/networking
  – https://medium.com/aws-activate-startup-blog/a-better-dev-test-experience-docker-and-aws-291da5ab1238
  – https://docs.docker.com/engine/understanding-docker/
  – https://en.wikipedia.org/wiki/UnionFS
  – https://github.com/moby/moby/blob/master/pkg/namesgenerator/names-generator.go
  – https://docs.docker.com/datacenter/dtr/2.1/guides/
  – https://blog.octo.com/en/docker-registry-first-steps/
  – https://docs.docker.com/engine/faq/
  – https://clearlinux.org/sites/default/files/vmscontainers_wp_v5.pdf
  – https://www.youtube.com/watch?v=qILu3vc8tBk&feature=youtu.be
  – http://man7.org/linux/man-pages/man7/cgroups.7.html
  – http://man7.org/linux/man-pages/man7/namespaces.7.html
  – http://runc.net/index.html
  – https://en.wikipedia.org/wiki/Cgroups
  – https://docs.docker.com/engine/installation/
  – https://blog.docker.com/2017/03/docker-enterprise-edition/
  – https://www.docker.com/pricing
  – https://www.hpe.com/h20195/v2/GetPDF.aspx/c05164344.pdf
  – https://techcrunch.com/2017/03/02/dockers-new-enterprise-edition-gives-containers-an-out-of-the-box-experience/
  – https://www.nginx.com/blog/deploying-microservices/
  – https://opensource.com/resources/what-docker
  – http://h20195.www2.hpe.com/V2/GetDocument.aspx?docname=a00001414enw
  – https://thenewstack.io/container-networking-breakdown-explanation-analysis/
  – https://github.com/docker/labs/blob/master/networking/concepts/06-overlay-networks.md
  – https://github.com/docker/labs/blob/master/networking/A3-overlay-networking.md
  • 234. Execution Environment • Containerization, the ability to run multiple isolated compute environments on a single kernel, relies on two kernel features: cgroups and namespaces – Along with other runtime technologies such as libcontainer and runC, these form the foundation of Docker's ability to host multiple isolated containers under a single kernel. • Docker facilitates the packaging of an application image with all its dependencies, and running it in a software container on any supported Docker platform – The mantra is: "build once, run anywhere."
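  A sketch of those kernel features surfacing through the Docker CLI (the --cpus flag requires Docker 1.13 or newer; the values are arbitrary examples):
  $ docker run -d --name capped --memory 256m --cpus 0.5 nginx    # cgroups enforce the memory and CPU limits
  $ docker stats --no-stream capped                               # observe usage against the 256 MiB limit
  Namespaces provide the isolation side: processes inside the container see their own PID 1, hostname, mount table and network interfaces.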

Editor's Notes

  1. Title of Seminar - Module #
  2. Image from: penbrokemarine.files.wordpress.com/2014/10/container-inspection.jpg (penbrokemarine.wordpress.com/tag/pms-shipping/page/3/)
  3. Docker promises to encapsulate any application, deploy it in a repeatable manner across any platform, and manage it efficiently at scale.
  4. https://docs.docker.com/engine/installation/
  5. https://docs.docker.com/engine/installation/ Image: http://media.gettyimages.com/photos/relay-race-male-athletes-passing-relay-baton-picture-idBD0149-002
  6. Ref: https://www.nginx.com/blog/deploying-microservices/ https://opensource.com/resources/what-docker Containerization (Compute Consolidation) (http://h20195.www2.hpe.com/V2/GetDocument.aspx?docname=a00001414enw) CI/CD - Continuous Integration, Continuous Deployment Proof an application across 10 different Linux distributions
  7. Ref: https://www.nginx.com/blog/deploying-microservices/ https://opensource.com/resources/what-docker http://h20195.www2.hpe.com/V2/GetDocument.aspx?docname=a00001414enw mechanism for microservices deployment and automation
  8. https://docs.docker.com/engine/installation/ https://blog.docker.com/2017/03/docker-enterprise-edition/ https://www.docker.com/pricing https://www.hpe.com/h20195/v2/GetPDF.aspx/c05164344.pdf https://techcrunch.com/2017/03/02/dockers-new-enterprise-edition-gives-containers-an-out-of-the-box-experience/ Edge is for users wanting a drop of the latest and greatest features every month Stable is released quarterly and is for users that want an easier-to-maintain release pace
  9. It does it with minimal duplication of resources in maximally isolated environments
  10. It does this by defining an abstraction of required machine-specific settings. With containers, "It works on my laptop." is no longer an excuse for delays in moving to production; with containers, we know that if it works on the developers' laptop, it works in production.
  11. Ref: From: Containers for the Enterprise: A Red Hat Virtual Event
  12. From: Why containers - Beginning of the buyer’s journey -- IT Leader audience by Red Hat
  13. -- Ref: http://runc.net/index.html namespaces and control groups) which allow sandboxing processes from one another, and controlling their resource allocations
  14. CGroups, introduced into the Linux kernel around 2008.
  15. -- Also: https://docs.docker.com/engine/understanding-docker/
  16. Ref: https://en.wikipedia.org/wiki/Cgroups
  17. Ref: https://en.wikipedia.org/wiki/Cgroups a Resource Management and Resource Accounting solution. Facilitate the sharing of available hardware resources to containers Ref: https://en.wikipedia.org/wiki/Cgroups Provides private allocation of resources to workloads
  18. -- Ref: http://runc.net/index.html
  19. Control Groups, or cgroups, are a Linux kernel feature which allow processes to be organized into hierarchical groups. The process' resource usage can then be controlled and monitored. Control Groups was merged into the Linux kernel in January 2008.
  20. The application container has access to host operating system’s kernel and it packages any other needed dependencies: libraries, middleware, runtimes in the container’s base image. Additionally, these dependencies exist in their own isolated user-space. Applications in containers have their own view of the name spaces, file system, etc.
  21. Ref: https://clearlinux.org/sites/default/files/vmscontainers_wp_v5.pdf https://www.youtube.com/watch?v=qILu3vc8tBk&feature=youtu.be
  22. The application has access to libraries, middleware, runtimes in the operating system.
  23. The application has access to libraries, middleware, runtimes in the guest operating system.
  24. The application container has access to host operating system’s kernel and it packages any other needed dependencies: libraries, middleware, runtimes in the container’s base image. Additionally, these dependencies exist in their own isolated user-space. Applications in containers have their own view of the name spaces, file system, etc.
  25. Ref: http://h20195.www2.hpe.com/V2/GetDocument.aspx?docname=4AA6-2761ENW
  26. If Virtualization is a force multiplier for Systems Administrators, Containerization is a force multiplier for Developers.
  27. Ref: https://docs.docker.com/engine/faq/
  28. Containers running on a Native Docker platform provide better performance than those running on top of a virtual machine.
  29. Ref: https://docs.docker.com/engine/faq/
  30. All containers running on the host share this kernel. An image has one or more layers.
  31. For instance you might start with a Debian base image, and add the nginx reverse proxy server. Each of these is a distinct layer.
  32. Containers are sandboxed processes that leverage the host’s OS Kernel.
  33. MAINTAINER deprecated
  34. Multiple containers may share some or all read-only image data. Two containers started from the same image share 100% of the read-only data, while two containers with different images which have layers in common share those common layers. Use 'docker ps -s' to see the size of the running container
  35. See also: https://blog.octo.com/en/docker-registry-first-steps/
  36. Ref: https://docs.docker.com/datacenter/dtr/2.1/guides/
  37. Ref: https://docs.docker.com/datacenter/dtr/2.1/guides/
  38. The Docker daemon assigns the UUID and a pseudo-random name to the container
  39. Note: if you run $ docker pull nginx you get the “latest” image, which happens to be in the “mainline” tree. I.e. the above command does the equivalent of $ docker pull nginx:latest The latest tag applies to an image that was last built and pushed onto the repository without a specific tag provided. The latest tag is used as the default tag if no tag is specified when pushing an image to a repository, also if you pull an image without specifying a tag, you will get the image tagged latest.
  40. Can be paired with a copy-on-write implementation, which enables changes to the underlying filesystem to be made by "copying up" those entries to a scratch or read-write layer of the filesystem. Variants of the Union File System includes: AUFS (Advanced multi layered Unification File System), btrfs, VFS, DeviceMapper
  41. Using docker-machine commands, you can manage (create, start, inspect, stop, restart) a managed host, locally, remotely over the network or in the cloud Docker Machine Overview
  42. conceptually similar to a hypervisor in that it creates a virtualized environment and facilitates access to system resources
  43. The Docker CLI and daemon communicate using a REST API, over UNIX sockets or a network interface
  44. A Docker client doesn’t communicate directly with the running containers. https://medium.com/aws-activate-startup-blog/a-better-dev-test-experience-docker-and-aws-291da5ab1238
  45. EXPOSE rule only as a hint to which ports will provide services
  46. With EXPOSE instruction or --expose command line option, ports are only exposed to the container IP. To expose a container to the external world, publish the port via the -p or -P runtime options.
  47. See: https://www.ctl.io/developers/blog/post/docker-networking-rules/ EXPOSE is often used as a documentation mechanism -- that is, just to signal to the user what port will be providing services All published (-p or -P) ports are implicitly exposed, but not all exposed (EXPOSE or --expose) ports are published
  48. --Ref: http://www.nuagenetworks.net/blog/docker-networking-overview/ host mode gives the container full access to local system services and is considered insecure host mode gives better networking performance than in bridge mode as it uses the host’s native networking stack This means port mapping is needed in order for services running in the container, to be accessible.
  49. No external network interface, only local loopback address is available
  50. The overlay driver creates an overlay network that supports multi-host networks Uses a combination of local Linux bridges and VXLAN to overlay container-to-container communications over physical network infrastructure Utilizes an industry-standard VXLAN data plane that decouples the container network from the underlying physical network (the underlay) Encapsulates container traffic in a VXLAN header which allows the traffic to traverse the physical Layer 2 or Layer 3 network Created when a Swarm is instantiated See: https://github.com/docker/labs/blob/master/networking/concepts/06-overlay-networks.md See: https://github.com/docker/labs/blob/master/networking/A3-overlay-networking.md
  51. See: https://github.com/docker/labs/blob/master/networking/concepts/07-macvlan.md See: https://thenewstack.io/container-networking-breakdown-explanation-analysis/
  52. https://github.com/docker/libnetwork https://github.com/docker/libnetwork/blob/master/ROADMAP.md
  53. See: https://github.com/docker/labs/blob/master/networking/concepts/02-drivers.md#userdefined See: https://github.com/docker/labs/blob/master/networking/A1-network-basics.md
  54. When you create a service, you specify which container image to use and which commands to execute inside running containers. A task carries a Docker container and the commands to run inside the container. It is the atomic scheduling unit of swarm. https://docs.docker.com/engine/swarm/key-concepts/
  55. Docker Compose simplifies the containerization of a multi-tier, multi-container application, which can be stitched together using the docker-compose.yml configuration file and the docker-compose command to provide a single application service YAML Ain’t Markup Language (YAML) https://docs.docker.com/get-started/part3/#docker-composeyml
  56. -i and -t enable interactive access to the stdin and stdout of the container, i.e. you can enter commands directly at the keyboard and see the output on the terminal. Note: The back-slash (\) at the end of the line is a continuation mark. It tells the Linux Shell that the command line continues on the next line; it joins the two lines together as one contiguous command line.
  57. Data in named volumes can be shared between a container and the host machine, as well as between multiple containers. https://docs.docker.com/docker-cloud/apps/volumes/
  58. http://h20195.www2.hpe.com/V2/GetDocument.aspx?docname=4AA6-2761ENW
  59. Data volumes are not controlled by the storage driver. Reads and writes to data volumes bypass the storage driver and operate at native host speeds.
  60. Container volumes are containers that store and can share data. Container volumes take up no more resources than needed to provide storage services.
  61. A use-case for Volumes is as a way to share directories across containers
  62. https://docs.docker.com/engine/tutorials/dockervolumes/#mount-a-host-directory-as-a-data-volume
  63. The shell displays the shell prompt Users enter commands at this prompt
  64. There is normally one command object per command line. An exception is if you have pipes (|). A pipe allows multiple commands to be run in series on the same "command line". More on pipes in a later article.
  65. Options come in two forms: short-form, typically prepended with a single dash, and long-form, prepended with two dashes. Examples: short-form option: ls -a or docker ps -a Long-form option: ls --all or docker ps --all There can be zero or more options per command line. Use a space to separate multiple options. For the short-form notation, you are allowed to concatenate the options. I.e. instead of ls -a -F -l, it's OK to combine the options, prepending the set with a single dash, e.g. ls -aFl. An exception to being able to combine options is if the option requires an argument, i.e. the -v option in Docker requires the volume path or directory as an argument, e.g. docker run -v /data, as such it should stand by itself. You can mix and match short-form and long-form options on the same command line: ls --all -l
  66. Success and failure, with respect to exit status is relative
  67. To add multiple users delimit each name with a comma.
  68. Any process that can write to this file effectively has root access on the Docker Host. Bind mounting /var/run/docker.sock in a container (as with docker run -d -p 9090:9090 -v /var/run/docker.sock:/var/run/docker.sock …) gives root privileges to the Docker Host from inside the container.
  69. Take some time to run your favorite commands in this alpine container.
  70. This allows it to be identified by this name or the automatically generated 64-character ID. Note: Nginx (pronounced "engine-x") is an open source reverse proxy server for HTTP, HTTPS, SMTP, POP3, and IMAP protocols, as well as a load balancer, HTTP cache, and a web server (origin server). It runs on Linux, BSD variants, macOS X, Solaris, AIX, HP-UX. The above command creates a container environment with its own isolated process space, network stack and file system, names it web01, starts the nginx application running in detached mode (or as a daemon) and exposes container port 80 as local port 8080.
  71. A context is processed recursively, i.e. any subdirectories in the PATH and any submodules in the URL are processed for objects.
  72. Starting this container from the newly generated image shows that changes made in the original container were successfully committed.
  73. Note: There is an online tool, imagelayers.io that can be used to visualize the layers of an image.
  74. Inspired by a post at troubleshooter.com this flowchart helps illustrate Docker objects and the Docker commands that apply to them.   For example to preserve changes made in a container object and convert it into an image, use the docker commit command.
  75. Ref: https://github.com/containerd/containerd/blob/master/design/architecture.md https://blog.docker.com/2016/12/containerd-core-runtime-component/
  76. Ref: https://github.com/docker/labs/blob/master/networking/concepts/01-cnm.md
  77. Ref: https://github.com/docker/labs/blob/master/networking/concepts/03-linux-networking.md
  78. Ref: https://github.com/docker/labs/blob/master/networking/concepts/03-linux-networking.md In networking terminology they are akin to a Virtual Routing and Forwarding (VRF) that segments the network control and data plane inside the host. They both provide the construct to provide a unique network experience to different processes running on the host
  79. Ref: https://github.com/docker/labs/blob/master/networking/concepts/03-linux-networking.md
  80. Ref: https://github.com/docker/labs/blob/master/networking/concepts/03-linux-networking.md
  81. Ref: http://www.linfo.org/user_space.html
  82. https://en.wikipedia.org/wiki/Unikernel http://unikernel.org/
  83. Title of Seminar - Module #
  84. Title of Seminar - Module #
  85. Original slide #29
  86. CoW Works in conjunction with the Union File System and its layers