While Docker leads the field of containerization and isolated platforms, it is essential to understand how those containers communicate. In this session, we will explore the various network structures Docker provides and create a basic network structure for an application.
2. KnolX Etiquettes
Lack of etiquette and manners is a huge turn-off.
Punctuality
Join the session 5 minutes prior to the session start time. We start on time and conclude on time!
Feedback
Make sure to submit constructive feedback for all sessions, as it is very helpful for the presenter.
Silent Mode
Keep your mobile devices in silent mode, and feel free to step out of the session if you need to attend an urgent call.
Avoid Disturbance
Avoid unwanted chit-chat during the session.
4. Overview
Docker provides OS-independent containers that run in an isolated environment. Docker networking enables these containers to communicate with other containers and with external endpoints, such as the internet, to fetch application updates. With an understanding of Docker networking, we can create custom networks for our containers as per our requirements.
5. Container Network Model (CNM)
The Container Network Model (CNM) specifies the steps for providing networking to containers while offering an abstraction that supports multiple network drivers. Docker implements it using an open-source library called libnetwork.
6. Objects in CNM
The CNM has interfaces for IPAM plugins and network plugins.
The IPAM plugin is used to create/delete address pools and allocate/deallocate container IP addresses, whereas
the network plugin is used to create/delete networks and add/remove containers from networks.
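The division of labor between the two plugin types shows up directly in the CLI. As a minimal sketch (the network name, subnet, and gateway below are arbitrary example values, and a running Docker daemon is assumed):

```shell
# The network plugin (here, the built-in bridge driver) creates the network;
# the IPAM plugin manages the address pool we hand it.
docker network create \
  --driver bridge \
  --subnet 10.10.0.0/24 \
  --gateway 10.10.0.1 \
  demo-net

# When a container joins, IPAM allocates it an address from 10.10.0.0/24.
docker run --rm --network demo-net alpine ip addr show eth0

# Deleting the network releases the pool back to the IPAM plugin.
docker network rm demo-net
```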
CNM Objects
NetworkController provides the entry point into libnetwork, exposing simple APIs for users (such as Docker Engine) to allocate and manage Networks. It allows the user to bind a particular driver to a given network.
Driver owns the network and is responsible for managing it. Drivers can be either inbuilt (such as bridge, host, none, and overlay) or remote (from plugin providers).
Network is a group of Endpoints that are able to communicate with each other directly. NetworkController provides APIs to create and manage Network objects. Whenever a Network is created or updated, the corresponding Driver is notified of the event.
Endpoint represents a Service Endpoint. It provides connectivity between the services exposed by a container in a network and the services provided by other containers in that network. The Network object provides APIs to create and manage an Endpoint. An Endpoint can be attached to only one network.
Sandbox represents a container's network configuration, such as its IP address, MAC address, routes, and DNS entries. A Sandbox object is created when the user requests an endpoint on a network. The Driver that handles the Network is responsible for allocating the required network resources (such as the IP address) and passing the information, called SandboxInfo, back to libnetwork. libnetwork then uses OS-specific constructs to populate the network configuration into the container represented by the Sandbox. A Sandbox can have multiple endpoints attached to different networks.
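These objects map directly onto everyday CLI operations. A minimal sketch, assuming a running Docker daemon (container and network names are illustrative):

```shell
# NetworkController creates a Network, owned by the default bridge Driver.
docker network create cnm-demo

# Starting a container creates its Sandbox, with one Endpoint on cnm-demo.
docker run -d --name web --network cnm-demo nginx

# Connecting a second network adds a second Endpoint to the SAME Sandbox.
docker network create cnm-demo-2
docker network connect cnm-demo-2 web

# Inspecting shows one entry per Endpoint the Sandbox holds.
docker inspect web --format '{{json .NetworkSettings.Networks}}'

# Cleanup.
docker rm -f web && docker network rm cnm-demo cnm-demo-2
```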
7. Network Drivers
None (null): this driver is used when a container requires a truly isolated environment with no network to connect to. No other container can access this container, or vice versa; the container is left with only a loopback device.
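This isolation is easy to observe. A quick sketch, assuming a running Docker daemon:

```shell
# A container on the none network has only the loopback interface (lo);
# no eth0 is created and no external traffic is possible.
docker run --rm --network none alpine ip addr show
```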
Host: this driver is used when a container is not to be isolated from the Docker host: the container shares the host's networking namespace, and it does not get its own IP address allocated. As a result, you cannot run multiple web containers on the same host on the same port, since the port is common to all containers in the host network. The host networking driver only works on Linux hosts and is not supported on Docker Desktop.
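A minimal sketch of the port-sharing consequence, assuming a Linux host with a running Docker daemon and port 80 free (nginx listens on 80 by default):

```shell
# No -p mapping is needed: nginx binds directly to the host's port 80.
docker run --rm -d --network host --name web nginx

# The service is reachable on localhost without any port publishing.
curl -s http://localhost:80 >/dev/null && echo "served from host namespace"

# A second nginx with --network host would fail: port 80 is already taken.
docker rm -f web
```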
Macvlan: allows you to assign a MAC address to a container, making it appear as a physical device on your network. The Docker daemon then routes traffic to containers by their MAC addresses. The macvlan driver is the best choice when containers are expected to be directly connected to the physical network rather than routed through the Docker host's network stack.
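A sketch of creating such a network; the parent interface (`eth0`), subnet, and addresses below are assumptions that must match your actual physical LAN:

```shell
# The parent interface ties the macvlan network to the physical NIC.
docker network create -d macvlan \
  --subnet 192.168.1.0/24 \
  --gateway 192.168.1.1 \
  -o parent=eth0 \
  macnet

# The container gets its own MAC and appears as a separate host on the LAN.
docker run --rm --network macnet --ip 192.168.1.50 alpine ip link show eth0

docker network rm macnet
```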
8. Network Drivers
Overlay: this driver is used where we require orchestration of multiple containers. It creates an internal private network that spans all the nodes participating in the swarm cluster. Overlay networks facilitate communication between a swarm service and a standalone container, or between two standalone containers on different Docker daemons.
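A minimal sketch, assuming Docker is running and the host can become a swarm manager (service and network names are illustrative):

```shell
# Overlay networks require swarm mode.
docker swarm init

# --attachable lets standalone containers (not only services) join later.
docker network create -d overlay --attachable my-overlay

# A swarm service on the overlay is reachable from every participating node.
docker service create --name web --network my-overlay nginx
```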
Ipvlan: similar to macvlan, it assigns IP addresses to containers and makes them appear as physical devices. The difference is that it does not create multiple MAC addresses; instead, containers share the MAC address of the primary (parent) network interface.
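A sketch parallel to the macvlan example; again, the parent interface and subnet are assumptions for your environment:

```shell
# Containers on this network get distinct IPs but share eth0's MAC address.
docker network create -d ipvlan \
  --subnet 192.168.1.0/24 \
  -o parent=eth0 \
  ipvnet

# Compare the MAC shown here with the host's eth0: they match.
docker run --rm --network ipvnet alpine ip link show eth0

docker network rm ipvnet
```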
9. Network Drivers
Bridge: the bridge network is the private, default internal network created by Docker on the host. All containers get an internal IP address and can access each other using this internal IP. Bridge networks are usually used when your applications run in standalone containers that need to communicate.
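A user-defined bridge additionally gives containers DNS resolution by name, which the default bridge does not. A minimal sketch, assuming a running Docker daemon (names are illustrative):

```shell
# Two containers on the same user-defined bridge reach each other by name.
docker network create app-net
docker run -d --name db --network app-net redis

# Once the db container is up, this should print PONG.
docker run --rm --network app-net redis redis-cli -h db ping

# Cleanup.
docker rm -f db && docker network rm app-net
```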