Managing multicast stream on
Docker
Thierry GAYET - 09/2023
The purpose of these slides is to give some
information on the use of multicast streams
in a Dockerized context.
GOAL
For Docker containers to communicate with each other, and also with the outside world via the host
machine, a networking layer is necessary. This network layer provides a degree of container isolation, and
therefore makes it possible to create Docker applications that work together securely.
Docker supports different types of networks, each suited to certain use cases, which we will see through
this chapter.
The Docker network system uses drivers. Several drivers exist, each providing different functionality.
Network drivers
within Docker
The bridge driver
When you install Docker for the first time,
it automatically creates a bridge network
named bridge, connected to the docker0
network interface (viewable with the ip addr
show docker0 command). Each new Docker
container is automatically connected to this
network unless a custom network is specified.
The bridge network is the most commonly
used type of network. It is limited to
containers on a single host running the Docker
engine. Containers that use this driver can
communicate with each other, but they are
not accessible from the outside.
Before containers on the bridge network can
communicate with, or be reached from, the
outside world, you must configure port
mapping.
The none driver
This is the ideal network type if you want to prohibit all internal and external communication with your
container, because your container will be devoid of any network interface (except the loopback / lo
interface).
The host driver
This type of network allows containers to use
the same network interfaces as the host.
It therefore removes network isolation
between the container and the host, and the
container will by default be accessible from
the outside.
As a result, the container uses the same IP
address as your host machine.
The host driver
Network context from the host :
$ ip addr show eno8403
eno8403: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether dc:85:de:ce:04:55 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.11/24 brd 192.168.0.255 scope global dynamic noprefixroute eno8403
       valid_lft 54874sec preferred_lft 54874sec
    inet6 fe80::335:f1f5:127d:b62c/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
$ docker run -it --rm --network host --name net alpine ip addr show eno8403
eno8403: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether dc:85:de:ce:04:55 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.11/24 brd 192.168.0.255 scope global dynamic noprefixroute eno8403
       valid_lft 54874sec preferred_lft 54874sec
    inet6 fe80::335:f1f5:127d:b62c/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
→ Same results in both contexts!
The overlay driver
If you want native multi-host
networking, you need to use an
overlay driver.
It creates a distributed network
between multiple hosts with the
Docker engine.
Docker transparently manages the
routing of each packet to and from
the right host and container.
The macvlan driver
Using the macvlan driver is
sometimes the best choice when
using applications that expect to be
directly connected to the physical
network, because the macvlan
driver allows you to assign a MAC
address to a container, making it
appear as a physical device on
your network.
The Docker engine routes traffic to
containers based on their MAC
addresses.
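As background for why macvlan plays well with multicast: IPv4 multicast traffic is also addressed at layer 2. Each group maps deterministically to an Ethernet MAC in the 01:00:5e prefix (the lower 23 bits of the group address are copied into the MAC, per RFC 1112), so switches can deliver the stream to interested ports. A small illustrative sketch of that mapping:

```python
import ipaddress

def multicast_mac(group: str) -> str:
    """Map an IPv4 multicast group to its Ethernet MAC address.

    The fixed prefix 01:00:5e is followed by the lower 23 bits of the
    group address (RFC 1112): the top bit of the second octet is cleared.
    """
    addr = ipaddress.IPv4Address(group)
    if not addr.is_multicast:
        raise ValueError(f"{group} is not an IPv4 multicast address")
    b = addr.packed
    return "01:00:5e:%02x:%02x:%02x" % (b[1] & 0x7F, b[2], b[3])

print(multicast_mac("239.1.1.1"))  # 01:00:5e:01:01:01
```

Because the top 9 bits are discarded, 32 different groups share each MAC; receivers still filter on the full IP group address.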
Docker & multicast
The host driver
Managing multicast streams with Docker can be a bit tricky because Docker containers are primarily designed for unicast network
communication. Multicast networking requires special handling since multicast packets are sent to a group of hosts, not a single host. While
Docker doesn't natively support multicast, you can work with it using some workarounds and network configurations.
Here's a general approach to manage multicast streams with Docker:
Use Host Networking Mode:
When you run a Docker container, you can specify the network mode using the --network flag. To enable multicast within a container, you can
use the host network mode, which allows the container to share the host's network namespace. However, note that this approach is less isolated
and may not be suitable for all use cases.
$ docker run --network host <your-image>
Enable Multicast on the Host:
Ensure that multicast is enabled on your host machine.
You might need to configure your host's network stack to accept multicast packets.
For Linux, this usually involves setting kernel parameters.
For example, you can enable forwarding and add multicast routes as needed :
# Enable IP forwarding (you may need to modify this based on your requirements)
$ echo 1 > /proc/sys/net/ipv4/ip_forward
# Add a multicast route (replace <multicast-group> with the actual multicast group)
$ ip route add <multicast-group> dev <interface> scope link
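Before going further, it can help to verify that an interface is multicast-capable at all, i.e. that it carries the MULTICAST flag visible in ip addr output. A small Linux-only sketch reading the flags from sysfs (IFF_MULTICAST and the sysfs path are standard, but this is illustrative, not part of the Docker setup itself):

```python
import os

IFF_MULTICAST = 0x1000  # interface flag bit, from <linux/if.h>

def iface_supports_multicast(ifname: str) -> bool:
    """Return True if a Linux network interface has the MULTICAST flag set."""
    path = f"/sys/class/net/{ifname}/flags"
    if not os.path.exists(path):
        raise ValueError(f"no such interface: {ifname}")
    with open(path) as f:
        flags = int(f.read().strip(), 16)  # sysfs exposes flags as hex
    return bool(flags & IFF_MULTICAST)

# iface_supports_multicast("eth0")  # True on a typical Ethernet NIC
```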
The host driver
Configure the Multicast Application:
Your multicast application running inside the Docker container should be configured to send or receive multicast packets using the appropriate
multicast group and port.
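On the receiving side this means joining the group with the IP_ADD_MEMBERSHIP socket option. A minimal sketch in Python (the group 239.1.1.1 and port 5000 are hypothetical placeholders, to be replaced by your application's values):

```python
import socket
import struct

GROUP, PORT = "239.1.1.1", 5000  # hypothetical group/port

def build_mreq(group: str, iface: str = "0.0.0.0") -> bytes:
    """Build the ip_mreq structure passed to IP_ADD_MEMBERSHIP."""
    return struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton(iface))

def open_receiver(group: str, port: int) -> socket.socket:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Allow several receivers on the same host to share the port
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # Ask the kernel (and, via IGMP, the network) for this group's traffic
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, build_mreq(group))
    return sock

# sock = open_receiver(GROUP, PORT)
# data, sender = sock.recvfrom(1500)
```

In host network mode this join is performed directly in the host's network namespace, which is exactly why the workarounds above are not needed there.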
Specify the Multicast Group Address:
Ensure that your multicast application is set to use the specific multicast group address you intend to work with. This address must match the
multicast group address you set up on the host.
Test Your Setup:
Run your Docker container with the host network mode and test the multicast functionality within the container. This may involve sending or
receiving multicast packets as per your application's requirements.
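For the sending side of such a test, the main multicast-specific setting is IP_MULTICAST_TTL, which bounds how many router hops the stream may cross. A short sketch (group and port again hypothetical):

```python
import socket

def make_multicast_sender(ttl: int = 1) -> socket.socket:
    """UDP socket prepared for multicast sending.

    ttl=1 keeps the stream on the local subnet; raise it only if the
    stream must cross multicast-capable routers.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
    return sock

# sender = make_multicast_sender(ttl=2)
# sender.sendto(b"hello", ("239.1.1.1", 5000))  # hypothetical group/port
```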
Security Considerations:
Be mindful of the security implications of using host networking mode, as it grants the container more access to the host's network stack.
Ensure that your Docker setup is secure and that you follow best practices for container security.
Monitoring and Troubleshooting:
Use tools like tcpdump or Wireshark to monitor multicast traffic on the host and within the container. This can help you diagnose any network-
related issues.
The bridge driver
Otherwise, in bridge mode it is also possible, by managing the IGMP subscriptions by hand :
Multicast to/from a Docker bridge network is currently not possible out of the box.
This is due to limitations in how the Linux kernel supports multicast routing.
Packets are forwarded to the Docker bridge using iptables and the unicast routing table, but multicast packets are handled differently by the
Linux kernel.
A workaround is to run a tool like smcrouted (https://github.com/troglobit/smcroute) on the host (or in a container with access to the host
network).
This daemon does the work of managing the Linux multicast forwarding cache.
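As a sketch of that workaround, a static rule in smcroute's configuration could look like the following (the interface names eth0/docker0 and the group 239.1.1.1 are placeholders; check the smcroute documentation for the exact syntax of your version):

```
# /etc/smcroute.conf -- forward group 239.1.1.1 arriving on eth0 onto the Docker bridge
mgroup from eth0 group 239.1.1.1
mroute from eth0 group 239.1.1.1 to docker0
```

The mgroup line makes the host join the group (so upstream switches/routers deliver it), and the mroute line installs the forwarding-cache entry toward docker0.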
The macvlan driver
Managing multicast streams using a macvlan driver in Docker can be a more straightforward approach compared to other
networking modes. The macvlan driver allows each Docker container to have its own unique MAC address and appear as
a separate device on the network. Here's how you can manage multicast streams using the macvlan driver:
Create a Macvlan Network:
$ docker network create -d macvlan \
    --subnet=<subnet> \
    --gateway=<gateway> \
    --ip-range=<ip-range> \
    -o parent=<physical-interface> \
    <network-name>
<subnet>: The subnet for your containers.
<gateway>: The gateway IP for your containers.
<ip-range>: The range of IPs that can be allocated to containers.
<physical-interface>: The name of your physical network interface.
<network-name>: The name of the macvlan network.
Replace the placeholders with your specific network configuration.
The macvlan driver
Run Containers with Macvlan Networking:
$ docker run --network=<network-name> -itd --name=<container-name> <your-image>
<network-name>: The name of the macvlan network you created.
<container-name>: A name for your Docker container.
<your-image>: The Docker image you want to run.
Within your Docker containers, you can manage multicast streams as you would on a physical host. Configure your multicast application to use the macvlan
network interface for sending and receiving multicast traffic.
Test your multicast streams to ensure that they are functioning as expected within the Docker containers.
Please note the following considerations:
Containers connected to a macvlan network have direct access to the physical network and may require appropriate permissions and configurations on your
network infrastructure.
Ensure that your multicast application inside the container is configured correctly to use the macvlan network interface for multicast communication.
Depending on your network and router configurations, you may need to set up multicast routing or enable multicast support on your network infrastructure to
ensure proper multicast traffic flow.
Always exercise caution when working with multicast traffic, as it can have complex interactions with network infrastructure and may require additional
configuration and permissions.
Docker networking
The command to create a Docker network is:
$ docker network create --driver <DRIVER TYPE> <NETWORK NAME>
In this example we will create a bridge-type network named mon-bridge:
$ docker network create --driver bridge mon-bridge
We will then list the Docker networks with the following command:
$ docker network ls
Result :
NETWORK ID     NAME                    DRIVER    SCOPE
58b8305ce041   bridge                  bridge    local
91d7f01dad50   host                    host      local
ccdbdbf708db   mon-bridge              bridge    local
10ee25f56420   myimagedocker_default   bridge    local
6851e9b8e06e   none                    null      local
Create and collect information from a Docker network
It is possible to collect information on the Docker network, such as the network config, by typing the following command:
$ docker network inspect mon-bridge
Result :
[
{
"Name": "mon-bridge",
"Id": "ccdbdbf708db7fa901b512c8256bc7f700a7914dfaf6e8182bb5183a95f8dd9b",
...
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.21.0.0/16",
"Gateway": "172.21.0.1"
}
]
},
...
"Labels": {}
}
]
You can override the Subnet and Gateway values by using the --subnet and --gateway options of the docker
network create command, as follows:
$ docker network create --driver bridge --subnet=172.16.86.0/24 --gateway=172.16.86.1 my-bridge
For this example, we will connect two containers to our previously created bridge network:
$ docker run -dit --name alpine1 --network mon-bridge alpine
$ docker run -dit --name alpine2 --network mon-bridge alpine
If we inspect our mon-bridge network again, we will see our two new containers in the information returned:
$ docker network inspect mon-bridge
Result :
[
{
"Name": "mon-bridge",
"Id": "ccdbdbf708db7fa901b512c8256bc7f700a7914dfaf6e8182bb5183a95f8dd9b",
...
"Containers": {
"1ab5f1815d98cd492c69a63662419e0eba891c0cadb2cbdd0fb939ab25f94b33": {
"Name": "alpine1",
"EndpointID": "5f04963f9ec084df659cfc680b9ec32c44237dc89e96184fe4f2310ba6af7570",
"MacAddress": "02:42:ac:15:00:02",
"IPv4Address": "172.21.0.2/16",
"IPv6Address": ""
},
"a935d2e1ddf76fe49cdb1950653f4a093928020b49ebfea4130ff9d712ffb1d6": {
"Name": "alpine2",
"EndpointID": "3e009b56104a1bf9106bc622043a2ee06010b102279e24b4807c7b7ffec166dd",
"MacAddress": "02:42:ac:15:00:03",
"IPv4Address": "172.21.0.3/16",
"IPv6Address": ""
}
},
...
}
]
From the result, we can see that our alpine1 container has the IP address 172.21.0.2, and our alpine2 container
has the IP address 172.21.0.3. Let's try to make them communicate together using the ping command:
$ docker exec alpine1 ping -c 1 172.21.0.3
Result :
PING 172.21.0.3 (172.21.0.3): 56 data bytes
64 bytes from 172.21.0.3: seq=0 ttl=64 time=0.101 ms
$ docker exec alpine2 ping -c 1 172.21.0.2
Result :
PING 172.21.0.2 (172.21.0.2): 56 data bytes
64 bytes from 172.21.0.2: seq=0 ttl=64 time=0.153 ms
For information, you cannot create a host network, because it uses the interface of your host machine directly. Moreover, if you
try to create one, you will receive the following error:
$ docker network create --driver host my-host
Error :
Error response from daemon: only one instance of "host" network is allowed
You can only use the host driver, not create it. In this example we will start an Apache container on port 80 of the host
machine. From a networking perspective, this is the same level of isolation as if the Apache process were running directly
on the host machine and not in a container. However, in every other respect (filesystem, processes), the process remains
isolated from the host machine.
This procedure requires that port 80 is available on the host machine:
$ docker run --rm -d --network host --name my_httpd httpd
Without any mapping, you can access the Apache server by accessing http://localhost:80/, you will then see the
message "It works!".
From your host machine, you can check which process is bound to port 80 using the netstat command:
$ sudo netstat -tulpn | grep :80
This is indeed the httpd process that uses port 80 without using port mapping:
tcp 0 0 127.0.0.1:8000 0.0.0.0:* LISTEN 5084/php
tcp6 0 0 :::80 :::* LISTEN 11133/httpd
tcp6 0 0 :::8080 :::* LISTEN 3122/docker-prox
Finally stop the container which will be deleted automatically because it was started using the --rm option:
$ docker container stop my_httpd
Remove, disconnect, and connect a Docker network
Before deleting your Docker network, it is necessary to first delete any container connected to it, or
otherwise just disconnect your containers from the network without necessarily deleting them.
We will choose the second option, disconnecting all containers using the mon-bridge docker network:
$ docker network disconnect mon-bridge alpine1
$ docker network disconnect mon-bridge alpine2
Now, if you check the network interfaces of your containers based on the alpine image, you will only see the
loopback interface as for the none driver:
$ docker exec alpine1 ip a
Result :
lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
Once you have disconnected all your containers from the mon-bridge docker network, you can then delete it:
$ docker network rm mon-bridge
However, your containers are now without a bridge network interface, so you must reconnect your containers to
the default bridge network so that they can communicate with each other again:
$ docker network connect bridge alpine1
$ docker network connect bridge alpine2
Then check if your containers have received the correct IP:
$ docker inspect -f '{{.Name}} - {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(docker ps -aq)
Result :
/alpine2 - 172.17.0.3
/alpine1 - 172.17.0.2
You can create as many bridge networks as you want; this remains a good way to secure communication between
your containers, because the containers connected to bridge1 cannot communicate with the containers on
bridge2, thus limiting unnecessary communications.
## Create a docker network
docker network create --driver <DRIVER TYPE> <NETWORK NAME>
## List docker networks
docker network ls
## Delete one or more docker network(s)
docker network rm <NETWORK NAME>
## Collect information on a Docker network
docker network inspect <NETWORK NAME>
-v or --verbose: verbose mode for better diagnostics
## Delete all unused docker networks
docker network prune
-f or --force: force deletion
## Connect a container to a Docker network
docker network connect <NETWORK NAME> <CONTAINER NAME>
## Disconnect a docker network container
docker network disconnect <NETWORK NAME> <CONTAINER NAME>
-f or --force: force disconnection
## Start a container and connect it to a docker network
docker run --network <NETWORK NAME> <IMAGE NAME>
Summary :
QUESTIONS & DISCUSSION

Introduction to Firebase Workshop SlidesIntroduction to Firebase Workshop Slides
Introduction to Firebase Workshop Slides
 
SpotFlow: Tracking Method Calls and States at Runtime
SpotFlow: Tracking Method Calls and States at RuntimeSpotFlow: Tracking Method Calls and States at Runtime
SpotFlow: Tracking Method Calls and States at Runtime
 
Taming Distributed Systems: Key Insights from Wix's Large-Scale Experience - ...
Taming Distributed Systems: Key Insights from Wix's Large-Scale Experience - ...Taming Distributed Systems: Key Insights from Wix's Large-Scale Experience - ...
Taming Distributed Systems: Key Insights from Wix's Large-Scale Experience - ...
 
Precise and Complete Requirements? An Elusive Goal
Precise and Complete Requirements? An Elusive GoalPrecise and Complete Requirements? An Elusive Goal
Precise and Complete Requirements? An Elusive Goal
 
Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...
Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...
Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...
 
Not a Kubernetes fan? The state of PaaS in 2024
Not a Kubernetes fan? The state of PaaS in 2024Not a Kubernetes fan? The state of PaaS in 2024
Not a Kubernetes fan? The state of PaaS in 2024
 
2024 DevNexus Patterns for Resiliency: Shuffle shards
2024 DevNexus Patterns for Resiliency: Shuffle shards2024 DevNexus Patterns for Resiliency: Shuffle shards
2024 DevNexus Patterns for Resiliency: Shuffle shards
 
SuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte Germany
SuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte GermanySuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte Germany
SuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte Germany
 
Powering Real-Time Decisions with Continuous Data Streams
Powering Real-Time Decisions with Continuous Data StreamsPowering Real-Time Decisions with Continuous Data Streams
Powering Real-Time Decisions with Continuous Data Streams
 
Patterns for automating API delivery. API conference
Patterns for automating API delivery. API conferencePatterns for automating API delivery. API conference
Patterns for automating API delivery. API conference
 
SensoDat: Simulation-based Sensor Dataset of Self-driving Cars
SensoDat: Simulation-based Sensor Dataset of Self-driving CarsSensoDat: Simulation-based Sensor Dataset of Self-driving Cars
SensoDat: Simulation-based Sensor Dataset of Self-driving Cars
 
Exploring Selenium_Appium Frameworks for Seamless Integration with HeadSpin.pdf
Exploring Selenium_Appium Frameworks for Seamless Integration with HeadSpin.pdfExploring Selenium_Appium Frameworks for Seamless Integration with HeadSpin.pdf
Exploring Selenium_Appium Frameworks for Seamless Integration with HeadSpin.pdf
 
The Role of IoT and Sensor Technology in Cargo Cloud Solutions.pptx
The Role of IoT and Sensor Technology in Cargo Cloud Solutions.pptxThe Role of IoT and Sensor Technology in Cargo Cloud Solutions.pptx
The Role of IoT and Sensor Technology in Cargo Cloud Solutions.pptx
 
Catch the Wave: SAP Event-Driven and Data Streaming for the Intelligence Ente...
Catch the Wave: SAP Event-Driven and Data Streaming for the Intelligence Ente...Catch the Wave: SAP Event-Driven and Data Streaming for the Intelligence Ente...
Catch the Wave: SAP Event-Driven and Data Streaming for the Intelligence Ente...
 
Real-time Tracking and Monitoring with Cargo Cloud Solutions.pptx
Real-time Tracking and Monitoring with Cargo Cloud Solutions.pptxReal-time Tracking and Monitoring with Cargo Cloud Solutions.pptx
Real-time Tracking and Monitoring with Cargo Cloud Solutions.pptx
 
Salesforce Implementation Services PPT By ABSYZ
Salesforce Implementation Services PPT By ABSYZSalesforce Implementation Services PPT By ABSYZ
Salesforce Implementation Services PPT By ABSYZ
 

Managing multicast/IGMP streams on Docker

  • 1. Managing multicast stream on Docker Thierry GAYET - 09/2023
  • 2. The purpose of these slides is to give some information on the use of multicast streams in a Dockerized context. GOAL
  • 3. For Docker containers to communicate with each other, and with the outside world via the host machine, a networking layer is necessary. This network layer also provides a degree of container isolation, making it possible to build Docker applications that work together securely. Docker supports different types of networks suited to different use cases, which we will cover in this chapter. The Docker network system uses drivers; several drivers exist, each providing different functionality.
  • 5. The bridge driver First, when you install Docker, it automatically creates a bridge network named bridge, connected to the docker0 network interface (viewable with the ip addr show docker0 command). Each new Docker container is automatically connected to this network unless a custom network is specified. The bridge network is the most commonly used type of network. It is limited to containers on a single host running the Docker engine. Containers that use this driver can only communicate with each other; they are not accessible from the outside. Before containers on the bridge network can be reached from the outside world, you must configure port mapping.
  • 6. The none driver This is the ideal network type if you want to prohibit all internal and external communication with your container, because the container is left without any network interface (except the loopback / lo interface).
  • 7. The host driver This type of network allows containers to use the same network interface as the host. It therefore removes network isolation between containers, and containers are accessible from the outside by default. As a result, a container uses the same IP address as the host machine.
  • 8. The host driver

Network context from the host:

$ ip addr show eno8403
eno8403: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether dc:85:de:ce:04:55 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.11/24 brd 192.168.0.255 scope global dynamic noprefixroute wlp3s0
       valid_lft 54874sec preferred_lft 54874sec
    inet6 fe80::335:f1f5:127d:b62c/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

Network context from a container using the host driver:

$ docker run -it --rm --network host --name net alpine ip addr show eno8403
eno8403: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether dc:85:de:ce:04:55 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.11/24 brd 192.168.0.255 scope global dynamic noprefixroute wlp3s0
       valid_lft 54874sec preferred_lft 54874sec
    inet6 fe80::335:f1f5:127d:b62c/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

→ Same result in both contexts!
  • 9. The overlay driver If you want native multi-host networking, you need to use an overlay driver. It creates a distributed network between multiple hosts with the Docker engine. Docker transparently manages the routing of each packet to and from the right host and container.
  • 10. The macvlan driver Using the macvlan driver is sometimes the best choice when running applications that expect to be directly connected to the physical network, because the macvlan driver allows you to assign a MAC address to a container, making it appear as a physical device on your network. The Docker engine routes traffic to containers based on their MAC addresses.
  • 12. The host driver

Managing multicast streams with Docker can be tricky because Docker containers are primarily designed for unicast network communication. Multicast needs special handling, since multicast packets are sent to a group of hosts rather than a single host. While Docker doesn't natively support multicast, you can work with it using some workarounds and network configuration. Here's a general approach to managing multicast streams with Docker:

Use host networking mode: when you run a Docker container, you can specify the network mode using the --network flag. To enable multicast within a container, you can use the host network mode, which allows the container to share the host's network namespace. Note that this approach is less isolated and may not be suitable for all use cases.

$ docker run --network host <your-image>
  • 13. The host driver

Enable multicast on the host: ensure that multicast is enabled on your host machine. You might need to configure the host's network stack to accept multicast packets. On Linux, this usually involves setting kernel parameters. For example, you can enable forwarding and add multicast routes as needed:

# Enable forwarding (you may need to adjust this based on your requirements)
$ echo 1 > /proc/sys/net/ipv4/ip_forward

# Add a multicast route (replace <multicast-group> with the actual multicast group)
$ ip route add <multicast-group> dev <interface> scope link
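To verify that the host (or a container sharing its network namespace) has actually joined the expected groups, you can read /proc/net/igmp, where each interface lists its memberships as little-endian hexadecimal addresses. A minimal sketch of a parser, assuming the usual Linux layout of that file:

```python
import socket
import struct

def parse_igmp(text: str) -> dict:
    """Parse /proc/net/igmp-style text into {device: [group, ...]}.

    Interface lines look like:  "2  eth0      :     2      V3"
    Group lines are indented:   "    010000E0     1 0:00000000   0"
    where the first field is the group address as little-endian hex.
    """
    groups = {}
    device = None
    for line in text.splitlines()[1:]:      # skip the header line
        if not line.strip():
            continue
        if not line[0].isspace():           # interface line
            device = line.split()[1]
            groups[device] = []
        elif device is not None:            # indented group line
            hexgrp = line.split()[0]
            # Reverse the byte order to recover the dotted-quad address
            addr = socket.inet_ntoa(struct.pack("<I", int(hexgrp, 16)))
            groups[device].append(addr)
    return groups
```

On a live host you would call `parse_igmp(open("/proc/net/igmp").read())`; the all-hosts group 224.0.0.1 normally appears on every multicast-capable interface.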
  • 14. The host driver

Configure the multicast application: the multicast application running inside the Docker container should be configured to send or receive multicast packets using the appropriate multicast group and port.

Specify the multicast group address: ensure that your multicast application uses the specific multicast group address you intend to work with. This address must match the multicast group address you set up on the host.

Test your setup: run your Docker container with the host network mode and test the multicast functionality within the container. This may involve sending or receiving multicast packets as per your application's requirements.

Security considerations: be mindful of the security implications of using host networking mode, as it grants the container more access to the host's network stack. Ensure that your Docker setup is secure and follow best practices for container security.

Monitoring and troubleshooting: use tools like tcpdump or Wireshark to monitor multicast traffic on the host and within the container. This can help you diagnose any network-related issues.
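To make "configure the multicast application" concrete, here is a minimal sketch of the standard socket options involved, in Python; the group 239.1.1.1 and port 5000 are placeholders, not values from the slides:

```python
import socket
import struct

MCAST_GROUP = "239.1.1.1"   # placeholder: use your actual multicast group
MCAST_PORT = 5000           # placeholder: use your actual port

def make_sender(ttl: int = 1) -> socket.socket:
    """UDP socket for sending multicast; TTL 1 keeps traffic on the local segment."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
    return sock

def make_receiver(group: str, port: int) -> socket.socket:
    """UDP socket bound to the multicast port and joined to the group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # ip_mreq = 4-byte group address + 4-byte local interface (0.0.0.0 = kernel picks)
    mreq = struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

# Usage, inside a container started with --network host:
#   make_sender().sendto(b"hello", (MCAST_GROUP, MCAST_PORT))
#   data, addr = make_receiver(MCAST_GROUP, MCAST_PORT).recvfrom(1024)
```

The same options exist in C (`setsockopt` with `struct ip_mreq`); the point is that the container sees the host's interfaces, so no Docker-specific configuration is needed in the application itself.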
  • 15. The bridge driver

In bridge mode it is also possible, by managing the IGMP subscriptions by hand: multicast to/from a Docker bridge network is currently not possible out of the box. This is due to limitations in how Linux kernels support multicast routing. Packets are forwarded to the Docker bridge using iptables and the unicast routing table, but multicast packets are handled differently by the kernel. A workaround is to run a tool like smcrouted (https://github.com/troglobit/smcroute) on the host (or in a container with access to the host network). This process manages the Linux multicast forwarding cache.
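As an illustration of the smcrouted workaround, a hypothetical /etc/smcroute.conf that forwards one group from the physical interface onto the Docker bridge might look like this (eth0, docker0, and the group address are placeholders; check the smcroute documentation for your version's exact syntax):

```
phyint eth0 enable
phyint docker0 enable

# Subscribe to the group on the upstream interface...
mgroup from eth0 group 239.1.1.1

# ...and forward its traffic onto the Docker bridge
mroute from eth0 group 239.1.1.1 to docker0
```

smcrouted installs static multicast routes, so no IGMP querier is needed on the upstream side for this forwarding to work.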
  • 16. The macvlan driver

Managing multicast streams using the macvlan driver in Docker can be more straightforward than other networking modes. The macvlan driver allows each Docker container to have its own unique MAC address and appear as a separate device on the network. Here's how you can manage multicast streams using the macvlan driver:

Create a macvlan network:

$ docker network create -d macvlan --subnet=<subnet> --gateway=<gateway> --ip-range=<ip-range> -o parent=<physical-interface> <network-name>

<subnet>: the subnet for your containers.
<gateway>: the gateway IP for your containers.
<ip-range>: the range of IPs that can be allocated to containers.
<physical-interface>: the name of your physical network interface.
<network-name>: the name of the macvlan network.

Replace the placeholders with your specific network configuration.
  • 17. The macvlan driver

Run containers with macvlan networking:

$ docker run --network=<network-name> -itd --name=<container-name> <your-image>

<network-name>: the name of the macvlan network you created.
<container-name>: a name for your Docker container.
<your-image>: the Docker image you want to run.

Within your Docker containers, you can manage multicast streams as you would on a physical host. Configure your multicast application to use the macvlan network interface for sending and receiving multicast traffic, then test your multicast streams to ensure they function as expected within the containers.

Please note the following considerations:

- Containers connected to a macvlan network have direct access to the physical network and may require appropriate permissions and configuration on your network infrastructure.
- Ensure that the multicast application inside the container is configured to use the macvlan network interface for multicast communication.
- Depending on your network and router configuration, you may need to set up multicast routing or enable multicast support on your network infrastructure to ensure proper multicast traffic flow.
- Always exercise caution when working with multicast traffic, as it can interact with network infrastructure in complex ways and may require additional configuration and permissions.
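Since a macvlan container owns an address on the physical subnet, the application can be explicit about which interface joins the group instead of letting the kernel choose. A sketch of building the ip_mreq structure with a specific local address (all addresses here are placeholders):

```python
import socket
import struct

def membership_request(group: str, local_ip: str) -> bytes:
    """Build the 8-byte ip_mreq passed to IP_ADD_MEMBERSHIP:
    4 bytes of multicast group + 4 bytes of local interface address."""
    return struct.pack("4s4s",
                       socket.inet_aton(group),
                       socket.inet_aton(local_ip))

def join_on_interface(sock: socket.socket, group: str, local_ip: str) -> None:
    """Join `group` on the interface that owns `local_ip` (e.g. the
    container's macvlan address), and also send via that interface."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    membership_request(group, local_ip))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF,
                    socket.inet_aton(local_ip))
```

Pinning the interface this way avoids surprises when the container has several addresses, since IGMP membership reports then leave via the macvlan interface rather than whatever the routing table picks.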
  • 19. Create and collect information from a Docker network

The command to create a Docker network is:

$ docker network create --driver <DRIVER TYPE> <NETWORK NAME>

In this example we will create a bridge-type network named mon-bridge:

$ docker network create --driver bridge mon-bridge

We will then list the Docker networks with the following command:

$ docker network ls

Result:

NETWORK ID     NAME                    DRIVER    SCOPE
58b8305ce041   bridge                  bridge    local
91d7f01dad50   host                    host      local
ccdbdbf708db   mon-bridge              bridge    local
10ee25f56420   myimagedocker_default   bridge    local
6851e9b8e06e   none                    null      local
  • 20. It is possible to collect information on a Docker network, such as its configuration, by typing the following command:

$ docker network inspect mon-bridge

Result:

[
    {
        "Name": "mon-bridge",
        "Id": "ccdbdbf708db7fa901b512c8256bc7f700a7914dfaf6e8182bb5183a95f8dd9b",
        ...
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.21.0.0/16",
                    "Gateway": "172.21.0.1"
                }
            ]
        },
        ...
        "Labels": {}
    }
]
  • 21. You can override the Subnet and Gateway values using the --subnet and --gateway options of the docker network create command, as follows:

$ docker network create --driver bridge --subnet=172.16.86.0/24 --gateway=172.16.86.1 my-bridge

For this example, we will connect two containers to our previously created bridge network:

$ docker run -dit --name alpine1 --network mon-bridge alpine
$ docker run -dit --name alpine2 --network mon-bridge alpine
  • 22. If we inspect our mon-bridge network again, we will see our two new containers in the information returned:

$ docker network inspect mon-bridge

Result:

[
    {
        "Name": "mon-bridge",
        "Id": "ccdbdbf708db7fa901b512c8256bc7f700a7914dfaf6e8182bb5183a95f8dd9b",
        ...
        "Containers": {
            "1ab5f1815d98cd492c69a63662419e0eba891c0cadb2cbdd0fb939ab25f94b33": {
                "Name": "alpine1",
                "EndpointID": "5f04963f9ec084df659cfc680b9ec32c44237dc89e96184fe4f2310ba6af7570",
                "MacAddress": "02:42:ac:15:00:02",
                "IPv4Address": "172.21.0.2/16",
                "IPv6Address": ""
            },
            "a935d2e1ddf76fe49cdb1950653f4a093928020b49ebfea4130ff9d712ffb1d6": {
                "Name": "alpine2",
                "EndpointID": "3e009b56104a1bf9106bc622043a2ee06010b102279e24b4807c7b7ffec166dd",
                "MacAddress": "02:42:ac:15:00:03",
                "IPv4Address": "172.21.0.3/16",
                "IPv6Address": ""
            }
        },
        ...
    }
]
  • 23. From the result, we can see that our alpine1 container has the IP address 172.21.0.2, and our alpine2 container has the IP address 172.21.0.3. Let's make them communicate using the ping command:

$ docker exec alpine1 ping -c 1 172.21.0.3

Result:

PING 172.21.0.3 (172.21.0.3): 56 data bytes
64 bytes from 172.21.0.3: seq=0 ttl=64 time=0.101 ms

$ docker exec alpine2 ping -c 1 172.21.0.2

Result:

PING 172.21.0.2 (172.21.0.2): 56 data bytes
64 bytes from 172.21.0.2: seq=0 ttl=64 time=0.153 ms
  • 24. For information, you cannot create a host network, because it uses the interface of your host machine. If you try to create one, you will receive the following error:

$ docker network create --driver host my-host

Error:

Error response from daemon: only one instance of "host" network is allowed

You can only use the host driver, not create it. In this example we will start an Apache container on port 80 of the host machine. From a networking perspective, this is the same level of isolation as if the Apache process were running directly on the host machine rather than in a container. The process itself, however, remains isolated from the host machine. This procedure requires that port 80 be available on the host machine:

$ docker run --rm -d --network host --name my_httpd httpd

Without any mapping, you can access the Apache server at http://localhost:80/, where you will see the message "It works!". From your host machine, you can check which process is bound to port 80 using the netstat command:

$ sudo netstat -tulpn | grep :80
  • 25. This is indeed the httpd process that uses port 80, without using port mapping:

tcp        0      0 127.0.0.1:8000    0.0.0.0:*    LISTEN    5084/php
tcp6       0      0 :::80             :::*         LISTEN    11133/httpd
tcp6       0      0 :::8080           :::*         LISTEN    3122/docker-prox

Finally, stop the container; it will be deleted automatically because it was started with the --rm option:

$ docker container stop my_httpd
  • 26. Remove, disconnect, and connect a Docker network

Before deleting your Docker network, you must first delete any container connected to it, or otherwise simply disconnect the containers from the network without deleting them. We will choose the second method, disconnecting all containers using the mon-bridge network:

$ docker network disconnect mon-bridge alpine1
$ docker network disconnect mon-bridge alpine2

Now, if you check the network interfaces of your alpine-based containers, you will only see the loopback interface, as with the none driver:

$ docker exec alpine1 ip a

Result:

lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
  • 27. Once you have disconnected all your containers from the mon-bridge Docker network, you can delete it:

$ docker network rm mon-bridge

However, your containers are now without a bridge network interface, so you must reconnect them to the default bridge network so that they can communicate with each other again:

$ docker network connect bridge alpine1
$ docker network connect bridge alpine2

Then check whether your containers have received the correct IP:

$ docker inspect -f '{{.Name}} - {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(docker ps -aq)

Result:

/alpine2 - 172.17.0.3
/alpine1 - 172.17.0.2
  • 28. You can create as many bridge networks as you want; this remains a good way to secure communication between your containers, because containers connected to bridge1 cannot communicate with containers on bridge2, thus limiting unnecessary communication.
  • 29. Summary:

## Create a docker network
docker network create --driver <DRIVER TYPE> <NETWORK NAME>

## List docker networks
docker network ls

## Delete one or more docker network(s)
docker network rm <NETWORK NAME>

## Collect information on a Docker network
docker network inspect <NETWORK NAME>
    -v or --verbose: verbose mode for better diagnostics

## Delete all unused docker networks
docker network prune
    -f or --force: force deletion

## Connect a container to a Docker network
docker network connect <NETWORK NAME> <CONTAINER NAME>

## Disconnect a container from a Docker network
docker network disconnect <NETWORK NAME> <CONTAINER NAME>
    -f or --force: force disconnection

## Start a container and connect it to a docker network
docker run --network <NETWORK NAME> <IMAGE NAME>