Docker meetup

Description

Slides for the Docker meetup talk on overlay networks in Docker.

Transcript

  1. Network Overlay Options in Docker
     Syed Mushtaq Ahmed, syed.ahmed4@mail.mcgill.ca
  2. Networking in Docker
     ● When the Docker daemon starts, it sets up the docker0 bridge, which is the entry point for all container traffic.
     ● Communication between local containers works, but anything outside the host must go through port forwarding.
     ● This can cause problems if multiple containers want to communicate over the same port.
     ● Overlay networks make seamless communication between containers on different hosts possible without jumping through multiple hoops.
     ● We examine three overlay networking options available for Docker: Weave, Flannel, and Libnetwork.
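     A minimal sketch (not from the slides) of the host-port clash, using the stock nginx image as an example: both containers listen on port 80 internally, but only one can claim a given host port.
        # first container publishes container port 80 on host port 8080
        docker run -d --name web1 -p 8080:80 nginx
        # a second container cannot reuse the same host port
        docker run -d --name web2 -p 8080:80 nginx   # fails: port is already allocated
        # it has to pick another one, so peers must track per-container host ports
        docker run -d --name web2 -p 8081:80 nginx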
  3. Setup
  4. Weave
     ● “Weave creates a virtual network that connects Docker containers deployed across multiple hosts and enables their automatic discovery.” [1]
     ● Weave creates a custom bridge to which each container connects.
     ● It uses a “router” container, which intercepts packets destined for the bridge, encapsulates them, and sends them to the right peer router.
     ● Each router learns which MAC addresses belong to which peer router and is also aware of the overall topology.
     ● It uses a custom encapsulation format and batches multiple frames into a single UDP payload.
     [1] https://github.com/weaveworks/weave#readme
  5. Weave Setup
     # install
     curl -L git.io/weave -o /usr/local/bin/weave
     chmod a+x /usr/local/bin/weave
     # start
     weave launch [$PEER_IP]
     eval $(weave env)
     # run
     docker run --name c1 -it ubuntu
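     A hypothetical second-host counterpart to the above (not on the slide), assuming HOST1_IP points at the first host and c1's overlay address is taken from `weave ps` on host 1:
        # peer with the first host and start another container
        weave launch $HOST1_IP
        eval $(weave env)
        docker run --name c2 -it ubuntu
        # from inside c2, c1 is reachable directly over the overlay
        ping <c1-weave-ip>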
  6. Weave Overlay Throughput
     Native (inter-VM)   3.2 Gb/s
     Weave               147 Mb/s
     Weave is slow because the router container uses PCAP to capture packets and encapsulates them in userspace, which is very expensive.
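     The slides do not say how these numbers were measured; a plausible setup is iperf between one container on each host, roughly:
        # inside c1 on host 1
        iperf -s
        # inside c2 on host 2, pointing at c1's overlay address
        iperf -c <c1-overlay-ip> -t 30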
  7. Flannel
     ● “Flannel is a virtual network that gives a subnet to each host for use with container runtimes.” [2]
     ● Each host gets a subnet assigned to it, and container IPs are allocated from that subnet.
     ● Uses etcd for storing configuration.
     ● Can use multiple “backends” (UDP, VXLAN, AWS-VPC).
     ● docker0 is kept as the default bridge, so there are no extra interfaces in the container.
     ● Supports a multi-network mode, but it is static and still experimental.
     [2] https://github.com/coreos/flannel
  8. Flannel Setup (kernel > 3.15)
     # Setup Etcd ...
     # build flannel
     git clone https://github.com/coreos/flannel.git
     cd flannel
     docker run -v `pwd`:/opt/flannel -i -t google/golang /bin/bash -c "cd /opt/flannel && ./build"
     # push the network config to etcd
     curl -L http://127.0.0.1:2379/v2/keys/coreos.com/network/config -XPUT -d value='{
       "Network": "10.0.0.0/8",
       "SubnetLen": 20,
       "SubnetMin": "10.10.0.0",
       "SubnetMax": "10.99.0.0",
       "Backend": { "Type": "vxlan", "Port": 7890 }
     }'
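     Not on the slide, but the config can be read back to confirm etcd accepted it, and per-host subnet leases appear under the same prefix once flanneld is running:
        curl -L http://127.0.0.1:2379/v2/keys/coreos.com/network/config
        curl -L http://127.0.0.1:2379/v2/keys/coreos.com/network/subnets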
  9. Flannel Setup (kernel > 3.15), continued
     # start flannel
     cd flannel/bin
     ./flanneld -etcd-endpoints="http://127.0.0.1:2379"
     # start docker with the flannel settings (you may have to change docker0's IP)
     service docker stop
     source /run/flannel/subnet.env
     docker -d --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
     # start containers normally
     docker run -it ubuntu
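     A couple of sanity checks (not on the slide), assuming the default VXLAN network identifier so the device is named flannel.1:
        # flanneld writes the per-host subnet lease and MTU here
        cat /run/flannel/subnet.env    # e.g. FLANNEL_SUBNET=..., FLANNEL_MTU=1450
        # the kernel VXLAN device created by the vxlan backend
        ip -d link show flannel.1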
  10. Flannel Overlay Throughput
      Native (inter-VM)   3.2 Gb/s
      Weave               147 Mb/s
      Flannel             1.22 Gb/s
      Flannel is faster than Weave because it uses the kernel VXLAN driver, thus avoiding copying packets to user space.
  11. Libnetwork
      ● Currently in active development.
      ● Tightly integrated with Docker; it provides native multi-host networking.
      ● Flexible enough to support external drivers (Weave, for example).
      ● Defines networks and services as top-level objects.
      ● We can dynamically create multiple networks and services belonging to different networks, and attach them to containers.
  12. Libnetwork Setup (kernel > 3.15)
      # build a docker binary with experimental support
      git clone https://github.com/docker/docker.git; cd docker
      DOCKER_EXPERIMENTAL=1 make
      # set up a key-value store (using Consul here)
      # host1
      consul agent -server -bootstrap -data-dir /tmp/consul -bind=<host-1-ip-address>
      # host2
      consul agent -data-dir /tmp/consul -bind <host-2-ip-address>
      consul join <host-1-ip-address>
      # start docker
      docker -d --kv-store=consul:localhost:8500 \
        --label=com.docker.network.driver.overlay.bind_interface=eth0 \
        [--label=com.docker.network.driver.overlay.neighbor_ip=<host-1-ip-address>]
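      Not on the slide: a quick check that both hosts have joined the key-value store before starting the daemons.
         consul members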
  13. Libnetwork Setup (kernel > 3.15), continued
      # create a network with the overlay driver
      docker network create -d overlay mynet
      # create a service under the network
      # host1
      docker service publish svc1.mynet
      # host2
      docker service publish svc2.mynet
      # start a container and attach the service to it
      # host1
      CID=$(docker run -itd ubuntu)
      docker service attach $CID svc1.mynet
      # host2
      CID=$(docker run -itd ubuntu)
      docker service attach $CID svc2.mynet
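      To verify (not on the slide, and assuming the experimental CLI of that era): list what was created, then check cross-host reachability from inside a container, substituting svc2's overlay address.
         docker network ls
         docker service ls
         docker exec $CID ping -c 3 <svc2-overlay-ip>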
  14. Libnetwork Overlay Throughput
      Native (inter-VM)   3.2 Gb/s
      Weave               147 Mb/s
      Flannel             1.22 Gb/s
      Libnetwork          1.32 Gb/s
      Libnetwork uses the same kernel VXLAN driver as Flannel. It has slightly higher throughput, possibly because Flannel sets a slightly lower MTU (1450 instead of 1500) on the docker bridge.
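      The MTU difference mentioned above is easy to confirm on each host (not from the slides):
         ip link show docker0 | grep -o 'mtu [0-9]*'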
  15. Other Approaches
      ● Rancher uses IPsec tunnels between hosts to implement its overlay.
      ● Socketplane used Open vSwitch as the container bridge and relied on its VXLAN tunneling capability.
  16. Questions?
