
Octo talk: Docker multi-host networking


The Docker point of view on SDN
OCTO Meetup SDN 12/19/2015


  1. Herve Leclerc@dt DOCKER MULTI-HOST NETWORKING
  2. ALTER WAY
  3. DOCKER LIBNETWORK
  4. DOCKER LIBNETWORK: open source since April 2015, multiple OS support, > 500 PRs, > 500 ⭐
  5. DOCKER LIBNETWORK implements the Container Network Model (CNM), built from 3 main components: Sandbox, Endpoint and Network.
  6. CNM
     [Diagram: three containers, each with a Network Sandbox holding one or more endpoints, attached to two backend networks]
     Network Sandbox: an isolated environment where the networking configuration for a Docker container lives.
     Endpoint: a network interface that can be used for communication over a specific network. An endpoint joins exactly one network, and multiple endpoints can exist within a single Network Sandbox.
     Network: a uniquely identifiable group of endpoints that are able to communicate with each other. You could create a "Frontend" and a "Backend" network and they would be completely isolated.
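The three CNM objects map directly onto docker CLI operations. A minimal sketch, assuming a Docker 1.9+ daemon is available (network and container names here are illustrative, not from the talk):

```shell
# Two isolated networks, as in the "Frontend"/"Backend" example above.
docker network create frontend
docker network create backend

# Starting a container creates its sandbox; --net attaches an endpoint
# to the chosen network.
docker run -tid --name web --net frontend alpine ash

# A second endpoint in the same sandbox: connect the container to backend too.
docker network connect backend web

# The container now shows one interface per endpoint.
docker exec web ip addr
```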
  7. DOCKER LIBNETWORK: the network drivers implement the Driver API and provide the specifics of how a network and an endpoint are implemented: create a network, create a container (attach it to the network).
  8. DOCKER LIBNETWORK: BRIDGE DRIVER
     Creates a Linux bridge for each network.
     Creates a veth pair for each endpoint: one end is attached to the bridge, the other appears as eth0 inside the container.
     iptables rules are created for NAT.
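Each of these steps can be observed on a single host. A sketch, assuming root on a Docker host (interface and container names will differ on your machine):

```shell
# The per-network Linux bridge (docker0 for the default network).
ip link show type bridge

# One veth pair per endpoint: the host side is enslaved to the bridge...
ip link show master docker0

# ...and the peer appears as eth0 inside the container.
docker run -tid --name c0 alpine ash
docker exec c0 ip addr show eth0

# The NAT rules created for the bridge subnet (MASQUERADE).
iptables -t nat -L POSTROUTING -n
```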
  9. DOCKER LIBNETWORK: OVERLAY DRIVER
     Creates a separate network namespace for every network.
     Creates a Linux bridge and VXLAN tunnels to every other discovered host.
     Creates a veth pair for each endpoint: one end is attached to the bridge, the other appears as eth0 inside the container.
     The network namespace is connected to the host network using NAT.
  10. DOCKER LIBNETWORK: NETWORK PLUGINS
      Implemented using libnetwork's remote driver.
      Uses a JSON-RPC transport.
      Can be written in any language.
      Can be deployed as a container.
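The remote driver speaks plain JSON over HTTP. A sketch of the first two exchanges (the payload shapes are my assumptions about the protocol; a real plugin serves them from a socket under /run/docker/plugins/, not from files):

```shell
# Handshake: the daemon POSTs to /Plugin.Activate and the plugin answers
# with the driver kinds it implements.
cat > activate.json <<'EOF'
{ "Implements": ["NetworkDriver"] }
EOF

# First driver call: /NetworkDriver.CreateNetwork carries the network ID
# and the creation options.
cat > create-network.json <<'EOF'
{ "NetworkID": "2eb093042eac", "Options": {} }
EOF

grep -q '"NetworkDriver"' activate.json && echo "plugin advertises NetworkDriver"
```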
  11. HOW DOCKER NETWORKS A CONTAINER?
      [Diagram: a Docker host with docker0 bridge; the container has lo and eth0, paired with a vethXXX on the host side]
      docker run:
      --net=bridge (default)
      --net=host
      --net=container:NAME_or_ID
      --net=none
      --net=overlay_name
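Each of these modes changes what ends up inside the container's sandbox. A quick sketch of the differences (container name is illustrative):

```shell
# bridge (default): a veth pair NATed through docker0.
docker run --rm --net=bridge alpine ip addr

# host: no network isolation, the container sees the host's interfaces.
docker run --rm --net=host alpine ip addr

# container:<name>: share another container's sandbox (same eth0, same lo).
docker run -tid --name target alpine ash
docker run --rm --net=container:target alpine ip addr

# none: only the loopback interface.
docker run --rm --net=none alpine ip addr
```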
  12. HOW DOCKER NETWORKS A CONTAINER?
      [Diagram: containers babase and frontend on docker0, each with lo, eth0 and a host-side vethXXX]
      # docker run -tid --name babase -e database=mabase alpine ash
      # docker run -tid --link babase:babase --name frontend alpine ash
      # docker exec frontend env
      PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
      HOSTNAME=e83cfafdbca0
      TERM=xterm
      BABASE_NAME=/frontend/babase
      BABASE_ENV_database=mabase
      HOME=/root
      # docker exec frontend cat /etc/hosts
      172.17.0.5 e83cfafdbca0
      172.17.0.4 babase fa10fbead100
      # docker exec frontend ping babase
      PING babase (172.17.0.4): 56 data bytes
      64 bytes from 172.17.0.4: seq=0 ttl=64 time=0.080 ms
  13. Overlay Network
      [Diagram: a Docker host (eth1 192.168.99.103, eth0 10.0.2.15) with docker0 172.17.0.1, docker_gwbridge 172.18.0.1, and an overlay namespace holding br0 10.0.0.1, vxlan1 and a vethXX; the container has eth0 10.0.0.2 (02:42:0A:00:00:02) on the overlay and eth1 172.18.0.2 (02:42:AC:12:00:02) on docker_gwbridge; iptables masquerades outbound traffic]
      iptables -t nat -L -vn
      Chain PREROUTING (policy ACCEPT 427 packets, 54721 bytes)
       pkts bytes target prot opt in out source destination
        431 26098 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
      Chain INPUT (policy ACCEPT 425 packets, 54618 bytes)
       pkts bytes target prot opt in out source destination
      Chain OUTPUT (policy ACCEPT 391 packets, 28774 bytes)
       pkts bytes target prot opt in out source destination
          0 0 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
      Chain POSTROUTING (policy ACCEPT 391 packets, 28774 bytes)
       pkts bytes target prot opt in out source destination
          2 103 MASQUERADE all -- * !docker_gwbridge 172.18.0.0/16 0.0.0.0/0
          4 240 MASQUERADE all -- * !docker0 172.17.0.0/16 0.0.0.0/0
      netstat -rn
      Kernel IP routing table
      Destination Gateway Genmask Flags MSS Window irtt Iface
      0.0.0.0 172.18.0.1 0.0.0.0 UG 0 0 0 eth1
      10.0.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
      172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1
      ip netns exec 3-2eb093042e ip a
      2: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
         link/ether 36:89:6b:73:b9:7d brd ff:ff:ff:ff:ff:ff
         inet 10.0.0.1/24 scope global br0
            valid_lft forever preferred_lft forever
         inet6 fe80::4cc0:d1ff:fe82:4730/64 scope link
            valid_lft forever preferred_lft forever
      19: vxlan1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UNKNOWN group default
         link/ether 42:d5:16:ca:78:11 brd ff:ff:ff:ff:ff:ff
         inet6 fe80::40d5:16ff:feca:7811/64 scope link
            valid_lft forever preferred_lft forever
      21: veth2@if20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UP group default
         link/ether 36:89:6b:73:b9:7d brd ff:ff:ff:ff:ff:ff
         inet6 fe80::3489:6bff:fe73:b97d/64 scope link
            valid_lft forever preferred_lft forever
  14. Overlay Network: VXLAN
      [Diagram: two Docker hosts (eth1 192.168.99.103 and 192.168.99.102, eth0 10.0.2.15), each with docker0 172.17.0.1, docker_gwbridge 172.18.0.1 and an overlay namespace (br0 10.0.0.1, vxlan1, vethXX); the containers (eth0 10.0.0.2 and 10.0.0.3 on the overlay, eth1 172.18.0.2 on docker_gwbridge, iptables masquerade) communicate through a VXLAN tunnel between the hosts]
  15. LibNetwork alternatives: OVS bridge, vRouter (MidoNet), udp/vxlan, ipsec
  16. #docker-machine ssh node-1
      #docker network ls
      NETWORK ID NAME DRIVER
      242afbff907a none null
      66828f636422 host host
      ee7119d1b81e bridge bridge
  17. #docker-machine ssh node-2
      #docker network ls
      NETWORK ID NAME DRIVER
      cda2918963c5 bridge bridge
      5071d7e9fd33 none null
      7e24198aef09 host host
  18. #docker-machine ssh node-1
      #docker network create -d overlay skynet
      2eb093042eac5429027a48ccf72758cc325dd7d09c2b901078bbc3aab46f04d6
      #docker network ls
      NETWORK ID NAME DRIVER
      2eb093042eac skynet overlay
      ee7119d1b81e bridge bridge
      242afbff907a none null
      66828f636422 host host
      #docker-machine ssh node-2
      #docker network ls
      NETWORK ID NAME DRIVER
      2eb093042eac skynet overlay
      cda2918963c5 bridge bridge
      5071d7e9fd33 none null
      7e24198aef09 host host
  19. #docker run -tid --name c1 --net skynet alpine ash
      #docker network ls
      NETWORK ID NAME DRIVER
      2eb093042eac skynet overlay
      5071d7e9fd33 none null
      7e24198aef09 host host
      17400307644a docker_gwbridge bridge
      cda2918963c5 bridge bridge
  20. #ln -s /var/run/docker/netns/1-2eb093042e /var/run/netns/1-2eb093042e
      #ip netns list
      1-2eb093042e
      #ip netns exec 1-2eb093042e ip a
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
         link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
         inet 127.0.0.1/8 scope host lo
            valid_lft forever preferred_lft forever
         inet6 ::1/128 scope host
            valid_lft forever preferred_lft forever
      2: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
         link/ether 26:91:17:80:1e:46 brd ff:ff:ff:ff:ff:ff
         inet 10.0.0.1/24 scope global br0
            valid_lft forever preferred_lft forever
         inet6 fe80::d0a9:e1ff:fe04:ff07/64 scope link
            valid_lft forever preferred_lft forever
      8: vxlan1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UNKNOWN group default
         link/ether 26:91:17:80:1e:46 brd ff:ff:ff:ff:ff:ff
         inet6 fe80::2491:17ff:fe80:1e46/64 scope link
            valid_lft forever preferred_lft forever
      10: veth2@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UP group default
         link/ether 92:5a:06:e8:5d:86 brd ff:ff:ff:ff:ff:ff
         inet6 fe80::905a:6ff:fee8:5d86/64 scope link
            valid_lft forever preferred_lft forever
      [veth2@if9 is the host-side peer of eth0 inside the container]
  21. #ip netns exec 1-2eb093042e ip -d link show vxlan1
      14: vxlan1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UNKNOWN mode DEFAULT group default
         link/ether 0a:48:c5:7f:f1:3d brd ff:ff:ff:ff:ff:ff promiscuity 1
         vxlan id 256 srcport 0 0 dstport 4789 proxy l2miss l3miss ageing 300
         bridge_slave
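The vxlan1 device above is an ordinary kernel VXLAN interface: libnetwork uses the IANA-assigned UDP port 4789 and enables proxy/l2miss/l3miss so the daemon can answer ARP and FDB misses itself. What it sets up can be reproduced by hand, as a sketch (requires root; id and names mirror the output above):

```shell
# A VXLAN interface with the same parameters libnetwork used above.
ip link add vxlan1 type vxlan id 256 dstport 4789 proxy l2miss l3miss

# Enslave it to a bridge, exactly like br0 in the overlay namespace.
ip link add br0 type bridge
ip link set vxlan1 master br0
ip link set br0 up && ip link set vxlan1 up
```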
  22. #netstat -natup | grep udp
      udp 0 0 0.0.0.0:4789 0.0.0.0:* -
      udp 0 0 192.168.99.103:7946 0.0.0.0:* 2386/docker
  23. # cd /Users/hleclerc/dev/projets/DOCKER/GunConsul/project
      # gun dev :: network 2b7b4dc784c2de74ee00755208402fcd06feb53a0276b6f3c477b98ea45cb153/
      {
        "addrSpace": "GlobalDefault",
        "enableIPv6": false,
        "generic": { "com.docker.network.generic": {} },
        "id": "2b7b4dc784c2de74ee00755208402fcd06feb53a0276b6f3c477b98ea45cb153",
        "ipamType": "default",
        "ipamV4Config": "[{"PreferredPool":"","SubPool":"","Options":null,"Gateway":"","AuxAddresses":null}]",
        "ipamV4Info": "[{"IPAMData":"{"AddressSpace":"","Gateway":"10.0.0.1/24","Pool":"10.0.0.0/24"}","PoolID":"GlobalDefault/10.0.0.0/24"}]",
        "name": "skynet",
        "networkType": "overlay",
        "persist": true
      }
  24. Overlay Network / SWARM / CONSUL
      [Diagram: three Docker hosts running consul (port 8500); containers c1, c2 and c3 joined to the skynet overlay (ping c2, ping c3.skynet); libkv backends: Consul, ZooKeeper, etcd, BoltDB]
      docker run -ti -d --net=skynet alpine
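The overlay driver needs a key-value store (through libkv) to discover the other hosts. A sketch of the setup used for demos like this one, with docker-machine and a Consul container (machine names and the kv host are illustrative):

```shell
# A machine that only runs the key-value store.
docker-machine create -d virtualbox kv
docker $(docker-machine config kv) run -d -p 8500:8500 progrium/consul -server -bootstrap

# Each node's daemon points at the store and advertises its own address,
# which is what makes "docker network create -d overlay" work across hosts.
docker-machine create -d virtualbox \
  --engine-opt="cluster-store=consul://$(docker-machine ip kv):8500" \
  --engine-opt="cluster-advertise=eth1:2376" \
  node-1
```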
  25.
  26. Overlay network demo #2
      [Diagram: Docker Host #1 runs A1 and A2 on the default bridge and B1 and B2 on d1net; C1 on Host #1 and C2 on Host #2 share the skynet overlay]
      (d1) docker run -ti -d --name=A1 alpine /bin/sh
      (d1) docker run -ti -d --name=A2 alpine /bin/sh
      (d1) docker inspect --format '{{ .NetworkSettings.IPAddress }}' A1
      (d1) docker inspect --format '{{ .NetworkSettings.IPAddress }}' A2
      (d1) docker attach A2
      (d1) cat /etc/hosts # (note that the file is not updated)
      (d1) ping [IP of A1]
      ---
      (d1) docker network create d1net
      (d1) docker run -ti -d --name=B1 --net=d1net alpine /bin/sh
      (d1) docker run -ti -d --name=B2 --net=d1net alpine /bin/sh
      (d1) docker attach B2
      (d1) cat /etc/hosts # (note that the file is updated with B1 and B1.d1net)
      (d1) ping [IP of A1] (no reply)
      (d1) ping B1.d1net (ping OK) # careful, case matters with alpine :(
      ---
      (d1) docker network create skynet
      (d2) docker network ls
      (d1) docker run -ti -d --name=C1 --net=skynet alpine /bin/sh
      (d2) docker run -ti -d --name=C2 --net=skynet alpine /bin/sh
      (d2) docker attach C2
      (d2) cat /etc/hosts # (note that the file is updated with C1 and C1.skynet)
      (d2) ping [IP of A1] (no reply)
      (d2) ping B1.d1net (no reply)
      (d2) ping C1.skynet (ping OK)
  27. Overlay network demo #3
      Orchestrate the deployment and use of a LAMP stack.
      [Diagram: an httpd container (port 80, bridge network) on Docker #1, mysql and php-fpm containers on Docker #2, all joined to the skynet overlay; /var/www and /var/lib/mysql served from shared storage (NFS, GlusterFS, EC2...)]
  28. docker-compose.yml
      httpd:
        hostname: httpd-demo-wp
        image: alterway/httpd:2.4
        env_file:
          - ./httpd.env
          - ./phpfpm.env
        net: ${NETWORK}
        ports:
          - 80:80
        volumes_from:
          - sources
      mysql:
        image: alterway/mysql:5.6
        container_name: db
        env_file:
          - ./mysql.env
        environment:
          - constraint:node==${NODE_2}
        net: ${NETWORK}
        volumes:
          - /var/lib/mysql
      php:
        image: alterway/php:5.4-fpm
        container_name: phpfpm
        env_file:
          - ./php.env
          - ./wordpress.env
          - ./mysql.env
        hostname: php-demo-wp
        net: ${NETWORK}
        volumes_from:
          - sources
      sources:
        image: www-data
        stdin_open: true
        volumes:
          - ${APP_PATH}:/var/www
        environment:
          - constraint:node==${NODE_1}
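The compose file parameterizes the network and the scheduling constraints through environment variables, so the same stack can target the overlay and pin services to swarm nodes. A possible invocation (variable values are illustrative, assuming the swarm and skynet setup from the previous slides):

```shell
# NETWORK selects the overlay; NODE_1/NODE_2 pin sources and mysql
# to specific swarm nodes via the constraint:node== environment entries.
export NETWORK=skynet NODE_1=node-1 NODE_2=node-2 APP_PATH=/var/www
docker-compose up -d
```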
