Transcript of "Meetup: Docker using Software Defined Networks"

1. Title slide
   © OCTO 2013 - 50, avenue des Champs-Elysées, 75008 Paris, FRANCE
   Tel: +33 (0)1 58 56 10 00 - Fax: +33 (0)1 58 56 10 01 - www.octo.com
   Follow us on Twitter! @AdrienBlind @ArnaudMazin
2. Agenda
   1. Reminder: Docker networking
   2. Introduction to Software Defined Networks
   3. Mix up
   4. Take away
3. Reminder: Docker networking
4. Isolation by design
   Natively, each container runs isolated from the outside world.
   A bridge network (an internal, virtual switch) is provided with Docker, but it does not let containerized processes communicate with each other without further configuration (see next slides).
   [Diagram: a host running container 'app' (Tomcat, port 8080) and container 'db' (MySQL, port 3306), both attached to the docker0 bridge]
5. Exposing ports
   Exposing ports allows container services to talk to each other through the bridge network.
   - Dockerfile: add a line "EXPOSE <port>" (several ports can be listed)
   - Command-line switch: "docker run ... --expose <port>"
   - Example for the 'app' container: "docker run ... --expose 8080"
   [Diagram: containers 'app' (Tomcat, TCP 8080 exposed) and 'db' (MySQL, TCP 3306 exposed) on the docker0 bridge - "The app is able to talk to its database"]
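   A minimal sketch of both ways to expose a port; the image and container names are illustrative, not taken from the slides:

     # Build-time: declare the port in the Dockerfile of the 'app' image
     #   FROM tomcat
     #   EXPOSE 8080
     # Run-time: the same result with the --expose switch
     docker run -d --name db --expose 3306 mysql
     docker run -d --name app --expose 8080 my-tomcat-app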
6. Linking containers
   Linking enables a client container to get information about a resource container (also known as the linked container).
   - Command-line switch: "docker run --link <containername:alias>"
   - Example for the 'app' container: "docker run ... --link db:dbalias"
   The client container receives environment variables describing the resource container, e.g.:
   DBALIAS_PORT_3306_TCP=tcp://1.2.3.4:3306
   DBALIAS_PORT_3306_TCP_PROTO=tcp
   DBALIAS_PORT_3306_TCP_ADDR=1.2.3.4
   DBALIAS_PORT_3306_TCP_PORT=3306
   [Diagram: 'app' (Tomcat, port 8080) and 'db' (MySQL, port 3306) on the docker0 bridge - "The app knows where to reach its database"]
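   A small sketch of linking in practice, printing the variables injected into the client container (image names are illustrative):

     docker run -d --name db --expose 3306 mysql
     # The client container gets DBALIAS_* environment variables pointing at 'db'
     docker run --rm --link db:dbalias my-tomcat-app env | grep DBALIAS_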
7. Mapping ports
   Mapping ports publishes a container's ports on the host's external interfaces. Ports may be translated.
   - Command-line switch: "docker run -p <externalport>:<insideport>"
   - Example for the 'app' container: "docker run ... -p 80:8080"
   [Diagram: the host's external interface maps TCP 80 to Tomcat's port 8080 - "The app is reachable from the external network on port 80 of the host, while the database container remains reachable only by other containers"]
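   Putting the three mechanisms together for the 'app'/'db' example - a sketch with illustrative image names, not the slides' exact commands:

     docker run -d --name db --expose 3306 mysql
     docker run -d --name app --link db:dbalias -p 80:8080 my-tomcat-app
     # The app is now reachable on the host's port 80:
     curl http://localhost:80/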
8. Conclusion
   These characteristics enable containers:
   - To either remain fully isolated, or to talk to every other container of the same host that exposes its services
   - To map services one by one on the host's interface
   Hmm... what if I want to isolate groups of containers?
9. Introduction to Software Defined Networks
10. Traditional datacenter management
    [Diagram: physical overview (VMs behind a DMZ facing the Internet) vs. logical overview (tenants #1, #2 and #3, each with its own LAN and DMZ segments)]
    Logical topologies are quite different from physical ones.
    These designs traditionally rely heavily on low network layers (L2, etc.).
11. Multi-datacenter perspective
    [Diagram: physical overview of two datacenters full of VMs, interconnected through the Internet]
    - Limited to 4096 VLAN IDs (802.1Q)
    - Lack of dynamic provisioning
    - Involves subnetting or encapsulation to flow over L3 networks
12. SDN value proposal
    SDNs propose network solutions embracing cloud paradigms:
    - Massively multi-tenant: thousands of tenants, massively scalable
    - Easy & fast (de)provisioning: infrastructure as code, API-centric
    - Infrastructure agnostic: L3, does not stick to lower levels (physical designs, VLANs & co)
    - Decoupled infrastructure & tenant lifecycles
    - Cross-technology, vendor agnostic
13. SDN overview
    - Abstracts networking topologies & services from wiring and equipment
    - Centralizes network intelligence
    - Standardized management APIs: e.g. OpenFlow, for both physical & virtual network equipment
    [Diagram: SDN architecture overview (source: Wikipedia)]
14. Host perspective
    On hosts (hypervisors, Docker hosts...), SDNs mostly rely on:
    - Allocating network bridges (virtual, internal switches)
    - Setting ACLs to decide which flows to allow or deny
    - Connecting these bridges to the external world through the host's real interfaces
    (A sketch of these three building blocks follows.)
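    A minimal Open vSwitch sketch of those three building blocks; the bridge, interface and flow values are illustrative, not taken from the slides:

      ovs-vsctl add-br sdn-br0                    # allocate a virtual bridge
      ovs-vsctl add-port sdn-br0 eth1             # connect it to a real host interface
      # ACL-style flow rules: deny one flow, switch everything else normally
      ovs-ofctl add-flow sdn-br0 "priority=100,tcp,tp_dst=3306,actions=drop"
      ovs-ofctl add-flow sdn-br0 "priority=0,actions=normal"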
15. Network perspective - Low-level isolation
    - L2-network focused: keeps traditional paradigms, but uses APIs and centralized intelligence
    - Requires VLANs, VPLS... to spread the same virtual networks across several hosts
    - Enforces low-level isolation
    - Consider API-based network configuration (e.g. OpenFlow) to centralize and ease network management, making it more dynamic
    [Diagram: three hosts carrying SDN networks, connected through a router to a VPLS network; the host uplinks tag each virtual network's traffic in a VLAN, and the VPLS must provide L2 connectivity at all ends - a strong requirement!]
16. Network perspective - Full mesh
    A full-mesh, network-agnostic, encapsulated approach:
    - Relies on L3 networks and GRE, VXLAN... tunnels for inter-host communication (avoids relying on L2)
    - Network agnostic, as long as hosts can route traffic
    - SDN routers must route traffic between an inner virtual network and the external world
    [Diagram: three hosts carrying SDN network #1 over a routed L3 physical network; inter-host flows are encapsulated in tunnels]
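    For reference, this is what an encapsulated inter-host link looks like with Open vSwitch; a GRE and a VXLAN variant are shown (the bridge name and peer address are illustrative):

      ovs-vsctl add-port sdn-br0 gre0   -- set interface gre0   type=gre   options:remote_ip=10.0.0.2
      ovs-vsctl add-port sdn-br0 vxlan0 -- set interface vxlan0 type=vxlan options:remote_ip=10.0.0.2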
17. Mix up
18. Use cases - Back to Docker
    How can we support several containers belonging to the same tenant, across multiple hosts (or even multiple datacenters or providers), while at the same time preventing them from talking to other tenants' containers?
    Answering this question enables several use cases:
    - Isolate the containers of a single app without hitting the limits of a single Docker host
    - Resilience (e.g. spread an app server farm over multiple hosts)
    - Multi-provider setups (e.g. spread containers over several clouds/hosters, overflow management...)
    [Diagram: tenants #1, #2 and #3, each with LAN and DMZ segments of containers, connected to the Internet]
19. Step 0 - Where we start from
    [Diagram: hosts #1, #2 and #3, each running 'app' and 'db' containers attached to their local docker0 bridge]
    The TCP ports of the services running in the containers are mapped on the hosts.
    Get rid of the stock Docker bridge implementation! Otherwise, all containers of a same host will be able to talk to each other. (A sketch of how to disable it follows.)
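    A sketch of disabling Docker's own bridge networking; the exact flags depend on the Docker version (the 2013-era releases targeted by these slides accept -b=none on the daemon, and individual containers can also be started without any network, see step 3):

      # Start the Docker daemon without creating/using the docker0 bridge
      docker -d -b=none
      # (newer releases: dockerd --bridge=none)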
20. Step 1 - Create SDN-compliant bridges
    Use Open vSwitch to create bridges on each host, one per tenant subnet. For instance, on host #1:
    » ovs-vsctl add-br tech-br
    » ovs-vsctl add-port tech-br tep0 -- set interface tep0 type=internal
    » ifconfig tep0 192.168.1.1 netmask 255.255.255.0
    » ovs-vsctl add-br sdn-br0
    » ovs-vsctl set bridge sdn-br0 stp_enable=true
    The first three commands are common plumbing, done once per host (detailed on slide 24).
    The last two are done for each bridge: create it and protect it against Ethernet loops using the Spanning Tree Protocol (beware: in complex/large deployments, STP may consume a lot of CPU!).
    [Diagram, simplified view: host #1 with sdn-br0; hosts #2 and #3 with sdn-br0 and sdn-br1. A detailed insight is given on later slides.]
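    The same plumbing is repeated on the other hosts with their own tunnel-endpoint addresses. A sketch for host #2 (the tep0 address 192.168.1.2 matches the detailed view of slide 24; the rest is assumed symmetrical):

      # Host #2: common plumbing, once per host
      ovs-vsctl add-br tech-br
      ovs-vsctl add-port tech-br tep0 -- set interface tep0 type=internal
      ifconfig tep0 192.168.1.2 netmask 255.255.255.0
      # Host #2 carries two tenant subnets, hence two SDN bridges
      ovs-vsctl add-br sdn-br0
      ovs-vsctl set bridge sdn-br0 stp_enable=true
      ovs-vsctl add-br sdn-br1
      ovs-vsctl set bridge sdn-br1 stp_enable=true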
21. Step 2 - Link the SDN bridges
    Use Open vSwitch to link the corresponding bridges across hosts. In this example we use the full-mesh approach. On host #1 (1.2.3.1):
    » ovs-vsctl add-port sdn-br0 gre0 -- set interface gre0 type=gre options:remote_ip=1.2.3.2
    » ovs-vsctl add-port sdn-br0 gre1 -- set interface gre1 type=gre options:remote_ip=1.2.3.3
    [Diagram, simplified view: hosts #1 (1.2.3.1), #2 (1.2.3.2) and #3 (1.2.3.3), their sdn-br0/sdn-br1 bridges linked by GRE tunnels. A detailed insight is given on later slides.]
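    To complete the full mesh, each of the other hosts creates the matching tunnel endpoints. A sketch for hosts #2 and #3, assuming sdn-br1 is meshed the same way between those two hosts:

      # Host #2 (1.2.3.2)
      ovs-vsctl add-port sdn-br0 gre0 -- set interface gre0 type=gre options:remote_ip=1.2.3.1
      ovs-vsctl add-port sdn-br0 gre1 -- set interface gre1 type=gre options:remote_ip=1.2.3.3
      ovs-vsctl add-port sdn-br1 gre2 -- set interface gre2 type=gre options:remote_ip=1.2.3.3
      # Host #3 (1.2.3.3)
      ovs-vsctl add-port sdn-br0 gre0 -- set interface gre0 type=gre options:remote_ip=1.2.3.1
      ovs-vsctl add-port sdn-br0 gre1 -- set interface gre1 type=gre options:remote_ip=1.2.3.2
      ovs-vsctl add-port sdn-br1 gre2 -- set interface gre2 type=gre options:remote_ip=1.2.3.2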
22. Step 3 - Instantiate the containers
    Instantiate the containers without pre-built interfaces, to avoid plugging them into the native docker0 bridge:
    use the "-n=false" switch in "docker run" calls ("docker run ... -n=false").
    [Diagram, simplified view: hosts #1, #2 and #3 with their sdn-br0/sdn-br1 bridges and the 'app' and 'db' containers not yet connected]
23. Step 4 - Connect the containers to the appropriate bridge
    Use pipework from J. Petazzoni to easily plug containers into a chosen bridge: https://github.com/jpetazzo/pipework
    It creates a network adapter in each container, allocates it an IP (static or via DHCP), and plugs it into the bridge.
    Per container: "pipework <bridge_name> $container_id <container_ip>/24"
    [Diagram, simplified view: hosts #1, #2 and #3, with the 'app' and 'db' containers now attached to their tenant's sdn-br0/sdn-br1 bridge]
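    Putting steps 3 and 4 together on one host - a sketch with illustrative names and addresses (on recent Docker versions, "--net=none" is the equivalent of the slides' "-n=false"):

      # Start a container with no networking at all
      CID=$(docker run -d --net=none --name app1 my-tomcat-app)
      # Let pipework create an adapter inside the container, give it an IP and attach it to the tenant bridge
      pipework sdn-br0 $CID 192.168.0.2/24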
24. Insight - Detailed view between two hosts
    The previous drawings were simplified to ease overall understanding; the following schema details more deeply what happens inside a single host.
    [Diagram: hosts #1 (eth0 1.1.1.1/24) and #2 (eth0 2.2.2.2/24) connected through L2 switches and an L3 routed network. On each host, tech-br carries the tep0 internal interface (192.168.1.1/24 and 192.168.1.2/24), sdn-br0 carries the gre0 tunnel port, and each container is attached to sdn-br0 through an eth1 adapter (192.168.0.1 to 192.168.0.4, /24). A virtual, direct link is established between the two sdn-br0 bridges through the GRE tunnel, in which the traffic really flows.]
    The numbered callouts of the schema correspond to the following commands (addresses shown are per-host examples):
    1. ovs-vsctl add-br tech-br
    2. ovs-vsctl add-port tech-br tep0 -- set interface tep0 type=internal
    3. ifconfig tep0 192.168.1.1 netmask 255.255.255.0
    4. ovs-vsctl add-br sdn-br0
    5. ovs-vsctl set bridge sdn-br0 stp_enable=true
    6. ovs-vsctl add-port sdn-br0 gre0 -- set interface gre0 type=gre options:remote_ip=1.1.1.1 (repeat step 6 to create additional tunnels in order to reach other hosts)
    7. pipework sdn-br0 $container_id 192.168.0.3/24
    8. pipework sdn-br0 $container_id 192.168.0.4/24
    Legend: L2 switch, network adapter, Docker container, Docker host, Ethernet link between an adapter and a switch, GRE tunnel in which the traffic really flows.
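    For convenience, the same sequence as an end-to-end sketch for host #1 (addresses and names taken from the schema; the container image is illustrative):

      # 1-3: common plumbing, once per host
      ovs-vsctl add-br tech-br
      ovs-vsctl add-port tech-br tep0 -- set interface tep0 type=internal
      ifconfig tep0 192.168.1.1 netmask 255.255.255.0
      # 4-5: one SDN bridge per tenant subnet, loop-protected by STP
      ovs-vsctl add-br sdn-br0
      ovs-vsctl set bridge sdn-br0 stp_enable=true
      # 6: one GRE tunnel per remote host (here, towards host #2 at 2.2.2.2)
      ovs-vsctl add-port sdn-br0 gre0 -- set interface gre0 type=gre options:remote_ip=2.2.2.2
      # 7-8: start containers without networking and attach them with pipework
      CID=$(docker run -d --net=none my-tomcat-app)
      pipework sdn-br0 $CID 192.168.0.1/24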
25. Bonus: Alternate design - OpenStack Neutron paradigms
    An alternate design, based on OpenStack Neutron paradigms. Note that the use of local VLANs limits each host to 4096 tenant networks (802.1Q).
    - All VMs/containers of a same tenant network are segregated inside a dedicated, local VLAN of a single shared bridge (br-int); these VLANs are not exposed externally
    - A full mesh of GRE tunnels runs between all hosts, terminated on a single dedicated bridge (br-tun); a single patch link between br-int and br-tun carries all traffic to/from the mesh
    - One additional bridge (br-eth0, br-eth1...) is created for each physical interface
    - On each host, flow rules map the local tenant VLAN to the GRE identifier shared across all hosts. In this example, host #1 maps tenant #1: VLAN 20 <-> GRE ID 11111 and tenant #2: VLAN 30 <-> GRE ID 11112, while host #2 maps tenant #1: VLAN 40 <-> GRE ID 11111 and tenant #2: VLAN 50 <-> GRE ID 11112.
    Full story from VM A to VM B (tenant #1): traffic leaving A is tagged with VLAN 20 when entering br-int, flows to br-tun, is untagged when entering the GRE tunnel while the GRE identifier 11111 is set on the transmitted packets. At the other end of the tunnel, traffic carrying GRE id 11111 is assigned to VLAN 40, flows out of br-tun to br-int, and is finally untagged before entering B.
    Full story from A to C (tenant #1): traffic is tagged with VLAN 20 when entering br-int, then flows to br-eth1, which finally untags the outgoing traffic (or assigns a different VLAN corresponding to the external world).
    [Diagram: hosts #1 and #2, each with br-int (VMs A and B of tenant #1 on local VLANs 20/40, tenant #2 on VLANs 30/50), br-tun (GRE endpoints over the L3 routed network), and br-eth0/br-eth1 towards the physical switches; server C sits on an external switch or VLAN unrelated to the internal VLANs.]
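    A sketch of what such mapping rules can look like on host #1's br-tun, expressed directly with ovs-ofctl (the port numbers are illustrative; Neutron's OVS agent installs equivalent rules automatically):

      # patch port towards br-int = port 1, GRE port = port 2 (illustrative numbering)
      # Tenant #1, outgoing: local VLAN 20 -> GRE ID 11111
      ovs-ofctl add-flow br-tun "in_port=1,dl_vlan=20,actions=strip_vlan,set_tunnel:11111,output:2"
      # Tenant #1, incoming: GRE ID 11111 -> local VLAN 20
      ovs-ofctl add-flow br-tun "in_port=2,tun_id=11111,actions=mod_vlan_vid:20,output:1"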
26. Take away
27. Take away
    - Use Docker for container hosting, and externalize SDN management
    - Disable the bridge management features in Docker; use Open vSwitch instead
    - Abstract away from low-level network considerations between hosts whenever possible (L2 VLANs, for instance): consider tunneling
    Going further:
    - Use OpenStack Neutron to centralize & automate the whole network configuration
    - You definitely have to use VLANs? Consider encapsulating several VLANs in your own tenant network
    - Other (dirty) options: Docker-in-VMs nesting, multiple Docker instances on the same host, ebtables/iptables/Open vSwitch ACLs on a flat network