Lessons learned from building a custom networking solution to embracing the new Docker networking model in the Admiral container management platform - https://github.com/vmware/admiral
Lessons learned in reaching multi-host container networking
1. Docker networking
Lessons learned in reaching multi-host container networking
Tony Georgiev
Software Engineer, Cloud Automation Platform at VMware
2. History
• Building a container management solution a long, long time ago (last October) -
https://github.com/vmware/admiral
• Intelligent policy-based scheduler
• Deploying connected containers on a single host
• Deploying disconnected containers across multiple hosts
4. State of networking pre-Docker 1.9
• Single-host container-to-container communication with Docker links (legacy)
• Network mode: none, host, bridge (docker0)
• 3rd party drivers (Flannel, Weave, Calico)
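For illustration, the pre-1.9 options listed above look roughly like this on the CLI (container and image names are hypothetical):

```shell
# Legacy links: single host only, one-way name injection into the consumer
docker run -d --name db mysql:5.6
docker run -d --name web --link db:db my-web-app   # "db" resolvable inside "web"

# The three network modes of that era
docker run --net=none   busybox ip addr   # loopback only, no networking
docker run --net=host   busybox ip addr   # share the host's network stack
docker run --net=bridge busybox ip addr   # default docker0 bridge, NAT to the outside
```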
5. What we tried
• DNS
• DNS load balancing (AKA poor man’s load balancing)
• The standard HAProxy container as ambassador
• Custom built HAProxy based container as ambassador – agent
6. Our (old) networking solution
[Diagram: Hosts A, B and C on a shared network, each running an agent next to its services (Service A, Service B, DB). Inside the application containers, /etc/hosts points service names at the local agent:
172.17.0.1 service-b
172.17.0.1 db
while the agent's HAProxy binds the matching ports:
bind 172.17.0.1:80
…
bind 172.17.0.1:3306
…]
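The per-host setup in the diagram can be sketched as follows; the addresses, ports and backend entries here are hypothetical (in practice the orchestrator pushed this configuration to the agent):

```shell
# Inside each application container: service names resolve to the local agent
cat >> /etc/hosts <<'EOF'
172.17.0.1 service-b
172.17.0.1 db
EOF

# Inside the agent: HAProxy binds the well-known ports and forwards
# across the underlay network to wherever the service actually runs
cat > haproxy.cfg <<'EOF'
listen service-b
    bind 172.17.0.1:80
    mode tcp
    server b1 10.0.0.2:32768   # Host B, published port (hypothetical)

listen db
    bind 172.17.0.1:3306
    mode tcp
    server db1 10.0.0.3:32769  # Host C, published port (hypothetical)
EOF
```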
7. Agent specs
• Based on the Ambassador linking pattern
• Written in Go
• Docker image based on Alpine and PhotonOS
• Based on HAProxy with zero downtime reloading
• Configuration is pushed from the orchestrator
• Layer 4 routing (based on source IPs and ports)
• Load balancing
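"Zero downtime reloading" refers to HAProxy's soft-stop mechanism; a minimal sketch of the reload step the agent performs after receiving new configuration:

```shell
# -sf: start a new HAProxy process with the fresh config, then ask the old
# PIDs to finish their in-flight connections and exit (soft stop) instead of
# dropping them, so no connections are lost during the reload
haproxy -f /etc/haproxy/haproxy.cfg -sf $(pidof haproxy)
```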
8. Pros
• Unobtrusive, can be deployed on any host
• Does not require any 3rd party drivers or manual host setup
• Docker compose compatible (legacy links)
• Same definition that was used before for a single host
• Works the same on single as well on multi hosts
9. Cons
• Different from the tools Ops teams are comfortable with
• Requires services’ ports to be exposed
• One port per service
• An agent container that needs to be deployed and managed
• Not compatible with newer Docker Compose files that define networks, i.e. different from how people build apps
10. State of networking in Docker 1.9-1.12
• Acquired Socketplane.io
• Native multi-host networking (overlay)
• Control plane requires shared KV store (1.9+) or Swarm mode (1.12) (gossip based)
• User defined networks (user defined bridge, isolated from other bridges)
• Plugins & Drivers
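Wiring up the control plane in the 1.9–1.11 era meant pointing every daemon at the shared KV store; a sketch with hypothetical addresses (Consul as the store):

```shell
# On every host: point the daemon at the shared KV store
dockerd --cluster-store=consul://10.0.0.10:8500 \
        --cluster-advertise=eth0:2376 &

# On any host: create the overlay once; it becomes visible cluster-wide
docker network create --driver overlay my-net

# Containers started on different hosts now share the same network
docker run -d --name svc-a --net my-net my-service   # on Host A
docker run -d --name svc-b --net my-net my-service   # on Host B
```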
11. Docker networking under the hood
• DNS (inside the host)
• DNS based load balancer (1.11)
Graphic source:
https://sreeninet.wordpress.com/2016/07/29/service-discovery-and-load-balancing-internals-in-docker-1-12/
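The embedded DNS server (reachable at 127.0.0.11 inside every container on a user-defined network) and its round-robin load balancing via network aliases can be observed directly; container setup here is illustrative:

```shell
docker network create app-net

# Two replicas sharing one alias: the embedded DNS answers with both IPs
docker run -d --net app-net --net-alias web nginx
docker run -d --net app-net --net-alias web nginx

# Resolution is served by the host-local embedded DNS, not an external server
docker run --rm --net app-net busybox nslookup web
```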
12. Docker networking under the hood
• IPVS (IP Virtual Server) – Layer 4 load balancer
• Load balancing based on VIP & IPVS (on every container) (1.12 swarm mode)
Graphic source:
https://sreeninet.wordpress.com/2016/07/29/service-discovery-and-load-balancing-internals-in-docker-1-12/
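In 1.12 swarm mode each service gets a stable VIP, and IPVS in every container's namespace rewrites connections to that VIP into connections to individual task IPs; a sketch:

```shell
docker swarm init
docker network create --driver overlay app-net
docker service create --name web --network app-net --replicas 3 nginx

# The stable VIP clients connect to; IPVS spreads it across the 3 tasks
docker service inspect web \
  --format '{{range .Endpoint.VirtualIPs}}{{.Addr}} {{end}}'
```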
13. Docker networking under the hood
• VXLAN (Virtual Extensible LAN) – network virtualization tunneling protocol
• Every host is a VTEP (VXLAN Tunnel Endpoint)
• Secure dataplane (IPSec)
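Securing the dataplane is a single option at network-creation time:

```shell
# --opt encrypted enables IPSec (ESP) tunnels between the VTEPs,
# so VXLAN traffic crossing the underlay network is encrypted
docker network create --driver overlay --opt encrypted secure-net
```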
14. New networking solution
[Diagram: Hosts A, B and C, each acting as a VTEP, connected by VXLAN tunnels over the underlay network. Each host runs an agent next to its services (Service A, Service B, DB) plus the embedded DNS; the control plane relies on a shared KV store (etcd, ZooKeeper, Consul, Admiral).]
In this session we will show what we learned and the obstacles and solutions we went through in order to deliver unobtrusive and simple-to-use multi-host container networking in Admiral, the container management solution. We will talk about the state of Docker networking before user-defined networking, the implementation of our custom networking solution, its pros and cons, and wrap up with the current state of Docker networking and how we adopted it.
Lessons learned while implementing multi-host container networking in the container management solution - Admiral.
On the default docker0 bridge, all containers on the same host can talk to each other – not desired.
NATing/port mapping
User-defined bridge networks are isolated; containers on different bridge networks cannot talk to each other
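That isolation between user-defined bridges is easy to verify on a single host (container names are hypothetical):

```shell
docker network create net-a
docker network create net-b

docker run -d --name a1 --net net-a nginx
docker run -d --name b1 --net net-b nginx

# Same host, different bridges: the name does not resolve and the ping fails
docker run --rm --net net-a busybox ping -c1 b1
```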
Docker networking uses Linux kernel features
Overlay network is achieved using VXLAN tunnels
VIP & IPVS – IP Virtual Server - Layer 4 switching http://www.linuxvirtualserver.org/software/ipvs.html
Security can be enabled when creating the overlay network - https://en.wikipedia.org/wiki/Ipsec