
Container Network Interface: Network Plugins for Kubernetes and beyond

With the rise of modern containers come new problems to solve, especially in networking. Numerous container SDN solutions have recently entered the market, each best suited to a particular environment. Combined with the multiple container runtimes and orchestrators available today, this creates a need for a common layer to allow interoperability between them and the network solutions.

As different environments demand different networking solutions, multiple vendors and viewpoints look to a specification to help guide interoperability. Container Network Interface (CNI) is a specification started by CoreOS, with input from the wider open source community, that aims to make network plugins interoperable across container execution engines. It aims to be as common and vendor-neutral as possible, supporting a wide variety of networking options, from MACVLAN to modern SDNs such as Weave and flannel.

CNI is growing in popularity. It got its start as the network plugin layer for rkt, a container runtime from CoreOS. Today rkt ships with multiple CNI plugins, allowing users to take advantage of virtual switching, MACVLAN and IPVLAN, as well as multiple IP management strategies, including DHCP. CNI is gaining even wider adoption now that Kubernetes has added support for it. Kubernetes accelerates development cycles while simplifying operations, and with support for CNI it takes the next step toward common ground for networking. To keep interoperability moving forward, Kubernetes users can come to this session to learn the basics of CNI.

This talk will cover the CNI interface, including an example of how to build a simple plugin. It will also show Kubernetes users how CNI can be used to solve their networking challenges and how they can get involved.

KubeCon schedule link: http://sched.co/4VAo


  1. Container Network Interface: Network plugins for Kubernetes and beyond
     Eugene Yakubovich @eyakubovich
  2. Kubernetes networking model
     - IP per pod
     - Pods in the cluster can be addressed by their IP
  3. How to network containers together?
     - Cloud provider integration
       ○ AWS
       ○ GCE
  4. How to network containers together?
     linux-bridge, macvlan, ipvlan, Open vSwitch, Weave, Project Calico, flannel
  5. How to allocate IP addresses?
     - From a fixed block on a host
     - DHCP
     - IPAM system backed by SQL database
     - SDN assigned: e.g. Weave
  6. How do you mix and match?
     (macvlan | ipvlan) + (DHCP | host-local)
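For instance, mixing the macvlan interface plugin with DHCP address management is purely a matter of configuration. A minimal sketch of such a network config, assuming the reference macvlan plugin with its "master" parameter (the file name here is illustrative):

     $ cat /etc/cni/net.d/10-macvlan-dhcp.conf
     {
         "name": "macvlan-dhcp",
         "type": "macvlan",
         "master": "eth0",
         "ipam": {
             "type": "dhcp"
         }
     }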
  7. Order matters!
     - macvlan + DHCP
       ○ Create macvlan device
       ○ Use the device to DHCP
       ○ Configure device with allocated IP
     - Routed + IPAM
       ○ Ask IPAM for an IP
       ○ Create veth and routes on host and/or fabric
       ○ Configure device with allocated IP
  8. [Diagram: the Container Networking Interface (CNI) sits between the container runtime (e.g. k8s) and plugins such as veth, macvlan, ipvlan and OVS]
  9. CNI
     - Container can join multiple networks
     - Network described by JSON config
     - Plugin supports two commands
       ○ Add container to the network
       ○ Remove container from the network
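Because a plugin is just an executable that reads the network config on stdin and reacts to a few CNI_* environment variables, a do-nothing plugin can be sketched in plain shell. This is only an illustration of the calling convention; the hard-coded result below is made up:

     #!/bin/sh
     # Minimal CNI plugin sketch: config JSON arrives on stdin,
     # the action to take arrives in $CNI_COMMAND.
     netconf=$(cat)   # a real plugin would parse this JSON

     case "$CNI_COMMAND" in
       ADD)
         # set up networking for $CNI_CONTAINERID inside $CNI_NETNS,
         # naming the container-side interface $CNI_IFNAME,
         # then print the resulting IP configuration as JSON
         echo '{ "ip4": { "ip": "10.10.5.9/16", "gateway": "10.10.0.1" } }'
         ;;
       DEL)
         # tear down whatever ADD created
         ;;
       *)
         echo "unknown CNI_COMMAND: $CNI_COMMAND" >&2
         exit 1
         ;;
     esac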
  10. User configures a network
      $ cat /etc/cni/net.d/10-mynet.conf
      {
          "name": "mynet",
          "type": "bridge",
          "ipam": {
              "type": "host-local",
              "subnet": "10.10.0.0/16"
          }
      }
  11. CNI: Step 1
      Container runtime creates network namespace and gives it a named handle
      $ cd /var/lib/cni
      $ touch myns
      $ unshare -n mount --bind /proc/self/ns/net myns
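To sanity-check the handle, the new namespace can be entered by path; at this point it should contain nothing but a loopback device:

      $ nsenter --net=/var/lib/cni/myns ip link   # lists only "lo" so far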
  12. CNI: Step 2
      Container runtime invokes the CNI plugin
      $ export CNI_COMMAND=ADD
      $ export CNI_NETNS=/var/lib/cni/myns
      $ export CNI_CONTAINERID=5248e9f8-3c91-11e5-...
      $ export CNI_IFNAME=eth0
      $ $CNI_PATH/bridge </etc/cni/net.d/10-mynet.conf
  13. CNI: Step 3
      Inside the bridge plugin (1):
      $ brctl addbr mynet
      $ ip link add veth123 type veth peer name $CNI_IFNAME
      $ brctl addif mynet veth123
      $ ip link set $CNI_IFNAME netns $CNI_NETNS
      $ ip link set veth123 up
  14. CNI: Step 3
      Inside the bridge plugin (2):
      $ IPAM_PLUGIN=host-local   # from network conf
      $ $CNI_PATH/$IPAM_PLUGIN </etc/cni/net.d/10-mynet.conf
      {
          "ip4": {
              "ip": "10.10.5.9/16",
              "gateway": "10.10.0.1"
          }
      }
  15. CNI: Step 3
      Inside the bridge plugin (3):
      # switch to container namespace
      $ ip addr add 10.10.5.9/16 dev $CNI_IFNAME
      # Finally, print IPAM result JSON to stdout
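The JSON printed on stdout is what the runtime reads back as the result of ADD. When the container is removed, the runtime runs the same plugin again with CNI_COMMAND=DEL and the same config to undo the setup (a sketch mirroring Step 2):

      $ export CNI_COMMAND=DEL
      $ $CNI_PATH/bridge </etc/cni/net.d/10-mynet.conf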
  16. Kubernetes + CNI + Docker
      - Kubernetes has its own network plugins
      - CNI "driver" is a k8s network plugin
      - Future: make CNI the native plugin system
  17. Kubernetes + CNI + Docker
      - k8s starts "pause" container to create netns
      - k8s invokes its plugin (CNI driver)
      - k8s CNI driver executes a CNI plugin
      - CNI plugin joins "pause" container to network
      - Pod containers use "pause" container netns
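The last step is plain Docker namespace sharing: every other container in the pod is started inside the pause container's network namespace, conceptually something like the following (image and container names are illustrative):

      $ docker run -d --name pause pause-image
      $ docker run -d --net=container:pause my-app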
  18. Kubernetes + rkt
      - rkt natively supports CNI
      - Kubernetes delegates to rkt to invoke CNI plugins
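Since rkt speaks CNI natively, a pod can be attached to a named network straight from the command line. A sketch (the image name is hypothetical, and the flag has been spelled --private-net in early rkt releases and --net in later ones):

      $ rkt run --net=mynet example.com/myapp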
  19. Get involved!
      https://github.com/appc/cni
  20. Want to work on upstream Kubernetes or distributed systems infrastructure?
      CoreOS San Francisco is hiring. Work at CoreOS: coreos.com/careers
