Network Virtualization & Software-defined Networking



Enterprise datacenter virtualization and cloud computing place new demands on the network. Traditionally, virtual workloads were attached to the physical network through virtual switches acting as bridges, using VLANs. As the demands on scale and automation grow, these models reach their limits.

At this OpenTuesday, Thomas Graf gave an insight into protocols and technologies such as OpenFlow, VXLAN, OpenStack Neutron, and Open vSwitch, which are used to implement next-generation automated networking concepts such as Software Defined Networking and Network Function Virtualization.



  1. SDN & NFV Introduction – Open Source Data Center Networking. Thomas Graf <> Red Hat, Inc. Spring 2014
  2. Agenda ● Problem Statement – Networking Challenges ● Path to Resolution – Software Defined Networking, Network Virtualization, NFV & Service Chaining ● What about Code? – OpenDaylight, Open vSwitch, OpenStack ● Look Ahead – Group Based Policy Abstraction
  3. Problem Statement: Networking Challenges
  4. She can't take much more of this, captain!
  5. Managing Forwarding Elements ● Vendor-specific management tools ● Little automation ● Slow and error-prone (Diagram: a developer files a service ticket with NetOps, who configure devices via CLI or vendor UI; turnaround 1 day – 2 weeks)
  6. Change in Traffic Patterns ● Increased demand for bisection (east-west) traffic ● Limited room for additional costs (Chart: east-west share of traffic growing from 5% to 80% by 2014*; * Gartner Synergy Report)
  7. Dynamic Workloads ● Virtualization (live migration) & cloud ● Respond in real time – services are started and stopped dynamically, and the network needs to adapt ● Bring Your Own Device (Diagram: VM live migration between hypervisors)
  8. Debugging – Debugging complex networks is hard
  9. Cost per Core
  10. Network Definition ● Collection of endpoints and forwarding elements ● Responsible for moving packets between hosts ● Source hosts identify the destination ● Forwarding elements direct traffic at each intersection
  11. Classic Forwarding Device (Diagram) – Data/forwarding plane: fabric, flow table, forwarding engine. Control plane: forwarding decision (learning, RIB lookup), routing protocols (OSPF, BGP, ...). Management interface: CLI, console, SNMP, ...
  12. Path to Resolution: Software Defined Networking
  13. Software Defined Networking – "In the Software Defined Networking architecture, the control and data planes are decoupled, network intelligence and state are logically centralized, and the underlying network infrastructure is abstracted from the applications." (Software-Defined Networking: The New Norm for Networks, ONF White Paper, April 13, 2012)
  14. SDN – Abstraction: A logically centralized controller programs the network based on a global view. (Diagram: apps on top of the controller; devices below reduced to data planes; legacy devices managed via SNMP, console, or vendor-specific protocols)
  15. "We've taken over the network" – James Hamilton, VP, Amazon Web Services, Nov 2013
  16. What Really Matters – from Closed Source to Open Source, from Network Engineer to Network Developer, from Vendor Led to Community Driven, from CLIs to APIs, from Network Appliances to NFV (Software)
  17. Open Source Defines SDN
  18. SDN Promises ● Highly automated & dynamically provisioned ● Enables innovation, experimentation & optimization ● Virtualizes the network & abstracts the hardware ● Makes the network programmable ● Enables overlays with control at the edges
  19. OpenFlow – An Open Standard behind SDN: match on bits in the packet header (L2–L4 plus metadata) and execute actions: ● Forward to port ● Drop ● Send to controller ● Mangle packet. "OpenFlow enables networks to evolve, by giving a remote controller the power to modify the behavior of network devices, through a well-defined 'forwarding instruction set'. The growing OpenFlow ecosystem now includes routers, switches, virtual switches, and access points from a range of vendors." (ONF website)
  20. Programmable Flow Table ● Extensive flow matching capabilities: – Meta – Tunnel ID, in port, QoS priority, skb mark – Layer 2 – MAC address, VLAN ID, Ethernet type – Layer 3 – IPv4/IPv6 fields, ARP – Layer 4 – TCP/UDP, ICMP, ND ● One or more actions: – Output to port (port range, flood, mirror) – Discard, resubmit to table x – Packet mangling (push/pop VLAN header, TOS, ...) – Send to controller, learn
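With Open vSwitch, such a match/action rule can be installed from the command line via ovs-ofctl. A minimal sketch, assuming a running switch; the bridge name "ovsbr", the subnet, and the port numbers are illustrative placeholders:

```shell
# Match TCP traffic from 10.0.0.0/24 destined to port 80,
# tag it into VLAN 10, and send it out port 2.
ovs-ofctl add-flow ovsbr "priority=100,tcp,nw_src=10.0.0.0/24,tp_dst=80,actions=mod_vlan_vid:10,output:2"

# List the installed flows to verify
ovs-ofctl dump-flows ovsbr
```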
  21. Is it production ready? Google claims 95% network utilization!
  22. Path to Resolution: Network Virtualization
  23. Network Virtualization – What do we need? 1. Virtualize the network topology on Layers 2–7 – run previous workloads without changes. 2. Decouple the logical from the physical topology – a virtual network should run anywhere. 3. Allow for isolated tenant networks – multiple customers/applications per network. 4. Provide APIs to manage the network abstraction – orchestrate & automate.
  24. Naive VLAN Mapping (Diagram: each virtual network is mapped to its own VLAN across the vSwitches and physical switches) – max 4096 VLANs
  25. VLAN Trunking (Diagram: all VLANs are trunked to every compute node's vSwitch) – max 4096 VLANs
  26. Network Overlay (Diagram: the vSwitches tunnel tenant traffic across the physical switches)
  27. Encapsulation – Stateless: VXLAN, NVGRE, Geneve, GUE, LISP, STT, ... Stateful: VPN, L2TP, SSH, ...
  28. VXLAN Encapsulation
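With Open vSwitch, a VXLAN overlay is built by adding a tunnel port per remote hypervisor. A sketch, assuming a bridge named "ovsbr"; the remote IP and the VNI (options:key) are illustrative placeholders:

```shell
# Create a VXLAN tunnel port towards a remote hypervisor.
# remote_ip and the VNI (options:key) are placeholders.
ovs-vsctl add-port ovsbr vxlan0 -- \
  set interface vxlan0 type=vxlan \
  options:remote_ip=192.0.2.2 options:key=1000
```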
  29. Network Abstraction (Diagram: a flat logical switch connecting all VMs, realized on top of the physical switch topology)
  30. NFV & Service Chaining
  31. NFV Problem Statement ● Non-commodity hardware ● Physical install per appliance per site ● Large development barriers ● Innovation constraints & limited competition
  32. NFV – What do we want? 1. Virtualization – run functions on scalable commodity hardware. 2. Abstraction – limited dependency on the physical layer. 3. Programmability – APIs to implement automation. 4. Orchestration – centralized orchestration, reduced maintenance.
  33. NFV
  34. Who is behind NFV? ● Originally operator driven – ETSI, the European Telecommunications Standards Institute ● Evolved into a generic concept ● Open to any company
  35. Service Chaining – Moving network functions into software means that building a service chain no longer requires acquiring hardware.
  36. Build your own Open Source Data Center
  37. OpenDaylight's mission is to facilitate a community-led, industry-supported open source platform, including code and architecture, to accelerate adoption of Software-Defined Networking and Network Functions Virtualization.
  38. Framework
  39. Open vSwitch is a virtual multilayer switch for hypervisors providing network connectivity to virtual machines. (Diagram: a controller (OpenDaylight) programs Open vSwitch instances via OpenFlow / OVSDB; the vSwitches connect VMs to the physical switches)
  40. Open vSwitch ● Apache License (user space), GPL (kernel) ● Extensive flow table programming capabilities ● OpenFlow 1.1+ (1.1, 1.2, 1.3, extensions) ● Designed to manage overlay networks ● VLAN, VXLAN, GRE, LISP, ... ● Remote management protocol (OVSDB) ● Monitoring capabilities
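To hand control of such a switch to an external controller, both the OpenFlow side and the OVSDB side can be pointed at remote endpoints. A sketch; the controller address and port are illustrative placeholders:

```shell
# Let an OpenFlow controller program the bridge "ovsbr"
# (controller address is a placeholder)
ovs-vsctl set-controller ovsbr tcp:192.0.2.1:6633

# Expose the local OVSDB for remote management, e.g. by OpenDaylight
ovs-vsctl set-manager ptcp:6640
```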
  41. L2 Segregation (VLAN) – VLAN isolation enforces VLAN membership of a VM without the knowledge of the guest itself: the vSwitch adds the VLAN header on traffic from the VM and removes it on traffic to the VM. # ovs-vsctl add-port ovsbr port2 tag=10
  42. Overlay Networks – Tunneling provides isolation and reduces dependencies on the physical network. (Diagram: two compute nodes, each running Open vSwitch with VMs in VNET 1 and VNET 2, connected by a tunnel across the physical network; a controller programs both via OpenFlow / OVSDB)
  43. Visibility – supports industry-standard technology to monitor the use of a network: ● NetFlow ● Port mirroring ● SPAN ● RSPAN ● ERSPAN
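Such monitoring is enabled per bridge through OVSDB records. For example, exporting NetFlow records might look like the following sketch; the collector address is an illustrative placeholder:

```shell
# Export NetFlow records from bridge "ovsbr" to a collector.
# The collector address is a placeholder.
ovs-vsctl -- set bridge ovsbr netflow=@nf -- \
  --id=@nf create netflow targets=\"192.0.2.10:2055\"
```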
  44. Feature: Quality of Service ● Uses the existing traffic control layer ● Policer (ingress rate limiter) ● HTB, HFSC (egress traffic classes) ● Controller (OpenFlow) can select the traffic class. (Diagram: traffic from VM2 on port2 limited to 1 Mbit/s) # ovs-vsctl set Interface port2 ingress_policing_rate=1000
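Egress shaping with the HTB classes mentioned above is configured through QoS and queue records. A minimal sketch; the port name and rates are illustrative:

```shell
# Cap egress traffic on port1 to 10 Mbit/s using a linux-htb QoS
# with a single queue; all names and rates are placeholders.
ovs-vsctl set port port1 qos=@newqos -- \
  --id=@newqos create qos type=linux-htb \
    other-config:max-rate=10000000 queues:0=@q0 -- \
  --id=@q0 create queue other-config:max-rate=10000000
```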
  45. OpenStack's mission: to produce the ubiquitous open source cloud computing platform that will meet the needs of public and private cloud providers regardless of size, by being simple to implement and massively scalable.
  46. OpenStack Architecture
  47. Overlay Networks with OpenStack Neutron and Open vSwitch (Diagram: compute nodes with br-int and br-tun bridges connected over VXLAN tunnels to a network node running DHCP and L3 agents behind br-ex; local VLAN IDs are mapped to VXLAN VNIs, e.g. VID 11 ↔ VNI 1, VID 49 ↔ VNI 13)
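From the operator's side, this overlay is driven through the Neutron API rather than by touching the bridges directly. With the Icehouse-era CLI, creating a tenant network might look like this sketch; the names and CIDR are illustrative placeholders:

```shell
# Create a tenant network and a subnet; Neutron allocates the
# segmentation ID and programs the OVS bridges through its agent.
neutron net-create net1
neutron subnet-create net1 10.0.0.0/24 --name subnet1
```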
  48. Group Based Policy Abstraction
  49. Network APIs are there. Now what? Applications do not care about subnets, ports, or virtual networks.
  50. Application Centric APIs – Allow application administrators to express networking requirements using group and policy abstractions. Leave the technical implementation to the network.
  51. Terminology – Connectivity Group: collection of endpoints (MAC/IP on a vNIC) with a common policy. Policy: set of Policy Rule objects describing the policy; policies may be applied between groups or, alternatively, to a single group using provide/consume relations. Policy Rule: a specific <classifier, action> pair that is part of a policy – Classifier: L4 ports + protocol – Actions: permit/deny, QoS action, service chain redirection.
  52. Policy as a Service ● A group provides a service as defined by a policy ● The service is mostly unaware of the consumer
  53. Policy between Groups ● Policy defined between a pair of groups ● A policy may apply to multiple relationships ● The producer is aware of the consumer
  54. Example: Policy between Groups
  55. Questions
  56. References – OpenDaylight – Open vSwitch – OpenFlow – Open Networking Foundation – Inter-Datacenter WAN with centralized TE using SDN and OpenFlow [Google] – Red Hat OpenStack – OpenStack –
  57. Backup
  58. Open vSwitch Deep Dive
  59. Flow Table – The controller programs the flow table in the slow path (user space), which feeds the flow table in the fast path (kernel) upon request. (Diagram: VMs attached via tap devices to Open vSwitch; an OpenFlow controller; a physical interface)
  60. Architecture (Diagram: in user space, ovs-vswitchd (OpenFlow, sFlow) and ovsdb; in the kernel, the datapath with its flow table; packets arriving from the device miss the fast path, are passed to user space via a Netlink upcall, and are reinjected after processing; management tools: ovs-vsctl, ovs-dpctl, ovsdb-tool, ovs-ofctl)
  61. Flow Table Rules ● Flow matching capabilities ● Meta – Tunnel ID, in port, QoS priority, skb mark ● Layer 2 – MAC address, VLAN ID, Ethernet type ● Layer 3 – IPv4/IPv6 fields, ARP ● Layer 4 – TCP/UDP, ICMP, ND ● Possible chain of actions ● Output to port (port range, flood, mirror) ● Discard, resubmit to table x ● Packet mangling (push/pop VLAN header, TTL, NAT, ...) ● Send to controller, learn
  62. Modifying the Flow Table – Strip the VLAN header of all packets from MAC address 11:22:33:44:55:66 and forward them to port 1: # ovs-ofctl add-flow ovsbr dl_src=11:22:33:44:55:66,actions=strip_vlan,output:1 # ovs-ofctl dump-flows ovsbr [...] cookie=0x0, duration=36.24s, table=0, n_packets=0, n_bytes=0, idle_age=36, dl_src=11:22:33:44:55:66 actions=strip_vlan,output:1
  63. Megaflows ● Fast path made capable of handling wildcard flows ● Transparent optimization – a fully specified microflow such as in_port=3, src_mac=02:80:37:ec:02:00, dst_mac=0a:e0:5a:43:b6:a1, vlan=10, eth_type=0x0800, ip_src=, ip_dst=, tcp_src=80, tcp_dst=32990, ... collapses to the megaflow in_port=3, src_mac=02:80:37:ec:02:00, dst_mac=0a:e0:5a:43:b6:a1, vlan=10
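The flow entries that actually reside in the kernel fast path, including the wildcarded megaflows, can be inspected directly on the host:

```shell
# Show the configured datapaths and their ports
ovs-dpctl show

# Dump the datapath (fast-path) flow entries installed by ovs-vswitchd
ovs-dpctl dump-flows
```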
  64. Multi-Threading ● Multiqueue NICs spread load across all cores ● Maps the kernel NIC queue => CPU core mapping to user space ● Allows the slow path to scale across cores (Diagram: NIC queues mapped to CPU cores, each served by an OVS thread)
  65. Examples
  66. Defining a Switch & Ports – Creating a new virtual switch "ovsbr" with port "port1": # service openvswitch start # ovs-vsctl add-br ovsbr # ovs-vsctl add-port ovsbr port1 # ovs-vsctl show 7c68e54f-1618-41f4-bd16-2fd781488266 Bridge ovsbr Port ovsbr Interface ovsbr type: internal Port "port1" Interface "port1" ovs_version: "1.7.3"
  67. Using Red Hat ifcfg- – /etc/sysconfig/network-scripts/ifcfg-ovsbr: TYPE=OVSBridge DEVICE=ovsbr ONBOOT=yes – /etc/sysconfig/network-scripts/ifcfg-port1: TYPE=OVSIntPort OVS_BRIDGE=ovsbr DEVICE=port1 ONBOOT=yes # ifup port1
  68. ... with libvirt – virsh# edit <domain>: <interface type='bridge'> <source bridge='ovsbr'/> <virtualport type='openvswitch' /> </interface> Start the VM and it just works!
  69. VLAN Isolation – # ovs-vsctl add-port ovsbr port2 tag=10 (Diagram: VM2 on port2 placed in VLAN 10)
  70. Traffic Shaping – Limit all traffic received from the VM on port port2 to 1 Mbit/s: # ovs-vsctl set Interface port2 ingress_policing_rate=1000