© 2016 NETRONOME SYSTEMS, INC.
Linley Data Center Conference
February 9, 2016
Open vSwitch
Implementation Options
Nick Tausanovitch
VP of Solutions Architecture
The Modern Data Center Landscape
Modern Public and Private Cloud Data Center applications are driving:
▶  Rapid sprawl of virtual machines (VMs) and containers to scale data centers
▶  The need for SDN control of Network Virtualization overlays
▶  SDN-controlled per-VM and per-tenant policies for zero-trust security
▶  Continuous software changes to accommodate networking feature additions
Enter the Era of Server-Based Networking…
The New Data Center Infrastructure Conundrum
…But server-based networking using software-based virtual switches creates a
new set of challenges
▶  Low throughput creates a data bottleneck that starves applications, limiting performance
▶  Many CPU cores are needed, resulting in a CPU tax that dilutes effective compute resources
▶  The added latency of software switches precludes use in many real-time applications
Introducing the Agilio™ Server Networking Platform
10/40GbE Production Solutions Now
Agilio Server-Based Networking Software
Agilio-CX Intelligent Server Adapters
▶  Agilio accelerates the virtual switch data path
▶  Agilio offloads virtual switch processing from servers
▶  Agilio functionality is flexible and extensible
NFP-4000 Chip (Used on CX-4000 Adapters)
Network Flow Processor used on CX-4000 Intelligent Server Adapters
▶  Highly Parallel Multithreaded Processing Architecture for high throughput
▶  H/W Accelerators to further maximize efficiency (throughput/watt)
▶  Purpose-built Flow Processing Cores maximize flexibility
Comprehensive feature set with Agilio Software
▶  RX/TX with SR-IOV and stateless offloads
▶  Extensive, flexible tunneling support (e.g. VXLAN, MPLS, GRE)
▶  Flexible Match/Action with transparent offload of OVS
▶  High-scale, very granular security policies
External DDR3 support for Millions of Flows
Easy function extensibility with P4 and C Sandbox
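The tunnel encapsulations listed above all hinge on the data path building and parsing small fixed-format headers at line rate. As an illustration (not Netronome code), a minimal Python sketch of the 8-byte VXLAN header defined in RFC 7348:

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348).

    The flags byte sets the I bit (0x08) to mark a valid VNI;
    the 24-bit VNI occupies the upper bytes of the second word.
    """
    flags = 0x08 << 24          # I flag in the first 32-bit word
    return struct.pack("!II", flags, vni << 8)

def parse_vni(hdr: bytes) -> int:
    """Extract the 24-bit VNI from a VXLAN header."""
    _, word2 = struct.unpack("!II", hdr)
    return word2 >> 8
```

A tunnel endpoint repeats this kind of pack/unpack for every packet, which is why fixed-function encapsulation accelerators pay off at 40GbE rates.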
Agilio™ Open vSwitch Implementation
Agilio Software Architecture for Open vSwitch
Software Models Provide Maximum Flexibility
P4 and C Sandbox Data Path Programming
Flow Configuration and API Programming with Agilio Software
Agilio Data Path Extensibility with C Sandbox
•  Agilio Software with simple API-level programming provides a market-ready solution with rich features and a robust roadmap
•  Agilio software can be extended incrementally with custom features via C Sandbox programming
•  P4 gives fuller control of data plane functionality while abstracting the underlying hardware
Open Source P4 and C Programming Tools available at http://www.open-NFP.org
Open vSwitch Benchmarking Scenarios
OVS Benchmark Testing Overview
Data collected for key endpoint NIC use cases
▶  OVS offload with Network-to-VM, VM-to-Network, and VM-to-VM data flows
▶  OVS-based L2 and L3 forwarding and actions
▶  VXLAN tunnel endpoint processing
▶  Standard netdev and DPDK poll-mode drivers
Collected data across a range of parameters
▶  Packet size
▶  Number of Wildcard Rules
▶  Number of Micro-Flows
Key metrics collected and analyzed
▶  Packets-per-second throughput
▶  CPU core usage
▶  Latency
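Packets-per-second ceilings for each packet size follow from Ethernet framing overhead alone. This standard back-of-the-envelope formula (not taken from the benchmark itself) shows why small packets dominate these tests:

```python
def line_rate_mpps(link_gbps: float, packet_size: int) -> float:
    """Theoretical maximum packet rate on an Ethernet link, in Mpps.

    Every frame carries 20 bytes of wire overhead (7B preamble +
    1B start-of-frame delimiter + 12B inter-frame gap) on top of
    its own length.
    """
    wire_bits = (packet_size + 20) * 8
    return link_gbps * 1e9 / wire_bits / 1e6

# A 40GbE link tops out near 59.5 Mpps at 64-byte packets,
# but only about 3.25 Mpps at 1518-byte packets.
```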
Benchmark Testing Setup with Netronome Trafgen*
[Diagram: a Test Gen Server runs two Trafgen sources over DPDK PMDs behind a Server Adapter, connected by a 1x40G link to a DUT Server running two Trafgen sinks over DPDK PMDs behind the DUT Server Adapter]
*The Netronome Trafgen source/sink user-space application is being open sourced and will be made available on www.open-nfp.org
OVS Benchmark Throughput Results
[Charts: "OVS L2 Forward to VMs" and "OVS VXLAN + L2 to VMs", plotting million packets per second vs. packet size]
40GbE Network-to-VM traffic with 64,000 micro-flows matching against 1,000 wildcard rules
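The "64,000 micro-flows against 1,000 wildcard rules" workload exercises masked flow matching of the kind OVS performs. A toy Python sketch of the idea (field names and the dict layout are illustrative, not the actual OVS data structures):

```python
# A wildcard rule matches when (packet_field & mask) == value for
# every field the rule specifies; unspecified fields are wildcarded.

def matches(rule, packet):
    """True if every masked field of the rule equals the packet's field."""
    return all(packet[f] & mask == value for f, (value, mask) in rule.items())

def classify(rules, packet):
    """Return the first rule a packet hits; rules are in priority order."""
    for rule in rules:
        if matches(rule, packet):
            return rule
    return None

# Example: a 10.0.0.0/8 rule followed by a catch-all.
rules = [
    {"dst_ip": (0x0A000000, 0xFF000000)},   # 10.0.0.0/8
    {"dst_ip": (0x00000000, 0x00000000)},   # match everything
]
```

Each distinct micro-flow that hits a wildcard rule creates per-flow state, which is why the benchmark sweeps both the rule count and the micro-flow count independently.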
Server CPU Core Allocations
Software (Kernel and User) OVS:
•  12 CPU cores dedicated to OVS
•  12 CPU cores for the application
Agilio OVS:
•  1 CPU core dedicated to OVS
•  23 CPU cores for the application
Per Server CPU Core Efficiency
[Chart: throughput with a single server CPU core, in million packets per second]
•  50X Efficiency Gain vs. Kernel OVS
•  20X Efficiency Gain vs. User OVS
Summary and Conclusion
[Chart: data delivered to applications vs. x86 cores available to run applications, comparing Agilio OVS, DPDK OVS, and Kernel OVS]
•  Eliminate vSwitch data bottlenecks
•  Reduce the OVS server CPU tax
•  Improve packet latencies
•  Maintain software innovation velocity
Agilio Intelligent Server Adapters enable the next stage of the server-based networking revolution…
Efficiency Drives Massive TCO Savings
[Diagram: two racks, each with 20 servers with 40GbE behind a TOR switch]

Traditional NIC with Software OVS:
•  Rack throughput: 120 Mpps
•  VNFs per rack: 240

Agilio-CX with Accelerated OVS:
•  Rack throughput: 440 Mpps
•  VNFs per rack: 880

Per-rack capacity: 3.7x more throughput, 3.7x more VNFs
Data center TCO*: 74% lower CAPEX, 75% lower OPEX
*Based on Data Output from Netronome ROI Calculator available online at http://netronome.com/products/ovs/roi-calculator
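The 3.7x per-rack gain translates directly into fewer racks for a given throughput target. A small sketch using only the per-rack numbers from this slide (the 4,400 Mpps deployment target is a made-up example, and linear scaling with rack count is assumed):

```python
import math

# Per-rack figures from the slide: 20 servers with 40GbE each
TRADITIONAL_MPPS = 120   # software OVS rack throughput
AGILIO_MPPS = 440        # accelerated OVS rack throughput

def racks_needed(target_mpps: float, rack_mpps: float) -> int:
    """Racks required to reach a target aggregate throughput,
    assuming capacity scales linearly with rack count."""
    return math.ceil(target_mpps / rack_mpps)

# For a hypothetical 4,400 Mpps deployment:
# software OVS needs 37 racks, accelerated OVS needs 10.
```

Fewer racks for the same workload is the source of the CAPEX and OPEX reductions quoted above; the exact percentages come from the ROI calculator referenced in the footnote.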
Thank You
