Accelerating Neutron with Intel DPDK, from a #vBrownBag session at OpenStack Summit Atlanta 2014.
1. Many OpenStack deployments use Open vSwitch plugin for Neutron.
2. But its performance and scalability are not enough for production.
3. Intel DPDK vSwitch is a DPDK-optimized version of Open vSwitch developed by Intel and publicly available at 01.org. But it doesn't have enough functionality for Neutron. We have implemented the needed parts, including GRE and ARP stacks, and a Neutron plugin.
4. We got a 5x networking performance improvement in OpenStack!
Many OpenStack deployments use the Open vSwitch plugin for Neutron. But at the same time there are a lot of discussions that its performance and scalability are not enough for production. In this talk we address these performance issues and present our work on accelerating Open vSwitch in Neutron in both VLAN and GRE operational modes. We have already achieved a 5x performance improvement!
Presents a logical API and a corresponding plug-in architecture that separates the description of network connectivity from its implementation
It is complex and expensive:
• Complex, heavily layered universal code.
• Expensive system calls and data copies just to move packets from/to the kernel.
• Costly packet encapsulation in the kernel (per-packet malloc/free, refcounter pointers, data sharing, …)
Accelerating Neutron with Intel DPDK
Accelerating OpenStack Neutron
with Intel DPDK based Open vSwitch
• Open source software for building private and public clouds
– Create VMs, attach storage to them, and connect
them into virtual topology
• Neutron = OpenStack networking
• Main responsibility is to provide virtual network
to the tenant:
– Setup L2 network
– Setup L3 network (DHCP, gateway, ACL)
• Plugin architecture
– Linux Bridge
– Open vSwitch
– Ryu OpenFlow controller
– Nicira NVP
• Based on Open vSwitch, virtual software switch
• The plugin supports three operational modes:
– FLAT: virtual networks share one L2 domain
– VLAN: each virtual network has its own VLAN tag
– GRE: traffic goes through GRE tunnels and is separated by tunnel keys
+: Very popular, any Ethernet fabric, 2^32 networks
-: Bad performance and scalability (between VMs located on different compute nodes)
Open vSwitch plugin
OVS/GRE plugin architecture
[Diagram: VMs attached to OVS bridges interconnected via patch ports]
OVS/GRE plugin bottlenecks
The standard networking stack is slow; that is a well-known fact. But tunneled traffic through it is even slower.
• Heavily layered
GRE port in OVS
• OVS leaves the finishing of GRE packet encapsulation to the Linux networking stack:
[Diagram: GRE encapsulation path]
  Inner frame:        | MACs | IPs | Payload |
  After OVS GRE port: | GRE | MACs | IPs | Payload |
  After Linux stack:  | MACs | IPs | GRE | MACs | IPs | Payload |
This requires extra work for route and ARP lookups, and is thus slower!
• Don’t use Linux Networking Stack - use special
fast paths instead:
– Intel DPDK,
• Open vSwitch should be integrated with such a fast path.
• Our choice as a starting point was Intel DPDK vSwitch.
Intel DPDK vSwitch
• Basically, it is the OVS daemon connected to a DPDK-based datapath.
• But it required some work for OpenStack:
– patch ports,
– multiple datapaths/bridges,
– GRE stack
• Still needs route and ARP lookups.
Our GRE port architecture
• The GRE port should be fully responsible for packet encapsulation.
• Tasks of GRE port:
– Determine local IP.
• Must specify both local and remote IPs;
– Determine MAC addresses
• Maintain our own ARP table;
– Determine which NIC to send on
• Route table is encoded into main flow table and
encapsulated packets are returned back to the datapath and
matched again against the flow table.
• Still suits the OpenStack Neutron plugin
Our GRE port architecture
[Diagram: flow tables in the OVS DPDK datapath; encapsulated packets are resubmitted to the flow table]
                             Accelerated OVS   Original OVS
VLAN  physical-to-physical   5 Mpps            1.1 Mpps
VLAN  physical-to-vm         3.3 Mpps          0.44 Mpps
GRE   physical-to-physical   1.6 Mpps          0.22 Mpps
GRE   physical-to-vm         1.12 Mpps         0.11 Mpps
* 10 Gb channel
** 64 bytes UDP packets
*** theoretical max is 15 Mpps
**** Intel Xeon E3-1240 2.4GHz/6
***** IVSHMEM VM connection (~0.6Mpps without)
That is a 5x performance improvement!
• The code is available on the ARCCN GitHub: