Accelerating OpenStack Neutron
with Intel DPDK based Open vSwitch
Alexander Shalimov
http://arccn.ru/
ashalimov@arccn.ru
Accelerating Neutron with Intel DPDK from #vBrownBag session at OpenStack Summit Atlanta 2014.
1. Many OpenStack deployments use the Open vSwitch plugin for Neutron.
2. But its performance and scalability are not enough for production.
3. Intel DPDK vSwitch is a DPDK-optimized version of Open vSwitch developed by Intel and publicly available at 01.org. But it does not have enough functionality for Neutron, so we implemented the missing parts, including the GRE and ARP stacks and the Neutron plugin.
4. We achieved a 5x performance improvement for networking in OpenStack!

  • Many OpenStack deployments use the Open vSwitch plugin for Neutron. But at the same time there are many discussions about its performance and scalability not being enough for production. In this talk we address these performance issues and present our work on accelerating Open vSwitch in Neutron in both the VLAN and GRE operational modes. We have already achieved a 5x performance improvement!
  • Presents a logical API and a corresponding plug-in architecture that separates the description of network connectivity from its implementation
  • It is complex and expensive: complex, heavily layered universal code; expensive system calls and data copies just to move packets from/to the kernel; costly packet encapsulation in the kernel (per-packet malloc/free, refcounted pointers, data sharing, …)
  • Transcript of "Accelerating Neutron with Intel DPDK"

    1. Accelerating OpenStack Neutron with Intel DPDK-based Open vSwitch Alexander Shalimov http://arccn.ru/ ashalimov@arccn.ru @alex_shali @arccnnews
    2. OpenStack • Open source software for building private and public clouds. – Create VMs, attach storage to them, and connect them into a virtual topology
    3. OpenStack Neutron • Neutron = OpenStack networking • Its main responsibility is to provide a virtual network to the tenant: – Set up the L2 network – Set up the L3 network (DHCP, gateway, ACL) • Plugin architecture – Linux Bridge – Open vSwitch – Ryu OpenFlow controller – Nicira NVP
    4. Open vSwitch plugin • Based on Open vSwitch, a virtual software switch • The plugin supports three operational modes: – FLAT: virtual networks share one L2 domain – VLAN: each virtual network has its own VLAN tag – GRE: traffic goes through GRE tunnels and is separated by tunnel ID. OVS/GRE setup: +: very popular, works over any Ethernet fabric, 2^32 networks −: bad performance and scalability (between VMs located on different compute nodes)
    5. OVS/GRE plugin architecture [Diagram: on a compute node, four VMs attach to the br-int OVS bridge, which connects through a patch port to br-ex; br-ex carries a GRE port leading to the NIC and the physical layer. The path below the bridges is marked SLOW.] http://www.openstack.org/summit/openstack-summit-hong-kong-2013/session-videos/presentation/open-vswitch-and-its-usage-in-neutron
    6. OVS/GRE plugin bottlenecks [Diagram: traffic from br-ex goes through the patch port and GRE port into the Linux networking stack before reaching the NIC.] The standard networking stack is slow, yes; that is a well-known fact. But the traffic here is even slower than that. • Heavily layered • Allocation • …
    7. GRE port in OVS • OVS leaves the final steps of GRE packet encapsulation to the networking stack. [Diagram: the VM emits a frame (MACs | IPs | payload); the OVS GRE port adds a GRE header; the Linux networking stack then prepends the outer MACs and IPs.] This requires extra work for route and ARP lookups, and is thus slower!
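The encapsulation chain on this slide can be sketched in plain C: a GRE port prepends outer Ethernet + IPv4 + GRE headers (with the K flag set and the key carrying the tunnel ID) in front of an inner L2 frame. The struct layouts and constants below are an illustrative minimal sketch, not the actual OVS or DPDK vSwitch code; real code also computes the IP checksum or offloads it to the NIC.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>  /* htons/htonl */

/* Illustrative on-wire headers (packed to match wire layout). */
#pragma pack(push, 1)
struct eth_hdr  { uint8_t dst[6], src[6]; uint16_t ethertype; };
struct ipv4_hdr { uint8_t ver_ihl, tos; uint16_t len, id, frag;
                  uint8_t ttl, proto; uint16_t csum;
                  uint32_t src, dst; };
struct gre_hdr  { uint16_t flags_ver; uint16_t proto; uint32_t key; };
#pragma pack(pop)

#define ETHERTYPE_IPV4 0x0800
#define ETHERTYPE_TEB  0x6558  /* Transparent Ethernet Bridging: GRE carries an L2 frame */
#define IP_PROTO_GRE   47
#define GRE_KEY_FLAG   0x2000  /* K bit: key field present */

/* Prepend outer headers in front of an inner frame already placed at
 * buf + room (headroom grows downward, as in a DPDK mbuf). Returns the
 * total encapsulated packet length. */
static size_t gre_encap(uint8_t *buf, size_t room, size_t inner_len,
                        const uint8_t dmac[6], const uint8_t smac[6],
                        uint32_t sip, uint32_t dip, uint32_t tunnel_id)
{
    size_t hdr = sizeof(struct eth_hdr) + sizeof(struct ipv4_hdr)
               + sizeof(struct gre_hdr);
    assert(room >= hdr);
    uint8_t *p = buf + room - hdr;

    struct eth_hdr *eth = (struct eth_hdr *)p;   /* outer MACs */
    memcpy(eth->dst, dmac, 6);
    memcpy(eth->src, smac, 6);
    eth->ethertype = htons(ETHERTYPE_IPV4);

    struct ipv4_hdr *ip = (struct ipv4_hdr *)(eth + 1);  /* outer IPs */
    memset(ip, 0, sizeof *ip);
    ip->ver_ihl = 0x45;                          /* IPv4, 20-byte header */
    ip->len   = htons((uint16_t)(sizeof *ip + sizeof(struct gre_hdr) + inner_len));
    ip->ttl   = 64;
    ip->proto = IP_PROTO_GRE;
    ip->src   = htonl(sip);
    ip->dst   = htonl(dip);
    /* checksum left zero in this sketch */

    struct gre_hdr *gre = (struct gre_hdr *)(ip + 1);
    gre->flags_ver = htons(GRE_KEY_FLAG);
    gre->proto     = htons(ETHERTYPE_TEB);
    gre->key       = htonl(tunnel_id);           /* Neutron's segmentation ID */

    return hdr + inner_len;
}
```

The point of the slide is that in stock OVS only the GRE header step happens in the switch; the outer MAC/IP steps above are delegated to the kernel, which must do route and ARP lookups to fill them in.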
    8. Accelerating Neutron • Don't use the Linux networking stack; use special fast paths instead: – Intel DPDK, – netmap. • Open vSwitch should be integrated with such fast paths. • Our choice to start with was Intel DPDK vSwitch.
    9. Intel DPDK vSwitch • Basically, it is the OVS daemon connected to a DPDK-based datapath – https://github.com/01org/dpdk-ovs • But it required some work for OpenStack: – patch ports, – multiple datapaths/bridges, – a GRE stack • It still needs route and ARP lookups.
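One of the missing pieces listed above was patch ports. Conceptually a patch port is just a direct userspace handoff between two bridges (br-int and br-ex in Neutron's layout), with no kernel hop. The toy C sketch below illustrates only that idea; the names and the single-slot "queue" are hypothetical simplifications, not the dpdk-ovs implementation:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define PKT_MAX 2048

struct pkt { unsigned char data[PKT_MAX]; size_t len; };

/* A toy bridge holding at most one received packet. */
struct bridge {
    const char *name;
    struct pkt rx;
    int has_rx;
};

/* One end of a patch-port pair; its peer is the other bridge. */
struct patch_port { struct bridge *peer; };

/* Sending into a patch port hands the packet straight to the peer
 * bridge in userspace -- no system call, no kernel networking stack. */
static void patch_send(struct patch_port *p,
                       const unsigned char *data, size_t len)
{
    assert(len <= PKT_MAX);
    memcpy(p->peer->rx.data, data, len);
    p->peer->rx.len = len;
    p->peer->has_rx = 1;
}
```

In the real datapath the handoff is a flow-table action between bridges rather than a function call, but the key property is the same: the packet never leaves the fast path.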
    10. Our GRE port architecture • The GRE port should be fully responsible for packet encapsulation. • Tasks of the GRE port: – Determine the local IP • Both local and remote IPs must be specified; – Determine the MAC addresses • We maintain our own ARP table; – Determine which NIC to send from • The route table is encoded into the main flow table, and encapsulated packets are returned to the datapath and matched again against the flow table. • It still suits the OpenStack Neutron plugin
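The second task above, maintaining an ARP table inside the GRE port so the kernel is never consulted for MAC resolution, can be sketched as a small lookup structure. This is a hypothetical fixed-size cache for illustration, not the actual ARCCN code:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical fixed-size ARP cache: tunnel next-hop IPv4 -> MAC. */
#define ARP_CAP 64

struct arp_entry { uint32_t ip; uint8_t mac[6]; int valid; };
struct arp_table { struct arp_entry e[ARP_CAP]; };

/* Insert or update a mapping; returns 0 on success, -1 if full. */
static int arp_learn(struct arp_table *t, uint32_t ip, const uint8_t mac[6])
{
    int free_slot = -1;
    for (int i = 0; i < ARP_CAP; i++) {
        if (t->e[i].valid && t->e[i].ip == ip) {
            memcpy(t->e[i].mac, mac, 6);   /* refresh existing entry */
            return 0;
        }
        if (!t->e[i].valid && free_slot < 0)
            free_slot = i;
    }
    if (free_slot < 0)
        return -1;
    t->e[free_slot].ip = ip;
    memcpy(t->e[free_slot].mac, mac, 6);
    t->e[free_slot].valid = 1;
    return 0;
}

/* Resolve IP to MAC; returns 1 on hit (mac filled), 0 on miss. A miss
 * is where a real GRE port would emit its own ARP request. */
static int arp_lookup(const struct arp_table *t, uint32_t ip, uint8_t mac[6])
{
    for (int i = 0; i < ARP_CAP; i++)
        if (t->e[i].valid && t->e[i].ip == ip) {
            memcpy(mac, t->e[i].mac, 6);
            return 1;
        }
    return 0;
}
```

With the local IP, this table, and the route decision folded into the flow table (via recirculation), the port has everything needed to fill in the outer headers without the Linux networking stack.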
    11. Our GRE port architecture [Diagram comparing the two designs, each with VMs, GRE ports, NICs, and a flow table: in the original DPDK OVS, encapsulation detours through the Linux networking stack; in ours, it stays entirely inside DPDK.]
    12. Experimental evaluation

                                     Accelerated OVS   Original OVS
        VLAN physical-to-physical    5 Mpps            1.1 Mpps
        VLAN physical-to-vm          3.3 Mpps          0.44 Mpps
        GRE  physical-to-physical    1.6 Mpps          0.22 Mpps
        GRE  physical-to-vm          1.12 Mpps         0.11 Mpps

        * 10 Gb channel
        ** 64-byte UDP packets
        *** theoretical max is 15 Mpps
        **** Intel Xeon E3-1240 2.4 GHz/6
        ***** IVSHMEM VM connection (~0.6 Mpps without)

        That is a 5x performance improvement!
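The "5x" headline can be checked directly from the table's numbers. The snippet below hard-codes the four {accelerated, original} Mpps pairs from the slide and computes the smallest per-row speedup; the worst case (VLAN physical-to-physical) is about 4.5x, so "5x" is a rounded figure, while the GRE physical-to-vm case improves by more than 10x:

```c
#include <stddef.h>

/* Rows of the evaluation table: {accelerated, original} in Mpps. */
static const double RATES[4][2] = {
    { 5.0,  1.1  },  /* VLAN physical-to-physical */
    { 3.3,  0.44 },  /* VLAN physical-to-vm       */
    { 1.6,  0.22 },  /* GRE  physical-to-physical */
    { 1.12, 0.11 },  /* GRE  physical-to-vm       */
};

/* Smallest per-row speedup factor -- the conservative headline number. */
static double min_speedup(void)
{
    double best = RATES[0][0] / RATES[0][1];
    for (size_t i = 1; i < 4; i++) {
        double s = RATES[i][0] / RATES[i][1];
        if (s < best)
            best = s;
    }
    return best;
}
```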
    13. Questions? • The code is available on ARCCN's GitHub: – arccn.github.io http://arccn.ru/ ashalimov@arccn.ru @alex_shali @arccnnews