Pushing Packets: How Do the ML2 Mechanism Drivers Stack Up?
James Denton – Network Architect (Twitter: @jimmdenton)
Jonathan Almaleh – Network Architect (Twitter: @ckent99999)
Open Infrastructure Summit, Denver, Colorado, 2019
What is ML2
Mechanism Drivers
Comparisons
Summary
Agenda
In the beginning, there was the monolithic
plugin.
Using a monolithic plugin, operators were
limited to a single virtual networking
technology within a given cloud.
Writing new plugins was difficult, and often
resulted in duplicated code and effort.
What is ML2
Around the Havana release, the Modular
Layer 2 (ML2) plugin was introduced.
By using the ML2 plugin and related
drivers, operators were no longer limited to
a single virtual networking technology
within a given cloud.
With the modular approach, developers
could focus on developing drivers for their
respective L2 mechanisms without having
to reimplement other components.
What is ML2
Type drivers:
Define how an OpenStack network is
realized. Examples are VLAN, VXLAN,
GENEVE, Flat, etc.
Mechanism drivers:
Are responsible for implementing the
network. Examples include Open
vSwitch, Linux Bridge, SR-IOV, etc.
What is ML2
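To make the split between type and mechanism drivers concrete, here is a minimal sketch of the relevant sections of Neutron's ml2_conf.ini. The option names are standard Neutron settings; the specific driver and physnet values are illustrative assumptions, not the configuration used in these tests.

    [ml2]
    # type drivers define how a network is realized
    type_drivers = flat,vlan,vxlan
    tenant_network_types = vxlan
    # mechanism drivers define what implements the network on the host
    mechanism_drivers = linuxbridge

    [ml2_type_vlan]
    # physnet label(s) referenced later by the provider network mappings
    network_vlan_ranges = physnet1

Swapping the mechanism driver (and its host agent) changes how the network is plumbed on the compute node without changing how tenants consume it.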
What is ML2
Mechanism Drivers
Comparisons
Summary
Mechanism Drivers
Test Environment
iPerf Server (VM) / DUT
CPU 8-core Intel Xeon E5-2678 v3 @ 2.5 GHz
Memory 16 GB
Operating System Ubuntu 18.04.2 LTS 4.15.0-47-generic
Mechanism Driver Network Driver (in VM)
Linux Bridge vhost_net
Open vSwitch +/- DPDK vhost_net
SR-IOV (Direct) mlx5_core
SR-IOV (Indirect) vhost_net
Open vSwitch + ASAP mlx5_core
VPP vhost_net
Compute Node Specs
CPU 2x 12-core Intel Xeon E5-2678 v3 @ 2.5 GHz
Memory 64 GB
NIC Mellanox ConnectX-4 Lx EN (10/25)
Mellanox ConnectX-5 Ex (40/100)
Operating System Ubuntu 18.04.2 LTS 4.15.0-47-generic
Kernel Parameters GRUB_CMDLINE_LINUX="intel_iommu=on iommu=pt"
OpenStack Release Stein
iPerf Client / T-Rex Specs
CPU 2x 8-core Intel Xeon E5-2667 v2 @ 3.3 GHz
Memory 128 GB
NIC Mellanox ConnectX-4 Lx EN (10/25)
Mellanox ConnectX-5 Ex (40/100)
Operating System Ubuntu 18.04.2 LTS 4.15.0-47-generic
Kernel Parameters GRUB_CMDLINE_LINUX="intel_iommu=on iommu=pt"
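The throughput figures on the following slides were gathered with iperf using four parallel streams (the charts note "iperf -P 4"). A minimal sketch of that style of test, assuming the instance address as a placeholder and a 60-second run; the exact duration and iperf version used are not stated in the deck.

    # on the instance (DUT)
    iperf -s

    # on the standalone client node
    iperf -c <instance-ip> -P 4 -t 60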
Mechanism Drivers
Linux Bridge
Open vSwitch
Open vSwitch + DPDK
SR-IOV (Direct Mode)
SR-IOV (Indirect Mode)
Open vSwitch + ASAP2
Vector Packet Processing (VPP)
Linux Bridge
The Linux Bridge driver implements a single
Linux bridge per network on a given compute
node.
A bridge can be connected to a physical interface, VLAN interface, or VXLAN interface, depending on the network type.
[Diagram: two instances attached to a single Linux bridge, brq0]
Linux Bridge
Requirements:
One or more network interfaces to be associated with provider
network(s)
An IP address to be used for overlay traffic (if applicable)
Mechanism Driver linuxbridge
Agent neutron-linuxbridge-agent
Communication AMQP (RabbitMQ)
Provider Network Mapping
10g 10g:ens1f0
25g 25g:ens1f1
40g 40g:ens3f0
100g 100g:ens3f1
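Given the provider network mapping above, a minimal sketch of the corresponding linuxbridge_agent.ini on the compute node might look like the following. The option names are the standard Linux Bridge agent settings; the overlay IP is a placeholder.

    [linux_bridge]
    # maps each provider network label to a physical interface
    physical_interface_mappings = 10g:ens1f0,25g:ens1f1,40g:ens3f0,100g:ens3f1

    [vxlan]
    enable_vxlan = true
    local_ip = <overlay-ip-of-this-node>

The agent then creates a brqXXXX bridge per network and attaches the physical, VLAN, or VXLAN interface to it as needed.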
Linux Bridge
[Diagram: the iPerf client on a standalone node connects through the brq10G, brq25G, brq40G, and brq100G bridges on the compute node to the instance running the iPerf server]

Realized Throughput vs Baremetal (Gbps)
            10 Gbps   25 Gbps   40 Gbps   100 Gbps
LXB         9.93      18.7      20.4      21.9
Baremetal   9.91      23.9      38.3      69.8
Linux Bridge
Advantages:
  Supports overlay networking (VXLAN)
  Easy to troubleshoot using well-known tools
  Supports highly-available routers using VRRP
  Supports bonding / LAG
  Widely used / documented / supported
Disadvantages:
  Kernel datapath
  Iptables is likely in-path, even with port security disabled
  Does not support distributed virtual routers
  Does not support advanced services such as TaaS, FWaaS v2, and others
  Usually lags in new features compared to OVS
Mechanism Drivers
Linux Bridge
Open vSwitch
Open vSwitch + DPDK
SR-IOV (Direct Mode)
SR-IOV (Indirect Mode)
Open vSwitch + ASAP2
Vector Packet Processing (VPP)
Open vSwitch
Open vSwitch is an open-source
virtual switch that connects virtual
network resources to the physical
network infrastructure.
The openvswitch mechanism
driver implements multiple OVS-
based virtual switches and uses
them for various purposes.
The integration bridge connects virtual
machine instances to local Layer 2 networks.
The tunnel bridge connects
Layer 2 networks between
compute nodes using
overlay networks such as
VXLAN or GRE.
The provider bridge
connects local Layer 2
networks to the physical
network infrastructure.
[Diagram: br-int connected to br-tun and br-provider]
Open vSwitch
Requirements:
One or more network interfaces to be associated with provider
network(s)
An IP address to be used for overlay traffic (if applicable)
Packages not included with base install (e.g. openvswitch-common, openvswitch-switch)
Mechanism Driver openvswitch
Agent neutron-openvswitch-agent
Communication AMQP (RabbitMQ)
Provider Network Mapping
10g 10g:br-10g
25g 25g:br-25g
40g 40g:br-40g
100g 100g:br-100g
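Given the bridge mappings above, a minimal sketch of the Open vSwitch agent configuration and the one-time bridge setup might look like the following. The option names are standard openvswitch_agent.ini settings; interface names and the overlay IP are placeholders.

    [ovs]
    bridge_mappings = 10g:br-10g,25g:br-25g,40g:br-40g,100g:br-100g
    local_ip = <overlay-ip-of-this-node>

    [agent]
    tunnel_types = vxlan

    # provider bridges are created manually and wired to the NICs
    ovs-vsctl add-br br-10g
    ovs-vsctl add-port br-10g ens1f0

The agent creates br-int and br-tun itself and patches them to the provider bridges.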
Open vSwitch
[Diagram: the iPerf client on a standalone node connects through br-10G, br-25G, br-40G, and br-100G on the compute node, via br-int, to the instance running the iPerf server]

Realized Throughput vs Baremetal (iperf -P 4, Gbps)
            10 Gbps   25 Gbps   40 Gbps   100 Gbps
OVS         9.84      18.2      20.4      22.1
Baremetal   9.91      23.9      38.3      69.8
Open vSwitch
Advantages:
  Supports overlay networking (VXLAN, GRE)
  Supports highly-available routers using VRRP, as well as distributed virtual routers for better fault tolerance
  Supports a more efficient openflow-based firewall in lieu of Iptables
  Supports advanced services such as TaaS, FWaaS v2, etc.
  Supports bonding / LAG
  Widely used / documented / supported
Disadvantages:
  Split userspace/kernel datapath
  Uses a more convoluted command set
  Interfaces may not be accessible using traditional tools such as tcpdump, ifconfig, etc.
Mechanism Drivers
Linux Bridge
Open vSwitch
Open vSwitch + DPDK
SR-IOV (Direct Mode)
SR-IOV (Indirect Mode)
Open vSwitch + ASAP2
Vector Packet Processing (VPP)
Open vSwitch + DPDK
Data Plane Development Kit, or DPDK, provides a
framework for applications to directly interact with
network hardware, bypassing the kernel and related
interrupts, system calls, and context switching.
Developers can create applications using DPDK, or
in the case of OpenStack Neutron, Open vSwitch is
the application leveraging DPDK – no user
interaction needed*.
* Ok, that’s not really true.
Image credit: https://blog.selectel.com/introduction-dpdk-architecture-principles/
Open vSwitch + DPDK
Requirements:
One or more network interfaces to be associated with provider
network(s)
An IP address to be used for overlay traffic (if applicable)
Hugepage memory allocations and related flavors
Compatible network interface card (NIC)
NUMA awareness
OVS Binary with DPDK support (e.g. openvswitch-switch-dpdk)
Mechanism Driver openvswitch
Agent neutron-openvswitch-agent
Communication AMQP (RabbitMQ)
Provider Network Mapping
10g 10g:br-10g
25g 25g:br-25g
40g 40g:br-40g
100g 100g:br-100g
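On top of the standard OVS agent configuration, enabling DPDK means switching the datapath to userspace and handing the NIC to DPDK. A minimal sketch, assuming a single NUMA socket, hugepages already reserved at boot, and placeholder PCI address, port, and flavor names:

    # enable the DPDK datapath in OVS
    ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
    ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024"
    ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6

    # userspace bridge with a DPDK-bound port (PCI address is a placeholder)
    ovs-vsctl add-br br-10g -- set bridge br-10g datapath_type=netdev
    ovs-vsctl add-port br-10g dpdk-p0 -- set Interface dpdk-p0 type=dpdk options:dpdk-devargs=0000:04:00.0

    # instances need hugepage-backed flavors to use vhost-user ports
    openstack flavor set m1.dpdk --property hw:mem_page_size=large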
Open vSwitch + DPDK
[Diagram: the iPerf client on a standalone node connects through br-10G, br-25G, br-40G, and br-100G on the compute node, via br-int, to the instance running the iPerf server]

Realized Throughput vs Baremetal (iperf -P 4, Gbps)
             10 Gbps   25 Gbps   40 Gbps   100 Gbps
OVS + DPDK   9.88      22        27.9      27.9
Vanilla OVS  9.84      18.2      20.4      22.1
Baremetal    9.91      23.9      38.3      69.8
Open vSwitch + DPDK
Advantages:
  Supports overlay networking
  Userspace datapath
  Better performance compared to vanilla OVS
Disadvantages:
  Uses a more convoluted command set
  Interfaces may not be accessible using traditional tools such as tcpdump, ifconfig, etc.
  Non-instance ports are not performant, including those used by DVR, FWaaS, and LBaaS
  Hugepages required
  Knowledge of NUMA topology required
  One or more cores tied up for poll-mode drivers (PMDs)
  OpenStack flavors require hugepages and CPU pinning for best results
Mechanism Drivers
Linux Bridge
Open vSwitch
Open vSwitch + DPDK
SR-IOV (Direct Mode)
SR-IOV (Indirect Mode)
Open vSwitch + ASAP2
Vector Packet Processing (VPP)
SR-IOV (Direct Mode)
SR-IOV allows a PCI device to separate
access to its resources, resulting in:
• Single pNIC / Physical Function (PF)
• Multiple vNICs / Virtual Function(s) (VF)
Mechanism Driver sriovnicswitch
Agent neutron-sriov-nic-agent
Communication AMQP (RabbitMQ)
Provider Network Mapping
10g 10g:ens1f0
25g 25g:ens1f1
40g 40g:ens3f0
100g 100g:ens3f1
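A minimal sketch of the pieces involved: the SR-IOV agent mapping, the Nova PCI whitelist, and the port-first workflow noted on the next slide. Interface and network names follow the mapping above; the whitelist syntax is the Stein-era [pci] passthrough_whitelist option, and port, flavor, and image names are placeholders.

    # sriov_agent.ini
    [sriov_nic]
    physical_device_mappings = 10g:ens1f0,25g:ens1f1,40g:ens3f0,100g:ens3f1

    # nova.conf on the compute node
    [pci]
    passthrough_whitelist = {"devname": "ens3f1", "physical_network": "100g"}

    # create the port first, then boot the instance against it
    openstack port create --network 100g --vnic-type direct sriov-port
    openstack server create --flavor m1.large --image ubuntu-18.04 --nic port-id=<sriov-port-id> sriov-vm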
SR-IOV (Direct Mode)
[Diagram: the iPerf client on a standalone node connects through the PF to a VF attached directly to the instance running the iPerf server on the compute node]

Realized Throughput vs Baremetal (iperf -P 4, Gbps)
            10 Gbps   25 Gbps   40 Gbps   100 Gbps
SR-IOV      9.91      23.9      37        70.3
Baremetal   9.91      23.9      38.3      69.8
SR-IOV (Direct Mode)
Advantages:
  Traffic does not traverse kernel (direct to instance)
  Near line-rate performance
Disadvantages:
  Live migration not supported (work being done here)
  Interfaces may not be accessible using traditional tools such as tcpdump, ifconfig, etc. on the host
  Only supports instance ports
  Bonding / LAG not supported (work being done here, too)
  Port security / security groups not supported
  Changes to workflow to create port ahead of instance (e.g. vnic_type=direct)
  Interface hotplugging not supported
Mechanism Drivers
Linux Bridge
Open vSwitch
Open vSwitch + DPDK
SR-IOV (Direct Mode)
SR-IOV (Indirect Mode)
Open vSwitch + ASAP2
Vector Packet Processing (VPP)
SR-IOV (Indirect Mode)
SR-IOV (indirect mode) uses an intermediary device,
such as macvtap*, to provide connectivity to
instances. Rather than attaching the VF directly to
an instance, the VF is attached to a macvtap
interface which is then attached to the instance.
Mechanism Driver sriovnicswitch
Agent neutron-sriov-nic-agent
Communication AMQP (RabbitMQ)
Image courtesy of https://www.fir3net.com/UNIX/Linux/what-is-macvtap.html
* NOT to be confused with the actual, ya know, macvtap mechanism driver.
Provider Network Mapping
10g 10g:ens1f0
25g 25g:ens1f1
40g 40g:ens3f0
100g 100g:ens3f1
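Indirect mode reuses the same sriovnicswitch driver and agent configuration as direct mode; the difference is the vnic type requested on the port, which causes the VF to be wired through a macvtap device instead of passed straight through. A minimal sketch, with placeholder network, port, flavor, and image names:

    openstack port create --network 100g --vnic-type macvtap macvtap-port
    openstack server create --flavor m1.large --image ubuntu-18.04 --nic port-id=<macvtap-port-id> macvtap-vm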
SR-IOV (Indirect Mode)
[Diagram: the iPerf client on a standalone node connects through the PF to a VF, which is attached to a macvtap interface that is in turn attached to the instance running the iPerf server]

Realized Throughput vs Baremetal (iperf -P 4, Gbps)
            10 Gbps   25 Gbps   40 Gbps   100 Gbps
Macvtap     9.9       16.5      20.9      19.6
Baremetal   9.91      23.9      38.3      69.8
SR-IOV (Indirect Mode)
Advantages:
  Live migration
  Traffic visible via macvtap interface on compute node
  No additional setup beyond SR-IOV
Disadvantages:
  Poor performance > 10G in tests
  Changes to workflow to create port ahead of instance (e.g. vnic_type=macvtap)
  Performance benefits of SR-IOV lost
Mechanism Drivers
Linux Bridge
Open vSwitch
Open vSwitch + DPDK
SR-IOV (Direct Mode)
SR-IOV (Indirect Mode)
Open vSwitch + ASAP2
Vector Packet Processing (VPP)
Open vSwitch + ASAP2
ASAP2 is a feature of certain Mellanox NICs, including
ConnectX-5 and some ConnectX-4 models, that offloads
Open vSwitch data-plane processing onto NIC hardware
using the switchdev API.
ASAP2 leverages SR-IOV, iproute2, tc, and Open vSwitch to
provide this functionality.
Mechanism Driver openvswitch
Agent neutron-openvswitch-agent
Communication AMQP (RabbitMQ)
Provider Network Mapping
10g 10g:br-10g
25g 25g:br-25g
40g 40g:br-40g
100g 100g:br-100g
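A minimal sketch of the host-side setup for OVS hardware offload, following the general flow in the Neutron ovs-offload guide. PCI addresses, interface names, VF counts, and the port/network names are placeholders for this environment, not the exact values used in these tests.

    # create VFs, switch the NIC eSwitch to switchdev mode, enable TC offload
    # (VF drivers may need to be unbound before changing modes)
    echo 4 > /sys/class/net/ens3f1/device/sriov_numvfs
    devlink dev eswitch set pci/0000:08:00.1 mode switchdev
    ethtool -K ens3f1 hw-tc-offload on

    # tell OVS to push flows down to the NIC
    ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
    systemctl restart openvswitch-switch

    # request an offloaded port (switchdev capability), then boot against it
    openstack port create --network 100g --vnic-type direct \
      --binding-profile '{"capabilities": ["switchdev"]}' offload-port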
Open vSwitch + ASAP2
[Diagram: the instance attaches to a VF; the VF representor port sits on br-int, which connects to the provider bridge (e.g. br-100G), while the hardware eSwitch on the NIC carries the offloaded datapath between VF and PF toward the standalone iPerf client]

Realized Throughput vs Baremetal (iperf -P 4, Gbps)
             10 Gbps   25 Gbps   40 Gbps   100 Gbps
OVS + ASAP   9.91      24        35.7      73.6
Baremetal    9.91      23.9      38.3      69.8
Open vSwitch + ASAP2
Advantages:
  Supports LAG / bonding at vSwitch level
  Supports traffic mirroring via standard OVS procedures (vs SR-IOV Direct)
  Majority of packet processing done in hardware
Disadvantages:
  Not officially supported on non-RHEL based operating systems
  The use of security groups / port security means packet processing is NOT offloaded (addressed in future updates)
Mechanism Drivers
Linux Bridge
Open vSwitch
Open vSwitch + DPDK
SR-IOV (Direct Mode)
Macvtap (Indirect Mode)
Open vSwitch + ASAP2
Vector Packet Processing (VPP)
FD.io VPP
VPP is a software switch that works with DPDK to provide
very fast packet processing.
The networking-vpp project is responsible for providing
the mechanism driver to interface with the FD.io VPP
software switch.
Project URL:
https://wiki.openstack.org/wiki/Networking-vpp
Mechanism Driver vpp
Agent neutron-vpp-agent
Communication etcd
Provider Network Mapping
10g 10g:TwentyFiveGigabitEthernet4/0/0
25g 25g:TwentyFiveGigabitEthernet4/0/1
40g 40g:HundredGigabitEthernet8/0/0
100g 100g:HundredGigabitEthernet8/0/1
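A minimal sketch of the ML2 configuration for networking-vpp. The [ml2_vpp] section and its option names (etcd_host, etcd_port, physnets) are recalled from the networking-vpp documentation and should be treated as assumptions to verify against the release in use; the agent exchanges state through etcd rather than RabbitMQ, as noted above.

    [ml2]
    mechanism_drivers = vpp

    [ml2_vpp]
    etcd_host = <controller-ip>
    etcd_port = 2379
    # provider network label to VPP interface mapping
    physnets = 10g:TwentyFiveGigabitEthernet4/0/0,25g:TwentyFiveGigabitEthernet4/0/1,40g:HundredGigabitEthernet8/0/0,100g:HundredGigabitEthernet8/0/1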
FD.io VPP
[Diagram: the iPerf client on a standalone node connects through the VPP vSwitch on the compute node to the instance running the iPerf server]

Realized Throughput vs Baremetal (iperf -P 4, Gbps)
            10 Gbps   25 Gbps   40 Gbps   100 Gbps
VPP         9.87      24.5      16.6      16.5
Baremetal   9.91      23.9      38.3      69.8
FD.io VPP
Advantages:
  Accelerated dataplane vs LXB and vanilla OVS (up thru 25G)
Disadvantages:
  May still be considered experimental for the greater community
  May require recompile of DPDK + VPP to support certain NICs, including Mellanox
  Advanced services may not be supported
  May require OFED, and particular versions of DPDK and OVS
What is ML2
Mechanism Drivers
Comparisons
Summary
iPerf - 10 Gbps
[Bar chart: realized throughput in Gbps on the 10 Gbps link for Baremetal, LXB, OVS, DPDK, SR-IOV, Macvtap, ASAP, and VPP]
iPerf - 25 Gbps
[Bar chart: realized throughput in Gbps on the 25 Gbps link for Baremetal, LXB, OVS, DPDK, SR-IOV, Macvtap, ASAP, and VPP]
iPerf - 40 Gbps
[Bar chart: realized throughput in Gbps on the 40 Gbps link for Baremetal, LXB, OVS, DPDK, SR-IOV, Macvtap, ASAP, and VPP]
iPerf - 100 Gbps
[Bar chart: realized throughput in Gbps on the 100 Gbps link for Baremetal, LXB, OVS, DPDK, SR-IOV, Macvtap, ASAP, and VPP]
T-Rex
THE GOAL:
Pump as much traffic as we can to see how the respective
vSwitch performs compared to baremetal.
T-Rex – 10G
THE TEST:
• T-Rex sfr_delay_10_1G x 10
• 120 second total duration
• Initiates roughly 40,000 cps
• Pumps roughly 2,000,000 pps
DUT:
• Virtual Machine
• Ubuntu 18.04.2 LTS
• 8 cores
• 16 GB RAM
• Mellanox ConnectX-4 Lx EN
T-Rex – Packet Loss at 10G (percentage dropped, shorter is better)
  LXB        95.60%
  OVS        94.15%
  DPDK       40.21%
  SR-IOV      0.89%
  Macvtap    80.97%
  ASAP        0.88%
  VPP        24.50%
  Baremetal   0.00%
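For reference, the stateful T-Rex run described above maps onto the standard t-rex-64 invocation. A minimal sketch, assuming the stock sfr_delay_10_1G profile shipped with T-Rex; the exact flags used to produce these results are not stated in the deck.

    # multiplier 10 of the SFR profile (roughly 10 Gbps), 120 second run
    ./t-rex-64 -f cap2/sfr_delay_10_1G.yaml -m 10 -d 120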
T-Rex – 25G
THE TEST:
• T-Rex sfr_delay_10_1G x 25
• 120 second total duration
• Initiates roughly 100,000 cps
• Pumps roughly 5,000,000 pps
DUT:
• Virtual Machine
• Ubuntu 18.04.2 LTS
• 8 cores
• 16 GB RAM
• Mellanox ConnectX-4 Lx EN
T-Rex – Packet Loss at 25G (percentage dropped, shorter is better)
  LXB        99.31%
  OVS        99.36%
  DPDK       76.32%
  SR-IOV     21.93%
  Macvtap    93.28%
  ASAP       18.34%
  VPP        67.37%
  Baremetal   0.00%
T-Rex – 40G
THE TEST:
• T-Rex sfr_delay_10_1G x 40
• 120 second total duration
• Initiates roughly 160,000 cps
• Pumps roughly 8,000,000 pps
DUT:
• Virtual Machine Instance
• Ubuntu 18.04.2 LTS
• 8 cores
• 16 GB RAM
• Mellanox ConnectX-5 Ex
T-Rex – Packet Loss at 40G (percentage dropped, shorter is better)
  LXB        98.77%
  OVS        98.07%
  DPDK       82.09%
  SR-IOV     40.25%
  Macvtap    96.58%
  ASAP       35.37%
  VPP        80.24%
  Baremetal   4.62%
T-Rex – 100G
THE TEST:
• T-Rex sfr_delay_10_1G x 100
• 120 second total duration
• Initiates roughly 400,000 cps
• Pumps roughly 20,000,000 pps
DUT:
• Virtual Machine Instance
• Ubuntu 18.04.2 LTS
• 8 cores
• 16 GB RAM
• Mellanox ConnectX-5 Ex
T-Rex – Packet Loss at 100G (percentage dropped, shorter is better)
  LXB        99.20%
  OVS        98.73%
  DPDK       86.97%
  SR-IOV     54.13%
  Macvtap    96.24%
  ASAP       53.56%
  VPP        86.67%
  Baremetal  22.48%
What is ML2
Mechanism Drivers
Comparisons
Summary
Summary
Performance isn’t everything
Operators who deploy OpenStack should consider many harder-to-quantify attributes of a given
mechanism driver and related technology, including:
Upstream support
Bug fix completion rate
Community adoption
Feature completeness
Hardware compatibility
Ease-of-support
LinuxBridge
Open vSwitch
SR-IOV (Direct)
SR-IOV (Indirect)
Open vSwitch + DPDK
Open vSwitch + ASAP
VPP
Summary
Tuning is almost always required
Parameter tuning can often lead to increases in performance for a given virtual switch, but:
Each vSwitch requires its own tweaks
Different workloads may need different settings
You can’t squeeze water from a stone
Summary
Hardware makes a difference
Newer generations of processors required for best performance
PCIe 4.0 to maximize 100G+ networking
Summary
The road less travelled can be a bumpy one
Less experience means you’re on your own
Testing is even more important
Summary
For best performance:
  SR-IOV (Direct Mode)
  Open vSwitch + Hardware Offloading (e.g. ASAP2)
For best support:
  Linux Bridge
  Open vSwitch
Resources
DPDK: https://docs.openstack.org/neutron/stein/admin/config-ovs-dpdk.html
DPDK: https://doc.dpdk.org/guides/linux_gsg/index.html
OVS: https://superuser.openstack.org/articles/openvswitch-openstack-sdn/
T-Rex: https://trex-tgn.cisco.com/trex/doc/trex_manual.html
Offload: https://docs.openstack.org/neutron/stein/admin/config-ovs-offload.html
VPP: https://fd.io/wp-content/uploads/sites/34/2017/07/FDioVPPwhitepaperJuly2017.pdf
Acceleration: https://www.metaswitch.com/blog/accelerating-the-nfv-data-plane
ASAP: http://www.mellanox.com/related-docs/whitepapers/WP_SDNsolution.pdf
ASAP: https://www.mellanox.com/related-docs/prod_software/ASAP2_Hardware_Offloading_for_vSwitches_User_Manual_v4.4.pdf

More Related Content

What's hot

오픈스택 기반 클라우드 서비스 구축 방안 및 사례
오픈스택 기반 클라우드 서비스 구축 방안 및 사례오픈스택 기반 클라우드 서비스 구축 방안 및 사례
오픈스택 기반 클라우드 서비스 구축 방안 및 사례
SONG INSEOB
 
Deploying IPv6 on OpenStack
Deploying IPv6 on OpenStackDeploying IPv6 on OpenStack
Deploying IPv6 on OpenStack
Vietnam Open Infrastructure User Group
 
Taking Security Groups to Ludicrous Speed with OVS (OpenStack Summit 2015)
Taking Security Groups to Ludicrous Speed with OVS (OpenStack Summit 2015)Taking Security Groups to Ludicrous Speed with OVS (OpenStack Summit 2015)
Taking Security Groups to Ludicrous Speed with OVS (OpenStack Summit 2015)
Thomas Graf
 
[2018] 오픈스택 5년 운영의 경험
[2018] 오픈스택 5년 운영의 경험[2018] 오픈스택 5년 운영의 경험
[2018] 오픈스택 5년 운영의 경험
NHN FORWARD
 
Overview of Distributed Virtual Router (DVR) in Openstack/Neutron
Overview of Distributed Virtual Router (DVR) in Openstack/NeutronOverview of Distributed Virtual Router (DVR) in Openstack/Neutron
Overview of Distributed Virtual Router (DVR) in Openstack/Neutron
vivekkonnect
 
[OpenStack] 공개 소프트웨어 오픈스택 입문 & 파헤치기
[OpenStack] 공개 소프트웨어 오픈스택 입문 & 파헤치기[OpenStack] 공개 소프트웨어 오픈스택 입문 & 파헤치기
[OpenStack] 공개 소프트웨어 오픈스택 입문 & 파헤치기
Ian Choi
 
OpenStack概要 ~仮想ネットワーク~
OpenStack概要 ~仮想ネットワーク~OpenStack概要 ~仮想ネットワーク~
OpenStack概要 ~仮想ネットワーク~
Masaya Aoyama
 
The Basic Introduction of Open vSwitch
The Basic Introduction of Open vSwitchThe Basic Introduction of Open vSwitch
The Basic Introduction of Open vSwitch
Te-Yen Liu
 
Openstack Neutron, interconnections with BGP/MPLS VPNs
Openstack Neutron, interconnections with BGP/MPLS VPNsOpenstack Neutron, interconnections with BGP/MPLS VPNs
Openstack Neutron, interconnections with BGP/MPLS VPNs
Thomas Morin
 
OpenStack入門 2016/06/10
OpenStack入門 2016/06/10OpenStack入門 2016/06/10
OpenStack入門 2016/06/10
株式会社 NTTテクノクロス
 
MP BGP-EVPN 실전기술-1편(개념잡기)
MP BGP-EVPN 실전기술-1편(개념잡기)MP BGP-EVPN 실전기술-1편(개념잡기)
MP BGP-EVPN 실전기술-1편(개념잡기)
JuHwan Lee
 
Understanding Open vSwitch
Understanding Open vSwitch Understanding Open vSwitch
Understanding Open vSwitch
YongKi Kim
 
L3HA-VRRP-20141201
L3HA-VRRP-20141201L3HA-VRRP-20141201
L3HA-VRRP-20141201
Manabu Ori
 
How VXLAN works on Linux
How VXLAN works on LinuxHow VXLAN works on Linux
How VXLAN works on LinuxEtsuji Nakai
 
[오픈소스컨설팅] Open Stack Ceph, Neutron, HA, Multi-Region
[오픈소스컨설팅] Open Stack Ceph, Neutron, HA, Multi-Region[오픈소스컨설팅] Open Stack Ceph, Neutron, HA, Multi-Region
[오픈소스컨설팅] Open Stack Ceph, Neutron, HA, Multi-Region
Ji-Woong Choi
 
知っているようで知らないNeutron -仮想ルータの冗長と分散- - OpenStack最新情報セミナー 2016年3月
知っているようで知らないNeutron -仮想ルータの冗長と分散- - OpenStack最新情報セミナー 2016年3月 知っているようで知らないNeutron -仮想ルータの冗長と分散- - OpenStack最新情報セミナー 2016年3月
知っているようで知らないNeutron -仮想ルータの冗長と分散- - OpenStack最新情報セミナー 2016年3月
VirtualTech Japan Inc.
 
Open vSwitch Offload: Conntrack and the Upstream Kernel
Open vSwitch Offload: Conntrack and the Upstream KernelOpen vSwitch Offload: Conntrack and the Upstream Kernel
Open vSwitch Offload: Conntrack and the Upstream Kernel
Netronome
 
Meshing OpenStack and Bare Metal Networks with EVPN - David Iles, Mellanox Te...
Meshing OpenStack and Bare Metal Networks with EVPN - David Iles, Mellanox Te...Meshing OpenStack and Bare Metal Networks with EVPN - David Iles, Mellanox Te...
Meshing OpenStack and Bare Metal Networks with EVPN - David Iles, Mellanox Te...
OpenStack
 
[242]open stack neutron dataplane 구현
[242]open stack neutron   dataplane 구현[242]open stack neutron   dataplane 구현
[242]open stack neutron dataplane 구현
NAVER D2
 
Neutron packet logging framework
Neutron packet logging frameworkNeutron packet logging framework
Neutron packet logging framework
Vietnam Open Infrastructure User Group
 

What's hot (20)

오픈스택 기반 클라우드 서비스 구축 방안 및 사례
오픈스택 기반 클라우드 서비스 구축 방안 및 사례오픈스택 기반 클라우드 서비스 구축 방안 및 사례
오픈스택 기반 클라우드 서비스 구축 방안 및 사례
 
Deploying IPv6 on OpenStack
Deploying IPv6 on OpenStackDeploying IPv6 on OpenStack
Deploying IPv6 on OpenStack
 
Taking Security Groups to Ludicrous Speed with OVS (OpenStack Summit 2015)
Taking Security Groups to Ludicrous Speed with OVS (OpenStack Summit 2015)Taking Security Groups to Ludicrous Speed with OVS (OpenStack Summit 2015)
Taking Security Groups to Ludicrous Speed with OVS (OpenStack Summit 2015)
 
[2018] 오픈스택 5년 운영의 경험
[2018] 오픈스택 5년 운영의 경험[2018] 오픈스택 5년 운영의 경험
[2018] 오픈스택 5년 운영의 경험
 
Overview of Distributed Virtual Router (DVR) in Openstack/Neutron
Overview of Distributed Virtual Router (DVR) in Openstack/NeutronOverview of Distributed Virtual Router (DVR) in Openstack/Neutron
Overview of Distributed Virtual Router (DVR) in Openstack/Neutron
 
[OpenStack] 공개 소프트웨어 오픈스택 입문 & 파헤치기
[OpenStack] 공개 소프트웨어 오픈스택 입문 & 파헤치기[OpenStack] 공개 소프트웨어 오픈스택 입문 & 파헤치기
[OpenStack] 공개 소프트웨어 오픈스택 입문 & 파헤치기
 
OpenStack概要 ~仮想ネットワーク~
OpenStack概要 ~仮想ネットワーク~OpenStack概要 ~仮想ネットワーク~
OpenStack概要 ~仮想ネットワーク~
 
The Basic Introduction of Open vSwitch
The Basic Introduction of Open vSwitchThe Basic Introduction of Open vSwitch
The Basic Introduction of Open vSwitch
 
Openstack Neutron, interconnections with BGP/MPLS VPNs
Openstack Neutron, interconnections with BGP/MPLS VPNsOpenstack Neutron, interconnections with BGP/MPLS VPNs
Openstack Neutron, interconnections with BGP/MPLS VPNs
 
OpenStack入門 2016/06/10
OpenStack入門 2016/06/10OpenStack入門 2016/06/10
OpenStack入門 2016/06/10
 
MP BGP-EVPN 실전기술-1편(개념잡기)
MP BGP-EVPN 실전기술-1편(개념잡기)MP BGP-EVPN 실전기술-1편(개념잡기)
MP BGP-EVPN 실전기술-1편(개념잡기)
 
Understanding Open vSwitch
Understanding Open vSwitch Understanding Open vSwitch
Understanding Open vSwitch
 
L3HA-VRRP-20141201
L3HA-VRRP-20141201L3HA-VRRP-20141201
L3HA-VRRP-20141201
 
How VXLAN works on Linux
How VXLAN works on LinuxHow VXLAN works on Linux
How VXLAN works on Linux
 
[오픈소스컨설팅] Open Stack Ceph, Neutron, HA, Multi-Region
[오픈소스컨설팅] Open Stack Ceph, Neutron, HA, Multi-Region[오픈소스컨설팅] Open Stack Ceph, Neutron, HA, Multi-Region
[오픈소스컨설팅] Open Stack Ceph, Neutron, HA, Multi-Region
 
知っているようで知らないNeutron -仮想ルータの冗長と分散- - OpenStack最新情報セミナー 2016年3月
知っているようで知らないNeutron -仮想ルータの冗長と分散- - OpenStack最新情報セミナー 2016年3月 知っているようで知らないNeutron -仮想ルータの冗長と分散- - OpenStack最新情報セミナー 2016年3月
知っているようで知らないNeutron -仮想ルータの冗長と分散- - OpenStack最新情報セミナー 2016年3月
 
Open vSwitch Offload: Conntrack and the Upstream Kernel
Open vSwitch Offload: Conntrack and the Upstream KernelOpen vSwitch Offload: Conntrack and the Upstream Kernel
Open vSwitch Offload: Conntrack and the Upstream Kernel
 
Meshing OpenStack and Bare Metal Networks with EVPN - David Iles, Mellanox Te...
Meshing OpenStack and Bare Metal Networks with EVPN - David Iles, Mellanox Te...Meshing OpenStack and Bare Metal Networks with EVPN - David Iles, Mellanox Te...
Meshing OpenStack and Bare Metal Networks with EVPN - David Iles, Mellanox Te...
 
[242]open stack neutron dataplane 구현
[242]open stack neutron   dataplane 구현[242]open stack neutron   dataplane 구현
[242]open stack neutron dataplane 구현
 
Neutron packet logging framework
Neutron packet logging frameworkNeutron packet logging framework
Neutron packet logging framework
 

Similar to Pushing Packets - How do the ML2 Mechanism Drivers Stack Up

SDN & NFV Introduction - Open Source Data Center Networking
SDN & NFV Introduction - Open Source Data Center NetworkingSDN & NFV Introduction - Open Source Data Center Networking
SDN & NFV Introduction - Open Source Data Center NetworkingThomas Graf
 
Network Virtualization & Software-defined Networking
Network Virtualization & Software-defined NetworkingNetwork Virtualization & Software-defined Networking
Network Virtualization & Software-defined Networking
Digicomp Academy AG
 
OpenStack Neutron Dragonflow l3 SDNmeetup
OpenStack Neutron Dragonflow l3 SDNmeetupOpenStack Neutron Dragonflow l3 SDNmeetup
OpenStack Neutron Dragonflow l3 SDNmeetup
Eran Gampel
 
OpenStack Networks the Web-Scale Way - Scott Laffer, Cumulus Networks
OpenStack Networks the Web-Scale Way - Scott Laffer, Cumulus NetworksOpenStack Networks the Web-Scale Way - Scott Laffer, Cumulus Networks
OpenStack Networks the Web-Scale Way - Scott Laffer, Cumulus Networks
OpenStack
 
Understanding network and service virtualization
Understanding network and service virtualizationUnderstanding network and service virtualization
Understanding network and service virtualization
SDN Hub
 
DragonFlow sdn based distributed virtual router for openstack neutron
DragonFlow sdn based distributed virtual router for openstack neutronDragonFlow sdn based distributed virtual router for openstack neutron
DragonFlow sdn based distributed virtual router for openstack neutron
Eran Gampel
 
Simplify Networking for Containers
Simplify Networking for ContainersSimplify Networking for Containers
Simplify Networking for Containers
LinuxCon ContainerCon CloudOpen China
 
PLNOG 13: Nicolai van der Smagt: SDN
PLNOG 13: Nicolai van der Smagt: SDNPLNOG 13: Nicolai van der Smagt: SDN
PLNOG 13: Nicolai van der Smagt: SDN
PROIDEA
 
Opencontrail network virtualization
Opencontrail network virtualizationOpencontrail network virtualization
Opencontrail network virtualization
Nicolai van der Smagt
 
20151222_Interoperability with ML2: LinuxBridge, OVS and SDN
20151222_Interoperability with ML2: LinuxBridge, OVS and SDN20151222_Interoperability with ML2: LinuxBridge, OVS and SDN
20151222_Interoperability with ML2: LinuxBridge, OVS and SDN
Sungman Jang
 
[OpenStack 하반기 스터디] Interoperability with ML2: LinuxBridge, OVS and SDN
[OpenStack 하반기 스터디] Interoperability with ML2: LinuxBridge, OVS and SDN[OpenStack 하반기 스터디] Interoperability with ML2: LinuxBridge, OVS and SDN
[OpenStack 하반기 스터디] Interoperability with ML2: LinuxBridge, OVS and SDN
OpenStack Korea Community
 
OpenStack Networking and Automation
OpenStack Networking and AutomationOpenStack Networking and Automation
OpenStack Networking and Automation
Adam Johnson
 
Harmonia open iris_basic_v0.1
Harmonia open iris_basic_v0.1Harmonia open iris_basic_v0.1
Harmonia open iris_basic_v0.1
Yongyoon Shin
 
An Introduce of OPNFV (Open Platform for NFV)
An Introduce of OPNFV (Open Platform for NFV)An Introduce of OPNFV (Open Platform for NFV)
An Introduce of OPNFV (Open Platform for NFV)
Mario Cho
 
SDN/OpenFlow #lspe
SDN/OpenFlow #lspeSDN/OpenFlow #lspe
SDN/OpenFlow #lspe
Chris Westin
 
OpenFlow tutorial
OpenFlow tutorialOpenFlow tutorial
OpenFlow tutorial
openflow
 
Sdn dell lab report v2
Sdn dell lab report v2Sdn dell lab report v2
Sdn dell lab report v2
Oded Rotter
 
VMware ESXi - Intel and Qlogic NIC throughput difference v0.6
VMware ESXi - Intel and Qlogic NIC throughput difference v0.6VMware ESXi - Intel and Qlogic NIC throughput difference v0.6
VMware ESXi - Intel and Qlogic NIC throughput difference v0.6
David Pasek
 
Netsft2017 day in_life_of_nfv
Netsft2017 day in_life_of_nfvNetsft2017 day in_life_of_nfv
Netsft2017 day in_life_of_nfv
Intel
 
Excitingly simple multi-path OpenStack networking: LAG-less, L2-less, yet ful...
Excitingly simple multi-path OpenStack networking: LAG-less, L2-less, yet ful...Excitingly simple multi-path OpenStack networking: LAG-less, L2-less, yet ful...
Excitingly simple multi-path OpenStack networking: LAG-less, L2-less, yet ful...
LINE Corporation
 

Similar to Pushing Packets - How do the ML2 Mechanism Drivers Stack Up (20)

SDN & NFV Introduction - Open Source Data Center Networking
SDN & NFV Introduction - Open Source Data Center NetworkingSDN & NFV Introduction - Open Source Data Center Networking
SDN & NFV Introduction - Open Source Data Center Networking
 
Network Virtualization & Software-defined Networking
Network Virtualization & Software-defined NetworkingNetwork Virtualization & Software-defined Networking
Network Virtualization & Software-defined Networking
 
OpenStack Neutron Dragonflow l3 SDNmeetup
OpenStack Neutron Dragonflow l3 SDNmeetupOpenStack Neutron Dragonflow l3 SDNmeetup
OpenStack Neutron Dragonflow l3 SDNmeetup
 
OpenStack Networks the Web-Scale Way - Scott Laffer, Cumulus Networks
OpenStack Networks the Web-Scale Way - Scott Laffer, Cumulus NetworksOpenStack Networks the Web-Scale Way - Scott Laffer, Cumulus Networks
OpenStack Networks the Web-Scale Way - Scott Laffer, Cumulus Networks
 
Understanding network and service virtualization
Understanding network and service virtualizationUnderstanding network and service virtualization
Understanding network and service virtualization
 
DragonFlow sdn based distributed virtual router for openstack neutron
DragonFlow sdn based distributed virtual router for openstack neutronDragonFlow sdn based distributed virtual router for openstack neutron
DragonFlow sdn based distributed virtual router for openstack neutron
 
Simplify Networking for Containers
Simplify Networking for ContainersSimplify Networking for Containers
Simplify Networking for Containers
 
PLNOG 13: Nicolai van der Smagt: SDN
PLNOG 13: Nicolai van der Smagt: SDNPLNOG 13: Nicolai van der Smagt: SDN
PLNOG 13: Nicolai van der Smagt: SDN
 
Opencontrail network virtualization
Opencontrail network virtualizationOpencontrail network virtualization
Opencontrail network virtualization
 
20151222_Interoperability with ML2: LinuxBridge, OVS and SDN
20151222_Interoperability with ML2: LinuxBridge, OVS and SDN20151222_Interoperability with ML2: LinuxBridge, OVS and SDN
20151222_Interoperability with ML2: LinuxBridge, OVS and SDN
 
[OpenStack 하반기 스터디] Interoperability with ML2: LinuxBridge, OVS and SDN
[OpenStack 하반기 스터디] Interoperability with ML2: LinuxBridge, OVS and SDN[OpenStack 하반기 스터디] Interoperability with ML2: LinuxBridge, OVS and SDN
[OpenStack 하반기 스터디] Interoperability with ML2: LinuxBridge, OVS and SDN
 
OpenStack Networking and Automation
OpenStack Networking and AutomationOpenStack Networking and Automation
OpenStack Networking and Automation
 
Harmonia open iris_basic_v0.1
Harmonia open iris_basic_v0.1Harmonia open iris_basic_v0.1
Harmonia open iris_basic_v0.1
 
An Introduce of OPNFV (Open Platform for NFV)
An Introduce of OPNFV (Open Platform for NFV)An Introduce of OPNFV (Open Platform for NFV)
An Introduce of OPNFV (Open Platform for NFV)
 
SDN/OpenFlow #lspe
SDN/OpenFlow #lspeSDN/OpenFlow #lspe
SDN/OpenFlow #lspe
 
OpenFlow tutorial
OpenFlow tutorialOpenFlow tutorial
OpenFlow tutorial
 
Sdn dell lab report v2
Sdn dell lab report v2Sdn dell lab report v2
Sdn dell lab report v2
 
VMware ESXi - Intel and Qlogic NIC throughput difference v0.6
VMware ESXi - Intel and Qlogic NIC throughput difference v0.6VMware ESXi - Intel and Qlogic NIC throughput difference v0.6
VMware ESXi - Intel and Qlogic NIC throughput difference v0.6
 
Netsft2017 day in_life_of_nfv
Netsft2017 day in_life_of_nfvNetsft2017 day in_life_of_nfv
Netsft2017 day in_life_of_nfv
 
Excitingly simple multi-path OpenStack networking: LAG-less, L2-less, yet ful...
Excitingly simple multi-path OpenStack networking: LAG-less, L2-less, yet ful...Excitingly simple multi-path OpenStack networking: LAG-less, L2-less, yet ful...
Excitingly simple multi-path OpenStack networking: LAG-less, L2-less, yet ful...
 

Recently uploaded

Key Trends Shaping the Future of Infrastructure.pdf
Key Trends Shaping the Future of Infrastructure.pdfKey Trends Shaping the Future of Infrastructure.pdf
Key Trends Shaping the Future of Infrastructure.pdf
Cheryl Hung
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
Laura Byrne
 
Designing Great Products: The Power of Design and Leadership by Chief Designe...
Designing Great Products: The Power of Design and Leadership by Chief Designe...Designing Great Products: The Power of Design and Leadership by Chief Designe...
Designing Great Products: The Power of Design and Leadership by Chief Designe...
Product School
 
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
Product School
 
UiPath Test Automation using UiPath Test Suite series, part 3
UiPath Test Automation using UiPath Test Suite series, part 3UiPath Test Automation using UiPath Test Suite series, part 3
UiPath Test Automation using UiPath Test Suite series, part 3
DianaGray10
 
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
Sri Ambati
 
The Future of Platform Engineering
The Future of Platform EngineeringThe Future of Platform Engineering
The Future of Platform Engineering
Jemma Hussein Allen
 
ODC, Data Fabric and Architecture User Group
ODC, Data Fabric and Architecture User GroupODC, Data Fabric and Architecture User Group
ODC, Data Fabric and Architecture User Group
CatarinaPereira64715
 
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptxIOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
Abida Shariff
 
Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...
Product School
 
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
Jeffrey Haguewood
 
Essentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersEssentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with Parameters
Safe Software
 
UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4
DianaGray10
 
"Impact of front-end architecture on development cost", Viktor Turskyi
"Impact of front-end architecture on development cost", Viktor Turskyi"Impact of front-end architecture on development cost", Viktor Turskyi
"Impact of front-end architecture on development cost", Viktor Turskyi
Fwdays
 
Epistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI supportEpistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI support
Alan Dix
 
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Product School
 
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdfFIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance
 
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdfFIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance
 
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
BookNet Canada
 
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
UiPathCommunity
 

Recently uploaded (20)

Key Trends Shaping the Future of Infrastructure.pdf
Key Trends Shaping the Future of Infrastructure.pdfKey Trends Shaping the Future of Infrastructure.pdf
Key Trends Shaping the Future of Infrastructure.pdf
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
 
Designing Great Products: The Power of Design and Leadership by Chief Designe...
Designing Great Products: The Power of Design and Leadership by Chief Designe...Designing Great Products: The Power of Design and Leadership by Chief Designe...
Designing Great Products: The Power of Design and Leadership by Chief Designe...
 
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
 
UiPath Test Automation using UiPath Test Suite series, part 3
UiPath Test Automation using UiPath Test Suite series, part 3UiPath Test Automation using UiPath Test Suite series, part 3
UiPath Test Automation using UiPath Test Suite series, part 3
 
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
 
The Future of Platform Engineering
The Future of Platform EngineeringThe Future of Platform Engineering
The Future of Platform Engineering
 
ODC, Data Fabric and Architecture User Group
ODC, Data Fabric and Architecture User GroupODC, Data Fabric and Architecture User Group
ODC, Data Fabric and Architecture User Group
 
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptxIOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
 
Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...
 
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
 
Essentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersEssentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with Parameters
 
UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4
 
"Impact of front-end architecture on development cost", Viktor Turskyi
"Impact of front-end architecture on development cost", Viktor Turskyi"Impact of front-end architecture on development cost", Viktor Turskyi
"Impact of front-end architecture on development cost", Viktor Turskyi
 
Epistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI supportEpistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI support
 
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
 
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdfFIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdf
 
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdfFIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
 
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
 
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
 

Pushing Packets - How do the ML2 Mechanism Drivers Stack Up

  • 1. James Denton – Network Architect Twitter: @jimmdenton Open Infrastructure Summit Jonathan Almaleh – Network Architect Twitter: @ckent99999 Denver, Colorado 2019 Pushing Packets: How do the ML2 Drivers Stack Up
  • 2. What is ML2 Mechanism Drivers Comparisons Summary Agenda
  • 3. In the beginning, there was the monolithic plugin. Using a monolithic plugin, operators were limited to a single virtual networking technology within a given cloud. Writing new plugins was difficult, and often resulted in duplicate code and efforts. What is ML2
  • 4. Around the Havana release, the Modular Layer 2 (ML2) plugin was introduced. By using the ML2 plugin and related drivers, operators were no longer limited to a single virtual networking technology within a given cloud. With the modular approach, developers could focus on developing drivers for their respective L2 mechanisms without having to reimplement other components. What is ML2
  • 5. Type drivers: Define how an OpenStack network is realized. Examples are VLAN, VXLAN, GENEVE, Flat, etc. Mechanism drivers: Are responsible for implementing the network. Examples include Open vSwitch, Linux Bridge, SR-IOV, etc. What is ML2
  • 6. What is ML2 Mechanism Drivers Comparisons Summary
  • 8. Test Environment iPerf Server (VM) / DUT CPU 8-core Intel Xeon E5-2678 v3 @ 2.5 Ghz Memory 16 GB Operating System Ubuntu 18.04.2 LTS 4.15.0-47-generic Mechanism Driver Network Driver (in VM) Linux Bridge vhost_net Open vSwitch +/- DPDK vhost_net SR-IOV (Direct) mlx5_core SR-IOV (Indirect) vhost_net Open vSwitch + ASAP mlx5_core VPP vhost_net Compute Node Specs CPU 2x 12-core Intel Xeon E5-2678 v3 @ 2.5 Ghz Memory 64 GB NIC Mellanox ConnectX-4 Lx EN (10/25) Mellanox ConnectX-5 Ex (40/100) Operating System Ubuntu 18.04.2 LTS 4.15.0-47-generic Kernel Parameters GRUB_CMDLINE_LINUX="intel_iommu=on iommu=pt” OpenStack Release Stein iPerf Client / T-Rex Specs CPU 2x 8-core Intel Xeon E5-2667 v2 @ 3.3 Ghz Memory 128 GB NIC Mellanox ConnectX-4 Lx EN (10/25) Mellanox ConnectX-5 Ex (40/100) Operating System Ubuntu 18.04.2 LTS 4.15.0-47-generic Kernel Parameters GRUB_CMDLINE_LINUX="intel_iommu=on iommu=pt”
  • 9. Mechanism Drivers Linux Bridge Open vSwitch Open vSwitch + DPDK SR-IOV (Direct Mode) SR-IOV (Indirect Mode) Open vSwitch + ASAP2 Vector Packet Processing (VPP)
  • 10. Linux Bridge The Linux Bridge driver implements a single Linux bridge per network on a given compute node. A bridge can be connected to a physical interface, vlan interface, or vxlan interface, depending on the network type. brq0 Instance Instance
  • 11. Linux Bridge Requirements: One or more network interfaces to be associated with provider network(s) An IP address to be used for overlay traffic (if applicable) Mechanism Driver linuxbridge Agent neutron-linuxbridge-agent Communication AMQP (RabbitMQ) Provider Network Mapping 10g 10g:ens1f0 25g 25g:ens1f1 40g 40g:ens3f0 100g 100g:ens3f1
  • 12. Linux Bridge brq10G brq25G brq40G brq100G Instance (iPerf Server) Standalone Node (iPerf Client) Compute Node 9.93 18.7 20.4 21.9 9.91 23.9 38.3 69.8 0 25 50 75 100 10 Gbps 25 Gbps 40 Gbps 100 Gbps Realized Throughput vs Baremetal LXB Baremetal
  • 13. Advantages Disadvantages Supports overlay networking (VXLAN) Easy to troubleshoot using well-known tools Supports highly-available routers using VRRP Supports bonding / LAG Widely used / documented / supported Kernel datapath Iptables is likely in-path, even with port security disabled Does not support distributed virtual routers Does not support advanced services such as TaaS, FWaaS v2, and others Usually lags in new features compared to OVS Linux Bridge
  • 14. Mechanism Drivers Linux Bridge Open vSwitch Open vSwitch + DPDK SR-IOV (Direct Mode) SR-IOV (Indirect Mode) Open vSwitch + ASAP2 Vector Packet Processing (VPP)
  • 15. Open vSwitch Open vSwitch is an open-source virtual switch that connects virtual network resources to the physical network infrastructure. The openvswitch mechanism driver implements multiple OVS- based virtual switches and uses them for various purposes. The integration bridge connects virtual machine instances to local Layer 2 networks. The tunnel bridge connects Layer 2 networks between compute nodes using overlay networks such as VXLAN or GRE. The provider bridge connects local Layer 2 networks to the physical network infrastructure. br-int br-tun br-provider
  • 16. Open vSwitch Requirements: One or more network interfaces to be associated with provider network(s) An IP address to be used for overlay traffic (if applicable) Packages not included with base install (e.g. openvswitch- common, openvswitch-switch) Mechanism Driver openvswitch Agent neutron-openvswitch-agent Communication AMQP (RabbitMQ) Provider Network Mapping 10g 10g:br-10g 25g 25g:br-25g 40g 40g:br-40g 100g 100g:br-100g
  • 17. Open vSwitch br-10G br-25G br-40G br-100G Instance (iPerf Server) Standalone Node (iPerf Client) Compute Node br-int 9.84 18.2 20.4 22.19.91 23.9 38.3 69.8 0 25 50 75 100 10 Gbps 25 Gbps 40 Gbps 100 Gbps Realized Throughput vs Baremetal (iperf - P 4) OVS Baremetal
  • 18. Advantages Disadvantages Supports overlay networking (VXLAN, GRE) Supports highly-available routers using VRRP, as well as distributed virtual routers for better fault tolerance Supports a more efficient openflow-based firewall in lieu of Iptables Supports advanced services such as TaaS, FWaaS v2, etc. Supports bonding / LAG Widely used / documented / supported Split Userspace/Kernel datapath Uses a more convoluted command set Interfaces may not be accessible using traditional tools such as tcpdump, ifconfig, etc. Open vSwitch
  • 19. Mechanism Drivers Linux Bridge Open vSwitch Open vSwitch + DPDK SR-IOV (Direct Mode) SR-IOV (Indirect Mode) Open vSwitch + ASAP2 Vector Packet Processing (VPP)
  • 20. Open vSwitch + DPDK Data Plane Development Kit, or DPDK, provides a framework for applications to directly interact with network hardware, bypassing the kernel and related interrupts, system calls, and context switching. Developers can create applications using DPDK, or in the case of OpenStack Neutron, Open vSwitch is the application leveraging DPDK – no user interaction needed*. * Ok, that’s not really true. Image credit: https://blog.selectel.com/introduction-dpdk-architecture-principles/
  • 21. Open vSwitch + DPDK Requirements: One or more network interfaces to be associated with provider network(s) An IP address to be used for overlay traffic (if applicable) Hugepage memory allocations and related flavors Compatible network interface card (NIC) NUMA awareness OVS Binary with DPDK support (e.g. openvswitch-switch-dpdk) Mechanism Driver openvswitch Agent neutron-openvswitch-agent Communication AMQP (RabbitMQ) Provider Network Mapping 10g 10g:br-10g 25g 25g:br-25g 40g 40g:br-40g 100g 100g:br-100g
  • 22. Open vSwitch + DPDK br-10G br-25G br-40G br-100G Instance (iPerf Server) Standalone Node (iPerf Client) Compute Node br-int 9.88 22 27.9 27.9 9.84 18.2 20.4 22.1 9.91 23.9 38.3 69.8 0 25 50 75 100 10 Gbps 25 Gbps 40 Gbps 100 Gbps Realized Throughout vs Baremetal (iperf –P 4) OVS + DPDK Vanilla OVS Baremetal
  • 23. Advantages Disadvantages Supports overlay networking Userspace datapath Better performance compared to vanilla OVS Uses a more convoluted command set Interfaces may not be accessible using traditional tools such as tcpdump, ifconfig, etc. Non-instance ports are not performant, including those used by DVR, FWaaS, and LBaaS Hugepages required Knowledge of NUMA topology required One or more cores tied up for poll-mode drivers (PMDs) OpenStack flavors require hugepages and CPU pinning for best results Open vSwitch + DPDK
  • 24. Mechanism Drivers Linux Bridge Open vSwitch Open vSwitch + DPDK SR-IOV (Direct Mode) SR-IOV (Indirect Mode) Open vSwitch + ASAP2 Vector Packet Processing (VPP)
  • 25. SR-IOV (Direct Mode) SR-IOV allows a PCI device to separate access to its resources, resulting in: • Single pNIC / Physical Function (PF) • Multiple vNICs / Virtual Function(s) (VF) Mechanism Driver sriovnicswitch Agent neutron-sriov-nic-agent Communication AMQP (RabbitMQ) Provider Network Mapping 10g 10g:ens1f0 25g 25g:ens1f1 40g 40g:ens3f0 100g 100g:ens3f1
  • 26. SR-IOV (Direct Mode) Instance (iPerf Server) Standalone Node (iPerf Client) Compute Node VF PF 9.91 23.9 37 70.3 9.91 23.9 38.3 69.8 0 25 50 75 100 10 Gbps 25 Gbps 40 Gbps 100 Gbps Realized Throughput vs Baremetal (iperf –P 4) SRIOV Baremetal
  • 27. Advantages Disadvantages Traffic does not traverse kernel (direct to instance) Near line-rate performance Live migration not supported (Work being done here) Interfaces may not be accessible using traditional tools such as tcpdump, ifconfig, etc. on the host Only supports instance ports Bonding / LAG not supported (Work being done here, too) Port security / security groups not supported Changes to workflow to create port ahead of instance (e.g. vnic_type=direct) Interface hotplugging not supported SR-IOV (Direct Mode)
  • 28. Mechanism Drivers Linux Bridge Open vSwitch Open vSwitch + DPDK SR-IOV (Direct Mode) SR-IOV (Indirect Mode) Open vSwitch + ASAP2 Vector Packet Processing (VPP)
  • 29. SR-IOV (Indirect Mode) SR-IOV (indirect mode) uses an intermediary device, such as macvtap*, to provide connectivity to instances. Rather than attaching the VF directly to an instance, the VF is attached to a macvtap interface which is then attached to the instance. Mechanism Driver sriovnicswitch Agent neutron-sriov-nic-agent Communication AMQP (RabbitMQ) Image courtesy of https://www.fir3net.com/UNIX/Linux/what-is-macvtap.html * NOT to be confused with the actual, ya know, macvtap mechanism driver. Provider Network Mapping 10g 10g:ens1f0 25g 25g:ens1f1 40g 40g:ens3f0 100g 100g:ens3f1
  • 30. SR-IOV (Indirect Mode) Instance (iPerf Server) Standalone Node (iPerf Client) Compute Node VF PF macvtap macvtap macvtap macvtap 9.9 16.5 20.9 19.6 9.91 23.9 38.3 69.8 0 25 50 75 100 10 Gbps 25 Gbps 40 Gbps 100 Gbps Realized Throughput vs Baremetal (iperf –P 4) Macvtap Baremetal
  • 31. Advantages Disadvantages Live migration Traffic visible via macvtap interface on compute node No additional setup beyond SR-IOV Poor performance > 10G in tests Changes to workflow to create port ahead of instance (e.g. vnic_type=macvtap) Performance benefits of SR-IOV lost SR-IOV (Indirect Mode)
  • 32. Mechanism Drivers Linux Bridge Open vSwitch Open vSwitch + DPDK SR-IOV (Direct Mode) SR-IOV (Indirect Mode) Open vSwitch + ASAP2 Vector Packet Processing (VPP)
  • 33. Open vSwitch + ASAP2 ASAP2 is a feature of certain Mellanox NICs, including ConnectX-5 and some ConnectX-4 models, that offloads Open vSwitch data-plane processing onto NIC hardware using switchdev API. ASAP2 leverages SR-IOV, iproute2, tc, and Open vSwitch to provide this functionality. Mechanism Driver openvswitch Agent neutron-openvswitch-agent Communication AMQP (RabbitMQ) Provider Network Mapping 10g 10g:br-10g 25g 25g:br-25g 40g 40g:br-40g 100g 100g:br-100g
  • 34. Open vSwitch + ASAP2 Instance (iPerf Server) Standalone Node (iPerf Client) Compute Node VF PF br-100G HW eSwitch HW eSwitch HW eSwitch HW eSwitch br-int VF Representor 9.91 24 35.7 73.6 9.91 23.9 38.3 69.8 0 25 50 75 100 10 Gbps 25 Gbps 40 Gbps 100 Gbps Realized Throughput vs Baremetal (iperf –P 4) OVS + ASAP Baremetal
  • 35. Open vSwitch + ASAP2 Advantages Disadvantages Supports LAG / bonding at vSwitch level Supports traffic mirroring via standard OVS procedures (vs SRIOV Direct) Majority of packet processing done in hardware Not officially supported on non-RHEL based operating systems The use of security groups / port security means packet processing is NOT offloaded (addressed in future updates)
  • 36. Mechanism Drivers Linux Bridge Open vSwitch Open vSwitch + DPDK SR-IOV (Direct Mode) Macvtap (Indirect Mode) Open vSwitch + ASAP2 Vector Packet Processing (VPP)
  • 37. FD.io VPP VPP is a software switch that works with DPDK to provide very fast packet processing. The networking-vpp project is responsible for providing the mechanism driver to interface with the FD.io VPP software switch. Project URL: https://wiki.openstack.org/wiki/Networking-vpp Mechanism Driver vpp Agent neutron-vpp-agent Communication etcd Provider Network Mapping 10g 10g:TwentyFiveGigabitEthernet4/0/0 25g 25g:TwentyFiveGigabitEthernet4/0/1 40g 40g:HundredGigabitEthernet8/0/0 100g 100g:HundredGigabitEthernet8/0/1
  • 38. FD.io VPP Instance (iPerf Server) Standalone Node (iPerf Client) Compute Node VPP vSwitch 9.87 24.5 16.6 16.5 9.91 23.9 38.3 69.8 0 25 50 75 100 10 Gbps 25 Gbps 40 Gbps 100 Gbps Realized Throughput vs Baremetal (iperf –P 4) VPP Baremetal
  • 39. Advantages Disadvantages Accelerated dataplane vs LXB and vanilla OVS (Up thru 25G) May still be considered experimental for the greater community May require recompile of DPDK + VPP to support certain NICs, including Mellanox Advanced services may not be supported May require OFED, and particular versions of DPDK and OVS FD.io VPP
  • 40. What is ML2 Mechanism Drivers Comparisons Summary
  • 41. iPerf - 10 Gbps 0 2 4 6 8 10 VPPASAPMacvtapSRIOVDPDKOVSLXBBaremetal Gbps 10 Gbps
  • 42. iPerf - 25 Gbps 0 5 10 15 20 25 VPPASAPMacvtapSRIOVDPDKOVSLXBBaremetal Gbps 25 Gbps
  • 43. iPerf - 40 Gbps 0 5 10 15 20 25 30 35 40 VPPASAPMacvtapSRIOVDPDKOVSLXBBaremetal Gbps 40 Gbps
  • 44. iPerf - 100 Gbps 0 10 20 30 40 50 60 70 80 90 100 VPPASAPMacvtapSRIOVDPDKOVSLXBBaremetal Gbps 100 Gbps
  • 45. T-Rex THE GOAL: Pump as much traffic as we can to see how the respective vSwitch performs compared to baremetal.
  • 46. T-Rex – 10G THE TEST: • T-Rex sfr_delay_10_1G x 10 • 120 second total duration • Initiates roughly 40,000 cps • Pumps roughly 2,000,000 pps DUT: • Virtual Machine • Ubuntu 18.04.2 LTS • 8 cores • 16 GB RAM • Mellanox ConnectX-4 Lx EN 95.60% 94.15% 40.21% 0.89% 80.97% 0.88% 24.50% 0.00% 0% 25% 50% 75% 100% LXB OVS DPDK SR-IOV Macvtap ASAP VPP Baremetal PercentageDropped (ShorterisBetter) T-Rex – Packet Loss at 10G
  • 47. T-Rex – 25G THE TEST: • T-Rex sfr_delay_10_1G x 25 • 120 second total duration • Initiates roughly 100,000 cps • Pumps roughly 5,000,000 pps DUT: • Virtual Machine • Ubuntu 18.04.2 LTS • 8 cores • 16 GB RAM • Mellanox ConnectX-4 Lx EN 99.31% 99.36% 76.32% 21.93% 93.28% 18.34% 67.37% 0.00% 0% 25% 50% 75% 100% LXB OVS DPDK SR-IOV Macvtap ASAP VPP Baremetal PercentageDropped (ShorterisBetter) T-Rex - Packet Loss at 25G
  • 48. T-Rex – 40G THE TEST: • T-Rex sfr_delay_10_1G x 40 • 120 second total duration • Initiates roughly 160,000 cps • Pumps roughly 8,000,000 pps DUT: • Virtual Machine Instance • Ubuntu 18.04.2 LTS • 8 cores • 16 GB RAM • Mellanox ConnectX-5 Ex. T-Rex – Packet Loss at 40G (percentage dropped, shorter is better): LXB 98.77%, OVS 98.07%, DPDK 82.09%, SR-IOV 40.25%, Macvtap 96.58%, ASAP 35.37%, VPP 80.24%, Baremetal 4.62%.
  • 49. T-Rex – 100G THE TEST: • T-Rex sfr_delay_10_1G x 100 • 120 second total duration • Initiates roughly 400,000 cps • Pumps roughly 20,000,000 pps DUT: • Virtual Machine Instance • Ubuntu 18.04.2 LTS • 8 cores • 16 GB RAM • Mellanox ConnectX-5 Ex. T-Rex – Packet Loss at 100G (percentage dropped, shorter is better): LXB 99.20%, OVS 98.73%, DPDK 86.97%, SR-IOV 54.13%, Macvtap 96.24%, ASAP 53.56%, VPP 86.67%, Baremetal 22.48%.
  • 50. What is ML2 Mechanism Drivers Comparisons Summary
  • 51. Summary Performance isn’t everything Operators who deploy OpenStack should consider many harder-to-quantify attributes of a given mechanism driver and related technology, including: Upstream support Bug fix completion rate Community adoption Feature completeness Hardware compatibility Ease-of-support LinuxBridge Open vSwitch SR-IOV (Direct) SR-IOV (Indirect) Open vSwitch + DPDK Open vSwitch + ASAP VPP
  • 52. Summary Tuning is almost always required Parameter tuning can often lead to increases in performance for a given virtual switch, but: Each vSwitch requires its own tweaks Different workloads may need different settings You can’t squeeze water from a stone
  • 53. Summary Hardware makes a difference Newer generations of processors required for best performance PCIe 4.0 to maximize 100G+ networking
  • 54. Summary The road less travelled can be a bumpy one Less experience means you’re on your own Testing is even more important
  • 55. Summary For best performance: For best support: SR-IOV (Direct Mode) Open vSwitch + Hardware Offloading (e.g. ASAP2) Linux Bridge Open vSwitch
  • 56. Resources DPDK: https://docs.openstack.org/neutron/stein/admin/config-ovs-dpdk.html DPDK: https://doc.dpdk.org/guides/linux_gsg/index.html OVS: https://superuser.openstack.org/articles/openvswitch-openstack-sdn/ T-Rex: https://trex-tgn.cisco.com/trex/doc/trex_manual.html Offload: https://docs.openstack.org/neutron/stein/admin/config-ovs-offload.html VPP: https://fd.io/wp-content/uploads/sites/34/2017/07/FDioVPPwhitepaperJuly2017.pdf Acceleration: https://www.metaswitch.com/blog/accelerating-the-nfv-data-plane ASAP: http://www.mellanox.com/related-docs/whitepapers/WP_SDNsolution.pdf ASAP: https://www.mellanox.com/related-docs/prod_software/ASAP2_Hardware_Offloading_for_vSwitches_User_Manual_v4.4.pdf

Editor's Notes

  1. https://etherpad.openstack.org/p/jjdenver One of the big use cases or needs for an accelerated data plane is NFV. This talk introduces the viewer/reader to various ML2 drivers and virtual switching technologies supported by OpenStack, and compares features/functionality/performance. Our goal is to shed some light on a few of the mechanism drivers and virtual switching technologies available for OpenStack, and possibly help you determine which could be a good fit for your cloud.
  2. JAMES Brief overview of what ML2 is Describe some of the most common mechanism drivers available to deployers Provide some comparisons between mech drivers and related vswitch technologies. When comparing these drivers and respective virtual switching technologies, we focused on an out-of-the-box deployment with little to no tuning, using off-the-shelf tools like iPerf and limited T-Rex tests. The results seen here may not tell the whole story, but provide some interesting data nonetheless.
  3. As new monolithic plugins came onboard to support their respective network technology, they had to implement the entire Neutron API or risk forgoing features. WHAT WERE THE FEATURES???
  4. In theory, operators could deploy ML2 drivers that would be responsible for: Programming physical hardware switches SR-IOV vSwitch like LXB and/or OVS
  5. The ML2 plugin relies on two types of drivers: type drivers and mechanism drivers. TYPE drivers describe a type of network and its attributes -- VXLAN network type, with its VNI. Or VLAN network type with its 802.1q VLAN ID MECHANISM drivers are responsible for implementing a network type. -- linuxbridge driver implements networks using linux bridges. Not all type drivers are supported by all mechanism drivers! Multiple mechanisms can be used simultaneously within a cloud to access different ports of the same virtual network. The mechanisms, such as LinuxBridge, Open vSwitch, SRIOV, and others, can utilize agents that reside on the network/compute hosts to manipulate the virtual network implementation, or even interact with physical hardware such as a switch.
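  As a hedged illustration of how type and mechanism drivers are enabled (a minimal ml2_conf.ini fragment; the driver selection and physnet names are examples, not the exact configuration used for this deck):
    # /etc/neutron/plugins/ml2/ml2_conf.ini (fragment)
    [ml2]
    type_drivers = flat,vlan,vxlan
    tenant_network_types = vxlan
    mechanism_drivers = linuxbridge
    [ml2_type_flat]
    flat_networks = 10g,25g,40g,100g
    [ml2_type_vxlan]
    vni_ranges = 1:1000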
  6. JONNY
  7. JONNY
  8. JAMES The test environment was composed of a 2-node OpenStack cloud (infra/compute) and a 3rd standalone node running an iPerf client and T-Rex for traffic generation. The compute and standalone nodes were directly cross-connected. Each node contained a dual-port Mellanox CX4-Lx EN and a dual-port CX5. We enabled jumbo frames and disabled port security/security groups for all ports involved in testing. Just to preface this: the tests demonstrated through this presentation do not focus on the ”potential” of any given driver and related technology. While some of the results surprised us, with enough tuning, we would likely have seen an increase in performance in some cases. Whether baremetal performance is attainable for some is questionable.
  9. JONNY In today’s talk we will be discussing 4 major mechanism drivers and their derivatives, including: linuxbridge open vswitch open vswitch + dpdk open vswitch + asap^2 sriov macvtap + sriov vpp
  10. Like the name implies, the linuxbridge mechanism driver uses linux bridges to provide L2 connectivity to instances. For every Neutron network – flat, vlan, vxlan, local - a linux bridge is created. Instances are attached to the respective bridge, which in turn is connected to a tagged, untagged, or vxlan interface. As a new instance is created and scheduled to a compute node, a corresponding network bridge is either created (the first time) or an existing bridge is used.
  11. The linuxbridge plugin is pretty meager in its requirements. It simply asks for a network interface (or bond) to be associated with a provider network. If using overlay tenant networks, an IP address is required for the VTEP address. The LXB agent handles the creation of linux bridges and virtual interfaces necessary to connect VM instances to the network.
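  A minimal sketch of the Linux Bridge agent configuration described here (the physnet-to-interface mappings and the VTEP address are placeholders):
    # /etc/neutron/plugins/ml2/linuxbridge_agent.ini (fragment)
    [linux_bridge]
    physical_interface_mappings = 10g:eth1,25g:eth2
    [vxlan]
    enable_vxlan = true
    local_ip = 192.0.2.20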
  12. For this test, we had a single virtual machine instance connected to four different flat provider networks simultaneously. The NICs were directly cabled to the standalone node, which acted as an iPerf client. The DUT was a VM instance with 8 cores and 16 GB of RAM. At 10G, the results were on par with what we saw against the baremetal compute node. At 25G, we start to see a drop off in performance and max out at around 18 Gbps. At 40G, the drop continues and the best we got was ~20 Gbps At 100G, there was no improvement over ~20 Gbps.
  13. JAMES
  14. JONNY
  15. Open vSwitch is a software switch that, when used in conjunction with OpenStack, uses OpenFlow to influence traffic forwarding decisions. OpenFlow rules are applied to each bridge and perform things like vlan tag manipulation, qos, possible firewalling, and more.
  16. We won’t go into the architecture of OVS, as that is much better described by others in the OVS and OpenStack community. We will say, however, that … ?
  17. When installing OpenStack from something like OSA, Kolla, TripleO or others, necessary packages will likely be installed for you.
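  For context, wiring an OVS provider bridge by hand looks roughly like this (bridge and interface names are illustrative):
    # Create the provider bridge and attach the physical NIC
    ovs-vsctl add-br br-100g
    ovs-vsctl add-port br-100g eth3
    # /etc/neutron/plugins/ml2/openvswitch_agent.ini (fragment)
    [ovs]
    bridge_mappings = 100g:br-100g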
  18. For the initial OVS tests, we used the same flavor of instance and repeated the iPerf tests. At 10G, the results were on par with what we saw against the baremetal compute node. At 25G, we start to see a drop off in performance and max out at around 18 Gbps. At 40G, the drop continues and the best we got was ~20 Gbps At 100G, there was little improvement over ~20 Gbps. So – it looked very much like LXB.
  19. JAMES
  20. With traditional network drivers, getting packets off the wire is interrupt-driven. When traffic is received by the NIC, an interrupt is generated and the CPU stops what it’s doing to grab the data and perform further processing. The more traffic, the more interrupts, resulting in less performance. DPDK uses the concept of ‘poll mode’ drivers vs interrupts, which means that one or more cores are constantly ‘polling’ the queue rather than relying on CPU interrupts for traffic processing.
  21. Users can create applications using DPDK libraries. You will have a different OVS binary (with DPDK support compiled in).
  22. The bridge setup for OVS+DPDK looks just like vanilla OVS, except that additional attributes are configured to allow proper function of DPDK acceleration. Network interfaces are added to the bridge using the same ovs-vsctl add-port command, but DPDK-specific attributes are required. At 10G, the results were on par with what we saw against the baremetal compute node, LXB and vanilla OVS. At 25G, we start to see a drop off in performance, but not as drastic as the others. We were able to sustain ~22 Gbps. At 40G, we maxxed out around 27 Gbps At 100G, the max was again ~ 27 Gbps We show vanilla OVS against OVS+DPDK to demonstrate the improvement of DPDK acceleration for this particular test scenario.
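  A sketch of those DPDK-specific attributes (the PCI address is a placeholder; syntax follows recent OVS releases):
    # The bridge must use the userspace (netdev) datapath
    ovs-vsctl add-br br-100g -- set bridge br-100g datapath_type=netdev
    # DPDK ports are attached by PCI address rather than by kernel interface name
    ovs-vsctl add-port br-100g dpdk-100g -- set Interface dpdk-100g type=dpdk options:dpdk-devargs=0000:08:00.1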
  23. Disadvantages: You must call out the number of cores to reserve for PMDs on each numa node, maybe even particular core numbers and sibling threads. You may need to isolate cores from the Linux scheduler You may need to reserve a certain number of cores for host operations Leaving you with a smaller subset of cores available to virtual machines
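  A rough sketch of the kind of reservations meant here (the memory values, CPU mask, and core list are illustrative and must match your own NUMA layout):
    # Initialize DPDK in OVS, reserve hugepage memory per NUMA node, and pin PMD threads
    ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
    ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024,1024"
    ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x0C
    # Keep the PMD cores away from the general-purpose scheduler
    GRUB_CMDLINE_LINUX="... isolcpus=2,3"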
  24. JAMES With SR-IOV, a network card can be carved up and made to appear as multiple network interfaces. You use SR-IOV when you need I/O performance that approaches that of the physical bare metal interfaces. Different NICs support varying numbers of VFs, but usually enough to support a few dozen VMs on a host. Operators must configure the neutron-sriov-agent on compute nodes; network nodes must utilize LXB or OVS. The SRIOV agent can run in parallel to the other agents on a compute. https://www.redhat.com/en/blog/red-hat-enterprise-linux-openstack-platform-6-sr-iov-networking-part-i-understanding-basics
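  A minimal sketch of the SR-IOV pieces referenced here (the physnet, interface, and network names are placeholders; mechanism_drivers in ml2_conf.ini must also include sriovnicswitch):
    # /etc/neutron/plugins/ml2/sriov_agent.ini (fragment)
    [sriov_nic]
    physical_device_mappings = 100g:eth3
    # Request a VF-backed (direct) port for the instance
    openstack port create --network 100g --vnic-type direct sriov-port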
  25. JONNY When testing SR-IOV, we attached a single virtual function per network to the instance. At 10G, the results were on par with what we saw against the baremetal compute node, LXB, OVS and DPDK At 25G, we saw performance comparable to the baremetal node with no dropoff in throughput At 40G, we saw a slight dropoff and were able to obtain approx 37 Gbps At 100G, the max ~ 70 Gbps, but right there with baremetal performance and near the max for PCIe 3.0
  26. JAMES
  27. In today’s talk we will be discussing 4 major mechanism drivers and their derivatives, including: linuxbridge open vswitch open vswitch + dpdk open vswitch + asap^2 sriov macvtap + sriov vpp
  28. SR-IOV (indirect mode) uses an intermediary device, such as macvtap, to provide connectivity to instances. Rather than attaching the VF directly to an instance, the VF is attached to a macvtap interface which is then attached to the instance. This particular architecture addresses some of the shortcomings of direct mode, especially related to live migration. But as we’ll see in the next slide, the performance benefit over 10G is effectively lost.
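  In practice the main user-visible difference from direct mode is the port's vnic_type, roughly (network and port names are placeholders):
    # Indirect mode: the VF is plumbed to the guest through a macvtap device
    openstack port create --network 100g --vnic-type macvtap macvtap-port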
  29. Adding macvtap interfaces as an intermediary between the instance and the NIC resulted in a significant drop in performance at the 25/40/100 marks – comparable to or WORSE than that of LXB or vanilla OVS. At 10G, the results were on par with what we saw against the baremetal compute node, LXB, OVS and DPDK and direct SRIOV At 25G, however, the results were even lower than that of LXB or OVS, about 16 Gbps sustained At 40G, we sat around 21 Gbps At 100G, nearly the same at around 20 Gbps. Any performance gained from using virtual functions was lost in sriov indirect mode with macvtap.
  30. JAMES
  31. JAMES ASAP squared DIRECT works in conjunction with Open vSwitch, and offloads packet processing onto the embedded switch in the NIC using the switchdev API. ASAP FLEX works differently, in that some packets are still processed in software. Flex is not in scope or supported with openstack at this time. More ASAP info at: http://www.mellanox.com/related-docs/whitepapers/WP_SDNsolution.pdf
  32. With ASAP squared, the concept of an eSwitch and VF representor port is introduced. An eSwitch is a switch that lives on the NIC. A representor port is a virtual representation of an eSwitch port. These ports are created and associated with a VF and attached to the OVS integration bridge. The instance itself is attached to the VF and requires drivers for the VF, much like SR-IOV direct mode. When you send traffic to the representor port, it arrives on the VF and to the VM. When the VF receives a packet from the VM and the eSwitch doesn't have a rule, the packet hits the SLOW PATH and the packet is processed in software. Subsequent packets are processed in hardware. ----------- At 10G, the results were on par with what we saw against the baremetal compute node, LXB, OVS and DPDK and direct SRIOV. Basically everything we've tested so far. At 25G, the results were as good as BM and similar to SR-IOV direct. At 40G and 100G, about the same and certainly within the margin of error. BENEFITS of ASAP??
  33. If eSwitch resources are consumed, fallback mode is to the slow path (kernel)
  34. In today’s talk we will be discussing 4 major mechanism drivers and their derivatives, including: linuxbridge open vswitch open vswitch + dpdk open vswitch + asap^2 sriov macvtap + sriov vpp
  35. With VPP, there is a single vswitch connecting instance ports and network interfaces. At 10G, the results were on par with what we saw against the baremetal compute node and everything else tested. At 25G, the results were as good as BM and similar to SR-IOV direct. At 40G and 100G, significant drop off – worst performing of the bunch.
  36. With the 25Gbps test, we start to see a drop in performance for LXB, OVS and Macvtap.
  37. With 40Gbps, the drop continues. VPP took a significant hit.
  38. Line rate on a 100G NIC in a PCIe 3.0 slot (16x) is slated to be roughly 80Gbps max. Here we see DPDK, SR-IOV and ASAP keeping up with baremetal performance.
  39. With the T-Rex test, we set up a virtual machine as a router simply by enabling ip forwarding and configuring static routes described in the T-Rex configuration guide. We sought to run an SFR 1G profile that includes a mix of traffic of different protocols and payload sizes such as http/s, exchange, pop, smtp, sip, and dns.
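  That router-on-a-VM setup amounts to something like the following inside the DUT (the next-hop addresses are placeholders; 16.0.0.0/8 and 48.0.0.0/8 are the default client/server ranges used by the T-Rex SFR profiles):
    # Forward packets and route the T-Rex client/server ranges back out the test NICs
    sysctl -w net.ipv4.ip_forward=1
    ip route add 16.0.0.0/8 via 10.10.10.2
    ip route add 48.0.0.0/8 via 10.10.20.2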
  40. For the 10G test, we executed the profile and multiplied it by 10 to have T-Rex sustain 10Gbps of traffic over a 120 second period. Against the baremetal node, there was zero packet loss. For SR-IOV/ASAP, we saw less than 1% packet loss over the 120 second period. VPP and DPDK followed with 24 and 40%, respectively. Indirect SR-IOV with macvtap, OVS and LXB brought up the rear with 80-96% packet loss. Common factor between the three worst performers? Virtio with no dataplane acceleration in place (i.e. DPDK).
  41. For the 25G test, we executed the profile and multiplied it by 25 to have T-Rex sustain 25Gbps of traffic over a 120 second period. Against the baremetal node, there was roughly zero packet loss. For SR-IOV/ASAP, we saw between 18-22% packet loss over the 120 second period. VPP and DPDK followed with 67 and 76%, respectively. Indirect SR-IOV with macvtap, OVS and LXB flirted with nearly 100% packet loss
  42. For the 40G test, it gets worse. Against the baremetal node, packet loss climbed to roughly 5%. For SR-IOV/ASAP, we saw between 35-40% packet loss over the 120 second period. VPP and DPDK followed with 80 and 82%, respectively. Indirect SR-IOV with macvtap, OVS and LXB flirted with nearly 100% packet loss. ~~ It is possible we would have seen better performance splitting incoming and outgoing traffic across two different NICs ~~
  43. With the 40 and 100 tests, T-Rex began to report ‘buffer full’ issues. Performance suffered across the board, but SR-IOV and ASAP still came out ahead of the others. ~~ Like the 40G test, it is possible we would have seen better performance splitting incoming and outgoing traffic across two different NICs as well as being more attentive to tuning ~~
  44. Upstream support: The built-in mechanism drivers see the highest adoption rates and are supported by a large community. IRC, Launchpad, and mailing lists exist to discuss problems and feature requests. Bug fix completion rates: Smaller projects may have fewer individuals available to triage and fix bugs. Community adoption: There is strength in numbers. A higher adoption rate of a given driver allows for more resources to be assigned to the driver. Feature completeness: A driver may excel at a few use cases, but may not provide features needed for a well-rounded cloud. This will vary from driver to driver and use-case to use-case. What’s important to one cloud may not be to another. Hardware compatibility: Some drivers are limited to certain hardware – ASAP -> Mellanox, and DPDK -> Mellanox, Intel, and others (but only a subset). Ease of support: The most difficult to quantify – deployment tools may struggle with certain drivers, deploying in a one-size-fits-all style, or teams may resist new tooling. More interaction with vendors may be required, and heavy optimization is needed in some cases to eke out the best performance.
  45. JAMES Each vSwitch requires its own tweaks: For a DPDK-accelerated vSwitch, this may mean differences in hugepage configuration (2M vs 1G), hugepage allocation (static vs transparent) and (1G vs 4G based on MTU (DPDK docs)), NUMA considerations, etc. Different workloads, different settings: Great care should be taken to ensure an instance is leveraging the same NUMA resources as the NIC for best performance. Sometimes you must straddle NUMA nodes and performance could suffer. Can’t polish a turd: LXB and Vanilla OVS may do well up to a little over 10Gbps, but no amount of tuning will get them on par with DPDK or SR-IOV type performance.
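  As one example of those tweaks, static 1G hugepages are typically reserved at boot (the page count is illustrative):
    GRUB_CMDLINE_LINUX="... default_hugepagesz=1G hugepagesz=1G hugepages=16"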
  46. JAMES Newer gen CPU PCIe matters for performance TAKE OUT SPECTRE
  47. JAMES There is a tendency to want to run the latest technology and live on the bleeding edge, but doing so comes at a cost. In the case of a DPDK-accelerated vSwitch, the lack of drivers for certain hardware in the upstream release may mean you’re rolling your own versions of DPDK, OVS, or VPP. Maintaining this configuration in the long run can be painful and not worth the trouble, especially if your use cases don’t really justify the cost.
  48. If you’re looking for more of a set-it-and-forget-it approach, your best bet is to stay in the upstream lanes of LXB or OVS. Both are heavily contributed to and have been around since the beginning. While LXB may lack some features seen in an OVS-based solution, like DVR, TaaS, and FWaaS v2, it does its job well enough and remains a favorite of support engineers everywhere. Both LXB and vanilla OVS offer ”good enough” performance for < 10G networking.