Architecting a private cloud to meet the use cases of its users can be a daunting task. How do you determine which of the many L2/L3 Neutron plugins and drivers to implement? Does network performance outweigh reliability? Are overlay networks just as performant as VLAN networks? The answers to these questions will drive the appropriate technology choice.
In this presentation, we will look at many of the common drivers built around the ML2 framework, including LinuxBridge, OVS, OVS+DPDK, SR-IOV, and more, and will provide performance data to help drive decisions around selecting a technology that's right for the situation. We will discuss our experience with some of these technologies, and the pros and cons of one technology over another in a production environment.
This presentation covers the basics of Open vSwitch and its components. Open vSwitch is an open-source implementation of OpenFlow by the Nicira team.
It also talks about Open vSwitch and its role in OpenStack Networking.
Using OVN (Open Virtual Network), you can build a virtual network that spans multiple servers (hypervisors/chassis) running OVS (Open vSwitch).
This slide deck is a set of notes on logical network layout and sample configuration with OVN, showing how to create two logical switches connecting four VMs running on two chassis.
Taking Security Groups to Ludicrous Speed with OVS (OpenStack Summit 2015) - Thomas Graf
Open vSwitch (OVS) has long been a critical component of Neutron's reference implementation, offering reliable and flexible virtual switching for cloud environments.
Being an early adopter of the OVS technology, Neutron's reference implementation made some compromises to stay within the early, stable feature set OVS exposed. In particular, Security Groups (SG) have so far been implemented by leveraging hybrid Linux bridging and iptables, which comes with a significant performance overhead. However, thanks to recent developments and ongoing improvements within the OVS community, we are now able to implement feature-complete security groups directly within OVS.
In this talk we will summarize the existing Security Groups implementation in Neutron and compare its performance with the Open vSwitch-only approach. We hope this analysis will form the foundation of future improvements to the Neutron Open vSwitch reference design.
We share lessons learned from operating OpenStack.
Contents
1. TOAST Cloud today
2. Why we chose OpenStack
3. Deployment difficulties and how we overcame them
4. Use cases
5. Problems still to solve
Audience
- Anyone who wants to use TOAST Cloud
- Anyone hearing about WMI for the first time
This slide deck was created to make Open vSwitch easier to understand, so I tried to make it practical: if you simply follow this scenario, you will pick up some working knowledge of OVS.
In this document I mainly use just two commands, "ip" and "ovs-vsctl", to show what they can do.
[Open Source Consulting] OpenStack Ceph, Neutron, HA, Multi-Region - Ji-Woong Choi
This deck contains an explanation of OpenStack Ceph & Neutron.
1. OpenStack
2. How to create instance
3. Ceph
- Ceph
- OpenStack with Ceph
4. Neutron
- Neutron
- How neutron works
5. OpenStack HA - controller, L3 agent
6. OpenStack multi-region
Open vSwitch Offload: Conntrack and the Upstream Kernel - Netronome
Offloading all or part of the Open vSwitch datapath to SmartNICs has been shown to not only release CPU resources on the server, but improve traffic processing performance. Recently steps have been made to support such offloading in the upstream Linux kernel. This has focused on creating an OVS datapath using the TC flower filter and utilizing the offload hooks already present here. This presentation focuses on how Connection Tracking (Conntrack) may fit into this model. It describes current work being undertaken with the Netfilter community to allow offloading of Conntrack entries. It continues to link this work with the offloading of Conntrack rules within OVS-TC.
Meshing OpenStack and Bare Metal Networks with EVPN - David Iles, Mellanox Te...
Audience Level
Intermediate
Synopsis
The latest SDN revolution is centered on creating efficient virtualized data center networks using VXLAN & EVPN. We will talk about the scale, performance, and cost advantages of using a modern controller-free virtualized network solution built on 100 Gigabit Ethernet switches with hardware-based VXLAN routing. We will explore the ease of automating such a network in an OpenStack environment and take you through a real-world use case of using an OpenStack Network Node to bridge between a bare metal cloud (EVPN) and a fully virtualized cloud environment (orchestrated by Neutron).
Speaker Bio:
David has held leadership roles at 3COM, Cisco Systems, Nortel Networks, and IBM where he promoted advanced network technologies including High Speed Ethernet, Layer 4-7 switching, Virtual Machine-aware networking, and Software Defined Networking.
David’s current focus is on the evolving landscape of data center networking, scale out storage, Open Networking, and cloud computing.
Enterprise datacenter virtualization and cloud computing place new demands on the network. Traditionally, virtual workloads were connected to the physical network with VLANs via virtual switches acting as bridges. As the demands on scaling and automation grow, these models reach their limits.
At this OpenTuesday, Thomas Graf gave an insight into protocols and technologies such as OpenFlow, VXLAN, OpenStack Neutron, and Open vSwitch, which are used to implement new, automated next-generation network concepts such as Software Defined Networking and Network Function Virtualization.
OpenStack Networks the Web-Scale Way - Scott Laffer, Cumulus Networks
Audience Level
Beginner
Synopsis
Layer 2 versus Layer 3, MLAG, Spanning-Tree, switch mechanism drivers, overlays and routing-on-the-host — What scales and what does not? The underlying plumbing of an OpenStack network is something you’d rather not have to think about. This presentation examines the network architectures of web-scale and large enterprise OpenStack users and how those same efficiencies can be used in deployments of all sizes.
Speaker Bio:
Scott is a Member of Technical Staff at Cumulus Networks where he designs, supports and deploys web-scale technologies and architectures in enterprise networks globally. Prior to becoming a founding member of the Cumulus office in Australia, Scott started his career as a network administrator before joining Cisco Systems to support their data centre products.
OpenStack Australia Day Melbourne 2017
https://events.aptira.com/openstack-australia-day-melbourne-2017/
DragonFlow: SDN-based distributed virtual router for OpenStack Neutron - Eran Gampel
Dragonflow is an implementation of a fully distributed virtual router for OpenStack® Neutron™ that is based on a lightweight SDN controller.
blog.gampel.net
Presentation given at the 2017 LinuxCon China
The boom in container technology brings obvious advantages for the cloud: simpler and faster deployment, portability, and lightweight cost. But the networking challenges are significant: users need to restructure their networks and support container deployments within current cloud frameworks, alongside VMs.
In this presentation, we introduce a new container networking solution that provides one management framework for working with different network components through an open, friendly modelling mechanism. iCAN simplifies network deployment and management with most orchestration systems and a variety of data-plane components, and its extensible architecture can define and validate Service Level Agreements (SLAs) for cloud-native applications, an important factor for enterprises delivering successful and stable services via containers.
Nicolai van der Smagt has been in the business of designing, implementing and running SP networks for over 15 years. He has worked with DOCSIS, DSL and FTTH operators. Nowadays, Nicolai is helping Infradata’s pan-European customers build better access, aggregation and core networks, but his focus is on the data center, SDN, NFV and the whitebox switching revolution. His motto: “Simplicity is sophistication”.
Topic of Presentation: SDN
Language: English
Abstract:
Open source SDN that actually works - today
OpenContrail is an open source (Apache 2.0 licensed) project that provides network virtualization in the data center, using tried and tested open standards. It provides northbound APIs, integrates with OpenStack or CloudStack, and is available today!
In this slot we’ll show you the architecture and ideas behind the technology and how OpenContrail enables you to avoid the pitfalls that other (closed) SDN solutions bring. If time permits we’ll also demo the technology.
Overview of OpenStack nova-networking evolution towards Neutron. Architecture overview of OVS plugin, ML2, and MidoNet Overlay product. Overview and example of Heat templates, along with automation of physical switches using Cumulus
An Introduction to OPNFV (Open Platform for NFV) - Mario Cho
OPNFV is the Open Platform for Network Function Virtualization.
This lecture was given at Open Software Conference 2015.
It explains the technologies that make up OPNFV, such as the Linux kernel, virtualization, software-defined networking, OpenStack, OpenDaylight, and network function virtualization.
VMware ESXi - Intel and QLogic NIC throughput difference v0.6 - David Pasek
We are observing different network throughputs on Intel X710 NICs and QLogic FastLinQ QL41xxx NICs. ESXi hardware supports NIC hardware offloading and queueing on 10Gb, 25Gb, 40Gb and 100Gb NIC adapters. Multiple hardware queues per NIC interface (vmnic) and multiple software threads on the ESXi VMkernel are depicted and documented in this paper, which may or may not be the root cause of the observed problem. The key objective of this document is to clearly document and collect NIC information on two specific network adapters and do a comparison to find the difference, or at least a root-cause hypothesis, for further troubleshooting.
Packet processing in the fast path involves looking up bit patterns and deciding on actions at line rate. The complexity of these functions at line rate has traditionally been handled by ASICs and NPUs. However, with the availability of faster and cheaper CPUs and hardware/software acceleration, it is possible to move these functions onto commodity hardware. This tutorial covers the various building blocks available to speed up packet processing, both hardware-based (e.g. SR-IOV, RDT, QAT, VMDq, VT-d) and software-based (e.g. DPDK, FD.io/VPP, OVS), and gives hands-on lab experience with DPDK and FD.io fast-path lookup in the following sessions. 1: Introduction to Building Blocks: Sujata Tibrewala
Pushing Packets - How do the ML2 Mechanism Drivers Stack Up
1. Pushing Packets: How do the ML2 Drivers Stack Up
James Denton – Network Architect (Twitter: @jimmdenton)
Jonathan Almaleh – Network Architect (Twitter: @ckent99999)
Open Infrastructure Summit, Denver, Colorado 2019
3. In the beginning, there was the monolithic plugin.
Using a monolithic plugin, operators were limited to a single virtual networking technology within a given cloud.
Writing new plugins was difficult, and often resulted in duplicate code and effort.
What is ML2
4. Around the Havana release, the Modular Layer 2 (ML2) plugin was introduced.
By using the ML2 plugin and related drivers, operators were no longer limited to a single virtual networking technology within a given cloud.
With the modular approach, developers could focus on developing drivers for their respective L2 mechanisms without having to reimplement other components.
What is ML2
5. Type drivers: define how an OpenStack network is realized. Examples are VLAN, VXLAN, GENEVE, Flat, etc.
Mechanism drivers: are responsible for implementing the network. Examples include Open vSwitch, Linux Bridge, SR-IOV, etc.
What is ML2
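To make the split concrete, the following is a minimal, illustrative ML2 configuration sketch; the file path is the usual default and the ranges/values are placeholders, not the settings used in these tests.
# /etc/neutron/plugins/ml2/ml2_conf.ini (illustrative excerpt)
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,openvswitch,sriovnicswitch

[ml2_type_vlan]
# provider label : usable 802.1q VLAN ID range
network_vlan_ranges = physnet1:100:200

[ml2_type_vxlan]
# VNI range available for tenant (overlay) networks
vni_ranges = 1:1000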
8. Test Environment
iPerf Server (VM) / DUT
CPU 8-core Intel Xeon E5-2678 v3 @ 2.5 Ghz
Memory 16 GB
Operating System Ubuntu 18.04.2 LTS 4.15.0-47-generic
Mechanism Driver Network Driver (in VM)
Linux Bridge vhost_net
Open vSwitch +/- DPDK vhost_net
SR-IOV (Direct) mlx5_core
SR-IOV (Indirect) vhost_net
Open vSwitch + ASAP mlx5_core
VPP vhost_net
Compute Node Specs
CPU 2x 12-core Intel Xeon E5-2678 v3 @ 2.5 Ghz
Memory 64 GB
NIC Mellanox ConnectX-4 Lx EN (10/25)
Mellanox ConnectX-5 Ex (40/100)
Operating System Ubuntu 18.04.2 LTS 4.15.0-47-generic
Kernel Parameters GRUB_CMDLINE_LINUX="intel_iommu=on iommu=pt"
OpenStack Release Stein
iPerf Client / T-Rex Specs
CPU 2x 8-core Intel Xeon E5-2667 v2 @ 3.3 Ghz
Memory 128 GB
NIC Mellanox ConnectX-4 Lx EN (10/25)
Mellanox ConnectX-5 Ex (40/100)
Operating System Ubuntu 18.04.2 LTS 4.15.0-47-generic
Kernel Parameters GRUB_CMDLINE_LINUX="intel_iommu=on iommu=pt"
9. Mechanism Drivers
Linux Bridge
Open vSwitch
Open vSwitch + DPDK
SR-IOV (Direct Mode)
SR-IOV (Indirect Mode)
Open vSwitch + ASAP2
Vector Packet Processing (VPP)
10. Linux Bridge
The Linux Bridge driver implements a single Linux bridge per network on a given compute node.
A bridge can be connected to a physical interface, vlan interface, or vxlan interface, depending on the network type.
[Diagram: two instances attached to bridge brq0]
11. Linux Bridge
Requirements:
One or more network interfaces to be associated with provider network(s)
An IP address to be used for overlay traffic (if applicable)
Mechanism Driver linuxbridge
Agent neutron-linuxbridge-agent
Communication AMQP (RabbitMQ)
Provider Network Mapping
10g 10g:ens1f0
25g 25g:ens1f1
40g 40g:ens3f0
100g 100g:ens3f1
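As a hedged sketch of what those requirements translate to on disk, a Linux Bridge agent configuration might look like the following; the interface mappings mirror the table above, and the VTEP address is a placeholder.
# /etc/neutron/plugins/ml2/linuxbridge_agent.ini (illustrative excerpt)
[linux_bridge]
# provider label : physical interface, matching the provider network mappings above
physical_interface_mappings = 10g:ens1f0,25g:ens1f1,40g:ens3f0,100g:ens3f1

[vxlan]
enable_vxlan = true
# VTEP address used for overlay (VXLAN) traffic
local_ip = 172.29.240.10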
13. Linux Bridge
Advantages:
Supports overlay networking (VXLAN)
Easy to troubleshoot using well-known tools
Supports highly-available routers using VRRP
Supports bonding / LAG
Widely used / documented / supported
Kernel datapath
Disadvantages:
Iptables is likely in-path, even with port security disabled
Does not support distributed virtual routers
Does not support advanced services such as TaaS, FWaaS v2, and others
Usually lags in new features compared to OVS
14. Mechanism Drivers
Linux Bridge
Open vSwitch
Open vSwitch + DPDK
SR-IOV (Direct Mode)
SR-IOV (Indirect Mode)
Open vSwitch + ASAP2
Vector Packet Processing (VPP)
15. Open vSwitch
Open vSwitch is an open-source virtual switch that connects virtual network resources to the physical network infrastructure.
The openvswitch mechanism driver implements multiple OVS-based virtual switches and uses them for various purposes.
The integration bridge (br-int) connects virtual machine instances to local Layer 2 networks.
The tunnel bridge (br-tun) connects Layer 2 networks between compute nodes using overlay networks such as VXLAN or GRE.
The provider bridge (br-provider) connects local Layer 2 networks to the physical network infrastructure.
16. Open vSwitch
Requirements:
One or more network interfaces to be associated with provider network(s)
An IP address to be used for overlay traffic (if applicable)
Packages not included with base install (e.g. openvswitch-common, openvswitch-switch)
Mechanism Driver openvswitch
Agent neutron-openvswitch-agent
Communication AMQP (RabbitMQ)
Provider Network Mapping
10g 10g:br-10g
25g 25g:br-25g
40g 40g:br-40g
100g 100g:br-100g
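A comparable sketch for Open vSwitch, assuming the bridge names above; the provider bridges are created manually and the agent is pointed at them (the VTEP address is again a placeholder).
# Create a provider bridge and attach the physical interface to it
ovs-vsctl add-br br-10g
ovs-vsctl add-port br-10g ens1f0

# /etc/neutron/plugins/ml2/openvswitch_agent.ini (illustrative excerpt)
[ovs]
bridge_mappings = 10g:br-10g,25g:br-25g,40g:br-40g,100g:br-100g
# VTEP address used for overlay traffic
local_ip = 172.29.240.10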
18. Open vSwitch
Advantages:
Supports overlay networking (VXLAN, GRE)
Supports highly-available routers using VRRP, as well as distributed virtual routers for better fault tolerance
Supports a more efficient openflow-based firewall in lieu of Iptables
Supports advanced services such as TaaS, FWaaS v2, etc.
Supports bonding / LAG
Widely used / documented / supported
Split Userspace/Kernel datapath
Disadvantages:
Uses a more convoluted command set
Interfaces may not be accessible using traditional tools such as tcpdump, ifconfig, etc.
19. Mechanism Drivers
Linux Bridge
Open vSwitch
Open vSwitch + DPDK
SR-IOV (Direct Mode)
SR-IOV (Indirect Mode)
Open vSwitch + ASAP2
Vector Packet Processing (VPP)
20. Open vSwitch + DPDK
The Data Plane Development Kit, or DPDK, provides a framework for applications to directly interact with network hardware, bypassing the kernel and related interrupts, system calls, and context switching.
Developers can create applications using DPDK, or, in the case of OpenStack Neutron, Open vSwitch is the application leveraging DPDK – no user interaction needed*.
* Ok, that’s not really true.
Image credit: https://blog.selectel.com/introduction-dpdk-architecture-principles/
21. Open vSwitch + DPDK
Requirements:
One or more network interfaces to be associated with provider network(s)
An IP address to be used for overlay traffic (if applicable)
Hugepage memory allocations and related flavors
Compatible network interface card (NIC)
NUMA awareness
OVS binary with DPDK support (e.g. openvswitch-switch-dpdk)
Mechanism Driver openvswitch
Agent neutron-openvswitch-agent
Communication AMQP (RabbitMQ)
Provider Network Mapping
10g 10g:br-10g
25g 25g:br-25g
40g 40g:br-40g
100g 100g:br-100g
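As an illustrative sketch only: enabling DPDK typically means reserving hugepages on the kernel command line and setting a few Open_vSwitch other_config knobs. The hugepage count, socket memory, and PMD core mask below are placeholders and must match the host's NUMA topology.
# Kernel command line: reserve 1G hugepages in addition to the IOMMU settings above
GRUB_CMDLINE_LINUX="intel_iommu=on iommu=pt default_hugepagesz=1G hugepagesz=1G hugepages=16"

# Enable the DPDK datapath in Open vSwitch
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
# Hugepage memory per NUMA socket (MB) and the cores dedicated to poll-mode drivers
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024,1024"
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x30003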
23. Open vSwitch + DPDK
Advantages:
Supports overlay networking
Userspace datapath
Better performance compared to vanilla OVS
Disadvantages:
Uses a more convoluted command set
Interfaces may not be accessible using traditional tools such as tcpdump, ifconfig, etc.
Non-instance ports are not performant, including those used by DVR, FWaaS, and LBaaS
Hugepages required
Knowledge of NUMA topology required
One or more cores tied up for poll-mode drivers (PMDs)
OpenStack flavors require hugepages and CPU pinning for best results
24. Mechanism Drivers
Linux Bridge
Open vSwitch
Open vSwitch + DPDK
SR-IOV (Direct Mode)
SR-IOV (Indirect Mode)
Open vSwitch + ASAP2
Vector Packet Processing (VPP)
25. SR-IOV (Direct Mode)
SR-IOV allows a PCI device to separate access to its resources, resulting in:
• Single pNIC / Physical Function (PF)
• Multiple vNICs / Virtual Function(s) (VF)
Mechanism Driver sriovnicswitch
Agent neutron-sriov-nic-agent
Communication AMQP (RabbitMQ)
Provider Network Mapping
10g 10g:ens1f0
25g 25g:ens1f1
40g 40g:ens3f0
100g 100g:ens3f1
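A hedged example of the workflow change: virtual functions are created on the PF, and the Neutron port is created with vnic_type=direct before the instance boots. The VF count, flavor, and image names are illustrative.
# Create virtual functions on the physical function
echo 8 > /sys/class/net/ens1f0/device/sriov_numvfs

# Create the port first, then attach it at boot
openstack port create --network 10g --vnic-type direct sriov-port-1
openstack server create --flavor perf.8c16g --image bionic \
  --nic port-id=<sriov-port-1 UUID> sriov-test-vm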
27. SR-IOV (Direct Mode)
Advantages:
Traffic does not traverse kernel (direct to instance)
Near line-rate performance
Disadvantages:
Live migration not supported (work being done here)
Interfaces may not be accessible using traditional tools such as tcpdump, ifconfig, etc. on the host
Only supports instance ports
Bonding / LAG not supported (work being done here, too)
Port security / security groups not supported
Changes to workflow to create port ahead of instance (e.g. vnic_type=direct)
Interface hotplugging not supported
28. Mechanism Drivers
Linux Bridge
Open vSwitch
Open vSwitch + DPDK
SR-IOV (Direct Mode)
SR-IOV (Indirect Mode)
Open vSwitch + ASAP2
Vector Packet Processing (VPP)
29. SR-IOV (Indirect Mode)
SR-IOV (indirect mode) uses an intermediary device, such as macvtap*, to provide connectivity to instances. Rather than attaching the VF directly to an instance, the VF is attached to a macvtap interface which is then attached to the instance.
Mechanism Driver sriovnicswitch
Agent neutron-sriov-nic-agent
Communication AMQP (RabbitMQ)
Image courtesy of https://www.fir3net.com/UNIX/Linux/what-is-macvtap.html
* NOT to be confused with the actual, ya know, macvtap mechanism driver.
Provider Network Mapping
10g 10g:ens1f0
25g 25g:ens1f1
40g 40g:ens3f0
100g 100g:ens3f1
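For indirect mode the agent setup is unchanged; only the vnic_type requested on the port differs. A hypothetical example (flavor, image, and network names are illustrative):
openstack port create --network 10g --vnic-type macvtap macvtap-port-1
openstack server create --flavor perf.8c16g --image bionic \
  --nic port-id=<macvtap-port-1 UUID> macvtap-test-vm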
31. SR-IOV (Indirect Mode)
Advantages:
Live migration
Traffic visible via macvtap interface on compute node
No additional setup beyond SR-IOV
Disadvantages:
Poor performance > 10G in tests
Changes to workflow to create port ahead of instance (e.g. vnic_type=macvtap)
Performance benefits of SR-IOV lost
32. Mechanism Drivers
Linux Bridge
Open vSwitch
Open vSwitch + DPDK
SR-IOV (Direct Mode)
SR-IOV (Indirect Mode)
Open vSwitch + ASAP2
Vector Packet Processing (VPP)
33. Open vSwitch + ASAP2
ASAP2 is a feature of certain Mellanox NICs, including ConnectX-5 and some ConnectX-4 models, that offloads Open vSwitch data-plane processing onto NIC hardware using the switchdev API.
ASAP2 leverages SR-IOV, iproute2, tc, and Open vSwitch to provide this functionality.
Mechanism Driver openvswitch
Agent neutron-openvswitch-agent
Communication AMQP (RabbitMQ)
Provider Network Mapping
10g 10g:br-10g
25g 25g:br-25g
40g 40g:br-40g
100g 100g:br-100g
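A rough sketch of how offload is enabled on the OVS side and how an offload-capable port is requested; the network name is illustrative, and the binding-profile syntax follows the upstream OVS hardware-offload documentation.
# Tell OVS to push flows down to the NIC eSwitch via TC
ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
systemctl restart openvswitch-switch

# Offloaded ports are direct (SR-IOV style) ports carrying the switchdev capability
openstack port create --network 25g --vnic-type direct \
  --binding-profile '{"capabilities": ["switchdev"]}' asap-port-1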
35. Open vSwitch + ASAP2
Advantages:
Supports LAG / bonding at vSwitch level
Supports traffic mirroring via standard OVS procedures (vs SR-IOV Direct)
Majority of packet processing done in hardware
Disadvantages:
Not officially supported on non-RHEL based operating systems
The use of security groups / port security means packet processing is NOT offloaded (addressed in future updates)
36. Mechanism Drivers
Linux Bridge
Open vSwitch
Open vSwitch + DPDK
SR-IOV (Direct Mode)
Macvtap (Indirect Mode)
Open vSwitch + ASAP2
Vector Packet Processing (VPP)
37. FD.io VPP
VPP is a software switch that works with DPDK to provide very fast packet processing.
The networking-vpp project is responsible for providing the mechanism driver to interface with the FD.io VPP software switch.
Project URL:
https://wiki.openstack.org/wiki/Networking-vpp
Mechanism Driver vpp
Agent neutron-vpp-agent
Communication etcd
Provider Network Mapping
10g 10g:TwentyFiveGigabitEthernet4/0/0
25g 25g:TwentyFiveGigabitEthernet4/0/1
40g 40g:HundredGigabitEthernet8/0/0
100g 100g:HundredGigabitEthernet8/0/1
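A hypothetical ml2_conf.ini sketch for networking-vpp; the [ml2_vpp] option names and values below are assumptions based on the project documentation and should be checked against the release in use.
# /etc/neutron/plugins/ml2/ml2_conf.ini (hypothetical excerpt)
[ml2]
mechanism_drivers = vpp

[ml2_vpp]
# etcd endpoint used for state distribution between server and agents (assumed option names)
etcd_host = 172.29.236.10
etcd_port = 2379
# provider label : VPP interface name, matching the mappings above
physnets = 25g:TwentyFiveGigabitEthernet4/0/1,100g:HundredGigabitEthernet8/0/1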
39. FD.io VPP
Advantages:
Accelerated dataplane vs LXB and vanilla OVS (up through 25G)
Disadvantages:
May still be considered experimental for the greater community
May require recompile of DPDK + VPP to support certain NICs, including Mellanox
Advanced services may not be supported
May require OFED, and particular versions of DPDK and OVS
51. Summary
Performance isn’t everything.
Operators who deploy OpenStack should consider many harder-to-quantify attributes of a given mechanism driver and related technology, including:
Upstream support
Bug fix completion rate
Community adoption
Feature completeness
Hardware compatibility
Ease-of-support
LinuxBridge
Open vSwitch
SR-IOV (Direct)
SR-IOV (Indirect)
Open vSwitch + DPDK
Open vSwitch + ASAP
VPP
52. Summary
Tuning is almost always required
Parameter tuning can often lead to increases in performance for a given virtual switch, but:
Each vSwitch requires its own tweaks
Different workloads may need different settings
You can’t squeeze water from a stone
53. Summary
Hardware makes a difference
Newer generations of processors required for best performance
PCIe 4.0 to maximize 100G+ networking
54. Summary
The road less travelled can be a bumpy one
Less experience means you’re on your own
Testing is even more important
55. Summary
For best performance: SR-IOV (Direct Mode), Open vSwitch + Hardware Offloading (e.g. ASAP2)
For best support: Linux Bridge, Open vSwitch
https://etherpad.openstack.org/p/jjdenver
One of the big use cases for an accelerated data plane is NFV. This talk introduces the viewer/reader to various ML2 drivers and virtual switching technologies supported by OpenStack, and compares features, functionality, and performance.
Our goal is to shed some light on a few of the mechanism drivers and virtual switching technologies available for OpenStack, and possibly help you determine which could be a good fit for your cloud.
JAMES
Brief overview of what ML2 is
Describe some of the most common mechanism drivers available to deployers
Provide some comparisons between mech drivers and related vswitch technologies.
When comparing these drivers and their respective virtual switching technologies, we focused on an out-of-the-box deployment with little to no tuning, using off-the-shelf tools like iPerf and limited T-Rex tests. The results seen here may not tell the whole story, but they provide some interesting data nonetheless.
As new monolithic plugins came onboard to support their respective network technology, they had to implement the entire Neutron API or risk forgoing features.
WHAT WERE THE FEATURES???
In theory, operators could deploy ML2 drivers that would be responsible for:
Programming physical hardware switches
SR-IOV
vSwitches like LXB and/or OVS
The ML2 plugin relies on two types of drivers: type drivers and mechanism drivers.
TYPE drivers describe a type of network and its attributes -- VXLAN network type, with its VNI. Or VLAN network type with its 802.1q VLAN ID
MECHANISM drivers are responsible for implementing a network type. -- linuxbridge driver implements networks using linux bridges.
Not all type drivers are supported by all mechanism drivers!
Multiple mechanisms can be used simultaneously within a cloud to access different ports of the same virtual network.
The mechanisms, such as LinuxBridge, Open vSwitch, SRIOV, and others, can utilize agents that reside on the network/compute hosts to manipulate the virtual network implementation, or even interact with physical hardware such as a switch.
JONNY
JONNY
JAMES
The test environment was composed of a 2-node openstack cloud (infra/compute) and a 3rd standalone node running an iperf client and T-Rex for traffic generation. The compute and standalone nodes were directly cross-connected. Each node contained a single dual-port Mellanox CX4-Lx En and CX5. We enabled jumbo frames and disabled port security/security groups for all ports involved in testing.
Just to preface this - the tests demonstrated throughout this presentation do not focus on the "potential" of any given driver and related technology. While some of the results surprised us, with enough tuning we would likely have seen an increase in performance in some cases. Whether bare-metal performance is attainable for some is questionable.
JONNY
In today’s talk we will be discussing four major mechanism drivers and their derivatives, including:
linuxbridge
open vswitch
open vswitch + dpdk
open vswitch + asap^2
sriov
macvtap + sriov
vpp
Like the name implies, the linuxbridge mechanism driver uses linux bridges to provide L2 connectivity to instances.
For every Neutron network – flat, vlan, vxlan, local - a linux bridge is created. Instances are attached to the respective bridge, which in turn is connected to a tagged, untagged, or vxlan interface.
As a new instance is created and scheduled to a compute node, a corresponding network bridge is either created (the first time) or an existing bridge is used.
The linuxbridge plugin is pretty meager in its requirements. It simply asks for a network interface (or bond) to be associated with a provider network.
If using overlay tenant networks, an IP address is required for the VTEP address.
The LXB agent handles the creation of linux bridges and virtual interfaces necessary to connect VM instances to the network.
For this test, we had a single virtual machine instance connected to four different flat provider networks simultaneously. The NICs were directly cabled to the standalone node, which acted as an iPerf client.
The DUT was a VM instance with 8 cores and 16 GB of RAM.
At 10G, the results were on par with what we saw against the baremetal compute node.
At 25G, we start to see a drop off in performance and max out at around 18 Gbps.
At 40G, the drop continues and the best we got was ~20 Gbps
At 100G, there was no improvement over ~20 Gbps.
JAMES
JONNY
Open vSwitch is a software switch that, when used in conjunction with OpenStack, uses OpenFlow to influence traffic forwarding decisions. OpenFlow rules are applied to each bridge and perform things like VLAN tag manipulation, QoS, possibly firewalling, and more.
We won’t go into the architecture of OVS, as that is much better described by others in the OVS and OpenStack community.
We will say, however, that … ?
When installing OpenStack from something like OSA, Kolla, TripleO or others, necessary packages will likely be installed for you.
For the initial OVS tests, we used the same flavor of instance and repeated the iPerf tests.
At 10G, the results were on par with what we saw against the baremetal compute node.
At 25G, we start to see a drop off in performance and max out at around 18 Gbps.
At 40G, the drop continues and the best we got was ~20 Gbps
At 100G, there was little improvement over ~20 Gbps.
So – it looked very much like LXB.
JAMES
With traditional network drivers, getting packets off the wire is interrupt-driven. When traffic is received by the NIC, an interrupt is generated and the CPU stops what its doing to grab the data and perform further processing. The more traffic, the more interruptions, resulting in less performance.
DPDK uses the concept of ‘poll mode’ drivers vs interrupts, which means that one or more cores are constantly ‘polling’ the queue rather than relying on CPU interrupts for traffic processing
Users can create applications using DPDK libraries.
You will have a different OVS binary (with DPDK support compiled in).
The bridge setup for OVS+DPDK looks just like vanilla OVS, except that additional attributes are configured to allow proper function of DPDK acceleration.
Network interfaces are added to the bridge using the same ovs-vsctl add-port command, but DPDK-specific attributes are required.
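For example, a physical NIC might be attached to the provider bridge as a DPDK port like this; the bridge name, port name, and PCI address are illustrative:
ovs-vsctl add-port br-100g dpdk-100g -- \
  set Interface dpdk-100g type=dpdk options:dpdk-devargs=0000:08:00.1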
At 10G, the results were on par with what we saw against the baremetal compute node, LXB and vanilla OVS.
At 25G, we start to see a drop off in performance, but not as drastic as the others. We were able to sustain ~22 Gbps.
At 40G, we maxed out around 27 Gbps
At 100G, the max was again ~ 27 Gbps
We show vanilla OVS against OVS+DPDK to demonstrate the improvement of DPDK acceleration for this particular test scenario.
Disadvantages:
You must call out the number of cores to reserve for PMDs on each NUMA node, maybe even particular core numbers and sibling threads.
You may need to isolate cores from the Linux scheduler.
You may need to reserve a certain number of cores for host operations.
This leaves you with a smaller subset of cores available to virtual machines.
JAMES
With SR-IOV, a network card can be carved up and made to appear as multiple network interfaces.
You use SR-IOV when you need I/O performance that approaches that of the physical bare metal interfaces.
Different NICs support varying numbers of VFs, but usually enough to support a few dozen VMs on a host.
Operators must configure the neutron-sriov-agent on compute nodes; network nodes must utilize LXB or OVS. The SRIOV agent can run in parallel to the other agents on a compute.
https://www.redhat.com/en/blog/red-hat-enterprise-linux-openstack-platform-6-sr-iov-networking-part-i-understanding-basics
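As a hedged sketch, the agent and Nova pieces referred to above might look like this; the interface names mirror the earlier mapping table, and the whitelist entry is illustrative:
# /etc/neutron/plugins/ml2/sriov_agent.ini (illustrative excerpt)
[sriov_nic]
physical_device_mappings = 10g:ens1f0,25g:ens1f1,40g:ens3f0,100g:ens3f1

# /etc/nova/nova.conf on the compute node: expose the VFs to the scheduler
[pci]
passthrough_whitelist = {"devname": "ens1f0", "physical_network": "10g"}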
JONNY
When testing SR-IOV, we attached a single virtual function per network to the instance.
At 10G, the results were on par with what we saw against the baremetal compute node, LXB, OVS and DPDK
At 25G, we saw performance comparable to the baremetal node with no dropoff in throughput
At 40G, we saw a slight dropoff and we able to obtain approx 37 Gbps
At 100G, the max ~ 70 Gbps, but right there with baremetal performance and near the max for PCIe 3.0
JAMES
In today’s talk we will be discussing four major mechanism drivers and their derivatives, including:
linuxbridge
open vswitch
open vswitch + dpdk
open vswitch + asap^2
sriov
macvtap + sriov
vpp
SR-IOV (indirect mode) uses an intermediary device, such as macvtap, to provide connectivity to instances. Rather than attaching the VF directly to an instance, the VF is attached to a macvtap interface which is then attached to the instance.
This particular architecture addresses some of the shortcomings of direct mode, especially related to live migration. But as we’ll see in the next slide, the performance benefit over 10G is effectively lost.
Adding macvtap interfaces as an intermediary between the instance and the NIC resulted in a significant drop in performance at the 25/40/100 marks – comparable or WORSE to that of LXB or vanilla OVS.
At 10G, the results were on par with what we saw against the baremetal compute node, LXB, OVS and DPDK and direct SRIOV
At 25G, however, the results were even lower than that of LXB or OVS, about 16 Gbps sustained
At 40G, we sat around 21 Gbps
At 100G, nearly the same at around 20 Gbps.
Any performance gained from using a virtual function was lost in SR-IOV indirect mode with macvtap.
JAMES
JAMES
ASAP squared DIRECT works in conjunction with Open vSwitch, and offloads packet processing onto the embedded switch in the NIC using the switchdev API.
ASAP FLEX works differently, in that some packets are still processed in software. Flex is not in scope or supported with openstack at this time.
More ASAP info at: http://www.mellanox.com/related-docs/whitepapers/WP_SDNsolution.pdf
With ASAP squared, the concept of an eSwitch and VF representor port is introduced.
An eSwitch is a switch that lives on the NIC. A representor port is a virtual representation of an eSwitch port. These ports are created and associated with a VF and attached to the OVS integration bridge.
The instance itself is attached to the VF and requires drivers for the VF, much like SR-IOV direct mode.
When you send traffic to the representor port, it arrives on the VF and to the VM.
When the VF receives a packet from the VM and the eSwitch doesn’t have a rule, the packet hits the SLOW PATH and is processed in software. Subsequent packets are processed in hardware.
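For reference, an illustrative sequence for putting a Mellanox PF into switchdev mode so the representor ports appear; the PCI address and VF count are placeholders, and some NIC/driver combinations require the VFs to be unbound from their driver before changing modes:
# Create VFs, then move the PF's embedded switch into switchdev mode
echo 4 > /sys/class/net/ens1f1/device/sriov_numvfs
devlink dev eswitch set pci/0000:08:00.1 mode switchdev
# Enable TC hardware offload on the PF
ethtool -K ens1f1 hw-tc-offload on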
-----------
At 10G, the results were on par with what we saw against the baremetal compute node, LXB, OVS and DPDK and direct SRIOV. Basically everything we’ve tested so far.
At 25G, the results were as good as BM and similar to SR-IOV direct.
At 40G and 100G, about the same and certainly within the margin of error.
BENEFITS of ASAP??
If eSwitch resources are consumed, fallback mode is to the slow path (kernel)
In today’s talk we will be discussing four major mechanism drivers and their derivatives, including:
linuxbridge
open vswitch
open vswitch + dpdk
open vswitch + asap^2
sriov
macvtap + sriov
vpp
With VPP, there is a single vswitch connecting instance ports and network interfaces.
At 10G, the results were on par with what we saw against the baremetal compute node and everything else tested.
At 25G, the results were as good as BM and similar to SR-IOV direct.
At 40G and 100G, significant drop off – worst performing of the bunch.
With the 25Gbps test, we start to see a drop in performance for LXB, OVS and Macvtap.
With 40Gbps, the drop continues. VPP took a significant hit.
Line rate on a 100G NIC in a PCIe 3.0 slot (16x) is slated to be roughly 80Gbps max.
Here we see DPDK, SR-IOV and ASAP keeping up with baremetal performance.
With the T-Rex test, we set up a virtual machine as a router simply by enabling IP forwarding and configuring the static routes described in the T-Rex configuration guide.
We sought to run an SFR 1G profile that includes a mix of traffic of different protocols and payload sizes such as http/s, exchange, pop, smtp, sip, and dns.
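A minimal sketch of that DUT-as-router setup, assuming the default T-Rex SFR address ranges; the next-hop addresses and interface names are placeholders:
# Inside the DUT instance: enable forwarding and route the T-Rex client/server ranges
sysctl -w net.ipv4.ip_forward=1
ip route add 16.0.0.0/8 via 10.1.1.2 dev ens4
ip route add 48.0.0.0/8 via 10.2.2.2 dev ens5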
For the 10G test, we executed the profile and multiplied it by 10 to have T-Rex sustain 10Gbps of traffic over a 120 second period.
Against the baremetal node, there was zero packet loss.
For SR-IOV/ASAP, we saw less than 1% packet loss over the 120 second period.
VPP and DPDK followed with 24 and 40%, respectively.
Indirect SR-IOV with macvtap, OVS and LXB led the rear with 80-96% packet loss.
Common factor between the three worst performers? Virtio with no dataplane acceleration in place (i.e. DPDK).
For the 25G test, we executed the profile and multiplied it by 25 to have T-Rex sustain 25Gbps of traffic over a 120 second period.
Against the baremetal node, there was roughly zero packet loss.
For SR-IOV/ASAP, we saw between 18-22% packet loss over the 120 second period.
VPP and DPDK followed with 67 and 76%, respectively.
Indirect SR-IOV with macvtap, OVS and LXB flirted with nearly 100% packet loss
For the 40G test, it gets worse.
Against the baremetal node, there was roughly zero packet loss.
For SR-IOV/ASAP, we saw between 18-22% packet loss over the 120 second period.
VPP and DPDK followed with 67 and 76%, respectively.
Indirect SR-IOV with macvtap, OVS and LXB flirted with nearly 100% packet loss.
~~ It is possible we would have seen better performance splitting incoming and outgoing traffic across two different NICs ~~
With the 40 and 100 tests, T-Rex began to report ‘buffer full’ issues.
Performance suffered across the board, but the SR-IOV and ASAP still came out ahead over the others.
~~ Like the 40G test, It is possible we would have seen better performance splitting incoming and outgoing traffic across two different NICs as well as being more attentive to tuning ~~
Upstream support: The built-in mechanism drivers see the highest adoption rates and are supported by a large community. IRC, Launchpad, and mailing lists exist to discuss problems and feature requests.
Bug fix completion rates: Smaller projects may have less individuals available to triage and fix bugs.
Community adoption: There is strength in numbers. A higher adoption rate of a given driver allows for more resources to be assigned to the driver.
Feature completeness: A driver may excel at a few use cases, but may not provide features needed for a well-rounded cloud. This will vary from driver to driver and use-case to use-case. What’s important to one cloud may not be to another.
Hardware compatibility: Some drivers are limited to certain hardware – ASAP -> Mellanox, and DPDK -> Mellanox, Intel, and others (but only a subset).
Ease of support: The most difficult to quantify – deployment tools may struggle with certain drivers, deploying in a one-size-fits-all style, or teams may resist new tooling. More interaction with vendors may be required, and heavy optimization is needed in some cases to eke out the best performance.
JAMES
Each vSwitch requires its own tweaks: For DPDK-accelerated vSwitch, this may mean differences in hugepage configuration (2M vs 1G), hugepage allocation (static vs transparent) and (1G vs 4G based on MTU (DPDK docs)), NUMA considerations, etc.
Different workloads, different settings: Great care should be taken to ensure an instance is leveraging same NUMA resources as NIC for best performance. Sometimes you must straddle NUMA nodes and performance could suffer.
Can’t polish a turd: LXB and vanilla OVS may do well up to a little over 10 Gbps, but no amount of tuning will get them on par with DPDK- or SR-IOV-type performance.
JAMES
Newer gen CPU
PCIe matters for performance
TAKE OUT SPECTRE
JAMES
There is a tendency to want to run the latest technology and live on the bleeding edge, but doing so comes at a cost.
In the case of a DPDK-accelerated vSwitch, the lack of drivers for certain hardware in the upstream release may mean you’re rolling your own versions of DPDK, OVS, or VPP. Maintaining this configuration in the long run can be painful and not worth the trouble, especially if your use cases don’t really justify the cost.
If you’re looking for more of a set-it-and-forget-it approach, your best bet is to stay in the upstream lanes of LXB or OVS. Both are heavily contributed to and have been around since the beginning. While LXB may lack some features seen in an OVS-based solution, like DVR, TaaS, and FWaaS v2, it does its job well enough and remains a favorite of support engineers everywhere.
Both LXB and vanilla OVS offer ”good enough” performance for < 10G networking.