This document discusses ingress scheduling in OvS-DPDK. It introduces several use cases for traffic prioritization in NFV and describes the current state of the OvS-DPDK datapath. It then explores implementing traffic classification and queue selection on the NIC to prioritize certain packets at the ingress of the datapath. Next steps are discussed to further develop this functionality.
LF_OVS_17_OVS/OVS-DPDK connection tracking for Mobile usecases (LF_OpenvSwitch)
1) Mobile networks today handle a large number of simultaneous short duration flows, with high call rates of 100k-200k connections per second. Statistics like call duration and bandwidth usage need to be tracked for each flow for billing purposes.
2) Testing was conducted injecting a 10Gbps mobile traffic profile of 1 million flows into OVS, with 200k flows created and destroyed per second. Key metrics measured were maximum throughput, latency, and jitter at different flow table sizes and core counts.
3) Conntrack performance was tested for OVS kernel and DPDK versions. For 100k flows, OVS kernel achieved 152k pps for 4-tuple matching while OVS-DPDK achieved
LF_OVS_17_OVS-DPDK: Embracing your NUMA nodes. (LF_OpenvSwitch)
This document discusses configuring OVS-DPDK parameters for a multi-NUMA environment. It recommends associating physical NICs and virtual ports to their respective NUMA nodes, provisioning CPUs on both nodes using pmd-cpu-mask, allocating hugepages for memory on each node using dpdk-socket-mem, and debugging by checking PMD thread placement and other_config settings. Correct configuration of these OVS-DPDK parameters is necessary for performance when using multiple NUMA nodes.
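The per-node settings described above can be sketched with a few ovs-vsctl commands; the core numbers, memory sizes, and the resulting CPU mask below are assumptions for a two-socket machine and must be adapted to the actual topology (check with `lscpu`).

```shell
# Allocate 1024 MB of hugepage-backed memory on each NUMA socket.
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024,1024"

# Provision PMD threads on both nodes, e.g. core 2 (node 0) and
# core 22 (node 1): bit 2 + bit 22 = 0x400004.
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x400004

# Debug: check which PMD thread services each port's rx queue.
ovs-appctl dpif-netdev/pmd-rxq-show
```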
LF_OVS_17_OVS-DPDK Installation and Gotchas (LF_OpenvSwitch)
1) The document provides instructions for installing and configuring OVS DPDK on Ubuntu 17.04, including specifying hardware, installing prerequisites, configuring grub, identifying NIC ports, binding interfaces to DPDK drivers, setting up the OVS bridge and adding ports.
2) Key steps include reserving hugepages in grub, binding NICs to igb_uio or vfio-pci drivers, setting OVS configuration like datapath type and memory allocation, and adding interfaces to the OVS bridge.
3) The scripts provided automate many of these steps but additional manual configuration may still be needed and issues can occur with making interfaces persistent after reboots.
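The key steps above can be condensed into a sketch like the following; the PCI address, hugepage sizes, and port names are illustrative, and exact knobs vary by OVS/DPDK version.

```shell
# 1. Reserve hugepages at boot: add to GRUB_CMDLINE_LINUX_DEFAULT in
#    /etc/default/grub, then run update-grub and reboot:
#    default_hugepagesz=1G hugepagesz=1G hugepages=8

# 2. Bind the NIC to a DPDK-compatible driver.
modprobe vfio-pci
dpdk-devbind.py --bind=vfio-pci 0000:05:00.0

# 3. Enable DPDK in OVS and create a userspace-datapath bridge.
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
ovs-vsctl add-port br0 dpdk-p0 -- set Interface dpdk-p0 \
    type=dpdk options:dpdk-devargs=0000:05:00.0
```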
LF_OVS_17_OVS Performance on Steroids - Hardware Acceleration Methodologies (LF_OpenvSwitch)
This document discusses hardware acceleration methodologies for Open vSwitch (OVS) using Mellanox ConnectX network interface cards. It describes two approaches: full OVS offload using single-root I/O virtualization (SR-IOV) and partial offload using DPDK. Full offload moves the entire virtual switch to hardware, improving performance significantly over software-only approaches. Partial offload uses hardware for packet classification to accelerate parts of the OVS pipeline in DPDK. The document outlines performance benefits and future work areas like table offloading and live migration support. It also reviews community contributions to kernel, OVS, OpenStack and DPDK integration of these hardware offload techniques.
The Open vSwitch kernel datapath may have flows offloaded to hardware using the TC Flower classifier and related actions. This is a powerful mechanism to both increase throughput and reduce CPU utilisation. This presentation will give an overview of the evolution of this offload mechanism: features available in OvS v2.8, those targeted at v2.9 and possible future directions.
This document discusses integrating Open vSwitch (OVS) and Open Virtual Network (OVN) with containers and orchestrators like Docker and Kubernetes. It provides examples of using OVS and OVN commands to connect containers running in different namespaces to logical switches managed by OVN. It also describes how OVN implements Kubernetes networking concepts like pods, services, load balancing, and network policies using logical ports, switches and routers.
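As a minimal sketch of the container wiring described (all names here are illustrative): create a logical switch and port in OVN, then bind a host-side veth to that logical port on the integration bridge.

```shell
# Define the logical topology in the OVN northbound database.
ovn-nbctl ls-add ls0
ovn-nbctl lsp-add ls0 container1-port
ovn-nbctl lsp-set-addresses container1-port "00:00:00:00:00:01 192.168.0.2"

# On the chassis: attach the container's veth to br-int and bind it
# to the logical port via the iface-id external id.
ovs-vsctl add-port br-int veth-c1 -- \
    set Interface veth-c1 external_ids:iface-id=container1-port
```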
LF_OVS_17_LXC Linux Containers over Open vSwitch (LF_OpenvSwitch)
This document discusses using LXC Linux containers over Open vSwitch (OVS) for container networking. It provides information on the Orabuntu-LXC project which builds and installs OVS and LXC to deploy containerized Oracle software. It describes how LXC 2.1.0 added explicit support for OVS and how OVS can be configured as a systemd service. It also discusses using OVS with containerized DNS/DHCP and sending container traffic over GRE tunnels.
LF_OVS_17_Open vSwitch Offload: Conntrack and the Upstream Kernel (LF_OpenvSwitch)
This document summarizes a presentation on conntrack offloading in Open vSwitch. It discusses the current approach of using Netfilter Conntrack in the OVS kernel, efforts to offload conntrack rules to a SmartNIC, and ongoing work to offload established conntrack flows without offloading the entire conntrack table. Key advantages of the latter approach include keeping initial flow decisions in the kernel for consistency while allowing unlimited offloaded flows handled by the SmartNIC.
This document discusses Open vSwitch and its support for stateful services like connection tracking (conntrack) and network address translation (NAT). Open vSwitch is designed to manage overlay networks and provides programmable flow tables and remote management. It aims to integrate conntrack to enable stateful firewalling and NAT functions. This will allow matching on connection states and leveraging existing Linux conntrack and NAT modules. Examples are given of how conntrack and NAT rules could be implemented using these new Open vSwitch capabilities.
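A minimal sketch of such rules using the OVS ct() action (bridge name, ports, and priorities are assumptions): new connections from port 1 are committed to conntrack, and only established traffic is allowed back in from port 2.

```shell
# Outbound: commit new connections to conntrack, then forward.
ovs-ofctl add-flow br0 "table=0,priority=50,ip,in_port=1,actions=ct(commit),output:2"
# Inbound: send traffic through conntrack and continue in table 1.
ovs-ofctl add-flow br0 "table=0,priority=50,ip,in_port=2,actions=ct(table=1)"
# Allow replies for established connections; drop unsolicited ones.
ovs-ofctl add-flow br0 "table=1,priority=50,ip,ct_state=+trk+est,actions=output:1"
ovs-ofctl add-flow br0 "table=1,priority=40,ip,ct_state=+trk+new,actions=drop"
```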
This document discusses DPDK support for new hardware offloads. It describes the Netronome Agilio SmartNIC, which has hardware accelerators and can offload tasks like cryptography and flow processing. It discusses using the SmartNIC with DPDK and OVS for improved performance over kernel-based solutions. Full flow classification and action offloading to the SmartNIC is proposed to reduce CPU usage, along with exploring eBPF/XDP offloading possibilities and virtio offloading to enable VM migration.
LF_OVS_17_Enabling Hardware Offload of OVS Control & Data plane using LiquidIO (LF_OpenvSwitch)
This document discusses enabling Open vSwitch (OVS) hardware offload using Cavium's LiquidIO smart network interface cards (NICs). It describes two models for offloading OVS - data plane offload which keeps the control plane on the host, and full offload which moves both control and data planes to the NIC. The LiquidIO model represents a full offload where OVS runs natively on the NIC's processor. Performance tests show LiquidIO OVS offload achieving higher throughput and lower CPU usage than software-based OVS. Integration with OpenStack is also discussed.
The TC Flower Classifier allows control of packets based on flows determined by matching of well-known packet fields and metadata. This is inspired by similar flow classification described by OpenFlow and implemented by Open vSwitch. Offload of the TC Flower classifier and related modules provides a powerful mechanism to both increase throughput and reduce CPU utilisation for users of such flow-based systems. This presentation will give an overview of the evolution of offload of the TC Flower classifier: where it came from, the current status and possible future directions.
Open vSwitch - Stateful Connection Tracking & Stateful NAT (Thomas Graf)
Update on status of connection tracking and stateful NAT addition to the Linux kernel datapath. Followed by a discussion on the topic to collect ideas and come up with next steps.
This document discusses OpenvSwitch, an open source virtual switch that provides virtual networking and network virtualization capabilities. It describes OpenvSwitch's architecture, features, configuration, and use cases with OpenStack, VMware NSX, MidoNet, Pica8, and Intel DPDK. OpenvSwitch supports virtual networking functions like VLANs, STP, QoS, and tunneling protocols. It integrates with hypervisors and controllers to enable network virtualization and software-defined networking.
The document provides instructions for using DPDK and OVS-DPDK on Ubuntu 14.04 LTS. It begins by cloning the DPDK and OVS source code and checking out specific versions. It then builds DPDK and runs the testpmd application to verify basic packet forwarding. The document configures hugepages and binds NICs before starting testpmd. It provides output of the testpmd commands. Finally, it mentions setting up OVS-DPDK which involves the host and guest OS as well as Qemu and Fedora in a VM.
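The verification flow reads roughly as follows; paths and the bind script name differ between DPDK releases (older trees ship tools/dpdk_nic_bind.py), and the PCI addresses are examples.

```shell
# Set up hugepages for DPDK.
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mkdir -p /dev/hugepages && mount -t hugetlbfs nodev /dev/hugepages

# Bind two NIC ports to a userspace I/O driver.
modprobe uio_pci_generic
./tools/dpdk_nic_bind.py --bind=uio_pci_generic 0000:05:00.0 0000:05:00.1

# Run testpmd interactively on cores 0-2, forwarding between the ports.
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7 -n 4 -- -i --portmask=0x3
```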
The Next Generation Firewall for Red Hat Enterprise Linux 7 RC (Thomas Graf)
FirewallD provides firewall management as a service in RHEL 7, abstracting policy definition and handling configuration. The kernel includes new filtering capabilities such as connection tracking targets and extended accounting. Nftables, a new packet filtering subsystem intended to eventually replace iptables, uses a state-machine-based approach with a unified nft user interface.
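In nft terms, a small stateful ruleset looks like this (table and chain names are illustrative):

```shell
# A base table with a default-drop input chain.
nft add table inet filter
nft add chain inet filter input { type filter hook input priority 0 \; policy drop \; }

# Stateful rule: accept replies for connections we initiated.
nft add rule inet filter input ct state established,related accept
# Plus an explicit service opening, e.g. SSH.
nft add rule inet filter input tcp dport 22 accept
```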
The document is describing OpenStack networking components including Linux bridges, Open vSwitch, virtual network interfaces (TAP and VETH), and how they work together to provide virtual networking.
It explains that TAP interfaces connect virtual machines to hypervisors, VETH pairs connect virtual bridges, Linux bridges act as hubs to connect multiple interfaces, and Open vSwitch bridges act like virtual switches with configurable ports and VLAN tagging. Traffic flows through these components via OpenFlow rules with tags added or stripped as needed.
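The plumbing described above can be sketched with the usual OpenStack hybrid-bridge naming (the qbr/qvb/qvo names and the VLAN tag are illustrative):

```shell
# A veth pair links the per-VM Linux bridge to the OVS integration bridge.
ip link add qvb0 type veth peer name qvo0
brctl addbr qbr0
brctl addif qbr0 qvb0
# The OVS side carries the tenant VLAN tag, added/stripped as traffic flows.
ovs-vsctl add-port br-int qvo0 tag=10
ip link set qvb0 up
ip link set qvo0 up
```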
BPF & Cilium - Turning Linux into a Microservices-aware Operating System (Thomas Graf)
Container runtimes cause Linux to return to its original purpose: to serve applications interacting directly with the kernel. At the same time, the Linux kernel is traditionally difficult to change and its development process is full of myths. A new, efficient in-kernel programming language called eBPF is changing this, allowing everyone to extend existing kernel components or glue them together in new forms without requiring changes to the kernel itself.
Accelerating Neutron with Intel DPDK, from the #vBrownBag session at OpenStack Summit Atlanta 2014.
1. Many OpenStack deployments use the Open vSwitch plugin for Neutron.
2. But its performance and scalability are not sufficient for production.
3. Intel DPDK vSwitch is a DPDK-optimized version of Open vSwitch developed by Intel and publicly available at 01.org, but it lacked functionality that Neutron needs. We implemented the missing parts, including GRE and ARP stacks, and a Neutron plugin.
4. The result: a 5x networking performance improvement in OpenStack!
The document discusses Distributed Virtual Router (DVR) and L3 High Availability in OpenStack Networking (Juno). It describes DVR packet flow including SNAT on the network node, floating IP/DNAT on compute nodes, and East-West traffic flow between instances on different compute nodes using GRE tunnels. Compute nodes perform distributed routing functions using Open vSwitch and namespaces.
OVN has come a long way since its initial focus on providing virtual networking for OpenStack in a way that had minimal dependencies and complexity. Over several OpenStack releases, OVN has implemented key Neutron abstractions like logical switches, routers, security groups, and load balancing using OpenFlow. It provides distributed implementations of these functions along with new capabilities like ACL logging, DHCP services, and L3 gateway high availability. OVN is now reusable outside of OpenStack as well with integrations for Kubernetes, Docker, and other platforms.
Taking Security Groups to Ludicrous Speed with OVS (OpenStack Summit 2015) (Thomas Graf)
Open vSwitch (OVS) has long been a critical component of Neutron's reference implementation, offering reliable and flexible virtual switching for cloud environments.
Being an early adopter of OVS, Neutron's reference implementation made some compromises to stay within the early, stable feature set OVS exposed. In particular, Security Groups (SG) have so far been implemented by leveraging hybrid Linux bridging and iptables, which comes at a significant performance overhead. However, thanks to recent developments and ongoing improvements within the OVS community, we are now able to implement feature-complete security groups directly within OVS.
In this talk we will summarize the existing Security Groups implementation in Neutron and compare its performance with the Open vSwitch-only approach. We hope this analysis will form the foundation of future improvements to the Neutron Open vSwitch reference design.
Linux offers an extensive selection of programmable and configurable networking components from traditional bridges, encryption, to container optimized layer 2/3 devices, link aggregation, tunneling, several classification and filtering languages all the way up to full SDN components. This talk will provide an overview of many Linux networking components covering the Linux bridge, IPVLAN, MACVLAN, MACVTAP, Bonding/Team, OVS, classification & queueing, tunnel types, hidden routing tricks, IPSec, VTI, VRF and many others.
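A few of the device types mentioned, created with iproute2 (the parent device and table number are examples):

```shell
# MACVLAN: lightweight L2 sub-device sharing the parent NIC.
ip link add link eth0 name macvlan0 type macvlan mode bridge
# IPVLAN: similar, but all sub-devices share the parent's MAC.
ip link add link eth0 name ipvl0 type ipvlan mode l2
# VRF: per-device routing table separation.
ip link add vrf-blue type vrf table 10
```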
LinuxCon 2015 Linux Kernel Networking Walkthrough (Thomas Graf)
This presentation features a walk through the Linux kernel networking stack for users and developers. It will cover insights into both existing essential networking features and recent developments, and will show how to use them properly. Our starting point is the network card driver as it feeds a packet into the stack. We will follow the packet as it traverses through various subsystems such as packet filtering, routing, protocol stacks, and the socket layer. We will pause here and there to look into concepts such as networking namespaces, segmentation offloading, TCP small queues, and low-latency polling, and will discuss how to configure them.
Cilium - API-aware Networking and Security for Containers based on BPF (Thomas Graf)
Cilium provides network security and visibility for microservices. It uses eBPF/XDP to provide fast and scalable networking and security controls at layers 3-7. Key features include identity-based firewalling, load balancing, and mutual TLS authentication between services. It integrates with Kubernetes to apply network policies using standard Kubernetes resources and custom CiliumNetworkPolicy resources for finer-grained control.
Offloading TC Rules on OVS Internal Ports (Netronome)
This document discusses two approaches to offloading traffic control (TC) rules on internal ports in Open vSwitch (OVS). The first approach is to add a TC ingress hook to OVS internal port modules so the rules can be applied. The second is to offload rules as egress hooks, which achieves the same outcome as ingress hooks on internal ports by generating an OVS ingress action when egressing an internal port. Currently, TC rules that output to an internal port cannot be offloaded and must be handled by the OVS kernel datapath. The proposed approaches aim to address this by allowing hardware offload of rules on internal ports.
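For reference, the kind of TC flower rule at issue looks like this (device names and addresses are illustrative); with an ingress hook on the internal port, such a rule becomes a candidate for hardware offload.

```shell
# Attach an ingress qdisc to the (internal) port, then a flower filter
# that redirects matching traffic to a physical device.
tc qdisc add dev ovs-int0 ingress
tc filter add dev ovs-int0 ingress protocol ip flower \
    dst_ip 192.168.1.2 action mirred egress redirect dev eth0
```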
This document provides information for attendees of the Open vSwitch 2017 Fall Conference, including details about the agenda, speakers, sponsors, and code of conduct. The conference will take place over two days and include talks on topics such as DPDK, OVN, and connection tracking. WiFi and power are available, and lunch will be in the garage. Recordings will be posted online after the event.
LF_OVS_17_Riley: Pushing networking to the edge (LF_OpenvSwitch)
The document describes Riley, a new data center network design that uses extremely simple switches without a switch operating system (OS). Riley aims to simplify switches by removing unnecessary components found in traditional designs. It shows that Riley switches can provide comparable throughput, job completion times, and end-host resource usage to traditional IP-based designs, but with significantly less switch resource usage in terms of TCAM, SRAM, CPU, and memory requirements. The goal of Riley is to design the simplest possible data center switch.
LF_OVS_17_OvS-CD: Optimizing Flow Classification for OvS using the DPDK Membe... (LF_OpenvSwitch)
The document discusses using the DPDK Membership Library to optimize Open vSwitch (OvS) flow classification performance. The Membership Library provides set summaries that allow OvS to perform a two-level lookup for megaflows, first checking the set summary to direct packets to the correct sub-table and avoiding a sequential search. This approach provides a 2-3x improvement in OvS throughput for uniform traffic patterns compared to the original OvS-DPDK implementation. The Membership Library is included in the recently released DPDK V17.11.
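The two-level lookup described above can be illustrated with a small sketch. The function and variable names here are illustrative only; the real implementation uses the DPDK Membership Library's C API, and its set summaries are probabilistic rather than exact maps.

```python
# Sketch of a two-level megaflow lookup: a set summary maps a packet's
# flow key to the sub-table likely to hold its rule, replacing the
# sequential search over all sub-tables. Illustrative only -- not the
# DPDK Membership Library API.

def make_summary(subtables):
    """Build a summary mapping compact flow-key signatures to sub-table indices."""
    summary = {}
    for idx, table in enumerate(subtables):
        for key in table:
            summary[hash(key) & 0xFFFF] = idx  # signature -> sub-table id
    return summary

def lookup(summary, subtables, key):
    """Two-level lookup: consult the summary first, fall back to scanning."""
    sig = hash(key) & 0xFFFF
    idx = summary.get(sig)
    if idx is not None and key in subtables[idx]:
        return subtables[idx][key]          # fast path: one sub-table probed
    for table in subtables:                 # slow path: sequential search
        if key in table:
            return table[key]
    return None
```

The speedup comes from the fast path: for uniform traffic, most lookups probe exactly one sub-table instead of scanning them in sequence.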
Nutanix uses a hyperconverged infrastructure to simplify hybrid cloud environments and manage workloads across private and public clouds. It offers a disaster recovery as a service solution called Xi Cloud to allow customers to recover workloads on public cloud if their on-premises infrastructure fails. Nutanix requires an SDN solution to provide overlay networking and features like microsegmentation across its hybrid environments. It is evaluating the OVN project for this purpose due to benefits like simpler abstractions, rich feature support, and integration with OVS, but also faces challenges with documentation, scaling limits, and feature parity. Nutanix plans to contribute to OVN by improving documentation, sharing reference architectures, and participating in code reviews and community efforts.
This document discusses proposed IPsec functionality for securing VXLAN traffic in a datacenter. It describes using IPsec in transport mode with AES-CBC and HMAC-SHA1-96 to provide confidentiality, integrity and authentication. A new "vxlanipsec" interface type is proposed to handle VXLAN and ESP encapsulation/decapsulation, using DPDK cryptodev for hardware acceleration. Performance metrics show encap rates of 2.7-7.1 million packets per second for a single PMD instance on Intel hardware. Future work includes supporting GCM mode, IPsec tunnels, dynamic rekeying, and integration with OVS and RTE_Security.
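As context for those encap rates, the fixed per-packet cost of transport-mode ESP with AES-CBC and HMAC-SHA1-96 can be computed from the standard algorithm parameters. This sketch is illustrative arithmetic, not part of the vxlanipsec proposal itself:

```python
def esp_overhead(payload_len, block=16, icv=12, spi_seq=8, iv=16):
    """Bytes added by ESP transport mode with AES-CBC and HMAC-SHA1-96.

    AES-CBC pads the payload plus the 2-byte ESP trailer (pad length,
    next header) up to the 16-byte cipher block size; a 16-byte IV, a
    12-byte ICV (truncated HMAC-SHA1) and the 8-byte SPI+sequence
    header are added on top.
    """
    pad = (block - (payload_len + 2) % block) % block
    return spi_seq + iv + pad + 2 + icv
```

For a 100-byte inner payload this comes to 48 bytes of ESP overhead, which is one reason small-packet encap rates are far below line rate.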
LF_OVS_17_DigitalOcean Cloud Firewalls: powered by OvS and conntrackLF_OpenvSwitch
LF_OVS_17_The birth of SmartNICs -- offloading dataplane traffic to...softwareLF_OpenvSwitch
The document provides the agenda for Day 2 of the Open vSwitch 2017 Fall Conference. It outlines that the day will include a keynote in the morning followed by multiple sessions on various technical topics related to Open vSwitch. It also provides instructions for speakers, including submitting slides to the AV desk in advance and paying attention to timers during talks. Finally, it notes that extra conference t-shirts are available.
LF_OVS_17_OVS-DPDK for NFV: go live feedback!LF_OpenvSwitch
LF_OVS_17_CORD: An open source platform to reinvent the network edgeLF_OpenvSwitch
CORD is an open platform for building virtualized network services at the edge of telecommunications networks. It uses open source software and white box hardware to provide a flexible edge cloud platform. Several major telecommunications providers worldwide have deployed or are planning to deploy CORD, including AT&T, Verizon, China Mobile and Deutsche Telekom. CORD comes in different variants like R-CORD for residential services, M-CORD for 5G mobile networks, and E-CORD for enterprise services. It aims to provide a programmable edge cloud platform with automated provisioning and operations.
The document discusses several ways to optimize TCP/IP network performance for high-bandwidth connections. It recommends using large MTUs, tuning TCP window sizes based on bandwidth-delay products, enabling features like SACK and window scaling, and using queue management techniques like RED to reduce packet loss. Proper configuration of these TCP parameters is important for achieving high throughput over high-speed networks.
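The window-sizing rule of thumb above is the bandwidth-delay product, which is simple arithmetic; a minimal sketch:

```python
def tcp_window_bytes(bandwidth_bps, rtt_seconds):
    """Bandwidth-delay product: bytes in flight needed to fill the pipe."""
    return int(bandwidth_bps / 8 * rtt_seconds)

# A 10 Gb/s path with 50 ms RTT needs ~62.5 MB of window -- far beyond
# the 64 KB limit of unscaled TCP, hence the window-scaling option.
```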
The analysis of Microbursts (Burstiness) on Virtual SwitchChunghan Lee
This document analyzes microbursts (sudden spikes in network traffic) on a virtual switch in a Network Functions Virtualization (NFV) infrastructure. Testing was done by generating foreground and background UDP traffic. Analysis found that microbursts caused packet loss at the receiver's socket buffer, due to sudden spikes in throughput overwhelming buffer capacity. Profiling identified the packet queuing discipline (qdisc) as a major cause of microbursts, frequently becoming full despite a 10Gbps sending rate. Future work is needed to further clarify why the qdisc is full and modify the system to better handle microbursts.
Many applications are network I/O bound, including common database-based applications and service-based architectures. But operating systems and applications are often untuned to deliver high performance. This session uncovers hidden issues that lead to low network performance, and shows you how to overcome them to obtain the best network performance possible.
Pushing Packets - How do the ML2 Mechanism Drivers Stack UpJames Denton
Architecting a private cloud to meet the use cases of its users can be a daunting task. How do you determine which of the many L2/L3 Neutron plugins and drivers to implement? Does network performance outweigh reliability? Are overlay networks just as performant as VLAN networks? The answers to these questions will drive the appropriate technology choice.
In this presentation, we will look at many of the common drivers built around the ML2 framework, including LinuxBridge, OVS, OVS+DPDK, SR-IOV, and more, and will provide performance data to help drive decisions around selecting a technology that's right for the situation. We will discuss our experience with some of these technologies, and the pros and cons of one technology over another in a production environment.
XPDS13: On Paravirualizing TCP - Congestion Control on Xen VMs - Luwei Cheng,...The Linux Foundation
While datacenters are increasingly adopting VMs to provide elastic cloud services, they still rely on traditional TCP for congestion control. In this talk, I will first show that VM scheduling delays can heavily contaminate RTTs sensed by VM senders, preventing TCP from correctly learning the physical network condition. Focusing on the incast problem, which is commonly seen in large-scale distributed data processing such as MapReduce and web search, I find that the solutions that have been developed for *physical* clusters fall short in a Xen *virtual* cluster. Second, I will provide a concrete understanding of the problem, and reveal that the situations that when the sending VM is preempted versus when the receiving VM is preempted, are different. Third, I will introduce my recent attempts on paravirtualizing TCP to overcome the negative effect caused by VM scheduling delays.
Summit 16: Achieving Low Latency Network Function with OpnfvOPNFV
It's challenging to build low-latency VNFs in virtualization and cloud environments. The OPNFV KVM4NFV project, together with other OPNFV projects like OVSNFV, helps achieve low-latency network functionality. This session will first introduce the KVM4NFV project. Then a DPDK workload will be used to show how KVM4NFV helps reduce packet latency, comparing results with and without the OPNFV environment. Finally, experience will be shared on how to set up the OPNFV environment correctly and how to tune it to meet latency and performance requirements.
Openstack Networking Internals - Advanced Part
The pictures of the VNI were taken with the "Show my network state" tool
https://sites.google.com/site/showmynetworkstate/
NUSE (Network Stack in Userspace) at #osioHajime Tazaki
This document describes Network Stack in Userspace (NUSE), which implements a full network stack as a userspace library. NUSE aims to allow faster evolution of network stacks outside the kernel and enable network protocol personalization. It works by patching the Linux kernel to include a new architecture, implementing the network stack components as a userspace library, and hijacking POSIX socket calls to redirect them to the NUSE implementation. Performance tests show NUSE adding only small overhead compared to kernel implementations. NUSE can also integrate with the ns-3 network simulator to enable controllable and reproducible network simulations using real protocol implementations.
The WS-C2960+48PST-L is a Cisco switch that provides:
1. 48 Ethernet ports that support PoE, 2 SFP uplink ports, and 2 1000BASE-T uplink ports.
2. It has a 1U rack-mountable enclosure and provides up to 370W of PoE power.
3. Management features include standard Cisco IOS software, SNMP, and various port, VLAN, and traffic management protocols.
The document proposes RAMPTCP, a receiver-assisted extension to MPTCP for edge clouds. RAMPTCP aims to improve MPTCP performance in edge-to-edge networks by having the receiver send network condition information to help the sender make better scheduling decisions. Preliminary ns3 simulations show RAMPTCP achieves around 19% higher throughput and 58% fewer retransmissions compared to default MPTCP in a scenario where one network path experiences packet loss. Future work includes incorporating different access technologies and developing effective RAMPTCP control actions.
QUIC is a new transport protocol developed by Google that aims to solve issues with TCP and TLS by multiplexing streams over UDP. It includes features like stream multiplexing, connection migration, 0-RTT connection establishment, and forward error correction. The document provides technical details on QUIC including its version history, wire format specifications, frame types, cryptographic handshake process, and examples of 0-RTT, 1-RTT, and 2-RTT connection establishment.
The document discusses Polyraptor, a transport protocol designed for data center networks. It supports various data transfer patterns including:
- One-to-many: Where clients fetch data from multiple servers. Polyraptor uses RaptorQ erasure coding where encoding symbols from different senders can be used to decode the original data.
- Many-to-one: Where data is replicated to multiple servers. With RaptorQ, each server can contribute encoding symbols at its available capacity to transmit the data.
- Incast: RaptorQ's rateless and systematic properties make it resilient to packet loss and out-of-order delivery, eliminating the need for extensive buffering.
CS4344 09/10 Lecture 10: Transport Protocol for Networked GamesWei Tsang Ooi
The document discusses transport protocols for networked games and compares TCP and UDP. While TCP provides reliable delivery, it has higher latency than UDP. UDP has lower overhead but is unreliable. The document examines why certain popular games use TCP or UDP and outlines strategies to make TCP perform better for games, such as reducing delays, retransmitting bundles of data, and combining thin streams. It suggests the Stream Control Transmission Protocol (SCTP) as a potentially ideal transport for games since it allows flexibility in reliability and ordering of messages.
Nessus scan report using the default scan policy - Tareq HanayshaHanaysha
The Nessus scan report summarizes the results of a vulnerability scan performed on a Windows Vista system. The scan found 20 open ports, with 46 low, 8 medium and no high severity issues. Common services like MySQL, HTTP, and SMB were identified. The operating system was determined to be Windows Vista Home and the host name was tareq-laptop. Detailed information is provided about issues found on specific ports including unknown services, web servers, and NetBIOS information retrieved from the host.
Seven years ago at LCA, Van Jacobson introduced the concept of net channels, but since then user mode networking has not hit the mainstream. There are several different user mode networking environments: Intel DPDK, BSD netmap, and Solarflare OpenOnload. Each of these provides higher performance than standard Linux kernel networking, but also creates new problems. This talk will explore the issues created by user space networking, including performance, internal architecture, security and licensing.
PLNOG15: VidMon - monitoring video signal quality in Service Provider IP netw...PROIDEA
This document discusses Cisco's Vidmon solution for monitoring video quality in IP networks. It provides an overview of the challenges of transmitting video over IP, including QoS, monitoring and fault localization. It then describes how Vidmon can help operators actively monitor video streams, locate issues, and improve customer satisfaction. Benefits of deploying Vidmon in Vectra's network include faster fault identification, better problem diagnosis, cost savings, improved visibility of signal quality issues, and proactive monitoring.
This document provides an overview of multi-path VPN technologies. It discusses using Linux bridge, Rapid STP, virtual Ethernet NICs, and tunneling protocols like OpenVPN and L2TPv3 to enable multi-path VPNs across multiple cloud providers. It also covers related topics like performance benchmarking and tuning the Linux kernel for improved throughput.
This document summarizes the closing of the Open vSwitch 2017 Fall Conference. It thanks the organizers and attendees and provides information on where to find videos, slides and future OVS content, including the likely dates and location for the 2018 conference in November. Contact information is also provided for submitting comments or suggestions.
This document discusses the author's experiences using Open Virtual Network (OVN) with Kelda, a platform that encodes operational expertise in code. The author found OVN to be extremely stable but sees opportunities for improving ACL scaling and adding programming language support beyond C. Overall, OVN compares favorably to other networking solutions for containers but could benefit from more marketing efforts to increase awareness in the container community.
LF_OVS_17_OvS manipulation with Go at DigitalOceanLF_OpenvSwitch
The document discusses DigitalOcean's past and present use of Open vSwitch (OvS) for virtual networking. In the past, OvS was manipulated using Perl scripts that built flow strings and called ovs-ofctl. This had issues like lack of testing and non-atomic flow applications. Now, a Go package called ovs is used to programmatically control OvS. It builds flows without string manipulation and applies them atomically. DigitalOcean also uses packages like hvflow and gRPC services hvflowctl and hvflowd to configure OvS flows from network parameters. The future may involve orchestrating OvS directly through OpenFlow to avoid parsing tool outputs and directly applying flows.
LF_OVS_17_OvS Hardware Offload with TC FlowerLF_OpenvSwitch
LF_OVS_17_Enabling hardware acceleration in OVS-DPDK using DPDK Framework.LF_OpenvSwitch
The document discusses enabling hardware acceleration in OVS-DPDK using the DPDK 'Framework'. It describes challenges with hardware acceleration and introduces the DPDK Framework, which provides APIs and components to abstract different hardware features. The Framework is used in OVS-DPDK by initializing it for hardware offload, adding ports to the switch, reporting exception packets, installing flows in software and hardware pipelines. Next steps include publishing Framework APIs, getting early feedback, and implementing OVS-DPDK integration using the Framework.
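The "installing flows in software and hardware pipelines" step can be sketched as a software-first install with opportunistic hardware offload. The class and function names below are hypothetical stand-ins, not the DPDK Framework API:

```python
# Sketch: every flow lives in the software pipeline; hardware offload is
# attempted opportunistically and may fail (e.g. TCAM exhaustion), in
# which case the flow is handled in software only.

class HardwareTable:
    """Stand-in for a NIC's flow table with limited capacity."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.flows = {}

    def install(self, match, action):
        if len(self.flows) >= self.capacity:
            return False                    # hardware rejects the flow
        self.flows[match] = action
        return True

def install_flow(hw, sw, match, action):
    """Always install in software; report whether hardware accepted it too."""
    sw[match] = action
    return hw.install(match, action)
```

Packets hitting a non-offloaded flow arrive as "exception packets" at the software pipeline, matching the reporting step mentioned above.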
LF_OVS_17_Red Hat's perspective on OVS HW Offload StatusLF_OpenvSwitch
This document summarizes Red Hat's perspective on the status of OVS hardware offloading. It discusses why offloading is needed to avoid using too many CPU cores for software switching. It provides examples of performance gains seen with various NIC vendors' offloading solutions integrated into the kernel and OVS. While many vendors now have offerings, more work remains to be done and is ongoing to fully integrate offloading capabilities.
LF_OVS_17_Ingress Scheduling
1. Ingress Scheduling in OvS-DPDK
Billy O’Mahony – Intel
Jan Scheurich – Ericsson
November 16-17, 2017 | San Jose, CA
2. Introduction
- Use cases for traffic prioritization in NFV
- State of the art in OvS-DPDK datapath
- Rx queue prioritization in DPDK datapath
- Traffic classification and queue selection on NIC
- Next steps
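One way Rx queue prioritization can work in a PMD is weighted polling: the high-priority queue is read more often than the best-effort queue, so control traffic steered to it is drained first. A minimal sketch of the idea (illustrative only, not the OvS-DPDK implementation):

```python
def poll_order(queues, weights, rounds):
    """Yield queue ids in weighted round-robin order.

    A queue with weight 3 is polled three times for every poll of a
    weight-1 queue, so packets on the prioritized rx queue see lower
    latency even when the best-effort queue is saturated.
    """
    order = []
    for _ in range(rounds):
        for q, w in zip(queues, weights):
            order.extend([q] * w)
    return order
```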
3. Scenario: NFVI on Converged Data Center
[Slide diagram: two compute nodes, each running OvS with br-int, br-prv and br-ctl bridges; tenant VMs attach via vhostuser, VIM components and their local agents use host networking, and a dpdk0/dpdk1 bond connects to ToR A and ToR B]
VIM control plane sharing physical network with tenant data
VIM = Virtual Infrastructure Manager
For example OpenStack components: Nova, Neutron services and their local agents
4. Use Case 1: In-band OvS Control Plane
[Slide diagram: same converged data-center setup, highlighting LACP bond supervision on the dpdk0/dpdk1 bond to ToR A and ToR B]
LACP = Link Aggregation Control Protocol
Here: in-band heart-beat between OVS and each ToR
5. Use Case 1: In-band OvS Control Plane (cont.)
BFD tunnel monitoring: BFD packets are sent inside the tunnel
[Diagram: same compute node setup; BFD packets carried inside the tunnel mesh between OvS instances]
BFD = Bidirectional Forwarding Detection
Here: heart-beat between OVS instances connected through the tunnel mesh
6. Use Case 2: VIM Control Plane
[Diagram: same compute node setup; Nova, Neutron and their local agents exchange VIM control-plane traffic over br-ctl and the bond]
VIM Control Plane
In OpenStack: RabbitMQ, REST API calls, …
7. Use Case 2: VIM Control Plane (cont.)
OvS control plane: OpenFlow and OVSDB
[Diagram: same compute node setup; OpenFlow and OVSDB sessions carried over br-ctl]
OpenFlow and OVSDB are special cases of the VIM control plane
8. Status Quo in OvS-DPDK Datapath
[Diagram: NIC with RSS and HW scheduler distributing traffic to PMD threads 1 and 2 and to the ovs-vswitchd thread; PMDs forward tenant VM traffic plus BFD and LACP; VIM components and host networking reached via br-ctl]
10. Scenario: Egress Link Overload
[Diagram: same datapath; egress link bandwidth exhausted by tenant data; PMD Tx queues full, so tenant data is being dropped; ovs-vswitchd has a separate Tx queue, and the HW scheduler can provide a fair share of link bandwidth]
11. Measurements: Impact of PMD Overload
PMD polling physical port overloaded with 64B packets

Offered load on phy port [Kpps]     | 2000  | 2200  | 2400 | 2600 | 2800 | 3200 | 3600 | 4000
Offered load on phy port [Gbit/s]   | 1.54  | 1.69  | 1.84 | 2.00 | 2.15 | 2.46 | 2.76 | 3.07
PMD overload factor [%]             |   –   | 0     | 9    | 18   | 27   | 45   | 64   | 82
PMD utilization [%]                 | 99.95 | 99.99 | 100  | 100  | 100  | 100  | 100  | 100
Phy port rx drop [%]                | 0     | 0     | 8    | 15   | 21   | 31   | 39   | 45
ping -f average RTT [ms]            | 0.45  | 0.50  | 3.02 | 3.03 | 3.15 | 3.10 | 3.69 | 3.95
ping -f packet drop [%]             | 0     | 0     | 10   | 16   | 21   | 37   | 45   | 49
OpenFlow connection timeouts in OVS | 0     | 0     | 0    | 0    | 0    | 0    | 3    | 3
Connection closed by peer (ODL)     |       |       |      |      |      |      | 20   | 15
Connection reset by peer (ODL)      |       |       |      |      |      |      | 2    | 0
BFD flappings [1/min]: 1.85, 3.75, 5.66, 5.71, 5.03
Num flaps: 0, 0, 17, 17, 43, 20, 37

• Packet drops in the Rx queue of the physical port equally affect tenant data, BFD and OVS control-plane packets
• "ping -f" to the br-ctl interface quantifies the control-plane impact
  • Ping packet drop is in line with the overall packet drop
  • RTT jumps from 50 µs to 3 ms
• BFD flapping occurs already at moderate overload
  • The rate increases with overload
• Above 45% packet drop the OpenFlow control channel breaks due to missed Echo Replies

source: Ericsson
CPU: Dual socket Xeon E5-2697 v3 @ 2.60 GHz, 14 cores + HT, 896K L1, 3584K L2, 35 MB L3 cache; NIC: Intel Fortville X710, 4 x 10 Gbit/s;
OvS: version 2.6, 1 PMD, 1 phy port, 1 vhostuser port; VM: TRex DPDK traffic source/sink
12. Measurements: Impact of Egress Link Overload
10G link from OvS overloaded with outgoing traffic from the VM (1500-byte packets)

Offered load from VM [Kpps]           | 800   | 900   | 1000  | 1200  | 1600
Offered load from VM [Gbit/s]         | 9.80  | 11.03 | 12.26 | 14.71 | 19.61
Transmitted load on phy port [Gbit/s] | 9.81  | 9.90  | 9.88  | 9.88  | 9.88
Link overload [%]                     | 0     | 11    | 24    | 49    | 99
PMD utilization [%]                   | 41.45 | 46.30 | 50.20 | 56.36 | 69.89
ping -f average RTT [ms]              | 0.109 | 0.205 | 0.206 | 0.210 | 0.204
ping -f packet drop [%]               | 0     | 0     | 0     | 0     | 0
Num flaps (BFD)                       | 0     | 0     | 0     | 0     | 0
OpenFlow connection timeouts in OVS   | 0     | 0     | 0     | 0     | 0
Connection closed/reset by peer (ODL) | –     | –     | –     | –     | –

• Egress link overload does not affect the control plane
• Outgoing packets are forwarded by the ovs-vswitchd thread, which has its own dedicated Tx queue in the Fortville NIC
• The NIC schedules packets from each of the Tx queues in some fair manner, so the ovs-vswitchd queue gets sufficient bandwidth on the link
• Incoming packets are not affected, as neither the link nor the PMDs are overloaded
• No BFD flapping

source: Ericsson
CPU: Dual socket Xeon E5-2697 v3 @ 2.60 GHz, 14 cores + HT, 896K L1, 3584K L2, 35 MB L3 cache; NIC: Intel Fortville X710, 4 x 10 Gbit/s;
OvS: version 2.6, 1 PMD, 1 phy port, 1 vhostuser port; VM: TRex DPDK traffic source/sink
13. Use Case 3: QoS for Tenant Data
All tenant data traffic is equal!? Well, some packets are more equal than others!
• Virtual Network Functions send/receive a large variety of network traffic
  • Top prio: critical internal control plane (e.g. cluster membership)
  • …
  • Min prio: bulk user plane
• VNFs need prioritization for their critical traffic in the NFVI
• How to orchestrate and implement the necessary QoS end-to-end?
• Will need additional priority levels and packet marking (e.g. IP DiffServ)
14. Desired Ingress Prioritization on Physical Ports
• Priority 1: In-band control plane
  • Untagged LACP packets
  • BFD packets inside tunnels, based on the IP DSCP of the outer IP header
• Priority 2: VIM control plane
  • Certain prioritized VLAN tags
• Priority 3+: Prioritized tenant data
  • E.g. based on the IP DSCP of the outer IP header
• Base priority
  • Non-prioritized traffic spread over multiple Rx queues through RSS
15. Ingress Scheduling
"Schedulers arrange and/or rearrange packets for output."
-- http://www.tldp.org/HOWTO/html_single/Traffic-Control-HOWTO/#e-scheduling
[Diagram: a PMD between one Rx queue and one Tx queue; a priority packet, e.g. control plane (BFD, LACP, ovs-vswitchd), is queued behind bulk traffic in the single Rx queue]
16. Ingress Scheduling – Implementation
[Diagram: two Rx queues feed the PMD; priority packets (BFD, LACP, ovs-vswitchd) land on the second queue]
• The DPDK rte_flow API installs rxq assignment filters on supported vendors' NICs
• The PMD empties the priority queue before reading the non-priority queue
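The rxq-assignment filters that rte_flow would program into the NIC can be modelled in software. The sketch below is a hypothetical model, not the actual OvS patch code: the struct, function and constant names are invented for illustration, and a real NIC matches on wire-format headers rather than a pre-parsed descriptor.

```c
#include <stdint.h>

/* Hypothetical software model of the NIC rxq-assignment filters:
 * LACP frames (EtherType 0x8809) and high-DSCP IP packets are
 * steered to a dedicated priority queue appended after the RSS
 * queues; everything else is spread by RSS as before. */

#define ETHTYPE_LACP 0x8809
#define ETHTYPE_IPV4 0x0800

/* Minimal packet descriptor for the model (not a DPDK mbuf). */
struct pkt {
    uint16_t eth_type;   /* host byte order, for simplicity */
    uint8_t  ip_dscp;    /* outer IP DSCP, valid for IPv4 packets */
    uint32_t rss_hash;   /* hash the NIC would compute for RSS */
};

/* Return the rx queue index the NIC would place this packet on;
 * RSS queues are 0..n_rss-1, the priority queue is n_rss. */
static inline int select_rxq(const struct pkt *p, int n_rss)
{
    if (p->eth_type == ETHTYPE_LACP)
        return n_rss;              /* in-band OvS control plane */
    if (p->eth_type == ETHTYPE_IPV4 && p->ip_dscp == 0x5)
        return n_rss;              /* e.g. BFD marked via outer DSCP */
    return p->rss_hash % n_rss;    /* normal RSS spreading */
}
```

In the real datapath this decision is taken in NIC hardware, so prioritized packets never compete with bulk traffic for descriptor slots in the shared RSS queues.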
17. Ingress Scheduling – Implementation
1. Move the packet prioritization decision to the NIC
2. Place prioritized packets on a separate Rx queue
3. Read preferentially from the priority RxQ. Keep it simple:
   • Read from the priority queue until it's empty
   • Service the other queues
   • Repeat
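The three-step polling discipline above can be sketched as follows. This is a toy model under stated assumptions: queues are plain arrays rather than NIC descriptor rings, and the burst-read helper stands in for rte_eth_rx_burst(); all names are hypothetical.

```c
#include <stddef.h>

#define BURST 4   /* burst size; real PMDs typically use 32 */

/* Toy rx queue: an array of pending packet ids. */
struct rxq {
    const int *pkts;
    size_t     head;
    size_t     len;
};

/* Read up to 'max' packets from q into out[]; return the count. */
static size_t rxq_burst(struct rxq *q, int *out, size_t max)
{
    size_t n = 0;
    while (n < max && q->head < q->len)
        out[n++] = q->pkts[q->head++];
    return n;
}

/* One PMD iteration: drain the priority queue completely, then
 * service the non-priority queue with one burst.  Returns the
 * number of packets received into out[]. */
static size_t pmd_poll(struct rxq *prio, struct rxq *normal,
                       int *out, size_t cap)
{
    size_t n = 0;
    size_t got;

    /* Step 1: read from the priority queue until it is empty. */
    do {
        got = rxq_burst(prio, out + n,
                        cap - n < BURST ? cap - n : BURST);
        n += got;
    } while (got == BURST && n < cap);

    /* Step 2: service the other queue. (Step 3, repeat, is the
     * caller's outer PMD loop.) */
    n += rxq_burst(normal, out + n,
                   cap - n < BURST ? cap - n : BURST);
    return n;
}
```

Because the priority queue is drained first on every iteration, priority packets are delayed by at most one in-flight burst of bulk traffic, which is why worst-case latency improves little even though the latency distribution shifts sharply (see the next slide).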
18. Ingress Scheduling – Latency Effect
• ~99.9% of packets already have a latency below 20 µs
• There are 10x to 50x fewer packets in any given latency bucket – good.
• But worst-case latency does not improve.
Source: Intel
CPU: Dual socket Xeon E5-2695 v3 @ 2.30 GHz, 14 cores, no HT, 896K L1, 3584K L2, 35 MB L3 cache; NIC: Intel Fortville X710, 4 x 10 Gbit/s;
OvS: version 2.7.90, 1 PMD, 2 phy ports, hardware traffic source/sink
19. Ingress Scheduling – Overload Protection
[Diagram: an overloaded PMD drops packets from a full Rx queue; with a second, priority Rx queue the prioritized packets (BFD, LACP, ovs-vswitchd) still get through]
20. Ingress Scheduling – Traffic Protection
• Overload the PMD with 64-byte DPDK traffic on dpdk0
  → 100% PMD load in pmd-stats-show
  → 25% rx packet drop on dpdk0
• Add iperf3 UDP traffic (256 bytes) in parallel over dpdk1
• Measurement result:

                         | Condition 1: dpdk1 low priority | Condition 2: dpdk1 high priority
iperf3 UDP throughput    | not measured                    | 1 Gbit/s, 460 Kpps 1)
iperf3 UDP packet loss   | 28%                             | 0%

1) iperf3 throughput limited by the UDP/IP stack on the client side

[Diagram: TGen server (bare metal) running dpdk-pktgen and an iperf3 UDP client sends high/low-priority traffic through a ToR switch into the SUT server on dpdk0 and dpdk1; on the SUT, OvS (br-prv, one PMD) forwards over vhostuser to a VM running dpdk testpmd and an iperf3 UDP server]

source: Ericsson
CPU: Dual socket Xeon E5-2680 v4 @ 2.40 GHz, 14 cores + HT, 896K L1, 3584K L2, 35 MB L3 cache
NIC: Intel Fortville X710, 4 x 10 Gbit/s; OvS: version 2.6, 1 PMD, all ports and VM on NUMA node 0
21. Ingress Scheduling – Configuration
• $ ovs-vsctl set Interface phy1
      ingress_sched:eth_type=0x8809
• Field as per ovs-fields(7) and ofctl add-flow. Not all netdevs/NICs will support all combinations.
• Single prioritization condition.
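A condition such as eth_type=0x8809 has to be parsed out of the ingress_sched value before a filter can be built. The helper below is a hypothetical illustration of parsing one "field=value" pair; the actual OvS patch set may parse the column differently.

```c
#include <stdio.h>
#include <string.h>
#include <inttypes.h>

/* Hypothetical parser for a single-condition ingress_sched value,
 * e.g. "eth_type=0x8809": splits off the field name and reads the
 * value in the hex notation used by ovs-fields(7).  Returns 0 on
 * success, -1 on a malformed condition. */
static int parse_ingress_sched(const char *spec,
                               char *field, size_t fieldsz,
                               uint32_t *value)
{
    const char *eq = strchr(spec, '=');

    if (!eq || (size_t)(eq - spec) >= fieldsz)
        return -1;                        /* no '=' or name too long */
    memcpy(field, spec, eq - spec);
    field[eq - spec] = '\0';
    /* %x accepts an optional 0x prefix, matching the slide syntax. */
    return sscanf(eq + 1, "%" SCNx32, value) == 1 ? 0 : -1;
}
```

The parsed field name would then be checked against the matches the netdev's NIC can actually offload, falling back to the error-reporting mechanism shown on slide 26 when it cannot.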
22. Ingress Scheduling – Configuration (future)
• $ ovs-vsctl set Interface phy1
      ingress_sched:vlan_tci=0x1123/0x1fff,ip,ip_dscp=0x5
• Several different prioritization conditions: a AND b.
23. Ingress Scheduling – Configuration (future)
• $ ovs-vsctl set Interface phy1
      ingress_sched:filter=vlan_tci=0x1123/0x1fff
      filter=ip,ip_dscp=0x5
• Several different prioritization conditions: a OR b.
24. Ingress Scheduling – Configuration (future)
• $ ovs-vsctl set Interface phy1
      ingress_sched:prio=1,
      filter,vlan_tci=0x1123/0x1fff,
      filter,eth_type=0x8809,
      prio=2,
      filter,ip,ip_dscp=0x5
• Traffic priority levels: support several levels of prioritization, e.g. not just High and Low but also a Critical level.
25. Ingress Scheduling – Configuration (future)
• $ ovs-vsctl set Interface phy1
      ingress_sched:prio=2,
      filter,ip,ip_dscp=0x5,
      prio=1,
      filter=vlan_tci=0x1123/0x1fff,
      filter,eth_type=0x8809
• Filter priority: filter groups are applied in the order in which they appear on the configuration line.
26. Ingress Scheduling – Error Reporting
• ovsdb-schema:
  <table name="Interface"…
    <column name="ingress_sched" key="err">
      If the specified ingress scheduling could not be applied, Open vSwitch sets this column to an error description in human-readable form. Otherwise, Open vSwitch clears this column.
27. Ingress Scheduling – RxQs & RSS
• $ ovs-vsctl set Interface phy1 options:n_rxq=4
• $ ovs-vsctl set Interface phy1
      ingress_sched:prio=2,
      filter,ip,ip_dscp=0x5,
      prio=1,
      filter=vlan_tci=0x1123/0x1fff,
      filter,eth_type=0x8809
• n_rxq configures the RSS queues; the prioritization filters add additional priority queues on top.
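One plausible queue layout for this configuration can be sketched as below. This is an assumption for illustration only: it posits that the priority queues are numbered after the n_rxq RSS queues, with one extra queue per configured priority level; the id assignment in the actual patches may differ.

```c
/* Hypothetical queue-numbering helpers for the layout above:
 * RSS queues occupy ids 0..n_rxq-1, and each configured priority
 * level gets one extra queue appended after them. */

/* Total rx queues needed for n_rxq RSS queues and n_prio levels. */
static inline int total_rxqs(int n_rxq, int n_prio)
{
    return n_rxq + n_prio;
}

/* Queue id for priority level 'prio' (1 = highest priority):
 * prio 1 maps to the first queue after the RSS block. */
static inline int prio_rxq_id(int n_rxq, int prio)
{
    return n_rxq + prio - 1;
}
```

With n_rxq=4 and the two priority levels configured above, this layout would need six rx queues in total, which ties into the next-steps concern about assigning the resulting rxqs sensibly across PMDs.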
28. Ingress Scheduling – Next Steps
• Avoid poor rxq → pmd assignment
[Diagram: four PMDs with rx queues assigned unevenly across them]
29. Ingress Scheduling – Next Steps
• Use the rte_flow API for offload
• Extend to several priorities
  • Priorities of overlapping filters
  • Multiple traffic priorities
• Working with the RFC 'Flow Offload' feature…
• …
• Prioritization to the guest…
30. Summary
• OvS-DPDK in an NFVI context needs ingress scheduling to protect priority traffic against PMD overload
• SW priority-queue handling in the PMD loop is effective
  • Could be upstreamed first, with the priority configurable per port
• Off-loading classification and queue selection to the NIC through the rte_flow API allows a generic solution
  • Interaction with the RFC Flow Classification Offload
• Work in progress
  • Lots left to figure out
  • We are open to suggestions/collaboration