High-Performance Networking Using eBPF, XDP, and io_uring (ScyllaDB)
In the networking world there are a number of ways to increase performance over naive use of basic Berkeley sockets. These techniques range from polling blocking sockets, through non-blocking sockets driven by epoll, all the way to completely bypassing the Linux kernel for maximum network performance, where you talk directly to the network interface card using something like DPDK or netmap. All these tools have their place, and they generally occupy a spectrum from convenience to performance. But in recent years that landscape has changed massively. The tools available to the average Linux systems developer have improved, from the creation of io_uring to the expansion of BPF from a simple filtering language into a full programming environment embedded directly in the kernel. Along with that came XDP (eXpress Data Path), the Linux kernel's answer to kernel-bypass networking. AF_XDP is the new socket type created by this feature, and it generally works very similarly to something like DPDK. History lessons out of the way, this talk will look into and discuss the merits of this technology, its place in the broader ecosystem, and how it can be used to attain the highest level of performance possible. The talk will dive into crucial details, such as how AF_XDP works and how it can be integrated into a larger system, and finally into more advanced topics such as request sharding and load balancing. There will be a detailed look at the design of AF_XDP, the eBPF code used, and the userspace code required to drive it all. It will also include performance numbers from this setup compared to regular kernel networking, and, most importantly, show how to put all this together to handle as much data as possible on a single modern multi-core system.
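The first step in the progression the abstract describes, from blocking sockets to readiness-based I/O with epoll, can be sketched in a few lines of Python (a generic illustration, not code from the talk):

```python
import select
import socket

# A connected socket pair stands in for a real network connection.
a, b = socket.socketpair()
a.setblocking(False)
b.setblocking(False)

# Readiness-based I/O: instead of blocking in recv(), we ask epoll
# to tell us when the socket has data, so recv() can never stall.
ep = select.epoll()
ep.register(b.fileno(), select.EPOLLIN)

a.send(b"ping")

events = ep.poll(timeout=1.0)
for fd, mask in events:
    if fd == b.fileno() and mask & select.EPOLLIN:
        data = b.recv(4096)
        print(data)  # b'ping'

ep.close()
a.close()
b.close()
```

Kernel-bypass approaches like DPDK and AF_XDP take this further by removing the kernel's socket layer from the data path entirely; epoll is the conventional baseline they are measured against.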
USENIX LISA2021 talk by Brendan Gregg (https://www.youtube.com/watch?v=_5Z2AU7QTH4). This talk is a deep dive that describes how BPF (eBPF) works internally on Linux, and dissects some modern performance observability tools. Details covered include the kernel BPF implementation: the verifier, JIT compilation, and the BPF execution environment; the BPF instruction set; different event sources; and how BPF is used by user space, using bpftrace programs as an example. This includes showing how bpftrace is compiled to LLVM IR and then BPF bytecode, and how per-event data and aggregated map data are fetched from the kernel.
Netronome's half-day tutorial on host data plane acceleration at ACM SIGCOMM 2018 introduced attendees to models for host data plane acceleration and provided an in-depth understanding of SmartNIC deployment models at hyperscale cloud vendors and telecom service providers.
Presenter Bios
Jakub Kicinski is a long-term Linux kernel contributor who has been leading the kernel team at Netronome for the last two years. Jakub’s major contributions include the creation of the BPF hardware offload mechanisms in the kernel and the bpftool user space utility, as well as work on the Linux kernel side of OVS offload.
David Beckett is a Software Engineer at Netronome with a strong technical background in computer networks, including academic research on DDoS. David has expertise in Linux architecture and computer programming. He holds a Master’s degree in Electrical and Electronic Engineering from Queen’s University Belfast and continues as a PhD student studying emerging application-layer DDoS threats.
Video: https://www.facebook.com/atscaleevents/videos/1693888610884236/ . Talk by Brendan Gregg from Facebook's Performance @Scale: "Linux performance analysis has been the domain of ancient tools and metrics, but that's now changing in the Linux 4.x series. A new tracer is available in the mainline kernel, built from dynamic tracing (kprobes, uprobes) and enhanced BPF (Berkeley Packet Filter), aka, eBPF. It allows us to measure latency distributions for file system I/O and run queue latency, print details of storage device I/O and TCP retransmits, investigate blocked stack traces and memory leaks, and a whole lot more. These lead to performance wins large and small, especially when instrumenting areas that previously had zero visibility. This talk will summarize this new technology and some long-standing issues that it can solve, and how we intend to use it at Netflix."
Kernel load-balancing for Docker containers using IPVS (Docker, Inc.)
Many companies use expensive proprietary hardware and software to provide load balancing and routing for their users and services. I'm going to demonstrate how the same, or even better, performance and feature set can be achieved using an open-source technology that has been part of the mainline Linux kernel for over a decade – IPVS. Specifically, you'll see how IPVS can be used to automatically configure load balancing and routing for Docker containers using a simple Go daemon and a Docker plugin.
Pushing Packets - How do the ML2 Mechanism Drivers Stack Up (James Denton)
Architecting a private cloud to meet the use cases of its users can be a daunting task. How do you determine which of the many L2/L3 Neutron plugins and drivers to implement? Does network performance outweigh reliability? Are overlay networks just as performant as VLAN networks? The answers to these questions will drive the appropriate technology choice.
In this presentation, we will look at many of the common drivers built around the ML2 framework, including LinuxBridge, OVS, OVS+DPDK, SR-IOV, and more, and will provide performance data to help drive decisions around selecting a technology that's right for the situation. We will discuss our experience with some of these technologies, and the pros and cons of one technology over another in a production environment.
Using eBPF for High-Performance Networking in Cilium (ScyllaDB)
The Cilium project is a popular networking solution for Kubernetes, based on eBPF. This talk uses eBPF code and demos to explore the basics of how Cilium makes network connections, and manipulates packets so that they can avoid traversing the kernel's built-in networking stack. You'll see how eBPF enables high-performance networking as well as deep network observability and security.
Decompressed vmlinux: linux kernel initialization from page table configurati... (Adrian Huang)
A talk about how the Linux kernel initializes the page table.
Note: When you view the slide deck via a web browser, the screenshots may be blurred. You can download and view them offline (the screenshots are clear).
Video: https://www.youtube.com/watch?v=JRFNIKUROPE . Talk for linux.conf.au 2017 (LCA2017) by Brendan Gregg, about Linux enhanced BPF (eBPF). Abstract:
A world of new capabilities is emerging for the Linux 4.x series, thanks to enhancements that have been included in Linux for Berkeley Packet Filter (BPF): an in-kernel virtual machine that can execute user-space-defined programs. It is finding uses in security auditing and enforcement, networking enhancements (including eXpress Data Path), and performance observability and troubleshooting. Many new open source tools that use BPF have been written in the past 12 months for performance analysis. Tracing superpowers have finally arrived for Linux!
For its use with tracing, BPF provides the programmable capabilities to the existing tracing frameworks: kprobes, uprobes, and tracepoints. In particular, BPF allows timestamps to be recorded and compared from custom events, allowing latency to be studied in many new places: kernel and application internals. It also allows data to be efficiently summarized in-kernel, including as histograms. This has allowed dozens of new observability tools to be developed so far, including measuring latency distributions for file system I/O and run queue latency, printing details of storage device I/O and TCP retransmits, investigating blocked stack traces and memory leaks, and a whole lot more.
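The in-kernel histograms mentioned above typically use power-of-two (log2) buckets, so only small per-bucket counts, not the raw events, cross the kernel/user boundary. The bucketing idea can be sketched in Python (an illustration of the concept, not actual kernel code):

```python
from collections import Counter

def log2_bucket(value_us: int) -> int:
    """Return the power-of-two bucket index for a latency value,
    mirroring the shift loop BPF tools use to bucket in-kernel."""
    bucket = 0
    while value_us > 1:
        value_us >>= 1
        bucket += 1
    return bucket

def histogram(latencies_us):
    """Aggregate raw latencies into log2 buckets; only these counts
    would be read out of the kernel, not the individual samples."""
    return Counter(log2_bucket(v) for v in latencies_us)

hist = histogram([1, 3, 5, 9, 17, 300, 1024])
for bucket in sorted(hist):
    lo, hi = 1 << bucket, (1 << (bucket + 1)) - 1
    print(f"{lo:>6} -> {hi:>6} us : {'@' * hist[bucket]}")
```

This is why tools like biolatency can summarize millions of I/O events with negligible overhead: the expensive aggregation happens at the event source, in the kernel.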
This talk will summarize BPF capabilities and use cases so far, and then focus on its use to enhance Linux tracing, especially with the open source bcc collection. bcc includes BPF versions of old classics, and many new tools, including execsnoop, opensnoop, funccount, ext4slower, and more (many of which I developed). Perhaps you'd like to develop new tools, or use the existing tools to find performance wins large and small, especially when instrumenting areas that previously had zero visibility. I'll also summarize how we intend to use these new capabilities to enhance systems analysis at Netflix.
The page cache mechanism in the Linux kernel.
Note: When you view the slide deck via a web browser, the screenshots may be blurred. You can download and view them offline (the screenshots are clear).
eBPF is an exciting new technology that is poised to transform Linux performance engineering. eBPF enables users to dynamically and programmatically trace any kernel or user space code path, safely and efficiently. However, understanding eBPF is not so simple. The goal of this talk is to give audiences a fundamental understanding of eBPF, how it interconnects existing Linux tracing technologies, and how it provides a powerful platform to solve any Linux performance problem.
Using OVN (Open Virtual Network), you can build a virtual network spanning multiple servers (hypervisors/chassis) running OVS (Open vSwitch).
This slide deck is a set of notes on logical network topology and sample configuration with OVN: a HOW-TO example that creates two logical switches connecting four VMs running on two chassis.
Talk for SCaLE13x. Video: https://www.youtube.com/watch?v=_Ik8oiQvWgo . Profiling can show what your Linux kernel and applications are doing in detail, across all software stack layers. This talk shows how we are using Linux perf_events (aka "perf") and flame graphs at Netflix to understand CPU usage in detail, to optimize our cloud usage, solve performance issues, and identify regressions. This will be more than just an intro: profiling difficult targets, including Java and Node.js, will be covered, which includes ways to resolve JITed symbols and broken stacks. Included are the easy examples, the hard, and the cutting edge.
This slide deck was created to make Open vSwitch easier to understand, so I tried to make it practical: if you follow this scenario, you will gain some working knowledge of OVS.
In this document I mainly use just two commands, "ip" and "ovs-vsctl", to show you what they can do.
The BPF (Berkeley Packet Filter) mechanism was first introduced into Linux in 1997, in version 2.1.75. It has seen a number of extensions over the years. Recently, in versions 3.15-3.19, it received a major overhaul which drastically expanded its applicability. This talk will cover how the instruction set looks today and why: its architecture, capabilities, interfaces, and just-in-time compilers. We will also talk about how it is being used in different areas of the kernel, such as tracing and networking, and about future plans.
Boosting I/O Performance with KVM io_uring (ShapeBlue)
Storage performance is becoming much more important. KVM io_uring attempts to bring the I/O performance of a virtual machine to almost the same level as bare metal. Apache CloudStack has supported io_uring since version 4.16. Wido will show the difference in performance that io_uring brings to the table.
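For context on how this is typically wired up: in a libvirt-managed KVM guest (which CloudStack drives under the hood), io_uring is selected per disk via the `io` attribute of the disk's `<driver>` element. A minimal sketch, assuming libvirt 6.3+ and QEMU 5.0+, with a hypothetical image path:

```xml
<disk type='file' device='disk'>
  <!-- io='io_uring' switches the QEMU AIO backend from threads/native
       to io_uring; the image path below is only an example. -->
  <driver name='qemu' type='qcow2' cache='none' io='io_uring'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

The other accepted values for `io` are `native` and `threads`, which is what performance comparisons like the one in this talk measure io_uring against.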
Wido den Hollander is the CTO of CLouDinfra, an infrastructure company offering total web hosting solutions. CLDIN provides datacenter, IP and virtualization services for the companies within TWS. Wido is a PMC member of the Apache CloudStack project and a Ceph expert. He started with CloudStack 9 years ago; what attracted his attention is the simplicity of CloudStack and the fact that it is an open-source solution. Over the years Wido became a contributor and a PMC member, and he was VP of the project for a year. He is one of our most active members, who puts a lot of effort into keeping the project active and transforming it into a turnkey solution for cloud builders.
-----------------------------------------
The CloudStack European User Group 2022 took place on 7th April. The day saw a virtual get-together for the European CloudStack community, hosting 265 attendees from 25 countries. The event hosted 10 sessions from leading CloudStack experts, users and skilful engineers from the open-source world, which included technical talks, user stories, and presentations of new features and integrations.
------------------------------------------
About CloudStack: https://cloudstack.apache.org/
Reverse Mapping (rmap) in Linux Kernel (Adrian Huang)
Note: When you view the slide deck via a web browser, the screenshots may be blurred. You can download and view them offline (the screenshots are clear).
Talk by Brendan Gregg for USENIX LISA 2019: Linux Systems Performance. Abstract: "
Systems performance is an effective discipline for performance analysis and tuning, and can help you find performance wins for your applications and the kernel. However, most of us are not performance or kernel engineers, and have limited time to study this topic. This talk summarizes the topic for everyone, touring six important areas of Linux systems performance: observability tools, methodologies, benchmarking, profiling, tracing, and tuning. Included are recipes for Linux performance analysis and tuning (using vmstat, mpstat, iostat, etc), overviews of complex areas including profiling (perf_events) and tracing (Ftrace, bcc/BPF, and bpftrace/BPF), and much advice about what is and isn't important to learn. This talk is aimed at everyone: developers, operations, sysadmins, etc, and in any environment running Linux, bare metal or the cloud."
There are many systems that handle heavy UDP transactions, like DNS and RADIUS servers. Nowadays 10G Ethernet NICs are widely deployed, and even 40G and 100G NICs are out there. This makes it difficult for a single server to get enough performance to consume link bandwidth with short-packet transactions. Since Linux is by default not tuned for dedicated UDP servers, we are investigating ways to boost such UDP transaction performance.
This talk will show how we analyze the bottleneck and give tips we found to make the performance better. Also we discuss challenges to improve it even more.
This presentation was given at LinuxCon Japan 2016 by Toshiaki Makita
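One widely used technique for scaling UDP servers across cores (a generic illustration of the problem space, not necessarily a tip from this talk) is SO_REUSEPORT, available since Linux 3.9: several sockets bind the same port and the kernel spreads incoming datagrams across them, so one worker per core can consume traffic in parallel. A minimal Python sketch:

```python
import select
import socket

def make_reuseport_socket(port: int = 0) -> socket.socket:
    """Create one of N UDP sockets sharing a port; the kernel hashes
    each packet's address tuple to pick exactly one of them."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("127.0.0.1", port))
    return s

# The first worker picks a free port; the second binds the *same* port,
# which plain SO_REUSEADDR would not allow for two live UDP sockets.
w1 = make_reuseport_socket()
port = w1.getsockname()[1]
w2 = make_reuseport_socket(port)

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"query", ("127.0.0.1", port))

# The kernel delivers the datagram to exactly one of the workers.
ready, _, _ = select.select([w1, w2], [], [], 1.0)
data, _ = ready[0].recvfrom(512)
print(data)  # b'query'
```

In a real deployment each worker would run in its own process or thread, pinned to a core, which removes the contention of many cores draining a single socket's receive queue.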
The advantages of Arista/OVH configurations, and the technologies behind buil... (OVHcloud)
Arista will put an emphasis on the technologies behind building and operating datacentres, and the reasons they give the results expected from them (varied traffic spike management, increasing bandwidth, end points and security), including very large-scale production environments.
Presented at the CloudStack Silicon Valley User Group in September 2015 at Nuage Networks. Discussed impact of containers, emerging software defined networking platforms, NFV, IPv6 and performance.
Presentation for the July 2018 @medianetlab meetup at NCSR "Demokritos"
A related blog post can be found here: https://medianetlab.gr/mnlab-meetup-kubernetes/
and the video: https://www.youtube.com/watch?v=l2ce5U9bh6M
Snabb Switch: Riding the HPC wave to simpler, better network appliances (FOSD... (Igalia)
By Katerina Barone-Adesi.
Driven by the needs of scientific computing, rapid rises in memory bandwidth have made it possible to implement high-performance network functions in a radically simpler way. Snabb Switch rides this wave, bypassing the kernel to process network packets in terse Lua, leaving the programmer free to focus on the essence of their problem. This talk presents our experiences delivering a carrier-grade implementation of "lightweight 4 over 6", an IPv4-as-a-service architecture that tunnels access to the IPv4 internet through specialized Snabb appliances.
We report on our recent experience implementing a carrier-grade virtualized network function, with observations on what it is like to build real-world, high-performance Snabb applications with kernel bypass. Each instance runs at essentially line speed on two ten-gigabit Ethernet cards.
Lightweight 4-over-6 (lw4o6) defines an IPv4-as-a-service architecture that allows ISPs to internally operate an IPv6-only network, tunneling IPv4 connections between lw4o6-aware endpoints installed at the customer's site (e.g. in OpenWRT) and an internet-facing "lwAFTR". Lw4o6 was specified in 2015 as RFC 7596 and has the architectural advantage that the carrier-side lwAFTR only needs per-customer state, not per-flow state. An lw4o6 system can also be configured to share IPv4 addresses between multiple customers as part of an IPv4 exhaustion strategy. It allows IPv4 networks to interoperate smoothly, while a carrier between them runs a pure-IPv6 network.
Igalia has built an open source "lwAFTR" implementation that is ready to deploy in production. We describe the joys of hacking with Snabb, giving a quick intro to Snabb, modern x86, and lw4o6 along the way.
(c) 2016 FOSDEM VZW
CC BY 2.0 BE
https://archive.fosdem.org/2016/
Pluggable Infrastructure with CI/CD and Docker (Bob Killen)
The docker cluster ecosystem is still young, and highly modular. This presentation covers some of the challenges we faced deciding on what infrastructure to deploy, and a few tips and tricks in making both applications and infrastructure easily adaptable.
VMworld 2013: Real-world Deployment Scenarios for VMware NSX (VMworld)
VMworld 2013
Taruna Gandhi, VMware
Jeremy Hanmer, DreamHost
Funs Kessen, Schuberg Philis
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
Talk given at ClueCon 2016 that discusses FreeSWITCH and its place in a microservices architecture. Covers a specific deployment case using Docker and Adhearsion, along with certain features that make FreeSWITCH a model use-case for such a technology stack.
1. Cover common Network terminology
2. Provide an overview of how networks can have positive and negative impacts to our solution.
3. Highlight differences between onsites and hosted servers.
4. Tools that can be used for both validation and troubleshooting with common use cases.
Elevating Tactical DDD Patterns Through Object Calisthenics (Dorra BARTAGUIZ)
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Generating a custom Ruby SDK for your web service or Rails API using Smithy (g2nightmarescribd)
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Transcript: Selling digital books in 2024: Insights from industry leaders (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPathCommunity)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes real work. It takes vision, leadership and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at every stage.
Connector Corner: Automate dynamic content and events by pushing a button (DianaGray10)
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials
[En] IPVS for Docker Containers
1. IPVS for Docker Containers
Production-level load balancing and request routing without spending a single penny.
2. What is IPVS
• Stands for IP Virtual Server. Built on top of Netfilter and works in kernel space. No userland copying of network
packets involved.
• It’s been in the mainline Linux kernel since 2.4.x. Surprisingly, it’s one of the technologies that has successfully pumped
your stuff through the world’s networks for more than 15 years, yet almost nobody has ever heard of it.
• Used in many world-scale companies such as Google, Facebook, LinkedIn, Dropbox, GitHub, Alibaba, Yandex
and so on. During my time in Yandex, we used to route millions of requests per second through a few IPVS
boxes without a sweat.
• Tested with fire, water and brass trombones (untranslatable Russian proverb).
• Provides flexible, configurable load-balancing and request routing done in kernel space – it plugs in even before
PCAP layer – making it bloody fucking fast.
• As with all kernel tech, it looks like an incomprehensible magical artifact from outer space, but bear with me – it’s
actually very simple to use.
And why didn’t I hear about it before?
3. What is IPVS
• Supports TCP, SCTP and UDP (both v4 and v6), works on L4 in kernel space.
• Load balancing using weighted RR, LC, replicated locality-based LC (LBLCR), destination- and source-based
hashing, shortest expected delay (SED) and more via plugins:
‐ LBLCR: «sticky» least-connections, picks a backend and sends all jobs to it until it’s full, then picks
the next one. If all backends are full, replicates one with the least jobs and adds it to the service
backend set.
• Supports persistent connections for SSL, FTP and One Packet Mode for connectionless protocols such
as DNS – will trigger scheduling for each packet.
• Supports request forwarding with standard Masquerading (DNAT), Tunneling (IPIP) or Direct Routing (DR).
Supports Netfilter Marks (FWMarks) for virtual service aggregation, e.g. http and https.
• Built-in cluster synchronization daemon – you only need to configure one router box; other boxes will keep
their configurations up-to-date automatically.
And what can it do if you say it’s so awesome?
4. What is IPVS
• DNAT is your ordinary NAT. It will replace each packet’s destination IP with the chosen backend IP and put the packet
back into the networking stack.
‐ Pros: you don’t need to do anything to configure it.
‐ Cons: responses will be martian packets in your network. But you can use a local reverse proxy to work around
this, or point the default gateway on backends to the load balancer’s IP address and let IPVS take care of it.
• DR (or DSR: direct server return) is the fastest forwarding method in the world.
‐ Pros: speed. It will be as fast as possible given your network conditions. In fact, it will be so fast that you will be
able to pump more traffic through your balancers than the interface capacity they have.
‐ Cons: all backends and routers should be in the same L2 segment, need to configure NoARP devices.
• IPIP is a tun/tap alternative to DR. Like IPSEC, but without encryption.
‐ Pros: it can be routed anywhere and won’t trigger martian packets.
‐ Cons: lower MSS. Also, you need to configure NoARP tun devices on all backend boxes, like in DR.
And a little bit more about all these weird acronyms.
5. What is IPVS
And a little bit more about all these weird acronyms.
‐ IPIP: encapsulates IP; routable anywhere.
‐ DNAT: rewrites the DST IP; same L4.
‐ DSR: rewrites the DST MAC; same L2.
6. What is IPVS
• Works by replacing the job’s destination MAC with the chosen backend MAC and then putting
the packet back into the INPUT queue. Once again: the requirement for this mode is that all
backends should be in the same L2 network segment.
• The response travels directly to the requesting client, bypassing the router box. This
essentially lets you cut router-box traffic by more than 50%, since responses are usually heavier
than requests.
• This also allows you to load balance more traffic than the load balancer interface capacity
since with a proper setup it will only load balance TCP SYNs.
• It’s a little bit tricky to set up, but if done well is indistinguishable from magic (as well as any
sufficiently advanced technology).
• Fun fact: this is how video streaming services survive and avoid bankruptcy!
And a little bit more about DR since it’s awesome!
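The NoARP configuration mentioned above is the only genuinely tricky part of DR. A minimal sketch of what each backend typically needs (the VIP 198.51.100.1 is a placeholder, and the exact sysctl policy can vary by setup):

```shell
# On every backend box: own the VIP silently, without answering ARP for it,
# so that only the load balancer attracts traffic for the VIP on the L2 segment.
sysctl -w net.ipv4.conf.all.arp_ignore=1    # reply to ARP only if the target IP is on the ingress interface
sysctl -w net.ipv4.conf.all.arp_announce=2  # always pick the best local source address for outgoing ARP

# Assign the shared VIP to the loopback so the backend accepts DR'd packets
# whose destination IP is the VIP.
ip addr add 198.51.100.1/32 dev lo
```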
7. I don’t need this
• If you have more than one instance of your application, you need load balancing, otherwise you
won’t be able to distribute requests among those instances.
• If you have more than one application or more than one version deployed at the same time in
production, you need request routing. Typically, only one version is deployed to production at the
same time, and one testing version is waiting in line.
• Imagine what you can do if you could deploy 142 different versions of your application to
production:
‐ A/B testing: route 50% to version A, route 50% to version B.
‐ Intelligent routing: if requester’s cookie is set to «NSA», route to version «HoneyPot».
‐ Experiments: randomly route 1% of users to experimental version «EvenMorePointlessAds»
and stick them to that version.
And why would I load balance and route anything at all?
8. I don’t need this
• In the modern world, instances come and go, sometimes multiple times per second. This rate
will only go up in the future.
• Instances migrate across physical nodes. Instances might be stopped when no load or low load is in
effect. New instances might appear to accommodate the growing request rate and be torn down later.
• Neither nginx (in its free version) nor HAProxy allows online reconfiguration. Many hacks exist that
essentially regenerate the configuration file on the routing box and then cycle the reverse proxy, but
it’s neither sustainable nor mainstream.
• Neither hipache (written in node.js) nor vulcand (under development) allows for non-RR balancing
methods.
• The fastest userland proxy to date – HAProxy – is ~40% slower than IPVS in DR mode according to
many publicly available benchmark results.
Also, my nginx (haproxy, third option) setup works fine, get off the stage please!
9. I don’t need this
• Lucky you!
• But running stuff in the cloud is not an option in many circumstances:
‐ Performance and/or reliability critical applications.
‐ Exotic hardware or software requirements. CG or CUDA farms, BSD environments and so on.
‐ Clouds are a commodity in the US and Europe, but in some countries with a thriving IT culture AWS
is more of a fairy tale (e.g. Vietnam).
‐ Security considerations and vendor lock-in.
• Cloud-based load balancers are surprisingly dumb; for example, AWS ELB doesn’t really let you
configure anything except real servers and some basic stuff like SSL and health checks.
• After a certain point, AWS/GCE becomes quite expensive, unless you’re Netflix.
And I run my stuff in the cloud, it takes care of everything – my work is perpetual siesta.
10. What is IPVS, again
• Make sure you have a properly configured kernel (usually you do):
‐ cat /boot/config-$(uname -r) | grep IP_VS
• Install IPVS CLI to verify that it’s healthy and up:
‐ sudo apt-get install ipvsadm
‐ sudo ipvsadm -l
• Command-line tool allows you to do everything out there with IPVS, but it’s the 21st century and CLIs are only
good to show off your keyboard-fu ;)
• That’s why I’ve coded a dumb but loyal REST API daemon that talks to the kernel and configures IPVS for you. In
Go, because, you know.
• You can use it to add and remove virtual services and backends in runtime and get metrics about existing virtual
services. It’s totally open source and free but might not work (as everything open source and free).
And how do I use it now since it sounds amazing!
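For reference, configuring a virtual service by hand with ipvsadm is only a few commands. A sketch with placeholder addresses (`-g`, `-m` and `-i` select the DR, DNAT and IPIP forwarding methods respectively):

```shell
# Create a TCP virtual service on the VIP, scheduled with weighted round-robin.
ipvsadm -A -t 198.51.100.1:80 -s wrr

# Attach two backends in Direct Routing mode (-g), with equal weights.
ipvsadm -a -t 198.51.100.1:80 -r 10.0.0.11:80 -g -w 100
ipvsadm -a -t 198.51.100.1:80 -r 10.0.0.12:80 -g -w 100

# Inspect the resulting table and live traffic counters.
ipvsadm -Ln
ipvsadm -Ln --stats
```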
11. GORB
• It’s here: https://github.com/kobolog/gorb
• Based on a native Go netlink library – https://github.com/tehnerd/gnl2go. The library itself talks directly to the
kernel and already supports the majority of IPVS operations. It is a port of Facebook’s gnlpy –
https://github.com/facebook/gnlpy
• It exposes a very straightforward JSON-based REST API:
‐ PUT or DELETE /service/<vsID> – create or remove a virtual service.
‐ PUT or DELETE /service/<vsID>/<rsID> – add or remove a backend for a virtual service.
‐ GET /service/<vsID> – get virtual service metrics (incl. health checks).
‐ GET /service/<vsID>/<rsID> – get backend configuration.
‐ PATCH /service/<vsID>/<rsID> – update backend weight and other parameters without restarting
anything.
Go Routing and Balancing.
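As a sketch of what driving that API looks like with curl (the JSON field names here are illustrative assumptions – check the GORB README for the exact schema; port 4672 is the one used in the demo later):

```shell
# Create a virtual service "web" on the VIP, port 80, weighted round-robin.
curl -X PUT http://gorb-host:4672/service/web \
  -d '{"host": "198.51.100.1", "port": 80, "protocol": "tcp", "method": "wrr"}'

# Register a backend for that service...
curl -X PUT http://gorb-host:4672/service/web/backend-1 \
  -d '{"host": "10.0.0.11", "port": 80, "weight": 100}'

# ...and fetch the service metrics, including health-check state.
curl http://gorb-host:4672/service/web
```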
12. GORB
• Every time you spin up a new container, it can be automatically registered with
GORB, so that your application’s clients can reach it via the balancer-provided endpoint.
• GORB will automatically run TCP checks (if it’s a TCP service) on your container and inhibit all
traffic to it if, for some reason, it disappears without gracefully de-registering first: a
network outage, the oom-killer or whatnot.
• HTTP checks are also available, if required – e.g. your app can have a /health endpoint
which will respond 200 only if all downstream dependencies are okay as well.
• Essentially, the only thing you need to do is to start a tiny little daemon on Docker Engine
boxes that will listen for events on Events API and send commands to GORB servers.
• More checks can be added in the future, e.g. Consul, ZooKeeper or etcd!
And why is it cool for Docker Containers.
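The event-listener daemon can be approximated in a few lines of shell. This is a toy sketch, not the real gorb-docker-link: the service name, GORB address and the single-published-port assumption are all simplifications:

```shell
#!/bin/sh
# Toy sketch: register/deregister containers with a GORB server as they come and go.
GORB=http://gorb-host:4672

# Stream container start/die events from the Docker Engine.
docker events --filter 'event=start' --filter 'event=die' \
              --format '{{.Status}} {{.ID}}' |
while read -r status id; do
  case "$status" in
    start)
      # Assume the container publishes exactly one port; `docker port` prints
      # lines like "80/tcp -> 0.0.0.0:32768".
      addr=$(docker port "$id" | head -n1 | awk '{print $3}')
      curl -X PUT "$GORB/service/web/$id" \
           -d "{\"host\": \"${addr%:*}\", \"port\": ${addr##*:}}"
      ;;
    die)
      curl -X DELETE "$GORB/service/web/$id"
      ;;
  esac
done
```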
13. GORB
kobolog@bulldozer:~$ docker-machine create -d virtualbox gorb
kobolog@bulldozer:~$ docker-machine ssh gorb sudo modprobe ip_vs
kobolog@bulldozer:~$ eval $(docker-machine env gorb)
kobolog@bulldozer:~$ docker build -t gorb src/gorb
kobolog@bulldozer:~$ docker run -d --net=host --privileged gorb -f -i eth1
kobolog@bulldozer:~$ docker build -t gorb-link src/gorb/gorb-docker-link
kobolog@bulldozer:~$ docker run -d --net=host -v /var/run/docker.sock:/var/run/docker.sock gorb-link -r $(docker-machine ip default):4672 -i eth1
kobolog@bulldozer:~$ docker run -d -p 80 nginx (repeat 4 times)
kobolog@bulldozer:~$ curl -i http://$(docker-machine ip gorb):80
And how do I use it? Live demo or GTFO!
14. A few words about BGP
• Once you have a few router boxes to avoid having a SPOF, you might wonder how clients find out which one to
use. One option is DNS RR, where you put them all behind one domain name and let DNS resolvers do their job.
• But there’s a better way – BGP host routes, also known as anycast routing:
‐ It’s a protocol used by network routers to agree on network topologies.
‐ The idea behind anycast is that each router box announces the same IP address on the network via BGP host
route advertisements.
‐ When a client hits this IP address, it’s automatically routed to one of the GORB boxes based on configured routing
metrics.
‐ You don’t need any special hardware – it’s already built into your routers.
‐ You can do this using one of the BGP packages, such as Bird or Quagga. Notable ecosystem projects in this area:
Project Calico.
‐ Also can be done with IPv6 anycast, but I’ve never seen it implemented.
Black belt in networking is not complete without a few words about BGP
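To make the anycast idea concrete, here is a minimal hypothetical Bird (1.x syntax) configuration for one router box, announcing the shared VIP as a /32 host route. The router ID, ASNs, neighbor address and VIP are all placeholders:

```shell
# Hypothetical /etc/bird/bird.conf fragment for one GORB router box.
cat > /etc/bird/bird.conf <<'EOF'
router id 192.0.2.10;

# Pin the shared anycast VIP as a local /32 host route.
protocol static {
  route 198.51.100.1/32 via "lo";
}

protocol kernel { scan time 10; }
protocol device { scan time 10; }

# Advertise only the VIP host route to the upstream router;
# every router box announces the same /32, which is what makes it anycast.
protocol bgp upstream {
  local as 64512;
  neighbor 192.0.2.1 as 64511;
  import none;
  export where net = 198.51.100.1/32;
}
EOF
```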
15. GORB
• No, you cannot blame me, but I’m here to help and answer questions!
• IPVS has been production-ready since I was in college.
• GORB is not production ready and is not tested in production, but since it’s just a configuration daemon, it won’t actually
affect your traffic, I hope.
• Here are some nice numbers to think about:
‐ 1 Gbps of line-rate traffic uses 1% CPU in DR mode.
‐ Can utilize a 40G/100G interface.
‐ Costs you €0.
‐ Typical enterprise hardware load-balancer – €25,000.
• Requires generic hardware, software, tools – also easy to learn. Ask your Ops whether they want to read a 1000-page
vendor manual or a 1000-word manpage.
• Also, no SNMP involved. Amazing and unbelievable!
Is it stable? Is it production-ready? Can I blame you if it doesn’t work?
16. This guy on the stage
• You shouldn’t. In fact, don’t trust anybody on this stage – get your hands dirty and verify
everything yourself. Although last month I turned 30, so I’m trustworthy!
• I’ve been building distributed systems and networking for more than 7 years in companies with
worldwide networks and millions of users, multiple datacenters and other impressive things.
• These distributed systems and networks are still operational and serve billions of user requests
each day.
• I’m the author of an IPVS-based routing and load balancing framework for Yandex’s very own open-source
platform called Cocaine: https://github.com/cocaine.
• As of now, I’m a Senior Infrastructure Software Engineer at Uber in NYC, continuing my self-proclaimed
crusade to make magic infrastructure look less magical and more useful for humans.
Who the hell are you and why should I believe a Russian?
17. Gracias, Cataluña
• This guy:
‐ Twitter (it’s boring and mostly in Russian): @kobolog
‐ Questions when I’m not around: me@kobology.ru
‐ My name is Andrey Sibiryov.
• IPVS: http://www.linuxvirtualserver.org/software/ipvs.html
• GORB sources: https://github.com/kobolog/gorb
• Bird BGP: http://bird.network.cz
• Stage version of this slide deck: http://bit.ly/1S1A3cT
• Also, it’s right about time to ask your questions!
Some links, more links, some HR and questions!