This document provides an overview of Linux networking concepts spanning layers 2 and 3. It discusses link aggregation (LAGs), VLANs, bridges, routing tables, policy-based routing (PBR), VRFs, and network namespaces (NetNS). Key points covered include using LACP for LAGs, VLAN tagging formats, the purpose of bridges, routing tables beyond the default main table, and how VRFs provide layer 3 separation while network namespaces separate the entire network stack. Real-world applications of tunnels and VPNs with VRFs are also highlighted.
Tutorial: Using GoBGP as an IXP connecting router (Shu Sugimoto)
- Show you how GoBGP can be used as a software router in conjunction with quagga
- (Tutorial) Walk through the setup of IXP connecting router using GoBGP
SOSCON 2019.10.17
What are the methods for packet processing on Linux, and how fast is each of them? In this presentation, we will learn how to handle packets on Linux (user space, socket filter, netfilter, tc) and compare their performance, analyzing where in the network stack each kind of processing happens (its hook point). We will also discuss packet processing with XDP, an in-kernel fast path recently added to the Linux kernel. eXpress Data Path (XDP) is a high-performance programmable network data path within the Linux kernel. XDP sits at the lowest software-accessible level of the network stack, the point at which the driver receives the packet. By using the eBPF infrastructure at this hook point, the network stack can be extended without modifying the kernel.
Daniel T. Lee (Hoyeon Lee)
@danieltimlee
Daniel T. Lee currently works as a Software Engineer at Kosslab and contributes to the Linux kernel BPF project. He is interested in cloud, Linux networking, and tracing technologies, and likes to analyze the kernel's internals using BPF.
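As a small companion to the socket-filter hook point mentioned in the abstract, the sketch below assembles a classic BPF (cBPF) program that accepts only IPv4 frames. The opcode values mirror the kernel's <linux/filter.h>; attaching the program with setsockopt(SO_ATTACH_FILTER) on a packet socket requires root, so only the assembly step is shown here.

```python
import struct

# Classic BPF opcode building blocks (values from <linux/filter.h>)
BPF_LD, BPF_H, BPF_ABS = 0x00, 0x08, 0x20
BPF_JMP, BPF_JEQ, BPF_K = 0x05, 0x10, 0x00
BPF_RET = 0x06

def insn(code, jt, jf, k):
    # struct sock_filter { __u16 code; __u8 jt; __u8 jf; __u32 k; }
    # packed with native layout, 8 bytes per instruction
    return struct.pack("HBBI", code, jt, jf, k)

ETH_P_IP = 0x0800      # EtherType for IPv4
ETHERTYPE_OFFSET = 12  # offset of the EtherType field in an Ethernet frame

prog = b"".join([
    insn(BPF_LD | BPF_H | BPF_ABS, 0, 0, ETHERTYPE_OFFSET),  # A = ethertype
    insn(BPF_JMP | BPF_JEQ | BPF_K, 0, 1, ETH_P_IP),         # IPv4? fall through
    insn(BPF_RET | BPF_K, 0, 0, 0xFFFF),                     # accept the packet
    insn(BPF_RET | BPF_K, 0, 0, 0),                          # drop everything else
])

print(len(prog) // 8, "instructions")  # -> 4 instructions
```

This is the same filter `tcpdump -dd ip` would generate (modulo the accepted snap length); the eBPF and XDP programs the talk covers use a richer instruction set but hook into the stack in the same spirit.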
This talk will provide a brief overview of some of the latest developments in the Linux networking world: things like VLAN-aware bridges, VXLAN, VRF-lite, as well as MPLS support will be shown with practical examples.
Everyone still using »ifconfig«, »route«, »arp«, etc. might want to attend to get an idea of how to use the Linux Swiss army knife for networkers (»ip«), which has already replaced or will replace all the old tools on current distributions.
For Debian-based systems, ifupdown2 provides a convenient replacement for the old ifupdown toolchain, including configuration of VLAN interfaces and LAGs, which previously required auxiliary tools.
At the end you will get a glimpse into building your own SDN with Debian Linux, ifupdown2, Salt Stack and Python.
In this session, we’ll review how previous efforts, including Netfilter, Berkeley Packet Filter (BPF), Open vSwitch (OVS), and TC, approached the problem of extensibility. We’ll show you an open source solution available within the Red Hat Enterprise Linux kernel, where extending and merging some of the existing concepts leads to an extensible framework that satisfies the networking needs of datacenter and cloud virtualization.
eBPF is an exciting new technology that is poised to transform Linux performance engineering. eBPF enables users to dynamically and programmatically trace any kernel or user space code path, safely and efficiently. However, understanding eBPF is not so simple. The goal of this talk is to give audiences a fundamental understanding of eBPF, how it interconnects existing Linux tracing technologies, and how it provides a powerful platform to solve any Linux performance problem.
High-Performance Networking Using eBPF, XDP, and io_uring (ScyllaDB)
In the networking world there are a number of ways to increase performance over naive use of basic Berkeley sockets. These techniques range from polling blocking sockets, through non-blocking sockets controlled by epoll, all the way to completely bypassing the Linux kernel for maximum network performance, where you talk directly to the network interface card using something like DPDK or Netmap. All these tools have their place, and generally occupy a spectrum from convenience to performance. But in recent years, that landscape has changed massively. The tools available to the average Linux systems developer have improved, from the creation of io_uring to the expansion of BPF from a simple filtering language into a full-on programming environment embedded directly in the kernel. Along with that came XDP (eXpress Data Path), the Linux kernel's answer to kernel-bypass networking. AF_XDP is the new socket type created by this feature, and it generally works very similarly to something like DPDK. History lessons out of the way, this talk will look into and discuss the merits of this technology, its place in the broader ecosystem, and how it can be used to attain the highest level of performance possible. It will dive into crucial details such as how AF_XDP works, how it can be integrated into a larger system, and finally more advanced topics such as request sharding/load balancing. There will be a detailed look at the design of AF_XDP, the eBPF code used, and the userspace code required to drive it all. It will also include performance numbers from this setup compared to regular kernel networking, and, most importantly, how to put all this together to handle as much data as possible on a single modern multi-core system.
Cilium - Container Networking with BPF & XDP (Thomas Graf)
This talk demonstrates that programmability and performance do not require user space networking; they can be achieved in the kernel by generating BPF programs and leveraging the existing kernel subsystems. We will demo an early prototype which provides fast IPv6 & IPv4 connectivity to containers, container-label-based security policy with O(1) average cost, and debugging and monitoring based on the per-CPU perf ring buffer. We encourage a lively discussion on the approach taken and next steps.
Using eBPF for High-Performance Networking in Cilium (ScyllaDB)
The Cilium project is a popular networking solution for Kubernetes, based on eBPF. This talk uses eBPF code and demos to explore the basics of how Cilium makes network connections, and manipulates packets so that they can avoid traversing the kernel's built-in networking stack. You'll see how eBPF enables high-performance networking as well as deep network observability and security.
Moved to https://speakerdeck.com/ebiken/zebra-srv6-cli-on-linux-dataplane-enog-number-49
An introduction to SRv6, the Linux SRv6 implementation, and how to add an SRv6 CLI to Zebra 2.0, an open source network operation stack.
Presented at ENOG (Echigo NOG) #49.
An introduction to eBPF (and cBPF). Topics covered include history, implementation, and program types & maps. It also gives a brief introduction to XDP and DPDK.
Agenda:
In this session, Shmulik Ladkani discusses the kernel's net_device abstraction, its interfaces, and how net-devices interact with the network stack. The talk covers many of the software network devices that exist in the Linux kernel, the functionalities they provide and some interesting use cases.
Speaker:
Shmulik Ladkani is a Tech Lead at Ravello Systems.
Shmulik started his career at Jungo (acquired by NDS/Cisco) implementing residential gateway software, focusing on embedded Linux, Linux kernel, networking and hardware/software integration.
51966 coffees and billions of forwarded packets later, with millions of homes running his software, Shmulik left his position as Jungo’s lead architect and joined Ravello Systems (acquired by Oracle) as tech lead, developing a virtual data center as a cloud service. He's now focused around virtualization systems, network virtualization and SDN.
The Linux kernel is undergoing the most fundamental architecture evolution in its history and is becoming a microkernel, potentially the biggest fundamental change ever to happen to it. Why is the Linux kernel evolving into a microkernel? This talk covers how companies like Facebook and Google use BPF to patch 0-day exploits, how BPF will change the way features are added to the kernel forever, and how BPF introduces a new type of application deployment method for the Linux kernel.
The monthly events, held in cooperation with the Swiss IPv6 Council, cover various technical topics around IPv6.
The talk by Jen Linkova on 30 November 2015 was dedicated to the Neighbor Discovery Protocol, a key mechanism for establishing connections between IPv6 nodes and LANs. In her presentation, the speaker focused on the technical details of its design and implementation as well as on security aspects.
BPF & Cilium - Turning Linux into a Microservices-aware Operating System (Thomas Graf)
Container runtimes cause Linux to return to its original purpose: serving applications that interact directly with the kernel. At the same time, the Linux kernel is traditionally difficult to change, and its development process is full of myths. A new, efficient in-kernel programming language called eBPF is changing this and allows everyone to extend existing kernel components or glue them together in new forms without changing the kernel itself.
USENIX LISA2021 talk by Brendan Gregg (https://www.youtube.com/watch?v=_5Z2AU7QTH4). This talk is a deep dive that describes how BPF (eBPF) works internally on Linux, and dissects some modern performance observability tools. Details covered include the kernel BPF implementation: the verifier, JIT compilation, and the BPF execution environment; the BPF instruction set; different event sources; and how BPF is used by user space, using bpftrace programs as an example. This includes showing how bpftrace is compiled to LLVM IR and then BPF bytecode, and how per-event data and aggregated map data are fetched from the kernel.
Video: https://www.youtube.com/watch?v=JRFNIKUROPE. Talk for linux.conf.au 2017 (LCA2017) by Brendan Gregg about Linux enhanced BPF (eBPF). Abstract:
A world of new capabilities is emerging for the Linux 4.x series, thanks to enhancements that have been included in Linux for the Berkeley Packet Filter (BPF): an in-kernel virtual machine that can execute user-space-defined programs. It is finding uses in security auditing and enforcement, networking enhancements (including eXpress Data Path), and performance observability and troubleshooting. Many new open source performance analysis tools that use BPF have been written in the past 12 months. Tracing superpowers have finally arrived for Linux!
For its use with tracing, BPF provides the programmable capabilities to the existing tracing frameworks: kprobes, uprobes, and tracepoints. In particular, BPF allows timestamps to be recorded and compared from custom events, allowing latency to be studied in many new places: kernel and application internals. It also allows data to be efficiently summarized in-kernel, including as histograms. This has allowed dozens of new observability tools to be developed so far, including measuring latency distributions for file system I/O and run queue latency, printing details of storage device I/O and TCP retransmits, investigating blocked stack traces and memory leaks, and a whole lot more.
This talk will summarize BPF capabilities and use cases so far, and then focus on its use to enhance Linux tracing, especially with the open source bcc collection. bcc includes BPF versions of old classics, and many new tools, including execsnoop, opensnoop, funccount, ext4slower, and more (many of which I developed). Perhaps you'd like to develop new tools, or use the existing tools to find performance wins large and small, especially when instrumenting areas that previously had zero visibility. I'll also summarize how we intend to use these new capabilities to enhance systems analysis at Netflix.
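The in-kernel log2 histogram aggregation mentioned above can be modeled in a few lines of Python (a conceptual sketch of what bpftrace's hist() aggregation does, not BPF code): each latency sample is bucketed by power of two, so only a tiny fixed-size map, rather than every sample, has to be kept.

```python
from collections import Counter

def log2_bucket(value_us: int) -> int:
    """Power-of-two bucket index for a latency sample:
    0 for 0, 1 for 1, 2 for 2-3, 3 for 4-7, and so on."""
    return value_us.bit_length()

def summarize(samples_us):
    # Aggregate samples into {bucket: count}, like an in-kernel map
    hist = Counter(log2_bucket(s) for s in samples_us)
    return dict(sorted(hist.items()))

# e.g. hypothetical I/O latencies in microseconds
samples = [3, 5, 6, 120, 125, 4000]
print(summarize(samples))  # {2: 1, 3: 2, 7: 2, 12: 1}
```

The real tools do this inside the kernel, so only the handful of bucket counters, not a stream of per-event records, crosses into user space.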
What is this Ethernet thing, what devices do we have there and why? What do they do? What does it have to do with trees, and who is this MAC?
What is an IP address? How does subnetting with CIDR work, and what are those network classes people still keep talking about? What are private and public IPs, and where do I get them? How do I configure all of this on Linux? What are routing tables, and why do I actually have at least three of them?
This talk answers all of these questions and a few more. Subnetting with CIDR is the foundation for routing in today's IP networks;
RFC1918, RFC3927 and RFC6598 each define "private" IP ranges for internal use, and for public IPs we have RIPE in Europe. An introduction to iproute2 shows how to configure all of this "by hand" on Linux, and how to set up a reboot-safe network configuration using Debian as an example.
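The subnetting and private-range points above can be tried directly with Python's standard ipaddress module (the addresses below are arbitrary documentation examples):

```python
import ipaddress

# CIDR subnetting: split a /24 into four /26 subnets
net = ipaddress.ip_network("203.0.113.0/24")
subnets = list(net.subnets(new_prefix=26))
print([str(s) for s in subnets])
# ['203.0.113.0/26', '203.0.113.64/26', '203.0.113.128/26', '203.0.113.192/26']

# The "private" ranges from the RFCs mentioned above
rfc1918 = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
rfc3927 = ipaddress.ip_network("169.254.0.0/16")   # link-local
rfc6598 = ipaddress.ip_network("100.64.0.0/10")    # carrier-grade NAT space

addr = ipaddress.ip_address("192.168.1.10")
print(any(addr in n for n in rfc1918))               # True: RFC1918 space
print(ipaddress.ip_address("100.64.0.1") in rfc6598)  # True: CGN space
```

Each /26 here holds 64 addresses (62 usable hosts), which is exactly the kind of arithmetic CIDR replaced the old network classes with.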
Dynamic routing protocols, care and feeding - OSPF (Maximilian Wilhelm)
Congratulations! You get to administer a network with more than 2 routers. This talk explains why static routing is not a solution and can become a problem faster than you'd like. As an introduction to dynamic routing and OSPF, it explains how routers find each other and exchange routes, what an area is, and how the link-state database works.
OSPF is demonstrated hands-on using the Bird Internet Routing Daemon and in interplay with classic vendors' gear.
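At its heart, the link-state database lets every router run the same shortest-path-first computation. A minimal sketch over a hypothetical four-router topology (illustrative costs, not a Bird configuration):

```python
import heapq

def spf(lsdb, source):
    """Dijkstra's shortest-path-first over a link-state database,
    given as {router: {neighbor: cost}} - the computation every
    OSPF router performs on its own copy of the LSDB."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, cost in lsdb.get(u, {}).items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Hypothetical four-router area with symmetric link costs
lsdb = {
    "R1": {"R2": 10, "R3": 5},
    "R2": {"R1": 10, "R4": 1},
    "R3": {"R1": 5, "R4": 20},
    "R4": {"R2": 1, "R3": 20},
}
print(spf(lsdb, "R1"))  # {'R1': 0, 'R2': 10, 'R3': 5, 'R4': 11}
```

Because every router holds the same LSDB, all of them compute consistent, loop-free paths; that consistency is exactly what the flooding of link-state advertisements buys you.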
Today's computers, from the on-chip systems in mobile phones to the most powerful supercomputers, are parallel. The scale ranges from the 8 cores of a phone to the millions deployed in large supercomputers. The need to make system memory visible to each core is solved, regardless of scale, by interconnecting all the cores with adequate performance at bounded cost. In smaller-scale systems (MPSoCs), memory is shared using on-chip networks that transport cache lines and coherence commands. A few MPSoCs are interconnected to form servers that use mechanisms to extend coherence and share memory in a CC-NUMA architecture. Dozens of these servers are stacked in a rack, and a number of racks (up to hundreds) constitute a datacenter or a supercomputer. The resulting global memory cannot be shared, but its contents can be transferred by sending messages across the system network. Networks are therefore critical, fundamental components of the memory architecture of computers of every class. This talk offers a reasoned view of the choices different manufacturers make when deploying the on-chip and system networks that interconnect today's computers.
Ensure that only reliable networks are set up in your systems by listening to our short webinar teaching you the basics of industrial Ethernet communications and computer networking. Starting from the ground up, this presentation covers the basics of how network connections work and how one computer talks to another.
Switching: the process of using MAC addresses on a LAN is called Layer 2 switching.
Layer 2 switching uses the hardware addresses of devices on a LAN to segment a network. Switching breaks up large collision domains into smaller ones; a collision domain is a network segment with two or more devices sharing the same bandwidth.
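The forwarding logic described above can be sketched as a toy model (ignoring aging timers, VLANs and STP): the switch learns each frame's source MAC on the port it arrived on, forwards to the learned port for known destinations, and floods frames whose destination it has not yet learned.

```python
class LearningSwitch:
    """Toy model of Layer 2 switching: learn the source MAC's port,
    forward known unicast to that port, otherwise flood."""
    def __init__(self, num_ports):
        self.ports = set(range(num_ports))
        self.mac_table = {}  # MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port      # learning step
        if dst_mac in self.mac_table:          # known unicast
            return {self.mac_table[dst_mac]}
        return self.ports - {in_port}          # unknown: flood

sw = LearningSwitch(4)
print(sw.receive(0, "aa:aa", "bb:bb"))  # unknown dst, flood: {1, 2, 3}
print(sw.receive(1, "bb:bb", "aa:aa"))  # aa:aa was learned: {0}
print(sw.receive(0, "aa:aa", "bb:bb"))  # bb:bb now known: {1}
```

This per-port learning is also why switching shrinks collision domains: once both MACs are learned, traffic between two hosts only crosses their two ports.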
The systems engineering / SRE world shifted its thinking towards intent-driven, holistic configuration management a long time ago, but it feels like the majority of network automation solutions still follow the idea of making incremental changes to the routers and switches out there, which at the same time might also be managed manually by operators typing (or copying) magic spells into a CLI. This makes the device configuration the synchronization point, and we don't really know what this configuration will look like in full without checking back on the device.
I believe we as network (automation) engineers need to follow suit, make the mental shift to the holistic approach, leave Perl, shell and expect scripts behind, and bring software engineering methods to network automation. This way we are able to tackle the problems at hand at an abstract level and build solutions which can be reasoned about, tested on their own, and scale to our needs. For the most daunting problem, configuration management, this means plugging some of those systems together and building a solution which generates and owns the full device configuration.
Dealing with diverging configuration parts across the fleet, carefully cleaning up old approaches to configure X, doing incremental changes, and figuring out how to interact with a platform API, a dialect of NETCONF, YANG, etc. would all be a thing of the past. Wouldn't that be great?
A recording of this talk can be found at https://media.ccc.de/v/froscon2022-2820-this_is_the_way_-_holistic_network_automation
Fun with PBR, VRFs and NetNS on Linux - What is it, how does it work, what ca... (Maximilian Wilhelm)
Linux has been a first-class network citizen for many years and doesn't fall short compared to commercial solutions. In fact, it is the very essence many of those are built on, and it serves as the foundation for nearly all cloud solutions out there.
This talk will touch on methods and features to set up layer 3 network separation and will walk through and showcase
* Policy-based routing
* VRFs (with and without MPLS)
* Network Namespaces
We will compare features and options and go through a number of use cases, covering Linux as a router, VPN server, load balancer, etc.
A basic understanding of networking, routing and how the Internet works will certainly help, but there will be some aha moments either way.
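As a conceptual illustration of the policy-based routing part, here is a pure-Python simulation (made-up table names and prefixes, not actual `ip rule` syntax): rules select a routing table by source prefix first, then the chosen table performs a longest-prefix match on the destination, which is exactly the extra dimension PBR adds over plain destination-based routing.

```python
import ipaddress

# Routing tables: table name -> list of (prefix, next_hop)
tables = {
    "main":     [("0.0.0.0/0", "203.0.113.1")],
    "customer": [("0.0.0.0/0", "198.51.100.1"),
                 ("10.0.0.0/8", "10.255.0.1")],
}

# Policy rules in priority order: (source prefix, table) - the
# rough equivalent of `ip rule add from <prefix> lookup <table>`
rules = [("10.1.0.0/16", "customer"), ("0.0.0.0/0", "main")]

def route(src, dst):
    src, dst = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    for selector, table in rules:
        if src in ipaddress.ip_network(selector):
            # longest-prefix match inside the selected table
            candidates = [(ipaddress.ip_network(p), nh)
                          for p, nh in tables[table]
                          if dst in ipaddress.ip_network(p)]
            return max(candidates, key=lambda c: c[0].prefixlen)[1]

print(route("10.1.2.3", "8.8.8.8"))   # 198.51.100.1 (customer table)
print(route("192.0.2.7", "8.8.8.8"))  # 203.0.113.1 (main table)
```

VRFs take this one step further by binding interfaces to a table, and network namespaces go further still by giving each namespace its own complete set of interfaces, rules and tables.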
This talk will show how to build your own simple, cheap and scalable CGN solution with stateful failover, using commodity servers with a decent NIC running Linux, nftables, and bird.
We needed to introduce NAT into the network, and a commercial solution would have required a 6-figure investment, so we built it ourselves for less than 10% of that cost.
Two Dell servers with recent CPUs, two Mellanox NICs, and nftables as well as bird do the trick and make for a simple, cheap and scalable CGN box, supporting ECMP, simple draining, orchestration by your usual Linux tool chain, and stateful failover.
Video at: https://www.youtube.com/watch?v=qHsHkjhGibA
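The stateful core of such a CGN box can be sketched as a toy source-NAT session table (a pure Python illustration with made-up addresses, not the nftables setup from the talk): each subscriber flow is rewritten to the public address and a free port, and the mapping is remembered so replies can be translated back.

```python
class Cgn:
    """Toy stateful source NAT: map a (subscriber IP, port) pair to
    a (public IP, port) pair and keep the mapping for return traffic."""
    def __init__(self, public_ip, port_range=(1024, 65536)):
        self.public_ip = public_ip
        self.free_ports = iter(range(*port_range))
        self.out = {}   # (src_ip, src_port) -> public_port
        self.back = {}  # public_port -> (src_ip, src_port)

    def translate_out(self, src_ip, src_port):
        key = (src_ip, src_port)
        if key not in self.out:
            port = next(self.free_ports)  # allocate a free public port
            self.out[key] = port
            self.back[port] = key
        return (self.public_ip, self.out[key])

    def translate_in(self, public_port):
        # None means: no session state, drop the packet
        return self.back.get(public_port)

nat = Cgn("192.0.2.10")
print(nat.translate_out("100.64.1.7", 43210))  # ('192.0.2.10', 1024)
print(nat.translate_in(1024))                  # ('100.64.1.7', 43210)
```

Stateful failover then boils down to replicating exactly this session table to the standby box, which is what conntrack synchronization provides in the real setup.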
Contemporary network configuration for Linux - ifupdown-ng (Maximilian Wilhelm)
There are many different ways to configure networking on Linux. Debian and Alpine use ifupdown1, and Cumulus Networks invented ifupdown2; other distributions have various other systems, such as systemd-networkd and NetworkManager.
This talk will present ifupdown-ng, a new project by the Network Services Association intended as a drop-in replacement for ifupdown1 and ifupdown2 installations. Presently, Alpine and Debian are the primary supported environments. Support for other Linux distributions and BSD is planned.
With its modular design, ifupdown-ng intends to allow flexibility for today's modern networking setups, while being easy to extend.
ifupdown-ng is Open Source and can be found on GitHub at: https://github.com/ifupdown-ng/ifupdown-ng/
Applied networking fundamentals reloaded - from layer 1 to 3 (Maximilian Wilhelm)
This year, by popular demand, we are attempting an even more practically oriented fundamentals talk. We start with cabling (copper, fiber, connectors, etc.), move on to Ethernet (STP, VLANs, LAGs / bonding) and end our excursion at IP and the basics of debugging (ping, traceroute).
Intent-driven, fully automated deployment of anycasted load balancers with ha... (Maximilian Wilhelm)
Keeping your service configuration aligned across hundreds of hosts is not a simple task. In this talk, we illustrate how we automated the integration of HAProxy into our infrastructure at the University of Paderborn.
As our current generation of commercial load balancer appliances approached end of life, we thought about replacement options and about improving how we manage our services while we were at it. The main goal was building a scalable, consistent, active-active setup of load balancers which could be easily automated with open source tools.
We needed a way to define what a service is and how/where it should be configured, balanced and monitored, so we created a simple service definition format in YAML and a small Python library to help with parsing, inheritance, defaults, etc. The automation framework bcfg2 was a given, as it was already in use to manage hundreds of Linux and Windows systems and services. As it's written in Python, it's easily extendable.
As load balancing options we implemented anycast (for example for Kerberos KDCs) as well as balancing by HAProxy nodes, where the HAProxy frontend IPs might be anycasted as well. When running production services, it's important to know when things break before the user does, so setting up monitoring for frontend and backend services is part of the picture, too. All bits of configuration for HAProxy, anycast, route reflection, monitoring with Icinga2, netfilter (nftables) rules, etc. are automagically generated based on the service configuration. This talk will lay out how all those parts fit together and are generated.
Of course, we also explain the pitfalls of this setup and what we (hopefully) learned from it.
There are many widely used ways to build highly available and/or scalable services: DNS round-robin, a set of load balancers or reverse proxies, and so on. Yet quite a few admins and decision makers shy away from anycast and BGP in their own data center.
This talk shows hands-on why it is fine for several or even many servers to carry the same IP address, how many roads lead to Rome, and how to build and operate such a setup. Based on Debian Linux, Bird and Bind we build a cluster of web servers and play around with it a bit (time permitting).
APUs make great backbone routers: small, powerful enough[tm] and easy to handle.
But what do you do when the board hangs during a (remotely triggered, of course) reboot, when the network has been misconfigured, or the kernel gets the hiccups? You need out-of-band access!
We show how we solved this in our backbone with a "management backdoor" and one Raspberry Pi per APU, giving us a serial console and a remote reset button for each backbone router.
How to Build a Freifunk Backbone - What We Have Learned in the Last 5 Years... - Maximilian Wilhelm
Freifunk Hochstift has been running a point-to-point wireless backbone since 2014, which has grown considerably over the years and evolved in several places.
We want to tell you the story of our wireless backbone with all its bright and dark sides, including all the ideas that seemed good at the time and turned out to be "rather so-so".
Best Current Operational Practices - Dos, Don'ts and Lessons Learned - Maximilian Wilhelm
Max and Falk have gathered almost 42 combined years of experience in networking and open source practice. In this talk they present painful experiences and derive best practices for network operations from them. They also present best community practices and tell the odd tale from the early days of the Internet in Germany.
Overlays & IP Fabrics - Many Roads Lead to Rome, and Why Layer 2 Is No Solu... - Maximilian Wilhelm
SDN is on everyone's lips and in everyone's ears, at least on the golf courses. This talk explains which technologies enable software defined networks, why a switched underlay becomes unwieldy beyond a certain size, and why network engineers like to wrap things inside other things.
It explains terms such as GRE, VXLAN and EVPN, shows how to use them on Linux to build the corresponding overlay structures, and which real-world problems they can solve.
Dynamic Routing Protocols, Care and Feeding - BGP - Maximilian Wilhelm
You want to connect your large internal network - an autonomous system - to the Internet, build an IP fabric, or offer internal services via anycast within your network. The Border Gateway Protocol was developed for exactly these tasks and is an excellent fit for them.
This talk explains how BGP works in external and internal deployments, gives an overview of its control mechanisms and tuning knobs, and shows practical usage with the Bird Internet Routing Daemon.
After 20 years of IPv6 (RFC 2460 appeared in December 1998) and almost 40% adoption on Germany's Internet connections, IPv6 still remains a mystery to most admins. Some leading experts even recommend switching IPv6 off "because it only causes trouble". This hands-on talk explains why that is not the case and why you should embrace the "new" world after all.
The talk introduces address concepts, address assignment and resolution (SLAAC, DHCPv6, DHCPv6-PD, ND, RDNSS, etc.) and presents a typical addressing plan. Transition technologies such as NAT64, DS-Lite and Teredo are introduced and put into context. IPv6 configuration on Linux is shown using iproute2 and Debian network configuration as well as the relevant sysctls.
Building Your Own SDN with Debian Linux, Salt Stack and Python - Maximilian Wilhelm
Topics like infrastructure automation / orchestration, cloud, and software defined networks are on everyone's lips, and nearly every network vendor that thinks highly of itself provides products and maybe even solutions in this sphere of buzzwords.
Within the last years there has been a paradigm shift towards host and segment routing – think »IP Fabric« – as well as a focus on open protocols and standards like OSPF, IS-IS, BGP & MPLS not only in the data center. This even brought us some new standards like VXLAN and a bunch of open source based “open networking” platforms. Now we aren't always locked to the operating systems of a networking vendor but can choose the control plane software from a variety of Linux based solutions which can be managed and orchestrated by lots of different means.
Thanks to the Linux basis and the Open Source spirit of some vendors, some features (VRFs, MPLS forwarding plane, …) today are part of the upstream Linux kernel and available for everyone! Most notable are the contributions of the Debian Linux based platform from Cumulus Networks, which include the VRF support for Linux, some MPLS patches for FRR and ifupdown2 (which is written in Python :-)).
Putting a bunch of these technologies and ideas together opens up a lot of powerful options for building low-budget yet mighty networks. This talk will lay out how to build an SDN-based, service-provider-like infrastructure with the help of Salt Stack, some 1000 lines of Python and a bunch of affordable hardware, where overlay networks and anycast aren't things to be scared of. The Freifunk Hochstift network and server infrastructure will be used as an example.
The target audience mainly consists of (Linux-) system and network engineers / architects, who already have some experience with the other world. A positive attitude towards automation and magic is a plus.
AS201701 - Building an Internet Backbone with Pure 1U Servers and Linux - Maximilian Wilhelm
Talk held on May 9th 2017 at #RIPE74 in Budapest about the German Freifunk backbone running as AS201701 and the efforts it took to build it and keep it running.
See https://ripe74.ripe.net/programme/meeting-plan/plenary/ for a video recording of the talk.
The topics infrastructure automation / orchestration, cloud and software defined networks are on everyone's lips, and almost every networking vendor that thinks highly of itself offers products and in places even solutions within this buzzword bubble.
The paradigm shift of recent years towards more (host/segment) routing and less Layer 2 magic - keyword "IP fabric" - as well as the return to open standards (OSPF, IS-IS, BGP, MPLS), not only in data center networks, has brought new standards (e.g. VXLAN) and made open-source-based "open networking" platforms appear on the market. Suddenly you are no longer bound to the operating system and constraints of the hardware vendor, but can control and orchestrate the control plane of some devices almost entirely yourself with various Linux-based products.
Thanks to the Linux basis and some vendors' embrace of the open source spirit, several features have migrated into open source components (Linux VRFs, the in-kernel MPLS forwarding plane, etc.) and are thus available everywhere. Particularly noteworthy is the Debian-based system from Cumulus Networks, from whose pen ifupdown2 and VRF support in Linux originate. A collection of these technologies and approaches can also be applied in low-budget and/or home-grown networks and can open up astonishing and powerful options there.
Using the network and server infrastructure of Freifunk Hochstift as an example, the talk will show how a bit of SaltStack, roughly 1000 lines of Python and affordable hardware can be used to deploy an SDN-based service provider infrastructure in which overlay networks and anycast are not foreign words.
Besides a technology overview there will be a "failosophy" and lessons learned from the real life of a Freifunk activist ;-)
The talk's target audience primarily comprises (Linux) administrators and network engineers who already have experience with the respective other world and know what routing is. A positive attitude towards automation is a plus.
L2/L3 für Fortgeschrittene - Helle und dunkle Magie im Linux-Netzwerkstack (L2/L3 for Advanced Users - Light and Dark Magic in the Linux Network Stack)
1. L2/L3 für Fortgeschrittene
Helle und dunkle Magie im Linux-Netzwerkstack
FrOSCon 13 Network Track
Falk Stern, Maximilian Wilhelm
1 / 36
2. Agenda
1. Who are we
2. Layer 2
1. Link Aggregation
2. VLANs
3. Bridges
3. Layer 3
1. Policy based routing
2. VRFs
3. NetNS
2 / 36
3. Who's who Falk Stern
Full Stack Infrastructure Engineer
IPv6 fanboy
Runs his own Kubernetes cluster in his basement
Consultant @ Profi Engineering Systems AG
Contact
@wrf42
falk@fourecks.de
3 / 36
4. Who's who Maximilian Wilhelm
Networker
OpenSource Hacker
Fanboy of
(Debian) Linux
ifupdown2
Occupation:
By day: Senior Infrastructure Architect, Uni Paderborn
By night: Infrastructure Archmage, Freifunk Hochstift
In between: Freelance Solution Architect for hire
Contact
@BarbarossaTM
max@sdn.clinic
4 / 36
6. Link Aggregation
Combine multiple physical links between two peers into one virtual link, to
increase overall bandwidth,
create a redundant Layer 2 link,
or both
Also known as:
LAG
Bonding (Linux)
Aggregated Ethernet (Juniper)
Port-Channel (Cisco)
Trunk (3Com, HP?)
NIC-Teaming
6 / 36
7. Link Aggregation - Simple Linux Bonding
Just use multiple links and hope the peer does, too.
Drawbacks:
If media converters are involved, a link-down event may not propagate
No way to tell if the peer is configured the same way
7 / 36
8. Link Aggregation - LACP
Link Aggregation Control Protocol (802.3ad / 802.1AX)
De-facto standard in the networking world
Uses LACP signalling to set up the LAG with the peer
Maximum of 8 interfaces per LAG
Keepalives every 1s (fast) or every 30s (slow)
An interface can be in one of two modes:
active: send out LACP packets to actively form the LAG
passive: wait for LACP packets and only then reply
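The settings above map directly onto the Linux bonding driver. A minimal iproute2 sketch, assuming member interfaces named eth0 and eth1:

```shell
# Create an 802.3ad (LACP) bond with fast keepalives and L3+4 hashing
ip link add bond0 type bond mode 802.3ad lacp_rate fast xmit_hash_policy layer3+4
# Member links must be down before they can be enslaved
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0
ip link set bond0 up
# Inspect LACP negotiation state with the peer
cat /proc/net/bonding/bond0
```

With mode 802.3ad the kernel only activates member links once LACP negotiation with the peer succeeds, which avoids the "hope the peer is configured, too" problem of simple bonding.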
8 / 36
9. Multi-Chassis Link Aggregation Groups
Link Aggregation between more than two peers
At least one peer has to do magic to make this work
Also known as:
MC-LAG
MLAG
Virtual Port-Channel (vPC)
Source: Wikipedia
9 / 36
10. Load Balancing Traffic over LAGs
Round-Robin
One packet on link 1, one on link 2, ..., and repeat
Hashing of header fields
Layer 2 (src MAC + dst MAC)
Only useful if communication is to multiple stations within the local subnet
Layer 2+3 (src MAC + dst MAC + src IP + dst IP)
Might be more useful for communication beyond the local subnet
Layer 3+4 (src IP + dst IP + src port + dst port)
Probably most useful when communicating with multiple peers
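The hashing idea can be illustrated with a toy sketch (this is NOT the kernel's actual hash algorithm, just the principle): hash the flow tuple and take it modulo the number of member links, so all packets of one flow always leave via the same link and are never reordered.

```shell
# Toy illustration of layer 3+4 hashing for link selection
flow="192.0.2.1|203.0.113.5|40000|443"   # src IP | dst IP | src port | dst port
n_links=2
h=$(printf '%s' "$flow" | cksum | cut -d' ' -f1)
echo "flow leaves via link $((h % n_links))"
```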
10 / 36
15. Bridges
The switch(es) within your Linux box
Usage: ... bridge [ forward_delay FORWARD_DELAY ]
[ hello_time HELLO_TIME ]
[ max_age MAX_AGE ]
[ ageing_time AGEING_TIME ]
[ stp_state STP_STATE ]
[ vlan_filtering VLAN_FILTERING ]
[ vlan_default_pvid VLAN_DEFAULT_PVID ]
[ mcast_snooping MULTICAST_SNOOPING ]
[...]
[ nf_call_iptables NF_CALL_IPTABLES ]
[ nf_call_ip6tables NF_CALL_IP6TABLES ]
[ nf_call_arptables NF_CALL_ARPTABLES ]
ip link add br0 type bridge
ip link set br0 up
ip link set eth0 master br0
15 / 36
16. VLANs and Bridges
Two options, both suck
External trunk as bridge member
External interface is part of the bridge
All VLANs transported within the bridge
All VLANs forwarded on any port
External trunk with many bridges
One interface per VLAN on trunk (e.g. bond0.2342)
One bridge per VLAN (e.g. br2342)
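The second option can be sketched with iproute2 like this (trunk name bond0 and VLAN ID 2342 are taken from the slide, everything else is assumed):

```shell
# One VLAN interface on the trunk plus one bridge per VLAN
ip link add link bond0 name bond0.2342 type vlan id 2342
ip link add br2342 type bridge
ip link set bond0.2342 master br2342
ip link set bond0.2342 up
ip link set br2342 up
# Repeat for every VLAN, which quickly means a lot of interfaces and bridges
```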
16 / 36
17. VXLAN and Bridges
One bridge per VNI
Possibly multiple physical or virtual NICs within bridge, too
VLAN interfaces
VM interfaces (e.g. on KVM host)
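A minimal sketch of the one-bridge-per-VNI pattern; the VNI, device names, VTEP address, and the tap interface vnet0 are assumptions:

```shell
# VXLAN tunnel endpoint for VNI 100, bridged with a local VM interface
ip link add vxlan100 type vxlan id 100 local 192.0.2.1 dstport 4789 nolearning
ip link add br-vni100 type bridge
ip link set vxlan100 master br-vni100
ip link set vnet0 master br-vni100   # e.g. a VM tap interface on a KVM host
ip link set vxlan100 up
ip link set br-vni100 up
```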
17 / 36
18. VLAN-aware Bridges
VLANs and bridges have been a challenge
That ain't true no more
Now it's a “regular switch”
Configured with the bridge utility from iproute2
Real World Use Case:
Simple KVM/Qemu hook for VLAN assignment
https://github.com/FreifunkHochstift/ffho-salt-public/blob/master/kvm/qemu-hook
18 / 36
19. VLAN-aware Bridges
Port VLAN management
bridge vlan { add | del }
vid VLAN_ID dev DEV
[ pvid ] [ untagged ]
[ self ] [ master ]
bridge vlan show [ dev DEV ]
[ vid VLAN_ID ]
Forwarding database
bridge fdb [...]
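Putting it together, a VLAN-aware bridge might be set up like this (device names and VLAN IDs are assumptions):

```shell
# One bridge carries all VLANs; per-port membership is set via `bridge vlan`
ip link add br0 type bridge vlan_filtering 1
ip link set eth0 master br0
ip link set br0 up
bridge vlan add vid 100 dev eth0 pvid untagged   # access port in VLAN 100
bridge vlan add vid 200 dev eth0                 # VLAN 200 tagged on the same port
bridge vlan show dev eth0
```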
19 / 36
23. Routing Tables
Every Linux box has a number of routing tables
$ ip route help
Usage: ip route { list | flush } SELECTOR
...
SELECTOR := ... [ table TABLE_ID ]
...
TABLE_ID := [ local | main | default | all | NUMBER ]
By default routing table main is used
So ip route show and ip route show table main show the same thing
23 / 36
24. Routing Tables
Table local
Contains all routes to
Locally connected IPs
Broadcast addresses
Table main
Contains "usual" routes
Locally connected subnets
Routes to remote subnets
Table default
Usually empty
24 / 36
25. Policy-Based Routing
Available since Linux 2.2 (1999)
Default routing policy on every Linux box:
$ ip rule
0: from all lookup local
32766: from all lookup main
32767: from all lookup default
Drawbacks
No mechanism for persistence available
Be careful to close every loophole:
Rule for IPv4
Rule for IPv6
Rule for incoming interface
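A small sketch of a policy routing setup (addresses, table number and priorities are assumptions) that also shows the IPv4/IPv6 loophole: rules must be added for both address families separately.

```shell
# Route everything sourced from 192.0.2.0/24 via a dedicated table
ip route add default via 198.51.100.1 dev eth1 table 100
ip rule add from 192.0.2.0/24 lookup 100 priority 1000
# IPv6 has its own, independent rule set!
ip -6 route add default via 2001:db8:ffff::1 dev eth1 table 100
ip -6 rule add from 2001:db8:23::/48 lookup 100 priority 1000
ip rule show
```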
25 / 36
26. Virtual Routing and Forwarding (VRFs)
Independent routing instances
L3-VPNs
Usually in combination with MPLS
Layer 3 separation
VRF interface is master for “real” interfaces
Defines routing table for VRF
Since Kernel 4.[345] (use >= 4.9)
26 / 36
27. Virtual Routing and Forwarding (VRFs)
By foot
ip link add vrf_external type vrf table 1023
ip link set eth0 master vrf_external
ifupdown2
auto eth0
iface eth0
address 2001:db8:23:42::2/64
gateway 2001:db8:23:42::1
vrf vrf_external
auto vrf_external
iface vrf_external
vrf-table 1023
Device routes move from table main and local to table 1023
27 / 36
28. Connecting VRFs
Requires vEth pair
Like a virtual network cable within the box
A end in main VRF, Z end in VRF “foo”
Usual routing
Static
Bird talking BGP to itself
28 / 36
29. Connecting VRFs
By foot
ip link add VETH_END1 type veth peer name VETH_END2
ifupdown2*
iface veth_ext2int
link-type veth
veth-peer-name veth_int2ext
vrf vrf_external
iface veth_int2ext
link-type veth
veth-peer-name veth_ext2int
* veth-peer-name not merged upstream yet (PR25)
29 / 36
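By foot, the complete interconnect could look like this sketch (addresses assumed; the A end stays in the main VRF, the Z end moves into vrf_external):

```shell
# Virtual network cable between the main VRF and vrf_external
ip link add veth_int2ext type veth peer name veth_ext2int
ip link set veth_ext2int master vrf_external
ip addr add 192.0.2.0/31 dev veth_int2ext
ip addr add 192.0.2.1/31 dev veth_ext2int
ip link set veth_int2ext up
ip link set veth_ext2int up
# Static route from the main VRF into vrf_external via the virtual cable
ip route add 203.0.113.0/24 via 192.0.2.1
```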
30. Real-World Applications for VRFs
External interface in VRF
External interface is part of vrf_external
GRE / OpenVPN tunnels send / receive encapsulated packets over the VRF
Local tunnel endpoint is in main VRF
Helpful sysctl
/proc/sys/net/ipv4/tcp_l3mdev_accept
l3mdev == Layer3 Master Device
VRF info is added to socket
Replies are sent out in the VRF where the request originated
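A minimal sketch of the sysctl usage, plus iproute2's helper to run a process bound to a VRF (VRF name and target address are assumptions):

```shell
# Let VRF-unaware TCP/UDP services accept connections arriving in any VRF
sysctl -w net.ipv4.tcp_l3mdev_accept=1
sysctl -w net.ipv4.udp_l3mdev_accept=1
# Or run a single command explicitly inside a VRF
ip vrf exec vrf_external ping -c 1 192.0.2.1
```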
30 / 36
31. Real-World Applications - Tunnels / GRE
Outer and/or inner side of tunnel can be part of a VRF
Send
ip link add DEVICE type gre remote ADDR local ADDR dev PHYS_DEV
If PHYS_DEV is within a VRF, all encapsulated packets are sent/received in that VRF
That's how your internet access is built right now :)
Pushing the inner side of a tunnel into a VRF is equally simple:
ip link set DEVICE master VRF
31 / 36
32. Real-World Applications - Tunnels / OpenVPN
Pushing the inner side of an OpenVPN tunnel into a VRF is as simple as before.
Sending/receiving encapsulated packets into/from a VRF is more complicated
But there's a patch since October 2016
https://github.com/OpenVPN/openvpn/pull/65
Used to glue remote POPs from Freifunk Hochstift together
openvpn --config your_config.cfg --bind-dev VRF
Now go and motivate Gert - Hi Gert! - to merge it, so we all can use it :)
32 / 36
33. Network Namespaces (NetNS)
Layer 1 separation
Since Kernel 2.6.29
Own set of routing tables
VRFs and PBR available within NetNS
Own set of netfilter rules
A process can be run in a special NetNS
Two NetNS can be connected by vETH, too.
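A minimal NetNS sketch connecting two namespaces with a vEth pair (names and addresses are assumptions):

```shell
ip netns add red
ip netns add blue
ip link add veth-red type veth peer name veth-blue
ip link set veth-red netns red
ip link set veth-blue netns blue
ip -n red  addr add 10.0.0.1/24 dev veth-red
ip -n blue addr add 10.0.0.2/24 dev veth-blue
ip -n red  link set veth-red up
ip -n blue link set veth-blue up
ip netns exec red ping -c 1 10.0.0.2   # red reaches blue over the virtual cable
```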
33 / 36
34. Key Takeaways
Linux networking has evolved A LOT
Linux today is a first class citizen wrt networking
Vlan-aware bridges are great for virtualization hosts
VRFs can help separate layer 3 domains nicely
Tunneling technologies integrate accordingly
34 / 36
35. Further Reading
Contemporary Linux Networking - DENOG9 (2017)
https://www.slideshare.net/BarbarossaTM/contemporary-linux-networking
VRFs
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/networking/vrf.txt
https://cumulusnetworks.com/blog/vrf-for-linux/
https://de.slideshare.net/CumulusNetworks/operationalizing-vrf-in-the-data-center
35 / 36