This document provides a summary of a presentation on monitoring cloud infrastructure. It discusses using containers, virtualization, and open source tools like SaltStack for automation and orchestration. Redundancy is achieved through BGP routing of virtual IP addresses between replicated container services on different cloud hosts.
2. I am:
Raymond Burkholder
In and Out of:
Software Development
Linux Administration
Network Management
System Monitoring
raymond@burkholder.net
ray@oneunified.net
https://blog.raymond.burkholder.net
4. Items To Talk About
● Virtualization
● Redundancy & Resiliency
● Networking
● Firewall
● Connectivity
● Open Source Tools:
– iproute2 – kernel tools for building sophisticated connections
– Open vSwitch – layer 2 switching and firewalling
– Free Range Routing – layer 2/3 route distribution with BGP, EVPN, anycast
– LXC – containers, lighter weight than Docker
– nftables – successor to iptables for ACLs with connection tracking
– SaltStack – living documentation, automation, orchestration
Overall Goals: a) total remote access, b) total re-creation of the solution via automation
5. Monitoring Replica – Cloud ‘nn’
[Diagram: services replicated per cloud host – nftables, dnsmasq, cache-ng, salt, check_mk, smtp, Free Range Routing, Open vSwitch]
6. Console Serial Connections
[Diagram: Cloud01, Cloud02, and Cloud03 host/storage nodes, each cabled to two console servers, PDUs, and Mellanox switches]
Dual Console Servers for Diagnostics - Side A & Side B
7. Ethernet Management
[Diagram: Cloud01, Cloud02, and Cloud03 host/storage nodes with Console Server A/B, PDU A/B, and Mellanox Switch A/B]
Ethernet Management Ports distributed across Cloud interfaces
[any Cloudxx can get to any other’s serial interface via one of two console servers]
8. Hand in Hand
● eBGP vs iBGP
– Multiple ASNs vs Single ASN (eBGP used in this installation)
● VxLAN vs LAN
– ~16 million VXLAN network identifiers vs ~4096 VLAN IDs
– VXLAN, also called virtual extensible LAN, is designed to provide layer 2 overlay networks on top of a layer 3 network by using MAC-address-in-user-datagram-protocol (MAC-in-UDP) encapsulation. In simple terms, VXLAN can offer the same services as VLAN does, but with greater extensibility and flexibility.
● aka EVPN via MP-BGP (Ethernet VPN via Multi-Protocol BGP), used for auto-distribution of VxLAN MAC/IP bindings
Layer 2 is cocaine. It has never been right — and yet people keep packaging it in various ways and selling its virtues and capabilities. -- @trumanboyes
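As a concrete illustration of the VxLAN piece, here is a minimal iproute2 sketch of creating a VXLAN interface and attaching it to a bridge on a host. The VNI (100), the bridge name, and the use of 10.20.1.1 as the local tunnel endpoint are assumptions for illustration; with EVPN distributing MAC/IP reachability, dynamic flood-and-learn on the VXLAN port can be disabled.

# minimal sketch, assuming VNI 100 and 10.20.1.1 as the local tunnel endpoint
ip link add vxlan100 type vxlan id 100 dstport 4789 local 10.20.1.1 nolearning
ip link add br100 type bridge
ip link set vxlan100 master br100   # attach the VXLAN port to the bridge
ip link set vxlan100 up
ip link set br100 up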
9. Light vs Heavy Virtualization
● LXC – (Linux Containers) is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a control host using a single Linux kernel.
● KVM – (Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware containing virtualization extensions ... that provides the core virtualization infrastructure ... where one can run multiple virtual machines running unmodified Linux or Windows images. Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter, etc.
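For reference, a hedged example of what spinning up one of these light containers could look like with the LXC command-line tools; the distribution, release, and the reuse of the bind01 name here are placeholders, not taken from the deck.

lxc-create -n bind01 -t download -- --dist debian --release bookworm --arch amd64
lxc-start -n bind01
lxc-attach -n bind01 -- ip addr show   # run a command inside the container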
10. Virtualization Selection
● Since no customer applications are running on the management cloud hosts, light virtualization in the form of LXC containers is used
● Goal is to keep the base host install as plain and simple as possible – all services and management functionality should be segregated into individual containers
● Containers and their configurations can then be destroyed and rebuilt at will as bugs and upgrades require
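A hypothetical SaltStack invocation for that destroy-and-rebuild workflow; the state and module targets shown are assumptions, since the deck does not show its Salt tree.

salt 'host01.ny1' lxc.destroy bind01 stop=True    # throw the container away
salt 'host01.ny1' state.apply containers.bind01   # rebuild it from the recorded states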
13. Resiliency
● Choices:
– Consul (DNS for service resolution)
● Requires heartbeats for each service type
– HAProxy (layer 4/7 load balancing – userland)
● Overkill for this service load type
– IPVS (layer 4 kernel-based load balancing)
● Only local to the machine
– BGP AnyCast (routing-based load distribution)
● Proven routing-based resiliency
14. AnyCast
● Add a container-unique loopback address
● Add a service-common loopback address – advertised into BGP by each container offering that service
● When a container dies, its copy of the common loopback address disappears from BGP
● Loopback addresses are weighted in BGP so that consumers prefer the local service instance
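A minimal sketch of the two loopbacks inside one service container, using the bind01 addresses that appear in the BGP tables later in the deck; how the /32s get into BGP (for example via "redistribute connected") is an assumption.

# inside the bind01 container
ip addr add 10.20.1.22/32 dev lo    # container-unique loopback
ip addr add 10.20.2.106/32 dev lo   # service-common (anycast) loopback, shared with bind02
# FRR in the container then advertises both /32s to its host over eBGP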
15. Host Functions
● Host functions are minimized. Management functions are relegated to containers
● Host has the main BGP router, which connects to the BGP instances of each of the other two hosts
● Configured to handle the VxLAN/EVPN MAC/IP advertisements to/from each container
● Keeps container traffic ‘segregated’ from the host ‘native’ routing tables – virtualizes networking within and across the hosts
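A hedged sketch of the host-side FRR piece, expressed as vtysh commands; the ASN and peer address come from the host01 tables shown later in the deck, while the exact address-family settings are assumptions rather than the deck's configuration.

vtysh \
  -c 'configure terminal' \
  -c 'router bgp 64601' \
  -c ' neighbor 10.20.3.2 remote-as 64602' \
  -c ' address-family l2vpn evpn' \
  -c '  neighbor 10.20.3.2 activate' \
  -c '  advertise-all-vni' \
  -c ' exit-address-family'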
16. eBGP
● The next set of slides shows eBGP routing tables to illustrate the resiliency created by routing.
● A non-production, two-cloudbox setup is shown as the example
17. host01.ny1 neighbors
host01.ny1# sh ip bgp sum
IPv4 Unicast Summary:
BGP router identifier 10.20.1.1, local AS number 64601 vrf-id 0
BGP table version 62
RIB entries 55, using 8360 bytes of memory
Peers 9, using 174 KiB of memory
Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd
host02.ny1(10.20.3.2) 4 64602 100218 100229 0 0 0 07w4d05h 18
pprx01.ny1(10.20.5.11) 4 64701 100132 100147 0 0 0 09w6d12h 2
nacl01.ny1(10.20.5.12) 4 64702 100139 100157 0 0 0 09w6d06h 2
ntp01.ny1(10.20.5.13) 4 64705 100132 100148 0 0 0 09w6d12h 2
dmsq01.ny1(10.20.5.14) 4 64703 100133 100149 0 0 0 09w6d12h 2
bind01.ny1(10.20.5.15) 4 64706 100133 100150 0 0 0 09w6d12h 2
prxy01.ny1(10.20.5.17) 4 64704 100132 100146 0 0 0 09w6d12h 2
smtp01.ny1(10.20.5.18) 4 64707 100132 100145 0 0 0 09w6d12h 2
fw01.ny1(10.20.5.19) 4 64708 100130 100148 0 0 0 09w6d12h 1
Total number of neighbors 9
host01 has private ASN 64601, host02 has ASN 64602
18. host02.ny1 neighbors
host02.ny1# sh ip bgp sum
IPv4 Unicast Summary:
BGP router identifier 10.20.1.2, local AS number 64602 vrf-id 0
BGP table version 54
RIB entries 55, using 8360 bytes of memory
Peers 9, using 174 KiB of memory
Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd
host01.ny1(10.20.3.3) 4 64601 100233 100223 0 0 0 07w4d05h 18
pprx02.ny1(10.20.6.11) 4 64801 100135 100145 0 0 0 09w6d12h 2
nacl02.ny1(10.20.6.12) 4 64802 100135 100145 0 0 0 09w6d12h 2
ntp02.ny1(10.20.6.13) 4 64805 100135 100145 0 0 0 09w6d12h 2
dmsq02.ny1(10.20.6.14) 4 64803 100135 100146 0 0 0 09w6d12h 2
bind02.ny1(10.20.6.15) 4 64806 100136 100147 0 0 0 09w6d12h 2
prxy02.ny1(10.20.6.17) 4 64804 100135 100145 0 0 0 09w6d12h 2
smtp02.ny1(10.20.6.18) 4 64807 100135 100144 0 0 0 09w6d12h 2
fw02.ny1(10.20.6.19) 4 64808 100134 100145 0 0 0 09w6d12h 1
Total number of neighbors 9
Containers on host01 have private ASN 647xx, host02 containers use ASN 648xx
19. host01.ny1 loopbacks view A
host01.ny1# sh ip bgp
BGP table version is 62, local router ID is 10.20.1.1
Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,
i internal, r RIB-failure, S Stale, R Removed
Origin codes: i - IGP, e - EGP, ? - incomplete
Network Next Hop Metric LocPrf Weight Path
*> 10.20.1.1/32 0.0.0.0 0 32768 ?
*> 10.20.1.2/32 10.20.3.2 0 0 64602 ?
*> 10.20.1.17/32 10.20.5.11 0 0 64701 ?
*> 10.20.1.18/32 10.20.5.12 0 0 64702 ?
*> 10.20.1.19/32 10.20.5.14 0 0 64703 ?
*> 10.20.1.20/32 10.20.5.17 0 0 64704 ?
*> 10.20.1.21/32 10.20.5.13 0 0 64705 ?
*> 10.20.1.22/32 10.20.5.15 0 0 64706 ?
*> 10.20.1.23/32 10.20.5.18 0 0 64707 ?
*> 10.20.1.24/32 10.20.5.19 0 0 64708 ?
*> 10.20.1.33/32 10.20.3.2 0 64602 64801 ?
*> 10.20.1.34/32 10.20.3.2 0 64602 64802 ?
*> 10.20.1.35/32 10.20.3.2 0 64602 64803 ?
*> 10.20.1.36/32 10.20.3.2 0 64602 64804 ?
*> 10.20.1.37/32 10.20.3.2 0 64602 64805 ?
*> 10.20.1.38/32 10.20.3.2 0 64602 64806 ?
*> 10.20.1.39/32 10.20.3.2 0 64602 64807 ?
*> 10.20.1.40/32 10.20.3.2 0 64602 64808 ?
... on next slide
Loopbacks 10.20.1.x/32 are unique per container
Containers on host01 are seen as local (directly peered) next hops;
containers on host02 are seen as two AS hops away, via host02
20. host01.ny1 loopbacks view B
* 10.20.2.101/32 10.20.3.2 0 64602 64801 ?
*> 10.20.5.11 0 0 64701 ?
* 10.20.2.102/32 10.20.3.2 0 64602 64802 ?
*> 10.20.5.12 0 0 64702 ?
* 10.20.2.103/32 10.20.3.2 0 64602 64803 ?
*> 10.20.5.14 0 0 64703 ?
* 10.20.2.104/32 10.20.3.2 0 64602 64804 ?
*> 10.20.5.17 0 0 64704 ?
* 10.20.2.105/32 10.20.3.2 0 64602 64805 ?
*> 10.20.5.13 0 0 64705 ?
* 10.20.2.106/32 10.20.3.2 0 64602 64806 ?
*> 10.20.5.15 0 0 64706 ?
* 10.20.2.107/32 10.20.3.2 0 64602 64807 ?
*> 10.20.5.18 0 0 64707 ?
* 10.20.3.2/31 10.20.3.2 0 0 64602 ?
*> 0.0.0.0 0 32768 ?
*> 10.20.5.0/24 0.0.0.0 0 32768 ?
*> 10.20.6.0/24 10.20.3.2 0 0 64602 ?
Displayed 28 routes and 36 total paths
Loopbacks 10.20.2.x/32 are unique per service
Service loopbacks are seen on two separate containers on two different
hosts, with the local container taking precedence
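That preference can be confirmed per prefix; 10.20.2.101/32 is one of the service loopbacks shown above:
# free range routing vtysh
host01.ny1# sh ip bgp 10.20.2.101/32     # both paths listed, the local-container path marked best (>)
# linux bash
ip route show 10.20.2.101/32             # only the best path is installed in the kernel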
30. Nftables YAML to Config to Running
policy:
  local-private:
    from: local
    to: private
    default: accept
  local-public:
    from: local
    to: public
    default: accept
  private-local:
    from: private
    to: local
    default: accept
  private-public:
    from: private
    to: public
    default: drop
  public-private:
    from: public
    to: private
    default: drop
  public-local:
    from: public
    to: local
    default: drop
    rule:
      # salt clients
      - proto: tcp
        saddr:
          - 192.168.195.100
          - 172.16.42.192/27
          - 172.16.43.192/27
          - 172.16.42.224/28
          - 172.16.43.224/28
        dport:
          - 4505
          - 4506
      - proto: tcp
        saddr:
          - 192.168.195.100
        sport:
          - 4505
          - 4506
      # ssh from anywhere
      - proto: tcp
        dport: 22
# excerpt from /etc/nftables.conf:
add chain ip filter public_local
add rule ip filter public_local tcp dport {4505,4506} ip saddr {192.168.195.100,172.16.42.192/27,172.16.43.192/27,172.16.42.224/28,172.16.43.224/28} accept
add rule ip filter public_local tcp sport {4505,4506} ip saddr {192.168.195.100} accept
add rule ip filter public_local tcp dport 22 accept
add rule ip filter input iifname eth443 goto public_local
add rule ip filter public_local iifname eth443 counter goto loginput
add rule ip filter public_local log prefix "public_local:DROP:" group 0 counter drop
# excerpt from nft list ruleset:
chain public_local {
tcp dport { 4505, 4506} ip saddr { 172.16.42.192-172.16.42.239, 172.16.43.192-172.16.43.239, 192.168.195.100} accept
tcp sport { 4505, 4506} ip saddr { 192.168.195.100} accept
tcp dport ssh accept
iifname "eth443" counter packets 155364 bytes 7981388 goto loginput
log prefix "public_local:DROP:" group 0 counter packets 0 bytes 0 drop
}
A simple zone-based firewall configuration in YAML in the pillar file.
An excerpt from the auto-generated configuration file, based upon the above YAML.
Once the configuration file is loaded into the kernel via nftables, the resulting installed ruleset can be viewed.
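The generated file can be syntax-checked and loaded with the standard nft commands before it is trusted (a minimal sketch):
# linux bash
nft -c -f /etc/nftables.conf             # parse the generated file without applying it
nft -f /etc/nftables.conf                # apply it
nft list chain ip filter public_local    # confirm the installed chain matches expectations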
31. Example 2 - Network Constructs
[diagram: the physical interface enp2s0f1 carries the VXLAN-encapsulated traffic; the Linux VXLAN interface vxPub421 (MAC/IP handed to FRR) attaches to the Linux bridge brPub421, which connects to FRR; a veth pair vbPub421/voPub421 joins brPub421 to the Open vSwitch bridge ovsbr0, which carries vlan421 as an internal port; the LXC containers edge01 and fw01 connect via the veth pairs ve-edge01-v421 and ve-fw01-v421, appearing inside each container as eth421]
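The resulting plumbing can be inspected with the usual bridge and OVS tools (interface names as in the diagram):
# linux bash
brctl show brPub421            # ports attached to the Linux bridge
ovs-vsctl list-ports ovsbr0    # ports attached to the Open vSwitch bridge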
32. Ex2 - Map Salt -> Interface/BGP Config
# less pillar/net/example/ny1/host01.sls
enp2s0f1:
description: enp2s0f1.host02.ny1.example.net
auto: True
inet: manual
addresses:
- 10.20.3.3/31
bgp:
prefix_lists:
plIpv4ConnIntMgmt:
- prefix: 10.20.3.2/31
neighbors:
- remoteas: 64602
peer:
ipv4: 10.20.3.2
password: oneunified
Mtu: 9000
# Portion of /etc/network/interfaces:
# description: enp2s0f1.host02.ny1.example.net
auto enp2s0f1
iface enp2s0f1
address 10.20.3.3/31
Mtu 9000
# part of bgp route-map
ip prefix-list plIpv4ConnIntMgmt seq 5 permit 10.20.5.0/24
ip prefix-list plIpv4ConnIntMgmt seq 10 permit 10.20.3.2/31
route-map rmIpv4Connected permit 110
match ip address prefix-list plIpv4ConnLoop
set community 64601:1001
!
route-map rmIpv4Connected permit 120
match ip address prefix-list plIpv4ConnIntMgmt
set community 64601:1002 64601:1202
!
route-map rmIpv4Connected permit 130
match ip address prefix-list plIpv4ConnInt
set community 64601:1002
!
route-map rmIpv4Connected deny 190
# linux bash
# ip route show 10.20.3.2/31
10.20.3.2/31 dev enp2s0f1 proto kernel scope link src 10.20.3.3
# free range routing vtysh
host01.ny1# sh ip route 10.20.3.2/31
Routing entry for 10.20.3.2/31
Known via "connected", distance 0, metric 0, best
Last update 07w1d22h ago
* directly connected, enp2s0f1
# vtysh sh run excerpt
router bgp 64601
bgp router-id 10.20.1.1
bgp log-neighbor-changes
no bgp default ipv4-unicast
bgp default show-hostname
coalesce-time 1000
neighbor 10.20.3.2 remote-as 64602
neighbor 10.20.3.2 password oneunified
This excerpt of a pillar file is used to build both the interface configuration and the BGP
configuration shown above, along with the accompanying run-time results.
Parameters in the pillar file are kept together to facilitate readability and to clarify relationships.
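A hedged sketch of inspecting and applying this mapping from the Salt master (the minion ID pattern matches this example; the state name is an assumption):
# linux bash, on the salt master
salt 'host01.ny1*' pillar.item enp2s0f1    # confirm the pillar data the minion will receive
salt 'host01.ny1*' state.apply net         # re-render and apply the network states (state name 'net' is hypothetical)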
33. VNI -> Pillar for VxLAN
# cat pillar/net/example/ny1/vni.sls
#
# the vni is used to build the second part of a route distinguisher (rd)
# type 0: 2 byte ASN, 4 byte value
# type 1: 4 byte IP, 2 byte value
# type 2: 4 byte ASN, 2 byte value
# if vlans are kept in the range of 1 - 999:
# use a realm of 1 - 64, use rd of
# ip:rrvvv
# up to 16m vxlan identifiers can be used, will need to evolve if/when
# scale requires it
# but... since ebgp is being used predominantly, which provides a unique asn to each
# device, it is conceivable that type 0 RDs could be used, which would provide
# for the 16 million vxlan identifiers
vni:
- id: 1012
desc: vlan12 10.20.7.0/24
member:
- 10.20.1.1
- 10.20.1.2
- id: 1101
desc: edge0[1-2] v101
member:
- 10.20.1.1
- 10.20.1.2
- id: 1421
desc: public services
member:
- 10.20.1.1
- 10.20.1.2
Some pillar files have information shared across multiple instances –
common configuration elements are factored out and included in the
top.sls file where necessary
34. Auto Config: BGP, Interfaces, Links
# excerpt from BGP configuration file
address-family l2vpn evpn
neighbor 10.20.3.2 activate
vni 1101
rd 10.20.1.1:1101
route-target import 10.20.1.2:1101
route-target export 10.20.1.1:1101
exit-vni
vni 1012
rd 10.20.1.1:1012
route-target import 10.20.1.2:1012
route-target export 10.20.1.1:1012
exit-vni
vni 1421
rd 10.20.1.1:1421
route-target import 10.20.1.2:1421
route-target export 10.20.1.1:1421
exit-vni
advertise-all-vni
exit-address-family
# excerpt from /etc/network/interfaces:
# description: shared external containers
auto vlan421
iface vlan421
pre-up brctl addbr brPub421
pre-up brctl stp brPub421 off
up ip link set dev brPub421 up
pre-up ip link add vxPub421 type vxlan id 1421 dstport 4789 local 10.20.1.1 nolearning
pre-up brctl addif brPub421 vxPub421
up ip link set dev vxPub421 up
pre-up ip link add vbPub421 type veth peer name voPub421
pre-up brctl addif brPub421 vbPub421
pre-up ovs-vsctl --may-exist add-port ovsbr0 voPub421 tag=421
up ip link set dev vbPub421 up
up ip link set dev voPub421 up
down ip link set dev vbPub421 down
down ip link set dev voPub421 down
pre-up ovs-vsctl --may-exist add-port ovsbr0 vlan421 tag=421 -- set interface vlan421 type=internal
post-down ovs-vsctl --if-exists del-port ovsbr0 vlan421
# ip link show dev brPub421
17: brPub421: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group
default qlen 1000
link/ether 6e:56:4f:62:7c:82 brd ff:ff:ff:ff:ff:ff
# ip link show vxPub421
18: vxPub421: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master brPub421 state UNKNOWN
mode DEFAULT group default qlen 1000
link/ether ee:38:74:6c:99:3f brd ff:ff:ff:ff:ff:ff
# ip link show voPub421
19: voPub421@vbPub421: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system
state UP mode DEFAULT group default qlen 1000
link/ether 9a:e4:51:35:89:83 brd ff:ff:ff:ff:ff:ff
# ip link show vbPub421
20: vbPub421@voPub421: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master brPub421 state
UP mode DEFAULT group default qlen 1000
link/ether 6e:56:4f:62:7c:82 brd ff:ff:ff:ff:ff:ff
# ip link show vlan421
21: vlan421: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group
default qlen 1000
link/ether 62:06:81:20:29:09 brd ff:ff:ff:ff:ff:ff
a) a simple config is used to build ...
b) ... the complicated interface configuration in the diagram shown previously ...
c) ... with the resulting instances installed into the kernel
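The EVPN/VXLAN state programmed by this configuration can be checked read-only (a minimal sketch):
# free range routing vtysh
vtysh -c 'show evpn vni'           # VNIs known to FRR and their VXLAN interfaces
# linux bash
bridge fdb show dev vxPub421       # MAC entries learned/programmed on the VXLAN interface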
35. Process
● With salt state, pillar and reactor files defined for all services
and configuration elements, only two steps are necessary to
rebuild any one of the three cloud management boxes:
– destroy the boot sector
– reboot
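The slides do not show the exact commands; one possible form on the box being rebuilt, assuming /dev/sda as the boot device, is:
# linux bash (destructive!)
dd if=/dev/zero of=/dev/sda bs=512 count=1   # wipe the boot sector so the next boot falls back to PXE (/dev/sda is an assumed device name)
reboot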
36. Process
● Upon reboot, the physical box obtains the pxeboot installation
files, allocates and formats the file system, installs the operating
system, installs the Salt agent, and automatically reboots
● Upon that reboot, the Salt agent contacts one of the remaining
Salt masters and automatically starts provisioning the system
and services as defined in the Salt state and pillar files.
● LXC containers are instantiated and started at this time
● The Salt agent in each container contacts the Salt master to
initiate the build of its specific container, using services
supplied by the containers on the surviving hosts
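Progress of the rebuild can be watched from a surviving Salt master (a minimal sketch; minion IDs and patterns are illustrative):
# linux bash, on a surviving salt master
salt-key -L                        # list pending minion keys from the reinstalled host and its containers (IDs are illustrative)
salt-key -a host01.ny1             # accept the host's key if auto-accept is not configured
salt 'host01*' test.ping           # confirm the host and its containers respond
salt 'host01*' state.highstate     # (re)apply the full state if anything needs a nudge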