This document summarizes Dragonflow, an OpenStack Neutron plugin that implements a distributed SDN controller. Some key points:
- Dragonflow provides a full implementation of the Neutron API and acts as a lightweight distributed SDN controller using a pluggable database.
- It aims to provide advanced networking services like security groups, load balancing, and DHCP in an efficient, scalable way.
- As an integral part of OpenStack, it is fully open source and designed for performance, scalability, and low latency. Its distributed control plane can sync policies across compute nodes.
Dragonflow is an integral part of OpenStack that provides distributed SDN capabilities for Neutron including scale, performance, and latency. It uses a lightweight and easily extensible distributed control plane with pluggable database support. Current features include L2/L3 networking, tunnels, distributed DHCP, and selective database distribution. The roadmap includes adding container, SNAT/DNAT, reactive database, and service chaining support.
Neutron Done the SDN Way
Dragonflow is an open source distributed control plane implementation of Neutron, which is an integral part of OpenStack. Dragonflow introduces innovative solutions and features to implement networking and distributed network services in a manner that is both lightweight and simple to extend, yet targeted towards performance-intensive and latency-sensitive applications. Dragonflow aims at solving the performance and scalability limitations of Neutron in large-scale deployments.
2. What is Dragonflow?
Full Implementation of OpenStack Neutron API
Lightweight Distributed SDN Controller with pluggable database
Project mission: to implement advanced networking services in a manner that is efficient, elegant and resource-nimble
3. Dragonflow Highlights
• Integral part of OpenStack
• Fully Open Source
• Scale, Performance and Latency
• Lightweight and Simple
• Easily Extendable
• Distributed SDN Control Plane
• Syncs policy-level abstractions to the compute nodes
4. Dragonflow - Distributed SDN
[Diagram: the Neutron server runs the Dragonflow plugin backed by a database; each compute node runs OVS, a local Dragonflow controller and a DB driver, and hosts VMs that connect through OVS.]
5. Dragonflow – Under The Hood
[Diagram: the Neutron server (core API, router and security group extensions) runs the Dragonflow plugin, which writes to the network DB (ETCD, RethinkDB, RAMCloud, OVSDB) through pluggable northbound DB drivers. On each compute node, the Dragonflow controller hosts an abstraction layer with the L2, L3, DHCP, security group, LBaaS, FWaaS and fault detection apps, uses southbound drivers (OpenFlow and OVSDB today, smartNIC in the future) to program OVS (ovs-vswitchd in user space, the kernel datapath module and NIC below), and serves VMs and containers.]
6. Current Release Features (Liberty)
L2 core API, IPv4, IPv6
GRE/VxLAN/Geneve tunneling protocols
Distributed L3 Virtual Router
Hybrid proactive + reactive flow installation
North-South traffic is still centralized
Distributed DHCP (with just 500 lines of code!)
Pluggable Distributed Database: ETCD, RethinkDB, RAMCloud, OVSDB
9. Dragonflow Distributed DHCP
1. The VM sends DHCP_DISCOVER.
2. OVS classifies the flow as DHCP and forwards it to the controller.
3. The DHCP app sends DHCP_OFFER back to the VM.
4. The VM sends DHCP_REQUEST.
5. OVS classifies the flow as DHCP and forwards it to the controller.
6. The DHCP app populates the DHCP options from the DB/configuration and sends DHCP_ACK.
[Diagram: the whole exchange stays on the compute node - the VM's port on br-int is wired over OpenFlow to the local Dragonflow controller, whose DHCP app (backed by the pluggable DB layer) plays the DHCP server role.]
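To make the punt-to-controller step concrete, here is a minimal, hypothetical sketch (not Dragonflow's actual code) of a local DHCP application reacting to packets punted by OVS. Port records, message types and option handling are simplified to plain Python dicts; only the DISCOVER/OFFER and REQUEST/ACK dispatch described above is shown.

    # Minimal sketch of a local DHCP app (hypothetical, not Dragonflow's real classes).
    DISCOVER, OFFER, REQUEST, ACK = "DISCOVER", "OFFER", "REQUEST", "ACK"

    class DhcpApp:
        def __init__(self, port_db):
            # port_db: maps a port's unique key (set as flow metadata) to its record.
            self.port_db = port_db

        def packet_in(self, port_key, msg_type):
            """Handle a DHCP packet punted by OVS; return the reply to send back."""
            port = self.port_db[port_key]           # record synced from the pluggable DB
            if msg_type == DISCOVER:
                return {"type": OFFER, "yiaddr": port["ip"]}
            if msg_type == REQUEST:
                reply = {"type": ACK, "yiaddr": port["ip"]}
                reply.update(port["dhcp_options"])  # e.g. router, dns, lease time
                return reply
            return None                             # other DHCP messages ignored here

    ports = {7: {"ip": "10.0.0.5",
                 "dhcp_options": {"router": "10.0.0.1", "dns": "8.8.8.8"}}}
    app = DhcpApp(ports)
    print(app.packet_in(7, DISCOVER))   # -> OFFER with the port's fixed IP
    print(app.packet_in(7, REQUEST))    # -> ACK with the DHCP options from the DB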
10. Dragonflow Distributed DHCP
Service table:
  Match: broadcast + UDP + source port 68 + destination port 67
  Action: send to the DHCP table
DHCP table (one flow per local port whose network has DHCP enabled):
  Match: in_port
  Action: set metadata to the port's unique key, send to the controller
  Default: goto the "L2 Lookup" table
[Diagram: on the compute node, the ingress pipeline of br-int (ingress port security, ingress classification, dispatch to ports) punts the matched DHCP packets over OpenFlow to the DHCP app in the local Dragonflow controller.]
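The rules above can be expressed as ordinary OpenFlow entries. The sketch below prints ovs-ofctl-style flow strings matching the slide; the table numbers (0 for the service table, 11 for DHCP, 17 for L2 lookup) are illustrative assumptions, not Dragonflow's real table layout.

    # Illustrative ovs-ofctl flow strings for the DHCP classification described above.
    SERVICE_TABLE, DHCP_TABLE, L2_LOOKUP_TABLE = 0, 11, 17   # hypothetical table numbers

    def dhcp_classification_flow():
        # Broadcast destination + UDP source port 68 / dest port 67 -> go to the DHCP table.
        return (f"table={SERVICE_TABLE},priority=100,udp,"
                f"dl_dst=ff:ff:ff:ff:ff:ff,tp_src=68,tp_dst=67,"
                f"actions=resubmit(,{DHCP_TABLE})")

    def dhcp_port_flow(ofport, port_unique_key):
        # One flow per local port whose network has DHCP enabled: tag the packet
        # with the port's unique key and punt it to the local controller.
        return (f"table={DHCP_TABLE},priority=100,in_port={ofport},"
                f"actions=set_field:{port_unique_key}->metadata,controller")

    def dhcp_default_flow():
        # Anything else continues to the L2 lookup table.
        return f"table={DHCP_TABLE},priority=1,actions=resubmit(,{L2_LOOKUP_TABLE})"

    for flow in (dhcp_classification_flow(), dhcp_port_flow(3, 7), dhcp_default_flow()):
        print(flow)   # each string could be fed to: ovs-ofctl add-flow br-int "<flow>"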
12. Database Framework
Requirements
• HA + Scalability
• Different Environments have different requirements
• Performance, Latency, Scalability, etc.
Why Pluggable?
• Long time to productize
• Mature Open Source alternatives
• Allow us to focus on the networking services only
13. Dragonflow Pluggable Database
DB driver API implementations: RAMCloud, ETCD, RethinkDB, ZooKeeper
[Diagram: both the Neutron server (Dragonflow Neutron plugin issuing DB operations) and the Dragonflow local controller on each compute node go through a pluggable DB layer; an applicative DB layer sits on top of per-database adapters that implement the DB driver API and expose each database server's features.]
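A pluggable database layer implies a small, uniform driver contract that each backend adapter implements. The interface below is a hedged sketch of what such a contract could look like (method names are assumptions, not Dragonflow's actual driver API), with a trivial in-memory implementation standing in for an etcd, RethinkDB, RAMCloud or ZooKeeper adapter.

    # Sketch of a pluggable DB driver contract (hypothetical method names).
    import abc

    class DbDriver(abc.ABC):
        @abc.abstractmethod
        def create_table(self, table): ...
        @abc.abstractmethod
        def set_key(self, table, key, value): ...
        @abc.abstractmethod
        def get_key(self, table, key): ...
        @abc.abstractmethod
        def get_all_keys(self, table): ...
        @abc.abstractmethod
        def delete_key(self, table, key): ...

    class InMemoryDriver(DbDriver):
        """Stand-in for an etcd/RethinkDB/RAMCloud/ZooKeeper adapter."""
        def __init__(self):
            self._tables = {}
        def create_table(self, table):
            self._tables.setdefault(table, {})
        def set_key(self, table, key, value):
            self._tables[table][key] = value
        def get_key(self, table, key):
            return self._tables[table].get(key)
        def get_all_keys(self, table):
            return list(self._tables[table])
        def delete_key(self, table, key):
            self._tables[table].pop(key, None)

    db = InMemoryDriver()
    db.create_table("lport")
    db.set_key("lport", "port-1", {"network": "net-1", "ip": "10.0.0.5"})
    print(db.get_key("lport", "port-1"))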
14. Full Distribution
[Diagram: with full distribution, every compute node's Dragonflow local cache holds a copy of all records in the distributed database (DB data 1, 2 and 3 are replicated to compute node 1 through compute node N via the DB drivers - OVSDB, ETCD, RethinkDB, RAMCloud).]
15. Selective Proactive Distribution
[Diagram: with selective proactive distribution, each compute node caches only the records it needs - e.g. compute node 1 caches only DB data 1 while compute node N caches DB data 2 and 3.]
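The difference between the two modes can be summarized in a few lines: with full distribution every local cache mirrors the whole DB, while with selective proactive distribution a node pre-fetches only the objects relevant to the ports it hosts. The sketch below illustrates that selection step; filtering by network is an illustrative assumption, not the project's exact policy.

    # Sketch: decide which DB records a compute node should cache proactively.
    db_records = {
        "net-1": {"ports": ["p1", "p2"]},
        "net-2": {"ports": ["p3"]},
        "net-3": {"ports": ["p4", "p5"]},
    }

    def selective_cache(db_records, local_port_networks):
        """Cache only the networks that have at least one port bound on this node."""
        return {net: rec for net, rec in db_records.items()
                if net in local_port_networks}

    full_cache = dict(db_records)                             # full distribution: everything
    node1_cache = selective_cache(db_records, {"net-1"})      # node 1 hosts ports on net-1
    nodeN_cache = selective_cache(db_records, {"net-2", "net-3"})
    print(sorted(node1_cache), sorted(nodeN_cache))           # ['net-1'] ['net-2', 'net-3']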
18. Dragonflow Pipeline
Installed in every OVS.
[Pipeline diagram: classification and tagging of traffic outgoing from local ports; ingress port security (ARP spoofing protection, security groups, ...); service traffic classification (ARP, DHCP - these stages have reactive flows to the controller); L2 lookup, L3 lookup and DVR (fully proactive); security groups; egress port security; egress processing (NAT); egress dispatching of outgoing traffic to external nodes or local ports; and ingress processing (NAT, BUM), which dispatches incoming traffic from external nodes to local ports.]
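In OpenFlow terms, such a pipeline is simply an ordered set of tables in br-int, each with a default hand-off flow to the next stage. The sketch below names the stages from the slide with hypothetical table numbers (Dragonflow's real numbering may differ) and prints the default hand-off flows.

    # Hypothetical table numbering for the pipeline stages shown above.
    PIPELINE = [
        ("classification_and_tagging", 0),
        ("ingress_port_security", 5),
        ("service_classification", 10),   # ARP/DHCP, reactive flows to the controller
        ("l2_lookup", 17),
        ("l3_lookup_dvr", 20),
        ("security_groups", 25),
        ("egress_port_security", 30),
        ("egress_processing_nat", 35),
        ("egress_dispatch", 40),
    ]

    def default_handoff_flows(pipeline):
        """Default flow in each table sends unmatched traffic to the next stage."""
        flows = []
        for (name, table), (_, nxt) in zip(pipeline, pipeline[1:]):
            flows.append(f"table={table},priority=1,actions=resubmit(,{nxt})  # {name}")
        return flows

    for f in default_handoff_flows(PIPELINE):
        print(f)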
20. Roadmap
Additional DB drivers: ZooKeeper, Redis, …
Selective Proactive DB
Hierarchical port binding (SDN ToR), moving to ML2
Pluggable Pub/Sub Mechanism
DB Consistency
Distributed DNAT
Security Group
Containers (Kuryr plugin and nested VM support)
Topology Service Injection / Service Chaining
Inter Cloud Connectivity (Border Gateway / L2GW)
…
21. Hierarchical Port Binding (SDN ToR), moving to ML2
[Diagram: racks 1 through n each have a ToR switch; VLAN segmentation is used inside the rack, between the servers and the ToR, and VXLAN segmentation between the ToR switches across racks.]
22. Dragonflow Hierarchical Port Binding (SDN ToR)
[Diagram: in the Neutron server, the ML2 core plugin combines type drivers (VLAN, GRE, VXLAN) with mechanism drivers - Dragonflow, a ToR mechanism driver, OVN, ONOS, OpenDaylight, Cisco (Nexus, N1Kv) and other vendor plugins. The Dragonflow DB carries the bindings to the compute nodes, where the local Dragonflow controller (L2, L3, DHCP and SG apps over the pluggable DB layer) programs br-int via OpenFlow; VLAN segmentation is used towards the rack's ToR and VXLAN segmentation between racks.]
23. Pluggable Pub/Sub Mechanism
If the database internally supports pub/sub, Dragonflow uses it directly.
[Diagram: the Neutron server's Dragonflow plugin publishes updates through the database, and the local controllers and DB drivers on the compute nodes subscribe to them.]
24. Pluggable Pub/Sub Mechanism
Why do we need it? Not all DBs support pub/sub (e.g. RAMCloud), and we need to be able to customize for performance, latency, scalability, etc.
[Diagram: same topology as the previous slide, with a pluggable pub/sub channel between the Neutron server's Dragonflow plugin and the compute-node controllers, independent of the database backend.]
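Because some backends have no native pub/sub, the publish/subscribe channel is abstracted behind its own driver, just like the database itself. Below is a minimal, hypothetical sketch of such a pluggable interface (names are assumptions), with an in-process implementation standing in for a real transport or a DB's native watch mechanism.

    # Sketch of a pluggable pub/sub driver (hypothetical interface).
    import abc
    from collections import defaultdict

    class PubSubDriver(abc.ABC):
        @abc.abstractmethod
        def publish(self, topic, event): ...
        @abc.abstractmethod
        def subscribe(self, topic, callback): ...

    class InProcessPubSub(PubSubDriver):
        """Stand-in for a ZeroMQ/Redis/etcd-watch based implementation."""
        def __init__(self):
            self._subscribers = defaultdict(list)
        def publish(self, topic, event):
            for callback in self._subscribers[topic]:
                callback(event)
        def subscribe(self, topic, callback):
            self._subscribers[topic].append(callback)

    bus = InProcessPubSub()
    bus.subscribe("lport", lambda ev: print("local controller got:", ev))
    bus.publish("lport", {"action": "create", "id": "port-1"})   # Neutron-side plugin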
25. DB Consistency - a Common Problem for All SDN Solutions
[Diagram: a typical deployment has two databases that must stay consistent - the Neutron DB behind neutron-server (ML2 core plugin and mechanism drivers, service plugins for load balancer, firewall and VPN, L3/DHCP/plugin agents over the AMQP message queue) and the SDN controller's own DB behind its northbound REST interface, which drives the virtual switches (OVS) and hardware switches over its southbound OpenFlow interface.]
26. Dragonflow DB Consistency
The Neutron DB is the master database.
Introduce a full-sync, diff-based mechanism between the Neutron DB and the Dragonflow DB.
Introduce a virtual transaction mechanism between the Neutron DB and the Dragonflow DB.
Key DB requirements from production environments: optimized for reads (a very high volume of read requests from Nova, Horizon, ...), and multiple Neutron server APIs running on different hosts.
[Diagram: several Neutron servers, each running the Dragonflow plugin, write to the shared Dragonflow DB, which is synced down to the local controllers on the compute nodes.]
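A full-sync, diff-based mechanism boils down to comparing the authoritative Neutron objects with what the Dragonflow DB currently holds and replaying only the differences. A simplified sketch of that comparison follows; object versions and the virtual transaction layer are omitted, and the field names are illustrative.

    # Sketch: diff the master (Neutron) DB against the Dragonflow DB and apply the delta.
    def diff_sync(neutron_objs, dragonflow_objs):
        """Return the create/update/delete sets needed to make the DF DB match Neutron."""
        to_create = {k: v for k, v in neutron_objs.items() if k not in dragonflow_objs}
        to_update = {k: v for k, v in neutron_objs.items()
                     if k in dragonflow_objs and dragonflow_objs[k] != v}
        to_delete = [k for k in dragonflow_objs if k not in neutron_objs]
        return to_create, to_update, to_delete

    neutron = {"net-1": {"name": "web"}, "net-2": {"name": "db"}}
    dragonflow = {"net-1": {"name": "web-old"}, "net-3": {"name": "stale"}}
    create, update, delete = diff_sync(neutron, dragonflow)
    print(create)   # {'net-2': {'name': 'db'}}
    print(update)   # {'net-1': {'name': 'web'}}
    print(delete)   # ['net-3']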
27. Join the project Dragonflow
• Documentation
https://wiki.openstack.org/wiki/Dragonflow
• Bugs & blueprints
https://launchpad.net/dragonflow
• DF IRC channel
#openstack-dragonflow
Weekly on Monday at 0900 UTC in #openstack-meeting-4 (IRC)
30. Security Groups Problems
• Data plane performance
• Additional Linux Bridge on the Path
• Control plane performance
• Rules need to be re-compiled on port changes
• Many rules due to security group capabilities
• iptables commands issued via a CLI process
• RPC bulks
32. Security Groups Translations
Rule: Direction: Egress, Type: IPv4, IP Protocol: TCP, Port Range: Any, Remote IP Prefix: 0.0.0.0/0
Flow:  match: ct_state=+new+trk,tcp,reg6=X
       actions: ct(commit,zone=NXM_NX_REG6[0..15]),resubmit(,<next_table>)
Rule: Direction: Egress, Type: IPv4, IP Protocol: TCP, Port Range: Any, Remote Security Group: Y
Flow:  match: ct_state=+new+trk,tcp,reg6=X,reg5=Y
       actions: ct(commit,zone=NXM_NX_REG6[0..15]),resubmit(,<next_table>)
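For illustration, a small helper in the spirit of the translation above could render an egress TCP rule as an OVS flow string. The register assignments (reg6 for the local port, reg5 for the remote security group) follow the slide; the next-table number and the function name are assumptions.

# Hypothetical helper that builds an OVS flow string of the shape shown
# above. reg6/reg5 usage follows the slide; the next-table value is a
# placeholder assumption.
def egress_tcp_rule_to_flow(local_port_id, remote_group_id=None, next_table=77):
    match = 'ct_state=+new+trk,tcp,reg6=%d' % local_port_id
    if remote_group_id is not None:
        # Remote Security Group rules additionally match the register
        # that identifies the remote group.
        match += ',reg5=%d' % remote_group_id
    actions = 'ct(commit,zone=NXM_NX_REG6[0..15]),resubmit(,%d)' % next_table
    return '%s actions=%s' % (match, actions)


print(egress_tcp_rule_to_flow(local_port_id=3))
print(egress_tcp_rule_to_flow(local_port_id=3, remote_group_id=5))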
33. Distributed DNAT (Floating IP)
[Diagram: two compute nodes whose OVS attaches VMs directly to the public network (distributed floating IP), contrasted with a network node hosting OVS and a router namespace.]
35. Neutron and libnetwork
[Diagram: Docker containers, each with a network sandbox and endpoints attached to a Frontend network and a Backend network; these map to Neutron networks Tenant A Net1 (192.168.1.0/24) and Tenant A Net2 (192.168.5.0/24), alongside VM1 (192.168.1.5) and VM2 (192.168.1.7 and 192.168.5.2).]
36. Kuryr Project Overview
• Open source
• Part of OpenStack Neutron’s big stadium
• Under the OpenStack Big Tent from the next release!
• Brings the Neutron networking model as a provider for the Docker
CNM
• Aims to support different Container Orchestration Engines
• E.g. Kubernetes, Mesos, Docker Swarm
• Weekly IRC meetings
• Working together with OpenStack community
• Neutron, Magnum, Kolla
38. Dragonflow and Kuryr plans
• Dragonflow to support containers networking use cases
• Nested containers inside VMs support
• Containers can leverage all of Dragonflow features
• Distributed DHCP
• Security and QoS
• Container performance and fault management
• Port forwarding
• Dragonflow distributed load balancer
• DNS as a Service in Dragonflow
• Integration with Kubernetes
• Full Integration of Dragonflow and Kuryr
• Containerized image of Dragonflow
• VIF Binding to Dragonflow
• OVS, IPVLAN
43. Simple But Extendable
• Various special services and behaviors
• VPN
• QoS (DSCP tagging)
• Dynamic Routing
• Inter clouds connectivity
• And so much more…
• External applications
• Centralized “SDN” applications
• New distributed networking services
• Networking as a Service to NFV
45. Classic Service Chaining
• Chain of ports the traffic traverses
• Classifier for entry point
• Different types of chains
• Static or dynamic
• Different underlying technologies
• NSH
• MPLS
• App ports
• End points of various kinds
• VMs
• Containers
• User space applications
• Physical devices
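As a concrete illustration of the chain-of-ports idea above, a minimal data model for a service chain could look like the sketch below: a flow classifier selecting the traffic at the entry point plus the ordered list of ports the traffic must traverse. The names are hypothetical, not a Dragonflow or networking-sfc API.

# Hypothetical data model for a classic service chain (illustrative only).
from dataclasses import dataclass, field
from typing import List


@dataclass
class FlowClassifier:
    """Selects which traffic enters the chain."""
    protocol: str = 'tcp'
    destination_port: int = 80
    source_cidr: str = '0.0.0.0/0'


@dataclass
class ServiceChain:
    """Ordered chain of service endpoints (VMs, containers, appliances)."""
    name: str
    classifier: FlowClassifier
    port_ids: List[str] = field(default_factory=list)   # Neutron port UUIDs
    encapsulation: str = 'mpls'                          # or 'nsh'


# Example: HTTP traffic is steered through a DPI and then a load balancer.
web_dpi_chain = ServiceChain(
    name='web-via-dpi',
    classifier=FlowClassifier(protocol='tcp', destination_port=80),
    port_ids=['dpi-port-uuid', 'lb-port-uuid'],
)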
47. Service Injection Hooks
[Diagram: a logical router connecting two logical switches with VM 1, VM 2 and VM 3; service injection hooks at different points in the topology: DSCP marking, DPI, distributed load balancing.]
48. Topology Service Injection
• Interact with base OpenFlow pipeline
• Leverage classification metadata
• Distributed network services
• Flow based
• Compatible with SDN Applications
• Can use OpenFlow
• Expose virtual topology
• Inject services in specific hooks
• Easily extendable
• No code modifications
49. Service Injection Example – IPS
[Diagram: a compute node running VM 1 and an IPS; traffic flows from table 0 through the service-chain tables to table N; an IPS manager drives a data-path app.]
The IPS recognizes an infected VM.
50. Service Injection Example – IPS
[Same diagram as slide 49.]
The IPS app manager installs blocking flows for VM1 traffic (quarantine).
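For illustration, the quarantine step could be as simple as pushing a high-priority drop flow for the infected VM's traffic to the compute node's bridge. The table number, priority, MAC address and helper name below are assumptions for this sketch, not the actual Dragonflow data-path app.

# Illustrative quarantine: install a drop flow for the infected VM's traffic.
import subprocess


def quarantine_vm(bridge, vm_mac, table=0, priority=200):
    """Drop all traffic sourced from the infected VM."""
    flow = ('table=%d,priority=%d,dl_src=%s,actions=drop'
            % (table, priority, vm_mac))
    subprocess.check_call(['ovs-ofctl', 'add-flow', bridge, flow])


# Example: the IPS manager reports VM1 (MAC is made up) as infected.
quarantine_vm('br-int', 'fa:16:3e:12:34:56')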
51. Use Cases
• Security Appliance
• Send specific traffic for inspection
• Traffic Mirroring
• Implement a TAP at various locations in the path
• Applicative Load Balancing
• Receive first packets of a connection and wire connection in flows
• Differentiate tenant service between clouds
• Inter Cloud connectivity
• Border Gateway / L2GW
52. Detect Elephant Flows
[Diagram: two servers (Test 1, 10.0.0.3 and Test 2, 10.0.0.4) connected through switches, each with a flow table (entries 0, 1, …, 64); an elephant detector collects sFlow statistics, detects the elephant flow 10.0.0.3 → 10.0.0.4 TCP port 1234, and writes flows to the table with DSCP=64, moving the flow from the slow path to the fast path.]
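A rough sketch of the slow-path/fast-path split in the diagram: sFlow samples are aggregated per flow key, and once a flow crosses a byte threshold a DSCP-remarking flow entry is installed so the fabric can steer it. The threshold, the DSCP value, and the helper names below are assumptions for illustration only.

# Sketch of an elephant-flow detector (illustrative values and names).
import subprocess
from collections import defaultdict

ELEPHANT_BYTES = 50 * 1024 * 1024      # assumed detection threshold (50 MB)
DSCP = 46                              # illustrative DSCP remark value
byte_counters = defaultdict(int)       # (src, dst, proto, dport) -> bytes seen
marked = set()                         # flows already remarked


def on_sflow_sample(src_ip, dst_ip, proto, dst_port, sampled_bytes):
    """Called for every sFlow sample collected from the switch."""
    key = (src_ip, dst_ip, proto, dst_port)
    byte_counters[key] += sampled_bytes
    if byte_counters[key] > ELEPHANT_BYTES and key not in marked:
        marked.add(key)
        mark_elephant(*key)


def mark_elephant(src_ip, dst_ip, proto, dst_port, bridge='br-int'):
    """Install a flow that remarks the elephant flow's DSCP bits."""
    flow = ('priority=300,%s,nw_src=%s,nw_dst=%s,tp_dst=%d,'
            'actions=mod_nw_tos:%d,NORMAL'
            % (proto, src_ip, dst_ip, dst_port, DSCP << 2))
    subprocess.check_call(['ovs-ofctl', 'add-flow', bridge, flow])


# e.g. the detector sees 10.0.0.3 -> 10.0.0.4 TCP port 1234 cross the threshold
on_sflow_sample('10.0.0.3', '10.0.0.4', 'tcp', 1234, 60 * 1024 * 1024)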
53. Dragonflow Inter Cloud Connectivity (Border Gateway)
[Diagram: Data Center A and Data Center B, each with compute nodes (CN) and a network node (NN) linked by intra-cloud tunnels; a GW-to-GW tunnel connects the two data centers; bare-metal servers (192.168.10.2, 192.168.10.3, 192.168.10.8) are connected as before.]
Why is this a good thing?
Common applicative DB adapter layer
The same layer is used by all clients:
Dragonflow Neutron plugin
Dragonflow local controller
External/internal applications
Expressed in terms of the schema model
Converts the model to "key / value"
Calls the DB driver API for DB operations
Leverages advanced DB features
Knows how to receive and wait for DB changes
According to a predefined generic API agreed with the driver
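A minimal sketch of what such an adapter layer could look like, assuming a pluggable driver that exposes key/value and notification primitives; the class and method names here are hypothetical, not the actual Dragonflow north-bound API.

# Sketch of a common DB adapter layer shared by the plugin, the local
# controller and external apps (hypothetical names).
import json


class DbAdapter:
    def __init__(self, driver):
        self._driver = driver                   # any pluggable DB driver

    def create_lport(self, lport):
        # Schema-model object -> "key / value" for the backing store.
        key = 'lport/%s' % lport['id']
        self._driver.set_key(key, json.dumps(lport))

    def get_lport(self, lport_id):
        value = self._driver.get_key('lport/%s' % lport_id)
        return json.loads(value) if value else None

    def wait_for_change(self, key_prefix, callback):
        # Leverages the driver's native watch / pub-sub when available.
        self._driver.register_notification(key_prefix, callback)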
Selective publish-subscribe
Each local controller syncs only the data relevant to its local ports
Depends on the virtual topology
The local controller gets all local port information
The DB framework must support waiting for changes on specific entry column values
The plugin tags the related objects with a special column value
Reduces the sync load and change rate
Each local controller only gets the subset of the data that is relevant to it (see the sketch below)
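A sketch of selective publish-subscribe under simplified assumptions: the local controller derives its topics from its local ports (here, one topic per tenant ID as one possible tagging scheme) and subscribes only to those, assuming the pub/sub driver exposes subscribe/unsubscribe. Names are illustrative, not the exact Dragonflow implementation.

# Sketch of selective pub/sub topic management (illustrative names).
def topics_for_local_ports(local_ports):
    """One possible tagging scheme: a port's topic is its tenant ID."""
    return {port['tenant_id'] for port in local_ports}


def resync_subscriptions(pubsub_driver, local_ports, on_db_change,
                         current_topics=frozenset()):
    """Subscribe to newly relevant topics and drop ones no longer needed."""
    wanted = topics_for_local_ports(local_ports)
    for topic in wanted - current_topics:
        pubsub_driver.subscribe(topic, on_db_change)    # newly relevant topic
    for topic in current_topics - wanted:
        pubsub_driver.unsubscribe(topic)                # no longer relevant
    return wanted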
<voice note: Here we'd explain that, because these integrations are vendor-specific, each Neutron vendor would have to write its own libnetwork or CNI implementation, reinventing the wheel without being able to share the common parts./>
<voice note: Here I'd stop to thank the Neutron drivers team for welcoming us into the Big Stadium./>
<voice note: Talk about how this may be supported directly or via plugins for these platforms that we can incorporate into our repository./>
<voice note: Here, tell people to join us and contribute./>