Provide network interconnections between OpenStack clouds/regions?
Neutron offers floating IPs and IPSec VPNaaS. However, these are not always good enough: sometimes network isolation is needed, but without the overhead of IPSec encryption.
How to avoid putting the burden on an orchestrator?
Solutions exist to create interconnections in ways specific to each overlay technology or SDN backend, but they require central coordination via an orchestrator (not always easy), and sometimes also the provisioning of network devices (not always simple).
"Neutron talking to Neutron"
This presentation exposes a solution developed in the Neutron project, where tenants define their network interconnection needs across regions or clouds, and Neutron components in the different regions coordinate together to set up these private isolated interconnections, without orchestration or network device configuration.
Sharing lessons learned from operating OpenStack.
Agenda
1. TOAST Cloud today
2. Why we chose OpenStack
3. Deployment difficulties and how we overcame them
4. Use cases
5. Problems still to solve
Audience
- Anyone who wants to use TOAST Cloud
- Anyone hearing about WMI for the first time
Lifecycle Management of Kubernetes Environments with Cluster API and Its Application to Multi-Cloud Environments — Motonori Shindo
Cluster API leverages Kubernetes' declarative API and resource management to handle the lifecycle of Kubernetes environments; its specification and development are driven by the Kubernetes community.
Several tools for building Kubernetes environments have existed before, but Cluster API has gained strong community support and its ecosystem keeps growing.
This session gives an overview of Cluster API and its latest developments, as well as its application to large-scale multi-cloud environments, with demos.
These are the slides from a talk at Cloud Operator Days Tokyo 2020.
Talk held at DevOps Gathering 2019 in Bochum on 2019-03-13.
Abstract: This talk will address one of the most common challenges of organizations adopting Kubernetes on a medium to large scale: how to keep cloud costs under control without babysitting each and every deployment and cluster configuration? How to operate 80+ Kubernetes clusters in a cost-efficient way for 200+ autonomous development teams?
This talk provides insights on how Zalando approaches this problem with central cost optimizations (e.g. Spot), cost monitoring/alerting, active measures to reduce resource slack, and automated cluster housekeeping. We will focus on how to ingrain cost efficiency in tooling and developer workflows while balancing rigid cost control with developer convenience and without impacting availability or performance. We will show our use case running Kubernetes on AWS, but all shown tools are open source and can be applied to most other infrastructure environments.
Taking Security Groups to Ludicrous Speed with OVS (OpenStack Summit 2015) — Thomas Graf
Open vSwitch (OVS) has long been a critical component of Neutron's reference implementation, offering reliable and flexible virtual switching for cloud environments.
Being an early adopter of the OVS technology, Neutron's reference implementation made some compromises to stay within the early, stable feature set OVS exposed. In particular, Security Groups (SG) have so far been implemented by leveraging hybrid Linux bridging and iptables, which comes at a significant performance overhead. However, thanks to recent developments and ongoing improvements within the OVS community, we are now able to implement feature-complete security groups directly within OVS.
In this talk we will summarize the existing Security Groups implementation in Neutron and compare its performance with the Open vSwitch-only approach. We hope this analysis will form the foundation of future improvements to the Neutron Open vSwitch reference design.
CERN is the home of the Large Hadron Collider (LHC), a 27km circular proton accelerator generating tens of petabytes of new data every year. Data is stored and processed using a large amount of resources totaling over 250,000 cores and thousands of storage servers, managed by OpenStack.
Networking is a critical part of our infrastructure and arguably the hardest to evolve. Given the size of CERN’s infrastructure, its flat network is partitioned in segments each representing a separate broadcast domain and potentially offering different levels of service. This fragmentation improves scalability and reduces the impact of misbehaving systems in the datacentre to individual segments. On the other hand, having multiple broadcast domains means features like floating and virtual IPs are much harder to offer.
We will tell the story of OpenStack Networking at CERN: the first integration with Nova Network, the migration to Neutron, and how we're adding SDN to our infrastructure.
[Open Source Consulting] OpenStack Ceph, Neutron, HA, Multi-Region — Ji-Woong Choi
This deck explains OpenStack Ceph & Neutron.
1. OpenStack
2. How to create instance
3. Ceph
- Ceph
- OpenStack with Ceph
4. Neutron
- Neutron
- How neutron works
5. OpenStack HA (controller, L3 agent)
6. OpenStack multi-region
Pushing Packets - How do the ML2 Mechanism Drivers Stack Up — James Denton
Architecting a private cloud to meet the use cases of its users can be a daunting task. How do you determine which of the many L2/L3 Neutron plugins and drivers to implement? Does network performance outweigh reliability? Are overlay networks just as performant as VLAN networks? The answers to these questions will drive the appropriate technology choice.
In this presentation, we will look at many of the common drivers built around the ML2 framework, including LinuxBridge, OVS, OVS+DPDK, SR-IOV, and more, and will provide performance data to help drive decisions around selecting a technology that's right for the situation. We will discuss our experience with some of these technologies, and the pros and cons of one technology over another in a production environment.
Interconnecting Neutron and Network Operators' BGP VPNs — Thomas Morin
joint presentation given at OpenStack summit Barcelona (Oct. 2016) with Paul Carver and Tim Irnich
talk video: https://www.youtube.com/watch?v=LCDeR7MwTzE
demo: https://www.youtube.com/watch?v=5iRoZcmQyuU
[ lightning talk given during the OpenStack Summit, Sydney Nov. 2018 ]
Provide network interconnections between OpenStack clouds? Between regions? Between DC pods?
Neutron today offers floating IPs and IPSec VPNaaS. However, these are not always good enough: sometimes private addressing and network isolation are needed, but avoiding the overhead of IPSec encryption would be preferable.
How to avoid the overhead of adding an orchestrator?
Solutions also exist to create interconnections in ways specific to each overlay technology or SDN backend, but they require central coordination via an orchestrator (not always possible), and sometimes also the provisioning of network devices (not always simple).
"Neutron talking to Neutron"
This talk exposes and showcases a solution where OpenStack projects define their network interconnection needs across regions or clouds, and Neutron endpoints in the different regions coordinate together in a simple way to set up these private isolated interconnections, without orchestration or network device configuration.
Enterprise datacenter virtualization and cloud computing place new demands on the network. Traditionally, virtual workloads were connected to the physical network with VLANs via virtual switches acting as bridges. As scaling and automation requirements grow, these models reach their limits.
At this OpenTuesday, Thomas Graf gave insights into protocols and technologies such as OpenFlow, VXLAN, OpenStack Neutron and Open vSwitch, which are used to implement next-generation automated networking concepts such as Software Defined Networking and Network Function Virtualization.
Flexible NFV WAN interconnections with Neutron BGP VPN — Thomas Morin
[talk given during the OpenStack Summit, May 2018 in Vancouver, BC]
Telcos use OpenStack to deploy virtualized network functions, and have specific requirements to interconnect these OpenStack deployments to their backbones and mobile backhaul networks. These interconnections, in particular, need to involve dynamic routing and interconnections with operators' internal VPNs.
This talk will explain the role that the networking-bgpvpn Neutron Stadium project plays to address this need, from the basics of the BGPVPN Interconnection API, to more advanced uses made possible by evolutions of this API delivered in Queens.
The more interesting use cases will be the opportunity for a step by step demo.
We'll give a status of where the project stands today in terms of feature coverage, look at the set of SDN controllers providing an implementation for this API beyond the implementation in reference drivers, and last, look at the future of the project.
Azure Networking: Innovative Features and Multi-VNet Topologies — Marius Zaharia
Are you looking to deploy a more complex structure of resources in Azure, all secured and segregated by precise boundaries while closely communicating with each other? Following the arrival of the advanced IaaS networking features in Azure (network security groups, routing, multi-NIC, …) and their maturation in the last months, here is the moment for you to find a modern architectural vision of networking in Azure, with focus on multi-VNET / VPN topologies, and based on ARM deployment model.
Paper presentation titled "Building and Operating Distributed SDN-Cloud Testbed with Hyper-convergent SmartX Boxes" at the EAI Cloud Computing Conference in Daejeon, Korea.
Overview of the OpenStack nova-networking evolution towards Neutron. Architecture overview of the OVS plugin, ML2, and the MidoNet overlay product. Overview and example of Heat templates, along with automation of physical switches using Cumulus.
This presentation for a talk at the Linux Tag 2014 has a couple of new Slides compared to earlier presentations that explain some different networking models like Flat, VLAN based, 'SDN Fabric based', etc.
Packet processing in the fast path involves looking up bit patterns and deciding on actions at line rate. The complexity of these functions at line rate has traditionally been handled by ASICs and NPUs. However, with the availability of faster and cheaper CPUs and hardware/software accelerations, it is possible to move these functions onto commodity hardware. This tutorial will talk about the various building blocks available to speed up packet processing, both hardware-based (e.g. SR-IOV, RDT, QAT, VMDq, VT-d) and software-based (e.g. DPDK, FD.io/VPP, OVS), and give hands-on lab experience on DPDK and FD.io fast-path lookup in the following sessions. 1: Introduction to Building Blocks: Sujata Tibrewala
Presentation given at the 2017 LinuxCon China
With the booming of container technology, it brings obvious advantages for the cloud: simpler and faster deployment, portability and lightweight cost. But the networking challenges are significant. Users need to restructure their networks and support container deployment within the current cloud framework, alongside VMs.
In this presentation, we will introduce a new container networking solution, which provides one management framework to work with different network components through an open/friendly modeling mechanism. iCAN can simplify network deployment and management with most orchestration systems and a variety of data plane components, and its extensible architecture can define and validate Service Level Agreements (SLAs) for cloud native applications, an important factor for enterprises delivering successful and stable services via containers.
Nicolai van der Smagt has been in the business of designing, implementing and running SP networks for over 15 years. He has worked with DOCSIS, DSL and FTTH operators. Nowadays, Nicolai is helping Infradata’s pan-European customers build better access, aggregation and core networks, but his focus is on the data center, SDN, NFV and the whitebox switching revolution. His motto: “Simplicity is sophistication”.
Topic of Presentation: SDN
Language: English
Abstract:
Open source SDN that actually works - today
OpenContrail is an open source (Apache 2.0 licensed) project that provides network virtualization in the data center, using tried and tested open standards. It provides northbound APIs, integrates in Openstack or Cloudstack and is available today!
In this slot we’ll show you the architecture and ideas behind the technology and how OpenContrail enables you to avoid the pitfalls that other (closed) SDN solutions bring. If time permits we’ll also demo the technology.
Microservices and containers networking: Contiv, an industry leading open sou... — Codemotion
Contiv provides a higher level of networking abstraction for microservices: it provides built-in service discovery and service routing for scale out services, working with schedulers like Docker Swarm, Kubernetes, Mesos and Nomad. We will see some code examples, basic use cases and an easy tutorial on the web.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview — Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on countries – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Key Trends Shaping the Future of Infrastructure — Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Kubernetes & AI - Beauty and the Beast!?! @KCD Istanbul 2024 — Tobias Schneck
As AI technology pushes into IT, I was wondering, as an "infrastructure container Kubernetes guy", how does this fancy AI technology get managed from an infrastructure operations view? Is it possible to apply our lovely cloud native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide you a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and make it work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 — Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Neuro-symbolic is not enough, we need neuro-*semantic* — Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
UiPath Test Automation using UiPath Test Suite series, part 4 — DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Securing your Kubernetes cluster: a step-by-step guide to success! — KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Connector Corner: Automate dynamic content and events by pushing a button — DianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
JMeter webinar - integration with InfluxDB and Grafana — RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
The Art of the Pitch: WordPress Relationships and Sales — Laura Byrne
Clients don't know what they don't know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
2. 2
How to interconnect two OpenStack deployments?
( … two or more OpenStacks? two regions?)
Between datacenters or between NFV PoPs, you may want network interconnections with the following properties:
- on demand
- private addressing & isolation
- avoiding the overhead of packet encryption
[diagram: VMs in different NFV PoPs and/or datacenters, to be interconnected]
3. 3
Doing this by adding an orchestrator on top of the clouds?
[diagram: an orchestrator driving network setup across NFV PoPs and/or datacenters]
Not always possible, not always wanted:
- this orchestrator may need admin rights to set up networking
- contexts where there isn't a single organization involved
- need to expose an API to the projects
- extra complexity
4. 4
Let's extend Neutron's API!
"User facing" API: let a project define that a local Network A is interconnected to a Network B on another OpenStack => define the link symmetrically on both sides.
"Neutron-Neutron" API:
- check if the reverse interconnection is defined on the other side (not yet / OK!)
- expose/retrieve the technical details on how to realize connectivity; parameters vary depending on the technique to use
At the end of the exchange, each side has the necessary information and can set up the interconnection.
[diagram: two Neutron instances, each with VMs, performing these API exchanges]
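The symmetric definition and reverse-check described above can be simulated in a few lines of Python (an illustrative sketch only: class and method names are invented here, not the actual neutron-interconnection code):

```python
# Minimal simulation of the symmetric interconnection handshake.
# All names are hypothetical, for illustration of the API model only.

class NeutronSide:
    def __init__(self, name):
        self.name = name
        # (local_net, remote_cloud_name, remote_net) -> state
        self.interconnections = {}

    def define_interconnection(self, local_net, remote_cloud, remote_net):
        """User-facing API: a project declares the desired link."""
        self.interconnections[(local_net, remote_cloud.name, remote_net)] = "PENDING"

    def has_reverse(self, caller, caller_local_net, caller_remote_net):
        """Neutron-Neutron API: is the mirror link defined on this side?"""
        return (caller_remote_net, caller.name, caller_local_net) in self.interconnections

    def try_activate(self, remote_cloud, local_net, remote_net):
        """Activate only if the other side has defined the reverse link."""
        key = (local_net, remote_cloud.name, remote_net)
        if key in self.interconnections and remote_cloud.has_reverse(self, local_net, remote_net):
            self.interconnections[key] = "ACTIVE"
        return self.interconnections.get(key)

mars, pluto = NeutronSide("mars"), NeutronSide("pluto")
mars.define_interconnection("netA", pluto, "netB")
print(mars.try_activate(pluto, "netA", "netB"))  # PENDING: not yet defined on pluto
pluto.define_interconnection("netB", mars, "netA")
print(mars.try_activate(pluto, "netA", "netB"))  # ACTIVE
```

The point of the sketch: neither side needs admin rights on the other; each only reads whether the mirror link exists before activating its own half.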
5. 5
Multitenancy & the need for network isolation imply that we address trust questions.
Trusting that the interconnection preserves isolation
- Goal: no interconnection setup unless explicitly asked for by each project/tenant
- How? Interconnect if and only if both sides agree (symmetric link check)
- Each OpenStack instance has to trust the packets from the other OpenStack instances
- This proposal is for organizations/entities trusting each other, and trusting the network used to carry interconnections
Authenticating Neutron-Neutron API exchanges
- Each Neutron component on each side needs credentials to talk to the other side
- Not to act as the project/tenant, not to act as admin: only read-only access to interconnection info is needed
- Keystone federation is not strictly needed for functionality, but will in practice be necessary to reduce configuration overhead
6. 6
Multiple interconnection techniques are possible...
("interconnection technique": what we end up using so that packets actually flow between what we interconnected)
The design is agnostic to interconnection techniques:
- to allow a given technique to be used to set up network connectivity, just write a driver for it!
- the Neutron-Neutron API exchange is a simple conduit to carry whatever information needs to be exchanged to establish the interconnection (dataplane IDs, routing IDs, parameters)
How does the service select the technique to use for a given interconnection (in the case where more than one is supported by a given deployment)?
→ via configuration: straightforward
→ negotiation: the API could be used to do that, but do we want this complexity or do we want to Keep It Simple Stupid?
Requirements for a technique to be applicable:
– provide isolated network connectivity (L2 and/or L3)
– interoperability preferred: makes the solution applicable between two OpenStacks that do not use the same SDN controller solution
Examples:
– VLAN hand-off
– VXLAN gateway
– L2GW
– BGP VPNs
– GRE
– … pick your poison!
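Selecting the technique via configuration can be as simple as a driver registry keyed by a config option (a hypothetical sketch: the option name follows the slides, but the registry and driver classes are invented for illustration):

```python
# Hypothetical driver registry: the interconnection technique is chosen
# purely from local configuration, with no cross-cloud negotiation.

class BgpvpnDriver:
    name = "bgpvpn"
    def exchange_parameters(self):
        # e.g. the BGP Route Target this side exports its routes with
        return {"route_target": "64512:5001"}

class VxlanGatewayDriver:
    name = "vxlan_gw"
    def exchange_parameters(self):
        # e.g. the VNI and VTEP endpoint for a VXLAN hand-off
        return {"vni": 12345, "vtep_ip": "192.0.2.1"}

DRIVERS = {d.name: d for d in (BgpvpnDriver, VxlanGatewayDriver)}

def load_driver(conf):
    """Pick the driver named by the network_l3_driver config option."""
    return DRIVERS[conf["network_l3_driver"]]()

driver = load_driver({"network_l3_driver": "bgpvpn"})
print(driver.exchange_parameters())  # {'route_target': '64512:5001'}
```

Whatever dict `exchange_parameters()` returns is exactly what the Neutron-Neutron API would carry as its opaque conduit payload.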
7. 7
The example of BGP VPNs as the interconnection technique
Why is this a great fit in this context?
- Each side can independently allocate network isolation identifiers: no need to choose a single identifier for a given interconnection => no need to coordinate the use of a common space of identifiers!
- Light & quick driver implementation: leverage the existing Neutron BGPVPN Interconnection API (networking-bgpvpn)
- No per-SDN-solution driver needed; solution usable on "day one" with:
  - Neutron ML2/OVS
  - TungstenFabric/Contrail
  - OpenDaylight
  - Nuage Networks
- Flexible WAN deployment options:
  - overlay on top of IP WAN connectivity
  - peering with WAN IP/MPLS, BGP VPN routing
- Applicable to both IP and Ethernet interconnects
- Service composition! yay!!
[diagram: VMs in two clouds, connectivity established via BGP VPN routes]
8. 8
Demo!
Two clouds: mars and pluto
- each "cloud" is a devstack with:
  - neutron (ML2 OVS driver)
  - the neutron-neutron interconnection service plugin, using the bgpvpn interconnection driver
  - networking-bgpvpn
- BGP peering between the two: gobgp (could have been FRRouting, etc.)
- `openstack` CLI configured for both clouds
[diagram: VM 1 on netA in mars, VM 2 on netB in pluto; neutron-interconnection API exchanges and BGP VPN routes carried over an IP network]
9. 9
What happened behind the scenes with this 'bgpvpn' driver?
Preliminary configuration: not much! In /etc/neutron/neutron.conf on each side:

pluto:
[neutron_interconnection]
router_driver = bgpvpn
network_l3_driver = bgpvpn
network_l2_driver = bgpvpn
bgpvpn_rtnn = 5000,5999

mars:
[neutron_interconnection]
router_driver = bgpvpn
network_l3_driver = bgpvpn
network_l2_driver = bgpvpn
bgpvpn_rtnn = 3000,3999

Information exchanges:
- each side advertises the BGP VPN Route Target that it uses to advertise its own routes
- to send traffic, the other side will import the routes carrying this Route Target into the relevant network
How is this done? On each side, the driver for the interconnection service uses the already existing Neutron BGPVPN API to create BGPVPNs and associate them to the network.
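The exchange above can be simulated in a few lines (a simplified sketch: the allocation and import logic are stand-ins for what the bgpvpn driver does through the networking-bgpvpn API, and the ASN is invented for RT formatting):

```python
# Each cloud allocates Route Targets from its own configured range
# (bgpvpn_rtnn), so no identifier space is shared between clouds.
import itertools

class Cloud:
    AS = 64512  # illustrative ASN used to format Route Targets

    def __init__(self, name, rt_range):
        self.name = name
        lo, hi = rt_range
        self._alloc = itertools.count(lo)  # next free RT number in the range
        self.export_rt = None              # RT this side tags its routes with
        self.import_rts = set()            # RTs imported into the local network

    def allocate_export_rt(self):
        self.export_rt = f"{self.AS}:{next(self._alloc)}"
        return self.export_rt

def interconnect(a, b):
    """Each side advertises its export RT; the other side imports it."""
    a.import_rts.add(b.allocate_export_rt())
    b.import_rts.add(a.allocate_export_rt())

pluto = Cloud("pluto", (5000, 5999))
mars = Cloud("mars", (3000, 3999))
interconnect(pluto, mars)
print(pluto.import_rts)  # {'64512:3000'}
print(mars.import_rts)   # {'64512:5000'}
```

Note how pluto and mars draw from disjoint configured ranges (5000-5999 vs 3000-3999), which is exactly why no common identifier space needs to be coordinated.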
10. 11
Implementation details (where the devil is!)
- Handle the lifecycle of an interconnection correctly: e.g. when deleted on one side, it needs to be torn down on the other side → need for an explicit (and robust) state machine
- Handle cases where the other Neutron is not available → periodic retries
- Do the work asynchronously from API calls: an API call should return instantly; work with the other Neutron instances needs to happen behind the scenes
- Handle local concurrency right: background tasks and API call processing need to operate consistently on a given interconnection → introduce intermediate states in the state machine, acting as locks
- Robust global state distribution, Keep It Simple Stupid: a local state machine does not need to know the state of the remote state machines; simple interactions between state machines: GET, refresh
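A minimal sketch of such a state machine (the states and transitions here are illustrative, not the actual ones in the neutron-interconnection plugin):

```python
# Toy interconnection lifecycle: an intermediate state like VALIDATING
# acts as a lock so background tasks and API calls don't race, and a
# failed validation falls back for a periodic retry.

VALID = {
    "TO_VALIDATE": {"VALIDATING"},
    "VALIDATING":  {"ACTIVE", "TO_VALIDATE"},  # retry if peer unreachable
    "ACTIVE":      {"TEARDOWN"},
    "TEARDOWN":    {"DELETED"},
}

class Interconnection:
    def __init__(self):
        self.state = "TO_VALIDATE"

    def transition(self, new_state):
        if new_state not in VALID.get(self.state, set()):
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

ic = Interconnection()
ic.transition("VALIDATING")
ic.transition("ACTIVE")
try:
    ic.transition("DELETED")   # must pass through TEARDOWN first
except ValueError as e:
    print(e)                   # illegal transition ACTIVE -> DELETED
ic.transition("TEARDOWN")
ic.transition("DELETED")
```

Making every transition table-driven keeps the "who may touch this interconnection now" question local, which is what lets each side stay ignorant of the remote state machine.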
11. 12
What about IP address allocations and security groups? (food for thought...)
With the proposed solution, the following needs to be taken care of by the end users:
- choose IP addresses consistently across the different clouds
- create Security Group rules to let traffic through; explicit addresses need to be specified for remote ends (remote prefix), because remote security groups are not usable
This is acceptable, but can we do better?
- prevent end users from shooting themselves in the foot with overlapping IP addresses
- make security groups work seamlessly across clouds: this would need security-group membership to be distributed between clouds/regions
15. 16
Applicability use cases
Address SDN-controller heterogeneity:
- need to interconnect clouds/regions that use different SDN controllers?
- need to migrate from SDN A to SDN B, with connectivity between the two until A is phased out?
[diagram: VMs in clouds running different SDN controllers, interconnected]
16. 17
Implementation status
- Specs proposed in Spring, merged in neutron-specs: https://specs.openstack.org/openstack/neutron-specs/specs/rocky/neutron-inter.html
- Project recently created under the Neutron Stadium umbrella: https://git.openstack.org/cgit/openstack/neutron-interconnection
- Code submissions and reviews to start there very soon
It's the right time to jump in!
17. 18
Neutron-Neutron interconnections: wrap up
Allows interconnections:
- on demand
- no need for an orchestrator
- light on the packet dataplane (no IPSec)
Between OpenStack instances:
- two or more OpenStack instances
- multiple regions of a given cloud
- multiple clouds (between trusting entities)
- including when these instances use different SDN solutions
The first driver will work with Neutron and many SDN controllers on day one, without waiting for an SDN controller-specific driver!
What if BGP VPNs are not a good fit for you? The solution is agnostic: drivers for other solutions can be developed!
Next steps?
- code submission & reviews: openstack/neutron-interconnection project
- demo with heterogeneous SDN controllers?
Credits
- Yannick Thomas
- Przemysłav Jasek