Neutron-to-Neutron:
interconnecting multiple
OpenStack deployments
a.k.a. neutron-interconnection
Thomas Morin
Orange Labs Networks
OpenStack Summit Berlin
2018-11
How to interconnect two OpenStack deployments?
(… or two or more OpenStack deployments? or two regions?)

Between datacenters or between NFV PoPs, you may want network interconnections with the following properties:
• on demand
• private addressing & isolation
• avoiding the overhead of packet encryption
Doing this by adding an orchestrator on top of the clouds?
not always possible... not always wanted:
• this orchestrator may need admin rights to set up networking
• contexts where there isn't a single organization involved
• need to expose an API to the projects
• extra complexity
Let's extend Neutron's API!

“User-facing” API: let a project define that a local Network A is interconnected to a Network B on another OpenStack deployment; the link is defined symmetrically on both sides.

“Neutron-Neutron” API:
• check whether the reverse interconnection is defined on the other side (first answer: “not yet”; on retry: “OK!”)
• expose/retrieve the technical details on how to realize connectivity
• parameters vary depending on the technique to use

At the end of the exchange, each side has the necessary information and can set up the interconnection.
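To make the user-facing side concrete, here is a rough sketch of what creating one half of an interconnection could look like over Neutron's REST API. This is an illustration only: the endpoint path, resource name and field names below are assumptions based on the general shape of the proposal, not the final API.

    import requests

    # Hypothetical sketch: create the local half of an interconnection on "mars".
    # The endpoint path and all field names are assumptions, for illustration.
    NEUTRON_MARS = "http://mars.example.com:9696"
    HEADERS = {"X-Auth-Token": "<project-scoped Keystone token>"}

    body = {
        "interconnection": {
            "name": "netA-to-netB",
            "type": "network_l3",                  # L3 interconnect between networks
            "local_resource_id": "<uuid of netA on mars>",
            "remote_resource_id": "<uuid of netB on pluto>",
            "remote_keystone": "http://pluto.example.com/identity",
            "remote_region": "RegionOne",
        }
    }

    resp = requests.post(f"{NEUTRON_MARS}/v2.0/interconnection/interconnections",
                         json=body, headers=HEADERS)
    resp.raise_for_status()
    # The symmetric call (local/remote swapped) is made against pluto's Neutron;
    # the interconnection only becomes usable once both halves are defined.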
Multitenancy and the need for network isolation imply that we address trust questions.

• Trusting that the interconnection preserves isolation
  – Goal: no interconnection is set up unless explicitly asked for by each project/tenant
  – How? Interconnect if and only if both sides agree (symmetric link check)
• Each OpenStack instance has to trust the packets from the other OpenStack instances
  – this proposal is for organizations/entities trusting each other, and trusting the network used to carry interconnections
• Authenticating Neutron-Neutron API exchanges
  – each Neutron component needs credentials to talk to the other side
  – not to act as the project/tenant, not to act as admin: only read-only access to interconnection information is needed
  – Keystone federation is not strictly needed for functionality, but will in practice be necessary to reduce configuration overhead
Multiple interconnection techniques are possible... the design is agnostic to interconnection techniques.

• “Interconnection technique”: whatever we end up using so that packets actually flow between what we interconnected.
• To allow a given technique to be used to set up network connectivity, just write a driver for it! (A sketch of such a driver interface follows below.)
• The Neutron-Neutron API exchange is a simple conduit to carry whatever information needs to be exchanged to establish the interconnection (dataplane IDs, routing IDs, parameters).

How does the service select the technique to use for a given interconnection, when a deployment supports more than one?
→ via configuration: straightforward
→ via negotiation: the API could be used for that, but do we want this complexity, or do we want to Keep It Simple Stupid?
Requirements for a technique to be applicable:
– provide isolated network connectivity (L2 and/or L3)
– interoperability preferred: it makes the solution applicable between two OpenStack deployments that do not use the same SDN controller solution

Examples:
– VLAN hand-off
– VXLAN gateway
– L2GW
– BGP VPNs
– GRE
– … pick your poison!
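To make the “just write a driver” idea concrete, here is a minimal sketch of what a driver interface could look like, assuming a simple parameter-exchange contract; the class name and method names are illustrative assumptions, not the actual neutron-interconnection driver API:

    import abc

    class InterconnectionDriver(abc.ABC):
        """Illustrative driver contract (names and signatures are assumptions).

        A driver encapsulates one interconnection technique (BGP VPN, VXLAN
        gateway, VLAN hand-off...). The service plugin stays agnostic: it only
        carries opaque parameter dicts between the two Neutron instances.
        """

        @abc.abstractmethod
        def allocate_local_parameters(self, interconnection):
            """Allocate local identifiers (e.g. a route target, a VNI) and
            return them as a dict to expose over the Neutron-Neutron API."""

        @abc.abstractmethod
        def setup(self, interconnection, remote_parameters):
            """Plug the local network/router into the interconnection, using
            the parameters retrieved from the remote side."""

        @abc.abstractmethod
        def teardown(self, interconnection):
            """Release local resources when the interconnection is deleted."""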
The example of BGP VPNs as the interconnection technique: why is this a great fit in this context?

• Each side can independently allocate network isolation identifiers: there is no need to choose a single identifier for a given interconnection, hence no need to coordinate the use of a common space of identifiers!
• Light & quick driver implementation: leverage the existing Neutron BGPVPN Interconnection API (networking-bgpvpn)
• No per-SDN-solution driver needed; the solution is usable on “day one” with:
  – Neutron ML2/OVS
  – TungstenFabric/Contrail
  – OpenDaylight
  – Nuage Networks
• Flexible WAN deployment options:
  – overlay on top of IP WAN connectivity
  – peering with a WAN IP/MPLS network (BGP VPN routing)
• Applicable to both IP and Ethernet interconnects
• Service composition!
Demo!
• Two clouds: mars and pluto
• Each “cloud” is a devstack with:
  – neutron (ML2 OVS driver)
  – the neutron-neutron interconnection service plugin, using the bgpvpn interconnection driver
  – networking-bgpvpn
• BGP peering between the two: gobgp (could have been FRRouting, etc.)
• `openstack` CLI configured for both clouds
[Diagram: VM 1 on netA behind Neutron + gobgp on mars; VM 2 on netB behind Neutron + gobgp on pluto; neutron-interconnection API exchanges and BGP VPN routes carried between the two clouds over an IP network.]
What happened behind the scenes with this 'bgpvpn' driver?

Preliminary configuration: not much! In /etc/neutron/neutron.conf:

on pluto:

    [neutron_interconnection]
    router_driver = bgpvpn
    network_l3_driver = bgpvpn
    network_l2_driver = bgpvpn
    bgpvpn_rtnn = 5000,5999

on mars:

    [neutron_interconnection]
    router_driver = bgpvpn
    network_l3_driver = bgpvpn
    network_l2_driver = bgpvpn
    bgpvpn_rtnn = 3000,3999
Information exchanges:
• Each side advertises the BGP VPN Route Target that it uses to advertise its own routes.
• To send traffic, the other side imports the routes carrying this Route Target into the relevant network.
• How is this done? On each side, the driver for the interconnection service uses the already existing Neutron BGPVPN API to create BGPVPNs and associate them with the network (see the sketch below).
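Concretely, the calls the driver makes on mars could look like the following sketch against the networking-bgpvpn REST API; the endpoint, token, UUIDs and the ASN 64512 are placeholders, and the route-target values follow the bgpvpn_rtnn ranges configured above:

    import requests

    NEUTRON = "http://mars.example.com:9696"      # placeholder endpoint
    HEADERS = {"X-Auth-Token": "<service credentials token>"}

    # Create a BGPVPN that imports the Route Target advertised by pluto over
    # the Neutron-Neutron API, and exports one allocated locally on mars
    # (from the configured 3000,3999 range).
    bgpvpn = requests.post(
        f"{NEUTRON}/v2.0/bgpvpn/bgpvpns",
        json={"bgpvpn": {
            "name": "interconnection-netA-netB",
            "type": "l3",
            "import_targets": ["64512:5000"],     # learned from pluto
            "export_targets": ["64512:3000"],     # allocated on mars
        }},
        headers=HEADERS,
    ).json()["bgpvpn"]

    # Associate the BGPVPN with the local network, so that the network's
    # routes are exported and pluto's routes are imported into it.
    requests.post(
        f"{NEUTRON}/v2.0/bgpvpn/bgpvpns/{bgpvpn['id']}/network_associations",
        json={"network_association": {"network_id": "<uuid of netA>"}},
        headers=HEADERS,
    )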
Implementation details (where the devil is!)

• Handle the lifecycle of an interconnection correctly
  – e.g. when it is deleted on one side, it needs to be torn down on the other side
  – → need for an explicit (and robust) state machine
• Handle cases where the other Neutron is not available
  – → periodic retries
• Do the work asynchronously from API calls
  – an API call should return instantly
  – work with the other Neutron instances needs to happen behind the scenes
• Handle local concurrency right
  – background tasks and API call processing need to operate consistently on a given interconnection
  – → introduce intermediate states in the state machine, acting as locks
• Robust global state distribution, Keep It Simple Stupid:
  – the local state machine does not need to know the state of the remote state machines
  – simple interactions between state machines: GET, refresh
A sketch of such a state machine follows below.
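As a minimal sketch of the “intermediate states acting as locks” idea; the state names and transitions below are illustrative assumptions, not the project's actual state machine:

    import enum
    import threading

    class State(enum.Enum):
        TO_VALIDATE = "to_validate"  # waiting for the remote side to define its half
        VALIDATING = "validating"    # intermediate state: a worker owns the record
        VALIDATED = "validated"      # both halves defined, parameters exchanged
        ACTIVE = "active"            # dataplane interconnection is set up
        TEARDOWN = "teardown"        # the remote half was deleted

    class Interconnection:
        """Illustrative record: intermediate states double as locks, so API
        processing and background tasks never act on the same record twice."""

        def __init__(self):
            self.state = State.TO_VALIDATE
            self._lock = threading.Lock()

        def try_claim(self, from_state, intermediate_state):
            # Atomically move into an intermediate state; in a real service
            # this would be a compare-and-swap UPDATE in the database.
            with self._lock:
                if self.state is not from_state:
                    return False
                self.state = intermediate_state
                return True

    def validate_task(interco, remote_ready):
        """Periodic background task: retry until the remote half exists."""
        if not interco.try_claim(State.TO_VALIDATE, State.VALIDATING):
            return  # another task or API call is working on this record
        # GET the remote Neutron's view; if it is not there yet, fall back
        # to TO_VALIDATE and let the next periodic run retry.
        interco.state = State.VALIDATED if remote_ready() else State.TO_VALIDATE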
What about IP address allocations and security groups? (food for thought...)

• With the proposed solution, the following needs to be taken care of by the end users:
  – choose IP addresses consistently across the different clouds
  – create Security Group rules to let traffic through; remote ends have to be specified as explicit addresses (remote IP prefixes), because a remote security group is not usable across clouds (see the sketch below)
• This is acceptable, but can we do better?
  – prevent end users from shooting themselves in the foot with overlapping IP addresses
  – make security groups work seamlessly across clouds, which would need security-group membership to be distributed between clouds/regions
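For example, letting a VM on mars receive traffic from netB on pluto today means writing a rule with an explicit remote prefix; a sketch using Neutron's standard security-group-rules API (endpoint, token, UUID and CIDR are placeholders):

    import requests

    NEUTRON = "http://mars.example.com:9696"      # placeholder endpoint
    HEADERS = {"X-Auth-Token": "<user token>"}

    # Allow ingress TCP from pluto's netB subnet; the remote end has to be
    # given as an explicit IP prefix, since a remote security group cannot
    # be referenced across clouds.
    requests.post(
        f"{NEUTRON}/v2.0/security-group-rules",
        json={"security_group_rule": {
            "security_group_id": "<uuid of the VM's security group>",
            "direction": "ingress",
            "ethertype": "IPv4",
            "protocol": "tcp",
            "remote_ip_prefix": "10.0.2.0/24",    # netB's CIDR on pluto (placeholder)
        }},
        headers=HEADERS,
    )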
Applicability use cases: between two OpenStack clouds
Applicability use cases: between two regions of an OpenStack cloud (e.g. RegionOne and RegionTwo)
Applicability use cases: between two clouds... or more!
Applicability use cases: addressing SDN-controller heterogeneity
• Need to interconnect clouds/regions that use different SDN controllers?
• Need to migrate from SDN A to SDN B, with connectivity between the two until A is phased out?
Implementation status (Neutron Stadium)
• Specs proposed in Spring 2018, merged in neutron-specs:
  https://specs.openstack.org/openstack/neutron-specs/specs/rocky/neutron-inter.html
• Project recently created under the Neutron umbrella:
  https://git.openstack.org/cgit/openstack/neutron-interconnection
• Code submissions and reviews to start there very soon
• It's the right time to jump in!
Neutron-Neutron interconnections: wrap-up

• Allows interconnections:
  – on demand
  – with no need for an orchestrator
  – light on the packet dataplane (no IPsec)
• Between OpenStack instances:
  – two or more OpenStack instances
  – multiple regions of a given cloud
  – multiple clouds (between trusting entities)
  – including when these instances use different SDN solutions
• The first driver will work with Neutron and many SDN controllers on day one, without waiting for an SDN controller-specific driver!
• What if BGP VPNs are not a good fit for you? The solution is agnostic: drivers for other solutions can be developed!
• Next steps?
  – code submission & reviews in the openstack/neutron-interconnection project
  – demo with heterogeneous SDN controllers?
Credits:
• Yannick Thomas
• Przemysłav Jasek