Neutron Juno Release 
Barcelona OpenStack MeetUp Group
What’s New in Neutron for Juno 
• Nova Network Parity 
• Distributed Virtual Router 
• L3 HA 
• IPv6 
• Security Group Enhancements 
• Plugin-specific improvements 
• Bug fixes 
Full list of release issues here 
Nova Network Parity 
Problem: Neutron does not offer the same functionality as nova-network, 
and in some areas does worse. 
• Quick summary: technical debt in Neutron is no longer tolerated. 
• Neutron DB migrations: each plugin offers a different database schema, and 
migrations depend on which services are enabled. 
– Enabling a new service can end in a migration error. 
– The Grenade project hates the Neutron project. 
– Neutron does not scale (L3 bottleneck); nova-network does. 
– No way to migrate from one vendor plugin to another. 
• No way to migrate from nova-network to Neutron. 
• Current nova-network users rely on API calls that do not exist in Neutron.
Nova Network Parity 
Solutions 
• Heal script: all tables from all plugins and services added. 
• Scalability issues improved by the DVR and L3 HA 
developments (more on this later). 
• Tempest tests added. 
• Gate tests added. 
• Neutron default in devstack (not yet; devstack is not part of 
the integrated release). 
• Nova-network to Neutron migration script (not yet!). 
• Missing API calls (get_fixed_ips, get_vifs_by_vm) (in 
development!)
Distributed Virtual Router (DVR) 
Problem: Neutron does not scale 
• Until Icehouse, the Network Node is unique in an installation: 
– Single point of failure 
– Traffic bottleneck
Distributed Virtual Router (DVR) 
Solution: delegate the DNAT (floating IP to fixed private IP) to the compute 
nodes 
• Traffic to floating IPs is handled directly at the compute nodes
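To illustrate what "delegating DNAT" means, here is a toy 1:1 translation table in Python. All names and addresses are made up for illustration; in a real DVR deployment this mapping lives in iptables rules inside namespaces on each compute node, not in Neutron Python code:

```python
# Toy model of the per-compute-node floating IP NAT that DVR performs.
# Addresses are illustrative (RFC 5737 documentation ranges).
FLOATING_TO_FIXED = {
    "203.0.113.10": "10.0.0.5",   # floating IP -> VM fixed IP
    "203.0.113.11": "10.0.0.7",
}

def dnat_inbound(dst_ip):
    """Rewrite the destination of an inbound packet (DNAT)."""
    return FLOATING_TO_FIXED.get(dst_ip, dst_ip)

def snat_outbound(src_ip):
    """Rewrite the source of an outbound packet (1:1 SNAT)."""
    fixed_to_floating = {v: k for k, v in FLOATING_TO_FIXED.items()}
    return fixed_to_floating.get(src_ip, src_ip)
```

Because the translation is stateless and 1:1, each compute node can apply it for its own VMs without coordinating with a central Network Node.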
Distributed Virtual Router (DVR) 
Solution: handle East-West traffic inside a compute-node L3 
namespace 
● One namespace per tenant on each compute machine 
● The router inside the namespace has an ARP table with the MAC 
addresses of the other tenant networks
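The east-west decision the local router namespace makes can be sketched as follows. This is purely illustrative (networks and MACs are invented); real DVR uses statically populated ARP entries and OVS flows, not Python:

```python
import ipaddress

# Tenant networks known to the local router namespace, with the
# (pre-populated) MAC of the distributed router port on each one.
TENANT_NETS = {
    ipaddress.ip_network("10.0.1.0/24"): "fa:16:3e:aa:aa:01",
    ipaddress.ip_network("10.0.2.0/24"): "fa:16:3e:aa:aa:02",
}

def route_locally(dst_ip):
    """Return the next-hop MAC if the destination is East-West
    (another tenant network reachable from this namespace),
    else None (North-South: hand off to the SNAT/floating-IP path)."""
    addr = ipaddress.ip_address(dst_ip)
    for net, mac in TENANT_NETS.items():
        if addr in net:
            return mac
    return None
```

Since every compute node holds the same pre-populated table, inter-VM traffic never has to transit the Network Node.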
Distributed Virtual Router (DVR) 
No solution: SNAT 
● North/South traffic without floating IPs 
● Remains a single point of failure and traffic bottleneck (without L3 HA)
Distributed Virtual Router (DVR) 
Summary 
• Significant improvement on the traffic bottleneck 
• Maybe now Neutron finally scales better than nova-network 
• Best option for new deployments 
• Upgrades (IMHO): pain in the ass 
– Database migration from legacy to DVR not provided (?) 
– L3 agents must be installed and configured manually 
– Compute nodes need access to the external network 
• SNAT needs to be improved (more later, in L3 HA)
Distributed Virtual Router (DVR) 
More info 
● Base design document 
● L2 agent changes 
● L3 agent changes 
● Atlanta Summit Slides 
● OpenStack wiki: How to enable DVR 
● Official Spec
L3 High Availability 
Problem: L3 SNAT cannot be distributed 
● To provide internet access to virtual machines 
without HA, you need a SNAT service: 
– A single gateway per network by default (even 2 
gateways do not solve the problem) 
– This gateway must keep track of outgoing 
connections to redirect incoming reverse-SNAT 
responses. 
● Single point of failure: all machines accessing the 
internet lose their connections if the Network Node 
fails.
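The statefulness that makes SNAT hard to distribute fits in a few lines. This is a toy connection-tracking table (invented names, no relation to Neutron code) showing why replies can only be delivered by the gateway that recorded the connection:

```python
# Toy conntrack table: this per-connection state is exactly what ties
# all outgoing traffic to a single SNAT gateway.
GATEWAY_IP = "198.51.100.1"
conntrack = {}  # public_port -> (vm_ip, vm_port)

def snat_out(vm_ip, vm_port, public_port):
    """Outgoing packet: record the connection, rewrite the source."""
    conntrack[public_port] = (vm_ip, vm_port)
    return (GATEWAY_IP, public_port)

def reverse_snat_in(public_port):
    """Incoming reply: only deliverable if this gateway holds the
    conntrack entry; if the gateway dies, the state dies with it."""
    return conntrack.get(public_port)
```

If the gateway fails, `conntrack` is lost and every established connection breaks, which is the failure mode L3 HA addresses.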
L3 High Availability 
Solution: based on the VRRP protocol 
● A First Hop Redundancy Protocol (FHRP) 
● Multiple nodes work as the router of the network. 
● Nodes run in master (active) or slave (stand-by) mode 
● If the master stops sending 'hello' messages to the stand-by 
nodes, they start an election process to choose 
the new master 
● The active node maps a configured VIP–MAC address pair 
that is the gateway of the VMs' subnets
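The election rule above can be sketched in a few lines: while hellos from the master keep arriving nothing changes, and when they stop the highest-priority stand-by takes over. Illustrative only; in Neutron this is delegated to keepalived, and the tie-break here is a stand-in for VRRP's real one:

```python
def elect_master(routers, hello_from):
    """routers: {name: VRRP priority}.
    hello_from: name of the current master if its hellos are still
    being received, else None (master presumed dead)."""
    if hello_from in routers:
        return hello_from  # master still alive: keep it
    # Election: highest priority wins; ties broken by name as a
    # simplified stand-in for VRRP's highest-address tie-break.
    return max(routers, key=lambda name: (routers[name], name))
```

With `{"node-1": 100, "node-2": 50}`, node-2 stays master as long as its hellos arrive; once they stop, node-1 wins the election.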
L3 High Availability 
Solution: VRRP in Neutron 
● An HA network is created (a tenant network without a tenant id) 
● Keepalived traffic is sent over this network
L3 High Availability 
More info 
● Assaf Muller blog 
● Official Spec 
● How to test 
● OpenStack Wiki
IPv6 
Icehouse status 
● IPv6 networks, although possible, were almost useless 
– Only link-local addresses registered in Neutron 
– An RA advertiser for SLAAC support had to be 
deployed manually 
● Only one attribute in the subnet: 
– ip_version
IPv6 
Juno status 
● Full support for IPv6 tenant networks 
● radvd and dnsmasq services deployed depending on the 
attributes 
● Current attributes: 
– ip_version 
– ipv6_ra_mode 
– ipv6_address_mode 
● These attributes also allow provider services to pass through 
the tenant network router and enable provider hardware solutions 
● Next slide shows all the available combinations 
● Public networks not yet (the current floating IP NAT does not make 
sense in IPv6)
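As a rough sketch of how the two IPv6 attributes drive service deployment, consider the toy function below. It is a simplification, not the exact Neutron validation matrix: it assumes radvd is spawned whenever Neutron itself must send RAs, dnsmasq whenever a dhcpv6-* address mode is set, and that setting both attributes to different values is invalid:

```python
def services_for(ipv6_ra_mode, ipv6_address_mode):
    """Return the set of backend services a Juno IPv6 subnet would
    need, under the simplified rules described above."""
    if (ipv6_ra_mode and ipv6_address_mode
            and ipv6_ra_mode != ipv6_address_mode):
        raise ValueError("invalid attribute combination")
    services = set()
    if ipv6_ra_mode is not None:
        services.add("radvd")      # Neutron sends the RAs itself
    if ipv6_address_mode in ("dhcpv6-stateful", "dhcpv6-stateless"):
        services.add("dnsmasq")    # DHCPv6 handled by dnsmasq
    return services
```

For example, `ipv6_ra_mode=None` with `ipv6_address_mode="slaac"` needs no Neutron service at all: addressing is left to an external (provider) router.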
(Table: available ipv6_ra_mode / ipv6_address_mode combinations)
IPv6 
More info 
● Spec: upstream SLAAC support 
● Spec: Router Advertiser Daemon (radvd) 
● Spec: Stateful and Stateless mode in dnsmasq 
● Patch to deploy a devstack with IPv6
Security Group Enhancements 
Security Group Enhancements 
*Image shamelessly stolen from the Rackspace 
documentation
Security Group Enhancements 
Implementation improvements 
● Using ipset improves the readability and scalability of the iptables chains:
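The gain comes from replacing one iptables rule per remote address with a single rule that consults a kernel set. A toy Python model of the two matching strategies (illustrative only, not the actual agent code):

```python
# Toy model: matching a packet's source against N per-address rules
# versus one rule that consults a set (what ipset provides).
allowed = ["10.0.0.%d" % i for i in range(1, 201)]

def match_linear(src):
    """One iptables rule per member: O(N) chain traversal."""
    return any(src == a for a in allowed)

ALLOWED_SET = set(allowed)

def match_ipset(src):
    """One rule plus an ipset: a single O(1) set lookup."""
    return src in ALLOWED_SET
```

With hundreds of VMs per security group, the chain stays one rule long instead of growing with every member, which is where the readability and scalability come from.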
Security Group Enhancements 
Implementation improvements 
● Problem: the L2 agent – Neutron server communication about security 
groups does not scale: 
– RPC calls block the communication channel 
– One call per device 
– Long messages from the server (20–600 MB!!) 
● Solution: responses based on aggregated security group information: 
– Easy to fetch from Neutron 
– Smaller messages 
– Example
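The shape of the fix can be seen by comparing the two payload styles. The dictionaries below are hypothetical illustrations, not the real RPC schema: the old style repeats the fully expanded rule set for every device, while the aggregated style sends rules and member IPs once and lets devices reference the group by id:

```python
import json

rules = [{"proto": "tcp", "port": 22}, {"proto": "tcp", "port": 80}]
members = ["10.0.0.%d" % i for i in range(1, 51)]   # 50 group members
devices = ["port-%d" % i for i in range(1, 51)]     # 50 agent ports

# Old style: every device's reply embeds the full expanded rule set
# (rules x members), duplicated per device.
expanded = {d: [dict(r, remote_ip=m) for r in rules for m in members]
            for d in devices}

# Aggregated style: rules and member IPs sent once, keyed by group id.
aggregated = {
    "security_groups": {"sg-1": rules},
    "member_ips": {"sg-1": members},
    "devices": {d: ["sg-1"] for d in devices},
}
```

Serializing both shows the aggregated payload is a small fraction of the expanded one, and the gap widens as groups and members grow.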
Security Group Enhancements 
More info 
● Ipset spec 
● Security group RPC calls improvement spec
Demo time! 
More info 
● Release notes 
● Kyle Mestery's notes 
● Juno design specs 
● Technical Committee's Neutron gap coverage 
● Launchpad report
Thank you 
