
Meshing OpenStack and Bare Metal Networks with EVPN - David Iles, Mellanox Technologies

Audience Level
Intermediate

Synopsis
The latest SDN revolution is centered on creating efficient virtualized data center networks using VXLAN & EVPN. We will talk about the scale, performance, and cost advantages of using a modern controller-free virtualized network solution built on 100 Gigabit Ethernet switches with hardware-based VXLAN routing. We will explore the ease of automating such a network in an OpenStack environment and take you through a real-world use case of using an OpenStack Network Node to bridge between a bare metal cloud (EVPN) and a fully virtualized cloud environment (orchestrated by Neutron).

Speaker Bio:
David has held leadership roles at 3Com, Cisco Systems, Nortel Networks, and IBM, where he promoted advanced network technologies including high-speed Ethernet, Layer 4-7 switching, virtual machine-aware networking, and Software Defined Networking.

David’s current focus is on the evolving landscape of data center networking, scale-out storage, Open Networking, and cloud computing.

Meshing OpenStack and Bare Metal Networks with EVPN - David Iles, Mellanox Technologies

  1. OpenStack Australia Day | June 2017: Meshing OpenStack and Bare Metal Networks with EVPN
  2. SDN for OpenStack: VM & Container Clouds with VXLAN
     Automated self-service networks:
     • VXLANs are easier than VLANs: no physical switches to configure
     • High scale of virtual networks: 4K VLANs vs. 16M VXLANs
     • VMs are free to travel around the data center and cross layer 3 boundaries
     • All dynamic changes move to the overlay
     • The underlay becomes very static, very stable, and very scalable; small L2 domains mean small fault domains
     [Diagram: compute and storage nodes carrying VLANs 2 and 4, connected by a VXLAN tunnel overlay]
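     To make the self-service point concrete, here is a minimal sketch of a tenant creating a VXLAN-backed network with the OpenStack CLI; the network/subnet names and address range are illustrative, and it assumes the ML2 VXLAN type driver is enabled.

        # Tenant creates a virtual network; no physical switch is touched.
        openstack network create tenant-net
        openstack subnet create --network tenant-net \
            --subnet-range 192.168.10.0/24 tenant-subnet
        # As admin, confirm Neutron allocated a VXLAN segment (VNI):
        openstack network show tenant-net \
            -c provider:network_type -c provider:segmentation_id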
  3. Enabling Modern Leaf-Spine Networks
     Physical switches:
     • All L3 = small fault domains
     • Fixed port = lower cost
     [Diagram: a legacy "scale up" network (10/40GbE ToR switches, legacy storage) versus a 25/50/100GbE leaf-spine fabric with compute and Ceph storage nodes, offering better cost, power, availability, and flexibility]
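     An "all L3" fabric like this is typically built with eBGP on every leaf-spine link. A hedged sketch of one leaf using BGP unnumbered, in approximate Cumulus Linux NCLU syntax; the ASN, router ID, and uplink port names are illustrative.

        net add bgp autonomous-system 65011
        net add bgp router-id 10.0.0.11
        # BGP unnumbered: peer over the uplink interfaces, no per-link addressing.
        net add bgp neighbor swp51 interface remote-as external
        net add bgp neighbor swp52 interface remote-as external
        # Announce the loopback and local subnets into the underlay.
        net add bgp redistribute connected
        net commit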
  4. Decline of the Modular Switch
     [Chart: data center Ethernet switch product mix as a percent of shipments (Crehan Research Inc.), with fixed/top-of-rack switches taking a growing majority of shipments while modular/chassis and blade/embedded switches decline]
  5. The Hidden Cost of VM Clouds
     Smart NICs are needed for VM clouds:
     • Tunneling drives up CPU load: encap/decap overhead, plus IP & TCP checksums with VXLAN
     • NIC offloads to the rescue: a VXLAN offload engine delivers higher throughput at 55% lower CPU utilization
     [Charts: VXLAN throughput in Gbps (higher is better) and VXLAN CPU utilization in % per Gbps (lower is better), with and without the offload engine]
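     One quick way to check whether a NIC exposes the VXLAN offloads this slide relies on is ethtool; the interface name below is a placeholder.

        # List UDP-tunnel (VXLAN) segmentation and checksum offload features:
        ethtool -k eth0 | grep -E 'tx-udp_tnl|checksum'
        # e.g. "tx-udp_tnl-segmentation: on" indicates hardware VXLAN TSO support.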
  6. OVS over DPDK versus OVS Offload
     OVS over DPDK needs 2 fully loaded CPU cores for 7.6 MPPS; OVS offload reaches 33 MPPS with 0 dedicated cores.

     Message rate / dedicated hypervisor cores:
                  OVS over DPDK   ASAP2 Direct
     1 flow       7.6M PPS        33.0M PPS
     60K flows    1.9M PPS        16.4M PPS
     Cores        2               0 (zero CPU utilization)
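     For reference, kernel OVS has a generic hardware-offload knob that ASAP2-style NIC offload builds on. A hedged sketch, assuming an offload-capable NIC at a hypothetical PCI address:

        # Put the NIC's embedded switch into switchdev mode so flows can be offloaded.
        devlink dev eswitch set pci/0000:03:00.0 mode switchdev
        # Tell OVS to push datapath flows down to the hardware.
        ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
        systemctl restart openvswitch-switch   # service name varies by distro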
  7. EVPN: Perfect Network for Bare Metal Cloud
     Hardware overlays for bare metal servers:
     • No vswitch configs
     • VXLAN for all the right reasons: the application team wants layer 2, the network team wants layer 3
     • Large-scale multitenant isolation
     • A VLAN can be anywhere in the network; overlapping VLANs & subnets
     What is EVPN?
     • Controller-free VXLAN with control plane learning (BGP)
     • Standards based: mix & match network vendors
     • Limited broadcast traffic
     • High-performance hardware tunneling
     • Data Center Interconnect (DCI)
     [Diagram: bare metal servers (provisioned by Ironic) attached via LACP/MLAG to leaf switches acting as hardware VTEPs, with a VXLAN tunnel overlay across the L2/L3 boundary]
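     A minimal sketch of the controller-free model: one Cumulus Linux leaf announcing its local VNI over BGP EVPN (approximate NCLU syntax; the ASN, VNI, addresses, and port names are illustrative).

        # EVPN address family over the existing eBGP underlay session.
        net add bgp autonomous-system 65011
        net add bgp neighbor swp51 interface remote-as external
        net add bgp l2vpn evpn neighbor swp51 activate
        # Advertise every locally configured VNI; no controller involved.
        net add bgp l2vpn evpn advertise-all-vni
        # Define the VTEP: VNI 10100, tunnel sourced from the loopback.
        net add vxlan vni10100 vxlan id 10100
        net add vxlan vni10100 vxlan local-tunnelip 10.0.0.11
        net add vxlan vni10100 bridge access 100
        net commit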
  8. Bare Metal Cloud Switch Features
     • License-free BGP, VXLAN, ZTP, EVPN
     • VXLAN routing: the fabric forwards intra-tenant traffic
     • VTEP scale: head-end replication (many switches max out at 128 VTEPs)
     • In-rack multitenancy: port/VLAN to VNI, not just VLAN to VNI
     • QinVXLAN: pseudowire, one VNI per tenant (BYOV)
     • RoCE over VXLAN: NVMe over Fabrics, Ceph with RDMA
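     The port/VLAN-to-VNI distinction can be illustrated with plain Linux plumbing: because the mapping hangs off a per-port subinterface, VLAN 10 on one port can land in a different VNI than VLAN 10 on another. A sketch with illustrative names and IDs:

        # VLAN 10 on port swp1 only (not VLAN 10 fabric-wide) joins VNI 10010.
        ip link add link swp1 name swp1.10 type vlan id 10
        ip link add vxlan10010 type vxlan id 10010 local 10.0.0.11 \
            dstport 4789 nolearning
        ip link add br-tenant1 type bridge
        ip link set swp1.10 master br-tenant1
        ip link set vxlan10010 master br-tenant1
        ip link set swp1.10 up; ip link set vxlan10010 up; ip link set br-tenant1 up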
  9. Marrying Bare Metal Cloud with VM Cloud
     Why marry the VM cloud to the bare metal cloud?
     • Tenants with containers, VMs, and bare metal servers
     How to marry the VM cloud to the bare metal cloud?
     • Use OpenStack Network Nodes (servers) as gateways
     • Use hardware VTEPs (switches) controlled with OVSDB (controllers)
     Overlapping infrastructure:
     • "Ships in the night": VM/container VNIs in a different range than EVPN VNIs
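     Keeping the two overlays in disjoint VNI ranges is a one-line Neutron setting. A sketch of the relevant ML2 fragment, with illustrative ranges:

        cat >> /etc/neutron/plugins/ml2/ml2_conf.ini <<'EOF'
        [ml2_type_vxlan]
        # Neutron-managed VM/container VNIs stay below 10000; the EVPN bare
        # metal VNIs (10000 and up) are managed on the switches, never by Neutron.
        vni_ranges = 1:9999
        EOF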
  10. Bare Metal EVPN Cloud Without Neutron: Poor Married Cloud Design with Server Gateways
      [Diagram: a bare metal EVPN cloud bridged to a VM cloud through DPDK-based OpenStack Network Nodes (servers), with a Neutron controller node managing the VM side]
  11. Bare Metal EVPN Cloud: Married Cloud Design with Switch Gateways
      [Diagram: a bare metal EVPN cloud bridged to a VM cloud through hardware VTEPs (switches) configured over OVSDB by an overlay controller, alongside the Neutron controller node]
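      Switches acting as hardware VTEPs expose the OVSDB hardware_vtep schema. From the switch side, registration looks roughly like the vtep-ctl sketch below; the switch name, tunnel IP, and controller address are illustrative, and exact commands vary by vendor.

        # Create the physical-switch record and point it at the overlay controller.
        vtep-ctl add-ps leaf01
        vtep-ctl set Physical_Switch leaf01 tunnel_ips=10.0.0.11
        vtep-ctl set-manager tcp:192.0.2.50:6640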
  12. Bare Metal EVPN Cloud: Married Cloud Design with Switch Gateways (Layer 2 Border Gateways)
      • Layer 2 border gateways (switches)
      • Neutron controller with L2 Gateway: https://wiki.openstack.org/wiki/Neutron/L2-GW
      [Diagram: bare metal EVPN cloud and VM cloud joined by L2 border gateway switches, with L2 agents on the Neutron side]
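      With the networking-l2gw project linked above, wiring a Neutron network to such a border switch is two CLI calls; the gateway, device, interface, and network names are illustrative.

        # Register the border switch as an L2 gateway in Neutron.
        neutron l2-gateway-create --device name=leaf01,interface_names=swp1 bm-gw
        # Stitch a Neutron VXLAN network to VLAN 10 on that gateway port.
        neutron l2-gateway-connection-create bm-gw tenant-net \
            --default-segmentation-id 10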
  13. Links to Community Articles
      • How To Configure OpenStack L2 Gateway with Mellanox Spectrum Switch (VTEP): https://community.mellanox.com/docs/DOC-2766
      • Mellanox Neutron Plugin: https://wiki.openstack.org/wiki/Mellanox-Neutron
      • How to Install Mellanox OpenStack Plugins for Mirantis Fuel: https://community.mellanox.com/docs/DOC-2443
      • EVPN on Cumulus Linux: https://docs.cumulusnetworks.com/display/DOCS/Ethernet+Virtual+Private+Network+-+EVPN
      • Lightweight Network Virtualization (LNV) on Cumulus Linux: https://docs.cumulusnetworks.com/display/DOCS/Lightweight+Network+Virtualization+-+LNV+Overview
      • OpenStack Neutron ML2 and Cumulus Linux: https://docs.cumulusnetworks.com/display/DOCS/OpenStack+Neutron+ML2+and+Cumulus+Linux
  14. Mellanox / Cumulus EVPN Bare Metal Lab Environment
      Purpose: an all-in-one lab environment for building a next-generation software-defined network with Mellanox Spectrum switches and Cumulus Linux. Prebuilt Ansible for ZTP.
      Test scenarios:
      1. Virtual network overlay (VXLAN, LNV, EVPN)
      2. L2 Gateway
      3. Virtual Routing and Forwarding (VRF) for multi-tenant and internet-connected clouds
      You will get:
      • Switches: 2 x SN2100 spine and 2 x SN2100 leaf switches, each with 16 ports of 100G and Cumulus OS; 2 x rack kits
      • NICs: 4 x 100G dual-port NICs
      • Cables and transceivers: 4 x 100G copper cables for MLAG, 4 x 100G fiber inter-switch links, 8 x 100G copper cables for servers, 2 x QSA adapters for 1/10G uplinks, 2 x 100G optics for 40/100G uplinks
      • 1 year of Cumulus Linux support
      • 1-day boot camp
      [Diagram: two pairs of SN2100 switches in MLAG with 100G links, L2/L3 boundaries, the overlay, and bare metal servers]
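      For test scenario 3, per-tenant routing isolation on Cumulus Linux (or any modern Linux) is a VRF device. A sketch with illustrative names and table numbers:

        # One routing table per tenant.
        ip link add vrf-tenant1 type vrf table 1001
        ip link set vrf-tenant1 up
        # Move a tenant-facing interface into the VRF; its routes are now isolated.
        ip link set swp1.10 master vrf-tenant1
        ip route show vrf vrf-tenant1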
  15. Thank You!
