VMworld 2013: Advanced VMware NSX Architecture
Bruce Davie, VMware

Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare

Presentation Transcript

  • Slide 1: Advanced VMware NSX Architecture (Bruce Davie, VMware) NET5716 #NET5716
  • Slide 2: Agenda
    • Network Virtualization Refresher
    • NSX Architecture
    • Scale
    • Integrating Physical Workloads in Virtual Networks
    • Distributed Services
    • Connecting with WAN Services
    • Summary & Future Directions
  • Slide 3: Objective
    • Provide a deep dive into the architecture of NSX, with a focus on:
      • How the architecture is designed for scale in the control, management, and data planes
      • How physical devices and wide-area services can be incorporated in virtual networks
      • Where the platform is heading in the future
    • Reinforce the value proposition of network virtualization
  • Slide 4: Why We Need Network Virtualization (diagram: a compute virtualization abstraction layer over the physical infrastructure)
    • Provisioning is slow
    • Placement is limited
    • Mobility is limited
    • Hardware dependent
    • Operationally intensive
    • Networking undoes much of the goodness of server virtualization
  • Slide 5: The Solution: Virtualize the Network (diagram: a network virtualization abstraction layer alongside compute virtualization over the physical infrastructure, forming the software-defined data center; contrasted with the slow, limited, hardware-dependent status quo of the previous slide)
    • Programmatic provisioning
    • Place any workload anywhere
    • Move any workload anywhere
    • Decoupled from hardware
    • Operationally efficient
  • Slide 6: What is Network Virtualization? (diagram: the analogy between server and network virtualization)
    • A server hypervisor, requiring only x86 hardware, presents an x86 environment to each virtual machine's applications over physical compute and memory
    • A network virtualization platform, requiring only IP transport, presents L2, L3, and L4-7 network services to the workloads on each virtual network over the physical network
    • In both cases the workloads are decoupled from the hardware
  • Slide 7: The Starting Point for Network Virtualization: the Virtual Switch (diagram: hypervisor vSwitches attached to the physical network)
  • Slide 8: NSX, the Network Virtualization Platform (diagram: a controller cluster exposes the "NSX API" to a cloud management platform and programs NSX vSwitches on vSphere hosts, Open vSwitch on KVM and Xen servers, and NSX gateways, in software or on hardware-partner switches, whose VTEPs bridge VLANs on the physical network into L2/L3 virtual networks)
  • Slide 9: NSX Controller (diagram: a scale-out controller cluster of nodes with a web-service API and persistent storage, spanning the logical network and transport network layers)
    • All nodes active
    • Workload sliced among nodes
    • Live software upgrades
  • Slide 10: Tunnels Are Like Cables (diagram: in the virtual network, STT tunnels between hypervisors and VXLAN tunnels to third-party hardware play the role that copper cables play in the physical world; the controller sets them up)
  • Slide 11: Why Not a Single Tunnel Format?
    • STT was designed to optimize performance for hypervisor-to-hypervisor traffic
      • Leverages commodity NIC behavior so that tunneling has negligible performance impact
      • Unfortunately, it is hard for switches to implement and can raise issues with firewalls
    • VXLAN is the de facto industry standard for network virtualization
      • Ideal for multi-vendor situations (e.g., vswitch-to-physical-switch communication)
      • NIC support for high performance will start to appear in the next year
    • Extensibility of the header is likely needed: STT has a 64-bit "context" vs. VXLAN's 24-bit VNI
    • Tunnel format is decoupled from the control plane
    • Tunnel format != virtualization architecture
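The 24-bit VNI ceiling contrasted with STT's 64-bit context above is visible directly in the VXLAN header layout defined in RFC 7348. A minimal sketch in Python, packing the 8-byte header:

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # the "I" flag: VNI field is valid

def vxlan_header(vni: int) -> bytes:
    """Build an 8-byte VXLAN header (RFC 7348 layout).

    The VNI occupies only 24 bits, which is the extensibility limit
    the slide contrasts with STT's 64-bit context field.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # Flags byte plus 3 reserved bytes, then 24-bit VNI plus 1 reserved byte.
    return struct.pack("!B3x", VXLAN_FLAG_VNI_VALID) + struct.pack("!I", vni << 8)

hdr = vxlan_header(5001)   # hdr.hex() == '0800000000138900'
```

The header rides inside a UDP datagram on the underlay, so any IP fabric can carry it, which is what makes it attractive for multi-vendor deployments.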
  • Slide 12: Visibility & Virtual Networks
    • Historically it has been challenging to troubleshoot connectivity between VMs
      • Is the problem in the vswitch or the physical network?
      • What is the path through the physical network?
      • Is there a (misconfigured) middlebox in the path?
    • Network virtualization gives us tools to handle this:
      • Decomposition: separate the physical from the virtual
      • Global view: see all logical network state (port stats, drops, etc.) and tunnel health from the controller API
      • Synthetic traffic: insert packets at the vswitch as if the VM generated them
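The synthetic-traffic idea above can be sketched as follows. This is an illustrative helper, not an NSX API: the point is simply that a frame carrying the VM's own source MAC, handed to the vswitch port the VM is attached to, exercises the whole logical pipeline without touching the guest.

```python
import struct

ETH_TYPE_IPV4 = 0x0800

def synthetic_frame(vm_mac: bytes, dst_mac: bytes, payload: bytes) -> bytes:
    """Build an Ethernet frame that looks as if the VM itself sent it.

    A controller can inject such a frame at the VM's vswitch port to
    probe logical connectivity (ACLs, logical routing, tunneling)
    without involving the guest OS.
    """
    assert len(vm_mac) == 6 and len(dst_mac) == 6
    # Ethernet II framing: destination MAC, source MAC, EtherType, payload.
    return dst_mac + vm_mac + struct.pack("!H", ETH_TYPE_IPV4) + payload
```

Because the probe is indistinguishable from guest traffic inside the logical pipeline, any drop it hits is a drop real traffic would hit too.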
  • Slide 13: Hardware VTEPs
    • Benefits:
      • Fine-grained access: can pull a single physical port into the virtual world
      • Connect bare-metal workloads with higher performance/throughput
    • Same operational model (provisioning, monitoring) as virtual networks
    • Consistent provisioning and operations for the entire data center, regardless of workloads, over a simple IP fabric
  • Slide 14: Connecting the Physical to the Virtual (diagram: the controller cluster programs hypervisor vSwitches and a physical-workload VTEP over an OVSDB API; VXLAN tunnels carry the logical network, identified by VNI, over an IP underlay with no multicast required; a database on the VTEP holds both VM MACs and physical MACs)
  • Slide 15: Demo Topology
    • KVM Server 1: VM100 (192.168.1.110), VM101 (192.168.1.111), VM102 (192.168.1.112)
    • KVM Server 2: VM200 (192.168.1.120)
    • Arista 7150 hardware VTEP: a bare-metal server (192.168.1.200) attached over Ethernet, bridged via Ethernet-in-VXLAN (10.10.100.200)
    • NSX Manager and NSX Controller
  • Slide 16: (demo; no transcript)
  • Slide 17: Hardware VTEP Summary
    • Consistent treatment of physical and virtual workloads
      • Virtual networks created by API calls to the controller, as usual
      • API extended to treat a <physical port, VLAN> pair like a virtual port
    • Controller and VTEP share state via a database protocol
      • No multicast requirement for the underlay network
      • State sharing avoids the need to flood to learn MACs
      • OVSDB: the same protocol used for Open vSwitch configuration (draft-pfaff-ovsdb-proto-02.txt, submitted for RFC publication)
      • New schema specific to this usage (vtep.ovsschema)
    • Adds more options on the performance/functionality spectrum for gateways
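The state sharing described above happens as JSON-RPC transactions against the VTEP's database. A sketch of the kind of "transact" request a controller could send to program a hypervisor-learned MAC on the switch, so it never needs to flood to learn it. The table and column names follow the published hardware_vtep schema; the UUIDs are placeholders:

```python
import json

def ucast_mac_remote_insert(mac: str, ls_uuid: str, locator_uuid: str) -> str:
    """Build an OVSDB 'transact' RPC inserting a remote unicast MAC.

    Inserting a row into Ucast_Macs_Remote tells the hardware VTEP
    which VXLAN tunnel endpoint (the locator) to use for a MAC that
    lives behind a hypervisor, on a given logical switch.
    """
    request = {
        "method": "transact",
        "params": [
            "hardware_vtep",                     # database name
            {
                "op": "insert",
                "table": "Ucast_Macs_Remote",
                "row": {
                    "MAC": mac,
                    "logical_switch": ["uuid", ls_uuid],
                    "locator": ["uuid", locator_uuid],
                },
            },
        ],
        "id": 0,
    }
    return json.dumps(request)
```

In practice the same channel carries monitor requests in the other direction, so the controller also learns the MACs behind the switch's physical ports.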
  • Slide 18: Distributed Services
    • The NSX architecture allows many services to be implemented in a fully distributed way
      • Examples include firewalls (stateful or stateless), logical routing, and load balancing
    • Benefits:
      • Scale: no central bottleneck; apply as many vswitches to the task as there are hypervisors in the logical network
      • Optimal forwarding through the data center: no hairpinning
      • Ensure all packets get the appropriate services applied (cf. a centralized firewall)
  • Slide 19: Example: Distributed L3 Forwarding (diagram: logical view of a web VM and an app VM on separate logical switches joined by a logical router with an uplink to the world; physical view of four hypervisors, each running Open vSwitch)
  • Slide 20: Distributed L3 Forwarding, post-ARP (life of a packet: at hypervisor 1, the web VM emits a frame with Src MAC = Web, Dst MAC = Router, Src IP = Web, Dst IP = App; the local Open vSwitch performs the logical routing hop and tunnels the frame to hypervisor 3 with Src MAC = Router, Dst MAC = App, the IP addresses unchanged)
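The life-of-a-packet above amounts to each vswitch performing the logical router hop locally before tunneling. A minimal sketch of that hop, with illustrative names rather than any real NSX data structure:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src_mac: str
    dst_mac: str
    src_ip: str
    dst_ip: str
    ttl: int

def route_locally(pkt: Packet, router_mac: str, arp_cache: dict) -> Packet:
    """Perform the logical router hop at the source vswitch.

    Rewrite the MACs and decrement the TTL locally, then the frame is
    tunneled straight to the destination hypervisor: no hairpin
    through a central router.  arp_cache maps destination IP to the
    destination VM's MAC (hence "post-ARP").
    """
    pkt.src_mac = router_mac            # frame now appears to come from the logical router
    pkt.dst_mac = arp_cache[pkt.dst_ip] # resolved next-hop (the app VM's MAC)
    pkt.ttl -= 1                        # routing hop decrements TTL
    return pkt
```

Applied to the slide's example: a frame from Web addressed to the router MAC leaves the source vswitch already rewritten as Src MAC = Router, Dst MAC = App, with the IP addresses untouched.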
  • Slide 21: Connecting Virtualized Data Centers to the WAN (diagram: hypervisors running Open vSwitch and an NSX gateway attach to a PE router on the IP/MPLS core, which reaches customer sites; the service provider offers a "Cloud + VPN" service)
  • Slide 22: Option A: Map Logical Networks to VLANs (diagram: the NSX gateway maps each logical network to a VLAN, and each VLAN maps to a VRF, a customer-specific routing table, on the PE router facing the MPLS core and customer sites)
  • Slide 23: Option B: Map Logical Networks to MPLS Labels (diagram: the NSX gateway advertises logical network prefixes in MP-BGP with MPLS labels toward an ASBR on the MPLS core, treating the interface like an inter-AS link per RFC 4364; MPLS-labelled packets are mapped to and from logical networks; this forms the basis for federation of data centers)
  • Slide 24: What’s Next for Network Virtualization?
    • Changing the operational model of networking: snapshot, rollback, what-if testing, etc.
    • Federation / multi-DC use cases
    • Physical/virtual integration: more network control for physical endpoints; underlay visibility and troubleshooting
    • Advanced L4-L7 services
    • Higher-level policies driving networking
    • Application of formal methods (e.g., Header Space Analysis)
    • And many more…
  • Slide 25: Summary & The Road Ahead
    • Network virtualization extends the benefits of server virtualization to the whole data center
      • It’s all about agility
      • And scale (but the benefits appear even at modest scale)
    • Network virtualization brings the benefits of a programmatic operational model:
      • Provision complex applications and topologies in software, enabling increased automation
      • Decoupled from hardware
      • Evolve new capabilities at software speeds
    • Arguably the biggest shift in networking in a generation
  • Slide 26: Other VMware Activities Related to This Session
    • HOL: HOL-SDC-1303, VMware NSX Network Virtualization Platform
    • Breakout: NET5796, Virtualization and Cloud Concepts for Network Administrators
  • THANK YOU