Get a technical understanding of the components of NSX, including how switching, routing, firewalling, load balancing, and other services work within NSX.
VMworld 2015: VMware NSX Deep Dive
1. VMware NSX - Deep Dive
Jacob Rapp, VMware, Inc
NET5560
#NET5560
2. Disclaimer
• This presentation may contain product features that are currently under development.
• This overview of new technology represents no commitment from VMware to deliver these features in any generally available product.
• Features are subject to change and must not be included in contracts, purchase orders, or sales agreements of any kind.
• Technical feasibility and market demand will affect final delivery.
• Pricing and packaging for any new technologies or features discussed or presented have not been determined.
CONFIDENTIAL 2
3. What You’ve Done with NSX
• NSX Customers: 700+
• Production Deployments: 100+ (adding 25–50 per quarter)
• Organizations that have invested US$1M+ in NSX: 65+
What You’re Doing Next
• EXPANDED SECURITY – new security partners, integrations, projects, and applications of NSX
• DEEPER INTEGRATION – new infrastructure and operations partners, integrations, and frameworks for IT organizations
• APPLICATION CONTINUITY – new functionality to scale deployments across vCenter instances, with the ability to:
– Pool resources from multiple data centers
– Recover from disasters faster
– Deploy a hybrid cloud architecture
• NSX 6.2 contains over 20 new features
• Tested against over 1,000 new scenarios
4. Session Objectives
• Provide an in-depth understanding of the NSX architecture and components
• Understand how networking functions and services are implemented within the NSX platform
• Analyze key workflows for configuring virtual network & security services
• Provide pointers to reference design sessions and guides
5. NSX Provides a Faithful Reproduction of Network & Security Services in Software
• Management: APIs, UI; policies, groups, tags
• Switching, routing, firewalling, load balancing, VPN
• Connectivity to physical networks
• Data security, activity monitoring
6. Creating Sophisticated Application Topologies
Security Groups and Security Policies, applied to VMs and physical workloads on top of logical switching, routing, firewall, and load balancing:
• Web – “Standard Web”: Firewall – allow inbound HTTP/S, allow outbound ANY; IPS – prevent DoS attacks, enforce acceptable use
• App – “Standard App”: Firewall – allow inbound TCP 8443, allow outbound SQL
• Database – “Standard Database”: Firewall – allow inbound SQL; Vulnerability Management – weekly scan
• Default: Firewall – access shared services (DNS, AD); Anti-Virus – scan daily
16. NSX Logical Switching
Challenges:
• Per-application / multi-tenant segmentation
• VM mobility requires L2 everywhere
• Large L2 physical network sprawl – STP issues
• Hardware memory (MAC, FIB) table limits
Benefits:
• Scalable multi-tenancy across the data center
• Enables L2 over an L3 infrastructure
• Overlay-based, with VXLAN, etc.
• Logical switches span physical hosts and network switches
(Diagram: VMware NSX providing Logical Switch 1, Logical Switch 2, and Logical Switch 3)
17. Logical View: VMs in a Single Logical Switch
• Web LS (172.16.10.0/24): VM1 – 172.16.10.11, VM2 – 172.16.10.12, VM3 – 172.16.10.13
• App LS (172.16.20.0/24): VM4 – 172.16.20.11, VM5 – 172.16.20.12
18. Physical View: VMs in a Single Logical Switch
• Logical Switch 5001 spans a vSphere Distributed Switch: VM1 – 172.16.10.11, VM2 – 172.16.10.12, VM3 – 172.16.10.13
• Host VTEPs 192.168.150.51 and 192.168.150.52 sit on Transport Subnet A (192.168.150.0/24); VTEP 192.168.250.51 is reached across the physical network
19. Traffic Flow on a VXLAN-Backed VDS
• In this setup, VM1 and VM2 are on different hosts but belong to the same logical switch
• When these VMs communicate, a VXLAN overlay is established between the two hosts
(Diagram: Host A and Host B each run a vSphere Distributed Switch with a dvUplink-PG, a dvPG-VTEP, and a VTEP; Logical SW A spans both hosts via the VXLAN overlay across the IP fabric)
20. Traffic Flow on a VXLAN-Backed VDS
• Assume VM1 sends some traffic to VM2:
1. VM1 sends an L2 frame to its local VTEP
2. The VTEP adds VXLAN, UDP, and IP headers
3. The physical transport network forwards it as a regular IP packet
4. The destination hypervisor’s VTEP de-encapsulates the frame
5. The L2 frame is delivered to VM2
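The five encapsulation steps above can be sketched in a few lines. This is a conceptual illustration, not NSX code: the outer Ethernet/IP headers are zero-filled placeholders and the source port is arbitrary, but the 8-byte VXLAN header layout and UDP destination port 4789 follow RFC 7348, and the arithmetic shows where the ~50 bytes of overhead (and hence the larger-MTU requirement) come from.

```python
import struct

VXLAN_PORT = 4789          # IANA-assigned VXLAN UDP port (RFC 7348)
VXLAN_FLAGS_VNI_VALID = 0x08

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags byte, 24-bit VNI, reserved fields."""
    assert 0 <= vni < 2 ** 24, "VNI is 24 bits -> ~16M logical networks"
    return struct.pack("!B3xI", VXLAN_FLAGS_VNI_VALID, vni << 8)

def encapsulate(inner_frame: bytes, vni: int) -> bytes:
    # Step 2 on the slide: the VTEP prepends VXLAN, UDP, and IP headers.
    # Outer Ethernet/IP are shown as fixed-size placeholders for clarity;
    # in reality they carry the VTEP MACs/IPs (e.g. 192.168.150.51).
    outer_eth = b"\x00" * 14   # dst MAC, src MAC, ethertype
    outer_ip = b"\x00" * 20    # src/dst VTEP IPs
    outer_udp = struct.pack("!HHHH", 54321, VXLAN_PORT,
                            8 + 8 + len(inner_frame), 0)
    return outer_eth + outer_ip + outer_udp + vxlan_header(vni) + inner_frame

frame = b"\x00" * 1500                 # inner L2 frame from VM1
packet = encapsulate(frame, vni=5001)  # logical switch 5001
print(len(packet) - len(frame))        # 50 bytes of overhead -> MTU 1600
```

The 50 bytes (14 outer Ethernet + 20 IP + 8 UDP + 8 VXLAN) are why every replication mode requires an MTU of 1600 bytes on the transport network.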
21. NSX for vSphere VXLAN Replication Modes
• NSX for vSphere provides three modes of traffic replication (two are Controller-based, one is data-plane-based)
• Unicast Mode – all replication occurs using unicast
• Hybrid Mode – local replication is offloaded to the physical network, while remote replication occurs via unicast
• Multicast Mode – requires IGMP for a Layer 2 topology and multicast routing for an L3 topology
• All modes require an MTU of 1600 bytes
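As a conceptual sketch (not NSX code), the difference between the modes comes down to which unicast copies of a broadcast/unknown-unicast/multicast (BUM) frame the source VTEP must generate itself; the segment names, proxy election, and data structures below are illustrative.

```python
# Which unicast copies a source VTEP sends for a BUM frame, per mode.
# VTEPs are grouped by transport segment; one UTEP/MTEP proxy is elected
# per remote segment (per VNI), as described in the speaker notes.

def bum_targets(mode, source, vteps_by_segment, proxy_by_segment):
    local_seg = next(s for s, v in vteps_by_segment.items() if source in v)
    targets = []
    if mode in ("unicast", "hybrid"):
        if mode == "unicast":
            # Head-end replication to every other VTEP in the local segment
            targets += [v for v in vteps_by_segment[local_seg] if v != source]
        # (hybrid: local replication is offloaded to the physical network
        #  via L2 multicast/IGMP, so no local unicast copies are needed)
        # One unicast copy per remote segment, to its elected UTEP/MTEP
        targets += [proxy_by_segment[s] for s in vteps_by_segment
                    if s != local_seg]
    # multicast mode: a single multicast copy; the fabric replicates it
    return targets

segments = {"A": ["192.168.150.51", "192.168.150.52"], "B": ["192.168.250.51"]}
proxies = {"A": "192.168.150.52", "B": "192.168.250.51"}
print(bum_targets("unicast", "192.168.150.51", segments, proxies))
# ['192.168.150.52', '192.168.250.51']
```

Unicast mode trades physical-network simplicity for replication overhead that grows with the environment; hybrid mode shifts local replication onto the switches.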
24. NSX Routing: Distributed, Feature-Rich
Scalable routing – simplifying multi-tenancy
Challenges:
• Physical infrastructure scale challenges – routing scale
• VM mobility is a challenge
• Multi-tenant routing complexity
• Traffic hair-pins
Benefits:
• Distributed routing in the hypervisor
• Dynamic, API-based configuration
• Full-featured – OSPF, BGP, IS-IS
• Logical router per tenant
• Routing peering with physical switches
(Diagram: per-tenant logical routers for Tenants A, B, and C, each serving multiple L2 segments, managed via a CMP)
25. Logical View: VMs in a Single Logical Switch
• Web LS (172.16.10.0/24): VM1 – 172.16.10.11, VM2 – 172.16.10.12, VM3 – 172.16.10.13
• App LS (172.16.20.0/24): VM4 – 172.16.20.11, VM5 – 172.16.20.12
26. Logical View: VMs with Distributed Routing
• A Distributed Logical Router Service connects both tiers, with an uplink (192.168.10.1) on 192.168.10.0/29
• Web LS (172.16.10.0/24, gateway 172.16.10.1): VM1 – 172.16.10.11, VM2 – 172.16.10.12, VM3 – 172.16.10.13
• App LS (172.16.20.0/24, gateway 172.16.20.1): VM4 – 172.16.20.11, VM5 – 172.16.20.12
27. Physical View: VMs in a Single Logical Switch
• Logical Switch 5001 spans a vSphere Distributed Switch: VM1 – 172.16.10.11, VM2 – 172.16.10.12, VM3 – 172.16.10.13
• Host VTEPs 192.168.150.51 and 192.168.150.52 sit on Transport Subnet A (192.168.150.0/24); VTEP 192.168.250.51 is reached across the physical network
28. Physical View: Logical Routing
• Logical Switch 5001 (VM1, VM2, VM3) and Logical Switch 5002 (VM4, VM5) span a vSphere Distributed Switch
• Host VTEPs 192.168.150.51 and 192.168.150.52 sit on Transport Subnet A (192.168.150.0/24); 192.168.250.51 sits on Transport Subnet B (192.168.250.0/24)
• The Controller in the Management Cluster performs L3 control-plane programming; the hosts forward in the data plane
29. NSX Logical Routing: Components Interaction
1. A dynamic routing protocol is configured on the logical router instance
2. The Controller pushes the new logical router configuration, including LIFs, to the ESXi hosts
3. OSPF/BGP peering is established between the NSX Edge and the logical router control VM
4. Routes learned from the NSX Edge are pushed to the Controller for distribution
5. The Controller sends the route updates to all ESXi hosts
6. Routing kernel modules on the hosts handle the data-path traffic
(Diagram: an NSX Edge acting as the next-hop router toward the external network; a DLR serving 172.16.10.0/24, 172.16.20.0/24, and 172.16.30.0/24; the DLR Control VM peering with the Edge over OSPF/BGP on 192.168.10.1–192.168.10.3; NSX Manager and the Controller Cluster on the control path, separate from the data path)
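The six-step interaction above reduces to a small control-plane model: the Control VM learns routes from the Edge, and the Controller fans them out to every host's kernel routing table. This is a toy sketch with illustrative names, not NSX code.

```python
# Conceptual sketch of DLR route distribution: Control VM -> Controller
# Cluster -> per-host kernel routing tables (the data path, step 6).

class Host:
    def __init__(self):
        self.kernel_routes = {}          # consulted by the routing kernel module
    def install(self, routes):
        self.kernel_routes.update(routes)

class ControllerCluster:
    def __init__(self, hosts):
        self.hosts = hosts
    def distribute(self, routes):        # steps 4-5: push updates to all hosts
        for host in self.hosts:
            host.install(routes)

hosts = [Host(), Host(), Host()]
controller = ControllerCluster(hosts)
# Step 3: the Control VM peers with the Edge over OSPF/BGP and learns routes;
# the default route via the Edge (192.168.10.1) stands in for that here.
learned_from_edge = {"0.0.0.0/0": "192.168.10.1"}
controller.distribute(learned_from_edge)
print(all(h.kernel_routes["0.0.0.0/0"] == "192.168.10.1" for h in hosts))  # True
```

Note that neither the Control VM nor the Controller sits in the data path; they only populate the tables the kernel modules consult.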
30. Distributed East-West Routing Traffic Flow: Different Hosts
(Diagram: VM1 on VXLAN 5001 and VM2 on VXLAN 5002 run on different hosts connected by the VXLAN transport network. VM1 sends a frame to its default gateway, the DLR vMAC (DA: vMAC, SA: MAC1). The DLR kernel module on Host 1 routes between LIF1 and LIF2, resolves the destination via the LIF2 ARP table, and rewrites the frame (DA: MAC2, SA: vMAC). The frame is then VXLAN-encapsulated on VNI 5002, forwarded across the transport network, de-encapsulated on Host 2, and delivered to VM2.)
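The per-host routing hop can be sketched as follows. This is an illustrative model, not NSX code: the vMAC value, LIF names, ARP entries, and dict-based frame representation are all simplified stand-ins. The point is that the route lookup and MAC rewrite happen in the source host's kernel, before VXLAN encapsulation.

```python
import ipaddress

class DLR:
    """Toy model of the distributed router instance present on every host."""
    VMAC = "02:50:56:56:44:52"           # same gateway vMAC on every host

    def __init__(self):
        # LIF -> (subnet, egress VNI); values follow the slide's example
        self.lifs = {
            "LIF1": (ipaddress.ip_network("172.16.10.0/24"), 5001),
            "LIF2": (ipaddress.ip_network("172.16.20.0/24"), 5002),
        }
        self.arp = {"172.16.20.10": "MAC2"}   # simplified per-LIF ARP table

    def route(self, frame):
        if frame["dst_mac"] != self.VMAC:     # not addressed to the gateway
            return frame                      # plain L2 switching instead
        dst = ipaddress.ip_address(frame["dst_ip"])
        # Route lookup: find the egress LIF whose subnet contains dst
        lif, (_, vni) = next((l, v) for l, v in self.lifs.items()
                             if dst in v[0])
        # MAC rewrite: source becomes the router vMAC, dest comes from ARP
        return {**frame, "src_mac": self.VMAC,
                "dst_mac": self.arp[frame["dst_ip"]],
                "vni": vni, "egress_lif": lif}

frame = {"src_mac": "MAC1", "dst_mac": DLR.VMAC,
         "src_ip": "172.16.10.10", "dst_ip": "172.16.20.10", "vni": 5001}
out = DLR().route(frame)
print(out["dst_mac"], out["vni"])  # MAC2 5002
```

The rewritten frame then enters the normal VXLAN path on VNI 5002, which is why east-west traffic never hairpins through a central router.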
32. What Have We Seen Thus Far
1. NSX architecture
2. An on-demand application deployment
3. Logical switching configuration
4. Understanding logical networks
5. Logical routing and possible designs
34. NSX Distributed Firewalling
Physical security model – challenges:
• Centralized firewall model
• Static configuration
• IP-address-based rules
• 40 Gbps per appliance
• Lack of visibility into encapsulated traffic
Distributed firewalling – benefits:
• Distributed at the hypervisor level
• Dynamic, API-based configuration
• VM-name, VC-object, and identity-based rules
• Line rate, ~20 Gbps per host
• Full visibility into encapsulated traffic
(See also: NSX DFW Deep Dive, SEC5589)
35. Distributed Firewall Features
Capabilities:
• Firewall rules are enforced at the vNIC level
• Policy is independent of location (L2 or L3 adjacency)
• State persists across vMotion
• Enforcement is based on VM attributes such as tags, VM names, logical switches, etc.
(Diagram: VMs on Web-LS1 and App-LS1 spread across hosts 192.168.150.51, 192.168.150.52, and 192.168.250.51, with a Management Cluster)
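A minimal sketch of attribute-based enforcement (illustrative names and structures, not the DFW implementation): a rule expressed against tags, VM names, or a logical switch is expanded into entries on the matching vNICs, which is why the policy travels with the VM rather than with an IP or a host.

```python
# Toy inventory; fields mirror the attributes the slide mentions
vms = [
    {"name": "web-01", "tags": {"web"}, "ls": "Web-LS1", "vnic": "vnic-1"},
    {"name": "web-02", "tags": {"web"}, "ls": "Web-LS1", "vnic": "vnic-2"},
    {"name": "app-01", "tags": {"app"}, "ls": "App-LS1", "vnic": "vnic-3"},
]

def members(criteria):
    """VMs matching any of: a tag, a name prefix, or a logical switch."""
    return [vm for vm in vms
            if criteria.get("tag") in vm["tags"]
            or vm["name"].startswith(criteria.get("name_prefix", "\0"))
            or vm["ls"] == criteria.get("logical_switch")]

def enforce(rule):
    """Expand one logical rule into per-vNIC entries."""
    return [(vm["vnic"], rule["action"], rule["service"])
            for vm in members(rule["applied_to"])]

rule = {"applied_to": {"tag": "web"}, "service": "HTTPS", "action": "allow"}
print(enforce(rule))
# [('vnic-1', 'allow', 'HTTPS'), ('vnic-2', 'allow', 'HTTPS')]
```

Because the entries are keyed to vNICs rather than IPs, re-evaluating membership on vMotion or re-tagging is enough to keep enforcement correct.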
36. Distributed Firewall Rules
• Rules can be based on VM names
(Diagram: the same topology as the previous slide, with name-based rules applied to the VMs)
38. Example: Building a Web DMZ
Rule table (Source / Destination / Service / Policy):
• Any → Web-Tier LS / HTTPS / Allow
• Web-VM1 → Web-VM2 / any / Block
• Any → Web-Tier LS / any / Block
• Web-Tier LS → App-Tier LS / TCP 8443 / Allow
• Any → App-Tier LS / any / Block
Result: client-to-web HTTPS and web-to-app TCP/8443 traffic are allowed; all other traffic to the Web and App tiers is blocked
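The table above can be read as a top-down, first-match rule chain. The sketch below (illustrative field names, not the DFW rule schema) walks the same five rules; the broad Block rules act as tier-level defaults beneath the narrower Allow rules.

```python
# First-match evaluation of the DMZ rule table (conceptual sketch)
RULES = [
    ("any",         "Web-Tier LS", "HTTPS",    "allow"),
    ("Web-VM1",     "Web-VM2",     "any",      "block"),
    ("any",         "Web-Tier LS", "any",      "block"),
    ("Web-Tier LS", "App-Tier LS", "TCP 8443", "allow"),
    ("any",         "App-Tier LS", "any",      "block"),
]

def decide(src, dst, svc):
    for r_src, r_dst, r_svc, action in RULES:
        if (r_src in ("any", src) and r_dst in ("any", dst)
                and r_svc in ("any", svc)):
            return action
    return "allow"   # illustrative fall-through; real deployments pick a default action

print(decide("client", "Web-Tier LS", "HTTPS"))          # allow
print(decide("Web-VM1", "Web-VM2", "HTTPS"))             # block (intra-tier)
print(decide("Web-Tier LS", "App-Tier LS", "TCP 8443"))  # allow
print(decide("client", "App-Tier LS", "TCP 8443"))       # block
```

Rule order matters: moving the blanket Web-Tier Block above the HTTPS Allow would cut off client traffic entirely.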
39. NSX Distributed Firewall Packet Walk
DFW, Filtering Module, and Traffic Redirection Module
(Diagram: a Guest VM on the VDS; the DFW Filtering Module sits at slot 2 and the Traffic Redirection Module at slot 4 of the vNIC I/O chain; redirected traffic is steered to a Partner Services VM, which is managed through vCenter and the Partner Console, with the external network beyond the VDS)
41. Features Summary: NSX Edge Gateway Services
• Firewall: rule configuration with IPs, port ranges, grouping objects, and VC containers
• DHCP: configuration of IP pools, gateways, DNS servers, and search domains
• Site-to-Site VPN: IPsec site-to-site VPN between two Edges or other vendors’ VPN terminators
• L2VPN: stretch your Layer 2 across data centers
• SSL VPN: allows remote users to access the private networks behind the Edge gateway
• Load Balancing: configure virtual servers and backend pools using IP addresses or VC objects
• Network Address Translation: source and destination NAT capabilities
• High Availability: active-standby HA capability that works well with vSphere HA
• Routing: static as well as dynamic routing protocol support (OSPF, BGP, IS-IS)
• DNS/Syslog: allows configuring DNS relay and remote syslog servers
42. NSX Edge Integrated Network Services
Overview:
• Integrated L3–L7 services: firewall, load balancer, VPN, routing/NAT, DHCP/DNS relay, DDI
• Virtual appliance model provides rapid deployment and scale-out
Benefits:
• Real-time service instantiation
• Support for dynamic service differentiation per tenant/application
• Uses x86 compute capacity
43. NSX Load Balancing
Per-tenant application availability model
Challenges:
• Application mobility
• Multi-tenancy
• Configuration complexity – manual deployment model
Benefits:
• On-demand load balancer service
• Simplified deployment model for applications – one-arm or inline
• Layer 7, SSL, …
(See also: NSX Load Balancing Deep Dive, NET5612)
44. NSX L2VPN
Use cases:
• Brownfield NSX deployments (VLAN → VXLAN)
• Data center migrations (P2V, V2V)
• Disaster recovery & testing
• Cloud bursting & onboarding
Best fit for L2 extensions with:
• Long distance / high latency
• Multiple management domains
• NSX present on only a single site
• Max 1500-byte MTU on the WAN
Highlights:
• SSL-secured L2 extension over any IP network
• Independent of vCenter Server boundaries
• Can co-exist with an existing default gateway
• No specialized hardware required
• Supports up to 750 Mb/s per Edge
• AES-NI used if available
(Diagram: L2VPN over the Internet/WAN between an enterprise site and a public/hybrid cloud)
(See also: Connecting Remote Sites with NSX, NET5352)
46. VMware NSX – Summary and Takeaways
• Faithful reproduction of L2–L7 network & security services
• Services designed for scale-out
• Central API for provisioning & monitoring
• All NSX components designed for resiliency
• Extensive third-party ecosystem for the NSX platform
47. NSX Ecosystem
• Service insertion – “Leverage full automation and service insertion for NSX”
• NSX-aware – “Leverage the NSX API and metadata to bring a solution”
• Co-existence – “Let’s meet in the network”:
– Works with any switching fabric
– Works with the routing ecosystem using traditional protocols
– Existing physical firewalls can provide security sitting in front of the NSX Edge at Layer 3
– Existing physical/virtual ADC services can connect to NSX at Layer 2 or Layer 3
48. Network Virtualization Next Steps with VMware NSX
• virtualizeyournetwork.com – the online resource for the people, teams, and organizations adopting network virtualization
• communities.vmware.com – connect and engage with network virtualization experts and fellow VMware NSX users
• vmware.com/go/NVtraining – build knowledge and expertise for the next step in your career
• labs.hol.vmware.com – test drive the capabilities of VMware NSX
51. VMware NSX - Deep Dive
Jacob Rapp, VMware, Inc
NET5560
#NET5560
Editor's Notes
Explain each module in a little detail, showing the value of each feature
Port Security : Provides DHCP snooping used by VXLAN module; Port Security – IP spoof guard
VXLAN – VTEP ; MTEP – Multicast replication; ARP Proxy
Distributed Router – East – West traffic between VXLAN vWires had to go through Edge gateway
Distributed Firewall – Better performance
Message Bus provides a new communication channel that allows direct communication from NSX manager to the host
User World Agent – Communicates with the controller on one side and the kernel modules on the other
TBD: Properties & Benefits (VM, separation of MP/CP/DP, scale-out, no multicast) - Functions: overlay, L2, L3 dataplane programming
Provides control plane to distribute Logical Switching and Logical Routing network information to ESXi hosts
NSX Controllers are clustered for scale out and high availability
Network information is sliced across nodes in a Controller Cluster
Removes Dependencies on Multicast from Physical Networks
Provides suppression of ARP broadcast traffic in logical networks
Functionality
NSX for vSphere centralized management plane
1:1 mapping between an NSX Manager and vCenter Server
Provides the management UI and API for NSX
Configures Controller Cluster
Generates certificates to secure control plane communications
Installs Logical Switching, Distributed Routing and Firewall kernel modules on ESXi hosts
Operationally:
Deploys NSX Controller and NSX Edge Virtual Appliances (OVF)
vSphere Web Client Plugin
Host configuration includes Distributed Firewall and NSX Edges
NSX Control Plane communication occurs over the management network.
The Control plane is protected by:
Certificate based authentication
SSL
NSX Manager generates self-signed certificates for each ESXi Hosts and Controllers
These certificates are pushed to the controller and ESXi hosts over secure channels
Mutual authentication occurs by verifying these certificates
Ethernet in IP overlay network
Entire L2 frame encapsulated in UDP
50+ bytes of overhead
24 bit VXLAN Network Identifier
16 M logical networks
VXLAN can cross Layer 3 network boundaries
Overlay between ESXi hosts
VMs do NOT see VXLAN ID
VTEP (VXLAN Tunnel End Point)
VMkernel interface which serves as the endpoint for encapsulation/de-encapsulation of VXLAN traffic
Technology submitted to IETF for standardization
With Cisco, Citrix, Red Hat, Broadcom, Arista and Others
VXLAN traffic uses a vmknic which provides VXLAN Virtual Tunnel End Point (VTEP) functionality
A single dvPortGroup per VDS is created for all VTEPs
A logical switch is a L2 broadcast domain implemented using VXLAN
A dvPortGroup is created for each logical switch
Provides local switching & isolation
VXLAN logical switches can also span multiple VDS
Support for multiple VXLAN vmknics per host to provide additional options for uplink load balancing
DSCP & COS Tag from internal frame copied to external VXLAN encapsulated header
Support for Guest VLAN tagging
vMotion callback
Dedicated TCP/IP stack for VXLAN
Ready for VXLAN hardware offloading to network adapters
A highly available and secure control plane to distribute VXLAN network information to ESXi hosts
Removes dependency on multicast routing/PIM in the physical network
Suppress broadcast traffic in VXLAN networks
ARP Directory Service & NSX Controller
In Unicast or Hybrid mode, each ESXi host will select one VTEP in every remote segment from its VTEP mapping table as a proxy. This is per VNI (balances load across proxy VTEPs)
In Unicast Mode this proxy is called a UTEP – Unicast Tunnel End Point
In Hybrid Mode it is an MTEP – Multicast Tunnel End Point
This list of UTEPs/MTEPs is then synced to each VTEP
If a UTEP or MTEP leaves a VNI the host will be updated by the Controller and then select a new proxy within the segment
Optimized Replication – VTEPs perform software replication of BUM traffic to local VTEPs and one UTEP/MTEP per remote segment
The VXLAN header format has been updated in NSX for vSphere
A new REPLICATE_LOCALLY bit is used in the VXLAN header for Unicast and Hybrid Modes
When a UTEP or MTEP receives a unicast frame with the REPLICATE_LOCALLY bit set, it is responsible for re-injecting the frame into the local transport network
The behavior of the proxy depends on the traffic replication mode
UNICAST MODE
Source VTEP role
Replicates an encapsulated frame to each remote UTEP via unicast
Also replicates the frame to each active VTEP in the local segment
UTEP role
Delivers a copy of the de-encapsulated inner frame to local VMs
Sends the replicated frame to all VTEPs in the local segment
Unicast Mode has minimal dependencies on physical network, but the overhead increases as environment scales
Configurable per VNI during logical switch provisioning
Multicast Addresses are not required in Unicast Mode
Hybrid Mode
Source VTEP role
Replicates an encapsulated frame to each remote MTEP via unicast
Also replicates the frame locally via multicast
MTEP role
Delivers a copy of the de-encapsulated inner frame to its local VMs
Sends the replicated frame to the local segment using the multicast address assigned to the VNI
Hybrid Mode leverages the physical network to reduce the overhead of traffic replication. Overhead increases as VXLAN segments are added
Again configurable per VNI
Multicast Addresses are required in Hybrid Mode
To reduce dependencies on the physical network, ESXi hosts will now send IGMP joins & reports
An IGMP Querier on the physical network per transport network is still recommended
Multicast Mode
Source VTEP role
Replicate the VXLAN frame locally via multicast
L2 multicast will deliver to all local destination VTEPs
No UTEP or MTEPs required
Multicast routing will handle delivery to all remote segments
Multicast Mode is entirely reliant on multicast support in the physical network for local and remote traffic replication
Configurable per VNI
Multicast Addresses are required
Logical Interfaces (LIFs) on a Distributed Logical Router Instance
There are internal LIFs and uplink LIFs
VM Default Gateway traffic is handled by LIFs on the appropriate network
LIFs are distributed across every hypervisor prepared for NSX
Up to 1000 LIFs can be configured per Distributed Logical Router Instance
8 Uplink
992 Internal
An ARP table is maintained per LIF
vMAC is the MAC address of an internal LIF
vMAC is same across all hypervisors and it is never seen by the physical network (only by VMs)
pMAC is the MAC address of the dvUplink on each host through which traffic flows to the physical network
The Distributed Logical Router Control Plane is provided by a per instance DLR Control VM and the NSX Controller
Supports Dynamic Routing Protocols
OSPF
BGP
Communicates with NSX Manager and Controller Cluster
NSX Manager sends LIF information to the Control VM and Controller Cluster
Control VM sends Routing updates to the Controller Cluster
DLR Control VM and NSX Controller are not in the data path
High availability supported through Active-Standby configuration
VMware NSX provides a faithful reproduction of Network & Security Services in Software
VXLAN is the overlay technology empowering those virtual networking capabilities
Logical Routing allows for communication between virtual workloads belonging to separate IP subnets
Distributed Routing optimizes traffic flows for East-West communication inside the Data Center
Centralized Routing handles on-ramp/off-ramp communication with the external physical network
Multiple logical topologies can be built combining NSX DLR and Edge functional components
Each logical routing components can be deployed redundantly to guarantee a fully resilient design
The enterprise topology optimizes as much E-W traffic as possible by adding as many LIFs as possible to the one DLR instance.
Unless L2 spans across all clusters, it is still common to use an NSX Edge gateway even when it aggregates the one Distributed Logical Router instance, so that VXLAN LIFs are used.
While we’re focusing on Firewalling here, note that NSX is the security platform offering Antivirus, Intrusion Prevention, Vulnerability Management, Identity and Access Management, Security Policy Management, DLP File Integrity Monitoring and more…
Traffic Redirection rules are configured using Service Composer (within Security Policy definition) or using Partner Security Services (DFW UI - NSX 6.1).
Filtering Module is an extension of DFW. Filtering Module rules are configured within Security Policy definition (Service Composer menu).
Traffic Redirection Module define which traffic are steered to Partner Services VM:
Using Service Composer:
ANY -> SG, SG1 -> SG2, SG -> ANY
ANY, TCP/UDP destination port, TCP/UDP source port
Predefined Services application & protocols (NSX 6.1)
Using Partner Security Services (under DFW UI)
Challenges
Applications are not mobile as they are tied to a physical LB instance
Multi-tenancy?
Configuration automation