NSX-T Design for Small to Mid-Sized Data Centers
February 2020
Confidential │ ©2019 VMware, Inc.
Agenda
The NSX-T Platform: Use Cases, Architecture and Terminology
NSX-T in the Data Center (Large, Mid-Size or Small)
Design Considerations: Management, Compute and Edge
Edge Node Form Factor and Placement
vSAN Considerations
Reference Architecture
Growth Options
Summary and Q&A
NSX Evolution
[Diagram: NSX has evolved from ESX/vSphere in the data center to span branch, DC, edge/IoT, and public and private clouds.]
VMware NSX-T Solution Scope
Solution & integrated stack: Networking, Security, Visibility, Automation
• NSX DC – On-Prem, NSX Cloud (AWS & Azure), VMC, VMC on Dell Cloud, IBM
• Integration with the container ecosystem – PKS, PAS, DIY K8s & OpenShift
• Intrinsic security – E-W, multi-tenant, application-aware & perimeter
• Heterogeneous endpoints – containers, VMs (ESX & KVM), bare metal
NSX-T Networking and Security Services
Switching, Routing, Firewalling, Load Balancing, VPN, NAT, DHCP, Metadata Proxy, DNS Forwarder, Connectivity to physical
Small and Medium DataCenter Priorities
Design Constraints

Physical Footprint
• Datacenter/computer room/co-lo floorspace is at a premium
• Dedicated appliances take up power, produce heat, and take up space
• The hosting location is usually remote from support personnel

Complexity
• Maximize out-of-the-box functionality
• A single pane of glass is important
• IT leans towards jack of all trades, master of none
• Limited access to sandbox/development environments
• Vendor management and integration challenges

Cost
• Licensing is based on sockets: higher density = lower cost
• Overhead can't be hidden at this scale
• Pay for admission, but it's all you can eat after that: services are included in the license cost, so more services = more ROI
The Smaller the Datacenter, the Tighter the Window
Overhead is the Enemy

Resource | Example | Totals
# of VMs required | 200 | 25 VMs per host
CPU oversubscription | 3:1 |
RAM oversubscription | 2:1 |
Avg vCPU per VM | 6 vCPU | 1,200 vCPUs
Avg RAM per VM | 48 GB | 9,600 GB of RAM
vCPUs per host | 2 sockets, 28 cores each | 168 vCPUs (at 3:1)
vRAM per host | 768 GB @ 2:1 | 1,536 GB
TB per host | 12 TB usable | 96 TB
RU per host | 1 RU |
# of hosts | 8 + 1 + 1 |

Separate Management/Edge and Compute clusters:
• 5 hosts for Mgmt & Edge (3 + 1 + 1): N+1+1 for availability with FTT=2, 5 sockets of vSAN, vSphere, NSX
• 10 hosts for Prod (8 + 1 + 1): N+1+1 for availability with FTT=2, 20 sockets of vSAN, vSphere, NSX
• 37.5% overhead for availability + 19% overhead for management

Single cluster (Management/Edge + Compute):
• 10 hosts (8 + 1 + 1): N+1+1 for availability with FTT=2, 25% overhead for availability
• 20 sockets of vSAN, vSphere, NSX
(The host-count arithmetic is sketched below.)
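A quick sketch of the arithmetic behind the single-cluster option, using the example numbers from the table above (the variable names are purely illustrative):

```python
import math

# Example workload from the table above.
vms, vcpu_per_vm, ram_per_vm_gb = 200, 6, 48
cores_per_host = 2 * 28                # 2 sockets x 28 cores
cpu_ratio, ram_ratio = 3, 2            # CPU 3:1, RAM 2:1 oversubscription
host_ram_gb = 768

vcpus_needed = vms * vcpu_per_vm       # 1,200 vCPUs
ram_needed_gb = vms * ram_per_vm_gb    # 9,600 GB

vcpus_per_host = cores_per_host * cpu_ratio   # 168 vCPUs at 3:1
vram_per_host_gb = host_ram_gb * ram_ratio    # 1,536 GB at 2:1

hosts_for_cpu = math.ceil(vcpus_needed / vcpus_per_host)     # 8
hosts_for_ram = math.ceil(ram_needed_gb / vram_per_host_gb)  # 7
n = max(hosts_for_cpu, hosts_for_ram)                        # CPU-bound: 8
total_hosts = n + 1 + 1   # N + 1 (maintenance) + 1 (failure)
print(f"N={n}, total hosts={total_hosts}, "
      f"availability overhead={2 / n:.0%}")  # N=8, 10 hosts, 25%
```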
NSX-T Design for Small to Mid-Sized Data Centers
What's an SMB Data Center?

Small DataCenter
• Hosts: 4-10
• vSphere clusters: 1-2
• Workload VMs: 100-250
• N-S bandwidth: <10G

Mid-Size DataCenter
• Hosts: 10-100
• vSphere clusters: 4-10, up to 24 hosts per cluster
• Workload VMs: 1,000-2,500
• N-S bandwidth: 10G or more

Large DataCenter
• Hosts: 100+
• vSphere clusters: >10
• Workload VMs: 1,000s
• N-S bandwidth: multiples of 10G (depends on use case)
Architecture & Terminology
NSX-T Requirements on Physical Infrastructure
Works with any network topology

The only requirements on the physical infrastructure are:
• IP connectivity
• 1700-byte MTU (minimum), 9K recommended (see the MTU budget sketch below)
Mobility with any topology:
• L2 and L3
• Spine/leaf
Any vendor:
• Any single vendor
• Mix of vendors
• Mix of device generations
[Diagram: Compute Host 1 and Compute Host 2, each with an N-VDS and a TEP (P1 uplink), connected through a physical fabric of any vendor/any topology.]
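Why 1700 bytes: Geneve encapsulation adds outer headers on top of the guest frame. A rough budget, assuming IPv4 outer headers; the header sizes are standard protocol values, while the option allowance is an assumption for illustration:

```python
# Rough Geneve MTU budget (sizes in bytes, IPv4 outer headers assumed).
guest_mtu = 1500          # standard workload frame
outer_eth = 14            # outer Ethernet header
dot1q = 4                 # optional 802.1Q tag on the transport VLAN
outer_ipv4 = 20
outer_udp = 8
geneve_base = 8
geneve_options = 100      # allowance for Geneve options (assumption)

required = (guest_mtu + outer_eth + dot1q + outer_ipv4
            + outer_udp + geneve_base + geneve_options)
print(required)  # 1654 -> hence the 1700-byte minimum, 9000 preferred
```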
NSX-T Architecture

Management Plane – NSX Manager
• UI/API entry point; stores the desired configuration
• Interacts with other management components: Cloud Service Manager, NSX Container Plugin, vCenter(s), and CMP/automation tooling (Ansible, Terraform, Python, Go, etc.)
• Deployed as a cluster of 3 NSX Manager appliance VMs (scale-out + redundancy)

Control Plane – NSX Controller
• Maintains and propagates dynamic state within the system
• Disseminates topology information reported by the data plane elements

Data Plane – Transport Nodes
• Host workloads (VMs, containers) and services; switch data plane traffic
• Private cloud: ESXi hosts (N-VDS), KVM hosts (N-VDS), NSX Edge, bare metal servers with the NSX agent
• Public cloud: NSX agents in Linux/Windows VMs behind an NSX Cloud gateway (NAT); VMware Cloud on AWS
(A cluster health-check sketch follows.)
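As an operational sanity check, the Manager cluster state can be read over the REST API. A minimal sketch with Python requests; the manager FQDN and credentials are placeholders, and the exact response fields should be verified against your NSX-T version:

```python
import requests

NSX = "https://nsx-mgr.example.local"   # hypothetical manager FQDN
AUTH = ("admin", "secret")              # use a least-privilege account

# GET /api/v1/cluster/status reports management and control cluster health.
r = requests.get(f"{NSX}/api/v1/cluster/status", auth=AUTH, verify=False)
r.raise_for_status()
status = r.json()
print(status.get("mgmt_cluster_status", {}).get("status"))     # e.g. STABLE
print(status.get("control_cluster_status", {}).get("status"))
```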
NSX-T Datacenter Components
Terminology: Transport Nodes, N-VDS

Transport Node (TN)
• Host prepared for NSX (i.e., N-VDS installed)
• Has a TEP (Tunnel End Point)
• Examples: hypervisor, Edge node, bare metal server with the NSX agent

NSX Virtual Distributed Switch (N-VDS)
• Supports VLAN and overlay segments
• Owns several physical NICs of the transport node
• Can co-exist with a VSS, VDS or another N-VDS

[Diagram: an ESXi transport node with a VDS (P1/P2) carrying infrastructure traffic (vmk0/vmk1/vmk2) and an N-VDS (P3/P4) carrying workload traffic (VM1-VM3) on overlay or VLAN segments.]
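Prepared transport nodes can be inventoried over the same API. A small sketch (placeholder manager address; field names per the NSX-T Manager API, verify against your version):

```python
import requests

NSX = "https://nsx-mgr.example.local"   # hypothetical
AUTH = ("admin", "secret")

# List every host and Edge that has been prepared as a transport node.
r = requests.get(f"{NSX}/api/v1/transport-nodes", auth=AUTH, verify=False)
r.raise_for_status()
for tn in r.json().get("results", []):
    print(tn["display_name"], tn["id"])
```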
NSX-T Datacenter Components
Edge Node: Bare Metal and VM

Edge nodes are service appliances that host centralized services like N-S connectivity, NAT, load balancing, bridging, etc. They send and receive both overlay and VLAN/external traffic.

Bare metal Edge
• Host with the Edge ISO installed
• An N-VDS owns the pNICs
• Is a transport node

VM Edge
• It's a VM; its vNICs (vnic0-vnic2) can attach to a VSS, VDS or N-VDS on the host
• Is a transport node

[Diagram: a bare metal Edge with its own N-VDS on P1/P2, versus an Edge VM hosted on an ESXi transport node's VSS/VDS or N-VDS.]
NSX-T Datacenter Components
Logical Routing

Tier-0 Gateway
• Role: connects to the physical infrastructure (physical routers)
• Management: manual

Tier-1 Gateway
• Role: per-tenant first-hop router (Tenant-1, Tenant-2, ...)
• Management: Cloud Management Platform (CMP) driven

Benefits
• Tenant isolation
• Separate control for infrastructure and tenant admins
• Eliminates dependency on the physical infrastructure when a new tenant is provisioned
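Provisioning a new tenant Tier-1 touches only NSX, which is the point of the two-tier model. A minimal Policy API sketch; the gateway names and paths are illustrative and the payload should be verified against your NSX-T version:

```python
import requests

NSX = "https://nsx-mgr.example.local"   # hypothetical
AUTH = ("admin", "secret")

# Create (or patch) a tenant Tier-1 gateway and hang it off an existing Tier-0.
body = {
    "display_name": "tenant-1-t1",
    "tier0_path": "/infra/tier-0s/corp-t0",   # pre-existing Tier-0 (assumed)
    "route_advertisement_types": ["TIER1_CONNECTED", "TIER1_NAT"],
}
r = requests.patch(f"{NSX}/policy/api/v1/infra/tier-1s/tenant-1-t1",
                   json=body, auth=AUTH, verify=False)
r.raise_for_status()
```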
NSX-T in the Data Center
NSX-T in the Data Center
Enterprise or Large Data Center

Large Datacenter
• Dedicated Management, Edge and Compute clusters
• ESXi, KVM or bare metal compute hosts
• Edge: VM or bare metal
• Dedicated service appliances are always present; NSX-T services are used in conjunction with other services
• N-VDS is not installed on hosts in the Management and Edge clusters

NSX-T provides distributed switching, distributed routing and the distributed firewall on the compute clusters, with centralized services on the Edge cluster.

[Diagram: Management cluster and Edge cluster (bare metal or VM) on VDS; Compute clusters on N-VDS.]
NSX-T in the Data Center
Mid-Size Data Center: Shared Management and Edge Cluster

• Shared Management and Edge cluster
• ESXi, KVM or bare metal compute hosts
• Edge: VM or bare metal
• Predictable/deterministic traffic flow
• NSX-T services are used in conjunction with other services, like a dedicated firewall
• N-VDS is not installed on hosts in the shared Management and Edge cluster

[Diagram: shared Management and Edge cluster on VDS; compute clusters (vSphere Cluster1, vSphere Cluster2) on N-VDS.]
NSX-T in the Data Center
Small Data Center: Shared Management, Edge and Compute Cluster

• Shared Management, Edge and Compute cluster
• ESXi only, no KVM
• Edge: always VM form factor
• NSX-T services like the Gateway Firewall, Load Balancer, etc. are leveraged heavily
• Each host has an N-VDS installed
Shared Cluster: Edge VM on Compute (4 pNIC)
Edge node VM installed on a VSS or VDS

• NSX-T VIBs are installed on the host; the host N-VDS consumes separate pNICs (P2/P3) from the VSS/VDS (P0/P1)
• The VSS/VDS carries the management and vMotion VMkernels (vmk0/vmk1) on the Mgmt-Seg and vMotion-Seg port groups
• The host N-VDS carries overlay segments, with its TEP in VLAN 75
• The Edge VM's vNICs attach to an Edge-Mgmt port group (vNIC1) plus two trunk port groups, Edge-Uplink-Trunk1 and Edge-Uplink-Trunk2 (vNIC2/vNIC3); the Edge's own N-VDS1 has TEP1-IP and TEP2-IP, also in VLAN 75
Shared Cluster: Edge VM on Compute (2 pNIC)
Edge node VM installed on the host N-VDS

• With only two pNICs (P0/P1), a single host N-VDS (N-VDS1) carries everything: the VMkernels (vmk0/vmk1 on Mgmt-Seg and vMotion-Seg), overlay segments, the host TEP, and the Edge VM's vNICs (Edge-Mgmt plus Edge-Uplink-Trunk1/Trunk2)
• The compute TEP and the Edge TEP must be in different VLANs: here the transport VLAN for the compute host is VLAN 75 (TEP-IP 172.16.215.66/28), while the transport VLAN for the Edge is VLAN 78 (TEP1-IP/TEP2-IP)
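One way to keep the two TEP ranges separate is to define one IP pool per transport VLAN. A hedged sketch against the Manager API; the pool names and the VLAN 78 address range are invented for illustration, so verify the payload against your NSX-T version:

```python
import requests

NSX = "https://nsx-mgr.example.local"   # hypothetical
AUTH = ("admin", "secret")

def make_tep_pool(name, cidr, start, end):
    # POST /api/v1/pools/ip-pools creates an IP pool usable for TEPs.
    body = {"display_name": name,
            "subnets": [{"cidr": cidr,
                         "allocation_ranges": [{"start": start, "end": end}]}]}
    r = requests.post(f"{NSX}/api/v1/pools/ip-pools", json=body,
                      auth=AUTH, verify=False)
    r.raise_for_status()
    return r.json()["id"]

# Host TEPs on transport VLAN 75, Edge TEPs on transport VLAN 78.
host_pool = make_tep_pool("tep-pool-vlan75", "172.16.215.64/28",
                          "172.16.215.66", "172.16.215.78")
edge_pool = make_tep_pool("tep-pool-vlan78", "172.16.218.64/28",   # assumed range
                          "172.16.218.66", "172.16.218.78")
```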
Design Considerations: Management, Edge and Compute
Design Considerations
Management, Edge and Compute

Management Cluster
• Sizing: Small (POC or lab only), Medium (up to 64 hosts), Large (>64 to 1,024 hosts)
• Same or different L2 domain
• 3-node or 4-node cluster
• DRS with anti-affinity rules, vSphere HA, resource reservations

Edge Cluster
• Throughput and convergence
• Teaming policy: single TEP or multi-TEP
• High availability: separate Edge cluster for N-S and a separate cluster for services; failure domains
• Scale considerations

Compute Cluster
• ESXi, KVM
• pNICs
• Teaming policy: failover order or source port; single TEP or multi-TEP
• VMkernel and overlay traffic: separate pNICs or shared pNICs
Design Considerations
3 Hosts or 4 Hosts for Management

• Maximum latency between NSX Managers: 10 ms
• vMotion support is required in general
• Use a vSphere-based Management cluster and leverage DRS anti-affinity rules (NSX-T doesn't natively enforce this design practice)
• Mark anti-affinity rules as "should", not "must":
  – Loss of a single Manager is acceptable; loss of two Managers causes loss of quorum of the NSX Manager cluster
  – No topology changes / vMotion required during maintenance
• A four-host vSphere cluster optimizes ESXi upgrades and capacity control

[Diagram: a vSphere cluster of Hosts A-D on one management VLAN/subnet, the management port group connected to that VLAN; NSX Managers A, B and C on separate hosts, with Manager A restarting on Host D after a failure.]
(A DRS anti-affinity sketch follows.)
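A minimal pyVmomi sketch of the "should" anti-affinity rule, assuming a cluster named mgmt-cluster and Managers named nsx-mgr-a/b/c (all names, the vCenter address, and credentials are placeholders):

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local", pwd="secret",
                  sslContext=ctx)
content = si.RetrieveContent()

def find(vimtype, name):
    """Look up a managed object by display name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.Destroy()

cluster = find(vim.ClusterComputeResource, "mgmt-cluster")
managers = [find(vim.VirtualMachine, n)
            for n in ("nsx-mgr-a", "nsx-mgr-b", "nsx-mgr-c")]

# mandatory=False makes this a "should" rule: DRS tries to keep the
# Managers apart but can still restart one on a surviving host.
rule = vim.cluster.AntiAffinityRuleSpec(name="nsx-manager-anti-affinity",
                                        enabled=True, mandatory=False,
                                        vm=managers)
spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(info=rule, operation="add")])
cluster.ReconfigureComputeResource_Task(spec, modify=True)
```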
Design Considerations
Management Cluster: L2 or L3 Adjacent?

Option 1: External load balancer (VIP 10.1.1.1 in front of node IPs A/B/C)
• Single IP availability
• Multi-subnet: no L2 required across management racks
• More complex setup, with LB configuration required; complex LCM and compatibility; costly
• Not common (not recommended for small/mid-size datacenters)

Option 2: No VIP; clients use the three node IPs directly
• No Layer 2 adjacency requirement
• All three node IPs can be used for GUI and API access; however, upon failure of a node, a different IP has to be used

Option 3: Built-in Cluster Virtual IP (10.1.1.1) – Recommended
• Low cost, low complexity
• A single IP address can be used for API and UI access
• Single subnet only; no UI/API load distribution
(A VIP configuration sketch follows.)
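Setting the built-in cluster VIP is a single call against the Manager API. A minimal sketch; the manager address and credentials are placeholders, and the endpoint should be verified against your NSX-T version:

```python
import requests

NSX = "https://nsx-mgr-a.example.local"   # any one manager node
AUTH = ("admin", "secret")

# Assign the cluster virtual IP; the active node then answers on 10.1.1.1.
r = requests.post(
    f"{NSX}/api/v1/cluster/api-virtual-ip"
    "?action=set_virtual_ip&ip_address=10.1.1.1",
    auth=AUTH, verify=False)
r.raise_for_status()
print(r.json())  # echoes the configured ip_address
```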
Design Considerations: Compute
Design Considerations
Compute: ESXi

• As noted earlier, an N-VDS can co-exist with a VSS, VDS or another N-VDS
• VMkernel & overlay traffic: dedicated or shared pNICs
• Teaming policy:
  – Failover order (Active/Standby)
  – Source port (Active/Active; not supported for KVM)
  – Named teaming policy (not supported for KVM)

[Diagram: 4-pNIC option with infrastructure traffic (Mgmt, Storage, vMotion) on a VSS/VDS (P1/P2) and overlay traffic on the N-VDS (P3/P4), versus 2-pNIC option with all traffic (Mgmt, Storage, vMotion, Overlay) on a single N-VDS (P1/P2).]
Design Considerations
Compute: All Traffic on the N-VDS (2 pNIC)

Teaming Policy | Active | Standby
Default (overlay traffic) | U2 | U1
Management VLAN (vmk0) | U1 | U2
vMotion VLAN (vmk1) | U2 | U1
vSAN VLAN (vmk2) | U1 | U2

Named teaming policy:
• Used only for VLAN logical switches
• Several different named teaming policies can exist under a single N-VDS for different types of VLAN traffic
• Used for deterministic traffic control of non-overlay traffic such as vMotion, vSAN, etc.
(An uplink-profile sketch follows.)
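An uplink profile expressing the table above, via the Manager API. The profile and teaming names are illustrative; the payload structure follows the NSX-T UplinkHostSwitchProfile schema, but verify it against your version:

```python
import requests

NSX = "https://nsx-mgr.example.local"   # hypothetical
AUTH = ("admin", "secret")

# Pin overlay traffic to uplink-2 and mgmt/vSAN vmk traffic to uplink-1,
# with the opposite uplink as standby, mirroring the table above.
body = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "compute-2pnic-profile",
    "transport_vlan": 75,
    "teaming": {  # default teaming: overlay traffic
        "policy": "FAILOVER_ORDER",
        "active_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}],
        "standby_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
    },
    "named_teamings": [
        {"name": "mgmt-vsan",
         "policy": "FAILOVER_ORDER",
         "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
         "standby_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}]},
        {"name": "vmotion",
         "policy": "FAILOVER_ORDER",
         "active_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}],
         "standby_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}]},
    ],
}
r = requests.post(f"{NSX}/api/v1/host-switch-profiles", json=body,
                  auth=AUTH, verify=False)
r.raise_for_status()
```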
Design Considerations
Named Teaming Policy
[Figure: named teaming policy configuration example.]
Design Considerations: Edge Node and Services
Design Considerations
Edge Node VM or Bare Metal Edge

Edge Node VM (shared in Mgmt or Compute cluster) – common for SMB DCs
Design Criteria | Results
Resources | Leverages an existing cluster's excess capacity
Availability | Leverages vSphere HA; slower convergence (3+ sec)
Operations | Simple VM processes; deployed from Manager
Performance | Lower bandwidth (based on HW and network contention); fewer services (LB instances, etc.)
Cost | Leverages the vSphere and NSX licensing of the host

Bare Metal Edge Nodes
Design Criteria | Results
Resources | Net new capacity
Availability | BFD for fast convergence (<1 sec); HW-based RAID
Operations | Boot from ISO, lifecycle from Manager; relies on HW availability
Performance | High bandwidth; high service capacity
Cost | No vSphere license, no storage license; NSX license required per socket; low-cost server (32 GB RAM / 8 vCPU / 200 GB HD)
Design Considerations
Edge VM Size

Type/Size | Memory | vCPU | Disk | Load Balancer | VPN (sessions) | Specific Usage Guidelines
Edge VM Small | 4 GB | 2 | 200 GB | POC only | POC only | PoC only; LB functionality is not available
Edge VM Medium | 8 GB | 4 | 200 GB | Small; Medium (2.5 onwards) | 128 | Suitable for production with centralized services like NAT, Edge firewall, VPN; recommended for small deployments
Edge VM Large | 32 GB | 8 | 200 GB | Small, Medium & Large LB (max 1 large LB and 4 medium LBs) | 256 | Suitable for production with centralized services like NAT, Edge firewall, load balancer, etc.; recommended for mid-size deployments
Design Considerations
N-S Connectivity Options for Edge

Not recommended: vPC/MLAG toward the Edge
• Both TORs carry Mgmt VLAN 100, Overlay VLAN 200 and External VLANs 300/400 across the vPC/MLAG pair
• No routing over vPC/MLAG

Recommended: independent TOR uplinks with named teaming policies
• Both TORs carry Mgmt VLAN 100 and Overlay VLAN 200; External VLAN 300 lands on one TOR and External VLAN 400 on the other
• Use named teaming policies to pin each Tier-0 uplink of the Edge (bare metal or VM) to its TOR
Design Considerations
Edge Cluster: Services High Availability, Active/Active and Active/Standby

NSX-T Edge nodes are pooled in an Edge cluster to provide scale-out and high availability for services.

Gateway in Active/Active HA mode
• Scale-out HA
• ECMP
• Stateless services (reflexive NAT)

Gateway in Active/Standby HA mode: stateful services
• SNAT/DNAT
• Load balancer
• Edge firewall
• DHCP server
• VPN
• Bridging
(An HA-mode sketch follows.)
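The HA mode is a property of the gateway. A minimal Policy API sketch that keeps a Tier-0 in Active/Active for ECMP (gateway name and manager address are placeholders; verify fields against your NSX-T version):

```python
import requests

NSX = "https://nsx-mgr.example.local"   # hypothetical
AUTH = ("admin", "secret")

# Tier-0 in Active/Active for ECMP northbound; stateful services then
# live on Active/Standby Tier-1 gateways behind it.
body = {"display_name": "corp-t0", "ha_mode": "ACTIVE_ACTIVE"}
r = requests.patch(f"{NSX}/policy/api/v1/infra/tier-0s/corp-t0",
                   json=body, auth=AUTH, verify=False)
r.raise_for_status()
```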
Design Considerations
Services on Tier-1: Where Should You Run Centralized Services?

• Tier-1 gateways run in Active/Standby, but Tier-0 can still provide ECMP northbound
• Layer 4 and Layer 7 load balancing is available only on Tier-1
• Overlapping IP addresses are possible by using NAT per Tier-1
• Perimeter L4-L7 firewall per tenant or namespace
• Exception: L2VPN is available on Tier-0 only

[Diagram: an Edge cluster (EN1/EN2) with the Tier-0 peering to the physical routers over VLAN and per-tenant Tier-1 services behind it.]
Design Considerations
Edge Cluster Design: Services Availability

• Shared or dedicated Edge cluster for N-S traffic & services: a shared cluster is recommended for small & mid-size deployments
• High availability
• Scale up: same rack or different racks
• Consider failure domains for Tier-1 placement (new feature in NSX-T 2.5)

[Diagram: a four-node Edge cluster split across Failure Domain 1 (Edge 1/Edge 2 under TOR1/TOR2) and Failure Domain 2 (Edge 3/Edge 4 under TOR3/TOR4).]
(A failure-domain sketch follows.)
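A sketch of configuring failure domains via the Manager API, per the NSX-T 2.5 feature above. The domain name is illustrative, the edge node UUID is a placeholder you must fill in, and the failure_domain_id field on the transport node should be verified against your version:

```python
import requests

NSX = "https://nsx-mgr.example.local"   # hypothetical
AUTH = ("admin", "secret")

# 1. Create a failure domain per rack.
r = requests.post(f"{NSX}/api/v1/failure-domains",
                  json={"display_name": "rack-1"}, auth=AUTH, verify=False)
r.raise_for_status()
fd_id = r.json()["id"]

# 2. Tag an Edge transport node with its failure domain so Active/Standby
#    Tier-1 pairs are placed in different racks.
tn = requests.get(f"{NSX}/api/v1/transport-nodes/<edge-node-uuid>",
                  auth=AUTH, verify=False).json()
tn["failure_domain_id"] = fd_id
requests.put(f"{NSX}/api/v1/transport-nodes/{tn['id']}", json=tn,
             auth=AUTH, verify=False).raise_for_status()
```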
vSAN Considerations
vSAN Considerations
For Management

Availability
The minimum recommended vSAN cluster size of 4 nodes:
1. Deploys N+1 hosts from an NSX perspective
2. Uses FTT=1
3. Doesn't vacate the storage from a host by default
Best practices for NSX Managers/Controllers are:
1. Deploy N + 1 (maintenance) + 1 (failure)
2. Utilize FTT=2
3. Vacate vSAN storage from a host during any planned maintenance

Physical Placement
Decide immediately: one really good basket, or lots of baskets?
1. Is a cabinet failure or ToR pair failure in scope?
2. If yes, place no more servers in a cabinet than you are prepared to lose at once
   – With FTT=2 that means 6 hosts equally spread across 3 cabinets
   – Potentially a vSAN stretched cluster with 3 nodes across 2 cabinets and a witness
Does L2 stretch across cabinets?
1. If no, an LB is required to balance the Manager/Controller nodes
2. If no, vCenter placement becomes an issue
vSAN Considerations
For Edge Node VMs

Availability
Simultaneous failures of Edge node VMs result in catastrophic outages. When using vSAN, if possible, use multiple vSphere clusters or a vSAN stretched cluster. If using a single cluster (and vSAN datastore):
1. At least FTT=2 (the larger the vSAN cluster, the more likely a HW failure will occur)
2. Always vacate the storage during a planned maintenance window
3. Set anti-affinity rules between Edge node VMs in the same Edge cluster

Physical Placement
Where are your Edges peering to?
1. The ToR?
2. Through the ToR to a border or aggregation layer?
Affinity rules may be required between Edge node VMs and their desired hosts.
Is a cabinet failure or ToR pair failure in scope?
1. If yes, place no more servers in a cabinet than you are prepared to lose at once
   – With FTT=2 that means 6 hosts equally spread across 3 cabinets
   – Potentially a vSAN stretched cluster with 3 nodes across 2 cabinets and a witness
Does L2 stretch across cabinets?
1. If no, peering networks and IPs may be cabinet-specific
2. If no, use Edge node VM TEP pools per cabinet
Reference Architecture
Reference Architecture: VMware Cloud Foundation and VMware Validated Design
Mid-Size Data Center: Shared Compute and Edge Cluster

• Shared Compute and Edge cluster
• vSphere-centric design
• Edge: VM form factor
• Strict separation of management plane and data plane
• Edge and production workloads governed by the same SLA
• N-VDS is not installed on hosts in the Management cluster
• Leverage production capacity to scale the Edge cluster

[Diagram: dedicated Management cluster on VDS; shared Compute and Edge clusters (vSphere Cluster1, vSphere Cluster2) on N-VDS.]
Reference Architecture: Pivotal Ready Architecture
Mid-Size Data Center: Shared Management and Edge Cluster

Based on Dell EMC VxRail appliances; 4 clusters in the basic design, designed around a rack-failure scenario:
• 3 workload clusters (AZ01, AZ02 & AZ03)
  – Each cluster in a single rack
  – Applications redundantly deployed across all 3 workload clusters
• Management and Edge cluster
  – 6-host cluster with vSAN
  – FTT=2 to withstand a rack failure
  – NSX-T Manager/Controller cluster deployed across all 3 racks with host affinity rules
  – With NSX-T 2.5, Edge node VMs are deployed across 3 failure domains, ensuring services availability
Reference Architecture: VMware NSX-T on Dell EMC VxRail
Small or Mid-Size Data Center

• NSX-T 2.4.0+ requires VxRail 4.7.110+ or 4.5.300+
• No lifecycle management or visibility of N-VDS NICs in VxRail Manager
• Standardized on 2 virtual switches (4 pNICs): a VDS for VMkernels and an N-VDS for workload VMs and Edge node VMs
• Single collapsed cluster by default
  – Uses NIOC and traffic engineering for the VDS
  – Load the NSX-T content pack for vRealize Log Insight
  – Validate NIC firmware and drivers against the VMware HCL

Traffic Type | Multicast Requirements | Uplink 1 (10/25 Gb, VMNIC0) | Uplink 2 (10/25 Gb, VMNIC1) | Uplink 3 (N-VDS, post-deploy) | Uplink 4 (N-VDS, post-deploy) | NIOC Shares
Management | IPv6 multicast | Active | Standby | N/A | N/A | 20
vMotion | None | Active | Standby | N/A | N/A | 50
vSAN | None | Standby | Active | N/A | N/A | 100
VMs | None | Active | Standby | N/A | N/A | 30

[Diagram: a Dell EMC VxRail transport node with non-guest traffic (Mgmt, Storage, vMotion) on the VDS (P1/P2) and guest/overlay traffic on the N-VDS (P3/P4).]
NSX-T Design Inflection Points
Questions That Need Answering

Collapsed cluster or not?
1. How many VMs?
2. How many VMs per host?
3. How are you licensed?
Recommendations:
• <250 VMs: a collapsed cluster makes sense
• 25-50 VMs per host: usually RAM-constrained
• Socket licenses = larger hosts

What types of failures to design for?
1. Cabinet-level failure?
2. Single ToR failure?
3. Compound HW failure?
Recommendations:
• Spread clusters across cabinets
• Size vSAN FTT to the number of hosts per cabinet per cluster
• Place Edge nodes in multiple cabinets

vSAN or traditional storage?
1. Is there network contention?
2. Scale out for availability
3. Separate datastores
Recommendations:
• Host BW/pNIC configuration: if IP storage, use 25 Gb pNICs or multiple 10s, or separate Edge from workload hosts
• vSAN FTT=2+; with HCI storage, fewer VMs per host (more hosts) enables higher resiliency
• If iSCSI/NFS/FC/vVols, use separate datastores to host each Edge node VM/failure domain or NSX-T Manager appliance
NSX-T Design Inflection Points
Questions That Need Answering

Bare metal or VM-based Edge?
1. App network sensitivity (web app vs. voice)
2. N-S & services throughput?
3. Resources/flexibility?
Recommendations:
• Bare metal Edge: applications need sub-second convergence (e.g., VoIP), or services need high throughput
• VM Edge: recommended for SMB customers because of its flexibility
• Collapsed cluster: Edge VM only

2x10, 2x25, 4x10 or 4x25 pNICs?
1. Single or multiple distributed switches
2. VMkernel placement
3. Edge node VM bandwidth contention
Recommendations:
• 2 pNICs = single N-VDS; VMkernels on the N-VDS (use named teaming policies)
• 4 pNICs = VDS + N-VDS; VMkernels on the VDS, Edge on the N-VDS

Single-tier or two-tier routing?
1. Are centralized services used?
2. Operational skill sets (admin vs. tenant gateways)
Recommendations:
• Use two-tier routing
• Use centralized services on Tier-1 (A/S) to retain ECMP on Tier-0 (A/A)
• Configure Tier-0 once; delegate at the Tier-1 level to junior admins/tenants
• Use failure domains for Tier-1 placement
Growth and Expansion
Don't Be a Victim of Your Own Success

Scaling Up the Host Count
A single cluster can expand to 64 hosts, but that doesn't mean it should:
• It may be better to migrate Management and Edge workloads to their own cluster
• Additional clusters can be added and the N-VDS extended to them
• This may be a good time to redistribute/deploy more Edge node VMs

Increased Services Use
Scale-up options:
• Edge node VMs can be scaled from Small to Medium to Large, and even to bare metal
• Load balancers can be scaled from Small to Medium to Large
Scale-out options:
• Additional Edge node VMs can be deployed into the same Edge cluster

Operational Efficiencies
• VMware vIDM is included with NSX-T and can be used to enable RBAC to delegate rights (security, LB, ops)
• vRealize Log Insight is included with NSX-T to make it easier to support, troubleshoot, and audit security events
• vRealize Network Insight is available to streamline network support and operations
How to Get Started
Resources

Learn: nsx.techzone.vmware.com – design guides, demos
Try: take a Hands-on Lab
Connect: @VMwareNSX, #runNSX; join VMUG and VMware Communities (VMTN)
More Related Content

Similar to VMware NSX-T Design for Small to Mid-Sized Data Centers v1.0 EN.pptx

VMworld 2013: NSX PCI Reference Architecture Workshop Session 1 - Segmentation
VMworld 2013: NSX PCI Reference Architecture Workshop Session 1 - SegmentationVMworld 2013: NSX PCI Reference Architecture Workshop Session 1 - Segmentation
VMworld 2013: NSX PCI Reference Architecture Workshop Session 1 - SegmentationVMworld
 
VMware NSX - Lessons Learned from real project
VMware NSX - Lessons Learned from real projectVMware NSX - Lessons Learned from real project
VMware NSX - Lessons Learned from real projectDavid Pasek
 
NSX_Advanced_Load_Balancer_Solution_with_Oracle.pptx
NSX_Advanced_Load_Balancer_Solution_with_Oracle.pptxNSX_Advanced_Load_Balancer_Solution_with_Oracle.pptx
NSX_Advanced_Load_Balancer_Solution_with_Oracle.pptxAvi Networks
 
Transform Your Business with VMware Cloud on AWS, an Integrated Hybrid Approa...
Transform Your Business with VMware Cloud on AWS, an Integrated Hybrid Approa...Transform Your Business with VMware Cloud on AWS, an Integrated Hybrid Approa...
Transform Your Business with VMware Cloud on AWS, an Integrated Hybrid Approa...Amazon Web Services
 
VMworld 2015: VMware NSX Deep Dive
VMworld 2015: VMware NSX Deep DiveVMworld 2015: VMware NSX Deep Dive
VMworld 2015: VMware NSX Deep DiveVMworld
 
VMworld 2015: VMware NSX Deep Dive
VMworld 2015: VMware NSX Deep DiveVMworld 2015: VMware NSX Deep Dive
VMworld 2015: VMware NSX Deep DiveVMworld
 
NET4933_vDS_Best_Practices_For_NSX_Francois_Tallet_Shahzad_Ali
NET4933_vDS_Best_Practices_For_NSX_Francois_Tallet_Shahzad_AliNET4933_vDS_Best_Practices_For_NSX_Francois_Tallet_Shahzad_Ali
NET4933_vDS_Best_Practices_For_NSX_Francois_Tallet_Shahzad_Alishezy22
 
GAMO VMware vCloud Air
GAMO VMware vCloud AirGAMO VMware vCloud Air
GAMO VMware vCloud AirGAMO a.s.
 
VMware Cloud on AWS - 100819.pdf
VMware Cloud on AWS - 100819.pdfVMware Cloud on AWS - 100819.pdf
VMware Cloud on AWS - 100819.pdfAmazon Web Services
 
Inteligentní řízení WAN konektivity
Inteligentní řízení WAN konektivityInteligentní řízení WAN konektivity
Inteligentní řízení WAN konektivityMarketingArrowECS_CZ
 
StarlingX - Driving Compute to the Edge with OpenStack
StarlingX - Driving Compute to the Edge with OpenStackStarlingX - Driving Compute to the Edge with OpenStack
StarlingX - Driving Compute to the Edge with OpenStackStacy Véronneau
 
ENT208 Transform your Business with VMware Cloud on AWS
ENT208 Transform your Business with VMware Cloud on AWSENT208 Transform your Business with VMware Cloud on AWS
ENT208 Transform your Business with VMware Cloud on AWSAmazon Web Services
 
Building Hybrid Cloud IT Infrastructures and Operations Using VMC on AWS
Building Hybrid Cloud IT Infrastructures and Operations Using VMC on AWSBuilding Hybrid Cloud IT Infrastructures and Operations Using VMC on AWS
Building Hybrid Cloud IT Infrastructures and Operations Using VMC on AWSAmazon Web Services
 
NSX-T Architecture and Components.pptx
NSX-T Architecture and Components.pptxNSX-T Architecture and Components.pptx
NSX-T Architecture and Components.pptxAtif Raees
 
Citrix Cloud Master Class June 2014
Citrix Cloud Master Class June 2014Citrix Cloud Master Class June 2014
Citrix Cloud Master Class June 2014Citrix
 
VMworld 2013: Case Study: VMware vCloud Ecosystem Framework for Network and S...
VMworld 2013: Case Study: VMware vCloud Ecosystem Framework for Network and S...VMworld 2013: Case Study: VMware vCloud Ecosystem Framework for Network and S...
VMworld 2013: Case Study: VMware vCloud Ecosystem Framework for Network and S...VMworld
 
VMworld 2015: vSphere Distributed Switch 6 –Technical Deep Dive
VMworld 2015: vSphere Distributed Switch 6 –Technical Deep DiveVMworld 2015: vSphere Distributed Switch 6 –Technical Deep Dive
VMworld 2015: vSphere Distributed Switch 6 –Technical Deep DiveVMworld
 

Similar to VMware NSX-T Design for Small to Mid-Sized Data Centers v1.0 EN.pptx (20)

VMworld 2013: NSX PCI Reference Architecture Workshop Session 1 - Segmentation
VMworld 2013: NSX PCI Reference Architecture Workshop Session 1 - SegmentationVMworld 2013: NSX PCI Reference Architecture Workshop Session 1 - Segmentation
VMworld 2013: NSX PCI Reference Architecture Workshop Session 1 - Segmentation
 
VMware NSX - Lessons Learned from real project
VMware NSX - Lessons Learned from real projectVMware NSX - Lessons Learned from real project
VMware NSX - Lessons Learned from real project
 
NSX_Advanced_Load_Balancer_Solution_with_Oracle.pptx
NSX_Advanced_Load_Balancer_Solution_with_Oracle.pptxNSX_Advanced_Load_Balancer_Solution_with_Oracle.pptx
NSX_Advanced_Load_Balancer_Solution_with_Oracle.pptx
 
Transform Your Business with VMware Cloud on AWS, an Integrated Hybrid Approa...
Transform Your Business with VMware Cloud on AWS, an Integrated Hybrid Approa...Transform Your Business with VMware Cloud on AWS, an Integrated Hybrid Approa...
Transform Your Business with VMware Cloud on AWS, an Integrated Hybrid Approa...
 
VMworld 2015: VMware NSX Deep Dive
VMworld 2015: VMware NSX Deep DiveVMworld 2015: VMware NSX Deep Dive
VMworld 2015: VMware NSX Deep Dive
 
VMworld 2015: VMware NSX Deep Dive
VMworld 2015: VMware NSX Deep DiveVMworld 2015: VMware NSX Deep Dive
VMworld 2015: VMware NSX Deep Dive
 
NSX, un salt natural cap a SDN
NSX, un salt natural cap a SDNNSX, un salt natural cap a SDN
NSX, un salt natural cap a SDN
 
NET4933_vDS_Best_Practices_For_NSX_Francois_Tallet_Shahzad_Ali
NET4933_vDS_Best_Practices_For_NSX_Francois_Tallet_Shahzad_AliNET4933_vDS_Best_Practices_For_NSX_Francois_Tallet_Shahzad_Ali
NET4933_vDS_Best_Practices_For_NSX_Francois_Tallet_Shahzad_Ali
 
GAMO VMware vCloud Air
GAMO VMware vCloud AirGAMO VMware vCloud Air
GAMO VMware vCloud Air
 
VMware Cloud on AWS - 100819.pdf
VMware Cloud on AWS - 100819.pdfVMware Cloud on AWS - 100819.pdf
VMware Cloud on AWS - 100819.pdf
 
Inteligentní řízení WAN konektivity
Inteligentní řízení WAN konektivityInteligentní řízení WAN konektivity
Inteligentní řízení WAN konektivity
 
StarlingX - Driving Compute to the Edge with OpenStack
StarlingX - Driving Compute to the Edge with OpenStackStarlingX - Driving Compute to the Edge with OpenStack
StarlingX - Driving Compute to the Edge with OpenStack
 
ENT208 Transform your Business with VMware Cloud on AWS
ENT208 Transform your Business with VMware Cloud on AWSENT208 Transform your Business with VMware Cloud on AWS
ENT208 Transform your Business with VMware Cloud on AWS
 
Building Hybrid Cloud IT Infrastructures and Operations Using VMC on AWS
Building Hybrid Cloud IT Infrastructures and Operations Using VMC on AWSBuilding Hybrid Cloud IT Infrastructures and Operations Using VMC on AWS
Building Hybrid Cloud IT Infrastructures and Operations Using VMC on AWS
 
NSX-T Architecture and Components.pptx
NSX-T Architecture and Components.pptxNSX-T Architecture and Components.pptx
NSX-T Architecture and Components.pptx
 
Citrix Cloud Master Class June 2014
Citrix Cloud Master Class June 2014Citrix Cloud Master Class June 2014
Citrix Cloud Master Class June 2014
 
04 vsx power-r65
04 vsx power-r6504 vsx power-r65
04 vsx power-r65
 
VMworld 2013: Case Study: VMware vCloud Ecosystem Framework for Network and S...
VMworld 2013: Case Study: VMware vCloud Ecosystem Framework for Network and S...VMworld 2013: Case Study: VMware vCloud Ecosystem Framework for Network and S...
VMworld 2013: Case Study: VMware vCloud Ecosystem Framework for Network and S...
 
VMworld 2015: vSphere Distributed Switch 6 –Technical Deep Dive
VMworld 2015: vSphere Distributed Switch 6 –Technical Deep DiveVMworld 2015: vSphere Distributed Switch 6 –Technical Deep Dive
VMworld 2015: vSphere Distributed Switch 6 –Technical Deep Dive
 
NSX-MH
NSX-MHNSX-MH
NSX-MH
 

Recently uploaded

The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfEnterprise Knowledge
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Enterprise Knowledge
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Drew Madelung
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonetsnaman860154
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdfhans926745
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerThousandEyes
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking MenDelhi Call girls
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Scriptwesley chun
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking MenDelhi Call girls
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slidevu2urc
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Allon Mureinik
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking MenDelhi Call girls
 
Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101Paola De la Torre
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024Rafal Los
 
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsMaria Levchenko
 
Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024The Digital Insurer
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024The Digital Insurer
 
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live StreamsTop 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live StreamsRoshan Dwivedi
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processorsdebabhi2
 

Recently uploaded (20)

The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonets
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slide
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
 
Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024
 
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed texts
 
Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live StreamsTop 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 

VMware NSX-T Design for Small to Mid-Sized Data Centers v1.0 EN.pptx

  • 1. Confidential │ ©2019 VMware, Inc. NSX-T Design for Small to Mid-Sized Data Centers February 2020
  • 2. Confidential │ ©2019 VMware, Inc. Agenda 2 The NSX-T Platform Use cases, Architecture and Terminology NSX-T in the Data Center (Large, Mid-Size or Small) Design Considerations Management, Compute and Edge Edge Node Form Factor and Placement vSAN Considerations Reference Architecture Growth Options Summary and Q&A
  • 3. Confidential │ ©2019 VMware, Inc. 3 ESX NSX Evolution BRANCH DC EDGE/IOT PUBLIC CLOUD PRIVATE CLOUD vSphere
  • 4. Confidential │ ©2019 VMware, Inc. 4 VMware NSX-T Solution Scope Visibility Automation NSX DC – On Prem, NSX Cloud (AWS & Azure), VMC, VMC on Dell Cloud, IBM Integration with Container Ecosystem – PKS, PAS, DIY K8 & OpenShift Networking Security Solution & Integrated Stack Intrinsic Security – E-W, Multi-tenant, Application Aware & Perimeter Heterogenous end-points – Container, VM (ESX & KVM), Bare-Metal
  • 5. Confidential │ ©2019 VMware, Inc. 5 Load Balancing Connectivity to physical Switching Firewalling VPN NSX-T Networking and Security Services Routing DHCP NAT MetaData Proxy MetaData Proxy DNS Forwarder DNS Forwarder
  • 6. Confidential │ ©2019 VMware, Inc. 6 Datacenter/Computer Room/Co-Lo Floorspace is at a Premium Dedicated Appliances Take- Up Power, Produce Heat, Take Up Space Usually Hosting Location is Remote from Support Personnel Maximize Out of the Box Functionality Single Pane of Glass is Important IT Leans Towards Jack of All Trades Master of None Limited Access to Sandbox/Development Environments Vendor Management and Integration Challenges Licensing based on Socket • Higher Density = Lower Cost Overhead can’t be hidden due to the scale. Pay for Admission but it’s All You Can Eat After That • Services are Included in the License Cost • More Services = More ROI Small and Medium DataCenter Priorities Design Constraints Physical Footprint Complexity Cost
  • 7. Confidential │ ©2019 VMware, Inc. 7 5 Hosts for Mgmt & Edge (3 + 1 + 1) N+1+1 for Availability with FTT=2 5 Sockets of VSAN, vSphere, NSX 10 Hosts for Prod (8 + 1 + 1) N+1+1 for Availability with FTT=2 20 Sockets of VSAN, vSphere, NSX 37.5% Overhead for Availability + 19% Overhead for Management 10 Hosts (8 + 1 + 1) 25% Overhead for Availability N+1+1 for Availability with FTT=2 20 Sockets of VSAN, vSphere, NSX Single Cluster Management/Edge + Compute The Smaller the Datacenter the Tighter the Window Overhead is the Enemy Resource Example Totals # of VMs Required 200 25 VMs Per Host CPU Over Subscription 3:1 RAM Over Subscription 2:1 Avg vCPU per VM 6 vCPU 1200 vCPUs AVG RAM per VM 48 GB 9600 GB of RAM vCPUs Per Host 2 Sockets 28 Cores Each 168 vCPUs vRAM Per Host 768 GB @ 2:1 1536 GB TB Per Host 12 TB Usable 96 TBs RU Per Host 1 RU # of Hosts 8 + 1 + 1
  • 8. Confidential │ ©2019 VMware, Inc. 8 Hosts: 4-10 hosts vSphere Clusters: 1-2 Workload VMs: 100 - 250 N-S bandwidth: <10G Hosts: 10-100 hosts vSphere Clusters: 4- 10 24 hosts per cluster Workload VMs: 1000 -2500 N-S bandwidth: 10G or more Hosts: 100+ hosts vSphere Clusters: >10 Workload VMs: 1000s N-S bandwidth: Multiple of 10G (Depends on use case) NSX-T Design for Small to Mid-Sized Data Centers What's a SMB Data Center? Small DataCenter Mid-Size DataCenter Large DataCenter
  • 9. 9 Confidential │ ©2019 VMware, Inc. Architecture & Terminology
  • 10. Confidential │ ©2019 VMware, Inc. 10 N-VDS NSX-T Requirements on physical infrastructure Only requirements on the physical infrastructure: • IP connectivity • 1700 bytes MTU (minimum), 9K recommended  Mobility with Any topology: • L2 and L3 • Spine/Leaf  Any vendor: • Any single vendor • Mix of vendors • Mix of device generations Works with any network topology TEP 1 P1 Physical Fabric Any Vendor/Any Topology N-VDS TEP 2 P1 Compute Host 1 Compute Host 2
  • 11. Confidential │ ©2019 VMware, Inc. 11 UI/API entry point, Store desired configuration Interact with other management components Cloud Service Manager NSX Container Plugin vCenter(s) NSX Manager CMP, Automation (Ansible, Terraform, Python, Go etc…) NSX-T Architecture ESXi host N-VDS KVM host N-VDS NSX Edge Bare Metal Server NSX Transport Nodes: • Host workloads (VMs, containers) and services • Switch data plane traffic Private Cloud Linux VM NSX Windows VM NSX NSX Cloud GW NAT Public Cloud VMware Cloud on AWS VMs Containers Maintain and propagate dynamic state within the system Disseminates topology information reported by the data plane elements NSX Controller Control Plane Data Plane Management Plane NSX Manager Appliance NSX Manager Appliance Cluster of 3 VMs (scale out + redundancy) NSX Manager Appliance
  • 12. Confidential │ ©2019 VMware, Inc. 12 NSX-T Datacenter Components Transport Node (TN) • Host prepared for NSX (i.e. N- VDS installed) • Has a TEP (Tunnel End Point) • Ex: Hypervisor, Edge Node, bare metal server with NSX Agent NSX Virtual Distributed Switch • Supports VLANs and Overlay • Owns several physical NICs of the Transport Node • Can co-exist with a VSS, VDS or another N-VDS Terminology: Transport Nodes, N-VDS VDS P1 P2 N-VDS P3 P4 Infrastructure traffic Workload traffic (Overlay / Vlan) vmk0 vmk2 vmk1 VM1 VM3 VM2 Transport Node ESXi Overlay or VLAN Segment
  • 13. Confidential │ ©2019 VMware, Inc. 13 NSX-T Datacenter Components Edge Node Service Appliances to host centralized services like N-S connectivity, NAT, Load balancer, Bridging etc. Sends/Receives both Overlay and Vlan/External traffic. Baremetal Edge • Host with Edge ISO installed. • N-VDS owns the pNICs. • Is a Transport Node VM Edge • It’s a VM • Can be hosted on VSS, VDS or N-VDS • Is a Transport Node Edge –Baremetal and VM N-VDS P1 P2 Baremetal Edge Transport Node ESXi P1 P2 VSS/VDS or N-VDS vNIC Edge VM vnic0 vnic1 vnic2 VM Edge Transport Node N-VDS
  • 14. Confidential │ ©2019 VMware, Inc. 14 • Tenant Isolation • Separate control for Infra and Tenant admin • Eliminates dependency on physical infrastructure when a new tenant is provisioned • Role- Connects to physical infra • Manual Management Tier-0 Gateway Benefit Tier-0 Gateway Physical Routers Tier-1 Gateway • Role- Per tenant first hop router • Cloud Management Platform (CMP) driven Management Tenant-1 Tenant-2 Tier-1 Gateway Tier-1 Gateway NSX-T Datacenter Components Logical Routing
  • 15. 15 Confidential │ ©2019 VMware, Inc. NSX-T in Data Center Click to edit optional subtitle
  • 16. Confidential │ ©2019 VMware, Inc. 16 N-VDS N-VDS N-VDS N-VDS NSX-T in the Data Center Large Datacenter Dedicated Management, Edge and Compute clusters ESXi, KVM or Baremetal hosts Edge – VM or Baremetal Dedicated services appliances are always present. NSX-T services are used in conjunction with other services. N-VDS is not installed on hosts in Management and Edge cluster Enterprise or Large Data Center Edge Cluster -Baremetal or VM Management Cluster Compute Clusters N-VDS N-VDS N-VDS N-VDS N-VDS N-VDS N-VDS N-VDS VDS VDS VDS VDS VDS VDS VDS VDS Distributed Firewall Distributed Switching Distributed Routing Centralized Services
  • 17. Confidential │ ©2019 VMware, Inc. 17 NSX-T in the Data Center Mid-Size Datacenter  Shared Management and Edge cluster  ESXi, KVM or Baremetal hosts  Edge – VM or Baremetal  Predictable/Deterministic traffic flow  NSX-T services are used in conjunction with other services like a dedicated firewall  N-VDS is not installed on hosts in shared Management and Edge cluster Mid Size Data Center- Shared Management and Edge Cluster N-VDS N-VDS N-VDS N-VDS Shared Management and Edge Cluster Compute Clusters N-VDS N-VDS N-VDS N-VDS VDS VDS VDS VDS vSphere Cluster1 vSphere Cluster2
  • 18. Confidential │ ©2019 VMware, Inc. 18 NSX-T in the Data Center Small Datacenter Shared Management, Edge and Compute cluster ESXi only, no KVM Edge – Always VM form factor NSX-T services like Gateway firewall, Load balancer etc. are leveraged heavily. Each host has N-VDS installed. Small Data Center- Shared Management, Edge and Compute Cluster Shared Management Edge and Compute Cluster N-VDS N-VDS N-VDS N-VDS
  • 19. Confidential │ ©2019 VMware, Inc. 19 Edge node VM install on VSS or VDS Shared Cluster- Edge VM on Compute (4 pNIC) P1 P0 P2 P3 Uplink2 Uplink1 N-VDS TEP-IP(Vlan 75) NSX-T VIBs Installed on Host, N-VDS consuming separate pNICs than VSS/VDS Edge-Mgmt Edge-UplinkTrunk1 vNIC1 Mgmt vNIC2 vNIC3 N-VDS1 Edge-Uplink-Trunk2 TEP2-IP (Vlan75) TEP1-IP (Vlan75) Overlay Segment vmK0 vmK1 Uplink2 Uplink1 VSS/VDS Mgmt-Seg vMotion-Seg
  • 20. Confidential │ ©2019 VMware, Inc. 20 Shared Cluster- Edge VM on Compute (2 pNIC) Edge node VM install on VSS or VDS P1 P0 Edge-Mgmt Edge-UplinkTrunk1 Uplink2 Uplink1 TEP-IP (Vlan 75) 172.16.215.66/28 ESXi Host Overlay Segment vNIC1 Mgmt Edge Node VM vNIC2 vNIC3 N-VDS1 vmK0 Edge-Uplink-Trunk2 N-VDS1 TEP2-IP (Vlan78) Compute TEP and Edge TEP must be in different vlan (Transport Vlan used for Compute host here is Vlan 75, Transport Vlan used for Edge is vlan 78) vmK1 TEP1-IP (Vlan78) Mgmt-Seg vMotion-Seg
  • 21. 21 Confidential │ ©2019 VMware, Inc. Design Considerations Management, Edge and Compute
  • 22. Confidential │ ©2019 VMware, Inc. 22 Sizing • Small - POC or Lab only • Medium - 64 Hosts • Large >64 -1024 hosts Same or different L2 domain 3 node or 4 node cluster • DRS with Anti affinity rules • vSphere HA • Resource reservations Throughput Convergence Teaming Policy • Single TEP or Multi-TEP High availability • Separate Edge cluster for N-S and separate cluster for Services • Failure Domain Scale considerations ESXi, KVM PNIC Teaming Policy • Failover order or Source Port • Single TEP or Multi-TEP VMkernel and Overlay Traffic- Separate pNICs or shared pNICs Design Considerations Management, Edge and Compute Management Cluster Edge Cluster Compute Cluster
  • 23. Confidential │ ©2019 VMware, Inc. 23 Design Considerations Maximum Latency between NSX Managers = 10ms Must have support for vMotion in general Use vSphere based Management Cluster- Leverage DRS and anti- affinity rules (NSX-T doesn’t natively enforces this design practice) Anti-affinity rules marked as “should” not “must” Loss of single manager is acceptable Loss of two managers will cause loss of quorum of NSX Managers No topology changes / vMotion Four host vSphere Cluster optimizes ESXi upgrade and capacity control vSphere Cluster Management VLAN / Subnet Host A Host B Host C Management Port Group Connected to Management VLAN / Subnet NSX Manager C NSX Manager A NSX Manager B Host D NSX Manager A 3 hosts or 4 hosts for Management
  • 24. Confidential │ ©2019 VMware, Inc. 24 NSX Management Cluster IP A IP B IP C API or GUI Client VIP 10.1.1.1  Single IP Availability  Multi-subnet – no L2 across management racks  More complex setup with LB configuration required  Complex LCM and computability  Costly Not Common(Not recommended for Small/Mid-Size Datacenters) NSX Management Cluster IP A IP B IP C  No Layer 2 adjacent requirement  All three node IP can be used for GUI and API access, however upon failure of that node, different IP has to be used API or GUI Client NSX Management Cluster IP A IP B IP C Cluster Virtual IP 10.1.1.1  Low cost  Low complexity  Single IP address can be used for API and UI access  Single subnet only  No UI and API load distribution Recommended API or GUI Client Design Considerations Management Cluster –L2 or L3 adjacent?
  • 25. 25 Confidential │ ©2019 VMware, Inc. Design Considerations Compute
  • 26. Confidential │ ©2019 VMware, Inc. 26 Design Considerations Compute- ESXi Yet again, N-VDS can co-exist with VSS, VDS or N-VDS VMkernel & Overlay Traffic- Dedicated or Shared pNICs Teaming Policy – Failover order (Active/standby) – Source Port (Active/Active) (Not supported for KVM) – Named teaming policy (Not supported for KVM) Overlay Traffic Infrastructure Traffic ESXi Transport Node N-VDS P4 P3 Uplink 1 Uplink 2 Mgmt Storage vMotion Overlay VSS or VDS P2 P1 Uplink 2 Uplink 2 Uplink 1 All Traffic On N-VDS ESXi Transport Node N-VDS P2 P1 Uplink 1 Uplink 2 Mgmt Storage vMotion Overlay
  • 27. Confidential │ ©2019 VMware, Inc. 27 Design Considerations Compute All Traffic On N-VDS Compute Transport Node N-VDS P2 P1 Uplink 1 Uplink 2 Storage Overlay Teaming Policy Active Standby Default- Overlay traffic U2 U1 Management Vlan- vmk0 U1 U2 vMotion Vlan - vmk1 U2 U1 vSAN Vlan –vmk2 U1 U2 Named Teaming Policy: • Used only for VLAN LS • Several different “Named Teaming Policy can exist under single N-VDS for different types of VLAN traffic • Used for deterministic traffic controls for non-overlay traffics such as vMotion, VSAN etc. Mgmt vMotion
  • 28. Confidential │ ©2019 VMware, Inc. 28 Named Teaming Policy Design Considerations
  • 29. 29 Confidential │ ©2019 VMware, Inc. Design Considerations Edge Node and Services
  • 30. Confidential │ ©2019 VMware, Inc. 30 Design Criteria Results Resources Leverage Existing Cluster’s Excess Capacity Availability Leverages vSphere HA Slower Convergence (3+ Sec) Operations Simple VM Processes Deployed From Manager Performance Lower Bandwidth (based on HW and Network Contention) Fewer Services (LB Instances etc.) Cost Leverages vSphere and NSX Licensing of the Host Design Criteria Results Resources Net New Capacity Availability BFD for Fast Convergence (<1 Sec) HW based RAID Operations Boot from ISO, Lifecycle from MGR, Relies on HW Availability Performance High Bandwidth High Service Capacity Cost No vSphere License, No Storage License NSX License Required per Socket Low Cost Server (32GBRAM/8vCPU/200GB HD Edge Node VM (Shared in MGMT or Compute) Bare Metal Edge Nodes Design Considerations Edge Node VM or Bare Metal Edge Common for SMB DC
  • 31. Confidential │ ©2019 VMware, Inc. 31 Type/Size Memory vCPU Disk Load balancer VPN (Number of sessions) Specific Usage Guidelines Edge VM Small 4GB 2 200 GB POC only POC only PoC only, LB functionality is not available. Edge VM Medium 8GB 4 200 GB Small Medium (2.5 onwards). 128 Suitable for production with centralized services like NAT, Edge firewall, VPN. Recommended for Small Deployments. Edge VM Large 32GB 8 200 GB Small, Medium & Large LB. Max 1 Large LB and 4 medium LB. 256 Suitable for production with centralized services like NAT, Edge firewall, load balancer etc. Recommended for Mid-Size Deployments. Design Considerations Edge VM Size
  • 32. Confidential │ ©2019 VMware, Inc. 32 N-S Connectivity Options for Edge Design Considerations VPC/MLAG Mgmt Vlan 100 Overlay Vlan 200 External Vlan 300 External Vlan 400 100, 200, 300, 400 Mgmt Vlan 100 Overlay Vlan 200 External Vlan 300 External Vlan 400 Not Recommended No Routing over VPC, MLAG Use Named teaming policies Tier-0 Gateway Edge - Baremetal or VM Mgmt Vlan 100 Overlay Vlan 200 External Vlan 300 100, 200 Mgmt Vlan 100 Overlay Vlan 200 External Vlan 400 Recommended Tier-0 Gateway Edge - Baremetal or VM
  • 33. Confidential │ ©2019 VMware, Inc. 33 Design Considerations Edge Cluster NSX-T Edge nodes are pooled in edge- cluster to provide scale out and High- Availability for Services. Gateway in Active/Active HA mode • Scale out HA • ECMP • Stateless Services (Reflexive NAT) Gateway in Active/Standby HA mode Stateful Services • SNAT/DNAT • Load Balancer • Edge Firewall • DHCP server • VPN • Bridging Services High Availability - Active/Active and Active/Standby Edge Cluster Tier-1 Tier-1 Tier-0 Tier-0 Tier-1 Tier-1 Edge Node1 Edge Node2
  • 34. Design Considerations: Services on Tier-1
Where should you run centralized services?
– Tier-1 gateways run Active/Standby, but the Tier-0 can still provide ECMP northbound.
– Layer 4 and Layer 7 load balancing is available only on Tier-1.
– Overlapping IP addresses are supported by using NAT per Tier-1.
– Perimeter L4-L7 firewall per tenant or namespace.
– Exception: L2VPN is available on Tier-0 only.
(Diagram: two edge nodes (EN1/EN2) in an edge cluster, VLAN-peered to the physical routers.)
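Following that guidance, a hedged sketch of attaching a tenant Tier-1 to the Tier-0 and adding an SNAT rule so overlapping tenant address space hides behind a single translated IP. All names and addresses are illustrative.

```python
import requests

NSX = "https://nsx-mgr.example.com"   # hypothetical manager
AUTH = ("admin", "VMware1!VMware1!")  # lab credentials, assumption

# Tier-1 linked to the Tier-0; advertise NAT and connected routes upstream
t1 = {
    "display_name": "tenant1-t1",
    "tier0_path": "/infra/tier-0s/t0-gw",
    "route_advertisement_types": ["TIER1_NAT", "TIER1_CONNECTED"],
}
requests.patch(f"{NSX}/policy/api/v1/infra/tier-1s/tenant1-t1",
               json=t1, auth=AUTH, verify=False).raise_for_status()

# SNAT rule: overlapping tenant space 172.16.10.0/24 hides behind one IP
snat = {
    "action": "SNAT",
    "source_network": "172.16.10.0/24",
    "translated_network": "198.51.100.10",
}
requests.patch(
    f"{NSX}/policy/api/v1/infra/tier-1s/tenant1-t1/nat/USER/nat-rules/snat-tenant1",
    json=snat, auth=AUTH, verify=False).raise_for_status()
```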
  • 35. Design Considerations: Edge Cluster Design for Services Availability
– Shared or dedicated edge cluster for N-S traffic and services; a shared cluster is recommended for small and mid-size deployments.
– High availability: scale up in the same rack or across racks.
– Consider failure domains for Tier-1 placement (new feature in NSX-T 2.5).
(Diagram: a four-node edge cluster split across Failure Domain 1 (Edge 1/2 behind TOR1/TOR2) and Failure Domain 2 (Edge 3/4 behind TOR3/TOR4).)
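In NSX-T 2.5 the failure domains are objects in the manager API. The sketch below shows the rough shape: create a failure domain per rack, then tag each edge transport node with it. Field names follow the author's reading of the 2.5 API and should be verified against the API guide for your release; the node UUID is a placeholder.

```python
import requests

NSX = "https://nsx-mgr.example.com"   # hypothetical manager
AUTH = ("admin", "VMware1!VMware1!")  # lab credentials, assumption

# Create a failure domain per rack / ToR pair (NSX-T 2.5+)
fd = requests.post(f"{NSX}/api/v1/failure-domains",
                   json={"display_name": "rack-1"}, auth=AUTH, verify=False)
fd.raise_for_status()
fd_id = fd.json()["id"]

# Tag an edge transport node with its failure domain (read-modify-write,
# so the _revision returned by the GET is preserved in the PUT)
tn = requests.get(f"{NSX}/api/v1/transport-nodes/<edge-tn-uuid>",
                  auth=AUTH, verify=False).json()
tn["failure_domain_id"] = fd_id
requests.put(f"{NSX}/api/v1/transport-nodes/<edge-tn-uuid>",
             json=tn, auth=AUTH, verify=False).raise_for_status()
```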
  • 36. VSAN Considerations
  • 37. VSAN Considerations for Management
Availability:
– The minimum recommended VSAN cluster size of 4 nodes deploys N+1 hosts from an NSX perspective, uses FTT=1, and doesn't vacate the storage from a host by default.
– Best practice for NSX Managers/Controllers: deploy N + 1 (maintenance) + 1 (failure), use FTT=2, and vacate VSAN storage from a host during any planned maintenance.
Physical placement:
– Decide immediately: one really good basket, or lots of baskets. Is a cabinet failure or ToR-pair failure in scope? If yes, place no more servers in a cabinet than you are prepared to lose at once. With FTT=2 that means 6 hosts equally spread across 3 cabinets, or potentially a VSAN stretched cluster with 3 nodes across 2 cabinets and a witness.
– Does L2 stretch across cabinets? If not, a load balancer is required to balance the Manager/Controller nodes, and vCenter placement becomes an issue.
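The "vacate the storage" step is easy to miss because the vSAN default only ensures object accessibility. A pyVmomi sketch (vCenter address, credentials, and host name are placeholders) that enters maintenance mode with full data evacuation:

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only; trust certs in prod
si = SmartConnect(host="vcenter.example.com",   # hypothetical vCenter
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)

def find_host(content, name):
    """Return the HostSystem object matching the given name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    try:
        return next(h for h in view.view if h.name == name)
    finally:
        view.Destroy()

host = find_host(si.RetrieveContent(), "esx-01.example.com")

# Enter maintenance mode and evacuate ALL vSAN data from the host,
# not just ensure object accessibility (the default), per the FTT=2 guidance.
spec = vim.host.MaintenanceSpec(
    vsanMode=vim.vsan.host.DecommissionMode(objectAction="evacuateAllData"))
task = host.EnterMaintenanceMode_Task(timeout=0, maintenanceSpec=spec)
```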
  • 38. VSAN Considerations for Edge Node VMs
Availability:
– Simultaneous failures of Edge Node VMs result in catastrophic outages. When using VSAN, if possible, use multiple vSphere clusters or a VSAN stretched cluster.
– If using a single cluster (and VSAN datastore): use at least FTT=2 (the larger the VSAN cluster, the more likely a HW failure will occur); always vacate the storage during a planned maintenance window; and set anti-affinity rules between Edge Node VMs in the same edge cluster (see the sketch below).
Physical placement:
– Where are your Edges peering to? The ToR, or through the ToR to a border or aggregation layer? Affinity rules may be required between Edge Node VMs and their desired hosts.
– Is a cabinet failure or ToR-pair failure in scope? If yes, place no more servers in a cabinet than you are prepared to lose at once. With FTT=2 that means 6 hosts equally spread across 3 cabinets, or potentially a VSAN stretched cluster with 3 nodes across 2 cabinets and a witness.
– Does L2 stretch across cabinets? If not, peering networks and IPs may be cabinet specific, and Edge Node VM TEP pools are needed per cabinet.
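The anti-affinity rule can be scripted so it is never forgotten when Edge Node VMs are redeployed. A pyVmomi sketch, assuming hypothetical cluster and VM names:

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only
si = SmartConnect(host="vcenter.example.com",   # hypothetical vCenter
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

def by_name(vimtype, name):
    """Return the first managed object of the given type with that name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.Destroy()

cluster = by_name(vim.ClusterComputeResource, "compute-edge-cluster")
edge_vms = [by_name(vim.VirtualMachine, n)
            for n in ("edge-node-01", "edge-node-02")]

# DRS anti-affinity: never place both Edge Node VMs on the same host
rule = vim.cluster.AntiAffinityRuleSpec(name="edge-anti-affinity",
                                        enabled=True, mandatory=True,
                                        vm=edge_vms)
spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
cluster.ReconfigureComputeResource_Task(spec, modify=True)
```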
  • 39. Reference Architecture
  • 40. Mid-Size Data Center: Shared Compute and Edge Cluster
– Shared compute and edge cluster
– vSphere-centric design
– Edge in VM form factor
– Strict separation of management plane and data plane: the N-VDS is not installed on hosts in the Management cluster
– Edge and production workloads governed by the same SLA
– Leverage production capacity to scale the edge cluster
– Aligns with VMware Cloud Foundation and the VMware Validated Design
(Diagram: a dedicated Management cluster on VDS, plus two shared Compute and Edge vSphere clusters on N-VDS.)
  • 41. Mid-Size Data Center: Shared Management and Edge Cluster
Based on DellEMC VxRail appliances (Pivotal Ready Architecture). Four clusters in the basic design, built around a rack-failure scenario:
– 3 workload clusters (AZ01, AZ02 and AZ03), each in a single rack, with applications redundantly deployed across all 3 workload clusters
– Management and Edge cluster: a 6-host cluster with VSAN, FTT=2 to withstand a rack failure
– NSX-T Manager/Controller cluster deployed across all 3 racks with host affinity rules
– With NSX-T 2.5, Edge Node VMs are deployed across 3 failure domains, ensuring services availability
  • 42. Small or Mid-Size Data Center: VMware NSX-T on DellEMC VxRail
– NSX-T 2.4.0+ requires VxRail 4.7.110+ or 4.5.300+
– No lifecycle management or visibility of N-VDS NICs in VxRail Manager; validate NIC firmware and drivers against the VMware HCL
– Standardized on 2 virtual switches across 4 pNICs: a VDS for VMkernels (using NIOC and traffic engineering) and an N-VDS for workload VMs and Edge Node VMs
– Single collapsed cluster by default
– Load the NSX-T content pack for vRealize Log Insight
Non-guest traffic on the VDS (Uplink 1 = VMNIC0, Uplink 2 = VMNIC1, both 10/25 Gb; Uplinks 3/4 move to the N-VDS post-deployment for guest/overlay traffic):
– Management: requires IPv6 multicast; Uplink 1 active, Uplink 2 standby; NIOC shares 20
– vMotion: Uplink 1 active, Uplink 2 standby; NIOC shares 50
– vSAN: Uplink 1 standby, Uplink 2 active; NIOC shares 100
– VMs: Uplink 1 active, Uplink 2 standby; NIOC shares 30
  • 43. NSX-T Design Inflection Points: Questions That Need Answering
Collapsed cluster or not? (How many VMs? How many VMs per host? How are you licensed?)
Recommendations: below ~250 VMs a collapsed cluster makes sense; hosts running 25-50 VMs are usually RAM constrained; socket-based licensing favors larger hosts.
What types of failures to design for? (Cabinet-level failure? Single ToR failure? Compound HW failure?)
Recommendations: spread clusters across cabinets; set VSAN FTT based on the number of hosts per cabinet per cluster; place Edge Nodes in multiple cabinets; size host bandwidth and pNIC configuration accordingly; use VSAN FTT=2 or higher; with IP storage, use 25 Gb pNICs or multiple 10s, or separate the Edge from the workload hosts.
VSAN or traditional storage? (Is there network contention? Scale out for availability? Separate datastores?)
Recommendations: with HCI storage, fewer VMs per host means more hosts and higher resiliency; with iSCSI/NFS/FC/VVOL, use separate datastores to host each Edge Node VM/failure domain or NSX-T Manager appliance.
  • 44. NSX-T Design Inflection Points: Questions That Need Answering (continued)
Bare metal or VM-based Edge? (App network sensitivity, e.g. web app vs. voice? N-S and services throughput? Resources/flexibility?)
Recommendations: use the bare metal Edge where applications need sub-second convergence (e.g. VoIP) or services need high throughput; the VM Edge is recommended for SMB customers because of its flexibility.
2x10, 2x25, 4x10 or 4x25 pNICs? (Single or multiple virtual switches? VMkernel placement? Edge Node VM bandwidth contention?)
Recommendations: collapsed cluster, Edge VM only. With 2 pNICs, use a single N-VDS with the VMkernels on the N-VDS (use named teaming policies). With 4 pNICs, use a VDS plus an N-VDS, with the VMkernels on the VDS and the Edge on the N-VDS.
Single-tier or two-tier routing? (Are centralized services used? Operational skill sets, i.e. admin vs. tenant gateways?)
Recommendations: use two-tiered routing; run centralized services on Tier-1 (A/S) to retain ECMP on Tier-0 (A/A); configure the Tier-0 once and delegate at the Tier-1 level to junior admins or tenants; use failure domains for Tier-1 placement.
  • 45. Growth and Expansion: Don't Be a Victim of Your Own Success
Scaling up the host count:
– A single cluster can expand to 64 hosts, but that doesn't mean it should. It may be better to migrate Management and Edge workloads to their own cluster.
– Additional clusters can be added and the N-VDS extended to them. This may be a good time to redistribute or deploy more Edge Node VMs.
Increased services use:
– Scale-up options: Edge Node VMs can be scaled from Small to Medium to Large, and even to bare metal; load balancers can be scaled from Small to Medium to Large (see the sketch below).
– Scale-out options: additional Edge Node VMs can be deployed into the same edge cluster.
Operational efficiencies:
– VMware vIDM is included with NSX-T and can be used to enable RBAC to delegate rights (security, LB, operations).
– vRealize Log Insight is included with NSX-T to make it easier to support, troubleshoot, and audit security events.
– vRealize Network Insight is available to streamline network support and operations.
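Scaling a load balancer in place can be as simple as patching the size on the LB service attached to the Tier-1 gateway. A hedged Policy API sketch; the service and gateway names are illustrative.

```python
import requests

NSX = "https://nsx-mgr.example.com"   # hypothetical manager
AUTH = ("admin", "VMware1!VMware1!")  # lab credentials, assumption

# Grow the LB instance by patching its size; the service stays attached
# to the same Tier-1 gateway while it scales.
lb = {
    "display_name": "tenant1-lb",
    "size": "MEDIUM",                 # SMALL -> MEDIUM -> LARGE as load grows
    "connectivity_path": "/infra/tier-1s/tenant1-t1",
}
requests.patch(f"{NSX}/policy/api/v1/infra/lb-services/tenant1-lb",
               json=lb, auth=AUTH, verify=False).raise_for_status()
```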
  • 46. How to Get Started: Resources
– Learn: design guides and demos at nsx.techzone.vmware.com
– Try: take a Hands-on Lab
– Connect: @VMwareNSX, #runNSX; join VMUG and the VMware Communities (VMTN)