Multi-Rate 1/2.5/5/10 Gigabit Edge PoE++
Multi-Rate Spine-Leaf Design (10/25/40/50/100 Gigabit)
Web-scale for the rest of us...
Web-Scale for the Enterprise (Any Scale upgrades).
• SLAs with Agility (Storage Pools and Containers).
• Security, Control & Analytics (Data follows a VM as it moves).
• Predictable Scale (I/O & data locality are critical).
X460-G2 (Advanced L3 1-40G) Multirate Option
PoE
Fiber
DC
Policy
Fit: The Swiss Army Knife of Switches
Half Duplex
½ & ½
3 Models
This is where 10G on existing copper Cat5e and Cat6 extends the life of the installed cable plant. Great for 1:N convergence.
X620 (10G Copper or Fiber)
Speed Next Gen Edge
Lowered TCO via Limited Lifetime Warranty
XYZ Account Design Goals
• Fractional consumption and predictable scale
(Distributed everything).
• No single point of failure (Always-on systems).
• Extensive automation and rich analytics.
XYZ Account Fundamental Assumptions...
• Unbranded x86 servers: fail-fast systems
• All intelligence and services in software
• Linear, predictable scale-out
CAPEX or OPEX (you choose)?
Reduced Risk (just witness or take action)
Time is the critical factor with XYZ Account Services...
Infrastructure
Business model
Ownership
Considerations
Management
Location
• 32 x 100Gb
• 64 x 50Gb
• 128 x 25Gb
• 128 x 10Gb
• 32 x 40Gb
96 x 10GbE Ports (via 4x10Gb breakout)
8 x 10/25/40/50/100G
10G
Next Gen: Spine Leaf
X670 & X770 - Hyper Ethernet
Common Features
• Data Center Bridging (DCB) features
• Low ~600 ns chipset latency in cut-through mode.
• Same PSUs and fans as the X670s (front-to-back or back-to-front airflow), AC or DC.
X670-G2-72x (10GbE Spine/Leaf): 72 x 10GbE
X670-48x-4q (10GbE Spine/Leaf): 48 x 10GbE & 4 x QSFP+
QSFP+
40G DAC
Extreme Feature Packs
Core
Edge
AVB
OpenFlow
Advanced Edge
1588 PTP
MPLS
Direct Attach
Optics License
Extreme Switches include the license they normally need. Like any other software platform, you have an upgrade path.
QSFP28
100G DAC
Disaggregated Switch
Purple Metal
XoS as a Platform. Network as a Platform...
Distributed Everything (no proprietary tech).
Always-on Operations (Spine-leaf Resilience).
Extensive Automation (rich analytics).
Purposed for
Broadcom (ASICs)
XYZ Account Business Value
XoS Platform
Config L2/L3
Analytics
Any OS
Any Bare Metal Switch
Policy
Disaggregated Switch
Bare - Grey
Web-Scale
Configuration consistency...
What constitutes a Software Defined Data Center (SDDC)? Abstract, pool, and automate across...
XYZ Account Strategic Asset
Initial Configuration Tasks...
• Multi-chassis LAG (MLAG/LACP)
• Routing configuration (VRRP/HSRP)
• STP instances/mapping and VLANs
Recurring configuration (see the automation sketch below)...
• VRRP/HSRP (advertise new subnets)
• Access lists (ACLs)
• VLANs (adjust VLANs on trunks)
• VLAN-to-STP/MST mapping
• Add VLANs on uplinks
• Add VLANs to server ports
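The recurring items above are exactly the tasks worth automating. Below is a minimal, hypothetical Python sketch of how such a change could be scripted; the push_config helper and the step strings are illustrative placeholders, not a specific switch API or CLI.

```python
# Hypothetical sketch: generate the recurring per-VLAN change set for a pair
# of leaf switches. push_config() is a stand-in for whatever transport the
# switches actually expose (SSH, REST, NETCONF); nothing here is a real API.
from dataclasses import dataclass
from typing import List

@dataclass
class VlanChange:
    vlan_id: int
    name: str
    uplink_ports: List[str]   # trunks toward the spine
    server_ports: List[str]   # access ports toward the servers

def build_change_set(change: VlanChange) -> List[str]:
    """Return the ordered configuration steps for one new VLAN."""
    steps = [f"create vlan {change.name} tag {change.vlan_id}"]
    # Trunk the VLAN on the uplinks first so the fabric learns it,
    # then attach the server-facing ports, then the housekeeping items.
    steps += [f"tag vlan {change.vlan_id} on uplink {p}" for p in change.uplink_ports]
    steps += [f"add vlan {change.vlan_id} untagged on server port {p}" for p in change.server_ports]
    steps.append(f"map vlan {change.vlan_id} to MST instance 1")
    steps.append(f"advertise subnet for vlan {change.vlan_id} via VRRP")
    return steps

def push_config(switch: str, steps: List[str]) -> None:
    """Placeholder: print the intended steps instead of touching a device."""
    for step in steps:
        print(f"[{switch}] {step}")

if __name__ == "__main__":
    change = VlanChange(10, "app-tier", uplink_ports=["1:49", "1:50"],
                        server_ports=["1:1", "1:2"])
    for switch in ("leaf-1", "leaf-2"):
        push_config(switch, build_change_set(change))
```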
[Diagram: Control Plane (logical) over Data Plane (physical compute, network, storage). Logical Routers 1-3 map to VXLAN 5001-5003. The Controller Directory holds the MAC, ARP, and VTEP tables; edge services (VTEP, DHCP/DNS, Policy) front the VMs. A sketch of such a directory follows.]
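To make the directory tables above a little more concrete, here is a small Python sketch of a MAC-to-VTEP mapping keyed by VNI; the class and method names are invented for illustration and do not model any particular controller.

```python
# Illustrative sketch of the kind of state an overlay controller directory
# tracks: which VTEP (physical tunnel endpoint) a VM's MAC sits behind,
# per VXLAN network identifier (VNI).
from collections import defaultdict
from typing import Optional

class ControllerDirectory:
    def __init__(self) -> None:
        # vni -> { mac -> vtep_ip }
        self.mac_table = defaultdict(dict)

    def learn(self, vni: int, mac: str, vtep_ip: str) -> None:
        """Record (or move) a MAC behind a VTEP for a given VNI."""
        self.mac_table[vni][mac] = vtep_ip

    def lookup(self, vni: int, mac: str) -> Optional[str]:
        """Return the VTEP to tunnel to, or None if the MAC is unknown."""
        return self.mac_table[vni].get(mac)

directory = ControllerDirectory()
directory.learn(5001, "00:50:56:aa:bb:01", "10.0.0.11")
directory.learn(5001, "00:50:56:aa:bb:02", "10.0.0.12")
print(directory.lookup(5001, "00:50:56:aa:bb:02"))  # -> 10.0.0.12
```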
[Diagram: Network Access Control — the NAC client (e.g., Joe Smith) authenticates against the NAC server; based on who, where, when, what device, and how, the Summit enforcement point (with NetSight Advanced) places the user into the XYZ Account access-controlled subnet: quarantine, remediate, or allow.]
This is where: if X + Y, then Z...
• LLDP-MED
• CDPv2
• ELRP
• ZTP
If a user matches a defined attribute value (ACL, QoS), then place the user into a defined ROLE (see the sketch below).
A port is what it is because...
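A minimal sketch of that "attribute match → role" idea in Python; the attributes, roles, and the ACL/QoS actions attached to them are invented for illustration and do not correspond to any particular policy engine.

```python
# Illustrative only: map authenticated-user attributes to a port role,
# then look up the ACL/QoS treatment that role implies.
ROLE_RULES = [
    # (attribute, expected value, role to assign) -- first match wins
    ("posture", "unhealthy", "quarantine"),
    ("device_type", "voip-phone", "voice"),
    ("department", "engineering", "developer"),
]

ROLE_ACTIONS = {
    "developer":  {"acl": "permit-lab-subnets",      "qos": "best-effort"},
    "voice":      {"acl": "permit-voice-vlan",       "qos": "expedited-forwarding"},
    "quarantine": {"acl": "redirect-to-remediation", "qos": "scavenger"},
    "guest":      {"acl": "internet-only",           "qos": "best-effort"},
}

def assign_role(user_attributes: dict) -> str:
    """Return the first role whose rule matches; fall back to guest."""
    for attribute, expected, role in ROLE_RULES:
        if user_attributes.get(attribute) == expected:
            return role
    return "guest"

user = {"name": "Joe Smith", "department": "engineering", "posture": "healthy"}
role = assign_role(user)
print(role, ROLE_ACTIONS[role])  # -> developer {'acl': 'permit-lab-subnets', ...}
```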
Datacenter Evolution
• 1990s: Client-Server (racks of x86)
• 2000s: Virtualization (consolidated x86)
• 2010+: Cloud (Public Cloud, Intelligent Software)
Roadblocks
• Silos
• Complexity
• Scaling
Application Experience (full context): App Analytics — stop the finger-pointing over application network response. Why not do this in the network?
• Collector (flow or bit bucket): 3 million flows, scaling to 6 million flows.
• Sensors: X460 IPFIX — 4,000 flows (2,048 ingress, 2,048 egress).
• Sensor: PV-FC-180, S- or K-Series (CoreFlow2) — 1 million flows.
• Flow-based access points: from the controller, 8K flows per AP (24K flows on a C35).
Business Value
Context: BW, IP, HTTP://, Apps
Platform, Automation, Control, Experience: Solution Framework
Is your network faster today than it was 3 years ago? Going forward it should deliver more, faster, different.
DIY Fabric for the DIY Data Center
Three fundamental building blocks for a Data Center Network Automation Solution:
• Orchestration (OpenStack, vRealize, ESX, NSX, MS Azure, ExtremeConnect)
• Overlay (VXLAN, NVGRE, ...)
• Underlay (traditional L2/L3 protocols: OSPF, MLAG, etc.)
Underlay / Overlay / Orchestration
How is a traditional aggregated technology like a duck? A duck can swim, walk and fly, but...
Z: I/O Bandwidth
Y: Memory / Storage
X: Compute
XoS fn(x,y,z) is like an elastic Fabric
• You can never have enough.
• Customers want scale made easy.
• Hypervisor integration.
The next convergence will be collapsing datacenter designs into smaller, elastic form factors for compute, storage and networking. The application is always the driver.
Summit
Cisco ACI
HP Moonshot
XYZ Account Data Center
Chassis vs. Spline
Fabric Modules (Spine)
I/O Modules (Leaf)
Spine
Leaf
Proven value with the legacy approach:
• Cannot access line cards.
• No L2/L3 recovery inside.
• No access to the fabric.
Disaggregated value...
• Control of top-of-rack switches.
• L2/L3 protocols inside the Spline.
• Full access to Spine switches.
No EGO, complexity, or vendor lock-in.
Fat-Tree
• Traditional 3-tier model (less cabling).
• Link speeds must increase at every hop (less predictable latency).
• Common in chassis-based architectures (optimized for North/South traffic).
Clos / Cross-Bar
• Every leaf is connected to every spine (efficient utilization, very predictable latency).
• Always two hops to any leaf (more resiliency, flexibility and performance).
• Friendlier to east/west traffic (the uplink to the rest of the network is just another leaf). A quick oversubscription sketch follows.
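To make the leaf/spine arithmetic concrete, here is a short Python sketch that computes a leaf's oversubscription ratio from its downlink and uplink capacity; the port counts in the example are illustrative, not a recommendation.

```python
def oversubscription(downlink_ports: int, downlink_gbps: float,
                     uplink_ports: int, uplink_gbps: float) -> float:
    """Ratio of server-facing capacity to fabric-facing capacity on one leaf."""
    return (downlink_ports * downlink_gbps) / (uplink_ports * uplink_gbps)

# Example with illustrative numbers: 48 x 10G server ports, 4 x 40G uplinks.
ratio = oversubscription(48, 10, 4, 40)
print(f"{ratio:.1f}:1 oversubscribed")  # -> 3.0:1 oversubscribed
```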
The XYZ Account handshake layer:
• This is where convergence needs to happen – LAN/SAN, FCoE, ETS. Stop or allow whatever you can (efficient multicasting).
• Virtualization happens with VXLAN and vMotion (controlled by the overlay).
• An N+1 fabric design needs to happen here (delivers simple, no-vanity future-proofing, no-forklift migrations, interop between vendors and hitless operation).
This is where a Fabric outperforms the Big Uglies.
ONE to ONE: Spine Leaf
The XYZ Account Ethernet Expressway layer: deliver massive scale...
• This is where low latency is critical – switch as quickly as you can. DO NOT slow down the core; keep it simple (disaggregated Spline + one Big Ugly).
• Elastic capacity – today's XYZ Account spines are tomorrow's leafs. Dial in the bandwidth to your specific needs with the number of uplinks.
• Availability – the state of the network is kept in each switch; no single point of failure. Seamless XYZ Account upgrades; easy to take a single switch out of service.
(Cloud Fabric) Disaggregation
Spine
Leaf
Legacy challenges: complex/slow/expensive, scale-up and scale-out, vendor lock-in, proprietary (HW, SW) vs. commodity.
Fabric Modules (Spine)
I/O Modules (Leaf)
Spline (Speed)
Active-Active redundancy
fn(x,y,z): the next convergence will be collapsing datacenter designs into smaller, elastic form factors for compute, storage and networking.
• This is where you can never have enough.
• Customers want scale made easy.
• Hypervisor integration with cloud simplicity.
[Diagram: redundant L2/L3 switch pairs at each tier.]
Start Small; Scale as You Grow
This is where you can simply add Extreme Leaf Clusters.
• Each cluster is independent (including servers, storage, database & interconnects).
• Each cluster can be used for a different type of service.
• Delivers a repeatable design which can be added as a commodity.
XYZ Account Spine
Leaf
Cluster Cluster Cluster
Egress / Ingress Scale
Active / Active
VM VM VM
RR RR
BGP Route-Reflector (RR)
iBGP Adjacency
This is where: VXLAN (Route Distribution)
Why VXLAN? It flattens the network to a single tier from the XYZ Account end-station perspective.
• All IP/BGP based (Virtual eXtensible Local Area Network). Host route distribution is decoupled from the underlay protocol.
• VXLAN's goal is to allow dynamic, large-scale, isolated virtual L2 networks to be created for virtualized and multi-tenant environments.
• Route-Reflectors are deployed for scaling purposes – easy setup, small configuration. (An encapsulation sketch follows.)
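As a concrete reminder of what the overlay adds on the wire, here is a short Python sketch that builds the 8-byte VXLAN header defined in RFC 7348 (a flags byte with the valid-VNI bit, a 24-bit VNI, and reserved fields) in front of an inner Ethernet frame; the inner frame here is a placeholder.

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header from RFC 7348.

    Byte 0: flags, with bit 0x08 set to mark the VNI as valid.
    Bytes 1-3: reserved. Bytes 4-6: 24-bit VNI. Byte 7: reserved.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags_and_reserved = 0x08 << 24   # 0x08000000
    vni_and_reserved = vni << 8       # VNI in the upper 24 bits of the word
    return struct.pack("!II", flags_and_reserved, vni_and_reserved)

# Encapsulation order on the wire: outer IP/UDP (dst port 4789),
# then this header, then the original (inner) Ethernet frame.
inner_frame = b"\x00" * 64            # placeholder inner frame
payload = vxlan_header(5001) + inner_frame
print(len(payload), "bytes after VXLAN encapsulation of a 64-byte frame")
```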
[Diagram: VXLAN tunnels let you traffic-engineer "like ATM or MPLS" – UDP-encapsulated segments start and stop at the VTEPs, ride the existing IP network, and carry VM-to-VM traffic between endpoints (VTEP ↔ VTEP).]
Dense 10GbE interconnect using breakout cables, copper or fiber.
[Diagram: racks of VMs grouped into App 1, App 2, and App 3.]
Intel, Facebook, OCP
Facebook 4-Post Architecture – each leaf or rack switch has up to 48 10G downlinks. Segmentation and multi-tenancy without routers.
• Each spine has 4 uplinks – one to each leaf (4:1 oversubscription).
• Enables insertion of services without sprawl (analytics for fabric and application forensics).
• No routers at the spine. One failure reduces cluster capacity to 75% (see the sketch below).
(5 S's) Needs to be Scalable, Secure, Shared, Standardized, and Simplified.
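The 75% figure is just the fraction of equal-cost spine paths that survive. A two-line Python sketch of that arithmetic, using the 4-spine example above:

```python
def surviving_capacity(total_spines: int, failed_spines: int) -> float:
    """Fraction of cluster bandwidth left when spines carry equal load."""
    return (total_spines - failed_spines) / total_spines

print(surviving_capacity(4, 1))  # -> 0.75, i.e. 75% of capacity remains
```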
Network (Fit)
Network (Fit) Overlay Control
XYZ Account: the VXLAN forwarding plane under NSX control.
• This is where logical switches span physical hosts and network switches. Application continuity is delivered with scale. Scalable multi-tenancy across the data center.
• Enabling L2 over L3 infrastructure – pool resources from multiple data centers with the ability to recover from disasters faster.
• Address network sprawl with a VXLAN overlay. Deeper integration with infrastructure and operations partners, integrations, and frameworks for IT organizations.
VMware NSX (Control Plane)
Management plane: delivered by the NSX Manager.
Control plane: the NSX Controller manages logical networks and data plane resources.
Extreme delivers an open, high-performance data plane with scale.
NSX Architecture and Components
[Topology diagram: CORE/CAMPUS feeding a 770/870 Spine (X870-32c) in the Data Center – Private Cloud. Server PODs attach through 10Gb aggregation, high-density 10Gb aggregation, 10Gb/40Gb aggregation (X770, X870-96x-8c), and high-density 25Gb/50Gb aggregation (X670-G2), each with 100Gb uplinks; virtual clusters vC-1, vC-2, … vC-N sit on top. Per-port switch artwork (including the Cisco N3K-C3064PQ rack switches) omitted.]
This is where XYZ Account must first have the ability to scale with customer demand, delivering more than just disk space and processors.
• Scale – XYZ Account must be able to seamlessly fail over, scale up, scale down, and optimize management of the applications and services.
• Flexibility – the XYZ Account infrastructure must be able to host heterogeneous and interoperable technologies.
• Business – the business model costs might be optimized for operating expenses or towards capital investment.
Cloud Computing (Control Plane)
The classic shared-responsibility stack (Applications, Data, Runtime, Middleware, O/S, Virtualization, Servers, Storage, Networking) across the delivery models:
• On-Premise – you manage the entire stack.
• Infrastructure (as a Service) – you manage Applications, Data, Runtime, Middleware, and O/S; the vendor manages Virtualization, Servers, Storage, and Networking.
• Platform (as a Service) – you manage Applications and Data; the vendor manages the rest.
• Software (as a Service) – the vendor manages the entire stack.
Public / Private / MSP
FABRIC
This is where Azure ExpressRoute lets XYZ Account create private connections between Azure datacenters and XYZ Account infrastructure on or off premises.
• ExpressRoute connections don't go over the public Internet. They offer more reliability, faster speeds, lower latencies, and higher security than typical Internet connections.
• Transferring data between on-premises systems and Azure over ExpressRoute can yield significant cost benefits.
• XYZ Account can establish connections to Azure at an ExpressRoute location, such as an exchange provider facility, or connect directly to Azure from its existing WAN, such as a multi-protocol label switching (MPLS) VPN provided by a network service provider.
Microsoft Azure (Control Plane)
Cloud: the key impact of this model for the customer is a move from managing physical servers to a focus on logical management of data storage through policies.
Compute / Storage
Data Center Architecture Considerations
[Diagram: a client request flows through the compute, cache, database, and storage tiers before the response returns.]
• 80% North-South traffic. Oversubscription: up to 200:1 (client request + server response = 20% of traffic).
• Inter-rack latency: 150 microseconds. Lookup/storage = 80% of traffic.
• Scale: up to 20 racks (non-blocking two-tier designs are optimal).
Purchase "vanity free"
This is where Open Compute might allow companies to purchase "vanity free". Previous, outdated data center designs support more monolithic computing.
• A low-density X620 might help XYZ Account avoid stranded ports.
• Availability – dual X620s can be deployed to minimize the impact of maintenance.
• Flexibility – the X620 can support both 1G and 10G to servers and storage.
One RACK Design (closely coupled / nearly coupled / loosely coupled)
Shared combo ports: 4x10GBASE-T & 4xSFP+; 100Mb/1Gb/10GBASE-T
The monolithic datacenter is dead.
[Diagram: one-rack design – servers and storage cabled to redundant Summit switches plus a management switch.]
Open Compute – Two-Rack Design
This is where XYZ Account can reduce OPEX and leverage a repeatable solution.
• With the spline setup, XYZ Account can put redundant switches in the middle and link each server to those switches.
• Fewer hops between servers – the important thing is that each server is precisely one hop from any other server.
• Avoid stranded ports – designs often have a mix of fat and skinny nodes. If XYZ Account deploys 48-port leaf switches, many configurations might have anywhere from 16 to 24 stranded ports (see the sketch below).
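A tiny Python sketch of that stranded-port arithmetic; the per-rack node counts are illustrative assumptions, not measurements.

```python
def stranded_ports(leaf_ports: int, attached_nodes: int) -> int:
    """Ports left unused on a leaf once every node in the rack is cabled."""
    return max(leaf_ports - attached_nodes, 0)

# Illustrative mix of "fat" and "skinny" racks against a 48-port leaf.
for nodes in (24, 28, 32):
    print(nodes, "nodes ->", stranded_ports(48, nodes), "stranded ports")
# -> 24, 20, and 16 stranded ports, matching the 16-24 range above
```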
Two RACK
[Diagram: typical spline setup – two racks of servers and storage, each cabled to the redundant Summit switches in the middle, plus a management switch per rack.]
Open Compute: Eight-Rack POD Design
This is where: typical spline setup – Eight-Rack POD.
Leaf
Spine
[Diagram: eight racks of servers and storage, each with redundant Summit switches and a management switch, uplinked to the spine layer.]
Chassis V Spline
Fabric Modules (Spine)
I/OModules(Leaf)
Spine
Leaf
Proven value with legacy approach.
• Can not access Line cards.
• No L2/l3 recovery inside.
• No access to Fabric.
Disaggregated value...
• Control Top-of-Rack Switches
• L2/L3 protocols inside the Spline
• Full access to Spine Switches
No EGO, Complexity or Vendor Lock-in).
Fat-Tree
Clos / Cross-Bar
• Traditional 3-tier model (Less cabling).
• Link speeds must increase at every hop (Less
predictable latency).
• Common in Chassis based architectures (Optimized
for North/South traffic).
• Every Leaf is connected to every Spine (Efficient
utilization/ Very predictable latency).
• Always two hops to any leaf (More resiliency,
flexibility and performance).
• Friendlier to east/west traffic (The uplink to the
rest of the network is just another leaf).
The XYZ Account handshake layer:
• This is where convergence needs to happen – LAN/SAN, FCoE, ETS. Stop or allow
whatever you can (Efficient Multicasting).
• Virtualization happens with VXLAN and VMotion (Control by the overlay).
• N plus one fabric design needs to happen here (Delivers simple no vanity future proofing,
No-forklift migrations, interop between vendors and hit-less operation).
This is where,
a Fabric outperforms the Big Uglies
ONE to ONE: Spine Leaf
The XYZ Account Ethernet Expressway Layer: deliver massive scale...
• This is where low latency is critical, switch as quickly as you can. DO NOT slow down
the core keep it simple (Disaggregated Spline + One Big Ugly
• Elastic Capacity - Today s XYZ Account s spines are tomorrow s leafs. Dial-in the
bandwidth to your specific needs with the number of uplinks.
• Availability - the state of the network is kept in each switch; no single point of failure.
Seamless XYZ Account upgrades, easy to take a single switch out of service.
(Cloud Fabric) Disaggregation
Spine
Leaf
Legacy Challenges:
Complex/Slow/Expensive
Scale-up and Scale out
Vendor lock-in
Proprietary (HW, SW)Commodity
Fabric Modules (Spine)
I/OModules(Leaf)
Spline (Speed)
Active - Active redundancy
fn(x,y,z) The next convergence will be collapsing
datacenter designs into smaller, elastic form
factors for compute, storage and networking.
• This is where, you can never have enough.
• Customers want scale made easy.
• Hypervisor integration w cloud simplicity.
L2 L3 L2 L3
L2 L3 L2 L3
L2 L3
Start Small; Scale as You Grow
This is where, you can simply add
a Extreme Leaf Clusters
• Each cluster is independent
(including servers, storage,
database & interconnects).
• Each cluster can be used for
a different type of service.
• Delivers repeatable design
which can be added as a
commodity.
XYZ Account Spine
Leaf
Cluster Cluster Cluster
Egress
Scale
Ingress
Active / Active
VM
VMVM
RR RR
BGP Route-ReflectorRR
iBGP Adjacency
This is where
VXLAN (Route Distribution)
This is where Why VxLAN? It Flattens network to a single
tier from the XYZ Account end station
perspective.
• All IP/BGP based (Virtual eXtensible Local
Area Network). Host Route Distribution
decoupled from the Underlay protocol.
• VXLAN s goal is allowing dynamic large
scale isolated virtual L2 networks to be
created for virtualized and multi-
tenant environments.
• Route-Reflectors deployed for scaling
purposes - Easy setup, small configuration.
TrafficEngineer“likeATMorMPLS”
UDP
Start
Stop
UDP UDP
UseExistingIPNetwork
VM
VM
VM
VM
VM
VM
VM
VM
VTEP VTEP
Dense 10GbE
Interconnect using
breakout cables,
Copper or Fiber
VM
VM
VM
VM
VM
VM
VM
VM
VM
VM
VM
VM
VM
VM
VM
App 1
App 2
App 3
Intel, Facebook, OCP
Facebook 4-Post Architecture - Each
leaf or rack switch has up to 48 10G
downlinks. Segmentation or multi-tenancy
without routers.
• Each spine has 4 uplinks – one to each
leaf (4:1 oversubscription).
• Enable insertion of services without
sprawl (Analytics for fabric and
application forensics).
• No routers at spine. One failure
reduces cluster capacity to 75%.
(5 S's) Needs to be Scalable, Secure,
Shared, Standardized, and Simplified.
Network (Fit) Overlay Control
The XYZ Account the VxLan forwarding plane for NSX control:
• This is where logical switches span across physical hosts and network switches. Application
continuity is delivered with scale. Scalable Multi-tenancy across data center.
• Enabling L2 over L3 Infrastructure - Pool resources from multiple data centers with the ability to
recover from disasters faster.
• Address Network Sprawl with an VXLAN overlay. Deeper Integration with infrastructure and
operations partners, integrations, and frameworks for IT organizations.
Vmware NSX (Control Plane)
Management Plane deliver
by the NSX Manager.
Control Plane NSX Controller
Manages Logical networks
and data plane resources.
Extreme delivers an open
high performance data
plane with Scale
NSX Architecture and Components
CORE
CAMPUS
Diagram: core and campus spine/leaf wiring - X870-32c spines, 48-port 10Gb leaf switches (N3K-C3064PQ faceplates shown), and racks of servers with 4 x 1Gb NICs.
10Gb
Aggregation
High
Density
10Gb
Aggregation
10Gb/40Gb
Aggregation
High Density 25Gb/50Gb
Aggregation
X770 X870-96x-8c
100Gb
Uplinks
X670-G2
100Gb
Uplinks
Server PODs
770 / 870 Spine
Data Center – Private Cloud
vC-1 vC-2
…
vC-N
This is where XYZ Account must first have the ability to scale with customer demand, delivering more than just disk space and processors.
• Scale - XYZ Account must be able to seamlessly fail over, scale up, scale down, and optimize management of the applications and services.
• Flexibility - the XYZ Account infrastructure must be able to host heterogeneous and interoperable technologies.
• Business - the business model costs might be optimized for operating expenses or towards capital investment.
Cloud Computing (Control Plane)
• On-Premise: you manage the full stack - applications, data, runtime, middleware, O/S, virtualization, servers, storage, and networking.
• Infrastructure as a Service: the vendor manages networking, storage, servers, and virtualization; you manage the O/S and everything above it.
• Platform as a Service: the vendor manages everything except applications and data.
• Software as a Service: the vendor manages the entire stack.
Public / Private / MSP
FABRIC
This is where Azure ExpressRoute lets XYZ Account create private connections between Azure datacenters and XYZ Account infrastructure on or off premises.
• ExpressRoute connections don't go over the public Internet. They offer more reliability, faster speeds, lower latencies, and higher security than typical Internet connections.
• Transferring data between on-premises systems and Azure can yield significant cost benefits.
• XYZ Account can establish connections to Azure at an ExpressRoute location, such as an Exchange provider facility, or connect directly to Azure from an existing WAN, such as a multi-protocol label switching (MPLS) VPN provided by a network service provider.
Microsoft Azure (Control Plane)
Cloud - the key impact of this model for the customer is a move from managing physical servers to focusing on logical management of data storage through policies.
Compute Storage
Data Center Architecture
Considerations
Compute
Cache
Database
Storage
Client
Response
• 80% north-south traffic. Oversubscription: up to 200:1 (client request + server response = 20% of traffic).
• Inter-rack latency: 150 microseconds. Lookup/storage = 80% of traffic.
• Scale: up to 20 racks (non-blocking 2-tier designs are optimal).
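A minimal sketch of how the oversubscription figure is derived: the 20%/80% split and the 200:1 ceiling come from the text above, while the rack count, port speeds, and uplink budget below are hypothetical.

def edge_oversubscription(server_ports: int, port_gbps: float, uplink_gbps: float) -> float:
    """Server-facing bandwidth divided by uplink bandwidth."""
    return (server_ports * port_gbps) / uplink_gbps

# Hypothetical example: 20 racks of 48 x 10G servers behind 2 x 100G uplinks
ratio = edge_oversubscription(20 * 48, 10, 2 * 100)
print(f"{ratio:.0f}:1 oversubscription "
      f"({20 * 48 * 10} Gbps of server ports over 200 Gbps of uplink)")
# The 200:1 ceiling quoted above would tolerate as little as:
print(f"{20 * 48 * 10 / 200:.0f} Gbps of uplink for the same server count")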
Purchase "vanity free"
This is where..
Open Compute might allow companies to purchase "vanity free". Previous, outdated data center designs support more monolithic computing.
• Low density - the X620 might help XYZ Account avoid stranded ports.
• Availability - dual X620s can be deployed to minimize the impact of maintenance.
• Flexibility - the X620 can support both 1G and 10G to servers and storage.
One RACK Design
Closely
coupled
Nearly
coupled
Loosely
coupled
Shared Combo Ports
4x10GBASE-T & 4xSFP+
100Mb/1Gb/10GBASE-T
The monolithic datacenter
is dead.
Servers
Storage
Summit
Management
Switch
Summit
Summit
Storage
Management
Servers
Open Compute - Two Rack Design
This is where XYZ Account can reduce OPEX and leverage a repeatable solution.
• With the spline setup, XYZ Account can put redundant switches in the middle and link each server to those switches.
• Fewer hops between servers - the important thing is that each server is precisely one hop from any other server.
• Avoid stranded ports - designs often have a mix of fat and skinny nodes. If XYZ Account deploys 48-port leaf switches, many configurations might have anywhere from 16 to 24 stranded ports.
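A minimal sketch of the stranded-port arithmetic: the 48-port leaf and the 16-24 range come from the text, while the rack mixes and helper name below are hypothetical.

def stranded_ports(leaf_ports: int, servers: int, storage_nodes: int,
                   ports_per_node: int = 1) -> int:
    """Ports left unused on a leaf once the rack is populated."""
    used = (servers + storage_nodes) * ports_per_node
    return max(leaf_ports - used, 0)

# Hypothetical racks of "fat" and "skinny" nodes
for servers, storage in ((24, 8), (20, 4)):
    print(f"{servers} servers + {storage} storage nodes on a 48-port leaf: "
          f"{stranded_ports(48, servers, storage)} stranded ports")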
Two RACK
Servers
Storage
Summit
Management
Switch
Summit
Summit
Storage
Management
Servers
Storage
Summit
Management
Switch
Summit
Summit
Typical spline setup
Open Compute : Eight Rack POD Design
This is where
Typical spline setup : Eight Rack POD
Leaf
Spine
Diagram: eight racks, each with servers, storage, a management switch, and two Summit leaf switches, connected to the spine.
OPEX Components of Converged Environment
Security
Compliance
Automation
Operations
Compute
Storage
Networking
X Y
Z
Pooled compute, network,
and storage capacity
XYZ Account 2017 Design
CAPEX Components of Converged Environment
Cores
Memory
Spindles
Network
6 12 16 20
64GB 128GB 192GB 256GB 512GB
3.6TB 4.8TB 6TB 8TB 10TB
10G RJ45 SFP+ QSFP+ QSFP28
SSD SSD
2016 Design
10G Compute, Memory and Storage
Jeff Green
2017
Rev. 1
South
Legend
Legend
10G Passive (PN 10306 ~ 5m, 10307~ 10M)
10G SFP+ Active copper cable (up to 100m)
40G Passive (PN 10321 ~3m, 10323~ 5m)
40G Active (PN 10315~10M, 10316 ~20m, 10318~ 100m)
40G Fan-out (PN 10321 ~3m, 10322 ~5m, PN 10GB-4-
F10-QSFP ~10m, PN 10GB-4-F20-QSFP ~20m, )
10G Passive (PN 10304 ~1m, 10305~3m, 10306~5m)
SFP+ DAC Cables
QSFP+ DAC Cables
10 LRM 220m (720ft/plus mode conditioning) (PN 10303)
10GBASE-T over Class E Cat 6 (55M) (10G)
10GBASE-T over Class E Cat 6a or 7 (100M) (10G)
10 SR over OM3 (300M) or OM4 (400M) (PN 10301)
10 LR over single mode (10KM) 1310nm (PN 10302)
10 ER over single mode (40KM) 1550nm (PN 10309)
10 ZR over single mode (80KM) 1550nm (PN 10310)
802.3bz 10GBASE-T (100M) for Cat 6 (5G)
10G Fiber
10G Copper
802.3bz 10GBASE-T (100M) for Cat 5e (2.5G)
Prescriptive Services 10G / 40G
Overlay
Overall Architecture
SDN
NSX
Underlay
ACI
Other
Spine-Leaf
MLAG
NEXUS
Other
Applications
Automated provisioning
and configuration,
Intelligence in software
Manual Slow
ExtremeCore10G
ExtremeEdgePoE
25G / 50G /100G
QSFP28 DACs (Passive Cables)
LR4 - Up to 10 Km on Single Mode.
2 Km lower cost module (Lite).
Wavelengths (1295.56, 1300.05, 1304.58,1309.14 nm).
QSFP28 QSFP28 DACs (Active Cables)
10411 - 100Gb, QSFP28-QSFP28 DAC, 1m
10413 - 100Gb, QSFP28-QSFP28 DAC, 3m
10414 - 100Gb, QSFP28-QSFP28 DAC, 5m
10421 - 100Gb, QSFP28– x SFP28 (4x25Gb) DAC breakout, 1m
4x25 DACS
1x1 DAC
10423 - 100Gb, QSFP28– x SFP28 (4x25Gb) DAC breakout, 3m
10424 - 100Gb, QSFP28– x SFP28 (4x25Gb) DAC breakout, 5m
10426- 100Gb, QSFP28– x SFP28 (2x50Gb) DAC breakout, 1m
10428 - 100Gb, QSFP28– x SFP28 (2x50Gb) DAC breakout, 3m
2X50 DACs
100G => 4 x 25G lanes
10434 - 100Gb, QSFP28-QSFP28 DAC, 5m
10435 - 100Gb, QSFP28-QSFP28 DAC, 7m
10436 - 100Gb, QSFP28-QSFP28 DAC, 10m
10441 - 100Gb, QSFP28– x SFP28 (4x25Gb) DAC breakout, 5m
4x25 DACS
1x1 DAC
10442 - 100Gb, QSFP28– x SFP28 (4x25Gb) DAC breakout, 7m
10443 - 100Gb, QSFP28– x SFP28 (4x25Gb) DAC breakout, 10m
10437 - 100Gb, QSFP28-QSFP28 DAC, 20m
10444 - 100Gb, QSFP28– x SFP28 (4x25Gb) DAC breakout, 20m
Extreme Data Center Switch Options (10, 25, 40, 50, 100G)
Layer2 multi-
chassis port
channel (vPC or
MLAG)
ISSU for a redundant
pair. Less than 2000ms
impact for the
upgrade.
Diagram: two 100G spines (core) with border and control leafs, 4 x 100G uplinks, and leaf pairs serving the Campus and Resnet.
Minimum MAC address table size should be 256K. ARP table capacity should support a minimum of 64K users in a single VLAN. Deep interface buffers or intelligent buffer management. VXLAN.
Scale up: spine leaf delivers the interconnect for distributed compute workloads across the Data Center, Campus, and Resnet.
X870-32c Spine/Leaf Switch
32 x 10/25/40/50/100GbE QSFP28 Ports
96 x 10GbE Ports (via 24 ports of 4x10Gb breakout)
8 x 10/25/40/50/100GbE Ports
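A minimal sketch of the breakout math behind those port counts (32 QSFP28 ports, 24 of them broken out as 4 x 10G each); the variable names are illustrative only.

total_qsfp28 = 32
breakout_ports = 24
print(f"10GbE ports via breakout: {breakout_ports * 4}")                          # 96
print(f"Remaining 10/25/40/50/100G-capable ports: {total_qsfp28 - breakout_ports}")  # 8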
X690 10Gb Leaf Switches Enabled with 100Gb
New 10Gb leaf aggregation switches for fiber and
10GBASE-T applications with 100Gb Ethernet.
• Enabled with 40Gb & 100Gb high speed uplinks
• Shares power supply and fan modules with X870
• Stacks with X870 using SummitStack-V400
460 Multirate
V400 Port Extender
Shared Combo Ports
4x10GBASE-T & 4xSFP+
100Mb/1Gb/10GBASE-T
X620 Multirate
ExtremeFabric: a way to simplify network design & operation
• Fabric transparent to end devices
• Combines the fabric elements into a single domain
• Fabric appears as a single device
• Policy and overlays applied at the fabric edge
• No subnets, no VLANs, no VRFs required within the fabric
• Zero Touch Configuration
Extreme Data Center Switch Options (10, 25, 40, 50, 100G)
Data Center Fabric
Spine (100G)
Spine
Leaf Leaf Leaf Leaf Leaf
ISP 2
ISP 1
Residential
Housing
Hot Spot in
Local Town
Everything in the Spine
Leaf is just 2 hops away.
Separate Path Available
to each Spine.
Same Latency for each
Path.
Leaf
Spine (100G)
Main Campus
University
Spine (100G) - Scale Out
Spine leaf delivers the interconnect for distributed compute workloads across the Data Center, Campus, and Resnet.
X870-32c Spine/Leaf Switch
32 x 10/25/40/50/100GbE QSFP28 Ports
96 x 10GbE Ports (via 24 ports of
4x10Gb breakout)
8 x 10/25/40/50/
100GbE Ports
X690 10Gb Leaf Switches Enabled with 100Gb
New 10Gb leaf aggregation switches for fiber and
10GBASE-T applications with 100Gb Ethernet.
• Enabled with 40Gb & 100Gb high speed uplinks
• Shares power supply and fan modules with X870
• Stacks with X870 using SummitStack-V400
460 Multirate
V400 Port Extender
Shared Combo Ports
4x10GBASE-T & 4xSFP+
100Mb/1Gb/10GBASE-T
X620 Multirate
Fabric transparent
to end devices
Combines the fabric elements
into a single domain
Fabric appears as a single device
Policy and overlays applied at
the fabric edge
No subnets, no VLANs, no VRFs
required within the fabric
Zero Touch
Configuration
Switch
Switch
ExtremeFabric: a way to simplify network design & operation
Organizing Compute, Management & Edge
Edge Leaf
L3 to DC Fabric
L2 to External Networks
Compute Clusters Infrastructure Clusters (Edge, Storage,
vCenter and Cloud Management
System)
WAN
Internet
L3
L2
L3
L2
Leaf
Spine
L2 VLANs for bridging
A single vCenter Server manages all Management, Edge and Compute Clusters:
• NSX Manager is deployed in the Mgmt Cluster and paired to the vCenter Server
• NSX Controllers can also be deployed into the Management Cluster
• Reduces vCenter Server licensing requirements
Separating the compute, management and Edge functions has the following design advantages for managing the life-cycle of compute and Edge resources:
• Ability to isolate and develop span of control
• Capacity planning – CPU, Memory & NIC
• Upgrades & migration flexibility
Automation control over the areas or functions that require frequent changes: app-tier, micro-segmentation & load-balancer. Three areas of technology require consideration:
• Interaction with the physical network
• Overlay (VXLAN) impact
• Integration with vSphere clustering
Registration or
Mapping
WebVM
WebVM
VM
VM WebVM
Compute Cluster
WebVM VM
VM
Compute
A
vCenter Server
NSX Manager NSX
Controller
Compute
B
Edge and Control VM
Edge Cluster
Management Cluster
Preparation Netsite Operation
Convergence 3.0 (Automation / Seconds)
Flexibility and choice
Traditional Networking Configuration Tasks
L3
L2
Initial configuration
• Multi-chassis LAG
• Routing configuration
• SVIs/RVIs
• VRRP/HSRP
• LACP
• VLANs
Recurring configuration
• SVIs/RVIs
• VRRP/HSRP
• Advertise new subnets
• Access lists (ACLs)
• VLANs
• Adjust VLANs on trunks
• VLANs STP/MST mapping
• Add VLANs on uplinks
• Add VLANs to server port
NSX is AGNOSTIC to the underlay network: L2 or L3 or any combination. Only TWO requirements: IP connectivity and an MTU of 1600.
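A minimal sketch of where the 1600-byte figure comes from: relative to the underlay MTU, VXLAN adds an outer IPv4 + UDP + VXLAN header plus the encapsulated inner Ethernet header (about 50 bytes), so a 1500-byte guest MTU no longer fits in a 1500-byte underlay MTU.

OUTER_IPV4, OUTER_UDP, VXLAN_HDR, INNER_ETH = 20, 8, 8, 14
overhead = OUTER_IPV4 + OUTER_UDP + VXLAN_HDR + INNER_ETH   # 50 bytes on top of the guest payload
guest_mtu = 1500
print(f"VXLAN overhead: {overhead} bytes")
print(f"Minimum underlay MTU for a {guest_mtu}-byte guest MTU: {guest_mtu + overhead}")
print("Hence the 1600-byte requirement: 1550 plus headroom for VLAN tags and the like")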
Network & Security Services in Software
WAN/Internet
L3
L2
POD A
L3
L2
POD B
VLAN X Stretch VLAN Y Stretch
L3 Topologies & Design Considerations. With XoS 670 cores, L2 interfaces can send and receive IP packets as large as 9214 bytes by default (no configuration is required). L3 interfaces default to 1500 bytes; change the MTU to 9214 (“mtu” command) so IP packets as large as 9214 bytes can be sent and received.
• L3 ToR designs run a dynamic routing protocol between leaf and spine.
• BGP, OSPF or ISIS can be used
• Each rack advertises a small set of prefixes (unique VLAN/subnet per rack)
• Equal-cost paths to the other racks' prefixes
• The switch provides default gateway service for each VLAN subnet
• 802.1Q trunks with a small set of VLANs for VMkernel traffic
• The rest of the session assumes an L3 topology
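A minimal sketch of the ECMP behaviour described above: with equal-cost paths to every other rack's prefix, the leaf hashes a flow's 5-tuple to pick one of the spine next hops. The hash and field choice below are illustrative, not the switch's actual algorithm.

import hashlib

def ecmp_next_hop(flow: tuple, next_hops: list) -> str:
    """Deterministically map a flow 5-tuple onto one equal-cost next hop."""
    digest = hashlib.md5(repr(flow).encode()).hexdigest()
    return next_hops[int(digest, 16) % len(next_hops)]

spines = ["spine-1", "spine-2", "spine-3", "spine-4"]
flow = ("10.66.1.25", "10.88.2.40", 6, 49152, 443)   # src, dst, proto, sport, dport
print(ecmp_next_hop(flow, spines))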
L3
L2
XYZ Account (Spine)
CORE 1 CORE 2
Preparation Netsite Operation
Convergence 3.0 (Automation / Seconds)
Flexibility and choice
Wi-Fi, Analytics, Security, Policy
Extreme's Platform
• Lync traffic engineering with Purview analytics service insertion
• Multi-tenant networks automation and orchestration
• Self-provisioned network slicing (proof-of-concept implementation)
Better experience through simpler solutions that deliver long-term value.
Products - one wired and wireless platform
Customer Care - strong 1st-call resolution
NSX Controllers Functions
Logical Router 1 - VXLAN 5000
Logical Router 2 - VXLAN 5001
Logical Router 3 - VXLAN 5002
Controller VXLAN Directory Service
MAC table
ARP table
VTEP table
This is where NSX will provide XYZ Account one control
plane to distribute network information to ESXi hosts.
NSX Controllers are clustered for scale out and high
availability.
• Network information is distributed across nodes in a
Controller Cluster (slicing)
• Remove the VXLAN dependency on multicast
routing/PIM in the physical network
• Provide suppression of ARP broadcast traffic in
VXLAN networks
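A minimal sketch of the slicing idea: logical networks (VNIs) are divided across the controller-cluster nodes so no single node owns all of the MAC/ARP/VTEP state. The modulo scheme below is illustrative only, not the actual NSX distribution algorithm.

def controller_for_vni(vni: int, controllers: list) -> str:
    """Assign ownership of a VXLAN segment to one controller node."""
    return controllers[vni % len(controllers)]

cluster = ["controller-1", "controller-2", "controller-3"]
for vni in (5000, 5001, 5002):
    print(f"VXLAN {vni} -> {controller_for_vni(vni, cluster)}")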
SERVER FARM (Leafs) - diagram: racks of servers, storage, and Summit switches plus media servers, routers, firewalls, and PBXs (compute workload, services and connectivity).
vSphere
Host
VXLAN Transport
Network
Host 1
VTEP2
10.20.10.11
V
M
VXLAN 5002
MAC2
vSphere
Host
VTEP3
10.20.10.12
Host 2
10.20.10.13
V
M
MAC4
V
M
MAC1
V
M
MAC3
VTEP4
vSphere Distributed Switch vSphere Distributed Switch
VXLAN, when deployed, creates an automatic port-group whose VLAN ID must be the same per VDS. If the fabric is L2, this usually means that the same IP subnets are also used across racks for a given type of traffic. For a given host, only one VDS is responsible for VXLAN traffic; a single VDS can span multiple clusters.
Transport Zone, VTEP, Logical Networks and VDS: the VTEP VMkernel interface belongs to a specific VLAN-backed port-group dynamically created during the cluster VXLAN preparation.
• One or more VDS can be part of the same TZ
• A given Logical Switch can span multiple VDS
vSphere Host (ESXi)
L3 ToR Switch - routed uplinks (ECMP), VLAN trunk (802.1Q)
VLAN 66 Mgmt 10.66.1.25/26, DGW 10.66.1.1
VLAN 77 vMotion 10.77.1.25/26, GW 10.77.1.1
VLAN 88 VXLAN 10.88.1.25/26, DGW 10.88.1.1
VLAN 99 Storage 10.99.1.25/26, GW 10.99.1.1
SVI 66: 10.66.1.1/26  SVI 77: 10.77.1.1/26  SVI 88: 10.88.1.1/26  SVI 99: 10.99.1.1/26
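A minimal sketch that reproduces the addressing plan shown above (VLANs 66/77/88/99, /26 subnets, SVI gateway on .1, host on .25) so additional racks could be planned the same way; the rack-number-to-third-octet mapping is an assumption for illustration.

import ipaddress

ROLES = {66: "Mgmt", 77: "vMotion", 88: "VXLAN", 99: "Storage"}

def rack_plan(rack_id: int) -> dict:
    """Build the per-VLAN subnet, gateway, and first host address for one rack."""
    plan = {}
    for vlan, role in ROLES.items():
        subnet = ipaddress.ip_network(f"10.{vlan}.{rack_id}.0/26")
        hosts = list(subnet.hosts())
        plan[vlan] = {"role": role, "subnet": str(subnet),
                      "svi_gateway": str(hosts[0]), "host_example": str(hosts[24])}
    return plan

for vlan, entry in rack_plan(1).items():
    print(vlan, entry)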
Span of VLANs
Traditional control
LDAP, NAC, DHCP, Radius, Captive Portal, DNS, MDM
XYZ Account Services - User Repositories or Corporate Control
NAC
Analytics
Netsite
Management Cluster (Control)
Cloud Based control
Leaf L2
L3 L3
L2
VMkernel
VLANs
VLANs for
Management VMs
L2
L2
VMkernel
VLANs
Routed DC Fabric
802.1Q
Trunk
VMkernel
VLANs
VLANs for
Management VMs
Single Rack Connectivity
Leaf
L3
L2
VMkernel
VLANs
Routed DC Fabric
802.1Q
Trunk
Dual Rack Connectivity
L2
23
Extreme VMware Deployment Considerations - this is where the management cluster is typically provisioned on a single rack.
• The single-rack design still requires redundant uplinks from host to ToR carrying VLANs for management.
• A dual-rack design gives increased resiliency (handling single-rack failure scenarios), which may be required for a highly available design.
• Typically in a small design the management and Edge clusters are collapsed. Exclude the management cluster from VXLAN preparation.
ToR # 1 ToR #2
Controller 2
Controller 3
NSX Mgr
Controller 1
vCenter Server
NSX Manager deployed as a
virtual appliance
4 vCPU, 12 GB of RAM per node
Consider reserving memory for
VC to ensure good Web Client
performance
Can not modify configurations
Extreme Networks
Compute, Storage Networking
Integration...
Extreme Networks
Control, Analytics & Security
Integration...
• Identify design principles and implementation strategies. Start from service requirements and leverage standardization (design should be driven by today's and tomorrow's service requirements).
• Standardization limits technical and operational complexity and the related costs. Develop a reference model based on principles (principles enable consistent choices in the long run).
• Leverage best practices and proven expertise. Streamline your capability to execute and your operational effectiveness (unleash the capabilities provided by enabling technologies).
Virtual Router 1 (VoIP) - Virtualized services for application delivery
Virtual Router 1 (Oracle) - Virtualized services for application delivery
Virtual Router 1 (Wireless Lan) - Virtualized services for application delivery
Virtual Router 1 (PACs) - Virtualized services for application delivery
# of assets/ports
maintenance costs
operational costs
Next generation operations
Pay as you go
Savings
Reference architecture
Data center Network as a Service (NaaS)
NSX
Controller
VC for NSX Domain - A VC for NSX Domain - B
Management Cluster
NSX Manager VM - A
Management VC
Compute Cluster
Edge Cluster
Compute A
Compute B
Web
VM
Web
VM
VM
VM
NSX
Controller
Edge and
Control VMCompute Cluster
Edge Cluster
Compute A
Compute B
Web
VM
Web
VM
VM
VM
NSX
Controller
Edge and
Control VM Compute Cluster
Edge Cluster
Compute A
Compute B
Web
VM
Web
VM
VM
VM
NSX
Controller
Edge and
Control VMCompute Cluster
Edge Cluster
Compute A
Compute B
Web
VM
Web
VM
VM
VM
NSX
Controller
Edge and
Control VM
NSX Manager VM - B
Multiple vCenters Design – XYZ Account design with multiple NSX domains...
• Following VMware best practice, the Management Cluster is managed by a dedicated vCenter Server (Mgmt VC); a separate vCenter Server in the Management Cluster manages the Edge and Compute Clusters.
• NSX Manager is also deployed into the Management Cluster and paired with this second vCenter Server. Multiple NSX Manager/vCenter Server pairs (separate NSX domains) can be deployed.
• NSX Controllers must be deployed into the same vCenter Server the NSX Manager is attached to, therefore the Controllers are usually also deployed into the Edge Cluster.
CORE 1
CORE 2
XYZ Account (Primary)
Preparation Netsite Operation
Logical Switch
SERVER FARM (Leafs) - diagram: racks of servers, storage, and Summit switches plus media servers, routers, firewalls, and PBXs (compute workload, services and connectivity).
CORE 1
CORE 2
XYZ Account (DR Site)
Preparation Netsite Operation
Logical Switch
SERVER FARM (Leafs) - diagram: DR-site racks mirroring the primary site (servers, storage, Summit switches, media servers, routers, firewalls, and PBXs).
Logical Router 1
VXLAN 5000
Logical Router 2
VXLAN 5001
Logical Router 3
VXLAN - 5002
Controller VXLAN
Directory Service
MAC table
ARP table
VTEP table
ToR # 1 ToR #2
Controller 2
Controller 3
NSX Mgr
Controller 1
vCenter Server
Traffic Engineer “like ATM or MPLS”
UDP Start / Stop
Use Existing IP Network
VTEP VTEP
XYZ Account NSX Transport Zone: a collection of VXLAN-prepared ESXi clusters.
• Normally a TZ defines the span of Logical Switches (Layer 2 communication domains). A given Logical Switch can span multiple VDS.
• A VTEP (VXLAN Tunnel EndPoint) is a logical interface (VMkernel) that connects to the TZ to encap/decap VXLAN traffic. One or more VDS can be part of the same TZ.
• The VTEP VMkernel interface belongs to a specific VLAN-backed port-group dynamically created during the cluster VXLAN preparation.
Overlay considerations: Ethernet Virtual Interconnect (EVI) can be deployed for active/active DC over any network. This is where careful attention is required, because the different data plane (additional header) makes jumbo frames a must-have, and it will continue to evolve.
• Scalability beyond the 802.1Q VLAN limitations to 16M services/tenants
• L2 extension, with VXLAN as the de-facto solution from VMware. Standardization around the control plane is still work in progress (even if BGP EVPNs are here)
• Encapsulation over IP delivers the ability to cross L3 boundaries. As a result, the design above becomes a big L3 domain with L2 processing. EVI provides additional benefits such as:
• Transport agnostic
• Up to 16 active/active DCs
• Active/Active VRRP default gateways for VMs
• STP outages remain local to each DC
• Improves WAN utilization by dropping unknown frames and providing ARP suppression
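A minimal sketch of where the 16M figure comes from: 802.1Q carries a 12-bit VLAN ID, while VXLAN carries a 24-bit VNI.

vlan_id_bits, vni_bits = 12, 24
usable_vlans = 2 ** vlan_id_bits - 2      # VLAN 0 and 4095 are reserved
vxlan_segments = 2 ** vni_bits
print(f"802.1Q VLANs: {usable_vlans}")
print(f"VXLAN segments: {vxlan_segments:,} (~16M services/tenants)")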
EVI tunnel
Physical Underlay Network
Data center pov 2017 v3

  • 1.
    Multi-Rate1,2.5,5,10GigabitEdgePoE++ Multi-RateSpineLeafDesign(10,25,40,50,100Gigabit) Web-scale for therest of us... Web-Scale for The Enterprise (Any Scale upgrades). • SLAs with Agility (Storage Pools and Containers). • Security, Control & Analytics (Data follows a VM as it moves). • Predictable Scale (I/O & data locality are critical). X460-G2 (Advanced L3 1-40G) Multirate Option PoE Fiber DC Policy Fit The Swiss Army Knife of Switches Half Duplex ½ & ½ 3 Models This is where: 10G on existing copper Cat5e and Cat6 extend the life of the installed cable plant. Great for 1:N Convergence. X620 (1OG Copper or Fiber) Speed Next Gen Edge Lowered TCO via Limited Lifetime Warrantee XYZ Account Design Goals • Fractional consumption and predictable scale (Distributed everything). • No single point of failure (Always-on systems). • Extensive automation and rich analytics. XYZ Account Fundamental Assumptions.. • Unbranded x86 servers: fail-fast systems • All intelligence and services in software • Linear, predictable scale-out CAPEX or OPEX (you choose)? Reduced Risk (just witness or take action) Time is the critical Factor with XYZ Account Services... Infrastructure Businessmodel Ownership Considerations Management Location • 32 x 100Gb • 64 x 50Gb • 128 x 25Gb • 128 x 10Gb • 32 x 40Gb 96 x 10GbE Ports (via4x10Gb breakout) 8 x 10/25/40/ 50/100G 10G Next Gen: Spine Leaf X670 & X770 - Hyper Ethernet Common Features • Data Center Bridging (DCB) features • Low ~600 nsec chipset latency in cut through mode. • Same PSUs and Fans as X670s (Front to back or Back to Front) AC or DC. X670-G2 -72X (10GbE Spine Leaf) 72 10GbE X670-48x-4q (10GbE Spine Leaf) 48 10GbE & 4 QSFP+ QSFP+ 40G DAC Extreme Feature Packs Core Edge AVB OpenFlow Advance Edge 1588 PTP MPLS Direct Attach Optics License Extreme Switches include the license they normally need. Like any other software platform you have an upgrade path. QSPF28 100G DAC Disaggregated Switch Purple Metal XoS as a Platform . Network as a Platform... Distributed Everything (no propietary tech). Always-on Operations (Spine-leaf Resilience). Extensive Automation (rich analytics). Purposed for Broadcom (ASICs) XYZ Account Business Value XoS Platform Config L2/L3 Analytics Any OS Any Bare Metal Switch Policy Disaggregated Switch Bare - Grey Web-Scale Configuration consistency .. What constitutes a Software Defined Data Center (SDDC)? Abstract pool automate across... XYZ Account Strategic Asset Initial Configuration Tasks... • Multi-chassis LAG (LACP) • Routing configuration (VRRP/HSRP) • STP (Instances/mapping) VLANs Recurring configuration... • VRRP/HSRP (Advertise new subnets) • Access lists (ACLs) • VLANs (Adjust VLANs on trunks). • VLANs STP/MST mapping • Add VLANs on uplinks • Add VLANs to server ports Control Plane Logical Data Plane Physical compute network storage Logical Router 1 VXLAN 5001 Logical Router 2 VXLAN 5002 Logical Router 3 VXLAN 5003 MAC table ARP table VTEP table Controller Directory VTEP DHCP/DNS Policy Edge Services VM VM VM VM VM Who? Where? When? Whatdevice? How? QuarantineRemediate Allow Authentication NAC Server Summit Netsite Advanced NAC Client Joe Smith XYZ Account Access Controlled Subnet Enforcement Point Network Access Control This is where if X + Y, then Z... • LLDP-MED • CDPv2 • ELRP • ZTP If user matches a defined attribute value ACL QoS Then place user into a defined ROLE A port is what it is because? 
Software roadblocks: silos, complexity, scaling.
Application Experience with full context (application plus analytics): stop the finger-pointing between application, network and response time.
Flow or bit bucket? Why not do this in the network?
• Collector: 3 million flows (6 million flows at the top end).
• Sensor: X460 IPFIX, 4,000 flows (2,048 ingress, 2,048 egress).
• Sensor: PV-FC-180, S or K Series (CoreFlow2, 1 million flows).
• Flow-based access points: from the controller, 8K flows per AP (24K flows on the C35).
Business value: context, bandwidth, IP, HTTP://, apps. Platform, automation, control and experience make up the solution framework.
Is your network faster today than it was 3 years ago? Going forward it should deliver more, faster, different.
DIY fabric for the DIY data center. Three fundamental building blocks for a data center network automation solution:
• Orchestration (OpenStack, vRealize, ESX, NSX, MS Azure, ExtremeConnect).
• Overlay (VXLAN, NVGRE, ...).
• Underlay (traditional L2/L3 protocols: OSPF, MLAG, etc.).
How is a traditional aggregated technology like a duck? A duck can swim, walk and fly, but it does none of them particularly well. With x = compute, y = memory and storage, and z = I/O bandwidth, XoS fn(x,y,z) is like an elastic fabric:
• You can never have enough.
• Customers want scale made easy.
• Hypervisor integration.
The next convergence will be collapsing data center designs into smaller, elastic form factors for compute, storage and networking. The application is always the driver. (Summit versus Cisco ACI and HP Moonshot.)
XYZ Account Data Center: Chassis versus Spline.
• Proven value with the legacy chassis approach, but you cannot access the line cards, there is no L2/L3 recovery inside, and there is no access to the fabric.
• Disaggregated value: control of the top-of-rack switches, L2/L3 protocols inside the spline, and full access to the spine switches. No ego, complexity or vendor lock-in.
Fat-tree versus Clos/cross-bar:
• Fat-tree: traditional 3-tier model (less cabling); link speeds must increase at every hop (less predictable latency); common in chassis-based architectures (optimized for north/south traffic).
• Clos/cross-bar: every leaf is connected to every spine (efficient utilization, very predictable latency); always two hops to any leaf (more resiliency, flexibility and performance); friendlier to east/west traffic (the uplink to the rest of the network is just another leaf).
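The per-sensor flow capacities called out above (X460, CoreFlow2-class sensors, APs, C35) lend themselves to a quick sizing exercise. The sketch below simply sums per-device capacity for a device inventory; the device counts in the example are hypothetical, only the per-device numbers come from the figures above.

# Rough flow-visibility sizing from the per-sensor capacities above.
FLOW_CAPACITY = {
    "X460 (IPFIX)": 4_000,                    # 2,048 ingress + 2,048 egress, quoted as 4,000
    "PV-FC-180 / S / K (CoreFlow2)": 1_000_000,
    "AP (controller-based)": 8_000,
    "C35 controller": 24_000,
}

def total_flows(inventory):
    """Sum flow capacity for a {device_type: count} inventory."""
    return sum(FLOW_CAPACITY[dev] * count for dev, count in inventory.items())

if __name__ == "__main__":
    # Hypothetical mix: 50 edge switches, 2 core sensors, 100 APs.
    example = {"X460 (IPFIX)": 50, "PV-FC-180 / S / K (CoreFlow2)": 2, "AP (controller-based)": 100}
    print(f"Aggregate flow visibility: {total_flows(example):,} flows")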
The XYZ Account handshake (leaf) layer:
• This is where convergence needs to happen – LAN/SAN, FCoE, ETS. Stop or allow whatever you can (efficient multicasting).
• Virtualization happens with VXLAN and vMotion (controlled by the overlay).
• N+1 fabric design needs to happen here: it delivers simple, no-vanity future-proofing, no-forklift migrations, interoperability between vendors and hitless operation.
This is where a fabric outperforms the big uglies, one to one: spine-leaf.
The XYZ Account Ethernet expressway (spine) layer delivers massive scale:
• This is where low latency is critical: switch as quickly as you can. Do not slow down the core; keep it simple (disaggregated spline plus one big ugly).
• Elastic capacity – today's XYZ Account spines are tomorrow's leafs. Dial in the bandwidth to your specific needs with the number of uplinks (see the sketch below).
• Availability – the state of the network is kept in each switch, so there is no single point of failure. Seamless XYZ Account upgrades; it is easy to take a single switch out of service.
(Cloud fabric) disaggregation, spine-leaf and spline (speed), versus the legacy challenges: complex, slow and expensive; scale-up instead of scale-out; vendor lock-in; proprietary hardware and software instead of commodity fabric modules (spine) and I/O modules (leaf).
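"Dial in the bandwidth with the number of uplinks" is simple arithmetic. The sketch below computes a leaf's oversubscription ratio and the uplink count needed to hit a target ratio; the 48 x 10G server ports, 100G uplinks and 3:1 target in the example are hypothetical placeholders, not a recommended design.

import math

def oversubscription(down_ports, down_gbps, up_ports, up_gbps):
    """Leaf oversubscription = total downlink bandwidth / total uplink bandwidth."""
    return (down_ports * down_gbps) / (up_ports * up_gbps)

def uplinks_needed(down_ports, down_gbps, up_gbps, target_ratio):
    """Smallest uplink count that stays at or below the target oversubscription ratio."""
    return math.ceil((down_ports * down_gbps) / (target_ratio * up_gbps))

if __name__ == "__main__":
    print(f"48 x 10G down, 4 x 100G up -> {oversubscription(48, 10, 4, 100):.2f}:1")
    print(f"Uplinks needed for 3:1      -> {uplinks_needed(48, 10, 100, 3)}")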
Active-active redundancy, fn(x,y,z). The next convergence will be collapsing data center designs into smaller, elastic form factors for compute, storage and networking, with L2/L3 at every leaf:
• This is where you can never have enough.
• Customers want scale made easy.
• Hypervisor integration with cloud simplicity.
Start small; scale as you grow. This is where you can simply add Extreme Leaf clusters:
• Each cluster is independent (including servers, storage, database and interconnects).
• Each cluster can be used for a different type of service.
• Delivers a repeatable design which can be added as a commodity.
XYZ Account spine-leaf: independent clusters, egress scale, ingress active/active.
This is where VXLAN (route distribution) comes in. Why VXLAN? It flattens the network to a single tier from the XYZ Account end-station perspective:
• All IP/BGP based (Virtual eXtensible Local Area Network); host route distribution is decoupled from the underlay protocol.
• VXLAN's goal is to allow dynamic, large-scale, isolated virtual L2 networks to be created for virtualized and multi-tenant environments.
• Route reflectors are deployed for scaling purposes, with iBGP adjacencies from each VTEP to the BGP route reflectors: easy setup, small configuration.
• Traffic can be engineered "like ATM or MPLS", carried in UDP between VTEPs over the existing IP network.
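The route-reflector bullet above is about keeping the iBGP session count small. As a rough illustration, assuming one session per VTEP pair in a full mesh and one session per VTEP per reflector otherwise, the sketch below compares the two; reflector-to-reflector sessions are ignored for simplicity.

def full_mesh_sessions(n_vteps):
    """iBGP full mesh: every VTEP peers with every other VTEP."""
    return n_vteps * (n_vteps - 1) // 2

def route_reflector_sessions(n_vteps, n_reflectors=2):
    """With route reflectors, each VTEP peers only with the reflectors."""
    return n_vteps * n_reflectors

if __name__ == "__main__":
    for n in (8, 32, 128):
        print(f"{n:4d} VTEPs: full mesh {full_mesh_sessions(n):5d} sessions, "
              f"2 RRs {route_reflector_sessions(n):5d} sessions")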
Dense 10GbE interconnect using breakout cables, copper or fiber, with the VMs for App 1, App 2 and App 3 spread across the racks.
Intel, Facebook, OCP: the Facebook 4-post architecture. Each leaf or rack switch has up to 48 10G downlinks; segmentation and multi-tenancy without routers:
• Each leaf has 4 uplinks, one to each spine (4:1 oversubscription).
• Enables insertion of services without sprawl (analytics for fabric and application forensics).
• No routers at the spine: one failure reduces cluster capacity to 75% (see the sketch below).
(5 S's) The network needs to be Scalable, Secure, Shared, Standardized and Simplified. Network (fit).
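The "75% after one failure" figure follows directly from the four-post layout: losing one of four equal-bandwidth spines removes a quarter of the fabric capacity. The one-liner below makes the relationship explicit; the 8-spine case is a hypothetical comparison, not part of the reference design.

def remaining_capacity(spines, failed=1):
    """Fraction of fabric capacity left after losing 'failed' equal-bandwidth spines."""
    return (spines - failed) / spines

if __name__ == "__main__":
    print(f"4 spines, 1 failed -> {remaining_capacity(4):.0%}")   # 75%, as above
    print(f"8 spines, 1 failed -> {remaining_capacity(8):.0%}")   # wider spine degrades more gracefully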
Overlay control: the XYZ Account VXLAN forwarding plane under NSX control.
• This is where logical switches span across physical hosts and network switches. Application continuity is delivered with scale, and multi-tenancy scales across the data center.
• Enabling L2 over L3 infrastructure: pool resources from multiple data centers, with the ability to recover from disasters faster.
• Address network sprawl with a VXLAN overlay, plus deeper integration with infrastructure and operations partners, integrations, and frameworks for IT organizations.
VMware NSX (control plane): the management plane is delivered by NSX Manager; the control plane (NSX Controller) manages logical networks and data-plane resources. Extreme delivers an open, high-performance data plane with scale.
NSX architecture and components (core/campus topology, diagram summarized): X870-32c spine with 100Gb uplinks; X770 and X870-96x-8c for 10Gb/40Gb and high-density 25Gb/50Gb aggregation; X670-G2 for 10Gb and high-density 10Gb aggregation; server PODs behind the 770/870 spine; data center – private cloud (vC-1, vC-2, ..., vC-N).
This is where XYZ Account must first have the ability to scale with customer demand, delivering more than just disk space and processors:
• Scale – XYZ Account must be able to seamlessly fail over, scale up, scale down and optimize management of the applications and services.
• Flexibility – the XYZ Account infrastructure must be able to host heterogeneous and interoperable technologies.
• Business – the business-model costs might be optimized for operating expenses or toward capital investment.
Cloud computing (control plane): who manages what across public, private and MSP fabrics.
• On-premise: XYZ Account manages the whole stack (networking, storage, servers, virtualization, O/S, middleware, runtime, data and applications).
• Infrastructure as a Service: the vendor manages networking, storage, servers and virtualization; XYZ Account manages the O/S, middleware, runtime, data and applications.
• Platform as a Service: the vendor manages everything up through the runtime; XYZ Account manages data and applications.
• Software as a Service: the vendor manages the entire stack.
This is where Azure ExpressRoute lets XYZ Account create private connections between Azure datacenters and XYZ Account infrastructure, on or off premises.
• ExpressRoute connections don't go over the public Internet. They offer more reliability, faster speeds, lower latencies and higher security than typical Internet connections.
• Transferring data between on-premises systems and Azure over ExpressRoute can yield significant cost benefits.
• XYZ Account can establish connections to Azure at an ExpressRoute location, such as an exchange provider facility, or connect directly to Azure from an existing WAN, such as an MPLS VPN provided by a network service provider.
Microsoft Azure (control plane) cloud: the key impact of this model for the customer is a move from managing physical servers to a focus on logical management of data storage through policies.
Compute and storage: data center architecture considerations (compute, cache, database, storage, client response).
• 80% north-south traffic; oversubscription up to 200:1 (client request plus server response is roughly 20% of traffic).
• Inter-rack latency: 150 microseconds; lookup/storage traffic is roughly 80% of the total.
• Scale: up to 20 racks (non-blocking two-tier designs are optimal).
Purchase "vanity free". This is where Open Compute might allow companies to purchase "vanity free"; previous, outdated data center designs support more monolithic computing. The monolithic data center is dead.
• A low-density X620 might help XYZ Account avoid stranded ports.
• Availability – dual X620s can be deployed to minimize the impact of maintenance.
• Flexibility – the X620 can support both 1G and 10G to servers and storage.
One-rack design (closely coupled, nearly coupled or loosely coupled): servers, storage and storage management hang off redundant Summit switches plus a Summit management switch; shared combo ports, 4x10GBASE-T and 4xSFP+, 100Mb/1Gb/10GBASE-T.
Open Compute, two-rack design. This is where XYZ Account can reduce OPEX and leverage a repeatable solution:
• With the spline setup, XYZ Account can put redundant switches in the middle and link each server to those switches.
• Fewer hops between servers – the important thing is that each server is precisely one hop from any other server.
• Avoid stranded ports – designs often have a mix of fat and skinny nodes. If XYZ Account deploys 48-port leaf switches, many configurations might have anywhere from 16 to 24 stranded ports (see the sketch below).
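The stranded-port point can be checked with simple arithmetic. The sketch below counts unused leaf ports for a given rack layout; the 48-port leaf is the example used above, while the server counts, single-homing and 4-uplink reservation are hypothetical assumptions.

def stranded_ports(leaf_ports, servers_per_rack, links_per_server=1, uplinks=4):
    """Ports left unused on a leaf after cabling servers and reserving uplinks."""
    used = servers_per_rack * links_per_server + uplinks
    return max(leaf_ports - used, 0)

if __name__ == "__main__":
    # Hypothetical racks of 20 to 28 single-homed servers on a 48-port leaf:
    for servers in (20, 24, 28):
        print(f"{servers} servers -> {stranded_ports(48, servers)} stranded ports")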
Open Compute, eight-rack POD design. This is where the typical spline setup grows into an eight-rack POD: a leaf-spine layer in the middle, with each rack keeping the same repeatable building block of servers, storage and a Summit management switch in front of redundant Summit switches.
Fat-Tree Clos / Cross-Bar • Traditional 3-tier model (Less cabling). • Link speeds must increase at every hop (Less predictable latency). • Common in Chassis based architectures (Optimized for North/South traffic). • Every Leaf is connected to every Spine (Efficient utilization/ Very predictable latency). • Always two hops to any leaf (More resiliency, flexibility and performance). • Friendlier to east/west traffic (The uplink to the rest of the network is just another leaf). The XYZ Account handshake layer: • This is where convergence needs to happen – LAN/SAN, FCoE, ETS. Stop or allow whatever you can (Efficient Multicasting). • Virtualization happens with VXLAN and VMotion (Control by the overlay). • N plus one fabric design needs to happen here (Delivers simple no vanity future proofing, No-forklift migrations, interop between vendors and hit-less operation). This is where, a Fabric outperforms the Big Uglies ONE to ONE: Spine Leaf The XYZ Account Ethernet Expressway Layer: deliver massive scale... • This is where low latency is critical, switch as quickly as you can. DO NOT slow down the core keep it simple (Disaggregated Spline + One Big Ugly • Elastic Capacity - Today s XYZ Account s spines are tomorrow s leafs. Dial-in the bandwidth to your specific needs with the number of uplinks. • Availability - the state of the network is kept in each switch; no single point of failure. Seamless XYZ Account upgrades, easy to take a single switch out of service. (Cloud Fabric) Disaggregation Spine Leaf Legacy Challenges: Complex/Slow/Expensive Scale-up and Scale out Vendor lock-in Proprietary (HW, SW)Commodity Fabric Modules (Spine) I/OModules(Leaf) Spline (Speed) Active - Active redundancy fn(x,y,z) The next convergence will be collapsing datacenter designs into smaller, elastic form factors for compute, storage and networking. • This is where, you can never have enough. • Customers want scale made easy. • Hypervisor integration w cloud simplicity. L2 L3 L2 L3 L2 L3 L2 L3 L2 L3 Start Small; Scale as You Grow This is where, you can simply add a Extreme Leaf Clusters • Each cluster is independent (including servers, storage, database & interconnects). • Each cluster can be used for a different type of service. • Delivers repeatable design which can be added as a commodity. XYZ Account Spine Leaf Cluster Cluster Cluster Egress Scale Ingress Active / Active VM VMVM RR RR BGP Route-ReflectorRR iBGP Adjacency This is where VXLAN (Route Distribution) This is where Why VxLAN? It Flattens network to a single tier from the XYZ Account end station perspective. • All IP/BGP based (Virtual eXtensible Local Area Network). Host Route Distribution decoupled from the Underlay protocol. • VXLAN s goal is allowing dynamic large scale isolated virtual L2 networks to be created for virtualized and multi- tenant environments. • Route-Reflectors deployed for scaling purposes - Easy setup, small configuration. TrafficEngineer“likeATMorMPLS” UDP Start Stop UDP UDP UseExistingIPNetwork VM VM VM VM VM VM VM VM VTEP VTEP Dense 10GbE Interconnect using breakout cables, Copper or Fiber VM VM VM VM VM VM VM VM VM VM VM VM VM VM VM App 1 App 2 App 3 Intel, Facebook, OCP Facebook 4-Post Architecture - Each leaf or rack switch has up to 48 10G downlinks. Segmentation or multi-tenancy without routers. • Each spine has 4 uplinks – one to each leaf (4:1 oversubscription). • Enable insertion of services without sprawl (Analytics for fabric and application forensics). • No routers at spine. 
One failure reduces cluster capacity to 75%. (5 S's) Needs to be Scalable, Secure, Shared, Standardized, and Simplified. Network (Fit) Overlay Control The XYZ Account the VxLan forwarding plane for NSX control: • This is where logical switches span across physical hosts and network switches. Application continuity is delivered with scale. Scalable Multi-tenancy across data center. • Enabling L2 over L3 Infrastructure - Pool resources from multiple data centers with the ability to recover from disasters faster. • Address Network Sprawl with an VXLAN overlay. Deeper Integration with infrastructure and operations partners, integrations, and frameworks for IT organizations. Vmware NSX (Control Plane) Management Plane deliver by the NSX Manager. Control Plane NSX Controller Manages Logical networks and data plane resources. Extreme delivers an open high performance data plane with Scale NSX Architecture and Components CORE CAMPUS 01 05 10 15 20 25 30 35 40 02 03 04 06 07 08 09 11 12 13 14 16 17 18 19 21 22 23 24 26 27 28 29 31 32 33 34 36 37 38 39 41 42 01 05 10 15 20 25 30 35 40 02 03 04 06 07 08 09 11 12 13 14 16 17 18 19 21 22 23 24 26 27 28 29 31 32 33 34 36 37 38 39 41 42 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 N3K-C3064PQ-FASTAT ID 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 1 2 3 4 N3K-C3064PQ-FASTAT ID 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 1 2 3 4 X870-32c 01 05 10 15 20 25 30 35 40 02 03 04 06 07 08 09 11 12 13 14 16 17 18 19 21 22 23 24 26 27 28 29 31 32 33 34 36 37 38 39 41 42 01 05 10 15 20 25 30 35 40 02 03 04 06 07 08 09 11 12 13 14 16 17 18 19 21 22 23 24 26 27 28 29 31 32 33 34 36 37 38 39 41 42 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 2GbGb 1 Gb 3 Gb 4 21 N3K-C3064PQ-FASTAT ID 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 
27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 1 2 3 4 N3K-C3064PQ-FASTAT ID 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 1 2 3 4 10Gb Aggregation High Density 10Gb Aggregation 10Gb/40Gb Aggregation High Density 25Gb/50Gb Aggregation X770 X870-96x-8c 100Gb Uplinks X670-G2 100Gb Uplinks Server PODs 770 / 870 Spine Data Center – Private Cloud vC-1 vC-2 … vC-N This is where XYZ Account must first it must have the ability to scale with customer demand, delivering more than just disk space and processors. • Scale – XYZ Account must have be able the to seamlessly failover, scale up, scaled down and optimize management of the applications and services. • Flexibility - The infrastructure XYZ Account must have the ability to host heterogeneous and interoperable technologies. • Business - The business model costs might be optimized for operating expenses or towards capital investment. Cloud Computing (Control Plane) (On-Premise) Infrastructure (as a Service) Platform (as a Service) Storage Servers Networking O/S Middleware Virtualization Data Applications Runtime Storage Servers Networking O/S Middleware Virtualization Data Applications Runtime Youmanage Managedbyvendor Managedbyvendor Youmanage Youmanage Storage Servers Networking O/S Middleware Virtualization Applications Runtime Data Software (as a Service) Managedbyvendor Storage Servers Networking O/S Middleware Virtualization Applications Runtime Data Public Private MSP F A B R I C This is where Azure ExpressRoute lets XYZ Account create private connections between Azure datacenters and XYZ Account infrastructure on or off premises. • ExpressRoute connections don't go over the public Internet. They offer more reliability, faster speeds, and lower latencies, and higher security than typical Internet connections. • XYZ Account can transfer data between on-premises systems and Azure can yield significant cost benefits. • XYZ Account can establishing connections to Azure at an ExpressRoute location, such as an Exchange provider facility, or directly connect to Azure from your existing WAN network, such as a multi-protocol label switching (MPLS) VPN, provided by a network service provider Microsoft Assure (Control Plane) Cloud The key impact of this model for the customer is a move from managing physical servers to focus on logical management of data storage through policies. Compute Storage Data Center Architecture Considerations Compute Cache Database Storage Client Response • 80% North-South Traffic Oversubscription : upto 200:1 (Client Request +Server Response = 20% traffic). • Inter-rack latency: 150 micros. Lookup Storage = 80% traffic. • Scale: Up to 20 racks (features Non- blocking 2 tier designs optimal). VM VM VM VM Purchase "vanity free" This is where.. Open Compute might allow companies to purchase "vanity free". Previous outdated data center designs support more monolithic computing. • Low density X620 might help XYZ Account to avoid stranded ports. • Availability - Dual X620s can be deployed to minimize impact to maintenance. • Flexibility of the X620 can offer flexibility to support both 1G and 10G to servers and storage. One RACK Design Closely coupled Nearly coupled Loosely coupled Shared Combo Ports 4x10GBASE-T & 4xSFP+ 100Mb/1Gb/10GBASE-T The monolithic datacenter is dead. 
Open Compute – Two-Rack Design. This is where XYZ Account can reduce OPEX and leverage a repeatable solution.
[Diagram: two-rack layout – servers, storage, and a Summit management switch in each rack, with redundant Summit switches in the middle.]
• With the spline setup, XYZ Account can put redundant switches in the middle and link each server to those switches.
• Fewer hops between servers – the important thing is that each server is precisely one hop from any other server.
• Avoid stranded ports – designs often have a mix of fat and skinny nodes. If XYZ Account deploys 48-port leaf switches, many configurations might have anywhere from 16 to 24 stranded ports (see the sketch below).
Open Compute – Eight-Rack POD Design. This is where the typical spline setup grows into an eight-rack POD with a leaf-spine topology.
[Diagram: eight racks, each with servers, storage, and a Summit management switch, connected through redundant Summit leaf and spine switches.]
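A quick sketch of the stranded-port arithmetic referenced above (the per-rack device counts are hypothetical): with a fixed 48-port leaf per rack, any rack that holds fewer than 48 attached devices strands the remainder.

```python
# Stranded-port arithmetic for fixed 48-port leaf switches.
# The per-rack device counts are hypothetical illustrations.

LEAF_PORTS = 48

def stranded_ports(devices_in_rack: int, leaf_ports: int = LEAF_PORTS) -> int:
    """Ports left unused on the rack's leaf switch."""
    return max(leaf_ports - devices_in_rack, 0)

for devices in (24, 28, 32):   # a mix of "skinny" and "fat" racks
    print(f"{devices} devices -> {stranded_ports(devices)} stranded ports")
# 24, 20, and 16 stranded ports respectively -- the 16-24 range cited above.
```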
XYZ Account 2017 Design (Jeff Green, 2017, Rev. 1, South)
OPEX components of a converged environment: security, compliance, automation, and operations across compute, storage, and networking (X, Y, Z) – pooled compute, network, and storage capacity.
CAPEX components of a converged environment (2016 design – 10G compute, memory, and storage): cores (6/12/16/20), memory (64GB/128GB/192GB/256GB/512GB), spindles (3.6TB, 4.8TB, 6TB, 8TB, 10TB), SSD, and network (10G RJ45, SFP+, QSFP+, QSFP28).
Cabling legend –
SFP+ DAC cables: 10G passive (PN 10304 ~1m, 10305 ~3m, 10306 ~5m, 10307 ~10m); 10G SFP+ active copper cable (up to 100m).
QSFP+ DAC cables: 40G passive (PN 10321 ~3m, 10323 ~5m); 40G active (PN 10315 ~10m, 10316 ~20m, 10318 ~100m); 40G fan-out (PN 10321 ~3m, 10322 ~5m, PN 10GB-4-F10-QSFP ~10m, PN 10GB-4-F20-QSFP ~20m).
10G fiber: 10GBASE-SR over OM3 (300m) or OM4 (400m) (PN 10301); 10GBASE-LR over single mode (10km, 1310nm) (PN 10302); 10GBASE-ER over single mode (40km, 1550nm) (PN 10309); 10GBASE-ZR over single mode (80km, 1550nm) (PN 10310); 10GBASE-LRM 220m (720ft, plus mode conditioning) (PN 10303).
10G copper: 10GBASE-T over Class E Cat 6 (55m, 10G); 10GBASE-T over Class E Cat 6a or 7 (100m, 10G); 802.3bz 10GBASE-T (100m) over Cat 6 (5G); 802.3bz 10GBASE-T (100m) over Cat 5e (2.5G).
QSFP28 100G: LR4 up to 10km on single mode, with a lower-cost 2km module (Lite); wavelengths 1295.56, 1300.05, 1304.58, and 1309.14nm.
QSFP28 DACs (passive cables): 10411 (1m), 10413 (3m), 10414 (5m) QSFP28-QSFP28; 10421 (1m), 10423 (3m), 10424 (5m) QSFP28 to 4x SFP28 (4x25Gb) breakout; 10426 (1m), 10428 (3m) QSFP28 to 2x SFP28 (2x50Gb) breakout. 100G = 4 x 25G lanes.
QSFP28 DACs (active cables): 10434 (5m), 10435 (7m), 10436 (10m), 10437 (20m) QSFP28-QSFP28; 10441 (5m), 10442 (7m), 10443 (10m), 10444 (20m) QSFP28 to 4x SFP28 (4x25Gb) breakout.
Overall architecture – prescriptive services (10G/40G, 25G/50G/100G, ExtremeCore 10G, ExtremeEdge PoE): overlay – SDN (NSX, ACI, other); underlay – spine-leaf (MLAG, NEXUS, other). Applications move from manual, slow provisioning to automated provisioning and configuration with intelligence in software.
Extreme Data Center Switch Options (10, 25, 40, 50, 100G)
• Layer 2 multi-chassis port channel (vPC or MLAG); ISSU for a redundant pair, with less than 2000 ms impact for the upgrade.
[Diagram: dual 100G spine/core with control and border nodes, 4 x 100G links down to campus, data center, and Resnet leaf pairs.]
• Minimum MAC address table size should be 256K; ARP table capacity should support a minimum of 64K users in a single VLAN; deep interface buffers or intelligent buffer management; VXLAN support.
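As a rough companion to the cabling legend, the lookup below is an illustrative sketch only: the reach figures are the nominal ones listed above, and the selection logic is deliberately simplified (real choices also weigh installed fiber/copper type, cost, and power).

```python
# Simplified 10G media selection by link distance, using the nominal reaches
# from the cabling legend above.

MEDIA_REACH_M = [
    ("SFP+ passive DAC",              5),
    ("10GBASE-T (Cat 6a/7)",        100),
    ("10GBASE-LRM",                 220),
    ("10GBASE-SR (OM3)",            300),
    ("10GBASE-LR (single mode)", 10_000),
    ("10GBASE-ER (single mode)", 40_000),
    ("10GBASE-ZR (single mode)", 80_000),
]

def pick_media(distance_m: float) -> str:
    """Return the first media option whose nominal reach covers the distance."""
    for name, reach in MEDIA_REACH_M:
        if distance_m <= reach:
            return name
    return "no standard 10G option listed"

for d in (3, 80, 250, 2_000, 60_000):
    print(f"{d} m -> {pick_media(d)}")
```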
Scale up – Resnet, Campus, and Data Center: spine-leaf delivers the interconnect for distributed compute workloads.
• X870-32c spine/leaf switch: 32 x 10/25/40/50/100GbE QSFP28 ports, or 96 x 10GbE ports (via 24 ports of 4x10Gb breakout) plus 8 x 10/25/40/50/100GbE ports (see the breakout sketch below).
• X690 10Gb leaf switches, enabled with 100Gb: new 10Gb leaf aggregation switches for fiber and 10GBASE-T applications with 100Gb Ethernet. Enabled with 40Gb and 100Gb high-speed uplinks; shares power supply and fan modules with the X870; stacks with the X870 using SummitStack-V400.
• 460 Multirate, V400 Port Extender, and X620 Multirate with shared combo ports (4x10GBASE-T & 4xSFP+, 100Mb/1Gb/10GBASE-T).
ExtremeFabric – a way to simplify network design and operation. Make the network act like a single fabric:
• Fabric transparent to end devices; combines the fabric elements into a single domain; the fabric appears as a single device.
• Policy and overlays applied at the fabric edge; no subnets, VLANs, or VRFs required within the fabric; zero-touch configuration.
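The X870-32c port arithmetic quoted above works out as follows; this is just a sanity-check sketch of the breakout math (24 of the 32 QSFP28 ports split into 4x10GbE each, the remaining 8 left at native speed).

```python
# Sanity check of the X870-32c breakout arithmetic quoted above:
# 32 QSFP28 ports, 24 of them broken out as 4x10GbE, 8 left at native speed.

TOTAL_QSFP28 = 32
BREAKOUT_PORTS = 24
LANES_PER_BREAKOUT = 4          # 4 x 10GbE per QSFP28 breakout

ten_gig_ports = BREAKOUT_PORTS * LANES_PER_BREAKOUT
native_ports = TOTAL_QSFP28 - BREAKOUT_PORTS

print(f"{ten_gig_ports} x 10GbE ports")          # 96 x 10GbE
print(f"{native_ports} x 10/25/40/50/100GbE")    # 8 native-speed ports
```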
Data Center Fabric
[Diagram: 100G spines with leaf pairs serving ISP 1, ISP 2, residential housing, and a hot spot in the local town.]
• Everything in the spine-leaf fabric is just two hops away; a separate path is available through each spine; the latency is the same for each path.
Scale out – Main Campus and University
[Diagram: leaf pairs at the main campus and university connected to multiple 100G spines; the fabric scales out by adding spines.]
Spine-leaf delivers the interconnect for distributed compute workloads across the Campus, Data Center, and Resnet, reusing the same building blocks described above: X870-32c spine/leaf, X690 10Gb leaf, 460 Multirate/V400 Port Extender, X620 Multirate, and the ExtremeFabric behavior at the edge.
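The "everything is two hops away, with a separate equal-latency path through each spine" property can be stated as a tiny model (illustrative only; the spine and leaf counts are hypothetical): leaf-to-leaf traffic always crosses exactly one spine, so the number of equal-cost paths equals the number of spines.

```python
# Tiny model of the spine-leaf properties stated above: any leaf-to-leaf flow
# crosses exactly one spine (two hops), and ECMP fans out across every spine.
# Spine and leaf counts are hypothetical.

def spine_leaf_paths(num_spines: int, src_leaf: int, dst_leaf: int):
    """Enumerate equal-cost leaf -> spine -> leaf paths between two distinct leaves."""
    assert src_leaf != dst_leaf
    return [(f"leaf{src_leaf}", f"spine{s}", f"leaf{dst_leaf}") for s in range(num_spines)]

paths = spine_leaf_paths(num_spines=4, src_leaf=1, dst_leaf=7)
print(len(paths), "equal-cost paths, each", len(paths[0]) - 1, "hops")
for p in paths:
    print(" -> ".join(p))
```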
Organizing Compute, Management & Edge
[Diagram: leaf-spine fabric with compute clusters and infrastructure clusters (Edge, storage, vCenter, and cloud management system); the Edge leaf runs L3 to the DC fabric and L2 to external networks (WAN/Internet), with L2 VLANs for bridging.]
A single vCenter Server manages all Management, Edge, and Compute clusters:
• NSX Manager is deployed in the Management cluster and paired to the vCenter Server.
• NSX Controllers can also be deployed into the Management cluster.
• This reduces vCenter Server licensing requirements.
Separating the compute, management, and Edge functions has the following design advantages for managing the life-cycle of compute and Edge resources:
• Ability to isolate and develop a span of control.
• Capacity planning – CPU, memory, and NIC.
• Upgrade and migration flexibility.
• Automation control over the areas or functions that require frequent changes: app tier, micro-segmentation, and load balancer.
Three areas of technology require consideration:
• Interaction with the physical network.
• Overlay (VXLAN) impact.
• Integration with vSphere clustering.
[Diagram: registration/mapping of the vCenter Server, NSX Manager, and NSX Controller in the Management cluster; web and app VMs in Compute clusters A and B; Edge and control VMs in the Edge cluster.]
Preparation – Netsite – Operation: Convergence 3.0 (automation in seconds), flexibility and choice.
Traditional networking configuration tasks (L2/L3):
• Initial configuration – multi-chassis LAG, routing configuration, SVIs/RVIs, VRRP/HSRP, LACP, VLANs.
• Recurring configuration – SVIs/RVIs, VRRP/HSRP, advertising new subnets, access lists (ACLs), adjusting VLANs on trunks, VLAN-to-STP/MST mapping, adding VLANs on uplinks and server ports.
NSX is AGNOSTIC to the underlay network – L2, L3, or any combination. There are only TWO requirements: IP connectivity and an MTU of 1600.
Network & Security Services in Software
[Diagram: WAN/Internet above two L3/L2 PODs (POD A and POD B) with VLAN X and VLAN Y stretched between them.]
L3 Topologies & Design Considerations (with X670 cores running XoS): L2 interfaces can by default send and receive IP packets as large as 9214 bytes (no configuration required). L3 interfaces can by default send and receive IP packets as large as 1500 bytes; the configuration step for L3 interfaces is to change the MTU to 9214 (via the "mtu" command), after which packets as large as 9214 bytes can be sent and received.
• L3 ToR designs run a dynamic routing protocol between leaf and spine; BGP, OSPF, or IS-IS can be used.
• Each rack advertises a small set of prefixes (a unique VLAN/subnet per rack), with equal-cost paths to the other racks' prefixes.
• The switch provides default-gateway service for each VLAN subnet.
• 802.1Q trunks carry a small set of VLANs for VMkernel traffic.
• The rest of the session assumes an L3 topology (see the routing sketch after this list).
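As a rough illustration of that L3 ToR model (the rack count, prefixes, and next-hop bookkeeping below are hypothetical; this is not a routing-protocol implementation), each rack originates its own small prefix set and learns every other rack's prefix with one equal-cost next hop per spine.

```python
# Hypothetical illustration of the L3 ToR model described above: each rack
# originates a unique per-rack subnet and learns every other rack's prefix
# with one equal-cost next hop per spine (bookkeeping only, not BGP/OSPF).

NUM_SPINES = 2
RACKS = {f"rack{r}": f"10.1.{r}.0/24" for r in range(1, 5)}   # unique subnet per rack

def routing_table(local_rack: str):
    """Prefixes a rack's ToR learns, each reachable via every spine (ECMP)."""
    table = {}
    for rack, prefix in RACKS.items():
        if rack != local_rack:
            table[prefix] = [f"spine{s}" for s in range(1, NUM_SPINES + 1)]
    return table

for prefix, next_hops in routing_table("rack1").items():
    print(prefix, "via", " / ".join(next_hops))
```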
XYZ Account (Spine) – CORE 1 and CORE 2. Preparation – Netsite – Operation: Convergence 3.0 (automation in seconds), flexibility and choice across Wi-Fi, analytics, security, and policy on the Extreme platform:
• Lync traffic engineering with Purview – analytics service insertion.
• Multi-tenant networks – automation and orchestration.
• Self-provisioned network slicing (proof-of-concept implementation).
A better experience through simpler solutions that deliver long-term value: products – one wired and wireless platform; customer care – strong first-call resolution.
NSX Controller Functions
[Diagram: Logical Routers 1-3 on VXLAN 5000/5001/5002, backed by the controller VXLAN directory service: MAC table, ARP table, and VTEP table.]
This is where NSX will provide XYZ Account one control plane to distribute network information to ESXi hosts. NSX Controllers are clustered for scale-out and high availability:
• Network information is distributed across the nodes in a Controller Cluster (slicing).
• Removes the VXLAN dependency on multicast routing/PIM in the physical network.
• Provides suppression of ARP broadcast traffic in VXLAN networks.
SERVER FARM (Leafs)
[Diagram: leaf racks of servers, storage, and Summit management switches, plus media servers, routers, firewalls, and PBXs – compute workload, services, and connectivity.]
Transport Zone, VTEP, Logical Networks and VDS
[Diagram: VXLAN transport network between vSphere hosts – Host 1 (VTEP2, 10.20.10.11) and Host 2 (VTEP3, 10.20.10.12; VTEP4, 10.20.10.13) carrying VXLAN 5002 between VMs MAC1-MAC4 across vSphere Distributed Switches.]
• When VXLAN is deployed it creates an automatic port-group whose VLAN ID must be the same per VDS. If the fabric is L2, this usually means the same IP subnets are also used across racks for a given type of traffic.
• For a given host, only one VDS is responsible for VXLAN traffic. A single VDS can span multiple clusters.
• The VTEP VMkernel interface belongs to a specific VLAN-backed port-group dynamically created during the cluster VXLAN preparation.
• One or more VDS can be part of the same Transport Zone, and a given Logical Switch can span multiple VDS.
vSphere host (ESXi) to L3 ToR switch: routed uplinks (ECMP) and an 802.1Q VLAN trunk carrying, for example, VLAN 66 Mgmt 10.66.1.25/26 (DGW 10.66.1.1), VLAN 77 vMotion 10.77.1.25/26 (GW 10.77.1.1), VLAN 88 VXLAN 10.88.1.25/26 (DGW 10.88.1.1), and VLAN 99 Storage 10.99.1.25/26 (GW 10.99.1.1), with matching SVIs on the ToR (SVI 66: 10.66.1.1/26, SVI 77: 10.77.1.1/26, SVI 88: 10.88.1.1/26, SVI 99: 10.99.1.1/26). The span of these VLANs stays within the rack; a per-rack addressing sketch follows below.
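A small sketch of that per-rack addressing scheme (the VLAN numbers and /26 subnets come from the example above; how additional racks are numbered is an assumption made for illustration): each rack gets its own set of VMkernel subnets, with the ToR SVI taking the first address as the gateway.

```python
# Per-rack VMkernel addressing sketch based on the example above
# (VLAN 66 Mgmt, 77 vMotion, 88 VXLAN, 99 Storage, one /26 per VLAN per rack).
# The way additional racks are numbered here is an assumed convention.
import ipaddress

VLANS = {66: "Mgmt", 77: "vMotion", 88: "VXLAN", 99: "Storage"}

def rack_plan(rack: int):
    """Return {vlan: (name, subnet, ToR SVI/gateway)} for one rack."""
    plan = {}
    for vlan, name in VLANS.items():
        # second octet = VLAN, third octet = rack number (assumed convention)
        subnet = ipaddress.ip_network(f"10.{vlan}.{rack}.0/26")
        gateway = subnet.network_address + 1       # SVI on the ToR switch
        plan[vlan] = (name, subnet, gateway)
    return plan

for vlan, (name, subnet, gw) in rack_plan(rack=1).items():
    print(f"VLAN {vlan} {name:8s} {subnet}  SVI/GW {gw}")
```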
Traditional control relies on LDAP, NAC, DHCP, RADIUS, captive portal, DNS, and MDM user repositories under corporate control; the XYZ Account services move NAC, analytics, and Netsite into a management cluster (control), with the option of cloud-based control.
[Diagram: single-rack connectivity – a leaf with L3 up to the routed DC fabric and an L2 802.1Q trunk carrying the VMkernel VLANs and the VLANs for management VMs; dual-rack connectivity repeats the same pattern across two racks.]
Extreme VMware deployment considerations – this is where the management cluster is typically provisioned on a single rack.
• The single-rack design still requires redundant uplinks from host to ToR carrying the VLANs for management.
• A dual-rack design gives increased resiliency (handling single-rack failure scenarios), which could be a requirement for a highly available design.
• Typically, in a small design the management and Edge clusters are collapsed. Exclude the management cluster from VXLAN preparation.
[Diagram: management rack behind ToR #1 and ToR #2 hosting the vCenter Server, NSX Manager, and Controllers 1-3.]
The NSX Manager is deployed as a virtual appliance with 4 vCPU and 12 GB of RAM per node; the appliance configuration cannot be modified. Consider reserving memory for vCenter to ensure good Web Client performance.
Extreme Networks compute, storage, and networking integration... Extreme Networks control, analytics, and security integration...
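The controller "slicing" mentioned earlier can be pictured with a toy model (purely illustrative; the modulo assignment below is an assumption, not the actual NSX algorithm): work such as logical-switch state is divided across the three controller nodes, and the cluster stays authoritative only while a majority of nodes is up.

```python
# Toy model of controller-cluster slicing and majority quorum. The modulo
# assignment below is an illustrative assumption, not the actual NSX algorithm.

CONTROLLERS = ["controller-1", "controller-2", "controller-3"]

def owner(vni: int, nodes=CONTROLLERS) -> str:
    """Pick which controller node 'owns' the state for a given logical switch."""
    return nodes[vni % len(nodes)]

def has_quorum(nodes_up: int, cluster_size: int = len(CONTROLLERS)) -> bool:
    """A majority of nodes must be up for the cluster to remain authoritative."""
    return nodes_up > cluster_size // 2

for vni in (5000, 5001, 5002):
    print(f"VXLAN {vni} -> {owner(vni)}")
print("quorum with 2 of 3 nodes up:", has_quorum(2))   # True
print("quorum with 1 of 3 nodes up:", has_quorum(1))   # False
```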
• Identify design principles and implementation strategies: start from service requirements and leverage standardization (the design should be driven by today's and tomorrow's service requirements).
• Standardization limits technical and operational complexity and the related costs: develop a reference model based on principles (principles enable consistent choices over the long run).
• Leverage best practices and proven expertise: streamline your capability to execute and your operational effectiveness (unleash the capabilities provided by enabling technologies).
Virtualized services for application delivery: per-application virtual routers for VoIP, Oracle, wireless LAN, and PACS.
Next-generation operations: pay as you go, with savings measured against the number of assets/ports, maintenance costs, and operational costs, built on a reference architecture.
Data center Network as a Service (NaaS)
[Diagram: Management cluster with a Management vCenter plus vCenter Servers for NSX Domain A and Domain B, each pairing an NSX Manager (VM-A, VM-B) with Compute and Edge clusters containing web/app VMs, NSX Controllers, and Edge/control VMs.]
Multiple vCenters Design – XYZ Account design with multiple NSX domains:
• Following VMware best practice, the Management cluster is managed by a dedicated vCenter Server (Mgmt VC); a separate vCenter Server in the Management cluster manages the Edge and Compute clusters.
• The NSX Manager is also deployed into the Management cluster and paired with this second vCenter Server; multiple NSX Manager/vCenter Server pairs (separate NSX domains) can be deployed.
• NSX Controllers must be deployed into the same vCenter Server the NSX Manager is attached to, so the Controllers are usually also deployed into the Edge cluster.
CORE 1 and CORE 2 – XYZ Account (Primary): Preparation – Netsite – Operation – Logical Switch.
[Diagram: primary-site server farm (leafs) – racks of servers, storage, and Summit management switches plus media servers, routers, firewalls, and PBXs – compute workload, services, and connectivity.]
CORE 1 and CORE 2 – XYZ Account (DR Site): Preparation – Netsite – Operation – Logical Switch.
[Diagram: DR-site server farm (leafs) with the same rack layout.]
[Diagram: Logical Routers 1-3 on VXLAN 5000/5001/5002 backed by the controller VXLAN directory service (MAC, ARP, and VTEP tables); ToR #1 and #2 with Controllers 1-3, NSX Manager, and vCenter Server.]
Traffic is engineered "like ATM or MPLS" but is carried in UDP over the existing IP network between VTEPs.
XYZ Account NSX Transport Zone: a collection of VXLAN-prepared ESXi clusters.
• Normally a Transport Zone defines the span of Logical Switches (Layer 2 communication domains).
• A given Logical Switch can span multiple VDS, and one or more VDS can be part of the same Transport Zone.
• A VTEP (VXLAN Tunnel EndPoint) is a logical VMkernel interface that connects to the Transport Zone to encapsulate and decapsulate VXLAN traffic.
• The VTEP VMkernel interface belongs to a specific VLAN-backed port-group dynamically created during the cluster VXLAN preparation.
Overlay considerations – Ethernet Virtual Interconnect (EVI) can be deployed for active/active data centers over any network. This is where careful attention is required, because the different data plane (an additional header) makes jumbo frames a must-have, and the technology will continue to evolve.
• Scalability beyond the 802.1Q VLAN limitations, to 16M services/tenants (see the sketch below).
• L2 extension, with VXLAN as the de-facto solution from VMware; standardization around the control plane is still a work in progress (even if BGP EVPN is here).
• Encapsulation over IP delivers the ability to cross L3 boundaries. As a result, the design above becomes one big L3 domain with L2 processing.
EVI provides additional benefits such as:
• Transport agnostic.
• Up to 16 active/active data centers.
• Active/active VRRP default gateways for VMs.
• STP outages remain local to each DC.
• Improved WAN utilization by dropping unknown frames and providing ARP suppression.
[Diagram: EVI tunnel across the physical underlay network between the primary and DR sites.]
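The "16M services/tenants" figure follows directly from the 24-bit VXLAN Network Identifier; the arithmetic below is a simple sketch contrasting it with the 12-bit 802.1Q VLAN ID.

```python
# Segment-count arithmetic behind the scalability bullet above:
# 802.1Q carries a 12-bit VLAN ID, VXLAN carries a 24-bit VNI.

VLAN_ID_BITS = 12
VNI_BITS = 24

usable_vlans = 2**VLAN_ID_BITS - 2      # IDs 0 and 4095 are reserved
vxlan_segments = 2**VNI_BITS            # ~16.7M logical segments

print(f"802.1Q VLANs: {usable_vlans}")          # 4094
print(f"VXLAN segments: {vxlan_segments:,}")    # 16,777,216
print(f"scale factor: ~{vxlan_segments // usable_vlans}x")
```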