Multi-Rate 1/2.5/5/10 Gigabit Edge PoE++
Multi-Rate Spine-Leaf Design (10/25/40/50/100 Gigabit)
X440-G2 (L3 – Value, 1G to 10G)
PoE | Fiber | DC | Policy
 SummitStack-V (without any additional license required).
 Upgradeable to 10GbE (PN 16542 or 16543).
 Policy built in (simplicity with multi-auth).
EXOS 21.1 or higher
Value with Automation
First Extreme switch to support Cloud Value
X460-G2 (Advanced L3, 1G to 40G, Multi-rate Option)
PoE | Fiber | DC | Policy
Fit: The Swiss Army Knife of Switches
Half Duplex | ½ & ½ | 3 Models
This is where 10G on existing copper Cat5e and Cat6 extends the life of the installed cable plant. Great for 1:N convergence.
X620 (10G Copper or Fiber)
Speed: Next Gen Edge
Lowered TCO via Limited Lifetime Warranty
Wallplate AP | AP + Camera | Outdoor Wave 2 | Multi-Gigabit Wireless | High Density
-pack or Wedge (Facebook)
Extreme Support
EXOS Platform
Config L2/L3 | Analytics | Policy
Any OS on Any Bare Metal Switch
Disaggregated Switch
CAPEX or OPEX (you choose)?
Reduced risk (just witness, or take action)
Time is the critical factor with XYZ Account Services...
Considerations: Infrastructure | Business model | Ownership | Management | Location
 32 x 100Gb
 64 x 50Gb
 128 x 25Gb
 128 x 10Gb
 32 x 40Gb
96 x 10GbE ports (via 4x10Gb breakout)
8 x 10/25/40/50/100G
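That port-count menu falls out of standard QSFP28 breakout arithmetic. A minimal sketch, assuming the usual breakout modes (each 100G port can run as 1x100G, 1x40G, 2x50G, 4x25G, or 4x10G); the 32-port figure comes from the list above:

```python
# Hedged sketch: logical port counts from one 32-port QSFP28 switch,
# assuming standard breakout modes (1x100G, 1x40G, 2x50G, 4x25G, 4x10G).
BREAKOUT_LANES = {"100Gb": 1, "40Gb": 1, "50Gb": 2, "25Gb": 4, "10Gb": 4}

def logical_ports(physical_ports: int, speed: str) -> int:
    """Ports exposed when every physical port uses the same breakout mode."""
    return physical_ports * BREAKOUT_LANES[speed]

for speed in ("100Gb", "50Gb", "25Gb", "10Gb", "40Gb"):
    print(f"32 x QSFP28 as {speed:>5}: {logical_ports(32, speed):>3} ports")
# Matches the list above: 32x100Gb, 64x50Gb, 128x25Gb, 128x10Gb, 32x40Gb.
```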
Next Gen: Spine Leaf
X670 & X770 – Hyper Ethernet
Common Features
 Data Center Bridging (DCB) features.
 Low ~600 ns chipset latency in cut-through mode.
 Same PSUs and fans as the X670s (front-to-back or back-to-front airflow), AC or DC.
X670-G2-72x (10GbE Spine Leaf): 72 x 10GbE
X670-48x-4q (10GbE Spine Leaf): 48 x 10GbE & 4 x QSFP+
QSFP+ / 40G DAC
Extreme Feature Packs
Core | Edge | Advanced Edge | AVB | OpenFlow | 1588 PTP | MPLS | Direct Attach | Optics License
Extreme switches include the license they normally need. Like any other software platform, you have an upgrade path.
QSFP28 / 100G DAC
Thin & Crunchy
EXOS platform with one track of software.
Speed with features (simple).
Metro functionality like ATM or SONET.
Flexible horizontal or vertical stacking.
Purpose-built on Broadcom ASICs.
So what, who cares?
Deliver XYZ Account the value of HP with the feature function of Cisco.
XYZ Account Business Value
Why Extreme?
Summit
Policy delivers automation.
Thick & Chewy
Know and control the who, what, when, and where, and the user experience across your XYZ Account network.
Control with insight...
Why Enterasys?
XYZ Account Strategic Asset
Custom ASICs | S & K Series | Chantry | Motorola AirDefense
So what, who cares?
Flow-based switching
Simplicity with Policy
Wired and wireless
100% insourced support
Today you get both: Control
So what, who cares?
Fit | Speed | Unique Value | Unique Control
Summit G2
Yesterday – Cabletron changed the game with structured wiring (remember vampire taps, coax Ethernet, etc.).
Today – Extreme delivers structured networking.
Policy
Summit
Who? Where? When? What device? How?
Quarantine / Remediate / Allow
Authentication
NAC Server
Summit
NetSight Advanced
NAC Client: Joe Smith (XYZ Account)
Access-Controlled Subnet
Enforcement Point
Network Access Control
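A hypothetical sketch of the admission decision in the NAC flow above (the function and posture check are illustrative only, not the actual NAC server API):

```python
# Hypothetical NAC decision logic: authenticate, then Allow onto the
# access-controlled subnet or Quarantine/Remediate at the enforcement
# point. Illustrative only -- not the actual NAC server API.
def nac_decision(authenticated: bool, posture_ok: bool) -> str:
    if not authenticated:
        return "deny"                    # never admitted to the subnet
    if not posture_ok:
        return "quarantine-remediate"    # restricted subnet until fixed
    return "allow"

print(nac_decision(authenticated=True, posture_ok=False))
# -> quarantine-remediate
```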
This is where: if X + Y, then Z... (sketched below)
 LLDP-MED
 CDPv2
 ELRP
 ZTP
If a user matches a defined attribute value (ACL, QoS), then place the user into a defined ROLE.
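A small sketch of that rule shape: an attribute match places the user into a role that carries its own ACL and QoS. The attribute names and role contents are made up for illustration; this is not the EXOS/NetSight policy syntax:

```python
# Illustrative policy engine: "if user matches a defined attribute
# value, then place user into a defined ROLE" (with ACL and QoS).
from dataclasses import dataclass, field

@dataclass
class Role:
    name: str
    acl: list = field(default_factory=list)   # e.g. "deny tcp any port 23"
    qos_profile: str = "best-effort"

ROLE_RULES = [
    # (attribute, expected value, role assigned on match) -- hypothetical
    ("department", "engineering", Role("Engineer", ["permit ip any"], "gold")),
    ("device_type", "ip-phone", Role("VoIP", ["permit udp any range 16384 32767"], "voice")),
]

def classify(user_attrs: dict) -> Role:
    for attr, value, role in ROLE_RULES:
        if user_attrs.get(attr) == value:
            return role                        # place user into a defined ROLE
    return Role("Quarantine", ["deny ip any"])  # default: remediate

print(classify({"department": "engineering"}).name)  # -> Engineer
```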
A port is what it is because? This is where you easily identify the impact and source of interference problems.
Detailed Forensic Analysis
 Device, threats, associations, traffic, signal, and location trends.
 Record of wireless issues.
Network Trend Analysis
 Historical analysis of intermittent wireless problems.
 Performance trends.
Spectrum Analysis for Interference Detection
 Real-time spectrograms.
 Proactive detection of application-impacting interference.
Visualize RF Coverage
 Real-time RF visualizations.
 Proactive monitoring and alerting of coverage problems.
ADSP for faster root-cause forensic analysis for SECURITY & COMPLIANCE.
Event Sequence | Classify Interference Sources | Side-by-Side Comparative Analysis
AirDefense
Application Experience: Full Context
App Analytics
Stop the finger-pointing: Application Network Response.
Flow or Bit Bucket Collector: 3 million flows
Sensors:
 X460 IPFIX: 4,000 flows (2,048 ingress, 2,048 egress).
 PV-FC-180, S or K Series sensor (CoreFlow2): 1 million flows.
 Flow-based access points: from the controller, 8K flows per AP (or 24K flows on a C35).
Why not do this in the network?
6 million flows
Business Value
Context | BW | IP | HTTP:// | Apps
Platform | Automation | Control | Experience | Solution Framework
Is your network faster today than it was 3 years ago? Going forward it should deliver more, faster, different.
X430-G2 (L2 – 1G to 10G)
PoE
Distribute content from a single source to hundreds of displays.
Ethernet as a utility (PoE): injectors up to 75 watts.
XYZ Account Data Center
Chassis vs. Spline
Fabric Modules (Spine)
I/O Modules (Leaf)
Spine
Leaf
Proven value with the legacy approach:
 Cannot access line cards.
 No L2/L3 recovery inside.
 No access to the fabric.
Disaggregated value:
 Control of top-of-rack switches.
 L2/L3 protocols inside the spline.
 Full access to spine switches.
No ego, complexity, or vendor lock-in.
Fat-Tree
 Traditional 3-tier model (less cabling).
 Link speeds must increase at every hop (less predictable latency).
 Common in chassis-based architectures (optimized for north/south traffic).
Clos / Crossbar
 Every leaf is connected to every spine (efficient utilization, very predictable latency).
 Always two hops to any leaf (more resiliency, flexibility, and performance).
 Friendlier to east/west traffic (the uplink to the rest of the network is just another leaf).
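Back-of-envelope arithmetic for the Clos claims above: a full leaf-spine mesh means links grow as leaves × spines, any leaf-to-leaf path is exactly two hops, and per-leaf fabric bandwidth is dialed in by the spine count. The example numbers are illustrative:

```python
# Sketch of leaf-spine (Clos) fabric properties: every leaf connects
# to every spine, so leaf-to-leaf is always two hops and bandwidth
# scales with the number of spines.
def clos_fabric(leaves: int, spines: int, uplink_gbps: int = 100) -> dict:
    return {
        "fabric_links": leaves * spines,            # full leaf-spine mesh
        "hops_leaf_to_leaf": 2,                     # leaf -> spine -> leaf
        "per_leaf_uplink_gbps": spines * uplink_gbps,
    }

print(clos_fabric(leaves=8, spines=4))
# {'fabric_links': 32, 'hops_leaf_to_leaf': 2, 'per_leaf_uplink_gbps': 400}
```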
The XYZ Account handshake layer:
 This is where convergence needs to happen – LAN/SAN, FCoE, ETS. Stop or allow whatever you can (efficient multicasting).
 Virtualization happens with VXLAN and vMotion (control by the overlay).
 N+1 fabric design needs to happen here (delivers simple, no-vanity future-proofing, no-forklift migrations, interop between vendors, and hitless operation).
This is where a fabric outperforms the Big Uglies.
ONE to ONE: Spine Leaf
The XYZ Account Ethernet Expressway layer: deliver massive scale...
 This is where low latency is critical; switch as quickly as you can. DO NOT slow down the core – keep it simple (disaggregated spline + one Big Ugly).
 Elastic capacity – today's XYZ Account spines are tomorrow's leafs. Dial in the bandwidth to your specific needs with the number of uplinks.
 Availability – the state of the network is kept in each switch; no single point of failure. Seamless XYZ Account upgrades; easy to take a single switch out of service.
(Cloud Fabric) Disaggregation
Spine
Leaf
Legacy challenges: complex/slow/expensive, scale-up and scale-out, vendor lock-in, proprietary (HW, SW) vs. commodity.
Fabric Modules (Spine)
I/O Modules (Leaf)
Spline (Speed)
Active-Active Redundancy
fn(x,y,z): The next convergence will be collapsing datacenter designs into smaller, elastic form factors for compute, storage, and networking.
 This is where you can never have enough.
 Customers want scale made easy.
 Hypervisor integration with cloud simplicity.
Start Small; Scale as You Grow
This is where you can simply add Extreme Leaf Clusters.
 Each cluster is independent (including servers, storage, database & interconnects).
 Each cluster can be used for a different type of service.
 Delivers a repeatable design which can be added as a commodity.
XYZ Account Spine
Leaf
Cluster | Cluster | Cluster
Egress | Scale | Ingress
Active / Active
VM | VM | VM
RR | RR
BGP Route-Reflector (RR)
iBGP Adjacency
This is where: VXLAN (Route Distribution)
Why VXLAN? It flattens the network to a single tier from the XYZ Account end-station perspective.
 All IP/BGP based (Virtual eXtensible Local Area Network). Host route distribution is decoupled from the underlay protocol.
 VXLAN's goal is to allow dynamic, large-scale, isolated virtual L2 networks to be created for virtualized and multi-tenant environments.
 Route reflectors are deployed for scaling purposes – easy setup, small configuration.
Traffic-engineer "like ATM or MPLS": VXLAN UDP tunnels (start/stop) ride the existing IP network between VTEPs, carrying VM-to-VM traffic.
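A minimal sketch of the encapsulation being described: VXLAN prepends an 8-byte header (flags plus a 24-bit VNI, per RFC 7348) to the tenant's L2 frame and carries it in UDP (IANA port 4789) over the existing IP network between VTEPs. The inner frame here is a placeholder:

```python
# VXLAN header per RFC 7348: 8 bytes -- flags word (I-bit set when the
# VNI is valid) followed by the 24-bit VNI shifted into bits 8..31.
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port

def vxlan_header(vni: int) -> bytes:
    assert 0 <= vni < 2**24, "VNI is 24 bits"
    flags = 0x08000000              # I flag: VNI is valid
    return struct.pack("!II", flags, vni << 8)

def encapsulate(inner_frame: bytes, vni: int) -> bytes:
    # The outer UDP/IP headers would be added by the VTEP's IP stack;
    # here we just prepend the VXLAN header to the tenant frame.
    return vxlan_header(vni) + inner_frame

pkt = encapsulate(b"\x00" * 64, vni=10001)
print(len(pkt), pkt[:8].hex())      # 72 bytes; header 0800000000271100
```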
Dense 10GbE interconnect using breakout cables, copper or fiber.
(Racks of VMs mapped to App 1, App 2, App 3.)
Intel, Facebook, OCP
Facebook 4-Post Architecture – each leaf (rack switch) has up to 48 x 10G downlinks. Segmentation and multi-tenancy without routers.
 Each leaf has 4 uplinks – one to each spine (4:1 oversubscription).
 Enables insertion of services without sprawl (analytics for fabric and application forensics).
 No routers at the spine; one failure reduces cluster capacity to 75%.
(5 S's) Needs to be Scalable, Secure, Shared, Standardized, and Simplified.
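The arithmetic behind those bullets, as a sketch (the per-post uplink speed is inferred from the stated 4:1 ratio, not given by the slide):

```python
# Sketch of the 4-post arithmetic: 48 x 10G downlinks, 4 uplinks
# (one per spine/post). A 4:1 oversubscription target implies 120G
# of total uplink; the per-post speed below is inferred, not stated.
downlink_gbps = 48 * 10                                # 480G toward the servers
posts = 4
oversubscription = 4                                   # stated design ratio
uplink_total_gbps = downlink_gbps / oversubscription   # 120G across 4 posts
print(f"per-post uplink ~= {uplink_total_gbps / posts:.0f}G")

# "One failure reduces cluster capacity to 75%":
print(f"capacity with one post down: {(posts - 1) / posts:.0%}")  # 75%
```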
Network (Fit)
Overlay Control
XYZ Account: the VXLAN forwarding plane for NSX control.
 This is where logical switches span physical hosts and network switches. Application continuity is delivered with scale. Scalable multi-tenancy across the data center.
 Enabling L2 over L3 infrastructure – pool resources from multiple data centers, with the ability to recover from disasters faster.
 Address network sprawl with a VXLAN overlay. Deeper integration with infrastructure and operations partners, integrations, and frameworks for IT organizations.
VMware NSX (Control Plane)
Management plane delivered by the NSX Manager.
Control plane: the NSX Controller manages logical networks and data-plane resources.
Extreme delivers an open, high-performance data plane with scale.
NSX Architecture and Components
CORE / CAMPUS
(Diagram: X870-32c spine switches over rows of N3K-C3064PQ-class top-of-rack switches; the per-port faceplate numbering is omitted.)
10Gb Aggregation | High Density 10Gb Aggregation | 10Gb/40Gb Aggregation | High Density 25Gb/50Gb Aggregation
X770 & X870-96x-8c (100Gb uplinks) | X670-G2 (100Gb uplinks)
Server PODs | 770 / 870 Spine
Data Center – Private Cloud: vC-1, vC-2, …, vC-N
This is where XYZ Account must first have the ability to scale with customer demand, delivering more than just disk space and processors.
 Scale – XYZ Account must be able to seamlessly fail over, scale up, scale down, and optimize management of the applications and services.
 Flexibility – the XYZ Account infrastructure must be able to host heterogeneous and interoperable technologies.
 Business – the business model costs might be optimized for operating expenses or toward capital investment.
Cloud Computing (Control Plane)
Layer          | On-Premise | Infrastructure (IaaS) | Platform (PaaS) | Software (SaaS)
Applications   | you manage | you manage            | you manage      | managed by vendor
Data           | you manage | you manage            | you manage      | managed by vendor
Runtime        | you manage | you manage            | managed by vendor | managed by vendor
Middleware     | you manage | you manage            | managed by vendor | managed by vendor
O/S            | you manage | you manage            | managed by vendor | managed by vendor
Virtualization | you manage | managed by vendor     | managed by vendor | managed by vendor
Servers        | you manage | managed by vendor     | managed by vendor | managed by vendor
Storage        | you manage | managed by vendor     | managed by vendor | managed by vendor
Networking     | you manage | managed by vendor     | managed by vendor | managed by vendor
Public | Private | MSP
FABRIC
This is where Azure ExpressRoute lets XYZ Account create private connections between Azure datacenters and XYZ Account infrastructure, on or off premises.
 ExpressRoute connections don't go over the public Internet. They offer more reliability, faster speeds, lower latencies, and higher security than typical Internet connections.
 Transferring data between on-premises systems and Azure can yield significant cost benefits.
 XYZ Account can establish connections to Azure at an ExpressRoute location, such as an Exchange provider facility, or connect directly to Azure from an existing WAN, such as a multi-protocol label switching (MPLS) VPN provided by a network service provider.
Microsoft Azure (Control Plane)
Cloud: the key impact of this model for the customer is a move from managing physical servers to a focus on the logical management of data storage through policies.
Compute Storage
Data Center Architecture Considerations
Compute | Cache | Database | Storage
Client | Response
 80% north-south traffic; oversubscription up to 200:1 (client request + server response = 20% of traffic).
 Inter-rack latency: 150 microseconds; lookup/storage = 80% of traffic.
 Scale: up to 20 racks (non-blocking two-tier designs are optimal).
VM | VM | VM | VM
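Reading those numbers back as a quick sanity check (the shares and ratios are the slide's own figures, restated):

```python
# The slide's figures: ~20% of traffic is client-facing (request +
# response), ~80% is east-west lookup/storage -- so the client edge
# tolerates heavy oversubscription while the inter-rack two-tier
# fabric should stay near non-blocking.
north_south_share = 0.20      # client request + server response
east_west_share = 0.80        # cache/database/storage lookups
edge_oversubscription = 200   # "up to 200:1" at the client edge
max_racks = 20                # non-blocking two-tier design target

print(f"north-south: {north_south_share:.0%}, east-west: {east_west_share:.0%}")
print(f"edge up to {edge_oversubscription}:1 across up to {max_racks} racks")
```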
Purchase "vanity free"
This is where...
Open Compute might allow companies to purchase "vanity free"; previous, outdated data center designs support more monolithic computing.
 A low-density X620 might help XYZ Account avoid stranded ports.
 Availability – dual X620s can be deployed to minimize the impact of maintenance.
 Flexibility – the X620 can support both 1G and 10G to servers and storage.
One RACK Design
Closely coupled | Nearly coupled | Loosely coupled
Shared combo ports: 4 x 10GBASE-T & 4 x SFP+, 100Mb/1Gb/10GBASE-T
The monolithic datacenter is dead.
Servers
Storage
Summit
Management
Switch
Summit
Summit
Storage
Management
Servers
Open Compute – Two Rack Design
This is where XYZ Account can reduce OPEX and leverage a repeatable solution.
 With the spline setup, XYZ Account can put redundant switches in the middle and link each server to those switches.
 Fewer hops between servers – the important thing is that each server is precisely one hop from any other server.
 Avoid stranded ports – designs often have a mix of fat and skinny nodes. If XYZ Account deploys 48-port leaf switches, many configurations might have anywhere from 16 to 24 stranded ports.
Two RACK
Servers
Storage
Summit
Management
Switch
Summit
Summit
Storage
Management
Servers
Storage
Summit
Management
Switch
Summit
Summit
Storage
Management
Servers
Typical spline setup
Open Compute: Eight Rack POD Design
This is where...
Typical spline setup: Eight Rack POD
Leaf
Spine
(Diagram: eight-rack POD – each rack holds Servers | Storage | dual Summit leaf switches | Management Switch.)
Chassis V Spline
Fabric Modules (Spine)
I/OModules(Leaf)
Spine
Leaf
Proven value with legacy approach.
 Can not access Line cards.
 No L2/l3 recovery inside.
 No access to Fabric.
Disaggregated value...
 Control Top-of-Rack Switches
 L2/L3 protocols inside the Spline
 Full access to Spine Switches
No EGO, Complexity or Vendor Lock-in).
Fat-Tree
Clos / Cross-Bar
 Traditional 3-tier model (Less cabling).
 Link speeds must increase at every hop (Less
predictable latency).
 Common in Chassis based architectures (Optimized
for North/South traffic).
 Every Leaf is connected to every Spine (Efficient
utilization/ Very predictable latency).
 Always two hops to any leaf (More resiliency,
flexibility and performance).
 Friendlier to east/west traffic (The uplink to the
rest of the network is just another leaf).
The XYZ Account handshake layer:
 This is where convergence needs to happen – LAN/SAN, FCoE, ETS. Stop or allow
whatever you can (Efficient Multicasting).
 Virtualization happens with VXLAN and VMotion (Control by the overlay).
 N plus one fabric design needs to happen here (Delivers simple no vanity future proofing,
No-forklift migrations, interop between vendors and hit-less operation).
This is where,
a Fabric outperforms the Big Uglies
ONE to ONE: Spine Leaf
The XYZ Account Ethernet Expressway Layer: deliver massive scale...
 This is where low latency is critical, switch as quickly as you can. DO NOT slow down
the core keep it simple (Disaggregated Spline + One Big Ugly
 Elastic Capacity - Today s XYZ Account s spines are tomorrow s leafs. Dial-in the
bandwidth to your specific needs with the number of uplinks.
 Availability - the state of the network is kept in each switch; no single point of failure.
Seamless XYZ Account upgrades, easy to take a single switch out of service.
(Cloud Fabric) Disaggregation
Spine
Leaf
Legacy Challenges:
Complex/Slow/Expensive
Scale-up and Scale out
Vendor lock-in
Proprietary (HW, SW)Commodity
Fabric Modules (Spine)
I/OModules(Leaf)
Spline (Speed)
Active - Active redundancy
fn(x,y,z) The next convergence will be collapsing
datacenter designs into smaller, elastic form
factors for compute, storage and networking.
 This is where, you can never have enough.
 Customers want scale made easy.
 Hypervisor integration w cloud simplicity.
L2
L3
L2
L3
L2
L3 L2
L3
L2
L3
Start Small; Scale as You Grow
This is where, you can simply add
a Extreme Leaf Clusters
 Each cluster is independent
(including servers, storage,
database & interconnects).
 Each cluster can be used for
a different type of service.
 Delivers repeatable design
which can be added as a
commodity.
XYZ Account Spine
Leaf
Cluster Cluster Cluster
Egress
Scale
Ingress
Active / Active
VM
VMVM
RR RR
BGP Route-ReflectorRR
iBGP Adjacency
This is where
VXLAN (Route Distribution)
This is where Why VxLAN? It Flattens network to a single
tier from the XYZ Account end station
perspective.
 All IP/BGP based (Virtual eXtensible Local
Area Network). Host Route Distribution
decoupled from the Underlay protocol.
 VXLAN s goal is allowing dynamic large
scale isolated virtual L2 networks to be
created for virtualized and multi-
tenant environments.
 Route-Reflectors deployed for scaling
purposes - Easy setup, small configuration.
TrafficEngineer“likeATMorMPLS”
UDP
Start
Stop
UDP UDP
UseExistingIPNetwork
VM
VM
VM
VM
VM
VM
VM
VM
VTEP VTEP
Dense 10GbE
Interconnect using
breakout cables,
Copper or Fiber
VM
VM
VM
VM
VM
VM
VM
VM
VM
VM
VM
VM
VM
VM
VM
App 1
App 2
App 3
Intel, Facebook, OCP
Facebook 4-Post Architecture - Each
leaf or rack switch has up to 48 10G
downlinks. Segmentation or multi-tenancy
without routers.
 Each spine has 4 uplinks – one to each
leaf (4:1 oversubscription).
 Enable insertion of services without
sprawl (Analytics for fabric and
application forensics).
 No routers at spine. One failure
reduces cluster capacity to 75%.
(5 S's) Needs to be Scalable, Secure,
Shared, Standardized, and Simplified.
Network (Fit) Overlay Control
The XYZ Account the VxLan forwarding plane for NSX control:
 This is where logical switches span across physical hosts and network switches. Application
continuity is delivered with scale. Scalable Multi-tenancy across data center.
 Enabling L2 over L3 Infrastructure - Pool resources from multiple data centers with the ability to
recover from disasters faster.
 Address Network Sprawl with an VXLAN overlay. Deeper Integration with infrastructure and
operations partners, integrations, and frameworks for IT organizations.
Vmware NSX (Control Plane)
Management Plane deliver
by the NSX Manager.
Control Plane NSX Controller
Manages Logical networks
and data plane resources.
Extreme delivers an open
high performance data
plane with Scale
NSX Architecture and Components
Diagram: CORE / CAMPUS spine-leaf — X770 and X870-96x-8c spines with 100Gb uplinks; X670-G2 leafs; high-density 10Gb, 10Gb/40Gb, and 25Gb/50Gb aggregation; X870-32c; Cisco N3K-C3064PQ shown for comparison; server PODs under the 770/870 spine.
Data Center – Private Cloud (vC-1, vC-2, … vC-N)
This is where XYZ Account must first have the ability to scale with customer demand, delivering more than just disk space and processors.
 Scale – XYZ Account must be able to seamlessly fail over, scale up, scale down, and optimize management of the applications and services.
 Flexibility - The XYZ Account infrastructure must have the ability to host heterogeneous and interoperable technologies.
 Business - The business model costs might be optimized for operating expenses or towards capital investment.
Cloud Computing (Control Plane) - who manages each layer of the stack (Applications, Data, Runtime, Middleware, O/S, Virtualization, Servers, Storage, Networking):
 On-Premise - you manage the entire stack.
 Infrastructure (as a Service) - the vendor manages Virtualization, Servers, Storage, and Networking; you manage the O/S and everything above it.
 Platform (as a Service) - the vendor manages everything except Applications and Data.
 Software (as a Service) - the vendor manages the entire stack.
Deployment models: Public, Private, MSP - delivered over the FABRIC.
This is where Azure ExpressRoute lets XYZ Account create private connections between Azure datacenters and XYZ Account infrastructure, on or off premises.
 ExpressRoute connections don't go over the public Internet. They offer more reliability, faster speeds, lower latencies, and higher security than typical Internet connections.
 Transferring data between on-premises systems and Azure can yield significant cost benefits.
 XYZ Account can establish connections to Azure at an ExpressRoute location, such as an Exchange provider facility, or connect directly to Azure from an existing WAN, such as a multi-protocol label switching (MPLS) VPN provided by a network service provider.
Microsoft Azure (Control Plane)
Cloud - The key impact of this model for the customer is a move from managing physical servers to focusing on logical management of data storage through policies.
Data Center Architecture Considerations
Diagram: client request/response traverses compute, cache, database, and storage tiers.
 80% North-South traffic. Oversubscription: up to 200:1 (client request + server response = 20% of traffic).
 Inter-rack latency: 150 microseconds. Lookup/storage = 80% of traffic.
 Scale: up to 20 racks (non-blocking 2-tier designs optimal).
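A hedged sketch of the oversubscription math, with hypothetical port counts chosen only to reproduce the 200:1 ratio and the 20% north-south share quoted above:

```python
# Sketch (illustrative): why a steep edge oversubscription can work when only
# ~20% of traffic is client request + server response and ~80% stays east-west.
client_ports, port_gbps = 2000, 1      # hypothetical client-facing edge
uplink_gbps = 10                       # hypothetical uplink to the fabric

oversub = client_ports * port_gbps / uplink_gbps
print(f"raw oversubscription: {oversub:.0f}:1")            # 200:1

north_south_share = 0.20               # from the slide: 20% of traffic
effective = oversub * north_south_share
print(f"effective north-south load factor: {effective:.0f}:1")  # 40:1
```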
Purchase "vanity free"
This is where..
Open Compute might allow companies to purchase "vanity free". Previous, outdated data center designs support more monolithic computing.
 A low-density X620 might help XYZ Account avoid stranded ports.
 Availability - Dual X620s can be deployed to minimize the impact of maintenance.
 Flexibility - The X620 can offer the flexibility to support both 1G and 10G to servers and storage.
One RACK Design
Closely
coupled
Nearly
coupled
Loosely
coupled
Shared Combo Ports
4x10GBASE-T & 4xSFP+
100Mb/1Gb/10GBASE-T
The monolithic datacenter
is dead.
Diagram: one-rack design - servers and storage cabled to redundant Summit switches plus a management switch.
Open Compute - Two Rack Design
This is where, XYZ Account can reduce OPEX and
leverage a repeatable solution.
 With the spline setup, XYZ Account can put
redundant switches in the middle and link
each server to those switches.
 Fewer Hops between Servers - The important
thing is that each server is precisely one hop
from any other server.
 Avoid stranded ports – Designs often have a mix of fat and skinny nodes. If XYZ Account deploys 48-port leaf switches, many configurations might have anywhere from 16 to 24 stranded ports; the sketch below works the arithmetic.
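A small sketch of the stranded-port arithmetic; the server and NIC counts are hypothetical examples, not XYZ Account's inventory:

```python
# Sketch (illustrative): stranded ports on a 48-port leaf when racks hold a
# mix of "fat" and "skinny" nodes.
LEAF_PORTS = 48

def stranded(servers_per_rack: int, ports_per_server: int) -> int:
    used = servers_per_rack * ports_per_server
    return max(LEAF_PORTS - used, 0)

for servers, nics in ((16, 2), (24, 1), (12, 2)):
    print(f"{servers} servers x {nics} port(s): {stranded(servers, nics)} stranded")
# Mixed configurations commonly strand 16-24 ports, as noted above.
```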
Two RACK
Diagram: two-rack design - typical spline setup with redundant Summit switches in the middle; each server is one hop from any other.
Open Compute: Eight-Rack POD Design
This is where a typical spline setup becomes an eight-rack POD.
Diagram: leaf/spine eight-rack POD - each rack with servers, storage, redundant Summit switches, and a management switch.
XYZ Account 2016 Design - Extreme Edge PoE, Extreme Core 10G (1G / 2.5G/5G / 10G / 40G). Jeff Green, 2016, Rev. 1, Florida.
Legend
Legend
PoE
802.3at (PoE+)
Cat5e
30W
30W30W
60W
UPOE
No Cabling Change from PoE+
Cat5e
NBASE-T Alliance Copper Max Distances
Cat 7 Shielded 100 m
Cat 6a Shielded 100 m
Cat 6a Unshielded 100 m
Cat 6 Shielded** 100 m
Cat 6 Unshielded** 55 m
Need Correct
UTP, Patch Panel
and Adapter.
known as IEEE 802.3bz
Greenfield - Cat 6a (2.5, 5G & 10G) 100m
Cat 6 (2.5G, 5G & 10G) 55m
Brownfield - Cat 5e (2.5&5G) 100M
Requires X620 or
X460 Switch for
Multi-rate Support
plus Client that
supports Multi-rate.
10G Passive (PN 10306 ~ 5m, 10307~ 10M)
10G SFP+ active copper cable (up to 100m)
40G Passive (PN 10321 ~3m, 10323~ 5m)
40G Active (PN 10315~10M, 10316 ~20m, 10318~ 100m)
40G Fan-out (PN 10321 ~3m, 10322 ~5m, PN 10GB-4-
F10-QSFP ~10m, PN 10GB-4-F20-QSFP ~20m, )
10G Passive (PN 10304 ~1m, 10305~3m, 10306~5m)
SFP+ DAC Cables
QSFP+ DAC Cables
10 LRM 220m (720ft/plus mode conditioning) (PN 10303)
10GBASE-T over Class E Cat 6 (55M) (10G)
10GBASE-T over Class E Cat 6a or 7 (100M) (10G)
10 SR over OM3 (300M) or OM4 (400M) (PN 10301)
10 LR over single mode (10KM) 1310nm (PN 10302)
10 ER over single mode (40KM) 1550nm (PN 10309)
10 ZR over single mode (80KM) 1550nm (PN 10310)
802.3bz 10GBASE-T (100M) for Cat 6 (5G)
10G Fiber
10G Copper
802.3bz 10GBASE-T (100M) for Cat 5e (2.5G)
OM3 50 µm (550m/SX) Laser, LC (PN 10051H)
OM1 62.5 µm (FDDI 220m/OM1), LC (PN 10051H)
OM2 62.5 µm (ATM 275m/OM2), LC (PN 10051H)
OM4 50 µm (550m/SX) 2Km, LC (PN 10051H)
1G Fiber (50 µm)
1G Fiber (62.5 µm)
Single-fiber
transmission uses
only one strand of
fiber for both
transmit and
receive (1310nm
and 1490nm for
1Gbps; 1310nm and
1550nm for
100Mbps)
LX (MMF 220 & 550m), SMF 10km, LC (PN 10052H)
ZX SMF 70km, LC (PN 10053H)
10/100/1000 (UTP to 100m) SFP (PN 1070H)
SR4 at least 100 m OM3 MMF (PN 10319)
SR4 at least 125 m OM4 MMF (PN 10319)
LR4 at least 10 km SMF, LC (PN 10320)
LM4 140m MMF or 1kM SMF, LC (PN 10334)
Optics
Optics +
Fan-out
Fiber Cable
QSFP-SFPP-ADPT – QSFP to SFP+ adapter
ER4 40km SMF, LC (PN 10335) Internal CWDM
transits four wavelengths over single fiber.
MPO to 4 x LC Fanout 10m (PN 10327) for use
with (PN 10326) MPO to 4 x LC duplex
connectors, SMF
LR4 Parallel SM, 10km SMF, MPO (PN 10326)
25/50/100G
CR10 > 10 m over copper cable (10x10 Gb/s /Twinax (7M))
SR10 > 100 m over OM3 MMF (10x10 Gb/s / Multimode (100M))
SR10 > 125 m over OM4 MMF (10x10 Gb/s/ (100M) Data Center)
LR4 > 10 km over SMF (4x25 Gb/s SMF/WDM (10km) Campus)
ER4 > 40 km over SMF (4x25 Gb/s SMF/WDM (40km) Metro)
Optics and DAC Cables
Extreme Networks will restrict the integration of non-qualified 3rd party optical devices within 40G and 100G product environments,
unless you purchase the EXOS 3rd Party 40G/100G Optics feature license to allow such integration.
Proprietary got you? Keyed optics.
ModelNumber Description
10GB-LR271-SFPP 10Gb CWDM LR, SM, Channel 1271nm, LC
10GB-LR291-SFPP 10Gb CWDM LR, SM, Channel 1291nm, LC
10GB-LR311-SFPP 10Gb CWDM LR, SM, Channel 1311nm, LC
10GB-LR331-SFPP 10Gb CWDM LR, SM, Channel 1331nm, LC
MUX-CWDM-01 4 Channel O-Band CWDM Mux/Demux
MUX-RACK-01 Rack mount kit for MUX-CWDM-01
40GB-LR4-QSFP 40Gb 40GBASE-LR4, SM 10Km, LC
CWDM
MUX-CWDM-01
DACs
Notes:
Organizing Compute, Management & Edge
Edge Leaf
L3 to DC Fabric
L2 to External Networks
Compute Clusters Infrastructure Clusters (Edge, Storage,
vCenter and Cloud Management
System)
WAN
Internet
L3
L2
L3
L2
Leaf
Spine
L2 VLANs for bridging
Single vCenter Server to manage all Management, Edge and Compute Clusters
 NSX Manager deployed in the Mgmt Cluster and paired to the vCenter Server
 NSX Controllers can also be deployed into the Management Cluster
 Reduces vCenter Server licensing requirements
Separation of compute, management, and Edge functions provides the following design advantages for managing the life-cycle of compute and Edge resources:
 Ability to isolate and develop span of control
 Capacity planning – CPU, memory & NIC
 Upgrades & migration flexibility
Automation control over areas or functions that require frequent changes: app-tier, micro-segmentation & load-balancer. Three areas of technology require consideration:
 Interaction with the physical network
 Overlay (VXLAN) impact
 Integration with vSphere clustering
Diagram: Management Cluster (vCenter Server, NSX Manager, NSX Controller) registered/mapped to Compute Clusters A and B (web VMs) and the Edge Cluster (Edge and Control VM).
Preparation Netsite Operation
Convergence 3.0 (Automation/ Seconds')
Flexibility and choice
Traditional Networking Configuration Tasks
L3
L2
Initial configuration
 Multi-chassis LAG
 Routing configuration
 SVIs/RVIs
 VRRP/HSRP
 LACP
 VLANs
Recurring configuration
 SVIs/RVIs
 VRRP/HSRP
 Advertise new subnets
 Access lists (ACLs)
 VLANs
 Adjust VLANs on trunks
 VLANs STP/MST mapping
 Add VLANs on uplinks
 Add VLANs to server port
NSX is AGNOSTIC to the underlay network: L2 or L3 or any combination.
Only TWO requirements: IP connectivity and an MTU of 1600.
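Why 1600? VXLAN adds roughly 50 bytes of outer headers (RFC 7348), so a minimal sketch of the MTU budget looks like this:

```python
# Sketch: the underlay MTU budget behind NSX's 1600-byte requirement.
OUTER_ETH, OUTER_IP4, OUTER_UDP, VXLAN_HDR = 14, 20, 8, 8
overhead = OUTER_ETH + OUTER_IP4 + OUTER_UDP + VXLAN_HDR
print(f"encapsulation overhead: {overhead} bytes")           # 50

inner_ip_mtu = 1500                 # a standard guest frame's IP payload
required = inner_ip_mtu + 14 + overhead   # + inner Ethernet header
print(f"minimum underlay MTU: {required} bytes (1600 gives headroom)")
```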
Network & Security Services in Software
WAN/Internet
L3
L2
POD A
L3
L2
POD B
VLAN X Stretch VLAN Y Stretch
L3 Topologies & Design Considerations. With XoS 670 cores, L2 interfaces can send and receive IP packets as large as 9214 bytes by default (no configuration is required). L3 interfaces can send and receive IP packets as large as 1500 bytes by default. Configuration step for L3 interfaces: change the MTU to 9214 (via the "mtu" command) so IP packets as large as 9214 bytes can be sent and received.
 L3 ToR designs run a dynamic routing protocol between leaf and spine.
 BGP, OSPF or ISIS can be used
 Each rack advertises a small set of prefixes
 (Unique VLAN/subnet per rack)
 Equal-cost paths to the other racks' prefixes.
 The switch provides default gateway service for each VLAN subnet
 802.1Q trunks with a small set of VLANs for VMkernel traffic
 Rest of the session assumes an L3 topology
L3
L2
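A minimal sketch of the "unique VLAN/subnet per rack" idea, using a hypothetical 10.0.0.0/16 block and VLAN numbering (neither comes from this design):

```python
# Sketch (hypothetical addressing): one VLAN and one /24 per rack, each
# advertised by the rack's L3 ToR, which also provides the default gateway.
import ipaddress

fabric = ipaddress.ip_network("10.0.0.0/16")
rack_prefixes = list(fabric.subnets(new_prefix=24))

for rack_id in range(4):
    vlan = 100 + rack_id                    # hypothetical per-rack VLAN
    prefix = rack_prefixes[rack_id]
    gateway = next(prefix.hosts())          # ToR SVI / default gateway
    print(f"rack {rack_id}: VLAN {vlan}, prefix {prefix}, gateway {gateway}")
```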
XYZ Account (Spine)
CORE 1 CORE 2
Preparation Netsite Operation
Convergence 3.0 (Automation/ Seconds')
Flexibility and choice
Wi-Fi, Analytics, Security, Policy
Extreme's Platform
 Lync Traffic Engineering with
Purview Analytics Service Insertion
 Multi-Tenant Networks Automation
and Orchestration
 Self-Provisioned Network Slicing
(Proof of concept Implementation)
Better Experience through simpler
solutions that deliver long term
value.
Products – one wired and wireless
platform
Customer Care – Strong 1st call
resolution
NSX Controllers Functions
Logical Router 1 – VXLAN 5000
Logical Router 2 – VXLAN 5001
Logical Router 3 – VXLAN 5002
Controller VXLAN Directory Service: MAC table, ARP table, VTEP table
This is where NSX will provide XYZ Account one control
plane to distribute network information to ESXi hosts.
NSX Controllers are clustered for scale out and high
availability.
 Network information is distributed across nodes in a
Controller Cluster (slicing)
 Remove the VXLAN dependency on multicast
routing/PIM in the physical network
 Provide suppression of ARP broadcast traffic in
VXLAN networks
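A conceptual sketch of slicing: logical networks are spread across controller nodes so each node owns part of the MAC/ARP/VTEP state. The modulo assignment below is an illustration, not the actual NSX election algorithm:

```python
# Sketch (conceptual): distributing logical-network state across a
# controller cluster ("slicing") so the cluster scales out and survives
# a node failure.
CONTROLLERS = ["controller-1", "controller-2", "controller-3"]

def owner(vni: int) -> str:
    # Illustrative hash-based assignment of a logical switch to a node.
    return CONTROLLERS[vni % len(CONTROLLERS)]

for vni in (5000, 5001, 5002):
    print(f"VXLAN {vni} (MAC/ARP/VTEP tables) -> {owner(vni)}")
```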
SERVER FARM (Leafs)
Diagram: compute workload racks (servers, storage, redundant Summit switches, management switch) plus services-and-connectivity racks (media servers, routers, firewalls, PBXs).
Diagram: VXLAN transport network - Host 1 (VTEP2, 10.20.10.11) with VMs MAC1/MAC2 and Host 2 (VTEP3, 10.20.10.12; VTEP4, 10.20.10.13) with VMs MAC3/MAC4 on VXLAN 5002, each behind a vSphere Distributed Switch.
When deployed, VXLAN creates an automatic port-group whose VLAN ID must be the same per VDS. If the fabric is L2, this usually means the same IP subnets are also used across racks for a given type of traffic. For a given host, only one VDS is responsible for VXLAN traffic. A single VDS can span multiple clusters. (Transport Zone, VTEP, logical networks, and VDS.)
The VTEP VMkernel interface belongs to a specific VLAN-backed port-group dynamically created during the cluster VXLAN preparation.
 One or more VDS can be part of the same TZ
 A given Logical Switch can span multiple VDS. vSphere Host (ESXi)
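For reference, the 8-byte VXLAN header a VTEP prepends (per RFC 7348) can be sketched as:

```python
# Sketch: packing the RFC 7348 VXLAN header — 8-bit flags (I-flag set when
# the VNI is valid), 24 reserved bits, 24-bit VNI, 8 reserved bits.
import struct

def vxlan_header(vni: int) -> bytes:
    flags = 0x08                                # I-flag: VNI field is valid
    return struct.pack("!II", flags << 24, vni << 8)

hdr = vxlan_header(5002)                        # the logical switch shown above
print(hdr.hex())                                # 0800000000138a00
```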
L3 ToR Switch - routed uplinks (ECMP), VLAN trunk (802.1Q), span of VLANs:
VLAN 66 Mgmt: 10.66.1.25/26, DGW 10.66.1.1
VLAN 77 vMotion: 10.77.1.25/26, GW 10.77.1.1
VLAN 88 VXLAN: 10.88.1.25/26, DGW 10.88.1.1
VLAN 99 Storage: 10.99.1.25/26, GW 10.99.1.1
SVI 66: 10.66.1.1/26, SVI 77: 10.77.1.1/26, SVI 88: 10.88.1.1/26, SVI 99: 10.99.1.1/26
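A quick sanity check of the addressing table above, assuming each gateway should fall inside its VLAN's /26:

```python
# Sketch: validating the per-VLAN host/gateway plan from the table above.
import ipaddress

plan = {                      # VLAN: (host interface, gateway / SVI address)
    66: ("10.66.1.25/26", "10.66.1.1"),
    77: ("10.77.1.25/26", "10.77.1.1"),
    88: ("10.88.1.25/26", "10.88.1.1"),
    99: ("10.99.1.25/26", "10.99.1.1"),
}
for vlan, (host, gw) in plan.items():
    iface = ipaddress.ip_interface(host)
    ok = ipaddress.ip_address(gw) in iface.network
    print(f"VLAN {vlan}: {host} gateway {gw} in-subnet={ok}")
```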
Traditional control
XYZ Account Services / User Repositories or Corporate Control: LDAP, NAC, DHCP, RADIUS, Captive Portal, DNS, MDM
NAC
Analytics
Netsite
Management Cluster (Control)
Cloud Based control
Diagram: single-rack vs. dual-rack connectivity - leaf switches at the L2/L3 boundary, 802.1Q trunks carrying VMkernel VLANs and VLANs for management VMs into the routed DC fabric.
Extreme VMware Deployment Considerations – This is where the management cluster is typically provisioned on a single rack.
 The single-rack design still requires redundant uplinks from host to ToR carrying VLANs for management.
 A dual-rack design offers increased resiliency (handling single-rack failure scenarios), which could be a requirement for a highly available design.
 Typically, in a small design, the management and Edge clusters are collapsed. Exclude the management cluster from VXLAN preparation.
Diagram: ToR #1 and ToR #2 serving NSX Manager, Controllers 1-3, and the vCenter Server.
NSX Manager is deployed as a virtual appliance: 4 vCPU, 12 GB of RAM per node. Consider reserving memory for VC to ensure good Web Client performance. Its configuration cannot be modified.
Extreme Networks
Compute, Storage Networking
Integration...
Extreme Networks
Control, Analytics & Security
Integration...
 Identify design principles and implementation strategies. Start from service requirements and leverage standardization (design should be driven by today's and tomorrow's service requirements).
 Standardization limits technical and operational complexity and related costs. Develop a reference model based on principles (principles enable consistent choices in the long run).
 Leverage best practices and proven expertise. Streamline your capability to execute and your operational effectiveness (unleash capabilities provided by enabling technologies).
Virtual Router 1 (VoIP) - Virtualized services for application delivery
Virtual Router 1 (Oracle) - Virtualized services for application delivery
Virtual Router 1 (Wireless Lan) - Virtualized services for application delivery
Virtual Router 1 (PACs) - Virtualized services for application delivery
Reference architecture savings: fewer assets/ports, lower maintenance and operational costs, next-generation operations, pay as you go.
Data center Network as a Service (NaaS)
Diagram: a Management Cluster (Management VC, NSX Manager VM-A and VM-B, vCenter Servers for NSX Domains A and B) serving multiple pods, each with a Compute Cluster (Compute A/B with web VMs) and an Edge Cluster (NSX Controller, Edge and Control VM).
Multiple vCenters Design – XYZ Account design with multiple NSX domains...
 Following VMware best practice, the Management Cluster is managed by a dedicated vCenter Server (Mgmt VC); a separate vCenter Server in the Management Cluster manages the Edge and Compute Clusters.
 NSX Manager is also deployed into the Management Cluster and paired with this second vCenter Server. Multiple NSX Manager/vCenter Server pairs (separate NSX domains) can be deployed.
 NSX Controllers must be deployed into the same vCenter Server the NSX Manager is attached to; therefore the Controllers are usually also deployed into the Edge Cluster.
XYZ Account (Primary) and XYZ Account (DR Site) - Preparation / Netsite / Operation / Logical Switch
Diagram: CORE 1 and CORE 2 at each site above the SERVER FARM (Leafs) - compute workload racks (servers, storage, redundant Summit switches, management switch) plus services-and-connectivity racks (media servers, routers, firewalls, PBXs).
Logical Router 1 – VXLAN 5000; Logical Router 2 – VXLAN 5001; Logical Router 3 – VXLAN 5002
Controller VXLAN Directory Service: MAC table, ARP table, VTEP table
Diagram: ToR #1 and ToR #2 serving NSX Manager, Controllers 1-3, and the vCenter Server.
Diagram: traffic engineering “like ATM or MPLS” - VM-to-VM traffic carried in UDP (start/stop) between VTEPs over the existing IP network.
XYZ Account NSX Transport Zone: a collection of VXLAN-prepared ESXi clusters.
 Normally a TZ defines the span of Logical Switches (Layer 2 communication domains). A given Logical Switch can span multiple VDS.
 A VTEP (VXLAN Tunnel EndPoint) is a logical interface (VMkernel) that connects to the TZ to encapsulate/decapsulate VXLAN traffic. One or more VDS can be part of the same TZ.
 The VTEP VMkernel interface belongs to a specific VLAN-backed port-group dynamically created during the cluster VXLAN preparation.
Overlay Considerations? Ethernet Virtual Interconnect (EVI) can be deployed for active/active DCs over any network. This is where careful attention is required: the different data plane (an additional header) makes jumbo frames a must-have, and it will continue to evolve.
 Scalability beyond the 802.1Q VLAN limitation, to 16M services/tenants
 L2 extension, with VXLAN as the de-facto solution from VMware. Standardization around the control plane is still a work in progress (even if BGP EVPNs are here)
 Encapsulation over IP delivers the ability to cross L3 boundaries. As a result, the design above becomes one big L3 domain with L2 processing. EVI provides additional benefits such as:
 Transport agnostic
 Up to 16 Active/Active DCs
 Active/Active VRRP default gateways for VMs
 STP outages remain local to each DC
 Improves WAN utilization by dropping unknown frames and providing ARP suppression
EVI tunnel
Physical Underlay Network
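The 16M figure falls out of VXLAN's 24-bit VNI; a short sketch makes the comparison with 802.1Q concrete:

```python
# Sketch: segment counts — 802.1Q's 12-bit VLAN ID vs. VXLAN's 24-bit VNI.
vlan_ids = 2 ** 12            # 4,096
vnis = 2 ** 24                # 16,777,216

print(f"802.1Q VLANs: {vlan_ids:,}")
print(f"VXLAN VNIs:   {vnis:,} (~{vnis // vlan_ids}x more segments)")
```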
Where is the 6 GHz beef?
Jeff Green
 
The next generation ethernet gangster (part 3)
The next generation ethernet gangster (part 3)The next generation ethernet gangster (part 3)
The next generation ethernet gangster (part 3)
Jeff Green
 
The next generation ethernet gangster (part 2)
The next generation ethernet gangster (part 2)The next generation ethernet gangster (part 2)
The next generation ethernet gangster (part 2)
Jeff Green
 
The next generation ethernet gangster (part 1)
The next generation ethernet gangster (part 1)The next generation ethernet gangster (part 1)
The next generation ethernet gangster (part 1)
Jeff Green
 
The next generation ethernet gangster (part 3)
The next generation ethernet gangster (part 3)The next generation ethernet gangster (part 3)
The next generation ethernet gangster (part 3)
Jeff Green
 
The next generation ethernet gangster (part 2)
The next generation ethernet gangster (part 2)The next generation ethernet gangster (part 2)
The next generation ethernet gangster (part 2)
Jeff Green
 
Places in the network (featuring policy)
Places in the network (featuring policy)Places in the network (featuring policy)
Places in the network (featuring policy)
Jeff Green
 
Fortinet ngf w extreme policy
Fortinet ngf w extreme policyFortinet ngf w extreme policy
Fortinet ngf w extreme policy
Jeff Green
 
Multi fabric sales motions jg v3
Multi fabric sales motions jg v3Multi fabric sales motions jg v3
Multi fabric sales motions jg v3
Jeff Green
 
Audio video ethernet (avb cobra net dante)
Audio video ethernet (avb cobra net dante)Audio video ethernet (avb cobra net dante)
Audio video ethernet (avb cobra net dante)
Jeff Green
 
Avb pov 2017 v2
Avb pov 2017 v2Avb pov 2017 v2
Avb pov 2017 v2
Jeff Green
 
10.) vxlan
10.) vxlan10.) vxlan
10.) vxlan
Jeff Green
 
20.) physical (optics copper and power)
20.) physical (optics copper and power)20.) physical (optics copper and power)
20.) physical (optics copper and power)
Jeff Green
 
19.) security pivot (policy byod nac)
19.) security pivot (policy byod nac)19.) security pivot (policy byod nac)
19.) security pivot (policy byod nac)
Jeff Green
 
17.) layer 3 (advanced tcp ip routing)
17.) layer 3 (advanced tcp ip routing)17.) layer 3 (advanced tcp ip routing)
17.) layer 3 (advanced tcp ip routing)
Jeff Green
 
16.) layer 3 (basic tcp ip routing)
16.) layer 3 (basic tcp ip routing)16.) layer 3 (basic tcp ip routing)
16.) layer 3 (basic tcp ip routing)
Jeff Green
 
13.) analytics (user experience)
13.) analytics (user experience)13.) analytics (user experience)
13.) analytics (user experience)
Jeff Green
 

More from Jeff Green (20)

Where is the beef with 6 e
Where is the beef with 6 eWhere is the beef with 6 e
Where is the beef with 6 e
 
Where is the beef
Where is the beefWhere is the beef
Where is the beef
 
6 e security
6 e security6 e security
6 e security
 
Where is the 6 GHz beef?
Where is the 6 GHz beef?Where is the 6 GHz beef?
Where is the 6 GHz beef?
 
The next generation ethernet gangster (part 3)
The next generation ethernet gangster (part 3)The next generation ethernet gangster (part 3)
The next generation ethernet gangster (part 3)
 
The next generation ethernet gangster (part 2)
The next generation ethernet gangster (part 2)The next generation ethernet gangster (part 2)
The next generation ethernet gangster (part 2)
 
The next generation ethernet gangster (part 1)
The next generation ethernet gangster (part 1)The next generation ethernet gangster (part 1)
The next generation ethernet gangster (part 1)
 
The next generation ethernet gangster (part 3)
The next generation ethernet gangster (part 3)The next generation ethernet gangster (part 3)
The next generation ethernet gangster (part 3)
 
The next generation ethernet gangster (part 2)
The next generation ethernet gangster (part 2)The next generation ethernet gangster (part 2)
The next generation ethernet gangster (part 2)
 
Places in the network (featuring policy)
Places in the network (featuring policy)Places in the network (featuring policy)
Places in the network (featuring policy)
 
Fortinet ngf w extreme policy
Fortinet ngf w extreme policyFortinet ngf w extreme policy
Fortinet ngf w extreme policy
 
Multi fabric sales motions jg v3
Multi fabric sales motions jg v3Multi fabric sales motions jg v3
Multi fabric sales motions jg v3
 
Audio video ethernet (avb cobra net dante)
Audio video ethernet (avb cobra net dante)Audio video ethernet (avb cobra net dante)
Audio video ethernet (avb cobra net dante)
 
Avb pov 2017 v2
Avb pov 2017 v2Avb pov 2017 v2
Avb pov 2017 v2
 
10.) vxlan
10.) vxlan10.) vxlan
10.) vxlan
 
20.) physical (optics copper and power)
20.) physical (optics copper and power)20.) physical (optics copper and power)
20.) physical (optics copper and power)
 
19.) security pivot (policy byod nac)
19.) security pivot (policy byod nac)19.) security pivot (policy byod nac)
19.) security pivot (policy byod nac)
 
17.) layer 3 (advanced tcp ip routing)
17.) layer 3 (advanced tcp ip routing)17.) layer 3 (advanced tcp ip routing)
17.) layer 3 (advanced tcp ip routing)
 
16.) layer 3 (basic tcp ip routing)
16.) layer 3 (basic tcp ip routing)16.) layer 3 (basic tcp ip routing)
16.) layer 3 (basic tcp ip routing)
 
13.) analytics (user experience)
13.) analytics (user experience)13.) analytics (user experience)
13.) analytics (user experience)
 

Recently uploaded

Latest trends in computer networking.pptx
Latest trends in computer networking.pptxLatest trends in computer networking.pptx
Latest trends in computer networking.pptx
JungkooksNonexistent
 
BASIC C++ lecture NOTE C++ lecture 3.pptx
BASIC C++ lecture NOTE C++ lecture 3.pptxBASIC C++ lecture NOTE C++ lecture 3.pptx
BASIC C++ lecture NOTE C++ lecture 3.pptx
natyesu
 
How to Use Contact Form 7 Like a Pro.pptx
How to Use Contact Form 7 Like a Pro.pptxHow to Use Contact Form 7 Like a Pro.pptx
How to Use Contact Form 7 Like a Pro.pptx
Gal Baras
 
一比一原版(SLU毕业证)圣路易斯大学毕业证成绩单专业办理
一比一原版(SLU毕业证)圣路易斯大学毕业证成绩单专业办理一比一原版(SLU毕业证)圣路易斯大学毕业证成绩单专业办理
一比一原版(SLU毕业证)圣路易斯大学毕业证成绩单专业办理
keoku
 
Multi-cluster Kubernetes Networking- Patterns, Projects and Guidelines
Multi-cluster Kubernetes Networking- Patterns, Projects and GuidelinesMulti-cluster Kubernetes Networking- Patterns, Projects and Guidelines
Multi-cluster Kubernetes Networking- Patterns, Projects and Guidelines
Sanjeev Rampal
 
1比1复刻(bath毕业证书)英国巴斯大学毕业证学位证原版一模一样
1比1复刻(bath毕业证书)英国巴斯大学毕业证学位证原版一模一样1比1复刻(bath毕业证书)英国巴斯大学毕业证学位证原版一模一样
1比1复刻(bath毕业证书)英国巴斯大学毕业证学位证原版一模一样
3ipehhoa
 
1.Wireless Communication System_Wireless communication is a broad term that i...
1.Wireless Communication System_Wireless communication is a broad term that i...1.Wireless Communication System_Wireless communication is a broad term that i...
1.Wireless Communication System_Wireless communication is a broad term that i...
JeyaPerumal1
 
test test test test testtest test testtest test testtest test testtest test ...
test test  test test testtest test testtest test testtest test testtest test ...test test  test test testtest test testtest test testtest test testtest test ...
test test test test testtest test testtest test testtest test testtest test ...
Arif0071
 
一比一原版(LBS毕业证)伦敦商学院毕业证成绩单专业办理
一比一原版(LBS毕业证)伦敦商学院毕业证成绩单专业办理一比一原版(LBS毕业证)伦敦商学院毕业证成绩单专业办理
一比一原版(LBS毕业证)伦敦商学院毕业证成绩单专业办理
eutxy
 
The+Prospects+of+E-Commerce+in+China.pptx
The+Prospects+of+E-Commerce+in+China.pptxThe+Prospects+of+E-Commerce+in+China.pptx
The+Prospects+of+E-Commerce+in+China.pptx
laozhuseo02
 
原版仿制(uob毕业证书)英国伯明翰大学毕业证本科学历证书原版一模一样
原版仿制(uob毕业证书)英国伯明翰大学毕业证本科学历证书原版一模一样原版仿制(uob毕业证书)英国伯明翰大学毕业证本科学历证书原版一模一样
原版仿制(uob毕业证书)英国伯明翰大学毕业证本科学历证书原版一模一样
3ipehhoa
 
This 7-second Brain Wave Ritual Attracts Money To You.!
This 7-second Brain Wave Ritual Attracts Money To You.!This 7-second Brain Wave Ritual Attracts Money To You.!
This 7-second Brain Wave Ritual Attracts Money To You.!
nirahealhty
 
JAVIER LASA-EXPERIENCIA digital 1986-2024.pdf
JAVIER LASA-EXPERIENCIA digital 1986-2024.pdfJAVIER LASA-EXPERIENCIA digital 1986-2024.pdf
JAVIER LASA-EXPERIENCIA digital 1986-2024.pdf
Javier Lasa
 
急速办(bedfordhire毕业证书)英国贝德福特大学毕业证成绩单原版一模一样
急速办(bedfordhire毕业证书)英国贝德福特大学毕业证成绩单原版一模一样急速办(bedfordhire毕业证书)英国贝德福特大学毕业证成绩单原版一模一样
急速办(bedfordhire毕业证书)英国贝德福特大学毕业证成绩单原版一模一样
3ipehhoa
 
guildmasters guide to ravnica Dungeons & Dragons 5...
guildmasters guide to ravnica Dungeons & Dragons 5...guildmasters guide to ravnica Dungeons & Dragons 5...
guildmasters guide to ravnica Dungeons & Dragons 5...
Rogerio Filho
 
Comptia N+ Standard Networking lesson guide
Comptia N+ Standard Networking lesson guideComptia N+ Standard Networking lesson guide
Comptia N+ Standard Networking lesson guide
GTProductions1
 
Bridging the Digital Gap Brad Spiegel Macon, GA Initiative.pptx
Bridging the Digital Gap Brad Spiegel Macon, GA Initiative.pptxBridging the Digital Gap Brad Spiegel Macon, GA Initiative.pptx
Bridging the Digital Gap Brad Spiegel Macon, GA Initiative.pptx
Brad Spiegel Macon GA
 
APNIC Foundation, presented by Ellisha Heppner at the PNG DNS Forum 2024
APNIC Foundation, presented by Ellisha Heppner at the PNG DNS Forum 2024APNIC Foundation, presented by Ellisha Heppner at the PNG DNS Forum 2024
APNIC Foundation, presented by Ellisha Heppner at the PNG DNS Forum 2024
APNIC
 
History+of+E-commerce+Development+in+China-www.cfye-commerce.shop
History+of+E-commerce+Development+in+China-www.cfye-commerce.shopHistory+of+E-commerce+Development+in+China-www.cfye-commerce.shop
History+of+E-commerce+Development+in+China-www.cfye-commerce.shop
laozhuseo02
 
一比一原版(CSU毕业证)加利福尼亚州立大学毕业证成绩单专业办理
一比一原版(CSU毕业证)加利福尼亚州立大学毕业证成绩单专业办理一比一原版(CSU毕业证)加利福尼亚州立大学毕业证成绩单专业办理
一比一原版(CSU毕业证)加利福尼亚州立大学毕业证成绩单专业办理
ufdana
 

Recently uploaded (20)

Latest trends in computer networking.pptx
Latest trends in computer networking.pptxLatest trends in computer networking.pptx
Latest trends in computer networking.pptx
 
BASIC C++ lecture NOTE C++ lecture 3.pptx
BASIC C++ lecture NOTE C++ lecture 3.pptxBASIC C++ lecture NOTE C++ lecture 3.pptx
BASIC C++ lecture NOTE C++ lecture 3.pptx
 
How to Use Contact Form 7 Like a Pro.pptx
How to Use Contact Form 7 Like a Pro.pptxHow to Use Contact Form 7 Like a Pro.pptx
How to Use Contact Form 7 Like a Pro.pptx
 
一比一原版(SLU毕业证)圣路易斯大学毕业证成绩单专业办理
一比一原版(SLU毕业证)圣路易斯大学毕业证成绩单专业办理一比一原版(SLU毕业证)圣路易斯大学毕业证成绩单专业办理
一比一原版(SLU毕业证)圣路易斯大学毕业证成绩单专业办理
 
Multi-cluster Kubernetes Networking- Patterns, Projects and Guidelines
Multi-cluster Kubernetes Networking- Patterns, Projects and GuidelinesMulti-cluster Kubernetes Networking- Patterns, Projects and Guidelines
Multi-cluster Kubernetes Networking- Patterns, Projects and Guidelines
 
1比1复刻(bath毕业证书)英国巴斯大学毕业证学位证原版一模一样
1比1复刻(bath毕业证书)英国巴斯大学毕业证学位证原版一模一样1比1复刻(bath毕业证书)英国巴斯大学毕业证学位证原版一模一样
1比1复刻(bath毕业证书)英国巴斯大学毕业证学位证原版一模一样
 
1.Wireless Communication System_Wireless communication is a broad term that i...
1.Wireless Communication System_Wireless communication is a broad term that i...1.Wireless Communication System_Wireless communication is a broad term that i...
1.Wireless Communication System_Wireless communication is a broad term that i...
 
test test test test testtest test testtest test testtest test testtest test ...
test test  test test testtest test testtest test testtest test testtest test ...test test  test test testtest test testtest test testtest test testtest test ...
test test test test testtest test testtest test testtest test testtest test ...
 
一比一原版(LBS毕业证)伦敦商学院毕业证成绩单专业办理
一比一原版(LBS毕业证)伦敦商学院毕业证成绩单专业办理一比一原版(LBS毕业证)伦敦商学院毕业证成绩单专业办理
一比一原版(LBS毕业证)伦敦商学院毕业证成绩单专业办理
 
The+Prospects+of+E-Commerce+in+China.pptx
The+Prospects+of+E-Commerce+in+China.pptxThe+Prospects+of+E-Commerce+in+China.pptx
The+Prospects+of+E-Commerce+in+China.pptx
 
原版仿制(uob毕业证书)英国伯明翰大学毕业证本科学历证书原版一模一样
原版仿制(uob毕业证书)英国伯明翰大学毕业证本科学历证书原版一模一样原版仿制(uob毕业证书)英国伯明翰大学毕业证本科学历证书原版一模一样
原版仿制(uob毕业证书)英国伯明翰大学毕业证本科学历证书原版一模一样
 
This 7-second Brain Wave Ritual Attracts Money To You.!
This 7-second Brain Wave Ritual Attracts Money To You.!This 7-second Brain Wave Ritual Attracts Money To You.!
This 7-second Brain Wave Ritual Attracts Money To You.!
 
JAVIER LASA-EXPERIENCIA digital 1986-2024.pdf
JAVIER LASA-EXPERIENCIA digital 1986-2024.pdfJAVIER LASA-EXPERIENCIA digital 1986-2024.pdf
JAVIER LASA-EXPERIENCIA digital 1986-2024.pdf
 
急速办(bedfordhire毕业证书)英国贝德福特大学毕业证成绩单原版一模一样
急速办(bedfordhire毕业证书)英国贝德福特大学毕业证成绩单原版一模一样急速办(bedfordhire毕业证书)英国贝德福特大学毕业证成绩单原版一模一样
急速办(bedfordhire毕业证书)英国贝德福特大学毕业证成绩单原版一模一样
 
guildmasters guide to ravnica Dungeons & Dragons 5...
guildmasters guide to ravnica Dungeons & Dragons 5...guildmasters guide to ravnica Dungeons & Dragons 5...
guildmasters guide to ravnica Dungeons & Dragons 5...
 
Comptia N+ Standard Networking lesson guide
Comptia N+ Standard Networking lesson guideComptia N+ Standard Networking lesson guide
Comptia N+ Standard Networking lesson guide
 
Bridging the Digital Gap Brad Spiegel Macon, GA Initiative.pptx
Bridging the Digital Gap Brad Spiegel Macon, GA Initiative.pptxBridging the Digital Gap Brad Spiegel Macon, GA Initiative.pptx
Bridging the Digital Gap Brad Spiegel Macon, GA Initiative.pptx
 
APNIC Foundation, presented by Ellisha Heppner at the PNG DNS Forum 2024
APNIC Foundation, presented by Ellisha Heppner at the PNG DNS Forum 2024APNIC Foundation, presented by Ellisha Heppner at the PNG DNS Forum 2024
APNIC Foundation, presented by Ellisha Heppner at the PNG DNS Forum 2024
 
History+of+E-commerce+Development+in+China-www.cfye-commerce.shop
History+of+E-commerce+Development+in+China-www.cfye-commerce.shopHistory+of+E-commerce+Development+in+China-www.cfye-commerce.shop
History+of+E-commerce+Development+in+China-www.cfye-commerce.shop
 
一比一原版(CSU毕业证)加利福尼亚州立大学毕业证成绩单专业办理
一比一原版(CSU毕业证)加利福尼亚州立大学毕业证成绩单专业办理一比一原版(CSU毕业证)加利福尼亚州立大学毕业证成绩单专业办理
一比一原版(CSU毕业证)加利福尼亚州立大学毕业证成绩单专业办理
 

Data center network reference pov jeff green 2016 v2

Spectrum Analysis for Interference Detection
 Real-time Spectrograms.
 Proactive detection of application-impacting interference.
Visualize RF Coverage
 Real-time RF visualizations.
 Proactive monitoring and alerting of coverage problems.
AirDefense (ADSP) for faster root-cause forensic analysis for SECURITY & COMPLIANCE: event sequence, classify interference sources, side-by-side comparative analysis.

Application Experience (Full Context App Analytics)
Stop the finger-pointing: Application Network Response.
 Flow or Bit Bucket Collector: 3 million flows, scaling to 6 million flows.
 Sensors: X460 with IPFIX, 4,000 flows (2,048 ingress, 2,048 egress); PV-FC-180, S or K Series (CoreFlow2, 1 million flows).
 Flow-based access points: from the controller, 8K flows per AP (or 24K flows on a C35).
Why not do this in the network? Business value: Context, BW, IP, HTTP://, Apps.
Platform, Automation, Control, Experience: the solution framework. Is your network faster today than it was 3 years ago? Going forward it should deliver more, faster, different.

X430-G2 (L2 - 1G to 10G) PoE
 Distribute content from a single source to hundreds of displays.
 Ethernet as a Utility (PoE): injectors up to 75 Watts.

XYZ Account Data Center: Chassis v. Spline
Proven value with the legacy chassis approach (Fabric Modules as Spine, I/O Modules as Leaf):
 Cannot access line cards.
 No L2/L3 recovery inside.
 No access to the fabric.
Disaggregated value...
 Control top-of-rack switches.
 L2/L3 protocols inside the Spline.
 Full access to spine switches.

No EGO, complexity or vendor lock-in: Fat-Tree v. Clos / Cross-Bar
Fat-Tree:
 Traditional 3-tier model (less cabling).
 Link speeds must increase at every hop (less predictable latency).
 Common in chassis-based architectures (optimized for north/south traffic).
Clos / Cross-Bar:
 Every leaf is connected to every spine (efficient utilization, very predictable latency).
 Always two hops to any leaf (more resiliency, flexibility and performance; see the path-count sketch below).
 Friendlier to east/west traffic (the uplink to the rest of the network is just another leaf).
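Why "always two hops" matters can be made concrete. A minimal sketch, assuming a 2-tier fabric where every leaf has one uplink to every spine; the spine count, leaf count and uplink speed below are hypothetical, not from the deck:

```python
# Sketch: a 2-tier Clos gives exactly one equal-cost 2-hop path
# (leaf -> spine -> leaf) per spine, so latency is uniform and
# leaf-to-leaf capacity scales linearly with the spine count.

def clos_paths(num_spines: int, uplink_gbps: float):
    paths_between_leaves = num_spines              # ECMP fan-out
    leaf_to_leaf_gbps = num_spines * uplink_gbps   # aggregate capacity
    return paths_between_leaves, leaf_to_leaf_gbps

paths, capacity = clos_paths(num_spines=4, uplink_gbps=40)
print(f"{paths} equal-cost 2-hop paths, {capacity} Gb/s leaf-to-leaf")
```

Adding a spine adds both a path and a unit of capacity, which is the "very predictable latency / efficient utilization" claim in list form.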
ONE to ONE: Spine Leaf. This is where a fabric outperforms the Big Uglies.
The XYZ Account handshake (leaf) layer:
 This is where convergence needs to happen: LAN/SAN, FCoE, ETS. Stop or allow whatever you can (efficient multicasting).
 Virtualization happens with VXLAN and vMotion (control by the overlay).
 N-plus-one fabric design needs to happen here (delivers simple, no-vanity future-proofing, no-forklift migrations, interop between vendors and hitless operation).
The XYZ Account Ethernet Expressway (spine) layer: deliver massive scale...
 This is where low latency is critical: switch as quickly as you can. DO NOT slow down the core; keep it simple (disaggregated Spline plus one Big Ugly).
 Elastic capacity: today's XYZ Account spines are tomorrow's leafs. Dial in the bandwidth to your specific needs with the number of uplinks (see the oversubscription sketch below).
 Availability: the state of the network is kept in each switch, so there is no single point of failure. Seamless XYZ Account upgrades; easy to take a single switch out of service.
(Cloud Fabric) Disaggregation. Legacy challenges: complex, slow, expensive; scale-up and scale-out; vendor lock-in; proprietary hardware and software versus commodity.
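"Dial in the bandwidth with the number of uplinks" is just a ratio. A minimal sketch with assumed port counts and speeds, not deck-specified numbers:

```python
# Sketch: leaf oversubscription = server-facing bandwidth / uplink
# bandwidth; 1.0 means fully non-blocking. Adding uplinks dials it down.

def leaf_oversubscription(server_ports: int, server_gbps: float,
                          uplinks: int, uplink_gbps: float) -> float:
    south = server_ports * server_gbps   # toward the servers
    north = uplinks * uplink_gbps        # toward the spine
    return south / north

print(leaf_oversubscription(48, 10, 4, 40))  # 3.0 -> 3:1
print(leaf_oversubscription(48, 10, 6, 40))  # 2.0 -> 2:1 with two more uplinks
```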
Active-Active Redundancy fn(x,y,z)
The next convergence will be collapsing datacenter designs into smaller, elastic form factors for compute, storage and networking.
 This is where you can never have enough.
 Customers want scale made easy.
 Hypervisor integration with cloud simplicity.

Start Small; Scale as You Grow
This is where you can simply add Extreme Leaf Clusters:
 Each cluster is independent (including servers, storage, database and interconnects).
 Each cluster can be used for a different type of service.
 Delivers a repeatable design which can be added as a commodity.
XYZ Account Spine Leaf: cluster by cluster, egress scale, ingress Active/Active.

This is where: VXLAN (Route Distribution)
Why VXLAN? It flattens the network to a single tier from the XYZ Account end-station perspective.
 All IP/BGP based (Virtual eXtensible Local Area Network). Host route distribution is decoupled from the underlay protocol.
 VXLAN's goal is to allow dynamic, large-scale, isolated virtual L2 networks to be created for virtualized and multi-tenant environments.
 BGP Route-Reflectors (iBGP adjacency to the RRs) are deployed for scaling purposes: easy setup, small configuration (see the session-count sketch below).
 Traffic-engineer "like ATM or MPLS" while using the existing IP network: VM-to-VM traffic rides UDP between VTEPs.
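The "small configuration" claim for route-reflectors is standard iBGP arithmetic, sketched here with hypothetical router counts:

```python
# Sketch: an iBGP full mesh needs n*(n-1)/2 sessions; with route-
# reflectors each client peers only with the RRs (plus the RR mesh).

def ibgp_sessions(routers: int, reflectors: int = 0) -> int:
    if reflectors == 0:
        return routers * (routers - 1) // 2          # full mesh
    clients = routers - reflectors
    return clients * reflectors + reflectors * (reflectors - 1) // 2

print(ibgp_sessions(40))                 # 780 sessions, full mesh
print(ibgp_sessions(40, reflectors=2))   # 77 sessions with 2 RRs
```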
Dense 10GbE Interconnect (Intel, Facebook, OCP)
Dense 10GbE interconnect using breakout cables, copper or fiber; applications 1-3 spread across the VM pool.
Facebook 4-Post Architecture:
 Each leaf or rack switch has up to 48 10G downlinks. Segmentation or multi-tenancy without routers.
 Each leaf has 4 uplinks, one to each spine (4:1 oversubscription).
 Enable insertion of services without sprawl (analytics for fabric and application forensics).
 No routers at the spine: one spine failure reduces cluster capacity to 75% (worked below).
(5 S's) Needs to be Scalable, Secure, Shared, Standardized, and Simplified.
Network (Fit)
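The 75% figure falls straight out of the four-spine layout; a minimal sketch of the arithmetic, with the 4:1 example bandwidths chosen only to illustrate the ratio:

```python
# Sketch: with one uplink per spine per leaf and no routers at the
# spine, losing a spine removes exactly that fraction of the fabric.

def surviving_capacity(spines: int, failed: int) -> float:
    return (spines - failed) / spines

print(surviving_capacity(spines=4, failed=1))   # 0.75 -> "75%" as stated

def oversubscription(down_gbps: float, up_gbps: float) -> float:
    return down_gbps / up_gbps

# 4:1 means 4x as much server-facing bandwidth as uplink bandwidth,
# e.g. (hypothetically) 480G of downlinks against 120G of uplinks.
print(oversubscription(480, 120))               # 4.0
```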
Overlay Control: VMware NSX (Control Plane)
The XYZ Account VXLAN forwarding plane for NSX control:
 This is where logical switches span across physical hosts and network switches. Application continuity is delivered with scale. Scalable multi-tenancy across the data center.
 Enabling L2 over L3 infrastructure: pool resources from multiple data centers, with the ability to recover from disasters faster.
 Address network sprawl with a VXLAN overlay (the encapsulation is sketched below). Deeper integration with infrastructure and operations partners, integrations, and frameworks for IT organizations.
NSX Architecture and Components: the management plane is delivered by the NSX Manager; the control-plane NSX Controller manages logical networks and data plane resources. Extreme delivers an open, high-performance data plane with scale.
[Figure: core/campus spine-leaf topology built from X870-32c spines and stacked edge switches; the per-port faceplate text from the drawing is omitted.]
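What the overlay actually adds on the wire is small. A minimal sketch of the VXLAN header per RFC 7348 (generic, not NSX-specific code):

```python
# Sketch: VXLAN wraps a tenant L2 frame in UDP with an 8-byte header
# carrying a 24-bit VNI (the virtual network identifier).
import struct

def vxlan_header(vni: int) -> bytes:
    """Flags byte 0x08 marks a valid VNI; the VNI occupies the top
    24 bits of the second 32-bit word, low 8 bits reserved."""
    assert 0 <= vni < 2**24
    return struct.pack("!II", 0x08 << 24, vni << 8)

print(vxlan_header(5001).hex())   # 0800000000138900
```

The 24-bit VNI is where the "dynamic, large-scale, isolated virtual L2 networks" come from: ~16 million segments versus 4,094 VLANs.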
[Figure: Data Center - Private Cloud. Server PODs hang off a 770/870 spine hosting clusters vC-1, vC-2 … vC-N. Aggregation tiers shown: 10Gb and high-density 10Gb aggregation, 10Gb/40Gb aggregation, and high-density 25Gb/50Gb aggregation, built from the X670-G2, X770 and X870-96x-8c with 100Gb uplinks. An oversubscription sketch for the 96x-8c follows.]
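A minimal sketch of what "high density aggregation with 100Gb uplinks" buys, assuming the X870-96x-8c naming means 96 x 10G ports plus 8 x 100G uplinks (an assumption; verify against the datasheet):

```python
# Sketch: hypothetical port budget for a 96x10G + 8x100G aggregation box.
down = 96 * 10    # server/leaf-facing Gb/s
up = 8 * 100      # spine-facing Gb/s
print(f"{down}G down / {up}G up -> {down/up:.2f}:1 oversubscription")
# 960G down / 800G up -> 1.20:1, close to non-blocking
```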
Cloud Computing (Control Plane): Public, Private, MSP, on one FABRIC
This is where XYZ Account must first have the ability to scale with customer demand, delivering more than just disk space and processors.
 Scale: XYZ Account must be able to seamlessly fail over, scale up, scale down, and optimize management of the applications and services.
 Flexibility: the XYZ Account infrastructure must have the ability to host heterogeneous and interoperable technologies.
 Business: the business-model costs might be optimized for operating expenses or toward capital investment.
Who manages the stack (networking, storage, servers, virtualization, O/S, middleware, runtime, data, applications)? A sketch of the matrix follows below.
 On-Premise: you manage the entire stack.
 Infrastructure as a Service: the vendor manages networking through virtualization; you manage the O/S through the applications.
 Platform as a Service: the vendor manages networking through the runtime; you manage data and applications.
 Software as a Service: the vendor manages the entire stack.
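The slide's flattened you-manage/vendor-manages matrix, reconstructed as a small lookup (the standard IaaS/PaaS/SaaS split; layer names follow the slide):

```python
# Sketch: responsibility matrix for the four delivery models.
STACK = ["networking", "storage", "servers", "virtualization",
         "os", "middleware", "runtime", "data", "applications"]
VENDOR_LAYERS = {"on-premise": 0, "iaas": 4, "paas": 7, "saas": 9}

def who_manages(model: str, layer: str) -> str:
    """Vendor manages the bottom VENDOR_LAYERS[model] layers; you
    manage the rest."""
    return "vendor" if STACK.index(layer) < VENDOR_LAYERS[model] else "you"

print(who_manages("iaas", "os"))       # you
print(who_manages("paas", "runtime"))  # vendor
```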
Microsoft Azure (Control Plane): Cloud
This is where Azure ExpressRoute lets XYZ Account create private connections between Azure datacenters and XYZ Account infrastructure, on or off premises.
 ExpressRoute connections don't go over the public Internet. They offer more reliability, faster speeds, lower latencies, and higher security than typical Internet connections.
 Transferring data between on-premises systems and Azure over ExpressRoute can yield significant cost benefits (the shape of the comparison is sketched below).
 XYZ Account can establish connections to Azure at an ExpressRoute location, such as an Exchange provider facility, or connect directly to Azure from an existing WAN, such as a multiprotocol label switching (MPLS) VPN provided by a network service provider.
The key impact of this model for the customer is a move from managing physical servers to a focus on logical management of data storage through policies.
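A minimal sketch of the cost argument only; every price below is hypothetical, so substitute current Azure rates before drawing conclusions:

```python
# Sketch: egress over the Internet is priced per GB; ExpressRoute
# trades a fixed port fee for a lower (or unmetered) per-GB rate.

def monthly_egress_cost(gb: float, per_gb: float, port_fee: float = 0.0):
    return gb * per_gb + port_fee

internet = monthly_egress_cost(50_000, per_gb=0.08)                # hypothetical
expressroute = monthly_egress_cost(50_000, per_gb=0.02, port_fee=500)  # hypothetical
print(f"internet ~${internet:,.0f}/mo, ExpressRoute ~${expressroute:,.0f}/mo")
```

The crossover depends entirely on monthly volume: below some threshold the port fee dominates, above it the per-GB savings do.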
Data Center Architecture Considerations (Compute, Cache, Database, Storage)
 80% of traffic is east/west lookup and storage; the client request plus server response crossing north/south is only 20%, so that boundary tolerates oversubscription of up to 200:1 (arithmetic sketched below).
 Inter-rack latency: 150 microseconds.
 Scale: up to 20 racks (non-blocking 2-tier designs are optimal).

Purchase "vanity free"
This is where Open Compute might allow companies to purchase "vanity free"; previous, outdated data center designs support more monolithic computing.
 The low-density X620 might help XYZ Account avoid stranded ports.
 Availability: dual X620s can be deployed to minimize the impact of maintenance.
 Flexibility: the X620 can support both 1G and 10G to servers and storage.
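A minimal sketch of the traffic split the slide is relying on; the total is hypothetical, only the 80/20 ratio comes from the deck:

```python
# Sketch: if lookup/storage keeps 80% of bytes east/west, only 20%
# crosses north/south, so the edge can run heavily oversubscribed
# while the 2-tier east/west fabric stays non-blocking.
total_gbps = 1_000
north_south = 0.20 * total_gbps   # 200 Gb/s crosses the boundary
east_west = 0.80 * total_gbps     # 800 Gb/s stays rack-to-rack
print(north_south, east_west)     # 200.0 800.0
```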
Open Compute – Two Rack Design. This is where XYZ Account can reduce OPEX and leverage a repeatable solution.
 With the spline setup, XYZ Account can put redundant switches in the middle and link each server to those switches.
 Fewer hops between servers – The important thing is that each server is precisely one hop from any other server.
 Avoid stranded ports – Designs often have a mix of fat and skinny nodes. If XYZ Account deploys 48-port leaf switches, many configurations might have anywhere from 16 to 24 stranded ports (see the sketch below).
[Diagram: typical spline setup across two racks, each with servers, storage, a Summit management switch, and redundant Summits.]
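The 16-to-24 stranded-port claim is simple arithmetic against a fixed 48-port leaf. A small sketch, with hypothetical rack contents:

```python
# Sketch of the stranded-port arithmetic: whatever a rack cannot fill on
# a fixed 48-port leaf is stranded. Rack contents below are assumptions.

LEAF_PORTS = 48

def stranded(servers: int, links_per_server: int, reserved: int = 0) -> int:
    """Leaf ports left unused after cabling servers and reserved ports."""
    used = servers * links_per_server + reserved
    if used > LEAF_PORTS:
        raise ValueError("rack oversubscribes the leaf")
    return LEAF_PORTS - used

print(stranded(servers=24, links_per_server=1))              # 24 stranded
print(stranded(servers=16, links_per_server=2))              # 16 stranded
print(stranded(servers=20, links_per_server=1, reserved=4))  # 24 stranded
```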
Open Compute – Eight Rack POD Design. This is where the typical spline setup grows into an eight-rack POD with a leaf/spine layer; a rough port budget is sketched below.
[Diagram: eight racks, each with servers, storage, a Summit management switch, and redundant Summit leafs, uplinked to a common spine.]
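A rough port budget for the eight-rack POD, under assumed (not vendor-stated) parameters: one 48-port leaf per rack, each leaf uplinked to every spine so any server is two hops from any other.

```python
# Illustrative fabric budget for an eight-rack POD; all counts assumed.

RACKS = 8
LEAF_DOWNLINKS = 48          # assumption: 10G server-facing ports per leaf
SPINES = 4                   # assumption: spine count
UPLINKS_PER_LEAF = SPINES    # one link per spine (full Clos mesh)

spine_ports_needed = RACKS * UPLINKS_PER_LEAF            # ports per POD
ratio = (LEAF_DOWNLINKS * 10) / (UPLINKS_PER_LEAF * 40)  # 10G down, 40G up

print(f"{spine_ports_needed} spine-facing links for the POD")  # 32
print(f"{ratio}:1 oversubscription per leaf")                  # 3.0:1
```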
Chassis vs. Spline
 Chassis: fabric modules (spine) and I/O modules (leaf) in one box. Proven value with a legacy approach, but you cannot access the line cards, there is no L2/L3 recovery inside, and there is no access to the fabric.
 Spline: disaggregated value. Control the top-of-rack switches, run L2/L3 protocols inside the spline, and get full access to the spine switches. No ego, complexity, or vendor lock-in.
Fat-Tree vs. Clos / Cross-Bar
 Fat-Tree: traditional 3-tier model (less cabling), but link speeds must increase at every hop (less predictable latency). Common in chassis-based architectures (optimized for north/south traffic).
 Clos: every leaf is connected to every spine (efficient utilization, very predictable latency). Always two hops to any leaf (more resiliency, flexibility, and performance). Friendlier to east/west traffic (the uplink to the rest of the network is just another leaf).

The XYZ Account handshake layer:
 This is where convergence needs to happen – LAN/SAN, FCoE, ETS. Stop or allow whatever you can (efficient multicasting).
 Virtualization happens with VXLAN and vMotion (controlled by the overlay).
 N+1 fabric design needs to happen here. It delivers simple, no-vanity future-proofing, no-forklift migrations, interoperability between vendors, and hitless operation.

This is where a fabric outperforms the big uglies, one to one: spine leaf.

The XYZ Account Ethernet expressway layer delivers massive scale:
 This is where low latency is critical: switch as quickly as you can. Do NOT slow down the core; keep it simple (disaggregated spline + one big ugly).
 Elastic capacity – Today's XYZ Account spines are tomorrow's leafs. Dial in the bandwidth to your specific needs with the number of uplinks.
 Availability – The state of the network is kept in each switch; no single point of failure. Seamless XYZ Account upgrades; it is easy to take a single switch out of service.

(Cloud Fabric) Disaggregation. Legacy challenges: complex, slow, expensive scale-up and scale-out; vendor lock-in; proprietary hardware and software versus commodity fabric modules (spine) and I/O modules (leaf). Spline (speed), active-active redundancy, fn(x,y,z).

The next convergence will collapse datacenter designs into smaller, elastic form factors for compute, storage, and networking.
 This is where you can never have enough.
 Customers want scale made easy.
 Hypervisor integration with cloud simplicity.

Start small; scale as you grow. This is where you can simply add Extreme leaf clusters:
 Each cluster is independent (including servers, storage, database, and interconnects).
 Each cluster can be used for a different type of service.
 Delivers a repeatable design that can be added as a commodity.

This is where VXLAN (route distribution) comes in: iBGP adjacencies with BGP route reflectors (RRs) deployed for scaling purposes – easy setup, small configuration (see the session-count sketch after this list). Egress and ingress are active/active.

Why VXLAN? It flattens the network to a single tier from the XYZ Account end-station perspective.
 All IP/BGP based (Virtual eXtensible Local Area Network). Host route distribution is decoupled from the underlay protocol.
 VXLAN's goal is to allow dynamic, large-scale, isolated virtual L2 networks to be created for virtualized and multi-tenant environments.
 Traffic-engineer like ATM or MPLS, carried over UDP across the existing IP network; VTEPs encapsulate and decapsulate at each end.
 Dense 10GbE interconnect using breakout cables, copper or fiber.

Intel, Facebook, OCP – Facebook 4-post architecture:
 Each leaf or rack switch has up to 48 10G downlinks. Segmentation and multi-tenancy without routers.
 Each spine has 4 uplinks – one to each leaf (4:1 oversubscription).
 Enables insertion of services without sprawl (analytics for fabric and application forensics).
 No routers at spine.
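The session-count sketch referenced above: why route reflectors make the iBGP setup "easy, small configuration". This is generic iBGP math, not an Extreme-specific limit.

```python
# iBGP full mesh needs a session between every pair of routers; a
# route-reflector design needs one session per client per RR, plus the
# small mesh among the RRs themselves.

def full_mesh_sessions(n: int) -> int:
    return n * (n - 1) // 2

def rr_sessions(n_clients: int, n_rrs: int = 2) -> int:
    # each client peers with every RR; RRs also peer with each other
    return n_clients * n_rrs + full_mesh_sessions(n_rrs)

for n in (10, 50, 100):
    print(n, full_mesh_sessions(n), rr_sessions(n))
# 10 -> 45 vs 21,  50 -> 1225 vs 101,  100 -> 4950 vs 201
```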
 One failure reduces cluster capacity to 75%.

(5 S's) Needs to be Scalable, Secure, Shared, Standardized, and Simplified.

Network (Fit) – Overlay Control. The XYZ Account VXLAN forwarding plane for NSX control:
 This is where logical switches span physical hosts and network switches. Application continuity is delivered with scale: scalable multi-tenancy across the data center.
 Enabling L2 over L3 infrastructure – pool resources from multiple data centers, with the ability to recover from disasters faster.
 Address network sprawl with a VXLAN overlay. Deeper integration with infrastructure and operations partners, integrations, and frameworks for IT organizations.

VMware NSX (Control Plane): the management plane is delivered by the NSX Manager; the control-plane NSX Controller manages logical networks and data plane resources. A minimal sketch of the encapsulation the overlay rides on follows.
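The sketch below packs and unpacks the 8-byte VXLAN header (flags plus a 24-bit VNI) that is carried over UDP (port 4789) across the routed underlay. Real VTEPs do this in the hypervisor or switch ASIC; this is only an illustration of the format.

```python
import struct

VXLAN_FLAGS_VNI_VALID = 0x08

def vxlan_header(vni: int) -> bytes:
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value (16M segments)")
    # byte 0: flags; bytes 1-3 reserved; bytes 4-6: VNI; byte 7 reserved
    return struct.pack("!B3xI", VXLAN_FLAGS_VNI_VALID, vni << 8)

def vxlan_vni(header: bytes) -> int:
    flags, word = struct.unpack("!B3xI", header)
    assert flags & VXLAN_FLAGS_VNI_VALID
    return word >> 8

hdr = vxlan_header(5002)         # VNI from the logical-router example
print(len(hdr), vxlan_vni(hdr))  # 8 5002
```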
• 2. XYZ Account 2016 Design – Extreme Edge PoE, Extreme Core 10G (1G / 2.5G / 5G / 10G / 40G). Jeff Green, 2016, Rev. 1, Florida.

Legend – PoE: 802.3at (PoE+) delivers 30W over Cat5e; UPOE delivers 60W with no cabling change from PoE+ Cat5e (NBASE-T Alliance).

Copper max distances (10GBASE-T): Cat 7 shielded, 100 m; Cat 6a shielded, 100 m; Cat 6a unshielded, 100 m; Cat 6 shielded**, 100 m; Cat 6 unshielded**, 55 m. You need the correct UTP, patch panel, and adapter.

Multi-rate copper, known as IEEE 802.3bz (see the reach check below):
 Greenfield – Cat 6a (2.5G, 5G & 10G) to 100 m; Cat 6 (2.5G, 5G & 10G) to 55 m.
 Brownfield – Cat 5e (2.5G & 5G) to 100 m.
 Requires an X620 or X460 switch for multi-rate support, plus a client that supports multi-rate.

SFP+ DAC cables: 10G passive (PN 10304 ~1 m, 10305 ~3 m, 10306 ~5 m, 10307 ~10 m); 10G SFP+ active copper cable (up to 100 m).
QSFP+ DAC cables: 40G passive (PN 10321 ~3 m, 10323 ~5 m); 40G active (PN 10315 ~10 m, 10316 ~20 m, 10318 ~100 m); 40G fan-out (PN 10321 ~3 m, 10322 ~5 m, PN 10GB-4-F10-QSFP ~10 m, PN 10GB-4-F20-QSFP ~20 m).

10G fiber: 10 LRM 220 m (720 ft, plus mode conditioning) (PN 10303); 10 SR over OM3 (300 m) or OM4 (400 m) (PN 10301); 10 LR over single mode (10 km) 1310 nm (PN 10302); 10 ER over single mode (40 km) 1550 nm (PN 10309); 10 ZR over single mode (80 km) 1550 nm (PN 10310).
10G copper: 10GBASE-T over Class E Cat 6 (55 m); 10GBASE-T over Class E Cat 6a or 7 (100 m); 802.3bz 10GBASE-T (100 m) for Cat 6 (5G) and for Cat 5e (2.5G).

1G fiber (50 µm): OM3 (550 m/SX) laser, LC (PN 10051H); OM4 (550 m/SX) 2 km, LC (PN 10051H).
1G fiber (62.5 µm): OM1 (FDDI, 220 m), LC (PN 10051H); OM2 (ATM, 275 m), LC (PN 10051H).
Single-fiber transmission uses only one strand of fiber for both transmit and receive (1310 nm and 1490 nm for 1 Gbps; 1310 nm and 1550 nm for 100 Mbps). LX (MMF 220 & 550 m), SMF 10 km, LC (PN 10052H); ZX SMF 70 km, LC (PN 10053H); 10/100/1000 (UTP to 100 m) SFP (PN 1070H).

40G optics: SR4 at least 100 m over OM3 MMF and at least 125 m over OM4 MMF (PN 10319); LR4 at least 10 km SMF, LC (PN 10320); LM4 140 m MMF or 1 km SMF, LC (PN 10334); ER4 40 km SMF, LC (PN 10335). Internal CWDM transmits four wavelengths over a single fiber. LR4 parallel SM, 10 km SMF, MPO (PN 10326); MPO to 4 x LC duplex fan-out, 10 m, SMF (PN 10327), for use with PN 10326. QSFP-SFPP-ADPT – QSFP to SFP+ adapter.

25/50/100G: CR10 > 10 m over copper cable (10x10 Gb/s Twinax, 7 m); SR10 > 100 m over OM3 MMF and > 125 m over OM4 MMF (10x10 Gb/s multimode, data center); LR4 > 10 km over SMF (4x25 Gb/s SMF/WDM, campus); ER4 > 40 km over SMF (4x25 Gb/s SMF/WDM, metro).

Optics and DAC cables: Extreme Networks will restrict the integration of non-qualified 3rd-party optical devices within 40G and 100G product environments unless you purchase the EXOS 3rd Party 40G/100G Optics feature license to allow such integration.
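The reach check referenced in the multi-rate notes: a small lookup built from the distances in the legend, so a proposed cable run can be sanity-checked before ordering. The function itself is a sketch; only the table values come from the legend.

```python
# (cable, speed in Gbps) -> max reach in meters, per the legend above.
# Cat 6 at 10G assumes the unshielded case; shielded Cat 6 reaches 100 m.
MAX_REACH_M = {
    ("cat5e", 2.5): 100, ("cat5e", 5): 100,          # 802.3bz brownfield
    ("cat6", 2.5): 100,  ("cat6", 5): 100, ("cat6", 10): 55,
    ("cat6a", 2.5): 100, ("cat6a", 5): 100, ("cat6a", 10): 100,
    ("cat7", 10): 100,
}

def link_ok(cable: str, speed_gbps: float, run_m: int) -> bool:
    reach = MAX_REACH_M.get((cable.lower(), speed_gbps))
    return reach is not None and run_m <= reach

print(link_ok("cat5e", 5, 90))   # True  - brownfield multi-rate
print(link_ok("cat6", 10, 70))   # False - unshielded Cat 6 stops at 55 m
print(link_ok("cat6a", 10, 95))  # True
```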
Proprietary got you? Keyed optics (a mux population check follows these notes):

Model Number: Description
10GB-LR271-SFPP: 10Gb CWDM LR, SM, channel 1271 nm, LC
10GB-LR291-SFPP: 10Gb CWDM LR, SM, channel 1291 nm, LC
10GB-LR311-SFPP: 10Gb CWDM LR, SM, channel 1311 nm, LC
10GB-LR331-SFPP: 10Gb CWDM LR, SM, channel 1331 nm, LC
MUX-CWDM-01: 4-channel O-band CWDM mux/demux
MUX-RACK-01: Rack-mount kit for MUX-CWDM-01
40GB-LR4-QSFP: 40Gb 40GBASE-LR4, SM, 10 km, LC

Notes: Organizing Compute, Management & Edge
 Edge leaf: L3 to the DC fabric, L2 to external networks (WAN/Internet).
 Compute clusters and infrastructure clusters (Edge, storage, vCenter, and the cloud management system); leaf/spine with L2 VLANs for bridging.
 A single vCenter Server manages all Management, Edge, and Compute clusters. The NSX Manager is deployed in the Management cluster and paired to the vCenter Server; NSX Controllers can also be deployed into the Management cluster. This reduces vCenter Server licensing requirements.

Separation of compute, management, and Edge functions has the following design advantages:
 Managing the life-cycle of resources for the compute and Edge functions.
 Ability to isolate and develop span of control.
 Capacity planning – CPU, memory & NIC.
 Upgrade & migration flexibility.
 Automation control over areas or functions that require frequent changes: app tier, micro-segmentation & load balancer.

Three areas of technology require consideration:
 Interaction with the physical network.
 Overlay (VXLAN) impact.
 Integration with vSphere clustering.
[Diagram: registration/mapping of vCenter Server, NSX Manager, and NSX Controller in the Management cluster; web VMs in Compute A/B; Edge and control VMs in the Edge cluster.]
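The mux population check referenced above: the keyed-optics table maps four O-band CWDM channels onto one fiber pair via the MUX-CWDM-01, so a populated mux should carry each channel exactly once. A sketch, using only the part numbers and wavelengths from the table:

```python
MUX_CHANNELS_NM = {1271, 1291, 1311, 1331}   # 4-channel O-band plan

OPTIC_CHANNEL_NM = {
    "10GB-LR271-SFPP": 1271, "10GB-LR291-SFPP": 1291,
    "10GB-LR311-SFPP": 1311, "10GB-LR331-SFPP": 1331,
}

def mux_population_ok(optics: list[str]) -> bool:
    """True if the SFP+ set covers each mux channel exactly once."""
    chans = [OPTIC_CHANNEL_NM[o] for o in optics]
    return sorted(chans) == sorted(MUX_CHANNELS_NM)

print(mux_population_ok(["10GB-LR271-SFPP", "10GB-LR291-SFPP",
                         "10GB-LR311-SFPP", "10GB-LR331-SFPP"]))  # True
print(mux_population_ok(["10GB-LR271-SFPP"] * 4))                 # False
```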
Preparation, Netsite, Operation – Convergence 3.0 (automation in seconds), flexibility and choice.

Traditional networking configuration tasks (L2/L3):
 Initial configuration: multi-chassis LAG, routing configuration, SVIs/RVIs, VRRP/HSRP, LACP, VLANs.
 Recurring configuration: SVIs/RVIs, VRRP/HSRP, advertising new subnets, access lists (ACLs), adjusting VLANs on trunks, VLAN-to-STP/MST mapping, adding VLANs on uplinks and server ports.

NSX is AGNOSTIC to the underlay network: L2, L3, or any combination. Only TWO requirements: IP connectivity, and an MTU of 1600 (the arithmetic behind that figure is sketched below).

Network & security services in software (WAN/Internet; POD A and POD B; VLAN X and VLAN Y stretched).

L3 topologies & design considerations, with XoS 670 cores:
 L2 interfaces: by default, IP packets as large as 9214 bytes can be sent and received (no configuration is required).
 L3 interfaces: by default, IP packets as large as 1500 bytes can be sent and received. Configuration step for L3 interfaces: change the MTU to 9214 (the “mtu” command); IP packets as large as 9214 bytes can then be sent and received.
 L3 ToR designs run a dynamic routing protocol between leaf and spine. BGP, OSPF, or IS-IS can be used.
 Each rack advertises a small set of prefixes (a unique VLAN/subnet per rack), with equal-cost paths to the other racks' prefixes.
 The switch provides default gateway service for each VLAN subnet.
 802.1Q trunks carry a small set of VLANs for VMkernel traffic. The rest of the session assumes an L3 topology.
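The arithmetic behind the 1600-byte underlay requirement, as a sketch. Header sizes are the standard lengths; 9214 is the XoS L3 MTU mentioned above.

```python
# What the underlay must carry for one VXLAN-encapsulated guest frame:
# inner Ethernet + optional 802.1Q tag + guest payload, wrapped in
# VXLAN + UDP + outer IPv4.
ETH = 14; DOT1Q = 4; IPV4 = 20; UDP = 8; VXLAN = 8

def underlay_mtu_needed(guest_payload: int, tagged: bool = False) -> int:
    inner = ETH + (DOT1Q if tagged else 0) + guest_payload
    return inner + VXLAN + UDP + IPV4

print(underlay_mtu_needed(1500))   # 1550 -> the 1600 figure adds headroom
print(underlay_mtu_needed(8950))   # 9000 -> jumbo guests fit within 9214
```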
XYZ Account (Spine), CORE 1 / CORE 2 – Preparation, Netsite, Operation. Convergence 3.0 (automation in seconds), flexibility and choice. Wi-Fi, Analytics, Security, Policy – Extreme's platform:
 Lync traffic engineering with Purview analytics and service insertion.
 Multi-tenant networks, automation and orchestration.
 Self-provisioned network slicing (proof-of-concept implementation).
Better experience through simpler solutions that deliver long-term value. Products – one wired and wireless platform. Customer Care – strong first-call resolution.

NSX Controller functions: Logical Router 1 (VXLAN 5000), Logical Router 2 (VXLAN 5001), Logical Router 3 (VXLAN 5002); the controller's VXLAN directory service holds the MAC table, ARP table, and VTEP table (a sketch follows). This is where NSX will provide XYZ Account one control plane to distribute network information to ESXi hosts. NSX Controllers are clustered for scale-out and high availability.
 Network information is distributed across nodes in a controller cluster (slicing).
 Removes the VXLAN dependency on multicast routing/PIM in the physical network.
 Provides suppression of ARP broadcast traffic in VXLAN networks.
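A sketch of the directory-service idea behind ARP suppression: the controller cluster learns MAC/ARP/VTEP mappings per VXLAN segment, so a host's ARP request can be answered from the table instead of being flooded. The data model and names below are illustrative, not the NSX controller's actual API.

```python
from collections import defaultdict

class VniDirectory:
    """Toy per-VNI directory: ip -> (mac, vtep_ip)."""

    def __init__(self):
        self.arp = defaultdict(dict)

    def learn(self, vni: int, ip: str, mac: str, vtep: str) -> None:
        self.arp[vni][ip] = (mac, vtep)

    def resolve(self, vni: int, ip: str):
        """Return (mac, vtep) if known; None means fall back to flooding."""
        return self.arp[vni].get(ip)

d = VniDirectory()
d.learn(5000, "10.10.0.5", "00:50:56:aa:bb:cc", "10.20.10.11")
print(d.resolve(5000, "10.10.0.5"))   # answered locally, no broadcast
print(d.resolve(5001, "10.10.0.5"))   # unknown -> flood within the segment
```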
SERVER FARM (leafs)
[Diagram: racks of servers, storage, and Summit management switches, plus media servers, routers, firewalls, and PBXs – compute workloads, services, and connectivity.]

Transport Zone, VTEP, logical networks, and VDS:
 vSphere hosts (Host 1, Host 2) join the VXLAN transport network with VTEPs (e.g., VTEP2 10.20.10.11, VTEP3 10.20.10.12, VTEP4 10.20.10.13); VMs MAC1–MAC4 attach to VXLAN 5002 across two vSphere Distributed Switches.
 When deployed, VXLAN creates an automatic port-group whose VLAN ID must be the same per VDS. If the fabric is L2, this usually means the same IP subnets are also used across racks for a given type of traffic.
 For a given host, only one VDS is responsible for VXLAN traffic. A single VDS can span multiple clusters.
 The VTEP VMkernel interface belongs to a specific VLAN-backed port-group dynamically created during the cluster VXLAN preparation.
 One or more VDS can be part of the same TZ, and a given logical switch can span multiple VDS (a validation sketch follows).
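The validation sketch referenced above encodes the span rules just stated: a transport zone is a set of VXLAN-prepared clusters, a logical switch may only span clusters inside its TZ, and the auto-created VTEP port-group VLAN must be consistent per VDS. The model is purely illustrative.

```python
def check_logical_switch(tz_clusters: set[str],
                         ls_clusters: set[str],
                         vds_vtep_vlans: dict[str, set[int]]) -> list[str]:
    """Return a list of rule violations (empty list = consistent)."""
    errors = []
    if not ls_clusters <= tz_clusters:
        errors.append("logical switch spans clusters outside its TZ")
    for vds, vlans in vds_vtep_vlans.items():
        if len(vlans) != 1:
            errors.append(f"{vds}: VTEP port-group VLAN differs across hosts")
    return errors

print(check_logical_switch(
    tz_clusters={"compute-a", "compute-b"},
    ls_clusters={"compute-a", "compute-b"},
    vds_vtep_vlans={"vds-compute": {88}},   # VLAN 88 = VXLAN VMkernel
))                                          # [] -> consistent
```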
vSphere host (ESXi) to L3 ToR switch: routed uplinks (ECMP) and an 802.1Q VLAN trunk (see the addressing sketch below):
 VLAN 66 Mgmt – 10.66.1.25/26, default gateway 10.66.1.1 (SVI 66: 10.66.1.1/26)
 VLAN 77 vMotion – 10.77.1.25/26, gateway 10.77.1.1 (SVI 77: 10.77.1.1/26)
 VLAN 88 VXLAN – 10.88.1.25/26, default gateway 10.88.1.1 (SVI 88: 10.88.1.1/26)
 VLAN 99 Storage – 10.99.1.25/26, gateway 10.99.1.1 (SVI 99: 10.99.1.1/26)
The span of these VLANs is limited to the rack.

Traditional control (LDAP, NAC, DHCP, RADIUS, captive portal, DNS, MDM) – XYZ Account services, user repositories, or corporate control – versus cloud-based control (NAC, Analytics, Netsite) in the management cluster.

Extreme VMware deployment considerations – This is where the management cluster is typically provisioned on a single rack.
 The single-rack design still requires redundant uplinks from host to ToR, carrying the VLANs for management.
 A dual-rack design gives increased resiliency (handling single-rack failure scenarios), which may be required for a highly available design.
 Typically, in a small design, the management and Edge clusters are collapsed. Exclude the management cluster from VXLAN preparation.
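The addressing sketch for the list above: each rack carves VLANs 66/77/88/99 into /26 subnets with the SVI as gateway. The `10.<vlan>.<rack>.0/26` scheme generalizes the single worked example and is an assumption, not a stated standard of this design.

```python
import ipaddress

VLANS = {66: "Mgmt", 77: "vMotion", 88: "VXLAN", 99: "Storage"}

def rack_plan(rack: int) -> dict[int, dict]:
    """Per-rack subnet, SVI gateway, and first host for each VMkernel VLAN."""
    plan = {}
    for vlan, name in VLANS.items():
        net = ipaddress.ip_network(f"10.{vlan}.{rack}.0/26")
        hosts = list(net.hosts())
        plan[vlan] = {"name": name, "subnet": str(net),
                      "svi_gw": str(hosts[0]),       # e.g. 10.66.1.1
                      "first_host": str(hosts[1])}
    return plan

for vlan, row in rack_plan(rack=1).items():
    print(vlan, row)   # matches the example: 10.66.1.0/26, GW 10.66.1.1, ...
```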
ToR #1 / ToR #2 with Controller 1–3, NSX Manager, and vCenter Server:
 The NSX Manager is deployed as a virtual appliance (4 vCPU, 12 GB of RAM per node).
 Consider reserving memory for vCenter to ensure good Web Client performance.
 These configurations cannot be modified.

Extreme Networks compute, storage & networking integration... Extreme Networks control, analytics & security integration...
• 3. XYZ Account 2016 Design – Extreme Edge PoE, Extreme Core 10G (1G / 2.5G / 5G / 10G / 40G). Jeff Green, 2016, Rev. 1, Florida.

Notes:
 Identify design principles and implementation strategies. Start from service requirements and leverage standardization (design should be driven by today's and tomorrow's service requirements).
 Standardization limits technical and operational complexity and related costs. Develop a reference model based on principles (principles enable consistent choices over the long run).
 Leverage best practices and proven expertise. Streamline your capability to execute and your operational effectiveness (unleash the capabilities provided by enabling technologies).

Virtualized services for application delivery, one virtual router per service: VoIP, Oracle, wireless LAN, and PACs.

Number of assets/ports, maintenance costs, operational costs: next-generation operations, pay as you go, savings. Reference architecture.

Data center Network as a Service (NaaS) – Multiple vCenters design – XYZ Account design with multiple NSX domains...
 Following VMware best practices, the Management cluster is managed by a dedicated vCenter Server (Mgmt VC); a separate vCenter Server in the Management cluster manages the Edge and Compute clusters.
 The NSX Manager is also deployed into the Management cluster and paired with this second vCenter Server. Multiple NSX Manager/vCenter Server pairs (separate NSX domains) can be deployed: NSX Manager VM-A with the vCenter for NSX domain A, NSX Manager VM-B with the vCenter for NSX domain B.
 NSX Controllers must be deployed into the same vCenter Server their NSX Manager is attached to; therefore the Controllers are usually also deployed into the Edge cluster.
[Diagram: XYZ Account primary site – CORE 1 / CORE 2 spine with Preparation/Netsite Operation logical switches over the server-farm leafs (servers, storage, Summit management switches, media servers, routers, firewalls, PBXs – compute workloads, services, and connectivity).]
[Diagram: XYZ Account DR site – CORE 1 / CORE 2 spine with Preparation/Netsite Operation logical switches over its own server-farm leafs; the controller cluster (Controller 1–3, NSX Manager, vCenter Server) distributes the VXLAN directory service (MAC, ARP, and VTEP tables) for Logical Routers 1–3 on VXLAN 5000–5002, with traffic engineered like ATM or MPLS over UDP across the existing IP network between VTEPs.]

XYZ Account NSX Transport Zone: a collection of VXLAN-prepared ESXi clusters.
 Normally a TZ defines the span of logical switches (Layer 2 communication domains).
 A given logical switch can span multiple VDS.
 A VTEP (VXLAN Tunnel EndPoint) is a logical interface (VMkernel) that connects to the TZ to encapsulate and decapsulate VXLAN traffic. One or more VDS can be part of the same TZ.
 The VTEP VMkernel interface belongs to a specific VLAN-backed port-group dynamically created during the cluster VXLAN preparation.

Overlay considerations: Ethernet Virtual Interconnect (EVI) can be deployed for active/active DCs over any network. This is where careful attention is required, because the different data plane (an additional header) makes jumbo frames a must-have, and it will continue to evolve.
 Scalability beyond the 802.1Q VLAN limitation, to 16M services/tenants.
 L2 extension, with VXLAN as the de-facto solution from VMware. Standardization around the control plane is still work in progress (even if BGP EVPNs are here).
 Encapsulation over IP delivers the ability to cross L3 boundaries. As a result, the design above becomes one big L3 domain with L2 processing.

EVI provides additional benefits over the EVI tunnel across the physical underlay network:
 Transport agnostic.
 Up to 16 active/active DCs.
 Active/active VRRP default gateways for VMs.
 STP outages remain local to each DC.
 Improves WAN utilization by dropping unknown frames and providing ARP suppression.