Multi-Rate 1/2.5/5/10 Gigabit Edge PoE++
Multi-Rate Spine-Leaf Design (10/25/40/50/100 Gigabit)
Web-scale for the rest of us...
Web-Scale for the Enterprise (Any-Scale upgrades).
• SLAs with Agility (Storage Pools and Containers).
• Security, Control & Analytics (Data follows a VM as it moves).
• Predictable Scale (I/O & data locality are critical).
X460-G2 (Advanced L3 1-40G) Multirate Option
PoE
Fiber
DC
Policy
Fit The Swiss Army Knife of Switches
Half Duplex
½ & ½
3 Models
This is where 10G on
existing copper Cat5e
and Cat6 extends the
life of the installed
cable plant. Great for
1:N Convergence.
X620 (10G Copper or Fiber)
Speed Next Gen Edge
Lowered TCO via
Limited Lifetime Warranty
XYZ Account Design Goals
• Fractional consumption and predictable scale
(Distributed everything).
• No single point of failure (Always-on systems).
• Extensive automation and rich analytics.
XYZ Account Fundamental Assumptions..
• Unbranded x86 servers: fail-fast systems
• All intelligence and services in software
• Linear, predictable scale-out
CAPEX or OPEX (you choose)?
Reduced Risk (just witness or take action)
Time is the critical Factor with XYZ Account Services...
Infrastructure
Business model
Ownership
Considerations
Management
Location
• 32 x 100Gb
• 64 x 50Gb
• 128 x 25Gb
• 128 x 10Gb
• 32 x 40Gb
96 x 10GbE Ports
(via 4x10Gb breakout)
8 x 10/25/40/
50/100G
10G
Next Gen: Spine Leaf
X670 & X770 - Hyper Ethernet
Common Features
• Data Center Bridging (DCB) features
• Low ~600 ns chipset latency in cut-through mode.
• Same PSUs and Fans as X670s (Front to back or Back to
Front) AC or DC.
X670-G2-72x (10GbE Spine Leaf) 72 10GbE
X670-48x-4q (10GbE Spine Leaf) 48 10GbE & 4 QSFP+
QSFP+
40G DAC
Extreme Feature Packs
Core
Edge
AVB
OpenFlow
Advance
Edge
1588 PTP
MPLS
Direct Attach
Optics License
Extreme Switches
include the license
they normally need.
Like any other
software platform
you have an
upgrade path.
QSFP28
100G DAC
Disaggregated Switch
Purple Metal
XoS as a Platform. Network as a Platform...
Distributed Everything (no proprietary tech).
Always-on Operations (Spine-leaf Resilience).
Extensive Automation (rich analytics).
Purposed for
Broadcom (ASICs)
XYZ Account Business Value
XoS Platform
Config L2/L3
Analytics
Any OS
Any Bare Metal Switch
Policy
Disaggregated Switch
Bare - Grey
Web-Scale
Configuration
consistency ..
What constitutes a Software
Defined Data Center (SDDC)?
Abstract, pool, and automate across...
XYZ Account Strategic Asset
Initial Configuration Tasks...
• Multi-chassis LAG (LACP)
• Routing configuration (VRRP/HSRP)
• STP (Instances/mapping) VLANs
Recurring configuration...
• VRRP/HSRP (Advertise new subnets)
• Access lists (ACLs)
• VLANs (Adjust VLANs on trunks).
• VLANs STP/MST mapping
• Add VLANs on uplinks
• Add VLANs to server ports
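Most of the recurring items above are mechanical, which is why automation keeps coming up in this design. Below is a minimal Python sketch, with hypothetical switch names and task strings (not vendor CLI syntax), of turning that recurring checklist into a generated work plan for each new VLAN.

```python
# Minimal sketch: generate the recurring per-VLAN changes listed above
# (advertise subnets, adjust ACLs, trunks, STP/MST mapping, uplinks,
# server ports). Switch names and task strings are illustrative only.

RECURRING_TASKS = [
    "advertise new subnet via VRRP/HSRP",
    "update access lists (ACLs)",
    "adjust VLAN on trunks",
    "map VLAN to STP/MST instance",
    "add VLAN to uplinks",
    "add VLAN to server ports",
]

def plan_vlan_rollout(vlan_id, switches):
    """Return an ordered list of (switch, step) pairs for one new VLAN."""
    steps = []
    for switch in switches:
        for task in RECURRING_TASKS:
            steps.append((switch, f"VLAN {vlan_id}: {task}"))
    return steps

if __name__ == "__main__":
    for switch, step in plan_vlan_rollout(42, ["leaf-1", "leaf-2", "spine-1"]):
        print(f"{switch:8} -> {step}")
```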
Control Plane
Logical
Data Plane
Physical
compute
network
storage
Logical Router 1
VXLAN 5001
Logical Router 2
VXLAN 5002
Logical Router 3
VXLAN 5003
MAC table
ARP table
VTEP table
Controller Directory
VTEP
DHCP/DNS
Policy
Edge Services
VM VM VM VM VM
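As a rough illustration of the controller directory pictured above (MAC/ARP/VTEP tables kept per VXLAN segment), here is a toy Python model; the VNIs, MAC addresses, and VTEP IPs are made-up examples, not values from this design.

```python
# Toy model of the controller directory: per-VNI MAC -> VTEP mapping.
# All addresses and VNIs below are placeholders.

controller_directory = {
    5001: {"00:50:56:aa:00:01": "10.0.0.11", "00:50:56:aa:00:02": "10.0.0.12"},
    5002: {"00:50:56:bb:00:01": "10.0.0.12"},
    5003: {"00:50:56:cc:00:01": "10.0.0.13"},
}

def lookup_vtep(vni, dst_mac):
    """Return the VTEP IP hosting dst_mac on this VNI, or None (flood/learn)."""
    return controller_directory.get(vni, {}).get(dst_mac)

print(lookup_vtep(5001, "00:50:56:aa:00:02"))  # -> 10.0.0.12
print(lookup_vtep(5002, "00:50:56:ff:00:09"))  # -> None (unknown destination)
```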
Who?
Where?
When?
What device?
How?
Quarantine / Remediate
Allow
Authentication
NAC Server
Summit
Netsite
Advanced
NAC Client
Joe Smith
XYZ Account
Access
Controlled
Subnet
Enforcement
Point
Network
Access
Control
This is where
if X + Y, then Z...
• LLDP-MED
• CDPv2
• ELRP
• ZTP
If user
matches a
defined
attribute
value
ACL
QoS
Then place
user into a
defined ROLE
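The "if user matches a defined attribute value, then place the user into a defined ROLE" logic above can be sketched in a few lines; the attributes, roles, ACL names, and QoS profiles below are hypothetical placeholders.

```python
# Sketch of attribute-to-role policy: match a user attribute, assign a role,
# and attach an ACL and QoS profile. All names are hypothetical.

ROLE_RULES = [
    # (attribute, value, role, acl, qos_profile)
    ("department", "engineering", "eng-role", "permit-eng-subnets", "QP3"),
    ("device_type", "ip-phone", "voice", "permit-voice-vlan", "QP7"),
    ("auth_state", "failed", "quarantine", "deny-all-except-remediation", "QP1"),
]

def classify(user_attributes):
    """Return (role, acl, qos) for the first matching rule, else a default."""
    for attr, value, role, acl, qos in ROLE_RULES:
        if user_attributes.get(attr) == value:
            return role, acl, qos
    return "guest", "internet-only", "QP1"

print(classify({"department": "engineering", "auth_state": "ok"}))
print(classify({"auth_state": "failed"}))
```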
A port is what it is because?
Datacenter
Evolution
1990's
Client-Server
x86 x86 x86 x86 x86 x86
2000s
Virtualization
x86 x86 x86
2010> Cloud
Public Cloud
Intelligent
Software
Roadblocks
• Silos
• Complexity
• Scaling
Application
Experience
Full Context
App
App
Analytics
App
Stop the
finger-pointing
Application Network Response.
Flow or Bit
Bucket
Collector
3 million Flows
Sensors
X460 IPFix 4000 Flows
(2048 ingress, 2048 egress)
Sensor PV-FC-180, S or K Series (Core
Flow 2/ 1 Million Flows)
Flow-based Access Points
From the controller (8K Flows
per AP or C35 is 24K Flows)
Flows
Why not do this in the
network?
10110111011101110 101101110111011101
6 million Flows
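A rough sketch of what collecting flows "in the network" buys you: IPFIX-style records exported by the sensors above can be rolled up per application with very little code. The record fields and byte counts are illustrative.

```python
# Sketch: roll up IPFIX-style flow records (as exported by edge sensors)
# into per-application byte counts. Field names and values are examples.

from collections import defaultdict

flows = [
    {"src": "10.1.1.10", "dst": "10.2.2.20", "app": "HTTP",  "bytes": 120_000},
    {"src": "10.1.1.11", "dst": "10.2.2.20", "app": "HTTP",  "bytes":  80_000},
    {"src": "10.1.1.12", "dst": "10.3.3.30", "app": "MySQL", "bytes": 400_000},
]

def bytes_per_app(flow_records):
    totals = defaultdict(int)
    for record in flow_records:
        totals[record["app"]] += record["bytes"]
    return dict(totals)

print(bytes_per_app(flows))   # {'HTTP': 200000, 'MySQL': 400000}
```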
Business Value
Context BW IP HTTP:// Apps
Platform Automation Control Experience Solution Framework
Is your network faster today than
it was 3 years ago? Going forward
it should deliver more, faster,
different
DIY Fabric for the DIY Data Center
Three fundamental building blocks for Data
Center Network Automation Solution:
• Orchestration (OpenStack, vRealize, ESX,
NSX, MS Azure, ExtremeConnect)
• Overlay (VXLAN, NVGRE, ...)
• Underlay (traditional L2/L3 protocols: OSPF, MLAG, etc.)
Underlay
Overlay
Orchestration
How is a traditional
Aggregated Technology like a Duck?
A duck can swim, walk and fly but...
Z
I/O
Bandwidth
Y
Memory
Storage
X
Compute
XoS fn(x,y,z)
is like an elastic Fabric
• You can never have enough.
• Customers want scale made easy.
• Hypervisor integration.
The next convergence will be collapsing the
datacenter designs into smaller, elastic form
factors for compute, storage and networking. The application
is always the driver.
Summit
Cisco ACI
HP Moonshot
XYZ Account Data CenterXYZ Account Data Center
Chassis V Spline
Fabric Modules (Spine)
I/O Modules (Leaf)
Spine
Leaf
Proven value with legacy approach.
• Cannot access line cards.
• No L2/L3 recovery inside.
• No access to the Fabric.
Disaggregated value...
• Control Top-of-Rack Switches
• L2/L3 protocols inside the Spline
• Full access to Spine Switches
No EGO, Complexity, or Vendor Lock-in.
Fat-Tree
Clos / Cross-Bar
• Traditional 3-tier model (Less cabling).
• Link speeds must increase at every hop (Less
predictable latency).
• Common in Chassis based architectures (Optimized
for North/South traffic).
• Every Leaf is connected to every Spine (Efficient
utilization/ Very predictable latency).
• Always two hops to any leaf (More resiliency,
flexibility and performance).
• Friendlier to east/west traffic (the uplink to the
rest of the network is just another leaf; see the sizing sketch below).
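To make the Clos bullets concrete, the sketch below runs the basic spine-leaf arithmetic: every leaf connects to every spine, so leaf-to-leaf is always two hops and oversubscription is simply host-facing bandwidth divided by uplink bandwidth. The port counts are assumed examples, not a specific Summit model.

```python
# Sizing sketch for a 2-tier Clos (spine-leaf) fabric. Port counts and
# speeds are assumptions chosen for illustration.

def clos_fabric(leaves, spines, host_ports_per_leaf, uplink_speed_gbps,
                host_speed_gbps):
    """Every leaf connects once to every spine (the Clos property)."""
    uplinks_per_leaf = spines
    host_bw = host_ports_per_leaf * host_speed_gbps
    uplink_bw = uplinks_per_leaf * uplink_speed_gbps
    return {
        "total_host_ports": leaves * host_ports_per_leaf,
        "leaf_oversubscription": round(host_bw / uplink_bw, 2),
        "hops_leaf_to_leaf": 2,   # leaf -> spine -> leaf, always
    }

# Example: 8 leaves, each with 48x10G host ports and 4x100G uplinks to 4 spines.
print(clos_fabric(leaves=8, spines=4, host_ports_per_leaf=48,
                  uplink_speed_gbps=100, host_speed_gbps=10))
# {'total_host_ports': 384, 'leaf_oversubscription': 1.2, 'hops_leaf_to_leaf': 2}
```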
The XYZ Account handshake layer:
• This is where convergence needs to happen – LAN/SAN, FCoE, ETS. Stop or allow
whatever you can (Efficient Multicasting).
• Virtualization happens with VXLAN and vMotion (control by the overlay).
• N-plus-one fabric design needs to happen here (delivers simple, no-vanity future-proofing,
no-forklift migrations, interop between vendors, and hitless operation).
This is where,
a Fabric outperforms the Big Uglies
ONE to ONE: Spine Leaf
The XYZ Account Ethernet Expressway Layer: delivers massive scale...
• This is where low latency is critical; switch as quickly as you can. DO NOT slow down
the core; keep it simple (Disaggregated Spline + One Big Ugly).
• Elastic Capacity - Today's XYZ Account spines are tomorrow's leafs. Dial in the
bandwidth to your specific needs with the number of uplinks.
• Availability - the state of the network is kept in each switch; no single point of failure.
Seamless XYZ Account upgrades, easy to take a single switch out of service.
(Cloud Fabric) Disaggregation
Spine
Leaf
Legacy Challenges:
Complex/Slow/Expensive
Scale-up and Scale out
Vendor lock-in
Proprietary (HW, SW) vs. Commodity
Fabric Modules (Spine)
I/O Modules (Leaf)
Spline (Speed)
Active - Active redundancy
fn(x,y,z) The next convergence will be collapsing
datacenter designs into smaller, elastic form
factors for compute, storage and networking.
• This is where, you can never have enough.
• Customers want scale made easy.
• Hypervisor integration w cloud simplicity.
L2 L3 L2 L3
L2 L3 L2 L3
L2 L3
Start Small; Scale as You Grow
This is where you can simply add
Extreme Leaf Clusters.
• Each cluster is independent
(including servers, storage,
database & interconnects).
• Each cluster can be used for
a different type of service.
• Delivers repeatable design
which can be added as a
commodity.
XYZ Account Spine
Leaf
Cluster Cluster Cluster
Egress
Scale
Ingress
Active / Active
VM
VMVM
RR RR
BGP Route-Reflector (RR)
iBGP Adjacency
This is where
VXLAN (Route Distribution)
This is where: Why VXLAN? It flattens the network to a single
tier from the XYZ Account end-station
perspective.
• All IP/BGP based (Virtual eXtensible Local
Area Network). Host route distribution is
decoupled from the underlay protocol.
• VXLAN's goal is to allow dynamic, large-
scale, isolated virtual L2 networks to be
created for virtualized and multi-
tenant environments.
• Route-Reflectors are deployed for scaling
purposes - easy setup, small configuration.
Traffic Engineer "like ATM or MPLS"
UDP
Start
Stop
UDP UDP
Use Existing IP Network
VM
VM
VM
VM
VM
VM
VM
VM
VTEP VTEP
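A minimal sketch of the VXLAN data path implied above: the original L2 frame is prepended with an 8-byte VXLAN header (flags plus a 24-bit VNI) and carried in UDP (port 4789) across the existing IP network between VTEPs. The frame contents and VTEP address below are placeholders.

```python
# Minimal sketch of VXLAN encapsulation: an 8-byte VXLAN header (flags +
# 24-bit VNI) prepended to the inner Ethernet frame, carried in UDP/IP
# between VTEPs. Only the VXLAN header is built here; outer IP/UDP headers
# are left to the host stack in this illustration.

import socket
import struct

VXLAN_UDP_PORT = 4789          # IANA-assigned VXLAN port
VXLAN_FLAG_VNI_VALID = 0x08    # "I" flag: the VNI field is valid

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the VXLAN header: flags byte, 3 reserved, 24-bit VNI, 1 reserved."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    header = struct.pack("!B3xI", VXLAN_FLAG_VNI_VALID, vni << 8)
    return header + inner_frame

def send_to_vtep(payload: bytes, vtep_ip: str) -> None:
    """Ship the encapsulated frame to the remote VTEP over plain UDP."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload, (vtep_ip, VXLAN_UDP_PORT))

inner = bytes(14) + b"example payload"       # placeholder Ethernet frame
packet = vxlan_encapsulate(inner, vni=5001)
print(len(packet), "bytes after encapsulation")
# send_to_vtep(packet, "192.0.2.10")         # example VTEP address (placeholder)
```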
Dense 10GbE
Interconnect using
breakout cables,
Copper or Fiber
VM
VM
VM
VM
VM
VM
VM
VM
VM
VM
VM
VM
VM
VM
VM
App 1
App 2
App 3
Intel, Facebook, OCP
Facebook 4-Post Architecture - Each
leaf or rack switch has up to 48 10G
downlinks. Segmentation or multi-tenancy
without routers.
• Each spine has 4 uplinks – one to each
leaf (4:1 oversubscription).
• Enable insertion of services without
sprawl (Analytics for fabric and
application forensics).
• No routers at spine. One failure
reduces cluster capacity to 75%.
(5 S's) Needs to be Scalable, Secure,
Shared, Standardized, and Simplified.
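The 4:1 oversubscription and 75% remaining-capacity figures in the bullets above are straightforward ratios; the sketch below just makes the arithmetic explicit, using an assumed 480G/120G downlink/uplink split chosen to match a 4:1 ratio.

```python
# Make the 4-post arithmetic explicit. Bandwidth figures are assumed
# examples consistent with the ratios quoted above.

def oversubscription(downlink_gbps, uplink_gbps):
    """Ratio of host-facing bandwidth to fabric-facing bandwidth on a leaf."""
    return downlink_gbps / uplink_gbps

def capacity_after_spine_failures(spines, failed):
    """Fraction of fabric capacity left when 'failed' spines are down."""
    return (spines - failed) / spines

# A 4:1 ratio means the uplinks carry one quarter of the host-facing bandwidth:
print(oversubscription(downlink_gbps=480, uplink_gbps=120))   # 4.0
# With 4 spines and no routing at the spine, one failure leaves 75% capacity:
print(capacity_after_spine_failures(spines=4, failed=1))      # 0.75
```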
Network (Fit) Overlay Control
The XYZ Account VXLAN forwarding plane for NSX control:
• This is where logical switches span across physical hosts and network switches. Application
continuity is delivered with scale. Scalable multi-tenancy across the data center.
• Enabling L2 over L3 infrastructure - pool resources from multiple data centers with the ability to
recover from disasters faster.
• Address network sprawl with a VXLAN overlay. Deeper integration with infrastructure and
operations partners, integrations, and frameworks for IT organizations.
VMware NSX (Control Plane)
Management plane delivered
by the NSX Manager.
Control plane: the NSX Controller
manages logical networks
and data plane resources.
Extreme delivers an open,
high-performance data
plane with scale.
NSX Architecture and Components
[Topology diagram: CORE and CAMPUS layers feeding a Data Center – Private Cloud. A 770/870 spine (X770, X870-32c, X870-96x-8c) with 100Gb uplinks connects leaf tiers for 10Gb aggregation (X670-G2), high-density 10Gb aggregation, 10Gb/40Gb aggregation, and high-density 25Gb/50Gb aggregation, down to server PODs hosting vC-1, vC-2, ... vC-N.]
This is where XYZ Account must first have the ability to scale with customer demand,
delivering more than just disk space and processors.
• Scale – XYZ Account must be able to seamlessly fail over, scale up, scale down, and
optimize management of the applications and services.
• Flexibility - The XYZ Account infrastructure must have the ability to host heterogeneous and
interoperable technologies.
• Business - The business model costs might be optimized for operating expenses or toward
capital investment.
Cloud Computing (Control Plane)
(On-Premise)
Infrastructure
(as a Service)
Platform
(as a Service)
Storage
Servers
Networking
O/S
Middleware
Virtualization
Data
Applications
Runtime
Storage
Servers
Networking
O/S
Middleware
Virtualization
Data
Applications
Runtime
You manage
Managed by vendor
Managed by vendor
You manage
You manage
Storage
Servers
Networking
O/S
Middleware
Virtualization
Applications
Runtime
Data
Software
(as a Service)
Managed by vendor
Storage
Servers
Networking
O/S
Middleware
Virtualization
Applications
Runtime
Data
Public
Private
MSP
FABRIC
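The on-premise / IaaS / PaaS / SaaS stacks above differ only in where the management boundary sits. Below is a small sketch of that responsibility split, mirroring the slide's layer list; it is illustrative, not contractual.

```python
# Sketch of the shared-responsibility split drawn above: which layers the
# customer manages versus the vendor, by delivery model.

STACK = ["Networking", "Storage", "Servers", "Virtualization",
         "O/S", "Middleware", "Runtime", "Data", "Applications"]

# Index into STACK below which the vendor manages the layer.
VENDOR_MANAGES_UP_TO = {"On-Premise": 0, "IaaS": 4, "PaaS": 7, "SaaS": 9}

def responsibilities(model):
    cut = VENDOR_MANAGES_UP_TO[model]
    return {"vendor": STACK[:cut], "you": STACK[cut:]}

for model in ("On-Premise", "IaaS", "PaaS", "SaaS"):
    split = responsibilities(model)
    print(f"{model:11} you manage: {', '.join(split['you']) or '(nothing)'}")
```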
This is where Azure ExpressRoute lets XYZ Account create private connections between Azure
datacenters and XYZ Account infrastructure on or off premises.
• ExpressRoute connections don't go over the public Internet. They offer more reliability, faster
speeds, lower latencies, and higher security than typical Internet connections.
• Transferring data between on-premises systems and Azure can yield significant
cost benefits.
• XYZ Account can establish connections to Azure at an ExpressRoute location, such as an
Exchange provider facility, or connect directly to Azure from an existing WAN, such as
a multi-protocol label switching (MPLS) VPN provided by a network service provider.
Microsoft Azure (Control Plane)
Cloud: The key impact of this model
for the customer is a move from
managing physical servers to focusing on
logical management of data storage
through policies.
Compute Storage
Data Center Architecture
Considerations
Compute
Cache
Database
Storage
Client
Response
• North-south traffic oversubscription: up to 200:1 (client
request + server response = 20% of
traffic).
• Inter-rack latency: 150 microseconds.
Lookup/storage = 80% of traffic.
• Scale: up to 20 racks (non-
blocking 2-tier designs are optimal).
VM
VM VM
VM
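A rough sketch of the traffic-budget arithmetic behind these numbers: if only about 20% of traffic (client request plus server response) leaves the pod and the rest stays east-west between compute, cache, database, and storage tiers, the client-facing uplinks can tolerate heavy oversubscription. All figures below are assumed examples.

```python
# Traffic-budget arithmetic: required north-south uplink bandwidth per rack
# given a north-south share of traffic and an allowed oversubscription.
# The numbers are assumed examples, not measurements.

def uplink_budget(host_gbps_per_rack, north_south_fraction, oversubscription):
    north_south = host_gbps_per_rack * north_south_fraction
    return north_south / oversubscription

# 40 servers x 10G per rack, 20% leaving the pod, 200:1 allowed oversubscription:
print(uplink_budget(host_gbps_per_rack=400, north_south_fraction=0.2,
                    oversubscription=200))   # 0.4 Gb/s minimum per rack
```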
Purchase "vanity free"
This is where..
Open Compute might allow companies to
purchase "vanity free". Previous outdated
data center designs support more
monolithic computing.
• Low density X620 might help XYZ
Account to avoid stranded ports.
• Availability - Dual X620s can be
deployed to minimize impact to
maintenance.
• Flexibility of the X620 can offer
flexibility to support both 1G and 10G to
servers and storage.
One RACK Design
Closely
coupled
Nearly
coupled
Loosely
coupled
Shared Combo Ports
4x10GBASE-T & 4xSFP+
100Mb/1Gb/10GBASE-T
The monolithic datacenter
is dead.
[One-rack diagram: servers and storage dual-homed to redundant Summit switches, plus a management switch.]
Purchase "vanity free"
This is where..
Open Compute might allow companies to
purchase "vanity free". Previous outdated
data center designs support more
monolithic computing.
• Low density X620 might help XYZ
Account to avoid stranded ports.
• Availability - Dual X620s can be
deployed to minimize impact to
maintenance.
• Flexibility of the X620 can offer
flexibility to support both 1G and 10G to
servers and storage.
One RACK Design
Closely
coupled
Nearly
coupled
Loosely
coupled
Shared Combo Ports
4x10GBASE-T & 4xSFP+
100Mb/1Gb/10GBASE-T
The monolithic datacenter
is dead.
Servers
Storage
Summit
Management
Switch
Summit
Summit
Storage
Management
Servers
Open Compute - Two Rack Design
This is where, XYZ Account can reduce OPEX and
leverage a repeatable solution.
• With the spline setup, XYZ Account can put
redundant switches in the middle and link
each server to those switches.
• Fewer Hops between Servers - The important
thing is that each server is precisely one hop
from any other server.
• Avoid stranded ports – designs often have a
mix of fat and skinny nodes. If XYZ Account
deploys 48-port leaf switches, many
configurations might have anywhere from 16
to 24 stranded ports (see the sketch below).
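A quick sketch of the stranded-port arithmetic referenced above, assuming fixed 48-port leaf switches and one leaf port per server; the rack sizes are assumed examples.

```python
# Stranded-port arithmetic: whatever a rack does not populate on a fixed
# 48-port leaf is stranded. Rack sizes are assumed examples.

LEAF_PORTS = 48

def stranded_ports(servers_in_rack, ports_per_server=1):
    used = servers_in_rack * ports_per_server
    return max(LEAF_PORTS - used, 0)

for rack in (24, 32, 40, 48):
    print(f"{rack} servers -> {stranded_ports(rack)} stranded ports")
# 24 servers -> 24, 32 -> 16, 40 -> 8, 48 -> 0
```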
Two RACK
[Two-rack diagram: typical spline setup with each rack's servers and storage dual-homed to redundant Summit switches plus a management switch.]
Typical spline setup
Open Compute: Eight Rack POD Design
This is where a typical spline setup scales out to an eight-rack POD.
Leaf
Spine
[Eight-rack POD diagram: eight racks of servers and storage, each dual-homed to redundant Summit leaf switches with a management switch, uplinked to a common spine.]
Chassis V Spline
Fabric Modules (Spine)
I/OModules(Leaf)
Spine
Leaf
Proven value with legacy approach.
• Can not access Line cards.
• No L2/l3 recovery inside.
• No access to Fabric.
Disaggregated value...
• Control Top-of-Rack Switches
• L2/L3 protocols inside the Spline
• Full access to Spine Switches
No EGO, Complexity or Vendor Lock-in).
Fat-Tree
Clos / Cross-Bar
• Traditional 3-tier model (Less cabling).
• Link speeds must increase at every hop (Less
predictable latency).
• Common in Chassis based architectures (Optimized
for North/South traffic).
• Every Leaf is connected to every Spine (Efficient
utilization/ Very predictable latency).
• Always two hops to any leaf (More resiliency,
flexibility and performance).
• Friendlier to east/west traffic (The uplink to the
rest of the network is just another leaf).
The XYZ Account handshake layer:
• This is where convergence needs to happen – LAN/SAN, FCoE, ETS. Stop or allow
whatever you can (Efficient Multicasting).
• Virtualization happens with VXLAN and VMotion (Control by the overlay).
• N plus one fabric design needs to happen here (Delivers simple no vanity future proofing,
No-forklift migrations, interop between vendors and hit-less operation).
This is where,
a Fabric outperforms the Big Uglies
ONE to ONE: Spine Leaf
The XYZ Account Ethernet Expressway Layer: deliver massive scale...
• This is where low latency is critical, switch as quickly as you can. DO NOT slow down
the core keep it simple (Disaggregated Spline + One Big Ugly
• Elastic Capacity - Today s XYZ Account s spines are tomorrow s leafs. Dial-in the
bandwidth to your specific needs with the number of uplinks.
• Availability - the state of the network is kept in each switch; no single point of failure.
Seamless XYZ Account upgrades, easy to take a single switch out of service.
(Cloud Fabric) Disaggregation
Spine
Leaf
Legacy Challenges:
Complex/Slow/Expensive
Scale-up and Scale out
Vendor lock-in
Proprietary (HW, SW)Commodity
Fabric Modules (Spine)
I/OModules(Leaf)
Spline (Speed)
Active - Active redundancy
fn(x,y,z) The next convergence will be collapsing
datacenter designs into smaller, elastic form
factors for compute, storage and networking.
• This is where, you can never have enough.
• Customers want scale made easy.
• Hypervisor integration w cloud simplicity.
L2 L3 L2 L3
L2 L3 L2 L3
L2 L3
Start Small; Scale as You Grow
This is where, you can simply add
a Extreme Leaf Clusters
• Each cluster is independent
(including servers, storage,
database & interconnects).
• Each cluster can be used for
a different type of service.
• Delivers repeatable design
which can be added as a
commodity.
XYZ Account Spine
Leaf
Cluster Cluster Cluster
Egress
Scale
Ingress
Active / Active
VM
VMVM
RR RR
BGP Route-ReflectorRR
iBGP Adjacency
This is where
VXLAN (Route Distribution)
This is where Why VxLAN? It Flattens network to a single
tier from the XYZ Account end station
perspective.
• All IP/BGP based (Virtual eXtensible Local
Area Network). Host Route Distribution
decoupled from the Underlay protocol.
• VXLAN s goal is allowing dynamic large
scale isolated virtual L2 networks to be
created for virtualized and multi-
tenant environments.
• Route-Reflectors deployed for scaling
purposes - Easy setup, small configuration.
TrafficEngineer“likeATMorMPLS”
UDP
Start
Stop
UDP UDP
UseExistingIPNetwork
VM
VM
VM
VM
VM
VM
VM
VM
VTEP VTEP
Dense 10GbE
Interconnect using
breakout cables,
Copper or Fiber
VM
VM
VM
VM
VM
VM
VM
VM
VM
VM
VM
VM
VM
VM
VM
App 1
App 2
App 3
Intel, Facebook, OCP
Facebook 4-Post Architecture - Each
leaf or rack switch has up to 48 10G
downlinks. Segmentation or multi-tenancy
without routers.
• Each spine has 4 uplinks – one to each
leaf (4:1 oversubscription).
• Enable insertion of services without
sprawl (Analytics for fabric and
application forensics).
• No routers at spine. One failure
reduces cluster capacity to 75%.
(5 S's) Needs to be Scalable, Secure,
Shared, Standardized, and Simplified.
Network (Fit) Overlay Control
The XYZ Account the VxLan forwarding plane for NSX control:
• This is where logical switches span across physical hosts and network switches. Application
continuity is delivered with scale. Scalable Multi-tenancy across data center.
• Enabling L2 over L3 Infrastructure - Pool resources from multiple data centers with the ability to
recover from disasters faster.
• Address Network Sprawl with an VXLAN overlay. Deeper Integration with infrastructure and
operations partners, integrations, and frameworks for IT organizations.
Vmware NSX (Control Plane)
Management Plane deliver
by the NSX Manager.
Control Plane NSX Controller
Manages Logical networks
and data plane resources.
Extreme delivers an open
high performance data
plane with Scale
NSX Architecture and Components
Core / Campus diagram: 770/870 spine with 100Gb uplinks (X770, X870-96x-8c, X670-G2); 10Gb, 10Gb/40Gb and high-density 25Gb/50Gb aggregation; server PODs (vC-1, vC-2, ... vC-N) in the Data Center - Private Cloud.
This is where XYZ Account must first have the ability to scale with customer demand, delivering more than just disk space and processors.
• Scale - XYZ Account must be able to seamlessly fail over, scale up, scale down, and optimize management of the applications and services.
• Flexibility - the XYZ Account infrastructure must be able to host heterogeneous and interoperable technologies.
• Business - the business model costs might be optimized for operating expenses or toward capital investment.
Cloud Computing (Control Plane)
• On-Premise: you manage the full stack (Networking, Storage, Servers, Virtualization, O/S, Middleware, Runtime, Data, Applications).
• Infrastructure as a Service: the vendor manages Networking, Storage, Servers and Virtualization; you manage O/S, Middleware, Runtime, Data and Applications.
• Platform as a Service: the vendor manages everything up through Runtime; you manage Data and Applications.
• Software as a Service: the vendor manages the entire stack.
Deployment models: Public, Private, MSP (Fabric).
This is where Azure ExpressRoute lets XYZ Account create private connections between Azure datacenters and XYZ Account infrastructure on or off premises.
• ExpressRoute connections don't go over the public Internet. They offer more reliability, faster speeds, lower latencies, and higher security than typical Internet connections.
• Transferring data between on-premises systems and Azure over ExpressRoute can yield significant cost benefits.
• XYZ Account can establish connections to Azure at an ExpressRoute location, such as an Exchange provider facility, or connect directly to Azure from its existing WAN, such as a multi-protocol label switching (MPLS) VPN provided by a network service provider.
Microsoft Azure (Control Plane)
Cloud: the key impact of this model for the customer is a move from managing physical servers to focusing on logical management of data storage through policies (spanning compute and storage).
Data Center Architecture Considerations (client request -> compute -> cache -> database -> storage -> response):
• 80% north-south traffic; oversubscription up to 200:1 (client request + server response = 20% of traffic).
• Inter-rack latency: ~150 microseconds; lookup/storage = 80% of traffic.
• Scale: up to 20 racks (non-blocking two-tier designs are optimal).
Purchase "vanity free"
This is where..
Open Compute might allow companies to
purchase "vanity free". Previous outdated
data center designs support more
monolithic computing.
• Low density X620 might help XYZ
Account to avoid stranded ports.
• Availability - Dual X620s can be
deployed to minimize impact to
maintenance.
• Flexibility of the X620 can offer
flexibility to support both 1G and 10G to
servers and storage.
One Rack Design (closely coupled, nearly coupled, or loosely coupled)
Shared combo ports: 4x10GBASE-T & 4xSFP+, 100Mb/1Gb/10GBASE-T.
The monolithic datacenter is dead.
Rack layout: servers and storage connected to redundant Summit switches plus a management switch.
Open Compute - Two Rack Design
This is where XYZ Account can reduce OPEX and leverage a repeatable solution (see the stranded-port sketch below).
• With the spline setup, XYZ Account can put redundant switches in the middle and link each server to those switches.
• Fewer hops between servers - the important thing is that each server is exactly one hop from any other server.
• Avoid stranded ports - designs often have a mix of fat and skinny nodes. If XYZ Account deploys 48-port leaf switches, many configurations might have anywhere from 16 to 24 stranded ports.
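A quick sketch of the stranded-port arithmetic from the last bullet, with hypothetical rack sizes and uplink counts:

```python
def stranded_ports(leaf_ports: int, servers_in_rack: int, uplinks: int = 4) -> int:
    """Ports left unused on a fixed-size leaf after cabling servers and uplinks."""
    return max(leaf_ports - servers_in_rack - uplinks, 0)

# Hypothetical racks: a 48-port leaf with 20-28 servers per rack leaves
# roughly 16-24 ports stranded, in line with the bullet above.
for servers in (20, 24, 28):
    print(servers, "servers ->", stranded_ports(48, servers), "stranded ports")
```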
Two-rack layout (typical spline setup): each rack holds servers and storage cabled to redundant Summit switches and a management switch.
Open Compute: Eight Rack POD Design
This is where the typical spline setup grows into an eight-rack POD: leaf switches in each rack (servers, storage, Summit switches, management switch) uplinked to a spine pair.
OPEX Components of Converged Environment
• Security, Compliance, Automation, Operations
• Compute, Storage, Networking (X, Y, Z axes)
• Pooled compute, network, and storage capacity
XYZ Account 2017 Design
CAPEX Components of Converged Environment (2016 design: 10G compute, memory and storage)
• Cores: 6 / 12 / 16 / 20
• Memory: 64GB / 128GB / 192GB / 256GB / 512GB
• Spindles (SSD): 3.6TB / 4.8TB / 6TB / 8TB / 10TB
• Network: 10G RJ45 / SFP+ / QSFP+ / QSFP28
Jeff Green, 2017, Rev. 1 (South)
Legend
10G Passive (PN 10306 ~ 5m, 10307~ 10M)
10G SFP+ Active copper cable (up to 100m)
40G Passive (PN 10321 ~3m, 10323~ 5m)
40G Active (PN 10315~10M, 10316 ~20m, 10318~ 100m)
40G Fan-out (PN 10321 ~3m, 10322 ~5m, PN 10GB-4-F10-QSFP ~10m, PN 10GB-4-F20-QSFP ~20m)
10G Passive (PN 10304 ~1m, 10305~3m, 10306~5m)
SFP+ DAC Cables
QSFP+ DAC Cables
10 LRM 220m (720ft/plus mode conditioning) (PN 10303)
10GBASE-T over Class E Cat 6 (55M) (10G)
10GBASE-T over Class E Cat 6a or 7 (100M) (10G)
10 SR over OM3 (300M) or OM4 (400M) (PN 10301)
10 LR over single mode (10KM) 1310nm (PN 10302)
10 ER over single mode (40KM) 1550nm (PN 10309)
10 ZR over single mode (80KM) 1550nm (PN 10310)
802.3bz 10GBASE-T (100M) for Cat 6 (5G)
10G Fiber
10G Copper
802.3bz 10GBASE-T (100M) for Cat 5e (2.5G)
Prescriptive Services (10G / 40G, 25G / 50G / 100G)
Overall architecture: overlay (SDN: NSX, ACI, other) over underlay (spine-leaf, MLAG, Nexus, other). Applications move from manual, slow provisioning to automated provisioning and configuration with intelligence in software (Extreme core 10G, Extreme edge PoE).
QSFP28 DACs (passive and active cables). LR4 optics: up to 10 km on single mode, 2 km lower-cost module (Lite), wavelengths 1295.56, 1300.05, 1304.58, 1309.14 nm. 100G = 4 x 25G lanes.
• 10411 - 100Gb, QSFP28-QSFP28 DAC, 1m
• 10413 - 100Gb, QSFP28-QSFP28 DAC, 3m
• 10414 - 100Gb, QSFP28-QSFP28 DAC, 5m
• 10421 - 100Gb, QSFP28 to 4x SFP28 (4x25Gb) DAC breakout, 1m
• 10423 - 100Gb, QSFP28 to 4x SFP28 (4x25Gb) DAC breakout, 3m
• 10424 - 100Gb, QSFP28 to 4x SFP28 (4x25Gb) DAC breakout, 5m
• 10426 - 100Gb, QSFP28 to 2x SFP28 (2x50Gb) DAC breakout, 1m
• 10428 - 100Gb, QSFP28 to 2x SFP28 (2x50Gb) DAC breakout, 3m
• 10434 - 100Gb, QSFP28-QSFP28 DAC, 5m
• 10435 - 100Gb, QSFP28-QSFP28 DAC, 7m
• 10436 - 100Gb, QSFP28-QSFP28 DAC, 10m
• 10437 - 100Gb, QSFP28-QSFP28 DAC, 20m
• 10441 - 100Gb, QSFP28 to 4x SFP28 (4x25Gb) DAC breakout, 5m
• 10442 - 100Gb, QSFP28 to 4x SFP28 (4x25Gb) DAC breakout, 7m
• 10443 - 100Gb, QSFP28 to 4x SFP28 (4x25Gb) DAC breakout, 10m
• 10444 - 100Gb, QSFP28 to 4x SFP28 (4x25Gb) DAC breakout, 20m
Extreme Data Center Switch Options (10, 25, 40, 50, 100G)
• Layer 2 multi-chassis port channel (vPC or MLAG); ISSU for a redundant pair, with less than 2000 ms impact for the upgrade.
• Spine (100G) core with border and control leaves; 4 x 100G uplinks per leaf group serving the Campus, Resnet and Data Center leaves.
• Minimum MAC address table size should be 256K; ARP table capacity should support a minimum of 64K users in a single VLAN.
• Deep interface buffers or intelligent buffer management; VXLAN support.
• Scale up: spine-leaf delivers the interconnect for distributed compute workloads across Data Center, Campus and Resnet.
X870-32c Spine/Leaf Switch
• 32 x 10/25/40/50/100GbE QSFP28 ports
• 96 x 10GbE ports (via 24 ports of 4x10Gb breakout) plus 8 x 10/25/40/50/100GbE ports
X690 10Gb Leaf Switches Enabled with 100Gb
New 10Gb leaf aggregation switches for fiber and 10GBASE-T applications with 100Gb Ethernet.
• Enabled with 40Gb & 100Gb high-speed uplinks
• Shares power supply and fan modules with X870
• Stacks with X870 using SummitStack-V400
460 Multirate / V400 Port Extender / X620 Multirate: shared combo ports, 4x10GBASE-T & 4xSFP+, 100Mb/1Gb/10GBASE-T
ExtremeFabric: a way to simplify network design & operation
• Fabric transparent to end devices; combines the fabric elements into a single domain.
• Fabric appears as a single device; policy and overlays applied at the fabric edge.
• No subnets, no VLANs, no VRFs required within the fabric.
• Zero-touch configuration - make the network act like a single device.
Data Center Fabric
100G spines interconnect leaves serving ISP 1, ISP 2, residential housing, hot spots in the local town, the main campus and the university (see the path sketch below).
• Everything in the spine-leaf is just 2 hops away.
• A separate path is available to each spine.
• Same latency for each path.
• Scale out by adding spines.
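The "two hops, one independent path per spine, same latency" property can be seen by enumerating leaf-to-leaf paths; a small sketch with hypothetical node names (not the actual fabric inventory):

```python
from itertools import permutations

SPINES = ["spine-1", "spine-2"]                     # hypothetical 2-spine fabric
LEAVES = ["leaf-campus", "leaf-resnet", "leaf-dc"]

def leaf_to_leaf_paths(src: str, dst: str):
    """Every path between two leaves crosses exactly one spine: always 2 hops."""
    return [(src, spine, dst) for spine in SPINES]

for src, dst in permutations(LEAVES, 2):
    paths = leaf_to_leaf_paths(src, dst)
    print(f"{src} -> {dst}: {len(paths)} equal-cost 2-hop paths (one per spine)")
```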
Organizing Compute, Management & Edge
Edge Leaf
L3 to DC Fabric
L2 to External Networks
Compute Clusters Infrastructure Clusters (Edge, Storage,
vCenter and Cloud Management
System)
WAN
Internet
L3
L2
L3
L2
Leaf
Spine
L2 VLANs
f or bridging
Single vCenter Server to manage all Management, Edge and Compute Clusters
• NSX Manager deployed in the Mgmt Cluster and paired to the vCenter Server
• NSX Controllers can also be deployed into the Management Cluster
• Reduces vCenter Server licensing requirements
Separation of compute, management and Edge function with following design
advantage. Managing life-cycle of resources for compute and Edge functions.
• Ability to isolate and develop span of control
• Capacity planning – CPU, Memory & NIC
• Upgrades & migration flexibility
Automation control over area or function that requires frequent changes. app-
tier, micro-segmentation & load-balancer. Three areas of technology require
considerations.
• Interaction with physical network
• Overlay (VXLAN) impact
• Integration with vSphere clustering
Registration or
Mapping
WebVM
WebVM
VM
VM WebVM
Compute Cluster
WebVM VM
VM
Compute
A
vCenter Server
NSX Manager NSX
Controller
Compute
B
Edge and Control VM
Edge Cluster
Management Cluster
Organizing Compute, Management & Edge
Edge Leaf
L3 to DC Fabric
L2 to External Networks
Compute Clusters Infrastructure Clusters (Edge, Storage,
vCenter and Cloud Management
System)
WAN
Internet
L3
L2
L3
L2
Leaf
Spine
L2 VLANs
f or bridging
Single vCenter Server to manage all Management, Edge and Compute Clusters
• NSX Manager deployed in the Mgmt Cluster and paired to the vCenter Server
• NSX Controllers can also be deployed into the Management Cluster
• Reduces vCenter Server licensing requirements
Separation of compute, management and Edge function with following design
advantage. Managing life-cycle of resources for compute and Edge functions.
• Ability to isolate and develop span of control
• Capacity planning – CPU, Memory & NIC
• Upgrades & migration flexibility
Automation control over area or function that requires frequent changes. app-
tier, micro-segmentation & load-balancer. Three areas of technology require
considerations.
• Interaction with physical network
• Overlay (VXLAN) impact
• Integration with vSphere clustering
Registration or
Mapping
WebVM
WebVM
VM
VM WebVM
Compute Cluster
WebVM VM
VM
Compute
A
vCenter Server
NSX Manager NSX
Controller
Compute
B
Edge and Control VM
Edge Cluster
Management Cluster
Preparation - Netsite - Operation: Convergence 3.0 (automation in seconds), flexibility and choice.
Traditional networking configuration tasks (L2/L3):
Initial configuration
• Multi-chassis LAG
• Routing configuration
• SVIs/RVIs
• VRRP/HSRP
• LACP
• VLANs
Recurring configuration
• SVIs/RVIs
• VRRP/HSRP (advertise new subnets)
• Access lists (ACLs)
• VLANs: adjust VLANs on trunks, VLAN-to-STP/MST mapping, add VLANs on uplinks and server ports
NSX is AGNOSTIC to the underlay network: L2, L3, or any combination. Only TWO requirements: IP connectivity and an MTU of 1600 (see the overhead check below).
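The 1600-byte MTU requirement follows from the VXLAN encapsulation overhead. A minimal check of the arithmetic, using standard header sizes; the 1600 figure is taken from the slide above:

```python
# VXLAN wraps the original frame in outer Ethernet, IP, UDP and VXLAN headers.
INNER_PAYLOAD_MTU = 1500   # default guest / VM MTU
OUTER_ETH = 14             # outer Ethernet header (untagged)
OUTER_IP = 20              # outer IPv4 header
OUTER_UDP = 8              # outer UDP header
VXLAN_HDR = 8              # VXLAN header

required = INNER_PAYLOAD_MTU + OUTER_ETH + OUTER_IP + OUTER_UDP + VXLAN_HDR
print(f"encapsulated size: {required} bytes")         # 1550
print(f"fits in a 1600-byte underlay MTU: {required <= 1600}")
```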
Network & Security Services in Software
WAN/Internet
L3
L2
POD A
L3
L2
POD B
VLAN X Stretch VLAN Y Stretch
L3 Topologies & Design Considerations. With XoS 670 Cores L2
Interfaces by default IP packet as large as 9214 Bytes can
be sent and received (no configuration is required). L3
interfaces by default IP packet as large as 1500 Bytes can
be sent and received. Configuration step for L3 interfaces:
change MTU to 9214 “mtu ” command) IP packet as
large as 9214 Bytes can be sent and received
• L3 ToR designs have dynamic routing protocol between
leaf and spine.
• BGP, OSPF or ISIS can be used
• Rack advertises small set of prefixes
• (Unique VLAN/subnet per rack)
• Equal cost paths to the other racks prefixes.
• Switch provides default gateway service for each VLAN
subnet
• 801.Q trunks with a small set of VLANs for VMkernel
traffic
• Rest of the session assumes L3 topology
L3
L2
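Since the L3 ToR design above relies on equal-cost paths toward every spine, the sketch below shows one common way a switch picks a next hop: hash the flow's 5-tuple and take the result modulo the number of equal-cost uplinks. The hash function, addresses and next-hop list are illustrative, not any vendor's actual algorithm.

```python
import hashlib

SPINE_NEXT_HOPS = ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"]  # hypothetical

def ecmp_next_hop(src_ip, dst_ip, proto, src_port, dst_port, next_hops=SPINE_NEXT_HOPS):
    """Pick an equal-cost next hop by hashing the flow 5-tuple.

    Per-flow hashing keeps all packets of one flow on one path (no reordering)
    while spreading different flows across all spines.
    """
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return next_hops[digest % len(next_hops)]

print(ecmp_next_hop("10.66.1.25", "10.88.1.30", "tcp", 49152, 443))
```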
Network & Security Services in Software
WAN/Internet
L3
L2
POD A
L3
L2
POD B
VLAN X Stretch VLAN Y Stretch
L3 Topologies & Design Considerations. With XoS 670 Cores L2
Interfaces by default IP packet as large as 9214 Bytes can
be sent and received (no configuration is required). L3
interfaces by default IP packet as large as 1500 Bytes can
be sent and received. Configuration step for L3 interfaces:
change MTU to 9214 “mtu ” command) IP packet as
large as 9214 Bytes can be sent and received
• L3 ToR designs have dynamic routing protocol between
leaf and spine.
• BGP, OSPF or ISIS can be used
• Rack advertises small set of prefixes
• (Unique VLAN/subnet per rack)
• Equal cost paths to the other racks prefixes.
• Switch provides default gateway service for each VLAN
subnet
• 801.Q trunks with a small set of VLANs for VMkernel
traffic
• Rest of the session assumes L3 topology
L3
L2
XYZ Account (Spine): CORE 1 and CORE 2
Preparation - Netsite - Operation: Convergence 3.0 (automation in seconds), flexibility and choice.
Extreme's platform: Wi-Fi, Analytics, Security and Policy.
• Lync traffic engineering with Purview analytics service insertion
• Multi-tenant networks, automation and orchestration
• Self-provisioned network slicing (proof-of-concept implementation)
Better experience through simpler solutions that deliver long-term value: one wired and wireless platform, and customer care with strong first-call resolution.
NSX Controllers Functions
Controller VXLAN directory service (per logical router/VXLAN: MAC table, ARP table, VTEP table).
This is where NSX provides XYZ Account one control plane to distribute network information to ESXi hosts. NSX Controllers are clustered for scale-out and high availability.
• Network information is distributed across nodes in a Controller cluster (slicing; see the sketch below).
• Removes the VXLAN dependency on multicast routing/PIM in the physical network.
• Provides suppression of ARP broadcast traffic in VXLAN networks.
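The "slicing" bullet means each logical switch or router is owned by one node in the controller cluster. A toy illustration of distributing VNIs across a three-node cluster by simple modulo hashing; this is not the actual NSX assignment algorithm, just a way to picture the concept.

```python
# Illustrative only: spread logical networks (VNIs) across a 3-node controller
# cluster so each controller owns a slice of the state.
CONTROLLERS = ["controller-1", "controller-2", "controller-3"]

def owner_of(vni: int, controllers=CONTROLLERS) -> str:
    """Deterministically assign a VNI to one controller node."""
    return controllers[vni % len(controllers)]

for vni in (5000, 5001, 5002):
    print(f"VXLAN {vni} -> {owner_of(vni)}")
```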
Server Farm (Leafs): racks of servers and storage behind redundant Summit switches with a management switch per rack - compute workload racks plus services-and-connectivity racks (media servers, routers, firewalls, PBXs).
VXLAN transport network: vSphere hosts with VTEPs (VTEP2 10.20.10.11 on Host 1, VTEP3 10.20.10.12, VTEP4 10.20.10.13 on Host 2) carry VMs MAC1-MAC4 on VXLAN 5002 through vSphere Distributed Switches.
When VXLAN is deployed it creates an automatic port group whose VLAN ID must be the same per VDS. If the fabric is L2, this usually means the same IP subnets are also used across racks for a given type of traffic. For a given host, only one VDS is responsible for VXLAN traffic, and a single VDS can span multiple clusters.
Transport Zone, VTEP, logical networks and VDS:
• The VTEP VMkernel interface belongs to a specific VLAN-backed port group dynamically created during cluster VXLAN preparation.
• One or more VDS can be part of the same transport zone.
• A given logical switch can span multiple VDS.
vSphere host (ESXi) to L3 ToR switch: routed uplinks (ECMP) and a VLAN trunk (802.1Q) carrying (a small sketch of this addressing plan follows):
• VLAN 66 Mgmt: 10.66.1.25/26, DGW 10.66.1.1
• VLAN 77 vMotion: 10.77.1.25/26, GW 10.77.1.1
• VLAN 88 VXLAN: 10.88.1.25/26, DGW 10.88.1.1
• VLAN 99 Storage: 10.99.1.25/26, GW 10.99.1.1
ToR SVIs: SVI 66 10.66.1.1/26, SVI 77 10.77.1.1/26, SVI 88 10.88.1.1/26, SVI 99 10.99.1.1/26 (span of VLANs limited to the rack).
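The per-rack addressing above (VLANs 66/77/88/99 with /26 subnets) can be generated programmatically. A hedged sketch that reproduces the gateways and host addresses shown; the plan itself and the host index are assumptions taken from the slide, not a configuration export.

```python
import ipaddress

# VLAN roles and base /26 subnets, mirroring the slide above (hypothetical plan).
RACK_PLAN = {
    66: ("Mgmt",    "10.66.1.0/26"),
    77: ("vMotion", "10.77.1.0/26"),
    88: ("VXLAN",   "10.88.1.0/26"),
    99: ("Storage", "10.99.1.0/26"),
}

def rack_interfaces(host_index: int = 25):
    """Yield (vlan, role, SVI/gateway, host address) per the addressing scheme above."""
    for vlan, (role, cidr) in RACK_PLAN.items():
        net = ipaddress.ip_network(cidr)
        gateway = net.network_address + 1          # e.g. 10.66.1.1, the ToR SVI
        host = net.network_address + host_index    # e.g. 10.66.1.25 for this host
        yield vlan, role, f"{gateway}/26", f"{host}/26"

for vlan, role, svi, host in rack_interfaces():
    print(f"VLAN {vlan:>2} {role:<8} SVI {svi:<14} host {host}")
```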
Traditional control vs. cloud-based control
XYZ Account services and user repositories (corporate control): LDAP, NAC, DHCP, RADIUS, captive portal, DNS, MDM.
Management cluster (control): NAC, Analytics, Netsite - cloud-based control.
Single-rack connectivity: a leaf with an L2/L3 boundary, VMkernel VLANs and VLANs for management VMs, 802.1Q trunk into the routed DC fabric. Dual-rack connectivity adds a second leaf for resiliency.
Extreme/VMware deployment considerations - this is where the management cluster is typically provisioned on a single rack.
• The single-rack design still requires redundant uplinks from host to ToR carrying VLANs for management.
• A dual-rack design gives increased resiliency (handling single-rack failure scenarios), which may be a requirement for a highly available design.
• Typically in a small design the management and Edge clusters are collapsed. Exclude the management cluster from VXLAN preparation.
ToR #1 / ToR #2: vCenter Server, NSX Manager, Controller 1, Controller 2, Controller 3.
• NSX Manager is deployed as a virtual appliance: 4 vCPU, 12 GB of RAM per node.
• Consider reserving memory for vCenter to ensure good Web Client performance.
• Configurations cannot be modified.
Extreme Networks: Compute, Storage and Networking Integration...
Extreme Networks: Control, Analytics & Security Integration...
急速办(bedfordhire毕业证书)英国贝德福特大学毕业证成绩单原版一模一样急速办(bedfordhire毕业证书)英国贝德福特大学毕业证成绩单原版一模一样
急速办(bedfordhire毕业证书)英国贝德福特大学毕业证成绩单原版一模一样
3ipehhoa
 
guildmasters guide to ravnica Dungeons & Dragons 5...
guildmasters guide to ravnica Dungeons & Dragons 5...guildmasters guide to ravnica Dungeons & Dragons 5...
guildmasters guide to ravnica Dungeons & Dragons 5...
Rogerio Filho
 
1.Wireless Communication System_Wireless communication is a broad term that i...
1.Wireless Communication System_Wireless communication is a broad term that i...1.Wireless Communication System_Wireless communication is a broad term that i...
1.Wireless Communication System_Wireless communication is a broad term that i...
JeyaPerumal1
 
Bridging the Digital Gap Brad Spiegel Macon, GA Initiative.pptx
Bridging the Digital Gap Brad Spiegel Macon, GA Initiative.pptxBridging the Digital Gap Brad Spiegel Macon, GA Initiative.pptx
Bridging the Digital Gap Brad Spiegel Macon, GA Initiative.pptx
Brad Spiegel Macon GA
 
JAVIER LASA-EXPERIENCIA digital 1986-2024.pdf
JAVIER LASA-EXPERIENCIA digital 1986-2024.pdfJAVIER LASA-EXPERIENCIA digital 1986-2024.pdf
JAVIER LASA-EXPERIENCIA digital 1986-2024.pdf
Javier Lasa
 
APNIC Foundation, presented by Ellisha Heppner at the PNG DNS Forum 2024
APNIC Foundation, presented by Ellisha Heppner at the PNG DNS Forum 2024APNIC Foundation, presented by Ellisha Heppner at the PNG DNS Forum 2024
APNIC Foundation, presented by Ellisha Heppner at the PNG DNS Forum 2024
APNIC
 
This 7-second Brain Wave Ritual Attracts Money To You.!
This 7-second Brain Wave Ritual Attracts Money To You.!This 7-second Brain Wave Ritual Attracts Money To You.!
This 7-second Brain Wave Ritual Attracts Money To You.!
nirahealhty
 
1比1复刻(bath毕业证书)英国巴斯大学毕业证学位证原版一模一样
1比1复刻(bath毕业证书)英国巴斯大学毕业证学位证原版一模一样1比1复刻(bath毕业证书)英国巴斯大学毕业证学位证原版一模一样
1比1复刻(bath毕业证书)英国巴斯大学毕业证学位证原版一模一样
3ipehhoa
 
Internet-Security-Safeguarding-Your-Digital-World (1).pptx
Internet-Security-Safeguarding-Your-Digital-World (1).pptxInternet-Security-Safeguarding-Your-Digital-World (1).pptx
Internet-Security-Safeguarding-Your-Digital-World (1).pptx
VivekSinghShekhawat2
 
Comptia N+ Standard Networking lesson guide
Comptia N+ Standard Networking lesson guideComptia N+ Standard Networking lesson guide
Comptia N+ Standard Networking lesson guide
GTProductions1
 
原版仿制(uob毕业证书)英国伯明翰大学毕业证本科学历证书原版一模一样
原版仿制(uob毕业证书)英国伯明翰大学毕业证本科学历证书原版一模一样原版仿制(uob毕业证书)英国伯明翰大学毕业证本科学历证书原版一模一样
原版仿制(uob毕业证书)英国伯明翰大学毕业证本科学历证书原版一模一样
3ipehhoa
 
一比一原版(CSU毕业证)加利福尼亚州立大学毕业证成绩单专业办理
一比一原版(CSU毕业证)加利福尼亚州立大学毕业证成绩单专业办理一比一原版(CSU毕业证)加利福尼亚州立大学毕业证成绩单专业办理
一比一原版(CSU毕业证)加利福尼亚州立大学毕业证成绩单专业办理
ufdana
 

Recently uploaded (20)

Multi-cluster Kubernetes Networking- Patterns, Projects and Guidelines
Multi-cluster Kubernetes Networking- Patterns, Projects and GuidelinesMulti-cluster Kubernetes Networking- Patterns, Projects and Guidelines
Multi-cluster Kubernetes Networking- Patterns, Projects and Guidelines
 
test test test test testtest test testtest test testtest test testtest test ...
test test  test test testtest test testtest test testtest test testtest test ...test test  test test testtest test testtest test testtest test testtest test ...
test test test test testtest test testtest test testtest test testtest test ...
 
一比一原版(SLU毕业证)圣路易斯大学毕业证成绩单专业办理
一比一原版(SLU毕业证)圣路易斯大学毕业证成绩单专业办理一比一原版(SLU毕业证)圣路易斯大学毕业证成绩单专业办理
一比一原版(SLU毕业证)圣路易斯大学毕业证成绩单专业办理
 
Latest trends in computer networking.pptx
Latest trends in computer networking.pptxLatest trends in computer networking.pptx
Latest trends in computer networking.pptx
 
BASIC C++ lecture NOTE C++ lecture 3.pptx
BASIC C++ lecture NOTE C++ lecture 3.pptxBASIC C++ lecture NOTE C++ lecture 3.pptx
BASIC C++ lecture NOTE C++ lecture 3.pptx
 
一比一原版(LBS毕业证)伦敦商学院毕业证成绩单专业办理
一比一原版(LBS毕业证)伦敦商学院毕业证成绩单专业办理一比一原版(LBS毕业证)伦敦商学院毕业证成绩单专业办理
一比一原版(LBS毕业证)伦敦商学院毕业证成绩单专业办理
 
The+Prospects+of+E-Commerce+in+China.pptx
The+Prospects+of+E-Commerce+in+China.pptxThe+Prospects+of+E-Commerce+in+China.pptx
The+Prospects+of+E-Commerce+in+China.pptx
 
How to Use Contact Form 7 Like a Pro.pptx
How to Use Contact Form 7 Like a Pro.pptxHow to Use Contact Form 7 Like a Pro.pptx
How to Use Contact Form 7 Like a Pro.pptx
 
急速办(bedfordhire毕业证书)英国贝德福特大学毕业证成绩单原版一模一样
急速办(bedfordhire毕业证书)英国贝德福特大学毕业证成绩单原版一模一样急速办(bedfordhire毕业证书)英国贝德福特大学毕业证成绩单原版一模一样
急速办(bedfordhire毕业证书)英国贝德福特大学毕业证成绩单原版一模一样
 
guildmasters guide to ravnica Dungeons & Dragons 5...
guildmasters guide to ravnica Dungeons & Dragons 5...guildmasters guide to ravnica Dungeons & Dragons 5...
guildmasters guide to ravnica Dungeons & Dragons 5...
 
1.Wireless Communication System_Wireless communication is a broad term that i...
1.Wireless Communication System_Wireless communication is a broad term that i...1.Wireless Communication System_Wireless communication is a broad term that i...
1.Wireless Communication System_Wireless communication is a broad term that i...
 
Bridging the Digital Gap Brad Spiegel Macon, GA Initiative.pptx
Bridging the Digital Gap Brad Spiegel Macon, GA Initiative.pptxBridging the Digital Gap Brad Spiegel Macon, GA Initiative.pptx
Bridging the Digital Gap Brad Spiegel Macon, GA Initiative.pptx
 
JAVIER LASA-EXPERIENCIA digital 1986-2024.pdf
JAVIER LASA-EXPERIENCIA digital 1986-2024.pdfJAVIER LASA-EXPERIENCIA digital 1986-2024.pdf
JAVIER LASA-EXPERIENCIA digital 1986-2024.pdf
 
APNIC Foundation, presented by Ellisha Heppner at the PNG DNS Forum 2024
APNIC Foundation, presented by Ellisha Heppner at the PNG DNS Forum 2024APNIC Foundation, presented by Ellisha Heppner at the PNG DNS Forum 2024
APNIC Foundation, presented by Ellisha Heppner at the PNG DNS Forum 2024
 
This 7-second Brain Wave Ritual Attracts Money To You.!
This 7-second Brain Wave Ritual Attracts Money To You.!This 7-second Brain Wave Ritual Attracts Money To You.!
This 7-second Brain Wave Ritual Attracts Money To You.!
 
1比1复刻(bath毕业证书)英国巴斯大学毕业证学位证原版一模一样
1比1复刻(bath毕业证书)英国巴斯大学毕业证学位证原版一模一样1比1复刻(bath毕业证书)英国巴斯大学毕业证学位证原版一模一样
1比1复刻(bath毕业证书)英国巴斯大学毕业证学位证原版一模一样
 
Internet-Security-Safeguarding-Your-Digital-World (1).pptx
Internet-Security-Safeguarding-Your-Digital-World (1).pptxInternet-Security-Safeguarding-Your-Digital-World (1).pptx
Internet-Security-Safeguarding-Your-Digital-World (1).pptx
 
Comptia N+ Standard Networking lesson guide
Comptia N+ Standard Networking lesson guideComptia N+ Standard Networking lesson guide
Comptia N+ Standard Networking lesson guide
 
原版仿制(uob毕业证书)英国伯明翰大学毕业证本科学历证书原版一模一样
原版仿制(uob毕业证书)英国伯明翰大学毕业证本科学历证书原版一模一样原版仿制(uob毕业证书)英国伯明翰大学毕业证本科学历证书原版一模一样
原版仿制(uob毕业证书)英国伯明翰大学毕业证本科学历证书原版一模一样
 
一比一原版(CSU毕业证)加利福尼亚州立大学毕业证成绩单专业办理
一比一原版(CSU毕业证)加利福尼亚州立大学毕业证成绩单专业办理一比一原版(CSU毕业证)加利福尼亚州立大学毕业证成绩单专业办理
一比一原版(CSU毕业证)加利福尼亚州立大学毕业证成绩单专业办理
 

Data center pov 2017 v3

Public cloud and intelligent software expose the roadblocks in older designs: silos, complexity, and scaling.

Application experience with full context:
• Application analytics stop the finger-pointing by pairing each application with its network response.
• Flow or bit bucket? The collector scales to 3 million flows (6 million flows shown across the solution). X460 IPFIX sensor: 4,000 flows (2,048 ingress, 2,048 egress). PV-FC-180 and S- or K-Series sensors (Core Flow 2): 1 million flows. Flow-based access points: 8K flows per AP from the controller, or 24K flows on a C35. Why not do this in the network? (A quick capacity check follows this block.)

Business value: context (bandwidth, IP, HTTP, apps) delivered on a platform of automation, control, and experience within one solution framework. Is your network faster today than it was three years ago? Going forward it should deliver more, faster, different.

DIY fabric for the DIY data center. Three fundamental building blocks for data center network automation:
• Orchestration (OpenStack, vRealize, ESX, NSX, MS Azure, ExtremeConnect)
• Overlay (VXLAN, NVGRE, ...)
• Underlay (traditional L2/L3 protocols: OSPF, MLAG, etc.)

How is a traditional aggregated technology like a duck? A duck can swim, walk, and fly, but... XoS fn(x, y, z), where x is compute, y is memory and storage, and z is I/O bandwidth, is like an elastic fabric:
• You can never have enough.
• Customers want scale made easy.
• Hypervisor integration.
The next convergence will collapse data center designs into smaller, elastic form factors for compute, storage, and networking. The application is always the driver. (Compare Summit, Cisco ACI, and HP Moonshot.)

Chassis versus Spline, where fabric modules act as the spine and I/O modules as the leafs:
• Proven value with the legacy chassis approach, but you cannot access the line cards, there is no L2/L3 recovery inside, and there is no access to the fabric.
• Disaggregated value: control the top-of-rack switches, run L2/L3 protocols inside the spline, and get full access to the spine switches. No ego, complexity, or vendor lock-in.

Fat-Tree versus Clos / crossbar:
• Fat-Tree: the traditional three-tier model with less cabling, but link speeds must increase at every hop (less predictable latency); common in chassis-based architectures optimized for north/south traffic.
• Clos: every leaf is connected to every spine (efficient utilization, very predictable latency); always two hops to any leaf (more resiliency, flexibility, and performance); friendlier to east/west traffic, since the uplink to the rest of the network is just another leaf.

The XYZ Account handshake layer:
• This is where convergence needs to happen: LAN/SAN, FCoE, ETS. Stop or allow whatever you can (efficient multicasting).
• Virtualization happens with VXLAN and vMotion (controlled by the overlay).
• An N-plus-one fabric design needs to happen here; it delivers simple, no-vanity future proofing, no-forklift migrations, interoperability between vendors, and hitless operation.
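To make the flow-scale numbers above concrete, here is a minimal sketch that adds up worst-case IPFIX flow-table demand for a hypothetical mix of X460 edge switches, Core Flow sensors, and flow-based APs, and compares it against a 3-million-flow collector. The per-device limits are the ones quoted above; the device counts are assumptions chosen only for illustration.

```python
# Rough IPFIX capacity check for a hypothetical deployment.
# Per-device flow limits come from the slide above; device counts are assumed.

FLOW_LIMITS = {
    "x460_switch": 4_000,      # 2,048 ingress + 2,048 egress flow records
    "coreflow2":   1_000_000,  # PV-FC-180 / S- or K-Series sensor
    "ap":          8_000,      # flow-based AP via the controller
    "c35":         24_000,
}

COLLECTOR_CAPACITY = 3_000_000  # flows the collector can hold

def total_flow_demand(inventory: dict[str, int]) -> int:
    """Sum worst-case concurrent flows exported by every sensor."""
    return sum(FLOW_LIMITS[kind] * count for kind, count in inventory.items())

if __name__ == "__main__":
    deployment = {"x460_switch": 100, "coreflow2": 1, "ap": 150, "c35": 2}  # assumed counts
    demand = total_flow_demand(deployment)
    print(f"worst-case flows   : {demand:,}")
    print(f"collector headroom : {COLLECTOR_CAPACITY - demand:,}")
```

With those assumed counts the demand stays under the collector limit; growing the AP count is what erodes the headroom fastest.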
This is where a fabric outperforms the big uglies: one-to-one, spine to leaf.

The XYZ Account Ethernet expressway layer delivers massive scale:
• This is where low latency is critical; switch as quickly as you can. Do not slow down the core, keep it simple (disaggregated spline plus one big ugly).
• Elastic capacity: today's XYZ Account spines are tomorrow's leafs. Dial in the bandwidth to your specific needs with the number of uplinks (see the oversubscription sketch below).
• Availability: the state of the network is kept in each switch, so there is no single point of failure. Seamless XYZ Account upgrades, and it is easy to take a single switch out of service.

(Cloud fabric) disaggregation on a spine-leaf design. Legacy challenges: complex, slow, expensive; scale-up and scale-out; vendor lock-in; proprietary hardware and software. The commodity alternative keeps fabric modules as the spine and I/O modules as the leafs: spline for speed.
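The "dial in the bandwidth with the number of uplinks" point reduces to simple arithmetic. The sketch below computes the leaf oversubscription ratio for a few uplink counts; the port counts and speeds are assumptions chosen to resemble a common leaf configuration, not a statement about any specific SKU.

```python
# Leaf oversubscription = total downlink bandwidth / total uplink bandwidth.
# Example figures are assumptions: 48 x 10G server ports, 100G uplinks.

def oversubscription(downlinks: int, downlink_gbps: int,
                     uplinks: int, uplink_gbps: int) -> float:
    """Return the downlink:uplink bandwidth ratio for one leaf switch."""
    return (downlinks * downlink_gbps) / (uplinks * uplink_gbps)

if __name__ == "__main__":
    for uplinks in (2, 4, 6, 8):
        ratio = oversubscription(48, 10, uplinks, 100)
        print(f"{uplinks} x 100G uplinks -> {ratio:.2f}:1")
```

Adding uplinks is the dial: two 100G uplinks give 2.4:1, while six or more make the leaf effectively non-blocking for this port mix.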
Active-active redundancy, fn(x, y, z). The next convergence will collapse data center designs into smaller, elastic form factors for compute, storage, and networking:
• This is where you can never have enough.
• Customers want scale made easy.
• Hypervisor integration with cloud simplicity.
(Diagram: an L2/L3 boundary at each leaf pair.)

Start small; scale as you grow. This is where you can simply add Extreme leaf clusters:
• Each cluster is independent (including servers, storage, database, and interconnects).
• Each cluster can be used for a different type of service.
• Delivers a repeatable design which can be added as a commodity.
(XYZ Account spine leaf: egress scale, ingress active/active.)

This is where VXLAN route distribution comes in, using iBGP adjacencies and BGP route reflectors (RR). Why VXLAN? It flattens the network to a single tier from the XYZ Account end-station perspective:
• All IP/BGP based (Virtual eXtensible Local Area Network). Host route distribution is decoupled from the underlay protocol.
• VXLAN's goal is to allow dynamic, large-scale, isolated virtual L2 networks to be created for virtualized and multi-tenant environments.
• Route reflectors are deployed for scaling purposes: easy setup, small configuration.
Traffic engineer like ATM or MPLS, but over UDP, reusing the existing IP network (see the VXLAN header sketch below).
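The "like ATM or MPLS, but over UDP" bullet comes down to the VXLAN encapsulation itself: an 8-byte header carrying a 24-bit VNI, wrapped in UDP (destination port 4789 per RFC 7348). The sketch below builds that header in plain Python; the VNI value is only an example.

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port (RFC 7348)

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags byte (I bit set), 24-bit VNI, reserved bytes."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08                                  # 'I' flag: the VNI field is valid
    return struct.pack("!B3xI", flags, vni << 8)  # VNI sits in the upper 24 bits of the last word

if __name__ == "__main__":
    hdr = vxlan_header(5001)   # example VNI
    print(hdr.hex())           # -> 0800000000138900
```

Everything after this header is the original tenant Ethernet frame, which is why the underlay only ever sees ordinary UDP/IP and the host routes can be distributed by BGP instead of the underlay protocol.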
VTEPs terminate the tunnels at each end. Dense 10GbE interconnect, using breakout cables over copper or fiber, carries the VM traffic for each application (App 1, App 2, App 3).

Intel, Facebook, OCP: the Facebook 4-post architecture. Each leaf or rack switch has up to 48 x 10G downlinks; segmentation and multi-tenancy without routers.
• Each leaf has four uplinks, one to each spine (4:1 oversubscription).
• Enables insertion of services without sprawl (analytics for fabric and application forensics).
• No routers at the spine; one spine failure reduces cluster capacity to 75% (sanity-checked in the sketch below).
The five S's: the design needs to be Scalable, Secure, Shared, Standardized, and Simplified. Network (fit).
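The 4-post numbers quoted above (48 x 10G downlinks per rack switch, four spine uplinks, 4:1 oversubscription, one spine failure leaving 75% of capacity) are easy to sanity-check. The sketch assumes equal-cost load sharing across the surviving spines and takes the 4:1 ratio as given.

```python
# Sanity check of the 4-post figures quoted on the slide.

SPINES = 4                    # each leaf has one uplink to each spine
OVERSUBSCRIPTION = 4          # 4:1, as stated on the slide
DOWNLINK_BW_GBPS = 48 * 10    # 48 x 10G downlinks per leaf

def capacity_after_failures(failed_spines: int) -> float:
    """Fraction of spine capacity left with `failed_spines` spines out of service."""
    return (SPINES - failed_spines) / SPINES

if __name__ == "__main__":
    uplink_bw = DOWNLINK_BW_GBPS / OVERSUBSCRIPTION   # aggregate uplink bandwidth implied by 4:1
    print(f"implied uplink bandwidth per leaf: {uplink_bw:.0f} Gbps")
    for failed in (0, 1, 2):
        print(f"{failed} spine(s) down -> {capacity_after_failures(failed):.0%} capacity")
```

One spine down leaves exactly 75% of the uplink capacity, which matches the slide; the trade-off is that every additional failure removes another quarter.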
Overlay control: XYZ Account uses the VXLAN forwarding plane for NSX control.
• This is where logical switches span physical hosts and network switches. Application continuity is delivered with scale, and multi-tenancy scales across the data center.
• Enabling L2 over L3 infrastructure: pool resources from multiple data centers with the ability to recover from disasters faster.
• Address network sprawl with a VXLAN overlay, plus deeper integration with infrastructure and operations partners, integrations, and frameworks for IT organizations.

VMware NSX (control plane): the management plane is delivered by the NSX Manager, and the control plane by the NSX Controller, which manages logical networks and data plane resources (a conceptual sketch of the resulting forwarding table follows below). Extreme delivers an open, high-performance data plane with scale.

NSX architecture and components (diagram labels): core, campus, X870-32c, 10Gb aggregation, high-density 10Gb aggregation, 10Gb/40Gb aggregation, high-density 25Gb/50Gb aggregation, X770, X870-96x-8c, 100Gb uplinks, X670-G2, server PODs, 770/870 spine, data center private cloud vC-1, vC-2, ..., vC-N.
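A controller-driven overlay like the one described above ultimately distributes one thing to the data plane: a mapping from tenant MAC (per VNI) to the remote VTEP that hosts it. The sketch below is a conceptual stand-in for that table, not NSX's actual API; the class name, MAC addresses, and VTEP IPs are invented for illustration.

```python
# Conceptual VTEP forwarding table: (vni, dst_mac) -> remote VTEP IP.
# Mirrors what an overlay controller pushes to each switch; not a real NSX API.

from typing import Optional

class VtepTable:
    def __init__(self) -> None:
        self._entries: dict[tuple[int, str], str] = {}

    def learn(self, vni: int, mac: str, vtep_ip: str) -> None:
        """Install a controller-distributed mapping."""
        self._entries[(vni, mac.lower())] = vtep_ip

    def lookup(self, vni: int, mac: str) -> Optional[str]:
        """Return the remote VTEP to encapsulate toward, or None to flood or drop."""
        return self._entries.get((vni, mac.lower()))

if __name__ == "__main__":
    table = VtepTable()
    table.learn(5001, "00:50:56:aa:bb:cc", "10.0.0.11")   # example values
    print(table.lookup(5001, "00:50:56:AA:BB:CC"))        # -> 10.0.0.11
    print(table.lookup(5002, "00:50:56:aa:bb:cc"))        # -> None (unknown destination)
```

The point of the open data plane claim is that the switch only needs this lookup plus the VXLAN encapsulation; the intelligence that fills the table lives in the controller.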
This is where XYZ Account must first have the ability to scale with customer demand, delivering more than just disk space and processors.
• Scale: XYZ Account must be able to seamlessly fail over, scale up, scale down, and optimize management of the applications and services.
• Flexibility: the XYZ Account infrastructure must be able to host heterogeneous and interoperable technologies.
• Business: the business model costs might be optimized for operating expenses or toward capital investment.

Cloud computing (control plane), whether public, private, or MSP, runs over one fabric. The stack (networking, storage, servers, virtualization, O/S, middleware, runtime, data, applications) splits between what you manage and what the vendor manages (see the sketch below):
• On-premise: you manage every layer.
• Infrastructure as a Service: the vendor manages networking through virtualization; you manage the O/S and everything above it.
• Platform as a Service: the vendor manages everything up through the runtime; you manage data and applications.
• Software as a Service: the vendor manages the entire stack.
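The responsibility split above is easier to read as a small matrix. The sketch below encodes the layer ownership exactly as listed on the slide; it is a restatement of that list, not a vendor-specific definition.

```python
# Which layers you manage vs. the vendor manages, per consumption model.

LAYERS = ["networking", "storage", "servers", "virtualization",
          "O/S", "middleware", "runtime", "data", "applications"]

# Index into LAYERS below which the vendor takes over responsibility.
VENDOR_MANAGES_UP_TO = {"on-premise": 0, "iaas": 4, "paas": 7, "saas": 9}

def who_manages(model: str, layer: str) -> str:
    cut = VENDOR_MANAGES_UP_TO[model]
    return "vendor" if LAYERS.index(layer) < cut else "you"

if __name__ == "__main__":
    for model in ("on-premise", "iaas", "paas", "saas"):
        managed = [layer for layer in LAYERS if who_manages(model, layer) == "you"]
        print(f"{model:>10}: you manage {', '.join(managed) or 'nothing'}")
```

Running it prints the same boundaries as the bullets above, which is the quickest way to see where OPEX replaces CAPEX as you move from on-premise toward SaaS.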
This is where Azure ExpressRoute lets XYZ Account create private connections between Azure datacenters and XYZ Account infrastructure, on or off premises.
• ExpressRoute connections don't go over the public Internet. They offer more reliability, faster speeds, lower latencies, and higher security than typical Internet connections.
• Transferring data between on-premises systems and Azure over ExpressRoute can yield significant cost benefits.
• XYZ Account can establish connections to Azure at an ExpressRoute location, such as an exchange provider facility, or connect directly to Azure from its existing WAN, such as a multi-protocol label switching (MPLS) VPN provided by a network service provider.

Microsoft Azure (control plane), cloud: the key impact of this model for the customer is a move from managing physical servers to a focus on logical management of data storage through policies.
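To put the "faster speeds" point into numbers, here is a back-of-the-envelope sketch comparing bulk-transfer time over a dedicated circuit versus a shared Internet path. The dataset size, circuit sizes, and effective-throughput factors are assumptions for illustration only, not Azure SLA or pricing figures.

```python
# Rough bulk-transfer time: dataset size / effective throughput.
# All figures below are illustrative assumptions, not Azure SLAs.

def transfer_hours(dataset_tb: float, link_gbps: float, efficiency: float) -> float:
    """Hours to move dataset_tb terabytes over a link at the given efficiency (0-1)."""
    bits = dataset_tb * 8e12
    return bits / (link_gbps * 1e9 * efficiency) / 3600

if __name__ == "__main__":
    dataset = 50  # TB to replicate, assumed
    for name, gbps, eff in [("Internet VPN", 1, 0.3),
                            ("dedicated 1G circuit", 1, 0.9),
                            ("dedicated 10G circuit", 10, 0.9)]:
        print(f"{name:>22}: {transfer_hours(dataset, gbps, eff):6.1f} h")
```

The shape of the result is what matters: a dedicated, uncontended circuit finishes the same replication in a fraction of the time, which is the operational argument for a private connection over the public Internet.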
Compute and storage: data center architecture considerations (compute, cache, database, storage, client response):
• North-south traffic oversubscription: up to 200:1 (client request plus server response is roughly 20% of traffic).
• Inter-rack latency: about 150 microseconds; storage lookups make up roughly 80% of traffic.
• Scale: up to 20 racks (non-blocking two-tier designs are optimal).
A rough traffic-split calculation follows this block.

Purchase "vanity free". This is where Open Compute might allow companies to purchase vanity-free hardware; previous, outdated data center designs support more monolithic computing. The monolithic datacenter is dead.
• A low-density X620 might help XYZ Account avoid stranded ports.
• Availability: dual X620s can be deployed to minimize the impact of maintenance.
• Flexibility: the X620 can support both 1G and 10G to servers and storage.
One-rack design (closely coupled, nearly coupled, loosely coupled); shared combo ports: 4 x 10GBASE-T and 4 x SFP+, 100Mb/1Gb/10GBASE-T. (Diagram: servers and storage connected to redundant Summit switches plus a management switch.)
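Reading the slide's split as 20% client request/response (north-south) and 80% storage/cache lookup (east-west), the sketch below shows what that means for the client-facing edge. The per-server load and rack counts are assumptions for illustration; only the 20/80 split and the 200:1 ceiling come from the slide.

```python
# Back-of-the-envelope traffic split for the profile described above.
# Per-server offered load and server counts are assumed.

SERVERS = 20 * 40          # 20 racks, 40 servers each (assumed)
GBPS_PER_SERVER = 2.0      # average offered load per server (assumed)

total = SERVERS * GBPS_PER_SERVER
north_south = 0.20 * total   # client request + server response
east_west = 0.80 * total     # cache / database / storage lookups

print(f"total offered load : {total:8.0f} Gbps")
print(f"north-south (20%)  : {north_south:8.0f} Gbps")
print(f"east-west (80%)    : {east_west:8.0f} Gbps")
print(f"edge capacity at 200:1 oversubscription: {north_south / 200:.1f} Gbps")
```

The asymmetry is the design point: the east-west fabric carries the bulk of the load and cares about the 150-microsecond inter-rack latency, while the thin north-south edge can be oversubscribed heavily.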
Open Compute two-rack design. This is where XYZ Account can reduce OPEX and leverage a repeatable solution:
• With the spline setup, XYZ Account can put redundant switches in the middle and link each server to those switches.
• Fewer hops between servers: the important thing is that each server is precisely one hop from any other server.
• Avoid stranded ports: designs often have a mix of fat and skinny nodes. If XYZ Account deploys 48-port leaf switches, many configurations end up with anywhere from 16 to 24 stranded ports (a quick count follows below).
(Diagram: typical spline setup, two racks of servers and storage on redundant Summit switches with a management switch.)
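The stranded-port point is easy to quantify. The sketch below counts unused ports on a 48-port leaf for a few rack sizes; the attached-device counts are assumptions, and racks in the mid-twenties to low-thirties of devices reproduce the 16-to-24 range quoted above.

```python
# Stranded ports on a 48-port leaf: ports purchased but never cabled.
# Device counts per rack are assumptions for illustration.

LEAF_PORTS = 48

def stranded(devices_in_rack: int, leaf_ports: int = LEAF_PORTS) -> int:
    """Unused ports on one leaf when every device in the rack gets one port."""
    return max(leaf_ports - devices_in_rack, 0)

if __name__ == "__main__":
    for devices in (24, 28, 32, 44):
        print(f"{devices} devices -> {stranded(devices)} stranded ports per leaf")
```

This is the argument for right-sized leafs such as a lower-density edge switch: matching port count to the actual rack population instead of paying for ports that stay dark.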
Open Compute eight-rack POD design. This is where the typical spline setup grows into an eight-rack POD with a leaf-spine layout.
(Diagram: eight racks, each with servers, storage, Summit switches, and a management switch, connected to a shared spine.)
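As a repeatability check, the sketch below expands one eight-rack POD into a rough bill of materials the way the diagram implies: each rack carries servers, storage, and a redundant pair of Summit leaf switches, with a shared spine pair per POD. All per-rack quantities are assumptions for illustration; only the eight-rack, leaf-spine structure comes from the slide.

```python
# Expand one eight-rack POD into a rough bill of materials.
# Per-rack quantities are illustrative assumptions; the structure follows the slide.

RACKS_PER_POD = 8

PER_RACK = {
    "servers": 32,               # assumed
    "storage shelves": 4,        # assumed
    "Summit leaf switches": 2,   # redundant pair in the middle of the rack
    "management switch": 1,
}

POD_SHARED = {"Summit spine switches": 2}  # assumed shared spine pair per POD

def pod_bom(pods: int = 1) -> dict[str, int]:
    """Return item counts for the requested number of PODs."""
    bom = {item: qty * RACKS_PER_POD * pods for item, qty in PER_RACK.items()}
    for item, qty in POD_SHARED.items():
        bom[item] = qty * pods
    return bom

if __name__ == "__main__":
    for item, qty in pod_bom(pods=1).items():
        print(f"{item:>24}: {qty}")
```

Because every POD is the same list, adding capacity is literally a multiplication, which is the commodity, repeatable-design point being made above.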
Chassis vs. Spline. A chassis (fabric modules as the spine, I/O modules as the leaf) offers proven value with the legacy approach, but:
• You cannot access the line cards.
• There is no L2/L3 recovery inside the chassis.
• There is no access to the fabric itself.
Disaggregated value:
• Control of the top-of-rack switches.
• L2/L3 protocols run inside the spline.
• Full access to the spine switches (no ego, complexity, or vendor lock-in).
Fat-Tree vs. Clos / Cross-Bar:
• Fat-Tree: traditional 3-tier model (less cabling), but link speeds must increase at every hop (less predictable latency); common in chassis-based architectures (optimized for north/south traffic).
• Clos: every leaf is connected to every spine (efficient utilization, very predictable latency); always two hops to any other leaf (more resiliency, flexibility and performance); friendlier to east/west traffic (the uplink to the rest of the network is just another leaf).
The XYZ Account handshake layer:
• This is where convergence needs to happen - LAN/SAN, FCoE, ETS. Stop or allow whatever you can (efficient multicasting).
• Virtualization happens with VXLAN and vMotion (controlled by the overlay).
• N+1 fabric design needs to happen here (delivers simple, vanity-free future proofing, no-forklift migrations, interoperability between vendors and hitless operation).
This is where a fabric outperforms the "big uglies" one to one. Spine Leaf - the XYZ Account Ethernet expressway layer delivers massive scale:
• This is where low latency is critical: switch as quickly as you can. Do not slow down the core; keep it simple (disaggregated spline + one big ugly).
• Elastic capacity - today's XYZ Account spines are tomorrow's leafs. Dial in the bandwidth to your specific needs with the number of uplinks.
• Availability - the state of the network is kept in each switch; no single point of failure. Seamless XYZ Account upgrades; easy to take a single switch out of service.
(Cloud Fabric) Disaggregation. Legacy challenges: complex, slow and expensive scale-up and scale-out; vendor lock-in; proprietary hardware and software versus commodity. Fabric modules (spine), I/O modules (leaf), spline (speed), active-active redundancy fn(x,y,z). The next convergence will collapse datacenter designs into smaller, elastic form factors for compute, storage and networking.
• This is where you can never have enough.
• Customers want scale made easy.
• Hypervisor integration with cloud simplicity.
Start small; scale as you grow. This is where you can simply add Extreme Leaf Clusters:
• Each cluster is independent (including servers, storage, database and interconnects).
• Each cluster can be used for a different type of service.
• Delivers a repeatable design which can be added as a commodity.
This is where VXLAN (route distribution) comes in, with iBGP adjacencies to BGP route reflectors (RRs) and active/active ingress and egress. Why VXLAN? It flattens the network to a single tier from the XYZ Account end-station perspective.
• All IP/BGP based (Virtual eXtensible Local Area Network). Host route distribution is decoupled from the underlay protocol.
• VXLAN's goal is to allow dynamic, large-scale, isolated virtual L2 networks to be created for virtualized and multi-tenant environments.
• Route reflectors are deployed for scaling purposes - easy setup, small configuration (a sizing sketch follows below).
• Traffic engineer like ATM or MPLS over the existing IP network (VXLAN runs over UDP between VTEPs).
Dense 10GbE interconnect using breakout cables, copper or fiber. Intel, Facebook, OCP: the Facebook 4-post architecture - each leaf or rack switch has up to 48 10G downlinks. Segmentation or multi-tenancy without routers.
• Each rack switch has 4 uplinks - one to each spine (4:1 oversubscription).
• Enables insertion of services without sprawl (analytics for fabric and application forensics).
• No routers at the spine.
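The scaling argument for route reflectors is easy to see in numbers: a full iBGP mesh needs a session between every pair of VTEP-hosting switches, while a route-reflector design only needs each client to peer with the RRs. The sketch below is a generic illustration of that count; the switch and RR quantities are hypothetical.

```python
# Illustrative sketch: iBGP session count, full mesh vs. route reflectors.
# Assumptions: every leaf is an RR client of each route reflector; counts are examples.

def full_mesh_sessions(n_switches: int) -> int:
    return n_switches * (n_switches - 1) // 2

def route_reflector_sessions(n_clients: int, n_rrs: int) -> int:
    # Each client peers with every RR; the RRs also peer among themselves.
    return n_clients * n_rrs + full_mesh_sessions(n_rrs)

for leafs in (8, 32, 128):
    print(f"{leafs:>3} leafs: full mesh = {full_mesh_sessions(leafs):>5}, "
          f"2 RRs = {route_reflector_sessions(leafs, 2)}")
```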
One failure reduces cluster capacity to 75%. (5 S's) The design needs to be Scalable, Secure, Shared, Standardized, and Simplified.
Network (Fit) Overlay Control. XYZ Account uses the VXLAN forwarding plane for NSX control:
• This is where logical switches span across physical hosts and network switches. Application continuity is delivered with scale; scalable multi-tenancy across the data center.
• Enabling L2 over L3 infrastructure - pool resources from multiple data centers with the ability to recover from disasters faster.
• Address network sprawl with a VXLAN overlay. Deeper integration with infrastructure and operations partners, integrations, and frameworks for IT organizations.
VMware NSX (Control Plane). The management plane is delivered by the NSX Manager; the control plane NSX Controller manages logical networks and data plane resources. Extreme delivers an open, high-performance data plane with scale (NSX architecture and components).
[Core/campus topology diagram: X870-32c spines aggregating rows of leaf-switch front panels; port-map detail omitted.]
10Gb aggregation, high-density 10Gb aggregation, 10Gb/40Gb aggregation, and high-density 25Gb/50Gb aggregation options: X770 and X870-96x-8c with 100Gb uplinks, X670-G2 with 100Gb uplinks, server PODs, and a 770/870 spine.
Data Center - Private Cloud (vC-1, vC-2, … vC-N). This is where XYZ Account must first have the ability to scale with customer demand, delivering more than just disk space and processors.
• Scale - XYZ Account must be able to seamlessly fail over, scale up, scale down, and optimize management of the applications and services.
• Flexibility - the XYZ Account infrastructure must be able to host heterogeneous and interoperable technologies.
• Business - the business model costs might be optimized for operating expenses or toward capital investment.
Cloud Computing (Control Plane). From on-premise to public cloud, the split of responsibility shifts: on-premise you manage the entire stack (networking, storage, servers, virtualization, O/S, middleware, runtime, data, applications); with Infrastructure as a Service the vendor manages up through virtualization; with Platform as a Service the vendor manages everything up through the runtime; with Software as a Service the vendor manages the whole stack. Public, private, or MSP fabric.
This is where Azure ExpressRoute lets XYZ Account create private connections between Azure datacenters and XYZ Account infrastructure on or off premises (Microsoft Azure control plane).
• ExpressRoute connections don't go over the public Internet. They offer more reliability, faster speeds, lower latencies, and higher security than typical Internet connections.
• Transferring data between on-premises systems and Azure can yield significant cost benefits.
• XYZ Account can establish connections to Azure at an ExpressRoute location, such as an Exchange provider facility, or connect directly to Azure from its existing WAN, such as a multi-protocol label switching (MPLS) VPN provided by a network service provider.
The key impact of this model for the customer is a move from managing physical servers to focusing on logical management of data storage through policies.
• 2. OPEX components of a converged environment: security, compliance, automation and operations across compute, storage and networking (X, Y, Z) - pooled compute, network, and storage capacity (XYZ Account 2017 design). CAPEX components of a converged environment: cores (6, 12, 16, 20), memory (64GB, 128GB, 192GB, 256GB, 512GB), spindles (3.6TB, 4.8TB, 6TB, 8TB, 10TB SSD), network (10G RJ45, SFP+, QSFP+, QSFP28). 2016 design: 10G compute, memory and storage. Jeff Green, 2017, Rev. 1, South.
Legend - SFP+ DAC cables: 10G passive (PN 10304 ~1m, 10305 ~3m, 10306 ~5m, 10307 ~10m); 10G SFP+ active copper cable (up to 100m). QSFP+ DAC cables: 40G passive (PN 10321 ~3m, 10323 ~5m); 40G active (PN 10315 ~10m, 10316 ~20m, 10318 ~100m); 40G fan-out (PN 10321 ~3m, 10322 ~5m, PN 10GB-4-F10-QSFP ~10m, PN 10GB-4-F20-QSFP ~20m).
10G fiber: 10GBASE-LRM 220m (720ft, plus mode conditioning) (PN 10303); 10GBASE-SR over OM3 (300m) or OM4 (400m) (PN 10301); 10GBASE-LR over single mode (10km, 1310nm) (PN 10302); 10GBASE-ER over single mode (40km, 1550nm) (PN 10309); 10GBASE-ZR over single mode (80km, 1550nm) (PN 10310). 10G copper: 10GBASE-T over Class E Cat 6 (55m); 10GBASE-T over Class E Cat 6a or 7 (100m); 802.3bz 10GBASE-T (100m) for Cat 6 (5G); 802.3bz 10GBASE-T (100m) for Cat 5e (2.5G).
Prescriptive services, 10G/40G. Overall architecture: overlay (SDN, NSX) over underlay (ACI or other; spine-leaf, MLAG, Nexus or other). Applications get automated provisioning and configuration with intelligence in software, instead of manual and slow. ExtremeCore 10G, ExtremeEdge PoE, 25G/50G/100G.
QSFP28 DACs (passive cables): 10411 - 100Gb QSFP28-QSFP28 DAC, 1m; 10413 - 3m; 10414 - 5m. 4x25G breakouts: 10421 - 100Gb QSFP28 to 4x SFP28 (4x25Gb) DAC breakout, 1m; 10423 - 3m; 10424 - 5m. 2x50G breakouts: 10426 - 100Gb QSFP28 (2x50Gb) DAC breakout, 1m; 10428 - 3m. 100G = 4 x 25G lanes. QSFP28 DACs (active cables): 10434 - 100Gb QSFP28-QSFP28 DAC, 5m; 10435 - 7m; 10436 - 10m; 10437 - 20m. 4x25G breakouts: 10441 - 5m; 10442 - 7m; 10443 - 10m; 10444 - 20m. LR4 optics - up to 10km on single mode; 2km lower-cost module (Lite); wavelengths 1295.56, 1300.05, 1304.58, 1309.14 nm. (A small part-selection sketch follows below.)
Extreme Data Center Switch Options (10, 25, 40, 50, 100G). Layer 2 multi-chassis port channel (vPC or MLAG) with ISSU for a redundant pair - less than 2000ms impact for the upgrade.
Spine (100G) core with control and border leafs, 4 x 100G links down to campus, Resnet and data center leafs. Minimum MAC address table size should be 256K; ARP table capacity should support a minimum of 64K users in a single VLAN; deep interface buffers or intelligent buffer management; VXLAN.
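Since the legend above is essentially a lookup from (speed, reach) to a part number, it can be captured in a small table. The sketch below only encodes a few of the 100G entries listed on the slide and a hypothetical picker function; treat it as an illustration of how to use the legend, not as a complete or authoritative parts database.

```python
# Illustrative sketch: pick a 100G QSFP28 DAC part number by type and length.
# The part numbers below are copied from the legend above; the helper itself is hypothetical.

DAC_100G = {
    ("passive", 1): "10411", ("passive", 3): "10413", ("passive", 5): "10414",
    ("active", 5): "10434", ("active", 7): "10435",
    ("active", 10): "10436", ("active", 20): "10437",
}

def pick_dac(kind: str, length_m: int) -> str:
    """Return the shortest listed cable that covers the requested length."""
    candidates = sorted(l for (k, l) in DAC_100G if k == kind and l >= length_m)
    if not candidates:
        raise ValueError(f"no {kind} 100G DAC listed for {length_m}m")
    return DAC_100G[(kind, candidates[0])]

print(pick_dac("passive", 2))   # -> 10413 (3m passive)
print(pick_dac("active", 8))    # -> 10436 (10m active)
```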
VXLAN. Scale up across the data center, campus and Resnet: spine-leaf delivers the interconnect for distributed compute workloads.
X870-32c Spine/Leaf Switch: 32 x 10/25/40/50/100GbE QSFP28 ports; 96 x 10GbE ports (via 24 ports of 4x10Gb breakout) plus 8 x 10/25/40/50/100GbE ports (port math sketched below).
X690 10Gb leaf switches, enabled with 100Gb: new 10Gb leaf aggregation switches for fiber and 10GBASE-T applications with 100Gb Ethernet.
• Enabled with 40Gb & 100Gb high-speed uplinks.
• Shares power supply and fan modules with the X870.
• Stacks with the X870 using SummitStack-V400.
460 Multirate, V400 port extender, X620 Multirate with shared combo ports (4x10GBASE-T & 4xSFP+, 100Mb/1Gb/10GBASE-T).
ExtremeFabric - a way to simplify network design and operation and make the network act like a single device:
• Fabric transparent to end devices; combines the fabric elements into a single domain; the fabric appears as a single device.
• Policy and overlays applied at the fabric edge.
• No subnets, no VLANs, no VRFs required within the fabric.
• Zero-touch configuration.
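The X870-32c port claim above is just arithmetic on the 32 QSFP28 cages: if 24 of them are broken out to 4x10G each, 96 10GbE ports result and 8 cages remain at native speeds. A minimal sketch of that check, assuming the 32/24/8 split quoted on the slide:

```python
# Illustrative sketch: verify the X870-32c breakout math quoted above.
QSFP28_CAGES = 32
BREAKOUT_CAGES = 24          # cages split into 4 x 10GbE each
LANES_PER_BREAKOUT = 4

ten_gig_ports = BREAKOUT_CAGES * LANES_PER_BREAKOUT       # 96 x 10GbE
native_ports = QSFP28_CAGES - BREAKOUT_CAGES              # 8 x 10/25/40/50/100GbE

assert ten_gig_ports == 96 and native_ports == 8
print(f"{ten_gig_ports} x 10GbE via breakout, {native_ports} x 100GbE-capable cages")
```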
• 3. Extreme Data Center Switch Options (10, 25, 40, 50, 100G) - Data Center Fabric. 100G spines with leaf switches serving the data center, dual ISP uplinks (ISP 1, ISP 2), residential housing, and a hot spot in the local town. Everything in the spine-leaf is just two hops away; a separate path is available to each spine; the latency is the same for each path.
The same spine-leaf fabric scales out beyond the data center: leafs at the main campus and university connect to the 100G spines, so campus, data center and Resnet share one interconnect for distributed compute workloads, and capacity grows by scaling out the spine rather than scaling up a chassis. (An ECMP path-count sketch follows below.)
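The "everything is two hops away, same latency on every path" property comes from ECMP across the spines: leaf-to-leaf traffic always crosses exactly one spine, and the number of equal-cost paths equals the number of spines. A minimal sketch of that count, with hypothetical spine and leaf quantities:

```python
# Illustrative sketch: leaf-to-leaf paths in a two-tier spine-leaf fabric.
# Assumptions: every leaf has one uplink to every spine; the counts are examples.
from math import comb

def leaf_to_leaf_paths(n_spines: int) -> int:
    # Each path is leaf -> spine -> leaf, so the number of equal-cost
    # 2-hop paths between any pair of leafs equals the number of spines.
    return n_spines

spines, leafs = 4, 16
print(f"{leaf_to_leaf_paths(spines)} equal-cost 2-hop paths between any two leafs")
print(f"{comb(leafs, 2)} leaf pairs, every one of them exactly 2 hops apart")
```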
• 4. Organizing Compute, Management & Edge. Edge leafs run L3 to the DC fabric and L2 to external networks; compute clusters and infrastructure clusters (Edge, storage, vCenter and the cloud management system) hang off the leaf-spine, with WAN/Internet reachability through the edge. A single vCenter Server manages all Management, Edge and Compute clusters:
• NSX Manager is deployed in the management cluster and paired to the vCenter Server.
• NSX Controllers can also be deployed into the management cluster.
• This reduces vCenter Server licensing requirements.
Separating the compute, management and edge functions has the following design advantages for managing the life cycle of compute and edge resources:
• Ability to isolate and develop a span of control.
• Capacity planning - CPU, memory & NIC.
• Upgrade and migration flexibility.
Automation control over the areas or functions that require frequent change: app tier, micro-segmentation and load balancer. Three areas of technology require consideration:
• Interaction with the physical network.
• Overlay (VXLAN) impact.
• Integration with vSphere clustering.
Registration or mapping: the compute clusters (Compute A, Compute B) host the web and application VMs, the edge cluster hosts the edge and control VMs, and the management cluster hosts vCenter Server, NSX Manager and the NSX Controllers.
Preparation, Netsite, Operation - Convergence 3.0 (automation, in seconds) gives flexibility and choice versus traditional networking configuration tasks.
Initial configuration: multi-chassis LAG, routing configuration, SVIs/RVIs, VRRP/HSRP, LACP, VLANs.
Recurring configuration: SVIs/RVIs, VRRP/HSRP, advertising new subnets, access lists (ACLs), VLANs, adjusting VLANs on trunks, VLAN-to-STP/MST mapping, adding VLANs on uplinks, adding VLANs to server ports.
NSX is agnostic to the underlay network - L2, L3 or any combination. There are only two requirements: IP connectivity and an MTU of 1600 (a quick overhead check follows below).
Network & security services in software: WAN/Internet, POD A and POD B, with VLAN X and VLAN Y stretched across PODs.
L3 topologies and design considerations with XOS 670 cores: L2 interfaces accept and send IP packets as large as 9214 bytes by default (no configuration required); L3 interfaces default to 1500 bytes. Configuration step for L3 interfaces: change the MTU to 9214 (the "mtu" command), after which IP packets as large as 9214 bytes can be sent and received.
• L3 ToR designs run a dynamic routing protocol between leaf and spine; BGP, OSPF or IS-IS can be used.
• Each rack advertises a small set of prefixes (a unique VLAN/subnet per rack).
• Equal-cost paths to the other racks' prefixes.
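The 1600-byte MTU requirement above comes from VXLAN encapsulation overhead: the original Ethernet frame is wrapped in outer Ethernet, IP, UDP and VXLAN headers. A minimal sketch of that arithmetic (standard header sizes, untagged outer frame assumed):

```python
# Illustrative sketch: why the VXLAN transport network needs MTU >= ~1600 bytes.
OUTER_ETHERNET = 14   # outer MAC header (untagged)
OUTER_IPV4     = 20   # outer IP header
OUTER_UDP      = 8    # outer UDP header
VXLAN_HEADER   = 8    # VXLAN header (carries the VNI)

OVERHEAD = OUTER_ETHERNET + OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER   # 50 bytes

inner_frame = 1500 + 14        # inner payload MTU plus inner Ethernet header
required = inner_frame + OVERHEAD - OUTER_ETHERNET  # bytes carried as outer IP payload
print(f"encapsulation overhead: {OVERHEAD} bytes")
print(f"outer IP MTU needed for a 1500-byte inner MTU: {required} bytes")
# 1550 plus a little headroom for 802.1Q tags explains the common 1600-byte guidance;
# jumbo frames (9214 on the XOS L3 interfaces above) leave ample room.
```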
• The ToR switch provides default gateway service for each VLAN subnet.
• 802.1Q trunks carry a small set of VLANs for VMkernel traffic.
• The rest of the session assumes an L3 topology.
XYZ Account (Spine): CORE 1 and CORE 2. Preparation, Netsite, Operation - Convergence 3.0 (automation, in seconds), flexibility and choice. Wi-Fi, analytics, security and policy on Extreme's platform:
• Lync traffic engineering with Purview analytics and service insertion.
• Multi-tenant networks with automation and orchestration.
• Self-provisioned network slicing (proof-of-concept implementation).
A better experience through simpler solutions that deliver long-term value: products - one wired and wireless platform; customer care - strong first-call resolution.
NSX Controller functions: Logical Router 1 (VXLAN 5000), Logical Router 2 (VXLAN 5001), Logical Router 3 (VXLAN 5002); the controller cluster acts as the VXLAN directory service for the MAC, ARP and VTEP tables. This is where NSX will provide XYZ Account one control plane to distribute network information to the ESXi hosts. NSX Controllers are clustered for scale-out and high availability (a slicing sketch follows below).
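"Slicing" here means each logical network's state is owned by one node in the controller cluster, so the load spreads and a node failure only affects its slice. The sketch below shows one simple way to picture that assignment (hash-based, three controllers); it is an illustration of the concept, not VMware's actual algorithm.

```python
# Illustrative sketch: distributing logical switches (VNIs) across a 3-node
# controller cluster. Hash-based assignment is an assumption for illustration,
# not the actual NSX slicing implementation.

CONTROLLERS = ["controller-1", "controller-2", "controller-3"]

def owner_of(vni: int) -> str:
    """Controller node responsible for a given VXLAN network (VNI)."""
    return CONTROLLERS[vni % len(CONTROLLERS)]

for vni in (5000, 5001, 5002, 5003):
    print(f"VXLAN {vni} -> {owner_of(vni)}")
# If controller-2 fails, only the VNIs it owned need to be re-assigned;
# the other slices keep serving MAC/ARP/VTEP lookups uninterrupted.
```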
• Network information is distributed across the nodes in a controller cluster (slicing).
• Removes the VXLAN dependency on multicast routing/PIM in the physical network.
• Provides suppression of ARP broadcast traffic in VXLAN networks.
Server farm (leafs): racks of servers and storage behind Summit leafs and management switches, plus media servers, routers, firewalls and PBXs - compute workload, services and connectivity.
vSphere host VXLAN transport network: Host 1 (VTEP2, 10.20.10.11) and Host 2 (VTEP3, 10.20.10.12; VTEP4, 10.20.10.13) carry VMs MAC1-MAC4 on VXLAN 5002 across their vSphere Distributed Switches (a lookup sketch follows below). When deployed, VXLAN creates an automatic port-group whose VLAN ID must be the same per VDS. If the fabric is L2, this usually means the same IP subnets are also used across racks for a given type of traffic. For a given host, only one VDS is responsible for VXLAN traffic; a single VDS can span multiple clusters.
Transport zone, VTEP, logical networks and VDS: the VTEP VMkernel interface belongs to a specific VLAN-backed port-group dynamically created during the cluster VXLAN preparation.
• One or more VDS can be part of the same transport zone (TZ).
• A given logical switch can span multiple VDS.
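The transport-network example above is really a directory problem: to forward a frame for MAC3, the source VTEP needs to know which VTEP IP hosts that MAC on VNI 5002. A minimal sketch of such a lookup table, using the addresses quoted on the slide (which MAC sits behind which VTEP is assumed for illustration):

```python
# Illustrative sketch: MAC -> VTEP lookup for VXLAN 5002, as a controller/VTEP
# table might hold it. IPs come from the slide; the MAC-to-VTEP pairings are
# assumptions made for the example.
from typing import Optional

VTEP_TABLE = {
    (5002, "MAC1"): "10.20.10.11",   # Host 1, VTEP2
    (5002, "MAC2"): "10.20.10.11",
    (5002, "MAC3"): "10.20.10.12",   # Host 2, VTEP3
    (5002, "MAC4"): "10.20.10.13",   # Host 2, VTEP4
}

def next_hop_vtep(vni: int, dst_mac: str) -> Optional[str]:
    """Return the remote VTEP IP to encapsulate toward, or None to flood/learn."""
    return VTEP_TABLE.get((vni, dst_mac))

print(next_hop_vtep(5002, "MAC3"))   # -> 10.20.10.12
print(next_hop_vtep(5002, "MAC9"))   # -> None (unknown: flood or query the controller)
```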
vSphere host (ESXi) to L3 ToR switch: routed uplinks (ECMP) with an 802.1Q VLAN trunk per host carrying VLAN 66 Mgmt (10.66.1.25/26, DGW 10.66.1.1), VLAN 77 vMotion (10.77.1.25/26, GW 10.77.1.1), VLAN 88 VXLAN (10.88.1.25/26, DGW 10.88.1.1) and VLAN 99 Storage (10.99.1.25/26, GW 10.99.1.1). The ToR terminates the matching SVIs - SVI 66: 10.66.1.1/26, SVI 77: 10.77.1.1/26, SVI 88: 10.88.1.1/26, SVI 99: 10.99.1.1/26 - and the span of these VLANs stays within the rack (an addressing sketch follows below).
Traditional control (LDAP, NAC, DHCP, RADIUS, captive portal, DNS, MDM) and XYZ Account services, user repositories, or corporate control sit alongside NAC, Analytics and Netsite in the management cluster (control), with cloud-based control as an option.
Extreme/VMware deployment considerations - this is where the management cluster is typically provisioned on a single rack (leaf with an L2/L3 split: VMkernel VLANs and VLANs for the management VMs on an 802.1Q trunk into the routed DC fabric; a dual-rack option spreads the same VLANs across two leafs).
• The single-rack design still requires redundant uplinks from host to ToR carrying the management VLANs.
• A dual-rack design gives increased resiliency (handling single-rack failure scenarios), which could be a requirement for a highly available design.
• Typically in a small design the management and edge clusters are collapsed. Exclude the management cluster from VXLAN preparation.
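The per-rack addressing above follows a simple pattern: each VMkernel function gets a /26 whose second octet matches the VLAN ID, the ToR SVI takes the first usable address as gateway, and hosts take addresses further up. The sketch below reproduces that pattern with Python's ipaddress module; the host offset (.25) is taken from the slide, everything else is just the stated convention.

```python
# Illustrative sketch: derive the ToR SVI (gateway) and a host address for each
# VMkernel VLAN, following the 10.<vlan>.1.0/26 convention shown above.
import ipaddress

VLANS = {66: "Mgmt", 77: "vMotion", 88: "VXLAN", 99: "Storage"}
HOST_OFFSET = 25   # the example host uses .25 in each subnet (from the slide)

for vlan, role in VLANS.items():
    subnet = ipaddress.ip_network(f"10.{vlan}.1.0/26")
    gateway = subnet.network_address + 1          # SVI on the ToR, e.g. 10.66.1.1
    host = subnet.network_address + HOST_OFFSET   # ESXi VMkernel address
    print(f"VLAN {vlan} {role:8s} SVI {gateway}/26  host {host}/26")
```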
Behind ToR #1 and ToR #2 sit vCenter Server, NSX Manager and Controllers 1-3. The NSX Manager is deployed as a virtual appliance (4 vCPU, 12 GB of RAM per node); consider reserving memory for vCenter to ensure good Web Client performance. Configurations cannot be modified.
Extreme Networks compute, storage and networking integration... Extreme Networks control, analytics and security integration...