12.) fabric (your next data center)

Extreme is rethinking the data plane, the control plane, and the management plane. Extreme is a better mousetrap, delivering new features, advanced functions, and wire-speed performance. Our switches deliver deterministic performance independent of load or of which features are enabled. All Extreme switches are based on XOS, the industry's first and only truly modular operating system. A modular OS provides higher availability of critical network resources: by isolating each critical process in its own protected memory space, a single failed process cannot take down the entire switch, and application modules can be loaded and unloaded without rebooting the switch. This is the level of functionality that users expect from other technology. Reaching the twenty-million-port milestone is a significant achievement that demonstrates the effectiveness of our network solutions, with rich features, innovative software, and integrated support for secure convergence. VoIP/Unified Communications/Infrastructure/SIP Trunking (SBC): because of strong ROI, investment in this segment remains on a very strong growth trajectory.

Enterprises depend on modular switching solutions for all aspects of the enterprise network: in the enterprise core and data center, the distribution layer that lies between the core and wiring closet, and in the wiring closet itself. Modular solutions provide port diversity and density that fixed solutions simply cannot match. There are also high-capacity modular solutions that only the largest enterprises and institutions use for high-density and high-speed deployments. Modular solutions are generally much more expensive than their fixed cousins, especially where density or flexibility is not required. Fixed-configuration stackable switches are typically cost-optimized, but they offer no real port diversity on an individual switch. Port diversity means the availability of different port types, such as fiber versus copper ports. Stackable switches have gotten better at offering port diversity, but they still cannot match their modular cousins. Many of these products now offer high-end features such as 802.3af PoE, QoS, and multi-layer intelligence that in the past were found only on modular switches; this is due to the proliferation of third-party merchant silicon in the fixed-configuration market. Generally, a stack of fixed-configuration switches can be managed as a single virtual entity. Fixed-configuration switches generally cannot be used to provision an entire large enterprise; instead they are mostly used at the edge or departmental level as a low-cost alternative to modular products.

Assumptions:
Ethernet is Open
Active/Active in the Fabric
Therefore:
Open at the Edge
Active/Active at the edge

Published in: Internet

1. XOS “is like an elastic fabric”: fn(x, y, z), where X = compute, Y = memory/storage, and Z = I/O bandwidth.
 You can never have enough
 Customers want scale, made easy
 Hypervisor integration
The next convergence will collapse data center designs into smaller, elastic form factors for compute, storage, and networking.
2. Open Source Software: The Beginning
“Hello everybody out there using minix — I’m doing a (free) operating system (just a hobby, won’t be big and professional like gnu) for 386(486) AT clones..”
Free Unix! “Starting this Thanksgiving I am going to write a complete Unix-compatible software system called GNU (for Gnu's Not Unix), and give it away free(1) to everyone who can use it. Contributions of time, money, programs and equipment are greatly needed.”
3. Software Timeline: Closed (1970s-80s), Hybrid (1980s-90s), Open (1990s-Present)
4. Evolution of the Data Center: 1970s Mainframe (proprietary); 1990s Scale Up (proprietary); 2000s Scale Out (proprietary); 2011++ Open Source
5. Evolution of the Cloud: a 1998-2010 timeline spanning Virtualization, DevOps, and IaaS/Cloud, moving from proprietary through Open Core to Open Source.
6. Open Compute: 5 main verticals, all open source.
• Virt IO
• Hardware Management
• Data Center Design
• Open Rack
• Storage
What is it? Who is it? Why does it exist? To democratize hardware and eliminate gratuitous differentiation, allowing for standardization across Tier 1s and ODMs.
7. Is your network faster today than it was 3 years ago?
Clos networks: a multi-stage circuit-switching network proposed by Charles Clos in 1953 for telephone switching systems; it allows forming a large switch from smaller switches. Folded: merge input and output into one switch.
 1.1 Strict-sense non-blocking Clos networks (m ≥ 2n−1): the original 1953 Clos result.
 1.2 Rearrangeably non-blocking Clos networks (m ≥ n).
 1.3 Blocking probabilities: the Lee and Jacobaeus approximations.
Fat-tree (blocking characteristics).
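The non-blocking conditions and the Lee blocking approximation listed above can be sketched in a few lines of Python. This is a rough illustration under the usual textbook assumptions (the Lee model treats link occupancies as independent), not vendor code:

```python
def strict_sense_nonblocking(n, m):
    """Clos (1953): m middle-stage switches suffice for strict-sense
    non-blocking operation when m >= 2n - 1 (n = inlets per ingress switch)."""
    return m >= 2 * n - 1

def rearrangeably_nonblocking(n, m):
    """m >= n suffices when existing calls may be rearranged."""
    return m >= n

def lee_blocking(n, m, a):
    """Lee approximation for 3-stage Clos blocking probability.
    a = utilization of each inlet; p = a*n/m is the middle-link occupancy;
    a path needs both of its links free, and all m paths must be blocked."""
    p = a * n / m
    return (1 - (1 - p) ** 2) ** m

# Sanity checks: the threshold sits exactly at 2n-1, and adding
# middle switches drives the estimated blocking probability down.
assert strict_sense_nonblocking(8, 15) and not strict_sense_nonblocking(8, 14)
assert lee_blocking(8, 24, 0.5) < lee_blocking(8, 12, 0.5)
```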
8. It should be… BigData, IP Storage, VM Farms, Cloud, Web 2.0, Legacy, VDI. 2-3 generations of silicon; 1G -> 100G speed transition; lowest latency with a 2-tier Leaf/Spine.
9. Data Center challenges & needs: containing the failure domain.
 No downtime, planned or unplanned
 High bandwidth
 Automate provisioning, change control and upgrades
Supports all use cases and applications:
 Client-server
 Modern distributed apps, Big Data
 Storage, virtualization
Disaggregated switch: any hardware with any switch silicon and any OS; driver, routing, analytics, virtual, policy, platform and config layers.
10. Data Center challenges & needs: network disaggregation, common goals.
 Multi-tenancy with integrated security
 Low and predictable latency (3 µsec across the fabric modules (spine) and I/O modules (leaf))
 Mix and match multiple generations of technologies
11. Chassis vs. Spline.
Chassis: ≠ cannot access line cards; ≠ no L2/L3 recovery inside; ≠ no access to the fabric.
Spline: advanced L2/L3 protocols inside; full access to spine switches; fabric control from top-of-rack switches.
12. Fully Non-blocking Architecture. Three key components to fabric design…
 Spine: 10Gb spine switches (10Gb - 40Gb - 100Gb spine), very low oversubscription
 Leaf: 1/10Gb non-blocking (wire-speed) leaf switches; inter-rack latency 2 to 5 microseconds
 Compute and storage: high-performance hosts (VM farms, IP storage, video streaming, HPC, low-latency trading)
3 µsec (chassis) vs. 1.8 µsec (network fabric O/S). 80% of traffic is East-West.
Oversubscription | Maximum 10GbE connections | Total Summit X770 units | 40GbE downlink ports at the edge | 40GbE uplink ports at the edge
1:1 | 256 | 6 | 16 | 16
~2:1 | 512 | 8 | 21 | 11
3:1 | 768 | 10 | 24 | 8
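The oversubscription column in the table above is simply edge-facing capacity divided by fabric-facing capacity. A small sketch (the helper name is hypothetical, not a product tool):

```python
def oversubscription(downlinks, downlink_gbps, uplinks, uplink_gbps):
    """Ratio of a leaf switch's edge capacity to its uplink capacity."""
    return (downlinks * downlink_gbps) / (uplinks * uplink_gbps)

# Matches the table: 16/16 40G ports -> 1:1, 21/11 -> ~2:1, 24/8 -> 3:1.
assert oversubscription(16, 40, 16, 40) == 1.0
assert round(oversubscription(21, 40, 11, 40), 2) == 1.91
assert oversubscription(24, 40, 8, 40) == 3.0
```

A 1:1 ratio is what "fully non-blocking" means in practice: the leaf can forward every downlink at line rate into the spine simultaneously.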
13. Facebook Switch Fabrics. The Facebook design advantages:
 No dependency on the underlying physical network and protocols. No addresses of virtual machines are present in Ethernet switches, resulting in smaller MAC tables and less complex STP layouts
 No limitations related to Virtual LAN (VLAN) technology, resulting in more than 16 million possible separate networks, compared to the VLAN limit of about 4,000
 No dependency on IP multicast traffic
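The "16 million vs. 4,000" comparison above falls directly out of header field widths: the 802.1Q VLAN ID is 12 bits, while an overlay segment ID such as the VXLAN VNI is 24 bits. A quick check:

```python
# Segment counts implied by the header field widths.
VLAN_ID_BITS = 12   # 802.1Q tag
VNI_BITS = 24       # VXLAN Network Identifier

vlans = 2 ** VLAN_ID_BITS - 2   # 4094 usable; IDs 0 and 4095 are reserved
vnis = 2 ** VNI_BITS            # 16,777,216 possible segments

assert vlans == 4094
assert vnis > 16_000_000
```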
14. Business Value, Strategic Asset: Ethernet Fabric. Why? How?
 Single software train
 Purpose-built OS for Broadcom ASICs
 Modularity = high availability
Network operating system: design once, leverage everywhere.
15. Considerations
 80% North-South traffic; oversubscription from up to 200:1
 Inter-rack latency: 75 to 150 microseconds
 Scale: up to 20 racks
 Client request + server response = 20% of traffic
 Lookup storage = 80% of traffic (inter-rack)
 Non-blocking 2-tier designs optimal
Compute, cache, database, storage; client response.
16. Simple Spine (Active-Active). LAG and MLAG between Summit switches: no loop, just more bandwidth!
 Simplifies or eliminates the Spanning Tree topology
 Simple to understand and easy to engineer traffic
 Scales using standard L2/L3 protocols (LACP, OSPF, BGP)
Total latency under 5 µs.
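One way to picture why active-active LAG/MLAG adds bandwidth without loops: the switch hashes each flow's 5-tuple onto one member link, so every link carries traffic while any given flow stays on a single link and keeps its packet order. A toy sketch (the CRC-based hash here is illustrative; real switch ASICs use their own hash functions):

```python
import zlib

def lag_member(src_ip, dst_ip, src_port, dst_port, proto, n_links):
    """Pick a LAG member link from a flow 5-tuple (assumed hash scheme)."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return zlib.crc32(key) % n_links

# The same flow always lands on the same link (no reordering)...
a = lag_member("10.0.0.1", "10.0.1.9", 49152, 443, 6, n_links=4)
b = lag_member("10.0.0.1", "10.0.1.9", 49152, 443, 6, n_links=4)
assert a == b

# ...while many flows spread across all active members (more bandwidth).
links = {lag_member("10.0.0.1", "10.0.1.9", p, 443, 6, 4)
         for p in range(49152, 49400)}
assert len(links) > 1
```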
17. Simple Fabric. LAG: no loop, just more bandwidth!
 Topology-independent ISSU
 Plug-and-play provisioning of spines and leaves
18. Start Small; Scale as You Grow. Simply add Extreme Leaf clusters…
 Each cluster is independent and includes servers, storage, database and interconnects
 Each cluster can be used for a different type of service
 Simple, repeatable design; capacity can be added as a commodity
Ingress/egress scale.
19. Intel, Facebook, OCP, and Disaggregation. 4-post architecture at Facebook: each rack switch (RSW) has up to 48 10G downlinks and 4-8 10G uplinks (10:1 oversubscription) to cluster switches (CSW).
 Each CSW has 4 40G uplinks, one to each of the 4 FatCat (FC) aggregation switches (4:1 oversubscription)
 The 4 CSWs are connected in a 10G×8 protection ring
 The 4 FCs are connected in a 10G×16 protection ring
 No routers at FC; one CSW failure reduces intra-cluster capacity to 75%
Dense 10GbE interconnect using breakout cables, copper or fiber. “Wedge”, “6-pack”.
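A quick sanity check of the figures above (a sketch; the stated 10:1 oversubscription reads as a nominal figure, since the exact ratio depends on how many of the 4-8 uplinks are populated):

```python
# With 4 CSWs per cluster, losing one leaves 3/4 of intra-cluster capacity.
csws = 4
after_failure = (csws - 1) / csws
assert after_failure == 0.75   # "reduces intra-cluster capacity to 75%"

# RSW oversubscription at the two extremes of the 4-8 uplink range.
down = 48 * 10  # Gbps of downlink capacity
assert down / (4 * 10) == 12.0   # 12:1 with 4 uplinks
assert down / (8 * 10) == 6.0    # 6:1 with 8 uplinks
```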
20. The Open Source Data Center. The current rack standard has no specification for depth or height; the only “standard” is the requirement that it be 19 inches wide, a convention that evolved out of the railroad switching era.
What is it? 5 projects, all completely open, to lower cost and increase data center efficiency.
Why does it exist? To democratize hardware and eliminate gratuitous differentiation, allowing for standardization across Tier 1s and ODMs.
21. One-Rack Design: top-of-rack Summit switches, servers, storage, and a Summit management switch (shared combo ports: 4x10GBASE-T & 4xSFP+, 100Mb/1Gb/10GBASE-T).
Open Compute, because it might allow companies to purchase “vanity-free” hardware; you also don’t have stranded ports with a spline network. Scale beyond traditional data center design; modular data center construction is the future.
 Outdated design only supports low-density IT computing
 Time-consuming maintenance
 Long lead time to deploy additional data center capacity
Loosely coupled, nearly coupled, closely coupled: the monolithic data center is dead.
22. Two-Rack Design: two racks, each with Summit top-of-rack switches, servers, storage, and a Summit management switch.
Reduce OPEX: leverage a repeatable, flexible solution covering planning, configuration, installation, commissioning, full turnkey deployment and maintenance. Leverage best-in-class servers, storage, networking and services, uniquely positioned to create efficient, high-performance modular data centers with the infrastructure to support IT.
23. Eight-Rack PoD Design: a leaf-spine PoD of eight racks, each with Summit top-of-rack switches, servers, storage, and a Summit management switch.
Spine design:
 Spine redundancy and capacity; low device latency and low oversubscription rates
 I/O diversity (10/25/40/50/100G)
 Ability to grow/scale as capacity is needed
 Collapsing of fault/broadcast domains (due to Layer 3 topologies)
 Deterministic failover and simpler troubleshooting; lossless and lossy traffic over a single converged fabric
 Readily available operational expertise, as well as a variety of traffic engineering capabilities
24. Collapsed (1-tier) Spine: spine and leaf collapsed into one tier, with storage and a Summit management switch per rack. 4 x 72 = 288 10G ports.
 25/50GbE NICs allow 25/50GbE from server to switch (Mellanox, QLogic, Broadcom)
 Connectivity: connectors (SFP28, QSFP28), optics and cabling solutions are backwards compatible
 Commercial switching silicon: 25Gb signaling lanes make 25/50/100GbE devices possible
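The 25/50/100GbE family mentioned above falls out of a single 25Gb/s signaling lane: each speed is just a lane multiple.

```python
# One 25Gb/s serdes lane underlies the whole speed family.
LANE_GBPS = 25

speeds = {lanes: lanes * LANE_GBPS for lanes in (1, 2, 4)}
assert speeds == {1: 25, 2: 50, 4: 100}
```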
25. VXLAN: Overlay and Underlay. VXLAN tunneling between sites removes network entropy: L2 connectivity over an L3 core, with VXLAN gateways mapping VLANs XX-YY at each site.
 L2 connections within an IP overlay, unicast & multicast
 Allows a flat DC design without boundaries; a simple and elastic network
The hypervisor / distributed virtual switch terminates the other end of the VXLAN tunnels. IP overlay connections are established between the VXLAN endpoints of a tenant: fully meshed unicast tunnels for known L2 unicast traffic, and PIM-signaled multicast tunnels for L2 BUM traffic.
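The L2-over-L3 tunneling described above works by wrapping the original Ethernet frame in UDP (well-known port 4789) behind an 8-byte VXLAN header carrying the 24-bit VNI. A minimal sketch of the encapsulation, assuming the outer IP/UDP headers are added by the sending VTEP:

```python
import struct

VXLAN_PORT = 4789       # IANA-assigned UDP destination port for VXLAN
FLAG_VNI_VALID = 0x08   # "I" flag: VNI field is valid

def vxlan_header(vni):
    """Build the 8-byte VXLAN header for a 24-bit VNI:
    flags(1) + reserved(3) + VNI(3) + reserved(1)."""
    assert 0 <= vni < 2 ** 24
    return struct.pack("!B3xI", FLAG_VNI_VALID, vni << 8)

def encapsulate(inner_frame, vni):
    """VXLAN payload = header + original L2 frame."""
    return vxlan_header(vni) + inner_frame

pkt = encapsulate(b"\x00" * 64, vni=5000)
assert len(pkt) == 72                                   # 8-byte header + frame
assert struct.unpack("!I", pkt[4:8])[0] >> 8 == 5000    # VNI round-trips
```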
26. Fabric vs. Cisco legacy architecture. Legacy (client-server architecture): racks of blade servers in a multitier network design with 80% North-South traffic and FC SAN storage; high latency, complex, high CapEx/OpEx, constrains virtualization, inefficient. Fabric (a fundamental change in data flows, toward a service-oriented architecture): elastic spine-leaf with MLAG; a simpler, flatter, any-to-any network with 80% East-West traffic.
27. Converged Network: UCS. A single system that encompasses:
 Network: unified fabric
 Compute: industry-standard x86
 Storage: access options
 Virtualization optimized
Uplinks.
28. Petabyte-Scale Data: Data Flow Architecture at Facebook. Web servers feed a Scribe mid-tier and filers into a Scribe-Hadoop cluster; a production Hive-Hadoop cluster and an ad hoc Hive-Hadoop cluster (with Hive replication) sit alongside Oracle RAC and federated MySQL.
