BLADE 10G Ethernet and Use Cases
BLADE Network Technologies, Jun 25, 2009 (www.bladenetwork.net)
Why 10G Ethernet?

Trends
  • Multi-core processor technology
  • Converged Ethernet and FCoE
  • Departmental clusters/HPC
  • IP storage

10G Ethernet price points are dropping
  • 10G NICs; switch ports below $500 per port
  • Very attractive price/performance
  • Lower OPEX and implementation costs
  • Leverage existing staff and management tools

10G Ethernet technology has matured
  • SFP+ and CX4 are now well established
  • Benchmark data available
  • Volume deployments expected in 2009/2010

  Interconnect        Count   Share
  Gigabit Ethernet      282   56.4%
  10G Ethernet            0    0.0%
  InfiniBand            141   28.2%
  Myrinet                10    2.0%
  Other                  67   13.4%
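The counts in the interconnect table sum to 500, so the Share column is just each count divided by a 500-system list. A minimal sketch that reproduces the column; the dictionary simply restates the table above.

```python
# Reproduce the "Share" column of the interconnect table above.
# The counts restate the table and sum to 500 systems.
interconnects = {
    "Gigabit Ethernet": 282,
    "10G Ethernet": 0,
    "InfiniBand": 141,
    "Myrinet": 10,
    "Other": 67,
}

total = sum(interconnects.values())  # 500
for name, count in interconnects.items():
    print(f"{name:17s} {count:4d}  {100 * count / total:4.1f}%")
```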
10 Gigabit Ethernet Technology Overview
  • Ratified in 2002 as IEEE 802.3ae
  • Same principles of operation as 1GbE, but 10x faster and an 8x improvement in latency
  • Ease of use, debug and management
  • Path to 40G and 100G
  • Operates over both copper and fiber

Leading technologies today
  • 10 GE optical modules: XENPAK, X2, XPAK, XFP, SFP+
  • 10 GE copper: CX4, SFP+ DAC
Basic Elements of a 10 GE Network
  • 10 GE adapters (NICs)
  • 10 GE 'blade' and rack switches
  • 10 GE 'blade' and rack servers

Typical 10 GE network
  • 10G server-to-switch links (10G NIC)
  • 10G switch-to-switch uplinks
  • 10/100/1000 Mbps links from clients into a 1 GE access switch, aggregated by the 10 GE switch
BLADE's RackSwitch: Purposely Designed for Data Centers
  • Virtual: share I/O ports across any number of blade chassis or racks, and automatically migrate VMs
  • Cooler: RackSwitch consumes 57% less power than competing solutions (source: Tolly Group, http://www.bladenetwork.net/media/PDFs/Tolly_RackS_G8100_v_C_4900M.pdf)
  • Easier: plug-and-play networking over a single converged fabric for both data and storage traffic
BLADE RackSwitch G8124
  • 24 ports of 10GbE SFP+ (also supports 1GbE SFPs)
  • SFP+ direct attach cable (twinax copper) supported
  • Wire speed on all ports; ~600 ns latency
  • ~165 W power consumption; redundant AC power supplies
  • Front-to-rear or rear-to-front cooling; redundant fans
RackSwitch Use Case 1: 1G Ethernet Rack Server Aggregation
  • Top-of-rack G8000 leaf switches serving 1 GE rack servers
  • ~2:1 oversubscription at the leaf; line-rate 10G uplinks to G8124 spine switches (see the oversubscription sketch after Use Case 2)

  Item                   Description   Qty
  Spine switches         G8124           2
  Leaf switches          G8000           6
  Interconnect cabling   SFP+ DAC       48
RackSwitch Use Case 2: 10G Ethernet Server Aggregation
  • Most cost-effective 10GbE top-of-rack solution in the industry
  • Top-of-rack G8124s serving 20 rack servers, each with dual 10GbE interfaces
  • High uplink capacity: 8 SFP+ 10Gb Ethernet uplink ports
  • High availability: uplink failure detection
  • 5:1 oversubscription; line-rate 10G uplinks

  Item                    Description   Qty
  Spine & leaf switches   G8124           6
  Interconnect cabling    SFP+ DAC       48
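Use Cases 1 and 2 quote oversubscription ratios (~2:1 and 5:1). The ratio is simply aggregate server-facing bandwidth divided by aggregate uplink bandwidth on a leaf switch. A minimal sketch follows; the slides do not spell out the exact port allocation, so the splits in the examples are illustrative assumptions.

```python
def oversubscription(downlinks: int, down_gbps: float,
                     uplinks: int, up_gbps: float) -> float:
    """Server-facing bandwidth divided by uplink bandwidth on one switch."""
    return (downlinks * down_gbps) / (uplinks * up_gbps)

# Use Case 2, assumed split: 20 x 10G server ports and 4 of the 8 SFP+
# uplinks on each G8124 -> 200G down vs. 40G up.
print(oversubscription(20, 10, 4, 10))   # 5.0 -> the 5:1 on the slide

# Use Case 1, illustrative only: a G8000 with 48 x 1G server ports and
# 2 x 10G uplinks in use -> 48G down vs. 20G up.
print(oversubscription(48, 1, 2, 10))    # 2.4 -> roughly the ~2:1 figure
```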
RackSwitch Use Case 3: 10G Non-Blocking HPC Cluster (48 Nodes)
  • Less than 600 ns port-to-port latency
  • Logical aggregation of trunk connections for scalability and performance; no STP required
  • Uplink Failure Detection (UFD) enables traffic load-sharing over redundant paths
  • Topology: G8100/G8124 leaf switches, each with (12) 10GbE server ports and six 10G links to each G8100/G8124 spine switch; sizing is worked through in the sketch after Use Case 3a

  Item                    Description   Qty
  Spine & leaf switches   G8124           6
  Interconnect cabling    SFP+ DAC       48
RackSwitch Use Case 3a: 10G Non-Blocking HPC Cluster (72+ Nodes)
  • 72-port and 96-port fat-tree (Clos) designs, both built from G8100/G8124 leaf and spine switches

  Item                    Description        72-Node Qty   96-Node Qty
  Spine & leaf switches   G8100/G8124                  9            12
  Interconnect cabling    CX-4 or SFP+ DAC            72            96
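Use Cases 3 and 3a follow the same two-tier non-blocking pattern built from 24-port switches: each leaf dedicates 12 ports to servers and 12 to spine uplinks, so an N-node cluster needs N/12 leaves, N/24 spines, and N leaf-to-spine cables. The sketch below is a back-of-the-envelope check against the quantities in the tables (6 switches/48 cables, 9/72, 12/96); the even 12/12 split is an assumption supported by the "(12) 10GbE server ports per leaf switch" note.

```python
import math

def nonblocking_two_tier(nodes: int, switch_ports: int = 24):
    """Switch and cable counts for a non-blocking two-tier (leaf/spine)
    fabric where each leaf splits its ports evenly: half to servers,
    half to spine uplinks."""
    down_per_leaf = switch_ports // 2                   # 12 server ports per leaf
    leaves = math.ceil(nodes / down_per_leaf)
    uplinks = leaves * (switch_ports - down_per_leaf)   # 12 uplinks per leaf
    spines = math.ceil(uplinks / switch_ports)
    return leaves + spines, uplinks                     # total switches, cables

for n in (48, 72, 96):
    switches, cables = nonblocking_two_tier(n)
    print(f"{n} nodes: {switches} switches, {cables} leaf-spine cables")
# 48 nodes:  6 switches, 48 cables
# 72 nodes:  9 switches, 72 cables
# 96 nodes: 12 switches, 96 cables
```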
10G Top of Rack and Blade Server Usage Scenario: Row Design
  • Financial data center row design
  • Low latency, chassis-to-chassis L2 connectivity
  • 18 blade server chassis in a single row
  • 2 x 1G/10G blade switches per chassis
Blade Server Switch Aggregation
  • Most cost-effective 10GbE top-of-rack blade server switch aggregator
  • 5 blade server chassis (70-80 servers), each chassis with 8 x 10GbE interfaces
  • High uplink capacity: 8 SFP+ 10Gb Ethernet uplink ports
Low Latency HPC Solution
  • 6 blade server systems, 16 servers per chassis (96 servers total); each chassis with 16 x 10G adapters (NICs), 2 x 10G Ethernet blade switches, and 1 Gigabit switch for management
  • 12 low-latency 24-port 10Gb switches
  • Fits in two 19" racks
  • Non-blocking 10Gb throughput
  • End-to-end latency < 3 µs
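The sub-3 µs end-to-end claim is consistent with a simple additive budget: roughly 600 ns per switch hop (the figure quoted for the G8124) times the number of switches traversed, plus adapter latency at both ends. Only the ~600 ns per-hop number comes from the slides; the NIC figure below is an illustrative assumption for a low-latency 10G adapter.

```python
# Rough end-to-end latency budget for the blade HPC design:
# server NIC -> blade switch -> core low-latency switch -> blade switch -> NIC.
SWITCH_HOP_NS = 600   # per-hop latency quoted for the RackSwitch G8124
NIC_NS = 500          # assumed adapter latency per end (illustrative)

def end_to_end_ns(switch_hops: int) -> int:
    """Additive latency estimate for a path crossing switch_hops switches."""
    return 2 * NIC_NS + switch_hops * SWITCH_HOP_NS

print(end_to_end_ns(1))  # 1600 ns: both servers on the same blade switch
print(end_to_end_ns(3))  # 2800 ns: across the core, still under the 3 us target
```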


Editor's Notes

  • #12 (row design): IBM: 14 servers/chassis x 18 chassis = 252 servers per row; HP: 16 servers/chassis x 18 chassis = 288 servers per row