Transcript of "Qdr infini band products technical presentation"

1. Sun QDR InfiniBand Product Portfolio
   - Presenter's Name
   - Title
   - Sun Microsystems
2. Sun's Systems Strategy
   [Diagram: Sun Open Network Systems - Sun innovation across Software, Compute, Network, and Storage - Breakthrough Efficiency - Intelligent Scalability]
3. Sun Constellation System: Open Petascale Architecture
   Eco-efficient building blocks: compute, networking, software, storage.
   - Compute: ultra-dense blade platform; fastest processors (SPARC, AMD Opteron, Intel Xeon); highest compute density; fastest host channel adapter
   - Networking: ultra-dense and ultra-slim switch solutions; 72-, 648-, and 3456-port InfiniBand switches; unrivaled cable simplification; most economical InfiniBand cost per port
   - Software: comprehensive software stack; integrated developer tools; integrated Grid Engine infrastructure; provisioning, monitoring, and patching; simplified inventory management (Developer Tools, Provisioning, Grid Engine, Linux)
   - Storage: ultra-dense storage solution; most economical and scalable parallel file system building block; up to 48 TB in 4RU; up to 2 TB of SSD; direct cabling to the IB switch
4. Sun DDR InfiniBand Product Family
   - Sun InfiniBand Switched Network Express Module (IB NEM)
   - Sun Datacenter Switch 3x24
   - Sun Datacenter Switch 3456
5. Sun InfiniBand Switched Network Express Module (NEM)
   - Network Express Module for the Sun Blade 6048 chassis
   - Includes twelve Mellanox InfiniHost III Ex dual-port IB Host Channel Adapters (HCAs), one for each of the 12 blades in a shelf
   - Includes two Mellanox InfiniScale III 24-port IB DDR switches that provide redundant paths for the HCAs
   - Eliminates the need for 24 cables from the HCAs to the switches
   - Includes pass-through connectivity for one of each blade's on-board Gigabit Ethernet ports
6. Sun Datacenter InfiniBand Switch 3x24
   - 1U switch designed to attach and glue together the InfiniBand NEMs in a cluster
   - Includes 3 independent 24-port Mellanox InfiniScale III switches that do not communicate with each other
     - unless they are connected through the NEMs or with cables that span two or more connectors
   - Can help create clusters of up to 288 nodes with only 4 switches, 24 InfiniBand NEMs, and 6 Sun Blade 6048 chassis
7. Sun Datacenter InfiniBand Switch 3456
   - Largest InfiniBand switch on the market: 3456 ports in a single switch
   - Needs as few as 1152 cables to reach all nodes when used with the 6048 chassis and InfiniBand NEM
   - Consists of 24 Line Cards with 24 switch chips each (front) and 18 Fabric Cards with 8 switch chips each (rear)
   - Total of 720 Mellanox InfiniScale III switch chips
   - Very low latency and few hops to get in and out of the switch
     - 5 hops maximum from any port to any other port in the switch
     - 200 ns per hop -> 1 µs maximum latency (checked in the sketch below)
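A quick sanity check on the cable and latency figures above; a minimal sketch assuming each 12x cable bundles three 4x links, as described on the cabling slide later in this deck.

```python
# Sanity check for the Switch 3456 figures quoted above.
# Assumption: one 12x cable carries three 4x IB links.

ports = 3456
links_per_12x_cable = 3
max_hops = 5
hop_latency_ns = 200

cables = ports // links_per_12x_cable              # 1152 cables
max_latency_us = max_hops * hop_latency_ns / 1000  # 1.0 microsecond

print(f"12x cables needed: {cables}")
print(f"worst-case latency: {max_latency_us} us")
```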
8. Quad Data Rate InfiniBand Products
9. Sun Datacenter InfiniBand Switch 648
   High-density, highly scalable InfiniBand QDR switch
   - Switch performance
     - 648 ports of QDR/DDR/SDR InfiniBand
     - Bisection bandwidth of 6.48 Tbps
     - 3-stage internal full Clos fabric
     - 100 ns per hop, 300 ns maximum latency (QDR)
   - Line and Fabric Cards
     - 9 Line Cards with connectors
     - 9 Fabric Cards with no connectors
   - 11 RU chassis
     - Mount up to 3 switches in a 19" rack
     - Host-based Sun Subnet Manager
10. Sun Datacenter InfiniBand Switch 648
   - 11RU 19" enclosure
     - Up to 3 per standard 19" rack
     - 1944 ports in a rack!
   - Passive midplane with air holes
     - 81 pass-through connectors
     - Eight 4x IB ports each
   - 9 horizontal Line Cards (front) provide InfiniBand cable connectivity
   - 9 vertical Fabric Cards (rear) provide inter-Line-Card communication and chassis cooling
11. Sun Datacenter InfiniBand Switch 648: Architecture Diagram
12. Line Cards (LCs)
   - 72 ports per Line Card
   - 4 x Mellanox I4 switch chips
   - 72 ports realized through 24 12x CXP connectors (3 IB links each)
   - DDR or QDR
   - Power consumption: 450 W
   - 9 connectors to the Fabric Cards via the midplane
13. Line Card Block Diagram
   [Block diagram: four I4 switch chips behind the midplane connectors. Cable side: six 4x ports per line, 18 4x ports per I4 chip. Midplane side: two 4x ports per line, 18 4x ports per I4 chip, eight 4x ports per midplane connector]
14. Fabric Cards (FCs)
   - 9 Fabric Cards per chassis
   - QDR
   - 2 Mellanox I4 switch chips
     - 4 ports to each Line Card
   - No external connectors
   - Power consumption: 200 W
   - Four hot-swap (N+1) fans per FC for chassis cooling
   - 9 connectors to the Line Cards via the midplane
15. Fabric Card Block Diagram
   [Block diagram: two I4 switch chips behind the midplane connectors; four 4x ports per line, eight 4x ports per midplane connector]
16. Three-Stage Fat-Tree Topology
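As a minimal sketch (not taken from the slides), the port accounting implied by the line-card and fabric-card block diagrams above can be checked against the 648 external ports, assuming each I4 chip is a 36-port switch:

```python
# Port accounting for the 648-port, three-stage fat tree.
# Assumption: each Mellanox I4 chip is a 36-port switch, split 18/18 on
# the Line Cards between cable-facing and midplane-facing ports.

I4_PORTS = 36

# Line Cards: 9 cards x 4 chips; 18 ports per chip face the cables,
# 18 face the midplane (toward the Fabric Cards).
line_cards, lc_chips = 9, 4
external_ports = line_cards * lc_chips * 18
lc_midplane_ports = line_cards * lc_chips * 18

# Fabric Cards: 9 cards x 2 chips; all 36 ports face the midplane.
fabric_cards, fc_chips = 9, 2
fc_midplane_ports = fabric_cards * fc_chips * I4_PORTS

assert external_ports == lc_midplane_ports == fc_midplane_ports == 648
print(external_ports, lc_midplane_ports, fc_midplane_ports)
```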
17. Cable Management
   - Two cable management arms mounted on both sides of the chassis, with supporting trays
   - Easy to arrange cables for each Line Card
   - Cables can be routed either way: up to the ceiling or down under the floor
18. Cables and Connectors
   - 1st-generation 12x DDR used iPASS connectors
     - Proprietary to Sun
   - 2nd-generation 4x QDR uses QSFP
     - Industry standard for 4x
     - Supports copper and optical
     - Strong third-party support
   - 2nd-generation 12x QDR uses CXP
     - CXP is an industry standard
     - 3:1 cable reduction (see the sketch below)
     - Optical available in 10 m and 20 m
     - Copper in 1 m, 2 m, 3 m, and 5 m
   Cable options: 12x optical splitter (one CXP to three QSFP), 12x CXP-to-CXP optical, 12x CXP-to-CXP copper, 12x copper splitter (CXP to QSFPs)
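To make the 3:1 reduction concrete, here is a small sketch comparing cable counts for a few fabric sizes; it assumes one 4x link per QSFP cable and three 4x links per 12x CXP cable, and ignores splitter cables and spares.

```python
# Cable-count comparison: 4x QSFP (one link per cable) vs. 12x CXP
# (three links per cable). Illustrative only; splitters and spares ignored.

def cables_needed(ports_4x: int, links_per_cable: int) -> int:
    """Ceiling division: a partially used cable still has to exist."""
    return -(-ports_4x // links_per_cable)

for ports in (72, 648, 3456):
    qsfp = cables_needed(ports, 1)
    cxp = cables_needed(ports, 3)
    print(f"{ports:5d} 4x ports -> {qsfp:5d} QSFP cables or {cxp:5d} CXP cables")
```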
19. Power and Cooling
   - Cooling air flows from front to back
   - 4 hot-swap redundant cooling fans on each Fabric Card
   - 36 cooling fans per system
   - Four redundant (N+1) power supplies, 2,900 W each
   - 450 W per LC
   - 200 W per FC
   - Maximum power consumption: 6,750 W (budget sketch below)
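A rough power budget from the per-card figures above; attributing the remaining ~900 W to fans, CMCs, and conversion losses is an assumption, not something stated on the slide.

```python
# Rough chassis power budget from the per-card figures quoted above.
# Assumption: the gap to the stated 6,750 W maximum covers fans, CMCs,
# and power-conversion losses.

line_cards, fabric_cards = 9, 9
lc_watts, fc_watts = 450, 200

card_total = line_cards * lc_watts + fabric_cards * fc_watts  # 5,850 W
stated_max = 6_750
overhead = stated_max - card_total                            # 900 W

print(f"cards: {card_total} W, stated max: {stated_max} W, headroom: {overhead} W")

# Even with one supply failed (N+1), three 2,900 W supplies exceed the maximum.
assert 3 * 2_900 >= stated_max
```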
20. Dimensions
   - Physical characteristics without cable management
     - Height: 19 inches
     - Width: 17.5 inches
     - Depth: 27 inches
     - Weight: 400 pounds
21. System Management
   - Two redundant hot-swap Chassis Management Controller cards (CMCs)
     - Pigeon Point service processor
     - One RJ-45 100BASE-T network management port
     - One RJ-45 serial console port
   - Hardware remote management via ILOM
   - CLI and web interface
   - Supports IPMI and SNMP
22. InfiniBand Subnet Management
   - External dual-redundant IB subnet managers running OpenSM software
   - Runs on Linux
   - Controls IB routing
   - System hardware management via the CMC service processor
23. Sun Datacenter InfiniBand Switch 648: Configuration Options
   Deploying non-blocking (100%) and oversubscribed (<100%) fabrics (see the oversubscription sketch below)
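For intuition on the 100% vs. <100% labels, a minimal sketch of how oversubscription is usually computed at a leaf switch: host-facing bandwidth divided by uplink bandwidth. The port splits below are illustrative examples, not Sun-documented configurations.

```python
# Leaf-switch oversubscription: host-facing ports vs. uplink ports.
# The example splits are illustrative only.

def oversubscription(host_ports: int, uplink_ports: int) -> float:
    """Downlink:uplink ratio; 1.0 means a non-blocking (100%) fabric."""
    return host_ports / uplink_ports

def fabric_pct(host_ports: int, uplink_ports: int) -> float:
    return min(100.0, 100.0 * uplink_ports / host_ports)

for hosts, uplinks in ((18, 18), (24, 12), (27, 9)):
    print(f"{hosts} host ports / {uplinks} uplinks -> "
          f"{oversubscription(hosts, uplinks):.1f}:1, ~{fabric_pct(hosts, uplinks):.0f}% fabric")
```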
24. Sun Blade IB Host Channel Adapters
   - QDR cards with QSFP transceivers
     - Low-profile PCI Express card
     - PCI Express ExpressModule
   - Need PCI Express 2.0 or later to deliver QDR (40 Gbps)
   - Sun Blade 6000 Server Module Fabric Expansion Module (FEM): DDR only
     - Initially supported only with the Sun Blade 6048 QDR InfiniBand switched NEM
   - HCAs are based on the Mellanox ConnectX chip
25. Sun Blade 6048 InfiniBand QDR Switched NEM (QNEM)
   - Targets divisional and supercomputing HPC
   - 2 embedded leaf switches connect to 12 servers
   - No HCAs on board
   - Density
     - Double height (4 per chassis)
     - 30 uplinks (10 physical CXP ports)
     - 24 pass-through GE ports
   - Topologies
     - Supports Clos with 24 uplinks, scaling to 5,184 nodes
     - Supports 3D torus with 30 uplinks, scaling to 48K nodes
   - Works in the Sun Blade 6048 only
26. QNEM Quickspecs
   - Dual-height Sun Blade 6048 Network Express Module
   - Dual-bonded I4 chips
     - 2 x 36-port Mellanox InfiniScale IV (I4) QDR switches
     - 24 blade ports
     - 30 external ports used in torus configurations
     - 24 external ports used in Clos (M9) configurations
       - 6 external ports connected together by loopback cable
   - 10 CXP connectors
   - Optical cables only
   - 24 x 1 Gb Ethernet ports
   - Power consumption: 225 W
27. QNEM Architecture
   [Block diagram: two 36-port QDR InfiniBand switches. The midplane provides a 4x QDR InfiniBand connection to each of the 12 server modules (blades 0-11, each with two server nodes). Each switch exposes five 12x IB connectors (A0-A2 through A12-A14, and B0-B2 through B12-B14) to the cluster IB fabric, and 12 GE pass-through ports per side are provided for server administration]
28. QNEM and SF X6275 in Sun Blade 6048: Configuration Options
   [Diagram: each 2-node x 2-socket server module has a PCIe 2.0 IB HCA per node, connected over 4x IB through the midplane to the NEM. Inside the NEM, each of the two 36-port QDR IB switches uses 12 ports toward the blades, 15 ports toward external 12x IB cables, and 9 ports to interconnect the two switches (port budget checked below)]
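A minimal sketch checking that per-switch split (12 blade, 15 external, 9 inter-switch) against the 36-port I4 chip and the QNEM quickspecs; the split itself is a reading of the diagram, not an official spec.

```python
# Per-switch port budget for the QNEM, as read off the diagram above
# (an interpretation of the slide, not an official spec sheet).

I4_PORTS = 36
blade_ports = 12        # half of the QNEM's 24 blade-facing ports per switch
external_ports = 15     # five 12x CXP connectors x 3 links per switch
inter_switch_ports = 9  # links bonding the two I4 chips together

assert blade_ports + external_ports + inter_switch_ports == I4_PORTS

qnem_blade_ports = 2 * blade_ports   # 24, matches the quickspecs
qnem_uplinks = 2 * external_ports    # 30 uplinks in torus mode
print(qnem_blade_ports, qnem_uplinks)
```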
29. Network Architecture Types
   - 3D mesh or torus: leaf switches joined directly by switch interconnects
   - Clos or fat tree: leaf switches connected through a switch hierarchy
30. Clos Networks
   - Also called a folded fat tree
   - Scales as a function of switch radix aggregated into "stages"
   - With N = number of nodes, n = switch ports, T = tiers: N = 2(n/2)^T (see the sketch below)
   - Constant cross-sectional bandwidth across a multi-stage network
   - Benefits
     - Easy to manage and deploy
     - Only 1 virtual lane required for deadlock-free routing
     - Non-blocking fabric and fixed latency
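A minimal sketch of that scaling formula, evaluated for the 36-port I4 radix used throughout this deck; the tier counts are just examples.

```python
# Maximum nodes in a folded fat tree (Clos): N = 2 * (n/2) ** T,
# where n is the switch radix and T is the number of tiers.
# Radix 36 matches the I4 chips in this deck; tier counts are examples.

def clos_max_nodes(switch_ports: int, tiers: int) -> int:
    return 2 * (switch_ports // 2) ** tiers

for tiers in (1, 2, 3):
    print(f"n=36, T={tiers}: up to {clos_max_nodes(36, tiers):,} nodes")
# T=1 -> 36, T=2 -> 648 (one Switch 648), T=3 -> 11,664
```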
31. 3D Mesh/Torus
   - Mesh/torus
     - Enabled with the SB6048 QNEM
     - QNEMs are cabled together with no additional external switching
   - Benefits
     - Shorter cables and high scalability
     - Good for nearest-neighbor communication
   - Challenges
     - Blocking fabric and variable latency (see the hop-distance sketch below)
     - Applications must be topology-aware
     - Requires expertise to manage
     - Requires 2-6 virtual lanes for deadlock-free routing
     - Far more difficult to conceptualize
   The example shown is a 3 x 4 x 8 3D torus; 8 servers are stacked in the Z axis (4 Vayus, each with 2 servers)
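To illustrate why torus latency varies with placement while nearest neighbors stay cheap, a minimal sketch of hop counts in a wraparound 3D torus; the 3 x 4 x 8 dimensions follow the example above, everything else is illustrative.

```python
# Hop count between two coordinates in a 3D torus with wraparound links.
# Dimensions follow the 3 x 4 x 8 example above; purely illustrative.

def torus_hops(a, b, dims=(3, 4, 8)):
    """Minimal hops per axis, taking the shorter way around each ring."""
    return sum(min(abs(x - y), d - abs(x - y)) for x, y, d in zip(a, b, dims))

print(torus_hops((0, 0, 0), (0, 0, 1)))   # 1 hop: nearest neighbor
print(torus_hops((0, 0, 0), (0, 0, 7)))   # 1 hop: wraparound link helps
print(torus_hops((0, 0, 0), (1, 2, 4)))   # 7 hops: distant placement, higher latency
```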
32. Sun Blade 6048, QNEM, and Sun DS 648 Configuration
33. Thank you!
   - Presenter's email