HPC Seminar – September 2013
Scale Out Computing With NeXtScale Systems
Karl Hansen, HPC and Technical Computing, IBM systemX, Nordic

Presentation from the HPC event at IBM Denmark - September 2013, Copenhagen

NeXtScale HPC seminar

  1. HPC Seminar – September 2013
     Scale Out Computing With NeXtScale Systems
     Karl Hansen, HPC and Technical Computing, IBM systemX, Nordic
  2. Journey Started in 2008 – iDataPlex
     Flexible computing optimized for data center serviceability
     – Race car design: performance-centric approach, cost efficient, energy conscious
     – All-front access: reduces time behind the rack, reduces cabling errors, highly energy efficient
     – Low-cost, flexible chassis: support for servers, GPUs, and storage; easy to install and service; greater density than traditional 1U systems
     – Optimized for Top of Rack (TOR) switching: no expensive midplane, latency optimized, open ecosystem
  3. IBM iDataPlex dx360 M4 Refresh
     What's new:
     – Intel Xeon E5-2600 v2 product family
     – Intel Xeon Phi 7120P coprocessor
     – New 1866MHz and 1.35V RDIMMs
     Higher performance:
     – Intel Xeon E5-2600 v2 processors provide up to 12 cores, 30MB cache, and 1866MHz maximum memory speed to deliver more performance in the same power envelope
     – The Intel Xeon Phi coprocessor delivers over 1 teraflop of double-precision peak performance, up to 4x more performance per watt than processors alone
     – Increased memory performance with 1866MHz DIMMs and new energy-efficient 1.35V RDIMM options, ideal for HPC workloads
     Learn more: http://www-03.ibm.com/systems/x/hardware/rack/dx360m4/index.html
  4. 2013 – Introducing IBM NeXtScale
     A superior building-block approach for scale-out computing: standard rack, chassis, compute, storage, acceleration
     Primary target workloads: High Performance Computing, Public Cloud, Private Cloud
     – Better data center density and flexibility
     – Compatible with standard racks
     – Optimized for Top of Rack switching
     – Top-bin Intel Xeon E5-2600 v2 processors
     – Designed for solution redundancy
     – The best of iDataPlex, with a very powerful roadmap – more coming
  5. IBM NeXtScale: Elegant Simplicity – One Architecture Optimized for Many Use Cases
     One simple, light chassis (IBM NeXtScale n1200) in an IBM rack or client rack, with three node types:
     – Compute (IBM NeXtScale nx360 M4): dense compute, top performance, energy efficient, IO flexibility, swappable
     – Storage (nx360 M4 + Storage NeX): add RAID card + cable; dense 32TB in 1U; simple direct connect; no trade-offs in the base node; mix and match
     – PCI – GPU / Phi (nx360 M4 + PCI NeX): add PCI riser + GPUs; 2 x 300W GPU in 1U; full x16 Gen3 connect; no trade-offs in the base node; mix and match
  6. Deep Dive into the NeXtScale n1200 Enclosure
     The ultimate high-density server enclosure (new MT: 5456), designed for Technical, Grid, and Cloud computing workloads – twice the density of regular 1U servers
     – Form factor: 6U tall with 12 half-wide bays, in a standard rack
     – Mix and match compute, storage, or GPU nodes within the chassis
     – Power supplies: 6 x 900W hot-swap, 80 PLUS Platinum high energy efficiency; non-redundant, N+1, or N+N
     – Fans: 10 hot-swap
     – Each system is individually serviceable; no left- or right-specific parts, so a node can go in any bay
     – Up to 7 chassis (up to 84 servers) in a standard 19" rack (see the sketch below)
     – No in-chassis networking integration: systems connect to TOR switches
     – No need to manage the chassis via FSM, IMM, etc.
     – Shared power and cooling keeps business-critical applications up and running
     – Front-access cabling: no need to go to the rear of the rack or chassis
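The density and power figures on this slide reduce to simple arithmetic. A minimal sketch follows; the 42U rack height and the usable-PSU counts per redundancy scheme are illustrative assumptions, while the 6U / 12-bay / 6 x 900W numbers come from the slide.

```python
# Rack density and chassis power budget for the NeXtScale n1200, using the
# 6U / 12-bay / 6 x 900W figures from the slide. The 42U rack height and the
# usable-PSU counts per redundancy scheme are illustrative assumptions.

RACK_U = 42
CHASSIS_U = 6
BAYS_PER_CHASSIS = 12
PSUS, PSU_WATTS = 6, 900

chassis_per_rack = RACK_U // CHASSIS_U                 # 7
nodes_per_rack = chassis_per_rack * BAYS_PER_CHASSIS   # 84, as quoted

usable_power = {
    "non-redundant": PSUS * PSU_WATTS,        # all 6 PSUs carry load
    "N+1":           (PSUS - 1) * PSU_WATTS,  # one PSU held in reserve
    "N+N":           (PSUS // 2) * PSU_WATTS, # half the PSUs mirror the other half
}

print(f"{nodes_per_rack} half-wide nodes per {RACK_U}U rack")
for scheme, watts in usable_power.items():
    print(f"{scheme:>13}: {watts} W chassis budget, "
          f"~{watts / BAYS_PER_CHASSIS:.0f} W per node bay")
```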
  7. NeXtScale – Dense Chassis: IBM NeXtScale n1200 Enclosure
     Optimized shared infrastructure:
     – 6U chassis, 12 bays, ½-wide component support
     – Up to 6 x 900W power supplies, N+N or N+1 configurations
     – Up to 10 hot-swap fans
     – Fan and Power Controller
     – Mix and match compute, storage, or GPU nodes
     – No built-in networking; no chassis management required
     (Front view: 12 node bays, shown with 12 compute nodes installed. Rear view: 2 x 5 80mm fans, 2 x 3 power supplies, Fan and Power Controller.)
  8. n1200 Chassis Details
     – Height: 262.7 mm (6U)
     – Rear: 10 x 80mm fans; 6 x hot-swap 80 PLUS Platinum 900W power supplies
     – Power design supports non-redundant, N+1, and N+N power
  9. View of Chassis
     – Front view: 12 node bays
     – Rear view: power distribution board access cover, fan / power control card, 6 power supplies, fan & system LEDs, 10 x 80 mm fans
 10. NeXtScale – The Compute Node: IBM NeXtScale nx360 M4 Hyperscale Server
     Simple architecture:
     – New ½-wide, 1U, 2-socket server with next-generation Intel processors (IVB EP)
     – Flexible slot-less I/O design, generous PCIe capability
     – Open design, works with existing x86 tools
     – Versatile design with flexible Native Expansion options: 32TB local storage (Nov), GPU/Phi adapters (2014)
     (Board callouts: power button and information LED, x24 PCIe 3.0 slot, 1 GbE ports, dual-port mezzanine card (IB/Ethernet) on an x8 mezzanine connector, KVM connector, IMM management port, drive bay(s), labeling tag, CPU #1 and CPU #2 with four banks of 2x DIMMs, power connector.)
 11. ½-Wide Node Details
     – Dimensions: 216 ±0.5 mm (8.5 in) wide, 41 ±0.5 mm tall
     – Storage choice: 1 x 3.5" HDD, 2 x 2.5" HDD/SSD, or 4 x 1.8" SSD
     – Power interposer card
     – Top-bin processors x 2
     – All external cable connectors out the front of the server for easy access on the cool aisle
     – Power button and information LEDs
     – Motherboard with IMM v2 (IPMI / SoL compliant BMC)
     – 2 x 1Gb Intel NIC
     – 8 DIMMs @ 1866MHz
     – Mezzanine card (IO – IB, 10Gb)
     – PCIe adapter – full height, half length
 12. nx360 M4 Node
     The essentials:
     – Dedicated or shared 1Gb port for management
     – Two production 1Gb Intel NICs and 1 additional port for the IMM
     – Standard PCI card support
     – Flexible LOM/mezzanine for IO expansion
     – Power, basic Light Path, and KVM crash-cart access
     – Simple pull-out asset tag for naming or RFID
     – Intel Node Manager 2.0 power metering/management
     The first silver System x server:
     – Clean, simple, and lower cost
     – Blade-like weight and size – rack-like individuality and control
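Because the IMM is a standard IPMI/SoL-compliant BMC, the management port supports ordinary out-of-band tooling. Below is a minimal sketch, not an IBM-specific procedure: it assumes the open-source ipmitool utility is installed and that the IMM address and credentials (the HOST/USER/PASS placeholders) are known.

```python
# Out-of-band checks against a node's IMM using generic IPMI commands.
# Assumes ipmitool is installed and the IMM is reachable over the management
# network; HOST/USER/PASS are placeholders, not values from the deck.
import subprocess

HOST, USER, PASS = "10.0.0.42", "admin", "secret"   # hypothetical values

def ipmi(*args: str) -> str:
    """Run an ipmitool command against the IMM over the LAN interface."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", HOST, "-U", USER, "-P", PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(ipmi("chassis", "power", "status"))   # node power state
    print(ipmi("dcmi", "power", "reading"))     # current power draw via DCMI
```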
 13. IBM NeXtScale: Elegant Simplicity – NeXtScale will keep you in front (of the rack, that is)
     Which aisle would you rather be working in? The hot aisle can exceed 100°F while the cold aisle sits at 65–80°F, and from the front you know which cable you are pulling.
     – Service NeXtScale from the front of the rack, with cold-aisle accessibility to most components
     – Tool-less access to servers; server removal without unplugging power
     – Front access to networking cables and switches
     – Simple cable routing (front or traditional rear switching)
     – Power and LEDs all front-facing
 14. nx360 M4 Block Diagram
     (Figure only: block diagram of the nx360 M4.)
 15. nx360 M4 Is Optimized for HPC and Grid
     – Full CPU lineup support up to 130W
     – 8 DIMM slots, optimized for maximum speed at 1 DIMM per channel (1866MHz)
       • Optimized for HPC workloads: 2–4GB/core with 24 cores fits nicely into the 16GB-DIMM cost sweet spot (see the sketch below)
       • Optimized for cost (board reduced to 8 layers; HP has 12) and for efficiency (greater processor spread to reduce preheating)
     – InfiniBand FDR mezzanine, optimized for performance and cost
     – Chassis capable of non-redundant or N+1 power to reduce cost
       • HPC typically deploys non-redundant power (software resiliency)
       • Option for N+1 to protect 12 nodes from throttling on a PSU failure, for a minimal cost add
     – Flexible integrated storage for boot and scratch: 1 x 3.5" (or stateless, no HDD) is common for HPC; 2 x 2.5" is used in some grid applications; 4 x 1.8" SSD for low power and additional flexibility
     – Enabled for GPU and storage trays: pre-positioned PCIe slots (1 in front, 1 in back)
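The GB-per-core reasoning above is easy to verify. The sketch below uses the 24-core and 8-DIMM-slot figures from the slide; the list of DIMM capacities is an assumption chosen to bracket common RDIMM sizes.

```python
# Memory per core on a 2-socket nx360 M4: 8 DIMM slots populated 1 DIMM per
# channel (the 1866MHz configuration) and up to 24 cores per node, per the
# slide. The DIMM capacities below are assumed common RDIMM sizes.

CORES_PER_NODE = 24        # 2 x 12-core Xeon E5-2600 v2
DIMM_SLOTS = 8             # 4 channels per socket, 1 DIMM per channel

for dimm_gb in (4, 8, 16, 32):
    total = DIMM_SLOTS * dimm_gb
    print(f"{dimm_gb:>2}GB DIMMs -> {total:>3}GB/node, "
          f"{total / CORES_PER_NODE:.1f} GB/core")

# 8 x 8GB = 64GB (~2.7 GB/core) and 8 x 16GB = 128GB (~5.3 GB/core) bracket
# the 2-4 GB/core target the slide mentions, which is why 16GB DIMMs are
# called the cost sweet spot there.
```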
 16. iDataPlex and NeXtScale – Complementary Offerings
     iDataPlex is being refreshed with Intel Xeon E5-2600 v2 processors – full stack
     – Will ship thousands of iDataPlex nodes in 3Q; expect continued sales through 2015
     – Clients with proven iDataPlex solutions can continue to purchase
     iDataPlex provides several functions that Gen 1 NeXtScale will not:
     – Water cooling: stay with iDataPlex for direct water cooling until the next generation
     – 16 DIMM slots: for users that need 256GB or more of memory, iDataPlex is a better choice until the next-gen NeXtScale offering
     – Short term, iDataPlex is the vehicle for GPU/GPGPU support
     Key point: NeXtScale is not a near-term replacement for iDataPlex
     NeXtScale will be the architecture of choice for HPC, Cloud, Grid, IPDC and Analytics:
     – More flexible architecture with a stronger roadmap
     – As NeXtScale continues to add functionality, iDataPlex will no longer be needed – outlook 2015
 17. NeXtScale Improves on an Already Great iDataPlex Platform
     – iDataPlex requires a unique rack to achieve density, and most customers prefer a standard rack; NeXtScale fits in any standard rack
     – 84 servers per rack is difficult to utilize and configure: InfiniBand fits into multiples of 18 or 24, creating a mismatch with 84 servers. A single NeXtScale rack allows all InfiniBand and Ethernet switching with 72 servers, the perfect multiple (see the sketch below)
     – iDataPlex clusters are difficult to configure, with unused switch ports at maximum density; NeXtScale offers 72 nodes per rack plus infrastructure, making configuration straightforward
     – The wide iDataPlex rack drives longer, higher-cost cables; NeXtScale is optimized for the 19" rack, reducing rack-to-rack cable length and cost
     – Other servers and storage in a cluster force standard racks into the layout, which eliminates the iDataPlex datacenter advantage; NeXtScale, System x and storage use the same rack, making it easy to optimize and deploy
     (Figure: iDataPlex layout with longer optical cables vs. NeXtScale layout with shorter copper cables.)
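The "72 is the perfect multiple" point is plain divisibility against common InfiniBand edge-switch port counts. The sketch below assumes 18-, 24- and 36-port switches as typical sizes and ignores uplink ports; the 72 and 84 node counts come from the slide.

```python
# How 72 vs. 84 nodes per rack map onto InfiniBand edge switches.
# 18-, 24- and 36-port switches are assumed typical sizes; uplink ports are
# ignored for simplicity. The node counts come from the slide.

for nodes in (84, 72):
    print(f"{nodes} nodes per rack:")
    for ports in (18, 24, 36):
        switches, leftover = divmod(nodes, ports)
        if leftover:
            switches += 1                       # a partial switch is still a switch
        unused = switches * ports - nodes
        print(f"  {ports}-port switches: {switches} needed, {unused} ports unused")
```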
 18. NeXtScale Product Timeline
     – Shipping Oct 2013: 6U dense chassis; 1U-tall ½-wide compute node
     – Shipping Nov 2013: Storage Native Expansion (Storage NeX), 1U (making a 2U-tall ½-wide unit), up to 32TB total capacity
     – Shipping 1H 2014: PCI Native Expansion (PCI NeX), 1U (making a 2U-tall ½-wide unit), GPU or Xeon Phi support
     – A lot more coming: more storage, more IO options, next-gen processors, microservers
     – The 6U chassis will support mix-and-match nodes
 19. Storage NeX
     (Figure only.)
 20. Storage NeX
     (Figure only.)
 21. Storage NeX – Internals
     – Seven LFF (3.5") drives internal to the Storage NeX, plus one additional drive in the nx360 M4 (drive positions 0–7)
     – Cable-attached to a SAS or SATA RAID adapter or HBA on the nx360 M4
     – Drives are not hot-swap
     – Initial capacity is up to 4TB per drive
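Those figures are where the "dense 32TB in 1U" number on slide 5 comes from; the arithmetic, using only values quoted on these slides, is:

```python
# Storage NeX capacity check using only figures quoted on the slides:
# 7 drives in the tray plus 1 in the nx360 M4, at up to 4TB each.
TRAY_DRIVES = 7
NODE_DRIVES = 1
MAX_TB_PER_DRIVE = 4

total_tb = (TRAY_DRIVES + NODE_DRIVES) * MAX_TB_PER_DRIVE
print(f"{TRAY_DRIVES + NODE_DRIVES} drives x {MAX_TB_PER_DRIVE}TB = {total_tb}TB")
# -> 32TB, matching the "dense 32TB in 1U" figure on slide 5.
```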
 22. PCI NeX
     (Figure only.)
 23. PCI NeX
     – Supports 2 full-height, full-length, double-wide adapters at up to 300W each
     – Provides 2 x16 slots
     – Requires 1300W power supplies in the chassis
     – Will support Intel Xeon Phi and NVIDIA GPUs
     – Expected availability: 1H 2014
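A rough power budget shows why the 1300W supplies are called for. In the sketch below the 300W adapter limit, two adapters per tray, and the 6-tray chassis geometry come from these slides; the ~350W host-node estimate and the usable-PSU counts per redundancy scheme are illustrative assumptions only.

```python
# Rough chassis power budget with the PCI NeX fully deployed.
# Slide-sourced: 300W per adapter, 2 adapters per tray, and a tray+node pair
# filling 2 of the 12 half-wide bays (so 6 pairs per chassis).
# Assumed for illustration: ~350W per host node, and the usable-PSU counts.

TRAYS = 6
GPU_W, GPUS_PER_TRAY = 300, 2
HOST_NODE_W = 350                     # illustrative estimate, not from the deck

load_w = TRAYS * (GPUS_PER_TRAY * GPU_W + HOST_NODE_W)    # ~5700 W estimated

for psu_w in (900, 1300):
    for scheme, active in (("non-redundant", 6), ("N+1", 5), ("N+N", 3)):
        cap = active * psu_w
        verdict = "covers" if cap >= load_w else "falls short of"
        print(f"{psu_w:>4}W PSUs, {scheme:>13}: {cap:>4} W {verdict} ~{load_w} W")
```

On these assumptions the 900W supplies cannot carry a fully loaded GPU chassis even without redundancy, while the 1300W units leave headroom in non-redundant and N+1 layouts.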
 24. PCI NeX – Slots
     (Block diagram: the upper 1U carries GPU 0 and GPU 1, each on a PCIe Gen3 x16 link; the lower 1U is the nx360 M4 planar with Proc 0 and Proc 1, each with its own DRAM and joined by QPI, plus an x24 and an x16 planar connector, an x8 mezzanine connector for IB/10GbE, and an FHHL slot.)
     Note: all PCIe slots and server GPU slots have separate SMBus connections.
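For a sense of what a full x16 Gen3 link gives each adapter, the standard PCIe 3.0 numbers work out as below; only the x16 width comes from the slide, while the signalling rate and encoding are the PCIe 3.0 spec values.

```python
# Theoretical bandwidth of the x16 Gen3 link feeding each GPU slot.
# 8 GT/s per lane and 128b/130b encoding are standard PCIe 3.0 parameters;
# only the x16 width comes from the slide.

GT_PER_S = 8.0             # PCIe 3.0 raw signalling rate per lane
ENCODING = 128 / 130       # 128b/130b line-coding efficiency
LANES = 16

per_lane_gbit = GT_PER_S * ENCODING        # ~7.88 Gbit/s usable per lane
per_lane_gbyte = per_lane_gbit / 8         # ~0.985 GB/s per lane
x16_gbyte = per_lane_gbyte * LANES         # ~15.75 GB/s per direction

print(f"x{LANES} Gen3: ~{x16_gbyte:.2f} GB/s per direction (theoretical)")
```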
 25. How the PCI NeX Attaches
     (Figure only: PCIe connector locations on the nx360 M4.)
 26. Dense Chassis – Flexibility for the Future
     Room to scale – future-proof flexibility. The investment platform for HPC.
     – Dense compute: 2-socket, ½-wide, 12 compute nodes
     – Dense storage: 2-socket, ½-wide, 6 compute nodes + 8 x 3.5" HDD
     – GPU / accelerator: 2-socket, 6 compute + 2 GPU
     – GPU / accelerator with IO: 2-socket, 4 compute + 4 GPU
     – Full-wide: 1-2 socket (full-wide), 6 compute, memory- or HDD-rich
     – Ultra-dense: dense microservers (3U)
     – Dense storage: dense hot-swap storage (4U)
     – More…