Mellanox HPC Day 2011 Kiev
Mellanox HPC Day 2011 Kiev: Presentation Transcript

  • 1. Paving The Road to Exascale Computing: The Highest Performance, Most Scalable Interconnect Solutions for Servers and Storage. Oct 2011, HPC@mellanox.com
  • 2. Connectivity Solutions for Efficient Computing
    - Target segments: Enterprise HPC, high-end HPC, HPC clouds
    - Mellanox interconnect networking solutions: host/fabric ICs, adapter cards, switches/gateways, cables, software
    - Leading connectivity solution provider for servers and storage
  • 3. Mellanox Complete End-to-End Connectivity
    - Host/Fabric Software Management: UFM, Mellanox OS, integration with job schedulers, inbox drivers
    - Application Accelerations: collectives accelerations (FCA/CORE-Direct), GPU accelerations (GPUDirect), MPI/SHMEM/PGAS, RDMA
    - Networking Efficiency/Scalability: Quality of Service, adaptive routing, congestion management, traffic-aware routing (TARA)
    - Server and Storage High-Speed Connectivity: latency, CPU utilization, bandwidth, message rate
  • 4. Interconnect Trends – Top100, Top200, Top300, Top400
    - InfiniBand connects the majority of the TOP100, 200, 300 and 400 supercomputers, and 42% of the Top500 systems, due to superior performance, scalability, efficiency and return on investment
  • 5. Powering the Petascale Today
    - Systems: Dawning (China), TSUBAME (Japan), NASA (USA, >11K nodes), LANL (USA), CEA (France)
    - InfiniBand is the interconnect of choice for PetaScale computing, all of it Mellanox: accelerating 50% of the sustained PetaScale systems (5 systems out of 10)
  • 6. FDR InfiniBand 56Gb/s is HERE!
  • 7. Introducing FDR InfiniBand 56Gb/s Solutions
    - InfiniBand bandwidth roadmap: 10Gb/s (2002), 20Gb/s (2005), 40Gb/s (2008), 56Gb/s (2011, with PCI Express 3.0)
    - Highest performance, reliability, scalability, efficiency
  • 8. Recent Introductions
    - Software: Unified Fabric Manager, switch OS layer, management and SW acceleration products; applications for networking, storage and clustering
    - Adapters (industry leader): LOM, adapter card and mezzanine card form factors; PCI Express Gen3; dual-port FDR IB or 40GE; native RDMA; CORE-Direct
    - Switches (industry leader): 36 x FDR IB or 40GE, or 64 x 10GE; integrated routers and bridges; 4Tbit switching capacity; ultra-low latency
  • 9. FDR InfiniBand New Features and Capabilities
    - Performance / Scalability: adapter / switch port bandwidth of 56Gb/s; latency reduction; InfiniBand router and IB-Eth/FC bridges
    - Reliability / Efficiency: 64/66 link bit encoding; Forward Error Correction; lower power consumption (a back-of-the-envelope encoding comparison follows below)
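The encoding change called out on this slide is the move from 8b/10b (used through QDR) to 64b/66b, which cuts the line-coding overhead from 20% to about 3%. A minimal sketch of that arithmetic, using nominal 4x signalling rates (the figures are not taken from the slides, and FEC overhead is not modelled):

```c
/* Illustrative comparison of InfiniBand link encoding overhead.
 * 8b/10b carries 8 data bits per 10 line bits; 64b/66b carries 64 per 66. */
#include <stdio.h>

int main(void)
{
    const double qdr_signal_gbps = 40.0;   /* nominal 4x QDR, 8b/10b encoding  */
    const double fdr_signal_gbps = 56.0;   /* nominal 4x FDR, 64b/66b encoding */

    printf("QDR effective data rate: %.1f Gb/s\n", qdr_signal_gbps * 8.0 / 10.0);
    printf("FDR effective data rate: %.1f Gb/s\n", fdr_signal_gbps * 64.0 / 66.0);
    return 0;
}
```

Printed rates come out to roughly 32 Gb/s for QDR versus about 54 Gb/s for FDR, which is why the 56Gb/s headline number translates into a proportionally larger usable-bandwidth gain.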
  • 10. Scalable Collectives Acceleration (MPI/SHMEM) – Fabric Collective Accelerator™ (FCA™)
  • 11. Mellanox Collectives Acceleration Components
    - CORE-Direct: adapter-based hardware offloading for collective operations; includes floating-point capability on the adapter for data reductions; the CORE-Direct API is exposed through the Mellanox drivers and is available to 3rd-party libraries/software protocols
    - FCA: a software plug-in package that integrates into available MPIs; replaces the MPI library code for collective communications and implements MPI collective operations using the hardware accelerations (utilizing CORE-Direct adapter offloading); includes support for sophisticated collective algorithms; available through licensing (an illustrative MPI example follows below)
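Because FCA plugs in underneath the MPI library, application code does not change: a standard collective call such as the MPI_Allreduce below is what would be offloaded to CORE-Direct-capable adapters when the plug-in is enabled. A minimal sketch, not taken from the slides:

```c
/* Minimal MPI collective example. With FCA/CORE-Direct enabled in the MPI
 * stack, the reduction below can execute in the fabric/adapters rather than
 * on the host CPUs; the application source is unchanged either way. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    double local, global;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    local = (double)rank;   /* each rank contributes its rank id */

    /* Global sum across all ranks: the collective FCA targets. */
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of ranks = %g (expected %g)\n", global, size * (size - 1) / 2.0);

    MPI_Finalize();
    return 0;
}
```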
  • 12. FCA/CORE-Direct – Application Performance. *Acknowledgment: HPC Advisory Council for providing the performance results.
  • 13. GPUDirect for GPU Accelerations
  • 14. GPUDirect – Efficient GPU/Network Interface
    - [Diagram: transmit and receive data paths between GPU memory, system memory, CPU, chipset and InfiniBand adapter. Without GPUDirect, each direction takes two copy steps through system memory; with GPUDirect, a single step (see the host-side sketch below).]
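A minimal host-side sketch of the pattern the diagram describes, assuming an MPI plus CUDA runtime environment (buffer names and sizes are hypothetical, not from the slides): data is staged from GPU memory into a pinned host buffer and then handed to the InfiniBand stack. The 2011-era GPUDirect lets the CUDA driver and the IB driver share that pinned buffer, removing the second host-to-host copy shown in the upper half of the diagram.

```c
/* Illustrative only: sending GPU-resident data over MPI/InfiniBand.
 * Without GPUDirect, the CUDA runtime and the IB stack each pin their own
 * host buffer, forcing an extra host-to-host copy; GPUDirect lets both
 * share one pinned region, so the cudaMemcpy below lands in the same
 * buffer the HCA transmits from. */
#include <mpi.h>
#include <cuda_runtime.h>

#define COUNT (1 << 20)

void send_gpu_buffer(const float *d_data, int dest)
{
    float *h_staging;   /* pinned (page-locked) host buffer */
    cudaHostAlloc((void **)&h_staging, COUNT * sizeof(float), cudaHostAllocDefault);

    /* Step 1: copy device memory into the pinned host buffer. */
    cudaMemcpy(h_staging, d_data, COUNT * sizeof(float), cudaMemcpyDeviceToHost);

    /* With GPUDirect the IB driver can register and send from this same
     * buffer; without it, an additional copy into an IB-owned pinned
     * buffer happens inside the MPI/verbs layer. */
    MPI_Send(h_staging, COUNT, MPI_FLOAT, dest, 0, MPI_COMM_WORLD);

    cudaFreeHost(h_staging);
}
```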
  • 15. GPUDirect – Application Performance
    - LAMMPS: 3 nodes, 10% gain (measured with 1 GPU per node and with 3 GPUs per node)
    - Amber – Cellulose: 8 nodes, 32% gain
    - Amber – FactorX: 8 nodes, 27% gain
  • 16. Thank You. HPC@mellanox.com