Interconnect Your Future with Connect-IB
Presented during SuperComputing 2013 by Yoni Luzon

Presentation Transcript

• Interconnect Your Future with Connect-IB (Supercomputing 2013)
• The Only Provider of End-to-End 40/56Gb/s Solutions: a comprehensive end-to-end InfiniBand and Ethernet portfolio of ICs, adapter cards, switches/gateways, host/fabric software, metro/WAN systems, and cables/modules, reaching from the data center to metro and WAN, on x86-, ARM-, and Power-based compute and storage platforms. The interconnect provider for 10Gb/s and beyond.
• Technology Roadmap – One-Generation Lead over the Competition. [Timeline, 2000-2020: Mellanox interconnect speeds progress from 20Gb/s through 40Gb/s and 56Gb/s toward 100Gb/s and 200Gb/s, spanning the Terascale, Petascale, and Exascale eras of mega supercomputers. Mellanox-connected milestones include Virginia Tech's Apple cluster (3rd on the 2003 TOP500) and the "Roadrunner" system (the first Petascale supercomputer).]
• Mellanox Connect-IB Adapter Delivers Highest Clustering ROI
   The 7th generation of Mellanox interconnect adapters, based on the new Exascale architecture
   World's first 100Gb/s interconnect adapter (dual-port FDR 56Gb/s InfiniBand)
   Delivers 137 million messages per second, 4X higher than the competition and previous generations
   Supports the new, innovative InfiniBand scalable transport, Dynamically Connected (see the sketch after the next slide)
   Accelerates application performance, with cases of 200% higher performance than the competition
• Dynamically Connected Transport Performance Advantages
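
The advantage behind Dynamically Connected (DC) transport is connection-state scaling: with the classic Reliable Connected (RC) transport, each process keeps a queue pair (QP) per remote peer, while DC keeps a small pool of dynamically attached initiators regardless of cluster size. A back-of-the-envelope sketch of that difference follows; the per-QP context size and DC pool size are illustrative assumptions, not published Mellanox figures.

```c
/*
 * Illustrative sketch (not Mellanox code): contrast the queue-pair
 * footprint of RC transport with DC transport on a large cluster.
 * QP_CTX_BYTES and DC_POOL_PER_PE are assumed values for illustration.
 */
#include <stdio.h>

#define QP_CTX_BYTES   256   /* assumed per-QP context size, illustrative  */
#define DC_POOL_PER_PE 8     /* assumed DC initiators per process, tunable */

int main(void)
{
    int nodes = 10000;                     /* cluster size           */
    int ppn   = 16;                        /* MPI processes per node */
    long peers = (long)nodes * ppn - 1;    /* remote processes       */

    /* RC: each process keeps one QP per remote peer it talks to. */
    long rc_qps = peers;

    /* DC: each process keeps a small, fixed pool of DC initiators,
     * attached on the fly to any DC target in the fabric. */
    long dc_qps = DC_POOL_PER_PE;

    printf("RC QPs/process: %ld (~%ld KB context)\n",
           rc_qps, rc_qps * QP_CTX_BYTES / 1024);
    printf("DC QPs/process: %ld (~%ld KB context)\n",
           dc_qps, dc_qps * QP_CTX_BYTES / 1024);
    return 0;
}
```
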
• Connect-IB Provides Highest Server and Storage Throughput. [Throughput charts (higher is better) comparing dual-port Connect-IB FDR against ConnectX-3 FDR, ConnectX-2 QDR, and a competing InfiniBand adapter; Connect-IB leads. Source: Prof. DK Panda.]
• Connect-IB Delivers Highest Application Performance
   WIEN2k is a quantum mechanical simulation software package
   FDR InfiniBand delivers scalable performance
   The Connect-IB adapter demonstrates 200% higher performance versus the competition
  [Chart: WIEN2k performance rating (jobs/day, higher is better) at 80, 160, and 320 cores, comparing Competition (InfiniBand), ConnectX-3 FDR, and Connect-IB FDR.]
• Connect-IB Delivers Highest Application Performance
   Weather Research and Forecasting (WRF) simulation software
   Connect-IB delivers 54% higher performance versus the competition
  [Chart: WRF conus12km performance (jobs/day, higher is better) at 320 and 640 cores, comparing Competition (InfiniBand), ConnectX-3 FDR, and Connect-IB FDR.]
• GPUDirect RDMA Technology. [Diagrams contrasting GPUDirect 1.0, where GPU transmit and receive paths to the InfiniBand adapter are staged through the CPU, chipset, and system memory, with GPUDirect RDMA, where the InfiniBand adapter reads and writes GPU memory directly.]
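
As context for the diagrams, here is a minimal sketch of how GPUDirect RDMA is typically exercised from user space: a CUDA device buffer is registered with the HCA through standard libibverbs, after which the adapter can DMA to and from GPU memory directly. This is not code from the deck; it assumes CUDA, libibverbs, and a peer-memory kernel module (e.g. nvidia-peermem) are installed, and error handling is trimmed for brevity.

```c
/* Sketch: register a CUDA device buffer with the InfiniBand HCA.
 * With a peer-memory module loaded, ibv_reg_mr() accepts the device
 * pointer and subsequent RDMA operations bypass host memory. */
#include <stdio.h>
#include <infiniband/verbs.h>
#include <cuda_runtime.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    struct ibv_context *ctx  = ibv_open_device(devs[0]);
    struct ibv_pd      *pd   = ibv_alloc_pd(ctx);

    /* Allocate the buffer in GPU memory instead of host memory. */
    void  *gpu_buf;
    size_t len = 1 << 20;                 /* 1 MiB */
    cudaMalloc(&gpu_buf, len);

    /* Register the GPU buffer exactly as if it were host memory. */
    struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    printf("GPU buffer registered, rkey=0x%x\n", mr ? mr->rkey : 0);

    if (mr) ibv_dereg_mr(mr);
    cudaFree(gpu_buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```

Without the peer-memory module, registering a device pointer fails, which is why the GPUDirect 1.0 path in the left diagram had to stage transfers through system memory.
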
• Performance of MVAPICH2 with GPUDirect RDMA. [Charts: GPU-GPU internode MPI small-message latency (lower is better) and bandwidth (higher is better) for MVAPICH2-1.9 versus MVAPICH2-1.9-GDR, message sizes 1 byte to 4KB. GPUDirect RDMA cuts latency 69% (19.78us to 6.12us) and delivers a 3X increase in throughput. Source: Prof. DK Panda.]
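
The MVAPICH2-GDR numbers above reflect the CUDA-aware MPI usage model: ranks pass GPU device pointers directly to MPI calls, and the library moves the data over GPUDirect RDMA when it can. A minimal hypothetical two-rank example follows; it is not taken from the benchmark itself and assumes a CUDA-aware MPI such as MVAPICH2-GDR.

```c
/* Sketch: CUDA-aware MPI lets MPI_Send/MPI_Recv take GPU device
 * pointers with no explicit host staging. Run with two ranks,
 * e.g. mpirun -np 2 ./gpu_pingpong. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    float *d_buf;                         /* device buffer */
    int count = 1024;
    cudaMalloc((void **)&d_buf, count * sizeof(float));

    if (rank == 0) {
        cudaMemset(d_buf, 0, count * sizeof(float));
        /* Device pointer passed straight to MPI. */
        MPI_Send(d_buf, count, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(d_buf, count, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```
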
• Performance of MVAPICH2 with GPUDirect RDMA. [Chart: execution time of the HSG (Heisenberg Spin Glass) application on 2 GPU nodes as a function of problem size. Source: Prof. DK Panda.]
• The Only Provider of End-to-End 40/56Gb/s Solutions (closing slide; repeats the portfolio overview above).
• Thank You