InfiniBand Strengthens Leadership as the Interconnect 
Of Choice By Providing Best Return on Investment 
TOP500 Supercomputers, Nov 2014
TOP500 Performance Trends 
39% CAGR 79% CAGR 
 Explosive high-performance computing market growth 
 Clusters continue to dominate with 86% of the TOP500 list 
Mellanox InfiniBand solutions provide the highest systems utilization in the TOP500 
for both high-performance computing and clouds 
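As an aside on how the CAGR figures in the chart above are derived, here is a minimal Python sketch of the compound-annual-growth-rate formula; the start and end values below are hypothetical placeholders, not numbers taken from the chart:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate: the constant yearly growth factor
    that carries start_value to end_value over the given number of years."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Hypothetical illustration only: aggregate performance growing from
# 1 Pflop/s to 250 Pflop/s over 10 years corresponds to a CAGR of ~74%.
print(f"CAGR: {cagr(1.0, 250.0, 10):.1%}")
```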
© 2014 Mellanox Technologies 2
TOP500 Major Highlights 
 InfiniBand is the de-facto Interconnect solution for High-Performance Computing 
• Positioned to continue and expand in Cloud and Web 2.0 
 InfiniBand is the most used interconnect on the TOP500 list, connecting 225 systems 
• Increasing 8.7% year-over-year, from 207 systems in Nov’13 to 225 in Nov’14 
 FDR InfiniBand is the most used technology on the TOP500 list, connecting 141 systems 
• Grew 1.8X year-over-year from 80 systems in Nov’13 to 141 in Nov’14 
 InfiniBand enables the most efficient system on the list with 99.8% efficiency – record! 
 InfiniBand enables 24 out of the 25 most efficient systems (and the top 17 most efficient systems) 
 InfiniBand is the most used interconnect for Petascale-performance systems with 24 systems 
© 2014 Mellanox Technologies 3
InfiniBand in the TOP500 
 InfiniBand is the most used interconnect technology for high-performance computing 
• InfiniBand accelerates 225 systems, 45% of the list 
 FDR InfiniBand connects the fastest InfiniBand systems 
• TACC (#7), Government (#10), NASA (#11), Eni (#12), LRZ (#14) 
 InfiniBand connects the most powerful clusters 
• 24 of the Petascale-performance systems 
 The most used interconnect solution in the TOP100, TOP200, TOP300, TOP400 
• Connects 48% (48 systems) of the TOP100, while Ethernet connects only 2% (2 systems) 
• Connects 51.5% (103 systems) of the TOP200, while Ethernet connects only 15.5% (31 systems) 
• Connects 50.7% (152 systems) of the TOP300, while Ethernet connects only 24.3% (73 systems) 
• Connects 49% (196 systems) of the TOP400, while Ethernet connects only 29.8% (119 systems) 
• Connects 45% (225 systems) of the TOP500, while Ethernet connects 37.4% (187 systems) 
 InfiniBand is the interconnect of choice for accelerator-based systems 
• 80% of the accelerator-based systems are connected with InfiniBand 
 Diverse set of applications 
• High-end HPC, commercial HPC, Cloud and enterprise data center 
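The market-share percentages above follow directly from the system counts; a small Python sketch that reproduces them from the numbers quoted on this slide:

```python
# System counts quoted on this slide (TOP500, Nov 2014).
infiniband = {100: 48, 200: 103, 300: 152, 400: 196, 500: 225}
ethernet   = {100: 2,  200: 31,  300: 73,  400: 119, 500: 187}

for tier in (100, 200, 300, 400, 500):
    ib_pct = infiniband[tier] * 100 / tier
    eth_pct = ethernet[tier] * 100 / tier
    print(f"TOP{tier}: InfiniBand {ib_pct:.1f}%  vs  Ethernet {eth_pct:.1f}%")
# -> 48.0/2.0, 51.5/15.5, 50.7/24.3, 49.0/29.8, 45.0/37.4 (matching the bullets above)
```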
© 2014 Mellanox Technologies 4
Mellanox in the TOP500 
 Mellanox FDR InfiniBand is the fastest interconnect solution on the TOP500 
• More than 12GB/s throughput, less than 0.7usec latency 
• Being used in 141 systems on the TOP500 list – 1.8X increase from the Nov’13 list 
• Connects the fastest InfiniBand-based supercomputers 
- TACC (#7), Government (#10), NASA (#11), Eni (#12), LRZ (#14) 
 Mellanox InfiniBand is the most efficient interconnect technology on the list 
• Enables the highest system utilization on the TOP500 – 99.8% system efficiency 
• Enables the top 17 highest-utilized systems, and 24 of the 25 highest-utilized systems, on the TOP500 list 
 Mellanox InfiniBand is the only Petascale-proven, standard interconnect solution 
• Connects 24 out of the 54 Petaflop-performance systems on the list 
• Connects 1.5X the number of Cray-based systems in the TOP100, 5X in the TOP500 
 Mellanox’s end-to-end scalable solutions accelerate GPU-based systems 
• GPUDirect RDMA technology enables faster communications and higher performance 
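For context on the “more than 12GB/s” figure, here is a back-of-the-envelope sketch of the FDR 4x link rate (assuming, as is common in such quotes, that the slide refers to bidirectional application throughput):

```python
# FDR 4x data rate: 4 lanes at 14.0625 Gb/s signaling, 64b/66b line encoding.
lanes = 4
signaling_gbps = 14.0625
encoding = 64 / 66

per_direction_gbps = lanes * signaling_gbps * encoding   # ~54.5 Gb/s
per_direction_gbs = per_direction_gbps / 8               # ~6.8 GB/s
print(f"~{per_direction_gbs:.1f} GB/s per direction, "
      f"~{2 * per_direction_gbs:.1f} GB/s bidirectional")  # ~13.6 GB/s
```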
© 2014 Mellanox Technologies 5
TOP500 Interconnect Trends 
 InfiniBand is the de-facto interconnect solution for performance demanding applications 
© 2014 Mellanox Technologies 6
TOP500 Petascale-Performance Systems 
 Mellanox InfiniBand is the interconnect of choice for Petascale computing 
• 24 systems overall, of which 19 use FDR InfiniBand 
© 2014 Mellanox Technologies 7
TOP100: Interconnect Trends 
 InfiniBand is the most used interconnect solution in the TOP100 
 The natural choice for world-leading supercomputers: performance, efficiency, scalability 
© 2014 Mellanox Technologies 8
TOP500 InfiniBand Accelerated Systems 
 Number of Mellanox FDR InfiniBand systems grew 1.8X from Nov’13 to Nov’14 
• Accelerates 63% of the InfiniBand-based systems (141 systems out of 225) 
© 2014 Mellanox Technologies 9
InfiniBand versus Ethernet – TOP100, 200, 300, 400, 500 
 InfiniBand is the most used interconnect of the TOP100, 200, 300, 400 supercomputers 
 Due to superior performance, scalability, efficiency and return-on-investment 
© 2014 Mellanox Technologies 10
TOP500 Interconnect Placement 
 InfiniBand is the high performance interconnect of choice 
• Connects the most powerful clusters, and provides the highest system utilization 
 InfiniBand is the best price/performance interconnect for HPC systems 
© 2014 Mellanox Technologies 11
InfiniBand’s Unsurpassed System Efficiency 
Average Efficiency 
• InfiniBand: 87% 
• Cray: 79% 
• 10GbE: 67% 
• GigE: 40% 
 TOP500 systems listed according to their efficiency 
 InfiniBand is the key element responsible for the highest system efficiency; on average 30% higher than 10GbE 
 Mellanox delivers efficiencies of up to 99.8% with InfiniBand 
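The efficiency metric used throughout this deck is simply achieved HPL performance over theoretical peak; a minimal sketch, with hypothetical numbers chosen only for illustration:

```python
def top500_efficiency(rmax, rpeak):
    """TOP500 efficiency: sustained HPL performance (Rmax) over theoretical peak (Rpeak)."""
    return rmax / rpeak

# Hypothetical example: a system with Rpeak = 1000 Tflop/s that sustains
# Rmax = 998 Tflop/s runs at 99.8% efficiency, the record figure quoted above.
print(f"{top500_efficiency(998.0, 1000.0):.1%}")
```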
© 2014 Mellanox Technologies 12
Enabling The Most Efficient Systems 
 Mellanox InfiniBand connects the most efficient systems on the list 
• 24 of the 25 most efficient systems 
 Enabling a record system efficiency of 99.8%, only 0.2% less than the theoretical limit! 
© 2014 Mellanox Technologies 13
TOP500 Interconnect Comparison 
 InfiniBand systems account for 2.6X the performance of Ethernet systems 
 The only scalable HPC interconnect solution 
© 2014 Mellanox Technologies 14
TOP500 Performance Trend 
 InfiniBand-connected systems’ performance demonstrates the highest growth rate 
 InfiniBand delivers 2.6X the performance of Ethernet systems on the TOP500 list 
© 2014 Mellanox Technologies 15
InfiniBand Performance Trends 
 InfiniBand-connected CPUs grew 28% from Nov‘13 to Nov‘14 
 InfiniBand-based system performance grew 44% from Nov‘13 to Nov‘14 
 Mellanox InfiniBand is the most efficient and scalable interconnect solution 
 Driving factors: performance, efficiency, scalability, ever-increasing core counts 
© 2014 Mellanox Technologies 16
Mellanox Accelerated World-Leading HPC Systems 
Connecting Half of the World’s Petascale Systems (examples) 
© 2014 Mellanox Technologies 17
Proud to Accelerate Future DOE Leadership Systems (“CORAL”) 
“Summit” System “Sierra” System 
5X – 10X Higher Application Performance versus Current Systems 
Mellanox EDR 100Gb/s InfiniBand, IBM POWER CPUs, NVIDIA Tesla GPUs 
Mellanox EDR 100G Solutions Selected by the DOE for 2017 Leadership Systems 
Deliver Superior Performance and Scalability over Current / Future Competition 
© 2014 Mellanox Technologies 18
Interconnect Solutions Leadership – Software and Hardware 
Comprehensive End-to-End Interconnect Software Products 
Comprehensive End-to-End InfiniBand and Ethernet Hardware Products 
ICs, Adapter Cards, Switches/Gateways, Metro/WAN, Cables/Modules 
© 2014 Mellanox Technologies 19
The Future is Here 
Enter the Era of 100Gb/s 
Copper (Passive, Active), Optical Cables (VCSEL), Silicon Photonics 
36 EDR (100Gb/s) Ports, <90ns Latency 
Throughput of 7.2Tb/s 
100Gb/s Adapter, 0.7us latency 
150 million messages per second 
(10 / 25 / 40 / 50 / 56 / 100Gb/s) 
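The 7.2 Tb/s switch figure above is the full-duplex aggregate of the 36 EDR ports; a quick arithmetic check in Python:

```python
ports = 36
port_rate_gbps = 100                     # EDR per-port data rate

one_way_tbps = ports * port_rate_gbps / 1000
aggregate_tbps = 2 * one_way_tbps        # full duplex: count both directions
print(f"{one_way_tbps:.1f} Tb/s per direction, {aggregate_tbps:.1f} Tb/s aggregate")
# -> 3.6 Tb/s per direction, 7.2 Tb/s aggregate
```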
© 2014 Mellanox Technologies 20
The Advantage of 100G Copper Cables (over Fiber) 
Lower Cost, Lower Power Consumption, Higher Reliability 
5,000-Node Cluster 
CAPEX Saving: $2.1M 
Power Saving: 12.5 kW 
1,000-Node Cluster 
CAPEX Saving: $420K 
Power Saving: 2.5 kW 
SC’14 Demonstration: 4m, 6m and 8m Copper Cables 
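The cluster-level savings above translate into simple per-node figures (a sketch assuming the savings scale linearly with node count; the per-node numbers are derived here, not taken from the slide):

```python
# nodes: (CAPEX saving in $, power saving in kW), as quoted on this slide
clusters = {5000: (2_100_000, 12.5), 1000: (420_000, 2.5)}

for nodes, (capex_usd, power_kw) in clusters.items():
    print(f"{nodes} nodes: ~${capex_usd / nodes:.0f} and "
          f"~{power_kw * 1000 / nodes:.1f} W saved per node")
# -> roughly $420 and 2.5 W per node in both cases
```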
© 2014 Mellanox Technologies 21
Enter the World of Scalable Performance – 100Gb/s Switch 
Boundless Performance 
7th Generation InfiniBand Switch 
36 EDR (100Gb/s) Ports, <90ns Latency 
Throughput of 7.2 Tb/s 
InfiniBand Router 
Adaptive Routing 
© 2014 Mellanox Technologies 22
Enter the World of Scalable Performance – 100Gb/s Adapter 
ConnectX-4: Highest Performance Adapter in the Market 
Connect. Accelerate. Outperform 
InfiniBand: SDR / DDR / QDR / FDR / EDR 
Ethernet: 10 / 25 / 40 / 50 / 56 / 100GbE 
100Gb/s, <0.7us latency 
150 million messages per second 
OpenPOWER CAPI technology 
CORE-Direct technology 
GPUDirect RDMA 
Dynamically Connected Transport (DCT) 
Ethernet offloads (HDS, RSS, TSS, LRO, LSOv2) 
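As a rough way to see how the latency and bandwidth figures above interact, here is a first-order transfer-time model (a sketch only; it says nothing about the adapter’s internal implementation):

```python
def transfer_time_us(message_bytes, latency_us=0.7, bandwidth_gbps=100):
    """First-order model: fixed latency plus serialization time on a 100 Gb/s link."""
    serialization_us = message_bytes * 8 / (bandwidth_gbps * 1000)  # Gb/s -> bits per microsecond
    return latency_us + serialization_us

for size in (8, 4096, 1 << 20):          # 8 B, 4 KiB, 1 MiB
    print(f"{size:>8} B: ~{transfer_time_us(size):.2f} us")
# Small messages are latency-bound; large ones are serialization-bound.
```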
© 2014 Mellanox Technologies 23
End-to-End Interconnect Solutions for All Platforms 
Highest Performance and Scalability for 
X86, Power, GPU, ARM and FPGA-based Compute and Storage Platforms 
Smart Interconnect to Unleash The Power of All Compute Architectures 
© 2014 Mellanox Technologies 24
Mellanox Delivers Highest Application Performance 
World Record Performance with Mellanox Connect-IB and HPC-X 
Achieve Higher Performance with 67% less compute infrastructure versus Cray 
© 2014 Mellanox Technologies 25
HPC Clouds – Performance Demands Mellanox Solutions 
San Diego Supercomputer Center “Comet” System (2015) to 
Leverage Mellanox Solutions and Technology to Build HPC Cloud 
© 2014 Mellanox Technologies 26
Technology Roadmap – One-Generation Lead over the Competition 
Mellanox interconnect generations: 20Gb/s → 40Gb/s → 56Gb/s → 100Gb/s → 200Gb/s, spanning 2000–2020 and the Terascale, Petascale and Exascale eras 
Milestones: Virginia Tech (Apple) cluster – #3 on the TOP500 in 2003; “Roadrunner” – #1; both Mellanox Connected 
© 2014 Mellanox Technologies 27
Mellanox Interconnect Advantages 
 Mellanox solutions provide proven, scalable, high-performance end-to-end connectivity 
 Standards-based (InfiniBand, Ethernet), supported by a large eco-system 
 Flexible, supporting all compute architectures: x86, Power, ARM, GPU, FPGA, etc. 
 Backward and future compatible 
 Proven, most used solution for Petascale systems and overall TOP500 
Paving The Road to Exascale Computing 
© 2014 Mellanox Technologies 28
Texas Advanced Computing Center/Univ. of Texas - #7 
 “Stampede” system 
 6,000+ nodes (Dell), 462,462 cores, Intel Xeon Phi co-processors 
 5.2 Petaflops 
 Mellanox end-to-end FDR InfiniBand 
Petaflop 
Mellanox Connected 
© 2014 Mellanox Technologies 29
NASA Ames Research Center - #11 
 Pleiades system 
 SGI Altix ICE 
 20K InfiniBand nodes 
 3.4 sustained Petaflop performance 
 Mellanox end-to-end FDR and QDR InfiniBand 
 Supports variety of scientific and engineering projects 
• Coupled atmosphere-ocean models 
• Future space vehicle design 
• Large-scale dark matter halos and galaxy evolution 
Petaflop 
Mellanox Connected 
© 2014 Mellanox Technologies 30
Exploration & Production ENI S.p.A. - #12 
 “HPC2” system 
 IBM iDataPlex DX360M4 
 NVIDIA K20x GPUs 
 3.2 sustained Petaflop performance 
 Mellanox end-to-end FDR InfiniBand 
Petaflop 
Mellanox Connected 
© 2014 Mellanox Technologies 31
Leibniz Rechenzentrum SuperMUC - #14 
 IBM iDataPlex and Intel Sandy Bridge 
 147,456 cores 
 Mellanox end-to-end FDR InfiniBand solutions 
 2.9 sustained Petaflop performance 
 One of the fastest supercomputers in Europe 
 91% efficiency 
Petaflop 
Mellanox Connected 
© 2014 Mellanox Technologies 32
Tokyo Institute of Technology - #15 
 TSUBAME 2.0, first Petaflop system in Japan 
 2.8 PF performance 
 HP ProLiant SL390s G7 1400 servers 
 Mellanox 40Gb/s InfiniBand 
Petaflop 
Mellanox Connected 
© 2014 Mellanox Technologies 33
DOE/SC/Pacific Northwest National Laboratory - #18 
 “Cascade” system 
 Atipa Visione IF442 Blade Server 
 2.5 sustained Petaflop performance 
 Mellanox end-to-end InfiniBand FDR 
 Intel Xeon Phi 5110P accelerator 
Petaflop 
Mellanox Connected 
© 2014 Mellanox Technologies 34
Total Exploration Production - #20 
 “Pangea” system 
 SGI Altix X system, 110,400 cores 
 Mellanox FDR InfiniBand 
 2 sustained Petaflop performance 
 91% efficiency 
Petaflop 
Mellanox Connected 
© 2014 Mellanox Technologies 35
Grand Equipement National de Calcul Intensif - Centre Informatique 
National de l'Enseignement Supérieur (GENCI-CINES) - #26 
 Occigen system 
 1.6 sustained Petaflop performance 
 Bull bullx DLC 
 Mellanox end-to-end FDR InfiniBand 
Petaflop 
Mellanox Connected 
© 2014 Mellanox Technologies 36
Air Force Research Laboratory - #31 
 “Spirit” system 
 SGI Altix X system, 74,584 cores 
 Mellanox FDR InfiniBand 
 1.4 sustained Petaflop performance 
 92.5% efficiency 
Petaflop 
Mellanox Connected 
© 2014 Mellanox Technologies 37
CEA/TGCC-GENCI - #33 
 Bull Bullx B510, Intel Sandy Bridge 
 77,184 cores 
 Mellanox end-to-end FDR InfiniBand solutions 
 1.36 sustained Petaflop performance 
Petaflop 
Mellanox Connected 
© 2014 Mellanox Technologies 38
Max-Planck-Gesellschaft MPI/IPP - #34 
 IBM iDataPlex DX360M4 
 Mellanox end-to-end FDR InfiniBand solutions 
 1.3 sustained Petaflop performance 
Petaflop 
Mellanox Connected 
© 2014 Mellanox Technologies 39
National Supercomputing Centre in Shenzhen (NSCS) - #35 
 Dawning TC3600 Blade Supercomputer 
 5,200 nodes, 120,640 cores, NVIDIA GPUs 
 Mellanox end-to-end 40Gb/s InfiniBand solutions 
• ConnectX-2 and IS5000 switches 
 1.27 sustained Petaflop performance 
 The first Petaflop system in China 
Petaflop 
Mellanox Connected 
© 2014 Mellanox Technologies 40
NCAR (National Center for Atmospheric Research) - #36 
 “Yellowstone” system 
 72,288 processor cores, 4,518 nodes (IBM) 
 Mellanox end-to-end FDR InfiniBand, full fat tree, single plane 
Petaflop 
Mellanox Connected 
© 2014 Mellanox Technologies 41
International Fusion Energy Research Centre (IFERC), EU(F4E) - 
Japan Broader Approach collaboration - #38 
 Bull Bullx B510, Intel Sandy Bridge 
 70,560 cores 
 1.24 sustained Petaflop performance 
 Mellanox end-to-end InfiniBand solutions 
Petaflop 
Mellanox Connected 
© 2014 Mellanox Technologies 42
SURFsara - #45 
 The “Cartesius” system - the Dutch supercomputer 
 Bull Bullx DLC B710/B720 system 
 Mellanox end-to-end InfiniBand solutions 
 1.05 sustained Petaflop performance 
Petaflop 
Mellanox Connected 
© 2014 Mellanox Technologies 43
Commissariat a l'Energie Atomique (CEA) - #47 
 Tera 100, first Petaflop system in Europe - 1.05 PF performance 
 4,300 Bull S Series servers 
 140,000 Intel® Xeon® 7500 processing cores 
 300TB of central memory, 20PB of storage 
 Mellanox 40Gb/s InfiniBand 
Petaflop 
Mellanox Connected 
© 2014 Mellanox Technologies 44
Australian National University - #52 
 Fujitsu PRIMERGY CX250 S1 
 Mellanox FDR 56Gb/s InfiniBand end-to-end 
 980 Tflops performance 
© 2014 Mellanox Technologies 45
Purdue University - #53 
 “Conte” system 
 HP Cluster Platform SL250s Gen8 
 Intel Xeon E5-2670 8C 2.6GHz 
 Intel Xeon Phi 5110P accelerator 
 Total of 77,520 cores 
 Mellanox FDR 56Gb/s InfiniBand end-to-end 
 Mellanox Connect-IB InfiniBand adapters 
 Mellanox MetroX long-haul InfiniBand solution 
 980 Tflops performance 
© 2014 Mellanox Technologies 46
Barcelona Supercomputing Center - #57 
 “MareNostrum 3” system 
 1.1 Petaflops peak performance 
 ~50K cores, 91% efficiency 
 Mellanox FDR InfiniBand 
© 2014 Mellanox Technologies 47
Thank You
