10G Ethernet Outlook for HPC

Many analysts predict that 10 Gigabit Ethernet (10GbE) is ready to take off in HPC environments. 10GbE can meet low-latency, high-bandwidth I/O requirements over familiar Ethernet networking, leveraging the Ethernet expertise, management tools, and debugging tools that are ubiquitous in networking. By using efficient, cost-effective 10GbE to interconnect blade servers, HPC clusters can take advantage of dense blade-server compute nodes to lower power consumption and reduce floor space.

Published in: Technology, Business
Slide notes

  • May 2, 2008 — BLADE Network Technologies Confidential
  • Trend confirmed: Ethernet is still the only standard; InfiniBand is a proprietary interconnect. Ethernet is now at 57% of the Top500, and the first 10G clusters have been identified.
  • Aug 28, 2008 — VASP does not seem to scale well beyond 64 cores; VASP parallelizes serial jobs (quantum chemistry). Say: 10G with iWARP enabled.
  • Continuity/consistency of traffic vs. latency

Transcript

  • 1. Is 10 Gigabit Ethernet Ready for HPC?
  • 2. HPC Interconnect Landscape April 30, 2011
  • 3. The bigger picture for HPC — Top500 vs. mass market (GigE, 10Gig, IB) [chart]
  • 4. HPC Forecast: Strong Growth Over Next Five Years ($ Millions) — Source: IDC, 2008 [chart]
  • 5. 10 Gig Ethernet Issues
    • What’s holding up adoption?
    • 10 Gig NICs
    • Price of Switches
    • Switch Scaling
    • PHY Confusion
    • Proof of Performance
  • 6. 10 Gig NICs
    • Prices are dropping fast
    • Major server vendors are including 10 Gig Ethernet as standard server feature (LOM)
    • Several NIC vendors proving mature and stable for HPC
  • 7. Price of 10 Gig Ethernet Switches
    • Switch ports used to cost more than servers!
    • 10 Gig E switches now list for <$500 / port
    Top-of-rack 10Gb switch; IBM 10Gb blade switch [photos]
  • 8. Switch Scaling — 12 leaf switches + 6 spine switches, 144 non-blocking ports [diagram]
    • Typical CLOS Topology - 144 10GbE Ports
    • 2-tier design scales to 288 ports
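The non-blocking arithmetic behind a two-tier leaf/spine (folded-Clos) fabric like the one above can be sketched in a few lines. The 24-ports-per-leaf figure is an assumption chosen to match the slide's 144-port example; it is not stated in the deck.

```python
# Non-blocking two-tier Clos: each leaf splits its ports evenly
# between server-facing links (down) and spine uplinks (up).

def clos_capacity(num_leaves: int, ports_per_leaf: int) -> int:
    """Server-facing ports in a non-blocking leaf/spine fabric.

    Non-blocking requires uplink bandwidth == downlink bandwidth,
    so half of each leaf's ports face servers.
    """
    down_per_leaf = ports_per_leaf // 2
    return num_leaves * down_per_leaf

# The slide's 144-port example: 12 leaves of (assumed) 24 ports each.
assert clos_capacity(12, 24) == 144
# Doubling the leaf count scales the same 2-tier design to 288 ports.
assert clos_capacity(24, 24) == 288
```

With 24 leaves, each leaf's 12 uplinks spread across the 6 spines, so spine port count grows but the topology stays two tiers, which is why the slide's scaling limit is 288 ports rather than unbounded.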
  • 9. HPC Topology – up to 208 10GbE Servers. IBM BladeCenter design: up to 210 servers in 15 enclosures, 1 10GbE switch and 6 10GbE uplinks per enclosure [diagram: enclosure subnets 10.1.1.0/VLAN 1 through 10.1.4.0/VLAN 4; core subnets 10.2.1.0–10.2.15.0]
    • Load distribution across the core using OSPF ECMP:
    • Separate IP subnet for each enclosure
    • Separate VLAN and IP subnet for each RackSwitch
    • Full network path redundancy – not just link level
    • No Spanning Tree
    • 2.3 to 1 oversubscription
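The 2.3-to-1 figure follows from the per-enclosure link counts. A minimal sketch, assuming 14 blade servers per BladeCenter enclosure (consistent with "up to 210 servers : 15 enclosures," but not stated explicitly in the deck):

```python
# Oversubscription at one enclosure switch: aggregate server-facing
# bandwidth divided by aggregate uplink bandwidth.

def oversubscription(down_links: int, up_links: int, link_gbps: float = 10.0) -> float:
    """Ratio of downlink to uplink bandwidth; 1.0 means non-blocking."""
    return (down_links * link_gbps) / (up_links * link_gbps)

# 14 blade servers (assumed) on the enclosure switch, 6 x 10GbE uplinks:
ratio = oversubscription(14, 6)
print(f"{ratio:.1f} to 1 oversubscription")  # 2.3 to 1 oversubscription
```

Because all links here run at the same speed, the ratio reduces to 14/6 ≈ 2.33, which the slide rounds to "2.3 to 1."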
  • 10. PHY Confusion
    • Optical standard interfaces for 10 Gig E:
      • Fixed optics
      • XENPAK
      • X2
      • XFP
      • SFP+
    • 10GBase-T (twisted-pair copper, RJ45)
    • Users have been unwilling to bet on a survivor!
  • 11. And the winner is: SFP+
    • SFP+ Direct Attach Cables
      • Passive cables with SFP+ ends
      • Low cost - $40 – $50
      • High density – same as RJ45
  • 12. 10GBase-T
    • The problem was harder than expected
    • Expensive, power-hungry, and ~2.6 µs latency
    • But – 10GBase-T will eventually become widespread
  • 13. Performance
    • 10 Gig Ethernet offers:
      • Same familiar operating environment
      • Ease of use, debug, and management
      • Path to 40 and 100 Gig Ethernet
      • 10x bandwidth and 8x better latency vs. Gig Ethernet
      • But – do applications actually run faster?
        • Vendors talk about micro-benchmarks
        • Most users care about execution time
  • 14. PAM CRASH (Version 2008.0): Elapsed Time (sec) — 10GE 32% faster than 1G, equal to IB DDR, for 32 cores [chart]
  • 15. PAM CRASH (Version 2008.0): Speed Up — 10GE 70% faster than 1G, equal to IB DDR, for 64 cores [chart]
  • 16. VASP 4.6.28 (Vienna Ab-initio Simulation Package, molecular dynamics): Elapsed Time (sec) — 10GE 4.25x faster than 1G, equal to IB DDR, for 32 cores [chart]
  • 17. VASP 4.6.28: Speed Up — 10GE 6.3x faster than 1G, almost equal to IB DDR, for 64 cores [chart]
  • 18. RADIOSS 9.0 (finite element solver): Speed Up — 10GE 30% faster than 1G, equal to IB DDR, for 64 cores [chart]
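The "Nx faster" claims on slides 14–18 are elapsed-time ratios. A small sketch with hypothetical timings (the 850 s / 200 s pair is illustrative only, not measured data from the deck):

```python
def speedup(t_baseline: float, t_candidate: float) -> float:
    """Elapsed-time ratio; values > 1 mean the candidate interconnect is faster."""
    return t_baseline / t_candidate

# Hypothetical elapsed times shaped like the "10GE 4.25x faster than 1G"
# 32-core VASP comparison -- illustrative numbers, not the deck's data.
t_1gige, t_10gbe = 850.0, 200.0
assert speedup(t_1gige, t_10gbe) == 4.25

# "32% faster" on slide 14 is the same ratio expressed as a percentage:
pct_faster = (speedup(t_1gige, t_10gbe) - 1.0) * 100
```

This is why the deck emphasizes application execution time over micro-benchmarks: the ratio is computed on end-to-end elapsed time, not on isolated latency or bandwidth numbers.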
  • 19. RMDS Performance: IBM BNT’s 10GbE vs. InfiniBand
    • IBM BNT’s 10GbE outperformed InfiniBand
      • Significantly higher updates per second
      • 31% lower latency than InfiniBand
        • *Voltaire and BLADE tests used similar 3 GHz Xeon 5160 based servers with 4MB L2 cache
  • 20. RMDS Performance: BLADE's 10GbE vs. InfiniBand — BLADE's 10GbE outperformed InfiniBand [chart]
        • *Voltaire and BLADE tests used similar 3 GHz Xeon 5160 based servers with 4MB L2 cache
  • 21. 10 Gig Ethernet Issues
    • What’s holding up adoption?
    • 10 Gig NICs
    • Price of Switches
    • Switch Scaling
    • PHY Confusion
    • Proof of Performance
    • Prices are dropping
    • Under $500 / port
    • Proof points emerging
    • Benchmarks emerging
  • 22. Trademarks and disclaimers
    • Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. IT Infrastructure Library is a registered trademark of the Central Computer and Telecommunications Agency which is now part of the Office of Government Commerce. ITIL is a registered trademark, and a registered community trademark of the Office of Government Commerce, and is registered in the U.S. Patent and Trademark Office. UNIX is a registered trademark of The Open Group in the United States and other countries. Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others. Information is provided "AS IS" without warranty of any kind.
    • The customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer.
    • Information concerning non-IBM products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does not constitute an endorsement of such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly available information, including vendor announcements and vendor worldwide homepages. IBM has not tested these products and cannot confirm the accuracy of performance, capability, or any other claims related to non-IBM products. Questions on the capability of non-IBM products should be addressed to the supplier of those products.
    • All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.
    • Some information addresses anticipated future capabilities. Such information is not intended as a definitive statement of a commitment to specific levels of performance, function or delivery schedules with respect to any future products. Such commitments are only made in IBM product announcements. The information is presented here to communicate IBM's current investment and development activities as a good faith effort to help with our customers' future planning.
    • Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput or performance improvements equivalent to the ratios stated here.
    • Prices are suggested U.S. list prices and are subject to change without notice. Starting price may not include a hard drive, operating system or other features. Contact your IBM representative or Business Partner for the most current pricing in your geography.
    • Photographs shown may be engineering prototypes. Changes may be incorporated in production models.
    • © IBM Corporation 1994-2010. All rights reserved.
    • References in this document to IBM products or services do not imply that IBM intends to make them available in every country.
    • Trademarks of International Business Machines Corporation in the United States, other countries, or both can be found on the World Wide Web at http://www.ibm.com/legal/copytrade.shtml .