Continuing HPC Datacenter Evolution

In this presentation from Radio Free HPC, Fritz Ferstl from Univa leads a discussion on the continuing HPC datacenter evolution. Watch the video presentation: http://wp.me/p3RLHQ-b6U

How to Cope With All That Mess?
- Fritz Ferstl, CTO Univa Corporation

HPC Data Center Evolution

Fresh from SC13:
• CPUs
  o More cores, more options
• Memory
  o Architectures and hierarchies
• Energy
  o Power-down on demand and manage heat
• Accelerators
  o More widespread, more per server
  o Accelerator or pseudo server?
• Storage
  o Topologies and file systems
• Network
  o Topologies

CPU Trends

• Continued trajectory – more cores
  o Remember big-ass, enterprise SMP servers in the late 1990s/early 2000s?
  o Needed a lot of management for each …
• Core binding & memory binding ever more important (see the sketch below)
• Increased use of micro-virtualization
  o (cgroups, Linux Containers)
  o The death of heavy-weight virtualization for HPC?
• New player ARM?
  o Maybe not for high-end HPC, but for high-capacity computing
  o Or when paired with accelerators like GPUs
• Application-specific servers?
  o See HP’s Moonshot
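
A minimal sketch of what such per-job binding can look like on Linux, using the cgroup v1 cpuset controller that was current around SC13. The group name "hpcjob", the core/node strings, and the mount point are illustrative assumptions; this needs root and a kernel with cpusets enabled.

    import os

    CPUSET_ROOT = "/sys/fs/cgroup/cpuset"  # typical cgroup v1 mount point

    def bind_job(pid, cores="0-3", mem_nodes="0", group="hpcjob"):
        # Illustrative cpuset: confine a job to given cores and NUMA nodes.
        gdir = os.path.join(CPUSET_ROOT, group)
        os.makedirs(gdir, exist_ok=True)
        with open(os.path.join(gdir, "cpuset.cpus"), "w") as f:
            f.write(cores)        # cores the job may run on
        with open(os.path.join(gdir, "cpuset.mems"), "w") as f:
            f.write(mem_nodes)    # NUMA nodes it may allocate memory from
        with open(os.path.join(gdir, "tasks"), "w") as f:
            f.write(str(pid))     # move the process into the cpuset

    # Lighter-weight alternative without cgroups: CPU affinity only.
    os.sched_setaffinity(0, {0, 1, 2, 3})  # pin this process to cores 0-3
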

Memory

• Per-socket NUMA
• Server-wide ccNUMA
• Virtual shared memory?
• Distributed and Hadoop-like access?
• Need to manage more & different hierarchy levels? (see the sketch below)
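
Before a scheduler can manage those hierarchy levels it has to see them. A small sketch that reads the stock Linux sysfs NUMA topology; the paths are the standard kernel interface, the output format is just for illustration.

    import glob, os

    # Enumerate NUMA nodes and the CPUs and memory attached to each.
    for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
        with open(os.path.join(node, "cpulist")) as f:
            cpus = f.read().strip()          # e.g. "0-7,16-23"
        with open(os.path.join(node, "meminfo")) as f:
            memtotal = f.readline().strip()  # "Node N MemTotal: ... kB"
        print("%s: cpus %s | %s" % (os.path.basename(node), cpus, memtotal))
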

Energy

• Switch off cores or CPUs when feasible
• Move them into power-saving modes (see the sketch below)
  o Even between application stop/start cycles?
• Vacate racks to power them down during phases with lower demand
• Identify hot spots in a rack and avoid further heat increase by diverting workloads
  o Or even decrease heat by migrating workloads
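
Both per-core knobs from the first two bullets exist as plain sysfs files on Linux. A sketch, assuming root privileges (and noting that core 0 usually cannot be offlined):

    def set_core_online(core, online):
        # Hot-unplug (0) or re-plug (1) a core when the schedule allows it.
        with open("/sys/devices/system/cpu/cpu%d/online" % core, "w") as f:
            f.write("1" if online else "0")

    def set_governor(core, governor="powersave"):
        # Park a core in a power-saving cpufreq governor between jobs.
        path = "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_governor" % core
        with open(path, "w") as f:
            f.write(governor)
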

Accelerators

• More accelerators per server node
• Concurrent applications per accelerator card (see the sketch below)
• Accelerators as pseudo stand-alone servers?
• Application-specific accelerators?
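
For running concurrent applications against multiple cards, the standard CUDA environment variable CUDA_VISIBLE_DEVICES confines each process to its assigned device (sharing a single card among concurrent processes is what NVIDIA's Multi-Process Service addresses). A sketch; the solver binaries are placeholders:

    import os, subprocess

    def launch_on_gpu(cmd, gpu_id):
        # Each child process sees only the one device it was assigned.
        env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_id))
        return subprocess.Popen(cmd, env=env)

    jobs = [launch_on_gpu(["./solver_a"], 0),   # placeholder binaries
            launch_on_gpu(["./solver_b"], 1)]
    for j in jobs:
        j.wait()
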

Storage

• A few beefy filers or shared filesystems
• Is there a place for HDFS? (see the sketch below)
  o As Big Data evolves?
  o Even in HPC?
• Trends in distributed, high-performance file systems?
• What about NFS? pNFS?
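
Part of HDFS's appeal outside the Hadoop world is that it is reachable from ordinary compute nodes without any Java, via the WebHDFS REST API. A sketch that lists a directory; the host, the default namenode HTTP port 50070, and the path are assumptions for illustration:

    import json
    from urllib.request import urlopen

    url = ("http://namenode.example.com:50070"
           "/webhdfs/v1/user/fritz?op=LISTSTATUS")
    with urlopen(url) as resp:
        listing = json.load(resp)
    for entry in listing["FileStatuses"]["FileStatus"]:
        print(entry["pathSuffix"], entry["length"])
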

Networking

• Is it business as usual – just ever more bandwidth and ever shorter latency?
• Or are there topology trends:
  o Sparser interconnects requiring more topology/proximity awareness? (see the sketch below)
• Is there a disconnect between commercial requirements and research/government?
  o Will enterprises ever invest in ultra-large interconnects?
  o Do they have applications scaling to such levels?
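
Topology/proximity awareness in a scheduler can be as simple as preferring node sets that span as few leaf switches as possible. A toy sketch with a made-up switch map:

    from collections import defaultdict

    # Hypothetical map of which leaf switch each node hangs off.
    SWITCH_OF = {"n01": "sw1", "n02": "sw1", "n03": "sw1",
                 "n04": "sw2", "n05": "sw2", "n06": "sw3"}

    def pick_nodes(free_nodes, count):
        by_switch = defaultdict(list)
        for n in free_nodes:
            by_switch[SWITCH_OF[n]].append(n)
        # Fill from the fullest switches first so the job crosses the
        # smallest number of switch hops.
        chosen = []
        for _, nodes in sorted(by_switch.items(), key=lambda kv: -len(kv[1])):
            chosen += nodes[:count - len(chosen)]
            if len(chosen) == count:
                return chosen
        return None  # not enough free nodes

    print(pick_nodes(["n01", "n02", "n04", "n05", "n06"], 2))  # ['n01', 'n02']
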
Thank You!
fferstl@univa.com
http://www.univa.com/

Copyright © 2013 Univa Corporation, All Rights Reserved.