Demand for High Performance Computing (HPC) in Climate Research

By Srivatsan V Raghavan, Tropical Marine Science Institute, NUS.

http://www.youtube.com/watch?v=GIzxrzkD1T8&p=83FA1CD871F4A4E5

With climate change an increasingly prominent topic in everyday life, the use of climate models for simulation has become common. These models, however, are tools that demand heavy HPC resources. The advent of new HPC technologies has changed the way climate research has developed, and there is a never-ending need for sound infrastructure to meet the herculean tasks of modelling within project deadlines. This talk highlights issues in the use of such HPC systems, examines their merits and limitations, and cites case studies of benchmarking exercises on different HPC platforms.


  1. Demand for High Performance Computing in Climate Research
  2. Background: climate research involves long years of climate modelling, continuous simulations, and high-resolution computations requiring parallel computing.
  3. Situation: high-resolution climate modelling. Experiment: about 150 years [1961-2010; 2010-2100] of climate simulations. In what time span can we get it done? (A back-of-the-envelope estimate is sketched after this list.) Continuous running and simultaneous post-processing of large data call for a dedicated computing cluster with high-speed performance and large disk space.
  4. Sample output from the model: temperature; monsoon winds.
  5. Servers tested: TMSI, Alatum, Civil Engg, Amazon, SVU. [Bar chart: run time in days for 1 calendar year of simulation, compared across the five systems.]
  6. Sources of bottlenecks: storage mount; network cabling (Ethernet vs. high-speed interconnects, e.g. Myrinet or InfiniBand (IB)); memory; core type (virtual/physical).
  7. Network cables: TMSI (standard GE); Alatum (standard GE); Civil Engg (Myrinet); Amazon (standard 10GE); SVU (IB).
  8. Performance on SVU: 16-node cluster, IB-cabled, 48 GB memory. [Chart: run time in minutes versus number of processors (1 to 180).]
  9. Results: 1 calendar year simulated per wall-clock day (2 or 3 calendar years per day desired); IB cabling certainly increases performance; more processors do not necessarily increase speed (see the Amdahl's-law sketch after this list); the SVU setup is the best system tested so far.
  10. Acknowledgements: Mr. Tan Chee Chiang, Ms. Grace Foo, Mr. Wang Junhong.
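
On slide 3's question of how long roughly 150 simulated years would take: a minimal back-of-the-envelope sketch in Python. The slow-system figure of 12 days per simulated year is an illustrative assumption; the other two rates are the achieved and desired throughputs stated on slide 9.

```python
# Wall-clock estimate for the ~150-year experiment (1961-2010 plus 2010-2100).
# The "slow system" throughput is an assumed illustrative value; the other
# two figures are the achieved and desired rates from slide 9.

SIM_YEARS = 150

throughputs_days_per_sim_year = {
    "slow system (assumed 12 days per simulated year)": 12.0,
    "best system tested (1 day per simulated year)": 1.0,
    "desired rate (2.5 simulated years per day)": 1.0 / 2.5,
}

for label, days_per_year in throughputs_days_per_sim_year.items():
    total_days = SIM_YEARS * days_per_year
    print(f"{label}: {total_days:,.0f} wall-clock days "
          f"(~{total_days / 365.25:.1f} years)")
```

At 12 days per simulated year the experiment would occupy roughly five years of wall-clock time, which is why the deck argues for a dedicated high-performance cluster.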
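
The observation on slide 9 that more processors do not necessarily increase speed is the behaviour Amdahl's law predicts, and is consistent with the SVU run-time curve on slide 8. A minimal sketch, assuming a hypothetical 5% serial fraction (the model's actual serial fraction is not reported in the deck):

```python
# Amdahl's law: ideal speedup on n processors when a fraction s of the
# work is inherently serial. s = 0.05 is a hypothetical value chosen
# for illustration, not a measurement from the SVU benchmark.

def amdahl_speedup(n_procs: int, serial_fraction: float) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

SERIAL_FRACTION = 0.05

# Processor counts from the SVU scaling test on slide 8.
for n in (1, 6, 12, 36, 72, 84, 96, 144, 168, 180):
    print(f"{n:4d} processors: {amdahl_speedup(n, SERIAL_FRACTION):5.1f}x speedup")
```

Even a 5% serial fraction caps the achievable speedup at 20x, so going from 96 to 180 processors gains little; this is one plausible explanation for the plateau the benchmark observed.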
