Accelerating Molecular Dynamics: Simple Tweaks to Instant Clusters

Talk presented at IPharm's CADD 2010.


1. Accelerating MD: Simple Tweaks and Instant Clusters
   - Mohd Shahir Shamsir
   - Bioinformatics Research Group (BIRG)
   - Faculty of Biosciences & Bioengineering
   - Universiti Teknologi Malaysia
   - INSPIRING CREATIVE AND INNOVATIVE MINDS

2. Summary
   - Introduction to BIRG
   - MD: What, Why and How
   - Improving performance
   - Simple Tweaks…
   - Instant MD cluster – birgHPC
   - Short video demo

3. Bioinformatics Research Group (BIRG), Faculty of Biosciences & Bioengineering
   - Just Google Us…

4. MD…
   - What?
   - Why?
   - How?
   - COVERED!!
5. Performance of MD?
   - Speed, speed, speed…
   - Supercomputers
     - IBM Roadrunner ~368 Flops
     - Nankai Star: 3.7 ns/day on 32 nodes (DPPC)
     - HPCx: 5.2 ns/day on 64 nodes (DPPC)
   - New platforms
     - Cell-BE: GROMACS, 15x speed-up vs a 3.0 GHz Pentium
     - GPU: NAMD, 4 GPUs ≈ 16 CPUs
6. (image slide, no text)

7. (image slide, no text)
8. Microwulf cluster
   - 26 Gflops, $2500, 11" x 12" x 17"
   - airline overhead baggage compliant

9. Simple Tweaks
   - INSPIRING CREATIVE AND INNOVATIVE MINDS
10. Tweaks for MD?
   - Hardware ↑ = performance ↑ = $$$ ↑
   - OR
   - Tweak Beowulf = performance ↑ = $$$
   - Pre-compiled vs self-compiled
   - MPI libraries
   - Test beds: 3-node GridMACS, 7-node Beowulf, 1 reference machine
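Note: as a rough sketch of what "self-compiled" means in practice, the commands below build OpenMPI from source and then build GROMACS against it. The versions, install prefixes and core counts are illustrative assumptions, and a GROMACS 4.x-era autotools build is assumed; none of this is taken from the slides.

    # Assumed example: self-compiling the MPI library and GROMACS
    # rather than using the distribution's pre-compiled packages.
    tar xf openmpi-1.4.tar.gz && cd openmpi-1.4
    ./configure --prefix=$HOME/sw/openmpi        # install into a private prefix
    make -j4 && make install
    export PATH=$HOME/sw/openmpi/bin:$PATH       # pick up the freshly built mpicc/mpirun

    cd ../gromacs-4.0.7                          # assumed 4.x-era version with autotools build
    ./configure --prefix=$HOME/sw/gromacs --enable-mpi --program-suffix=_mpi
    make mdrun -j4 && make install-mdrun         # builds and installs the MPI-enabled mdrun_mpi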
11. Compilation
   - Winner: self-compiled

12. Beowulf: OpenMPI vs MPICH2 (pre-compiled)
   - Winner: MPICH2

13. Pre- and Self-Compiled MPI
   - Self-compiled OpenMPI ≈ MPICH2

14. What we found
   - Single machine: 66% improvement
   - Parallel environment: 64% improvement
   - Compilation and choice of software affect performance
15. Instant MD, anyone?
   - INSPIRING CREATIVE AND INNOVATIVE MINDS

16. Instant MD cluster
   - Lots of under-utilised computers in labs
   - Sitting idle after office hours, during holidays, etc.

17. Instant MD cluster
   - MD, parallel computing = high computing resource requirements
   - Solution?
     - Supercomputers
     - Dedicated computing cluster
   - Problems?
     - $$$
     - ??? (I don't know this, I don't know that…)
18. A + B = C
   - What is A?
     - Existing computers
     - LAN-connected, PXE-boot capable, CD-ROM/USB
   - What is B?
     - Linux Live CD
     - Auto configuration
   - What is C?
     - Instant, out-of-the-box computing cluster!
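Note: birgHPC automates the "B" step, but for readers curious what the auto-configuration hides, PXE netboot is essentially DHCP plus TFTP on the head node. The snippet below writes a generic, hypothetical dnsmasq configuration as an illustration; the interface name and address range are made up, and this is not the actual birgHPC/PelicanHPC setup.

    # Hypothetical: minimal dnsmasq config for PXE-booting compute nodes
    cat > /etc/dnsmasq.d/pxe-cluster.conf <<'EOF'
    # serve DHCP + TFTP on the NIC facing the compute nodes (assumed name)
    interface=eth1
    # lease range handed to the nodes (made-up subnet)
    dhcp-range=10.11.12.100,10.11.12.250,12h
    # boot loader handed to PXE clients, served from tftp-root
    dhcp-boot=pxelinux.0
    enable-tftp
    tftp-root=/srv/tftp
    EOF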
19. birgHPC
   - Free, open-source Linux distribution
   - Based on PelicanHPC & Debian Live
   - GROMACS, NAMD, mpiBLAST, ClustalW-MPI, PyMOL, VMD
   - Automatic cluster configuration
   - MPICH2 & OpenMPI
   - Automatic slot detection
   - Ganglia monitoring
   - Simple interface for job submission
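Note: to give a feel for what the "simple interface for job submission" wraps, a manual MPI launch of a GROMACS run across the cluster could look like the lines below. The hostfile name, core count and file names are assumptions for illustration, not birgHPC's actual conventions.

    # Assumed manual job launch; birgHPC's submission interface hides steps like these.
    # ~/machines is a hypothetical MPI hostfile listing the detected compute nodes.
    grompp -f md.mdp -c conf.gro -p topol.top -o run.tpr      # prepare the run input (GROMACS 4.x)
    mpirun -np 8 -hostfile ~/machines mdrun_mpi -deffnm run   # run mdrun in parallel over the nodes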
20.–25. Some Screenshots (screenshot slides, images only)
26. Available at http://birg1.fbb.utm.my/birghpc or just Google "birghpc"

27. Conclusion
   - birgHPC: instant cluster conversion
   - Bioinformatics tools
   - Auto configuration
   - http://birg1.fbb.utm.my/birghpc
     - ISOs
     - user guide

28. Acknowledgements
   - Chew Teong Han – Alchemist
   - Farizuawana – Graphics
   - Joyce Tan – Testing
   - Funding from you, via LHDN, via MOSTI
   - Michael Creel for PelicanHPC
29. FAQs
   - Boot sequence?
     - Head node -> run birgHPC_setup -> follow instructions -> boot compute nodes -> the script on the head node shows the number of nodes detected -> confirm -> done
   - Headless compute nodes (no monitor)?
     - Attach a monitor -> set the boot sequence to netboot -> done
30. FAQs
   - How do I know the compute nodes are up?
     - Follow the birgHPC boot sequence -> the birgHPC_setup script shows the number of nodes detected
   - Cannot netboot?
     - Try http://etherboot.org/wiki/start
   - Heterogeneous PCs OK?
     - OK (thanks, Michael Creel)
     - If mixing 32-bit and 64-bit machines, use a 32-bit PC as the head node
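Note: besides the node count printed by birgHPC_setup, a quick sanity check is to ask MPI itself which hosts it can start processes on; the hostfile path below is an assumption, so substitute whatever hostfile the setup script actually generates.

    # Hypothetical check: each reachable node should print its hostname.
    mpirun -np 4 -hostfile ~/machines hostname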
31. FAQs
   - Status monitoring?
     - Yes -> web browser -> localhost -> Ganglia Monitoring
   - What is displayed on the compute nodes?
     - Just a simple login terminal with a warning not to use the nodes, etc.
   - Limitations?
     - RAM, RAM, RAM…
     - Everything is loaded into RAM, so the usable "disk" space equals the RAM size

32. FAQs
   - Head node criteria?
     - Preferably lots of RAM because of the shared folder (/home)
     - 2 Ethernet ports if you want an internet connection (it still works with only one)
   - Guide?
     - http://birg1.fbb.utm.my/birghpc
   - Forum?
     - No, but you can always refer to the PelicanHPC forums

33. FAQs
   - Multiple users?
     - No, designed for a single user
     - Future releases may add SGE or PBS
   - Install on a hard disk permanently?
     - Not tested; technically possible, Google it
   - Performance?
     - On par with a hard-disk-installed cluster (tested up to 6 nodes)
34. FAQs
   - I cannot boot from CD
     - Refer to the user guide -> convert the CD ISO to a USB drive image -> boot from USB
   - Can I use birgHPC alongside an existing DHCP server?
     - Preferably not: both the existing DHCP server and the birgHPC head node will hand out IP addresses, causing address conflicts
     - Alternatives: boot the DHCP server itself as the head node, OR unplug the DHCP server and use another PC as the head node
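Note: one common way to get the CD image onto a USB stick is a raw copy with dd, assuming the ISO is hybrid-bootable; the ISO file name and the device name /dev/sdX below are placeholders, and the conversion method in the user guide takes precedence if it differs.

    # Caution: this overwrites the whole USB stick; double-check the device name first.
    sudo dd if=birghpc.iso of=/dev/sdX bs=4M && sync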
35. FAQs
   - birgHPC criteria (PCs = compute nodes, server = head node)
