Accelerating Molecular Dynamics: Simple Tweaks to Instant Clusters
Talk presented at IPharm's CADD 2010.

Presentation Transcript

  • Accelerating MD… Simple Tweaks and Instant Clusters
    Mohd Shahir Shamsir
    Bioinformatics Research Group (BIRG)
    Faculty of Biosciences & Bioengineering
    Universiti Teknologi Malaysia
    INSPIRING CREATIVE AND INNOVATIVE MINDS
  • Summary
    Introduction to BIRG
    MD: What, Why and How
    Improving performance
    Simple Tweaks…
    Instant MD cluster – birgHPC
    Short video demo
  • Bioinformatics Research Group (BIRG), Faculty of Biosciences & Bioengineering
    Just Google Us…
  • MD…
    What?
    Why?
    How?
    COVERED!!
  • Performance of MD?
    Speed, speed, speed…
    Supercomputer
    IBM Roadrunner ~1 PFLOPS
    Nankai Star 3.7 ns/day on 32 nodes (DPPC)
    HPCx 5.2 ns/day on 64 nodes (DPPC)
    New platforms
    Cell-BE: GROMACS, ~15× speed-up vs. a 3.0 GHz Pentium
    GPU: NAMD, 4 GPUs ≈ 16 CPUs
  • Microwulf cluster
    26 GFLOPS, $2,500, 11" x 12" x 17",
    airline overhead-baggage compliant
  • Simple Tweaks
    INSPIRING CREATIVE AND INNOVATIVE MINDS
  • Tweaks for MD?
    Hardware ↑ = performance ↑ = $$$ ↑
    OR
    Tweak the Beowulf = performance ↑ at the same $$$
    Pre-compiled vs. self-compiled binaries (build sketch below)
    MPI libraries
    Test beds: 3-node GridMACS, 7-node Beowulf, 1 reference machine
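By way of illustration, a minimal sketch of the two routes for a GROMACS 4.x-era source tree; the version, paths, and compiler flags here are assumptions, not the exact build used in the talk:

```bash
# Pre-compiled: quick to install, but generic binaries from the distro archive
sudo apt-get install gromacs

# Self-compiled (GROMACS 4.x autotools build; version and flags are assumptions):
# lets the compiler optimise for the local CPU and link the MPI of our choice
tar xzf gromacs-4.0.7.tar.gz
cd gromacs-4.0.7
./configure --enable-mpi --program-suffix=_mpi CFLAGS="-O3 -march=native"
make -j4
sudo make install      # installs the MPI-enabled tools with an _mpi suffix
```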
  • Compilation
    Winner: self-compiled
  • Beowulf: OpenMPI vs. MPICH2 (pre-compiled)
    Winner: MPICH2
  • Pre- and Self-Compiled MPI
    Self-compiled: OpenMPI ≈ MPICH2 (launch commands below)
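For reference, a typical way the same mdrun_mpi binary is launched under the two stacks; the hostfile names, node count, and core count are illustrative, and MPICH2 1.x used the mpd ring launcher:

```bash
# OpenMPI
mpirun -np 8 --hostfile machines.openmpi mdrun_mpi -s bench.tpr -deffnm bench

# MPICH2 1.x (mpd-based launcher)
mpdboot -n 3 -f mpd.hosts       # start one mpd daemon per node
mpiexec -n 8 mdrun_mpi -s bench.tpr -deffnm bench
mpdallexit                      # tear the mpd ring down afterwards
```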
  • What we found
    Single machine:
    66% improvement
    Parallel environment:
    64% improvement
    Compilation and software choices affect performance
  • Instant MD, anyone?
    INSPIRING CREATIVE AND INNOVATIVE MINDS
  • Instant MD cluster
    Lots of underutilised computers in labs
    Idle after office hours, on holidays, etc.
  • Instant MD cluster
    MD, parallel computing = high computing resources
    Solution?
    Supercomputers
    Dedicated computing cluster
    Problems?
    $$$
    ??? (I don’t know this, I don’t know that…)
  • A + B = C
    What is A?
    Existing computers
    LAN-connected, PXE-boot capable, CD-ROM/USB drive
    What is B?
    Linux Live CD
    Auto configuration
    What is C?
    Instant, out-of-the-box computing cluster! (PXE sketch below)
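Under the hood, "B" boils down to a DHCP + TFTP netboot service on the head node. A minimal sketch using dnsmasq; the interface name, address range, and paths are assumptions, and birgHPC's own setup script automates the equivalent:

```bash
# Minimal PXE server config on the cluster-facing NIC (illustrative values)
cat > /etc/dnsmasq.conf <<'EOF'
interface=eth1                                # NIC facing the compute nodes
dhcp-range=192.168.0.100,192.168.0.200,12h    # lease pool for the nodes
dhcp-boot=pxelinux.0                          # bootstrap sent to PXE clients
enable-tftp
tftp-root=/var/lib/tftpboot                   # holds pxelinux.0, kernel, initrd
EOF
/etc/init.d/dnsmasq restart
```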
  • birgHPC
    Free, open-source Linux distribution
    Based on PelicanHPC & Debian Live
    GROMACS, NAMD, mpiBLAST, ClustalW-MPI, PyMOL, VMD
    Auto cluster config
    MPICH2 & OpenMPI
    Auto slot detection (hostfile sketch below)
    Ganglia monitoring
    Simple interface for job submission
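A hypothetical sketch of what "auto slot detection" amounts to: count each node's CPU cores and write an MPI hostfile. The node names and file name are illustrative; birgHPC's actual script may differ:

```bash
# Build an OpenMPI hostfile with one slot per CPU core on every node
> machines.openmpi
for node in node1 node2 node3; do             # illustrative node names
    cores=$(ssh "$node" 'grep -c ^processor /proc/cpuinfo')
    echo "$node slots=$cores" >> machines.openmpi
done
cat machines.openmpi                          # e.g. "node1 slots=4"
```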
  • Some Screenshots (six screenshot slides; images not reproduced in the transcript)
  • Available at http://birg1.fbb.utm.my/birghpc or just Google birghpc
  • Conclusion
    birgHPC
    instant cluster conversion
    Bioinformatics tools
    Auto configurations
    http://birg1.fbb.utm.my/birghpc
    ISOs
    guide
  • Acknowledgements
    Chew Teong Han - Alchemist
    Farizuawana – Graphics
    Joyce Tan – Testing
    Funding from you via LHDN via MOSTI
    Michael Creel for Pelican HPC
  • FAQs
    Boot sequence
    Head node -> run birgHPC_setup -> follow instructions -> boot compute nodes -> the script on the head node shows the # of nodes detected, confirm -> done
    Headless compute nodes (no monitor)?
    Attach a monitor temporarily -> set the BIOS boot sequence to netboot -> done
  • FAQs
    How to know the compute nodes are up?
    Follow the birgHPC boot sequence -> the birgHPC_setup script will show the # of nodes detected (MPI smoke test below)
    Cannot netboot?
    Try http://etherboot.org/wiki/start
    Heterogeneous PCs OK?
    OK (thanks, Michael Creel)
    If mixing 32-bit and 64-bit machines, use a 32-bit PC as the head node
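Beyond the setup script's own node count, a generic MPI smoke test (not a birgHPC-specific tool) confirms that every node answers; the hostfile name is the illustrative one from the sketch above:

```bash
# Each booted compute node should print its hostname once per requested slot
mpirun -np 4 --hostfile machines.openmpi hostname
```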
  • FAQs
    Status monitoring
    Yes -> web browser -> localhost -> Ganglia Monitoring
    What will be displayed on compute nodes?
    Just a simple login terminal with a warning not to use the nodes, etc.
    Limitations?
    RAM, RAM, RAM…
    Everything is loaded into RAM, hence effective disk size = RAM size (quick check below)
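Because a live-booted node keeps its whole filesystem in RAM, it is worth checking the headroom before starting a large job:

```bash
free -m      # total/used/free RAM in MiB -- this is also your "disk"
df -h /      # the RAM-backed root filesystem fills up as files are written
```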
  • FAQs
    Headnode criteria
    Preferably large RAM, because of the shared folder (/home)
    2 Ethernet ports if you want an internet connection (still works with only one)
    Guide?
    http://birg1.fbb.utm.my/birghpc
    Forum?
    No, but you can always refer to the PelicanHPC forums
  • FAQs
    Multiple users?
    No, designed for a single user
    A future release may add SGE or PBS
    Install on hard disk permanently?
    Not tested; technically possible (Google it)
    Performance?
    On par with a hard-disk-installed cluster (tested up to 6 nodes)
  • FAQs
    I cannot boot from CD
    Refer to the user guide -> convert the CD ISO to a USB drive image -> boot from USB (dd recipe below)
    Can I use birgHPC alongside an existing DHCP server?
    Preferably not: both the existing DHCP server and the birgHPC head node will distribute IP addresses, causing conflicts
    Alternatively, run the DHCP machine as the head node, OR unplug the DHCP server and use another PC as the head node
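One common recipe for the CD-to-USB conversion mentioned in the user guide. The ISO filename and /dev/sdX are assumptions, and depending on how the image was mastered an isohybrid step may be needed first, so check both the guide and the device name before writing:

```bash
# WARNING: dd overwrites the whole target device -- verify /dev/sdX first
dd if=birghpc.iso of=/dev/sdX bs=4M
sync                    # flush buffers before unplugging the stick
```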
  • FAQs
    birgHPC criteria (PCs = compute nodes, server = head node)