Accelerating MD… Simple Tweaks and Instant Clusters
Mohd Shahir Shamsir
Bioinformatics Research Group (BIRG)
Faculty of Biosciences & Bioengineering
Universiti Teknologi Malaysia
INSPIRING CREATIVE AND INNOVATIVE MINDS
Summary
- Introduction to BIRG
- MD: What, Why and How
- Improving performance: Simple Tweaks
- Instant MD cluster: birgHPC
- Short video demo
Bioinformatics Research Group (BIRG)
Faculty of Biosciences & Bioengineering
Just Google Us…
MD… What? Why? How? COVERED!!
Performance of MD? Speed, speed, speed…
Supercomputers:
- IBM Roadrunner (first petaflop-class system)
- Nankai Star: 3.7 ns/day on 32 nodes (DPPC); ns/day is derived in the sketch below
- HPCx: 5.2 ns/day on 64 nodes (DPPC)
New platforms:
- Cell-BE: GROMACS ~15x faster than a 3.0 GHz Pentium
- GPU: NAMD, 4 GPUs ≈ 16 CPUs
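For context, ns/day is the standard MD throughput metric: how many nanoseconds of simulated time complete per wall-clock day. A minimal Python sketch of the conversion, using hypothetical run parameters rather than any figure from this talk:

    # ns/day: simulated nanoseconds completed per wall-clock day.
    def ns_per_day(n_steps, timestep_fs, wall_seconds):
        simulated_ns = n_steps * timestep_fs * 1e-6   # femtoseconds -> nanoseconds
        return simulated_ns * 86400.0 / wall_seconds

    # Hypothetical run: 500,000 steps at a 2 fs timestep, finished in 6 hours.
    print(f"{ns_per_day(500_000, 2.0, 6 * 3600):.1f} ns/day")   # -> 4.0 ns/day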
Microwulf cluster: 26 GFLOPS, $2500, 11" x 12" x 17", airline overhead-baggage compliant
Simple Tweaks
Tweaks for MD?
- Hardware ↑ = performance ↑ = $$$ ↑
- OR: tweak a Beowulf cluster = performance ↑ at the same $$$
- Pre-compiled vs self-compiled builds
- MPI libraries
- Test beds: 3-node GridMACS, 7-node Beowulf, 1 reference machine
Compilation
Winner: self-compiled
Beowulf: OpenMPI vs MPICH2 (pre-compiled)
Winner: MPICH2
Pre-compiled vs self-compiled MPI
Self-compiled OpenMPI = MPICH2
What we found
- Single machine: 66% improvement
- Parallel environment: 64% improvement
- Compilation and the choice of software affect performance (a sample timing harness is sketched below)
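To make the comparison concrete, here is a hedged sketch of the kind of timing harness this implies: run the same GROMACS benchmark input under each MPI build and compare wall times. The launcher paths, the mdrun_mpi binary name and benchmark.tpr are placeholders, not the actual test-bed configuration.

    import subprocess, time

    # Hypothetical locations of the two MPI launchers being compared.
    MPI_BUILDS = {
        "MPICH2 (pre-compiled)":   "/usr/bin/mpirun",
        "OpenMPI (self-compiled)": "/opt/openmpi/bin/mpirun",
    }

    def time_run(mpirun, nprocs, tpr="benchmark.tpr"):
        """Wall-clock time for one parallel GROMACS run of the benchmark system."""
        start = time.time()
        subprocess.run([mpirun, "-np", str(nprocs),
                        "mdrun_mpi", "-s", tpr, "-deffnm", "bench"], check=True)
        return time.time() - start

    for label, mpirun in MPI_BUILDS.items():
        print(f"{label}: {time_run(mpirun, 8):.1f} s")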
Instant MD, anyone?
Instant MD cluster
- Lots of under-utilised computers in labs
- Idle after office hours, holidays, etc.
Instant MD cluster
- MD and parallel computing demand substantial computing resources
- Solutions? Supercomputers, dedicated computing clusters
- Problems? $$$ and know-how ("I don't know this, I don't know that…")
A + B = C
- What is A? Existing computers: LAN-connected, PXE-boot capable, with CD-ROM/USB
- What is B? A Linux Live CD with auto-configuration
- What is C? An instant, out-of-the-box computing cluster!
birgHPC
- Free, open-source Linux distribution
- Based on PelicanHPC & Debian Live
- GROMACS, NAMD, mpiBLAST, ClustalW-MPI, PyMOL, VMD
- Automatic cluster configuration
- MPICH2 & OpenMPI
- Automatic slot detection (see the sketch after this list)
- Ganglia monitoring
- Simple interface for job submission
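As a rough illustration of what automatic slot detection plus machinefile-driven job submission involve, here is a minimal Python sketch. The node names, the assumption that every node matches the head node's core count, and the mdrun_mpi call are illustrative only, not birgHPC's actual setup scripts.

    import multiprocessing, subprocess

    def write_machinefile(nodes, path="machines"):
        """nodes: {"node1": 4, ...} -> MPICH2-style machinefile (host:slots per line)."""
        with open(path, "w") as fh:
            for host, slots in nodes.items():
                fh.write(f"{host}:{slots}\n")
        return path

    # Detect slots on this (head) node; assume the compute node offers the same.
    slots = multiprocessing.cpu_count()
    machinefile = write_machinefile({"node0": slots, "node1": slots})

    # Launch a GROMACS job across all detected slots.
    subprocess.run(["mpirun", "-machinefile", machinefile, "-np", str(2 * slots),
                    "mdrun_mpi", "-deffnm", "job"], check=True)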
Some Screenshots
Available at http://birg1.fbb.utm.my/birghpc or just Google "birghpc"
Conclusion
- birgHPC: instant cluster conversion
- Bioinformatics tools
- Auto configuration
- http://birg1.fbb.utm.my/birghpc: ISOs and guide
Acknowledgements
- Chew Teong Han: Alchemist
- Farizuawana: Graphics
- Joyce Tan: Testing
- Funding from you, via LHDN, via MOSTI
- Michael Creel for PelicanHPC
FAQs
- Boot sequence? Boot the head node -> run birgHPC_setup -> follow the instructions -> boot the compute nodes -> the script on the head node shows the number of nodes detected -> confirm -> done
- Headless compute nodes (no monitor)? Attach a monitor once -> set the boot order to netboot -> done
FAQs
- How do I know the compute nodes are up? Follow the birgHPC boot sequence; the birgHPC_setup script shows the number of nodes detected
- Cannot netboot? Try http://etherboot.org/wiki/start
- Heterogeneous PCs OK? Yes (thanks, Michael Creel); if mixing 32-bit and 64-bit machines, use a 32-bit PC as the head node
FAQs
- Status monitoring? Yes: open a web browser on the head node -> http://localhost -> Ganglia Monitoring (a sketch for querying Ganglia directly follows this list)
- What is displayed on a compute node? Just a simple login terminal with a warning not to use the node, etc.
- Limitations? RAM, RAM, RAM… everything is loaded into RAM, so available "disk" space = RAM size
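The Ganglia page at http://localhost is the web front end; the underlying gmond daemon also publishes cluster state as XML on TCP port 8649, so it can be polled directly. A minimal sketch, assuming default Ganglia ports on an unmodified head node:

    import socket
    import xml.etree.ElementTree as ET

    def ganglia_hosts(head="localhost", port=8649):
        """Return the names of all hosts currently reporting to gmond."""
        chunks = []
        with socket.create_connection((head, port)) as sock:
            while True:
                data = sock.recv(4096)
                if not data:          # gmond closes the connection after the XML dump
                    break
                chunks.append(data)
        root = ET.fromstring(b"".join(chunks))
        return [host.get("NAME") for host in root.iter("HOST")]

    print(ganglia_hosts())   # e.g. ['node0', 'node1', ...]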
FAQs
- Head-node criteria? Preferably plenty of RAM, because it hosts the shared folder (/home); two Ethernet ports if you want an internet connection (it still works with only one)
- Guide? http://birg1.fbb.utm.my/birghpc
- Forum? No, but you can always refer to the PelicanHPC forums
FAQs
- Multiple users? No, designed for a single user; a future release may add SGE or PBS
- Can it be installed permanently on a hard disk? Not tested; technically possible (Google it)
- Performance? On par with a hard-disk-installed cluster (tested up to 6 nodes)
FAQs
- I cannot boot from CD? Refer to the user guide -> convert the CD ISO to a USB drive image -> boot from USB
- Can I use birgHPC alongside an existing DHCP server? Preferably not: both the DHCP server and the birgHPC head node will hand out IP addresses, causing address conflicts. Alternatives: make the DHCP server itself the head node, OR unplug the DHCP server and use another PC as the head node
FAQs
- birgHPC hardware criteria? PCs = compute nodes, server = head node
