Blades for HPTC

Are blade servers suitable for HPTC? This talk covers the pros and cons of building your next cluster using blades.

Talk given at the International Supercomputing blade workshop in 2007.


  1. 1. Blades for HPTC <ul><ul><li>Guy Coates </li></ul></ul><ul><ul><li>Informatics Systems Group </li></ul></ul><ul><ul><li>[email_address] </li></ul></ul>
  2. 2. Introduction <ul><li>The science. </li></ul><ul><ul><li>What is our HPTC workload? </li></ul></ul><ul><li>Why are clusters hard? </li></ul><ul><ul><li>What are the challenges of doing cluster computing? </li></ul></ul><ul><li>How do blades help us? </li></ul><ul><ul><li>Sanger's experience with blade systems. </li></ul></ul><ul><li>Can blades help you? </li></ul><ul><ul><li>What can blades not do? </li></ul></ul>
  3. 3. The Science
  4. 4. The Post Genomic Era <ul><li>Genomes now available for many organisms. </li></ul><ul><li>What does it mean? </li></ul>TCCTCTCTTTATTTTAGCTGGACCAGACCAATTTTGAGGAAAGGATACAGACAGCGCCTG GAATTGTCAGACATATACCAAATCCCTTCTGTTGATTCTGCTGACAATCTATCTGAAAAA TTGGAAAGGTATGTTCATGTACATTGTTTAGTTGAAGAGAGAAATTCATATTATTAATTA TTTAGAGAAGAGAAAGCAAACATATTATAAGTTTAATTCTTATATTTAAAAATAGGAGCC AAGTATGGTGGCTAATGCCTGTAATCCCAACTATTTGGGAGGCCAAGATGAGAGGATTGC TTGAGACCAGGAGTTTGATACCAGCCTGGGCAACATAGCAAGATGTTATCTCTACACAAA ATAAAAAAGTTAGCTGGGAATGGTAGTGCATGCTTGTATTCCCAGCTACTCAGGAGGCTG AAGCAGGAGGGTTACTTGAGCCCAGGAGTTTGAGGTTGCAGTGAGCTATGATTGTGCCAC TGCACTCCAGCTTGGGTGACACAGCAAAACCCTCTCTCTCTAAAAAAAAAAAAAAAAAGG AACATCTCATTTTCACACTGAAATGTTGACTGAAATCATTAAACAATAAAATCATAAAAG AAAAATAATCAGTTTCCTAAGAAATGATTTTTTTTCCTGAAAAATACACATTTGGTTTCA GAGAATTTGTCTTATTAGAGACCATGAGATGGATTTTGTGAAAACTAAAGTAACACCATT ATGAAGTAAATCGTGTATATTTGCTTTCAAAACCTTTATATTTGAATACAAATGTACTCC
  5. 5. Deciphering the genome <ul><li>Sequences need to be analysed. </li></ul><ul><ul><li>Where are the genes? </li></ul></ul><ul><ul><li>What do the genes do? </li></ul></ul><ul><ul><li>Are the genes related to other genes via evolution? </li></ul></ul><ul><li>This analysis is known as gene annotation. </li></ul><ul><li>Provides the basis for new questions: </li></ul><ul><ul><li>What happens when the genes go wrong? </li></ul></ul><ul><ul><li>How do genes interact with one another? </li></ul></ul><ul><ul><li>What do the genes we have never seen before do? </li></ul></ul>
  6. 6. Annotation at Sanger <ul><li>We have both human and machine annotation efforts. </li></ul><ul><ul><li>Havana Group: manual annotation (10% coverage). </li></ul></ul><ul><ul><li>Ensembl project: automated annotation of 26 vertebrate genomes. </li></ul></ul><ul><li>Data pooled into the Ensembl database. </li></ul><ul><ul><li>Access via website (8M hits / week). </li></ul></ul><ul><ul><li>Perl/Java/SQL APIs. </li></ul></ul><ul><ul><li>Bulk download via FTP. </li></ul></ul><ul><ul><li>Direct SQL access (~150 queries / second). </li></ul></ul><ul><ul><li>Core databases 250GB / month. </li></ul></ul><ul><li>Software is all Open Source (Apache style license). </li></ul><ul><li>Data is free for download. </li></ul>
  7. 7. TCCTCTCTTTATTTTAGCTGGACCAGACCAATTTTGAGGAAAGGATACAGACAGCGCCTG GAATTGTCAGACATATACCAAATCCCTTCTGTTGATTCTGCTGACAATCTATCTGAAAAA TTGGAAAGGTATGTTCATGTACATTGTTTAGTTGAAGAGAGAAATTCATATTATTAATTA TTTAGAGAAGAGAAAGCAAACATATTATAAGTTTAATTCTTATATTTAAAAATAGGAGCC AAGTATGGTGGCTAATGCCTGTAATCCCAACTATTTGGGAGGCCAAGATGAGAGGATTGC TTGAGACCAGGAGTTTGATACCAGCCTGGGCAACATAGCAAGATGTTATCTCTACACAAA ATAAAAAAGTTAGCTGGGAATGGTAGTGCATGCTTGTATTCCCAGCTACTCAGGAGGCTG AAGCAGGAGGGTTACTTGAGCCCAGGAGTTTGAGGTTGCAGTGAGCTATGATTGTGCCAC TGCACTCCAGCTTGGGTGACACAGCAAAACCCTCTCTCTCTAAAAAAAAAAAAAAAAAGG AACATCTCATTTTCACACTGAAATGTTGACTGAAATCATTAAACAATAAAATCATAAAAG AAAAATAATCAGTTTCCTAAGAAATGATTTTTTTTCCTGAAAAATACACATTTGGTTTCA GAGAATTTGTCTTATTAGAGACCATGAGATGGATTTTGTGAAAACTAAAGTAACACCATT ATGAAGTAAATCGTGTATATTTGCTTTCAAAACCTTTATATTTGAATACAAATGTACTCC Ensembl Annotation
  8. 8. Ensembl Annotation
  9. 9. Ensembl Annotation
  10. 10. Ensembl Annotation
  11. 11. How is the data generated? <ul><li>Ensembl provides a framework for automated annotation. </li></ul><ul><ul><li>Scientist describes annotation required. </li></ul></ul><ul><li>Rulemanager generates a set of compute tasks. </li></ul><ul><ul><li>~20,000 jobs for a moderate genome. </li></ul></ul><ul><ul><li>~10,000 CPU/hours. </li></ul></ul><ul><li>Runner executes the jobs. </li></ul><ul><ul><li>Takes care of dependencies, failures. </li></ul></ul><ul><ul><li>LSF used as DRM for execution of jobs. </li></ul></ul><ul><ul><li>Results and state stored in MySQL databases. </li></ul></ul><ul><li>Extensible and reusable. </li></ul><ul><ul><li>Newly sequenced genomes are incorporated into Ensembl reasonably easily. </li></ul></ul>
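The rulemanager/runner split above is the heart of the pipeline: one component generates tasks, the other executes them while tracking dependencies and retrying failures. As a rough illustration only, here is a minimal sketch of that runner pattern; the `Job` and `run_all` names are invented for this example, and the real runner submits through LSF and keeps state in MySQL rather than executing jobs inline.

```python
# Minimal sketch of the generate-then-run pattern described above.
# In production the runner submits via a DRM (LSF) and records state
# in MySQL; here jobs run inline and state lives in a dict.
from dataclasses import dataclass, field

@dataclass
class Job:
    name: str
    deps: list = field(default_factory=list)  # names of prerequisite jobs
    tries: int = 0

def run_all(jobs, execute, max_retries=2):
    """Run jobs respecting dependencies, retrying failures.

    execute(job) -> bool (True on success). Returns a name->state map
    with states 'done', 'failed' or 'pending' (blocked by a failure).
    """
    state = {j.name: "pending" for j in jobs}
    progress = True
    while progress:
        progress = False
        for j in jobs:
            if state[j.name] != "pending":
                continue
            # a job is runnable once every prerequisite is done
            if all(state[d] == "done" for d in j.deps):
                j.tries += 1
                if execute(j):
                    state[j.name] = "done"
                elif j.tries > max_retries:
                    state[j.name] = "failed"
                progress = True
    return state
```

For example, a three-stage chain (fetch, blast, annotate) runs in dependency order, and a persistently failing job is marked failed after its retries while its dependents stay pending.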
  12. 12. Genebuild Workflow
  13. 13. System requirements <ul><li>Many algorithms involved. </li></ul><ul><ul><li>Blast, exonerate ( C ) </li></ul></ul><ul><ul><li>perl / java pipeline managers. </li></ul></ul><ul><ul><li>400 binaries in all. </li></ul></ul><ul><li>Integer, not floating point, intensive. </li></ul><ul><ul><li>General compute rather than specialised processors. </li></ul></ul><ul><li>Moderate memory sizes. </li></ul><ul><ul><li>64 bit memory size is nice, but not essential. </li></ul></ul><ul><li>Lots and lots of disk I/O. </li></ul><ul><ul><li>500GB genomic dataset searched by the pipeline. </li></ul></ul><ul><ul><li>I/O bound in many parts. </li></ul></ul><ul><li>Minimal interprocess communication. </li></ul><ul><ul><li>The odd 4-node MPI job. </li></ul></ul>
  14. 14. System requirements <ul><li>System is embarrassingly parallel. </li></ul><ul><ul><li>Scales well when we add more nodes. </li></ul></ul><ul><li>We don't need low-latency interconnects. </li></ul><ul><ul><li>Ethernet is fine. </li></ul></ul><ul><li>Well suited to clusters of commodity hardware. </li></ul><ul><li>(We also need HA clusters for the queuing system and mysql databases, but that is another presentation) </li></ul>
  15. 15. Cluster MK 1 <ul><li>360 DS10L 1U servers in 9 racks. </li></ul><ul><li>“Bog standard” cluster. </li></ul>
  16. 16. But... <ul><li>Data keeps on coming in. </li></ul><ul><ul><li>New genomes are sequenced. </li></ul></ul><ul><ul><li>Errors in old genomes corrected. </li></ul></ul><ul><li>We want to compare all genomes against all others. </li></ul>
  17. 17. Compute demand grows with the data <ul><li>Science exceeds current compute capacity every 18 months. </li></ul><ul><li>We need a bigger cluster every 18 months. </li></ul><ul><ul><li>Keep the current one running and help the users! </li></ul></ul>
  18. 18. 5 clusters in 6 years <ul><li>20x increase in compute capacity. </li></ul><ul><ul><ul><li>(Moore's law helps a bit, but that is transistors, not SPECint.) </li></ul></ul></ul><ul><li>What did we learn? </li></ul><ul><ul><li>Clusters are really hard. </li></ul></ul>
  19. 19. Why are clusters hard?
  20. 20. Scaling <ul><li>Everyone talks about code scaling. </li></ul><ul><ul><li>Will my application run on more nodes? </li></ul></ul><ul><li>Do admins scale? </li></ul><ul><ul><li>If we double the cluster size, will we need twice the admins? </li></ul></ul><ul><ul><li>If it is hard today, what will it be like in 18 months? </li></ul></ul><ul><ul><li>If we have to spend less admin time per node, will reliability suffer? </li></ul></ul><ul><ul><li>We should be spending time helping users optimise code. </li></ul></ul><ul><li>Everything that can go wrong on a server can go wrong on a cluster node. </li></ul><ul><ul><li>But we have hundreds of nodes. </li></ul></ul><ul><ul><li>Hundreds of problems. </li></ul></ul>
  21. 21. Clusters Get More Complex <ul><li>MK1 cluster: </li></ul><ul><ul><li>360 CPUs, local disk storage, single fast ethernet. </li></ul></ul><ul><li>MK5 cluster: </li></ul><ul><ul><li>Multiple trunked GigE networks, cluster filesystems, SAN storage, multiple architectures (ia32, AMD64, token ia64 and alpha). </li></ul></ul><ul><li>Bleeding edge hardware / software stacks. </li></ul><ul><ul><li>Non-trivial problems. </li></ul></ul><ul><ul><li>Google may not be your friend if you are the first to find the problem. </li></ul></ul>
  22. 22. Manageability is the key <ul><li>Numerous, complex systems are hard to manage. </li></ul><ul><li>Clusters need good management tools. </li></ul><ul><li>The fastest cluster in the world is of no use if it does not stay up long enough to run your jobs. </li></ul><ul><li>Manageability is our number 1 priority when designing clusters. </li></ul><ul><ul><li>We do not buy on price/performance. </li></ul></ul><ul><ul><li>We buy on price/manageability. </li></ul></ul>
  23. 23. Cluster Management Life Cycle <ul><li>Installation. </li></ul><ul><ul><li>Bolting the thing in. </li></ul></ul><ul><li>Commissioning. </li></ul><ul><ul><li>Getting the cluster configured. </li></ul></ul><ul><li>Production. </li></ul><ul><ul><li>Doing some useful work. </li></ul></ul>
  24. 24. Installation <ul><li>Where to put the racks? </li></ul><ul><ul><li>Like disk space, data centres fill to 80% within 6 months of being built. </li></ul></ul><ul><li>Power / Aircon. </li></ul><ul><ul><li>You need to have enough. </li></ul></ul><ul><ul><li>Total heat output vs density. </li></ul></ul><ul><li>Networking. </li></ul><ul><ul><li>Each system needs multiple network cables. </li></ul></ul><ul><ul><ul><li>public network, private network, SAN, mgt network. </li></ul></ul></ul><ul><ul><li>Don't forget the switching. </li></ul></ul><ul><li>“But the cluster got delivered last week, why can't I run jobs?” </li></ul>
  25. 25. Commissioning <ul><li>Getting the system up and running. </li></ul><ul><ul><li>OS deployment usually last! </li></ul></ul><ul><li>Initial configuration. </li></ul><ul><ul><li>Firmware updates. </li></ul></ul><ul><ul><ul><li>BIOS, NIC, mgt processor, FC HBA etc. </li></ul></ul></ul><ul><ul><li>Standardise BIOS settings. </li></ul></ul><ul><ul><ul><li>HT, memory interleave etc. </li></ul></ul></ul><ul><ul><li>RAID configuration. </li></ul></ul><ul><li>DOA Discovery. </li></ul><ul><ul><li>Machines with failed DIMMs, CPUs. </li></ul></ul><ul><li>OS Deployment. </li></ul><ul><ul><li>OS installation, local customisations. </li></ul></ul><ul><ul><li>Application stack. </li></ul></ul>
  26. 26. Production <ul><li>Broken Hardware. </li></ul><ul><ul><li>Hardware failures should be detected and the admin told. </li></ul></ul><ul><ul><li>Ideally they should be detected before they are fatal. </li></ul></ul><ul><ul><li>“Black hole” machines painful on HPTC clusters. </li></ul></ul><ul><li>Sysadmin tasks. </li></ul><ul><ul><li>Software updates etc. </li></ul></ul><ul><li>Emergencies. </li></ul><ul><ul><li>Can you get a remote console? </li></ul></ul><ul><ul><li>Console logs / oopses. </li></ul></ul><ul><li>Doomsday scenarios. </li></ul><ul><ul><li>Power or AC failures. </li></ul></ul><ul><ul><li>Can I power off my cluster from home at 2:00am? </li></ul></ul><ul><ul><li>Can I do it before my machines melt? </li></ul></ul>
  27. 27. How do blades help?
  28. 28. How Do Blades Help? <ul><li>Manageability touches on hardware and software. </li></ul><ul><ul><li>Good manageability requires smart software and smart hardware. </li></ul></ul><ul><li>Blades have smart hardware. </li></ul><ul><ul><li>Management processors on blades and in chassis. </li></ul></ul><ul><ul><li>(And some servers now.) </li></ul></ul><ul><li>Blades have smart software. </li></ul><ul><ul><li>Vendors supply OS deployment and management tools. </li></ul></ul><ul><li>Unit of administration is the chassis, not the blade. </li></ul><ul><ul><li>We end up managing a smaller number of smarter entities. </li></ul></ul>
  29. 29. Smart Hardware <ul><li>Management processor. </li></ul><ul><ul><li>Sits on the blade and/or the chassis. </li></ul></ul><ul><ul><li>Key enabler. Almost all benefits flow from this. </li></ul></ul><ul><li>Basic Features. </li></ul><ul><ul><li>Hardware Inventory (MAC addresses, BIOS revs etc). </li></ul></ul><ul><ul><li>Remote power. </li></ul></ul><ul><ul><li>Remote console (SOL, VNC). </li></ul></ul><ul><ul><li>Machine health (memory, fans, CPUs). </li></ul></ul><ul><ul><li>Alerting. </li></ul></ul><ul><li>Advanced Features. </li></ul><ul><ul><li>BIOS twiddling (PXE boot). </li></ul></ul><ul><ul><li>Firmware updates. </li></ul></ul><ul><ul><li>Integrated switch management. </li></ul></ul>
  30. 30. Smart software <ul><li>Management Suite. </li></ul><ul><ul><li>Provides window into what the hardware is doing. </li></ul></ul><ul><ul><li>Provides remote console, power and alerting. </li></ul></ul><ul><li>OS deployment suite. </li></ul><ul><ul><li>Typically “golden image” installers. </li></ul></ul><ul><ul><li>Allow for rapid and consistent OS installation. </li></ul></ul><ul><ul><li>Quick / automated re-tasking of machines. </li></ul></ul><ul><ul><li>Software inventories. </li></ul></ul><ul><li>May be integrated into single product. </li></ul>
  31. 33. Web interface
  32. 34. Management Interface <ul><li>Web interfaces are nice. </li></ul><ul><ul><li>Easy to get to grips with and find features. </li></ul></ul><ul><li>Command line is even better. </li></ul><ul><ul><li>Command line means we can script it. </li></ul></ul><ul><li>Command line tools allow you to integrate blade management with existing tools. </li></ul><ul><ul><li>You do not have to use the vendor suggested solution. </li></ul></ul><ul><ul><li>Magic of open source. </li></ul></ul>
  33. 35. Why Extend Existing Tools? <ul><li>Vendor tools can be limiting. </li></ul><ul><ul><li>Tend to be Windows-centric, as Windows is a pain to manage. </li></ul></ul><ul><ul><li>May not work with non standard network or disk configs. </li></ul></ul><ul><li>Linux already has good deployment tools. </li></ul><ul><ul><li>Why re-invent the wheel? </li></ul></ul><ul><ul><li>Not quite fully automated. </li></ul></ul><ul><li>Management processor command line interface. </li></ul><ul><ul><li>We can script and do whatever we want. </li></ul></ul><ul><li>Extend existing tools. </li></ul><ul><ul><li>Use existing deployment tools to install blades. </li></ul></ul><ul><ul><li>Can cope with whatever twisted configs we want to run. </li></ul></ul>
  34. 36. The Cluster Management Life Cycle Revisited <ul><li>...But with blades. </li></ul><ul><li>How do blades make it easier? </li></ul>
  35. 37. Cluster MK5 <ul><li>560 CPUs </li></ul><ul><ul><li>140 dual core /dual CPU blades. </li></ul></ul><ul><ul><li>10 chassis, 2 cabinets. </li></ul></ul><ul><li>OS: </li></ul><ul><ul><li>Debian / AMD64. </li></ul></ul><ul><li>Networking: </li></ul><ul><ul><li>1 GigE external network. </li></ul></ul><ul><ul><li>2 GigE trunked private network. </li></ul></ul><ul><li>Storage: </li></ul><ul><ul><li>Disk config: hardware RAID1 for OS. </li></ul></ul><ul><ul><li>Cluster filesystem. </li></ul></ul>
  36. 38. Installation <ul><li>Blades take up less space. </li></ul><ul><ul><li>Less space to clear / tidy. </li></ul></ul><ul><li>Integrated power and networking. </li></ul><ul><ul><li>Fewer cables. </li></ul></ul>
  37. 39. Installation <ul><li>42 1U servers with 3 GigE networks: </li></ul><ul><ul><li>42 10/100 mgt cables. </li></ul></ul><ul><ul><li>126 GigE cables. </li></ul></ul><ul><ul><li>42 power cables. </li></ul></ul><ul><ul><li>External switches. </li></ul></ul><ul><li>70 blades in 5 chassis with 3 GigE networks: </li></ul><ul><ul><li>5 10/100 mgt cables. </li></ul></ul><ul><ul><li>15 GigE cables. </li></ul></ul><ul><ul><li>20 power cables. </li></ul></ul><ul><ul><li>No external switches. </li></ul></ul><ul><li>One person can rack and patch a cabinet of blades in a day. </li></ul><ul><ul><li>I know, I've done it! </li></ul></ul>
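The cable counts above follow directly from per-unit arithmetic: server cabling scales with the node count, blade cabling scales with the chassis count. A quick sanity check; the per-chassis assumptions (one uplink per network, four power feeds) are mine, chosen to reproduce the slide's figures:

```python
# Cable-count comparison from the slide, recomputed.
def server_cables(n, networks=3):
    # each 1U server: 1 mgt cable, one cable per GigE network, 1 power cable
    return {"mgt": n, "gige": n * networks, "power": n}

def blade_cables(chassis, networks=3, uplinks_per_net=1, power_per_chassis=4):
    # blades share chassis switching and PSUs, so cabling scales
    # with the chassis count, not the blade count
    return {"mgt": chassis,
            "gige": chassis * networks * uplinks_per_net,
            "power": chassis * power_per_chassis}
```

`server_cables(42)` gives 42 mgt + 126 GigE + 42 power cables, while `blade_cables(5)` gives 5 + 15 + 20, matching the slide.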
  38. 40. Consolidated networking and power <ul><li>14 servers: </li></ul>
  39. 41. Cabling
  40. 42. Commissioning <ul><li>Bootstrap blade chassis. </li></ul><ul><ul><li>Configure mgt module. </li></ul></ul><ul><ul><li>Script sets static IP addresses, alerts etc. </li></ul></ul><ul><ul><li>Script configures network switches. </li></ul></ul><ul><li>FW Updates. </li></ul><ul><ul><li>Script updates all blade and mgt module firmwares. </li></ul></ul><ul><li>~0.5 day for the initial config on 10 chassis. </li></ul>
  41. 43. Commissioning <ul><li>We extended the FAI Debian auto installer. </li></ul><ul><ul><li>We use it already. </li></ul></ul><ul><ul><li>It can cope with our non-standard network and disk topologies. </li></ul></ul><ul><ul><li>Open Source generic system: future-proof. </li></ul></ul><ul><li>Install sequence: </li></ul><ul><ul><li>Harvest MAC addresses from mgt processor. </li></ul></ul><ul><ul><li>PXE boot blades into FAI. </li></ul></ul><ul><ul><li>Construct RAID, flash system BIOS, set BIOS flags. </li></ul></ul><ul><ul><li>OS and SW installation and customisation. </li></ul></ul><ul><ul><li>Set blade to boot off disk and reboot. </li></ul></ul><ul><li>160 seconds for a full OS and software install. </li></ul><ul><ul><li>Run script, go drink tea. </li></ul></ul><ul><li>Command line tools crucial. </li></ul>
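The install sequence above can be driven end to end from a script precisely because the management processor has a command line. The sketch below just builds the ordered command sequence for one blade; `mgtcli` and its subcommands are hypothetical stand-ins for whichever vendor CLI is in use, and the MAC harvesting/DHCP registration step is collapsed into the first command.

```python
def install_commands(chassis, bay):
    """Ordered mgt-processor commands to install one blade.

    Every command string is a hypothetical placeholder for a vendor
    CLI; the structure (harvest MAC -> PXE boot into FAI -> boot from
    disk) is what the slide describes.
    """
    target = f"-c {chassis} -b {bay}"
    return [
        f"mgtcli {target} get-mac",       # harvest MAC, feed it to DHCP/PXE
        f"mgtcli {target} set-boot pxe",  # next boot lands in FAI
        f"mgtcli {target} power cycle",   # FAI: RAID, BIOS flash, OS, SW
        f"mgtcli {target} set-boot disk", # subsequent boots from local disk
        f"mgtcli {target} power cycle",
    ]
```

In practice each string would be handed to `subprocess.run()` or pushed over ssh to the chassis; run the script, go drink tea.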
  42. 44. Production <ul><li>Management processor. </li></ul><ul><ul><li>Remote power and remote console. </li></ul></ul><ul><ul><li>Hardware failures. </li></ul></ul><ul><ul><li>Alerts go into helpdesk system. </li></ul></ul><ul><ul><li>Manage cluster from anywhere I can get ssh. </li></ul></ul><ul><li>Standard linux tools. </li></ul><ul><ul><li>DSH: run commands on all blades. </li></ul></ul><ul><ul><li>cfengine: manage config files. </li></ul></ul><ul><ul><li>ganglia / LSF: load monitoring. </li></ul></ul><ul><ul><li>smartmontools for disk failures. </li></ul></ul><ul><li>Doomsday scenario. </li></ul><ul><ul><li>Emergency shutdown script. </li></ul></ul><ul><ul><li>Runs round mgt processors and powers off blades. </li></ul></ul><ul><ul><li>Keep blowers etc going to reduce heat stress. </li></ul></ul>
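The doomsday shutdown described above is simple because every blade is reachable through a scriptable management processor. A minimal sketch of the walk-and-power-off logic; `power_off` stands in for the vendor CLI or SNMP call (not shown), and a blade that fails to respond is recorded rather than aborting the sweep, since chassis blowers stay up regardless.

```python
# Sketch of an emergency-shutdown sweep: walk every chassis management
# processor and power off each blade, leaving chassis blowers running.
def emergency_shutdown(chassis_list, blades_per_chassis, power_off):
    """Power off every blade; return (done, failed) lists of (chassis, bay)."""
    done, failed = [], []
    for chassis in chassis_list:
        for bay in range(1, blades_per_chassis + 1):
            blade = (chassis, bay)
            try:
                power_off(chassis, bay)   # blade off; blowers stay up
                done.append(blade)
            except Exception:
                failed.append(blade)      # log it, keep sweeping
    return done, failed
```

Wrapped behind ssh, this is the "power off my cluster from home at 2:00am" script.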
  43. 45. Blades make large clusters easier <ul><ul><li>Grown from 360 to 1456 CPUs. </li></ul></ul><ul><ul><li>Shrunk from 360 systems to 42 chassis. </li></ul></ul>
  44. 46. How many admins? <ul><li>It takes 1 admin-day/week to look after a 1456 CPU cluster. </li></ul><ul><ul><li>This went down when we moved from servers to blades. </li></ul></ul><ul><ul><li>cf. TCO studies on the web: </li></ul></ul><ul><ul><li>1 full time admin for 40-50 unix machines. </li></ul></ul><ul><ul><ul><li>(Windows is half that). </li></ul></ul></ul><ul><li>We look after all the rest of the Sanger systems too! </li></ul><ul><li>We spend more time helping users than poking hardware. </li></ul><ul><ul><li>We get good usage out of our cluster. </li></ul></ul>
  45. 47. Can blades help you?
  46. 48. Blade Pros / Cons <ul><li>Blades cost more up front. </li></ul><ul><ul><li>Pay for the chassis, even if you never fill it. </li></ul></ul><ul><li>Management savings only realised on larger installations. </li></ul><ul><ul><li>Would you use blades for an 8-node cluster? </li></ul></ul><ul><li>However, as cluster size increases, costs change. </li></ul><ul><ul><li>Management savings multiply as cluster size increases. </li></ul></ul><ul><li>Power density is high. </li></ul><ul><ul><li>Less power overall, but in a small space. </li></ul></ul><ul><ul><li>Price / performance / watt? </li></ul></ul>
  47. 49. Interconnects <ul><li>We do not use low latency interconnects. </li></ul><ul><ul><li>We do Gigabit + SAN. </li></ul></ul><ul><li>Blade chassis share a backplane. </li></ul><ul><ul><li>Typically 4 GB/s backplane. </li></ul></ul><ul><ul><li>Limits the full bandwidth of the blades. </li></ul></ul><ul><ul><li>What is the latency hit? </li></ul></ul><ul><li>Blades have limited specialised network options. </li></ul><ul><ul><li>Single half height PCI card. </li></ul></ul><ul><ul><li>Currently limited to 4x Infiniband, gigabit and SAN. </li></ul></ul>
  48. 50. Conclusions <ul><li>Good management is the key, whether you run blades or servers. </li></ul><ul><ul><li>Good management is easier on blades. </li></ul></ul><ul><li>Blades can do anything a “standard” server can. </li></ul><ul><ul><li>In less of your space and in less of your time. </li></ul></ul><ul><li>If you are building larger clusters, consider blades. </li></ul>
  49. 51. Acknowledgements <ul><li>Informatics Systems Group </li></ul><ul><ul><li>Tim Cutts </li></ul></ul><ul><ul><li>Mark Rae </li></ul></ul><ul><ul><li>Simon Kelley </li></ul></ul><ul><ul><li>Andy Flint </li></ul></ul><ul><ul><li>Gildas Le Nadan </li></ul></ul><ul><ul><li>Peter Clapham </li></ul></ul><ul><li>Special Projects Group </li></ul><ul><ul><li>John Nicholson </li></ul></ul><ul><ul><li>Martin Burton </li></ul></ul><ul><ul><li>Russell Vincent </li></ul></ul><ul><ul><li>Dave Holland </li></ul></ul>
  50. 53. Storage Concepts
  51. 54. The data problem <ul><li>Pipeline is IO bound in many places. </li></ul><ul><ul><li>500GB of genomic data to search. </li></ul></ul><ul><li>Keep the data as close as possible to the compute. </li></ul><ul><ul><li>Blast over NFS is a complete disaster. </li></ul></ul><ul><ul><li>Data / IO problems common on bioinformatics clusters > 20 nodes. </li></ul></ul>Data NFS server Bottleneck
  52. 55. Initial Strategy <ul><li>Keep the data on local disk. </li></ul><ul><ul><li>Copy the dataset to each machine in the cluster. </li></ul></ul>Nodes Disk Data
  53. 56. Data Scaling <ul><li>Data management was a real headache. </li></ul><ul><ul><li>Ever-expanding dataset was copied to each machine in the farm (400-1000 nodes). </li></ul></ul><ul><ul><li>Data grew from 50GB to 500GB. </li></ul></ul><ul><li>Copying data onto 1000+ machines takes time. </li></ul><ul><ul><li>0.5-2 days for large data pushes, even with clever approaches. </li></ul></ul><ul><li>Ensuring data integrity is hard. </li></ul><ul><ul><li>“Black Holes” syndrome. </li></ul></ul><ul><li>Experience showed it was not a scalable approach for the future. </li></ul>
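One family of "clever approaches" for large data pushes is a tree fan-out: every node that already holds the data seeds another, so the number of seeded nodes doubles per round and total push time grows with log2 of the node count rather than linearly. The sketch below shows the scheduling only; the actual transfer (rsync, scp, etc.) is abstracted as a callback, and this illustrates the idea rather than the script Sanger actually used.

```python
# Tree fan-out for pushing a dataset from one master to N nodes.
# Each round, every node that already has the data copies it to one
# more node, so seeded nodes double per round: rounds ~ log2(N).
def fanout_rounds(nodes, copy):
    """Schedule copies in doubling rounds; return the round count.

    copy(src, dst) performs the actual transfer (e.g. rsync).
    """
    have = ["master"]      # nodes that already hold the data
    todo = list(nodes)     # nodes still waiting for it
    rounds = 0
    while todo:
        for src in list(have):       # snapshot: this round's sources
            if not todo:
                break
            dst = todo.pop(0)
            copy(src, dst)
            have.append(dst)         # dst seeds from the next round on
        rounds += 1
    return rounds
```

For 1000 nodes this finishes in 10 rounds instead of 1000 sequential copies, which is why even multi-hundred-gigabyte pushes stayed in the half-day-to-two-day range.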
  54. 57. Cluster file systems <ul><li>In early 2003 we started investigating cluster file systems for farm usage. </li></ul><ul><li>Most machines had gigabit connections. </li></ul><ul><ul><li>Network speeds near to local disk speeds (120 MBytes/s). </li></ul></ul><ul><li>Bitten hard by Tru64 end of life. </li></ul><ul><ul><li>We have ~300TB of data on Tru64/AdvFS clusterfs. </li></ul></ul><ul><ul><li>No migration path. </li></ul></ul><ul><ul><li>We need a future proof storage solution. </li></ul></ul><ul><li>Should be Open Source. </li></ul><ul><ul><li>Binary kernel modules are evil. </li></ul></ul><ul><ul><li>We often run non-standard kernels. </li></ul></ul>
  55. 58. Initial Implementation <ul><li>No cluster file system would scale to all nodes in the cluster. </li></ul><ul><ul><li>Assessed a large number of systems. </li></ul></ul><ul><li>GPFS was the one we settled on. </li></ul><ul><ul><li>Not all nodes need SAN connections. </li></ul></ul><ul><ul><li>Not open source (you have to start somewhere). </li></ul></ul><ul><li>Divide farm up into a number of small systems. </li></ul><ul><ul><li>Chassis is an obvious unit. </li></ul></ul><ul><ul><li>File systems spanned 2 or 3 chassis of blades. </li></ul></ul><ul><ul><li>End up with 20 file systems. </li></ul></ul><ul><li>Keeping 20 file systems in sync is (relatively) easy. </li></ul>
  56. 59. Topology I <ul><li>10x28 clusters of local NSDs </li></ul><ul><ul><li>GPFS striped across local disks on all nodes. </li></ul></ul><ul><ul><li>Data accessed via gigabit. </li></ul></ul><ul><li>2 chassis per cluster. </li></ul><ul><ul><li>Limited by replication level on GPFS and how often we expect machine failures. </li></ul></ul><ul><li>Performance limited by network. </li></ul><ul><ul><li>80MBytes/s single client. </li></ul></ul><ul><li>Requires no special hardware. </li></ul>Switch
  57. 60. Topology II: Hybrid <ul><li>4x42 node clusters. </li></ul><ul><ul><li>Server machines have SAN storage. </li></ul></ul><ul><ul><li>Client machines talk to servers over the LAN. </li></ul></ul><ul><li>Not every machine needs SAN. </li></ul><ul><ul><li>Clients do IO to multiple server machines. </li></ul></ul><ul><ul><li>Eliminates single server bottleneck. </li></ul></ul>SAN Switch
  58. 61. Future implementation <ul><li>Expand cluster file system to the whole cluster. </li></ul><ul><ul><li>Single copy of the data. </li></ul></ul><ul><ul><li>Allows users to manage their own data. </li></ul></ul><ul><ul><li>Use cluster file system for general scratch/work space. </li></ul></ul><ul><ul><li>Eliminate NFS. </li></ul></ul><ul><li>Implementing Lustre. </li></ul><ul><ul><li>Open source (v. x is proprietary, v. x-1 is open sourced). </li></ul></ul><ul><ul><li>Scales to 1000s of nodes. </li></ul></ul><ul><ul><li>Performs well; in pilots our network is the bottleneck. </li></ul></ul><ul><ul><li>Easy (ish) to add more network. </li></ul></ul>
  59. 62. Lustre Config [slide diagram: ADM and MDS nodes plus eight OSTs, linked over 10G, 4G and 2G connections]
  60. 63. The network is vital. <ul><li>Cluster IO is very stressful for networks. </li></ul><ul><ul><li>We can fill gigabit links from a single client. </li></ul></ul><ul><li>Large amounts of gigabit networking. </li></ul><ul><ul><li>Multiple gigabit trunks. </li></ul></ul><ul><li>Non-blocking switches are critical. </li></ul>
