Mapping Life Science Informatics to the Cloud
Infrastructure cloud platforms such as those offered by Amazon Web Services are not designed and built with scientific research as the primary use case. These presentation slides cover the current state of mapping life science research and HPC techniques onto “the cloud” and how to work around the common engineering, orchestration and data movement problems.

[Note: I've replaced the 2011 version of this talk deck with a slightly updated version as delivered at the AIRI Petabyte Challenge Meeting]

Mapping Life Science Informatics to the Cloud Presentation Transcript

  • 1. Mapping Informatics To the Cloud 2012 AIRI Petabyte Challenge Chris Dagdigian chris@bioteam.net
  • 2. I’m Chris. I’m an infrastructure geek. I work for the BioTeam.
  • 3. The “C” Word.
  • 4. When I say “cloud” I’m talking IaaS.
  • 5. Amazon AWS is the IaaS cloud. Most others are fooling themselves. (Has-beens, also-rans & delusional marketing zombies)
  • 6. A message for the pretenders…
  • 7. No APIs? Not a cloud.
  • 8. No self-service? Not a cloud.
  • 9. I have to email a human? Not a cloud.
  • 10. ~50% failure rate when provisioning new servers? Stupid cloud.
  • 11. Block storage and virtual servers only? (barely) a cloud;
  • 12. Private Clouds: My $.02
  • 13. Private Clouds in 2012: • Hype vs. Reality ratio still wacky • Sensible only for certain shops • Have you seen what you have to do to your networks & gear? • There are easier ways
  • 14. Private Clouds: My Advice for ’12 • Remain cynical (test vendor claims) • Due Diligence still essential • I personally would not deploy/buy anything that does not explicitly provide Amazon API compatibility
  • 15. Private Clouds: My Advice for ’12 • Most people are better off: • Adding VM platforms to existing HPC clusters & environments • Extending enterprise VM platforms to allow user self-service & server catalogs
  • 16. Enough Bloviating. Advice time.
  • 17. Tip #1
  • 18. HPC & Clouds: Whole New World
  • 19. • We have spent decades learning to tune research HPC systems for shared access & many users. • The cloud upends this model.
  • 20. • Far more common to see … • Dedicated cloud resources spun up for each app or use case • Each system gets individually tuned & optimized
  • 21. Tip #2
  • 22. Hybrid Clouds & Cloud Bursting
  • 23. • Lots of aggressive marketing • Lots of carefully constructed “case studies” and prototypes • The truth? • Less usable than you’ve been told • Possible? Heck yeah. • Practical? Only sometimes.
  • 24. • Advice • Be cynical • Demand proof • Test carefully
  • 25. • Still want to do it? • Buy it, don’t build it • Cycle Computing • Univa • Bright Computing • …
  • 26. • Follow the crowd • In the real world we see: • Separation between local and cloud HPC resources • Send your work to the system most suitable
  • 27. Tip #3
  • 28. You can’t rewrite EVERYTHING.
  • 29. • Salesfolk will just glibly tell you to rewrite your apps so you can use whatever big data analysis framework they happen to be selling today
  • 30. • They have no clue.
  • 31. • In life science informatics we have hundreds of codes that will never be rewritten. • We’ll be needing them for years to come.
  • 32. • Advice: • MapReduce-ish methods are the future for big-data informatics • It will take years to get there • We still have to deal with legacy algorithms and codes
  • 33. • You will need: • A process for figuring out when it’s worthwhile to rewrite/re-architect • Tested cloud strategies for handling three use cases
  • 34. You need 3 cloud architectures: 1. Legacy HPC 2. “Cloudy” HPC 3. Big Data HPC (Hadoop)
  • 35. Legacy HPC on the cloud • MIT StarCluster • http://web.mit.edu/star/cluster/ • This is your baseline • Extend as needed
  • 36. “Cloudy” HPC • Use this method when … • It makes sense to rewrite or re-architect an HPC workflow to better leverage modern cloud capabilities
  • 37. “Cloudy” HPC, continued • Ditch the legacy compute farm model • Leverage elastic scale-out tools (***) • Spot Instances for elastic & cheap compute • SimpleDB for job statekeeping • SQS for job queues & workflow “glue” • SNS for message passing & monitoring • S3 for input & output data • Etc.
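
    The building blocks above wire together naturally: SQS carries the job queue, S3 holds inputs and outputs, and each worker node simply pulls, runs and pushes. Below is a minimal sketch of such a worker loop using the modern boto3 SDK (the 2012-era equivalent was boto); the queue name, bucket name and analysis binary are hypothetical placeholders, not anything from the deck.

      import json
      import subprocess

      import boto3

      sqs = boto3.resource("sqs")
      s3 = boto3.client("s3")

      queue = sqs.get_queue_by_name(QueueName="informatics-jobs")   # assumed queue name
      RESULT_BUCKET = "informatics-results"                         # assumed bucket name

      while True:
          # Long-poll SQS for one unit of work; the message body carries the job spec.
          messages = queue.receive_messages(MaxNumberOfMessages=1, WaitTimeSeconds=20)
          if not messages:
              continue
          msg = messages[0]
          job = json.loads(msg.body)

          # Stage the input from S3, run the (placeholder) analysis, push results back.
          s3.download_file(job["input_bucket"], job["input_key"], "/tmp/input.dat")
          subprocess.run(["my_analysis_tool", "/tmp/input.dat", "-o", "/tmp/output.dat"],
                         check=True)                                # placeholder binary
          s3.upload_file("/tmp/output.dat", RESULT_BUCKET, job["output_key"])

          # Delete the message only after results are safely in S3, so a crashed
          # worker lets the job reappear on the queue for another node.
          msg.delete()

    Deleting the message only after the results land in S3 is the same "design for failure" habit the later slides push: a worker that dies mid-job costs you nothing but a retry.
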
  • 38. Big Data HPC • It’s gonna be a MapReduce world • Little need to roll your own • Ecosystem already healthy • Multiple providers today • Often a slam-dunk cloud use case
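
    For readers who have not touched Hadoop, a toy Hadoop Streaming job shows why MapReduce maps so cleanly onto embarrassingly parallel informatics counting problems. The k-mer counting pair below is purely illustrative and not from the deck.

      # ---- mapper.py: emit (k-mer, 1) for every 8-mer in a sequence line ----------
      import sys

      K = 8
      for line in sys.stdin:
          seq = line.strip().upper()
          if not seq or seq.startswith(">"):          # skip FASTA headers
              continue
          for i in range(len(seq) - K + 1):
              print(f"{seq[i:i + K]}\t1")

      # ---- reducer.py: sum counts per k-mer (Hadoop delivers input sorted by key) --
      import sys

      current, total = None, 0
      for line in sys.stdin:
          key, value = line.rstrip("\n").split("\t")
          if key != current:
              if current is not None:
                  print(f"{current}\t{total}")
              current, total = key, 0
          total += int(value)
      if current is not None:
          print(f"{current}\t{total}")

    Hadoop Streaming runs these as ordinary executables (hadoop jar hadoop-streaming.jar -mapper mapper.py -reducer reducer.py -input ... -output ...), which is also how hosted MapReduce services such as Amazon Elastic MapReduce consumed them.
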
  • 39. Tip #4
  • 40. The Cloud was not designed for “us”
  • 41. • HPC is an edge case for the hyperscale IaaS clouds • We need to deal with this and engineer around it.
  • 42. • Many examples • Eventual consistency • Networking & subnets • Latency • Node placement
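
    One concrete way to engineer around the node-placement issue is to ask EC2 for a “cluster” placement group so tightly coupled nodes land close together on the network. A hedged boto3 sketch; the group name, AMI and instance type are placeholder assumptions.

      import boto3

      ec2 = boto3.client("ec2")

      # A "cluster" placement group asks EC2 to pack these nodes close together,
      # which matters for latency-sensitive, tightly coupled (MPI-style) work.
      ec2.create_placement_group(GroupName="hpc-run-42", Strategy="cluster")

      ec2.run_instances(
          ImageId="ami-0123456789abcdef0",       # placeholder AMI
          InstanceType="c5n.18xlarge",           # placeholder HPC-oriented type
          MinCount=8,
          MaxCount=8,
          Placement={"GroupName": "hpc-run-42"},
      )
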
  • 43. • Advice • Manage expectations • Benchmark & test • Evangelize • (pester the cloud sales reps …)
  • 44. Tip #5
  • 45. Data Movement Is Still Hard
  • 46. • Consistently getting easier • Amazon is not a bottleneck • AWS Import/Export • AWS Direct Connect • Aspera has some amazing stuff out right now
  • 47. • Advice • AWS Import/Export works well • Size of pipe is not everything • Sweat the small stuff • Tracking, checksums, disk speed • Dedicated workstations • Secure media storage
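
    The “tracking, checksums” point deserves emphasis: a manifest built before the data leaves the building is what lets you prove later that every byte arrived. A small sketch of that habit (paths are hypothetical):

      import hashlib
      import sys
      from pathlib import Path

      def md5sum(path, chunk=1 << 20):
          # Stream the file in 1 MB chunks so terabyte-scale inputs fit in memory.
          h = hashlib.md5()
          with open(path, "rb") as fh:
              for block in iter(lambda: fh.read(chunk), b""):
                  h.update(block)
          return h.hexdigest()

      def write_manifest(root, manifest="MANIFEST.md5"):
          # One "checksum  path" line per file, in the format md5sum -c accepts.
          with open(manifest, "w") as out:
              for path in sorted(Path(root).rglob("*")):
                  if path.is_file():
                      out.write(f"{md5sum(path)}  {path}\n")

      if __name__ == "__main__":
          write_manifest(sys.argv[1])   # e.g. python make_manifest.py /data/run_0123
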
  • 48. Dedicated data movement station
  • 49. ‘naked’ Terabyte-scale data movement
  • 50. Don’t overlook media storage …
  • 51. • Advice for 2012 • BioTeam is dialing down our advocacy of physical data ingestion into the cloud • Why? • Operationally hard, expensive and no longer strictly needed
  • 52. Real-world cross-country internet-based data movement, March 2012
  • 53. 700 Mb/sec into Amazon, stress-free & zero tuning, March 2012
  • 54. • People trying to move data via physical media quickly realize the operational difficulties• Bandwidth is cheaper than hiring another body to manage physical data ingestion & movement• In 2012 we strongly recommend network-based data movement when at all possible
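
    In practice the network path comes down to parallelism: many concurrent streams into S3 rather than one big one. A sketch using boto3's managed multipart transfer; the bucket, key and tuning numbers are illustrative assumptions, not figures from the talk.

      import boto3
      from boto3.s3.transfer import TransferConfig

      s3 = boto3.client("s3")

      # 64 MB parts pushed 16 at a time: enough concurrency to fill a fat pipe
      # without hand-tuning TCP. Numbers are illustrative, not a recommendation.
      config = TransferConfig(multipart_threshold=64 * 1024 * 1024,
                              multipart_chunksize=64 * 1024 * 1024,
                              max_concurrency=16)

      s3.upload_file("/data/run_0123/sample.bam",   # local file (placeholder path)
                     "my-ingest-bucket",            # assumed bucket name
                     "run_0123/sample.bam",         # destination key
                     Config=config)
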
  • 55. u r doing it wrong
  • 56. cool data movement, bro!
  • 57. Tips #6 & 7
  • 58. Cloud storage. Still slow.
  • 59. Big shared storage. Still hard.
  • 60. • Not much we can do except engineer around it • AWS compute cluster instances are a huge step forward • AWS competitors take note
  • 61. • We are not database nerds • We care about more than just random I/O performance • We need it all • Random I/O • Long sequential read/write
  • 62. • Faster Storage Options • Software RAID on EBS • Various GlusterFS options • Even if you optimize everything, the virtual NICs are still a bottleneck
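
    The software-RAID-on-EBS option amounts to attaching several volumes and striping across them on the instance. A boto3 sketch of the provisioning half; the instance ID, sizes, volume type and device names are placeholders, and the mdadm striping itself happens on the host.

      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")      # assumed region

      INSTANCE_ID = "i-0123456789abcdef0"                      # placeholder instance
      AZ = "us-east-1a"
      DEVICES = ["/dev/sdf", "/dev/sdg", "/dev/sdh", "/dev/sdi"]

      volume_ids = []
      for device in DEVICES:
          vol = ec2.create_volume(AvailabilityZone=AZ, Size=200, VolumeType="gp3")
          ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
          ec2.attach_volume(VolumeId=vol["VolumeId"], InstanceId=INSTANCE_ID,
                            Device=device)
          volume_ids.append(vol["VolumeId"])

      # On the instance itself, stripe the attached devices with software RAID, e.g.:
      #   mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/xvd[f-i]
      print("Attached volumes:", volume_ids)
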
  • 63. • Big Shared Storage • 10GbE nodes and NFS • Software RAID sets • GlusterFS or similar • 2012: pNFS finally?
  • 64. Tip #8
  • 65. Things fail differently in the cloud.
  • 66. • Stuff breaks • It breaks in weird ways • Transient/temporary issues more common than what we see “at home”
  • 67. • Advice • Pessimism is good • Design for failure • Think hard about • How will you detect? • How will you respond?
  • 68. • Advice • Remove humans from loop • Automate recovery • Automate your backups
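
    “Automate your backups” can be as small as a cron-driven script that snapshots every tagged EBS volume. A minimal boto3 sketch; the tag key and value are assumptions for illustration.

      import datetime

      import boto3

      ec2 = boto3.client("ec2")

      # Find every volume tagged backup=nightly (tag key/value are assumptions).
      volumes = ec2.describe_volumes(
          Filters=[{"Name": "tag:backup", "Values": ["nightly"]}]
      )["Volumes"]

      stamp = datetime.datetime.utcnow().strftime("%Y-%m-%d")
      for vol in volumes:
          snap = ec2.create_snapshot(
              VolumeId=vol["VolumeId"],
              Description=f"automated backup of {vol['VolumeId']} on {stamp}",
          )
          print("Started snapshot", snap["SnapshotId"], "for", vol["VolumeId"])
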
  • 69. Tip #9
  • 70. Serial/batch computing at-scale
  • 71. • Loosely coupled workflows are ideal • Break the pipeline into discrete components • Components should be able to scale up|down independently
  • 72. • Component = Opportunity to: • … make a scaling decision • (# nodes in use) • … make a sizing decision • (instance type in use)
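
    A scaling decision per component usually reduces to reading that component's queue depth and sizing the worker pool from it. A sketch of one such policy; the jobs-per-node ratio and node cap are hypothetical knobs.

      import boto3

      sqs = boto3.client("sqs")

      JOBS_PER_NODE = 50      # backlog one worker node should absorb (assumed knob)
      MAX_NODES = 40          # hard ceiling on spend (assumed knob)

      def desired_node_count(queue_url):
          attrs = sqs.get_queue_attributes(
              QueueUrl=queue_url,
              AttributeNames=["ApproximateNumberOfMessages"],
          )["Attributes"]
          backlog = int(attrs["ApproximateNumberOfMessages"])
          # Round up so even a small backlog gets at least one node, then cap spend.
          return min(MAX_NODES, max(1, -(-backlog // JOBS_PER_NODE)))
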
  • 73. Nirvana is …
  • 74. … independent, loosely connected components that can self-scale and communicate asynchronously
  • 75. Advice: • Many people already doing this • Best practices are well known • Steal from the best: • RightScale, Opscode & Cycle Computing
  • 76. Phew. Think I’m done now.
  • 77. Questions? Slides available at http://slideshare.net/chrisdag/
  • 78. End;
  • 79. Backup Slides
  • 80. Private Clouds: Pick Your Poison • OpenStack - http://openstack.org • Pro: Super smart developers; significant mindshare; True Open Source • Con: Commitment to AWS API compatibility (?) & stability
  • 81. Private Clouds: Pick Your Poison • CloudStack - http://cloudstack.org • Pro: Explicit AWS API support; very recent move away from “open-core” model; usability • Con: Developer mindshare? Sudden switch to Apache
  • 82. Private Clouds: Pick Your Poison • Eucalyptus - http://eucalyptus.com • Pro: Direct AWS API compatibility; lots of hypervisor support • Con: Open-core model; mindshare; recent resurrection