2014 pycon-talk

Speaker notes (fragments):
  • …spent the last 15 years getting to the point where I earn considerably less than many of you
  • Billions of pieces; high-dimensional puzzle
  • Acceptance testing other people's software
  • Color.
  • Walk through.
  • Add cost.
  • Apparently I'm approachable; trying to work on that.

    1. Instrument ALL the things: Studying data-intensive workflows in the clowd. C. Titus Brown, Michigan State University. (See blog post.)
    2. A few upfront definitions. Big Data, n: whatever is still inconvenient to compute on. Data scientist, n: a statistician who lives in San Francisco. Professor, n: someone who writes grants to fund people who do the work (cf. Fernando Perez). I am a professor (not a data scientist) who writes grants so that others can do data-intensive biology.
    3. This talk is dedicated to Terry Peppers. "Titus, I no longer understand what you actually do…" "Daddy, what do you do at work!?"
    4. I assemble puzzles for a living. Well, ok, I strategize about solving multi-dimensional puzzles with billions of pieces and no box.
    5. Three bioinformatic strategies in use: • Greedy: "if the piece sorta fits…" • N²: "Do these two pieces match? How about this next one?" • The Dutch approach.
    6. The Dutch Solution (De Bruijn assembly): find similarities within puzzle pieces.
    7. The Dutch Solution, algorithmically: • Is linear in time with the number of pieces (way better than N²!) • Is linear in memory with the volume of data (this is due to errors in the digitization process). (A toy De Bruijn sketch in Python follows the slide list.)
    8. Practical memory measurements: Velvet memory usage (GB of RAM) vs. data volume (Adina Howe). (About $500 of data.)
    9.–10. Our research challenges: 1. It costs only $10k and one week to generate enough sequence data that no commodity computer (and few supercomputers) can assemble it. 2. Hundreds to thousands of such data sets are being generated each year.
    11. Our research (i) – CS: • Streaming lossy compression approach that discards pieces we've seen before. • Low-memory probabilistic data structures. (…see PyCon 2013 talk.) => RAM now scales much better: O(I), where I << N (I is sample-dependent, but typically I < N/20). (A rough sketch of the discard-and-count idea follows the slide list.)
    12. Our research (ii) – approach: • Open source, open data, open science, and reproducible computational research: GitHub; automated testing, CI, & literate reSTing; blogging, Twitter; IPython Notebook for data analysis and figures. • Protocols for assembling in the cloud.
    13. Molgula oculata and Molgula occulta: real solutions, tackling squishy biology! (Elijah Lowe & Billie Swalla)
    14. Doing things right => #awesomesauce: • Protocols in English for running analyses in the cloud • Literate reSTing => shell scripts • Tool competitions • Benchmarking • Education • Acceptance tests
    15. Benchmarking strategy: • Rent a bunch of cloud VMs from Amazon and Rackspace. • Extract commands from tutorials using literate-resting. • Use 'sar' (sysstat package) to sample CPU, RAM, and disk I/O. (Sketches of both the command extraction and the sar wrapper follow the slide list.)
    16. Benchmarking output. (Data subset; AWS m1.xlarge.)
    17. Each protocol has many steps. (Data subset; AWS m1.xlarge.)
    18. Most interested in the RAM-intensive bit. (Data subset; AWS m1.xlarge.)
    19. Most interested in the RAM-intensive bit. (Complete data; AWS m1.xlarge.)
    20. Observation #1: Rackspace is faster.
        machine         | data disk     | working disk  | hours | cost
        rackspace-15gb  | 200 GB        | 100 GB        | 34.9  | $23.70
        m2.xlarge       | EBS           | ephemeral     | 44.7  | $18.34
        m1.xlarge       | EBS           | ephemeral     | 45.5  | $21.82
        m1.xlarge       | EBS, max IOPS | ephemeral     | 49.1  | $23.56
        m1.xlarge       | EBS, max IOPS | EBS, max IOPS | 52.5  | $25.20
    21. Surprise #1: AWS ephemeral storage is FASTER. (Same table as slide 20; compare the ephemeral vs. EBS working-disk rows. The implied hourly rates are worked out in a short sketch after the slide list.)
    22.–23. Observation #2: NUMA costs. Same task done with varying memory sizes.
    24. Can't we just use a faster computer? • Demo data on m1.xlarge: 2789 s. • Demo data on m3.xlarge: 1970 s – 30% faster! (Why? m3.xlarge has 2x40 GB SSD drives & 40% faster cores.) Great! Let's try it out!
    25. Observation #3: multifaceted problem! • Full data on m1.xlarge: 45.5 h. • Full data on m3.xlarge: out of disk space. We need about 200 GB to run the full pipeline. You can have fast disk or lots of disk, but not both, for the moment.
    26. Future directions: 1. Invest in cache-local data structures and algorithms. 2. Invest in streaming/in-memory approaches. 3. Not clear (to me) that straight code optimization or infrastructure engineering is a worthwhile investment.
    27. Frequently Offered Solutions: 1. "You should like, totally multithread that." (See: McDonald & Brown, POSA.) 2. "Hadoop will just crush that workload, dude." (Unlikely to be cost-effective.) 3. "Have you tried <my proprietary Big Data technology stack>?" (Thatz Not Science.)
    28. Optimization vs scaling: • Linear time/memory improvements would not have addressed our core problem. (2 years, 20x improvement, 100x increase in data.) • The puzzle problem is a graph problem with big data, no locality, and small compute. Not friendly. • We need(ed) to scale our algorithms. • Can now run on a single chassis, in ~15 GB RAM.
    29. Optimization vs scaling.
    30. Scaling can be more important!
    31. What are we losing by focusing our engineering on pleasantly parallel problems? • Hadoop is fundamentally not that interesting. • Research is about the 100x. • Scaling new problems, evaluating/creating new data structures and algorithms, etc.
    32. (From my PyCon 2011 talk.) Theme: Life's too short to tackle the easy problems – come to academia!
    33. Thanks! • Leigh Sheneman, for starting the benchmarking project. • Labbies: Michael R. Crusoe, Luiz Irber, Likit Preeyanon, Camille Scott, and Qingpeng Zhang.
    34. Thanks! • github.com/ged-lab/ – khmer (core project), khmer-protocols (tutorials/acceptance tests), literate-resting (script to pull out code from reST tutorials). • Blog post at: http://ivory.idyll.org/blog/2014-pycon.html • Michael R. Crusoe, Likit Preeyanon, Camille Scott, and Qingpeng Zhang are here at PyCon. …note, you can probably afford to buy them off me :)
    35. Different computational strategies for k-mer counting, revealed! (khmer-counting paper pipeline; Qingpeng Zhang)
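
Slides 5–7 contrast assembly strategies; the toy Python sketch below illustrates why De Bruijn ("Dutch") assembly is linear in the amount of data: each read is decomposed into its k-mers exactly once, and the graph is built from (k-1)-mer overlaps. This is purely illustrative and is not the khmer or Velvet implementation; k=4 and the reads are made up.

    # Toy De Bruijn graph construction (cf. slides 5-7). Each read contributes
    # its k-mers once, so building the graph is linear in the number of pieces.
    # Not the khmer/Velvet implementation; k=4 and the reads are made up.
    from collections import defaultdict

    def de_bruijn_graph(reads, k=4):
        """Map each (k-1)-mer prefix to the set of (k-1)-mer suffixes it precedes."""
        graph = defaultdict(set)
        for read in reads:
            for i in range(len(read) - k + 1):
                kmer = read[i:i + k]
                graph[kmer[:-1]].add(kmer[1:])  # overlap edge between (k-1)-mers
        return graph

    if __name__ == "__main__":
        for node, nexts in sorted(de_bruijn_graph(["ACGTACGT", "GTACGTTA"]).items()):
            print(node, "->", ", ".join(sorted(nexts)))

Errors in the digitization (sequencing) process create spurious k-mers, which is why memory grows with data volume (slide 7) and why the lossy-compression work on slide 11 matters.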
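Slide 11's "discard pieces we've seen before" idea can be sketched with a single fixed-size table of saturating counters standing in for khmer's probabilistic counting structures. The table size, k, and coverage cutoff below are arbitrary illustration values, not the lab's settings.

    # Rough sketch of streaming, lossy read filtering (cf. slide 11): keep a read
    # only if its k-mers have not already been seen ~CUTOFF times. A fixed-size
    # array of saturating counters stands in for khmer's probabilistic counting;
    # TABLE_SIZE, K, and CUTOFF are arbitrary illustration values.
    import statistics

    TABLE_SIZE = 4 * 1024 * 1024      # fixed memory, independent of data volume
    counts = bytearray(TABLE_SIZE)    # 8-bit saturating counters
    K, CUTOFF = 20, 20

    def keep_read(seq):
        slots = [hash(seq[i:i + K]) % TABLE_SIZE for i in range(len(seq) - K + 1)]
        if not slots:
            return False
        if statistics.median(counts[s] for s in slots) >= CUTOFF:
            return False                      # already well covered: discard
        for s in slots:
            if counts[s] < 255:
                counts[s] += 1                # count only what we keep
        return True

Streaming every read through a filter like this is what makes RAM scale with the sample's novelty I rather than the raw data volume N.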
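Slides 14–15 mention pulling commands out of reST tutorials with literate-resting so that the documentation doubles as an acceptance test. The real script lives in github.com/ged-lab/literate-resting; the sketch below only guesses at the general approach (treat indented lines following a `::` literal block as shell commands) and certainly differs from it in detail.

    # Guess at the general idea behind literate-resting (slides 14-15): pull the
    # indented shell lines out of a reST tutorial so the same text that teaches
    # humans also drives acceptance tests. Not the actual ged-lab script.
    import sys

    def extract_commands(rst_text):
        commands, in_block = [], False
        for line in rst_text.splitlines():
            if line.rstrip().endswith("::"):
                in_block = True                      # a literal block follows
            elif in_block and line.startswith("   "):
                commands.append(line.strip())        # indented line = command
            elif in_block and line.strip():
                in_block = False                     # back to normal prose
        return commands

    if __name__ == "__main__":
        print("\n".join(extract_commands(open(sys.argv[1]).read())))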
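Slide 15's measurement harness is easy to approximate: start sar (from the sysstat package) in the background with binary output, run the pipeline step, then stop sar and keep the log for later replay (e.g. `sar -r -f step1.sar` for memory). The interval, file name, and demo command below are arbitrary; the lab's actual harness may be structured differently.

    # Sketch of the benchmarking harness described on slide 15: sample the
    # machine with sar (sysstat) every few seconds while one step runs.
    import subprocess, time

    def run_with_sar(cmd, logfile, interval=5):
        # sar -o writes binary samples that can be replayed later with `sar -f`.
        sar = subprocess.Popen(["sar", "-o", logfile, str(interval)],
                               stdout=subprocess.DEVNULL)
        start = time.time()
        try:
            subprocess.run(cmd, shell=True, check=True)
        finally:
            sar.terminate()
        return time.time() - start

    if __name__ == "__main__":
        elapsed = run_with_sar("sleep 12", "step1.sar")   # stand-in pipeline step
        print(f"step took {elapsed:.1f} s; samples in step1.sar")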
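The cost column on slides 20–21 is just wall-clock hours multiplied by the hourly instance price, so dividing cost by hours recovers the implied 2014 rates; a quick sanity check on the table.

    # Implied hourly rates from the slide 20/21 table (cost / hours).
    runs = [
        ("rackspace-15gb",               34.9, 23.70),
        ("m2.xlarge (EBS/ephemeral)",    44.7, 18.34),
        ("m1.xlarge (EBS/ephemeral)",    45.5, 21.82),
        ("m1.xlarge (max IOPS/ephem.)",  49.1, 23.56),
        ("m1.xlarge (max IOPS/EBS)",     52.5, 25.20),
    ]
    for name, hours, cost in runs:
        print(f"{name:30s} ${cost / hours:.3f}/hour")
    # -> roughly $0.68/hour (Rackspace 15 GB), $0.41/hour (m2.xlarge),
    #    and $0.48/hour (all m1.xlarge rows)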
