2014 nicta-reproducibility

Talk at NICTA on reproducibility.

  1. 1. Openness and reproducibility in computational science: tools, approaches, and thought patterns. C. Titus Brown ctb@msu.edu October 16, 2014
  2. 2. Hello! Assistant Professor @ MSU; Microbiology; Computer Science; etc. => UC Davis VetMed in 2015. More information at: • ged.msu.edu/ • github.com/ged-lab/ • ivory.idyll.org/blog/ • @ctitusbrown
  3. 3. The challenges of non-model sequencing • Missing or low quality genome reference. • Evolutionarily distant. • Most extant computational tools focus on model organisms – o Assume low polymorphism (internal variation) o Assume a reference genome o Assume somewhat reliable functional annotation o Assume more significant compute infrastructure …and so cannot easily or directly be used on critters of interest.
  4. 4. Shotgun sequencing & assembly http://eofdreams.com/library.html; http://www.theshreddingservices.com/2011/11/paper-shredding-services-small-business/; http://schoolworkhelper.net/charles-dickens%E2%80%99-tale-of-two-cities-summary-analysis/
  5. 5. Shotgun sequencing analysis goals: • Assembly (what is the text?) o Produces new genomes & transcriptomes. o Gene discovery for enzymes, drug targets, etc. • Counting (how many copies of each book?) o Measure gene expression levels, protein-DNA interactions • Variant calling (how does each edition vary?) o Discover genetic variation: genotyping, linkage studies… o Allele-specific expression analysis.
  6. 6. Assembly. Given fragments like:

    It was the best of times, it was the wor
    , it was the worst of times, it was the
    isdom, it was the age of foolishness
    mes, it was the age of wisdom, it was th

  …reconstruct the original text:

    It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness

  …but for lots and lots of fragments!
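To make the overlap intuition concrete, here is a toy Python sketch — no real assembler works this way at scale (real tools build De Bruijn or overlap graphs); this greedy merge is purely for illustration, run on the slide's fragments:

    # Toy assembly-by-overlap: repeatedly merge the pair of fragments
    # with the longest suffix/prefix overlap. Illustration only.

    def overlap(a, b):
        """Length of the longest suffix of a that is a prefix of b."""
        for n in range(min(len(a), len(b)), 0, -1):
            if a.endswith(b[:n]):
                return n
        return 0

    def assemble(fragments, min_overlap=5):
        frags = list(fragments)
        while len(frags) > 1:
            n, a, b = max(((overlap(x, y), x, y)
                           for x in frags for y in frags if x is not y),
                          key=lambda t: t[0])
            if n < min_overlap:
                break  # no sufficiently long overlap remains
            frags.remove(a)
            frags.remove(b)
            frags.append(a + b[n:])  # merge, dropping the shared overlap
        return frags

    fragments = ["It was the best of times, it was the wor",
                 ", it was the worst of times, it was the",
                 "isdom, it was the age of foolishness",
                 "mes, it was the age of wisdom, it was th"]
    print(assemble(fragments))  # recovers the full Dickens sentence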
  7. 7. Shared low-level fragments may not reach the threshold for assembly. (Figure: lamprey mRNAseq.)
  8. 8. Assembly graphs scale with data size, not information. Conway T C, Bromage A J, Bioinformatics 2011;27:479-486.
  9. 9. Practical memory measurements (soil). (Figure: Velvet memory measurements, by Adina Howe.)
  10. 10. Data set size and cost • $1000 gets you ~200m “reads”, or about 20-80 GB of data, in about a week. • > 1000 labs doing this regularly. • Each data set analysis is ~custom. • Analyses are data intensive and memory intensive.
  11. 11. Efficient data structures & algorithms: • Efficient online counting of k-mers • Trimming reads on abundance • Efficient De Bruijn graph representations • Read abundance normalization (sketched below)
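The “read abundance normalization” item (digital normalization) is simple enough to sketch: stream the reads, and keep one only while the median abundance of its k-mers is still below a coverage target. khmer does the counting in a low-memory probabilistic structure; the plain dictionary and the parameter names here are illustrative stand-ins, not our implementation:

    # Digital normalization, sketched: keep a read only while the median
    # abundance of its k-mers is below a coverage target C. A plain dict
    # stands in for khmer's low-memory counting structure.
    from statistics import median

    def kmers(seq, k=20):
        return [seq[i:i + k] for i in range(len(seq) - k + 1)]

    def normalize(reads, k=20, C=20):
        counts = {}
        for read in reads:
            kms = kmers(read, k)
            if not kms:
                continue  # read shorter than k
            if median(counts.get(km, 0) for km in kms) < C:
                for km in kms:
                    counts[km] = counts.get(km, 0) + 1
                yield read  # read still adds information; keep it
            # else: coverage already saturated; drop the read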
  12. 12. Shotgun sequencing is massively redundant; can we eliminate redundancy while retaining information? Analog: JPEG lossy compression. Raw data (~10-100 GB) => compression (~2 GB) => analysis => “information” (~1 GB) => database & integration.
  13. 13. Sparse collections of k-mers can be stored efficiently in Bloom filters. Pell et al., 2012, PNAS; doi: 10.1073/pnas.1121464109.
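The core trick is small enough to sketch in Python: a fixed-size bit array plus a few hash functions gives compact, probabilistic set membership — no false negatives, tunable false positives. This shows the idea only, not the khmer implementation:

    # Bloom filter for k-mers, sketched: set/test a few hashed bit
    # positions per k-mer. Answers may be falsely positive, never
    # falsely negative.
    import hashlib

    class KmerBloomFilter:
        def __init__(self, size=10**6, num_hashes=4):
            self.size = size              # number of bits
            self.num_hashes = num_hashes
            self.bits = bytearray(size // 8 + 1)

        def _positions(self, kmer):
            for i in range(self.num_hashes):
                h = hashlib.sha256(f"{i}:{kmer}".encode()).digest()
                yield int.from_bytes(h[:8], "big") % self.size

        def add(self, kmer):
            for pos in self._positions(kmer):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def __contains__(self, kmer):
            return all(self.bits[pos // 8] & (1 << (pos % 8))
                       for pos in self._positions(kmer))

    bf = KmerBloomFilter()
    bf.add("ATGGCGTAGAACTGGATCCA")
    print("ATGGCGTAGAACTGGATCCA" in bf)  # True
    print("TTTTTTTTTTTTTTTTTTTT" in bf)  # False (with high probability)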
  14. 14. Data structures & algorithms papers • “These are not the k-mers you are looking for…”, Zhang et al., PLoS One, 2014. • “Scaling metagenome sequence assembly with probabilistic de Bruijn graphs”, Pell et al., PNAS 2012. • “A Reference-Free Algorithm for Computational Normalization of Shotgun Sequencing Data”, Brown et al., arXiv 1203.4802.
  15. 15. Data analysis papers • “Tackling soil diversity with the assembly of large, complex metagenomes”, Howe et al., PNAS, 2014. • Assembling novel ascidian genomes & transcriptomes, Stolfi et al. (eLife 2014), Lowe et al. (in prep). • A de novo lamprey transcriptome from large scale multi-tissue mRNAseq, Scott et al., in prep.
  16. 16. Lab approach – not intentional, but working out: novel data structures and algorithms => implement at scale => apply to real biological problems.
  17. 17. This leads to good things (khmer software): • Efficient online counting of k-mers • Trimming reads on abundance • Efficient De Bruijn graph representations • Read abundance normalization
  18. 18. Current research (khmer software): • Efficient online counting of k-mers • Trimming reads on abundance • Efficient De Bruijn graph representations • Read abundance normalization • Streaming algorithms for assembly, variant calling, and error correction • Efficient graph labeling & exploration • Cloud assembly protocols • Efficient search for target genes • Data set partitioning approaches • Assembly-free comparison of data sets • HMM-guided assembly
  19. 19. Testing & version control – the not-so-secret sauce • High test coverage – grown over time. • Stupidity driven testing – we write tests for bugs after we find them and before we fix them (example sketched below). • Pull requests & continuous integration – does your proposed merge break tests? • Pull requests & code review – does new code meet our minimal coding requirements? o Note: spellchecking!!!
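A hypothetical example of the stupidity-driven-testing pattern — the function, the bug, and the tests below are invented for illustration, but the workflow (pin the bug with a failing test, then fix it, then keep the test forever) is the point. Runnable with pytest:

    # Invented example of stupidity driven testing: the lowercase-input
    # bug is hypothetical, but the pattern is real -- write the failing
    # test first, then fix, then keep the test.

    def reverse_complement(seq):
        """Reverse-complement a DNA sequence."""
        complement = {"A": "T", "T": "A", "C": "G", "G": "C"}
        return "".join(complement[b] for b in reversed(seq.upper()))

    def test_reverse_complement():
        assert reverse_complement("GGATCC") == "GGATCC"  # BamHI site is palindromic

    def test_lowercase_reads():
        # Added the day lowercase reads crashed the pipeline with a
        # KeyError; the .upper() call above is the fix it guards.
        assert reverse_complement("acgt") == "ACGT"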
  20. 20. Our “novel research” enables this: • Novel data structures and algorithms; • Permit low(er) memory data analysis; • Liberate analyses from specialized hardware.
  21. 21. Running entirely w/in cloud: ~40 hours on the complete data, AWS m1.xlarge. (See PyCon 2014 talk; video and blog post. Figure: memory usage over the run.)
  22. 22. On the “novel research” side: • Novel data structures and algorithms; • Permit low(er) memory data analysis; • Liberate analyses from specialized hardware. This last bit? => reproducibility.
  23. 23. Reproducibility! Scientific progress relies on reproducibility of analysis. (Aristotle, Nature, 322 BCE.) “There is no such thing as ‘reproducible science’. There is only ‘science’, and ‘not science.’” – someone on Twitter (Fernando Perez?)
  24. 24. Disclaimer Not a researcher of reproducibility! Merely a practitioner. Please take my points below as an argument and not as research conclusions. (But I’m right.)
  25. 25. Replication vs reproducibility • I will not clearly distinguish. • There are important differences. o Replication: someone using same data, same tools, => same results o Reproduction: someone using different data and/or different tools => same result. • The former is much easier. • The latter is much stronger. • Science is failing even mere replication!? • So, mostly I will talk about how we make our analyses replicable.
  26. 26. My usual intro: We practice open science! Everything discussed here: • Code: github.com/ged-lab/ ; BSD license • Blog: http://ivory.idyll.org/blog (‘titus brown blog’) • Twitter: @ctitusbrown • Grants on Lab Web site: http://ged.msu.edu/research.html • Preprints available. Everything is > 80% reproducible.
  28. 28. My lab & the diginorm paper. • All our code was already on github; • Much of our data analysis was already in the cloud; • Our figures were already made in IPython Notebook • Our paper was already in LaTeX
  29. 29. IPython Notebook: data + code => notebook
  30. 30. My lab & the diginorm paper. • All our code was already on github; • Much of our data analysis was already in the cloud; • Our figures were already made in IPython Notebook • Our paper was already in LaTeX …why not push a bit more and make it easily reproducible? This involved writing a tutorial. And that’s it.
  31. 31. To reproduce our paper:

    git clone <khmer> && python setup.py install
    git clone <pipeline>
    cd pipeline
    wget <data> && tar xzf <data>
    make && cd ../notebook && make
    cd ../ && make
  32. 32. Now standard in lab -- Our papers now have: • Source hosted on github; • Data hosted there or on AWS; • Long running data analysis => ‘make’ • Graphing and data digestion => IPython Notebook (also in github). (Qingpeng Zhang)
  33. 33. Research process: generate new results; encode in Makefile => summarize in IPython Notebook => discuss, explore => push to github.
  34. 34. Literate graphing & interactive exploration
  35. 35. The process • We start with pipeline reproducibility. • Baked into lab culture; the default is “use git; write scripts”. Community of practice! • Use standard open source approaches, so OSS developers learn it easily. • Enables easy collaboration w/in lab. • Valuable learning tool!
  36. 36. Growing & refining the process • Now moving to Ubuntu Long-Term Support + install instructions. • Everything is as automated as is convenient. • Students expected to communicate with me in IPython Notebooks. • Trying to avoid building (or even using) new repro tools. • Avoid maintenance burden as much as possible.
  37. 37. 1. Use standard OS; provide install instructions • Provide install and execution instructions for the Ubuntu Long-Term Support release 14.04: supported through 2017 and beyond. • Avoid pre-configured virtual machines! They: o Lock you into specific cloud homes. o Challenge remixability and extensibility.
  38. 38. 2. Automate • Literate graphing now easy with knitr and IPython Notebook. • Build automation with make, or whatever. To first order, it does not matter what tools you use. • Explicit is better than implicit. Make it easy to understand what you’re doing and how to extend it.
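To first order it really does not matter what you use — here is a minimal, explicit build driver in plain Python doing make's job for a two-step pipeline. The file names and the second command are placeholders, not our actual pipeline:

    # A make-alike in a few lines: each step declares inputs, outputs,
    # and a command, and reruns only when an output is missing or older
    # than an input. Explicit is better than implicit.
    import os
    import subprocess

    STEPS = [
        (["reads.fq"], ["reads.keep.fq"], "normalize-by-median.py reads.fq"),
        (["reads.keep.fq"], ["assembly.fa"], "run-assembler.sh reads.keep.fq"),
    ]

    def stale(inputs, outputs):
        if not all(os.path.exists(o) for o in outputs):
            return True
        oldest_output = min(os.path.getmtime(o) for o in outputs)
        return any(os.path.getmtime(i) > oldest_output for i in inputs)

    for inputs, outputs, command in STEPS:
        if stale(inputs, outputs):
            print("running:", command)
            subprocess.check_call(command, shell=True)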
  39. 39. k-mer counting paper (Ubuntu 14.04, git, make, IPython Notebook, LaTeX)
  40. 40. Time from publication of KAnalyze to our 100% reproducible re-evaluation? ~8 hours.
  41. 41. 3. Protocols, not pipelines. STOP HIDING THE ANALYSIS STEPS.
  42. 42. Write down what you’re doing… https://khmer-protocols.readthedocs.org/
  43. 43. …and add automated end-to-end tests. c.f. “literate ReSTing”
  44. 44. 4. Drive sustainable software development with use cases.
  45. 45. …that are explicit…
  46. 46. …versioned…
  47. 47. …and automated.
  48. 48. 5. Invest in automated, reproducible workflows

    Genome:
                 Reference   Quality Filtered   Diginorm   Partition Reinflation
    Velvet       -           80.90              83.64      84.57
    IDBA         90.96       91.38              90.52      88.80
    SPAdes       90.42       90.35              89.57      90.02

    Mis-assembled Contig Length:
                 Reference   Quality Filtered   Diginorm   Partition Reinflation
    Velvet       -           52071358           44730449   45381867
    IDBA         21777032    20807513           17159671   18684159
    SPAdes       28238787    21506019           14247392   18851571

    (Kalamazoo metagenome protocol run on mock data from Shakya et al., 2013. Also! Tip o’ the hat to Michael Barton, nucleotid.es)
  49. 49. Automation enables super fun paper reviews! • “What a nice new transcriptome assembler! Interesting how it doesn’t perform that well on my 10 test data sets.” • “Hey, so you make these claims, but I ran your code, and…” • “Fun fact! Your source code has a syntax error in it – even Perl has standards! You’re still sure that’s the script you used?” • “Here – use our evaluation pipeline, since you clearly need something better.” The Brown Lab: taking passive aggression to a whole new level!
  50. 50. Myths of reproducible research (Opinions from personal experience.)
  51. 51. Myth 1: Partial reproducibility is hard. “Here’s my script.” => Methods More generally, • Many scientists cannot replicate any part of their analysis without a lot of manual work. • Automating this is a win for reasons that have nothing to do with reproducibility… efficiency! See: Software Carpentry.
  52. 52. Myth 2: Incomplete reproducibility is useless. Paraphrase: “We can’t possibly reproduce the experimental data exactly, so we shouldn’t bother with anything else, either.” (An analogous argument applies to software testing & code coverage.) • …I really have a hard time arguing the paraphrase honestly… • Being able to reanalyze your raw data? Interesting. • Knowing how you made your figures? Really useful.
  53. 53. Myth 3: We need new platforms • Techies always want to build something (which is fun!) but don’t want to do science (which is hard!) • We probably do need new platforms, but stop thinking that building them is, by itself, a service to science. • Platforms need to be use-driven. Seriously. • If you write good software for scientific inquiry and make it easy to use reproducibly, that will drive virtuous practice.
  54. 54. Myth 4: Virtual Machine reproducibility is an end solution. • Good start! Better than nothing! But: • Limits understanding & reuse. • Limits remixing: often cannot install other software! • “Chinese Room” argument: could be just a lookup table. …what about Docker?
  55. 55. Myth 5: We can use GUIs for reproducible research (OK, this is partly just to make people think ;) • Almost all data analysis takes place within a larger pipeline; the GUI must consume the entire pipeline in order to be reproducible. • IFF the GUI wraps the command line, that’s a decent compromise (e.g. Galaxy), but it handicaps researchers using novel approaches. • By the time it’s in a GUI, it’s no longer research. But it can be useful for research…
  56. 56. Our current efforts? • Semantic versioning of our own code: stable command-line interface. • Writing easy-to-teach tutorials and protocols for common analysis pipelines. • Automate ‘em for testing purposes. • Encourage their use, inclusion, and adaptation by others.
  57. 57. Literate testing • Our shell-command tutorials for bioinformatics can now be executed in an automated fashion – commands are extracted automatically into shell scripts. • See: github.com/ged-lab/literate-resting/. • Tremendously improves peace of mind and confidence moving forward! (Leigh Sheneman)
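The extraction idea is simple enough to sketch. The real tool lives at github.com/ged-lab/literate-resting; this Python sketch only shows the concept of pulling commands out of a reStructuredText tutorial's literal blocks into a runnable shell script:

    # Sketch of "literate ReSTing": extract the shell commands from a
    # .rst tutorial's "::" literal blocks so the tutorial itself can be
    # run as an end-to-end acceptance test.
    import sys

    def extract_commands(lines):
        in_block = False
        for line in lines:
            if line.rstrip().endswith("::"):
                in_block = True           # a literal block starts here
            elif in_block:
                if line.startswith("   "):
                    cmd = line.strip()
                    if cmd and not cmd.startswith("#"):
                        yield cmd         # indented line = a command
                elif line.strip():
                    in_block = False      # dedented text ends the block

    with open(sys.argv[1]) as tutorial:
        print("#!/bin/bash")
        print("set -e  # stop at the first failing command")
        for cmd in extract_commands(tutorial):
            print(cmd)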
  58. 58. Doing things right => #awesomesauce: • Protocols in English for running analyses in the cloud • Literate ReSTing => shell scripts • Tool competitions • Benchmarking • Education • Acceptance tests
  59. 59. What bits should people adopt? • Version control! • Literate graphing - IPython Notebook/knitr! • Automated “build” from data => results! • Make data available as early in your pipeline as possible.
  60. 60. Our approaches -- • We are not doing anything particularly neat on the computational side... No “magic sauce.” • Much of our effort is now driven by sheer utility: o Automation reduces our maintenance burden. o Extensibility makes revisions much easier! o Explicit instructions are great for training. • Some effort needed at the beginning, but once practices are established, a “virtuous cycle” takes over.
  61. 61. New science vs reproducibility • Nobody would care that we were doing things reproducibly if our science wasn’t decent. • Make sure students realize that faffing about on infrastructure isn’t science. • Research is about doing science. Reproducibility (like other good practices) is much easier to proselytize if you can link it to progress in science.
  62. 62. Is there a reproducibility crisis? • Mina Bissell: maybe, but science is hard and we should not overly focus on replicating published results vs doing new research. Bissell, 2013. • “But we can’t even get the software in the first place!” Collberg et al., 2014. Computational science should be the easiest thing to replicate… but it’s not!?
  63. 63. “Replication debt” • Can we borrow the idea of “technical debt” from software engineering? • Semi-independent replication after an initial exploratory phase, followed by articulation of protocols and independent replication. (Image from blog.crisp.se)
  64. 64. “Replication debt” • Semi-independent replication after an initial exploratory phase, followed by articulation of protocols and independent replication. • Public acknowledgement of debt is important. (Image from blog.crisp.se)
  65. 65. Biology & sequence analysis is in a perfect place for reproducibility We are lucky! A good opportunity! • Big Data: laptops are too small; • Excel doesn’t scale any more; • Few tools in use; most of them are $$ or UNIX; • Little in the way of entrenched research practice;
  66. 66. Thanks! Talk will soon be on slideshare: slideshare.net/c.titus.brown E-mail or tweet me: ctb@msu.edu @ctitusbrown Talk at ANU, 3:30pm today
