The challenges of non-model sequencing
• Missing or low-quality reference genome.
• Evolutionarily distant from well-studied model organisms.
• Most extant computational tools focus on model
organisms –
o Assume low polymorphism (internal variation)
o Assume reference genome
o Assume somewhat reliable functional annotation
o Assume access to significant compute infrastructure
…and cannot easily or directly be used on critters of interest.
Shotgun sequencing
analysis goals:
• Assembly (what is the text?)
o Produces new genomes & transcriptomes.
o Gene discovery for enzymes, drug targets, etc.
• Counting (how many copies of each book?)
o Measure gene expression levels, protein-DNA
interactions
• Variant calling (how does each edition vary?)
o Discover genetic variation: genotyping, linkage
studies…
o Allele-specific expression analysis.
Assembly
It was the best of times, it was the wor
, it was the worst of times, it was the
isdom, it was the age of foolishness
mes, it was the age of wisdom, it was th
It was the best of times, it was the worst of times, it was
the age of wisdom, it was the age of foolishness
…but for lots and lots of fragments!
K-mers give you an
implicit alignment
CCGATTGCACTGGACCGATGCACGGTACCGTATAGCC
CATGGACCGATTGCACTGGACCGATGCACGGTACCG
CATGGACCGATTGCACTGGACCGATGCACGGACCG
(with no accounting for mismatches or indels)
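To make the "implicit alignment" concrete, here is a minimal Python sketch (illustrative only, not khmer code): decompose both reads into k-mers, index one read's k-mers by position, and observe that every shared k-mer implies the same relative offset between the two reads. The k value of 21 is just an example.

def kmers(seq, k=21):
    """Yield each k-mer and its start position in seq."""
    for i in range(len(seq) - k + 1):
        yield seq[i:i + k], i

def shared_kmer_offsets(read_a, read_b, k=21):
    """Return the relative offsets implied by k-mers common to both reads."""
    index = {km: pos for km, pos in kmers(read_a, k)}
    offsets = set()
    for km, pos_b in kmers(read_b, k):
        if km in index:
            offsets.add(index[km] - pos_b)   # a consistent offset => an overlap
    return offsets

read_a = "CCGATTGCACTGGACCGATGCACGGTACCGTATAGCC"
read_b = "CATGGACCGATTGCACTGGACCGATGCACGGTACCG"
print(shared_kmer_offsets(read_a, read_b, k=21))  # one offset: the implicit alignment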
De Bruijn graphs –
assemble on overlaps
J.R. Miller et al. / Genomics (2010)
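In a de Bruijn graph, nodes are (k-1)-mers and each observed k-mer contributes an edge from its prefix to its suffix, so overlapping reads are joined without any explicit pairwise alignment. A minimal Python sketch of the construction (illustrative, not the khmer representation):

from collections import defaultdict

def de_bruijn_graph(reads, k=21):
    """Map each (k-1)-mer prefix to the set of (k-1)-mer suffixes that follow it."""
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])   # edge: prefix -> suffix
    return graph

# Contigs then correspond to unbranched paths through this graph.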
The problem with k-mers
CCGATTGCACTGGACCGATGCACGGTACCGTATAGCC
CATGGACCGATTGCACTCGACCGATGCACGGTACCG
Each sequencing error results in k novel k-mers!
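One way to see this, again as an illustrative Python sketch: compare the k-mer sets of the correct read and of the read carrying a single substituted base. When the error sits at least k-1 bases from either end of the read, the substitution creates k k-mers that appear nowhere in the true sequence (k = 8 here just to keep the example small).

def kmer_set(seq, k=8):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

true_read  = "CATGGACCGATTGCACTGGACCGATGCACGGTACCG"
error_read = "CATGGACCGATTGCACTCGACCGATGCACGGTACCG"  # one substituted base

novel = kmer_set(error_read, k=8) - kmer_set(true_read, k=8)
print(len(novel))  # 8 == k: a single error created k brand-new k-mers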
Data set size and cost
• $1000 gets you ~100m “reads”, or about 10-40 GB of data, in about a week.
• > 1000 labs doing this regularly.
• Each data set analysis is ~custom.
• Analyses are data intensive and memory intensive.
Efficient data structures &
algorithms
• Efficient online counting of k-mers
• Trimming reads on abundance
• Efficient de Bruijn graph representations
• Read abundance normalization
Shotgun sequencing is massively redundant; can we
eliminate redundancy while retaining information?
Analog: JPEG lossy compression
[Diagram: raw data (~10-100 GB) → compression (~2 GB) → analysis → "information" (~1 GB); many such "information" products feed into database & integration.]
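One concrete answer is digital normalization (the "diginorm" approach of Brown et al., arXiv 1203.4802, cited below): stream through the reads, estimate each read's coverage as the median count of its k-mers seen so far, and discard reads whose estimated coverage already exceeds a cutoff. A minimal Python sketch using a plain dictionary (khmer uses a probabilistic counting structure instead; k and the cutoff are illustrative):

from statistics import median

def digital_normalization(reads, k=20, cutoff=20):
    """Keep a read only if its estimated coverage (median k-mer count) is below cutoff."""
    counts = {}
    for read in reads:
        kmers = [read[i:i + k] for i in range(len(read) - k + 1)]
        if median(counts.get(km, 0) for km in kmers) < cutoff:
            for km in kmers:
                counts[km] = counts.get(km, 0) + 1
            yield read   # keep: this region is not yet well covered
        # else: drop the read -- it adds redundancy, not information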
Sparse collections of k-mers can be
stored efficiently in Bloom filters
Pell et al., 2012, PNAS; doi: 10.1073/pnas.1121464109
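The idea in Pell et al. (2012) is that the set of observed k-mers can be stored in a Bloom filter: membership queries in a small, fixed memory footprint with a tunable false-positive rate, without storing the k-mers themselves. A minimal illustrative sketch in Python (not the khmer implementation; sizes and hash scheme are placeholders):

import hashlib

class BloomFilter:
    def __init__(self, size=10**7, num_hashes=4):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = bytearray(size)          # one byte per "bit", for simplicity

    def _positions(self, kmer):
        for seed in range(self.num_hashes):
            h = hashlib.sha1(f"{seed}:{kmer}".encode()).hexdigest()
            yield int(h, 16) % self.size

    def add(self, kmer):
        for pos in self._positions(kmer):
            self.bits[pos] = 1

    def __contains__(self, kmer):
        # May return a false positive, never a false negative.
        return all(self.bits[pos] for pos in self._positions(kmer))

bf = BloomFilter()
bf.add("CCGATTGCACTGGACCGATG")
print("CCGATTGCACTGGACCGATG" in bf)   # True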
Data structures &
algorithms papers
• “These are not the k-mers you are looking for…”,
Zhang et al., arXiv 1309.2975, in review.
• “Scaling metagenome sequence assembly with
probabilistic de Bruijn graphs”, Pell et al., PNAS 2012.
• “A Reference-Free Algorithm for Computational
Normalization of Shotgun Sequencing Data”, Brown
et al., arXiv 1203.4802, under revision.
Data analysis papers
• “Tackling soil diversity with the assembly of large,
complex metagenomes”, Howe et al., PNAS, 2014.
• Assembling novel ascidian genomes &
transcriptomes, Lowe et al., in prep.
• A de novo lamprey transcriptome from large scale
multi-tissue mRNAseq, Scott et al., in prep.
Lab approach – not
intentional, but working out.
Novel data structures and algorithms → implement at scale → apply to real biological problems.
This leads to good things.
khmer software:
• Efficient online counting of k-mers
• Trimming reads on abundance
• Efficient de Bruijn graph representations
• Read abundance normalization
Current research:
• Streaming algorithms for assembly, variant calling, and error correction
• Cloud assembly protocols
• Efficient graph labeling & exploration
• Data set partitioning approaches
• Assembly-free comparison of data sets
• HMM-guided assembly
• Efficient search for target genes
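At the core of several of these khmer capabilities is approximate, fixed-memory online k-mer counting. A minimal Count-Min-Sketch-style counter in Python (an illustrative sketch, not khmer's implementation; width, depth, and hash scheme are placeholders):

import hashlib

class CountMinSketch:
    """Approximate k-mer counter: fixed memory; counts may be overestimated, never underestimated."""
    def __init__(self, width=10**6, depth=4):
        self.width = width
        self.tables = [[0] * width for _ in range(depth)]

    def _index(self, kmer, row):
        h = hashlib.sha1(f"{row}:{kmer}".encode()).hexdigest()
        return int(h, 16) % self.width

    def add(self, kmer):
        for row, table in enumerate(self.tables):
            table[self._index(kmer, row)] += 1

    def count(self, kmer):
        return min(table[self._index(kmer, row)]
                   for row, table in enumerate(self.tables))

cms = CountMinSketch()
for _ in range(3):
    cms.add("CCGATTGCACTGGACCGATG")
print(cms.count("CCGATTGCACTGGACCGATG"))   # 3 (or slightly more, never less)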
Testing & version control
– the not so secret sauce
• High test coverage - grown over time.
• Stupidity-driven testing – we write tests for bugs after we find them and before we fix them (example below).
• Pull requests & continuous integration – does your
proposed merge break tests?
• Pull requests & code review – does new code meet our minimal coding requirements, etc.?
o Note: spellchecking!!!
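As a hypothetical illustration of the stupidity-driven testing pattern (this is not an actual khmer test): suppose a k-mer counting helper was found to misbehave on reads shorter than k. The regression test gets written first, fails against the buggy code, and then stays in the suite permanently.

def count_kmers(seq, k):
    """Count the k-mers in seq; a read shorter than k has zero k-mers."""
    return max(len(seq) - k + 1, 0)

def test_short_read_has_no_kmers():
    # Regression test for a (hypothetical) bug where reads shorter than k
    # produced a negative k-mer count.
    assert count_kmers("ACGT", k=20) == 0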
On the “novel research” side:
• Novel data structures and algorithms;
• Permit low(er) memory data analysis;
• Liberate analyses from specialized hardware.
[Figure: memory usage over time for an assembly of the complete data set, run entirely within the cloud on an AWS m1.xlarge, ~40 hours. See the PyCon 2014 talk; video and blog post.]
That last bit, liberating analyses from specialized hardware? => reproducibility.
Reproducibility!
Scientific progress relies on reproducibility of
analysis. (Aristotle, Nature, 322 BCE.)
“There is no such thing as ‘reproducible science’.
There is only ‘science’, and ‘not science.’” –
someone on Twitter (Fernando Perez?)
Disclaimer
Not a researcher of reproducibility!
Merely a practitioner.
Please take my points below as an argument
and not as research conclusions.
(But I’m right.)
My usual intro:
We practice open science!
Everything discussed here:
• Code: github.com/ged-lab/ ; BSD license
• Blog: http://ivory.idyll.org/blog (‘titus brown blog’)
• Twitter: @ctitusbrown
• Grants on Lab Web site:
http://ged.msu.edu/research.html
• Preprints available.
Everything is > 80% reproducible.
My lab & the diginorm paper.
• All our code was already on github;
• Much of our data analysis was already in the cloud;
• Our figures were already made in IPython Notebook;
• Our paper was already in LaTeX.
…why not push a bit more and make it easily
reproducible?
This involved writing a tutorial. And that’s it.
To reproduce our paper:
git clone <khmer> && python setup.py install   # install khmer
git clone <pipeline>                           # grab the paper's pipeline repo
cd pipeline
wget <data> && tar xzf <data>                  # fetch and unpack the data
make && cd ../notebook && make                 # rerun analyses, then rebuild the notebooks
cd ../ && make                                 # rebuild the paper
Now standard in lab --
All our papers now have:
• Source hosted on github;
• Data hosted there or on AWS;
• Long-running data analysis => ‘make’;
• Graphing and data digestion => IPython Notebook (also in github).
Qingpeng Zhang
The process
• We start with pipeline reproducibility
• Baked into lab culture; default “use git; write scripts”
Community of practice!
• Use standard open source approaches, so OSS
developers learn it easily.
• Enables easy collaboration w/in lab
• Valuable learning tool!
Growing & refining the
process
• Now moving to Ubuntu Long-Term Support + install
instructions.
• Everything is as automated as is convenient.
• Students expected to communicate with me in IPython
Notebooks.
• Trying to avoid building (or even using) new tools.
• Avoid maintenance burden as much as possible.
1. Use standard OS; provide
install instructions
• Providing install and execution instructions for Ubuntu Long-Term Support release 14.04: supported through 2017 and beyond.
• Avoid pre-configured virtual machines!
o Locks you into specific cloud homes.
o Hinders remixability and extensibility.
2. Automate
• Literate graphing now easy with knitr and IPython
Notebook.
• Build automation with make, or whatever; to first order, it does not matter which tool you use (sketch below).
• Explicit is better than implicit. Make it easy to
understand what you’re doing and how to extend it.
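As one possible shape for this automation, here is a hypothetical Makefile, not taken from our pipeline repositories (the commands normalize-reads, run-assembler, and plot_coverage.py are placeholders, not real tools): each long-running step is a target with explicit inputs and outputs, so the whole analysis rebuilds from raw data with a single make.

# Hypothetical pipeline Makefile: raw reads -> normalized reads -> assembly -> figures.
# (Recipe lines must start with a tab.)
all: figures/coverage.png

reads.keep.fq: reads.fq
	normalize-reads --cutoff 20 reads.fq > reads.keep.fq

assembly.fa: reads.keep.fq
	run-assembler reads.keep.fq -o assembly.fa

figures/coverage.png: assembly.fa plot_coverage.py
	python plot_coverage.py assembly.fa figures/coverage.png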
Myth 1: Partial
reproducibility is hard.
“Here’s my script.” => Methods
More generally,
• Many scientists cannot replicate any part of their
analysis without a lot of manual work.
• Automating this is a win for reasons that have
nothing to do with reproducibility… efficiency!
See: Software Carpentry.
Myth 2: Incomplete
reproducibility is useless
Paraphrase: “We can’t possibly reproduce the
experimental data exactly, so we shouldn’t bother
with anything else, either.”
(Analogous arg re software testing & code coverage.)
• …I really have a hard time arguing the paraphrase
honestly…
• Being able to reanalyze your raw data? Interesting.
• Knowing how you made your figures? Really useful.
Myth 3: We need new
platforms
• Techies always want to build something (which is fun!)
but don’t want to do science (which is hard!)
• We probably do need new platforms, but stop assuming that building them is, by itself, a service to science.
• Platforms need to be use-driven. Seriously.
• If you write good software for scientific inquiry and make it easy to use reproducibly, that will drive the virtuous cycle.
Myth 4. Virtual Machine
reproducibility is an end solution.
• Good start! Better than nothing!
But:
• Limits understanding & reuse.
• Limits remixing: often cannot install other software!
• “Chinese Room” argument: could be just a lookup
table.
Myth 5: We can use GUIs
for reproducible research
(OK, this is partly just to make people think ;)
• Almost all data analysis takes place within a larger pipeline; the GUI must capture the entire pipeline in order to be reproducible.
• IFF the GUI wraps the command line, that's a decent compromise (e.g. Galaxy), but it handicaps researchers using novel approaches.
• By the time it’s in a GUI, it’s no longer research.
Our current efforts?
• Semantic versioning of our own code: stable
command-line interface.
• Writing easy-to-teach tutorials and protocols for
common analysis pipelines.
• Automate ‘em for testing purposes.
• Encourage their use, inclusion, and adaptation by
others.
khmer-protocols:
• Provide standard “cheap”
assembly protocols for the cloud.
• Entirely copy/paste; ~2-6 days
from raw reads to assembly,
annotations, and differential
expression analysis. ~$150 per
data set (on Amazon rental
computers)
• Open, versioned, forkable,
citable….
Pipeline: read cleaning → diginorm → assembly → annotation → RSEM differential expression.
Literate testing
• Our shell-command tutorials for bioinformatics can now be executed in an automated fashion: the commands are extracted automatically into shell scripts (see the sketch below).
• See: github.com/ged-lab/literate-resting/.
• Tremendously improves peace of mind and
confidence moving forward!
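A minimal illustration of the idea (an illustrative sketch, not the actual literate-resting code; the reST literal-block convention assumed here is my own simplification): pull the shell commands out of a tutorial's code blocks and write them into a script that can be run as an acceptance test.

import sys

def extract_commands(tutorial_text):
    """Collect shell commands from the indented literal blocks of a reST tutorial."""
    commands = []
    in_block = False
    for line in tutorial_text.splitlines():
        if line.rstrip().endswith("::"):       # a reST literal block follows '::'
            in_block = True
        elif in_block and line.startswith("   "):
            commands.append(line.strip())      # indented line inside the block
        elif in_block and line.strip():
            in_block = False                   # un-indented text ends the block
    return commands

if __name__ == "__main__":
    text = open(sys.argv[1]).read()
    print("#!/bin/bash\nset -e")               # stop at the first failing command
    print("\n".join(extract_commands(text)))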
Leigh Sheneman
Doing things right
=> #awesomesauce
Protocols in English for running analyses in the cloud + literate reSTing => shell scripts, which then support:
• Tool competitions
• Benchmarking
• Education
• Acceptance tests
Concluding thoughts
• We are not doing anything particularly neat on the
computational side... No “magic sauce.”
• Much of our effort is now driven by sheer utility:
o Automation reduces our maintenance burden.
o Extensibility makes revisions much easier!
o Explicit instructions are good for training.
• Some effort needed at the beginning, but once
practices are established, “virtuous cycle” takes
over.
What bits should people
adopt?
• Version control!
• Literate graphing!
• Automated “build” from data => results!
• Make your data available from as early in your pipeline as possible.
More concluding
thoughts
• Nobody would care that we were doing things
reproducibly if our science wasn’t decent.
• Make sure students realize that faffing about on
infrastructure isn’t science.
• Research is about doing science. Reproducibility
(like other good practices) is much easier to
proselytize if you can link it to progress in science.
Biology & sequence analysis is in a
perfect place for reproducibility
We are lucky! A good opportunity!
• Big Data: laptops are too small;
• Excel doesn’t scale any more;
• Few tools in common use; most are commercial ($$) or command-line UNIX tools;
• Little in the way of entrenched research practice.
Thanks!
Talk is on slideshare: slideshare.net/c.titus.brown
E-mail or tweet me:
ctb@msu.edu
@ctitusbrown
Editor's Notes
A sketch showing the relationship between the number of sequence reads and the number of edges in the graph. Because the underlying genome is fixed in size, the number of edges that come from the underlying genome plateaus as the number of sequence reads increases, once every part of the genome is covered. Conversely, since errors tend to be random and more or less unique, the number of error edges scales linearly with the number of sequence reads. Once coverage is deep enough to clearly distinguish true edges (which come from the underlying genome), they will usually be outnumbered by spurious edges (which arise from errors) by a substantial factor.