2013 duke-talk

  • Goal is to do first-stage data reduction/analysis in less time than it takes to generate the data. Compression => OLC assembly.
  • Lamprey larvae live in stream bottoms for 3-6 years; the parasitic adult moves to the Great Lakes and feeds for 12-20 months. Total life span 5-8 years; ~40 lbs of fish consumed per life as a parasite. 98% of fish in the Great Lakes went away!
  • Diginorm is a subsampling approach that may help assemble highly polymorphic sequences. Observed levels of variation are quite low relative to, e.g., marine free-spawning animals.
    1. Building better genomes, transcriptomes, and metagenomes with improved techniques for de novo assembly -- an easier way to do it. C. Titus Brown, Assistant Professor, CSE, MMG, BEACON, Michigan State University. March 2013. ctb@msu.edu
    2. Acknowledgements. Lab members involved: Adina Howe (w/Tiedje), Jason Pell, Arend Hintze, Rosangela Canino-Koning, Qingpeng Zhang, Elijah Lowe, Likit Preeyanon, Jiarong Guo, Tim Brom, Kanchan Pavangadkar, Eric McDonald. Collaborators: Jim Tiedje, MSU; Erich Schwarz, Caltech / Cornell; Paul Sternberg, Caltech; Robin Gasser, U. Melbourne; Weiming Li. Funding: USDA NIFA; NSF IOS; BEACON.
    3. We practice open science! "Be the change you want." Everything discussed here: Code: github.com/ged-lab/ ; BSD license. Blog: http://ivory.idyll.org/blog ('titus brown blog'). Twitter: @ctitusbrown. Grants on lab web site: http://ged.msu.edu/interests.html. Preprints: on arXiv, q-bio: 'diginorm arxiv'.
    4. Outline: 1. Computational challenges associated with sequencing DNA/RNA from non-model organisms. 2. Three case studies: parasitic nematode genome assembly; lamprey transcriptome assembly; soil metagenome assembly. 3. Future directions.
    5. My interests. I work primarily on organisms of agricultural, evolutionary, or ecological importance, which tend to have poor reference genomes and transcriptomes. Focus on: improving assembly sensitivity to better recover genomic/transcriptomic sequence, often from "weird" samples; scaling sequence assembly approaches so that huge assemblies are possible and big assemblies are straightforward.
    6. There is quite a bit of life left to sequence & assemble. http://pacelab.colorado.edu/
    7. "Weird" biological samples: Single genome -- hard-to-sequence DNA (e.g. GC/AT bias). Transcriptome -- differential expression! High-polymorphism data -- multiple alleles. Whole genome amplification -- often extreme amplification bias (next slide). Metagenome (mixed microbial community) -- differential abundance within community.
    8. Single genome assembly is already challenging --
    9. Once you start sequencing metagenomes…
    10. Shotgun sequencing and coverage. "Coverage" is simply the average number of reads that overlap each true base in the genome. Here, the coverage is ~10 -- just draw a line straight down from the top through all of the reads.
    11. Random sampling => deep sampling needed. Typically 10-100x needed for robust recovery (300 Gbp for human).
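The coverage definition above can be made concrete with a tiny numeric sketch. This is a toy example on a hypothetical 100 bp "genome" with made-up fixed-length reads, not real data; it just shows that per-base coverage is the count of overlapping reads and that average coverage is roughly total read bases divided by genome length:

```python
# Toy illustration of shotgun coverage (hypothetical genome/reads):
genome_length = 100
read_length = 10
# one read starting at every base, clipped at the genome end
reads = [(s, min(s + read_length, genome_length)) for s in range(genome_length)]

# coverage at a base = number of reads overlapping that base
per_base = [0] * genome_length
for start, end in reads:
    for pos in range(start, end):
        per_base[pos] += 1

# average coverage ~ total read bases / genome length (~10 here, minus edge effects)
avg_coverage = sum(per_base) / genome_length
```

With random (rather than evenly spaced) read starts, per-base coverage follows roughly a Poisson distribution around this average, which is why 10-100x is needed for robust recovery.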
    12. Various experimental treatments can also modify coverage distribution. (MD amplified)
    13. Non-normal coverage distributions lead to decreased assembly sensitivity. Many assemblers embed a "coverage model" in their approach. Genome assemblers: abnormally low coverage is erroneous; abnormally high coverage is repetitive sequence. Transcriptome assemblers: isoforms should have the same coverage across the entire isoform. Metagenome assemblers: differing abundances indicate different strains. Is there a different way? (Yes.)
    14. Memory requirements (Velvet/Oases -- est.): Bacterial genome (colony): 1-2 GB. Human genome: 500-1000 GB. Vertebrate mRNA: 100 GB+. Low-complexity metagenome: 100 GB. High-complexity metagenome: 1000 GB++.
    15. Practical memory measurements
    16. K-mer based assemblers scale poorly. Why do big data sets require big machines? Memory usage ~ "real" variation + number of errors. Number of errors ~ size of data set.
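One way around this scaling (used in the Pell et al. PNAS paper cited later) is to store k-mers in a fixed-size probabilistic structure rather than an exact hash table, so memory no longer grows with the number of erroneous k-mers. Below is a minimal, illustrative Bloom filter for k-mer membership; the real khmer data structures are compiled code with different hashing and table layouts, and the sizes here are arbitrary:

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter for k-mer membership (fixed memory, small
    false-positive rate, no false negatives). Illustrative only."""
    def __init__(self, size=1_000_000, num_hashes=3):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = bytearray(size // 8 + 1)

    def _positions(self, kmer):
        # derive several bit positions from salted SHA-256 digests
        for i in range(self.num_hashes):
            h = hashlib.sha256(f"{i}:{kmer}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, kmer):
        for p in self._positions(kmer):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, kmer):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(kmer))

def kmers(seq, k=20):
    return (seq[i:i + k] for i in range(len(seq) - k + 1))

bf = BloomFilter()
for km in kmers("ACGTACGTACGTACGTACGTACGT"):
    bf.add(km)
```

The memory footprint is set up front (here ~1 M bits) regardless of how many error-laden k-mers the data set contains; the price is a tunable false-positive rate.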
    17. Why does efficiency matter? It is now cheaper to generate sequence than it is to analyze it computationally! Machine time; (wo)man power/time. More efficient programs allow better exploration of analysis parameters for maximizing sensitivity. Better or more sensitive bioinformatic approaches can be developed on top of more efficient theory.
    18. Approach: digital normalization (a computational version of library normalization). Suppose you have a dilution factor of A (10) to B (1). To get 10x of B you need to get 100x of A. Overkill!! This 100x will consume disk space and, because of errors, memory. We can discard it for you…
    19. Digital normalization
    25. Digital normalization approach. A digital analog to cDNA library normalization, diginorm: is single-pass (looks at each read only once); does not "collect" the majority of errors; keeps all low-coverage reads; smooths out coverage of regions.
    26. Coverage before digital normalization: (MD amplified)
    27. Coverage after digital normalization: normalizes coverage; discards redundancy; eliminates the majority of errors; scales assembly dramatically. Assembly is 98% identical.
    28. Digital normalization approach. A digital analog to cDNA library normalization, diginorm is a read prefiltering approach that: is single-pass (looks at each read only once); does not "collect" the majority of errors; keeps all low-coverage reads; smooths out coverage of regions.
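The single-pass rule described on these slides can be sketched in a few lines: keep a read only if the median count of its k-mers is still below a coverage cutoff, and update k-mer counts only for reads that are kept. This is a simplified sketch -- the real khmer implementation uses a probabilistic counting structure rather than this exact dictionary, and the k and cutoff values below are illustrative:

```python
from collections import defaultdict

def median(values):
    s = sorted(values)
    return s[len(s) // 2]

def normalize_by_median(reads, k=20, cutoff=20):
    """Sketch of digital normalization: single pass, keep a read only if
    its median k-mer count is below `cutoff`, count k-mers of kept reads only.
    Because discarded reads never touch the counts, their errors are never
    "collected", and low-coverage (novel) reads are always retained."""
    counts = defaultdict(int)
    kept = []
    for read in reads:
        kmer_list = [read[i:i + k] for i in range(len(read) - k + 1)]
        if not kmer_list:
            continue  # read shorter than k
        if median(counts[km] for km in kmer_list) < cutoff:
            kept.append(read)
            for km in kmer_list:
                counts[km] += 1
    return kept
```

On 50 copies of the same read with cutoff 5, only the first 5 survive; a read covering new sequence always starts at median count 0 and is kept, which is why low-abundance species and rare transcripts are preserved.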
    29. Contig assembly is significantly more efficient and now scales with underlying genome size. Transcriptomes, microbial genomes (incl. MDA), and most metagenomes can be assembled in under 50 GB of RAM, with identical or improved results.
    30. Some diginorm examples: 1. Assembly of the H. contortus parasitic nematode genome, a "high polymorphism/variable coverage" problem. 2. Reference-free assembly of the lamprey (P. marinus) transcriptome, a "big assembly" problem. 3. Assembly of two Midwest soil metagenomes, Iowa corn and Iowa prairie -- the "impossible" assembly problem.
    31. 1. The H. contortus problem. A sheep parasite; ~350 Mbp genome. Sequenced DNA from 6 individuals after whole genome amplification; estimated 10% heterozygosity (!?). Significant bacterial contamination. (w/Robin Gasser, Paul Sternberg, and Erich Schwarz)
    32. H. contortus life cycle. Refs.: Nikolaou and Gasser (2006), Int. J. Parasitol. 36, 859-868; Prichard and Geary (2008), Nature 452, 157-158.
    33. The power of next-gen sequencing: get 180x coverage ... and then watch your assemblies never finish. Libraries built and sequenced: 300-nt inserts, 2x75-nt paired-end reads; 500-nt inserts, 2x75- and 2x100-nt paired-end reads; 2-kb, 5-kb, and 10-kb inserts, 2x49-nt paired-end reads. Nothing would assemble at all until filtered for basic quality. Filtering let the ≤500-nt inserts assemble in a mere week, but the 2+ kb inserts would not assemble even then. (Erich Schwarz)
    34. Assembly after digital normalization. Diginorm readily enabled assembly of a 404 Mbp genome with N50 of 15.6 kb. Post-processing with GapCloser and SOAPdenovo scaffolding led to a final assembly of 453 Mbp with N50 of 34.2 kb. CEGMA estimates a 73-94% complete genome. Diginorm helped by: suppressing high polymorphism, esp. in repeats; eliminating 95% of sequencing errors; "squashing" coverage variation from whole genome amplification and bacterial contamination.
    36. Next steps with H. contortus: publish the genome paper; identification of antibiotic targets for treatment in agricultural settings (animal husbandry); serving as a "reference approach" for a wide variety of parasitic nematodes, many of which have similar genomic issues.
    37. 2. Lamprey transcriptome assembly. The lamprey genome is draft quality: low contiguity, missing ~30%. No closely related reference. Full-length and exon-level gene predictions are 50-75% reliable, and rarely capture UTRs / isoforms. De novo assembly, if we do it well, can identify novel genes, novel exons, and fast-evolving genes. Somatic recombination: how much are we missing, really?
    38. Sea lamprey in the Great Lakes: non-native; parasite of medium to large fishes; caused populations of host fishes to crash. (Li Lab / Y-W C-D)
    39. Transcriptome results. Started with 5.1 billion reads from 50 different tissues. Digital normalization discarded 98.7% of them as redundant, leaving 87m (!). These assembled into more than 100,000 transcripts > 1 kb. Against known full-length transcripts: 98.7% agreement (accuracy); 99.7% included (contiguity).
    40. Evaluating the de novo lamprey transcriptome. Estimate genome is ~70% complete (gene complement). Majority of genome-annotated gene sets recovered by mRNAseq assembly. Note method to recover transcript families w/o genome…

        Assembly analysis            Gene families   Gene families in genome   Fraction in genome
        mRNAseq assembly             72003           51632                     71.7%
        reference gene set           8523            8134                      95.4%
        combined                     73773           53137                     72.0%
        intersection                 6753            6753                      100.0%
        only in mRNAseq assembly     65250           44884                     68.8%
        only in reference gene set   1770            1500                      84.7%

        (Includes transcripts > 300 bp)
    41. Next steps with lamprey. A far more complete transcriptome than the one predicted from the genome! Enabling studies in: basal vertebrate phylogeny; biliary atresia; evolutionary origin of brown fat (previously thought to be mammalian-only!); pheromonal response in adults.
    42. 3. Soil metagenome assembly. Observation: 99% of microbes cannot easily be cultured in the lab ("the great plate count anomaly"). Many reasons why you can't or don't want to culture: syntrophic relationships; niche-specificity or unknown physiology; dormant microbes; abundance within communities. Single-cell sequencing & shotgun metagenomics are two common ways to investigate microbial communities.
    43. SAMPLING LOCATIONS
    44. Investigating soil microbial ecology. What ecosystem-level functions are present, and how do microbes do them? How does agricultural soil differ from native soil? How does soil respond to climate perturbation? Questions that are not easy to answer without shotgun sequencing: What kind of strain-level heterogeneity is present in the population? What does the phage and viral population look like? What species are where?
    45. A "Grand Challenge" dataset (DOE/JGI). Total: 1,846 Gbp of soil metagenome sequence across eight samples (Iowa continuous corn; Iowa native prairie; Kansas cultivated corn; Kansas native prairie; Wisconsin continuous corn; Wisconsin native prairie; Wisconsin restored prairie; Wisconsin switchgrass), sequenced on GAII and HiSeq. For scale: MetaHIT (Qin et al., 2011), 578 Gbp; rumen (Hess et al., 2011), 268 Gbp (111 Gbp k-mer filtered); NCBI nr database, 37 Gbp.
    46. "Whoa, that's a lot of data…" (Chart: estimated sequencing required with Illumina, in bp, for E. coli genome, human genome, vertebrate transcriptome, human gut, marine, and soil; the scale runs to ~5e14 bp.)
    47. Additional approach for metagenomes: data partitioning (a computational version of cell sorting). Split reads into "bins" belonging to different source species. Can do this based almost entirely on connectivity of sequences. "Divide and conquer." A memory-efficient implementation helps to scale assembly. Pell et al., 2012, PNAS.
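The connectivity idea on this slide -- reads that share sequence belong in the same bin -- can be sketched with a union-find over shared k-mers. This toy version keeps an exact k-mer-to-read index purely for illustration; the actual Pell et al. implementation instead traverses a de Bruijn graph stored in a Bloom filter to stay memory-efficient:

```python
from collections import defaultdict

def partition_reads(reads, k=20):
    """Toy connectivity-based partitioning: any two reads sharing a k-mer
    end up in the same bin. Returns a list of bins of read indices."""
    parent = list(range(len(reads)))

    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    first_seen = {}                   # k-mer -> first read index containing it
    for idx, read in enumerate(reads):
        for i in range(len(read) - k + 1):
            km = read[i:i + k]
            if km in first_seen:
                union(idx, first_seen[km])
            else:
                first_seen[km] = idx

    bins = defaultdict(list)
    for idx in range(len(reads)):
        bins[find(idx)].append(idx)
    return list(bins.values())
```

Reads drawn from different genomes (sharing no k-mers) fall into separate bins, each of which can then be assembled independently -- the "divide and conquer" on the slide.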
    48. Partitioning separates reads by genome. When computationally spiking HMP mock data with one E. coli genome (left) or multiple E. coli strains (right), the majority of partitions contain reads from only a single genome (blue) vs. multi-genome partitions (green). Partitions containing spiked data are indicated with an asterisk. (Adina Howe)
    49. Assembly results for Iowa corn and prairie (2x ~300 Gbp soil metagenomes).

        Total Assembly   Contigs (> 300 bp)   % Reads Assembled   Predicted protein coding
        2.5 bill         4.5 mill             19%                 5.3 mill
        3.5 bill         5.9 mill             22%                 6.8 mill

        Putting it in perspective: total equivalent of ~1200 bacterial genomes; the human genome is ~3 billion bp. (Adina Howe)
    50. Resulting contigs are low coverage. Figure 11: coverage (median basepair) distribution of assembled contigs from soil metagenomes.
    51. Strain variation? Can be measured by read mapping. Of the 5000 most abundant contigs, only 1 has a polymorphism rate > 5%. (Figure: top two allele frequencies vs. position within contig.)
    52. Tentative observations from our soil samples: We need 100x as much data… Much of our sample may consist of phage. Phylogeny varies more than functional predictions. We see little to no strain variation within our samples (not bulk soil -- very small, localized, and low-coverage samples). We may be able to do selective, really deep sequencing and then infer the rest from 16S. Implications for soil aggregate assembly?
    53. Additional projects -- Bacterial symbionts of bone-eating worms, w/Shana Goffredi: mixed sample => low-complexity metagenome; amplified DNA => uneven coverage; ~50% complete => 97% complete assembly. Molgulid ascidians, w/Billie Swalla and Lionel Christiaen: high-heterozygosity transcriptomes and genomes. Chick genome & transcriptome assembly, w/Hans Cheng, Jerry Dodgson, and Wes Warren: hard-to-sequence/assemble microchromosomes.
    54. Digital normalization is enabling. This is a very powerful technique for "fixing" weird samples (…and all samples are weird). A number of real-world projects are using diginorm successfully (~6-10 in my lab; ~70-80? overall). A diginorm-derived procedure is now a recommended part of the Trinity mRNAseq assembler. Diginorm is: 1. very computationally efficient; 2. always "cheaper" than running an assembler in the first place; 3. almost always improves results (** in our hands ;)
    55. Next steps. Computation is a limiting factor in dealing with NGS: assembly is slow, compute-intensive, and sensitive to parameters; mapping and SNP calling are parameter-sensitive; exploring hypotheses and finding the right balance between sensitivity and specificity often requires "parameter sweeps" -- executing one or more pieces of software with multiple parameters. Can we attack this problem with lossy compression?
    56. Digital normalization retains information, while discarding data and errors.
    57. Lossy compression http://en.wikipedia.org/wiki/JPEG
    62. ~2 GB – 2 TB of single-chassis RAM. (Diagram: raw data (~10-100 GB) => analysis => "information" (~1 GB) => database & integration.) Can we use lossy compression approaches to make downstream analysis faster and better? (Yes.) We are embarking upon a project to build theory around digital normalization so that we can use it as a prefilter for mapping and variant calling. We have also implemented error correction, which enables easy quantification of references by reads. Streaming, low-memory, distributable prefilters for all of these.
    63. Error correction for metagenomic and mRNAseq data. Most error correction algorithms rely on an assumption of uniform coverage; ours does not. Ours is also streaming. (Jason Pell)
    64. Error correction via graph alignment
    65. Where next? Assembly in the cloud! Study and formalize paired-end/mate-pair handling in diginorm. Web interface to run and evaluate assemblies. New methods to evaluate and improve assemblies, including a "meta-assembly" approach for metagenomes. Fast and efficient error correction of sequencing data: can also address assembly of high-polymorphism sequence, allelic mapping bias, and others; can also enable fast/efficient storage and search of nucleic acid databases.
    66. Four+ papers on our work, soon. 2012 PNAS, Pell et al., pmid 22847406 (partitioning). Submitted, Brown et al., arXiv:1203.4802 (digital normalization). Submitted, Howe et al., arXiv:1212.0159 (artifact removal from Illumina metagenomes). Submitted, Howe et al., arXiv:1212.2832 (assembling large, complex environmental metagenomes). In preparation, Zhang et al. (efficient k-mer counting).
    67. Thanks! Everything discussed here: Code: github.com/ged-lab/ ; BSD license. Blog: http://ivory.idyll.org/blog ('titus brown blog'). Twitter: @ctitusbrown. Grants on lab web site: http://ged.msu.edu/interests.html. Preprints: on arXiv, q-bio: 'diginorm arxiv'.
