Next generation genomics: Petascale data in the life sciences
Keynote presentation at OGF 28.

The year 2000 saw the release of "The" human genome, the product of the combined sequencing effort of the whole planet. In 2010, single institutions are sequencing thousands of genomes a year, producing petabytes of data. Furthermore, many of the large-scale sequencing projects are based around international collaboration and consortia. The talk will explore how Grid and Cloud technologies are being used to share genomics data around the planet, revolutionizing life science research.

Next generation genomics: Petascale data in the life sciences Presentation Transcript

  • 1. Next Generation Genomics: Petascale data in the life sciences
    • Guy Coates
    • 2. Wellcome Trust Sanger Institute
    • 3. [email_address]
  • 4. Outline
    • DNA Sequencing and Informatics
    • 5. Managing Data
    • 6. Sharing Data
    • 7. Adventures in the Cloud
  • 8. The Sanger Institute
    • Funded by Wellcome Trust.
      • 2nd largest research charity in the world.
      • 9. ~700 employees.
      • 10. Based in Hinxton Genome Campus, Cambridge, UK.
    • Large scale genomic research.
      • Sequenced 1/3 of the human genome (largest single contributor).
      • 11. We have active cancer, malaria, pathogen and genomic variation / human health studies.
    • All data is made publicly available.
      • Websites, ftp, direct database access, programmatic APIs.
  • 12. DNA sequencing
  • 13. Next-generation Sequencing Life sciences is drowning in data from our new sequencing machines. Traditional sequencing:
      • 96 sequencing reactions carried out per run.
    Next-generation sequencing:
      • 52 Million reactions per run.
    Machines are cheap(ish) and small.
      • Small labs can afford one.
      • 14. Big labs can afford lots of them.
  • 15. Economic Trends:
    • The cost of sequencing halves every 12 months.
      • cf Moore's Law
    • The Human genome project:
      • 13 years.
      • 16. 23 labs.
      • 17. $500 Million.
    • A Human genome today:
      • 3 days.
      • 18. 1 machine.
      • 19. $10,000.
      • 20. Large centres are now doing studies with 1000s and 10,000s of genomes.
    • Changes in sequencing technology are going to continue this trend.
      • “Next-next” generation sequencers are on their way.
      • 21. $500 genome is probable within 5 years.
  • 22. Output Trends
    • Our peak “old generation” sequencing:
      • August 2007: 3.5 Gbases/month.
    • Current output:
      • Jan 2010: 4 Tbases/month.
    • 1000x increase in our sequencing output.
      • In August 2007, the total size of GenBank was 200 Gbases.
    • Improvements in chemistry continue to increase the output of machines.
  • 23. The scary graph [chart: sequencing output over time; annotations mark instrument upgrades and the peak yearly capillary sequencing rate]
  • 24. Managing Growth
    • We have exponential growth in storage and compute.
      • Storage /compute doubles every 12 months.
        • 2009 ~7 PB raw
    • Gigabase of sequence ≠ Gigabyte of storage.
      • 16 bytes per base for sequence data.
      • 25. Intermediate analyses typically need 10x the disk space of the raw data.
    • Moore's law will not save us.
      • Transistor/disk density: T_d = 18 months
      • 26. Sequencing cost: T_d = 12 months (a quick projection of the resulting gap is sketched below).
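
    A back-of-the-envelope sketch of the point above, in Python (the only inputs are the two doubling times quoted on the slide; everything else is illustrative): if sequencing output doubles every 12 months but storage/compute density only doubles every ~18 months, the shortfall compounds year on year.

      # Toy projection: relative growth for two different doubling times.
      def growth(years, doubling_time_months):
          """Relative capacity after `years` for a given doubling time."""
          return 2 ** (years * 12 / doubling_time_months)

      for years in (1, 3, 5, 10):
          seq = growth(years, 12)    # sequencing output per pound
          disk = growth(years, 18)   # disk/transistor density per pound
          print(f"{years:2d} yr: sequencing x{seq:7.0f}  storage x{disk:6.0f}  "
                f"shortfall x{seq / disk:5.1f}")
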
  • 27. Sequencing Informatics
  • 28. DNA Sequencing TCTTTATTTTAGCTGGACCAGACCAATTTTGAGGAAAGGATACAGACAGCGCCTG AAGGTATGTTCATGTACATTGTTTAGTTGAAGAGAGAAATTCATATTATTAATTA TGGTGGCTAATGCCTGTAATCCCAACTATTTGGGAGGCCAAGATGAGAGGATTGC ATAAAAAAGTTAGCTGGGAATGGTAGTGCATGCTTGTATTCCCAGCTACTCAGGAGGCTG TGCACTCCAGCTTGGGTGACACAG CAACCCTCTCTCTCTAAAAAAAAAAAAAAAAAGG AAATAATCAGTTTCCTAAGATTTTTTTCCTGAAAAATACACATTTGGTTTCA ATGAAGTAAATCG ATTTGCTTTCAAAACCTTTATATTTGAATACAAATGTACTCC 250 Million * 75-108 Base fragments Human Genome (3GBases)
  • 29. Alignment
    • Find the best match of fragments to a known genome / genomes.
      • “grep” for DNA sequences.
      • 30. Use more sophisticated algorithms that can do fuzzy matching.
        • Real DNA has Insertions, deletions and mutations.
        • 31. Typical algorithms are maq, bwa, ssaha, blast.
    • Look for differences
      • Single base pair differences (SNP).
      • 32. Larger insertions/deletions/mutations.
    • Typical experiment:
      • Compare cancer cell genomes with healthy ones.
    Reference: ...TTTGCTGAAACCCAAGTGACGCCATCCAGCGTGACCACTGCATTTTTCTCGGTCATCACCAGCATTCTC.... Query: CAAGTGACGCCATCCAGCGTGACCACTGCATTTTTCT A GGTCATCACCAGCA
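
    A deliberately naive Python sketch of the matching described above (nothing like the indexed, indel-aware tools named on the slide, but it shows the idea): slide the read along the reference, keep the placement with the fewest mismatches, and report mismatching bases as candidate SNPs. The reference and query strings are the ones shown on the slide.

      def best_hit(reference, read):
          """Brute force: try every offset, keep the lowest-mismatch placement."""
          best = (len(read) + 1, -1)
          for i in range(len(reference) - len(read) + 1):
              window = reference[i:i + len(read)]
              mismatches = sum(1 for a, b in zip(window, read) if a != b)
              best = min(best, (mismatches, i))
          return best   # (mismatch count, offset)

      ref  = "TTTGCTGAAACCCAAGTGACGCCATCCAGCGTGACCACTGCATTTTTCTCGGTCATCACCAGCATTCTC"
      read = "CAAGTGACGCCATCCAGCGTGACCACTGCATTTTTCTAGGTCATCACCAGCA"

      mismatches, offset = best_hit(ref, read)
      print(f"best placement at offset {offset} with {mismatches} mismatch(es)")
      for j, (a, b) in enumerate(zip(ref[offset:offset + len(read)], read)):
          if a != b:
              print(f"candidate SNP at reference position {offset + j}: {a} -> {b}")
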
  • 33. Assembly
    • Assemble fragments into a complete genome.
      • Typical experiment: collect reference genome for a new species.
    • “De-novo” assembly.
      • Assemble fragment with no external data.
      • 34. Harder than it looks.
        • Non uniform coverage, low depth, non-unique sequence (repeats).
    • Alignment based assembly.
      • Align fragments to a related genome.
      • 35. Starting scaffold which can then be refined.
        • Eg H. neanderthalensis is being assembled against a H. sapiens sequence.
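
    To make the "harder than it looks" point concrete, here is a toy greedy assembler in Python (purely illustrative; real assemblers must also cope with sequencing errors, uneven coverage and repeats): repeatedly merge the pair of fragments with the longest exact suffix/prefix overlap.

      def overlap(a, b):
          """Length of the longest suffix of a that is also a prefix of b."""
          for n in range(min(len(a), len(b)), 0, -1):
              if a.endswith(b[:n]):
                  return n
          return 0

      def greedy_assemble(fragments):
          frags = list(fragments)
          while len(frags) > 1:
              best_n, best_i, best_j = 0, None, None
              for i, a in enumerate(frags):
                  for j, b in enumerate(frags):
                      if i != j and overlap(a, b) > best_n:
                          best_n, best_i, best_j = overlap(a, b), i, j
              if best_n == 0:        # nothing overlaps: left with separate contigs
                  break
              merged = frags[best_i] + frags[best_j][best_n:]
              frags = [f for k, f in enumerate(frags) if k not in (best_i, best_j)]
              frags.append(merged)
          return frags

      reads = ["GTGACGCCATCC", "CCATCCAGCGTG", "AGCGTGACCACT"]
      print(greedy_assemble(reads))   # -> ['GTGACGCCATCCAGCGTGACCACT']
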
  • 36. Cancer Genomes
    • Cancer is a disease caused by abnormalities in a cell's genome.
  • 37. Mutation Details
    • Lung carcinoma genome
      • Nature 2010 463; 184-90.
    • 22,910 mutations
    • 38. 58 rearrangements
    • 39. 334 copy number segments
  • 40. Analysing Cancer Genomes
      Cancer genomes contain a lot of genetic damage.
      • Many of the mutations in cancer are incidental.
      • 41. Initial mutation disrupts the normal DNA repair/replication processes.
      • 42. Corruption spreads through the rest of the genome.
    • Today: Find the “driver” mutations amongst the thousands of “passengers”.
      • Identifying the driver mutations will give us new targets for therapies.
    • Tomorrow: Analyse the cancer genome of every patient in the clinic.
      • Variations in a patient's and a cancer's genetic makeup play a major role in how effective a particular drug will be.
      • 43. Clinicians will use this information to tailor therapies.
  • 44. International Cancer Genome Project
    • Many cancer mutations are rare.
      • Low signal-to-noise ratio.
    • How do we find the rare but important mutations?
      • Sequence lots of cancer genomes.
    • International Cancer Genome Project.
      • Consortia of sequencing and cancer research centres in 10 countries.
    • Aim of the consortia.
      • Complete genomic analysis of 50 different tumor types. (50,000 genomes).
  • 45. Past Collaborations [diagram: several sequencing centres each sending data to a central sequencing centre + DCC]
  • 46. Future Collaborations [diagram: sequencing centres sharing data via federated access] Collaborations are short term: 18 months-3 years.
  • 47. Genomics Data [diagram: data per genome, ranging from unstructured flat files used by sequencing informatics specialists to structured databases used by clinical researchers and non-informaticians: intensities / raw data (2 TB), alignments (200 GB), sequence + quality data (500 GB), variation data (1 GB), individual features (3 MB)]
  • 48. Where can grid technologies help us?
    • Managing data.
    • 49. Sharing data.
    • 50. Making our software resources available.
  • 51. Managing Data
  • 52. Bulk Data [diagram: the same data-per-genome pyramid, highlighting the unstructured flat-file end handled by sequencing informatics specialists: intensities / raw data (2 TB), alignments (200 GB), sequence + quality data (500 GB), variation data (1 GB), individual features (3 MB)]
  • 53. Bulk Data Management
    • We thought we were really good at it.
      • All samples that come through the sequencing lab are bar-coded and tracked (Laboratory Information Systems).
      • 54. Sequencing machines fed into an automated analysis pipeline.
      • 55. All the data was tracked, analysed and archived appropriately.
    • Strict meta-data controls.
      • Experiments do not start in the wet lab until the investigator has specified all the required data-privacy and archiving requirements.
        • Anonymised data -> straight into the archive.
        • 56. Identifiable data -> private/controlled archives.
        • 57. Some data held back until journal publication.
  • 58. [diagram: sequencers Seq 1 … Seq 38 feed a 500 TB staging area; data is pulled into the compute farm analysis/QC pipeline (alignment/assembly) and lands in the final repository (Oracle), ~100 TB/yr]
  • 59. It turns out we were looking in the wrong place
    • We had been focused on the sequencing pipeline.
      • For many investigators, data coming off the end of the sequencing pipeline is where they start .
      • 60. Investigators take the mass of finished sequence data out of the archives, onto our compute farms and “do stuff”.
    • Huge explosion of data and disk use all over the institute.
      • We had no idea what people were doing with their data.
  • 61. [diagram: the same pipeline, but data is also pulled from the final repository onto compute-farm disk and mixed with collaborators' / 3rd-party sequencing data; only the LIMS-managed portion is tracked, the rest is unmanaged]
  • 62. Accidents waiting to happen... From: <User A> (who left 12 months ago): “I find the <project> directory is removed. The original directory is /scratch/<User B> (who left 6 months ago) ... where is it? If this problem cannot be solved, I am afraid that <project> cannot be released.”
  • 63. An idea whose time had come
      Forward-thinking groups had hacked up file-tracking systems for their unstructured data.
      • They could not keep track of where their results were.
      • 64. Problem exacerbated with student turnover (summer students, PhD students on rotation).
    • Big wins with little effort.
      • Disk space usage dropped by 2/3.
        • Lots of individuals keeping copies of the same data set “so I know where it is”.
      • Team leaders are happy that their data is where they think it is.
        • Important stuff is on filesystems that are backed up etc.
    • But:
      • Systems are ad-hoc, quick hacks.
      • 65. We want an institute wide, standardised system.
        • Invest in people to maintain/develop it.
  • 66. iRODS
    • iRODS: Integrated Rule-Oriented Data System.
    • 67. Produced by DICE (Data Intensive Cyber Environments) groups at U. North Carolina, Chapel Hill.
    • 68. Successor to SRB.
  • 69. iRODS architecture [diagram: an ICAT catalogue database and a rule engine that implements policies sit behind iRODS servers holding data on disk and in databases; users connect through WebDAV, icommands or FUSE]
  • 70. Basic Features
    • Catalogue:
      • Puts data on disk and keeps a record of where it is.
      • 71. Add query-able metadata to files.
    • Rules engine.
      • “Do things” to files based on file data and metadata.
        • Eg move data between fast/archival storage.
      • Implement policies.
        • Experiment A data should be publicly viewable, but experiment B is restricted to certain users until 6 months after deposition.
    • Efficient.
      • Copes with PB of data and 100,000M+ files.
      • 72. Fast parallel data transfers across local and wide area network links.
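
    Not iRODS itself, but a minimal Python/sqlite sketch of the two catalogue ideas listed above: record where each file physically lives, and attach query-able metadata to it. All names and paths here are made up.

      import sqlite3

      db = sqlite3.connect(":memory:")
      db.executescript("""
          CREATE TABLE files (id INTEGER PRIMARY KEY, logical_name TEXT, physical_path TEXT);
          CREATE TABLE meta  (file_id INTEGER, attr TEXT, value TEXT);
      """)

      def register(logical_name, physical_path, **metadata):
          """Record a file's location and any attribute/value metadata."""
          cur = db.execute("INSERT INTO files (logical_name, physical_path) VALUES (?, ?)",
                           (logical_name, physical_path))
          for attr, value in metadata.items():
              db.execute("INSERT INTO meta VALUES (?, ?, ?)", (cur.lastrowid, attr, value))

      register("run42/lane3.bam", "/warehouse/vol7/a9f3.bam",
               project="cancer_genome", access="public")

      # "Where are the files for project X?" -- the question the ad-hoc systems answered.
      print(db.execute("""
          SELECT f.logical_name, f.physical_path FROM files f
          JOIN meta m ON m.file_id = f.id
          WHERE m.attr = 'project' AND m.value = 'cancer_genome'
      """).fetchall())
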
  • 73. Advanced Features
    • Extensible
      • Link the system out to external services.
        • Eg external databases holding metadata, external authentication systems.
    • Federated
      • Physically and logically separated iRODS installs can be federated.
      • 74. Allows user at institute A to seamlessly access data at institute B in a controlled manner.
      • 75. Supports replication. Useful for disaster recovery/backup scenarios.
    • Policy enforcements
      • Enforces data sharing / data privacy rules.
  • 76. What are we doing with it?
    • Piloting it for internal use.
      • Help groups keep track of their data.
      • 77. Move files between different storage pools.
        • Fast scratch space ↔ warehouse disk ↔ Offsite DR centre.
      • Link metadata back to our LIMS/tracking databases.
    • We need to share data with other institutions.
      • Public data is easy: FTP/http.
      • 78. Controlled data is hard:
      • 79. Encrypt files and place on private FTP dropboxes.
      • 80. Cumbersome to manage and insecure.
    • Proof of concept to use iRODS to provide controlled access to datasets.
      • Will we get buy-in from the community?
  • 81. Sharing data
  • 82. Structured Data [diagram: the same data-per-genome pyramid, highlighting the structured-database end used by clinical researchers and non-informaticians: intensities / raw data (2 TB), alignments (200 GB), sequence + quality data (500 GB), variation data (1 GB), individual features (3 MB)]
  • 83. Raw Genomes are not useful TCCTCTCTTTATTTTAGCTGGACCAGACCAATTTTGAGGAAAGGATACAGACAGCGCCTG GAATTGTCAGACATATACCAAATCCCTTCTGTTGATTCTGCTGACAATCTATCTGAAAAA TTGGAAAGGTATGTTCATGTACATTGTTTAGTTGAAGAGAGAAATTCATATTATTAATTA TTTAGAGAAGAGAAAGCAAACATATTATAAGTTTAATTCTTATATTTAAAAATAGGAGCC AAGTATGGTGGCTAATGCCTGTAATCCCAACTATTTGGGAGGCCAAGATGAGAGGATTGC TTGAGACCAGGAGTTTGATACCAGCCTGGGCAACATAGCAAGATGTTATCTCTACACAAA ATAAAAAAGTTAGCTGGGAATGGTAGTGCATGCTTGTATTCCCAGCTACTCAGGAGGCTG AAGCAGGAGGGTTACTTGAGCCCAGGAGTTTGAGGTTGCAGTGAGCTATGATTGTGCCAC TGCACTCCAGCTTGGGTGACACAGCAAAACCCTCTCTCTCTAAAAAAAAAAAAAAAAAGG AACATCTCATTTTCACACTGAAATGTTGACTGAAATCATTAAACAATAAAATCATAAAAG AAAAATAATCAGTTTCCTAAGAAATGATTTTTTTTCCTGAAAAATACACATTTGGTTTCA GAGAATTTGTCTTATTAGAGACCATGAGATGGATTTTGTGAAAACTAAAGTAACACCATT ATGAAGTAAATCGTGTATATTTGCTTTCAAAACCTTTATATTTGAATACAAATGTACTCC
    • Genomes need to be annotated.
      • Locations of genes.
      • 84. Functions of genes.
      • 85. Relationships between genes (homologues, functional groups)
      • 86. Links to the medical/scientific literature
  • 87. Ensembl
    • Ensembl is a system for genome annotation.
    • 88. Compute Pipeline.
      • Take a raw genome and run it through a compute pipeline to find genes and other features of interest.
      • 89. Ensembl at Sanger/EBI provides automated analysis for 51 vertebrate genomes.
    • Data visualisation.
      • www.ensembl.org
      • 90. Provides web interface to genomic data.
      • 91. 10k visitors / 126k page views per day.
    • Data access and mining.
      • OO Perl / Java APIs.
      • 92. Direct SQL access.
      • 93. Bulk data download.
      • 94. BioMart, DAS
    • Software is Open Source (apache license).
    • 95. Data is free for download.
  • 96. Example annotation
  • 97. Example annotation
  • 98. Example annotation
  • 99. Sharing data with Web Services
  • 100. Distributed Annotation Service
    • Labs may have data that they want to view with Ensembl.
      • Put data into context with everything else.
    • DAS is a web-services protocol that allows sharing of annotation information.
      • Developed at Cold Spring Harbor Lab and extended by Sanger Institute and others.
    • DAS Information;
      • metadata:
        • Description of the dataset, features supported.
        • 101. This can be optionally registered/validated at das.registry.org.
      • Data:
        • Object type.
        • 102. Co-ordinates (typically genome species/version and position).
        • 103. Stylesheet; (how should the data be displayed, eg histogram, color gradient).
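
    A hedged sketch of what a DAS client request looks like (Python; the server name and data source below are placeholders, and the element/attribute names follow the DAS 1.5x XML layout, so adjust for the source you actually query): features for a genomic segment come back as XML over plain HTTP.

      import urllib.request
      import xml.etree.ElementTree as ET

      SERVER = "http://das.example.org/das"     # placeholder DAS server
      SOURCE = "hsapiens_reference"             # placeholder data source name

      url = f"{SERVER}/{SOURCE}/features?segment=1:1000000,1050000"
      with urllib.request.urlopen(url) as response:
          tree = ET.parse(response)

      # FEATURE elements carry the annotation objects for the requested segment.
      for feature in tree.iter("FEATURE"):
          print(feature.get("id"), feature.get("label"))
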
  • 104. DAS community
    • Currently ~600 DAS providers spread across 45 institutions and 18 countries.
    Removal of non-responsive services
  • 108. BioMART
      Provides query based access to structured data.
      • Collaboration between CSHL, European Bioinformatics Institute and Ontario Institute for Cancer Research.
    • “Tell me the function of genes that have substitution mutations in breast-cancer samples.”
    • 109. Answering it requires queries across multiple databases.
      • Mutations are stored in COSMIC, Cancer Genome database.
      • 110. Gene function is stored in Ensembl.
    • BioMart provides a unified entry point to these databases.
  • 111. BioMart architecture [diagram: source data in Oracle, CSV, MySQL and XML is transformed/imported into MARTs that share common IDs and so can be federated; queries come in through GUI, Perl, SOAP/REST and Java interfaces]
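
    A hedged sketch of a BioMart query against the martservice REST endpoint (Python; the dataset, filter and attribute names are illustrative, and the exact names come from each mart's own configuration): the query is an XML document and the result streams back as TSV.

      import urllib.parse
      import urllib.request

      QUERY = """<?xml version="1.0" encoding="UTF-8"?>
      <!DOCTYPE Query>
      <Query virtualSchemaName="default" formatter="TSV" header="0" uniqueRows="1">
        <Dataset name="hsapiens_gene_ensembl" interface="default">
          <Filter name="chromosome_name" value="17"/>
          <Attribute name="ensembl_gene_id"/>
          <Attribute name="external_gene_id"/>
        </Dataset>
      </Query>"""

      url = ("http://www.biomart.org/biomart/martservice?"
             + urllib.parse.urlencode({"query": QUERY}))
      with urllib.request.urlopen(url) as response:
          print(response.read().decode()[:500])   # first few TSV rows
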
  • 117. Clouds
  • 118. Disclaimer
    • This talk will use Amazon/EC2.
    • 119. We tested it.
    • 120. It is not a commercial endorsement.
    • 121. Other cloud providers exist.
    • 122. It is shorthand; feel free to insert your favourite cloud provider instead.
  • 123. Cloud-ifying Ensembl
    • Website
      • LAMP stack.
      • 124. Ports easily to Amazon.
      • 125. Provides virtual world-wide co-lo.
    • Compute Pipeline
      • HPTC workload
      • 126. Compute pipeline is a harder problem.
  • 127. Expanding markets
    • There are going to be lots of new genomes that need annotating.
      • Sequencers moving into small labs, clinical settings.
      • 128. Limited informatics / systems experience.
        • Typically postdocs/PhD students who have a “real” job to do.
    • We have already done all the hard work on installing the software and tuning it.
      • Can we package up the pipeline, put it in the cloud?
    • Goal: End user should simply be able to upload their data, insert their credit-card number, and press “GO” .
  • 129. Gene Finding [diagram: evidence combined along the DNA to predict genes: HMM prediction, alignment with known proteins, alignment with fragments recovered in vivo, and alignment with other genes and other species]
  • 130. Compute Pipeline
    • Architecture:
      • OO perl pipeline manager.
      • 131. Core algorithms are C.
      • 132. 200 auxiliary binaries.
    • Workflow:
      • Investigator describes analysis at high level.
      • 133. Pipeline manager splits the analysis into parallel chunks.
        • Typically 50k-100k jobs.
      • Sorts out the dependencies and then submits jobs to a DRM.
        • Typically LSF or SGE.
      • Pipeline state and results are stored in a mysql database.
    • Workflow is embarrassingly parallel.
      • Integer, not floating point.
      • 134. 64 bit memory address is nice, but not required.
        • 64 bit file access is required.
      • Single threaded jobs.
      • 135. Very IO intensive.
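
    A skeletal Python illustration of the workflow above (not the Ensembl pipeline manager; the chunk size, worker script name and LSF options are assumptions): split the genome into chunks, record each chunk in a state database, and submit one LSF job array covering all of them.

      import sqlite3
      import subprocess

      CHUNK = 1_000_000                 # bases per job (illustrative)
      GENOME_LENGTH = 3_200_000_000     # roughly a human genome

      state = sqlite3.connect("pipeline_state.db")
      state.execute("CREATE TABLE IF NOT EXISTS jobs (chunk_start INTEGER, status TEXT)")

      chunks = range(0, GENOME_LENGTH, CHUNK)
      for start in chunks:
          state.execute("INSERT INTO jobs VALUES (?, ?)", (start, "submitted"))
      state.commit()

      # One LSF job array covers every chunk; each element reads $LSB_JOBINDEX at
      # run time to work out which slice of the genome it owns.
      subprocess.run(
          ["bsub", "-J", f"annotate[1-{len(chunks)}]", "-o", "logs/annotate.%I.out",
           "analyse_chunk.sh"],          # hypothetical per-chunk worker script
          check=True)
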
  • 136. Running the pipeline in practice
    • Requires a significant amount of domain knowledge.
    • 137. Software install is complicated.
      • Lots of perl modules and dependencies.
      • 138. Apache wrangling if you want to run a website.
    • Need a well tuned compute cluster.
      • Pipeline takes ~500 CPU days for a moderate genome.
        • Ensembl chewed up 160k CPU days last year.
      • Code is IO bound in a number of places.
      • 139. Typically need a high performance filesystem.
        • Lustre, GPFS, Isilon, Ibrix etc.
      • Need large mysql database.
        • 100 GB to TB-scale MySQL instances, with a very high query load generated from the cluster.
  • 140. How does this port to cloud environments?
    • Creating the software stack / machine image.
      • Creating images with software is reasonably straightforward.
      • 141. Getting queuing system etc running requires jumping through some hoops.
    • Mysql databases
      • Lots of best practice on how to do that on EC2.
    • But it took time, even for experienced systems people.
      • (You will not be firing your system-administrators just yet!).
  • 142. Moving data is hard
    • Moving large amounts of data across the public internet is hard.
      • Commonly used tools are not suited to wide-area networks.
        • There is a reason gridFTP/FDT/Aspera exist.
    • Data transfer rates (gridFTP/FDT):
      • Cambridge -> EC2 East coast: 12 Mbytes/s (96 Mbits/s)
      • 143. Cambridge -> EC2 Dublin: 25 Mbytes/s (200 Mbits/s)
      • 144. 11 hours to move 1TB to Dublin.
      • 145. 23 hours to move 1 TB to East coast.
    • What speed should we get?
      • Once we leave JANET (UK academic network) finding out what the connectivity is and what we should expect is almost impossible.
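
    The slide's hours-per-terabyte figures follow directly from the measured rates; a short Python check (1 TB taken as 10^12 bytes):

      TB = 1e12
      for route, mbytes_per_s in [("Cambridge -> EC2 East coast", 12),
                                  ("Cambridge -> EC2 Dublin", 25)]:
          hours = TB / (mbytes_per_s * 1e6) / 3600
          print(f"{route}: {hours:.0f} hours per TB")
      # ~23 hours at 12 Mbytes/s, ~11 hours at 25 Mbytes/s -- matching the slide.
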
  • 146. IO Architectures [diagram: the HPC model, CPUs on a fat network sharing a POSIX global filesystem under a batch scheduler, vs the cloud model, CPUs with local storage on a thin network using Hadoop/S3]
  • 147. Storage / IO is hard
    • No viable global filesystems on EC2.
    • 148. NFS has poor scaling at the best of times.
      • EC2 has poor inter-node networking: with more than 8 NFS clients, everything stops.
    • “The cloud way”: store data in S3.
      • Web based object store.
        • Get, put, delete objects.
      • Not POSIX.
        • Code needs re-writing / forking.
      • Limitations; cannot store objects > 5GB.
    • Nasty-hacks:
      • Subcloud; commercial product that allows you to run a POSIX filesystem on top of S3.
        • Interesting performance, and you are paying by the hour...
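
    A minimal boto3 sketch of “the cloud way” described above (boto3 post-dates this talk, and the bucket/key names are placeholders): objects are put and got whole, with none of the POSIX semantics existing pipeline code expects. Today's SDKs also use multipart transfers, which is how the 5 GB single-PUT limit is worked around.

      import boto3

      s3 = boto3.client("s3")

      # Put a finished alignment into the object store...
      s3.upload_file("sample01.bam", "example-genomics-bucket",
                     "alignments/sample01.bam")

      # ...and fetch it back elsewhere. No open()/seek() on partial data,
      # which is why IO-bound pipeline code needs re-writing or forking.
      s3.download_file("example-genomics-bucket", "alignments/sample01.bam",
                       "/tmp/sample01.bam")
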
  • 149. Going forward
  • 150. Cloud vs HPTC
    • Re-writing apps to use S3 or hadoop/HDFS is a real hurdle.
      • Not an issue for new apps.
      • 151. But new apps do not exist in isolation.
      • 152. Barrier for entry is much lower for file-systems.
    • Am I being a reactionary old fart?
      • 15 years ago, clusters of PCs were not considered real supercomputers.
      • 153. ...then beowulf took over the world.
    • Big difference: porting applications between the two architectures was easy.
      • MPI/PVM etc.
    • Will the market provide “traditional” compute clusters in the cloud?
  • 154. Networking
    • How do we improve data transfers across the public internet?
      • CERN approach; don't.
      • 155. Dedicated networking has been put in between CERN and the T1 centres who get all of the CERN data.
    • Our collaborations are different.
      • We have relatively short lived and fluid collaborations. (1-2 years, many institutions).
      • 156. As more labs get sequencers, our potential collaborators also increase.
      • 157. We need good connectivity to everywhere.
  • 158. Can we turn the problem on its head?
      Fixing the internet is not going to be cost effective for us.
    • Amazon fixing the internet may be cost effective for them.
      • Core to their business model.
      • 159. All we need to do is get data into Amazon, and then everyone else can get the data from there.
    • Cloud as virtual co-location site.
      • Mass datastores.
      • 160. Host mirror sites for our web services.
    • Requires us to invest in fast links to Amazon.
      • It changes the business dynamic.
      • 161. We have effectively tied ourselves to a single provider.
    • Expensive mistake if you change your mind, or your provider goes <pop> .
  • 162. Identity management
    • Web services for linking databases together are mature.
      • They are currently all public.
    • There will be demand for restricted services.
      • Patient identifiable data.
    • Our next big challenge.
      • Lots of solutions:
        • openID, shibboleth, aspis, globus etc.
      • Finding consensus will be hard.
      • 163. Culture shock.
  • 164. Acknowledgements
    • Sanger Institute
    • 165. Phil Butcher
    • 166. ISG
      • James Beal
      • 167. Gen-Tao Chiang
      • 168. Pete Clapham
      • 169. Simon Kelley
    • Cancer-genome Project
      • Adam Butler
      • 170. John Teague
    • STFC
      • David Corney
      • 171. Jens Jensen
  • 172. Sites of interest
    • http://www.ensembl.org
    • 173. http://www.sanger.ac.uk/cosmic
    • 174. http://www.biomart.org
    • 175. http://www.biodas.org
    • 176. http://www.icgc.org