Sequencing: the start of most analysis. People = unmanaged data: data in the wrong place, duplicated, nobody can find anything. Systems issues: backups/security, capacity planning?
Life sciences big data use cases
Big data and Life Sciences
Wellcome Trust Sanger Institute
The Sanger Institute
Funded by the Wellcome Trust.
• The largest research charity in the world.
• ~700 employees.
• Based at the Genome Campus in Hinxton.
Large-scale genomic research.
• Sequenced 1/3 of the human genome
(the largest single contributor).
• Large-scale sequencing with an impact
on human and animal health.
Data is freely available.
• Websites, ftp, direct database access.
• Some restrictions for potentially
identifiable data.
• Scientific computing systems architects.
[Slide figure: raw sequencing read fragments shown as text]
250 million × 75-108 base fragments.
~1 TByte / day / machine.
Human genome: 3 Gbases.
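A quick back-of-the-envelope check of those figures (illustrative only; the ~100 base read length is an assumed mid-point of the 75-108 range quoted above):

```python
# Rough sanity check of the per-run numbers above. The read length is an
# assumed mid-point; the other figures come from the slide.
reads_per_run = 250e6        # 250 million fragments
read_length = 100            # assume ~100 bases (slide says 75-108)
genome_size = 3e9            # human genome, ~3 Gbases

bases_per_run = reads_per_run * read_length
coverage = bases_per_run / genome_size

print(f"~{bases_per_run / 1e9:.0f} Gbases per run")        # ~25 Gbases
print(f"~{coverage:.0f}x coverage of one human genome")    # ~8x
```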
Cost of sequencing halves every 12 months.
• Wrong side of Moore's Law.
The Human genome project:
• 13 years.
• 23 labs.
• $500 Million.
A Human genome today:
• 3 days.
• 1 machine.
Trend will continue:
• $1000 genome is probable within 2 years.
• Informatics not included.
The scary graph
Peak yearly capillary sequencing: 30 Gbases.
Current weekly sequencing: 7-10 Tbases.
Data doubling time: 4-6 months.
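To put that doubling time next to Moore's Law, a small illustrative comparison (the 18-24 month transistor doubling figures are my assumption, not from the slide):

```python
# Annual growth factor implied by a given doubling time (in months).
def yearly_growth(doubling_months):
    return 2 ** (12 / doubling_months)

# Sequencing data: doubling every 4-6 months (from the slide).
print(f"data, 4 month doubling:   {yearly_growth(4):.1f}x per year")   # ~8x
print(f"data, 6 month doubling:   {yearly_growth(6):.1f}x per year")   # ~4x
# Moore's Law style doubling, assumed at 18-24 months.
print(f"Moore, 18 month doubling: {yearly_growth(18):.1f}x per year")  # ~1.6x
print(f"Moore, 24 month doubling: {yearly_growth(24):.1f}x per year")  # ~1.4x
```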
Sequencing data flow.
[Diagram: sequencing data flow — raw data (10 TB), unstructured data (flat files), structured data, variation data]
A Sequencing Centre Today
• Generic x86_64 cluster.
• (16,000 cores)
• ~1 TB per day per sequencer.
• (15 PB disk)
• (Lustre + NFS)
Metadata driven data management
• Only keep our important files.
• Catalogue them, so we can find them!
• Keep the number of copies we want, and no more.
• (iRODS, in-house LIMS).
A solved problem; we know how to do this.
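A minimal sketch of the idea, assuming nothing about the real system (which is built on iRODS and an in-house LIMS); the file paths and metadata keys here are hypothetical:

```python
# Toy metadata-driven catalogue: register only the files we care about,
# make them findable by metadata, and track a target replica count.
# Purely illustrative; not the Sanger implementation.

catalogue = {}  # path -> {"meta": {...}, "replicas": int, "target": int}

def register(path, meta, target_replicas=2):
    """Catalogue an important file; anything not catalogued can be deleted."""
    catalogue[path] = {"meta": meta, "replicas": 1, "target": target_replicas}

def find(**query):
    """Find files by metadata, e.g. find(study='X', filetype='cram')."""
    return [p for p, entry in catalogue.items()
            if all(entry["meta"].get(k) == v for k, v in query.items())]

def replication_actions():
    """Which files need extra copies (positive) or have too many (negative)."""
    return {p: entry["target"] - entry["replicas"]
            for p, entry in catalogue.items()
            if entry["target"] != entry["replicas"]}

register("/seq/run42/lane1.cram", {"study": "X", "filetype": "cram"})
print(find(study="X"))         # ['/seq/run42/lane1.cram']
print(replication_actions())   # {'/seq/run42/lane1.cram': 1}
```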
Proper Big Data
We want to compute across all the data.
• Sequencing data (of course).
• Patient records, treatment and outcomes.
• Cancer: tie in genetics, patient outcomes and treatments.
• Pharma: high failure rate due to genetic factors in drug response.
• Infectious disease epidemiology.
• Rare genetic diseases.
Many genetic effects are small
• Million-member cohorts to get a good signal-to-noise ratio.
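An illustrative power calculation of why that is (the effect sizes are assumptions; the 5e-8 genome-wide significance threshold is the usual GWAS convention):

```python
# Rough sample size needed to detect a standardised effect d at
# significance alpha with the given power: n ~ (z_alpha + z_power)^2 / d^2.
# Effect sizes below are assumptions for illustration.
from statistics import NormalDist

def required_n(effect, alpha=5e-8, power=0.8):
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_p = NormalDist().inv_cdf(power)
    return (z_a + z_p) ** 2 / effect ** 2

for d in (0.1, 0.03, 0.01):
    print(f"effect {d}: roughly {required_n(d):,.0f} samples")
```

Halving the effect size quadruples the required cohort, which is what pushes studies of many small effects toward million-member cohorts.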
Translation: Genomics of drug
sensitivity in Cancer
BRAF inhibitors in malignant melanoma.
[Images: pre-treatment vs. after BRAF inhibitor, 15 weeks of treatment]
BRAF mutation positive ✔
70% response rate vs 10% for standard chemotherapy.
Slide from Mathew Garnett (CGP).
Current Data Archives
EBI ERA / NCBI SRA store the results of all sequencing.
• Public data availability: a good thing.
• 1.6 Pbases
• Archives are “dark”.
• You can put data in, but you can't
do anything with it.
• In order to analyse the data, you
need to download it all.
• 100s of Tbytes
Situation replicated at local
Institute level too.
• e.g. how does CRI get hold of their
data currently held at Sanger?
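An illustrative sense of why "download it all" hurts (the link speeds and the 300 TB figure are my assumptions, in the "100s of TBytes" range above):

```python
# Rough transfer time for a few hundred TBytes out of a "dark" archive,
# assuming the link runs flat out (in practice it won't).
def transfer_days(tbytes, gbit_per_s):
    seconds = tbytes * 1e12 * 8 / (gbit_per_s * 1e9)
    return seconds / 86400

for link_gbit in (1, 10):
    print(f"300 TB over {link_gbit} Gbit/s: ~{transfer_days(300, link_gbit):.0f} days")
```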
Global Alliance for sharing genomic and clinical data
• 70 research institutes & hospitals (including Sanger, Broad, EBI, BGI,
Cancer Research UK)
Million cancer genome warehouse
• (UC Berkeley)
To the Cloud!
[Diagram: multiple institutes (Institute A, Institute B, ...) sharing data via the cloud]
Code & Algorithms
• Integer-heavy, not FP-heavy.
• Single-threaded.
• Large memory footprints.
• Interpreted languages.
Not a good fit for future computing architectures.
Expensive to run on public clouds.
• Memory footprint leads to unused cores.
Out of scope for a data talk, but still an important point.
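A worked illustration of the "unused cores" point (the node shape and per-job memory below are assumptions, not figures from the talk):

```python
# Single-threaded but memory-hungry jobs exhaust RAM long before cores.
# Node shape and job size are assumptions for illustration.
cores_per_node = 32
ram_per_node_gb = 256
ram_per_job_gb = 64           # one single-threaded, large-memory job

jobs_per_node = min(cores_per_node, ram_per_node_gb // ram_per_job_gb)
idle_cores = cores_per_node - jobs_per_node
print(f"{jobs_per_node} jobs fit; {idle_cores} of {cores_per_node} cores sit idle")
```

On a public cloud you still pay for those idle cores, which is what makes these workloads expensive there.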
[Architecture diagram: CPU nodes, a global file system, an object store, fast and slow networks, static and dynamic nodes]
A VM is just a VM, right?
• Clouds are supposed to be interchangeable.
• Nobody wants to re-write a pipeline
when they move clouds.
• Low level: AWS S3, OpenStack.
• High level: data management layer (e.g. iRODS)?
• Do we need more standards?!
• First person to make one that
actually works, wins.
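One way the "low level" option plays out: many object stores expose an S3-compatible API (OpenStack Swift has an S3 compatibility layer), so pipeline code can stay the same and only the endpoint and credentials change when moving clouds. A minimal sketch using boto3; the endpoint, bucket, and object names are placeholders:

```python
# Same object-store code against AWS S3 or any S3-compatible store;
# only the endpoint (and credentials) differ. Names below are placeholders.
import boto3

def object_store(endpoint_url=None):
    # endpoint_url=None -> AWS S3 proper; otherwise an S3-compatible store.
    return boto3.client("s3", endpoint_url=endpoint_url)

s3 = object_store("https://objectstore.example.org")
s3.upload_file("lane1.cram", "sequencing-data", "run42/lane1.cram")
s3.download_file("sequencing-data", "run42/lane1.cram", "lane1.copy.cram")
```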
Data still has to get from our instruments
to the Cloud.
• Lots of products out there for wide-area data transfer.
• We are currently using all of them(!)
Network bandwidth still a problem.
• Research institutes have fast data networks.
• What about your GP's surgery?
• UDT / UDR
• rsync / ssh
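For the rsync/ssh end of that list, a minimal sketch of pushing a finished run folder to a staging host (host name and paths are hypothetical; UDT/UDR-based tools follow the same pattern with a different transport underneath):

```python
# Push a completed run folder to a staging area over ssh with rsync.
# Host and paths are hypothetical.
import subprocess

def push_run(run_dir, dest="staging.example.org:/incoming/"):
    cmd = ["rsync", "-a", "--partial", "--checksum", run_dir, dest]
    subprocess.run(cmd, check=True)

push_run("/instruments/hiseq01/run42/")
```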
Unlikely that data archives are going to
allow anonymous access.
• Who are you?
Federated identity providers.
• Is everyone signed up to the same federation?
• Does it include the right mix of cross-national co-
• Does your favourite bit of software support it?
Janet Moonshot
• Theory: anonymised data can be stored and
accessed without jumping through hoops.
• Practice: Risk of re-identification. Becomes
easier the more data you have.
• Medical records are hard to anonymise
and still be useful.
• Medical consent process adds more
restrictions above data-protection law.
• Limits data use & access even if anonymised.
Controlled data access?
• No ad-hoc analysis.
• Access via restricted API only (“trusted …”).
Policy development ongoing.
• Cross-jurisdiction for added fun.
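A toy illustration of what "access via restricted API only, no ad-hoc analysis" can look like (entirely hypothetical, not any real archive's interface): callers get aggregates, never row-level records, and small groups are suppressed to reduce re-identification risk.

```python
# Toy "controlled access" layer: aggregate queries only, with small-count
# suppression. Entirely illustrative; records below are made up.
MIN_GROUP = 5

records = [
    {"variant": "BRAF_V600E", "outcome": "response"},
    {"variant": "BRAF_V600E", "outcome": "response"},
    {"variant": "BRAF_V600E", "outcome": "no_response"},
    # ... imagine many more records held behind the API
]

def count(**filters):
    """Return a count matching the filters, suppressed if the group is small."""
    n = sum(all(r.get(k) == v for k, v in filters.items()) for r in records)
    return n if n >= MIN_GROUP else f"fewer than {MIN_GROUP}"

print(count(variant="BRAF_V600E", outcome="response"))  # suppressed: group too small
```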
We know where we want to get to.
• No shortage of vision.
There are lots of interesting tools and technologies out there.
• Getting them to work coherently together will be a challenge.
• Prototyping efforts are underway.
• Need to leverage expertise and experience in other fields.
Not simply technical issues:
• Significant policy issues need to be worked out.
• We have to bring the public along.
• James Beal
• Helen Brimmer
• Pete Clapham
Global Alliance whitepaper:
Million Cancer Genome Warehouse whitepaper