ReComp and the Variant Interpretations Case Study

Simple Variant Identification under ReComp control
Jacek Cała, Paolo Missier
Newcastle University, School of Computing Science
Outline
• Motivation – many computational processes, especially Big Data and NGS pipelines, face an output deprecation issue
• updates to input data and tools make current results obsolete
• Test case – Simple Variant Identification
• pipeline-like structure, “small-data” process
• easy to implement and experiment with
• Experiments
• 3 different approaches compared with the baseline, blind re-computation
• provide insight into what selective re-computation can/cannot achieve
• Conclusions
The heavy weight of NGS pipelines
• NGS resequencing pipelines are an important example of Big Data analytics problems
• Important:
• they are at the core of genomic analysis
• Big Data:
• raw sequences for WES analysis amount to 1-20 GB per patient
• for quality purposes, patient samples are usually processed in cohorts of 20-40, i.e. close to 1 TB per cohort
• the time required to process a 24-sample cohort can easily exceed 2 CPU-months
• WES is only a fraction of what WGS analyses require
Tracing change in NGS resequencing
• Although the skeleton of the pipeline remains fairly static, many aspects of NGS processing change continuously
• Changes occur at various points in the pipeline but are mainly two-fold:
• new tools, and improved versions of the existing tools used at various steps of the pipeline
• new and updated reference and annotation data
• It is challenging to assess the impact of these changes on the output of the pipeline
• the cost of rerunning the pipeline for all patients, or even a single cohort, is very high
ReComp
• Aims to find ways to:
• detect and measure the impact of changes in the input data
• allow the computational process to be selectively re-executed
• minimise the cost (runtime, monetary) of the re-execution while maximising the benefit to the user
• One of the first steps: run a part of the NGS pipeline under ReComp control and evaluate the potential benefits
The Simple Variant Identification tool
• Can help classify variants into three categories: RED, GREEN, AMBER
• pathogenic, benign and unknown, respectively
• uses OMIM GeneMap to identify the genes and variants in scope
• uses NCBI ClinVar to classify variant pathogenicity
• SVI can be attached at the very end of an NGS pipeline
• as a simple, short-running process it can serve as a test scenario for ReComp
• SVI –> a mini-pipeline (a sketch of the classification logic follows below)
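To make the RED/GREEN/AMBER rule concrete, here is a minimal Python sketch of how a variant could be classified from GeneMap-derived gene scope and ClinVar significance. The record fields, matching rules and function names are illustrative assumptions, not the actual SVI implementation.

```python
# Illustrative sketch only: field names and matching rules are hypothetical,
# not taken from the actual SVI workflow.

def genes_in_scope(genemap_records, phenotype_terms):
    """Select genes whose GeneMap phenotype annotation mentions the hypothesis."""
    return {
        rec["gene_symbol"]
        for rec in genemap_records
        if any(term.lower() in rec["phenotypes"].lower() for term in phenotype_terms)
    }

def classify_variant(variant, in_scope_genes, clinvar_significance):
    """Return RED (pathogenic), GREEN (benign) or AMBER (unknown) for one variant."""
    if variant["gene_symbol"] not in in_scope_genes:
        return None  # out of scope: not reported at all
    significance = clinvar_significance.get(variant["id"], "").lower()
    if "pathogenic" in significance:
        return "RED"
    if "benign" in significance:
        return "GREEN"
    return "AMBER"
```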
High-level structure of the SVI process
[Figure: the SVI mini-pipeline. Inputs: patient variants (from an NGS pipeline) and a phenotype hypothesis. Reference data: OMIM GeneMap and NCBI ClinVar. The Phenotype-to-genes step produces the genes in scope, Variant selection produces the variants in scope, and Variant classification produces the classified variants output.]
Detailed design of the SVI process
• Implemented as an e-Science Central workflow
• graphical design approach
• provenance tracking
Detailed design of the SVI process
[Figure: the SVI workflow in e-Science Central, showing the Phenotype-to-genes, Variant selection and Variant classification blocks wired to the patient variants, phenotype hypothesis, GeneMap and ClinVar inputs, and producing the classified variants.]
Running SVI under ReComp
• A set of experiments designed to give insight into whether and how ReComp can help with process re-execution:
1. Blind re-computation
2. Partial re-computation
3. Partial re-computation using input difference
4. Partial re-computation with step-by-step impact analysis
• Experiments were run on a set of 16 patients split across 4 different phenotype hypotheses
• Tracking real changes in OMIM GeneMap and NCBI ClinVar
Experiments: Input data set
Phenotype hypothesis                 Variant file   Variant count   File size [MB]
Congenital myasthenic syndrome       MUN0785        26508           35.5
                                     MUN0789        26726           35.8
                                     MUN0978        26921           35.8
                                     MUN1000        27246           36.3
Parkinson's disease                  C0011          23940           38.8
                                     C0059          24983           40.4
                                     C0158          24376           39.4
                                     C0176          24280           39.4
Creutzfeldt-Jakob disease            A1340          23410           38.0
                                     A1356          24801           40.2
                                     A1362          24271           39.2
                                     A1370          24051           38.9
Frontotemporal dementia -            B0307          24052           39.0
amyotrophic lateral sclerosis        C0053          23980           38.8
                                     C0171          24387           39.6
                                     D1049          24473           39.5
Experiments: Reference data sets
• Different rates of change:
• GeneMap changes daily
• ClinVar changes monthly
Database       Version      Record count   File size [MB]
OMIM GeneMap   2016-03-08   13053          2.2
               2016-04-28   15871          2.7
               2016-06-01   15897          2.7
               2016-06-02   15897          2.7
               2016-06-07   15910          2.7
NCBI ClinVar   2015-02      281023         96.7
               2016-02      285041         96.6
               2016-05      290815         96.1
Experiment 1: Establishing the baseline –
blind re-computation
• Simple re-execution of the SVI process triggered by changes in the reference data (either GeneMap or ClinVar)
• Incurs the maximum cost of executing the process
• Blind re-computation is the baseline for the ReComp evaluation
• we want to be more effective than that
Experiment 1: Results
• Running the SVI workflow on one patient sample takes about 17 minutes
• executed on a single-core VM
• could be optimised, but optimisation is out of scope at the moment
• Runtime is consistent across different phenotypes
• Changes of the GeneMap and ClinVar versions have negligible impact on the execution time, e.g.:
GeneMap version            2016-03-08    2016-04-28    2016-06-07
Run time [mm:ss], μ ± σ    17:05 ± 22    17:09 ± 15    17:10 ± 17
Experiment 1: Results
• At 17 min per sample, the SVI implementation has a capacity of only about 84 samples per CPU core per day (24 × 60 min ÷ 17 min ≈ 84.7)
• This may be inadequate considering the daily rate of change of GeneMap
• Our goal is to increase this capacity through smart, selective re-computation
Experiment 2: Partial re-computation
• The SVI workflow is a mini-pipeline with a well-defined structure
• Changes in the reference data affect different parts of the process
• Plan:
• restart the pipeline from different starting points (see the sketch below)
• run only the part affected by the changed data
• measure the savings of partial re-computation compared with the baseline, blind re-computation
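As a simplified illustration of the restart-point idea (not the actual ReComp machinery), the sketch below maps the changed reference dataset to the suffix of the SVI pipeline that must be re-run; the step names follow the workflow blocks shown earlier, and the lookup table is an assumption for illustration.

```python
# Illustrative only: a toy stand-in for how a restart point could be chosen.

# Ordered steps of the SVI mini-pipeline.
PIPELINE_STEPS = ["phenotype_to_genes", "variant_selection", "variant_classification"]

# The first pipeline step that consumes each reference dataset.
FIRST_STEP_USING = {
    "GeneMap": "phenotype_to_genes",       # a GeneMap change invalidates everything downstream
    "ClinVar": "variant_classification",   # a ClinVar change affects only the last step
}

def steps_to_rerun(changed_dataset):
    """Return the suffix of the pipeline that must be re-executed."""
    first = FIRST_STEP_USING[changed_dataset]
    return PIPELINE_STEPS[PIPELINE_STEPS.index(first):]

print(steps_to_rerun("ClinVar"))   # ['variant_classification']
print(steps_to_rerun("GeneMap"))   # ['phenotype_to_genes', 'variant_selection', 'variant_classification']
```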
Experiment 2: Partial re-computation
[Figure: the SVI workflow annotated with the two restart points — a change in GeneMap requires re-running from the Phenotype-to-genes step onward, while a change in ClinVar requires re-running only the Variant classification step.]
Experiment 2: Results
• Running only the part of SVI directly involved in processing the updated data can save some runtime
• Savings depend on:
• the structure of the process
• the point where the changed data are used
• The savings come at the cost of retaining the interim data required for partial re-execution
• the size of this data depends on the phenotype hypothesis and the type of change
• it is in the range of 20–22 MB for GeneMap changes and 2–334 kB for ClinVar changes
GeneMap version     2016-04-28                    2016-06-07
                    Run time [mm:ss]   Savings    Run time [mm:ss]   Savings
μ ± σ               11:51 ± 16         31%        11:50 ± 20         31%

ClinVar version     2016-02                       2016-05
                    Run time [mm:ss]   Savings    Run time [mm:ss]   Savings
μ ± σ               9:51 ± 14          43%        9:50 ± 15          42%
Experiment 3: Partial re-computation using input
difference
• Can we use the difference between two versions of the input data to run the process?
• In general, it depends on the type of process and how the process uses the data
• SVI can use the difference
• The difference is likely to be much smaller than the new version of the data
• Plan:
• calculate the difference between two versions of the reference data –> compute the added, removed and changed record sets (see the sketch below)
• run SVI on each of the three difference sets
• recombine the results
• measure the savings of partial re-computation compared with the baseline, blind re-computation
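A minimal sketch of how the added, removed and changed record sets could be computed for a keyed reference table such as GeneMap or ClinVar; the keyed-dictionary representation and the toy records are assumptions made for illustration, not the format ReComp actually uses.

```python
# Illustrative only: assumes each reference-data version is a dict keyed by a
# stable record identifier (e.g. gene symbol or ClinVar variation ID).

def diff_versions(old, new):
    """Split the change between two versions into added, removed and changed records."""
    old_keys, new_keys = set(old), set(new)
    added   = {k: new[k] for k in new_keys - old_keys}
    removed = {k: old[k] for k in old_keys - new_keys}
    changed = {k: new[k] for k in old_keys & new_keys if old[k] != new[k]}
    return added, removed, changed

# Toy example.
v1 = {"BRCA1": "rec-a", "TTN": "rec-b", "CHRNE": "rec-c"}
v2 = {"BRCA1": "rec-a", "TTN": "rec-b2", "DOK7": "rec-d"}
added, removed, changed = diff_versions(v1, v2)
print(added)    # {'DOK7': 'rec-d'}
print(removed)  # {'CHRNE': 'rec-c'}
print(changed)  # {'TTN': 'rec-b2'}
```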
Experiment 3: Partial re-comp. using diff.
• The difference sets are significantly smaller than the new version of the data,
but:
• the difference is computed as three separate sets: added, removed and changed records
• this requires three separate runs of SVI and then recombination of the results
GeneMap versions        To-version    Difference
from –> to              rec. count    rec. count    Reduction
16-03-08 –> 16-06-07    15910         1458          91%
16-03-08 –> 16-04-28    15871         1386          91%
16-04-28 –> 16-06-01    15897         78            99.5%
16-06-01 –> 16-06-02    15897         2             99.99%
16-06-02 –> 16-06-07    15910         33            99.8%

ClinVar versions        To-version    Difference
from –> to              rec. count    rec. count    Reduction
15-02 –> 16-05          290815        38216         87%
15-02 –> 16-02          285042        35550         88%
16-02 –> 16-05          290815        3322          98.9%
Experiment 3: Results
• Running only the part of SVI directly involved in processing the updated data can save some runtime
• Running that part of SVI on each difference set also saves some runtime
• Yet the total cost of three separate re-executions may outweigh the savings
• In conclusion, this approach has a few weak points:
• running the process on difference sets is not always possible
• running the process on difference sets requires output recombination
• the total runtime may sometimes exceed the runtime of a regular update
                  Run time [mm:ss]
                  Added         Removed       Changed       Total
GeneMap change    11:30 ± 5     11:27 ± 11    11:36 ± 8     34:34 ± 16
ClinVar change    2:29 ± 9      0:37 ± 7      0:44 ± 7      3:50 ± 22
Experiment 4: Partial re-computation with
step-by-step impact analysis
• Insight into the structure of the computational process
+ the ability to calculate difference sets for various types of data
=> step-by-step re-execution
• Plan:
• compute the changes in the intermediate data after each execution step
• stop re-computation when no changes have been detected (see the sketch below)
• measure the savings of partial re-computation compared with the baseline, blind re-computation
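A minimal sketch of the stop-early idea, under the simplifying assumptions that each step consumes only the previous step's output and that the intermediate outputs of the previous run are cached; step and variable names are illustrative, not the ReComp implementation.

```python
# Illustrative only: re-run the pipeline step by step and stop as soon as a step
# produces output identical to the output cached from the previous run.

def rerun_step_by_step(steps, new_input, cached_outputs):
    """steps: ordered list of (name, function); cached_outputs: step name -> previous output."""
    data = new_input
    for name, step in steps:
        data = step(data)
        if data == cached_outputs.get(name):
            # No change propagated past this step: the previous final result is still valid.
            print(f"stopping after '{name}': output unchanged")
            return cached_outputs[steps[-1][0]]
        cached_outputs[name] = data
    return data

# Toy usage: the first step already reproduces the cached output, so execution stops early.
steps = [("select", lambda xs: sorted(xs)), ("classify", lambda xs: [f"{x}:AMBER" for x in xs])]
cache = {"select": [1, 2, 3], "classify": ["1:AMBER", "2:AMBER", "3:AMBER"]}
print(rerun_step_by_step(steps, [3, 2, 1], cache))  # ['1:AMBER', '2:AMBER', '3:AMBER']
```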
Experiment 4: Step-by-step re-comp.
• Re-computation triggered by the daily update of GeneMap: 16-06-01 –> 16-06-02
• likely to have minimal impact on the results
• Only two tasks in the SVI process needed execution
• Execution stopped after about 20 seconds of processing
Experiment 4: Results
• The biggest runtime savings of the three partial re-computation scenarios
• the step-by-step re-computation was about 30x quicker than the complete re-execution
• Requires tools to compute differences between various data types
• Incurs costs related to storing all intermediate data
• may be optimised by storing only the intermediate data needed by long-running tasks
Conclusions
• Even simple processes like SVI can benefit significantly from selective re-computation
• Insight into the structure of the pipeline opens up a variety of options for how re-computation can be pursued
• NGS pipelines are very good candidates for optimisation
• The key building blocks for successful re-computation:
• workflow-based design
• tracking data provenance
• access to intermediate data
• availability of tools to compute data difference sets