Capturing and querying fine-grained provenance of preprocessing pipelines in data science (Paolo Missier)
A talk given at the VLDB 2021 conference, August 2021, presenting our paper:
Capturing and Querying Fine-grained Provenance of Preprocessing Pipelines in Data Science. Chapman, A., Missier, P., Simonelli, G., & Torlone, R. PVLDB, 14(4):507–520, January, 2021.
http://doi.org/10.14778/3436905.3436911
Going Smart and Deep on Materials at ALCF (Ian Foster)
As we acquire large quantities of science data from experiment and simulation, it becomes possible to apply machine learning (ML) to those data to build predictive models and to guide future simulations and experiments. Leadership Computing Facilities need to make it easy to assemble such data collections and to develop, deploy, and run associated ML models.
We describe and demonstrate here how we are realizing such capabilities at the Argonne Leadership Computing Facility. In our demonstration, we use large quantities of time-dependent density functional theory (TDDFT) data on proton stopping power in various materials maintained in the Materials Data Facility (MDF) to build machine learning models, ranging from simple linear models to complex artificial neural networks, that are then employed to manage computations, improving their accuracy and reducing their cost. We highlight the use of new services being prototyped at Argonne to organize and assemble large data collections (MDF in this case), associate ML models with data collections, discover available data and models, work with these data and models in an interactive Jupyter environment, and launch new computations on ALCF resources.
Deep Learning on nVidia GPUs for QSAR, QSPR and QNAR predictions (Valery Tkachenko)
While we have seen tremendous growth in machine learning methods over the last two decades, there is still no one-size-fits-all solution. The next era of cheminformatics and pharmaceutical research in general is focused on mining heterogeneous big data, which is accumulating at an ever-growing pace, and this will likely use more sophisticated algorithms such as Deep Learning (DL). There has been increasing use of DL recently, which has shown powerful advantages in learning from images and language as well as many other areas. However, the accessibility of this technique for cheminformatics is hindered, as it is not readily available to non-experts. It was therefore our goal to develop a DL framework embedded into a general research data management platform (Open Science Data Repository) which can be used as an API, a standalone tool, or integrated into new software as an autonomous module. In this poster we will present results comparing the performance of classic machine learning methods (Naïve Bayes, logistic regression, Support Vector Machines, etc.) with Deep Learning, and will discuss challenges associated with deep neural networks (DNNs). DNN models of different complexity (up to 6 hidden layers) were built and tuned (different numbers of hidden units per layer, multiple activation functions, optimizers, dropout fraction, regularization parameters, and learning rate) using Keras (https://keras.io/) and TensorFlow (www.tensorflow.org) and applied to various use cases connected to the prediction of physicochemical properties, ADME, toxicity, and the calculation of material properties. It was also shown that using nVidia GPUs significantly accelerates calculations, although memory consumption puts some limits on the performance and applicability of standard toolkits 'as is'.
Carles Bo, from ICIQ, presents IoChem-BD, a repository for computational chemistry data. The goal is to build the database in a normalised way, defining the processes, what is stored, and how it is done.
This presentation was given at TSIUC'14, held at the Universitat Autònoma de Barcelona on 2 December 2014, under the title "Reptes en Big Data a la universitat i la Recerca" (Challenges of Big Data in universities and research).
HPC + AI: Machine Learning Models in Scientific Computing (inside-BigData.com)
In this video from the 2019 Stanford HPC Conference, Steve Oberlin from NVIDIA presents: HPC + AI: Machine Learning Models in Scientific Computing.
"Most AI researchers and industry pioneers agree that the wide availability and low cost of highly-efficient and powerful GPUs and accelerated computing parallel programming tools (originally developed to benefit HPC applications) catalyzed the modern revolution in AI/deep learning. Clearly, AI has benefited greatly from HPC. Now, AI methods and tools are starting to be applied to HPC applications to great effect. This talk will describe an emerging workflow that uses traditional numeric simulation codes to generate synthetic data sets to train machine learning algorithms, then employs the resulting AI models to predict the computed results, often with dramatic gains in efficiency, performance, and even accuracy. Some compelling success stories will be shared, and the implications of this new HPC + AI workflow on HPC applications and system architecture in a post-Moore’s Law world considered."
Watch the video: https://youtu.be/SV3cnWf39kc
Learn more: https://nvidia.com
and
http://hpcadvisorycouncil.com/events/2019/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Your data won’t stay smart forever: exploring the temporal dimension of (big ...) (Paolo Missier)
Much of the knowledge produced through data-intensive computations is liable to decay over time, as the underlying data drifts, and the algorithms, tools, and external data sources used for processing change and evolve. Your genome, for example, does not change over time, but our understanding of it does. How often should we look back at it, in the hope of gaining new insight, e.g. into genetic diseases, and how much does that cost when you scale re-analysis to an entire population?
The “total cost of ownership” of knowledge derived from data (TCO-DK) includes the cost of refreshing the knowledge over time in addition to the initial analysis, but is often not a primary consideration.
The ReComp project aims to provide models, algorithms, and tools to help humans understand TCO-DK, i.e., the nature and impact of changes in data, and assess the cost and benefits of knowledge refresh.
In this talk we try to map the scope of ReComp by giving a number of patterns that cover typical analytics scenarios where re-computation is appropriate. We specifically describe two such scenarios, where we are conducting small-scale, proof-of-concept ReComp experiments to help us sketch the general ReComp architecture. This initial exercise reveals a multiplicity of problems and research challenges, which will inform the rest of the project.
Big & open data challenges for smart city – PIC2014 Shanghai (Victoria López)
This talk is about how both private enterprise and government wish to improve the value of their data and how they deal with this issue. The talk summarizes the ways we think about Big Data, Open Data, and their use by organizations or individuals. Big Data is explained in terms of collection, storage, analysis and valuation. This data is collected from numerous sources including networks of sensors, government data holdings, company market databases, and public profiles on social networking sites. Organizations use many data analysis techniques to study both structured and unstructured data. Due to the volume, velocity and variety of data, specific techniques have been developed: MapReduce, Hadoop and related tools such as RHadoop are trendy topics nowadays.
In this talk several applications and case studies are presented as examples. Data which come from government sources must be open, and every day more and more cities and countries are opening their data. Open Data is then presented as a specific case of public data with a special role in the smart city. The main goal of Big and Open Data in the smart city is to develop systems which can be useful for citizens. In this sense RMap (Mapa de Recursos) is shown as an Open Data application, an open system for Madrid City Council, available for smartphones and developed entirely by the research group G-TeC (www.tecnologiaUCM.es).
Biological Apps: Rapidly Converging Technologies for Living Information Proce... (Natalio Krasnogor)
This is a plenary talk I gave at the 2018 International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems in Cadiz, Spain.
On 29 February and 1 March 2016, the Fundación Ramón Areces examined the relationship between 'Big Data and climate change' in a series of sessions. Can Big Data help reduce climate change? How will massive data analysis contribute to preventing and managing natural disasters? These are just some of the questions the speakers set out to answer. The climate sciences see Big Data as a very promising tool for tackling the various phenomena associated with climate change.
Energy Efficient Wireless Internet Access (Scienzainrete)
Energy consumption is the issue of the future. We depend more and more on energy sources that are becoming scarce. At the same time, energy consumption has a dramatic influence on climate change. The question of reducing consumption must be addressed, above all in the communications sector. Presented and analysed here are the energy consumption figures for mobile telephony and the network.
Real-time PMU Data Recovery Application Based on Singular Value Decomposition (Power System Operation)
Phasor measurement units (PMUs) allow for the enhancement of power system monitoring and control applications, and they will prove even more crucial in the future, as the grid becomes more decentralized and subject to higher uncertainty. Tools that improve PMU data quality and facilitate data analytics workflows are thus needed. In this work, we leverage a previously described algorithm to develop a Python application for PMU data recovery. Because of its intrinsic nature, PMU data can be dimensionally reduced using singular value decomposition (SVD). Moreover, the high spatio-temporal correlation can be leveraged to estimate the value of measurements that are missing due to drop-outs. These observations are at the basis of the data recovery application described in this work. Extensive testing is performed to study the performance under different data drop-out scenarios, and the results show very high recovery accuracy. Additionally, the application is designed to take advantage of a high performance PMU data platform called PredictiveGrid™, developed by PingThings.
Our vision for the selective re-computation of genomics pipelines in reaction to changes to tools and reference datasets.
How do you prioritise patients for re-analysis on a given budget?
Talk given at TAPP'16 (Theory and Practice of Provenance), June 2016, paper is here:
https://arxiv.org/abs/1604.06412
Abstract:
The cost of deriving actionable knowledge from large datasets has been decreasing thanks to a convergence of positive factors:
low cost data generation, inexpensively scalable storage and processing infrastructure (cloud), software frameworks and tools for massively distributed data processing, and parallelisable data analytics algorithms.
One observation that is often overlooked, however, is that each of these elements is not immutable, rather they all evolve over time.
As those datasets change over time, the value of their derivative knowledge may decay, unless it is preserved by reacting to those changes. Our broad research goal is to develop models, methods, and tools for selectively reacting to changes by balancing costs and benefits, i.e. through complete or partial re-computation of some of the underlying processes.
In this paper we present an initial model for reasoning about change and re-computations, and show how analysis of detailed provenance of derived knowledge informs re-computation decisions.
We illustrate the main ideas through a real-world case study in genomics, namely on the interpretation of human variants in support of genetic diagnosis.
ReComp and P4@NU: Reproducible Data Science for Health (Paolo Missier)
A brief overview of the ReComp project (http://recomp.org.uk) on selective recurring re-computation of complex analytics, and a brief outlook for the P4@NU project on seeking digital biomarkers for age-related metabolic diseases.
ICIAM 2019: A New Algorithm Model for Massive-Scale Streaming Graph Analysis (Jason Riedy)
Applications in many areas analyze an ever-changing environment. On billion vertices graphs, providing snapshots imposes a large performance cost. We propose the first formal model for graph analysis running concurrently with streaming data updates. We consider an algorithm valid if its output is correct for the initial graph plus some implicit subset of concurrent changes. We show theoretical properties of the model, demonstrate the model on various algorithms, and extend it to updating results incrementally.
Similar to Selective and incremental re-computation in reaction to changes: an exercise in metadata analytics
Design and Development of a Provenance Capture Platform for Data Science (Paolo Missier)
A talk given at the DATAPLAT workshop, co-located with the IEEE ICDE conference (May 2024, Utrecht, NL).
Data Provenance for Data Science is our attempt to provide a foundation to add explainability to data-centric AI.
It is a prototype, with lots of work still to do.
Towards explanations for Data-Centric AI using provenance records (Paolo Missier)
In this presentation, given to graduate students at Universita' RomaTre, Italy, we suggest that concepts well-known in Data Provenance can be exploited to provide explanations in the context of data-centric AI processes. Through use cases (incremental data cleaning, training set pruning), we build up increasingly complex provenance patterns, culminating in an open question:
how to describe "why" a specific data item has been manipulated as part of data processing, when such processing may consist of a complex data transformation algorithm.
Interpretable and robust hospital readmission predictions from Electronic Hea... (Paolo Missier)
A talk given at the BDA4HM workshop, IEEE BigData conference, Dec. 2023
please see paper here:
https://drive.google.com/file/d/1vN08G0FWxOSH1Yeak5AX6a0sr5-EBbAt/view
Data-centric AI and the convergence of data and model engineering: opportunit... (Paolo Missier)
A keynote talk given to the IDEAL 2023 conference (Evora, Portugal Nov 23, 2023).
Abstract.
The past few years have seen the emergence of what the AI community calls "Data-centric AI", namely the recognition that some of the limiting factors in AI performance are in fact in the data used for training the models, as much as in the expressiveness and complexity of the models themselves. One analogy is that of a powerful engine that will only run as fast as the quality of the fuel allows. A plethora of recent literature has begun to explore the connection between data and models in depth, along with startups that offer "data engineering for AI" services. Some concepts are well-known to the data engineering community, including incremental data cleaning, multi-source integration, or data bias control; others are more specific to AI applications, for instance the realisation that some samples in the training space are "easier to learn from" than others. In this "position talk" I will suggest that, from an infrastructure perspective, there is an opportunity to efficiently support patterns of complex pipelines where data and model improvements are entangled in a series of iterations. I will focus in particular on end-to-end tracking of data and model versions, as a way to support MLDev and MLOps engineers as they navigate through a complex decision space.
Realising the potential of Health Data Science: opportunities and challenges ... (Paolo Missier)
A guest lecture given to a group of healthcare professionals as part of an Information Management course at Newcastle University, on working with healthcare data to generate disease risk prediction models
A Data-centric perspective on Data-driven healthcare: a short overview (Paolo Missier)
a brief intro on the data challenges associated with working with Health Care data, with a few examples, both from literature and our own, of traditional approaches (Latent Class Analysis, Topic Modelling) and a perspective on Language-based modelling for Electronic Health Records (EHR).
probably more references than actual content in here!
Tracking trajectories of multiple long-term conditions using dynamic patient... (Paolo Missier)
Momentum has been growing in research to better understand the dynamics of multiple long-term conditions-multimorbidity (MLTC-M), defined as the co-occurrence of two or more long-term or chronic conditions within an individual. Several research efforts make use of Electronic Health Records (EHR), which represent patients' medical histories. These range from discovering patterns of multimorbidity, namely by clustering diseases based on their co-occurrence in EHRs, to using EHRs to predict the next disease or other specific outcomes. One problem with the former approach is that it discards important temporal information on the co-occurrence, while the latter requires "big" data volumes that are not always available from routinely collected EHRs, limiting the robustness of the resulting models. In this paper we take an intermediate approach, where initially we use about 143,000 EHRs from UK Biobank to perform time-independent clustering using topic modelling, and Latent Dirichlet Allocation specifically. We then propose a metric to measure how strongly a patient is "attracted" into any given cluster at any point through their medical history. By tracking how such gravitational pull changes over time, we may then be able to narrow the scope for potential interventions and preventative measures to specific clusters, without having to resort to full-fledged predictive modelling. In this preliminary work we show exemplars of these dynamic associations, which suggest that further exploration may lead to actionable insights into patients' medical trajectories. (On behalf of the AI-MULTIPLY consortium; funded by an NIHR AIM Development grant to AI-MULTIPLY.)
Digital biomarkers for preventive personalised healthcare (Paolo Missier)
A talk given to the Alan Turing Institute, UK, Oct 2021, reporting on the preliminary results and ongoing research in our lab, on self-monitoring using accelerometers for healthcare applications
Decentralized, Trust-less Marketplace for Brokered IoT Data Trading using Blo... (Paolo Missier)
A talk given at the 2nd IEEE Blockchain conference, Atlanta, US, July 2019.
here is the paper: http://homepages.cs.ncl.ac.uk/paolo.missier/doc/Decentralised_Marketplace_USA_Conference___Accepted_Version_.pdf
A Customisable Pipeline for Continuously Harvesting Socially-Minded Twitter U... (Paolo Missier)
Talk for the paper published at ICWE 2019:
Primo F, Missier P, Romanovsky A, Mickael F, Cacho N. A customisable pipeline for continuously harvesting socially-minded Twitter users. In: Procs. ICWE’19. Daejeon, Korea; 2019.
Selective and incremental re-computation in reaction to changes: an exercise in metadata analytics
1. 1
Selective and incremental re-computation in reaction to changes:
an exercise in metadata analytics
recomp.org.uk
Paolo Missier, Jacek Cala, Jannetta Steyn
School of Computing
Newcastle University, UK
Durham University
May 31st, 2018
Meta-*
In collaboration with
• Institute of Genetic Medicine, Newcastle University
• School of GeoSciences, Newcastle University
4. 4
Understanding change
[Figure: Big Data flows through "The Big Analytics Machine" to produce "Valuable Knowledge" (versions V1, V2, V3 over time t); the meta-knowledge it depends on (algorithms, tools, middleware, reference datasets) also evolves over time.]
• Threats: Will any of the changes invalidate prior findings?
• Opportunities: Can the findings be improved over time?
ReComp space = expensive analysis + frequent changes + high impact
Analytics within ReComp space…
C1: are resource-intensive and thus expensive when repeatedly executed over time, i.e., on a cloud or HPC cluster;
C2: require sophisticated implementations to run efficiently, such as workflows with a nested structure;
C3: depend on multiple reference datasets and software libraries and tools, some of which are versioned and evolve over time;
C4: apply to a possibly large population of input instances;
C5: deliver valuable knowledge.
5. 5
Talk Outline
ReComp: selective re-computation to refresh outcomes in reaction
to change
• Case study 1: Re-computation decisions for flood simulations
• Learning useful estimators for the impact of change
• Black box computation, coarse-grained changes
• Case study 2: high throughput genomics data processing
• An exercise in provenance collection and analytics
• White-box computation, fine-grained changes
• Open challenges
6. 6
Case study 1: Flood modelling simulation
Simulation characteristics:
Part of Newcastle upon Tyne
DTM: ≈2.3M cells, 2x2m cell size
Building and green areas from Nov 2017
Rainfall event with return period 50 years
Simulation time: 60 mins
10–25 frames with water depth and velocity in each cell
Output size: 23x65 MiB ≈ 1.5 GiB
Water depth heat map
City Catchment Analysis Tool (CityCAT)
Vassilis Glenis, et al.
School of Engineering, NU
7. 7
When should we repeat an expensive simulation?
[Figure: extreme rainfall event simulation in Newcastle; the CityCat flood simulator is run on two versions of the map, producing flood diffusion time series.]
Can we predict high-difference areas without re-running the simulation?
New buildings / green areas may alter data flow.
Running CityCat is generally expensive:
- Processing for the Newcastle area: ≈3h on a 4-core i7 3.2GHz CPU
- A placeholder for more expensive simulations!
Map updates are infrequent (every ~6 months), but useful when simulating changes, e.g. for planning purposes.
8. 8
Estimating the impact of a flood simulation
Suppose we are able to quantify:
- Difference in inputs, M,M’
- Difference in outputs F,F’
Suppose also that we are only interested in changes between two outputs that are large enough: ΔF(F, F') > θO   (1)
for some user-defined threshold θO.
Problem statement:
Can we define an ideal ReComp decision function which
- operates on two versions of the inputs, M, M', and the old output F
- returns true iff (1) would hold when F' is actually computed?
Can we predict when F’ needs to be computed?
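The formulas on this slide were rendered as images and are missing from the transcript. The LaTeX fragments in the Editor's Notes below suggest the following reading (the name of the decision function is a placeholder, not from the slides):
M \xrightarrow{\ \mathit{sim}\ } F, \qquad M' \xrightarrow{\ \mathit{sim}\ } F'
\Delta_M(M, M'), \qquad \Delta_F(F, F'), \qquad \Delta_F(F, F') > \theta_O \quad (1)
\mathit{recomp}(M, M', F) = \mathit{true} \iff \Delta_F(F, \mathit{sim}(M')) > \theta_O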
9. 9
Approach
1. Define input diff and output diff functions:
2. Define an impact function:
3. Define the ReComp decision function:
where θImp is a tunable parameter
ReComp approximates (1), so it’s subject to errors:
False Positives:
False Negatives:
4. Use ground data to determine values for θImp as a function of FPR and FNR
Note: The ReComp function should be much less expensive to compute than sim()
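The slide's own formulas are again missing; one plausible reconstruction, consistent with the θImp threshold tuned on the next slide (a hedged sketch, not the paper's exact notation):
\mathit{Imp}\big(\Delta_M(M, M'), F\big) > \theta_{\mathit{Imp}} \ \Rightarrow\ \text{re-compute}
\text{False positive: re-computation decided, but } \Delta_F(F, F') \le \theta_O
\text{False negative: re-computation skipped, but } \Delta_F(F, F') > \theta_O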
10. 10
Diff and impact functions
B: Buildings; L: other Land; H: hard surface
f() partitions polygon changes into 6 types (e.g. B–, L+, B– ∩ L+, …)
For each type, compute the average water depth within and around the footprint of the change; the impact function returns the max of the average water depth over all changes.
Output diff: max of the differences between spatially averaged F, F’ over a window W.
[Figure: example water-depth maps around changed building (B–) and land (L+) footprints.]
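A minimal sketch of the impact computation just described, assuming the changed map polygons have already been rasterised as boolean masks over the simulated water-depth grid; the function and variable names (avg_depth_around, impact_score) are illustrative and not part of CityCAT:

import numpy as np

def avg_depth_around(water_depth, mask, window=10):
    """Average water depth within and around the footprint of one change.
    water_depth: 2D array of simulated depths (metres per 2x2m cell)
    mask:        boolean 2D array marking the cells of the changed polygon
    window:      half-width of the neighbourhood in cells (10 cells ~ 20m)"""
    rows, cols = np.where(mask)
    r0, r1 = max(rows.min() - window, 0), min(rows.max() + window + 1, water_depth.shape[0])
    c0, c1 = max(cols.min() - window, 0), min(cols.max() + window + 1, water_depth.shape[1])
    return float(water_depth[r0:r1, c0:c1].mean())

def impact_score(water_depth, change_masks):
    """Impact of a set of map changes: the max of the average depths over all changes."""
    return max(avg_depth_around(water_depth, m) for m in change_masks)

Comparing impact_score(F, changes) against the θImp threshold of the next slide would give the re-computation decision.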
11. 11
Tuning the threshold parameter
Ground data from all past re-computations:
FP: <1,0>
FN: <0,1>
Set FNR to be close to 0; experimentally find the θImp that minimises FPR (max specificity).
[Plots: precision, recall, accuracy, and specificity as a function of θImp (0.10–0.25); window size 20x20m, θO = 0.2m, for all changes and for consecutive changes.]
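A small sketch of this tuning step, assuming we keep, for every past re-computation, the estimated impact score and whether the re-computed output actually changed by more than θO; all names are illustrative:

import numpy as np

def tune_threshold(impacts, changed, max_fnr=0.0):
    """Pick the impact threshold that minimises the false positive rate
    while keeping the false negative rate at or below max_fnr."""
    impacts, changed = np.asarray(impacts), np.asarray(changed, dtype=bool)
    best = None
    for theta in np.concatenate(([-np.inf], np.unique(impacts))):
        pred = impacts > theta                   # decisions: re-compute where impact > theta
        fn = np.sum(changed & ~pred)             # missed real output changes
        fp = np.sum(~changed & pred)             # wasted re-computations
        fnr = fn / max(changed.sum(), 1)
        fpr = fp / max((~changed).sum(), 1)
        if fnr <= max_fnr and (best is None or fpr < best[1]):
            best = (theta, fpr)
    return best   # (theta, fpr), or None if no threshold satisfies the FNR bound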
14. 14
Talk Outline
ReComp: selective re-computation to refresh outcomes in reaction
to change
• Case study 1: Re-computation decisions for flood simulations
• Learning useful estimators for the impact of change
• Case study 2: high throughput genomics data processing
• An exercise in provenance collection and analytics
• Open challenges
15. 15
Data Analytics enabled by Next Gen Sequencing
Genomics: WES / WGS, Variant calling, Variant interpretation diagnosis
- Eg 100K Genome Project, Genomics England, GeCIP
Submission of sequence data for archiving and analysis; data analysis using selected EBI and external software tools; data presentation and visualisation through a web interface.
[Figure: three-stage exome pipeline: Stage 1 (align, clean, recalibrate alignments, calculate coverage), Stage 2 (call, recalibrate, and filter variants), Stage 3 (annotate), producing coverage information and annotated variants.]
Metagenomics: Species identification
- Eg The EBI metagenomics portal
16. 16
Whole-exome variant calling pipeline
Van der Auwera, G. A., Carneiro, M. O., Hartl, C., Poplin, R., del Angel, G., Levy-Moonshine, A., … DePristo, M. A. (2013). From FastQ Data to High-Confidence Variant Calls: The Genome Analysis Toolkit Best Practices Pipeline. In Current Protocols in Bioinformatics. John Wiley & Sons, Inc. https://doi.org/10.1002/0471250953.bi1110s43
[Figure: pipeline tools: alignment with BWA, Bowtie, or Novoalign; Picard MarkDuplicates; GATK quality score recalibration; variant calling with GATK HaplotypeCaller, FreeBayes, or SamTools; variant recalibration; Annovar functional annotations (e.g. MAF, synonymity, SNPs…), followed by in-house annotations.]
17. 17
Expensive
Data stats per sample:
- 4 files per sample (2-lane, pair-end reads)
- ≈15 GB of compressed text data (gz)
- ≈40 GB of uncompressed text data (FASTQ)
Usually 30–40 input samples:
- 0.45–0.6 TB of compressed data
- 1.2–1.6 TB uncompressed
Most steps use 8–10 GB of reference data.
A small 6-sample run takes about 30h on the IGM HPC machine (Stages 1+2).
Scalable and Efficient Whole-exome Data Processing Using Workflows on the Cloud. Cala, J.; Marei, E.; Yu, Y.; Takeda, K.; and Missier, P. Future Generation Computer Systems, Special Issue: Big Data in the Cloud, 2016.
18. 19
SVI: Simple Variant Interpretation
Genomics: WES / WGS, Variant calling, Variant interpretation diagnosis
- Eg 100K Genome Project, Genomics England, GeCIP
[Figure: the same three-stage exome pipeline as on the earlier slide.]
SVI filters, then classifies variants into three categories: pathogenic, benign, and unknown/uncertain.
SVI: a simple single-nucleotide Human Variant Interpretation tool for Clinical Use. Missier, P.; Wijaya, E.; Kirby, R.; and Keogh, M. In Procs. 11th International Conference on Data Integration in the Life Sciences, Los Angeles, CA, 2015. Springer.
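A toy sketch of the SVI-style triage described above (this is not the actual SVI code; the shapes of gene_map and clinvar_status are assumptions made for illustration):

def classify_variants(variants, phenotype, gene_map, clinvar_status):
    """variants:       iterable of (variant_id, gene) pairs called for one patient
    phenotype:      the patient's phenotype, e.g. "ALS-FTD"
    gene_map:       dict phenotype -> set of genes in scope (e.g. from OMIM GeneMap)
    clinvar_status: dict variant_id -> "pathogenic" | "benign" | other"""
    in_scope_genes = gene_map.get(phenotype, set())
    red, green, amber = [], [], []
    for vid, gene in variants:
        if gene not in in_scope_genes:
            continue                          # filtered out: gene unrelated to the phenotype
        status = clinvar_status.get(vid)
        if status == "pathogenic":
            red.append(vid)                   # confirmed pathogenicity
        elif status == "benign":
            green.append(vid)
        else:
            amber.append(vid)                 # unknown / uncertain significance
    return {"pathogenic": red, "benign": green, "unknown": amber}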
19. 20
Changes that affect variant interpretation
What changes:
- Improved sequencing / variant calling
- ClinVar, OMIM evolve rapidly
- New reference data sources
Evolution in number of variants that affect patients
(a) with a specific phenotype
(b) Across all phenotypes
21. 22
Unstable
Van der Auwera, G. A., Carneiro, M. O., Hartl, C., Poplin, R., del Angel, G., Levy-Moonshine, A., … DePristo, M. A. (2013). From FastQ Data to High-Confidence Variant Calls: The Genome Analysis Toolkit Best Practices Pipeline. In Current Protocols in Bioinformatics. John Wiley & Sons, Inc. https://doi.org/10.1002/0471250953.bi1110s43
[Figure: the variant-calling pipeline again, highlighting the components that change over time: aligners (BWA, Bowtie, Novoalign), Picard MarkDuplicates, GATK quality score recalibration, callers (GATK HaplotypeCaller, FreeBayes, SamTools), variant recalibration, and Annovar annotations followed by in-house annotations.]
Any of these stages may change over time, semi-independently.
dbSNP builds: 150 (2/17), 149 (11/16), 148 (6/16), 147 (4/16)
Human reference genome: hg19, h37, h38, …
22. 23
FreeBayes vs SamTools vs GATK-Haplotype Caller
GATK: McKenna, A., Hanna, M., Banks, E., Sivachenko, A., Cibulskis, K., Kernytsky, A., … DePristo, M. A. (2010). The Genome Analysis Toolkit: a MapReduce framework for analyzing next-generation DNA sequencing data. Genome Research, 20(9), 1297–303. https://doi.org/10.1101/gr.107524.110
FreeBayes: Garrison, Erik, and Gabor Marth. "Haplotype-based variant detection from short-read sequencing." arXiv preprint arXiv:1207.3907 (2012).
GIAB: Zook, J. M., Chapman, B., Wang, J., Mittelman, D., Hofmann, O., Hide, W., & Salit, M. (2014). Integrating human sequence data sets provides a resource of benchmark SNP and indel genotype calls. Nat Biotech, 32(3), 246–251. http://dx.doi.org/10.1038/nbt.2835
Adam Cornish and Chittibabu Guda, "A Comparison of Variant Calling Pipelines Using Genome in a Bottle as a Reference," BioMed Research International, vol. 2015, Article ID 456479, 11 pages, 2015. doi:10.1155/2015/456479
Hwang, S., Kim, E., Lee, I., & Marcotte, E. M. (2015). Systematic comparison of variant calling pipelines using gold standard personal exome variants. Scientific Reports, 5(December), 17875. https://doi.org/10.1038/srep17875
23. 24
Comparing three versions of Freebayes
Should we care about changes in the pipeline?
• Tested three versions of the caller:
• 0.9.10 Dec 2013
• 1.0.2 Dec 2015
• 1.1 Nov 2016
• The Venn diagram shows quantitative comparison (% and number) of filtered
variants;
• Phred quality score >30
• 16 patient BAM files (7 AD, 9 FTD-ALS)
24. 25
Impact on SVI classification
Patient phenotypes: 7 Alzheimer’s, 9 FTD-ALS
The ONLY change in the pipeline is the version of Freebayes used to call variants
(R)ed – confirmed pathogenicity (A)mber – uncertain pathogenicity
Patient ID   Phenotype   Freebayes 0.9.10   Freebayes 1.0.2   Freebayes 1.1
B_0190       ALS-FTD     A                  A                 A
B_0191       ALS-FTD     A                  A                 A
B_0192       ALS-FTD     R                  R                 R
B_0193       ALS-FTD     A                  A                 A
B_0195       ALS-FTD     R                  R                 R
B_0196       ALS-FTD     R                  R                 R
B_0198       AD          R                  A                 A
B_0199       ALS-FTD     R                  A                 A
B_0201       AD          R                  R                 R
B_0202       AD          A                  A                 A
B_0203       AD          R                  R                 R
B_0208       AD          R                  A                 A
B_0209       AD          R                  R                 R
B_0211       ALS-FTD     R                  A                 A
B_0213       ALS-FTD     A                  A                 A
B_0214       AD          R                  R                 R
25. 26
Changes: frequency / impact / cost
[Chart: change types positioned by change frequency (low to high) against change impact on a cohort (low to high): GATK, variant caller, variant annotations (Annovar), reference human genome, variant DB (e.g. ClinVar), phenotype-to-disease mapping (e.g. OMIM GeneMap), and new sequences (the N+1 problem); spanning variant calling and variant interpretation.]
26. 27
Changes: frequency / impact / cost
[Chart: the same frequency/impact plot as on the previous slide, with the high-frequency, high-impact region highlighted as the ReComp space.]
28. 29
The ReComp meta-process
[Figure: the ReComp meta-process loop around process P: detect and measure changes (change events, data diff(.,.) functions), estimate the impact of changes, select and enact re-computations, observe executions, and record execution history in the History DB.]
Approach:
1. Quantify the data-diff and the impact of changes on prior outcomes
2. Collect and exploit process history metadata:
   - Capture the history of past computations: process structure and dependencies, cost, provenance of the outcomes
   - Metadata analytics: learn from history (estimation models for impact, cost, benefits)
Changes:
• Algorithms and tools
• Accuracy of input sequences
• Reference databases (HGMD, ClinVar, OMIM GeneMap…)
29. 32
changes, data diff, impact
1) Observed change events (inputs, dependencies, or both)
2) Type-specific diff functions
3) Impact occurs to various degrees on multiple prior outcomes. The impact of a change C on the processing of a specific X is process- and data-specific.
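Combining the slide text with the speaker notes, the missing formulas can be sketched as follows (notation is a best guess):
C : D^{t} \rightarrow D^{t'} \quad \text{(an observed change to an input or dependency } D\text{)}
\mathit{diff}_D(D^{t}, D^{t'}) \quad \text{(type-specific difference)}
\mathit{imp}_P(C, X) \quad \text{(impact of change } C \text{ on the prior processing of input } X\text{)}
\text{Example (from the Editor's Notes): for } v \in \mathit{diff}_Y(Y^{t}, Y^{t'}),\ \mathit{imp}_P(C, X) = \texttt{High} \text{ if } v.\texttt{status}: * \rightarrow \texttt{red} \text{ or } \texttt{red} \rightarrow *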
31. 34
Impact: importance and Scope
Scope: which cases are affected?
- Individual variants have an associated phenotype.
- Patient cases also have a phenotype
“a change in variant v can only have impact on a case X if v and X share the same phenotype”
Importance: “Any variant with status moving from/to Red causes High impact on any X that is affected by the variant”
32. 35
Approach – a combination of techniques
1. Partial re-execution
• Identify and re-enact the portions of a process that are affected by the change
2. Differential execution
• Input to the new execution consists of the differences between two versions of a changed dataset
• Only feasible if some algebraic properties of the process hold
3. Identifying the scope of change – Loss-less
• Exclude instances of the population that are certainly not affected
35. 39
History DB: Workflow Provenance
Each invocation of an eSC workflow generates a provenance trace
[Figure: PROV pattern linking the plan (workflow WF with blocks B1, B2) to the plan execution (WFexec with B1exec, B2exec): block executions are partOf the workflow execution, each execution is associated with its program/workflow, and usage/generation edges connect executions to data entities and reference databases (db).]
37. 41
1. Partial re-execution
1. Change detection: a provenance fact indicates that a new version Dnew of database d is available: wasDerivedFrom(“db”, Dnew)
:- execution(WFexec), wasPartOf(Xexec, WFexec), used(Xexec, “db”)
2. Reacting to the change:
2.1 Find the entry point(s) into the workflow, where db was used:
:- execution(WFexec), execution(B1exec), execution(B2exec), wasPartOf(B1exec, WFexec), wasPartOf(B2exec, WFexec), wasGeneratedBy(Data, B1exec), used(B2exec, Data)
2.2 Discover the rest of the sub-workflow graph (execute recursively)
Provenance pattern (Ex. db = “ClinVar v.x”): [Figure: the “plan” / “plan execution” pattern from the previous slide.]
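A minimal, self-contained sketch of the queries above, over provenance represented as (subject, relation, object) triples; this is illustrative only and not tied to any particular provenance store or to the e-SC implementation:

def entry_points(prov, wf_exec, db="db"):
    # wasPartOf(Xexec, WFexec), used(Xexec, db): block executions that used the changed database
    part_of = {s for (s, r, o) in prov if r == "wasPartOf" and o == wf_exec}
    return {s for (s, r, o) in prov if r == "used" and o == db and s in part_of}

def downstream(prov, seeds):
    # Recursively discover the affected sub-workflow graph by following
    # wasGeneratedBy / used edges from the seed executions onwards.
    affected, frontier = set(seeds), set(seeds)
    while frontier:
        data = {s for (s, r, o) in prov if r == "wasGeneratedBy" and o in frontier}
        consumers = {s for (s, r, o) in prov if r == "used" and o in data}
        frontier = consumers - affected
        affected |= frontier
    return affected

prov = {("B1exec", "wasPartOf", "WFexec"), ("B2exec", "wasPartOf", "WFexec"),
        ("B1exec", "used", "db"), ("Data", "wasGeneratedBy", "B1exec"),
        ("B2exec", "used", "Data")}
print(downstream(prov, entry_points(prov, "WFexec")))   # {'B1exec', 'B2exec'}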
38. 42
Minimal sub-graphs in SVI
[Figure: minimal sub-graphs of the SVI workflow that need re-execution after a change in ClinVar and after a change in GeneMap.]
Overhead: cache intermediate data required for partial re-execution
• 156 MB for GeneMap changes and 37 kB for ClinVar changes
Time savings:
           Partial re-execution (sec)   Complete re-execution (sec)   Time saving (%)
GeneMap    325                          455                           28.5
ClinVar    287                          455                           37
44. 52
Differential execution
Suppose D is a relation (a table). diffD(D, D') can then be expressed as a pair of delta relations: the tuples added and the tuples removed between the two versions. We compute the new output P(D') as a combination of the old output P(D) and of P applied to the delta relations. This is effective if the deltas are much smaller than D', and it can be achieved provided P is distributive w.r.t. set union and difference.
Cf. F. McSherry, D. Murray, R. Isaacs, and M. Isard, “Differential dataflow,” in Proceedings of CIDR 2013, 2013.
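The algebra itself was shown as an image; a standard sketch consistent with the slide's wording, assuming the diff is expressed as added/removed tuple sets, is:
\mathit{diff}_D(D, D') = (\delta^{+}, \delta^{-}), \qquad D' = (D \setminus \delta^{-}) \cup \delta^{+}
P(D') = \big(P(D) \setminus P(\delta^{-})\big) \cup P(\delta^{+}) \quad \text{provided } P \text{ distributes over } \cup \text{ and } \setminus
\text{effective when } |\delta^{+}| + |\delta^{-}| \ll |D'|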
45. 53
Partial re-computation using input difference
Idea: run SVI but replace ClinVar query with a query on ClinVar version diff:
Q(CV) → Q(diff(CV1, CV2))
Works for SVI, but hard to generalise: depends on the type of process
Bigger gain: diff(CV1, CV2) is much smaller than CV2

GeneMap versions (from → to)   ToVersion record count   Difference record count   Reduction
16-03-08 → 16-06-07            15910                    1458                      91%
16-03-08 → 16-04-28            15871                    1386                      91%
16-04-28 → 16-06-01            15897                    78                        99.5%
16-06-01 → 16-06-02            15897                    2                         99.99%
16-06-02 → 16-06-07            15910                    33                        99.8%

ClinVar versions (from → to)   ToVersion record count   Difference record count   Reduction
15-02 → 16-05                  290815                   38216                     87%
15-02 → 16-02                  285042                   35550                     88%
16-02 → 16-05                  290815                   3322                      98.9%
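A sketch of the kind of keyed diff behind these tables, along the lines of the Editor's Notes below (key columns identify a record, "where" columns decide whether a shared record counts as changed); the column names are taken from those notes and are only illustrative:

import pandas as pd

def table_diff(old, new, key_cols, where_cols):
    """Return (added, removed, changed) records between two versions of a
    reference table such as ClinVar (NaN-aware handling omitted for brevity)."""
    old_k, new_k = old.set_index(key_cols), new.set_index(key_cols)
    added = new_k.loc[new_k.index.difference(old_k.index)]
    removed = old_k.loc[old_k.index.difference(new_k.index)]
    shared = old_k.index.intersection(new_k.index)
    diff_mask = (old_k.loc[shared, where_cols] != new_k.loc[shared, where_cols]).any(axis=1)
    changed = new_k.loc[shared][diff_mask.values]
    return added, removed, changed

# e.g. table_diff(cv_2016_02, cv_2016_05,
#                 key_cols=["#AlleleID", "Assembly", "Chromosome"],
#                 where_cols=["ClinicalSignificance"])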
47. 55
3: precisely identify the scope of a change
Patient / DB-version impact matrix
Strong scope (fine-grained provenance)
Weak scope (coarse-grained provenance – next slide): “if CVi was used in the processing of pj then pj is in scope”
Semantic scope (domain-specific scoping rules)
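The Editor's Notes give the strong and semantic scoping rules formally; rendered in LaTeX (writing a conjunction where the notes use ∩):
v \in (\delta^{-} \cup \delta^{+}) \ \wedge\ \mathit{used}(p_j, v) \ \Rightarrow\ p_j \text{ in scope} \quad \text{(strong scope)}
v.\mathit{phenotype} = p_j.\mathit{phenotype} \ \Rightarrow\ p_j \text{ in scope} \quad \text{(semantic scope)}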
48. 56
A weak scoping algorithm
Coarse-grained provenance.
Candidate invocation: any invocation I of P whose provenance contains statements of the form:
used(A, “db”), wasPartOf(A, I), wasAssociatedWith(I, _, WF)
Sketch of the algorithm:
- For each candidate invocation I of P:
  - partially re-execute using the difference sets as inputs   # see previous slides
  - find the minimal subgraph P’ of P that needs re-computation   # see above
  - repeat:
      execute P’ one step at-a-time
    until <empty output> or <P’ completed>
  - If <P’ completed> and not <empty output> then
    - Execute P’ on the full inputs
[Figure: the plan / plan-execution provenance pattern once more.]
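A toy rendering of the weak scoping loop above; the process steps, diff inputs, and emptiness test are placeholders to be supplied by the workflow system:

def weak_scope(candidates, minimal_subgraph, run_step, delta_inputs):
    """Push the difference sets through the affected sub-workflow one step at a
    time; only invocations where a non-empty output survives every step need a
    full re-execution (all helper names are hypothetical)."""
    needs_full_rerun = []
    for inv in candidates:
        data = delta_inputs(inv)                  # difference sets as inputs (previous slides)
        completed = True
        for step in minimal_subgraph(inv):        # minimal sub-graph needing re-computation
            data = run_step(step, data)
            if not data:                          # empty output: inv falls out of scope
                completed = False
                break
        if completed and data:
            needs_full_rerun.append(inv)          # re-execute P' on the full inputs
    return needs_full_rerun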
50. 58
Summary of ReComp challenges
[Figure: the ReComp meta-process (change events, data diff(.,.) functions, process P, observed executions, History DB) annotated with its open challenges.]
• Reproducibility: virtualisation
• Sensitivity analysis is unlikely to work well: small input perturbations can have a potentially large impact on diagnosis
• Learning useful estimators is hard
• Diff functions are both type- and application-specific (ranging from specific to generic)
• Not all runtime environments support provenance recording
56. 64
ReComp decisions
Given a population X of prior inputs and a change C, ReComp makes a yes/no decision for each prior input: the decision returns True if P is to be executed again on that input, and False otherwise.
To decide, ReComp must estimate the impact of the change (as well as the re-computation cost).
57. 65
Two possible approaches
1. Direct estimation of the impact function: the problem here is to learn such a function for a specific P, C, and data type Y.
2. Learning an emulator (surrogate) for P which is simpler to compute and provides a useful approximation, where ε is a stochastic term that accounts for the error in approximating f. Learning requires a training set { (xi, yi) }… If such a surrogate can be found, then we can hope to use it to approximate the impact of a change.
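One compact way to write the two options, with f the underlying input-to-output mapping of P and ε the stochastic error term mentioned above; the impact estimate via the surrogate is my reading of the slide, not stated explicitly:
1.\ \widehat{\mathit{imp}}_P(C, X) \approx \mathit{imp}_P(C, X)
2.\ \hat{f}(x) = f(x) + \varepsilon, \qquad \widehat{\mathit{imp}}_P(C, X) \approx \mathit{diff}_Y\big(\hat{f}(X; D^{t}),\ \hat{f}(X; D^{t'})\big)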
58. 66
History DB and Differences DB
Whenever P is re-computed on input X, a new er’ is added to HDB for X:
Using diff() functions we produce a derived difference record dr:
… collected in a Differences database (DDB): dr1 = imp(C1, Y11), dr2 = imp(C12, Y41), dr3 = imp(C1, Y51), dr4 = imp(C2, Y52)
[Figure: the History DB (HDB) shown as a matrix of prior inputs X1…X5 (rows) against tool versions GATK HaplotypeCaller, FreeBayes 0.9, 1.0, 1.1 (columns), with outcomes Y11…Y53 in the cells; the changes C1, C2, C3 between successive versions give rise to the difference records in the DDB.]
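The execution-record notation appears in the Editor's Notes; rendered here for readability:
er = \langle P, X^{t}, D^{t}, Y^{t}, c^{t}, T \rangle
\mathit{HDB} = \{ er_1, er_2, \dots, er_N \}, \qquad \mathcal{X} = \{ er.X \mid er \in \mathit{HDB} \}
dr = \mathit{imp}(C, Y) \quad \text{(derived difference records, collected in the DDB)}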
60. 68
Learning challenges
• Evidence is small and sparse
• How can it be used for selecting from X?
• Learning a reliable imp() function is not feasible
• What’s the use of history? You never see the same change twice!
• Must somehow use evidence from related changes
• A possible approach:
• ReComp makes probabilistic decisions, takes chances
• Associate a reward with each ReComp decision → reinforcement learning
• Bayesian inference (use new evidence to update probabilities)
[Figure: the same HDB / DDB matrix as on the previous slide.]
Editor's Notes
M \rightarrow \mathit{sim}(M) = F
M \rightarrow \mathit{sim}(M) = F
\Delta_F(F,F')
\Delta_M(M,M')
\Delta_F(F,F') > \theta_O
Genomics is a form of data-intensive / computation-intensive analysis
Each sample included 2-lane, pair-end raw sequence reads (4 files per sample). The average size of compressed files was nearly 15 GiB per sample; file decompression was included in the pipeline as one of the initial tasks.
Changes in the reference databases have an impact on the classification
returns updates in mappings to genes that have changed between the two versions (including possibly new mappings):
$\diffOM(\OM^t, \OM^{t'}) = \{\langle t, genes(\dt) \rangle | genes(\dt) \neq genes'(\dt) \} $\\
where $genes'(\dt)$ is the new mapping for $\dt$ in $\OM^{t'}$.
\begin{align*}
\diffCV&(\CV^t, \CV^{t'}) = \\
&\{ \langle v, \varst(v) \rangle \mid \varst(v) \neq \varst'(v) \} \\
& \cup \CV^{t'} \setminus \CV^t \cup \CV^t \setminus \CV^{t'}
\label{eq:diff-cv}
\end{align*}
where $\varst'(v)$ is the new class associate to $v$ in $\CV^{t'}$.
Point of slide: sparsity of impact demands better than blind recomp.
Table 1 summarises the results. We recorded four types of outcomes. Firstly, confirming the current diagnosis, which happens when additional variants are added to the Red class. Secondly, retracting the diagnosis, which may happen (rarely) when all red variants are retracted. Thirdly, changes in the amber class which do not alter the diagnosis, and finally, no change at all.
The table reports results from nearly 500 executions, concerning a cohort of 33 patients, for a total runtime of about 58.7 hours. As merely 14 relevant output changes were detected, this is about 4.2 hours of computation per change: a steep cost, considering that the actual execution time of SVI takes a little over 7 minutes.
our recommendation is the use of BWA-MEM and Samtools pipeline for SNP calls and BWA-MEM and GATK-HC pipeline for indel calls.
In four cases change in the caller version changes the classification
Changes can be frequent or rare, disruptive or marginal
Changes can be frequent or rare, disruptive or marginal
How to make computational experiments reusable, all or in part, through a combination of data and code sharing and re-purposing (reusable Research Objects) and virtualisation mechanisms
\text{let } v \in \diff{Y}(Y^t, Y^{t'}): \\
\text{for any $X$: } \impact_{P}(C,X) = \texttt{High} \text{ if }\\
v.\texttt{status:}
\begin{cases}
* \rightarrow \texttt{red} \\
\texttt{red} \rightarrow *
\end{cases}
Firstly, if we can analyse the structure and semantics of process P, to recompute an instance of P more effectively we may be able to reduce re-computation to only those parts of the process that are actually involved in the processing of the changed data. For this, we are inspired by techniques for smart rerun of workflow-based applications [6, 7], as well as by more general approaches to incremental computation [8, 9].
Experimental setup for our study of ReComp techniques:
SVI workflow with automated provenance recording
Cohort of about 100 exomes (neurological disorders)
Changes in ClinVar and OMIM GeneMap
Firstly, if we can analyse the structure and semantics of process P, to recompute an instance of P more effectively we may be able to reduce re-computation to only those parts of the process that are actually involved in the processing of the changed data. For this, we are inspired by techniques for smart rerun of workflow-based applications [6, 7], as well as by more general approaches to incremental computation [8, 9].
This is only a small selection of rows and a subset of columns. In total there was 30 columns, 349074 rows in the old set, 543841 rows in the new set, 200746 of the added rows, 5979 of the removed rows, 27662 of the changed rows.
As on the previous slide, you may want to highlight that the selection of key-columns and where-columns is very important. For example, using #AlleleID, Assembly and Chromosome as the key columns, we have entry #AlleleID 15091 which looks very similar in both added (green) and removed (red) sets. They differ, however, in the Chromosome column.
Considering the where-columns, using only ClinicalSignificance returns blue rows which differ between versions only in that columns. Changes in other columns (e.g. LastEvaluated) are not reported, which may have ramifications if such a difference is used to produce the new output.
Also, as in Tab. 2 and 3 in the paper, I’d mention whether this reduction was possible with generic diff function or specific function tailored to SVI.
What is also interesting and what I would highlight is that even if the reduction is very close to 100% but below, the cost of recomputation of the process may still be significant because of some constant-time overheads related to running a process (e.g. loading data into memory). e-SC workflows suffer from exactly this issue (every block serializes and deserializes data) and that’s why Fig. 6 shows increase in runtime for GeneMap executed with 2 \deltas even if the reduction is 99.94% (cf. Tab. 2 and Fig. 6 for GeneMap diff between 16-10-30 –> 16-10-31).
Firstly, if we can analyse the structure and semantics of process P, to recompute an instance of P more effectively we may be able to reduce re-computation to only those parts of the process that are actually involved in the processing of the changed data. For this, we are inspired by techniques for smart rerun of workflow-based applications [6, 7], as well as by more general approaches to incremental computation [8, 9].
v \in (\delta^- \cup \delta^+) \cap \mathit{used}(p_j, v) \Rightarrow p_j \text{ in scope }
v.\mathit{phenotype} == p_j.\mathit{phenotype} \Rightarrow p_j \text{ in scope }
Regarding the algorithm, you show the simplified version (Alg. 1). But please take also look on Alg. 2 and mention that you can only run the loop if the distributiveness holds for all P in the downstream graph. Otherwise, you need to break and re-execute on full inputs just after first non-distributive task produces a non-empty output. But, obviously, the hope is that with a well tailored diff function the output will be empty for majority of cases.
er = \langle P, X^{t}, D^{t}, Y^{t}, c^{t}, T \rangle
\HDB = \{ er_1, er_2 \dots er_N \}
{\cal X} = \{ er.X | er \in \HDB\}