With the explosion of interest in both enhanced knowledge management and open science, the past few years have seen considerable discussion about making scientific data "FAIR": findable, accessible, interoperable, and reusable. The problem is that most scientific datasets are not FAIR. When left to their own devices, scientists do a terrible job creating the metadata that describe the experimental datasets that make their way into online repositories. The lack of standardization makes it extremely difficult for other investigators to locate relevant datasets, to re-analyse them, and to integrate them with other data. The Center for Expanded Data Annotation and Retrieval (CEDAR) has the goal of enhancing the authoring of experimental metadata to make online datasets more useful to the scientific community. This webinar presents the CEDAR Workbench for metadata management. CEDAR illustrates the importance of semantic technology in driving open science, and it demonstrates a means of simplifying access to scientific datasets and enhancing the reuse of data to drive new discoveries.
Dr. Dennis Wang discusses possible ways to make ML methods more powerful for discovery and to reduce ambiguity within translational medicine, allowing data-informed decision-making to deliver the next generation of diagnostics and therapeutics to patients more quickly, at lower cost, and at scale.
The talk by Dr. Dennis Wang was followed by a panel discussion with Mr. Albert Wang, M. Eng., Head, IT Business Partner, Translational Research & Technologies, Bristol-Myers Squibb.
In the late Fall and Winter of 2018, the Pistoia Alliance, in cooperation with Elsevier and the charitable organizations Cures within Reach and Mission: Cure, ran a datathon aimed at finding drugs suitable for the treatment of childhood chronic pancreatitis, a rare disease that causes extreme suffering. The datathon identified four candidate compounds in just under three months. In this webinar our speakers discuss the technologies that made this leap possible.
Innovative applications of microphysiological systems (MPS) have been growing over the past decade, especially with respect to the use of complex human tissues for assessing the safety of drug candidates, but broad industry adoption of MPS methods has not yet become a reality.
This webinar addresses some recent advances in MPS development and begins to explore the barriers to increased incorporation of MPS to improve drug safety assessment and to bring safer, more effective drugs into the clinical pipeline.
Themes and objectives:
To position FAIR as a key enabler to automate and accelerate R&D process workflows
FAIR Implementation within the context of a use case
Grounded in precise outcomes (e.g. faster and bigger science / more reuse of data to enhance value / increased ability to share data for collaboration and partnership)
To make data actionable through FAIR interoperability
Speakers:
Mathew Woodwark, Head of Data Infrastructure and Tools, Data Science & AI, AstraZeneca
Erik Schultes, International Science Coordinator, GO-FAIR
Georges Heiter, Founder & CEO, Databiology
Federated Learning (FL) is a learning paradigm that enables collaborative learning without centralizing datasets. In this webinar, NVIDIA presents the concept of FL and discusses how it can help overcome some of the barriers seen in the development of AI-based solutions for pharma, genomics and healthcare. Following the presentation, the panel debates other elements that could drive the adoption of digital approaches more widely and help answer currently intractable science and business questions.
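The core mechanism behind FL can be sketched in a few lines of code. The following is a minimal single-parameter illustration of federated averaging, in which each site trains on its own private data and only model parameters, never raw records, are shared and combined; the site data, learning rate, and round counts are illustrative assumptions, not NVIDIA's actual FL implementation.

```python
# Minimal federated-averaging sketch: two sites fit y = w*x locally,
# then a coordinator averages their parameters weighted by site size.

def local_update(weights, data, lr=0.1, epochs=5):
    """Train a one-parameter linear model on one site's private data via SGD."""
    w = weights
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # gradient of squared error
            w -= lr * grad
    return w

def federated_average(site_updates, site_sizes):
    """Average each site's parameters, weighted by its number of samples."""
    total = sum(site_sizes)
    return sum(w * n for w, n in zip(site_updates, site_sizes)) / total

# Two hospitals hold disjoint samples drawn from y = 3x; neither shares them.
site_a = [(x, 3 * x) for x in [1.0, 2.0]]
site_b = [(x, 3 * x) for x in [0.5, 1.5, 2.5]]

w_global = 0.0
for _ in range(20):  # communication rounds
    updates = [local_update(w_global, site_a), local_update(w_global, site_b)]
    w_global = federated_average(updates, [len(site_a), len(site_b)])

print(round(w_global, 2))  # converges toward 3.0
```

The key property is that only `updates` crosses the site boundary, which is what makes the paradigm attractive for privacy-constrained pharma and healthcare data.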
Automated molecular design and the Bradshaw platform webinar (2020.04.07) - Pistoia Alliance
This presentation described how data-driven chemoinformatics methods may automate much of what has historically been done by a medicinal chemist. It explored what is reasonable to expect “AI” approaches might achieve, and what is best left with a human expert. The implications of automation for the human-machine interface were explored and illustrated with examples from Bradshaw, GSK’s experimental automated design environment.
Open interoperability standards, tools and services at EMBL-EBI - Pistoia Alliance
In this webinar Dr Henriette Harmse presents how EMBL-EBI uses its ontology services to scale up the annotation of data and deliver added value to its users through ontologies and semantics.
ISMB/ECCB 2013 Keynote (Goble): Results may vary: what is reproducible? why do o... - Carole Goble
Keynote given by Carole Goble on 23rd July 2013 at ISMB/ECCB 2013
http://www.iscb.org/ismbeccb2013
How could we evaluate research and researchers? Reproducibility underpins the scientific method: at least in principle if not in practice. The willing exchange of results and the transparent conduct of research can only be expected up to a point in a competitive environment. Contributions to science are acknowledged, but not if the credit is for data curation or software. From a bioinformatics viewpoint, how far could our results be reproducible before the pain is just too high? Is open science a dangerous, utopian vision or a legitimate, feasible expectation? How do we move bioinformatics from a world where results are post-hoc "made reproducible" to one where they are pre-hoc "born reproducible"? And why, in our computational information age, do we communicate results through fragmented, fixed documents rather than cohesive, versioned releases? I will explore these questions drawing on 20 years of experience in both the development of technical infrastructure for Life Science and the social infrastructure in which Life Science operates.
Combining Explicit and Latent Web Semantics for Maintaining Knowledge Graphs - Paul Groth
A look at how thinking about Web data and the sources of semantics can help drive decisions on combining latent and explicit knowledge, with examples from Elsevier and many pointers to related work.
Application of recently developed FAIR metrics to the ELIXIR Core Data Resources - Pistoia Alliance
The FAIR (Findable, Accessible, Interoperable and Reusable) principles aim to maximize the discovery and reuse of digital resources. Using recently developed software and metrics to assess FAIRness and supported through an ELIXIR Implementation Study, Michel worked with a subset of ELIXIR Core Data Resources to apply these technologies. In this webinar, he will discuss their approach, findings, and lessons learned towards the understanding and promotion of the FAIR principles.
The Roots: Linked data and the foundations of successful Agriculture Data - Paul Groth
Some thoughts on successful data for the agricultural domain. Keynote at Linked Open Data in Agriculture
MACS-G20 Workshop in Berlin, September 27th and 28th, 2017 https://www.ktbl.de/inhalte/themen/ueber-uns/projekte/macs-g20-loda/lod/
Considerations and challenges in building an end-to-end microbiome workflow - Eagle Genomics
Many of the data management and analysis challenges in microbiome research are shared with genomics and other life-science big-data disciplines. However, there are aspects that are specific: some are intrinsic to microbiome data, some relate to the maturity of the field, and others relate to extracting business value from the data.
FAIRification experience: clarifying the semantics of data matrices - Pistoia Alliance
This webinar presents the Statistics Ontology (STATO), a semantic framework to support the creation of standardized analysis reports and to help with the review of results in the form of data matrices. STATO includes a hierarchy of classes and a vocabulary for annotating the statistical methods used in life, natural and biomedical science investigations, text mining and statistical analyses.
The Seven Deadly Sins of Bioinformatics - Duncan Hull
Keynote talk at Bioinformatics Open Source Conference (BOSC) Special Interest Group at the 15th Annual International Conference on Intelligent Systems for Molecular Biology (ISMB 2007) in Vienna, July 2007 by Carole Goble, University of Manchester.
Semantic web technologies offer a potential mechanism for the representation and integration of thousands of biomedical databases. Many of these databases offer cross-references to other data sources, but these are generally incomplete and prone to error. In this paper, we conduct an empirical analysis of the link structure of life science Linked Data, obtained from the Bio2RDF project. Three different link graphs for datasets, entities and terms are characterized by degree, connectivity, and clustering metrics, and their correlation is measured as well. Furthermore, we utilize the symmetry and transitivity of entity links to build a benchmark and evaluate several popular entity matching approaches. Our findings indicate that the life science data network can help find hidden links, can be used to validate links, and may offer a mechanism to integrate a wider set of resources to support biomedical knowledge discovery.
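A toy version of the link-graph characterization this abstract describes can make the metrics concrete. The sketch below builds a small undirected dataset-link graph and computes node degree and the local clustering coefficient, two of the measures named above; the dataset names and edges are illustrative, not the actual Bio2RDF graph.

```python
# Characterize a tiny dataset-link graph by degree and clustering coefficient.
from itertools import combinations

# Undirected edges between (illustrative) life-science datasets.
edges = {("drugbank", "uniprot"), ("drugbank", "kegg"),
         ("uniprot", "kegg"), ("kegg", "pathwaycommons")}

def neighbors(node):
    """All datasets directly linked to `node`, in either direction."""
    return ({b for a, b in edges if a == node}
            | {a for a, b in edges if b == node})

def degree(node):
    return len(neighbors(node))

def clustering(node):
    """Fraction of a node's neighbor pairs that are themselves linked."""
    nbrs = neighbors(node)
    if len(nbrs) < 2:
        return 0.0
    pairs = list(combinations(sorted(nbrs), 2))
    linked = sum((a, b) in edges or (b, a) in edges for a, b in pairs)
    return linked / len(pairs)

print(degree("kegg"), clustering("drugbank"))  # 3 1.0
```

On the real Bio2RDF graphs the same metrics are computed at scale, but the definitions are exactly these.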
Women Who Code-HSV Event:
'An Introduction to Machine Learning and Genomics'. Dr. Lasseigne will introduce the R programming language and the foundational concepts of machine learning with real-world examples including applications in the field of genomics with an emphasis on complex human disease research.
Brittany Lasseigne, PhD, is a postdoctoral fellow in the lab of Dr. Richard Myers at the HudsonAlpha Institute for Biotechnology and a 2016-2017 Prevent Cancer Foundation Fellow. Dr. Lasseigne received a BS in biological engineering from the James Worth Bagley College of Engineering at Mississippi State University and a PhD in biotechnology science and engineering from The University of Alabama in Huntsville. As a graduate student, she studied the role of epigenetics and copy number variation in cancer, identifying novel diagnostic biomarkers and prognostic signatures associated with kidney cancer. In her current position, Dr. Lasseigne’s research focus is the application of genetics and genomics to complex human diseases. Her recent work includes the identification of gene variants linked to ALS, characterization of gene expression patterns in schizophrenia and bipolar disorder, and development of non-invasive biomarker assays. Dr. Lasseigne is currently focused on integrating genomic data across cancers with functional annotations and patient information to explore novel mechanisms in cancer etiology and progression, identify therapeutic targets, and understand genomic changes associated with patient survival. Based upon those analyses, she is creating tools to share with the scientific community.
Digital transformation of translational medicine - Eagle Genomics
Anthony Finbow, Executive Chairman, and William Spooner, Chief Science Officer, discuss Eagle Genomics' software product, marketed at pharmaceutical and biotech companies, which enables radical improvements in the productivity of scientific research.
CINECA webinar slides: Making cohort data FAIR - CINECAProject
Cohort studies, which recruit groups of individuals who share common characteristics and follow them over a period of time, are a robust and essential method in biomedical research for understanding the links between risk factors and diseases. Through questionnaires, medical assessments, and other interactions, voluminous and complex data are collected about the study participants. While cohort studies present a treasure trove of data, the data is often not FAIR (findable, accessible, interoperable and reusable). First, due to the sensitive and private nature of medical information, cohort data are often access controlled, and due to the lack of information about the studies (metadata), one often needs to dig deep to know what data is available in a cohort study. Therefore, many cohort datasets suffer from findability and accessibility problems. Second, data collection is often performed with instruments and data specifications tailored to the study. As a result, combining data across cohorts, even ones with similar characteristics, is difficult, making interoperability and reusability a challenge. In this presentation, we will explore several informatics techniques, such as the use of ontologies, to make cohort data more FAIR. We will also consider the implications of making cohort data more open and the ethical and governance issues associated with open science benefit sharing.
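The ontology-based harmonization mentioned above can be sketched very simply: two cohorts name the same variable differently, and mapping both local names to a shared ontology identifier makes their records poolable. The variable names, record values, and the EFO term ID below are illustrative assumptions, not taken from any real cohort.

```python
# Each cohort's local codebook, mapped by curators to a shared ontology term.
# "EFO:0006335" is used here as an illustrative ID for systolic blood pressure.
COHORT_A_MAP = {"sys_bp": "EFO:0006335"}
COHORT_B_MAP = {"SBP_mmHg": "EFO:0006335"}

def harmonize(record, mapping):
    """Rewrite a record's local variable names to ontology identifiers."""
    return {mapping.get(name, name): value for name, value in record.items()}

a = harmonize({"sys_bp": 128}, COHORT_A_MAP)
b = harmonize({"SBP_mmHg": 131}, COHORT_B_MAP)

# After harmonization, the two cohorts can be pooled on the shared key.
pooled = [r["EFO:0006335"] for r in (a, b)]
print(pooled)  # [128, 131]
```

In practice the mapping step is the hard, curator-driven part; the payoff is that downstream analysis code only ever sees the shared identifiers.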
This webinar is part of the “How FAIR are you” webinar series and hackathon, which aim at increasing and facilitating the uptake of FAIR approaches into software, training materials and cohort data, to facilitate responsible and ethical data and resource sharing and implementation of federated applications for data analysis.
The CINECA webinar series aims to discuss ways to address common challenges and share best practices in the field of cohort data analysis, as well as to disseminate CINECA project results. All CINECA webinars include an audience Q&A session during which attendees can ask questions and make suggestions. Please note that all webinars are recorded and available for later viewing.
This webinar took place on 17th February 2021 and is part of the CINECA webinar series.
For previous and upcoming CINECA webinars see:
https://www.cineca-project.eu/webinars
dkNET Webinar - FAIR Data Require Better Metadata: The Case for CEDAR, 11/13/2020 - dkNET
Abstract
With the explosion of interest in open science, the past few years have overflowed with discussions of making scientific data “FAIR”—findable, accessible, interoperable, and reusable. The problem is that most scientific datasets are by no means FAIR. When left to their own devices, scientists do a terrible job creating the metadata that describe the experimental datasets that make their way to online repositories. The lack of standardization makes it extremely difficult for other investigators to locate relevant datasets, to reanalyze them, and to integrate those datasets with other data. There is an urgent need to make it easy for investigators to author metadata that adhere to community standards and that describe datasets in reproducible terms. The Center for Expanded Data Annotation and Retrieval (CEDAR) is developing technology with the goal of doing just that. Although it will take more than technology to make data FAIR, solid infrastructure is an essential prerequisite. CEDAR demonstrates the value of making it easy for scientists to author metadata that are complete, comprehensive, and standardized.
The top 3 key questions that CEDAR can answer:
1. How can I describe my experiment in a way that will allow other investigators to find my data?
2. What are the essential metadata fields needed to describe an experiment that uses a method such as RNA-Seq?
3. How can I easily enter metadata that are acceptable to a repository such as NCBI’s BioSample database?
Presenter: Mark Musen, PhD, Professor of Biomedical Informatics and of Biomedical Data Science at Stanford University, and Director of the Stanford Center for Biomedical Informatics Research
dkNET Webinar Information: https://dknet.org/about/webinar
Making it Easier, Possibly Even Pleasant, to Author Rich Experimental Metadata - Michel Dumontier
Biomedical researchers will remain stymied in their ability to take full advantage of the Big Data revolution if they can never find the datasets that they need to analyze, if there is a lack of clarity about what particular datasets contain, and if data are insufficiently described.
CEDAR, an NIH BD2K Center of Excellence, aims to develop methods and tools to vastly ease the burden of authoring good experimental metadata, and to maximally use this information to zero in on datasets of interest.
Professor Carole Goble, University of Manchester, talks at the RIN "Research data: policies & behaviour" event as part of a series on Research Information in Transition.
The Center for Expanded Data Annotation and Retrieval (CEDAR) has developed a suite of tools and services that allow scientists to create and publish metadata describing scientific experiments. Using these tools and services—referred to collectively as the CEDAR Workbench—scientists can collaboratively author metadata and submit them to public repositories. A key focus of our software is semantically enriching metadata with ontology terms. The system combines emerging technologies, such as JSON-LD and graph databases, with modern software development technologies, such as microservices and container platforms. The result is a suite of user-friendly, Web-based tools and REST APIs that provide a versatile end-to-end solution to the problems of metadata authoring and management. This talk presents the architecture of the CEDAR Workbench and focuses on the technology choices made to construct an easily usable, open system that allows users to create and publish semantically enriched metadata in standard Web formats.
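The JSON-LD approach described above can be illustrated with a tiny, self-contained record: ordinary JSON whose `@context` maps field names to ontology IRIs, so that a field like "organism" resolves to a machine-readable term. This is a sketch of the general pattern, not CEDAR's actual template schema; the field names are illustrative, while the OBI and NCBITaxon IRIs are real ontology identifiers.

```python
# Build and serialize a minimal semantically enriched metadata record.
import json

metadata = {
    "@context": {
        "title": "http://purl.org/dc/terms/title",
        "organism": "http://purl.obolibrary.org/obo/OBI_0100026",
    },
    "title": "RNA-Seq of mouse liver under caloric restriction",
    "organism": {
        # The value is an ontology term, not a free-text string.
        "@id": "http://purl.obolibrary.org/obo/NCBITaxon_10090",
        "rdfs:label": "Mus musculus",
    },
}

serialized = json.dumps(metadata, indent=2)
print(serialized)
```

Because the record is standard JSON-LD, it can be loaded into a graph database or validated against a template, which is what makes this representation a natural fit for the microservice architecture the talk describes.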
Rafael C Jimenez presents the Omics Discovery Index | OSFair2017 Workshop
Workshop title: How FAIR friendly is your data catalogue?
Workshop overview:
This workshop will build upon the work planned by the EOSCpilot data interoperability task and the BlueBridge workshop held on April 3 at the RDA meeting. We will investigate common mechanisms for interoperation of data catalogues that preserve established community standards, norms and resources, while simplifying the process of being/becoming FAIR. Can we have a simple interoperability architecture based on a common set of metadata types? What are the minimum metadata requirements to expose FAIR data to EOSC services and EOSC users?
DAY 3 - PARALLEL SESSION 6 & 7
Ontology-Driven Clinical Intelligence: Removing Data Barriers for Cross-Disci... - Remedy Informatics
The presentation describes how Remedy Informatics is advocating and innovating "flexible standardization" through an ontology-driven approach to clinical research. You will see in greater detail how a foundational, standardized Mosaic Ontology can be extended for more specific research applications and even more specific and focused disease research.
Similar to CEDAR work bench for metadata management (20)
It seems that AI is becoming a buzzword, much like design thinking. Everyone is talking about AI or wants to have AI, and sees all the ideas and benefits. That's fine, but how do you get started, and what's different now? Three innovations have finally put AI on the fast track: Big Data, with the internet and sensors everywhere; massive computing power, especially through the cloud; and breakthrough algorithms, so that computers can be trained with deep learning to accomplish more sophisticated tasks on their own. If you use new technology, you need to explore and know what's possible. Design thinking helps to outline the steps and define the ways in which you're going to create the solution, starting with mapping the customer journey and defining who will be using the service enhanced with intelligent technology, or who will benefit and gain value from it. We discuss how these two worlds are coming together, and how you can get started transforming your venture with Artificial Intelligence using Design Thinking.
Speaker: Claudio Mirti, Principal Solution Specialist – Data & AI, Microsoft
Knowledge Graphs - Ilaria Maresi, The Hyve, 23 Apr 2020 - Pistoia Alliance
Data for drug discovery and healthcare is often trapped in silos which hampers effective interpretation and reuse. To remedy this, such data needs to be linked both internally and to external sources to make a FAIR data landscape which can power semantic models and knowledge graphs.
This presentation reviewed the challenges in identifying, acquiring and utilizing research data in relation to an evolving data market. Strategic solutions were examined in which the FAIR principles play a key role in the future of data management.
FAIR Webinar, Ted Slater: Progress Towards Commercial FAIR Data Products and ... - Pistoia Alliance
Elsevier is a global information analytics business that helps institutions and professionals advance healthcare and open science to improve performance for the benefit of humanity.
In this webinar, we discuss how Elsevier is increasingly leveraging the FAIR Guiding Principles to improve its products and services to better serve the scientific community.
Implementing Blockchain Applications in Healthcare - Pistoia Alliance
Blockchain technology can revolutionise the way information is exchanged between parties by bringing an unprecedented level of security and trust to these transactions. The technology is finding its way into multiple use cases, but we have yet to see full adoption and real-world business implementation in the healthcare industry.
In this webinar we will explore the main challenges and considerations for the implementation of Blockchain technology in Healthcare use cases. This is the third webinar in our Blockchain Education series.
Building Trust and Accountability - The Role User Experience Design Can Play ... - Pistoia Alliance
In this webinar our panel of UX specialists gives a brief introduction to User Experience before presenting the design opportunities UX can bring to AI. We all know that AI has great potential, but it has some significant hurdles to overcome, not least the human aspects of trust and the ethical considerations involved in designing for the life sciences.
PA Webinar on Benefits & Costs of FAIR Implementation in Life Sciences - Pistoia Alliance
The slides from the Pistoia Alliance Debates Webinar, in which a panel of experts from technology support providers and the biopharma industry shared their views on the benefits and costs of FAIR implementation for the life science industry.
Creating novel drugs is an extraordinarily hard and complex problem.
One of the many challenges in drug design is the sheer size of the search space for novel chemical compounds. Scientists need to find molecules that are active toward a biological target or pathway and at the same time have acceptable ADMET properties.
There is now considerable research going on using various AI and ML approaches to tackle these challenges.
Our distinguished speakers, Drs. Alex Tropsha and Ola Engkvist, will discuss their recent work in Drug Design involving Deep Reinforcement Learning and Neural Networks, and will answer questions from the audience on the current state of the research in the field.
Speakers:
Prof Alex Tropsha, Professor at University of North Carolina at Chapel Hill, USA
Dr. Ola Engkvist, Associate Director at AstraZeneca R&D, Gothenburg, Sweden
The slides from the continuing part of the Pistoia Alliance's drive to improve education and communication around new technologies for life science professionals. This webinar explored how blockchain/DLT and IoT could come together to add even more trust to the GxP domain. If you want to know more about how these new technologies could help enhance GxP compliance, this webinar will give you much food for thought.
This talk presents an overview of the philosophy and ongoing work of the PhUSE project “Clinical Trials Results as Resource Description Framework.” The team is converting data from the CDISC Study Data Tabulation Model (SDTM) to graph data using an ontology-based approach. The wider implications of this work will be discussed, along with deployment strategies within and beyond the industry.
Pistoia Alliance Harmonizing FAIR Data Catalog Approaches Webinar - Pistoia Alliance
Multiple groups in the life sciences community have started their journey towards data FAIR-ification by implementing data catalogs, a clear first step towards finding your data. While in many cases the approaches are quite similar in both origin and intent, differing implementations could end up hampering interoperability and reuse. The Pistoia Alliance and the Linked Data Community of Practice hosted a panel discussion describing three implementations and their downstream goals:
[1] Pharma cross-omics data catalogs,
[2] Clinical data catalogs
[3] Bioschemas for dataset discoverability on the inter/intranet
Joint Pistoia Alliance & PRISME AI in Pharma Webinar, 18 Oct 2018 - Pistoia Alliance
To advance machine-learning-driven analytic approaches, having access to more data is better. To build increasingly large patient-level datasets, researchers require the pooling of data from participants across the healthcare ecosystem.
Common requirements and technical design patterns have emerged from company-specific and industry consortia efforts, forming underlying patterns that make up an overall Reference Architecture for data that can ultimately feed new analytics and Machine Learning.
Pistoia Alliance Datathon for Drug Repurposing for Rare Diseases - Pistoia Alliance
As part of the Pistoia Alliance Centre of Excellence for AI in Life Sciences, we are running a datathon.
The Rare Disease Drug Repurposing Datathon is your chance to advance knowledge of rare diseases and to illustrate best practices in data science. Are you ready to help make a difference — and to showcase your organization’s data science work and skills?
As a run-up to the Pistoia Alliance Blockchain Bootcamp in October, we are pleased to bring you an in-depth introduction to blockchain technology. Hear Wolfgang Prinz and Wolfgang Gräther from Fraunhofer FIT provide an explanation of the underlying technology, the current uses of blockchain in other industries, and when blockchain is appropriate for various use cases, followed by a demo.
Pistoia Alliance Webinar - Demystifying AI: Centre of Excellence for AI Webina... - Pistoia Alliance
The Pistoia Alliance launched its Centre of Excellence for Artificial Intelligence (AI) in Life Sciences, where we hope to bring together best practice, adoption strategies, and hackathons covering a range of challenges.
Over the coming months we will be hosting a series of topics and speakers giving their perspectives on the role of Artificial & Augmented Intelligence in Life Sciences and Healthcare.
The topics will cover some of the current challenges, user stories, and the value of using AI in life sciences. If you want to get involved in this series as a speaker, or to suggest topics, please get in touch.
Webinar 1 focused on the following:
A Brief History
Big Data/ML/DL/AI - fundamentals and concepts
Data Fidelity importance
Some best practices
The Pistoia Alliance is examining the challenges of the Faster Safe Companion Diagnostics (CDx) by Aligning Discovery & Clinical Data in the Regulatory Domain.
The slides discuss whether the data standards used in the research environment could be better aligned with those used in the regulated environment. If so, the time and cost of developing NGS-based CDx could be reduced.
The slides from our recent webinar on how User Experience can build better software for life sciences. The webinar can be viewed here:
http://www.pistoiaalliance.org/pistoia-alliance-debates-webinar-series/#WatchAgain
As Europe's leading economic powerhouse and the fourth-largest economy globally, Germany stands at the forefront of innovation and industrial might. Renowned for its precision engineering and high-tech sectors, Germany's economic structure is heavily supported by a robust service industry, accounting for approximately 68% of its GDP. This economic clout and strategic geopolitical stance position Germany as a focal point in the global cyber threat landscape.
In the face of escalating global tensions, particularly those emanating from geopolitical disputes with nations like Russia and China, Germany has witnessed a significant uptick in targeted cyber operations. Our analysis indicates a marked increase in cyberattack sophistication aimed at critical infrastructure and key industrial sectors. These attacks range from ransomware campaigns to Advanced Persistent Threats (APTs), threatening national security and business integrity.
🔑 Key findings include:
🔍 Increased frequency and complexity of cyber threats.
🔍 Escalation of state-sponsored and criminally motivated cyber operations.
🔍 Active dark web exchanges of malicious tools and tactics.
Our comprehensive report delves into these challenges, using a blend of open-source and proprietary data collection techniques. By monitoring activity on critical networks and analyzing attack patterns, our team provides a detailed overview of the threats facing German entities.
This report aims to equip stakeholders across public and private sectors with the knowledge to enhance their defensive strategies, reduce exposure to cyber risks, and reinforce Germany's resilience against cyber threats.
Empowering the Data Analytics Ecosystem: A Laser Focus on Value
The data analytics ecosystem thrives when every component functions at its peak, unlocking the true potential of data. Here's a laser focus on key areas for an empowered ecosystem:
1. Democratize Access, Not Data:
Granular Access Controls: Provide users with self-service tools tailored to their specific needs, preventing data overload and misuse.
Data Catalogs: Implement robust data catalogs for easy discovery and understanding of available data sources.
2. Foster Collaboration with Clear Roles:
Data Mesh Architecture: Break down data silos by creating a distributed data ownership model with clear ownership and responsibilities.
Collaborative Workspaces: Utilize interactive platforms where data scientists, analysts, and domain experts can work seamlessly together.
3. Leverage Advanced Analytics Strategically:
AI-powered Automation: Automate repetitive tasks like data cleaning and feature engineering, freeing up data talent for higher-level analysis.
Right-Tool Selection: Strategically choose the most effective advanced analytics techniques (e.g., AI, ML) based on specific business problems.
4. Prioritize Data Quality with Automation:
Automated Data Validation: Implement automated data quality checks to identify and rectify errors at the source, minimizing downstream issues.
Data Lineage Tracking: Track the flow of data throughout the ecosystem, ensuring transparency and facilitating root cause analysis for errors.
5. Cultivate a Data-Driven Mindset:
Metrics-Driven Performance Management: Align KPIs and performance metrics with data-driven insights to ensure actionable decision making.
Data Storytelling Workshops: Equip stakeholders with the skills to translate complex data findings into compelling narratives that drive action.
Benefits of a Precise Ecosystem:
Sharpened Focus: Precise access and clear roles ensure everyone works with the most relevant data, maximizing efficiency.
Actionable Insights: Strategic analytics and automated quality checks lead to more reliable and actionable data insights.
Continuous Improvement: Data-driven performance management fosters a culture of learning and continuous improvement.
Sustainable Growth: Empowered by data, organizations can make informed decisions to drive sustainable growth and innovation.
By focusing on these precise actions, organizations can create an empowered data analytics ecosystem that delivers real value by driving data-driven decisions and maximizing the return on their data investment.
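As one concrete take on the "Automated Data Validation" point above, here is a minimal Python sketch of a rule-driven quality check that flags errors at the source. The column names and rules are hypothetical:

```python
# Minimal sketch of an automated data-quality check: each column gets a
# predicate, and every violating (row, column, value) triple is reported.
def validate_rows(rows, rules):
    """Return a list of (row_index, column, value) for every rule violation."""
    errors = []
    for i, row in enumerate(rows):
        for column, is_valid in rules.items():
            value = row.get(column)
            if not is_valid(value):
                errors.append((i, column, value))
    return errors

# Hypothetical rules for two columns.
rules = {
    "age": lambda v: isinstance(v, int) and 0 <= v <= 120,
    "email": lambda v: isinstance(v, str) and "@" in v,
}

rows = [
    {"age": 34, "email": "a@example.org"},   # passes both checks
    {"age": -5, "email": "not-an-email"},    # fails both checks
]

print(validate_rows(rows, rules))
# → [(1, 'age', -5), (1, 'email', 'not-an-email')]
```

Running such checks in the ingestion pipeline, rather than at analysis time, is what keeps errors from propagating downstream.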
Techniques to optimize the PageRank algorithm usually fall into two categories: reducing the work per iteration, and reducing the number of iterations. These goals are often at odds with one another. Skipping computation on vertices which have already converged has the potential to save iteration time. Skipping in-identical vertices, those with the same in-links, helps avoid duplicate computations and thus could also reduce iteration time. Road networks often have chains which can be short-circuited before PageRank computation to improve performance, since the final ranks of chain nodes are easy to calculate afterwards; this could reduce both the iteration time and the number of iterations. If a graph has no dangling nodes, the PageRank of each strongly connected component can be computed in topological order. This could help reduce the iteration time and the number of iterations, and also enable multi-iteration concurrency in PageRank computation. The combination of all of the above methods is the STICD algorithm [sticd]. For dynamic graphs, unchanged components whose ranks are unaffected can be skipped altogether.
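The first of those ideas, skipping computation on already-converged vertices, can be sketched in a few lines of Python. This is an illustrative simplification, not the STICD algorithm itself:

```python
# Power-iteration PageRank that only recomputes "active" vertices, i.e.
# those whose rank (or whose predecessors' ranks) changed last iteration.
def pagerank(out_links, d=0.85, tol=1e-10, max_iter=100):
    n = len(out_links)
    in_links = {v: [] for v in out_links}
    for u, nbrs in out_links.items():
        for v in nbrs:
            in_links[v].append(u)
    rank = {v: 1.0 / n for v in out_links}
    active = set(out_links)               # vertices still being updated
    for _ in range(max_iter):
        if not active:
            break                         # everything has converged
        new_active = set()
        for v in list(active):
            r = (1 - d) / n + d * sum(
                rank[u] / len(out_links[u]) for u in in_links[v]
            )
            if abs(r - rank[v]) > tol:
                new_active.add(v)                 # v itself changed
                new_active.update(out_links[v])   # successors may change next
            rank[v] = r
        active = new_active
    return rank

graph = {0: [1], 1: [2], 2: [0]}          # a 3-cycle: all ranks equal
ranks = pagerank(graph)
print(ranks)                              # each rank ≈ 1/3
```

Note the assumption of no dangling nodes (every key has at least one out-link); a production implementation would also redistribute dangling mass.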
5. Khatri has used GEO to build a career from the analysis of “other people’s data”
Khatri has a pipeline for
• Searching for datasets in the GEO database
• Elucidating genomic signals
• Confirming those signals in “validation” data sets
• Making discoveries without ever performing a primary experiment
6. Khatri has reused public data sets to identify genomic signatures …
• For incipient sepsis
• For active tuberculosis
• For distinguishing viral from bacterial respiratory infection
• For rejection of organ transplants
… and he has never touched a pipette!
7. But you can be a data parasite only if scientific data are …
• Available in an accessible repository
• Findable through some sort of search facility
• Retrievable in a standard format
• Self-describing, so that third parties can make sense of the data
• Intended to be reused, and to outlive the experiment for which they were collected
8. But data in most repositories are a mess!
• Investigators view their work as publishing papers, not leaving a legacy of reusable data
• Sponsors—both public and private—may require data sharing, but they do not explicitly pay for it
• Creating the metadata to describe data sets is unbearably hard
10. age
Age
AGE
`Age
age (after birth)
age (in years)
age (y)
age (year)
age (years)
Age (years)
Age (Years)
age (yr)
age (yr-old)
age (yrs)
Age (yrs)
age [y]
age [year]
age [years]
age in years
age of patient
Age of patient
age of subjects
age(years)
Age(years)
Age(yrs.)
Age, year
age, years
age, yrs
age.year
age_years
Failure to use standard terms often makes datasets impossible to search
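To illustrate why this matters, a repository could collapse many of these variants to one canonical field name with a few normalization rules. The sketch below is a hypothetical illustration, not any repository's actual pipeline:

```python
import re

# Collapse spelling variants of an 'age' field to a canonical name.
# The rules below are illustrative, tuned to the variants on the slide.
def canonical_field(name):
    s = name.strip().lstrip("`").lower()
    # Treat punctuation, brackets, and underscores as word separators.
    s = re.sub(r"[\s._,()\[\]]+", " ", s).strip()
    # Drop trailing qualifier phrases such as "in years", "of patient".
    s = re.sub(r"\b(in|after|of)\b.*$", "", s).strip()
    # Drop bare unit words such as "y", "yr", "years".
    s = re.sub(r"\b(y|yr|yrs|year|years|birth)\b\.?", "", s).strip()
    return s

variants = ["age", "Age", "AGE", "`Age", "age (years)", "Age (Years)",
            "age [y]", "age in years", "Age of patient", "age_years",
            "Age, year", "age.year"]
print({canonical_field(v) for v in variants})  # → {'age'}
```

Better still, of course, is to prevent the drift in the first place by binding the field to a controlled term at authoring time, which is exactly CEDAR's approach.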
11. Metadata from NCBI BioSample Stink!
• 73% of “Boolean” metadata values are not actually true or false
– nonsmoker, former-smoker
• 26% of “integer” metadata values cannot be parsed into integers
– JM52, UVPgt59.4, pig
• 68% of metadata entries that are supposed to represent terms from biomedical ontologies do not actually do so
– presumed normal, wild_type
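The kind of check behind these statistics is easy to sketch: does a value declared as a boolean or an integer actually parse as one? The helper names below are ours; the sample values come from the slide:

```python
# Check whether declared "boolean" and "integer" metadata values parse.
def is_boolean(value):
    return value.strip().lower() in {"true", "false", "yes", "no", "1", "0"}

def is_integer(value):
    try:
        int(value.strip())
        return True
    except ValueError:
        return False

boolean_values = ["true", "nonsmoker", "former-smoker"]
integer_values = ["42", "JM52", "UVPgt59.4", "pig"]

print([v for v in boolean_values if not is_boolean(v)])
# → ['nonsmoker', 'former-smoker']
print([v for v in integer_values if not is_integer(v)])
# → ['JM52', 'UVPgt59.4', 'pig']
```

Even this trivial validation, applied at submission time, would have caught the malformed values the slide describes.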
13. At a minimum, science needs
• Open, online access to experimental data sets
• Mechanisms to search for metadata to find relevant experimental results
• Annotation of online data sets with adequate metadata
• Use of controlled terms in metadata whenever possible
• A culture that cares about data sharing, secondary analyses, and the ability to make new discoveries from legacy data
25. When we look at experimental data …
• How can we tell what the data are about?
• How can we search for additional data that we might be interested in?
• How can we group data into similar categories?
26. The microarray community took the lead in standardizing biological metadata
(DNA microarray figure)
• What was the substrate of the experiment?
• What array platform was used?
• What were the experimental conditions?
28. But it didn’t stop with MIAME!
• Minimal Information About T Cell Assays (MIATA)
• Minimal Information Required in the Annotation of biochemical Models (MIRIAM)
• MINImal MEtagenome Sequence analysis Standard (MINIMESS)
• Minimal Information Specification For In Situ Hybridization and Immunohistochemistry Experiments (MISFISHIE)
30. Minimal Information Guidelines are not Models
• MIAME and its kin specify only the “kinds of things” that investigators should include in their metadata
• They do not provide a detailed list of standard metadata elements
• They do not provide datatypes for valid metadata entries
• It takes work to convert a prose checklist into a computable model
31. To make sense of metadata …
• We need computer-based standards
– For what we want to say about experimental data
– For the language that we use to say those things
– For how we structure the metadata
• We need a scientific culture that values making experimental data accessible to others
32. Requirement #3: Make it palatable—maybe even fun—to describe experiments completely and consistently
34. A Metadata Ecosystem
• Scientists perform experiments and generate datasets
• Community-based standards groups create minimal information models (like MIAME) to define requirements for annotating particular kinds of experimental data in a uniform manner
• CEDAR users create formal metadata templates based on community standards
• CEDAR users fill in metadata templates to create instances of metadata
• CEDAR stores the datasets (and the metadata instances) in public repositories
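The template/instance split in this ecosystem can be sketched in a few lines of Python: a community template defines the fields and their allowed terms, and a filled-in instance is validated against it. The field names, allowed terms, and rules below are hypothetical, not CEDAR's actual model:

```python
# A toy metadata template: each field declares a type, and controlled
# fields additionally declare the set of allowed ontology terms.
template = {
    "organism": {"type": "controlled",
                 "allowed": {"Homo sapiens", "Mus musculus"}},
    "tissue":   {"type": "controlled",
                 "allowed": {"liver", "lung", "blood"}},
    "age":      {"type": "integer"},
}

def validate(instance, template):
    """Return a list of human-readable problems with a metadata instance."""
    problems = []
    for field, spec in template.items():
        value = instance.get(field)
        if value is None:
            problems.append(f"missing: {field}")
        elif spec["type"] == "controlled" and value not in spec["allowed"]:
            problems.append(f"not a controlled term: {field}={value!r}")
        elif spec["type"] == "integer" and not isinstance(value, int):
            problems.append(f"not an integer: {field}={value!r}")
    return problems

instance = {"organism": "Homo sapiens", "tissue": "Liver", "age": "34"}
print(validate(instance, template))
# → ["not a controlled term: tissue='Liver'", "not an integer: age='34'"]
```

The point of the ecosystem is that the template, not each individual scientist, carries the community standard, so every instance filled against it is uniform by construction.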
50. (This slide repeats the list of “age” field variants shown on slide 10.)
52. AIRR is providing our first experience uploading CEDAR-authored metadata directly to NCBI
55. LINCS Repository (workflow diagram). 1. Template creation: curators, drawing on biomedical ontologies, use the Template Designer to create LINCS templates. 2. Metadata editing: scientists who generate experimental data use the Metadata Editor to enter metadata. 3. Submission: the metadata and data are submitted to the LINCS repository. LINCS: CEDAR provides access to a central data-coordinating center.
58. With CEDAR, the data parasites of the world will be able to learn from data that are “cleaner,” better organized, and more searchable
59. Scientists will learn when new results become available that are relevant to their work, and how those results relate to the data that they are already collecting
60. Physicians will be able to find the clinical evidence that best helps them to select treatments for their patients
61. Pharma will know what experiments have already been done—by their own company and by those with which they have merged!
62. Once scientific knowledge and data are disseminated in machine-interpretable form …
• Technology such as CEDAR will assist in the automated “publication” of scientific results online
• Intelligent agents will
– Search and “read” the “literature”
– Integrate information
– Track scientific advances
– Re-explore existing scientific datasets
– Suggest the next set of experiments to perform
– And maybe even do them!