The document discusses the value of ontologies for different organizations. For Roche, ontologies are important for streamlining data integration and knowledge management. The Ontologies Mapping project has helped establish best practices and evaluate tools for ontology mapping. For Eagle Genomics, ontologies are valuable for harmonizing and integrating multi-omics datasets to support computational analyses and novel insights. Being part of the project team has helped Eagle Genomics better semantically enrich and curate scientific data.
Pistoia Alliance Debates: IDMP: It’s all about the patient: enhancing patient... (Pistoia Alliance)
This webinar discusses IDMP (Identification of Medicinal Products) and focuses on substances and their implementation. It provides background on IDMP, timelines for implementation in the EU which is driving standardization, and an example substance record for Brentuximab Vedotin to illustrate the level of detail in substance definitions. The goal of IDMP is to improve patient safety through standardized identification and exchange of information on medicines and ingredients.
The document discusses challenges related to IDMP compliance, including information stored across different systems and formats, duplication of data, and a lack of integration between processes. It then outlines potential solutions to these challenges, such as entity extraction to transform unstructured data, master data management to connect siloed structured data, reference data management to handle semantic issues, and linked data approaches to publish structured, interconnected data using common standards and identifiers. The value of these solutions beyond mere compliance is also discussed in terms of transparency, risk reduction, efficiency and collaboration.
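To make the linked-data idea above concrete, here is a minimal sketch of publishing product facts as triples keyed on shared identifiers, so records from different systems can be joined on the same URI. All namespaces, URIs and values below are invented for illustration and are not real IDMP identifiers:

```python
# Sketch: expressing product facts as subject-predicate-object triples
# that reuse common identifiers, serialized as N-Triples lines.

def to_ntriples(triples):
    """Serialize (subject, predicate, object) tuples as N-Triples lines.
    Objects starting with 'http' are treated as URIs, others as literals."""
    lines = []
    for s, p, o in triples:
        obj = f"<{o}>" if o.startswith("http") else f'"{o}"'
        lines.append(f"<{s}> <{p}> {obj} .")
    return "\n".join(lines)

EX = "http://example.org/"  # hypothetical namespace
triples = [
    (EX + "product/123", EX + "hasSubstance", EX + "substance/456"),
    (EX + "product/123", EX + "authorisedIn", "EU"),
    (EX + "substance/456", EX + "preferredName", "Brentuximab Vedotin"),
]

print(to_ntriples(triples))
```

Because both the product record and the substance record use the same substance URI, a consumer can follow that identifier across datasets, which is the core of the linked-data approach described above.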
Pfizer, GE Healthcare, Novartis Pharma and Boehringer-Ingelheim Pharma Confir... (Torben Haagh)
Want to learn more about how to close your IDMP implementation gaps before July 1st 2016? Then join experts at the International Conference IDMP Implementation - Impact on Data, Systems and Processes, taking place in Berlin on 24 – 25 June 2015.
There are limited places available, so make sure you secure your conference ticket today! Learn more here.
Don't miss this unique opportunity to hear expert presentations, take part in interactive workshops and meet with IDMP experts from around the world including participation from:
• Pfizer Inc. • Boehringer-Ingelheim Pharma GmbH & Co. KG • F. Hoffmann-La Roche Ltd. • Marr Consultancy Ltd • GE Healthcare • A.E.Tiefenbacher GmbH & Co. KG • Bundesinstitut für Arzneimittel und Medizinprodukte • Bundesverband der Arzneimittel-Hersteller e.V. • MEDA Pharma GmbH & Co. KG • Mylan EPD, Inc. • Janssen Pharmaceuticals, Inc. • Novartis Pharma AG and many more.
Want to know more regarding this topic area? Then join us at our
IDMP Implementation - Impact on Data, Systems and Processes
24 – 25 June 2015 | Steigenberger Hotel Berlin, Germany
For more information take a look at our agenda here: http://bit.ly/IDMP_agenda
Or visit our website: http://bit.ly/IDMP_website
The introduction of Identification of Medicinal Products (IDMP), as developed by the International Organization for Standardization (ISO), marks the next phase of evolution for the European drug dictionary project under Article 57.
The European Medicines Agency (EMA) will issue guidance based on five documents published by the ISO, which will require life sciences companies to use a "set of common global standards for data elements, formats, and terminologies for the unique identification and exchange of information on medicines."
Complying with IDMP represents a massive increase in scope and complexity compared with previous iterations. The EMA has divided the timeline for meeting the new requirements into several phases, making this a multi-year project.
Perficient’s expert in IDMP and the EudraVigilance Medicinal Product Dictionary, Mark Thackstone, reviewed everything you need to know in order to successfully comply with IDMP by the fast-approaching deadline:
-Latest information and timelines
-Steps to take to meet regulatory requirements
-Challenges and factors to consider
-What IDMP means in the real world of a typical pharma company
The presentation complemented a 60-minute webinar on ISO IDMP provided by Cunesoft in May 2015. The benefits of a regulatory master data management system are analyzed. The entire webinar was recorded. The latest information on IDMP can be accessed on our website: https://www.phlexglobal.com/idmp
The document discusses the Identification of Medicinal Products (IDMP) standards which will replace the eXtended EudraVigilance Medicinal Product Dictionary (XEVMPD). IDMP has a broader scope than XEVMPD and encompasses regulatory, pharmacovigilance, quality and manufacturing data. It consists of five interrelated standards and will require companies to locate data from across their organizations to comply. Implementation is driven by EU legislation and is scheduled for July 1, 2016, requiring companies to begin strategic preparation now to have systems in place to manage IDMP data.
ISO IDMP: Practical considerations from XEVMPD experience (Qdossier B.V.)
ISO IDMP (Identification of Medicinal Products) is coming! What lessons can we learn from our practical experience with XEVMPD in preparation for IDMP? Topics include data cleaning, managing inconsistencies across product registrations and countries, and controlled vocabularies.
IDMP Implementation - Impact on Data, Systems and Processes. How to cover gap... (Torben Haagh)
ISO IDMP will be mandatory from July 1st 2016, and it will fundamentally change the way the pharmaceutical industry is required to collect, manage and submit relevant data. Now is the last call for all marketing authorisation holders to (re)think their data submission policies and processes, close their IDMP implementation gaps in the next 12 months, and form a clear vision of the challenges lying ahead!
Don’t miss out on the opportunity to get your questions answered, benchmark the stage of your preparation, initiate partnerships and take an active part in designing the RIM community’s future agenda! Join us this summer in Berlin and gain valuable, practical information:
# Learn how to assess and analyse data requirements for the IDMP standards by discussing possible interpretations with our on-site experts from regulatory bodies!
# Benchmark your own IDMP implementation process with peers from both big and mid-size pharma
# Share insights into how the IDMP standards are changing the interactions between IT systems, company departments, contract manufacturers and regulatory agencies
# Discuss and compare experiences with vendors and solution providers offering help to achieve your IDMP implementation goals!
For more information visit our website: http://bit.ly/EventWebsite
If you would like to be part of the conference, you can register now here: http://bit.ly/Register-Event
1) IDMP and RIM aim to harmonize product and regulatory information management but face challenges due to conflicting data sources and definitions between organizations and regions.
2) Implementing IDMP requires mapping diverse internal data systems and formats to a common IDMP data model and identifiers, which is complicated by varying data quality, timeliness, and ownership.
3) A master data management approach is needed to define authoritative sources, ownership, validation requirements, and change control to accurately populate the IDMP data containers and address issues across multiple systems over time.
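The master-data-management approach in point 3 can be sketched roughly as a per-field precedence over authoritative sources. The system and field names below are hypothetical, not an actual IDMP data model:

```python
# Sketch: consolidating one "golden" product record from several source
# systems, picking each field from the most authoritative system that has it.
# Lower list position = more authoritative for that field.
AUTHORITY = {
    "substance_name": ["regulatory_db", "erp", "local_sheet"],
    "strength":       ["erp", "regulatory_db", "local_sheet"],
}

def consolidate(records):
    """records: {system_name: {field: value}} -> consolidated record."""
    golden = {}
    for field, ranking in AUTHORITY.items():
        for system in ranking:
            value = records.get(system, {}).get(field)
            if value is not None:
                golden[field] = value
                break  # stop at the first (most authoritative) hit
    return golden

records = {
    "regulatory_db": {"substance_name": "paracetamol"},
    "erp":           {"substance_name": "PARACETAMOL", "strength": "500 mg"},
}
print(consolidate(records))
# -> {'substance_name': 'paracetamol', 'strength': '500 mg'}
```

A real MDM implementation would add ownership, validation and change control on top of this precedence rule, but the per-field authoritative-source idea is the core of it.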
Pfizer, GE Healthcare, Novartis Pharma and Boehringer-Ingelheim Pharma Confir... (Torben Haagh)
This document advertises an international conference on implementing the IDMP standards for pharmaceutical data. The conference will be held in Berlin in June 2015 and will provide opportunities to:
1) Learn how to assess data requirements and analyze interpretations of the IDMP standards from experts;
2) Benchmark your organization's IDMP implementation process with peers from large and mid-size pharmaceutical companies; and
3) Share insights on how the IDMP standards are changing interactions between IT systems, company functions, and regulatory agencies.
Interactive workshops and case studies from major pharmaceutical companies will also be provided. The conference aims to help companies understand, prioritize, and implement strategies to achieve seamless and fully compliant regulatory information management under the new IDMP standards.
Leveraging Oracle IDMP Enterprise Foundation Suite for Regulatory Compliance (Perficient, Inc.)
IDMP (Identification of Medicinal Products), which will soon be mandated by the European Medicines Agency (EMA) and U.S. Food and Drug Administration (FDA), will enable stakeholders to obtain a comprehensive view of each individual product (e.g., ingredients, marketing and medicinal information, contacts), based on unique codes.
While the journey towards IDMP compliance can be incredibly challenging, industry-specific knowledge and systems play an integral role in meeting the new requirements.
In our webinar, we discussed how the Oracle IDMP Enterprise Foundation Suite can help you be ready in time.
How to Review, Cleanse, and Transform Clinical Data in Oracle InForm (Perficient, Inc.)
When it comes to clinical trials, the consequences of bad data can be severe. Research and development becomes complicated and lives can be put at risk. The need for clinical data to be clean is critical for comprehensive reporting and analysis, ultimately enabling safer drugs and devices to be brought to market faster.
During our 30-minute, no-nonsense webinar, we discussed why and how organizations can leverage Oracle Health Sciences Data Management Workbench (DMW) to revitalize the clinical trial data captured in Oracle Health Sciences InForm.
The IDMP Challenge - Whitepaper on ISO IDMP by Cunesoft (V E R A)
The updated whitepaper on ISO IDMP explains what you need to know during this transition and how Cunesoft's cune-IDMP can help your organization: https://cunesoft.com/en/products/idmp/
PharmaCircle provides global content, analytics, and visualization tools covering the pharmaceutical, biotechnology, and drug delivery industries through its Premium Service platform. The platform offers extensive proprietary data, analysis, and visualizations across various modules to support research and decision-making in R&D, clinical, regulatory, and other areas of these industries. It covers over 120,000 products and candidates in development and includes analytical tools, custom data services, and support from a team of industry analysts, providing a comprehensive single source of information.
FDA Data Integrity: Misconceptions of 21 CFR Part 11 (EduQuest, Inc.)
This document discusses 21 CFR Part 11, which regulates electronic records and signatures. It summarizes that Part 11's original objectives were to facilitate technological improvements without losing data integrity or signature assurance compared to paper. However, misconceptions have arisen due to unclear FDA guidance. Key Part 11 requirements include validation, audit trails, and electronic signatures. FDA inspects for compliance with Part 11 and other regulations regarding computerized recordkeeping.
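The audit-trail requirement mentioned above can be illustrated with a minimal append-only record: every change is recorded as a new entry (who, when, old value, new value) rather than overwriting history. Field names and sample values are illustrative, not regulatory text:

```python
# Sketch of an append-only audit trail for a single data field.
from datetime import datetime, timezone

class AuditedRecord:
    def __init__(self, value, user):
        self.trail = []          # append-only history of changes
        self.value = None
        self.set(value, user, reason="initial entry")

    def set(self, new_value, user, reason):
        # Record the change before applying it; never delete old entries.
        self.trail.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "old": self.value,
            "new": new_value,
            "reason": reason,
        })
        self.value = new_value

rec = AuditedRecord("120 mmHg", user="jsmith")
rec.set("125 mmHg", user="adoe", reason="transcription error")
for entry in rec.trail:
    print(entry["user"], entry["old"], "->", entry["new"], "|", entry["reason"])
```

A compliant system would also protect the trail from modification and tie entries to authenticated electronic signatures; this sketch only shows the append-only record-keeping idea.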
Current labs can greatly benefit from a digital transformation.
FAIR data principles are crucial in this process.
Laying a solid data governance foundation is an invaluable long-term move.
This document provides a summary of a report on global traceability and serialization in the pharmaceutical industry. It discusses the results of a 2014 survey on pharmaceutical serialization and traceability. The main challenges for manufacturers in implementing serialization included cost, integration with existing systems, high-speed printing of unique codes, and inconsistent regulations across countries. Regulations such as the EU Falsified Medicines Directive and the US Drug Supply Chain Security Act aim to improve the security and tracking of pharmaceuticals in the supply chain. Overall, the report examines the progress of serialization efforts and the ongoing challenges faced by the industry.
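One concrete building block behind the serialized codes discussed above is the GS1 mod-10 check digit used in GTINs and related identifiers. This is a sketch of that standard calculation; the sample digits are just an example:

```python
# Sketch: GS1 mod-10 check digit, as used for GTINs.
# Weights alternate 3,1 starting with 3 at the rightmost data digit.

def gs1_check_digit(digits: str) -> int:
    """Compute the GS1 mod-10 check digit for a string of data digits."""
    total = sum(int(d) * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(reversed(digits)))
    return (10 - total % 10) % 10

gtin_data = "400638133393"  # first 12 digits of a GTIN-13
print(gtin_data + str(gs1_check_digit(gtin_data)))  # -> 4006381333931
```

High-speed printing lines verify this digit on every unit, which is one reason code generation and print verification appear among the survey's integration challenges.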
IFPMA-TFDA Workshop on Counterfeit Medicines
‘Integrated Approach Against Fake Medicines’
Session 2: Supply Chain Integrity
On 6th February 2015
At Taipei International Convention Center
Taipei, Taiwan
This document discusses managing the extended research and development (R&D) supply chain for clinical trials. It presents challenges at different stages of the supply chain from active pharmaceutical ingredient (API) manufacturing through investigational medicinal product (IMP) delivery. Key challenges include forecasting and planning given variable patient enrollment, ensuring visibility and integration across outsourced manufacturing steps, translating complex packaging needs from protocols, and managing drug distribution globally. The article presents different models for R&D supply chains, including fully outsourcing the physical chain or adopting a more patient-oriented model.
Data Integrity in a GxP-regulated Environment - Pauwels Consulting Academy (Pauwels Consulting)
On Tuesday, December 6, 2016, our colleague Angelo Rossi, Senior Regulatory Compliance Consultant, gave an interesting presentation about “Data Integrity in a GxP-regulated Environment” at the Brussels Office of Pauwels Consulting in Diegem.
In his presentation, Angelo covered definitions and concepts of data integrity, the change in regulatory focus, lessons learned from recent FDA warning letters, and important highlights of regulations and guidelines. Angelo also presented a practical example of data integrity for a computerized system.
Please contact us at contact@pauwelsconsulting.com or +32 9 324 70 80 if you have any further questions regarding our consulting services in this area.
Practical XEVMPD experience; once upon a time there was a perfectly clean dat... (Qdossier B.V.)
This document discusses challenges with inconsistent drug product data across different databases, regions, and disciplines. It notes inconsistencies in substance naming, active ingredients listed as excipients, and differences in listed excipients between countries for the same product. Maintaining consistent data in the XEVMPD format is challenging due to issues like using controlled vocabularies versus SmPC data and different MA numbers used across countries. Lessons learned include that inconsistencies across systems and SmPCs become visible when consolidating data, and defining consistent metadata elements and values across disciplines is important to achieve consistent records.
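The kind of cross-country consistency check described above can be sketched as grouping registrations per product and flagging fields that carry more than one distinct value. The data below is invented for illustration:

```python
# Sketch: flag products whose registrations disagree on a given field
# (e.g. substance name) across countries.
from collections import defaultdict

def find_inconsistencies(registrations, field):
    """registrations: list of dicts with 'product', 'country' and data fields.
    Returns {product: {value: [countries]}} for products where the field
    has more than one distinct value."""
    by_product = defaultdict(lambda: defaultdict(list))
    for reg in registrations:
        by_product[reg["product"]][reg[field]].append(reg["country"])
    return {p: dict(values) for p, values in by_product.items()
            if len(values) > 1}

regs = [
    {"product": "P1", "country": "DE", "substance": "acetylsalicylic acid"},
    {"product": "P1", "country": "FR", "substance": "aspirin"},
    {"product": "P2", "country": "DE", "substance": "ibuprofen"},
    {"product": "P2", "country": "FR", "substance": "ibuprofen"},
]
print(find_inconsistencies(regs, "substance"))
# -> {'P1': {'acetylsalicylic acid': ['DE'], 'aspirin': ['FR']}}
```

As the summary notes, these inconsistencies only become visible once data from different systems and SmPCs is consolidated; a report like this is typically the first step of the cleaning effort.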
2013 Year of Pharmaceutical Serialization - Get it Right (Michael Stewart)
Pharmaceutical serialization en masse will occur in 2013 due to US and ex-US regulations to track products at the item level. Michael Stewart of PharmTech Inc. shares his insight into project management pitfalls and lets you benefit from his learning curve working with top-10 pharmaceutical manufacturers, contract manufacturers and virtual manufacturers to get ROI and business value in addition to compliance. Turn a perceived cost into an investment.
This document discusses the synergies between regulatory information management (RIM) and identification of medicinal products (IDMP). It argues that RIM and IDMP should be considered together, not separately, as IDMP expands on product data beyond what was traditionally included in RIM. The implementation of IDMP standards will converge various regulatory data initiatives and shape future regulatory submissions that will utilize structured IDMP data instead of documents. RIM systems will benefit from using the IDMP data model to standardize product information captured across systems and sources.
Epitome Technologies provides compliance solutions and computer system validation services to the life sciences industry. It has over a decade of experience in this field and clients across India and other regions. The company is located in Western India and is headed by experienced professionals including engineers and pharmacists. Epitome helps clients with initial validation of computer systems as well as ongoing maintenance and periodic reviews to ensure compliance with regulations from agencies such as US FDA, EU, and others. Its services include validation documentation, testing, and ensuring computer systems meet quality standards and regulatory requirements for electronic records over the lifetime of the systems.
Contract packagers and contract manufacturers have unique challenges when serializing product for the pharmaceutical industry. Michael Stewart, of PharmTech, covers the regulations and project management concerns CMOs and CPOs should address.
Pfizer is a global pharmaceutical company founded in 1849 and headquartered in New York. It produces medicines and vaccines. The document discusses Pfizer's global inventory management strategy, including inventory control techniques like reorder levels and safety stock levels. It also covers Pfizer's focus on efficiency, resilience, agility, quality control, and supply chain best practices and challenges. The presentation was submitted by Milind and team to Prof. Jaison Mathews.
mHealth Israel_Becton Dickinson_US Healthcare Digital Transformation_July 2015 (Levi Shapiro)
Presentation by David Fegygin, VP of Health IT Integration and Strategic Innovation at Becton Dickinson, for mHealth Israel, July 14, 2015, in Tel Aviv.
Apprentice Field Suite is a set of smart glass applications that provide hands-free access to critical information for operators, engineers and scientists in the biopharmaceutical industry. It includes Tandem for remote collaboration, Manuals for accessing standard operating procedures, and a safety application for data collection. The applications help avoid costly downtime, improve process consistency, and enhance worker safety. Case studies show their use reducing travel costs for troubleshooting, and allowing paper-free, always up-to-date access to procedures.
Bioschemas for Aggregating ELIXIR Events - Comms Webinar (Niall Beard)
This document summarizes TeSS, a tool for aggregating and registering training events and materials for ELIXIR. TeSS allows users to search, filter, and discover training events and organize them into packages and workflows. Content from various sites can be distributed via TeSS by marking it up with schema.org tags, which improves search engine optimization. TeSS will also track ELIXIR training metrics and activities to prevent duplicate data entry.
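The schema.org markup approach TeSS relies on can be sketched as a JSON-LD block embedded in an event page, which aggregators then harvest. The event details below are hypothetical:

```python
# Sketch: generating a schema.org Event description as an embedded
# JSON-LD <script> block, the markup style harvesters like TeSS read.
import json

event = {
    "@context": "https://schema.org",
    "@type": "Event",
    "name": "Introduction to Ontology Mapping",  # hypothetical course
    "startDate": "2015-09-01",
    "location": {"@type": "Place", "name": "Hinxton, UK"},
}

markup = ('<script type="application/ld+json">\n'
          + json.dumps(event, indent=2)
          + "\n</script>")
print(markup)
```

Because the same JSON-LD block is also read by search engines, this single annotation serves both the aggregation and the search-engine-optimization benefits the summary mentions.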
1) IDMP and RIM aim to harmonize product and regulatory information management but face challenges due to conflicting data sources and definitions between organizations and regions.
2) Implementing IDMP requires mapping diverse internal data systems and formats to a common IDMP data model and identifiers, which is complicated by varying data quality, timeliness, and ownership.
3) A master data management approach is needed to define authoritative sources, ownership, validation requirements, and change control to accurately populate the IDMP data containers and address issues across multiple systems over time.
Pfizer, GE Healthcare, Novartis Pharma and Boehringer-Ingelheim Pharma Confir...Torben Haagh
This document advertises an international conference on implementing the IDMP standards for pharmaceutical data. The conference will be held in Berlin in June 2015 and will provide opportunities to:
1) Learn how to assess data requirements and analyze interpretations of the IDMP standards from experts;
2) Benchmark your organization's IDMP implementation process with peers from large and mid-size pharmaceutical companies; and
3) Share insights on how the IDMP standards are changing interactions between IT systems, company functions, and regulatory agencies.
Interactive workshops and case studies from major pharmaceutical companies will also be provided. The conference aims to help companies understand, prioritize, and implement strategies to achieve seamless and fully compliant regulatory information management under the new ID
Leveraging Oracle IDMP Enterprise Foundation Suite for Regulatory CompliancePerficient, Inc.
IDMP (Identification of Medicinal Products), which will soon be mandated by the European Medicines Agency (EMA) and U.S. Food and Drug Administration (FDA), will enable stakeholders to obtain a comprehensive view of each individual product (e.g., ingredients, marketing and medicinal information, contacts), based on unique codes.
While the journey towards IDMP compliance can be incredibly challenging, industry-specific knowledge and systems play an integral role in meeting the new requirements.
In our webinar, we discussed how the Oracle IDMP Enterprise Foundation Suite can help you be ready in time.
How to Review, Cleanse, and Transform Clinical Data in Oracle InFormPerficient, Inc.
When it comes to clinical trials, the consequences of bad data can be severe. Research and development becomes complicated and lives can be put at risk. The need for clinical data to be clean is critical for comprehensive reporting and analysis, ultimately enabling safer drugs and devices to be brought to market faster.
During our 30-minute, no-nonsense webinar, we discussed why and how organizations can leverage Oracle Health Sciences Data Management Workbench (DMW) to revitalize the clinical trial data captured in Oracle Health Sciences InForm.
The IDMP Challenge - Whitepaper on ISO IDMP by CunesoftV E R A
The updated whitepaper on ISO IDMP - learn what you need to know during this transition. And how Cunesoft's cune-IDMP can help your organization: https://cunesoft.com/en/products/idmp/
PharmaCircle provides global content, analytics, and visualization tools covering the pharmaceutical, biotechnology, and drug delivery industries through its Premium Service platform which offers extensive proprietary data, analysis, and visualizations across various modules to help users in R&D, clinical, regulatory, and other areas of these industries with their research and decision-making. The Premium Service offers over 120,000 products and candidates in development, analytical tools, custom data services, and support from a team of industry analysts to provide a comprehensive single source of information.
FDA Data Integrity: Misconceptions of 21 CFR Part 11 EduQuest, Inc.
This document discusses 21 CFR Part 11, which regulates electronic records and signatures. It summarizes that Part 11's original objectives were to facilitate technological improvements without losing data integrity or signature assurance compared to paper. However, misconceptions have arisen due to unclear FDA guidance. Key Part 11 requirements include validation, audit trails, and electronic signatures. FDA inspects for compliance with Part 11 and other regulations regarding computerized recordkeeping.
Current labs can greatly benefit from a digital transformation.
FAIR data principles are crucial in this process.
Laying a solid data governance foundation is an invaluable long-term move.
This document provides a summary of a report on global traceability and serialization in the pharmaceutical industry. It discusses the results of a survey on pharmaceutical serialization and traceability in 2014. The main challenges for manufacturers in implementing serialization included cost, integration with existing systems, generating high-speed printing of unique codes, and inconsistent regulations across countries. Regulations like the EU Falsified Medicines Directive and the US Drug Supply Chain Security Act are aiming to improve security and tracking of pharmaceuticals in the supply chain. Overall, the report examines the progress of serialization efforts and the ongoing challenges faced by the industry.
IFPMA-TFDA Workshop on Couterfeit Medicines
‘Integrated Approach Against Fake Medicines’
Session 2: Supply Chain Integrity
On 6th February 2015
At Taipei International Convention Center
Taipei, Taiwan
This document discusses managing the extended research and development (R&D) supply chain for clinical trials. It presents challenges at different stages of the supply chain from active pharmaceutical ingredient (API) manufacturing through investigational medicinal product (IMP) delivery. Key challenges include forecasting and planning given variable patient enrollment, ensuring visibility and integration across outsourced manufacturing steps, translating complex packaging needs from protocols, and managing drug distribution globally. The article presents different models for R&D supply chains, including fully outsourcing the physical chain or adopting a more patient-oriented model.
Data Integrity in a GxP-regulated Environment - Pauwels Consulting AcademyPauwels Consulting
On Tuesday, December 6, 2016, our colleague Angelo Rossi, Senior Regulatory Compliance Consultant, gave an interesting presentation about “Data Integrity in a GxP-regulated Environment” at the Brussels Office of Pauwels Consulting in Diegem.
In his presentation, Angelo covered definitions and concepts of data integrity, the change in regulatory focus, lessons learned from recent FDA warning letters, importants highlights of regulations and guidelines. Angelo also presented a practical example of data integrity for a computerized system.
Please contact us at contact@pauwelsconsulting.com or +32 9 324 70 80 if you have any further questions regarding our consulting services in this area.
Practical XEVMPD experience; once upon a time there was a perfectly clean dat... (Qdossier B.V.)
This document discusses challenges with inconsistent drug product data across different databases, regions, and disciplines. It notes inconsistencies in substance naming, active ingredients listed as excipients, and differences in listed excipients between countries for the same product. Maintaining consistent data in the XEVMPD format is challenging due to issues like using controlled vocabularies versus SmPC data and different MA numbers used across countries. Lessons learned include that inconsistencies across systems and SmPCs become visible when consolidating data, and defining consistent metadata elements and values across disciplines is important to achieve consistent records.
2013 Year of Pharmaceutical Serialization - Get it Right (Michael Stewart)
Pharmaceutical serialization en masse will occur in 2013 due to US and ex-US regulations to track products at the item level. Michael Stewart of PharmTech Inc. shares his insight into project management pitfalls and lets you benefit from his learning curve working with top-10 pharmaceutical manufacturers, contract manufacturers and virtual manufacturers to get ROI and business value in addition to compliance. Turn a perceived cost into an investment.
This document discusses the synergies between regulatory information management (RIM) and identification of medicinal products (IDMP). It argues that RIM and IDMP should be considered together, not separately, as IDMP expands on product data beyond what was traditionally included in RIM. The implementation of IDMP standards will converge various regulatory data initiatives and shape future regulatory submissions that will utilize structured IDMP data instead of documents. RIM systems will benefit from using the IDMP data model to standardize product information captured across systems and sources.
Epitome Technologies provides compliance solutions and computer system validation services to the life sciences industry. It has over a decade of experience in this field and clients across India and other regions. The company is located in Western India and is headed by experienced professionals including engineers and pharmacists. Epitome helps clients with initial validation of computer systems as well as ongoing maintenance and periodic reviews to ensure compliance with regulations from agencies such as US FDA, EU, and others. Its services include validation documentation, testing, and ensuring computer systems meet quality standards and regulatory requirements for electronic records over the lifetime of the systems.
Contract packagers and contract manufacturers have unique challenges when serializing product for the pharmaceutical industry. Michael Stewart, of PharmTech, covers the regulations and project management concerns CMOs and CPOs should address.
Pfizer is a global pharmaceutical company founded in 1849 and headquartered in New York. It produces medicines and vaccines. The document discusses Pfizer's global inventory management strategy, including inventory control techniques like reorder levels and safety stock levels. It also covers Pfizer's focus on efficiency, resilience, agility, quality control, and supply chain best practices and challenges. The presentation was submitted by Milind and team to Prof. Jaison Mathews.
mHealth Israel_Becton Dickinson_US Healthcare Digital Transformation_July 2015 (Levi Shapiro)
Presentation by David Fegygin, VP of Health IT Integration and Strategic Innovation, Becton Dickinson, for mHealth Israel, July 14, 2015 in Tel Aviv
Apprentice Field Suite is a set of smart glass applications that provide hands-free access to critical information for operators, engineers and scientists in the biopharmaceutical industry. It includes Tandem for remote collaboration, Manuals for accessing standard operating procedures, and a safety application for data collection. The applications help avoid costly downtime, improve process consistency, and enhance worker safety. Case studies show their use reducing travel costs for troubleshooting, and allowing paper-free, always up-to-date access to procedures.
Bioschemas for Aggregating ELIXIR Events - Comms Webinar (Niall Beard)
This document summarizes TeSS, a tool for aggregating and registering training events and materials for ELIXIR. TeSS allows users to search, filter, and discover training events and organize them into packages and workflows. Content from various sites can be distributed via TeSS by marking it up with schema.org tags, which improves search engine optimization. TeSS will also track ELIXIR training metrics and activities to prevent duplicate data entry.
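As an illustrative sketch only (none of this comes from the webinar itself), content marked up with schema.org tags is typically expressed as JSON-LD embedded in a page, which aggregators like TeSS and search engines can then discover. The helper below builds a minimal schema.org Event description; the event name, dates, URL, and provider are invented placeholders:

```python
import json

def event_jsonld(name, start, end, url, provider):
    """Build a minimal schema.org Event description as a JSON-LD dict.

    Embedding this JSON in a <script type="application/ld+json"> tag
    is what lets aggregators harvest the event automatically.
    """
    return {
        "@context": "https://schema.org",
        "@type": "Event",
        "name": name,
        "startDate": start,
        "endDate": end,
        "url": url,
        "organizer": {"@type": "Organization", "name": provider},
    }

# Hypothetical example event -- all details are placeholders.
doc = event_jsonld(
    "Intro to Sequence Analysis",
    "2017-05-02", "2017-05-03",
    "https://example.org/events/seq-intro",
    "Example Training Provider",
)
print(json.dumps(doc, indent=2))
```

Because the field names come from a shared vocabulary rather than a site-specific schema, every training provider can be harvested with the same code, which is the property TeSS relies on.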
Function and Phenotype Prediction through Data and Knowledge Fusion (Karin Verspoor)
The biomedical literature captures the most current biomedical knowledge and is a tremendously rich resource for research. With over 24 million publications currently indexed in the US National Library of Medicine’s PubMed index, however, it is becoming increasingly challenging for biomedical researchers to keep up with this literature. Automated strategies for extracting information from it are required. Large-scale processing of the literature enables direct biomedical knowledge discovery. In this presentation, I will introduce the use of text mining techniques to support analysis of biological data sets, and will specifically discuss applications in protein function and phenotype prediction, exploring the integration of literature data with complementary structured resources.
This study investigated the relationship between MMP-9, TIMP-1, and sialic acid (NANA) in a human glial cell line and the effects of NANA on the expression of these genes. The study found that NANA upregulated the expression of both MMP-9 and TIMP-1 at lower concentrations in a way that maintained the MMP-9/TIMP-1 balance. However, at higher concentrations of 1000μM NANA, MMP-9 expression was upregulated to a significantly greater degree than TIMP-1 expression, causing an imbalance similar to that reported in neurodegenerative diseases. This suggests NANA may be involved in signaling pathways regulating the expression of these genes linked to neuroinflammation.
This document summarizes concerns with draft Watershed Management Programs (WMPs) from a non-governmental organization perspective. Key concerns include WMPs relying on non-site specific data, insufficient prioritization of pollutants, unreasonable timelines that extend past permit deadlines, and monitoring plans not able to identify responsible parties for water quality issues. The document calls for WMPs to more specifically classify pollutants, justify strategies to reduce pollution, and not overrely on future changes or adaptive management to meet permit requirements.
Machine learning can be used for tasks that are labor-intensive, tedious, or cannot be done at scale by humans. Three use cases are described: counting parasites in petri dishes with 70-95% detection accuracy, classifying clinical documents with 90% labeling accuracy, and coding doctor's notes to ICD-10 with 75-95% accuracy using rules-based approaches. Overall, initial results were promising but further work is needed to integrate models into products and ensure consistent performance.
This document summarizes a flexible analytical platform for precision clinical research, pharmaceutical R&D, and education. It describes the large and growing omics data analysis market and the need to extract biological meaning from big biomedical data. The platform uses machine learning, biological pathway analysis, visualization, and other techniques to analyze genomics, proteomics, transcriptomics, metabolomics, and other omics data types. It provides basic processing, predictive modeling, and decision support to help with clinical trials, molecular diagnostics, and more. The business model involves remote cloud access, full-service projects, reporting, customization, and educational programs. Testimonials highlight how the platform has helped diverse research teams.
Pistoia Alliance Debates: Big data - solution or pollution? 26-02-2015 15:00 (Pistoia Alliance)
This webinar discusses big data in the pharmaceutical industry. It is chaired by David Fritsche and features a panel of experts including Ashley George of GSK and Anthony Rowe of Johnson & Johnson. The webinar covers topics such as the digital health landscape and an IMI2 initiative on remote disease assessment. Information is also provided on upcoming Pistoia Alliance events focused on text mining and their spring conference.
Renewable energy comes from resources that naturally replenish, such as sunlight, wind, rain, tides, waves and geothermal heat. It can generate electricity, provide hot water and space heating, serve as motor fuels, and power off-grid energy services. The main renewable sources are solar, biomass, wind, hydro, and geothermal energy. Hydro energy is renewable, has low operating costs, and yields lower energy costs than other methods but can displace human populations and impact ecosystems. Wind energy has high net yields but requires storage or grid connection due to intermittency. Biomass energy utilizes biological material and is theoretically carbon neutral, though land use changes can release carbon and harm biodiversity.
Pollution is the introduction of contaminants into the natural environment that cause adverse change.[1] Pollution can take the form of chemical substances or energy, such as noise, heat or light. Pollutants, the components of pollution, can be either foreign substances/energies or naturally occurring contaminants. Pollution is often classed as point source or nonpoint source pollution.
Water pollution is a major global problem that threatens human, animal and environmental health. It is caused by various factors like increasing population, industrial waste, agricultural runoff, and untreated sewage. This leads to contaminated surface and groundwater with harmful chemicals, pathogens, and debris. To address this issue, standards have been established for water quality and effluent discharge. However, pollution continues as many industrial and municipal wastewaters remain improperly treated before being dumped in water bodies. Preventing further water pollution through conservation efforts, proper disposal of toxins, and reducing plastic use can help address this growing crisis.
This document discusses the RAS/MAP kinase pathway and targeted therapies that inhibit proteins in this pathway for cancer treatment. It specifically mentions that mutations in BRAF occur in the kinase domain and that BRAF inhibitors like vemurafenib and dabrafenib block BRAF to treat cancers. MEK inhibitors and multikinase inhibitors like sorafenib that target multiple nodes in the MAPK pathway are also discussed as cancer therapies.
CRISPR: what it is, and why it is having a profound impact on human health (Pistoia Alliance)
This document summarizes a webinar on CRISPR that included presentations from experts in gene editing and bioinformatics. The webinar provided an overview of CRISPR and how it works using the Cas9 enzyme and guide RNA to make precise cuts in DNA. It discussed how CRISPR is being used for gene knockout studies, clinical trials to treat diseases like cystic fibrosis and cancer, and the challenges of predicting off-target effects. The webinar highlighted both the promise and challenges of CRISPR for accelerating scientific discovery and developing new gene therapies.
Greenhouses provide a controlled environment for crop growth. They allow sunlight to enter while protecting crops from outside environmental factors like cold, heat, and rain. This controlled environment allows for higher crop yields year-round. Greenhouse technologies regulate temperature, humidity, carbon dioxide levels, and protect from pests and diseases. Components include the structural framework, covering materials, and environmental control systems.
This document provides an overview of various renewable energy sources including hydro, wind, solar, biomass, and geothermal energy. It describes how each source harnesses natural resources to generate energy. For each type, it discusses their history of use, how electricity is generated, and examples of applications. The document aims to educate about renewable energy sources and their importance as clean alternatives to fossil fuels.
This document discusses various sources of storm water pollution and provides recommendations to reduce pollution entering waterways. It notes that storm water is not treated and carries many harmful materials directly into streams, lakes, and oceans. Some major pollutants identified are soil, fertilizers, pesticides, motor oil, and pet waste. The document then provides tips in areas such as limiting fertilizer use, preventing erosion, integrated pest management, proper yard trimmings disposal, cleaning up after pets, reducing driveway runoff, maintaining streamside buffers, proper waste disposal, and reducing household hazardous wastes. The overall message is that small individual actions can help improve water quality when adopted widely.
This document discusses different forms of energy and their uses. It covers fossil fuels like oil, coal and natural gas, as well as renewable sources including solar, wind and hydroelectric power. Solar power can be generated through photovoltaic systems or concentrating solar power. Wind power is economically viable according to a university study. Hydropower harnesses the kinetic energy of moving water through dams to spin turbines and generate electricity, though it can impact downstream water flow. Renewable sources may provide alternatives as fossil fuels are depleted.
Renewable energy sources include sunlight, geothermal heat, tides, wind and biomass. These sources generate clean energy without pollution or climate change. The main types are solar, wind, hydropower, biofuels and geothermal. Solar energy is captured through photovoltaic cells and solar thermal collectors. Wind energy is harnessed via wind turbines in wind farms, and hydropower uses the force of moving water in dams to generate electricity. Biomass and biofuels come from organic matter like plants, and geothermal taps heat from within the earth.
PDF, audio, and voiceover are now available on designintechreport.wordpress.com
Today’s most beloved technology products and services balance design and engineering in a way that perfectly blends form and function. Businesses started by designers have created billions of dollars of value, are raising billions in capital, and VC firms increasingly see the importance of design. The third annual Design in Tech Report examines how design trends are revolutionizing the entrepreneurial and corporate ecosystems in tech. This report covers related M&A activity, new patterns in creativity × business, and the rise of computational design.
Pistoia Alliance Debates: Ontologies as the glue for knowledge management: Us... (Pistoia Alliance)
Ontological resources, such as curated vocabularies and hierarchical ontologies, are the glue that holds knowledge management together on the semantically enabled web. This webinar explores selected use cases and challenges in ontological engineering, a discipline critical to a successful life science sector.
This document summarizes a webinar on building smart cities. It discusses using semantic technologies like ontologies, taxonomies, and knowledge graphs to build smart city platforms and applications. Speakers from Semantic Web Company and Findwise discuss semantic data integration, case studies of semantic platforms for healthcare information in Australia and smart city data in Gothenburg, and tools for building semantic solutions like the PoolParty Semantic Suite. The webinar covers challenges in building smart cities and how semantic technologies can help with areas like data modeling, integration, and machine learning on city data. It concludes with a Q&A session.
Overview of FAIR and the IMI FAIRplus project at the UK Conference of Bioinformatics and Computational Biology 2020: https://www.earlham.ac.uk/uk-conference-bioinformatics-and-computational-biology-2020
Sci Know Mine 2013: What can we learn from topic modeling on 350M academic do... (William Gunn)
This document discusses topic modeling on 350 million documents from Mendeley. It describes how topic modeling can be used to categorize documents into topics and subcategories, though categorization is imperfect and topics change over time. It also discusses how topic modeling and metrics can help with fact discovery and reproducibility of research to build more robust datasets.
PA webinar on benefits & costs of FAIR implementation in life sciences (Pistoia Alliance)
Slides from the Pistoia Alliance Debates webinar in which a panel of experts from technology providers and the biopharma industry were invited to share their views on the benefits and costs of FAIR implementation for the life science industry.
The Pistoia Alliance HELM Project aims to set standards for exchanging biomolecular data by developing a representation language and toolkit to bridge existing gaps between tools for small molecules and sequences/biomolecules. The main goal is the HELM Ecosystem, which includes R&D organizations, software vendors, content providers and regulatory agencies. The supporting goal is to improve the "adoptability" of HELM by removing barriers to adoption such as addressing biomolecular ambiguity representation and improving the architecture. Next steps include exploring continuity mechanisms after the project finishes at the end of 2016 or early 2017.
As BioPharma adapts to incorporate nimble networks of suppliers, collaborators, and regulators the ability to link data is critical for dynamic interoperability. Adoption of linked data paradigm allows BioPharma to focus on core business: delivering valuable therapeutics in a timely manner.
The Pistoia Alliance: Update on Strategy and Progress (Pistoia Alliance)
Ramesh Durvasula, Pistoia Alliance board member, discusses the Pistoia Alliance mission and recaps activities in 2011-12, with particular emphasis on the successful completion of the Sequence Squeeze Competition and Sequence Services Phase 2. The presentation was delivered at BioITWorld in Boston in April 2012.
Simon Hodson discusses key aspects of open science including open access to research outputs, FAIR data principles, and engaging society. Open science requires addressing technical, funding, skills, and mindset challenges. While data created with public funds should be open by default, legitimate exceptions exist for commercial interests, privacy, and security. Criteria for data appraisal, selection and preservation need input from disciplines. Barriers to data sharing include concerns over misuse and lack of credit, while benefits include advancing research and building institutional reputation. Open science governance is needed to balance openness with other priorities like intellectual property, and define roles and responsibilities among stakeholders.
Open Insights Harvard DBMI - Personal Health Train - Kees van Bochove - The Hyve
In this talk, the Personal Health Train concept will be introduced, which enables running personalized medicine workflows as trains visiting data stations (e.g. hospital records, primary care records, clinical studies and registries, and patient-held data from wearable sensors). The Personal Health Train is a very powerful concept, but it depends on source medical data being coded with appropriate metadata on consent, license, scope, etc., and on the data itself being encoded using biomedical data standards, an ever-growing field in biomedical informatics. To realize the Personal Health Train, biomedical data will need to be FAIR, i.e. adopt the FAIR Guiding Principles. This talk covers the emerging GO-FAIR international movement and provides examples of how several European health data networks are adopting open-standards-based stacks to enable routine health care data to become accessible for research.
Dr. Tito Castillo discusses challenges with data discovery and sharing at University College London Hospitals (UCLH) due to their multiple proprietary clinical systems with undocumented data and data warehouses. To address this, UCLH is taking a standards-based approach using models like DDI and SDMX to document metadata and map their processes. The goal is to enable better data access, sharing, and reuse to support research programmes and new models of care while respecting governance and privacy.
Presentation on the FAIR data principles and how they relate to Science Gateways and software. Presented at a workshop prior to eResearch Australasia 16 October 2017
Pistoia Alliance Debates: Sharing data with my co-petition 03-12-2015, 16.04 (Pistoia Alliance)
Richard Lingard discussed how the life sciences industry has undergone significant changes in recent years due to budget reductions, changing research portfolios, and new methodologies. Modern informatics systems can help organizations adapt to these changes by supporting cost-efficient research, enabling scientific collaboration across boundaries, and powering new research approaches. Lingard argued that now is the time for informatics to help show a return on investment from the transformations already underway in the industry.
Bio Data World - The promise of FAIR data lakes - The Hyve - 20191204 (Kees van Bochove)
At the Bio Data World conference in Basel in December 2019, Kees van Bochove, Founder of The Hyve gave a talk on re-use of pharma R&D data, and what strategies could be used to realize operationalization of FAIR data at scale.
Open data ecosystems research talk at Copenhagen Business School on 25-04-2014 (Matti Rossi)
The document summarizes research on open data being conducted at Aalto University School of Business. It describes two studies: 1) an analysis of the emerging open data ecosystem and identification of five value network profiles, and 2) how open data can play a critical role in healthcare information production processes by improving transparency, preventing vendor lock-in, and supporting quality control. It also provides background on key researchers and research topics related to open data and service innovation.
Themes and objectives:
To position FAIR as a key enabler to automate and accelerate R&D process workflows
FAIR Implementation within the context of a use case
Grounded in precise outcomes (e.g. faster and bigger science / more reuse of data to enhance value / increased ability to share data for collaboration and partnership)
To make data actionable through FAIR interoperability
Speakers:
Mathew Woodwark, Head of Data Infrastructure and Tools, Data Science & AI, AstraZeneca
Erik Schultes, International Science Coordinator, GO-FAIR
Georges Heiter, Founder & CEO, Databiology
Enterprise search seems to have hit a wall. Bad search is the top complaint of users interacting with their internal data. Meanwhile, there is a seemingly never-ending flood of products, SaaS offerings and new solutions on the market, all claiming and attempting to solve the problem.
In this roundtable, we will define what expectations organizations should really have about their search platforms and discuss what benefits to expect from using techniques like boosting, auto-classification, natural language processing, query expansion, entity extraction and ontologies. We will also explore what will supersede search in the enterprise.
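As a toy sketch under stated assumptions (nothing here comes from the roundtable; the corpus, synonym table, and boost weights are all invented), two of the techniques mentioned, query expansion and boosting, can be demonstrated with a tiny in-memory inverted index:

```python
from collections import defaultdict

# Toy corpus; document IDs and text are invented placeholders.
DOCS = {
    1: "assay results for oncology compound screening",
    2: "oncology trial protocol amendments",
    3: "hr onboarding checklist",
}

# Hand-written synonym table standing in for a real ontology.
SYNONYMS = {"cancer": ["oncology"]}

# Boost weights: direct term matches outscore synonym-expanded matches.
DIRECT_BOOST, EXPANDED_BOOST = 2.0, 1.0

def build_index(docs):
    """Map each term to the set of documents containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """Rank documents, rewarding direct hits over expanded hits."""
    scores = defaultdict(float)
    for term in query.lower().split():
        for doc_id in index.get(term, ()):
            scores[doc_id] += DIRECT_BOOST
        for syn in SYNONYMS.get(term, ()):
            for doc_id in index.get(syn, ()):
                scores[doc_id] += EXPANDED_BOOST
    return sorted(scores, key=scores.get, reverse=True)

index = build_index(DOCS)
print(search(index, "cancer trial"))  # doc 2 first: direct hit plus expanded hit
```

The query "cancer trial" matches no document literally containing "cancer", yet the expansion through the synonym table still surfaces the oncology documents, with the boost weights deciding the ranking. Real platforms do the same with ontologies instead of a hand-written table.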
DESIGN, DEVELOPMENT & IMPLEMENTATION OF ONTOLOGICAL KNOWLEDGE BASED SYSTEM FO... (IJDKP)
This document summarizes an article that describes the design and development of an ontological knowledge-based system to support reconfigurable assembly lines in the automotive industry. The system uses an ontology to represent the relationships between products, processes, and resources. It aims to facilitate rapid reconfiguration of assembly lines in response to changing product requirements. The system is intended to help automotive companies address challenges like increasing competition, complex products and processes, and the need to adapt quickly to changes and new customer requirements.
Making Data FAIR (Findable, Accessible, Interoperable, Reusable) (Tom Plasterer)
What to do About FAIR…
In the experience of most pharma professionals, FAIR remains fairly abstract, bordering on inconclusive. This session will outline specific case studies – real problems with real data, and address opportunities and real concerns.
Why making data Findable, Accessible, Interoperable and Reusable is important.
Talk presented at the Data Driven Drug Development (D4) conference on March 20th, 2019.
Similar to Pistoia Alliance Debates: Ontologies mapping webinar 23rd Feb 2017
FAIRification experience: clarifying the semantics of data matrices (Pistoia Alliance)
This webinar presents the Statistics Ontology (STATO), a semantic framework supporting the creation of standardized analysis reports to help with the review of results in the form of data matrices. STATO includes a hierarchy of classes and a vocabulary for annotating statistical methods used in life, natural and biomedical sciences investigations, text mining and statistical analyses.
This webinar discusses driving adoption of microphysiological systems (MPS) in drug R&D. The webinar agenda includes presentations on multi-organ chips for safety and efficacy assessment from TissUse, current applications and future perspectives of organ-on-chips in pharmaceutical industry from AstraZeneca, and driving adoption of MPS from ToxRox Consulting. A panel discussion will be moderated by Mary Ellen Cosenza. The presentations will cover benefits of MPS for reducing drug failures and animal testing, applications across drug discovery and development, challenges for adoption, and perspectives from industry.
Federated Learning (FL) is a learning paradigm that enables collaborative learning without centralizing datasets. In this webinar, NVIDIA present the concept of FL and discuss how it can help overcome some of the barriers seen in the development of AI-based solutions for pharma, genomics and healthcare. Following the presentation, the panel debate on other elements that could drive the adoption of digital approaches more widely and help answer currently intractable science and business questions.
AI, like design thinking, is becoming a buzzword. Everyone is talking about AI or wants to have AI, and sees all the ideas and benefits – that's fine, but how do you get started? And what's different now? Three innovations have finally put AI on the fast track: Big Data, with the internet and sensors everywhere; massive computing power, especially through the cloud; and breakthrough algorithms, so computers can be trained with deep learning to accomplish more sophisticated tasks on their own. If you use new technology, you need to explore and know what's possible. Design thinking helps outline the steps and define the ways in which you're going to create the solution, starting with mapping the customer journey and defining who will use the service enhanced with intelligent technology, or who will benefit and gain value from it. We discuss how these two worlds are coming together, and how to get started transforming your venture with Artificial Intelligence using Design Thinking.
Speaker: Claudio Mirti, Principal Solution Specialist – Data & AI, Microsoft
Knowledge graphs - Ilaria Maresi, The Hyve, 23 Apr 2020 (Pistoia Alliance)
Data for drug discovery and healthcare is often trapped in silos which hampers effective interpretation and reuse. To remedy this, such data needs to be linked both internally and to external sources to make a FAIR data landscape which can power semantic models and knowledge graphs.
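The linking idea can be caricatured in a few lines of code. This is a minimal sketch, not any specific product's data model: records from two "silos" are restated as (subject, predicate, object) triples, and because both use a shared identifier, one query can traverse data that previously lived apart. All identifiers and predicate names below are invented placeholders, not real vocabulary terms:

```python
# A tiny in-memory graph of (subject, predicate, object) triples.
triples = set()

def add(s, p, o):
    triples.add((s, p, o))

# "Silo" 1: an internal assay result, keyed to a compound ID.
add("compound:42", "hasAssayResult", "assay:7")
add("assay:7", "measures", "IC50")

# "Silo" 2: registration data linking the same compound to an
# external identifier (a stand-in for e.g. a public database key).
add("compound:42", "sameAs", "external:CHEM-XYZ")

def objects(s, p):
    """All objects o such that (s, p, o) is in the graph."""
    return {o for (s2, p2, o) in triples if s2 == s and p2 == p}

# Because both silos share the key "compound:42", a single traversal
# now reaches the assay data and the external linkage together.
for assay in objects("compound:42", "hasAssayResult"):
    print(assay, objects(assay, "measures"))
```

Real knowledge graphs replace the invented predicates with terms from published ontologies and the string keys with resolvable IRIs, which is what makes the linking FAIR rather than merely internal.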
2020.04.07 Automated molecular design and the Bradshaw platform webinar (Pistoia Alliance)
This presentation described how data-driven chemoinformatics methods may automate much of what has historically been done by a medicinal chemist. It explored what is reasonable to expect “AI” approaches might achieve, and what is best left with a human expert. The implications of automation for the human-machine interface were explored and illustrated with examples from Bradshaw, GSK’s experimental automated design environment.
This presentation reviewed the challenges in identifying, acquiring and utilizing research data in relation to an evolving data market. Strategic solutions were examined in which the FAIR principles play a key role in the future of data management.
Dr. Dennis Wang discusses possible ways to enable ML methods to be more powerful for discovery and to reduce ambiguity within translational medicine, allowing data-informed decision-making to deliver the next generation of diagnostics and therapeutics to patients quicker, at lowered costs, and at scale.
The talk by Dr. Dennis Wang was followed by a panel discussion with Mr. Albert Wang, M. Eng., Head, IT Business Partner, Translational Research & Technologies, Bristol-Myers Squibb.
With the explosion of interest in both enhanced knowledge management and open science, the past few years have seen considerable discussion about making scientific data “FAIR” — findable, accessible, interoperable, and reusable. The problem is that most scientific datasets are not FAIR. When left to their own devices, scientists do an absolutely terrible job creating the metadata that describe the experimental datasets that make their way into online repositories. The lack of standardization makes it extremely difficult for other investigators to locate relevant datasets, to re-analyse them, and to integrate those datasets with other data. The Center for Expanded Data Annotation and Retrieval (CEDAR) has the goal of enhancing the authoring of experimental metadata to make online datasets more useful to the scientific community. The CEDAR workbench for metadata management will be presented in this webinar. CEDAR illustrates the importance of semantic technology to driving open science. It also demonstrates a means for simplifying access to scientific data sets and enhancing the reuse of the data to drive new discoveries.
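The template-driven authoring idea can be sketched in miniature. This is a hedged illustration only, not CEDAR's actual data model: a template declares which metadata fields are required and which controlled values they accept, and a record is checked against it. The field names and controlled terms below are invented:

```python
# Toy metadata template; field names and controlled values are invented.
TEMPLATE = {
    "organism": {"homo sapiens", "mus musculus"},   # controlled terms
    "assay_type": {"rna-seq", "chip-seq"},
}

def validate(metadata):
    """Return a list of problems; an empty list means the record conforms."""
    problems = []
    for field, allowed in TEMPLATE.items():
        if field not in metadata:
            problems.append(f"missing field: {field}")
        elif metadata[field].lower() not in allowed:
            problems.append(f"uncontrolled value for {field}: {metadata[field]}")
    return problems

print(validate({"organism": "Homo sapiens", "assay_type": "RNA-seq"}))  # []
print(validate({"organism": "human"}))
```

Forcing authors through such templates at submission time, rather than accepting free text, is what makes the resulting datasets findable by later queries over the same controlled terms.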
Open interoperability standards, tools and services at EMBL-EBI (Pistoia Alliance)
In this webinar Dr Henriette Harmse from EMBL-EBI presents how EMBL-EBI uses its ontology services to scale up the annotation of data and deliver added value to its users through ontologies and semantics.
FAIR webinar, Ted Slater: progress towards commercial FAIR data products and ... (Pistoia Alliance)
Elsevier is a global information analytics business that helps institutions and professionals advance healthcare and open science to improve performance for the benefit of humanity.
In this webinar, we discuss how Elsevier is increasingly leveraging the FAIR Guiding Principles to improve its products and services to better serve the scientific community.
Application of recently developed FAIR metrics to the ELIXIR Core Data Resources (Pistoia Alliance)
The FAIR (Findable, Accessible, Interoperable and Reusable) principles aim to maximize the discovery and reuse of digital resources. Using recently developed software and metrics to assess FAIRness and supported through an ELIXIR Implementation Study, Michel worked with a subset of ELIXIR Core Data Resources to apply these technologies. In this webinar, he will discuss their approach, findings, and lessons learned towards the understanding and promotion of the FAIR principles.
Implementing Blockchain applications in healthcare (Pistoia Alliance)
Blockchain technology can revolutionise the way information is exchanged between parties by bringing an unprecedented level of security and trust to these transactions. The technology is finding its way into multiple use cases but we are yet to see full adoption and real-world business implementation in the Healthcare industry.
In this webinar we will explore the main challenges and considerations for the implementation of Blockchain technology in Healthcare use cases. This is the third webinar in our Blockchain Education series.
Building trust and accountability - the role User Experience design can play ... (Pistoia Alliance)
In this webinar our panel of UX specialists give a brief introduction to User Experience before presenting the design opportunities UX can bring to AI. We all know that AI has great potential, but it has some significant hurdles to overcome, not least the human aspects of trust and the ethical considerations involved when designing for the life sciences.
This document summarizes a webinar on using machine learning and data mining techniques to predict drug repurposing opportunities for chronic pancreatitis. Specifically:
1. Ensemble learning techniques like kernel-based models were used to analyze drug and disease target interaction data from multiple sources to identify potential drug candidates for repurposing.
2. The top 5 repurposing candidates identified through this process were being evaluated further by the partner organization Mission-Cure with the goal of beginning patient trials by January 2020.
3. Additional techniques discussed included using compressed sensing to analyze drug-disease networks and predict side effects to help evaluate candidate drugs identified for repurposing opportunities.
Creating novel drugs is an extraordinarily hard and complex problem.
One of the many challenges in drug design is the sheer size of the search space for novel chemical compounds. Scientists need to find molecules that are active toward a biological target or pathway and at the same time have acceptable ADMET properties.
There is now considerable research going on using various AI and ML approaches to tackle these challenges.
Our distinguished speakers, Drs. Alex Tropsha and Ola Engkvist, will discuss their recent work in Drug Design involving Deep Reinforcement Learning and Neural Networks, and will answer questions from the audience on the current state of the research in the field.
Speakers:
Prof Alex Tropsha, Professor at University of North Carolina at Chapel Hill, USA
Dr. Ola Engkvist, Associate Director at AstraZeneca R&D, Gothenburg, Sweden
Alexander Tropsha presented on using AI and machine learning for drug design and discovery. He discussed using QSAR models to predict properties and activity of molecules based on their structural descriptors. He also introduced ReLeaSE, a new method using deep reinforcement learning to generate novel drug-like molecules and guide chemical library design through an iterative cycle of molecule generation, model building, and improvement. If successful, this approach could disrupt traditional computational drug discovery pipelines.
Blockchain, IoT and the GxP lab: technology helping compliance?
This webinar discusses how distributed ledger technology like blockchain and IOTA could help enhance compliance in GxP laboratories. It explores how DLT could be used to track devices, materials, and data in a more transparent, trusted and auditable way. Specifically, it presents a vision of an internet-connected "laboratory of the future" where all devices share data using DLT. This could improve integrity, security and access to data while reducing costs. While DLT cannot directly increase compliance, it may help build trust in GxP systems and processes by making components more transparent to regulators.
Pistoia Alliance Debates: Ontologies mapping webinar 23rd Feb 2017
1. Ontologies Mapping for more
effective data integration and
knowledge management
A Pistoia Alliance Debates Webinar
23rd February 2017
Chaired by Ian Harrow
3. Poll Question 1:
What is your level of familiarity/involvement
with Ontologies?
A. I lead Ontologies work in my organization
B. I contribute to Ontologies work in my
organization
C. I have a basic understanding of Ontologies
D. I know very little about Ontologies
4. Pistoia Alliance
Chair and Expert Panel
Ontologies Mapping webinar, February 23, 2017
Yasmin Alam-Faruque, Scientific Data Specialist at Eagle Genomics
Organisation, harmonisation and integration of datasets for the eaglecore knowledge
management platform
Previously, biocurator at EMBL-EBI for the renal gene ontology annotation initiative
Simon Jupp, Ontology Project Lead at EMBL-EBI
Developed a range of ontologies and ontology services including the Experimental Factor
Ontology and the Ontology Lookup Service
Working with ontologies in the life sciences since 2003
Martin Romacker, Principal Scientist at Roche Innovation Center, Basel
Data and Information Architect in Pharma Research and Early Development Informatics
Focusing on the Knowledge Engineering (Terminologies/ Ontologies) and Scientific Data
Integration using Semantic Technologies
Ian Harrow, Project Manager at Pistoia Alliance (Chair)
Consultant services in Bioinformatics and Text Mining
Project Manager for the Ontologies Mapping project
Previously, Senior Principal Scientist in Bioinformatics at Pfizer
Lee Harland, Founder and COO at SciBite
SciBite is a growing company based in Cambridge UK specialising in Text Analytics and
Knowledge Management for life sciences
Previously, CTO of the Open PHACTS project and head of the information engineering
group at Pfizer
5. Pistoia Alliance
Agenda
Panelist and topic:
• Ian Harrow: Welcome
• Martin Romacker: Why are ontologies important for Roche? What has been achieved by the Ontologies Mapping project?
• Yasmin Alam-Faruque: What is the value of ontologies to Eagle Genomics? How has being part of the OM project team helped?
• Lee Harland: How do ontologies power the SciBite platform?
• Simon Jupp: What ontology services are available at EMBL-EBI?
• Ian Harrow: What is the Ontologies Mapping project planning to do next?
• Audience Q & A
6. Why are ontologies important for Roche?
What has been achieved by the
project and its value to Roche?
Martin Romacker at Roche Innovation Center
7. Pistoia Alliance
Changing Perception of Corporate Data Assets
• Pharma Industry is behind other industries
(eg Finance, Insurance, Automotive, Wholesale, Retailer – CDO/ CAO)
• Paradigm shift – from lab to data/knowledge?
Data is business and business is data – acquisition of
data not compounds (Google: data, algorithms, computer)
• Change only happens where the Pharma Industry is
forced – why? CDISC, IDMP
(heavily relying on ontologies and data standards)
• Pharma Industry accepts an incredible variety of data as
input into knowledge-driven business processes
(eg CROs, vendor data, cost avoidance)
• Pharma Industry spends huge budgets to generate
knowledge - budgets are tight for integration,
maintenance and quality assurance
8. Pistoia Alliance
pREDi Terminology Service (RTS)
• RTS as domain master for terminology management
– streamlining terminology management ensuring high data quality
– semantic alignment between knowledge repositories lowering barriers
• Faster response to scientific queries (saving time)
Less effort for data integration (cost avoidance)
• Support of external collaborations based on data standards (trend CROs)
• Support of well-founded decisions
– business or scientific
• Semantic Engineering to define research/business objects
• USP: Comprehensive semantic model to represent highly-scalable, universal
and multi-purpose terminologies
10. Pistoia Alliance
Ontologies & Data Standards:
Value Proposition
Source: https://www.crowdflower.com/the-data-behind-todays-data-scientists-an-infographic/
https://whatsthebigdata.com/2016/05/01/data-scientists-spend-most-of-their-time-cleaning-data/
• Data-Science-Readiness (Time-to-Value)
• Reduced Effort for Data Integration
• Improved Data Quality (completeness, correctness, coherence)
Scientific Data Integration
11. Pistoia Alliance
Big Data - Semantics as a Key Enabler
The five Vs: Volume, Velocity, Variety, Veracity, Value
12. Pistoia Alliance
Prime Time for Ontologies
• Executives consider data more and more as a corporate asset
• Integration of Real World Data and Health Care Data
• Translational and Reverse Translational Data Integration
• Collaboration with Contract Research Organisation
• But: Missing or competing standards in Research & Development
(eg MeSH, SNOMED, MedDRA, NCIt)
Legacy systems using own terminologies/ontologies
(terminologies/ontologies are ubiquitous but not managed as such)
• Urgent need for Ontologies Mapping
Roche funding of Pistoia Ontologies Mapping Project Phase 1 & 2
Roche are committed to funding the proposed Phase 3
13. Pistoia Alliance
Project Phase 1 & 2: Timeline and Achievements (2Q 2015 to 4Q 2016)
1) Ontologies domain selected as “test case”
2) Guidelines for minimal standards & best practices
3) Requirements for Ontologies Mapping tool
4) Evaluate & select existing Ontologies Mapping tool(s)
5) Requirements for an Ontologies Mapping service
6) Understand the demand for an Ontologies Mapping service
Funded by GSK, Merck & Co, Novartis, Roche and BIOVIA 3DS
14. Pistoia Alliance
Further Achievements
14
• Conformity with the FAIR principles
– Findable and Accessible (public wiki)
– Interoperable and Re-usable (aligned to OBO etc.)
• Endorsed by external groups:
– Interoperable Services at ELIXIR, Molecular Archival Resources at
EMBL-EBI, Ontologies Mapping Project Community of Interest
• Promotion at conferences/workshops:-
– EMBL-EBI March 2016, ISMB July 2016, ECCB September 2016,
Industry Semantic Forum at Roche September 2016, OM October
2016 and ISWC October 2016
• Ontology Alignment Evaluation Initiative
– Sponsoring of a competition for the best ontologies matching
algorithm (International Workshop on Ontology Matching)
15. Pistoia Alliance
OM Project Community
Funders
• BIOVIA 3DS
• GSK
• Merck & Co
• Novartis
• Roche
Pistoia Operations
• Richard Holland
• John Wise
• Carmen Nitsche
• Nick Lynch
Project team
• Ian Harrow (Pistoia Project Manager)
• Martin Romacker (Roche)
• Andrea Splendiani (Novartis)
• Stefan Negru (Merck & Co)
• Peter Woollard (GSK)
• Scott Markel (BIOVIA)
• Martin Koch (Osthus)
• Heiner Oberkampf (Osthus)
• Yasmin Alam-Faruque (Eagle Genomics)
• Erfan Younesi (Bayer)
• Jabe Wilson (Elsevier)
• James Malone (FactBio)
Community of Interest (>80 members)
16. Pistoia Alliance
Ontologies Mapping Project:
Value to Roche
• Phase 1 & 2 Achievements are highly relevant:-
Guidelines for selection of reference standards
Analysis of available tools for ontologies mapping
(RFI: baseline, checklist for tools)
Evaluation of ontologies mapping algorithms
(linkage to OM algorithm community)
• Phase 3 Ontologies Mapping Service proposal (later by Ian)
Important for mapping requests (e.g. HPO to MeSH)
Important application to semantic alignment
Shared resources across scientific community
17. Pistoia Alliance
Conclusion
• Tremendous change: data are considered as an asset
• Urgent need for lowering the barriers for data integration
and data sharing
• Demystify the knowledge acquisition process:
define it as a knowledge procurement process
• Terminologies/ Ontologies and related Data Standards start
to play a key role
but: getting them into business still requires tenacity
but: tackle the issue from the value perspective
• Ontologies mapping is a core capability to work efficiently
and successfully with corporate data assets
This is why Roche consumes and funds the OM project
18. What is the value of ontologies to Eagle
Genomics?
How has being part of the project
team helped?
Yasmin Alam-Faruque at Eagle Genomics
19. Pistoia Alliance
Supporting the bridge between Data and Insight
Eagle Genomics provides software solutions bridging the gap between “big data” and “innovative biological insight”.
Ontologies are essential: they allow disparate data to be harmonised, federated and integrated for various high performance computational analyses (i.e. data processing, statistical analyses and data mining) -> novel insights
20. Pistoia Alliance
Data curation
• We also play an active role in
curating, organising and federating
a variety of customer multi-omics
datasets and associated metadata
into a knowledge management
platform (eaglecore).
• Curation of scientific data involves
its collection, characterisation,
cleaning, contextualization,
categorisation and cataloguing,
making it more visible and available
for searching, sharing and further
analyses.
• Hence, using ontologies during
curation to semantically enrich and
harmonise the datasets becomes
essential for data integration and
interoperability.
21. Pistoia Alliance
Data valuation
• Eagle Genomics pioneers measurement of data value (i.e.
usefulness and relevance) in the context of specific scientific
questions.
• Value modeling requires data harmonisation using ontologies.
• We can measure the value of data before the use of ontologies
and after, according to quality metrics and value metrics.
[Diagram: a Dataset Catalogue is improved first for quality (quality metrics, descriptive statistics), yielding a Dataset Catalogue of improved quality, and then for value (value metrics such as AHP and QFD, applied using ontologies), yielding a Dataset Catalogue of improved value.]
22. Pistoia Alliance
Data Governance
[Diagram: governance relates validity and consistency across processes and organisations (standards and guidelines: “Are we doing the right things? Are we doing the things right?”) to architecture (data and contextual models, semantics), with the goals of governance by design and measurement.]
• Emerging as an important activity for biopharma and healthcare industries
• Complex initiative: relates to validity and consistency throughout the organisation
• Ensuring everyone refers to the same drug or disease across all organisational departments/sites (R&D -> clinical trials -> sale of drug to treat disease) is essential.
• Can be initiated by use of ontologies/controlled vocabularies to tag and link experiments/datasets
23. Pistoia Alliance
How has being part of the project team helped?
Allowed visibility - played an active role throughout the project which has
projected a serious and professional image among other organisational
team members
Provided an overall increase in our expertise, understanding and capability
within this important field
Credibility with potential customers/clients as we are heavily involved in this
important community project along with other Pistoia member
organisations
Opportunity to become aware of the evaluation and selection of the best
potential academic/ commercial Ontology mapping tool/ service provider
for future customer projects, ahead of the project starting – saving time.
Opportunity to be involved in the development of various documentation:
• detailing the functional and non-functional requirements for an
Ontologies Mapping Tool
• Ontology mapping guidelines (already comprehensively followed by
some ontologies)
24. Poll Question 2:
Where do you source mappings between
ontologies?
A. Mostly external sources of mappings
B. Mostly internal curation of mappings
C. A mixture of both external and internal
sources
D. I do not know
25. How do ontologies power the SciBite
platform?
Lee Harland at SciBite
26. Pistoia Alliance
Ontologies In The SciBite Platform
Lee Harland | @SciBitely | www.scibite.com
27. Pistoia Alliance
80-90% of all potentially usable business information may originate in unstructured form
https://en.wikipedia.org/wiki/Unstructured_data
28. Pistoia Alliance
‘Semantics-as-a-Service’
[Diagram: text content from documents & databases, combined with ontologies (Gene/Disease/Drug; Molecular; Chemical; Clinical; Adverse Event; Pharm Sci & Manufacturing; Business & Commercial; Regulatory; Geo-location; University/Company), flows through the SciBite API to produce structured data.]
29. Pistoia Alliance
Public Ontologies Are Vital
What They Are Great For
• Providing an open, consistent, stable identifier for a given “thing”
• Developing community consensus as to what that “thing” is
• Developing community consensus on what all the things are
• Powering Data Integration
• Powering Scientific Analytics
Not Designed For Text Analytics/Mining
30. Pistoia Alliance
3 Key Issues
1. Synonym Coverage
2. Coding Style
3. Ambiguity
e.g. the Human Phenotype Ontology (HPO) is a gold reference standard for phenotypes, and many use cases start with “find all the phenotypes….” But there are 6997 synonyms in the current HPO over 11375 entities. Similar for many others, as text mining is not their raison d'être.
35. Pistoia Alliance
Summary
• Text (Databases & Documents) accounts for large
amount of corporate “knowledge”
• Public & Internal Ontologies have great potential in
structuring this text into minable data
• But these ontologies require significant processing, both
human and automated in order to make them “fit for
purpose”
• Combine this with a fast, flexible, simple API and you
can address a vast array of different use cases in
– Software Vendors & Systems Developers
– Content Providers
– Data Scientists & Text Miners
37. Pistoia Alliance
Ontologies at EMBL-EBI
[Diagram: biomedical ontologies covering disease, bioassays, cell lines, cell types, small molecules, evidence, taxonomy, drugs, adverse events, information, gene function, plant anatomy, mouse anatomy and phenotype, used in applications such as EVA, Expression Atlas, GWAS Catalog and ArrayExpress.]
38. Pistoia Alliance
The challenge - thousands of data attributes…
• Use the data to focus our curation efforts
– For experimental data we focus on species, cell types, tissue types, disease state, phenotypes
– Identify gaps in public ontologies
• Different requirements
– High quality, manually curated resources e.g. GWAS catalog, OpenTargets
– High throughput, automated curation e.g. archival resources like BioSamples
39. Pistoia Alliance
We build ontology aware applications
• Smarter searching
• Data analysis
• Data integration
• Data visualisation
40. Pistoia Alliance
Common questions
• How can I access ontologies?
• How do I map data to ontologies?
• What about data that doesn’t map?
• How can I translate from one ontology to
another?
• How can I extend an ontology?
• How do I build “ontology aware” applications?
• How should I publish my data?
41. Pistoia Alliance
We are building an Ontology Toolkit
• Search/visualise ontologies: Ontology Lookup Service
• Annotate data: Zooma
• Ontology mappings: OxO
• Create new ontology content: Webulous
42. Pistoia Alliance
Ontology Lookup Service
Repository of over 160 pre-selected biomedical ontologies (4.5 million terms)
http://www.ebi.ac.uk/ols
• Ontology search engine
• Ontology term history tracking
• Ontology visualisation
• Powerful RESTful API
• Provides a unified mechanism to access multiple ontologies
• Large community of users, 10s of millions of hits per month
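The slide's mention of a RESTful API can be illustrated with a minimal sketch. The endpoint path, query parameters (`q`, `ontology`) and response fields below follow my understanding of the public OLS search API; treat them as assumptions to be checked against the live documentation at the URL above.

```python
# Minimal sketch of querying the OLS search endpoint (assumed API shape).
from urllib.parse import urlencode

OLS_SEARCH = "https://www.ebi.ac.uk/ols/api/search"

def build_search_url(query: str, ontology: str = "") -> str:
    """Construct an OLS free-text search URL, optionally restricted to one ontology."""
    params = {"q": query}
    if ontology:
        params["ontology"] = ontology  # e.g. "efo"
    return f"{OLS_SEARCH}?{urlencode(params)}"

def labels_from_response(payload: dict) -> list:
    """Pull term labels out of an OLS search response payload (assumed structure)."""
    return [doc.get("label", "") for doc in payload["response"]["docs"]]

url = build_search_url("diabetes mellitus", ontology="efo")
# To run live (network required):
#   import json, urllib.request
#   payload = json.load(urllib.request.urlopen(url))
#   print(labels_from_response(payload))
```

The URL builder and response parser are kept separate so the parsing logic can be tested offline against a saved payload.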
43. Pistoia Alliance
Zooma
• Optimal mappings based on data we have seen previously
• Favours precision over recall for use in automated pipelines
• Currently contains over 92,000 curated annotations from 7 resources
– ClinVar, Cellular Phenotype Database, ExpressionAtlas, UniProt, GWAS,
EBiSC, OpenTargets
– Used to improve and share their mappings across resources
Repository of curated ontology mappings
http://www.ebi.ac.uk/spot/zooma
44. Pistoia Alliance
New for 2017 – Ontology Xrefs
• A lot of curator effort goes into building ontology cross-references
• Cross-references are a powerful tool for integrating data
[Diagram: mappings between the Human Phenotype Ontology (data source 1) and SNOMED-CT (data source 2).]
45. Pistoia Alliance
Ontology Mapping Service (OxO)
• New curation platform for community built mappings
• Seeded with mappings from OLS and other sources (UMLS,
SNOMED)
• Normalised CURIE prefixes using identifiers.org
– SNOMED-CT: / SNOMEDCT: / SNOMED: / SNOMEDCT_
• Provides a gold standard to support predictive mapping algorithms
http://www.ebi.ac.uk/spot/oxo *
* Going live March 2017
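The prefix normalisation mentioned on this slide — collapsing variants such as SNOMED-CT:, SNOMEDCT:, SNOMED: and SNOMEDCT_ to one canonical form — can be sketched as follows. The prefix table and function are illustrative only, not OxO's actual implementation.

```python
# Illustrative sketch of CURIE prefix normalisation, as a mapping service
# like OxO must do before comparing identifiers from different sources.
import re

# Variant prefixes observed in the wild, mapped to a canonical prefix.
CANONICAL_PREFIXES = {
    "SNOMED-CT": "SNOMEDCT",
    "SNOMEDCT": "SNOMEDCT",
    "SNOMED": "SNOMEDCT",
    "SNOMEDCT_": "SNOMEDCT",  # underscore style, e.g. SNOMEDCT_38341003
    "HP": "HP",
    "HPO": "HP",
}

def normalise_curie(curie: str) -> str:
    """Split a CURIE on ':' or '_' and rewrite its prefix to the canonical form."""
    match = re.match(r"^([A-Za-z-]+[_:])(\w+)$", curie)
    if not match:
        raise ValueError(f"not a CURIE: {curie!r}")
    prefix, local_id = match.group(1), match.group(2)
    # Underscore-style prefixes keep the separator as part of the lookup key.
    key = prefix if prefix.endswith("_") else prefix[:-1]
    canonical = CANONICAL_PREFIXES.get(key)
    if canonical is None:
        raise ValueError(f"unknown prefix in {curie!r}")
    return f"{canonical}:{local_id}"
```

In practice the canonical forms would come from a registry such as identifiers.org rather than a hand-written table.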
46. Pistoia Alliance
Webulous – creating new ontology content
• Spreadsheet templates for adding new ontology content
– Ontology “aware” for in-sheet validation
– Generic ontology building technology
• Works with Google Sheets
[Diagram: the Webulous server exposes a list of ontology design templates; populated templates are converted to OWL, and Webulous exports the newly generated ontology.]
http://www.ebi.ac.uk/efo/webulous
47. Pistoia Alliance
Putting it all together
• How can I access ontologies?
• How do I map data to ontologies?
• What about data that doesn’t map?
• How can I translate from one ontology to
another?
• How can I extend an ontology?
• How do I build “ontology aware” applications?
• How do I publish my annotations?
49. What is the project planning
to do next?
Ian Harrow at Pistoia Alliance
50. Pistoia Alliance
Why do we need Ontology Mapping?
[Diagram: within a data domain (example: Disease and Phenotype), mapping tools and services link Ontology 1 to Ontology 2 (Mapping 1-2) and Ontology 2 to Ontology 3 (Mapping 2-3).]
• Higher scalability at reduced cost of maintenance
• A better engineering solution for application ontologies
• Expandable coverage
• + More…
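The chaining idea behind the diagram — curate mappings between adjacent ontology pairs, then compose them, rather than curating every pair directly — can be sketched as follows. All identifiers are invented for illustration.

```python
# Illustrative sketch of chaining pairwise ontology mappings, so that a
# term in ontology 1 can be translated to ontology 3 via ontology 2
# without a directly curated 1-3 mapping. All identifiers are made up.
mapping_1_2 = {"ONT1:0001": "ONT2:1001", "ONT1:0002": "ONT2:1002"}
mapping_2_3 = {"ONT2:1001": "ONT3:2001", "ONT2:1003": "ONT3:2003"}

def compose(first: dict, second: dict) -> dict:
    """Chain two mappings, keeping only terms that survive both hops."""
    return {
        src: second[mid]
        for src, mid in first.items()
        if mid in second  # drop terms with no onward mapping
    }

mapping_1_3 = compose(mapping_1_2, mapping_2_3)
# ONT1:0001 reaches ONT3:2001; ONT1:0002 is dropped (no 2->3 mapping).
```

This is the scalability argument on the slide: n ontologies need only a chain of n-1 curated mappings instead of roughly n² pairwise ones, at the cost of terms lost where a hop is missing.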
51. Pistoia Alliance
Proposal for an Ontology Mapping Service
[Diagram: OLS (Ontology Lookup Service, 162 ontologies), OxO (Ontology Cross References: a database of curated mappings sourced from public datasets) and Zooma (mapping tool: maps free-text annotations to ontology terms based on a curated repository of annotation knowledge).]
Pistoia Alliance Prototype Ontologies Mapping Service (OMS):-
• Develop an OMS to build on the existing Ontology Services at EMBL-EBI
• Evaluate value and quality of selected mappings in Disease & Phenotype domain
52. PistoiaAlliance
Proposed Deliverables and Timeline
[Timeline diagram spanning Phase 2 (2016, preparatory for Phase 3) and Phase 3 (2017, 1Q–4Q):]
• Requirements for an Ontologies Mapping service as a standard
• Promote and publicise the prototype service
1) Start the prototype service, to run for 6 months
2) Three-month review of service performance metrics
3) Complete the prototype service and report performance metrics
53. PistoiaAlliance
Benefits and Support for Phase 3
Expected Benefits
• Evaluate value and quality of mappings between public
disease & phenotype ontologies selected by funders
• Evaluate value and quality of public to internal mappings
selected by funders
• Build on the database for public ontology mappings
• Extensible to any ontology hosted at EMBL-EBI
Call for Support
• Roche have committed funds and interest is growing
• Please contact ian.harrow@pistoiaalliance.org about
support for the project
Now is a great time to join us!!!
55. Join the Deep Learning
Hackathon March 25-26 London
Help us show how Deep Learning can impact
Life Sciences and Healthcare
Why attend?
• Create something that could make a life changing difference to human health.
• Job opportunities - meet and find out more about working with the pharma /
healthcare industry. We will bring the companies to you. Your team mate at
the event could be your future colleague!
• Win prizes and gain recognition - you can receive your prize at our
conference in front of over 100 senior industry experts from R & D and IT.
• We hope you will make new connections from new areas and disciplines,
perhaps even see new career directions you never thought possible.
• It’s fun!!!
http://www.pistoiaalliance.org/eventdetails/pistoia-alliance-hackathon
Warm welcome to all of you. My name is Ian Harrow and I am working for the Pistoia Alliance. Thank you for joining us on this Webinar titled…
Please note that this webinar will be recorded and made available to all of you for later reference.
Before I introduce the panel of speakers we have today, I would like to ask you this poll question: what is your level of familiarity/involvement with ontologies? This will help the panellists to better understand the audience. A, B, C, D. I will give you another 30 seconds, then Nick Lynch, who provides all the support for this webinar, will give us the results. Some interpretation: an interesting mix in the results.
Eagle provides software solutions that bridge between “big data” and “innovative biological insight”.
Ontologies are essential. Why? Because they allow disparate data from a variety of sources to be harmonised, federated and integrated into a resource from which various high performance computational analysis such as data processing, statistical analyses and data mining can be carried out towards novel biological insights.
Curation of scientific data involves the collection, characterisation, cleaning, contextualization, categorisation and cataloguing, …
Eagle Genomics pioneers measurement of data value (i.e. usefulness and relevance) in the context of specific scientific questions.
Value modeling requires data harmonisation using ontologies.
We can measure the value of data before the use of ontologies and after, according to quality metrics and value metrics.
AHP (analytic hierarchy process) is a structured technique for organizing and analyzing complex decisions, based on mathematics and psychology.
QFD (Quality Function Deployment) is a structured approach to defining customer needs or requirements and translating them into specific plans to produce products to meet those needs. The “voice of the customer” is the term to describe these stated and unstated customer needs or requirements.
Data governance is emerging as an important activity within the biopharma and healthcare industries. This is a complex initiative which relates to validity (are we doing the right things?) and consistency (are we doing things right?) throughout the organisation. Ensuring that everyone refers to the same drug or disease across all organisational departments and sites (R&D -> clinical trials -> sale of the drug to treat disease) is essential.
Governance should be by design, not as a "tick box" exercise.
Governance can be initiated by use of ontologies/ controlled vocabularies to tag and link experiments/ datasets throughout different departments .
So being a part of the project team has:
- Allowed visibility: we played an active role throughout the project, which projected a serious and professional image among other organisational team members.
- Provided an overall increase in our expertise, understanding and capability within this important field.
- Given us credibility with potential customers/clients, being able to say that we are heavily involved in this important community project along with other Pistoia member organisations.
- Given us the opportunity to become aware of the evaluation and selection of the best potential academic/commercial ontology mapping tools and service providers for future customer projects, ahead of those projects starting, hence saving precious time.
- Given us the opportunity to be involved in the development of various documentation: the functional and non-functional requirements for an ontologies mapping tool, and ontology mapping guidelines (already comprehensively followed by some ontologies).
Here is our second poll for the audience…
EBI makes extensive use of ontologies spanning many domains
Multiple applications that use these ontologies to describe data
The metadata is a mess, so we focus and prioritise our curation on species, cell types, tissue types, disease state and phenotype.
A range of applications is driven by ontologies: smarter searching, analysis, data integration and visualisation of data.
Zooma and OLS first port of call for mappings
Zooma allows curation for unmapped data, new terms can be minted with webulous
Might find mapping but need cross-refs – then use OxO
Build applications, extract application ontologies, build search indexes with BioSolr.
Publish data and mappings – the EBI RDF platform is state of the art in semantic publishing.
Used by BioSamples via BioSolr.
Refer back to BioSamples, building pipelines.
Can't have DO and HPO together, but if I've got mappings I'm OK.
GWAS needs to export as MeSH instead of OWL.
They come from multiple sources:
Directly asserted as term annotation inside ontologies
Dedicated mapping resources (UMLS, SNOMED, ICD)
Manually curated mappings (in spreadsheets)
Automated tools for predicting mappings
Identifier formats can vary
URIs, CURIEs (compact URIs e.g. GO:0001234)
Prefixes vary for CURIEs (MSH and MeSH both used for MeSH)
Provenance models and semantics lacking
Hard to know how existing mappings were derived (manual vs automated)
What mapping means (sameAs, cross species, broader)
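The identifier variation described above (URIs vs CURIEs, inconsistent prefixes like MSH vs MeSH) is exactly what a mapping service has to normalise before mappings can be compared. A minimal sketch, where the prefix table and URI pattern are illustrative rather than any official registry:

```python
# Normalise the identifier variation described above: CURIE prefixes
# differ across sources (e.g. "MSH" vs "MeSH"), and some records
# carry full URIs. The tables below are illustrative examples only.

PREFIX_SYNONYMS = {"msh": "MeSH", "mesh": "MeSH", "go": "GO", "doid": "DOID"}
URI_PATTERNS = {"http://purl.obolibrary.org/obo/GO_": "GO"}

def normalise(identifier):
    """Return a canonical CURIE for a URI or loosely-prefixed CURIE."""
    # Full URI: compact it against a known URI pattern.
    for base, prefix in URI_PATTERNS.items():
        if identifier.startswith(base):
            return f"{prefix}:{identifier[len(base):]}"
    # CURIE: canonicalise the prefix, keep the local part as-is.
    prefix, _, local = identifier.partition(":")
    return f"{PREFIX_SYNONYMS.get(prefix.lower(), prefix)}:{local}"

print(normalise("MSH:D003924"))                                # MeSH:D003924
print(normalise("http://purl.obolibrary.org/obo/GO_0001234"))  # GO:0001234
```

Normalising identifiers, however, does not solve the provenance and semantics gap above; a mapping record still needs to say how it was derived and whether it asserts equivalence or something broader.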
Thank you, Martin and Sergio, for these excellent examples of how cross-pharma collaboration can deliver real value for the community.
A reminder to type your questions into GoToWebinar.
A recording of this webinar will be made available on the Pistoia website. Thank you.