In this presentation we explain the role of archetypes in facilitating data reuse and data quality evaluation. Our use case is a system for monitoring best practices during a baby's first 1,000 days. What happens during the first 1,000 days lays the foundation for optimal health, growth, and neurodevelopment across the lifespan. This presentation shows how to create a standardized, quality-assessed integrated data repository that enables reliable data reuse for monitoring best practices and for perinatal health research.
Developing a Clinical Decision Support System with Grakn (Vaticle)
Building a scalable and interoperable decision support system for clinical practice is the Holy Grail of medical informatics.
Being one of the many on this quest, trying to fit a Pantagruelian amount of data into business rules, I often found myself thinking: what if the problem is not only the complexity of the domain, but also the unsuitable set of tools we use to model it?
I therefore propose that an expert system for clinical decisions, manually built from if-then logic and simplistic models, cannot keep pace with the ever-growing body of free-text literature.
I'd like you to join me on my journey of building a clinical decision support system using Grakn for integrated care in oncology at an Italian hospital. Let me walk you through the most interesting parts of the process, from the natural language processing of free-text medical reports to the annotation of medical entities as SNOMED CT concepts and their migration into Grakn.
I landed on Grakn because it allows me to match the machine-readable text of medical reports to the free text of the latest clinical practice guidelines and medical literature, making it possible to apply automated reasoning to extract tailored recommendations for patients.
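The annotation step described above, mapping free-text mentions to SNOMED CT concepts, can be sketched as follows. This is a minimal illustration assuming a simple dictionary lookup; a real pipeline would use a trained entity linker, and the small term table here is a hypothetical stand-in (the SNOMED CT codes shown are illustrative).

```python
# Hypothetical term-to-concept table; a real system would use a full
# SNOMED CT terminology service rather than a hard-coded dictionary.
SNOMED_LOOKUP = {
    "breast cancer": "254837009",
    "chemotherapy": "367336001",
    "hypertension": "38341003",
}

def annotate(report_text: str) -> list[dict]:
    """Return (term, code, offset) annotations found in the report."""
    text = report_text.lower()
    annotations = []
    for term, code in SNOMED_LOOKUP.items():
        start = text.find(term)
        if start != -1:  # first occurrence only, for simplicity
            annotations.append({"term": term, "code": code, "offset": start})
    return sorted(annotations, key=lambda a: a["offset"])

report = "Patient with breast cancer, started chemotherapy in March."
for ann in annotate(report):
    print(ann["term"], "->", ann["code"])
```

Once annotated, each (report, concept) pair can be inserted into the graph database as an entity-relation pair, which is what enables the automated reasoning over guidelines.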
The 10th Annual Utah Health Services Research Conference: Data Quality in Multi-Site Health Services and Comparative Effectiveness Research: Lessons from PHIS+ By: Ram Gouripeddi
Health Services Research Conference: March 16, 2015
Patient Centered Research Methods Core, University of Utah, CCTS
Enabling Clinical Data Reuse with openEHR Data Warehouse Environments (Luis Marco Ruiz)
Databases for clinical information systems are difficult to design and implement, especially when the design must comply with a formal specification or standard. The openEHR specifications offer a very expressive and generic model for clinical data structures, allowing semantic interoperability and compatibility with other standards such as HL7 CDA, FHIR, and ASTM CCR. But openEHR is not only for data modeling: it specifies an EHR computational platform designed to create highly modifiable, future-proof EHR systems and to support long-term, economically viable projects, with a knowledge-oriented approach that is independent of specific technologies. Software developers face great complexity in designing openEHR-compliant databases, since the specifications do not include any guidelines in that area. The authors of this tutorial are developers who had to overcome these challenges. The tutorial presents the requirements, design principles, technologies, techniques, and main challenges of implementing an openEHR-based clinical database, with examples and lessons learned to help designers and developers overcome the challenges more easily.
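One design approach that recurs in openEHR persistence work, since the specifications give no persistence guidelines, is a generic path-to-value ("node + path") store rather than one table per archetype. The following is a hypothetical minimal sketch of that idea using SQLite, not a reference design; the table names and the archetype path shown are illustrative.

```python
import sqlite3

# Generic path->value store: compositions are rows, leaf data values
# are (path, value) pairs keyed by the archetype path of the node.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE composition (
    uid          TEXT PRIMARY KEY,
    ehr_id       TEXT NOT NULL,
    archetype_id TEXT NOT NULL   -- e.g. openEHR-EHR-COMPOSITION.encounter.v1
);
CREATE TABLE data_node (
    composition_uid TEXT REFERENCES composition(uid),
    path  TEXT NOT NULL,         -- archetype path of the leaf node
    value TEXT NOT NULL
);
""")

conn.execute("INSERT INTO composition VALUES (?, ?, ?)",
             ("c1", "ehr42", "openEHR-EHR-COMPOSITION.encounter.v1"))
conn.execute("INSERT INTO data_node VALUES (?, ?, ?)",
             ("c1",
              "/content[openEHR-EHR-OBSERVATION.blood_pressure.v2]"
              "/data/events/data/items[at0004]/value/magnitude",
              "120"))

row = conn.execute(
    "SELECT value FROM data_node WHERE path LIKE '%blood_pressure%'"
).fetchone()
print(row[0])
```

The trade-off, as the tutorial's framing suggests, is that such a schema is maximally flexible across archetypes but pushes query complexity into path matching and indexing.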
Enabling Clinical Data Reuse with openEHR Data Warehouse Environments (Luis Marco Ruiz)
Modern medicine needs methods to enable access to data captured during health care for research, surveillance, decision support, and other reuse purposes. Initiatives like the National Patient-Centered Clinical Research Network in the US and Electronic Health Records for Clinical Research in the EU are facilitating the reuse of Electronic Health Record (EHR) data for clinical research. One of the barriers to data reuse is the integration and interoperability of different Healthcare Information Systems (HIS), owing to the differences among their information and terminology models. The use of EHR standards like openEHR can alleviate these barriers by providing a standard, unambiguous, semantically enriched representation of clinical data that enables semantic interoperability and data integration. Few works have been published describing how to drive proprietary data stored in EHRs into standard openEHR repositories. This tutorial provides an overview of the key concepts, tools, and techniques necessary to implement an openEHR-based Data Warehouse (DW) environment to reuse clinical data. We aim to provide insights into extracting data from proprietary sources, transforming it into openEHR-compliant instances to populate a standard repository, and enabling access to it using standard query languages and services.
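The transformation step described here, from a proprietary source row to an openEHR-compliant instance, can be sketched like this. The path syntax mimics the simplified "flat" composition format supported by several openEHR servers; the specific paths, field names, and the source record are illustrative assumptions, not a real template.

```python
# Hedged sketch of an extract-transform step: map one row from a
# proprietary HIS export into flat openEHR-style (path -> value) pairs.

def to_flat_composition(row: dict) -> dict:
    """Transform one proprietary vitals row into flat openEHR paths."""
    return {
        "ctx/language": "en",
        "ctx/territory": "US",
        "vitals/blood_pressure/systolic|magnitude": row["sys_bp"],
        "vitals/blood_pressure/systolic|unit": "mm[Hg]",
        "vitals/blood_pressure/diastolic|magnitude": row["dia_bp"],
        "vitals/blood_pressure/diastolic|unit": "mm[Hg]",
    }

# Example record as it might come out of a source system.
source_row = {"patient_id": "P001", "sys_bp": 120, "dia_bp": 80}
flat = to_flat_composition(source_row)
print(flat["vitals/blood_pressure/systolic|magnitude"])
```

Once loaded into a standard repository, such compositions can be queried with a standard query language such as AQL, which is the access path the tutorial refers to.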
Spatio-temporal Sensor Integration, Analysis, Classification or Can Exascal... (Joel Saltz)
Presentation at Clusters, Clouds and Data for Scientific Computing 2014
Integrative analyses of large-scale spatio-temporal datasets play increasingly important roles in many areas of science and engineering. Our recent work in this area is motivated by application scenarios involving complementary digital microscopy, radiology and "omic" analyses in cancer research. In these scenarios, the objective is to use a coordinated set of image analysis, feature extraction and machine learning methods to predict disease progression and to aid in targeting new therapies. I will describe tools and methods our group has developed for extraction, management, and analysis of features, along with the systems software methods for optimizing execution on high-end CPU/GPU platforms. After presenting our current work as an introduction, I will describe 1) related but much more ambitious exascale biomedical and non-biomedical use cases that also involve the complex interplay between multi-scale structure and molecular mechanism, and 2) concepts and requirements for methods and tools that address these challenges.
A brief presentation outlining the concepts of data quality in the context of clinical data, and highlighting the importance of data quality for population health, health analytics, and other secondary uses of clinical data.
Ontology-Driven Clinical Intelligence: A Path from the Biobank to Cross-Disea... (Remedy Informatics)
The discovery of clinical insights through effective management and reuse of data requires several conditions to be optimized: Data need to be digital, data need to be structured, and data need to be standardized in terms of metadata and ontology. This presentation describes a bioinformatics system that combines a next-generation biobank management model mapped to applicable international standards and guidelines with a master ontology that controls all input and output and is able to add unique properties to meet the specialized needs of clinicians for cross-disease research.
Ontology-Driven Clinical Intelligence: Removing Data Barriers for Cross-Disci... (Remedy Informatics)
The presentation describes how Remedy Informatics is advocating and innovating "flexible standardization" through an ontology-driven approach to clinical research. You will see in greater detail how a foundational, standardized Mosaic Ontology can be extended for more specific research applications and even more specific and focused disease research.
The Uneven Future of Evidence-Based Medicine (Ida Sim)
An Apple ResearchKit study enrolled 22,000 people in five days. A study claims that Twitter can be used to identify depressed patients. A computer program crunches genomic data, the published literature, and electronic health record data to guide cancer treatment. The pace, the data sources, and the methods for generating medical evidence are changing radically. What will — what should — evidence-based medicine look like in a faster, personalized, data-dense tomorrow?
- Presented as the 3rd Annual Cochrane Lecture, October 2015 in Vienna, Austria.
Enterprise Analytics: Serving Big Data Projects for Healthcare (DATA360US)
Andrew Rosenberg's Presentation on "Enterprise Analytics: Serving Big Data Projects for Healthcare" at DATA 360 Healthcare Informatics Conference - March 5th, 2015
Independent forces on the biomedical ecosystem are causing a convergence of care, quality measurement, and clinical research at the point of care. The presentation outlines some of the informatics implications of this convergence.
It addresses three fundamental questions: (1) How is Medicare doing today? (2) Why is MACRA happening? (3) Why is clinical data quality important to you?
Sharing and Standards - Clinical Innovation and Partnering... (Christopher Hart)
The talk acknowledges the increasing need for cooperation and collaboration in data sharing and access, describes the complexity this can bring, and then outlines some ways to simplify it.
Originally presented at Terrapinn's Clinical Innovation and Partnering World, March 8-9, 2017.
http://www.terrapinn.com/conference/innovation-and-partnering/index.stm
Computer Validation of e-Source and EHR in Clinical Trials (Wolfgang Kuchinke)
Clinical Trials in the Learning Health System (LHS): Computer System Validation of eSource and EHR Data.
The question that was addressed: How to make a clinical trial data management system that uses EHR data, Patient Reported Outcome (PRO) and eSource data as part of the Learning Health System compliant with regulations and with Good Clinical Practice (GCP)?
The Learning Health System (LHS) connects health care with translational and clinical research. It generates new medical knowledge as a by-product of the care process, and its aim is to improve the health and safety of patients. The LHS both generates and applies knowledge. For this purpose, clinical research, which is research involving humans, must be part of the LHS. Two general types of research exist: observational studies and clinical trials.
Clinical data drive the LHS, because results from randomized controlled trials are seen as the "gold standard" for medical evidence. For this reason, the concept of using data gathered directly from the patient care environment has enormous potential for accelerating the rate at which useful knowledge is generated.
All computer systems involved in clinical trials must undergo Computer System Validation (CSV). For this process, a legal framework for the TRANSFoRm project was developed. It was used for data privacy analysis of the data flow in two research use cases: an epidemiological cohort study on Diabetes and a randomised clinical trial about different GORD treatment regimes.
Computerized system validation is the documented process of producing evidence that a computerized system does exactly what it is designed to do in a consistent and reproducible manner. The validation of electronic source data in clinical trials presents many challenges because of the blurring of the border between care and research. Here we present our approach to the validation of eSource data capture, along with the documentation developed for the CSV of the complete data flow in the LHS developed by the TRANSFoRm project. The GORD Valuation Study played an important part in this work.
Computer System Validation - privacy zones, eSource and EHR data in clinical ... (Wolfgang Kuchinke)
Computer System Validation with privacy zones, e-source and clinical trials b... (Wolfgang Kuchinke)
Clinical Trials in the Learning Health System: Computer System Validation of eSource and EHR Data. The basic question is how to make a clinical trial data management system that uses EHR data, Patient Reported Outcome (PRO) and eSource data as part of the Learning Health System compliant with regulations and with Good Clinical Practice (GCP). Computer System Validation (CSV) is a requirement for all computer systems involved in clinical trials for drug submission. It consists of documented processes that produce evidence that a computerized system does exactly what it is designed to do in a consistent and reproducible manner. Validation begins with the system requirements definition and continues until system retirement. For example, the components of the clinical trials framework used in our case are: patient eligibility checks and enrolment, pre-population of eCRFs with data from EHRs, PROM data collection by patients, storing of a copy of study data in the EHR, and validation of the Study System that coordinates all study and data collection events.
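One of the framework components listed above, pre-population of eCRFs with data from EHRs, can be sketched as a mapping step. The field names, the mapping table, and the provenance structure below are hypothetical illustrations; a validated production system would additionally record full audit-trail provenance for every pre-populated value.

```python
# Hypothetical EHR-element -> eCRF-field mapping table.
EHR_TO_ECRF = {
    "patient.birth_date": "DM.BRTHDTC",
    "observation.weight": "VS.WEIGHT",
}

def prepopulate_ecrf(ehr_record: dict) -> dict:
    """Build eCRF fields from an EHR record, tagging each value's source."""
    ecrf = {}
    for ehr_path, ecrf_field in EHR_TO_ECRF.items():
        if ehr_path in ehr_record:
            ecrf[ecrf_field] = {
                "value": ehr_record[ehr_path],
                "source": "EHR",  # provenance flag, needed for audit
            }
    return ecrf

ehr = {"patient.birth_date": "1980-05-01", "observation.weight": 72.5}
print(prepopulate_ecrf(ehr)["VS.WEIGHT"]["value"])
```

Keeping the source tag on each value is what makes the pre-population step auditable, which is the property CSV has to demonstrate.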
eSource direct data entry in clinical trials and GCP requirements. It is the duty of physicians who are involved in medical research to protect the privacy and confidentiality of personal information of research subjects. Any eSource system should be fully compliant with the provisions of applicable data protection legislation. This creates the need to develop and implement processes that ensure the continuous control of the investigators over these data. This has to be the focus of CSV. Clinical Data drive the LHS. The results from randomized controlled trials are seen as the “gold standard” for medical evidence, but such trials are often performed outside the usual system of care and recruit highly selected populations. For this reason, the concept of using data gathered directly from the patient care environment has enormous potential for accelerating the rate at which useful knowledge is generated.
This leads to the requirement for validating electronic source data in clinical trials, including validation of clinical data captured either from the subject directly or from the subject's medical records. The problem is the correct and appropriate system validation of electronic source data. The main components of CSV are the Validation Master Plan, User Requirements Specification, Hardware Requirements Specification, Design Qualification, Installation Qualification, Operational Qualification, and Performance Qualification.
Any instrument used to capture source data should ensure that the data are captured as specified within the protocol. Source data should be accurate, legible, contemporaneous, original, attributable, complete and consistent. An audit trail should be maintained as part of the source documents for the original creation and subsequent modification of all source data.
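The audit-trail requirement above can be illustrated with a minimal structure in which every creation or modification of a source data point records who, when, and the old and new values, so that the original entry is never overwritten. This is an illustrative sketch only, not a regulatory-compliant design.

```python
import datetime

class AuditedField:
    """A source data field whose full change history is retained."""

    def __init__(self, name, value, user):
        self.name = name
        self.trail = []
        self._record(None, value, user, "create")

    def _record(self, old, new, user, action):
        self.trail.append({
            "action": action, "old": old, "new": new, "user": user,
            "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })
        self.value = new  # current value is always the last recorded one

    def update(self, new_value, user):
        # Modifications append to the trail; nothing is ever deleted.
        self._record(self.value, new_value, user, "modify")

bp = AuditedField("systolic_bp", 130, user="nurse01")
bp.update(128, user="investigator02")
print(len(bp.trail), bp.value)
```

The trail makes every value attributable, original, and contemporaneous in the sense used above: the creation entry preserves the original value, and each modification names its author and timestamp.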
Combining Patient Records, Genomic Data and Environmental Data to Enable Tran... (Perficient, Inc.)
The average academic research organization (ARO) and hospital has many systems that house patient-related information, such as patient records and genomic data. Combining data from a variety of sources in an ongoing manner can enable complex and meaningful querying, reporting and analysis for the purposes of improving patient safety and care, boosting operational efficiency, and supporting personalized medicine initiatives.
In this webinar, Perficient’s Mike Grossman, a director of clinical data warehousing and analytics, and Martin Sizemore, a healthcare strategist, discussed:
-How AROs and hospitals can benefit from a systematic approach to combining data from diverse systems and utilizing a suite of data extraction, reporting, and analytical tools, in order to support a wide variety of needs and requests
-Examples of proposed solutions to real-life challenges AROs and hospitals often encounter
Clinical Information Models (CIMs) expressed as archetypes play an essential role in the design and development of current Electronic Health Record (EHR) information structures. Although many experiences of using archetypes have been reported in the literature, a comprehensive and formal methodology for archetype modeling does not exist. Having a modeling methodology is essential to developing quality archetypes, guiding the development of EHR systems, and enabling the semantic interoperability of health data. In this work, an archetype modeling methodology is proposed. This paper describes its phases, the inputs and outputs of each phase, and the participants and tools involved. It also describes possible strategies for organizing the modeling process. The proposed methodology is inspired by existing best practices in CIM, software, and ontology development. The methodology has been applied and evaluated in regional and national EHR projects. Applying the methodology provided useful feedback and improvements, and confirmed its advantages. The conclusion of this work is that having a formal methodology for archetype development facilitates the definition and adoption of interoperable archetypes, improves their quality, and facilitates their reuse across different information systems and EHR projects. Moreover, the proposed methodology can also serve as a reference for CIM development using any other formalism.
Proper use (or reuse) of health data requires ensuring its quality. Data quality means that the data correctly represent the reality they refer to and are adequate for their intended use. We propose seven dimensions for evaluating data quality:
- Uniqueness: Are there replicated data?
- Completeness: Are data missing?
- Consistency: Do the data comply with the established rules (types, ranges, occurrences, etc.)?
- Correctness: Are there anomalous data?
- Temporal stability: Do the data vary over time?
- Multi-source stability: Do the data vary depending on their origin or source (hospitals, departments, professionals, etc.)?
- Predictive value: Can a variable in my data be used to build a decision support system?
VeraTech has developed qualize as our reference framework for data quality evaluation. www.qualize.net
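The first few dimensions listed above can be sketched as simple checks over a tabular dataset. This is an illustrative toy, not the qualize framework; the records, field names, and the age range used for the consistency rule are assumptions.

```python
# Toy dataset seeded with one issue per dimension being checked.
records = [
    {"id": 1, "age": 34,   "sex": "F"},
    {"id": 2, "age": None, "sex": "M"},  # completeness issue (missing age)
    {"id": 2, "age": 34,   "sex": "F"},  # uniqueness issue (duplicate id)
    {"id": 3, "age": 212,  "sex": "F"},  # consistency issue (out of range)
]

def uniqueness(rows, key):
    """Fraction of distinct key values; 1.0 means no duplicates."""
    ids = [r[key] for r in rows]
    return len(set(ids)) / len(ids)

def completeness(rows, field):
    """Fraction of rows where the field is present."""
    return sum(r[field] is not None for r in rows) / len(rows)

def consistency(rows, field, lo, hi):
    """Fraction of non-missing values inside the allowed range."""
    vals = [r[field] for r in rows if r[field] is not None]
    return sum(lo <= v <= hi for v in vals) / len(vals)

print(uniqueness(records, "id"))            # duplicate id lowers the score
print(completeness(records, "age"))         # missing age lowers the score
print(consistency(records, "age", 0, 120))  # out-of-range age is flagged
```

Each check returns a score in [0, 1], so the dimensions can be reported side by side and tracked over time (which is how the temporal and multi-source stability dimensions would then be assessed: by comparing these scores across periods and sources).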
Spatio-‐temporal Sensor Integration, Analysis, Classification or Can Exascal...Joel Saltz
Presentation at Clusters, Clouds and Data for Scientific Computing 2014
Integrative analyses of large scale spatio-temporal datasets play increasingly important roles in many areas of science and engineering. Our recent work in this area is motivated by application scenarios involving complementary digital microscopy, radiology and “omic” analyses in cancer research. In these scenarios, the objective is to use a coordinated set of image analysis, feature extraction and machine learning methods to predict disease progression and to aid in targeting new therapies. I will describe tools and methods our group has developed for extraction, management, and analysis of features along with the systems software methods for optimizing execution on high end CPU/GPU platforms. Once having provided our current work as an introduction, I will then describe 1) related but much more ambitious exascale biomedical and non-biomedical use cases that also involve the complex interplay between multi-scale structure and molecular mechanism and 2) concepts and requirements for methods and tools that address these challenges.
A brief presentation outlining the concepts of data quality in the context of clinical data, and highlighting the importance of data quality for population health, health analytics, and other secondary uses of clinical data.
Ontology-Driven Clinical Intelligence: A Path from the Biobank to Cross-Disea...Remedy Informatics
The discovery of clinical insights through effective management and reuse of data requires several conditions to be optimized: Data need to be digital, data need to be structured, and data need to be standardized in terms of metadata and ontology. This presentation describes a bioinformatics system that combines a next-generation biobank management model mapped to applicable international standards and guidelines with a master ontology that controls all input and output and is able to add unique properties to meet the specialized needs of clinicians for cross-disease research.
Ontology-Driven Clinical Intelligence: Removing Data Barriers for Cross-Disci...Remedy Informatics
The presentation describes how Remedy Informatics is advocating and innovating "flexible standardization" through an ontology-driven approach to clinical research. You will see in greater detail how a foundational, standardized Mosaic Ontology can be extended for more specific research applications and even more specific and focused disease research.
The Uneven Future of Evidence-Based MedicineIda Sim
An Apple ResearchKit study enrolled 22,000 people in five days. A
study claims that Twitter can be used to identify depressed patients. A computer program crunches genomic data, the published literature, and electronic health record data to guide cancer treatment. The pace, the data sources, and the methods for generating medical evidence are changing radically. What will — what should — evidence-based medicine look like in a faster, personalized, data-dense tomorrow?
- Presented as the 3rd Annual Cochrane Lecture, October 2015 in Vienna, Austria.
Enterprise Analytics: Serving Big Data Projects for HealthcareDATA360US
Andrew Rosenberg's Presentation on "Enterprise Analytics: Serving Big Data Projects for Healthcare" at DATA 360 Healthcare Informatics Conference - March 5th, 2015
Independent forces on the biomedical ecosystem is causing a convergence of care, quality measurement, and clinical research at the point of care. The presentation outlines some of the informatics implications of this convergence.
The presentation outlines three fundamental questions: (1) how is medicare doing today?, (2) why is MACRA happening?, and (3) Why is clinical data quality important to you?
Sharing and standards christopher hart - clinical innovation and partnering...Christopher Hart
Acknowledging the increasing need for cooperation and collaboration in data sharing and access. Describing the complexity that this can bring. Then describing some of the ways to simplify that.
Originally presented at Terrapin's Clinical innovation and partnering world March 8-9 2017.
http://www.terrapinn.com/conference/innovation-and-partnering/index.stm
Computer validation of e-source and EHR in clinical trials-KuchinkeWolfgang Kuchinke
Clinical Trials in the Learning Health System (LHS): Computer System Validation of eSource and EHR Data.
The question that was addressed: How to make a clinical trial data management system that uses EHR data, Patient Reported Outcome (PRO) and eSource data as part of the Learning Health System compliant with regulations and with Good Clinical Practice (GCP)?
The Learning Health System (LHS) connects health care with translational and clinical research. It generates new medical knowledge as a by-product of the care process and its aim is to improve health and safety of patients. The LHS generates and applies knowledge. For this purpose, clinical research, which is research involving humans, must be part of the LHS. Two general types of research exists: observational studies and clinical trials.
Clinical data drive the LHS, because results from randomized controlled trials are seen as “gold standard” for medical evidence. For this reason the concept of using data gathered directly from the patient care environment has enormous potential for accelerating the rate at which useful knowledge is generated.
All computer systems involved in clinical trials must undergo Computer System Validation (CSV). For this process, a legal framework for the TRANSFoRm project was developed. It was used for data privacy analysis of the data flow in two research use cases: an epidemiological cohort study on Diabetes and a randomised clinical trial about different GORD treatment regimes.
Computerized system validation is the documented process to produce evidence that a computerized system does exactly what it is designed to do in a consistent and reproducible manner. The validation of electronic source data in clinical trials presents many challenges because of the blurring of the border between care and research. Here we present our approach for the validation of eSource data capture and the developed documentation for the CSV of the complete data flow in the LHS developed by the TRANSFoRm project. An important part hereby played the GORD Valuation Study.
Computer System Validation - privacy zones, eSource and EHR data in clinical ...Wolfgang Kuchinke
Clinical Trials in the Learning Health System (LHS): Computer System Validation of eSource and EHR Data.
The question that was addressed: How to make a clinical trial data management system that uses EHR data, Patient Reported Outcome (PRO) and eSource data as part of the Learning Health System compliant with regulations and with Good Clinical Practice (GCP)?
The Learning Health System (LHS) connects health care with translational and clinical research. It generates new medical knowledge as a by-product of the care process and its aim is to improve health and safety of patients. The LHS generates and applies knowledge. For this purpose, clinical research, which is research involving humans, must be part of the LHS. Two general types of research exists: observational studies and clinical trials.
Clinical data drive the LHS, because results from randomized controlled trials are seen as “gold standard” for medical evidence. For this reason the concept of using data gathered directly from the patient care environment has enormous potential for accelerating the rate at which useful knowledge is generated.
All computer systems involved in clinical trials must undergo Computer System Validation (CSV). For this process, a legal framework for the TRANSFoRm project was developed. It was used for data privacy analysis of the data flow in two research use cases: an epidemiological cohort study on Diabetes and a randomised clinical trial about different GORD treatment regimes.
Computer System Validation with privacy zones, e-source and clinical trials b... — Wolfgang Kuchinke
Clinical Trials in the Learning Health System: Computer System Validation of eSource and EHR Data. The basic question is how to make a clinical trial data management system that uses EHR data, Patient Reported Outcome (PRO) and eSource data as part of the Learning Health System compliant with regulations and with Good Clinical Practice (GCP). Computer System Validation (CSV) is a requirement for all computer systems involved in clinical trials for drug submission. It consists of documented processes to produce evidence that a computerized system does exactly what it is designed to do in a consistent and reproducible manner. Validation begins with the system requirements definition and continues until system retirement. For example, the components of the clinical trials framework used in our case are: patient eligibility checks and enrolment, pre-population of eCRFs with data from EHRs, PROM data collection by patients, storing of a copy of study data in the EHR, and validation of the Study System that coordinates all study and data collection events.
eSource direct data entry in clinical trials and GCP requirements. It is the duty of physicians who are involved in medical research to protect the privacy and confidentiality of personal information of research subjects. Any eSource system should be fully compliant with the provisions of applicable data protection legislation. This creates the need to develop and implement processes that ensure the continuous control of the investigators over these data, and this has to be the focus of CSV. Clinical data drive the LHS. The results from randomized controlled trials are seen as the “gold standard” for medical evidence, but such trials are often performed outside the usual system of care and recruit highly selected populations. For this reason, the concept of using data gathered directly from the patient care environment has enormous potential for accelerating the rate at which useful knowledge is generated.
This leads to the requirement for validating electronic source data in clinical trials, including clinical data captured either from the subject directly or from the subject’s medical records. The problem is the correct and appropriate system validation of electronic source data. The main components of CSV are the Validation Master Plan, User Requirements Specification, Hardware Requirements Specification, Design Qualification, Installation Qualification, Operational Qualification, and Performance Qualification.
Any instrument used to capture source data should ensure that the data are captured as specified within the protocol. Source data should be accurate, legible, contemporaneous, original, attributable, complete and consistent. An audit trail should be maintained as part of the source documents for the original creation and subsequent modification of all source data.
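The audit-trail requirement can be illustrated with a minimal append-only, hash-chained log. This is a sketch of the general idea only, not a validated eSource implementation; class and field names are invented for the example.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    """Append-only, hash-chained log of source-data creations and modifications."""

    def __init__(self):
        self.entries = []

    def record(self, user, action, field, value):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),  # contemporaneous
            "user": user,          # attributable
            "action": action,      # e.g. "create" or "modify"
            "field": field,
            "value": value,
            "prev_hash": prev_hash,
        }
        # Chain each entry to its predecessor so tampering or deletion is detectable
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(entry)

    def verify(self):
        """Recompute the chain and confirm no entry was altered or removed."""
        prev = "0" * 64
        for e in self.entries:
            if e["prev_hash"] != prev:
                return False
            body = {k: v for k, v in e.items() if k != "hash"}
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

Because each entry's hash covers its predecessor's hash, modifying any historical value invalidates the whole chain from that point on, which is the property an audit trail for source data needs.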
Combining Patient Records, Genomic Data and Environmental Data to Enable Tran... — Perficient, Inc.
The average academic research organization (ARO) and hospital has many systems that house patient-related information, such as patient records and genomic data. Combining data from a variety of sources in an ongoing manner can enable complex and meaningful querying, reporting and analysis for the purposes of improving patient safety and care, boosting operational efficiency, and supporting personalized medicine initiatives.
In this webinar, Perficient’s Mike Grossman, a director of clinical data warehousing and analytics, and Martin Sizemore, a healthcare strategist, discussed:
-How AROs and hospitals can benefit from a systematic approach to combining data from diverse systems and utilizing a suite of data extraction, reporting, and analytical tools, in order to support a wide variety of needs and requests
-Examples of proposed solutions to real-life challenges AROs and hospitals often encounter
Clinical Information Models (CIMs) expressed as archetypes play an essential role in the design and development of current Electronic Health Record (EHR) information structures. Although many experiences with archetypes are reported in the literature, a comprehensive and formal methodology for archetype modeling does not exist. Having a modeling methodology is essential to develop quality archetypes, in order to guide the development of EHR systems and to allow the semantic interoperability of health data. In this work, an archetype modeling methodology is proposed. This paper describes its phases, the inputs and outputs of each phase, and the involved participants and tools. It also includes the description of the possible strategies to organize the modeling process. The proposed methodology is inspired by existing best practices of CIM, software and ontology development. The methodology has been applied and evaluated in regional and national EHR projects. The application of the methodology provided useful feedback and improvements, and confirmed its advantages. The conclusion of this work is that having a formal methodology for archetype development facilitates the definition and adoption of interoperable archetypes, improves their quality, and facilitates their reuse among different information systems and EHR projects. Moreover, the proposed methodology can also be a reference for CIM development using any other formalism.
Proper use (or reuse) of health data requires ensuring its quality. Data quality means that the data correctly represent the reality they refer to and that they are fit for their intended use. We propose seven dimensions for evaluating data quality:
- Uniqueness: Are there replicated data?
- Completeness: Are there missing data?
- Consistency: Do the data comply with the established rules (types, ranges, occurrences, etc.)?
- Correctness: Are there anomalous data?
- Temporal stability: Is there variability in the data over time?
- Multi-source stability: Is there variability in the data depending on their origin or source (hospitals, departments, professionals, etc.)?
- Predictive value: Can any variable in my data be used to build a decision support system?
VeraTech has developed qualize as our reference framework for data quality evaluation. www.qualize.net
Description of the current status of the UNE-EN 13606 standard and the progress of its revision process.
Presented at the 5th Meeting of the Health Interoperability Forum, Barcelona, 22 April 2015
Archetype-based data transformation with LinkEHR — David Moner Cano
How can we convert data to standard data (EN ISO 13606, openEHR, HL7 CDA...) using archetypes? LinkEHR is a tool that helps in achieving this objective.
This presentation was made at the "Arctic Conference on Dual-Model based Clinical Decision Support and Knowledge Management", that took place the 27th and 28th of May, 2014 in Tromsø, Norway.
Standardised and Flexible Health Data Management with an Archetype Driven EHR... — David Moner Cano
The semantically interoperable Electronic Health Record is one of the most challenging research fields in health informatics. To reach this objective, the use of EHR standards that formally describe health data structures is indispensable. CEN EN13606 is one of the most promising approaches to solve this problem, since it covers the technical needs for semantic interoperability and, at the same time, incorporates a mechanism (the archetype model) for clinical domain experts’ participation in building an EHR system. In this paper we present EHRflex, a generic system based on archetypes. It empowers clinicians and allows them to manage their own EHR system in a simple and generic way, ensuring that the user works with underlying standardized data structures that can be exchanged with other people and systems when needed. EHRflex introduces EHR standards into the clinical routine, delivering a technical platform that works directly on archetype-based data.
Publication:
Anton Brass, David Moner, Claudia Hildebrand, Montserrat Robles. "Standardized and flexible health data Management with an archetype driven EHR system (EHRflex)". Seamless care – Safe care: The Challenges of Interoperability and Patient Safety in Health Care. Proceedings of the EFMI Special Topic Conference, pp. 212-218. IOS Press BV, Amsterdam. ISBN: 978-1-60750-562-4, 2010.
Since the approval of the CEN EN13606 norm for electronic health record communication, a growing interest around the application of this specification has emerged. The main objective of the norm is to serve as a mechanism to achieve the semantic interoperability of clinical data. This will require an effort to use common terminologies, to normalise the clinical knowledge domain and to combine all these formalisations with the existing information systems. This paper presents a methodology and developed tools to reach the seamless semantic interoperability of health data in legacy systems, and several case studies where the developed framework has been applied.
Publication:
David Moner, José A. Maldonado, Diego Boscá, Carlos Angulo, Montserrat Robles, Daniel Pérez, Pablo Serrano. "CEN EN13606 normalisation framework implementation experiences". Seamless care – Safe care: The Challenges of Interoperability and Patient Safety in Health Care. Proceedings of the EFMI Special Topic Conference, pp. 136-142. IOS Press BV, Amsterdam. ISBN: 978-1-60750-562-4, 2010.
Implementation of a CEN/ISO 13606 Platform for Medicines Reconciliation — David Moner Cano
Medicines reconciliation is a key process to improve health, welfare and patient safety. It is also recognized that semantically interoperable systems based on the use of health standards are an adequate strategy to achieve a reliable medicines reconciliation process. This paper describes a solution developed for medicines reconciliation at the Hospital de Fuenlabrada in Madrid. It is based on the use of a CEN/ISO 13606 based patient summary that is shared between primary care and the hospital center. The 13606 norm and archetypes were used to achieve the semantic interoperability of the clinical information, together with SNOMED CT and the Spanish National Medication Database. This approach has shown that it is feasible to achieve a patient safety improvement in an innovative and collaborative way.
Publication:
David Moner, Marta Terrón, Carlos Angulo, Luis Lechuga, Pablo Serrano, José Alberto Maldonado, Francisco J. Farfán, Montserrat Robles. "Implementation of a CEN/ISO 13606 platform for medicines reconciliation". XXIII International Conference of the European Federation for Medical Informatics (MIE 2011), Oslo
Data reuse and quality evaluation in archetype-based environments
1. Data reuse and quality evaluation in archetype-based environments
2nd Arctic Conference on openEHR and Archetype-based Clinical Information Systems
David Moner
damoca@veratech.es
18-19 January 2018, Tromsø
2. About this presentation
• We want to focus on the role of archetypes for facilitating data reuse and data quality evaluation
• Our use case will be a system for monitoring best practices during babies’ first 1,000 days
3. The first 1,000 days
Image from www.first1000days.ie
4. The first 1,000 days
What happens during the first 1,000 days is the foundation of optimal health, growth, and neurodevelopment across the lifespan.
Health aspects to be documented:
• Pregnancy and delivery
• Health problems and risks of the newborn
• Breastfeeding and food introduction
5. Project description
Pilot project to improve the quality of perinatal information and care (2015)
• Purpose
– To create a standardized, quality-assessed integrated data repository for reliable data reuse in the monitoring of best practices and research
6. Project description
• What have we done?
1. Define standardized archetypes from gestation to two years old (1,000 days).
2. Normalize existing data according to the archetypes, and import it into an integrated data repository.
3. Define data quality assessment criteria and evaluate the quality of the integrated data.
4. Define Best Practices Indicators (BPIs) for the monitoring of maternal and child care based on the data structure of the archetypes.
7. Project description
[Architecture diagram] Data from three source databases at Hospital Virgen del Castillo and Hospital 12 de Octubre (perinatal care, perinatal care with neonatal info. only, and primary care infant-feeding) are extracted, mapped, transformed and standardized into ISO 13606 archetype-based extracts, and loaded into an Integrated Data Repository (IDR). Data quality is assessed over seven dimensions, both PRE- and POST-standardization, producing data quality reports. Archetype-based queries over the IDR compute the set of maternal and child care BPIs.
9. Archetype definition
• Domain
– First 1,000 days of the baby, and pregnancy and delivery of the mother
• Reference model
– ISO 13606:2008
• Team
– 6 health professionals (3 perinatal experts)
– 2 experts in archetypes and information standards
• Experiences in the literature
– Jasmin Buck, et al. Towards a comprehensive electronic patient record to support an innovative individual care concept for premature infants using the openEHR approach. Int J of Med Inf, Volume 78, Issue 8, 2009, Pages 521-531
– Christina Pahl, et al. Role of OpenEHR as an open source solution for the regional modelling of patient data in obstetrics. J of Biomed Inf, Volume 55, 2015, Pages 174-187
10. Archetype definition
• Most archetypes were new implementations
• Some were reused from the Spanish national EHR
– Demographic archetypes, medications, problems, lab results…
• Some were reused from the openEHR CKM, and reimplemented in ISO 13606
– openEHR-EHR-OBSERVATION.apgar.v1
– openEHR-EHR-EVALUATION.health_risk.v1
11. Archetype definition
• COMPOSITION
– Pregnancy and birth report
– Newborn breastfeeding report
– Food introduction report
• SECTION
– 28 sections, mostly defined inside the Compositions
• ENTRY
– 44 archetypes
Archetypes available at: http://mm.linkehr.com/
13. Archetype definition
• Regarding terminologies, the work was limited to the harmonization of local terms and their mapping to the archetype list of terms
– E.g. type of anesthesia
• Terminology bindings were (once again) a victim of time constraints and the lack of terminology experts
14. Data collection and normalization
• Data from two hospitals were normalized and integrated
– Hospital Virgen del Castillo, Murcia
– Hospital 12 de Octubre, Madrid
• Over 270 different data items were extracted from the original databases
– Data was provided as plain XML or CSV by the informatics service of each hospital
• 7,672 XML ISO 13606 instances (one per child) were generated and stored in the repository
15. Data collection and normalization
• Data was transformed into compliant ISO 13606 XML extracts using LinkEHR Studio and XQuery
16. Data repository
• The data repository was implemented using eXist-db
– Focus on fast prototyping, not performance
• Configuration:
– One collection per composition type
– Indexes over all paths (by default in eXist-db), all archetype_id nodes, and object names
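The archetype-based queries run against the repository can be pictured with a small sketch. Here it is in Python with simplified, hypothetical XML stand-ins rather than real ISO 13606 extracts or eXist-db's XQuery interface; the element names and archetype identifiers are invented for the example.

```python
import xml.etree.ElementTree as ET

# Simplified stand-ins for ISO 13606 extracts; real instances carry the full
# reference-model structure, namespaces, and archetype node identifiers.
extracts = [
    """<extract child_id="001">
         <entry archetype_id="CEN-EN13606-ENTRY.apgar.v1">
           <element name="apgar_5min"><value>9</value></element>
         </entry>
       </extract>""",
    """<extract child_id="002">
         <entry archetype_id="CEN-EN13606-ENTRY.delivery.v1">
           <element name="delivery_mode"><value>vaginal</value></element>
         </entry>
       </extract>""",
]

def query_by_archetype(xml_docs, archetype_id):
    """Return (child_id, entry) pairs whose entries instantiate the given archetype."""
    hits = []
    for doc in xml_docs:
        root = ET.fromstring(doc)
        for entry in root.iter("entry"):
            if entry.get("archetype_id") == archetype_id:
                hits.append((root.get("child_id"), entry))
    return hits

hits = query_by_archetype(extracts, "CEN-EN13606-ENTRY.apgar.v1")
```

The key design point is that queries address data through archetype identifiers and paths, not through each hospital's original table layout, which is what makes the indicator definitions portable across sources.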
17. Data quality analysis
• High data quality
– The data correctly represent the real-world construct to which they refer
– The data are fit for their intended uses
• Poor data quality has a serious impact on the reuse of data for clinical trials, research, public health, health policy development, etc.
18. Data quality analysis
• We have developed qualize to evaluate the quality of aggregated data
• It is an online service that helps in the evaluation of biomedical data quality
– Automating as much as possible the evaluation process
– Offering a quantifiable data quality score
www.qualize.net
19. Data quality analysis
• qualize evaluates seven dimensions of data quality:
– Uniqueness. Are there replicated data?
– Completeness. Are there missing data?
– Correctness. Are there unexpectedly anomalous records?
– Consistency. Do my data comply with established rules? Formats, ranges…
– Temporal stability. Is there variability in my data over time?
– Multi-source stability. Is there variability in my data depending on their origin?
– Predictive value. Can I build decision support systems from my data?
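As a rough illustration of how the first few dimensions can be turned into quantifiable scores, here is a toy sketch over in-memory records. The field names, rules, and values are invented for the example; qualize's actual evaluation methods are more elaborate.

```python
# Toy records standing in for aggregated repository data; field names invented.
records = [
    {"id": "a1", "birth_weight_g": 3200, "apgar_5min": 9},
    {"id": "a2", "birth_weight_g": None, "apgar_5min": 8},
    {"id": "a2", "birth_weight_g": 2900, "apgar_5min": 12},  # replicated id, anomalous Apgar
]

def uniqueness(recs, key="id"):
    """Share of non-replicated identifiers."""
    ids = [r[key] for r in recs]
    return len(set(ids)) / len(ids)

def completeness(recs, fields):
    """Share of non-missing values over the expected fields."""
    filled = sum(1 for r in recs for f in fields if r.get(f) not in (None, ""))
    return filled / (len(recs) * len(fields))

def consistency(recs, rules):
    """Share of values complying with established rules (formats, ranges...)."""
    ok = sum(1 for r in recs for field, rule in rules.items() if rule(r.get(field)))
    return ok / (len(recs) * len(rules))

# One illustrative range rule: a 5-minute Apgar score must lie in 0..10.
rules = {"apgar_5min": lambda v: v is not None and 0 <= v <= 10}
```

Each function returns a score in [0, 1], which is the kind of quantifiable output the dimensions call for; the stability and predictive-value dimensions need distributional and modeling machinery beyond this sketch.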
21. Data quality analysis
• Completeness and consistency are based on archetype constraints
– We use those constraints to generate evaluation rules in Schematron
– The evaluation of those rules over existing data instances provides a quality score for each dimension
22. Data quality analysis
• Completeness: test the existence, or not, of
each attribute
– FPI: Formal Public Identifier (rule identifier)
– Each type of rule is afterwards weighed
• Eg. A void optional element is not as important for the
total completeness score as a void mandatory one
22Data reuse and quality evaluation in archetype-based environments
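A minimal sketch of this weighted completeness idea, with hand-written rules standing in for the Schematron rules generated from archetype occurrence constraints. The element names, paths, and weights are illustrative, not taken from the real archetypes.

```python
import xml.etree.ElementTree as ET

# Hand-written completeness rules standing in for generated Schematron rules.
# A mandatory element carries full weight; a void optional element costs less.
RULES = [
    ("./birth_weight", 1.0),        # mandatory in the (hypothetical) archetype
    ("./apgar_5min", 1.0),          # mandatory
    ("./head_circumference", 0.5),  # optional
]

def completeness_score(xml_text):
    """Weighted share of expected elements actually present in one instance."""
    root = ET.fromstring(xml_text)
    achieved = sum(w for path, w in RULES if root.find(path) is not None)
    return achieved / sum(w for _, w in RULES)

instance = "<newborn><birth_weight>3200</birth_weight><apgar_5min>9</apgar_5min></newborn>"
```

With the weights above, the instance is missing only the optional element, so it scores 2.0 out of a possible 2.5.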
23. Data quality analysis
• Consistency: check archetype constraints, including terminology subsets
24. Data quality analysis

DQ dimension | Hospital 12 de Octubre, Perinatal health (PRE-stnd. n=1949 / POST-stnd. n=1948) | Hospital Virgen del Castillo, Perinatal health (PRE-stnd. n=3781 / POST-stnd. n=3776) | Hospital Virgen del Castillo, Infant feeding (PRE-stnd. n=2133 / POST-stnd. n=2133)
Uniqueness (non-replicated identifiers) | 100% / 100% | 100% / 100% | 100% / 100%
Completeness (non-missing data, weighting obligatory and optional elements) | 76.71% / 8.44% | 56.60% / 18.03% | 98.73% / 29.65%
Consistency (conformance to basic schema rules) | - / 100% | - / 100% | - / 100%
Temporal stability (data concordance over time) | 2 / 1 | 3 / 3 | 1 / 1
Multi-source stability (data concordance among different sources, 1-GPD metric) | 0.08 among hospitals (perinatal health) | 0.79 among professionals (infant feeding)
Correctness (possibly anomalous records) | 3.39% / 0.62% | 1.38% / 0.45% | 0.09% / 0.19%
Predictive value (usefulness of data to predict breastfeeding at one month) | Not applicable (perinatal health) | 0.60 (infant feeding)
25. Data quality analysis
Hospital Virgen del Castillo, infant feeding dataset
Stability of 0.79 among primary care professionals (1 - Global Probabilistic Deviation metric)
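The Global Probabilistic Deviation itself is a published metric with a more involved, simplex-based construction. As a rough stand-in that conveys the same intuition, one can compare the sources' value distributions with pairwise Jensen-Shannon distances and report one minus their mean; this is explicitly a simplification, not the GPD.

```python
from collections import Counter
from itertools import combinations
from math import log2, sqrt

def distribution(values, support):
    """Empirical probability distribution of `values` over a common support."""
    counts = Counter(values)
    return [counts[v] / len(values) for v in support]

def js_distance(p, q):
    """Jensen-Shannon distance between two discrete distributions (base-2 logs, in [0, 1])."""
    m = [(a + b) / 2 for a, b in zip(p, q)]
    kl = lambda x, y: sum(a * log2(a / b) for a, b in zip(x, y) if a > 0)
    return sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

def multisource_stability(sources):
    """1 minus the mean pairwise JS distance among the sources' value distributions."""
    support = sorted({v for s in sources for v in s})
    dists = [distribution(s, support) for s in sources]
    pairs = list(combinations(dists, 2))
    return 1 - sum(js_distance(p, q) for p, q in pairs) / len(pairs)
```

Identical sources score 1 (fully stable), completely disjoint sources score 0, matching the reading of the 0.79 figure as high-but-imperfect concordance among professionals.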
26. Best practices indicators
• We defined 127 best practice indicators based on the archetypes
– Compiled from national and international recommendations: Euro-Peristat Network, WHO, UNICEF, etc.
• Grouped in seven main categories:
– Central indicators
– Maternal history
– Obstetric conditions
– Obstetric environment
– Obstetric interventions
– Baby health status
– Breastfeeding and infant-feeding
• Each best practice indicator includes:
– Rationale
– Readable definition and operational definition
– Location of variables in the archetypes
27. Best practices indicators
28. Best practices indicators
INDICATOR 6.14: % of babies with exclusive breastfeeding at the age of one month
Numerator: Number of babies with exclusive breastfeeding
Numerator filter: (Rec24h_Lactancia = 1) and (Rec24h_LecheFórmula = 2) and (Rec24h_Líquidos = 2) and (Rec24h_Sólidos = 2) and (EI_CerealesSG > 1) and (EI_CerealesCG > 1) and (EI_Fórmula > 1) and (EI_Frutas > 1) and (EI_Huevo > 1) and (EI_LecheVaca > 1) and (EI_Legumbres > 1) and (EI_Líquidos > 1) and (EI_Pescado > 1) and (EI_Pollo > 1) and (EI_Verduras > 1) and (EI_Yema > 1) and (EI_Yogur > 1) and (Tipo_Lactancia = 1)
Denominator: Total number of babies at the age of one month
Denominator filter: (DATEDIFF(m, "Fecha_Nacimiento", "Fecha_TomaDatos")) = 1
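The operational definition above reads as a simple filter over patient records. Here is a sketch, assuming records are dictionaries keyed by the variable names from the indicator and that DATEDIFF(m, …) counts calendar-month boundaries as in SQL; the record layout and the months_between helper are assumptions for the example.

```python
from datetime import date

# Food-introduction variables that must all be > 1 (food not yet introduced)
EI_FIELDS = ["EI_CerealesSG", "EI_CerealesCG", "EI_Fórmula", "EI_Frutas",
             "EI_Huevo", "EI_LecheVaca", "EI_Legumbres", "EI_Líquidos",
             "EI_Pescado", "EI_Pollo", "EI_Verduras", "EI_Yema", "EI_Yogur"]

def months_between(birth, taken):
    """Calendar-month boundaries crossed, mirroring SQL's DATEDIFF(m, ...)."""
    return (taken.year - birth.year) * 12 + (taken.month - birth.month)

def in_denominator(r):
    # Babies at the age of one month: DATEDIFF(m, Fecha_Nacimiento, Fecha_TomaDatos) = 1
    return months_between(r["Fecha_Nacimiento"], r["Fecha_TomaDatos"]) == 1

def in_numerator(r):
    # Exclusive breastfeeding: 24h recall shows breastfeeding only, no food introduced
    return (r["Rec24h_Lactancia"] == 1 and r["Rec24h_LecheFórmula"] == 2
            and r["Rec24h_Líquidos"] == 2 and r["Rec24h_Sólidos"] == 2
            and all(r[f] > 1 for f in EI_FIELDS) and r["Tipo_Lactancia"] == 1)

def indicator_6_14(records):
    """% of one-month-old babies with exclusive breastfeeding (None if no denominator)."""
    denom = [r for r in records if in_denominator(r)]
    num = [r for r in denom if in_numerator(r)]
    return 100 * len(num) / len(denom) if denom else None
```

In the pilot, the equivalent logic was expressed in XQuery over archetype paths in the repository rather than over in-memory dictionaries.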
29. Best practices indicators
• Six indicators were monitored in this pilot for both hospitals, implemented in XQuery
31. Improvement of protocols and information systems
PREVIOUS VERSION: Gestational age was recorded in two fields (weeks and days)
NEW VERSION: Gestational age is recorded automatically based on last period date, expected delivery date, and current date
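The automatic computation can be as simple as date arithmetic from the last menstrual period, with the expected delivery date given by Naegele's rule (280 days after the last period). A minimal sketch; the function names are our own, not the hospital system's.

```python
from datetime import date, timedelta

def expected_delivery(last_period):
    """Naegele's rule: estimated due date is 280 days after the last menstrual period."""
    return last_period + timedelta(days=280)

def gestational_age(last_period, on_date):
    """Gestational age on a given date as (completed weeks, remaining days)."""
    elapsed = (on_date - last_period).days
    return divmod(elapsed, 7)
```

Deriving the weeks-and-days value instead of asking clinicians to type it removes a whole class of the inconsistencies the data quality analysis had surfaced in the two manually recorded fields.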
32. Results
• Change in healthcare protocols for births
• Improvement of perinatal information systems
• Increased rates of breastfeeding and its duration
• Reduced use of antibiotics in children
• Leading pilot project towards a Spanish data quality-assured repository of maternal-child information
33. Results
• Number of records
• Technology stack
– Persistence: eXist-db, could be replaced by Marand or EHRServer
34. Lessons learned
• Dealing with the difference between health data of the mother and her children
• Limited terminology bindings
– As usual, the semantic definition of archetypes is always the first victim of time constraints
35. What’s next?
• Graph databases?
• Add subjective items
– How well do you feel with the care received? (quality of patient experience)