The document discusses the origins and development of the HL7/LOINC Document Ontology Model. It began with an analysis of over 2000 clinical document names from various healthcare organizations to identify common elements. This led to the creation of a multi-axial model for clinical document names that includes subject matter domain, role, setting, type of service, and kind of document. The model has undergone ongoing evaluation and expansion based on empirical analyses to improve coverage of document names. Future work includes further ontology evolution and refinement.
Paper presentations: UK e-science AHM meeting, 2005 (Paolo Missier)
The document describes an ontology-based approach to handling information quality in e-science. It presents an initial quality framework that captures scientists' quality requirements and allows defining domain-specific quality characteristics. It introduces a web service that annotates datasets with quality metrics based on how well their elements conform to relevant ontologies, using transcriptomics as an example domain. The approach aims to make quality definitions reusable and the computation of quality measurements over large datasets cost-effective.
SNOMED CT is a comprehensive clinical healthcare terminology that enables consistent representation of clinical data. It has the following key features:
1) It is the most comprehensive clinical terminology in the world, containing over 300,000 concepts and 1.5 million descriptions and relationships.
2) SNOMED CT concepts are organized into hierarchies and relationships that provide a structured framework for clinical meaning.
3) By implementing standardized clinical terminology and structure, SNOMED CT allows for improved data exchange and reuse, decision support, and analytics across systems and countries.
Before the enrollment of a patient in a clinical trial and beginning of any trial-related procedures, an informed consent is obtained from the potential participants. This informed consent form (ICF) provides the participants with the information related to the clinical trial. TSDP provides regulatory medical writing training on preparation of ICF.
An informed consent form is obtained before participation in clinical trials, so that potential participants understand the nature of the research, its risks and benefits, and can decide whether to take part.
The document discusses the principles of informed consent and the role of nurses in the informed consent process. It addresses why informed consent is needed, who is responsible for obtaining consent, and when it is required. It outlines the delivery methods for obtaining consent and ethical considerations. The document emphasizes that nurses have an important role in ensuring patient comprehension, addressing anxiety, identifying appropriate surrogates, and facilitating documentation. Nurses can help ensure the process complies with legal and regulatory standards to protect patient autonomy and participation in healthcare decisions.
The document discusses the concept of informed consent as it relates to nursing. It states that informed consent involves a patient's right to accept or reject treatment, and is a fundamental principle in healthcare. The role of nurses is to ensure physicians have explained treatments to patients in a way they understand, warned of risks, and documented that informed consent was obtained. It also notes special considerations for emancipated minors and those requiring a legal guardian's consent.
Informed consent is the process by which a physician shares information with a patient about their medical condition, treatment options, risks and benefits to allow the patient to make an informed decision. It involves a dialogue where the physician explains the diagnosis, treatment recommendations and alternatives while addressing the patient's questions and concerns. The goal is to provide the right amount of information tailored to the individual patient's needs and comprehension so they can collaborate as a partner in their care. However, studies show physicians often fail to fully practice informed consent due to challenges in communication, time constraints and patient factors like anxiety.
This document provides guidance and reminders for an educational session on informed consent. It instructs participants to turn off electronics and participate in a debriefing session. It outlines learning objectives around shared decision-making, the informed consent conversation, and obtaining consent consistent with standards. Key elements of the informed consent conversation are described, including setting the environment, discussing options and patient preferences, and documenting the discussion and patient decision. Potential challenges like incapacitated patients, treatment refusal, language barriers, and consent for minors are also addressed.
Clinical LOINC provides a standardized ontology for classifying clinical documents. It originated from a need for consistent naming across systems. The model uses a multi-axial approach with subject matter, role, setting, type of service, and kind of document. Evaluations showed that many local names could be mapped but also revealed a need for expansion. Ongoing work refines the ontology through empirical analysis and collaboration.
The document discusses the origins and ongoing development of a document ontology within LOINC and HL7. It describes how the Clinical Document Ontology (CDO) provides consistent semantics for clinical document names to enable interoperability. The CDO uses a multi-axial model with domains like subject matter, role, setting, type of service, and kind of document. Iterative evaluations have helped expand and refine the CDO. Future work includes further harmonization and expanding the model to new document types.
This document discusses the origins and development of the LOINC Clinical Document Ontology (CDO), which provides a standardized terminology for clinical document names. It describes how the CDO was created based on empirical analysis of over 2000 local document names. The CDO uses a multi-axial model with domains like subject matter, role, setting, type of service, and kind of document. Iterative evaluations found the expanded CDO better mapped local names than the original. Ongoing work involves adding new content and harmonizing with other clinical terminologies.
This document summarizes a presentation on the clinical document ontology (CDO) developed by LOINC. It describes the origins and development of having a standardized vocabulary for clinical document names, including empirical analysis of local document names. The presentation reviews the multi-axial model used by LOINC for document names, provides examples, and discusses ongoing evaluation and expansion efforts through collaboration. Future directions include further harmonization of CDO terms and analyzing document content.
The document provides an overview of LOINC (Logical Observation Identifiers Names and Codes) and its use for standardizing clinical panels, forms, and patient assessment instruments. It discusses LOINC's history with standardizing panels and enhancing its model for assessment instruments. Current projects involving standardizing various US government forms and health surveys in LOINC are mentioned, as well as ongoing challenges around intellectual property, modeling, and inconsistencies between source instruments and standards.
This document discusses standardizing patient assessments in LOINC. It summarizes LOINC's work enhancing its panel model to represent patient assessments, which allows representing individual assessment items, structured answer lists, and item instances within specific assessments. Challenges included variation between similar assessments, starting from paper forms rather than a uniform data model, and intellectual property issues. Ongoing work aims to standardize more assessments in LOINC to improve data sharing.
The Logical Model Designer - Binding Information Models to Terminology (Snow Owl)
This presentation demonstrates the functionality provided by the Logical Model Designer (LMD) and Snow Owl tools, which enables terminology to be bound to the Singapore Logical Information Model.
Abstract:
A critical enabler in the journey towards semantic interoperability in Singapore is the Singapore 'Logical Information Model' (LIM). The LIM is a model of the healthcare information shared within Singapore, and is defined as a set of reusable 'archetypes' for each clinical concept (e.g. Problem/Diagnosis, Pharmacy Order). These archetypes are then constrained and composed into 'templates' to support specific use cases.
The Singapore LIM harmonises the semantics of the information structures with the terminology, using multiple types of terminology bindings, including semantic, value domain and constraint bindings. Value domain bindings are defined to both national 'reference terminology' (used for querying nationally-collated data), as well as to a variety of 'interface terminologies' used within local clinical systems (required to enforce conformance-compliance rules over message specifications generated from the LIM). To support the diversity of pre-coordination captured in local interface terms, 'design patterns' are included in the LIM, based on the SNOMED CT concept model. These design patterns represent a logical model of meaning for a specific concept, and allow more than one split between the information model and the terminology model to be represented in a semantically-consistent manner.
This presentation will demonstrate the 'Logical Model Designer' (LMD) - an Eclipse-based tool that is being used to maintain Singapore's Logical Information Model. A number of features of the LMD tooling will be demonstrated, with a specific focus on how the information structure is bound to the terminology via an interface to the Snow Owl platform. Value Domains are defined as reference sets within Snow Owl and then linked to the information structures defined in the LMD.
Please see our website http://b2i.sg for further information.
Effective Classification of Clinical Reports: Natural Language Processing-Bas... (Efsun Kayi)
With the recent emphasis on the use of electronic health records (EHRs), the importance of leveraging the large amounts of electronic clinical data has become clearer. Efficient and effective use of this information could supplement or even replace manual chart review as a means of studying and improving the quality and safety of healthcare delivery. However, some of these clinical data are in the form of free text and require preprocessing before use in automated systems.
There are many challenges in developing such automated decision support systems. Clinical reports include medical terms that are not commonly used in everyday language and that appear in various forms. Those terms have to be coded into the standard forms defined in medical dictionaries for consistency. Furthermore, coding by itself may not be sufficient for correctly identifying clinical conditions. Reported conditions must be analyzed with their surrounding contexts to validate their temporal, certainty, and negation status. Biomedical Natural Language Processing (NLP) tools map medical terms to standard dictionaries and analyze their contexts; however, their output cannot be directly used in subsequent automated processes.
Accordingly, in this research, we first investigate the best ways to extract features from the NLP output that can be used for automatic classification. While results show that classification performance is significantly improved by using the NLP features over using the raw text, this NLP-based classification is computationally expensive and requires a significant number of manual steps before the system can be used in different clinical areas. As an alternative, we developed a framework for a topic modeling-based classification system. Topic modeling provides interpretable themes (topic distributions) in reports, a representation that is more compact than a bag-of-words representation and can be processed faster than raw text in subsequent automated processes. Our topic-based classifier system is shown to be competitive with existing text classification techniques and provides a more efficient and interpretable representation. A common free-text data source is radiology reports, typically dictated by radiologists. We therefore analyzed the performance of our system using computed tomography (CT) imaging reports.
Linkages to EHRs and Related Standards. What can we learn from the Parallel U... (Koray Atalag)
This is the presentation I used during the CellML workshop in Waiheke Island, Auckland, New Zealand on 13 April 2015. The aim was to introduce information modelling methods and tools for the purpose of inspiring computational modelling work in the area of semantics and interoperability.
Ontologies for Clinical Research - Assessment and Development (Wolfgang Kuchinke)
Wolfgang Kuchinke presented on ontologies for clinical research. He discussed what ontologies are and their main components. Their purpose is to limit complexity and organize data into information and knowledge. Kuchinke described several existing ontologies for clinical research including the Clinical Trial Ontology, Ontology of Clinical Research, Ontology for Biomedical Investigations, and Cochrane PICO Ontology. He noted the need for an ontology to integrate clinical trial data and discussed possible ways to build a new joint ontology combining aspects of existing ones like PICO, OCRe, and OBI to better enable data reuse and sharing in clinical research.
How to Apply NLP to Analyze Clinical Trials (David Talby)
How to apply natural language processing techniques, including multi-modal models and zero-shot learning, to accurately extract information from raw clinical trial documents.
A Semantic Web based Framework for Linking Healthcare Information with Comput... (Koray Atalag)
Presented at Health Informatics New Zealand (HINZ 2017) Conference, 1-3 Nov 2017, Rotorua, New Zealand. Authorship: Koray Atalag, Reza Kalbasi, David Nickerson
The University of Auckland
11 - qualitative research data analysis (Dr. Abdullah Al-Beraidi - Dr. Ibrah... (Rasha)
The document describes a dataset collected from students in a writing class. As an assignment, students were asked to describe in detail how they write, without consulting others. This generated a set of 10 individual narratives for analysis. Permission was obtained to use the anonymized data for teaching purposes. The total qualitative data available comprises 10 files, each containing a short student-authored narrative on the writing process.
Automatic Rubric-Based Content Grading For Clinical Notes (Emma Burke)
This document describes research on developing systems for automatically grading clinical notes based on predefined rubrics. The researchers created a corpus of clinical notes and corresponding rubric-based grades from medical scribe trainees. They developed baseline grading systems using simple feature-based and neural network models. Their baseline systems showed promising results, with content point accuracy and kappa values of 0.86 and 0.71 on test data. The researchers conducted experiments varying training data size and rubric tagsets to provide insight into expected system performance.
This document presents a preliminary ontology to identify factors that influence software maintenance. The purpose of the ontology is to provide context for empirical studies of maintenance and help understand contradictory results. The ontology identifies key factors such as the maintained product, maintenance activities, maintenance organization and process, and maintenance engineers. It presents these factors using a UML model and discusses how variations in the factors could impact empirical studies of maintenance productivity, quality or efficiency. Two common maintenance scenarios are also described to demonstrate how the ontology can be used to characterize differences between scenarios.
Text Mining for Biocuration of Bacterial Infectious Diseases (Dan Sullivan, Ph.D.)
Specialty gene sets, such as virulence factors and antibiotic resistance genes, are of particular interest to infectious disease researchers. Much of the information about specialty genes' function is described in the literature but unavailable as structured data in bioinformatics databases. The steadily increasing volume of literature makes it difficult to manually find relevant papers and extract assertion sentences about specialty genes. This presentation describes efforts to build an automatic classifier for such sentences. Experiments were conducted to assess the impact of the imbalance of positive and negative examples in source documents on classification; develop a support vector machine (SVM) classifier using term frequency-inverse document frequency (TF-IDF) representation of text; and assess the marginal benefit of additional training examples on the quality of the classifier. Analysis of learning curves indicates that additional training examples will not likely improve the quality of the classifier. We discuss options for other text representation schemes to investigate in order to improve the quality of the classifier as measured by F-score.
Curation-Friendly Tools for the Scientific Researcher (bwestra)
Presentation for Online Northwest Conference, in Corvallis Oregon, February 10, 2012.
Highlights electronic lab notebooks (ELN) and OMERO (Open Microscopy Environment) as two tools that enable researchers to better manage their research data.
Capturing and Analyzing Publication, Citation and Usage Data for Contextual C... (NASIG)
Libraries have long sought to demonstrate the value of their collections through a variety of usage statistics. Traditionally, a strong emphasis is placed on high usage statistics when evaluating journals in collection development discussions. However, as budget pressures persist, administrators are increasingly concerned with looking beyond traditional usage metrics to determine the real impact of library services and collections. By examining journal usage in the context of scholarly communication, we hope to gain a more holistic understanding of the use and impact of our library’s resources. In this session, we begin by outlining our methodology for gathering comprehensive publication and citation data for authors affiliated with Northwestern University’s Feinberg School of Medicine, utilizing Web of Science as our primary data source and leveraging a custom Python script to manage the data. Using this data we discuss various potential metrics that could be employed to measure and evaluate journals in institutional and field-specific contexts, including but not limited to: number of publications and references per journal, co-citation networks, percentage of references per journal, and increases or decreases of references over time per title. We then consider the development of normalized benchmarks and criteria for creating field-specific core journal lists. We also discuss a process for establishing usage thresholds to evaluate existing journal subscriptions and to highlight potential gaps in the collection. Finally, we apply and compare these metrics to traditional collection development tools like COUNTER usage reports, cost-per-use analysis, Inter-Library Loan statistics and turnaway reports, to determine what correlations or discrepancies might exist. We finish by highlighting some use-cases which demonstrate the value of considering publication and citation metrics, and provide suggestions for incorporating these metrics into library collection development practices.
Speakers: Joelen Pastva and Jonathan Shank, Northwestern University
Project GitHub page: https://goo.gl/2C2Pcy
Standardization of the HIPC Data Templates: The Story So Far (Ahmad C. Bukhari)
This document discusses efforts to standardize data templates for the Human Immunology Project Consortium (HIPC) to make HIPC data more findable, accessible, interoperable, and reusable (FAIR). Currently, HIPC data is inconsistently formatted and named. The authors propose mapping HIPC data submission templates to ontologies to semantically normalize the data and facilitate data integration and querying. They have mapped template elements like assay types and value sets to ontologies like the Ontology for Biomedical Investigations and are working with ontology groups to refine and improve the mappings. The goal is to incorporate this ontology-linked metadata approach into the CEDAR and ImmPort databases.
This document discusses efforts to standardize data templates for the Human Immunology Project Consortium (HIPC) to make HIPC data more findable, accessible, interoperable, and reusable (FAIR). Currently, HIPC data is inconsistently formatted and named. The authors propose mapping HIPC data submission templates to ontological terms to semantically normalize the data and facilitate data integration and querying. They have mapped template elements like assay types and value sets to domain ontologies and suggest changes to CEDAR and ImmPort to incorporate this ontology-linked metadata approach.
Presentation by Daniel J. Vreeman, PT, DPT, MSc for the AMIA KRS Working Group. Title: LOINC - An Introduction to the Universal Catalog of Laboratory and Clinical Observations.
2012 02 16 - Clinical LOINC Tutorial - Collections - Panels Forms and Assessm... (dvreeman)
This document summarizes a presentation on using LOINC (Logical Observation Identifiers Names and Codes) to standardize patient assessments. It discusses how LOINC provides a uniform model for representing standardized questions, answers, and panels/forms. The presentation covers the iterative development of LOINC's assessment model over 10 years, current assessment content in LOINC, and lessons learned regarding variation, data modeling, and intellectual property issues.
2012 02 10 - Vreeman - Possibilities and Implications of ICF-powered Health I... (dvreeman)
The document discusses the possibilities and implications of using the International Classification of Functioning (ICF) to power health information technology. It describes how incorporating standardized vocabularies like ICF and LOINC into electronic health records could allow for data reuse across settings, clinical decision support, and a more seamless exchange of health information. This would help realize the vision of a healthcare system with coordinated, consumer-centered care enabled by digital tools.
2012 02 11 EHRs - healthcare system chicken soup or rotten egg (dvreeman)
This document summarizes a presentation on electronic health records (EHRs) given to the CSM 2012 HPA Tech SIG. The presentation covered why EHRs are important, how to select an EHR system, considerations for implementation, and a case study. The presentation discussed how EHRs can help accelerate a vision of coordinated, consumer-centered care by enabling data reuse, clinical decision support, and interoperability between systems through standards. Barriers to EHR adoption include workflow changes and training needs, while success factors include staff participation and data standardization.
2012 02 11 - Informatics Competencies in PT Education (dvreeman)
The document proposes a framework for informatics competencies in physical therapy education. It discusses how informatics is addressed in core PT education documents and competencies established in other healthcare professions. The framework proposes competencies in 6 roles: lifelong learner, clinical reasoning, evidence-based practice, electronic health record literacy, advancing the science of PT, and accountability, communication, and education. It emphasizes viewing informatics as a longitudinal theme across the curriculum.
This document provides an overview of LOINC (Logical Observation Identifiers Names and Codes) presented by Daniel Vreeman. LOINC is a universal standard for identifying health measurements and observations that allows for data exchange between systems. It has over 60,000 codes covering laboratory and clinical observations. The LOINC community is open-source and has over 14,000 members from 145 countries contributing to its ongoing development and adoption worldwide.
2011 11 16 - Vreeman - Corralling Creativity with Standards (dvreeman)
The document summarizes Daniel J. Vreeman's presentation at the 9th Forum on Laboratory Informatics on challenges and successes with community-wide laboratory data exchange using standards. The presentation discusses Indiana Network for Patient Care, which connects over 200 healthcare organizations using Logical Observation Identifiers Names and Codes (LOINC) as the standard terminology to facilitate data sharing and interoperability. It highlights successes in public health surveillance and clinical research enabled by the network, and lessons learned around prioritizing trust and iterating systems based on user needs.
2011 09 10 - Maybe is Not a Wary Word - Vreeman - Exploring Future of PT (dvreeman)
1. The document discusses the future of physical therapy and the need for physical therapists to be part of collaborative, multidisciplinary healthcare teams with the patient as the focus.
2. It advocates for adopting interoperable electronic health records but acknowledges the complexity, and suggests physical therapy education programs incorporate informatics training.
3. The document envisions a future with complete, longitudinal patient information that follows the consumer across settings to facilitate coordinated, value-based care guided by consumer-centered information tools.
The document discusses a presentation on LOINC (Logical Observation Identifiers Names and Codes) given at the 2011 Public Health Informatics conference in Atlanta, GA. The presentation provides an introduction to LOINC and covers topics such as the origins of LOINC, common elements in LOINC terms, LOINC collections like forms and surveys, and domain-specific approaches to mapping standards and terminologies in areas like microbiology. It also discusses LOINC tools and resources for mapping terms and codes.
2011 08 15 - Clinical LOINC Tutorial - Collections - Panels Forms and Assessm... (dvreeman)
This document summarizes a presentation on using LOINC (Logical Observation Identifiers Names and Codes) to standardize clinical assessments and patient-reported outcomes. It describes how LOINC provides a model for organizing assessments into hierarchical panels and items with specific attributes. A growing number of standardized assessments are available in LOINC, including government forms, clinical screening tools, and patient-reported outcomes. Lessons learned include the need to minimize variation between similar assessments and start from a uniform data model to avoid discrepancies. IP issues also present challenges for widespread adoption.
The document provides an introduction and overview of LOINC (Logical Observation Identifiers Names and Codes), a universal standard for identifying health measurements, observations, and documents. LOINC codes are organized using a six-axis model and include over 55,000 codes for laboratory tests, clinical observations, surveys, and claims attachments. The document outlines the history, development, and governance of LOINC, as well as examples of how LOINC codes are structured and used in clinical documents and messages.
The document provides an overview of the Regenstrief LOINC Mapping Assistant (RELMA) tool. It discusses RELMA's features for installing the tool, setting preferences, loading local observation files, searching for and mapping local terms to LOINC codes, and proposing new LOINC terms. The goal is to help laboratories map their local test names and codes to standardized LOINC codes to improve data interoperability, comparability and quality.
This document provides an introduction and overview of LOINC (Logical Observation Identifiers Names and Codes). It discusses the origins of LOINC as a universal code system to facilitate exchange of clinical observation data. It describes how LOINC provides codes for questions, while other vocabularies provide codes for answers. The document outlines the growth of LOINC over time, its adoption internationally and in the US, and new areas of content modeling like standardized assessments. It emphasizes that LOINC development is an open, collaborative community effort to standardize clinical observations and questions.
2011 05 26 - Lab LOINC Tutorial - Chicago - Handout version - full (dvreeman)
The document provides information about an upcoming Laboratory LOINC Workshop in Chicago, Illinois. It includes an agenda for the workshop covering topics such as the origins of LOINC, using the RELMA mapping tool, searching and mapping local terms to LOINC codes, and hands-on practice mapping terms. The workshop will be led by Daniel Vreeman from Indiana University and Clem McDonald from the National Library of Medicine.
The document provides an introduction and overview of LOINC (Logical Observation Identifiers Names and Codes), including:
- LOINC codes clinical observations and laboratory tests using a six-axis model for consistent naming.
- It has over 55,000 codes covering laboratory tests, clinical observations, surveys, and claims attachments.
- LOINC is maintained by committees and aims to standardize coding of clinical data to facilitate exchange between systems.
This document provides an overview of LOINC codes for diagnostic imaging studies. It discusses the different classes and components of LOINC codes for imaging, including examples for radiology terms, orderable vs observation codes, views and positions, limited vs complete exams, guidance procedures, laterality, and modality subparts. It notes some challenges in coding imaging exams and areas where additional terms need development, such as for PET, interventional radiology, and combination modalities.
This document discusses LOINC's model for standardizing patient assessments and forms. It provides an overview of LOINC's current efforts to represent common health assessments, including various government forms and clinical screening tools. The presentation notes that while these assessments address similar concepts, there is significant variation in how the items are structured between different forms. It recommends starting with LOINC's standardized data model to help address inconsistencies and avoid unnecessary variation. Lessons learned include the high costs of losing comparability and that intellectual property issues pose large challenges for standardization.
The LOINC name does not include the instrument used in testing, specific details about the specimen, priority (e.g. STAT), where testing was done, who did the test, test interpretation, or anything else that is not an intrinsic part of the name of the result.
This document summarizes a presentation on the Logical Observation Identifiers Names and Codes (LOINC) database. It discusses the origins and purpose of LOINC as a universal standard for clinical observations. It also provides details on the growth of LOINC over time, its international adoption and translations into multiple languages. Large health organizations in the US and abroad have implemented LOINC to facilitate interoperability and data exchange.
4. Rationale
• As with other domains, local systems have idiosyncratic names for clinical documents
• Need a common, controlled vocabulary
• Needed for HL7 CDA standard
– HL7 v2 too
• Timeline
– 06/2000 Document Ontology Task Force
– 09/2003 First axis values and LOINC codes
– ~2005 Expanded SMD domain
– 10/2007 Revised axis value approval
5. Document Type Codes
• Created to provide consistent semantics for names of documents exchanged between independent systems
• Supported Uses
– Retrieval
– Organization
– Preparation of templates
– Display
Frazier P, Rossi-Mori A, Dolin RH, Alschuler L, Huff SM. The creation of an ontology of clinical document names. Stud Health Technol Inform. 2001;84(Pt 1):94-8.
6. What is a Document?
• Document = Collection of information
– Sentences, sections
– Distinguished from “panels”, which have enumerated discrete contents of result elements
• Formal Document Ontology model/rules apply to “clinical notes”
– Clinical document (per HL7 CDA), produced by clinicians spontaneously or in response to a request for consultation
– Does not apply to “reports”, produced in response to an order for a procedure
7. Approach
• Started with empiric analysis of over 2000 document names
– Mayo, 3M/Intermountain, VA in SLC, VA in Nashville
• Find the level of granularity that best meets the exchange use case
8. Finding the Commonality
• Ultra-specific local names:
– Dr. Evil’s Friday Afternoon Pain Clinic Note
• Generalizable elements:
– Outpatient Pain Clinic Note
• Local codes may still be needed
– Can send both local and universal codes in HL7
– Mapping to the universal code enables interoperability and aggregation
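The dual-coding point above can be sketched concretely. The short Python example below (not part of the original slides) builds an HL7 v2-style coded field that carries a universal LOINC code plus the local code as the alternate triplet; the helper function, the placeholder code XXXXX-X, and the local code are invented for illustration.

```python
# Minimal sketch: sending both a local code and a universal (LOINC) code together.
# The helper, the placeholder LOINC code "XXXXX-X", and the local code are illustrative only.

def coded_field(code, text, system, alt_code=None, alt_text="", alt_system=""):
    """Assemble an HL7 v2-style coded field: primary triplet plus optional alternate triplet."""
    parts = [code, text, system]
    if alt_code:
        parts += [alt_code, alt_text, alt_system]
    return "^".join(parts)

field = coded_field(
    "XXXXX-X", "Outpatient Pain Clinic Note", "LN",            # universal triplet (placeholder LOINC code)
    alt_code="PAIN-FRI-NOTE",                                   # idiosyncratic local code
    alt_text="Dr. Evil's Friday Afternoon Pain Clinic Note",    # local display name
    alt_system="L",                                             # local coding system
)
print(field)
# XXXXX-X^Outpatient Pain Clinic Note^LN^PAIN-FRI-NOTE^Dr. Evil's Friday Afternoon Pain Clinic Note^L
```

Carrying the local code in the alternate triplet keeps the local detail available while letting receivers that only understand the universal code still interpret the field.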
10. Names Based on Document Content
• Names based on the expected information content
• NOT based on the document format
– Text, scanned images, structured entry form, XML, etc. would all have the same LOINC code if the information content was the same
Assume that these other important attributes would be sent in different fields of the message
11. What’s NOT in a Document Name
• Specific author
• Specific location of service or dictation
• Date of service
• Status (e.g. signed, unsigned)
• Security/privacy flags (protected)
• Updates/amendments to a document
Assume that these other important attributes would be sent in different fields of the message
12. Model of Document Names
• Subject Matter Domain
– E.g. Cardiology, Pediatric Cardiology, Physical Therapy
• Role
– Author training/professional classification (not at the subspecialty level)
– E.g. Physician, Nursing, Case Manager, Therapist, Patient
• Setting
– Modest extension of CMS’s definition (not equivalent to location)
– E.g. Inpatient Hospital, Outpatient, Emergency Department
• Type of Service
– Service or activity provided to/for the patient (or other subject)
– E.g. Consultation, History and Physical, Discharge Summary
• Kind of Document
– General structure of the document
– E.g. Note, Letter, Consent
13. Rules for Constructing Names
• LOINC has enumerated value lists for axes
– Published in Users Guide
– Development edition at loinc.org
• Names need a Kind of Document value and at least one of the other four axes
Mapping onto the six parts of a LOINC name:
  Component: <Type of Service> <Kind of Document>
  Property:  Find
  Time:      Pt
  System:    <Setting>
  Scale:     Doc
  Method:    <SMD>.<Role>
• Combinations from within an axis
– Allowed where they make sense (SMD, Service)
– Represented with a plus (+)
• LOINC Class = DOC.CLINRPT
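To make these construction rules concrete, here is a rough Python sketch (not from the slides). The tiny axis value sets are illustrative stand-ins for the enumerated lists published in the LOINC Users' Guide; the logic simply restates the rules above: a Kind of Document value is required, at least one other axis must be present, and '+' combinations are only allowed on axes where they make sense.

```python
# Sketch of the naming rules above. The axis value sets below are tiny illustrative
# samples, NOT the published LOINC value lists.

AXES = {
    "SMD": {"Cardiology", "Pediatric cardiology", "Physical therapy"},
    "Role": {"Physician", "Nursing", "Case manager", "Patient"},
    "Setting": {"Inpatient Hospital", "Outpatient", "Emergency department"},
    "Type of Service": {"Consultation", "History and physical", "Discharge summary"},
    "Kind of Document": {"Note", "Letter", "Consent"},
}
COMBINABLE = {"SMD", "Type of Service"}  # '+' combinations allowed where they make sense

def valid_document_name(parts):
    """parts maps an axis name to its value; combined values use '+' (e.g. 'A+B')."""
    if "Kind of Document" not in parts:
        return False                       # a Kind of Document value is always required
    if not any(a in parts for a in AXES if a != "Kind of Document"):
        return False                       # ...plus at least one of the other four axes
    for axis, value in parts.items():
        values = value.split("+")
        if len(values) > 1 and axis not in COMBINABLE:
            return False                   # combinations only within SMD or Type of Service
        if not all(v in AXES.get(axis, set()) for v in values):
            return False                   # every value must come from the axis's enumerated list
    return True

print(valid_document_name({"Kind of Document": "Note", "Type of Service": "Consultation"}))  # True
print(valid_document_name({"Type of Service": "Consultation"}))  # False: missing Kind of Document
```

A real implementation would of course draw the value lists from the published LOINC release rather than hard-coding them.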
14. Example LOINC Codes
Component | Property | Time | System | Scale | Method
Group counseling note | Find | Pt | Inpatient Hospital | Doc | {Provider}
Evaluation and management note | Find | Pt | Outpatient | Doc | {Provider}
Evaluation and management note | Find | Pt | {Setting} | Doc | {Provider}
History and physical note | Find | Pt | {Setting} | Doc | {Provider}
Initial evaluation note | Find | Pt | {Setting} | Doc | Physician
Subsequent evaluation note | Find | Pt | {Setting} | Doc | Nurse Practitioner
{curly braces} notation: send that content as a separate item in the message (field or segment)
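As a reading aid for the {curly braces} convention in the table above, the sketch below (field names and the code value are hypothetical, not drawn from any real message specification) separates what is pre-coordinated in the LOINC code from what travels as separate message content:

```python
# {curly braces} axes are NOT pre-coordinated into the LOINC name; the sender
# supplies them elsewhere in the message. Field names and the code are hypothetical.

document_entry = {
    "document_type_code": "XXXXX-X",  # placeholder for an "Evaluation and management note" term with {Setting} {Provider}
    # Curly-brace axes, carried as separate items (field or segment):
    "setting": "Outpatient",          # fills in {Setting}
    "author_role": "Physician",       # fills in {Provider}
}
```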
15. Hierarchy in LOINC
• Constructed a first-pass Component hierarchy based on the Type of Service axis
– Ignored Kind of Document
• Multi-axial hierarchy is generated based on the component hierarchy
– (available as separate download)
• Could imagine construction of other hierarchies, like context-specific use cases
18. Ontology Evolution and Refinement
• Ongoing evaluation and evolution
• Exceptional contributions from Columbia University and the VA
• In particular, expanded original SMD value list with ABMS specialty names and iterative discussion
Shapiro JS, Bakken S, Hyun S, Melton GB, Schlegel C, Johnson SB. The creation of an ontology of clinical document names. Document ontology: supporting narrative documents in electronic health records. AMIA Annu Symp Proc. 2005:684-8.
19. Iterative Evaluation Case Study: NYPH-CUMC
 | SMD | Role | Setting | Type of Service | Kind of Document | Overall | Distinct
Original CDO | 26.7% | 99.9% | 99.9% | 43.5% | 100% | 23.4% | 7.9% (n=894)
Expanded CDO | 98.6% | 100% | 100% | 99.9% | 99.9% | 98.5% | 39.1% (n=935)
• Hyun S, Shapiro JS, Melton G, Schlegel C, Stetson PD, Johnson SB, Bakken S. Iterative evaluation of the Health Level 7--Logical Observation Identifiers Names and Codes Clinical Document Ontology for representing clinical document names: a case report. J Am Med Inform Assoc. 2009 May-Jun;16(3):395-9.
• Summary
• More documents could be fully specified with the expanded CDO
• Many documents map to one LOINC code
– factor of local names and suitable LOINC values
• Inter-rater reliability was very good
20. Nursing
 | SMD | Role | Setting | Type of Service | Kind of Document | Overall | Distinct
SMD-enhanced CDO (2005) (n=94) | 74% | 100% | 100% | 100% | 100% | 74.5% | 33%
Hyun S, Shapiro JS, Melton G, Schlegel C, Stetson PD, Johnson SB, Bakken S. Iterative evaluation of the Health Level 7--Logical Observation Identifiers Names and Codes Clinical Document Ontology for representing clinical document names: a case report. J Am Med Inform Assoc. 2009 May-Jun;16(3):395-9.
• In a separate analysis, 38% of the section headings (n=308) from nursing documents could be mapped to existing LOINC codes
• Hyun S, Bakken S. Toward the creation of an ontology for nursing document sections: mapping section names to the LOINC semantic model. AMIA Annu Symp Proc. 2006:364-8.
21. German University Hospital
• Used LOINC v2.24 (original DocOnt terms)
• Of the 86 document types for 1.2 million documents:
– 44% mapped to LOINC
– 44% had an available mapping deemed not specific enough
– 12% had no LOINC match
• A LOINC code existed for 93.1% of documents in their set (by volume)
Dugas M, Thun S, Frankewitsch T, Heitmann KU. LOINC codes for hospital information systems documents: a case study. J Am Med Inform Assoc. 2009 May-Jun;16(3):400-3.
23. Ongoing Development
• A work-in-progress
• LOINC Users' Guide is the definitive source for current policy
– Always available at http://loinc.org
• Collaboration/discussion
– Clinical LOINC meetings, HL7 SDTC
– LOINC website
– LOINC Users' Forum: http://forum.loinc.org
26. Future Directions
• Continued harmonization of v1 and v2 axis values
• Axis definitions
• Extension/refinement to other Kind of Documents
• Empiric analysis of document contents