This document discusses using ontologies to simplify semantic solutions for biomedical applications. It provides examples of how ontologies can be used to integrate medical expertise and knowledge from different sources. It also describes challenges in representing biomedical information with ontologies and introduces MedMaP, a medical management portal that aims to simplify access to ontology-based reasoning and analytics using graphical visualizations and self-service tools. MedMaP allows users to customize their experience and gain insights from subject matter experts.
Computer Aided Diagnosis in Pathology: Pros & Cons by Dr. Liron Pantanowitz (Cirdan)
This presentation looks at the benefits and problems related to computer aided diagnosis in pathology. It was delivered by Dr. Liron Pantanowitz, University of Pittsburgh, USA at the Pathology Horizons conference in Cairns, Australia.
Pathology Horizons is an annual CPD conference organised by Cirdan on the future of pathology. More information on Pathology Horizons can be accessed at www.pathologyhorizons.com
Pathology informatics has evolved from early pioneers applying data analytics and computers to medicine. It involves applying information science principles to pathology practice and laboratory data. At UCDHS, pathology informatics manages laboratory information systems, implements digital pathology, performs data mining and analytics, and oversees clinical registries. Future trends include personalized medicine using "big data", wearable devices, learning healthcare systems, and tethered meta-registries that integrate multiple data sources to improve quality and lower costs.
Mining Health Examination Records: A Graph-Based Approach (ijtsrd)
This document presents a graph-based approach for mining health examination records to predict future health risks. It proposes a semi-supervised heterogeneous graph (SHG-Health) algorithm to handle classification with large amounts of unlabeled data. The SHG-Health algorithm constructs a graph (HeteroHER) from health examination records, where different item types are modeled as different node types. It then applies semi-supervised learning to classify nodes and predict risks. The authors evaluate the approach on real and synthetic health examination datasets, showing it can effectively predict risks from live data streams and handle heterogeneous and unlabeled data.
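The SHG-Health algorithm itself is not reproduced in this summary; the sketch below only illustrates the underlying idea it builds on (semi-supervised label propagation over a graph, where a few labeled nodes seed class scores that spread to unlabeled neighbors), using a toy adjacency matrix and hypothetical labels rather than the HeteroHER construction or any real health-examination data.

```python
import numpy as np

def label_propagation(adj, labels, alpha=0.8, iters=50):
    """Propagate class labels over a graph: labeled nodes seed the
    process, unlabeled nodes (label -1) receive scores from neighbors."""
    n = adj.shape[0]
    classes = sorted({l for l in labels if l >= 0})
    # One-hot seed matrix Y: rows = nodes, columns = classes.
    Y = np.zeros((n, len(classes)))
    for i, l in enumerate(labels):
        if l >= 0:
            Y[i, classes.index(l)] = 1.0
    # Symmetrically normalized adjacency S = D^-1/2 A D^-1/2.
    d = adj.sum(axis=1)
    d[d == 0] = 1.0
    S = adj / np.sqrt(np.outer(d, d))
    # Iterate F = alpha * S F + (1 - alpha) * Y until (near) convergence.
    F = Y.copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F.argmax(axis=1)

# Toy graph: two connected clusters bridged by one edge; only one node
# per cluster is labeled (illustrative, not the paper's dataset).
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)
labels = [0, -1, -1, -1, -1, 1]  # -1 = unlabeled
print(label_propagation(A, labels))
```

The paper's contribution lies in how the heterogeneous node types and large unlabeled portion are handled; this sketch shows only the generic propagation step.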
Medical Informatics: Computational Analytics in Healthcare (NUS-ISS)
Presented by Dr Liu Nan, Senior Research Scientist and Principal Investigator, Singapore General Hospital at ISS Seminar: How Analytics is Transforming Healthcare on 31 Oct 2014.
Using real-world evidence to investigate clinical research questions (Karin Verspoor)
Adoption of electronic health records to document extensive clinical information brings with it the opportunity to utilise that information to support clinical research, and ultimately to support clinical decision making. In this talk, I discuss both these opportunities and the challenges that we face when working with real-world clinical data, and introduce some of the strategies that we are adopting to make this data more usable, and to extract more value from it. I specifically discuss the use of natural language processing to transform clinical documentation into structured data for this purpose.
IJERA (International Journal of Engineering Research and Applications) is an international, online, ... peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
Electronic health records and machine learning (Eman Abdelrazik)
Electronic health records and machine learning can be used together to generate real-world evidence. Real-world data is collected from electronic health records in real clinical settings and can provide insights into a treatment's effectiveness and safety outside of clinical trials. Machine learning models can analyze structured and unstructured data in electronic health records to identify patterns and make predictions. This can help with tasks like medical diagnosis, which is challenging due to variations between individuals and potential for misdiagnosis. However, developing accurate machine learning models requires addressing issues like selecting representative training data and setting performance standards.
International Journal of Engineering Research and Applications (IJERA) is an open-access, online, peer-reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nanotechnology & Science, Power Electronics, Electronics & Communication Engineering, Computational Mathematics, Image Processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design, etc.
This document discusses the journey of a SAS programmer becoming a clinical SAS programmer. It describes some key differences between being a pure SAS programmer versus a clinical programmer. A clinical programmer must understand objectives of clinical trials, trial phases, and cross-functional roles of sponsors, investigators, statisticians, and data managers. The document outlines the main objectives and characteristics of each phase of clinical trials from Phase I to Phase IV. It emphasizes that clinical programming requires understanding clinical concepts in addition to programming skills.
The document discusses Peter Embi's approach to presenting on clinical and translational research informatics literature from the past year. It provides an overview of Embi's search strategy and categorization of papers, which involved searching literature databases and recommendations from colleagues. The presentation will focus on summarizing representative papers within categories like data sharing/reuse, methods and systems, recruitment and eligibility, and trends in clinical research informatics.
Clinical Research Informatics (CRI) Year-in-Review 2014 (Peter Embi)
Peter Embi's review of notable publications and events in the field of Clinical Research Informatics (CRI) that took place in 2013+. This was presented as the closing keynote presentation of the 2014 AMIA CRI Summit in San Francisco, CA on April 11, 2014.
Disrupting the Oncology Care Continuum through AI and Advanced Analytics (Michael Peters)
Having Presented at #SROA18 on the need to move from basic Data and Reporting to Advanced Analytics and Artificial Intelligence, I thought I would share my deck for all.
This document summarizes a presentation given by Peter Embi on clinical and translational research and informatics literature from 2012-2013. It begins with Embi's background and approach to identifying relevant papers. It then describes the topics covered in the presentation, which are grouped into categories like clinical data reuse, data management/discovery, researcher support/resources, and recruitment. For each category, 1-2 key papers are summarized in 1-3 sentences. The summaries highlight the papers' goals, methods, and conclusions. The document concludes by mentioning other notable papers and events from the past year.
This paper describes the methods of the Treatment In Morning versus Evening (TIME) study, a large prospective randomized open-label blinded endpoint study comparing morning versus evening dosing of antihypertensive medications. The TIME study recruits participants through advertising, primary and secondary care, and patient databases in the UK. Participants self-enroll and consent on a secure website, and are randomized to morning or evening dosing. Follow-ups are conducted by automated email at 1 month and every 3 months thereafter. The study uses a prospective randomized open-label blinded endpoint design to establish if evening dosing is more cardioprotective than morning dosing.
Healthcare analytics has the potential to reduce costs of treatment, predict outbreaks of epidemics, avoid preventable diseases, and improve quality of life. It can improve processes, enhance patient care, and save lives by using analytics to better predict patient needs and staff accordingly. Electronic health records store a patient's comprehensive medical history digitally, allowing doctors to track changes over time with no risk of lost data or duplication. Analyzing demographic health data allows for strategic planning to identify factors that discourage treatment uptake. Analytics also helps prevent security threats, fraud, and inaccurate insurance claims while streamlining the claims process. The patient experience, overall population health, and operational costs can all be improved through healthcare analytics.
One aspect of personalized medicine is certain: it is complicated. If you happen to have a highly scientific background, you may actually be able to define the term. However, if you polled five people very familiar with personalized medicine, you should expect to hear five different definitions. ISR wanted to understand where oncologists stand on the topic of personalized medicine. We interviewed 101 US-based, board-certified oncologists to gather their views on how familiar they are with personalized medicine, how they are treating patients, what tests are being used and which will be used more, and how their patient treatment regimens could evolve in the future.
The document summarizes a study that evaluated the acceptability of a personally controlled health record (PCHR) system called Indivo in a community-based setting. Over 300 participants were involved in formative research activities to understand awareness, beliefs and reactions. The study found moderate awareness of privacy issues and high support for patient autonomy. Results informed guidelines on design improvements, literacy tools, and safety protocols for PCHR systems. Limitations included a lack of detail on methodology and sample selection.
Adaptive Learning Expert System for Diagnosis and Management of Viral Hepatitis (gerogepatton)
Viral hepatitis is one of the most commonly encountered health problems worldwide, alongside other easily transmitted diseases such as tuberculosis, HIV, and malaria. Among the hepatitis viruses, the greatest number of deaths results from chronic hepatitis C or chronic hepatitis B infection. To develop this system, knowledge was acquired through both structured and semi-structured interviews with internists at St. Paul Hospital. Once acquired, the knowledge was modeled and represented using rule-based reasoning techniques. Both forward and backward chaining are used to infer the rules and provide appropriate advice in the developed expert system. The SWI-Prolog editor was also used to develop the prototype expert system. The proposed system can adapt to dynamic knowledge by generalizing rules and discovering new rules, learning newly arrived knowledge from domain experts without any help from a knowledge engineer.
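The authors built their system in SWI-Prolog; as a rough illustration of the forward-chaining half of such an inference strategy, here is a minimal Python sketch. The rules and fact names below are invented for the example and are not clinical guidance or taken from the paper.

```python
# Minimal forward-chaining sketch (illustrative, non-clinical rules).
# Each rule is (set_of_premises, conclusion).
RULES = [
    ({"jaundice", "fatigue"}, "suspect_hepatitis"),
    ({"suspect_hepatitis", "hbsag_positive"}, "hepatitis_b"),
    ({"suspect_hepatitis", "anti_hcv_positive"}, "hepatitis_c"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises are all known facts,
    adding its conclusion, until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"jaundice", "fatigue", "hbsag_positive"}, RULES)
print("hepatitis_b" in derived)  # → True
```

Backward chaining, the other mode the abstract mentions, would instead start from a goal (e.g. "hepatitis_b") and recursively try to prove its premises.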
This study evaluated the use of an interactive computer module to supplement a traditional paper informed consent form for pediatric endoscopy. Parents who received the supplemental electronic module were more likely to achieve informed consent compared to those who only received the paper form. The electronic module did not impact parent satisfaction, anxiety, or the number of questions asked of physicians. The results suggest that electronic tools can enhance traditional informed consent methods.
This document provides definitions for various terms related to health informatics. It defines terms such as algorithm, bioinformatics, clinical coding system, clinical data system, clinical decision tool, communication, database, electronic health record, and medical knowledge. The definitions cover topics such as the use of informatics methods and technologies in research, clinical practice, public health, and consumer health contexts.
Provenance abstraction for implementing security: Learning Health System and ... (Vasa Curcin)
Discussion of provenance usage in the Learning Health System paradigm, as implemented in the TRANSFoRm project, with focus on security requirements and how they can be addressed using provenance graph abstraction.
This document discusses developing a framework for evidence-based medicine using real-world data. It outlines developing a framework that captures baseline rates and evolves over time. The objective is to facilitate discussion on assembling baseline data from sources like clinical trials, patient data, and genomic studies to derive treatable patient traits and map available therapies to identify unmet medical needs. It provides examples of establishing such a framework for a condition like atopic dermatitis and lists potential components needed like an up-to-date health metrics database and tools for evaluating disease burden and treatment efficacy.
Researchers and care providers wanted to have access to all of the patients' vital signs (temperature, blood pressure, heart rate, and respiratory rate), but most of this data wasn't recorded; only a few readings a day were posted to the patient's Electronic Medical Record (EMR). The EMR isn't meant to store such a volume of data, let alone to perform any data mining on it. This session will describe the architecture of the solution that was implemented to collect these vital signs automatically from Bedside Medical Devices (BDMI), store them in temporary storage, and then load them into a Hadoop cluster. The session will also cover how the team married this vital-signs data in HDFS (Hadoop Distributed File System) with the rest of the EMR data so that the Principal Investigators (PIs) in our research institute could search for correlations between administered medications, diagnoses, and vital-sign readings. The session will describe the reasons behind the design decisions that were made, such as using a cloud Hadoop cluster versus on-premises while maintaining HIPAA compliance.
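The actual pipeline described above runs on a Hadoop cluster; as a small, self-contained illustration of the final correlation step (joining vital-sign readings with EMR medication records per patient), here is a Python sketch over hand-made records. The patients, timestamps, and drug names are all invented for the example.

```python
from collections import defaultdict

# Illustrative records; in the described system, vitals would come from
# the Hadoop cluster and medications from the EMR extract.
vitals = [
    {"patient": "p1", "ts": "2020-01-01T08:00", "hr": 88},
    {"patient": "p1", "ts": "2020-01-01T08:05", "hr": 122},
    {"patient": "p2", "ts": "2020-01-01T09:00", "hr": 76},
]
medications = [
    {"patient": "p1", "drug": "metoprolol"},
]

def join_vitals_with_meds(vitals, medications):
    """Group vital-sign readings by patient and attach that patient's
    medication list -- the kind of correlation query the session describes."""
    meds_by_patient = defaultdict(list)
    for m in medications:
        meds_by_patient[m["patient"]].append(m["drug"])
    joined = defaultdict(lambda: {"readings": [], "drugs": []})
    for v in vitals:
        rec = joined[v["patient"]]
        rec["readings"].append(v["hr"])
        rec["drugs"] = meds_by_patient.get(v["patient"], [])
    return dict(joined)

result = join_vitals_with_meds(vitals, medications)
print(result["p1"]["drugs"])  # → ['metoprolol']
```

At cluster scale this join would be expressed in Hive, Spark, or MapReduce rather than in-memory Python, but the shape of the query is the same.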
Intelligent, Interoperable, Relevance and Value Enrichment in Universal, Ubiq... (ijceronline)
Electronic Health Records (EHR) are electronically maintained, linked collections of allied, patient-related healthcare information collected during past encounters. They incorporate patient demographic information, encounter details, laboratory reports, prescription notes, past medical records, and other medical data. EHR creation is designed to support future diagnosis, treatment, and decision making in patient care. However, since EHR technology is a burgeoning science, many facets remain under-utilized. Current implementations are confined to national boundaries managed by individual National Health Systems (NHS). Consolidated, universally interoperable EHR schemes are still a thing for the future; a migratory patient may not have his national EHR available in distant territories. Further, examination of operational factors unearthed more inadequacies. Interoperability-related issues include limited network bandwidth causing inordinate delays, diverse local storage schemes at the various NHS clusters, the requirement for synchronous vocabulary-translation mechanisms at the various NHS-controlled boundaries, and the related security and access issues. These issues arise from the synchronous, query-messaging nature of information access and exchange.
This paper articulates a novel, sound, and secure methodology for achieving true international interoperability and uniform efficiency in ubiquitous Electronic Health Record systems. Utilizing intelligent machine learning processes, the required query-messaging information is meaningfully aggregated, enhancing the relevancy, access speed, and value derived from the given data. Asynchronous learning removes the need for high available network bandwidth and the upload and download delays associated with current synchronous database/cloud systems. Indeed, this overarching solution ensures seamless synchronous operation and high-end international interoperability, and would work in any ubiquitous EHR environment.
European Pharmaceutical Review: Trials and Errors in Neuroscience (KCR)
This document discusses several challenges of conducting clinical research in neuroscience. It notes that while interest and publications in neuroscience have increased, the nervous system remains the least understood part of the human body. Conducting global clinical trials in neuroscience poses difficulties due to variations in where patients can be found, standards of care between countries, and restrictions on access to modern therapies. The document also outlines problems with using complex questionnaires in trials and inconsistencies in how patients perceive and report their symptoms.
The Learning Health System: Thinking and Acting Across Scales (Philip Payne)
A Learning Health System (LHS) can be defined as an environment in which knowledge generation processes are embedded into daily clinical practice in order to continually improve the quality, safety, and outcomes of healthcare delivery. While still largely an aspirational goal, the promise of the LHS is a future in which every patient encounter is an opportunity to learn and improve that patient’s care, as well as the care their family and broader community receives. The foundation for building such an LHS can and should be the Electronic Health Record (EHR), which provides the basis for the comprehensive instrumentation and measurement of clinical phenotypes, as well as a means of delivering new evidence at the patient- and population levels. In this presentation, we will explore the ways in which such EHR-derived phenotypes can be combined with complementary data across a spectrum from biomolecules to population level trends, to both generate insights and deliver such knowledge in the right time, place, and format, ultimately improving clinical outcomes and value.
This document discusses the journey of a SAS programmer becoming a clinical SAS programmer. It describes some key differences between being a pure SAS programmer versus a clinical programmer. A clinical programmer must understand objectives of clinical trials, trial phases, and cross-functional roles of sponsors, investigators, statisticians, and data managers. The document outlines the main objectives and characteristics of each phase of clinical trials from Phase I to Phase IV. It emphasizes that clinical programming requires understanding clinical concepts in addition to programming skills.
The document discusses Peter Embi's approach to presenting on clinical and translational research informatics literature from the past year. It provides an overview of Embi's search strategy and categorization of papers, which involved searching literature databases and recommendations from colleagues. The presentation will focus on summarizing representative papers within categories like data sharing/reuse, methods and systems, recruitment and eligibility, and trends in clinical research informatics.
Clinical Research Informatics (CRI) Year-in-Review 2014Peter Embi
Peter Embi's review of notable publications and events in the field of Clinical Research Informatics (CRI) that took place in 2013+. This was presented as the closing keynote presentation of the 2014 AMIA CRI Summit in San Francisco, CA on April 11, 2014.
Disrupting the Oncology Care Continuum through AI and Advanced AnalyticsMichael Peters
Having Presented at #SROA18 on the need to move from basic Data and Reporting to Advanced Analytics and Artificial Intelligence, I thought I would share my deck for all.
This document summarizes a presentation given by Peter Embi on clinical and translational research and informatics literature from 2012-2013. It begins with Embi's background and approach to identifying relevant papers. It then describes the topics covered in the presentation, which are grouped into categories like clinical data reuse, data management/discovery, researcher support/resources, and recruitment. For each category, 1-2 key papers are summarized in 1-3 sentences. The summaries highlight the papers' goals, methods, and conclusions. The document concludes by mentioning other notable papers and events from the past year.
This paper describes the methods of the Treatment In Morning versus Evening (TIME) study, a large prospective randomized open-label blinded endpoint study comparing morning versus evening dosing of antihypertensive medications. The TIME study recruits participants through advertising, primary and secondary care, and patient databases in the UK. Participants self-enroll and consent on a secure website, and are randomized to morning or evening dosing. Follow-ups are conducted by automated email at 1 month and every 3 months thereafter. The study uses a prospective randomized open-label blinded endpoint design to establish if evening dosing is more cardioprotective than morning dosing.
Healthcare analytics has the potential to reduce costs of treatment, predict outbreaks of epidemics, avoid preventable diseases, and improve quality of life. It can improve processes, enhance patient care, and save lives by using analytics to better predict patient needs and staff accordingly. Electronic health records store a patient's comprehensive medical history digitally, allowing doctors to track changes over time with no risk of lost data or duplication. Analyzing demographic health data allows for strategic planning to identify factors that discourage treatment uptake. Analytics also helps prevent security threats, fraud, and inaccurate insurance claims while streamlining the claims process. The patient experience, overall population health, and operational costs can all be improved through healthcare analytics.
One aspect of personalized medicine is certain; it is
complicated. If you happen to have a highly scientific
background, you actually may be able to define the term.
However, if you polled five people very familiar with
personalized medicine, you should expect to hear five
different definitions. ISR wanted to understand where oncologists stand on the topic of personalized medicine. We interviewed 101 US based, board-certified oncologists to gather their views on
how familiar they are with personalized medicine, how they
are treating patients, what tests are being used and which
will be used more, and how their patient treatment regimens
could evolve in the future.
The document summarizes a study that evaluated the acceptability of a personally controlled health record (PCHR) system called Indivo in a community-based setting. Over 300 participants were involved in formative research activities to understand awareness, beliefs and reactions. The study found moderate awareness of privacy issues and high support for patient autonomy. Results informed guidelines on design improvements, literacy tools, and safety protocols for PCHR systems. Limitations included a lack of detail on methodology and sample selection.
Adaptive Learning Expert System for Diagnosis and Management of Viral Hepatitisgerogepatton
Viral hepatitis is the regularly found health problem throughout the world among other easily transmitted diseases, such as tuberculosis, human immune virus, malaria and so on. Among all hepatitis viruses, the uppermost numbers of deaths are result from the long-lasting hepatitis C infection or long-lasting hepatitis B. In order to develop this system, the knowledge is acquired using both structured and semi-structured interviews from internists of St.Paul Hospital. Once the knowledge is acquired, it is modeled and represented using rule based reasoning techniques. Both forward and backward chaining is used to infer the rules and provide appropriate advices in the developed expert system. For the purpose of developing the prototype expert system SWI-prolog editor also used. The proposed system has the ability to adapt with dynamic knowledge by generalizing rules and discover new rules through learning the newly arrived knowledge from domain experts adaptively without any help from the knowledge engineer
ADAPTIVE LEARNING EXPERT SYSTEM FOR DIAGNOSIS AND MANAGEMENT OF VIRAL HEPATITISijaia
Viral hepatitis is the regularly found health problem throughout the world among other easily transmitted
diseases, such as tuberculosis, human immune virus, malaria and so on. Among all hepatitis viruses, the
uppermost numbers of deaths are result from the long-lasting hepatitis C infection or long-lasting hepatitis
B. In order to develop this system, the knowledge is acquired using both structured and semi-structured
interviews from internists of St.Paul Hospital. Once the knowledge is acquired, it is modeled and
represented using rule based reasoning techniques. Both forward and backward chaining is used to infer
the rules and provide appropriate advices in the developed expert system. For the purpose of developing
the prototype expert system SWI-prolog editor also used. The proposed system has the ability to adapt with
dynamic knowledge by generalizing rules and discover new rules through learning the newly arrived
knowledge from domain experts adaptively without any help from the knowledge engineer.
This study evaluated the use of an interactive computer module to supplement a traditional paper informed consent form for pediatric endoscopy. Parents who received the supplemental electronic module were more likely to achieve informed consent compared to those who only received the paper form. The electronic module did not impact parent satisfaction, anxiety, or the number of questions asked of physicians. The results suggest that electronic tools can enhance traditional informed consent methods.
This document provides definitions for various terms related to health informatics. It defines terms such as algorithm, bioinformatics, clinical coding system, clinical data system, clinical decision tool, communication, database, electronic health record, and medical knowledge. The definitions cover topics such as the use of informatics methods and technologies in research, clinical practice, public health, and consumer health contexts.
Provenance abstraction for implementing security: Learning Health System and ...Vasa Curcin
Discussion of provenance usage in the Learning Health System paradigm, as implemented in the TRANSFoRm project, with focus on security requirements and how they can be addressed using provenance graph abstraction.
This document discusses developing a framework for evidence-based medicine using real-world data. It outlines developing a framework that captures baseline rates and evolves over time. The objective is to facilitate discussion on assembling baseline data from sources like clinical trials, patient data, and genomic studies to derive treatable patient traits and map available therapies to identify unmet medical needs. It provides examples of establishing such a framework for a condition like atopic dermatitis and lists potential components needed like an up-to-date health metrics database and tools for evaluating disease burden and treatment efficacy.
Researchers and care providers wanted to have access to all of the patients' vital signs (temperature, blood pressure, heart rate, and respiratory rate), but most of this data wasn't recorded; only a few readings a day were posted to the patient's Electronic Medical Record (EMR). The EMR isn't meant to store such a volume of data, let alone to perform any data mining on it. This session will describe the architecture of the solution that was implemented to collect these vital signs automatically from Bedside Medical Devices (BDMI), store them in temporary storage, and then load them into a Hadoop cluster. The session will also cover how the team married this vital signs data in HDFS (Hadoop Distributed File System) with the rest of the EMR data for the Principal Investigators (PIs) in our research institute to search for correlations between administered medications, diagnoses, and vital signs readings. The session will describe the reasons behind the design decisions that were made, such as using a cloud Hadoop cluster versus on-premises while maintaining HIPAA compliance.
Intelligent, Interoperable, Relevance and Value Enrichment in Universal, Ubiq...ijceronline
Electronic Health Records (EHR) are electronically maintained, linked collections of allied, patient-related healthcare information collected during past encounters. They incorporate patient demographic information, encounter details, laboratory reports, prescription notes, past medical records, and other medical data. EHR creation is designed to support future diagnosis, treatment, and decision making in patient care. However, since EHR technology is a burgeoning science, many facets remain under-used or under-utilized. Current implementations are confined to national boundaries managed by individual National Health Systems (NHS). Consolidated, universally interoperable EHR schemes are still a thing for the future; a migratory patient may not have his national EHR available in distant territories. Further, the examination of operational factors unearthed more inadequacies. Interoperability-related issues include limited network bandwidth causing inordinate delays, diverse local storage schemes at the various NHS clusters, the related requirement for synchronous vocabulary-related translation mechanisms at the various NHS-controlled boundaries, and the related security and access issues. These issues arise from the requirement for synchronous, query-messaging access and exchange of information.
This paper articulates a novel, sound, and secure methodology for achieving true international interoperability and uniform efficiency in ubiquitous Electronic Health Record systems. Utilizing intelligent machine learning processes, required query-messaging information is meaningfully aggregated, enhancing the relevancy, access speed, and value derivation from the given data. Asynchronous learning excludes the need for highly available network bandwidth and the upload and download delays associated with current synchronous database/cloud systems. Indeed, this overarching solution ensures seamless synchronous operation and high-end international interoperability, and would work in any ubiquitous EHR environment.
European Pharmaceutical Review: Trials and Errors in NeuroscienceKCR
This document discusses several challenges of conducting clinical research in neuroscience. It notes that while interest and publications in neuroscience have increased, the nervous system remains the least understood part of the human body. Conducting global clinical trials in neuroscience poses difficulties due to variations in where patients can be found, standards of care between countries, and restrictions on access to modern therapies. The document also outlines problems with using complex questionnaires in trials and inconsistencies in how patients perceive and report their symptoms.
The Learning Health System: Thinking and Acting Across ScalesPhilip Payne
A Learning Health System (LHS) can be defined as an environment in which knowledge generation processes are embedded into daily clinical practice in order to continually improve the quality, safety, and outcomes of healthcare delivery. While still largely an aspirational goal, the promise of the LHS is a future in which every patient encounter is an opportunity to learn and improve that patient’s care, as well as the care their family and broader community receives. The foundation for building such an LHS can and should be the Electronic Health Record (EHR), which provides the basis for the comprehensive instrumentation and measurement of clinical phenotypes, as well as a means of delivering new evidence at the patient- and population levels. In this presentation, we will explore the ways in which such EHR-derived phenotypes can be combined with complementary data across a spectrum from biomolecules to population level trends, to both generate insights and deliver such knowledge in the right time, place, and format, ultimately improving clinical outcomes and value.
This document provides definitions for various terms related to health informatics. It defines terms such as algorithm, bioinformatics, clinical coding system, clinical data system, clinical decision tool, communication, database, electronic health record, and medical knowledge. The definitions cover topics such as the use of informatics methods and technologies in clinical care, research, public health, and consumer health contexts.
Health research, clinical registries, electronic health records – how do they...Koray Atalag
This is a talk I gave at my own organisation - National Institute for Health Innovation (NIHI) of the University of Auckland on 6 Aug 2014. Abstract as follows:
In this talk I’ll first cover the topic of clinical registries – an invaluable tool for supporting clinical practice that is also gaining momentum in research and quality improvement. NIHI has been very active in this space: we have delivered the prestigious and highly successful National Cardiac Registry (ANZACS-QI) together with the VIEW research team and also very recently launched the Gestational Diabetes Registry with Counties Manukau DHB & Diabetes Projects Trust. A few others are likely to come down the line. This is a huge opportunity for health-data-driven research and for NIHI to position itself as ‘the health data steward’ in the country, given our independent status, existing IT infrastructure, and “good culture” of working with health data. NIHI’s ‘health informatics’ twist in delivering these projects is how we go about defining ‘information’ – using a scientifically credible and robust methodology: openEHR. This is an international (and now national too) standard to unambiguously define health information so that it is easy to understand and also computable. We build software (even automatically in some cases!) using models created by this formalism. I’ll give the basics of the openEHR approach and then walk you through how to make sense of all this. Hopefully you may get an idea about its ‘value proposition’ (as business people call it) or Science merit as I like to call it ;)
Combining Patient Records, Genomic Data and Environmental Data to Enable Tran...Perficient, Inc.
The average academic research organization (ARO) and hospital has many systems that house patient-related information, such as patient records and genomic data. Combining data from a variety of sources in an ongoing manner can enable complex and meaningful querying, reporting and analysis for the purposes of improving patient safety and care, boosting operational efficiency, and supporting personalized medicine initiatives.
In this webinar, Perficient’s Mike Grossman, a director of clinical data warehousing and analytics, and Martin Sizemore, a healthcare strategist, discussed:
-How AROs and hospitals can benefit from a systematic approach to combining data from diverse systems and utilizing a suite of data extraction, reporting, and analytical tools, in order to support a wide variety of needs and requests
-Examples of proposed solutions to real-life challenges AROs and hospitals often encounter
Talk entitled "from the Virtual Human to a Digital Me" presented at the Virtual Physiological Human 2012 Conference held at IET Savoy, Savoy Place, London, 18-20 September 2012.
Welcome to the age of cognitive computing, where intelligent machines have moved from the realms of science fiction to the present day. This groundbreaking technology is driving advanced discoveries and allowing improved decision-making – resulting in better patient care.
Machine learning algorithms show promise in diagnostic development but face barriers to market entry. ML can generate biomarker signatures for new diagnostics from large clinical datasets, either as fixed biomarker panels or with continuously evolving algorithms using full sequencing data. However, ML tests require extensive validation in prospective clinical trials with well-defined controls before regulatory approval and reimbursement. Major technical limitations also include limited high-quality patient data and potential sample biases. Widespread adoption of ML diagnostics will depend on demonstrating clear clinical utility and outcomes that change standards of care.
This document discusses the importance of electronic health records and clinical decision support systems for improving healthcare quality and reducing costs and errors. It notes that healthcare information is essential for providing and managing patient care. Clinical decision support systems can help ensure best practices are followed and reduce unnecessary tests and costs. However, the document also finds that healthcare practices still vary greatly between regions and clinicians due to complexity, uncertainty and lack of evidence. More high-quality data and decision support are needed to address these issues and improve consistent high-value care.
1. The document discusses the advantages and disadvantages of implementing an electronic health record (EHR) system to replace a paper-based system.
2. A key disadvantage is the high cost of implementation, with the cost of Alberta's new clinical information system estimated at $1.6 billion over 10 years.
3. Another disadvantage is a lack of interoperability between existing EHR systems, which prevents patient information from being shared and understood across health settings.
Data Science Deep Roots in Healthcare IndustryDinesh V
Data Science transforms the healthcare industry with impeccable solutions that can improve patient care through EHRs, medical imaging, drug discovery, predictive medicines and genetics and genomics.
The document discusses the multiple lives of clinical data. It begins by describing how clinical data is first used in patient care by being documented in the electronic medical record. It then discusses how clinical data can be transformed and used for research purposes by analyzing aggregate data from clinical documentation. The document provides examples of how clinical data from a clinic was analyzed over time to enable new research studies. Finally, the document discusses how clinical data can have a third life in an enterprise data warehouse, where operational and strategic questions can be answered by analyzing and reporting on clinical data. It provides examples of the types of analyses that can be done using an enterprise data warehouse.
AI and machine learning show promise in addressing several healthcare problems by making sense of complex and diverse medical data. However, healthcare data needs to be handled carefully due to privacy and bias concerns. Radiology is one area that could benefit from AI, which has been shown to identify things in medical images well. AI may help pathologists more consistently score biomarkers like PD-L1. Pharmaceutical companies are also exploring uses of AI in areas like drug discovery and patient stratification.
In this full-day tutorial, you will learn basic overview of electronic medical records systems, health data management and how you can use the OpenMRS system for data and information management. We will cover basics of installation, user management, location management, patient dashboards and some interesting features that are provided by different modules. You can see how OpenMRS can be customized with different modules that are suitable for different contexts. This tutorial is helpful for new users and developers who would like to know the features of OpenMRS. Individuals who would like to evaluate and try to see if OpenMRS fits their healthcare needs will also benefit from this tutorial.
Data Mining and Big Data Analytics in Pharma Ankur Khanna
The document proposes software solutions for drug research, including text mining, data warehousing, data mining, database development, and big data analytics. It discusses common challenges in drug research like the high costs and low success rates. It then describes various solutions like text mining patents and research to help identify new research opportunities and reduce duplication of efforts. It provides examples of how various pharmaceutical companies use data mining and warehousing techniques. Overall, the document pitches different IT solutions that can help pharmaceutical and life sciences companies address their research challenges and make their processes more efficient.
An AI-based Decision Platform built using a unified data model, incorporating systems biology topics for unit analysis using semi-supervised learning models
Panel: FROM SMALL TO BIG TO RICH DATA: Dealing with new sources of data in Biomedicine Precision and Participatory Medicine
Fernando J. Martin-Sanchez, Professor and Chair of Health Informatics at Melbourne Medical School, discusses new sources of data in biomedicine including small, big, and rich data. He describes how small data connects people with meaningful insights from big data to be understandable for everyday tasks. Martin-Sanchez also discusses precision medicine, participatory health, and how convergence between the two can help integrate multiple data sources including genomics, the exposome, and digital health to improve disease prevention and treatment outcomes.
Automated Abstracting - NCRA San Antonio 2015Victor Brunka
Artificial intelligence can help automate the process of completing cancer registry abstracts. Recent successes in automating casefinding from pathology and imaging reports and extracting standardized data show promise. Continued progress in natural language processing, along with consolidation of diverse health records into a common data architecture, may allow auto-population of most abstract fields with high accuracy and completeness. This would enhance quality and timeliness of cancer reporting while reducing costs. The registry's role then focuses on complex tasks, maintaining standards and oversight.
Similar to Simplifying semantics for biomedical applications (20)
Gruff provides two versions: a standalone version for exploring small datasets and a server version for large enterprise datasets. It allows navigating graphs in various views, automatically deriving queries from patterns, and programmatically controlling Gruff over HTTP. The document outlines 11 lessons on using key Gruff features like managing triple stores, the graph view, table view, visual query builder, and using pictures for nodes. It demonstrates connecting to AllegroGraph Server and discusses future plans like additional views and statistical/geospatial analysis support.
This document discusses Real Time Semantic Data Warehousing (RETIS) technology provided by Sindice.com. RETIS allows pharmaceutical companies to integrate diverse public and private data sources in real-time to help data scientists discover new insights and connections. It provides unified search and browsing of live internal and external datasets. Sindice's semantic warehousing approach uses Linked Data clouds, semantic sandboxes, and cloud computing to easily integrate new databases with unprecedented flexibility and scale.
The document discusses the adoption of RDFa and structured data, also known as the "Wave", by major websites like Yahoo, Google, and Facebook between 2008-2010. It provides timelines showing when each site began supporting different types of structured data through RDFa, such as reviews, events, recipes, and products. The growing use of RDFa led to increases in backlinks and click-through rates, demonstrating the importance of structured data for search engine optimization.
The document discusses the rise of structured data and RDFa usage on the web, known as "The Wave". It describes how search engines like Yahoo, Google and Facebook began supporting RDFa for rich snippets in search results starting in 2008. As adoption increased, it led to growth in the Linked Open Data cloud. The document encourages adding RDFa to websites to take advantage of benefits like increased click-through rates and search visibility. It notes that standardized vocabularies are important and demonstrates an RDFa validation tool.
The document discusses how the Semantic Web could potentially disrupt or benefit online commerce. It provides definitions and explanations of key concepts related to the Semantic Web, including semantic triples using RDF, ontologies, Linked Data, and technologies like SPARQL and RDFa that help extract structured data from web pages. The goal is to move from a web of documents to a web of structured, interconnected data that can be processed by machines.
The document discusses how the Semantic Web could potentially disrupt or benefit online commerce. It provides definitions and explanations of key concepts related to the Semantic Web, including semantic triples using RDF, ontologies, Linked Data, and technologies like SPARQL and RDFa that help extract structured data from web pages. The goal is to move from a web of documents to a web of data through adding meaning and relationships between concepts.
The document discusses the semantic web and how it can potentially disrupt or benefit online commerce. It provides definitions and explanations of key concepts related to the semantic web including RDF, ontologies, linked data, and semantic search. It outlines how search engines and websites are increasingly adopting and leveraging semantic web technologies like RDFa to provide richer search results and experiences for users.
This document discusses how semantic web technologies are being leveraged in various real world applications. It begins by providing examples of how search engines like Google and Bing are using semantic metadata to provide definitive answers and rich snippets directly in search results. It then discusses how social networks like Facebook are using semantic metadata through technologies like Open Graph protocol. The document concludes by showcasing the growth of Linked Open Data cloud and listing organizations that are adopting semantic web standards like RDFa.
1. Simplifying Semantic Solutions for Biomedical Applications
San Diego Semantic Web Meet-up
Eric Little, PhD
Chief Knowledge Engineer
Eric.Little@CTG.com
2. Current Biomedical Ontologies Exist in 3 Modes
• Metaphysically-based category systems that can properly classify biomedical entities & relations.
• Improperly structured coding systems (ICD-9/10, LOINC, EHRs, etc.) where little care has been taken in providing the proper formal standardization.
• Ontology development tools used for implementing models and their accompanying GUIs.
4. Promoting Fundamental Change
Current Technology Approaches:
• Decisions based upon Claims and Codes
• Reactive
• Static or limited delivery
New Technology Approaches:
• Decisions based upon Medical Informatics
• Proactive
• Rich Internet Experience & Self Service
Example: Current systems rely on things like ICD-9 and CPT codes – so patients already possess the chronic condition (past the threshold for a disease), and it is hard to objectively judge treatment efficacy. The need is to proactively recognize the complexities that lead to chronic disease and use objective measures to judge appropriate courses of action.
5. Ontology and Advanced Business Intelligence
• CTG provides a combination of innovative technologies that deliver unique medical insights into:
  – New Research Areas
    • Reuse & repurposing of knowledge
  – Disease Registries
  – Interventions
  – Physician Feedback
  – Outcomes Analysis
  – Treatment Efficacy
  – Provider Efficacy
  – Cost Effectiveness
  – Waste Reduction
(Diagram: Ontology, Reasoning Engines, and the Semantic Web.)
7. Ontologies should provide “actionable intelligence” across an entire organization
• Knowledge Specification – Provide formal definitions of all items/relations in a domain.
  – E.g., a substance such as alpha-fetoprotein (AFP) exists at high levels in healthy fetuses and infants and plays an important role in their health and development, while high levels in adults are an indicator of liver cancer. Thus the same protein can be a positive or a negative sign depending on age.
• Knowledge Elicitation – Provide search/query capabilities to better find and utilize pertinent information.
  – E.g., what are all the known compounds that can act as an SSRI drug? Which treatments have proven the most effective for a particular disease type?
• Knowledge Transfer – Provide improved communication between 1) human-to-machine, 2) human-to-human, 3) machine-to-machine.
  – E.g., improves understanding of complex things like disease states and provides added insights – requires linking disparate knowledge from different experts.
Traditional technologies can struggle to provide these kinds of capabilities. Ontology-based systems can help to integrate people through the use of a common semantic framework.
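As a minimal illustration of the context-sensitive knowledge specification described above, the AFP example could be encoded as a rule whose interpretation depends on age. The one-year cutoff and the function name are illustrative assumptions, not clinical guidance or CTG's actual implementation:

```python
# Sketch only: the same analyte is interpreted differently by life stage.
# The age cutoff is a placeholder, not a clinical reference value.

def interpret_afp(age_years: float, elevated: bool) -> str:
    """Interpret an AFP result in the context of patient age."""
    if not elevated:
        return "unremarkable"
    if age_years < 1:
        return "expected: AFP is normally high in fetuses and infants"
    return "flag: elevated AFP in an adult is an indicator of liver cancer"

print(interpret_afp(0.5, True))
print(interpret_afp(52, True))
```

The point of the ontology is that this kind of context dependency is captured once, formally, rather than re-implemented ad hoc in every application.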
8. Capturing Medical Knowledge in the Ontology (Knowledge of Range Rules)
• The normal ranges for hemoglobin depend on the age and, beginning in adolescence, the gender of the person. The normal ranges are:
  • Newborns: 17-22 gm/dl
  • One (1) week of age: 15-20 gm/dl
  • One (1) month of age: 11-15 gm/dl
  • Children: 11-13 gm/dl
  • Adult men: 14-18 gm/dl
  • Adult women: 12-16 gm/dl
  • Men after middle age: 12.4-14.9 gm/dl
  • Women after middle age: 11.7-13.8 gm/dl
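Range rules like these lend themselves directly to encoding. A minimal sketch using the slide's reference ranges; the dictionary keys and function name are illustrative, not part of MedMaP:

```python
# The slide's hemoglobin reference ranges (gm/dl) as simple range rules.
HEMOGLOBIN_RANGES = {
    "newborn":      (17.0, 22.0),
    "one_week":     (15.0, 20.0),
    "one_month":    (11.0, 15.0),
    "child":        (11.0, 13.0),
    "adult_male":   (14.0, 18.0),
    "adult_female": (12.0, 16.0),
    "older_male":   (12.4, 14.9),
    "older_female": (11.7, 13.8),
}

def classify_hemoglobin(group: str, value: float) -> str:
    """Return 'low', 'normal', or 'high' for a reading in gm/dl."""
    low, high = HEMOGLOBIN_RANGES[group]
    if value < low:
        return "low"
    if value > high:
        return "high"
    return "normal"

print(classify_hemoglobin("adult_male", 13.2))    # low
print(classify_hemoglobin("adult_female", 13.2))  # normal
```

Note how the same reading classifies differently by group – exactly the context sensitivity the ontology is meant to capture declaratively.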
9. Merging Independent Domain Ontologies
• Ontologies provide:
  – Relations between blood chemistry, medical conditions and symptoms
  – Relations between diseases, diagnoses and treatments
  – Knowledge integration across a broad spectrum of applications
  – A common medical semantics – gives meaning to codes and integrates the information contained in those codes
(Diagram: a Diagnosis Ontology, Disease Ontology, Cell Ontology, and LOINC Ontology are integrated, allowing inference across various domains.)
10. Ontology Storyboard for Medical Informatics
(Diagram: a Disease Model links Physical Test Results (LOINC), Medical Conditions (SNOMED-CT), Diagnosis (SNOMED-CT, ICD9), Medical Claims (ICD9, CPT, EHR), Disease Risk Registry (SNOMED-CT), Disease Registry (SNOMED-CT), Treatment (SNOMED-CT, CPT), and Co-morbidities.)
Example of Utilizing MedMaP for Liver Cancer Screening:
• LOINC code 1834-1 (AFP Ser-mCnc) – Elevated AFP
• Patient age, sex, etc. – Age Ontology: Adult Male
• Disease Risk Ontology: Hepatocellular Carcinoma (HCC) – ICD-9 155
• Foundational Model of Human Anatomy Ontology: LIVER – Yolk Sac
• Cell Ontology: Liver Cells; Dendrite Cell Ontology: Immune Cells
• Gene Ontology: Chromosome 4, Gene ID 174
• Co-morbidities: Hepatitis B, Hepatitis C
• Treatment: CPT 47135 (Liver Transplant); Pharmaceutical Treatments – Drug Bank Ontology: Sorafenib
• Patient Cohorting
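A hedged sketch of how the storyboard's coded facts might chain into a screening flag. The codes (LOINC 1834-1 for serum AFP, ICD-9 155 for liver cancer) come from the slide; the cutoff value, record structure, and rule logic are invented for illustration:

```python
# Chaining coded facts from different vocabularies into a disease-risk flag.
# The AFP cutoff is a placeholder, not a clinical value.

AFP_ADULT_UPPER = 10.0  # illustrative cutoff, ng/mL

patient = {
    "age": 58, "sex": "M",
    "labs": {"1834-1": 420.0},   # LOINC 1834-1: AFP, serum
    "history": ["hepatitis_b"],  # co-morbidity from the storyboard
}

def hcc_risk(p: dict) -> bool:
    """Adult with elevated AFP plus hepatitis B/C history -> HCC risk cohort."""
    elevated_afp = p["labs"].get("1834-1", 0.0) > AFP_ADULT_UPPER
    viral_hep = bool({"hepatitis_b", "hepatitis_c"} & set(p["history"]))
    return p["age"] >= 18 and elevated_afp and viral_hep

if hcc_risk(patient):
    print("add to disease-risk registry: ICD-9 155 (hepatocellular carcinoma)")
```

In the ontology-based version, these hand-written joins are replaced by inference across the federated LOINC, SNOMED-CT, and disease-risk models.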
11. Objectives of Ontology-based Biomedical Systems
• Find new insights
  • Provide new knowledge and insights by running inference engines (the system itself should multiply the knowledge)
  • Establish and apply business rules which entail a higher level of reasoning
• Capture SME expertise and deliver globally
  • Establish a model that captures SME approach and science (establish a collaborative community)
  • Insulate the common user from the complexity unless it is requested
  • Access to knowledge should not require IT intervention
• Allow users to customize their experience to meet their specific needs and objectives
  • Portal to allow for a customized experience
  • Multi-perspectivalism should allow multiple views or renderings of the same data
  • Also support static views to dictate or direct the usable experience
• Deliver intelligence in an intuitive, easily consumable format
  • Ease of Use = High Adoption
  • Interactive and tactile feel to allow the user to experience and interact with the ABI system
  • Rendered / skinned in a format familiar to the user's surroundings
• Utilize common semantics that are relevant to the user
  • What a Health Care Provider records as Hypertensive (140 systolic / 90 diastolic) is more commonly known to a Patient as High Blood Pressure
  • Model must support synonyms
• Educate
  • Allow users to gain insights into SME expertise and reasoning results
  • News feeds, research, SPARQL endpoints, lexicons of relevant terms, video
• Provide evidence to support findings
  • Ability to drill into reasoning and scoring and expose business rules, raw data, formulas, and foundations of knowledge
• Make intelligence “actionable”
  • Allow people to take action with the knowledge gained
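The synonym-support objective can be illustrated with a tiny audience-aware term mapping. The hypertension example comes from the slide; the data structure and function are hypothetical:

```python
# Mapping one clinical concept to audience-appropriate renderings, so the
# same underlying data can be shown differently to providers and patients.
SYNONYMS = {
    "hypertension": {
        "clinician": "hypertensive (>= 140 systolic / 90 diastolic)",
        "patient": "high blood pressure",
    },
}

def render(concept: str, audience: str) -> str:
    """Render a concept label for a given audience."""
    return SYNONYMS[concept][audience]

print(render("hypertension", "patient"))  # high blood pressure
```

In an ontology this would be modeled as synonym/label relations on the concept itself rather than a hand-maintained table.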
12. Technology Stack
• RDF, RDF Schema – Establish Fact Base
• XML, XML Schema, XSLT – Standard Structure & Syntax
• OWL-DL Ontology – Model Concepts into Knowledge Base
• Reasoner – Provide Reasoning and Inference
• Rule Execution – Take Action based on Knowledge
Each layer of the stack increases intelligence.
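To make the "fact base plus reasoner" layers concrete, here is a toy forward-chaining pass over a set of triples. The vocabulary and rule are made up for illustration; a real stack would use RDF/OWL tooling and a production reasoner:

```python
# A minimal fact base of (subject, predicate, object) triples, plus one
# reasoning step that materializes inferred triples from a single rule.
facts = {
    ("epoetin_alfa", "treats", "anemia"),
    ("anemia", "indicated_by", "low_hemoglobin"),
}

def infer(triples: set) -> set:
    """One pass of the rule: X treats D, D indicated_by S => X addresses S."""
    new = set()
    for (x, p1, d) in triples:
        if p1 != "treats":
            continue
        for (d2, p2, s) in triples:
            if d2 == d and p2 == "indicated_by":
                new.add((x, "addresses", s))
    return triples | new

print(sorted(infer(facts)))
```

This is the sense in which "the system itself should multiply the knowledge": the inferred triple was never asserted, only derived.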
13. Layered Inference based upon Medical Informatics
• Claims & Labs
• Medical Concepts & Facts
• Medical Conditions & Disease Risk Models
• Disease Registries & Recommended Treatments
• Interventions & Outcomes & Efficacy
Each layer increases capability, paralleling the technology stack: establish the fact base, standardize structure & syntax, model concepts into the knowledge base, provide reasoning and inference, and take action based on knowledge.
15. Knowledge Management Technologies for Drugs/Immunizations
• Utilizes federated sub-ontologies to:
  – Classify types of diseases treated
  – Classify active and inactive ingredients
  – Classify potential side effects – e.g., contraindications
  – Classify dosages
  – Classify chemical composition/formula
  – Classify pathways
(Screenshots: the class hierarchy, a graph of items in the ontology for a product of interest (the Pentacel immunization), and web search capabilities on any item in the ontology, e.g., Pentacel.)
16. Advanced Research Capabilities
• Information can be manually searched or automatically linked (via RSS feeds).
• Allows researchers to find the information they need very rapidly.
• Information gathered through these inputs can be quickly ingested into the ontology – improves the knowledge base over time.
(Screenshot: a specific research article, details of the article, and the on-line resource.)
17. Graphical Visualizations of Important Relationships in the Data
• Different data sets can be graphically related to one another.
• Allows for rapid insights into new kinds of relationships.
• Can be used to represent individual records (e.g., individual patient data) versus group data (e.g., groups of patients classified into categories – morbidly obese, clinically obese, etc.).
(Diagram shows overlap of different categories.)
18. Building a Disease Registry
• Proactive identification of a disease-risk constituency allows a Payer to take action (intervene or educate) and perform preventive maintenance.
• Abilities for predictive modeling of disease risk, etc.
19. Inferring Patient Complexity, Conditions, Disease & Co-Morbidity
• The Medical Management ontology utilizes an inference engine that auto-classifies based upon industry-accepted reference ranges and intelligent rules to determine complexity factors such as conditions, disease risk, etc.
• Blue lines in the diagram below identify knowledge learned (inferred) from the inference engine.
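The two-stage inference described above – reference ranges yield conditions, and rules over conditions yield disease risks – can be sketched as follows. The ranges, rules, and names here are illustrative placeholders, not industry reference values:

```python
# Stage 1: range rules turn raw lab values into conditions.
# Stage 2: rules over condition combinations infer disease risk
# (the "blue line" knowledge learned by the inference engine).

RANGES = {"glucose": (70, 99), "systolic_bp": (90, 119)}  # illustrative

def classify(labs: dict) -> set:
    conditions = set()
    if labs["glucose"] > RANGES["glucose"][1]:
        conditions.add("hyperglycemia")
    if labs["systolic_bp"] > RANGES["systolic_bp"][1]:
        conditions.add("hypertension")
    return conditions

def risks(conditions: set, bmi: float) -> set:
    inferred = set()
    if "hyperglycemia" in conditions and bmi >= 30:
        inferred.add("type_2_diabetes_risk")
    return inferred

conds = classify({"glucose": 140, "systolic_bp": 150})
print(conds, risks(conds, bmi=33))
```

The ontology-based version expresses both stages declaratively, so new rules can be added without code changes.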
20. Rapid Researching of Disease Management Related Issues
• Allows one to quickly identify and apply the most relevant available research on a given topic.
• Information can be manually searched.
• Information can be automatically linked to the ontology (RSS feeds).
• Allows the knowledge base of diseases (and related symptoms/blood chemistry readings) to continuously grow over time.
• Knowledge of disease becomes an asset over time.
21. Examples of Complex SPARQL Queries

SELECT ?lotno ?plrcount ?scocount ?sumcount ?factor
WHERE {
  { SELECT ?lotno (COUNT(*) AS ?plrcount)
    WHERE {
      ?plr a PLR:Pallet_Reading .
      ?plr PLR:lot_no ?lotno .
    } GROUP BY ?lotno
  }
  OPTIONAL {
    { SELECT ?lotno ?scocount
      WHERE {
        ?sco a SCO:Lot_Score .
        ?sco PLR:lot_no ?lotno .
        ?sco SCO:hasPalletCount ?scocount
      }
    }
  }
  OPTIONAL {
    { SELECT ?lotno (SUM(?count) AS ?sumcount)
      WHERE {
        ?s a SUM:Lot_Summary .
        ?s PLR:lot_no ?lotno .
        ?s SUM:hasSummaryCount ?count
      } GROUP BY ?lotno
    }
  }
  LET (?factor := ?sumcount / ?plrcount)
} ORDER BY ?lotno

CONSTRUCT {
  ?uri a SUM:Hourly_Summary .
  ?uri PLR:lot_no ?lotno .
  ?uri PLR:captureevtdate ?date .
  ?uri PLR:hasprocessinghour ?hr .
  ?uri CLS:hasClassifyGroup ?classifyGroup .
  ?uri CLS:hasClassifyProperty ?classifyProp .
  ?uri CLS:hasClassifyCategory ?classifyCategory .
  ?uri SUM:hasSummaryCount ?LotHrCount .
}
WHERE {
  { SELECT ?lotno ?date ?hr ?classifyGroup ?classifyProp ?classifyCategory
           (SUM(?Count) AS ?LotHrCount)
    WHERE {
      ?sum a SUM:Hourly_Summary .
      ?sum PLR:lot_no ?lotno .
      ?sum PLR:captureevtdate ?date .
      ?sum PLR:hasprocessinghour ?hr .
      ?sum CLS:hasClassifyGroup ?classifyGroup .
      ?sum CLS:hasClassifyProperty ?classifyProp .
      ?sum CLS:hasClassifyCategory ?classifyCategory .
      ?sum SUM:hasSummaryCount ?Count .
    } GROUP BY ?lotno ?date ?hr ?classifyGroup ?classifyProp ?classifyCategory
  } .
  LET (?uuid := smf:generateUUID()) .
  LET (?uri := smf:buildURI("<http://www.ctg.com/SUM#{?lotno}_{?date}_{?hr}_{?uuid}>")) .
}

(Note: LET and the smf: functions appear to be vendor extensions rather than standard SPARQL 1.1, which expresses variable assignment with BIND.)
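As a rough restatement of what the first query's aggregation computes, the same per-lot counts and derived factor can be produced over in-memory records. The field names and sample data are assumptions for illustration only:

```python
from collections import Counter

# Count pallet readings per lot, sum summary counts per lot, and derive
# factor = sumcount / plrcount - the same shape as the SPARQL SELECT above.
readings = [{"lot_no": "A"}, {"lot_no": "A"}, {"lot_no": "B"}]
summaries = [{"lot_no": "A", "count": 6}, {"lot_no": "B", "count": 2}]

plrcount = Counter(r["lot_no"] for r in readings)
sumcount = Counter()
for s in summaries:
    sumcount[s["lot_no"]] += s["count"]

for lot in sorted(plrcount):  # ORDER BY ?lotno
    factor = sumcount[lot] / plrcount[lot]
    print(lot, plrcount[lot], sumcount[lot], factor)
```

The SPARQL version does this declaratively against the triple store, with the OPTIONAL blocks tolerating lots that lack scores or summaries.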
25. How Knowledge is Delivered – Medical Management Portal (MedMaP)
• Support access to multiple analytical tools (portlets)
• Users customize their experience to meet their specific objectives
• Support multiple perspectives (different views for different people)
• Allow users to self-help / self-serve
• Educate
  • Allow users to gain insights into SME expertise and reasoning results
  • News feeds, research, SPARQL endpoints, lexicons of relevant terms, video
• Support static views to dictate or direct the most efficient experience
26. Graphical Visualizations of Important Relationships in the Data
• Different data sets can be graphically related to one another.
• Allows for rapid insights into new kinds of relationships.
• Can be used to represent individual records (e.g., individual patient data) versus group data (e.g., groups of patients classified into categories – morbidly obese, clinically obese, etc.).
(Diagram: yellow balls represent individual patients; purple balls represent individual patients with the added attribute of hyperglycemia.)
27. Cohort Populations: Identifying Critical Relationships within the Ontology
• Information from disparate sources can be rapidly compared using industry standard classifications.
  – Can relate information across different medical areas.
• The system can be used to link blood chemistry values to one another (e.g., glucose readings & BUN scores).
• It can link those values to other items (e.g., patient attributes – gender, Body Mass Index (BMI), etc.).
29. Therapeutic Optimization for Disease Management
• Example: Effective Treatment Management
  – Prescribing large doses of epoetin alfa for low levels of hemoglobin (anemia) in CKD patients.
  – Outcomes analysis can be used to determine whether this is an appropriate course of action.
30. Therapeutic Optimization for Disease Management
• Example: Potentially Ineffective Treatment Management
  – Prescribing large doses of epoetin alfa for normal hemoglobin.
  – Can be used to identify when a treatment is not warranted based on empirical evidence (outcomes).
  – Resulting question: why are large and costly doses of epoetin alfa being given to patients with normal hemoglobin levels?
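The outcomes question above could be operationalized as a simple screening rule over orders, using the slide 8 hemoglobin reference ranges. The records and field names are invented for illustration; this is not clinical logic:

```python
# Flag epoetin alfa orders for patients whose hemoglobin is already within
# the normal range for their demographic group (ranges from slide 8).
NORMAL_HGB = {"adult_male": (14.0, 18.0), "adult_female": (12.0, 16.0)}

orders = [
    {"patient": "p1", "group": "adult_male", "hgb": 10.9, "drug": "epoetin_alfa"},
    {"patient": "p2", "group": "adult_male", "hgb": 15.2, "drug": "epoetin_alfa"},
]

def questionable(order: dict) -> bool:
    """True when the drug targets a value that is already normal."""
    low, high = NORMAL_HGB[order["group"]]
    return order["drug"] == "epoetin_alfa" and low <= order["hgb"] <= high

flagged = [o["patient"] for o in orders if questionable(o)]
print(flagged)  # ['p2']
```

In the ontology-based system this check would be an inference rule over the treatment and range ontologies rather than hard-coded Python.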
Editor's Notes
If you cannot take action with the intelligence, what is the point? This is usually not science for the sake of science.