This document discusses UCSF's Enterprise Data Warehouse and Analytics Team. It describes the team's objectives to understand data needs, implement best practices for data management, and provide access and expertise to enterprise data. It then focuses on UCSF's implementation of the Epic Cogito Data Warehouse, which combines clinical and financial data from their Epic system and other sources into a common data model. Key details include the types of data in Cogito, how information flows from operational systems into the warehouse, and UCSF's timeline for incremental improvements to data and capabilities.
Extending Your EMR with Business Intelligence Solutions (Perficient, Inc.)
The best business intelligence applications start with one part EMR, one part financial applications, and one part operational applications, stirred into real insights. These slides show examples from speakers who have successfully extended their EMRs to manage costs, transmit information to disease registries, and improve patient care.
How Northwestern Medicine is Leveraging Epic to Enable Value-Based Care (Perficient, Inc.)
Value-based care and payment reform are prompting hospitals and healthcare providers to more closely manage population health. Hospitals and health systems rely on technology and data to outline the characteristics of their population and identify high-risk patients in order to manage chronic diseases and deliver enhanced preventative care.
Our webinar covered how Cadence Health, now part of Northwestern Medicine, is leveraging the native capabilities of Epic to manage their population health initiatives and value-based care relationships across the continuum of care.
Our speakers:
-Analyzed how Epic’s Healthy Planet and Cogito platforms can be used to manage value-based care initiatives.
-Examined the three steps for effective population health management: Collect data, analyze data and engage with patients.
-Covered how access to analytics allows physicians at Northwestern Medicine to deliver enhanced preventive care and better manage chronic diseases.
-Discussed Northwestern Medicine’s strategy to integrate data from Epic and other data sources.
Combining Patient Records, Genomic Data and Environmental Data to Enable Tran... (Perficient, Inc.)
A typical academic research organization (ARO) or hospital has many systems that house patient-related information, such as patient records and genomic data. Combining data from a variety of sources on an ongoing basis can enable complex and meaningful querying, reporting, and analysis for improving patient safety and care, boosting operational efficiency, and supporting personalized medicine initiatives.
In this webinar, Perficient’s Mike Grossman, a director of clinical data warehousing and analytics, and Martin Sizemore, a healthcare strategist, discussed:
-How AROs and hospitals can benefit from a systematic approach to combining data from diverse systems and utilizing a suite of data extraction, reporting, and analytical tools, in order to support a wide variety of needs and requests
-Examples of proposed solutions to real-life challenges AROs and hospitals often encounter
Levi Thatcher, Health Catalyst's Director of Data Science, and his team provide a live demonstration of using healthcare.ai to implement a healthcare-specific machine learning model from data source to patient impact. Levi works through a hands-on coding example while sharing his insights on the value of predictive analytics, the best path toward implementation, and how to avoid common pitfalls. Frequently asked questions are answered during the session.
During the webinar, we will:
Describe and install healthcare.ai
Build and evaluate a machine learning model
Deploy interpretable predictions to SQL Server
Discuss the process of deploying into a live analytics environment
If you’d like to follow along, you should download and install R and RStudio prior to the event. We look forward to you joining us!
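The workflow listed above (install the library, build and evaluate a model, persist predictions to a database) can be illustrated in outline. The session itself uses the healthcare.ai R package; the sketch below is a generic Python stand-in using scikit-learn and SQLite in place of SQL Server, with an invented synthetic dataset, so it shows the shape of the workflow rather than the healthcare.ai API.

```python
# Generic sketch of the webinar's workflow: build a model, evaluate it,
# and write patient-level predictions to a database table.
# scikit-learn/SQLite and the dataset are illustrative substitutes,
# not the healthcare.ai R package the session actually demonstrates.
import sqlite3
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for clinical features (e.g., age, a lab value)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Evaluate with AUROC, a common metric for clinical risk models
probs = model.predict_proba(X_test)[:, 1]
auc = roc_auc_score(y_test, probs)
print(f"Test AUROC: {auc:.2f}")

# "Deploy" predictions: persist patient-level risk scores to a table
# (an in-memory SQLite table stands in for a SQL Server table here)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE risk_scores (patient_id INTEGER, risk REAL)")
conn.executemany(
    "INSERT INTO risk_scores VALUES (?, ?)",
    [(i, float(p)) for i, p in enumerate(probs)],
)
conn.commit()
n_rows = conn.execute("SELECT COUNT(*) FROM risk_scores").fetchone()[0]
print(f"Wrote {n_rows} risk scores")
```

Writing scores to a database table, rather than serving the model directly, is what makes the predictions consumable by downstream reports and dashboards in an analytics environment.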
Explains the evolution of IT in healthcare and how analytics can make a difference. For more information visit: http://www.transformhealth-it.org/
This webinar will focus on the technical and practical aspects of creating and deploying predictive analytics. We have seen an emerging need for predictive analytics across clinical, operational, and financial domains. One pitfall we’ve seen with predictive analytics is that while many people with access to free tools can develop predictive models, many organizations fail to provide a sufficient infrastructure in which the models are deployed in a consistent, reliable way and truly embedded into the analytics environment. We will survey techniques that are used to get better predictions at scale. This webinar won’t be an intense mathematical treatment of the latest predictive algorithms, but will rather be a guide for organizations that want to embed predictive analytics into their technical and operational workflows.
Topics will include:
Reducing the time it takes to develop a model
Automating model training and retraining
Feature engineering
Deploying the model in the analytics environment
Deploying the model in the clinical environment
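As a rough illustration of two of the topics above, feature engineering and automated model retraining, here is a hedged Python sketch (not from the webinar itself): bundling the feature-engineering steps and the model into a single scikit-learn Pipeline makes retraining on fresh data a one-line refit, which is the kind of consistent, repeatable deployment infrastructure the webinar argues for. The data and names are invented.

```python
# Minimal sketch of a repeatable training pipeline: feature engineering
# and the model are bundled together, so retraining on new data reruns
# the exact same code path. The stack (scikit-learn) and the synthetic
# data are illustrative assumptions, not the webinar's prescribed tools.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def make_pipeline():
    # Feature engineering (here, scaling) lives inside the pipeline,
    # so it is re-fit consistently every time the model is retrained.
    return Pipeline([
        ("scale", StandardScaler()),
        ("clf", LogisticRegression()),
    ])

rng = np.random.default_rng(1)

def fresh_batch(n=400):
    """Simulate a new batch of labeled data from the source system."""
    X = rng.normal(size=(n, 3))
    y = (X[:, 0] - X[:, 2] > 0).astype(int)
    return X, y

# Initial training
model = make_pipeline()
X, y = fresh_batch()
model.fit(X, y)

# Automated retraining: same code path, new data; a scheduler could
# run this on a cadence instead of a one-off manual process
X_new, y_new = fresh_batch()
model.fit(X_new, y_new)
acc = accuracy_score(y_new, model.predict(X_new))
print(f"Training accuracy after retrain: {acc:.2f}")
```

Because preprocessing and model travel together in one object, the deployed artifact cannot drift out of sync with its feature engineering, which is one of the failure modes the webinar warns about.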
Healthcare Business Intelligence for Power UsersPerficient, Inc.
The Healthcare industry is accustomed to volumes of clinical and administrative data. Business intelligence helps convert these large amounts of data into actionable insights to reduce costs, streamline processes, and improve healthcare delivery. Our first webinar, “An Introduction to Business Intelligence for Healthcare,” introduces business intelligence in healthcare and common concepts.
In the second webinar of this two-part series, Health BI Practice Manager Mike Jenkins addresses:
- The BI Maturity Level
- Examples of Levels 3 and 4
- Attaining Level 5
Hadoop and Data Virtualization - A Case Study by VHA (Denodo)
Access to full webinar: http://goo.gl/dQjxRe
This webinar by Hortonworks, VHA, and Denodo provides information about the functionalities and benefits of Hadoop in modern data architectures; how Hadoop, together with data virtualization, simplifies data management and enables faster data discovery; and what data virtualization can offer in big data projects. VHA explains how they deployed data virtualization and Hadoop together and presents their lessons learned and best practices for data lake and data virtualization deployments.
Big Data Analytics for Healthcare Decision Support - Operational and Clinical (Adrish Sannyasi)
Splunk’s data analytics platform can be used to solve many high-impact business problems in healthcare delivery systems: reducing cost, improving patient outcomes and safety, and enhancing the care coordination experience. Analyzing observed behavior from healthcare event data and metadata makes it possible to discover patterns, monitor compliance, and optimize workflows. Furthermore, roughly 80% of healthcare data is unstructured (clinical free text and documentation) or semi-structured, and many new data sources, such as telehealth, mobile health, sensors, and devices, are being integrated into healthcare systems, particularly for chronic disease management. This calls for analytics software that can harvest, interpret, enrich, normalize, and model diverse structured and unstructured data, along with analytics approaches that embrace the “data turmoil” by relying less on standardized data items and more on the capability to process data in any format.
New Ways for Predictive Analytics and Machine Learning to Advance Population ... (Edifecs Inc)
The team at the University of Washington’s Center for Data Science and Edifecs have collaboratively built predictive tools that use machine learning to identify patterns in morbidity progression and health status.
Learning Objectives
Hear how other industries are using the latest in predictive analytics and how this experience can be applied to healthcare
Discuss why healthcare needs machine learning and how it compares to traditional analytics
Explore the Data Tsunami and what the future holds for our industry
Strata Rx 2013 - Data Driven Drugs: Predictive Models to Improve Product Qual... (EMC)
Like much of healthcare and the life sciences, pharmaceutical companies are undergoing a data-driven transformation. The industry-wide need to reduce the cost of developing, manufacturing, and distributing drugs while bringing new products to market is not a novel concept or challenge. However, the ability to process and analyze large amounts of data using cutting-edge massively parallel processing (MPP) technologies means innovation can be found not only in the traditional hypothesis-driven approaches we have come to expect. New technologies and approaches make it possible to incorporate all available data, structured and unstructured. At Pivotal, the goal of our data science practice is to demonstrate the capabilities of the technologies we offer. We focus on building predictive models that combine the vast and variable data available to elicit action or generate insights. Our talk focuses on a use case in pharmaceutical manufacturing, in which we created a predictive model to produce more consistent, high-quality products and drive decisions to abandon lots with expected poor outcomes. In addition, we demonstrate how we used machine learning to cleanse data and improve efficiencies in data collection by identifying low information-content measurements and incorporating under-utilized data sources in manufacturing. Beyond this use case, we discuss our vision of using machine learning in all areas of the industry, from research through distribution, to drive change.
My talk at the technical meeting "Global Burden of Diseases and Scientific Computation in Health," 25-26 September 2015, FIOCRUZ, Rio de Janeiro, Brazil.
The MD Anderson / IBM Watson Announcement: What does it mean for machine lear... (Health Catalyst)
It’s been over six years since IBM’s Watson amazed all of us on Jeopardy, but it has yet to deliver similar breakthroughs in healthcare. The headline in last week’s Forbes article read, “MD Anderson Benches IBM Watson In Setback For Artificial Intelligence In Medicine.” Is it really a setback for the entire industry or not? Health Catalyst’s EVP for Product Development, Dale Sanders, believes that the challenges are unique to IBM’s machine learning strategy in healthcare. If they adjust that strategy and better manage expectations about what’s possible for machine learning in medicine, the future will be brighter for Watson, their clients, and AI in healthcare in general. Watson’s success is good for all of us, but its failure is bad for all of us, too.
Join Dale as he discusses:
The good news: Machine learning technology is accelerating at a rate beyond Moore’s Law. Dale believes that machine learning algorithms and models are doubling in capability every six months.
The bad news: The healthcare data ecosystem is not nearly as rich as many would believe, and certainly not as rich as that used to train Watson for Jeopardy. Without high-volume, high-quality data, Watson’s potential and the constant advances in machine learning algorithms will hit a glass ceiling in healthcare.
The best news: By adjusting strategy and expectations, there are still plenty of opportunities to do great things with machine learning by using the current data content in healthcare, while we build out the volume and breadth of data we need to truly understand the patient at the center of the healthcare picture… and you don’t need an army of PhD data scientists to do it.
Access the webinar: http://goo.gl/p08pTz
These slides were presented in a webinar by Denodo in collaboration with BioStorage Technologies and Indiana Clinical and Translational Sciences Institute and Regenstrief Institute.
BioStorage Technologies, Inc., Indiana Clinical and Translational Sciences Institute, and Regenstrief Institute (CTSI) have joined Denodo to talk about the important role of technological advancements, such as data virtualization, in advancing biospecimen research.
By watching this webinar, you can gain insight into best practices around the integration of biospecimen and research data as well as technology solutions that provide consolidated views and rapid conversions of this data into valuable business insights. You will also learn how data virtualization can assist with the integration of data residing in heterogeneous repositories and can securely deliver aggregated data in real-time.
This white paper describes how BlueData enables virtualization of Hadoop and Spark workloads running on Intel architecture.
Even as virtualization has spread throughout the data center, Apache Hadoop continues to be deployed almost exclusively on bare-metal physical servers. Processing overhead and I/O latency typically associated with virtualization have prevented big data architects from virtualizing Hadoop implementations.
As a result, most Hadoop initiatives have been limited in terms of agility, with infrastructure changes such as provisioning a new server for Hadoop often taking weeks or even months. This infrastructure complexity continues to slow down adoption in enterprise deployments. Apache Spark is a relatively new big data technology, but interest is growing rapidly; many of these same deployment challenges apply to on-premises Spark implementations.
The BlueData EPIC software platform addresses these limitations, enabling data center operators to accelerate Hadoop and Spark implementations on Intel architecture-based servers.
For more information, visit intel.com/bigdata and bluedata.com
Translational Biomedical Informatics 2010: Infrastructure and Scaling – Brian Athey, PhD; Professor of Biomedical Informatics and Director for Academic Informatics, University of Michigan Medical School; Chair Designate for Computational Medicine and Bioinformatics, University of Michigan; Associate Director, Michigan Institute for Clinical Health Research; Principal Investigator, National Center for Integrative Biomedical Informatics
HMIS is an integrated hospital management system that addresses all the requirements of hospitals. It is a powerful, flexible, and easy-to-use application designed and developed to deliver tangible benefits to hospitals and clinics while reducing paper overload.
How to Harness the Power of Google Analytics, Email Marketing & Vanity to Inc... (CTSI at UCSF)
40-minute presentation by Nooshin Latour (@nooshin) & Anirvan Chatterjee (@anirvan) at the UC Computing Services Conference (UCCSC 2014). Covers the evolution of the UCSF Profiles research networking system, early promotion at launch, growth/SEO, and engagement through targeted, personalized data emails. Full description here: https://uccsc.ucsf.edu/node/101
Data Reproducibility in Preclinical Discovery, Is It a Real Problem? 09/17/15 (CTSI at UCSF)
On September 17th, Catalyst brought together a panel of academic and industry thought leaders for a lively discussion of data reproducibility in academic research. Moderated by Cathy Tralau-Stewart, head of the Therapeutics track of the Catalyst Awards, the panel explored causes of and potential solutions to a problem that has been receiving national attention in both scientific and popular media.
Panelists included Keith Yamamoto, Vice Chancellor for Research at UCSF; Larry Tabak, Principal Deputy Director, NIH; John Ioannidis, Professor of Health Research Policy at Stanford School of Medicine; Elizabeth Iorns, Co-Founder, Science Exchange; Parker B. Antin, Board of Directors President, FASEB; and Amanda Halford, MBA, VP of Research, Sigma-Aldrich.
http://ctsi.ucsf.edu/news/about-ctsi/data-reproducibility-preclinical-research-and-discovery
Building Your Professional Network with LinkedIn (CTSI at UCSF)
Presentation by Erik Wieland, Applications Manager at UCSF IT, as part of the "UCSF Profiles & LinkedIn Bootcamp for Researchers, Faculty, Staff" on 2/9/2015.
VIVO 2014: Google Analytics, Email Marketing & Vanity to Increase User Engage... (CTSI at UCSF)
Poster presented at the VIVO 2014 conference: Used UCSF Profiles web analytics data to deliver a customized “UCSF Profiles Annual Report” to individual researchers at UCSF, listing their total annual unique pageviews broken down by major relevant categories (e.g., pageviews from the UCSF campus, NIH, pharmaceutical companies, foundations, and other universities). Result: increased user engagement and more edited Profiles pages. UCSF-CTSI team: Nooshin Latour, MA, Sr. Communications & Marketing Manager, and Anirvan Chatterjee, Director of Data Strategy.
Enriching the Value of Clinical Data with Oracle Data Management Workbench (Perficient, Inc.)
To effectively conduct clinical research and development you need to collect, manage, and visualize clinical and healthcare data – including mHealth data – using a centralized and secure data repository that can be considered the single source of truth.
Oracle Data Management Workbench (DMW) is a proven solution that is used by a number of global pharmaceutical and medical device organizations to aggregate and manage clinical data in support of their R&D initiatives.
In this SlideShare, we demonstrate how Oracle DMW can quickly enrich and expand the value of clinical data, as well as support enhanced analytics and decision-making.
Galen Healthcare Solutions: Healthcare Information Technology 2017 Year in Rev... (Justin Campbell)
In the ever-changing and fast-paced world of healthcare IT, there can be a lot to keep up with. As 2017 wraps up and we look toward 2018, we take the opportunity to review the major happenings in the industry this past year and explore key focal areas for the next. We’ve compiled insights gleaned from our market research, conducted by attending industry conferences, gathering healthcare executive perspectives, and observing what is occurring in practice, to distill the key areas of focus for 2018. We’ll examine topics critical to the success of Healthcare Delivery Organizations (HDOs), including:
Application Portfolio Rationalization – Data Migration & Archival
Patient Engagement through Telehealth & Telemedicine
Clinician Engagement, Satisfaction, and Data-Driven Clinical Optimization
Clinical Decision Support – Syndromic Surveillance, Sepsis Prevention
Quality Payment Programs – Medicare Advantage, HCC & PCMH
Interoperability – HIE, APIs, Patient Identity & Matching
This webinar will provide a blueprint to assist healthcare information technology stakeholders in understanding key issues affecting the healthcare industry. Attendees will gain insightful resources and analysis of the healthcare information technology landscape in 2018. Register now to learn how these key trends could affect your organization and what you can do to prepare.
Using JReview to Analyze Clinical and Pharmacovigilance Data in Disparate Systems (Perficient, Inc.)
Sponsors and CROs naturally rely on various clinical and safety systems from a multitude of software vendors. However, continuously accessing disparate sources for the reporting, analysis, and monitoring of data can be a treacherous undertaking if you don't have a solution that connects to them right out of the box.
That's where JReview comes in. For almost two decades, life sciences companies, research organizations, and government agencies have relied on JReview for the comprehensive analysis and monitoring of clinical and pharmacovigilance data.
The analytics solution works with many Oracle Health Sciences applications, including Argus Safety, Oracle AERS, Oracle Clinical (OC), Remote Data Capture (RDC), Thesaurus Management System (TMS), InForm, Life Sciences Data Hub (LSH), and Clinical Development Center (CDC). JReview also works with non-Oracle solutions, such as ARISg, Medidata Rave, and SAS Drug Development.
In this slideshare, you will learn:
The features and benefits of JReview, including the new functionality in v10.0 (e.g., risk-based monitoring analytics that report on the clinical data itself)
Benefits of using JReview for:
Reporting and query of your clinical data
Supplying information to internal and/or external users and sponsors
Providing a secure way for your internal users and/or sponsor users to access the clinical data
Examples of how customers use JReview with OC/RDC
The implementation process and options
Health Care: Cost Reductions through Data Insights (The Data Analysis Group, James Karis)
An overview of the cost reduction opportunities for a Health Care provider. These opportunities can be identified, quantified and optimised through data-driven insights. The slide pack also provides a strategic overview of how one would set up such a project within a large organisation, whilst mitigating patient-care concerns.
Microsoft: A Waking Giant in Healthcare Analytics and Big Data (Dale Sanders)
Ten years ago, critics didn’t believe that Microsoft could scale in the second generation of relational data warehouses, but they did. More recently, many of these same pundits have criticized Microsoft for missing the technology wave du jour in cloud offerings, mobile technology, and big data. But, once again, Microsoft has been quietly reengineering its culture and products, and as a result, they now offer the best value and most visionary platform for cloud services, big data, and analytics in healthcare.
Building an Intelligent Biobank to Power Research Decision-Making (Denodo)
This presentation belongs to the workshop: "Building an Intelligent Biobank to Power Research Decision-Making", from ISBER 2015 Annual Meeting by Lori A. Ball (Chief Operating Officer, President of Integrated Client Solutions at BioStorage Technologies, Inc), Brian Brunner (Senior Manager, Clinical Practice at LabAnswer) and Suresh Chandrasekaran (Senior Vice President at Denodo).
The workshop covers three topic areas:
- Research sample intelligence: the growing need for Global Data Integration (Biobank Sample and Data Stakeholders).
- Building a research data integration plan and cloud sourcing strategy (data integration).
- How data virtualization works and the value it delivers (a data virtualization introduction, solution portfolio and current customers in Life Sciences industry).
The biomedical R&D environment is increasingly dependent on data meta-analysis and bioinformatics to support research advancements. The integration of biorepository sample inventory data with biomarker and clinical research information has become a priority to R&D organizations. Therefore, a flexible IT system for managing sample collections, integrating sample data with clinical data and providing a data virtualization platform will enable the advancement of research studies. This workshop provides an overview of how sample data integration, virtualization and analytics can lead to more streamlined and unified sample intelligence to support global biobanking for future research.
Curlew Research Brussels 2014: Electronic Data & Knowledge Management (Nick Lynch)
Life Science externalisation and collaboration overview and the challenges that Life Science companies face in delivering successful data sharing with their partners in either Open Innovation or pre-competitive workflows
Unlocking New Insights with Information Discovery (Alithya)
Edgewater Ranzal invited to present Unlocking New Insights with Information Discovery at the Oracle Hyperion User Group Minnesota (HUGmn) Tech Day 2015. Presented an introduction to Oracle Endeca Information Discovery (OEID), a powerful database tool for structured and unstructured data.
In this webinar, Dale Sanders will provide a pragmatic, step-by-step, and measurable roadmap for the adoption of analytics in healthcare: a roadmap that organizations can use to plot their strategy and evaluate vendors, and that vendors can use to develop their products. Attendees will have a chance to learn about:
1) The details of his eight-level model
2) A brief introduction to the HIMSS/IIA DELTA Model
3) The importance of permanent organizational teams to sustain improvements from analytic investments
4) The process of curating and maturing data governance
5) The coordination of a data acquisition strategy with payment and reimbursement strategies
Challenges in Clinical Research: Aridhia Disrupts Technology Approach to Research (VMware Tanzu)
Join Jeff Kelly, Pivotal’s Big Data Strategist and Chris Roche, Aridhia’s CEO, to learn how Big Data and data science are being applied to clinical research. Learn…
• Why research-oriented healthcare delivery organizations and academic medical centers need an ACRIS
• How improving collaboration and productivity accelerates the discovery of insights and increases competitiveness
• Why robust data security is critical to modernizing engagement between academia, industry and healthcare
• How to reduce research costs while improving commercialization opportunities
• Why enabling transparent analysis and reproducibility of research are key to scientific progress
• Best practices to get started on your digital transformation and Big Data journey
Microsoft: A Waking Giant in Healthcare Analytics and Big Data (Health Catalyst)
In 2005, Northwestern Memorial Healthcare embarked upon a strategic Enterprise Data Warehousing (EDW) initiative with the Microsoft technology platform as the foundation. Dale Sanders was CIO at Northwestern and led the development of Northwestern’s Microsoft-based EDW. At that time, Microsoft as an EDW platform was not en vogue and there were many who doubted the success of the Northwestern project. While other organizations were spending millions of dollars and years developing EDW’s and analytics on other platforms, Northwestern achieved great and rapid value at a fraction of the cost of the more typical technology platforms. Now, there are more healthcare data warehouses built around Microsoft products than any other vendor. The risky bet on Microsoft in 2005 paid off.
In this context, Dale will talk about:
His up-and-down journey with Microsoft as an Air Force and healthcare CIO, and why he is now more bullish on Microsoft than ever before
A quick review of the Healthcare Analytics Adoption Model and Closed Loop Analytics in healthcare, and how Microsoft products relate to both
The rise of highly specialized, cloud-based analytic services and their value to healthcare organizations’ analytics strategies
Microsoft’s transformation from a closed-system, desktop PC company to an open-system consumer and business infrastructure company
The current transition period of enterprise data warehouses between the decline of relational databases and the rise of non-relational databases, and the new Microsoft products, notably Azure and the Analytic Platform System (APS), that bridge the transition of skills and technology while still integrating with core products like Office, Active Directory, and System Center
Microsoft’s strategy with its PowerX product line, and geospatial analysis and machine learning visualization tools
Big Data at Geisinger Health System: Big Wins in a Short Time (DataWorks Summit)
Geisinger Health System is well known in the healthcare community as a pioneer in data and analytics. We have had an Electronic Health Record (EHR) since 1996, and an Electronic Data Warehouse (EDW) since 2008. Much of daily and weekly operational reporting, as well as an abundance of ad hoc analytics, come from the EDW.
Approximately 18 months ago, the Data Management team implemented Hadoop in the Hortonworks Data Platform (HDP), and successes in implementation and development have proven to the organization that we should abandon the traditional EDW in favor of the Big Data (HDP) platform.
In less than 18 months, we stood up the platform, created a data ingestion pipeline, duplicated all source feeds from the EDW into HDP, and had several analytics developed with HDP and Tableau. Furthermore, we have exploited the new capabilities of the platform, where we use Natural Language Processing (NLP) to interrogate valuable (but previously hidden) clinical notes. The new platform has data that is modeled and governed, setting the stage to push Geisinger Health System from a pioneer to a leader in Big Data and Analytics.
This session will focus on Hortonworks Data Platform, covering data architecture, security, data process flow, and development. It is geared toward Data Architects, Data Scientists, and Operations/I.T. audiences.
Similar to UCSF Informatics Day 2014 - David Dobbs, "Enterprise Data Warehouse"
AMIA Joint Summits 2017: Building Research Data Mart from UCSF OMOP Database (CTSI at UCSF)
Poster presented at AMIA 2017 Joint Summits on Translational and Clinical Informatics.
In this research data delivery project, we explored a less traveled path of building a clinical “data mart” for a registry study on kidney transplant patients based on our institutional OMOP database.
UCSF Informatics Day 2014 - Keith R. Yamamoto, "Precision Medicine" (CTSI at UCSF)
Keith R. Yamamoto, PhD — Opening Remarks – Precision Medicine
Vice Chancellor for Research
Executive Vice Dean of the School of Medicine
Professor of Cellular and Molecular Pharmacology
UCSF
2. Introduction
Interim Executive Director, Networked Data Warehousing
• Expertise
– Leading large-scale data integration & analytic programs
– Understanding domain area needs
– Engineering practical technology solutions using health technology standards
• Key Accomplishments
– Nationwide syndromic surveillance system with 500+ hospitals
– Developing community-based population health solutions
• Professional Qualifications
– Engineering background with a Bachelor of Arts in Business Administration in Information Systems
– Certified Six Sigma and Project Management Professional
– Member, HIMSS Clinical and Business Intelligence Committee
– Co-Chair, HIMSS Data and Technology Task Force
3. Topic Flow
• Enterprise DW and Analytics Team
• UCSF's EDW Strategy
• Epic Cogito DW
• Questions
4. Enterprise DW and Analytics Team Objectives
• Create a team with a passion for understanding and managing enterprise data
• Partner with domain areas to understand their data and analytic needs
• Implement highly professional data management practices
– Well-managed data architecture
– Comprehensive and high-quality metadata management
– Strong data security and controls
• Provide domain areas:
– Easy and secure access to enterprise data
– Expertise in developing analytic work products
– Expertise on BI and analytic technologies
5. Increase Analytics Maturity (value increases from level 1 to level 6)
• Level 6, Optimize: What is my best alternative? (precision medicine)
• Level 5, Forecast & Predict: What happens if trends continue? (population management and value-based reimbursements)
• Level 4, Decision Support: What should I do? (applying evidence-based guidelines at the POC)
• Level 3, Statistical Analysis: Why is this happening? (determining evidence-based guidelines)
• Level 2, Metrics and Dashboards: Where is the problem? (quality and safety KPIs and benchmarks)
• Level 1, Reporting: What happened? (standard retrospective reports)
6. Domains
• Research: IDR / UCReX, RDB, OnCore / REDCap, clinical data marts
• Patient Care: APeX, Clarity, Cogito EDW, Axiom Dental, ACO data, UCALL / OmniView
• Finance/Admin: DART, Campus Fin DW
• Education: Registration System, Course Evaluation, Grades Mgmt
• HR/Payroll: PeopleSoft HRMS, PeopleSoft Payroll, OLPPS
Cross-domain capabilities: Integration, Data Governance, Metadata Management, Master Data Management
8. Epic Cogito (ko-GEE-toe) DW
• An analytical database combining Epic and non-Epic data
– Pre-defined healthcare data model
– Seamless flow of Epic data from the APeX Clarity database
– Extensible to include non-APeX data
• Common data model across Epic customers
– Facilitates collaboration with other Epic customers (e.g., other UCs, Children's of Oakland)
9. Uses for Cogito EDW
• Research
– Sophisticated cohort selection (RDB)
– Quality and clinical research
• Population Health
– Combining APeX clinical data with external clinical, claims, and patient satisfaction data
• Performance Improvement
– Monitoring clinical and operational metrics for APeX and non-APeX data
• Streamlined reporting for APeX data
– Highly simplified version of Clarity
10. Information Flow
• Chronicles (Caché), 95,000+ data elements → Reporting Workbench: real-time operational reporting
• Clarity (SQL Server), 12,000+ tables and 125,000+ columns → Clarity Reporting: enterprise reporting
• Cogito DW (SQL Server), 19 fact tables and 76 dimensions → Data Warehouse Reporting: BI and analytical reporting
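The tiering above, granular operational data flowing into a wide relational reporting store and then into a compact star schema of facts and dimensions, can be sketched in miniature with SQLite. All table and column names here are illustrative stand-ins, not Epic's actual Clarity or Cogito schema.

```python
import sqlite3

# A Clarity-like wide, granular table summarized into a small
# Cogito-like star schema: one dimension plus one fact table.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("""CREATE TABLE clarity_encounters (
    enc_id INTEGER PRIMARY KEY, pat_id TEXT, dept TEXT,
    enc_date TEXT, charge_amount REAL)""")
cur.executemany(
    "INSERT INTO clarity_encounters VALUES (?, ?, ?, ?, ?)",
    [(1, "P1", "Cardiology", "2014-05-01", 250.0),
     (2, "P2", "Cardiology", "2014-05-01", 410.0),
     (3, "P1", "Oncology",   "2014-05-02", 980.0)])

# Dimension keyed by department; fact table holds daily aggregates.
cur.execute("CREATE TABLE dim_department (dept_key INTEGER PRIMARY KEY, name TEXT)")
cur.execute("""CREATE TABLE fact_encounter (
    dept_key INTEGER, enc_date TEXT, n_encounters INTEGER, charges REAL)""")

cur.execute("INSERT INTO dim_department (name) SELECT DISTINCT dept FROM clarity_encounters")
cur.execute("""INSERT INTO fact_encounter
    SELECT d.dept_key, e.enc_date, COUNT(*), SUM(e.charge_amount)
    FROM clarity_encounters e JOIN dim_department d ON d.name = e.dept
    GROUP BY d.dept_key, e.enc_date""")

# A BI-style analytical query runs against the star schema, not the source.
rows = cur.execute("""SELECT d.name, SUM(f.charges)
    FROM fact_encounter f JOIN dim_department d USING (dept_key)
    GROUP BY d.name ORDER BY d.name""").fetchall()
print(rows)  # [('Cardiology', 660.0), ('Oncology', 980.0)]
```

The point of the extra hop is that analytical queries join a handful of pre-aggregated fact and dimension tables instead of thousands of operational ones.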
14. Data Not Currently in Cogito*
• Ambulatory
– Provider metrics
– Order sets
• Anesthesia
• ED: chief complaint
• Inpatient
– Clinical notes
– Medication administration
• Operating room administration
• Obstetrics and Labor & Delivery
* Representative list of key data items
15. Cogito DW Dimensions*
Admission Profile, Appointment, Billing Area, Billing Account, Billing Service, Billing Status, Billing Procedure, Cost Center, Coverage, Date, Time of Day, Department, Diagnosis, Diagnosis Hierarchy, Discharge Profile, DRG, Duration, Employee, Encounter, Encounter Profile, Guarantor, Immunization, Lab Component, Lab Result, Medication, Patient, Patient Attributes, Procedure, Provider, Provider Attributes, Reaction Profile, Visit Attributes, Visit Profile
* Partial, representative list
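Dimensions like Patient, Diagnosis, and Lab Result are what make cohort-style research questions straightforward to express. A sketch of such a cohort query, using hypothetical table and column names rather than Cogito's real model:

```python
import sqlite3

# Toy star schema: a lab-result fact table joined to Patient- and
# Diagnosis-style dimensions to select a research cohort.
# All names here are hypothetical, not Cogito's actual schema.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_patient (pat_key INTEGER PRIMARY KEY, mrn TEXT, birth_year INTEGER);
CREATE TABLE dim_diagnosis (dx_key INTEGER PRIMARY KEY, icd9 TEXT, name TEXT);
CREATE TABLE fact_lab_result (pat_key INTEGER, dx_key INTEGER, component TEXT, value REAL);
INSERT INTO dim_patient VALUES (1, 'MRN001', 1950), (2, 'MRN002', 1980);
INSERT INTO dim_diagnosis VALUES (10, '585.9', 'Chronic kidney disease');
INSERT INTO fact_lab_result VALUES (1, 10, 'creatinine', 2.4), (2, 10, 'creatinine', 0.9);
""")

# Cohort: CKD patients with at least one elevated creatinine result.
cohort = cur.execute("""
    SELECT DISTINCT p.mrn
    FROM fact_lab_result f
    JOIN dim_patient p USING (pat_key)
    JOIN dim_diagnosis d USING (dx_key)
    WHERE d.name = 'Chronic kidney disease'
      AND f.component = 'creatinine' AND f.value > 1.3
    ORDER BY p.mrn
""").fetchall()
print(cohort)  # [('MRN001',)]
```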
17. Cogito Timeline
• Version 8
– Additional Epic data
• ED
• Surgery
• Coded procedures
– Non-Epic data
• CMS Medicare Shared Savings Program claims
• Press Ganey
– Additional universes: Received Claims, Patient Satisfaction
18. Summary
• Creating an Enterprise DW and Analytics Team
– Coordinate UCSF data architecture and metadata definitions, and serve as a resource for available data sources
• Cogito Data Warehouse is being implemented
– Research Data Browser is the first use case
– Understandable set of data structures
– Extensible data model
– Facilitates sharing of data with other Epic sites
– Epic continues to refine and enhance
• More information
– Doug Berman, Academic Research Systems
5 Major Domains:
Research
Patient Care
Finance/Admin
Education
HR/Payroll
Domain foci will include UCSF internally generated data as well as external data such as data from research collaborators, ACO partners or external benchmarks.
The Enterprise Data Warehouse and Analytics team will have staff who work with domain areas to understand and document their source systems, data staging areas, data warehouse, data marts and key analytics. Doug Berman and the Academic Research Systems team are responsible for interacting with UCSF's research community and performing this function.
The Enterprise Data Warehouse and Analytics team will also be responsible for documenting data flow between and among both internal systems and with external systems.
To help tie together and manage all these data sources and repositories the Enterprise DW and Analytics team will be responsible for building a fabric of processes and technologies.
It starts with implementing a Data Governance process. This process …..
The data governance process will generate a tremendous amount of data about UCSF's data, or metadata. The Enterprise DW and Analytics team will put in place a metadata management system to store and manage these data. This will allow UCSF researchers and staff to get up-to-date information on key data resources and benefit from the understanding of those data created by others before them.
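As a rough illustration of what a metadata management system stores, think of one catalog record per data element: its definition, its steward, and the lineage back to its source systems. This is a generic sketch with invented names, not UCSF's actual tooling.

```python
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    # One catalog entry: where a data element lives, what it means,
    # who stewards it, and which upstream systems feed it.
    name: str
    source_system: str
    steward: str
    definition: str
    upstream: list = field(default_factory=list)

catalog = {}

def register(asset: DataAsset) -> None:
    catalog[asset.name] = asset

# Hypothetical entries for two warehouse elements.
register(DataAsset("encounter_date", "Clarity", "EDW team",
                   "Date the patient encounter occurred", ["APeX"]))
register(DataAsset("charges", "Cogito DW", "Finance analytics",
                   "Total charges posted to the encounter",
                   ["Clarity", "APeX"]))

# A researcher can look up an element's meaning and lineage before using it.
asset = catalog["charges"]
print(asset.definition)
print(" -> ".join(asset.upstream + [asset.source_system]))
```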
Finally, the Enterprise DW and Analytics team will implement Master Data Management capabilities. This provides the ability to match data across systems and domain areas.
Master Data includes:
Identifiers: patient, provider, payer, facility
Codes: GL account number, lab test, department, result status
Hierarchies: reporting relationship, service line rollup
Mappings: ICD-9 to ICD-10 code, prior GL code to current GL code
As you can imagine, this is a monumental task. The approach that the EDW and Analytics team will take is to incrementally build these capabilities while meeting pressing research, clinical, financial, educational and human resource needs.