A mixed methods task analysis of the implementation and validation of EHR-based clinical quality measures
Nicole G. Weiskopf, PhD1; Faiza J. Khan, MBBS, MBI1,2; Deborah Woodcock, MBA1; David A. Dorr, MD, MS1; Jayne Mitchell, ANP-BC, CHFN2;
James O. Mudd, MD2; Aaron M. Cohen, MD, MS1
1. Department of Medical Informatics & Clinical Epidemiology, OHSU, Portland, OR; 2. Knight Cardiovascular Institute, OHSU, Portland, OR
Clinical quality measures (CQMs) are an important tool for the assessment and improvement of
healthcare quality. Federal requirements initially set forth in the American Recovery and Reinvestment Act, and advanced in subsequent regulatory stages, codified electronic health record (EHR)-based CQM reporting and made automated CQM implementation a priority within the clinical and informatics communities. Nevertheless, the processes surrounding CQM
implementation and validation remain complex, time-consuming, and largely undefined. We
collected issue-tracking data during the course of an agile and rigorous collaborative project to build
an analytics platform for the Knight Cardiovascular Institute (KCVI) at OHSU, with nine heart failure
CQMs defined by the American College of Cardiology (ACC) as an exemplar. Using a mixed methods approach, we provide an overview of our CQM implementation and validation process, identify major
roadblocks and bottlenecks, and present recommendations for other professionals working in the
area of healthcare quality assessment and improvement.
Methods
Setting: The Informatics Discovery Lab, a program within the Department of Medical Informatics and
Clinical Epidemiology at OHSU, has partnered with KCVI to develop and evaluate an analytics
platform to track and improve quality of care. The platform was built upon an existing CQM engine,
the Integrated Care Coordination Information System, a component of Care Management Plus, a care
coordination, quality improvement, and information technology model.2, 3
Quality Measures: We implemented nine heart failure CQMs, summarized in Table 1, based on the
specifications of the ACC.1 The beta-blocker and ACE/ARB measures are specified by the ACC as
paired measures, and were developed and analyzed accordingly.
Data Collection and Management: We developed an issue-tracking database that allowed
collaborators to open, modify, assign, update, and close tasks, decisions, requests, and issues. Each
task could be categorized as belonging to a specific CQM or to all CQMs. We extracted task name, CQM name, task opened date, and task closed date from the database.
Analysis: An iterative, open coding approach was used to create a set of exhaustive, mutually
exclusive categories to capture the types of work involved in CQM implementation. The final
category for each task was determined through consensus. For each task we calculated the number
of days from date opened to final date modified; we referred to this metric as task-days, in order to
emphasize the possibility of concurrent rather than purely sequential work.
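The task-days metric can be sketched in a few lines of Python. This is a minimal illustration assuming a simple list-of-dicts export from the issue tracker; the field names and dates are hypothetical, not the actual schema.

```python
from datetime import date
from collections import defaultdict

def task_days(opened: date, last_modified: date) -> int:
    """Days from a task's opened date to its final modification date,
    per the task-days definition above."""
    return (last_modified - opened).days

# Hypothetical extract from the issue-tracking database.
tasks = [
    {"category": "Data exploration",
     "opened": date(2015, 1, 5), "modified": date(2015, 2, 3)},
    {"category": "Data exploration",
     "opened": date(2015, 1, 12), "modified": date(2015, 1, 30)},
    {"category": "Validation",
     "opened": date(2015, 3, 1), "modified": date(2015, 4, 10)},
]

# Summing per category: because tasks run concurrently, total task-days
# in a category can exceed the calendar days actually elapsed.
per_category = defaultdict(int)
for t in tasks:
    per_category[t["category"]] += task_days(t["opened"], t["modified"])
```

Summing per task rather than per calendar interval is what lets the metric reflect concurrent work, as noted above.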
Measure | Description
LVEF* (outpatient) | Annual LVEF assessment for HF patients (outpatient)
LVEF (inpatient) | Annual LVEF assessment for HF patients (inpatient)
Symptom Assessment | Quantitative results of evaluation of level of activity and clinical symptoms (outpatient)
Symptom Management | Documentation that symptoms have improved, stayed the same, or worsened but have a documented care plan (outpatient)
Patient Education | Provision of at least three items of self-care education to patient (outpatient)
Beta-blocker Therapy | Prescription of evidence-based beta-blocker if any LVEF < 40% (outpatient and inpatient)
ACE/ARB** Therapy | Prescription of evidence-based ACE/ARB if any LVEF < 40% (outpatient and inpatient)
ICD+ Counseling | Documentation that patient was counseled regarding ICD implantation if LVEF ≤ 35% despite optimal therapy (outpatient)
Post-Discharge Appt. | Follow-up appointment scheduled at the time of discharge (inpatient)
Category | Definition | Task count | Task-days (%) | Mean task-days (SD)
Interpretation | Interpretation and operationalization of CQM concepts, population, and related issues | 7 | 301 (4.8%) | 43.0 (38.2)
Data exploration | Identification and selection of appropriate data fields | 13 | 842 (13.5%) | 64.8 (60.9)
System development & debugging | Development, maintenance, correction, updating of back-end data capture and pre-processing system | 45 | 1868 (29.9%) | 41.5 (45.6)
CQM development & debugging | Development, maintenance, correction, updating of CQM queries and CQM-specific programming | 19 | 1463 (23.4%) | 77.0 (81.6)
Validation | Determining quality of automated CQMs and true performance based on manual chart review | 14 | 594 (9.5%) | 42.4 (27.3)
Synthesis & analysis | Quantitative, qualitative, and graphical analysis of CQMs | 12 | 1048 (16.8%) | 87.3 (78.5)
Informing & updating stakeholders | Delivery of findings and recommendations to stakeholders | 6 | 124 (2.0%) | 20.7 (22.1)
Total | | 116 | 6240 (100%) | 54.3 (59.3)
Table 1. Summary of ACC heart failure CQMs.1 *LVEF = Left Ventricular Ejection Fraction; **ACE = Angiotensin-converting-enzyme inhibitor; ARB = Angiotensin Receptor Blocker; +ICD = Implantable Cardioverter Defibrillator
Table 2. Categories of work necessary for implementing and validating automated EHR-based CQMs. The
categories are listed from top to bottom in roughly expected order of occurrence.
Discussion
Derived work categories and process: Our results aligned closely with CQM implementation models proposed by the American Hospital Association4 and by the Office of the National Coordinator for Health Information Technology. The most significant difference between those models and ours is our focus on interpretation of the measure, versus their focus on workflow and documentation changes at the point of care, which we see as a future step informed by our work.
Nonlinearity of implementation process: We had anticipated that CQM implementation would be
largely sequential, with iteration around system development and debugging and measure
development and debugging. The inpatient LVEF measure roughly follows this linear process. Other
measures, including ICD counseling and follow-up scheduling, followed a more iterative path, with
extended work related to system development and debugging and data exploration. At any stage of
the process it was possible to discover an issue that required returning to any of the preceding
stages. Measure development and debugging, validation, or synthesis and analysis, for example,
might uncover system errors, like missing data capture or incorrect population selection. System
development and debugging, in turn, might reveal that a data element was being used differently
than originally anticipated, which would require further data exploration to find a better element, or
even interpretation, to select a similar concept that might be documented more consistently.
Impact of data quality: Poor EHR data quality and limited data accessibility led to difficulty mapping CQM concepts to EHR elements and necessitated complex measure logic. As an example, the paired beta-blocker and ACE/ARB CQMs required substantial iterative work related to measure development
and debugging and validation due to EHR documentation problems. After initial implementation
based on medication data, we learned that clinicians documented medication adherence and
medication exceptions in two other structured fields, depending upon setting. These fields, however,
were not always used consistently and were sometimes out of date; some patients who were labeled
as having exceptions were in fact on the appropriate medications. To account for this data quality
problem we needed to develop and evaluate significantly more complex logic.
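The shape of that reconciliation logic can be sketched as follows. The drug list, field names, and return values here are illustrative assumptions, not the ACC's actual value sets or OHSU's schema.

```python
# Illustrative set of evidence-based beta-blockers for heart failure;
# the actual measure relies on the ACC specification's value set.
EVIDENCE_BASED_BETA_BLOCKERS = {"carvedilol", "bisoprolol", "metoprolol succinate"}

def beta_blocker_status(active_meds: list[str], exception_documented: bool) -> str:
    """Classify a reduced-LVEF patient for the beta-blocker measure.

    The medication list is checked before the exception field because,
    as described above, exception fields were sometimes stale: patients
    flagged as exceptions were in fact on the appropriate drug.
    """
    meds = {m.lower() for m in active_meds}
    if meds & EVIDENCE_BASED_BETA_BLOCKERS:
        return "met"        # counts toward the numerator
    if exception_documented:
        return "excepted"   # removed from the denominator
    return "not_met"
```

Ordering the checks this way is what keeps a stale exception flag from hiding a patient who is actually on appropriate therapy.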
Performance attribution: Many of the system development and debugging tasks related to the identification of relevant patients and the attribution of those patients to the appropriate providers
for group- and provider-level performance calculation and reporting. During the outpatient and
inpatient LVEF CQM implementation, we had to develop and validate system rules and processes for
the identification and extraction of relevant patients and their data, which were different for the two
populations. We also had to select and implement an appropriate model of CQM performance
attribution.5, 6 Often, patients are assigned to primary care providers or the most recent care
provider. These approaches, however, may not be appropriate in settings where care is delivered by
teams or where providers frequently see referral patients. Instead, we used a multiple attribution
rule, where all providers who saw a patient during the measurement period received “credit.”
Implementing this rule involved substantial system development and validation effort.
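The multiple attribution rule itself can be sketched briefly; the encounter tuples and inclusive date comparison below are assumptions for illustration.

```python
from collections import defaultdict
from datetime import date

def attribute_patients(encounters, period_start: date, period_end: date):
    """Multiple attribution rule: every provider who saw a patient
    during the measurement period receives credit for that patient,
    rather than assigning each patient to a single provider."""
    credit = defaultdict(set)  # provider id -> set of attributed patient ids
    for patient_id, provider_id, visit_date in encounters:
        if period_start <= visit_date <= period_end:
            credit[provider_id].add(patient_id)
    return credit

# Hypothetical encounter data.
encounters = [
    ("pt1", "drA", date(2015, 3, 2)),
    ("pt1", "drB", date(2015, 6, 15)),  # same patient, second provider
    ("pt2", "drA", date(2014, 11, 1)),  # outside the period: no credit
]
credit = attribute_patients(encounters, date(2015, 1, 1), date(2015, 12, 31))
```

Under single attribution only one of drA or drB would receive credit for pt1; under the multiple rule both do, which fits team-based and referral-heavy settings.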
Figure 2. Expected CQM implementation process, which is mostly linear with specific instances of iteration at the development and debugging stages, and observed process, which may include substantial iteration.
Figure 1. Number of tasks and time spent on tasks in each category, by measure. Tasks assigned to all measures are
not included here.
Figure 3. A selection of observed by-measure workflows. Concurrent tasks within categories will overlap. The “all
measures” category consists of tasks explicitly assigned to all measures, not tasks assigned to any measure.
Conclusion
Through a mixed methods analysis of issue-tracking data, we have derived a set of seven categories of work relating to the end-to-end implementation and evaluation of valid and reliable automated EHR-based CQMs. These align well with and expand upon prior work in this area. We encountered a number of expected and unexpected challenges during this work, stemming largely from EHR data limitations, from back-end and calculation-related challenges, and from the implementation of new features needed to support data analytics. We would advise other CQM implementers to conduct exhaustive exploration of user needs and relevant medical concept documentation practices at the start of any CQM implementation project, in order to limit iteration and redundancy to the extent possible. Some degree of iteration, however, is an unavoidable and, in fact, vital feature of a complex development and knowledge discovery process.
Research reported in this poster was supported by the Knight Cardiovascular Institute.
References
1. American College of Cardiology Foundation, American Heart Association, American Medical Association. Heart Failure
Performance Measurement Set 2010.
2. Dale JA, Behkami NA, Olsen GS, Dorr DA. A multi-perspective analysis of lessons learned from building an Integrated Care
Coordination Information System (ICCIS). AMIA Annu Symp Proc. 2012;2012:129-35.
3. Dorr DA, Wilcox A, Burns L, Brunker CP, Narus SP, Clayton PD. Implementing a multidisease chronic care model in primary care
using people and technology. Dis Manag. 2006;9(1):1-15.
4. Eisenberg F, Lasome C, Advani A, Martins R, Craig P, Sprenger S. A study of the impact of meaningful use clinical quality
measures. Washington, DC: American Hospital Association. 2013.
5. Mehrotra A, Adams JL, Thomas JW, McGlynn EA. The effect of different attribution rules on individual physician cost profiles.
Ann Intern Med. 2010;152(10):649-54.
6. Peterson ED, Ho PM, Barton M, Beam C, Burgess LH, Casey DE, Jr., et al. ACC/AHA/AACVPR/AAFP/ANA concepts for clinician-
patient shared accountability in performance measures: a report of the American College of Cardiology/American Heart
Association Task Force on Performance Measures. Circulation. 2014;130(22):1984-94.
[Figure 3 panels: per-measure timelines (December 2014 through February 2016) showing task spans in each work category for the outpatient LVEF, inpatient LVEF, beta-blocker, ACE/ARB, ICD counseling, follow-up scheduling, and all-measures workflows.]