PubH 6863: Understanding Health Care Quality
Fall 2009 (2 credits)
Wednesdays, 3:35-5:30 pm, 1250 Mayo
Robert L. Kane, MD, Professor
School of Public Health Policies:
Students may change grading options during the initial registration period or during the first two weeks of the
semester. The grading option may not be changed after the second week of the term.
Students should refer to the Refund and Drop/Add Deadlines for the particular term at onestop.umn.edu for
information and deadlines for withdrawing from a course. As a courtesy, students should notify their instructor
and, if applicable, advisor of their intent to withdraw.
Students wishing to withdraw from a course after the noted final deadline for a particular term
must contact the School of Public Health Student Services Center at email@example.com for further information.
An incomplete grade is permitted only in cases of exceptional circumstances and following
consultation with the instructor. In such cases an “I” grade will require a specific written agreement
between the instructor and the student specifying the time and manner in which the student will
complete the course requirements. Extension for completion of the work will not exceed one year.
Student Conduct, Scholastic Dishonesty, and Sexual Harassment Policies
Students are responsible for knowing the University of Minnesota Board of Regents' policy on Student Conduct
and Sexual Harassment found at www.umn.edu/regents/polindex.html.
Students are responsible for maintaining scholastic honesty in their work at all times. Students engaged in
scholastic dishonesty will be penalized, and offenses will be reported to the Office of Student Academic Integrity.
The University’s Student Conduct Code defines scholastic dishonesty as “plagiarizing; cheating on assignments or
examinations; engaging in unauthorized collaboration on academic work; taking, acquiring, or using test materials
without faculty permission; submitting false or incomplete records of academic achievement; acting alone or in
cooperation with another to falsify records or to obtain dishonestly grades, honors, awards, or professional
endorsement; or altering, forging, or misusing a University academic record; or fabricating or falsifying of data,
research procedures, or data analysis.”
Plagiarism is an important element of this policy. It is defined as the presentation of another's writing or ideas as
your own. Serious, intentional plagiarism will result in a grade of "F" or "N" for the entire course. For more
information on this policy and for a helpful discussion of preventing plagiarism, please consult University policies
and procedures regarding academic integrity: http://writing.umn.edu/tww/plagiarism/.
Students are urged to be careful that they properly attribute and cite others' work in their own writing. For
guidelines for correctly citing sources, go to http://tutorial.lib.umn.edu/ and click on “Citing Sources.”
In addition, original work is expected in this course. It is unacceptable to hand in assignments for this course for
which you receive credit in another course unless by prior agreement with the instructor. Building on a line of
work begun in another course or leading to a thesis, dissertation, or final project is acceptable.
If you have any questions, consult the instructor.
It is University policy to provide, on a flexible and individualized basis, reasonable accommodations to students
who have a documented disability (e.g., physical, learning, psychiatric, vision, hearing, or systemic) that may
affect their ability to participate in course activities or to meet course requirements. Students with disabilities are
encouraged to contact Disability Services to have a confidential discussion of their individual needs for
accommodations. Disability Services is located in Suite 180 McNamara Alumni Center, 200 Oak Street. Staff can
be reached by calling 612-626-1333 (voice or TTY).
At the end of this course, students should be able to:
1. Distinguish between structural, process, and outcome-oriented approaches.
2. Distinguish between appropriateness and effectiveness, and describe their relationship to
structural, process, and outcome-oriented approaches.
3. Discuss the implications of these alternative approaches.
4. Provide examples of how to apply each approach to given health care problems.
5. Describe the historical cycles of quality assurance activities and identify the major players
associated with each era.
6. Distinguish between quality assessment, assurance, and improvement.
7. Discuss the implications of practice variation data for health policy.
8. Describe what is involved in selecting criteria for inclusion in a practice protocol, guideline, or pathway.
9. Describe the elements of a Total Quality Management or Continuous Quality Improvement program.
10. Discuss the role of evidence-based medicine in contemporary practice.
11. Design a process review.
12. Design an outcomes analysis.
13. Outline an intervention to change practice behavior and describe how to evaluate it.
The final grade will be based on your performance on the mid-term examination (40%), the final exam
(50%), and class participation (10%).
This course is designed to be highly experiential. A substantial proportion of the sessions are given over
to small group projects that allow students to gain some first-hand experience with aspects of quality
assessment. Students are expected to have done the readings prior to class. The exams are based on the
class discussions and the material in the core readings. You will be responsible for the content of the
latter, whether or not it has been specifically covered in class. Students are expected to participate
actively in the class exercises. Allowance will be made for the differences in clinical knowledge that can
affect the level of participation, but everyone should be involved.
Meeting with Instructor
Students are encouraged to meet with the instructor whenever they have any questions or problems. Dr.
Kane does not keep formal office hours. Instead, students should make an appointment to see him. His
office is in room D-351 Mayo; the phone number is 612-624-1185; email is: firstname.lastname@example.org.
Note: The readings have been divided into core readings, which must be read before each session, and
suggested readings for learning more about a topic. The core readings for each session are bolded and
marked with an asterisk and are listed at the end of each session. They form the basis for discussion
questions for each session. You should come to class prepared to address these topics. There are two
sources of core readings: a text (RL Kane, ed. Understanding Health Outcomes Research, Jones and Bartlett
Publishers, 2005) and a set of readings that are available on E-Reserve through the Bio-Medical Library
(www.lib.umn.edu; under Course Support, select Course Reserves; then Connect to the E-Reserve system;
then Electronic Reserves and Reserves Pages; and enter course number PubH 6863 and search; when you
click on the course listing you will be asked to provide a password [the password to access the articles will
be provided to you by the instructor at the first session]; the list of articles will appear and when you click
on them you will open either a PDF file or will be directed to the link for that specific article). A reference
list including all readings is listed at the end of the syllabus. Copies of the PowerPoint slides for several
sessions will be made available in advance for the session.
In this class, our use of technology will sometimes make students' names and U of M Internet IDs visible
within the course website, but only to other students in the same class. Since we are using a secure,
password-protected course website, this will not increase the risk of identity theft or spamming for anyone
in the class. If you have concerns about the visibility of your Internet ID, please contact me for further information.
For sessions 2, 4, 5, 7, 9, and 13 we will have two people debate the topic indicated. Each student will
participate in at least one debate.
Session 1: Definitions, Overview, and History (September 9)
Quality Assessment vs. Quality Assurance
• defining quality (Institute of Medicine, 1990b); (Chelimsky, 1993)
Quality of care is the degree to which health services for individuals and populations increase
the likelihood of desired health outcomes and are consistent with current professional knowledge.
Institute of Medicine, 1990
If we know what to do, why don’t we do it? (Millenson, 1997); (Gawande, 2007)*; (Business Week)*;
Quality = doing the right things well
right = appropriateness (Brook 2009)*
well = skill
Quality = having the right things happen a reasonable proportion of the time
observed vs. expected
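The observed-vs.-expected comparison above can be illustrated with a minimal sketch (the numbers are hypothetical; in practice the expected count would come from a case-mix or risk-adjustment model):

```python
# Observed vs. expected: a provider's outcomes are judged against the count
# predicted for its patient mix, not against a raw benchmark.
observed_deaths = 18    # deaths actually recorded (hypothetical)
expected_deaths = 24.0  # deaths predicted by a case-mix model (hypothetical)

oe_ratio = observed_deaths / expected_deaths
print(f"O/E ratio = {oe_ratio:.2f}")  # a ratio below 1.0 suggests better-than-expected outcomes
```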
• structure, process and outcomes (Donabedian, 1980); (Donabedian, 1988)*; (Donabedian, 1989);
(Brook, McGlynn, & Cleary, 1996); (Birkmeyer et al., 2003)*; (Kizer, 2003)*; (Peterson et al.,
2004); (Shahian, 2004)
• concerns about quality (Institute of Medicine, 2001)
• changing behavior (Williamson, 1967); (Brown & Fleisher, 1971); (McDonald, 1976); (Davis et al.)
• perspectives (Leape, 1994); (Blumenthal, 1996a); (Blumenthal, 1996b); (Green et al., 1997)
• Concerns about hospital error (Kohn, Corrigan, & Donaldson, 2000); (Zhan & Miller, 2003)
Assessment vs. Assurance vs. Improvement
• Level of responsibility
• Philosophy of control
• regulatory/punitive/hide errors
• cooperative/safe to admit errors
Epidemiology vs. RCTs
• suggestive associations
• treatment effect inferred
• selection bias possible
• post hoc case mix adjustment
• treatment variation
• causal modeling
• design isolates treatment effect
• random assignment
• inclusion/exclusion criteria
• tight protocol
(Benson & Hartz, 2000); (Kaufman & Poole, 2000); (Little & Rubin, 2000); (Concato, Shah, &
Horwitz, 2000); (Berlin & Rennie, 1999); (Juni et al., 1999)
A Brief History of QA
1910 Ernest Codman (Reverby, 1981)
1930s Lee and Jones
1940s Paul Lembcke
1950s JCAH/JCAH(O) (Institute of Medicine, 1990b); (Millenson, 1997)
Vergil Slee (PAS/CHAS)
1968 National Halothane Study
1970 Experimental Medical Care Review Organizations (Sanazaro et al., 1972)
1972 National Center for Health Services Research
1972 Professional Standards Review Organizations
1973 Brook’s analysis of process and outcomes (Brook, 1973)
1977 Costs, Risks and Benefits of Surgery
1980 small area variation
1984 Peer Review Organizations (Institute of Medicine, 1990b)
from record abstracting to epidemiological studies using Medicare administrative data
1989 Agency for Health Care Policy and Research
1990 National Committee for Quality Assurance
1995 Cochrane Centers
1999 Agency for Healthcare Research and Quality
Evidence-based Practice Centers
2000 Institute of Medicine Report, To err is human: Building a safer health system (Kohn et al., 2000)
2001 Institute of Medicine Report, Crossing the quality chasm: A new health system for the 21st century
(Institute of Medicine, 2001)
Outcome = f (baseline, patient factors, process, environment)
Outcome = f (process, structure)
• Disability
• Pain & Discomfort
• Dissatisfaction
• Social activity
• level of activity
• social support
• risk factors
• training (Tamblyn et al., 2002) (Ayanian et al., 2002)
• corporate structure
Terms (Roland, 2004)
• Effectiveness (treatment-related outcomes)
• Critical pathways
• Iatrogenic complications (nosocomial)
• Continuous improvement
• Problem-solving cycles
• Report cards
• HEDIS (Health Plan Employer Data and Information Set)
• Medical errors (Blendon et al., 2002)*; (Gallagher et al., 2003)
• Crossing the Quality Chasm (Berwick, 2002)*; (Shojania et al., 2004)*; (AHRQ--Closing the
Quality Gap)*; (Commonwealth)*
• Cost/quality trade-off (Sirovich, 2006)*
Baldrige Health care criteria (“Patient-focused excellence”)
The delivery of health care services must be patient focused. Quality and performance are the key
components in determining patient satisfaction, and all attributes of patient care delivery (including those
not directly related to medical/clinical services) factor into the judgment of satisfaction and value.
Satisfaction and value to patients are key considerations for other customers as well. Patient-focused
excellence has both current and future components: understanding today’s patient desires and anticipating
future patient desires and health care marketplace offerings.
Value and satisfaction may be influenced by many factors during a patient’s experience participating
in health care. Primary among these factors is an expectation that patient safety will be ensured
throughout the health care delivery process.
Additional factors include a clear understanding of likely health and functional status outcomes, as
well as the patient’s relationship with the health care provider and ancillary staff, cost, responsiveness,
and continuing care and attention. For many patients, the ability to participate in making decisions about
their health care is considered an important factor. This requires patient education for an informed
decision. Characteristics that differentiate one provider from another also contribute to the sense of being valued.
Patient-focused excellence is thus a strategic concept. It is directed toward obtaining and retaining
patient loyalty, referral of new patients, and market share gain in competitive markets. Patient-focused
excellence thus demands rapid and flexible response to emerging patient desires and health care
marketplace requirements, and measurement of the factors that drive patient satisfaction. It demands
listening to your patients and other customers. Patient-focused excellence also demands awareness of
new technology and new modalities for delivery of health care services.
Core Readings for Session 1
Agency for Healthcare Research and Quality (2004). Fact Sheet—Closing the quality gap: A critical
analysis of quality improvement strategies. http://www.ahrq.gov/clinic/epc/qgapfact.pdf
Berwick DM. (2002). A user's manual for the IOM's 'quality chasm' report. Health Affairs, 21(3), 80-90.
Birkmeyer JD, Stukel TA, Siewers AE, Goodney PP, Wennberg DE, & Lucas FL (2003). Surgeon
volume and operative mortality in the United States. New England Journal of Medicine 349(22):
Blendon RJ, DesRoches CM, Brodie M, Benson JM, Rosen AB, Schneider E, Altman DE, Zapert K,
Hermann MJ, & Steffenson AE (2002). Views of practicing physicians and the public on medical errors.
New England Journal of Medicine, 347(24), 1933-1940.
Brook RH (2009). Assessing the appropriateness of care—its time has come. JAMA 302(9):997-998.
Business Week article on Medical Guesswork
Commonwealth Fund. Why Not the Best? Results from a National Scorecard on U.S. Health System Performance.
Donabedian A. (1988). The quality of care: How can it be assessed? Journal of the American Medical
Association, 260, 1743-1748.
Gawande A. (2007) The Checklist. Annals of Medicine: The New Yorker, December 10, 2007, 87-95.
Kizer KW. (2003). The volume-outcome conundrum. New England Journal of Medicine 349(22):2159-2161.
Shojania KG, McDonald KM, Wachter RM, and Owens DK. (2004). Closing the quality gap: A critical
analysis of quality improvement strategies, Technical Review No. 9 prepared for the Agency for
Healthcare Research and Quality. Volume 1 Summary, pages 3-5. available at
Sirovich BE, Gottlieb DJ, Welch HG, & Fisher ES. (2006). Regional variations in health care intensity and
physician perceptions of quality of care. Annals of Internal Medicine 144(9):641-649.
Wharam, JF & Sulmasy D. (2009). Improving the quality of health care: who is responsible for what?
JAMA, 301(2), 215-217.
1. What is quality?
2. Is it realistic to expect quality medicine? What is a reasonable standard of quality in medical
care? Should it be error free?
3. Do the same standards and approach to quality apply in manufacturing and health?
4. How do structure and process relate to quality?
5. If we know what should be done, why don’t we do it?
Session 2: Appropriateness/Process Approaches (September 16)
• relationship between process and appropriateness
• physician practice patterns (Brook & Williams, 1976); (McGlynn et al., 2003)*; (Steinberg, 2003)
• process/outcomes debate (McAuliffe, 1978); (McAuliffe, 1979); (Kane & Lurie, 1992); (Higashi,
2005)*; (Bradley, 2006)*
• criteria mapping (Greenfield, 1989); (Greenfield, Lewis, Kaplan, et al., 1975)
• sources of data to assess process (Peabody et al., 2000)*
• simulated patients
• record abstraction
• direct observation (personal, video, audio)
• completeness of records (Fessel & Van Brunt, 1972)
• adequacy of information base
• expert consensus panels (Institute of Medicine, 1990a); (Ayanian et al., 1998)
• appropriateness of procedures studies (Winslow et al., 1988)*; (Chassin et al., 1987); (Brook, 2009)*
• ambulatory care (Lawthers et al., 1993); (Weiner et al., 1995)
• checklists (Haynes, 2009)*
Process = doing the right things (appropriateness) well (skill)
Care process = Diagnosis → Treatment → Education → Monitoring
Separating diagnosis and treatment (Williamson, 1971)
What information is needed?
+ves and -ves
• linear vs. branching logic
Dealing with missing data
• intensity (dosage)
What does it mean to say information/counseling was given?
• how much
• how well
Skill at each level
Core Readings for Session 2
Bradley EH, Herrin J, Elbel B, McNamara RL, Magid DJ, Nallamothu BK, Wang Y, Normand SL,
Spertus JA, & Krumholz HM. (2006). Hospital quality for acute myocardial infarction: correlation among
process measures and relationship with short-term mortality. JAMA. 296(1):72-8.
Brook RH (2009). Assessing the appropriateness of care—its time has come. JAMA 302(9):997-998.
(This reference was included in session 1)
Haynes, AB, Weiser TG, et al. (2009). A surgical safety checklist to reduce morbidity and mortality in a
global population. N Engl J Med 360(5): 491-9.
Higashi T, Shekelle PG, Adams JL, Kamberg CJ, Roth CP, Solomon DH, Reuben DB, Chiang L, MacLean
CH, Chang JT, Young RT, Saliba DM, & Wenger NS. (2005). Quality of care is associated with survival
in vulnerable older patients. Annals of Internal Medicine 143:274-281.
McGlynn EA, Asch SM, Adams J, Keesey J, Hicks J, DeCristofaro A, & Kerr EA. (2003). The quality of
health care delivered to adults in the United States. New England Journal of Medicine,
Peabody JW, Luck J, Glassman P, Dresselhaus TR, & Lee M. (2000). Comparison of vignettes,
standardized patients, and chart abstraction: A prospective validation study of 3 methods for measuring
quality. Journal of the American Medical Association, 283(13):1715-1722.
Winslow CM, Kosecoff J, Chassin M, Kanouse DE, & Brook RH. (1988). The appropriateness of
performing coronary artery by-pass surgery. Journal of the American Medical Association, 260:505-509.
Debate Topic: Process measures provide useful insights into the quality of care.
Doctor held liable for fatal handwriting mix-up:
By Mimi Hall
A Texas cardiologist could be the first doctor held liable for a fatal medication mix-up caused by a longtime
problem of the medical profession: bad handwriting. A jury in Odessa, Texas, ordered Ramachandra Kolluru to pay
$225,000 to the family of Ramon Vasquez, who died after a pharmacist misread Kolluru's writing. The 42-year-old
heart patient was given the wrong medication at eight times the recommended dosage. Two weeks later, he was dead
from an apparent heart attack.
The victim's widow, Teresa Vasquez, says she sued to prompt doctors and pharmacists to be more careful. "I was
hoping we'd win, because if the doctors don't change their writing, then it could happen to me again with my kids or
even me," she says. Now, "doctors might change, and it might not ever happen again to anybody."
The case points to a growing danger as medications become more numerous and their names more similar.
Pharmacist Michael Cohen of the Institute for Safe Medication Practices says mix-ups of drugs and doses are
common. Hundreds, maybe thousands, of patients have been harmed either because someone misread a written
prescription or because a nurse or aide misunderstood what an emergency-room doctor said should be given to a
patient, he says. "We're bringing more and more drugs to the market, and that's good news," Cohen says. "But it's
becoming more and more difficult for companies to come up with a name that doesn't look or sound like a drug
that's already out there." In Vasquez's case, there was no dispute over the doctor's care, just whether Kolluru and the
pharmacist should have been more careful about the prescription.
In June 1995, Vasquez was given a prescription for Isordil for heart pain caused by valve problems. The prescription
called for him to take 20 milligrams of Isordil four times a day, for a total of 80 milligrams a day. The pharmacist
thought the handwritten prescription said Plendil, a drug for high blood pressure typically taken at no more than 10
milligrams a day. So Vasquez received the wrong drug, with directions to take it at the high dosage meant for
Isordil. "After he took it, he would complain that his heart was pounding real fast, but after a while it would go
away," Teresa Vasquez says of the first time her husband took the drug, on June 24, 1995. By the next night, it
became clear her husband needed medical attention. She took him to the emergency room, where doctors told her he
had suffered a heart attack. Two weeks later, he died.
At the trial, which ended last week, lawyers for the doctor called expert witnesses who testified the pills did not kill
Vasquez. His heart was so weak, they argued, he was about to die anyway. "But the jury thought that, given the
massive dose of Plendil he took, it just could not have helped but do something, and they just chose to disbelieve the
two experts," defense lawyer Max Wright says. Teresa Vasquez agrees that her husband was very ill, and both she
and her lawyer insisted the doctor's care was good.
"We had no complaint about (Kolluru's) care. In fact he is a good doctor," lawyer Kent Buckingham says. But
Buckingham argued that Kolluru and the pharmacy were still responsible for Vasquez's death because of the mix-up.
The jury agreed on Oct. 14 and awarded $450,000 to the 41-year-old widow and her three children.
Half of the judgment was assigned to the pharmacy, leaving Kolluru responsible for paying the other half. The
doctor has not decided whether to appeal. Because the pharmacy had previously settled its side of the case in 1998
for an undisclosed sum while denying liability, it does not have to pay its half of the verdict. Wright says he believes
the jury was trying to send a message to the medical community that in the computerized information age, there is
no reason for doctors to create the potential for error by writing out their prescriptions instead of typing or printing
them out. Cohen also warns that patients should be more diligent as well. He says they should insist that doctors
write out the reason for the medication on a prescription.
Session 3: Application (September 23)
Students will develop criteria and apply them to a set of medical records. The goal of this exercise is to
identify a set of simple criteria that could reasonably be obtained from a chart review. The
criteria should fit the problem (in this case CHF) and the situation (ongoing patient care, inpatient and
outpatient). For the purposes of this exercise do not attempt to be exhaustive. Instead think about the most
salient items, those that would generate the most insight into the quality of care produced.
Session 4: Variation (September 30)
• Variation of what? (Paul-Shaheen, Clark, & Williams, 1987)
• From small area variation to national Medicare patterns. Wennberg studies in Maine, between
cities, national geography (Wennberg, Bunker, & Barnes, 1980); (Wennberg & McAndrew,
1996); (Wennberg et al., 1988); (Wennberg, Freeman, & Culp, 1987); (Wennberg et al.,
1989)*; (Fisher et al., 1994); (Roos, 1989); (Anderson, Newhouse, & Roos, 1989);
(McPherson, Strong, Epstein, & Jones, 1981); (Ellerbeck et al., 1995); (Chassin et al., 1987);
(Guadagnoli et al., 1995); (O'Connor et al., 1999); (Landon, 2006)*
• Is variation real, or a statistical artifact? (Kane, Lin et al., 2002)
• Factors affecting variation (Skinner et al, 2003) (Gawande, 2009)*
• Is more better? Supply? Volume? (Fisher & Welch, 1999)* (Sirovich, 2006)*, (Fisher et al,
2003)*, (Halm, 2002)*
• Relationship with appropriateness and outcomes (e.g., mortality) (Wennberg, 1984); (Leape et al.,
1990); (Fisher & Welch, 1999)*; (Keeler et al., 1992); (Davidson, 1993); (Park, 1993); (Cain &
Diehr, 1993); (Petersen et al., 2003)
• Modern medical management goal: eliminate variance BUT more care does not necessarily imply
inappropriate or better care (Fisher et al., 2003a, b)*
• Drive to reduce variance led to enthusiasm for guidelines
• Variation v. disparities (Trivedi, 2006), (Liu, 2006)
• Scorecards (AHRQ, 2006)*; (Commonwealth, 2007)*
Core Readings for Session 4
AHRQ National Healthcare Quality Report, 2006. http://www.ahrq.gov/qual/nhqr06/nhqr06.htm
Commonwealth Fund. Aiming Higher: Results of a State Scorecard on System Performance June, 2007
Fisher ES, & Welch HG. (1999). Avoiding the unintended consequences of growth in medical care: How
might more be worse? JAMA, 281(5), 446-453.
Fisher ES. (2003). Medical care—is more always better? New England Journal of Medicine. 349(17):
Fisher ES, Wennberg DE, Stukel TA, Gottlieb DJ, Lucas FL, & Pinder EL. (2003). The implications of
regional variations in Medicare spending. Part 2: Health outcomes and satisfaction with care. Annals of
Internal Medicine, 138(4), 288-298.
Gawande A (2009). The cost conundrum: What a Texas town can teach us about health care. The New Yorker.
Halm EA, Lee C, & Chassin MR. (2002). Is volume related to outcome in health care? A systematic
review and methodologic critique of the literature. Annals of Internal Medicine, 137, 511-520.
Landon BE, Normand S-LT, Lessler A, O’Malley AJ, Schmaltz S, Loeb JM, and McNeil, BJ (2006).
Quality of care for the treatment of acute medical conditions in US hospitals. Archives of Internal Medicine.
Sirovich BE, Gottlieb DJ, Welch HG, & Fisher ES. (2006). Regional variations in health care intensity and
physician perceptions of quality of care. Annals of Internal Medicine. 144(9):641-9.
Wennberg JE, Freeman JL, Shelton RM, et al. (1989). Hospital use and mortality among Medicare
beneficiaries in Boston and New Haven. New England Journal of Medicine, 321, 1168-1173.
Debate Topic: The information from variation studies is useful.
Session 5: Parameters, Protocols, Guidelines and Pathways (October 7)
• guidelines (James, 1993); (Ayanian et al., 1998); (Graham, James, & Cowan, 2000); (Beck et al., 2000);
(Mehta et al., 2002)* (Shekelle et al., 2001); (Shojania, 2007)*; (Walter, 2004)*; (Haycox 1999)*;
• AHCPR efforts to create practice guidelines (Sniderman, 2009)*
• appropriateness from consensus or empirical data (Kane & Lurie, 1992); (Phelps, 1993); (Brook, 1991);
(Wennberg, 1991); (Kassirer, 1993); (Tricoci, 2009)*
• clinical pathways/critical paths: “optimal sequencing and timing of interventions by physicians, nurses,
and other staff for a particular diagnosis or procedure, designed to better utilize resources, maximize
quality of care and minimize delays” (Lumsdon & Hagland, 1993); (Petryshen & Petryshen, 1992);
(Strong & Sneed, 1991) (Holtzman, Bjerke, & Kane, 1998)
• basically an extension of process measures (what to do under specified circumstances)
• guidelines vs. clinical pathways: the latter are more nursing oriented, with milestones and expected outcomes
- work better when tasks are standardized and well defined (e.g., pre-/post-op)
• protocol-specific guidelines, less discretion
• Dangers of guidelines
Major goals for guidelines:
• reduce use (LOS)
• improve appropriateness
Types of guidelines
• whom to treat
• how to treat
Lack of strong empirical base for guidelines
• general lack of data
• classic case not typical; multiple variations in situation
• complicated geriatric cases
Appropriateness determined by consensus judgment is not necessarily correct
• consensus is not the same as truth
• paradox of textbooks
• different disciplines may not agree (Fowler et al., 2000)*
• level of specificity: principles vs. algorithm
• Are guidelines based on good evidence? (O’Connor, 2005); simple vs. complex cases (Boyd, 2005)*; (Tinetti, 2004)
Core Readings for Session 5
Boyd CM, Darer J, Boult C, Fried LP, Boult L, & Wu AW. (2005). Clinical practice guidelines and
quality of care for older patients with multiple comorbid diseases: Implications for pay for performance.
JAMA, 294(6), 716-724.
Haycox A, Bagust A, & Walley T. (1999). Clinical guidelines--the hidden costs. British Medical Journal,
Shojania KG, Sampson M, Ansari MT, Ji J, Doucette S, & Moher D. (2007). How quickly do systematic
reviews go out of date? A survival analysis. Annals of Internal Medicine, 147, 224-233.
Sniderman AD. and Furberg CD. (2009). Why guideline-making requires reform. JAMA 301(4): 429-31.
Tricoci, P, Allen JM, et al. (2009). Scientific evidence underlying the ACC/AHA clinical practice
guidelines. JAMA 301(8): 831-41.
Walter LC, Davidowitz NP, Heineken PA, & Covinsky KE (2004). Pitfalls of converting practice
guidelines into quality measures: Lessons learned from a VA performance measure. JAMA, 291(20),
Debate Topic: Are guidelines useful?
Session 6: Management Models of Quality Assurance (October 14) (Bill Riley)
Review process improvement concepts and methods, how they apply to health care, and the relation of
process improvement to other outcome systems and methods.
1. Define Continuous Quality Improvement (CQI) and describe a process.
2. Define variation and explain why it is important in process improvement.
3. Identify and describe three types of causes.
4. List four tools for process improvement and describe how they can be used in health care.
5. Differentiate between CQI, TQI, PDCA, and Six Sigma.
6. Be able to develop and implement a process improvement project.
1. Foundations of CQI
2. Definition and Overview of Process Improvement
a. Voice of Customer and Voice of Process
b. Differentiating Signal and Noise
4. Three Types of Cause
a. Common Cause
b. Special Cause
c. Root Cause
5. CQI Models
b. Plan, Do, Check, Act
d. Find, Organize, Clarify, Understand, Select
e. Six Sigma
i. 3.4 defects per million opportunities
ii. Define, Measure, Analyze, Improve, Control
6. Common CQI Tools
a. Process Map
b. Cause and Effect Diagram (Fishbone Diagram)
c. Pareto Chart
d. Run Chart
e. Control Chart
7. Statistical Process Control
8. Three Principles of Total Quality
9. Case Study
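Two quantitative ideas in the outline above (the Six Sigma defect rate and control-chart limits for separating common-cause from special-cause variation) can be sketched in a minimal Python example; all numbers are illustrative, not from any real process:

```python
from statistics import mean, stdev

# Six Sigma target: 3.4 defects per million opportunities (DPMO).
defects, opportunities = 17, 5_000_000          # hypothetical counts
dpmo = defects / opportunities * 1_000_000
print(f"DPMO = {dpmo:.1f}")                     # 3.4 at Six Sigma performance

# Control chart: flag a point as special-cause variation when it falls
# outside the process mean +/- 3 standard deviations (the common-cause band).
daily_wait_times = [32, 29, 35, 31, 30, 33, 28, 34, 30, 58]  # minutes, hypothetical
center = mean(daily_wait_times[:-1])            # baseline from the stable period
sigma = stdev(daily_wait_times[:-1])
ucl, lcl = center + 3 * sigma, center - 3 * sigma
out_of_control = [x for x in daily_wait_times if x > ucl or x < lcl]
print(f"limits = ({lcl:.1f}, {ucl:.1f}); special-cause points: {out_of_control}")
```

The last point (58 minutes) falls above the upper control limit, so it would be investigated as special-cause variation rather than treated as normal process noise.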
Core Readings for Session 6
Berwick DM. (2002). A user's manual for the IOM's 'quality chasm' report. Health Affairs, 21(3), 80-90.
(This article was included in Session 1.)
Bodenheimer T. (1999a). The American health care system--The movement for improved quality in
health care. New England Journal of Medicine, 340(6), 488-492.
Chassin MR. (1998). Is health care ready for six sigma quality? The Milbank Quarterly 76(4):565-591.
Leape LL, & Berwick DM. (2005). Five years after To Err Is Human: What have we learned? JAMA 293(19),
Steinberg EP. (2003). Improving the quality of care—Can we practice what we preach? New England
Journal of Medicine 348(26), 2681-2683.
Discussion Questions (Students should be prepared to discuss these issues.)
A. A User’s Manual for the IOM’s ‘Quality Chasm’ Report
Donald M. Berwick
1. What are the recommendations for redesigning the U.S. health care system at four levels?
2. What is your critique of these recommendations?
3. What are the three primary obstacles, in your view?
B. The American Health Care System—The Movement for Improved Quality in Health Care
1. What are the two main intertwining threads creating the movement to improve quality of care?
2. What are the main problems with the quality of health care and how is it measured?
C. Five Years After To Err is Human: What Have We Learned?
Lucian L. Leape, Donald M. Berwick
1. What do the authors conclude about the effects of efforts to improve health care safety in three
areas five years after the IOM report “To Err Is Human”?
2. What barriers to progress have been encountered?
3. What needs to be done next?
D. Improving the Quality of Care—Can We Practice What We Preach?
Earl P. Steinberg
1. What four actions does the author suggest to address the finding by McGlynn et al.
(2003) that adults receive only 55% of recommended care according to 439 process-of-care indicators?
2. Do you agree with these recommendations? Why or why not?
Session 7: Guideline Criteria/Evidence-based Medicine (October 21)
What is evidence-based health care?
“The process of systematically finding, appraising, and using contemporaneous research findings as the
basis for clinical decisions”
National Library of Medicine MeSH scope note
• Steps involved in Evidence-Based Health Care:
1. Formulate a clear clinical question
2. Search the literature for relevant evidence
3. Evaluate (critically appraise) the literature
4. Implement useful findings in clinical practice
• Major Sources of Evidence:
1. MEDLINE/PubMed (http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed)
2. Cochrane Library (http://www.cochrane.org)
3. Evidence-based Practice Center Reports (http://www.ahrq.gov/clinic/epcquick.htm)
4. Clinical practice guidelines (e.g., http://www.guideline.gov)
5. Others (e.g., Bandolier, assorted books and journals)
• Levels of Evidence and Grades of Recommendations
IA evidence from meta-analysis of randomized controlled trials
IB evidence from at least one randomized controlled trial
IIA evidence from at least one controlled study without randomization
IIB evidence from at least one other type of quasi-experimental study
III evidence from nonexperimental descriptive studies, such as comparative studies, correlation
studies, and case-control studies
IV evidence from expert committee reports or opinions or clinical experience of respected
authorities, or both
• Historical landmarks
o EA Codman
o Lawrence Weed (problem oriented medical records) (Weed, 1968)
o Archibald Cochrane (Effectiveness and Efficiency: Random Reflections on Health Services. London:
Nuffield Provincial Hospitals Trust; 1972)
• criteria for guidelines (Institute of Medicine, 1990a); (Field & Lohr, 1992); (Shekelle et al., 2001);
• Minnesota practice parameters; agreement across plans on ICSI guidelines
• evidence-based medicine (Tu et al., 1997), (Moher et al., 1998)*; (Sackett et al., 1996)*, (Moher et
al., 2001)*; (CONSORT E-Flowchart and Checklist)—handout from instructor (Haynes, 2009)*
• efforts to review medical literature systematically (Sackett, 1997); (Juni et al., 1999); (Oxman, 1994);
(Juni, Altman, & Egger, 2001)*; (Agency for Healthcare Research and Quality, 2002)*
• What are the problems in implementing guidelines? (Boyd, et al., 2005); (Tinetti, 2004)*; (Haycox,
1999)*; (Dougherty, 2008)*
• Cochrane Centers
• AHCPR strategies: PORTs, guidelines, Evidence-based Practice Centers (role of politics)
• Criteria for studies
• what is involved in doing an EBM review
• how to summarize evidence (meta-analysis, consistency of point estimates, etc.)
• how strong is the quality of evidence (Steinberg & Luce, 2005)*, (Garber, 2005)
• how other techniques (like consensus judgments) are used to fill in the gaps (Holmes, 2006)*;
(Kramer, 1984)*; (Kunz, 2008)*
• Example of an evidence-based report
• How is evidence used? (Rawlins, 2004),* (Mendelson & Carino, 2005), (Timmermans and Mauck,
2005), (Eddy, 1998) (Green, 2006)*
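As a concrete illustration of the "how to summarize evidence" bullet above, the sketch below pools study results with a fixed-effect (inverse-variance) meta-analysis, weighting each study by the precision of its estimate. The effects and standard errors are invented for illustration, not drawn from any review in the readings.

```python
import math

# Fixed-effect (inverse-variance) meta-analysis sketch.
# Each study contributes an effect estimate and its standard error;
# the pooled estimate weights each study by 1 / SE^2. The effects
# below are hypothetical log odds ratios.

def pooled_effect(effects, std_errors):
    """Return (pooled estimate, pooled standard error)."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

log_ors = [-0.35, -0.10, -0.42, -0.25]   # hypothetical study effects
ses = [0.20, 0.15, 0.30, 0.18]           # hypothetical standard errors

est, se = pooled_effect(log_ors, ses)
low, high = est - 1.96 * se, est + 1.96 * se
print(f"pooled OR = {math.exp(est):.2f} "
      f"(95% CI {math.exp(low):.2f}-{math.exp(high):.2f})")
```

Note that precision weighting makes large, precise studies dominate the pooled result; checking the consistency of the individual point estimates (heterogeneity) before pooling is part of the appraisal step.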
Core Readings for Session 7
Agency for Healthcare Research and Quality. (2002). Systems to rate the strength of scientific evidence
(Number 47): US Department of Health and Human Services, Public Health Service. Available at:
Dougherty D and Conway PH. (2008). The "3T's" road map to transform US health care: the "how" of
high-quality care. JAMA 299(19): 2319-21.
Gawande A. (2007) The Checklist. The New Yorker, December 10, 2007, 87-95. (This article was
included in Session 1.)
Green LW & Glasgow RE. (2006). Evaluating the relevance, generalization, and applicability of research:
Issues in external validation and translation methodology. Evaluation & Health Professions 29(1):126-153.
Haycox A, Bagust A, & Walley T. (1999). Clinical guidelines--the hidden costs. British Medical Journal,
318, 391-393. (This article was included in Session 5.)
Haynes AB, Weiser TG, et al. (2009). A surgical safety checklist to reduce morbidity and mortality in a
global population. N Engl J Med 360(5): 491-9. (This article was included in Session 2.)
Holmes D, Murray SJ, Perron A, et al. (2006). Deconstructing the evidence-based discourse in health
sciences: truth, power and fascism. International Journal of Evidence-Based Healthcare, 4(3):180-6.
Juni P, Altman DG, & Egger M. (2001). Systematic reviews in health care: Assessing the quality of
controlled clinical trials. BMJ, 323, 42-46.
Kramer MS and Shapiro SH. (1984). Scientific challenges in the application of randomized trials. JAMA
Kunz R. (2008). Randomized trials and observational studies: still mostly similar results, still crucial
differences. Journal of Clinical Epidemiology 61:207-208.
Rawlins MD. (2004). NICE work--Providing guidance to the British National Health Service. New
England Journal of Medicine, 351(14), 1383-1385.
Sackett DL, Rosenberg WMC, Gray JAM, Haynes RB, & Richardson, WS. (1996). Evidence based
medicine: What it is and what it isn't. BMJ, 312, 71-72.
Shojania KG, Sampson M, Ansari MT, Ji J, Doucette S, & Moher D. (2007). How quickly do systematic
reviews go out of date? A survival analysis. Annals of Internal Medicine, 147, 224-233. (This article was
included in Session 5.)
Sniderman, A. D. and C. D. Furberg (2009). Why guideline-making requires reform. JAMA 301(4):
429-31. (This article was included in Session 5.)
Steinberg EP and Luce BR. (2005). Evidence Based? Caveat Emptor! Health Affairs, 24(1), 80-92
1. Is evidence-based practice feasible?
2. What strength of evidence is needed to declare a relationship a fact? Are there universal truths?
3. Must one rely exclusively on RCTs? Do some areas lend themselves to RCTs better than others?
4. How do a series of weak studies rank against one strong one?
5. How do you extrapolate from an evidence review to practice? What other factors should be considered?
Debate Topic: Medical practice and policy should be based on evidence.
Session 8: Mid-term examination (October 28)
Session 9: Exercise: Develop a Guideline (November 4)
Working in teams, students will develop and adapt an AHCPR guideline (either heart failure or
depression) to a workable system that can be implemented in a clinical setting.
Session 10: Outcomes/Effectiveness (November 11)
• outcomes (Chapter 1 in Kane, 2005)*; (Chapter 13 in Kane, 2005)*; (Brook et al, 1977);
(Ellwood, 1988); (Lohr, 1988); (Wennberg et al., 1980); (Bunker, 1988)
• measuring outcomes (QALYS, symptom-specific, general) (Stewart et al., 1988); (Avorn, 1984);
(Testa & Simonson, 1996)
• ambulatory outcomes (Mushlin, Appel, & Barr, 1978); (Kane et al., 1977); (Payne et al., 1984);
(Tarlov et al., 1989); (Stewart et al., 1988); (Greenfield et al., 1992); (Ware et al., 1996); (Ayanian
et al., 1997)
• consumers as sources of data for outcomes & satisfaction (Davies & Ware, 1988)
• epidemiological approaches vs. RCTs
• decision-analysis approaches (Eddy, 1990)
• using claims data (Normand et al., 1995)
• adjusting for severity (Iezzoni & Moskowitz, 1988); (Stukel, 2007)*; (D’Agostino, 2007)
• interpreting trial data (Flum, 2006); (Weinstein, 2006a, b)
• expected vs. achieved benefit (Williamson, 1971)
• assigning relative values to outcomes (Kane et al., 1986); (Finch et al., 1995); (Kane, Rockwood,
Philp, & Finch, 1998)
• continuity (Roos et al, 1980)
• effects of other factors (SES, ethnicity) (Wenneker & Epstein, 1989); (Hadley, Steinberg, & Feder,
Outcomes = f(baseline, clinical, demographic, treatment)
Collecting prospective data vs. using extant data
• mortality comparisons: HCFA hospital mortality data comparisons controversy (Jencks et al.,
1988)*; (Kahn et al., 1988); (Normand et al., 1996); (Hayward & Hofer, 2001); (Roper et al.,
1988); (Roos et al., 1989); (Jolis et al., 1994)
• preventable hospitalizations (Billings, Anderson, & Newman, 1996); (Weissman, Gatsonis, &
Epstein, 1992); (Bindman et al., 1995)
• underuse (Asch et al., 2000)
• sentinel events (Rutstein et al., 1976)
• cost-effectiveness (Russell et al., 1996); (Weinstein et al., 1996); (Ubel et al., 1996)
Appropriateness should be based on effectiveness
Randomized Clinical Trials are not universally feasible
• too many problems
• too many variants
(Ioannidis et al., 2001)*
Epidemiological approaches will be needed
• deduce the effect of treatment by controlling for other relevant variables
• identify relevant subgroups for separate analysis
• Outcomes = f (baseline, demographics, clinical factors, treatment)
• observed vs. expected
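The "observed vs. expected" comparison in the outline above can be sketched as a small calculation: a risk-adjustment model assigns each patient an expected probability of the outcome, and a provider's observed-to-expected (O/E) ratio compares actual events with the sum of those probabilities. All numbers below are hypothetical.

```python
# Observed-vs-expected sketch for risk-adjusted outcome comparison.
# Each patient has a predicted probability of the outcome from a
# risk-adjustment model (baseline severity, demographics, etc.);
# a provider's O/E ratio compares observed events to the sum of
# those predictions. All numbers here are hypothetical.

def oe_ratio(observed_events, predicted_probs):
    """Observed/expected ratio; > 1 means worse than expected."""
    expected = sum(predicted_probs)
    return observed_events / expected

# Hypothetical provider: 100 low-risk patients (5% predicted risk)
# and 40 high-risk patients (25% predicted risk), with 12 observed
# deaths against 15 expected.
risks = [0.05] * 100 + [0.25] * 40
print(round(sum(risks), 2))            # expected events
print(round(oe_ratio(12, risks), 2))   # below 1: fewer deaths than expected
```

The same O/E logic underlies the hospital mortality comparisons cited above; the controversy centers on whether the risk model captures enough of the baseline differences for the ratio to reflect quality rather than case mix.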
Why clinicians don’t like outcomes
1. They have been trained in process
2. Accountability is easier and safer
3. Outcomes information is always after the fact; no opportunity to intervene
4. Outcomes results don’t tell what to do, just where to look
5. Externally imposed/economically motivated
• inconsistency/unreliability of medical records
• baseline measures
• clinic follow-up visits miss important cases
• specific baseline questions
• follow-up
• mail vs. telephone
• medical record abstract
• billing data
Future of outcomes research
• cost effects of guidelines
• options under guidelines
• franchising new technology
When to evaluate
• mature intervention vs. timely assessment
• hard to evaluate established programs
• changes in technology
Outcomes research agenda
• comparisons of alternative treatments
• comparisons of alternative clinicians
• deducing skill
• testing guidelines
• answerable questions
• effectiveness of guidelines
• effect on care & outcomes
• selection bias
Adjusting for selection bias (Stukel, 2007)*
Core Readings for Session 10
Chapters 1 and 13 in Kane, 2005
Ioannidis JP, A-B Haidich, M Pappa, et al. (2001). Comparison of evidence of treatment effects in
randomized and nonrandomized studies. JAMA 286(7), 821-830.
Jencks SF, Daley J, Draper D, et al. (1988). Interpreting hospital mortality data: The role of clinical risk
adjustment. JAMA, 260, 3611-3616.
Stukel TA, Fisher ES, Wennberg DE, Alter DA, Gottlieb DJ, & Vermeulen MJ. (2007). Analysis of
observational studies in the presence of treatment selection bias: Effects of invasive cardiac management
on AMI survival using propensity score and instrumental variable methods. JAMA, 297(3), 278-285.
1. What are the pros and cons of using outcomes as primary quality measures?
2. What are the major components of an outcomes study?
3. What are the special burdens of proof of an observational study?
4. What is the role of case mix adjustment in studies of hospital mortality?
5. How strong are the conclusions from the MOS?
Debate Topic: Outcomes are better guides to quality of care than process measures.
Session 11: Changing Provider/System Behavior (November 18) [Gordon Mosser, MD]
“Most doctors wake up in the morning wanting to do the right thing.” (Nelson S., a friend)
- the challenge day-to-day is how to make it easier to do the right thing.
“Given a choice between changing and proving that it is not necessary, most people get busy with the
proof.” (John Galbraith)
Rogers EM (Rogers, 1995)
1. Knowledge (of something)
2. Persuasion (that it will work)
3. Decision (to use it)
4. Implementation (of it)
5. Confirmation (i.e. wanting to know more)
At “Persuasion” - it is critical what others think
Something is adopted quickly when it has:
- a ‘relative advantage’
- ‘compatibility’
One person’s change model. . .
(A + B + D > X) → change
A = Dissatisfaction with where things are now.
B = Vision of a better way.
D = ‘Doability’ of getting from ‘A’ to ‘B’.
X = My personal cost of the change
make it easy
engage opinion leaders
Eisenberg, 3M (Eisenberg, 1986, chapters 6 and 7)
Six ways to change behavior:
- administrative changes
• response to consensus statements (Kosecoff et al., 1987); (Auleley et al., 1997); (GAO, 1997),
(Grimshaw & Russell, 1993; Grimshaw et al., 2001; Gross et al., 2001; Woolf et al., 1999);
(Stafford et al., 2004)
• identifying critical points of intervention (Williamson, 1969); (Grol, 2001); (Mason et al., 2001),
(Eisenberg, 1986 chapters 6 & 7)
• active strategies (Soumerai, McLaughlin, & Avorn, 1989); (Soumerai & Avorn, 1990); (Greco &
Eisenberg, 1993); (Hillman, 1991); (Soumerai et al., 1998), (Lomas et al., 1989)
• using secondary Medicare data (Roper et al., 1988); (Relman, 1988)
• continuing medical education ( Davis et al., 1995); (Davis et al., 1984); (Kane & Kane, 1995)
• role of regulation (Shorr, Fought, & Ray, 1994); (Chassin, 1996); (Chassin, Hannan, & DeBuono,
1996); (Lee, Meyer, & Brennan, 2004)
• implementing guidelines (Grimshaw et al., 2004)* (Grimshaw & Russell, 1993); (Mittman,
Tonesk, & Jacobson, 1992); (AHCPR, 1992); (Cabana et al., 1999); (Horowitz et al., 1996);
(Solberg et al., 2000); (Betz Brown et al., 2000); (Roberts et al., 2004); (Roland, 2004), (Shekelle,
2003), (Dalal & Evans, 2003)
• active QI programs (Berwick D, 1989)*, (Wells et al., 2000); (Kiefe et al., 2001), (Demakis et al.,
• disease management (Bodenheimer, 1999b); (Casalino et al., 2003)
• VA (Jha et al., 2003)
• Payment (Leatherman et al., 2003)*, (Rosenthal et al., 2005), (Epstein, Lee, & Hamel, 2004);
( Fisher & Avorn, 2004)
• Consumer information (Marshall et al., 2000)*, (Werner & Asch, 2005)*
• Information systems (Dexter 2004); (Jha et al., 2003)*
Core Readings for Session 11
Asarnow JR, Jaycox LH, Duan N, LaBorde AP, Rea MM, Murray P, Anderson M, Landon C, Tang L,
Wells KB. (2005). Effectiveness of a quality improvement intervention for adolescent depression in
primary care clinics: a randomized controlled trial. JAMA. 293(3):311-319.
Berwick D. (1989). Continuous improvement as an ideal in health care. NEJM, 320:53-56.
Berwick DM (2008). The science of improvement. JAMA 299(10):1182-1184.
Buist M, Harrison J, Abaloz E, & Van Dyke S. (2007). Quality improvement report: Six year audit of
cardiac arrests and medical emergency team calls in an Australian outer metropolitan teaching hospital.
Grimshaw JM, Thomas RE, MacLennan G, et al. Effectiveness and efficiency of guideline dissemination
and implementation strategies. Health Technology Assessment 2004; volume 8, no 6, executive summary.
(The summary is available by going through PubMed to the citation and then clicking a link to the HTA
site. "HTA" stands for Health Technology Assessment.)
Jha AK, Perlin JB, Kizer KW, & Dudley RA. (2003). Effect of the transformation of the veterans affairs
health care system on the quality of care. New England Journal of Medicine, 348(22), 2218-2227.
Leatherman S, Berwick D, Iles D, et al. The business case for quality: case studies and an analysis. Health Affairs 2003;22(2):17-30.
Marshall MN, Shekelle PG, Leatherman S, et al. The public release of performance data: what do we
expect to gain? A review of the evidence. JAMA 2000;283:1866-74.
Rosenthal, MB. (2008). Beyond pay for performance--emerging models of provider-payment reform. N
Engl J Med 359(12): 1197-200.
Shojania G and Grimshaw JM. (2005). Evidence-based quality improvement: the state of the science.
Health Affairs. 24(1):138-50.
Werner RM, Asch DA. The unintended consequences of publicly reporting quality information. JAMA 2005;293(10):1239-44.
1. Do you agree with the premises underlying pay for performance? Should physicians be paid
incentives for closing care gaps? Many employers think physicians are already paid well and that
they should not be paid extra for "doing the right things."
2. A number of the initiatives to close care gaps involve services provided by professionals other
than physicians. Can you think of examples where care delivery can be redesigned to incorporate
these other professionals? When does this approach make sense?
3. Many articles reference practice guidelines. Some physicians equate practice guidelines to
utilization management policies that were common in the '80s and '90s. Are they the same? Other
physicians equate the attempt to implement practice guidelines with "cookbook medicine" saying
that such guidelines remove the art from the practice of medicine. What do you think?
4. There is an enormous difference between distributing a practice guideline and incorporating the
recommendations from that practice guideline into the infrastructure that supports practice. What
examples of both approaches did you see in the readings?
5. Closing care gaps will mean changing how physicians practice medicine and therefore changing
physician behavior. The readings for this session discussed various techniques to do just that.
Which techniques seemed to work well? Which ones would you tend to avoid?
November 25: pre-Thanksgiving no class
Session 12: Pay for Performance (December 2)
• Is P4P feasible? (Eddy, 1998); (Fisher, 2006)*; (Rosenthal, 2006); (Kindig, 2006); (Fonarow,
2007); (Pham, 2007)*; (Lee, 2007); (Lindenauer, 2007)*; (Epstein, 2007); (Rosenthal, 2007a)*;
(Rosenthal, 2007b)*; (Rosenthal, 2005)* (Glickman, 2007); (Birkmeyer, 2006); (Doran, 2006)*,
(Doran, 2008)*; (Epstein, 2006), (Klein, 2006); (Vijan, 2000)
• Continuous Quality Improvement
• cybernetic cycle
• evidence of successful QI (Asarnow, et al., 2005)*; Total Quality Management/Continuous
Quality Improvement (Kritchevsky & Simmons, 1991); (Blumenthal, 1993); (Berwick, 1996a);
(Solberg et al., 1998), (Ferguson et al., 2003), (Casalino et al., 2003); (Auerbach, 2007)*
• How good is the evidence for using evidence-based QI? (Shojania and Grimshaw, 2005)*; (Tonelli,
• systems v individual components
• system-wide approach (Berwick, 1989); (White & Ball, 1990); (Kelly & Swartwoult, 1990);
(Gottlieb, Margolis, & Schoenbaum, 1990); (Bindman et al., 1995); ( Brook, Kamberg, &
McGlynn, 1996); (Brook, McGlynn et al., 1996); (Schneider & Epstein, 1996) (Rosenberg et al.,
1995); (Burstin, Lipsitz, & Brennan, 1992); (Bernstein et al., 1993); (Berwick, 1996b); (Kerr et al.,
1996); (Landon, Wilson, & Cleary, 1998); (Mor et al., 2003) (Christianson, 2007)*
• report cards, HEDIS (Corrigan & Nielsen, 1993); (Epstein, 1995); (Iglehart, 1996); (Epstein, 1998);
(Schneider & Epstein, 1998); (Hofer et al., 1999); (Green et al., 1997); (Knutson, Kind, Fowles, &
Adlis, 1998); (Poses et al., 2000), (Kizer, 2001)*, (Krumholz et al., 2002), (Schneider, Zaslavsky,
& Epstein, 2002), (Uhrig and Short, 2002/2003); (Jha, 2006); (Smith, 1995)
• designing feedback makes a difference (Hibbard, et al., 2005)
• guidelines (Go et al., 2003); (Mehta et al., 2002); (Hampton, 2003)
• business case (Leatherman, et al., 2003)
• When can report cards produce negative consequences? (Werner and Asch, 2005)*
• MMA’s Principles for Pay for Performance
1. Pay-for-performance programs must be designed to drive improvements to health care
quality and the systems in which quality care is delivered.
• Pay-for-performance programs should measure quality across the full continuum of care.
Quality should be measured comprehensively considering the six aims as defined by the
Institute of Medicine (i.e., safety, effectiveness, patient-centeredness, timeliness, efficiency, and equity).
• Pay-for-performance programs must demonstrate improvements to health care quality.
• Pay-for-performance programs must offer increased value to health care consumers.
• Pay-for-performance programs should improve systems of care by encouraging use of health
information technology (HIT), promoting collaboration among all members of the health care
team, supporting implementation of evidence-based clinical guidelines, and increasing patient
access to care that is high-quality and appropriate.
2. Pay-for-performance programs must promote and strengthen the partnership between
patients and physicians.
• Physicians are ethically required to use sound medical judgment and hold the best interests of
the patient as paramount. Programs should respect patient preferences and physician judgment.
• Target goals should reflect the need for patient-centered care; therefore, performance goals
should not be set at 100%. Thresholds for any P4P program should also reflect the role of
patient adherence to treatment plans.
• Programs must make sure that access to care is not limited. Systems must be in place to ensure
that physicians are not discouraged from providing care to patients who are members of
underserved and high-risk patient populations.
• Patient privacy must be protected during all data collection, analysis, and reporting. Data
collection must be consistent with the Health Insurance Portability and Accountability Act (HIPAA).
3. Pay-for-performance programs should support and facilitate broad participation and
minimize barriers to participation.
• Pay-for-performance programs must work to include physician groups across the continuum of
health care as soon as possible.
• Participation in P4P programs must not create undue financial or administrative burdens on
physicians and/or their practices (i.e., implementation, data collection, and reporting of data).
• Elective P4P programs should allow clinics to take into account their ability to participate
based on resources, patient population, and number of patients affected by the condition being
measured. Physician groups, regardless of size, specialty, or HIT capability, should have the
opportunity to participate in P4P programs if they have the resources and patient population
needed to do so.
• Groups should be aware of P4P programs and clearly understand what the rewards will be
relative to their level of participation so that they can accurately assess the cost/benefit of participating.
• Individual physician information must be protected. Data collected as part of P4P programs
must not be used against physicians in obtaining professional licensure and certification.
4. Pay-for-performance program design and implementation must be credible, reliable,
transparent, scientifically valid, administratively streamlined, and useful to patients and physicians.
• Practicing physicians from the appropriate specialty should be integrally involved in the
design and implementation of accountability and performance-improvement measures.
Clinical performance measures must be objective, transparent, reliable, evidence-based, current,
statistically valid, clinically relevant, and cost-effective; the methodology should be
• Clinical performance measures should be selected for diseases that create a great burden on
the health care system and for areas that have significant potential for clinical improvement.
• Pay-for-performance programs should collect, report upon, and link payment to both process
and outcome measures.
• Statistical validity is essential to measurement and reporting. Data collection, data analysis,
and public reporting must utilize sample sizes large enough to ensure statistical validity,
whether at the facility, group, or individual physician level. If valid sample sizes are not
possible at the individual physician level, measurement and reporting must occur at the
medical group or facility level.
• Risk adjustment is complex, and current methodology has serious limitations. To date, risk
adjustment does not adjust adequately for confounding factors. Developers should use the best
available methods for risk adjustment and update statistical methodology as the science of risk
adjustment advances. Risk adjustment should account for factors that are outside the
physician’s control (i.e., pre-existing conditions, demographics, and co-morbidities).
• Pilot testing should not be skipped in order to introduce a program into the marketplace
quickly. Developers of P4P programs and performance measures must allow for pilot testing
that will adequately assess the reliability and validity of the measures. Measures should be
reviewed at regular intervals and revised as needed to reflect changes in the evidence base.
• A clear description of the quality measures and methods used to assess and reward physician
performance should be provided prior to implementation.
• The American Medical Association’s Physician Consortium for Performance Improvement
incorporates these characteristics into clinical measure sets that can be
used across specialties. Developers of P4P programs should consider using AMA measure sets.
• Public reporting must reflect the full scope of the health system, and must be useful to both
patients and physicians.
• Programs must allow physicians to review the data collected and its analysis prior to using it
for public reporting, rating, or rewards programs. Results should be reported back to
individual physicians and physician groups to facilitate process and systems quality improvement.
• When comparing and reporting among clinical groups or across hospitals, public reports
should include a clear notation on the complexity and limitations of risk adjustment.
• Clinics should know about any changes in program requirements and evaluation methods as
they occur. In order to compare data, changes should occur no more often than annually.
• Pay-for-performance programs should make an effort to reduce or eliminate duplicative
measurement and reporting. A common data set should be adopted across communities, and
data pertaining to a patient’s care should be collected only once.
5. Pay-for-performance programs should reward those physicians and clinics that: 1) show
measurable improvements to the process of providing quality care; 2) show measurable
improvements in patients’ clinical outcomes; 3) meet or exceed stated clinical goals; 4)
make efforts to improve the systems in which they practice; or 5) work to successfully
coordinate patients’ care among providers.
• There is value in selecting a target then rewarding physicians who meet or exceed it (absolute
value) and in rewarding physicians who make significant improvements to the quality of care
they provide, regardless of whether they make relative improvements or reach the desired target.
• The MMA supports rewards, bonuses, and systems improvements as opposed to withholds as
a more effective incentive for improving quality and building systems of care.
• Programs ought to reward groups that build systems capacity in order to deliver high quality
care (e.g., providing telephonic care, installation of HIT, computerized pharmacy order entry
and clinical decision-support systems, disease and case management, and team-based care).
Pay for performance programs should make efforts to help transition clinics from manual to
electronic patient data collection.
• There are significant costs associated with data collection and reporting. Rewards should
sufficiently cover the added practice expenses and administrative costs associated with
collecting and reporting data.
• Pay-for-performance programs should reward physicians for providing effective disease
management services (e.g., telephone care, care that is not provided in person) and
coordinating treatment efforts among primary care physicians and hospitalists or specialists.
Programs should recognize and reward groups that successfully get patients to adhere to
agreed-upon treatment plans.
• Funding for P4P programs ought to be obtained through generated savings or new
Core Readings for Session 12
Asarnow JR, Jaycox LH, Duan N, LaBorde AP, Rea MM, Murray P, Anderson M, Landon C, Tang L,
Wells KB. (2005). Effectiveness of a quality improvement intervention for adolescent depression in
primary care clinics: a randomized controlled trial. JAMA. 293(3):311-319. (This article was included
in Session 11.)
Auerbach AD, Landefeld CS, & Shojania KG. (2007). The tension between needing to improve care and
knowing how to do it. New England Journal of Medicine 357(6), 608-613.
Christianson JB, Leatherman S and Sutherland K (2007) The Synthesis project
Doran T, Fullwood C, Gravelle H, Reeves D, Kontopantelis E, Hiroeh U, Roland M. (2006). Pay-for-
performance programs in family practices in the United Kingdom. New England Journal of Medicine.
Doran T, Fullwood C, Reeves D, Gravelle H, & Roland M. (2008). Exclusion of patients from pay-for-
performance targets by English physicians. New England Journal of Medicine 359(3):274-284.
Fisher ES. (2006). Paying for performance--risks and recommendations. New England Journal of
Medicine, 355(18), 1845-1847.
Kizer K.W. (2001). Establishing health care performance standards in an era of consumerism. JAMA,
Lindenauer PK, Remus D, Roman S, Rothberg MB, Benjamin EM, Ma A, et al. (2007). Public reporting
and pay for performance in hospital quality improvement. New England Journal of Medicine, 356(5),
Pham HH, Schrag D, O'Malley AS, Wu B, & Bach PB. (2007). Care patterns in Medicare and their
implications for pay for performance. New England Journal of Medicine, 356(11), 1130-1139.
Pronovost PJ, Goeschel CA, & Wachter RM. (2008). The wisdom and justice of not paying for
“preventable complications.” JAMA 299(18):2197-2199.
Rosenthal MB, Frank FG, Li Z, & Epstein AM. (2005). Early experience with pay-for-performance: from
concept to practice. JAMA, 294(14), 1788-1793.
Rosenthal MB. (2007a). Nonpayment for performance? Medicare’s new reimbursement rule. New
England Journal of Medicine 357(16):1573-1575.
Rosenthal MB, & Dudley RA. (2007b). Pay-for-performance: Will the latest payment trend improve
care? JAMA, 297(7), 740-744.
Werner RM, Asch DA. The unintended consequences of publicly reporting quality information. JAMA.
293(10):1239-44, 2005. (This article was included in Session 11.)
Debate Topic: We should use pay for performance more actively
Session 13: Regulation (December 9) [Jane Pederson, MD]
• Changing mission (Jencks & Wilensky, 1992); (Health Care Financing Administration, 1998)*
• Changing strategy (Marciniak et al., 1998); (Jencks et al., 2000) (Jencks, Huff, & Cuerdon, 2003)*,
(Hsia, 2003), (Snyder and Anderson, 2005)*
NCQA (Iglehart, 1996)*
Physician register/databank (Baldwin et al., 1999)
Brief History of Medicare and the QIO Program
1935 – Passage of the Social Security Act provided a virtually universal retirement benefit.
1965 - Establishment of Medicare and Medicaid programs through an amendment of the SSA. Additional
legislation required utilization review to assess medical necessity and quality of care.
1972 - Public Law 92-603. This established the Professional Standards Review Organizations (PSROs)
and replaced the existing utilization review requirements for Medicare and Medicaid. This was the
Federal Government’s first wide scale attempt to monitor health care and control expenditure of federal
health monies. PSROs focused mainly on utilization issues, not really on quality. There were two PSROs
1982 - Public Law 97-248, the Tax Equity and Fiscal Responsibility Act (TEFRA). This set limits on a
cost-per-case basis for hospital payments for routine days, special care days and ancillary services. This
act also replaced the PSROs with the Peer Review Organizations (PROs).
1983 - Public Law 98-21, the establishment of the Prospective Payment System (PPS). Under PPS, the
rate paid for an inpatient hospital admission is established in advance. The amount is based on the
Diagnosis Related Group (DRG) assignment for the admission. The concept behind DRG payments is
that hospitals in the same geographic area receive the same payment for treatment of a similar illness. The
PPS was a radical departure from the previous payment system, which was based on reasonable costs.
1984 - Public Law 98-369, the Deficit Reduction Act. This required the Secretary of the Department of
Health and Human Services to enter into contracts with the PROs for conducting utilization and quality
review activities. This act also required that hospitals receiving payment under PPS have agreements with
PROs for conducting reviews. PROs reviewed 25% of all hospital discharges looking for quality of care problems.
1986 - The Omnibus Budget Reconciliation Act (OBRA). This act modified the PRO federal contract in three ways:
1) It mandated PROs to include utilization and quality review of ambulatory surgical procedures
performed in Ambulatory Surgery Centers (ASCs) and Hospital Outpatient Areas (HOPAs).
2) It required PROs to perform quality review of intervening care when a patient is readmitted to a
hospital in less than 31 days after discharge.
3) It mandated PRO review of quality of care provided to Medicare beneficiaries whose health care
benefits are assigned to an HMO.
1990-1991 – An IOM report concludes that the PRO program does little to improve the quality of care.
1993 - Establishment of the Health Care Quality Improvement Program (HCQIP) as part of the PRO 4th
SOW (Scope of Work). This represented a new approach to quality, beginning the transition from quality
assurance through retrospective peer review to prospective cooperative quality improvement projects
(shifting the entire bell curve rather than focusing only on those providers at the lower end). The core of the
HCQIP initiative is to work collaboratively with practitioners, providers, and health plans to develop and
implement meaningful quality improvement programs, assess the impact of these efforts, and share
successful strategies for quality improvement. Participation by hospitals (and other organizations) is voluntary.
The 4th SOW focused on collecting data on selected inpatient indicators and feeding this information back
to hospitals for use in their quality improvement efforts. There was limited direct involvement of the PRO
in the intervention during this SOW. The PRO would then collect remeasurement data to provide
feedback to the hospitals on their performance. One national initiative, the Cooperative Cardiovascular
Project (CCP), addressed the care of Medicare beneficiaries hospitalized with acute myocardial infarction.
Retrospective review continued to exist, but was no longer the main focus of the PRO contracts.
1996 - Start of the 5th SOW. The majority of projects in the 5th SOW were topic focused and involved a
smaller number of collaborating organizations. The topics were chosen by the PRO based on the interest
of the collaborating organizations and also on the recommendation of local health care expert panels. The
projects were structured as pre-post measurements with a structured intervention designed by the PRO.
Toward the end of the 5th SOW, PROs were required to consolidate their projects and measurement of
success was based on both improvement in quality indicators and the number of Medicare beneficiaries
potentially impacted by the intervention.
1999 - Start of the 6th SOW. There was a shift from local, topic-focused projects to nationally determined
initiatives aimed at attaining statewide impact on pre-determined indicators of clinical quality. The Health
Care Financing Administration (HCFA) chose the clinical topics. PRO performance was assessed on
improvement in statewide measures from both the inpatient and outpatient settings.
The 6th SOW also added the Payment Error Prevention Program (PEPP), an attempt to address errors
in hospital payments through an educational and quality improvement process. Initially, PEPP generated
concern because of HCFA’s focus on fraud and abuse. Through PEPP, hospitals selected projects
involving issues such as coding, short stay admissions, and readmissions.
2001 - The HCFA changed its name to the Centers for Medicare and Medicaid Services (CMS). The
PROs also became Quality Improvement Organizations (QIO).
2002 - Start of the 7th SOW. In addition to their continued work with hospitals and clinics, the QIOs now
work with nursing homes and home health agencies. There is an added focus on the public reporting of
health care quality data – similar to the increased demand for transparency and accountability in the
business sector. Publicly reported data will be derived from existing sources such as the Minimum Data
Set (MDS) in the nursing home setting. The main role of the QIO continues to be education and support
for quality improvement at the statewide and individual facility levels.
The Hospital Payment Monitoring Program (HPMP) replaces PEPP with the QIOs involved in local
projects based on need. Peer review continues as a component of the contract, focusing on beneficiary
complaints, EMTALA determinations, HINN review and coding review.
2005 – 8th SOW scheduled to begin in summer/fall. QIOs will continue to work with hospitals, clinics,
nursing homes, and home health agencies. Clinical topics were expanded, with an increased emphasis on
organizational culture.
Core Readings for Session 13
Health Care Financing Administration. (1998). PRO results: Bridging the past with the future: Executive
summary. Baltimore, MD.
Iglehart JK. (1996). Health policy report: The national committee for quality assurance. New England
Journal of Medicine, 335(13), 995-999.
Jencks SF, Huff ED, and Cuerdon T. (2003). Change in the quality of care delivered to Medicare
beneficiaries, 1998-1999 to 2000-2001. JAMA, 289(3), 305-312.
Snyder C, and Anderson G. (2005). Do quality improvement organizations improve the quality of hospital
care for Medicare beneficiaries? JAMA, 293(23), 2900-2907.