A Review of Medical Error Reporting System Design Considerations and a Proposed Cross-Level Systems Research Framework

Richard J. Holden and Ben-Tzion Karsh, University of Wisconsin-Madison, Madison, Wisconsin

Objective: To review the literature on medical error reporting systems, identify gaps in the literature, and present an integrative cross-level systems model of reporting to address the gaps and to serve as a framework for understanding and guiding reporting system design and research. Background: Medical errors are thought to be a leading cause of death among adults in the United States. However, no review exists summarizing what is known about the barriers and facilitators for successful reporting systems, and no integrated model exists to guide further research into and development of medical error reporting systems. Method: Relevant literature was identified using online databases; references in relevant articles were searched for additional relevant articles. Results: The literature review identified components of medical error reporting systems, error reporting system design choices, barriers and incentives for reporting, and suggestions for successful reporting system design. Little theory was found to guide the published research. An integrative cross-level model of medical error reporting system design was developed and is proposed as a framework for understanding the medical error reporting literature, addressing existing limitations, and guiding future design and research. Conclusion: The medical error reporting research provides some guidance for designing and implementing successful reporting systems. The proposed cross-level systems model provides a way to understand this existing research. However, additional research is needed on reporting and related safety actions. The proposed model provides a framework for such future research. Application: This work can be used to guide the design, implementation, and study of medical error reporting systems.

Address correspondence to Ben-Tzion Karsh, Department of Industrial and Systems Engineering, University of Wisconsin-Madison, 1513 University Ave., Room 387, Madison, WI 53706; bkarsh@engr.wisc.edu. HUMAN FACTORS, Vol. 49, No. 2, April 2007, pp. 257–276. Copyright © 2007, Human Factors and Ergonomics Society. All rights reserved.

INTRODUCTION

Not until the release of the 1999 Institute of Medicine (IOM) report To Err Is Human: Building a Safer Health System (Kohn, Corrigan, & Donaldson, 2000) did the general public discover what was long known, albeit not always admitted, by the health care industry: that medical care could be unsafe. The IOM report estimated that more than 1 million preventable errors occur yearly in the United States, and between 44,000 and 98,000 result in death. This figure includes more people than die annually from motor vehicle accidents, breast cancer, or AIDS. The cost in human life is accompanied by costs incurred by health care practitioners and facilities: loss of time, resources, credibility, and money as a result of delays, emergency changes in the course of care, and legal actions. Errors that result in patient harm are also associated with an estimated national cost of $37.6 billion (Kohn et al., 2000).

MEDICAL ERROR/INCIDENT REPORTING AS A PATIENT SAFETY TOOL

Currently, reporting of medical errors and incidents is one of the leading initiatives proposed to enhance patient safety. Several U.S., British,
Australian, and global organizations and promi-
nent political and patient safety players have ad-
vocated the implementation of error reporting
systems (Aspden, Corrigan,Wolcott, & Erickson,
2004; Donaldson, 2000; Runciman & Moller,
2001). Error reporting, though only one of many
components needed for a successful safety pro-
gram, can improve patient safety in the following
ways (see also Leape, Kabcenell, Berwick, &
Roessner, 1998): by helping staff understand the
nature and extent of their errors (i.e., learning/
education); by tracking system performance over
time and following changes in the system; and
even by changing the mind-set of health care
practitioners – for example, by raising reporters’
awareness of the potential for error (Weick & Sut-
cliffe, 2001) or promoting a safety culture (Kap-
lan & Barach, 2002). When errors are detected
through reporting, reactive efforts may prevent
the error from resulting in patient harm (Uribe,
Schweikhart, Pathak, & Marsh, 2002). An error
reporting system can also facilitate proactive safe-
ty efforts. A system that identifies problems and
prompts investigation of the underlying causes of
errors could potentially facilitate subsequent cor-
rection, which could include safe system design
and redesign (i.e., system improvement; Bates et
al., 1998; Leape et al., 1995).
In summary, the rationale behind error report-
ing is that with knowledge comes the power to
detect problems and their causes and then to effect
change. Successful reporting structures in place in
other high-risk fields, such as petrochemicals and
aviation (Barach & Small, 2000; Billings, 1998;
Johnson, 2002; Statement Before the Subcom-
mittee, 2000), have encouraged interest in error
reporting as a patient safety tool; in health care,
however, errors are grossly underreported, per-
haps by as much as 50% to 96% (Barach & Small,
2000). It follows that these systems have had
dubious effectiveness in facilitating change (Leape, 2002). As a result, the majority of the medical error
reporting literature is concerned with establishing
what would make for a successful reporting sys-
tem and the barriers and motivators that affect re-
porting behavior.
What follows is a review and discussion of design considerations for medical error reporting systems, followed by a review of literature on the barriers and motivators of reporting behavior. The first section discusses existing reporting systems in health care and in other industries. In the sections that follow we discuss key reporting system design considerations and literature-identified barriers to reporting. Next, we present design suggestions from the literature for removing these barriers and establishing a successful reporting system. Finally, after examining trends and gaps in the current error reporting literature, including the lack of theory in the research, we propose that the literature reviewed in this paper could be framed using a theoretically grounded, integrated model, and we present one such model. The model addresses some of the gaps in the literature and urges a new wave of research to strengthen and expand existing knowledge. With this integrative and cross-level model, we seek to provide (a) designers with a more holistic set of design considerations and (b) scientists with directions for developing and testing hypotheses about a range of medical error reporting system topics.
EXISTING MEDICAL AND NONMEDICAL
REPORTING SYSTEMS
A number of systems for reporting errors, incidents, and accidents have been implemented in health care and in other industries. Leape (2002) devoted an article to the discussion of the features and success (or lack thereof) of several popular reporting systems in health care, such as the Medication Error Reporting Program, MEDMARX, National Nosocomial Infection Survey, and Sentinel Event Reporting Program. Other, perhaps more successful, systems include the Applied Strategies for Improving Patient Safety (ASIPS) Patient Safety Reporting System (Fernald et al., 2004; Pace et al., 2003); the Medical Event Reporting System for Transfusion Medicine (MERS-TM; Kaplan, Battles, Van der Schaff, Shea, & Mercer, 1998; Kaplan, Callum, Fastman, & Merkley, 2002); the National Surgical Quality Improvement Program (American College of Surgeons, 2005); the Swiss Anaesthesia Critical Incident Reporting System (Staender, 2000; Staender, Davies, Helmreich, Sexton, & Kaufmann, 1997); the Edinburgh intensive care unit critical incident reporting system (Busse & Wright, 2000); the Australian Incident Monitoring Study (AIMS; Runciman, Webb, Lee, & Holland, 1993); and numerous others (e.g., Arroyo, 2005; Rudman, Bailey, Hope, Garrett, & Brown, 2005). An in-depth discussion of nonmedical reporting systems is outside of the scope of this paper, but readers can find excellent
reviews elsewhere (Barach & Small, 2000; Johnson, 2000a, 2003a). These medical error reporting systems differ on a number of dimensions, including the content of what is reported to them, the intended group of reporters, the format of the reports and the reporting media, how mandatory or voluntary it is to report, and the confidentiality and anonymity options in the system. These and other dimensions of medical error reporting systems are now discussed.
REPORTING SYSTEM DESIGN
CONSIDERATIONS
As mentioned, purposeful design decisions
must be made about what should be reported, who
should report, how information should be report-
ed, what should be done with reports, and so on.
Here we discuss the design issues related to these
questions from the literature on medical error re-
porting systems (see also Johnson, 2003a, for a
comprehensive review of reporting system com-
ponents and Ulep & Moran, 2005, for design con-
siderations for Web-based reporting).
What Should Be Reported?
A reporting system must establish and define
what is and what is not a reportable event in order
to standardize reporting (Aspden et al., 2004), and
taxonomies of error are critical to this. Should all
identified safety concerns/hazards be reported be-
fore they lead to errors, as is done in the nuclear
industry (Nuclear Regulatory Commission, 2005)
and in the Confidential Incident Reporting &
Analysis System (CIRAS) of the UK rail industry
(CIRAS, 2005)? Or should reporting be restricted
to actual errors? If the latter, then should every er-
ror be reported, even incidents (i.e., errors that do
not result in harm), as is the policy in the aviation
industry (see Barach & Small, 2000)? Or should
the focus be on accidents – that is, those errors that
lead to injury (e.g., Layde et al., 2002)? This inci-
dent/accident question, in particular, has received
much attention in medical and nonmedical error
reporting literature.
It has been written, for example, that events that
result in harm should be reported (and are in fact
disproportionately reported) for several reasons:
They are usually detectable, are difficult to con-
ceal, bear great costs, and are an obvious sign of
problems (Layde et al., 2002). Incidents – also re-
ferred to as potential adverse events, near miss-
es, and close calls – however, are less frequently
reported but may be important to consider. Inci-
dents and accidents may be caused by identical
conditions (Barach & Small, 2000). Additionally,
errors that did not lead to harm are valuable to
study because they may point to the system factors
and rescue and recovery efforts that can success-
fully contain the effects of errors (Kaplan & Fast-
man, 2003). Hazards are reported even less often
in health care than are incidents. Hazard reporting
is the most proactive form of reporting because
hazard identification does not require that inci-
dents, accidents, or injuries occur; all that is re-
quired is the identification of a situation that could
increase the risk of an incident, accident, or injury.
Hazards,incidents,andaccidentsallprovideinfor-
mation vis-à-vis flaws in the system, but hazards
and incidents are much more numerous and more
frequent (Suresh et al., 2004; Williamson & Mackay, 1991). The frequency of hazards and incidents thus facilitates hazard analysis, which relies
on there being some manner of data to analyze.
Furthermore, reporting frequent events like hazards and incidents can keep practitioners aware of the presence of hazards, encouraging mindfulness, alertness, and proactive actions (Kaplan, 2003; Leape, 1994; Reason, 2000; see Johnson, 2000a, for a discussion of the usefulness of incidents). Finally, because hazards and incidents are not as
emotionally charged as accidents and are not asso-
ciated with blame or cover-ups, practitioners may
be more willing to share these and may provide
more information (Barach & Small, 2000; Kaplan
& Fastman, 2003; but see Hamilton-Escoto, Karsh, & Beasley, 2006). In aviation, the Aviation Safety Reporting System (ASRS), Aviation Safety Action Program (ASAP), and the British Airways Safety
Information System (BASIS) are demonstrations
of how reporting of incidents can lead to safety
improvements, and incident reporting has proven
successful in other nonmedical job sites, such as
manufacturing (Walker & Lowe, 1998).
However, there is a downside to reporting haz-
ards and errors that do not result in harm: Some
are very unlikely to ever cause severe harm or any
harm at all; further, reporting of these and other
hazards and incidents (given their frequency)
may clog up the reporting system or overburden
reporters and analysts alike (Karsh, Escoto, Beas-
ley, & Holden, 2006; Leape, 1999; McGreevy,
1997). Furthermore, hazard reporting is difficult
because reporters may not recognize many poten-
tial hazards if no bad outcomes have occurred in
their presence (Barkan, 2002) because of a lack of
situation awareness (Stanton, Chambers, & Pig-
gott, 2001), because potentially hazardous methods of providing care are the accepted norm
(Reason, Parker, & Lawton, 1998), or because the
reporters’ level of expertise makes them more
likely to take risks that they do not perceive to be
risky. In this paper, we use the term error reporting
systems, implying a focus on reporting all errors,
irrespective of whether the error led to harm. However, we are not promoting reporting errors over
reporting only hazards, incidents, accidents, or
high-severity accidents, and in fact most of our
discussion can apply to these kinds of reporting
systems as well.
Other report content issues include whether
there should be different policies for reporting er-
rors with systemic versus person-centered root
causes (Bogner, 1994; Reason, 1990) and whether
reports should contain reporters' opinions as to the cause of the event (see Hollnagel, 1993) or suggestions for correction or "best practices" (Beasley,
Escoto, & Karsh, 2004). Obviously, in addressing
all of the aforementioned issues, definitions for
terms such as hazard, incident, error, and so on
must be agreed upon. This necessity highlights
the importance of developing mature taxonomies
of error that can be applied in health care (Dovey
et al., 2002; Kaushal & Bates, 2002; New York
State Department of Health, 2005; Robert Graham Center, American Academy of Family Physicians
Education Resource Center, & State Networks of
Colorado Ambulatory Practices and Partners,
2005). These taxonomies not only guide what to
report but can also provide an agreed-upon struc-
ture to error report data, which will facilitate sub-
sequent analysis and control steps in the safety
process.
Another major issue in designing reporting
systems is the level of detail, if any, that should be
provided regarding the who, what, when, where,
why, and how of the reported event, and this is
closely related to the issue of the format of the
reports themselves. Some medical error reporting
systems allow or require spoken, typed, or written
narratives of what happened (e.g., Pace et al.,
2003). Others include free-response fields of lim-
ited space (e.g., Medical Event Reporting System,
2005). Yet others require reporters to select mainly from a list of options using check boxes, pull-down
lists, or codes (e.g., New York State Department
of Health, 2005). Some systems offer a combina-
tion of open-ended and structured questions (e.g.,
National Aeronautics and Space Administration,
2002; Suresh et al., 2004). The success of non-
narrative formats rests on the inclusion of all the
necessary fields and options to characterize an
event. The chosen format will affect subsequent
analysis processes (e.g., how easily one can gen-
erate descriptive statistics and discover trends;
how much detail is available) as well as the design
of reporting media (e.g., phone hotlines and E-mail
may be more fitting for narratives than for selecting from a list of options). When the reporting format yields variation in how and what is reported,
some consistency can be gained through the use
of taxonomies. The common language provided
by taxonomies in addition to free-text narratives,
for instance, can retain the richness of narrative re-
ports and at the same time allow for systematical-
ly organizing and analyzing the reported data.
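As a concrete illustration (ours, not from the paper), a report format that combines a taxonomy-coded selection with a limited free-response field and an open-ended narrative might be sketched as follows; the field names and taxonomy values are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical event taxonomy; a real system would adopt an agreed-upon,
# mature taxonomy of error, as discussed above.
class EventType(Enum):
    HAZARD = "hazard"        # risky situation, no error yet
    INCIDENT = "incident"    # error that did not result in harm
    ACCIDENT = "accident"    # error that resulted in harm

@dataclass
class ErrorReport:
    event_type: EventType    # structured field chosen from the taxonomy
    location: str            # free-response field of limited space
    narrative: str           # open-ended account of what happened

    def __post_init__(self):
        # Enforce the limited-space constraint on the free-response field.
        if len(self.location) > 80:
            raise ValueError("location is limited to 80 characters")

report = ErrorReport(
    event_type=EventType.INCIDENT,
    location="inpatient pharmacy",
    narrative="Wrong-dose order caught at verification; no harm reached the patient.",
)
```

Pairing the coded `event_type` with the narrative keeps reports countable and trendable while retaining the richness of free text, in the spirit of the taxonomy-plus-narrative combination described above.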
In contrast to the manual clinician-generated reporting of hazards, errors, incidents, or accidents, data can be obtained and processed using computerized screening technologies that detect, collect, search, and analyze data, which would otherwise be done by clinicians or analysts (for reviews see Bates et al., 2003; Murff, Patel, Hripcsak, & Bates, 2003).
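The kind of computerized screening just described can be sketched, in highly simplified form, as a keyword trigger scan over free-text records; the trigger terms below are illustrative only, not a validated trigger set:

```python
# Illustrative trigger terms that may signal a reportable event; a real
# screening tool would use validated triggers and richer matching logic.
TRIGGERS = {"naloxone", "overdose", "transfusion reaction", "fall"}

def flag_records(records):
    """Return the records whose text mentions any trigger term."""
    return [rec for rec in records
            if any(term in rec.lower() for term in TRIGGERS)]

notes = [
    "Patient required naloxone after respiratory depression.",
    "Routine postoperative check; no complications noted.",
]
flagged = flag_records(notes)  # flags only the first note
```

Unlike clinician-generated reports, such a screen can run over existing records without adding reporting workload, at the cost of detecting only what its triggers cover.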
Who Should Report and How
System designers must keep in mind that mul-
tiple clinicians may witness the same event. Deci-
sions have to be made whether to encourage each
witness to report or to create a system for delegat-
ing responsibility to one person. The former may
not be preferable if the reporting system treats each
report as a separate event (Johnson, 2003b) or if it
puts an unnecessary burden on clinicians. Dele-
gation may be problematic as well because it may
cause unfair distribution of responsibility, as when
physicians delegate the responsibility to nurses
(Hamilton-Escoto et al., 2006; Kingston, Evans,
Smith, & Berry, 2004). Additionally, reports by a
single professional group, such as physicians or
nurses, may be biased to include certain facts and
types of errors and to exclude others (e.g., Kings-
ton et al., 2004; Ricci et al., 2004; Waring, 2004).
Local, Regional, National, or Specialty-
Specific Reporting Systems?
Another issue is whether separate specialties should report errors to separate databases. Specialty-based reporting may provide information on errors that are unique to the specialty. Additionally, aggregating errors within one specialty over multiple institutions may point to problems that are common within the specialty but are relatively infrequent or difficult to detect at any one facility (Suresh et al., 2004). However, data aggregated across an entire specialty may not be useful to individual practices. A similar trade-off exists between large national databases and regional systems (Barach & Small, 2000).
Mandatory Versus Voluntary Reporting
At the same time that the IOM (Kohn et al.,
2000) encouraged establishing voluntary report-
ing from individual practitioners, it also suggest-
ed that states require the mandatory reporting of
serious accidents and hazards in hospitals. The
Joint Commission on Accreditation of Healthcare
Organizations (JCAHO) now mandates reporting
of such sentinel events. Mandatory reporting sys-
tems at the state and federal level are intended to
keep organizations and practitioners accountable
for their actions (Kohn et al., 2000), punishing
continued disregard for patient safety (Flowers &
Riley, 2000), and these systems have become
understandably associated with disciplinary pur-
poses (Leape, 2002). In contrast, the purpose of
voluntary reporting systems is learning, not punishment, though this may not be perceived to be so.
Several papers discuss the failure of mandatory
systems (and the ability of voluntary systems) to
stimulate reporting behavior and address system
flaws (e.g., Barach & Small, 2000; Leape, 2002).
However, in surveys of the public, a large major-
ity favors a mandated reporting system with data
available to the public (Blendon et al., 2002; Rob-
inson et al., 2002). In a study of clinicians, physi-
cians expressed interest in voluntary reporting,
whereas clinical assistants thought that only a
mandatory system would convey the importance
of reporting errors (Hamilton-Escoto et al., 2006).
Anonymity and Confidentiality
Anonymity, confidentiality, or some manner
of protection from discovery and punishment may
be essential for potential reporters to overcome
the barrier of fear (Beasley et al., 2004; Leape,
2002; Wakefield, Uden-Holman, & Wakefield,
2005). The downside to anonymity is that it blocks
access to further information (Johnson, 2000b).
Anonymous systems rely on reporters to provide
sufficient information because there can be no
follow-up (but see Runciman, Merry, & Smith,
2001, who believe that anonymous reporting can
provide sufficient data). Additionally, anonymous
systems cannot be used for the purpose of individual accountability, something that even clinicians may like in a reporting system (Evans, Berry,
Smith, & Esterman, 2004). Systems that are not
anonymous allow follow-up but might require
protection from punishment before they can be
trusted (e.g., Beasley et al., 2004; Britt et al.,
1997). This is a characteristic of many confiden-
tial nonmedical error reporting systems, such as
the ASRS, which provides legal immunity to all
reporters (Barach & Small, 2000). However, as
long as there is fear of any sort remaining – and
this may be fear of shame or embarrassment,
which cannot be removed through legal protection – there may be reluctance to report to a system
requiring identifying information (e.g., Kingston
et al., 2004).
System Design to Support Social-Cognitive
Processes
A final consideration is the fact that error reporting is a social-cognitive process and must be understood as such. For instance, reporting involves the processes of encoding, storage, and retrieval of mnemonic information. Reporting accuracy may thus be affected by memory, interference (e.g., distractions), and decay (e.g., when much time passes between the error and the report; e.g., Suresh et al., 2004). Reporting is susceptible to limitations
and biases in memory and reasoning – for exam-
ple, causal attribution and hindsight biases (for
definitions and discussion, see Billings, 1998;
Henriksen & Kaplan, 2003; Kaplan & Fastman,
2003; Parker & Lawton, 2003). Other conse-
quences of the human social-cognitive system are
that reporters will seek social and decision support
(Hamilton-Escoto et al., 2006) and that habits,
beliefs, affect, attitudes, motivation, and other so-
cial and cognitive factors may influence reporting
behavior (Holden & Karsh, 2005; Kingston et al.,
2004).
Additional Concerns
Reporting system designers and implementers
will have to make further decisions not discussed
previously in this paper, including decisions about
when reports should be filed (e.g., when during
the work schedule) and about the makeup of the
design and implementing team. Several studies
have demonstrated that clinicians are especially interested in being involved in the design and rollout of reporting systems and that clinician suggestions may be quite useful (Beasley et al., 2004; Hamilton-Escoto et al., 2006; Karsh, Escoto, et al., 2006). Clinician participation may engender commitment
and better design/implementation, but it requires
awareness of cultural barriers within the organi-
zation and between professional groups.
For many of the design dimensions we have
noted, it is far too early to determine which is
“best” or even which is most fitting for which con-
text. Much more research is necessary to under-
stand (a) which design options are preferable in
which context and (b) what the mechanisms are
that result in reporting system success, given cer-
tain design characteristics. For instance, there is a
need for comparative research to determine how
medical error reporting formats differ in terms of
usability. Further, research should examine the
mechanisms by which certain formats affect re-
porting usability – perhaps using Nielsen’s (1993)
dimensions of usability as a theoretical framework
for exploring these mechanisms.
BARRIERS TO REPORTING
For whatever reasons, medical accidents and incidents are substantially underreported (e.g., Cullen, Bates, Small, Cooper, & Nemeskal, 1995; Stanhope, Crowley-Murphy, Vincent, O'Connor, & Taylor-Adams, 1999). Additionally, there is uneven reporting across practitioners, depending on their position or grade (Lawton & Parker, 2002; Vincent, Stanhope, & Crowley-Murphy, 1999; Waring, 2004). Without reporting, none of the objectives of reporting systems can be realized. Thus, a major focus of the literature has been to understand the barriers to reporting; this section discusses such barriers.
Busyness and Fatigue
An obvious but important fact is that doctors, nurses, and pharmacists, as well as other critical members of the health care community, are extremely busy. Although several states have restricted the maximum hours that nurses and pharmacists can work, and the Accreditation Council for Graduate Medical Education has restricted resident duty hours, work burden remains an issue. An obvious deterrent to reporting, then, is that the potential reporter is too busy and too tired or overloaded to report (ironically, busyness and fatigue may also raise the frequency of reportable medical errors). The literature consistently finds factors such as "time involved in documenting an error" and "extra work involved in reporting" (Suresh et al., 2004) to be leading self-reported barriers to reporting, and this is especially true of clinicians who experience high workload (Rogers et al., 1988). This is the case not only for physicians (e.g., Figueiras, Tato, Fontainas, Takkouche, & Gestal-Otero, 2001) but for pharmacists (e.g., Green, Mottram, Rowe, & Pirmohamed, 2001), surgical and medical specialists (e.g., Eland et al., 1999), midwives (e.g., Vincent et al., 1999), and nurses (e.g., Walker & Lowe, 1998) as well, though there may be intra- and interprofessional differences (Katz & Lagasse, 2000; Vincent et al., 1999).
Difficult Reporting Schemes and Lack of
Knowledge About the Reporting System
Assuming that the error is noticed (and many incidents may not be; Cullen et al., 1995; Wakefield, Wakefield, Uden-Holman, & Blegen, 1996), clinicians may be unaware of the existence of the reporting system or of the system's purpose. Several studies report clinician reluctance or failure to report as a result of being unaware of the need or ability to report (Eland et al., 1999), not knowing who should report (Robinson et al., 2002), or being unsure of what to report or how to do it (Green et al., 2001; Jeffe et al., 2004; Rogers et al., 1988). The severity of the error's effect, the error's proximity to the patient in the process of events, whether the error resulted from behavior that complied with or violated procedures, and whether or not the error was preventable are factors moderating reporting (Antonow, Smith, & Silver, 2000; Katz & Lagasse, 2000; Lawton & Parker, 2002), and this may partially be attributable to a misunderstanding of the reporting system or of the definition of error (Wakefield et al., 2005).
Additionally, clinicians claim that reporting forms or schemes are too burdensome or complicated (Figueiras et al., 2001; Wakefield et al., 2005) or that they cannot locate reporting forms (Rogers et al., 1988). Johnson (2003b) suggested that difficulty of use stems from poorly designed reporting systems. For example, paper reporting forms are not always available or are difficult to find, and electronic reporting systems are sometimes inflexible, either constraining data entry or making it difficult (see Karsh, Escoto, et al., 2006, who reported clinicians' suggestions for electronic reporting system interfaces). Again, these trends were found in a variety of clinicians (e.g., pharmacists, general physicians, nurses), though inexperienced reporters, junior staff, and physicians tend to find reporting more difficult and tend to lack knowledge as compared with experienced reporters, senior staff, and nurses/midwives, respectively (Figueiras et al., 2001; Jeffe et al., 2004; Uribe et al., 2002; Vincent et al., 1999).
Aversive Consequences of Reporting
Other reasons for not reporting are rooted in the aversive nature of the outcomes associated with reporting and the fear that they generate. This fear has been found among junior- and senior-level physicians and nurses (Vincent et al., 1999; Weingart, Callanan, Ship, & Aronson, 2001). A prevalent blame culture contributes to nonreporting in a variety of ways. Doctors and nurses alike are fearful of disciplinary or legal action being taken against them or against their colleagues if they disclose an error event (Leape, 2002). It does not help that many state and federal reporting systems are actually established for disciplinary purposes. Participants in one study differed greatly in their opinions on what should be reported, what would be reported in a realistic situation, and what was reported in actuality; the discrepancy may be in part attributable to their beliefs that error reporting is a disciplinary tool (Cullen et al., 1995). The general fear of reprimand is well established in all clinicians, but perhaps especially so in nurses, and more so in junior staff (Vincent et al., 1999; Walker & Lowe, 1998). Nurses also fear being held liable by authorities and being "found out" by peers, patients, and doctors (Wakefield et al., 1996); some also feel uncomfortable reporting coworkers, either out of concern for the coworkers or because the coworkers (e.g., physicians) have authority over them (Karsh, Escoto, et al., 2006; Uribe et al., 2002). Moreover, social repercussions might ensue if a reporter, especially a nurse, were found out to be a "whistleblower" (Hamilton-Escoto et al., 2006; Kingston et al., 2004).
Legal consequences are yet another concern
for potential reporters (Horton, 1999; Lawton &
Parker, 2002; Leape, 2000). Accordingly, both
the IOM and JCAHO refer to the current medical
liability system as a major barrier to reporting
(JCAHO, 2005; Kohn et al., 2000; see also Sage,
2003). Physician opinion is in accord (Robinson
et al., 2002), demonstrating more concern with the
rates of malpractice insurance than with the error
rate (Blendon et al., 2002). Although the degree
to which fear of litigation prevents reporting may
differ as a function of clinician group or seniority
(Katz & Lagasse, 2000; Uribe et al., 2002; Vincent
et al., 1999), such fear is often cited as a barrier
to reporting (but see Rogers et al., 1988).
One must also consider the emotional effects of error on erring individuals, as these effects reveal a good deal about their reasons not to report. A large majority of physicians can recall at least one critical error in their practice (Christensen, Levinson, & Dunn, 1992; Newman, 1996). Erring physicians have initial feelings of agony and anguish, followed by the onset of guilt, anger, embarrassment, and humiliation. They fear legal action and being found out, and they generalize the mistake to overall incompetence, both as a physician and a person (Christensen et al., 1992). Some physicians cope through self-disclosure (Christensen et al., 1992; Resnick, 2003), and many desire some sort of support. Many do not actually receive any support, and most who do receive it from their spouse; many physicians are not willing to offer their own unconditional support to a colleague in a hypothetical situation (Newman, 1996). Such findings point to the stigma associated with admitting fallibility (see Dickey, Damiano, & Ungerleider, 2003; Gallagher, Waterman, Ebers, Fraser, & Levinson, 2003; Kingston et al., 2004; Pilpel, Schor, & Benbassat, 1998). Error reporting may serve as a way to cope through disclosure but, perhaps, only if it offers support, does not bring about feelings of fallibility, and does not exacerbate emotional suffering. Indeed, threats to self-image and psychological comfort may result in reluctance to discuss errors (Sexton, Thomas, & Helmreich, 2000). This may discourage reporting (Leape, 1999; Reinertsen, 2000) and promote the so-called code of silence (Barach & Small, 2000). Barring the removal of stigma and misperceptions of infallibility, designers of reporting systems must be aware of this fact.
In summary, findings point to a blame culture
in health care, one that overemphasizes discipli-
nary action or other aversive consequences such
as shame and the tendency to “shoot the messen-
ger” (see “pathologic organizations” in Westrum,
Such a blame culture discourages practitioners
from admitting to and reporting errors. An
alternative is the safety culture, or “just culture”
(Marx, 1999, 2001), and there is some agreement
that establishing such a culture will remove the
barrier to reporting posed by the potential of
aversive consequences (e.g., Arroyo, 2005; Barach &
Small, 2000; Kaplan & Barach, 2002; Reason, 2000).

264 April 2007 – Human Factors
Lack of Perceived System Usefulness
Evidence exists that an apparent lack of report-
ing system usefulness may also contribute to non-
reporting. Reporting systems may have multiple
potential functions or purposes, but chief among
those is the identification and correction of system
flaws through the analysis of reported data. If a re-
portingsystemisnotperceivedtohelpaccomplish
this purpose, then it may be thought of as useless
and reporting as a waste of time. Perceptions of
usefulness may be gained in two ways: (a) by
actually using reported data to guide system improve-
ment and (b) by making reporters aware that this
is happening, which is referred to as feedback.
Accordingly, the Williamson and Mackay (1991)
reporting method recommends that medical errors
be recorded, analyzed for clues to the problemat-
ic system components behind a larger number of
errors, used to eliminate or correct these system
components, and shared with others through feed-
back (see also Johnson, 2003b; Kaplan & Fast-
man, 2003). Studies support the idea that lack of
follow-up or a perceived uselessness of reporting
may discourage nurses (Wakefield et al., 1996;
Walker & Lowe, 1998) and physicians (Uribe et
al., 2002) from reporting, whereas it has been sug-
gested that reporting systems that are useful and
that are perceived to be useful by reporters can
promote reporting behavior (e.g., Kaplan et al.,
1998). A participant in one of Jeffe et al.’s (2004)
focus groups echoed many clinicians’ opinions
toward reporting systems that do not provide feedback
to reporters: Reporting to such systems is
“wasted energy.” If clinicians do not see error re-
porting as a means of bettering the situation and
correcting the underlying factors that initially led
to the error, then what reason is there to report,
other than adhering to the law?
DESIGNING A MORE EFFECTIVE
REPORTING SYSTEM
Thus far, we have identified important charac-
teristics of reporting systems and barriers to re-
porting. In this section we turn to the literature on
reporting as well as the human factors and safety
disciplines to approach the problem of designing
an effective reporting system.
To begin with, a necessary – but perhaps not
sufficient – first step to designing an effective
reporting system is to remove the reporting barri-
ers we have described. The most notable barriers
are reporting systems that are difficult to use or not
time efficient, combined with a busy and fatigued
workforce; lack of knowledge about the report-
ing system; fear of aversive consequences of re-
porting; and a perceived lack of usefulness of
reporting. Table 1 takes a synthesized view of the
literature and draws together several authors’
suggestions for reducing these barriers. For example,
to address the barrier of reporting system diffi-
culty, one common suggestion in the literature is
that system design should fit with clinician work
factors such as busyness and fatigue. The idea of
“fit” (Holden & Karsh, 2005; Karsh, Escoto, et al.,
2006) more generally suggests that a successful
reporting system design must be relatively com-
patible with the characteristics of the workplace
(its users, tasks, environment, organizational
factors, etc.). This means that many of the suggestions
available in the literature may not be successful
in every system; accordingly, more research is
necessary to understand the contextual factors and
health care-specific nuances that determine the
effectiveness of the suggestions in the literature.
Additionally, the table includes suggested
solutions for dealing with users’ lack of knowledge of
the reporting system and with the barrier of fear.
In regard to the latter, it has been suggested that
organizations should transition from a culture of
blame, shame, and quick fixes to a “just culture”
(Kaplan & Fastman, 2003). A just culture is one
in which individuals are not blamed or punished
if an error occurs, as long as there was no intent to
harm (Marx, 1999, 2001), and reporting of errors
is encouraged because reporting can result in
learning. In these ways a just culture avoids the
tension (or even injustice) that exists in a blame
culture, wherein it is not acceptable to err yet it is
required that practitioners report (admit to) these
errors. Reporting systems within such a culture, or
more generally ones that provide anonymity (and
thus cannot be punitive), have been predicted and
shown to facilitate more reports than systems in
which punishment is a possibility (Kaplan et al.,
1998; Kingston et al., 2004; Leape, 2002).

TABLE 1: A Synthesis of the Literature Yields Suggestions for Addressing Reporting Barriers

System Difficulty and Inefficiency

Include interface specialists in the design process in order to design intuitive and usable reporting forms with clear instructions. Designing a reporting form that uses check boxes and a limited narrative reporting interface can save time and effort. (References: Johnson, 2003b; Kaplan & Fastman, 2003; Vincent et al., 1999)

Limit the length and difficulty of the reporting process by providing quicker reporting alternatives such as Web reporting or phone/hot line reporting. (References: Beasley et al., 2004; Cullen et al., 1995; Jeffe et al., 2004; Kingston et al., 2004; Rudman et al., 2005; Wilf-Miron, Lewenhoff, Benyamini, & Aviram, 2003)

Reporting system components associated with system ease of use and time efficiency should fit the (busy and fatiguing) work flow of health care practice. (References: Beasley et al., 2004; Holden & Karsh, 2005; Karsh, Escoto, et al., 2006)

Lack of Knowledge About the Reporting System

Define the purpose of the system at the outset and define what must be reported. Communicate these definitions (e.g., explain the working definition of medical errors) and communicate practitioners’ reporting responsibilities. (References: Beasley et al., 2004; Flink et al., 2005; Jeffe et al., 2004; Stanhope et al., 1999; Uribe et al., 2002; Williamson & Mackay, 1991)

Provide training that builds knowledge about the system and how to use it, and then provide continuing education about the system. Continue to provide system use information (e.g., a list of reportable incidents) that can be accessed at any time. The training should be tailored to different types of health care professionals. (References: Desikan et al., 2005; Flink et al., 2005; Hart, Baldwin, Gutteridge, & Ford, 1994; Jeffe et al., 2004; Kingston et al., 2004; Uribe et al., 2002; Vincent et al., 1999)

Fear of Aversive Consequences of Reporting

Institute a “no-blame,” nonpunitive policy (or “just culture”) that encourages learning, not punishment, and in which practitioners are comfortable reporting errors. Errors must not be thought of as shameful, and clinicians must be supported, not shunned, when an error occurs. As the focus should be on the system, not on blaming the individual, begin eliminating the blame culture by educating clinicians about system-based versus person-based causes of errors. (References: Arroyo, 2005; Barach & Small, 2000; Bates et al., 1995; Cullen et al., 1995; Jeffe et al., 2004; Kaplan, 2003; Kingston et al., 2004; Leape, 1994, 2002; Marx, 2001; Reason, 2000; Vincent et al., 1999; Wakefield et al., 1996, 2005; Wilf-Miron et al., 2003)

Address existing legal barriers to reporting. Provide reporters protection and immunity from disciplinary action. Carry out disciplinary actions only if the error is egregious. (References: Barach & Small, 2000; Harper & Helmreich, 2005; Phillips, Dovey, Hickner, Graham, & Johnson, 2005; Vincent et al., 1999; Wilf-Miron et al., 2003)

Protecting reporters can be facilitated by providing the option for confidential or anonymous reporting. Confidential and anonymous reporting systems should not forsake accountability. (References: Beasley et al., 2004; Jeffe et al., 2004; Rudman et al., 2005; Runciman et al., 2001; Uribe et al., 2002)

Clinicians can be better protected if reports are analyzed by external or independent organizations. At the very least, access to the error report database should be limited. (References: Karsh, Escoto, et al., 2006; Kingston et al., 2004; Suresh et al., 2004; Uribe et al., 2002)

Reporting should be voluntary until a safer and more accepting reporting culture is established in health care. At the very least, there should be an option of reporting to a nonpunitive local reporting system alongside any reporting systems mandated by government or organizations responsible for oversight. If there are both government-mandated and voluntary reporting systems in place, make clear distinctions between these systems. (References: Beasley et al., 2004; Cohen, 2000; JCAHO, 2005; Kohn et al., 2000)

Do not design systems in which individuals can be “found out.” For example, remove large logos that show up on reporting system interfaces, alerting one’s colleagues as to what he or she is doing. (References: Johnson, 2003b)

Perceived Uselessness of Reporting

Develop a useful process for selecting and analyzing reports. Be careful of reporting and analytical biases that may render any corrective action ineffective. (References: Harris et al., 2005; Johnson, 2002)

Do not overload the system with reports to the extent that effective analysis cannot be carried out. One way to do this would be to create specialty-based systems that provide useful and relevant expert feedback. The reporting system could thus be tailored to fit with clinicians’ specific problems. (References: Harper & Helmreich, 2005; Leape, 2002; Suresh et al., 2004)

Establish a group or task force to process reported data and generate strategies for improvement. Invest sufficient funds and dedicated personnel in this task to establish useful data storage and analysis systems. (References: Barach & Small, 2000; Cullen et al., 1995; Flink et al., 2005; Jeffe et al., 2004; Johnson, 2003b; Uribe et al., 2002)

In general, take corrective actions following the analysis of reports and provide feedback demonstrating that actions were taken. More specifically, analysts can use quality improvement techniques such as total quality management and continuous quality improvement to follow up on reports. To make this a manageable task, priorities may need to be assigned to each potential follow-up effort. (References: Cullen et al., 1995; Harper & Helmreich, 2005; Kaplan & Fastman, 2003; Kingston et al., 2004; Leape, 2002; Rudman et al., 2005; Suresh et al., 2004; Vincent et al., 1999; Wakefield et al., 1996; West et al., 2005; Williamson & Mackay, 1991)

Put emphasis on recovery efforts following error reports so that it is immediately obvious that reporting is useful for improvements in patient care. (References: Barach & Small, 2000; Kaplan & Fastman, 2003; Wakefield et al., 1996)

Provide feedback to those submitting reports as well as regular feedback identifying recent errors, associated hazards, and hazard control strategies. Additional feedback can be provided to clinicians on what they should and should not be doing. Feedback should include encouragement to continue to report. One suggestion on implementing feedback is to provide it in the form of summary data that are pertinent to practice; it should be informative, anonymous, and nonaccusatory, and it may be best if the feedback does not come from a supervisor. (References: Beasley et al., 2004; Flink et al., 2005; Harper & Helmreich, 2005; Jeffe et al., 2004; Kingston et al., 2004; Martin et al., 2005; Uribe et al., 2002)

A just culture can also promote the usefulness of reporting if it is able to encourage learning from errors (Reason, 1997, 2000).
Other suggestions for overcoming a perceived
lack of usefulness are presented in Table 1. Useful
reporting systems are ones that meet objectives
established by individuals, organizations, or
industries. Typically the objectives are related to perfor-
mance, safety, and quality of care. One common
objective is the correction of system flaws that
lead to errors. However, by itself, reporting errors
cannot meet this objective (Johnson, 2002, 2003b).
Kaplan and Fastman (2003) reviewed steps to take
in processing reported data in a useful way so that
objectives can be met. Perhaps the most crucial
test of usefulness is whether reported data are
followed up on in a way that allows sense to be made
of them and system improvement to result. This
might require tools such as failure modes and
effects analysis, sociotechnical system variance
analysis, fault trees and root cause analysis, or any
number of other methods for investigating acci-
dents and incidents.
When data are processed and followed up on,
the system may be objectively useful in some
sense, and it follows that such systems are asso-
ciated with more reporting (Cullen et al., 1995;
Johnson, 2003b; Kaplan & Fastman, 2003). Even
when processing and follow-up procedures are
established, a further consideration should be
whether potential reporters actually perceive the
system to be useful. This is because individual
assessments of usefulness may be the deciding
factor for reporting behavior (Holden & Karsh,
2005). This can be illustrated in the case of a prac-
titioner whose reported data are analyzed and
investigated without his or her awareness of this
fact. Thus, from the reporter’s perspective, the
system may be an “administrative black hole”
(Kaplan & Fastman, 2003, p. ii69) and thus not
very useful.
Feedback, whether about the status of one’s
report or the corrective actions that it generated, is
one way in which reporters can become aware of
the usefulness of a system. Feedback can be given
immediately following a report through prompts
in an electronic reporting system or through Web
sites, newsletters, E-mail, list server messages, or
scheduled meetings (Beasley et al., 2004; Martin,
Etchegaray, Simmons, Belt, & Clark, 2005; Suresh
et al., 2004). Feedback in the form of identified er-
rors or hazards in the system and associated con-
trol strategies may also contribute to the actual
usefulness of the system to the extent that the pur-
pose of the system includes risk communication
and error/hazard management (Kaplan & Fast-
man, 2003; Karsh, Escoto, et al., 2006). For these
reasons, it is believed that building timely feed-
back into a medical error reporting system builds
trust and encourages reporting (e.g., Beasley et al.,
2004; Kaplan et al., 1998; Suresh et al., 2004).
Because a system’s usefulness depends on its
ability to achieve its purpose, a system will be per-
ceived to be useful to the extent that practitioners
are aware of its purposes. A reporting system de-
signed to capture data on near miss recovery ef-
forts may be quite effective in this respect, but it
might not be perceived as useful by individuals
who believe that a successful system should re-
duce hazards.
Finally, there are several roadblocks to achiev-
ing usefulness. First, data processing depends on
the richness of the data. Thus, data that are not de-
tailed enough either cannot be usefully processed
or would require follow-up with the reporter. The
latter may not be possible in an anonymous sys-
tem. Thus, there may be a trade-off between hav-
ing a useful system and providing anonymity
(Barach & Small, 2000; but see Runciman et al.,
2001). Second, not every report can be followed
up on (Kaplan & Fastman, 2003; Leape, 1999),
and data processing and storage may be quite
costly (Johnson, 2003b). Thus, prioritizing,
assessing, and other data treatment methods may be
necessary to protect the system from becoming
ineffective (Kaplan & Fastman, 2003). Similarly,
corrective actions must be prioritized; too many
changes at once may destabilize a system.
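Because not every report can be followed up on, some triage is needed before analyst time is committed. The sketch below is our own illustration in Python; the ranking rule and all names are invented for the example, not drawn from the literature reviewed. It prioritizes hazards by worst observed severity and then by frequency, keeping only as many follow-up efforts as a fixed budget allows:

```python
from collections import Counter
from typing import NamedTuple


class Report(NamedTuple):
    hazard: str    # short label for the reported hazard (hypothetical)
    severity: int  # 1 (low) to 5 (severe), assigned at intake


def prioritize(reports: list[Report], budget: int) -> list[str]:
    """Rank hazards by worst observed severity, then by frequency,
    and keep only as many follow-up efforts as the budget allows."""
    freq = Counter(r.hazard for r in reports)
    worst: dict[str, int] = {}
    for r in reports:
        worst[r.hazard] = max(worst.get(r.hazard, 0), r.severity)
    ranked = sorted(freq, key=lambda h: (worst[h], freq[h]), reverse=True)
    return ranked[:budget]


reports = [
    Report("look-alike drug labels", 4),
    Report("look-alike drug labels", 3),
    Report("pump misprogramming", 5),
    Report("illegible handwriting", 2),
]
print(prioritize(reports, budget=2))
# prints ['pump misprogramming', 'look-alike drug labels']
```

A real system would weigh more than severity and frequency (e.g., recoverability, cost of control), but the point stands: explicit prioritization keeps the analysis and control stages from being overwhelmed.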
Many of these suggestions can be implemented
by designing better reporting systems or
redesigning the broader work system. This implies that
underreporting and reporting system failure are
the result of design, not of clinician motivation.
Low reporting is to be expected when a reporting
system is not integrated into patient care, such as
when a system takes time away from clinical work
without ever improving clinical outcomes.
Some of the suggestions in Table 1 are taken
from empirical work, but there is a need to validate
these suggestions and to conduct research that
can guide more specific design suggestions. Sim-
ilarly, many other practices can be suggested for
reporting of errors, but these need to be evidence
based. A checklist for procedures with a check for
every step done according to plan – allowing
errors to be “reported” in the process – can help more
smoothly embed reporting into current practices.
(We thank an anonymous reviewer for this sug-
gestion.) The need for studies demonstrating the
effectiveness of such a reporting system is as great
as that for studies validating the suggestions in
Table 1.
SUMMARY OF REPORTING SYSTEM
LITERATURE
A review of the findings of such studies reveals
several trends. First, many components must be
considered in the design of error reporting sys-
tems, and the decisions must lead to a reporting
system that fits within the context of the imple-
menting facility (its culture, goals, staff, user
needs, practices, organizational characteristics,
etc.). Theories of reporting system success and fu-
ture research may need to go beyond prescribing
one-size-fits-all solutions to reporting system de-
sign and, instead, further explore requirements for
compatibility or fit (see, e.g., Karsh, Holden, Alper,
& Or, 2006). Second, barriers such as busyness
and fatigue, lack of knowledge about the existence
and proper use of the reporting system, a culture
of blame, lack of organizational support, and a lack
of usefulness or perceived usefulness of reporting
systems may lead to nonreporting. These factors,
too, need to be included in future research and
design attempts.
The reporting literature also demonstrates
considerable differences between professional
groups, such as physicians and nurses, and be-
tween levels of seniority within the same group.
These differences in attitudes, behaviors, barri-
ers, and incentives may stem from deep-rooted
professional and cultural differences (Hamilton-
Escoto et al., 2006; Kingston et al., 2004). The
mechanisms behind these differences must be
understood in order to inform the design of effec-
tive reporting systems. Finally, it is obvious that
multiple barriers and incentives influence report-
ing behavior and that there is no “silver bullet” for
increasing the amount of reporting; instead, mul-
tiple factors need to be addressed simultaneously
if one expects to achieve consistent, successful
reporting. Along the same lines, interactions between
barriers must be further studied – for instance,
some write of a trade-off between easy-to-use,
time-efficient reporting systems and the amount
of content that can be used to guide future redesign.
Research should attempt to confirm this trade-off
and to better understand it.
LACK OF THEORY IN THE REPORTING
LITERATURE
Conspicuously absent from the reporting literature
is a theoretically grounded organizing framework
that can explain findings and guide successful
reporting system design. With very few notable ex-
ceptions (Holden & Karsh, 2005; Karsh, Escoto,
et al., 2006; Kingston et al., 2004), the reporting
literature is atheoretical despite the availability of
a wide variety of theoretical frameworks from the
fields of human factors, social and cognitive psy-
chology, communications science, management,
and technology change/acceptance, to name a few.
As Holden and Karsh (2005) noted, “Employing
a theoretical framework may provide more in-
sightful evaluation and interpretation of findings
and may guide the selection of factors to explore
and hypotheses to test. Conversely, an atheoreti-
cal approach risks missing key factors, is weak
for explaining how findings within and between
studies interact, and makes it difficult to make
generalizations about future findings or – impor-
tantly – about practical design decisions” (p. 131).
To demonstrate how existing theory from the tech-
nology acceptance and adoption literature could
be used to frame results from a medical error sys-
tem study, Karsh, Escoto, et al. (2006) presented
a multilevel systems model that integrated inno-
vation diffusion theory, sociotechnical systems
theory, and the technology acceptance model to
explain system design and implementation con-
siderations at the organization, system, and indi-
vidual user levels.
For the same reasons, Holden and Karsh (2005)
have also demonstrated the usefulness of theories
of motivation, decision making, and technology
change and acceptance for understanding report-
ing behavior. In one of the few other instances of
using theory to guide medical error reporting re-
search, Kingston et al. (2004) utilized behavioral
modeling theory to frame their focus group
findings. Because of the lack of theory guiding
medical error reporting system research, it is important
to begin to develop a framework that can guide the
pursuit of testable models of medical error report-
ing. Furthermore, theoretical frameworks for re-
porting can be – but have rarely been – inductively
developed from the ground up, based on the evi-
dence accrued in the reporting literature. One pur-
pose of the current paper is to present one such
theoretical model. This model is grounded in a
multiple-level systems framework and in the
human factors engineering concept of fit. At the same
time, it draws on the findings from the literature
reviewed here, integrating much of what is known
and illuminating what is not known.
A THEORETICAL FRAMEWORK FOR
RESEARCH AND DESIGN
In Figure 1, a model is introduced that can serve
as a framework for addressing the research and
design gaps identified in the literature. Specifical-
ly, the model provides a framework for testing hy-
potheses about why the barriers and incentives we
have discussed influence reporting behavior and
reporting system success. As indicated, success is
defined relative to the purpose of the system and
can include appropriate reporting by appropriate
individuals, useful analysis of the reported data,
implementation of hazard control strategies that
follow from the analyzed data, and evaluation of
these strategies. At the same time, the model pro-
vides an integrated framework for error reporting
system designers in that it demonstrates how error
reporting systems must be designed to fit within
the complex, hierarchical health care delivery sys-
tem and shows that design must include consid-
eration for the reporting, analysis, control, and
evaluation stages.
Reporting is only one step in a larger cycle of
required safety actions. Additional steps are ana-
lyzing reports to determine whether and how to
control reported errors, developing and implement-
ing engineering and administrative interventions,
and evaluating system performance following re-
design. Even these safety activities are only part
of what should be a much larger set of safety
activities, including, among other things, proactive risk
analysis, proactive hazard control, hazard inspec-
tion, and injury surveillance (Smith, Carayon, &
Karsh, 2001; Smith, Karsh, Carayon, & Conway,
2003).
The reporting-analysis-control-evaluation cycle
is illustrated in Figure 1; as it shows, the principal
actors involved in reporting, analysis, control, and
evaluation are referred to as reporters, analysts,
change agents, and evaluators, respectively. Al-
though a single person could serve multiple safe-
ty roles, limitations on time and training make this
difficult; thus, in health care, clinicians are primar-
ily reporters and are not often involved in analysis,
control, or evaluation. Reporting as an activity can
be carried out independently of the other steps in
the cycle, depending on the organization’s objec-
tives for the reporting process. If the objective is
strictly “learning” (Kaplan, 2003; Leape, 2002;
Leape et al., 1998), then it is not necessary to con-
trol identified hazards, only to conduct analyses
to learn about existing hazards. If the objective is
“system improvement” (Beasley, Escoto, & Karsh,
2004; Kaiser Permanente, National Quality Fo-
rum, & Drucker, 2000), then hazard control must
be carried out and evaluated. Thus, under some
objectives, hazard control activities may not be
carried out and reporting systems may therefore
not always lead to safer systems from a hazard
reduction point of view. Certainly, the purpose of the
reporting system – especially the extent to which
reducing and eliminating hazards is a priority –
will affect the process of reporting and the many
considerations that need to be made in designing
this process, such as who should report and what
should be reported.
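To make the structure of the cycle concrete, the stages, their principal actors, and the dependence of later stages on the system's stated objective can be sketched as follows. This is an illustrative sketch only: the stage names, roles, and objectives come from the model described above, but the code itself, including the function stages_for_objective, is our own invention.

```python
from enum import Enum, auto


class Stage(Enum):
    """The four stages of the safety cycle."""
    REPORTING = auto()
    ANALYSIS = auto()
    CONTROL = auto()
    EVALUATION = auto()


# Principal actors at each stage; in health care, clinicians are
# primarily reporters and rarely fill the other three roles.
ACTORS = {
    Stage.REPORTING: "reporters",
    Stage.ANALYSIS: "analysts",
    Stage.CONTROL: "change agents",
    Stage.EVALUATION: "evaluators",
}


def stages_for_objective(objective: str) -> list[Stage]:
    """Stages required by the system's stated objective: a "learning"
    objective needs only reporting and analysis, whereas "system
    improvement" also requires hazard control and evaluation."""
    if objective == "learning":
        return [Stage.REPORTING, Stage.ANALYSIS]
    if objective == "system improvement":
        return [Stage.REPORTING, Stage.ANALYSIS,
                Stage.CONTROL, Stage.EVALUATION]
    raise ValueError(f"unknown objective: {objective!r}")


print([s.name for s in stages_for_objective("learning")])
# prints ['REPORTING', 'ANALYSIS']
```

Under a "learning" objective the cycle stops after analysis, which mirrors the point above that a reporting system need not always lead to hazard reduction.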
The central concept of the model is fit, concor-
dant with a human factors/systems approach to
design. An earlier study by Karsh, Escoto, et al.
(2006) expanded on the application of the concept
of fit to error reporting design; here we provide
only an abbreviated discussion. Fit is the product
of interactions between error reporting technolo-
gy characteristics and the various subcomponents
of the work system in which the technology is
nested. In the model, several work system levels
are specified; at each level are factors that deter-
mine fit and, therefore, the success of reporting,
analysis, control, and evaluation. For instance, at
the work group level, work design factors (e.g.,
staffing ratios) interact with the reporting technol-
ogy design (e.g., reporting format) to determine
fit. In this case, fit may mean an easy-to-use and
usable reporting system relative to the amount of
busyness and fatigue faced by clinicians. Here, fit
encourages reporting. Likewise, other instances of
fit can be determined based on the reporting liter-
ature reviewed in this paper. For example, the fit
between the culture of the organization and the
anonymity options available for reporting can
determine whether or not clinicians report.
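The interaction view of fit can be sketched as pairwise checks between technology characteristics and work system characteristics at different levels. This is a toy illustration: the two rules and the parameter names are invented for the example, and the model does not prescribe them.

```python
# "Fit" as an interaction between reporting technology characteristics
# and work system characteristics. The two rules below are invented
# for this example; the model does not prescribe specific rules.

def fit_issues(technology: dict, work_system: dict) -> list[str]:
    """Return mismatches between a reporting technology and the
    work system in which it is nested."""
    issues = []
    # Work group level: reporting format vs. workload (busyness, fatigue).
    if technology["minutes_to_report"] > 5 and work_system["workload"] == "high":
        issues.append("reporting takes too long for a busy work group")
    # Organization level: anonymity options vs. organizational culture.
    if not technology["anonymous_option"] and work_system["culture"] == "blame":
        issues.append("no anonymity option within a blame culture")
    return issues


tech = {"minutes_to_report": 10, "anonymous_option": False}
work = {"workload": "high", "culture": "blame"}
print(fit_issues(tech, work))  # both mismatches are flagged
```

The value of framing fit this way is that each mismatch is located at a specific level (work group, organization), which suggests where a redesign effort should be aimed.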
One way this model differs from previous mod-
els of error reporting systems is that it integrates a
Figure 1. The interconnected cycle of reporting, analysis, control, and evaluation provides a framework for under-
standing the role of reporting systems in safety. The concepts of fit, cross-level effects, feedback between the stages
in the safety cycle, and changes in the model over time are consistent with the medical error reporting literature and
relevant theories from multiple scientific disciplines. These and other concepts of the model suggest theory-based
routes for future design and research.
number of factors across levels that are important
to research and design jointly, as opposed to sep-
arately. Instead of simply listing the reporting sys-
tem variables reviewed in this paper, the model
encourages the understanding of the interactions
that produce fit, a truer depiction of the complex-
ity of health care systems. The model explicitly
depicts the contribution of different levels of orga-
nizational hierarchy to the success or failure of an
error reporting system. Previous discussions of er-
ror reporting systems have asserted that reporter-
reporting system variables such as ease of use and
organization-reporter variables such as organiza-
tional support of reporting are important for re-
porting system success. However, our framework
suggests that vertical alignment (or fit) throughout
all levels of organizational hierarchy needs to
be investigated through research and designed in
practice (Rasmussen, 1997; Vicente, 2003). Like-
wise, previous discussions of error reporting have
focused on facilitators and barriers to reporting,
whereas this model demonstrates that studying
and/or designing for reporting is but one step in the
overall process. Figure 1 provides an integrated
framework that situates reporting within the
larger safety cycle.
The model in Figure 1 frames error reporting as
one component of a safety program. Like the other
components, reporting is affected by multiple lev-
els in the organizational hierarchy of systems,
which is one of the new elements of this model
of error reporting. The left side of the figure, titled
“work system,” depicts this hierarchy. It is based
on the work system models developed by Smith
and Sainfort (1989) and Carayon et al. (2003). It
demonstrates how an error reporting system is
nested within a work group, which might be a care
team, unit, or department nested within a higher
level we refer to as organization (which could be
a hospital or a health care system), which itself is
embedded in an even larger system. The actual
definitions of each level and, for that matter, the
actual number of levels is dependent on the unit
of analysis for a given study or health care orga-
nization.
Though the work system factor appears on
the left side of the model, note that the system-
reporting technology interaction affects all four
stages; this is because all the stages in the cycle
take place within the work system. The model
shows that separate, but related, design consider-
ations need to be in place to promote the goals in
each of the four stages. For example, to promote
erroridentification,thereportingmechanismmust
be designed for the reporters; that is, it must be
easy to use, nonthreatening, and integrated into
the current work flow and work environment. The
requirements for design success and fit are differ-
ent at the analysis, control, and evaluation stages.
Although the design of these stages of the safety
cycle is beyond the reporting focus of this paper,
each step is important to consider because of the
feedback from each step to the other. Thus, re-
search is needed to understand the interplay be-
tween reporting and other stages in the safety
cycle, as discussed in the Feedback Through the
Stages section.
The identification-analysis-control-evaluation
processes are not simply linear. Instead, there are
two types of feedback loops at work: (a) feedback
through system hierarchies, known as cross-level
effects (Klein, Dansereau, & Hall, 1994); and (b)
feedback through steps in the cycle (see, e.g., Bo-
gart, 1980). Both types of feedback are important
to understand because each exerts influence on the
success of the four stages of the cycle. A lack of
understanding of these feedback influences can
result in unanticipated detrimental consequences.
Feedback Through the System
Hierarchies
Because error reporting is influenced by the
interactionsofmultiplelevelswithinthehierarchy,
it is important to consider cross-level feedback.
Such feedback could be in the form of policies,
information, normative influences, or goals and
rewards. For brevity we provide examples using
only the latter two forms of feedback. For instance,
at the reporting stage, in which individuals choose
whether they will report an error, individual be-
haviors are affected by higher-level group, orga-
nizational, and industry factors. As discussed in
the review of the literature, reward and punish-
ment structures may affect individual reporting
decisions (e.g., if nurses are rewarded more for
productivity than for reporting), as may culture
(e.g., blame vs. just culture), or organizational
structure. Other cross-level effects that influence
reporting include those related to training and in-
formation provision (e.g., technical competence
of clinicians and system usability) and social influ-
ences at the individual, group, organizational, and
industry levels. In turn, the content and frequency
of reporting may create feedback effects through
272 April 2007 – Human Factors
the hierarchy of systems, alerting management to
change rewards for and punishment of certain be-
haviors, creating the need for training or redesign,
and/or affecting the culture by either reinforcing
or contradicting norms and beliefs related to
reporting.
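The reward-and-blame portion of this cross-level loop can be sketched as a small simulation (all parameters, thresholds, and the policy-adjustment rule below are invented for illustration and are not drawn from the reviewed studies):

```python
import random

def reporting_probability(base, reward_weight, blame_penalty):
    # Individual-level propensity to report an error, nudged upward by
    # organizational rewards and downward by a blame culture; clamped
    # to a valid probability.
    return max(0.0, min(1.0, base + reward_weight - blame_penalty))

def simulate_period(n_clinicians, errors_each, reward_weight, blame_penalty):
    # Count how many of this period's errors actually get reported.
    return sum(
        random.random() < reporting_probability(0.3, reward_weight, blame_penalty)
        for _ in range(n_clinicians * errors_each)
    )

random.seed(0)
reward, blame = 0.0, 0.4              # starting policy: blame-heavy, no reward
history = []
for period in range(4):
    reported = simulate_period(50, 2, reward, blame)
    history.append(reported)
    if reported < 30:                 # low volume alerts management, which
        reward += 0.2                 # adjusts its reward/punishment structure,
        blame = max(0.0, blame - 0.2) # feeding back to individual decisions
```

Under the initial blame-heavy policy no errors are reported at all; the absence of reports then prompts management to shift the incentive structure, and reporting volume rises in later periods. Group norms or training effects could be added to the same skeleton.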
At the analysis stage, pertinent rewards provided and goals developed by higher levels, such as management, will affect how data are analyzed and used by analysts. For example, if management rewards analysts simply for showing trends, trends are likely what the analysts will produce. In turn, what the individual analysts produce will subsequently affect what management rewards, depending on whether the analysis reports are consistent with management expectations.
At the hazard control (i.e., intervention) level,
as before, the types of control activities that pro-
duce rewards are those likely to be designed and
implemented. The management level also impacts
the efficacy of safety control activities to the ex-
tent that management is the entity that provides
time and financial resources to staff for designing
and implementing controls. In turn, the types of
controls developed by individuals or units will
influence management to change or accept the
current course of control activities and will impact
what types of data they want reported and what
types of analyses they want produced. The evalu-
ation of implemented safety controls could influ-
ence the design of subsequent controls, depending
on the success or failure of the interventions, and
even the nature of report forms, to the extent that
the organization learns from evaluations that it
needs more specific information to be reported.
From a scientific point of view, each of the possi-
ble scenarios presented in this section provides a
testable hypothesis. From a design point of view, each example may, if the empirical evidence supports it, warrant design consideration.
Feedback Through the Stages
Each stage in the cycle can influence the other
stages. What is reported will determine the types
of analyses that can be produced and the nature of
follow-up required. Likewise, the depth and breadth
of the analyses will determine the specificity of
possible interventions to control hazards and the
criteria used to evaluate these interventions. The
degree to which targeted controls are developed
and implemented will impact whether clinicians
continue to report; they will be unlikely to con-
tinue reporting if they do not see interventions be-
ing implemented that were based on their reports.
Looking at feedback in the other direction, eval-
uation produces adjustments to future control
activities. Also, the type of data required for these
targeted control activities can be fed back to de-
termine the types of analyses that are required to
produce optimal hazard control interventions. Fi-
nally, the types of analyses required for sound
interventions should help to determine the type of
data required from the reporters. Again, each
aforementioned proposal is an opportunity for
research and design.
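These stage-to-stage dependencies can be rendered as a minimal pipeline sketch (the stage functions, example hazards, severity scale, and the 50% coverage threshold are all illustrative assumptions, not elements of the proposed model):

```python
def analyze(reports):
    # The depth of analysis is bounded by what was reported.
    return [{"hazard": r["hazard"], "priority": r["severity"]} for r in reports]

def control(findings):
    # Only analyzed hazards can receive targeted interventions; here,
    # anything at priority 2 or above gets one.
    return [f["hazard"] for f in findings if f["priority"] >= 2]

def evaluate(interventions, reports):
    # Feedback to the reporting stage: if few reports visibly lead to
    # interventions, reporters may stop submitting in the next cycle.
    coverage = len(interventions) / max(1, len(reports))
    return {"coverage": coverage, "reporters_stay_engaged": coverage >= 0.5}

reports = [
    {"hazard": "look-alike drug labels", "severity": 3},
    {"hazard": "interrupted medication pass", "severity": 2},
    {"hazard": "illegible handwritten order", "severity": 1},
]
feedback = evaluate(control(analyze(reports)), reports)
```

Narrowing what is reported (the first list) immediately narrows what can be analyzed and controlled, and a low coverage value propagates back as reduced willingness to report.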
The proposed model also incorporates the ele-
ment of time by demonstrating that the cycles of feedback among steps in the process and levels of hierarchy are continuously operating and affecting each other. This notion is important for understanding that decisions made at any stage of the
cycle or in the various levels of organizational
hierarchy will have immediate or delayed effects
on medical error reporting and safety in general.
The success of a medical error reporting sys-
tem is determined by design considerations in the
four steps (reporting, analysis, control, and eval-
uation) and by feedback systems that run through
the steps and levels. This shows just how compli-
cated it is to design and implement a successful
system and further emphasizes the importance of
proposing a research framework that can lead to
testable hypotheses.
FUTURE RESEARCH
The model proposed in Figure 1 and the fore-
going discussion provide multiple directions for
future research into medical error reporting sys-
tems. In the review of the literature we identified
areas in need of research. Here we provide more
specific possible research questions, divided into
twocategories:questionsrelatedtoparticularsteps
in the safety cycle and questions related to the
relationships among the steps.
Within steps, possible questions include, gen-
erally, what factors predict success at a given
stage, and how do these factors interact (or fit) to
determine success? The literature reviewed here
provides a few preliminary answers but is limited
in that it often fails to recognize the hierarchically
complex, interactive, and dynamic nature of re-
porting systems. As the model suggests, for studies to successfully address questions about such
a system, data will have to be collected at multiple
levels of hierarchy to allow for testing of cross-
level effects. Specific questions that require such
data might include the following:
• What variables at different levels of hierarchy pre-
dict end-user use (or rejection) of a medical error
reporting system?
• Is error reporting behavior independent, homoge-
neous, or heterogeneous within groups?
• What types of resources and system design consid-
erations facilitate successful analysis of reported
data?
• What variables at different levels of hierarchy pro-
mote (or hinder) the adoption and success of hazard
control interventions within health care organiza-
tions or within groups?
• How do changes in variables that affect a given step
impact success at that step over time?
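The second question above, on whether reporting behavior is homogeneous within groups, can be approached with a variance decomposition such as the one-way intraclass correlation. A minimal sketch on synthetic data (the unit structure, means, and counts are invented for illustration):

```python
import random
from statistics import mean

def icc1(groups):
    # One-way ICC(1) from the standard one-way ANOVA decomposition:
    # the share of variance that lies between groups rather than
    # within them. Equal group sizes are assumed for simplicity.
    n = len(groups)            # number of groups (e.g., hospital units)
    k = len(groups[0])         # members per group (e.g., clinicians)
    grand = mean(x for g in groups for x in g)
    ssb = k * sum((mean(g) - grand) ** 2 for g in groups)
    ssw = sum((x - mean(g)) ** 2 for g in groups for x in g)
    msb = ssb / (n - 1)
    msw = ssw / (n * k - n)
    return (msb - msw) / (msb + (k - 1) * msw)

random.seed(1)
# Synthetic monthly report counts for clinicians nested in four units,
# with unit means deliberately spread apart (a heterogeneous scenario).
units = [[random.gauss(mu, 1.0) for _ in range(10)] for mu in (2, 4, 6, 8)]
icc = icc1(units)              # high value: behavior clusters by unit
```

An ICC near zero would suggest reporting behavior varies independently of unit membership, whereas a high value, as in this deliberately heterogeneous example, would justify multilevel modeling of the kind the proposed framework calls for.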
These questions are somewhat broad, and
within these, specific research questions will need
to be independently developed by researchers,
depending on their theoretical interests and area
of application (i.e., because different health care
contexts call for nuanced research questions). Additionally, the suggested relationships among the
four steps produce questions that are even more
complicated to address, especially when consid-
ered alongside cross-level effects. However, the
relationships among identification/reporting,
analysis, control, and evaluation are equally im-
portant to address because, as the model shows,
the stages are interrelated through feedback mechanisms. Specific questions might include the following:
• Which, if any, variables at different levels of hierar-
chy simultaneously predict success at the reporting,
analysis, control, and evaluation stages?
• How do changes at one stage affect subsequent
changes in the other stages over time?
For these and any other questions that might be
generated and tested from the model, it is important to utilize existing theories to guide the research. For example, questions related to reporting
behavior might benefit from existing psychologi-
cal theories of motivation and behavior as well as
theories on technology adoption and acceptance
(Holden & Karsh, 2005). Questions dealing with
cross-level effects might benefit from organiza-
tional theories or sociotechnical systems theory
(Clegg, 2000; Karsh & Brown, 2005). Similarly,
questions dealing with hazard control interven-
tions might benefit from decision-making theories
(DeJoy, 1996). In each case, the appropriate the-
ory will depend on the specifics of the questions
being studied, and the model should provide guid-
ance as to the types of variables to consider. Fur-
thermore, researchers can build on and refine each
others’ theories. The conceptual model in Figure 1
is a first attempt at integrating knowledge from the
reporting literature. Components of the model
need to be tested and the model amended appropriately. We propose that beginning to develop and revise models and theories for reporting and related safety stages in health care is a needed next step.
CONCLUSION
Successful medical error reporting systems can
be one approach toward safer and higher quality
patient care. Whether the system is successful depends on how well it achieves its goals, which
include identification, analysis, control, and continuous improvement. The medical error reporting
literature suggests several factors that affect reporting system success. These include a reporting
system that is usable (e.g., easy to use and time
efficient), is known to users, and fits with their
workflow; that is useful and provides feedback to
its users demonstrating this usefulness; and that
provides rewards and does not punish users. Many design considerations are necessary to provide for successful systems. The model presented here suggests that these design considerations will be optimally accounted for when medical error reporting systems are treated as dynamic and multilevel systems characterized by multiple interacting processes. Future research and design/implementation
efforts must account for all levels of this system’s
hierarchy, all four steps in the cycle, and the dy-
namic feedback between these levels and steps,
all within the context of a wide assortment of
available theoretical frameworks.
ACKNOWLEDGMENTS
This work was supported by a grant from
University-Industry Relations at the University
of Wisconsin-Madison. The authors thank the
reviewers for their helpful comments and Brett
Marquardt for helping to collect and summarize
some of the articles referenced in this paper.
REFERENCES
American College of Surgeons. (2005). National Surgical Quality
Improvement Program. Retrieved November 15, 2005, from https://
acsnsqip.org/main/default.asp
Antonow, J. A., Smith, A. B., & Silver, M. P. (2000). Medication error
reporting: A survey of nursing staff. Journal of Nursing Care Quality,
15, 42–48.
Arroyo, D. A. (2005). A nonpunitive, computerized system for im-
proved reporting of medical occurrences. In K. Henriksen, J. B.
Battles, E. S. Marks, & D. I. Lewin (Eds.), Advances in patient
safety: From research to implementation (Vol. 4, pp. 71–80).
Rockville, MD: Agency for Healthcare Research and Quality.
Aspden, P., Corrigan, J. M., Wolcott, J., & Erickson, S. M. (2004).
Patient safety: Achieving a new standard for care. Washington, DC:
National Academies Press.
Barach, P., & Small, S. D. (2000). Reporting and preventing medical
mishaps: Lessons from non-medical near miss reporting systems.
British Medical Journal, 320, 759–763.
Barkan, R. (2002). Using a signal detection safety model to simulate
managerial expectations and supervisory feedback. Organizational
Behavior and Human Decision Processes, 89, 1005–1031.
Bates, D. W., Cullen, D. J., Laird, N., Petersen, L. A., Small, S. D.,
Servi, D., et al. (1995). Incidence of adverse drug events and poten-
tial adverse drug events – Implications for prevention. Journal of
the American Medical Association, 274, 29–34.
Bates, D. W., Evans, R. S., Murff, H. J., Stetson, P. D., Pizziferri, L., &
Hripcsak, G. (2003). Detecting adverse events using information
technology. Journal of the American Medical Informatics Associa-
tion, 10, 115–128.
Bates, D. W., Leape, L. L., Cullen, D. J., Laird, N., Petersen, L. A., Teich,
J. M., et al. (1998). Effect of computerized physician order entry
and a team intervention on prevention of serious medication errors.
Journal of the American Medical Association, 280, 1311–1316.
Beasley, J. W., Escoto, K. H., & Karsh, B. (2004). Design elements for
a primary care medical error reporting system. Wisconsin Medical
Journal, 103, 56–59.
Billings, C. E. (1998). Some hopes and concerns regarding medical event-reporting systems – Lessons from the NASA Aviation Safety Reporting System. Archives of Pathology and Laboratory Medicine, 122, 214–215.
Blendon, R. J., DesRoches, C. M., Brodie, M., Benson, J. M., Rosen,
A. B., Schneider, E., et al. (2002). Patient safety: Views of practic-
ing physicians and the public on medical errors. New England
Journal of Medicine, 347, 1933–1940.
Bogart, D. H. (1980). Feedback, feedforward, and feedwithin: Strategic
information in systems. Behavioral Science, 25, 237–249.
Bogner, M. S. (1994). Human error in medicine. Hillsdale, NJ: Erlbaum.
Britt, H., Miller, G. C., Steven, I. D., Howarth, G. C., Nicholson, P. A.,
Bhasale, A. L., et al. (1997). Collecting data on potentially harmful
events: A method for monitoring incidents in general practice.
Family Practice, 14, 101–106.
Busse, D. K., & Wright, D. J. (2000). Classification and analysis of incidents in complex, medical environments. Topics in Health Information Management, 20, 1–11.
Carayon, P., Alvarado, C., Brennan, P., Gurses, A., Hundt, A., Karsh,
B., et al. (2003). Work system and patient safety. Proceedings of
Human Factors in Organizational Design and Management, 7,
583–588.
Christensen, J. F., Levinson, W., & Dunn, P. M. (1992). The heart of darkness: The impact of perceived mistakes on physicians. Journal of General Internal Medicine, 7, 424–431.
Clegg, C. (2000). Sociotechnical principles for system design. Applied
Ergonomics, 31, 463–477.
Cohen, M. R. (2000). Why error reporting systems should be voluntary.
British Medical Journal, 320, 728–729.
Confidential Incident Reporting & Analysis System. (2005). CIRAS –
Confidential Incident Reporting & Analysis System for the UK railway industry. Retrieved November 28, 2005, from http://www.
ciras.org.uk/
Cullen, D. J., Bates, D. W., Small, S. D., Cooper, J., & Nemeskal, R.
(1995). The incident reporting system does not detect adverse drug
events: A problem for quality improvement. The Joint Commission
Journal on Quality Improvement, 21, 541–548.
DeJoy, D. M. (1996). Theoretical models of health behavior and work-
place self-protective behavior. Journal of Safety Research, 27,
61–72.
Desikan, R., Krauss, M. J., Dunagan, W. C., Rachmiel, E. C., Bailey, T.,
& Fraser, V. J. (2005). Reporting of adverse drug events: Exami-
nation of a hospital incident reporting system. In K. Henriksen, J.
B. Battles, E. S. Marks, & D. I. Lewin (Eds.), Advances in patient
safety: From research to implementation (Vol. 1, pp. 145–160).
Rockville, MD: Agency for Healthcare Research and Quality.
Dickey, J., Damiano, R. J., Jr., & Ungerleider, R. (2003). Our surgical
culture of blame: A time for change. Journal of Thoracic and
Cardiovascular Surgery, 126, 1259–1260.
Donaldson, L. (2000). An organisation with a memory. London: Depart-
ment of Health.
Dovey, S. M., Meyers, D. S., Phillips, R. L., Green, L. A., Fryer, G. E.,
Galliher, J. M., et al. (2002). A preliminary taxonomy of medical
errors in family practice. Quality and Safety in Health Care, 11,
233–238.
Eland, I. A., Belton, K. J., van Grootheest, A. C., Meiners, A. P.,
Rawlins, M. D., & Stricker, B. H. C. (1999). Attitudinal survey of
voluntary reporting of adverse drug reactions. British Journal of
Clinical Pharmacology, 48, 623–627.
Evans, S. M., Berry, J. G., Smith, B. J., & Esterman, A. J. (2004).
Anonymity or transparency in reporting of medical error: A
community-based survey in South Australia. Medical Journal of
Australia, 180, 577–580.
Fernald, D. H., Pace, W. D., Harris, D. M., West, D. R., Main, D. S., &
Westfall, J. M. (2004). Event reporting to a primary care patient
safety reporting system: A report from the ASIPS Collaborative.
Annals of Family Medicine, 2, 327–332.
Figueiras, A., Tato, F., Fontainas, J., Takkouche, B., & Gestal-Otero, J.
J. (2001). Physicians’ attitudes towards voluntary reporting of
adverse drug events. Journal of Evaluation in Clinical Practice, 7,
347–354.
Flink, E., Chevalier, C. L., Ruperto, A., Dameron, P., Heigel, F. J.,
Leslie, R., et al. (2005). Lessons learned from the evolution of
mandatory adverse event reporting systems. In K. Henriksen, J. B.
Battles, E. S. Marks, & D. I. Lewin (Eds.), Advances in patient
safety: From research to implementation (Vol. 3, pp. 135–151).
Rockville, MD: Agency for Healthcare Research and Quality.
Flowers, L., & Riley, T. (2000). How states are responding to medical
errors: An analysis of recent state legislative proposals. Portland,
ME: National Academy for State Health Policy.
Gallagher, T. H., Waterman, A. D., Ebers, A. G., Fraser, V. J., &
Levinson, W. (2003). Patients’ and physicians’ attitudes regarding
the disclosure of medical errors. Journal of the American Medical
Association, 289, 1001–1007.
Green, C. F., Mottram, D. R., Rowe, P. H., & Pirmohamed, M. (2001).
Attitudes and knowledge of hospital pharmacists to adverse drug
reaction reporting. British Journal of Clinical Pharmacology, 51,
81–86.
Hamilton-Escoto, K. H., Karsh, B., & Beasley, J. W. (2006). Multiple
user considerations and their implications in medical error reporting
system design. Human Factors, 48, 48–58.
Harper, M. L., & Helmreich, R. L. (2005). Identifying barriers to the
success of a reporting system. In K. Henriksen, J. B. Battles, E. S.
Marks, & D. I. Lewin (Eds.), Advances in patient safety: From
research to implementation (Vol. 3, pp. 167–179). Rockville, MD:
Agency for Healthcare Research and Quality.
Harris, D. M., Westfall, J. M., Fernald, D. H., Duclos, C. W., West, D.
R., Niebauer, L., et al. (2005). Mixed methods analysis of medical
error event reports: A report from the ASIPS Collaborative. In K.
Henriksen, J. B. Battles, E. S. Marks, & D. I. Lewin (Eds.), Advances
in patient safety: From research to implementation (Vol. 2, pp.
133–147). Rockville, MD: Agency for Healthcare Research and
Quality.
Hart, G. K., Baldwin, I., Gutteridge, G., & Ford, J. (1994). Adverse incident reporting in intensive care. Anaesthesia and Intensive Care,
22, 556–561.
Henriksen, K., & Kaplan, H. S. (2003). Hindsight bias, outcome knowl-
edge and adaptive learning. Quality and Safety in Health Care,
12(Suppl. 2), ii46–ii50.
Holden, R. J., & Karsh, B. (2005). Applying a theoretical framework
to the research and design of medical error reporting systems. In
Proceedings of the International Conference on Healthcare Systems
Ergonomics and Patient Safety (pp. 131–134). London: Taylor and
Francis Group.
Hollnagel, E. (1993). Human reliability analysis: Context and control.
London: Academic Press.
Horton, R. (1999). The uses of error. Lancet, 353, 422–423.
Jeffe, D. B., Dunagan, W. C., Garbutt, J., Burroughs, T. E., Gallagher,
T. H., Hill, P. R., et al. (2004). Using focus groups to understand
physicians’ and nurses’ perspectives on error reporting in hospitals.
Joint Commission Journal on Quality and Safety, 30, 471–479.
Johnson, C. W. (2000a). Architectures for incident reporting. In P.
Palanque, F. Paterno, & C. Johnson (Eds.), Proceedings of Safety
and Usability Concerns in Aviation (pp. 23–25), Toulouse, France:
University of Toulouse.
Johnson, C. W. (2000b). Designing forms to support the elicitation of in-
formation about incidents involving human error. In P. C. Cacciabue
(Ed.), Proceedings of the 19th European Annual Conference on
Human Decision Making and Manual Control, EAM-2000 (pp.
127–134). Luxemburg: European Commission Joint Research
Centre.
Johnson, C. W. (2002). Reasons for the failure of incident reporting in
the healthcare and rail industries. In F. Redmill & T. Anderson
(Eds.), Components of System Safety: Proceedings of the 10th
Safety-Critical Systems Symposium (pp. 31–60). Berlin, Germany:
Springer-Verlag.
Johnson, C. W. (2003a). Failure in safety-critical systems: A handbook
of accident and incident reporting. Glasgow, Scotland: University
of Glasgow Press.
Johnson, C. W. (2003b). How will we get the data and what will we do
with it then? Issues in the reporting of adverse healthcare events.
Quality and Safety in Health Care, 12(Suppl. 2), ii64–ii67.
Joint Commission on Accreditation of Healthcare Organizations.
(2005). Health care at the crossroads: Strategies for improving the
medical liability system and preventing patient injury. Oakbrook
Terrace, IL: Author.
Kaiser Permanente, National Quality Forum, & Drucker, P. F. (2000).
Roundtable Discussion: Reporting as a means to improve patient
safety. Claremont, CA: Kaiser Permanente Institute for Health
Policy.
Kaplan, H. S. (2003). Benefiting from the “gift of failure”: Essentials for an event reporting system. Journal of Legal Medicine, 24, 29–35.
Kaplan, H. S., & Barach, P. (2002). Incident reporting: Science or protoscience? Ten years later [Comment]. Quality and Safety in Health
Care, 11, 144–145.
Kaplan, H. S., Battles, J. B., Van der Schaff, T. W., Shea, C. E., &
Mercer, S. Q. (1998). Identification and classification of the causes
of events in transfusion medicine. Transfusion, 38, 1071–1081.
Kaplan, H. S., Callum, J. L., Fastman, R. B., & Merkley, L. L. (2002).
The Medical Event Reporting System for Transfusion Medicine
(MERS-TM): Will it help us get the right blood to the right patient?
Transfusion Medicine Reviews, 16, 86–102.
Kaplan, H. S., & Fastman, B. R. (2003). Organization of event report-
ing data for sense making and system improvement. Quality and
Safety in Health Care, 12(Suppl. 2), ii68–ii72.
Karsh, B., & Brown, R. (2005). The impact of organizational hierarchies
on the design and analysis of medical error research. Proceedings
of Human Factors in Organizational Design and Management, 8,
293–298.
Karsh, B., Escoto, K. H., Beasley, J. W., & Holden, R. J. (2006). Toward
a theoretical approach to medical error reporting system research
and design. Applied Ergonomics, 37, 283–295.
Karsh, B., Holden, R. J., Alper, S. J., & Or, C. K. L. (2006). A human
factors engineering paradigm for patient safety – Designing to sup-
port the performance of the health care professional. Quality and
Safety in Health Care, 15(Suppl. 1), i59–i65.
Katz, R. I., & Lagasse, R. S. (2000). Factors influencing the reporting
of adverse perioperative outcomes to a quality management pro-
gram. Anesthesia and Analgesia, 90, 344–350.
Kaushal, R., & Bates, D. W. (2002). Information technology and med-
ication safety: What is the benefit? Quality and Safety in Health
Care, 11, 261–265.
Kingston, M. J., Evans, S. M., Smith, B. J., & Berry, J. G. (2004). Attitudes of doctors and nurses towards incident reporting: A qualitative
analysis. Medical Journal of Australia, 181, 36–39.
Klein, K. J., Dansereau, F., & Hall, R. J. (1994). Levels issues in theo-
ry development, data collection, and analysis. Academy of
Management Review, 19, 195–229.
Kohn, L. T., Corrigan, J. M., & Donaldson, M. S. (Eds.). (2000). To err
is human: Building a safer health system (Institute of Medicine
Report on Medical Errors). Washington, DC: National Academy
Press.
Lawton, R., & Parker, D. (2002). Barriers to incident reporting in a
healthcare system. Quality and Safety in Health Care, 11, 15–18.
Layde, P. M., Maas, L. A., Teret, S. P., Brasel, K. J., Kuhn, E. M., Mercy,
J. A., et al. (2002). Patient safety efforts should focus on medical
injuries. Journal of the American Medical Association, 287,
1993–1997.
Leape, L. L. (1994). Error in medicine. Journal of the American
Medical Association, 272, 1851–1857.
Leape, L. L. (1999). Why should we report adverse incidents? Journal
of Evaluation in Clinical Practice, 5, 1–4.
Leape, L. L. (2000). Reporting of medical errors: Time for a reality
check. Quality in Health Care, 9, 144–145.
Leape, L. L. (2002). Reporting of adverse events. New England Journal
of Medicine, 347, 1633–1638.
Leape, L. L., Bates, D. W., Cullen, D. J., Cooper, J., Demonaco, H. J.,
Gallivan,T., et al. (1995). Systems-analysis of adverse drug events.
Journal of the American Medical Association, 274, 35–43.
Leape, L. L., Kabcenell, A., Berwick, D. M., & Roessner, J. (1998).
Reducing adverse drug events. Boston: Institute for Healthcare
Improvement.
Martin, S. K., Etchegaray, J. M., Simmons, D., Belt, W. T., & Clark, K.
(2005). Development and implementation of the University of
Texas Close Call Reporting System. In K. Henriksen, J. B. Battles,
E. S. Marks, & D. I. Lewin (Eds.), Advances in patient safety: From
research to implementation (Vol. 2, pp. 149–160). Rockville, MD:
Agency for Healthcare Research and Quality.
Marx, D. A. (1999). Maintenance error causation. Washington, DC:
Federal Aviation Authority Office of Aviation Medicine.
Marx, D. A. (2001). Patient safety and the “just culture”: A primer for
health care executives. Retrieved November 28, 2006, from
http://www.mers-tm.net/support/Marx_Primer.pdf
McGreevy, M. W. (1997). A practical guide to interpretation of large
collections of incident narratives using the QUORUM method.
Moffett Field, CA: NASA Ames Research Center.
Medical Event Reporting System. (2005). Medical Event Reporting
System — Transfusion medicine. Retrieved November 15, 2005,
from http://www.mers-tm.net/
Murff, H. J., Patel, V. L., Hripcsak, G., & Bates, D. W. (2003). Detecting
adverse events for patient safety research: A review of current
methodologies. Journal of Biomedical Informatics, 36, 131–143.
National Aeronautics and Space Administration. (2002). Aviation Safety
Reporting System. Retrieved March 7, 2002, from http://asrs.arc.
nasa.gov/
New York State Department of Health. (2005). NYPORTS – The New
York Patient Occurrence and Tracking System. Retrieved Novem-
ber 15, 2005, from http://www.health.state.ny.us/nysdoh/hospital/
nyports
Newman, M. C. (1996). The emotional impact of mistakes on family
physicians. Archives of Family Medicine, 5, 71–75.
Nielsen, J. (1993). Usability engineering. Boston: Academic Press.
Nuclear Regulatory Commission. (2005). Human Factors Information
System Reports. Retrieved November 9, 2005, from http://www.
nrc.gov/reading-rm/doc-collections/human-factors/
Pace, W. D., Staton, E. W., Higgins, G. S., Main, D. S., West, D. R.,
Harris, D. M., et al. (2003). Database design to ensure anonymous
study of medical errors: A report from the ASIPS Collaborative.
Journal of the American Medical Informatics Association, 10,
531–540.
Parker, D., & Lawton, R. (2003). Psychological contribution to the
understanding of adverse events in health care. Quality and Safety
in Health Care, 12, 453–457.
Phillips, R. L., Dovey, S. M., Hickner, J. S., Graham, D., & Johnson,
M. (2005). The AAFP Patient Safety Reporting System: Develop-
ment and legal issues pertinent to medical error tracking and analy-
sis. In K. Henriksen, J. B. Battles, E. S. Marks, & D. I. Lewin
(Eds.), Advances in patient safety: From research to implementation
(Vol. 3, pp. 121–134). Rockville, MD: Agency for Healthcare
Research and Quality.
Pilpel, D., Schor, R., & Benbassat, J. (1998). Barriers to acceptance of
medical error: The case for a teaching program. Medical Education,
32, 3–7.
Rasmussen, J. (1997). Risk management in a dynamic society: A modelling problem. Safety Science, 27, 183–213.
Reason, J. (1990). Human error. Cambridge, UK: Cambridge University
Press.
Reason, J. (1997). Managing the risks of organizational accidents.
Aldershot, UK: Ashgate.
Reason, J. (2000). Human error: Models and management. British
Medical Journal, 320, 768–770.
Reason, J., Parker, D., & Lawton, R. (1998). Organizational controls and
safety: The varieties of rule-related behaviour. Journal of Occupa-
tional and Organizational Psychology, 71, 289–304.
Reinertsen, J. L. (2000). Let’s talk about error – Leaders should take
responsibility for mistakes. British Medical Journal, 320, 730.
Resnick, D. (2003). The Jessica Santillan tragedy: Lessons learned.
Hastings Center Report, 33(4), 15–20.
Ricci, M., Goldman, A. P., de Leval, M. R., Cohen, G. A., Devaney, F.,
& Carthey, J. (2004). Pitfalls of adverse event reporting in paedi-
atric cardiac intensive care. Archives of Disease in Childhood, 89,
856–859.
Robert Graham Center, American Academy of Family Physicians
Education Resource Center, & State Networks of Colorado Am-
bulatory Practices and Partners. (2005). Medical error taxonomies:
A research forum. Retrieved November 15, 2005, from http://www.
errorsinmedicine.net/taxonomy/
Robinson, A. R., Hohmann, K. B., Rifkin, J. I., Topp, D., Gilroy, C. M., Pickard, J. A., et al. (2002). Physician and public opinions on quality of health care and the problem of medical errors. Archives of
Internal Medicine, 162, 2186–2190.
Rogers, A. S., Israel, E., Smith, C. R., Levine, D., McBean, A. M.,
Valente, C., et al. (1988). Physician knowledge, attitudes, and behavior related to reporting adverse drug events. Archives of Internal
Medicine, 148, 1596–1600.
Rudman, W. J., Bailey, J. H., Hope, C., Garrett, P., & Brown, C. A.
(2005). The impact of a Web-based reporting system on the col-
lection of medication error occurrence data. In K. Henriksen, J. B.
Battles, E. S. Marks, & D. I. Lewin (Eds.), Advances in patient
safety: From research to implementation (Vol. 3, pp. 195–205).
Rockville, MD: Agency for Healthcare Research and Quality.
Runciman, W. B., Merry, A., & Smith, A. M. (2001). Improving patients’
safety by gathering information: Anonymous reporting has an
important role. British Medical Journal, 323, 298.
Runciman, W. B., & Moller, J. (2001). Iatrogenic injury in Australia.
Adelaide, South Australia: Australian Patient Safety Foundation.
Runciman, W. B., Webb, R. K., Lee, R., & Holland, R. (1993). System
failure: An analysis of 2000 incident reports. Anaesthesia and
Intensive Care, 21, 684–695.
Sage, W. M. (2003). Medical liability and patient safety. Health Affairs,
22, 26–36.
Sexton, J. B., Thomas, E. J., & Helmreich, R. L. (2000). Error, stress
and teamwork in medicine and aviation: Cross sectional surveys.
British Medical Journal, 320, 745–749.
Smith, M. J., Carayon, P., & Karsh, B. (2001). Design for occupation-
al health and safety. In G. Salvendy (Ed.), Handbook of industrial
engineering: Technology and operations management (3rd ed., pp.
1156–1191). New York: Wiley.
Smith, M. J., Karsh, B., Carayon, P., & Conway, F.T. (2003). Controlling
occupational safety and health hazards. In J. C. Quick & L. E.
Tetrick (Eds.), Handbook of occupational health psychology (pp.
35–68). Washington, DC: American Psychological Association.
Smith, M. J., & Sainfort, P. C. (1989). Balance theory of job design for
stress reduction. International Journal of Industrial Ergonomics,
4, 67–79.
Staender, S. (2000). Critical Incident Reporting System (CIRS): Critical
incidents in anaesthesiology. Basel, Switzerland: University of
Basel, Department of Anaesthesia.
Staender, S., Davies, J., Helmreich, B., Sexton, B., & Kaufmann, M.
(1997). The Anaesthesia Critical Incident Reporting System: An
experience based database. International Journal of Medical Infor-
matics, 47, 87–90.
Stanhope, N., Crowley-Murphy, M., Vincent, C., O’Connor, A. M., &
Taylor-Adams, S. E. (1999). An evaluation of adverse incident
reporting. Journal of Evaluation in Clinical Practice, 5, 5–12.
Stanton, N. A., Chambers, P. R. G., & Piggott, J. (2001). Situational
awareness and safety. Safety Science, 39, 189–204.
Statement Before the Subcommittee on Oversight and Investigations,
House Committee on Veterans' Affairs, 106th Cong., D82 (2000)
(testimony of Linda J. Connell).
Suresh, G., Horbar, J. D., Plsek, P., Gray, J., Edwards, W. H., Shiono,
P. H., et al. (2004). Voluntary anonymous reporting of medical errors
for neonatal intensive care. Pediatrics, 113, 1609–1618.
Ulep, S. K., & Moran, S. L. (2005). Ten considerations for easing the
transition to a Web-based patient safety reporting system. In K.
Henriksen, J. B. Battles, E. S. Marks, & D. I. Lewin (Eds.), Advances
in patient safety: From research to implementation (Vol. 3, pp.
207–222). Rockville, MD: Agency for Healthcare Research and
Quality.
Uribe, C. L., Schweikhart, S. B., Pathak, D. S., & Marsh, G. B. (2002).
Perceived barriers to medical-error reporting:An exploratory inves-
tigation. Journal of Healthcare Management, 47, 263–279.
Vicente, K. (2003). What does it take: A case study. Joint Commission
Journal on Quality and Safety, 29, 598–609.
Vincent, C., Stanhope, N., & Crowley-Murphy, M. (1999). Reasons for
not reporting adverse incidents: An empirical study. Journal of
Evaluation in Clinical Practice, 5, 13–21.
Wakefield, B. J., Uden-Holman, T., & Wakefield, D. S. (2005). Development and validation of the Medication Administration Error Reporting Survey. In K. Henriksen, J. B. Battles, E. S. Marks, & D. I. Lewin (Eds.), Advances in patient safety: From research to implementation (Vol. 4, pp. 475–489). Rockville, MD: Agency for Healthcare Research and Quality.
Wakefield, D. S., Wakefield, B. J., Uden-Holman, T., & Blegen, M. A. (1996). Perceived barriers in reporting medication administration errors. Best Practices and Benchmarking in Healthcare, 1, 191–197.
Walker, S. B., & Lowe, M. J. (1998). Nurses’ views on reporting medication incidents. International Journal of Nursing Practice, 4, 97–102.
Waring, J. J. (2004). A qualitative study of the intra-hospital variations
in incident reporting. International Journal for Quality in Health
Care, 16, 347–352.
Weick, K., & Sutcliffe, K. (2001). Managing the unexpected: Assuring
high performance in an age of complexity. San Francisco: Jossey-
Bass.
Weingart, S. N., Callanan, L. D., Ship, A. N., & Aronson, M. D. (2001). A physician-based voluntary reporting system for adverse events and medical errors. Journal of General Internal Medicine, 16, 809–814.
West, D. R., Westfall, J. M., Araya-Guerra, R., Hansen, L., Quintela, J., Van Vorst, R., et al. (2005). Using reported primary care errors to develop and implement patient safety interventions: A report from the ASIPS Collaborative. In K. Henriksen, J. B. Battles, E. S. Marks, & D. I. Lewin (Eds.), Advances in patient safety: From research to implementation (Vol. 3, pp. 105–119). Rockville, MD: Agency for Healthcare Research and Quality.
Westrum, R. (1992). Cultures with requisite imagination. In J. A. Wise, V. D. Hopkin, & P. Stager (Eds.), Verification and validation of complex systems: Human factors aspects (pp. 401–416). Berlin, Germany: Springer-Verlag.
Wilf-Miron, R., Lewenhoff, I., Benyamini, Z., & Aviram, A. (2003).
From aviation to medicine: Applying concepts of aviation safety
to risk management in ambulatory care. Quality and Safety in
Health Care, 12, 35–39.
Williamson, J. A., & Mackay, P. (1991). Incident reporting. Medical
Journal of Australia, 155, 340–344.
Richard J. Holden is a Ph.D. student pursuing a joint
degree in psychology and industrial and systems engi-
neering at the University of Wisconsin-Madison, where
he received an M.S. in psychology in 2004.
Ben-Tzion Karsh is an assistant professor in the Depart-
ment of Industrial and Systems Engineering at the
University of Wisconsin-Madison, where he received a
Ph.D. in industrial engineering in 1999.
Date received: August 5, 2005
Date accepted: September 20, 2006

Method: Relevant literature was identified using online databases; references in relevant articles were searched for additional relevant articles. Results: The literature review identified components of medical error reporting systems, error reporting system design choices, barriers and incentives for reporting, and suggestions for successful reporting system design. Little theory was found to guide the published research. An integrative cross-level model of medical error reporting system design was developed and is proposed as a framework for understanding the medical error reporting literature, addressing existing limitations, and guiding future design and research. Conclusion: The medical error reporting research provides some guidance for designing and implementing successful reporting systems. The proposed cross-level systems model provides a way to understand this existing research. However, additional research is needed on reporting and related safety actions. The proposed model provides a framework for such future research. Application: This work can be used to guide the design, implementation, and study of medical error reporting systems.

Address correspondence to Ben-Tzion Karsh, Department of Industrial and Systems Engineering, University of Wisconsin-Madison, 1513 University Ave., Room 387, Madison, WI 53706; bkarsh@engr.wisc.edu. HUMAN FACTORS, Vol. 49, No. 2, April 2007, pp. 257–276. Copyright © 2007, Human Factors and Ergonomics Society. All rights reserved. SPECIAL SECTION
Australian, and global organizations and prominent political and patient safety players have advocated the implementation of error reporting systems (Aspden, Corrigan, Wolcott, & Erickson, 2004; Donaldson, 2000; Runciman & Moller, 2001). Error reporting, though only one of many components needed for a successful safety program, can improve patient safety in the following ways (see also Leape, Kabcenell, Berwick, & Roessner, 1998): by helping staff understand the nature and extent of their errors (i.e., learning/education); by tracking system performance over time and following changes in the system; and even by changing the mind-set of health care practitioners – for example, by raising reporters’ awareness of the potential for error (Weick & Sutcliffe, 2001) or promoting a safety culture (Kaplan & Barach, 2002). When errors are detected through reporting, reactive efforts may prevent the error from resulting in patient harm (Uribe, Schweikhart, Pathak, & Marsh, 2002). An error reporting system can also facilitate proactive safety efforts. A system that identifies problems and prompts investigation of the underlying causes of errors could potentially facilitate subsequent correction, which could include safe system design and redesign (i.e., system improvement; Bates et al., 1998; Leape et al., 1995).

In summary, the rationale behind error reporting is that with knowledge comes the power to detect problems and their causes and then to effect change. Successful reporting structures in place in other high-risk fields, such as petrochemicals and aviation (Barach & Small, 2000; Billings, 1998; Johnson, 2002; Statement Before the Subcommittee, 2000), have encouraged interest in error reporting as a patient safety tool; in health care, however, errors are grossly underreported, perhaps by as much as 50% to 96% (Barach & Small, 2000).
It follows that these systems have had dubious effectiveness in facilitating change (Leape, 2002). As a result, the majority of the medical error reporting literature is concerned with establishing what would make for a successful reporting system and the barriers and motivators that affect reporting behavior.

What follows is a review and discussion of design considerations for medical error reporting systems, followed by a review of literature on the barriers and motivators of reporting behavior. The first section discusses existing reporting systems in health care and in other industries. In the sections that follow we discuss key reporting system design considerations and literature-identified barriers to reporting. Next, we present design suggestions from the literature for removing these barriers and establishing a successful reporting system. Finally, after examining trends and gaps in the current error reporting literature, including the lack of theory in the research, we propose that the literature reviewed in this paper could be framed using a theoretically grounded, integrated model, and we present one such model. The model addresses some of the gaps in the literature and urges a new wave of research to strengthen and expand existing knowledge. With this integrative and cross-level model, we seek to provide (a) designers with a more holistic set of design considerations and (b) scientists with directions for developing and testing hypotheses about a range of medical error reporting system topics.

EXISTING MEDICAL AND NONMEDICAL REPORTING SYSTEMS

A number of systems for reporting errors, incidents, and accidents have been implemented in health care and in other industries.
Leape (2002) devoted an article to the discussion of the features and success (or lack thereof) of several popular reporting systems in health care, such as the Medication Error Reporting Program, MEDMARX, National Nosocomial Infection Survey, and Sentinel Event Reporting Program. Other, perhaps more successful, systems include the Applied Strategies for Improving Patient Safety (ASIPS) Patient Safety Reporting System (Fernald et al., 2004; Pace et al., 2003); the Medical Event Reporting System for Transfusion Medicine (MERS-TM; Kaplan, Battles, Van der Schaff, Shea, & Mercer, 1998; Kaplan, Callum, Fastman, & Merkley, 2002); the National Surgical Quality Improvement Program (American College of Surgeons, 2005); the Swiss Anaesthesia Critical Incident Reporting System (Staender, 2000; Staender, Davies, Helmreich, Sexton, & Kaufmann, 1997); the Edinburgh intensive care unit critical incident reporting system (Busse & Wright, 2000); the Australian Incident Monitoring Study (AIMS; Runciman, Webb, Lee, & Holland, 1993); and numerous others (e.g., Arroyo, 2005; Rudman, Bailey, Hope, Garrett, & Brown, 2005). An in-depth discussion of nonmedical reporting systems is outside of the scope of this paper, but readers can find excellent
reviews elsewhere (Barach & Small, 2000; Johnson, 2000a, 2003a). These medical error reporting systems differ on a number of dimensions, including the content of what is reported to them, the intended group of reporters, the format of the reports and the reporting media, how mandatory or voluntary it is to report, and the confidentiality and anonymity options in the system. These and other dimensions of medical error reporting systems are now discussed.

REPORTING SYSTEM DESIGN CONSIDERATIONS

As mentioned, purposeful design decisions must be made about what should be reported, who should report, how information should be reported, what should be done with reports, and so on. Here we discuss the design issues related to these questions from the literature on medical error reporting systems (see also Johnson, 2003a, for a comprehensive review of reporting system components and Ulep & Moran, 2005, for design considerations for Web-based reporting).

What Should Be Reported?

A reporting system must establish and define what is and what is not a reportable event in order to standardize reporting (Aspden et al., 2004), and taxonomies of error are critical to this. Should all identified safety concerns/hazards be reported before they lead to errors, as is done in the nuclear industry (Nuclear Regulatory Commission, 2005) and in the Confidential Incident Reporting & Analysis System (CIRAS) of the UK rail industry (CIRAS, 2005)? Or should reporting be restricted to actual errors? If the latter, then should every error be reported, even incidents (i.e., errors that do not result in harm), as is the policy in the aviation industry (see Barach & Small, 2000)? Or should the focus be on accidents – that is, those errors that lead to injury (e.g., Layde et al., 2002)? This incident/accident question, in particular, has received much attention in medical and nonmedical error reporting literature.
It has been written, for example, that events that result in harm should be reported (and are in fact disproportionately reported) for several reasons: They are usually detectable, are difficult to conceal, bear great costs, and are an obvious sign of problems (Layde et al., 2002). Incidents – also referred to as potential adverse events, near misses, and close calls – however, are less frequently reported but may be important to consider. Incidents and accidents may be caused by identical conditions (Barach & Small, 2000). Additionally, errors that did not lead to harm are valuable to study because they may point to the system factors and rescue and recovery efforts that can successfully contain the effects of errors (Kaplan & Fastman, 2003). Hazards are reported even less often in health care than are incidents. Hazard reporting is the most proactive form of reporting because hazard identification does not require that incidents, accidents, or injuries occur; all that is required is the identification of a situation that could increase the risk of an incident, accident, or injury. Hazards, incidents, and accidents all provide information vis-à-vis flaws in the system, but hazards and incidents are much more numerous and more frequent (Suresh et al., 2004; Williamson & Mackay, 1991). The frequency of hazards and incidents thus facilitates hazard analysis, which relies on there being some manner of data to analyze.

Furthermore, reporting frequent events like hazards and incidents can keep practitioners aware of the presence of hazards, encouraging mindfulness, alertness, and proactive actions (Kaplan, 2003; Leape, 1994; Reason, 2000; see Johnson, 2000a, for a discussion of the usefulness of incidents).
Finally, because hazards and incidents are not as emotionally charged as accidents and are not associated with blame or cover-ups, practitioners may be more willing to share these and may provide more information (Barach & Small, 2000; Kaplan & Fastman, 2003; but see Hamilton-Escoto, Karsh, & Beasley, 2006). In aviation, the Aviation Safety Reporting System (ASRS), Aviation Safety Action Program (ASAP), and the British Airways Safety Information System (BASIS) are demonstrations of how reporting of incidents can lead to safety improvements, and incident reporting has proven successful in other nonmedical job sites, such as manufacturing (Walker & Lowe, 1998).

However, there is a downside to reporting hazards and errors that do not result in harm: Some are very unlikely to ever cause severe harm or any harm at all; further, reporting of these and other hazards and incidents (given their frequency) may clog up the reporting system or overburden reporters and analysts alike (Karsh, Escoto, Beasley, & Holden, 2006; Leape, 1999; McGreevy, 1997). Furthermore, hazard reporting is difficult because reporters may not recognize many potential hazards if no bad outcomes have occurred in
their presence (Barkan, 2002), because of a lack of situation awareness (Stanton, Chambers, & Piggott, 2001), because potentially hazardous methods of providing care are the accepted norm (Reason, Parker, & Lawton, 1998), or because the reporters’ level of expertise makes them more likely to take risks that they do not perceive to be risky.

In this paper, we use the term error reporting systems, implying a focus on reporting all errors, irrespective of whether the error led to harm. However, we are not promoting reporting errors over reporting only hazards, incidents, accidents, or high-severity accidents, and in fact most of our discussion can apply to these kinds of reporting systems as well.

Other report content issues include whether there should be different policies for reporting errors with systemic versus person-centered root causes (Bogner, 1994; Reason, 1990) and whether reports should contain reporters’ opinions as to the cause of the event (see Hollnagel, 1993) or suggestions for correction or “best practices” (Beasley, Escoto, & Karsh, 2004). Obviously, in addressing all of the aforementioned issues, definitions for terms such as hazard, incident, error, and so on must be agreed upon. This necessity highlights the importance of developing mature taxonomies of error that can be applied in health care (Dovey et al., 2002; Kaushal & Bates, 2002; New York State Department of Health, 2005; Robert Graham Center, American Academy of Family Physicians Education Resource Center, & State Networks of Colorado Ambulatory Practices and Partners, 2005). These taxonomies not only guide what to report but can also provide an agreed-upon structure to error report data, which will facilitate subsequent analysis and control steps in the safety process.
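The hazard/incident/accident distinction discussed above reduces to two questions — did an error actually occur, and did it cause harm — and a reporting form or taxonomy must encode that distinction consistently. A minimal sketch of such a classifier follows; the type and function names are hypothetical illustrations, not drawn from any system cited in this review:

```python
from enum import Enum, auto

class EventType(Enum):
    """Reportable-event categories as distinguished in the text (names hypothetical)."""
    HAZARD = auto()    # risky condition identified; no error has yet occurred
    INCIDENT = auto()  # error occurred but caused no patient harm (near miss/close call)
    ACCIDENT = auto()  # error occurred and led to patient injury

def classify_event(error_occurred: bool, harm_occurred: bool) -> EventType:
    """Map the two distinctions drawn in the text onto a reportable category."""
    if not error_occurred:
        return EventType.HAZARD
    return EventType.ACCIDENT if harm_occurred else EventType.INCIDENT
```

For example, a mislabeled syringe noticed before any dose was drawn would classify as `classify_event(False, False)`, a hazard; the same syringe used but caught before injection would be an incident. Making these boundaries explicit is precisely the role the review assigns to agreed-upon taxonomies.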
Another major issue in designing reporting systems is the level of detail, if any, that should be provided regarding the who, what, when, where, why, and how of the reported event, and this is closely related to the issue of the format of the reports themselves. Some medical error reporting systems allow or require spoken, typed, or written narratives of what happened (e.g., Pace et al., 2003). Others include free-response fields of limited space (e.g., Medical Event Reporting System, 2005). Yet others require reporters to select mainly from a list of options using check boxes, pull-down lists, or codes (e.g., New York State Department of Health, 2005). Some systems offer a combination of open-ended and structured questions (e.g., National Aeronautics and Space Administration, 2002; Suresh et al., 2004). The success of nonnarrative formats rests on the inclusion of all the necessary fields and options to characterize an event. The chosen format will affect subsequent analysis processes (e.g., how easily one can generate descriptive statistics and discover trends; how much detail is available) as well as the design of reporting media (e.g., phone hotlines and E-mail may be more fitting for narratives than for selecting from a list of options). When the reporting format yields variation in how and what is reported, some consistency can be gained through the use of taxonomies. The common language provided by taxonomies in addition to free-text narratives, for instance, can retain the richness of narrative reports and at the same time allow for systematically organizing and analyzing the reported data.

In contrast to the manual clinician-generated reporting of hazards, errors, incidents, or accidents, data can be obtained and processed using computerized screening technologies that detect, collect, search, and analyze data, which would otherwise be done by clinicians or analysts (for reviews see Bates et al., 2003; Murff, Patel, Hripcsak, & Bates, 2003).
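To make the format trade-off concrete, a hybrid report can be sketched as a record that pairs taxonomy-coded fields with an optional free-text narrative. The field names, event categories, and counts below are hypothetical illustrations, not the fields of any system cited above; a real system would use an agreed-upon, domain-validated taxonomy.

```python
from collections import Counter
from dataclasses import dataclass
from enum import Enum

# Hypothetical taxonomy codes for illustration only.
class EventType(Enum):
    HAZARD = "hazard"
    INCIDENT = "incident"    # error reached the patient, no harm
    ACCIDENT = "accident"    # error resulted in harm

class ContributingFactor(Enum):
    COMMUNICATION = "communication"
    WORKLOAD = "workload"
    EQUIPMENT = "equipment"

@dataclass
class ErrorReport:
    """One report: structured, taxonomy-coded fields plus optional narrative."""
    event_type: EventType
    factors: list[ContributingFactor]
    location: str            # structured "where" field
    narrative: str = ""      # free text preserves richness for analysts

def factor_counts(reports: list[ErrorReport]) -> Counter:
    """Descriptive statistics are trivial over the structured fields."""
    return Counter(f for r in reports for f in r.factors)

reports = [
    ErrorReport(EventType.INCIDENT, [ContributingFactor.WORKLOAD], "pharmacy",
                "Near miss: similar drug names on adjacent shelves."),
    ErrorReport(EventType.HAZARD,
                [ContributingFactor.WORKLOAD, ContributingFactor.EQUIPMENT],
                "ICU"),
]
print(factor_counts(reports).most_common(1))  # workload appears in both reports
```

The structured fields support trend counting directly, while the narrative field retains detail that check boxes cannot capture — the combination discussed above.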
Who Should Report and How

System designers must keep in mind that multiple clinicians may witness the same event. Decisions have to be made whether to encourage each witness to report or to create a system for delegating responsibility to one person. The former may not be preferable if the reporting system treats each report as a separate event (Johnson, 2003b) or if it puts an unnecessary burden on clinicians. Delegation may be problematic as well because it may cause unfair distribution of responsibility, as when physicians delegate the responsibility to nurses (Hamilton-Escoto et al., 2006; Kingston, Evans, Smith, & Berry, 2004). Additionally, reports by a single professional group, such as physicians or nurses, may be biased to include certain facts and types of errors and to exclude others (e.g., Kingston et al., 2004; Ricci et al., 2004; Waring, 2004).

Local, Regional, National, or Specialty-Specific Reporting Systems?

Another issue is whether separate specialties should report errors to separate databases. Specialty-based reporting may provide information
on errors that are unique to the specialty. Additionally, aggregating errors within one specialty over multiple institutions may point to problems that are common within the specialty but are relatively infrequent or difficult to detect at any one facility (Suresh et al., 2004). However, data aggregated across an entire specialty may not be useful to individual practices. A similar trade-off exists between large national databases and regional systems (Barach & Small, 2000).

Mandatory Versus Voluntary Reporting

At the same time that the IOM (Kohn et al., 2000) encouraged establishing voluntary reporting from individual practitioners, it also suggested that states require the mandatory reporting of serious accidents and hazards in hospitals. The Joint Commission on Accreditation of Healthcare Organizations (JCAHO) now mandates reporting of such sentinel events. Mandatory reporting systems at the state and federal level are intended to keep organizations and practitioners accountable for their actions (Kohn et al., 2000), punishing continued disregard for patient safety (Flowers & Riley, 2000), and these systems have become understandably associated with disciplinary purposes (Leape, 2002). In contrast, the purpose of voluntary reporting systems is learning, not punishment, though this may not be perceived to be so. Several papers discuss the failure of mandatory systems (and the ability of voluntary systems) to stimulate reporting behavior and address system flaws (e.g., Barach & Small, 2000; Leape, 2002). However, in surveys of the public, a large majority favors a mandated reporting system with data available to the public (Blendon et al., 2002; Robinson et al., 2002). In a study of clinicians, physicians expressed interest in voluntary reporting, whereas clinical assistants thought that only a mandatory system would convey the importance of reporting errors (Hamilton-Escoto et al., 2006).
Anonymity and Confidentiality

Anonymity, confidentiality, or some manner of protection from discovery and punishment may be essential for potential reporters to overcome the barrier of fear (Beasley et al., 2004; Leape, 2002; Wakefield, Uden-Holman, & Wakefield, 2005). The downside to anonymity is that it blocks access to further information (Johnson, 2000b). Anonymous systems rely on reporters to provide sufficient information because there can be no follow-up (but see Runciman, Merry, & Smith, 2001, who believe that anonymous reporting can provide sufficient data). Additionally, anonymous systems cannot be used for the purpose of individual accountability, something that even clinicians may like in a reporting system (Evans, Berry, Smith, & Esterman, 2004). Systems that are not anonymous allow follow-up but might require protection from punishment before they can be trusted (e.g., Beasley et al., 2004; Britt et al., 1997). This is a characteristic of many confidential nonmedical error reporting systems, such as the ASRS, which provides legal immunity to all reporters (Barach & Small, 2000). However, as long as there is fear of any sort remaining – and this may be fear of shame or embarrassment, which cannot be removed through legal protection – there may be reluctance to report to a system requiring identifying information (e.g., Kingston et al., 2004).

System Design to Support Social-Cognitive Processes

A final consideration is the fact that error reporting is a social-cognitive process and must be understood as such. For instance, reporting involves the processes of encoding, storage, and retrieval of mnemonic information.
Reporting accuracy may thus be affected by memory, interference (e.g., distractions), and decay (e.g., when much time passes between the error and the report; e.g., Suresh et al., 2004). Reporting is susceptible to limitations and biases in memory and reasoning – for example, causal attribution and hindsight biases (for definitions and discussion, see Billings, 1998; Henriksen & Kaplan, 2003; Kaplan & Fastman, 2003; Parker & Lawton, 2003). Other consequences of the human social-cognitive system are that reporters will seek social and decision support (Hamilton-Escoto et al., 2006) and that habits, beliefs, affect, attitudes, motivation, and other social and cognitive factors may influence reporting behavior (Holden & Karsh, 2005; Kingston et al., 2004).

Additional Concerns

Reporting system designers and implementers will have to make further decisions not discussed previously in this paper, including decisions about when reports should be filed (e.g., when during the work schedule) and about the makeup of the
design and implementation team. Several studies have demonstrated that clinicians are especially interested in being involved in the design and rollout of reporting systems and that clinician suggestions may be quite useful (Beasley et al., 2004; Hamilton-Escoto et al., 2006; Karsh, Escoto, et al., 2006). Clinician participation may engender commitment and better design/implementation, but it requires awareness of cultural barriers within the organization and between professional groups.

For many of the design dimensions we have noted, it is far too early to determine which is “best” or even which is most fitting for which context. Much more research is necessary to understand (a) which design options are preferable in which context and (b) what the mechanisms are that result in reporting system success, given certain design characteristics. For instance, there is a need for comparative research to determine how medical error reporting formats differ in terms of usability. Further, research should examine the mechanisms by which certain formats affect reporting usability – perhaps using Nielsen’s (1993) dimensions of usability as a theoretical framework for exploring these mechanisms.

BARRIERS TO REPORTING

For whatever reasons, medical accidents and incidents are substantially underreported (e.g., Cullen, Bates, Small, Cooper, & Nemeskal, 1995; Stanhope, Crowley-Murphy, Vincent, O’Connor, & Taylor-Adams, 1999). Additionally, there is uneven reporting across practitioners, depending on their position or grade (Lawton & Parker, 2002; Vincent, Stanhope, & Crowley-Murphy, 1999; Waring, 2004). Without reporting, none of the objectives of reporting systems can be realized. Thus, a major focus of the literature has been to understand the barriers to reporting; this section discusses such barriers.
Busyness and Fatigue

An obvious but important fact is that doctors, nurses, and pharmacists, as well as other critical members of the health care community, are extremely busy. Although several states have restricted the maximum hours that nurses and pharmacists can work, and the Accreditation Council for Graduate Medical Education has restricted resident duty hours, work burden remains an issue. An obvious deterrent to reporting, then, is that the potential reporter is too busy and too tired or overloaded to report (ironically, busyness and fatigue may also raise the frequency of reportable medical errors). The literature consistently finds factors such as “time involved in documenting an error” and “extra work involved in reporting” (Suresh et al., 2004) to be leading self-reported barriers to reporting, and this is especially true of clinicians who experience high workload (Rogers et al., 1988). This is the case not only for physicians (e.g., Figueiras, Tato, Fontainas, Takkouche, & Gestal-Otero, 2001) but for pharmacists (e.g., Green, Mottram, Rowe, & Pirmohamed, 2001), surgical and medical specialists (e.g., Eland et al., 1999), midwives (e.g., Vincent et al., 1999), and nurses (e.g., Walker & Lowe, 1998) as well, though there may be intra- and interprofessional differences (Katz & Lagasse, 2000; Vincent et al., 1999).

Difficult Reporting Schemes and Lack of Knowledge About the Reporting System

Assuming that the error is noticed (and many incidents may not be; Cullen et al., 1995; Wakefield, Wakefield, Uden-Holman, & Blegen, 1996), clinicians may be unaware of the existence of the reporting system or of the system’s purpose.
Several studies report clinician reluctance or failure to report as a result of being unaware of the need or ability to report (Eland et al., 1999), not knowing who should report (Robinson et al., 2002), or being unsure of what to report or how to do it (Green et al., 2001; Jeffe et al., 2004; Rogers et al., 1988). The severity of the error’s effect, the error’s proximity to the patient in the process of events, whether the error resulted from behavior that complied with or violated procedures, and whether or not the error was preventable are factors moderating reporting (Antonow, Smith, & Silver, 2000; Katz & Lagasse, 2000; Lawton & Parker, 2002), and this may partially be attributable to a misunderstanding of the reporting system or of the definition of error (Wakefield et al., 2005).

Additionally, clinicians claim that reporting forms or schemes are too burdensome or complicated (Figueiras et al., 2001; Wakefield et al., 2005) or that they cannot locate reporting forms (Rogers et al., 1988). Johnson (2003b) suggested that difficulty of use stems from poorly designed reporting systems. For example, paper reporting forms are not always available or are difficult to find, and electronic reporting systems are sometimes inflexible, either constraining data entry or making
it difficult (see Karsh, Escoto, et al., 2006, who reported clinicians’ suggestions for electronic reporting system interfaces). Again, these trends were found in a variety of clinicians (e.g., pharmacists, general physicians, nurses), though inexperienced reporters, junior staff, and physicians tend to find reporting more difficult and tend to lack knowledge as compared with experienced reporters, senior staff, and nurses/midwives, respectively (Figueiras et al., 2001; Jeffe et al., 2004; Uribe et al., 2002; Vincent et al., 1999).

Aversive Consequences of Reporting

Other reasons for not reporting are rooted in the aversive nature of the outcomes associated with reporting and the fear that they generate. This fear has been found among junior- and senior-level physicians and nurses (Vincent et al., 1999; Weingart, Callanan, Ship, & Aronson, 2001). A prevalent blame culture contributes to nonreporting in a variety of ways. Doctors and nurses alike are fearful of disciplinary or legal action being taken against them or against their colleagues if they disclose an error event (Leape, 2002). It does not help that many state and federal reporting systems are actually established for disciplinary purposes. Participants in one study differed greatly in their opinions on what should be reported, what would be reported in a realistic situation, and what was reported in actuality; the discrepancy may be in part attributable to their beliefs that error reporting is a disciplinary tool (Cullen et al., 1995). The general fear of reprimand is well established in all clinicians, but perhaps especially so in nurses, and more so in junior staff (Vincent et al., 1999; Walker & Lowe, 1998).
Nurses also fear being held liable by authorities and being “found out” by peers, patients, and doctors (Wakefield et al., 1996); some also feel uncomfortable reporting coworkers, either out of concern for the coworkers or because the coworkers (e.g., physicians) have authority over them (Karsh, Escoto, et al., 2006; Uribe et al., 2002). Moreover, social repercussions might ensue if a reporter, especially a nurse, were found out to be a “whistleblower” (Hamilton-Escoto et al., 2006; Kingston et al., 2004).

Legal consequences are yet another concern for potential reporters (Horton, 1999; Lawton & Parker, 2002; Leape, 2000). Accordingly, both the IOM and JCAHO refer to the current medical liability system as a major barrier to reporting (JCAHO, 2005; Kohn et al., 2000; see also Sage, 2003). Physician opinion is in accord (Robinson et al., 2002), demonstrating more concern with the rates of malpractice insurance than with the error rate (Blendon et al., 2002). Although the degree to which fear of litigation prevents reporting may differ as a function of clinician group or seniority (Katz & Lagasse, 2000; Uribe et al., 2002; Vincent et al., 1999), such fear is often cited as a barrier to reporting (but see Rogers et al., 1988).

One must also consider the emotional effects of error on erring individuals, as these effects reveal a good deal about their reasons not to report. A large majority of physicians can recall at least one critical error in their practice (Christensen, Levinson, & Dunn, 1992; Newman, 1996). Erring physicians have initial feelings of agony and anguish, followed by the onset of guilt, anger, embarrassment, and humiliation. They fear legal action and being found out, and they generalize the mistake to overall incompetence, both as a physician and a person (Christensen et al., 1992).
Some physicians cope through self-disclosure (Christensen et al., 1992; Resnick, 2003), and many desire some sort of support. Many do not actually receive any support, and most who do receive it from their spouse; many physicians are not willing to offer their own unconditional support to a colleague in a hypothetical situation (Newman, 1996). Such findings point to the stigma associated with admitting fallibility (see Dickey, Damiano, & Ungerleider, 2003; Gallagher, Waterman, Ebers, Fraser, & Levinson, 2003; Kingston et al., 2004; Pilpel, Schor, & Benbassat, 1998). Error reporting may serve as a way to cope through disclosure, but, perhaps, only if it offers support, does not bring about feelings of fallibility, and does not exacerbate emotional suffering. Indeed, threats to self-image and psychological comfort may result in reluctance to discuss errors (Sexton, Thomas, & Helmreich, 2000). This may discourage reporting (Leape, 1999; Reinertsen, 2000) and promote the so-called code of silence (Barach & Small, 2000). Barring the removal of stigma and misperceptions of infallibility, designers of reporting systems must be aware of this fact.

In summary, findings point to a blame culture in health care, one that overemphasizes disciplinary action or other aversive consequences such as shame and the tendency to “shoot the messenger” (see “pathologic organizations” in Westrum, 1992). Such a blame culture discourages practitioners from admitting to and reporting errors. An
alternative is the safety culture, or “just culture” (Marx, 1999, 2001), and there is some agreement that establishing such a culture will remove the barrier to reporting posed by the potential of aversive consequences (e.g., Arroyo, 2005; Barach & Small, 2000; Kaplan & Barach, 2002; Reason, 2000).

Lack of Perceived System Usefulness

Evidence exists that an apparent lack of reporting system usefulness may also contribute to nonreporting. Reporting systems may have multiple potential functions or purposes, but chief among those is the identification and correction of system flaws through the analysis of reported data. If a reporting system is not perceived to help accomplish this purpose, then it may be thought of as useless and reporting as a waste of time. Perceptions of usefulness may be gained in two ways: (a) by actually using reported data to guide system improvement and (b) by making reporters aware that this is happening, which is referred to as feedback.

Accordingly, the Williamson and Mackay (1991) reporting method recommends that medical errors be recorded, analyzed for clues to the problematic system components behind a larger number of errors, used to eliminate or correct these system components, and shared with others through feedback (see also Johnson, 2003b; Kaplan & Fastman, 2003).
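The record–analyze–correct–feedback loop just described can be sketched as a minimal pipeline. The data structures, component names, and feedback wording below are hypothetical illustrations, not part of the Williamson and Mackay (1991) method itself.

```python
from collections import Counter

# Hypothetical reports, each tagged with a suspected system component.
reports = [
    {"id": 1, "component": "labeling"},
    {"id": 2, "component": "labeling"},
    {"id": 3, "component": "handoff"},
]

def analyze(reports):
    """Analysis step: rank the system components implicated in the most errors."""
    return Counter(r["component"] for r in reports).most_common()

def feedback(ranking):
    """Feedback step: tell reporters their reports are being acted on."""
    top, n = ranking[0]
    return f"{n} recent reports implicate '{top}'; corrective action under way."

ranking = analyze(reports)  # e.g., [('labeling', 2), ('handoff', 1)]
print(feedback(ranking))
```

The point of the sketch is that the feedback message is generated from the same analysis that drives correction, so reporters see the system being used — the perception-of-usefulness mechanism discussed above.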
Studies support the idea that lack of follow-up or a perceived uselessness of reporting may discourage nurses (Wakefield et al., 1996; Walker & Lowe, 1998) and physicians (Uribe et al., 2002) from reporting, whereas it has been suggested that reporting systems that are useful and that are perceived to be useful by reporters can promote reporting behavior (e.g., Kaplan et al., 1998). A participant in one of Jeffe et al.’s (2004) focus groups echoed many clinicians’ opinions toward reporting systems that do not provide feedback to reporters: Reporting to such systems is “wasted energy.” If clinicians do not see error reporting as a means of bettering the situation and correcting the underlying factors that initially led to the error, then what reason is there to report, other than adhering to the law?

DESIGNING A MORE EFFECTIVE REPORTING SYSTEM

Thus far, we have identified important characteristics of reporting systems and barriers to reporting. In this section we turn to the literature on reporting as well as the human factors and safety disciplines to approach the problem of designing an effective reporting system.

To begin with, a necessary – but perhaps not sufficient – first step to designing an effective reporting system is to remove the reporting barriers we have described. The most notable barriers are reporting systems that are difficult to use or not time efficient, combined with a busy and fatigued workforce; lack of knowledge about the reporting system; fear of aversive consequences of reporting; and a perceived lack of usefulness of reporting. Table 1 takes a synthesized view of the literature and draws together several authors’ suggestions for reducing these barriers. For example, to address the barrier of reporting system difficulty, one common suggestion in the literature is that system design should fit with clinician work factors such as busyness and fatigue.
The idea of “fit” (Holden & Karsh, 2005; Karsh, Escoto, et al., 2006) more generally suggests that a successful reporting system design must be relatively compatible with the characteristics of the workplace (its users, tasks, environment, organizational factors, etc.). This means that many of the suggestions available in the literature may not be successful in every system; accordingly, more research is necessary to understand the contextual factors and health care-specific nuances that determine the effectiveness of the suggestions in the literature.

Additionally, the table includes suggested solutions for dealing with users’ lack of knowledge of the reporting system and with the barrier of fear. In regard to the latter, it has been suggested that organizations should transition from a culture of blame, shame, and quick fixes to a “just culture” (Kaplan & Fastman, 2003). A just culture is one in which individuals are not blamed or punished if an error occurs, as long as there was no intent to harm (Marx, 1999, 2001), and reporting of errors is encouraged because reporting can result in learning. In these ways a just culture avoids the tension (or even injustice) that exists in a blame culture, wherein it is not acceptable to err yet it is required that practitioners report (admit to) these errors. Reporting systems within such a culture, or more generally ones that provide anonymity (and thus cannot be punitive), have been predicted and shown to facilitate more reports than systems in which punishment is a possibility (Kaplan et al., 1998; Kingston et al., 2004; Leape, 2002). A just culture can also promote the usefulness of reporting if it is able to encourage learning from errors (Reason, 1997, 2000).

Other suggestions for overcoming a perceived lack of usefulness are presented in Table 1.

TABLE 1: A Synthesis of the Literature Yields Suggestions for Addressing Reporting Barriers

System Difficulty and Inefficiency
- Include interface specialists in the design process in order to design intuitive and usable reporting forms with clear instructions. Designing a reporting form that uses check boxes and a limited narrative reporting interface can save time and effort. (Johnson, 2003b; Kaplan & Fastman, 2003; Vincent et al., 1999)
- Limit the length/difficulty of the reporting process by providing quicker reporting alternatives such as Web reporting or phone/hotline reporting. (Beasley et al., 2004; Cullen et al., 1995; Jeffe et al., 2004; Kingston et al., 2004; Rudman et al., 2005; Wilf-Miron, Lewenhoff, Benyamini, & Aviram, 2003)
- Reporting system components associated with system ease of use and time efficiency should fit the (busy and fatiguing) work flow of health care practice. (Beasley et al., 2004; Holden & Karsh, 2005; Karsh, Escoto, et al., 2006)

Lack of Knowledge About the Reporting System
- Define the purpose of the system at the outset and define what must be reported. Communicate these definitions (e.g., explain the working definition of medical errors) and communicate practitioners’ reporting responsibilities. (Beasley et al., 2004; Flink et al., 2005; Jeffe et al., 2004; Stanhope et al., 1999; Uribe et al., 2002; Williamson & Mackay, 1991)
- Provide training that builds knowledge about the system and how to use it, and then provide continuing education about the system. Continue to provide system use information (e.g., a list of reportable incidents) that can be accessed at any time. The training should be tailored to different types of health care professionals. (Desikan et al., 2005; Flink et al., 2005; Hart, Baldwin, Gutteridge, & Ford, 1994; Jeffe et al., 2004; Kingston et al., 2004; Uribe et al., 2002; Vincent et al., 1999)

Fear of Aversive Consequences of Reporting
- Institute a “no-blame,” nonpunitive policy (or “just culture”) that encourages learning, not punishment, and in which practitioners are comfortable reporting errors. Errors must not be thought of as shameful, and clinicians must be supported, not shunned, when an error occurs. As the focus should be on the system, not on blaming the individual, begin eliminating the blame culture by educating clinicians about system-based versus person-based causes of errors. (Arroyo, 2005; Barach & Small, 2000; Bates et al., 1995; Cullen et al., 1995; Jeffe et al., 2004; Kaplan, 2003; Kingston et al., 2004; Leape, 1994, 2002; Marx, 2001; Reason, 2000; Vincent et al., 1999; Wakefield et al., 1996, 2005; Wilf-Miron et al., 2003)
- Address existing legal barriers to reporting. Provide reporters protection and immunity from disciplinary action. Carry out disciplinary actions only if the error is egregious. (Barach & Small, 2000; Harper & Helmreich, 2005; Phillips, Dovey, Hickner, Graham, & Johnson, 2005; Vincent et al., 1999; Wilf-Miron et al., 2003)
- Protecting reporters can be facilitated by providing the option for confidential or anonymous reporting. Confidential and anonymous reporting systems should not forsake accountability. (Beasley et al., 2004; Jeffe et al., 2004; Rudman et al., 2005; Runciman et al., 2001; Uribe et al., 2002)
- Clinicians can be better protected if reports are analyzed by external or independent organizations. At the very least, access to the error report database should be limited. (Karsh, Escoto, et al., 2006; Kingston et al., 2004; Suresh et al., 2004; Uribe et al., 2002)
- Reporting should be voluntary until a safer and more accepting reporting culture is established in health care. At the very least, there should be an option of reporting to a nonpunitive local reporting system alongside any reporting systems mandated by government or organizations responsible for oversight. If there are both government-mandated and voluntary reporting systems in place, make clear distinctions between these systems. (Beasley et al., 2004; Cohen, 2000; JCAHO, 2005; Kohn et al., 2000)
- Do not design systems in which individuals can be “found out.” For example, remove large logos that show up on reporting system interfaces, alerting one’s colleagues as to what he or she is doing. (Johnson, 2003b)

Perceived Uselessness of Reporting
- Develop a useful process for selecting and analyzing reports. Be careful of reporting and analytical biases that may render any corrective action ineffective. (Harris et al., 2005; Johnson, 2002)
- Do not overload the system with reports to the extent that effective analysis cannot be carried out. One way to do this would be to create specialty-based systems that provide useful and relevant expert feedback. The reporting system could thus be tailored to fit with clinicians’ specific problems. (Harper & Helmreich, 2005; Leape, 2002; Suresh et al., 2004)
- Establish a group or task force to process reported data and generate strategies for improvement. Invest sufficient funds and dedicated personnel in this task to establish useful data storage and analysis systems. (Barach & Small, 2000; Cullen et al., 1995; Flink et al., 2005; Jeffe et al., 2004; Johnson, 2003b; Uribe et al., 2002)
- In general, take corrective actions following the analysis of reports and provide feedback demonstrating that actions were taken. More specifically, analysts can use quality improvement techniques such as total quality management and continuous quality improvement to follow up on reports. To make this a manageable task, priorities may need to be assigned to each potential follow-up effort. (Cullen et al., 1995; Harper & Helmreich, 2005; Kaplan & Fastman, 2003; Kingston et al., 2004; Leape, 2002; Rudman et al., 2005; Suresh et al., 2004; Vincent et al., 1999; Wakefield et al., 1996; West et al., 2005; Williamson & Mackay, 1991)
- Put emphasis on recovery efforts following error reports so that it is immediately obvious that reporting is useful for improvements in patient care. (Barach & Small, 2000; Kaplan & Fastman, 2003; Wakefield et al., 1996)
- Provide feedback to those submitting reports as well as regular feedback identifying recent errors, associated hazards, and hazard control strategies. Additional feedback can be provided to clinicians on what they should and should not be doing. Feedback should include encouragement to continue to report. One suggestion on implementing feedback is to provide it in the form of summary data that are pertinent to practice; it should be informative, anonymous, and nonaccusatory, and it may be best if the feedback does not come from a supervisor. (Beasley et al., 2004; Flink et al., 2005; Harper & Helmreich, 2005; Jeffe et al., 2004; Kingston et al., 2004; Martin et al., 2005; Uribe et al., 2002)

Useful reporting systems are ones that meet objectives established by individuals, organizations, or industries. Typically the objectives are related to performance, safety, and quality of care. One common objective is the correction of system flaws that lead to errors. However, by itself, reporting errors cannot meet this objective (Johnson, 2002, 2003b). Kaplan and Fastman (2003) reviewed steps to take in processing reported data in a useful way so that objectives can be met. Perhaps the most crucial test of usefulness is whether reported data are followed up on in a way that sense can be made out of them and system improvement can result. This might require tools such as failure modes and effects analysis, sociotechnical system variance analysis, fault trees and root cause analysis, or any number of other methods for investigating accidents and incidents.

When data are processed and followed up on, the system may be objectively useful in some sense, and it follows that such systems are associated with more reporting (Cullen et al., 1995; Johnson, 2003b; Kaplan & Fastman, 2003). Even when processing and follow-up procedures are established, a further consideration should be whether potential reporters actually perceive the system to be useful. This is because individual assessments of usefulness may be the deciding factor for reporting behavior (Holden & Karsh, 2005). This can be illustrated in the case of a practitioner whose reported data are analyzed and investigated without his or her awareness of this fact. Thus, from the reporter’s perspective, the system may be an “administrative black hole” (Kaplan & Fastman, 2003, p. ii69) and thus not very useful.
Feedback, whether about the status of one’s report or the corrective actions that it generated, is one way in which reporters can become aware of the usefulness of a system. Feedback can be given immediately following a report through prompts in an electronic reporting system or through Web sites, newsletters, E-mail, list server messages, or scheduled meetings (Beasley et al., 2004; Martin, Etchegaray, Simmons, Belt, & Clark, 2005; Suresh et al., 2004). Feedback in the form of identified errors or hazards in the system and associated control strategies may also contribute to the actual usefulness of the system to the extent that the purpose of the system includes risk communication and error/hazard management (Kaplan & Fastman, 2003; Karsh, Escoto, et al., 2006). For these reasons, it is believed that building timely feedback into a medical error reporting system builds trust and encourages reporting (e.g., Beasley et al., 2004; Kaplan et al., 1998; Suresh et al., 2004).

Because a system’s usefulness depends on its ability to achieve its purpose, a system will be perceived to be useful to the extent that practitioners are aware of its purposes. A reporting system designed to capture data on near miss recovery efforts may be quite effective in this respect, but it might not be perceived as useful by individuals who believe that a successful system should reduce hazards.

Finally, there are several roadblocks to achieving usefulness. First, data processing depends on the richness of the data. Thus, data that are not detailed enough either cannot be usefully processed or would require follow-up with the reporter. The latter may not be possible in an anonymous system. Thus, there may be a trade-off between having a useful system and providing anonymity (Barach & Small, 2000; but see Runciman et al., 2001). Second, not every report can be followed up on (Kaplan & Fastman, 2003; Leape, 1999), and data processing and storage may be quite costly (Johnson, 2003b).
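One generic way to cope with this capacity limit is risk-based triage of incoming reports. The scoring scheme below is a hypothetical illustration of such a triage rule, not a method proposed in the literature reviewed here; the weights and frequency estimates are invented for the example.

```python
# Hypothetical severity weights for a risk-matrix triage:
# follow up first on reports whose severity x estimated frequency is highest.
SEVERITY = {"hazard": 1, "incident": 2, "accident": 4}

def priority(report: dict) -> int:
    """Higher score = earlier follow-up; score = severity x estimated frequency."""
    return SEVERITY[report["type"]] * report["est_monthly_frequency"]

queue = [
    {"id": "R1", "type": "hazard", "est_monthly_frequency": 20},
    {"id": "R2", "type": "accident", "est_monthly_frequency": 1},
    {"id": "R3", "type": "incident", "est_monthly_frequency": 5},
]
for r in sorted(queue, key=priority, reverse=True):
    print(r["id"], priority(r))  # R1 first: a frequent hazard outranks a rare accident
```

Any such rule is itself a design decision: it encodes a judgment about which reports matter most, which is exactly the prioritization question the text turns to next.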
Thus, prioritizing, assessing, and other data treatment methods may be necessary to protect the system from becoming ineffective (Kaplan & Fastman, 2003). Similarly, corrective actions must be prioritized; too many changes at once may destabilize a system.

Many of these suggestions can be implemented by designing better reporting systems or redesigning the broader work system. This implies that underreporting and reporting system failure are the result of design, not of clinician motivation. Low reporting is to be expected when a reporting system is not integrated into patient care, such as when a system takes time away from clinical work without ever improving clinical outcomes.

Some of the suggestions in Table 1 are taken from empirical work, but there is a need to validate these suggestions and to conduct research that can guide more specific design suggestions. Similarly, many other practices can be suggested for reporting of errors, but these need to be evidence based. A checklist for procedures with a check for
268 April 2007 – Human Factors

every step done according to plan – allowing errors to be "reported" in the process – can help more smoothly embed reporting into current practices. (We thank an anonymous reviewer for this suggestion.) The need for studies demonstrating the effectiveness of such a reporting system is as great as that for studies validating the suggestions in Table 1.

SUMMARY OF REPORTING SYSTEM LITERATURE

A review of the findings of such studies reveals several trends. First, many components must be considered in the design of error reporting systems, and the decisions must lead to a reporting system that fits within the context of the implementing facility (its culture, goals, staff, user needs, practices, organizational characteristics, etc.). Theories of reporting system success and future research may need to go beyond prescribing one-size-fits-all solutions to reporting system design and, instead, further explore requirements for compatibility or fit (see, e.g., Karsh, Holden, Alper, & Or, 2006). Second, barriers such as busyness and fatigue, lack of knowledge about the existence and proper use of the reporting system, a culture of blame, lack of organizational support, and a lack of usefulness or perceived usefulness of reporting systems may lead to nonreporting. These factors, too, need to be included in future research and design attempts.

The reporting literature also demonstrates considerable differences between professional groups, such as physicians and nurses, and between levels of seniority within the same group. These differences in attitudes, behaviors, barriers, and incentives may stem from deep-rooted professional and cultural differences (Hamilton-Escoto et al., 2006; Kingston et al., 2004). The mechanisms behind these differences must be understood in order to inform the design of effective reporting systems.
Finally, it is obvious that multiple barriers and incentives influence reporting behavior and that there is no "silver bullet" for increasing the amount of reporting; instead, multiple factors need to be addressed simultaneously if one expects to achieve consistent, successful reporting. Along the same lines, interactions between barriers must be further studied – for instance, some write of a trade-off between easy-to-use, time-efficient reporting systems and the amount of content that can be used to guide future redesign. Research should attempt to confirm this trade-off and to better understand it.

LACK OF THEORY IN THE REPORTING LITERATURE

Conspicuously absent in the reporting literature is a theoretically grounded organizing framework that can explain findings and guide successful reporting system design. With very few notable exceptions (Holden & Karsh, 2005; Karsh, Escoto, et al., 2006; Kingston et al., 2004), the reporting literature is atheoretical despite the availability of a wide variety of theoretical frameworks from the fields of human factors, social and cognitive psychology, communications science, management, and technology change/acceptance, to name a few. As Holden and Karsh (2005) noted, "Employing a theoretical framework may provide more insightful evaluation and interpretation of findings and may guide the selection of factors to explore and hypotheses to test. Conversely, an atheoretical approach risks missing key factors, is weak for explaining how findings within and between studies interact, and makes it difficult to make generalizations about future findings or – importantly – about practical design decisions" (p. 131). To demonstrate how existing theory from the technology acceptance and adoption literature could be used to frame results from a medical error system study, Karsh, Escoto, et al.
(2006) presented a multilevel systems model that integrated innovation diffusion theory, sociotechnical systems theory, and the technology acceptance model to explain system design and implementation considerations at the organization, system, and individual user levels.

For the same reasons, Holden and Karsh (2005) have also demonstrated the usefulness of theories of motivation, decision making, and technology change and acceptance for understanding reporting behavior. In one of the few other instances of using theory to guide medical error reporting research, Kingston et al. (2004) utilized behavioral modeling theory to frame their focus group findings. Because of the lack of theory guiding medical error reporting system research, it is important to begin to develop a framework that can guide the pursuit of testable models of medical error reporting. Furthermore, theoretical frameworks for reporting can be – but have rarely been – inductively
developed from the ground up, based on the evidence accrued in the reporting literature. One purpose of the current paper is to present one such theoretical model. This model is grounded in a multiple-level systems framework and in the human factors engineering concept of fit. At the same time, it draws on the findings from the literature reviewed here, integrating much of what is known and illuminating what is not known.

A THEORETICAL FRAMEWORK FOR RESEARCH AND DESIGN

In Figure 1, a model is introduced that can serve as a framework for addressing the research and design gaps identified in the literature. Specifically, the model provides a framework for testing hypotheses about why the barriers and incentives we have discussed influence reporting behavior and reporting system success. As indicated, success is defined relative to the purpose of the system and can include appropriate reporting by appropriate individuals, useful analysis of the reported data, implementation of hazard control strategies that follow from the analyzed data, and evaluation of these strategies. At the same time, the model provides an integrated framework for error reporting system designers in that it demonstrates how error reporting systems must be designed to fit within the complex, hierarchical, health care delivery system and shows that design must include consideration for the reporting, analysis, control, and evaluation stages.

Reporting is only one step in a larger cycle of required safety actions. Additional steps are analyzing reports to determine whether and how to control reported errors, developing and implementing engineering and administrative interventions, and evaluating system performance following redesign.
Even these safety activities are only part of what should be a much larger set of safety activities, including, among other things, proactive risk analysis, proactive hazard control, hazard inspection, and injury surveillance (Smith, Carayon, & Karsh, 2001; Smith, Karsh, Carayon, & Conway, 2003).

The reporting-analysis-control-evaluation cycle is illustrated in Figure 1; as it shows, the principal actors involved in reporting, analysis, control, and evaluation are referred to as reporters, analysts, change agents, and evaluators, respectively. Although a single person could serve multiple safety roles, limitations on time and training make this difficult; thus, in health care, clinicians are primarily reporters and are not often involved in analysis, control, or evaluation. Reporting as an activity can be carried out independently of the other steps in the cycle, depending on the organization's objectives for the reporting process. If the objective is strictly "learning" (Kaplan, 2003; Leape, 2002; Leape et al., 1998), then it is not necessary to control identified hazards, only to conduct analyses to learn about existing hazards. If the objective is "system improvement" (Beasley, Escoto, & Karsh, 2004; Kaiser Permanente, National Quality Forum, & Drucker, 2000), then hazard control must be carried out and evaluated. Thus, under some objectives, hazard control activities may not be carried out, and reporting systems may therefore not always lead to safer systems from a hazard reduction point of view. Certainly, the purpose of the reporting system – especially the extent to which reducing and eliminating hazards is a priority – will affect the process of reporting and the many considerations that need to be made in designing this process, such as who should report and what should be reported.

The central concept of the model is fit, concordant with a human factors/systems approach to design. An earlier study by Karsh, Escoto, et al.
(2006) expanded on the application of the concept of fit to error reporting design; here we provide only an abbreviated discussion. Fit is the product of interactions between error reporting technology characteristics and the various subcomponents of the work system in which the technology is nested. In the model, several work system levels are specified; at each level are factors that determine fit and, therefore, the success of reporting, analysis, control, and evaluation. For instance, at the work group level, work design factors (e.g., staffing ratios) interact with the reporting technology design (e.g., reporting format) to determine fit. In this case, fit may mean an easy-to-use and usable reporting system relative to the amount of busyness and fatigue faced by clinicians. Here, fit encourages reporting. Likewise, other instances of fit can be determined based on the reporting literature reviewed in this paper. For example, the fit between the culture of the organization and the anonymity options available for reporting can determine whether or not clinicians report.

One way this model differs from previous models of error reporting systems is that it integrates a
[Figure 1. The interconnected cycle of reporting, analysis, control, and evaluation provides a framework for understanding the role of reporting systems in safety. The concepts of fit, cross-level effects, feedback between the stages in the safety cycle, and changes in the model over time are consistent with the medical error reporting literature and relevant theories from multiple scientific disciplines. These and other concepts of the model suggest theory-based routes for future design and research.]
number of factors across levels that are important to research and design jointly, as opposed to separately. Instead of simply listing the reporting system variables reviewed in this paper, the model encourages the understanding of the interactions that produce fit, a truer depiction of the complexity of health care systems. The model explicitly depicts the contribution of different levels of organizational hierarchy to the success or failure of an error reporting system. Previous discussions of error reporting systems have asserted that reporter-reporting system variables such as ease of use and organization-reporter variables such as organizational support of reporting are important for reporting system success. However, our framework suggests that vertical alignment (or fit) throughout all levels of organizational hierarchy needs to be investigated through research and designed in practice (Rasmussen, 1997; Vicente, 2003). Likewise, previous discussions of error reporting have focused on facilitators and barriers to reporting, whereas this model demonstrates that studying and/or designing for reporting is but one step in the overall process. Figure 1 provides a framework in which reporting is integrated into the larger safety cycle.

The model in Figure 1 frames error reporting as one component of a safety program. Like the other components, reporting is affected by multiple levels in the organizational hierarchy of systems, which is one of the new elements of this model of error reporting. The left side of the figure, titled "work system," depicts this hierarchy. It is based on the work system models developed by Smith and Sainfort (1989) and Carayon et al. (2003).
It demonstrates how an error reporting system is nested within a work group, which might be a care team, unit, or department, nested within a higher level we refer to as the organization (which could be a hospital or a health care system), which itself is embedded in an even larger system. The actual definitions of each level and, for that matter, the actual number of levels depend on the unit of analysis for a given study or health care organization. Though the work system factor appears on the left side of the model, note that the system-reporting technology interaction affects all four stages; this is because all the stages in the cycle take place within the work system. The model shows that separate, but related, design considerations need to be in place to promote the goals in each of the four stages. For example, to promote error identification, the reporting mechanism must be designed for the reporters; that is, it must be easy to use, nonthreatening, and integrated into the current work flow and work environment. The requirements for design success and fit are different at the analysis, control, and evaluation stages. Although the design of these stages of the safety cycle is beyond the reporting focus of this paper, each step is important to consider because of the feedback from each step to the others. Thus, research is needed to understand the interplay between reporting and other stages in the safety cycle, as discussed in the Feedback Through the Stages section.

The identification-analysis-control-evaluation processes are not simply linear. Instead, there are two types of feedback loops at work: (a) feedback through system hierarchies, known as cross-level effects (Klein, Dansereau, & Hall, 1994); and (b) feedback through steps in the cycle (see, e.g., Bogart, 1980). Both types of feedback are important to understand because each exerts influence on the success of the four stages of the cycle.
A lack of understanding of these feedback influences can result in unanticipated detrimental consequences.

Feedback Through the System Hierarchies

Because error reporting is influenced by the interactions of multiple levels within the hierarchy, it is important to consider cross-level feedback. Such feedback could be in the form of policies, information, normative influences, or goals and rewards. For brevity we provide examples using only the latter two forms of feedback. For instance, at the reporting stage, in which individuals choose whether they will report an error, individual behaviors are affected by higher-level group, organizational, and industry factors. As discussed in the review of the literature, reward and punishment structures may affect individual reporting decisions (e.g., if nurses are rewarded more for productivity than for reporting), as may culture (e.g., blame vs. just culture) or organizational structure. Other cross-level effects that influence reporting include those related to training and information provision (e.g., technical competence of clinicians and system usability) and social influences at the individual, group, organizational, and industry levels. In turn, the content and frequency of reporting may create feedback effects through
the hierarchy of systems, alerting management to change rewards for and punishment of certain behaviors, creating the need for training or redesign, and/or affecting the culture by either reinforcing or contradicting norms and beliefs related to reporting.

At the analysis stage, pertinent rewards provided and goals developed by higher levels, such as management, will affect how data are analyzed and used by analysts. For example, if management simply rewards showing trends, that is likely what the analysts will produce. In turn, what the individual analysts produce will subsequently affect what management rewards, depending on whether the analysis reports are consistent with management expectations.

At the hazard control (i.e., intervention) level, as before, the types of control activities that produce rewards are those likely to be designed and implemented. The management level also impacts the efficacy of safety control activities to the extent that management is the entity that provides time and financial resources to staff for designing and implementing controls. In turn, the types of controls developed by individuals or units will influence management to change or accept the current course of control activities and will impact what types of data they want reported and what types of analyses they want produced. The evaluation of implemented safety controls could influence the design of subsequent controls, depending on the success or failure of the interventions, and even the nature of report forms, to the extent that the organization learns from evaluations that it needs more specific information to be reported. From a scientific point of view, each of the possible scenarios presented in this section provides a testable hypothesis. From a design point of view, each example may be, if the empirical evidence provides support, in need of design consideration.
Feedback Through the Stages

Each stage in the cycle can influence the other stages. What is reported will determine the types of analyses that can be produced and the nature of follow-up required. Likewise, the depth and breadth of the analyses will determine the specificity of possible interventions to control hazards and the criteria used to evaluate these interventions. The degree to which targeted controls are developed and implemented will impact whether clinicians continue to report; they will be unlikely to continue reporting if they do not see interventions being implemented that were based on their reports. Looking at feedback in the other direction, evaluation produces adjustments to future control activities. Also, the type of data required for these targeted control activities can be fed back to determine the types of analyses that are required to produce optimal hazard control interventions. Finally, the types of analyses required for sound interventions should help to determine the type of data required from the reporters. Again, each aforementioned proposal is an opportunity for research and design.

The proposed model also incorporates the element of time by demonstrating that the cycles of feedback among steps in the process and levels of hierarchy are continuously operating and affecting each other. This notion is important for understanding that decisions made at any stage of the cycle or in the various levels of organizational hierarchy will have immediate or delayed effects on medical error reporting and safety in general.

The success of a medical error reporting system is determined by design considerations in the four steps (reporting, analysis, control, and evaluation) and by feedback systems that run through the steps and levels. This shows just how complicated it is to design and implement a successful system and further emphasizes the importance of proposing a research framework that can lead to testable hypotheses.
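The nonlinear structure of the cycle can be sketched concretely. The fragment below is our illustration only, not the authors' implementation; the particular choice of feedback edges is an assumption consistent with the examples just given (evaluation adjusts controls, required controls shape analyses, required analyses shape report forms, visible interventions sustain reporting). It represents the four stages as nodes in a directed graph with both forward and feedback edges.

```python
# Illustrative sketch of the reporting-analysis-control-evaluation
# cycle as a directed graph. Edge choices are assumptions drawn from
# the discussion of feedback through the stages.
STAGES = ("reporting", "analysis", "control", "evaluation")

# Forward flow: each stage feeds the next.
FORWARD = [(STAGES[i], STAGES[i + 1]) for i in range(len(STAGES) - 1)]

# Feedback through the stages (assumed edges).
FEEDBACK = [
    ("evaluation", "control"),   # evaluation adjusts future controls
    ("control", "analysis"),     # needed controls shape analyses
    ("analysis", "reporting"),   # needed analyses shape report forms
    ("control", "reporting"),    # visible interventions sustain reporting
]

def influences(stage):
    """Stages directly influenced by `stage` via forward or feedback edges."""
    return sorted({dst for src, dst in FORWARD + FEEDBACK if src == stage})

# The process is not simply linear: the control stage influences the
# next stage (evaluation) as well as earlier ones (analysis, reporting).
assert influences("control") == ["analysis", "evaluation", "reporting"]
```

The value of writing the cycle down this way is that each assumed edge corresponds to a testable hypothesis about how one stage affects another.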
FUTURE RESEARCH

The model proposed in Figure 1 and the foregoing discussion provide multiple directions for future research into medical error reporting systems. In the review of the literature we identified areas in need of research. Here we provide more specific possible research questions, divided into two categories: questions related to particular steps in the safety cycle and questions related to the relationships among the steps.

Within steps, possible questions include, generally, what factors predict success at a given stage, and how do these factors interact (or fit) to determine success? The literature reviewed here provides a few preliminary answers but is limited in that it often fails to recognize the hierarchically complex, interactive, and dynamic nature of reporting systems. As the model suggests, for studies to successfully address questions about such
a system, data will have to be collected at multiple levels of hierarchy to allow for testing of cross-level effects. Specific questions that require such data might include the following:

• What variables at different levels of hierarchy predict end-user use (or rejection) of a medical error reporting system?
• Is error reporting behavior independent, homogeneous, or heterogeneous within groups?
• What types of resources and system design considerations facilitate successful analysis of reported data?
• What variables at different levels of hierarchy promote (or hinder) the adoption and success of hazard control interventions within health care organizations or within groups?
• How do changes in variables that affect a given step impact success at that step over time?

These questions are somewhat broad, and within these, specific research questions will need to be independently developed by researchers, depending on their theoretical interests and area of application (i.e., because different health care contexts call for nuanced research questions). Additionally, the suggested relationships among the four steps produce questions that are even more complicated to address, especially when considered alongside cross-level effects. However, the relationships among identification/reporting, analysis, control, and evaluation are equally important to address because, as the model shows, the stages are interrelated through feedback mechanisms. Specific questions might include the following:

• Which, if any, variables at different levels of hierarchy simultaneously predict success at the reporting, analysis, control, and evaluation stages?
• How do changes at one stage affect subsequent changes in the other stages over time?
For these and any other questions that might be generated and tested from the model, it is important to utilize existing theories to guide the research. For example, questions related to reporting behavior might benefit from existing psychological theories of motivation and behavior as well as theories on technology adoption and acceptance (Holden & Karsh, 2005). Questions dealing with cross-level effects might benefit from organizational theories or sociotechnical systems theory (Clegg, 2000; Karsh & Brown, 2005). Similarly, questions dealing with hazard control interventions might benefit from decision-making theories (DeJoy, 1996). In each case, the appropriate theory will depend on the specifics of the questions being studied, and the model should provide guidance as to the types of variables to consider. Furthermore, researchers can build on and refine each other's theories. The conceptual model in Figure 1 is a first attempt at integrating knowledge from the reporting literature. Components of the model need to be tested and the model amended appropriately. We propose that beginning to develop and revise models and theories for reporting and related safety stages in health care is a needed next step.
CONCLUSION

Successful medical error reporting systems can be one approach toward safer and higher quality patient care. Whether the system is successful depends on how well it achieves its goals, which include identification, analysis, control, and continuous improvement. The medical error reporting literature suggests several factors that affect reporting system success. These include a reporting system that is usable (e.g., easy to use and time efficient), is known to users, and fits with their workflow; that is useful and provides feedback to its users demonstrating this usefulness; and that provides rewards and does not punish users. Many design considerations are necessary to provide for successful systems. The model presented here suggests that these design considerations will be optimally accounted for when medical error reporting systems are treated as dynamic and multilevel systems characterized by multiple interacting processes. Future research and design/implementation efforts must account for all levels of this system's hierarchy, all four steps in the cycle, and the dynamic feedback between these levels and steps, all within the context of a wide assortment of available theoretical frameworks.

ACKNOWLEDGMENTS

This work was supported by a grant from University-Industry Relations at the University of Wisconsin-Madison. The authors thank the reviewers for their helpful comments and Brett Marquardt for helping to collect and summarize some of the articles referenced in this paper.

REFERENCES

American College of Surgeons. (2005). National Surgical Quality Improvement Program. Retrieved November 15, 2005, from https://acsnsqip.org/main/default.asp
Antonow, J. A., Smith, A. B., & Silver, M. P. (2000). Medication error reporting: A survey of nursing staff. Journal of Nursing Care Quality, 15, 42–48.
Arroyo, D. A. (2005). A nonpunitive, computerized system for improved reporting of medical occurrences. In K. Henriksen, J. B. Battles, E. S. Marks, & D. I. Lewin (Eds.), Advances in patient safety: From research to implementation (Vol. 4, pp. 71–80). Rockville, MD: Agency for Healthcare Research and Quality.
Aspden, P., Corrigan, J. M., Wolcott, J., & Erickson, S. M. (2004). Patient safety: Achieving a new standard for care. Washington, DC: National Academies Press.
Barach, P., & Small, S. D. (2000). Reporting and preventing medical mishaps: Lessons from non-medical near miss reporting systems. British Medical Journal, 320, 759–763.
Barkan, R. (2002). Using a signal detection safety model to simulate managerial expectations and supervisory feedback. Organizational Behavior and Human Decision Processes, 89, 1005–1031.
Bates, D. W., Cullen, D. J., Laird, N., Petersen, L. A., Small, S. D., Servi, D., et al. (1995). Incidence of adverse drug events and potential adverse drug events – Implications for prevention. Journal of the American Medical Association, 274, 29–34.
Bates, D. W., Evans, R. S., Murff, H. J., Stetson, P. D., Pizziferri, L., & Hripcsak, G. (2003). Detecting adverse events using information technology. Journal of the American Medical Informatics Association, 10, 115–128.
Bates, D. W., Leape, L. L., Cullen, D. J., Laird, N., Petersen, L. A., Teich, J. M., et al. (1998). Effect of computerized physician order entry and a team intervention on prevention of serious medication errors. Journal of the American Medical Association, 280, 1311–1316.
Beasley, J. W., Escoto, K. H., & Karsh, B. (2004). Design elements for a primary care medical error reporting system. Wisconsin Medical Journal, 103, 56–59.
Billings, C. E. (1998).
Some hopes and concerns regarding medical event-reporting systems – Lessons from the NASA Aviation Safety Reporting System. Archives of Pathology and Laboratory Medicine, 122, 214–215.
Blendon, R. J., DesRoches, C. M., Brodie, M., Benson, J. M., Rosen, A. B., Schneider, E., et al. (2002). Patient safety: Views of practicing physicians and the public on medical errors. New England Journal of Medicine, 347, 1933–1940.
Bogart, D. H. (1980). Feedback, feedforward, and feedwithin: Strategic information in systems. Behavioral Science, 25, 237–249.
Bogner, M. S. (1994). Human error in medicine. Hillsdale, NJ: Erlbaum.
Britt, H., Miller, G. C., Steven, I. D., Howarth, G. C., Nicholson, P. A., Bhasale, A. L., et al. (1997). Collecting data on potentially harmful events: A method for monitoring incidents in general practice. Family Practice, 14, 101–106.
Busse, D. K., & Wright, D. J. (2000). Classification and analysis of incidents in complex, medical environments. Topics in Health Information Management, 20, 1–11.
Carayon, P., Alvarado, C., Brennan, P., Gurses, A., Hundt, A., Karsh, B., et al. (2003). Work system and patient safety. Proceedings of Human Factors in Organizational Design and Management, 7, 583–588.
Christensen, J. F., Levinson, W., & Dunn, P. M. (1992). The heart of darkness: The impact of perceived mistakes on physicians. Journal of General Internal Medicine, 7, 424–431.
Clegg, C. (2000). Sociotechnical principles for system design. Applied Ergonomics, 31, 463–477.
Cohen, M. R. (2000). Why error reporting systems should be voluntary. British Medical Journal, 320, 728–729.
Confidential Incident Reporting & Analysis System. (2005). CIRAS – Confidential Incident Reporting & Analysis System for the UK railway industry. Retrieved November 28, 2005, from http://www.ciras.org.uk/
Cullen, D. J., Bates, D. W., Small, S. D., Cooper, J., & Nemeskal, R. (1995). The incident reporting system does not detect adverse drug events: A problem for quality improvement.
The Joint Commission Journal on Quality Improvement, 21, 541–548.
DeJoy, D. M. (1996). Theoretical models of health behavior and workplace self-protective behavior. Journal of Safety Research, 27, 61–72.
Desikan, R., Krauss, M. J., Dunagan, W. C., Rachmiel, E. C., Bailey, T., & Fraser, V. J. (2005). Reporting of adverse drug events: Examination of a hospital incident reporting system. In K. Henriksen, J. B. Battles, E. S. Marks, & D. I. Lewin (Eds.), Advances in patient safety: From research to implementation (Vol. 1, pp. 145–160). Rockville, MD: Agency for Healthcare Research and Quality.
Dickey, J., Damiano, R. J., Jr., & Ungerleider, R. (2003). Our surgical culture of blame: A time for change. Journal of Thoracic and Cardiovascular Surgery, 126, 1259–1260.
Donaldson, L. (2000). An organisation with a memory. London: Department of Health.
Dovey, S. M., Meyers, D. S., Phillips, R. L., Green, L. A., Fryer, G. E., Galliher, J. M., et al. (2002). A preliminary taxonomy of medical errors in family practice. Quality and Safety in Health Care, 11, 233–238.
Eland, I. A., Belton, K. J., van Grootheest, A. C., Meiners, A. P., Rawlins, M. D., & Stricker, B. H. C. (1999). Attitudinal survey of voluntary reporting of adverse drug reactions. British Journal of Clinical Pharmacology, 48, 623–627.
Evans, S. M., Berry, J. G., Smith, B. J., & Esterman, A. J. (2004). Anonymity or transparency in reporting of medical error: A community-based survey in South Australia. Medical Journal of Australia, 180, 577–580.
Fernald, D. H., Pace, W. D., Harris, D. M., West, D. R., Main, D. S., & Westfall, J. M. (2004). Event reporting to a primary care patient safety reporting system: A report from the ASIPS Collaborative. Annals of Family Medicine, 2, 327–332.
Figueiras, A., Tato, F., Fontainas, J., Takkouche, B., & Gestal-Otero, J. J. (2001). Physicians' attitudes towards voluntary reporting of adverse drug events. Journal of Evaluation in Clinical Practice, 7, 347–354.
Flink, E., Chevalier, C. L., Ruperto, A., Dameron, P., Heigel, F. J., Leslie, R., et al. (2005). Lessons learned from the evolution of mandatory adverse event reporting systems. In K. Henriksen, J. B. Battles, E. S. Marks, & D. I. Lewin (Eds.), Advances in patient safety: From research to implementation (Vol. 3, pp. 135–151). Rockville, MD: Agency for Healthcare Research and Quality.
Flowers, L., & Riley, T. (2000). How states are responding to medical errors: An analysis of recent state legislative proposals. Portland, ME: National Academy for State Health Policy.
Gallagher, T. H., Waterman, A. D., Ebers, A. G., Fraser, V. J., & Levinson, W. (2003). Patients' and physicians' attitudes regarding the disclosure of medical errors. Journal of the American Medical Association, 289, 1001–1007.
Green, C. F., Mottram, D. R., Rowe, P. H., & Pirmohamed, M. (2001). Attitudes and knowledge of hospital pharmacists to adverse drug reaction reporting. British Journal of Clinical Pharmacology, 51, 81–86.
Hamilton-Escoto, K. H., Karsh, B., & Beasley, J. W. (2006). Multiple user considerations and their implications in medical error reporting system design. Human Factors, 48, 48–58.
Harper, M. L., & Helmreich, R. L. (2005). Identifying barriers to the success of a reporting system. In K. Henriksen, J. B. Battles, E. S. Marks, & D. I. Lewin (Eds.), Advances in patient safety: From research to implementation (Vol. 3, pp. 167–179). Rockville, MD: Agency for Healthcare Research and Quality.
Harris, D. M., Westfall, J. M., Fernald, D. H., Duclos, C. W., West, D. R., Niebauer, L., et al. (2005). Mixed methods analysis of medical error event reports: A report from the ASIPS Collaborative. In K. Henriksen, J. B. Battles, E. S. Marks, & D. I. Lewin (Eds.), Advances in patient safety: From research to implementation (Vol. 2, pp. 133–147). Rockville, MD: Agency for Healthcare Research and Quality.
Hart, G. K., Baldwin, I., Gutteridge, G., & Ford, J. (1994). Adverse incident reporting in intensive care.
Anaesthesia and Intensive Care, 22, 556–561. Henriksen, K., & Kaplan, H. S. (2003). Hindsight bias, outcome knowl- edge and adaptive learning. Quality and Safety in Health Care, 12(Suppl. 2), ii46–ii50. Holden, R. J., & Karsh, B. (2005). Applying a theoretical framework to the research and design of medical error reporting systems. In Proceedings of the International Conference on Healthcare Systems Ergonomics and Patient Safety (pp. 131–134). London: Taylor and Francis Group. Hollnagel, E. (1993). Human reliability analysis: Context and control. London: Academic Press. Horton, R. (1999). The uses of error. Lancet, 353, 422–423. Jeffe, D. B., Dunagan, W. C., Garbutt, J., Burroughs, T. E., Gallagher, T. H., Hill, P. R., et al. (2004). Using focus groups to understand physicians’and nurses’perspectives on error reporting in hospitals. Joint Commission Journal on Quality and Safety, 30, 471–479. Johnson, C. W. (2000a). Architectures for incident reporting. In P. Palanque, F. Paterno, & C. Johnson (Eds.), Proceedings of Safety
  • 19. MEDICAL ERROR REPORTING 275 and Usability Concerns in Aviation (pp. 23–25), Toulouse, France: University of Toulouse. Johnson, C. W. (2000b). Designing forms to support the elicitation of in- formation about incidents involving human error. In P. C. Cacciabue (Ed.), Proceedings of the 19th European Annual Conference on Human Decision Making and Manual Control, EAM-2000 (pp. 127–134). Luxemburg: European Commission Joint Research Centre. Johnson, C. W. (2002). Reasons for the failure of incident reporting in the healthcare and rail industries. In F. Redmill & T. Anderson (Eds.), Components of System Safety: Proceedings of the 10th Safety-Critical Systems Symposium (pp. 31–60). Berlin, Germany: Springer-Verlag. Johnson, C. W. (2003a). Failure in safety-critical systems: A handbook of accident and incident reporting. Glasgow, Scotland: University of Glasgow Press. Johnson, C. W. (2003b). How will we get the data and what will we do with it then? Issues in the reporting of adverse healthcare events. Quality and Safety in Health Care, 12(Suppl. 2), ii64–ii67. Joint Commission on Accreditation of Healthcare Organizations. (2005). Health care at the crossroads: Strategies for improving the medical liability system and preventing patient injury. Oakbrook Terrace, IL: Author. Kaiser Permanente, National Quality Forum, & Drucker, P. F. (2000). Roundtable Discussion: Reporting as a means to improve patient safety. Claremont, CA: Kaiser Permanente Institute for Health Policy. Kaplan, H. S. (2003). Benefiting from the “gift of failure”: Essentials foraneventreportingsystem.JournalofLegalMedicine,24,29–35. Kaplan, H. S., & Barach, P. (2002). Incident reporting: Science or pro- toscience?Ten years later [Comment]. Quality and Safety in Health Care, 11, 144–145. Kaplan, H. S., Battles, J. B., Van der Schaff, T. W., Shea, C. E., & Mercer, S. Q. (1998). Identification and classification of the causes of events in transfusion medicine. Transfusion, 38, 1071–1081. Kaplan, H. 
S., Callum, J. L., Fastman, R. B., & Merkley, L. L. (2002). The Medical Event Reporting System for Transfusion Medicine (MERS-TM): Will it help us get the right blood to the right patient? Transfusion Medicine Reviews, 16, 86–102. Kaplan, H. S., & Fastman, B. R. (2003). Organization of event report- ing data for sense making and system improvement. Quality and Safety in Health Care, 12(Suppl. 2), ii68–ii72. Karsh, B., & Brown, R. (2005).The impact of organizational hierarchies on the design and analysis of medical error research. Proceedings of Human Factors in Organizational Design and Management, 8, 293–298. Karsh, B., Escoto, K. H., Beasley, J.W., & Holden, R. J. (2006).Toward a theoretical approach to medical error reporting system research and design. Applied Ergonomics, 37, 283–295. Karsh, B., Holden, R. J., Alper, S. J., & Or, C. K. L. (2006). A human factors engineering paradigm for patient safety – Designing to sup- port the performance of the health care professional. Quality and Safety in Health Care, 15(Suppl. 1), i59–i65. Katz, R. I., & Lagasse, R. S. (2000). Factors influencing the reporting of adverse perioperative outcomes to a quality management pro- gram. Anesthesia and Analgesia, 90, 344–350. Kaushal, R., & Bates, D. W. (2002). Information technology and med- ication safety: What is the benefit? Quality and Safety in Health Care, 11, 261–265. Kingston, M. J., Evans, S. M., Smith, B. J., & Berry, J. G. (2004). Atti- tudes of doctors and nurses towards incident reporting:Aqualitative analysis. Medical Journal of Australia, 181, 36–39. Klein, K. J., Dansereau, F., & Hall, R. J. (1994). Levels issues in theo- ry development, data collection, and analysis. Academy of Management Review, 19, 195–229. Kohn, L. T., Corrigan, J. M., & Donaldson, M. S. (Eds.). (2000). To err is human: Building a safer health system (Institute of Medicine Report on Medical Errors). Washington, DC: National Academy Press. Lawton, R., & Parker, D. (2002). 
Barriers to incident reporting in a healthcare system. Quality and Safety in Health Care, 11, 15–18. Layde, P. M., Maas, L.A.,Teret, S. P., Brasel, K. J., Kuhn, E. M., Mercy, J. A., et al. (2002). Patient safety efforts should focus on medical injuries. Journal of the American Medical Association, 287, 1993–1997. Leape, L. L. (1994). Error in medicine. Journal of the American Medical Association, 272, 1851–1857. Leape, L. L. (1999). Why should we report adverse incidents? Journal of Evaluation in Clinical Practice, 5, 1–4. Leape, L. L. (2000). Reporting of medical errors: Time for a reality check. Quality in Health Care, 9, 144–145. Leape, L. L. (2002). Reporting of adverse events. New England Journal of Medicine, 347, 1633–1638. Leape, L. L., Bates, D. W., Cullen, D. J., Cooper, J., Demonaco, H. J., Gallivan,T., et al. (1995). Systems-analysis of adverse drug events. Journal of the American Medical Association, 274, 35–43. Leape, L. L., Kabcenell, A., Berwick, D. M., & Roessner, J. (1998). Reducing adverse drug events. Boston: Institute for Healthcare Improvement. Martin, S. K., Etchegaray, J. M., Simmons, D., Belt, W. T., & Clark, K. (2005). Development and implementation of the University of Texas Close Call Reporting System. In K. Henriksen, J. B. Battles, E. S. Marks, & D. I. Lewin (Eds.), Advances in patient safety: From research to implementation (Vol. 2, pp. 149–160). Rockville, MD: Agency for Healthcare Research and Quality. Marx, D. A. (1999). Maintenance error causation. Washington, DC: Federal Aviation Authority Office of Aviation Medicine. Marx, D.A. (2001). Patient safety and the “just culture”: A primer for health care executives. Retrieved November 28, 2006, from http://www.mers-tm.net/support/Marx_Primer.pdf McGreevy, M. W. (1997). A practical guide to interpretation of large collections of incident narratives using the QUORUM method. Moffett Field, CA: NASAAmes Research Center. Medical Event Reporting System. (2005). 
Medical Event Reporting System — Transfusion medicine. Retrieved November 15, 2005, from http://www.mers-tm.net/ Murff, H. J., Patel,V. L., Hripcsak, G., & Bates, D.W. (2003). Detecting adverse events for patient safety research: A review of current methodologies. Journal of Biomedical Informatics, 36, 131–143. NationalAeronautics and SpaceAdministration. (2002). Aviation Safety Reporting System. Retrieved March 7, 2002, from http://asrs.arc. nasa.gov/ New York State Department of Health. (2005). NYPORTS – The New York Patient Occurrence and Tracking System. Retrieved Novem- ber 15, 2005, from http://www.health.state.ny.us/nysdoh/hospital/ nyports Newman, M. C. (1996). The emotional impact of mistakes on family physicians. Archives of Family Medicine, 5, 71–75. Nielsen, J. (1993). Usability engineering. Boston: Academic Press. Nuclear Regulatory Commission. (2005). Human Factors Information System Reports. Retrieved November 9, 2005, from http://www. nrc.gov/reading-rm/doc-collections/human-factors/ Pace, W. D., Staton, E. W., Higgins, G. S., Main, D. S., West, D. R., Harris, D. M., et al. (2003). Database design to ensure anonymous study of medical errors: A report from the ASIPS Collaborative. Journal of the American Medical Informatics Association, 10, 531–540. Parker, D., & Lawton, R. (2003). Psychological contribution to the understanding of adverse events in health care. Quality and Safety in Health Care, 12, 453–457. Phillips, R. L., Dovey, S. M., Hickner, J. S., Graham, D., & Johnson, M. (2005). The AAFP Patient Safety Reporting System: Develop- ment and legal issues pertinent to medical error tracking and analy- sis. In K. Henriksen, J. B. Battles, E. S. Marks, & D. I. Lewin (Eds.), Advances in patient safety: From research to implementation (Vol. 3, pp. 121–134). Rockville, MD: Agency for Healthcare Research and Quality. Pilpel, D., Schor, R., & Benbassat, J. (1998). Barriers to acceptance of medical error:The case for a teaching program. 
Medical Education, 32, 3–7. Rasmussen, J. (1997). Risk management in a dynamic society:Amod- elling problem. Safety Science, 27, 183–213. Reason, J. (1990). Human error. Cambridge, UK: Cambridge University Press. Reason, J. (1997). Managing the risks of organizational accidents. Aldershot, UK: Ashgate. Reason, J. (2000). Human error: Models and management. British Medical Journal, 320, 768–770. Reason, J., Parker, D., & Lawton, R. (1998). Organizational controls and safety: The varieties of rule-related behaviour. Journal of Occupa- tional and Organizational Psychology, 71, 289–304. Reinertsen, J. L. (2000). Let’s talk about error – Leaders should take responsibility for mistakes. British Medical Journal, 320, 730.
276 April 2007 – Human Factors
Resnick, D. (2003). The Jessica Santillan tragedy: Lessons learned. Hastings Center Report, 33(4), 15–20.
Ricci, M., Goldman, A. P., de Leval, M. R., Cohen, G. A., Devaney, F., & Carthey, J. (2004). Pitfalls of adverse event reporting in paediatric cardiac intensive care. Archives of Disease in Childhood, 89, 856–859.
Robert Graham Center, American Academy of Family Physicians Education Resource Center, & State Networks of Colorado Ambulatory Practices and Partners. (2005). Medical error taxonomies: A research forum. Retrieved November 15, 2005, from http://www.errorsinmedicine.net/taxonomy/
Robinson, A. R., Hohmann, K. B., Rifkin, J. I., Topp, D., Gilroy, C. M., Pickard, J. A., et al. (2002). Physician and public opinions on quality of health care and the problem of medical errors. Archives of Internal Medicine, 162, 2186–2190.
Rogers, A. S., Israel, E., Smith, C. R., Levine, D., McBean, A. M., Valente, C., et al. (1988). Physician knowledge, attitudes, and behavior related to reporting adverse drug events. Archives of Internal Medicine, 148, 1596–1600.
Rudman, W. J., Bailey, J. H., Hope, C., Garrett, P., & Brown, C. A. (2005). The impact of a Web-based reporting system on the collection of medication error occurrence data. In K. Henriksen, J. B. Battles, E. S. Marks, & D. I. Lewin (Eds.), Advances in patient safety: From research to implementation (Vol. 3, pp. 195–205). Rockville, MD: Agency for Healthcare Research and Quality.
Runciman, W. B., Merry, A., & Smith, A. M. (2001). Improving patients' safety by gathering information: Anonymous reporting has an important role. British Medical Journal, 323, 298.
Runciman, W. B., & Moller, J. (2001). Iatrogenic injury in Australia. Adelaide, South Australia: Australian Patient Safety Foundation.
Runciman, W. B., Webb, R. K., Lee, R., & Holland, R. (1993). System failure: An analysis of 2000 incident reports. Anaesthesia and Intensive Care, 21, 684–695.
Sage, W. M. (2003). Medical liability and patient safety. Health Affairs, 22, 26–36.
Sexton, J. B., Thomas, E. J., & Helmreich, R. L. (2000). Error, stress and teamwork in medicine and aviation: Cross sectional surveys. British Medical Journal, 320, 745–749.
Smith, M. J., Carayon, P., & Karsh, B. (2001). Design for occupational health and safety. In G. Salvendy (Ed.), Handbook of industrial engineering: Technology and operations management (3rd ed., pp. 1156–1191). New York: Wiley.
Smith, M. J., Karsh, B., Carayon, P., & Conway, F. T. (2003). Controlling occupational safety and health hazards. In J. C. Quick & L. E. Tetrick (Eds.), Handbook of occupational health psychology (pp. 35–68). Washington, DC: American Psychological Association.
Smith, M. J., & Sainfort, P. C. (1989). Balance theory of job design for stress reduction. International Journal of Industrial Ergonomics, 4, 67–79.
Staender, S. (2000). Critical Incident Reporting System (CIRS): Critical incidents in anaesthesiology. Basel, Switzerland: University of Basel, Department of Anaesthesia.
Staender, S., Davies, J., Helmreich, B., Sexton, B., & Kaufmann, M. (1997). The Anaesthesia Critical Incident Reporting System: An experience based database. International Journal of Medical Informatics, 47, 87–90.
Stanhope, N., Crowley-Murphy, M., Vincent, C., O'Connor, A. M., & Taylor-Adams, S. E. (1999). An evaluation of adverse incident reporting. Journal of Evaluation in Clinical Practice, 5, 5–12.
Stanton, N. A., Chambers, P. R. G., & Piggott, J. (2001). Situational awareness and safety. Safety Science, 39, 189–204.
Statement Before the Subcommittee on Oversight and Investigations, House Committee on Veterans' Affairs, 106th Cong., D82 (2000) (testimony of Linda J. Connell).
Suresh, G., Horbar, J. D., Plsek, P., Gray, J., Edwards, W. H., Shiono, P. H., et al. (2004). Voluntary anonymous reporting of medical errors for neonatal intensive care. Pediatrics, 113, 1609–1618.
Ulep, S. K., & Moran, S. L. (2005). Ten considerations for easing the transition to a Web-based patient safety reporting system. In K. Henriksen, J. B. Battles, E. S. Marks, & D. I. Lewin (Eds.), Advances in patient safety: From research to implementation (Vol. 3, pp. 207–222). Rockville, MD: Agency for Healthcare Research and Quality.
Uribe, C. L., Schweikhart, S. B., Pathak, D. S., & Marsh, G. B. (2002). Perceived barriers to medical-error reporting: An exploratory investigation. Journal of Healthcare Management, 47, 263–279.
Vicente, K. (2003). What does it take: A case study. Joint Commission Journal on Quality and Safety, 29, 598–609.
Vincent, C., Stanhope, N., & Crowley-Murphy, M. (1999). Reasons for not reporting adverse incidents: An empirical study. Journal of Evaluation in Clinical Practice, 5, 13–21.
Wakefield, B. J., Uden-Holman, T., & Wakefield, D. S. (2005). Development and validation of the Medication Administration Error Reporting Survey. In K. Henriksen, J. B. Battles, E. S. Marks, & D. I. Lewin (Eds.), Advances in patient safety: From research to implementation (Vol. 4, pp. 475–489). Rockville, MD: Agency for Healthcare Research and Quality.
Wakefield, D. S., Wakefield, B. J., Uden-Holman, T., & Blegen, M. A. (1996). Perceived barriers in reporting medication administration errors. Best Practices and Benchmarking in Healthcare, 1, 191–197.
Walker, S. B., & Lowe, M. J. (1998). Nurses' views on reporting medication incidents. International Journal of Nursing Practice, 4, 97–102.
Waring, J. J. (2004). A qualitative study of the intra-hospital variations in incident reporting. International Journal for Quality in Health Care, 16, 347–352.
Weick, K., & Sutcliffe, K. (2001). Managing the unexpected: Assuring high performance in an age of complexity. San Francisco: Jossey-Bass.
Weingart, S. N., Callanan, L. D., Ship, A. N., & Aronson, M. D. (2001). A physician-based voluntary reporting system for adverse events and medical errors. Journal of General Internal Medicine, 16, 809–814.
West, D. R., Westfall, J. M., Araya-Guerra, R., Hansen, L., Quintela, J., Van Vorst, R., et al. (2005). Using reported primary care errors to develop and implement patient safety interventions: A report from the ASIPS Collaborative. In K. Henriksen, J. B. Battles, E. S. Marks, & D. I. Lewin (Eds.), Advances in patient safety: From research to implementation (Vol. 3, pp. 105–119). Rockville, MD: Agency for Healthcare Research and Quality.
Westrum, R. (1992). Cultures with requisite imagination. In J. A. Wise, V. D. Hokin, & P. Stager (Eds.), Verification and validation of complex systems: Human factors aspects (pp. 401–416). Berlin, Germany: Springer-Verlag.
Wilf-Miron, R., Lewenhoff, I., Benyamini, Z., & Aviram, A. (2003). From aviation to medicine: Applying concepts of aviation safety to risk management in ambulatory care. Quality and Safety in Health Care, 12, 35–39.
Williamson, J. A., & Mackay, P. (1991). Incident reporting. Medical Journal of Australia, 155, 340–344.

Richard J. Holden is a Ph.D. student pursuing a joint degree in psychology and industrial and systems engineering at the University of Wisconsin-Madison, where he received an M.S. in psychology in 2004.

Ben-Tzion Karsh is an assistant professor in the Department of Industrial and Systems Engineering at the University of Wisconsin-Madison, where he received a Ph.D. in industrial engineering in 1999.

Date received: August 5, 2005
Date accepted: September 20, 2006