SlideShare a Scribd company logo
1 of 238
202 Copyright © 2009 The Author(s)
Evidence-Based Practice: Critical
Appraisal of Qualitative Evidence
Kathleen M. Williamson
One of the key steps of evidence-based practice is to critically
appraise evidence to best answer a clinical question. Mental
health clinicians need to understand the importance of
qualitative evidence to their practice, including levels of
qualitative
evidence, qualitative inquiry methods, and criteria used to
appraise qualitative evidence to determine how implementing
the best qualitative evidence into their practice will influence
mental health outcomes. The goal of qualitative research is
to develop a complete understanding of reality as it is perceived
by the individual and to uncover the truths that exist.
These important aspects of mental health require clinicians to
engage this evidence. J Am Psychiatr Nurses Assoc, 2009;
15(3), 202-207. DOI: 10.1177/1078390309338733
Keywords: evidence-based practice; qualitative inquiry;
qualitative designs; critical appraisal of qualitative
evidence; mental health
Evidence-based practice (EBP) is an approach that
enables psychiatric mental health care practitioners
as well as all clinicians to provide the highest quality
of care using the best evidence available (Melnyk &
Fineout-Overholt, 2005). One of the key steps of EBP
is to critically appraise evidence to best answer a
clinical question. For many mental health questions,
understanding levels of evidence, qualitative inquiry
methods, and questions used to appraise the evidence
are necessary to implement the best qualitative evi-
dence into practice. Drawing conclusions and making
judgments about the evidence are imperative to the
EBP process and clinical decision making (Melnyk &
Fineout-Overholt, 2005; Polit & Beck, 2008). The over-
all purpose of this article is to familiarize clinicians
with qualitative research as an important source of
evidence to guide practice decisions. In this article, an
overview of the goals, methods and types of qualita-
tive research, and the criteria used to appraise the
quality of this type of evidence will be presented.
QUALITATIVE BELIEFS
Qualitative research aims to generate insight,
describe, and understand the nature of reality in
human experiences (Ayers, 2007; Milne & Oberle,
2005; Polit & Beck, 2008; Saddler, 2006; Sandelowski,
2004; Speziale & Carpenter, 2003; Thorne, 2000).
Qualitative researchers are inquisitive and seek to
understand knowledge about how people think and
feel, about the circumstances in which they find
themselves, and use methods to uncover and decon-
struct the meaning of a phenomenon (Saddler, 2006;
Thorne, 2000). Qualitative data are collected in a
natural setting. These data are not numerical; rather,
they are full and rich descriptions from participants
who are experiencing the phenomenon under study.
The goal of qualitative research is to uncover the
truths that exist and develop a complete understand-
ing of reality and the individual’s perception of what
is real. This method of inquiry is deeply rooted in
descriptive modes of research. “The idea that multiple
realties exist and create meaning for the individuals
studied is a fundamental belief of qualitative research-
ers” (Speziale & Carpenter, 2003, p. 17). Qualitative
research is the studying, collecting, and understand-
ing the meaning of individuals’ lives using a variety
of materials and methods (Denzin & Lincoln, 2005).
WHAT IS A QUALITATIVE
RESEARCHER?
Qualitative researchers commonly believe that indi-
viduals come to know and understand their reality in
Kathleen M. Williamson, PhD, RN, associate director, Center
for
the Advancement of Evidence-Based Practice, Arizona State
University, College of Nursing & Healthcare Innovation,
Phoenix,
Arizona; [email protected]
Journal of the American Psychiatric Nurses Association,Vol. 15,
No. 3 203
Critical Appraisal of Qualitative Evidence
different ways. It is through the lived experience
and the interactions that take place in the natural
setting that the researcher is able to discover and
understand the phenomenon under study (Miles &
Huberman, 1994; Patton, 2002; Speziale & Carpenter,
2003). To ensure the least disruption to the environ-
ment/natural setting, qualitative researchers care-
fully consider the best research method to answer
the research question (Speziale & Carpenter, 2003).
These researchers are intensely involved in all
aspects of the research process and are considered
participants and observers in setting or field (Patton,
2002; Polit & Beck, 2008; Speziale & Carpenter,
2003). Flexibility is required to obtain data from the
richest possible sources of information. Using a
holistic approach, the researcher attempts to cap-
ture the perceptions of the participants from an
“emic” approach (i.e., from an insider’s viewpoint;
Miles & Huberman, 1994; Speziale & Carpenter,
2003). Often, this is accomplished through the use of
a variety of data collection methods, such as inter-
views, observations, and written documents (Patton,
2002). As the data are collected, the researcher
simultaneously analyzes it, which includes identi-
fying emerging themes, patterns, and insights
within the data. According to Patton (2002), quali-
tative analysis engages exploration, discovery, and
inductive logic. The researcher uses a rich literary
account of the setting, actions, feelings, and mean-
ing of the phenomenon to report the findings
(Patton, 2002).
COMMONLY USED
QUALITATIVE DESIGNS
According to Patton (2002), “Qualitative methods
are first and foremost research methods. They are
ways of finding out what people do, know, think, and
feel by observing, interviewing, and analyzing docu-
ments” (p. 145). Qualitative research designs vary by
type and purpose: data collection strategies used and
the type of question or phenomenon under study. To
critically appraise qualitative evidence for its valid-
ity and use in practice, an understanding of the
types of qualitative methods as well as how they are
employed and reported is necessary.
Many of the methods are routed in the anthropol-
ogy, psychological, and sociology disciplines. Many
commonly used methods in the health sciences
research are ethnography, phenomenology, and
grounded theory (see Table 1).
Ethnography
Ethnography has its traditions in cultural
anthropology, which describe the values, beliefs,
and practice of cultural groups (Ploeg, 1999; Polit
& Beck, 2008). According to Speziale and Carpenter
(2003), the characteristics that are central to eth-
nography are that (a) the research is focused on
culture, (b) the researcher is totally immersed in
the culture, and (c) the researcher is aware of her/
his own perspective as well as those in the study.
Ethnographic researchers strive to study cultures
from an emic approach. The researcher as a par-
ticipant observer becomes involved in the culture
to collect data, learn from participants, and report
on the way participants see their world (Patton,
2002). Data are primarily collected through obser-
vations and interviews. Analysis of ethnographic
results involves identifying the meanings attrib-
uted to objects and events by members of the cul-
ture. These meanings are often validated by
members of the culture before finalizing the results
(called member checks). This is a labor-intensive
method that requires extensive fieldwork.
TABLE 1. Most Commonly Used Qualitative Research Methods
Method
Purpose
Research question(s)
Sample size (on average)
Data sources/collection
Ethnography
Describe culture of people
What is it like to live . . .
What is it . . .
30-50
Interviews, observations, field
notes, records, chart data,
life histories
Phenomenology
Describe phenomena, the
appearance of things, as lived
experience of humans in a natural
setting
What is it like to have this
experience? What does it feel like?
6-8
Interviews, videotapes, observations,
in-depth conversations
Grounded theory
To develop a theory rather than
describe a phenomenon
Questions emerge from the data
25-50
Taped interview, observation,
diaries, and memos from
researcher
Source. Adapted from Polit and Beck (2008) and Speziale and
Carpenter(2003).
204 Journal of the American Psychiatric Nurses
Association,Vol. 15, No. 3
Williamson
Phenomenology
Phenomenology has its roots in both philosophy
and psychology. Polit and Beck (2008) reported,
“Phenomenological researchers believe that lived
experience gives meaning to each person’s percep-
tion of a particular phenomenon” (p. 227). According
to Polit and Beck, there are four aspects of the
human experience that are of interest to the phe-
nomenological researcher: (a) lived space (spatial-
ity), (b) lived body (corporeality), (c) lived human
relationships (relationality), and (d) lived time (tem-
porality). Phenomenological inquiry is focused on
exploring how participants in the experience make
sense of the experience, transform the experience
into consciousness, and the nature or meaning of
the experience (Patton, 2002). Interpretive phenom-
enology (hermeneutics) focuses on the meaning and
interpretation of the lived experience to better
understand social, cultural, political, and historical
context. Descriptive phenomenology shares vivid
reports and describes the phenomenon.
In a phenomenological study, the researcher is an
active participant/observer who is totally immersed
in the investigation. It involves gaining access to
participants who could provide rich descriptions
from in-depth interviews to gather all the informa-
tion needed to describe the phenomenon under study
(Speziale & Carpenter, 2003). Ongoing analyses of
direct quotes and statements by participants occur
until common themes emerge. The outcome is a vivid
description of the experience that captures the
meaning of the experience and communicates clearly
and logically the phenomenon under study (Speziale
& Carpenter, 2003).
Grounded Theory
Grounded theory has its roots in sociology and
explores the social processes that are present within
human interactions (Speziale & Carpenter, 2003).
The purpose is to develop or build a theory rather
than test a theory or describe a phenomenon (Patton,
2002). Grounded theory takes an inductive approach
in which the researcher seeks to generate emergent
categories and integrate them into a theory grounded
in the data (Polit & Beck, 2008). The research does
not start with a focused problem; it evolves and is
discovered as the study progresses. A feature of
grounded theory is that the data collection, data
analysis, and sampling of participants occur simulta-
neously (Polit & Beck, 2008; Powers, 2005). The
researchers using ground theory methodology are
able to critically analyze situations, not remove
themselves from the study but realize that they
are part of it, recognize bias, obtain valid and reliable
data, and think abstractly (Strauss & Corbin, 1990).
Data collection is through in-depth interview and
observations. A constant comparative process is used
for two reasons: (a) to compare every piece of data
with every other piece to more accurately refine the
relevant categories and (b) to assure the researcher
that saturation has occurred. Once saturation is
reached the researcher connects the categories, pat-
terns, or themes that describe the overall picture
that emerged that will lead to theory development.
ASPECTS OF QUALITATIVE RESEARCH
The most important aspects of qualitative inquiry
is that participants are actively involved in the
research process rather than receiving an interven-
tion or being observed for some risk or event to be
quantified. Another aspect is that the sample is pur-
posefully selected and is based on experience with a
culture, social process, or phenomena to collect infor-
mation that is rich and thick in descriptions. The final
essential aspect of qualitative research is that one or
more of the following strategies are used to collect
data: interviews, focus groups, narratives, chat rooms,
and observation and/or field notes. These methods
may be used in combination with each other. The
researcher may choose to use triangulation strategies
on data collection, investigator, method, or theory and
use multiple sources to draw conclusions about the
phenomenon (Patton, 2002; Polit & Beck, 2009).
SUMMARY
This is not an inclusive list of qualitative methods
that researchers could choose to use to answer a
research question, other methods include historical
research, feminist research, case study method, and
action research. All qualitative research methods are
used to describe and discover meaning, understand-
ing, or develop a theory and transport the reader to
the time and place of the observation and/or inter-
view (Patton, 2002).
THE HIERARCHY OF
QUALITATIVE EVIDENCE
Clinical questions that require qualitative evi-
dence to answer them focus on human response and
Journal of the American Psychiatric Nurses Association,Vol. 15,
No. 3 205
Critical Appraisal of Qualitative Evidence
meaning. An important step in the process of apprais-
ing qualitative research as a guide for clinical prac-
tice is the identification of the level of evidence or the
“best” evidence. The level of evidence is a guide that
helps identify the most appropriate, rigorous, and
clinically relevant evidence to answer the clinical
question (Polit & Beck, 2008). Evidence hierarchy for
qualitative research ranges from opinion of authori-
ties and/or reports of expert committees to a single
qualitative research study to metasynthesis (Melnyk
& Fineout-Overholt, 2005; Polit & Beck, 2008). A
metasynthesis is comparable to meta-analysis (i.e.,
systematic reviews) of quantitative studies. A meta-
synthesis is a technique that integrates findings of
multiple qualitative studies on a specific topic, pro-
viding an interpretative synthesis of the research
findings in narrative form (Polit & Beck, 2008). This
is the strongest level of evidence in which to answer
a clinical question. The higher the level of evidence
the stronger the evidence is to change practice.
However, all evidence needs be critically appraised
based on (a) the best available evidence (i.e., level of
evidence), (b) the quality and reliability of the study,
and (c) the applicability of the findings to practice.
CRITICAL APPRAISAL OF
QUALITATIVE EVIDENCE
Once the clinical issue has been identified, the
PICOT question constructed, and the best evidence
located through an exhaustive search, the next step
is to critically appraise each study for its validity
(i.e., the quality), reliability, and applicability to use
in practice (Melnyk & Fineout-Overholt, 2005).
Although there is no consensus among qualitative
researchers on the quality criteria (Cutcliffe &
McKenna, 1999; Polit & Beck, 2008; Powers, 2005;
Russell & Gregory, 2003; Sandelowski, 2004), many
have published excellent tools that guide the process
for critically appraising qualitative evidence (Duffy,
2005; Melnyk & Fineout-Overholt, 2005; Polit &
Beck, 2008; Powers, 2005; Russell & Gregory, 2003;
Speziale & Carpenter, 2003). They all base their cri-
teria on three primary questions: (a) Are the study
findings valid? (b) What were the results of the
study? (c) Will the results help me in caring for my
patients? According to Melnyk and Fineout-Overholt
(2005), “The answers to these questions ensure rele-
vance and transferability of the evidence from the
search to the specific population for whom the practi-
tioner provides care” (p. 120). In using the questions
in Tables 2, 3, and 4, one can evaluate the evidence
and determine if the study findings are valid, the
method and instruments used to acquire the knowl-
edge credible, and if the findings are transferable.
The qualitative process contributes to the rigor or
trustworthiness of the data (i.e., the quality). “The
goal of rigor in qualitative research is to accurately
represent study participants’ experiences” (Speziale
& Carpenter, 2003, p. 38). The qualitative attributes
of validity include credibility, dependability, confirm-
ability, transferability, and authenticity (Guba &
Lincoln, 1994; Miles & Huberman, 1994; Speziale &
Carpenter, 2003).
Credibility is having confidence and truth about
the data and interpretations (Polit & Beck, 2008).
The credibility of the findings hinges on the skill,
competence, and rigor of the researcher to describe
the content shared by the participants and the abil-
ity of the participants to accurately describe the
phenomenon (Patton, 2002; Speziale & Carpenter,
2003). Cutcliffe and McKenna (1999) reported that
the most important indicator of the credibility of
findings is when a practitioner reads the study find-
ings and regards them meaningful and applicable
and incorporates them into his or her practice.
Confirmability refers to the way the researcher
documents and confirms the study findings (Speziale
TABLE 2. Subquestions to Further Answer, Are the Study
Findings Valid?
Participants
Sample
Data collection
How were they
selected?
Was it adequate?
How were the
data collected?
Did they provide
rich and thick
descriptions?
Was the setting
appropriate to
acquire an
adequate sample?
Were the tools
adequate?
Were the
participants’
rights protected?
Was the sampling
method
appropriate?
How were the data
coded? If so
how?
Did the researcher
eliminate bias?
Do the data accurately
represent the study
participants?
How accurate and
complete were the
data?
Was the group or
population adequately
described?
Was saturation achieved?
Does gathering the data
adequately portray the
phenomenon?
Source. Adapted from Powers (2005), Polit and Beck (2008),
Russell and Gregory (2003), and Speziale and Carpenter (2003).
206 Journal of the American Psychiatric Nurses
Association,Vol. 15, No. 3
Williamson
& Carpenter, 2003). Confirmability is the process of
confirming the accuracy, relevance, and meaning of
the data collected. Confirmability exists if (a) the
researcher identifies if saturation was reached and
(b) records of the methods and procedures are
detailed enough that they can be followed by an
audit trail (Miles & Huberman, 1994).
Dependability is a standard that demonstrates
whether (a) the process of the study was consistent, (b)
data remained consistent over time and conditions,
and (c) the results are reliable (Miles & Huberman,
1994; Polit & Beck, 2008; Speziale & Carpenter, 2003).
For example, if study methods and results are depend-
able, the researcher consistently approaches each
occurrence in the same way with each encounter and
results were coded with accuracy across the study.
Transferability refers to the probability that the
study findings have meaning and are usable by oth-
ers in similar situations (i.e., generalizable to others
in that situation; Miles & Huberman, 1994; Polit &
Beck, 2008; Speziale & Carpenter, 2003). To deter-
mine if the findings of a study are transferable and
can be used by others, the clinician must consider
the potential client to whom the findings may be
applied (Speziale & Carpenter, 2003).
Authenticity is when the researcher fairly and
faithfully shows a range of different realities and
develops an accurate and authentic portrait for
the phenomenon under study (Polit & Beck, 2008).
For example, if a clinician were to be in the same
environment as the researcher describes, they would
experience the phenomenon similarly. All mental
health providers need to become familiar with these
aspects of qualitative evidence and hone their criti-
cal appraisal skills to enable them to improve the
outcomes of their clients.
CONCLUSION
Qualitative research aims to impart meaning of
the human experience and understand how people
think and feel about their circumstances. Qualitative
researchers use a holistic approach in an attempt to
uncover truths and understand a person’s reality.
The researcher is intensely involved in all aspects
of the research design, collection, and analysis pro-
cesses. Ethnography, phenomenology, and grounded
theory are some of the designs that a researcher may
use to study a culture, phenomenon, or theory. Data
collection strategies vary based on the research
question, method, and informants. Methods such as
interviews, observations, and journals allow for
information-rich participants to provide detailed lit-
erary accounts of the phenomenon. Data analysis
occurs simultaneously as data collection and is the
process by which the researcher identifies themes,
concepts, and patterns that provide insight into the
phenomenon under study.
One of the crucial steps in the EBP process is to
critically appraise the evidence for its use in practice
TABLE 3. Subquestions to Further Answer, What Were the
Results of the Study?
Is the research
design
appropriate
for the
research
question?
Is the
description
of findings
thorough?
Do findings
fit the data
from which
they were
generated?
Are the results
logical,
consistent,
and easy to
follow?
Was the
purpose of the
study clear?
Were all themes
identified,
useful,
creative, and
convincing of
the
phenomena?
Source. Adapted from Powers (2005), Russell and Gregory
(2003), and Speziale and Carpenter (2003).
TABLE 4. Subquestions to Further Answer, Will the Results
Help Me in Caring for My Patients?
What meaning and
relevance does
this study have
for my patients?
How would I use
these findings
in my practice?
How does the study
help provide
perspective on my
practice?
Are the conclusions
appropriate to my
patient
population?
Are the results
applicable to
my patients?
How would patient
and family values
be considered in
applying these
results?
Source. Adapted from Powers (2005), Russell and Gregory
(2003), and Speziale and Carpenter (2003).
Journal of the American Psychiatric Nurses Association,Vol. 15,
No. 3 207
Critical Appraisal of Qualitative Evidence
and determine the value of findings. Critical appraisal
is the review of the evidence for its validity (i.e.,
strengths and weaknesses), reliability, and usefulness
for clients in daily practice. “Psychiatric mental
health clinicians are practicing in an era emphasizing
the use of the most current evidence to direct their
treatment and interventions” (Rice, 2008, p. 186).
Appraising the evidence is essential for assurance
that the best knowledge in the field is being applied
in a cost-effective, holistic, and effective way. To do
this, one must incorporate the critically appraised
findings with their abilities as clinicians and their
clients’ preferences. As professionals, clinicians are
expected to use the EBP process, which includes
appraising the evidence to determine if the best
results are believable, useable, and dependable.
Clinicians in psychiatric mental health must use
qualitative evidence to inform their practice deci-
sions. For example, how do clients newly diagnosed
with bipolar and their families perceive the life
impact of this diagnosis? Having a well done meta-
synthesis that provides an accurate representation of
the participants’ experiences, and is trustworthy (i.e.,
credible, dependable, confirmable, transferable, and
authentic), will provide insight into the situational
context, human response, and meaning for these cli-
ents and will assist clinicians in delivering the best
care to achieve the best outcomes.
REFERENCES
Ayers, L. (2007). Qualitative research proposals—Part I.
Journal
Wound Ostomy Continence Nursing, 34, 30-32.
Cutcliffe, J. R., & McKenna, H. P. (1999). Establishing the
credibil-
ity of qualitative research findings: The plot thickens. Journal
of Advanced Nursing, 30, 374-380.
Denzin, N. K., & Lincoln, Y. S. (2005). The Sage handbook of
qualitative research (3rd ed.). Thousand Oaks, CA: Sage.
Duffy, M. E. (2005). Resources for critically appraising
qualitative
research evidence of nursing practice clinical question. Clinical
Nursing Specialist, 19, 288-290.
Guba, E. G., & Lincoln, Y. S. (1994). Competing paradigms in
qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.),
Handbook of qualitative research (pp. 105-117). Thousand
Oaks, CA: Sage.
Melnyk, B. M., & Fineout-Overholt, E. (Eds.). (2005).
Evidence-based
practice in nursing and healthcare. Philadelphia: Lippincott
Williams & Wilkins.
Miles, M. B., & Huberman, A. M. (1994). An expend
sourcebook
qualitative data analysis (4th ed.). Thousand Oaks, CA: Sage.
Milne, J., & Oberle, K. (2005). Enhancing rigor in qualitative
description: A case study. Journal Wound Ostomy Continence
Nursing, 32, 413-420.
Patton, M. Q. (2002). Qualitative research & evaluation
methods
(3rd ed.). Thousand Oaks: Sage.
Ploeg, J. (1999). Identifying the best research design to fit the
question. Part 2: Qualitative designs. Evidence-Based Nursing,
2, 36-37.
Polit, D. F., & Beck, C. T. (2008). Nursing research: Generating
and
assessing evidence fro nursing practice. Philadelphia:
Lippincott
Williams & Wilkins.
Powers, B. A. (2005). Critically appraising qualitative evidence.
In
B. M. Melnyk & E. Fineout-Overholt (Eds.), Evidence-based
practice in nursing and healthcare (pp. 127-162). Philadelphia:
Lippincott Williams & Wilkins.
Rice, M. J. (2008). Evidence-based practice in psychiatric care:
Defining levels of evidence. Journal of the American
Psychiatric
Nurses Association, 14(3), 181-187.
Russell, C. K., & Gregory, D. M. (2003). Evaluation of
qualitative
research studies. Evidence-Based Nursing, 6, 36-40.
Saddler, D. (2006). Research 101. Gastroenterology Nursing,
30,
314-316.
Sandelowski, M. (2004). Using qualitative research. Qualitative
Health Research, 14, 1366-1386.
Speziale, H. J. S., & Carpenter, D. R. (2003). Qualitative
research
in nursing: Advancing the humanistic imperative. Philadelphia:
Lippincott Williams & Wilkins.
Strauss, A., & Corbin, J. (1990). Basics of qualitative research:
Grounded theory procedures and techniques. London: Sage.
Thorne, S. (2000). Data analysis in qualitative research.
Evidence-
Based Nursing, 3, 68-70.
For reprints and permission queries, please visit SAGE’s Web
site at http://www.sagepub.com/journalsPermissions.nav.
By Ellen Fineout-Overholt, PhD, RN,
FNAP, FAAN, Bernadette Mazurek
Melnyk, PhD, RN, CPNP/PMHNP,
FNAP, FAAN, Susan B. Stillwell,
DNP, RN, CNE, and Kathleen M.
Williamson, PhD, RN
In September’s evidence- based practice (EBP) article, Rebecca
R., our hypotheti cal
staff nurse, Carlos A., her hospi-
tal’s expert EBP mentor, and Chen
M., Rebecca’s nurse colleague, ra-
pidly critically appraised the 15
articles they found to answer their
clinical question—“In hospital-
ized adults (P), how does a rapid
response team (I) compared with
no rapid response team (C) affect
the number of cardiac arrests (O)
and unplanned admissions to the
ICU (O) during a three-month
period (T)?”—and determined
that they were all “keepers.” The
team now begins the process of
evaluation and syn thesis of the
articles to see what the evidence
says about initiating a rapid re-
sponse team (RRT) in their hos-
pital. Carlos reminds them that
evaluation and synthesis are syn-
ergistic processes and don’t neces-
sarily happen one after the other.
Nevertheless, to help them learn,
he will guide them through the
EBP process one step at a time.
STARTING THE EVALUATION
Rebecca, Carlos, and Chen begin
to work with the evaluation table
they created earlier in this process
when they found and filled in the
essential elements of the 15 stud-
ies and projects (see “Critical Ap -
praisal of the Evidence: Part I,”
July). Now each takes a stack of
the “keeper” studies and system-
atically begins adding to the table
any remaining data that best re -
flect the study elements pertain-
ing to the group’s clinical question
(see Table 1; for the entire table
with all 15 articles, go to http://
links.lww.com/AJN/A17). They
had agreed that a “Notes” sec-
tion within the “Appraisal: Worth
to Practice” column would be a
good place to record the nuances
of an article, their impressions
of it, as well as any tips—such as
what worked in calling an RRT—
that could be used later when
they write up their ideas for ini-
tiating an RRT at their hospital, if
the evidence points in that direc-
tion. Chen remarks that al though
she thought their ini tial table con-
tained a lot of information, this
final version is more thorough by
far. She appreciates the opportu-
nity to go back and confirm her
original understanding of the
study essentials.
The team members discuss the
evolving patterns as they complete
the table. The three systematic
Critical Appraisal of the Evidence: Part III
The process of synthesis: seeing similarities and differences
across the body of evidence.
This is the seventh article in a series from the Arizona State
University College of Nursing and Health Innovation’s
Center for the Advancement of Evidence-Based Practice.
Evidence-based practice (EBP) is a problem-solving approach
to the delivery of health care that integrates the best evidence
from studies and patient care data with clinician exper-
tise and patient preferences and values. When delivered in a
context of caring and in a supportive organizational
culture, the highest quality of care and best patient outcomes
can be achieved.
The purpose of this series is to give nurses the knowledge and
skills they need to implement EBP consistently, one
step at a time. Articles will appear every two months to allow
you time to incorporate information as you work toward
implementing EBP at your institution. Also, we’ve scheduled
“Chat with the Authors” calls every few months to provide
a direct line to the experts to help you resolve questions. See
details below.
Need Help with Evidence-Based Practice? Chat with
the Authors on November 16!
On November 16 at 3 PM EST, join the “Chat with the Au -
thors” call. It’s your chance to get personal consultation from
the experts! Dial-in early! U.S. and Canada, dial 1-800-947-
5134
(International, dial 001-574-941-6964). When prompted, enter
code 121028#.
Go to www.ajnonline.com and click on “Podcasts” and then
on “Conversations” to listen to our interview with Ellen
Fineout-
Overholt and Bernadette Mazurek Melnyk.
[email protected] AJN ▼ November 2010 ▼ Vol. 110, No. 11
43
44 AJN ▼ November 2010 ▼ Vol. 110, No. 11 ajnonline.com
Ta
bl
e
1.
F
in
al
E
va
lu
at
io
n
Ta
bl
e
Fi
rs
t A
ut
ho
r
(Y
ea
r)
Co
nc
ep
tu
al
Fr
am
ew
or
k
D
es
ig
n/
M
et
ho
d
Sa
m
pl
e/
Se
tti
ng
M
aj
or
V
ar
ia
bl
es
St
ud
ie
d
(a
nd
Th
ei
r
D
ef
in
iti
on
s)
M
ea
su
re
m
en
t
D
at
a
A
na
ly
si
s
Fi
nd
in
gs
A
pp
ra
is
al
: W
or
th
to
Pr
ac
tic
e
C
ha
n
PS
, e
t a
l.
A
rc
h
In
te
rn
M
ed
20
10
;1
70
(1
):
18
-2
6
N
on
e
SR Pu
rp
os
e:
e
ffe
ct
o
f
RR
T
on
H
M
R
an
d
C
R
•
Se
ar
ch
ed
5
da
ta
ba
se
s
fro
m
19
50
–2
00
8
an
d
“g
re
y
lit
er
at
ur
e”
fro
m
M
D
c
on
fe
r-
en
ce
s
•
In
cl
ud
ed
o
nl
y
1)
R
C
Ts
a
nd
pr
os
pe
c t
iv
e
stu
di
es
w
ith
2)
a
c
on
tro
l
gr
ou
p
or
co
nt
ro
l p
er
io
d
an
d
3)
h
os
pi
ta
l
m
or
ta
lit
y
w
el
l
de
sc
rib
ed
a
s
ou
tc
om
e
•
Ex
cl
ud
ed
5
stu
di
es
th
at
m
et
cr
ite
ria
d
ue
to
no
re
sp
on
se
to
e-
m
ai
l b
y
pr
im
ar
y
au
th
or
s
N
=
1
8
ou
t o
f
14
3
po
te
nt
ia
l
stu
di
es
Se
tti
ng
: a
cu
te
ca
re
h
os
pi
ta
ls;
13
a
du
lt,
5
p
ed
s
A
ve
ra
ge
n
o.
be
ds
: N
R
A
ttr
iti
on
: N
R
IV
: R
RT
D
V1
: H
M
R
(in
cl
ud
in
g
D
N
R,
ex
cl
ud
in
g
D
N
R,
no
t t
re
at
ed
in
IC
U
, n
o
H
M
R
de
fin
iti
on
)
D
V2
: C
R
RR
T:
w
as
th
e
M
D
in
vo
lv
ed
?
H
M
R:
o
ve
ra
ll
ho
sp
ita
l d
ea
th
s
(s
ee
d
ef
in
iti
on
)
C
R:
c
ar
di
o
an
d/
or
p
ul
m
o
-
na
ry
a
rr
es
t;
ca
rd
ia
c
ar
re
st
ca
lls
•
Fr
e
-
qu
en
cy
•
Re
la
tiv
e
ris
k
13
/1
6
stu
di
es
re
po
rti
ng
te
am
str
uc
tu
re
7/
11
a
du
lt
an
d
4/
5
pe
ds
stu
di
es
h
ad
s
ig
-
ni
fic
an
t r
ed
uc
-
tio
n
in
C
R
C
R:
•
In
a
du
lts
,
21
%
–4
8%
re
du
ct
io
n
in
C
R;
R
R
0.
66
(9
5%
C
I,
0.
54
–0
.8
0)
•
In
p
ed
s,
3
8%
re
du
ct
io
n
in
C
R;
R
R
0.
62
(9
5%
C
I,
0.
46
–0
.8
4)
H
M
R:
•
In
a
du
lts
,
H
M
R
RR
0.
96
(9
5%
C
I,
0.
84
–
1
.0
9)
•
In
p
ed
s,
H
M
R
RR
0.
79
(9
5%
C
I,
0.
63
–
0
.9
8)
W
ea
kn
es
se
s:
•
Po
te
nt
ia
l m
is
se
d
ev
i-
de
nc
e
w
ith
e
xc
lu
si
on
of
a
ll
stu
di
es
e
xc
ep
t
th
os
e
w
ith
c
on
tro
l
gr
ou
ps
•
G
re
y
lit
er
at
ur
e
se
ar
ch
lim
ite
d
to
m
ed
ic
al
m
ee
t-
in
gs
•
O
nl
y
in
cl
ud
ed
H
M
R
an
d
C
R
ou
tc
om
es
•
N
o
co
st
da
ta
St
re
ng
th
s:
•
Id
en
tif
ie
d
no
. o
f a
ct
iv
a-
tio
ns
o
f R
RT
/1
,0
00
ad
m
is
si
on
s
•
Id
en
tif
ie
d
va
ria
nc
e
in
o
ut
co
m
e
de
fin
iti
on
an
d
m
ea
s u
re
m
en
t (
fo
r
ex
am
pl
e,
1
0
of
1
5
stu
d-
ie
s
in
cl
ud
ed
d
ea
th
s
fro
m
D
N
Rs
in
th
ei
r m
or
ta
lit
y
m
ea
su
re
m
en
t)
C
on
cl
us
io
n:
•
RR
T
re
du
ce
s
C
R
in
ad
ul
ts,
a
nd
C
R
an
d
H
M
R
in
p
ed
s
Fe
as
ib
ili
ty
:
•
RR
T
is
re
as
on
ab
le
to
im
pl
em
en
t;
ev
al
ua
tin
g
co
st
w
ill
h
el
p
in
m
ak
in
g
de
ci
si
on
s
ab
ou
t u
si
ng
RR
T
•
Ri
sk
/B
en
ef
it
(h
ar
m
):
be
ne
fit
s
ou
tw
ei
gh
ri
sk
s
[email protected] AJN ▼ November 2010 ▼ Vol. 110, No. 11
45
M
cG
au
gh
ey
J,
et
a
l.
C
oc
hr
an
e
D
at
ab
as
e
Sy
st
Re
v
20
07
;3
:
C
D
00
55
29
N
on
e
SR
(C
oc
hr
an
e
re
vi
ew
)
Pu
rp
os
e:
e
ffe
ct
o
f
RR
T
on
H
M
R
•
Se
ar
ch
ed
6
da
ta
ba
se
s
fro
m
19
90
–2
00
6
•
Ex
cl
ud
ed
a
ll
bu
t
2
RC
Ts
N
=
2
s
tu
di
es
A
cu
te
c
ar
e
se
t-
tin
gs
in
A
us
tra
lia
an
d
th
e
U
K
A
ttr
iti
on
: N
R
IV
: R
RT
D
V1
: H
M
R
H
M
R:
A
us
tra
lia
:
ov
er
al
l h
os
pi
ta
l
m
or
ta
lit
y
w
ith
-
ou
t D
N
R
U
K:
S
im
pl
ifi
ed
A
cu
te
P
hy
si
ol
-
og
y
Sc
or
e
(S
A
PS
) I
I
de
at
h
pr
ob
ab
il-
ity
e
sti
m
at
e
O
R
O
R
of
A
us
-
tra
lia
n
stu
dy
,
0.
98
(9
5%
C
I,
0.
83
–1
.1
6)
O
R
of
U
K
stu
dy
,
0.
52
(9
5%
C
I,
0.
32
–0
.8
5)
W
ea
kn
es
se
s:
•
D
id
n’
t i
nc
lu
de
fu
ll
bo
dy
of
e
vi
de
nc
e
•
C
on
fli
ct
in
g
re
su
lts
o
f
re
ta
in
ed
s
tu
di
es
, b
ut
n
o
di
sc
us
si
on
o
f t
he
im
pa
ct
of
lo
w
er
-le
ve
l e
vi
de
nc
e
•
Re
co
m
m
en
da
tio
n
“n
ee
d
m
or
e
re
se
ar
ch
”
C
on
cl
us
io
n:
•
In
co
nc
lu
si
ve
W
in
te
rs
B
D
,
et
a
l.
C
rit
C
ar
e
M
ed
20
07
;3
5(
5)
:
12
38
-4
3
N
on
e
SR Pu
rp
os
e:
e
ffe
ct
o
f
RR
T
on
H
M
R
an
d
C
R
•
Se
ar
ch
ed
3
da
ta
ba
se
s
fro
m
19
90
–2
00
5
•
In
cl
ud
ed
o
nl
y
stu
di
es
w
ith
a
co
nt
ro
l g
ro
up
N
=
8
s
tu
di
es
A
ve
ra
ge
n
o.
be
ds
: 5
00
A
ttr
iti
on
: N
R
IV
: R
RT
D
V1
: H
M
R
D
V2
: C
R
H
M
R:
o
ve
ra
ll
de
at
h
ra
te
C
R:
n
o.
o
f i
n-
ho
sp
ita
l a
rr
es
ts
Ri
sk
ra
tio
H
M
R:
•
O
bs
er
va
-
tio
na
l s
tu
di
es
,
ris
k
ra
tio
fo
r
RR
T
on
H
M
R,
0.
87
(9
5%
C
I,
0.
73
–
1.
04
)
•
C
lu
ste
r R
C
Ts
,
ris
k
ra
tio
fo
r
RR
T
on
H
M
R,
0.
76
(9
5%
C
I,
0.
39
–
1.
48
)
C
R:
•
O
bs
er
va
-
tio
na
l s
tu
di
es
,
ris
k
ra
tio
fo
r
RR
T
on
C
R,
0.
70
(9
5%
C
I,
0.
56
–
0.
92
)
•
C
lu
ste
r R
C
Ts
,
ris
k
ra
tio
fo
r
RR
T
on
C
R,
0.
94
(9
5%
C
I,
0.
79
–
1.
13
)
St
re
ng
th
s:
•
Pr
ov
id
es
c
om
pa
ris
on
ac
ro
ss
s
tu
di
es
fo
r
S
tu
dy
le
ng
th
s
(ra
ng
e,
4
–8
2
m
on
th
s)
Sa
m
pl
e
si
ze
(r
an
ge
,
2,
18
3–
19
9,
02
4)
C
rit
er
ia
fo
r R
RT
in
iti
a-
tio
n
(c
om
m
on
: r
es
pi
ra
-
to
ry
ra
te
, h
ea
rt
ra
te
,
bl
oo
d
pr
es
su
re
, m
en
ta
l
sta
tu
s
ch
an
ge
; n
ot
a
ll
stu
di
es
, b
ut
n
ot
ew
or
-
th
y:
o
xy
ge
n
sa
tu
ra
tio
n,
“w
or
ry
”)
•
In
cl
ud
es
id
ea
s
ab
ou
t
fu
tu
re
e
vi
de
nc
e
ge
n-
er
at
io
n
(c
on
du
ct
in
g
re
se
ar
ch
)—
fin
di
ng
o
ut
w
ha
t w
e
do
n’
t k
no
w
C
on
cl
us
io
n:
•
So
m
e
su
pp
or
t f
or
R
RT
,
bu
t n
ot
re
lia
bl
e
en
ou
gh
to
re
co
m
m
en
d
as
s
ta
n-
da
rd
o
f c
ar
e
C
I =
c
on
fid
en
ce
in
te
rv
al
; C
R
=
ca
rd
io
pu
lm
on
ar
y
ar
re
st
o
r
co
de
r
at
es
; D
N
R
=
do
n
ot
r
es
us
ci
ta
te
; D
V
=
d
ep
en
de
nt
v
ar
ia
bl
e;
H
M
R
=
ho
sp
ita
l-w
id
e
m
or
ta
lit
y
ra
te
s;
IC
U
=
in
te
ns
iv
e
ca
re
un
it;
IV
=
in
de
pe
nd
en
t v
ar
ia
bl
e;
M
D
=
m
ed
ic
al
d
oc
to
r;
N
R
=
no
t r
ep
or
te
d;
O
R
=
od
ds
r
at
io
; P
ed
s
=
pe
di
at
ric
s;
R
C
T
=
ra
nd
om
iz
ed
c
on
tro
lle
d
tri
al
; R
R
=
re
la
tiv
e
ris
k;
R
RT
=
r
ap
id
re
sp
on
se
te
am
; S
R
=
sy
st
em
at
ic
r
ev
ie
w
; U
K
=
U
ni
te
d
Ki
ng
do
m
46 AJN ▼ November 2010 ▼ Vol. 110, No. 11 ajnonline.com
as well as a good num ber of jour-
nals have encouraged their use.
When they review the actual
guidelines, the team notices that
they seem to be fo cused on re-
search; for example, they require
a research question and refer to
the study of an intervention,
whereas EBP projects have PICOT
questions and apply evidence to
practice. The team discusses that
these guidelines can be confusing
to the clinicians au thoring the re-
ports on their proj ects. In addition,
they note that there’s no mention
of the syn thesis of the body of
evidence that should drive an
evidence-based project. While the
SQUIRE Guidelines are a step in
the right direction for the future,
Carlos, Rebecca, and Chen con-
clude that, for now, they’ll need
to learn to read these studies as
they find them—looking care-
fully for the details that inform
their clinical question.
Once the data have been en-
tered into the table, Carlos sug-
gests that they take each column,
one by one, and note the similari-
ties and differences across the
studies and projects. After they’ve
briefly looked over the columns,
he asks the team which ones they
think they should focus on to an-
swer their question. Re becca and
Chen choose “Design/ Method,”
“Sample/Setting,” “Findings,” and
“Appraisal: Worth to Practice”
(see Table 1) as the ini tial ones
to consider. Carlos agrees that
these are the columns in which
they’re most likely to find the
most pertinent information for
their syn thesis.
Chen in their efforts to appraise
the MERIT study and comments
on how well they’re putting the
pieces of the evidence puzzle to-
gether. The nurses are excited
that they’re able to use their new
knowledge to shed light on the
study. They discuss with Carlos
how the interpretation of the
MERIT study has perhaps con-
tributed to a misunderstanding
of the impact of RRTs.
Comparing the evidence. As
the team enters the lower-level evi-
dence into the evaluation table,
they note that it’s challenging to
compare the project reports with
studies that have clearly described
methodology, measurement, anal -
ysis, and findings. Chen remarks
that she wishes researchers and
clinicians would write study and
project reports similarly. Although
each of the studies has a process
or method determining how it was
conducted, as well as how out-
comes were measured, data were
analyzed, and results interpreted,
comparing the studies as they’re
currently written adds an other
layer of complexity to the eval-
uation. Carlos says that while it
would be great to have studies
and projects written in a similar for-
mat so they’re easier to compare,
that’s unlikely to happen. But he
tells the team not to lose all hope,
as a format has been de veloped
for re porting quality improve-
ment initiatives called the SQUIRE
Guidelines; however, they aren’t
ideal. The team looks up the guide-
lines online (www.squire-statement.
org) and finds that the In stitute
for Healthcare Improve ment (IHI)
reviews, which are higher-level
evidence, seem to have an inher-
ent bias in that they included only
studies with control groups. In
general, these studies weren’t in
favor of initiating an RRT. Carlos
asks Rebecca and Chen whether,
now that they’ve appraised all the
evidence about RRTs, they’re con -
fident in their decision to include
all the studies and projects (in -
cluding the lower-level evidence)
among the “keepers.” The nurses
reply with an emphatic affirma-
tive! They tell Carlos that the pro j -
ects and descriptive studies were
what brought the issue to life for
them. They realize that the higher-
level evidence is somewhat in
conflict with the lower-level evi-
dence, but they’re most interested
in the conclusions that can be
drawn from considering the entire
body of evidence.
Rebecca and Chen admit they
have issues with the systematic
reviews, all of which include the
MERIT study.1-4 In particular, they
discuss how the authors of the
systematic reviews made sure to
report the MERIT study’s finding
that the RRT had no effect, but
didn’t emphasize the MERIT study
authors’ discussion about how
their study methods may have
influenced the reliability of the
findings (for more, see “Critical
Appraisal of the Evi dence: Part
II,” Septem ber). Carlos says that
this is an excellent observation.
He also reminds the team that
clinicians may read a systematic
review for the conclusion and
never consider the original stud-
ies. He encourages Rebecca and
It’s not the number of studies or projects that determines
the reliability of their findings, but the uniformity and
quality of their methods.
[email protected] AJN ▼ November 2010 ▼ Vol. 110, No. 11
47
SYNTHESIZING: MAKING DECISIONS
BASED ON THE EVIDENCE
Design/Method. The team starts
with the “Design/Method” column
because Carlos reminds them that
it’s important to note each study’s
level of evidence. He suggests
that they take this information
and create a synthesis table (one
in which data is extracted from
the evaluation table to better see
the similarities and differences
bet ween studies) (see Table 21-15).
The synthesis table makes it clear
that there is less higher-level and
more lower-level evidence, which
will impact the reliability of the
overall findings. As the team noted,
the higher-level evidence is not
without meth odological issues,
which will increase the challenge
of coming to a conclusion about
the impact of an RRT on the out -
comes.
Sample/Setting. In reviewing
the “Sample/Setting” column, the
group notes that the number of
hospital beds ranged from 218
to 662 across the studies. There
were several types of hospitals
represented (4 teaching, 4 com-
munity, 4 no mention, 2 acute
care hospitals, and 1 public hos-
pital). The evidence they’ve col-
lected seems applicable, since
their hospital is a community
hos pital.
Findings. To help the team
better discuss the evidence, Car-
los suggests that they refer to all
pro j ects or studies as “the body
of evidence.” They don’t want to
get confused by calling them all
studies, as they aren’t, but at the
same time continually referring
to “stud ies and projects” is cum-
bersome. He goes on to say that,
as part of the synthesis process,
it’s impor tant for the group to
determine the overall impact of
the intervention across the body
of evi dence. He helps them create
a second synthesis table contain-
ing the findings of each study or
pro ject (see Table 31-15). As they
look over the results, Rebecca
and Chen note that RRTs reduce
code rates, par ti cularly outside
the ICU, whereas unplanned
ICU admissions (UICUA) don’t
seem to be as affected by them.
How ever, 10 of the 15 studies
and projects reviewed didn’t
ev aluate this outcome, so it
may not be fair to write it off
just yet.
Table 2: The 15 Studies: Levels and Types of Evidence
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Level I: Systematic review
or meta-analysis
X X X
Level II: Randomized con-
trolled trial
X
Level III: Controlled trial
without randomization
Level IV: Case-control or
cohort study
X X
Level V: Systematic review
of qualitative or descrip-
tive studies
Level VI: Qualitative or
descriptive study (includes
evidence implementation
projects)
X X X X X X X X X
Level VII: Expert opinion
or consensus
Adapted with permission from Melnyk BM, Fineout-Overholt E,
editors. Evidence-based practice in nursing and healthcare: a
guide to best practice.
2nd ed. Philadelphia: Wolters Kluwer Health / Lippincott
Williams and Wilkins; 2010.
1 = Chan PS, et al. (2010); 2 = McGaughey J, et al.; 3 = Winters
BD, et al.; 4 = Hillman K, et al.; 5 = Sharek PJ, et al.; 6 = Chan
PS, et al.
(2009); 7 = DeVita MA, et al.; 8 = Mailey J, et al.; 9 = Dacey
MJ, et al.; 10 = McFarlan SJ, Hensley S.; 11 = Offner PJ, et al.;
12 = Bertaut Y,
et al.; 13 = Benson L, et al.; 14 = Hatler C, et al.; 15 = Bader
MK, et al.
hav ing level- VI evidence, a study
and a project, had statistically
significant (less likely to occur by
chance, P < 0.05) reductions in
HMR, which in creases the reli-
ability of the results.
Chen asks, since four level-VI
reports documented that an RRT
reduces HMR, should they put
more confidence in findings that
occur more than once? Carlos re-
plies that it’s not the number of
studies or projects that determines
the re liability of their findings, but
the uniformity and quality of their
methods. He recites something he
heard in his Expert EBP Mentor
program that helped to clarify
the concept of making decisions
based on the evidence: the level
of the evidence (the design) plus
the quality of the evidence (the
validity of the methods) equals the
strength of the evidence, which is
what leads clinicians to act in con -
fidence and apply the evidence (or
not) to their practice and expect
similar findings (outcomes). In
terms of making a decision about
whether or not to initiate an RRT,
Carlos says that their evidence
stacks up: first, the MERIT study’s
results are questionable because
of problems with the study meth-
ods, and this affects the reliability
of the three systematic reviews as
well as the MERIT study it self;
second, the reasonably conducted
lower-level studies/projects, with
their statistically significant find-
ings, are persuasive. Therefore,
the team begins to consider the
possibility that initiating an RRT
may re duce code rates outside the
ICU (CRO) and may impact non-
ICU mor tality; both are outcomes
they would like to address. The
evidence doesn’t provide equally
The EBP team can tell from
reading the evidence that research -
ers consider the impact of an RRT
on hospital-wide mortality rates
(HMR) as the more important
outcome; however, the group re -
mains unconvinced that this out-
come is the best for evaluating
the purpose of an RRT, which,
according to the IHI, is early in -
tervention in patients who are
unstable or at risk for cardiac or
respiratory arrest.16 That said, of
the 11 studies and projects that
evaluated mortality, more than
half found that an RRT reduced it.
Carlos reminds the group that
four of those six articles are level-VI
evidence and that some weren’t
research. The findings produced
at this level of evidence are typi-
cally less reliable than those at
higher levels of evidence; how-
ever, Carlos notes that two articles
48 AJN ▼ November 2010 ▼ Vol. 110, No. 11 ajnonline.com
Table 3: Effect of the Rapid Response Team on Outcomes
1a 2a 3a 4a 5a 6a 7 8 9 10 11 12 13 14 15
HMR
adult
b
peds
b NE c b NR NE c NE b, d
CRO NE NE NE NE c b NE NE b c
b c NE c c
CR b
peds
and
adult
NE b NE b c NE NE NE NE b NE NE
UICUA NE NE NE NE NE NE NE b c NE NE NE
b
1 = Chan PS, et al. (2010); 2 = McGaughey J, et al.; 3 = Winters
BD, et al.; 4 = Hillman K, et al.; 5 = Sharek PJ, et al.;
6 = Chan PS, et al. (2009); 7 = DeVita MA, et al.; 8 = Mailey J,
et al.; 9 = Dacey MJ, et al.; 10 = McFarlan SJ, Hensley S.;
11 = Offner PJ, et al.; 12 = Bertaut Y, et al.; 13 = Benson L, et
al.; 14 = Hatler C, et al.; 15 = Bader MK, et al.
CR = cardiopulmonary arrest or code rates; CRO = code rates
outside the ICU; HMR = hospital-wide mortality rates;
NE = not evaluated; NR = not reported; UICUA = unplanned
ICU admissions
a higher-level evidence; b statistically significant findings; c
statistical significance not reported; d non-ICU mortality was
reduced
[email protected] AJN ▼ November 2010 ▼ Vol. 110, No. 11
49
the important outcomes to mea-
sure are: CRO, non-ICU mortality
(excluding patients with do not
resuscitate [DNR] orders), UICUA,
and cost.
Appraisal: Worth to Practice.
As the team discusses their syn-
thesis and the decision they’ll
make based on the evidence,
data in the “Findings” column
that shows a financial return on
in vestment for an RRT.9 Carlos
remarks to the group that this is
only one study, and that they’ll
need to make sure to collect data
on the costs of their RRT as well
as the cost implications of the
outcomes. They determine that
promising results for UICUA, but
the team agrees to include it in
the outcomes for their RRT pro j -
ect be cause it wasn’t evaluated
in most of the articles they ap-
praised.
As the EBP team continues
to discusses probable outcomes,
Re becca points to one study’s
Table 4. Defined Criteria for Initiating an RRT Consult
4 8 9 13 15
Respiratory distress
(breaths/min)
Airway threatened
Respiratory arrest
RR < 5 or > 36
RR < 10 or
> 30
RR < 8 or > 30
Unexplained dys-
pnea
RR < 8 or > 28
New-onset difficulty
breathing
RR < 10 or > 30
Shortness of breath
Change in mental
status
Change in LOC
Decrease in Glasgow
Coma Scale of
> 2 points
ND Unexplained change Sudden decrease
in LOC with normal
blood glucose
Decreased LOC
Tachycardia (beats/
min)
>140 > 130 Unexplained > 130
for 15 min
> 120 > 130
Bradycardia (beats/
min)
< 40 < 60 Unexplained < 50
for 15 min
< 40 < 40
Blood pressure
(mmHg)
SBP < 90 SBP < 90 or >
180
Hypotension (unex-
plained)
SBP > 200 or < 90 SBP < 90
Chest pain Cardiac arrest ND ND Complaint of nontrau-
matic chest pain
Complaint of nontraumatic
chest pain
Seizures Sudden or extended ND ND Repeated or pro-
longed
ND
Concern/worry
about patient
Serious concern
about a patient who
doesn’t fit the above
criteria
NE Nurse concern about
overall deterioration
in patients’ condi-
tion without any of
the above criteria
(p. 2077)
Nurse concern • Uncontrolled pain
• Failure to respond to
treatment
• Unable to obtain prompt
assistance for unstable
patient
Pulse oximetry (SpO2) NE NE NE < 92% < 92%
Other • Color change of
patient
• Unexplained agita-
tion for > 10 min
• CIWA > 15 points
• UOP < 50 cc/4 hr
• Color change of patient
(pale, dusky, gray, or
blue)
• New-onset limb weak-
ness or smile droop
• Sepsis: ≥ 2 SIRS criteria
4 = Hillman K, et al.; 8 = Mailey J, et al.; 9 = Dacey MJ, et al.;
13 = Benson L, et al.; 15 = Bader MK, et al.
cc = cubic centimeters; CIWA = Clinical Institute Withdrawal
Assessment; hr = hour; LOC = level of consciousness; min =
minute; mmHg = millimeters
of mercury; ND = not defined; NE = not evaluated; RR =
respiratory rate; SBP = systolic blood pressure; SIRS =
systemic inflammatory response
syndrome; SpO2= arterial oxygen saturation; UOP = urine
output
50 AJN ▼ November 2010 ▼ Vol. 110, No. 11 ajnonline.com
that an RRT is a valuable inter-
vention to initiate. They decide
to take the criteria for activating
an RRT from several successful
studies/projects and put them
into a synthesis table to better
see their ma jor similarities (see
Table 44, 8, 9, 13, 15). From this com-
bined list, they choose the criteria
for initiating an RRT consult that
they’ll use in their project (see
Table 5). The team also be gins
discussing the ideal make up for
their RRT. Again, they go back to
the evaluation table and look
of excitement about their project,
that their colleagues across all
disciplines have been eager to hear
the re sults of their review of the
evidence. In addition, Carlos says
that many re sources in their hos-
pital will be available to help them
get started with their project and
reminds them of their hospital
administrators’ commitment to
support the team.
ACTING ON THE EVIDENCE
As they consider the synthesis
of the evidence, the team agrees
Re becca raises a question that’s
been on her mind. She reminds
them that in the “Appraisal: Worth
to Practice” column, teaching was
identified as an important factor
in initiating an RRT and expresses
concern that their hospital is not
an aca demic medical center. Chen
re minds her that even though
theirs is not a designated teaching
hospital with residents on staff
24 hours a day, it has a culture
of teaching that should enhance
the success of an RRT. She adds
that she’s al ready hearing a buzz
Table 5. Defined Criteria for Initiating an RRT Consult at Our
Hospital
Pulmonary
Ventilation Color change of patient (pale, dusky, gray, or blue)
Respiratory distress RR < 10 or > 30 breaths/min or
unexplained dyspnea or new-onset difficulty breathing
or shortness of breath
Cardiovascular
Tachycardia Unexplained > 130 beats/min for 15 min
Bradycardia Unexplained < 50 beats/min for 15 min
Blood pressure Unexplained SBP < 90 or > 200 mmHg
Chest pain Complaint of nontraumatic chest pain
Pulse oximetry < 92% SpO2
Perfusion UOP < 50 cc/4 hr
Neurologic
Seizures Initial, repeated, or prolonged
Change in mental status • Sudden decrease in LOC with normal
blood glucose
• Unexplained agitation for > 10 min
• New-onset limb weakness or smile droop
Concern/worry about
patient
Nurse concern about overall deterioration in patients’ condition
without any of the above
criteria
Sepsis
• Temp, > 38°C
• HR, > 90 beats/min
• RR, > 20 breaths/min
• WBC, > 12,000, < 4,000, or > 10% bands
cc = cubic centimeters; hr = hours; HR = heart rate; LOC =
level of consciousness; min = minute; mmHg = millimeters of
mercury; RR = respiratory rate; SBP = systolic blood pressure;
SpO2 = arterial oxygen saturation; Temp = temperature;
UOP = urine output; WBC = white blood count
[email protected] AJN ▼ November 2010 ▼ Vol. 110, No. 11
51
3. Winters BD, et al. Rapid response sys -
tems: a systematic review. Crit Care
Med 2007;35(5):1238-43.
4. Hillman K, et al. Introduction of
the medical emergency team (MET)
system: a cluster-randomised con-
trolled trial. Lancet 2005;365(9477):
2091-7.
5. Sharek PJ, et al. Effect of a rapid re -
sponse team on hospital-wide mortal-
ity and code rates outside the ICU in
a children’s hospital. JAMA 2007;
298(19):2267-74.
6. Chan PS, et al. Hospital-wide code
rates and mortality before and after
implementation of a rapid response
team. JAMA 2008;300(21):2506-13.
7. DeVita MA, et al. Use of medical
emergency team responses to reduce
hospital cardiopulmonary arrests.
Qual Saf Health Care 2004;13(4):
251-4.
8. Mailey J, et al. Reducing hospital
standardized mortality rate with early
interventions. J Trauma Nurs 2006;
13(4):178-82.
9. Dacey MJ, et al. The effect of a rapid
response team on major clinical out-
come measures in a community hos-
pital. Crit Care Med 2007;35(9):
2076-82.
10. McFarlan SJ, Hensley S. Implementa-
tion and outcomes of a rapid response
team. J Nurs Care Qual 2007;22(4):
307-13.
11. Offner PJ, et al. Implementation of a
rapid response team decreases cardiac
arrest outside the intensive care unit.
J Trauma 2007;62(5):1223-8.
12. Bertaut Y, et al. Implementing a rapid-
response team using a nurse-to-nurse
consult approach. J Vasc Nurs 2008;
26(2):37-42.
13. Benson L, et al. Using an advanced
practice nursing model for a rapid re -
sp onse team. Jt Comm J Qual Pa tient
Saf 2008;34(12):743-7.
14. Hatler C, et al. Implementing a rapid
response team to decrease emergen-
cies. Medsurg Nurs 2009;18(2):84-90,
126.
15. Bader MK, et al. Rescue me: saving
the vulnerable non-ICU patient popu-
lation. Jt Comm J Qual Patient Saf
2009;35(4):199-205.
16. Institute for Healthcare Improvement.
Establish a rapid response team.
n.d. http://www.ihi.org/IHI/topics/
criticalcare/intensivecare/changes/
establisharapidresponseteam.htm.
evidence that led to the project,
how to call an RRT, and out-
come measures that will indicate
whether or not the implementation
of the evidence was successful.
They’ll also need an evaluation
plan. From reviewing the studies
and projects, they also re alize that
it’s important to focus their plan
on evidence implementation, in-
cluding carefully evaluating both
the process of implementation and
project outcomes.
Be sure to join the EBP team
in the next installment of this se -
ries as they develop their imple-
mentation plan for initiating an
RRT in their hospital, including
the submission of their project
proposal to the ethics review
board. ▼
Ellen Fineout-Overholt is clinical pro-
fessor and director of the Center for the
Advancement of Evidence-Based Prac -
tice at Arizona State University in Phoe -
nix, where Bernadette Mazurek Melnyk
is dean and distinguished foundation
professor of nursing, Susan B. Stillwell
is clinical associate professor and pro-
gram coordinator of the Nurse Educator
Evidence-Based Practice Men torship
Program, and Kathleen M. Williamson
is associate director of the Center for
the Advancement of Evidence-Based
Pra ctice. Contact author: Ellen Fineout-
Overholt, [email protected]
edu.
REFERENCES
1. Chan PS, et al. (2010). Rapid re -
sponse teams: a systematic review
and meta- analysis. Arch Intern Med
2010;170(1):18-26.
2. McGaughey J, et al. Outreach and
early warning systems (EWS) for the
prevention of intensive care admission
and death of critically ill adult patients
on general hospital wards. Cochrane
Database Syst Rev 2007;3:CD005529.
over the “Major Variables
Studied” column, noting that the
composition of the RRT varied
among the studies/projects. Some
RRTs had active physician partic-
ipation (n = 6), some had desig-
nated phy sician consultation on
an as-needed basis (n = 2), and
some were nurse-led teams (n = 4).
Most RRTs also had a respira-
tory therapist (RT). All RRT mem-
bers had expertise in intensive
care and many were certified in
ad vanced cardiac life support
(ACLS). They agree that their
team will be comprised of ACLS-
certified mem bers. It will be led
by an acute care nurse prac ti-
tioner (ACNP) credentialed for
advanced procedures, such as
cen tral line insertion. Members
will include an ICU RN and an
RT who can intubate. They also
discuss having physicians will-
ing to be called when needed.
Although no studies or projects
had a chaplain on their RRT,
Chen says that it would make
sense in their hospital. Carlos,
who’s been on staff the longest
of the three, says that interdisci-
plinary collaboration has been a
mainstay of their organization. A
physician, ACNP, ICU RN, RT,
and chaplain are logical choices
for their RRT.
As the team ponders the evi-
dence, they begin to discuss the
next step, which is to develop
ideas for writing their project
im plementation plan (also called
a protocol). Included in this pro-
tocol will be an educational plan
to let those involved in the proj-
ect know information such as the
As they consider the synthesis of the
evidence, the team agrees that an RRT is a
valuable intervention to initiate.
By Ellen Fineout-Overholt, PhD, RN,
FNAP, FAAN, Bernadette Mazurek
Melnyk, PhD, RN, CPNP/PMHNP,
FNAP, FAAN, Susan B. Stillwell,
DNP, RN, CNE, and Kathleen M.
Williamson, PhD, RN
In May’s evidence-based prac-tice (EBP) article, Rebecca R.,
our hypothetical staff nurse,
and Carlos A., her hospital’s ex-
pert EBP mentor, learned how to
search for the evidence to answer
their clinical question (shown
here in PICOT format): “In hos­
pitalized adults (P), how does a
rapid response team (I) compared
with no rapid response team (C)
affect the number of cardiac ar-
rests (O) and unplanned admis-
sions to the ICU (O) during a
three­month period (T)?” With
the help of Lynne Z., the hospi-
tal librarian, Rebecca and Car-
los searched three databases,
PubMed, the Cumulative Index
of Nursing and Allied Health
Literature (CINAHL), and the
Cochrane Database of Systematic
Reviews. They used keywords
from their clinical question, in-
cluding ICU, rapid response
team, cardiac arrest, and un-
planned ICU admissions, as
well as the following synonyms:
failure to rescue, never events,
medical emergency teams, rapid
response systems, and code
blue. Whenever terms from a
database’s own indexing lan-
guage, or controlled vocabulary,
matched the keywords or syn-
onyms, those terms were also
searched. At the end of the data-
base searches, Rebecca and Car-
los chose to retain 18 of the 18
studies found in PubMed; six of
the 79 studies found in CINAHL;
and the one study found in the
Cochrane Database of System-
atic Reviews, because they best
answered the clinical question.
As a final step, at Lynne’s rec-
ommendation, Rebecca and Car-
los conducted a hand search of
the reference lists of each study
they retained looking for any rele-
vant studies they hadn’t found in
their original search; this process
is also called the ancestry method.
The hand search yielded one ad-
ditional study, for a total of 26.
RAPID CRITICAL APPRAISAL
The next time Rebecca and Car-
los meet, they discuss the next
step in the EBP process—critically
appraising the 26 studies. They
obtain copies of the studies by
printing those that are immedi-
ately available as full text through
library subscription or those
flagged as “free full text” by a
database or journal’s Web site.
Others are available through in-
terlibrary loan, when another
hos pital library shares its articles
with Rebecca and Carlos’s hospi-
tal library.
Carlos explains to Rebecca that
the purpose of critical appraisal
isn’t solely to find the flaws in a
study, but to determine its worth
to practice. In this rapid critical
appraisal (RCA), they will review
each study to determine
• its level of evidence.
• how well it was conducted.
• how useful it is to practice.
Once they determine which
studies are “keepers,” Rebecca
and Carlos will move on to the
final steps of critical appraisal:
evaluation and synthesis (to be
discussed in the next two install-
ments of the series). These final
steps will determine whether
overall findings from the evi-
dence review can help clinicians
improve patient outcomes.
Rebecca is a bit apprehensive
because it’s been a few years since
she took a research class. She
[email protected] AJN ▼ July 2010 ▼ Vol. 110, No. 7 47
Critical Appraisal of the Evidence: Part I
An introduction to gathering, evaluating, and recording the
evidence.
This is the fifth article in a series from the Arizona State
University College of Nursing and Health Innovation’s Center
for the Advancement of Evidence - Based Practice. Evidence-
based practice (EBP) is a problem-solving approach to the
delivery of health care that integrates the best evidence from
studies and patient care data with clinician expertise and
patient preferences and values. When delivered in a context of
caring and in a supportive organizational culture, the
highest quality of care and best patient outcomes can be
achieved.
The purpose of this series is to give nurses the knowledge and
skills they need to implement EBP consistently, one
step at a time. Articles will appear every two months to allow
you time to incorporate information as you work toward
implementing EBP at your institution. Also, we’ve scheduled
“Chat with the Authors” calls every few months to provide
a direct line to the experts to help you resolve questions. Details
about how to participate in the next call will be pub-
lished with September’s Evidence-Based Practice, Step by Step.
and the Boston University Medi-
cal Center Alumni Medical Li-
brary [http://medlib.bu.edu/
bugms/content.cfm/content/
ebmglossary.cfm#R].)
Determining the level of evi-
dence. The team begins to divide
the 26 studies into categories ac-
cording to study design. To help
in this, Carlos provides a list of
several different study designs
(see Hierarchy of Evidence for
Intervention Studies). Rebecca,
Carlos, and Chen work together
to determine each study’s design
by reviewing its abstract. They
also create an “I don’t know”
pile of studies that don’t appear
to fit a specific design. When they
find studies that don’t actively
answer the clinical question but
new EBP team, Carlos provides
Rebecca and Chen with a glossary
of terms so they can learn basic
research terminology, such as sam-
ple, independent variable, and de-
pendent variable. The glossary
also defines some of the study de-
signs the team is likely to come
across in doing their RCA, such
as systematic review, randomized
controlled trial, and cohort, qual-
itative, and descriptive studies.
(For the definitions of these terms
and others, see the glossaries pro-
vided by the Center for the Ad-
vancement of Evidence-Based
Practice at the Arizona State Uni-
versity College of Nursing and
Health Innovation [http://nursing
andhealth.asu.edu/evidence-based-
practice/resources/glossary.htm]
48 AJN ▼ July 2010 ▼ Vol. 110, No. 7 ajnonline.com
shares her anxiety with Chen M.,
a fellow staff nurse, who says
she never studied research in
school but would like to learn;
she asks if she can join Carlos
and Rebecca’s EBP team. Chen’s
spirit of inquiry encourages Re-
becca, and they talk about the
opportunity to learn that this
project affords them. Together
they speak with the nurse man-
ager on their medical–surgical
unit, who agrees to let them use
their allotted continuing educa-
tion time to work on this project,
after they discuss their expecta-
tions for the project and how its
outcome may benefit the patients,
the unit staff, and the hospital.
Learning research terminol-
ogy. At the first meeting of the
Hierarchy of Evidence for Intervention Studies
Type of evidence Level of evidence Description
Systematic review or
meta-analysis
I A synthesis of evidence from all relevant random ized
controlled trials.
Randomized con-
trolled trial
II An experiment in which subjects are randomized to a
treatment group
or control group.
Controlled trial with-
out randomization
III An experiment in which subjects are nonrandomly assigned
to a
treatment group or control group.
Case-control or
cohort study
IV Case-control study: a comparison of subjects with a
condition (case)
with those who don’t have the condition (control) to determine
characteristics that might predict the condition.
Cohort study: an observation of a group(s) (cohort[s]) to
determine the
development of an outcome(s) such as a disease.
Systematic review of
qualitative or descrip-
tive studies
V A synthesis of evidence from qualitative or descrip tive
studies to
answer a clinical question.
Qualitative or de-
scriptive study
VI Qualitative study: gathers data on human behavior to
understand why
and how decisions are made.
Descriptive study: provides background informa tion on the
what,
where, and when of a topic of interest.
Expert opinion or
consensus
VII Authoritative opinion of expert committee.
Adapted with permission from Melnyk BM, Fineout-Overholt E,
editors. Evidence-based practice in nursing and healthcare:
a guide to best practice [forthcoming]. 2nd ed. Philadelphia:
Wolters Kluwer Health/Lippincott Williams and Wilkins.
http://medlib.bu.edu/bugms/content.cfm/content/ebmglossary.cf
m#R
http://medlib.bu.edu/bugms/content.cfm/content/ebmglossary.cf
m#R
http://medlib.bu.edu/bugms/content.cfm/content/ebmglossary.cf
m#R
http://nursingandhealth.asu.edu/evidence-based-
practice/resources/glossary.htm
http://nursingandhealth.asu.edu/evidence-based-
practice/resources/glossary.htm
http://nursingandhealth.asu.edu/evidence-based-
practice/resources/glossary.htm
[email protected] AJN ▼ July 2010 ▼ Vol. 110, No. 7 49
may inform thinking, such as
descriptive research, expert opin-
ions, or guidelines, they put them
aside. Carlos explains that they’ll
be used later to support Rebecca’s
case for having a rapid response
team (RRT) in her hospital, sh-
ould the evidence point in that
direction.
After the studies—including
those in the “I don’t know”
group—are categorized, 15 of
the original 26 remain and will
be included in the RCA: three
systematic reviews that include
one meta-analysis (Level I evi-
dence), one randomized con-
trolled trial (Level II evidence),
two cohort studies (Level IV evi-
dence), one retrospective pre-
post study with historic controls
(Level VI evidence), four preex-
perimental (pre-post) interven-
tion studies (no control group)
(Level VI evidence), and four EBP
implementation projects (Level
VI evidence). Carlos reminds
Rebecca and Chen that Level I
evidence—a systematic review
of randomized controlled trials
or a meta-analysis—is the most
reliable and the best evidence to
answer their clinical question.
Using a critical appraisal
guide. Carlos recommends that
the team use a critical appraisal
checklist (see Critical Appraisal
Guide for Quantitative Studies)
to help evaluate the 15 studies.
This checklist is relevant to all
studies and contains questions
about the essential elements of
research (such as, pur pose of the
study, sample size, and major
variables).
The questions in the critical ap-
praisal guide seem a little strange
to Rebecca and Chen. As they re-
view the guide together, Carlos
explains and clarifies each ques-
tion. He suggests that as they try
to figure out which are the essen-
tial elements of the studies, they
focus on answering the first three
questions: Why was the study
done? What is the sample size?
Are the instruments of the major
variables valid and reliable? The
remaining questions will be ad-
dressed later on in the critical
appraisal process (to appear in
future installments of this series).
Creating a study evaluation
table. Carlos provides an online
template for a table where Re-
becca and Chen can put all the
data they’ll need for the RCA.
Here they’ll record each study’s
essential elements that answer the
three questions and begin to ap-
praise the 15 studies. (To use this
template to create your own eval-
uation table, download the Eval-
uation Table Template at http://
links.lww.com/AJN/A10.)
EXTRACTING THE DATA
Starting with level I evidence
studies and moving down the
hierarchy list, the EBP team takes
each study and, one by one, finds
and enters its essential elements
into the first five columns of
the evaluation table (see Table
1; to see the entire table with
all 15 studies, go to http://links.
lww.com/AJN/A11). The team
discusses each element as they
enter it, and tries to determine if
it meets the criteria of the critical
Critical Appraisal Guide for Quantitative Studies
1. Why was the study done?
• Was there a clear explanation of the purpose of the study
and, if so, what was it?
2. What is the sample size?
• Were there enough people in the study to establish that the
findings did not occur by chance?
3. Are the instruments of the major variables valid and
reliable?
• How were variables defined? Were the instruments designed
to measure a concept valid (did
they measure what the researchers said they measured)? Were
they reliable (did they measure a
concept the same way every time they were used)?
4. How were the data analyzed?
• What statistics were used to determine if the purpose of the
study was achieved?
5. Were there any untoward events during the study?
• Did people leave the study and, if so, was there something
special about them?
6. How do the results fit with previous research in the area?
• Did the researchers base their work on a thorough literature
review?
7. What does this research mean for clinical practice?
• Is the study purpose an important clinical issue?
Adapted with permission from Melnyk BM, Fineout-Overholt E,
editors. Evidence-based practice in nursing and healthcare:
a guide to best practice [forthcoming]. 2nd ed. Philadelphia:
Wolters Kluwer Health/Lippincott Williams and Wilkins.
http://links.lww.com/AJN/A10
http://links.lww.com/AJN/A10
http://links.lww.com/AJN/A11
http://links.lww.com/AJN/A11
50 AJN ▼ July 2010 ▼ Vol. 110, No. 7 ajnonline.com
Ta
bl
e
1.
E
va
lu
at
io
n
Ta
bl
e,
P
ha
se
I
Fi
rs
t A
ut
ho
r
(Y
ea
r)
Co
nc
ep
tu
al
Fr
am
ew
or
k
D
es
ig
n/
M
et
ho
d
Sa
m
pl
e/
Se
tti
ng
M
aj
or
V
ar
ia
bl
es
S
tu
di
ed
(a
nd
T
he
ir
D
ef
in
iti
on
s)
M
ea
su
re
-
m
en
t
D
at
a
A
na
ly
si
s
Fi
nd
in
gs
A
pp
ra
is
al
:
W
or
th
to
Pr
ac
tic
e
C
ha
n
PS
, e
t a
l.
A
rc
h
In
te
rn
M
ed
20
10
;1
70
(1
):1
8
-2
6.
N
on
e
SR Pu
rp
os
e:
e
ffe
ct
o
f R
RT
o
n
H
M
R
an
d
C
R
•
Se
ar
ch
ed
5
d
at
ab
as
es
fro
m
1
95
0
-2
00
8,
a
nd
“g
re
y
lit
er
at
ur
e”
fr
om
M
D
c
on
fe
re
nc
es
•
In
cl
ud
ed
o
nl
y
stu
di
es
w
ith
a
c
on
tro
l g
ro
up
N
=
1
8
stu
di
es
Se
tti
ng
: a
cu
te
c
ar
e
ho
s-
pi
ta
ls;
1
3
ad
ul
t,
5
pe
ds
A
ve
ra
ge
n
o.
b
ed
s:
N
R
A
ttr
iti
on
: N
R
IV
: R
RT
D
V1
: H
M
R
D
V2
: C
R
M
cG
au
gh
ey
J,
e
t a
l.
C
oc
hr
an
e
D
at
ab
as
e
Sy
st
Re
v
20
07
;3
:
C
D
00
55
29
.
N
on
e
SR
(C
oc
hr
an
e
re
vi
ew
)
Pu
rp
os
e:
e
ffe
ct
o
f R
RT
on
H
M
R
•
Se
ar
ch
ed
6
d
at
ab
as
es
fro
m
1
99
0
-2
00
6
•
Ex
cl
ud
ed
a
ll
bu
t 2
RC
Ts
N
=
2
s
tu
di
es
24
a
du
lt
ho
sp
ita
ls
A
ttr
iti
on
: N
R
IV
: R
RT
D
V1
: H
M
R
W
in
te
rs
B
D
, e
t a
l.
C
rit
C
ar
e
M
ed
20
07
;3
5(
5)
:
12
38
-4
3.
N
on
e
SR Pu
rp
os
e:
e
ffe
ct
o
f R
RT
o
n
H
M
R
an
d
C
R
•
Se
ar
ch
ed
3
d
at
ab
as
es
fro
m
1
99
0
-2
00
5
•
In
cl
ud
ed
o
nl
y
stu
di
es
w
ith
a
c
on
tro
l g
ro
up
N
=
8
s
tu
di
es
A
ve
ra
ge
n
o.
b
ed
s:
5
00
A
ttr
iti
on
: N
R
IV
: R
RT
D
V1
: H
M
R
D
V2
: C
R
H
ill
m
an
K
, e
t a
l.
La
nc
et
2
00
5;
36
5(
94
77
):
20
91
-7
.
N
on
e
RC
T
Pu
rp
os
e:
e
ffe
ct
o
f R
RT
o
n
C
R,
H
M
R,
a
nd
U
IC
U
A
N
=
2
3
ho
sp
ita
ls
A
ve
ra
ge
n
o.
b
ed
s:
3
40
•
In
te
rv
en
tio
n
gr
ou
p
(n
=
1
2)
•
C
on
tro
l g
ro
up
(n
=
1
1)
Se
tti
ng
: A
us
tra
lia
A
ttr
iti
on
: n
on
e
IV
: R
RT
p
ro
to
co
l f
or
6
m
on
th
s
•
1
A
P
•
1
IC
U
o
r E
D
R
N
D
V1
: H
M
R
(u
ne
xp
ec
te
d
de
at
hs
, e
xc
lu
di
ng
D
N
Rs
)
D
V2
: C
R
(e
xc
lu
di
ng
D
N
Rs
)
D
V3
: U
IC
U
A
H
M
R
C
R
ra
te
s
of
U
IC
U
A
N
ot
e:
•
C
rit
er
ia
fo
r
ac
tiv
at
in
g
RR
T
Sh
ad
ed
c
ol
um
ns
in
di
ca
te
w
he
re
d
at
a
w
ill
b
e
en
te
re
d
in
fu
tu
re
in
st
al
lm
en
ts
o
f t
he
s
er
ie
s.
A
P
=
at
te
nd
in
g
ph
ys
ic
ia
n;
C
R
=
ca
rd
io
pu
lm
on
ar
y
ar
re
st
o
r
co
de
r
at
es
;
D
N
R
=
do
n
ot
r
es
us
ci
ta
te
;
D
V
=
d
ep
en
de
nt
v
ar
ia
bl
e;
E
D
=
e
m
er
ge
nc
y
de
pa
rtm
en
t;
H
M
R:
h
os
pi
ta
l-w
id
e
m
or
-
ta
lit
y
ra
te
s;
IC
U
=
in
te
ns
iv
e
ca
re
u
ni
t;
IV
=
in
de
pe
nd
en
t v
ar
ia
bl
e;
M
D
=
m
ed
ic
al
d
oc
to
r;
N
R
=
no
t r
ep
or
te
d;
P
ed
s
=
pe
di
at
ric
;
RC
T
=
ra
nd
om
iz
ed
c
on
tro
lle
d
tri
al
;
RN
=
r
eg
is
te
re
d
nu
rs
e;
R
RT
=
r
ap
id
r
es
po
ns
e
te
am
;
SR
=
s
ys
te
m
at
ic
r
ev
ie
w
;
U
IC
U
A
=
u
np
la
nn
ed
IC
U
a
dm
is
si
on
s.
[email protected] AJN ▼ July 2010 ▼ Vol. 110, No. 7 51
suggests they leave the column in.
He says they can further discuss
this point later on in the process
when they synthesize the studies’
findings. As Rebecca and Chen
review each study, they enter its
citation in a separate reference list
so that they won’t have to create
this list at the end of the pro cess.
The reference list will be shared
with colleagues and placed at the
end of any RRT policy that re-
sults from this endeavor.
Carlos spends much of his
time answering Rebecca’s and
Chen’s questions concerning how
to phrase the information they’re
entering in the table. He suggests
that they keep it simple and con-
sistent. For example, if a study
indicated that it was implement-
ing an RRT and hoped to see a
change in a certain outcome, the
nurses could enter “change in
[the outcome] after RRT” as the
purpose of the study. For studies
examining the effect of an RRT
on an outcome, they could say as
the purpose, “effect of RRT on
[the outcome].” Using the same
words to describe the same pur-
pose, even though it may not have
been stated exactly that way in
the study, can help when they
compare studies later on.
Rebecca and Chen find it frus-
trating that the study data are
not always presented in the same
way from study to study. They
ask Carlos why the authors or
journals wouldn’t present similar
information in a similar manner.
Carlos explains that the purpose
of publishing these studies may
have been to disseminate the
find ings, not to compare them
with other like studies. Rebecca
realizes that she enjoys this kind
of conversation, in which she
and Chen have a voice and can
contribute to a deeper under-
standing of how research impacts
practice.
As Rebecca and Chen con-
tinue to enter data into the table,
they begin to see similarities and
differences across studies. They
mention this to Carlos, who tells
them they’ve begun the process
of synthesis! Both nurses are en-
couraged by the fact that they’re
learning this new skill.
The MERIT trial is next in the
stack of studies and it’s a good
trial to use to illustrate this phase
of the RCA process. Set in Aus-
tralia, the MERIT trial1 examined
whether the introduction of an
RRT (called a medical emergency
team or MET in the study) would
reduce the incidence of cardiac
arrest, unplanned admissions to
the ICU, and death in the hospi-
tals studied. See Table 1 to follow
along as the EBP team finds and
enters the trial data into the table.
Design/Method. After Rebecca
and Chen enter the citation infor-
mation and note the lack of a con-
ceptual framework, they’re ready
to fill in the “Design/Method”
column. First they enter RCT
for randomized controlled trial,
which they find in both the study
title and introduction. But MERIT
is called a “cluster- randomised
controlled trial,” and cluster is a
term they haven’t seen before.
Carlos explains that it means that
hospitals, not individuals or pa-
tients, were randomly assigned to
the RRT. He says that the likely
reason the researchers chose to
randomly assign hospitals is that
if they had randomly assigned
individual patients or units, oth-
ers in the hospital might have
heard about the RRT and poten-
tially influenced the outcome.
appraisal guide. These elements—
such as purpose of the study, sam-
ple size, and major variables—are
typical parts of a research report
and should be presented in a pre-
dictable fashion in every study
so that the reader understands
what’s being reported.
As the EBP team continues to
review the studies and fill in the
evaluation table, they realize that
it’s taking about 10 to 15 minutes
per study to locate and enter the
information. This may be because
when they look for a description
of the sample, for example, it’s
important that they note how the
sample was obtained, how many
patients are included, other char-
acteristics of the sample, as well
as any diagnoses or illnesses the
sample might have that could be
important to the study outcome.
They discuss with Carlos the like-
lihood that they’ll need a few ses-
sions to enter all the data into the
table. Carlos responds that the
more studies they do, the less
time it will take. He also says
that it takes less time to find the
information when study reports
are clearly written. He adds that
usually the important informa-
tion can be found in the abstract.
Rebecca and Chen ask if it
would be all right to take out
the “Conceptual Framework”
column, since none of the stud-
ies they’re reviewing have con-
ceptual frameworks (which help
guide researchers as to how a
study should proceed). Carlos
replies that it’s helpful to know
that a study has no framework
underpinning the research and
Usually the important information in a study
can be found in the abstract.
52 AJN ▼ July 2010 ▼ Vol. 110, No. 7 ajnonline.com
To randomly assign hospitals
(instead of units or patients) to
the intervention and comparison
groups is a cleaner research de-
sign.
To keep the study purposes
con sistent among the studies in
the RCA, the EBP team uses inclu-
sive terminology they developed
after they noticed that different
trials had different ways of de-
scribing the same objectives. Now
they write that the purpose of the
MERIT trial is to see if an RRT
can reduce CR, for cardiopulmo-
nary arrest or code rates, HMR,
for hospital-wide mortality rates,
and UICUA for unplanned ICU
admissions. They use those same
terms consistently throughout the
evaluation table.
Sample/Setting. A total of 23
hospitals in Australia with an
average of 340 beds per hospi-
tal is the study sample. Twelve
hospitals had an RRT (the inter-
vention group) and 11 hospitals
didn’t (the control group).
Major Variables Studied. The
independent variable is the vari-
able that influences the outcome
(in this trial, it’s an RRT for six
months). The dependent vari-
able is the outcome (in this case,
HMR, CR, and UICUA). In this
trial, the outcomes didn’t include
do-not-resuscitate data. The RRT
was made up of an attending phy-
sician and an ICU or ED nurse.
While the MERIT trial seems
to perfectly answer Rebecca’s
PICOT question, it contains ele-
ments that aren’t entirely relevant,
such as the fact that the research-
ers collected information on how
the RRTs were activated and pro-
vided their protocol for calling the
RRTs. However, these elements
might be helpful to the EBP team
later on when they make decisions
about implementing an RRT in
their hospital. So that they can
come back to this information,
they place it in the last column,
“Appraisal: Worth to Practice.”
After reviewing the studies to
make sure they’ve captured the
essential elements in the evalua-
tion table, Rebecca and Chen still
feel unsure about whether the in-
formation is complete. Carlos
reminds them that a system-wide
practice change—such as the
change Rebecca is exploring, that
of implementing an RRT in her
hospital—requires careful consid-
eration of the evidence and this is
only the first step. He cautions
them not to worry too much
about perfection and to put their
efforts into understanding the
information in the studies. He re-
minds them that as they move on
to the next steps in the critical
appraisal process, and learn even
more about the studies and proj-
ects, they can refine any data in
the table. Rebecca and Chen feel
uncomfortable with this uncer-
tainty but decide to trust the pro-
cess. They continue extracting
data and entering it into the table
even though they may not com-
pletely understand what they’re
entering at present. They both
realize that this will be a learn-
ing opportunity and, though the
le arning curve may be steep at
times, they value the outcome of
improving patient care enough to
continue the work—as long as
Carlos is there to help.
In applying these principles
for evaluating research studies
to your own search for the evi-
dence to answer your PICOT
question, remember that this se-
ries can’t contain all the available
infor mation about research meth-
od ology. Fortunately, there are
many good resources available in
books and online. For example,
to find out more about sample
size, which can affect the likeli-
hood that researchers’ results oc-
cur by chance (a random finding)
rather than that the intervention
brought about the expected out-
come, search the Web using terms
that describe what you want to
know. If you type sample size
findings by chance in a search en-
gine, you’ll find several Web sites
that can help you better under-
stand this study essential.
Be sure to join the EBP team
in the next installment of the se-
ries, “Critical Appraisal of the
Evi dence: Part II,” when Rebecca
and Chen will use the MERIT
trial to illustrate the next steps
in the RCA process, complete
the rest of the evaluation table,
and dig a little deeper into the
studies in order to detect the
“keepers.” ▼
Ellen Fineout-Overholt is clinical profes-
sor and director of the Center for the
Advancement of Evidence-Based Practice
at Arizona State University in Phoenix,
where Bernadette Mazurek Melnyk
is dean and distinguished foundation
professor of nursing, Susan B. Stillwell
is clinical associate professor and pro-
gram coordinator of the Nurse Educator
Evidence-Based Practice Mentorship
Program, and Kathleen M. Williamson
is associate director of the Center for the
Advancement of Evidence-Based Practice.
Contact author: Ellen Fineout-Overholt,
ellen.fineout-[email protected]
REFERENCE
1. Hillman K, et al. Introduction of
the medical emergency team (MET)
system: a cluster-randomised con-
trolled trial. Lancet 2005;365(9477):
2091-7.
Keep the data in the table consistent by using
simple, inclusive terminology.
By Ellen Fineout-Overholt, PhD, RN,
FNAP, FAAN, Bernadette Mazurek
Melnyk, PhD, RN, CPNP/PMHNP,
FNAP, FAAN, Susan B. Stillwell,
DNP, RN, CNE, and Kathleen M.
Williamson, PhD, RN
In July’s evidence-based prac-tice (EBP) article, Rebecca R.,
our hypothetical staff
nurse, Carlos A., her hospital’s
expert EBP mentor, and Chen
M., Rebecca’s nurse colleague,
col lected the evidence to an-
swer their clinical question: “In
hospitalized adults (P), how
does a rapid response team
(I) compared with no rapid
response team (C) affect the
number of cardiac arrests (O)
and unplanned admissions to
the ICU (O) during a three-
month period (T)?” As part of
their rapid critical appraisal
(RCA) of the 15 potential
“keeper” studies, the EBP team
found and placed the essential
elements of each study (such as
its population, study design,
and setting) into an evaluation
table. In so doing, they began
to see similarities and differ-
ences between the studies,
which Carlos told them is the
beginning of synthesis. We now
join the team as they continue
with their RCA of these studies
to determine their worth to
practice.
RAPID CRITICAL APPRAISAL
Carlos explains that typically an
RCA is conducted along with an
RCA checklist that’s specific to
the research design of the study
being evaluated—and before any
data are entered into an evalua-
tion table. However, since Rebecca
and Chen are new to appraising
studies, he felt it would be easier
for them to first enter the essen-
tials into the table and then eval-
uate each study. Carlos shows
Rebecca several RCA checklists
and explains that all checklists
have three major questions in
common, each of which contains
other more specific subquestions
about what constitutes a well-
conducted study for the research
design under review (see Example
of a Rapid Critical Appraisal
Checklist).
Although the EBP team will
be looking at how well the re -
searchers conducted their studies
and discussing what makes a
“good” research study, Carlos
reminds them that the goal of
critical appraisal is to determine
the worth of a study to practice,
not solely to find flaws. He also
suggests that they consult their
glossary when they see an unfa-
miliar word. For example, the
term randomization, or random
assignment, is a relevant feature
of research methodology for in-
tervention studies that may be
unfamiliar. Using the glossary, he
explains that random assignment
and random sampling are often
confused with one another, but
that they’re very different. When
researchers select subjects from
within a certain population to
participate in a study by using a
random strategy, such as tossing
a coin, this is random sampling.
It allows the entire population
to be fairly represented. But
because it requires access to a
particular population, random
sampling is not always feasible.
Carlos adds that many health
care studies are based on a con-
venience sample—participants
recruited from a readily available
population, such as a researcher’s
affiliated hospital, which may or
may not represent the desired
population. Random assignment,
on the other hand, is the use of a
random strategy to assign study
[email protected] AJN ▼ September 2010 ▼ Vol. 110, No. 9 41
Critical Appraisal of the Evidence: Part II
Digging deeper—examining the “keeper” studies.
This is the sixth article in a series from the Arizona State
University College of Nursing and Health Innovation’s Center
for the Advancement of Evidence-Based Practice. Evidence-
based practice (EBP) is a problem-solving approach to the
delivery of health care that integrates the best evidence from
studies and patient care data with clinician expertise and
patient preferences and values. When delivered in a context of
caring and in a supportive organizational culture, the
highest quality of care and best patient outcomes can be
achieved.
The purpose of this series is to give nurses the knowledge and
skills they need to implement EBP consistently, one
step at a time. Articles will appear every two months to allow
you time to incorporate information as you work toward
implementing EBP at your institution. Also, we’ve scheduled
“Chat with the Authors” calls every few months to provide
a direct line to the experts to help you resolve questions. Details
about how to participate in the next call will be pub-
lished with November’s Evidence-Based Practice, Step by Step.
are the same as three of their
po tential “keeper” studies. They
wonder whether they should keep
those studies in the pile, or if, as
duplicates, they’re unnecessary.
Carlos says that because the meta-
analysis only included studies
with control groups, it’s impor-
tant to keep these three studies so
that they can be compared with
other studies in the pile that don’t
have control groups. Rebecca
notes that more than half of their
15 studies don’t have control or
comparison groups. They agree
as a team to include all 15 stud-
ies at all levels of evidence and go
on to appraise the two remaining
systematic reviews.
The MERIT trial1 is next in
the EBP team’s stack of studies.
with him, Rebecca and Chen
find the checklist for systematic
reviews.
As they start to rapidly criti-
cally appraise the meta-analysis,
they discuss that it seems to be
biased since the authors included
only studies with a control group.
Carlos explains that while hav-
ing a control group in a study is
ideal, in the real world most stud-
ies are lower-level evidence and
don’t have control or compari-
son groups. He emphasizes that,
in eliminating lower-level studies,
the meta-analysis lacks evidence
that may be informative to the
question. Rebecca and Chen—
who are clearly growing in their
appraisal skills—also realize that
three studies in the meta-analysis
42 AJN ▼ September 2010 ▼ Vol. 110, No. 9 ajnonline.com
participants to the intervention
or control group. Random as-
signment is an important feature
of higher-level studies in the hier-
archy of evidence.
Carlos also reminds the team
that it’s important to begin the
RCA with the studies at the high-
est level of evidence in order to see
the most reliable evidence first. In
their pile of studies, these are the
three systematic reviews, includ-
ing the meta-analysis and the
Cochrane review, they retrieved
from their database search (see
“Searching for the Evidence,”
and “Critical Appraisal of the
Evidence: Part I,” Evidence-
Based Practice, Step by Step,
May and July). Among the RCA
checklists Carlos has brought
Example of a Rapid Critical Appraisal Checklist
Rapid Critical Appraisal of Systematic Reviews of Clinical
Interventions or Treatments
1. Are the results of the review valid?
A. Are the studies in the review randomized controlled trials?
Yes No
B. Does the review include a detailed description of the search
strategy used to find the relevant studies? Yes No
C. Does the review describe how the validity of the individual
studies was assessed (such as, methodological quality,
including the use of random assignment to study groups and
complete follow-up of subjects)? Yes No
D. Are the results consistent across studies? Yes No
E. Did the analysis use individual patient data or aggregate
data? Patient Aggregate
2. What are the results?
A. How large is the intervention or treatment effect (odds
ratio,
relative risk, effect size, level of significance)?
B. How precise is the intervention or treatment (confidence
interval)?
3. Will the results assist me in caring for my patients?
A. Are my patients similar to those in the review? Yes No
B. Is it feasible to implement the findings in my practice
setting? Yes No
C. Were all clinically important outcomes considered,
including
both risks and benefits of the treatment? Yes No
D. What is my clinical assessment of the patient, and are there
any
contraindications or circumstances that would keep me from
implementing the treatment? Yes No
E. What are my patients’ and their families’ preferences and
values concerning the treatment? Yes No
© Fineout-Overholt and Melnyk, 2005.
[email protected] AJN ▼ September 2010 ▼ Vol. 110, No. 9 43
As we noted in the last install-
ment of this series, MERIT is a
good study to use to illustrate
the different steps of the critical
appraisal process. (Readers may
want to retrieve the article, if
possible, and follow along with
the RCA.) Set in Australia, the
MERIT trial examined whether
the introduction of a rapid re -
sponse team (RRT; called a med-
ical emergency team or MET
in the study) would reduce the
incidence of cardiac arrest, death,
and unplanned admissions to
the ICU in the hospitals studied.
To follow along as the EBP team
addresses each of the essential
elements of a well-conducted
randomized controlled trial (RCT)
and how they apply to the MERIT
study, see their notes in Rapid
Critical Appraisal of the MERIT
Study.
ARE THE RESULTS OF THE STUDY VALID?
The first section of every RCA
checklist addresses the validity
of the study at hand—did the
researchers use sound scientific
methods to obtain their study
results? Rebecca asks why valid-
ity is so important. Carlos replies
that if the study’s conclusion can
be trusted—that is, relied upon
to inform practice—the study
must be conducted in a way that
reduces bias or eliminates con-
founding variables (factors that
influence how the intervention
affects the outcome). Researchers
typically use rigorous research
methods to reduce the risk of
bias. The purpose of the RCA
checklist is to help the user deter-
mine whether or not rigorous
methods have been used in the
study under review, with most
questions offering the option of
a quick answer of “yes,” “no,”
or “unknown.”
Were the subjects randomly
assigned to the intervention and
control groups? Carlos explains
that this is an important question
when appraising RCTs. If a study
calls itself an RCT but didn’t
randomly assign participants,
then bias could be present. In
appraising the MERIT study, the
team discusses how the research-
ers randomly assigned entire
hospitals, not individual patients,
to the RRT intervention and
control groups using a technique
called cluster randomization. To
better understand this method,
the EBP team looks it up on the
Internet and finds a PowerPoint
presentation by a World Health
Organization researcher that
explains it in simplified terms:
“Cluster randomized trials are
experiments in which social units
or clusters [in our case, hospitals]
rather than individuals are ran-
domly allocated to intervention
groups.”2
Was random assignment
concealed from the individuals
enrolling the subjects? Conceal-
ment helps researchers reduce
202 Copyright © 2009 The Author(s)Evidence-Based Practice.docx
202 Copyright © 2009 The Author(s)Evidence-Based Practice.docx
202 Copyright © 2009 The Author(s)Evidence-Based Practice.docx
202 Copyright © 2009 The Author(s)Evidence-Based Practice.docx
202 Copyright © 2009 The Author(s)Evidence-Based Practice.docx
202 Copyright © 2009 The Author(s)Evidence-Based Practice.docx
202 Copyright © 2009 The Author(s)Evidence-Based Practice.docx
202 Copyright © 2009 The Author(s)Evidence-Based Practice.docx
202 Copyright © 2009 The Author(s)Evidence-Based Practice.docx
202 Copyright © 2009 The Author(s)Evidence-Based Practice.docx
202 Copyright © 2009 The Author(s)Evidence-Based Practice.docx
202 Copyright © 2009 The Author(s)Evidence-Based Practice.docx
202 Copyright © 2009 The Author(s)Evidence-Based Practice.docx
202 Copyright © 2009 The Author(s)Evidence-Based Practice.docx
202 Copyright © 2009 The Author(s)Evidence-Based Practice.docx
202 Copyright © 2009 The Author(s)Evidence-Based Practice.docx
202 Copyright © 2009 The Author(s)Evidence-Based Practice.docx
202 Copyright © 2009 The Author(s)Evidence-Based Practice.docx
202 Copyright © 2009 The Author(s)Evidence-Based Practice.docx
202 Copyright © 2009 The Author(s)Evidence-Based Practice.docx
202 Copyright © 2009 The Author(s)Evidence-Based Practice.docx
202 Copyright © 2009 The Author(s)Evidence-Based Practice.docx
202 Copyright © 2009 The Author(s)Evidence-Based Practice.docx
202 Copyright © 2009 The Author(s)Evidence-Based Practice.docx
202 Copyright © 2009 The Author(s)Evidence-Based Practice.docx
202 Copyright © 2009 The Author(s)Evidence-Based Practice.docx
202 Copyright © 2009 The Author(s)Evidence-Based Practice.docx
202 Copyright © 2009 The Author(s)Evidence-Based Practice.docx

More Related Content

Similar to 202 Copyright © 2009 The Author(s)Evidence-Based Practice.docx

Qualititaive research
Qualititaive researchQualititaive research
Qualititaive research
ctkmedia
 
Response 150 words 1 nursing reference within 5 yrsQualitative r.docx
Response 150 words 1 nursing reference within 5 yrsQualitative r.docxResponse 150 words 1 nursing reference within 5 yrsQualitative r.docx
Response 150 words 1 nursing reference within 5 yrsQualitative r.docx
infantkimber
 
Participant Sampling And Recruitment Essay Paper.docx
Participant Sampling And Recruitment Essay Paper.docxParticipant Sampling And Recruitment Essay Paper.docx
Participant Sampling And Recruitment Essay Paper.docx
4934bk
 
Qualitative Research-Grounded Theory
Qualitative Research-Grounded TheoryQualitative Research-Grounded Theory
Qualitative Research-Grounded Theory
Tina Jordan
 
Author & TitleAuthors Maggie Lawrence & Sue Kinn.Title Need.docx
Author & TitleAuthors Maggie Lawrence & Sue Kinn.Title Need.docxAuthor & TitleAuthors Maggie Lawrence & Sue Kinn.Title Need.docx
Author & TitleAuthors Maggie Lawrence & Sue Kinn.Title Need.docx
rock73
 
Trent Focus for Research and Development in Primary .docx
Trent Focus for Research and Development in Primary .docxTrent Focus for Research and Development in Primary .docx
Trent Focus for Research and Development in Primary .docx
turveycharlyn
 

Similar to 202 Copyright © 2009 The Author(s)Evidence-Based Practice.docx (20)

Module 1 phenom a
Module 1 phenom aModule 1 phenom a
Module 1 phenom a
 
Qualitative research
Qualitative researchQualitative research
Qualitative research
 
Selected Research Sample Writing
Selected Research Sample WritingSelected Research Sample Writing
Selected Research Sample Writing
 
Qualititaive research
Qualititaive researchQualititaive research
Qualititaive research
 
Methodological quality.pptx
Methodological quality.pptxMethodological quality.pptx
Methodological quality.pptx
 
Response 150 words 1 nursing reference within 5 yrsQualitative r.docx
Response 150 words 1 nursing reference within 5 yrsQualitative r.docxResponse 150 words 1 nursing reference within 5 yrsQualitative r.docx
Response 150 words 1 nursing reference within 5 yrsQualitative r.docx
 
Participant Sampling And Recruitment Essay Paper.docx
Participant Sampling And Recruitment Essay Paper.docxParticipant Sampling And Recruitment Essay Paper.docx
Participant Sampling And Recruitment Essay Paper.docx
 
Presentation on-Resarch-paradigms.pptx
Presentation on-Resarch-paradigms.pptxPresentation on-Resarch-paradigms.pptx
Presentation on-Resarch-paradigms.pptx
 
LESSON-2-PR1.pptx
LESSON-2-PR1.pptxLESSON-2-PR1.pptx
LESSON-2-PR1.pptx
 
SEMINAR ON QUALITATIVE RESEARCH-THEORIES AND METHODS- modified.pptx
SEMINAR ON QUALITATIVE RESEARCH-THEORIES AND METHODS- modified.pptxSEMINAR ON QUALITATIVE RESEARCH-THEORIES AND METHODS- modified.pptx
SEMINAR ON QUALITATIVE RESEARCH-THEORIES AND METHODS- modified.pptx
 
Qualitative Research-Grounded Theory
Qualitative Research-Grounded TheoryQualitative Research-Grounded Theory
Qualitative Research-Grounded Theory
 
qualitative research designs
qualitative research designsqualitative research designs
qualitative research designs
 
What is a theory
What is a theoryWhat is a theory
What is a theory
 
Interdisciplinary seminarvii traditionsinqualitativeresearch
Interdisciplinary seminarvii traditionsinqualitativeresearchInterdisciplinary seminarvii traditionsinqualitativeresearch
Interdisciplinary seminarvii traditionsinqualitativeresearch
 
Understanding philosophy of research
Understanding philosophy of researchUnderstanding philosophy of research
Understanding philosophy of research
 
Chapter 3(methodology) Rough
Chapter  3(methodology) RoughChapter  3(methodology) Rough
Chapter 3(methodology) Rough
 
Author & TitleAuthors Maggie Lawrence & Sue Kinn.Title Need.docx
Author & TitleAuthors Maggie Lawrence & Sue Kinn.Title Need.docxAuthor & TitleAuthors Maggie Lawrence & Sue Kinn.Title Need.docx
Author & TitleAuthors Maggie Lawrence & Sue Kinn.Title Need.docx
 
Children With Special Health Care
Children With Special Health CareChildren With Special Health Care
Children With Special Health Care
 
Qualitative research
Qualitative researchQualitative research
Qualitative research
 
Trent Focus for Research and Development in Primary .docx
Trent Focus for Research and Development in Primary .docxTrent Focus for Research and Development in Primary .docx
Trent Focus for Research and Development in Primary .docx
 

More from lorainedeserre

4 Shaping and Sustaining Change Ryan McVayPhotodiscThink.docx
4 Shaping and Sustaining Change Ryan McVayPhotodiscThink.docx4 Shaping and Sustaining Change Ryan McVayPhotodiscThink.docx
4 Shaping and Sustaining Change Ryan McVayPhotodiscThink.docx
lorainedeserre
 
4.1 EXPLORING INCENTIVE PAY4-1 Explore the incentive pay a.docx
4.1 EXPLORING INCENTIVE PAY4-1 Explore the incentive pay a.docx4.1 EXPLORING INCENTIVE PAY4-1 Explore the incentive pay a.docx
4.1 EXPLORING INCENTIVE PAY4-1 Explore the incentive pay a.docx
lorainedeserre
 
38 u December 2017 January 2018The authorities beli.docx
38  u   December 2017  January 2018The authorities beli.docx38  u   December 2017  January 2018The authorities beli.docx
38 u December 2017 January 2018The authorities beli.docx
lorainedeserre
 
3Prototypes of Ethical ProblemsObjectivesThe reader shou.docx
3Prototypes of Ethical ProblemsObjectivesThe reader shou.docx3Prototypes of Ethical ProblemsObjectivesThe reader shou.docx
3Prototypes of Ethical ProblemsObjectivesThe reader shou.docx
lorainedeserre
 
4-5 Annotations and Writing Plan - Thu Jan 30 2111Claire Knaus.docx
4-5 Annotations and Writing Plan - Thu Jan 30 2111Claire Knaus.docx4-5 Annotations and Writing Plan - Thu Jan 30 2111Claire Knaus.docx
4-5 Annotations and Writing Plan - Thu Jan 30 2111Claire Knaus.docx
lorainedeserre
 
3Moral Identity Codes of Ethics and Institutional Ethics .docx
3Moral Identity Codes of  Ethics and Institutional  Ethics .docx3Moral Identity Codes of  Ethics and Institutional  Ethics .docx
3Moral Identity Codes of Ethics and Institutional Ethics .docx
lorainedeserre
 
3NIMH Opinion or FactThe National Institute of Mental Healt.docx
3NIMH Opinion or FactThe National Institute of Mental Healt.docx3NIMH Opinion or FactThe National Institute of Mental Healt.docx
3NIMH Opinion or FactThe National Institute of Mental Healt.docx
lorainedeserre
 
4.1Updated April-09Lecture NotesChapter 4Enterpr.docx
4.1Updated April-09Lecture NotesChapter 4Enterpr.docx4.1Updated April-09Lecture NotesChapter 4Enterpr.docx
4.1Updated April-09Lecture NotesChapter 4Enterpr.docx
lorainedeserre
 
3Type your name hereType your three-letter and -number cours.docx
3Type your name hereType your three-letter and -number cours.docx3Type your name hereType your three-letter and -number cours.docx
3Type your name hereType your three-letter and -number cours.docx
lorainedeserre
 
3Welcome to Writing at Work! After you have completed.docx
3Welcome to Writing at Work! After you have completed.docx3Welcome to Writing at Work! After you have completed.docx
3Welcome to Writing at Work! After you have completed.docx
lorainedeserre
 
3JWI 531 Finance II Assignment 1TemplateHOW TO USE THIS TEMP.docx
3JWI 531 Finance II Assignment 1TemplateHOW TO USE THIS TEMP.docx3JWI 531 Finance II Assignment 1TemplateHOW TO USE THIS TEMP.docx
3JWI 531 Finance II Assignment 1TemplateHOW TO USE THIS TEMP.docx
lorainedeserre
 
3Big Data Analyst QuestionnaireWithin this document are fo.docx
3Big Data Analyst QuestionnaireWithin this document are fo.docx3Big Data Analyst QuestionnaireWithin this document are fo.docx
3Big Data Analyst QuestionnaireWithin this document are fo.docx
lorainedeserre
 
3HR StrategiesKey concepts and termsHigh commitment .docx
3HR StrategiesKey concepts and termsHigh commitment .docx3HR StrategiesKey concepts and termsHigh commitment .docx
3HR StrategiesKey concepts and termsHigh commitment .docx
lorainedeserre
 
3Implementing ChangeConstruction workers on scaffolding..docx
3Implementing ChangeConstruction workers on scaffolding..docx3Implementing ChangeConstruction workers on scaffolding..docx
3Implementing ChangeConstruction workers on scaffolding..docx
lorainedeserre
 
3Assignment Three Purpose of the study and Research Questions.docx
3Assignment Three Purpose of the study and Research Questions.docx3Assignment Three Purpose of the study and Research Questions.docx
3Assignment Three Purpose of the study and Research Questions.docx
lorainedeserre
 
380067.docxby Jamie FeryllFILET IME SUBMIT T ED 22- .docx
380067.docxby Jamie FeryllFILET IME SUBMIT T ED 22- .docx380067.docxby Jamie FeryllFILET IME SUBMIT T ED 22- .docx
380067.docxby Jamie FeryllFILET IME SUBMIT T ED 22- .docx
lorainedeserre
 
392Group Development JupiterimagesStockbyteThinkstoc.docx
392Group Development JupiterimagesStockbyteThinkstoc.docx392Group Development JupiterimagesStockbyteThinkstoc.docx
392Group Development JupiterimagesStockbyteThinkstoc.docx
lorainedeserre
 
39Chapter 7Theories of TeachingIntroductionTheories of l.docx
39Chapter 7Theories of TeachingIntroductionTheories of l.docx39Chapter 7Theories of TeachingIntroductionTheories of l.docx
39Chapter 7Theories of TeachingIntroductionTheories of l.docx
lorainedeserre
 
3902    wileyonlinelibrary.comjournalmec Molecular Ecology.docx
3902     wileyonlinelibrary.comjournalmec Molecular Ecology.docx3902     wileyonlinelibrary.comjournalmec Molecular Ecology.docx
3902    wileyonlinelibrary.comjournalmec Molecular Ecology.docx
lorainedeserre
 
38  Monthly Labor Review  •  June 2012TelecommutingThe.docx
38  Monthly Labor Review  •  June 2012TelecommutingThe.docx38  Monthly Labor Review  •  June 2012TelecommutingThe.docx
38  Monthly Labor Review  •  June 2012TelecommutingThe.docx
lorainedeserre
 

More from lorainedeserre (20)

4 Shaping and Sustaining Change Ryan McVayPhotodiscThink.docx
4 Shaping and Sustaining Change Ryan McVayPhotodiscThink.docx4 Shaping and Sustaining Change Ryan McVayPhotodiscThink.docx
4 Shaping and Sustaining Change Ryan McVayPhotodiscThink.docx
 
4.1 EXPLORING INCENTIVE PAY4-1 Explore the incentive pay a.docx
4.1 EXPLORING INCENTIVE PAY4-1 Explore the incentive pay a.docx4.1 EXPLORING INCENTIVE PAY4-1 Explore the incentive pay a.docx
4.1 EXPLORING INCENTIVE PAY4-1 Explore the incentive pay a.docx
 
38 u December 2017 January 2018The authorities beli.docx
38  u   December 2017  January 2018The authorities beli.docx38  u   December 2017  January 2018The authorities beli.docx
38 u December 2017 January 2018The authorities beli.docx
 
3Prototypes of Ethical ProblemsObjectivesThe reader shou.docx
3Prototypes of Ethical ProblemsObjectivesThe reader shou.docx3Prototypes of Ethical ProblemsObjectivesThe reader shou.docx
3Prototypes of Ethical ProblemsObjectivesThe reader shou.docx
 
4-5 Annotations and Writing Plan - Thu Jan 30 2111Claire Knaus.docx
4-5 Annotations and Writing Plan - Thu Jan 30 2111Claire Knaus.docx4-5 Annotations and Writing Plan - Thu Jan 30 2111Claire Knaus.docx
4-5 Annotations and Writing Plan - Thu Jan 30 2111Claire Knaus.docx
 
3Moral Identity Codes of Ethics and Institutional Ethics .docx
3Moral Identity Codes of  Ethics and Institutional  Ethics .docx3Moral Identity Codes of  Ethics and Institutional  Ethics .docx
3Moral Identity Codes of Ethics and Institutional Ethics .docx
 
3NIMH Opinion or FactThe National Institute of Mental Healt.docx
3NIMH Opinion or FactThe National Institute of Mental Healt.docx3NIMH Opinion or FactThe National Institute of Mental Healt.docx
3NIMH Opinion or FactThe National Institute of Mental Healt.docx
 
4.1Updated April-09Lecture NotesChapter 4Enterpr.docx
4.1Updated April-09Lecture NotesChapter 4Enterpr.docx4.1Updated April-09Lecture NotesChapter 4Enterpr.docx
4.1Updated April-09Lecture NotesChapter 4Enterpr.docx
 
3Type your name hereType your three-letter and -number cours.docx
3Type your name hereType your three-letter and -number cours.docx3Type your name hereType your three-letter and -number cours.docx
3Type your name hereType your three-letter and -number cours.docx
 
3Welcome to Writing at Work! After you have completed.docx
3Welcome to Writing at Work! After you have completed.docx3Welcome to Writing at Work! After you have completed.docx
3Welcome to Writing at Work! After you have completed.docx
 
3JWI 531 Finance II Assignment 1TemplateHOW TO USE THIS TEMP.docx
3JWI 531 Finance II Assignment 1TemplateHOW TO USE THIS TEMP.docx3JWI 531 Finance II Assignment 1TemplateHOW TO USE THIS TEMP.docx
3JWI 531 Finance II Assignment 1TemplateHOW TO USE THIS TEMP.docx
 
3Big Data Analyst QuestionnaireWithin this document are fo.docx
3Big Data Analyst QuestionnaireWithin this document are fo.docx3Big Data Analyst QuestionnaireWithin this document are fo.docx
3Big Data Analyst QuestionnaireWithin this document are fo.docx
 
3HR StrategiesKey concepts and termsHigh commitment .docx
3HR StrategiesKey concepts and termsHigh commitment .docx3HR StrategiesKey concepts and termsHigh commitment .docx
3HR StrategiesKey concepts and termsHigh commitment .docx
 
3Implementing ChangeConstruction workers on scaffolding..docx
3Implementing ChangeConstruction workers on scaffolding..docx3Implementing ChangeConstruction workers on scaffolding..docx
3Implementing ChangeConstruction workers on scaffolding..docx
 
3Assignment Three Purpose of the study and Research Questions.docx
3Assignment Three Purpose of the study and Research Questions.docx3Assignment Three Purpose of the study and Research Questions.docx
3Assignment Three Purpose of the study and Research Questions.docx
 
380067.docxby Jamie FeryllFILET IME SUBMIT T ED 22- .docx
380067.docxby Jamie FeryllFILET IME SUBMIT T ED 22- .docx380067.docxby Jamie FeryllFILET IME SUBMIT T ED 22- .docx
380067.docxby Jamie FeryllFILET IME SUBMIT T ED 22- .docx
 
392Group Development JupiterimagesStockbyteThinkstoc.docx
392Group Development JupiterimagesStockbyteThinkstoc.docx392Group Development JupiterimagesStockbyteThinkstoc.docx
392Group Development JupiterimagesStockbyteThinkstoc.docx
 
39Chapter 7Theories of TeachingIntroductionTheories of l.docx
39Chapter 7Theories of TeachingIntroductionTheories of l.docx39Chapter 7Theories of TeachingIntroductionTheories of l.docx
39Chapter 7Theories of TeachingIntroductionTheories of l.docx
 
3902    wileyonlinelibrary.comjournalmec Molecular Ecology.docx
3902     wileyonlinelibrary.comjournalmec Molecular Ecology.docx3902     wileyonlinelibrary.comjournalmec Molecular Ecology.docx
3902    wileyonlinelibrary.comjournalmec Molecular Ecology.docx
 
38  Monthly Labor Review  •  June 2012TelecommutingThe.docx
38  Monthly Labor Review  •  June 2012TelecommutingThe.docx38  Monthly Labor Review  •  June 2012TelecommutingThe.docx
38  Monthly Labor Review  •  June 2012TelecommutingThe.docx
 

Recently uploaded

SURVEY I created for uni project research
SURVEY I created for uni project researchSURVEY I created for uni project research
SURVEY I created for uni project research
CaitlinCummins3
 
MSc Ag Genetics & Plant Breeding: Insights from Previous Year JNKVV Entrance ...
MSc Ag Genetics & Plant Breeding: Insights from Previous Year JNKVV Entrance ...MSc Ag Genetics & Plant Breeding: Insights from Previous Year JNKVV Entrance ...
MSc Ag Genetics & Plant Breeding: Insights from Previous Year JNKVV Entrance ...
Krashi Coaching
 

Recently uploaded (20)

ANTI PARKISON DRUGS.pptx
ANTI         PARKISON          DRUGS.pptxANTI         PARKISON          DRUGS.pptx
ANTI PARKISON DRUGS.pptx
 
SURVEY I created for uni project research
SURVEY I created for uni project researchSURVEY I created for uni project research
SURVEY I created for uni project research
 
When Quality Assurance Meets Innovation in Higher Education - Report launch w...
When Quality Assurance Meets Innovation in Higher Education - Report launch w...When Quality Assurance Meets Innovation in Higher Education - Report launch w...
When Quality Assurance Meets Innovation in Higher Education - Report launch w...
 
Improved Approval Flow in Odoo 17 Studio App
Improved Approval Flow in Odoo 17 Studio AppImproved Approval Flow in Odoo 17 Studio App
Improved Approval Flow in Odoo 17 Studio App
 
The Ball Poem- John Berryman_20240518_001617_0000.pptx
The Ball Poem- John Berryman_20240518_001617_0000.pptxThe Ball Poem- John Berryman_20240518_001617_0000.pptx
The Ball Poem- John Berryman_20240518_001617_0000.pptx
 
24 ĐỀ THAM KHẢO KÌ THI TUYỂN SINH VÀO LỚP 10 MÔN TIẾNG ANH SỞ GIÁO DỤC HẢI DƯ...
24 ĐỀ THAM KHẢO KÌ THI TUYỂN SINH VÀO LỚP 10 MÔN TIẾNG ANH SỞ GIÁO DỤC HẢI DƯ...24 ĐỀ THAM KHẢO KÌ THI TUYỂN SINH VÀO LỚP 10 MÔN TIẾNG ANH SỞ GIÁO DỤC HẢI DƯ...
24 ĐỀ THAM KHẢO KÌ THI TUYỂN SINH VÀO LỚP 10 MÔN TIẾNG ANH SỞ GIÁO DỤC HẢI DƯ...
 
Stl Algorithms in C++ jjjjjjjjjjjjjjjjjj
Stl Algorithms in C++ jjjjjjjjjjjjjjjjjjStl Algorithms in C++ jjjjjjjjjjjjjjjjjj
Stl Algorithms in C++ jjjjjjjjjjjjjjjjjj
 
Features of Video Calls in the Discuss Module in Odoo 17
Features of Video Calls in the Discuss Module in Odoo 17Features of Video Calls in the Discuss Module in Odoo 17
Features of Video Calls in the Discuss Module in Odoo 17
 
Including Mental Health Support in Project Delivery, 14 May.pdf
Including Mental Health Support in Project Delivery, 14 May.pdfIncluding Mental Health Support in Project Delivery, 14 May.pdf
Including Mental Health Support in Project Delivery, 14 May.pdf
 
Basic Civil Engineering notes on Transportation Engineering, Modes of Transpo...
Basic Civil Engineering notes on Transportation Engineering, Modes of Transpo...Basic Civil Engineering notes on Transportation Engineering, Modes of Transpo...
Basic Civil Engineering notes on Transportation Engineering, Modes of Transpo...
 
How To Create Editable Tree View in Odoo 17
How To Create Editable Tree View in Odoo 17How To Create Editable Tree View in Odoo 17
How To Create Editable Tree View in Odoo 17
 
diagnosting testing bsc 2nd sem.pptx....
diagnosting testing bsc 2nd sem.pptx....diagnosting testing bsc 2nd sem.pptx....
diagnosting testing bsc 2nd sem.pptx....
 
demyelinated disorder: multiple sclerosis.pptx
demyelinated disorder: multiple sclerosis.pptxdemyelinated disorder: multiple sclerosis.pptx
demyelinated disorder: multiple sclerosis.pptx
 
PSYPACT- Practicing Over State Lines May 2024.pptx
PSYPACT- Practicing Over State Lines May 2024.pptxPSYPACT- Practicing Over State Lines May 2024.pptx
PSYPACT- Practicing Over State Lines May 2024.pptx
 
Spring gala 2024 photo slideshow - Celebrating School-Community Partnerships
Spring gala 2024 photo slideshow - Celebrating School-Community PartnershipsSpring gala 2024 photo slideshow - Celebrating School-Community Partnerships
Spring gala 2024 photo slideshow - Celebrating School-Community Partnerships
 
Andreas Schleicher presents at the launch of What does child empowerment mean...
Andreas Schleicher presents at the launch of What does child empowerment mean...Andreas Schleicher presents at the launch of What does child empowerment mean...
Andreas Schleicher presents at the launch of What does child empowerment mean...
 
Chapter 7 Pharmacosy Traditional System of Medicine & Ayurvedic Preparations ...
Chapter 7 Pharmacosy Traditional System of Medicine & Ayurvedic Preparations ...Chapter 7 Pharmacosy Traditional System of Medicine & Ayurvedic Preparations ...
Chapter 7 Pharmacosy Traditional System of Medicine & Ayurvedic Preparations ...
 
MSc Ag Genetics & Plant Breeding: Insights from Previous Year JNKVV Entrance ...
MSc Ag Genetics & Plant Breeding: Insights from Previous Year JNKVV Entrance ...MSc Ag Genetics & Plant Breeding: Insights from Previous Year JNKVV Entrance ...
MSc Ag Genetics & Plant Breeding: Insights from Previous Year JNKVV Entrance ...
 
Đề tieng anh thpt 2024 danh cho cac ban hoc sinh
Đề tieng anh thpt 2024 danh cho cac ban hoc sinhĐề tieng anh thpt 2024 danh cho cac ban hoc sinh
Đề tieng anh thpt 2024 danh cho cac ban hoc sinh
 
Dementia (Alzheimer & vasular dementia).
Dementia (Alzheimer & vasular dementia).Dementia (Alzheimer & vasular dementia).
Dementia (Alzheimer & vasular dementia).
 

202 Copyright © 2009 The Author(s)Evidence-Based Practice.docx

  • 1. 202 Copyright © 2009 The Author(s) Evidence-Based Practice: Critical Appraisal of Qualitative Evidence Kathleen M. Williamson One of the key steps of evidence-based practice is to critically appraise evidence to best answer a clinical question. Mental health clinicians need to understand the importance of qualitative evidence to their practice, including levels of qualitative evidence, qualitative inquiry methods, and criteria used to appraise qualitative evidence to determine how implementing the best qualitative evidence into their practice will influence mental health outcomes. The goal of qualitative research is to develop a complete understanding of reality as it is perceived by the individual and to uncover the truths that exist. These important aspects of mental health require clinicians to engage this evidence. J Am Psychiatr Nurses Assoc, 2009; 15(3), 202-207. DOI: 10.1177/1078390309338733 Keywords: evidence-based practice; qualitative inquiry; qualitative designs; critical appraisal of qualitative evidence; mental health Evidence-based practice (EBP) is an approach that enables psychiatric mental health care practitioners as well as all clinicians to provide the highest quality of care using the best evidence available (Melnyk & Fineout-Overholt, 2005). One of the key steps of EBP is to critically appraise evidence to best answer a
  • 2. clinical question. For many mental health questions, understanding levels of evidence, qualitative inquiry methods, and questions used to appraise the evidence are necessary to implement the best qualitative evi- dence into practice. Drawing conclusions and making judgments about the evidence are imperative to the EBP process and clinical decision making (Melnyk & Fineout-Overholt, 2005; Polit & Beck, 2008). The over- all purpose of this article is to familiarize clinicians with qualitative research as an important source of evidence to guide practice decisions. In this article, an overview of the goals, methods and types of qualita- tive research, and the criteria used to appraise the quality of this type of evidence will be presented. QUALITATIVE BELIEFS Qualitative research aims to generate insight, describe, and understand the nature of reality in human experiences (Ayers, 2007; Milne & Oberle, 2005; Polit & Beck, 2008; Saddler, 2006; Sandelowski, 2004; Speziale & Carpenter, 2003; Thorne, 2000). Qualitative researchers are inquisitive and seek to understand knowledge about how people think and feel, about the circumstances in which they find themselves, and use methods to uncover and decon- struct the meaning of a phenomenon (Saddler, 2006; Thorne, 2000). Qualitative data are collected in a natural setting. These data are not numerical; rather, they are full and rich descriptions from participants who are experiencing the phenomenon under study. The goal of qualitative research is to uncover the truths that exist and develop a complete understand- ing of reality and the individual’s perception of what is real. This method of inquiry is deeply rooted in
  • 3. descriptive modes of research. “The idea that multiple realties exist and create meaning for the individuals studied is a fundamental belief of qualitative research- ers” (Speziale & Carpenter, 2003, p. 17). Qualitative research is the studying, collecting, and understand- ing the meaning of individuals’ lives using a variety of materials and methods (Denzin & Lincoln, 2005). WHAT IS A QUALITATIVE RESEARCHER? Qualitative researchers commonly believe that indi- viduals come to know and understand their reality in Kathleen M. Williamson, PhD, RN, associate director, Center for the Advancement of Evidence-Based Practice, Arizona State University, College of Nursing & Healthcare Innovation, Phoenix, Arizona; [email protected] Journal of the American Psychiatric Nurses Association,Vol. 15, No. 3 203 Critical Appraisal of Qualitative Evidence different ways. It is through the lived experience and the interactions that take place in the natural setting that the researcher is able to discover and understand the phenomenon under study (Miles & Huberman, 1994; Patton, 2002; Speziale & Carpenter, 2003). To ensure the least disruption to the environ- ment/natural setting, qualitative researchers care- fully consider the best research method to answer
  • 4. the research question (Speziale & Carpenter, 2003). These researchers are intensely involved in all aspects of the research process and are considered participants and observers in setting or field (Patton, 2002; Polit & Beck, 2008; Speziale & Carpenter, 2003). Flexibility is required to obtain data from the richest possible sources of information. Using a holistic approach, the researcher attempts to cap- ture the perceptions of the participants from an “emic” approach (i.e., from an insider’s viewpoint; Miles & Huberman, 1994; Speziale & Carpenter, 2003). Often, this is accomplished through the use of a variety of data collection methods, such as inter- views, observations, and written documents (Patton, 2002). As the data are collected, the researcher simultaneously analyzes it, which includes identi- fying emerging themes, patterns, and insights within the data. According to Patton (2002), quali- tative analysis engages exploration, discovery, and inductive logic. The researcher uses a rich literary account of the setting, actions, feelings, and mean- ing of the phenomenon to report the findings (Patton, 2002). COMMONLY USED QUALITATIVE DESIGNS According to Patton (2002), “Qualitative methods are first and foremost research methods. They are ways of finding out what people do, know, think, and feel by observing, interviewing, and analyzing docu- ments” (p. 145). Qualitative research designs vary by type and purpose: data collection strategies used and the type of question or phenomenon under study. To critically appraise qualitative evidence for its valid-
  • 5. ity and use in practice, an understanding of the types of qualitative methods as well as how they are employed and reported is necessary. Many of the methods are routed in the anthropol- ogy, psychological, and sociology disciplines. Many commonly used methods in the health sciences research are ethnography, phenomenology, and grounded theory (see Table 1). Ethnography Ethnography has its traditions in cultural anthropology, which describe the values, beliefs, and practice of cultural groups (Ploeg, 1999; Polit & Beck, 2008). According to Speziale and Carpenter (2003), the characteristics that are central to eth- nography are that (a) the research is focused on culture, (b) the researcher is totally immersed in the culture, and (c) the researcher is aware of her/ his own perspective as well as those in the study. Ethnographic researchers strive to study cultures from an emic approach. The researcher as a par- ticipant observer becomes involved in the culture to collect data, learn from participants, and report on the way participants see their world (Patton, 2002). Data are primarily collected through obser- vations and interviews. Analysis of ethnographic results involves identifying the meanings attrib- uted to objects and events by members of the cul- ture. These meanings are often validated by members of the culture before finalizing the results (called member checks). This is a labor-intensive method that requires extensive fieldwork. TABLE 1. Most Commonly Used Qualitative Research Methods
  • 6. Method Purpose Research question(s) Sample size (on average) Data sources/collection Ethnography Describe culture of people What is it like to live . . . What is it . . . 30-50 Interviews, observations, field notes, records, chart data, life histories Phenomenology Describe phenomena, the appearance of things, as lived experience of humans in a natural setting What is it like to have this experience? What does it feel like? 6-8 Interviews, videotapes, observations,
  • 7. in-depth conversations Grounded theory To develop a theory rather than describe a phenomenon Questions emerge from the data 25-50 Taped interview, observation, diaries, and memos from researcher Source. Adapted from Polit and Beck (2008) and Speziale and Carpenter(2003). 204 Journal of the American Psychiatric Nurses Association,Vol. 15, No. 3 Williamson Phenomenology Phenomenology has its roots in both philosophy and psychology. Polit and Beck (2008) reported, “Phenomenological researchers believe that lived experience gives meaning to each person’s percep- tion of a particular phenomenon” (p. 227). According to Polit and Beck, there are four aspects of the human experience that are of interest to the phe- nomenological researcher: (a) lived space (spatial-
  • 8. ity), (b) lived body (corporeality), (c) lived human relationships (relationality), and (d) lived time (tem- porality). Phenomenological inquiry is focused on exploring how participants in the experience make sense of the experience, transform the experience into consciousness, and the nature or meaning of the experience (Patton, 2002). Interpretive phenom- enology (hermeneutics) focuses on the meaning and interpretation of the lived experience to better understand social, cultural, political, and historical context. Descriptive phenomenology shares vivid reports and describes the phenomenon. In a phenomenological study, the researcher is an active participant/observer who is totally immersed in the investigation. It involves gaining access to participants who could provide rich descriptions from in-depth interviews to gather all the informa- tion needed to describe the phenomenon under study (Speziale & Carpenter, 2003). Ongoing analyses of direct quotes and statements by participants occur until common themes emerge. The outcome is a vivid description of the experience that captures the meaning of the experience and communicates clearly and logically the phenomenon under study (Speziale & Carpenter, 2003). Grounded Theory Grounded theory has its roots in sociology and explores the social processes that are present within human interactions (Speziale & Carpenter, 2003). The purpose is to develop or build a theory rather than test a theory or describe a phenomenon (Patton, 2002). Grounded theory takes an inductive approach in which the researcher seeks to generate emergent
  • 9. categories and integrate them into a theory grounded in the data (Polit & Beck, 2008). The research does not start with a focused problem; it evolves and is discovered as the study progresses. A feature of grounded theory is that the data collection, data analysis, and sampling of participants occur simulta- neously (Polit & Beck, 2008; Powers, 2005). The researchers using ground theory methodology are able to critically analyze situations, not remove themselves from the study but realize that they are part of it, recognize bias, obtain valid and reliable data, and think abstractly (Strauss & Corbin, 1990). Data collection is through in-depth interview and observations. A constant comparative process is used for two reasons: (a) to compare every piece of data with every other piece to more accurately refine the relevant categories and (b) to assure the researcher that saturation has occurred. Once saturation is reached the researcher connects the categories, pat- terns, or themes that describe the overall picture that emerged that will lead to theory development. ASPECTS OF QUALITATIVE RESEARCH The most important aspects of qualitative inquiry is that participants are actively involved in the research process rather than receiving an interven- tion or being observed for some risk or event to be quantified. Another aspect is that the sample is pur- posefully selected and is based on experience with a culture, social process, or phenomena to collect infor- mation that is rich and thick in descriptions. The final essential aspect of qualitative research is that one or more of the following strategies are used to collect
  • 10. data: interviews, focus groups, narratives, chat rooms, and observation and/or field notes. These methods may be used in combination with each other. The researcher may choose to use triangulation strategies on data collection, investigator, method, or theory and use multiple sources to draw conclusions about the phenomenon (Patton, 2002; Polit & Beck, 2009). SUMMARY This is not an inclusive list of qualitative methods that researchers could choose to use to answer a research question, other methods include historical research, feminist research, case study method, and action research. All qualitative research methods are used to describe and discover meaning, understand- ing, or develop a theory and transport the reader to the time and place of the observation and/or inter- view (Patton, 2002). THE HIERARCHY OF QUALITATIVE EVIDENCE Clinical questions that require qualitative evi- dence to answer them focus on human response and Journal of the American Psychiatric Nurses Association,Vol. 15, No. 3 205 Critical Appraisal of Qualitative Evidence meaning. An important step in the process of apprais- ing qualitative research as a guide for clinical prac- tice is the identification of the level of evidence or the
  • 11. “best” evidence. The level of evidence is a guide that helps identify the most appropriate, rigorous, and clinically relevant evidence to answer the clinical question (Polit & Beck, 2008). Evidence hierarchy for qualitative research ranges from opinion of authori- ties and/or reports of expert committees to a single qualitative research study to metasynthesis (Melnyk & Fineout-Overholt, 2005; Polit & Beck, 2008). A metasynthesis is comparable to meta-analysis (i.e., systematic reviews) of quantitative studies. A meta- synthesis is a technique that integrates findings of multiple qualitative studies on a specific topic, pro- viding an interpretative synthesis of the research findings in narrative form (Polit & Beck, 2008). This is the strongest level of evidence in which to answer a clinical question. The higher the level of evidence the stronger the evidence is to change practice. However, all evidence needs be critically appraised based on (a) the best available evidence (i.e., level of evidence), (b) the quality and reliability of the study, and (c) the applicability of the findings to practice. CRITICAL APPRAISAL OF QUALITATIVE EVIDENCE Once the clinical issue has been identified, the PICOT question constructed, and the best evidence located through an exhaustive search, the next step is to critically appraise each study for its validity (i.e., the quality), reliability, and applicability to use in practice (Melnyk & Fineout-Overholt, 2005). Although there is no consensus among qualitative researchers on the quality criteria (Cutcliffe & McKenna, 1999; Polit & Beck, 2008; Powers, 2005; Russell & Gregory, 2003; Sandelowski, 2004), many have published excellent tools that guide the process
  • 12. for critically appraising qualitative evidence (Duffy, 2005; Melnyk & Fineout-Overholt, 2005; Polit & Beck, 2008; Powers, 2005; Russell & Gregory, 2003; Speziale & Carpenter, 2003). They all base their cri- teria on three primary questions: (a) Are the study findings valid? (b) What were the results of the study? (c) Will the results help me in caring for my patients? According to Melnyk and Fineout-Overholt (2005), “The answers to these questions ensure rele- vance and transferability of the evidence from the search to the specific population for whom the practi- tioner provides care” (p. 120). In using the questions in Tables 2, 3, and 4, one can evaluate the evidence and determine if the study findings are valid, the method and instruments used to acquire the knowl- edge credible, and if the findings are transferable. The qualitative process contributes to the rigor or trustworthiness of the data (i.e., the quality). “The goal of rigor in qualitative research is to accurately represent study participants’ experiences” (Speziale & Carpenter, 2003, p. 38). The qualitative attributes of validity include credibility, dependability, confirm- ability, transferability, and authenticity (Guba & Lincoln, 1994; Miles & Huberman, 1994; Speziale & Carpenter, 2003). Credibility is having confidence and truth about the data and interpretations (Polit & Beck, 2008). The credibility of the findings hinges on the skill, competence, and rigor of the researcher to describe the content shared by the participants and the abil- ity of the participants to accurately describe the phenomenon (Patton, 2002; Speziale & Carpenter, 2003). Cutcliffe and McKenna (1999) reported that
  • 13. the most important indicator of the credibility of findings is when a practitioner reads the study find- ings and regards them meaningful and applicable and incorporates them into his or her practice. Confirmability refers to the way the researcher documents and confirms the study findings (Speziale TABLE 2. Subquestions to Further Answer, Are the Study Findings Valid? Participants Sample Data collection How were they selected? Was it adequate? How were the data collected? Did they provide rich and thick descriptions? Was the setting appropriate to
  • 14. acquire an adequate sample? Were the tools adequate? Were the participants’ rights protected? Was the sampling method appropriate? How were the data coded? If so how? Did the researcher eliminate bias? Do the data accurately represent the study participants? How accurate and complete were the data? Was the group or population adequately described? Was saturation achieved?
  • 15. Does gathering the data adequately portray the phenomenon? Source. Adapted from Powers (2005), Polit and Beck (2008), Russell and Gregory (2003), and Speziale and Carpenter (2003). 206 Journal of the American Psychiatric Nurses Association,Vol. 15, No. 3 Williamson & Carpenter, 2003). Confirmability is the process of confirming the accuracy, relevance, and meaning of the data collected. Confirmability exists if (a) the researcher identifies if saturation was reached and (b) records of the methods and procedures are detailed enough that they can be followed by an audit trail (Miles & Huberman, 1994). Dependability is a standard that demonstrates whether (a) the process of the study was consistent, (b) data remained consistent over time and conditions, and (c) the results are reliable (Miles & Huberman, 1994; Polit & Beck, 2008; Speziale & Carpenter, 2003). For example, if study methods and results are depend- able, the researcher consistently approaches each occurrence in the same way with each encounter and results were coded with accuracy across the study. Transferability refers to the probability that the study findings have meaning and are usable by oth- ers in similar situations (i.e., generalizable to others
  • 16. in that situation; Miles & Huberman, 1994; Polit & Beck, 2008; Speziale & Carpenter, 2003). To deter- mine if the findings of a study are transferable and can be used by others, the clinician must consider the potential client to whom the findings may be applied (Speziale & Carpenter, 2003). Authenticity is when the researcher fairly and faithfully shows a range of different realities and develops an accurate and authentic portrait for the phenomenon under study (Polit & Beck, 2008). For example, if a clinician were to be in the same environment as the researcher describes, they would experience the phenomenon similarly. All mental health providers need to become familiar with these aspects of qualitative evidence and hone their criti- cal appraisal skills to enable them to improve the outcomes of their clients. CONCLUSION Qualitative research aims to impart meaning of the human experience and understand how people think and feel about their circumstances. Qualitative researchers use a holistic approach in an attempt to uncover truths and understand a person’s reality. The researcher is intensely involved in all aspects of the research design, collection, and analysis pro- cesses. Ethnography, phenomenology, and grounded theory are some of the designs that a researcher may use to study a culture, phenomenon, or theory. Data collection strategies vary based on the research question, method, and informants. Methods such as interviews, observations, and journals allow for information-rich participants to provide detailed lit-
  • 17. erary accounts of the phenomenon. Data analysis occurs simultaneously as data collection and is the process by which the researcher identifies themes, concepts, and patterns that provide insight into the phenomenon under study. One of the crucial steps in the EBP process is to critically appraise the evidence for its use in practice TABLE 3. Subquestions to Further Answer, What Were the Results of the Study? Is the research design appropriate for the research question? Is the description of findings thorough? Do findings fit the data from which they were generated? Are the results logical, consistent, and easy to follow?
  • 18. Was the purpose of the study clear? Were all themes identified, useful, creative, and convincing of the phenomena? Source. Adapted from Powers (2005), Russell and Gregory (2003), and Speziale and Carpenter (2003). TABLE 4. Subquestions to Further Answer, Will the Results Help Me in Caring for My Patients? What meaning and relevance does this study have for my patients? How would I use these findings in my practice? How does the study help provide perspective on my practice? Are the conclusions appropriate to my patient population?
  • 19. Are the results applicable to my patients? How would patient and family values be considered in applying these results? Source. Adapted from Powers (2005), Russell and Gregory (2003), and Speziale and Carpenter (2003). Journal of the American Psychiatric Nurses Association,Vol. 15, No. 3 207 Critical Appraisal of Qualitative Evidence and determine the value of findings. Critical appraisal is the review of the evidence for its validity (i.e., strengths and weaknesses), reliability, and usefulness for clients in daily practice. “Psychiatric mental health clinicians are practicing in an era emphasizing the use of the most current evidence to direct their treatment and interventions” (Rice, 2008, p. 186). Appraising the evidence is essential for assurance that the best knowledge in the field is being applied in a cost-effective, holistic, and effective way. To do this, one must incorporate the critically appraised findings with their abilities as clinicians and their clients’ preferences. As professionals, clinicians are expected to use the EBP process, which includes appraising the evidence to determine if the best
  • 20. results are believable, useable, and dependable. Clinicians in psychiatric mental health must use qualitative evidence to inform their practice deci- sions. For example, how do clients newly diagnosed with bipolar and their families perceive the life impact of this diagnosis? Having a well done meta- synthesis that provides an accurate representation of the participants’ experiences, and is trustworthy (i.e., credible, dependable, confirmable, transferable, and authentic), will provide insight into the situational context, human response, and meaning for these cli- ents and will assist clinicians in delivering the best care to achieve the best outcomes. REFERENCES Ayers, L. (2007). Qualitative research proposals—Part I. Journal Wound Ostomy Continence Nursing, 34, 30-32. Cutcliffe, J. R., & McKenna, H. P. (1999). Establishing the credibil- ity of qualitative research findings: The plot thickens. Journal of Advanced Nursing, 30, 374-380. Denzin, N. K., & Lincoln, Y. S. (2005). The Sage handbook of qualitative research (3rd ed.). Thousand Oaks, CA: Sage. Duffy, M. E. (2005). Resources for critically appraising qualitative research evidence of nursing practice clinical question. Clinical Nursing Specialist, 19, 288-290. Guba, E. G., & Lincoln, Y. S. (1994). Competing paradigms in qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (pp. 105-117). Thousand
  • 21. Oaks, CA: Sage. Melnyk, B. M., & Fineout-Overholt, E. (Eds.). (2005). Evidence-based practice in nursing and healthcare. Philadelphia: Lippincott Williams & Wilkins. Miles, M. B., & Huberman, A. M. (1994). An expend sourcebook qualitative data analysis (4th ed.). Thousand Oaks, CA: Sage. Milne, J., & Oberle, K. (2005). Enhancing rigor in qualitative description: A case study. Journal Wound Ostomy Continence Nursing, 32, 413-420. Patton, M. Q. (2002). Qualitative research & evaluation methods (3rd ed.). Thousand Oaks: Sage. Ploeg, J. (1999). Identifying the best research design to fit the question. Part 2: Qualitative designs. Evidence-Based Nursing, 2, 36-37. Polit, D. F., & Beck, C. T. (2008). Nursing research: Generating and assessing evidence fro nursing practice. Philadelphia: Lippincott Williams & Wilkins. Powers, B. A. (2005). Critically appraising qualitative evidence. In B. M. Melnyk & E. Fineout-Overholt (Eds.), Evidence-based practice in nursing and healthcare (pp. 127-162). Philadelphia: Lippincott Williams & Wilkins. Rice, M. J. (2008). Evidence-based practice in psychiatric care:
  • 22. Defining levels of evidence. Journal of the American Psychiatric Nurses Association, 14(3), 181-187. Russell, C. K., & Gregory, D. M. (2003). Evaluation of qualitative research studies. Evidence-Based Nursing, 6, 36-40. Saddler, D. (2006). Research 101. Gastroenterology Nursing, 30, 314-316. Sandelowski, M. (2004). Using qualitative research. Qualitative Health Research, 14, 1366-1386. Speziale, H. J. S., & Carpenter, D. R. (2003). Qualitative research in nursing: Advancing the humanistic imperative. Philadelphia: Lippincott Williams & Wilkins. Strauss, A., & Corbin, J. (1990). Basics of qualitative research: Grounded theory procedures and techniques. London: Sage. Thorne, S. (2000). Data analysis in qualitative research. Evidence- Based Nursing, 3, 68-70. For reprints and permission queries, please visit SAGE’s Web site at http://www.sagepub.com/journalsPermissions.nav. By Ellen Fineout-Overholt, PhD, RN, FNAP, FAAN, Bernadette Mazurek Melnyk, PhD, RN, CPNP/PMHNP,
  • 23. FNAP, FAAN, Susan B. Stillwell, DNP, RN, CNE, and Kathleen M. Williamson, PhD, RN In September’s evidence- based practice (EBP) article, Rebecca R., our hypotheti cal staff nurse, Carlos A., her hospi- tal’s expert EBP mentor, and Chen M., Rebecca’s nurse colleague, ra- pidly critically appraised the 15 articles they found to answer their clinical question—“In hospital- ized adults (P), how does a rapid response team (I) compared with no rapid response team (C) affect the number of cardiac arrests (O) and unplanned admissions to the ICU (O) during a three-month period (T)?”—and determined that they were all “keepers.” The team now begins the process of evaluation and syn thesis of the articles to see what the evidence says about initiating a rapid re- sponse team (RRT) in their hos- pital. Carlos reminds them that evaluation and synthesis are syn- ergistic processes and don’t neces- sarily happen one after the other. Nevertheless, to help them learn, he will guide them through the EBP process one step at a time. STARTING THE EVALUATION
  • 24. Rebecca, Carlos, and Chen begin to work with the evaluation table they created earlier in this process when they found and filled in the essential elements of the 15 stud- ies and projects (see “Critical Ap - praisal of the Evidence: Part I,” July). Now each takes a stack of the “keeper” studies and system- atically begins adding to the table any remaining data that best re - flect the study elements pertain- ing to the group’s clinical question (see Table 1; for the entire table with all 15 articles, go to http:// links.lww.com/AJN/A17). They had agreed that a “Notes” sec- tion within the “Appraisal: Worth to Practice” column would be a good place to record the nuances of an article, their impressions of it, as well as any tips—such as what worked in calling an RRT— that could be used later when they write up their ideas for ini- tiating an RRT at their hospital, if the evidence points in that direc- tion. Chen remarks that al though she thought their ini tial table con- tained a lot of information, this final version is more thorough by far. She appreciates the opportu- nity to go back and confirm her original understanding of the
  • 25. study essentials. The team members discuss the evolving patterns as they complete the table. The three systematic Critical Appraisal of the Evidence: Part III The process of synthesis: seeing similarities and differences across the body of evidence. This is the seventh article in a series from the Arizona State University College of Nursing and Health Innovation’s Center for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach to the delivery of health care that integrates the best evidence from studies and patient care data with clinician exper- tise and patient preferences and values. When delivered in a context of caring and in a supportive organizational culture, the highest quality of care and best patient outcomes can be achieved. The purpose of this series is to give nurses the knowledge and skills they need to implement EBP consistently, one step at a time. Articles will appear every two months to allow you time to incorporate information as you work toward implementing EBP at your institution. Also, we’ve scheduled “Chat with the Authors” calls every few months to provide a direct line to the experts to help you resolve questions. See details below. Need Help with Evidence-Based Practice? Chat with the Authors on November 16! On November 16 at 3 PM EST, join the “Chat with the Au - thors” call. It’s your chance to get personal consultation from the experts! Dial-in early! U.S. and Canada, dial 1-800-947-
  • 26. 5134 (International, dial 001-574-941-6964). When prompted, enter code 121028#. Go to www.ajnonline.com and click on “Podcasts” and then on “Conversations” to listen to our interview with Ellen Fineout- Overholt and Bernadette Mazurek Melnyk. [email protected] AJN ▼ November 2010 ▼ Vol. 110, No. 11 43 44 AJN ▼ November 2010 ▼ Vol. 110, No. 11 ajnonline.com Ta bl e 1. F in al E va lu at io n Ta
  • 59. tif ie d no . o f a ct iv a- tio ns o f R RT /1 ,0 00 ad m is si on s •
  • 67. m ): be ne fit s ou tw ei gh ri sk s [email protected] AJN ▼ November 2010 ▼ Vol. 110, No. 11 45 M cG au gh ey J,
  • 73. iti on : N R IV : R RT D V1 : H M R H M R: A us tra lia : ov er al
  • 87. be ds : 5 00 A ttr iti on : N R IV : R RT D V1 : H M R D V2 : C R
  • 113. em at ic r ev ie w ; U K = U ni te d Ki ng do m 46 AJN ▼ November 2010 ▼ Vol. 110, No. 11 ajnonline.com as well as a good num ber of jour- nals have encouraged their use. When they review the actual guidelines, the team notices that
  • 114. they seem to be fo cused on re- search; for example, they require a research question and refer to the study of an intervention, whereas EBP projects have PICOT questions and apply evidence to practice. The team discusses that these guidelines can be confusing to the clinicians au thoring the re- ports on their proj ects. In addition, they note that there’s no mention of the syn thesis of the body of evidence that should drive an evidence-based project. While the SQUIRE Guidelines are a step in the right direction for the future, Carlos, Rebecca, and Chen con- clude that, for now, they’ll need to learn to read these studies as they find them—looking care- fully for the details that inform their clinical question. Once the data have been en- tered into the table, Carlos sug- gests that they take each column, one by one, and note the similari- ties and differences across the studies and projects. After they’ve briefly looked over the columns, he asks the team which ones they think they should focus on to an- swer their question. Re becca and Chen choose “Design/ Method,” “Sample/Setting,” “Findings,” and
  • 115. “Appraisal: Worth to Practice” (see Table 1) as the ini tial ones to consider. Carlos agrees that these are the columns in which they’re most likely to find the most pertinent information for their syn thesis. Chen in their efforts to appraise the MERIT study and comments on how well they’re putting the pieces of the evidence puzzle to- gether. The nurses are excited that they’re able to use their new knowledge to shed light on the study. They discuss with Carlos how the interpretation of the MERIT study has perhaps con- tributed to a misunderstanding of the impact of RRTs. Comparing the evidence. As the team enters the lower-level evi- dence into the evaluation table, they note that it’s challenging to compare the project reports with studies that have clearly described methodology, measurement, anal - ysis, and findings. Chen remarks that she wishes researchers and clinicians would write study and project reports similarly. Although each of the studies has a process or method determining how it was conducted, as well as how out-
  • 116. comes were measured, data were analyzed, and results interpreted, comparing the studies as they’re currently written adds an other layer of complexity to the eval- uation. Carlos says that while it would be great to have studies and projects written in a similar for- mat so they’re easier to compare, that’s unlikely to happen. But he tells the team not to lose all hope, as a format has been de veloped for re porting quality improve- ment initiatives called the SQUIRE Guidelines; however, they aren’t ideal. The team looks up the guide- lines online (www.squire-statement. org) and finds that the In stitute for Healthcare Improve ment (IHI) reviews, which are higher-level evidence, seem to have an inher- ent bias in that they included only studies with control groups. In general, these studies weren’t in favor of initiating an RRT. Carlos asks Rebecca and Chen whether, now that they’ve appraised all the evidence about RRTs, they’re con - fident in their decision to include all the studies and projects (in - cluding the lower-level evidence) among the “keepers.” The nurses reply with an emphatic affirma- tive! They tell Carlos that the pro j -
  • 117. ects and descriptive studies were what brought the issue to life for them. They realize that the higher- level evidence is somewhat in conflict with the lower-level evi- dence, but they’re most interested in the conclusions that can be drawn from considering the entire body of evidence. Rebecca and Chen admit they have issues with the systematic reviews, all of which include the MERIT study.1-4 In particular, they discuss how the authors of the systematic reviews made sure to report the MERIT study’s finding that the RRT had no effect, but didn’t emphasize the MERIT study authors’ discussion about how their study methods may have influenced the reliability of the findings (for more, see “Critical Appraisal of the Evi dence: Part II,” Septem ber). Carlos says that this is an excellent observation. He also reminds the team that clinicians may read a systematic review for the conclusion and never consider the original stud- ies. He encourages Rebecca and It’s not the number of studies or projects that determines the reliability of their findings, but the uniformity and
  • 118. quality of their methods. [email protected] AJN ▼ November 2010 ▼ Vol. 110, No. 11 47 SYNTHESIZING: MAKING DECISIONS BASED ON THE EVIDENCE Design/Method. The team starts with the “Design/Method” column because Carlos reminds them that it’s important to note each study’s level of evidence. He suggests that they take this information and create a synthesis table (one in which data is extracted from the evaluation table to better see the similarities and differences bet ween studies) (see Table 21-15). The synthesis table makes it clear that there is less higher-level and more lower-level evidence, which will impact the reliability of the overall findings. As the team noted, the higher-level evidence is not without meth odological issues, which will increase the challenge of coming to a conclusion about the impact of an RRT on the out - comes. Sample/Setting. In reviewing the “Sample/Setting” column, the group notes that the number of
  • 119. hospital beds ranged from 218 to 662 across the studies. There were several types of hospitals represented (4 teaching, 4 com- munity, 4 no mention, 2 acute care hospitals, and 1 public hos- pital). The evidence they’ve col- lected seems applicable, since their hospital is a community hos pital. Findings. To help the team better discuss the evidence, Car- los suggests that they refer to all pro j ects or studies as “the body of evidence.” They don’t want to get confused by calling them all studies, as they aren’t, but at the same time continually referring to “stud ies and projects” is cum- bersome. He goes on to say that, as part of the synthesis process, it’s impor tant for the group to determine the overall impact of the intervention across the body of evi dence. He helps them create a second synthesis table contain- ing the findings of each study or pro ject (see Table 31-15). As they look over the results, Rebecca and Chen note that RRTs reduce code rates, par ti cularly outside the ICU, whereas unplanned ICU admissions (UICUA) don’t seem to be as affected by them.
  • 120. How ever, 10 of the 15 studies and projects reviewed didn’t ev aluate this outcome, so it may not be fair to write it off just yet. Table 2: The 15 Studies: Levels and Types of Evidence 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Level I: Systematic review or meta-analysis X X X Level II: Randomized con- trolled trial X Level III: Controlled trial without randomization Level IV: Case-control or cohort study X X Level V: Systematic review of qualitative or descrip- tive studies Level VI: Qualitative or descriptive study (includes evidence implementation projects)
  • 121. X X X X X X X X X Level VII: Expert opinion or consensus Adapted with permission from Melnyk BM, Fineout-Overholt E, editors. Evidence-based practice in nursing and healthcare: a guide to best practice. 2nd ed. Philadelphia: Wolters Kluwer Health / Lippincott Williams and Wilkins; 2010. 1 = Chan PS, et al. (2010); 2 = McGaughey J, et al.; 3 = Winters BD, et al.; 4 = Hillman K, et al.; 5 = Sharek PJ, et al.; 6 = Chan PS, et al. (2009); 7 = DeVita MA, et al.; 8 = Mailey J, et al.; 9 = Dacey MJ, et al.; 10 = McFarlan SJ, Hensley S.; 11 = Offner PJ, et al.; 12 = Bertaut Y, et al.; 13 = Benson L, et al.; 14 = Hatler C, et al.; 15 = Bader MK, et al. hav ing level- VI evidence, a study and a project, had statistically significant (less likely to occur by chance, P < 0.05) reductions in HMR, which in creases the reli- ability of the results. Chen asks, since four level-VI reports documented that an RRT reduces HMR, should they put more confidence in findings that occur more than once? Carlos re- plies that it’s not the number of
  • 122. studies or projects that determines the re liability of their findings, but the uniformity and quality of their methods. He recites something he heard in his Expert EBP Mentor program that helped to clarify the concept of making decisions based on the evidence: the level of the evidence (the design) plus the quality of the evidence (the validity of the methods) equals the strength of the evidence, which is what leads clinicians to act in con - fidence and apply the evidence (or not) to their practice and expect similar findings (outcomes). In terms of making a decision about whether or not to initiate an RRT, Carlos says that their evidence stacks up: first, the MERIT study’s results are questionable because of problems with the study meth- ods, and this affects the reliability of the three systematic reviews as well as the MERIT study it self; second, the reasonably conducted lower-level studies/projects, with their statistically significant find- ings, are persuasive. Therefore, the team begins to consider the possibility that initiating an RRT may re duce code rates outside the ICU (CRO) and may impact non- ICU mor tality; both are outcomes they would like to address. The
  • 123. evidence doesn’t provide equally The EBP team can tell from reading the evidence that research - ers consider the impact of an RRT on hospital-wide mortality rates (HMR) as the more important outcome; however, the group re - mains unconvinced that this out- come is the best for evaluating the purpose of an RRT, which, according to the IHI, is early in - tervention in patients who are unstable or at risk for cardiac or respiratory arrest.16 That said, of the 11 studies and projects that evaluated mortality, more than half found that an RRT reduced it. Carlos reminds the group that four of those six articles are level-VI evidence and that some weren’t research. The findings produced at this level of evidence are typi- cally less reliable than those at higher levels of evidence; how- ever, Carlos notes that two articles 48 AJN ▼ November 2010 ▼ Vol. 110, No. 11 ajnonline.com Table 3: Effect of the Rapid Response Team on Outcomes 1a 2a 3a 4a 5a 6a 7 8 9 10 11 12 13 14 15 HMR adult b
  • 124. peds b NE c b NR NE c NE b, d CRO NE NE NE NE c b NE NE b c b c NE c c CR b peds and adult NE b NE b c NE NE NE NE b NE NE UICUA NE NE NE NE NE NE NE b c NE NE NE b 1 = Chan PS, et al. (2010); 2 = McGaughey J, et al.; 3 = Winters BD, et al.; 4 = Hillman K, et al.; 5 = Sharek PJ, et al.; 6 = Chan PS, et al. (2009); 7 = DeVita MA, et al.; 8 = Mailey J, et al.; 9 = Dacey MJ, et al.; 10 = McFarlan SJ, Hensley S.; 11 = Offner PJ, et al.; 12 = Bertaut Y, et al.; 13 = Benson L, et al.; 14 = Hatler C, et al.; 15 = Bader MK, et al. CR = cardiopulmonary arrest or code rates; CRO = code rates outside the ICU; HMR = hospital-wide mortality rates; NE = not evaluated; NR = not reported; UICUA = unplanned ICU admissions a higher-level evidence; b statistically significant findings; c statistical significance not reported; d non-ICU mortality was reduced
  • 125. [email protected] AJN ▼ November 2010 ▼ Vol. 110, No. 11 49 the important outcomes to mea- sure are: CRO, non-ICU mortality (excluding patients with do not resuscitate [DNR] orders), UICUA, and cost. Appraisal: Worth to Practice. As the team discusses their syn- thesis and the decision they’ll make based on the evidence, data in the “Findings” column that shows a financial return on in vestment for an RRT.9 Carlos remarks to the group that this is only one study, and that they’ll need to make sure to collect data on the costs of their RRT as well as the cost implications of the outcomes. They determine that promising results for UICUA, but the team agrees to include it in the outcomes for their RRT pro j - ect be cause it wasn’t evaluated in most of the articles they ap- praised. As the EBP team continues to discusses probable outcomes, Re becca points to one study’s Table 4. Defined Criteria for Initiating an RRT Consult
  • 126. 4 8 9 13 15 Respiratory distress (breaths/min) Airway threatened Respiratory arrest RR < 5 or > 36 RR < 10 or > 30 RR < 8 or > 30 Unexplained dys- pnea RR < 8 or > 28 New-onset difficulty breathing RR < 10 or > 30 Shortness of breath Change in mental status Change in LOC Decrease in Glasgow Coma Scale of > 2 points ND Unexplained change Sudden decrease
  • 127. in LOC with normal blood glucose Decreased LOC Tachycardia (beats/ min) >140 > 130 Unexplained > 130 for 15 min > 120 > 130 Bradycardia (beats/ min) < 40 < 60 Unexplained < 50 for 15 min < 40 < 40 Blood pressure (mmHg) SBP < 90 SBP < 90 or > 180 Hypotension (unex- plained) SBP > 200 or < 90 SBP < 90 Chest pain Cardiac arrest ND ND Complaint of nontrau- matic chest pain Complaint of nontraumatic
  • 128. chest pain Seizures Sudden or extended ND ND Repeated or pro- longed ND Concern/worry about patient Serious concern about a patient who doesn’t fit the above criteria NE Nurse concern about overall deterioration in patients’ condi- tion without any of the above criteria (p. 2077) Nurse concern • Uncontrolled pain • Failure to respond to treatment • Unable to obtain prompt assistance for unstable patient Pulse oximetry (SpO2) NE NE NE < 92% < 92% Other • Color change of patient
  • 129. • Unexplained agita- tion for > 10 min • CIWA > 15 points • UOP < 50 cc/4 hr • Color change of patient (pale, dusky, gray, or blue) • New-onset limb weak- ness or smile droop • Sepsis: ≥ 2 SIRS criteria 4 = Hillman K, et al.; 8 = Mailey J, et al.; 9 = Dacey MJ, et al.; 13 = Benson L, et al.; 15 = Bader MK, et al. cc = cubic centimeters; CIWA = Clinical Institute Withdrawal Assessment; hr = hour; LOC = level of consciousness; min = minute; mmHg = millimeters of mercury; ND = not defined; NE = not evaluated; RR = respiratory rate; SBP = systolic blood pressure; SIRS = systemic inflammatory response syndrome; SpO2= arterial oxygen saturation; UOP = urine output 50 AJN ▼ November 2010 ▼ Vol. 110, No. 11 ajnonline.com that an RRT is a valuable inter- vention to initiate. They decide to take the criteria for activating an RRT from several successful
  • 130. studies/projects and put them into a synthesis table to better see their ma jor similarities (see Table 44, 8, 9, 13, 15). From this com- bined list, they choose the criteria for initiating an RRT consult that they’ll use in their project (see Table 5). The team also be gins discussing the ideal make up for their RRT. Again, they go back to the evaluation table and look of excitement about their project, that their colleagues across all disciplines have been eager to hear the re sults of their review of the evidence. In addition, Carlos says that many re sources in their hos- pital will be available to help them get started with their project and reminds them of their hospital administrators’ commitment to support the team. ACTING ON THE EVIDENCE As they consider the synthesis of the evidence, the team agrees Re becca raises a question that’s been on her mind. She reminds them that in the “Appraisal: Worth to Practice” column, teaching was identified as an important factor in initiating an RRT and expresses concern that their hospital is not an aca demic medical center. Chen
  • 131. re minds her that even though theirs is not a designated teaching hospital with residents on staff 24 hours a day, it has a culture of teaching that should enhance the success of an RRT. She adds that she’s al ready hearing a buzz Table 5. Defined Criteria for Initiating an RRT Consult at Our Hospital Pulmonary Ventilation Color change of patient (pale, dusky, gray, or blue) Respiratory distress RR < 10 or > 30 breaths/min or unexplained dyspnea or new-onset difficulty breathing or shortness of breath Cardiovascular Tachycardia Unexplained > 130 beats/min for 15 min Bradycardia Unexplained < 50 beats/min for 15 min Blood pressure Unexplained SBP < 90 or > 200 mmHg Chest pain Complaint of nontraumatic chest pain Pulse oximetry < 92% SpO2 Perfusion UOP < 50 cc/4 hr Neurologic Seizures Initial, repeated, or prolonged
  • 132. Change in mental status • Sudden decrease in LOC with normal blood glucose • Unexplained agitation for > 10 min • New-onset limb weakness or smile droop Concern/worry about patient Nurse concern about overall deterioration in patients’ condition without any of the above criteria Sepsis • Temp, > 38°C • HR, > 90 beats/min • RR, > 20 breaths/min • WBC, > 12,000, < 4,000, or > 10% bands cc = cubic centimeters; hr = hours; HR = heart rate; LOC = level of consciousness; min = minute; mmHg = millimeters of mercury; RR = respiratory rate; SBP = systolic blood pressure; SpO2 = arterial oxygen saturation; Temp = temperature; UOP = urine output; WBC = white blood count [email protected] AJN ▼ November 2010 ▼ Vol. 110, No. 11 51 3. Winters BD, et al. Rapid response sys - tems: a systematic review. Crit Care Med 2007;35(5):1238-43. 4. Hillman K, et al. Introduction of the medical emergency team (MET)
  • 133. system: a cluster-randomised con- trolled trial. Lancet 2005;365(9477): 2091-7. 5. Sharek PJ, et al. Effect of a rapid re - sponse team on hospital-wide mortal- ity and code rates outside the ICU in a children’s hospital. JAMA 2007; 298(19):2267-74. 6. Chan PS, et al. Hospital-wide code rates and mortality before and after implementation of a rapid response team. JAMA 2008;300(21):2506-13. 7. DeVita MA, et al. Use of medical emergency team responses to reduce hospital cardiopulmonary arrests. Qual Saf Health Care 2004;13(4): 251-4. 8. Mailey J, et al. Reducing hospital standardized mortality rate with early interventions. J Trauma Nurs 2006; 13(4):178-82. 9. Dacey MJ, et al. The effect of a rapid response team on major clinical out- come measures in a community hos- pital. Crit Care Med 2007;35(9): 2076-82. 10. McFarlan SJ, Hensley S. Implementa- tion and outcomes of a rapid response team. J Nurs Care Qual 2007;22(4): 307-13.
  • 134. 11. Offner PJ, et al. Implementation of a rapid response team decreases cardiac arrest outside the intensive care unit. J Trauma 2007;62(5):1223-8. 12. Bertaut Y, et al. Implementing a rapid- response team using a nurse-to-nurse consult approach. J Vasc Nurs 2008; 26(2):37-42. 13. Benson L, et al. Using an advanced practice nursing model for a rapid re - sp onse team. Jt Comm J Qual Pa tient Saf 2008;34(12):743-7. 14. Hatler C, et al. Implementing a rapid response team to decrease emergen- cies. Medsurg Nurs 2009;18(2):84-90, 126. 15. Bader MK, et al. Rescue me: saving the vulnerable non-ICU patient popu- lation. Jt Comm J Qual Patient Saf 2009;35(4):199-205. 16. Institute for Healthcare Improvement. Establish a rapid response team. n.d. http://www.ihi.org/IHI/topics/ criticalcare/intensivecare/changes/ establisharapidresponseteam.htm. evidence that led to the project, how to call an RRT, and out- come measures that will indicate whether or not the implementation
  • 135. of the evidence was successful. They’ll also need an evaluation plan. From reviewing the studies and projects, they also re alize that it’s important to focus their plan on evidence implementation, in- cluding carefully evaluating both the process of implementation and project outcomes. Be sure to join the EBP team in the next installment of this se - ries as they develop their imple- mentation plan for initiating an RRT in their hospital, including the submission of their project proposal to the ethics review board. ▼ Ellen Fineout-Overholt is clinical pro- fessor and director of the Center for the Advancement of Evidence-Based Prac - tice at Arizona State University in Phoe - nix, where Bernadette Mazurek Melnyk is dean and distinguished foundation professor of nursing, Susan B. Stillwell is clinical associate professor and pro- gram coordinator of the Nurse Educator Evidence-Based Practice Men torship Program, and Kathleen M. Williamson is associate director of the Center for the Advancement of Evidence-Based Pra ctice. Contact author: Ellen Fineout- Overholt, [email protected] edu.
  • 136. REFERENCES 1. Chan PS, et al. (2010). Rapid re - sponse teams: a systematic review and meta- analysis. Arch Intern Med 2010;170(1):18-26. 2. McGaughey J, et al. Outreach and early warning systems (EWS) for the prevention of intensive care admission and death of critically ill adult patients on general hospital wards. Cochrane Database Syst Rev 2007;3:CD005529. over the “Major Variables Studied” column, noting that the composition of the RRT varied among the studies/projects. Some RRTs had active physician partic- ipation (n = 6), some had desig- nated phy sician consultation on an as-needed basis (n = 2), and some were nurse-led teams (n = 4). Most RRTs also had a respira- tory therapist (RT). All RRT mem- bers had expertise in intensive care and many were certified in ad vanced cardiac life support (ACLS). They agree that their team will be comprised of ACLS- certified mem bers. It will be led by an acute care nurse prac ti- tioner (ACNP) credentialed for advanced procedures, such as
  • 137. cen tral line insertion. Members will include an ICU RN and an RT who can intubate. They also discuss having physicians will- ing to be called when needed. Although no studies or projects had a chaplain on their RRT, Chen says that it would make sense in their hospital. Carlos, who’s been on staff the longest of the three, says that interdisci- plinary collaboration has been a mainstay of their organization. A physician, ACNP, ICU RN, RT, and chaplain are logical choices for their RRT. As the team ponders the evi- dence, they begin to discuss the next step, which is to develop ideas for writing their project im plementation plan (also called a protocol). Included in this pro- tocol will be an educational plan to let those involved in the proj- ect know information such as the As they consider the synthesis of the evidence, the team agrees that an RRT is a valuable intervention to initiate.
  • 138. By Ellen Fineout-Overholt, PhD, RN, FNAP, FAAN, Bernadette Mazurek Melnyk, PhD, RN, CPNP/PMHNP, FNAP, FAAN, Susan B. Stillwell, DNP, RN, CNE, and Kathleen M. Williamson, PhD, RN In May’s evidence-based prac-tice (EBP) article, Rebecca R., our hypothetical staff nurse, and Carlos A., her hospital’s ex- pert EBP mentor, learned how to search for the evidence to answer their clinical question (shown here in PICOT format): “In hos­ pitalized adults (P), how does a rapid response team (I) compared with no rapid response team (C) affect the number of cardiac ar- rests (O) and unplanned admis- sions to the ICU (O) during a three­month period (T)?” With the help of Lynne Z., the hospi- tal librarian, Rebecca and Car- los searched three databases, PubMed, the Cumulative Index of Nursing and Allied Health Literature (CINAHL), and the Cochrane Database of Systematic Reviews. They used keywords from their clinical question, in- cluding ICU, rapid response team, cardiac arrest, and un- planned ICU admissions, as well as the following synonyms:
  • 139. failure to rescue, never events, medical emergency teams, rapid response systems, and code blue. Whenever terms from a database’s own indexing lan- guage, or controlled vocabulary, matched the keywords or syn- onyms, those terms were also searched. At the end of the data- base searches, Rebecca and Car- los chose to retain 18 of the 18 studies found in PubMed; six of the 79 studies found in CINAHL; and the one study found in the Cochrane Database of System- atic Reviews, because they best answered the clinical question. As a final step, at Lynne’s rec- ommendation, Rebecca and Car- los conducted a hand search of the reference lists of each study they retained looking for any rele- vant studies they hadn’t found in their original search; this process is also called the ancestry method. The hand search yielded one ad- ditional study, for a total of 26. RAPID CRITICAL APPRAISAL The next time Rebecca and Car- los meet, they discuss the next step in the EBP process—critically appraising the 26 studies. They obtain copies of the studies by
  • 140. printing those that are immedi- ately available as full text through library subscription or those flagged as “free full text” by a database or journal’s Web site. Others are available through in- terlibrary loan, when another hos pital library shares its articles with Rebecca and Carlos’s hospi- tal library. Carlos explains to Rebecca that the purpose of critical appraisal isn’t solely to find the flaws in a study, but to determine its worth to practice. In this rapid critical appraisal (RCA), they will review each study to determine • its level of evidence. • how well it was conducted. • how useful it is to practice. Once they determine which studies are “keepers,” Rebecca and Carlos will move on to the final steps of critical appraisal: evaluation and synthesis (to be discussed in the next two install- ments of the series). These final steps will determine whether overall findings from the evi- dence review can help clinicians improve patient outcomes. Rebecca is a bit apprehensive
  • 141. because it’s been a few years since she took a research class. She [email protected] AJN ▼ July 2010 ▼ Vol. 110, No. 7 47 Critical Appraisal of the Evidence: Part I An introduction to gathering, evaluating, and recording the evidence. This is the fifth article in a series from the Arizona State University College of Nursing and Health Innovation’s Center for the Advancement of Evidence - Based Practice. Evidence- based practice (EBP) is a problem-solving approach to the delivery of health care that integrates the best evidence from studies and patient care data with clinician expertise and patient preferences and values. When delivered in a context of caring and in a supportive organizational culture, the highest quality of care and best patient outcomes can be achieved. The purpose of this series is to give nurses the knowledge and skills they need to implement EBP consistently, one step at a time. Articles will appear every two months to allow you time to incorporate information as you work toward implementing EBP at your institution. Also, we’ve scheduled “Chat with the Authors” calls every few months to provide a direct line to the experts to help you resolve questions. Details about how to participate in the next call will be pub- lished with September’s Evidence-Based Practice, Step by Step. and the Boston University Medi- cal Center Alumni Medical Li- brary [http://medlib.bu.edu/ bugms/content.cfm/content/
  • 142. ebmglossary.cfm#R].) Determining the level of evi- dence. The team begins to divide the 26 studies into categories ac- cording to study design. To help in this, Carlos provides a list of several different study designs (see Hierarchy of Evidence for Intervention Studies). Rebecca, Carlos, and Chen work together to determine each study’s design by reviewing its abstract. They also create an “I don’t know” pile of studies that don’t appear to fit a specific design. When they find studies that don’t actively answer the clinical question but new EBP team, Carlos provides Rebecca and Chen with a glossary of terms so they can learn basic research terminology, such as sam- ple, independent variable, and de- pendent variable. The glossary also defines some of the study de- signs the team is likely to come across in doing their RCA, such as systematic review, randomized controlled trial, and cohort, qual- itative, and descriptive studies. (For the definitions of these terms and others, see the glossaries pro- vided by the Center for the Ad- vancement of Evidence-Based Practice at the Arizona State Uni-
  • 143. versity College of Nursing and Health Innovation [http://nursing andhealth.asu.edu/evidence-based- practice/resources/glossary.htm] 48 AJN ▼ July 2010 ▼ Vol. 110, No. 7 ajnonline.com shares her anxiety with Chen M., a fellow staff nurse, who says she never studied research in school but would like to learn; she asks if she can join Carlos and Rebecca’s EBP team. Chen’s spirit of inquiry encourages Re- becca, and they talk about the opportunity to learn that this project affords them. Together they speak with the nurse man- ager on their medical–surgical unit, who agrees to let them use their allotted continuing educa- tion time to work on this project, after they discuss their expecta- tions for the project and how its outcome may benefit the patients, the unit staff, and the hospital. Learning research terminol- ogy. At the first meeting of the Hierarchy of Evidence for Intervention Studies Type of evidence Level of evidence Description Systematic review or meta-analysis
  • 144. I A synthesis of evidence from all relevant random ized controlled trials. Randomized con- trolled trial II An experiment in which subjects are randomized to a treatment group or control group. Controlled trial with- out randomization III An experiment in which subjects are nonrandomly assigned to a treatment group or control group. Case-control or cohort study IV Case-control study: a comparison of subjects with a condition (case) with those who don’t have the condition (control) to determine characteristics that might predict the condition. Cohort study: an observation of a group(s) (cohort[s]) to determine the development of an outcome(s) such as a disease. Systematic review of qualitative or descrip- tive studies V A synthesis of evidence from qualitative or descrip tive studies to
  • 145. answer a clinical question. Qualitative or de- scriptive study VI Qualitative study: gathers data on human behavior to understand why and how decisions are made. Descriptive study: provides background informa tion on the what, where, and when of a topic of interest. Expert opinion or consensus VII Authoritative opinion of expert committee. Adapted with permission from Melnyk BM, Fineout-Overholt E, editors. Evidence-based practice in nursing and healthcare: a guide to best practice [forthcoming]. 2nd ed. Philadelphia: Wolters Kluwer Health/Lippincott Williams and Wilkins. http://medlib.bu.edu/bugms/content.cfm/content/ebmglossary.cf m#R http://medlib.bu.edu/bugms/content.cfm/content/ebmglossary.cf m#R http://medlib.bu.edu/bugms/content.cfm/content/ebmglossary.cf m#R http://nursingandhealth.asu.edu/evidence-based- practice/resources/glossary.htm http://nursingandhealth.asu.edu/evidence-based- practice/resources/glossary.htm http://nursingandhealth.asu.edu/evidence-based- practice/resources/glossary.htm
  • 146. [email protected] AJN ▼ July 2010 ▼ Vol. 110, No. 7 49 may inform thinking, such as descriptive research, expert opin- ions, or guidelines, they put them aside. Carlos explains that they’ll be used later to support Rebecca’s case for having a rapid response team (RRT) in her hospital, sh- ould the evidence point in that direction. After the studies—including those in the “I don’t know” group—are categorized, 15 of the original 26 remain and will be included in the RCA: three systematic reviews that include one meta-analysis (Level I evi- dence), one randomized con- trolled trial (Level II evidence), two cohort studies (Level IV evi- dence), one retrospective pre- post study with historic controls (Level VI evidence), four preex- perimental (pre-post) interven- tion studies (no control group) (Level VI evidence), and four EBP implementation projects (Level VI evidence). Carlos reminds Rebecca and Chen that Level I evidence—a systematic review of randomized controlled trials or a meta-analysis—is the most
  • 147. reliable and the best evidence to answer their clinical question. Using a critical appraisal guide. Carlos recommends that the team use a critical appraisal checklist (see Critical Appraisal Guide for Quantitative Studies) to help evaluate the 15 studies. This checklist is relevant to all studies and contains questions about the essential elements of research (such as, pur pose of the study, sample size, and major variables). The questions in the critical ap- praisal guide seem a little strange to Rebecca and Chen. As they re- view the guide together, Carlos explains and clarifies each ques- tion. He suggests that as they try to figure out which are the essen- tial elements of the studies, they focus on answering the first three questions: Why was the study done? What is the sample size? Are the instruments of the major variables valid and reliable? The remaining questions will be ad- dressed later on in the critical appraisal process (to appear in future installments of this series). Creating a study evaluation
  • 148. table. Carlos provides an online template for a table where Re- becca and Chen can put all the data they’ll need for the RCA. Here they’ll record each study’s essential elements that answer the three questions and begin to ap- praise the 15 studies. (To use this template to create your own eval- uation table, download the Eval- uation Table Template at http:// links.lww.com/AJN/A10.) EXTRACTING THE DATA Starting with level I evidence studies and moving down the hierarchy list, the EBP team takes each study and, one by one, finds and enters its essential elements into the first five columns of the evaluation table (see Table 1; to see the entire table with all 15 studies, go to http://links. lww.com/AJN/A11). The team discusses each element as they enter it, and tries to determine if it meets the criteria of the critical Critical Appraisal Guide for Quantitative Studies 1. Why was the study done? • Was there a clear explanation of the purpose of the study and, if so, what was it? 2. What is the sample size? • Were there enough people in the study to establish that the findings did not occur by chance? 3. Are the instruments of the major variables valid and
  • 149. reliable? • How were variables defined? Were the instruments designed to measure a concept valid (did they measure what the researchers said they measured)? Were they reliable (did they measure a concept the same way every time they were used)? 4. How were the data analyzed? • What statistics were used to determine if the purpose of the study was achieved? 5. Were there any untoward events during the study? • Did people leave the study and, if so, was there something special about them? 6. How do the results fit with previous research in the area? • Did the researchers base their work on a thorough literature review? 7. What does this research mean for clinical practice? • Is the study purpose an important clinical issue? Adapted with permission from Melnyk BM, Fineout-Overholt E, editors. Evidence-based practice in nursing and healthcare: a guide to best practice [forthcoming]. 2nd ed. Philadelphia: Wolters Kluwer Health/Lippincott Williams and Wilkins. http://links.lww.com/AJN/A10 http://links.lww.com/AJN/A10 http://links.lww.com/AJN/A11 http://links.lww.com/AJN/A11 50 AJN ▼ July 2010 ▼ Vol. 110, No. 7 ajnonline.com Ta bl
  • 161. iti on : N R IV : R RT D V1 : H M R D V2 : C R M cG au gh ey J, e
  • 172. D V2 : C R H ill m an K , e t a l. La nc et 2 00 5; 36 5( 94 77 ):
  • 192. si on s. [email protected] AJN ▼ July 2010 ▼ Vol. 110, No. 7 51 suggests they leave the column in. He says they can further discuss this point later on in the process when they synthesize the studies’ findings. As Rebecca and Chen review each study, they enter its citation in a separate reference list so that they won’t have to create this list at the end of the pro cess. The reference list will be shared with colleagues and placed at the end of any RRT policy that re- sults from this endeavor. Carlos spends much of his time answering Rebecca’s and Chen’s questions concerning how to phrase the information they’re entering in the table. He suggests that they keep it simple and con- sistent. For example, if a study indicated that it was implement- ing an RRT and hoped to see a change in a certain outcome, the nurses could enter “change in
  • 193. [the outcome] after RRT” as the purpose of the study. For studies examining the effect of an RRT on an outcome, they could say as the purpose, “effect of RRT on [the outcome].” Using the same words to describe the same pur- pose, even though it may not have been stated exactly that way in the study, can help when they compare studies later on. Rebecca and Chen find it frus- trating that the study data are not always presented in the same way from study to study. They ask Carlos why the authors or journals wouldn’t present similar information in a similar manner. Carlos explains that the purpose of publishing these studies may have been to disseminate the find ings, not to compare them with other like studies. Rebecca realizes that she enjoys this kind of conversation, in which she and Chen have a voice and can contribute to a deeper under- standing of how research impacts practice. As Rebecca and Chen con- tinue to enter data into the table, they begin to see similarities and differences across studies. They
  • 194. mention this to Carlos, who tells them they’ve begun the process of synthesis! Both nurses are en- couraged by the fact that they’re learning this new skill. The MERIT trial is next in the stack of studies and it’s a good trial to use to illustrate this phase of the RCA process. Set in Aus- tralia, the MERIT trial1 examined whether the introduction of an RRT (called a medical emergency team or MET in the study) would reduce the incidence of cardiac arrest, unplanned admissions to the ICU, and death in the hospi- tals studied. See Table 1 to follow along as the EBP team finds and enters the trial data into the table. Design/Method. After Rebecca and Chen enter the citation infor- mation and note the lack of a con- ceptual framework, they’re ready to fill in the “Design/Method” column. First they enter RCT for randomized controlled trial, which they find in both the study title and introduction. But MERIT is called a “cluster- randomised controlled trial,” and cluster is a term they haven’t seen before. Carlos explains that it means that hospitals, not individuals or pa- tients, were randomly assigned to
  • 195. the RRT. He says that the likely reason the researchers chose to randomly assign hospitals is that if they had randomly assigned individual patients or units, oth- ers in the hospital might have heard about the RRT and poten- tially influenced the outcome. appraisal guide. These elements— such as purpose of the study, sam- ple size, and major variables—are typical parts of a research report and should be presented in a pre- dictable fashion in every study so that the reader understands what’s being reported. As the EBP team continues to review the studies and fill in the evaluation table, they realize that it’s taking about 10 to 15 minutes per study to locate and enter the information. This may be because when they look for a description of the sample, for example, it’s important that they note how the sample was obtained, how many patients are included, other char- acteristics of the sample, as well as any diagnoses or illnesses the sample might have that could be important to the study outcome. They discuss with Carlos the like- lihood that they’ll need a few ses- sions to enter all the data into the
  • 196. table. Carlos responds that the more studies they do, the less time it will take. He also says that it takes less time to find the information when study reports are clearly written. He adds that usually the important informa- tion can be found in the abstract. Rebecca and Chen ask if it would be all right to take out the “Conceptual Framework” column, since none of the stud- ies they’re reviewing have con- ceptual frameworks (which help guide researchers as to how a study should proceed). Carlos replies that it’s helpful to know that a study has no framework underpinning the research and Usually the important information in a study can be found in the abstract. 52 AJN ▼ July 2010 ▼ Vol. 110, No. 7 ajnonline.com To randomly assign hospitals (instead of units or patients) to the intervention and comparison groups is a cleaner research de- sign. To keep the study purposes
  • 197. con sistent among the studies in the RCA, the EBP team uses inclu- sive terminology they developed after they noticed that different trials had different ways of de- scribing the same objectives. Now they write that the purpose of the MERIT trial is to see if an RRT can reduce CR, for cardiopulmo- nary arrest or code rates, HMR, for hospital-wide mortality rates, and UICUA for unplanned ICU admissions. They use those same terms consistently throughout the evaluation table. Sample/Setting. A total of 23 hospitals in Australia with an average of 340 beds per hospi- tal is the study sample. Twelve hospitals had an RRT (the inter- vention group) and 11 hospitals didn’t (the control group). Major Variables Studied. The independent variable is the vari- able that influences the outcome (in this trial, it’s an RRT for six months). The dependent vari- able is the outcome (in this case, HMR, CR, and UICUA). In this trial, the outcomes didn’t include do-not-resuscitate data. The RRT was made up of an attending phy- sician and an ICU or ED nurse.
  • 198. While the MERIT trial seems to perfectly answer Rebecca’s PICOT question, it contains ele- ments that aren’t entirely relevant, such as the fact that the research- ers collected information on how the RRTs were activated and pro- vided their protocol for calling the RRTs. However, these elements might be helpful to the EBP team later on when they make decisions about implementing an RRT in their hospital. So that they can come back to this information, they place it in the last column, “Appraisal: Worth to Practice.” After reviewing the studies to make sure they’ve captured the essential elements in the evalua- tion table, Rebecca and Chen still feel unsure about whether the in- formation is complete. Carlos reminds them that a system-wide practice change—such as the change Rebecca is exploring, that of implementing an RRT in her hospital—requires careful consid- eration of the evidence and this is only the first step. He cautions them not to worry too much about perfection and to put their efforts into understanding the information in the studies. He re-
  • 199. minds them that as they move on to the next steps in the critical appraisal process, and learn even more about the studies and proj- ects, they can refine any data in the table. Rebecca and Chen feel uncomfortable with this uncer- tainty but decide to trust the pro- cess. They continue extracting data and entering it into the table even though they may not com- pletely understand what they’re entering at present. They both realize that this will be a learn- ing opportunity and, though the le arning curve may be steep at times, they value the outcome of improving patient care enough to continue the work—as long as Carlos is there to help. In applying these principles for evaluating research studies to your own search for the evi- dence to answer your PICOT question, remember that this se- ries can’t contain all the available infor mation about research meth- od ology. Fortunately, there are many good resources available in books and online. For example, to find out more about sample size, which can affect the likeli- hood that researchers’ results oc- cur by chance (a random finding)
  • 200. rather than that the intervention brought about the expected out- come, search the Web using terms that describe what you want to know. If you type sample size findings by chance in a search en- gine, you’ll find several Web sites that can help you better under- stand this study essential. Be sure to join the EBP team in the next installment of the se- ries, “Critical Appraisal of the Evi dence: Part II,” when Rebecca and Chen will use the MERIT trial to illustrate the next steps in the RCA process, complete the rest of the evaluation table, and dig a little deeper into the studies in order to detect the “keepers.” ▼ Ellen Fineout-Overholt is clinical profes- sor and director of the Center for the Advancement of Evidence-Based Practice at Arizona State University in Phoenix, where Bernadette Mazurek Melnyk is dean and distinguished foundation professor of nursing, Susan B. Stillwell is clinical associate professor and pro- gram coordinator of the Nurse Educator Evidence-Based Practice Mentorship Program, and Kathleen M. Williamson is associate director of the Center for the Advancement of Evidence-Based Practice. Contact author: Ellen Fineout-Overholt,
  • 201. ellen.fineout-[email protected] REFERENCE 1. Hillman K, et al. Introduction of the medical emergency team (MET) system: a cluster-randomised con- trolled trial. Lancet 2005;365(9477): 2091-7. Keep the data in the table consistent by using simple, inclusive terminology. By Ellen Fineout-Overholt, PhD, RN, FNAP, FAAN, Bernadette Mazurek Melnyk, PhD, RN, CPNP/PMHNP, FNAP, FAAN, Susan B. Stillwell, DNP, RN, CNE, and Kathleen M. Williamson, PhD, RN In July’s evidence-based prac-tice (EBP) article, Rebecca R., our hypothetical staff nurse, Carlos A., her hospital’s expert EBP mentor, and Chen M., Rebecca’s nurse colleague, col lected the evidence to an- swer their clinical question: “In hospitalized adults (P), how does a rapid response team (I) compared with no rapid response team (C) affect the
  • 202. number of cardiac arrests (O) and unplanned admissions to the ICU (O) during a three- month period (T)?” As part of their rapid critical appraisal (RCA) of the 15 potential “keeper” studies, the EBP team found and placed the essential elements of each study (such as its population, study design, and setting) into an evaluation table. In so doing, they began to see similarities and differ- ences between the studies, which Carlos told them is the beginning of synthesis. We now join the team as they continue with their RCA of these studies to determine their worth to practice. RAPID CRITICAL APPRAISAL Carlos explains that typically an RCA is conducted along with an RCA checklist that’s specific to the research design of the study being evaluated—and before any data are entered into an evalua- tion table. However, since Rebecca and Chen are new to appraising studies, he felt it would be easier for them to first enter the essen- tials into the table and then eval- uate each study. Carlos shows Rebecca several RCA checklists and explains that all checklists
  • 203. have three major questions in common, each of which contains other more specific subquestions about what constitutes a well- conducted study for the research design under review (see Example of a Rapid Critical Appraisal Checklist). Although the EBP team will be looking at how well the re - searchers conducted their studies and discussing what makes a “good” research study, Carlos reminds them that the goal of critical appraisal is to determine the worth of a study to practice, not solely to find flaws. He also suggests that they consult their glossary when they see an unfa- miliar word. For example, the term randomization, or random assignment, is a relevant feature of research methodology for in- tervention studies that may be unfamiliar. Using the glossary, he explains that random assignment and random sampling are often confused with one another, but that they’re very different. When researchers select subjects from within a certain population to participate in a study by using a random strategy, such as tossing a coin, this is random sampling.
  • 204. It allows the entire population to be fairly represented. But because it requires access to a particular population, random sampling is not always feasible. Carlos adds that many health care studies are based on a con- venience sample—participants recruited from a readily available population, such as a researcher’s affiliated hospital, which may or may not represent the desired population. Random assignment, on the other hand, is the use of a random strategy to assign study [email protected] AJN ▼ September 2010 ▼ Vol. 110, No. 9 41 Critical Appraisal of the Evidence: Part II Digging deeper—examining the “keeper” studies. This is the sixth article in a series from the Arizona State University College of Nursing and Health Innovation’s Center for the Advancement of Evidence-Based Practice. Evidence- based practice (EBP) is a problem-solving approach to the delivery of health care that integrates the best evidence from studies and patient care data with clinician expertise and patient preferences and values. When delivered in a context of caring and in a supportive organizational culture, the highest quality of care and best patient outcomes can be achieved. The purpose of this series is to give nurses the knowledge and skills they need to implement EBP consistently, one step at a time. Articles will appear every two months to allow you time to incorporate information as you work toward
  • 205. implementing EBP at your institution. Also, we’ve scheduled “Chat with the Authors” calls every few months to provide a direct line to the experts to help you resolve questions. Details about how to participate in the next call will be pub- lished with November’s Evidence-Based Practice, Step by Step. are the same as three of their po tential “keeper” studies. They wonder whether they should keep those studies in the pile, or if, as duplicates, they’re unnecessary. Carlos says that because the meta- analysis only included studies with control groups, it’s impor- tant to keep these three studies so that they can be compared with other studies in the pile that don’t have control groups. Rebecca notes that more than half of their 15 studies don’t have control or comparison groups. They agree as a team to include all 15 stud- ies at all levels of evidence and go on to appraise the two remaining systematic reviews. The MERIT trial1 is next in the EBP team’s stack of studies. with him, Rebecca and Chen find the checklist for systematic reviews. As they start to rapidly criti-
  • 206. cally appraise the meta-analysis, they discuss that it seems to be biased since the authors included only studies with a control group. Carlos explains that while hav- ing a control group in a study is ideal, in the real world most stud- ies are lower-level evidence and don’t have control or compari- son groups. He emphasizes that, in eliminating lower-level studies, the meta-analysis lacks evidence that may be informative to the question. Rebecca and Chen— who are clearly growing in their appraisal skills—also realize that three studies in the meta-analysis 42 AJN ▼ September 2010 ▼ Vol. 110, No. 9 ajnonline.com participants to the intervention or control group. Random as- signment is an important feature of higher-level studies in the hier- archy of evidence. Carlos also reminds the team that it’s important to begin the RCA with the studies at the high- est level of evidence in order to see the most reliable evidence first. In their pile of studies, these are the three systematic reviews, includ- ing the meta-analysis and the Cochrane review, they retrieved from their database search (see
  • 207. “Searching for the Evidence,” and “Critical Appraisal of the Evidence: Part I,” Evidence- Based Practice, Step by Step, May and July). Among the RCA checklists Carlos has brought Example of a Rapid Critical Appraisal Checklist Rapid Critical Appraisal of Systematic Reviews of Clinical Interventions or Treatments 1. Are the results of the review valid? A. Are the studies in the review randomized controlled trials? Yes No B. Does the review include a detailed description of the search strategy used to find the relevant studies? Yes No C. Does the review describe how the validity of the individual studies was assessed (such as, methodological quality, including the use of random assignment to study groups and complete follow-up of subjects)? Yes No D. Are the results consistent across studies? Yes No E. Did the analysis use individual patient data or aggregate data? Patient Aggregate 2. What are the results? A. How large is the intervention or treatment effect (odds ratio, relative risk, effect size, level of significance)? B. How precise is the intervention or treatment (confidence interval)?
  • 208. 3. Will the results assist me in caring for my patients? A. Are my patients similar to those in the review? Yes No B. Is it feasible to implement the findings in my practice setting? Yes No C. Were all clinically important outcomes considered, including both risks and benefits of the treatment? Yes No D. What is my clinical assessment of the patient, and are there any contraindications or circumstances that would keep me from implementing the treatment? Yes No E. What are my patients’ and their families’ preferences and values concerning the treatment? Yes No © Fineout-Overholt and Melnyk, 2005. [email protected] AJN ▼ September 2010 ▼ Vol. 110, No. 9 43 As we noted in the last install- ment of this series, MERIT is a good study to use to illustrate the different steps of the critical appraisal process. (Readers may want to retrieve the article, if possible, and follow along with the RCA.) Set in Australia, the MERIT trial examined whether the introduction of a rapid re - sponse team (RRT; called a med- ical emergency team or MET in the study) would reduce the
  • 209. incidence of cardiac arrest, death, and unplanned admissions to the ICU in the hospitals studied. To follow along as the EBP team addresses each of the essential elements of a well-conducted randomized controlled trial (RCT) and how they apply to the MERIT study, see their notes in Rapid Critical Appraisal of the MERIT Study. ARE THE RESULTS OF THE STUDY VALID? The first section of every RCA checklist addresses the validity of the study at hand—did the researchers use sound scientific methods to obtain their study results? Rebecca asks why valid- ity is so important. Carlos replies that if the study’s conclusion can be trusted—that is, relied upon to inform practice—the study must be conducted in a way that reduces bias or eliminates con- founding variables (factors that influence how the intervention affects the outcome). Researchers typically use rigorous research methods to reduce the risk of bias. The purpose of the RCA checklist is to help the user deter- mine whether or not rigorous methods have been used in the study under review, with most questions offering the option of
  • 210. a quick answer of “yes,” “no,” or “unknown.” Were the subjects randomly assigned to the intervention and control groups? Carlos explains that this is an important question when appraising RCTs. If a study calls itself an RCT but didn’t randomly assign participants, then bias could be present. In appraising the MERIT study, the team discusses how the research- ers randomly assigned entire hospitals, not individual patients, to the RRT intervention and control groups using a technique called cluster randomization. To better understand this method, the EBP team looks it up on the Internet and finds a PowerPoint presentation by a World Health Organization researcher that explains it in simplified terms: “Cluster randomized trials are experiments in which social units or clusters [in our case, hospitals] rather than individuals are ran- domly allocated to intervention groups.”2 Was random assignment concealed from the individuals enrolling the subjects? Conceal- ment helps researchers reduce