Assignment: WK 9 Assessing a Healthcare Program/Policy Evaluation
Program/policy evaluation is a valuable tool that can help
strengthen the quality of programs/policies and improve
outcomes for the populations they serve. Program/policy
evaluation answers basic questions about program/policy
effectiveness. It involves collecting and analyzing information
about program/policy activities, characteristics, and outcomes.
This information can be used to ultimately improve program
services or policy initiatives.
Nurses can play a very important role in assessing program/policy
evaluations for the same reasons that they can be so important to
program/policy design. Nurses bring expertise and patient
advocacy that can add significant insight and impact. In this
Assignment, you will practice applying this expertise and
insight by selecting an existing healthcare program or policy
evaluation and reflecting on the criteria used to measure the
effectiveness of the program/policy.
To Prepare:
· Review the Healthcare Program/Policy Evaluation Analysis
Template provided in the Resources.
· Select an existing healthcare program or policy evaluation or
choose one of interest to you.
· Review community, state, or federal policy evaluation and
reflect on the criteria used to measure the effectiveness of the
program or policy described.
The Assignment: (2–3 pages)
Based on the program or policy evaluation you selected,
complete the Healthcare Program/Policy Evaluation Analysis
Template. Be sure to address the following:
· Describe the healthcare program or policy outcomes.
· How was the success of the program or policy measured?
· How many people were reached by the program or policy
selected?
· How much of an impact was realized with the program or
policy selected?
· At what point in program implementation was the program or
policy evaluation conducted?
· What data was used to conduct the program or policy
evaluation?
· What specific information on unintended consequences was
identified?
· What stakeholders were identified in the evaluation of the
program or policy? Who would benefit most from the results
and reporting of the program or policy evaluation? Be specific
and provide examples.
· Did the program or policy meet the original intent and
objectives? Why or why not?
· Would you recommend implementing this program or policy in
your place of work? Why or why not?
· Identify at least two ways that you, as a nurse advocate, could
become involved in evaluating a program or policy after 1 year
of implementation.
By Day 7 of Week 10
Submit your completed healthcare program/policy evaluation
analysis.
Milstead, J. A., & Short, N. M. (2019).
Health policy and politics: A nurse's guide (6th ed.).
Jones & Bartlett Learning.
· Chapter 7, “Health Policy and Social Program Evaluation”
(pp. 116–124 only)
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5409875/
https://www.sciencedirect.com/science/article/pii/S0029655418300617
Why Don't We See More Translation of Health Promotion Research to Practice? Rethinking the Efficacy-to-Effectiveness Transition

Russell E. Glasgow, PhD, Edward Lichtenstein, PhD, and Alfred C. Marcus, PhD
The gap between research and practice is well documented. We address one of the underlying reasons for this gap: the assumption that effectiveness research naturally and logically follows from successful efficacy research. These 2 research traditions have evolved different methods and values; consequently, there are inherent differences between the characteristics of a successful efficacy intervention versus those of an effectiveness one. Moderating factors that limit robustness across settings, populations, and intervention staff need to be addressed in efficacy studies, as well as in effectiveness trials. Greater attention needs to be paid to documenting intervention reach, adoption, implementation, and maintenance. Recommendations are offered to help close the gap between efficacy and effectiveness research and to guide evaluation and possible adoption of new programs. (Am J Public Health. 2003;93:1261-1267)
Despite a growing literature documenting prevention and health promotion interventions that have proven successful in well-controlled research, few of these interventions are consistently implemented in applied settings. This is true across preventive counseling services for numerous target behaviors, including tobacco use, dietary change, physical activity, and behavioral health issues (e.g., alcohol use, depression). Several recent reviews and meta-analyses have documented this gap, and the task forces on both clinical preventive services and community preventive services have noted that in several areas there is insufficient applied evidence available to make recommendations at present. Most of the Healthy People 2000 objectives were not met, and the even more ambitious goals in Healthy People 2010 are similarly unlikely to be met without significant changes in the status quo. To meet these challenges, we will need to have substantially more demonstrations of how to effectively implement recommendations in typical settings and in locations serving minority, low-income, and rural populations facing health disparities.
This situation is not unique to preventive interventions, as strikingly documented in the recent Institute of Medicine report Crossing the Quality Chasm, which summarizes the similar state of affairs regarding many medical and disease management interventions. For example, there is increasing consensus on evidence-based diabetes management practices to prevent complications and on the importance and cost-effectiveness of these practices. However, these recommendations—and especially those related to lifestyle counseling and behavioral issues—are poorly implemented in practice.

This gap between research and practice is the result of several interacting factors, including limited time and resources of practitioners, insufficient training, lack of feedback and incentives for use of evidence-based practices, and inadequate infrastructure and systems organization to support translation. In this article, we focus on another reason for the slow and incomplete translation of research findings into practice: the logic and assumptions behind the design of efficacy and effectiveness research trials.
EFFICACY AND EFFECTIVENESS
TRIALS
Many of the methods used in current prevention science are based on 2 influential papers published in the 1980s: Greenwald and Cullen's description of the phases of cancer control research and Flay's analysis of efficacy and effectiveness research. Both papers argued for a logical progression of research designs through which promising intervention ideas should proceed. These papers had many positive effects in helping to establish prevention research and enhancing acceptability among other disciplines. However, they may also have had an important and inadvertent negative consequence that derives from the assumption that the best candidates for effectiveness studies—and later dissemination—are interventions that prove successful in certain types of efficacy research. We argue that this assumption, or at least the way in which it has been operationalized over the past 15 years, has often led to interventions that have low probability of success in real-world settings.

To understand this point, it is necessary first to briefly review the seminal papers by Flay and Greenwald and Cullen. Efficacy trials are defined by Flay as a test of whether a "program does more good than harm when delivered under optimum conditions." Efficacy trials are characterized by strong control in that a standardized program is delivered in a uniform fashion to a specific, often narrowly defined, homogeneous target audience. Owing to the strict standardization of efficacy trials, any positive (or negative) effect can be directly attributed to the intervention being studied.

Effectiveness trials are defined as a test of whether a "program does more good than harm when delivered under real-world conditions." They typically standardize availability and access among a defined population while allowing implementation and levels of participation to vary on the basis of real-world conditions. The primary goal of an effectiveness trial is to determine whether an intervention works among a broadly defined population. Effectiveness trials that result in no change may be the result of a lack of proper implementation or weak acceptance or adherence by participants.

Greenwald and Cullen proposed 5 phases of intervention research presumed to unfold in
a sequential fashion. This continuum begins with Phase I research to formulate and develop intervention hypotheses for future study. Phase II studies develop methodologies that can be used in future efficacy or effectiveness studies. Phase III (efficacy) studies test intervention hypotheses, using methods that have been tested in Phase II. Thus, Phase III studies are designed to test interventions for efficacy, with an emphasis on internal validity, the purpose of which is to establish a causal link between the intervention and outcomes. Given this emphasis on internal control, Greenwald and Cullen note that Phase III studies can be conducted in settings and with samples that will "optimize interpretation of efficacy," including study samples that may be more homogeneous than the ultimate target population, and settings that will maximize management of and control over the research process.

The main objective of Phase IV (effectiveness) studies is to measure the impact of an intervention when it is tested within a population that is representative of the intended target audience. Given that Phase IV studies should yield results that are generalizable, there is also the presumption that the context and setting for delivering the intervention should likewise be generalizable to the intended program users. In Phase V studies, effective Phase IV interventions are translated into large-scale demonstration projects. The major concern is implementation fidelity of an intervention that will now be introduced within even broader populations, including entire communities. This final phase (dissemination research), where collaboration and coordination with various community partners is likely to receive even greater attention, is intended to provide the necessary data and experience to move interventions into public health service programs at the national, regional, state, and local levels.
Greenwald and Cullen specifically advocated that intervention research unfold in a systematic fashion, building on and extending the body of science accumulated in previous phases. By explicitly defining the difference between Phase III and Phase IV research as being an emphasis on internal control versus representativeness, both Flay and Greenwald and Cullen assumed that successful Phase III trials would lead naturally to Phase IV trials. Unfortunately, this has not occurred. Instead, we currently find ourselves in a situation in which we have many small-scale efficacy studies of unknown generalizability and few successful effectiveness trials. In particular, we know very little about the representativeness of participants, settings, or intervention agents participating in health promotion research.

Although the National Cancer Institute no longer emphasizes this linear "phases of research" model, the model was extremely influential in guiding an entire generation of research; many researchers, reviewers, and editors still use this framework when designing, funding, and evaluating research—and in deciding what types of studies are needed to advance a given area. Similar phase models are influential in evaluating prevention effectiveness and in developing drug therapies.

In the remainder of this article, we discuss how this well-intentioned and logical phase of research paradigm may have fallen short of its intended goal, and propose approaches to remedy the present situation.
Our primary thesis is that this "trickle-down" model of how to translate research into practice—namely, that the optimal way to develop disseminable interventions is to progress from efficacy studies to effectiveness trials to dissemination projects—is inherently flawed, or at least incomplete. We posit that given the respective cultures, values, and methodological traditions that have developed within efficacy versus population-based effectiveness research, it is highly unlikely that interventions that are successful in efficacy studies will do well in effectiveness studies, or in real-world applications.

Table 1 summarizes the key characteristics of well-designed efficacy and effectiveness trials, using the RE-AIM evaluation framework. This model for evaluating interventions is intended to refocus priorities on public health issues, and it gives balanced emphasis to internal and external validity (see http://www.re-aim.org). RE-AIM is an acronym for Reach, Efficacy or Effectiveness (depending on the stage of research), Adoption, Implementation, and Maintenance.

Reach refers to the participation rate among those approached and the representativeness of participants. Factors determining reach are the size and characteristics of the potential audience and the barriers to participation (e.g., cost, social and environmental context, necessary referrals, transportation, and inconvenience). Efficacy or effectiveness pertains to the impact of an intervention on specified outcome criteria and includes measures of potential negative outcomes as well as intended results (as recommended by Flay, but seldom collected) (D.A. Dzewaltowski et al., unpublished data, 2002). Adoption operates at the setting level and concerns the percentage and representativeness of organizations or settings that will conduct a given program. Rogers has written extensively on adoption and dissemination issues.
TABLE 1—Distinctive Characteristics of Efficacy and Effectiveness Intervention Studies, Using RE-AIM Dimensions for Program Evaluation

Reach
  Efficacy studies: Homogeneous, highly motivated sample; exclude those with complications, other comorbid problems.
  Effectiveness studies: Broad, heterogeneous, representative sample; often use a defined population.

Efficacy or effectiveness
  Efficacy studies: Intensive, specialized interventions that attempt to maximize effect size; very standardized; randomized designs.
  Effectiveness studies: Brief, feasible interventions not requiring great expertise; adaptable to setting; randomized, time series, or quasi-experimental designs.

Adoption
  Efficacy studies: Usually 1 setting to reduce variability; settings with many resources and expert staff.
  Effectiveness studies: Appeal to and work in multiple settings; able to be adapted to fit setting.

Implementation
  Efficacy studies: Implemented by research staff closely following specific protocol.
  Effectiveness studies: Implemented by variety of different staff with competing demands, using adapted protocol.

Maintenance and cost
  Efficacy studies: Few or no issues; focus on individual level.
  Effectiveness studies: Major issues; setting-level maintenance is as important as individual-level maintenance.
Factors associated with adoption include political and cultural fit, cost, level of resources and expertise required, and how similar a proposed service is to current practices of an organization. Implementation refers to intervention integrity, or the quality and consistency of delivery. Finally, maintenance operates at both the individual and the setting or organizational level. At the individual level, maintenance refers to how well behavior changes hold up in the long term. At the setting level, it refers to the extent to which a treatment or practice becomes institutionalized in an organization.
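To make these rate-based definitions concrete, here is a minimal sketch (in Python, with made-up counts; the variable and function names are illustrative and not part of RE-AIM itself) of how reach and adoption reduce to simple participation rates:

def participation_rate(participating: int, approached: int) -> float:
    # Share of those approached who actually take part.
    return participating / approached if approached else 0.0

# Reach: individuals who enroll out of all individuals approached (hypothetical counts).
reach = participation_rate(participating=240, approached=1_000)   # 0.24

# Adoption: settings (clinics, worksites, schools) that agree to deliver the
# program, out of all settings approached (hypothetical counts).
adoption = participation_rate(participating=6, approached=20)     # 0.30

print(f"Reach: {reach:.0%}, Adoption: {adoption:.0%}")

Representativeness, the second half of each definition above, is not captured by these rates and still has to be assessed and reported separately.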
Table 1 summarizes how the RE-AIM dimensions apply to the efficacy-effectiveness distinction. Efficacy trials typically limit reach by seeking motivated, homogeneous participants with minimal or no complications or comorbidities. The considerable degree of initial screening for eligibility inherently limits the reach of an efficacy trial. Adoption is often treated as a nonissue for efficacy trials so long as at least one or, in some trials, a few settings are willing to participate. For effectiveness trials, reach is usually higher because participants are drawn from a broad and "defined" population. Adoption is critical because the settings need to commit their own resources and expect the intervention to "fit" with existing procedures.

Implementation in an efficacy trial is usually accomplished by research staff following a standardized protocol, whereas in an effectiveness trial, regular staff with many competing demands on their time must implement the intervention. While such staff are also guided by a protocol, adherence is likely to be more variable. Because they are implemented by research staff, efficacy interventions are often more complex and intensive than effectiveness interventions. Maintenance is usually a nonissue for efficacy trials at the setting level; it is expected that the intervention will cease when final assessments are completed and research staff depart. Since effectiveness trials are intended to represent typical setting conditions, it is hoped that the intervention will be maintained, assuming there are positive results.
WHY THE DISCONNECT?
We conclude that the characteristics that cause an intervention to be successful in efficacy research (e.g., intensive, complex, highly standardized) are fundamentally different from, and often at odds with, programs that succeed in population-based effectiveness settings (e.g., having broad appeal, being adaptable for both participants and intervention agents). If this is the case, then the "system" of moving from research to usual service programs, to which we have subscribed, may be broken and may need to be substantially modified.

Why does this linear progression of research, which is analogous to the steps used successfully to evaluate and bring pharmaceuticals to market, seem to fail with behavioral and health promotion research? One contextual factor is that, before trials, pharmaceutical companies invest considerable time and money establishing that the drug affects relevant biological mediators to a much greater extent than behavioral researchers invest in showing that their interventions affect psychosocial mediators. Granted, industry has vastly more resources. But we suggest that key differences also reside in the nature of the interventions.
Standard medical interventions (e.g., drugs or surgery) are presumed to be robust, readily transferable from setting to setting, and to work approximately equally across broad categories of patients. Clinicians exercise discretion about dosage and surgeons vary in experience, but it is still presumed that the pill is the same whoever administers it. Medicinal and surgical protocols can be relatively precisely defined, and adherence to them can be more easily monitored relative to behavioral interventions. Behavioral interventions are more difficult to define and standardize in part because of the inherent interactivity with client characteristics, preferences, and behaviors. This is exacerbated when behavioral interventions are delivered by staff whose training and expertise fall outside of behavioral science. In efficacy trials, research staff usually bring expertise in behavioral intervention and ensure that it is implemented consistently. This level of quality control and standardization is typically absent among regular health care staff implementing interventions for effectiveness trials.
There are 2 underlying differences between efficacy and effectiveness approaches that we feel are responsible for the current state of affairs. The first is that in an effort to enhance internal validity and control extraneous factors, the tradition in efficacy studies has been to simplify and narrow settings, conditions, participants, and a variety of other factors. There is nothing inherently wrong with this methodological approach, and the tradition of reductionism (e.g., understanding effects by isolating them and removing or controlling other factors) has contributed much to the advancement of science and theory. The problem is that usually the longer-range intent is to generalize beyond the narrow conditions of the efficacy trial. In effectiveness trials, an intervention must be robust across a variety of different participants, settings, conditions, and other less controlled factors. Equally important, it must appeal to a broad "defined population" or target audience.

A classic example of the typical differences between a health care efficacy study and an effectiveness trial concerns subject selection. In a tightly controlled efficacy trial, only highly motivated, homogeneous, self-selected volunteers who do not have any complications or other comorbid conditions are eligible (to control for potential confounding factors). Then, following success in such an efficacy study, we expect the same intervention to appeal to and be effective in a much broader cross-section of participants, many of whom have comorbid conditions and may not volunteer for treatment.

The second key difference between efficacy and effectiveness trials concerns how settings and contextual factors are treated. In efficacy studies, the usual approach is to control variance by restricting the setting to one set of circumstances—for example, one particular clinic (which often includes intervention experts). In contrast, a key characteristic of effectiveness trials is to produce robust effects and to understand variation in outcomes across heterogeneous settings and delivery agents. Therefore, it should not be surprising when the results of an intervention are efficacious under a highly specific set of circumstances but fail to replicate across a wide variety of settings, conditions, and intervention agents in effectiveness research.
SHALL THE TWAIN EVER MEET?
From the above discussion, it may seem
hopeless to expect congruence across findings
from efficacy and effectiveness studies. Some might go so far as to suggest, as one reviewer of this manuscript did, that perhaps efficacy studies should be abandoned altogether. We are optimistic, however, that there are solutions to the present disconnect. In brief, we need to embrace and study the complexity of the world, rather than attempting to ignore or reduce it by studying only isolated (and often unrepresentative) situations. What is needed is a "science of larger social units" that takes into account and analyzes the social context(s) in which experiments are conducted. To advance our present state of science, the question that we need to ask of both efficacy and effectiveness studies is "What are the characteristics of interventions that can (a) reach large numbers of people, especially those who can most benefit, (b) be broadly adopted by different settings (worksite, school, health, or community), (c) be consistently implemented by different staff members with moderate levels of training and expertise, and (d) produce replicable and long-lasting effects (and minimal negative impacts) at a reasonable cost?"

This suggested focus has important implications. It implies that we need to consider not only individual participants but also the settings within which they reside and receive treatment. This move to a multilevel approach is consistent with developments in several fields, and methodologies for how to handle such factors are available. There is not only a rich conceptual history to the study of generalization and of representative or purposeful sampling, but also statistical methods for handling these contextual factors.

This comes down to an issue of generalization. The prevailing view seems to be that efficacy studies should focus only on internal validity and theoretical process mechanisms, and that issues of external validity should be left until later effectiveness studies. In contrast, we argue that issues of moderating variables (external validity) need to be addressed in both efficacy and effectiveness studies. Brewer conceptualizes such social context factors as moderating variables that influence the conclusions that can be drawn about the efficacy of an intervention. Moderating variables (e.g., race/ethnicity, socioeconomic status, type of setting or intervention agent) are relatively stable factors that interact with the intervention or change the effect of the program. Researchers should consider elevating hypotheses related to moderator variables to primary aims.
WHAT CAN BE DONE? DISCUSSION
AND RECOMMENDATIONS
It is difficult to change established practice patterns, regardless of whether they be of clinicians, researchers, or funding agencies. It cannot reasonably be expected that many scientists will quickly discontinue practices in which they have been trained and become comfortable. It is also more efficient, and much more under one's control, to continue to conduct efficacy studies without considering moderating variables or external validity because "the purpose is to study interventions under ideal conditions." However, as illustrated above, this is only true if one does not intend to generalize one's conclusions beyond the very limited sample and conditions of a given study, which is hardly ever the case in health promotion research.

There is an increasingly well-documented disparity between the large amount of information on efficacy and the very small amount of information on effectiveness and representativeness. To produce significant improvement in the current state of affairs, changes will be necessary on the part of researchers, funding organizations, journal reviewers, and grant review panels. We propose 4 specific changes—2 of which focus on researchers, 1 on journal editors, and 1 on funding organizations.
1. Researchers should pay increased attention to moderating factors in both efficacy and effectiveness research. Table 2 outlines how data collection and information about moderating factors, such as participant characteristics (reach) and setting characteristics (adoption), can be incorporated into both efficacy and effectiveness research in a manner appropriate to that phase. Using the RE-AIM framework, we suggest that researchers consider the types of settings, intervention agents, and individuals that they wish their program to be used by when designing and evaluating interventions. During efficacy studies, purposeful or oversampling strategies can be used to include both specific end-user groups (e.g., minorities, less educated) and settings of interest. A critical concern for broader application—and an integral part of Flay's original description—was measurement of potential harmful outcomes. This part of his definition has seldom been addressed, but it needs to be.

Participatory research methods, including developing one's intervention ideas collaboratively with members of the intended audience (individuals, intervention agents, and organization decisionmakers), should not be left for later phases of research but built into efficacy studies. More formal measures of adoption and setting-level maintenance may need to wait until later effectiveness studies (Table 2), but both qualitative and quantitative "proxy measures" of these factors can and should be addressed in efficacy studies. Such information can lead to better tailoring of interventions to organizational culture in the same way that tailoring of intervention at the individual level has led to increased success. A final recommendation for both efficacy and effectiveness studies is to include a variety of intervention agents, to describe their backgrounds and levels of experience/expertise with regard to the target behavior, and to report on potential differences in implementation and outcomes associated with these differences.

As illustrated in Table 2, issues pertaining to moderating factors—and eventual translation into practice—are best addressed during the planning phases of research. RE-AIM, or other evaluation models, can be used to help plan and select samples, interventions, settings, and agents in ways that make it more likely that results will be replicated in later studies.
2. Realize that public health impact involves more than just efficacy. Our training and current review criteria all emphasize producing large effect sizes under tightly controlled conditions. To make a real-world impact, several other criteria are also necessary.

a. At the individual level, several research groups have proposed that Impact = Reach (R) x Efficacy (E). It is not enough to produce a highly efficacious intervention. To have broad public health impact, an intervention must also have high reach.
TABLE 2—Ways to Address RE-AIM Issues in Efficacy and Effectiveness Studies

Reach
  Efficacy trials (Phase III research): Have specified inclusion criteria or purposeful selection, but participants will be volunteers in a specific research setting. Report exclusions, participation rates, dropouts, and representativeness on key characteristics.
  Effectiveness trials in defined populations (Phase IV research): Include all relevant members of a defined population. Report exclusions, participation rates, dropouts, and representativeness.

Efficacy or effectiveness
  Efficacy trials: Measure outcomes using intent-to-treat assumptions or imputation of missing values and a high level of rigor. Assess both positive (anticipated) and negative (unintended) outcomes. Report effects of moderator variables.
  Effectiveness trials: Address as above, though measures are usually more limited. Include economic outcomes.

Adoption
  Efficacy trials: Have potential adoptees assess fit of prototype intervention to their setting. Include "proxy measures" of adoption, such as participation among those staff members of a system who will participate in the study.
  Effectiveness trials: Assess willingness of stakeholders from multiple settings to adopt and adapt the program. Report on representativeness of settings, participation rate, and reasons for declining.

Implementation
  Efficacy trials: Collect data on likely treatment demands. Evaluate delivery of intervention protocol by different intervention agents (usually research staff).
  Effectiveness trials: Assess staff ability to implement key components of the intervention in routine practice. Evaluate consistency of intervention delivery by agency staff who are not part of the research team.

Maintenance
  Efficacy trials: Assess recidivism among participants. Engage potential community settings in strategic planning efforts from the outset.
  Effectiveness trials: Document extent to which the research protocol is retained by the setting/agency once the formal study is completed. Assess continuation of the program over time, and especially after the research phase concludes. Systematically program for and evaluate the level of institutionalization of the program elements after formal study assistance is terminated.
To the Impact = R x E formula, we would add a third component: implementation (I). As discussed by Basch et al., a program cannot be effective if it is not implemented. Thus, we propose that individual-level Impact = R x E x I.

b. An individual-level focus is, however, not sufficient. An intervention also has to be acceptable to and adopted by a variety of intervention settings, and to be implemented relatively consistently by different intervention agents. In other words, the parallel setting- or organizational-level impact formula should be Organizational Impact (OI) = Adoption (A) x Implementation (I). Several authors have discussed issues of nesting and setting factors and how to adjust individual-level effects for issues of nonindependence. However, to our knowledge, the A x I = OI formula for estimating the impact of an intervention across settings has not been discussed, with the exception of an early related proposal by Kolbe that Impact = Effectiveness x Dissemination x Maintenance. It is important to emphasize that in terms of overall public health effect, adoption and implementation are as important as reach and efficacy, and that we need more emphasis on studies of organizational- and system-level factors.
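As a rough numerical illustration of these multiplicative relationships, here is a short sketch; the rates and function names below are hypothetical, not values reported by the authors:

def individual_impact(reach: float, efficacy: float, implementation: float) -> float:
    # Individual-level Impact = R x E x I, as proposed above.
    return reach * efficacy * implementation

def organizational_impact(adoption: float, implementation: float) -> float:
    # Setting-level Organizational Impact (OI) = A x I.
    return adoption * implementation

# Hypothetical rates: 30% of eligible individuals reached, 50% relative efficacy,
# 80% of protocol components delivered, 40% of approached settings adopting.
print(f"{individual_impact(0.30, 0.50, 0.80):.2f}")   # 0.12
print(f"{organizational_impact(0.40, 0.80):.2f}")     # 0.32

A highly efficacious program that reaches few people, or that few settings adopt and implement, therefore yields a small overall impact, which is the point being made about efficacy alone being insufficient.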
3. Include external validity reporting criteria in author guidelines. Within medicine, a widely agreed upon set of criteria for reporting the results of randomized clinical trials has been developed. Known as the CONSORT criteria, these reporting standards have been widely adopted by leading medical journals and have helped to increase the quality of published research. As helpful as the CONSORT criteria are, they are almost exclusively concerned with issues of internal validity. Only 1 out of 22 recommendations directly addresses external validity issues; in contrast to the other very specific and concrete criteria, it simply states "Generalizability (external validity) of the trial findings" and provides no guidance as to how this issue should be reported.

We propose the following 7 additions to the existing CONSORT criteria, which would help greatly to increase awareness of and reporting on external validity. If such criteria were widely adopted, it would greatly enhance the quality and information value not only of individual studies but also of evidence-based reviews and meta-analyses. The current state of health promotion research is so biased toward reporting on internal validity issues that it is difficult to draw any conclusions about generalization. In particular, there has been a serious lack of attention to issues of representativeness, especially at the level of settings and intervention agents. This becomes even more problematic when the evidence upon which meta-analyses and practice recommendations are based consists largely or solely of efficacy studies of unknown generalizability.

The 7 items that we propose below should apply to both efficacy and effectiveness studies. They would not require a great deal of additional journal space and are
described below in the same format as existing CONSORT items. These criteria were recently added by the Evidence-Based Behavioral Medicine Committee of the Society of Behavioral Medicine to their recommendations for reporting on behavioral intervention studies.

a. State the target population to which the study intends to generalize.
b. Report the rate of exclusions, the participation rate among those eligible, and the representativeness of participants.
c. Report on methods of recruiting study settings, including exclusion rate, participation rate among those approached, and representativeness of settings studied.
d. Describe the participation rate and characteristics of those delivering the intervention. State the population of intervention agents that one would see eventually implementing the program and how the study interventionists compare with those who will eventually deliver the intervention.
e. Report the extent to which different components of the intervention are delivered (by different intervention agents) as intended in the protocol.
f. Report the specific time and costs required to deliver the intervention.
g. Report on organizational-level continuance, discontinuance, or adaptation in modified form of the intervention once the trial is completed, and individual-level maintenance of results.

We think that such information should be of relevance not only to researchers but also to clinicians, health directors, and decisionmakers responsible for selecting prevention and health promotion programs. In fact, we think that these parties already make implicit use of these dimensions. Making them explicit should aid reading of the literature and guide more informed program selections.

4. Increase funding for research focused on moderating variables, external validity, and robustness. The large imbalance between the extent to which health promotion investigations focus on internal validity and the extent to which they focus on external validity will not be remedied without substantial changes in funding priorities. Table 3 lists several
TABLE 3—Recommendations for Funding Organizations to Accelerate Transfer of Research to Practice

• Solicit proposals that investigate interventions in multiple settings and especially settings that are representative of those to which the program is intended to generalize.
• Fund innovative investigations of ways to enhance reach, adoption, implementation, and maintenance (which have all been de-emphasized relative to efficacy).
• Require standard and comprehensive reporting of exclusions, participation rates, and representativeness of both participants and settings.
• Fund cross-over designs, sequential program changes, replications, multiple baseline, and other designs in addition to randomized controlled trials that can efficiently and practically address key issues in translation.
• Invite programs that investigate and can demonstrate quality implementation and outcomes across a wide range of intervention agents similar to those present in applied settings.
• Require a maintenance/sustainability phase in research projects and implementation of plans to enhance institutionalization once the original research has been completed.
• Fund competitive proposals to investigate long-term effects and sustainability of initially successful interventions.
• Encourage innovation in intervention design and standardization in reporting on process and outcome measures at both individual and setting/intervention agent levels.
• Request more cost-effectiveness studies and other economic evaluations that are of interest to program administrators and policymakers.
recommendations for funding organizations that would help correct this imbalance.

These recommendations would have 2 effects. The first would be to increase the small number of well-conducted effectiveness studies now available. The second would be to increase the relevance of efficacy studies for practice by focusing attention on moderating variables and the range of conditions, settings, intervention agents, and participants to which the results apply. Such refocused funding priorities should also increase understanding of health disparities and help reduce them, since more research would be conducted involving minorities and low-income settings. Finally, funding organizations might explicitly have reviewers rate proposals on their likely robustness or potential for widespread application and impact. This could be done by methods described in the Guide to Community Preventive Services.
CONCLUSIONS
In summary, at least part of the reason for the slow and uneven translation of research findings into practice in the health promotion sciences is lack of attention to issues of generalization and external validity (moderating factors that potentially limit the robustness of interventions). There also needs to be a greater understanding of, and research on, setting-level social contextual factors. If these issues were addressed in the design and reporting of efficacy as well as effectiveness studies, it would greatly advance the current quality of research and our knowledge base. These issues are to a large extent under the control of researchers, reviewers, and funding organizations, and we have listed actions that each of these parties can take to facilitate better transfer from efficacy to effectiveness research.
About the Authors
Russell E. Glasgow and Alfred C. Marcus are with Kaiser Permanente Colorado and AMC Cancer Research Center, Denver. Edward Lichtenstein is with the Oregon Research Institute, Eugene.
Requests for reprints should be sent to Russell E. Glasgow, PhD, PO Box 349, Canon City, CO 81215 (e-mail: [email protected]).
This article was accepted October 24, 2002.

Contributors
All authors produced original drafts of sections of the manuscript, extensively edited each other's contributions, and made substantive contributions to the ideas expressed in the manuscript.

Acknowledgments
This project was supported by The Robert Wood Johnson Foundation (grant 030102) and the Agency for Healthcare Research and Quality (grant HS 10123).
We acknowledge the contributions of Allan Best, PhD, Brian Flay, PhD, Lisa Klesges, PhD, and Thomas M. Vogt, MD, MPH, for their helpful comments on an earlier draft of the manuscript.
References
1. Clark GN. Improving the transition from basic efficacy research to effectiveness studies: methodological issues and procedures. J Consult Clin Psychol. 1995;63:718-725.
2. Weisz JR, Weisz B, Donenberg GR. The lab versus the clinic: effects of child and adolescent psychotherapy. Am Psychol. 1992;47:1578-1585.
3. Briss PA, Zaza S, Papaioanou M, et al. Developing an evidence-based Guide to Community Preventive Services: methods. Prev Med. 2000;18(suppl 1):35-43.
4. Centers for Disease Control and Prevention. The Guide to Community Preventive Services. 2002. Available at: http://www.thecommunityguide.org. Accessed March 11, 2003.
5. Whitlock EP, Orleans CT, Pender N, Allan J. Evaluating primary care behavioral counseling interventions: an evidence-based approach. Am J Prev Med. 2002;22:267-284.
6. Department of Health and Human Services. Healthy People 2000. 2002. Available at: http://www.health.gov/healthypeople/data/PROGRVW/default.htm. Accessed March 11, 2003.
7. Smedley BD, Syme SL. Promoting health: intervention strategies from social and behavioral research. Am J Health Promot. 2001;15:149-166.
8. Integration of Health Behavior Counseling Into Routine Medical Care. Washington, DC: Center for the Advancement of Health; 2001.
9. Committee on Quality Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
10. Joyner L, McNeeley S, Kahn R. ADA's provider recognition program. HMO Pract. 1997;11:168-170.
11. Glasgow RE, Strycker LA. Level of preventive practices for diabetes management: patient, physician, and office correlates in two primary care samples. Am J Prev Med. 2000;19:9-14.
12. Health Behavior Change in Managed Care: A Status Report. Washington, DC: Center for the Advancement of Health; 2000.
13. Kottke TE, Edwards BS, Hagen PT. Counseling: implementing our knowledge in a hurried and complex world. Am J Prev Med. 1999;17:295-298.
14. Woolf SH, Atkins D. The evolving role of prevention in health care: contributions of the US Preventive Services Task Force. Am J Prev Med. 2001;20:13-20.
15. Orlandi MA. Promoting health and preventing disease in health care settings: an analysis of barriers. Prev Med. 1987;16:119-130.
16. Green LW. From research to "best practices" in other settings and populations. Am J Health Behav. 2001;25:165-178.
17. Greenwald P, Cullen JW. The new emphasis in cancer control. J Natl Cancer Inst. 1985;74:543-551.
18. Flay BR. Efficacy and effectiveness trials (and other phases of research) in the development of health promotion programs. Prev Med. 1986;15:451-474.
19. Basch CE, Sliepcevich EM, Gold RS. Avoiding type III errors in health education program evaluations. Health Educ Q. 1985;12:315-331.
20. King AC. The coming of age of behavioral research in physical activity. Ann Behav Med. 2001;23:227-228.
21. Glasgow RE, Bull SS, Gillette C, Klesges LM, Dzewaltowski DA. Behavior change intervention research in health care settings: a review of recent reports with emphasis on external validity. Am J Prev Med. 2002;23:62-69.
22. Oldenburg B, Ffrench BF, Sallis JF. Health behavior research: the quality of the evidence base. Am J Health Promot. 2000;14:253-257.
23. Hiatt RA, Rimer BK. A new strategy for cancer control research. Cancer Epidemiol Biomarkers Prev. 1999;8:957-964.
24. Kerner JF. Closing the Gap Between Discovery and Delivery. Washington, DC: National Cancer Institute; 2002.
25. Teutsch SM. A framework for assessing the effectiveness of disease and injury prevention. MMWR Recomm Rep. 1992;41(RR-3):1-12.
26. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89:1322-1327.
27. Glasgow RE, McKay HG, Piette JD, Reynolds KD. The RE-AIM framework for evaluating interventions: what can it tell us about approaches to chronic illness management? Patient Educ Couns. 2001;44:119-127.
28. Glasgow RE, Klesges LM, Dzewaltowski DA, Bull SS, Estabrooks P. The future of health behavior change research: what is needed to improve translation of research into health promotion practice? Ann Behav Med. In press.
29. Estabrooks PA, Dzewaltowski DA, Glasgow RE, Klesges LM. How well has recent literature reported on important issues related to translating school-based health promotion research into practice? J School Health. 2003;73:21-28.
30. Rogers EM. Diffusion of Innovations. 4th ed. New York, NY: Free Press; 1995.
31. Mook DG. In defense of external invalidity. Am Psychol. 1983;38:379-387.
32. Axelrod R, Cohen MD. Harnessing Complexity: Organizational Implications of a Scientific Frontier. New York, NY: Simon & Schuster; 2000.
33. Biglan A, Glasgow RE, Singer G. The need for a science of larger social units: a contextual approach. Behav Ther. 1990;21:195-215.
34. Gleser GC, Cronbach LJ, Rajaratnam N. Generalizability of scores influenced by multiple sources of variance. Psychometrika. 1965;30:1373-1385.
35. Shadish WR, Cook TD, Campbell DT. Experimental and Quasi-Experimental Design for Generalized Causal Inference. Boston, Mass: Houghton Mifflin; 2002.
36. Brunswik E. Representative design and probabilistic theory in functional psychology. Psychol Rev. 1955;62:217.
37. Murray DM. Statistical models appropriate for designs often used in group-randomized trials. Stat Med. 2001;20:1373-1385.
38. Cook TD, Campbell DT. Quasi-Experimentation: Design and Analysis Issues for Field Settings. Chicago, Ill: Rand McNally; 1979.
39. Brewer MB. Research design and issues of validity. In: Reis HT, Judd CM, eds. Handbook of Research Methods in Social and Personality Psychology. New York, NY: Cambridge University Press; 2000:3-39.
40. Oldenburg BF, Sallis JF, Ffrench ML, Owen N. Health promotion research and the diffusion and institutionalization of interventions. Health Educ Res. 1999;14:121-130.
41. Skinner CS, Campbell MK, Rimer BK, Curry S, Prochaska JO. How effective is tailored print communication? Ann Behav Med. 1999;21:290-298.
42. Kreuter MW, Strecher VJ, Glassman B. One size does not fit all: the case for tailoring print materials. Ann Behav Med. 1999;21:276-283.
43. Glasgow RE, Toobert DJ, Hampson SE, Strycker LA. Implementation, generalization, and long-term results of the "Choosing Well" diabetes self-management intervention. Patient Educ Couns. 2002;48:115-122.
44. Abrams DB, Emmons KM, Lirman L, Biener L. Smoking cessation at the workplace: conceptual and practical considerations. In: Richmond R, ed. Interventions for Smokers: An International Perspective. New York, NY: Williams & Wilkins; 1994:137-169.
45. Prochaska JO, Velicer WF, Fava JL, Rossi JS, Tsoh JY. Evaluating a population-based recruitment approach and a stage-based expert system intervention for smoking cessation. Addict Behav. 2001;26:583-602.
46. Jeffery RW. Risk behaviors and health: contrasting individual and population perspectives. Am Psychol. 1989;44:1194-1202.
47. Lichtenstein E, Glasgow RE. A pragmatic framework for smoking cessation: implications for clinical and public health programs. Psychol Addict Behav. 1997;11:142-151.
48. Elbourne DR, Campbell MK. Extending the CONSORT statement to cluster randomized trials: for discussion. Stat Med. 2001;20:489-496.
49. Kolbe LJ. Increasing the impact of school health promotion programs: emerging research perspectives. Health Educ. 1986;17:49-52.
50. Moher D, Schulz KF, Altman D. The CONSORT statement: revised recommendations for improving the quality of reports. JAMA. 2001;285:1987-1991.
51. Zaza S, Lawrence RS, Mahan CS, Fullilove M, et al. Scope and organization of the Guide to Community Preventive Services. Task Force on Community Preventive Services. Am J Prev Med. 2000;18(suppl 1):27-34.
52. Bull SS, Gillette C, Glasgow RE, Estabrooks P. Worksite health promotion research: to what extent can we generalize the results and what is needed to translate research to practice? Health Educ Behav. In press.
53. Davidson K, Goldstein M, Kaplan R, et al. Evidence-based behavioral medicine: what is it and how do we get there? Ann Behav Med. In press.
54. Green LW, Kreuter MW. Commentary on the emerging Guide to Community Preventive Services from a health promotion perspective. Am J Prev Med. 2000;18:7-9.
55. Institute of Medicine. Promoting Health: Intervention Strategies From Social and Behavioral Research. Washington, DC: National Academy Press; 2000.
56. Green LW, Kreuter MW. Health Promotion Planning: An Educational and Ecological Approach. 3rd ed. Mountain View, Calif: Mayfield Publishing Co; 1999.
Healthcare Program/Policy Evaluation Analysis Template
Use this document to complete the Module 5 Assessment
Assessing a Healthcare Program/Policy Evaluation
Healthcare Program/Policy Evaluation
Description
How was the success of the program or policy measured?
How many people were reached by the program or policy
selected? How much of an impact was realized with the program
or policy selected?
At what point in program implementation was the program or
policy evaluation conducted?
What data was used to conduct the program or policy
evaluation?
What specific information on unintended consequences was identified?
What stakeholders were identified in the evaluation of the
program or policy? Who would benefit most from the results
and reporting of the program or policy evaluation? Be specific
and provide examples.
Did the program or policy meet the original intent and
objectives? Why or why not?
Would you recommend implementing this program or policy in
your place of work? Why or why not?
Identify at least two ways that you, as a nurse advocate, could
become involved in evaluating a program or policy after one
year of implementation.
General Notes/Comments
© 2021 Walden University, LLC
Rubric Detail
Excellent Good Fair Poor
Program/Policy
Evaluation
Based on the
program or
policy
evaluation you
seelcted,
complete the
Healthcare
Program/Policy
Evaluation
Analysis
Template. Be
sure to address
the following:
· Describe the
healthcare
program or
policy outcomes.
· How was the
success of the
program or
policy
measured?
· How many
people were
reached by the
32 (32%) - 35
(35%)
Using su!cient
evidence,
response clearly
and accurately
describes the
healthcare
program or
policy
outcomes.
Response
accurately and
clearly explains
how the success
of the program
or policy was
measured.
Response
accurately and
clearly describes
how many
people were
reached by the
program or
policy and
accurately
describes the
impact of the
program or
28 (28%) - 31
(31%)
Using su!cient
evidence,
response
accurately
describes the
healthcare
program or
policy
outcomes.
Response
accurately
explains how
the success of
the program or
policy was
measured.
Response
accurately
describes how
many people
were reached
by the program
or policy and
accurately
describes the
impact of the
program or
policy.
25 (25%) - 27
(27%)
Description of
the healthcare
program or
policy
outcomes is
inaccurate or
incomplete.
Explanation of
how the
success of the
program or
policy was
measured is
inaccurate or
incomplete.
Description of
how many
people were
reached by the
program or
policy and the
impact is vague
or inaccurate.
Response
vaguely
describes the
point at which
0 (0%) - 24 (24%)
Description of
the healthcare
program or
policy outcomes
is inaccurate
and incomplete
or is missing.
Explanation of
how the success
of the program
or policy was
measured is
inaccurate and
incomplete or is
missing.
Description of
how many
people were
reached by the
program or
policy and the
associated
impacts is vague
and inaccurate
or is missing.
Response of the
point at which
time the
Name: NURS_6050_Module05_Week10_Assignment_Rubric
EXIT
Grid View List View
https://class.waldenu.edu/webapps/bbgs-deep-links-
BBLEARN/app/course/rubric?course_id=_16998532_1&rubric_i
d=_3280054_1#
https://class.waldenu.edu/webapps/bbgs-deep-links-
BBLEARN/app/course/rubric?course_id=_16998532_1&rubric_i
d=_3280054_1#
10/22/22, 8:48 PMRubric Detail – Blackboard Learn
Page 2 of 5https://class.waldenu.edu/webapps/bbgs-deep-links-
BBLEARN/app/course/rubric?course_id=_16998532_1&rubric_i
d=_3280054_1
program or
policy selected?
How much of an
impact was
realized with the
program or
policy selected?
· At what point
in time in
program
implementation
was the program
or policy
evaluation
conducted?
policy.
Response
accurately and
clearly indicates
the point at
which time the
program or
policy
evaluation was
conducted.
Response
accurately
indicates the
point at which
time the
program or
policy
evaluation was
conducted.
the program or
policy
evaluation was
conducted.
program or
policy was
conducted is
missing.
Reporting of Program/Policy Evaluations
· What data was used to conduct the program or policy evaluation?
· What specific information on unintended consequences was identified?
· What stakeholders were identified in the evaluation of the program or policy? Who would benefit the most from the results and reporting of the program or policy evaluation? Be specific and provide examples.
· Did the program or policy meet the original intent and objectives? Why or why not?
· Would you recommend implementing this program or policy in your place of work? Why or why not?
· Identify at least two ways that you, as a nurse advocate, could become involved in evaluating a program or policy after 1 year of implementation.

45 (45%) - 50 (50%)
Response clearly and thoroughly explains in detail:
- specific information on outcomes and unintended consequences identified through the program or policy evaluation.
- the stakeholders involved in the program or policy evaluation.
- who would benefit most from the results and reporting of the program or policy evaluation.
- whether the program met the original intent and outcomes, including an accurate and detailed explanation of the reasons supporting why or why not.
- whether the program should be implemented, including an accurate and detailed explanation of the reasons supporting why or why not.
- at least two ways that the nurse advocate could become involved in the evaluation of the program or policy after 1 year of implementation.

40 (40%) - 44 (44%)
Using sufficient evidence, response accurately identifies the data used to conduct the program or policy evaluation. Response explains in detail specific information on outcomes and unintended consequences identified through the program or policy evaluation. Response explains in detail the stakeholders involved in the program or policy evaluation. Response explains who would benefit most from the results and reporting of the program or policy evaluation. Response includes an accurate explanation of whether the program met the original intent and outcomes, including an accurate explanation of the reasons supporting why or why not. Response includes an accurate explanation of whether the program should be implemented, including an accurate explanation of the reasons supporting why or why not. Response includes an accurate explanation of two ways that the nurse advocate could become involved in the evaluation of the program or policy after 1 year of implementation.

35 (35%) - 39 (39%)
Response vaguely or inaccurately identifies the data used to conduct the program or policy evaluation. Explanation of specific information on outcomes and unintended consequences identified through the program or policy evaluation is vague or incomplete. Explanation of the stakeholders involved in the program or policy evaluation is vague or inaccurate. Explanation of who would benefit most from the results and reporting of the program or policy evaluation is vague or inaccurate. Explanation of whether the program/policy met the original intent and outcomes, and the reasons why or why not, is incomplete or inaccurate. Explanation of whether the program or policy should be implemented, and the reasons why or why not, is incomplete or inaccurate. Explanation of ways that the nurse advocate could become involved in the evaluation or policy after 1 year of implementation is incomplete or inaccurate.

0 (0%) - 34 (34%)
Identification of the data used to conduct the program or policy evaluation is vague and inaccurate or is missing. Response includes vague and incomplete or is missing explanation of:
- specific information on outcomes and unintended consequences identified through the program or policy evaluation.
- the stakeholders involved in the program or policy evaluation.
- who would benefit most from the results and reporting of the program or policy evaluation.
- whether the program or policy met the original intent and outcomes, and the reasons why or why not.
- whether the program or policy should be implemented, and the reasons why or why not.
- ways that the nurse advocate could become involved in the evaluation or policy after 1 year of implementation.
Written Expression and Formatting - Paragraph Development and Organization: Paragraphs make clear points that support well-developed ideas, flow logically, and demonstrate continuity of ideas. Sentences are carefully focused, neither long and rambling nor short and lacking substance. A clear and comprehensive purpose statement and introduction is provided which delineates all required criteria.

5 (5%) - 5 (5%)
Paragraphs and sentences follow writing standards for flow, continuity, and clarity. A clear and comprehensive purpose statement, introduction, and conclusion is provided which delineates all required criteria.

4 (4%) - 4 (4%)
Paragraphs and sentences follow writing standards for flow, continuity, and clarity 80% of the time. Purpose, introduction, and conclusion of the assignment is stated, yet is brief and not descriptive.

3 (3%) - 3 (3%)
Paragraphs and sentences follow writing standards for flow, continuity, and clarity 60%-79% of the time. Purpose, introduction, and conclusion of the assignment is vague or off topic.

0 (0%) - 2 (2%)
Paragraphs and sentences follow writing standards for flow, continuity, and clarity < 60% of the time. Purpose, introduction, and conclusion of the assignment is incomplete or missing.
Written Expression and Formatting - English Writing Standards: Correct grammar, mechanics, and proper punctuation.

5 (5%) - 5 (5%)
Uses correct grammar, spelling, and punctuation with no errors.

4 (4%) - 4 (4%)
Contains a few (1-2) grammar, spelling, and punctuation errors.

3 (3%) - 3 (3%)
Contains several (3-4) grammar, spelling, and punctuation errors.

0 (0%) - 2 (2%)
Contains many (≥5) grammar, spelling, and punctuation errors that interfere with the reader’s understanding.
Written Expression and Formatting: The paper follows correct APA format for title page, font, spacing, parenthetical/in-text citations, and reference list.

5 (5%) - 5 (5%)
Uses correct APA format with no errors.

4 (4%) - 4 (4%)
Contains a few (1-2) APA format errors.

3 (3%) - 3 (3%)
Contains several (3-4) APA format errors.

0 (0%) - 2 (2%)
Contains many (≥5) APA format errors.
Total Points: 100
Assignment WK 9Assessing a Healthcare ProgramPolicy Evaluation.docx

  • 1. Assignment: WK 9Assessing a Healthcare Program/Policy Evaluation Program/policy evaluation is a valuable tool that can help strengthen the quality of programs/policies and improve outcomes for the populations they serve. Program/policy evaluation answers basic questions about program/policy effectiveness. It involves collecting and analyzing information about program/policy activities, characteristics, and outcomes. This information can be used to ultimately improve program services or policy initiatives. Nurses can play a very important role assessing program/policy evaluation for the same reasons that they can be so important to program/policy design. Nurses bring expertise and patient advocacy that can add significant insight and impact. In this Assignment, you will practice applying this expertise and insight by selecting an existing healthcare program or policy evaluation and reflecting on the criteria used to measure the effectiveness of the program/policy. To Prepare: · Review the Healthcare Program/Policy Evaluation Analysis Template provided in the Resources. · Select an existing healthcare program or policy evaluation or choose one of interest to you. · Review community, state, or federal policy evaluation and reflect on the criteria used to measure the effectiveness of the program or policy described. The Assignment: (2–3 pages) Based on the program or policy evaluation you selected, complete the Healthcare Program/Policy Evaluation Analysis Template. Be sure to address the following: · Describe the healthcare program or policy outcomes. · How was the success of the program or policy measured? · How many people were reached by the program or policy selected?
  • 2. · How much of an impact was realized with the program or policy selected? · At what point in program implementation was the program or policy evaluation conducted? · What data was used to conduct the program or policy evaluation? · What specific information on unintended consequences was identified? · What stakeholders were identified in the evaluation of the program or policy? Who would benefit most from the results and reporting of the program or policy evaluation? Be specific and provide examples. · Did the program or policy meet the original intent and objectives? Why or why not? · Would you recommend implementing this program or policy in your place of work? Why or why not? · Identify at least two ways that you, as a nurse advocate, could become involved in evaluating a program or policy after 1 year of implementation. By Day 7 of Week 10 Submit your completed healthcare program/policy evaluation analysis. Milstead, J. A., & Short, N. M. (2019). Health policy and politics: A nurse's guide (6th ed.). Jones & Bartlett Learning. · Chapter 7, “Health Policy and Social Program Evaluation” (pp. 116–124 only) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5409875/ https://www.sciencedirect.com/science/article/pii/S0029655418 300617
  • 3. i J LUUU^S Why Don't We See More Translation of Health Promotion Research to Practice? Rethinking the Efficacy-to-Effectiveness Transition I Russell E. Glasgow, PhD, Edward Lichtenstein, PhD, and Alfred C, Marcus, PhD The gap between research and practice is well documented. We address one of the underlying reasons for this gap: the assumption that effectiveness research naturally and logically follows from successful efficacy research. These 2 research traditions have evolved different methods and values; consequently, there are inherent differ- ences between the characteristics of a successful efficacy intervention versus those of an effectiveness one. Moderating factors that limit robustness across settings, popu- lations, and intervention staff need to be addressed in efficacy studies, as well as in effectiveness trials. Greater attention needs to be paid to documenting intervention reach, adoption, implementation, and maintenance. Recommendations are offered to help close the gap between efficacy and effectiveness research and to guide evaluation and possible adoption of new programs. (Am J Public Health. 2003;93:1261-1267)
  • 4. Despite a growing literature documenting pre- vention and health promotion interventions that have proven successful in well-controlled research, few of these interventions are consis- tently implemented in applied settings. This is true across preventive counseling services for numerous target behaviors, including tobacco use, dietary change, physical activity, and behavioral heailth issues (e.g., alcohol use, de- pression). Several recent reviews and meta- analyses have documented this gap,''^ and the task forces on both clinical preventive services and community preventive services have noted that in several areas there is insufSdent ap- pUed evidence available to make recommenda- tions at present ̂ "̂ Most of the Healthy People 2000 objectives^ were not met, and the even more ambitious goals in Healthy People 2010 are similarly unlikely to be met without signifi- cant changes in the status quo.̂ '* To meet these challenges, we will need to have substantially more demonstrations of how to effectively im- plement recommendations in typical settings and in locations serving minority, low-income, and rural populations facing health disparities. This situation is not unique to preventive in- terventions, as strikingly documented in the re- cent Institute of Medicine report Crossing the Chasm^ which summarizes the similar state of affairs regarding many medical and disease management interventions. For example, there is increasing consensus on evidence-based diabetes management practices to prevent complications and on the importance and cost-
  • 5. effectiveness of these practices.'" However, these recommendations—and especially those related to lifestyle counseling and behavioral issues—are poorly implemented in practice."^''* This gap between research and practice is the result of several interacting factors, includ- ing limited time and resources of practition- ers, insufficient training,'' lack of feedback and incentives for use of evidence-based practices, and inadequate infrastructure and systems organization to support translation.®'̂ In this article, we focus on another reason for the slow and incomplete translation of re- search findings into practice: the logic and as- sumptions behind the design of efficacy and effectiveness research trials. EFFICACY AND EFFECTIVENESS TRIALS Many of the methods used in current pre- vention science are based on 2 influential pa- pers published in the 1980s: Greenwald and Cullen's'^ description of the phases of cancer control research and Flay's analysis of efficacy and effectiveness research.'^ Both papers ar- gued for a logical progression of research de- signs through which promising intervention ideas should proceed. These papers had many positive effects in helping to establish preven- tion research and enhancing acceptability among other disciplines. However, they may also have had an important and inadvertent negative consequence that derives from the
  • 6. assumption that the best candidates for effec- tiveness studies—and later dissemination—are interventions that prove successful in certain types of efficacy research. We argue that this assumption, or at least the way in which it has been operationalized over the past 15 years, has often led to interventions that have low probability of success in real-world settings. To understand this point, it is necessary first to briefly review the seminal papers by Flay'̂ and Greenwald and Cullen.'̂ Efficacy trials are defined by Flay as a test of whether a "pro- gram does more good than harm when deliv- ered under optimum conditions."'*''''"" Effi- cacy trials are characterized by strong control in that a standardized program is delivered in a uniform fashion to a specific, often narrowly defined, homogeneous target audience. Owing to the strict standardization of efficacy trials, any positive (or negative) effect can be directly attributed to the intervention being studied. Effectiveness trials are defined as a test of whether a "program does more good than harm when delivered under real-wOrld condi- tions."'*"'"''̂ " They typically standardize avail- ability and access among a defined popula- tion while allowing implementation and levels of participation to vary on the basis of real- world conditions. The primary goal of an ef- fectiveness tried is to detennine whether an intervention works among a broadly defined population. Effectiveness trials that result in no change may be the result of a lack of proper implementation or weak acceptance or
  • 7. adherence by participants.'*'^ Greenwald and Cullen'̂ proposed 5 phases of intervention research presumed to unfold in August 2003 , Vol 93 , No. 8 | American Journal of Public Health Glasgow et al. | Peer Reviewed | Public Health Matters | 1 2 6 1 ME a sequential fashion. This continuum begins with Phase I research to formtiJate and develop intervention Jiypotheses for future study. Phase II studies develop methodologies that can be used in future efBcacy or effectiveness studies. Phase III (efficacy) studies test intervention hy- potheses, using methods that have been tested in Phase Jl. TJius, Phase III studies are de- signed to test interventions for efBcacy, vnth an emphasis on internal validity, tJie purpose of wJiich is to establish a eausal link between the intervention and outcomes. Given this empha- sis on internal control, Greenwald and Cullen note that Phase III studies can be conducted in settings and witb stimples that will "optimize in- terpretation of efBcacy," including study sam- ples tbat may be more homogeneous tban tbe ultimate target population, and settings tbat will maximize management of and control over tbe researcb process. Tbe main objective of Phase fV (effective- ness) studies is to measure tbe impact of an in-
  • 8. tervention when it is tested witbin a population tbat is representative of tbe intended target au- dienee. Given that Pbase JV studies should yield results tbat are generalizable, there is also tbe presumption tbat tbe context and setting for delivering tbe intervention should likewise be generalizable to tbe intended program users. Jn Pbase V studies, effective Pbase JV in- terventions are translated into large-scale dem- onstration projects. Tbe major concern is im- plementation fidelity of an intervention tbat will now be introduced witbin even broader populations, including entire communities. Tbis final pbase (dissemination researeb), wbere col- laboration and coordination witb various com- munity partners is likely to receive even greater attention, is intended to provide tbe necessary data and experience to move inter- ventions into public bealth service programs at tbe national, regional, state, and local levels. Greenwald and Cullen spedficaUy advocated tbat intervention researcb unfold in a system- atic fasbion, building on and extending tbe body of science acctimulated in previous pbases. By explicitly defining tbe difference be- tween Pbase JJJ and Pbase IV researcb as being an empbasis on internal control versus repre- sentativeness, botb Flay and Greenwald and CuUen assumed tbat successful Pbase III trials would lead naturally to Pbase fV trials. Unfor- tunately, tbis bas not ocaured.''"'^" Instead, we currently find ourselves in a situation in wbicb we bave many small-scale efBcacy studies of unJoiown generalizability and few suceessiuJ ef-
  • 9. fectiveness trials.̂ ''̂ ^ In particular, we know very little about tbe representativeness of par- ticipants, settings, or intervention agents partici- pating in bealtb promotion research.''^' Altbougb tbe National Gancer Institute no longer empbasizes tJiis linear "pbases of re- searcb" model,^'''^'' tbe model was extremely influential in guiding an entire generation of researeb; many researcbers, reviewers, and editors still use tbis framework wben design- ing, ftmding, and evaluating research—and in deciding wbat types of studies are needed to advance a given area. Similar pbase models are influential in evaluating prevention effec- tiveness^^ and in developing drug therapies. In tbe remainder of tbis article, we discuss bow tbis well-intentioned and logical pbase of researcb paradigm may bave fallen sbort of its intended goal, and propose approacbes to remedy tbe present situation. Our primary thesis is tbat tbis "triekle- down" model of bow to translate researcb into practice—namely, tbat tbe optimal way to develop disseminable interventions is to progress from efBcacy studies to effectiveness trials to dissemination projects—is inherently flawed, or at least incomplete. We posit that given tbe respective cultures, values, and methodological traditions tbat bave devel- oped witbin efBcacy versus population-based effectiveness researcb, it is bigbly unlikely tbat interventions tbat are successful in efB- cacy studies will do well in effectiveness stud-
  • 10. ies, or in real-world applications. Table 1 summarizes tbe key cbaracteristics of well-designed efficacy and effectiveness tri- als, using tbe RE-AIM evaluation frame- work.̂ '̂̂ ^ Tbis model for evaluating interven- tions is intended to refoctis priorities on public bealtb issues, and it gives balanced em- pbasis to internal and external validity (see bttp://www.re-aim.org). RE-AIM is an acro- nym for Reach, Efficacy or Effectiveness (de- pending on tbe stage of researcb). Adoption, Implementation, and Maintenance. Reach refers to tbe participation rate among tbose approacbed and tbe representativeness of participants. Factors determining reaeb are tbe size and cbaracteristics of tbe potential au- dience and tbe barriers to participation (e.g., cost, sodaJ and environmental context, neces- sary referrals, transportation, and inconven- ience). Efficacy or effectiveness pertains to tbe impact of an intervention on specified out- come criteria and includes measures of poten- tial negative outcomes as well as intended re- sults (as recommended by Flay,'* but seldom eolJected)̂ ®'̂ ^ (D.A. Dzewaltowski et al., un- publisbed data, 2002). Adoption operates at the setting level and concerns the percentage and representativeness of organizations or set- tings tbat wifl conduct a given program. Rogers^" bas written extensively on adoption and dissemination issues. Factors associated witb adoption include political and cultural fit. TABLE 1-Distinctive Characteristics of Efficacy and
  • 11. Effectiveness intervention Studies, Using RE-AIM^^'" Dimensions for Program Evaluation RE-AIM Issue Efficacy Studies Effectiveness Studies Reacli Efficacy or effectiveness Adoption Implementation Maintenance and cost Homogeneous, highly motivated sample; exclude those with complications. other comorbid problems Intensive, specialized interventions that attempt to maximize effect size; very standardized; randomized designs Usually 1 setting to reduce variability; settings with many resources and expert staff Implemented by research staff closely
  • 12. following specific protocol Few or no issues; focus on individual level. Broad, heterogeneous, representative sample; often use a defined population Brief, feasible interventions not requiring great expertise; adaptable to setting; randomized, time series, or quasi-experimental designs Appeal to and work in multiple settings; able to be adapted to fit setting Implemented by variety of different staff with competing demands, using adapted protocol Major issues; setting-level maintenance is as Important as Individual-level maintenance 1262 I Public Health Matters | Peer Reviewed | Glasgow et al. American Journal of Public Health | August 2003, Vol 93, No. 8 cost, level of resources and expertise required, and how similar a proposed service is to cur- rent practices of an organization. Implementa- tion refers to intervention integrity, or the
  • 13. quality and consistency of delivery. Finally, maintenance operates at both the individual and the setting or organizational level. At the individual level, maintenance refers to how well hehavior changes hold up in the long term. At the setting level, it refers to the ex- tent to which a treatment or practice becomes institutionalized in an organization. Table 1 summarizes how the RE-AIM di- mensions apply to the efiicacy-efTectiveness distinction. Efficacy trials typically limit reach by seeking motivated, homogeneous partici- pants with minimal or no complications or co- morbidities. The considerable degree of initial screening for eligibility inherently limits the reach of an eflicacy trial. Adoption is often treated as a nonissue for efficacy trials so long as at least one or, in some tdeds, a few set- tings are willing to participate. For effective- ness trials, reach is usually higher because participants are drawn from a broad and "de- fined" population. Adoption is critical because the settings need to commit their own re- sources and expect the intervention to "fit" with existing procedures. Implementation in an efficacy trial is usually accomplished by research staff following a standardized protocol, whereas in an effective- ness trial, regular stciff with many competing demands on their time must implement the in- tervention. While such staff are also guided by a protocol, adherence is likely to be more vari- able.' Because they are implemented by re- search staff, efficacy interventions are often
  • 14. more complex and intensive than effectiveness interventions. Maintenance is usually a nonis- sue for efficacy trials at the setting level; it is expected that the intervention will cease when final assessments are completed and research staff depart Since effectiveness trials are in- tended to represent typical setting conditions, it is hoped that the intervention will be main- tained, assuming there are positive results. WHY THE DISCONNECT? We conclude that the characteristics that cause an intervention to be successful in effi- cacy research (e.g., intensive, complex, highly standardized) are fundamentally different from, and often at odds with, programs that succeed in population-based effectiveness set- tings (e.g., having broad appeal, being adapt- able for both participants and intervention agents). If this is the case, then the "system" of moving from research to usual service pro- grtims, to which we have subscribed, may be broken and may need to be substantially modified. Why does this linear progression of re- search, which is analogous to the steps used successfully to evaluate emd bring pharma- ceuticals to market, seem to fail with behav- ioral and health promotion research? One contextual factor is that, before trials, phar- maceutical companies invest considerable time and money establishing that the drug af- fects relevant biological mediators to a much
  • 15. greater extent than behavioral researchers in- vest in showing that their interventions affect psychosocial mediators. Granted, industry has vastly more resources. But we suggest that key differences also reside in the nature of the interventions. Standard medical interventions (e.g., drugs or surgery) are presumed to be robust, readily transferable from setting to setting, and to work approximately equally across broad cate- gories of patients. Clinicians exercise discretion about dosage and surgeons vary in experience, but it is still presumed that the pill is the same whoever administers it Medicinal and surgical protocols can be relatively precisely defined, and adherence to them can be more easily monitored relative to behavioral interventions. Behavioral interventions are more difficult to define and standardize in part because of the inherent interactivity with client characteristics, preferences, and behaviors. This is exacer- bated when behavioral interventions are deliv- ered by staff whose training and expertise fall outside of behavioral science. In efficacy trials, research st£iff usually bring expertise in behav- ioral intervention and ensure that it is imple- mented consistently. This level of quality con- trol and standardization is typically absent among regular health care staff implementing interventions for effectiveness trials. Tbere are 2 underl}Tng differences between efficacy and effectiveness approaches that we feel are responsible for the current state of af- fairs. Tbe first is that in an effort to enhance
  • 16. internal validity and control extraneous fac- tors, the tradition in efficacy studies has been to simplify and narrow settings, conditions, participants, and a variety of other factors. There is nothing inherently wrong with this methodological approach, and the tradition of reductionism (e.g., understanding effects by isolating them and removing or controlling other factors) has contributed much to the ad- vancement of science and theory.^' The prob- lem is that usually the longer-range intent is to generalize beyond the narrow conditions of the efficacy trial. In effectiveness trials, an in- tervention must be robust across a variety of different participants, settings, conditions, and other less controlled factors. Equally impor- tant, it must appeal to a broad "defined popu- lation" or target audience. A dassic example of the typical differences between a health care efficacy study and an ef- fectiveness trial concerns subject selection. In a tightly controlled efficacy trial, only highly mo- tivated, homogenous self-selected volunteers who do not have any complications or other comorbid conditions are eligible (to control for potential confounding factors). Then, following success in such an efficacy study, we expect the same intervention to appeal to and be ef- fective in a much broader cross-section of par- ticipants, many of whom have comorbid condi- tions and may not volunteer for treatment The second key difference between effi- cacy and effectiveness trials concerns how
  • 17. settings and contextual factors are treated. In efficacy studies, the usual approach is to con- trol variance by restricting the setting to one set of circumstances—for example, one partic- ular clinic (which often includes intervention experts). In contrast, a key characteristic of ef- fectiveness trials is to produce robust effects and to understand variation in outcomes across heterogeneous settings and delivery agents. Therefore, it should not be surprising when the results of an intervention are effica- cious under a highly specific set of circum- stances but fail to replicate across a vkide vari- ety of settings, conditions, and intervention agents in effectiveness research. SHALL THE TWAIN EVER MEET? From the above discussion, it may seem hopeless to expect congruence across findings August 2003, Vol 93, No. 8 | American Journal of Public Health Glasgow et al. Peer Reviewed | Public Health Matters | 1263 fi'om efficacy and effectiveness studies. Some might go so far as to suggest, as one reviewer of this manuscript did, that perhaps efficacy studies should be abandoned altogether. We are optimistic, however, that there are solu- tions to the present disconnect. In brief, we need to embrace and study the complexity of the world, rather than attempting to ignore or reduce it by studying only isolated (and often unrepresentative) situations.''^ What is
  • 18. needed is a "science of larger social units"'''' that takes into account and analyzes the so- cial context(s) in which experiments are con- ducted. To advance our present state of sci- ence, the question that we need to ask of both efficacy and effectiveness studies is "What are the characteristics of interventions that can (a) reach large numbers of people, especially those who can most benefit, (b) he broadly adopted by different settings (work- site, school, health, or community), (c) be con- sistently implemented by different staff mem- bers with moderate levels of training and expertise, and (d) produce replicable and long-lasting effects (and minimal negative im- pacts) at a reasonable cost?" This suggested focus has important implica- tions. It implies that we need to consider not only individual participants but also the set- tings within which they reside and receive treatment This move to a multilevel ap- proach is consistent with developments in several fields, and methodologies for how to handle such factors are available. There is not only a rich conceptual history to the study of generalization"*"* and of representative or pur- poseful sampling,''̂ '̂ ^ but also statistical meth- ods for handling these contextual factors.''̂ This comes down to an issue of generaliza- tion.̂ * The prevailing view seems to be that efficacy studies should focus only on interned validity and theoretical process mechanisms, and that issues of external validity should be left until later effectiveness studies. In con-
  • 19. trast, we argue that issues of moderating vari- ables (external validity) need to be addressed in both efficacy cind effectiveness studies. Brewer''* conceptualizes such sodal context factors as moderating variables that infiuence the conclusions that can be drawn about the efficacy of an intervention. Moderating vari- ahles (e.g., race/ethnicity, socioeconomic sta- tus, type of setting or intervention agent) are relatively stable factors that interact with the intervention or change the effect of the pro- gram. Researchers should consider elevating hypotheses related to moderator variables to primary aims. WHAT CAN BE DONE? DISCUSSION AND RECOMMENDATIONS It is difficult to change established practice patterns, regardless of whether they be of cli- nidans, researchers, or funding agendes. It cannot reasonably be expected that many sd- entists will quickly discontinue practices in which they have been trained and become comfortable. It is also more efficient, and much more under one's control, to continue to conduct efficacy studies without consider- ing moderating variables or external validity because "the purpose is to study interventions under ideal conditions." However, as illus- trated above,, this is only true if one does not intend to generalize one's conclusions beyond the very limited sample and conditions of a given study,'•^' which is hardly ever the case in health promotion research.
WHAT CAN BE DONE? DISCUSSION AND RECOMMENDATIONS

It is difficult to change established practice patterns, regardless of whether they be those of clinicians, researchers, or funding agencies. It cannot reasonably be expected that many scientists will quickly discontinue practices in which they have been trained and become comfortable. It is also more efficient, and much more under one's control, to continue to conduct efficacy studies without considering moderating variables or external validity because "the purpose is to study interventions under ideal conditions." However, as illustrated above, this is only true if one does not intend to generalize one's conclusions beyond the very limited sample and conditions of a given study, which is hardly ever the case in health promotion research.

There is an increasingly well-documented disparity between the large amount of information on efficacy and the very small amount of information on effectiveness and representativeness. To produce significant improvement in the current state of affairs, changes will be necessary on the part of researchers, funding organizations, journal reviewers, and grant review panels. We propose 4 specific changes: 2 focus on researchers, 1 on journal editors, and 1 on funding organizations.

1. Researchers should pay increased attention to moderating factors in both efficacy and effectiveness research. Table 2 outlines how data collection and information about moderating factors, such as participant characteristics (reach) and setting characteristics (adoption), can be incorporated into both efficacy and effectiveness research in a manner appropriate to that phase. Using the RE-AIM framework, we suggest that researchers consider the types of settings, intervention agents, and individuals that they wish their program to be used by when designing and evaluating interventions. During efficacy studies, purposeful or oversampling strategies can be used to include both specific end-user groups (e.g., minorities, less educated) and settings of interest. A critical concern for broader application, and an integral part of Flay's original description, was measurement of potential harmful outcomes.
This part of his definition has seldom been addressed, but it needs to be.

Participatory research methods, including developing one's intervention ideas collaboratively with members of the intended audience (individuals, intervention agents, and organization decisionmakers), should not be left for later phases of research but built into efficacy studies. More formal measures of adoption and setting-level maintenance may need to wait until later effectiveness studies (Table 2), but both qualitative and quantitative "proxy measures" of these factors can and should be addressed in efficacy studies. Such information can lead to better tailoring of interventions to organizational culture in the same way that tailoring of intervention at the individual level has led to increased success. A final recommendation for both efficacy and effectiveness studies is to include a variety of intervention agents, to describe their backgrounds and levels of experience/expertise with regard to the target behavior, and to report on potential differences in implementation and outcomes associated with these differences.

As illustrated in Table 2, issues pertaining to moderating factors, and eventual translation into practice, are best addressed during the planning phases of research. RE-AIM, or other evaluation models, can be used to help plan and select samples, interventions, settings, and agents in ways that make it more likely that results will be replicated in later studies.
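To make the kind of reach and adoption reporting recommended here (and summarized in Table 2 below) more concrete, the following minimal sketch, using entirely invented counts and illustrative variable names, shows the simple proportions a report could present alongside raw numbers.

    # Hypothetical recruitment counts for a worksite program (all numbers invented).
    eligible_individuals = 2500   # individuals who met inclusion criteria
    enrolled_individuals = 600    # individuals who actually enrolled
    settings_approached = 40      # worksites invited to host the program
    settings_adopting = 12        # worksites that agreed to deliver it

    reach = enrolled_individuals / eligible_individuals    # 600 / 2500 = 0.24
    adoption = settings_adopting / settings_approached     # 12 / 40 = 0.30

    print(f"Reach (participation among eligible individuals): {reach:.0%}")
    print(f"Adoption (participation among approached settings): {adoption:.0%}")

Reporting these denominators and proportions, together with the characteristics of participants and settings, is what allows readers to judge representativeness.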
TABLE 2 - Ways to Address RE-AIM Issues in Efficacy and Effectiveness Studies

Reach
Efficacy trials (Phase III research): Have specified inclusion criteria or purposeful selection, but participants will be volunteers in a specific research setting. Report exclusions, participation rates, dropouts, and representativeness on key characteristics.
Effectiveness trials in defined populations (Phase IV research): Include all relevant members of a defined population. Report exclusions, participation rates, dropouts, and representativeness.

Efficacy or Effectiveness
Efficacy trials: Measure outcomes using intent-to-treat assumptions or imputation of missing values and a high level of rigor. Assess both positive (anticipated) and negative (unintended) outcomes. Report effects of moderator variables.
Effectiveness trials: Address as above, though measures are usually more limited. Include economic outcomes.

Adoption
Efficacy trials: Have potential adoptees assess fit of the prototype intervention to their setting. Include "proxy measures" of adoption, such as participation among those staff members of a system who will participate in the study.
Effectiveness trials: Assess willingness of stakeholders from multiple settings to adopt and adapt the program. Report on representativeness of settings, participation rate, and reasons for declining.

Implementation
Efficacy trials: Collect data on likely treatment demands. Evaluate delivery of the intervention protocol by different intervention agents (usually research staff).
Effectiveness trials: Assess staff ability to implement key components of the intervention in routine practice. Evaluate consistency of intervention delivery by agency staff who are not part of the research team.

Maintenance
Efficacy trials: Assess recidivism among participants. Engage potential community settings in strategic planning efforts from the outset.
Effectiveness trials: Document the extent to which the research protocol is retained by the setting/agency once the formal study is completed. Assess continuation of the program over time, and especially after the research phase concludes. Systematically program for and evaluate the level of institutionalization of the program elements after formal study assistance is terminated.

2. Realize that public health impact involves more than just efficacy. Our training and current review criteria all emphasize producing large effect sizes under tightly controlled conditions. To make a real-world impact, several other criteria are also necessary.

a. At the individual level, several research groups have proposed that Impact = Reach (R) x Efficacy (E). It is not enough to produce a highly efficacious intervention. To have broad public health impact, an intervention must also have high reach. To the Impact = R x E formula, we would add a third component: implementation (I). As discussed by Basch et al., a program cannot be effective if it is not implemented. Thus, we propose that individual-level Impact = R x E x I.

b. An individual-level focus is, however, not sufficient. An intervention also has to be acceptable to and adopted by a variety of intervention settings, and to be implemented relatively consistently by different intervention agents. In other words, the parallel setting- or organizational-level impact formula should be Organizational Impact (OI) = Adoption (A) x Implementation (I). Several authors have discussed issues of nesting and setting factors and how to adjust individual-level effects for issues of nonindependence. However, to our knowledge, the A x I = OI formula for estimating the impact of an intervention across settings has not been discussed, with the exception of an early related proposal by Kolbe that Impact = Effectiveness x Dissemination x Maintenance. It is important to emphasize that, in terms of overall public health effect, adoption and implementation are as important as reach and efficacy, and that we need more emphasis on studies of organizational- and system-level factors.
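As a purely illustrative reading of these formulas (not a calculation from the article), the short sketch below treats reach, efficacy, implementation, and adoption as proportions and multiplies them as proposed; every number is invented.

    # Illustrative multiplication of the proposed impact formulas (numbers invented).
    reach = 0.24            # proportion of the eligible population that participates
    efficacy = 0.50         # proportion of participants achieving the target outcome
    implementation = 0.70   # proportion of the protocol delivered as intended
    adoption = 0.30         # proportion of approached settings that adopt the program

    individual_impact = reach * efficacy * implementation   # Impact = R x E x I
    organizational_impact = adoption * implementation       # OI = A x I

    print(f"Individual-level impact: {individual_impact:.3f}")           # 0.084
    print(f"Organizational-level impact: {organizational_impact:.2f}")   # 0.21

Under these hypothetical numbers, even a program with respectable efficacy (50%) yields a small population-level impact once modest reach and implementation are factored in, which is exactly the article's argument for weighting those dimensions alongside efficacy.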
3. Include external validity reporting criteria in author guidelines. Within medicine, a widely agreed upon set of criteria for reporting the results of randomized clinical trials has been developed. Known as the CONSORT criteria, these reporting standards have been widely adopted by leading medical journals and have helped to increase the quality of published research. As helpful as the CONSORT criteria are, they are almost exclusively concerned with issues of internal validity. Only 1 out of 22 recommendations directly addresses external validity issues; in contrast to the other very specific and concrete criteria, it simply states "Generalizability (external validity) of the trial findings" and provides no guidance as to how this issue should be reported.

We propose the following 7 additions to the existing CONSORT criteria, which would help greatly to increase awareness of and reporting on external validity. If such criteria were widely adopted, it would greatly enhance the quality and information value not only of individual studies but also of evidence-based reviews and meta-analyses. The current state of health promotion research is so biased toward reporting on internal validity issues that it is difficult to draw any conclusions about generalization. In particular, there has been a serious lack of attention to issues of representativeness, especially at the level of settings and intervention agents. This becomes even more problematic when the evidence upon which meta-analyses and practice recommendations are based consists largely or solely of efficacy studies of unknown generalizability.
The 7 items that we propose below should apply to both efficacy and effectiveness studies. They would not require a great deal of additional journal space and are described below in the same format as existing CONSORT items. These criteria were recently added by the Evidence-Based Behavioral Medicine Committee of the Society of Behavioral Medicine to their recommendations for reporting on behavioral intervention studies.

a. State the target population to which the study intends to generalize.
b. Report the rate of exclusions, the participation rate among those eligible, and the representativeness of participants.
c. Report on methods of recruiting study settings, including the exclusion rate, the participation rate among those approached, and the representativeness of settings studied.
d. Describe the participation rate and characteristics of those delivering the intervention. State the population of intervention agents that one would see eventually implementing the program and how the study interventionists compare with those who will eventually deliver the intervention.
e. Report the extent to which different components of the intervention are delivered (by different intervention agents) as intended in the protocol.
f. Report the specific time and costs required to deliver the intervention.
g. Report on organizational-level continuance, discontinuance, or adaptation in modified form of the intervention once the trial is completed, and on individual-level maintenance of results.

We think that such information should be of relevance not only to researchers but also to clinicians, health directors, and decisionmakers responsible for selecting prevention and health promotion programs. In fact, we think that these parties already make implicit use of these dimensions. Making them explicit should aid reading of the literature and guide more informed program selections.
4. Increase funding for research focused on moderating variables, external validity, and robustness. The large imbalance between the extent to which health promotion investigations focus on internal validity and the extent to which they focus on external validity will not be remedied without substantial changes in funding priorities. Table 3 lists several recommendations for funding organizations that would help correct this imbalance. These recommendations would have 2 effects. The first would be to increase the small number of well-conducted effectiveness studies now available. The second would be to increase the relevance of efficacy studies for practice by focusing attention on moderating variables and the range of conditions, settings, intervention agents, and participants to which the results apply. Such refocused funding priorities should also increase understanding of health disparities and help reduce them, since more research would be conducted involving minorities and low-income settings. Finally, funding organizations might explicitly have reviewers rate proposals on their likely robustness or potential for widespread application and impact. This could be done by methods described in the Guide to Community Preventive Services.

TABLE 3 - Recommendations for Funding Organizations to Accelerate Transfer of Research to Practice
• Solicit proposals that investigate interventions in multiple settings, and especially settings that are representative of those to which the program is intended to generalize.
• Fund innovative investigations of ways to enhance reach, adoption, implementation, and maintenance (which have all been de-emphasized relative to efficacy).
• Require standard and comprehensive reporting of exclusions, participation rates, and representativeness of both participants and settings.
• Fund cross-over designs, sequential program changes, replications, multiple baseline, and other designs in addition to randomized controlled trials that can efficiently and practically address key issues in translation.
• Invite programs that investigate and can demonstrate quality implementation and outcomes across a wide range of intervention agents similar to those present in applied settings.
• Require a maintenance/sustainability phase in research projects and implementation of plans to enhance institutionalization once the original research has been completed.
• Fund competitive proposals to investigate long-term effects and sustainability of initially successful interventions.
• Encourage innovation in intervention design and standardization in reporting on process and outcome measures at both individual and setting/intervention agent levels.
• Request more cost-effectiveness studies and other economic evaluations that are of interest to program administrators and policymakers.

CONCLUSIONS

In summary, at least part of the reason for the slow and uneven translation of research findings into practice in the health promotion sciences is lack of attention to issues of generalization and external validity (moderating factors that potentially limit the robustness of interventions).
There also needs to be a greater understanding of, and research on, setting-level social contextual factors. If these issues were addressed in the design and reporting of efficacy as well as effectiveness studies, it would greatly advance the current quality of research and our knowledge base. These issues are to a large extent under the control of researchers, reviewers, and funding organizations, and we have listed actions that each of these parties can take to facilitate better transfer from efficacy to effectiveness research.

About the Authors
Russell E. Glasgow and Alfred C. Marcus are with Kaiser Permanente Colorado and AMC Cancer Research Center, Denver. Edward Lichtenstein is with the Oregon Research Institute, Eugene.
Requests for reprints should be sent to Russell E. Glasgow, PhD, PO Box 349, Canon City, CO 81215 (e-mail: [email protected]).
This article was accepted October 24, 2002.

Contributors
All authors produced original drafts of sections of the manuscript, extensively edited each other's contributions, and made substantive contributions to the ideas expressed in the manuscript.

Acknowledgments
This project was supported by The Robert Wood Johnson Foundation (grant 030102) and the Agency for Healthcare Research and Quality (grant HS 10123). We acknowledge the contributions of Allan Best, PhD, Brian Flay, PhD, Lisa Klesges, PhD, and Thomas M. Vogt, MD, MPH, for their helpful comments on an earlier draft of the manuscript.
References
1. Clark GN. Improving the transition from basic efficacy research to effectiveness studies: methodological issues and procedures. J Consult Clin Psychol. 1995;63:718-725.
2. Weisz JR, Weisz B, Donenberg GR. The lab versus the clinic: effects of child and adolescent psychotherapy. Am Psychol. 1992;47:1578-1585.
3. Briss PA, Zaza S, Papaioanou M, et al. Developing an evidence-based Guide to Community Preventive Services-methods. Am J Prev Med. 2000;18(suppl 1):35-43.
4. Centers for Disease Control and Prevention. The Guide to Community Preventive Services. 2002. Available at: http://www.thecommunityguide.org. Accessed March 11, 2003.
5. Whitlock EP, Orleans CT, Pender N, Allan J. Evaluating primary care behavioral counseling interventions: an evidence-based approach. Am J Prev Med. 2002;22:267-284.
6. Department of Health and Human Services. Healthy People 2000. 2002. Available at: http://www.health.gov/healthypeople/data/PROGRVW/default.htm. Accessed March 11, 2003.
7. Smedley BD, Syme SL. Promoting health: intervention strategies from social and behavioral research. Am J Health Promot. 2001;15:149-166.
8. Integration of Health Behavior Counseling Into Routine Medical Care. Washington, DC: Center for the Advancement of Health; 2001.
9. Committee on Quality Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
10. Joyner L, McNeeley S, Kahn R. ADA's provider recognition program. HMO Pract. 1997;11:168-170.
11. Glasgow RE, Strycker LA. Level of preventive practices for diabetes management: patient, physician, and office correlates in two primary care samples. Am J Prev Med. 2000;19:9-14.
12. Health Behavior Change in Managed Care: A Status Report. Washington, DC: Center for the Advancement of Health; 2000.
13. Kottke TE, Edwards BS, Hagen PT. Counseling: implementing our knowledge in a hurried and complex world. Am J Prev Med. 1999;17:295-298.
14. Woolf SH, Atkins D. The evolving role of prevention in health care: contributions of the US Preventive Services Task Force. Am J Prev Med. 2001;20:13-20.
15. Orlandi MA. Promoting health and preventing disease in health care settings: an analysis of barriers. Prev Med. 1987;16:119-130.
16. Green LW. From research to "best practices" in other settings and populations. Am J Health Behav. 2001;25:165-178.
17. Greenwald P, Cullen JW. The new emphasis in cancer control. J Natl Cancer Inst. 1985;74:543-551.
18. Flay BR. Efficacy and effectiveness trials (and other phases of research) in the development of health promotion programs. Prev Med. 1986;15:451-474.
19. Basch CE, Sliepcevich EM, Gold RS. Avoiding type III errors in health education program evaluations. Health Educ Q. 1985;12:315-331.
20. King AC. The coming of age of behavioral research in physical activity. Ann Behav Med. 2001;23:227-228.
21. Glasgow RE, Bull SS, Gillette C, Klesges LM, Dzewaltowski DA. Behavior change intervention research in health care settings: a review of recent reports with emphasis on external validity. Am J Prev Med. 2002;23:62-69.
22. Oldenburg B, Ffrench BF, Sallis JF. Health behavior research: the quality of the evidence base. Am J Health Promot. 2000;14:253-257.
23. Hiatt RA, Rimer BK. A new strategy for cancer control research. Cancer Epidemiol Biomarkers Prev. 1999;8:957-964.
24. Kerner JF. Closing the Gap Between Discovery and Delivery. Washington, DC: National Cancer Institute; 2002.
25. Teutsch SM. A framework for assessing the effectiveness of disease and injury prevention. MMWR Recomm Rep. 1992;41(RR-3):1-12.
26. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89:1322-1327.
27. Glasgow RE, McKay HG, Piette JD, Reynolds KD. The RE-AIM framework for evaluating interventions: what can it tell us about approaches to chronic illness management? Patient Educ Couns. 2001;44:119-127.
28. Glasgow RE, Klesges LM, Dzewaltowski DA, Bull SS, Estabrooks P. The future of health behavior change research: what is needed to improve translation of research into health promotion practice? Ann Behav Med. In press.
29. Estabrooks PA, Dzewaltowski DA, Glasgow RE, Klesges LM. How well has recent literature reported on important issues related to translating school-based health promotion research into practice? J School Health. 2003;73:21-28.
30. Rogers EM. Diffusion of Innovations. 4th ed. New York, NY: Free Press; 1995.
31. Mook DG. In defense of external invalidity. Am Psychol. 1983;38:379-387.
32. Axelrod R, Cohen MD. Harnessing Complexity: Organizational Implications of a Scientific Frontier. New York, NY: Simon & Schuster; 2000.
33. Biglan A, Glasgow RE, Singer G. The need for a science of larger social units: a contextual approach. Behav Ther. 1990;21:195-215.
34. Gleser GC, Cronbach LJ, Rajaratnam N. Generalizability of scores influenced by multiple sources of variance. Psychometrika. 1965;30:1373-1385.
35. Shadish WR, Cook TD, Campbell DT. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston, Mass: Houghton Mifflin; 2002.
36. Brunswik E. Representative design and probabilistic theory in functional psychology. Psychol Rev. 1955;62:217.
37. Murray DM. Statistical models appropriate for designs often used in group-randomized trials. Stat Med. 2001;20:1373-1385.
38. Cook TD, Campbell DT. Quasi-Experimentation: Design and Analysis Issues for Field Settings. Chicago, Ill: Rand McNally; 1979.
39. Brewer MB. Research design and issues of validity. In: Reis HT, Judd CM, eds. Handbook of Research Methods in Social and Personality Psychology. New York, NY: Cambridge University Press; 2000:3-39.
40. Oldenburg BF, Sallis JF, Ffrench ML, Owen N. Health promotion research and the diffusion and institutionalization of interventions. Health Educ Res. 1999;14:121-130.
41. Skinner CS, Campbell MK, Rimer BK, Curry S, Prochaska JO. How effective is tailored print communication? Ann Behav Med. 1999;21:290-298.
42. Kreuter MW, Strecher VJ, Glassman B. One size does not fit all: the case for tailoring print materials. Ann Behav Med. 1999;21:276-283.
43. Glasgow RE, Toobert DJ, Hampson SE, Strycker LA. Implementation, generalization, and long-term results of the "Choosing Well" diabetes self-management intervention. Patient Educ Couns. 2002;48:115-122.
44. Abrams DB, Emmons KM, Linnan L, Biener L. Smoking cessation at the workplace: conceptual and practical considerations. In: Richmond R, ed. Interventions for Smokers: An International Perspective. New York, NY: Williams & Wilkins; 1994:137-169.
45. Prochaska JO, Velicer WF, Fava JL, Rossi JS, Tsoh JY. Evaluating a population-based recruitment approach and a stage-based expert system intervention for smoking cessation. Addict Behav. 2001;26:583-602.
46. Jeffery RW. Risk behaviors and health: contrasting individual and population perspectives. Am Psychol. 1989;44:1194-1202.
47. Lichtenstein E, Glasgow RE. A pragmatic framework for smoking cessation: implications for clinical and public health programs. Psychol Addict Behav. 1997;11:142-151.
48. Elbourne DR, Campbell MK. Extending the CONSORT statement to cluster randomized trials: for discussion. Stat Med. 2001;20:489-496.
49. Kolbe LJ. Increasing the impact of school health promotion programs: emerging research perspectives. Health Educ. 1986;17:49-52.
50. Moher D, Schulz KF, Altman D. The CONSORT statement: revised recommendations for improving the quality of reports. JAMA. 2001;285:1987-1991.
51. Zaza S, Lawrence RS, Mahan CS, Fullilove M, et al. Scope and organization of the Guide to Community Preventive Services. Task Force on Community Preventive Services. Am J Prev Med. 2000;18(suppl 1):27-34.
52. Bull SS, Gillette C, Glasgow RE, Estabrooks P. Worksite health promotion research: to what extent can we generalize the results and what is needed to translate research to practice? Health Educ Behav. In press.
53. Davidson K, Goldstein M, Kaplan R, et al. Evidence-based behavioral medicine: what is it and how do we get there? Ann Behav Med. In press.
54. Green LW, Kreuter MW. Commentary on the emerging Guide to Community Preventive Services from a health promotion perspective. Am J Prev Med. 2000;18:7-9.
55. Institute of Medicine. Promoting Health: Intervention Strategies From Social and Behavioral Research. Washington, DC: National Academy Press; 2000.
56. Green LW, Kreuter MW. Health Promotion Planning: An Educational and Ecological Approach. 3rd ed. Mountain View, Calif: Mayfield Publishing Co; 1999.

Healthcare Program/Policy Evaluation Analysis Template
Use this document to complete the Module 5 Assessment, Assessing a Healthcare Program/Policy Evaluation.

Healthcare Program/Policy Evaluation
Description
How was the success of the program or policy measured?
How many people were reached by the program or policy selected? How much of an impact was realized with the program or policy selected?
At what point in program implementation was the program or policy evaluation conducted?
What data was used to conduct the program or policy evaluation?
What specific information on unintended consequences was identified?
What stakeholders were identified in the evaluation of the program or policy? Who would benefit most from the results and reporting of the program or policy evaluation? Be specific and provide examples.
Did the program or policy meet the original intent and objectives? Why or why not?
Would you recommend implementing this program or policy in your place of work? Why or why not?
Identify at least two ways that you, as a nurse advocate, could become involved in evaluating a program or policy after one year of implementation.
General Notes/Comments
Healthcare Program/Policy Evaluation Analysis Template © 2021 Walden University, LLC

Rubric Detail
Name: NURS_6050_Module05_Week10_Assignment_Rubric

Program/Policy Evaluation
Based on the program or policy evaluation you selected, complete the Healthcare Program/Policy Evaluation Analysis Template. Be sure to address the following:
· Describe the healthcare program or policy outcomes.
· How was the success of the program or policy measured?
· How many people were reached by the program or policy selected? How much of an impact was realized with the program or policy selected?
· At what point in time in program implementation was the program or policy evaluation conducted?
Excellent, 32 (32%) - 35 (35%): Using sufficient evidence, response clearly and accurately describes the healthcare program or policy outcomes. Response accurately and clearly explains how the success of the program or policy was measured. Response accurately and clearly describes how many people were reached by the program or policy and accurately describes the impact of the program or policy. Response accurately and clearly indicates the point at which time the program or policy evaluation was conducted.
Good, 28 (28%) - 31 (31%): Using sufficient evidence, response accurately describes the healthcare program or policy outcomes. Response accurately explains how the success of the program or policy was measured. Response accurately describes how many people were reached by the program or policy and accurately describes the impact of the program or policy. Response accurately indicates the point at which time the program or policy evaluation was conducted.
Fair, 25 (25%) - 27 (27%): Description of the healthcare program or policy outcomes is inaccurate or incomplete. Explanation of how the success of the program or policy was measured is inaccurate or incomplete. Description of how many people were reached by the program or policy and the impact is vague or inaccurate. Response vaguely describes the point at which the program or policy evaluation was conducted.
Poor, 0 (0%) - 24 (24%): Description of the healthcare program or policy outcomes is inaccurate and incomplete or is missing. Explanation of how the success of the program or policy was measured is inaccurate and incomplete or is missing. Description of how many people were reached by the program or policy and the associated impacts is vague and inaccurate or is missing. Response on the point at which time the program or policy evaluation was conducted is missing.

Reporting of Program/Policy Evaluations
· What data was used to conduct the program or policy evaluation?
· What specific information on unintended consequences was identified?
· What stakeholders were identified in the evaluation of the program or policy? Who would benefit the most from the results and reporting of the program or policy evaluation? Be specific and provide examples.
· Did the program or policy meet the original intent and objectives? Why or why not?
· Would you recommend implementing this program or policy in your place of work? Why or why not?
· Identify at least two ways that you, as a nurse advocate, could become involved in evaluating a program or policy after 1 year of implementation.
Excellent, 45 (45%) - 50 (50%): Response clearly and thoroughly explains in detail: specific information on outcomes and unintended consequences identified through the program or policy evaluation; the stakeholders involved in the program or policy evaluation; who would benefit most from the results and reporting of the program or policy evaluation; whether the program met the original intent and outcomes, including an accurate and detailed explanation of the reasons supporting why or why not; whether the program should be implemented, including an accurate and detailed explanation of the reasons supporting why or why not; and at least two ways that the nurse advocate could become involved in the evaluation of the program or policy after 1 year of implementation.
Good, 40 (40%) - 44 (44%): Using sufficient evidence, response accurately identifies the data used to conduct the program or policy evaluation. Response explains in detail specific information on outcomes and unintended consequences identified through the program or policy evaluation. Response explains in detail the stakeholders involved in the program or policy evaluation. Response explains who would benefit most from the results and reporting of the program or policy evaluation. Response includes an accurate explanation of whether the program met the original intent and outcomes, including an accurate explanation of the reasons supporting why or why not. Response includes an accurate explanation of whether the program should be implemented, including an accurate explanation of the reasons supporting why or why not. Response includes an accurate explanation of two ways that the nurse advocate could become involved in the evaluation of the program or policy after 1 year of implementation.
Fair, 35 (35%) - 39 (39%): Response vaguely or inaccurately identifies the data used to conduct the program or policy evaluation. Explanation of specific information on outcomes and unintended consequences identified through the program or policy evaluation is vague or incomplete. Explanation of the stakeholders involved in the program or policy evaluation is vague or inaccurate. Explanation of who would benefit most from the results and reporting of the program or policy evaluation is vague or inaccurate. Explanation of whether the program/policy met the original intent and outcomes, and the reasons why or why not, is incomplete or inaccurate. Explanation of whether the program or policy should be implemented, and the reasons why or why not, is incomplete or inaccurate. Explanation of ways that the nurse advocate could become involved in the evaluation or policy after 1 year of implementation is incomplete or inaccurate.
Poor, 0 (0%) - 34 (34%): Identification of the data used to conduct the program or policy evaluation is vague and inaccurate or is missing. Response includes a vague and incomplete, or missing, explanation of: specific information on outcomes and unintended consequences identified through the program or policy evaluation; the stakeholders involved in the program or policy evaluation; who would benefit most from the results and reporting of the program or policy evaluation; whether the program or policy met the original intent and outcomes, and the reasons why or why not; whether the program or policy should be implemented, and the reasons why or why not; and ways that the nurse advocate could become involved in the evaluation or policy after 1 year of implementation.

Written Expression and Formatting - Paragraph Development and Organization
Paragraphs make clear points that support well-developed ideas, flow logically, and demonstrate continuity of ideas. Sentences are carefully focused, neither long and rambling nor short and lacking substance. A clear and comprehensive purpose statement and introduction is provided which delineates all required criteria.
Excellent, 5 (5%): Paragraphs and sentences follow writing standards for flow, continuity, and clarity. A clear and comprehensive purpose statement, introduction, and conclusion is provided which delineates all required criteria.
Good, 4 (4%): Paragraphs and sentences follow writing standards for flow, continuity, and clarity 80% of the time. Purpose, introduction, and conclusion of the assignment is stated, yet is brief and not descriptive.
Fair, 3 (3%): Paragraphs and sentences follow writing standards for flow, continuity, and clarity 60%-79% of the time. Purpose, introduction, and conclusion of the assignment is vague or off topic.
Poor, 0 (0%) - 2 (2%): Paragraphs and sentences follow writing standards for flow, continuity, and clarity less than 60% of the time. Purpose, introduction, and conclusion of the assignment is incomplete or missing.

Written Expression and Formatting - English Writing Standards
Correct grammar, mechanics, and proper punctuation.
Excellent, 5 (5%): Uses correct grammar, spelling, and punctuation with no errors.
Good, 4 (4%): Contains a few (1-2) grammar, spelling, and punctuation errors.
Fair, 3 (3%): Contains several (3-4) grammar, spelling, and punctuation errors.
Poor, 0 (0%) - 2 (2%): Contains many (≥5) grammar, spelling, and punctuation errors that interfere with the reader's understanding.

Written Expression and Formatting
The paper follows correct APA format for title page, font, spacing, parenthetical/in-text citations, and reference list.
Excellent, 5 (5%): Uses correct APA format with no errors.
Good, 4 (4%): Contains a few (1-2) APA format errors.
Fair, 3 (3%): Contains several (3-4) APA format errors.
Poor, 0 (0%) - 2 (2%): Contains many (≥5) APA format errors.

Total Points: 100