6410 Application 3: Becoming a Leader in the Translation of Evidence to Practice
Note: Use an APA Level 1 header for each area noted below in blue (a Level 1 header is centered, bolded, in upper and lower case letters—see APA manual section 3.03). Follow APA format and include a minimum of 5 scholarly references less than 5 years old.
Include a BRIEF Introduction and Summary in addition to the headers below. DO NOT EXCEED THREE PAGES, AND CITE OFTEN THROUGHOUT THE PAPER.
Grading Area
Points Possible
Points Earned
Potential areas for earning points:
Header: Efforts to Increase Finance and Economic Knowledge
How you would continue to increase your knowledge and awareness of financial, economic, and other concerns related to new practice approaches
2
Header: Use of Evidence to Improve Practice
How translating evidence would enable you to affect or strengthen health care delivery and nursing practice
2
Header: Advocating for EBP Policy Change
How you would advocate for the use of new evidence-based practice approaches through the policy arena
2
Potential areas for losing points:
Grammar, Spelling, and APA errors
Up to 2 pt. deduction
Went Over Page Limit (2-3 pages max)
Up to 2 pt. deduction
Improper credit & citation issue
(See Turnitin Report)
1-6 pt. deduction
Late Submission
20% deduction (1.2 pts) per day late (per syllabus)
6 Total Points Possible
Total Points Earned
P.S. Under the first header, “Efforts to Increase Finance and Economic Knowledge,” please refer to the attached Week 6 discussion you did for me; however, you did not include specific numbers and statistics. Below is the critique the professor made on that area. Please read through the critique and try to incorporate it into this portion of the paper.
Dear student: Thank you for your contribution to this week’s discussion. You brought forward potential costs associated with increased mobilization of ICU patients, namely the need for more nurse time. Do you have some hard numbers you can provide on the potential cost of this? Do you have any local or national information on the cost of not mobilizing the patients (longer stays, increased infection, readmission)? Calculating the approximate cost associated with the practice change versus the cost of not changing is important. This will help stakeholders see the value in the investment.
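The comparison the professor asks for can be sketched as a simple back-of-the-envelope calculation. All figures in the sketch below are hypothetical placeholders, not published statistics; local or national data would need to be substituted before presenting the numbers to stakeholders:

```python
# Hypothetical cost comparison for an early-mobilization practice change:
# cost of the change (added nurse time) versus cost of not changing
# (longer stays, infections, readmissions). Every number is a placeholder.

def annual_cost_of_change(extra_nurse_hours_per_patient, hourly_nurse_cost,
                          patients_per_year):
    """Approximate yearly cost of the practice change (added nurse time)."""
    return extra_nurse_hours_per_patient * hourly_nurse_cost * patients_per_year

def annual_cost_of_not_changing(avoided_icu_days, cost_per_icu_day,
                                infections_avoided, cost_per_infection,
                                readmissions_avoided, cost_per_readmission):
    """Approximate yearly cost avoided if patients are mobilized."""
    return (avoided_icu_days * cost_per_icu_day
            + infections_avoided * cost_per_infection
            + readmissions_avoided * cost_per_readmission)

change = annual_cost_of_change(1.5, 45.0, 400)   # 1.5 h/patient, $45/h, 400 patients
no_change = annual_cost_of_not_changing(
    0.5 * 400, 3000.0,   # half a day shorter stay per patient
    10, 20000.0,         # 10 fewer hospital-acquired infections
    15, 12000.0)         # 15 fewer readmissions
print(f"Cost of change: ${change:,.0f}; cost avoided: ${no_change:,.0f}; "
      f"net benefit: ${no_change - change:,.0f}")
```

Presenting the investment (nurse time) next to the avoided costs in this way is exactly what the professor means by helping stakeholders "see the value in the investment."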
DISCUSSION PAPER
Evidence-based practice models for organizational change: overview
and practical applications
Marjorie A. Schaffer, Kristin E. Sandau & Lee Diedrick
Accepted for publication 19 July 2012
Correspondence to M.A. Schaffer:
e-mail: [email protected]
Marjorie A. Schaffer PhD RN
Professor of Nursing
Bethel University, St. Paul, Minnesota, USA
Kristin E. Sandau PhD RN CNE
Professor of Nursing
Bethel University, St. Paul, Minnesota, USA
Lee Diedrick MAN RN C-NIC
Clinical Educator
Children’s Hospitals and Clinics of
Minnesota, St. Paul, Minnesota, USA
Schaffer, M. A., Sandau, K. E., & Diedrick, L. (2013). Evidence-based practice models for organizational change: Overview and practical applications. Journal of Advanced Nursing, 69(5), 1197–1209. doi: 10.1111/j.1365-2648.2012.06122.x
Abstract
Aim. To provide an overview, summary of key features and
evaluation of
usefulness of six evidence-based practice models frequently
discussed in the
literature.
Background. The variety of evidence-based practice models and
frameworks,
complex terminology and organizational culture challenges
nurses in selecting the
model that best fits their practice setting.
Data sources. The authors: (1) initially identified models
described in a
predominant nursing text; (2) searched the literature through
CINAHL from
1998 to current year, using combinations of ‘evidence’,
‘evidence-based practice’,
‘models’, ‘nursing’ and ‘research’; (3) refined the list of
selected models based on
the initial literature review; and (4) conducted a second search
of the literature on
the selected models for all available years to locate both
historical and recent
articles on their use in nursing practice.
Discussion. Authors described model key features and provided
an evaluation of
model usefulness based on specific criteria, which focused on
facilitating the
evidence-based practice process and guiding practice change.
Implications for nursing. The evaluation of model usefulness
can be used to
determine the best fit of the models to the practice setting.
Conclusion. The Johns Hopkins Model and the Academic Center
for Evidence-
Based Practice Star Model emphasize the processes of finding
of evidence-based practice (EBP) models to facilitate the
implementation of research findings into nursing practice
(van Achterberg et al. 2008, Mitchell et al. 2010, Rycroft-
Malone & Bucknall 2010, Wilson et al. 2010, Melnyk &
Fineout-Overholt 2011). Application of EBP models is
intended to break down the complexity of the challenge of
translating evidence into clinical practice. Effective models
to guide translation of research into practice are needed to
avoid failure accompanied by a costly investment of time
and resources. However, enthusiastic efforts by clinicians
and educators to use EBP are often dampened by a confus-
ing array of terms, a plethora of models and a growing
variety of approaches to implementation of EBP.
To help the practitioner decide which EBP model is most
appropriate for a clinical or educational setting, an over-
view of commonly used nursing models is needed to assist
the clinician in comparing, contrasting, and eventually
selecting the model that best fits their organization and a
specific clinical problem. This article provides definitions of
common EBP-related terms, a description of major EBP
models with examples of use in practice and an evaluation
of each model.
Background
Clarification of terms
It is important to begin with a clarification of related terms.
The first term, EBP, has been defined a variety of ways.
However, Melnyk and Fineout-Overholt’s (2011) definition
captures the essence:
Evidence-based practice is a paradigm and life-long prob-
lem solving approach to clinical decision-making that
involves the conscientious use of the best available evidence
(including a systematic search for and critical appraisal of
the most relevant evidence to answer a clinical question)
with one’s own clinical expertise and patient values and
preferences to improve outcomes for individuals, groups,
communities and systems (Melnyk & Fineout-Overholt
2011, p. 575).
A similar definition is provided by Ciliska and colleagues,
who described EBP as integration of the best available
research evidence with information about patient prefer-
ences, clinical skill level and available resources to make
decisions about care (Ciliska et al. 2001).
Table 1 provides definitions for terms commonly used in
EBP discussions. ‘Research utilization’, an older term, is
now recognized as just one piece of the broader concept of
EBP. EBP theories have undergone a change in focus over
the past two decades, which is reflected in use of terms.
Straus and Haynes (2009) delineated this process into
‘knowledge creation’ achieved through research, ‘knowledge
distillation’ through systematic reviews and construction of
guidelines and ‘knowledge dissemination’ through journal
articles and presentations. Attempts have been made in EBP
and change theory literature to distinguish between defini-
tions of diffusion and dissemination. Diffusion is considered
a natural and passive process, while dissemination is an
active and planned persuasion and spread of knowledge.
Straus and Haynes stated that these process components are
not adequate for knowledge use in clinical decision-making
and what is needed is ‘knowledge translation’.
Thus, the current EBP focus has shifted to the process of
moving existing knowledge into the daily routines of
practice. ‘EBP is the process of integrating evidence into
Table 1 Definitions of key terms.

Evidence-based practice (EBP): ‘…a paradigm and life-long problem solving approach to clinical decision-making that involves the conscientious use of the best available evidence (including a systematic search for and critical appraisal of the most relevant evidence to answer a clinical question) with one’s own clinical expertise and patient values and preferences to improve outcomes for individuals, groups, communities and systems’ (Melnyk & Fineout-Overholt 2011, p. 575). Also defined as integrating best available research evidence with information about patient preferences, clinical skill level and available resources to make decisions about care (Ciliska et al. 2001).

Research utilization: Use of research findings in clinical practice, often based on a single study (Melnyk & Fineout-Overholt 2011). [Note: Research utilization is a sub-set of EBP.]

Adoption: A continuum of the rate and amount of practice change, starting with a decision of
(Titler 2011, p. 291). It is important to note that the term
‘adoption’ has been used differently by scholars as if on a
continuum. At the beginning of the continuum, adoption is
described as a simple decision to accept a practice change
(Greenhalgh et al. 2004, van Achterberg et al. 2008, Gale
& Schaffer 2009). At the other end of the continuum,
adoption has been described as a more complete incorporation of the practice change to the extent that it has become
routine (Mitchell et al. 2010). Titler’s model for translation
research uses the terms ‘rate’ and ‘extent of adoption’, sug-
gesting a potential continuum of adoption starting with a
decision of a practice change, moving to implementation
and sustained, routine use in practice (Titler et al. 2007).
The terms ‘translation research’ and ‘implementation sci-
ence’ include a growing body of study – that of how to
effectively facilitate full adoption of best practice into an
organization. These terms have been used synonymously; it
may be helpful to point out that usage of terms has been
somewhat dependent on geographical region. The term
research translation has been more prevalent in the U.S.
(National Institutes of Health 2012). Since 2006, the NIH
has prioritized translational research, creating centres for
translational research at its institutes. The term implemen-
tation science has been used more in the UK and may
become more commonly used due to ‘Implementation
Science’, an open-access journal from the UK; implementa-
tion science is defined as the ‘scientific study of methods to
promote the uptake of research findings into routine health-
care in both clinical and policy contexts’ (Implementation
Science 2012).
Aim
The EBP models can support an organized approach to
implementation of EBP, prevent incomplete implementa-
tion, improve use of resources, and facilitate evaluation of
outcomes (Gawlinski & Rutledge 2008). However, clini-
cians find there is not one model that meets the needs of all
the settings where nurses provide care.
The purpose of this discussion is to present a succinct
overview of selected EBP models that can be applied to
nursing practice and to evaluate their usefulness in clinical
and educational settings. It is beyond the scope of this
paper to present an in-depth analysis of each EBP model
for nursing practice. Rather, this review provides a concise
description and evaluation of selected models that occur
most frequently in the literature and are used in practice.
In addition, this paper may serve as a guide to the
evidence-based nursing practice of staff nurses, educators,
and healthcare organizations.
Data sources
Selection of data sources to identify relevant EBP models
involved four steps. First, Melnyk and Fineout-Overholt’s
text on EBP provided an initial list of models to consider
for application to nursing EBP projects (Melnyk & Fineout-
Overholt 2011). They described seven models that ‘have
been created to facilitate change to EBP’ (Ciliska et al.
2011, p. 245). This approach was selected because the
authors of the text have considerable expertise in
application of models and frameworks for EBP.
Second, to gain a broad perspective on EBP models used
in nursing, CINAHL was searched using various combina-
tions of terms: ‘evidence’, ‘evidence-based practice’, ‘mod-
els’, ‘nursing’ and ‘research’. Articles that described EBP
models used in only one setting or were infrequently used
in EBP projects were excluded.
Third, following the initial review of the literature, two
models described in the Melnyk and Fineout-Overholt text
(Ciliska et al. 2011) were excluded and one other model
was added. An EBP change model, originally developed by
Rosswurm and Larrabee (1999), was excluded because it
was not predominant in current literature. Also, the Clinical
Scholar Model (Schultz 2005) was excluded because it
focused on strategies for preparing nurses to conduct and
use research. The ACE Star Model, which was included in
Melnyk and Fineout-Overholt’s chapter on teaching EBP in
academic settings (Melnyk & Fineout-Overholt 2011), but
not in their chapter on EBP models, was added to the final-
ized list of EBP models because it was featured in several
articles found in the literature.
Fourth, once models were selected, specific names of
models were used in the search process. The final list comprised: (1) the ACE Star Model of
Knowledge Transformation; (2) Advancing Research and
Clinical Practice Through Close Collaboration (ARCC); (3)
the Iowa Model; (4) the Johns Hopkins Nursing Evidence-
Based Practice Model (JHNEBP); (5) Promoting Action on
Research Implementation in Health Services Framework
(PARIHS); and (6) the Stetler Model. Literature was
searched in CINAHL to understand the history of model
development from 1998 to the current year.
Discussion
Table 3 provides a brief evaluation of each EBP model
using the four criteria for selecting an EBP model identified
by Newhouse and Johnson (2009). Although other criteria
exist for evaluation of model selection, the following crite-
ria are particularly relevant to the needs of nurses in prac-
tice. The EBP model should: (1) facilitate the work required
for completing an EBP project; (2) have educational compo-
nents that help nurses to critique and assess the strength
and quality of the evidence; (3) guide the process of imple-
menting practice changes; and (4) potentially be imple-
mented across specialty practice areas (Table 3). In
addition, an implementation or application example is
provided for each model.
Overview and evaluation of evidence-based practice
models
ACE Star Model of Knowledge Transformation
The Academic Center for Evidence-Based Practice (ACE)
developed the ACE Star Model as an interdisciplinary strategy for transferring knowledge into nursing and healthcare
practice to meet the goal of quality improvement (Stevens
2004). This model addresses both translation and imple-
mentation aspects of the EBP process. The five model steps
are: (1) discovery of new knowledge; (2) summary of the
evidence following a rigorous review process; (3) translation
of the evidence for clinical practice; (4) integration of the
recommended change into practice; and (5) evaluation of
the impact of the practice change for its contribution to
quality improvement in health care. The model emphasizes
applying evidence to bedside nursing practice and considers
factors that determine likelihood of adoption of evidence
into practice.
The ACE Star Model has been used in both educational
and clinical practice. In an educational example, the Uni-
versity of Wisconsin-Eau Claire used the ACE Star Model
to design an evidence-based approach to promote student
success on the NCLEX-RN® exam. Authors reviewed
trends in exam pass rates, conducted a review of the literature on student success strategies, made recommendations
to improve student performance, implemented the strate-
gies, and achieved a statistically significant increase in stu-
dent pass rate (Bonis et al. 2007). Other educational
projects that have applied the ACE Star Model include
identification of EBP competencies for clinical nurse special-
ists (Kring 2008) and use of the ACE Star Model as an
organizing framework for teaching EBP concepts to under-
graduates (Heye & Stevens 2009). Clinically, practitioners
have used the model to guide development of a clinical
practice guideline for ventilator-associated pneumonia
(Abbot et al. 2006) and apply knowledge on social support
and positive health practices to working with adolescents in
community and school settings (Mahon et al. 2007).
The ACE Star Model can be used by both individual
practitioners and organizations to guide practice change in
a variety of settings. The model has been used as a guide to
incorporate EBP into nursing curriculum and is also easily
understood by staff nurses, in part due to similarity to the
nursing process. The emphasis on knowledge transforma-
tion contributes to validating the contribution of nursing
interventions to quality improvement. Additionally, the translation stage incorporates clinician expertise and could accommodate patient expertise, although patient expertise is not explicitly addressed in the model. Strategies for successful implementation of a practice change, such as attention to the organizational culture and context that influence adoption, are less well defined.
Advancing Research and Clinical Practice through Close
Collaboration
The ARCC model focuses on EBP implementation and pro-
motes sustainability at a system wide level (Melnyk & Fine-
out-Overholt 2002, Melnyk et al. 2010, Levin et al. 2011).
The model has five steps: (1) assessment of organizational
culture and readiness for implementation in the healthcare
Table 2 Overview of EBP models: key features, model classification, and steps.

ACE Star Model of Knowledge Transformation (Stevens 2004, Kring 2008)
Key features: Focuses on finding nursing evidence for bedside nursing practice, including qualitative evidence; addresses factors that influence adoption of innovation; major focus is knowledge transformation.
Model classification: Organizational or individual use.
Steps:
1. Discovery – search for new knowledge through traditional research
2. Evidence Summary – a rigorous systematic review process of multiple studies to formulate a statement of evidence
3. Translation – creation of a practice document or tool that guides practice, such as a clinical practice guideline
4. Integration – change in practice; supports EBP through influencing individual and organizational change
5. Evaluation – consider impact of EBP practice change on quality improvement in health care

Advancing Research and Clinical Practice Through Close Collaboration (ARCC) (Ciliska et al. 2011)
Key features: Cognitive Behavioural Theory guides clinicians to change behaviour towards adopting EBP; Organizational and Readiness Scale for EBP for assessment of organizational culture; Evidence-Based Implementation Scale for measurement of EBP in practice.
Model classification: Emphasis on organizational use.
Steps:
1. Assess organizational culture and readiness for system-wide implementation
2. Identify organizational strengths and barriers to EBP
3. Identify EBP mentors within the organization to mentor direct care staff on clinical units
4. Implement evidence into practice
5. Evaluate outcomes

Iowa Model (Titler et al. 2001)
Key features: Flowchart used to guide decision-making; uses problem-solving steps; uses feedback loops to guide the change process (e.g. lack of evidence leads to conducting research); includes a trial of the practice change before implementation occurs across the system; designed as an interdisciplinary approach.
Model classification: Emphasis on organizational use.
Steps:
1. Identify practice questions (problem-focused or knowledge-focused ‘triggers’)
2. Determine whether or not the topic is an organizational priority
3. Form a team to search, critique, and synthesize available evidence
4. Determine the sufficiency of the evidence (if insufficient, conduct research)
5. If the evidence base is sufficient and the change appropriate, pilot the recommended practice change
6. Evaluate pilot success and, if successful, disseminate results and implement into practice

Johns Hopkins Nursing Evidence-Based Practice Model (JHNEBP) (Newhouse et al. 2007)
Key features: A practical guide for the bedside nurse to use the best evidence for care decisions; provides tools for process and critique, including question development, an evidence rating scale, and research and non-research evidence appraisal; applicable to a variety of healthcare settings.
Model classification: Emphasis on individual use.
Steps:
1. Practice Question – identify the EBP question using a team approach
2. Evidence – search, critique, summarize, rate evidence strength, and develop recommendations
nity practice settings and has been tested as a strategy for
improving practice outcomes. The emphasis on identifying
organizational strengths and barriers to EBP and identifying
mentors to work with direct care staff contributes to an
organizational culture that supports EBP. As the ARCC
model emphasizes organizational environment and factors
that support EBP, there is less emphasis in the model on
evaluating evidence. The model’s authors caution that while
the model emphasizes organizational processes to advance
EBP in care delivery, it is important to note that decision-
making at the point of care includes clinician expertise and
patient preference (Melnyk & Fineout-Overholt 2011).
Iowa Model
The Iowa Model, originally developed as a research utiliza-
tion model at the University of Iowa Hospitals and Clinics,
has been revised to focus on implementation of EBP at the
organizational level (Titler et al. 2001). The model is repre-
sented as an algorithm with defined decision points and
feedback loops. The first decision is whether the problem or
knowledge-focused trigger is a priority for the organization.
An affirmative decision leads to formation of a team which
searches, critiques, and synthesizes the literature. The sec-
ond decision point considers the adequacy of evidence to
change practice. Inadequate evidence leads the practitioner
to a choice between conducting research or using
alternative types of evidence (e.g. case reports and expert
opinion). When adequate evidence is found, a pilot of the
change is conducted. Evaluation of the pilot leads to the
third decision point – whether to adopt the change in prac-
tice. Ongoing evaluation of the change and dissemination
of results are further components of the Iowa Model.
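The Iowa Model's algorithmic structure described above can be sketched in pseudocode-style Python. The function and its arguments are illustrative only, a minimal sketch of the flowchart's three decision points; the model itself is a team-based flowchart, not software:

```python
# Sketch of the Iowa Model's three decision points and feedback loops.
# Names and return strings are illustrative, not part of the published model.

def iowa_model(is_org_priority, evidence_sufficient, pilot_successful):
    """Walk the Iowa Model's decision points for one trigger."""
    # Decision 1: is the problem- or knowledge-focused trigger
    # a priority for the organization?
    if not is_org_priority:
        return "consider other triggers"
    # A team forms to search, critique and synthesize the literature.
    # Decision 2: is the evidence base adequate to change practice?
    if not evidence_sufficient:
        # Feedback loop: conduct research or use alternative evidence
        # (case reports, expert opinion) before proceeding.
        return "conduct research or use alternative evidence"
    # Pilot the recommended change before system-wide implementation.
    # Decision 3: adopt the change in practice?
    if pilot_successful:
        return "adopt change, continue evaluation, disseminate results"
    return "do not adopt; continue evaluation and monitoring"

print(iowa_model(True, True, True))
```

The sketch mirrors why practitioners find the algorithm helpful: each branch ends in a concrete next action rather than an open question.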
There are numerous examples of application of the Iowa
Model to organizational practice change. A New York hos-
pital applied the Iowa Model to the implementation of a
critical care pain observation tool for pain assessment of
Table 2 (Continued).

Promoting Action on Research Implementation in Health Services Framework (PARIHS) (Rycroft-Malone 2004)
Key features: Framework can be used to evaluate progress in implementing a practice change; for each element, the framework provides sub-elements or factors that predict the likelihood of success (high to low continuum).
Model classification: Emphasis on organizational use.
Successful implementation of a practice change is based on the following interacting elements:
1. Evidence – search for evidence from research, clinical experience, patient experience, and local data and information
2. Context – adoption of an innovation is influenced by the organizational culture, leadership support, and evaluation practices
3. Facilitation – individuals in the organization use their knowledge and skills to assist with implementation of the practice change

Stetler Model (Stetler 2001, Ciliska et al. 2011)
Key features: Focuses on critical thinking and use of evidence by the individual clinician; categorizes evidence as external (from research) and internal (systematic locally obtained evidence, such as outcome data, consensus opinion, experiential information).
Model classification: Emphasis on the individual nurse as critical thinker, but may include groups of clinicians.
Steps:
1. Preparation – define priority need and initiate a search for evidence
2. Validation – systematically critique and summarize evidence
3. Comparative Evaluation and Decision-Making –
improved patient outcomes and the use of the pain assessment tool was approved. A search of the literature demonstrated a wide variety of applications for the Iowa Model
(Madsen et al. 2005, Gordon et al. 2008, Farrington et al.
2009, 2010, Hermes & Lee 2009, Missal et al. 2010).
Multiple reports by researchers have demonstrated suc-
cessful use of the Iowa Model in a variety of settings to
guide decisions and implementation for practice change.
Practitioners, regardless of prior EBP experience, find the
Iowa Model algorithm helpful. The model considers input
from the entire organizational system, including the patient,
providers, and organizational infrastructure, and involves
nurses in each of the steps (Kowal 2010). An additional
strength is the inclusion of a trial of the practice change
before making the decision about implementation. Although
Table 3 Application of criteria* for selecting an EBP Model.
*Criteria: (1) facilitates completion of an EBP project; (2) educational components to guide evaluation of the evidence; (3) guidance for practice change; (4) applicable across specialty areas.

ACE Star Model
1. Rigorous systematic review process; addresses translation and implementation.
2. Not emphasized.
3. Creates a tool for guiding practice (practice guideline).
4. Examples include acute care and school health.

ARCC Model
1. Takes into account organizational culture and readiness; primarily addresses implementation.
2. Available on CD-ROM with the Melnyk and Fineout-Overholt (2011) text.
3. Offers tools to assess organizational feasibility and evaluate EBP outcomes.
4. Used for hospital and community practice; best fit is for large organizations.

Iowa Model
1. Decision points and feedback loops throughout the process; addresses translation and implementation.
2. Not emphasized.
3. Pilot of the change and evaluation are essential components.
4. Useful in a wide variety of specialty areas, most notably acute care; best fit is for large organizations.

Johns Hopkins EBP Model
1. Detailed attention to identifying practice questions and evaluating evidence; greater emphasis on translation – creation of an action plan.
2. Offers tools for question development, rating the evidence, and appraising research and non-research evidence; text by Newhouse et al. (2007) provides a simplified, clear description of the EBP process.
3. Includes an action plan with implementation, evaluation and dissemination steps; less emphasis on organizational culture and change.
4. Potential usefulness in a variety of settings; useful for teaching the EBP process to nursing students.

PARIHS Framework
1. Highlights the often under-recognized effect of ‘context’ (e.g. leader support) as one influence impacting the success of EBP implementation; addresses translation and implementation.
2. Newly published guide available online, along with tools such as the successful implementation tool (Stetler et al. 2011).
3. Three elements (evidence, context, and facilitation) provide a process for practice change.
4. Has been retrospectively applied in a variety of settings; may be a good strategy for teaching doctoral students to test a developing model; look for connections to complementary models.

Stetler Model
1. Comprehensive guide for implementation which includes practitioner expertise, context and evidence; addresses translation and implementation.
2. Evaluation tools for critiquing literature for
Johns Hopkins Nursing Evidence-Based Practice Model
The Johns Hopkins Nursing EBP Model resulted from the
collaborative work of leaders in nursing education and
practice at Johns Hopkins Hospital and the Johns Hopkins
University School of Nursing (Newhouse et al. 2007). The
major focus of the model is translation of best evidence for
nurses at the bedside to use in care decisions. The model
provides three major steps with subcategories in each of the
steps: (1) identification of the practice question, using a
team approach; (2) collection of the evidence, which
involves searching, critiquing, summarizing, determining
strength of evidence, and making recommendations; and (3)
translation of the evidence for use in practice, which
includes determining feasibility of adopting the change and
creating an action plan for implementation. The model
includes tools for assisting the user: a question development
tool, an evidence rating scale, and appraisal criteria for
research and non-research evidence.
The clear, concise text (Newhouse et al. 2007) describing
the Johns Hopkins model has been adopted in a university
setting for use among baccalaureate and graduate students
as a method for searching and appraising the literature.
University professors and hospital nurse researchers
describe collaboration between the university and the
research councils of two hospitals on EBP projects (Missal
et al. 2010). Clinical nursing leaders identified ‘burning’
clinical practice questions offered by staff nurses for which
the university master’s students performed critical apprais-
als of the literature and made practice recommendations
using the model. Examples of practice changes included
venous thromboembolism prevention for same-day postoperative
surgery patients, RN interventions to prevent readmission
of adults related to health literacy, and EBP
protocols for opiate drug withdrawal of chemically dependent
adult patients (M. A. Schaffer, personal communication,
10 June 2011).
The Johns Hopkins EBP Model is comprehensive,
addressing all important components of the EBP process.
Specific steps are provided for identifying the practice question
and leadership responsibility, evaluating the evidence
and developing recommendations and translating evidence
for practice change. This model includes a rating scale for
strength of evidence and quality for both research and non-research
evidence. Practitioner expertise and patient experience
are included in the rating scale. The critical appraisal
tools provide a helpful guide for educators teaching the
process of review of evidence to students. However, there is
a lack of literature on the use of the JHNEBP Model in a
variety of clinical settings and the model has less emphasis
on organizational culture and change.
Promoting Action on Research Implementation in Health
Services Framework
Readers new to the PARIHS framework may find early
studies describing retrospective application of the framework
confusing unless they understand that the framework
was developed over several years by a variety of authors
with practice improvement and guideline implementation
experience. The authors refer to their work as a framework,
rather than a model, which is perhaps most appropriate as
a model is expected to have undergone more rigorous
explanation and testable hypotheses (Titler et al. 2007).
The PARIHS framework’s three key elements mutually
influence one another during a successful implementation of
EBP (Stetler et al. 2011). The first element, evidence, is
described as sources of knowledge as perceived by multiple
stakeholders. The second element, context, describes the
quality of the environment where the research is being conducted.
The third element, facilitation, is a technique to
support people to change (i.e. attitude and skills). A prediction
for degree of success in EBP implementation is based
on the strength and appropriateness of the three elements.
In a critical synthesis of the literature on the PARIHS
framework, Helfrich et al. (2010) identified six overview
articles presenting core concepts in the new framework,
along with 18 empirical articles from 2001–2008 where the
framework was applied. With the exception of the survey
development studies, studies reviewed by Helfrich and colleagues
were retrospective in their application of the PARIHS
framework. However, authors demonstrated that the
framework could be used to guide an analysis of evidence
and the context for dissemination.
Recently, a group of researchers applied the PARIHS
framework to prospectively guide an implementation study
on the use of consultation recording in oncology so patients
could access the digital recording later to review what was
said during the consult (Hack et al. 2011). In planning the
study, researchers considered the interrelationship of evidence,
context and facilitation aspects through analysis of
the pre-implementation, implementation, and postimplementation
phases. Investigators in New Zealand used the …
The framework initially appeared more theoretical than practical
and earlier publications often focused on framework
refinement and development rather than prospective clinical
application. Recently, attempts have been made to clarify
and strengthen the framework. A revision (Stetler et al.
2011) offers users online tools with a User Guide. In terms
of a process guide to implement practice change, the PARIHS
framework has great applicability for engaging stakeholders
across healthcare disciplines; collaboration is
needed for making financial, quality, and administrative
decisions that lead to successful implementation.
Stetler Model
The Stetler Model, which in its original development
focused on research utilization, has been updated and
refined to fit in the EBP paradigm. The model emphasizes
the critical thinking process and, although practitioner-oriented,
is also used by groups for implementing formal organizational
change (Stetler 2001). An important assumption
for the model revision is that internal factors such as the
characteristics of individual EBP users and organizational
practices influence implementation of evidence along with
external factors that include formal research and organizational
standards and protocols. The Stetler Model consists
of five phases. Phase I, preparation, includes definition of
the purpose, contextual assessment and search for sources
of evidence. Phase II is validation of the evidence found.
Phase III is comparative evaluation/decision-making, where
the evidence found is critiqued, synthesized, and a decision
for use is made with consideration of external and internal
factors. Phase IV refinements provide implementation/translation
guidance for change in practice. Finally, Phase V is
evaluation, which includes outcomes met and the degree to
which the practice change was implemented (Ciliska et al.
2011).
Romp and Kiehl (2009) used the Stetler Model to guide
the redesign of a preceptor program with the goal of
improving satisfaction levels of new nurses and reducing
the turnover rate. They described how each of the five steps
or phases of the Stetler Model led to program redesign.
After reviewing literature on preceptor education, decision
makers disseminated recommendations to administrators,
managers, and preceptors through committee meetings,
individual meetings, or direct mailings. New nurse satisfaction
with their preceptors showed a significant improvement
and the turnover rate decreased by 3.9%. Additional application
examples of the Stetler Model include analysis of evidence
for using humour with cancer patients, evaluation of
evidence on a screening tool for anxiety in patients with
Parkinson's disease, and development of a screening tool
for postpartum depression (Christie & Moore 2005, Bishop
2007, Snyder et al. 2011).
The Stetler Model, although oriented to the individual
practitioner, can also be used by a team that is making a
practice change decision. The model takes into account
characteristics of the individual EBP user. The Stetler
Model uses critical thinking and a logical process that
emphasizes evaluation of the evidence. In the model, evidence
includes quality improvement data, operational and
evaluation data, and consensus of experts. Authors caution
that experiential information from individual professionals
should receive critical reflection before use as evidence
(Stetler 2001, Melnyk & Fineout-Overholt 2011).
An updated diagram of the model is used to convey the
key points and relationships of the model. However, readers
may be confused by the details and complexity. The
comprehensive approach of the Stetler Model makes it best
suited for practitioners with skills in EBP (Stetler 2010).
The intersection of quality improvement and EBP
Nursing administrators and staff nurses should consider
how the selection of a specific EBP model fits with the concept
of quality improvement. While the concept of continuous
quality improvement (CQI) has been used for decades, …
… baccalaureate and master's students to understand the EBP critique
process. The ACE Star Model can be readily
understood by undergraduate students due to its similarity
to the nursing process.
Individual clinicians may find both the Johns Hopkins
and Stetler models helpful because of their emphasis on
critical thinking and a logical decision-making process.
Organizations may find a best-fit with the PARIHS, ARCC,
and Iowa models because of the emphasis on team decision-making
processes. The Iowa model is prominent in the
literature for organizational decisions about adoption of
specific clinical practice guidelines. The PARIHS and ARCC
models stress the practical and contextual application of
evidence, including sustainability.
The PARIHS model considers factors that contribute to
likelihood of success for the practice change and, with further
refinement in clarity and succinct presentation, has
potential for judging the merit of cost and time expenditures.
In addition to considering the setting for the best-fit EBP
model, the reader may wish to consider the degree of guidance
for reviewing and critiquing evidence. In this regard,
only the Johns Hopkins and ARCC models provided clear
criteria to rate level and quality of evidence. While all six
models mentioned patient experience and clinician expertise,
there was variation in the emphasis and process for
appraising these experiences.
Future scholars should focus not on development of new
EBP models but rather on the review, testing, and refinement
of existing models. Consistent use of terminology will help
counteract the challenge of navigating the array of terms and
models faced by educators and clinicians. Finally, clinicians
need to consider EBP recommendations in light of patients’
unique characteristics and values. Baumann (2010) cautions,
‘nurses need to recognize that the generalizations of EBP
findings must always be checked by listening to and respecting
the views and choices of each individual' (p. 229).
Limitations
The process used to identify EBP models for discussion,
although systematic, may have resulted in overlooking
models with potential for application to practice. It should
also be pointed out that the article featuring the four criteria
used to evaluate the selected EBP models in Table 3 is
co-authored by Newhouse who has also been instrumental
in development of the Johns Hopkins Nursing Evidence-
Based Practice Model. This discussion of EBP models and
application in practice is not exhaustive; more in-depth discussion
is provided by others (Gawlinski & Rutledge 2008,
Rycroft-Malone & Bucknall 2010).
Conclusion
This discussion, which has provided an overview of EBP
models in practical use, application examples, and an
evaluation of model usefulness, will facilitate the reader in
identifying the model that best fits the clinician, organization,
and the desired goal. Consideration of how the model
facilitates EBP projects, provides guidelines for evidence critique,
guides the process for implementing practice change,
and can be used across practice areas will assist clinicians
in selecting a model that is understood, used, and leads to
improved practice.
What is already known about this topic
● Evidence-based practice promotes using best evidence
to make decisions to improve health outcomes for
individuals, groups, communities, and systems.
● Evidence includes research, clinical expertise, and
patient values and preferences.
● Many models have been developed for guiding implementation
of evidence into nursing practice.
● Translation of evidence is a critical step for implementing
practice change.
What this paper adds
● This review provides an overview and summary of key …
M.A. Schaffer et al.
Funding
This research received no specific grant from any funding
agency in the public, commercial, or not-for-profit sectors.
Conflict of interest
No conflict of interest has been declared by the authors.
Author contributions
All authors meet at least one of the following criteria
(recommended by the ICMJE: http://www.icmje.org/ethical_1author.html)
and have agreed on the final version:
● substantial contributions to conception and design,
acquisition of data, or analysis and interpretation of
data;
● drafting the article or revising it critically for important
intellectual content.
References
Abbot C.A., Dremsa T., Stewart D.W., Mark D.D. & Caren C. (2006) Adoption of a ventilator-associated pneumonia clinical practice guideline. Worldviews on Evidence-Based Nursing 3(4), 139–152.
van Achterberg T., Schoonhoven L. & Grol R. (2008) Nursing implementation science: how evidence-based nursing requires evidence-based implementation. Journal of Nursing Scholarship 40(4), 302–310.
Baumann S.L. (2010) The limitations of evidence-based practice. Nursing Science Quarterly 23(3), 226–230.
Bishop K.B. (2007) Utilization of the Stetler model: evaluating the scientific evidence on screening for postpartum depression risk factors in a primary care setting. Kentucky Nurse 55(1), 7.
Bonis S., Taft L. & Wendler M.C. (2007) Strategies to promote success on the NCLEX-RN®: an evidence-based approach using the ACE Star model of knowledge transformation. Nursing Education Perspectives 28(2), 82–87.
Christie W. & Moore C. (2005) The impact of humor on patients with cancer. Clinical Journal of Oncology Nursing 9(2), 211–218.
Ciliska D.K., Pinelli J., DiCenso A. & Cullum N. (2001) Resources to enhance evidence based nursing practice. AACN Clinical Issues 12, 520–528.
Ciliska D., DiCenso A., Melnyk B.M., Fineout-Overholt E., Stetler C.B., Cullen L., Larrabee J.H., Schultz A.A., Rycroft-Malone J., Newhouse R. & Dang D. (2011) Models to guide implementation of evidence-based practice. In Evidence-Based Practice in Nursing and Healthcare: A Guide to Best Practice (Melnyk B.M. & Fineout-Overholt E., eds), Wolters Kluwer, Philadelphia, PA, pp. 241–275.
Farrington M., Lang S., Cullen L. & Stewart S. (2009) Nasogastric tube placement verification in pediatric and neonatal patients. Pediatric Nursing 35(1), 17–24.
Farrington M., Cullen L. & Dawson C. (2010) Assessment of oral mucositis in adult and pediatric oncology patients: an evidence-based approach. ORL – Head and Neck Nursing 28(3), 8–15.
Gale B. & Schaffer M. (2009) Organizational readiness for evidence-based practice. The Journal of Nursing Administration 39, 91–97.
Gawlinski A. & Rutledge D. (2008) Selecting a model for evidence-based practice changes: a practical approach. AACN Advanced Critical Care 19, 291–300.
Gordon M., Bartruff L., Gordon S., Lofgren M. & Widness J.A. (2008) How fast is too fast? A practice change in umbilical arterial catheter blood sampling using the Iowa model for evidence-based practice. Advances in Neonatal Care 8(4), 198–207.
Greenhalgh T., Robert G., Macfarlane F., Bate P. & Kyriakidou O. (2004) Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Quarterly 82(4), 581–629.
Hack T.F., Ruether J.D., Weir L.M., Grenier D. & Degner L.F. (2011) Study protocol: addressing evidence and context to facilitate transfer and uptake of consultation recording use in oncology: a knowledge translation implementation study. Implementation Science 6(20), doi: 10.1186/1748-5908-6-20.
Helfrich C.D., Damschroder L.J., Hagedorn H.J., Daggett G.S., Sahay A., Ritchie M., Damush T., Guihan M., Ullrich P.M. & Stetler C.B. (2010) A critical synthesis of literature on the promoting action on research implementation in health services (PARIHS) framework. Implementation Science 5(82). Retrieved from http://www.implementationscience.com/content/4/1/38 on 23 July 2012.
Hermes B. & Lee K. (2009) Suicide risk assessment: 6 steps to a better instrument. Journal of Psychosocial Nursing 47(6), 44–49.
Heye M.L. & Stevens K.R. (2009) Using new resources to teach evidence-based practice. Journal of Nursing Education 48(6), 334–339.
Implementation Science (2012) About implementation science. Retrieved from http://www.implementationscience.com/about/ on 23 July 2012.
Kowal C.D. (2010) Implementing the critical care pain observation tool using the Iowa model. Journal of New York State Nurses Association 41(1), 4–10.
Kring D.L. (2008) Practice domains and evidence-based practice competencies: a matrix of influence. Clinical Nurse Specialist 22(4), 179–183.
Levin R.F., Fineout-Overholt E., Melnyk B.M., Barnes M. & Vetter M.J. (2011) Fostering evidence-based practice to improve nurse and cost outcomes in a community health setting: a pilot test of the advancing research and clinical practice through close collaboration model. Nursing Administration Quarterly 35(1), 21–33.
… searchable questions and searching for the best evidence. Pediatric Nursing Journal 28(3), 262–263.
Melnyk B.M. & Fineout-Overholt E. (2011) Evidence-Based Practice in Nursing and Healthcare: A Guide to Best Practice. Wolters Kluwer, Philadelphia, PA.
Melnyk B.M., Fineout-Overholt E., Giggleman M. & Cruz R. (2010) Correlates among cognitive beliefs, EBP implementation, organizational culture, cohesion and job satisfaction in evidence-based practice mentors from a community hospital system. Nursing Outlook 58(6), 301–308.
Missal B., Schafer B.K., Halm M.A. & Schaffer M.A. (2010) A university and healthcare organization partnership to prepare nurses for evidence-based practice. Journal of Nursing Education 49(8), 456–461.
Mitchell S., Fisher C., Hastings C., Silverman L. & Wallen G. (2010) A thematic analysis of theoretical models for translational science in nursing: mapping the field. Nursing Outlook 58, 287–300.
National Institutes of Health (2012) National Center for Advancing Translational Sciences (NCATS). Retrieved from http://www.ncrr.nih.gov/clinical_research_resources/clinical_and_translational_science_awards/publications/ctsa_2011/index.asp on 23 July 2012.
Newhouse R.P. & Johnson K. (2009) A case study in evaluating infrastructure for EBP and selecting a model. Journal of Nursing Administration 39(10), 409–411.
Newhouse R.P., Dearholt S.L., Poe S.S., Pugh L.C. & White K.M. (2007) Johns Hopkins Nursing Evidence-Based Practice Model and Guidelines. Sigma Theta Tau International, Indianapolis, IN.
Romp C.R. & Kiehl E. (2009) Applying the Stetler model of research utilization in staff development: revitalizing a preceptor program. Journal for Nurses in Staff Development 25(6), 278–284.
Rosswurm M.A. & Larrabee J. (1999) A model for change to evidence-based practice. Image: Journal of Nursing Scholarship 31(4), 317–322.
Rycroft-Malone J. (2004) The PARIHS framework – a framework for guiding the implementation of evidence-based practice. Journal of Nursing Care Quality 19(4), 297–304.
Rycroft-Malone J. & Bucknall T., eds (2010) Models and Frameworks for Implementing Evidence-Based Practice: Linking Evidence to Action. Wiley-Blackwell, West Sussex, UK.
Schultz A.A. (2005) Advancing Evidence into Practice: Clinical Scholars at the Bedside. Excellence in Nursing Knowledge, Indianapolis, IN.
Snyder C.H., Facchiano L. & Brewer M. (2011) Using evidence-based practice to improve the recognition of anxiety in Parkinson's disease. The Journal for Nurse Practitioners 7(2), 136–141.
Stetler C.B. (2001) Updating the Stetler model of research utilization to facilitate evidence-based practice. Nursing Outlook 49, 272–278.
Stetler C.B. (2010) Stetler model. In Models and Frameworks for Implementing Evidence-Based Practice: Linking Evidence to Action (Rycroft-Malone J. & Bucknall T., eds), Wiley-Blackwell, West Sussex, UK, pp. 51–88.
Stetler C.B., Morsi D., Rucki S., Broughton S., Corrigan B., Fitzgerald J., Giuliano K., Havener P. & Sheridan E.A. (1998) Utilization-focused integrative reviews in nursing service. Applied Nursing Research 11(4), 195–206.
Stetler C.B., Damschroder L.J., Helfrich C.D. & Hagedorn H.J. (2011) A guide for applying a revised version of the PARIHS framework for implementation. Implementation Science 6(1), 99.
Stevens K.R. (2004) ACE Star Model of EBP: Knowledge Transformation. Academic Center for Evidence-Based Practice, The University of Texas Health Science Center at San Antonio. Retrieved from http://www.acestar.uthscsa.edu on 20 July 2012.
Straus S. & Haynes B. (2009) Managing evidence-based knowledge: the need for reliable, relevant and readable resources. Canadian Medical Association Journal 180, 942–945.
Titler M. (2011) Nursing science and evidence-based practice. Western Journal of Nursing Research 33(3), 291–295.
Titler M.G., Kleiber C., Steelman V., Goode C., Rakel B., Budreau G., Everett L.Q., Buckwalter K.C., Tripp-Reimer T. & Goode C.J. (2001) The Iowa model of evidence-based practice to promote quality care. Critical Care Nursing Clinics of North America 13(4), 497–509.
Titler M.G., Everett L.Q. & Adams S. (2007) Implications for implementation science. Nursing Research 56(4S), S53–S59.
Wilson P.M., Petticrew M., Calnan M.W. & Nazareth I. (2010) Disseminating research findings: what should researchers …
Even the highest quality evidence will have little impact unless
it is incorporated into decision-making for health. It is
therefore critical to overcome the many barriers to using
evidence in decision-making, including (1) missing the window
of opportunity, (2) knowledge gaps and uncertainty, (3)
controversy, irrelevant and conflicting evidence, as well as
(4) vested interests and conflicts of interest. While this is
certainly not a comprehensive list, it covers a number of main
themes discussed in the knowledge translation literature on this
topic, and better understanding these barriers can help
readers of the evidence to be more savvy knowledge users and
help researchers overcome challenges to getting their
evidence into practice. Thus, the first step in being able to use
research evidence for improving population health is
ensuring that the evidence is available at the right time and in
the right format and language so that knowledge users
can take the evidence into consideration alongside a multitude
of other factors that also influence decision-making. The
sheer volume of scientific publications makes it difficult to find
the evidence that can actually help inform decisions for
health. Policymakers, especially in low- and middle-income
countries, require context-specific evidence to ensure local
relevance. Knowledge synthesis and dissemination of policy-
relevant local evidence is important, but it is still not
enough. There are times when the interpretation of the evidence
leads to various controversies and disagreements,
which act as barriers to the uptake of evidence. Research
evidence can also be influenced and misused for various aims
and agendas. It is therefore important to ensure that any new
evidence comes from reliable sources and is interpreted
in light of the overall body of scientific literature. It is not
enough to simply produce evidence, nor even to synthesize
and package evidence into a more user-friendly format.
Particularly at the policy level, political savvy is also needed to
ensure that vested interests do not undermine decisions that can
impact the health of individuals and populations.
Keywords: Barriers, Decision-making, Evidence based
medicine, Health policy, Public health, Research
Background
Billions of dollars are spent each year on producing research,
but to what extent does all this research actually
serve to improve health outcomes? Notwithstanding the
value of creating knowledge for its own sake, it is difficult
to justify spending limited resources on countless
research studies, especially where there is potential for
causing harm to research subjects (whether human or
animal), if this does not contribute to a deeper understanding
of the world we live in and how to make it a
better place [1]. Even the highest quality evidence will
have no impact unless it is incorporated into decision-
making for health. Indeed, in recent years, there has
been a push towards closing the ‘know-do’ gap – i.e. the
gap between what we know works based on research
evidence and what we actually do in practice [2]. There
has also been an attempt to increase the likelihood of
evidence being used to change policy and practice
through co-production of knowledge between researchers
and policymakers [3], as well as increasing the
impact of research on improving health outcomes [4,5].
However, if our goal is to make more evidence-informed
decisions that improve health, it is critical to be aware of
the many barriers to using evidence in decision-making
and to overcome these so that we can facilitate translating
research evidence into improved health outcomes.
* Correspondence: [email protected] (Department of Family Medicine and Department of Epidemiology, …)
There exists a large and growing literature on the barriers
to using evidence in decision-making for health.
Some of the key themes that are often discussed include
(1) missing the window of opportunity – often due to
the relatively long time-frame required to generate new
evidence and synthesize existing evidence and the relatively
short time-frame available for making decisions,
(2) knowledge gaps and uncertainty – especially the paucity
of contextually-relevant evidence from local studies,
(3) controversy, irrelevant and conflicting evidence –
which act as smokescreens clouding the picture and
making it difficult to decide how best to proceed, as well
as (4) vested interests and conflicts of interest – which
deliberately manipulate the evidence base to the detriment
of the public's health and wellbeing. These barriers
often stem from the complexity inherent to producing
knowledge, the disconnect between the worlds of
researchers and policymakers, intentional subverting of
the evidence for political or economic gain, or a combination
of the above.
Missing the window of opportunity
The first step in being able to use research evidence is
ensuring that it is available at the right time and in the
right format [6], so that knowledge users can take the
evidence into consideration alongside a multitude of
other factors that also influence decision-making.
The window of opportunity for incorporating evidence
into decision-making can be very narrow, so unless the
‘work’ of having synthesized and packaged the evidence
into a usable format has already been done beforehand,
decisions are likely to be made without the aid of the
large scientific literature that may be available. For in-
stance, in the clinical setting, couples opting for prenatal
screening for Down syndrome must make very difficult,
morally-charged and potentially life-changing decisions
in just a few weeks, and frontline health workers are
making important treatment decisions on a daily basis
within a 10–20 minute time window during each patient
encounter. This explains why busy clinicians often turn
to ‘pre-digested’ evidence-based resources such as Up-
to-Date, JAMA Evidence and MD Consult, since these
are quick and easy to use, as well as being clinically relevant,
even if they come at a price. However, patients
rarely have access to such tools, or even to information
leaflets intended for patients, which are not always distributed
in practice, and therefore they must rely on
what they can recall from the medical visit or else risk
the information minefield which is the Internet. While
‘Googling’ for answers is certainly an option, there is a
great deal of misleading information on the Internet. At
the very least, it is important to use reputable websites
and, ideally, sites that adhere to the HONcode principles,
a code of ethics for quality health and
medical information on the Internet [7].
In the policy world, the window of opportunity for
decision-making can also be very constrained. Whether
a controversy erupts in the media or an outbreak occurs,
decisions must often be made under pressure. A good
example of this is the policy response needed to deal
with the H5N1 bird flu outbreak in Hong Kong in 1997.
Margaret Chan, who was the Hong Kong Director of
Health at the time, had to make a policy decision within
48 hours. She decided, based on her professional experience
and convictions, to cull 1.5 million chickens in
order to control the epidemic. While there are occasions
when governments have the luxury of commissioning research
from Health Technology Assessment agencies to
aid decision-making, there are also occasions when the
evidence is needed before the 5 pm press conference
and there is simply no time to embark upon a systematic
synthesis of the literature, even if one could call upon
the experts to do so.
Thus, it is advisable to foresee common and/or important
decisions that must be made – whether in a
clinical setting or in a political context – and collect the
best available evidence in advance to lay out the options
and guide decision-makers through the pros and cons of
these various options. Of course, decision-making is not
straightforward, but rather a dynamic and non-linear
process [8], and evidence is only one type of input that
goes into this process (which also integrates many other
considerations such as valuations, preferences, feasibility,
cost, etc.). Nonetheless, evidence is more likely to be
used when it is available during the window of opportunity,
if it points to options which are actionable and if
the existing resources and infrastructure can be used to
make it happen, rather than requiring an influx of new
resources [4]. For instance, one of the reasons for the
success of the Thai universal health coverage policy, in
addition to the solid evidence behind it, was that the
proponents underlined that it could be done within the
existing budget.
Knowledge gaps and uncertainty
Another major barrier to using evidence for informing
policy and practice is the lack of available evidence in
specific areas or in specific contexts. This can be extremely
problematic. Particularly as we move away from
strictly clinical questions, such as the best pharmaceutical
treatment for hypertension or the best diagnostic
test for colon cancer, and attempt to address broader
social questions, such as how to tackle gender-based
violence and how to create supportive environments for
Andermann et al. Health Research Policy and Systems (2016) 14:17
health, we enter a realm where it is often difficult to be
entirely evidence-based because the evidence has not
been produced to the same extent. Since research requires
funding, and a great deal of research is privately
funded in part or in whole, it is not surprising that there
is relatively little research on issues such as gender-
based violence, child maltreatment or health inequities
more broadly, since there is no pill or product that can
be used to manage these health-related problems. Similarly,
when dealing with specific under-served populations,
such as immigrants or Indigenous groups, one
may also run into the same problem that there is often
relatively little research evidence on ‘what works’ to
guide action in these specific populations, not to mention
the glaring research gap with respect to low- and
middle-income countries [9]. Even when there is solid
evidence at an international level, policymakers, especially
in resource-constrained settings, often lack the
local evidence relevant to their particular context which
would be important in helping them make more informed
decisions for improving the health of their local
populations given the particular circumstances and challenges
that they face.
A lack of evidence does not mean that the interventions
or strategies in question are not effective. It simply
means that they have not been sufficiently investigated.
The bottom line is that, in the absence of evidence, one
is left with a great deal of uncertainty under which decisions
must nonetheless be made. Therefore, at the very
least, it is necessary to gather together the best possible
evidence and to at least make explicit the knowledge
gaps. Indeed, there is no such thing as zero uncertainty
and therefore it is simply a matter of degree. Often pleas
for more research, while helpful, will generally not
provide the additional evidence required within the
timeline necessary to make decisions. Nonetheless, in
the meantime, decision-makers must resort to examining
best practices or examples of what has worked in
different contexts to make the best possible decisions in
situations where there is a lack of guidance from the
existing body of research evidence.
Controversy, irrelevant and conflicting evidence
It is a strange paradox that, although we suffer from im-
portant knowledge gaps regarding how to improve popu-
lation health, the scientific literature is filled with a
plethora of ‘irrelevant evidence’ that is ‘nice to know’ but
not really what we ‘need to know’ to improve health. Re-
search topics chosen by researchers tend to be of aca-
demic interest, and only loosely driven by the information
needs of patients or decision-makers, if at all. It is there-
fore not surprising that the evidence produced then tends
to have little relevance or practical value for patient
choices or policymaking. According to the Agency for
Healthcare Research and Quality [10], before imple-
menting a new intervention, policymakers want to know
(1) can it work? (2) will it work here? and (3) is it worth
it? Yet, one must spend time going through a great deal of
irrelevant evidence to find patient-informed, policy-friendly
data that helps to answer these questions. As the number
of research publications continues to grow exponentially
(Fig. 1), it is becoming virtually impossible to keep abreast
of the scientific literature in any given field [11].
Beyond the lack of relevant evidence and the ‘noise’
created by irrelevant evidence, another challenge for
decision-making is disagreement about what the evidence
says. While generally evidence-based recommendations
are quite similar across jurisdictions and contexts [12],
there are occasions when different groups make re-
commendations based on the same international body of
evidence which are clearly at odds with the
recommendations made elsewhere. This can lead to
significant confusion and contentious debate, which
undermines confidence in the scientific process, and is a
major barrier to the uptake and use of evidence.

[Fig. 1 The exponential increase in research evidence over
the last century. Number of scientific articles indexed in
PubMed that year: 976 (1891); 103,643 (1951); 1,013,724
(2011). Data based on articles indexed in PubMed
(http://www.ncbi.nlm.nih.gov/pubmed/)]
Andermann et al. Health Research Policy and Systems (2016) 14:17 Page 3 of 7
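The scale of growth shown in Fig. 1 can be made concrete with a quick calculation. The sketch below (an illustration, using only the three yearly counts quoted in the figure) estimates the compound annual growth rate of PubMed-indexed articles implied by those data points:

```python
def annual_growth_rate(start_count: int, end_count: int, years: int) -> float:
    """Compound annual growth rate implied by two yearly article counts."""
    return (end_count / start_count) ** (1 / years) - 1

# Yearly PubMed-indexed article counts from Fig. 1
counts = {1891: 976, 1951: 103_643, 2011: 1_013_724}

early = annual_growth_rate(counts[1891], counts[1951], 1951 - 1891)  # roughly 8% per year
late = annual_growth_rate(counts[1951], counts[2011], 2011 - 1951)   # roughly 4% per year
print(f"1891-1951: {early:.1%} per year; 1951-2011: {late:.1%} per year")
```

Even the slower recent rate implies the literature roughly doubles every two decades, which illustrates why keeping abreast of a field by reading primary studies alone has become impractical.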
The mammography controversy is a case in point [13].
When the Nordic Cochrane Centre published a system-
atic review in 2001 stating that “currently available reli-
able evidence does not show a survival benefit of mass
screening for breast cancer” it caused a great outcry [14].
There were many articles written condemning anyone
who dared to question the clinical utility of breast
screening as being part of an “active anti-screening cam-
paign… based on erroneous interpretation of data from
cancer registries and peer-reviewed articles” [15]. Indeed,
unlike the other reviews of breast cancer screening, the
Nordic review excluded certain studies for methodo-
logical reasons and included others, thereby arriving at a
different conclusion, which is certainly legitimate.
However, the authors’ interpretation went a little too far
in counteracting the then-prevailing school of thought
on the benefits of screening. A few years later the Nordic
Cochrane Centre updated their analysis and revised their
conclusions to be more nuanced. The most recent up-
date of the review in 2011 states that “screening is likely
to reduce breast cancer mortality” – but they qualify this
statement with an explanation [16]. The authors con-
clude that the mortality reduction attributable to screen-
ing is much lower than previously suggested by reviews
of the mammography literature (i.e. 15% versus 30%).
Further, overdiagnosis and overtreatment are much
more common than is generally acknowledged. Thus,
without rejecting breast screening altogether, the Nordic
Cochrane Centre provides the data for patients and
policymakers to better weigh the benefits and harms. If
people invited for screening only knew that, out of 2,000
women screened for 10 years, only one will avoid dying
from breast cancer, whereas 10 will receive unnecessary
treatment, and over 200 will experience a false alarm
requiring repeat testing and invasive diagnostic proce-
dures, they would be better equipped to judge for
themselves the merits of participating in the screening
program [17]. Indeed, if they choose to participate, they
may even find this information reassuring by knowing in
advance that false alarms are common and that only a
small proportion of women who are recalled for
additional testing actually have cancer.
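The per-2,000 figures quoted above translate into rates that are easier to compare at a glance. This minimal sketch (using only the numbers cited from [17]) puts them side by side:

```python
SCREENED = 2000  # women screened for 10 years, per the figures quoted from [17]

# Outcomes per 2,000 women screened, as stated in the text
outcomes = {
    "deaths from breast cancer avoided": 1,
    "unnecessarily treated (overdiagnosis)": 10,
    "false alarms requiring further testing": 200,
}

# Express each outcome as a rate per woman screened
for label, n in outcomes.items():
    print(f"{label}: {n}/{SCREENED} = {n / SCREENED:.2%}")
```

Framed this way, the trade-off is stark: for each death avoided, the quoted figures imply ten women overtreated and two hundred experiencing a false alarm, which is precisely the kind of nuanced information that helps patients and policymakers weigh benefits against harms.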
Even though we all share the same international body
of research evidence, there is a great deal of subjectivity
in the analysis of that evidence [18]. There are times
when subtly different approaches to interpretation lead
to various controversies and disagreements, which act as
barriers to the uptake of evidence, whereas people
should be presented with the evidence in all its nuances
and complexity, so that they can better choose for them-
selves. Since the body of evidence is not static, and the
knowledge base is constantly being revised and refined,
these controversies prove to be counter-productive and
claims that a single study can overturn decades of
research are misguided. Instead, the focus should be on
the quality and rigour of examining large bodies of
evidence produced over time and producing nuanced
information that is helpful for decision-makers [19].
However, of even greater concern is when the evidence
base is deliberately manipulated and misused with the
intention of promoting certain vested interests, as dis-
cussed in the following section.
Vested interests and conflicts of interest
The degree to which vested interests have infiltrated the
international body of evidence is not to be underesti-
mated. Often the way in which these interests play out
in the literature can be quite subtle. However, astute
readers of the literature should consider the underlying
motives behind research publications, and what the
authors stand to gain or lose. In areas such as tobacco
smoking, the introduction of new pharmaceuticals and
the manufacture and export of asbestos, the research
evidence can make or break an industry, with millions of
dollars as well as thousands of jobs at stake. Therefore,
there is a great deal to lose and tremendous incentive
for these mega-industries to prevent the diffusion of
studies which go against their commercial interests or
even actively infuse the larger body of evidence with
conflicting studies to raise some doubt as to whether
their product is indeed harmful – sufficient doubt to
allow the money-making enterprise to continue reaping
profits, even if only for a few more years. This may sound
sinister and hard to believe, yet there are regular
accounts of just this.
Indeed, a great deal has been written over the years
about how the tobacco industry has attempted in various
ways to call into question the link between smoking and
cancer by producing their own evidence – often flawed
or misinterpreted to their own ends [20]. The industry is
known to have co-opted a large number of “venal or
naive scientists” [21] and even commissioned consultants
to write articles that call into question the link between
second hand smoke (SHS) and sudden infant death syn-
drome (SIDS). According to internal memos made pub-
lic during legal action against the tobacco industry, there
is proof that tobacco company executives “successfully
encouraged one author to change his original conclusion
that SHS is an independent risk factor for SIDS to state
that the role of SHS is ‘less well established’…” [22]. The
author’s disclosure of industry funding did not reveal the
full extent of the company’s involvement in shaping the
content of the article. This is yet another example of
how accepting industry funds can disrupt the integrity of
the scientific process.
However, the tobacco industry is not the only culprit.
A great deal has also been written in the medical litera-
ture about the pharmaceutical industry introducing bias
which could have a favourable impact on medication
sales and ultimately increase profits for shareholders.
This occurs, for instance, through “multiple publication of positive
trials and non-publication of negative trials, reinterpret-
ing data submitted to regulatory agencies, discordance
between results and conclusions, [and] conflict-of-interest
leading to more positive conclusions” [23].
Additionally, cash-strapped universities are increas-
ingly looking for alternative routes to fund-raising and
attempt to capitalize on the production of intellectual
property. Indeed, some universities have even developed
venture capital teams to support the roll-out and mar-
keting of scientific discoveries and technological innova-
tions developed on their campuses. Yet, a commitment
to open scientific inquiry and the pursuit of financial
gains are two goals that can be very difficult to reconcile,
often requiring a number of safeguards to be in place
[24]. Even beyond academia, governments are also impli-
cated. In the interest of keeping the economy running
and creating more jobs, governments are often eager
supporters of new industrial sectors. For instance, the
Canadian government is pouring money into genome re-
search [25], even though benefits to human health are
still unproven [26]. At times, these business ventures are
even shown to have serious negative consequences for
human health and well-being [27]. Nevertheless, any evi-
dence of such links must be quashed to avoid interfering
with economic gains and even with re-election. There-
fore, in certain situations, governments have been ac-
cused of obfuscating the truth [28]. For instance,
according to an editorial in the Canadian Medical
Association Journal [29], “Canada is the only Western
democracy to have consistently opposed international
efforts to regulate the global trade in asbestos. And the
government of Canada has done so with shameful
political manipulation of science”. However, Canada is
not alone when it comes to benefitting from ‘overlooking’
the evidence when it is inconvenient. Many countries, in-
cluding Mexico, Indonesia and China, in spite of having
ratified the Framework Convention on Tobacco Control,
continue to receive so much income from the sale of these
products through ‘sin taxes’ and other channels that they
have difficulty implementing measures intended to reduce
the use of tobacco products within their borders with the
aim of improving the health of their own people [30].
Even the ways in which health systems are organized
and run are not immune to the misappropriation of evi-
dence. Universal access to publicly-funded healthcare is
an important determinant of health [31] and “a privatized,
‘American-style’ health financing and provision system is
neither a feasible nor desirable model for developing
countries” [32]. Nonetheless, according to Whitehead et
al. [33], “In the past two decades, powerful international
trends in market-oriented health-sector reforms have been
sweeping around the world… advocated by agencies such
as the World Bank to promote privatisation”. Thus, the
authors call for an evidence-based approach to health sec-
tor reform, rather than promoting mass privatization of
healthcare services and charging poor people user fees,
which have been shown time and again to have negative
impacts on health [34,35]. Interestingly, under new leader-
ship, even the World Bank has done an abrupt turn in
support of universal health coverage, which “suggests that
an evidence-based approach to policy may finally be pre-
vailing over an ideologically driven approach” [36]. Thus,
there is hope that even the most powerful vested interests
can be overcome.
Conclusions
Readers of the scientific literature should be aware of
the ways in which the research evidence can be influ-
enced and misused for various aims and agendas. That
is not to say that we should throw out the whole
‘evidence-based’ enterprise, but simply that a healthy
dose of scepticism and critical thinking is advisable. It is
not enough to simply produce evidence, nor even to
synthesize and package evidence into a more user-
friendly format. New evidence needs to be critically ap-
praised and considered in light of the larger body of
existing scientific literature, both local and international.
Particularly at the policy level, political savvy is also
needed to ensure that vested interests do not undermine
decisions that can impact the health of individuals and
populations. Whether it is tobacco companies trying to
flood the literature with contradicting evidence or
pharmaceutical companies hiding research demonstrat-
ing that their products have no effect or lead to harm,
these conflicts of interest can lead us to draw erroneous
conclusions and misinformed decisions. Ultimately, the
public pays the price, whether through poor health or
misspent money.
In response to these challenges, a growing number of
organisations, agencies, task forces and even global net-
works have been developed to synthesize, appraise and
disseminate research evidence with the goal of im-
proving health and reducing inequities. These include
the Cochrane and Campbell Collaborations [37], Health
Technology Assessment Agencies [38], the Canadian
Task Force on Preventive Health Care [39], and the US
Preventive Services Task Force [40], all of which focus
more on clinical-level guidance, as well as the Community
Guide, which focuses more on population-level health in-
terventions [41]. However, while finding, appraising and
synthesising the evidence and ‘putting it out there’ in a
clinical practice guideline, on a website or in a database is
important, it is often not sufficient to ensure that the evi-
dence will be used to influence policy and practice.
Increasingly, it is being recognised that evidence needs to
be packaged in such a way as to actively promote
uptake, since passive diffusion measures are much less likely
to have an impact [42].
Many different models are being developed to increase
the uptake and use of evidence in practice [43]. These
often involve some form of evidence summaries or
decision-support tools. For instance, the EVIPNet Portal
includes a repository of EVIPNet Policy Briefs which
synthesise the research evidence and offer evidence-
informed and contextualised policy options in a user-
friendly format to support well-informed policy decisions
[44]. Public Health England’s Longer Lives/Healthier Lives
website is another example that provides statistical data
tools that allow people to see how their local area com-
pares to the rest of the country in terms of specific health
indicators and provides a route to existing evidence sum-
maries produced by the UK’s National Institute for Health
and Clinical Excellence [45]. Indeed, such policy briefs,
which are free from technical jargon and highlight key
messages in a brief executive summary, dramatically in-
crease the likelihood that policymakers will read, consider
and apply the evidence where appropriate. The EVIPNet
partners with multiple organisations to produce these
policy-relevant evidence syntheses, including the Alliance
for Health Policy and Systems Research, the Health
Evidence Network, and Supporting Policy relevant
Reviews and Trials. Similarly, the Cochrane Collaboration
produces ‘Cochrane Summaries’ to make their systematic
reviews more readily accessible to a wider audience of
knowledge users [46].
While it is important to be aware of the barriers and
facilitators when using evidence in decision-making for
health, at the end of the day, a decision must be made
which takes into account the needs and concerns of the
individual patients and local populations involved. In the
next article in the series, a model for ensuring more
participatory, transparent and evidence-informed decisions
will be presented and discussed.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
TP conceived of the idea of producing a series of articles on Evidence for Health based on the book by AA entitled Evidence for Health: From Patient Choice to Global Policy (Cambridge University Press, 2013), for which TP wrote the foreword. AA wrote the first draft of the articles for the commentary series, which were then circulated, discussed and revised through the contributions of TP, JN, AD and UP. All authors read and approved the final manuscript.
Authors’ information
AA is an Associate Professor at McGill University, Medical Specialist in Public Health and Preventive Medicine for Health Canada’s First Nations and Inuit Health Branch (FNIHB), public health consultant for the Cree Board of Health and Social Services of James Bay Northern Quebec, Chair of the public health theme for the new undergraduate medical curriculum at McGill’s Faculty of Medicine, and founder of an international research collaboration (www.mcgill.ca/clear) that aims to develop guidance for frontline health workers so that they can play a greater role in addressing the social causes of poor health and reducing health inequities. AA has previously worked at the World Health Organization in Geneva on research capacity strengthening in low- and middle-income countries. During that time, she was a member of the WHO Research Ethics Review Committee and a main contributing author to the World Health Report 2008 on increasing universal access to primary health care. TP is Visiting Professor at the Lee Kuan Yew School of Public Policy, National University of Singapore, Singapore, and formerly Director of Research, Policy and Cooperation at the World Health Organization, Geneva, Switzerland. JN is Chief Knowledge Officer of Public Health England in the United Kingdom and responsible for the research strategy. He works with AD, who is the Director of Population Health Science at Public Health England, with the aim of using high quality evidence for informing policy and practice. UP is Coordinator of the Centre for International Relations in the Faculty of Medicine of the Federal University of Minas Gerais, Belo Horizonte, Brazil, and the founder and Co-Chair of the Evidence Informed Policy Network (EVIPNet) Steering Group at the World Health Organization, Geneva, Switzerland.
Acknowledgements
AA is funded through a Clinician Scholar Award by the Quebec Health Research Fund (Fonds de Recherche du Québec – Santé) and the Quebec Federation of Medical Specialists (Fédération des Médecins Spécialistes du Québec).
Author details
1Department of Family Medicine and Department of Epidemiology, Biostatistics and Occupational Health, Faculty of Medicine, McGill University, Montreal, Canada. 2Lee Kuan Yew School of Public Policy, National University of Singapore, Singapore, Singapore. 3Institute of Population Health, Faculty of Medical and Human Sciences, University of Manchester, Manchester, England. 4Public Health England, London, England. 5Department of Preventive and Social Medicine-Health Policy, Faculty of Medicine, Federal University of Minas Gerais, Belo Horizonte, Brazil. 6Evidence Informed Policy Network (EVIPNet) Steering Group, World Health Organization, Geneva, Switzerland.
Received: 25 February 2015 Accepted: 16 February 2016
References
1. Andermann A. Evidence for Health: From Patient Choice to Global Policy. Cambridge: Cambridge University Press; 2013.
2. Rycroft-Malone J. From knowing to doing-from the academy to practice: Comment on “The many meanings of evidence: implications for the translational science agenda in healthcare”. Int J Health Policy Manag. 2013;2(1):45–6.
3. Lehmann U, Gilson L. Action learning for health system governance: the reward and challenge of co-production. Health Policy Plan. 2015;30(8):957–63.
4. Greenhalgh T, Fahy N. Research impact in the community-based health sciences: an analysis of 162 case studies from the 2014 UK Research Excellence Framework. BMC Med. 2015;13:232.
5. Redman S, Haynes A, Williamson A. Research impact: neither quick nor easy. BMC Med. 2015;13:265.
6. Muir Gray JA. Evidence-based Health Care. Edinburgh: Churchill Livingstone; 2001.
7. Safe Use of Internet. Chene-Bourg: Health on the Net Foundation; 2011. http://www.hon.ch/HONcode/Patients/visitor_safeUse2.html. Accessed 11 February 2016.
8. Nutbeam D, Harris E. Theory in a Nutshell: A Practical Guide to Health Promotion Theories. 2nd ed. Sydney: McGraw Hill; 2004.
9. Commission for Health Research on Development. Health Research: Essential Link to Equity in Development. Oxford: Oxford University Press; 1990.
10. Brach C, Lenfestey N, Roussel A, Amoozegar J, Sorenson A. RTI International. Will It Work Here? A Decisionmaker’s Guide to Adopting Innovations. Rockville: Agency for Healthcare Research and Quality; 2008. https://innovations.ahrq.gov/sites/default/files/guides/InnovationAdoptionGuide.pdf. Accessed 11 February 2016.
11. Garba S, Ahmed A, Mai A, Makama G, Odigie V. Proliferations of scientific medical journals: a burden or a blessing. Oman Med J. 2010;25(4):311–4.
12. Mavriplis C, Thériault G. The periodic health examination: a comparison of United States and Canadian recommendations (French). Can Fam Physician. 2006;52:58–63.
13. Finkel M. Understanding the Mammography Controversy: Science, Politics and Breast Cancer Screening. Westport: Praeger Publishers; 2005.
14. Olsen O, Gøtzsche PC. Screening for breast cancer with mammography. Cochrane Database Syst Rev. 2001;4:CD001877.
15. Bock K, Borisch B, Cawson J, Damtjernhaug B, de Wolf C, Dean P, et al. Effect of population-based screening on breast cancer mortality. Lancet. 2011;378(9805):1775–6.
16. Gøtzsche PC, Nielsen M. Screening for breast cancer with mammography. Cochrane Database Syst Rev. 2011;1:CD001877.
17. Gotzsche P, Hartling O, Nielsen M, Broderson J, Jorgensen K. Breast screening: the facts—or maybe not. BMJ. 2009;338:b86.
18. Subjectivity in data analysis. Lancet. 1991;337(8738):401–2.
19. AGREE Collaboration. Development and validation of an international appraisal instrument for assessing the quality of clinical practice guidelines: the AGREE project. Qual Saf Health Care. 2003;12:18–23.
20. Tong EK, Glantz SA. Tobacco industry efforts undermining evidence linking secondhand smoke with cardiovascular disease. Circulation. 2007;116(16):1845–54.
21. Chapman S, Shatenstein S. The ethics of the cash register: taking tobacco research dollars. Tob Control. 2001;10(1):1–2.
22. Tong EK, England L, Glantz SA. Changing conclusions on secondhand smoke in a sudden infant death syndrome review funded by the tobacco industry. Pediatrics. 2005;115(3):e356–66.
23. Lexchin J. Those who have the gold make the evidence: how the pharmaceutical industry biases the outcomes of clinical trials of medications. Sci Eng Ethics. 2012;18(2):247–61.
24. Omobowale EB, Kuziw M, Naylor MT, Daar AS, Singer PA. Addressing conflicts of interest in Public Private Partnerships. BMC Int Health Hum Rights. 2010;10:19.
25. Harper Government Invests in Personalized Medicine [press release]. Genome Canada. 2012. http://www.genomecanada.ca/en/about/news.aspx?i=409. Accessed 11 February 2016.
26. Wade N. A Decade Later, Genetic Map Yields Few New Cures. New York Times. 2010 (June 13). http://www.nytimes.com/2010/06/13/health/research/13genome.html. Accessed 11 February 2016.
27. Nuffield Council on Bioethics. Medical Profiling and Online Medicine: The Ethics of ‘Personalised Healthcare’ in a Consumer Age. London: Nuffield Council on Bioethics; 2010. http://nuffieldbioethics.org/project/personalised-healthcare-0/. Accessed 11 February 2016.
28. Kazan-Allen L. The asbestos war. Int J Occup Environ Health. 2003;9(3):173–93.
29. Attaran A, Boyd D, Stanbrook M. Asbestos mortality: a Canadian export. CMAJ. 2008;179(9):871–4.
30. Madrazo-Lajous A, Guerrero-Alcántara A. Undue tobacco industry interference in tobacco control policies in Mexico. Salud Publica Mex. 2012;54(3):315–22.
31. Van Lerberghe W, Evans T, Rasanathan K, Mechbal A, Andermann A, Evans D, et al. World Health Report 2008 - Primary Health Care: Now More than Ever. Geneva: World Health Organization; 2008. http://www.who.int/whr/2008/en/. Accessed 11 February 2016.
32. Hozumi D, Frost L, Suraratdecha C, Pratt B, Sezgin Y, Reichenbach L, et al.