HealthIT.gov
National Learning Consortium
Advancing America's Health Care
Continuous Quality Improvement (CQI) Strategies to Optimize Your Practice
Primer
Provided By:
The National Learning Consortium (NLC)
Developed By:
Health Information Technology Research Center (HITRC)
The material in this document was developed by Regional
Extension Center staff in the
performance of technical support and EHR implementation. The
information in this document is
not intended to serve as legal advice nor should it substitute for
legal counsel. Users are
encouraged to seek additional detailed technical guidance to
supplement the information
contained within. The REC staff developed these materials
based on the technology and law
that were in place at the time this document was developed.
Therefore, advances in technology
and/or changes to the law subsequent to that date may not have
been incorporated into this
material.
Illustration titled the EHR Implementation Lifecycle shows 6
arrows, arrow 6 is highlighted. Step 6: Continue Quality
Improvement
NATIONAL LEARNING CONSORTIUM
The National Learning Consortium (NLC) is a virtual and
evolving body of knowledge and resources
designed to support health care providers and health IT
professionals working toward the implementation,
adoption, and Meaningful Use of certified electronic health
record (EHR) systems.
The NLC represents the collective EHR implementation
experiences and knowledge gained directly from
the field of ONC’s outreach programs (REC, Beacon, State HIE)
and through the Health Information
Technology Research Center (HITRC) Communities of Practice
(CoPs).
The following resource can be used in support of the EHR
Implementation Lifecycle. It is recommended
by “boots-on-the-ground” professionals for use by others who
have made the commitment to implement
or upgrade to certified EHR systems.
EHR Implementation Lifecycle
EHR Implementation Lifecycle illustration shows 6 arrows.
Step 1 Assess; Step 2 Plan; Step 3 Select; Step 4 Implement;
Step 5 Meaningfully Use; Step 6 Improve Quality.
DESCRIPTION AND INSTRUCTIONS
Continuous Quality Improvement (CQI) is a quality
management process that encourages all health care
team members to continuously ask the questions, “How are we
doing?” and “Can we do it better?”
(Edwards, 2008). To address these questions, a practice needs
structured clinical and administrative
data. EHRs can, if properly designed and implemented, capture
these data efficiently and effectively,
thereby transforming patient care in ways that might have been
difficult or impossible with paper records
alone.
This Primer introduces CQI concepts, strategies, and techniques
a practice can use to design an effective
CQI strategy for EHR implementation, achieve Meaningful Use
of the system, and ultimately improve the
quality and safety of patient care. A practice can use CQI
throughout the EHR implementation lifecycle.
CQI strategies have been successfully implemented in many
industries, including health care. The CQI
conceptual framework presented in this Primer provides a
foundation to design and manage CQI
initiatives and offers points to consider when deciding which
strategy works best for a particular practice
or organization. A CQI toolkit, currently under development,
will elaborate in more detail the concepts
provided in this Primer.
Section 1 of this Primer defines CQI and its relationship to
Meaningful Use and EHR implementation and
presents a case study of CQI concepts. Section 2 describes the
leading strategies to be considered when
designing a CQI program for a practice. Section 3 provides
guidance for planning a CQI initiative and
selecting a CQI strategy that matches the needs of various
practice settings.
TABLE OF CONTENTS

1 Continuous Quality Improvement (CQI) in the EHR Implementation Lifecycle
1.1 Introduction
1.2 What Is Continuous Quality Improvement?
1.3 How Can CQI Help a Practice Make the Most of Meaningful Use?
1.4 What Does CQI Look Like in Practice?
2 Strategies for CQI
2.1 Leading CQI Strategies in Health Care
2.1.1 The Institute for Healthcare Improvement (IHI) Model for Improvement
2.1.2 Lean
2.1.3 Six Sigma
2.1.4 Baldrige Quality Award Criteria
2.2 Which CQI Strategy Is Right?
2.3 Best Practices to Consider in Using a CQI Strategy
2.3.1 Have the Right Data and Use the Data Well
2.3.2 Have the Resources to Finish the Job

LIST OF EXHIBITS

Exhibit 1. Using CQI to Move From Current State to Future State
Exhibit 2. CQI Framework Model
Exhibit 3. IHI Model for Improvement
Exhibit 4. Lean Principles for Operational Efficiency
Exhibit 5. Six Sigma Model
Exhibit 6. Baldrige Core Values and Concepts
Exhibit 7. Summary of Leading Strategies for CQI
1 Continuous Quality Improvement (CQI) in the EHR Implementation Lifecycle
1.1 INTRODUCTION
The quest to use health information technology (IT),
specifically EHRs, to improve the quality of health care
throughout the health care delivery continuum is a consistent
goal of health care providers, national and local
policymakers, and health IT developers. The seminal Institute
of Medicine (IOM) report, Crossing the Quality
Chasm: A New Health System for the 21st Century (IOM,
2001), was a call for all health care organizations
to renew their focus on improving the quality and safety of
patient care in all health care delivery settings.
Since the IOM report, the health care industry has emphasized
the design and implementation of health IT
that supports quality improvement (QI) and quality monitoring
mechanisms in all levels of the health care
delivery system. Many QI strategies currently used in health
care, including Continuous Quality Improvement
(CQI), have been adopted from other industries that have
effectively used QI techniques to improve the
efficiency and quality of their goods and services. Experience
and research have shown that CQI principles,
strategies, and techniques are critical drivers of new care
models such as Patient-Centered Medical Homes
(PCMHs) or Accountable Care Organizations (ACOs). As
practice leaders and staff learn more about CQI
strategies and identify what works best for the desired type and
level of changes in the practice setting (i.e.,
moving from the current state to the desired future state), they
will recognize the value in designing an EHR
implementation to meet both the Meaningful Use requirements
and their own QI goals.
This Primer provides an overview of CQI concepts and
processes and will:
• Define CQI and how it applies to EHR implementations and
practice improvement strategies;
• Identify a conceptual framework to consider when
implementing CQI techniques in a practice setting;
• Explore tools, techniques, and strategies that health care and
other service industries use to guide and
manage CQI initiatives;
• Guide the selection of the most appropriate CQI technique or
strategy for the type and scale of
improvements the practice is considering; and,
• Provide tips to help the practice leaders tailor the approach,
tools, methods, and processes to the
unique CQI initiative and practice setting.
1.2 WHAT IS CONTINUOUS QUALITY IMPROVEMENT?
Put simply, CQI is a philosophy that encourages all health care
team members to continuously ask: “How are
we doing?” and “Can we do it better?”(Edwards, 2008). More
specifically, can we do it more efficiently? Can
we be more effective? Can we do it faster? Can we do it in a
more timely way? Continuous improvement
begins with the culture of improvement for the patient, the
practice, and the population in general.
Besides creating this inquisitive CQI culture in an organization,
the key to any CQI initiative is using a
structured planning approach to evaluate the current practice
processes and improve systems and
processes to achieve the desired outcome and vision for the
desired future state. Tools commonly used in
CQI include strategies that enable team members to assess and
improve health care delivery and services.
Applying CQI to a practice’s EHR implementation means that
the health care team must understand what
works and what does not work in the current state and how the
EHR will change care delivery and QI aims.
The CQI plan identifies the desired clinical or administrative
outcome and the evaluation strategies that
enable the team to determine if they are achieving that outcome.
The team also intervenes, when needed, to
adjust the CQI plan based on continuous monitoring of progress
through an adaptive, real-time feedback
loop.
1.3 HOW CAN CQI HELP A PRACTICE MAKE THE MOST
OF MEANINGFUL
USE?
Meaningful Use is an important means to achieving the triple
aims of health care—improving the experience
of patient care, improving population health, and reducing per
capita costs of health care (Berwick et al.,
2008). The Centers for Medicare and Medicaid Services’ EHR
Incentive Program provides eligible
professionals, eligible hospitals, and critical access hospitals
incentive payments that support the optimal use
of technology for health care (Incentive Programs—Regulations
and Guidance). Although a practice can
implement an EHR without addressing Meaningful Use,
practices that do so are less likely to realize the full
potential of EHRs to improve patient care and practice
operations (Mostashari, Tripathi, & Kendall, 2009).
Attesting to Meaningful Use, although it is an important
milestone for a practice, is not an end unto itself.
Practices that can achieve Meaningful Use will be able to use
their EHR to obtain a deep understanding of
their patient population and uncover aspects of patient care that
could be improved. Using a planned,
strategic approach to CQI will help a practice move from
reporting the requirements for Meaningful Use to
improving patient care and meeting other practice goals. The
literature shows a strong link between an
explicit CQI strategy and high performance (Shortell et al.,
2009). Thus, applying CQI principles and
strategies will transform numbers on a spreadsheet or a report
into a plan for action that identifies areas of
focus and the steps and processes needed to improve those areas
continually and iteratively.
To establish an effective CQI strategy, a practice should
(Wagner et al., 2012):
• Choose and use a formal model for QI.
• Establish and monitor metrics to evaluate improvement efforts
and outcomes routinely.
• Ensure all staff members understand the metrics for success.
• Ensure that patients, families, providers, and care team
members are involved in QI activities.
• Optimize use of an EHR and health IT to meet Meaningful Use
criteria.
Put together, CQI and Meaningful Use can move a practice from
its current state to a more desirable future
state. As depicted in Exhibit 1, CQI begins with a clear vision
of the transformed environment, identification
of necessary changes to achieve that vision, and input from
engaged team members who understand the
needs for the practice. In short, the journey to the desired future
state involves a transformation of people,
process, and technology. Meaningful Use of health information
and an explicit commitment to CQI can help a
practice establish that clear vision and implement it
successfully.
Exhibit 1. Using CQI to Move From Current State to Future
State
Illustration of the Continuous Quality Improvement (CQI)
process. CQI acts on People, Process, and Technology to create
the Future State of Improved Patient Care, Improved Population
Health and Reduced Cost. Meaningful Use is shown in the
background.
1.4 WHAT DOES CQI LOOK LIKE IN PRACTICE?
In undertaking any CQI initiative, a practice must consider
three components: (1) structure, (2) process, and
(3) outcomes (Donabedian, 1980). Exhibit 2 builds on these
three components within the context of health
information technology and illustrates the basic premise of CQI:
any initiative involving an EHR to improve
patient care must focus on the structure (especially technology
and people) and process that lead to the
expected outputs and then ultimately to the desired outcomes.
Exhibit 2. CQI Framework Model
An illustration of the CQI Framework Model shows the
Continuous Quality Improvement (CQI) Initiative providing
input to Structure (People and Technology) and Process phases.
Structure leads to Process which leads to Output and finally to
Outcome. The Outcome provides feedback to CQI and the
process repeats.
Structure. Structure includes the technological, human,
physical, and financial assets a practice possesses
to carry out its work. CQI examines the characteristics (e.g.,
number, mix, location, quality, and adequacy) of
health IT resources, staff and consultants, physical space, and
financial resources.
Process. The activities, workflows, or task(s) carried out to
achieve an output or outcome are considered
process. Although CQI strategies in the literature focus more
commonly on clinical processes, CQI also
applies to administrative processes. The EHR functions that
meet Meaningful Use Stage I Core Objectives
support key clinical and administrative processes (see this link:
MU Objectives).
Output. Outputs are the immediate predecessor to the change in
the patient’s status. Not all outputs are
clinical; many practices will have outputs tied to business or
efficiency goals and, accordingly, require
changes to administrative and billing processes.
Outcome. Outcomes are the end result of care (AHRQ, 2009)
and a change in the patient’s current and
future health status due to antecedent health care interventions
(Kazley, 2008). Desired changes in the cost
and efficiency of patient care or a return on investment in the
EHR can also be considered outcomes.
Feedback Loop. In Exhibit 2, a feedback loop between the
output/outcome and the CQI initiative represents
its cyclical, iterative nature. Once a change to the structure and
process is implemented, a practice must
determine whether it achieved the intended outcome and, if not,
what other changes could be considered. If
the outcome is achieved, the practice could determine how to
produce an even better outcome or achieve it
more efficiently and with less cost.
Text box titled "It's not just a model: CQI in action"
Consider structure, process, and outcomes and how they apply
to a CQI initiative to improve
obesity screening and follow-up in an adult primary care
practice that has attested to Meaningful
Use (MU) Stage 1.
• The structural assessment of an obesity screening CQI
initiative would examine the
functionality of the EHR for weight management tasks; staff’s
knowledge and expertise to
counsel overweight and obese patients; and the adequacy of the
space and materials to provide
education and social support. The CQI initiative might also
consider the acquisition of a
separate Adult Body Mass Index (BMI) Improvement module
that would randomly survey patient
charts at two different times to monitor BMI changes in the
clinic’s overweight and obese patient
population.
• A process assessment of an obesity screening CQI initiative
would ensure that the EHR can
record and chart vital signs such as BMI (MU Core Measure #
8) so providers can easily
capture, monitor, and trend a patient’s weight from visit to
visit. The CQI initiative would also
assess the clinical summaries given to the patient (MU Core
Measure # 13) and whether these
summaries effectively educate patients about what they need to
do between visits to maintain or
reduce their BMI.
• An output leading to an improvement in BMI could be
ensuring that BMI is recorded in the EHR
at each visit for every patient so that the provider can track the
patient’s progress. Another
output might be to ensure that every patient receives nutrition
and exercise counseling between
office visits.
A primary outcome for an obesity CQI initiative is reducing
BMI by a certain amount within a
specified time frame that the provider/patient believes is
achievable. Longer term outcomes for the
individual patient are improved quality of life and a longer life.
For the practice that is part of an
ACO or PCMH, reducing BMI in its patient panel may result in
better reimbursement and receipt of
financial incentives. A CQI initiative supported by an EHR
allows the practice to monitor progress
towards the outcome for a patient. Equally important, however,
is the ability to monitor the progress
of the practice’s obese population by establishing a disease
registry. Thus, a practice can monitor
changes in BMI for each patient, estimate the proportion of
patients in the practice that are
overweight and obese, and identify patients who are outliers.
These powerful data can transform
how a provider chooses to care for these patients.
If the obesity CQI initiative reveals that efforts to screen and
counsel are successful in reducing
BMI in patients, the next question is whether the enhanced
screening works for everyone equally
well. A closer look at the data may reveal, for example, that
screening had almost no effect on older
male patients. These insights would be used to consider
additional modifications to the structure
and process of the screening to help older male patients achieve
weight management goals.
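The registry-style monitoring described in this example reduces to a few simple calculations on structured EHR data. The sketch below is purely illustrative: the field names and sample values are hypothetical, and it assumes height and weight have been exported from the EHR as structured fields. It computes BMI, flags overweight and obese patients, and estimates the proportion of the panel above the screening threshold.

```python
# Minimal sketch: BMI screening metrics from structured EHR data.
# Field names and sample records are hypothetical, not from any specific EHR.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index = weight (kg) divided by height (m) squared."""
    return weight_kg / (height_m ** 2)

# Hypothetical registry export: one record per patient visit.
visits = [
    {"patient_id": "A001", "weight_kg": 98.0, "height_m": 1.75},
    {"patient_id": "A002", "weight_kg": 71.5, "height_m": 1.80},
    {"patient_id": "A003", "weight_kg": 88.2, "height_m": 1.62},
]

flagged = []  # patients whose BMI is 25 or higher (overweight or obese)
for v in visits:
    value = bmi(v["weight_kg"], v["height_m"])
    if value >= 25.0:
        flagged.append((v["patient_id"], round(value, 1)))

proportion = len(flagged) / len(visits)
print(f"Overweight/obese: {len(flagged)} of {len(visits)} "
      f"({proportion:.0%}) -> {flagged}")
```

Repeating the same calculation at each reporting interval produces the trend data that the feedback loop in Exhibit 2 depends on.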
2 Strategies for CQI
2.1 LEADING CQI STRATEGIES IN HEALTH CARE
Fortunately, a practice can choose many well-established CQI
programs and strategies to achieve its QI
aims. The specific strategy a practice selects depends on several
factors. For example, a practice
incorporating CQI in an initial EHR implementation will have
different goals and objectives than a practice
that has already achieved Stage 1 Meaningful Use and wants a
more targeted CQI effort directed at
Meaningful Use Stage 2. The following sections briefly describe
four strategies for CQI widely used in the
health care industry today. The descriptions cover the basic
principles of each strategy followed by the
specific action steps each strategy recommends. An
accompanying case study illustrates the use of the
strategy in a practice setting. This section concludes with a
side-by-side comparison of the strategies and
guidance on selecting the appropriate strategy. Again, the final
CQI strategy the practice uses could be a
combination of tools, methods, and processes that best meet the
practice culture and the unique CQI
initiative.
2.1.1 The Institute for Healthcare Improvement (IHI) Model for
Improvement
The IHI Model for Improvement is a simple strategy that many
organizations currently use to accelerate their improvement
strategies. A CQI initiative based on the IHI Model for
Improvement focuses on setting aims and teambuilding to
achieve change. As depicted in Exhibit 3, it promotes
improvement by seeking answers to three questions:
• What are we trying to accomplish?
• How will we know that a change is an improvement?
• What changes can we make that will result in
improvement?
Exhibit 3. IHI Model for Improvement
Illustration of the IHI Model for Improvement shows a cycle
with steps "Plan", "Do", "Study", and "Act". The cycle receives
inputs from and provides feedback to the motivating questions:
What are we trying to accomplish?
How will we know that a change is an improvement? and
What changes can we make that will result in improvement?
Principles
To answer these questions, a CQI initiative uses a Plan-Do-
Study-Act (PDSA) cycle to test a proposed change or CQI
initiative in the actual work setting so changes are rapidly
deployed and disseminated. The cycle involves the following
seven steps:
Form the team. Including the appropriate people on a process
improvement team is critical to a successful effort. The practice
(or provider) must determine the team’s size and members.
Practice staff persons are the experts at what works well in the
practice and what needs to be improved. Include them in
identifying and planning the implementation of any CQI
initiative.
Set aims. This step answers the question: What are we trying to
accomplish? Aims should be specific, have
a defined time period, and be measurable. Aims should also
include a definition of who will be affected:
patient population, staff members, etc. For practice
transformation, the aims should ideally be consistent with
achieving one or more of the triple aims previously discussed.
textbox reads "Big Sandy Health Care in Eastern Kentucky used
the Plan-Do-Study-Act cycle to develop and test a new patient
portal. The staff first identified the priority functions for the
portal and the workflow changes involved. Then, staff received
training and pilot tested the portal with volunteer patients.
Issues from the pilot testing were addressed before the rollout,
and a patient survey on the portal provided feedback to aid
further improvements."
Establish measures. This step answers the question: How will we know that a change is an improvement? Outcome measures should be identified to evaluate if aims are met. Practices should select measures using data they are able to collect.
Select changes. This step answers the question: What changes can we make that will result in improvement? The team should consider ideas from multiple sources and select changes that make sense.
Test changes. First, the changes must be planned and downstream impacts analyzed to assess whether they had the desired outcome or output. Once the changes are implemented, the results should be observed so that lessons learned and best practices can be used to drive future changes.
Implement changes. After testing a change on a small scale, learning from each test, and refining the change through several PDSA cycles, the team may implement the change on a broader scale—for example, for a pilot population or on an entire unit.
Spread changes. After successful implementation of a change(s) for a pilot population or an entire unit, the team can disseminate the changes to other parts of the organization.
When to use. The IHI Model for Improvement is best used for a CQI initiative that requires a gradual, incremental, and sustained approach to QI so changes are not undermined by excessive detail and unknowns (Hughes, 2008).
To find out more:
http://www.ihi.org/knowledge/Pages/HowtoImprove/ScienceofImprovementHowtoImprove.aspx
2.1.2 Lean
Lean is a continuous improvement process that gained
international recognition when Womack, Jones and
Roos published a book on the Toyota Production System. Many
hospitals have borrowed the key Lean
principles of reducing non-value added activities, mistake-
proofing tasks, and relentlessly focusing on
reducing waste to improve health care delivery. Lean helps
operationalize the change to create work flows,
handoffs, and processes that work over the long term (see
Exhibit 4). A key focus of change is on reducing
or eliminating seven kinds of waste and improving efficiency
(Levinson & Rerick, 2002):
• Overproduction
• Waiting; time in queue
• Transportation
• Nonvalue-adding processes
• Inventory
• Motion
• Costs of quality, scrap, rework, and inspection
Exhibit 4. Lean Principles for Operational Efficiency1
1 http://www.pmvantage.com/leanbusiness.php
Illustration of the five principles to guide the activities for operational efficiency shows a cycle with the title "Lean Business" at center. Step 1. Identify Value. Define value from the client's viewpoint and express value in terms of a specific project. Step 2. Specify Value. Map each step that brings a process or service to the client... both value added and non-value added. Step 3. Workflow Process. The continuous improvement of processes, information, and all services from end-to-end... throughout a cycle. Step 4. Push/Pull Activities. Fortifying an upstream process at the request of the customer's downstream requirements or specifications. Step 5. Process Delivery. The elimination of wasteful activities creates value for the customer and promotes continuous improvement.
Principles
A Lean CQI initiative focuses mainly on alleviating overburden
and
inconsistency while reducing waste to create a process that can
deliver the required results smoothly (Holweg, 2007). Teams
frequently use these principles once they know what system
change
will result in an improvement.
Identify which features create value. Identifying value is
determined through both internal and external perspectives. For
example, patients might value reduced phone time on hold,
while
providers value having all the information at hand available
when
making an appointment. The specific values depend on the
process
being streamlined.
Identify the value stream (the main set of activities for care).
Once value is determined, practice leaders should identify
activities
that contribute value. The entire sequence of activities is called
the
value stream. Activities that fall outside that realm are
considered
waste and should be streamlined.
Text box reads: Group Health in Seattle, Washington used Lean principles to standardize a PCMH model across 26 sites over 14 months. The initiative focused on improving the efficiency of clinic workflow and processes to accommodate smaller panel sizes, longer appointment times, proactive chronic care, and greater use of licensed practical nurses and medical assistants for pre-visit preparation, follow-up, and outreach (Hsu et al., 2012).
Improve flow so activities and information move in an efficient
process. Once value-added activities
and necessary nonvalue activities are identified, practice leaders
should ensure that improvement efforts are
directed to make the activities flow smoothly without
interruption.
Test the clinical and/or administrative process. Once the
improved flow is identified and documented, it
should be tested. Testing can include a pilot of a few patients or
a one-day test where the new process is
used throughout the practice.
Perfect the process. Lessons learned and tensions that arise
from this process can be identified and
changes can be made before the final process is rolled out.
Text box reads
Heading: When to use.
The Lean approach is useful in simplifying overcomplicated
processes and takes a
holistic approach that considers interrelated processes and
workflows. Lean is not as well suited to
small, discrete changes to a process as it is to whole groups or
clusters of processes. Typically, Lean is
applied in large health care settings although individual
practices that belong to a physician network or a
hospital may be engaged in a Lean initiative. Teams frequently
use Lean once they know what system
change will result in improvement from tried and tested best
practices adopted in other settings.
To find out more:
http://www.lean.org/whatslean/
http://www.pmvantage.com/leanbusiness.php
http://asq.org/learn-about-quality/lean/overview/overview.html
http://www.lean.org/WhoWeAre/HealthcarePartner.cfm
End of text box
2.1.3 Six Sigma
Six Sigma is a business management and QI strategy that
originated in the U.S. manufacturing industry; it
seeks to improve efficiency by identifying and removing the
causes of defects (errors) and minimizing
variability in manufacturing and business processes (as shown
in Exhibit 5). The term Six Sigma originated
from terminology associated with the statistical modeling of
processes. This QI strategy, thus, combines
statistical analysis with quality management methods. Six
Sigma also creates a special infrastructure of
people within the organization [Green Belts (beginner) to Black
Belts (most advanced)] who are experts in
these methods. Lean and Six Sigma are often combined when a
key goal is to reduce waste and errors.
Exhibit 5. Six Sigma Model2
2 http://archian.wordpress.com/2012/11/05/marketing-wisdom-of-understanding-six-sigma/
An illustration of the Six Sigma Model shows a cycle with 5 spheres: Define, Measure, Analyze, Improve, and Control.
text box reads
Six Sigma was used to improve coordination between primary
care physicians and diabetic specialists, reduce unnecessary
appointments and reduce waiting times for appointments with
specialists. The initiative defined and measured process
indicators, analyzed descriptive statistics, and developed
strategies based on the results. These strategies involved
changing clinical protocol for hospitalized patients, increasing
the autonomy of nursing staff, reorganizing the scheduling
office, and specializing diabetic clinics to provide certain types
of diabetic care. (Paccagnella et al., 2012).
end of text box
Principles
Six Sigma relies on the ability to obtain data and on process and
outcome measures. A CQI initiative using
Six Sigma adheres to five principles:
Define. The first step is to define the process and outcome to be
improved, define their key characteristics,
and map the relevant inputs into the process that will lead to the
desired outputs and outcomes. This step
also involves defining the boundary for the CQI project.
Measure. Once the process and outcome to be improved are
defined, the CQI initiative must track
performance through data collection. Performance measures can
be captured through structured
observation, surveys, and the EHR.
Analyze. After measures are put in place, data can be collected
and analyzed to determine how the practice
is doing. Ideally, baseline data would be collected prior to
putting new processes into place and at regular
intervals. A CQI initiative would review these data as part of a
process review to identify the causes of
problems.
Improve. The results of the analysis are used to inform
improvements. For example, if process changes
result in workarounds, adjustments should be made. Similarly,
if quality measures do not show improvement
after a change is implemented, the CQI initiative should
examine what additional or alternate changes could
be implemented to improve the process.
Control. Control involves ongoing monitoring and improvement
as needed.
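Because Six Sigma expresses performance statistically, it can help to see how a defect count translates into a "sigma level." The sketch below is purely illustrative; the appointment-scheduling numbers are hypothetical, and the conversion uses the conventional 1.5-sigma shift applied to defects per million opportunities (DPMO).

```python
# Illustrative only: convert a defect count into DPMO and a sigma level.
# The scheduling numbers are hypothetical; the 1.5-sigma shift is the
# conventional adjustment used in Six Sigma reporting.
from statistics import NormalDist

defects = 38            # e.g., referral appointments scheduled incorrectly
units = 1500            # appointments handled in the measurement period
opportunities = 1       # defect opportunities per appointment

dpmo = defects / (units * opportunities) * 1_000_000
sigma_level = NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

print(f"DPMO: {dpmo:,.0f}  Sigma level: {sigma_level:.2f}")
# A process operating at "six sigma" corresponds to roughly 3.4 DPMO.
```

Tracking this figure before and after a change gives the Measure and Control steps a common yardstick.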
Text box starts
Heading When to use.
In health care settings, Six Sigma is often combined with
aspects of Lean to focus on both
quality and efficiency, particularly when practices embark on a
large-scale EHR implementation. When
smaller changes are involved, one method may make more
sense.
To find out more:
http://www.isixsigma.com/dictionary/dmaic/
http://asq.org/healthcaresixsigma/lean.html
http://archian.wordpress.com/2012/11/05/marketing-wisdom-of-understanding-six-sigma/
Text box ends
2.1.4 Baldrige Quality Award Criteria
The Malcolm Baldrige Quality Award has evolved from an
honor to recognize quality to an approach and
methodology that enables continued excellence through self-
assessment. This QI strategy is focused on total
organizational improvement and instituting a culture of CQI. As
depicted in Exhibit 6, it promotes the
achievement of strategic goals through the alignment of
resources and improved communication,
productivity, and effectiveness in the seven criteria categories:
• Leadership
• Strategic planning
• Customer focus
• Measurement, analysis, and knowledge management
• Workforce focus
• Operations focus
• Results
Exhibit 6. Baldrige Core Values and Concepts3
3 http://www.nist.gov/baldrige/graphics.cfm
An illustration of the Baldrige Core values and Concepts shows
3 concentric circles. Baldrige Health Care Criteria build on
Core Values and Concepts (the innermost circle): Visionary
leadership, Focus on the future, Managing for innovation,
Agility, Organizational and personal learning, Valuing
workforce members and partners, Patient-focused excellence,
Societal responsibility, Management by fact, Focus on results
and creating value, and Systems perspective. These are
embedded in systematic processes (Criteria categories 1-6 –
middle circle). 1 Leadership, 2 Strategic Planning, 3 Operations
Focus, 4 Workforce Focus, 5 Customer Focus, and 6
Measurement, Analysis and Knowledge Management. These
yield Performance Results (Criteria Category 7 – the outer
circle) Leadership and Governance Results, Financial and
Market Results, Workforce-Focused Results, Customer-Focused
Results, and Health Care and Process Results
Principles
Examining the Baldrige criteria thoroughly can help a practice
identify strengths and opportunities for
improvement. These areas can shape future organizational
plans. Because award criteria have been widely
used, they provide a common language and principles on which
to base improvement. A CQI initiative using
the Baldrige Award criteria follows these principles (NIST,
2010).
Identify the boundaries/scope of the self-assessment. This
assessment could be the entire practice or a
particular function (structure, process, or outcome) in the
practice.
Select champions. Because Baldrige involves an organization-
wide change, each of the seven criteria
areas requires a champion. Some champions may lead more than
one criteria area.
Select teams for each champion. The purpose of the team is
to work with the champion to conduct the self-assessment. This
group should be interdisciplinary so multiple stakeholders are
represented.
Collect data and information to answer questions. The core
self-assessment is guided by 10 critical questions for each
Baldrige criterion. Some examples of questions are: What are
the organization’s core competencies? What are your key work
processes? How do you build and manage relationships with
your customers? Champions can prepare an organizational
profile describing the practice and its challenges as a starting
point for the data collection.
Share the answers to the Baldrige criteria questions among
the category teams. The goal of this exercise is to identify
common themes in the answers to the 10 structured questions
for each criterion.
Create and communicate an action plan for improvement.
Each criterion team creates and communicates an action plan
for improvement based on the answers and organizational
priorities. Teams identify strengths and opportunities, set
priorities, and develop action plans.
Evaluate the self-assessment process and identify possible
improvements. Practice leaders,
champions, and teams evaluate the self-assessment and think
about improvement mechanisms.
Text box starts
The Southcentral Foundation (SCF), a not-for-profit health care organization serving Alaska Natives and American Indian peoples in Anchorage and the surrounding area, used the Baldrige Criteria to create the Nuka system of care, which is owned, managed, designed, and driven by Alaska Native people, referred to as "customer-owners". This focus on the customer-owner helped SCF to achieve the highest level of Patient-Centered Medical Home™ recognition from the National Committee for Quality Assurance (NCQA) in 2010 (National Institute of Standards and Technology, http://www.nist.gov/baldrige/award_recipients/southcentral_profile.cfm). Textbox ends
Text box starts
When to use.
The Baldrige Award criteria focus more on identifying problems and setting up teams to take ownership of them and less on the specific steps to carry out the planned improvement. This strategy is holistic and may require the organization to make significant cultural changes.
To find out more:
http://www.nist.gov/baldrige/
http://asq.org/learn-about-quality/malcolm-baldrige-award/overview/overview.html
Text box ends
2.2 WHICH CQI STRATEGY IS RIGHT?
A practice should choose a CQI strategy based on its goals and objectives for adoption, implementation, and Meaningful Use of its EHR; these goals could include practice transformation priorities beyond Meaningful Use (e.g., becoming a PCMH). The strategies described have similarities and differences. Exhibit 7 summarizes the key features of these strategies and the CQI initiatives for which they are most appropriate.
Exhibit 7. Summary of Leading Strategies for CQI

Strategy: IHI Model for Improvement
Brief Description: Emphasizes the use of the Plan-Do-Study-Act methodology to establish aims, define the problem, identify measures of success, and systematically test them in short, rapid cycles. Can be combined with other strategies.
Type of CQI Initiative: Emphasis on process and outcome. Best for specific problems whose solutions can be refined over time. Is a gradual, incremental approach to CQI. Ideal for achieving small, quick wins, applying lessons learned to new cycles, and identifying best practices.
Example: Big Sandy Health Care uses PDSA to pilot test a new patient portal.
References: Institute for Healthcare Improvement; MN Department of Health

Strategy: Lean
Brief Description: Alleviates overburden and inconsistency in processes by eliminating waste, redundancy, and unnecessary effort. Emphasis is on developing efficient systems that involve whole groups or clusters of related processes.
Type of CQI Initiative: Emphasis on process. Simplifies overcomplicated processes and considers interdependencies. Best for known problems with a known system change solution. Integrated throughout the organization or practice. Ideal for large, complex health care organizations and practice networks that want to standardize operations across multiple units or practice sites.
Example: Group Health uses Lean to standardize PCMH processes across 26 diverse practices.
References: http://www.lean.org/whatslean/; http://asq.org/learn-about-quality/lean/overview/overview.html; http://www.lean.org/WhoWeAre/HealthcarePartner.cfm; Practice White Paper; Creating a Lean Practice

Strategy: Six Sigma
Brief Description: Emphasizes identifying and removing the causes of defects (errors) and minimizing variability in manufacturing and business processes. Uses statistical methods and a hierarchy of users within the organization (Black Belts, Green Belts, etc.) who are experts in these methods.
Type of CQI Initiative: Emphasis on processes and outcomes. Best for processes plagued by wide variability (logging of pharmaceuticals, standardizing referral processes, etc.). Is a heavily quantitative approach to CQI. Can be adapted for targeted changes to specific processes. Typically combined with Lean when the focus is on efficiency and quality. Ideal for practices that want to rigorously quantify improvements in safety, quality, and cost effectiveness.
Example: An integrated provider network uses Six Sigma to reduce appointment times for diabetic care.
References: SixSigma; Six Sigma Online; How to Develop a Lean Six Sigma Deployment Plan; 6sigma.us; ASQ; Wisdom of Six Sigma

Strategy: Baldrige Award Criteria
Brief Description: Emphasizes identifying problems and setting up teams to take ownership of those problems. Promotes a culture of continued excellence via team-based self-assessment in seven criteria areas. Is less focused on the specific steps to achieve improvement.
Type of CQI Initiative: Emphasis on structure and outcomes. Best for practice-wide problem assessment and goal setting. A broad, holistic approach to CQI initiated at strategic times. Ideal for practices that want to establish a new CQI system or overhaul an existing one.
Example: Southcentral Foundation uses the Baldrige Award criteria to completely redesign patient care and achieve the highest level of PCMH recognition.
References: NIST—Baldrige; Baldrige Overview
2.3 BEST PRACTICES TO CONSIDER IN USING A CQI
STRATEGY
2.3.1 Have the Right Data and Use the Data Well
Regardless of the strategy, a practice must analyze quality data
to properly conduct CQI; thus, every effort
should be made to ensure data are timely, accurate, and measure
what they are intended to measure.
Consider the source of the data for each metric needed to assess
performance. The EHR cannot
collect every kind of data needed for CQI. Some data may have
to be collected by someone watching and
tracking activities in real time or through surveys of staff and
patients.
Ensure that the EHR collects the data needed to support CQI efforts as structured data. Data stored in free-text fields or document images cannot be used to automate data collection. Ideally, a practice should know this before an EHR is purchased or upgraded.
Establish targets and benchmarks. The results of a CQI analysis
are meaningless if no data exist for
comparison. Many clinical measures have national and regional
benchmarks (e.g., HEDIS for process of
care measures). The best benchmark, however, is generated
within the practice through the collection of
baseline data which the practice uses to set a reasonable target
for improvement over a specified period.
Improvement is tracked by periodic comparison of pre- and
post-data.
Establish a broad set of measures—structure, process, and
outcomes. Although quality (outcome)
measurement is a prime concern, on its own it tells nothing
about why outcomes occur. Collecting structure
and process measures will help uncover and address the
underlying causes of poor performance.
Aggregate data to assess the practice population. One of the
most efficient ways to carry out CQI
initiatives focused on quality of care is to aggregate the data for
patients with similar conditions into a
registry. These patients often experience similar issues with
treatments, medication adherence, and
coordination with specialists so it makes sense to view them as
a distinct population that a practice monitors
and tracks over time. In addition, a disease registry allows the
practice to identify patients who are outliers
and may need even more attention and follow-up.
Conduct periodic data quality audits. Most measures are captured as simple statistics (e.g., counts, percentages, means, and modes), so ensure that the EHR is producing accurate and complete numerator and denominator data.
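Most of the practices above follow the same pattern: define a numerator and denominator, audit the underlying data, and compare the resulting rate to a baseline or target. The sketch below is a hypothetical illustration; the measure, field names, records, and target are invented for the example rather than drawn from HEDIS or any particular EHR.

```python
# Hypothetical illustration: compute a simple quality-measure rate from
# registry records and audit the denominator before reporting it.
# Field names, records, and the target are invented for the example.

records = [
    {"patient_id": "P01", "eligible": True,  "bmi_documented": True},
    {"patient_id": "P02", "eligible": True,  "bmi_documented": False},
    {"patient_id": "P03", "eligible": True,  "bmi_documented": None},   # missing data
    {"patient_id": "P04", "eligible": False, "bmi_documented": True},   # not in denominator
    {"patient_id": "P05", "eligible": True,  "bmi_documented": True},
]

# Data quality audit: eligible records with a missing value need follow-up,
# otherwise they silently shrink or inflate the measure.
incomplete = [r["patient_id"] for r in records
              if r["eligible"] and r["bmi_documented"] is None]

denominator = [r for r in records
               if r["eligible"] and r["bmi_documented"] is not None]
numerator = [r for r in denominator if r["bmi_documented"]]

rate = len(numerator) / len(denominator) if denominator else 0.0
target = 0.80  # practice-set target, e.g., from its own baseline period

print(f"Documented BMI: {len(numerator)}/{len(denominator)} = {rate:.0%} "
      f"(target {target:.0%}); records needing review: {incomplete}")
```

The same pattern applies whether the comparison is against a national benchmark or the practice's own baseline.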
2.3.2 Have the Resources to Finish the Job
Align the scope of your CQI initiative to the time and resources
to carry it out. Although the EHR can make
the CQI process more efficient by automating some data
collection, staff members will still need to analyze
and interpret the data and then translate those interpretations
into action.
Establish reasonable budgets and time frames for any given CQI
initiative. The inputs for any given
CQI initiative are more likely to be sufficient if the practice
considers up front what is needed to complete the
initiative. A dedicated staff and a timeline also establish
accountability for the CQI initiative.
Break down larger CQI initiatives into smaller ones. No matter
how expansive and complex, almost any
CQI initiative can be achieved if it can be broken down into
smaller projects that build on one another.
Successfully completing one small project builds enthusiasm
and energy for the next project.
Establish a stopping point where success is defined and new
initiatives started. Almost any process
could be refined and improved indefinitely—a practice should
determine what is “good enough” so CQI
resources can be directed to other quality improvement needs.
Invest in a CQI infrastructure. CQI requires a certain set of
inputs (e.g., a culture of learning; skills to
collect, analyze, and use data from the EHR) that take time and
money to acquire but are investments in the
practice’s long-term viability. Like any new skill or task, CQI
gets easier and more efficient over time as staff
members become competent in CQI methods and figure out the
best ways to integrate CQI into practice
operations.
Celebrate Success. Any CQI effort, no matter how big or small,
involves time and effort that should be
recognized by the participating staff and leadership. Taking
time to reflect on the CQI team’s
accomplishments will build enthusiasm and energy to tackle the
next problem.
References

Agency for Healthcare Research and Quality. What is health care quality and who decides? Statement of Carolyn Clancy before the Subcommittee on Health Care, Committee on Finance, U.S. Senate, 2009. Available at: http://www.finance.senate.gov/library/hearings/download/?id=ceb25971-e946-47aa-921b-01736f7e0d35

Berwick DM, Nolan TW, Whittington J. The triple aim: care, health, and cost. Health Aff (Millwood). 2008 May-Jun;27(3):759-69.

Donabedian A. Explorations in quality assessment and monitoring: the definition of quality and approaches to its assessment. Ann Arbor, MI: Health Administration Press; 1980.

Edwards PJ, et al. Maximizing your investment in EHR: Utilizing EHRs to inform continuous quality improvement. JHIM 2008;22(1):32-7.

Holweg M. The genealogy of lean production. J Oper Manag 2007;25(2):420-37.

Hughes RG, ed. Patient safety and quality: An evidence-based handbook for nurses. Rockville, MD: Agency for Healthcare Research and Quality; 2008. Chapter 44, Tools and strategies for quality improvement and patient safety.

Hsu C, Coleman K, Ross TR, Johnson E, Fishman PA, Larson EB, Liss D, Trescott C, Reid RJ. Spreading a patient-centered medical home redesign: A case study. J Ambul Care Manage. 2012 Apr-Jun;35(2):99-108.

Kazley AS, Ozcan YA. Do hospitals with electronic medical records (EMRs) provide higher quality care? An examination of three clinical conditions. Med Care Res Rev 2008;65(4):496-513.

Levinson WA, Rerick RA. Lean enterprise: A synergistic approach to minimizing waste. ASQ Quality Press; 2002. xiii-xiv, 38.

Mostashari F, Tripathi M, Kendall M. A tale of two large community electronic health record extension projects. Health Aff (Millwood). 2009 Mar-Apr;28(2):345-56.

Nutting PA, Miller WL, Crabtree BF, et al. Initial lessons from the first national demonstration project on practice transformation to a patient-centered medical home. Ann Fam Med 2009;7(3):254-60.

Paccagnella A, Mauri A, Spinella N. Quality improvement for integrated management of patients with type 2 diabetes (PRIHTA project stage 1). Qual Manag Health Care. 2012 Jul-Sep;21(3):146-59.

National Institute of Standards and Technology (NIST). Baldrige performance excellence program: Self-assessing your organization; 2010. Available at: http://www.nist.gov/baldrige/enter/self.cfm

National Institute of Standards and Technology (NIST). The Malcolm Baldrige national quality award 2011 award recipient, healthcare category: Southcentral Foundation. Available at: http://www.nist.gov/Baldrige/award_recipients/southcentral_profile.cfm

Institute for Healthcare Improvement. Science of improvement: How to improve; 2011 Apr. Available at: http://www.ihi.org/knowledge/Pages/HowtoImprove/ScienceofImprovementHowtoImprove.aspx

Institute of Medicine. Crossing the quality chasm: A new health system for the 21st century. Washington, DC: National Academies of Science; 2001. Available at: http://www.nap.edu/catalog/10027.html

Shortell SM, Gillies R, Siddique J, et al. Improving chronic illness care: A longitudinal cohort analysis of large physician organizations. Med Care 2009;47(9):932-9.

Wagner EH, Coleman K, Reid RJ, Phillips K, Abrams MK, Sugarman JR. The changes involved in patient-centered medical home transformation. Prim Care. 2012 Jun;39(2):241-59.

Womack JP, Jones DT, Roos D. The machine that changed the world: The story of lean production. HarperCollins; 1991.
Nephrology Nursing Journal March-April 2018 Vol. 45, No. 2
Exploring the Evidence
Quantitative and Qualitative Research
Focusing on the Fundamentals:
A Simplistic Differentiation Between
Qualitative and Quantitative Research
Shannon Rutberg
Christina D. Bouikidis
Research is categorized as quantitative or qualitative
in nature. Quantitative research employs the use of
numbers and accuracy, while qualitative research
focuses on lived experiences and human percep-
tions (Polit & Beck, 2012). Research itself has a few vari-
eties that can be explained using analogies of making a
cup of coffee or tea.
To make coffee, the amount of water and coffee
grounds to be used must be measured. This precise meas-
urement determines the amount of coffee and the strength
of the brew. The key word in this quantitative research
analogy is measure. To make tea, hot water must be poured
over a tea bag in a mug. The length of time a person
leaves a tea bag in the mug comes down to perception of
the strength of the tea desired. The key word in qualitative
research is perception. This article describes and explores
the differences between quantitative (measure) and qualitative (perception) research.
Types of Research
Nursing research can be defined as a “systematic
inquiry designed to develop trustworthy evidence about
issues of importance to the nursing profession, including
nursing practice, education, administration, and informat-
ics” (Polit & Beck, 2012, p. 736). Researchers determine
the type of research to employ based upon the research
question being investigated. The two types of research
methods are quantitative and qualitative. Quantitative
research uses a rigorous and controlled design to examine
phenomena using precise measurement (Polit & Beck,
2012). For example, a quantitative study may investigate a
patient’s heart rate before and after consuming a caffeinat-
ed beverage, like a specific brand/type of coffee. In our
coffee and tea analogy, in a quantitative study, the
research participant may be asked to drink a 12-ounce cup
of coffee, and after the participant consumes the coffee,
the researcher measures the participant’s heart rate in
beats per minute. Qualitative research examines phenomena using an in-depth, holistic approach and a fluid research design that produces rich, telling narratives (Polit & Beck, 2012). An example of a qualitative study is exploring the participant's preference of coffee over tea, and feelings or mood one experiences after drinking this favorite hot beverage.
Exploring the Evidence is a department in the Nephrology
Nursing
Journal designed to provide a summary of evidence-based
research
reports related to contemporary nephrology nursing practice
issues.
Content for this department is provided by members of the
ANNA
Research Committee. Committee members review the current
literature
related to a clinical practice topic and provide a summary of the
evidence
and implications for best practice. Readers are invited to submit
questions or topic areas that pertain to evidence-based
nephrology
practice issues. Address correspondence to: Tamara Kear,
Exploring the
Evidence Department Editor, ANNA National Office, East Holly
Avenue/Box 56, Pitman, NJ 08071-0056; (856) 256-2320; or via
e-mail
at [email protected] The opinions and assertions contained
herein
are the private views of the contributors and do not necessarily
reflect
the views of the American Nephrology Nurses’ Association.
Copyright 2018 American Nephrology Nurses’ Association
Rutberg, S., & Bouikidis, C.D. (2018). Focusing on the
fundamen-
tals: A simplistic differentiation between qualitative and
quantitative research. Nephrology Nursing Journal, 45(2), 209-
212.
This article describes qualitative, quantitative, and mixed meth-
ods research. Various classifications of each research design,
including specific categories within each research method, are
explored. Attributes and differentiating characteristics, such as
formulating research questions and identifying a research prob-
lem, are examined, and various research method designs and
reasons to select one method over another for a research project
are discussed.
Key Words: Qualitative research, quantitative research,
mixed methods research, method design.
Shannon Rutberg, MS, MSN, BS, RN-BC, is a Clinical Nurse
Educator,
Bryn Mawr Hospital, Main Line Health System Bryn Mawr, PA.
Christina D. Bouikidis, MSN, RNC-OB, is a Clinical
Informatics
Educator, Main Line Health System, Berwyn, PA.
Statement of Disclosure: The authors reported no actual or
potential con-
flict of interest in relation to this continuing nursing education
activity.
Note: The Learning Outcome, additional statements of
disclosure, and
instructions for CNE evaluation can be found on page 213.
Quantitative Research
Quantitative research can range from clinical trials for
new treatments and medications to surveying nursing staff
and patients. There are many reasons for selecting a quan-
titative research study design. For example, one may
choose quantitative research if a lack of research exists on
a particular topic, if there are unanswered research ques-
tions, or if the research topic under consideration could
make a meaningful impact on patient care (Polit & Beck,
2012). There are several different types of quantitative
research. Some of the most commonly employed quanti-
tative designs include experimental, quasi-experimental,
and non-experimental.
Experimental Design
An experimental design isolates the identified phenom-
ena in a laboratory and controls conditions under which
the experiment occurs (Polit & Beck, 2012). There is a con-
trol group and at least one experimental group in this
design. The most reliable studies use a randomization
process for group assignment wherein the control group
receives a placebo (an intervention that does not have ther-
apeutic significance) and the experimental group receives
an intervention (Polit & Beck, 2012). For example, if one is
studying the effects of caffeine on heart rate 15 minutes
after consuming coffee, using a quantitative experimental
design, the design may be set up similarly to the descrip-
tion in Table 1. Randomization will allow an equal chance
for each participant to be assigned to either the control or
the experimental group. Then the heart rate is measured
before and after the intervention. The intervention is drink-
ing decaffeinated coffee for the control group and drinking
caffeinated coffee for the experimental group. Data collect-
ed (heart rate pre- and post-coffee consumption) are then
analyzed, and conclusions are drawn.
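To make the measurement and analysis step concrete, the following sketch simulates the design summarized in Table 1. It is illustrative only: the heart-rate values are invented, and the assumed caffeine effect is chosen simply so the two groups differ.

```python
# Illustrative only: analyze a randomized pre/post design like Table 1.
# Heart-rate values (beats per minute) are invented for the example.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]   # 20 volunteers
random.shuffle(participants)                          # random assignment
control, experimental = participants[:10], participants[10:]

def mean_change(group, assumed_shift):
    """Mean post-minus-pre heart-rate change for a group (simulated data)."""
    changes = []
    for _ in group:
        pre = random.gauss(72, 5)                     # baseline heart rate
        post = pre + random.gauss(assumed_shift, 3)   # change after the beverage
        changes.append(post - pre)
    return sum(changes) / len(changes)

# Assumed effects for the simulation: caffeine raises heart rate about 6 bpm,
# while the decaffeinated placebo has no systematic effect.
print(f"Control (decaf) mean change: {mean_change(control, 0):+.1f} bpm")
print(f"Experimental (caffeinated) mean change: {mean_change(experimental, 6):+.1f} bpm")
```

In a real study, the observed difference between the groups would then be tested for statistical significance before conclusions are drawn.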
Quasi-Experimental Design
Quasi-experimental designs include an intervention in
the design; however, designs do not always include a con-
trol group, which is a cornerstone to an authentic experi -
mental design. This type of design does not have random-
ization like the experimental design (Polit & Beck, 2012).
Instead, there may be an intervention put into place with
outcome measures pre- and post-intervention implemen-
tation, and a comparison used to identify if the interven-
tion made a difference. For example, perhaps a coffee
chain store wants to see if sampling a new flavor of coffee,
like hazelnut, will increase revenue over a one-month
period. At location A, hazelnut coffee was distributed as a
sample to customers in line waiting to purchase coffee. At
location B, no samples were distributed to customers.
Sales of hazelnut coffee are examined at both locations
prior to the intervention (hazelnut sample given out to
customers waiting in line) and then again one month later,
after the intervention. Lastly, a monthly revenue is com-
pared at both sites to measure if free hazelnut coffee sam-
ples impacted sales.
Non-Experimental Design
The final type of quantitative research discussed in this
article is the nonexperimental design. Manipulation of
variables does not occur with this design, but an interest
exists to observe the phenomena and identify if a relation-
ship exists (Polit & Beck, 2012). Perhaps someone is interested in whether drinking coffee throughout one's life decreases the incidence of having a stroke. Researchers for this type of
study will ask participants to report approximately how
much coffee they drank daily, and data would be compared to their stroke incidence. Researchers will then analyze these data to determine whether an association exists between coffee consumption and stroke incidence by examining behavior that occurred in the past; because the exposure is reported rather than manipulated, such a design can suggest, but not establish, a causal relationship.
Table 1. Comparison of the Control and Experimental Groups

Assignment:
  Control group – 10 participants randomly assigned to the control group.
  Experimental group – 10 participants randomly assigned to the experimental group.
Pre-intervention data collection:
  Control group – Heart rate prior to beverage consumption.
  Experimental group – Heart rate prior to beverage consumption.
Intervention:
  Control group – Consume a placebo drink (i.e., decaffeinated coffee).
  Experimental group – Consume the experimental drink (i.e., caffeinated coffee).
Post-intervention data collection:
  Control group – Obtain heart rate 15 minutes after the beverage was consumed.
  Experimental group – Obtain heart rate 15 minutes after the beverage was consumed.
Quantitative Study Attributes
In quantitative studies, the researcher uses standardized
questionnaires or experiments to collect numeric data.
Quantitative research is conducted in a more structured
environment that often allows the researcher to have con-
trol over study variables, environment, and research ques-
tions. Quantitative research may be used to determine rela-
tionships between variables and outcomes. Quantitative
research involves the development of a hypothesis – a
description of the anticipated result, relationship, or
expected outcome from the question being researched
(Polit & Beck, 2012). For example, in the experimental
study mentioned in Table 1, one may hypothesize the con-
trol group will not see an increase in heart rate. However,
in the experimental group, one may hypothesize an
increase in heart rate will occur. Data collected (heart rate
before and after coffee consumption) are analyzed, and
conclusions are drawn.
Qualitative Research
According to Choy (2014), qualitative studies address
the social aspect of research. The researcher uses open-
ended questions and interviews subjects in a semi-struc-
tured fashion. Interviews often take place in the participant's natural setting or a quiet environment, like a confer-
ence room. Qualitative research methodology is often
employed when the problem is not well understood and
there is an existing desire to explore the problem thor-
oughly. Typically, a rich narrative from participant inter-
views is generated and then analyzed in qualitative
research in an attempt to answer the research question.
Many questions will be used to uncover the problem and
address it comprehensively (Polit & Beck, 2014).
Types of Qualitative Research Design
Types of qualitative research include ethnography, phe-
nomenology, grounded theory, historical research, and
case studies (Polit & Beck, 2014). Ethnography reveals the
way culture is defined, the behavior associated with cul-
ture, and how culture is understood. Ethnography design
allows the researcher to investigate shared meanings that
influence behaviors of a group (Polit & Beck, 2012).
Phenomenology is employed to investigate a person’s
lived experience and uncover meanings of this experience
(Polit & Beck, 2012). Nurses often use phenomenology
research to better understand topics that may be part of
human experiences, such as chronic pain or domestic vio-
lence (Polit & Beck, 2012). Using the coffee analogy, a
researcher may use phenomenology to investigate atti-
tudes and practices around a specific time of day for coffee
consumption. For example, some individuals may prefer
coffee in the morning, while some prefer coffee through-
out the day, and others only enjoy coffee after a meal.
Grounded theory investigates actions and effects of the
behavior in a culture. Grounded theory methodology may
be used to investigate afternoon tea time practices in
Europe as compared to morning coffee habits in the
United States.
Historical research examines the past using recorded
data, such as photos or objects. The historical researcher
may look at the size of coffee mugs in antique photos spanning a few centuries to provide a historical perspective. Perhaps the photos are telling of cultural practices surrounding coffee consumption over the centuries – in solitude, or in small or large groups.
Qualitative Research Attributes
When selecting a qualitative research design, keep in
mind the unique attributes. Qualitative research method-
ology may involve multiple means of data collection to
further understand the problem, such as interviews in
addition to observations (Polit & Beck, 2012). Further,
qualitative research is flexible and adapts to new informa-
tion based on data collected, provides a holistic perspec-
tive on the topic, and allows the researcher to become
entrenched in the investigation. The researcher is the research tool, and data are constantly being analyzed from the commencement of the study. The decision to select a qualitative methodology requires several considerations, a great amount of planning (such as which research design fits the study best, the time necessary to devote to the study, a data collection plan, and resources available to collect the data), and finally, self-reflection on any personal presumptions and biases toward the topic (Polit & Beck, 2014).
Selecting a sample population in qualitative research
begins with identifying eligibility to participate in the
study based on the research question. The participant
needs to have had exposure or experience with the con-
tent being investigated. A thorough interview will uncover
the encounter the participant had with the research ques-
tion or experience. There will most likely be a few stan-
dard questions asked of all participants and subsequent
questions that will evolve based upon the participant’s
experience/answers. Thus, there tends to be a small sample size with a high volume of narrative data that needs to be
analyzed and interpreted to identify trends intended to
answer the research question (Polit & Beck, 2014).
Mixed Methods
Combining both quantitative and qualitative methodology in a single study is known as a mixed methods study.
According to Tashakkori and Creswell (2007), mixed
methods research is "research in which the researcher collects and analyzes data, integrates the findings, and draws
inferences using both qualitative and quantitative
approaches or methods in a single study or program of
inquiry” (p. 4). This approach has the potential to allow
the researcher to collect two sets of data. An example of
using mixed methods would be examining effects of con-
suming a caffeinated beverage prior to bedtime. The
researcher may want to investigate the impact of caffeine
on the participant's heart rate and ask how consuming the caffeine drink makes him or her feel. There is the quantitative numeric data measuring the heart rate and the qualitative data addressing the participant's experience or perception.
Polit and Beck (2012) describe advantages of using a
mixed method approach, including complementary, prac-
ticality, incrementality, enhanced validity, and collabora-
tion. Complementary refers to quantitative and qualitative
approaches complementing each other. Mixed methods
use words and numbers, so the study is not limited to
using just one method of data collection. Practicality refers
to the researcher using the method that best addresses the
research question while using one method from the mixed
method approach. Incrementality is defined as the
researcher taking steps in the study, where each step leads
to another in an orderly fashion. An example of incremen-
tality includes following a particular sequence, or an
order, just as a recipe follows steps in order to accurately
produce a desired product. Data collected from one
method provide feedback to promote understanding of
data from the other method. With enhanced validity, researchers can be more confident about the validity of their results because multiple types of data support
the hypothesis. Collaboration provides opportunity for
both quantitative and qualitative researchers to work on
similar problems in a collaborative nature.
Once the researcher has decided to move forward with
a mixed-method study, the next step is to decide on the
designs to employ. Design options include triangulation,
embedded, explanatory, and exploratory. Triangulation is
obtaining quantitative and qualitative data concurrently,
with equal importance given to each design. An embedded
design uses one type of data to support the other data type.
An explanatory design focuses on collecting one data type
and then moving to collecting the other data type.
Exploratory is another sequential design, where the researcher collects one type of data, such as qualitative, in the first phase; then, using those findings, the researcher collects the other data, quantitative, in the second phase. Typically, the first phase concentrates on a thorough investigation of a phenomenon about which little has been researched, and the second phase focuses on sorting the data for further investigation. While a two-step process may provide the researcher with more data, it can also be time-consuming.
Conclusion
In summary, just like tea and coffee, research has similarities and differences. In the simplest of terms, it can be
viewed as measuring (how many scoops of coffee
grounds?) compared to perception and experience (how
long to steep the tea?). Quantitative research can range
from experiments with a control group to studies looking
at retrospective data and suggesting causal relationships.
Qualitative research is conducted in the presence of limit-
ed research on a particular topic, and descriptive narra-
tives have the potential to provide detailed information
regarding this particular area. Mixed methods research involves combining the quantitative and qualitative threads, integrating the two sets of data, and using those results to make meta-inferences about the research question.
Research is hard work, but just like sipping coffee or tea at
the end of a long day, it is rewarding and satisfying.
References
Choy, L.T. (2014). The strengths and weaknesses of research methodology: Comparison and complimentary between qualitative and quantitative approaches. Journal of Humanities and Social Science, 19(4), 99-104.
Polit, D.F., & Beck, C.T. (2012). Nursing research: Generating and assessing evidence for nursing practice (9th ed.). Philadelphia, PA: Wolters Kluwer.
Polit, D.F., & Beck, C.T. (2014). Essentials of nursing research: Appraising evidence for nursing practice (8th ed.). Philadelphia, PA: Wolters Kluwer.
Tashakkori, A., & Creswell, J.W. (2007). The new era of mixed methods. Journal of Mixed Methods Research, 1(1), 3-7.
EVALUATION FORM (ANNJ1811)
1.3 Contact Hours | Expires: April 30, 2020

Learning Outcome
After completing this learning activity, the learner will be able to define quantitative, qualitative, and mixed method research studies, and discuss their attributes.

Learner Engagement Activity
For more information on this topic, view the session "Reading and Understanding Research" presented by Tamara Kear in the ANNA Online Library (https://library.annanurse.org/anna/sessions/4361/view).

All questions must be answered to complete the learning activity. Longer answers to open-ended questions may be typed on a separate page.

Complete the Following (please print)
Name: ______  Address: ______  City: ______  Telephone: ______  Email: ______
ANNA Member: Yes / No   Member #: ______
Payment: Check Enclosed / American Express / Visa / MasterCard   Total Amount Submitted: ______
Credit Card Number: ______  Exp. Date: ______  Name as it Appears on the Card: ______

1. I verify I have completed this education activity. Yes / No   Signature: ______
(For items 2-5, circle one: 1 = Strongly Disagree, 5 = Strongly Agree)
2. The learning outcome could be achieved using the content provided. 1 2 3 4 5
3. The authors stimulated my desire to learn, and demonstrated knowledge and expertise in the content areas. 1 2 3 4 5
4. I am more confident in my abilities since completing this education activity. 1 2 3 4 5
5. The content was relevant to my practice. 1 2 3 4 5
6. Did the learner engagement activity add value to this education activity? Yes / No
7. Commitment to change practice (select one):
a. I will make a change to my current practice as the result of this education activity.
b. I am considering a change to my current practice.
c. This education activity confirms my current practice.
d. I am not yet convinced that any change in practice is warranted.
e. I perceive there may be barriers to changing my current practice.
8. What do you plan to do differently in your practice as a result of completing this educational activity? (Required)
9. What information from this education activity do you plan to share with a professional colleague? (Required)
10. This education activity was free of bias, product promotion, and commercial interest* influence. (Required) Yes / No
11. If no, please explain: ______

* Commercial interest – any entity either producing, marketing, reselling, or distributing healthcare goods or services consumed by or used on patients, or an entity that is owned or controlled by an entity that produces, markets, resells, or distributes healthcare goods or services consumed by or used on patients. Exceptions are non-profits, government, and non-healthcare related companies.

SUBMISSION INSTRUCTIONS
Online Submission
Articles are free to ANNA members. Regular Article Price: $15. CNE Evaluation Price: $15.
Online submissions of this CNE evaluation form are available at annanurse.org/library. CNE certificates will be available immediately upon successful completion of the evaluation.
Mail/Fax Submission
ANNA Member Price: $15. Regular Price: $25.
• Send this page to the ANNA National Office; East Holly Avenue/Box 56; Pitman, NJ 08071-0056, or fax this form to (856) 589-7463.
• Enclose a check or money order payable to ANNA. Fees listed in payment section.
• A certificate for the contact hours will be awarded by ANNA.
• Please allow 2-3 weeks for processing.
• You may submit multiple answer forms in one mailing; however, because of various processing procedures for each answer form, you may not receive all of your certificates returned in one mailing.
Note: If you wish to keep the journal intact, you may photocopy the answer sheet or access this activity at www.annanurse.org/journal.

Nephrology Nursing Journal Editorial Board Statements of Disclosure
In accordance with ANCC governing rules, Nephrology Nursing Journal Editorial Board statements of disclosure are published with each CNE offering. The statements of disclosure for this offering are published below.
Paula Dutka, MSN, RN, CNN, disclosed that she is a consultant for Rockwell Medical, GSK, CARA Therapeutics, Otsuka, Akebia Therapeutics, Bayer, and Fibrogen.
Norma J. Gomez, MBA, MSN, CNNe, disclosed that she is a member of the ZS Pharma Advisory Council.
Tamara M. Kear, PhD, RN, CNS, CNN, disclosed that she is a member of the ANNA Board of Directors and serves on the Scientific Advisory Board for Kibow Biotech, Inc.
All other members of the Editorial Board had no actual or potential conflict of interest in relation to this continuing nursing education activity.
This article was reviewed and formatted for contact hour credit by Beth Ulrich, EdD, RN, FACHE, FAAN, Nephrology Nursing Journal Editor, and Sally Russell, MN, CMSRN, CPP, ANNA Education Director.
American Nephrology Nurses Association – Provider is accredited with distinction as a provider of continuing nursing education by the American Nurses Credentialing Center's Commission on Accreditation. ANNA is a provider approved by the California Board of Registered Nursing, provider number CEP 00910. This CNE article meets the Nephrology Nursing Certification Commission's (NNCC's) continuing nursing education requirements for certification and recertification.
Knee Surg Sports Traumatol Arthrosc (2017) 25:2305–2308
DOI 10.1007/s00167-017-4582-y

EDITORIAL

While modern medicine evolves continuously, evidence-based research methodology remains: how register studies should be interpreted and appreciated

Eleonor Svantesson2 · Eric Hamrin Senorski2 · Kurt P. Spindler3 · Olufemi R. Ayeni4 · Freddie H. Fu5 · Jón Karlsson1,2 · Kristian Samuelsson1,2

Published online: 13 June 2017
© European Society of Sports Traumatology, Knee Surgery, Arthroscopy (ESSKA) 2017

* Kristian Samuelsson, [email protected]
1 Department of Orthopaedics, Sahlgrenska University Hospital, Mölndal, Sweden
2 Department of Orthopaedics, Institute of Clinical Sciences, The Sahlgrenska Academy, University of Gothenburg, 431 80 Gothenburg, Sweden
3 Cleveland Clinic Sports Health Center, Garfield Heights, OH, USA
4 Division of Orthopaedic Surgery, Department of Surgery, McMaster University, Hamilton, ON, Canada
5 Department of Orthopedic Surgery, University of Pittsburgh, Pittsburgh, PA, USA

In just a few decades, the scientific stage has undergone some dramatic changes. Novel studies are produced at a "faster than ever" pace, and technological advances enable insights into areas that would previously have been referred to as science fiction. However, the purpose of research will always be the same: to serve as a firm foundation to practise evidence-based medicine and ultimately improve the treatment of our patients. Is the explosive evolvement of research publications and technological advances always beneficial when it comes to fulfilling this purpose? As we are served with a steady stream of new "significant" findings, it is more important than ever critically to evaluate the evidence that is presented and be aware of the limitations and pitfalls that we encounter every day as modern scientists and clinicians.

Look! A significant result!

One of the goals for researchers is to get their work published and acknowledged, preferably with multiple citations. A winning tactic to accomplish this is to present novel results and findings. Interestingly, it often happens that the most cited papers are those that contradict other reports or are proved to be fundamentally wrong [14]. So, it does not really matter how likely a result is to be true or clinically valuable: a spectacular result can entrench the findings of a study and influence clinical practice. It goes without saying that the most important factor of all in this quest is that a significant P value is presented. Today, it is generally accepted that significance, often defined as a P value of <0.05, means impact and evidence. However, this is an incorrect appreciation of the P value and could lead to an inappropriate approach to this statistical method. It has been shown that P values and hypothesis-testing methods are commonly misunderstood by researchers [6, 11, 17]
and instead tend to lead to a limited perspective in relation
to a study result.
Sir Ronald Fisher is regarded as one of the founders of
modern statistics and is probably most associated with the
concept of the P value [10, 23]. Fisher suggested that the P
value reflected the probability that the result being observed
was compatible with the null hypothesis. In other words,
if it were true that there was no (null) difference between
the factors being investigated, the P value would give an
estimation of the likelihood of observing a difference as
extreme as or more extreme than your outcome showed.
However, Fisher never propagated the P < 0.05 criterion
that is currently almost glorified as our ultimate means of
conclusion making. On the contrary, Fisher appeared not
to give much consideration to the actual P value number
[19]. The most important thing, according to Fisher, was to
repeat the experiments until the investigator felt that he or
she had a plausible certainty of declaring how the experiment should be performed and interpreted, something that
is infrequently implemented nowadays. The P value was
originally an indicative tool throughout this process, not
something synonymous with evidence.
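Stated in symbols (our notation, not the editorial's), Fisher's reading of the P value is simply a conditional probability computed under the null hypothesis:

```latex
% Probability, assuming the null hypothesis H_0 is true, of a test statistic
% at least as extreme as the one actually observed:
P = \Pr\left( |T| \ge |t_{\mathrm{obs}}| \,\middle|\, H_0 \text{ is true} \right)
```

Nothing in this definition refers to the probability that the null hypothesis itself is true, which is the misreading discussed below.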
In a study recently published in JAMA Cardiology [19],
common misconceptions about P values were discussed. It
was emphasised that, at best, the P value plays a minor role
in defining the scientific or clinical importance of a study
and that multiple elements, including effect size, precision
of estimate of effect size and knowledge of prior relevant
research, need to be integrated in the assessment [19]. This
is strongly inconsistent with the concept of a P value of
<0.05 as an indicator of a clinically or scientifically impor-
tant difference. Moreover, the authors highlight the miscon-
ception that a small P value indicates reliable and replicable
results by stating that what works in medicine is a process
and not the product of a single experiment. No conclusion about a given study's reproducibility can be drawn from the P value, nor can its reliability be determined without considering other factors [19]. One frequently forgotten factor is how plausible the hypothesis was in the
first place. It is easy to fall into the trap of thinking that a
P value of <0.05 means that there is a 95% chance of true
effect. However, as probability is always based on certain
conditions, the most important question should be: what
was the probability from the beginning? If the chance of a
real effect from the beginning is small, a significant P value
will only slightly increase the chances of a true effect. Or,
as Regina Nuzzo put it in an article highlighting statistical
errors in Nature [21]: “The more implausible the hypothe-
sis—telepathy, aliens, homeopathy—the greater the chance
that an exciting finding is a false alarm, no matter what the
P value is” [21].
Moreover, the P value says nothing about the effect size. The P value is basically a calculation of two factors: the difference from null and the variance. In a study with a small standard deviation (high precision), even a very small difference from zero (treatment effect) can therefore result in a significant P value. How frequently do we ask ourselves, "From what numbers was this P value generated?" when reading a paper? It is not until we look at the effect size that it is really possible to determine whether the treatment of interest has an impact. Well then, what is the definition of impact? A term often used to describe the effectiveness of a treatment is the "minimum clinically important difference" (MCID). For a study to impact clinical decision-making, the measurement given must be greater than the MCID and, moreover, the absolute difference needs to be known. These factors determine the number needed to treat and thereby indicate the impact. However, current methods for determining the MCID are the subject of debate, and it has been concluded that they are associated with shortcomings [9].
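A small simulation makes the point concrete. The hypothetical Python sketch below (not from the editorial) generates a clinically trivial difference of 0.2 units measured with high precision in a large sample; the P value comes out tiny, yet the effect size is modest, and whether 0.2 units exceeds any MCID remains a separate clinical question.

```python
# Hypothetical illustration: a large, precise study can produce P < 0.05
# for a difference that may be clinically unimportant.
import random
from scipy import stats

random.seed(0)

n = 5000
treated = [random.gauss(0.2, 1.0) for _ in range(n)]   # assumed true effect: 0.2 units
control = [random.gauss(0.0, 1.0) for _ in range(n)]

t_stat, p_value = stats.ttest_ind(treated, control)
mean_diff = sum(treated) / n - sum(control) / n

# Standardised effect size (Cohen's d with a pooled standard deviation):
pooled_sd = ((stats.tstd(treated) ** 2 + stats.tstd(control) ** 2) / 2) ** 0.5
cohens_d = mean_diff / pooled_sd

print(f"p = {p_value:.2g}, mean difference = {mean_diff:.3f}, Cohen's d = {cohens_d:.2f}")
# The P value alone cannot tell us whether this difference exceeds the MCID.
```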
We should also remember that non-significant P values are sometimes used to conclude that the interventions of interest are "equivalent" or "non-inferior", which is extremely incorrect if the primary study design was not intended to investigate equivalence between two treatments [18]. Without primarily designing the study for this purpose, it is impossible to ascertain the power for detecting the ideal clinically relevant difference that is needed for a declaration of
equivalence. It can, in fact, have detrimental downstream
effects on patient care if a true suboptimal treatment is
declared as being non-inferior to a gold-standard treatment
[12]. Instead, let us accept the fact that not all studies will
show significant results, nor should they. There has been a
bias against “negative trials”, not showing significance, in
the past and because of this we can only speculate about
whether or not they could have impacted any of today’s
knowledge. If the acceptance of non-significant results
increases, this could contribute to the elimination of pub-
lication bias.
The impact of study design
Regardless of study design, the optimal research study
should give an estimate of the effectiveness of one treat-
ment over another, with a minimised risk of systematic
bias. The ability and validity of doing this for observa-
tional studies compared with randomised controlled tri -
als (RCTs) has been the subject of an ongoing debate
for decades. To determine the efficacy of a treatment or
intervention (i.e. the extent to which a beneficial result
is produced under ideal conditions), the RCTs remain
the gold standard and are regarded as the most suitable
tool for making the most precise estimates of treatment
effect [22]. The only more highly valued study design is
the meta-analysis of large, well-conducted RCTs. Stud-
ies with an observational design are often conducted
when determining the effectiveness of an intervention in
“real-world” scenarios (i.e. the extent to which an inter-
vention produces an outcome under normal day-to-day
circumstances). A Cochrane Review published in 2014
[3] examined fourteen methodological reviews compar-
ing quantitative effect size estimates measuring the efficacy or effectiveness of interventions tested in RCTs with
those tested in observational studies. Eleven (79%) of
the examined reviews showed no significant difference
between observational studies and RCTs. Two reviews
concluded that observational studies had smaller effects
of interest, while one suggested the exact opposite.
Moreover, the review underscored the importance of con-
sidering the heterogeneity of meta-analyses of RCTs or
observational studies, in addition to focusing on the study
design, as these factors influence the estimates reflective
of true effectiveness [3].
We must never take away the power and the validity
of a well-conducted RCT. However, we need to under-
line the fact that evidence-based medicine is at risk if we
focus myopically on the RCT study design and give it the
false credibility of being able to answer all our questions.
We must also acknowledge the weaknesses of RCTs and
combine information obtained from this study design,
while recognising the value of additional information
from prospective longitudinal cohort studies. The Fragility Index (FI) is a method for determining the robustness
of statistically significant findings in RCTs, and it was
recently applied to 48 clinical trials related to sports med-
icine and arthroscopic surgery [16]. The FI represents the
minimum number of patients in one arm of an RCT that
is required to change the outcome, from a non-event to an
event, in order to change a result from statistically sig-
nificant to non-significant. So, the lower the number is,
the more fragile the significant result. The systematic sur-
vey somewhat worryingly showed that the median FI of
included studies was 2 [16]. Could it be that we are currently basing conclusions on the outcomes of just two patients in some orthopaedic sports medicine studies? The FI should be an indicative tool in future clinical
studies which, in combination with other statistical sum-
maries from a study, could identify results that should be
interpreted cautiously or require further investigation [5].
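As an illustration of the idea (not the exact algorithm used in the cited survey), the hypothetical Python sketch below estimates a Fragility Index for a two-arm trial with a binary outcome by converting non-events to events in the arm with fewer events until Fisher's exact test is no longer significant; the trial counts are invented.

```python
# Hypothetical sketch of the Fragility Index idea for a binary outcome.
from scipy.stats import fisher_exact

def fragility_index(events_a, n_a, events_b, n_b, alpha=0.05):
    """Flip non-events to events in arm A (assumed to be the arm with fewer
    events) until the two-sided Fisher's exact P value reaches alpha.
    Returns None if the baseline result is not significant."""
    _, p = fisher_exact([[events_a, n_a - events_a],
                         [events_b, n_b - events_b]])
    if p >= alpha:
        return None
    flips = 0
    while p < alpha and events_a < n_a:
        events_a += 1                                  # one non-event becomes an event
        flips += 1
        _, p = fisher_exact([[events_a, n_a - events_a],
                             [events_b, n_b - events_b]])
    return flips

# Invented trial: 1/50 events versus 9/50 events, significant at baseline.
print(fragility_index(1, 50, 9, 50))                   # a small value = a fragile finding
```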
Ultimately, the foundation of science is the ability to
generalise the results of a study. The factors that affect
the risk of an event or an outcome in a real-life situation
are a result of the natural individual variation surround-
ing us. It is therefore somewhat paradoxical in RCTs to
distribute risk factors evenly and eliminate all the fac-
tors that may interact with the intervention. We should
remember that, when drawing conclusions from an RCT, these are, on many occasions, based on data obtained from highly specialised centres in one part of the world. The
population is enrolled based on strict inclusion and exclu-
sion criteria, which should always trigger the questions
of “how many individuals failed to meet them?” and
“could their participation have made any difference to the
result?” Moreover, RCTs have also been criticised for not
representing usual care, which may in fact be the case at
a highly specialised centre for sports medicine [1].
High-quality observational studies – an asset in evidence-based medicine
In addition to the generalisability of the results, large
observational studies originating from large registers
offer advantages in terms of identifying incidences,
understanding practices and determining the long-term
effects of different types of exposure/intervention. In par -
ticular, adverse events can be identified and rare outcomes
can be found [13]. Well-conducted, large cohort stud-
ies are regarded as the highest level of evidence among
observational studies, as the temporality of events can be
established. To put it another way, the cause of an event
always precedes the effect [13]. The SPORT spine trial, MOON ACLR, and MARS revision ACLR are examples of prospective longitudinal cohorts based on STROBE criteria [24] and multivariate modelling, where almost 100% of patients are enrolled. Further, they address the modelling of who will respond to an intervention. While an RCT estimates the average benefit of an intervention, registers like these help determine the individual patient to whom the treatment should be applied, as they
model the multitude of risk factors a patient presents to
clinicians.
On the other hand, observational studies are limited by indication bias and are subject to the potential effect of unmeasured confounders. The random variation and the confounding factors must of course be reconciled with existing knowledge in observational studies. The more individual variation there is, the more the precision of what we are measuring is affected. There is variation in biological responses, in previous therapies, in activity levels and types of activity, and in lifestyles, to mention just a few. However, we would like to underline the importance of seeing these factors as an opportunity in observational studies: an opportunity to acquire greater knowledge of the relationship between factors of variance and the outcome, as well as possible underlying mechanisms. Using statistical approaches that adjust for confounders makes a good analysis possible [7, 20].
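As a toy example of why such adjustment matters, the hypothetical Python sketch below stratifies an invented data set by a single confounder (activity level); the crude risk ratio suggests an effect of the exposure, while the stratum-specific risk ratios show none once the confounder is held constant.

```python
# Hypothetical counts illustrating confounding: the exposure looks protective
# overall, but not within strata of the confounder (activity level).
strata = {
    "high_activity": {"exposed": (4, 200), "unexposed": (2, 100)},
    "low_activity": {"exposed": (30, 100), "unexposed": (60, 200)},
}

def risk(events_and_total):
    events, total = events_and_total
    return events / total

# Crude (unadjusted) risks, ignoring the confounder entirely:
crude = {
    arm: (sum(strata[s][arm][0] for s in strata),
          sum(strata[s][arm][1] for s in strata))
    for arm in ("exposed", "unexposed")
}
print(f"Crude risk ratio: {risk(crude['exposed']) / risk(crude['unexposed']):.2f}")

# Stratum-specific risk ratios, holding the confounder constant:
for stratum, arms in strata.items():
    rr = risk(arms["exposed"]) / risk(arms["unexposed"])
    print(f"{stratum}: risk ratio = {rr:.2f}")
```

Stratification is only one simple option; regression models that include the measured confounders serve the same purpose in larger register analyses.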
In order to improve the precision of results from the
many registers being established around the world, we
must more clearly define and investigate the true con-
founders. With regard to anterior cruciate ligament (ACL)
reconstruction, data from several large registers have ena-
bled the valuable identification of predictors of outcome.
However, there is as yet no existing predictive model for
multivariate analysis where confounders are taken into
account, which could potentially jeopardise the validity
[2]. ACL reconstruction is one of the most researched
areas in orthopaedic medicine, and this is therefore
noteworthy because the lack of consensus in determin-
ing the factors that need to be included in a model may
alter the results of studies investigating the same condi -
tion. Another key point for high-level register studies is
that the representativeness of the cohort is ensured. When
registers are being established, comprehensive data entry
is needed and it is important that the investigators take
responsibility for monitoring the enrolment and attrition
of the cohort.
As researchers and clinicians, we sometimes need to take
a step back and evaluate how we can continue to implement
evidence-based medicine. We must understand that many
factors contribute to what we choose to call evidence and
that there is no single way to find it. Moreover, scientists
before us recognised that only by repeated experiments is
it possible to establish the reproducibility of the investiga-
tion and thereby get closer to the truth about the efficacy of
our treatments. Instead of naively believing that significant
results (P < 0.05) from any study are synonymous with
evidence, we can take advantage of the strengths of differ -
ent study designs. We should remember that many studies
have found that the results of observational and RCT stud-
ies correlate well [4, 8, 15]. We encourage the performance
of the best research whenever possible. Sometimes this is
a well-conducted RCT or highly controlled prospective
longitudinal cohorts, and at other times it is the establish-
ment of large patient registers. With regard to the obser-
vational study design, future comprehensive prospective
cohort studies can provide us with important knowledge
and be significant contributors to evidence-based medicine.
Nevertheless, the P value is a tool that can be helpful, but
it must be applied thoughtfully and while appreciating its
limitations and assumptions. Evidence-based medicine as
defined by the original practitioners involves making a clinical decision by combining clinical experience and the best
available evidence from RCTs and registers and by incor -
porating the patient’s values and preferences.
References
1. Albert RK (2013) "Lies, damned lies…" and observational studies in comparative effectiveness research. Am J Respir Crit Care Med 187(11):1173–1177
2. An VV, Scholes C, Mhaskar VA, Hadden W, Parker D (2016) Limitations in predicting outcome following primary ACL reconstruction with single-bundle hamstring autograft – a systematic review. Knee 24(2):170–178
3. Anglemyer A, Horvath HT, Bero L (2014) Healthcare outcomes assessed with observational study designs compared with those assessed in randomized trials. Cochrane Database Syst Rev 4:Mr000034
4. Benson K, Hartz AJ (2000) A comparison of observational studies and randomized, controlled trials. N Engl J Med 342(25):1878–1886
5. Carter RE, McKie PM, Storlie CB (2017) The Fragility Index: a P-value in sheep's clothing? Eur Heart J 38(5):346–348
6. Cohen HW (2011) P values: use and misuse in medical literature. Am J Hypertens 24(1):18–23
7. Concato J (2012) Is it time for medicine-based evidence? JAMA 307(15):1641–1643
8. Concato J, Shah N, Horwitz RI (2000) Randomized, controlled trials, observational studies, and the hierarchy of research designs. N Engl J Med 342(25):1887–1892
9. Copay AG, Subach BR, Glassman SD, Polly DW Jr, Schuler TC (2007) Understanding the minimum clinically important difference: a review of concepts and methods. Spine J 7(5):541–546
10. Fisher R (1973) Statistical methods and scientific inference, 3rd edn. Hafner Publishing Company, New York
11. Goodman S (2008) A dirty dozen: twelve p-value misconceptions. Semin Hematol 45(3):135–140
12. Greene WL, Concato J, Feinstein AR (2000) Claims of equivalence in medical research: are they supported by the evidence? Ann Intern Med 132(9):715–722
13. Inacio MC, Paxton EW, Dillon MT (2016) Understanding orthopaedic registry studies: a comparison with clinical studies. J Bone Joint Surg Am 98(1):e3
14. Ioannidis JA (2005) Contradicted and initially stronger effects in highly cited clinical research. JAMA 294(2):218–228
15. Ioannidis JP, Haidich AB, Pappa M, Pantazis N, Kokori SI, Tektonidou MG, Contopoulos-Ioannidis DG, Lau J (2001) Comparison of evidence of treatment effects in randomized and nonrandomized studies. JAMA 286(7):821–830
16. Khan M, Evaniew N, Gichuru M, Habib A, Ayeni OR, Bedi A, Walsh M, Devereaux PJ, Bhandari M (2016) The fragility of statistically significant findings from randomized trials in sports surgery. Am J Sports Med. doi:10.1177/0363546516674469
17. Kyriacou DN (2016) The enduring evolution of the P value. JAMA 315(11):1113–1115
18. Lowe WR (2016) Editorial Commentary: "There, It Fits!" – Justifying Nonsignificant P Values. Arthroscopy 32(11):2318–2321
19. Mark DB, Lee KL, Harrell FE Jr (2016) Understanding the role of P values and hypothesis tests in clinical research. JAMA Cardiol 1(9):1048–1054
20. Methodology Committee of the Patient-Centered Outcomes Research Institute (PCORI) (2012) Methodological standards and patient-centeredness in comparative effectiveness research: the PCORI perspective. JAMA 307(15):1636–1640
21. Nuzzo R (2014) Statistical errors – P values, the 'golden standard' of statistical validity, are not as reliable as many scientists assume. Nature 508:150–152
22. Rosenberg W, Donald A (1995) Evidence based medicine: an approach to clinical problem-solving. BMJ 310(6987):1122–1126
23. Salsburg D (2002) The lady tasting tea. Holt Paperbacks, New York
24. von Elm E, Altman DG, Egger M, Pocock SJ, Gotzsche PC, Vandenbroucke JP (2007) The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Lancet 370(9596):1453–1457
  • 4. EHR Implementation Lifecycle illustration shows 6 arrows. Step 1 Access; Step 2 Plan; Step 3 Select; Step 4 Implement; Step 5 Meaningfully Use; Step 6 Improve Quality. DESCRIPTION AND INSTRUCTIONS Continuous Quality Improvement (CQI) is a quality management process that encourages all health care team members to continuously ask the questions, “How are we doing?” and “Can we do it better?” (Edwards, 2008). To address these questions, a practice needs structured clinical and administrative data. EHRs can, if properly designed and implemented, capture these data efficiently and effectively, thereby transforming patient care in ways that might have been difficult or impossible with paper records alone. This Primer introduces CQI concepts, strategies, and techniques a practice can use to design an effective CQI strategy for EHR implementation, achieve Meaningful Use of the system, and ultimately improve the quality and safety of patient care. A practice can use CQI throughout the EHR implementation lifecycle.
  • 5. CQI strategies have been successfully implemented in many industries, including health care. The CQI conceptual framework presented in this Primer provides a foundation to design and manage CQI initiatives and offers points to consider when deciding which strategy works best for a particular practice or organization. A CQI toolkit, currently under development, will elaborate in more detail the concepts provided in this Primer. Section 1 of this Primer defines CQI and its relationship to Meaningful Use and EHR implementation and presents a case study of CQI concepts. Section 2 describes the leading strategies to be considered when designing a CQI program for a practice. Section 3 provides guidance for planning a CQI initiative and selecting a CQI strategy that matches the needs of various practice settings. � TABLE OF CONTENTS
  • 6. 1 Continuous Quality Improvement (CQI) in the EHR Implementation Lifecycle; 1.1 Introduction; 1.2 What Is Continuous Quality Improvement?; 1.3 How Can CQI Help a Practice Make the Most of Meaningful Use?; 1.4 What Does CQI Look Like in Practice?; 2 Strategies for CQI; 2.1 Leading CQI Strategies in Health Care
  • 7. 2.1.1 The Institute for Healthcare Improvement (IHI) Model for Improvement; 2.1.2 Lean; 2.1.3 Six Sigma; 2.1.4 Baldrige Quality Award Criteria; 2.2 Which CQI Strategy Is Right?; 2.3 Best Practices to Consider in Using a CQI Strategy; 2.3.1 Have the Right Data and Use the Data Well; 2.3.2 Have the Resources to Finish the Job. LIST OF EXHIBITS: Exhibit 1. Using CQI to Move From Current State to Future State
  • 8. Exhibit 2. CQI Framework Model; Exhibit 3. IHI Model for Improvement; Exhibit 4. Lean Principles for Operational Efficiency; Exhibit 5. Six Sigma Model; Exhibit 6. Baldrige Core Values and Concepts; Exhibit 7. Summary of Leading Strategies for CQI
  • 9. 1 Continuous Quality Improvement (CQI) in the EHR Implementation Lifecycle 1.1 INTRODUCTION The quest to use health information technology (IT), specifically EHRs, to improve the quality of health care throughout the health care delivery continuum is a consistent goal of health care providers, national and local policymakers, and health IT developers. The seminal Institute of Medicine (IOM) report, Crossing the Quality Chasm: A New Health System for the 21st Century (IOM, 2001), was a call for all health care organizations
  • 10. to renew their focus on improving the quality and safety of patient care in all health care delivery settings. Since the IOM report, the health care industry has emphasized the design and implementation of health IT that supports quality improvement (QI) and quality monitoring mechanisms in all levels of the health care delivery system. Many QI strategies currently used in health care, including Continuous Quality Improvement (CQI), have been adopted from other industries that have effectively used QI techniques to improve the efficiency and quality of their goods and services. Experience and research have shown that CQI principles, strategies, and techniques are critical drivers of new care models such as Patient-Centered Medical Homes (PCMHs) or Accountable Care Organizations (ACOs). As practice leaders and staff learn more about CQI strategies and identify what works best for the desired type and level of changes in the practice setting (i.e., moving from the current state to the desired future state), they will recognize the value in designing an EHR implementation to meet both the Meaningful Use requirements and their own QI goals.
  • 11. This Primer provides an overview of CQI concepts and processes and will: • Define CQI and how it applies to EHR implementations and practice improvement strategies; • Identify a conceptual framework to consider when implementing CQI techniques in a practice setting; • Explore tools, techniques, and strategies that health care and other service industries use to guide and manage CQI initiatives; • Guide the selection of the most appropriate CQI technique or strategy for the type and scale of improvements the practice is considering; and, • Provide tips to help the practice leaders tailor the approach, tools, methods, and processes to the unique CQI initiative and practice setting. 1.2 WHAT IS CONTINUOUS QUALITY IMPROVEMENT? Put simply, CQI is a philosophy that encourages all health care
  • 12. team members to continuously ask: “How are we doing?” and “Can we do it better?”(Edwards, 2008). More specifically, can we do it more efficiently? Can we be more effective? Can we do it faster? Can we do it in a more timely way? Continuous improvement begins with the culture of improvement for the patient, the practice, and the population in general. Besides creating this inquisitive CQI culture in an organization, the key to any CQI initiative is using a structured planning approach to evaluate the current practice processes and improve systems and processes to achieve the desired outcome and vision for the desired future state. Tools commonly used in CQI include strategies that enable team members to assess and improve health care delivery and services. Applying CQI to a practice’s EHR implementation means that the health care team must understand what works and what does not work in the current state and how the EHR will change care delivery and QI aims. The CQI plan identifies the desired clinical or administrative outcome and the evaluation strategies that
  • 13. enable the team to determine if they are achieving that outcome. The team also intervenes, when needed, to adjust the CQI plan based on continuous monitoring of progress through an adaptive, real-time feedback loop. � 1.3 HOW CAN CQI HELP A PRACTICE MAKE THE MOST OF MEANINGFUL USE? Meaningful Use is an important means to achieving the triple aims of health care—improving the experience of patient care, improving population health, and reducing per capita costs of health care (Berwick et al., 2008). The Centers for Medicare and Medicaid Services’ EHR Incentive Program provides eligible professionals, eligible hospitals, and critical access hospitals incentive payments that support the optimal use of technology for health care (Incentive Programs—Regulations and Guidance). Although a practice can implement an EHR without addressing Meaningful Use, practices that do so are less likely to realize the full
  • 14. potential of EHRs to improve patient care and practice operations (Mostashari, Tripathi, & Kendall, 2009). Attesting to Meaningful Use, although it is an important milestone for a practice, is not an end unto itself. Practices that can achieve Meaningful Use will be able to use their EHR to obtain a deep understanding of their patient population and uncover aspects of patient care that could be improved. Using a planned, strategic approach to CQI will help a practice move from reporting the requirements for Meaningful Use to improving patient care and meeting other practice goals. The literature shows a strong link between an explicit CQI strategy and high performance (Shortell et al., 2009). Thus, applying CQI principles and strategies will transform numbers on a spreadsheet or a report into a plan for action that identifies areas of focus and the steps and processes needed to improve those areas continually and iteratively. To establish an effective CQI strategy, a practice should (Wagner et al., 2012)
  • 15. • Choose and use a formal model for QI. • Establish and monitor metrics to evaluate improvement efforts and outcomes routinely. • Ensure all staff members understand the metrics for success. • Ensure that patients, families, providers, and care team members are involved in QI activities. • Optimize use of an EHR and health IT to meet Meaningful Use criteria. Put together, CQI and Meaningful Use can move a practice from its current state to a more desirable future state. As depicted in Exhibit 1, CQI begins with a clear vision of the transformed environment, identification of necessary changes to achieve that vision, and input from engaged team members who understand the needs for the practice. In short, the journey to the desired future state involves a transformation of people, process, and technology. Meaningful Use of health information and an explicit commitment to CQI can help a practice establish that clear vision and implement it successfully.
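To make "establish and monitor metrics" concrete, here is a minimal sketch (illustrative only; the measure, target, and visit records are hypothetical and not drawn from this Primer) that turns EHR-style visit data into a monthly rate and compares it with a practice-chosen improvement target:

from collections import defaultdict

# Hypothetical visit-level extract from an EHR report:
# (year-month, clinical_summary_provided_to_patient)
visits = [
    ("2013-01", True), ("2013-01", False), ("2013-01", True),
    ("2013-02", True), ("2013-02", True), ("2013-02", False),
    ("2013-03", True), ("2013-03", True), ("2013-03", True),
]

TARGET_RATE = 0.80  # improvement target chosen by the practice (assumption)

# Tally numerator (summaries provided) and denominator (all visits) per month.
by_month = defaultdict(lambda: [0, 0])  # month -> [numerator, denominator]
for month, provided in visits:
    by_month[month][1] += 1
    if provided:
        by_month[month][0] += 1

for month in sorted(by_month):
    num, den = by_month[month]
    rate = num / den
    flag = "meets target" if rate >= TARGET_RATE else "below target"
    print(f"{month}: {num}/{den} = {rate:.0%} ({flag})")

Tracked month over month this way, the same numbers that satisfy a reporting requirement become a trend the care team can act on.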
  • 16. � Exhibit 1. Using CQI to Move From Current State to Future State Illustration of the Continuous Quality Improvement (CQI) process. CQI acts on People, Process, and Technology to create the Future State of Improved Patient Care, Improved Population Health and Reduced Cost. Meaningful Use is shown in the background. � 1.4 WHAT DOES CQI LOOK LIKE IN PRACTICE? In undertaking any CQI initiative, a practice must consider three components: (1) structure, (2) process, and (3) outcomes (Donabedian, 1980). Exhibit 2 builds on these three components within the context of health information technology and illustrates the basic premise of CQI: any initiative involving an EHR to improve patient care must focus on the structure (especially technology and people) and process that lead to the expected outputs and then ultimately to the desired outcomes.
  • 17. Exhibit 2. CQI Framework Model An illustration of the CQI Framework Model shows the Continuous Quality Improvement (CQI) Initiative providing input to Structure (People and Technology) and Process phases. Structure leads to Process which leads to Output and finally to Outcome. The Outcome provides feedback to CQI and the process repeats. Structure. Structure includes the technological, human, physical, and financial assets a practice possesses to carry out its work. CQI examines the characteristics (e.g., number, mix, location, quality, and adequacy) of health IT resources, staff and consultants, physical space, and financial resources. Process. The activities, workflows, or task(s) carried out to achieve an output or outcome are considered process. Although CQI strategies in the literature focus more commonly on clinical processes, CQI also applies to administrative processes. The EHR functions that meet Meaningful Use Stage I Core Objectives support key clinical and administrative processes (see this link: MU Objectives).
  • 18. Output. Outputs are the immediate predecessor to the change in the patient’s status. Not all outputs are clinical; many practices will have outputs tied to business or efficiency goals and, accordingly, require changes to administrative and billing processes. Outcome. Outcomes are the end result of care (AHRQ, 2009) and a change in the patient’s current and future health status due to antecedent health care interventions (Kazley, 2008). Desired changes in the cost and efficiency of patient care or a return on investment in the EHR can also be considered outcomes. Feedback Loop. In Exhibit 2, a feedback loop between the output/outcome and the CQI initiative represents its cyclical, iterative nature. Once a change to the structure and process is implemented, a practice must determine whether it achieved the intended outcome and, if not, what other changes could be considered. If the outcome is achieved, the practice could determine how to produce an even better outcome or achieve it more efficiently and with less cost.
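A toy sketch of that feedback loop (all names and numbers below are invented for illustration and are not part of this Primer): re-measure the outcome after each change cycle, stop when the target is reached, and otherwise adjust and repeat.

def measure_outcome(cycle):
    """Stand-in for pulling the outcome measure from the EHR after a change cycle."""
    simulated = [0.62, 0.68, 0.74, 0.81]  # hypothetical values for the sketch
    return simulated[min(cycle, len(simulated) - 1)]

TARGET = 0.80     # desired outcome level (assumption)
MAX_CYCLES = 6    # resource limit agreed on by the practice (assumption)

for cycle in range(MAX_CYCLES):
    outcome = measure_outcome(cycle)
    print(f"Cycle {cycle + 1}: outcome = {outcome:.0%}")
    if outcome >= TARGET:
        print("Target reached: document the change, then pursue a better or less costly outcome.")
        break
    print("Target not reached: adjust structure and process, then re-measure.")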
  • 19. � Text box titled "It's not just a model: CQI in action" Consider structure, process, and outcomes and how they apply to a CQI initiative to improve obesity screening and follow-up in an adult primary care practice that has attested to Meaningful Use (MU) Stage 1. • The structural assessment of an obesity screening CQI initiative would examine the functionality of the EHR for weight management tasks; staff’s knowledge and expertise to counsel overweight and obese patients; and the adequacy of the space and materials to provide education and social support. The CQI initiative might also consider the acquisition of a separate Adult Body Mass Index (BMI) Improvement module that would randomly survey patient charts at two different times to monitor BMI changes in the clinic’s overweight and obese patient population.
  • 20. • A process assessment of an obesity screening CQI initiative would ensure that the EHR can record and chart vital signs such as BMI (MU Core Measure # 8) so providers can easily capture, monitor, and trend a patient’s weight from visit to visit. The CQI initiative would also assess the clinical summaries given to the patient (MU Core Measure # 13) and whether these summaries effectively educate patients about what they need to do between visits to maintain or reduce their BMI. • An output leading to an improvement in BMI could be ensuring that BMI is recorded in the EHR at each visit for every patient so that the provider can track the patient’s progress. Another output might be to ensure that every patient receives nutrition and exercise counseling between office visits. A primary outcome for an obesity CQI initiative is reducing BMI by a certain amount within a specified time frame that the provider/patient believes is
  • 21. achievable. Longer term outcomes for the individual patient are improved quality of life and a longer life. For the practice that is part of an ACO or PCMH, reducing BMI in its patient panel may result in better reimbursement and receipt of financial incentives. A CQI initiative supported by an EHR allows the practice to monitor progress towards the outcome for a patient. Equally important, however is the ability to monitor the progress of the practice’s obese population by establishing a disease registry. Thus, a practice can monitor changes in BMI for each patient, estimate the proportion of patients in the practice that are overweight and obese, and identify patients who are outliers. These powerful data can transform how a provider chooses to care for these patients. If the obesity CQI initiative reveals that efforts to screen and counsel are successful in reducing BMI in patients, the next question is whether the enhanced screening works for everyone equally well. A closer look at the data may reveal, for example, that screening had almost no effect on older
  • 22. male patients. These insights would be used to consider additional modifications to the structure and process of the screening to help older male patients achieve weight management goals. � 2 Strategies for CQI 2.1 LEADING CQI STRATEGIES IN HEALTH CARE Fortunately, a practice can choose many well-established CQI programs and strategies to achieve its QI aims. The specific strategy a practice selects depends on several factors. For example, a practice incorporating CQI in an initial EHR implementation will have different goals and objectives than a practice that has already achieved Stage 1 Meaningful Use and wants a more targeted CQI effort directed at Meaningful Use Stage 2. The following sections briefly describe four strategies for CQI widely used in the health care industry today. The descriptions cover the basic principles of each strategy followed by the
  • 23. specific action steps each strategy recommends. An accompanying case study illustrates the use of the strategy in a practice setting. This section concludes with a side-by-side comparison of the strategies and guidance on selecting the appropriate strategy. Again, the final CQI strategy the practice uses could be a combination of tools, methods, and processes that best meet the practice culture and the unique CQI initiative. 2.1.1 The Institute for Healthcare Improvement (IHI) Model for Improvement The IHI Model for Improvement is a simple strategy that many organizations currently use to accelerate their improvement strategies. A CQI initiative based on the IHI Model for Improvement focuses on setting aims and teambuilding to achieve change. As depicted in Exhibit 3, it promotes improvement by seeking answers to three questions: • What are we trying to accomplish?
  • 24. • How will we know that a change is an improvement? • What changes can we make that will result in improvement? Exhibit 3. IHI Model for Improvement Illustration of the IHI Model for Improvement shows a cycle with steps "Act", "Plan", "Study", and "Do". The cycle receives inputs from and provides feedback to the motivating questions: What are we trying to accomplish? How will we know that a change is an improvement? and What changes can we make that will result in improvement? Principles To answer these questions, a CQI initiative uses a Plan-Do-Study-Act (PDSA) cycle to test a proposed change or CQI initiative in the actual work setting so changes are rapidly deployed and disseminated. The cycle involves the following
  • 25. seven steps: Form the team. Including the appropriate people on a process improvement team is critical to a successful effort. The practice (or provider) must determine the team’s size and members. Practice staff persons are the experts at what works well in the practice and what needs to be improved. Include them in identifying and planning the implementation of any CQI initiative. Set aims. This step answers the question: What are we trying to accomplish? Aims should be specific, have a defined time period, and be measurable. Aims should also include a definition of who will be affected: patient population, staff members, etc. For practice transformation, the aims should ideally be consistent with achieving one or more of the triple aims previously discussed. � textbox reads "Big Sandy Health Care in Eastern Kentucky used
  • 26. the Plan-Do-Study-Act cycle to develop and test a new patient portal. The staff first identified the priority functions for the portal and the workflow changes involved. Then, staff received training and pilot tested the portal with volunteer patients. Issues from the pilot testing were addressed before the rollout, and a patient survey on the portal provided feedback to aid further improvements." When to use. The IHI Model for Improvement is best used for a CQI initiative that requires a gradual, incremental, and sustained approach to QI so changes are not undermined by excessive detail and unknowns (Hughes, 2008). To find out more: http://www.ihi.org/knowledge/Pages/HowtoImprove/ScienceofI mprovementHowtoImprove.aspx Establish measures. This step answers the question: How will we know that a change is an improvement? Outcome measures should be identified to evaluate if aims are met. Practices should select measures using data they are able to collect.
  • 27. Select changes. This step answers the question: What changes can we make that will result in improvement? The team should consider ideas from multiple sources and select changes that make sense. Test changes. First, the changes must be planned and downstream impacts analyzed to assess whether they had the desired outcome or output. Once the changes are implemented, the results should be observed so that lessons learned and best practices can be used to drive future changes. Implement changes. After testing a change on a small scale, learning from each test, and refining the change through several PDSA cycles, the team may implement the change on a broader scale—for example, for a pilot population or on an entire unit. Spread changes. After successful implementation of a change(s) for a pilot population or an entire unit, the
  • 28. team can disseminate the changes to other parts of the organization. 2.1.2 Lean Lean is a continuous improvement process that gained international recognition when Womack, Jones, and Roos published a book on the Toyota Production System. Many hospitals have borrowed the key Lean principles of reducing non-value-added activities, mistake-proofing tasks, and relentlessly focusing on reducing waste to improve health care delivery. Lean helps operationalize the change to create workflows, handoffs, and processes that work over the long term (see Exhibit 4). A key focus of change is on reducing or eliminating seven kinds of waste and improving efficiency (Levinson & Rerick, 2002): • Overproduction • Waiting; time in queue
  • 29. • Transportation • Non-value-adding processes • Inventory • Motion • Costs of quality, scrap, rework, and inspection Exhibit 4. Lean Principles for Operational Efficiency (source: http://www.pmvantage.com/leanbusiness.php) Illustration of the five principles to guide the activities for operational efficiency shows a cycle with the title "Lean Business" at center. Step 1. Identify Value. Define value from the client's viewpoint and express value in terms of a specific project. Step 2. Specify Value. Map each step that brings a process or service to the client, both value-added and non-value-added. Step 3. Workflow Process. The continuous improvement of processes, information, and all services from end to end throughout a cycle. Step 4. Push/Pull Activities. Fortifying an upstream process at the request of the customer's downstream requirements or specifications. Step 5. Process
  • 30. Delivery. The elimination of wasteful activities creates value for the customer and promotes continuous improvement. Principles A Lean CQI initiative focuses mainly on alleviating overburden and inconsistency while reducing waste to create a process that can deliver the required results smoothly (Holweg, 2007). Teams frequently use these principles once they know what system change will result in an improvement. Identify which features create value. Value is identified from both internal and external perspectives. For example, patients might value reduced phone time on hold, while providers value having all the information at hand when making an appointment. The specific values depend on the
  • 31. process being streamlined. Identify the value stream (the main set of activities for care). Once value is determined, practice leaders should identify activities that contribute value. The entire sequence of activities is called the value stream. Activities that fall outside that realm are considered waste and should be streamlined. Group Health in Seattle, Washington used Lean principles to standardize a PCMH model across 26 sites over 14 months. The initiative focused on improving the efficiency of clinic workflow and processes to accommodate smaller panel
  • 32. sizes, longer appointment times, proactive chronic care, and greater use of licensed practical nurse and medical assistants for pre-visit preparation, follow-up and outreach (Hsu et al., 2012). � Improve flow so activities and information move in an efficient process. Once value-added activities and necessary nonvalue activities are identified, practice leaders should ensure that improvement efforts are directed to make the activities flow smoothly without interruption. Test the clinical and/or administrative process. Once the improved flow is identified and documented, it should be tested. Testing can include a pilot of a few patients or a one-day test where the new process is used throughout the practice.
  • 33. Perfect the process. Lessons learned and tensions that arise from this process can be identified and changes can be made before the final process is rolled out. Text box reads Heading: When to use. The Lean approach is useful in simplifying overcomplicated processes and takes a holistic approach that considers interrelated processes and workflows. Lean is not as well suited to small, discrete changes to a process as it is to whole groups or clusters of processes. Typically, Lean is applied in large health care settings although individual practices that belong to a physician network or a hospital may be engaged in a Lean initiative. Teams frequently use Lean once they know what system change will result in improvement from tried and tested best practices adopted in other settings. To find out more:
  • 34. http://www.lean.org/whatslean/ http://www.pmvantage.com/leanbusiness.php http://asq.org/learn-about-quality/lean/overview/overview.html http://www.lean.org/WhoWeAre/HealthcarePartner.cfm end of text box 2.1.3 Six Sigma Six Sigma is a business management and QI strategy that originated in the U.S. manufacturing industry; it seeks to improve efficiency by identifying and removing the causes of defects (errors) and minimizing variability in manufacturing and business processes (as shown in Exhibit 5). The term Six Sigma originated from terminology associated with the statistical modeling of
  • 35. processes. This QI strategy, thus, combines statistical analysis with quality management methods. Six Sigma also creates a special infrastructure of people within the organization [Green Belts (beginner) to Black Belts (most advanced)] who are experts in these methods. Lean and Six Sigma are often combined when a key goal is to reduce waste and errors. Exhibit 5. Six Sigma Model (source: http://archian.wordpress.com/2012/11/05/marketing-wisdom-of-understanding-six-sigma/) An illustration of the Six Sigma Model shows a cycle with 5 spheres: Define, Measure, Analyze, Improve, and Control. text box reads Six Sigma was used to improve coordination between primary care physicians and diabetic specialists, reduce unnecessary appointments, and reduce waiting times for appointments with specialists. The initiative defined and measured process indicators, analyzed descriptive statistics, and developed strategies based on the results. These strategies involved changing clinical protocol for hospitalized patients, increasing the autonomy of nursing staff, reorganizing the scheduling office, and specializing diabetic clinics to provide certain types
  • 36. of diabetic care. (Paccagnella et al., 2012). end of text box Principles Six Sigma relies on the ability to obtain data and on process and outcome measures. A CQI initiative using Six Sigma adheres to five principles: Define. The first step is to define the process and outcome to be improved, define their key characteristics, and map the relevant inputs into the process that will lead to the desired outputs and outcomes. This step also involves defining the boundary for the CQI project. Measure. Once the process and outcome to be improved are defined, the CQI initiative must track performance through data collection. Performance measures can be captured through structured observation, surveys, and the EHR.
  • 37. Analyze. After measures are put in place, data can be collected and analyzed to determine how the practice is doing. Ideally, baseline data would be collected prior to putting new processes into place and at regular intervals. A CQI initiative would review these data as part of a process review to identify the causes of problems. Improve. The results of the analysis are used to inform improvements. For example, if process changes result in workarounds, adjustments should be made. Similarly, if quality measures do not show improvement after a change is implemented, the CQI initiative should examine what additional or alternate changes could be implemented to improve the process. Control. Control involves ongoing monitoring and improvement as needed. A small worked example of the arithmetic behind the Measure and Analyze steps appears below.
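The sketch below is illustrative only: the referral counts are invented, and the sigma-level conversion uses the conventional 1.5-sigma shift. It shows the kind of arithmetic the Measure and Analyze steps rely on, converting a defect rate into defects per million opportunities (DPMO).

from statistics import NormalDist

referrals_processed = 1250    # opportunities in one month (hypothetical)
referrals_with_errors = 34    # defective referrals, e.g., missing documentation (hypothetical)

defect_rate = referrals_with_errors / referrals_processed
dpmo = defect_rate * 1_000_000          # defects per million opportunities

# Conventional "sigma level": normal quantile of the yield plus a 1.5-sigma shift.
sigma_level = NormalDist().inv_cdf(1 - defect_rate) + 1.5

print(f"Defect rate: {defect_rate:.2%}")              # 2.72%
print(f"DPMO: {dpmo:,.0f}")                           # 27,200
print(f"Approximate sigma level: {sigma_level:.1f}")  # about 3.4

Tracked before and after a process change, a falling DPMO (rising sigma level) is the quantitative evidence the Improve and Control steps look for.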
  • 38. Text box starts Heading When to use. In health care settings, Six Sigma is often combined with aspects of Lean to focus on both quality and efficiency, particularly when practices embark on a large-scale EHR implementation. When smaller changes are involved, one method may make more sense. To find out more: http://www.isixsigma.com/dictionary/dmaic/. http://asq.org/healthcaresixsigma/lean.html http://archian.wordpress.com/2012/11/05/marketing-wisdom-of- understanding-six-sigma/text box ends
  • 39. 2.1.4 Baldrige Quality Award Criteria The Malcolm Baldrige Quality Award has evolved from an honor to recognize quality to an approach and methodology that enables continued excellence through self-assessment. This QI strategy is focused on total organizational improvement and instituting a culture of CQI. As depicted in Exhibit 6, it promotes the achievement of strategic goals through the alignment of resources and improved communication, productivity, and effectiveness in the seven criteria categories: • Leadership • Strategic planning • Customer focus • Measurement, analysis, and knowledge management • Workforce focus • Operations focus • Results
  • 40. Exhibit 6. Baldrige Core Values and Concepts (source: http://www.nist.gov/baldrige/graphics.cfm) An illustration of the Baldrige Core Values and Concepts shows 3 concentric circles. Baldrige Health Care Criteria build on Core Values and Concepts (the innermost circle): Visionary leadership, Focus on the future, Managing for innovation, Agility, Organizational and personal learning, Valuing workforce members and partners, Patient-focused excellence, Societal responsibility, Management by fact, Focus on results and creating value, and Systems perspective. These are embedded in systematic processes (Criteria categories 1-6, the middle circle): 1 Leadership, 2 Strategic Planning, 3 Operations Focus, 4 Workforce Focus, 5 Customer Focus, and 6 Measurement, Analysis and Knowledge Management. These yield Performance Results (Criteria category 7, the outer circle): Leadership and Governance Results, Financial and Market Results, Workforce-Focused Results, Customer-Focused Results, and Health Care and Process Results. Principles Examining the Baldrige criteria thoroughly can help a practice identify strengths and opportunities for
  • 41. improvement. These areas can shape future organizational plans. Because award criteria have been widely used, they provide a common language and principles on which to base improvement. A CQI initiative using the Baldrige Award criteria follows these principles (NIST, 2010). Identify the boundaries/scope of the self-assessment. This assessment could be the entire practice or a particular function (structure, process, or outcome) in the practice. Select champions. Because Baldrige involves an organization- wide change, each of the seven criteria areas requires a champion. Some champions may lead more than one criteria area. � Select teams for each champion. The purpose of the team is to work with the champion to conduct the self-assessment. This group should be interdisciplinary so multiple stakeholders are represented.
  • 42. Collect data and information to answer questions. The core self-assessment is guided by 10 critical questions for each Baldrige criterion. Some examples of questions are: What are the organization’s core competencies? What are your key work processes? How do you build and manage relationships with your customers? Champions can prepare an organizational profile describing the practice and its challenges as a starting point for the data collection. Share the answers to the Baldrige criteria questions among the category teams. The goal of this exercise is to identify common themes in the answers to the 10 structured questions for each criterion. Create and communicate an action plan for improvement. Each criterion team creates and communicates an action plan for improvement based on the answers and organizational
  • 43. priorities. Teams identify strengths and opportunities, set priorities, and develop action plans. Evaluate the self-assessment process and identify possible improvements. Practice leaders, champions, and teams evaluate the self-assessment and think about improvement mechanisms. Text box starts The Southcentral Foundation (SCF), a not-for-profit health care organization serving Alaska Natives and American Indian peoples in Anchorage and the surrounding area, used the Baldrige Criteria to create the Nuka system of care, which is owned, managed, designed, and driven by Alaska Native people, referred to as “customer-
  • 44. owners”. This focus on the customer-owner helped SCF to achieve the highest level of Patient-Centered Medical Home™ recognition from the National Committee for Quality Assurance (NCQA) in 2010 (National Institute of Standards and Technology, http://www.nist.gov/baldrige/award_recipients/southcentral_profile.cfm). Textbox ends Text box starts When to use. The Baldrige Award criteria focus more on identifying problems and setting up teams to take ownership of them and less on the specific steps to carry out the planned improvement. This strategy is holistic and may require the organization to make significant cultural changes.
  • 45. To find out more: http://www.nist.gov/baldrige/ http://asq.org/learn-about-quality/malcolm-baldrige-award/overview/overview.html text box ends 2.2 WHICH CQI STRATEGY IS RIGHT? A practice should choose a CQI strategy based on its goals and objectives for adoption, implementation, and Meaningful Use of its EHR; these goals could also include practice transformation priorities beyond Meaningful Use (e.g., becoming a PCMH). The strategies described have similarities and differences. Exhibit 7 summarizes the key features of these strategies and the CQI initiatives for which they are most appropriate.
  • 46. Exhibit 7. Summary of Leading Strategies for CQI (columns: Strategy; Brief Description; Type of CQI Initiative; Example; References)
Strategy: IHI Model for Improvement
Brief Description: Emphasizes the use of the Plan-Do-Study-Act methodology to establish aims, define the problem, identify measures of success, and systematically test them in short, rapid cycles. Can be combined with other strategies.
Type of CQI Initiative: • Emphasis on process and outcome. • Best for specific problems whose solutions can be refined over time. • Is a gradual, incremental approach to CQI. • Ideal for achieving small, quick wins; applying lessons learned to new cycles and identifying best practices.
Example: Big Sandy Health Care uses PDSA to pilot test a new patient portal.
References: • Institute for Healthcare Improvement • MN Department of Health
Strategy: Lean
Brief Description: Alleviates overburden and inconsistency in processes by eliminating waste, redundancy, and unnecessary effort. Emphasis is on developing efficient systems that involve whole groups or clusters of related processes.
Type of CQI Initiative: • Is focused on groups or whole categories of related processes. • Emphasis on process. • Simplifies overcomplicated processes and considers interdependencies. • Best for known problems with a known system change solution. • Integrated throughout the organization or practice. • Ideal for large, complex health care organizations and practice networks that want to standardize operations across multiple units or practice sites.
Example: Group Health uses Lean to standardize PCMH processes across 26 diverse practices.
References: • http://www.lean.org/whatslean/ • http://asq.org/learn-about-quality/lean/overview/overview.html • http://www.lean.org/WhoWeAre/HealthcarePartner.cfm • Practice White Paper • Creating a Lean Practice
Strategy: Six Sigma
Brief Description: Emphasizes identifying and removing the causes of defects (errors) and minimizing variability in manufacturing and business processes. Uses statistical methods and a hierarchy of users within the organization (Black Belts, Green Belts, etc.) who are experts in these methods.
Type of CQI Initiative: • Emphasis on processes and outcomes. • Best for processes plagued by wide variability, such as logging of pharmaceuticals or standardizing referral processes. • Is a heavily quantitative approach to CQI. • Can be adapted for targeted changes to specific processes. • Typically combined with Lean when the focus is on efficiency and quality. • Ideal for practices that want to rigorously quantify improvements in safety, quality, and cost effectiveness.
Example: An integrated provider network uses Six Sigma to reduce appointment times for diabetic care.
References: • SixSigma • Six Sigma Online • How to Develop a Lean Six Sigma Deployment Plan • 6sigma.us • ASQ • Wisdom of Six Sigma
Strategy: Baldrige Award Criteria
Brief Description: Emphasizes identifying problems and setting up teams to take ownership of those problems. Promotes a culture of continued excellence via team-based self-assessment in seven criteria areas. Is less focused on the specific steps to achieve improvement.
Type of CQI Initiative: • Emphasis on structure and outcomes. • Best for practice-wide problem assessment and goal setting. • A broad, holistic approach to CQI initiated at strategic times. • Ideal for practices that want to establish a new CQI system or overhaul an existing one.
Example: Southcentral Foundation uses the Baldrige Award criteria to completely redesign patient care and achieve the highest level of PCMH recognition.
References: • NIST Baldrige • Baldrige Overview
  • 58. 2.3 BEST PRACTICES TO CONSIDER IN USING A CQI STRATEGY 2.3.1 Have the Right Data and Use the Data Well Regardless of the strategy, a practice must analyze quality data to properly conduct CQI; thus, every effort should be made to ensure data are timely, accurate, and measure what they are intended to measure. Consider the source of the data for each metric needed to assess performance. The EHR cannot collect every kind of data needed for CQI. Some data may have to be collected by someone watching and tracking activities in real time or through surveys of staff and patients.
  • 59. Ensure that the EHR collects the data needed to support CQI efforts as structured data in the EHR. Data stored in free text fields or document images will not automate data collection. Ideally, it is best to know this before an EHR is purchased or upgraded. Establish targets and benchmarks. The results of a CQI analysis are meaningless if no data exist for comparison. Many clinical measures have national and regional benchmarks (e.g., HEDIS for process of care measures). The best benchmark, however, is generated within the practice through the collection of
  • 60. baseline data, which the practice uses to set a reasonable target for improvement over a specified period. Improvement is tracked by periodic comparison of pre- and post-data. Establish a broad set of measures: structure, process, and outcomes. Although quality (outcome) measurement is a prime concern, on its own it tells nothing about why outcomes occur. Collecting structure and process measures will help uncover and address the underlying causes of poor performance. Aggregate data to assess the practice population. One of the most efficient ways to carry out CQI
  • 61. initiatives focused on quality of care is to aggregate the data for patients with similar conditions into a registry. These patients often experience similar issues with treatments, medication adherence, and coordination with specialists so it makes sense to view them as a distinct population that a practice monitors and tracks over time. In addition, a disease registry allows the practice to identify patients who are outliers and may need even more attention and follow-up. Conduct periodic data quality audits. Most measures are captured as simple statistics (e.g., counts, percent, mean and mode) so ensure that the EHR is producing accurate and complete denominator and numerator data. 2.3.2 Have the Resources to Finish the Job Align the scope of your CQI initiative to the time and resources to carry it out. Although the EHR can make the CQI process more efficient by automating some data collection, staff members will still need to analyze
  • 62. and interpret the data and then translate those interpretations into action. Establish reasonable budgets and time frames for any given CQI initiative. The inputs for any given CQI initiative are more likely to be sufficient if the practice considers up front what is needed to complete the initiative. A dedicated staff and a timeline also establish accountability for the CQI initiative. Break down larger CQI initiatives into smaller ones. No matter how expansive and complex, almost any CQI initiative can be achieved if it can be broken down into smaller projects that build on one another. Successfully completing one small project builds enthusiasm and energy for the next project. Establish a stopping point where success is defined and new initiatives started. Almost any process could be refined and improved indefinitely—a practice should determine what is “good enough” so CQI resources can be directed to other quality improvement needs.
  • 63. Invest in a CQI infrastructure. CQI requires a certain set of inputs (e.g., a culture of learning; skills to collect, analyze, and use data from the EHR) that take time and money to acquire but are investments in the practice's long-term viability. Like any new skill or task, CQI gets easier and more efficient over time as staff members become competent in CQI methods and figure out the best ways to integrate CQI into practice operations. Celebrate Success. Any CQI effort, no matter how big or small, involves time and effort that should be recognized by the participating staff and leadership. Taking time to reflect on the CQI team's accomplishments will build enthusiasm and energy to tackle the next problem.
References
Agency for Healthcare Research and Quality. What is health care quality and who decides? Statement of Carolyn Clancy before the Subcommittee on Health Care, Committee on Finance, U.S. Senate; 2009. Available at: http://www.finance.senate.gov/library/hearings/download/?id=ceb25971-e946-47aa-921b-01736f7e0d35
Berwick DM, Nolan TW, Whittington J. The triple aim: care, health, and cost. Health Aff (Millwood). 2008 May-Jun;27(3):759-69.
Donabedian A. Explorations in quality assessment and monitoring: the definition of quality and approaches to its assessment. Ann Arbor, MI: Health Administration Press; 1980.
  • 64. Edwards PJ, et al. Maximizing your investment in EHR: Utilizing EHRs to inform continuous quality improvement. JHIM 2008;22(1):32-7.
Holweg M. The genealogy of lean production. J Oper Manag 2007;25(2):420-37.
Hughes RG, ed. Patient safety and quality: An evidence-based handbook for nurses. Rockville, MD: Agency for Healthcare Research and Quality; 2008. Chapter 44, Tools and strategies for quality improvement and patient safety.
Hsu C, Coleman K, Ross TR, Johnson E, Fishman PA, Larson EB, Liss D, Trescott C, Reid RJ. Spreading a patient-centered medical home redesign: A case study. J Ambul Care Manage. 2012 Apr-Jun;35(2):99-108.
Kazley AS, Ozcan YA. Do hospitals with electronic medical records (EMRs) provide higher quality care? An
  • 65. examination of three clinical conditions. Med Care Res Rev 2008;65(4):496-513.
Levinson WA, Rerick RA. Lean enterprise: A synergistic approach to minimizing waste. ASQ Quality Press; 2002, xiii-xiv, 38.
Mostashari F, Tripathi M, Kendall M. A tale of two large community electronic health record extension projects. Health Affairs (Millwood). 2009 Mar-Apr;28(2):345-56.
Nutting PA, Miller WL, Crabtree BF, et al. Initial lessons from the first national demonstration project on practice transformation to a patient-centered medical home. Ann Fam Med 2009;7(3):254-60.
Paccagnella A, Mauri A, Spinella N. Quality improvement for integrated management of patients with type 2 diabetes (PRIHTA project stage 1). Qual Manag Health Care. 2012 Jul-Sep;21(3):146-59.
  • 66. The National Institute of Standards and Technology (NIST). Baldrige performance excellence program: Self-assessing your organization; 2010. Available at: http://www.nist.gov/baldrige/enter/self.cfm
The National Institute of Standards and Technology (NIST). The Malcolm Baldrige national quality award 2011 award recipient, healthcare category: Southcentral Foundation. http://www.nist.gov/Baldrige/award_recipients/southcentral_profile.cfm
Institute for Healthcare Improvement. Science of improvement: How to improve; 2011, Apr. Available at: http://www.ihi.org/knowledge/Pages/HowtoImprove/ScienceofImprovementHowtoImprove.aspx
Institute of Medicine. Crossing the quality chasm: A new health system for the 21st century. Washington, DC: National Academies of Science; 2001. Available at: http://www.nap.edu/catalog/10027.html
  • 67. Shortell SM, Gillies R, Siddique J, et al. Improving chronic illness care: A longitudinal cohort analysis of large physician organizations. Med Care 2009;47(9):932-9.
Wagner EH, Coleman K, Reid RJ, Phillips K, Abrams MK, Sugarman JR. The changes involved in patient-centered medical home transformation. Prim Care. 2012 Jun;39(2):241-59.
Womack JP, Jones DT, Roos D. The Machine That Changed the World: The Story of Lean Production. HarperCollins; 1991.
Nephrology Nursing Journal March-April 2018 Vol. 45, No. 2 209 Exploring the Evidence
  • 68. Quantitative and Qualitative Research Focusing on the Fundamentals: A Simplistic Differentiation Between Qualitative and Quantitative Research Shannon Rutberg Christina D. Bouikidis R esearch is categorized as quantitative or qualitative in nature. Quantitative research employs the use of numbers and accuracy, while qualitative research focuses on lived experiences and human percep- tions (Polit & Beck, 2012). Research itself has a few vari- eties that can be explained using analogies of making a cup of coffee or tea. To make coffee, the amount of water and coffee grounds to be used must be measured. This precise meas- urement determines the amount of coffee and the strength of the brew. The key word in this quantitative research analogy is measure. To make tea, hot water must be poured over a tea bag in a mug. The length of time a person leaves a tea bag in the mug comes down to perception of the strength of the tea desired. The key word in qualitative research is perception. This article describes and explores the differences between quantitative (measure) and quali - tative (perception) research. Types of Research Nursing research can be defined as a “systematic inquiry designed to develop trustworthy evidence about issues of importance to the nursing profession, including nursing practice, education, administration, and informat-
  • 69. ics” (Polit & Beck, 2012, p. 736). Researchers determine the type of research to employ based upon the research question being investigated. The two types of research methods are quantitative and qualitative. Quantitative research uses a rigorous and controlled design to examine phenomena using precise measurement (Polit & Beck, 2012). For example, a quantitative study may investigate a patient’s heart rate before and after consuming a caffeinat- ed beverage, like a specific brand/type of coffee. In our coffee and tea analogy, in a quantitative study, the research participant may be asked to drink a 12-ounce cup of coffee, and after the participant consumes the coffee, the researcher measures the participant’s heart rate in beats per minute. Qualitative research examines phenom- ena using an in-depth, holistic approach and a fluid Exploring the Evidence is a department in the Nephrology Nursing Journal designed to provide a summary of evidence-based research reports related to contemporary nephrology nursing practice issues. Content for this department is provided by members of the ANNA Research Committee. Committee members review the current literature related to a clinical practice topic and provide a summary of the evidence and implications for best practice. Readers are invited to submit questions or topic areas that pertain to evidence-based nephrology practice issues. Address correspondence to: Tamara Kear, Exploring the Evidence Department Editor, ANNA National Office, East Holly Avenue/Box 56, Pitman, NJ 08071-0056; (856) 256-2320; or via
  • 70. e-mail at [email protected] The opinions and assertions contained herein are the private views of the contributors and do not necessarily reflect the views of the American Nephrology Nurses’ Association. Copyright 2018 American Nephrology Nurses’ Association Rutberg, S., & Bouikidis, C.D. (2018). Focusing on the fundamen- tals: A simplistic differentiation between qualitative and quantitative research. Nephrology Nursing Journal, 45(2), 209- 212. This article describes qualitative, quantitative, and mixed meth- ods research. Various classifications of each research design, including specific categories within each research method, are explored. Attributes and differentiating characteristics, such as formulating research questions and identifying a research prob- lem, are examined, and various research method designs and reasons to select one method over another for a research project are discussed. Key Words: Qualitative research, quantitative research, mixed methods research, method design. Shannon Rutberg, MS, MSN, BS, RN-BC, is a Clinical Nurse Educator, Bryn Mawr Hospital, Main Line Health System Bryn Mawr, PA. Christina D. Bouikidis, MSN, RNC-OB, is a Clinical Informatics Educator, Main Line Health System, Berwyn, PA. Statement of Disclosure: The authors reported no actual or
  • 71. potential con- flict of interest in relation to this continuing nursing education activity. Note: The Learning Outcome, additional statements of disclosure, and instructions for CNE evaluation can be found on page 213. Continuing Nursing Education Exploring the Evidence Quantitative and Qualitative Research Nephrology Nursing Journal March-April 2018 Vol. 45, No. 2210 Focusing on the Fundamentals: A Simplistic Differentiation Between Qualitative and Quantitative Research research design that produces rich, telling narratives (Polit & Beck, 2012). An example of a qualitative study is explor - ing the participant’s preference of coffee over tea, and feel - ings or mood one experiences after drinking this favorite hot beverage. Quantitative Research Quantitative research can range from clinical trials for new treatments and medications to surveying nursing staff and patients. There are many reasons for selecting a quan- titative research study design. For example, one may choose quantitative research if a lack of research exists on a particular topic, if there are unanswered research ques-
  • 72. tions, or if the research topic under consideration could make a meaningful impact on patient care (Polit & Beck, 2012). There are several different types of quantitative research. Some of the most commonly employed quanti- tative designs include experimental, quasi-experimental, and non-experimental. Experimental Design An experimental design isolates the identified phenom- ena in a laboratory and controls conditions under which the experiment occurs (Polit & Beck, 2012). There is a con- trol group and at least one experimental group in this design. The most reliable studies use a randomization process for group assignment wherein the control group receives a placebo (an intervention that does not have ther- apeutic significance) and the experimental group receives an intervention (Polit & Beck, 2012). For example, if one is studying the effects of caffeine on heart rate 15 minutes after consuming coffee, using a quantitati ve experimental design, the design may be set up similarly to the descrip- tion in Table 1. Randomization will allow an equal chance for each participant to be assigned to either the control or the experimental group. Then the heart rate is measured before and after the intervention. The intervention is drink- ing decaffeinated coffee for the control group and drinking caffeinated coffee for the experimental group. Data collect- ed (heart rate pre- and post-coffee consumption) are then analyzed, and conclusions are drawn. Quasi-Experimental Design Quasi-experimental designs include an intervention in the design; however, designs do not always include a con- trol group, which is a cornerstone to an authentic experi -
• 73. Quasi-Experimental Design
Quasi-experimental designs include an intervention; however, they do not always include a control group, which is a cornerstone of an authentic experimental design. This type of design does not have randomization like the experimental design (Polit & Beck, 2012). Instead, there may be an intervention put into place with outcome measures collected pre- and post-implementation, and a comparison used to identify whether the intervention made a difference. For example, perhaps a coffee chain wants to see if sampling a new flavor of coffee, like hazelnut, will increase revenue over a one-month period. At location A, hazelnut coffee is distributed as a sample to customers in line waiting to purchase coffee. At location B, no samples are distributed. Sales of hazelnut coffee are examined at both locations prior to the intervention (hazelnut samples given out to customers waiting in line) and then again one month later, after the intervention. Lastly, monthly revenue is compared at both sites to measure whether the free hazelnut samples affected sales, as sketched in the brief example below.
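To make the quasi-experimental comparison concrete, here is a minimal sketch in Python. It is illustrative only: the revenue figures are invented, and the simple pre/post difference-in-changes calculation is an assumption about how such data might be summarised, not an analysis prescribed by the article.

    # Illustrative sketch of the quasi-experimental coffee-sampling example.
    # All figures are invented for demonstration purposes.

    # Monthly hazelnut coffee revenue (in dollars) before and after the one-month campaign
    location_a = {"pre": 1200.0, "post": 1650.0}   # samples handed out (intervention site)
    location_b = {"pre": 1150.0, "post": 1225.0}   # no samples (comparison site)

    change_a = location_a["post"] - location_a["pre"]
    change_b = location_b["post"] - location_b["pre"]

    # Without randomization, the best we can do is compare the change at the
    # intervention site against the change at the comparison site.
    estimated_effect = change_a - change_b

    print(f"Change at intervention site: {change_a:+.0f}")
    print(f"Change at comparison site:   {change_b:+.0f}")
    print(f"Difference in changes (estimated effect of sampling): {estimated_effect:+.0f}")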
• 74. Non-Experimental Design
The final type of quantitative research discussed in this article is the non-experimental design. Manipulation of variables does not occur with this design, but an interest exists in observing the phenomena and identifying whether a relationship exists (Polit & Beck, 2012). Perhaps someone is interested in whether drinking coffee throughout one's life decreases the incidence of stroke. Researchers for this type of study ask participants to report approximately how much coffee they drank daily, and the data are compared to their stroke incidence. Researchers then analyze the data, examining behavior that occurred in the past, to determine whether an association exists between coffee consumption and stroke incidence.

Table 1. Comparison of Control and Experimental Groups
Assignment: 10 participants randomly assigned to the control group; 10 participants randomly assigned to the experimental group.
Pre-intervention data collection: heart rate measured prior to beverage consumption (both groups).
Intervention: control group consumes a placebo drink (i.e., decaffeinated coffee); experimental group consumes the experimental drink (i.e., caffeinated coffee).
Post-intervention data collection: heart rate obtained 15 minutes after the beverage was consumed (both groups).
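The experimental design summarised in Table 1 can be expressed as a short analysis script. The sketch below is a hypothetical illustration: the heart-rate values are simulated, and the choice of an independent-samples t-test on the pre-to-post change is an assumption, since the article does not specify an analysis method.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Randomly assign 20 participants: 10 to control (decaf) and 10 to experimental (caffeinated)
    participants = np.arange(20)
    rng.shuffle(participants)
    control_ids, experimental_ids = participants[:10], participants[10:]

    # Simulated resting heart rates before the beverage (beats per minute)
    pre = rng.normal(70, 5, size=20)

    # Simulated heart rates 15 minutes after the beverage: assume little change after
    # decaf and a modest increase after caffeine (illustrative numbers only)
    post = pre.copy()
    post[control_ids] += rng.normal(0, 2, size=10)
    post[experimental_ids] += rng.normal(5, 2, size=10)

    change_control = post[control_ids] - pre[control_ids]
    change_experimental = post[experimental_ids] - pre[experimental_ids]

    # Compare the pre-to-post change between the two groups
    t_stat, p_value = stats.ttest_ind(change_experimental, change_control)
    print(f"Mean change (control):      {change_control.mean():.1f} bpm")
    print(f"Mean change (experimental): {change_experimental.mean():.1f} bpm")
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")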
• 75. Quantitative Study Attributes
In quantitative studies, the researcher uses standardized questionnaires or experiments to collect numeric data. Quantitative research is conducted in a more structured environment that often allows the researcher to have control over study variables, environment, and research questions. Quantitative research may be used to determine relationships between variables and outcomes. Quantitative research involves the development of a hypothesis: a description of the anticipated result, relationship, or expected outcome from the question being researched (Polit & Beck, 2012). For example, in the experimental study described in Table 1, one may hypothesize that the control group will not see an increase in heart rate, whereas the experimental group will. Data collected (heart rate before and after coffee consumption) are analyzed, and conclusions are drawn.
Qualitative Research
According to Choy (2014), qualitative studies address the social aspect of research. The researcher uses open-ended questions and interviews subjects in a semi-structured fashion. Interviews often take place in the participant's natural setting or a quiet environment, like a conference room. Qualitative research methodology is often employed when the problem is not well understood and there is a desire to explore the problem thoroughly. Typically, a rich narrative from participant interviews is generated and then analyzed in an attempt to answer the research question. Many questions will be used to uncover the problem and address it comprehensively (Polit & Beck, 2014).
Types of Qualitative Research Design
• 76. Types of qualitative research include ethnography, phenomenology, grounded theory, historical research, and case studies (Polit & Beck, 2014). Ethnography reveals the way culture is defined, the behavior associated with culture, and how culture is understood; this design allows the researcher to investigate shared meanings that influence the behaviors of a group (Polit & Beck, 2012). Phenomenology is employed to investigate a person's lived experience and uncover the meanings of this experience (Polit & Beck, 2012). Nurses often use phenomenology research to better understand topics that may be part of human experience, such as chronic pain or domestic violence (Polit & Beck, 2012). Using the coffee analogy, a researcher may use phenomenology to investigate attitudes and practices around a specific time of day for coffee consumption. For example, some individuals may prefer coffee in the morning, while some prefer coffee throughout the day, and others only enjoy coffee after a meal. Grounded theory investigates actions and the effects of behavior in a culture; grounded theory methodology may be used to investigate afternoon tea time practices in Europe as compared to morning coffee habits in the United States. Historical research examines the past using recorded data, such as photos or objects. The historical researcher may look at the size of coffee mugs in antique photos over a few centuries to provide a historical perspective. Perhaps photos are telling of cultural practices surrounding coffee consumption over the centuries, whether in solitude or in small or large groups.
Qualitative Research Attributes
• 77. When selecting a qualitative research design, keep in mind its unique attributes. Qualitative research methodology may involve multiple means of data collection to further understand the problem, such as interviews in addition to observations (Polit & Beck, 2012). Further, qualitative research is flexible and adapts to new information based on the data collected, provides a holistic perspective on the topic, and allows the researcher to become entrenched in the investigation. The researcher is the research tool, and data are analyzed continuously from the commencement of the study. The decision to select a qualitative methodology requires several considerations, a great amount of planning (such as which research design fits the study best, the time necessary to devote to the study, a data collection plan, and the resources available to collect the data), and finally, self-reflection on any personal presumptions and biases toward the topic (Polit & Beck, 2014). Selecting a sample population in qualitative research begins with identifying eligibility to participate in the study based on the research question. The participant needs to have had exposure to, or experience with, the content being investigated. A thorough interview will uncover the encounter the participant had with the research question or experience. There will most likely be a few standard questions asked of all participants and subsequent questions that evolve based upon the participant's experiences and answers. Thus, there tends to be a small sample size with a high volume of narrative data that must be analyzed and interpreted to identify trends intended to answer the research question (Polit & Beck, 2014).
Mixed Methods
• 78. Using both quantitative and qualitative methodology in a single study is known as a mixed methods study. According to Tashakkori and Creswell (2007), mixed methods research is "research in which the researcher collects and analyzes data, integrates the findings, and draws inferences using both qualitative and quantitative approaches or methods in a single study or program of inquiry" (p. 4). This approach has the potential to allow the researcher to collect two sets of data. An example of using mixed methods would be examining the effects of consuming a caffeinated beverage prior to bedtime. The researcher may want to investigate the impact of caffeine on the participant's heart rate and also ask how consuming the caffeinated drink makes him or her feel. There are the quantitative numeric data measuring the heart rate and the qualitative data addressing the participant's experience or perception. Polit and Beck (2012) describe advantages of using a mixed methods approach, including complementarity, practicality, incrementality, enhanced validity, and collaboration. Complementarity refers to quantitative and qualitative approaches complementing each other. Mixed methods use words and numbers, so the study is not limited to just one method of data collection.
• 79. Practicality refers to the researcher using whichever method within the mixed methods approach best addresses the research question. Incrementality is defined as the researcher taking steps in the study, where each step leads to another in an orderly fashion; an example of incrementality is following a particular sequence, or order, just as a recipe follows steps in order to accurately produce a desired product. Data collected from one method provide feedback that promotes understanding of data from the other method. With enhanced validity, researchers can be more confident about the validity of their results because multiple types of data support the hypothesis. Collaboration provides the opportunity for both quantitative and qualitative researchers to work on similar problems in a collaborative manner. Once the researcher has decided to move forward with a mixed methods study, the next step is to decide on the design to employ. Design options include triangulation, embedded, explanatory, and exploratory. Triangulation is obtaining quantitative and qualitative data concurrently, with equal importance given to each design. An embedded design uses one type of data to support the other data type. An explanatory design focuses on collecting one data type and then moving to collecting the other data type. Exploratory is another sequential design, where the researcher collects one type of data, such as qualitative, in the first phase; then, using those findings, the researcher collects the other data, quantitative, in the second phase. Typically, the first phase concentrates on thorough investigation of a little-researched occurrence, and the second phase focuses on sorting data to use for further investigation. While using a two-step process may provide the researcher with more data, it can also be time-consuming.
• 80. Conclusion
In summary, just like tea and coffee, research has similarities and differences. In the simplest of terms, it can be viewed as measuring (how many scoops of coffee grounds?) compared to perception and experience (how long to steep the tea?). Quantitative research can range from experiments with a control group to studies looking at retrospective data and suggesting causal relationships. Qualitative research is conducted when there is limited research on a particular topic, and its descriptive narratives have the potential to provide detailed information regarding that area. Mixed methods research involves combining the quantitative and qualitative threads through data conversion and using those results to make meta-inferences about the research question. Research is hard work, but just like sipping coffee or tea at the end of a long day, it is rewarding and satisfying.
References
Choy, L.T. (2014). The strengths and weaknesses of research methodology: Comparison and complimentary between qualitative and quantitative approaches. Journal of Humanities and Social Science, 19(4), 99-104.
Polit, D.F., & Beck, C.T. (2012). Nursing research: Generating and assessing evidence for nursing practice (9th ed.). Philadelphia, PA: Wolters Kluwer.
Polit, D.F., & Beck, C.T. (2014). Essentials of nursing research: Appraising evidence for nursing practice (8th ed.). Philadelphia, PA: Wolters Kluwer.
Tashakkori, A., & Creswell, J.W. (2007). The new era of mixed methods. Journal of Mixed Methods Research, 1(1), 3-7.
• 87. Svantesson, E., Hamrin Senorski, E., Spindler, K.P., Ayeni, O.R., Fu, F.H., Karlsson, J., & Samuelsson, K. (2017). While modern medicine evolves continuously, evidence-based research methodology remains: How register studies should be interpreted and appreciated [Editorial]. Knee Surgery, Sports Traumatology, Arthroscopy, 25, 2305-2308. doi:10.1007/s00167-017-4582-y
In just a few decades, the scientific stage has undergone some dramatic changes. Novel studies are produced at a "faster than ever" pace, and technological advances enable insights into areas that would previously have been referred to as science fiction. However, the purpose of research will always be the same: to serve as a firm foundation on which to practise evidence-based medicine and ultimately improve the treatment of our patients. Is the explosive growth of research publications and technological advances always beneficial when it comes to fulfilling this purpose? As we are served with a steady stream of new "significant" findings, it is more important than ever to evaluate critically the evidence that is presented and to be aware of the limitations and pitfalls that we encounter every day as modern scientists and clinicians.
Look! A significant result!
One of the goals for researchers is to get their work published and acknowledged, preferably with multiple citations. A winning tactic to accomplish this is to present novel results and findings. Interestingly, it often happens that the most cited papers are those that contradict other reports or are proved to be fundamentally wrong [14]. So, it does not really matter how likely a result is to be true or clinically valuable; a spectacular result can entrench the findings of a study and influence clinical practice. It goes without saying that the most important factor of all in this quest is that a significant P value is presented. Today, it is generally accepted that significance, often defined as a P value of <0.05, means impact and evidence. However, this is an incorrect appreciation of the P value and can lead to an inappropriate approach to this statistical method. It has been shown that P values and hypothesis-testing methods are commonly misunderstood by researchers [6, 11, 17] and instead tend to lead to a limited perspective in relation to a study result.
Sir Ronald Fisher is regarded as one of the founders of modern statistics and is probably most associated with the concept of the P value [10, 23]. Fisher suggested that the P value reflected the probability that the result being observed was compatible with the null hypothesis. In other words, if it were true that there was no (null) difference between the factors being investigated, the P value would give an estimate of the likelihood of observing a difference as extreme as, or more extreme than, your outcome showed. However, Fisher never propagated the P < 0.05 criterion that is currently almost glorified as our ultimate means of drawing conclusions. On the contrary, Fisher appeared not to give much consideration to the actual P value number [19]. The most important thing, according to Fisher, was to repeat the experiments until the investigator felt that he or she had a plausible certainty of declaring how the experiment should be performed and interpreted, something that is infrequently done nowadays. The P value was originally an indicative tool throughout this process, not something synonymous with evidence.
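Fisher's description of the P value, the probability of a difference as extreme as or more extreme than the one observed if the null hypothesis were true, can be illustrated with a small permutation test. The data and the use of Python below are illustrative assumptions, not material from the editorial.

    import numpy as np

    rng = np.random.default_rng(0)

    # Two small, invented outcome samples (e.g., a score in a treated and an untreated group)
    treated = np.array([12.1, 11.4, 13.0, 12.7, 11.9, 12.5])
    control = np.array([11.2, 11.8, 11.0, 11.6, 11.3, 11.5])
    observed_diff = treated.mean() - control.mean()

    # Under the null hypothesis the group labels are exchangeable, so shuffle them repeatedly
    pooled = np.concatenate([treated, control])
    n_treated = len(treated)
    n_permutations = 10_000
    count_as_extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = pooled[:n_treated].mean() - pooled[n_treated:].mean()
        # "as extreme as or more extreme than" the observed difference (two-sided)
        if abs(diff) >= abs(observed_diff):
            count_as_extreme += 1

    p_value = count_as_extreme / n_permutations
    print(f"Observed difference: {observed_diff:.2f}")
    print(f"Permutation P value: {p_value:.4f}")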
  • 90. ment should be performed and interpreted, something that is infrequently implemented nowadays. The P value was originally an indicative tool throughout this process, not something synonymous with evidence. In a study recently published in JAMA cardiology [19], common misconceptions about P values were discussed. It was emphasised that, at best, the P value plays a minor role in defining the scientific or clinical importance of a study and that multiple elements, including effect size, precision of estimate of effect size and knowledge of prior relevant research, need to be integrated in the assessment [19]. This is strongly inconsistent with the concept of a P value of <0.05 as an indicator of a clinically or scientifically impor- tant difference. Moreover, the authors highlight the miscon- ception that a small P value indicates reliable and replicable results by stating that what works in medicine is a process and not the product of a single experiment. No information about a given study regarding reproducibility can be made based on the P value, nor can the reliability be determined without considering other factors [19]. One frequently for - gotten factor is how plausible the hypothesis was in the first place. It is easy to fall into the trap of thinking that a P value of <0.05 means that there is a 95% chance of true effect. However, as probability is always based on certain conditions, the most important question should be: what was the probability from the beginning? If the chance of a real effect from the beginning is small, a significant P value will only slightly increase the chances of a true effect. Or, as Regina Nuzzo put it in an article highlighting statistical errors in Nature [21]: “The more implausible the hypothe- sis—telepathy, aliens, homeopathy—the greater the chance that an exciting finding is a false alarm, no matter what the P value is” [21]. Moreover, the P value says nothing about the effect si ze.
• 91. Moreover, the P value says nothing about the effect size. The P value is basically a calculation of two factors: the difference from null and the variance. In a study with a small standard deviation (high precision), even a very small difference from zero (treatment effect) can therefore result in a significant P value. How often do we ask ourselves, when reading a paper, from what numbers a P value was generated? It is not until we look at the effect size that it is really possible to determine whether the treatment of interest has an impact. Well then, what is the definition of impact? A term often used to describe the effectiveness of a treatment is the "minimum clinically important difference" (MCID). For a study to impact clinical decision-making, the measured difference must be greater than the MCID and, moreover, the absolute difference needs to be known. These factors determine the number needed to treat and thereby indicate the impact. However, current methods for determining the MCID are the subject of debate, and it has been concluded that they are associated with shortcomings [9].
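A quick simulation makes the separation between a significant P value and a meaningful effect size explicit: with a very large sample (high precision), a difference far below any plausible MCID still produces a tiny P value. All numbers below are assumptions chosen for illustration.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Two groups of 20,000 with a true difference of 0.5 units, a standard deviation of 10,
    # and an assumed MCID of 5 units (all illustrative figures).
    n, sd, true_diff, assumed_mcid = 20_000, 10.0, 0.5, 5.0
    group_a = rng.normal(100.0, sd, size=n)
    group_b = rng.normal(100.0 + true_diff, sd, size=n)

    t_stat, p_value = stats.ttest_ind(group_b, group_a)
    mean_diff = group_b.mean() - group_a.mean()
    pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
    cohens_d = mean_diff / pooled_sd

    print(f"p = {p_value:.2e}  (statistically significant)")
    print(f"Observed difference = {mean_diff:.2f} units, Cohen's d = {cohens_d:.3f}")
    print(f"Smaller than the assumed MCID of {assumed_mcid} units: {mean_diff < assumed_mcid}")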
  • 92. whether or not they could have impacted any of today’s knowledge. If the acceptance of non-significant results increases, this could contribute to the elimination of pub- lication bias. The impact of study design Regardless of study design, the optimal research study should give an estimate of the effectiveness of one treat- ment over another, with a minimised risk of systematic bias. The ability and validity of doing this for observa- tional studies compared with randomised controlled tri - als (RCTs) has been the subject of an ongoing debate for decades. To determine the efficacy of a treatment or intervention (i.e. the extent to which a beneficial result is produced under ideal conditions), the RCTs remain the gold standard and are regarded as the most suitable tool for making the most precise estimates of treatment effect [22]. The only more highly valued study design is 2307Knee Surg Sports Traumatol Arthrosc (2017) 25:2305– 2308 1 3 the meta-analysis of large, well-conducted RCTs. Stud- ies with an observational design are often conducted when determining the effectiveness of an intervention in “real-world” scenarios (i.e. the extent to which an inter- vention produces an outcome under normal day-to-day circumstances). A Cochrane Review published in 2014 [3] examined fourteen methodological reviews compar- ing quantitative effect size estimates measuring the effi - cacy or effectiveness of interventions tested in RCTs with
  • 93. those tested in observational studies. Eleven (79%) of the examined reviews showed no significant difference between observational studies and RCTs. Two reviews concluded that observational studies had smaller effects of interest, while one suggested the exact opposite. Moreover, the review underscored the importance of con- sidering the heterogeneity of meta-analyses of RCTs or observational studies, in addition to focusing on the study design, as these factors influence the estimates reflective of true effectiveness [3]. We must never take away the power and the validity of a well-conducted RCT. However, we need to under- line the fact that evidence-based medicine is at risk if we focus myopically on the RCT study design and give it the false credibility of being able to answer all our questions. We must also acknowledge the weaknesses of RCTs and combine information obtained from this study design, while recognising the value of additional information from prospective longitudinal cohort studies. The Fragil - ity Index (FI) is a method for determining the robustness of statistically significant findings in RCTs, and it was recently applied to 48 clinical trials related to sports med- icine and arthroscopic surgery [16]. The FI represents the minimum number of patients in one arm of an RCT that is required to change the outcome, from a non-event to an event, in order to change a result from statistically sig- nificant to non-significant. So, the lower the number is, the more fragile the significant result. The systematic sur- vey somewhat worryingly showed that the median FI of included studies was 2 [16]. Could it be that we are cur- rently concluding evidence based on the outcome of two single patients in some orthopaedic sports medicine stud- ies? The FI should be an indicative tool in future clinical studies which, in combination with other statistical sum- maries from a study, could identify results that should be
• 94. Ultimately, the foundation of science is the ability to generalise the results of a study. The factors that affect the risk of an event or an outcome in a real-life situation are a result of the natural individual variation surrounding us. It is therefore somewhat paradoxical that RCTs distribute risk factors evenly and eliminate all the factors that may interact with the intervention. We should also remember that conclusions drawn from an RCT are, on many occasions, based on data obtained from highly specialised centres in one part of the world.
  • 95. of prospective longitudinal cohorts based on STROBE criteria [24] and multivariate modelling, where almost 100% of patients are enrolled. Further, they address the modelling of who will respond to an intervention. While an RCT determines an average of who the intervention will benefit, registers like these determine the individual patient to whom the treatment should be applied, as they model the multitude of risk factors a patient presents to clinicians. On the other hand, observational studies are limited by indication bias and are subject to the potential effect of unmeasured confounders. The random variation, the confounding factors, must of course be reconciled with existing knowledge in observational studies. The more individual variation, the more the precision of what we are measuring is affected. There is variation in biologi - cal responses, in previous therapies, in activity levels and types of activity and variation in lifestyles, to men- tion just a few. However, we would like to underline the importance of seeing these factors as an opportunity in observational studies. An opportunity to acquire a greater knowledge of the relationship between factors of vari - ance and the outcome, as well as possible underlying mechanisms. Using statistical approaches that adjust for confounders makes a good analysis possible [7, 20]. In order to improve the precision of results from the many registers being established around the world, we 2308 Knee Surg Sports Traumatol Arthrosc (2017) 25:2305– 2308 1 3
  • 96. must more clearly define and investigate the true con- founders. With regard to anterior cruciate ligament (ACL) reconstruction, data from several large registers have ena- bled the valuable identification of predictors of outcome. However, there is as yet no existing predictive model for multivariate analysis where confounders are taken into account, which could potentially jeopardise the validity [2]. ACL reconstruction is one of the most researched areas in orthopaedic medicine, and this is therefore noteworthy because the lack of consensus in determin- ing the factors that need to be included in a model may alter the results of studies investigating the same condi - tion. Another key point for high-level register studies is that the representativeness of the cohort is ensured. When registers are being established, comprehensive data entry is needed and it is important that the investigators take responsibility for monitoring the enrolment and attrition of the cohort. As researchers and clinicians, we sometimes need to take a step back and evaluate how we can continue to implement evidence-based medicine. We must understand that many factors contribute to what we choose to call evidence and that there is no single way to find it. Moreover, scientists before us recognised that only by repeated experiments is it possible to establish the reproducibility of the investiga- tion and thereby get closer to the truth about the efficacy of our treatments. Instead of naively believing that significant results (P < 0.05) from any study are synonymous with evidence, we can take advantage of the strengths of differ - ent study designs. We should remember that many studies have found that the results of observational and RCT stud- ies correlate well [4, 8, 15]. We encourage the performance of the best research whenever possible. Sometimes this is a well-conducted RCT or highly controlled prospective
  • 97. longitudinal cohorts, and at other times it is the establish- ment of large patient registers. With regard to the obser- vational study design, future comprehensive prospective cohort studies can provide us with important knowledge and be significant contributors to evidence-based medicine. Nevertheless, the P value is a tool that can be helpful, but it must be applied thoughtfully and while appreciating its limitations and assumptions. Evidence-based medicine as defined by the original practitioners involves making a cli n- ical decision by combining clinical experience and the best available evidence from RCTs and registers and by incor - porating the patient’s values and preferences. References 1. Albert RK (2013) “Lies, damned lies…” and observational stud- ies in comparative effectiveness research. Am J Respir Crit Care Med 187(11):1173–1177 2. An VV, Scholes C, Mhaskar VA, Hadden W, Parker D (2016) Limitations in predicting outcome following primary ACL reconstruction with single-bundle hamstring autograft—A sys- tematic review. Knee 24(2):170–178 3. Anglemyer A, Horvath HT, Bero L (2014) Healthcare outcomes assessed with observational study designs compared with those assessed in randomized trials. Cochrane Database Syst Rev 4:Mr000034 4. Benson K, Hartz AJ (2000) A comparison of observational studies and randomized, controlled trials. N Engl J Med 342(25):1878–1886
• 98. 5. Carter RE, McKie PM, Storlie CB (2017) The Fragility Index: a P-value in sheep's clothing? Eur Heart J 38(5):346–348
6. Cohen HW (2011) P values: use and misuse in medical literature. Am J Hypertens 24(1):18–23
7. Concato J (2012) Is it time for medicine-based evidence? JAMA 307(15):1641–1643
8. Concato J, Shah N, Horwitz RI (2000) Randomized, controlled trials, observational studies, and the hierarchy of research designs. N Engl J Med 342(25):1887–1892
9. Copay AG, Subach BR, Glassman SD, Polly DW Jr, Schuler TC (2007) Understanding the minimum clinically important difference: a review of concepts and methods. Spine J 7(5):541–546
10. Fisher R (1973) Statistical methods and scientific inference, 3rd edn. Hafner Publishing Company, New York
11. Goodman S (2008) A dirty dozen: twelve p-value misconceptions. Semin Hematol 45(3):135–140
12. Greene WL, Concato J, Feinstein AR (2000) Claims of equivalence in medical research: are they supported by the evidence? Ann Intern Med 132(9):715–722
• 99. 13. Inacio MC, Paxton EW, Dillon MT (2016) Understanding orthopaedic registry studies: a comparison with clinical studies. J Bone Joint Surg Am 98(1):e3
14. Ioannidis JA (2005) Contradicted and initially stronger effects in highly cited clinical research. JAMA 294(2):218–228
15. Ioannidis JP, Haidich AB, Pappa M, Pantazis N, Kokori SI, Tektonidou MG, Contopoulos-Ioannidis DG, Lau J (2001) Comparison of evidence of treatment effects in randomized and nonrandomized studies. JAMA 286(7):821–830
16. Khan M, Evaniew N, Gichuru M, Habib A, Ayeni OR, Bedi A, Walsh M, Devereaux PJ, Bhandari M (2016) The fragility of statistically significant findings from randomized trials in sports surgery. Am J Sports Med. doi:10.1177/0363546516674469
17. Kyriacou DN (2016) The enduring evolution of the P value. JAMA 315(11):1113–1115
18. Lowe WR (2016) Editorial Commentary: "There, It Fits!" Justifying nonsignificant P values. Arthroscopy 32(11):2318–2321
19. Mark DB, Lee KL, Harrell FE Jr (2016) Understanding the role of P values and hypothesis tests in clinical research. JAMA Cardiol 1(9):1048–1054
• 100. 20. Methodology Committee of the Patient-Centered Outcomes Research Institute (PCORI) (2012) Methodological standards and patient-centeredness in comparative effectiveness research: the PCORI perspective. JAMA 307(15):1636–1640
21. Nuzzo R (2014) Statistical errors: P values, the 'golden standard' of statistical validity, are not as reliable as many scientists assume. Nature 508:150–152
22. Rosenberg W, Donald A (1995) Evidence based medicine: an approach to clinical problem-solving. BMJ 310(6987):1122–1126
23. Salsburg D (2002) The lady tasting tea. Holt Paperbacks, New York
24. von Elm E, Altman DG, Egger M, Pocock SJ, Gotzsche PC, Vandenbroucke JP (2007) The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Lancet 370(9596):1453–1457