Knee Surg Sports Traumatol Arthrosc (2017) 25:2305–2308
DOI 10.1007/s00167-017-4582-y
EDITORIAL
While modern medicine evolves continuously, evidence-based research methodology remains: how register studies should be interpreted and appreciated
Eleonor Svantesson2 · Eric Hamrin Senorski2 · Kurt P. Spindler3 · Olufemi R. Ayeni4 · Freddie H. Fu5 · Jón Karlsson1,2 · Kristian Samuelsson1,2
Published online: 13 June 2017
© European Society of Sports Traumatology, Knee Surgery,
Arthroscopy (ESSKA) 2017
* Kristian Samuelsson
[email protected]

1 Department of Orthopaedics, Sahlgrenska University Hospital, Mölndal, Sweden
2 Department of Orthopaedics, Institute of Clinical Sciences, The Sahlgrenska Academy, University of Gothenburg, 431 80 Gothenburg, Sweden
3 Cleveland Clinic Sports Health Center, Garfield Heights, OH, USA
4 Division of Orthopaedic Surgery, Department of Surgery, McMaster University, Hamilton, ON, Canada
5 Department of Orthopedic Surgery, University of Pittsburgh, Pittsburgh, PA, USA

In just a few decades, the scientific stage has undergone some dramatic changes. Novel studies are produced at a "faster than ever" pace, and technological advances enable insights into areas that would previously have been referred to as science fiction. However, the purpose of research will always be the same: to serve as a firm foundation on which to practise evidence-based medicine and ultimately improve the treatment of our patients. Is the explosive growth of research publications and technological advances always beneficial when it comes to fulfilling this purpose? As we are served with a steady stream of new "significant" findings, it is more important than ever to critically evaluate the evidence that is presented and to be aware of the limitations and pitfalls that we encounter every day as modern scientists and clinicians.

Look! A significant result!

One of the goals for researchers is to get their work published and acknowledged, preferably with multiple citations. A winning tactic to accomplish this is to present novel results and findings. Interestingly, it often happens that the most cited papers are those that contradict other reports or are proved to be fundamentally wrong [14]. So it does not really matter how likely a result is to be true or clinically valuable: a spectacular result can entrench the findings of a study and influence clinical practice. It goes without saying that the most important factor of all in this quest is that a significant P value is presented. Today, it is generally accepted that significance, often defined as a P value of <0.05, means impact and evidence. However, this is an incorrect appreciation of the P value and could lead to an inappropriate approach to this statistical method. It has been shown that P values and hypothesis-testing methods are commonly misunderstood by researchers [6, 11, 17]
and instead tend to lead to a limited perspective in relation
to a study result.
Sir Ronald Fisher is regarded as one of the founders of
modern statistics and is probably most associated with the
concept of the P value [10, 23]. Fisher suggested that the P
value reflected the probability that the result being observed
was compatible with the null hypothesis. In other words,
if it were true that there was no (null) difference between
the factors being investigated, the P value would give an
estimation of the likelihood of observing a difference as
extreme as or more extreme than your outcome showed.
However, Fisher never propagated the P < 0.05 criterion
that is currently almost glorified as our ultimate means of
conclusion making. On the contrary, Fisher appeared not
to give much consideration to the actual P value number
[19]. The most important thing, according to Fisher, was to
repeat the experiments until the investigator felt that he or
she had a plausible certainty of declaring how the experi-
ment should be performed and interpreted, something that
is infrequently implemented nowadays. The P value was
originally an indicative tool throughout this process, not
something synonymous with evidence.
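To make this definition concrete, here is a small, purely illustrative sketch (our own addition, not part of the editorial; the data and group sizes are hypothetical) that estimates a P value by permutation: group labels are shuffled repeatedly, as if the null hypothesis were true, and we count how often a difference at least as extreme as the observed one appears.

```python
# Illustrative permutation estimate of a P value (hypothetical data).
import numpy as np

rng = np.random.default_rng(42)
treated = np.array([72.0, 68.0, 75.0, 71.0, 69.0, 74.0])   # hypothetical outcome scores
control = np.array([70.0, 66.0, 73.0, 69.0, 68.0, 70.0])
observed_diff = treated.mean() - control.mean()

pooled = np.concatenate([treated, control])
n_treated = len(treated)
n_permutations = 20_000
count_extreme = 0

for _ in range(n_permutations):
    rng.shuffle(pooled)                                  # relabel groups at random (null world)
    diff = pooled[:n_treated].mean() - pooled[n_treated:].mean()
    if abs(diff) >= abs(observed_diff):                  # "as extreme as or more extreme than"
        count_extreme += 1

p_value = count_extreme / n_permutations
print(f"observed difference = {observed_diff:.2f}, permutation P = {p_value:.3f}")
```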
In a study recently published in JAMA Cardiology [19],
common misconceptions about P values were discussed. It
was emphasised that, at best, the P value plays a minor role
in defining the scientific or clinical importance of a study
and that multiple elements, including effect size, precision
of estimate of effect size and knowledge of prior relevant
research, need to be integrated in the assessment [19]. This
is strongly inconsistent with the concept of a P value of
<0.05 as an indicator of a clinically or scientifically impor-
tant difference. Moreover, the authors highlight the miscon-
ception that a small P value indicates reliable and replicable
results by stating that what works in medicine is a process
and not the product of a single experiment. No information
about a given study regarding reproducibility can be made
based on the P value, nor can the reliability be determined
without considering other factors [19]. One frequently forgotten factor is how plausible the hypothesis was in the
first place. It is easy to fall into the trap of thinking that a
P value of <0.05 means that there is a 95% chance of true
effect. However, as probability is always based on certain
conditions, the most important question should be: what
was the probability from the beginning? If the chance of a
real effect from the beginning is small, a significant P value
will only slightly increase the chances of a true effect. Or,
as Regina Nuzzo put it in an article highlighting statistical
errors in Nature [21]: “The more implausible the hypothe-
sis—telepathy, aliens, homeopathy—the greater the chance
that an exciting finding is a false alarm, no matter what the
P value is” [21].
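A simplified screening-style calculation of our own (not taken from the cited articles; the alpha and power values are assumptions) makes the same point numerically: with alpha = 0.05 and 80% power, the share of "significant" findings that are false alarms depends heavily on how plausible the hypothesis was before the study began.

```python
# Illustrative only: false-alarm share among "significant" results, given a prior.
def false_positive_risk(prior_prob_real, alpha=0.05, power=0.80):
    """Proportion of P < alpha results that are false alarms under these assumptions."""
    true_positives = power * prior_prob_real           # real effects correctly detected
    false_positives = alpha * (1.0 - prior_prob_real)  # null effects that cross the threshold
    return false_positives / (true_positives + false_positives)

for prior in (0.50, 0.10, 0.01):   # plausible, long-shot, and "telepathy"-level hypotheses
    print(f"prior = {prior:.2f} -> false positive risk = {false_positive_risk(prior):.2f}")
```

With a 1-in-100 prior, most "significant" results in this simple model are false alarms, regardless of how small the P value looks.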
Moreover, the P value says nothing about the effect size.
The P value is basically a calculation of two factors—the
difference from null and the variance. In a study with a
small standard deviation (high precision), even a very
small difference from zero (treatment effect) can therefore
result in a significant P value. How frequently do we ask ourselves, when reading a paper, "From what numbers was this P value generated?" It is not until we look at the
effect size that it is really possible to determine whether
the treatment of interest has an impact. Well then, what
is the definition of impact? A term often used to describe
the effectiveness of a treatment is the “minimum clini-
cally important difference” (MCID). For a study to impact
clinical decision-making, the measurement given must be
greater than the MCID and, moreover, the absolute differ-
ence needs to be known. These factors determine the num-
ber needed to treat and thereby indicate the impact. However, current methods for determining the MCID are the subject of debate, and it has been concluded that they are associated
with shortcomings [9].
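To separate statistical significance from clinical importance, consider the hypothetical sketch below (our own illustration; the outcome scale, the MCID of 5 points, and the scipy tooling are all assumptions): a large, precise study turns a trivial mean difference into a very small P value, yet the effect is nowhere near clinically important.

```python
# Illustrative: a tiny effect becomes "significant" in a large, precise study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 10_000                                           # very large study, per group
group_a = rng.normal(loc=80.0, scale=5.0, size=n)    # hypothetical outcome score
group_b = rng.normal(loc=80.5, scale=5.0, size=n)    # true difference: 0.5 points

t_stat, p_value = stats.ttest_ind(group_a, group_b)
mean_diff = group_b.mean() - group_a.mean()
mcid = 5.0                                           # assumed minimum clinically important difference

print(f"mean difference = {mean_diff:.2f} points, P = {p_value:.2g}")
print(f"clears the assumed MCID of {mcid} points? {abs(mean_diff) >= mcid}")
```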
We should also remember that non-significant P values
are sometimes used to conclude that the interventions of interest are "equivalent" or "non-inferior", which is incorrect if the primary study design was not intended to investigate equivalence between two treatments [18]. Without designing the study for this purpose from the outset, it is impossible to ascertain the power to detect the clinically relevant difference that is needed for a declaration of equivalence. It can, in fact, have detrimental downstream effects on patient care if a truly suboptimal treatment is
declared as being non-inferior to a gold-standard treatment
[12]. Instead, let us accept the fact that not all studies will
show significant results, nor should they. In the past, there has been a bias against "negative trials" that do not show significance, and because of this we can only speculate about whether they could have influenced any of today's knowledge. If the acceptance of non-significant results
increases, this could contribute to the elimination of pub-
lication bias.
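The sketch below (hypothetical data and an assumed non-inferiority margin of 3 points on an arbitrary scale) illustrates why a non-significant P value from an ordinary superiority comparison cannot, on its own, support a claim of non-inferiority: that claim requires a prespecified margin and a confidence interval for the difference that stays inside it.

```python
# Illustrative: a non-significant P does not equal non-inferior (hypothetical data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
gold_standard = rng.normal(loc=85.0, scale=12.0, size=30)   # hypothetical outcome scores
new_treatment = rng.normal(loc=83.0, scale=12.0, size=30)   # slightly worse on average

t_stat, p_value = stats.ttest_ind(new_treatment, gold_standard)
diff = new_treatment.mean() - gold_standard.mean()
se = np.sqrt(new_treatment.var(ddof=1) / 30 + gold_standard.var(ddof=1) / 30)
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se        # normal-approximation 95% CI

margin = 3.0                                                # assumed largest acceptable deficit
print(f"P = {p_value:.2f} (a non-significant P alone cannot show equivalence)")
print(f"difference = {diff:.1f}, 95% CI ({ci_low:.1f}, {ci_high:.1f})")
print(f"non-inferior at a margin of {margin}? {ci_low > -margin}")
```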
The impact of study design
Regardless of study design, the optimal research study
should give an estimate of the effectiveness of one treat-
ment over another, with a minimised risk of systematic
bias. The ability and validity of doing this for observa-
tional studies compared with randomised controlled trials (RCTs) has been the subject of an ongoing debate
for decades. To determine the efficacy of a treatment or
intervention (i.e. the extent to which a beneficial result
is produced under ideal conditions), RCTs remain
the gold standard and are regarded as the most suitable
tool for making the most precise estimates of treatment
effect [22]. The only more highly valued study design is
the meta-analysis of large, well-conducted RCTs. Stud-
ies with an observational design are often conducted
when determining the effectiveness of an intervention in
“real-world” scenarios (i.e. the extent to which an inter-
vention produces an outcome under normal day-to-day
circumstances). A Cochrane Review published in 2014
[3] examined fourteen methodological reviews compar-
ing quantitative effect size estimates measuring the effi-
cacy or effectiveness of interventions tested in RCTs with
those tested in observational studies. Eleven (79%) of
the examined reviews showed no significant difference
between observational studies and RCTs. Two reviews
concluded that observational studies had smaller effects
of interest, while one suggested the exact opposite.
Moreover, the review underscored the importance of con-
sidering the heterogeneity of meta-analyses of RCTs or
observational studies, in addition to focusing on the study
design, as these factors influence the estimates reflective
of true effectiveness [3].
We must never take away the power and the validity
of a well-conducted RCT. However, we need to under-
line the fact that evidence-based medicine is at risk if we
focus myopically on the RCT study design and give it the
false credibility of being able to answer all our questions.
We must also acknowledge the weaknesses of RCTs and combine information obtained from this study design with the additional information that prospective longitudinal cohort studies provide. The Fragility Index (FI) is a method for determining the robustness of statistically significant findings in RCTs, and it was recently applied to 48 clinical trials related to sports medicine and arthroscopic surgery [16]. The FI represents the minimum number of patients in one arm of an RCT whose outcome would need to change from a non-event to an event in order to turn a statistically significant result non-significant. So, the lower the number, the more fragile the significant result. The systematic sur-
vey somewhat worryingly showed that the median FI of
included studies was 2 [16]. Could it be that, in some orthopaedic sports medicine studies, we are currently basing our conclusions on the outcomes of just two patients? The FI should be an indicative tool in future clinical
studies which, in combination with other statistical sum-
maries from a study, could identify results that should be
interpreted cautiously or require further investigation [5].
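A minimal sketch of the FI idea follows, using made-up trial counts rather than data from the cited survey, and Fisher's exact test from scipy: patients in one arm are switched from non-event to event, one at a time, until the result is no longer significant at the 0.05 level.

```python
# Illustrative Fragility Index calculation (hypothetical 2x2 trial counts).
from scipy.stats import fisher_exact

def fragility_index(events_a, total_a, events_b, total_b, alpha=0.05):
    """Number of non-event-to-event switches in arm A needed to lose significance."""
    _, p = fisher_exact([[events_a, total_a - events_a],
                         [events_b, total_b - events_b]])
    if p >= alpha:
        return 0                       # the result was not significant to begin with
    switches = 0
    while events_a < total_a:
        events_a += 1                  # convert one non-event to an event in arm A
        switches += 1
        _, p = fisher_exact([[events_a, total_a - events_a],
                             [events_b, total_b - events_b]])
        if p >= alpha:
            break
    return switches

# Hypothetical trial: 1/50 failures with treatment A versus 9/50 with treatment B.
print("Fragility Index:", fragility_index(1, 50, 9, 50))
```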
Ultimately, the foundation of science is the ability to
generalise the results of a study. The factors that affect
the risk of an event or an outcome in a real-life situation
are a result of the natural individual variation surround-
ing us. It is therefore somewhat paradoxical in RCTs to
distribute risk factors evenly and eliminate all the fac-
tors that may interact with the intervention. We should
remember that conclusions drawn from an RCT are on many occasions based on data obtained from highly specialised centres in one part of the world. The
population is enrolled based on strict inclusion and exclu-
sion criteria, which should always trigger the questions
of “how many individuals failed to meet them?” and
“could their participation have made any difference to the
result?” Moreover, RCTs have also been criticised for not
representing usual care, which may in fact be the case at
a highly specialised centre for sports medicine [1].
High-quality observational studies – an asset in evidence-based medicine
In addition to the generalisability of the results, large
observational studies originating from large registers
offer advantages in terms of identifying incidences,
understanding practices and determining the long-term
effects of different types of exposure/intervention. In particular, adverse events can be identified and rare outcomes
can be found [13]. Well-conducted, large cohort stud-
ies are regarded as the highest level of evidence among
observational studies, as the temporality of events can be
established. To put it another way, the cause of an event
always precedes the effect [13]. The SPORT (spine), MOON (ACL reconstruction) and MARS (revision ACL reconstruction) cohorts are examples of prospective longitudinal cohorts based on STROBE
criteria [24] and multivariate modelling, where almost
100% of patients are enrolled. Further, they address the
modelling of who will respond to an intervention. While an RCT estimates, on average, who will benefit from the intervention, registers like these help identify the individual patient to whom the treatment should be applied, as they
model the multitude of risk factors a patient presents to
clinicians.
On the other hand, observational studies are limited
by indication bias and are subject to the potential effect
of unmeasured confounders. The random variation, the
confounding factors, must of course be reconciled with
existing knowledge in observational studies. The more
individual variation, the more the precision of what we
are measuring is affected. There is variation in biological responses, in previous therapies, in activity levels and types of activity, and in lifestyles, to mention just a few. However, we would like to underline the importance of seeing these factors as an opportunity in observational studies: an opportunity to acquire greater knowledge of the relationship between sources of variance and the outcome, as well as possible underlying mechanisms. Using statistical approaches that adjust for
confounders makes a good analysis possible [7, 20].
In order to improve the precision of results from the
many registers being established around the world, we
must more clearly define and investigate the true con-
founders. With regard to anterior cruciate ligament (ACL)
reconstruction, data from several large registers have ena-
bled the valuable identification of predictors of outcome.
However, there is as yet no existing predictive model for
multivariate analysis where confounders are taken into
account, which could potentially jeopardise the validity
[2]. ACL reconstruction is one of the most researched
areas in orthopaedic medicine, and this is therefore
noteworthy because the lack of consensus in determin-
ing the factors that need to be included in a model may
alter the results of studies investigating the same condi -
tion. Another key point for high-level register studies is
that the representativeness of the cohort is ensured. When
registers are being established, comprehensive data entry
is needed and it is important that the investigators take
responsibility for monitoring the enrolment and attrition
of the cohort.
As researchers and clinicians, we sometimes need to take
a step back and evaluate how we can continue to implement
evidence-based medicine. We must understand that many
factors contribute to what we choose to call evidence and
that there is no single way to find it. Moreover, scientists
before us recognised that only by repeated experiments is
it possible to establish the reproducibility of the investiga-
tion and thereby get closer to the truth about the efficacy of
our treatments. Instead of naively believing that significant
results (P < 0.05) from any study are synonymous with
evidence, we can take advantage of the strengths of different study designs. We should remember that many studies
have found that the results of observational and RCT stud-
ies correlate well [4, 8, 15]. We encourage the performance
of the best research whenever possible. Sometimes this means a well-conducted RCT or a highly controlled prospective longitudinal cohort, and at other times it means the establishment of large patient registers. With regard to the obser-
vational study design, future comprehensive prospective
cohort studies can provide us with important knowledge
and be significant contributors to evidence-based medicine.
Nevertheless, the P value is a tool that can be helpful, but it must be applied thoughtfully and with an appreciation of its limitations and assumptions. Evidence-based medicine, as defined by its original practitioners, involves making a clinical decision by combining clinical experience with the best available evidence from RCTs and registers and by incorporating the patient's values and preferences.
References

1. Albert RK (2013) "Lies, damned lies…" and observational studies in comparative effectiveness research. Am J Respir Crit Care Med 187(11):1173–1177
2. An VV, Scholes C, Mhaskar VA, Hadden W, Parker D (2016) Limitations in predicting outcome following primary ACL reconstruction with single-bundle hamstring autograft – a systematic review. Knee 24(2):170–178
3. Anglemyer A, Horvath HT, Bero L (2014) Healthcare outcomes assessed with observational study designs compared with those assessed in randomized trials. Cochrane Database Syst Rev 4:MR000034
4. Benson K, Hartz AJ (2000) A comparison of observational studies and randomized, controlled trials. N Engl J Med 342(25):1878–1886
5. Carter RE, McKie PM, Storlie CB (2017) The Fragility Index: a P-value in sheep's clothing? Eur Heart J 38(5):346–348
6. Cohen HW (2011) P values: use and misuse in medical literature. Am J Hypertens 24(1):18–23
7. Concato J (2012) Is it time for medicine-based evidence? JAMA 307(15):1641–1643
8. Concato J, Shah N, Horwitz RI (2000) Randomized, controlled trials, observational studies, and the hierarchy of research designs. N Engl J Med 342(25):1887–1892
9. Copay AG, Subach BR, Glassman SD, Polly DW Jr, Schuler TC (2007) Understanding the minimum clinically important difference: a review of concepts and methods. Spine J 7(5):541–546
10. Fisher R (1973) Statistical methods and scientific inference, 3rd edn. Hafner Publishing Company, New York
11. Goodman S (2008) A dirty dozen: twelve p-value misconceptions. Semin Hematol 45(3):135–140
12. Greene WL, Concato J, Feinstein AR (2000) Claims of equivalence in medical research: are they supported by the evidence? Ann Intern Med 132(9):715–722
13. Inacio MC, Paxton EW, Dillon MT (2016) Understanding orthopaedic registry studies: a comparison with clinical studies. J Bone Joint Surg Am 98(1):e3
14. Ioannidis JA (2005) Contradicted and initially stronger effects in highly cited clinical research. JAMA 294(2):218–228
15. Ioannidis JP, Haidich AB, Pappa M, Pantazis N, Kokori SI, Tektonidou MG, Contopoulos-Ioannidis DG, Lau J (2001) Comparison of evidence of treatment effects in randomized and nonrandomized studies. JAMA 286(7):821–830
16. Khan M, Evaniew N, Gichuru M, Habib A, Ayeni OR, Bedi A, Walsh M, Devereaux PJ, Bhandari M (2016) The fragility of statistically significant findings from randomized trials in sports surgery. Am J Sports Med. doi:10.1177/0363546516674469
17. Kyriacou DN (2016) The enduring evolution of the P value. JAMA 315(11):1113–1115
18. Lowe WR (2016) Editorial commentary: "There, it fits!" – justifying nonsignificant P values. Arthroscopy 32(11):2318–2321
19. Mark DB, Lee KL, Harrell FE Jr (2016) Understanding the role of P values and hypothesis tests in clinical research. JAMA Cardiol 1(9):1048–1054
20. Methodology Committee of the Patient-Centered Outcomes Research Institute (PCORI) (2012) Methodological standards and patient-centeredness in comparative effectiveness research: the PCORI perspective. JAMA 307(15):1636–1640
21. Nuzzo R (2014) Statistical errors – P values, the 'golden standard' of statistical validity, are not as reliable as many scientists assume. Nature 508:150–152
22. Rosenberg W, Donald A (1995) Evidence based medicine: an approach to clinical problem-solving. BMJ 310(6987):1122–1126
23. Salsburg D (2002) The lady tasting tea. Holt Paperbacks, New York
24. von Elm E, Altman DG, Egger M, Pocock SJ, Gotzsche PC, Vandenbroucke JP (2007) The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Lancet 370(9596):1453–1457
Nephrology Nursing Journal March-April 2018 Vol. 45, No. 2
Exploring the Evidence
Quantitative and Qualitative Research
Focusing on the Fundamentals:
A Simplistic Differentiation Between
Qualitative and Quantitative Research
Shannon Rutberg
Christina D. Bouikidis
Rutberg, S., & Bouikidis, C.D. (2018). Focusing on the fundamentals: A simplistic differentiation between qualitative and quantitative research. Nephrology Nursing Journal, 45(2), 209-212. Copyright 2018 American Nephrology Nurses' Association.

This article describes qualitative, quantitative, and mixed methods research. Various classifications of each research design, including specific categories within each research method, are explored. Attributes and differentiating characteristics, such as formulating research questions and identifying a research problem, are examined, and various research method designs and reasons to select one method over another for a research project are discussed.

Key Words: Qualitative research, quantitative research, mixed methods research, method design.

Shannon Rutberg, MS, MSN, BS, RN-BC, is a Clinical Nurse Educator, Bryn Mawr Hospital, Main Line Health System, Bryn Mawr, PA. Christina D. Bouikidis, MSN, RNC-OB, is a Clinical Informatics Educator, Main Line Health System, Berwyn, PA.

Research is categorized as quantitative or qualitative in nature. Quantitative research employs the use of numbers and accuracy, while qualitative research focuses on lived experiences and human perceptions (Polit & Beck, 2012). Research itself has a few varieties that can be explained using analogies of making a cup of coffee or tea.

To make coffee, the amount of water and coffee grounds to be used must be measured. This precise measurement determines the amount of coffee and the strength of the brew. The key word in this quantitative research analogy is measure. To make tea, hot water must be poured over a tea bag in a mug. The length of time a person leaves the tea bag in the mug comes down to the perception of how strong the tea should be. The key word in qualitative research is perception. This article describes and explores the differences between quantitative (measure) and qualitative (perception) research.

Types of Research

Nursing research can be defined as a "systematic inquiry designed to develop trustworthy evidence about issues of importance to the nursing profession, including nursing practice, education, administration, and informatics" (Polit & Beck, 2012, p. 736). Researchers determine the type of research to employ based upon the research question being investigated. The two types of research methods are quantitative and qualitative. Quantitative research uses a rigorous and controlled design to examine phenomena using precise measurement (Polit & Beck, 2012). For example, a quantitative study may investigate a patient's heart rate before and after consuming a caffeinated beverage, like a specific brand/type of coffee. In our coffee and tea analogy, in a quantitative study, the research participant may be asked to drink a 12-ounce cup of coffee, and after the participant consumes the coffee, the researcher measures the participant's heart rate in beats per minute. Qualitative research examines phenomena using an in-depth, holistic approach and a fluid
research design that produces rich, telling narratives (Polit
& Beck, 2012). An example of a qualitative study is explor -
ing the participant’s preference of coffee over tea, and feel-
ings or mood one experiences after drinking this favorite
hot beverage.
Quantitative Research
Quantitative research can range from clinical trials for
new treatments and medications to surveying nursing staff
and patients. There are many reasons for selecting a quan-
titative research study design. For example, one may
choose quantitative research if a lack of research exists on
a particular topic, if there are unanswered research ques-
tions, or if the research topic under consideration could
make a meaningful impact on patient care (Polit & Beck,
2012). There are several different types of quantitative
research. Some of the most commonly employed quanti-
tative designs include experimental, quasi-experimental,
and non-experimental.
Experimental Design
An experimental design isolates the identified phenom-
ena in a laboratory and controls conditions under which
the experiment occurs (Polit & Beck, 2012). There is a con-
trol group and at least one experimental group in this
design. The most reliable studies use a randomization
process for group assignment wherein the control group
receives a placebo (an intervention that does not have therapeutic significance) and the experimental group receives
an intervention (Polit & Beck, 2012). For example, if one is
studying the effects of caffeine on heart rate 15 minutes
after consuming coffee, using a quantitative experimental
design, the design may be set up similarly to the descrip-
tion in Table 1. Randomization will allow an equal chance
for each participant to be assigned to either the control or
the experimental group. Then the heart rate is measured
before and after the intervention. The intervention is drink-
ing decaffeinated coffee for the control group and drinking
caffeinated coffee for the experimental group. Data collect-
ed (heart rate pre- and post-coffee consumption) are then
analyzed, and conclusions are drawn.
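A short sketch of this caffeine experiment (hypothetical heart-rate data; numpy and scipy are our choice of tooling, as the article names no software): participants are randomly assigned, heart rate is recorded before and 15 minutes after the beverage, and the change is compared between the two groups.

```python
# Illustrative randomized experiment: caffeinated vs. decaffeinated coffee.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
participants = np.arange(20)
rng.shuffle(participants)                                   # random assignment
control_ids, experimental_ids = participants[:10], participants[10:]

# Hypothetical heart rates (beats per minute) before and 15 minutes after the drink.
pre = rng.normal(loc=70.0, scale=6.0, size=20)
post = pre.copy()
post[control_ids] += rng.normal(loc=0.0, scale=3.0, size=10)       # decaffeinated coffee
post[experimental_ids] += rng.normal(loc=8.0, scale=3.0, size=10)  # caffeinated coffee

change_control = post[control_ids] - pre[control_ids]
change_experimental = post[experimental_ids] - pre[experimental_ids]
t_stat, p_value = stats.ttest_ind(change_experimental, change_control)

print(f"mean change: control {change_control.mean():.1f} bpm, "
      f"experimental {change_experimental.mean():.1f} bpm, P = {p_value:.4f}")
```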
Quasi-Experimental Design
Quasi-experimental designs include an intervention in
the design; however, designs do not always include a con-
trol group, which is a cornerstone of a true experimental design. This type of design does not have random-
ization like the experimental design (Polit & Beck, 2012).
Instead, there may be an intervention put into place with
outcome measures pre- and post-intervention implemen-
tation, and a comparison used to identify if the interven-
tion made a difference. For example, perhaps a coffee
chain store wants to see if sampling a new flavor of coffee,
like hazelnut, will increase revenue over a one-month
period. At location A, hazelnut coffee was distributed as a
sample to customers in line waiting to purchase coffee. At
location B, no samples were distributed to customers.
Sales of hazelnut coffee are examined at both locations
prior to the intervention (hazelnut sample given out to
customers waiting in line) and then again one month later,
after the intervention. Lastly, monthly revenue is compared at both sites to measure whether the free hazelnut coffee samples affected sales.
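As a quick sketch of this example (the revenue figures are invented), the pre/post change at the sampled location can be compared with the change at the other location, a simple difference-in-differences; without randomization or a true control group, this only suggests, rather than proves, that the free samples made the difference.

```python
# Illustrative quasi-experimental comparison (hypothetical monthly revenue, in dollars).
revenue = {
    "location_A": {"before": 12000.0, "after": 14500.0},   # hazelnut samples handed out
    "location_B": {"before": 11800.0, "after": 12100.0},   # no samples
}

change_a = revenue["location_A"]["after"] - revenue["location_A"]["before"]
change_b = revenue["location_B"]["after"] - revenue["location_B"]["before"]

print(f"change at location A (sampled): {change_a:+.0f}")
print(f"change at location B (not sampled): {change_b:+.0f}")
print(f"difference-in-differences estimate: {change_a - change_b:+.0f}")
```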
Non-Experimental Design
The final type of quantitative research discussed in this
article is the nonexperimental design. Manipulation of
variables does not occur with this design, but an interest
exists to observe the phenomena and identify if a relation-
ship exists (Polit & Beck, 2012). Perhaps someone is inter-
ested if drinking coffee throughout one’s life decreases the
incidence of having a stroke. Researchers for this type of
study will ask participants to report approximately how
much coffee they drank daily, and data would be com-
pared to their stroke incidence. Researchers will analyze the data to determine whether a relationship exists between coffee consumption and stroke incidence by examining behavior that occurred in the past; because no variables are manipulated, such a design can suggest, but not establish, a causal relationship.
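A sketch of this non-experimental example (the counts are invented; scipy is our choice of tooling): nothing is manipulated, and the analysis simply tests whether reported coffee drinking and stroke incidence are associated.

```python
# Illustrative test of association in a retrospective, non-experimental design.
from scipy.stats import chi2_contingency

#        stroke  no stroke   (hypothetical counts)
table = [[30, 470],          # regular coffee drinkers
         [45, 455]]          # non-drinkers

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, P = {p_value:.3f} (association only, not causation)")
```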
Table 1
Comparison of Control and Experimental Groups

Assignment
  Control group: 10 participants randomly assigned to the control group
  Experimental group: 10 participants randomly assigned to the experimental group
Pre-Intervention Data Collection
  Control group: Heart rate prior to beverage consumption
  Experimental group: Heart rate prior to beverage consumption
Intervention
  Control group: Consume a placebo drink (i.e., decaffeinated coffee)
  Experimental group: Consume the experimental drink (i.e., caffeinated coffee)
Post-Intervention Data Collection
  Control group: Obtain heart rate 15 minutes after the beverage was consumed
  Experimental group: Obtain heart rate 15 minutes after the beverage was consumed
Quantitative Study Attributes
In quantitative studies, the researcher uses standardized
questionnaires or experiments to collect numeric data.
Quantitative research is conducted in a more structured
environment that often allows the researcher to have con-
trol over study variables, environment, and research ques-
tions. Quantitative research may be used to determine rela-
tionships between variables and outcomes. Quantitative
research involves the development of a hypothesis – a
description of the anticipated result, relationship, or
expected outcome from the question being researched
(Polit & Beck, 2012). For example, in the experimental
study mentioned in Table 1, one may hypothesize the con-
trol group will not see an increase in heart rate. However,
in the experimental group, one may hypothesize an
increase in heart rate will occur. Data collected (heart rate
before and after coffee consumption) are analyzed, and
conclusions are drawn.
Qualitative Research
According to Choy (2014), qualitative studies address
the social aspect of research. The researcher uses open-
ended questions and interviews subjects in a semi-struc-
tured fashion. Interviews often take place in the partici-
pant’s natural setting or a quiet environment, like a confer-
ence room. Qualitative research methodology is often
employed when the problem is not well understood and
there is an existing desire to explore the problem thor-
oughly. Typically, a rich narrative from participant inter-
views is generated and then analyzed in qualitative
research in an attempt to answer the research question.
Many questions will be used to uncover the problem and
address it comprehensively (Polit & Beck, 2014).
Types of Qualitative Research Design
Types of qualitative research include ethnography, phe-
nomenology, grounded theory, historical research, and
case studies (Polit & Beck, 2014). Ethnography reveals the
way culture is defined, the behavior associated with cul-
ture, and how culture is understood. Ethnography design
allows the researcher to investigate shared meanings that
influence behaviors of a group (Polit & Beck, 2012).
Phenomenology is employed to investigate a person’s
lived experience and uncover meanings of this experience
(Polit & Beck, 2012). Nurses often use phenomenology
research to better understand topics that may be part of
human experiences, such as chronic pain or domestic vio-
lence (Polit & Beck, 2012). Using the coffee analogy, a
researcher may use phenomenology to investigate atti-
tudes and practices around a specific time of day for coffee
consumption. For example, some individuals may prefer
coffee in the morning, while some prefer coffee through-
out the day, and others only enjoy coffee after a meal.
Grounded theory investigates actions and effects of the
behavior in a culture. Grounded theory methodology may
be used to investigate afternoon tea time practices in
Europe as compared to morning coffee habits in the
United States.
Historical research examines the past using recorded
data, such as photos or objects. The historical researcher may look at the size of coffee mugs in photographs, or at antique mugs themselves, over a few centuries to provide a his-
torical perspective. Perhaps photos are telling of cultural
practices surrounding consuming coffee over the centuries
– in solitude, or in small or large groups.
Qualitative Research Attributes
When selecting a qualitative research design, keep in
mind the unique attributes. Qualitative research method-
ology may involve multiple means of data collection to
further understand the problem, such as interviews in
addition to observations (Polit & Beck, 2012). Further,
qualitative research is flexible and adapts to new informa-
tion based on data collected, provides a holistic perspec-
tive on the topic, and allows the researcher to become immersed in the investigation. The researcher is the research tool, and data are constantly being analyzed from the commencement of the study. The decision to select a qualitative methodology requires several consider-
ations, a great amount of planning (such as which research
design fits the study best, the time necessary to devote to
the study, a data collection plan, and resources available
to collect the data), and finally, self-reflection on any per-
sonal presumptions and biases toward the topic (Polit &
Beck, 2014).
Selecting a sample population in qualitative research
begins with identifying eligibility to participate in the
study based on the research question. The participant
needs to have had exposure or experience with the con-
tent being investigated. A thorough interview will uncover the experience the participant had with the phenomenon under study. There will most likely be a few stan-
dard questions asked of all participants and subsequent
questions that will evolve based upon the participant’s
experience/answers. Thus, there tends to be a small sample size with a high volume of narrative data that needs to be
analyzed and interpreted to identify trends intended to
answer the research question (Polit & Beck, 2014).
Mixed Methods
Using both quantitative and qualitative methodology in a single study is known as a mixed methods study.
According to Tashakkori and Creswell (2007), mixed
methods research is “research in which the researcher col -
lects and analyzes data, integrates the findings, and draws
inferences using both qualitative and quantitative
approaches or methods in a single study or program of
inquiry” (p. 4). This approach has the potential to allow
the researcher to collect two sets of data. An example of
using mixed methods would be examining effects of con-
suming a caffeinated beverage prior to bedtime. The
researcher may want to investigate the impact of caffeine
on the participant's heart rate and ask how consuming the caffeine drink makes him or her feel. There is the quantitative numeric data measuring the heart rate and the qualitative data addressing the participant's experience or perception.
Polit and Beck (2012) describe advantages of using a
mixed method approach, including complementarity, practicality, incrementality, enhanced validity, and collaboration. Complementarity refers to quantitative and qualitative
approaches complementing each other. Mixed methods
use words and numbers, so the study is not limited to
using just one method of data collection. Practicality refers
to the researcher using the method that best addresses the
research question while using one method from the mixed
method approach. Incrementality is defined as the
researcher taking steps in the study, where each step leads
to another in an orderly fashion. An example of incremen-
tality includes following a particular sequence, or an
order, just as a recipe follows steps in order to accurately
produce a desired product. Data collected from one
method provide feedback to promote understanding of
data from the other method. With enhanced validity,
researchers can be more confident about the validity of
their results because of multiple types of data supporting
the hypothesis. Collaboration provides opportunity for
both quantitative and qualitative researchers to work on
similar problems in a collaborative nature.
Once the researcher has decided to move forward with
a mixed-method study, the next step is to decide on the
designs to employ. Design options include triangulation,
embedded, explanatory, and exploratory. Triangulation is
obtaining quantitative and qualitative data concurrently,
with equal importance given to each design. An embedded
design uses one type of data to support the other data type.
An explanatory design focuses on collecting one data type
and then moving to collecting the other data type.
Exploratory is another sequential design, where the
researcher collects one type of data, such as qualitative, in
the first phase; then using those findings, the researcher col-
lects the other data, quantitative, in the second phase.
Typically, the first phase concentrates on a thorough investigation of a minimally researched phenomenon, and the second
phase is focused on sorting data to use for further investiga-
tion. While using a two-step process may provide the
researcher with more data, it can also be time-consuming.
Conclusion
In summary, just like tea and coffee, research has simi -
larities and differences. In the simplest of terms, it can be
viewed as measuring (how many scoops of coffee
grounds?) compared to perception and experience (how
long to steep the tea?). Quantitative research can range
from experiments with a control group to studies looking
at retrospective data and suggesting causal relationships.
Qualitative research is conducted in the presence of limit-
ed research on a particular topic, and descriptive narra-
tives have the potential to provide detailed information
regarding this particular area. Mixed methods research involves combining the quantitative and qualitative threads, converting and integrating the data, and using those results to make meta-inferences about the research question.
Research is hard work, but just like sipping coffee or tea at
the end of a long day, it is rewarding and satisfying.
References

Choy, L.T. (2014). The strengths and weaknesses of research methodology: Comparison and complimentary between qualitative and quantitative approaches. Journal of Humanities and Social Science, 19(4), 99-104.
Polit, D.F., & Beck, C.T. (2012). Nursing research: Generating and assessing evidence for nursing practice (9th ed.). Philadelphia, PA: Wolters Kluwer.
Polit, D.F., & Beck, C.T. (2014). Essentials of nursing research: Appraising evidence for nursing practice (8th ed.). Philadelphia, PA: Wolters Kluwer.
Tashakkori, A., & Creswell, J.W. (2007). The new era of mixed methods. Journal of Mixed Methods Research, 1(1), 3-7.
EVIDENCE BASED PRACTICE
MAIN LESSON OVERVIEW A
This module takes a closer look at the two basic types of
research
introduced in the previous module: Qualitative Research and
Quantitative Research. The module also begins the process of
determining whether a study is well designed and implemented,
and whether the findings are solid enough to merit incorporation
into practice.
A: Example of a Good Study
The researchers want to see if a new drug for high blood
pressure
works to keep the blood pressure (BP) at acceptable low
levels. The research design would require a random assignment
of
the study participants into either an experimental group or a
control group; the subjects would have their BPs measured in
advance; and the experimental group would receive the
medication, followed by additional measures of BP for both
groups. The [valid] research question would be: does the drug
cause a reduction in blood pressure that would not be seen in a
group without the drug? The sample size would be 50 in each
group, and subjects would all be matched on age, weight,
history
of blood pressure spikes, and the absence of other medical problems
(control
for error). The data would be interval-scaled, allowing
parametric
statistics to be used. The research question asks if there would
be
significant differences in BP measures between the two groups
due to the new drug. The statistical test would thus be an analysis of variance (ANOVA), since it is the highest level of
statistical analysis that fits the research question, the data type,
and the expected causal relationship between the drug and
patient
BP levels.
The following explains how the above is an example of a good research study that satisfactorily meets four key questions.
• Does the study address a valid clinical question? Yes –
studying a drug to see if its use results in acceptable BP levels
is a valid clinical question.
• Do the study group participants replicate the overall
population
under study? Yes – the study participants are selected
according to a number of applicable factors.
• Does the study randomly assign participants into the
experimental or control groups? Yes – once qualified, the
participants are randomly assigned to the experimental and
control groups.
• Does the study’s structure aim at validly applicable statistics?
Yes – the study's data and design support a valid statistical analysis (ANOVA).
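A brief sketch of the analysis this example calls for, using hypothetical blood pressure values for 50 subjects per group and scipy's one-way ANOVA (with only two groups this is equivalent to an independent-samples t-test):

```python
# Illustrative one-way ANOVA on hypothetical post-treatment blood pressures.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
bp_control = rng.normal(loc=150.0, scale=10.0, size=50)         # control group (no drug)
bp_experimental = rng.normal(loc=138.0, scale=10.0, size=50)    # experimental group (new drug)

f_stat, p_value = stats.f_oneway(bp_control, bp_experimental)
print(f"ANOVA: F = {f_stat:.2f}, P = {p_value:.4g}")
```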
MAIN LESSON OVERVIEW B
Basically, the way to determine if a study is “bad” is to measure
it
against the criteria of a "good" study. This sounds simple, but oftentimes studies are labeled as good or valid when they really do not measure up. It is the role of reviewers to make sure all of the
“i’s”
are dotted and the “t’s” are crossed. Decisions based on
unsound
studies can have potentially large negative financial
implications,
and can result in injuries and death.
B: Example of a Bad Study
Bad—Same research question, but the researchers do not screen
their subjects for weight, age, and other medical conditions.
They
do not use a control group, so they can’t tell if any changes in
BP
after the drug is given are due to the drug, and not to other
possible factors. They use a single sample of 5 people, which is
not big enough to tell anything. Instead of actual BP measures,
they just list the results as “up” or “down,” but try to use an
ANOVA
(which cannot be used with categorical data such as up or
down).
The statistic does not fit the data or the sample, and all results
are
questionable due to bad design and poor statistical selection.
The following explains why the above is an example of a Bad
Research Study.
1. Does the study address a valid clinical question? Yes –
studying a drug to see if its use results in acceptable BP levels
is a valid clinical question.
2. Do the study group participants replicate the overall
population
under study? NO – the study group participants are not
screened to a valid set of criteria.
3. Does the study randomly assign participants into the
experimental or control groups? NO – the study does not use a
control group.
4. Does the study’s structure aim at validly applicable
statistics?
NO – the study does not result in valid statistical data.
Therefore, the study is considered bad since three of four
markers
of a valid, or good, study are missing (questions 2, 3, and 4).
Summary of Lesson
The purpose of this tutorial was to familiarize you with the
basic
concepts in the methods, measurement, and the review of
healthcare
evidence-based research studies. If structured and conducted
correctly, evidence-based research studies can lead to evidence-
based practice, and consequently, more effective healthcare for
the
patient.
Main Module 1 Readings
1. Introduction
The term evidence-based practice has become pervasive in the
health care industry. This concept signals a major shift in how
health
care providers deliver their services, and in how health care
institutions function. It is no longer acceptable just to perform
in a
given way because “that’s how it’s always been done.”
Research to promote evidence-based practice is becoming more
and
more a part of the regular work of health care leaders. However,
as
with any research, it is important to be able to tell the
difference
between good, solid research, and flawed research with
questionable
conclusions. Since changing practice can be difficult at best, it
is
essential that changes be grounded in solid evidence.
2. Evidence-Based Practice
What is evidence-based practice? This approach can be defined
as
the continuous use of current, best evidence-based research in
decisions regarding patient care. Such research involves having
a
clinical question that needs addressing; the search for
information
and critical appraisal of that information as it relates to the
clinical
question; integration of the question’s basic
concepts/components
with existing clinical expertise; and understanding the projected
impacts any change can have on patients.
This approach also requires the review and integration of the
results
of more than one study into the critical appraisal, so that the
reliability
and generalizability of the studies’ results are stronger than any
one
study can be.
3. Types of Research
A health care facility leader deals with two major types of
research:
quantitative and qualitative.
When addressing clinical issues, research is traditionally
performed
using a quantitative design. This may involve timed studies,
where
the experimental variables are measured at different points in
time on
the same study sample; it may also involve comparison of an
experimental group against a control group; or it may involve
the
impacts of several independent variables on a single dependent
variable.
There are many different types of quantitative studies, but they
all
require the following: the appropriate selection of a random
sample of
subjects that replicates the overall population under study; the
use of
a statistical analysis appropriate to the design; and a design that
effectively controls the variables under study.
4. Qualitative Research
The other type of standard research is the qualitative study.
Qualitative studies tend to focus on the experiences of subjects
and
on gaining a stronger understanding of those experiences.
Researchers observe subjects in a given setting, watch for
behavioral
themes, and develop formative and summative observations and
conclusions. An example of qualitative research would be a case
study, where a particular patient, process, or event is studied
and
analyzed, and logical conclusions drawn from the data gathered.
Another example could be a root-cause analysis for determining
flaws
and error causes in a system.
A significant amount of exploratory research is done on a
qualitative
basis. Qualitative studies are often followed up by a range of
quantitative studies to derive specific answers to more narrowly
focused research questions.
5. Research Findings
How are evidence-based research findings used in a health care
setting? One of the most powerful ways to use research is as a
foundation toward improving practice outcomes. Hence, one
finds the
concept of evidence-based practice; that is, practice based on
evidence research. For example, a physician may be using a
standard mix of medications to control infections. However,
current
research results indicate that a particular single medication is
more
effective at reducing infections than the mix traditionally used.
Since a major goal of health care leaders is to reduce hospital-
acquired infections, it would be important to disseminate this
information to physicians, with the goal of changing their
practice to
achieve better outcomes.
Another example is the research on causes of stomach ulcers.
Traditionally and historically, the assumption was that high
levels of
stomach acid eroded the stomach lining, producing bleeding
ulcers.
The treatments at the time included medications to dilute
stomach
acids, dietary changes such as drinking more milk to coat the
stomach lining, and lifestyle changes to reduce the stress that was presumed to cause increased acid production. People were shocked to see
research
revealing H. pylori, a bacterium in the stomachs of ulcer
sufferers, as
the real cause of ulcers, and that the treatment was a course of
antibiotics.
6. Health Care Leader Plan

What factors would a health care leader incorporate in a plan to make the process of evidence-based research leading to improved medical care successful? It is critical to base change on valid, reliable research findings. The strongest findings typically come from studies that have been replicated by subsequent researchers. There are examples across the scientific world of seemingly earth-shaking results from a single study that could not be reproduced by other researchers. One example is the cold fusion debacle, in which researchers claimed to be able to make nuclear fusion occur at room temperature. The initial results rocked the world of physics and many researchers rushed to duplicate the study; however, not a single duplication attempt was successful. Failure to replicate findings means that, at best, they are not generalizable outside the study sample and, at worst, the research methodology was flawed in some way.

Other issues to consider include the alignment between the research question and the study design; the type of data collected and its impact on the statistics produced; and the ability, or inability, to control the variables within the study as well as extraneous variables that may affect the results.
7. Implications of a Successful Practice

Finally, what are the implications of successfully implementing evidence-based practice changes in the health care environment? One of the fundamental considerations for the leader is the financial implication of the change. Some changes are financially favorable, as when a generic antibiotic is shown to be as effective as a brand-name antibiotic but 70% cheaper. Other practice changes are more expensive, as when drug-eluting (drug-coated) heart stents first appeared for patients with coronary artery disease and the cost of the procedure rose by thousands of dollars compared with the non-coated stents.

Considering the organization’s stakeholders when making a change can have a powerful effect on the viability and smoothness of the change. Key stakeholders such as physicians, high-level staff, or even outside vendors can make a change effort difficult, or even unsuccessful, if they align to resist the change. The experienced change agent must understand the “political” climate and the stakeholders thoroughly before initiating a change process.

Finally, change theory repeatedly demonstrates that change is most difficult when the people affected by it are satisfied with the status quo. One thing new, valid research can demonstrate is better outcomes than the status quo can achieve, which helps create a readiness to change that facilitates the entire process. But be prepared; logic does not always ensure an easy transition.
8. Conclusion

The implementation of change precipitated by findings from evidence-based practice studies is an increasing responsibility of the health care leadership role. Measures of the quality of patient care are becoming publicly available through the Centers for Medicare & Medicaid Services (CMS) and its national Web site. Consequently, the general public is now able to see how different health care organizations perform on national indicators of quality.

In order to meet the thresholds required by payers, changes in practice must be implemented, driven by sound research. Financial reimbursement may be tied to successful changes in practice, especially where patient outcomes improve. Therefore, a key element of successful implementation of practice change is that it be based on valid research.

A critical skill set for the health care leader to develop is the ability to distinguish excellent research studies from those that contain errors affecting the validity of the results. As we continue in the course, you will learn techniques for assessing research studies. You will also explore issues that can complicate the implementation of changes in practice.
Keys to Assess a Research Study (Panel)

A number of elements make up valid research. The following are the key questions a health care administrator or manager should ask about the basic makeup of a research study in order to judge its validity.
KEY ELEMENTS

1. Literature Review:
a. Is the research question appropriately derived from the literature review?
b. Does it make sense to ask the question after reading the review of the literature?
c. Is the research question clearly stated, and are its variables included in it?

2. Variables:
a. What are the independent and dependent variables?
b. Do they make sense from the perspective of the research question?
c. Is the measurement of each variable the correct type of data needed by the selected statistic?

3. Research Design:
a. Is the sample randomly selected from the population, or are subjects picked on the basis of a criterion?
b. What is the sample size?
c. What errors could occur in the design, and are they controlled for?
d. Does the study have validity; that is, does it answer the research question about the expected relationship between the independent variable and the dependent variable?
e. Does the design call for an experimental group and a control group?

4. Statistic:
a. Is the statistic the correct one based on the research question, the expected relationship between the independent variable and the dependent variable, and the type of data taken from subjects?
b. Is the statistic significant at the accepted level?
More Information (Alphabetical Order)

ANOVA

An analysis of variance (ANOVA) is a statistical test that compares the mean of an outcome across three or more groups by partitioning the observed variance into between-group and within-group components; a significant F statistic indicates that at least one group mean differs from the others. When reviewing a study, a key question to ask is whether the study's structure supports the statistic chosen. An ANOVA, for example, is appropriate only when the design actually produces several comparable groups measured on the same dependent variable.
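As a concrete, hedged illustration of the definition above (not from the original text), the short Python sketch below runs a one-way ANOVA across three invented treatment groups using scipy's f_oneway function.

# Illustrative sketch only: one-way ANOVA comparing mean outcomes across
# three hypothetical groups. All values are invented.
from scipy import stats

group_a = [12.1, 11.8, 12.5, 12.0, 11.6]
group_b = [13.4, 13.9, 13.1, 14.0, 13.6]
group_c = [12.2, 12.6, 11.9, 12.4, 12.1]

f_statistic, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_statistic:.2f}, p = {p_value:.4f}")
# A significant F statistic says only that at least one group mean differs;
# post hoc comparisons are needed to identify which groups differ.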
Experimental Group or Control Group

In quantitative studies, the most common design uses an experimental group and a control group. The experimental group is exposed to the independent variable, while the control group is not; both groups are then measured on the dependent variable.

Before deciding upon the experimental group on which to base the study, the variables being studied need to be defined. Variables represent the relationship that is tested by the research design. In a quantitative study, this requires at least one independent variable and one dependent variable, although more complex studies may include more than one of each. Independent variables are those assumed to have some impact or influence on the dependent, or outcome, variable. For example, in a study that examines intent to remain in one's job (the dependent variable), the quality of one's relationship with the manager is one possible independent variable that can influence the decision to stay. The dependent variable is always the expected outcome measure, and the independent variables are the ones theorized to have an impact on changes in the dependent variable. It is important to note whether anything in the literature review explains why the researchers chose the particular independent and dependent variables that they did. In a well-designed study, the rationale for variable selection will be obvious from the review of past literature.
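The job-retention example above can be sketched as a simple analysis of one independent and one dependent variable. The Python snippet below is a hypothetical illustration only: the variable names and values are invented, a real study would involve more variables and a justified model, and it uses scipy's linregress to estimate the relationship.

# Illustrative sketch only: one independent variable (manager-relationship
# quality) and one dependent variable (intent to remain in the job).
from scipy import stats

manager_relationship = [2, 3, 4, 5, 6, 7, 8, 9]   # independent variable, 1-10 scale
intent_to_stay = [3, 4, 4, 6, 6, 7, 8, 9]         # dependent variable, 1-10 scale

result = stats.linregress(manager_relationship, intent_to_stay)
print(f"slope = {result.slope:.2f}, r = {result.rvalue:.2f}, p = {result.pvalue:.4f}")
# A positive, significant slope is consistent with the theorized influence
# of the independent variable on the dependent (outcome) variable.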
Main Lesson Overview A

This module takes a closer look at the two basic types of research introduced in the previous module: qualitative research and quantitative research. The module also begins the process of determining whether a study is well designed and implemented, and whether the findings are solid enough to merit incorporation into practice.

Qualitative research begins with identifying a broad topic to be explored and studied, rather than a narrowly designed research question; it does not use a research question as such, or a research hypothesis. Because it focuses on identifying and studying broad concepts, it involves the organization and interpretation of non-numeric data for the purpose of discovering important patterns or relationships. It may or may not involve a literature review, and it does not use formal random sampling, statistical data analysis, or statistical interpretation. That noted, qualitative research plays an important role in the activities of health care leaders. The accompanying panel lists the basic components of a qualitative study.
Main Lesson Overview B

Basically, the way to determine whether a study is bad is to measure it against the criteria of a good study. This sounds simple, but studies are often labeled as good or valid when they really do not measure up. It is the role of reviewers to make sure all of the i's are dotted and the t's are crossed. Decisions based on unsound studies can have large negative financial implications and can result in injuries and deaths.
Random Assignment

Random assignment places study participants into either the experimental group or the control group by chance alone. When reviewing a study, ask whether participants were randomly assigned to groups or instead matched on key variables, and note that random assignment to groups is distinct from random selection of the sample from the population.
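A minimal sketch of random assignment, assuming an already-enrolled list of participants (the IDs below are invented and not from the module):

# Illustrative sketch only: assigning enrolled participants to groups by
# chance rather than by matching on key variables.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical subjects
random.shuffle(participants)                         # chance decides group membership
midpoint = len(participants) // 2
experimental_group = participants[:midpoint]
control_group = participants[midpoint:]
print("Experimental:", experimental_group)
print("Control:", control_group)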
Statistical Analysis

The analysis of the collected data, the statistics, is an essential component of the research design. The choice of statistics is affected by the types of data collected, the expected relationships between the independent and dependent variables, and the format of the research question.
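How the type of data steers the choice of statistic can be shown with a small, invented example (not from the module): when both variables are categorical, such as medication regimen versus infection yes/no, a chi-square test of independence is a common choice, whereas continuous outcomes usually call for t-tests, ANOVA, or regression.

# Illustrative sketch only: chi-square test of independence for two
# categorical variables. The counts below are invented.
from scipy import stats

#                 infection   no infection
contingency = [[12, 88],    # standard medication mix
               [5, 95]]     # single new medication

chi2, p_value, dof, expected = stats.chi2_contingency(contingency)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")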
The discussion section is where the researchers pull together the entire study, discuss their findings, and tie the results back to the research question. The discussion focuses on the statistics that resulted from the study. Do they reveal a significant relationship between the variables? That is, did the independent variables affect the dependent variable in the way the researchers had anticipated? When reviewing a study, it is important to compare the results and statistics reported in the discussion section with the initial review of the literature and the research question.
Valid Research

Study validity concerns whether the independent variable is really having an effect on the dependent variable, as opposed to the result being produced by variables extraneous to the study.

Are the instruments used to measure the variables valid and reliable? This can be determined by looking at the way the instruments were originally built and tested. The researchers should use instruments that have been applied to the type of study in question and put through a series of analyses confirming the instruments' validity and reliability. Validity demonstrates that an instrument measures the abstract concept it is supposed to measure. For example, the Beck Depression Inventory has been shown to measure the intensity of depression accurately in multiple studies involving thousands of patients; consequently, it is said to have validity.
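The reliability half of that question is often summarized with an internal-consistency coefficient such as Cronbach's alpha. The sketch below is purely illustrative (the response matrix is invented, and the module does not prescribe this method); it computes alpha directly with numpy and says nothing about validity, which must be established separately.

# Illustrative sketch only: Cronbach's alpha as one indicator of an
# instrument's internal consistency (reliability). Rows are respondents,
# columns are questionnaire items; all values are invented.
import numpy as np

responses = np.array([
    [3, 4, 3, 5],
    [2, 2, 3, 2],
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 4, 5, 5],
])

n_items = responses.shape[1]
item_variances = responses.var(axis=0, ddof=1)
total_variance = responses.sum(axis=1).var(ddof=1)
alpha = (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")
# Values around 0.7 or higher are often read as acceptable, though
# thresholds vary by field and instrument purpose.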
Valid Set Criteria

Four of the most important factors in a valid research study are:
1. Does the study address a valid clinical question?
2. Do the study participants represent the overall population under study?
3. Does the study randomly assign participants to the experimental or control group?
4. Does the study's structure support validly applicable statistics?

In the final analysis, a study will fail to be valid if it does not contain these basic elements within its structure.
  • 1. 1 3 Knee Surg Sports Traumatol Arthrosc (2017) 25:2305–2308 DOI 10.1007/s00167-017-4582-y EDITORIAL While modern medicine evolves continuously, evidence‑ based research methodology remains: how register studies should be interpreted and appreciated Eleonor Svantesson2 · Eric Hamrin Senorski2 · Kurt P. Spindler3 · Olufemi R. Ayeni4 · Freddie H. Fu5 · Jón Karlsson1,2 · Kristian Samuelsson1,2 Published online: 13 June 2017 © European Society of Sports Traumatology, Knee Surgery, Arthroscopy (ESSKA) 2017 findings, it is more important than ever critically to evaluate the evidence that is presented and be aware of the limita- tions and pitfalls that we encounter every day as modern scientists and clinicians. Look! A significant result! One of the goals for researchers is to get their work pub- lished and acknowledged, preferably with multiple cita- tions. A winning tactic to accomplish this is to present novel results and findings. Interestingly, it often happens that the most cited papers are those that contradict other reports or are proved to be fundamentally wrong [14]. So,
  • 2. it does not really matter how likely a result is to be true or clinically valuable—a spectacular result can entrench the findings of a study and influence clinical practice. It goes without saying that the most important factor of all in this quest is that a significant P value is presented. Today, it is generally accepted that significance, often defined as a P value of <0.05, means impact and evidence. However, this is an incorrect appreciation of the P value and could lead to an inappropriate approach to this statistical method. It has been shown that P values and hypothesis-testing methods are commonly misunderstood by researchers [6, 11, 17] In just a few decades, the scientific stage has undergone some dramatic changes. Novel studies are produced at a “faster than ever” pace, and technological advances enable insights into areas that would previously have been referred to as science fiction. However, the purpose of research will always be the same—to serve as a firm foundation to prac- tise evidence-based medicine and ultimately improve the treatment of our patients. Is the explosive evolvement of research publications and technological advances always beneficial when it comes to fulfilling this purpose? As we are served with a steady stream of new “significant” * Kristian Samuelsson [email protected] 1 Department of Orthopaedics, Sahlgrenska University Hospital, Mölndal, Sweden 2 Department of Orthopaedics, Institute of Clinical Sciences, The Sahlgrenska Academy, University of Gothenburg, 431 80 Gothenburg, Sweden 3 Cleveland Clinic Sports Health Center, Garfield Heights, OH, USA
  • 3. 4 Division of Orthopaedic Surgery, Department of Surgery, McMaster University, Hamilton, ON, Canada 5 Department of Orthopedic Surgery, University of Pittsburgh, Pittsburgh, PA, USA http://crossmark.crossref.org/dialog/?doi=10.1007/s00167-017- 4582-y&domain=pdf 2306 Knee Surg Sports Traumatol Arthrosc (2017) 25:2305– 2308 1 3 and instead tend to lead to a limited perspective in relation to a study result. Sir Ronald Fisher is regarded as one of the founders of modern statistics and is probably most associated with the concept of the P value [10, 23]. Fisher suggested that the P value reflected the probability that the result being observed was compatible with the null hypothesis. In other words, if it were true that there was no (null) difference between the factors being investigated, the P value would give an estimation of the likelihood of observing a difference as extreme as or more extreme than your outcome showed. However, Fisher never propagated the P < 0.05 criterion that is currently almost glorified as our ultimate means of conclusion making. On the contrary, Fisher appeared not to give much consideration to the actual P value number [19]. The most important thing, according to Fisher, was to repeat the experiments until the investigator felt that he or she had a plausible certainty of declaring how the experi- ment should be performed and interpreted, something that is infrequently implemented nowadays. The P value was
  • 4. originally an indicative tool throughout this process, not something synonymous with evidence. In a study recently published in JAMA cardiology [19], common misconceptions about P values were discussed. It was emphasised that, at best, the P value plays a minor role in defining the scientific or clinical importance of a study and that multiple elements, including effect size, precision of estimate of effect size and knowledge of prior relevant research, need to be integrated in the assessment [19]. This is strongly inconsistent with the concept of a P value of <0.05 as an indicator of a clinically or scientifically impor- tant difference. Moreover, the authors highlight the miscon- ception that a small P value indicates reliable and replicable results by stating that what works in medicine is a process and not the product of a single experiment. No information about a given study regarding reproducibility can be made based on the P value, nor can the reliability be determined without considering other factors [19]. One frequently for - gotten factor is how plausible the hypothesis was in the first place. It is easy to fall into the trap of thinking that a P value of <0.05 means that there is a 95% chance of true effect. However, as probability is always based on certain conditions, the most important question should be: what was the probability from the beginning? If the chance of a real effect from the beginning is small, a significant P value will only slightly increase the chances of a true effect. Or, as Regina Nuzzo put it in an article highlighting statistical errors in Nature [21]: “The more implausible the hypothe- sis—telepathy, aliens, homeopathy—the greater the chance that an exciting finding is a false alarm, no matter what the P value is” [21]. Moreover, the P value says nothing about the effect size. The P value is basically a calculation of two factors—the
  • 5. difference from null and the variance. In a study with a small standard deviation (high precision), even a very small difference from zero (treatment effect) can therefore result in a significant P value. How frequently do we ask ourselves: “From what numbers was this P value gener- ated?” when reading a paper. It is not until we look at the effect size that it is really possible to determine whether the treatment of interest has an impact. Well then, what is the definition of impact? A term often used to describe the effectiveness of a treatment is the “minimum clini- cally important difference” (MCID). For a study to impact clinical decision-making, the measurement given must be greater than the MCID and, moreover, the absolute differ- ence needs to be known. These factors determine the num- ber needed to treat and thereby indicate the impact. How - ever, current methods for determining MCID are subject of debate and it has been concluded that they are associated with shortcomings [9]. We should also remember that non-significant P values are sometimes used to conclude the interventions of interest as “equivalence” or “non-inferiority”, which is extremely incorrect if the primary study design was not intended to investigate equivalence between two treatments [18]. With- out primarily designing the study for this purpose, it is impossible to ascertain power for detecting the ideal clini - cally relevant difference that is needed for a declaration of equivalence. It can, in fact, have detrimental downstream effects on patient care if a true suboptimal treatment is declared as being non-inferior to a gold-standard treatment [12]. Instead, let us accept the fact that not all studies will show significant results, nor should they. There has been a bias against “negative trials”, not showing significance, in the past and because of this we can only speculate about whether or not they could have impacted any of today’s knowledge. If the acceptance of non-significant results
  • 6. increases, this could contribute to the elimination of pub- lication bias. The impact of study design Regardless of study design, the optimal research study should give an estimate of the effectiveness of one treat- ment over another, with a minimised risk of systematic bias. The ability and validity of doing this for observa- tional studies compared with randomised controlled tri - als (RCTs) has been the subject of an ongoing debate for decades. To determine the efficacy of a treatment or intervention (i.e. the extent to which a beneficial result is produced under ideal conditions), the RCTs remain the gold standard and are regarded as the most suitable tool for making the most precise estimates of treatment effect [22]. The only more highly valued study design is 2307Knee Surg Sports Traumatol Arthrosc (2017) 25:2305– 2308 1 3 the meta-analysis of large, well-conducted RCTs. Stud- ies with an observational design are often conducted when determining the effectiveness of an intervention in “real-world” scenarios (i.e. the extent to which an inter- vention produces an outcome under normal day-to-day circumstances). A Cochrane Review published in 2014 [3] examined fourteen methodological reviews compar- ing quantitative effect size estimates measuring the effi- cacy or effectiveness of interventions tested in RCTs with those tested in observational studies. Eleven (79%) of the examined reviews showed no significant difference
  • 7. between observational studies and RCTs. Two reviews concluded that observational studies had smaller effects of interest, while one suggested the exact opposite. Moreover, the review underscored the importance of con- sidering the heterogeneity of meta-analyses of RCTs or observational studies, in addition to focusing on the study design, as these factors influence the estimates reflective of true effectiveness [3]. We must never take away the power and the validity of a well-conducted RCT. However, we need to under- line the fact that evidence-based medicine is at risk if we focus myopically on the RCT study design and give it the false credibility of being able to answer all our questions. We must also acknowledge the weaknesses of RCTs and combine information obtained from this study design, while recognising the value of additional information from prospective longitudinal cohort studies. The Fragil - ity Index (FI) is a method for determining the robustness of statistically significant findings in RCTs, and it was recently applied to 48 clinical trials related to sports med- icine and arthroscopic surgery [16]. The FI represents the minimum number of patients in one arm of an RCT that is required to change the outcome, from a non-event to an event, in order to change a result from statistically sig- nificant to non-significant. So, the lower the number is, the more fragile the significant result. The systematic sur- vey somewhat worryingly showed that the median FI of included studies was 2 [16]. Could it be that we are cur- rently concluding evidence based on the outcome of two single patients in some orthopaedic sports medicine stud- ies? The FI should be an indicative tool in future clinical studies which, in combination with other statistical sum- maries from a study, could identify results that should be interpreted cautiously or require further investigation [5].
  • 8. Ultimately, the foundation of science is the ability to generalise the results of a study. The factors that affect the risk of an event or an outcome in a real-life situation are a result of the natural individual variation surround- ing us. It is therefore somewhat paradoxical in RCTs to distribute risk factors evenly and eliminate all the fac- tors that may interact with the intervention. We should remember that, when drawing conclusions from a RCT, this is based on many occasions on data obtained from highly specialised centres in one part of the world. The population is enrolled based on strict inclusion and exclu- sion criteria, which should always trigger the questions of “how many individuals failed to meet them?” and “could their participation have made any difference to the result?” Moreover, RCTs have also been criticised for not representing usual care, which may in fact be the case at a highly specialised centre for sports medicine [1]. High‑ quality observational studies—an asset in evidence‑ based medicine In addition to the generalisability of the results, large observational studies originating from large registers offer advantages in terms of identifying incidences, understanding practices and determining the long-term effects of different types of exposure/intervention. In par - ticular, adverse events can be identified and rare outcomes can be found [13]. Well-conducted, large cohort stud- ies are regarded as the highest level of evidence among observational studies, as the temporality of events can be established. To put it another way, the cause of an event always precedes the effect [13]. The SPORT trial spine, MOON ACLR and MARS revision ACLR are examples of prospective longitudinal cohorts based on STROBE criteria [24] and multivariate modelling, where almost
  • 9. 100% of patients are enrolled. Further, they address the modelling of who will respond to an intervention. While an RCT determines an average of who the intervention will benefit, registers like these determine the individual patient to whom the treatment should be applied, as they model the multitude of risk factors a patient presents to clinicians. On the other hand, observational studies are limited by indication bias and are subject to the potential effect of unmeasured confounders. The random variation, the confounding factors, must of course be reconciled with existing knowledge in observational studies. The more individual variation, the more the precision of what we are measuring is affected. There is variation in biologi - cal responses, in previous therapies, in activity levels and types of activity and variation in lifestyles, to men- tion just a few. However, we would like to underline the importance of seeing these factors as an opportunity in observational studies. An opportunity to acquire a greater knowledge of the relationship between factors of vari - ance and the outcome, as well as possible underlying mechanisms. Using statistical approaches that adjust for confounders makes a good analysis possible [7, 20]. In order to improve the precision of results from the many registers being established around the world, we 2308 Knee Surg Sports Traumatol Arthrosc (2017) 25:2305– 2308 1 3 must more clearly define and investigate the true con-
  • 10. founders. With regard to anterior cruciate ligament (ACL) reconstruction, data from several large registers have ena- bled the valuable identification of predictors of outcome. However, there is as yet no existing predictive model for multivariate analysis where confounders are taken into account, which could potentially jeopardise the validity [2]. ACL reconstruction is one of the most researched areas in orthopaedic medicine, and this is therefore noteworthy because the lack of consensus in determin- ing the factors that need to be included in a model may alter the results of studies investigating the same condi - tion. Another key point for high-level register studies is that the representativeness of the cohort is ensured. When registers are being established, comprehensive data entry is needed and it is important that the investigators take responsibility for monitoring the enrolment and attrition of the cohort. As researchers and clinicians, we sometimes need to take a step back and evaluate how we can continue to implement evidence-based medicine. We must understand that many factors contribute to what we choose to call evidence and that there is no single way to find it. Moreover, scientists before us recognised that only by repeated experiments is it possible to establish the reproducibility of the investiga- tion and thereby get closer to the truth about the efficacy of our treatments. Instead of naively believing that significant results (P < 0.05) from any study are synonymous with evidence, we can take advantage of the strengths of differ - ent study designs. We should remember that many studies have found that the results of observational and RCT stud- ies correlate well [4, 8, 15]. We encourage the performance of the best research whenever possible. Sometimes this is a well-conducted RCT or highly controlled prospective longitudinal cohorts, and at other times it is the establish- ment of large patient registers. With regard to the obser-
  • 11. vational study design, future comprehensive prospective cohort studies can provide us with important knowledge and be significant contributors to evidence-based medicine. Nevertheless, the P value is a tool that can be helpful, but it must be applied thoughtfully and while appreciating its limitations and assumptions. Evidence-based medicine as defined by the original practitioners involves making a clin- ical decision by combining clinical experience and the best available evidence from RCTs and registers and by incor - porating the patient’s values and preferences. References 1. Albert RK (2013) “Lies, damned lies…” and observational stud- ies in comparative effectiveness research. Am J Respir Crit Care Med 187(11):1173–1177 2. An VV, Scholes C, Mhaskar VA, Hadden W, Parker D (2016) Limitations in predicting outcome following primary ACL reconstruction with single-bundle hamstring autograft—A sys- tematic review. Knee 24(2):170–178 3. Anglemyer A, Horvath HT, Bero L (2014) Healthcare outcomes assessed with observational study designs compared with those assessed in randomized trials. Cochrane Database Syst Rev 4:Mr000034 4. Benson K, Hartz AJ (2000) A comparison of observational studies and randomized, controlled trials. N Engl J Med 342(25):1878–1886 5. Carter RE, McKie PM, Storlie CB (2017) The Fragility
  • 12. Index: a P-value in sheep’s clothing? Eur Heart J 38(5):346–348 6. Cohen HW (2011) P values: use and misuse in medical litera- ture. Am J Hypertens 24(1):18–23 7. Concato J (2012) Is it time for medicine-based evidence? JAMA 307(15):1641–1643 8. Concato J, Shah N, Horwitz RI (2000) Randomized, controlled trials, observational studies, and the hierarchy of research designs. N Engl J Med 342(25):1887–1892 9. Copay AG, Subach BR, Glassman SD, Polly DW Jr, Schuler TC (2007) Understanding the minimum clinically important differ - ence: a review of concepts and methods. Spine J 7(5):541–546 10. Fisher R (1973) Statistical methods and scientific inference, 3rd edn. Hafner Publishing Company, New York 11. Goodman S (2008) A dirty dozen: twelve p-value misconcep- tions. Semin Hematol 45(3):135–140 12. Greene WL, Concato J, Feinstein AR (2000) Claims of equiva- lence in medical research: are they supported by the evidence? Ann Intern Med 132(9):715–722 13. Inacio MC, Paxton EW, Dillon MT (2016) Understanding ortho-
13. Inacio MC, Paxton EW, Dillon MT (2016) Understanding orthopaedic registry studies: a comparison with clinical studies. J Bone Joint Surg Am 98(1):e3
14. Ioannidis JP (2005) Contradicted and initially stronger effects in highly cited clinical research. JAMA 294(2):218–228
15. Ioannidis JP, Haidich AB, Pappa M, Pantazis N, Kokori SI, Tektonidou MG, Contopoulos-Ioannidis DG, Lau J (2001) Comparison of evidence of treatment effects in randomized and nonrandomized studies. JAMA 286(7):821–830
16. Khan M, Evaniew N, Gichuru M, Habib A, Ayeni OR, Bedi A, Walsh M, Devereaux PJ, Bhandari M (2016) The fragility of statistically significant findings from randomized trials in sports surgery. Am J Sports Med. doi:10.1177/0363546516674469
17. Kyriacou DN (2016) The enduring evolution of the P value. JAMA 315(11):1113–1115
18. Lowe WR (2016) Editorial Commentary: "There, it fits!" Justifying nonsignificant P values. Arthroscopy 32(11):2318–2321
19. Mark DB, Lee KL, Harrell FE Jr (2016) Understanding the role of P values and hypothesis tests in clinical research. JAMA Cardiol 1(9):1048–1054
20. Methodology Committee of the Patient-Centered Outcomes Research Institute (PCORI) (2012) Methodological standards and patient-centeredness in comparative effectiveness research: the PCORI perspective. JAMA 307(15):1636–1640
21. Nuzzo R (2014) Statistical errors: P values, the 'golden standard' of statistical validity, are not as reliable as many scientists assume. Nature 508:150–152
22. Rosenberg W, Donald A (1995) Evidence based medicine: an approach to clinical problem-solving. BMJ 310(6987):1122–1126
23. Salsburg D (2002) The lady tasting tea. Holt Paperbacks, New York
24. von Elm E, Altman DG, Egger M, Pocock SJ, Gotzsche PC, Vandenbroucke JP (2007) The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Lancet 370(9596):1453–1457

Exploring the Evidence: Quantitative and Qualitative Research (Nephrology Nursing Journal, March-April 2018, Vol. 45, No. 2)
Focusing on the Fundamentals: A Simplistic Differentiation Between Qualitative and Quantitative Research
Shannon Rutberg, Christina D. Bouikidis

This article describes qualitative, quantitative, and mixed methods research. Various classifications of each research design, including specific categories within each research method, are explored. Attributes and differentiating characteristics, such as formulating research questions and identifying a research problem, are examined, and various research method designs and reasons to select one method over another for a research project are discussed. Key words: qualitative research, quantitative research, mixed methods research, method design.

Research is categorized as quantitative or qualitative in nature. Quantitative research employs the use of numbers and accuracy, while qualitative research focuses on lived experiences and human perceptions (Polit & Beck, 2012). Research itself has a few varieties that can be explained using analogies of making a cup of coffee or tea.

To make coffee, the amount of water and coffee grounds to be used must be measured. This precise measurement determines the amount of coffee and the strength of the brew. The key word in this quantitative research analogy is measure. To make tea, hot water must be poured over a tea bag in a mug. The length of time a person leaves a tea bag in the mug comes down to perception of the strength of the tea desired. The key word in qualitative research is perception. This article describes and explores the differences between quantitative (measure) and qualitative (perception) research.

Types of Research
Nursing research can be defined as a "systematic inquiry designed to develop trustworthy evidence about issues of importance to the nursing profession, including nursing practice, education, administration, and informatics" (Polit & Beck, 2012, p. 736). Researchers determine
the type of research to employ based upon the research question being investigated. The two types of research methods are quantitative and qualitative. Quantitative research uses a rigorous and controlled design to examine phenomena using precise measurement (Polit & Beck, 2012). For example, a quantitative study may investigate a patient's heart rate before and after consuming a caffeinated beverage, like a specific brand/type of coffee. In our coffee and tea analogy, in a quantitative study, the research participant may be asked to drink a 12-ounce cup of coffee, and after the participant consumes the coffee, the researcher measures the participant's heart rate in beats per minute. Qualitative research examines phenomena using an in-depth, holistic approach and a fluid
research design that produces rich, telling narratives (Polit & Beck, 2012). An example of a qualitative study is exploring the participant's preference for coffee over tea, and the feelings or mood one experiences after drinking this favorite hot beverage.

Quantitative Research
Quantitative research can range from clinical trials for new treatments and medications to surveying nursing staff and patients. There are many reasons for selecting a quantitative research study design. For example, one may choose quantitative research if a lack of research exists on a particular topic, if there are unanswered research questions, or if the research topic under consideration could
make a meaningful impact on patient care (Polit & Beck, 2012). There are several different types of quantitative research. Some of the most commonly employed quantitative designs include experimental, quasi-experimental, and non-experimental.

Experimental Design
An experimental design isolates the identified phenomena in a laboratory and controls conditions under which the experiment occurs (Polit & Beck, 2012). There is a control group and at least one experimental group in this design. The most reliable studies use a randomization process for group assignment wherein the control group receives a placebo (an intervention that does not have therapeutic significance) and the experimental group receives an intervention (Polit & Beck, 2012). For example, if one is studying the effects of caffeine on heart rate 15 minutes after consuming coffee, using a quantitative experimental design, the design may be set up similarly to the description in Table 1. Randomization will allow an equal chance for each participant to be assigned to either the control or the experimental group. Then the heart rate is measured before and after the intervention. The intervention is drinking decaffeinated coffee for the control group and drinking caffeinated coffee for the experimental group. Data collected (heart rate pre- and post-coffee consumption) are then analyzed, and conclusions are drawn.

Quasi-Experimental Design
Quasi-experimental designs include an intervention in the design; however, these designs do not always include a control group, which is a cornerstone of an authentic experimental design. This type of design does not have
randomization like the experimental design (Polit & Beck, 2012). Instead, there may be an intervention put into place with outcome measures pre- and post-intervention implementation, and a comparison used to identify whether the intervention made a difference. For example, perhaps a coffee chain store wants to see if sampling a new flavor of coffee, like hazelnut, will increase revenue over a one-month period. At location A, hazelnut coffee is distributed as a sample to customers in line waiting to purchase coffee. At location B, no samples are distributed to customers. Sales of hazelnut coffee are examined at both locations prior to the intervention (hazelnut sample given out to customers waiting in line) and then again one month later, after the intervention. Lastly, monthly revenue is compared at both sites to measure whether the free hazelnut coffee samples affected sales.

Non-Experimental Design
The final type of quantitative research discussed in this article is the non-experimental design. Manipulation of variables does not occur with this design, but an interest exists to observe the phenomena and identify whether a relationship exists (Polit & Beck, 2012). Perhaps someone is interested in whether drinking coffee throughout one's life decreases the incidence of having a stroke. Researchers for this type of study will ask participants to report approximately how much coffee they drank daily, and data would be compared to their stroke incidence. Researchers will analyze the data to determine whether a relationship exists between coffee consumption and stroke incidence by examining behavior that occurred in the past; because exposure is not manipulated, such a design can suggest an association but cannot by itself establish causation.
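As a purely illustrative sketch of how such a non-experimental comparison might be summarized (the counts below are invented, not taken from any study), one could tabulate stroke incidence for self-reported coffee drinkers and non-drinkers and compute a relative risk:

    # Hypothetical 2x2 summary: rows = exposure (coffee), columns = outcome (stroke yes / no)
    coffee_stroke, coffee_no_stroke = 40, 960        # 1,000 self-reported daily coffee drinkers
    nocoffee_stroke, nocoffee_no_stroke = 55, 945    # 1,000 non-drinkers

    risk_coffee = coffee_stroke / (coffee_stroke + coffee_no_stroke)
    risk_nocoffee = nocoffee_stroke / (nocoffee_stroke + nocoffee_no_stroke)
    relative_risk = risk_coffee / risk_nocoffee

    print(f"Risk (coffee): {risk_coffee:.3f}")
    print(f"Risk (no coffee): {risk_nocoffee:.3f}")
    print(f"Relative risk: {relative_risk:.2f}")  # below 1 is consistent with, but not proof of, a protective association

A relative risk below 1 in retrospective self-report data is only an observed association; confounders such as age, smoking, or diet could explain it, which is why this design cannot establish causation.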
Table 1. Comparison of Control and Experimental Groups
  Assignment: 10 participants randomly assigned to the control group; 10 participants randomly assigned to the experimental group.
  Pre-intervention data collection (both groups): heart rate prior to beverage consumption.
  Intervention: control group consumes a placebo drink (i.e., decaffeinated coffee); experimental group consumes the experimental drink (i.e., caffeinated coffee).
  Post-intervention data collection (both groups): obtain heart rate 15 minutes after the beverage was consumed.
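The following sketch shows how the randomized pre/post design in Table 1 might be analyzed once data are collected; the heart-rate values are simulated, and the use of an independent-samples t-test on the pre-to-post change scores is an illustrative choice rather than a procedure prescribed by the article:

    import random
    import numpy as np
    from scipy import stats

    random.seed(1)
    participants = list(range(20))
    random.shuffle(participants)                      # randomization step
    control, experimental = participants[:10], participants[10:]

    rng = np.random.default_rng(1)
    pre = rng.normal(loc=72, scale=6, size=20)        # simulated resting heart rates (bpm)
    post = pre + rng.normal(loc=0, scale=3, size=20)  # simulated post-beverage heart rates
    post[experimental] += 5                           # caffeine group drifts up by ~5 bpm on average

    change_control = post[control] - pre[control]
    change_experimental = post[experimental] - pre[experimental]

    t_stat, p_value = stats.ttest_ind(change_experimental, change_control)
    print(f"Mean change, control: {change_control.mean():.1f} bpm")
    print(f"Mean change, caffeine: {change_experimental.mean():.1f} bpm")
    print(f"P = {p_value:.3f}")

With only 10 participants per group, the test may or may not reach P < 0.05 even when a real effect exists, which is one reason sample size matters when judging a study's design.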
Quantitative Study Attributes
In quantitative studies, the researcher uses standardized questionnaires or experiments to collect numeric data. Quantitative research is conducted in a more structured environment that often allows the researcher to have control over study variables, environment, and research questions. Quantitative research may be used to determine relationships between variables and outcomes. Quantitative research involves the development of a hypothesis, a description of the anticipated result, relationship, or expected outcome from the question being researched (Polit & Beck, 2012). For example, in the experimental study mentioned in Table 1, one may hypothesize that the control group will not see an increase in heart rate. However, in the experimental group, one may hypothesize that an increase in heart rate will occur. Data collected (heart rate before and after coffee consumption) are analyzed, and conclusions are drawn.

Qualitative Research
According to Choy (2014), qualitative studies address the social aspect of research. The researcher uses open-ended questions and interviews subjects in a semi-structured fashion. Interviews often take place in the participant's natural setting or a quiet environment, like a conference room. Qualitative research methodology is often employed when the problem is not well understood and there is an existing desire to explore the problem thoroughly. Typically, a rich narrative from participant interviews is generated and then analyzed in qualitative research in an attempt to answer the research question. Many questions will be used to uncover the problem and address it comprehensively (Polit & Beck, 2014).

Types of Qualitative Research Design
Types of qualitative research include ethnography,
phenomenology, grounded theory, historical research, and case studies (Polit & Beck, 2014). Ethnography reveals the way culture is defined, the behavior associated with culture, and how culture is understood. The ethnography design allows the researcher to investigate shared meanings that influence behaviors of a group (Polit & Beck, 2012).

Phenomenology is employed to investigate a person's lived experience and uncover meanings of this experience (Polit & Beck, 2012). Nurses often use phenomenology research to better understand topics that may be part of human experiences, such as chronic pain or domestic violence (Polit & Beck, 2012). Using the coffee analogy, a researcher may use phenomenology to investigate attitudes and practices around a specific time of day for coffee consumption. For example, some individuals may prefer coffee in the morning, while some prefer coffee throughout the day, and others only enjoy coffee after a meal.

Grounded theory investigates actions and effects of behavior in a culture. Grounded theory methodology may be used to investigate afternoon tea time practices in Europe as compared to morning coffee habits in the United States. Historical research examines the past using recorded data, such as photos or objects. The historical researcher may look at the size of coffee mugs in antique photos over a few centuries to provide a historical perspective. Perhaps photos are telling of cultural practices surrounding consuming coffee over the centuries, whether in solitude or in small or large groups.

Qualitative Research Attributes
When selecting a qualitative research design, keep in
mind the unique attributes. Qualitative research methodology may involve multiple means of data collection to further understand the problem, such as interviews in addition to observations (Polit & Beck, 2012). Further, qualitative research is flexible and adapts to new information based on data collected, provides a holistic perspective on the topic, and allows the researcher to become entrenched in the investigation. The researcher is the research tool, and data are constantly being analyzed from the commencement of the study. The decision to select a qualitative methodology requires several considerations, a great amount of planning (such as which research design fits the study best, the time necessary to devote to the study, a data collection plan, and resources available to collect the data), and finally, self-reflection on any personal presumptions and biases toward the topic (Polit & Beck, 2014).

Selecting a sample population in qualitative research begins with identifying eligibility to participate in the study based on the research question. The participant needs to have had exposure to or experience with the content being investigated. A thorough interview will uncover the encounter the participant had with the research question or experience. There will most likely be a few standard questions asked of all participants and subsequent questions that evolve based upon the participant's experience and answers. Thus, there tends to be a small sample size with a high volume of narrative data that needs to be analyzed and interpreted to identify trends intended to answer the research question (Polit & Beck, 2014).

Mixed Methods
Combining both quantitative and qualitative methodology in a single study is known as a mixed methods study. According to Tashakkori and Creswell (2007), mixed methods research is "research in which the researcher collects and analyzes data, integrates the findings, and draws inferences using both qualitative and quantitative approaches or methods in a single study or program of inquiry" (p. 4). This approach has the potential to allow the researcher to collect two sets of data. An example of using mixed methods would be examining the effects of consuming a caffeinated beverage prior to bedtime. The researcher may want to investigate the impact of caffeine
  • 26. researcher taking steps in the study, where each step leads to another in an orderly fashion. An example of incremen- tality includes following a particular sequence, or an order, just as a recipe follows steps in order to accurately produce a desired product. Data collected from one method provide feedback to promote understanding of data from the other method. With enhanced variability, researchers can be more confident about the validity of their results because of multiple types of data supporting the hypothesis. Collaboration provides opportunity for both quantitative and qualitative researchers to work on similar problems in a collaborative nature. Once the researcher has decided to move forward with a mixed-method study, the next step is to decide on the designs to employ. Design options include triangulation, embedded, explanatory, and exploratory. Triangulation is obtaining quantitative and qualitative data concurrently, with equal importance given to each design. An embedded design uses one type of data to support the other data type. An explanatory design focuses on collecting one data type and then moving to collecting the other data type. Exploratory is another sequential design, where the researcher collects one type of data, such as qualitative, in the first phase; then using those findings, the researcher col- lects the other data, quantitative, in the second phase. Typically, the first phase concentrates on thorough investi - gation of a minutely researched occurrence, and the second phase is focused on sorting data to use for further investiga- tion. While using a two-step process may provide the researcher with more data, it can also be time-consuming. Conclusion In summary, just like tea and coffee, research has simi -
  • 27. larities and differences. In the simplest of terms, it can be viewed as measuring (how many scoops of coffee grounds?) compared to perception and experience (how long to steep the tea?). Quantitative research can range from experiments with a control group to studies looking at retrospective data and suggesting causal relationships. Qualitative research is conducted in the presence of limit- ed research on a particular topic, and descriptive narra- tives have the potential to provide detailed information regarding this particular area. Mixed methods research involves combining the quantitative and qualitative threads into data conversion and using those results to make meta-inferences about the research question. Research is hard work, but just like sipping coffee or tea at the end of a long day, it is rewarding and satisfying. References Choy, L.T. (2014). The strengths and weaknesses of research methodology: Comparison and complimentary between qualitative and quantitative approaches. Journal of Humanities and Social Science, 19(4), 99-104. Polit, D.F., & Beck, C.T. (2012). Nursing research: Generating and assessing evidence for nursing practice. (9th ed.). Philadelphia, PA: Wolters Kluwer. Polit, D.F., & Beck, C.T. (2014). Essentials of nursing research: Appraising evidence for nursing practice (8th ed.). Philadelphia, PA: Wolters Kluwer. Tashakkori, A., & Creswell, J.W. (2007). The new era of mixed methods. Journal of Mixed Methods Research, 1(1), 3-7.
EVIDENCE BASED PRACTICE

MAIN LESSON OVERVIEW A
This module takes a closer look at the two basic types of research introduced in the previous module: Qualitative Research and Quantitative Research. The module also begins the process of determining whether a study is well designed and implemented, and whether the findings are solid enough to merit incorporation into practice.

A: Example of a Good Study
The researchers want to see if a new drug for high blood pressure works to keep the blood pressure (BP) at acceptable low levels. The research design would require a random assignment of the study participants into either an experimental group or a
control group; the subjects would have their BPs measured in advance; and the experimental group would receive the medication, followed by additional measures of BP for both groups. The [valid] research question would be: does the drug cause a reduction in blood pressure that would not be seen in a group without the drug? The sample size would be 50 in each group, and subjects would all be matched on age, weight, history of blood pressure spikes, and no other medical problems (control for error). The data would be interval-scaled, allowing parametric statistics to be used. The research question asks if there would be significant differences in BP measures between the two groups due to the new drug. The statistic would thus be an analysis of variance (ANOVA), since it is the highest level of statistical analysis that fits the research question, the data type, and the expected causal relationship between the drug and patient
BP levels. The following explains how the above is an example of a good research study that satisfactorily meets the following four questions.
• Does the study address a valid clinical question? Yes – studying a drug to see if its use results in acceptable BP levels is a valid clinical question.
• Do the study group participants replicate the overall population under study? Yes – the study participants are selected according to a number of applicable factors.
• Does the study randomly assign participants into the experimental or control groups? Yes – once qualified, the participants are randomly assigned to the experimental and control groups.
• Does the study's structure aim at validly applicable statistics? Yes – the planned analysis (an ANOVA) fits the research question and the interval-scaled data, as sketched below.
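As a hedged illustration of the analysis step described above (the blood-pressure values are simulated, and with only two groups a one-way ANOVA is equivalent to a t-test), the comparison might be run as follows:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    # Simulated change in systolic BP (mmHg) after treatment, 50 subjects per group.
    control_change = rng.normal(loc=-2, scale=8, size=50)        # no drug
    experimental_change = rng.normal(loc=-10, scale=8, size=50)  # new drug

    f_stat, p_value = stats.f_oneway(control_change, experimental_change)
    print(f"F = {f_stat:.2f}, P = {p_value:.4f}")
    # A small P value supports (but does not by itself prove) that the drug, rather than
    # chance, explains the difference in BP reduction between the two groups.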
MAIN LESSON OVERVIEW B
Basically, the way to determine whether a study is "bad" is to measure it against the criteria of a "good" study. This sounds simple, but oftentimes studies are labeled as good, or valid, when they really do not measure up. It is the role of reviewers to make sure all of the "i's" are dotted and the "t's" are crossed. Decisions based on unsound studies can have potentially large negative financial implications, and can result in injuries and death.

B: Example of a Bad Study
Same research question, but the researchers do not screen their subjects for weight, age, and other medical conditions. They do not use a control group, so they cannot tell whether any changes in BP after the drug is given are due to the drug and not to other possible factors. They use a single sample of 5 people, which is
not big enough to tell anything. Instead of actual BP measures, they just list the results as "up" or "down," but try to use an ANOVA (which cannot be used with categorical data such as up or down). The statistic does not fit the data or the sample, and all results are questionable due to bad design and poor statistical selection. The following explains why the above is an example of a bad research study.
1. Does the study address a valid clinical question? Yes – studying a drug to see if its use results in acceptable BP levels is a valid clinical question.
2. Do the study group participants replicate the overall population under study? NO – the study group participants are not screened to a valid set of criteria.
3. Does the study randomly assign participants into the experimental or control groups? NO – the study does not use a control group.
4. Does the study's structure aim at validly applicable statistics?
NO – the study does not result in valid statistical data.
Therefore, the study is considered bad, since three of the four markers of a valid, or good, study are missing (questions 2, 3, and 4).
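To make the statistical mismatch concrete: with categorical "up"/"down" outcomes, a test on counts (a chi-squared test, or Fisher's exact test for very small samples) is the kind of procedure that fits the data, whereas an ANOVA assumes numeric measurements. The sketch below is illustrative only; the counts are invented and, as the lesson notes, 5 subjects per group is far too small for any test to be informative.

    from scipy import stats

    # Hypothetical counts of subjects whose BP went "down" vs "up" in each group.
    #        down  up
    table = [[4, 1],   # drug group
             [2, 3]]   # comparison group

    odds_ratio, p_fisher = stats.fisher_exact(table)
    print(f"Fisher's exact P = {p_fisher:.3f}")  # with 5 per group, P is rarely small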
Summary of Lesson
The purpose of this tutorial was to familiarize you with the basic concepts in the methods, measurement, and review of healthcare evidence-based research studies. If structured and conducted correctly, evidence-based research studies can lead to evidence-based practice and, consequently, more effective healthcare for the patient.

Main Module 1 Readings

1. Introduction
The term evidence-based practice has become pervasive in the health care industry. This concept signals a major shift in how health care providers deliver their services, and in how health care institutions function. It is no longer acceptable just to perform in a given way because "that's how it's always been done." Research to promote evidence-based practice is becoming more and more a part of the regular work of health care leaders. However, as with any research, it is important to be able to tell the difference between good, solid research and flawed research with questionable conclusions. Since changing practice can be difficult at best, it is essential that changes be grounded in solid evidence.

2. Evidence-Based Practice
What is evidence-based practice? This approach can be defined as the continuous use of current, best evidence-based research in decisions regarding patient care. Such research involves having a clinical question that needs addressing; the search for
information and critical appraisal of that information as it relates to the clinical question; integration of the question's basic concepts/components with existing clinical expertise; and understanding the projected impacts any change can have on patients. This approach also requires the review and integration of the results of more than one study into the critical appraisal, so that the reliability and generalizability of the results are stronger than any single study can provide.

3. Types of Research
A health care facility leader deals with two major types of research: quantitative and qualitative. When addressing clinical issues, research is traditionally performed using a quantitative design. This may involve timed studies, where
the experimental variables are measured at different points in time on the same study sample; it may also involve comparison of an experimental group against a control group; or it may involve the impacts of several independent variables on a single dependent variable. There are many different types of quantitative studies, but they all require the following: the appropriate selection of a random sample of subjects that replicates the overall population under study; the use of a statistical analysis appropriate to the design; and a design that effectively controls the variables under study.
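Where a design looks at the impacts of several independent variables on a single dependent variable, the analysis is often some form of regression. The sketch below is a generic, hypothetical illustration (simulated predictors and outcome, ordinary least squares via numpy), not a method prescribed by this module:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    # Simulated independent variables: age, weight, baseline BP.
    age = rng.normal(55, 10, n)
    weight = rng.normal(80, 12, n)
    baseline_bp = rng.normal(140, 15, n)

    # Simulated dependent variable: follow-up BP influenced by all three plus noise.
    followup_bp = 20 + 0.2 * age + 0.1 * weight + 0.6 * baseline_bp + rng.normal(0, 8, n)

    X = np.column_stack([np.ones(n), age, weight, baseline_bp])  # add intercept column
    coefficients, *_ = np.linalg.lstsq(X, followup_bp, rcond=None)
    print("Intercept and coefficients (age, weight, baseline BP):", np.round(coefficients, 2))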
4. Qualitative Research
The other type of standard research is the qualitative study. Qualitative studies tend to focus on the experiences of subjects and on gaining a stronger understanding of those experiences. Researchers observe subjects in a given setting, watch for behavioral themes, and develop formative and summative observations and conclusions. An example of qualitative research would be a case study, where a particular patient, process, or event is studied and analyzed, and logical conclusions are drawn from the data gathered. Another example could be a root-cause analysis for determining flaws and error causes in a system. A significant amount of exploratory research is done on a qualitative basis. Qualitative studies are often followed up by a range of quantitative studies to derive specific answers to more narrowly focused research questions.

5. Research Findings
How are evidence-based research findings used in a health care setting? One of the most powerful ways to use research is as a foundation toward improving practice outcomes. Hence, one finds the concept of evidence-based practice; that is, practice based on
research evidence. For example, a physician may be using a standard mix of medications to control infections. However, current research results indicate that a particular single medication is more effective at reducing infections than the mix traditionally used. Since a major goal of health care leaders is to reduce hospital-acquired infections, it would be important to disseminate this information to physicians, with the goal of changing their practice to achieve better outcomes. Another example is the research on causes of stomach ulcers. Traditionally and historically, the assumption was that high levels of stomach acid eroded the stomach lining, producing bleeding ulcers. The treatments at the time included medications to neutralize stomach acid, dietary changes such as drinking more milk to coat the stomach lining, and lifestyle changes to reduce the stress that was
presumed to cause increased acid production. People were shocked to see research revealing H. pylori, a bacterium found in the stomachs of ulcer sufferers, as the real cause of ulcers, and that the treatment was a course of antibiotics.

6. Health Care Leader Plan
What factors would a health care leader incorporate in a plan to make the process of evidence-based research leading to improved medical care successful? It is critical to base change on valid, reliable research findings. The strongest findings typically come from studies that have been replicated by subsequent researchers. There are examples across the scientific world of seemingly earth-shaking results from a single study that could not be reproduced by other researchers. One example is the cold fusion debacles of recent
decades, wherein various researchers claimed to be able to make nuclear fusion occur at room temperatures. In each case, study results rocked the world of physics and many researchers rushed to duplicate the study. However, not a single duplication attempt was successful. Failure to replicate findings means that, at best, they are not generalizable outside the study sample, and, at worst, the research methodology was flawed in some way. Other issues to consider include: the correlation between the research question and the study design; the type of data collected and its impact on the statistics produced; and the ability, or inability, to control the variables within the study as well as extraneous variables that may have an impact on results.

7. Implications of a Successful Practice
Finally, what are the implications of successfully implementing evidence-based practice changes in the health care environment?
One of the fundamental implications the leader must consider is the financial impact of the change. Some changes may be favorable, as when a generic antibiotic is shown to be as effective as a brand-name antibiotic, but 70% cheaper. Other practice changes are more expensive, as when drug-coated heart stents first made their appearance for application to patients with coronary artery disease and the cost of the procedure went up by thousands of dollars versus the non-drug-coated stents. Considering the organization's stakeholders when making change can have a powerful effect on the viability and smoothness of the change. For example, key stakeholders such as physicians, high-level staff, or even outside vendors can make a change effort difficult, or even unsuccessful, if they align to resist the change. The
experienced change agent must understand the "political" climate and stakeholders thoroughly before initiating a change process. Finally, change theory repeatedly demonstrates that change is most difficult when the people affected by the change are satisfied with the status quo. One of the things that new, valid research can demonstrate is better outcomes than the status quo can achieve. This can help to create a readiness to change that will facilitate the entire process. But be prepared; logic does not always ensure an easy transition.

8. Conclusion
The implementation of change precipitated by research findings from evidence-based practice studies is an increasing responsibility of the health care leadership role. Monitors of the quality of patient care are becoming publicly available through the Centers for Medicare
and Medicaid Services (CMS) and its national Web site. Consequently, the general public is now able to see how different health care organizations perform on national indicators of quality. In order to meet the thresholds required by payers, changes in practice must be implemented, driven by effective research. Financial reimbursement may be tied to successful changes in practice, especially where patient outcomes improve. Therefore, a key element of successful implementation of practice change is that it be based on valid research. A critical skill set for the health care leader to develop is the ability to distinguish excellent research studies from those that may contain errors that affect the validity of the results. As we continue in the course, you will learn techniques for assessing research studies.
You will also explore issues that can complicate implementation of changes in practice.

9. References
Melnyk, B., & Fineout-Overholt, E. (2005). Evidence-based practice in nursing and healthcare: A guide to best practice. New York: Lippincott Williams & Wilkins.

Keys to Assess a Research Study (Panel)
There are a number of elements that comprise valid research. The following is an examination of the key elements that the health care administrator/manager would want to look at as basic to the makeup of a valid research study. That is, what are the questions that need to be answered?

KEY ELEMENTS
1. Literature Review:
   a. Is the research question appropriately derived from the literature review?
   b. Does it make sense to ask the question after reading the review of the literature?
   c. Is the research question clearly stated, and are the variables included in it?
2. Variables:
   a. What are the independent and dependent variables?
   b. Do they make sense from the perspective of the research question?
   c. Is the data measurement of each of the variables the correct type of data needed by the selected statistic?
3. Research Design:
   a. Is the sample randomly selected from the population, or are subjects picked on the basis of a criterion?
   b. What is the sample size?
   c. What errors could occur in the design, and are they
controlled for?
   d. Does the study have validity; that is, does it answer the research question on the expected relationship between the independent variable and the dependent variable?
   e. Does the design call for an experimental and a control group?
4. Statistic:
   a. Is the statistic the correct one based on the research question, the expected relationship between the independent variable and the dependent variable, and the type of data taken from subjects?
   b. Is the statistic significant at the accepted level?

More Information (Alphabetical Order)
ANOVA
An analysis of variance (ANOVA) compares group means to determine whether the differences between groups are larger than would be expected by chance. A key question to ask when reviewing a study is "does the study structure aim at validly applicable statistics?" If the answer is yes, and the design compares groups on numeric data, an ANOVA is one appropriate statistic.

Experimental Group or Control Group
In quantitative studies, the most common design uses an experimental group and a control group. The experimental group is exposed to the independent variables, while the control group is not. Then both groups are measured on the dependent variable. Before deciding upon the experimental group upon which to base the study, the variables being studied need to be defined. Variables represent a relationship that is tested by the research design. In a
quantitative study, this requires at least one independent variable and one dependent variable, although more complex studies may include more than one of each. Independent variables are those that are assumed to have some impact or influence on the dependent, or outcome, variable. For example, in a study that examines intent to remain in one's job (the dependent variable), the quality of one's relationship with the manager is one possible independent variable that can influence the decision to stay. The dependent variable is always the expected outcome measure, and the independent variables are the ones that are theorized to have an impact on changes in the dependent variable. It is important to note whether anything in the literature review explains why the researchers chose the particular independent and dependent variables that they did. In a well-designed study, the rationale for variable selection will be
obvious from the review of past literature.

Main Lesson Overview A
Qualitative research begins with identifying a broad topic to be explored and studied, rather than a narrowly designed research question; it does not use a research question as such, or a research hypothesis. Since it is focused on a general study for identifying and studying broad concepts, it involves the organization and interpretation of non-numeric data for the purpose of discovering
important patterns or relationships. It may or may not involve a literature review, and it does not use formal sample selection, data analysis, or statistical interpretation. That noted, however, qualitative research plays an important role in the activities of healthcare leaders.
Random Assignment
Random assignment places study participants into either an experimental group or a control group by chance. When reviewing a study, ask whether the study used random selection and assignment of subjects, or whether subjects were instead matched on key variables.

Statistical Analysis
The analysis of the data collected, the statistics, is an essential component of the research design. The choice of statistics to use is affected by the types of data collected, the expected relationships between the independent and dependent variables, and the format of the research question. The discussion section of the research is where the researchers pull together the entire study, discuss their findings, and tie the results back to the research question. The discussion focuses on the
statistics that resulted from the study. Do they reveal a significant relationship between the variables? That is, did the independent variables affect the dependent variable in the way the researchers had anticipated? When reviewing a study, it is important to compare the results (the statistics of the study) in the discussion section with the initial review of the literature and the research question.

Valid Research Study
Validity determines whether the independent variable is really having an effect on the dependent variable, as opposed to the study being affected by variables extraneous to the study. Are the instruments used to measure variables valid and reliable? This aspect can be determined by looking at the way the instruments were originally built and tested. The researchers
should use instruments that have been used for the type of study in question and put through a series of analyses that confirm the instrument's validity and reliability. Validity demonstrates that the instrument measures the abstract concepts it is supposed to measure. For example, the Beck Depression Inventory has been shown to measure accurately the intensity of depression in multiple studies involving thousands of patients. Consequently, it is said to have validity.
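As a purely illustrative sketch of one common reliability check (not one named by this module), Cronbach's alpha estimates the internal consistency of a multi-item instrument; the item scores below are simulated:

    import numpy as np

    rng = np.random.default_rng(3)
    # Simulated responses: 100 respondents x 5 questionnaire items scored 0-3,
    # generated so the items partly track a shared underlying trait.
    trait = rng.normal(0, 1, 100)
    items = np.clip(np.round(1.5 + trait[:, None] + rng.normal(0, 0.7, (100, 5))), 0, 3)

    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    cronbach_alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
    print(f"Cronbach's alpha: {cronbach_alpha:.2f}")  # roughly 0.7-0.8 or above is commonly read as acceptable

Note that reliability alone does not establish validity; an instrument can be consistent yet still measure the wrong concept.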
Valid Set Criteria
Four of the most important factors included in a valid research study are:
1. Does the study address a valid clinical question?
2. Do the study group participants replicate the overall population under study?
3. Does the study randomly assign participants into the experimental or control groups?
4. Does the study's structure aim at validly applicable statistics?
In the final analysis, a study will fail to be valid if it does not contain these basic elements within its structure.