The Qualitative Report
Volume 13 | Number 4 Article 2
12-1-2008
Qualitative Case Study Methodology: Study
Design and Implementation for Novice
Researchers
Pamela Baxter
McMaster University, [email protected]
Susan Jack
McMaster University, [email protected]
Recommended APA Citation
Baxter, P., & Jack, S. (2008). Qualitative case study methodology: Study design and implementation for novice researchers. The Qualitative Report, 13(4), 544-559. Retrieved from https://nsuworks.nova.edu/tqr/vol13/iss4/2
Keywords
Case Study and Qualitative Method

Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 4.0 License (http://creativecommons.org/licenses/by-nc-sa/4.0/).

This article is available in The Qualitative Report: https://nsuworks.nova.edu/tqr/vol13/iss4/2
The Qualitative Report Volume 13 Number 4 December 2008
544-559
http://www.nova.edu/ssss/QR/QR13-4/baxter.pdf
Qualitative Case Study Methodology: Study Design and Implementation for Novice Researchers

Pamela Baxter and Susan Jack
McMaster University, West Hamilton, Ontario, Canada

Qualitative case study methodology provides tools for researchers to study complex phenomena within their contexts. When the approach is applied correctly, it becomes a valuable method for health science research to develop theory, evaluate programs, and develop interventions. The purpose of this paper is to guide the novice researcher in identifying the key elements for designing and implementing qualitative case study research projects. An overview of the types of case study designs is provided, along with general recommendations for writing the research questions, developing propositions, determining the “case” under study, and binding the case, as well as a discussion of data sources and triangulation. To facilitate application of these principles, clear examples of research questions, study propositions, and the different types of case study designs are provided. Key Words: Case Study and Qualitative Methods
Introduction

To graduate students and researchers unfamiliar with case study methodology, there is often misunderstanding about what a case study is and how it, as a form of qualitative research, can inform professional practice or evidence-informed decision making in both clinical and policy realms. In a graduate-level introductory qualitative research methods course, we have listened to novice researchers describe their views of case studies and their perceptions of it as a method to be used only to study individuals or specific historical events, or as a teaching strategy to holistically understand exemplary “cases.” It has been a privilege to teach these students that rigorous qualitative case studies afford researchers opportunities to explore or describe a phenomenon in context using a variety of data sources. The approach allows the researcher to explore individuals or organizations, simple through complex interventions, relationships, communities, or programs (Yin, 2003), and supports the deconstruction and the subsequent reconstruction of various phenomena. This approach is valuable for health science research to develop theory, evaluate programs, and develop interventions because of its flexibility and rigor.
Background

Qualitative case study is an approach to research that facilitates exploration of a phenomenon within its context using a variety of data sources. This ensures that the issue is not explored through one lens, but rather a variety of lenses, which allows for multiple facets of the phenomenon to be revealed and understood. There are two key approaches that guide case study methodology; one is proposed by Robert Stake (1995) and the second by Robert Yin (2003, 2006). Both seek to ensure that the topic of interest is well explored, and that the essence of the phenomenon is revealed, but the methods that they each employ are quite different and are worthy of discussion. For a lengthier treatment of case study methods we encourage you to read Hancock and Algozzine’s Doing case study research: A practical guide for beginning researchers (2006).
Philosophical Underpinnings

First, both Stake (1995) and Yin (2003) base their approach to case study on a constructivist paradigm. Constructivists claim that truth is relative and that it is dependent on one’s perspective. This paradigm “recognizes the importance of the subjective human creation of meaning, but doesn’t reject outright some notion of objectivity. Pluralism, not relativism, is stressed with focus on the circular dynamic tension of subject and object” (Miller & Crabtree, 1999, p. 10). Constructivism is built upon the premise of a social construction of reality (Searle, 1995). One of the advantages of this approach is the close collaboration between the researcher and the participant, while enabling participants to tell their stories (Crabtree & Miller, 1999). Through these stories the participants are able to describe their views of reality, and this enables the researcher to better understand the participants’ actions (Lather, 1992; Robottom & Hart, 1993).
When to Use a Case Study Approach

So when should you use a case study approach? According to Yin (2003) a case study design should be considered when: (a) the focus of the study is to answer “how” and “why” questions; (b) you cannot manipulate the behaviour of those involved in the study; (c) you want to cover contextual conditions because you believe they are relevant to the phenomenon under study; or (d) the boundaries are not clear between the phenomenon and context. For instance, a study of the decision making of nursing students conducted by Baxter (2006) sought to determine the types of decisions made by nursing students and the factors that influenced their decision making. A case study was chosen because the case was the decision making of nursing students, but the case could not be considered without the context, the School of Nursing, and more specifically the clinical and classroom settings. It was in these settings that the decision making skills were developed and utilized. It would have been impossible for this author to have a true picture of nursing student decision making without considering the context within which it occurred.
Determining the Case/Unit of Analysis

While you are considering what your research question will be, you must also consider what the case is. This may sound simple, but determining what the unit of analysis (case) is can be a challenge for both novice and seasoned researchers alike. The case is defined by Miles and Huberman (1994) as “a phenomenon of some sort occurring in a bounded context. The case is, in effect, your unit of analysis” (p. 25). Asking yourself the following questions can help to determine what your case is: Do I want to “analyze” the individual? Do I want to “analyze” a program? Do I want to “analyze” the process? Do I want to “analyze” the difference between organizations? Answering these questions, along with talking with a colleague, can be effective strategies to further delineate your case. For example, your question might be, “How do women in their 30s who have had breast cancer decide whether or not to have breast reconstruction?” In this example, the case could be the decision making process of women between the ages of 30 and 40 years who have experienced breast cancer. However, it may be that you are less interested in the activity of decision making and more interested in focussing specifically on the experiences of 30-40 year old women. In the first example, the case would be the decision making of this group of women and it would be a process being analyzed, but in the second example the case would be focussing on an analysis of individuals, or the experiences of 30-40 year old women. What is examined has shifted in these examples (see Case Examples #1 and #2 in Table 1).
Table 1

Developing Case Study Research Questions

Case Example 1: The decision making process of women between the ages of 30 and 40 years.
Research Questions: How do women between the ages of 30 and 40 years decide whether or not to have reconstructive surgery after a radical mastectomy? What factors influence their decision?

Case Example 2: The experiences of 30-40 year old women following radical mastectomy faced with the decision of whether or not to undergo reconstructive surgery.
Research Questions: How do women (30-40 years of age) describe their post-op (first 6 months) experiences following a radical mastectomy? Do these experiences influence their decision making related to breast reconstructive surgery?

Case Example 3: The decision making process (related to breast reconstruction post-radical mastectomy) of women between the ages of 30 and 40 years attending four cancer centers in Ontario.
Research Questions: How do women (ages 30-40) attending four different cancer centers in Ontario describe their decision making related to breast reconstructive surgery following a radical mastectomy?
Binding the Case

Once you have determined what your case will be, you will have to consider what your case will NOT be. One of the common pitfalls associated with case study is that there is a tendency for researchers to attempt to answer a question that is too broad, or a topic that has too many objectives for one study. In order to avoid this problem, several authors, including Yin (2003) and Stake (1995), have suggested that placing boundaries on a case can prevent this explosion from occurring. Suggestions on how to bind a case include: (a) by time and place (Creswell, 2003); (b) by time and activity (Stake); and (c) by definition and context (Miles & Huberman, 1994). Binding the case will ensure that your study remains reasonable in scope. In the example of the study involving women who must decide whether or not to have reconstructive surgery, established boundaries would need to include a concise definition of breast cancer and reconstructive surgery. I would have to indicate where these women were receiving care or where they were making these decisions, and the period of time that I wanted to learn about, for example within six months of a radical mastectomy. It would be unreasonable for me to look at all women in their 30s across Canada who had experienced breast cancer and their decisions regarding reconstructive surgery. In contrast, I might want to look at single women in their 30s who have received care in a tertiary care center in a specific hospital in South Western Ontario. The boundaries indicate what will and will not be studied in the scope of the research project. The establishment of boundaries in a qualitative case study design is similar to the development of inclusion and exclusion criteria for sample selection in a quantitative study. The difference is that these boundaries also indicate the breadth and depth of the study, and not simply the sample to be included.
Determining the Type of Case Study

Once you have determined that the research question is best answered using a qualitative case study, and the case and its boundaries have been determined, then you must consider what type of case study will be conducted. The selection of a specific type of case study design will be guided by the overall study purpose. Are you looking to describe a case, explore a case, or compare between cases? Yin (2003) and Stake (1995) use different terms to describe a variety of case studies. Yin categorizes case studies as explanatory, exploratory, or descriptive. He also differentiates between single, holistic case studies and multiple-case studies. Stake identifies case studies as intrinsic, instrumental, or collective. Definitions and published examples of these types of case studies are provided in Table 2.
Table 2

Definitions and Examples of Different Types of Case Studies

Explanatory: This type of case study would be used if you were seeking to answer a question that sought to explain the presumed causal links in real-life interventions that are too complex for the survey or experimental strategies. In evaluation language, the explanations would link program implementation with program effects (Yin, 2003).
Published example: Joia (2002). Analysing a web-based e-commerce learning community: A case study in Brazil. Internet Research, 12, 305-317.

Exploratory: This type of case study is used to explore those situations in which the intervention being evaluated has no clear, single set of outcomes (Yin, 2003).
Published example: Lotzkar & Bottorff (2001). An observational study of the development of a nurse-patient relationship. Clinical Nursing Research, 10, 275-294.

Descriptive: This type of case study is used to describe an intervention or phenomenon and the real-life context in which it occurred (Yin, 2003).
Published example: Tolson, Fleming, & Schartau (2002). Coping with menstruation: Understanding the needs of women with Parkinson’s disease. Journal of Advanced Nursing, 40, 513-521.

Multiple-case studies: A multiple case study enables the researcher to explore differences within and between cases. The goal is to replicate findings across cases. Because comparisons will be drawn, it is imperative that the cases are chosen carefully so that the researcher can predict similar results across cases, or predict contrasting results based on a theory (Yin, 2003).
Published example: Campbell & Ahrens (1998). Innovative community services for rape victims: An application of multiple case study methodology. American Journal of Community Psychology, 26, 537-571.

Intrinsic: Stake (1995) uses the term intrinsic and suggests that researchers who have a genuine interest in the case should use this approach when the intent is to better understand the case. It is not undertaken primarily because the case represents other cases or because it illustrates a particular trait or problem, but because in all its particularity and ordinariness, the case itself is of interest. The purpose is NOT to come to understand some abstract construct or generic phenomenon. The purpose is NOT to build theory (although that is an option; Stake, 1995).
Published example: Hellström, Nolan, & Lundh (2005). “We do things together”: A case study of “couplehood” in dementia. Dementia, 4(1), 7-22.

Instrumental: This type is used to accomplish something other than understanding a particular situation. It provides insight into an issue or helps to refine a theory. The case is of secondary interest; it plays a supportive role, facilitating our understanding of something else. The case is often looked at in depth, its contexts scrutinized, and its ordinary activities detailed, because it helps the researcher pursue the external interest. The case may or may not be seen as typical of other cases (Stake, 1995).
Published example: Luck, Jackson, & Usher (2007). STAMP: Components of observable behaviour that indicate potential for patient violence in emergency departments. Journal of Advanced Nursing, 59, 11-19.

Collective: Collective case studies are similar in nature and description to multiple case studies (Yin, 2003).
Published example: Scheib (2003). Role stress in the professional life of the school music teacher: A collective case study. Journal of Research in Music Education, 51, 124-136.
Single or Multiple Case Study Designs

Single Case

In addition to identifying the “case” and the specific “type” of case study to be conducted, researchers must consider whether it is prudent to conduct a single case study, or whether a better understanding of the phenomenon will be gained through conducting a multiple case study. If we consider the topic of breast reconstruction surgery again, we can begin to discuss how to determine the “type” of case study and the necessary number of cases to study. A single holistic case might be the decision making of one woman, or of a single group of 30 year old women facing breast reconstruction post-mastectomy. But remember that you also have to take into consideration the context. So, are you going to look at these women in one environment because it is a unique or extreme situation? If so, you can consider a holistic single case study (Yin, 2003).
Single Case with Embedded Units

If you were interested in looking at the same issue, but now were intrigued by the different decisions made by women attending different clinics within one hospital, then a holistic case study with embedded units would enable the researcher to explore the case while considering the influence of the various clinics and associated attributes on the women’s decision making. The ability to look at sub-units that are situated within a larger case is powerful when you consider that data can be analyzed within the subunits separately (within case analysis), between the different subunits (between case analysis), or across all of the subunits (cross-case analysis). The ability to engage in such rich analysis only serves to better illuminate the case. The pitfall that novice researchers fall into is that they analyze at the individual subunit level and fail to return to the global issue that they initially set out to address (Yin, 2003).
Multiple-Case Studies

If a study contains more than a single case then a multiple-case study is required. This is often equated with multiple experiments. You might find yourself asking, but what is the difference between a holistic case study with embedded units and a multiple-case study? Good question! The simple answer is that the context is different for each of the cases. A multiple or collective case study will allow the researcher to analyze within each setting and across settings, while a holistic case study with embedded units only allows the researcher to understand one unique/extreme/critical case. In a multiple case study, we are examining several cases to understand the similarities and differences between the cases. Yin (2003) describes how multiple case studies can be used to either, “(a) predicts similar results (a literal replication) or (b) predicts contrasting results but for predictable reasons (a theoretical replication)” (p. 47). This type of design has its advantages and disadvantages. Overall, the evidence created from this type of study is considered robust and reliable, but it can also be extremely time consuming and expensive to conduct. Continuing with the same example, if you wanted to study women in various health care institutions across the country, then a multiple or collective case study would be indicated. The case would still be the decision making of women in their 30s, but you would be able to analyze the different decision-making processes engaged in by women in different centers (see Case Example #3 in Table 1).

Stake (1995) uses three terms to describe case studies: intrinsic, instrumental, and collective. If you are interested in a unique situation, according to Stake, conduct an intrinsic case study. This simply means that you have an intrinsic interest in the subject and you are aware that the results have limited transferability. If the intent is to gain insight and understanding of a particular situation or phenomenon, then Stake would suggest that you use an instrumental case study to gain understanding. This author also uses the term collective case study when more than one case is being examined. The same example used to describe multiple case studies can be applied here.

Once the case has been determined and the boundaries placed on the case, it is important to consider the additional components required for designing and implementing a rigorous case study. These include: (a) propositions (which may or may not be present) (Yin, 2003; Miles & Huberman, 1994); (b) the application of a conceptual framework (Miles & Huberman); (c) development of the research questions (generally “how” and/or “why” questions); (d) the logic linking data to propositions; and (e) the criteria for interpreting findings (Yin).
Propositions

Propositions are helpful in any case study, but they are not always present. When a case study proposal includes specific propositions it increases the likelihood that the researcher will be able to place limits on the scope of the study and increase the feasibility of completing the project. The more a study contains specific propositions, the more it will stay within feasible limits. So where do the propositions come from? Propositions may come from the literature, personal/professional experience, theories, and/or generalizations based on empirical data (Table 3).
Table 3

Case Study Propositions

**These are only examples of literature and do not reflect a full literature review.

Proposition: Women in their 30s most often decide not to have reconstructive surgery.
Source: Professional experience and literature. Handel, Silverstein, Waisman, Waisman, & Gierson (1990). Reasons why mastectomy patients do not have breast reconstruction. Plastic Reconstruction Surgery, 86(6), 1118-1122. Morrow, Scott, Menck, Mustoe, & Winchester (2001). Factors influencing the use of breast reconstruction postmastectomy: A National Cancer Database study. Journal of American College of Surgeons, 192(1), 69-70.

Proposition: Women choose not to have reconstructive surgery post mastectomy due to the issues related to acute pain.
Source: Literature. Wallace, Wallace, Lee, & Dobke (1996). Pain after breast surgery: A survey of 282 women. Pain, 66(2-3), 195-205.

Proposition: Women face many personal and social barriers to breast reconstructive surgery.
Source: Professional experience and literature. Reaby (1998). Reasons why women who have mastectomy decide to have or not to have breast reconstruction. Plastic & Reconstructive Surgery, 101(7), 1810-1818. Staradub, YiChing Hsieh, Clauson, et al. (2002). Factors that influence surgical choices in women with breast carcinoma. Cancer, 95(6), 1185-1190.

Proposition: Women are influenced by their health care providers when making this decision.
Source: Personal experience and literature. Wanzel, Brown, Anastakis, et al. (2002). Reconstructive breast surgery: Referring physician knowledge and learning needs. Plastic and Reconstructive Surgery, 110(6).

Proposition: Women in different regions of Canada make different decisions about breast reconstructive surgery post mastectomy.
Source: Literature. Polednak (2000). Geographic variation in postmastectomy breast reconstruction rates. Plastic & Reconstructive Surgery, 106(2), 298-301.
For example, one proposition included in a study on the development of nursing student decision making in a clinical setting stated that “various factors influence nurse decision making including the decision maker’s knowledge and experience, feelings of fear, and degree of confidence” (Baxter, 2000, 2006). This proposition was based on the literature found on the topic of nurse decision making. The researcher can have several propositions to guide the study, but each must have a distinct focus and purpose. These propositions later guide the data collection and discussion. Each proposition serves to focus the data collection and determine the direction and scope of the study, and together the propositions form the foundation for a conceptual structure/framework (Miles & Huberman, 1994; Stake, 1995). It should be noted that propositions may not be present in exploratory holistic or intrinsic case studies, due to the fact that the researcher does not have enough experience, knowledge, or information from the literature upon which to base propositions. For those of you more familiar with quantitative approaches to experimental studies, propositions can be equated with hypotheses in that they both make an educated guess as to the possible outcomes of the experiment/research study. A common pitfall for novice case study researchers is to include too many propositions and then find that they are overwhelmed by the number of propositions that must be returned to when analyzing the data and reporting the findings.
Issues

To contribute to the confusion that exists surrounding the implementation of different types of qualitative case study approaches, where Yin uses “propositions” to guide the research process, Stake (1995) applies what he terms “issues.” Stake states, “issues are not simple and clean, but intricately wired to political, social, historical, and especially personal contexts. All these meanings are important in studying cases” (p. 17). Both Yin and Stake suggest that propositions and issues are necessary elements in case study research, in that both lead to the development of a conceptual framework that guides the research.
Conceptual Framework

Both Stake and Yin refer to conceptual frameworks, but fail to fully describe them or provide a model of a conceptual framework for reference. One resource that provides examples of conceptual frameworks is Miles and Huberman (1994). These authors note that the conceptual framework serves several purposes: (a) identifying who will and will not be included in the study; (b) describing what relationships may be present based on logic, theory, and/or experience; and (c) providing the researcher with the opportunity to gather general constructs into intellectual “bins” (Miles & Huberman, p. 18). The conceptual framework serves as an anchor for the study and is referred to at the stage of data interpretation. For example, an initial framework was developed by Baxter (2003) in her exploration of nursing student decision making. The framework was based on the literature and her personal experiences. The major constructs were proposed in the following manner:

[Figure: initial conceptual framework showing the constructs Types of decisions, Internal influencing factors, External influencing factors, Decision making, Time, Clinical Setting, and PBL setting. Adapted from Baxter, 2003, p. 28]

The reader will note that the framework does not display relationships between the constructs. The framework should continue to develop and be completed as the study progresses, and the relationships between the proposed constructs will emerge as data are analyzed. A final conceptual framework will include all the themes that emerged from data analysis. Yin suggests that returning to the propositions that initially formed the conceptual framework ensures that the analysis is reasonable in scope, and that it also provides structure for the final report. One of the drawbacks of a conceptual framework is that it may limit the inductive approach when exploring a phenomenon. To safeguard against becoming deductive, researchers are encouraged to journal their thoughts and decisions and discuss them with other researchers to determine if their thinking has become too driven by the framework.
Data Sources
A hallmark of case study research is the use of multiple data
sources, a strategy
which also enhances data credibility (Patton, 1990; Yin, 2003).
Potential data sources
may include, but are not limited to: documentation, archival
records, interviews, physical
artifacts, direct observations, and participant-observation.
Unique in comparison to other
qualitative approaches, within case study research, investigators
can collect and integrate
quantitative survey data, which facilitates reaching a holistic
understanding of the
phenomenon being studied. In case study research, data from these multiple sources are then converged in the analysis process rather than handled individually. Each data source is
one piece of the “puzzle,” with each piece contributing to the
researcher’s understanding
of the whole phenomenon. This convergence adds strength to
the findings as the various
strands of data are braided together to promote a greater
understanding of the case.
Although the opportunity to gather data from various sources is
extremely
attractive because of the rigor that can be associated with this
approach, there are
dangers. One of them is the collection of overwhelming amounts of data that require management and analysis. Often, researchers find themselves “lost” in the data. To bring order to data collection, a computerized database is often necessary to organize and manage the voluminous data.
Database
Both Yin and Stake recognize the importance of effectively
organizing data. The
advantage of using a database to accomplish this task is that raw
data are available for
independent inspection. Using a database improves the reliability of the case study because it enables the researcher to track and organize data sources: notes, key documents, tabular materials, narratives, photographs, and audio files can all be stored in a database for easy retrieval at a later date. Computer Aided Qualitative Data Analysis Software
(CAQDAS) provides unlimited “bins” into which data can be
collected and then
organized. In addition to the creation of bins, these programs facilitate the recording of source detail and the time and date of data collection, and they provide storage and search capabilities.
These are all important when developing a case study database
(Wickham & Woods,
2005). The advantages and disadvantages of such a database have been described in the literature (Richards & Richards, 1994, 1998); one of the greatest drawbacks is the distancing of the researcher from the data.
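As a loose illustration of what such a database records, the sketch below stores each data source with its type, source detail, and collection date, and supports simple keyword search across content. This is a hypothetical minimal example; the record fields and class names are assumptions for illustration, not a model of any particular CAQDAS package.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SourceRecord:
    # Minimal case study database entry: raw data kept available
    # for independent inspection, with source detail and date.
    source_type: str      # e.g. "interview", "document", "photograph"
    detail: str           # who/what/where the data came from
    collected: date       # time and date of collection
    content: str          # transcript, notes, or file reference

class CaseDatabase:
    def __init__(self):
        self.records = []

    def add(self, record: SourceRecord):
        self.records.append(record)

    def search(self, keyword: str):
        # Simple case-insensitive keyword search across stored content.
        return [r for r in self.records if keyword.lower() in r.content.lower()]

db = CaseDatabase()
db.add(SourceRecord("interview", "second-year nursing student", date(2003, 3, 14),
                    "Described time pressure when making clinical decisions."))
db.add(SourceRecord("document", "course syllabus", date(2003, 1, 10),
                    "Outlines PBL setting and decision-making objectives."))

print(len(db.search("decision")))  # 2
```

A real study would use a CAQDAS package rather than hand-rolled code, but the point stands: every stored item carries its source detail and collection date, so the raw data remain traceable.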
Analysis
As in any other qualitative study, data collection and analysis occur concurrently in case study research. The type of analysis engaged in will depend on the type of case study. Yin (2003) briefly describes several techniques for analysis: pattern matching, linking data to propositions, explanation building, time-series analysis, logic models, and cross-case synthesis. In contrast, Stake (1995) describes categorical aggregation and direct interpretation as types of analysis. Explaining each of these techniques is beyond the scope of this paper.
As a novice researcher, you should review the various types of analysis and determine which approach you are most comfortable with.
555 The Qualitative Report December 2008
Yin (2003) notes that one important practice during the analysis
phase of any case
study is the return to the propositions (if used); there are
several reasons for this. First,
this practice leads to a focused analysis when the temptation is
to analyze data that are
outside the scope of the research questions. Second, exploring
rival propositions is an
attempt to provide an alternate explanation of a phenomenon.
Third, engaging in this iterative process increases confidence in the findings as propositions and rival propositions are addressed and accepted or rejected.
One danger associated with the analysis phase is that each data source is treated independently and the findings are reported separately. This is not the purpose of a case study. Rather, the researcher must ensure that the data are
converged in an attempt to
understand the overall case, not the various parts of the case, or
the contributing factors
that influence the case. As a novice researcher, one strategy that
will ensure that you
remain true to the original case is to involve other research team
members in the analysis
phase and to ask them to provide feedback on your ability to
integrate the data sources in
an attempt to answer the research questions.
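The convergence described above can be pictured with a small, hypothetical sketch: findings from each data source are grouped under the proposition they bear on, so the researcher reads all sources together per proposition instead of reporting each source separately. Every source label, proposition ID, and finding here is invented for illustration only.

```python
from collections import defaultdict

# Each finding is tagged with the data source it came from and the
# proposition it speaks to; convergence means reading them together.
findings = [
    ("interview",   "P1", "Students report time pressure shapes decisions"),
    ("observation", "P1", "Decisions made faster near shift change"),
    ("document",    "P2", "Curriculum emphasizes PBL-based reasoning"),
    ("interview",   "P2", "Students link decisions to PBL tutorials"),
]

by_proposition = defaultdict(list)
for source, proposition, finding in findings:
    by_proposition[proposition].append((source, finding))

for proposition, items in sorted(by_proposition.items()):
    # More than one source type per proposition suggests converging evidence.
    sources = sorted({source for source, _ in items})
    print(proposition, "sources:", sources)
```

Grouping this way makes it immediately visible when a proposition rests on only one data source, which is a cue to seek additional evidence or flag the limitation.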
Reporting a Case Study
Reporting a case study can be a difficult task for any researcher
due to the
complex nature of this approach. It is difficult to report the
findings in a concise manner,
and yet it is the researcher’s responsibility to convert a complex
phenomenon into a
format that is readily understood by the reader. The goal of the
report is to describe the
study in such a comprehensive manner as to enable readers to feel as if they had been active participants in the research and to determine whether the study findings
could be applied to their own situation. It is important that the
researcher describes the
context within which the phenomenon is occurring as well as
the phenomenon itself.
There is no one correct way to report a case study. However,
some suggested ways are by
telling the reader a story, by providing a chronological report,
or by addressing each
proposition. Addressing the propositions ensures that the report
remains focused and
deals with the research question. The pitfall in the report
writing that many novice
researchers fall into is being distracted by the mounds of
interesting data that are
superfluous to the research question. Returning to the
propositions or issues ensures that
the researcher avoids this pitfall. To fully understand the findings, they should be compared and contrasted with the published literature so as to situate the new data within preexisting data. Yin (2003) suggests six
methods for reporting a case
study. These include linear, comparative, chronological, theory-building, suspense, and unsequenced approaches (refer to Yin for full descriptions).
Strategies for Achieving Trustworthiness in Case Study
Research
Numerous frameworks have been developed to evaluate the
rigor or assess the
trustworthiness of qualitative data (e.g., Guba, 1981; Lincoln &
Guba, 1985) and
strategies for establishing credibility, transferability,
dependability, and confirmability
have been extensively written about across fields (e.g.,
Krefting, 1991; Sandelowski,
1986, 1993). General guidelines for critically appraising
qualitative research have also
been published (e.g., Forchuk & Roberts, 1993; Mays & Pope,
2000).
For the novice researcher designing and implementing a case study project, there are several key elements of study design that can be integrated to enhance overall study quality or trustworthiness. Researchers using this
method will want to
ensure enough detail is provided so that readers can assess the
validity or credibility of
the work. As a basic foundation to achieve this, novice
researchers have a responsibility
to ensure that: (a) the case study research question is clearly
written, propositions (if
appropriate to the case study type) are provided, and the
question is substantiated; (b)
case study design is appropriate for the research question; (c)
purposeful sampling
strategies appropriate for case study have been applied; (d) data
are collected and
managed systematically; and (e) the data are analyzed correctly
(Russell, Gregory, Ploeg,
DiCenso, & Guyatt, 2005). Case study research design
principles lend themselves to
including numerous strategies that promote data credibility or
“truth value.”
Triangulation of data sources, data types, or researchers is a primary strategy that can be used and supports the principle in case study research that the phenomenon be viewed and explored from multiple perspectives. The collection and comparison of these data enhance data quality based on the principles of idea convergence and confirmation of findings (Knafl & Breitmayer, 1989). Novice
researchers should also
plan for opportunities to have either prolonged or intense exposure to the phenomenon under study within its context, so that rapport with participants can be established, multiple perspectives can be collected and understood, and the potential for social desirability responses in interviews is reduced (Krefting, 1991). As
data are collected and
analyzed, researchers may also wish to integrate a process of
member checking, where
the researchers’ interpretations of the data are shared with the
participants, and the
participants have the opportunity to discuss and clarify the
interpretation, and contribute
new or additional perspectives on the issue under study.
Additional strategies commonly
integrated into qualitative studies to establish credibility
include the use of reflection or
the maintenance of field notes and peer examination of the data.
At the analysis stage, the
consistency of the findings or “dependability” of the data can be
promoted by having
multiple researchers independently code a set of data and then
meet together to come to
consensus on the emerging codes and categories. Researchers may also choose to implement a process of double coding, where a set of data is coded and then, after a period of time, the researcher returns to code the same data set and compares the results (Krefting, 1991).
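The independent-coding strategy above can be made concrete with a small, hypothetical sketch: two researchers' code assignments for the same transcript segments are compared, and simple percent agreement plus a list of disagreements identifies segments for the consensus meeting. The code labels and data are illustrative assumptions, not drawn from any study cited here.

```python
# Hypothetical sketch: comparing two researchers' independent codings
# of the same transcript segments to support a consensus meeting.

def percent_agreement(coder_a, coder_b):
    """Simple percent agreement between two coders over the same segments."""
    assert len(coder_a) == len(coder_b)
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def disagreements(coder_a, coder_b):
    """Indices of segments the coders labeled differently."""
    return [i for i, (a, b) in enumerate(zip(coder_a, coder_b)) if a != b]

# Illustrative codes assigned to five transcript segments.
coder_a = ["time_pressure", "internal_factor", "external_factor",
           "time_pressure", "setting"]
coder_b = ["time_pressure", "internal_factor", "setting",
           "time_pressure", "setting"]

print(percent_agreement(coder_a, coder_b))  # 0.8
print(disagreements(coder_a, coder_b))      # [2] -> segment to discuss
```

Note that percent agreement is the crudest measure; chance-corrected statistics such as Cohen's kappa are preferred when reporting intercoder reliability formally.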
Conclusion
Case study research is more than simply conducting research on
a single
individual or situation. This approach has the potential to deal
with simple through
complex situations. It enables the researcher to answer “how”
and “why” type questions,
while taking into consideration how a phenomenon is influenced
by the context within
which it is situated. For the novice researcher, a case study is an excellent opportunity to gain tremendous insight into a case. It enables the researcher to gather data from a variety of sources and to converge the data to illuminate the case.
References
Baxter, P. (2000). An exploration of student decision making as
experienced by second
year baccalaureate nursing students in a surgical clinical
setting. Unpublished
master’s thesis, McMaster University, Hamilton, ON.
Baxter, P. (2003). The development of nurse decision making: A
case study of a four year
baccalaureate nursing programme. Unpublished doctoral thesis,
McMaster
University, Hamilton, ON.
Baxter, P., & Rideout, L. (2006). Decision making of 2nd year
baccalaureate nursing
students. Journal of Nursing Education, 45(4), 121-128.
Campbell, R., & Ahrens, C. E. (1998). Innovative community
services for rape victims:
An application of multiple case study methodology. American
Journal of
Community Psychology, 26, 537-571.
Creswell, J. (1998). Research design: Qualitative, quantitative,
and mixed methods
approaches (2nd ed.). Thousand Oaks, CA: Sage.
Forchuk, C., & Roberts, J. (1993). How to critique qualitative
research articles. Canadian
Journal of Nursing Research, 25, 47-55.
Guba, E. (1981). Criteria for assessing the trustworthiness of
naturalistic inquiries.
Educational Resources Information Center Annual Review
Paper, 29, 75-91.
Hancock, D. R., & Algozzine, B. (2006). Doing case study
research: A practical guide
for beginning researchers. New York: Teachers College Press.
Handel, N., Silverstein, M., Waisman, E., Waisman, J., &
Gierson, E. (1990). Reasons
why mastectomy patients do not have breast reconstruction.
Plastic
Reconstructive Surgery, 86(6), 1118-1122.
Hellström, I., Nolan, M., & Lundh, U. (2005). “We do things
together.” A case study of
“couplehood” in dementia. Dementia, 4(1), 7-22.
Joia, L. A. (2002). Analysing a web-based e-commerce learning
community: A case
study in Brazil. Internet Research, 12, 305-317.
Knafl, K., & Breitmayer, B. J. (1989). Triangulation in
qualitative research: Issues of
conceptual clarity and purpose. In J. Morse (Ed.), Qualitative
nursing research: A
contemporary dialogue (pp. 193-203). Rockville, MD: Aspen.
Krefting, L. (1991). Rigor in qualitative research: The
assessment of trustworthiness.
American Journal of Occupational Therapy, 45, 214-222.
Lather, P. (1992). Critical frames in educational research:
Feminist and post-structural
perspectives. Theory into Practice, 31(2), 87-99.
Lincoln, Y. S., & Guba, E. A. (1985). Naturalistic inquiry.
Beverly Hills, CA: Sage.
Lotzkar, M., & Bottorff, J. (2001). An observational study of
the development of a nurse-
patient relationship. Clinical Nursing Research, 10, 275-294.
Luck, L., Jackson, D., & Usher, K. (2007). STAMP:
Components of observable
behaviour that indicate potential for patient violence in
emergency departments.
Journal of Advanced Nursing, 59, 11-19.
Mays, N., & Pope, C. (2000). Qualitative research in health
care: Assessing quality in
qualitative research. BMJ, 320, 50-52.
Miles, M. B., & Huberman, A. M. (1994). Qualitative data
analysis: An expanded source
book (2nd ed.). Thousand Oaks, CA: Sage.
Morrow, M., Scott, S., Menck, H., Mustoe, T., & Winchester, D.
(2001). Factors
influencing the use of breast reconstruction postmastectomy: A
national cancer
database study. Journal of American College of Surgeons,
192(1), 69-70.
Patton, M. (1990). Qualitative evaluation and research methods
(2nd ed.). Newbury
Park, CA: Sage.
Polednak, A. (2000). Geographic variation in postmastectomy
breast reconstruction rates.
Plastic & Reconstructive Surgery, 106(2), 298-301.
Reaby, L. (1998). Reasons why women who have mastectomy
decide to have or not to
have breast reconstruction. Plastic & Reconstructive Surgery,
101(7), 1810-1818.
Richards, L., & Richards, T. (1994). From filing cabinet to
computer. In A. Bryman & R.
G. Burgess (Eds.), Analysing qualitative data (pp. 146-172).
London: Routledge.
Richards, T. J., & Richards, L. (1998). Using computers in
qualitative research. In N. K.
Denzin & Y. S. Lincoln (Eds.), Collecting and interpreting
qualitative materials
(pp. 445-462). London: Sage.
Russell, C., Gregory, D., Ploeg, J., DiCenso, A., & Guyatt, G.
(2005). Qualitative
research. In A. DiCenso, G. Guyatt, & D. Ciliska (Eds.),
Evidence-based nursing:
A guide to clinical practice (pp. 120-135). St. Louis, MO:
Elsevier Mosby.
Sandelowski, M. (1986). The problem of rigor in qualitative
research. Advances in
Nursing Science, 8(3), 27-37.
Sandelowski, M. (1993). Rigor or rigor mortis: The problem of
rigor in qualitative
research revisited. Advances in Nursing Science, 16(1), 1-8.
Scanlon, C. (2000). A professional code of ethics provides
guidance for genetic nursing
practice. Nursing Ethics, 7(3), 262-268.
Scheib, J. W. (2003). Role stress in the professional life of the
school music teacher: A
collective case study. Journal of Research in Music Education,
51, 124-136.
Stake, R. E. (1995). The art of case study research. Thousand
Oaks, CA: Sage.
Staradub, V., YiChing Hsieh, M., Clauson, J., et al. (2002).
Factors that influence
surgical choices in women with breast carcinoma. Cancer,
95(6), 1185-1190.
Tolson, D., Fleming, V., & Schartau, E. (2002). Coping with
menstruation:
Understanding the needs of women with Parkinson’s disease.
Journal of
Advanced Nursing, 40, 513-521.
Wallace, M., Wallace, H., Lee, J., & Dobke, M. (1996). Pain
after breast surgery: A
survey of 282 women. Pain, 66(2-3), 195-205.
Wanzel, K., Brown, M., Anastakis, D., et al. (2002).
Reconstructive breast surgery:
Referring physician knowledge and learning needs. Plastic and
Reconstructive
Surgery, 110(6), xx-xx.
Wickham, M., & Woods, M. (2005). Reflecting on the strategic
use of CAQDAS to
manage and report on the qualitative research process. The
Qualitative Report,
10(4), 687-702.
Yin, R. K. (2003). Case study research: Design and methods
(3rd ed.). Thousand Oaks,
CA: Sage.
Author Note
Dr. Pamela Baxter is an assistant professor at McMaster
University. Contact
information: McMaster University, 1200 Main St. W., HSB-
3N28C, Hamilton, ON, L8N
3Z5; Phone: 905-525-9140 x 22290; Fax: 905-521-8834; E-mail:
[email protected]
Dr. Susan Jack is an assistant professor at McMaster University.
1200 Main St.
W. Hamilton, ON L8N 3Z5; E-mail: [email protected]
Copyright 2008: Pamela Baxter, Susan Jack, and Nova
Southeastern University
Article Citation
Baxter, P., & Jack, S. (2008). Qualitative case study
methodology: Study design and
implementation for novice researchers. The Qualitative Report,
13(4), 544-559.
Retrieved from http://www.nova.edu/ssss/QR/QR13-
4/baxter.pdf
European Journal of Training and Development
Nonexperimental research: strengths, weaknesses and issues of
precision
Thomas G. Reio Jr
Article information:
To cite this document:
Thomas G. Reio Jr, (2016), “Nonexperimental research: strengths, weaknesses and issues of precision”, European Journal of Training and Development, Vol. 40, Iss. 8/9, pp. 676-690.
Permanent link to this document:
http://dx.doi.org/10.1108/EJTD-07-2015-0058
is not an experiment, is the predominant kind of research design used in the social sciences. How to
unambiguously and correctly present the results of
nonexperimental research, however, remains
decidedly unclear and possibly detrimental to applied
disciplines such as human resource development.
To clarify issues about the accurate reporting and generalization
of nonexperimental research results,
this paper aims to present information about the relative
strength of research designs, followed by the
strengths and weaknesses of nonexperimental research. Further,
some possible ways to more precisely
report nonexperimental findings without using causal language
are explored. Next, the researcher takes
the position that the results of nonexperimental research can be
used cautiously, yet appropriately, for
making practice recommendations. Finally, some closing
thoughts about nonexperimental research
and the appropriate use of causal language are presented.
Design/methodology/approach – The extant social science literature was reviewed to inform this paper.
Findings – Nonexperimental research, when reported accurately,
makes a tremendous contribution
because it can be used for conducting research when
experimentation is not feasible or desired. It can be
used also to make tentative recommendations for practice.
Originality/value – This article presents useful means to more
accurately report nonexperimental
findings through avoiding causal language. Ways to link
nonexperimental results to making practice
recommendations are explored.
Keywords Research design, Experimental design, Causal
inference, Nonexperimental,
Social science research, Triangulation
Paper type Conceptual paper
The call for cutting-edge research to meet individual, group and
societal needs around
the world has never seemed more urgent. As social science
researchers, this need seems
particularly acute in the field of human resource development
(HRD). HRD researchers
and practitioners are at the cusp of fostering learning and
development in diverse
workplace settings that benefit not only individuals and the
organization but also
society and the common good (Reio, 2007). As applied social
scientists, HRD
professionals need to better understand how to foster learning
and development
optimally, as organizational support for such activities can
range from being weak or
nonexistent (e.g. management not valuing or implementing a
formal mentoring
program) to strong (e.g. pressing need for cross-cultural
training for expatriate
managers in an important new geographic region). These better
understandings will
contribute to organizational efforts to attain and sustain
competitive advantage through
The current issue and full text archive of this journal is
available on Emerald Insight at:
www.emeraldinsight.com/2046-9012.htm
called for additional HRD theory-building methods articles, as
well as theory generating
and testing articles, to support present and future research needs
in the field. Reio et al.
(2015) highlighted the particular importance of accurate
reporting, as erroneous
analysis and subsequent reporting of findings can be highly
problematic for theorists
and researchers. Reio and Shuck (2015), for instance, noted how
inappropriate decision
practices in exploratory factor analysis (e.g. using the wrong
factor extraction or factor
rotation procedure) were all too common, leading to inaccurate
interpretations of data
when analyzed. The inaccurate data analysis interpretations, in
turn, can lead to
erroneous reporting that renders otherwise solid research into
something that is
ambiguous and incorrect. This state of affairs can indeed be
detrimental, because future
researchers all too often unwittingly use such unsound
scholarship as the foundation of
their own empirical research (Shaw et al., 2010).
Additionally, of great import is that inaccurate reporting
contributes to troubling
perceptions that social science research tends to be sloppy, of
low quality and, therefore,
lacking scientific credibility (Frazier et al., 2004; Levin, 1994).
This notion of credibility
has been an issue since the early part of the twentieth century,
so much so that Campbell
and Stanley (1963, p. 2) in one of their seminal works saw
experimental research as the:
[…] only means for settling disputes regarding educational
practice, as the only way of
verifying educational improvements, and as the only way to
establish a cumulative tradition in
which improvements can be introduced without the danger of a
faddish discharge of old
wisdom in favor of inferior novelties.
Campbell and Stanley hailed experimental research strongly
because they saw it as the
only true means to establish causality. Causal inferences and
thus causality can be
claimed through this type of research, because of its ability
through random assignment
to handle internal validity issues (i.e. history, maturation,
testing, instrumentation,
statistical regression, selection, experimental mortality and
selection-maturation
interaction). Campbell and Stanley presented four threats to
external validity that are
handled to some lesser degree by experimental research designs
as well; that is, reactive
effect of testing, interaction effects of selection biases and the
experimental variable,
reactive effects of experimental arrangements and multiple-
treatment interference.
Consequently, because of its perceived precision and therefore
rigor, they saw
experimental research as the one best way to move the field of
education and, by
extension, social sciences forward.
Some researchers have argued that nonexperimental research should be used for little beyond finding associations between variables and falsifying hypotheses, because it cannot handle internal validity or rival-explanation issues sufficiently well (Bullock et al., 2010; Duncan and Gibson-Davis, 2006; Mark, 2006; Stone-Romero and Rosopa, 2008; White and Pesner, 1983). Indeed, although not
limited to the social sciences (basic science results are all too often interpreted and
generalized beyond what is supportable with the data
[Goodman, 2008; Ioannidis,
2005]), there is evidence that social science researchers can
sometimes interpret
nonexperimental results beyond what they were designed to
support. For example,
making “causal”
statements and causal conclusions in five of the top-tier
teaching-and-learning research
journals based on nonexperimental (nonintervention) data. In an
interesting extension
of Robinson et al.’s research, Shaw et al. (2010) discovered that
inappropriate practice
recommendations based on nonintervention research were
perpetuated by educational
researchers themselves (i.e. they cited nonintervention studies
when making causal
statements). Likewise, using the same journals as in the Robinson et al. study, Reinhart
et al. (2013) found that the use of prescriptive statements for
practice based on
nonexperimental data had indeed increased over the 2000-2010
time period, despite a
distinct decrease in published intervention studies. Finally,
Hsieh et al. (2005) found that
over a 21-year period, articles based on randomized experiments
had decreased in
educational psychology and education, suggesting a lack of
attention to answering
previous calls (Levin, 1994) for higher-quality educational
intervention research. Hsieh
et al. lamented that this was unfortunate in that this was the
type of research that could
bridge the research-to-practice gap.
HRD researchers should be alert to issues associated with the
proper reporting of
nonexperimental findings. Unmistakably, reporting issues
associated with
nonexperimental results are salient because equivocal, oblique
reporting can be
interpreted as being sloppy and unconvincing at best and simply wrong at worst.
Imprecise reporting that includes using causal language to
describe nonexperimental-
generated results can be deleterious to theory building and
future research (Bollen and
Pearl, 2013; Cook and Campbell, 1979; Reio, 2011). As
indicated by the aforementioned
discussion, there is conflicting direction in the literature as to what constitutes the best
reporting practices required to guide social science researchers
in correcting this
situation, as it relates to proper reporting and generalizing. To
address this information
gap, I will examine the vexing issue associated with the precise
reporting of research,
such as avoiding causal statements based on nonexperimental
data and making
tenuous, but supportable, recommendations for practice based
upon nonexperimental
research designs. First, to undergird discussion in the paper, a
brief introduction
regarding the relative strength of research designs is presented.
The strength of
research design section is followed by a section presenting the
strengths and
weaknesses of nonexperimental research and subsequently by a
section highlighting
some possible ways to more precisely report nonexperimental
findings without using
causal language. Next, recommendations are made with regard
to how to take
nonexperimental results and make them explicitly useful for
practice. In the final
section, I present some concluding thoughts about
nonexperimental research and the
appropriate use of causal language.
Relative strength of research designs
I must make it clear at this point that there is no such thing as a
perfect research design,
as each has its strengths and weaknesses (Cook and Campbell,
1979; Coursey, 1989;
Howe, 2009; Maxwell, 2004; Newman et al., 2002; Schilling,
2010; Spector and Meier,
2014; Sue, 1999). Thus, in many ways, no particular research
design should be in the
ascendancy. Rather, the research design that matters the most is
the one that will most
elegantly, parsimoniously and correctly, within ethical
boundaries, support answering
the research questions or testing the hypotheses associated with
a study (Johnson, 2009;
Reis, 2000). Generally speaking, social scientists tend to stress
a weakness or strength of
a research design so much so that they can become dogmatic
and narrow in their views
(Bredo, 2009). For example, Sue (1999) criticizes experimental
designs and their ability to
make causal inferences as it relates to cross-cultural research on
external validity
grounds; Stone-Romero and Rosopa (2008), in contrast, strongly
promote the superiority
of experimental designs for making causal inferences, not
acknowledging the possible
external validity issue. Both are quite correct in their assertions,
but they are similar to
ships passing in the night; their focus is related to the utility of
experimental research for
their research purposes.
To move past perhaps unnecessarily narrow views, it might be
useful to think of the
assumptions underlying this research because we must
remember that these
assumptions can shape interpretation of the findings. When
selecting a research
method, there are three major assumptions to consider: context,
causal inference and
generalization (Coursey, 1989). If we select a phenomenological study, we emphasize context over generalization and causal inference. Alternatively, if we choose a survey study, we favor generalization over the other two assumptions. Further, experimental research emphasizes causal inference rather than generalization and context.
Still, honoring context, causal inference and generalizability
can be at odds within the
limited confines of a study, and, therefore, the researcher must
make tradeoffs. The
tradeoffs, in turn, can be compensated for in a research design sense by triangulation: the use of a variety of data collection methods, data, theories, or
observers to provide
convergent evidence and thereby bolster the validity of the
findings (Maxwell, 2004).
The assumption behind this is that using different, independent
measurements of the
same phenomenon can provide a means of counterbalancing the
weaknesses of one
research method with the strengths of another, especially in low
“n” studies (Newman
et al., 2013).
Being mindful that the decision as to which research design to
use is a function of the
researcher’s motivation (e.g. Do we want to generalize or make
causal inferences? Does
the context matter?), available resources and available
population samples, the
triangulation approach seems tenable because it supports the
utility of trade-offs for
supporting the researcher’s aims. Indeed, triangulation speaks
well to mixed-method
designs, but I must acknowledge that such a research design is
controversial because of
philosophical and methodological issues (see Johnson, 2009, for a fuller exploration of this
issue). Again, narrow and sometimes unbalanced views dampen
the intellectual risk
taking required to promote the new thinking consistent with
exploring all sides of
important issues (Reio, 2007). We cannot move forward in the
social sciences if we are
bound unrealistically and intransigently to outdated notions as
to what constitutes
rigorous research (Howe, 2009; Maxwell, 2004). In its place, a
means to move forward in
the social sciences, and thereby HRD, might be to “advocate
thinking in terms of
continua on multiple philosophical and methodological
dimensions” (Johnson, 2009,
p. 451).
Perhaps a better way to think of research designs, then, is that
their strengths and
weaknesses are a matter of degree. Building upon Coursey’s
(1989) assumptions
underlying the selection of a research method, I propose refining
thinking about the relative
strength of research designs along three continua: causality,
generalization and context.
Another way might be to examine internal and external validity
along continua, or
theory-building and theory testing, which have been explored by
previous researchers
(Cook and Cook, 2008; Mark, 2006). In the present example, considering causality along
a continuum from weak to strong, looking at a set of secondary
data that are not
theoretically or empirically driven, would be an example of no
causality evidence. No
manner of statistical analysis could make this anything more
than just exploring the
data; hence, it is arguably almost useless and can even be
detrimental. Nonexperimental
methods such as case studies, phenomenologies, biographies,
focus groups, interviews
and surveys would be considered weak on the causality
continuum, followed by
quasi-experimental (less weak) and finally experimental
(strong). Another continuum
could be generalization, ranging across experimental (weak),
quasi-experimental
(weak) and nonexperimental (moderate). Last, context would
range from experimental
(weak) and quasi-experimental (weak) to nonexperimental
(moderate). Within the
nonexperimental design area, quantitative and qualitative
methods (e.g. surveys and
focus groups) could be examined along the three continua as
well. The point is knowing
beforehand the relative strengths and weaknesses associated
with a research design
along the three continua, balanced against the researcher’s
motivation, available
resources and an accessible research population, could be a
useful tool not only in the
research planning process but also for thinking about and
accurately interpreting the
results with an eye toward creatively and accurately making
recommendations for
future research and tentatively guiding practice.
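The relative ratings just described can be captured, purely as an illustrative sketch (the lookup structure and the "less weak"/"moderate" labels paraphrase the rankings above; nothing here comes from a published instrument), in a small table a researcher might consult during planning:

```python
# Illustrative only: the three continua encoded as a lookup table.
# Ratings paraphrase the rankings given in the text.
DESIGN_STRENGTH = {
    "experimental":       {"causality": "strong",    "generalization": "weak",     "context": "weak"},
    "quasi-experimental": {"causality": "less weak", "generalization": "weak",     "context": "weak"},
    "nonexperimental":    {"causality": "weak",      "generalization": "moderate", "context": "moderate"},
}

def compare_designs(dimension):
    """Return each design's rating on a single continuum."""
    return {design: ratings[dimension] for design, ratings in DESIGN_STRENGTH.items()}

print(compare_designs("causality"))
# {'experimental': 'strong', 'quasi-experimental': 'less weak', 'nonexperimental': 'weak'}
```

Such a table would, of course, be weighed alongside the researcher's motivation, available resources and accessible population.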
Strengths and weaknesses of nonexperimental research
Nonexperimental research designs are either quantitative (e.g.
surveys), qualitative (e.g.
interviews) or mixed-method (e.g. case studies).
Nonexperimental research can be
distinguished from experimental research in that with
experimental research, the
researcher manipulates at least one independent variable,
controls as many other
theoretically relevant variables as feasible and subsequently
observes the effect on one
or more dependent variables (Campbell and Stanley, 1963;
Shadish et al., 2002).
Nonexperimental research, on the other hand, does not involve
manipulation by the
researcher and instead focuses on finding linkages or
associations between variables.
Researchers tend to see nonexperimental research as being
useful at the early stages of
a line of research (Cook and Cook, 2008; Johnson, 2001;
Maxwell, 2004; Newman et al.,
2013) such as preliminarily establishing that a hypothesized
relation exists between two
variables as predicted by theory (Dannels, 2010). Yet, it is not
appropriate for theory
validation because such research is not capable of eliminating
Campbell and Stanley’s
(1963, p. 5) 12 factors that “jeopardize the validity of various
experimental designs”.
Nonexperimental design is prevalent in social science research
as it often is not
feasible or even ethical to manipulate an independent variable
(e.g. workplace incivility)
for the sake of one’s research (Cook and Cook, 2008; Johnson,
2001; Kerlinger, 1986;
Mark, 2006; Spector and Meier, 2014). Accordingly, because
we cannot ethically
manipulate independent variables such as workplace incivility,
binge drinking,
physical disabilities and hours worked per week, we are bound
to using
nonexperimental methods when researching such variables.
Despite this potential
limitation, a decided benefit of nonexperimental research is that
it is relatively
inexpensive and easy to perform, particularly survey research.
Surveys are also very useful for measuring perceptions, attitudes and behaviors such
that the data generated
can be used for correlational analyses to establish the strength
and direction of
important relations that can guide future experimental study
(Cook and Cook, 2008).
Finding moderate to strong negative relations between
workplace incivility and job
performance, although not causal, puts forward the notion that
managers’ efforts to
improve job performance might be focused at least preliminarily
on the workplace
incivility issue.
Notwithstanding the recognized drawbacks to conducting
experimental research
(e.g. external validity), nonexperimental research has been
much maligned in that it is
perceived by some as not even being true research because it does not support
testing for cause-and-effect relations. Conversely, the well-
respected research
methodologist Kerlinger (1986) noted that nonexperimental
research could be thought of
as being more important than experimental research because
without it, we might not
have even the most rudimentary understanding of links among
variables that are not
amenable to experimentation. He indicated, in essence, that there have been more sound and significant nonexperimental behavioral and education studies than the sum total of experimental studies. It is not hard to think of nonexperimental
research over the years
that confirmed causal relations well before the causal
mechanisms had been determined.
For example, doctors had knowledge about the healing
properties of penicillin and
aspirin for some time before they discovered why the treatments
were beneficial
(Gerring, 2010).
Experimental research, despite its ability to support making
causal inferences, tends
to be costly, time-consuming and difficult to conduct in the
context of workplace settings
where it can be highly challenging to find organizations willing
to participate in such
research (Cook and Cook, 2008). Still, the reigning argument is
that experimental
research supported by randomized controlled trials (RCTs) is the
“gold standard” for social
science research, because it is touted as the only true means to
infer cause-and-effect
relationships (Shadish et al., 2002; Stone-Romero and Rosopa,
2008). This view,
however, has been challenged vigorously by researchers from
quantitative, qualitative
and mixed-method research traditions. Coursey (1989, p. 231)
suggests the idea that
“nonexperimental designs produce internally invalid results is
overstated” because not
all internal validity threats (e.g. history, maturation and subject
selection) can be
completely eliminated and there are limits to the use of random
assignment (Cook and
Campbell, 1979; Shadish et al., 2002). For instance, random
assignment can be
problematic when participants believe that they are receiving a
less than desirable
treatment (e.g. attend a series of lectures from management
experts about how to be a
motivational manager) as compared to the experimental group in
a study (e.g.
participate in an engaging program that integrates the latest
motivation information
from experts along with hands-on activities, cases and
application that draw upon the
best of adult learning practices), and they alter their behavior
(i.e. they half-heartedly
participate and, thus, do not learn to be as motivational as they
could have) related to the
dependent variable (being a more motivational manager).
Maxwell (2004) wrote a compelling challenge to the notion that
experimental
research was the only way to make causal inferences. Citing a
long line of qualitative
researchers, Maxwell noted the near folly of ignoring other
plausible means of getting
at causality. Instead of the regularity view of causality where
causal processes are
ignored in favor of causal effects (introduction of x caused y;
this is the dominant view of
causality), Maxwell advocated a reality approach where
causality refers to the actual
causal mechanisms and processes involved in events and
situations. Qualitative
research is uniquely adept at handling the study of causal
mechanisms and processes.
Thus, taking a reality approach might be an appropriate means
to augment
experimental research seeking causal effects or vice versa.
Neither approach, therefore,
is superior; both together herald the use of mixed-method
approaches to get at causal
effects and causal processes in social science research.
The overall contributions of nonexperimental research have
certainly been profound and will continue to be as long as we as social scientists need to
push the theoretical,
conceptual, empirical and practical boundaries of our respective
fields. As HRD
researchers, we need to know that plausible associations among
variables exist through
nonexperimental research before we can design experimental
studies that would
support making causal inferences that one variable (e.g.
curiosity) has an effect on
another (e.g. learning) after an intervention or manipulation by
the researcher.
The importance of precisely reporting nonexperimental findings
No matter the type of research, the findings must be reported
unambiguously and
accurately to allow researchers to understand what was found
and afford them the
opportunity to falsify the results (Fraenkel and Wallen, 2009;
Grigorenko, 2000; Reis,
2000). In the case of nonexperimental research, imprecise and mistaken language (e.g. causal language; Bullock et al., 2010; Levin, 1994; Mark, 2006; Reinhart et al., 2013) must be assiduously avoided when presenting findings, because the imprecise research, when published, becomes part of the
scientific literature and serves
as a stepping stone for theory building and research related to
the topic area (Reio et al.,
2015; Spector and Meier, 2014; Stone-Romero and Rosopa,
2008). To be sure, theory
building and research can only proceed with accurately reported
empirical studies to
build upon. Meta-analyses, for example, are used to support
theory building (Callahan
and Reio, 2006; Sheskin, 2000). Because meta-analyses allow
researchers to support a
conclusion about the validity of a hypothesis based upon more
than a few studies, they
may be subject to misinterpretation when using nonexperimental
findings that have
been erroneously reported. Thus, meta-analytic results can be only as good as the studies comprising them; that is, they must be accurate and precise (Newman et al., 2002)
and not go beyond the data to make untenable, causal claims
(Robinson et al., 2007).
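To see concretely why pooled results can be only as good as their inputs, consider a minimal fixed-effect pooling of correlations via Fisher's z transform. This is a standard textbook procedure sketched here as my own illustration, not a method prescribed by the article, and the study values are hypothetical:

```python
import math

# Illustrative fixed-effect meta-analysis of correlations (hypothetical values).
# Weights are n - 3, the inverse sampling variance of Fisher's z.
def pooled_correlation(studies):
    """Pool (r, n) pairs into one correlation via Fisher's z transform."""
    num = den = 0.0
    for r, n in studies:
        z = math.atanh(r)   # Fisher z transform of r
        w = n - 3           # inverse of Var(z) = 1 / (n - 3)
        num += w * z
        den += w
    return math.tanh(num / den)  # back-transform pooled z to r

# Three hypothetical incivility-performance correlations: the pooled value
# inherits whatever reporting errors or biases the input studies carry.
print(round(pooled_correlation([(-0.40, 120), (-0.30, 80), (-0.50, 60)]), 3))
# → -0.395
```

If any of the input correlations had been misreported, the pooled estimate would silently absorb that error, which is precisely the concern raised above.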
As social scientists, we are taught to honor the scientific
method and remain
skeptical, but not inflexible, consumers of research (Bredo, 2009; Howe, 2009; Kerlinger, 1986; Mark, 2006; Maxwell, 2004, 2013). As part of this
teaching, we are introduced to
the literature of our respective fields and a number of research
methods courses that
permit one to not only understand the strengths and weaknesses
of each data collection
method but also support the critique of published research based
on the relative strength
of the research design (Reis, 2000). Part of one’s methods
exposure is learning things
such as “correlation does not imply causation”, “experimental
research is the sole means
available to researchers to support causal inferences” and
“qualitative research is not
generalizable” (Shadish et al., 2002). Thus, by the end of one’s
graduate training, one
should be well versed in the caveats of conducting research.
Unfortunately, often what
does not occur is the researcher has been schooled sufficiently
in how to avoid making
inadvertent, nuanced interpretation and writing errors (that
includes using causal
language) that feed perceptions of sloppy, unscientific social
science research
(Sternberg, 2000). This regrettable situation can carry through
into one’s academic career until a colleague, reviewer or editor apprises the researcher of the problem.
Therefore, it is incumbent upon us as colleagues, mentors,
reviewers and editors to do
our best to assist moving the field along through helping
researchers polish and perfect
their interpretation and reporting practices (Reio, 2011;
Sternberg, 2000). This paper, for
example, is one such attempt to augment researchers’
knowledge about the issue of
imprecise and mistaken reporting of nonexperimental research
and the associated
pitfalls.
Academic writing, then, is clearly a craft and should be treated as such, as in any other genre of writing (Sanders-Reio et al., 2014). Dissertation
writing, for instance, is
often the first entry into the world of scholarly writing and, not
surprisingly, the issues
of inaccurate or inappropriate interpretation and reporting are
all too real (Clark, 2004).
Similarly, when reviewing articles for research journals, the
same issue appears, but
fortunately somewhat less so (Beins and Beins, 2012; Sternberg,
2000). As Robinson
et al. (2007), Reinhart et al. (2013) and Shaw et al. (2010) have found, the imprecise and incorrect writing issue has not disappeared, even in articles published in top-tier teaching-and-learning research journals.
Someone might reasonably ask “Why do these writing errors
persist, even in the best
social science journals?” First, I do not think it is the case, as
some seem to suggest
(Stone-Romero and Rosopa, 2008), that authors are trying to
make their research appear
more important; social science methods training simply does not
permit this style of
erroneous thinking. More plausibly, what may be missing is that
the authors do not
realize that they are making such errors in the first place
(Sternberg, 2000). For example,
in a hypothetical organizational study, a researcher found that experiencing workplace incivility correlated negatively with organizational commitment, job satisfaction and job performance.
When interpreting the results, the researcher is speaking too
strongly when he states:
There is a strong negative relationship between experiencing
workplace incivility and job
performance. This means that the more one experiences
workplace incivility, the more likely
they will perform poorly on their jobs.
Using the word “means” in this context has causal implications.
The research write-up
can be polished and made more appropriate by toning down the
sentence to read:
Understanding that these results are correlational, preliminary,
and that no causality can be
implied, the strong association between the variables suggests
more research is warranted
that would tease out why this might be so.
As another example, based on a phenomenological study of
the meaning of
experiencing workplace incivility to a group of participants in
one organization, if a
researcher claimed:
The results of the interviews suggested that experiencing
workplace incivility was
demeaning, counter-productive, and detrimental to performing
well. Organizations, therefore,
should find more ways to reduce the likelihood of experiencing
incivility if they ever hope to
improve job performance.
In this case, the researcher is going beyond the data to
generalize to other organizations,
and the sentence could be toned down to read:
It seems to be the case that experiencing workplace incivility
was a salient issue in this
organization for the majority of participants in that they noted
that incivility tended to dampen
their performance. Future qualitative research might be
designed where researchers could
look into this issue at other types of organizations to understand
how and why job
performance suffers when experiencing workplace incivility.
As another example, consider a nonexperimental study in which path-analytic procedures were used to test a theoretical model (all observed variables) in which negative affect was associated with workplace incivility and then, in turn, job performance. The
researcher found a strong,
statistically significant negative path coefficient on the
incivility to performance path in the
model. The researcher reported that “The strong negative path
coefficient on the incivility to
performance path implies that workplace incivility influences
job performance”. The terms
“implies” and “influences” are causal terms and, despite the
understanding that path models
are a form of “causal model” (see Bollen and Pearl, 2013, for a
fuller exploration of “causal
modeling”), it suggests inappropriately by virtue of the
language used that incivility in the
context of this study had a causal link to performance. Another
way to present the results of
the path analysis could be to state:
The strong negative path coefficient on the incivility to
performance path is a preliminary
indication that incivility and performance are strongly
associated. Although this model could
not be tested experimentally, it could be replicated in a wide
variety of workplace contexts to
test whether the strength and direction of the relationships
represented by this model would be
consistent, thereby supporting the model.
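A hedged sketch of this path example, using entirely hypothetical simulated data (the coefficients, sample size and variable names are my own assumptions, not the study's), estimates the standardized single-predictor path by its equivalent, the Pearson correlation, and reports it without causal language:

```python
import random
import statistics as stats

# Entirely hypothetical simulated data for the path example in the text:
# negative affect -> workplace incivility -> job performance.
random.seed(0)
n = 300
neg_affect = [random.gauss(0, 1) for _ in range(n)]
incivility = [0.5 * a + random.gauss(0, 1) for a in neg_affect]
performance = [-0.6 * i + random.gauss(0, 1) for i in incivility]

def std_path(x, y):
    """Standardized single-predictor path coefficient (equal to Pearson's r)."""
    mx, my = stats.fmean(x), stats.fmean(y)
    sx, sy = stats.pstdev(x), stats.pstdev(y)
    return stats.fmean(((xi - mx) / sx) * ((yi - my) / sy)
                       for xi, yi in zip(x, y))

b = std_path(incivility, performance)
# Hedged, non-causal reporting of a nonexperimental path coefficient:
print(f"Incivility -> performance path: {b:.2f} (a preliminary indication "
      "of a strong association; no causality can be implied from these "
      "nonexperimental data).")
```

Note that the printed sentence deliberately mirrors the corrected wording above: association, not influence.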
Consequently, researchers need to be acutely aware of causal
terms such as “effect”,
“affect”, “influence”, “imply”, “suggest”, “infer” and “indicate”
to avoid biasing
reporting of their nonexperimentally based findings. Using
terms such as “may”,
“perhaps” and the like is more tentative and correct.
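One way to operationalize this advice is a simple draft checker that flags the causal terms listed above. The helper below is hypothetical (the function name and crude prefix-matching rule are my own, not from the article), so its output would still need human judgment:

```python
import re

# Causal terms the article lists as ones to watch for; inflected forms
# (e.g. "implies") must be listed separately for the prefix match below.
CAUSAL_TERMS = ["effect", "affect", "influence", "imply", "implies",
                "suggest", "infer", "indicate"]

def flag_causal_language(draft):
    """Return the listed causal terms appearing (as word prefixes) in a draft."""
    words = re.findall(r"[a-z]+", draft.lower())
    return sorted({t for t in CAUSAL_TERMS
                   if any(w.startswith(t) for w in words)})

draft = ("The strong negative path coefficient implies that workplace "
         "incivility influences job performance.")
print(flag_causal_language(draft))  # → ['implies', 'influence']
```

A flagged term is only a prompt to reconsider the sentence; as the corrected examples above show, some uses (e.g. "suggests more research is warranted") remain appropriate.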
I want to caution the reader about the “causal” terms emerging
from structural
equation modeling (SEM) research (Bollen and Pearl, 2013).
Interestingly, Dannels
(2010) noted that because SEM as a statistical modeling
technique is based on a
priori theory and stochastic assumptions, causality claims may
be warranted in
some cases because the statistical model is in a sense a
depiction of the theory being
tested. That is, when variable X significantly predicts variable
Y as predicted by the
theoretical model being tested, the relationship between the two
variables could be
interpreted as being causal. Thus, variable X is said to have
influenced or caused
variable Y. Although this “causal modeling” interpretation of
what constitutes
causality may seem justifiable, it simply cannot be causal in the
truest sense of the
term with nonexperimental data because of the lack of
manipulation of at least one
independent variable by the researcher (Bollen and Pearl, 2013).
Therefore, causal
claims should be avoided even with SEM based on
nonexperimental data, despite its
statistical sophistication.
Another term to avoid is “causal-comparative research” because
it is an outdated and
confusing term that some suggest implies cause-and-effect,
which simply is not so
because it is little more than another form of correlational
research (Johnson, 2001). At
the very least, when interpreting and writing up
nonexperimental quantitative results,
the researcher could always use the caveat “With the
understanding that these data are
correlational and thus no causality can be implied […]”. The
researcher must make it
clear to the reader that they are being appropriately cautious by
not going beyond their
data when discussing their work. Similarly, for qualitative
findings, the researcher
could use the caveat “As this was qualitative research,
generalizing the results beyond
the study would not be appropriate”.
In this section, I explored the need for precision in the social
sciences when
interpreting and reporting findings. The use of causal language
with nonexperimental
research has been troubling because it continues unabated,
despite being incorrect and
misleading. The use of terms such as “causal modeling” when
using SEM and
“causal-comparative” research was highlighted as being
noteworthy because of the
confusion they engender. Instead, eschewing terms such as
“influence” and “implies”
would be positive first steps in staying away from causal
language in nonexperimental
research.
As the next step in our exploration of using and reporting nonexperimental findings appropriately, I take the position that
nonexperimental research
designs can be used to support making tentative practice
recommendations.
Nonexperimental results can be used tentatively for making
practice
recommendations
In applied social science research such as HRD, there is quite a
bit of emphasis on
taking research and making some sense of its practical
implications. If we find
evidence, for instance, after testing a hypothesized model where
a combination of
select demographic variables (age, gender), negative affect and
workplace incivility
predict job performance, the natural question is “So what?” This
question can be
handled pretty readily in terms of talking about its possible
theoretical and
empirical implications, but unless we are using experimental
research-generated
data, there should be some caution against using it for making
practice
recommendations (Cook and Cook, 2008). Reinhart et al. (2013), on finding that nonintervention research was increasingly being used in place of experimental research in the educational psychology field, strongly recommended that such data be used strictly for disconfirming, not confirming, hypotheses or, stated differently, for falsifying rather than validating theory. The
researchers also stressed
that prescriptive statements for practice should be only
acceptable based on
experimental evidence. Likewise, they specified that
recommendations for practice
should be based on “evidence-based” practice, which essentially
is RCT,
experimental research. This in itself seems terribly enigmatic
considering that less
than 5 per cent of published educational research is
experimental (Reinhart et al.,
2013). Thus, by extension, we would have very little research
on which to base our
practice. Undoubtedly, there must be a more productive middle
ground if we are to
make any meaningful contributions to improving practice.
Acknowledging that RCT experimental research offers the best
opportunity for
eliminating most sources of bias, we must recall that RCTs
merely inform us whether
the intervention worked, but not why. Notably, we must also
bear in mind that most
social science does not lend itself to such designs; they are
simply infeasible (Cook and
Campbell, 1979; Johnson, 2009; Mark, 2006; Spector and
Meier, 2014). Even Campbell
and Stanley (1963, p. 3) strongly advocated that experiments
must be replicated and
cross-validated at “other times and other conditions before they
can become an
established part of science”. Thus, merely conducting one or two experimental studies is not a sufficient basis for making recommendations for either research or practice, unless it is done tentatively.
If using a nonexperimental design, we must accept that we will
not be able to
eliminate all possible rival explanations (Campbell and Stanley,
1963). However,
through our research designs, we can eliminate within reason
the most pressing
theoretical ones. Again, mixed-method designs appear best suited for handling this kind of dilemma because they rely on a number of research
methods to answer
research questions (Johnson, 2009; Newman et al., 2013). It
seems less than
productive and almost implausible to dismiss all qualitative
research and the vast
majority of nonexperimental research for making
recommendations to improve
practice.
I suggest instead that nonexperimental findings should be used
tentatively, rather
than definitively, with the clear understanding that whatever recommendations are made for practice, they be preliminary,
unambiguous and
precise, without the use of causal language. To help move the
field forward, practice
recommendations should be balanced with recommendations for
further research that
might more strongly support this practice recommendation. For
instance, if an
organizational researcher found a moderate to strong positive
association between
curiosity and training classroom learning, as predicted by well-
validated theory and
empirical research (Reio and Wiswell, 2000), it would
preliminarily suggest that
promoting curiosity may be something worthy of consideration
in training classrooms.
This preliminary finding would support designing experimental
research where a
curiosity-inducing intervention would be introduced and tested.
Another example
might be after finding through focus group research that mid-
level managers at a large
organization did not feel they had the requisite cross-cultural
skills to manage
supervisees in a foreign country, which is consistent with prior
expatriate manager
research (Selmer, 2000): one would at least have a sense that
additional cross-cultural
training might be needed, despite the lack of experimental
evidence. Future research
could be designed to examine theoretically relevant variables
that might shed light on
the variables that mattered the most when considering expatriate
manager
performance.
Accordingly, despite possessing nothing more than
nonexperimental data in the two
examples, I demonstrated how it would be possible to use such
data to tentatively and
correctly guide practice. Narrow views of what constitutes
acceptable research to
support practice recommendations seem unnecessary. Although
well intentioned, these
narrow views tend to be unproductive in the sense that
enriching, interesting and
path-breaking nonexperimental research might be overlooked
for the sake of the
overzealous pursuit of “rigor” (Maxwell, 2004). The gold
standard of RCT research is not possible, nor should it even be desired, in every single case because of its inherent flaws as a research design (e.g. generalization).
Conclusions
This article was predicated upon the notion that there is a gap in guidance about
best reporting practices for social science research such as
HRD. Through discussing
the relative strength of research designs, highlighting the
strengths and weaknesses
of nonexperimental designs, presenting how we might improve
the precision of
nonexperimental research reporting and generalizing, and
demonstrating how
nonexperimental research can be used cautiously and
preliminarily to inform practice, I
attempted to close the gap in the literature as to what
constitutes best reporting practices
and why.
I conclude that we cannot continue using causal language in
nonexperimental social
science research, and that, as rigorous researchers, we must take this seriously. On the other
hand, we cannot afford to ignore the vast array of social science
research simply because
it is not experimental. Because the large majority of published
social science research is
nonexperimental, it is just too limited to suggest that such
research has no scientific
merit with regard to making research or practice
recommendations. As Kerlinger
(1986) noted some time ago, we must not overlook the
importance of nonexperimental
research because it is the foundation of the social science
research we do. Experimental
research would be rudderless without being provided possible
clues for promising
variables generated by nonexperimental research.
Nonexperimental research would be rendered useless if pointlessly ignored because of its inherent design weaknesses. Remembering that all research designs have limitations, nonexperimental research, if used appropriately, tentatively and correctly, could be useful for building theory, guiding research and informing practice.
References
Beins, B.C. and Beins, A.M. (2012), Effective Writing in
Psychology: Papers, Posters and
Presentations, 2nd ed., Wiley-Blackwell, Malden, MA.
Bollen, K.A. and Pearl, J. (2013), “Eight myths about causality
and structural equation models”, in
Morgan, S.L. (Ed.), Handbook of Causal Analysis for Social
Research, Springer, New York,
NY, pp. 301-328.
Bredo, E. (2009), “Getting over the methodology wars”,
Educational Researcher, Vol. 38 No. 6,
pp. 441-448.
Bullock, J.G., Green, D.P. and Ha, S.E. (2010), “Yes, but
what’s the mechanism? (Don’t
expect an easy answer)”, Journal of Personality and Social
Psychology, Vol. 98 No. 9,
pp. 550-558.
Callahan, J. and Reio, T.G., Jr (2006), “Making subjective
judgments in quantitative studies: the
importance of effect size indices and confidence intervals”,
Human Resource Development
Quarterly, Vol. 17, pp. 159-173.
Campbell, D.T. and Stanley, J.C. (1963), Experimental and
Quasi-Experimental Designs for
Research, Rand McNally College Publishing, Chicago, IL.
Clark, I.L. (2004), Writing the Successful Thesis and
Dissertation, Prentice Hall, Upper Saddle
River, NJ.
Cook, B.G. and Cook, L. (2008), “Nonexperimental quantitative
research and its role in guiding
instruction”, Intervention in School and Clinic, Vol. 44 No. 2,
pp. 98-104.
Cook, T. and Campbell, D.T. (1979), Quasi-Experimentation:
Design and Analysis Issues for Field
Settings, Houghton Mifflin, Boston, MA.
Coursey, D.H. (1989), “Using experiments in knowledge
utilization research: strengths,
weaknesses and potential applications”, Knowledge: Creation,
Diffusion, Utilization,
Vol. 10 No. 1, pp. 224-238.
Dannels, S.A. (2010), “Research design”, in Hancock, G.R. and
Mueller, R.O. (Eds), The Reviewer’s
Guide to Quantitative Methods in the Social Sciences,
Routledge, New York, NY,
pp. 343-355.
Duncan, G.J. and Gibson-Davis, C.M. (2006), “Connecting child
care quality to child outcomes”,
Evaluation Review, Vol. 30 No. 3, pp. 611-630.
Ferguson, K. and Reio, T.G., Jr (2010), “Human resource
management systems and firm
performance”, Journal of Management Development, Vol. 29
No. 5, pp. 471-494.
Gerring, J. (2010), “Causal mechanisms: yes, but…”, Comparative Political Studies, Vol. 43 No. 11, pp. 1499-1526.
Goodman, S.N. (2008), “A dirty dozen: twelve P-value
misconceptions”, Seminars in Hematology, Vol. 45 No. 3, pp. 135-140.
Grigorenko, E.L. (2000), “Doing data analyses and writing up
their results”, in Sternberg, R.J. (Ed.),
Guide to Publishing in Psychology Journals, Cambridge
University Press, New York, NY,
pp. 98-120.
Howe, K.R. (2009), “Epistemology, methodology, and education
sciences: positivist dogmas,
rhetoric, and the education science question”, Educational
Researcher, Vol. 38 No. 6,
pp. 428-440.
Hsieh, P., Acee, T., Chung, W., Hsieh, Y., Kim, H., Thomas,
G.D., You, J., Levin, J.R. and
Robinson, D.H. (2005), “Is educational research on the
decline?”, Journal of Educational
Psychology, Vol. 97 No. 4, pp. 523-529, doi: 10.1037/0022-
0663.97.4.523.
Ioannidis, J.P.A. (2005), “Why most published research findings
are false”, PLoS Medicine, Vol. 2
No. 8, pp. 0101-0106.
Johnson, B. (2001), “Toward a new classification of
nonexperimental research”, Educational
Researcher, Vol. 30 No. 2, pp. 3-13.
Johnson, R.B. (2009), “Toward a more inclusive ‘Scientific
research in education’”, Educational
Researcher, Vol. 38, pp. 449-457.
Kerlinger, F.N. (1986), Foundations of Behavioral Research,
3rd ed., Holt, Rinehart & Winston,
New York, NY.
Levin, J.R. (1994), “Crafting educational intervention research
that’s both credible and creditable”,
Educational Psychology Review, Vol. 6 No. 3, pp. 231-243.
Mark, B.A. (2006), “Methodological issues in nurse staffing
research”, Western Journal of Nursing
Research, Vol. 28 No. 6, pp. 694-709.
Maxwell, J.A. (2004), “Causal explanation, qualitative research,
and scientific inquiry in
education”, Educational Researcher, Vol. 33 No. 2, pp. 3-11.
Maxwell, J.A. (2013), Qualitative Research Design, 3rd ed.,
Sage, Los Angeles, CA.
Newman, I., Fraas, J.W., Newman, C. and Brown, R. (2002),
“Research practices that produce
Type VI errors”, Journal of Research in Education, Vol. 12 No.
1, pp. 138-145.
Newman, I., Ridenour, C.S., Newman, C. and Smith, S. (2013),
“Detecting low-incidence effects: the
value of mixed method research designs in low-n studies”, Mid-
Western Educational
Researcher, Vol. 25 No. 4, pp. 31-46.
Reinhart, A.L., Haring, S.H., Levin, J.R., Patall, E.A. and
Robinson, D.H. (2013), “Models of
not-so-good behavior: yet another way to squeeze causality and
recommendations for
practice out of correlational data”, Journal of Educational
Psychology, Vol. 105 No. 1,
pp. 241-247.
Reio, T.G., Jr (2007), “Exploring the links between adult
education and human resource
development: learning, risk-taking, and democratic discourse”,
New Horizons in Adult
Education and Human Resource Development, Vol. 21 Nos 1/2,
pp. 5-12.