Chapter 3 - Evaluation Rubric

Each criterion is scored either Does Not Meet (0.01 points) or Meets/NA (1 point).

Introductory Remarks
Does Not Meet: The section is missing, or some topic areas are not included in the Introduction or are not explained clearly. The chapter outline is not provided and/or is unclear.
Meets: The reader is adequately oriented to the topic areas covered. An outline of the logical flow of the chapter is presented. All major themes/concepts are introduced.

Research Methodology and Design
Does Not Meet: There is a lack of alignment among the chosen research method and design and the study's problem, purpose, and research questions. There is a lack of justification for the methods and of discussion of alternate choices. For Qualitative Studies: Lacks clear discussion of the study phenomenon, boundaries of case(s), and/or constructs explored.
Meets: Describes how the research method and design are aligned with the study problem, purpose, and research questions. Uses scholarly support to describe how the design choice is consistent with the research method, and alternate choices are discussed. For Qualitative Studies: Describes the study phenomenon, boundaries of case(s), and/or constructs explored.

Population and Sample
Does Not Meet: Lacks a description of the sample, demographics, and the representation of the sample to the broader population. There is little to no description of the inclusion/exclusion criteria used to select the participants (sample) of the study. For Quantitative Studies: A power analysis is not described and appropriately cited.
Meets: Provides a description of the target population and the relation to the larger population. Inclusion/exclusion criteria for selecting participants (sample) of the study are noted. For Quantitative Studies: Power analysis is described and appropriately cited.

Materials/Instrumentation
Does Not Meet: Lacks a description of the instruments associated with the chosen research method and design. Details are missing regarding instrument origin, reliability, and validity. For Quantitative Studies (e.g., tests or surveys): Lacks explanation and proper citation of any permission needed to use the instrument(s); instrument permissions are missing from the appendices. For Qualitative Studies (e.g., observation checklists/protocols, interview or focus group discussion handbooks): Does not clearly explain the process for conducting an expert review of instruments (e.g., provides justification of reviewers being credible; reviewers may include, but are not limited to, NCU dissertation team members, professional colleagues, peers, or non-research participants representative of the greater population); and/or does not clearly explain use of a field test if practicing the administration of the instruments is warranted. For Pilot Study: Does not clearly explain the procedure for conducting a pilot study (or did not conduct a pilot) if using a self-created instrument (e.g., survey questionnaire); does not include explanation of a field test if practicing the administration of the instruments is warranted.
Meets: Provides a description of the instruments associated with the chosen research method and design. Includes information regarding instrument origin, reliability, and validity. For Quantitative Studies (e.g., tests or surveys): Includes any permission needed to use the instrument(s) and cites properly. For Qualitative Studies (e.g., observation checklists/protocols, interview or focus group discussion handbooks): Describes the process for conducting an expert review of instruments (e.g., provides justification of reviewers being credible; reviewers may include, but are not limited to, NCU dissertation team members, professional colleagues, peers, or non-research participants representative of the greater population); describes use of a field test if practicing the administration of the instruments is warranted. For Pilot Study: Explains the procedure for conducting a pilot study (requires IRB approval for the pilot) if using a self-created instrument (e.g., survey questionnaire); explains use of a field test if practicing the administration of the instruments is warranted.

Operational Definitions of Variables (Quantitative Studies Only)
Does Not Meet: Discussion of the study variables examined is lacking information and/or is unclear.
Meets: Describes study variables in terms of being measurable and/or observable. (Reviewer: mark Meets/NA for qualitative studies.)

Procedures
Does Not Meet: Procedures are not clear or replicable. Steps are missing; recruitment, selection, and informed consent are not established. IRB ethical practices are missing or unclear.
Meets: Describes the procedures to conduct the study in enough detail to practically replicate the study, including participant recruitment and notification, and informed consent. IRB ethical practices are noted.

Data Collection and Analysis
Does Not Meet: Does not clearly provide a description of the data and the processes to collect the data. Lack of alignment between the data collected and the research questions and/or hypotheses of the study. For Quantitative Studies: Does not clearly provide the data analysis processes, including, but not limited to, clearly describing the statistical tests performed and their purpose/outcome, coding of data linked to each RQ, and the software used (e.g., SPSS, Qualtrics). For Qualitative Studies: Does not clearly identify the coding process of data linked to RQs; does not clearly describe the transcription of data or the software used for textual analysis (e.g., NVivo, Dedoose); does not justify manual analysis by the researcher. Explanation of the use of a member check to validate the data collected is missing or unclear.
Meets: Provides a description of the data collected and the processes used in gathering the data. Explains alignment between the data collected and the research questions and/or hypotheses of the study. For Quantitative Studies: Includes the data analysis processes, including, but not limited to, describing the statistical tests performed and their purpose/outcome, coding of data linked to each RQ, and the software used (e.g., SPSS, Qualtrics). For Qualitative Studies: Identifies the coding process of data linked to RQs. Describes the transcription of data, the software used for textual analysis (e.g., NVivo, Dedoose), and any manual analysis by the researcher. Describes the use of a member check to validate the data collected.

Assumptions/Limitations/Delimitations
Does Not Meet: Does not clearly outline the assumptions/limitations/delimitations (or has missing components) inherent to the choice of method and design. For Quantitative Studies: Does not include, or lacks, key elements such as, but not limited to, threats to internal and external validity, credibility, and generalizability. For Qualitative Studies: Does not include, or lacks, key elements such as, but not limited to, threats to credibility, trustworthiness, and transferability.
Meets: Outlines the assumptions/limitations/delimitations inherent to the choice of method and design. For Quantitative Studies: Includes key elements such as, but not limited to, threats to internal and external validity, credibility, and generalizability. For Qualitative Studies: Includes key elements such as, but not limited to, threats to credibility, trustworthiness, and transferability.

Ethical Assurances
Does Not Meet: Lacks discussion of compliance with the standards to conduct research as appropriate to the proposed research design, and is not aligned to IRB requirements.
Meets: Describes compliance with the standards to conduct research as appropriate to the proposed research design and aligned to IRB requirements.

Summary
Does Not Meet: The chapter does not conclude with a summary of key points from the chapter; elements are missing or incomplete, and/or new information is presented.
Meets: The chapter concludes with an organized summary of key points discussed/presented in the chapter.
Issues in Informing Science and Information Technology
Volume 6, 2009
Towards a Guide for Novice Researchers on
Research Methodology:
Review and Proposed Methods
Timothy J. Ellis and Yair Levy
Nova Southeastern University
Graduate School of Computer and Information Sciences
Fort Lauderdale, Florida, USA
[email protected], [email protected]
Abstract
The novice researcher, such as the graduate student, can be
overwhelmed by the intricacies of the
research methods employed in conducting a scholarly inquiry.
As both a consumer and producer
of research, it is essential to have a firm grasp on just what is
entailed in producing legitimate,
valid results and conclusions. The very large and growing
number of diverse research approaches
in current practice exacerbates this problem. The goal of this
review is to provide the novice re-
searcher with a starting point in becoming a more informed
consumer and producer of research.
Toward addressing this goal, a new system for deriving a
proposed study type is developed. The
PLD model includes the three common drivers for selection of
study type: research-worthy prob-
lem (P), valid quality peer-reviewed literature (L), and data (D).
The discussion includes a review
of some common research types and concludes with definitions,
discussions, and examples of
various fundamentals of research methods such as: a) forming
research questions and hypotheses;
b) acknowledging assumptions, limitations, and delimitations;
and c) establishing reliability and
validity.
Keywords: Research methodology, reliability, validity, research
questions, problem directed re-
search
Introduction
The novice researcher, such as the graduate student, can be
overwhelmed by the intricacies of the
research methods employed in conducting a scholarly inquiry
(Leedy & Ormrod, 2005). As both
a consumer and producer of research, it is essential to have a
firm grasp on just what is entailed in
producing legitimate, valid results and conclusions. The very
large and growing number of di-
verse research approaches in current practice exacerbates this
problem (Mertler & Vannatta,
2001). The goal of this review is to pro-
vide the novice researcher with a start-
ing point in becoming a more informed
consumer and producer of research in
the form of a lexicon of terms and an
analysis of the underlying constructs
that apply to scholarly enquiry, regard-
less of the specific methods employed.
Scholarly research is, to a very great
extent, characterized by the type of
study conducted and, by extension, the specific methods
employed in conducting that type of
study (Creswell, 2005, p. 61). Novice researchers, however,
often mistakenly think that, since
studies are known by how they are conducted, the research
process starts with deciding upon just
what type of study to conduct. On the contrary, the type of
study one conducts is based upon three
related issues: the problem driving the study, the body of
knowledge, and the nature of the data
available to the researcher.
As discussed elsewhere, scholarly research starts with the
identification of a tightly focused, lit-
erature supported problem (Ellis & Levy, 2008). The research-
worthy problem serves as the point
of departure for the research. The nature of the research
problem and the domain from which it is
drawn serves as a limiting factor on the type of research that
can be conducted. Nunamaker,
Chen, and Purdin (1991) noted that “It is clear that some
research domains are sufficiently nar-
row that they allow the use of only limited methodologies” (p.
91). The problem also serves as
the guidance system for the study in that the research is, in
essence, an attempt to, in some man-
ner, develop at least a partial solution to the research problem.
The best design cannot provide
meaning to research and answer the question ‘Why was the study conducted?’ if there is not the anchor of a clearly identified research problem.
The body of knowledge serves as the foundation upon which the
study is built (Levy & Ellis,
2006). The literature also serves to channel the research, in that
it indicates the type of study or
studies that are appropriate based upon the nature of the
problem driving the study. Likewise, the
literature provides clear guidance on the specific methods to be
followed in conducting a study of
a given type. Although originality is of great value in scholarly
work, it is usually not rewarded
when applied to the research methods. Ignoring the wisdom
contained in the existing body of
knowledge can cause the novice researcher, at the least, a great
deal of added work establishing
the validity of the study.
From an entirely practical perspective, the nature of the data
available to the researcher serves as
a final filter in determining the type of study to conduct. The
type of data available should be
considered a necessary, but certainly not sufficient,
consideration for selecting research methods.
The data should never supersede the necessity of a research-
worthy problem serving as the anchor
and the existing body of knowledge serving as the foundation
for the research. The absence of the
ability to gather the necessary data can, however, certainly
make a study based upon research
methods directly driven by a well-conceived problem and
supported by current literature com-
pletely futile. Every solid research study must use data in order
to validate the proposed theory.
As a result, novice researchers should understand the centrality
of access to data for their study
success. Access to data refers to the ability of the researcher to
actually collect the desired data
for the study. Without access to data, it is impossible for a
researcher to make any meaningful
conclusions on the phenomena. Novice researchers should be
aware that their access to data will also shape what type of methodology they will use and, eventually, what type of research they will conduct. Figure 1 illustrates the interaction among the
research-worthy problem (P), the
existing body of knowledge documented in the peer-reviewed
literature (L), and the data avail-
able to the researcher (D). The research-worthy problem (P)
serves as the input to the process of
selecting the appropriate type of research to conduct; the valid
peer-reviewed literature (L) is the
key funnel that limits the range of applicable research
approaches, based on the body of knowl-
edge; the data (D) available to the researcher serves as the final
filter used to identify the specific
study type.
The balance of this paper explores the constructs underlying
scholarly research in two aspects.
The first section examines some of the types of studies most
commonly used in information sys-
tems research. The second section explores vital considerations
for research methods that apply
across all study types.
[Figure 1. The PLD Model for Deriving Study Type: the research-worthy problem feeds into the funnel of valid peer-reviewed literature, the available data applies the final filter, and the study type results.]
Types of Research
There are a number of different ways to distinguish among types
of research. The type of data
available is certainly one vital aspect (Gay, Mills, & Airasian,
2006; Leedy & Ormrod, 2005);
different research approaches are appropriate for quantitative
data – precise, numeric data derived
from a reduced variable – than for qualitative data – complex,
multidimensional data derived
from a natural setting. Of equal importance is the nature of the
problem being addressed by the
research (Isaac & Michael, 1981). Some problems, for example,
are relatively new and require exploratory types of research, while more mature problems might better be addressed by descriptive or evaluative (hypothesis testing) approaches (Sekaran, 2003). Table 1 presents an overview of how the research approaches most commonly used in information systems (IS) are categorized.

Table 1: Key Categories of Research

Approach             Most common type of data      Stage of problem     Categories of theory
Experimental         Quantitative                  Evaluation           Testing or revising
Causal-comparative   Quantitative                  Evaluation           Testing or revising
Historical           Quantitative or qualitative   Description          Testing or revising
Developmental        Quantitative and qualitative  Description          Building or revising
Correlational        Quantitative                  Description          Testing
Case study           Qualitative                   Exploration          Building or revising
Grounded theory      Qualitative                   Exploration          Building
Ethnography          Qualitative                   Description          Building
Action research      Quantitative and qualitative  Applied exploration  Building or revising
The subsections following Table 1 briefly explore each of these
major research approaches. In
general, research studies can be classified into three categories:
theory building, theory testing,
and theory revising. Theory building refers to research studies
that aim at building a theory where no prior solid theory existed to explain a phenomenon or specific scenario. Theory testing refers to research studies that aim at validating (i.e., testing) existing theories in a new context. Theory revising refers to research studies that aim at revising an existing theory.
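As a concrete (if simplified) illustration, the PLD funnel can be expressed in a few lines of Python that encode the Table 1 categories as data and filter them by the stage of the problem and the type of data available. The mapping below merely paraphrases Table 1; the function name and selection logic are invented for this sketch and are not part of the PLD model itself.

# A toy sketch of the PLD funnel using the categories from Table 1.
# The dictionary paraphrases Table 1; the logic is illustrative only.
APPROACHES = {
    "experimental":       ({"quantitative"}, "evaluation"),
    "causal-comparative": ({"quantitative"}, "evaluation"),
    "historical":         ({"quantitative", "qualitative"}, "description"),
    "developmental":      ({"quantitative", "qualitative"}, "description"),
    "correlational":      ({"quantitative"}, "description"),
    "case study":         ({"qualitative"}, "exploration"),
    "grounded theory":    ({"qualitative"}, "exploration"),
    "ethnography":        ({"qualitative"}, "description"),
    "action research":    ({"quantitative", "qualitative"}, "applied exploration"),
}

def candidate_study_types(problem_stage, available_data):
    # The literature (Table 1) narrows approaches by the problem's stage;
    # the data actually available to the researcher applies the final filter.
    return [name for name, (data, stage) in APPROACHES.items()
            if stage == problem_stage and data & available_data]

# A new, little-explored problem with only qualitative data at hand:
print(candidate_study_types("exploration", {"qualitative"}))
# -> ['case study', 'grounded theory']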
Experimental
The essence of experimental research is determining if a cause-
effect relationship exists between
one factor or set of factors – the independent variable(s) – and a
second factor or set of factors –
the dependent variable(s) (Cook & Campbell, 1979). In an
experiment, the researcher takes con-
trol of and manipulates the independent variable, usually by
randomly assigning participants to
two or more different groups that receive different treatments or
implementations of the inde-
pendent variable. The experimenter measures and compares the
performance of the participants
on the dependent variable to determine if changes in the
independent variables are very likely to
cause similar changes in performance on the dependent variable.
In medical settings, this type of
research is very common. However, in many research fields it is somewhat difficult to control all the variables in an experiment, especially in research areas related to organizations and institutions. For that reason, the use of true experiments in IS is somewhat limited, and a less restrictive type of experiment, the quasi-experiment, is used instead (Cook & Campbell, 1979). As in experiments, in quasi-experiments the researcher attempts to determine if a cause-effect relationship exists between one factor or set of factors – the independent variable(s) – and a second factor or set of factors – the dependent variable(s). However, in quasi-experiments the researcher is unable to control all the variables in the experimentation, although most variables are controlled.
An example of an experimental study would be research into
which of two methods of inputting
text in a personal digital assistant, soft-key or handwriting
recognition, is more accurate. The in-
dependent variable would be method of text input. The
dependent variable might be a count of
the number of entry errors, and the comparison based on the
mean of the group using the soft-key
method with the mean of the group that used handwriting
recognition input. An example of ex-
perimental research can be found in Cockburn, Savage, and
Wallace (2005).
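For readers who wish to see the comparison in code, the following minimal Python sketch runs an independent-samples t-test on hypothetical error counts. The data and variable names are invented for illustration; they are not drawn from Cockburn, Savage, and Wallace (2005).

# Hypothetical experimental comparison: entry errors per participant,
# randomly assigned to one of the two input-method treatments.
import numpy as np
from scipy import stats

softkey_errors = np.array([4, 7, 5, 6, 3, 8, 5, 4, 6, 5])
handwriting_errors = np.array([9, 6, 8, 11, 7, 10, 9, 8, 12, 7])

# Compare the group means on the dependent variable (error count).
print("soft-key mean:", softkey_errors.mean())
print("handwriting mean:", handwriting_errors.mean())

# The t-test asks whether the difference in means is likely to reflect
# a real effect of the independent variable (input method).
t_stat, p_value = stats.ttest_ind(softkey_errors, handwriting_errors)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")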
Causal-Comparative
As with experimental studies, causal-comparative research
focuses on determining if a cause-
effect relationship exists between one factor or set of factors –
the independent variable(s) – and a
second factor or set of factors – the dependent variable(s).
Unlike an experiment, the researcher
does not take control of and manipulate the independent
variable in causal-comparative research
but rather observes, measures, and compares the performance on
the dependent variable or vari-
ables of subjects in naturally-occurring groupings based on the
independent variable.
An example of a causal-comparative study would be research
into the impact monetary bonuses
had on knowledge sharing behavior as exhibited by
contributions to a company knowledge bank.
The independent variable would be “monetary bonus,” and it
might have two levels (i.e. “yes”
and “no”). The dependent variable might be a count of the
number of contributions, and the com-
parison based upon an examination of the mean number of
knowledge-base contributions made
per employee in companies that provided a monetary bonus
versus the mean number of contribu-
tions made per employee in companies that did not provide a
bonus. Since the researcher did not
assign companies to the “bonus” or “no bonus” categories, this
study would be causal compara-
tive, not experimental. An example of causal-comparative
research can be found in Becerra-Fernandez, Zanakis, and Walczak (2002), who developed a
knowledge discovery technique using
neural network modeling to classify a country’s investing risk
based on a variety of independent
variables.
Case Study
A case study is “an empirical inquiry that investigates a
contemporary phenomenon within its real
life context using multiple sources of evidence” (Noor, 2008, p.
1602). The evidence used in a
case study is typically qualitative in nature and focuses on
developing an in-depth rather than
broad, generalizable understanding. Case studies can be used to
explore, describe, or explain phe-
nomena by an exhaustive study within its natural setting (Yin,
1984). An example of a case study
can be found in the study by Ramim and Levy (2006), who described the issues related to the impact of an insider's attack combined with novice management on the survivability of the e-learning systems of a small university.
Historical
Historical research utilizes interpretation of qualitative data to
explain the causes of change
through time. This type of research is based upon the
recognition of a historical problem or the
identification of a need for certain historical knowledge and
generally entails gathering as much
relevant information about the problem or topic as possible. The
research usually begins with the
formation of a hypothesis that tentatively explains a suspected
relationship between two or more
historical factors and proceeds to a rigorous collection and
organization of usually qualitative
evidence. The verification of the authenticity and validity of
such evidence, together with its se-
lection, organization, and analysis forms the basis for this type
of research. An example of his-
torical research can be found in the study by Grant and Grant
(2008), who conducted a study to
test the hypothesis that a new generation in knowledge
management was emerging.
Correlational
The primary focus of the correlational type of research is to
determine the presence and degree of
a relationship between two factors. Although correlational
studies are in a superficial way similar
to causal-comparative research – both types of study focus on
analyzing quantitative data to de-
termine if a relationship exists between two variables – the
difference between the two cannot be
ignored. Unlike causal-comparative research, in correlational
studies, there is no attempt to de-
termine if a cause-effect relationship exists (variable x causes
changes in variable y). The goal for
correlational studies is to determine if a predictive relationship
exists (knowing the value of vari-
able x allows one to predict the value of variable y). At a
practical level, there is, therefore, no
distinction made between independent and dependent variables
in correlational research.
An example of a simple correlational study would be research
into the relationship between age
and willingness to make e-commerce purchases. The two
variables of interest would be age and
number of e-commerce purchases made over a given period of
time. The comparison would be
based upon an examination of age of each subject in the study
and the number of e-commerce
purchases made by that subject. Since the researcher did not
control either of the variables or at-
tempt to determine if age caused changes in purchases, just if
age could be used to predict behav-
ior, the study would be correlational, not experimental or
causal-comparative. An example of cor-
relational research can be found in Cohen and Ellis (2003).
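As an illustration, the following minimal Python sketch computes Pearson's r for invented age and purchase data. Note that, unlike the experimental sketch earlier, neither variable is designated independent or dependent; the test speaks only to prediction, not causation.

# Hypothetical correlational analysis (invented data).
import numpy as np
from scipy import stats

age = np.array([22, 31, 45, 27, 52, 38, 60, 24, 41, 35])
purchases = np.array([14, 11, 6, 12, 4, 8, 3, 15, 7, 9])  # per 12 months

# Pearson's r measures the strength and direction of the linear
# relationship between the two variables.
r, p_value = stats.pearsonr(age, purchases)
print(f"r = {r:.2f}, p = {p_value:.4f}")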
Developmental
Developmental research attempts to answer the question: How
can researchers build a ‘thing’ to
address the problem? It is especially applicable when there is
not an adequate solution to even test
for efficacy in addressing the problem and presupposes that
researchers don’t even know how to
go about building a solution that can be tested. Developmental
research generally entails three
major elements:
• Establishing and validating criteria the product must meet
• Following a formalized, accepted process for developing the
product
• Subjecting the product to a formalized, accepted process to
determine if it satis-
fies the criteria.
An example of developmental research would be Ellis and
Hafner (2006), which detailed the devel-
opment of an asynchronous environment for project-based
collaborative learning experiences.
Developmental research is distinguished from product development by: a focus on complex, innovative solutions that have few, if any, accepted design and development principles; a comprehensive grounding in the literature and theory; empirical testing of the product's practicality and ef-
fectiveness; as well as thorough documentation, analysis, and
reflection on processes and out-
comes (van den Akker, Branch, Gustafson, Nieveen, & Plomp,
2000).
Grounded Theory
Grounded Theory is defined as “a systematic, qualitative
procedure used to generate theory that
explains, at a broad conceptual level, a process, an action, or
interaction about a substantive topic”
(Creswell, 2005, p. 396). Grounded theory is used when theories
currently documented in the lit-
erature fail to adequately explain the phenomena observed
(Leedy & Ormrod, 2005). In such
cases, revisions for existing theory may not be valid as the
fundamental assumptions behind such
theories may be flawed given the context or data at hand. Table
2 outlines the three key types of
grounded theory design. According to Creswell, “choosing
among the three approaches requires
several considerations” (p. 403). He noted that such
considerations depend on the key emphasis
of the study, such as: Is the aim of the study to follow given procedures? Is the aim of the study to follow predetermined categories? What is the
position of the researcher? An example
of Grounded Theory in the context of information systems
includes the study by Oliver, Why-
mark, and Romm (2005). Oliver et al. used Grounded Theory to
develop a conceptual model on
enterprise-resources planning (ERP) systems adoption based on
the various types of organiza-
tional justifications and reported motives.
Table 2: Types of Grounded Theory Design (Creswell, 2005)

Systematic Design: “emphasizes the use of data analysis steps of open, axial, and selective coding, and the development of a logic paradigm or a visual picture of the theory generated” (Creswell, 2005, p. 397)

Emerging Design: “letting the theory emerge from the data rather than using specific, preset categories” (Creswell, 2005, p. 401)

Constructivist Design: “focus is on the meanings ascribed by participants in a study…more interested in the views, values, beliefs, feelings, assumptions, and ideologies of individuals than in gathering facts and describing acts” (Creswell, 2005, p. 402)
Ethnography
In contrast to a case study, which looks at a particular person, program, or event in considerable depth, in an ethnography “the researcher looks at an entire group – more specifically, a group that shares a common culture – in depth” (Leedy & Ormrod, 2005, p. 151).
According to Creswell (2005),
ethnographic research deals with an in-depth qualitative
investigation of a group that share a
common culture. He indicated that ethnography is best used to
explain various issues within a
group of individuals that have been together for a considerable
length of time and have, therefore,
developed a common culture. Ethnographic research also
provides a chronological collection of
events related to a group of individuals sharing a common
culture. Beynon-Davies (1997) out-
lined the use of ethnographic research in the context of system
development. He noted that for IS
researchers, ethnographic research may provide value in the
area of IS development, specifically
in the process of capturing tacit knowledge during the system
development life cycle (SDLC)
(Beynon-Davies). Crabtree (2003) noted that “ethnography is an approach that is [of] increasing interest to the designers of collaborative computing systems[, r]ejecting the use of theoretical frameworks and insisting instead on a rigorously descriptive mode of research” (p. ix). However, criticism of Crabtree’s advocacy of ethnography in information systems research has also been voiced (Alexander, 2003).
Action Research
Action research is defined as “a type of research that focuses on
finding a solution to a local prob-
lem in a local setting” (Leedy & Ormrod, 2005, p. 114). Action
research is unique in its approach, as the researchers themselves are part of the practitioner group that faces the actual problem the research is trying to address (Creswell, 2005). Additionally, the aim of action research is to investigate a localized and practical problem. According to DeLuca, Gallivan, and Kock (2008), there are five key steps in action research: a) diagnosing the problem; b) planning the action; c) taking the action; d) evaluating the results; and e) specifying lessons learned for the next cycle. During the course of all five steps of the action research, “researchers and practitioners collaborate during each step” (DeLuca et al., p. 49).
Fundamentals of Research Methods
For each study type there is an accepted methodology
documented in texts (Gay et al., 2006;
Isaac & Michael, 1981; Leedy & Ormrod, 2005; Yin, 1984) and
exemplified in the literature
(Levy & Ellis, 2006). As a first step in establishing the value of
a proposed study, the novice re-
searcher is well advised to closely follow the template for the
study type contained in the text and
model her or his research methods after similar studies reported
in the literature. Regardless of the
type of study being conducted, there are a number of important
factors that must be accommo-
dated in an effective description of the research methods. In
brief, the description must provide a
detailed, step-by-step description of how the study will be
conducted, answering the vital “who,
what, where, when, why, and how” questions.
1. What is going to be done
2. Who is going to do each thing to be done
3. How will each thing to be done be accomplished
4. When, and in what order, will the things to be accomplished
actually be done
5. Where will those things be done
6. Why – supported by the literature – for the answers to the
What, Who, How,
When, and Where
A properly developed description of the research methods would
allow the reader to actually con-
duct the study being proposed based upon the processes
outlined. Included among those processes
are: forming research questions and hypotheses; identifying
assumptions, limitations, and delimi-
tations; as well as establishing reliability and validity.
Form Research Questions and Hypotheses
Research questions
Research questions are the essence of most research conducted and act as the guiding plan for
the investigation (Mertler & Vannatta, 2001). In general,
research questions are “specific ques-
tions that researchers seek to answer” (Creswell, 2005, p. 117).
According to Maxwell (2005),
“research questions state what you want to learn” (p. 69). A
research investigation may have one
or more research questions regardless of the specific type of the
research including qualitative,
quantitative, and mixed method types of research. Most quality
peer-reviewed studies will have a
specific section that highlights the research questions
investigated. In published work that does not have such a section, the research questions will appear either at the end of the problem statement or right after the literature review section. Maxwell suggested that a
good research question is one that will point the researcher to
the information that will lead
him/her to understand what he/she set forth to investigate.
According to Ellis and Levy (2008),
“in order for the research to be at all meaningful, there has to be
an identifiable connection be-
tween the answers to the research questions and the research
problem inspiring the study” (p. 20).
However, research questions should not be created in a vacuum, but should be strongly influenced by what the quality literature suggests about the phenomena (Berg,
1998). Moreover, the exact wording
used to note the research questions is vital as the accuracy and
appropriateness of the research
question determine the methodology to be used (Mertler &
Vannatta, 2001).
The nature of the research questions will be dependent on the
type of study being conducted.
Studies based on quantitative data will generally be driven by research questions that are confirmatory and predictive in nature, while studies based on qualitative data will more likely be driven by research questions that are exploratory and interpretive in nature.
Examples of quantitative research questions in the context of information systems include:
- To what extent does users' perceived usefulness increase the odds of their e-commerce usage?
- Do computer self-efficacy and computer anxiety differ significantly between males and females when using e-learning systems?
- What are the contributions of users' systems trust, deterrent severity, and motivation to their misuse of biometrics technology?
- To what degree do team communication and team cohesiveness predict productivity of system development by virtual teams?
Examples of qualitative research questions in the context of information systems include:
- How does training help the implementation success of enterprise-wide information systems?
- Why do user involvement and user resistance help in the systems' requirement gathering process?
- What are the systems characteristics that are valuable to users when using e-learning systems?
- How do e-commerce users define information privacy?
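The wording of a quantitative question often implies an analysis. The first quantitative question above, phrased in terms of odds, points toward logistic regression, for example. The sketch below is a hypothetical illustration with simulated data and invented variable names; it is not drawn from any cited study.

# Hypothetical mapping of an odds-oriented research question to a
# logistic regression (simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
perceived_usefulness = rng.uniform(1, 7, 200)  # 7-point Likert scores
# Simulate usage so that higher usefulness raises the odds of usage.
log_odds = -4 + 0.9 * perceived_usefulness
used_ecommerce = (rng.random(200) < 1 / (1 + np.exp(-log_odds))).astype(int)

X = sm.add_constant(perceived_usefulness)
model = sm.Logit(used_ecommerce, X).fit(disp=False)
# exp(coefficient): change in the odds of usage per unit of usefulness.
print(np.exp(model.params[1]))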
Hypotheses
One must keep in mind that “research questions are not the
same as research hypotheses”
(Maxwell, 2005, p. 69). In general, a hypothesis can be defined
as a “logical supposition, a rea-
sonable guess, an educated conjecture” about some aspect of
daily life (Leedy & Ormrod, 2005,
p. 6). In scholarly research, however, hypotheses are more than
‘educated guesses.’ A research
hypothesis is a “prediction or conjecture about the outcome of a
relationship among attributes or
characteristics” (Creswell, 2005, p. 117). By convention,
research is conservative and assumes
the absence of a relationship among the attributes under
consideration; hypotheses, therefore, are
expressed in null terms. For example, if a study were to
examine the impact interactive multime-
dia animations have on the average amount of a purchase at an
e-commerce site, the hypothesis
would be stated: The average amount of purchase on an e-
commerce site enhanced with interac-
tive animations will not be different that the average amount of
purchase on the same e-
commerce site that is not enhanced with interactive animations.
Not all types of research entail
establishing and testing hypotheses. Research methods based
upon quantitative data commonly
test hypotheses; studies based upon qualitative data, on the
other hand, explore propositions
(Maxwell).
Unlike hypotheses, propositions do predict a directionality for
the results. If, for example, one
were to examine consumer reaction to interactive animation on
an e-commerce site, one might
investigate the proposition that: Consumers will express a
greater feeling of engagement and sat-
isfaction when visiting e-commerce sites enhanced with
interactive animations than similar sites
that lack the enhancement.
Acknowledge Assumptions, Limitations, and Delimitations
For any given research investigation there are underlying
assumptions, limitations, and delimita-
tions (Creswell, 2005). According to Leedy and Ormrod (2005),
assumptions, limitations, and
delimitations are critical components of a viable research
proposal; without these considerations
clearly articulated, evaluators may raise some valid questions
regarding the credibility of the pro-
posal. The following three sub-sections provide definition and
examples for each term.
Assumptions
Assumptions serve as the basic foundation of any proposed
research (Leedy & Ormrod, 2005)
and constitute “what the researcher takes for granted. But taking
things for granted may cause
much misunderstanding. What [researchers] may tacitly assume,
others may never have consid-
ered” (Leedy & Ormrod, p. 62). Moreover, assumptions can be
viewed as something the re-
searcher accepts as true without concrete proof. Essentially,
there is no research study without a
basic set of assumptions (Berg, 1998). According to Williams
and Colomb (2003), identifying the
assumptions behind a given research proposal is one of the
hardest issues to address, especially
for novice researchers. Such difficulties emerge due to the fact
that by nature “we all take our
deepest beliefs for granted, rarely questioning them from
someone else’s point of view”
(Williams & Colomb, p. 200). It is important for novice
researchers to learn how to explicitly
document their assumptions in order to ensure that they are
aware of those things taken as givens,
rather than trying to hide or obscure them from the reader. Explicitly documenting the research as-
Explicitly documenting the research as-
sumptions may help reduce misunderstanding and resistance to a
proposed research as it demon-
strates that the research proposal has been thoroughly
considered (Leedy & Ormrod, 2005).
To identify the assumptions behind a proposal, researchers must ask themselves the following question: “what do I believe that my readers must also believe (but
may not) before they will think
that my reasons are relevant to my claims?” (Williams &
Colomb, p. 200).
Examples of assumptions researchers make include:
- Participants in the study will make a sincere effort to complete
the assigned tasks
- The students participating in the Internet-based course have a
basic familiarity with the
personal computer and the use of the World Wide Web.
Limitations
Every study has a set of limitations (Leedy & Ormrod, 2005), or
“potential weaknesses or prob-
lems with the study identified by the researcher” (Creswell,
2005, p. 198). A limitation is an un-
controllable threat to the internal validity of a study. As
described in greater detail below, internal
validity refers to the likelihood that the results of the study
actually mean what the researcher in-
dicates they mean. Explicitly stating the research limitations is
vital in order to allow other re-
searchers to replicate the study or expand on a study (Creswell,
2005). Additionally, by explicitly
stating the limitations of the research, a researcher can help
other researchers “judge to what ex-
tent the findings can or cannot be generalized to other people
and situations” (Creswell, 2005, p.
198).
Examples of limitations researchers may have:
- All subjects in the study will be volunteers who may withdraw
from the study at any
time. The participants who finish the study might not, therefore,
be truly representative of
the population.
- The members of the expert panel that will validate the
proficiency survey instrument will
be drawn from the faculty of … and may not truly represent
universally accepted expert
opinion.
Delimitations
Delimitations refer to “what the researcher is not going to do” (Leedy & Ormrod, 2005). In scholarly research, the goals of the research outline what the researcher intends to do; without the delimitations, the reader will have difficulty understanding the boundaries of the research. In order to constrain the scope of the study and make it more manageable, researchers should outline in the delimitations the factors, constructs, and/or variables that were intentionally left out of the study. Delimitations impact the external validity, or generalizability, of the results of the study.
Examples of delimitations include:
- Participation in the study was delimited to only males aged
25-45 who had made a pur-
chase via the internet within the past 12 months; generalization
to other age groups or
females may not be warranted.
- This study examined attrition rates in MBA programs offered
in continuing education de-
partments of public colleges and universities; generalization to
other educational pro-
grams or similar programs offered in private institutions may
not be warranted.
Establish Reliability and Validity
Every study must address threats to validity and reliability
(Leedy & Ormrod, 2005). Although
the concepts of validity and reliability originally started in
quantitative research approaches, in
recent years validity and reliability are being addressed in
qualitative and mixed-methods ap-
proaches as well (Berg, 1998; Maxwell, 2005). According to
Leedy and Ormrod (2005), “the va-
lidity and reliability of your measurement instruments
influences the extent to which you can
learn something about the phenomenon you are studying…and
the extent to which you can draw
meaningful conclusions from your data” (p. 31). The following
two sections define and outline
the key types of validity and reliability related to common
research investigations. Establishing an
approach following published methods to address validity and
reliability issues in a research pro-
posal may drastically increase the overall acceptance of the
research proposal.
Reliability
Reliability is defined as “the consistency with which a
measuring instrument yields a certain result when the entity being measured hasn’t changed” (Leedy &
Ormrod, 2005, p. 31). According
to Straub (1989), researchers should try to answer the following
question in an attempt to address
reliability; “do measures show stability across the unit of
observation? That is, could measure-
ment error be so high as to discredit the findings?” (p. 150).
Reliability can be established in four
different ways: equivalency, stability, inter-rater, and internal
consistency (Carmines & Zeller,
1991).
Equivalency reliability. Equivalency reliability is concerned
with how closely measurements
taken with one instrument match those taken with a second
instrument under similar conditions.
Equivalency is often used to certify the reliability of a new
measurement instrument or procedure
by comparing the results of using that instrument with those
obtained by using established in-
struments or processes. Equivalency is usually established
through the use of a statistical correla-
tion (Pearson’s r for linear correlation or Eta for non-linear
correlation).
Stability reliability. Stability reliability – also known as test-retest reliability – is concerned with
how consistent results of measuring with a given instrument or
process are over time. Stability is
based on the assumption that, absent some identifiable
explanation, the measurement should pro-
duce the same results today as last month and will produce the
same results next month. Stability,
like equivalency, is usually established through the use of a
statistical correlation (Pearson’s r for
linear correlation or Eta for non-linear correlation).
Inter-rater reliability. Inter-rater reliability focuses on the
extent of agreement in the results of two
or more individuals using the same measurement instrument or
process. As with stability and
equivalency, inter-rater reliability is usually established through
the use of a statistical correlation
(Pearson’s r for linear correlation or Eta for non-linear
correlation).
Internal consistency. Unlike the previous methods of
establishing reliability which were con-
cerned with comparing the results of using an instrument or
process with some external standard
(another instrument, the same instrument over time, or the same
instrument used by different
people), internal consistency focuses on the level of agreement among the various parts of the instrument or process in assessing the characteristic being measured. In a 20-question survey measuring attitude toward knowledge sharing, for example, if the survey is internally consistent, there will be a strong correlation among the responses on all 20 questions. Internal consistency is also
measured by statistical correlation, but with the Cronbach α in
place of Pearson r.
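As an illustration, Cronbach's α can be computed directly from the respondent-by-item score matrix using the standard formula α = (k / (k - 1)) × (1 - (sum of the item variances) / (variance of the total score)), where k is the number of items. The Python sketch below applies the formula to simulated responses for a 20-question survey like the one described above.

# Minimal Cronbach's alpha sketch (simulated survey responses).
import numpy as np

def cronbach_alpha(scores):
    # scores: 2-D array, respondents x items, on a common scale.
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(1)
attitude = rng.normal(4, 1, size=(50, 1))             # latent attitude
items = attitude + rng.normal(0, 0.5, size=(50, 20))  # 20 correlated items
print(round(cronbach_alpha(items), 2))                # high alpha expected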
Validity
Validity refers to a researcher’s ability to “draw meaningful and
justifiable inferences from scores
about a sample or population” (Creswell, 2005, p. 600). There
are various types of validity asso-
ciated with scholarly research (Cook & Campbell, 1979).
Validity of an instrument refers to “the
extent to which the instrument measures what it is supposed to
measure” (Leedy & Ormrod,
2005, p. 31). Thus, researchers, when designing their study,
must ask themselves “how might you
be wrong?” (Maxwell, 2005, p. 105). Additionally, the validity
of a study “depends on the rela-
tionship of your conclusions to reality” (Maxwell, 2005, p.
105). This section will define and out-
line the key validity issues. The two most common validity
issues are internal validity and exter-
nal validity.
Internal validity. Internal validity refers to the “extent to which
its design and the data that it
yields allow the researcher to draw accurate conclusions about
cause-and-effect and other rela-
tionships within the data” (Leedy & Ormrod, 2005, pp. 103-
104). According to Straub (1989),
researchers should try to answer the following question in an
attempt to address internal validity;
“are there untested rival hypotheses for the observed effects?”
(p. 150). Generally, establishing
internal validity requires examining one or more of the
following: face validity, criterion validity,
construct validity, content validity, or statistical conclusion
validity.
Face Validity. Face validity is based upon appearance; does the
instrument or process seem to
pass the test for reasonableness? Face validity is never sufficient
by itself, but an informal assess-
ment of how well the study appears to be designed is often the
first step in establishing its valid-
ity.
Criterion Related Validity. Also known as instrumental validity,
criterion related validity is
based upon the premise that processes and instruments used in a
study are valid if they parallel those used in previous, validated research. In order to
establish criterion related validity it is
necessary to draw strong parallels between as many particulars
of the validated study – popula-
tion, circumstances, instruments used, methods followed – as
possible.
Construct Validity. Construct validity “is in essence [an] operational
issue. It asks whether the meas-
ures chosen are true constructs describing the event or merely
artifacts of the methodology itself”
(Straub, 1989, p. 150). According to Straub, researchers should
try to answer the following ques-
tion in an attempt to address construct validity; “do measures
show stability across methodology?
That is, are the data a reflection of true scores or artifacts of the
kind of instrument chosen?” (p.
150).
Content Validity. In survey-based research, the term content
validity refers to “the degree to
which items in an instrument reflect the content universe to
which the instrument will be general-
ized” (Boudreau, Gefen, & Straub, 2001, p. 5). According to
Straub (1989), researchers should
try to answer the following question in an attempt to address
content validity; “are instrument
measures drown from all possible measures of the properties
under investigation?” (p. 150).
Statistical Conclusion Validity. Statistical conclusion validity
refers to the “assessment of the
mathematical relationships between variables and the likelihood
that this mathematical assess-
ment provides a correct picture of the covariation …(Type I and
Type II error)” (Straub, 1989, p.
152). According to Straub, researchers should try to answer the
following question in an attempt
to address statistical conclusion validity; “do the variables
demonstrate relationships not explain-
able by chance or some other standard of comparison?” (p. 150).
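Because the significance level α fixes the Type I error rate and statistical power equals 1 - β (the complement of the Type II error rate), a prospective power analysis is one practical way to address statistical conclusion validity. The following hypothetical Python sketch uses statsmodels to find the sample size needed per group for an independent-samples t-test; the assumed effect size is illustrative.

# Hypothetical power analysis: participants needed per group for an
# independent-samples t-test at alpha = .05 and power = .80
# (i.e., Type II error rate beta = .20), assuming a medium effect size.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5,
                                          alpha=0.05, power=0.80)
print(round(n_per_group))  # roughly 64 per group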
External validity. External validity refers to the “extent to
which its results apply to situations
beyond the study itself…the extent to which the conclusions
drawn can be generalized to other
contexts” (Leedy & Ormrod, 2005, p. 105). Additionally,
external validity addresses the “gener-
alizability of sample results to the population of interest, across
different measures, persons, set-
tings, or times. External validity is important to demonstrate
that research results are applicable in
natural settings, as contrasted with classroom, laboratory, or
survey-response settings” (King &
He, 2005, p. 882).
Summary
One of the major challenges facing the novice researcher is
matching the research she or he pro-
poses with a research method that is appropriate and will be
accepted by the scholarly commu-
nity. The material presented in this paper is certainly not
intended to be the ending point in the
process of establishing the research methods for a given study.
The novice researcher is encouraged, even expected, to augment this material by referring to one or more of the texts and research examples cited.
This paper does present a foundation upon which such a
decision can be based:
1. Developing the PLD, a model for selection of research
approach based upon the
problem driving the study, the body of knowledge documented
in peer-reviewed lit-
erature, and the data available to the researcher;
2. Identifying, in brief, several of the research approaches
commonly used in informa-
tion systems studies;
3. Exploring several of the important terms and constructs that
apply to scholarly re-
search, regardless of the specific approach selected.
References
Alexander, I. (2003). Designing collaborative systems. A
practical guide to ethnography. European Journal
of Information Systems, 12(3), 247-249.
Becerra-Fernandez, I., Zanakis, S. H., & Walczak, S. (2002).
Knowledge discovery techniques for predict-
ing country investment risk. Computers & Industrial
Engineering, 43(4), 787-800.
Berg, B. L. (1998). Qualitative research methods for the social
sciences (3rd ed.). Boston, MA: Allyn &
Bacon.
Beynon-Davies, P. (1997). Ethnography and information
systems development: Ethnography of, for and
within is development. Information and Software Technology,
39(8), 531-540.
Boudreau, M.-C., Gefen, D., & Straub, D. W. (2001). Validation
in information systems research: A state-
of-the-art assessment. MIS Quarterly, 25(1), 1-16.
Cockburn, A., Savage, J., & Wallace, A. (2005). Tuning and
testing scrolling interfaces that automatically
zoom. Proceeding of the Computer-Human Interaction 2005
Conference, Portland, Oregon, pp. 71-80.
Cohen, M. S., & Ellis, T. J. (2003). Predictors of success: A
longitudinal study of threaded discussion fo-
rums. Proceeding of the Frontiers in Education Conference,
Boulder, Colorado, pp. T3F-14–T13F-18.
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation:
Design & analysis issues from field set-
tings. Boston, MA: Houghton Mifflin Company.
Crabtree, A. (2003). Designing collaborative systems. A
practical guide to ethnography. Berlin: Springer-
Verlag.
Creswell, J. W. (2005). Educational research: Planning,
conducting, and evaluating quantitative and
qualitative research (2nd ed.). Upper Saddle River, NJ: Pearson.
DeLuca, D., Gallivan, M. J., & Kock, N. (2008). Furthering
information systems action research: A post-
positivist synthesis of four dialectics. Journal of the Association
for Information Systems, 9(2), 48-72.
Ellis, T. J., & Hafner, W. (2006). A communication
environment for asynchronous collaborative learning.
Proceeding of the 37th Hawaii International Conference on
System Sciences, Big Island, Hawaii, pp.
3a-3a.
Ellis, T. J., & Levy, Y. (2008). A framework of problem-based
research: A guide for novice researchers on
the development of a research-worthy problem. Informing Science: The International Journal of an Emerging Transdiscipline, 11, 17-33. Retrieved from http://inform.nu/Articles/Vol11/ISJv11p017-033Ellis486.pdf
Gay, L. R., Mills, G. E., & Airasian, P. (2006). Educational
research: Competencies for analysis and ap-
plications (8th ed.). Upper Saddle River, NJ: Pearson.
Grant, K. A., & Grant, C. T. (2008). Developing a model of next
generation knowledge management. Is-
sues in Informing Science and Information Technology, 5, 571–
590. Retrieved from
http://proceedings.informingscience.org/InSITE2008/IISITv5p5
71-590Grant532.pdf
Isaac, S., & Michael, W. B. (1981). Handbook in research and
evaluation. San Diego, CA: EdITS publish-
ers.
King, W. R., & He, J. (2005). External validity in IS survey
research. Communications of the Association
for Information Systems, 16, 880-894.
Leedy, P. D., & Ormrod, J. E. (2005). Practical research:
Planning and design (8th ed.). Upper Saddle
River, NJ: Prentice Hall.
Levy, Y., & Ellis, T. J. (2006). A systems approach to conduct
an effective literature review in support of information systems research. Informing Science: The International Journal of an Emerging Transdiscipline, 9, 181-212. Retrieved from http://inform.nu/Articles/Vol9/V9p181-212Levy99.pdf
Maxwell, J. A. (2005). Qualitative research design: An
interactive approach (2nd ed.). Thousand Oaks, CA: Sage Publications.
Mertler, C. A., & Vannatta, R. A. (2001). Advanced and
multivariate statistical methods: Practical appli-
cation and interpretation. Los Angeles, CA: Pyrczak Publishing.
Noor, K. (2008). Case study: A strategic research methodology.
American Journal of Applied Sciences,
5(11), 1602-1604.
Nunamaker, J. F., Chen, M., & Purdin, T. D. M. (1991).
Systems development in information systems re-
search. Journal of Management Information Systems, 7(3), 89-
106.
Oliver, D., Whymark, G., & Romm, C. (2005). Researching ERP
adoption: An internet-based grounded
theory approach. Online Information Review, 29(6), 585-604.
Ramim, M. M., & Levy, Y. (2006). Securing e-learning systems:
A case of insider cyber attacks and novice
IT management in a small university. Journal of Cases on
Information Technology, 8(4), 24-35.
Sekaran, U. (2003). Research methods for business (4th ed.).
Hoboken, NJ: John Wiley & Sons.
Straub, D. W. (1989). Validating instruments in MIS research.
MIS Quarterly, 13(2), 147-170.
van den Akker, J., Branch, R. M., Gustafson, K., Nieveen, N., &
Plomp, T. (2000). Design approaches and
tools in education. Norwell, MA: Kluwer Academic Publishers.
Williams, J. M., & Colomb, G. G. (2003). The craft of argument (2nd ed.). New York: Longman Publishers.
Yin, R. K. (1984). Case study research: Design and methods.
Newbury Park, CA: Sage Publications.
Biographies
Dr. Timothy Ellis obtained a B.S. degree in History from Bradley University, an M.A. in Rehabilitation Counseling from Southern Illinois University, a C.A.G.S. in Rehabilitation Administration from Northeastern University, and a Ph.D. in Computing Technology in Education from Nova Southeastern University. He joined NSU as Assistant Professor in 1999 and currently teaches computer technology courses at both the Masters and Ph.D. level in the School of Computer and Information Sciences. Prior to joining NSU, he was on the faculty at Fisher College in the Computer Technology department and, prior to that, was a Systems Engineer for Tandy Business Products. His research interests include: multimedia, distance education, and adult learning. He has published in several technical and educational journals including Catalyst, Journal of Instructional Delivery Systems, and Journal of Instructional Multimedia and Hypermedia. His email address is [email protected]. His main website is located at http://www.scis.nova.edu/~ellist/

Dr. Yair Levy is an associate professor at the Graduate School of Computer and Information Sciences at Nova Southeastern University. During the mid to late 1990s, he assisted NASA in developing e-learning systems. He earned his Bachelor's degree in Aerospace Engineering from the Technion (Israel Institute of Technology). He received his MBA with MIS concentration and Ph.D. in Management Information Systems from Florida International University. His current research interests include cognitive value of IS, online learning systems, effectiveness of IS, and cognitive aspects of IS. Dr. Levy is the author of the book "Assessing the Value of e-Learning Systems." His research publications appear in IS journals, conference proceedings, invited book chapters, and encyclopedias. Additionally, he has chaired and co-chaired multiple sessions/tracks at recognized conferences. Currently, Dr. Levy is serving as the Editor-in-Chief of the International Journal of Doctoral Studies (IJDS). He also serves as an associate editor for the International Journal of Web-based Learning and Teaching Technologies (IJWLTT) and as a member of the editorial review or advisory boards of several refereed journals. Additionally, Dr. Levy has been serving as a referee research reviewer for numerous national and international scientific journals, conference proceedings, as well as MIS and Information Security textbooks. He is also a frequent speaker at national and international meetings on MIS and online learning topics. To find out more about Dr. Levy, please visit his site: http://scis.nova.edu/~levyy/
Issues in Informing Science and Information Technology, Volume 6, 2009
Towards a Guide for Novice Researchers on Research Methodology: Review and Proposed Methods
Timothy J. Ellis and Yair Levy
Nova Southeastern University
Graduate School of Computer and Information Sciences
Fort Lauderdale, Florida, USA
[email protected], [email protected]
Abstract
The novice researcher, such as the graduate student, can be overwhelmed by the intricacies of the research methods employed in conducting a scholarly inquiry. As both a consumer and producer of research, it is essential to have a firm grasp on just what is entailed in producing legitimate, valid results and conclusions. The very large and growing number of diverse research approaches in current practice exacerbates this problem. The goal of this review is to provide the novice researcher with a starting point in becoming a more informed consumer and producer of research. Toward addressing this goal, a new system for deriving a proposed study type is developed. The PLD model includes the three common drivers for selection of study type: research-worthy problem (P), valid quality peer-reviewed literature (L), and data (D). The discussion includes a review of some common research types and concludes with definitions, discussions, and examples of various fundamentals of research methods such as: a) forming research questions and hypotheses; b) acknowledging assumptions, limitations, and delimitations; and c) establishing reliability and validity.

Keywords: Research methodology, reliability, validity, research questions, problem directed research
Introduction
The novice researcher, such as the graduate student, can be overwhelmed by the intricacies of the research methods employed in conducting a scholarly inquiry (Leedy & Ormrod, 2005). As both a consumer and producer of research, it is essential to have a firm grasp on just what is entailed in producing legitimate, valid results and conclusions. The very large and growing number of diverse research approaches in current practice exacerbates this problem (Mertler & Vannatta, 2001). The goal of this review is to provide the novice researcher with a starting point in becoming a more informed consumer and producer of research in the form of a lexicon of terms and an analysis of the underlying constructs that apply to scholarly inquiry, regardless of the specific methods employed.

Scholarly research is, to a very great extent, characterized by the type of
study conducted and, by extension, the specific methods employed in conducting that type of study (Creswell, 2005, p. 61). Novice researchers, however, often mistakenly think that, since studies are known by how they are conducted, the research process starts with deciding upon just what type of study to conduct. On the contrary, the type of study one conducts is based upon three related issues: the problem driving the study, the body of knowledge, and the nature of the data available to the researcher.
As discussed elsewhere, scholarly research starts with the identification of a tightly focused, literature-supported problem (Ellis & Levy, 2008). The research-worthy problem serves as the point of departure for the research. The nature of the research problem and the domain from which it is drawn serve as a limiting factor on the type of research that can be conducted. Nunamaker, Chen, and Purdin (1991) noted that "It is clear that some research domains are sufficiently narrow that they allow the use of only limited methodologies" (p. 91). The problem also serves as the guidance system for the study in that the research is, in essence, an attempt to develop, in some manner, at least a partial solution to the research problem. The best design cannot provide meaning to research and answer the question 'Why was the study conducted?' if there is not the anchor of a clearly identified research problem.
The body of knowledge serves as the foundation upon which the study is built (Levy & Ellis, 2006). The literature also serves to channel the research, in that it indicates the type of study or studies that are appropriate based upon the nature of the problem driving the study. Likewise, the literature provides clear guidance on the specific methods to be followed in conducting a study of a given type. Although originality is of great value in scholarly work, it is usually not rewarded when applied to the research methods. Ignoring the wisdom contained in the existing body of knowledge can cause the novice researcher, at the least, a great deal of added work establishing the validity of the study.
From an entirely practical perspective, the nature of the data available to the researcher serves as a final filter in determining the type of study to conduct. The type of data available should be considered a necessary, but certainly not sufficient, consideration for selecting research methods. The data should never supersede the necessity of a research-worthy problem serving as the anchor and the existing body of knowledge serving as the foundation for the research. The absence of the ability to gather the necessary data can, however, certainly make a study based upon research methods directly driven by a well-conceived problem and supported by current literature completely futile. Every solid research study must use data in order to validate the proposed theory. As a result, novice researchers should understand the centrality of access to data for their study success. Access to data refers to the ability of the researcher to actually collect the desired data for the study. Without access to data, it is impossible for a researcher to make any meaningful conclusions about the phenomena. Novice researchers should be aware that their access to data will also imply what type of methodology they will be using and, eventually, what type of research they will conduct. Figure 1 illustrates the interaction among the research-worthy problem (P), the existing body of knowledge documented in the peer-reviewed literature (L), and the data available to the researcher (D). The research-worthy problem (P) serves as the input to the process of selecting the appropriate type of research to conduct; the valid peer-reviewed literature (L) is the key funnel that limits the range of applicable research approaches, based on the body of knowledge; the data (D) available to the researcher serves as the final filter used to identify the specific study type.
The balance of this paper explores the constructs underlying scholarly research in two aspects. The first section examines some of the types of studies most commonly used in information systems research. The second section explores vital considerations for research methods that apply across all study types.
[Figure 1. The PLD Model for Deriving Study Type: the research-worthy problem feeds into the valid peer-reviewed literature, which funnels to the data available, yielding the study type.]
Types of Research
There are a number of different ways to distinguish among types of research. The type of data available is certainly one vital aspect (Gay, Mills, & Airasian, 2006; Leedy & Ormrod, 2005); different research approaches are appropriate for quantitative data – precise, numeric data derived from a reduced variable – than for qualitative data – complex, multidimensional data derived from a natural setting. Of equal importance is the nature of the problem being addressed by the research (Isaac & Michael, 1981). Some problems, for example, are relatively new and require exploratory types of research, while more mature problems might better be addressed by descriptive or evaluative (hypothesis testing) approaches (Sekaran, 2003). Table 1 presents an overview of how the research approaches most commonly used in information systems (IS) are categorized.

Table 1: Key Categories of Research

Approach           | Most common type of data     | Stage of problem    | Categories of theory
-------------------|------------------------------|---------------------|----------------------
Experimental       | Quantitative                 | Evaluation          | Testing or revising
Causal-comparative | Quantitative                 | Evaluation          | Testing or revising
Historical         | Quantitative or qualitative  | Description         | Testing or revising
Developmental      | Quantitative and qualitative | Description         | Building or revising
Correlational      | Quantitative                 | Description         | Testing
Case study         | Qualitative                  | Exploration         | Building or revising
Grounded theory    | Qualitative                  | Exploration         | Building
Ethnography        | Qualitative                  | Description         | Building
Action research    | Quantitative and qualitative | Applied exploration | Building or revising
The subsections following Table 1 briefly explore each of these major research approaches. In general, research studies can be classified into three categories: theory building, theory testing, and theory revising. Theory building refers to research studies that aim at building a theory where no prior solid theory existed to explain a phenomenon or specific scenario. Theory testing refers to research studies that aim at validating (i.e., testing) existing theories in a new context. Theory revising refers to research studies that aim at revising an existing theory.
Experimental
The essence of experimental research is determining if a cause-effect relationship exists between one factor or set of factors – the independent variable(s) – and a second factor or set of factors – the dependent variable(s) (Cook & Campbell, 1979). In an experiment, the researcher takes control of and manipulates the independent variable, usually by randomly assigning participants to two or more different groups that receive different treatments or implementations of the independent variable. The experimenter measures and compares the performance of the participants on the dependent variable to determine if changes in the independent variables are very likely to cause similar changes in performance on the dependent variable. In medical settings, this type of research is very common. However, in many research fields it is somewhat difficult to control all the variables in the experiments, especially when dealing with a research area that is related to organizations and institutions. For that reason, the use of experiments in IS is somewhat limited, and a less restrictive type of experiment, the quasi-experiment, is used (Cook & Campbell, 1979). As in experiments, in quasi-experiments the researcher attempts to determine if a cause-effect relationship exists between one factor or set of factors – the independent variable(s) – and a second factor or set of factors – the dependent variable(s). In quasi-experiments, however, the researcher is unable to control all the variables in the experimentation, although most variables are controlled.
An example of an experimental study would be research into which of two methods of inputting text in a personal digital assistant, soft-key or handwriting recognition, is more accurate. The independent variable would be method of text input. The dependent variable might be a count of the number of entry errors, and the comparison based upon the mean of the group using the soft-key method versus the mean of the group that used handwriting recognition input. An example of experimental research can be found in Cockburn, Savage, and Wallace (2005).
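
A minimal sketch of that comparison in Python, assuming invented entry-error counts (the data and variable names are illustrative, not drawn from the cited study): an independent-samples t-test compares the mean error counts of the two randomly assigned groups.

    from scipy import stats

    # Hypothetical entry-error counts per participant (invented data)
    softkey_errors = [4, 7, 5, 6, 3, 8, 5, 6]
    handwriting_errors = [9, 6, 10, 8, 11, 7, 9, 8]

    # Independent-samples t-test comparing the two group means
    t_stat, p_value = stats.ttest_ind(softkey_errors, handwriting_errors)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # a small p suggests a real difference in means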
Causal-Comparative
As with experimental studies, causal-comparative research focuses on determining if a cause-effect relationship exists between one factor or set of factors – the independent variable(s) – and a second factor or set of factors – the dependent variable(s). Unlike an experiment, the researcher does not take control of and manipulate the independent variable in causal-comparative research but rather observes, measures, and compares the performance on the dependent variable or variables of subjects in naturally-occurring groupings based on the independent variable.
An example of a causal-comparative study would be research into the impact monetary bonuses had on knowledge sharing behavior as exhibited by contributions to a company knowledge bank. The independent variable would be "monetary bonus," and it might have two levels (i.e., "yes" and "no"). The dependent variable might be a count of the number of contributions, and the comparison based upon an examination of the mean number of knowledge-base contributions made per employee in companies that provided a monetary bonus versus the mean number of contributions made per employee in companies that did not provide a bonus. Since the researcher did not assign companies to the "bonus" or "no bonus" categories, this study would be causal-comparative, not experimental. An example of causal-comparative research can be found in Becerra-Fernandez, Zanakis, and Walczak (2002), who developed a knowledge discovery technique using neural network modeling to classify a country's investing risk based on a variety of independent variables.
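
A minimal sketch of that comparison, again with invented numbers: because the "bonus" grouping is observed rather than assigned, the analysis simply groups the existing data and compares the group means.

    import pandas as pd

    # Hypothetical per-employee contribution counts; the "bonus" level is observed, not assigned
    df = pd.DataFrame({
        "bonus": ["yes", "yes", "yes", "no", "no", "no"],
        "contributions": [12, 15, 11, 6, 8, 5],
    })

    # Compare mean contributions across the two naturally occurring groups
    print(df.groupby("bonus")["contributions"].mean())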
Case Study
A case study is "an empirical inquiry that investigates a contemporary phenomenon within its real life context using multiple sources of evidence" (Noor, 2008, p. 1602). The evidence used in a case study is typically qualitative in nature and focuses on developing an in-depth rather than broad, generalizable understanding. Case studies can be used to explore, describe, or explain phenomena by an exhaustive study within their natural setting (Yin, 1984). An example of a case study can be found in the study by Ramim and Levy (2006), who described the issues related to the impact of an insider's attack combined with novice management on the survivability of the e-learning systems of a small university.
Historical
Historical research utilizes interpretation of qualitative data to explain the causes of change through time. This type of research is based upon the recognition of a historical problem or the identification of a need for certain historical knowledge and generally entails gathering as much relevant information about the problem or topic as possible. The research usually begins with the formation of a hypothesis that tentatively explains a suspected relationship between two or more historical factors and proceeds to a rigorous collection and organization of usually qualitative evidence. The verification of the authenticity and validity of such evidence, together with its selection, organization, and analysis, forms the basis for this type of research. An example of historical research can be found in the study by Grant and Grant (2008), who conducted a study to test the hypothesis that a new generation in knowledge management was emerging.
Correlational
The primary focus of the correlational type of research is to determine the presence and degree of a relationship between two factors. Although correlational studies are in a superficial way similar to causal-comparative research – both types of study focus on analyzing quantitative data to determine if a relationship exists between two variables – the difference between the two cannot be ignored. Unlike causal-comparative research, in correlational studies there is no attempt to determine if a cause-effect relationship exists (variable x causes changes in variable y). The goal for correlational studies is to determine if a predictive relationship exists (knowing the value of variable x allows one to predict the value of variable y). At a practical level, there is, therefore, no distinction made between independent and dependent variables in correlational research.

An example of a simple correlational study would be research into the relationship between age and willingness to make e-commerce purchases. The two variables of interest would be age and number of e-commerce purchases made over a given period of time. The comparison would be based upon an examination of the age of each subject in the study and the number of e-commerce purchases made by that subject. Since the researcher did not control either of the variables or attempt to determine if age caused changes in purchases, just if age could be used to predict behavior, the study would be correlational, not experimental or causal-comparative. An example of correlational research can be found in Cohen and Ellis (2003).
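
A minimal sketch of such an analysis, assuming invented age/purchase pairs: Pearson's r quantifies the strength of the predictive relationship, with neither variable designated as dependent.

    from scipy import stats

    # Hypothetical subjects: age and e-commerce purchases over 12 months (invented data)
    age = [22, 28, 35, 41, 47, 52, 60, 65]
    purchases = [14, 12, 9, 8, 6, 7, 4, 3]

    # Pearson's r for the linear relationship between the two variables
    r, p_value = stats.pearsonr(age, purchases)
    print(f"r = {r:.2f}, p = {p_value:.4f}")  # r near +/-1 indicates a strong predictive relationship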
Developmental
Developmental research attempts to answer the question: How can researchers build a 'thing' to address the problem? It is especially applicable when there is not an adequate solution to even test for efficacy in addressing the problem and presupposes that researchers don't even know how to go about building a solution that can be tested. Developmental research generally entails three major elements:

• Establishing and validating criteria the product must meet
• Following a formalized, accepted process for developing the product
• Subjecting the product to a formalized, accepted process to determine if it satisfies the criteria.

An example of developmental research would be Ellis and Hafner (2006), who detailed the development of an asynchronous environment for project-based collaborative learning experiences. Developmental research is distinguished from product development by: a focus on complex, innovative solutions that have few, if any, accepted design and development principles; a comprehensive grounding in the literature and theory; empirical testing of the product's practicality and effectiveness; as well as thorough documentation, analysis, and reflection on processes and outcomes (van den Akker, Branch, Gustafson, Nieveen, & Plomp, 2000).
Grounded Theory
Grounded theory is defined as "a systematic, qualitative procedure used to generate theory that explains, at a broad conceptual level, a process, an action, or interaction about substantive topic" (Creswell, 2005, p. 396). Grounded theory is used when theories currently documented in the literature fail to adequately explain the phenomena observed (Leedy & Ormrod, 2005). In such cases, revisions to existing theory may not be valid, as the fundamental assumptions behind such theories may be flawed given the context or data at hand. Table 2 outlines the three key types of grounded theory design. According to Creswell, "choosing among the three approaches requires several considerations" (p. 403). He noted that such considerations depend on the key emphasis of the study, such as: Is the aim of the study to follow given procedures? Is the aim of the study to follow predetermined categories? What is the position of the researcher? An example of grounded theory in the context of information systems includes the study by Oliver, Whymark, and Romm (2005). Oliver et al. used grounded theory to develop a conceptual model of enterprise-resource planning (ERP) systems adoption based on the various types of organizational justifications and reported motives.
Table 2: Types of Grounded Theory Design (Creswell, 2005)

Systematic Design: "emphasizes the use of data analysis steps of open, axial, and selective coding, and the development of a logic paradigm or a visual picture of the theory generated" (Creswell, 2005, p. 397)

Emerging Design: "letting the theory emerge from the data rather than using specific, preset categories" (Creswell, 2005, p. 401)

Constructivist Design: "focus is on the meanings ascribed by participants in a study…more interested in the views, values, beliefs, feelings, assumptions, and ideologies of individuals than in gathering facts and describing acts" (Creswell, 2005, p. 402)
Ethnography
The study of ethnography aims at "a particular person, program, or event in considerable depth. In an ethnography, the researcher looks at an entire group – more specifically, a group that shares a common culture – in depth" (Leedy & Ormrod, 2005, p. 151). According to Creswell (2005), ethnographic research deals with an in-depth qualitative investigation of a group that shares a common culture. He indicated that ethnography is best used to explain various issues within a group of individuals that have been together for a considerable length of time and have, therefore, developed a common culture. Ethnographic research also provides a chronological collection of events related to a group of individuals sharing a common culture. Beynon-Davies (1997) outlined the use of ethnographic research in the context of system development. He noted that for IS researchers, ethnographic research may provide value in the area of IS development, specifically in the process of capturing tacit knowledge during the system development life cycle (SDLC) (Beynon-Davies). Crabtree (2003) noted that "ethnography is an approach that is increasing interest to the designers of collaborative computing systems. Rejecting the use of theoretical frameworks and insisting instead on a rigorously descriptive mode of research" (p. ix). However, criticism of Crabtree's advocacy of ethnography in information systems research was also voiced (Alexander, 2003).
Action Research
Action research is defined as "a type of research that focuses on finding a solution to a local problem in a local setting" (Leedy & Ormrod, 2005, p. 114). Action research is unique in its approach in that the researchers themselves are part of the practitioner group that faces the actual problem the research is trying to address (Creswell, 2005). Additionally, the aim of action research is to investigate a localized and practical problem. According to DeLuca, Gallivan, and Kock (2008), there are five key steps in action research: a) diagnosing the problem; b) planning the action; c) taking the action; d) evaluating the results; and e) specifying lessons learned for the next cycle. During the course of all five steps of the action research, "researchers and practitioners collaborate during each step" (DeLuca et al., p. 49).
Fundamentals of Research Methods
For each study type there is an accepted methodology documented in texts (Gay et al., 2006; Isaac & Michael, 1981; Leedy & Ormrod, 2005; Yin, 1984) and exemplified in the literature (Levy & Ellis, 2006). As a first step in establishing the value of a proposed study, the novice researcher is well advised to closely follow the template for the study type contained in the text and model her or his research methods after similar studies reported in the literature. Regardless of the type of study being conducted, there are a number of important factors that must be accommodated in an effective description of the research methods. In brief, the description must provide a detailed, step-by-step description of how the study will be conducted, answering the vital "who, what, where, when, why, and how" questions:

1. What is going to be done?
2. Who is going to do each thing to be done?
3. How will each thing to be done be accomplished?
4. When, and in what order, will the things to be accomplished actually be done?
5. Where will those things be done?
6. Why – supported by the literature – for the answers to the What, Who, How, When, and Where
A properly developed description of the research methods would allow the reader to actually conduct the study being proposed based upon the processes outlined. Included among those processes are: forming research questions and hypotheses; identifying assumptions, limitations, and delimitations; as well as establishing reliability and validity.
Form Research Questions and Hypotheses
Research questions
Research questions are the essence of most research conducted and act as the guiding plan for the investigation (Mertler & Vannatta, 2001). In general, research questions are "specific questions that researchers seek to answer" (Creswell, 2005, p. 117). According to Maxwell (2005), "research questions state what you want to learn" (p. 69). A research investigation may have one or more research questions regardless of the specific type of the research, including qualitative, quantitative, and mixed method types of research. Most quality peer-reviewed studies will have a specific section that highlights the research questions investigated. In most other published work that does not have such a section, the research questions will appear either at the end of the problem statement or right after the literature review section. Maxwell suggested that a good research question is one that will point the researcher to the information that will lead him/her to understand what he/she set forth to investigate. According to Ellis and Levy (2008), "in order for the research to be at all meaningful, there has to be an identifiable connection between the answers to the research questions and the research problem inspiring the study" (p. 20). However, research questions shouldn't be created in a vacuum, but should be strongly influenced by what the quality literature suggests about the phenomena (Berg, 1998). Moreover, the exact wording used to note the research questions is vital, as the accuracy and appropriateness of the research question determine the methodology to be used (Mertler & Vannatta, 2001).

The nature of the research questions will be dependent on the type of study being conducted. Studies based on quantitative data will generally be driven by research questions that are confirmatory and predictive in nature, while studies based on qualitative data will more likely be driven by research questions that are exploratory and interpretive in nature.
Examples of quantitative research questions in the context of information systems include:

- To what extent does users' perceived usefulness increase the odds of their e-commerce usage?
- Do computer self-efficacy and computer anxiety differ significantly for males and females when using e-learning systems?
- What are the contributions of users' systems trust, deterrent severity, and motivation to their misuse of biometrics technology?
- To what degree do team communication and team cohesiveness predict productivity of system development by virtual teams?

Examples of qualitative research questions in the context of information systems include:

- How does training help the implementation success of enterprise-wide information systems?
- Why do user involvement and user resistance help in the systems' requirement gathering process?
- What are the system characteristics that are valuable to users when using e-learning systems?
- How do e-commerce users define information privacy?
Hypotheses
One must keep in mind that "research questions are not the same as research hypotheses" (Maxwell, 2005, p. 69). In general, a hypothesis can be defined as a "logical supposition, a reasonable guess, an educated conjecture" about some aspect of daily life (Leedy & Ormrod, 2005, p. 6). In scholarly research, however, hypotheses are more than 'educated guesses.' A research hypothesis is a "prediction or conjecture about the outcome of a relationship among attributes or characteristics" (Creswell, 2005, p. 117). By convention, research is conservative and assumes the absence of a relationship among the attributes under consideration; hypotheses, therefore, are expressed in null terms. For example, if a study were to examine the impact interactive multimedia animations have on the average amount of a purchase at an e-commerce site, the hypothesis would be stated: The average amount of purchase on an e-commerce site enhanced with interactive animations will not be different from the average amount of purchase on the same e-commerce site that is not enhanced with interactive animations. Not all types of research entail establishing and testing hypotheses. Research methods based upon quantitative data commonly test hypotheses; studies based upon qualitative data, on the other hand, explore propositions (Maxwell).
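
In conventional notation, that null hypothesis (using hypothetical symbols for the mean purchase amounts on the two versions of the site) can be written as:

    H_0: \mu_{\text{animated}} = \mu_{\text{plain}} \quad \text{versus} \quad H_1: \mu_{\text{animated}} \neq \mu_{\text{plain}}

The study then tests whether the data provide sufficient evidence to reject H_0.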
Unlike hypotheses, propositions do predict a directionality for the results. If, for example, one were to examine consumer reaction to interactive animation on an e-commerce site, one might investigate the proposition that: Consumers will express a greater feeling of engagement and satisfaction when visiting e-commerce sites enhanced with interactive animations than similar sites that lack the enhancement.
Acknowledge Assumptions, Limitations, and Delimitations
For any given research investigation there are underlying assumptions, limitations, and delimitations (Creswell, 2005). According to Leedy and Ormrod (2005), assumptions, limitations, and delimitations are critical components of a viable research proposal; without these considerations clearly articulated, evaluators may raise some valid questions regarding the credibility of the proposal. The following three sub-sections provide definitions and examples for each term.
Assumptions
Assumptions serve as the basic foundation of any proposed research (Leedy & Ormrod, 2005) and constitute "what the researcher takes for granted. But taking things for granted may cause much misunderstanding. What [researchers] may tacitly assume, others may never have considered" (Leedy & Ormrod, p. 62). Moreover, assumptions can be viewed as something the researcher accepts as true without concrete proof. Essentially, there is no research study without a basic set of assumptions (Berg, 1998). According to Williams and Colomb (2003), identifying the assumptions behind a given research proposal is one of the hardest issues to address, especially for novice researchers. Such difficulties emerge due to the fact that by nature "we all take our deepest beliefs for granted, rarely questioning them from someone else's point of view" (Williams & Colomb, p. 200). It is important for novice researchers to learn how to explicitly document their assumptions in order to ensure that they are aware of those things taken as givens, rather than trying to hide or obscure them from the reader. Explicitly documenting the research assumptions may help reduce misunderstanding of and resistance to a proposed research project, as it demonstrates that the research proposal has been thoroughly considered (Leedy & Ormrod, 2005).

To identify the assumptions behind a proposal, the researcher must ask himself the following question: "what do I believe that my readers must also believe (but may not) before they will think that my reasons are relevant to my claims?" (Williams & Colomb, p. 200).
Examples of assumptions researchers make include:

- Participants in the study will make a sincere effort to complete the assigned tasks.
- The students participating in the Internet-based course have a basic familiarity with the personal computer and the use of the World Wide Web.
Limitations
Every study has a set of limitations (Leedy & Ormrod, 2005), or "potential weaknesses or problems with the study identified by the researcher" (Creswell, 2005, p. 198). A limitation is an uncontrollable threat to the internal validity of a study. As described in greater detail below, internal validity refers to the likelihood that the results of the study actually mean what the researcher indicates they mean. Explicitly stating the research limitations is vital in order to allow other researchers to replicate the study or expand on a study (Creswell, 2005). Additionally, by explicitly stating the limitations of the research, a researcher can help other researchers "judge to what extent the findings can or cannot be generalized to other people and situations" (Creswell, 2005, p. 198).

Examples of limitations researchers may have:

- All subjects in the study will be volunteers who may withdraw from the study at any time. The participants who finish the study might not, therefore, be truly representative of the population.
- The members of the expert panel that will validate the proficiency survey instrument will be drawn from the faculty of … and may not truly represent universally accepted expert opinion.
Delimitations
Delimitations refer to "what the researcher is not going to do" (Leedy & Ormrod, 2005). In scholarly research, the goals of the research outline what the researcher intends to do; without the delimitations, the reader will have difficulties in understanding the boundaries of the research. In order to constrain the scope of the study and make it more manageable, researchers should outline in the delimitations the factors, constructs, and/or variables that were intentionally left out of the study. Delimitations impact the external validity or generalizability of the results of the study.

Examples of delimitations include:

- Participation in the study was delimited to only males aged 25-45 who had made a purchase via the internet within the past 12 months; generalization to other age groups or females may not be warranted.
- This study examined attrition rates in MBA programs offered in continuing education departments of public colleges and universities; generalization to other educational programs or similar programs offered in private institutions may not be warranted.
Establish Reliability and Validity
Every study must address threats to validity and reliability (Leedy & Ormrod, 2005). Although the concepts of validity and reliability originated in quantitative research approaches, in recent years validity and reliability are being addressed in qualitative and mixed-methods approaches as well (Berg, 1998; Maxwell, 2005). According to Leedy and Ormrod (2005), "the validity and reliability of your measurement instruments influences the extent to which you can learn something about the phenomenon you are studying…and the extent to which you can draw meaningful conclusions from your data" (p. 31). The following two sections define and outline the key types of validity and reliability related to common research investigations. Establishing an approach following published methods to address validity and reliability issues in a research proposal may drastically increase the overall acceptance of the research proposal.
Reliability
Reliability is defined as "the consistency with which a measuring instrument yields a certain result when the entity being measured hasn't changed" (Leedy & Ormrod, 2005, p. 31). According to Straub (1989), researchers should try to answer the following question in an attempt to address reliability: "do measures show stability across the unit of observation? That is, could measurement error be so high as to discredit the findings?" (p. 150). Reliability can be established in four different ways: equivalency, stability, inter-rater, and internal consistency (Carmines & Zeller, 1991).
Equivalency reliability. Equivalency reliability is concerned with how closely measurements taken with one instrument match those taken with a second instrument under similar conditions. Equivalency is often used to certify the reliability of a new measurement instrument or procedure by comparing the results of using that instrument with those obtained by using established instruments or processes. Equivalency is usually established through the use of a statistical correlation (Pearson's r for linear correlation or Eta for non-linear correlation).
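
For reference, Pearson's r for n paired measurements (x_i, y_i) – here, the scores produced by the two instruments – is the standard product-moment formula:

    r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}}

An r near 1 indicates that the two instruments rank and scale the measured entities nearly identically.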
Stability reliability. Stability reliability – also known as test-retest reliability – is concerned with how consistent the results of measuring with a given instrument or process are over time. Stability is based on the assumption that, absent some identifiable explanation, the measurement should produce the same results today as last month and will produce the same results next month. Stability, like equivalency, is usually established through the use of a statistical correlation (Pearson's r for linear correlation or Eta for non-linear correlation).
Inter-rater reliability. Inter-rater reliability focuses on the extent of agreement in the results of two or more individuals using the same measurement instrument or process. As with stability and equivalency, inter-rater reliability is usually established through the use of a statistical correlation (Pearson's r for linear correlation or Eta for non-linear correlation).
Internal consistency. Unlike the previous methods of establishing reliability, which were concerned with comparing the results of using an instrument or process with some external standard (another instrument, the same instrument over time, or the same instrument used by different people), internal consistency focuses on the level of agreement among the various parts of the instrument or process in assessing the characteristic being measured. In a 20-question survey measuring attitude toward knowledge sharing, for example, if the survey is internally consistent, there will be a strong correlation among the responses on all 20 questions. Internal consistency is also measured by statistical correlation, but with Cronbach's α in place of Pearson's r.
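
A minimal sketch of the α computation, assuming a hypothetical matrix of survey responses (rows are respondents, columns are items); the function name and data are illustrative only.

    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        """Cronbach's alpha for a (respondents x items) matrix of scores."""
        k = scores.shape[1]                          # number of items, e.g., 20 questions
        item_vars = scores.var(axis=0, ddof=1)       # variance of each individual item
        total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical 5-point responses from six respondents on four items (invented data)
    responses = np.array([
        [4, 5, 4, 4],
        [3, 3, 4, 3],
        [5, 5, 5, 4],
        [2, 3, 2, 2],
        [4, 4, 5, 4],
        [3, 2, 3, 3],
    ])
    print(f"alpha = {cronbach_alpha(responses):.2f}")  # higher alpha indicates stronger internal consistency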
Validity
Validity refers to a researcher's ability to "draw meaningful and justifiable inferences from scores about a sample or population" (Creswell, 2005, p. 600). There are various types of validity associated with scholarly research (Cook & Campbell, 1979). Validity of an instrument refers to "the extent to which the instrument measures what it is supposed to measure" (Leedy & Ormrod, 2005, p. 31). Thus, researchers, when designing their study, must ask themselves "how might you be wrong?" (Maxwell, 2005, p. 105). Additionally, the validity of a study "depends on the relationship of your conclusions to reality" (Maxwell, 2005, p. 105). This section will define and outline the key validity issues. The two most common validity issues are internal validity and external validity.
Internal validity. Internal validity refers to the "extent to which its design and the data that it yields allow the researcher to draw accurate conclusions about cause-and-effect and other relationships within the data" (Leedy & Ormrod, 2005, pp. 103-104). According to Straub (1989), researchers should try to answer the following question in an attempt to address internal validity: "are there untested rival hypotheses for the observed effects?" (p. 150). Generally, establishing internal validity requires examining one or more of the following: face validity, criterion validity, construct validity, content validity, or statistical conclusion validity.
By Day 6 of Week Due 917Respond to at least two of your c.docxAbhinav816839
 
ASSIGNMENT 3 (CHAPTERS 8-9) QUESTIONS Name .docx
ASSIGNMENT 3 (CHAPTERS 8-9) QUESTIONS Name                .docxASSIGNMENT 3 (CHAPTERS 8-9) QUESTIONS Name                .docx
ASSIGNMENT 3 (CHAPTERS 8-9) QUESTIONS Name .docxAbhinav816839
 
3 pages excluding the title and reference page.Describe how Ph.docx
3 pages excluding the title and reference page.Describe how Ph.docx3 pages excluding the title and reference page.Describe how Ph.docx
3 pages excluding the title and reference page.Describe how Ph.docxAbhinav816839
 
91422, 1134 AM Printhttpscontent.uagc.eduprintMcNe.docx
91422, 1134 AM Printhttpscontent.uagc.eduprintMcNe.docx91422, 1134 AM Printhttpscontent.uagc.eduprintMcNe.docx
91422, 1134 AM Printhttpscontent.uagc.eduprintMcNe.docxAbhinav816839
 
All details are posted in the documentPlease use this topics for.docx
All details are posted in the documentPlease use this topics for.docxAll details are posted in the documentPlease use this topics for.docx
All details are posted in the documentPlease use this topics for.docxAbhinav816839
 
1, Describe the mind as a stream of consciousness, and what it revea.docx
1, Describe the mind as a stream of consciousness, and what it revea.docx1, Describe the mind as a stream of consciousness, and what it revea.docx
1, Describe the mind as a stream of consciousness, and what it revea.docxAbhinav816839
 
2Some of the ways that students are diverse in todays schools .docx
2Some of the ways that students are diverse in todays schools .docx2Some of the ways that students are diverse in todays schools .docx
2Some of the ways that students are diverse in todays schools .docxAbhinav816839
 
40 CHAPTER 18 THE NEW SOUTH AND THE NEW WEST, 1865-1900 .docx
 40 CHAPTER 18 THE NEW SOUTH AND THE NEW WEST, 1865-1900 .docx 40 CHAPTER 18 THE NEW SOUTH AND THE NEW WEST, 1865-1900 .docx
40 CHAPTER 18 THE NEW SOUTH AND THE NEW WEST, 1865-1900 .docxAbhinav816839
 

More from Abhinav816839 (20)

Clinical Reasoning Case Study Total Parenteral NutritionName ____.docx
Clinical Reasoning Case Study Total Parenteral NutritionName ____.docxClinical Reasoning Case Study Total Parenteral NutritionName ____.docx
Clinical Reasoning Case Study Total Parenteral NutritionName ____.docx
 
Correctional Health Care AssignmentCourse Objective for Assignme.docx
Correctional Health Care AssignmentCourse Objective for Assignme.docxCorrectional Health Care AssignmentCourse Objective for Assignme.docx
Correctional Health Care AssignmentCourse Objective for Assignme.docx
 
Class Response 1 Making or implementing policies should ultima.docx
Class Response 1 Making or implementing policies should ultima.docxClass Response 1 Making or implementing policies should ultima.docx
Class Response 1 Making or implementing policies should ultima.docx
 
Chapter 4 (Sexual Assault)Points Possible 20Deliverable Len.docx
Chapter 4 (Sexual Assault)Points Possible 20Deliverable Len.docxChapter 4 (Sexual Assault)Points Possible 20Deliverable Len.docx
Chapter 4 (Sexual Assault)Points Possible 20Deliverable Len.docx
 
CS547 Wireless Networking and Security Exam 1 Questio.docx
CS547 Wireless Networking and Security Exam 1  Questio.docxCS547 Wireless Networking and Security Exam 1  Questio.docx
CS547 Wireless Networking and Security Exam 1 Questio.docx
 
Communicating professionally and ethically is an essential ski.docx
Communicating professionally and ethically is an essential ski.docxCommunicating professionally and ethically is an essential ski.docx
Communicating professionally and ethically is an essential ski.docx
 
Case Study The HouseCallCompany.com Abstract The case.docx
Case Study The HouseCallCompany.com  Abstract The case.docxCase Study The HouseCallCompany.com  Abstract The case.docx
Case Study The HouseCallCompany.com Abstract The case.docx
 
CHAPTER 5Risk Response and MitigationIn this chapter, you will.docx
CHAPTER 5Risk Response and MitigationIn this chapter, you will.docxCHAPTER 5Risk Response and MitigationIn this chapter, you will.docx
CHAPTER 5Risk Response and MitigationIn this chapter, you will.docx
 
BiopsychologySensation and PerceptionInstructionsTake a mom.docx
BiopsychologySensation and PerceptionInstructionsTake a mom.docxBiopsychologySensation and PerceptionInstructionsTake a mom.docx
BiopsychologySensation and PerceptionInstructionsTake a mom.docx
 
Chapter 2Historical Perspectives on Case ManagementChapter In.docx
Chapter 2Historical Perspectives on Case ManagementChapter In.docxChapter 2Historical Perspectives on Case ManagementChapter In.docx
Chapter 2Historical Perspectives on Case ManagementChapter In.docx
 
Compare and Contrast Systems Development Life Cycle (SDLC Mo.docx
Compare and Contrast Systems Development Life Cycle (SDLC Mo.docxCompare and Contrast Systems Development Life Cycle (SDLC Mo.docx
Compare and Contrast Systems Development Life Cycle (SDLC Mo.docx
 
C H A P T E R F O U R PROBLEMS OF COMMUNICATIONCri.docx
C H A P T E R  F O U R  PROBLEMS OF COMMUNICATIONCri.docxC H A P T E R  F O U R  PROBLEMS OF COMMUNICATIONCri.docx
C H A P T E R F O U R PROBLEMS OF COMMUNICATIONCri.docx
 
By Day 6 of Week Due 917Respond to at least two of your c.docx
By Day 6 of Week Due 917Respond to at least two of your c.docxBy Day 6 of Week Due 917Respond to at least two of your c.docx
By Day 6 of Week Due 917Respond to at least two of your c.docx
 
ASSIGNMENT 3 (CHAPTERS 8-9) QUESTIONS Name .docx
ASSIGNMENT 3 (CHAPTERS 8-9) QUESTIONS Name                .docxASSIGNMENT 3 (CHAPTERS 8-9) QUESTIONS Name                .docx
ASSIGNMENT 3 (CHAPTERS 8-9) QUESTIONS Name .docx
 
3 pages excluding the title and reference page.Describe how Ph.docx
3 pages excluding the title and reference page.Describe how Ph.docx3 pages excluding the title and reference page.Describe how Ph.docx
3 pages excluding the title and reference page.Describe how Ph.docx
 
91422, 1134 AM Printhttpscontent.uagc.eduprintMcNe.docx
91422, 1134 AM Printhttpscontent.uagc.eduprintMcNe.docx91422, 1134 AM Printhttpscontent.uagc.eduprintMcNe.docx
91422, 1134 AM Printhttpscontent.uagc.eduprintMcNe.docx
 
All details are posted in the documentPlease use this topics for.docx
All details are posted in the documentPlease use this topics for.docxAll details are posted in the documentPlease use this topics for.docx
All details are posted in the documentPlease use this topics for.docx
 
1, Describe the mind as a stream of consciousness, and what it revea.docx
1, Describe the mind as a stream of consciousness, and what it revea.docx1, Describe the mind as a stream of consciousness, and what it revea.docx
1, Describe the mind as a stream of consciousness, and what it revea.docx
 
2Some of the ways that students are diverse in todays schools .docx
2Some of the ways that students are diverse in todays schools .docx2Some of the ways that students are diverse in todays schools .docx
2Some of the ways that students are diverse in todays schools .docx
 
40 CHAPTER 18 THE NEW SOUTH AND THE NEW WEST, 1865-1900 .docx
 40 CHAPTER 18 THE NEW SOUTH AND THE NEW WEST, 1865-1900 .docx 40 CHAPTER 18 THE NEW SOUTH AND THE NEW WEST, 1865-1900 .docx
40 CHAPTER 18 THE NEW SOUTH AND THE NEW WEST, 1865-1900 .docx
 

Recently uploaded

Solving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxSolving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxOH TEIK BIN
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...EduSkills OECD
 
Science 7 - LAND and SEA BREEZE and its Characteristics
Science 7 - LAND and SEA BREEZE and its CharacteristicsScience 7 - LAND and SEA BREEZE and its Characteristics
Science 7 - LAND and SEA BREEZE and its CharacteristicsKarinaGenton
 
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991RKavithamani
 
_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting Data_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting DataJhengPantaleon
 
Concept of Vouching. B.Com(Hons) /B.Compdf
Concept of Vouching. B.Com(Hons) /B.CompdfConcept of Vouching. B.Com(Hons) /B.Compdf
Concept of Vouching. B.Com(Hons) /B.CompdfUmakantAnnand
 
mini mental status format.docx
mini    mental       status     format.docxmini    mental       status     format.docx
mini mental status format.docxPoojaSen20
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)eniolaolutunde
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon AUnboundStockton
 
How to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxHow to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxmanuelaromero2013
 
Alper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentAlper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentInMediaRes1
 
PSYCHIATRIC History collection FORMAT.pptx
PSYCHIATRIC   History collection FORMAT.pptxPSYCHIATRIC   History collection FORMAT.pptx
PSYCHIATRIC History collection FORMAT.pptxPoojaSen20
 
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdfBASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdfSoniaTolstoy
 
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Krashi Coaching
 
Presiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsPresiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsanshu789521
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfchloefrazer622
 
APM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAPM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAssociation for Project Management
 

Recently uploaded (20)

Solving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxSolving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptx
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
 
Science 7 - LAND and SEA BREEZE and its Characteristics
Science 7 - LAND and SEA BREEZE and its CharacteristicsScience 7 - LAND and SEA BREEZE and its Characteristics
Science 7 - LAND and SEA BREEZE and its Characteristics
 
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
 
_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting Data_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting Data
 
Concept of Vouching. B.Com(Hons) /B.Compdf
Concept of Vouching. B.Com(Hons) /B.CompdfConcept of Vouching. B.Com(Hons) /B.Compdf
Concept of Vouching. B.Com(Hons) /B.Compdf
 
mini mental status format.docx
mini    mental       status     format.docxmini    mental       status     format.docx
mini mental status format.docx
 
Model Call Girl in Bikash Puri Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Bikash Puri  Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Bikash Puri  Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Bikash Puri Delhi reach out to us at 🔝9953056974🔝
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon A
 
How to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxHow to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptx
 
Alper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentAlper Gobel In Media Res Media Component
Alper Gobel In Media Res Media Component
 
PSYCHIATRIC History collection FORMAT.pptx
PSYCHIATRIC   History collection FORMAT.pptxPSYCHIATRIC   History collection FORMAT.pptx
PSYCHIATRIC History collection FORMAT.pptx
 
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdfBASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
 
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
 
Presiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsPresiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha elections
 
Staff of Color (SOC) Retention Efforts DDSD
Staff of Color (SOC) Retention Efforts DDSDStaff of Color (SOC) Retention Efforts DDSD
Staff of Color (SOC) Retention Efforts DDSD
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdf
 
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
 
APM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAPM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across Sectors
 

Chapter 3 - Evaluation Rubric Criteria Does Not Meet 0.01 .docx

  • 6. colleagues, peers, or non-research participants representative of the greater population); describes use of a field test if practicing the administration of the instruments is warranted. For Pilot Study: Explains the procedure for conducting a pilot study (requires IRB approval for pilot) if using a self-created instrument (e.g., survey questionnaire); explains use of a field test if practicing the administration of the instruments is warranted. Operational Definitions of Variables (Quantitative Studies Only) Discussion of the study variables examined is lacking information and/or is unclear. Describes study variables in terms of being measurable and/or observable.
  • 7. (Reviewer - mark Meets/NA for Qualitative studies) Procedures Procedures are not clear or replicable. Steps are missing; recruitment, selection, and informed consent are not established. IRB ethical practices are missing or unclear. Describes the procedures to conduct the study in enough detail to practically replicate the study, including participant recruitment and notification, and informed consent. IRB ethical practices are noted. Data Collection and Analysis Does not clearly provide a description of the data and the processes to collect data. Lack of alignment between the data collected and the research questions and/or hypotheses of the study.
  • 8. For Quantitative Studies: Does not clearly provide the data analysis processes including, but not limited to, clearly describing the statistical tests performed and the purpose/outcome, coding of data linked to each RQ, the software used (e.g., SPSS, Qualtrics). For Qualitative Studies: Does not clearly identify the coding process of data linked to RQs; does not clearly describe the transcription of data, the software used for textual analysis (e.g., Nivo, DeDoose), and does not justify manual analysis by researcher. There is missing or unclear explanation of the use of a member check to validate data collected. Provides a description of the data collected and the processes used in gathering the data. Explains alignment between the data collected and the research questions and/or hypotheses of the study.
  • 9. For Quantitative Studies: Includes the data analysis processes including, but not limited to, describing the statistical tests performed and the purpose/outcome, coding of data linked to each RQ, the software used (e.g., SPSS, Qualtrics). For Qualitative Studies: Identifies the coding process of data linked to RQs. Describes the transcription of data, the software used for textual analysis (e.g., Nivo, DeDoose), and describes manual analysis by researcher. Describes the use of a member check to validate data collected. Assumptions/Limitations/ Delimitations Does not clearly outline the assumptions/limitations/delimitations (or has missing components) inherent to the choice of method and design. For Quantitative Studies: Does not include or lacks key
  • 10. elements such as, but not limited to, threats to internal and external validity, credibility, and generalizability. For Qualitative Studies: Does not include or lacks key elements such as, but not limited to threats to credibility, trustworthiness, and transferability. Outlines the assumptions/limitations/delimitations to the choice of method and design. For Quantitative Studies: Includes key elements such as, but not limited to, threats to internal and external validity, credibility, and generalizability. For Qualitative Studies: Includes key elements such as, but not limited to threats to credibility, trustworthiness, and transferability. Ethical Assurances Lacks discussion of compliance with the standards to
  • 11. conduct research as appropriate to the proposed research design and is not aligned to IRB requirements. Describes compliance with the standards to conduct research as appropriate to the proposed research design and aligned to IRB requirements. Summary Chapter does not conclude with a summary of key points from the Chapter - elements are missing, incomplete, and/new information is presented. Chapter concludes with an organized summary of key points discussed/presented in the Chapter. Issues in Informing Science and Information Technology Volume 6, 2009 Towards a Guide for Novice Researchers on Research Methodology: Review and Proposed Methods Timothy J. Ellis and Yair Levy Nova Southeastern University
  • 12. Graduate School of Computer and Information Sciences Fort Lauderdale, Florida, USA [email protected], [email protected] Abstract The novice researcher, such as the graduate student, can be overwhelmed by the intricacies of the research methods employed in conducting a scholarly inquiry. As both a consumer and producer of research, it is essential to have a firm grasp on just what is entailed in producing legitimate, valid results and conclusions. The very large and growing number of diverse research approaches in current practice exacerbates this problem. The goal of this review is to provide the novice re- searcher with a starting point in becoming a more informed consumer and producer of research. Toward addressing this goal, a new system for deriving a proposed study type is developed. The PLD model includes the three common drivers for selection of study type: research-worthy prob- lem (P), valid quality peer-reviewed literature (L), and data (D). The discussion includes a review of some common research types and concludes with definitions, discussions, and examples of various fundamentals of research methods such as: a) forming research questions and hypotheses; b) acknowledging assumptions, limitations, and delimitations; and c) establishing reliability and validity. Keywords: Research methodology, reliability, validity, research questions, problem directed re- search
  • 13. Introduction The novice researcher, such as the graduate student, can be overwhelmed by the intricacies of the research methods employed in conducting a scholarly inquiry (Leedy & Ormrod, 2005). As both a consumer and producer of research, it is essential to have a firm grasp on just what is entailed in producing legitimate, valid results and conclusions. The very large and growing number of di- verse research approaches in current practice exacerbates this problem (Mertler & Vannatta, 2001). The goal of this review is to pro- vide the novice researcher with a start- ing point in becoming a more informed consumer and producer of research in the form of a lexicon of terms and an analysis of the underlying constructs that apply to scholarly enquiry, regard- less of the specific methods employed. Scholarly research is, to a very great extent, characterized by the type of Material published as part of this publication, either on-line or in print, is copyrighted by the Informing Science Institute. Permission to make digital or paper copy of part or all of these works for personal or classroom use is granted without fee provided that the copies are not made or distributed for profit or commercial advantage AND that copies 1) bear this notice in full and 2) give the full citation on the first page. It is per - missible to abstract these works so long as credit is given. To copy in all other cases or to republish or to post on a server or to redistribute to lists requires specific permission and payment of a fee. Contact [email protected] to request redistribution permission.
  • 14. Guide for Novice Researchers on Research Methodology 324 study conducted and, by extension, the specific methods employed in conducting that type of study (Creswell, 2005, p. 61). Novice researchers, however, often mistakenly think that, since studies are known by how they are conducted, the research process starts with deciding upon just what type of study to conduct. On the contrary, the type of study one conducts is based upon three related issues: the problem driving the study, the body of knowledge, and the nature of the data available to the researcher. As discussed elsewhere, scholarly research starts with the identification of a tightly focused, lit- erature supported problem (Ellis & Levy, 2008). The research- worthy problem serves as the point of departure for the research. The nature of the research problem and the domain from which it is drawn serves as a limiting factor on the type of research that can be conducted. Nunamaker, Chen, and Purdin (1991) noted that “It is clear that some research domains are sufficiently nar- row that they allow the use of only limited methodologies” (p. 91). The problem also serves as the guidance system for the study in that the research is, in essence, an attempt to, in some man- ner, develop at least a partial solution to the research problem. The best design cannot provide meaning to research and answer the question ‘Why was the
  • 15. study conducted,’ if there is not the anchor of a clearly identified research problem. The body of knowledge serves as the foundation upon which the study is built (Levy & Ellis, 2006). The literature also serves to channel the research, in that it indicates the type of study or studies that are appropriate based upon the nature of the problem driving the study. Likewise, the literature provides clear guidance on the specific methods to be followed in conducting a study of a given type. Although originality is of great value in scholarly work, it is usually not rewarded when applied to the research methods. Ignoring the wisdom contained in the existing body of knowledge can cause the novice researcher, at the least, a great deal of added work establishing the validity of the study. From an entirely practical perspective, the nature of the data available to the researcher serves as a final filter in determining the type of study to conduct. The type of data available should be considered a necessary, but certainly not sufficient, consideration for selecting research methods. The data should never supersede the necessity of a research- worthy problem serving as the anchor and the existing body of knowledge serving as the foundation for the research. The absence of the ability to gather the necessary data can, however, certainly make a study based upon research methods directly driven by a well-conceived problem and supported by current literature com- pletely futile. Every solid research study must use data in order to validate the proposed theory. As a result, novice researchers should understand the centrality
  • 16. of access to data for their study success. Access to data refers to the ability of the researcher to actually collect the desired data for the study. Without access to data, it is impossible for a researcher to make any meaningful conclusions on the phenomena. Novice researchers should be aware that their access to data will also imply what type of methodology they will be using and what type of research, eventually they will conduct. Figure 1 illustrates the interaction among the research-worthy problem (P), the existing body of knowledge documented in the peer-reviewed literature (L), and the data avail- able to the researcher (D). The research-worthy problem (P) serves as the input to the process of selecting the appropriate type of research to conduct; the valid peer-reviewed literature (L) is the key funnel that limits the range of applicable research approaches, based on the body of knowl- edge; the data (D) available to the researcher serves as the final filter used to identify the specific study type. The balance of this paper explores the constructs underlying scholarly research in two aspects. The first section examines some of the types of studies most commonly used in information sys- tems research. The second section explores vital considerations for research methods that apply across all study types. Ellis & Levy 325
[Figure 1. The PLD Model for Deriving Study Type: the research-worthy problem (P) feeds into the valid peer-reviewed literature (L), which funnels to the data available (D), yielding the study type.]

Types of Research
There are a number of different ways to distinguish among types of research. The type of data available is certainly one vital aspect (Gay, Mills, & Airasian, 2006; Leedy & Ormrod, 2005); different research approaches are appropriate for quantitative data – precise, numeric data derived from a reduced variable – than for qualitative data – complex, multidimensional data derived from a natural setting. Of equal importance is the nature of the problem being addressed by the research (Isaac & Michael, 1981). Some problems, for example, are relatively new and require exploratory types of research, while more mature problems might better be addressed by descriptive or evaluative (hypothesis testing) approaches (Sekaran, 2003). Table 1 presents an overview of how the research approaches most commonly used in information systems (IS) are categorized. The subsections following Table 1 briefly explore each of these major research approaches.

Table 1: Key Categories of Research
Approach | Most common type of data | Stage of problem | Categories of Theory
Experimental | Quantitative | Evaluation | Testing or revising
Causal-comparative | Quantitative | Evaluation | Testing or revising
Historical | Quantitative or Qualitative | Description | Testing or revising
Developmental | Quantitative and qualitative | Description | Building or revising
Correlational | Quantitative | Description | Testing
Case study | Qualitative | Exploration | Building or revising
Grounded theory | Qualitative | Exploration | Building
Ethnography | Qualitative | Descriptive | Building
Action research | Quantitative and qualitative | Applied exploration | Building or revising

In general, research studies can be classified into three categories: theory building, theory testing, and theory revising. Theory building refers to research studies that aim at building a theory where no prior solid theory existed to explain a phenomenon or specific scenario. Theory testing refers to research studies that aim at validating (i.e., testing) existing theories in a new context. Theory revising refers to research studies that aim at revising an existing theory.

Experimental
The essence of experimental research is determining if a cause-effect relationship exists between one factor or set of factors – the independent variable(s) – and a second factor or set of factors – the dependent variable(s) (Cook & Campbell, 1979). In an experiment, the researcher takes control of and manipulates the independent variable, usually by randomly assigning participants to two or more different groups that receive different treatments or implementations of the independent
variable. The experimenter measures and compares the performance of the participants on the dependent variable to determine if changes in the independent variables are very likely to cause similar changes in performance on the dependent variable. In medical settings, this type of research is very common. However, in many research fields it is somewhat difficult to control all the variables in the experiments, especially when dealing with a research area that is related to organizations and institutions. For that reason, the use of experiments in IS is somewhat limited, and a less restrictive type of experiment is used; such a type is called a quasi-experiment (Cook & Campbell, 1979). Similar to experiments, in quasi-experiments the researcher is attempting to determine if a cause-effect relationship exists between one factor or set of factors – the independent variable(s) – and a second factor or set of factors – the dependent variable(s). However, in quasi-experiments the researcher is unable to control all the variables in the experimentation, but most variables are controlled. An example of an experimental study would be research into which of two methods of inputting text in a personal digital assistant, soft-key or handwriting recognition, is more accurate. The independent variable would be method of text input. The dependent variable might be a count of the number of entry errors, and the comparison based on the mean of the group using the soft-key method with the mean of the group that used handwriting recognition input. An example of experimental research can be found at Cockburn, Savage, and Wallace (2005).
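To make the group-comparison logic concrete, the following is a minimal sketch (not taken from Ellis and Levy's paper) of how the soft-key versus handwriting comparison could be analyzed with an independent-samples t-test; the error counts, group sizes, and Poisson rates are invented purely for illustration, and numpy and scipy are assumed to be available:

    # Hypothetical experimental comparison: mean entry errors by input method.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    softkey_errors = rng.poisson(lam=4.0, size=30)      # invented error counts, soft-key group
    handwriting_errors = rng.poisson(lam=6.0, size=30)  # invented error counts, handwriting group

    t_stat, p_value = stats.ttest_ind(softkey_errors, handwriting_errors)
    print(f"soft-key mean errors: {softkey_errors.mean():.2f}")
    print(f"handwriting mean errors: {handwriting_errors.mean():.2f}")
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # a small p suggests the input methods differ

A statistically significant difference here would support a cause-effect claim only because group assignment was random; the same computation applied to naturally occurring groups would be causal-comparative, not experimental.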
Causal-Comparative
As with experimental studies, causal-comparative research focuses on determining if a cause-effect relationship exists between one factor or set of factors – the independent variable(s) – and a second factor or set of factors – the dependent variable(s). Unlike an experiment, the researcher does not take control of and manipulate the independent variable in causal-comparative research but rather observes, measures, and compares the performance on the dependent variable or variables of subjects in naturally-occurring groupings based on the independent variable. An example of a causal-comparative study would be research into the impact monetary bonuses had on knowledge sharing behavior as exhibited by contributions to a company knowledge bank. The independent variable would be “monetary bonus,” and it might have two levels (i.e., “yes” and “no”). The dependent variable might be a count of the number of contributions, and the comparison based upon an examination of the mean number of knowledge-base contributions made per employee in companies that provided a monetary bonus versus the mean number of contributions made per employee in companies that did not provide a bonus. Since the researcher did not
assign companies to the “bonus” or “no bonus” categories, this study would be causal-comparative, not experimental. An example of causal-comparative research can be found at Becerra-Fernandez, Zanakis, and Walczak (2002), who developed a knowledge discovery technique using neural network modeling to classify a country’s investing risk based on a variety of independent variables.
Case Study
A case study is “an empirical inquiry that investigates a contemporary phenomenon within its real life context using multiple sources of evidence” (Noor, 2008, p. 1602). The evidence used in a case study is typically qualitative in nature and focuses on developing an in-depth rather than broad, generalizable understanding. Case studies can be used to explore, describe, or explain phenomena by an exhaustive study within its natural setting (Yin, 1984). An example of a case study can be found in the study by Ramim and Levy (2006), who described the issues related to the impact of an insider’s attack combined with novice management on the survivability of the e-learning systems of a small university.
Historical
Historical research utilizes interpretation of qualitative data to explain the causes of change through time. This type of research is based upon the recognition of a historical problem or the identification of a need for certain historical knowledge, and generally entails gathering as much relevant information about the problem or topic as possible. The research usually begins with the
formation of a hypothesis that tentatively explains a suspected relationship between two or more historical factors and proceeds to a rigorous collection and organization of usually qualitative evidence. The verification of the authenticity and validity of such evidence, together with its selection, organization, and analysis, forms the basis for this type of research. An example of historical research can be found in the study by Grant and Grant (2008), who conducted a study to test the hypothesis that a new generation in knowledge management was emerging.
Correlational
The primary focus of the correlational type of research is to determine the presence and degree of a relationship between two factors. Although correlational studies are in a superficial way similar to causal-comparative research – both types of study focus on analyzing quantitative data to determine if a relationship exists between two variables – the difference between the two cannot be ignored. Unlike causal-comparative research, in correlational studies there is no attempt to determine if a cause-effect relationship exists (variable x causes changes in variable y). The goal for correlational studies is to determine if a predictive relationship exists (knowing the value of variable x allows one to predict the value of variable y). At a practical level, there is, therefore, no distinction made between independent and dependent variables in correlational research. An example of a simple correlational study would be research into the relationship between age and willingness to make e-commerce purchases. The two variables of interest would be age and the number of e-commerce purchases made over a given period of time. The comparison would be based upon an examination of the age of each subject in the study and the number of e-commerce purchases made by that subject. Since the researcher did not control either of the variables or attempt to determine if age caused changes in purchases, just if age could be used to predict behavior, the study would be correlational, not experimental or causal-comparative. An example of correlational research can be found in Cohen and Ellis (2003).
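As a rough illustration (again with invented data, not drawn from the cited studies), the predictive relationship in such a study could be summarized with a correlation coefficient; the assumed linear relationship and noise level below are arbitrary:

    # Hypothetical correlational analysis: does age predict e-commerce purchases?
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    age = rng.integers(18, 70, size=100)                   # invented subject ages
    noise = rng.normal(0.0, 3.0, size=100)
    purchases = np.maximum(0.0, 20.0 - 0.2 * age + noise)  # invented purchase counts

    r, p_value = stats.pearsonr(age, purchases)
    print(f"r = {r:.2f}, p = {p_value:.4f}")  # sign and magnitude of r describe the predictive relationship

Note that neither variable is designated independent or dependent; r only quantifies how well one variable predicts the other.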
Developmental
Developmental research attempts to answer the question: How can researchers build a ‘thing’ to address the problem? It is especially applicable when there is not an adequate solution to even test for efficacy in addressing the problem, and presupposes that researchers don’t even know how to go about building a solution that can be tested. Developmental research generally entails three major elements:
• Establishing and validating criteria the product must meet
• Following a formalized, accepted process for developing the product
• Subjecting the product to a formalized, accepted process to determine if it satisfies the criteria.
An example of developmental research would be Ellis and Hafner (2006), which detailed the development of an asynchronous environment for project-based collaborative learning experiences. Developmental research is distinguished from product development by: a focus on complex, innovative solutions that have few, if any, accepted design and development principles; a comprehensive grounding in the literature and theory; empirical testing of the product’s practicality and effectiveness; as well as thorough documentation, analysis, and reflection on processes and outcomes (van den Akker, Branch, Gustafson, Nieveen, & Plomp, 2000).
Grounded Theory
Grounded Theory is defined as “a systematic, qualitative procedure used to generate theory that explains, at a broad conceptual level, a process, an action, or interaction about substantive topic” (Creswell, 2005, p. 396). Grounded theory is used when theories currently documented in the literature fail to adequately explain the phenomena observed (Leedy & Ormrod, 2005). In such cases, revisions for existing theory may not be valid, as the fundamental assumptions behind such theories may be flawed given the context or data at hand. Table 2 outlines the three key types of grounded theory design. According to Creswell, “choosing among the three approaches requires several considerations” (p. 403). He noted that such considerations depend on the key emphasis of the study, such as: Is the aim of the study to follow given
procedures? Is the aim of the study to follow predetermined categories? What is the position of the researcher? An example of Grounded Theory in the context of information systems includes the study by Oliver, Whymark, and Romm (2005). Oliver et al. used Grounded Theory to develop a conceptual model of enterprise-resource planning (ERP) systems adoption based on the various types of organizational justifications and reported motives.

Table 2: Types of Grounded Theory Design (Creswell, 2005)
Type of Grounded Theory Design | Definition
Systematic Design | “emphasizes the use of data analysis steps of open, axial, and selective coding, and the development of a logic paradigm or a visual picture of the theory generated” (Creswell, 2005, p. 397)
Emerging Design | “letting the theory emerge from the data rather than using specific, preset categories” (Creswell, 2005, p. 401)
Constructivist Design | “focus is on the meanings ascribed by participants in a study…more interested in the views, values, beliefs, feelings, assumptions, and ideologies of individuals than in gathering facts and describing acts” (Creswell, 2005, p. 402)
Ethnography
The study of ethnography aims at “a particular person, program, or event in considerable depth. In an ethnography, the researcher looks at an entire group – more specifically, a group that shares a common culture – in depth” (Leedy & Ormrod, 2005, p. 151). According to Creswell (2005), ethnographic research deals with an in-depth qualitative investigation of a group that shares a common culture. He indicated that ethnography is best used to explain various issues within a group of individuals that have been together for a considerable length of time and have, therefore, developed a common culture. Ethnographic research also provides a chronological collection of events related to a group of individuals sharing a common culture. Beynon-Davies (1997) outlined the use of ethnographic research in the context of system development. He noted that for IS researchers, ethnographic research may provide value in the area of IS development, specifically in the process of capturing tacit knowledge during the system development life cycle (SDLC) (Beynon-Davies). Crabtree (2003) noted that “ethnography is an approach that is increasing interest to the designers of collaborative computing systems. Rejecting the use of theoretical frameworks and insisting instead on a rigorously descriptive mode of research” (p. ix). However, criticism of Crabtree’s advocacy of ethnography in information systems research has also been voiced (Alexander, 2003).
Action Research
Action research is defined as “a type of research that focuses on finding a solution to a local problem in a local setting” (Leedy & Ormrod, 2005, p. 114). Action research is unique in its approach, as the researchers themselves are part of the practitioner group that faces the actual problem the research is trying to address (Creswell, 2005). Additionally, the aim of action research is to investigate a localized and practical problem. According to DeLuca, Gallivan, and Kock (2008), there are five key steps in action research: a) diagnosing the problem; b) planning the action; c) taking the action; d) evaluating the results; and e) specifying lessons learned for the next cycle. During the course of all five steps of the action research, “researchers and practitioners collaborate during each step” (DeLuca et al., p. 49).
Fundamentals of Research Methods
For each study type there is an accepted methodology documented in texts (Gay et al., 2006; Isaac & Michael, 1981; Leedy & Ormrod, 2005; Yin, 1984) and exemplified in the literature (Levy & Ellis, 2006). As a first step in establishing the value of a proposed study, the novice researcher is well advised to closely follow the template for the study type contained in the text and model her or his research methods after similar studies reported in the literature. Regardless of the type of study being conducted, there are a number of important factors that must be accommodated in an effective description of the research methods. In brief, the description must provide a detailed, step-by-step description of how the study will be conducted, answering the vital “who, what, where, when, why, and how” questions:
1. What is going to be done
2. Who is going to do each thing to be done
3. How will each thing to be done be accomplished
4. When, and in what order, will the things to be accomplished actually be done
5. Where will those things be done
6. Why – supported by the literature – for the answers to the What, Who, How, When, and Where
  • 30. the investigation (Mertler & Vannatta, 2001). In general, research questions are “specific ques- tions that researchers seek to answer” (Creswell, 2005, p. 117). According to Maxwell (2005), “research questions state what you want to learn” (p. 69). A research investigation may have one or more research questions regardless of the specific type of the research including qualitative, quantitative, and mixed method types of research. Most quality peer-reviewed studies will have a specific section that highlights the research questions investigated. In most other published work that don’t have a specific section that highlights it, the research questions will appear either at the end of the problem statement or right after the literature review section. Maxwell suggested that a good research question is one that will point the researcher to the information that will lead him/her to understand what he/she set forth to investigate. According to Ellis and Levy (2008), “in order for the research to be at all meaningful, there has to be an identifiable connection be- tween the answers to the research questions and the research problem inspiring the study” (p. 20). However, research questions shouldn’t be created in a vacuum, but be strongly influenced by quality literature is suggesting about the phenomena (Berg, 1998). Moreover, the exact wording used to note the research questions is vital as the accuracy and appropriateness of the research question determine the methodology to be used (Mertler & Vannatta, 2001). The nature of the research questions will be dependent on the type of study being conducted. Studies based on quantitative data will generally be driven by
  • 31. research questions that are formu- lated on the confirmatory and predictive nature, while studies based on qualitative data will be more likely driven by research questions that are formulated on the exploratory and interpretive nature. Examples of quantitative research questions in the context of information systems include: - To what extent does users’ perceived usefulness increases the odds of their e-commerce usage? - Do computer self-efficacy and computer anxiety have a significant difference for males and females when using e-learning systems? - What are the contributions of users’ systems trust, deterrent severity, and motivation to their misuse of biometrics technology? - To what degree do team communication and team cohesiveness predict productivity of system development by virtual teams? Examples of qualitative research questions in the context of information systems include: - How does training help the implementation success of enterprise-wide information sys- tems? - Why do user involvement and user resistance help in the systems’ requirement gathering process?
  • 32. What are the systems characteristics that are valuable to users when using e-learning systems? - How do e-commerce users define information privacy? Ellis & Levy 331 Hypotheses One must keep in mind that “research questions are not the same as research hypotheses” (Maxwell, 2005, p. 69). In general, a hypothesis can be defined as a “logical supposition, a rea- sonable guess, an educated conjecture” about some aspect of daily life (Leedy & Ormrod, 2005, p. 6). In scholarly research, however, hypotheses are more than ‘educated guesses.’ A research hypothesis is a “prediction or conjecture about the outcome of a relationship among attributes or characteristics” (Creswell, 2005, p. 117). By convention, research is conservative and assumes the absence of a relationship among the attributes under consideration; hypotheses, therefore, are expressed in null terms. For example, if a study were to examine the impact interactive multime- dia animations have on the average amount of a purchase at an e-commerce site, the hypothesis would be stated: The average amount of purchase on an e- commerce site enhanced with interac- tive animations will not be different that the average amount of purchase on the same e- commerce site that is not enhanced with interactive animations.
  • 33. Not all types of research entail establishing and testing hypotheses. Research methods based upon quantitative data commonly test hypotheses; studies based upon qualitative data, on the other hand, explore propositions (Maxwell). Unlink hypotheses, propositions do predict a directionality for the results. If, for example, one were to examine consumer reaction to interactive animation on an e-commerce site, one might investigate the proposition that: Consumers will express a greater feeling of engagement and sat- isfaction when visiting e-commerce sites enhanced with interactive animations than similar sites that lack the enhancement. Acknowledge Assumptions, Limitations, and Delimitations For any given research investigation there are underlying assumptions, limitations, and delimita- tions (Creswell, 2005). According to Leedy and Ormrod (2005), assumptions, limitations, and delimitations are critical components of a viable research proposal; without these considerations clearly articulated, evaluators may raise some valid questions regarding the credibility of the pro- posal. The following three sub-sections provide definition and examples for each term. Assumptions Assumptions serve as the basic foundation of any proposed research (Leedy & Ormrod, 2005) and constitute “what the researcher takes for granted. But taking things for granted may cause much misunderstanding. What [researchers] may tacitly assume, others may never have consid-
But taking things for granted may cause much misunderstanding. What [researchers] may tacitly assume, others may never have considered” (Leedy & Ormrod, p. 62). Moreover, assumptions can be viewed as something the researcher accepts as true without concrete proof. Essentially, there is no research study without a basic set of assumptions (Berg, 1998). According to Williams and Colomb (2003), identifying the assumptions behind a given research proposal is one of the hardest issues to address, especially for novice researchers. Such difficulties emerge because, by nature, “we all take our deepest beliefs for granted, rarely questioning them from someone else’s point of view” (Williams & Colomb, p. 200). It is important for novice researchers to learn how to explicitly document their assumptions in order to ensure that they are aware of those things taken as givens, rather than trying to hide or obscure them from the reader. Explicitly documenting the research assumptions may help reduce misunderstanding of, and resistance to, a proposed research study, as it demonstrates that the research proposal has been thoroughly considered (Leedy & Ormrod, 2005). To identify the assumptions behind a proposal, researchers must ask themselves the following question: “what do I believe that my readers must also believe (but may not) before they will think that my reasons are relevant to my claims?” (Williams & Colomb, p. 200).
Examples of assumptions researchers make include:
- Participants in the study will make a sincere effort to complete the assigned tasks.
- The students participating in the Internet-based course have a basic familiarity with the personal computer and the use of the World Wide Web.

Limitations

Every study has a set of limitations (Leedy & Ormrod, 2005), or “potential weaknesses or problems with the study identified by the researcher” (Creswell, 2005, p. 198). A limitation is an uncontrollable threat to the internal validity of a study. As described in greater detail below, internal validity refers to the likelihood that the results of the study actually mean what the researcher indicates they mean. Explicitly stating the research limitations is vital in order to allow other researchers to replicate or expand on a study (Creswell, 2005). Additionally, by explicitly stating the limitations of the research, a researcher can help other researchers “judge to what extent the findings can or cannot be generalized to other people and situations” (Creswell, 2005, p. 198). Examples of limitations researchers may encounter include:
- All subjects in the study will be volunteers who may withdraw from the study at any time. The participants who finish the study might not, therefore, be truly representative of the population.
- The members of the expert panel that will validate the proficiency survey instrument will be drawn from the faculty of … and may not truly represent universally accepted expert opinion.

Delimitations

Delimitations refer to “what the researcher is not going to do” (Leedy & Ormrod, 2005). In scholarly research, the goals of the research outline what the researcher intends to do; without the delimitations, the reader will have difficulty understanding the boundaries of the research. In order to constrain the scope of the study and make it more manageable, researchers should outline in the delimitations the factors, constructs, and/or variables that were intentionally left out of the study. Delimitations impact the external validity, or generalizability, of the results of the study. Examples of delimitations include:
- Participation in the study was delimited to only males aged 25-45 who had made a purchase via the internet within the past 12 months; generalization to other age groups or females may not be warranted.
- This study examined attrition rates in MBA programs offered in continuing education departments of public colleges and universities; generalization to other educational programs or similar programs offered in private institutions may not be warranted.

Establish Reliability and Validity
Every study must address threats to validity and reliability (Leedy & Ormrod, 2005). Although the concepts of validity and reliability originated in quantitative research approaches, in recent years validity and reliability have been addressed in qualitative and mixed-methods approaches as well (Berg, 1998; Maxwell, 2005). According to Leedy and Ormrod (2005), “the validity and reliability of your measurement instruments influences the extent to which you can learn something about the phenomenon you are studying…and the extent to which you can draw meaningful conclusions from your data” (p. 31). The following two sections define and outline the key types of validity and reliability related to common research investigations. Establishing an approach that follows published methods to address validity and reliability issues in a research proposal may drastically increase the overall acceptance of the research proposal.

Reliability

Reliability is defined as “the consistency with which a measuring instrument yields a certain result when the entity being measured hasn’t changed” (Leedy & Ormrod, 2005, p. 31). According to Straub (1989), researchers should try to answer the following question in an attempt to address reliability: “do measures show stability across the unit of observation? That is, could measurement error be so high as to discredit the findings?” (p. 150). Reliability can be established in four different ways: equivalency, stability, inter-rater, and internal consistency (Carmines & Zeller, 1991).

Equivalency reliability. Equivalency reliability is concerned with how closely measurements taken with one instrument match those taken with a second instrument under similar conditions. Equivalency is often used to certify the reliability of a new measurement instrument or procedure by comparing the results of using that instrument with those obtained by using established instruments or processes. Equivalency is usually established through the use of a statistical correlation (Pearson’s r for linear correlation or Eta for non-linear correlation).

Stability reliability. Stability reliability – also known as test-retest reliability – is concerned with how consistent the results of measuring with a given instrument or process are over time. Stability is based on the assumption that, absent some identifiable explanation, the measurement should produce the same results today as last month and will produce the same results next month. Stability, like equivalency, is usually established through the use of a statistical correlation (Pearson’s r for linear correlation or Eta for non-linear correlation).

Inter-rater reliability. Inter-rater reliability focuses on the extent of agreement in the results of two or more individuals using the same measurement instrument or process. As with stability and equivalency, inter-rater reliability is usually established through the use of a statistical correlation (Pearson’s r for linear correlation or Eta for non-linear correlation).
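As a concrete illustration of the correlation-based approaches above, the sketch below estimates test-retest (stability) reliability with Pearson’s r; the respondent scores and the .70 rule of thumb in the closing comment are assumptions of the example, not prescriptions from the sources cited.

```python
# Illustrative sketch: estimating test-retest (stability) reliability by
# correlating two administrations of the same instrument, e.g., one month
# apart. The scores below are fabricated.
from scipy.stats import pearsonr

test = [34, 41, 28, 45, 39, 31, 48, 36, 42, 30]    # first administration
retest = [36, 40, 30, 44, 41, 29, 47, 35, 43, 31]  # second administration

r, p = pearsonr(test, retest)
print(f"test-retest reliability: r = {r:.2f} (p = {p:.4f})")
# A common (though not universal) rule of thumb treats r of roughly .70
# or above as acceptable stability.
```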
Internal consistency. Unlike the previous methods of establishing reliability, which are concerned with comparing the results of using an instrument or process with some external standard (another instrument, the same instrument over time, or the same instrument used by different people), internal consistency focuses on the level of agreement among the various parts of the instrument or process in assessing the characteristic being measured. In a 20-question survey measuring attitude toward knowledge sharing, for example, if the survey is internally consistent, there will be a strong correlation among the responses on all 20 questions. Internal consistency is also measured by statistical correlation, but with Cronbach’s α in place of Pearson’s r.
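Cronbach’s α is straightforward to compute from an item-response matrix, as sketched below using the standard formula based on item and total-score variances; the small fabricated matrix stands in for a real survey (a 20-item instrument would simply have 20 columns).

```python
# Illustrative sketch: Cronbach's alpha for an item-response matrix
# (rows = respondents, columns = items). Responses are fabricated.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_var.sum() / total_var)

responses = np.array([
    [4, 5, 4, 4, 5],
    [3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5],
    [2, 3, 2, 2, 3],
    [4, 4, 5, 4, 4],
    [3, 2, 3, 3, 2],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```

By common convention an α of roughly .70 or above is often read as acceptable internal consistency, though the appropriate threshold depends on the stakes of the instrument.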
Validity

Validity refers to a researcher’s ability to “draw meaningful and justifiable inferences from scores about a sample or population” (Creswell, 2005, p. 600). There are various types of validity associated with scholarly research (Cook & Campbell, 1979). Validity of an instrument refers to “the extent to which the instrument measures what it is supposed to measure” (Leedy & Ormrod, 2005, p. 31). Thus, researchers, when designing their study, must ask themselves “how might you be wrong?” (Maxwell, 2005, p. 105). Additionally, the validity of a study “depends on the relationship of your conclusions to reality” (Maxwell, 2005, p. 105). This section will define and outline the key validity issues. The two most common validity issues are internal validity and external validity.

Internal validity. Internal validity refers to the “extent to which its design and the data that it yields allow the researcher to draw accurate conclusions about cause-and-effect and other relationships within the data” (Leedy & Ormrod, 2005, pp. 103-104). According to Straub (1989), researchers should try to answer the following question in an attempt to address internal validity: “are there untested rival hypotheses for the observed effects?” (p. 150). Generally, establishing internal validity requires examining one or more of the following: face validity, criterion validity, construct validity, content validity, or statistical conclusion validity.

Face Validity. Face validity is based upon appearance: does the instrument or process seem to pass the test for reasonableness? Face validity is never sufficient by itself, but an informal assessment of how well the study appears to be designed is often the first step in establishing its validity.

Criterion Related Validity. Also known as instrumental validity, criterion related validity is based upon the premise that the processes and instruments used in a study are valid if they parallel those used in previous, validated research. In order to establish criterion related validity, it is necessary to draw strong parallels between as many particulars of the validated study – population, circumstances, instruments used, methods followed – as possible.

Construct Validity. Construct validity “is in essence an operational issue. It asks whether the measures chosen are true constructs describing the event or merely artifacts of the methodology itself” (Straub, 1989, p. 150). According to Straub, researchers should try to answer the following question in an attempt to address construct validity: “do measures show stability across methodology? That is, are the data a reflection of true scores or artifacts of the kind of instrument chosen?” (p. 150).

Content Validity. In survey-based research, the term content validity refers to “the degree to which items in an instrument reflect the content universe to which the instrument will be generalized” (Boudreau, Gefen, & Straub, 2001, p. 5). According to Straub (1989), researchers should try to answer the following question in an attempt to address content validity: “are instrument measures drawn from all possible measures of the properties under investigation?” (p. 150).

Statistical Conclusion Validity. Statistical conclusion validity refers to the “assessment of the mathematical relationships between variables and the likelihood that this mathematical assessment provides a correct picture of the covariation …(Type I and Type II error)” (Straub, 1989, p. 152). According to Straub, researchers should try to answer the following question in an attempt to address statistical conclusion validity: “do the variables demonstrate relationships not explainable by chance or some other standard of comparison?” (p. 150).
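One simple way to probe whether an observed relationship is “explainable by chance” is a permutation test, sketched below on fabricated data that loosely echo the team-communication research question given earlier; the variable names, effect size, sample size, and permutation count are all assumptions of this example rather than anything prescribed by Straub.

```python
# Illustrative sketch: a permutation test asking whether an observed
# correlation could plausibly arise by chance alone. All data fabricated.
import numpy as np

rng = np.random.default_rng(7)
communication = rng.normal(size=40)
# Fabricate a productivity score that partly depends on communication
productivity = 0.5 * communication + rng.normal(scale=0.9, size=40)

observed_r = np.corrcoef(communication, productivity)[0, 1]

# Build the chance-only distribution by repeatedly shuffling one variable
permuted_r = np.array([
    np.corrcoef(rng.permutation(communication), productivity)[0, 1]
    for _ in range(5000)
])
p_value = np.mean(np.abs(permuted_r) >= abs(observed_r))
print(f"observed r = {observed_r:.2f}, permutation p = {p_value:.4f}")
# A small p suggests the covariation is unlikely to be chance alone,
# subject to the Type I error rate the researcher is willing to accept.
```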
External validity. External validity refers to the “extent to which its results apply to situations beyond the study itself…the extent to which the conclusions drawn can be generalized to other contexts” (Leedy & Ormrod, 2005, p. 105). Additionally, external validity addresses the “generalizability of sample results to the population of interest, across different measures, persons, settings, or times. External validity is important to demonstrate that research results are applicable in natural settings, as contrasted with classroom, laboratory, or survey-response settings” (King & He, 2005, p. 882).

Summary

One of the major challenges facing the novice researcher is matching the research she or he proposes with a research method that is appropriate and will be accepted by the scholarly community. The material presented in this paper is certainly not intended to be the ending point in the process of establishing the research methods for a given study.
The novice researcher is encouraged, even expected, to augment this material by referring to one or more of the texts and research examples cited. This paper does present a foundation upon which such a decision can be based:
1. Developing the PLD, a model for selection of research approach based upon the problem driving the study, the body of knowledge documented in peer-reviewed literature, and the data available to the researcher;
2. Identifying, in brief, several of the research approaches commonly used in information systems studies;
3. Exploring several of the important terms and constructs that apply to scholarly research, regardless of the specific approach selected.

References

Alexander, I. (2003). Designing collaborative systems: A practical guide to ethnography. European Journal of Information Systems, 12(3), 247-249.
Becerra-Fernandez, I., Zanakis, S. H., & Walczak, S. (2002). Knowledge discovery techniques for predicting country investment risk. Computers & Industrial Engineering, 43(4), 787-800.
Berg, B. L. (1998). Qualitative research methods for the social sciences (3rd ed.). Boston, MA: Allyn & Bacon.
Beynon-Davies, P. (1997). Ethnography and information systems development: Ethnography of, for and within IS development. Information and Software Technology, 39(8), 531-540.
Boudreau, M.-C., Gefen, D., & Straub, D. W. (2001). Validation in information systems research: A state-of-the-art assessment. MIS Quarterly, 25(1), 1-16.
Cockburn, A., Savage, J., & Wallace, A. (2005). Tuning and testing scrolling interfaces that automatically zoom. Proceedings of the Computer-Human Interaction 2005 Conference, Portland, Oregon, pp. 71-80.
Cohen, M. S., & Ellis, T. J. (2003). Predictors of success: A longitudinal study of threaded discussion forums. Proceedings of the Frontiers in Education Conference, Boulder, Colorado, pp. T3F-14–T3F-18.
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design & analysis issues for field settings. Boston, MA: Houghton Mifflin Company.
Crabtree, A. (2003). Designing collaborative systems: A practical guide to ethnography. Berlin: Springer-Verlag.
Creswell, J. W. (2005). Educational research: Planning, conducting, and evaluating quantitative and qualitative research (2nd ed.). Upper Saddle River, NJ: Pearson.
DeLuca, D., Gallivan, M. J., & Kock, N. (2008). Furthering information systems action research: A post-positivist synthesis of four dialectics. Journal of the Association for Information Systems, 9(2), 48-72.
Ellis, T. J., & Hafner, W. (2006). A communication environment for asynchronous collaborative learning. Proceedings of the 37th Hawaii International Conference on System Sciences, Big Island, Hawaii, pp. 3a-3a.
Ellis, T. J., & Levy, Y. (2008). A framework of problem-based research: A guide for novice researchers on the development of a research-worthy problem. Informing Science: The International Journal of an Emerging Transdiscipline, 11, 17-33. Retrieved from http://inform.nu/Articles/Vol11/ISJv11p017-033Ellis486.pdf
Gay, L. R., Mills, G. E., & Airasian, P. (2006). Educational research: Competencies for analysis and applications (8th ed.). Upper Saddle River, NJ: Pearson.
Grant, K. A., & Grant, C. T. (2008). Developing a model of next generation knowledge management. Issues in Informing Science and Information Technology, 5, 571-590. Retrieved from http://proceedings.informingscience.org/InSITE2008/IISITv5p571-590Grant532.pdf
Isaac, S., & Michael, W. B. (1981). Handbook in research and evaluation. San Diego, CA: EdITS Publishers.
King, W. R., & He, J. (2005). External validity in IS survey research. Communications of the Association for Information Systems, 16, 880-894.
Leedy, P. D., & Ormrod, J. E. (2005). Practical research: Planning and design (8th ed.). Upper Saddle River, NJ: Prentice Hall.
Levy, Y., & Ellis, T. J. (2006). A systems approach to conduct an effective literature review in support of information systems research. Informing Science: The International Journal of an Emerging Transdiscipline, 9, 181-212. Retrieved from http://inform.nu/Articles/Vol9/V9p181-212Levy99.pdf
Maxwell, J. A. (2005). Qualitative research design: An interactive approach (2nd ed.). Thousand Oaks, CA: Sage Publications.
Mertler, C. A., & Vannatta, R. A. (2001). Advanced and multivariate statistical methods: Practical application and interpretation. Los Angeles, CA: Pyrczak Publishing.
Noor, K. (2008). Case study: A strategic research methodology. American Journal of Applied Sciences, 5(11), 1602-1604.
Nunamaker, J. F., Chen, M., & Purdin, T. D. M. (1991). Systems development in information systems research. Journal of Management Information Systems, 7(3), 89-106.
Oliver, D., Whymark, G., & Romm, C. (2005). Researching ERP adoption: An internet-based grounded theory approach. Online Information Review, 29(6), 585-604.
Ramim, M. M., & Levy, Y. (2006). Securing e-learning systems: A case of insider cyber attacks and novice IT management in a small university. Journal of Cases on Information Technology, 8(4), 24-35.
Sekaran, U. (2003). Research methods for business (4th ed.). Hoboken, NJ: John Wiley & Sons.
Straub, D. W. (1989). Validating instruments in MIS research. MIS Quarterly, 13(2), 147-170.
van den Akker, J., Branch, R. M., Gustafson, K., Nieveen, N., & Plomp, T. (2000). Design approaches and tools in education. Norwell, MA: Kluwer Academic Publishers.
Williams, J. M., & Colomb, G. G. (2003). The craft of argument (2nd ed.). New York: Longman Publishers.
Yin, R. K. (1984). Case study research: Design and methods. Newbury Park, CA: Sage Publications.

Biographies

Dr. Timothy Ellis obtained a B.S. degree in History from Bradley University, an M.A. in Rehabilitation Counseling from Southern Illinois University, a C.A.G.S. in Rehabilitation Administration from Northeastern University, and a Ph.D. in Computing Technology in Education from Nova Southeastern University. He joined NSU as Assistant Professor in 1999 and currently teaches computer technology courses at both the Masters and Ph.D. level in the School of Computer and Information Sciences.
Prior to joining NSU, he was on the faculty at Fisher College in the Computer Technology department and, prior to that, was a Systems Engineer for Tandy Business Products. His research interests include multimedia, distance education, and adult learning. He has published in several technical and educational journals including Catalyst, Journal of Instructional Delivery Systems, and Journal of Instructional Multimedia and Hypermedia. His email address is [email protected] His main website is located at http://www.scis.nova.edu/~ellist/

Dr. Yair Levy is an associate professor at the Graduate School of Computer and Information Sciences at Nova Southeastern University. During the mid to late 1990s, he assisted NASA in developing e-learning systems. He earned his Bachelor’s degree in Aerospace Engineering from the Technion (Israel Institute of Technology). He received his MBA with MIS concentration and Ph.D. in Management Information Systems from Florida International University.
His current research interests include the cognitive value of IS, online learning systems, effectiveness of IS, and cognitive aspects of IS. Dr. Levy is the author of the book “Assessing the Value of e-Learning Systems.” His research publications appear in IS journals, conference proceedings, invited book chapters, and encyclopedias. Additionally, he has chaired and co-chaired multiple sessions/tracks at recognized conferences. Currently, Dr. Levy is serving as the Editor-in-Chief of the International Journal of Doctoral Studies (IJDS). He is also serving as an associate editor for the International Journal of Web-based Learning and Teaching Technologies (IJWLTT), and as a member of the editorial review or advisory boards of several refereed journals. Additionally, Dr. Levy has been serving as a referee research reviewer for numerous national and international scientific journals and conference proceedings, as well as MIS and Information Security textbooks. He is also a frequent speaker at national and international meetings on MIS and online learning topics. To find out more about Dr. Levy, please visit his site: http://scis.nova.edu/~levyy/