The GJBE (http://novainteru.com/niu-journals), Volume 1, Issue 1 (September 2016)
A Comparative Essay of Quantitative Research versus Qualitative Research
Article Submitted for Publishing in the Global Journal of Business and Economics (GJBE)
Gregoire M Nleme
Doctorate of Business Administration Candidate
Email: gnleme@novainteru.com
September 2016
Abstract
In this paper, as a practitioner, I describe quantitative and qualitative inquiries. I explain some of their differences and similarities. I emphasize the most important differences that researchers should consider when selecting one of the methods or when running a mixed study. I list differences in worldviews, research design, research processes, reliability assurance, and validity assurance. I propose a process flowchart for each type of inquiry. The purpose of the essay is to give researchers, and primarily doctoral students, a review of the differences and similarities between the two methods of inquiry.
The Global Journal of Business and Economics
A NIU Online journal
Many researchers complete qualitative studies and have been doing so for decades. There is a multitude of literature on the design and execution of quantitative studies. Similarly, there is an abundance of literature on qualitative studies. In this essay, I briefly review the most important features of the two types of inquiry. I list the differences in worldview, research design, research process, and quality criteria. I also propose a simple process flowchart for each type of inquiry to clarify the difference between them. My expectation is that this essay can give a quick review to researchers, especially doctoral students in business or other social sciences, who are selecting between the two types of inquiry, who have to explain why they selected one of the methods instead of the other, or who are completing mixed-methods studies.
Quantitative Studies
World View
In quantitative inquiries, researchers may have a post-positivist worldview. In post-positivism, reality is not absolute, but researchers have to follow a well-structured process that ensures the reliability and validity of the study (Creswell, 2009). Although researchers' and participants' experiences and judgments matter, those experiences and judgments must not lead to biases that affect the results of the study. Researchers may also have an advocacy and participative worldview, believing that research should include all groups of society.
There are many authors who explain worldviews for qualitative and quantitative inquiries. I recommend reviewing Denzin and Lincoln (2011). Although their handbook is about qualitative research, it covers the different worldviews that researchers may consider for their study. I also recommend reviewing Leech and Onwuegbuzie (2011) and Onwuegbuzie, Johnson, and Lincoln (2009, 2011) because their articles clarified post-positivism and other worldviews in detail. Wahyuni (2012) is another author who reviewed worldviews for research methods and may complement the narrations of the former authors.
Research Design
Quantitative studies are deductive in that researchers first gather theories in the form of knowledge, assumptions, and hypotheses (Creswell, 2009). The researchers' main objective is to verify research claims that they refer to as hypotheses. The qualifier "quantitative" means that researchers base their analyses on numbers. The quantities here are values of statistics that researchers compute from the results of a true experiment, a quasi-experiment, or a survey on a sample of the whole population, or in some cases on the whole population. In a true experiment, researchers select participants randomly, while in a quasi-experiment the sample of participants is not random.
Different authors have covered quantitative research designs and methods. I have selected ones who covered quantitative research as a complement to qualitative research and vice versa; these authors revealed the limitations of quantitative research when researchers use it alone, a potential limitation that each researcher should be aware of and transparent about when completing a research study. Bradt, Burns, and Creswell (2013); Golicic and Davis (2012); and Harrison and Reilly (2011) covered examples of the contributions of mixing quantitative and qualitative methods to, respectively, service performance, supply chain performance, and marketing research performance. Cameron (2011), Eaves (2013), and Fielding (2010) revealed the complementary relationship between quantitative research and qualitative research; researchers may find tips in their articles that may enable them to implement each type of inquiry. Leech and Onwuegbuzie (2009); Onwuegbuzie, Johnson, and Collins (2009, 2011); and Terrell (2012) covered the differences between qualitative and quantitative research and explained the reasons why researchers may have to mix both methods.
Research Questions and Hypotheses
Green and Salkind (2007) covered many examples of hypothesis testing, reliability analyses, and validity analyses for quantitative surveys and experiments. Their book also comes with SPSS software for quantitative research. In quantitative inquiries, researchers have to formulate research questions. They must also identify the variables X that are critical for their research questions and formulate hypotheses for each of those critical variables. Let Xm be a measurable of the statistic X; Xm can be, for instance, a mean, a variance, a median, or a correlation coefficient. In a quantitative study, researchers generally have a null hypothesis H0 and an alternative hypothesis H1. They also generally have five main cases of hypotheses:
Case 1. H0: Xm = 0 and H1: Xm ≠ 0
Case 2. H0: Xm = 0 and H1: Xm > 0
Case 3. H0: Xm ≤ 0 and H1: Xm > 0
Case 4. H0: Xm = 0 and H1: Xm < 0
Case 5. H0: Xm ≥ 0 and H1: Xm < 0
Case 1 is a non-directional null hypothesis. Cases 2 and 4 have point null hypotheses, while cases 3 and 5 have directional null hypotheses. The signs =, ≠, >, ≥, <, and ≤ respectively mean equal, different, greater than, greater than or equal to, less than, and less than or equal to. Here are a few examples:
Case 1. H0: There is no significant relationship between the increase in the price of oil and the demand for sport utility vehicles (SUVs) in the United States (U.S.). H1: There is a significant relationship between the price of oil and the demand for SUVs in the U.S.
Here the measurable Xm will be the correlation coefficient between the price of oil and the demand for SUVs over successive months in the U.S.
Another example is the following:
Case 3. H0: The percentage of employees who approve of Mr. James' performance as CEO of MTC, Inc. is equal to or less than 30%. H1: The percentage of employees who approve of Mr. James' performance as CEO of MTC, Inc. is greater than 30%.
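The Case 3 example can be tested with a one-sided proportion test. The sketch below is a minimal illustration using only Python's standard library and the normal approximation to the binomial distribution; the employee counts are hypothetical, invented purely for this example.

```python
import math

def one_sided_proportion_test(successes, n, p0):
    """One-sided z test of H0: p <= p0 versus H1: p > p0,
    using the normal approximation to the binomial."""
    p_hat = successes / n
    se = math.sqrt(p0 * (1 - p0) / n)   # standard error under H0
    z = (p_hat - p0) / se
    # Upper-tail P-value from the standard normal distribution
    p_value = 0.5 * math.erfc(z / math.sqrt(2))
    return z, p_value

# Hypothetical data: 42 of 100 surveyed employees approve the CEO's performance
z, p = one_sided_proportion_test(42, 100, 0.30)
print(f"z = {z:.2f}, P-value = {p:.4f}")
# Reject H0 at the 5% significance level when the P-value is below 0.05
print("reject H0" if p < 0.05 else "fail to reject H0")
```

With these hypothetical counts the P-value falls below 0.05, so the researcher would reject H0 and conclude that approval exceeds 30%.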
Sampling
In quantitative inquiries, researchers may prefer random samples to minimize any significant effect of a particular small group of participants on the measurements (Creswell, 2009). Researchers should explain whether they have a clustered sample, a stratified sample, or a purposeful sample. They should also minimize the use of convenience samples because results from convenience samples are difficult to generalize. Researchers must use power analyses to estimate the sample sizes of their experiments (Cohen, 1977; Lipsey, 1990), and for quantitative surveys they should use formulas that are available in the literature (Arsham, 2010; Israel, 2012). Typically, for surveys and experiments, a sample of 100 ensures an error of no more than 10% at a 95% confidence level.
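The rule of thumb that a sample of 100 keeps the error under 10% at 95% confidence follows from the worst-case margin-of-error formula for a proportion. A minimal sketch, assuming simple random sampling:

```python
import math

Z_95 = 1.96  # standard normal critical value for a 95% confidence level

def margin_of_error(n, p=0.5):
    """Margin of error for a proportion estimated from a simple random
    sample of size n; p = 0.5 gives the worst (largest) case."""
    return Z_95 * math.sqrt(p * (1 - p) / n)

def sample_size(margin, p=0.5):
    """Smallest n whose worst-case margin of error does not exceed `margin`."""
    return math.ceil(Z_95**2 * p * (1 - p) / margin**2)

print(f"n = 100 -> margin of error = {margin_of_error(100):.1%}")  # about 9.8%
print(f"5% margin -> n = {sample_size(0.05)}")  # 385
```

For experiments, this simple formula is no substitute for a proper power analysis (Cohen, 1977); it only bounds the sampling error of a single estimated proportion.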
I have selected some authors who may give researchers more insights on sampling and
survey research. Suri (2011) covered mainly purposeful sampling but also described other
sampling methods. Dommeyer, Lugo, Riddle, Tade, and Valdivia (2009) proposed a complete example of survey research. Ngondi, Reacher, Matthews, Brayne, and Emerson (2009) offered a review of the literature on survey research. Shah and Ward (2007) offered a full example of multi-item survey research in business management and offered good examples of the assurance of reliability using corrected total item correlation (CTIC).
Felix (2011) is a meaningful article in which the author compared the effect of scale widths on responses for multi-item quantitative survey scales. Holland, Smith, Hasselback, and Payne (2010) compared the response rates of surveys that participants completed on paper and returned by postal mail with the response rates of surveys that participants completed online. Homburg, Klarmann, Reimann, and Schilke (2012) proposed a deep analysis of the assurance of validity with multitrait-multimethod matrixes (MTMM) for multi-item quantitative surveys. Menictas, Wang, and Fine (2011) proposed methods for analyzing flat-line responses to online quantitative surveys, while Munos-Leiva, Sanchez-Fernandez, Montoro-Rios, and Ibanez-Zapata (2010) proposed methods for improving the response rates of online quantitative surveys relying on mail reminders and mail personalization.
Data Collection
In a quantitative inquiry, researchers may record measurements or answers in hard copies or digital files. They may also collect data by telephone, through the Internet using Internet survey tools, or by having participants fill out forms in data files and send the files to the researchers by email or postal mail. They should record the process that they follow to collect and store data for traceability, reliability, and external validity in case other researchers repeat the study in the future.
The Test Procedure
The statistic X may be continuous (variable) or discrete. For continuous statistics, researchers have to verify the normality of the distribution of the data in order to select the appropriate test statistics. To verify normality, they may use several goodness-of-fit tests such as the chi-square test, the Kolmogorov-Smirnov test, and the Cramér-von Mises test. Researchers often use the normal distribution test, the Student's t distribution test, and the Fisher distribution test for data that follow a normal distribution, and for tests of equality of means or tests of equality of variances. For discrete data, researchers may approximate binomial distributions with normal distributions when testing the difference between proportions.
However, even though the Central Limit Theorem leads to the use of normal distributions for statistics that are averages of groups of data, researchers may still use other distributions such as the uniform distribution or the Weibull distribution. When researchers do not know the means and the standard deviations of the populations and instead use estimates from samples to test the equality of means, they use Student's t statistics to test the null hypothesis. If researchers know the population's mean and standard deviation, they use the normal distribution to test the null hypothesis.
In case the data do not follow a normal distribution, researchers can still use the Kruskal-Wallis test or a median test to test the difference in medians in place of testing the equality of means. When testing, a researcher needs to define a confidence interval with a Type I error, which is the error of rejecting the null hypothesis H0 when H0 is actually true. Researchers often consider a small Type I error of 5% or 1% for respective confidence levels of 95% or 99%. Researchers may also use a P-value, in which case, in order to fail to reject the null hypothesis, the P-value must be greater than 0.05 (or 5%). The P-value is the probability, under the null hypothesis, of obtaining a test statistic at least as extreme as the one observed. Thus, if the P-value is very small (less than 5%), the observed result would have been very unlikely under the null hypothesis, and researchers reject H0.
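The test of equality of means described above can be sketched in a few lines. The example below implements a pooled two-sample t statistic from scratch using only Python's standard library; the two groups of measurements are hypothetical, and in practice researchers would use a statistics package such as SPSS rather than hand-rolled code.

```python
import math
from statistics import mean, variance  # variance() is the sample (n-1) variance

def pooled_t_statistic(a, b):
    """Two-sample t statistic with a pooled variance estimate,
    for H0: the two population means are equal."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical measurements from two treatment groups
group_a = [5.1, 4.9, 5.4, 5.0, 5.2]
group_b = [4.6, 4.4, 4.8, 4.5, 4.7]
t = pooled_t_statistic(group_a, group_b)
df = len(group_a) + len(group_b) - 2
print(f"t = {t:.2f} with {df} degrees of freedom")
# The two-tailed 5% critical value of Student's t with 8 df is about 2.306
print("reject H0" if abs(t) > 2.306 else "fail to reject H0")
```

Because the population standard deviations are estimated from the samples, the Student's t distribution, not the normal distribution, supplies the critical value here, exactly as described above.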
Reliability and Validity
Reliability is the extent to which research findings are consistent across different researchers (Ihantola & Kihn, 2011), while validity is the degree of rigor of the research findings (Zachariadis, Scott, & Barrett, 2013). Zachariadis et al. (2013) distinguished three classes of validity for quantitative studies:
Design validity. Design validity includes internal validity and external validity. Internal validity is the degree to which the participants' answers and the results of the study fulfill the intent of the study (Zachariadis et al., 2013). In quantitative studies, the absence of biases, the absence of outliers, and inferential validity ensure internal validity. External validity is the degree to which the study can be generalized over time and across different settings (Creswell, 2009). To ensure external validity, researchers need to write down the process that they follow to complete the study, select random samples, and enunciate the extent to which they can generalize the results of the study. Creswell (2009, pp. 163-165) lists several threats to internal and external validity and some actions that researchers may take to minimize those threats, including control groups and experimental groups subject to the same noise, the use of random samples, the removal of outliers, the use of large samples, and evenness of treatment between groups.
Measurement validity. Measurement validity includes reliability and construct validity. Research is reliable if it gives the same results when different researchers follow the exact same process over time. Researchers should verify the reliability of the scales they use with, for instance, Cronbach alpha analyses or inter-item correlation analyses. Construct validity is the ability of the variables of the research to capture what the researcher intended them to appraise. Construct validity is itself a contributor to internal validity. Researchers must be rigorous in their statistical processes, and they must verify the internal consistency of the instrument and processes using Cronbach alpha analyses or corrected total item correlation (Ihantola & Kihn, 2011; Malhotra, Mukhopadhyay, Xiaoyan, & Dash, 2012) to ensure both reliability and construct validity. It is also often necessary to perform repeatability and reproducibility studies (Thisse, 1998).
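The Cronbach alpha mentioned above is straightforward to compute: it is the number of items k times the ratio that compares the summed item variances with the variance of the total scores. A minimal sketch using only Python's standard library; the Likert-scale answers are hypothetical, invented for illustration.

```python
from statistics import pvariance  # population variance, applied consistently

def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale. `items` is a list of
    item-score lists, one inner list per item, aligned across respondents."""
    k = len(items)
    item_var_sum = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))

# Hypothetical 3-item Likert scale answered by 5 participants
items = [
    [4, 5, 3, 4, 4],   # item 1
    [4, 4, 3, 5, 4],   # item 2
    [3, 5, 3, 4, 4],   # item 3
]
alpha = cronbach_alpha(items)
print(f"Cronbach's alpha = {alpha:.2f}")
# A common rule of thumb treats alpha >= 0.7 as acceptable internal consistency
```

Higher alpha indicates that the items move together, that is, that the scale is internally consistent.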
Inferential validity. Inferential validity is the validity that statistical procedures and conclusions establish. The sample size, the significance of the rejection of or failure to reject the null hypothesis, P-value analyses, and effect sizes all enable researchers to make robust conclusions about the statistical analyses of the research data.
Quantitative Research Process Flow Diagram
Figure 1 is a process flow diagram for a typical quantitative study. The diagram summarizes for researchers the critical steps that they need to follow. I have briefly covered most of the steps in the previous lines, but I would like to emphasize the decision loop. Researchers need to decide whether they will have to complete a pilot study or repeatability and reproducibility (R&R) studies. R&R studies are necessary when using measurement tools on human beings or non-human objects, to make sure that researchers understand the weight of the measurement errors in the overall variation.
For experiments, a pilot study may be necessary to review the calibration of measurement tools and the execution of research processes. For quantitative surveys such as the ones that rely on multi-item questionnaires, researchers may complete a pilot study to clarify questions in case they believe that their questions are not clear enough. Having a second person review the questions may be enough, as long as the researcher considers the person a potential participant. When the intent of a pilot study is to identify redundant items on survey scales, a researcher may pass on such a pilot study and complete reliability analyses on the full experiment because the full experiment will probably have more participants and will enable the researcher to adjust the scale.
Figure 1. A Quantitative Study Process Flow Diagram
1. Researchers identify and/or formulate theories, knowledge, or observations for which they would like statistical confirmation. They formulate research questions and select a worldview.
2. Researchers formulate hypotheses.
3. Researchers plan the studies. They identify participants and materials; identify or create instruments; formulate the communication and record processes, data collection processes, data analysis processes, validity, and reliability; and select the method: experiments, quasi-experiments, or surveys.
4. Decision: pilot study or repeatability and reproducibility study? If yes, improve or verify the repeatability of the instrument, then return to step 3; if no, continue.
5. Run the studies (experiments, quasi-experiments, or surveys). Collect data and record the process steps.
6. Complete verification of the reliability of the instrument: Cronbach alpha analyses, total item correlation analyses, or other analyses. Complete reliability and validity analyses. Note: in the case of a new survey instrument (questionnaire), the instrument may be adjusted.
7. Test hypotheses. Remember to enunciate the assumptions.
8. Complete data analyses and draw lessons learned and a conclusion.
Qualitative Studies
Worldview
According to Creswell (2009), in qualitative research, researchers may have a social constructivist worldview in which they seek to understand the world and accept the presence of subjectivity in research. Researchers may also have an advocacy and participative worldview, in which case they believe that research has to include marginalized groups and should be aligned with a social or political agenda. Researchers may also have an advocacy and participative worldview in quantitative studies, and post-positivism cannot be excluded from qualitative research. Researchers may have a post-positivist view in qualitative survey research, in case they use structured interviews or focus groups. There are many authors who explain worldviews for qualitative and quantitative inquiries. Again, I recommend reviewing Denzin and Lincoln (2011), Leech and Onwuegbuzie (2011), and Onwuegbuzie et al. (2009, 2011).
Research Design
Qualitative studies are exploratory, meaning that researchers have to explore new knowledge, often starting from a need-to-know state or from problems that other researchers or practitioners have acknowledged. Because it is exploratory, qualitative research is inductive. Researchers have to observe, survey, or interview participants, or observe and study written documents or audio-visual materials. From their analyses of the documents or audio-visual materials, or of the participants' answers, researchers then identify themes, and finally propose a theory or lessons learned from their research study.
According to Creswell (2009), in qualitative research, researchers may explore an understanding of a phenomenon using open-ended questions; this is the phenomenology design. Another design is ethnography, in which researchers study views, interactions, behaviors, or actions within or among ethnic groups. Grounded theory is a design whose main purpose is for researchers to progressively establish a theory throughout the inquiry. The two other qualitative designs are case study research and narrative inquiry. In a case study, a researcher focuses on an organization, a group of individuals, or one individual to study a particular problem that involves the organization, the group, or the individual within a span of time. In narrative inquiries, the researchers let the participants tell their stories; researchers then analyze the participants' narratives, interpret them, find common themes, and draw conclusions or lessons learned.
Researchers may also review other authors to get more insight into qualitative research. Moustakas (1994) completed a renowned seminal work on phenomenology, and I would advise researchers who plan phenomenological research to read it. Denzin and Lincoln (2011) covered most qualitative research designs in their handbook, including phenomenology, grounded theory, ethnography, case study, and narrative inquiry.
Muscat et al. (2012) gave many tips on interviews, while Wahyuni (2012) discussed and gave some advice on the case method. Bansal and Corley (2012) explained qualitative research extensively, as did Hays and Wood (2011) when they discussed consensual qualitative research at length. Srnka and Koeszegi (2007) first described quantitative and qualitative research and then proposed a method for transforming qualitative data into quantitative data, an approach that fits a post-positivist worldview. Finally, Krivokapic-Skoko and Neill (2011) proposed innovative concepts for qualitative methods.
Research Questions
In qualitative research, researchers formulate main research questions and may break them down into different questions that they will ask participants, or that they will ask themselves about the setting while reviewing documents or audio-visual materials. Qualitative research questions are in general open-ended. Not having hypotheses does not mean that there are no initial assumptions in qualitative research. Researchers may elaborate the conceptual framework of their study in order to clarify its context.
For instance, a researcher may consider an automobile manufacturing business a cybernetic system. A cybernetic system should be able to adapt itself to changes. The researcher may therefore use cybernetic systems theory to support a qualitative study whose purpose is to find solutions that enable automobile manufacturers to adapt to market changes or internal changes.
Data Collection and Inquiry Procedures
Researchers may use purposeful sampling, convenience sampling, stratified sampling, cluster sampling, other sampling methods, and even random sampling in qualitative research. Although random sampling is not often necessary in qualitative research, researchers should still ensure that the people, documents, or materials that they include in their settings are representative of the population that the samples represent. For studies that include humans as participants, such as interviews or qualitative surveys, Blair and Conrad (2011) and DePaulo (2000) issued sample size guidelines. Typically, a sample of 30 participants is enough for interviews and qualitative surveys at a 95% confidence level.
According to Creswell (2009), in qualitative research, researchers often collect data through interviews, observations, analyses of documents, and analyses of audio-visual materials. Creswell (2009) also recommends that researchers use other data collection methods. For instance, researchers may use open-ended survey questions to collect qualitative information from participants.
The difference in research procedures between quantitative research and qualitative research is clear. When studying documents, researchers follow process steps that include observation, interpretation, and description of the observations. When administering open-ended questionnaires, researchers collect the participants' answers, analyze and interpret them, identify themes, and then draw conclusions. When completing interviews, researchers may record them in audio or video files; researchers may also take notes. Researchers must then analyze the interviews, taking into consideration the interview settings, the tones of the participants, and their facial expressions. With the development of information technology in the 2010s, researchers may also complete interviews on the Internet in a chat room setting.
Reliability and Validity
Bryman, Becker, and Sempik (2008) and Zachariadis et al. (2013) explained criteria for reliability and validity for quantitative research and for qualitative research, and discussed quality criteria for mixed research studies, including triangulation, which is the reliance on data from several sources to ensure reliability and validity. Zachariadis et al. (2013) distinguished three classes of validity for qualitative studies:
Design validity. Design validity includes descriptive validity, credibility, and transferability. Descriptive validity is the accuracy of the researchers' narratives about their observations or about the accounts of participants. Credibility is the extent to which people other than the researchers believe the processes and the findings of the research, while transferability is the extent to which the findings of the research can be transferred to other settings. Researchers need to record the processes that they follow to complete their research, be honest about the changes in the research settings, and be honest about the gaps between what they have planned and what they have completed.
Analytical validity. Analytical validity in qualitative studies includes theoretical validity,
dependability, consistency, and plausibility. Theoretical validity is the extent to which the theory
explains the relationships among data. Dependability is the degree to which researchers take into
account changes that occurred in the settings throughout the research process when they analyze
the data and enunciate their findings. Consistency is the degree to which other researchers can
easily verify individual steps within the research process while plausibility is the extent to which
the data explain the findings of the research.
Inferential validity. Inferential validity in qualitative studies includes interpretive validity and confirmability. Interpretive validity in qualitative research is the accuracy of the steps of the research process that the researcher describes, while confirmability is, according to Zachariadis et al. (2013), the extent to which others confirm the findings of the research or, according to Bryman et al. (2008), the degree to which the research is unbiased.
When compared to quantitative studies, researchers may notice that reliability in qualitative research includes interpretive validity, dependability, consistency, and descriptive validity. Bryman et al. (2008) mentioned replicability as another criterion for validity in qualitative research. Replicability, which is the degree to which researchers can replicate the study in a similar but possibly different setting, also ensures reliability.
Creswell (2009) explained that validity is one of the strengths of qualitative research and listed eight strategies to overcome threats to validity in qualitative research: triangulation of different data sources, member checking, rich and thick description of the research experiences, clarification of the researcher's own bias, presentation of negative information, spending longer time in the field, peer debriefing, and the use of an external auditor. When researchers record the progressive steps within their research process, they enable other researchers to follow those steps and eventually reach the same conclusions, assuming the settings remain the same and there are no other changes within the settings than the ones that occurred in the initial setting. Researchers may consider credibility as perceived reliability; however, credibility may be related to the status of the researchers in the field. Unless the findings and processes as described by the researchers are accurate, credibility could be erroneous.
When discussing validity in research, many authors often refer to internal validity and external validity. However, external validity is not easy to achieve in qualitative research because researchers may not often be able to generalize the study to different settings, which is the reason why researchers need to rely on other criteria such as the ones that Zachariadis et al. (2013) or Bryman et al. (2008) listed. Bryman et al. (2008) listed almost the same criteria for validity in qualitative research as the ones that Zachariadis et al. (2013) listed, but they did not separate them into the three classes of design validity, analytical validity, and inferential validity.
Qualitative Research Process Flow Diagram
Figure 2 is the process flowchart for a typical qualitative study. There is a loop for the pilot studies that are sometimes needed for structured interviews and qualitative surveys. However, having a colleague or a potential participant review the study questions may be enough for removing unclear questions. For unstructured interviews and other types of qualitative inquiries such as focus groups, document reviews, reviews of audio-visual materials, or narrative inquiries, researchers may not need a pilot.
The data analyses and the interpretative phase of a qualitative inquiry are critical because they affect the quality of the inquiry the most. Researchers have to use rich, thick descriptions to account for the detailed events that occur throughout the inquiry, and especially in the setting (Creswell, 2009). Researchers should also report, in a few cases, the exact words of the participants in order to let the participants speak directly to the readers, so the audience can get closer, at least in their minds, to the setting of the inquiry.
Planning the methods and steps that ensure the validity and reliability of the inquiry is also critical, as is executing those planned steps. Researchers need to record all the steps that they follow to complete the inquiry, including all unplanned steps as well as negative or unwanted events, to enable other researchers to replicate the study and verify its reliability.
Figure 2. A Qualitative Study Process Diagram. The flowchart shows the following steps:
1. Researchers identify the problems to be solved and the main research questions, and select a worldview.
2. Researchers select a design: phenomenology, grounded theory, ethnography, case study, or narrative inquiry.
3. Researchers plan the study: they identify participants and materials; formulate questions and instruments; define the communication, recording, data collection, and data analysis processes; and plan for validity and reliability. They select among interviews, qualitative surveys, analyses of documents, or analyses of audio-visual materials.
4. Pilot study? If yes, adjust the questionnaire or interview questions and return to planning; if no, proceed.
5. Run the study: interviews, qualitative surveys, analyses of documents, or analyses of audio-visual materials. Collect data, recording every step.
6. Interpret the data, narrate with rich, thick descriptions, and identify common themes.
7. Complete the data analyses, enunciate the theory and lessons learned, and draw a conclusion.
8. Complete the verification of validity and reliability. Ensure transparency of the process, and reveal the researchers' potential biases and immersion in the study.
Conclusion
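The flow of a qualitative study, including the pilot-study decision loop, can be sketched as a short program. This is only an illustrative sketch: the function name, the `needs_pilot` flag, and the step labels are assumptions introduced here, not part of the original process description.

```python
def run_qualitative_study(needs_pilot: bool, max_pilot_rounds: int = 3) -> list[str]:
    """Sketch of a qualitative study process; step names are illustrative."""
    steps = []
    steps.append("identify problem, research questions, and worldview")
    steps.append("select design (phenomenology, grounded theory, ...)")
    steps.append("plan participants, instruments, and data collection")
    # Pilot loop: adjust the questionnaire or interview questions, then re-check.
    rounds = 0
    while needs_pilot and rounds < max_pilot_rounds:
        steps.append("pilot study; adjust questions")
        rounds += 1
        needs_pilot = False  # assume one adjustment round suffices in this sketch
    steps.append("run study and collect data; record every step")
    steps.append("interpret data with rich, thick descriptions; identify themes")
    steps.append("verify validity and reliability; disclose researcher biases")
    steps.append("enunciate theory and lessons learned; draw a conclusion")
    return steps
```

Calling `run_qualitative_study(True)` inserts the pilot round before data collection, mirroring the "yes" branch of the decision diamond; `run_qualitative_study(False)` skips straight to running the study.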
In this paper, I have reviewed the major differences between quantitative inquiries and
qualitative inquiries, emphasizing differences in worldviews, research design, sampling,
inquiry procedures, validity, and reliability. My objective was to give an overview that may help
other researchers, especially doctoral students in business and other social sciences, identify
the areas they should focus on to complete their research studies. I have also referred to a few
authors. Many of the authors I cited on research design, research processes, reliability, and
validity are authors most researchers should read before completing their research, and their
work may also help clarify the differences between the two types of research. The references
included in their articles list many great authors who have spent years developing
and producing knowledge on qualitative and quantitative research. Because there are various
criteria for validity and reliability, researchers should review them, familiarize themselves with
them, and identify the ones that are most important for their particular study. The two process
flowcharts summarize the two types of inquiry and may help researchers assess their progress
toward completing their research study.
References
Arsham, H. (2010). Questionnaire design and survey sampling. Retrieved from
http://home.ubalt.edu/ntsbarsh/Business-stat/stat-data/Surveys.htm#rmarginerror
Bansal, P., & Corley, K. (2012). What's different about qualitative research? Academy of
Management Journal, 55, 509-513. doi:10.5465/amj.2012.4003
Blair, J., & Conrad, F. G. (2011). Sample size for cognitive interview pretesting. Public Opinion
Quarterly, 75, 636-658. doi:10.1093/poq/nfr035
Bradt, J., Burns, D. S., & Creswell, J. W. (2013). Mixed methods research in music therapy
research. Journal of Music Therapy, 50(2), 123-148. doi:10.1093/jmt/50.2.12
Bryman, A., Becker, S., & Sempik, J. (2008). Quality criteria for quantitative, qualitative, and
mixed methods research: A view from social policy. International Journal of Social
Research Methodology, 11, 261-276. doi:10.1080/13645570701401644
Cameron, R. (2011). Mixed methods research: The five Ps framework. Electronic Journal of
Business Research Methods, 9(2), 96-108. Retrieved from www.ejbrm.com
Cohen, J. (1977). Statistical power analysis for the behavioral sciences. New York, NY:
Academic Press.
Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods
approaches. Thousand Oaks, CA: Sage.
Denzin, N. K., & Lincoln, Y. S. (2011). The Sage handbook of qualitative research (4th ed.).
Thousand Oaks, CA: Sage Publications. Retrieved from books.google.com
DePaulo, P. (2000, December). Sample size for qualitative research. Quirk Marketing Research
Review. Retrieved from http://www.quirks.com
Dommeyer, C. J., Lugo, E. A., Riddle, K. R., Tade, C. T., & Valdivia, L. (2009). Polling patients
with self-administered surveys. The Journal of Applied Business and Economics, 9(2),
67-75. doi:10.1300/J026v12n03_10
Eaves, S. (2013). Mixed methods research: Creating fusion from the QUAL and QUAN data
mosaic. Paper presented at the European Conference on Research Methodology for
Business and Management Studies, Guimaraes, Portugal. Retrieved from
http://www.proquest.com
Felix, R. (2011). The impact of scale width on responses for multi-item, self-report measures.
Journal of Targeting, Measurement and Analysis for Marketing, 19, 153-164.
doi:10.1057/jt.2011.16
Fielding, N. (2010). Mixed methods research in the real world. International Journal of Social
Research Methodology, 13(2), 127-138. doi:10.1080/13645570902996186
Golicic, S. L., & Davis, F. (2012). Implementing mixed methods research in supply chain
management. International Journal of Physical Distribution and Logistics Management,
42, 726-741. doi:10.1108/09600031211269721
Green, S. B., & Salkind, N. J. (2007). Using SPSS for Windows and Macintosh: Analyzing and
understanding data. Upper Saddle River, NJ: Pearson Prentice Hall.
Harrison, R. L., & Reilly, T. M. (2011). Mixed methods designs in marketing research.
Qualitative Market Research: An International Journal, 14(1), 7-26.
doi:10.1108/13522751111099300
Hays, D. G., & Wood, C. (2011). Infusing qualitative traditions in counseling research designs.
Journal of Counseling and Development, 89, 288-295.
doi:10.1002/j.1556-6678.2011.tb00091.x
Holland, R. G., Smith, A., Hasselback, J. R., & Payne, B. (2010). Survey response in mail versus
email solicitations. Journal of Business and Economic Research, 8(4), 95-98. Retrieved
from http://journals.cluteonline.com/index.php/JBER
Homburg, C., Klarmann, M., Reimann, M., & Schilke, O. (2012). What drives key informant
accuracy? The Journal of Marketing Research, 49(4), 594-608. doi:10.1509/jmr.09.0174
Ihantola, E., & Kihn, L. (2011). Threats to validity and reliability in mixed methods accounting
research. Qualitative Research in Accounting and Management, 8(1), 39-58.
doi:10.1108/11766091111124694
Israel, J. D. (2012). Determining sample size. PEOD6, Publication of the Institute of Food and
Agricultural Sciences (IFAS), University of Florida. Retrieved from
http://edis.ifas.ufl.edu/pdffiles/PD/PD00600.pdf
Krivokapic-Skoko, B., & O'Neill, G. (2011). Beyond the qualitative-quantitative distinction:
Some innovative methods for business and management research. International Journal
of Multiple Research Approaches, 5, 290-300. doi:10.5172/mra.2011.5.3.290
Leech, N. L., & Onwuegbuzie, A. J. (2009). A typology of mixed methods research designs.
Quality and Quantity, 43, 265-275. doi:10.1007/s11135-007-9105-3
Lipsey, M. W. (1990). Design sensitivity: Statistical power for experimental research. Newbury
Park, CA: Sage.
Malhotra, N. K., Mukhopadhyay, S., Xiaoyan, L., & Dash, S. (2012). One, few, or many? An
integrated framework for identifying the items in measurement scales. International
Journal of Market Research, 54(6), 835-862. doi:10.2501/IJMR-54-6-835-862
Marshall, C., & Rossman, G. B. (2010). Designing qualitative research. Thousand Oaks, CA:
Sage Publications. Retrieved from books.google.com
Menictas, C., Wang, P., & Fine, B. (2011). Assessing flat-lining response style bias in online
research. Australasian Journal of Market and Social Research, 19(2), 34-44. Retrieved
from http://www.amsrs.com.au
Moustakas, C. (1994). Phenomenological research methods. Thousand Oaks, CA: Sage.
Retrieved from books.google.com
Munos-Leiva, F., Sanchez-Fernandez, J., Montoro-Rios, F., & Ibanez-Zapata, J. A. (2010).
Improving the response rate and quality in web-based surveys through the personalization
and frequency of reminder mailings. Quality and Quantity, 44, 1037-1052.
doi:10.1007/s11135-009-9256-5
Muscat, M., Blackman, D., & Muscat, B. (2012). Mixed methods: Combining expert interviews,
cross-impact analysis and scenario development. Electronic Journal of Business
Research Methods, 10(1), 9-21. Retrieved from http://ejbrm.com/main.html
Ngondi, J., Reacher, M., Matthews, F., Brayne, C., & Emerson, P. (2009). Trachoma survey
methods: A literature review. Bulletin of the World Health Organization, 87(2), 143-151.
doi:10.2471/BLT.07.046326
Onwuegbuzie, A. J., Johnson, R. B., & Collins, K. M. T. (2009). Call for mixed analysis:
Philosophical framework for combining qualitative and quantitative approaches.
International Journal of Multiple Research Approaches, 3(2), 114-139.
doi:10.5172/mra.3.2.114
Onwuegbuzie, A. J., Johnson, R. B., & Collins, K. M. T. (2011). Assessing legitimation in mixed
research: A new framework. Quality and Quantity, 45, 1253-1271.
doi:10.1007/s11135-009-9289-9
Shah, R., & Ward, P. (2007). Defining and developing measures of lean performances. Journal
of Operations Management, 25, 785-805. doi:10.1016/j.jom.2007.01.019
Srnka, K. J., & Koeszegi, S. T. (2007). From words to numbers: How to transform qualitative
data into meaningful quantitative results. Schmalenbach Business Review (SBR), 59(1),
29-57. Retrieved from http://www.sbr-online.de/home.html
Suri, H. (2011). Purposeful sampling in qualitative research synthesis. Qualitative Research
Journal, 11(2), 63-75. doi:10.3316/QRJ1102063
Terrell, S. R. (2012). Mixed-methods research methodologies. The Qualitative Report, 17(1),
254-280. Retrieved from http://search.proquest.com
Thisse, L. C. (1998). Advanced quality planning: A guide for any organization. Quality
Progress, 31(2), 73-77. Retrieved from http://ezp.waldenulibrary.org
Wahyuni, D. (2012). The research design maze: Understanding paradigms, cases, methods, and
methodologies. Journal of Applied Management Accounting Research, 10(1), 69-80.
Retrieved from http://www.cmawebline.org
Zachariadis, M., Scott, S., & Barrett, M. (2013). Methodological implications of critical realism
for mixed methods research. MIS Quarterly, 37(3), 855-879. Retrieved from
http://www.misq.org