Hello Elton,
I appreciate your note. YES. Keep trying. I know that
making the transition to doctoral-level reasoning can be hard! It
was very hard for me in some areas because it seemed …
unnatural. Does that make sense? Some aspects of this type of
thinking seemed “clunky” and hard to explain in plain language.
I wanted research problems, research purpose statements, etc. to
simply flow. In the beginning of my journey there was very
little flow (more like trickles) and lots of missteps!
For this assignment, you were asked to build on your
assignment last week to further explore how you might examine
your research problem using a quantitative methodology. You
were required to respond to these questions:
· Please restate the research problem, purpose, and research
questions you developed previously and incorporate any faculty
feedback as appropriate. This week be sure to also include
hypotheses for each of your research questions.
· How might surveys be used to answer your research questions?
What are the advantages and disadvantages of using surveys to
collect data?
· How might you use an experiment or quasi-experiment to
answer your research questions? What are the advantages and
disadvantages of using (quasi)experiments to collect your data?
· It is also important to consider how you might analyze the
potential data you collect and factors that could affect those
analyses. Specifically, what are Type I and Type II errors? How
might these impact your study? What is statistical power? How
might this impact your study? What steps can you take ahead of
time to help avoid issues related to Type I & II errors as well as
power?
As part of our standard, you were also required to use scholarly
sources to support all assertions and research decisions.
Length: 5 to 7 pages, not including title and reference pages
I used the rubric below to assess your submission. As I moved
through each section of your paper, I looked for information
that demonstrated you understood important research terms such
as hypothesis, null hypothesis, Type I and Type II Errors and
statistical power. In most instances you demonstrated some
understanding of these concepts or terms. In several instances,
however, gaps in that understanding hindered your ability to
create rigorous hypotheses because aspects of these terms
remained unclear. I added several prompts and questions to help
you in these areas.
Grading Rubric

Content (4 points)
1. State research problem, purpose, research questions and hypotheses: 1.5/2
2. Discussed in detail the advantages and disadvantages of using surveys to collect data: 0.75/1
3. Explained how you could use experiments or quasi-experiments to collect data for your study and the advantages and disadvantages of these designs: 0.75/1

Organization (1 point)
4. Organized and presented in a clear manner. Included a minimum of five scholarly references, with appropriate APA formatting applied to citations and paraphrasing: 0.75/1

Total: 3.75/5
Please scroll through the body of your paper for my
specific comments and improvement suggestions. Elton – DO
NOT GIVE UP. You can master these concepts; however, it may
take practice, more study of the concepts, or time with a tutor or
statistics coach.
Faculty Name: Dr. Antoinette Kohlman
Grade Earned: 3.75/5 = C
Date Graded: May 16, 2019
Quantitative Research Design
BTM-7303 Assignment # 8
Elton Norman
Dr. Antoinette Kohlman
12 May 2019
Research Problem
The research on the relationship between substance abuse and
school dropout cases can be examined using a quantitative
methodology. A substantial number of researches on dropout
rates touch on the actual percentages of the students who drop
out of school due to substance abuse. However, limited research
has been done on the level of education which has witnessed the
highest dropout rates and the drug most commonly associated
with these cases. Comment by Antoinette Kohlman: Do you
mean research studies? I do not know what you mean by
researches? Comment by Antoinette Kohlman: What impact
does this lack of information have on schools, communities, or
families? Would studying this situation create new knowledge
that will enhance practice or further theoretical development?
In your next paper, I would enhance the problem statement with
this type of information.
Purpose of the Research
The purpose of the research is to establish the level of education
at which most students drop out of learning institutions due to
substance abuse. It is also geared toward establishing the type of
drug which contributes to most of these cases.
Research Questions
At what level of education do most of the youths drop out of
school? Comment by Antoinette Kohlman: I think there is a
gap here. I would add more key questions.
For example:
>> Among those who drop out of school, what percentage
leaves due to illegal drug usage?
>> Among those who dropped out of school due to illegal drug
usage, what was the most common illegal drug used?
How might the above research questions be translated into
hypotheses?
Hypothesis Examples:
Illegal drug usage does not have a statistically significant effect
on school dropout rates.
Illegal drug usage has a statistically significant effect on school
dropout rates.
Comment by Antoinette Kohlman: This first question is a
good starting point because you are acknowledging there are
many reasons that contribute to high school dropouts. You then
immediately pinpoint your interests in drug usage however I
think more nuanced questions can be added.
What type of drug is associated with the highest dropout rates?
Hypotheses
Most school dropouts attributable to substance abuse occur in
high school. Comment by Antoinette Kohlman:
There are handouts that explain how to craft hypotheses in the
Dissertation Center. Click the following link to access NCU’s
Developing a Hypothesis Handout.
The element that is missing from your hypotheses is “statistical
significance.” Please see my hypothesis examples above!
Here is an excerpt that you can use to self-evaluate your
hypotheses:
Nature of Hypothesis
1. It can be tested: it is verifiable or falsifiable
2. Hypotheses are not moral or ethical questions
3. It is neither too specific nor too general
4. It is a prediction of consequences
5. It is considered valuable even if proven false
Alcohol abuse contributes to the highest school dropout rates in
high school.
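To make the idea of a statistically significant effect concrete, the following is a minimal sketch of how a hypothesis of this kind might be tested. The counts are purely hypothetical, and the scipy package is assumed to be available; the test shown is a chi-square test of independence between substance type and dropout status.

```python
# Hypothetical illustration: chi-square test of whether dropout status
# is associated with the type of substance abused. All counts are invented.
from scipy.stats import chi2_contingency

# Rows: dropped out, stayed enrolled; columns: alcohol, marijuana, opioids
observed = [
    [40, 25, 15],
    [160, 175, 185],
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")

if p_value < 0.05:
    print("Reject the null hypothesis: dropout rates differ by substance type.")
else:
    print("Fail to reject the null hypothesis: no statistically significant association.")
```

A p value below the chosen significance level would support the alternative hypothesis of a statistically significant effect; otherwise the null hypothesis is retained.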
Use of Surveys
Surveys make up one of the excellent ways of gathering data
during quantitative research and involve gathering answers from
the chosen sample which represents the population being
studied. It includes the use of questionnaires, mobile surveys,
paper surveys, face-to-face interviews, and telephone surveys.
In this research, the use of questionnaires is viable since it will
help reach a large number of respondents within a short period.
Advantages of Surveys
One of the advantages of surveys is that they are inexpensive.
In most cases, surveys utilize questionnaires whereby the
respondents are issued with questions which they are supposed
to fill. In this case, a quantitative survey involving the use of
surveys can be carried out with a minimum budget and still
produce a top-notch survey with valid results. Comment by
Antoinette Kohlman: I agree. Please cite at least one source!
The use of surveys in research leads to extensive research. It
should be noted that most of the research is used to describe
particular aspects of a certain population. In this case, the
research carried out must involve a large population so that the
results from the sample population infer to the whole population
under study. Such results can only be achieved when a method
which can reach a large population for a short period is used. In
this case, the use of surveys in research gives the researchers an
opportunity to conduct the research using a large sample.
Comment by Antoinette Kohlman: How so? How does a
survey lead to “extensive research?” Comment by
Antoinette Kohlman: I do not understand what you mean.
Doesn’t most research target specific populations? Comment by
Antoinette Kohlman: So you mean the results can be
generalized?
Disadvantages of Surveys
The use of surveys has disadvantages which include higher
chances of bias. It is evident that the researchers are involved in
choosing the respondents. In this case, they can select a group
of respondents who are inclined to their hypothesis. The fact
that samples are used to infer to a large population requires the
use of a large sample with respondents who bear different lines
of thought with the researchers. In this scenario, a poorly
selected sample can lead to unreliable results which are not
representative of the larger population (Mitchell, 2010).
Comment by Antoinette Kohlman: Please be specific and
name the type or types of biases. Comment by Antoinette
Kohlman: What do you mean? I do not understand how this
might occur. Please say more. Comment by Antoinette Kohlman:
I do not understand what you mean by a "different line(s) of
thought."
Although the researchers can select a sample population without
bias, the lack of knowledge in the techniques used in sampling
can lead to errors. Sampling method involves calculations and
statistical analysis which require a researcher with substantial
knowledge in sampling techniques. Failure to possess such
skills can lead to sampling errors resulting in misleading
research (Mitchell, 2010).
Use of Quasi-experiment
In this study, the use of true experiments is limited because the
respondents are already out of the learning institutions. For the
study, the respondents will be subjected to a quasi-experiment
whereby they will only give details about the level of education
at which they dropped out of school and the substance to which
they attribute their dropping out. The use of quasi-experiments is popular in
research as it enables the researcher to control the experiment
and eliminates random assignment which depends on chances
that do not offer a guarantee of the equivalency of the groups at
the baseline. Comment by Antoinette Kohlman: This would
mean you might have to do either a longitudinal study or use a
pre-test, post-test design. Comment by Antoinette Kohlman:
What exactly is a quasi-experiment? Please explain or define
this term and cite your source. Thank You.
Advantages of Quasi-experiments
The use of quasi-experiments in the research gives the
researcher an opportunity to conduct the survey without
subjecting the respondents to random assignments. Such
assignments on substance abuse are unethical to carry out since
the survey involves human respondents. The results arrived at in
the survey will then be used to infer to the whole population
since a large number of respondents will ensure the survey is
extensive. Comment by Antoinette Kohlman: Some of this
information seems inaccurate/incorrect however I would need to
know which sources you used. Cites are needed.
Quasi-experiments give the researcher the freedom to
manipulate the respondents to gather substantial data for the
study. In normal scenarios, the researchers can only gather
limited information about the level at which most of the dropout
rates are witnessed. With quasi-experiments, the researcher can
tailor the questions to fit the study, such as listing the most
commonly abused drugs for the respondents to choose from.
Disadvantages of Quasi-experiments
Although quasi-experiments put the researcher in a position to
manipulate the research, they lack randomness which leads to
weaker evidence. Randomness is vital in research as it leads to
results which infer to the whole population. Failure to include
randomness may obtain results that favor the hypothesis and
which are not a representative of the whole population.
The use of quasi-experiments leads to unequal groups which
jeopardize the internal validity of the research. During surveys,
the internal validity aids in obtaining the approximate truth
concerning causal relationships. Lack of internal validity infers
that the experimenter lacks control for the variables which
contribute to the results, leading to unreliable data (Polit &
Beck, 2010).
Analysis of Potential Data
After the experiments, the potential data is analyzed using
statistical tools such as the SPSS and SAS. At first, the central
tendencies for the acquired data will be obtained. The measures
of central tendency in the experiment will include median, mode
and the mean. It will be followed by the variability
measurements; an action will determine the distribution of the
score and how the scores vary. In this scenario, the variability
measurements taken will include standard deviation, average
deviation, and the range.
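As an illustration of the analysis described above, the short sketch below computes the same measures of central tendency and variability with Python's standard library rather than SPSS or SAS; the scores are hypothetical placeholders.

```python
# Descriptive statistics for a hypothetical set of scores,
# mirroring the central tendency and variability measures named above.
import statistics

scores = [14, 15, 15, 16, 16, 16, 17, 17, 18]

print("mean:", statistics.mean(scores))
print("median:", statistics.median(scores))
print("mode:", statistics.mode(scores))
print("standard deviation:", round(statistics.stdev(scores), 2))
print("range:", max(scores) - min(scores))
```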
Factors affecting Data Analysis
The analysis of the data is affected by the level of the skills
exhibited by the researcher. Although the correct data can be
arrived at from the questionnaires, poor analysis skills can lead
to inaccurate data which does not infer to the population under
the study. As such, the researcher must be conversant with the
statistical tools to draw reliable conclusions from the survey.
The extent of the analysis is another factor which affects the
data analysis. During the survey, the researcher must establish
the level of analysis and apply the suitable statistical tools
which do not compromise the data integrity. In this case, they
must apply multiple tools to analyze the collected data to
establish the patterns of behavior and test the hypothesis to get
the correct data which represents the population (Ramachandran
& Tsokos, 2009).
Type 1 and Type 2 Errors
Type 1 and type 2 errors are the examples of errors which can
occur in the survey. Type 1 errors occur when the researcher
rejects the null hypothesis when it is true. The researcher
concludes that there is the existence of differences between the
groups when it is not present in reality. On the other hand, the
type 2 errors infer that the researcher fails to reject a false null
hypothesis. The researcher’s conclusion communicates that
there is no difference between the groups although it exists. The
presence of these errors in the survey leads to false results as
the researcher does not make the correct inferences from the
experiments. In such scenarios, the survey is termed as
unreliable as it contains misleading information
(Gravetter & Wallnau, 2007).
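The two error types can also be illustrated by simulation. The sketch below, which assumes the numpy and scipy packages and uses arbitrary effect and sample sizes, estimates the Type I error rate (rejecting a true null hypothesis) and the Type II error rate (failing to reject a false one) for a two-sample t test.

```python
# Simulation of Type I and Type II error rates for an independent-samples t test.
# Effect size, sample size, and alpha are arbitrary choices for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha, n, trials = 0.05, 30, 5000

def rejection_rate(true_difference):
    rejections = 0
    for _ in range(trials):
        group_a = rng.normal(0.0, 1.0, n)
        group_b = rng.normal(true_difference, 1.0, n)
        _, p = stats.ttest_ind(group_a, group_b)
        if p < alpha:
            rejections += 1
    return rejections / trials

# With no true difference, the rejection rate estimates the Type I error (about .05).
print("Estimated Type I error rate:", rejection_rate(0.0))
# With a real difference present, one minus the rejection rate estimates the Type II error.
print("Estimated Type II error rate:", 1 - rejection_rate(0.5))
```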
Statistical Power
Statistical power refers to the probability that the study will
reveal the differences if they exist. A study bears the possibility
of differences in the groups being studied, and the failure to
detect such differences will lead to research with false results.
As such, the statistical tests must have the capacity to detect the
differences and reject the false null hypothesis. A low statistical
power infers that the tests may not identify the differences even
when they are present. Its presence increases the probability of
type 2 errors whereby the false null hypothesis is not rejected
(Wimmer & Dominick, 2011). Comment by Antoinette
Kohlman: Elton, if you are required to compare groups … which
groups would you compare? Go back to your initial research
questions. Could you compare dropout rates based on gender or
ethnicity? Could you hypothesize that males drop out of school
due to illegal drug usage at a higher rate when compared to
females? Does this make sense?
Avoiding Low Statistical Power
There are numerous actions which are adopted to ensure the
statistical tools have a higher statistical power. One of the
actions is to use a greater sample size since it offers detailed
information concerning the population being studied. Another
means of increasing the statistical power is incorporating a
higher level of significance which increases the chances of
rejecting the null hypothesis.
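The relationship between sample size, significance level, effect size, and power can be checked directly with software. The sketch below assumes the statsmodels package and an assumed medium effect (d = .5); it shows both the power achieved with a given sample and the sample needed to reach .80 power.

```python
# Power analysis for an independent-samples t test using statsmodels.
# The medium effect size (d = .5) is an assumption made for illustration.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power achieved with 50 participants per group at alpha = .05
achieved_power = analysis.solve_power(effect_size=0.5, nobs1=50, alpha=0.05)

# Participants per group needed to reach .80 power for the same effect
n_needed = analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05)

print(f"Power with n = 50 per group: {achieved_power:.2f}")
print(f"n per group for .80 power:   {n_needed:.0f}")
```

Raising the significance level does increase power, but only by accepting a larger Type I error risk, so increasing the sample size is generally the safer way to protect power.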
References
Gravetter, F. J., & Wallnau, L. B. (2007). Statistics for the behavioral sciences. Belmont: Wadsworth.
Mitchell, M. L. (2010). Research design explained (7th ed.). Belmont: Wadsworth.
Polit, D. F., & Beck, C. T. (2010). Essentials of nursing research: Appraising evidence for nursing practice. Philadelphia: Wolters Kluwer Health/Lippincott Williams & Wilkins.
Ramachandran, K. M., & Tsokos, C. P. (2009). Mathematical statistics with applications. London: Elsevier Academic Press.
Wimmer, R. D., & Dominick, J. R. (2011). Mass media research: An introduction. Boston, MA: Cengage-Wadsworth.
QUANTITATIVE METHODS IN PSYCHOLOGY
A Power Primer
Jacob Cohen
New York University
One possible reason for the continued neglect of statistical
power analysis in research in the
behavioral sciences is the inaccessibility of or difficulty with
the standard material. A convenient,
although not comprehensive, presentation of required sample
sizes is provided here. Effect-size
indexes and conventional values for these are given for
operationally defined small, medium, and
large effects. The sample sizes necessary for .80 power to detect
effects at these levels are tabled for
eight standard statistical tests: (a) the difference between
independent means, (b) the significance
of a product-moment correlation, (c) the difference between
independent rs, (d) the sign test, (e) the
difference between independent proportions, (f) chi-square tests
for goodness of fit and contingency tables, (g) one-way analysis of variance, and (h) the
significance of a multiple or multiple
partial correlation.
The preface to the first edition of my power handbook (Cohen, 1969) begins:
During my first dozen years of teaching and consulting on applied
statistics with behavioral scientists, I became increasingly impressed
with the importance of statistical power analysis, an importance
which was increased an order of magnitude by its neglect
in our textbooks and curricula. The case for its importance is
easily made: What behavioral scientist would view with equanimity
the question of the probability that his investigation would
lead to statistically significant results, i.e., its power? (p. vii)
This neglect was obvious through casual observation and had
been confirmed by a power review of the 1960 volume of the
Journal of Abnormal and Social Psychology, which found the
mean power to detect medium effect sizes to be .48 (Cohen,
1962). Thus, the chance of obtaining a significant result was
about that of tossing a head with a fair coin. I attributed this
disregard of power to the inaccessibility of a meager and mathematically
difficult literature, beginning with its origin in the
work of Neyman and Pearson (1928, 1933).
The power handbook was supposed to solve the problem. It
required no more background than an introductory psychological
statistics course that included significance testing. The exposition
was verbal-intuitive and carried largely by many
worked examples drawn from across the spectrum of behavioral
science.
In the ensuing two decades, the book has been through revised
(1977) and second (1988) editions and has inspired dozens
of power and effect-size surveys in many areas of the social and
life sciences (Cohen, 1988, pp. xi-xii). During this period, there
has been a spate of articles on power analysis in the social
science literature, a baker's dozen of computer programs (reviewed
in Goldstein, 1989), and a breakthrough into popular
statistics textbooks (Cohen, 1988, pp. xii-xiii).

I am grateful to Patricia Cohen for her useful comments.
Correspondence concerning this article should be addressed to Jacob
Cohen, Department of Psychology, New York University, 6 Washington
Place, 5th Floor, New York, New York 10003.
Sedlmeier and Gigerenzer (1989) reported a power review of
the 1984 volume of the Journal of Abnormal Psychology (some
24 years after mine) under the title, "Do Studies of Statistical
Power Have an Effect on the Power of Studies?" The answer
was no. Neither their study nor the dozen other power reviews
they cite (excepting those fields in which large sample sizes are
used, e.g., sociology, market research) showed any material improvement
in power. Thus, a quarter century has brought no
increase in the probability of obtaining a significant result.
Why is this? There is no controversy among methodologists
about the importance of power analysis, and there are ample
accessible resources for estimating sample sizes in research
planning using power analysis. My 2-decades-long expectation
that methods sections in research articles in psychological journals
would invariably include power analyses has not been realized.
Indeed, they almost invariably do not. Of the 54 articles
Sedlmeier and Gigerenzer (1989) reviewed, only 2 mentioned
power, and none estimated power or necessary sample size or
the population effect size they posited. In 7 of the studies, null
hypotheses served as research hypotheses that were confirmed
when the results were nonsignificant. Assuming a medium effect
size, the median power for these tests was .25! Thus, these
authors concluded that their research hypotheses of no effect
were supported when they had only a .25 chance of rejecting
these null hypotheses in the presence of substantial population
effects.
It is not at all clear why researchers continue to ignore power
analysis. The passive acceptance of this state of affairs by editors
and reviewers is even more of a mystery. At least part of the
reason may be the low level of consciousness about effect size:
It is as if the only concern about magnitude in much psychological
research is with regard to the statistical test result and its
accompanying p value, not with regard to the psychological
phenomenon under study. Sedlmeier and Gigerenzer (1989) attribute
this to the accident of the historical precedence of Fisherian
theory, its hybridization with the contradictory Neyman-Pearson
theory, and the apparent completeness of Fisherian null
hypothesis testing: objective, mechanical, and a clear-cut
go-no-go decision straddled over p = .05. I have suggested
that the neglect of power analysis simply exemplifies the slow
movement of methodological advance (Cohen, 1988, p. xiv),
noting that it took some 40 years from Student's publication of
the t test to its inclusion in psychological statistics textbooks
(Cohen, 1990, p. 1311).
An associate editor of this journal suggests another reason:
Researchers find too complicated, or do not have at hand, either
my book or other reference material for power analysis. He
suggests that a short rule-of-thumb treatment of necessary sample
size might make a difference. Hence this article.
In this bare bones treatment, I cover only the simplest cases,
the most common designs and tests, and only three levels of
effect size. For readers who find this inadequate, I unhesitatingly
recommend Statistical Power Analysis for the Behavioral
Sciences (Cohen, 1988; hereafter SPABS). It covers special cases,
one-sided tests, unequal sample sizes, other null hypotheses, set
correlation and multivariate methods and gives substantive examples
of small, medium, and large effect sizes for the various
tests. It offers well over 100 worked illustrative examples and is
as user friendly as I know how to make it, the technical material
being relegated to an appendix.
Method
Statistical power analysis exploits the relationships among the
four variables involved in statistical inference: sample size (N),
significance criterion (α), population effect size (ES), and statistical
power. For any statistical model, these relationships are such
that each is a function of the other three. For example, in power
reviews, for any given statistical test, we can determine power for
given α, N, and ES. For research planning, however, it is most
useful to determine the N necessary to have a specified power for
given α and ES; this article addresses this use.
The Significance Criterion, α
The risk of mistakenly rejecting the null hypothesis (H0) and
thus of committing a Type I error, α, represents a policy: the
maximum risk attending such a rejection. Unless otherwise
stated (and it rarely is), it is taken to equal .05 (part of the
Fisherian legacy; Cohen, 1990). Other values may of course be
selected. For example, in studies testing several H0s, it is
recommended that α = .01 per hypothesis in order that the
experimentwise risk (i.e., the risk of any false rejections) not
become too large. Also, for tests whose parameters may be either
positive or negative, the α risk may be defined as two sided or
one sided. The many tables in SPABS provide for both kinds,
but the sample sizes provided in this note are all for two-sided
tests at α = .01, .05, and .10, the last for circumstances in which
a less rigorous standard for rejection is desired, as, for example,
in exploratory studies. For unreconstructed one-tailers (see
Cohen, 1965), the tabled sample sizes provide close approximations
for one-sided tests at α/2 (e.g., the sample sizes tabled under
α = .10 may be used for one-sided tests at α = .05).
Power
The statistical power of a significance test is the long-term
probability, given the population ES, α, and N, of rejecting H0.
When the ES is not equal to zero, H0 is false, so failure to reject
it also incurs an error. This is a Type II error, and for any given
ES, α, and N, its probability of occurring is β. Power is thus
1 - β, the probability of rejecting a false H0.
In this treatment, the only specification for power is .80 (so
β = .20), a convention proposed for general use. (SPABS provides
for 11 levels of power in most of its N tables.) A materially
smaller value than .80 would incur too great a risk of a Type II
error. A materially larger value would result in a demand for N
that is likely to exceed the investigator's resources. Taken with
the conventional α = .05, power of .80 results in a β:α ratio of
4:1 (.20 to .05) of the two kinds of risks. (See SPABS, pp. 53-56.)
Sample Size
In research planning, the investigator needs to know the N
necessary to attain the desired power for the specified α and
hypothesized ES. N increases with an increase in the power
desired, a decrease in the ES, and a decrease in α. For statistical
tests involving two or more groups, N as here defined is the
necessary sample size for each group.
Effect Size
Researchers find specifying the ES the most difficult part of
power analysis. As suggested above, the difficulty is at least
partly due to the generally low level of consciousness of the
magnitude of phenomena that characterizes much of psychology.
This in turn may help explain why, despite the stricture of
methodologists, significance testing is so heavily preferred to
confidence interval estimation, although the wide intervals that
usually result may also play a role (Cohen, 1990). However,
neither the determination of power nor necessary sample size
can proceed without the investigator having some idea about
the degree to which the H0 is believed to be false (i.e., the ES).
In the Neyman-Pearson method of statistical inference, in
addition to the specification of H0, an alternate hypothesis (H1)
is counterpoised against H0. The degree to which H0 is false is
indexed by the discrepancy between H0 and H1 and is called the
ES. Each statistical test has its own ES index. All the indexes are
scale free and continuous, ranging upward from zero, and for
all, the H0 is that ES = 0. For example, for testing the product-moment
correlation of a sample for significance, the ES is simply
the population r, so H0 posits that r = 0. As another example,
for testing the significance of the departure of a population
proportion (P) from .50, the ES index is g = P - .50, so the H0 is
that g = 0. For the tests of the significance of the difference
between independent means, correlation coefficients, and proportions,
the H0 is that the difference equals zero. Table 1 gives
for each of the tests the definition of its ES index.
To convey the meaning of any given ES index, it is necessary
to have some idea of its scale. To this end, I have proposed as
conventions or operational definitions small, medium, and
large values for each that are at least approximately consistent
across the different ES indexes. My intent was that medium ES
represent an effect likely to be visible to the naked eye of a
careful observer. (It has since been noted in effect-size surveys
that it approximates the average size of observed effects in
various fields.) I set small ES to be noticeably smaller than
medium but not so small as to be trivial, and I set large ES to be
the same distance above medium as small was below it. Although
the definitions were made subjectively, with some early
minor adjustments, these conventions have been fixed since the
1977 edition of SPABS and have come into general use. Table 1
contains these values for the tests considered here.
In the present treatment, the H1s are the ESs that operationally
define small, medium, and large effects as given in Table 1. For
the test of the significance of a sample r, for example, because
the ES for this test is simply the alternate-hypothetical population
r, small, medium, and large ESs are respectively .10, .30, and .50.
The ES index for the t test of the difference between independent
means is d, the difference expressed in units of (i.e., divided by)
the within-population standard deviation. For this test, the H0 is
that d = 0 and the small, medium, and large ESs (or H1s) are
d = .20, .50, and .80. Thus, an operationally defined medium
difference between means is half a standard deviation; concretely,
for IQ scores in which the population standard deviation is 15, a
medium difference between means is 7.5 IQ points.

Table 1
ES Indexes and Their Values for Small, Medium, and Large Effects

                                                                   Effect size
  Test                                ES index                   Small  Medium  Large
  1. mA vs. mB for independent        d = (mA - mB)/σ              .20    .50    .80
     means
  2. Significance of product-         r                            .10    .30    .50
     moment r
  3. rA vs. rB for independent rs     q = zA - zB, where z =       .10    .30    .50
                                      Fisher's z
  4. P = .5 and the sign test         g = P - .50                  .05    .15    .25
  5. PA vs. PB for independent        h = φA - φB, where φ =       .20    .50    .80
     proportions                      arcsine transformation
  6. Chi-square for goodness of       w = sqrt(Σ(P1i - P0i)²/P0i)  .10    .30    .50
     fit and contingency
  7. One-way analysis of variance     f = σm/σ                     .10    .25    .40
  8. Multiple and multiple partial    f² = R²/(1 - R²)             .02    .15    .35
     correlation

Note. ES = population effect size.
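A small worked sketch of the d index, using the IQ example above (Python assumed for illustration):

```python
# Cohen's d: the difference between means divided by the within-population
# standard deviation. For IQ scores (SD = 15), a 7.5-point difference gives d = .50.
def cohens_d(mean_a, mean_b, within_sd):
    return (mean_a - mean_b) / within_sd

print(cohens_d(107.5, 100.0, 15.0))  # 0.5, a "medium" effect by the conventions in Table 1
```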
Statistical Tests
The tests covered here are the most common tests used in
psychological research:
1. The t test for the difference between two independent means, with df = 2(N - 1).
2. The t test for the significance of a product-moment correlation coefficient r, with df = N - 2.
3. The test for the difference between two independent rs, accomplished as a normal curve test through the Fisher z transformation of r (tabled in many statistical texts).
4. The binomial distribution or, for large samples, the normal curve (or equivalent chi-square, 1 df) test that a population proportion (P) = .50. This test is also used in the nonparametric sign test for differences between paired observations.
5. The normal curve test for the difference between two independent proportions, accomplished through the arcsine transformation φ (tabled in many statistical texts). The results are effectively the same when the test is made using the chi-square test with 1 degree of freedom.
6. The chi-square test for goodness of fit (one way) or association in two-way contingency tables. In Table 1, k is the number of cells and P0i and P1i are the null hypothetical and alternate hypothetical population proportions in cell i. (Note that w's structure is the same as chi-square's for cell sample frequencies.) For goodness-of-fit tests, the df = k - 1, and for contingency tables, df = (a - 1)(b - 1), where a and b are the number of levels in the two variables. Table 2 provides (total) sample sizes for 1 through 6 degrees of freedom.
7. One-way analysis of variance. Assuming equal sample sizes (as we do throughout), for g groups, the F test has df = g - 1, g(N - 1). The ES index is the standard deviation of the g population means divided by the common within-population standard deviation. Provision is made in Table 2 for 2 through 7 groups.
8. Multiple and multiple partial correlation. For k independent variables, the significance test is the standard F test for df = k, N - k - 1. The ES index, f², is defined for either squared multiple or squared multiple partial correlations (R²). Table 2 provides for 2 through 8 independent variables.
Note that because all tests of population parameters that can be either positive or negative (Tests 1-5) are two-sided, their ES indexes here are absolute values.
In using the material that follows, keep in mind that the ES posited by the investigator is what he or she believes holds for the population and that the sample size that is found is conditional on the ES. Thus, if a study is planned in which the investigator believes that a population r is of medium size (ES = r = .30 from Table 1) and the t test is to be performed with two-sided α = .05, then the power of this test is .80 if the sample size is 85 (from Table 2). If, using 85 cases, t is not significant, then either r is smaller than .30 or the investigator has been the victim of the .20 (β) risk of making a Type II error.

Table 2
N for Small, Medium, and Large ES at Power = .80 for α = .01, .05, and .10

                        α = .01                α = .05                α = .10
Test                 Sm     Med    Lg       Sm     Med    Lg       Sm     Med    Lg
1. Mean dif         586      95    38      393      64    26      310      50    20
2. Sig r          1,163     125    41      783      85    28      617      68    22
3. r dif          2,339     263    96    1,573     177    66    1,240     140    52
4. P = .5         1,165     127    44      783      85    30      616      67    23
5. P dif            584      93    36      392      63    25      309      49    19
6. Chi-square
     1 df         1,168     130    38      785      87    26      618      69    25
     2 df         1,388     154    56      964     107    39      771      86    31
     3 df         1,546     172    62    1,090     121    44      880      98    35
     4 df         1,675     186    67    1,194     133    48      968     108    39
     5 df         1,787     199    71    1,293     143    51    1,045     116    42
     6 df         1,887     210    75    1,362     151    54    1,113     124    45
7. ANOVA
     2 groups       586      95    38      393      64    26      310      50    20
     3 groups       464      76    30      322      52    21      258      41    17
     4 groups       388      63    25      274      45    18      221      36    15
     5 groups       336      55    22      240      39    16      193      32    13
     6 groups       299      49    20      215      35    14      174      28    12
     7 groups       271      44    18      195      32    13      159      26    11
8. Mult R
     2 variables    698      97    45      481      67    30
     3 variables    780     108    50      547      76    34
     4 variables    841     118    55      599      84    38
     5 variables    901     126    59      645      91    42
     6 variables    953     134    63      686      97    45
     7 variables    998     141    66      726     102    48
     8 variables  1,039     147    69      757     107    50

Note. ES = population effect size, Sm = small, Med = medium, Lg = large, dif = difference, ANOVA = analysis of variance. Tests are numbered as in Table 1. For the ANOVA rows the entry is the number of groups; for the Mult R rows it is the number of independent variables.
Examples
The necessary Ns for power of .80 for the following examples are found in Table 2.
1. To detect a medium difference between two independent sample means (d = .50 in Table 1) at α = .05 requires N = 64 in each group. (A d of .50 is equivalent to a point-biserial correlation of .243; see SPABS, pp. 22-24.)
2. For a significance test of a sample r at α = .01, when the population r is large (.50 in Table 1), a sample size of 41 is required. At α = .05, the necessary sample size is 28.
3. To detect a medium-sized difference between two population rs (q = .30 in Table 1) at α = .05 requires N = 177 in each group. (The following pairs of rs yield q = .30: .00, .29; .20, .46; .40, .62; .60, .76; .80, .89; .90, .94; see SPABS, pp. 113-116.)
4. The sign test tests the H0 that .50 of a population of paired differences are positive. If the population proportion's departure from .50 is medium (g = .15 in Table 1), at α = .10, the necessary N = 67; at α = .05, it is 85.
5. To detect a small difference between two independent population proportions (h = .20 in Table 1) at α = .05 requires N = 392 cases in each group. (The following pairs of Ps yield approximate values of h = .20: .05, .10; .20, .29; .40, .50; .60, .70; .80, .87; .90, .95; see SPABS, p. 184f.)
6. A 3 x 4 contingency table has 6 degrees of freedom. To detect a medium degree of association in the population (w = .30 in Table 1) at α = .05 requires N = 151. (w = .30 corresponds to a contingency coefficient of .287, and for 6 degrees of freedom, a Cramér φ of .212; see SPABS, pp. 220-227.)
7. A psychologist considers alternate research plans involving comparisons of the means of either three or four groups, in both of which she believes that the ES is medium (f = .25 in Table 1). She finds that at α = .05, the necessary sample size per group is 52 cases for the three-group plan and 45 cases for the four-group plan, thus total sample sizes of 156 and 180. (When f = .25, the proportion of variance accounted for by group membership is .0588; see SPABS, pp. 280-284.)
8. A psychologist plans a research study in which he will do a multiple regression/correlation analysis and perform all the significance tests at α = .01. For the F test of the multiple R², he expects a medium ES, that is, f² = .15 (from Table 1). He has a candidate set of eight independent variables for which Table 2 indicates that the required sample size is 147, which exceeds his resources. However, from his knowledge of the research area, he believes that the information in the eight variables can be effectively summarized in three. For three variables, the necessary sample size is only 108. (Given the relationship between f² and R², the values for small, medium, and large R² are respectively .0196, .1304, and .2592, and for R, .14, .36, and .51; see SPABS, pp. 410-414.)
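The tabled values can be approximated with modern power-analysis software. As a rough check of Example 1, the sketch below (assuming the statsmodels package is available) solves for the per-group N needed for .80 power to detect d = .50 at two-sided α = .05:

```python
# Approximate check of Example 1: medium standardized difference (d = .50),
# two-sided alpha = .05, target power = .80.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.50, power=0.80, alpha=0.05)
print(round(n_per_group))  # about 64, matching the value in Table 2
```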
References
Cohen, J. (1962). The statistical power of abnormal-social psychological research: A review. Journal of Abnormal and Social Psychology, 65, 145-153.
Cohen, J. (1965). Some statistical issues in psychological research. In B. B. Wolman (Ed.), Handbook of clinical psychology (pp. 95-121). New York: McGraw-Hill.
Cohen, J. (1969). Statistical power analysis for the behavioral sciences. San Diego, CA: Academic Press.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.
Cohen, J. (1990). Things I have learned (so far). American Psychologist, 45, 1304-1312.
Goldstein, R. (1989). Power and sample size via MS/PC-DOS computers. American Statistician, 43, 253-260.
Neyman, J., & Pearson, E. S. (1928). On the use and interpretation of certain test criteria for purposes of statistical inference. Biometrika, 20A, 175-240, 263-294.
Neyman, J., & Pearson, E. S. (1933). On the problem of the most efficient tests of statistical hypotheses. Transactions of the Royal Society of London, Series A, 231, 289-337.
Sedlmeier, P., & Gigerenzer, G. (1989). Do studies of statistical power have an effect on the power of studies? Psychological Bulletin, 105, 309-316.
Received February 1, 1991; revision received April 26, 1991; accepted May 2, 1991.
Research and Evaluation in Education and Psychology: Integrating Diversity With Quantitative, Qualitative, and Mixed Methods
CHAPTER 11
Sampling
The simplest rationale for sampling is that it may not be
feasible because of time or financial
constraints, or even physically possible, to collect data from
everyone involved in an evaluation.
Sampling strategies provide systematic, transparent processes
for choosing who will actually be asked
to provide data.
—Mertens and Wilson, 2012, p. 410
Relationships are powerful. Our one-to-one connections with
each other are the foundation for change.
And building relationships with people from different cultures,
often many different cultures, is key in
building diverse communities that are powerful enough to
achieve significant goals.
—Work Group for Community Health and Development, 2013
In This Chapter
• The viewpoints of researchers who work within the
postpositivist, constructivist, and transformative
paradigms are contrasted in relation to sampling strategies and
generalizability.
• External validity is introduced as a critical concept in
sampling decisions.
• Challenges in the definition of specific populations are
described in terms of conceptual and operational
definitions, identifying a person’s racial or ethnic status,
identifying persons with a disability, heterogeneity
within populations, and cultural issues.
• Strategies for designing and selecting samples are provided,
including probability-based, theoretical-
purposive, and convenience sampling. Sampling is also
discussed for complex designs such as those using
hierarchical linear modeling.
• Sampling bias, access issues, and sample size are discussed.
• Ethical standards for the protection of study participants are
described in terms of an institutional review
board’s requirements.
• Questions to guide critical analysis of sampling definition,
selection, and ethics are provided.
Transformative research implies a philosophy that research
should confront and act against the causes
of injustice and violence, which can be caused not only by that
which is researched but also by the
process of research itself. Individuals involved in research can
be disenfranchised in a few ways: (1) by
the hidden power arrangements uncovered by the research
process, (2) by the actions of unscrupulous
(and even well-intentioned) researchers, but also (3) by
researchers’ failure to expose those
arrangements once they become aware of them. Hidden power
arrangements are maintained by secrets
of those who might be victimized by them (because they fear
retaliation). . . . [Researchers] contribute
to this disenfranchisement if it prevents the exposure of hidden
power arrangements. (Baez, 2002, pp.
51–52)
Sampling Strategies: Alternative Paradigms
The decisions that a researcher makes regarding from whom
data will be collected, who is included,
how they are included, and what is done to conceal or reveal
identities in research constitute the topics
addressed in this chapter on sampling. As can be seen in the
opening quotation, these decisions are
complex and not unproblematic. In a simple sense, sampling
refers to the method used to select a given
number of people (or things) from a population. The strategy for
selecting your sample influences the
quality of your data and the inferences that you can make from
it. The issues surrounding from whom
you collect data are what sampling is all about. Within all
approaches to research, researchers use
sampling for very practical reasons. In most research studies, it
is simply not feasible to collect data
from every individual in a setting or population.
Sampling is one area in which great divergence can be
witnessed when comparing the various
research paradigms. In general, researchers who function within
the postpositivist paradigm see the
ideal sampling strategy as some form of probability sampling.
Kathleen Collins (2010) describes
probability sampling as follows:
A researcher uses probability sampling schemes to select
randomly the sampling units that are
representative of the population of interest. . . . These methods
meet the goal of ensuring that
every member of the population of interest has an equal chance
of selection. . . . When
implementing probabilistic sampling designs, the researcher’s
objective is to make external
statistical generalizations (i.e., generalizing conclusions for the
population from which the sample
was drawn). (p. 357)
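For readers who want to see the mechanics, the following is a minimal sketch of a simple random sample in which every member of a hypothetical sampling frame has an equal chance of selection; the frame and sample size are invented for illustration.

```python
# Simple random sampling from a hypothetical sampling frame
# (e.g., a district enrollment list); every unit has an equal chance of selection.
import random

random.seed(1)  # fixed seed so the example is reproducible
sampling_frame = [f"student_{i:04d}" for i in range(1, 1201)]
sample = random.sample(sampling_frame, k=100)  # 100 units drawn without replacement
print(sample[:5])
```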
Researchers within the constructivist paradigm tend to use a
theoretical or purposive approach to
sampling. Their sampling activities begin with an identification
of groups, settings, and individuals
where (and for whom) the processes being studied are most
likely to occur (K. M. T. Collins, 2010).
Collins explains:
When using a purposive sample, the goal is to add to or
generate new theories by obtaining new
insights or fresh perspectives. . . . Purposive sampling schemes
are employed by the researcher to
choose strategically elite cases or key informants based on the
researcher’s perception that the
selected cases will yield a depth of information or a unique
perspective. (p. 357)
Researchers within the transformative paradigm could choose
either a probability or theoretical-
purposive approach to sampling, depending on their choice of
quantitative, qualitative, or mixed
methods. However, they would function with a distinctive
consciousness of representing the
populations that have traditionally been underrepresented in
research.
Despite the contrasting views of sampling evidenced within the
various paradigms, issues of
common concern exist. All sampling decisions must be made
within the constraints of ethics and
feasibility. Although randomized probability samples are set
forth as the ideal in the postpositivist
paradigm, they are not commonly used in educational and
psychological research. Thus, in practice, the
postpositivist and constructivist paradigms are more similar
than different in that both use nonrandom
samples. Sometimes, the use of convenience samples (discussed
at greater length later in this chapter)
means that less care is taken by those in both of these
paradigms. All researchers should make
conscious choices in the design of their samples rather than
accepting whatever sample presents itself
as most convenient.
External Validity (Generalizability) or Transferability
As you will recall from Chapter 4, external validity refers to the
ability of the researcher (and user of
the research results) to extend the findings of a particular study
beyond the specific individuals and
setting in which that study occurred. Within the postpositivist
paradigm, the external validity depends
on the design and execution of the sampling strategy.
Generalizability is a concept that is linked to the
target population—that is, the group to whom we want to
generalize findings.
In the constructivist paradigm, every instance of a case or
process is viewed as both an exemplar of a
general class of phenomena and particular and unique in its own
way (Denzin & Lincoln, 2011a). The
researcher’s task is to provide sufficient thick description about
the case so that the readers can
understand the contextual variables operating in that setting
(Lincoln & Guba, 2000). The burden of
generalizability then lies with the readers, who are assumed to
be able to generalize subjectively from
the case in question to their own personal experiences. Lincoln
and Guba label this type of
generalizability transferability.
EXTENDING YOUR THINKING
Generalizability or Transferability of Results
What is your opinion of a researcher’s ability to generalize
results? Is it possible? If so, under what
conditions? What do you think of the alternative concept of
transferability?
Defining the Population and Sample
Research constructs, such as racial or ethnic minority or deaf
student, can be defined in two ways.
Conceptual definitions are those that use other constructs to
explain the meaning, and operational
definitions are those that specify how the construct will be
measured. Researchers often begin their
work with a conceptual idea of the group of people they want to
study, such as working mothers, drug
abusers, students with disabilities, and so on. Through a review
of the literature, they formulate a
formal, conceptual definition of the group they want to study.
For example, the target population might
be first-grade students in the United States.
An operational definition of the population in the postpositivist
paradigm is called the
experimentally accessible population, defined as the list of
people who fit the conceptual definition.
For example, the experimentally accessible population might be
all the first-grade students in your
school district whose names are entered into the district’s
database. You would next need to obtain a list
of all the students in that school district. This would be called
your sampling frame. Examples of
sampling frames include (a) the student enrollment, (b) a list of
clients who receive services at a clinic,
(c) professional association membership directories, or (d) city
phone directories. The researcher
should ask if the lists are complete and up-to-date and who has
been left off the list. For example, lists
of clients at a community mental health clinic eliminate those
who need services but have not sought
them. Telephone directories eliminate people who do not have
telephone service, as well as those with
unlisted or newly assigned numbers, and most directories do not
list people’s cell, or mobile, phone
numbers. In the postpositivist view, generalizability is in part a
function of the match between the
conceptual and operational definitions of the sample. If the lists
are not accurate, systematic error can
occur because of differences between the true population and
the study population. When the
accessible population represents the target population, this
establishes population validity.
The researcher must also acknowledge that the intended sample
might differ from the obtained
sample. The issue of response rate was addressed in Chapter 6
on survey research, along with strategies
such as follow-up of nonrespondents and comparison of
respondents and nonrespondents on key
variables. The size and effect of nonresponse or attrition should
be reported and explained in all
approaches to research to address the effect of people not
responding, choosing not to participate, being
inaccessible, or dropping out of the study. This effect represents
a threat to the internal and external
validity (or credibility and transferability) of the study’s
findings. You may recall the related discussion
of this issue in the section on experimental mortality in Chapter
4 and the discussion of credibility and
transferability in Chapter 8. A researcher can use statistical
processes (described in Chapter 13) to
identify the plausibility of fit between the obtained sample and
the group from which it was drawn
when the design of the study permits it.
Identification of Sample Members
It might seem easy to know who is a member of your sample
and who is not; however, complexities
arise because of the ambiguity or inadequacy of the categories
typically used by researchers. Examples
of errors in identification of sample members can readily be
found in research with racial and ethnic
minorities and persons with disabilities. Two examples are
presented here, and the reader is referred to
Chapter 5 on causal comparative and correlational research to
review additional complexities
associated with this issue.
Identification of Race and Ethnicity in Populations
Investigators who examine racial or ethnic groups and
differences between such groups frequently do
so without a clear sense of what race or ethnicity means in a
research context (Blum, 2008).
Researchers who use categorization and assume homogeneity of
condition are avoiding the
complexities of participants’ experiences and social locations.
Selection of samples on the basis of race
should be done with attention to within-group variation and to
the influence of particular contexts.
Race as a biogenetic variable should not serve as a proxy
variable for actual causal variables, such as
poverty, unemployment, or family structure.
Heterogeneity has been recognized as a factor that contributes
to difficulty in classifying people as
African American or Latino (Stanfield, 2011). In reference to
African American populations, Stanfield
writes,
The question of what is blackness, which translates into who
has black African ancestry and how
far back it is in family tree histories, is a subject of empirical
analysis and should remain on the
forefront in any . . . research project. . . . What is needed . . . is
developing theories and methods of
data collection and analysis that remind us that whiteness,
blackness, and other kinds of
racializations are relational phenomena. White people create
black people; black people create
white people, and people in general create each other and
structure each other in hierarchies,
communities, movements, and societies, and global spheres. (p.
18)
Thus, Stanfield recognizes that many people are not pure
racially, but people are viewed as belonging
to specific racial groups in many research studies.
Race is sometimes used as a substitute for ethnicity, which is
usually defined in terms of a common
origin or culture resulting from shared activities and identity
based on some mixture of language,
religion, race, and ancestry (C. D. Lee, 2003). Lee suggests that
the profoundly contextual nature of
race and ethnicity must be taken into account in the study of
ethnic and race relations. Blum (2008)
makes clear that use of broad categories of race can hide
important differences in communities; using
labels such as African American and Asian American ignores
important differences based on ethnicity.
Initial immigration status and social capital among different
Asian immigrant groups result in stark
differences in terms of advantages and positions in current
racial and ethnic stratifications. For
example, Hmong and Cambodians are generally less successful
in American society than Asians from
the southern or eastern parts of Asia. Ethnic plurality is visible
in the Black community in terms of
people who were brought to America during the times of slavery
and those who have come more
recently from Africa or the Caribbean.
For instance, the word Latino has been used to categorize
people of Mexican, Cuban, Puerto Rican,
Dominican, Colombian, Salvadoran, and other extractions. The
term Hispanic has been used to include
people who trace their origins to an area colonized by Spain.
However, both labels obscure important
dimensions of diversity within the groups. This has implications
for sampling and must be attended to
if the results are to be meaningful.
The American Psychological Association Joint Task Force of
Divisions 17 and 45’s Guidelines on
Multicultural Education Training, Research, Practice, and
Organizational Change for Psychologists
(American Psychological Association [APA], 2002) and the
Council of National Psychological
Associations for the Advancement of Ethnic Minority Interests’
(2000) Guidelines for Research in
Ethnic Minority Communities provide detailed insights
into working with four of the major
racial/ethnic minority groups in the United States: Asian
American/Pacific Islander populations,
persons of African descent, Hispanics, and American Indians
(see Box 11.1). Although American Indians/Native Americans (AI/NA) make up approximately 1.4% of the national population, there are more than 560 federally recognized American Indian tribes in the United States (J. B. Unger, Soto, &
Thomas, 2008). Each recognized tribe has its own government
and court system. The diversity in the
AI/NA population is described as follows:
The precise number of AI/ANs in the United States is difficult
to quantify because it depends on
individuals’ self-reports of their AI/AN ancestry and affiliation.
Individuals’ decisions to self-
identify as AI/AN are influenced by the wording of
race/ethnicity questions on surveys,
individuals’ awareness of their ancestry, feelings of
identification with AI/AN cultures, and
perceptions about the potential benefits and costs of labeling
themselves as AI/ANs. (p. 125)
BOX 11.1 Heterogeneity in Racial/Ethnic Minority and
Immigrant
Communities
The American Psychological Association (APA) developed
guidelines for cultural competence in conducting
research. Because of the unique salience of race/ethnicity for
diversity-related issues in the United States,
they developed guidelines for four specific racial ethnic groups:
Asian American/Pacific Islander
populations, persons of African descent, Hispanics, and
American Indian participants (APA, 2002). The APA
used race/ethnicity as the organizing framework; however, they
also recognized the need to be aware of other
dimensions of diversity. They had as a guiding principle the
following:
Recognition of the ways in which the intersection of racial and
ethnic group membership with other
dimensions of identity (e.g., gender, age, sexual orientation,
disability, religion/spiritual orientation,
educational attainment/experiences, and socioeconomic status)
enhances the understanding and
treatment of all people. (p. 19)
They included the following narrative in their discussion:
As an agent of prosocial change, the culturally competent
psychologist carries the responsibility of
combating the damaging effects of racism, prejudice, bias, and
oppression in all their forms, including
all of the methods we use to understand the populations we
serve. . . . A consistent theme . . . relates to
the interpretation and dissemination of research findings that
are meaningful and relevant to each of the
four populations and that reflect an inherent understanding of
the racial, cultural, and sociopolitical
context within which they exist. (p. 1)
Stake and Rizvi (2009) and Banks (2008) discuss the effects of
globalization in terms of
complicating our understandings of who belongs in which
groups and what the implications are for
appropriate inclusion in research for immigrant groups
particularly. The majority of immigrants
coming to the United States are from Asia, Latin America, the
West Indies, and Africa. With national
boundaries eroding, people cross boundaries more frequently
than ever before, resulting in questions
about citizenship and nationality. In addition, political
instability and factors such as war, violence,
drought, or famine have led to millions of refugees who are
essentially stateless. Researchers need to
be aware of the status of immigrant and refugee groups in their
communities and implications for how
they sample in their studies. For example, the University of
Michigan’s Center for Arab American
Studies (www.casl.umd.umich.edu/caas/) conducts studies that
illuminate much of the diversity in that
community. The American Psychological Association (APA,
2013) developed a guide that has
relevance when working with diverse culture communities
called Working With Immigrant-Origin
Clients. Kien Lee’s (2004) work in immigrant communities
provides guidance in working with
immigrants to the United States from a variety of countries,
including China, India, El Salvador, and
Vietnam. Lee also worked with the Work Group for Community
Health and Development (2013) to
develop a Community Tool Box, an online resource that
contains practical information for working
with culturally diverse communities for social change. The tool
box is available at
http://ctb.ku.edu/en/tablecontents/index.aspx.
People With Disabilities
As you will recall from Chapter 6, the federal legislation
Individuals with Disabilities Education Act
(IDEA, 2001; Public Law 108-446, Section 602), reauthorized
in 2004, defines the following
categories of disabilities:
• Mental retardation
• Hearing impairments
• Speech or language impairments
• Visual impairments
• Emotional disturbance
• Orthopedic impairments
• Other health impairments
• Specific learning disabilities
• Multiple disabilities
• Deaf-blindness
• Autism
• Traumatic brain injury
• Developmental delays
Mertens and McLaughlin (2004) present an operational and
conceptual definition for each of these
disability categories. The conceptual definitions can be found in
the IDEA and a data dictionary that is
available at the IDEA website (www.ideadata.org), which
includes definitions of key terms in special
education legislation (Data Accountability Center, 2012). The
translation of these conceptual
definitions into operational definitions is fraught with
difficulty. You can imagine the diversity of
individuals who would be included in a category such as
emotional disturbance, which is defined in the
federal legislation as individuals who are unable to build or
maintain satisfactory interpersonal
relationships, exhibit inappropriate types of behaviors or
feelings, have a generally pervasive mood of
unhappiness or depression, or have been diagnosed with
schizophrenia. Psychologists have struggled
for years with finding ways to accurately classify people with
such characteristics.
A second example of issues that complicate categorizing
individuals with disabilities can be seen in
the federal definition and procedures for identification for
people with learning disabilities displayed in
Box 11.2. The definition indicates eight areas in which the
learning disability can be manifest. This list
alone demonstrates the heterogeneity that is masked when
participants in studies are simply labeled
“learning disabled.” Even within one skill area, such as reading,
there are several potential reasons that
a student would display difficulty in that area (e.g., letter
identification, word attack, comprehension).
Then, there are the complications that arise in moving from this
conceptual definition to the operational
definition. That is, how are people identified as having a
learning disability? And how reliable and
valid are the measures used to establish that a student has a
learning disability (E. Johnson, Mellard, &
Byrd, 2005)? Many researchers in the area of learning
disabilities identify their participants through
school records of Individualized Education Plans; they do not
do independent assessments to determine
the validity of those labels. However, Aaron, Malatesha Joshi,
Gooden, and Bentum (2008) conclude
that many children are not identified as having a learning
disability, yet they exhibit similar skill
deficits as those who are so labeled, further complicating
comparisons between groups. The National
Dissemination Center for Children With Disabilities
(www.nichcy.org) published a series of
pamphlets on the identification of children with learning
disabilities that are geared to professionals
and parents (Hozella, 2007).
Cultural issues also come into play in the definition of people
with disabilities. For example, people
who are deaf use a capital D in writing the word Deaf when a
person is considered to be culturally Deaf
(Harris, Holmes, & Mertens, 2009). This designation as
culturally Deaf is made less on the basis of
one’s level of hearing loss and more on the basis of one’s
identification with the Deaf community and
use of American Sign Language.
BOX 11.2 Federal Definition of Specific Learning Disability
and Identification
Procedures
The following conceptual definition of learning disability is
included in the IDEA legislation:
Specific learning disability means a disorder in one or more of
the basic psychological processes
involved in understanding or in using language, spoken or
written, that may manifest itself in an
imperfect ability to listen, think, speak, read, write, spell, or to
do mathematical calculations, including
conditions such as perceptual disabilities, brain injury, minimal
brain dysfunction, dyslexia, and
developmental aphasia. . . . Specific learning disability does not
include learning problems that are
primarily the result of visual, hearing, or motor disabilities, of
mental retardation, of emotional
disturbance, or of environmental, cultural, or economic
disadvantage. (34 CFR 300.8[c][10])
The federal government addressed the issue of an operational
definition of learning disability as a
determination made by the child’s teachers and an individual
qualified to do individualized diagnostic
assessment such as a school psychologist, based on the
following:
• The child does not achieve adequately for the child’s age or
to meet State-approved grade-level standards
in one or more of the following areas, when provided with
learning experiences and instruction appropriate
for the child’s age or State-approved grade-level standards:
Oral expression.
Listening comprehension.
Written expression.
Basic reading skills.
Reading fluency skills.
Reading comprehension.
Mathematics calculation.
Mathematics problem solving.
• The child does not make sufficient progress to meet age or
State-approved grade-level standards in one or
more of the areas identified in 34 CFR 300.309(a)(1) when
using a process based on the child’s response to
scientific, research-based intervention; or the child exhibits a
pattern of strengths and weaknesses in
performance, achievement, or both, relative to age, State-
approved grade-level standards, or intellectual
development, that is determined by the group to be relevant to
the identification of a specific learning
disability, using appropriate assessments, consistent with 34
CFR 300.304 and 300.305; and the group
determines that its findings under 34 CFR 300.309(a)(1) and (2)
are not primarily the result of:
A visual, hearing, or motor disability;
Mental retardation;
Emotional disturbance;
Cultural factors;
Environmental or economic disadvantage; or
Limited English proficiency.
To ensure that underachievement in a child suspected of having
a specific learning disability is not due to
lack of appropriate instruction in reading or math, the group
must consider, as part of the evaluation
described in 34 CFR 300.304 through 300.306:
• Data that demonstrate that prior to, or as a part of, the
referral process, the child was provided appropriate
instruction in regular education settings, delivered by qualified
personnel; and
• Data-based documentation of repeated assessments of
achievement at reasonable intervals, reflecting
formal assessment of student progress during instruction, which
was provided to the child’s parents.
SOURCES: 34 CFR 300.309; 20 U.S.C. 1221e-3, 1401(30),
1414(b)(6).
The American Psychological Association (APA, 2012)
developed “Guidelines for Assessment of and
Interventions with Persons with Disabilities,” which
acknowledge that defining the term disability is
difficult. It encourages psychologists to adopt a positive,
enablement-focused approach with people
with disabilities rather than focusing on what they cannot do. It
also provides guidance in how to have
a barrier-free physical and communication environment so that
people with disabilities can participate
in research (and therapy) with dignity.
Sampling Strategies
As mentioned previously, the strategy chosen for selecting
samples varies based on the logistics, ethics,
and paradigm of the researcher. An important strategy for
choosing a sample is to determine the
dimensions of diversity that are important to that particular
study. An example is provided in Box 5.1.
Questions for reflection about salient dimensions of diversity in
sampling for focus groups are included
in Box 11.3.
K. M. T. Collins (2010) divides sampling strategies into
probabilistic and purposive. Persons
working in the constructivist paradigm prefer the terms
theoretical or purposive to describe their
sampling. A third category of sampling that is often used, but
not endorsed by proponents of any of the
major paradigms, is convenience sampling.
BOX 11.3 Dimensions of Diversity: Questions for Reflection on
Sampling Strategy in
Focus Group Research
Box 5.1 describes the sampling strategy used by Mertens (2000)
in her study of deaf and hard-of-hearing
people in the court system. The following are questions for
reflection about salient aspects of that strategy:
1. What sampling strategies are appropriate to provide a fair
picture of the diversity within important
target populations? What are the dimensions of diversity that
are important in gender groups? How
can one address the myth of homogeneity in selected cultural
groups—for example, all women are the
same, all deaf people are the same, and so on?
2. What is the importance of considering such a concept in the
context in which you do
research/evaluation?
EXTENDING YOUR THINKING
Dimensions of Diversity
How do you think researchers can address the issues of
heterogeneity within different populations? Find
examples of research studies with women, ethnic minorities, and
people with disabilities. How did the
researchers address heterogeneity in their studies? What
suggestions do you have for improving the way
this issue is addressed?
Probability-Based Sampling
Probability-based sampling is recommended because it is
possible to analyze the possible bias and
likely error mathematically (K. M. T. Collins, 2010). Sampling
error is defined as the difference
between the sample and the population, and can be estimated for
random samples. Random samples are
those in which every member of the population has a known,
nonzero probability of being included in
the sample. Random means that the selection of each unit is
independent of the selection of any other
unit. Random selection can be done in a variety of ways,
including using a lottery procedure drawing
well-mixed numbers, extracting a set of numbers from a list of
random numbers, or producing a
computer-generated list of random numbers. If the sample has
been drawn in such a way that makes it
probable that the sample is approximately the same as the
population on the variables to be studied, it
is deemed to be representative of the population. Researchers
can choose from several strategies for
probability-based sampling. K. M. T. Collins (2010) describes
probabilistic sampling strategies as
follows:
Before the study commences, the researcher establishes a
sampling frame and predetermines the
number of sampling units, preferably based on a mathematical
formula, such as power analysis
and selects the units by using simple random sampling or other
adaptations of simple random
sampling, specifically, stratified, cluster and two-stage or multi-
stage random sampling. (p. 357)
Five examples are presented here:
Simple Random Sampling
Simple random sampling means that each member of the
population has an equal and independent
chance of being selected. The researcher can choose a simple
random sample by assigning a number to
every member of the population, using a table of random
numbers, randomly selecting a row or column
in that table, and taking all the numbers that correspond to the
sampling units in that row or column. Or
the researcher could put all the names in a hat and pull them out
at random. Computers could also be
used to generate a random list of numbers that corresponds to
the numbers of the members of the
population.
This sampling strategy requires a complete list of the
population. Its advantages are the simplicity of
the process and its compatibility with the assumptions of many
statistical tests (described further in
Chapter 13). Disadvantages are that a complete list of the
population might not be available or that the
subpopulations of interest might not be equally represented in
the population. In telephone survey
research in which a complete listing of the population is not
available, the researcher can use a different
type of simple random sampling known as random digit dialing
(RDD). RDD involves the generation
of random telephone numbers that are then used to contact
people for interviews. This eliminates the
problems of out-of-date directories and unlisted numbers. If the
target population is households in a
given geographic area, the researcher can obtain a list of the
residential exchanges for that area, thus
eliminating wasted calls to business establishments.
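To make the procedure concrete, the following is a minimal Python sketch of simple random sampling without replacement, assuming the researcher already has a complete list of the accessible population. The population list, sample size, and seed are illustrative, not taken from the text.

```python
import random

# Hypothetical sampling frame: a complete list of the accessible population.
population = [f"member_{i:04d}" for i in range(1, 1001)]  # 1,000 members

rng = random.Random(2019)  # fixed seed so the draw can be reproduced

# Simple random sampling without replacement: every member has an equal
# and independent chance of being selected.
sample = rng.sample(population, k=100)

print(len(sample), sample[:5])
```

Any equivalent mechanism (a table of random numbers, names drawn from a hat, or a computer-generated list) satisfies the same requirement of equal and independent selection.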
Systematic Sampling
For systematic sampling, the researcher will take every nth
name on the population list. The procedure
involves estimating the needed sample size and dividing the
number of names on the list by the
estimated sample size. For example, if you had a population of
1,000 and you estimated that you
needed a sample size of 100, you would divide 1,000 by 100 and
determine that you need to choose
every 10th name on the population list. You then randomly pick
a place to start on the list that is less
than n and take every 10th name past your starting point.
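The every-nth procedure can be sketched as follows, using the 1,000-name, 100-sample example above; the frame and the fixed numbers are illustrative only.

```python
import random

population_list = [f"name_{i:04d}" for i in range(1, 1001)]  # 1,000 names

sample_size = 100
n = len(population_list) // sample_size   # sampling interval: every 10th name

start = random.randint(0, n - 1)          # random starting point less than n
systematic_sample = population_list[start::n]

print(n, start, len(systematic_sample))
```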
The advantage of this sampling strategy is that you do not need
to have an exact list of all the
sampling units. It is sufficient to have knowledge of how many
people (or things) are in the accessible
population and to have a physical representation for each person
in that group. For example, a
researcher could sample files or invoices in this manner.
Systematic sampling strategy can be used to
accomplish de facto stratified sampling. Stratified sampling is
discussed next, but the basic concept is
sampling from previously established groups (e.g., different
hospitals or schools). If the files or
invoices are arranged by group, the systematic sampling
strategy can result in de facto stratification by
group (i.e., in this example, location of services).
One caution should be noted in the use of systematic sampling.
If the files or invoices are arranged
in a specific pattern, that could result in choosing a biased
sample. For example, if the files are kept in
alphabetical order by year and the number n results in choosing
only individuals or cases whose last
names begin with the letter A, this could be biasing.
Stratified Sampling
This type of sampling is used when there are subgroups (or
strata) of different sizes that you wish to
investigate. For example, if you want to study gender
differences in a special education population, you
need to stratify on the basis of gender, because boys are known
to be more frequently represented in
special education than girls. The researcher then needs to decide
if he or she will sample each
subpopulation proportionately or disproportionately to its
representation in the population.
• Proportional stratified sampling means that the sampling
fraction is the same for each stratum.
Thus, the sample size for each stratum will be different when
using this strategy. This type of
stratification will result in greater precision and reduction of
the sampling error, especially when the
variance between or among the stratified groups is large. The
disadvantage of this approach is that
information must be available on the stratifying variable for
every member of the accessible
population.
• Disproportional stratified sampling is used when there are big
differences in the sizes of the
subgroups, as mentioned previously in gender differences in
special education. Disproportional
sampling requires the use of different fractions of each
subgroup and thus requires the use of weighting
in the analysis of results to adjust for the selection bias. The
advantage of disproportional sampling is
that the variability is reduced within the smaller subgroup by
having a larger number of observations
for the group. The major disadvantage of this strategy is that
weights must be used in the subsequent
analyses; however, most statistical programs are set up to use
weights in the calculation of population
estimates and standard errors.
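As a rough illustration of the two allocation approaches, the sketch below uses a hypothetical special education frame stratified by gender; the stratum sizes, fractions, and target sample sizes are assumptions for the example, not figures from the text.

```python
import random

rng = random.Random(7)

# Hypothetical frame stratified by gender (illustrative counts).
strata = {
    "boys":  [f"boy_{i}" for i in range(800)],
    "girls": [f"girl_{i}" for i in range(200)],
}

# Proportional stratified sampling: the same sampling fraction in every
# stratum, so the stratum sample sizes differ (here 10% of each stratum).
fraction = 0.10
proportional = {name: rng.sample(members, k=int(len(members) * fraction))
                for name, members in strata.items()}

# Disproportional stratified sampling: oversample the smaller stratum and
# record a weight (stratum size / sample size) for later weighted analyses.
target_n = {"boys": 80, "girls": 80}
disproportional = {name: rng.sample(strata[name], k=k) for name, k in target_n.items()}
weights = {name: len(strata[name]) / k for name, k in target_n.items()}

print({k: len(v) for k, v in proportional.items()}, weights)
```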
Cluster Sampling
Cluster sampling is used with naturally occurring groups of
individuals—for example, city blocks or
classrooms in a school. The researcher would randomly choose
the city blocks and then attempt to
study all (or a random sample of) the households in those
blocks. This approach is useful when a full
listing of individuals in the population is not available but a
listing of clusters is. For example,
individual schools maintain a list of students by grade, but no
state or national list is kept. Cluster
sampling is also useful when site visits are needed to collect
data; the researcher can save time and
money by collecting data at a limited number of sites.
The disadvantage of cluster sampling is apparent in the analysis
phase of the research. In the
calculations of sampling error, the number used for the sample
size is the number of clusters, and the
mean for each cluster replaces the sample mean. This reduction
in sample size results in a larger
standard error and thus less precision in estimates of effect.
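A minimal sketch of the cluster approach, assuming only a list of classrooms (clusters) is available rather than a student-level frame; the classroom and student counts are invented for the example.

```python
import random

rng = random.Random(11)

# Hypothetical frame of clusters: classrooms and their students.
classrooms = {f"class_{c}": [f"student_{c}_{i}" for i in range(25)]
              for c in range(40)}

# Cluster sampling: randomly select whole clusters, then study everyone in them.
chosen_classes = rng.sample(list(classrooms), k=8)
cluster_sample = {c: classrooms[c] for c in chosen_classes}

# For sampling-error calculations, the effective n is the number of clusters
# (8 here), not the number of students studied (8 x 25 = 200).
print(chosen_classes, sum(len(v) for v in cluster_sample.values()))
```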
Multistage Sampling
This method consists of a combination of sampling strategies
and is described by K. M. T. Collins
(2010) as “choosing a sample from the random sampling
schemes in multiple stages” (p. 358). For
example, the researcher could use cluster sampling to randomly
select classrooms and then use simple
random sampling to select a sample within each classroom. The
calculations of statistics for multistage
sampling become quite complex; researchers need to be aware that
too few strata will yield unreliable
extremes of the sampling variable. Between roughly 30 and 50
strata work well for multistage samples
using regression analysis.
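A two-stage version of the classroom example just described might look like the following sketch; the cluster counts and within-cluster sample size are assumptions for illustration.

```python
import random

rng = random.Random(42)

# Hypothetical two-stage frame: classrooms (clusters) and their students.
classrooms = {f"class_{c}": [f"student_{c}_{i}" for i in range(30)]
              for c in range(40)}

# Stage 1: randomly select classrooms (cluster sampling).
stage1 = rng.sample(list(classrooms), k=10)

# Stage 2: simple random sample of students within each selected classroom.
stage2 = {c: rng.sample(classrooms[c], k=5) for c in stage1}

print({c: len(s) for c, s in stage2.items()})
```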
Complex Sampling Designs in Quantitative Research
Spybrook, Raudenbush, Liu, Congdon, and Martinez (2008)
discuss sampling issues involved in
complex designs such as cluster randomized trials, multisite
randomized trials, multisite cluster
randomized trials, cluster randomized trials with treatment at
level three, trials with repeated measures,
and cluster randomized trials with repeated measures. The
sampling issues arise because these research
approaches involve the assignment of groups, rather than
individuals, to experimental and control
conditions. This complicates sampling issues because the n of
the clusters may be quite small and
hence limit the ability of the researcher to demonstrate
sufficient power in the analysis phase of the
study. However, Spybrook and colleagues developed a
sophisticated analytic procedure that
accommodates the small cluster sizes while still allowing larger
sample sizes within the clusters to be
tested appropriately. The statistical procedures involved in such
designs exceed the scope of this text;
hence, readers are referred to Spybrook et al. (2008) and other
sources such as Mertler and Vannatta
(2005).
Examples of Sampling in Quantitative Studies
Researchers in education and psychology face many challenges
in trying to use probability-based
sampling strategies. Even in G. D. Borman et al.’s (2007) study
of the Success for All reading program
that is summarized in Chapter 1, they were constrained by the
need to obtain agreement from schools
to participate. They could not select randomly from the group of
schools that agreed to the conditions
of the study because it was already a relatively small group.
Probability-based sampling is generally
easier to do with survey research when a list of people in the
population is available. For example,
Nardo, Custodero, Persellin, and Fox (2006) used the National
Association for the Education of Young
Children’s digital database of 8,000 names of programs that had
fully accredited centers for their study
of the musical practices, musical preparation of teachers, and
music education needs of early childhood
professionals in the United States. They gave the list to a
university-based research center and asked
them to prepare a randomized clustered sample of 1,000 early
childhood centers. The clusters were
based on the state in which the programs were located, and the
number of centers chosen was
proportional to the number of centers in each state.
Henry, Gordon, and Rickman (2006) conducted an evaluation
study of early childhood education in
the state of Georgia in which they were able to randomly select
4-year-olds receiving early education
services either through Head Start (a federal program) or in a
Georgia pre-K program (a state program).
They first established strata based on the number of 4-year-olds
living in each county. Counties were
randomly selected from each stratum. Then, sites within the
counties were randomly selected from both
Head Start and pre-K programs and five children were randomly
selected from each classroom. This
resulted in a list of 98 pre-K and Head Start sites, all of which
agreed to participate in the study (which
the authors acknowledge is “amazing” [p. 83]). The researchers
then asked for parental permission;
75% or more of parents in most sites consented, resulting in a
Head Start sample size of 134. Data were
not collected for 20 of these 134 students because students
moved out of state, withdrew from the
program, or lacked available baseline data. From the 353 pre-K
children, the researchers ended up with
201 students who matched those enrolled in Head Start in terms
of eligibility to be considered for that
program based on poverty indicators. Clearly, thoughtful
strategies are needed in applying random
sampling principles in research in education and psychology.
Purposeful or Theoretical Sampling
As mentioned previously, researchers working within the
constructivist paradigm typically select their
samples with the goal of identifying information-rich cases that
will allow them to study a case in
depth. Although the goal is not generalization from a sample to
the population, it is important that the
researcher make clear the sampling strategy and its associated
logic to the reader. Patton (2002)
identifies the following sampling strategies that can be used
with qualitative methods:
Extreme or Deviant Cases
The criterion for selection of cases might be to choose
individuals or sites that are unusual or special in
some way. For example, the researcher might choose to study a
school with a low record of violence
compared with one that has a high record of violence. The
researcher might choose to study highly
successful programs and compare them with programs that have
failed. Study of extreme cases might
yield information that would be relevant to improving more
“typical” cases. The researcher makes the
assumption that studying the unusual will illuminate the
ordinary. The criterion for selection then
becomes the researcher’s and users’ beliefs about which cases
they could learn the most from.
Psychologists have used this sampling strategy to study deviant
behaviors in specific extreme cases.
Intensity Sampling
Intensity sampling is somewhat similar to the extreme-case
strategy, except there is less emphasis on
extreme. The researcher wants to identify sites or individuals in
which the phenomenon of interest is
strongly represented. Critics of the extreme- or deviant-case
strategy might suggest that the cases are so
unusual that they distort the situation beyond applicability to
typical cases. Thus, the researcher would
look for rich cases that are not necessarily extreme. Intensity
sampling requires knowledge on the part
of the researcher as to which sites or individuals meet the
specified criterion. This knowledge can be
gained by exploratory fieldwork.
Maximum-Variation Sampling
Sites or individuals can be chosen based on the criterion of
maximizing variation within the sample.
For example, the researcher can identify sites located in isolated
rural areas, urban centers, and
suburban neighborhoods to study the effect of total inclusion of
students with disabilities. The results
would indicate what is unique about each situation (e.g., ability
to attract and retain qualified
personnel) as well as what is common across these diverse
settings (e.g., increase in interaction
between students with and without disabilities).
Homogeneous Sampling
In contrast to maximum variation sampling, homogeneous
sampling involves identification of cases or
individuals that are strongly homogeneous. In using this
strategy, the researcher seeks to describe the
experiences of subgroups of people who share similar
characteristics. For example, parents of deaf
children aged 6 through 7 represent a group of parents who have
had similar experiences with
preschool services for deaf children. Homogeneous sampling is
the recommended strategy for focus
group studies. Researchers who use focus groups have found
that groups made up of heterogeneous
people often result in representatives of the “dominant” group
monopolizing the focus group
discussion. For example, combining parents of children with
disabilities in the same focus group with
program administrators could result in the parents’ feeling
intimidated.
Typical-Case Sampling
If the researcher’s goal is to describe a typical case in which a
program has been implemented, this is
the sampling strategy of choice. Typical cases can be identified
by recommendations of knowledgeable
individuals or by review of extant demographic or programmatic
data that suggest that this case is
indeed average.
Stratified Purposeful Sampling
This is a combination of sampling strategies such that
subgroups are chosen based on specified criteria,
and a sample of cases is then selected within those strata. For
example, the cases might be divided into
highly successful, average, and failing schools, and the specific
cases can be selected from each
subgroup.
Critical-Case Sampling
Patton (2002) describes critical cases as those that can make a
point quite dramatically or are, for some
reason, particularly important in the scheme of things. A clue to
the existence of a critical case is a
statement to the effect that “if it’s true of this one case, it’s
likely to be true of all other cases” (p. 243).
For example, if total inclusion is planned for children with
disabilities, the researcher might identify a
community in which the parents are highly satisfied with the
education of their children in a separate
school for children with disabilities. If a program of inclusion
can be deemed to be successful in that
community, it suggests that it would be possible to see that
program succeed in other communities in
which the parents are not so satisfied with the separate
education of their children with disabilities.
Snowball or Chain Sampling
Snowball sampling is used to help the researcher find out who
has the information that is important to
the study. The researcher starts with key informants who are
viewed as knowledgeable about the
program or community. The researcher asks the key informants
to recommend other people to whom
he or she should talk based on their knowledge of who should
know a lot about the program in
question. Although the researcher starts with a relatively short
list of informants, the list grows (like a
snowball) as names are added through the referral of
informants.
Criterion Sampling
The researcher must set up a criterion and then identify cases
that meet that criterion. For example, a
huge increase in referrals from a regular elementary school to a
special residential school for students
with disabilities might lead the researcher to set up a criterion
of “cases that have been referred to the
special school within the last 6 months.” Thus, the researcher
could determine reasons for the sudden
increase in referrals (e.g., Did a staff member recently leave the
regular elementary school? Did the
special school recently obtain staff with expertise that it did not
previously have?).
Theory-Based or Operational Construct Sampling
Sometimes, a researcher will start a study with the desire to
study the meaning of a theoretical
construct such as creativity or anxiety. Such a theoretical
construct must be operationally defined (as
discussed previously in regard to the experimentally accessible
population). If a researcher
operationalizes the theoretical construct of anxiety in terms of
social stresses that create anxiety,
sample selection might focus on individuals who “theoretically”
should exemplify that construct. This
might be a group of people who have recently become
unemployed or homeless.
Confirming and Disconfirming Cases
You will recall that in the grounded theory approach (discussed
in Chapter 8 on qualitative methods),
the researcher is interested in emerging theory that is always
being tested against data that are
systematically collected. The “constant comparative method”
requires the researcher to seek
verification for hypotheses that emerge throughout the study.
The application of the criterion to seek
negative cases suggests that the researcher should consciously
sample cases that fit (confirming) and do
not fit (disconfirming) the theory that is emerging.
Opportunistic Sampling
When working within the constructivist paradigm, researchers
seldom establish the final definition and
selection of sample members prior to the beginning of the study.
When opportunities present
themselves to the researcher during the course of the study, the
researcher should make a decision on
the spot as to the relevance of the activity or individual in terms
of the emerging theory. Thus,
opportunistic sampling involves decisions made regarding
sampling during the course of the study.
Purposeful Random Sampling
In qualitative research, samples tend to be relatively small
because of the depth of information that is
sought from each site or individual. Nevertheless, random
sampling strategies can be used to choose
those who will be included in a very small sample. For example,
in a study of sexual abuse at a
residential school for deaf students, I randomly selected the
students to be interviewed (Mertens, 1996).
The result was not a statistically representative sample but a
purposeful random sampling that could be
defended on the grounds that the cases that were selected were
not based on recommendations of
administrators at the school who might have handpicked a group
of students who would put the school
in a “good light.”
Sampling Politically Important Cases
The rationale for sampling politically important cases rests on
the perceived credibility of the study by
the persons expected to use the results. For example, if a
program has been implemented in a number
of regions, a random sample might (by chance) omit the region
in which the legislator who controls
funds for the program resides. It would be politically expedient
for the legislator to have information
Note A full-sentence outline differs from bullet points because e.docxNote A full-sentence outline differs from bullet points because e.docx
Note A full-sentence outline differs from bullet points because e.docxpicklesvalery
 
Notable photographers 1980 to presentAlmas, ErikAraki, No.docx
Notable photographers 1980 to presentAlmas, ErikAraki, No.docxNotable photographers 1980 to presentAlmas, ErikAraki, No.docx
Notable photographers 1980 to presentAlmas, ErikAraki, No.docxpicklesvalery
 
Note 2 political actions that are in line with Socialism and explain.docx
Note 2 political actions that are in line with Socialism and explain.docxNote 2 political actions that are in line with Socialism and explain.docx
Note 2 political actions that are in line with Socialism and explain.docxpicklesvalery
 

More from picklesvalery (20)

NPV, IRR, Payback period,— PA1Correlates with CLA2 (NPV portion.docx
NPV, IRR, Payback period,— PA1Correlates with CLA2 (NPV portion.docxNPV, IRR, Payback period,— PA1Correlates with CLA2 (NPV portion.docx
NPV, IRR, Payback period,— PA1Correlates with CLA2 (NPV portion.docx
 
Now that you have had the opportunity to review various Cyber At.docx
Now that you have had the opportunity to review various Cyber At.docxNow that you have had the opportunity to review various Cyber At.docx
Now that you have had the opportunity to review various Cyber At.docx
 
Now that you have completed a series of assignments that have led yo.docx
Now that you have completed a series of assignments that have led yo.docxNow that you have completed a series of assignments that have led yo.docx
Now that you have completed a series of assignments that have led yo.docx
 
Now that you have completed your paper (ATTACHED), build and deliver.docx
Now that you have completed your paper (ATTACHED), build and deliver.docxNow that you have completed your paper (ATTACHED), build and deliver.docx
Now that you have completed your paper (ATTACHED), build and deliver.docx
 
Now that you have identified the revenue-related internal contro.docx
Now that you have identified the revenue-related internal contro.docxNow that you have identified the revenue-related internal contro.docx
Now that you have identified the revenue-related internal contro.docx
 
Now that you have read about Neandertals and modern Homo sapiens.docx
Now that you have read about Neandertals and modern Homo sapiens.docxNow that you have read about Neandertals and modern Homo sapiens.docx
Now that you have read about Neandertals and modern Homo sapiens.docx
 
Now that you have had an opportunity to explore ethics formally, cre.docx
Now that you have had an opportunity to explore ethics formally, cre.docxNow that you have had an opportunity to explore ethics formally, cre.docx
Now that you have had an opportunity to explore ethics formally, cre.docx
 
Novel Literary Exploration EssayWrite a Literary Exploration Ess.docx
Novel Literary Exploration EssayWrite a Literary Exploration Ess.docxNovel Literary Exploration EssayWrite a Literary Exploration Ess.docx
Novel Literary Exploration EssayWrite a Literary Exploration Ess.docx
 
Notifications My CommunityHomeBBA 3551-16P-5A19-S3, Inform.docx
Notifications My CommunityHomeBBA 3551-16P-5A19-S3, Inform.docxNotifications My CommunityHomeBBA 3551-16P-5A19-S3, Inform.docx
Notifications My CommunityHomeBBA 3551-16P-5A19-S3, Inform.docx
 
November-December 2013 • Vol. 22No. 6 359Beverly Waller D.docx
November-December 2013 • Vol. 22No. 6 359Beverly Waller D.docxNovember-December 2013 • Vol. 22No. 6 359Beverly Waller D.docx
November-December 2013 • Vol. 22No. 6 359Beverly Waller D.docx
 
NOTEPlease pay attention to the assignment instructionsZero.docx
NOTEPlease pay attention to the assignment instructionsZero.docxNOTEPlease pay attention to the assignment instructionsZero.docx
NOTEPlease pay attention to the assignment instructionsZero.docx
 
NOTE Use below Textbooks only. 400 WordsTopic Which doctrine.docx
NOTE Use below Textbooks only. 400 WordsTopic Which doctrine.docxNOTE Use below Textbooks only. 400 WordsTopic Which doctrine.docx
NOTE Use below Textbooks only. 400 WordsTopic Which doctrine.docx
 
NOTE Everything in BOLD are things that I need to turn in for m.docx
NOTE Everything in BOLD are things that I need to turn in for m.docxNOTE Everything in BOLD are things that I need to turn in for m.docx
NOTE Everything in BOLD are things that I need to turn in for m.docx
 
Note Be sure to focus only on the causes of the problem in this.docx
Note Be sure to focus only on the causes of the problem in this.docxNote Be sure to focus only on the causes of the problem in this.docx
Note Be sure to focus only on the causes of the problem in this.docx
 
Note I’ll provide my sources in the morning, and lmk if you hav.docx
Note I’ll provide my sources in the morning, and lmk if you hav.docxNote I’ll provide my sources in the morning, and lmk if you hav.docx
Note I’ll provide my sources in the morning, and lmk if you hav.docx
 
Note Here, the company I mentioned was Qualcomm 1. Email is the.docx
Note Here, the company I mentioned was Qualcomm 1. Email is the.docxNote Here, the company I mentioned was Qualcomm 1. Email is the.docx
Note Here, the company I mentioned was Qualcomm 1. Email is the.docx
 
Note Please follow instructions to the T.Topic of 3 page pape.docx
Note Please follow instructions to the T.Topic of 3 page pape.docxNote Please follow instructions to the T.Topic of 3 page pape.docx
Note Please follow instructions to the T.Topic of 3 page pape.docx
 
Note A full-sentence outline differs from bullet points because e.docx
Note A full-sentence outline differs from bullet points because e.docxNote A full-sentence outline differs from bullet points because e.docx
Note A full-sentence outline differs from bullet points because e.docx
 
Notable photographers 1980 to presentAlmas, ErikAraki, No.docx
Notable photographers 1980 to presentAlmas, ErikAraki, No.docxNotable photographers 1980 to presentAlmas, ErikAraki, No.docx
Notable photographers 1980 to presentAlmas, ErikAraki, No.docx
 
Note 2 political actions that are in line with Socialism and explain.docx
Note 2 political actions that are in line with Socialism and explain.docxNote 2 political actions that are in line with Socialism and explain.docx
Note 2 political actions that are in line with Socialism and explain.docx
 

Recently uploaded

mini mental status format.docx
mini    mental       status     format.docxmini    mental       status     format.docx
mini mental status format.docxPoojaSen20
 
Micromeritics - Fundamental and Derived Properties of Powders
Micromeritics - Fundamental and Derived Properties of PowdersMicromeritics - Fundamental and Derived Properties of Powders
Micromeritics - Fundamental and Derived Properties of PowdersChitralekhaTherkar
 
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdfssuser54595a
 
URLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website AppURLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website AppCeline George
 
Separation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and ActinidesSeparation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and ActinidesFatimaKhan178732
 
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdfBASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdfSoniaTolstoy
 
Hybridoma Technology ( Production , Purification , and Application )
Hybridoma Technology  ( Production , Purification , and Application  ) Hybridoma Technology  ( Production , Purification , and Application  )
Hybridoma Technology ( Production , Purification , and Application ) Sakshi Ghasle
 
How to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxHow to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxmanuelaromero2013
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon AUnboundStockton
 
Mastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionMastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionSafetyChain Software
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfchloefrazer622
 
Presiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsPresiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsanshu789521
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introductionMaksud Ahmed
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...EduSkills OECD
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeThiyagu K
 
Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3JemimahLaneBuaron
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptxVS Mahajan Coaching Centre
 

Recently uploaded (20)

mini mental status format.docx
mini    mental       status     format.docxmini    mental       status     format.docx
mini mental status format.docx
 
Micromeritics - Fundamental and Derived Properties of Powders
Micromeritics - Fundamental and Derived Properties of PowdersMicromeritics - Fundamental and Derived Properties of Powders
Micromeritics - Fundamental and Derived Properties of Powders
 
Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1
 
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
 
URLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website AppURLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website App
 
Separation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and ActinidesSeparation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and Actinides
 
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdfBASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
 
Hybridoma Technology ( Production , Purification , and Application )
Hybridoma Technology  ( Production , Purification , and Application  ) Hybridoma Technology  ( Production , Purification , and Application  )
Hybridoma Technology ( Production , Purification , and Application )
 
How to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxHow to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptx
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon A
 
Mastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionMastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory Inspection
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdf
 
Presiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsPresiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha elections
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introduction
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and Mode
 
Staff of Color (SOC) Retention Efforts DDSD
Staff of Color (SOC) Retention Efforts DDSDStaff of Color (SOC) Retention Efforts DDSD
Staff of Color (SOC) Retention Efforts DDSD
 
Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
 
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
 

NORMAN, ELTON_BTM7303-12-82NORMAN, ELTON_BTM7303-12-81.docx

Quantitative Research Design
BTM-7303 Assignment # 8
Elton Norman
Dr. Antoinette Kohlman
12 May 2019

Research Problem
The research on the relationship between substance abuse and school dropout cases can be examined using a quantitative methodology. A substantial number of researches on dropout rates touch on the actual percentages of students who drop out of school due to substance abuse. However, limited research has been done on the level of education with the highest dropout rates and the drug most often associated with these cases.
[Comment by Antoinette Kohlman: Do you mean research studies? I do not know what you mean by researches?]
[Comment by Antoinette Kohlman: What impact does this lack of information have on schools, communities, or families? Would studying this situation create new knowledge that will enhance practice or further theoretical development? In your next paper, I would enhance the problem statement with this type of information.]

Purpose of the Research
The purpose of the research is to establish the level at which most students drop out of learning institutions due to substance abuse. It is also geared towards establishing the type of drug which contributes to most of these cases.

Research Questions
At what level of education do most of the youths drop out of school?
[Comment by Antoinette Kohlman: I think there is a gap here. I would add more key questions. For example: >> Among those who drop out of school, what percentage leaves due to illegal drug usage? >> Among those who dropped out of school due to illegal drug usage, what was the most common illegal drug used? How might the above research questions be translated into hypotheses? Hypothesis examples: Illegal drug usage does not have a statistically significant effect on school dropout rates. Illegal drug usage has a statistically significant effect on school dropout rates.]
[Comment by Antoinette Kohlman: This first question is a good starting point because you are acknowledging there are many reasons that contribute to high school dropouts. You then immediately pinpoint your interest in drug usage; however, I think more nuanced questions can be added.]
What type of drug is associated with the highest dropout rates?
Hypotheses
Most of the school dropouts attributed to substance abuse occur in high school.
[Comment by Antoinette Kohlman: There are handouts that explain how to craft hypotheses in the Dissertation Center. Click the following link to access NCU's Developing a Hypothesis Handout. The element that is missing from your hypotheses is "statistical significance." Please see my hypothesis examples above! Here is an excerpt that you can use to self-evaluate your hypotheses. Nature of a hypothesis: (1) it can be tested, that is, it is verifiable or falsifiable; (2) hypotheses are not moral or ethical questions; (3) it is neither too specific nor too general; (4) it is a prediction of consequences; (5) it is considered valuable even if proven false.]
Alcohol abuse contributes to the highest rate of school dropout in high school.
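The hypotheses above hinge on statistical significance, so it is worth seeing what testing one of them could look like in practice. The sketch below is a hypothetical illustration only (the counts are invented, not study data) of a two-proportion z-test comparing dropout rates between respondents who report illegal drug use and those who do not, using the proportions_ztest function from statsmodels.

```python
# Minimal sketch of testing the instructor's example hypotheses.
# The counts are hypothetical placeholders, not data from the study.
from statsmodels.stats.proportion import proportions_ztest

dropouts = [38, 21]        # dropouts among drug-using vs. non-drug-using respondents
sample_sizes = [120, 140]  # number surveyed in each group

# H0: the dropout proportion is the same in both groups (no statistically significant effect)
# H1: the dropout proportions differ
z_stat, p_value = proportions_ztest(count=dropouts, nobs=sample_sizes, alternative="two-sided")

alpha = 0.05
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject H0: the difference in dropout rates is statistically significant.")
else:
    print("Fail to reject H0: no statistically significant difference was detected.")
```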
Use of Surveys
Surveys make up one of the excellent ways of gathering data during quantitative research and involve gathering answers from a chosen sample which represents the population being studied. They include the use of questionnaires, mobile surveys, paper surveys, face-to-face interviews, and telephone surveys. In this research, the use of questionnaires is viable since it will help reach a large number of respondents within a short period.

Advantages of Surveys
One of the advantages of surveys is that they are inexpensive. In most cases, surveys utilize questionnaires whereby the respondents are issued with questions which they are supposed to fill in. In this case, a quantitative study involving the use of surveys can be carried out with a minimum budget and still produce a top-notch survey with valid results.
[Comment by Antoinette Kohlman: I agree. Please cite at least one source!]
The use of surveys in research leads to extensive research. It should be noted that most of the research is used to describe particular aspects of a certain population. In this case, the research carried out must involve a large population so that the results from the sample population infer to the whole population under study. Such results can only be achieved when a method which can reach a large population within a short period is used. In this case, the use of surveys in research gives the researchers an opportunity to conduct the research using a large sample.
[Comment by Antoinette Kohlman: How so? How does a survey lead to "extensive research?"]
[Comment by Antoinette Kohlman: I do not understand what you mean. Doesn't most research target specific populations?]
[Comment by Antoinette Kohlman: So you mean the results can be generalized?]

Disadvantages of Surveys
The use of surveys has disadvantages which include higher chances of bias. It is evident that the researchers are involved in choosing the respondents. In this case, they can select a group of respondents who are inclined to their hypothesis. The fact that samples are used to infer to a large population requires the use of a large sample with respondents who bear different lines of thought with the researchers. In this scenario, a poorly selected sample can lead to unreliable results which are not representative of the larger population (Mitchell, 2010).
[Comment by Antoinette Kohlman: Please be specific and name the type or types of biases.]
[Comment by Antoinette Kohlman: What do you mean? I do not understand how this might occur. Please say more.]
[Comment by Antoinette Kohlman: I do not understand what you mean by "different line(s) of thought."]
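One practical safeguard against the selection problem described above is to let chance, rather than the researcher, pick the respondents. The snippet below is a minimal sketch, assuming a hypothetical sampling frame of former students, of drawing a simple random sample in Python.

```python
# Minimal sketch: a simple random sample so that respondent selection
# does not depend on the researcher's expectations. The frame is hypothetical.
import random

random.seed(42)  # fixed seed only so the example is reproducible

sampling_frame = [f"student_{i:04d}" for i in range(1, 2001)]  # hypothetical frame of 2,000 former students
sample = random.sample(sampling_frame, k=300)                  # every member has an equal chance of selection

print(len(sample), sample[:5])
```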
Although the researchers can select a sample population without bias, a lack of knowledge of the techniques used in sampling can lead to errors. The sampling method involves calculations and statistical analysis which require a researcher with substantial knowledge of sampling techniques. Failure to possess such skills can lead to sampling errors, resulting in misleading research (Mitchell, 2010).

Use of Quasi-experiments
In this study, the use of true experiments is limited as the respondents are already out of the learning institutions. For the study, the respondents will be subjected to a quasi-experiment whereby they will only give details about the level of education at which they dropped out of school and the substance to which they attribute it. The use of quasi-experiments is popular in research as it enables the researcher to control the experiment and eliminates random assignment, which depends on chance and does not guarantee the equivalency of the groups at baseline.
[Comment by Antoinette Kohlman: This would mean you might have to do either a longitudinal study or use a pre-test, post-test design.]
[Comment by Antoinette Kohlman: What exactly is a quasi-experiment? Please explain or define this term and cite your source. Thank you.]

Advantages of Quasi-experiments
The use of quasi-experiments in the research gives the researcher an opportunity to conduct the study without subjecting the respondents to random assignment. Such assignments on substance abuse are unethical to carry out since the study involves human respondents. The results arrived at in the survey will then be used to infer to the whole population, since a large number of respondents will ensure the survey is extensive.
[Comment by Antoinette Kohlman: Some of this information seems inaccurate/incorrect; however, I would need to know which sources you used. Cites are needed.]
Quasi-experiments give the researcher the freedom to shape the data collection to gather substantial data for the study. In normal scenarios, the researchers can only gather limited information about the level at which most of the dropout cases are witnessed. With quasi-experiments, the researcher can tailor the questions to fit the study, for example by listing the drugs most commonly abused so that respondents can choose among them.

Disadvantages of Quasi-experiments
Although quasi-experiments put the researcher in a position to manipulate the research, they lack randomness, which leads to weaker evidence. Randomness is vital in research as it leads to results which infer to the whole population. Failure to include randomness may produce results that favor the hypothesis and that are not representative of the whole population.
The use of quasi-experiments leads to unequal groups, which jeopardizes the internal validity of the research. During surveys, internal validity aids in obtaining the approximate truth concerning causal relationships. A lack of internal validity implies that the experimenter lacks control over the variables which contribute to the results, leading to unreliable data (Polit & Beck, 2010).

Analysis of Potential Data
After the experiments, the potential data are analyzed using statistical tools such as SPSS and SAS. At first, the central tendencies for the acquired data will be obtained. The measures of central tendency in the study will include the median, the mode, and the mean. This will be followed by the variability measurements, which will determine the distribution of the scores and how the scores vary. In this scenario, the variability measurements taken will include the standard deviation, the average deviation, and the range.
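The descriptive step described above could equally be scripted rather than run in SPSS or SAS. The sketch below uses hypothetical scores to compute the measures of central tendency and variability named in this section.

```python
# Minimal sketch of the descriptive statistics described above, on hypothetical scores.
import statistics

scores = [12, 15, 15, 18, 20, 22, 22, 22, 25, 30]  # hypothetical responses

print("mean:", statistics.mean(scores))
print("median:", statistics.median(scores))
print("mode:", statistics.mode(scores))
print("standard deviation:", round(statistics.stdev(scores), 2))
print("range:", max(scores) - min(scores))

# average (mean absolute) deviation
mean = statistics.mean(scores)
print("average deviation:", round(sum(abs(x - mean) for x in scores) / len(scores), 2))
```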
Factors Affecting Data Analysis
The analysis of the data is affected by the level of skill exhibited by the researcher. Although the correct data can be obtained from the questionnaires, poor analysis skills can lead to inaccurate conclusions which do not generalize to the population under study. As such, the researcher must be conversant with the statistical tools in order to draw reliable conclusions from the survey.
The extent of the analysis is another factor which affects the data analysis. During the study, the researcher must establish the level of analysis and apply suitable statistical tools which do not compromise the data integrity. In this case, multiple tools must be applied to analyze the collected data, establish patterns of behavior, and test the hypotheses so as to obtain results which represent the population (Ramachandran & Tsokos, 2009).

Type I and Type II Errors
Type I and Type II errors are examples of errors which can occur in the study. A Type I error occurs when the researcher rejects the null hypothesis when it is true; the researcher concludes that differences exist between the groups when, in reality, they do not. On the other hand, a Type II error means that the researcher fails to reject a false null hypothesis; the researcher's conclusion communicates that there is no difference between the groups although one exists. The presence of these errors leads to false results, as the researcher does not make the correct inferences from the data. In such scenarios, the study is termed unreliable as it contains misleading information (Gravetter & Wallnau, 2007).
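Because Type I and Type II errors are probabilities, they can be made concrete with a small simulation. The sketch below (hypothetical effect size and sample sizes, not study data) repeatedly draws two groups and counts how often a t test rejects the null hypothesis: when there is no real difference the rejection rate approximates alpha, and when a medium difference exists the non-rejection rate is the Type II error rate.

```python
# Minimal simulation of Type I and Type II error rates under hypothetical conditions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n, n_sims = 0.05, 30, 5_000

def rejection_rate(true_difference):
    """Proportion of simulated studies in which the t test rejects H0 at alpha."""
    rejections = 0
    for _ in range(n_sims):
        group_a = rng.normal(0.0, 1.0, n)
        group_b = rng.normal(true_difference, 1.0, n)
        _, p = stats.ttest_ind(group_a, group_b)
        if p < alpha:
            rejections += 1
    return rejections / n_sims

# With no real difference, the rejection rate estimates the Type I error rate (about .05).
print("Type I error rate:", rejection_rate(0.0))

# With a medium standardized difference (d = 0.5), the rejection rate estimates power,
# and 1 - power is the Type II error rate (beta).
power = rejection_rate(0.5)
print("Power for d = 0.5:", power, "-> Type II error rate (beta):", round(1 - power, 3))
```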
Statistical Power
Statistical power refers to the probability that the study will reveal differences if they exist. A study bears the possibility of differences between the groups being studied, and the failure to detect such differences will lead to research with false results. As such, the statistical tests must have the capacity to detect the differences and reject a false null hypothesis. Low statistical power implies that the tests may not identify the differences even when they are present, and it increases the probability of a Type II error, whereby the false null hypothesis is not rejected (Wimmer & Dominick, 2011).
[Comment by Antoinette Kohlman: Elton, if you are required to compare groups, which groups would you compare? Go back to your initial research questions. Could you compare dropout rates based on gender or ethnicity? Could you hypothesize that males drop out of school due to illegal drug usage at a higher rate when compared to females? Does this make sense?]

Avoiding Low Statistical Power
There are numerous actions which can be adopted to ensure the statistical tests have higher statistical power. One of these is to use a greater sample size, since it offers more detailed information concerning the population being studied. Another means of increasing statistical power is adopting a higher level of significance, which increases the chances of rejecting the null hypothesis.

References
Gravetter, F. J., & Wallnau, L. B. (2007). Statistics for the behavioral sciences. Belmont: Wadsworth.
Mitchell, M. L. (2010). Research design explained (7th ed.). Belmont: Wadsworth.
Polit, D. F., & Beck, C. T. (2010). Essentials of nursing research: Appraising evidence for nursing practice. Philadelphia: Wolters Kluwer Health/Lippincott Williams & Wilkins.
Ramachandran, K. M., & Tsokos, C. P. (2009). Mathematical statistics with applications. London: Elsevier Academic Press.
Wimmer, R. D., & Dominick, J. R. (2011). Mass media research: An introduction. Boston, MA: Cengage-Wadsworth.

QUANTITATIVE METHODS IN PSYCHOLOGY
  • 12. A Power Primer Jacob Cohen New brk University One possible reason for the continued neglect of statistical power analysis in research in the behavioral sciences is the inaccessibility of or difficulty with the standard material. A convenient, although not comprehensive, presentation of required sample sizes is provided here. Effect-size indexes and conventional values for these are given for operationally defined small, medium, and large effects. The sample sizes necessary for .80 power to detect effects at these levels are tabled for eight standard statistical tests: (a) the difference between independent means, (b) the significance of a product-moment correlation, (c) the difference between independent rs, (d) the sign test, (e) the difference between independent proportions, (f) chi-square tests for goodness of fit and contin- gency tables, (g) one-way analysis of variance, and (h) the significance of a multiple or multiple partial correlation. The preface to the first edition of my power handbook (Co- hen, 1969) begins: During my first dozen years of teaching and consulting on applied statistics with behavioral scientists, 1 became increasingly im- pressed with the importance of statistical power analysis, an im- portance which was increased an order of magnitude by its neglect in our textbooks and curricula. The case for its importance is
  • 13. easily made: What behavioral scientist would view with equanim- ity the question of the probability that his investigation would lead to statistically significant results, i.e., its power? (p. vii) This neglect was obvious through casual observation and had been confirmed by a power review of the 1960 volume of the Journal of Abnormal and Social Psychology, which found the mean power to detect medium effect sizes to be .48 (Cohen, 1962). Thus, the chance of obtaining a significant result was about that of tossing a head with a fair coin. I attributed this disregard of power to the inaccessibility of a meager and mathe- matically difficult literature, beginning with its origin in the work of Neyman and Pearson (1928,1933). The power handbook was supposed to solve the problem. It required no more background than an introductory psychologi- cal statistics course that included significance testing. The ex- position was verbal-intuitive and carried largely by many worked examples drawn from across the spectrum of behav- ioral science. In the ensuing two decades, the book has been through re- vised (1977) and second (1988) editions and has inspired dozens of power and effect-size surveys in many areas of the social and life sciences (Cohen, 1988, pp. xi-xii). During this period, there has been a spate of articles on power analysis in the social science literature, a baker's dozen of computer programs (re- I am grateful to Patricia Cohen for her useful comments. Correspondence concerning this article should be addressed to Ja- cob Cohen, Department of Psychology, New >brk University, 6 Wash- ington Place, 5th Floor, New York, New York 10003.
  • 14. viewed in Goldstein, 1989), and a breakthrough into popular statistics textbooks (Cohen, 1988, pp. xii-xiii). Sedlmeier and Gigerenzer (1989) reported a power review of the 1984 volume of the Journal of Abnormal Psychology (some 24 years after mine) under the title, "Do Studies of Statistical Power Have an Effect on the Power of Studies?" The answer was no. Neither their study nor the dozen other power reviews they cite (excepting those fields in which large sample sizes are used, e.g., sociology, market research) showed any material im- provement in power. Thus, a quarter century has brought no increase in the probability of obtaining a significant result. Why is this? There is no controversy among methodologists about the importance of power analysis, and there are ample accessible resources for estimating sample sizes in research planning using power analysis. My 2-decades-long expectation that methods sections in research articles in psychological jour- nals would invariably include power analyses has not been real- ized. Indeed, they almost invariably do not. Of the 54 articles Sedlmeier and Gigerenzer (1989) reviewed, only 2 mentioned power, and none estimated power or necessary sample size or the population effect size they posited. In 7 of the studies, null hypotheses served as research hypotheses that were confirmed when the results were nonsignificant. Assuming a medium ef- fect size, the median power for these tests was .25! Thus, these authors concluded that their research hypotheses of no effect were supported when they had only a .25 chance of rejecting these null hypotheses in the presence of substantial population effects. It is not at all clear why researchers continue to ignore power analysis. The passive acceptance of this state of affairs by edi- tors and reviewers is even more of a mystery. At least part of the
  • 15. reason may be the low level of consciousness about effect size: It is as if the only concern about magnitude in much psychologi- cal research is with regard to the statistical test result and its accompanying p value, not with regard to the psychological phenomenon under study. Sedlmeier and Gigerenzer (1989) at- tribute this to the accident of the historical precedence of Fi- Psychological Bulletin, 1992, Vol. 112. No. 1,155-159 Copyright 1992 by the American Psychological Association, Inc. 0033-2909/92/S3-00 155 156 JACOB COHEN sherian theory, its hybridization with the contradictory Ney- man-Pearson theory, and the apparent completeness of Fisher- ian null hypothesis testing: objective, mechanical, and a clear- cut go-no-go decision straddled over p = .05.1 have suggested that the neglect of power analysis simply exemplifies the slow movement of methodological advance (Cohen, 1988, p. xiv), noting that it took some 40 years from Student's publication of the / test to its inclusion in psychological statistics textbooks (Cohen, 1990, p. 1311). An associate editor of this journal suggests another reason: Researchers find too complicated, or do not have at hand, ei- ther my book or other reference material for power analysis. He suggests that a short rule-of-thumb treatment of necessary sam- ple size might make a difference. Hence this article. In this bare bones treatment, I cover only the simplest cases, the most common designs and tests, and only three levels of
  • 16. effect size. For readers who find this inadequate, I unhesitat- ingly recommend Statistic Power Analysis for the Behavioral Sciences (Cohen, 1988; hereafter SPABS). It covers special cases, one-sided tests, unequal sample sizes, other null hypotheses, set correlation and multivariate methods and gives substantive ex- amples of small, medium, and large effect sizes for the various tests. It offers well over 100 worked illustrative examples and is as user friendly as I know how to make it, the technical material being relegated to an appendix. Method Statistical power analysis exploits the relationships among the four variables involved in statistical inference: sample size (N), significance criterion (ft), population effect size (ES), and statistical power. For any statistical model, these relationships are such that each is a function of the other three. For example, in power reviews, for any given statistical test, we can determine power for given a, N, and ES. For research planning, however, it is most useful to determine the N necessary to have a specified power for given a and ES; this article addresses this use. The Significance Criterion, a The risk of mistakenly rejecting the null hypothesis (H) and thus of committing a Type I error, a, represents a policy: the maximum risk
  • 17. attending such a rejection. Unless otherwise stated (and it rarely is), it is taken to equal .05 (part of the Fisherian legacy; Cohen, 1990). Other values may of course be selected. For example, in studies testing sev- eral fys, it is recommended that a - .01 per hypothesis in order that the experimentwise risk (i.e., the risk of any false rejections) not become too large. Also, for tests whose parameters may be either positive or negative, the a risk may be defined as two sided or one sided. The many tables in SPABS provide for both kinds, but the sample sizes provided in this note are all for two-sided tests at a = .01, .05, and. 10, the last for circumstances in which a less rigorous standard for rejection is de- sired, as, for example, in exploratory studies. For unreconstructed one tailers (see Cohen, 1965), the tabled sample sizes provide close approxi- mations for one-sided tests at Via (e.g., the sample sizes tabled under a = .10 may be used for one-sided tests at a = .05). Power The statistical power of a significance test is the long-term probabil- ity, given the population ES, a, and TV of rejecting /&. When the ES is not equal to zero, H, is false, so failure to reject it also incurs an error.
  • 18. This is a Type II error, and for any given ES, a, and N, its probability of occurring is ft. Power is thus 1 - 0, the probability of rejecting a false H,. In this treatment, the only specification for power is .80 (so /3 = .20), a convention proposed for general use. (SPABS provides for 11 levels of power in most of its N tables.) A materially smaller value than .80 would incur too great a risk of a Type II error. A materially larger value would result in a demand for N that is likely to exceed the investigator's resources. Taken with the conventional a = .05, powerof .80 results in a 0M ratio of 4:1 (.20 to .05) of the two kinds of risks. (See SPABS, pp. 53-56.) Sample Size In research planning, the investigator needs to know the N neces- sary to attain the desired power for the specified a and hypothesized ES. A'increases with an increase in the power desired, a decrease in the ES, and a decrease in a. For statistical tests involving two or more groups, Nas here denned is the necessary sample size for each group. Effect Size
  • 19. Researchers find specifying the ES the most difficult part of power analysis. As suggested above, the difficulty is at least partly due to the generally low level of consciousness of the magnitude of phenomena that characterizes much of psychology. This in turn may help explain why, despite the stricture of methodologists, significance testing is so heavily preferred to confidence interval estimation, although the wide intervals that usually result may also play a role (Cohen, 1990). How- ever, neither the determination of power or necessary sample size can proceed without the investigator having some idea about the degree to which the H, is believed to be false (i.e., the ES). In the Neyman-Pearson method of statistical inference, in addition to the specification of HQ, an alternate hypothesis (//,) is counterpoised against fy. The degree to which H> is false is indexed by the discrep- ancy between H, and //, and is called the ES. Each statistical test has its own ES index. All the indexes are scale free and continuous, ranging upward from zero, and for all, the /^ is that ES = 0. For example, for testing the product-moment correlation of a sample for significance, the ES is simply the population r, so H posits that r = 0. As
another example, for testing the significance of the departure of a population proportion (P) from .50, the ES index is g = P - .50, so the H0 is that g = 0. For the tests of the significance of the difference between independent means, correlation coefficients, and proportions, the H0 is that the difference equals zero. Table 1 gives for each of the tests the definition of its ES index.
To convey the meaning of any given ES index, it is necessary to have some idea of its scale. To this end, I have proposed as conventions or operational definitions small, medium, and large values for each that are at least approximately consistent across the different ES indexes. My intent was that medium ES represent an effect likely to be visible to the naked eye of a careful observer. (It has since been noted in effect-size surveys that it approximates the average size of observed effects in various fields.) I set small ES to be noticeably smaller than medium but not so small as to be trivial, and I set large ES to be the same distance above medium as small was below it. Although the definitions were made subjectively, with some early minor adjustments, these conventions have been fixed since the 1977 edition of SPABS and have come into general use. Table 1 contains these values for the tests considered here.
In the present treatment, the H1s are the ESs that operationally define small, medium, and large effects as given in Table 1. For the test of the significance of a sample r, for example, because the ES for this test is simply the alternate-hypothetical population r, small, medium, and large ESs are respectively .10, .30, and .50. The ES index for the t test of the difference between independent means is d, the difference expressed in units of (i.e., divided by) the within-population standard deviation. For this test, the H0 is that d = 0 and the small, medium, and large ESs (or H1s) are d = .20, .50, and .80. Thus, an operationally defined medium difference between means is half a standard deviation; concretely, for IQ scores in which the population standard deviation is 15, a medium difference between means is 7.5 IQ points.

Table 1
ES Indexes and Their Values for Small, Medium, and Large Effects
1. mA vs. mB for independent means: d = (mA - mB) / σ; small = .20, medium = .50, large = .80
2. Significance of product-moment r: the ES index is the population r itself; small = .10, medium = .30, large = .50
3. rA vs. rB for independent rs: q = zA - zB, where z = Fisher's z; small = .10, medium = .30, large = .50
4. P = .5 and the sign test: g = P - .50; small = .05, medium = .15, large = .25
5. PA vs. PB for independent proportions: h = φA - φB, where φ = arcsine transformation; small = .20, medium = .50, large = .80
6. Chi-square for goodness of fit and contingency: w = sqrt(Σ (P1i - P0i)^2 / P0i), summed over cells; small = .10, medium = .30, large = .50
7. One-way analysis of variance: f = σm / σ; small = .10, medium = .25, large = .40
8. Multiple and multiple partial correlation: f^2 = R^2 / (1 - R^2); small = .02, medium = .15, large = .35
Note. ES = population effect size.

Statistical Tests
The tests covered here are the most common tests used in psychological research:
1. The t test for the difference between two independent means, with df = 2(N - 1).
2. The t test for the significance of a product-moment correlation coefficient r, with df = N - 2.
3. The test for the difference between two independent rs, accomplished as a normal curve test through the Fisher z transformation of r (tabled in many statistical texts).
4. The binomial distribution or, for large samples, the normal curve (or equivalent chi-square, 1 df) test that a population proportion (P) = .50. This test is also used in the nonparametric sign test for differences between paired observations.
5. The normal curve test for the difference between two independent proportions, accomplished through the arcsine transformation φ (tabled in many statistical texts). The results are effectively the same when the test is made using the chi-square test with 1 degree of freedom.
6. The chi-square test for goodness of fit (one way) or association in two-way contingency tables. In Table 1, k is the number of cells and P0i and P1i are the null hypothetical and alternate hypothetical population proportions in cell i. (Note that w's structure is the same as chi-square's for cell sample frequencies.) For goodness-of-fit tests, the df = k - 1, and for contingency tables, df = (a - 1)(b - 1), where a and b are the number of levels in the two variables. Table 2 provides (total) sample sizes for 1 through 6 degrees of freedom.
7. One-way analysis of variance. Assuming equal sample sizes (as we do throughout), for g groups, the F test has df = g - 1, g(N - 1). The ES index is the standard deviation of the g population means divided by the common within-population standard deviation. Provision is made in Table 2 for 2 through 7 groups.
8. Multiple and multiple partial correlation. For k independent variables, the significance test is the standard F test for df = k, N - k - 1. The ES index, f^2, is defined for either squared multiple or squared multiple partial correlations (R^2). Table 2 provides for 2 through 8 independent variables.
Note that because all tests of population parameters that can be either positive or negative (Tests 1-5) are two-sided, their ES indexes here are absolute values.
In using the material that follows, keep in mind that the ES posited by the investigator is what he or she believes holds for the population and that the sample size that is found is conditional on the ES. Thus, if a study is planned in which the investigator believes that a population r is of medium size (ES = r = .30 from Table 1) and the t test is to be performed with two-sided α = .05, then the power of this test is .80 if the sample size is 85 (from Table 2). If, using 85 cases, t is not significant, then either r is smaller than .30 or the investigator has been the victim of the .20 (β) risk of making a Type II error.

Table 2
N for Small, Medium, and Large ES at Power = .80 for α = .01, .05, and .10
[The body of this table did not survive extraction; only scattered fragments of the sample-size columns remain. Its rows cover the eight tests of Table 1: mean difference, significance of r, difference between rs, P = .5, difference between proportions, chi-square with 1 to 6 df, one-way ANOVA with 2 to 7 groups, and multiple R^2 with 2 to 8 independent variables.]
Note. ES = population effect size, Sm = small, Med = medium, Lg = large, diff = difference, ANOVA = analysis of variance. Tests numbered as in Table 1. a Number of groups. b Number of independent variables.

Examples
The necessary N for power of .80 for the following examples are found in Table 2.
1. To detect a medium difference between two independent sample means (d = .50 in Table 1) at α = .05 requires N = 64 in each group. (A d of .50 is equivalent to a point-biserial correlation of .243; see SPABS, pp. 22-24.)
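Cohen's Example 1 can be checked with modern software. The sketch below, which is not part of the original article, asks the statsmodels power module for the per-group N needed to detect d = .50 at two-sided alpha = .05 with power = .80; it returns roughly 64, matching the tabled value.

```python
# Sketch reproducing Example 1 with statsmodels (not part of Cohen's article).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.50, alpha=0.05, power=0.80,
                                   alternative="two-sided")
print(round(n_per_group))  # approximately 64 cases per group
```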
  • 36. 2. For a significance test of a sample rala = .01, when the population r is large (.50 in Table 2), a sample size = 41 is required. At a = .05, the necessary sample size = 28. 3. To detect a medium-sized difference between two popula- tion rs (q = .30 in Table 1) at a = .05 requires N = 177 in each group. (The following pairs of rs yield q = .30: .00, .29; .20, .46; .40, .62; .60, .76; .80, .89; .90, .94; see SPABS, pp. 113-116) 4. The sign test tests the HO that .50 of a population of paired differences are positive. If the population proportion^ depar- ture from .50 is medium (q = .15 in Table 1), at a = .10, the necessary N= 67; at a = .05, it is 85. 5. To detect a small difference between two independent population proportions (h = .20 in Table 1) at a = .05 requires TV = 392 cases in each group. (The following pairs of Ps yield approximate values of h = .20: .05, .10; .20, .29; .40, .50; .60, .70; .80, .87; .90, .95; see SPABS, p. 184f.) 6. A 3 X 4 contingency table has 6 degrees of freedom. To detect a medium degree of association in the population (w = .30 in Table 1) at a = .05 requires N = 151. (w = .30 corresponds to a contingency coefficient of .287, and for 6 degrees of free- dom, a Cramer <£ of .212; see SPABS, pp. 220-227). 7. A psychologist considers alternate research plans involv- ing comparisons of the means of either three or four groups in both of which she believes that the ES is medium (/= .25 in Table 1). She finds that at a = .05, the necessary sample size per group is 52 cases for the three-group plan and 45 cases for the
  • 37. four-group plan, thus, total sample sizes of 156 and 180. (When /= .25, the proportion of variance accounted for by group membership is .0588; see SPABS, pp. 280-284.) 8. A psychologist plans a research in which he will do a multiple regression/correlation analysis and perform all the sig- nificance tests at a = .01. For the F test of the multiple R2, he expects a medium ES, that is, f2 = . 15 (from Table 1). He has a candidate set of eight independent variables for which Table 2 indicates that the required sample size is 147, which exceeds his resources. However, from his knowledge of the research area, he believes that the information in the eight variables can be A POWER PRIMER 159 effectively summarized in three. For three variables, the neces- sary sample size is only 108. (Given the relationship between f2 and R2, the values for small, medium, and large R2 are respec- tively .0196, .1304, and .2592, and for R, .14, .36, and .51; see SPABS, pp. 410-414.) References Cohen, J. (1962). The statistical power of abnormal-social psychologi- cal research: A review. Journal of Abnormal and Social Psychology, 65, 145-153. Cohen, J. (1965). Some statistical issues in psychological research. In B. B. Wolman (Ed.), Handbook of clinical psychology (pp. 95- 121).
  • 38. New York: McGraw-Hill. Cohen, J. (1969). Statistical power analysis for the behavioral sciences. San Diego, CA: Academic Press. Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum. Cohen, J. (1990). Things I have learned (so far). American Psychologist, 45,1304-1312. Goldstein, R. (1989). Power and sample size via MS/PC-DOS com- puters. American Statistician, 43, 253-260. Neyman, 1, & Pearson, E. S. (1928). On the use and interpretation of certain test criteria for purposes of statistical inference. Biometrika, 20A,175-240, 263-294. Neyman, J., & Pearson, E. S. (1933). On the problem of the most effi- cient tests of statistical hypotheses. Transactions of the Royal Society of London Series A, 231, 289-337. Sedlmeier, P., & Gigerenzer, G. (1989). Do studies of statistical power have an effect on the power of studies? Psychological Bulletin, 105, 309-316.
  • 39. Received February 1,1991 Revision received April 26,1991 Accepted May 2,1991 • Low Publication Prices for APA Members and Affiliates Keeping You Up-to-Dcrte: All APA members (Fellows; Members; Associates, and Student Affiliates) receive—as part of their annual dues—subscriptions to the American Psychobgist and APA Monitor. High School Teacher and International Affiliates receive subscriptions to the APA Monitor, and they can subscribe to the American Psychologist at a significantly reduced rate. In addition, all members and affiliates are eligible for savings of up to 60% (plus a journal credit) on all other APA journals, as well as significant discounts on subscriptions from coop- erating societies and publishers (e.g., the American Association for Counseling and Develop- ment, Academic Press, and Human Sciences Press). Essential Resources: APA members and affiliates receive special rates for purchases of APA books, including the Publication Manual of the APA, the Master Lectures, and Journals in Psychol- ogy: A Resource Listing for Authors. Other Benefits of Membership: Membership in APA also provides eligibility for low-cost insurance plans covering life, income protection, office overhead, accident protection, health care, hospital indemnity, professional liability,
  • 40. CHAPTER 11 Sampling The simplest rationale for sampling is that it may not be feasible because of time or financial constraints, or even physically possible, to collect data from everyone involved in an evaluation. Sampling strategies provide systematic, transparent processes for choosing who will actually be asked to provide data. —Mertens and Wilson, 2012, p. 410 Relationships are powerful. Our one-to-one connections with each other are the foundation for change.
  • 41. And building relationships with people from different cultures, often many different cultures, is key in building diverse communities that are powerful enough to achieve significant goals. —Work Group for Community Health and Development, 2013 In This Chapter • The viewpoints of researchers who work within the postpositivist, constructivist, and transformative paradigms are contrasted in relation to sampling strategies and generalizability. • External validity is introduced as a critical concept in sampling decisions. • Challenges in the definition of specific populations are described in terms of conceptual and operational definitions, identifying a person’s racial or ethnic status, identifying persons with a disability, heterogeneity within populations, and cultural issues. • Strategies for designing and selecting samples are provided, including probability-based, theoretical- purposive, and convenience sampling. Sampling is also discussed for complex designs such as those using hierarchical linear modeling. • Sampling bias, access issues, and sample size are discussed. • Ethical standards for the protection of study participants are described in terms of an institutional review board’s requirements. • Questions to guide critical analysis of sampling definition,
  • 42. selection, and ethics are provided. Transformative research implies a philosophy that research should confront and act against the causes of injustice and violence, which can be caused not only by that which is researched but also by the process of research itself. Individuals involved in research can be disenfranchised in a few ways: (1) by the hidden power arrangements uncovered by the research process, (2) by the actions of unscrupulous (and even well-intentioned) researchers, but also (3) by researchers' failure to expose those arrangements once they become aware of them. Hidden power arrangements are maintained by secrets of those who might be victimized by them (because they fear retaliation). . . . [Researchers] contribute to this disenfranchisement if it prevents the exposure of hidden power arrangements. (Baez, 2002, pp. 51–52)
  • 43. Sampling Strategies: Alternative Paradigms The decisions that a researcher makes regarding from whom data will be collected, who is included, how they are included, and what is done to conceal or reveal identities in research constitute the topics addressed in this chapter on sampling. As can be seen in the opening quotation, these decisions are complex and not unproblematic. In a simple sense, sampling refers to the method used to select a given number of people (or things) from a population. The strategy for selecting your sample influences the quality of your data and the inferences that you can make from it. The issues surrounding from whom you collect data are what sampling is all about. Within all approaches to research, researchers use sampling for very practical reasons. In most research studies, it is simply not feasible to collect data from every individual in a setting or population. Sampling is one area in which great divergence can be witnessed when comparing the various research paradigms. In general, researchers who function within the postpositivist paradigm see the ideal sampling strategy as some form of probability sampling. Kathleen Collins (2010) describes
  • 44. probability sampling as follows: A researcher uses probability sampling schemes to select randomly the sampling units that are representative of the population of interest. . . . These methods meet the goal of ensuring that every member of the population of interest has an equal chance of selection. . . . When implementing probabilistic sampling designs, the researcher’s objective is to make external statistical generalizations (i.e., generalizing conclusions for the population from which the sample was drawn). (p. 357) Researchers within the constructivist paradigm tend to use a theoretical or purposive approach to sampling. Their sampling activities begin with an identification of groups, settings, and individuals where (and for whom) the processes being studied are most likely to occur (K. M. T. Collins, 2010). Collins explains: When using a purposive sample, the goal is to add to or generate new theories by obtaining new insights or fresh perspectives. . . . Purposive sampling schemes are employed by the researcher to choose strategically elite cases or key informants based on the researcher’s perception that the selected cases will yield a depth of information or a unique perspective. (p. 357) Researchers within the transformative paradigm could choose either a probability or theoretical- purposive approach to sampling, depending on their choice of quantitative, qualitative, or mixed methods. However, they would function with a distinctive
  • 45. consciousness of representing the populations that have traditionally been underrepresented in research. Despite the contrasting views of sampling evidenced within the various paradigms, issues of common concern exist. All sampling decisions must be made within the constraints of ethics and feasibility. Although randomized probability samples are set forth as the ideal in the postpositivist paradigm, they are not commonly used in educational and psychological research. Thus, in practice, the postpositivist and constructivist paradigms are more similar than different in that both use nonrandom samples. Sometimes, the use of convenience samples (discussed at greater length later in this chapter) means that less care is taken by those in both of these paradigms. All researchers should make conscious choices in the design of their samples rather than accepting whatever sample presents itself as most convenient. External Validity (Generalizability) or Transferability As you will recall from Chapter 4, external validity refers to the ability of the researcher (and user of the research results) to extend the findings of a particular study beyond the specific individuals and setting in which that study occurred. Within the postpositivist paradigm, the external validity depends on the design and execution of the sampling strategy. Generalizability is a concept that is linked to the target population—that is, the group to whom we want to generalize findings. In the constructivist paradigm, every instance of a case or process is viewed as both an exemplar of a
  • 46. general class of phenomena and particular and unique in its own way (Denzin & Lincoln, 2011a). The researcher's task is to provide sufficient thick description about the case so that the readers can understand the contextual variables operating in that setting (Lincoln & Guba, 2000). The burden of generalizability then lies with the readers, who are assumed to
  • 47. be able to generalize subjectively from the case in question to their own personal experiences. Lincoln and Guba label this type of generalizability transferability. EXTENDING YOUR THINKING Generalizability or Transferability of Results What is your opinion of a researcher’s ability to generalize results? Is it possible? If so, under what conditions? What do you think of the alternative concept of transferability? Defining the Population and Sample Research constructs, such as racial or ethnic minority or deaf student, can be defined in two ways. Conceptual definitions are those that use other constructs to explain the meaning, and operational definitions are those that specify how the construct will be measured. Researchers often begin their work with a conceptual idea of the group of people they want to study, such as working mothers, drug abusers, students with disabilities, and so on. Through a review of the literature, they formulate a formal, conceptual definition of the group they want to study. For example, the target population might be first-grade students in the United States. An operational definition of the population in the postpositivist paradigm is called the experimentally accessible population, defined as the list of people who fit the conceptual definition. For example, the experimentally accessible population might be all the first-grade students in your school district whose names are entered into the district’s database. You would next need to obtain a list
  • 48. of all the students in that school district. This would be called your sampling frame. Examples of sampling frames include (a) the student enrollment, (b) a list of clients who receive services at a clinic, (c) professional association membership directories, or (d) city phone directories. The researcher should ask if the lists are complete and up-to-date and who has been left off the list. For example, lists of clients at a community mental health clinic eliminate those who need services but have not sought them. Telephone directories eliminate people who do not have telephone service, as well as those with unlisted or newly assigned numbers, and most directories do not list people’s cell, or mobile, phone numbers. In the postpositivist view, generalizability is in part a function of the match between the conceptual and operational definitions of the sample. If the lists are not accurate, systematic error can occur because of differences between the true population and the study population. When the accessible population represents the target population, this establishes population validity. The researcher must also acknowledge that the intended sample might differ from the obtained sample. The issue of response rate was addressed in Chapter 6 on survey research, along with strategies such as follow-up of nonrespondents and comparison of respondents and nonrespondents on key variables. The size and effect of nonresponse or attrition should be reported and explained in all approaches to research to address the effect of people not responding, choosing not to participate, being inaccessible, or dropping out of the study. This effect represents a threat to the internal and external validity (or credibility and transferability) of the study’s
  • 49. findings. You may recall the related discussion of this issue in the section on experimental mortality in Chapter 4 and the discussion of credibility and transferability in Chapter 8. A researcher can use statistical processes (described in Chapter 13) to identify the plausibility of fit between the obtained sample and the group from which it was drawn when the design of the study permits it. Identification of Sample Members
  • 50. It might seem easy to know who is a member of your sample and who is not; however, complexities arise because of the ambiguity or inadequacy of the categories typically used by researchers. Examples of errors in identification of sample members can readily be found in research with racial and ethnic minorities and persons with disabilities. Two examples are presented here, and the reader is referred to Chapter 5 on causal comparative and correlational research to review additional complexities associated with this issue. Identification of Race and Ethnicity in Populations Investigators who examine racial or ethnic groups and differences between such groups frequently do so without a clear sense of what race or ethnicity means in a research context (Blum, 2008). Researchers who use categorization and assume homogeneity of condition are avoiding the complexities of participants' experiences and social locations. Selection of samples on the basis of race should be done with attention to within-group variation and to the influence of particular contexts. Race as a biogenetic variable should not serve as a proxy variable for actual causal variables, such as poverty, unemployment, or family structure.
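Editor's note: one practical way to act on this caution before a sample is drawn is to look at how much variation a broad category hides in the sampling frame itself. The fragment below is a minimal, hypothetical sketch — the pandas DataFrame, column names, subgroup labels, and numbers are all invented for illustration — contrasting a summary by a broad label with the same summary disaggregated by subgroup.

import pandas as pd

# Invented sampling-frame fragment, for illustration only.
frame = pd.DataFrame({
    "broad_label": ["Asian American"] * 6 + ["Black"] * 4,
    "subgroup": ["Hmong", "Hmong", "Cambodian", "Korean", "Asian Indian", "Chinese",
                 "African American", "African American", "Nigerian", "Caribbean"],
    "household_income_k": [32, 35, 30, 81, 95, 78, 54, 48, 72, 60],
})

# A summary by the broad label alone reports a single mean per category ...
print(frame.groupby("broad_label")["household_income_k"].agg(["mean", "std", "count"]))

# ... while disaggregating by subgroup shows the within-group variation
# that the broad label conceals.
print(frame.groupby(["broad_label", "subgroup"])["household_income_k"].agg(["mean", "count"]))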
  • 51. Heterogeneity has been recognized as a factor that contributes to difficulty in classifying people as African American or Latino (Stanfield, 2011). In reference to African American populations, Stanfield writes, The question of what is blackness, which translates into who has black African ancestry and how far back it is in family tree histories, is a subject of empirical analysis and should remain on the forefront in any . . . research project. . . . What is needed . . . is developing theories and methods of data collection and analysis that remind us that whiteness, blackness, and other kinds of racializations are relational phenomena. White people create black people; black people create white people, and people in general create each other and structure each other in hierarchies, communities, movements, and societies, and global spheres. (p. 18) Thus, Stanfield recognizes that many people are not pure racially, but people are viewed as belonging to specific racial groups in many research studies. Race is sometimes used as a substitute for ethnicity, which is usually defined in terms of a common origin or culture resulting from shared activities and identity based on some mixture of language, religion, race, and ancestry (C. D. Lee, 2003). Lee suggests that the profoundly contextual nature of race and ethnicity must be taken into account in the study of ethnic and race relations. Blum (2008) makes clear that use of broad categories of race can hide important differences in communities; using
  • 52. labels such as African American and Asian American ignores important differences based on ethnicity. Initial immigration status and social capital among different Asian immigrant groups result in stark differences in terms of advantages and positions in current racial and ethnic stratifications. For example, Hmong and Cambodians are generally less successful in American society than Asians from the southern or eastern parts of Asia. Ethnic plurality is visible in the Black community in terms of people who were brought to America during the times of slavery and those who have come more recently from Africa or the Caribbean. For instance, the word Latino has been used to categorize people of Mexican, Cuban, Puerto Rican, Dominican, Colombian, Salvadoran, and other extractions. The term Hispanic has been used to include people who trace their origins to an area colonized by Spain. However, both labels obscure important dimensions of diversity within the groups. This has implications for sampling and must be attended to if the results are to be meaningful. The American Psychological Association Joint Task Force of Divisions 17 and 45’s Guidelines on Multicultural Education Training, Research, Practice, and Organizational Change for Psychologists (American Psychological Association [APA], 2002) and the Council of National Psychological Associations for the Advancement of Ethnic Minority Interests’ (2000) Guidelines for Research in Ethnic Minority Communities, 2000 provide detailed insights into working with four of the major racial/ethnic minority groups in the United States: Asian American/Pacific Islander populations,
  • 53. persons of African descent, Hispanics, and American Indians (see Box 11.1). Although American Indians/Native Americans (AI/NA) make up approximately 1.4% of the national population, there are
  • 54. more than 560 federally recognized American Indian tribes in the United States (J. B. Unger, Soto, & Thomas, 2008). Each recognized tribe has its own government and court system. The diversity in the AI/NA population is described as follows: The precise number of AI/ANs in the United States is difficult to quantify because it depends on individuals’ self-reports of their AI/AN ancestry and affiliation. Individuals’ decisions to self- identify as AI/AN are influenced by the wording of race/ethnicity questions on surveys, individuals’ awareness of their ancestry, feelings of identification with AI/AN cultures, and perceptions about the potential benefits and costs of labeling themselves as AI/ANs. (p. 125) BOX 11.1 Heterogeneity in Racial/Ethnic Minority and Immigrant Communities The American Psychological Association (APA) developed guidelines for cultural competence in conducting research. Because of the unique salience of race/ethnicity for diversity-related issues in the United States, they developed guidelines for four specific racial ethnic groups: Asian American/Pacific Islander populations, persons of African descent, Hispanics, and American Indian participants (APA, 2002). The APA used race/ethnicity as the organizing framework; however, they also recognized the need to be aware of other dimensions of diversity. They had as a guiding principle the following: Recognition of the ways in which the intersection of racial and ethnic group membership with other
  • 55. dimensions of identity (e.g., gender, age, sexual orientation, disability, religion/spiritual orientation, educational attainment/experiences, and socioeconomic status) enhances the understanding and treatment of all people. (p. 19) They included the following narrative in their discussion: As an agent of prosocial change, the culturally competent psychologist carries the responsibility of combating the damaging effects of racism, prejudice, bias, and oppression in all their forms, including all of the methods we use to understand the populations we serve. . . . A consistent theme . . . relates to the interpretation and dissemination of research findings that are meaningful and relevant to each of the four populations and that reflect an inherent understanding of the racial, cultural, and sociopolitical context within which they exist. (p. 1) Stake and Rizvi (2009) and Banks (2008) discuss the effects of globalization in terms of complicating our understandings of who belongs in which groups and what the implications are for appropriate inclusion in research for immigrant groups particularly. The majority of immigrants coming to the United States are from Asia, Latin America, the West Indies, and Africa. With national boundaries eroding, people cross boundaries more frequently than ever before, resulting in questions about citizenship and nationality. In addition, political instability and factors such as war, violence, drought, or famine have led to millions of refugees who are essentially stateless. Researchers need to be aware of the status of immigrant and refugee groups in their communities and implications for how
  • 56. they sample in their studies. For example, the University of Michigan's Center for Arab American Studies (www.casl.umd.umich.edu/caas/) conducts studies that illuminate much of the diversity in that community. The American Psychological Association (APA, 2013) developed a guide that has relevance when working with diverse culture communities called Working With Immigrant-Origin Clients. Kien Lee's (2004) work in immigrant communities provides guidance in working with immigrants to the United States from a variety of countries, including China, India, El Salvador, and Vietnam. Lee also worked with the Work Group for Community Health and Development (2013) to develop a Community Tool Box, an online resource that contains practical information for working with culturally diverse communities for social change. The tool box is available at http://ctb.ku.edu/en/tablecontents/index.aspx. People With Disabilities
  • 57. As you will recall from Chapter 6, the federal legislation Individuals with Disabilities Education Act (IDEA, 2001; Public Law 108-446, Section 602), reauthorized in 2004, defines the following categories of disabilities: • Mental retardation • Hearing impairments • Speech or language impairments • Visual impairments • Emotional disturbance • Orthopedic impairments • Other health impairments • Specific learning disabilities • Multiple disabilities • Deaf-blindness • Autism
  • 58. • Traumatic brain injury • Developmental delays Mertens and McLaughlin (2004) present an operational and conceptual definition for each of these disability categories. The conceptual definitions can be found in the IDEA and a data dictionary that is available at the IDEA website (www.ideadata.org), which includes definitions of key terms in special education legislation (Data Accountability Center, 2012). The translation of these conceptual definitions into operational definitions is fraught with difficulty. You can imagine the diversity of individuals who would be included in a category such as emotional disturbance, which is defined in the federal legislation as individuals who are unable to build or maintain satisfactory interpersonal relationships, exhibit inappropriate types of behaviors or feelings, have a generally pervasive mood of unhappiness or depression, or have been diagnosed with schizophrenia. Psychologists have struggled for years with finding ways to accurately classify people with such characteristics. A second example of issues that complicate categorizing individuals with disabilities can be seen in the federal definition and procedures for identification for people with learning disabilities displayed in Box 11.2. The definition indicates eight areas in which the learning disability can be manifest. This list alone demonstrates the heterogeneity that is masked when participants in studies are simply labeled “learning disabled.” Even within one skill area, such as reading, there are several potential reasons that a student would display difficulty in that area (e.g., letter identification, word attack, comprehension).
  • 59. Then, there are the complications that arise in moving from this conceptual definition to the operational definition. That is, how are people identified as having a learning disability? And how reliable and valid are the measures used to establish that a student has a learning disability (E. Johnson, Mellard, & Byrd, 2005)? Many researchers in the area of learning disabilities identify their participants through school records of Individualized Education Plans; they do not do independent assessments to determine the validity of those labels. However, Aaron, Malatesha Joshi, Gooden, and Bentum (2008) conclude that many children are not identified as having a learning disability, yet they exhibit similar skill deficits as those who are so labeled, further complicating comparisons between groups. The National Dissemination Center for Children With Disabilities (www.nichcy.org) published a series of pamphlets on the identification of children with learning disabilities that are geared to professionals and parents (Hozella, 2007). Cultural issues also come into play in the definition of people with disabilities. For example, people who are deaf use a capital D in writing the word Deaf when a person is considered to be culturally Deaf (Harris, Holmes, & Mertens, 2009). This designation as culturally Deaf is made less on the basis of one's level of hearing loss and more on the basis of one's identification with the Deaf community and
  • 60. use of American Sign Language. BOX 11.2 Federal Definition of Specific Learning Disability and Identification Procedures The following conceptual definition of learning disability is included in the IDEA legislation:
  • 61. Specific learning disability means a disorder in one or more of the basic psychological processes involved in understanding or in using language, spoken or written, that may manifest itself in an imperfect ability to listen, think, speak, read, write, spell, or to do mathematical calculations, including conditions such as perceptual disabilities, brain injury, minimal brain dysfunction, dyslexia, and developmental aphasia. . . . Specific learning disability does not include learning problems that are primarily the result of visual, hearing, or motor disabilities, of mental retardation, of emotional disturbance, or of environmental, cultural, or economic disadvantage. (34 CFR 300.8[c][10]) The federal government addressed the issue of an operational definition of learning disability as a determination made by the child’s teachers and an individual qualified to do individualized diagnostic assessment such as a school psychologist, based on the following: • The child does not achieve adequately for the child’s age or to meet State-approved grade-level standards in one or more of the following areas, when provided with learning experiences and instruction appropriate for the child’s age or State-approved grade-level standards: Oral expression. Listening comprehension. Written expression. Basic reading skills.
  • 62. Reading fluency skills. Reading comprehension. Mathematics calculation. Mathematics problem solving. • The child does not make sufficient progress to meet age or State-approved grade-level standards in one or more of the areas identified in 34 CFR 300.309(a)(1) when using a process based on the child’s response to scientific, research-based intervention; or the child exhibits a pattern of strengths and weaknesses in performance, achievement, or both, relative to age, State- approved grade-level standards, or intellectual development, that is determined by the group to be relevant to the identification of a specific learning disability, using appropriate assessments, consistent with 34 CFR 300.304 and 300.305; and the group determines that its findings under 34 CFR 300.309(a)(1) and (2) are not primarily the result of: A visual, hearing, or motor disability; Mental retardation; Emotional disturbance; Cultural factors; Environmental or economic disadvantage; or Limited English proficiency. To ensure that underachievement in a child suspected of having
  • 63. a specific learning disability is not due to lack of appropriate instruction in reading or math, the group must consider, as part of the evaluation described in 34 CFR 300.304 through 300.306: • Data that demonstrate that prior to, or as a part of, the referral process, the child was provided appropriate instruction in regular education settings, delivered by qualified personnel; and
  • 64. • Data-based documentation of repeated assessments of achievement at reasonable intervals, reflecting formal assessment of student progress during instruction, which was provided to the child’s parents. SOURCES: 34 CFR 300.309; 20 U.S.C. 1221e-3, 1401(30), 1414(b)(6). The American Psychological Association (APA, 2012) developed “Guidelines for Assessment of and Interventions with Persons with Disabilities,” which acknowledge that defining the term disability is difficult. It encourages psychologists to adopt a positive, enablement-focused approach with people with disabilities rather than focusing on what they cannot do. It also provides guidance in how to have a barrier-free physical and communication environment so that people with disabilities can participate in research (and therapy) with dignity. Sampling Strategies As mentioned previously, the strategy chosen for selecting samples varies based on the logistics, ethics, and paradigm of the researcher. An important strategy for choosing a sample is to determine the dimensions of diversity that are important to that particular study. An example is provided in Box 5.1. Questions for reflection about salient dimensions of diversity in sampling for focus groups are included in Box 11.3. K. M. T. Collins (2010) divides sampling strategies into probabilistic and purposive. Persons working in the constructivist paradigm prefer the terms theoretical or purposive to describe their
  • 65. sampling. A third category of sampling that is often used, but not endorsed by proponents of any of the major paradigms, is convenience sampling. BOX 11.3 Dimensions of Diversity: Questions for Reflection on Sampling Strategy in Focus Group Research Box 5.1 describes the sampling strategy used by Mertens (2000) in her study of deaf and hard-of-hearing people in the court system. The following are questions for reflection about salient aspects of that strategy: 1. What sampling strategies are appropriate to provide a fair picture of the diversity within important target populations? What are the dimensions of diversity that are important in gender groups? How can one address the myth of homogeneity in selected cultural groups—for example, all women are the same, all deaf people are the same, and so on? 2. What is the importance of considering such a concept in the context in which you do research/evaluation? EXTENDING YOUR THINKING Dimensions of Diversity How do you think researchers can address the issues of heterogeneity within different populations? Find examples of research studies with women, ethnic minorities, and people with disabilities. How did the researchers address heterogeneity in their studies? What suggestions do you have for improving the way this issue is addressed?
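Editor's note: before turning to the individual strategies, it may help to see how the probability-based designs described on the pages that follow (simple random, systematic, and proportional stratified selection from a sampling frame) look in code. The Python sketch below is illustrative only — the pandas sampling frame, the 70/30 gender split, and the sample sizes are invented, and the snippet is not drawn from the chapter itself.

import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=7)  # fixed seed so the illustration is reproducible

# Invented sampling frame of 1,000 students; the 70/30 split loosely mirrors
# the special-education example discussed later in the chapter.
frame = pd.DataFrame({
    "student_id": range(1, 1001),
    "gender": rng.choice(["boy", "girl"], size=1000, p=[0.7, 0.3]),
})

# 1. Simple random sampling: every student has an equal, independent chance.
srs = frame.sample(n=100, random_state=7)

# 2. Systematic sampling: every kth record after a random start, with k = N / n.
k = len(frame) // 100
start = rng.integers(0, k)
systematic = frame.iloc[start::k]

# 3. Proportional stratified sampling: the same 10% fraction within each stratum.
stratified = frame.groupby("gender").sample(frac=0.10, random_state=7)

print(len(srs), len(systematic))
print(stratified["gender"].value_counts())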
  • 66. Probability-Based Sampling Probability-based sampling is recommended because it is possible to analyze the possible bias and likely error mathematically (K. M. T. Collins, 2010). Sampling error is defined as the difference between the sample and the population, and can be estimated for random samples. Random samples are those in which every member of the population has a known, nonzero probability of being included in p. 327 00000001583532 - Research and Evaluation in Education and Psychology: tative, Q ualitative, and M ixed M ethods 05/14/2019 - RS000000000000000 Integrating Diversity W ith Q uantit https://platform.virdocs.com/rscontent/epub/196223/OEBPS/ch0 011.xlink.html?#sp8874081:box5.1 https://platform.virdocs.com/rscontent/epub/196223/OEBPS/ch0 011.xlink.html?#sp8874087:box11.3 https://platform.virdocs.com/rscontent/epub/196223/OEBPS/ch0 011.xlink.html?#sp8874081:box5.1 5/13/2019 Research and Evaluation in Education and Psychology: Integrating Diversity With Quantitative,
  • 67. Qualitative, and Mixed Methods https://ncuone.ncu.edu/d2l/le/content/122307/viewContent/1252 326/View?ou=122307 9/35 t ose w c eve y e be o t e popu at o as a ow , o e o p obab ty o be g c uded the sample. Random means that the selection of each unit is independent of the selection of any other unit. Random selection can be done in a variety of ways, including using a lottery procedure drawing well-mixed numbers, extracting a set of numbers from a list of random numbers, or producing a computer-generated list of random numbers. If the sample has been drawn in such a way that makes it probable that the sample is approximately the same as the population on the variables to be studied, it is deemed to be representative of the population. Researchers can choose from several strategies for probability-based sampling. K. M. T. Collins (2010) describes probabilistic sampling strategies as follows: Before the study commences, the researcher establishes a sampling frame and predetermines the number of sampling units, preferably based on a mathematical formula, such as power analysis and selects the units by using simple random sampling or other adaptations of simple random sampling, specifically, stratified, cluster and two-stage or multi- stage random sampling. (p. 357) Five examples are presented here: Simple Random Sampling Simple random sampling means that each member of the
  • 68. population has an equal and independent chance of being selected. The researcher can choose a simple random sample by assigning a number to every member of the population, using a table of random numbers, randomly selecting a row or column in that table, and taking all the numbers that correspond to the sampling units in that row or column. Or the researcher could put all the names in a hat and pull them out at random. Computers could also be used to generate a random list of numbers that corresponds to the numbers of the members of the population. This sampling strategy requires a complete list of the population. Its advantages are the simplicity of the process and its compatibility with the assumptions of many statistical tests (described further in Chapter 13). Disadvantages are that a complete list of the population might not be available or that the subpopulations of interest might not be equally represented in the population. In telephone survey research in which a complete listing of the population is not available, the researcher can use a different type of simple random sampling known as random digit dialing (RDD). RDD involves the generation of random telephone numbers that are then used to contact people for interviews. This eliminates the problems of out-of-date directories and unlisted numbers. If the target population is households in a given geographic area, the researcher can obtain a list of the residential exchanges for that area, thus eliminating wasted calls to business establishments. Systematic Sampling For systematic sampling, the researcher will take every nth name on the population list. The procedure
  • 69. involves estimating the needed sample size and dividing the number of names on the list by the estimated sample size. For example, if you had a population of 1,000 and you estimated that you needed a sample size of 100, you would divide 1,000 by 100 and determine that you need to choose every 10th name on the population list. You then randomly pick a place to start on the list that is less than n and take every 10th name past your starting point. The advantage of this sampling strategy is that you do not need to have an exact list of all the sampling units. It is sufficient to have knowledge of how many people (or things) are in the accessible population and to have a physical representation for each person in that group. For example, a researcher could sample files or invoices in this manner. Systematic sampling strategy can be used to accomplish de facto stratified sampling. Stratified sampling is discussed next, but the basic concept is sampling from previously established groups (e.g., different hospitals or schools). If the files or invoices are arranged by group, the systematic sampling strategy can result in de facto stratification by group (i.e., in this example, location of services). One caution should be noted in the use of systematic sampling. If the files or invoices are arranged in a specific pattern, that could result in choosing a biased sample. For example, if the files are kept in alphabetical order by year and the number n results in choosing only individuals or cases whose last names begin with the letter A, this could be biasing. p. 329
  • 70. p. 328 00000001583532 - Research and Evaluation in Education and Psychology: tative, Q ualitative, and M ixed M ethods 05/14/2019 - RS000000000000000 Integrating Diversity W ith Q uantit https://platform.virdocs.com/rscontent/epub/196223/OEBPS/ch0 011.xlink.html?#sp8874089 5/13/2019 Research and Evaluation in Education and Psychology: Integrating Diversity With Quantitative, Qualitative, and Mixed Methods https://ncuone.ncu.edu/d2l/le/content/122307/viewContent/1252 326/View?ou=122307 10/35 a es beg w t t e ette , t s cou d be b as g. Stratified Sampling This type of sampling is used when there are subgroups (or strata) of different sizes that you wish to investigate. For example, if you want to study gender differences in a special education population, you need to stratify on the basis of gender, because boys are known to be more frequently represented in
  • 71. special education than girls. The researcher then needs to decide if he or she will sample each subpopulation proportionately or disproportionately to its representation in the population. • Proportional stratified sampling means that the sampling fraction is the same for each stratum. Thus, the sample size for each stratum will be different when using this strategy. This type of stratification will result in greater precision and reduction of the sampling error, especially when the variance between or among the stratified groups is large. The disadvantage of this approach is that information must be available on the stratifying variable for every member of the accessible population. • Disproportional stratified sampling is used when there are big differences in the sizes of the subgroups, as mentioned previously in gender differences in special education. Disproportional sampling requires the use of different fractions of each subgroup and thus requires the use of weighting in the analysis of results to adjust for the selection bias. The advantage of disproportional sampling is that the variability is reduced within the smaller subgroup by having a larger number of observations for the group. The major disadvantage of this strategy is that weights must be used in the subsequent analyses; however, most statistical programs are set up to use weights in the calculation of population estimates and standard errors. Cluster Sampling Cluster sampling is used with naturally occurring groups of individuals—for example, city blocks or
  • 72. classrooms in a school. The researcher would randomly choose the city blocks and then attempt to study all (or a random sample of) the households in those blocks. This approach is useful when a full listing of individuals in the population is not available but a listing of clusters is. For example, individual schools maintain a list of students by grade, but no state or national list is kept. Cluster sampling is also useful when site visits are needed to collect data; the researcher can save time and money by collecting data at a limited number of sites. The disadvantage of cluster sampling is apparent in the analysis phase of the research. In the calculations of sampling error, the number used for the sample size is the number of clusters, and the mean for each cluster replaces the sample mean. This reduction in sample size results in a larger standard error and thus less precision in estimates of effect. Multistage Sampling This method consists of a combination of sampling strategies and is described by K. M. T. Collins (2010) as “choosing a sample from the random sampling schemes in multiple states” (p. 358). For example, the researcher could use cluster sampling to randomly select classrooms and then use simple random sampling to select a sample within each classroom. The calculations of statistics for multistage sampling become quite complex; researchers need to aware that too few strata will yield unreliable extremes of the sampling variable. Between roughly 30 and 50 strata work well for multistage samples using regression analysis. Complex Sampling Designs in Quantitative Research
  • 73. Spybrook, Raudenbush, Liu, Congdon, and Martinez (2008) discuss sampling issues involved in complex designs such as cluster randomized trials, multisite randomized trials, multisite cluster randomized trials, cluster randomized trials with treatment at level three, trials with repeated measures, and cluster randomized trials with repeated measures. The sampling issues arise because these research approaches involve the assignment of groups, rather than individuals, to experimental and control conditions. This complicates sampling issues because the n of the clusters may be quite small and p. 330 00000001583532 - Research and Evaluation in Education and Psychology: tative, Q ualitative, and M ixed M ethods 05/14/2019 - RS000000000000000 Integrating Diversity W ith Q uantit 5/13/2019 Research and Evaluation in Education and Psychology: Integrating Diversity With Quantitative, Qualitative, and Mixed Methods https://ncuone.ncu.edu/d2l/le/content/122307/viewContent/1252
  • 74. 326/View?ou=122307 11/35 hence limit the ability of the researcher to demonstrate sufficient power in the analysis phase of the study. However, Spybrook and colleagues developed a sophisticated analytic procedure that accommodates the small cluster sizes while still allowing larger sample sizes within the clusters to be tested appropriately. The statistical procedures involved in such designs exceed the scope of this text; hence, readers are referred to Spybrook et al. (2008) and other sources such as Mertler and Vannatta (2005). Examples of Sampling in Quantitative Studies Researchers in education and psychology face many challenges in trying to use probability-based sampling strategies. Even in G. D. Borman et al.’s (2007) study of the Success for All reading program that is summarized in Chapter 1, they were constrained by the need to obtain agreement from schools to participate. They could not select randomly from the group of schools that agreed to the conditions of the study because it was already a relatively small group. Probability-based sampling is generally easier to do with survey research when a list of people in the population is available. For example, Nardo, Custodero, Persellin, and Fox (2006) used the National Association for the Education of Young Children’s digital database of 8,000 names of programs that had fully accredited centers for their study of the musical practices, musical preparation of teachers, and music education needs of early childhood professionals in the United States. They gave the list to a university-based research center and asked them to prepare a randomized clustered sample of 1,000 early
  • 75. childhood centers. The clusters were based on the state in which the programs were located, and the number of centers chosen was proportional to the number of centers in each state. Henry, Gordon, and Rickman (2006) conducted an evaluation study of early childhood education in the state of Georgia in which they were able to randomly select 4-year-olds receiving early education services either through Head Start (a federal program) or in a Georgia pre-K program (a state program). They first established strata based on the number of 4-year-olds living in each county. Counties were randomly selected from each stratum. Then, sites within the counties were randomly selected from both Head Start and pre-K programs and five children were randomly selected from each classroom. This resulted in a list of 98 pre-K and Head Start sites, all of which agreed to participate in the study (which the authors acknowledge is “amazing” [p. 83]). The researchers then asked for parental permission; 75% or more of parents in most sites consented, resulting in a Head Start sample size of 134. Data were not collected for 20 of these 134 students because students moved out of state, withdrew from the program, or lacked available baseline data. From the 353 pre-K children, the researchers ended up with 201 students who matched those enrolled in Head Start in terms of eligibility to be considered for that program based on poverty indicators. Clearly, thoughtful strategies are needed in applying random sampling principles in research in education and psychology. Purposeful or Theoretical Sampling As mentioned previously, researchers working within the constructivist paradigm typically select their
  • 76. samples with the goal of identifying information-rich cases that will allow them to study a case in depth. Although the goal is not generalization from a sample to the population, it is important that the researcher make clear the sampling strategy and its associated logic to the reader. Patton (2002) identifies the following sampling strategies that can be used with qualitative methods: Extreme or Deviant Cases The criterion for selection of cases might be to choose individuals or sites that are unusual or special in some way. For example, the researcher might choose to study a school with a low record of violence compared with one that has a high record of violence. The researcher might choose to study highly successful programs and compare them with programs that have failed. Study of extreme cases might yield information that would be relevant to improving more “typical” cases. The researcher makes the assumption that studying the unusual will illuminate the ordinary. The criterion for selection then becomes the researcher’s and users’ beliefs about which cases they could learn the most from. Psychologists have used this sampling strategy to study deviant behaviors in specific extreme cases. p. 331 00000001583532 - Research and Evaluation in Education and Psychology: tative, Q ualitative, and M ixed M
  • 77. ethods 05/14/2019 - RS000000000000000 Integrating Diversity W ith Q uantit https://platform.virdocs.com/rscontent/epub/196223/OEBPS/ch0 011.xlink.html?#sp8874077 5/13/2019 Research and Evaluation in Education and Psychology: Integrating Diversity With Quantitative, Qualitative, and Mixed Methods https://ncuone.ncu.edu/d2l/le/content/122307/viewContent/1252 326/View?ou=122307 12/35 Intensity Sampling Intensity sampling is somewhat similar to the extreme-case strategy, except there is less emphasis on extreme. The researcher wants to identify sites or individuals in which the phenomenon of interest is strongly represented. Critics of the extreme- or deviant-case strategy might suggest that the cases are so unusual that they distort the situation beyond applicability to typical cases. Thus, the researcher would look for rich cases that are not necessarily extreme. Intensity sampling requires knowledge on the part of the researcher as to which sites or individuals meet the specified criterion. This knowledge can be gained by exploratory fieldwork. Maximum-Variation Sampling Sites or individuals can be chosen based on the criterion of maximizing variation within the sample.
  • 78. For example, the researcher can identify sites located in isolated rural areas, urban centers, and suburban neighborhoods to study the effect of total inclusion of students with disabilities. The results would indicate what is unique about each situation (e.g., ability to attract and retain qualified personnel) as well as what is common across these diverse settings (e.g., increase in interaction between students with and without disabilities). Homogeneous Sampling In contrast to maximum variation sampling, homogeneous sampling involves identification of cases or individuals that are strongly homogeneous. In using this strategy, the researcher seeks to describe the experiences of subgroups of people who share similar characteristics. For example, parents of deaf children aged 6 through 7 represent a group of parents who have had similar experiences with preschool services for deaf children. Homogeneous sampling is the recommended strategy for focus group studies. Researchers who use focus groups have found that groups made up of heterogeneous people often result in representatives of the “dominant” group monopolizing the focus group discussion. For example, combining parents of children with disabilities in the same focus group with program administrators could result in the parents’ feeling intimidated. Typical-Case Sampling If the researcher’s goal is to describe a typical case in which a program has been implemented, this is the sampling strategy of choice. Typical cases can be identified by recommendations of knowledgeable individuals or by review of extant demographic or programmatic
  • 79. data that suggest that this case is indeed average. Stratified Purposeful Sampling This is a combination of sampling strategies such that subgroups are chosen based on specified criteria, and a sample of cases is then selected within those strata. For example, the cases might be divided into highly successful, average, and failing schools, and the specific cases can be selected from each subgroup. Critical-Case Sampling Patton (2002) describes critical cases as those that can make a point quite dramatically or are, for some reason, particularly important in the scheme of things. A clue to the existence of a critical case is a statement to the effect that “if it’s true of this one case, it’s likely to be true of all other cases” (p. 243). For example, if total inclusion is planned for children with disabilities, the researcher might identify a community in which the parents are highly satisfied with the education of their children in a separate school for children with disabilities. If a program of inclusion can be deemed to be successful in that community, it suggests that it would be possible to see that program succeed in other communities in which the parents are not so satisfied with the separate education of their children with disabilities. Snowball or Chain Sampling Snowball sampling is used to help the researcher find out who has the information that is important to p. 333
  • 80. p. 332 00000001583532 - Research and Evaluation in Education and Psychology: tative, Q ualitative, and M ixed M ethods 05/14/2019 - RS000000000000000 Integrating Diversity W ith Q uantit 5/13/2019 Research and Evaluation in Education and Psychology: Integrating Diversity With Quantitative, Qualitative, and Mixed Methods https://ncuone.ncu.edu/d2l/le/content/122307/viewContent/1252 326/View?ou=122307 13/35 Snowball sampling is used to help the researcher find out who has the information that is important to the study. The researcher starts with key informants who are viewed as knowledgeable about the program or community. The researcher asks the key informants to recommend other people to whom he or she should talk based on their knowledge of who should know a lot about the program in question. Although the researcher starts with a relatively short list of informants, the list grows (like a snowball) as names are added through the referral of
  • 81. informants. Criterion Sampling The researcher must set up a criterion and then identify cases that meet that criterion. For example, a huge increase in referrals from a regular elementary school to a special residential school for students with disabilities might lead the researcher to set up a criterion of “cases that have been referred to the special school within the last 6 months.” Thus, the researcher could determine reasons for the sudden increase in referrals (e.g., Did a staff member recently leave the regular elementary school? Did the special school recently obtain staff with expertise that it did not previously have?). Theory-Based or Operational Construct Sampling Sometimes, a researcher will start a study with the desire to study the meaning of a theoretical construct such as creativity or anxiety. Such a theoretical construct must be operationally defined (as discussed previously in regard to the experimentally accessible population). If a researcher operationalizes the theoretical construct of anxiety in terms of social stresses that create anxiety, sample selection might focus on individuals who “theoretically” should exemplify that construct. This might be a group of people who have recently become unemployed or homeless. Confirming and Disconfirming Cases You will recall that in the grounded theory approach (discussed in Chapter 8 on qualitative methods), the researcher is interested in emerging theory that is always being tested against data that are systematically collected. The “constant comparative method”
  • 82. requires the researcher to seek verification for hypotheses that emerge throughout the study. The application of the criterion to seek negative cases suggests that the researcher should consciously sample cases that fit (confirming) and do not fit (disconfirming) the theory that is emerging. Opportunistic Sampling When working within the constructivist paradigm, researchers seldom establish the final definition and selection of sample members prior to the beginning of the study. When opportunities present themselves to the researcher during the course of the study, the researcher should make a decision on the spot as to the relevance of the activity or individual in terms of the emerging theory. Thus, opportunistic sampling involves decisions made regarding sampling during the course of the study. Purposeful Random Sampling In qualitative research, samples tend to be relatively small because of the depth of information that is sought from each site or individual. Nevertheless, random sampling strategies can be used to choose those who will be included in a very small sample. For example, in a study of sexual abuse at a residential school for deaf students, I randomly selected the students to be interviewed (Mertens, 1996). The result was not a statistically representative sample but a purposeful random sampling that could be defended on the grounds that the cases that were selected were not based on recommendations of administrators at the school who might have handpicked a group of students who would put the school in a “good light.”
  • 83. Sampling Politically Important Cases The rationale for sampling politically important cases rests on the perceived credibility of the study by the persons expected to use the results. For example, if a program has been implemented in a number of regions, a random sample might (by chance) omit the region in which the legislator who controls funds for the program resides. It would be politically expedient for the legislator to have information p. 334 00000001583532 - Research and Evaluation in Education and Psychology: tative, Q ualitative, and M ixed M ethods 05/14/2019 - RS000000000000000 Integrating Diversity W ith Q uantit https://platform.virdocs.com/rscontent/epub/196223/OEBPS/ch0 011.xlink.html?#sp8874084 5/13/2019 Research and Evaluation in Education and Psychology: Integrating Diversity With Quantitative, Qualitative, and Mixed Methods https://ncuone.ncu.edu/d2l/le/content/122307/viewContent/1252 326/View?ou=122307 14/35