Chapter 4
Survey Research—Describing and Predicting Behavior
Chapter Contents
Introduction to Survey Research
Designing Questionnaires
Sampling From the Population
Analyzing Survey Data
Ethical Issues in Survey Research
In a highly influential book published in the 1960s, the sociologist Erving Goffman (1963) defined stigma as an unusual characteristic that triggers a negative evaluation. In his words, "The stigmatized person is one who is reduced in our minds from a whole and usual person to a tainted, discounted one" (1963, p. 3). People's beliefs about stigmatized characteristics exist largely in the eye of the beholder but have substantial influence on social interactions with the stigmatized (see Snyder, Tanke, & Berscheid, 1977). A large research tradition in psychology has been devoted to understanding both the origins of stigma and the consequences of being stigmatized. According to Goffman and others, the characteristics associated with the greatest degree of stigma have three features in common: They are highly visible, they are perceived as controllable, and they are misunderstood by the public.

Recently, researchers have taken considerable interest in people's attitudes toward members of the gay and lesbian community. Although these attitudes have become more positive over time, this group still encounters harassment and other forms of discrimination on a regular basis (see National Gay Task Force, 1984). One of the top recognized experts on this subject is Gregory Herek, professor of psychology at the University of California at Davis (http://psychology.ucdavis.edu/herek/). In a 1988 article, Herek conducted a survey of heterosexuals' attitudes toward both lesbians and gay men, with the goal of understanding the predictors of negative attitudes. Herek approached this research question by constructing a scale to measure attitudes toward these groups. In three studies, participants were asked to complete this attitude measure, along with other existing scales assessing attitudes about gender roles, religion, and traditional ideologies.

Herek's (1988) research revealed that, as hypothesized, heterosexual males tended to hold more negative attitudes about gay men and lesbians than heterosexual females. However, the same psychological mechanisms seemed to explain the prejudice in both genders. That is, negative attitudes were associated with increased religiosity, more traditional beliefs about family and gender, and fewer experiences actually interacting with gay men and lesbians. These associations meant that Herek could predict people's attitudes toward gay men and lesbians based on knowing their views about family, gender, and religion, as well as their past interactions with the stigmatized group. Herek's primary contribution to the literature in this paper was the insight that reducing stigma toward gay men and lesbians "may require confronting deeply held, socially reinforced values" (1988, p. 473). And this insight was possible only because people were asked to report these values directly.
4.1 Introduction to Survey Research
Whether you are aware of it or not, you have been encountering survey research for most of your life. Every time your telephone rings during dinnertime, and the person on the other end of the line insists on knowing your household income and favorite brand of laundry detergent, he or she is helping to conduct survey research. When news programs try to predict the winner of an election two weeks early, these reports are based on survey research of eligible voters. In both cases, the researcher is trying to make predictions about the products people buy or the candidates they will elect based on people's reports of their own attitudes, feelings, and behaviors.

Surveys can be used in a variety of contexts and are most appropriate for questions that involve people describing their attitudes, their behaviors, or a combination of the two. For example, if you wanted to examine the predictors of attitudes toward the death penalty, you could ask people their opinions on this topic and also ask them about their political party affiliation. Based on these responses, you could test whether political affiliation predicted attitudes toward the death penalty. Or, imagine you wanted to know whether students who spent more time studying were more likely to do well on their exams. This question could be answered using a survey that asked students about their study habits and then tracked their exam grades. We will return to this example near the end of the chapter as we discuss the process of analyzing survey data to test our hypotheses about predictions.
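To preview what that analysis might look like, here is a minimal sketch in Python (the data and variable names are hypothetical, not taken from the textbook) that computes a Pearson correlation between self-reported study hours and exam scores; a value near +1 would suggest that more study time goes along with higher grades.

```python
# A minimal sketch (hypothetical survey data): do self-reported study hours
# predict exam scores? We compute a Pearson correlation by hand.
from math import sqrt

study_hours = [2, 5, 8, 3, 10, 6, 1, 7]    # hypothetical hours studied per week
exam_scores = [61, 74, 88, 65, 93, 80, 55, 84]  # hypothetical exam grades (0-100)

def pearson_r(x, y):
    """Return the Pearson correlation between two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sd_x = sqrt(sum((xi - mean_x) ** 2 for xi in x))
    sd_y = sqrt(sum((yi - mean_y) ** 2 for yi in y))
    return cov / (sd_x * sd_y)

print(f"r = {pearson_r(study_hours, exam_scores):.2f}")  # closer to +1 means a stronger positive association
```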
The common thread running through these two examples is that they require people to report either their thoughts (e.g., opinions about the death penalty) or their behaviors (e.g., the hours they spend studying). Thus, in deciding whether a survey is the best fit for your research question, the key is to consider whether people will be both able and willing to report these things accurately. We will expand on both of these issues in the next section.
In this chapter, we continue our journey along the continuum of control, moving on to survey research, in which the primary goal is either describing or predicting attitudes and behavior. For our purposes, survey research refers to any method that relies on people's reports of their own attitudes, feelings, and behaviors. So, for example, in Herek's (1988) study, the participants reported their attitudes toward lesbians and gay men, rather than these attitudes being somehow directly observed by the researchers. Compared with the qualitative and descriptive designs for observing behavior we discussed in Chapter 3, survey research tends to yield more control over both data collection and question content. Thus, survey research falls somewhere between quantitative descriptive research (Chapter 3) and the explanatory research involved in experimental designs (Chapter 5). This chapter provides an overview of survey research from conceptualization through analysis. We will cover the types of research questions that are best suited to survey research and provide an overview of the decisions to consider in designing and conducting a survey study. We will then cover the process of data collection, with a focus on selecting the people who will complete your survey. Finally, we will cover the three most common approaches for analyzing survey data, bringing us back full circle to addressing our research questions.
Distinguishing Features of Surveys
Survey research designs have three distinguishing features that set them apart from other designs. First, all survey research relies on either written or verbal self-reports of people's attitudes, feelings, and behaviors. This means that researchers will ask participants a series of questions and record their responses. This approach has several advantages, including being relatively straightforward and allowing access to psychological processes (e.g., "Why do you support candidate X?"). However, researchers are also cautious in their interpretation of self-reported data because participants' responses can reflect a combination of their true attitude and their concern over how this attitude will be perceived. Scientists refer to this as social desirability, which means that people may be reluctant to report unpopular attitudes. So if you were to ask people their attitudes about different racial groups, their answers might reflect both their true attitude and their desire not to appear racist. We will return to the issue of social desirability later in the chapter and discuss some techniques for designing questions that help sidestep these concerns and capture respondents' true attitudes.
The second distinguishing feature of survey research is that it has the ability to access internal states that cannot be measured through direct observation. In our discussion of observational designs in Chapter 3, we learned that one of the limitations of these designs was a lack of insight into why people do what they do. Survey research is able to address this limitation directly: By asking people what they think, how they feel, and why they behave in certain ways, researchers come closer to capturing the underlying psychological processes. However, people's reports of their internal states should be taken with a grain of salt, for two reasons. First, as mentioned, these reports may be biased by social desirability concerns, particularly when unpopular attitudes are involved. Second, there is a large literature in social psychology suggesting that people may not be very accurate at understanding the true reasons for their behavior. In a highly cited review paper, psychologists Richard Nisbett and Tim Wilson (1977) argued that we make poor guesses about why we do things, and those guesses are based more on our assumptions than on any real introspection. Thus, survey questions can provide access to internal states, but these should always be interpreted with caution.

The third distinguishing feature is more practical: Survey research allows us to collect large amounts of data with relatively little effort and few resources. However, its actual efficiency depends on the decisions made during the design process. In reality, efficiency is often in a delicate balance with the accuracy and completeness of the data.
Broadly speaking, survey research can be conducted using either verbal or written self-reports (or a combination of the two). Before we dive into the details of writing and formatting a survey, it is important to understand the pros and cons of administering your survey as an interview (i.e., an oral survey) or a questionnaire (i.e., a written survey).
Interviews
An interview involves an oral question-and-answer exchange between the researcher and the participant. This exchange can take place either face-to-face or over the phone. So our telemarketer example from earlier in the chapter represents an interview because the questions are asked orally, via phone. Likewise, if you are approached in a shopping mall and asked to answer questions about your favorite products, you are experiencing a survey in interview form because the questions are administered out loud. And, if you have ever taken part in a focus group, in which a group of people gives their reactions to a new product, the researchers are essentially conducting an interview with the group. (For a more in-depth discussion of focus groups and other interview techniques, see Chapter 3, Section 3.2, Qualitative Research Interviews.)

Conducting interviews may allow a researcher to gather more detailed and richer responses.
Interview Schedules
Regardless of how the interview is administered, the interviewer (i.e., the researcher) has a predetermined plan, or script, for how the interview should go. This plan for the progress of the interview is known as an interview schedule. When conducting an interview—including those telemarketing calls—the researcher/interviewer has a detailed plan for the order of questions to be asked, along with follow-up questions depending on the participant's responses.

Broadly speaking, there are two types of interview schedules. A linear (also called "structured") schedule will ask the same questions in the same order for all participants. In contrast, a branching schedule unfolds more like a flowchart, with the next question dependent on participants' answers. A branching schedule is typically used in cases with follow-up questions that make sense only for some of the participants. For example, you might first ask people whether they have children; if they answer "yes," you could then follow up by asking how many.
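As a rough illustration (a hedged sketch, not a procedure the textbook prescribes), the short Python program below implements this branching logic: the follow-up question about number of children is asked only when the first answer makes it relevant, whereas a linear schedule would ask every respondent every question in the same order.

```python
# A minimal sketch of a branching interview schedule: the follow-up question
# is asked only for respondents whose earlier answer makes it relevant.

def branching_interview():
    responses = {}
    responses["has_children"] = input("Do you have children? (yes/no) ").strip().lower()

    # Branch: only respondents who answered "yes" get the follow-up question.
    if responses["has_children"] == "yes":
        responses["number_of_children"] = int(input("How many children do you have? "))

    return responses

if __name__ == "__main__":
    print(branching_interview())
```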
One danger in using a branching schedule is that it is based partly on your assumptions about the relationships between variables. Granted, it is fairly uncontroversial to ask only people with children to indicate how many children they have. But imagine the following scenario in which you first ask participants for their household income, and then ask about their political donations:

"How much money do you make? $18,000? Okay, how likely are you to donate money to the Democratic Party?"

"How much money do you make? $250,000? Okay, how likely are you to donate money to the Republican Party?"

The assumption implicit in the way these questions branch is that wealthier people are more likely to be Republicans and less wealthy people to be Democrats. This might be supported by the data or it might not. But by planning the follow-up questions in this way, you are unable to capture cases that do not fit your stereotypes (i.e., the wealthy Democrats and the poor Republicans). The lesson here is to be careful about letting your biases shape the data collection process, as this can create invalid or inaccurate findings.
Advantages and Disadvantages of Interviews
Spoken interviews have a number of advantages over written surveys. For one, people are often more motivated to talk than they are to write. Let's say that an undergraduate research assistant is dispatched to a local shopping mall to interview people about their experiences in romantic relationships. The researcher may have no trouble at all recruiting participants, many of whom will be eager to divulge the personal details about their recent relationships. But for better or for worse, these experiences will be more difficult for the researcher to capture in writing. Related to this observation, people's oral responses are typically richer and more detailed than their written responses. Think of the difference between asking someone to "Describe your views on gun control" versus "Indicate on a scale of 1 to 7 the degree to which you support gun control." The former is more likely to capture the richness and subtlety involved in people's attitudes about guns.

On a practical note, using an interview format also allows you to ensure that respondents understand the questions. If written questionnaire items are poorly worded, people are forced to guess at your meaning, and these guesses introduce a big source of error variance (variance from random sources that are irrelevant to the trait or ability the questionnaire is purporting to measure). But if an interview question is poorly asked, people find it much easier to ask the interviewer to clarify. Finally, using an interview format allows you to reach a broader cross-section of people and to include those who are unable to read and write—or, perhaps, unable to read and write the language of your survey.
Interviews also have three clear disadvantages compared with written surveys. First, interviews are more costly in terms of both time and money. It certainly takes more of a research assistant's time to spend a day at a shopping mall than it would to mail out packets of surveys (but no more money—these research assistant gigs tend to be unpaid!). Second, the interview format allows many opportunities for personal bias to enter the interview. These biases are unlikely to be deliberate, but participants can often pick up on body language and subtle facial expressions when the interviewer disagrees with their answers. These cues may lead them to shape their responses in order to make the interviewer happier (the influence of social desirability again). Third, interviews can be difficult to score and interpret, especially with open-ended questions. Although administering them may be easy, scoring them is relatively more complicated, often involving subjectivity or bias in the interpretation. Because the researcher often has to make judgments based on personal beliefs about the quality of the response, multiple raters are generally used to score the responses in order to minimize bias.
The best way to understand the pros and cons of interviewing is to recognize that both stem from the personal interaction involved. The interaction between interviewer and interviewee allows for richer responses but also creates the potential for these responses to be biased. As a researcher, you have to weigh these pros and cons and decide which method is the best fit for your survey. In the next section, we turn our attention to the process of administering surveys in writing.
Questionnaires
A questionnaire is a survey that involves a written question-and-answer exchange between the researcher and the participant. The questionnaire can be in open-ended format (e.g., the participant writes in his or her answer) or forced-choice response format (e.g., the participant selects from a set of responses, such as with multiple choice questions, rating scales, or true/false questions), which will be discussed later in this chapter. The exchange is a bit different from what we saw with interview formats. In written surveys, the questions are designed ahead of time and then given to participants, who write their responses and return the questionnaire to the researcher. In the next section, we will discuss details for designing these questions. But before we get there, let's take a quick look at the process of administering written surveys.
Distribution Methods
Questionnaires can be distributed in three primary ways, each with its own pattern of advantages and disadvantages.
Distributing by Mail
Until recently, one common way to distribute surveys was to send paper copies through the mail to a group of participants (see Section 4.3, Sampling From the Population, for more discussion on how this group is selected). Mailing surveys is relatively inexpensive and relatively easy to do, but is unfortunately one of the worst methods when it comes to response rates. People tend to ignore questionnaires they receive in the mail, dismissing them as one more piece of junk. There are a few tricks available to researchers to increase response rates, including providing incentives, making the survey interesting, and making it as easy as possible to return the results (e.g., with a postage-paid envelope). However, even using all of these tricks, researchers consider themselves extremely lucky to get a 30% response rate from a mail survey. That is, if you mail 1,000 surveys, you will be doing well to receive 300 back. Because of this low return on investment, researchers have begun using other methods for their written surveys.
Distributing in Person
Another option is to distribute a written survey in person, simply handing out copies and asking participants to fill them out on the spot. This method is certainly more time-consuming, as a researcher has to be stationed for long periods of time in order to collect data. In addition, people are less likely to answer the questions honestly because the presence of a researcher makes them worry about social desirability. Last, the sample for this method is limited to people who are in the physical area at the time that questionnaires are being handed out. As we will discuss later, this might lead to problems in the composition of the sample. On the plus side, however, this method tends to result in higher compliance rates because it is harder to say no to someone face-to-face than it is to ignore a piece of mail.
Distributing Online
Over the past 20 years, Internet, or Web-based, surveys have become increasingly common. In Web-based survey research, the questionnaire is designed and posted on a Web page, to which participants are directed in order to complete the questionnaire. The advantages of online distribution are clear: This method is easiest for both researchers and participants and may give people a greater sense of anonymity, thereby encouraging more honest responses. In addition, response times are faster and the data are easier to analyze because they are already in digital format. The disadvantages include the following: specific groups may be underrepresented because they do not have access to the Internet, the researcher has little to no control over sample selection, and the researcher receives responses only from those who are interested in the topic—so-called self-selection bias. All these limitations could raise questions about the validity and reliability of the data collected. In addition, several ethical issues might arise regarding informed consent and the privacy of participants. So when considering conducting Web-based surveys, researchers should evaluate all the advantages and disadvantages, as well as any ethical or legal implications.

For readers interested in more information on designing and conducting Internet research, Sam Gosling and John Johnson's 2010 book Advanced Methods for Conducting Online Behavioral Research is an excellent resource. In addition, several groups of psychological researchers have been attempting to understand the psychology of Internet users. (You can read about recent studies at http://www.spring.org.uk/2010/10/internet-psychology.php.)
Advantages and Disadvantages of Questionnaires
Just as with interview methods, written questionnaires have their own set of advantages and disadvantages. Written surveys allow researchers to collect large amounts of data with little cost or effort, and they can offer a greater degree of anonymity than interviews. Anonymity can be a particular advantage in dealing with sensitive or potentially embarrassing topics. That is, people may be more willing to answer a questionnaire about their alcohol use or their sexual history than they would be to discuss these things face-to-face with an interviewer. On the downside, written surveys miss out on the advantages of interviews because no one is available to clarify confusing questions or to gather more information as needed. Fortunately, there is one relatively easy way to minimize this problem: Write questions (and response choices, if using multiple choice or forced choice formats) that are as clear as possible. In the next section, we turn our attention to the process of designing questionnaires.
Simple language is one characteristic of an effective questionnaire.
4.2 Designing Questionnaires
One of the most important elements when conducting survey research is deciding how to construct and assemble the questionnaire items. In some cases, you will be able to use questionnaires that other researchers have developed in order to answer your research questions. For example, many psychology researchers use standard scales that measure behavior or personality traits, such as self-esteem, prejudice, depression, or stress levels. The advantage of these ready-made measures is that other people have already gone to the trouble of making sure they are valid and reliable. So if you are interested in the relationship between stress and depression, you could distribute the Perceived Stress Scale (Cohen, Kamarck, & Mermelstein, 1983) and the Beck Depression Inventory (Beck, Steer, Ball, & Ranieri, 1996) to a group of participants and move on to the fun part of data analyses. (For further discussion, see Chapter 2, Section 2.2, Reliability and Validity.)

However, in many cases there is no perfect measure for your research question—either because no one has studied the topic before or because the current measures do not accurately assess the construct of interest you are investigating. When this happens, you will need to go through the process of designing your own questionnaire. In this section, we discuss strategies for writing questions and choosing the most appropriate response format.
Five Rules for Designing Better Questionnaires
Each of the following rules is designed to make your questions as clear and easy to understand as possible in order to minimize the potential for error variance. We discuss each one and illustrate them with contrasting pairs of items, consisting of "bad" items that do not follow the rule and "better" items that do.
1. Use simple language. One of the simplest and most important rules to keep in mind is that people have to be able to understand your questions. This means you should avoid jargon and specialized language whenever possible.

BAD: "Have you ever had an STD?"
BETTER: "Have you ever had a sexually transmitted disease?"

BAD: "What is your opinion of the S-CHIP?"
BETTER: "What is your opinion of the State Children's Health Insurance Program?"

It is also a good idea to simplify the language as much as possible so that people spend time answering the question rather than trying to decode your meaning. For example, words like assist and consider can be replaced with words like help and think. This may seem odd—or perhaps even condescending to your participants—but it is always better to err on the side of simplicity. Remember, if people are forced to guess at the meaning of your questions, these guesses add error variance to their answers.
In addition, when developing and administering surveys to various populations, it is important to remember to use language and examples that are not culturally or linguistically biased. For example, participants from various cultures may not be familiar with questions involving the historical development of the United States or the nature of gender roles in the United States. Thus, with the changing U.S. population and the different languages being spoken in the home, it is important to use language that does not discriminate against particular populations. Also, it is advisable to avoid slang at all costs.
2. Be precise. Another way to ensure that people understand the question is to be as precise as possible in your wording. Questions that are ambiguous in their wording will introduce an extra source of error variance into your data because people may interpret these questions in varying ways.

BAD: "What kind of drugs do you take?" (Legal drugs? Illegal drugs? Now? In college?)
BETTER: "What kind of prescription drugs are you currently taking?"

BAD: "Do you like sports?" (Playing? Watching? Which sports?)
BETTER: "How much do you like watching basketball on television?"
3. Use neutral language. It is important that your questions be designed to measure your participants' attitudes, feelings, or behaviors rather than to manipulate their responses. That is, you should avoid leading questions, which are written in such a way that they suggest an answer.

BAD: "Do you beat your children?" (Clearly, beating isn't good; who would say yes?)
BETTER: "Is it acceptable to use physical forms of discipline?"

BAD: "Do you agree that the president is an idiot?" (Hmmm . . . I wonder what the researcher thinks. . . .)
BETTER: "How would you rate the president's job performance?"

This guideline can also be used to sidestep social desirability concerns. If you suspect that people may be reluctant to report holding an attitude such as using corporal punishment with their children, it helps to phrase the question in a nonthreatening way—"using physical forms of discipline" versus "beating your children." Many current measures of prejudice adopt this technique. For example, McConahay's Modern Racism Scale contains items such as "Discrimination against Blacks is no longer a problem in the United States" (McConahay, 1986). People who hold prejudicial attitudes are more likely to agree with statements like this one than with more blunt ones like "I hate people from Group X."
4. Ask one question at a time. One remarkably common error that people make in designing questions is to include a double-barreled question, which fires off more than one question at a time. When you fill out a new patient questionnaire at your doctor's office, these forms often ask whether you suffer from "headaches and nausea." What if you suffer from only one of these? Or what if you have a lot of nausea and only an occasional headache? The better approach is to ask about each of these symptoms separately.

BAD: "Do you suffer from pain and numbness?"
BETTER: "How often do you suffer from pain?" "How often do you suffer from numbness?"

BAD: "Do you like watching football and boxing?"
BETTER: "How much do you enjoy watching professional football on TV?" "How much do you enjoy watching boxing on TV?"
5. Avoid negations. One final and simple way to clarify your questions is to avoid questions with negative statements because these can often be difficult to understand. In the following examples, the first is admittedly a little silly, but the second comes from a real survey of voter opinions.

BAD: "Do you never not cheat on your exams?" (Wait—what? Do I cheat? Do I not cheat? What is this asking?)
BETTER: "Have you ever cheated on an exam?"

BAD: "Are you against rejecting the ban on pesticides?" (Wait—so am I for the ban? Against the ban? What is this asking?)
BETTER: "Do you support the current ban on pesticides?"
Participant Response Options
In this section, we turn our attention to the issue of deciding how participants should respond to your questions. The decisions you make at this stage affect the type of data you ultimately collect, so it is important to choose carefully. We also review the primary decisions you will need to make about response options, as well as the pros and cons of each one.

One of the first choices you have to make is whether to collect open-ended or fixed-format responses. As the names imply, fixed-format responses require participants to choose from a list of options (e.g., "Pick your favorite color"), whereas open-ended responses ask participants to provide unstructured responses to a question or statement (e.g., "How do you feel about legalizing marijuana?"). Open-ended responses tend to be richer and more flexible but harder to translate into quantifiable data—analogous to the trade-off we discussed in comparing written versus oral survey methods. To put it another way, some concepts are difficult to reduce to a seven-point fixed-format scale, but these scales are easier to analyze than a paragraph of free-flowing text.
Another reason to think carefully about this decision is that fixed-format responses will, by definition, restrict people's options in answering the question. In some cases, these restrictions can even act as leading questions. In a study of people's perceptions of history, James Pennebaker and his colleagues (2006) asked respondents to indicate the "most significant event over the last 50 years." When this was asked in an open-ended way (i.e., "list the most significant event"), 2% of participants listed the invention of computers. In another version of the survey, this question was asked in a fixed-format way (i.e., "choose the most significant event"). When asked to select from a list of four options (World War II, Invention of Computers, Tiananmen Square, or Man on the Moon), 30% chose the invention of computers! In exchange for having easily coded data, the researchers accidentally forced participants into a smaller number of options and ended up with a skewed sense of the importance of computers in people's perceptions of history.
Fixed-Format Options
Although fixed-format responses can sometimes constrain or skew participants' answers, the reality is that researchers tend to use them more often than not. This decision is largely practical; fixed-format responses allow for more efficient data collection from a much larger sample. (Imagine the chore of having to hand-code 2,000 essays!) But once you have decided on this option for your questionnaire, the decision process is far from over. In this section, we discuss three possibilities you can use to construct a fixed-format response scale.
True/False
One option is to ask questions using a true/false format, which asks participants to indicate whether they endorse a statement. For example:

"I attended church last Sunday"    True    False
"I am a U.S. citizen"    True    False
"I am in favor of abortion"    True    False
This last example may strike you as odd, and in fact illustrates an important point about the use of true/false formats: They are best used for statements of fact rather than attitudes. It is relatively straightforward to indicate whether you attended church or whether you are a U.S. citizen. However, people's attitudes toward abortion are often complicated—one can be "pro-choice" but still support some common-sense restrictions, or "pro-life" but support exceptions (e.g., in cases of rape). The point is that a true/false question cannot even come close to capturing this complexity. However, for survey items that involve simple statements of fact, the true/false format can be a good option.
Multiple Choice
A second option is to use a multiple-choice format, which asks participants to select from a set of predetermined responses.

"Which of the following is your favorite fast-food restaurant?"
a. McDonald's
b. Burger King
c. Wendy's
d. Taco Bell

"Who did you vote for in the 2008 presidential election?"
a. John McCain
b. Barack Obama

"How do you travel to work on most days? (Select all that apply.)"
a. drive alone
b. carpool
c. take public transportation

As you can see in these examples, multiple-choice questions give you quite a bit of freedom in both the content and the response scaling of your questions. You can ask participants either to select one answer or, as in the last example, to select all applicable answers. You can cover everything from preferences (e.g., favorite fast-food restaurant) to behaviors (e.g., how you travel to work).
In addition to assessing preferences or behaviors, as in the preceding examples, multiple-choice questions are often used to examine knowledge and abilities. For example, intelligence and achievement tests utilize multiple-choice questions to assess the cognitive and academic abilities of individuals. In these cases, there is only one correct response choice and three to four incorrect or "distracter" response choices. Most research indicates that there should be at least three but no more than four incorrect response choices; providing more than four can make the question too difficult. It is also important that the correct response choice not obviously differ from the incorrect choices. Thus, all response choices, both correct and incorrect, should be approximately the same length and be plausible choices. If the incorrect response choices are obviously wrong, this will make the item too easy, which will affect its validity. In addition, including obviously wrong answers allows those individuals who do not know the answer to deduce the correct response.
Regardless of whether you are developing multiple-choice questions to assess behaviors, knowledge, or abilities, all questions should be clear, unambiguous, and brief so that they can be easily understood. In addition, as with all types of question formats, it is best to avoid negative questions such as, "Which of the following is not correct?" because they can be confusing to test-takers and cause the question to be invalid (i.e., not measure what it is supposed to be measuring). Finally, it is best practice to avoid response choices that include "All of the above" or "None of the above." Most individuals know that when such response choices are provided, they are usually the correct answer.
You may have already spotted a downside to multiple-choice formats. Whenever you provide a set of responses, you are restricting participants to those choices. This is the problem that Pennebaker and colleagues encountered when asking people about the most significant events of the last 50 years. In each of the preceding examples, the categories fail to capture all possible responses. What if your favorite restaurant is In-N-Out Burger? What if you voted for Ralph Nader? What if you telecommute or ride your bicycle to work? There are two relatively easy ways to avoid (or at least minimize) this problem. First, plan carefully when choosing the response options. During the design process, it helps to brainstorm with other people to ensure you are capturing the most likely responses. However, in many cases, it is almost impossible to provide every option that people might think of. The second solution is to provide an "other" response to your multiple-choice question. This allows people to write in an option that you neglected to include. For example, our last question about traveling to work could be rewritten as follows:
"How do you travel to work on most days? (Select all
that apply.)"
a. drive alone
b. carpool
c. take public transporta�on
d. other (please specify):__________________
This way, people who telecommute, bicycle, or even ride
their trained pony to work will have a way to respond
rather than skipping the ques�on. And, if you start
to no�ce a pa�ern in these write-in responses (e.g., 20%
of people adding "bicycle"), then you will have gained
valuable knowledge to improve the next itera�on of
the survey.
Rating Scales
Last, but certainly not least, another option is to use a rating scale format, which asks participants to respond on a scale that represents a continuum.

"Sometimes it is necessary to sacrifice liberty in the name of security."
1 = strongly agree, 2 = agree, 3 = neither agree nor disagree, 4 = disagree, 5 = strongly disagree

"I would vote for a candidate who supported the death penalty."
1 = always, 2 = often, 3 = about half of the time, 4 = seldom, 5 = never

"The political party in power right now has messed things up."
1 = strongly agree, 2 = agree, 3 = neither agree nor disagree, 4 = disagree, 5 = strongly disagree

This format is well suited to capturing attitudes and opinions and, in fact, is one of the most common approaches to attitude research. Rating scales are easy to score, and they give participants some flexibility in indicating their agreement with or endorsement of the questions. As a researcher, you have two critical decisions to make about the construction of rating scale items. Both have implications for how you will analyze and interpret your results.
First, you'll need to decide on the anchors, or labels, for your response scale. Rating scales offer a good deal of flexibility in these anchors, as you can see in the preceding examples. You can frame questions in terms of "agreement" with a statement or "likelihood" of a behavior; alternatively, you can customize the anchors to match your question (e.g., "not at all necessary"). Scales that use anchors of strongly agree and strongly disagree are also referred to as Likert scales. At a fairly simple level, the choice of labels affects the interpretation of the results. For example, if you asked the "political party" question, you would have to be aware that the anchors were phrased in terms of agreement with the statement. In discussing these results, you would be able to discuss how much people agreed with the statement, on average, and whether agreement correlated with other factors. If this seems like an obvious point, you would be amazed at how often researchers (or the media) will take an item like this and spin the results to talk about the "likelihood of voting" for the party in power—confusing an attitude with a behavior! So, in short, make sure you are being honest when presenting and interpreting research data.
At a more conceptual level, you need to decide whether the anchors for your rating scale make use of a bipolar scale, which has polar opposites at its endpoints and a neutral point in the middle, or a unipolar scale, which assesses the presence or absence of a single construct. The difference between these scales is best illustrated by an example:

Bipolar: How would you rate your current mood?
1 = very happy, 2 = happy, 3 = slightly happy, 4 = neither happy nor sad, 5 = slightly sad, 6 = sad, 7 = very sad

Unipolar: How would you rate your current mood?
1 = not at all sad, 2 = slightly sad, 3 = moderately sad, 4 = very sad, 5 = completely sad
1 = not at all happy, 2 = slightly happy, 3 = moderately happy, 4 = very happy, 5 = completely happy

With the bipolar option, participants are asked to place themselves on a continuous scale somewhere between sad and happy, which are polar opposites. The assumption in using a bipolar scale is that the endpoints represent the only two options—participants can be sad, happy, or somewhere in between. In contrast, with the unipolar option, participants are asked to rate themselves on a continuous scale, indicating their level of either sadness or happiness. The assumption in using a pair of unipolar scales is that it is possible to experience varying degrees of each item: For example, participants can be moderately happy but also a little bit sad. The decision to use a bipolar or a unipolar scale comes down to the context. What is the most logical way to think about these constructs? What have previous researchers done?
In the 1970s, Sandra Lipsitz Bem revolutionized the way researchers thought about gender roles by arguing against a bipolar approach. Previously, gender role identification had been measured on a bipolar scale from "masculine" to "feminine," the assumption being that a person could be one or the other. Bem (1974) argued instead that people could easily have varying degrees of masculine and feminine traits. Her scale, the Bem Sex Role Inventory, asks respondents to rate themselves on a set of 60 unipolar traits. Someone with mostly feminine and hardly any masculine traits would be described as "feminine." Someone with high ratings on both masculine and feminine traits would be described as "androgynous." And, someone with low ratings on both masculine and feminine traits would be described as "undifferentiated." You can view and complete Bem's scale online at https://openpsychometrics.org/tests/OSRI/.
The second critical decision in constructing a rating scale item is to decide on the number of points in the response scale. You may have noticed that all the examples in this section have an odd number of points (e.g., five or seven). This is usually preferable for rating scale items because the middle of the scale (e.g., "3" or "4") allows respondents to give a neutral, middle-of-the-road answer. That is, on a scale from strongly disagree to strongly agree, the midpoint can be used to indicate "neither" or "I'm not sure." However, in some cases, you may not want to allow a neutral option in your scale. By using an even number of points (e.g., four or six), you can essentially force people to either agree or disagree with the statement; this type of scaling is referred to as forced choice.

So how many points should your scale have? As a general rule, more points will translate into more variability in responses—the more choice people have, the more likely they are to distribute their responses among those choices. From a researcher's perspective, the big question is whether this variability is meaningful. For example, if you wanted to assess college students' attitudes about a student fee increase, opinions will likely vary depending on the size of the fee and the ways in which it will be used. Thus, a five- or seven-point scale would be preferable to a two-point (yes or no) scale. However, past a certain point, increasing the scale range ceases to be linked to meaningful variation in attitudes. In other words, the difference between a 5 and a 6 on a seven-point scale is fairly intuitive for your participants to grasp. But what is the real difference between an 80 and an 81 on a 100-point scale? When scales become too large, you risk introducing another source of error variance as participants impose their interpretations on the scaling. In sum, more points do not always translate into a better scale.
Back to our question: How many points should you have? The ideal compromise supported by many statisticians is to use a seven-point scale for bipolar scales. The reason has to do with the differences between scales of measurement. As you'll remember from our discussion in Chapter 2, the way variables are measured has implications for data analyses. For the most popular statistical tests to be legitimate, variables need to lie on either an interval scale (i.e., with equal intervals between points) or a ratio scale (i.e., with a true zero point). Based on mathematical modeling research, statisticians have concluded that the variability generated by a seven-point scale is most likely to mimic an interval scale (see, e.g., Nunnally, 1978). So a seven-point scale is often preferable because it allows us the most flexibility in data analyses. Note, however, that whereas seven-point scales produce reliable results with bipolar scales, unipolar scales tend to perform the best with five-point scales.
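To make the measurement point concrete, the sketch below (hypothetical responses, written in Python, which the textbook itself does not use) codes answers on the seven-point bipolar mood scale from the previous section as the numbers 1 through 7 and averages them; computing a mean like this is only defensible if we treat adjacent scale points as roughly equally spaced, which is exactly the interval-scale assumption discussed above.

```python
# A minimal sketch: coding responses on the seven-point bipolar mood scale as
# the numbers 1-7 and averaging them. Averaging implicitly treats adjacent
# points as equally spaced, i.e., as an interval scale.

ANCHORS = {
    "very happy": 1, "happy": 2, "slightly happy": 3,
    "neither happy nor sad": 4,
    "slightly sad": 5, "sad": 6, "very sad": 7,
}

# Hypothetical responses from four participants
responses = ["happy", "very happy", "neither happy nor sad", "slightly sad"]

scores = [ANCHORS[r] for r in responses]
mean_score = sum(scores) / len(scores)
print(scores, mean_score)  # [2, 1, 4, 5] 3.0
```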
Finalizing the Questionnaire
Once you have finished constructing the questionnaire items, one last important step remains before beginning to collect data. This section discusses a few guidelines for assembling the items into a coherent questionnaire. The main issues at this stage are the order of the individual items; how many items to include; whether to use open-ended, fixed-format, or multiple-choice questions; and how to write the instructions in clear and concise language.
First, keep in mind that the first few questions will set the tone for the rest of the questionnaire. It is best to start with questions that are both interesting and nonthreatening to help ensure that respondents complete the questionnaire with open minds. For example:

BAD OPENING: "Do you agree that your child's teacher is incompetent?" (threatening and also a leading question)
BETTER OPENING: "How would you rate the performance of your child's teacher?"

BAD OPENING: "Would you support a 1% sales tax increase?" (boring)
BETTER OPENING: "How do you feel about raising taxes to help fund education?"
Second, strive whenever possible to have continuity in the different sections of your questionnaire. Imagine you are constructing a survey to give to college freshmen—you might have questions on family background, stress levels, future plans, campus engagement, and so on. It is best to have the questions grouped by topic on the survey. So, for instance, students would fill out a set of questions about future plans on one page and then a set of questions about campus engagement on another page. This approach makes it easier for participants to progress through the questions without having to mentally switch between topics.
Third, remember that individual questions are always read in context. This means that if you start your college student survey with a question about plans for the future and then ask about stress, respondents will likely have their future plans in mind when they think about their level of stress. Another example comes from a graduate program that administered a gigantic survey packet to every student enrolled in its Introductory Psychology course. One year, a faculty member included a measure of identity, asking participants to complete the statements "I am ______" and "I am not ______." As the graduate students started to analyze data from this survey, they found that an astonishing 60% of students had filled in the blank with "I am not a homosexual." This response seemed pretty unusual until they realized that the questionnaire immediately preceding this one in the packet was a measure of prejudice toward gay and lesbian individuals. So, as these students completed the identity measure, they had homosexuality on their minds and felt compelled to point out that they were not homosexual. This illustrates once again that results can be skewed by the context.
Finally, once you have assembled a draft version of your questionnaire, do a test run. This test run, called pilot testing, involves giving the questionnaire to a small sample of people, getting their feedback, and making any necessary changes. One of the best ways to pilot test is to find a patient group of friends to complete your questionnaire because this group will presumably be willing to give more extensive feedback. Another effective way to pilot test a questionnaire is to administer it to individuals in the target group. In soliciting their feedback, you should ask questions like the following:

Was anything confusing or unclear?
Was anything offensive or threatening?
How long did the questionnaire take you to complete?
Did it get repetitive or boring? Did it seem too long?
Were there particular questions that you liked or disliked? Why?

The answers to these questions will give you valuable information to revise and clarify your questionnaire before devoting the resources for a full round of data collection. In the next section, we turn our attention to the question of how to find and select participants for this stage of the research.
Research: Thinking Critically
"Beautiful People Convey Personality Traits Better During First Impressions"
Medical News Today

A new University of British Columbia study has found that people identify the personality traits of people who are physically attractive more accurately than others during short encounters.

The study, published in the December [2010] edition of Psychological Science, suggests people pay closer attention to people they find attractive, and is the latest scientific evidence of the advantages of perceived beauty. Previous research has shown that individuals tend to find attractive people more intelligent, friendly, and competent than others.
The goal of the study was to determine whether a person's attractiveness impacts others' ability to discern their personality traits, says Prof. Jeremy Biesanz, UBC Dept. of Psychology, who coauthored the study with PhD student Lauren Human and undergraduate student Genevieve Lorenzo.

For the study, researchers placed more than 75 male and female participants into groups of five to 11 people for three-minute, one-on-one conversations. After each interaction, study participants rated partners on physical attractiveness and five major personality traits: openness, conscientiousness, extraversion, agreeableness, and neuroticism. Each person also rated his or her own personality.

Researchers were able to determine the accuracy of people's perceptions by comparing participants' ratings of others' personality traits with how individuals rated their own traits, says Biesanz, adding that steps were taken to control for the positive bias that can occur in self-reporting.

Despite an overall positive bias towards people they found attractive (as expected from previous research), study participants identified the "relative ordering" of personality traits of attractive participants more accurately than others, researchers found.

"If people think Jane is beautiful, and she is very organized and somewhat generous, people will see her as more organized and generous than she actually is," says Biesanz. "Despite this bias, our study shows that people will also correctly discern the relative ordering of Jane's personality traits—that she is more organized than generous—better than others they find less attractive."

The researchers say this is because people are motivated to pay closer attention to beautiful people for many reasons, including curiosity, romantic interest, or a desire for friendship or social status. "Not only do we judge books by their covers, we read the ones with beautiful covers much closer than others," says Biesanz, noting the study focused on first impressions of personality in social situations, like cocktail parties.

Although participants largely agreed on group members' attractiveness, the study reaffirms that beauty is in the eye of the beholder. Participants were best at identifying the personalities of people they found attractive, regardless of whether others found them attractive.

According to Biesanz, scientists spent considerable efforts a half-century ago seeking to determine what types of people perceive personality best, to largely mixed results. With this study, the team chose to investigate this long-standing question from another direction, he says, focusing not on who judges personality best, but rather whether some people's personalities are better perceived.
Think about it:

1. Suppose the following questions were part of the questionnaire given after the 3-minute one-on-one conversations in this study. Based on the goals of the study and the rules discussed in this chapter, identify the problem with each of the following questions and suggest a better item.

a. Jane is very neat.
1 = strongly agree, 2 = agree, 3 = neither agree nor disagree, 4 = disagree, 5 = strongly disagree
main problem:
better item:

b. Jane is generous and organized.
1 = strongly agree, 2 = agree, 3 = neither agree nor disagree, 4 = disagree, 5 = strongly disagree
main problem:
better item:

c. Jane is extremely attractive.    TRUE    FALSE
main problem:
better item:

2. What are the strengths and weaknesses of using a fixed-format questionnaire in this study versus open-ended responses?
3. The researchers state that they took steps to control for the "positive bias that can occur in self-reporting." How might social desirability influence the outcome of this particular study? What might the researchers have done to reduce the effect of social desirability?

George, R. (2010, December 31). Beautiful people convey personality traits better during first impressions. Medical News Today. Retrieved from http://www.medicalnewstoday.com/articles/212245.php
4.3 Sampling From the Population
By now, you should have a good feel for how to construct survey items. Once you have finalized your measures, the next step is to find a group of people to fill out the survey. But where do you find this group? And how many of them do you need? On the one hand, you want as many people as possible in order to capture the full range of attitudes and experiences. On the other hand, researchers have to conserve time and other resources, which often means choosing a smaller sample of people. In this section, we will examine the strategies researchers can use in selecting samples for their studies.

Researchers refer to the entire collection of people who could possibly be relevant for a study as the population. For example, if you were interested in the effects of prison overcrowding in this country, you would want to study the population of prisoners in the United States. If you wanted to study voting behavior in the next U.S. presidential election, your population would be United States residents eligible to vote. And if you wanted to know how well college students cope with the transition from high school, your population would include every college student who graduated from high school and is now enrolled in any college in the country.
You may have spotted an obvious practical complication with these populations. How on earth are you going to get every college student, much less every prisoner, in the country to fill out your questionnaire? You can't; instead, researchers will collect data from a sample, a subset of the population. Instead of trying to reach all prisoners, you might sample inmates from a handful of state prisons. Rather than attempt to survey all college students in the country, researchers might restrict their studies to a collection of students at one university.
The goal in choosing a sample for quantitative research is to make it as representative as possible of the larger population, although this is not always practical. That is, if you choose students at one university, they need to be reasonably similar to college students elsewhere in the country. If the phrase "reasonably similar" sounds vague, this is because the basis for evaluating a sample varies depending on the hypothesis and the key variables of your study. For example, if you wanted to study the relationship between family income and stress levels, you would need to make sure that your sample mirrored the population in the distribution of income levels. Thus, a sample of students from a state university might be a better choice than students from, say, Harvard (which costs about $50,000 per year). On the other hand, if your research question dealt with the pressures faced by students in selective private schools, then Harvard students could be a representative sample for your study.
Figure 4.1 is a conceptual illustration of both a representative and a nonrepresentative sample, drawn from a larger population. The population in this case consists of 144 individuals, split evenly between Xs and Os. Thus, we would want our sample to come as close as possible to capturing this 50/50 split. The sample of 20 individuals on the left is representative of the population because it is split evenly between Xs and Os. But the sample of 20 individuals on the right is nonrepresentative because it contains 75% Xs. Because the right-hand sample contains far fewer Os than we would expect from the population, it does not accurately represent that population. This failure of the sample to represent the population is also referred to as sampling bias.
Figure 4.1: Representative and nonrepresentative samples of a
population
Stra�fied random sampling allows researchers to include all
subgroups of a popula�on in a study.
iStockphoto/Thinkstock
So, where do these samples come from? As a researcher, you have two broad categories of sampling strategies at your disposal: probability sampling and nonprobability sampling.

Probability Sampling

Probability sampling is used when each person in the population has a known chance of being in the sample. This is possible only in cases where you know the exact size of the population. For instance, the 2010 population of the United States was 308,745,538 (United States Census Bureau, 2010). If you were to have selected a U.S. resident at random, each resident would have had a one in 308,745,538 chance of being selected. Whenever you have information about the total population, probability-sampling strategies are the most powerful approach because they greatly increase the odds of getting a representative sample. Within this broad category of probability sampling are three specific strategies: simple random sampling, stratified random sampling, and cluster sampling.
Simple Random Sampling

Simple random sampling, the most straightforward approach, involves randomly picking study participants from a list of everyone in the population. The term for this list is a sampling frame (e.g., imagine a list of every resident of the United States). To have a truly representative random sample, several criteria need to be met: You must have a sampling frame, you must choose from it randomly, and you must have a 100% response rate from those you select. (As we discussed in Chapter 2, it can threaten the validity of your hypothesis test if people drop out of your study.)
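To make the mechanics concrete, here is a minimal sketch of drawing a simple random sample from a sampling frame with Python's standard library. The frame of 1,000 placeholder resident IDs and the sample size of 10 are assumptions for illustration only; the text does not specify any particular tool.

```python
import random

# Hypothetical sampling frame: in practice this would be the full list of
# everyone in the population (e.g., every resident or every enrolled student).
sampling_frame = [f"resident_{i}" for i in range(1, 1001)]

random.seed(42)  # fixed seed so the example is reproducible
sample = random.sample(sampling_frame, k=10)  # every member has an equal chance
print(sample)
```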
Stratified Random Sampling

Stratified random sampling, a variation of simple random sampling, is used when subgroups of the population might be left out of a purely random sampling process. Imagine a city with a population that is 80% Caucasian, 10% Hispanic, 5% African American, and 5% Asian. If you were to choose 100 residents at random, the chances are good that your sample would consist mostly of Caucasian residents, with few or no members of the smaller ethnic groups. As a result, you would inadvertently ignore the perspective of many ethnic minority residents. To prevent this problem, researchers use stratified random sampling: breaking the sampling frame into subgroups and then sampling a random number from each subgroup. In the preceding city example, you could divide your list of residents into four ethnic groups and then pick a random 25 from each of these groups. The result would be a sample of 100 people that captured opinions from each ethnic group in the population.
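The city example can be sketched in a few lines of Python. The 80/10/5/5 split and the 25 drawn per group come from the example above; the frame of 10,000 placeholder residents is an assumption made only so the code runs.

```python
import random

random.seed(42)

# Hypothetical sampling frame, grouped into the four strata from the city example.
strata = {
    "Caucasian": [f"C_{i}" for i in range(8000)],
    "Hispanic": [f"H_{i}" for i in range(1000)],
    "African American": [f"AA_{i}" for i in range(500)],
    "Asian": [f"A_{i}" for i in range(500)],
}

# Draw a random 25 from each subgroup, yielding a sample of 100
# that includes every ethnic group in the population.
stratified_sample = {
    group: random.sample(members, k=25) for group, members in strata.items()
}

for group, members in stratified_sample.items():
    print(group, len(members))
```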
Cluster Sampling

Cluster sampling, another variation of random sampling, is used when you do not have access to a full sampling frame (i.e., a full list of everyone in the population). Imagine that you wanted to do a study of how cancer patients in the United States cope with their illness. Because there is not a list of every cancer patient in the country, you have to get a little creative with your sampling. The best way to think about cluster sampling is as "samples within samples." Just as with stratified sampling, you divide the overall population into groups; however, cluster sampling is different in that you are dividing into groups based on more than one level of analysis. In the cancer example, you could start by dividing the country into regions, then randomly selecting cities from within each region, then randomly selecting hospitals from within each city, and finally randomly selecting cancer patients from each hospital. The result would be a random sample of cancer patients from, say, Phoenix, Miami, Dallas, Cleveland, Albany, and Seattle; taken together, these patients would constitute a fairly representative sample of cancer patients around the country.
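Here is a minimal sketch of the "samples within samples" logic for the cancer-patient example. The regions, cities, hospitals, and patient lists are all hypothetical stand-ins; only the nesting of the random draws reflects the procedure described above.

```python
import random

random.seed(42)

# Hypothetical nested frame: region -> cities -> hospitals -> patients
frame = {
    "West":  {"Phoenix": {"Hospital A": [f"pt_{i}" for i in range(50)],
                          "Hospital B": [f"pt_{i}" for i in range(50, 90)]},
              "Seattle": {"Hospital C": [f"pt_{i}" for i in range(90, 140)]}},
    "South": {"Dallas":  {"Hospital D": [f"pt_{i}" for i in range(140, 200)]},
              "Miami":   {"Hospital E": [f"pt_{i}" for i in range(200, 260)]}},
}

sample = []
for region, cities in frame.items():
    city = random.choice(list(cities))            # randomly select a city within the region
    hospital = random.choice(list(cities[city]))  # then a hospital within that city
    patients = cities[city][hospital]
    sample.extend(random.sample(patients, k=5))   # finally, patients within that hospital

print(len(sample), "patients sampled:", sample[:3], "...")
```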
Nonprobability Sampling

The other broad category of sampling strategies is known as nonprobability sampling. These strategies are used in the (remarkably common) case in which you do not know the odds of any given individual being in the sample. This is an obvious shortcoming: if you do not know the exact size of the population and do not have a list of everyone in it, there is no way to know whether your sample is representative! But despite this limitation, researchers use nonprobability sampling on a regular basis.

Nonprobability sampling is used in qualitative, quantitative, and mixed methods research whose focus is on selecting relatively small samples in a purposeful manner rather than selecting samples that are representative of the entire population. Unlike probability sampling, nonprobability sampling does not rely on randomly selected, stratified samples that allow for generalization. Rather, the procedures used to select samples are purposeful; that is, the samples are selected to obtain information-rich cases that yield in-depth insights. The sampling techniques used for quantitative and qualitative research probably make up one of the biggest differences between the two methods. The following sections will discuss some of the most common nonprobability strategies. All of the nonprobability strategies described (except convenience sampling) are considered categories of purposive sampling, or sampling with a purpose. Convenience sampling is considered a form of so-called accidental or haphazard (serendipitous) sampling, since cases are selected based on ready availability. Even so, convenience sampling can be purposeful, in that researchers do their best to recruit in ways that are convenient and yet attract individuals who meet meaningful criteria.
In many cases, it is not possible to obtain a sampling frame. When researchers study rare or hard-to-reach populations, or investigate potentially stigmatizing conditions, they often recruit by word of mouth. The term for this is snowball sampling: imagine a snowball rolling down a hill, picking up more snow (or participants) as it goes. If you wanted to study how often homeless people took advantage of social services, you would be hard-pressed to find a sampling frame that listed the homeless population. Instead, you could recruit a small group of homeless people and ask each of them to pass the word along to others, and so on. The resulting sample is unlikely to be representative, but researchers often have to compromise for the sake of obtaining access to a population.
One of the most popular nonprobability strategies is known as convenience sampling, or simply enrolling people who show up for the study. Any time you see the results of a viewer poll on your favorite 24-hour news station, the results are likely based on a convenience sample. CNN and Fox News do not randomly select from a list of their viewers; they post a question on-screen or online, and people who are motivated enough to respond will do so. For that matter, a large majority of psychology research studies are based on convenience samples of undergraduate college students. Often, experimenters in psychology departments advertise their studies on a bulletin board or website, and students sign up for studies for extra cash or to fulfill a research requirement for a course. Students often pick a particular study based on whether it fits their busy schedules or whether the advertisement sounds interesting.

Another example of convenience sampling is a researcher seeking to collect information from human resources managers to study the issue of bullying in organizations. The researcher might use a database of potential participants who belong to one or more chapters of the Society for Human Resource Management (SHRM), an organization for human resources professionals. In this example, the convenience component of the sample is the use of a body of existing data; although these data are not representative of all human resources managers or all organizations, they are a useful proxy for a broad sample of human resource managers across industries.
Other popular nonprobability strategies include extreme or deviant case sampling, typical case sampling, heterogeneous sampling, expert sampling, criterion sampling, and theory-based sampling. Whatever strategy you decide on, keep in mind that every sampling decision you make as a researcher carries its own strengths and weaknesses.
Extreme or deviant case sampling is used when we want information-rich data on unusual or special cases. For example, if we are interested in studying university diversity plans, it would be important to examine plans that work exceptionally well, as well as plans that have high expectations but are not working for some reason or another. The focus is on examining outstanding successes and failures to learn lessons about those conditions.
On the other hand, sometimes researchers are not interested in learning about unusual cases but rather want to learn about typical cases. Typical case sampling involves sampling the most frequent or "normal" case. It is often used to describe typical cases to people unfamiliar with a setting, program, or process. For example, a typical case sample may be used to study the practicum experiences of psychology students from universities that are rated average. One of the biggest drawbacks of this method involves the difficulty of knowing how to identify or define a typical or normal case.
Heterogeneous sampling, which is also called maximum variation sampling, may include both extreme and typical cases. It is used to select a wide variety of cases in relation to the phenomenon being investigated. The idea here is to obtain a sample that includes diverse characteristics across multiple dimensions. For example, if researchers are interested in examining the experiences of community health clinic patients, they may wish to conduct focus groups and select 10 different groups that represent various demographics from several health clinics in the area. Heterogeneous sampling is beneficial when it is desirable to view patterns across a diverse set of individuals. It is also valuable in describing experiences that may be central or core to most individuals. Heterogeneous sampling can be problematic in small samples, though, because it is difficult to obtain a wide variety of cases using this method.
Expert sampling involves sampling a panel of individuals who have known expertise (knowledge and training) in a particular area. While expert sampling can be used in both qualitative and quantitative research, it is used more often in qualitative research. Expert sampling involves a process of identifying individuals with known expertise in an area of interest, obtaining informed consent from each expert, and then collecting information from them either individually or as a group.
The goal of criterion sampling is to select cases "that meet some predetermined criterion of importance" (Patton, 2002, p. 238). A criterion could be a particular illness or experience that is being investigated, or even a program or a situation. For example, criterion sampling could be used to investigate a range of topics: the experiences of individuals who have attempted suicide, why students are frequently absent from online courses, or cases that have exceeded the standard waiting time at a doctor's office. The idea with this method is that the participants meet a particular criterion of interest.
Theory-based sampling is a version of criterion sampling that focuses on obtaining cases that represent theoretical constructs. As Patton (2002) discussed, theory-based sampling involves the researcher obtaining "sampling incidents, slices of life, time periods, or people on the basis of their potential manifestation or representation of important theoretical constructs" (p. 238). This type of sampling arose from grounded theory research and follows a more deductive or theory-testing approach. As Glaser (1978) described, theory-based sampling is "the process of data collection for generating theory whereby the analyst jointly collects, codes, and analyzes his data and decides which data to collect next and where to find them" (p. 36). Thus, data collection is driven by the emerging theory, and participants are selected based on their knowledge of the topic.
Choosing a Sampling Strategy

Although quantitative researchers strive for representative samples, there is no such thing as a perfectly representative one. There is always some degree of sampling error, defined as the degree to which the characteristics of the sample differ from the characteristics of the population. Instead of aiming for perfection, then, researchers aim for an estimate of how far from perfection their samples are. These estimates are known as the error of estimation, or the degree to which the data from the sample are expected to deviate from the population as a whole.

One of the main advantages of a probability sample is that we are able to calculate these errors of estimation (or margins of error). In fact, you have likely encountered errors of estimation every time you have seen the results of an opinion poll. For example, CNN may report that "Candidate A is leading the race with 60% of the vote, ± 3%." This means Candidate A's percentage in the sample is 60%, but based on statistical calculations, her true percentage in the population most likely falls between 57% and 63%. The smaller the error (3% in this example), the more closely the results from the sample match the population. Naturally, researchers conducting these opinion polls want the error of estimation to be as small as possible; imagine how unpersuasive it would be to learn that "Candidate A has a 10-point lead, ± 20 points." In general, these errors are minimized when three conditions are met: The overall population is smaller, the sample itself is larger, and there is less variability in the sample data. When samples are created using a probability method, all of this information is available because these methods require knowing the population.
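As a rough illustration of where a figure like "± 3%" comes from, the sketch below uses the standard 95% margin-of-error formula for a sample proportion, ME = z * sqrt(p(1 - p) / n). The sample size of 1,000 respondents is an assumption chosen because it yields roughly the ±3-point margin in the CNN example; real polls also adjust for design effects and other complications not shown here.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# Candidate A polls at 60% in a hypothetical sample of 1,000 respondents.
me = margin_of_error(p=0.60, n=1000)
print(f"60% +/- {me:.1%}")  # roughly +/- 3 percentage points
```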
If probability sampling is so powerful, why are nonprobability strategies so popular? One reason is that convenience samples are more practical; they are cheaper, easier, and almost always possible to enroll with relatively few resources because you can avoid the costs of large-scale sampling. A second reason is that convenience is often a good starting point for a new line of research. For example, if you wanted to study the predictors of relationship satisfaction, you could start by testing hypotheses in a controlled setting using college student participants, and then you could extend your research to the study of adult married couples. Finally, and relatedly, in many types of qualitative research, it is acceptable to have a nonrepresentative sample because you do not need to generalize your results.

If you want to study the prevalence of alcohol use in college students, it may be perfectly acceptable to use a convenience sample of college students, although even in this case you would have to keep in mind that you were studying drinking behaviors among students who volunteered to complete a study on drinking behaviors.
There are also cases, however, where it is critical to use probability sampling despite the extra effort it requires. Specifically, researchers use probability samples any time it is important to generalize and any time it is important to predict the behavior of a population. The best example for understanding these criteria is to think of political polls. In the lead-up to an election, each campaign is invested in knowing exactly what the voting public thinks of its candidate. In contrast to a CNN poll, which is based on a convenience sample of viewers, polls conducted by a campaign will be based on randomly selected households from a list of registered voters. The resulting sample is much more likely to be representative, much more likely to tell the campaign how the entire population views its candidate, and, therefore, much more likely to be useful.
Determining a Sufficient Sample Size

Several factors come into play when determining what a sufficient sample size is for a research study. Probably the most important factor is whether the study is a quantitative or a qualitative one. In quantitative research, there is one basic rule for sample size: the larger, the better. This is because a larger sample will be more representative of the population being studied. Having said that, even small samples using quantitative data can yield meaningful correlations if the researcher makes efforts to achieve a representative sample or a meaningful convenience sample.

How does one make a practical decision, though, regarding the exact sample size a particular research situation requires? Gay, Mills, and Airasian (2009) provided the following guidelines for determining sufficient sample size based on the size of the population:

For populations that include 100 or fewer individuals, the entire population should be sampled.
For populations that include 400–600 individuals, 50% of the population should be sampled.
For populations that include 1,500 individuals, 20% of the population should be sampled.
For populations larger than 5,000, about 8% of the population should be sampled.

Thus, as we can see, the larger the population, the smaller the percentage required to obtain a representative sample. This does not mean that the sample shrinks as the population increases; the absolute number of participants required still grows with larger populations, even as the percentage falls.
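These guidelines can be expressed as a simple lookup. The sketch below is one hedged reading of the brackets above; how to treat population sizes that fall between the stated benchmarks (e.g., 101–399 or 1,501–4,999) is not specified in the text, so those cutoffs are assumptions noted in the comments.

```python
def suggested_sample_size(population: int) -> int:
    """Rough sample-size suggestion following the Gay et al. (2009) brackets.

    Populations between the benchmarks named in the text are assigned to the
    nearest stated rule -- an assumption, since the guidelines only list a few
    benchmark sizes.
    """
    if population <= 100:
        return population                   # sample the entire population
    elif population <= 600:
        return round(population * 0.50)     # ~50% (text gives a 400-600 bracket)
    elif population <= 5000:
        return round(population * 0.20)     # ~20% (text gives a 1,500 benchmark)
    else:
        return round(population * 0.08)     # ~8% for populations above 5,000

for n in (80, 500, 1500, 10000):
    print(n, "->", suggested_sample_size(n))
```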
Although larger sample sizes are generally preferred in quantitative studies, the size of an adequate sample also depends on the similarities and dissimilarities among members of the population. If the population is fairly diverse on the construct being measured, then a larger sample will likely be required in order to obtain a representative sample. Populations whose members have similar characteristics will require smaller samples. Researchers have developed fairly sophisticated methods for determining sufficient sample sizes through the use of statistical power analysis; however, such methods are beyond the scope of this book. Gay et al.'s (2009) guidelines are sufficient for most types of research.
As discussed in Chapter 3, recommended samples for qualitative research are typically much smaller than for quantitative research, which strives to be representative of larger populations. In qualitative research, sample sizes are not determined by numerical rules but rather by how well the variables of interest are represented (Houser, 2009). Unlike quantitative approaches, which require large samples, qualitative techniques do not have firm rules regarding sample size. Thus, sample size depends more on what the researcher wants to know, the purpose of the inquiry, what the findings will be useful for, how credible they will be, and what can be done with available time and resources. Qualitative research can be very costly and time-consuming, so choosing information-rich cases will yield the greatest return on investment. As Patton (2002) noted, "The validity, meaningfulness, and insights generated from qualitative inquiry have more to do with the information-richness of the cases selected and the observational/analytical capabilities of the researcher than with sample size" (p. 245).
Nonresponse Bias in Survey Research

Sometimes participants do not submit a survey or do not fill it out completely. When this occurs, the result can be nonresponse bias, which can affect the size and characteristics of the sample, as well as the external validity of the study. External validity refers to how well the results obtained from a sample can be extended to make predictions about the entire population; it will be discussed further in Chapter 5. Nonresponses are particularly problematic if a large number of participants, or a specific group of participants, fails to respond to particular questions. For example, perhaps all females skip a certain question, or several participants skip certain questions because they are too personal. This omission creates bias not only in the characteristics of the sample (e.g., the sample may no longer be representative) but also in the size of the sample that remains for the study.

Nonresponses can occur for many reasons, including the survey being too long, the questions being worded awkwardly, the survey topic being uninteresting, or, in the case of Web-based surveys, the participants not knowing how to access or log into the website to complete the survey.

There are a few ways to minimize the threat of nonresponse bias. These include increasing the sample size to account for the possibility of nonresponses, making sure that survey directions and questions are worded clearly, making sure the survey is not too long, providing rewards or incentives for completing the survey, sending out reminders to complete the survey, and providing a cover letter that describes the exact reasons for conducting the survey.
4.4 Analyzing Survey Data

Once you have designed a survey, chosen an appropriate sample, and collected some data, now comes the fun part. As with the quantitative descriptive designs covered in Chapter 3, the goal of analyzing survey data is to subject your hypotheses to a statistical test. Surveys can be used both to describe and to predict thoughts, feelings, and behaviors. However, since we have already covered the basics of descriptive analysis in Chapter 3, this section will focus on predictive analyses, which are designed to assess the associations between and among variables.

Researchers typically use three approaches to test predictive hypotheses: correlational analyses, chi-square analyses, and regression analyses. Each one has its advantages and disadvantages, and each is most appropriate for a different kind of data. Correlational analysis allows one to examine the strength, direction, and statistical significance of a relationship; chi-square analysis determines whether two nominal variables are independent from or related to one another; and simple linear regression is the method used to "predict" scores for one variable based on another. In this section, we will walk through the basics of each analysis.
Correlational Analysis

In the beginning of this chapter, we encountered an example of a survey research question: What is the relationship between the number of hours that students spend studying and their grades in the class? In this case, the hypothesis claims that we can predict something about a student's grades by knowing how many hours he or she spends studying.

Imagine we collected a small amount of data to test this hypothesis, shown in Table 4.1. (Of course, if we really wanted a good test of this hypothesis, we would need more than 10 people in the sample, but this will do as an illustration.)
Table 4.1: Data for quiz grade/hours studied example

Participant    Hours Studied    Quiz Grade
1              1                2
2              1                3
3              2                4
4              3                5
5              3                6
6              3                6
7              4                7
8              4                8
9              4                9
10             5                9
The Logic of Correlation

The important question here is whether and to what extent we can predict grades based on study time. One common statistic for testing these kinds of hypotheses is a correlation, which assesses the linear relationship between two variables. A stronger correlation between two variables translates into a stronger association between them. Or, to put it differently, the stronger the correlation between study time and quiz grade, the more accurately you can predict grades based on knowing how long the student spends studying.

Before we calculate the correlation between these variables, it is always a good idea to visualize the data on a graph. The scatterplot in Figure 4.2 displays our sample data from the studying/quiz grade study.

Figure 4.2: Scatterplot for quiz grade/hours studied example

Each point on the graph represents one participant. For example, the point in the top right corner represents a student who studied for 5 hours and earned a 9 on the quiz. The two points in the bottom left represent students who studied for only 1 hour and earned a 2 and a 3 on the quiz.

There are two reasons to graph data before conducting statistical tests. First, a graph conveys a general sense of the pattern; in this case, students who study less appear to do worse on the quiz. As a result, we will be better informed going into our statistical calculations. Second, the graph ensures that there is a linear relationship between the variables. This is a very important point about correlations: The math is based on how well the data points fit a straight line, which means nonlinear relationships might be overlooked. Figure 4.3 demonstrates a robust nonlinear finding in psychology regarding the relationship between task performance and physiological arousal. As this graph shows, people tend to perform their best on just about any task when they have a moderate level of arousal.

Figure 4.3: Curvilinear relationship between arousal and performance

When arousal is too high, it is difficult to calm down and concentrate; when arousal is too low, it is difficult to care about the task at all. If we simply ran a correlation with data on performance and arousal, the correlation would be close to zero because the points do not fit a straight line. Thus, it is critical to visualize the data before jumping ahead to the statistics. Otherwise, you risk overlooking an important finding in the data.
Interpreting Coefficients

Once we are satisfied that our data look linear, it is time to calculate our statistics. This is typically done using a computer software program, such as SPSS (IBM), SAS/STAT (SAS), or Microsoft Excel. The number used to quantify our correlation is called the correlation coefficient. This number ranges from –1 to +1 and contains two important pieces of information:

The size of the relationship is based on the absolute value of the correlation coefficient. The farther the coefficient is from zero in either direction, the stronger the relationship between variables. For example, both a +.80 and a –.80 indicate strong relationships.

The direction of the relationship is based on the sign of the correlation coefficient. A +.80 would indicate a positive correlation, meaning that as one variable increases, so does the other variable. A –.80 would indicate a negative correlation, meaning that as one variable increases, the other variable decreases. (Refer back to Section 2.1, Overview of Research Designs, for a review of these two terms.)

So, for example, a +.20 is a weak positive relationship and a –.70 is a strong negative relationship.

When we calculate the correlation for our quiz grade study, we get a coefficient of .96, indicating a strong positive relationship between studying and quiz grade. What does this mean in plain English? Students who spend more hours studying tend to score higher on the quiz.

How do we know whether to get excited about a correlation of .96? As with all of our statistical analyses, we look up this value in a critical value table or, more commonly, let the computer software do this for us. This gives us a p value representing the odds that our correlation is the result of random chance. In this case, the p value is less than .001. This means that the chance of our correlation being a random fluke is less than 1 in 1,000, so we can feel pretty confident in our results. Note, however, that because the p value associated with r depends on sample size, even a tiny, unimportant correlation can be statistically significant when the sample is very large.
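Although the text assumes the calculation is done in a package such as SPSS, SAS, or Excel, the same coefficient and p value can be reproduced from the Table 4.1 data in a few lines of Python; the use of scipy here is an assumption about tooling, not something the chapter prescribes.

```python
from scipy import stats

# Data from Table 4.1
hours = [1, 1, 2, 3, 3, 3, 4, 4, 4, 5]
grade = [2, 3, 4, 5, 6, 6, 7, 8, 9, 9]

r, p = stats.pearsonr(hours, grade)
print(f"r = {r:.2f}, p = {p:.5f}")  # r = 0.96, p < .001, matching the text
```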
We now have all the information we need to report this correlation in a research paper. The standard way of reporting a correlation coefficient includes information about the sample size and p value, as well as the coefficient itself. Our quiz grade study would be reported as shown in Figure 4.4.

Figure 4.4: Correlation coefficient diagram
So where does this leave our hypothesis? We started by predicting that students who spend more time studying would perform better on their quizzes than those who spend less time studying. We then designed a study to test this hypothesis by collecting data on study habits and quiz grades. Finally, we analyzed these data and found a significant, strong, positive correlation between hours studied and quiz grade. Based on this study, our hypothesis has been supported: students who study more have higher quiz grades!

Of course, because this is a correlational study, we are unable to make causal statements. It could be that studying more for an exam helps you to learn more. Or it could be the case that previous low quiz grades make students give up and study less. Or the third variable of motivation could cause students to both study more and perform better on the quizzes. To tease these explanations apart and determine causality, we will need an experimental type of research design, which we will cover in Chapter 5.
Regression Analysis

Correlations are the best tool for testing the linear relationship between pairs of quantitative variables. However, in many cases, we are interested in comparing the influence of several variables at once. Imagine that you wanted to expand the investigation about hours studying and quiz grade by looking at other variables that might predict students' quiz grades. We have already learned that the hours students spend studying positively correlate with their grades. But what about SAT scores? We might predict that students with higher standardized test scores will do better in all of their college classes. Or what about the number of classes that students have previously taken in the subject area? We might predict that increased familiarity with the subject would be associated with higher scores. In order to compare the influence of all three variables, we will use a slightly different analytic approach. Multiple regression analysis is a variation on correlational analysis in which more than one predictor variable is used to predict a single outcome variable. In this example, we would be attempting to predict the outcome variable of quiz scores based on three predictor variables: SAT scores, number of previous classes, and hours studied.

Multiple regression analysis requires an extensive set of calculations; consequently, it is always done using computer software. A detailed look at these calculations is beyond the scope of this book, but a conceptual overview will help you understand the unique advantages of this form of analysis. Essentially, the calculations for multiple regression are based on the correlation coefficients between each of our predictor variables, and between each of these variables and the outcome variable. These correlations for our revised quiz grade study are shown in Table 4.2. If we scan the top row, we can see the correlations between quiz grade and the three predictor variables: SAT score (r = .14), previous classes (r = .24), and hours studied (r = .25). The remainder of the table shows correlations among the predictor variables; for example, hours studied and previous classes correlate at r = .24. When we conduct multiple regression analysis using computer software, the software uses all of these correlations in performing its calculations.
Table 4.2: Correlations for a multiple regression analysis

                    Quiz Grade    SAT Score    Previous Classes    Hours Studied
Quiz Grade          —             .14          .24*                .25*
SAT Score                         —            .02                 –.02
Previous Classes                               —                   .24*
Hours Studied                                                      —
The advantage of multiple regression is that it considers both the individual and the combined influence of the predictor variables. Figure 4.5 is a visual diagram of the individual predictors of quiz grades. The numbers along each line are known as regression coefficients, or beta weights. These are standardized coefficients that allow comparison across predictors. Their values are very similar to correlation coefficients but differ in an important way: They represent the effects of each predictor variable while controlling for the effects of all the other predictors. That is, the value of b = .21 linking hours studied with quiz grades is the independent contribution of hours studied, controlling for SAT scores and previous classes. If we compare the sizes of these regression coefficients, we see that hours spent studying is still the largest predictor of quiz grades (b = .21), compared with both SAT scores (b = .14) and previous classes (b = .19).

Figure 4.5: Predictors of quiz grades
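Multiple regression is normally run on the raw data in statistical software, but the standardized coefficients in Figure 4.5 can also be recovered directly from the correlations in Table 4.2, because the beta weights are the solution to the system "predictor intercorrelation matrix × betas = predictor–outcome correlations." The sketch below (using numpy, an assumed tool) is offered only to show that relationship; it is not the procedure the chapter describes.

```python
import numpy as np

# Predictor intercorrelations from Table 4.2 (order: SAT, previous classes, hours studied)
R_xx = np.array([
    [1.00,  0.02, -0.02],
    [0.02,  1.00,  0.24],
    [-0.02, 0.24,  1.00],
])

# Correlations of each predictor with quiz grade (the outcome), also from Table 4.2
r_xy = np.array([0.14, 0.24, 0.25])

betas = np.linalg.solve(R_xx, r_xy)   # standardized regression coefficients (beta weights)
multiple_R = np.sqrt(betas @ r_xy)    # combined multiple correlation

print("betas (SAT, previous classes, hours):", np.round(betas, 2))  # ~ .14, .19, .21
print("multiple R:", round(float(multiple_R), 2))                   # ~ .34
```

Reassuringly, the betas recovered this way match the values reported in Figure 4.5, and the combined coefficient matches the r = .34 discussed next.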
Even if individual variables have only a small influence, they can add up to a larger combined influence. So, if we were to analyze the predictors of quiz grades in this study, we would find a combined multiple correlation coefficient of r = .34. The multiple correlation coefficient represents the combined association between the outcome variable and the full set of predictor variables. Note that in this case, the combined r of .34 is larger than any of the individual correlations (r) in Table 4.2, which ranged from .14 to .25. These numbers mean that we are better able to predict quiz grades from examining all three variables than we are from examining any single variable. Or, as the saying goes, the whole is greater than the sum of its parts!
Multiple regression analysis is an incredibly useful and powerful analytic approach, but it can also be a tough concept to grasp. Before we move on, let's revisit the concept in the form of an analogy. Imagine you've just eaten the most delicious hamburger of your life and are determined to understand what made it so good. Lots of factors will contribute to the taste of your hamburger: the quality of the meat, the type and amount of cheese, the freshness of the bun, perhaps the smoked chili peppers layered on top. If you were to approach this investigation using multiple regression analysis, you would be able to separate out the influence of each variable (e.g., How important is the cheese compared with the smoked peppers?) as well as take into account the full set of ingredients (e.g., Does the freshness of the bun really matter when the other elements taste so good?). Ultimately, you would be armed with the knowledge of which elements are most important in crafting the perfect hamburger. And you would understand more about the perfect hamburger than if you had examined each ingredient in isolation.
Chi-Square Analyses

Both correlations and regressions are well suited to testing hypotheses about prediction, as long as it is possible to demonstrate a linear relationship between two variables. But linear relationships require that variables be measured on one of the quantitative scales, that is, ordinal, interval, or ratio scales (see Section 2.3, Scales and Types of Measurement, for a review). What if we wanted to test the association between nominal, or categorical, variables? In these cases, we would need an alternative statistic called the chi-square statistic, which determines whether two nominal variables are independent from or related to one another. Chi-square is often abbreviated with the symbol χ², the Greek letter chi with a superscript 2 for squared. (This statistic is also referred to as the chi-square test for independence, a slightly longer but more descriptive synonym.)
The idea behind this test is similar to that of the correlation coefficient. If two variables are independent, then knowing the value of one variable does not tell you anything about the value of the other. As we will see in the following examples, a larger chi-square reflects a larger deviation from what we would expect by chance and is thus an index of statistical significance.

The Logic of Chi-Square

To determine whether two variables are associated, the chi-square test works by comparing the observed frequencies (the collected data) with the expected frequencies if the variables were unrelated. If the results significantly deviate from these expected frequencies, then we conclude that our variables are related. And, consequently, we are able to predict one variable based on knowing the values of the other. Let's look at a couple of examples to make this more concrete.
First, let's say we wanted to know whether gender is related to political party affiliation. We might randomly select 100 men and 100 women and ask them whether they identified as Republican or Democrat. Because both of these variables are nominal (that is, they identify only categories, not quantitative measures), chi-square will be our best choice to test the association between them. The first step in conducting this analysis is to arrange our data in a contingency table, which displays the number of individuals in each of the combinations of our nominal variables. We encountered these tables before in our examples of observational studies in Chapter 3 (Section 3.1) but stopped short of conducting the statistical analyses. So imagine we get the results shown in Table 4.3a from our survey of gender and party affiliation.
Table 4.3a: Gender and party affiliation

              Male    Female
Democrat      60      60
Republican    40      40
In this case, there is no association between gender and party affiliation. It does not matter that the sample consists of 60% Democrats and 40% Republicans. What matters for our hypothesis test is that the pattern for males is the same as the pattern for females: Our sample contains 1.5 times as many Democrats as Republicans for both genders. In other words, knowing a person's gender does not tell us anything about his or her political affiliation.
For illustration purposes, imagine we tested the same hypothesis again but could recruit only 50 women, compared with 100 men. Now, if we found the same 60%/40% split among men again, and assuming that the variables were still not related, here's the question: What would we expect the split to look like among women? If the ratio of Democrats to Republicans remains at 1.5 to 1, we would expect to see the women divided into 30 Democrats and 20 Republicans (i.e., the same ratio; shown in Table 4.3b). This concept is referred to as the expected frequency, or the frequency you would expect to see if the variables were not related.
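The chi-square statistic for the contingency table in Table 4.3a takes only a few lines to compute; the scipy call below is an assumption about tooling, since the text does not prescribe a package. Because the male and female columns show the identical 60/40 pattern, the observed counts equal the expected counts, and the chi-square value is 0, confirming that gender and party affiliation are independent in this example.

```python
from scipy import stats

# Observed frequencies from Table 4.3a (rows: Democrat, Republican; columns: Male, Female)
observed = [[60, 60],
            [40, 40]]

chi2, p, dof, expected = stats.chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p:.2f}")  # chi-square = 0.00, p = 1.00
print("expected frequencies:", expected)        # identical to the observed counts
```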
School of Community and Environmental HealthMPH Program .docxSchool of Community and Environmental HealthMPH Program .docx
School of Community and Environmental HealthMPH Program .docx
 
School Effects on Psychological Outcomes During Adolescence.docx
School Effects on Psychological Outcomes During Adolescence.docxSchool Effects on Psychological Outcomes During Adolescence.docx
School Effects on Psychological Outcomes During Adolescence.docx
 
Search the gene belonging to the accession id you selected in week 2.docx
Search the gene belonging to the accession id you selected in week 2.docxSearch the gene belonging to the accession id you selected in week 2.docx
Search the gene belonging to the accession id you selected in week 2.docx
 

Recently uploaded

Hindi varnamala | hindi alphabet PPT.pdf
Hindi varnamala | hindi alphabet PPT.pdfHindi varnamala | hindi alphabet PPT.pdf
Hindi varnamala | hindi alphabet PPT.pdf
Dr. Mulla Adam Ali
 
IGCSE Biology Chapter 14- Reproduction in Plants.pdf
IGCSE Biology Chapter 14- Reproduction in Plants.pdfIGCSE Biology Chapter 14- Reproduction in Plants.pdf
IGCSE Biology Chapter 14- Reproduction in Plants.pdf
Amin Marwan
 
Leveraging Generative AI to Drive Nonprofit Innovation
Leveraging Generative AI to Drive Nonprofit InnovationLeveraging Generative AI to Drive Nonprofit Innovation
Leveraging Generative AI to Drive Nonprofit Innovation
TechSoup
 
UGC NET Exam Paper 1- Unit 1:Teaching Aptitude
UGC NET Exam Paper 1- Unit 1:Teaching AptitudeUGC NET Exam Paper 1- Unit 1:Teaching Aptitude
UGC NET Exam Paper 1- Unit 1:Teaching Aptitude
S. Raj Kumar
 
NEWSPAPERS - QUESTION 1 - REVISION POWERPOINT.pptx
NEWSPAPERS - QUESTION 1 - REVISION POWERPOINT.pptxNEWSPAPERS - QUESTION 1 - REVISION POWERPOINT.pptx
NEWSPAPERS - QUESTION 1 - REVISION POWERPOINT.pptx
iammrhaywood
 
Advanced Java[Extra Concepts, Not Difficult].docx
Advanced Java[Extra Concepts, Not Difficult].docxAdvanced Java[Extra Concepts, Not Difficult].docx
Advanced Java[Extra Concepts, Not Difficult].docx
adhitya5119
 
Walmart Business+ and Spark Good for Nonprofits.pdf
Walmart Business+ and Spark Good for Nonprofits.pdfWalmart Business+ and Spark Good for Nonprofits.pdf
Walmart Business+ and Spark Good for Nonprofits.pdf
TechSoup
 
clinical examination of hip joint (1).pdf
clinical examination of hip joint (1).pdfclinical examination of hip joint (1).pdf
clinical examination of hip joint (1).pdf
Priyankaranawat4
 
Chapter wise All Notes of First year Basic Civil Engineering.pptx
Chapter wise All Notes of First year Basic Civil Engineering.pptxChapter wise All Notes of First year Basic Civil Engineering.pptx
Chapter wise All Notes of First year Basic Civil Engineering.pptx
Denish Jangid
 
A Independência da América Espanhola LAPBOOK.pdf
A Independência da América Espanhola LAPBOOK.pdfA Independência da América Espanhola LAPBOOK.pdf
A Independência da América Espanhola LAPBOOK.pdf
Jean Carlos Nunes Paixão
 
Bed Making ( Introduction, Purpose, Types, Articles, Scientific principles, N...
Bed Making ( Introduction, Purpose, Types, Articles, Scientific principles, N...Bed Making ( Introduction, Purpose, Types, Articles, Scientific principles, N...
Bed Making ( Introduction, Purpose, Types, Articles, Scientific principles, N...
Leena Ghag-Sakpal
 
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...
PECB
 
How to Setup Warehouse & Location in Odoo 17 Inventory
How to Setup Warehouse & Location in Odoo 17 InventoryHow to Setup Warehouse & Location in Odoo 17 Inventory
How to Setup Warehouse & Location in Odoo 17 Inventory
Celine George
 
The History of Stoke Newington Street Names
The History of Stoke Newington Street NamesThe History of Stoke Newington Street Names
The History of Stoke Newington Street Names
History of Stoke Newington
 
Traditional Musical Instruments of Arunachal Pradesh and Uttar Pradesh - RAYH...
Traditional Musical Instruments of Arunachal Pradesh and Uttar Pradesh - RAYH...Traditional Musical Instruments of Arunachal Pradesh and Uttar Pradesh - RAYH...
Traditional Musical Instruments of Arunachal Pradesh and Uttar Pradesh - RAYH...
imrankhan141184
 
How to deliver Powerpoint Presentations.pptx
How to deliver Powerpoint  Presentations.pptxHow to deliver Powerpoint  Presentations.pptx
How to deliver Powerpoint Presentations.pptx
HajraNaeem15
 
What is Digital Literacy? A guest blog from Andy McLaughlin, University of Ab...
What is Digital Literacy? A guest blog from Andy McLaughlin, University of Ab...What is Digital Literacy? A guest blog from Andy McLaughlin, University of Ab...
What is Digital Literacy? A guest blog from Andy McLaughlin, University of Ab...
GeorgeMilliken2
 
Liberal Approach to the Study of Indian Politics.pdf
Liberal Approach to the Study of Indian Politics.pdfLiberal Approach to the Study of Indian Politics.pdf
Liberal Approach to the Study of Indian Politics.pdf
WaniBasim
 
ANATOMY AND BIOMECHANICS OF HIP JOINT.pdf
ANATOMY AND BIOMECHANICS OF HIP JOINT.pdfANATOMY AND BIOMECHANICS OF HIP JOINT.pdf
ANATOMY AND BIOMECHANICS OF HIP JOINT.pdf
Priyankaranawat4
 
Film vocab for eal 3 students: Australia the movie
Film vocab for eal 3 students: Australia the movieFilm vocab for eal 3 students: Australia the movie
Film vocab for eal 3 students: Australia the movie
Nicholas Montgomery
 

Recently uploaded (20)

Hindi varnamala | hindi alphabet PPT.pdf
Hindi varnamala | hindi alphabet PPT.pdfHindi varnamala | hindi alphabet PPT.pdf
Hindi varnamala | hindi alphabet PPT.pdf
 
IGCSE Biology Chapter 14- Reproduction in Plants.pdf
IGCSE Biology Chapter 14- Reproduction in Plants.pdfIGCSE Biology Chapter 14- Reproduction in Plants.pdf
IGCSE Biology Chapter 14- Reproduction in Plants.pdf
 
Leveraging Generative AI to Drive Nonprofit Innovation
Leveraging Generative AI to Drive Nonprofit InnovationLeveraging Generative AI to Drive Nonprofit Innovation
Leveraging Generative AI to Drive Nonprofit Innovation
 
UGC NET Exam Paper 1- Unit 1:Teaching Aptitude
UGC NET Exam Paper 1- Unit 1:Teaching AptitudeUGC NET Exam Paper 1- Unit 1:Teaching Aptitude
UGC NET Exam Paper 1- Unit 1:Teaching Aptitude
 
NEWSPAPERS - QUESTION 1 - REVISION POWERPOINT.pptx
NEWSPAPERS - QUESTION 1 - REVISION POWERPOINT.pptxNEWSPAPERS - QUESTION 1 - REVISION POWERPOINT.pptx
NEWSPAPERS - QUESTION 1 - REVISION POWERPOINT.pptx
 
Advanced Java[Extra Concepts, Not Difficult].docx
Advanced Java[Extra Concepts, Not Difficult].docxAdvanced Java[Extra Concepts, Not Difficult].docx
Advanced Java[Extra Concepts, Not Difficult].docx
 
Walmart Business+ and Spark Good for Nonprofits.pdf
Walmart Business+ and Spark Good for Nonprofits.pdfWalmart Business+ and Spark Good for Nonprofits.pdf
Walmart Business+ and Spark Good for Nonprofits.pdf
 
clinical examination of hip joint (1).pdf
clinical examination of hip joint (1).pdfclinical examination of hip joint (1).pdf
clinical examination of hip joint (1).pdf
 
Chapter wise All Notes of First year Basic Civil Engineering.pptx
Chapter wise All Notes of First year Basic Civil Engineering.pptxChapter wise All Notes of First year Basic Civil Engineering.pptx
Chapter wise All Notes of First year Basic Civil Engineering.pptx
 
A Independência da América Espanhola LAPBOOK.pdf
A Independência da América Espanhola LAPBOOK.pdfA Independência da América Espanhola LAPBOOK.pdf
A Independência da América Espanhola LAPBOOK.pdf
 
Bed Making ( Introduction, Purpose, Types, Articles, Scientific principles, N...
Bed Making ( Introduction, Purpose, Types, Articles, Scientific principles, N...Bed Making ( Introduction, Purpose, Types, Articles, Scientific principles, N...
Bed Making ( Introduction, Purpose, Types, Articles, Scientific principles, N...
 
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...
 
How to Setup Warehouse & Location in Odoo 17 Inventory
How to Setup Warehouse & Location in Odoo 17 InventoryHow to Setup Warehouse & Location in Odoo 17 Inventory
How to Setup Warehouse & Location in Odoo 17 Inventory
 
The History of Stoke Newington Street Names
The History of Stoke Newington Street NamesThe History of Stoke Newington Street Names
The History of Stoke Newington Street Names
 
Traditional Musical Instruments of Arunachal Pradesh and Uttar Pradesh - RAYH...
Traditional Musical Instruments of Arunachal Pradesh and Uttar Pradesh - RAYH...Traditional Musical Instruments of Arunachal Pradesh and Uttar Pradesh - RAYH...
Traditional Musical Instruments of Arunachal Pradesh and Uttar Pradesh - RAYH...
 
How to deliver Powerpoint Presentations.pptx
How to deliver Powerpoint  Presentations.pptxHow to deliver Powerpoint  Presentations.pptx
How to deliver Powerpoint Presentations.pptx
 
What is Digital Literacy? A guest blog from Andy McLaughlin, University of Ab...
What is Digital Literacy? A guest blog from Andy McLaughlin, University of Ab...What is Digital Literacy? A guest blog from Andy McLaughlin, University of Ab...
What is Digital Literacy? A guest blog from Andy McLaughlin, University of Ab...
 
Liberal Approach to the Study of Indian Politics.pdf
Liberal Approach to the Study of Indian Politics.pdfLiberal Approach to the Study of Indian Politics.pdf
Liberal Approach to the Study of Indian Politics.pdf
 
ANATOMY AND BIOMECHANICS OF HIP JOINT.pdf
ANATOMY AND BIOMECHANICS OF HIP JOINT.pdfANATOMY AND BIOMECHANICS OF HIP JOINT.pdf
ANATOMY AND BIOMECHANICS OF HIP JOINT.pdf
 
Film vocab for eal 3 students: Australia the movie
Film vocab for eal 3 students: Australia the movieFilm vocab for eal 3 students: Australia the movie
Film vocab for eal 3 students: Australia the movie
 

4.1 Introduction to Survey Research

Whether you are aware of it or not, you have been encountering survey research for most of your life. Every time your telephone rings during dinnertime and the person on the other end of the line insists on knowing your household income and favorite brand of laundry detergent, he or she is helping to conduct survey research. When news programs try to predict the winner of an election two weeks early, these reports are based on survey research of eligible voters. In both cases, the researcher is trying to make predictions about the products people buy or the candidates they will elect based on people's reports of their own attitudes, feelings, and behaviors.

Surveys can be used in a variety of contexts and are most appropriate for questions that involve people describing their attitudes, their behaviors, or a combination of the two. For example, if you wanted to examine the predictors of attitudes toward the death penalty, you could ask people their opinions on this topic and also ask them about their political party affiliation. Based on these responses, you could test whether political affiliation predicted attitudes toward the death penalty. Or, imagine you wanted to know whether students who spent more time studying were more likely to do well on their exams. This question could be answered using a survey that asked students about their study habits and then tracked their exam grades. We will return to this example near the end of the chapter as we discuss the process of analyzing survey data to test our hypotheses about predictions.

The common thread running through these two examples is that they require people to report either their thoughts (e.g., opinions about the death penalty) or their behaviors (e.g., the hours they spend studying). Thus, in deciding whether a survey is the best fit for your research question, the key is to consider whether people will be both able and willing to report these things accurately. We will expand on both of these issues in the next section.

In this chapter, we continue our journey along the continuum of control, moving on to survey research, in which the primary goal is either describing or predicting attitudes and behavior. For our purposes, survey research refers to any method that relies on people's reports of their own attitudes, feelings, and behaviors. So, for example, in Herek's (1988) study, the participants reported their attitudes toward lesbians and gay men, rather than these attitudes being somehow directly observed by the researchers. Compared with the qualitative and descriptive designs for observing behavior we discussed in Chapter 3, survey research tends to yield more control over both data collection and question content. Thus, survey research falls somewhere between quantitative descriptive research (Chapter 3) and the explanatory research involved in experimental designs (Chapter 5).

This chapter provides an overview of survey research from conceptualization through analysis. We will cover the types of research questions that are best suited to survey research and provide an overview of the decisions to consider in designing and conducting a survey study. We will then cover the process of data collection, with a focus on selecting the people who will complete your survey. Finally, we will cover the three most common approaches for analyzing survey data, bringing us back full circle to addressing our research questions.

Distinguishing Features of Surveys

Survey research designs have three distinguishing features that set them apart from other designs. First, all survey research relies on either written or verbal self-reports of people's attitudes, feelings, and behaviors.
This means that researchers will ask participants a series of questions and record their responses. This approach has several advantages, including being relatively straightforward and allowing access to psychological processes (e.g., "Why do you support candidate X?"). However, researchers are also cautious in their interpretation of self-reported data because participants' responses can reflect a combination of their true attitude and their concern over how this attitude will be perceived. Scientists refer to this as social desirability, which means that people may be reluctant to report unpopular attitudes. So if you were to ask people their attitudes about different racial groups, their answers might reflect both their true attitude and their desire not to appear racist. We will return to the issue of social desirability later in the chapter and discuss some tricks for designing questions that can help to sidestep these concerns and capture respondents' true attitudes.

The second distinguishing feature of survey research is its ability to access internal states that cannot be measured through direct observation. In our discussion of observational designs in Chapter 3, we learned that one of the limitations of these designs was a lack of insight into why people do what they do. Survey research is able to address this limitation directly: By asking people what they think, how they feel, and why they behave in certain ways, researchers come closer to capturing the underlying psychological processes. However, people's reports of their internal states should be taken with a grain of salt, for two reasons. First, as mentioned, these reports may be biased by social desirability concerns, particularly when unpopular attitudes are involved. Second, there is a large literature in social psychology suggesting that people may not be very accurate at understanding the true reasons for their behavior. In a highly cited review paper, psychologists Richard Nisbett and Tim Wilson (1977) argued that we make poor guesses about why we do things, and that those guesses are based more on our assumptions than on any real introspection. Thus, survey questions can provide access to internal states, but these should always be interpreted with caution.

The third distinguishing feature is a practical one: Survey research allows us to collect large amounts of data with relatively little effort and few resources. However, a survey's actual efficiency depends on the decisions made during the design process, and in reality, efficiency is often in a delicate balance with the accuracy and completeness of the data.

Broadly speaking, survey research can be conducted using either verbal or written self-reports (or a combination of the two). Before we dive into the details of writing and formatting a survey, it is important to understand the pros and cons of administering your survey as an interview (i.e., an oral survey) or a questionnaire (i.e., a written survey).

Interviews

An interview involves an oral question-and-answer exchange between the researcher and the participant. This exchange can take place either face-to-face or over the phone. So our telemarketer example from earlier in the chapter represents an interview because the questions are asked orally, via phone. Likewise, if you are approached in a shopping mall and asked to answer questions about your favorite products, you are experiencing a survey in interview form because the questions are administered out loud. And, if you have ever taken part in a focus group, in which a group of people gives their reactions to a new product, the researchers are essentially conducting an interview with the group. (For a more in-depth discussion of focus groups and other interview techniques, see Chapter 3, Section 3.2 (http://content.thuzelearning.com/books/Malec.5743.13.1/sections/sec3.2#sec3.2), Qualitative Research Interviews.)

Conducting interviews may allow a researcher to gather more detailed and richer responses. (Alina Solovyova-Vincent/E+/Getty Images)

Interview Schedules

Regardless of how the interview is administered, the interviewer (i.e., the researcher) has a predetermined plan, or script, for how the interview should go. This plan for how the interview will progress is known as an interview schedule. When conducting an interview—including those telemarketing calls—the researcher/interviewer has a detailed plan for the order of questions to be asked, along with follow-up questions depending on the participant's responses.

Broadly speaking, there are two types of interview schedules. A linear (also called "structured") schedule asks the same questions in the same order for all participants. In contrast, a branching schedule unfolds more like a flowchart, with the next question dependent on participants' answers. A branching schedule is typically used in cases with follow-up questions that make sense only for some of the participants. For example, you might first ask people whether they have children; if they answer "yes," you could then follow up by asking how many.

One danger in using a branching schedule is that it is based partly on your assumptions about the relationships between variables. Granted, it is fairly uncontroversial to ask only people with children to indicate how many children they have. But imagine the following scenario, in which you first ask participants for their household income and then ask about their political donations:

"How much money do you make? $18,000? Okay, how likely are you to donate money to the Democratic Party?"

"How much money do you make? $250,000? Okay, how likely are you to donate money to the Republican Party?"

The assumption implicit in the way these questions branch is that wealthier people are more likely to be Republicans and less wealthy people to be Democrats. This might be supported by the data or it might not. But by planning the follow-up questions in this way, you are unable to capture cases that do not fit your stereotypes (i.e., the wealthy Democrats and the poor Republicans). The lesson here is to be careful about letting your biases shape the data collection process, as this can create invalid or inaccurate findings.
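To make the flowchart idea concrete, the following is a minimal Python sketch of a branching schedule built around the children question above. The node names, the helper function, and the simulated answers are invented for illustration; they are not part of the chapter or of any particular survey package.

    # A minimal sketch of a branching (flowchart-style) interview schedule.
    # Question wording and node names are hypothetical, invented for illustration.
    schedule = {
        "start": {
            "question": "Do you have children? (yes/no)",
            "branch": {"yes": "how_many", "no": "end"},
        },
        "how_many": {
            "question": "How many children do you have?",
            "branch": {},  # any answer falls through to the end
        },
    }

    def run_interview(schedule, get_answer):
        """Walk the schedule from 'start', choosing each next node from the answer given."""
        node, responses = "start", {}
        while node != "end":
            item = schedule[node]
            answer = get_answer(item["question"])
            responses[node] = answer
            node = item["branch"].get(answer.strip().lower(), "end")
        return responses

    # Simulate one participant instead of prompting at a console.
    canned = {"Do you have children? (yes/no)": "yes",
              "How many children do you have?": "2"}
    print(run_interview(schedule, lambda q: canned[q]))  # {'start': 'yes', 'how_many': '2'}

A linear (structured) schedule is simply the special case in which every node falls through to the same next question regardless of the answer.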
Advantages and Disadvantages of Interviews

Spoken interviews have a number of advantages over written surveys. For one, people are often more motivated to talk than they are to write. Let's say that an undergraduate research assistant is dispatched to a local shopping mall to interview people about their experiences in romantic relationships. The researcher may have no trouble at all recruiting participants, many of whom will be eager to divulge the personal details of their recent relationships. But for better or for worse, these experiences would be more difficult for the researcher to capture in writing. Related to this observation, people's oral responses are typically richer and more detailed than their written responses. Think of the difference between asking someone to "Describe your views on gun control" versus "Indicate on a scale of 1 to 7 the degree to which you support gun control." The former is more likely to capture the richness and subtlety involved in people's attitudes about guns.

On a practical note, using an interview format also allows you to ensure that respondents understand the questions. If written questionnaire items are poorly worded, people are forced to guess at your meaning, and these guesses introduce a big source of error variance (variance from random sources that are irrelevant to the trait or ability the questionnaire is purporting to measure). But if an interview question is poorly asked, people find it much easier to ask the interviewer to clarify. Finally, using an interview format allows you to reach a broader cross-section of people and to include those who are unable to read and write—or, perhaps, unable to read and write the language of your survey.

Interviews also have three clear disadvantages compared with written surveys. First, interviews are more costly in terms of both time and money. It certainly used more of my time to go to a shopping mall than it would have taken to mail out packets of surveys (but no more money—these research assistant gigs tend to be unpaid!). Second, the interview format provides many opportunities for personal bias to creep into the exchange. These biases are unlikely to be deliberate, but participants can often pick up on body language and subtle facial expressions when the interviewer disagrees with their answers. These cues may lead them to shape their responses in order to make the interviewer happier (the influence of social desirability again). Third, interviews can be difficult to score and interpret, especially with open-ended questions. Although administering them may be easy, scoring them is relatively more complicated, often involving subjectivity or bias in the interpretation. Because the researcher often has to make judgments based on personal beliefs about the quality of the response, multiple raters are generally used to score the responses in order to minimize bias.

The best way to understand the pros and cons of interviewing is to recognize that both are a consequence of personal interaction. The interaction between interviewer and interviewee allows for richer responses but also creates the potential for these responses to be biased. As a researcher, you have to weigh these pros and cons and decide which method is the best fit for your survey. In the next section, we turn our attention to the process of administering surveys in writing.

Questionnaires

A questionnaire is a survey that involves a written question-and-answer exchange between the researcher and the participant. The questionnaire can be in open-ended format (e.g., the participant writes in his or her answer) or forced-choice response format (e.g., the participant selects from a set of responses, such as with multiple-choice questions, rating scales, or true/false questions), which will be discussed later in this chapter. The exchange is a bit different from what we saw with interview formats. In written surveys, the questions are designed ahead of time and then given to participants, who write their responses and return the questionnaire to the researcher. In the next section, we will discuss details for designing these questions. But before we get there, let's take a quick look at the process of administering written surveys.

Distribution Methods

Questionnaires can be distributed in three primary ways, each with its own pattern of advantages and disadvantages.

Distributing by Mail

Until recently, one common way to distribute surveys was to send paper copies through the mail to a group of participants (see Section 4.3, Sampling From the Population, for more discussion of how this group is selected). Mailing surveys is relatively inexpensive and relatively easy to do, but it is unfortunately one of the worst methods when it comes to response rates. People tend to ignore questionnaires they receive in the mail, dismissing them as one more piece of junk. There are a few tricks available to researchers to increase response rates, including providing incentives, making the survey interesting, and making it as easy as possible to return the results (e.g., with a postage-paid envelope). However, even using all of these tricks, researchers consider themselves extremely lucky to get a 30% response rate from a mail survey. That is, if you mail 1,000 surveys, you will be doing well to receive 300 back. Because of this low return on investment, researchers have begun using other methods for their written surveys.
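The 30% figure translates into straightforward planning arithmetic. The short Python sketch below is illustrative only: the response rate is the rough rule of thumb described above, and the function names are invented.

    import math

    # Back-of-the-envelope planning for a mail survey, assuming (optimistically)
    # the 30% response rate described above. Numbers are illustrative.
    RESPONSE_RATE = 0.30

    def expected_returns(n_mailed, rate=RESPONSE_RATE):
        """Roughly how many completed surveys to expect from a mailing."""
        return n_mailed * rate

    def mailing_needed(target_n, rate=RESPONSE_RATE):
        """Roughly how many surveys to mail to end up with target_n responses."""
        return math.ceil(target_n / rate)

    print(expected_returns(1000))  # 300.0 -- the 1,000-survey example above
    print(mailing_needed(300))     # 1000 -- the same calculation inverted for planning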
Distributing in Person

Another option is to distribute a written survey in person, simply handing out copies and asking participants to fill them out on the spot. This method is certainly more time-consuming, as a researcher has to be stationed in one place for long periods of time in order to collect data. In addition, people are less likely to answer the questions honestly because the presence of a researcher makes them worry about social desirability. Last, the sample for this method is limited to people who are in the physical area at the time the questionnaires are being handed out. As we will discuss later, this might lead to problems in the composition of the sample. On the plus side, however, this method tends to result in higher compliance rates because it is harder to say no to someone face-to-face than it is to ignore a piece of mail.

Distributing Online

Over the past 20 years, Internet, or Web-based, surveys have become increasingly common. In Web-based survey research, the questionnaire is designed and posted on a Web page, to which participants are directed in order to complete the questionnaire. The advantages of online distribution are clear: This method is the easiest for both researchers and participants, and it may give people a greater sense of anonymity, thereby encouraging more honest responses. In addition, response times are faster and the data are easier to analyze because they are already in digital format. The disadvantages include the following: Specific groups may be underrepresented because they do not have access to the Internet, the researcher has little to no control over sample selection, and the researcher receives responses only from those who are interested in the topic—so-called self-selection bias. All these limitations could raise questions about the validity and reliability of the data collected. In addition, several ethical issues might arise regarding informed consent and the privacy of participants. So when considering conducting Web-based surveys, researchers should evaluate all the advantages and disadvantages, as well as any ethical or legal implications. For readers interested in more information on designing and conducting Internet research, Sam Gosling and John Johnson's 2010 book Advanced Methods for Conducting Online Behavioral Research is an excellent resource. In addition, several groups of psychological researchers have been attempting to understand the psychology of Internet users. (You can read about recent studies on this website (http://www.spring.org.uk/2010/10/internet-psychology.php).)

Advantages and Disadvantages of Questionnaires

Just as with interview methods, written questionnaires have their own set of advantages and disadvantages. Written surveys allow researchers to collect large amounts of data with little cost or effort, and they can offer a greater degree of anonymity than interviews. Anonymity can be a particular advantage in dealing with sensitive or potentially embarrassing topics. That is, people may be more willing to answer a questionnaire about their alcohol use or their sexual history than they would be to discuss these things face-to-face with an interviewer. On the downside, written surveys miss out on the advantages of interviews because no one is available to clarify confusing questions or to gather more information as needed. Fortunately, there is one relatively easy way to minimize this problem: Write questions (and response choices, if using multiple-choice or forced-choice formats) that are as clear as possible. In the next section, we turn our attention to the process of designing questionnaires.

Simple language is one characteristic of an effective questionnaire. (iStockphoto/Thinkstock)

4.2 Designing Questionnaires

One of the most important elements of conducting survey research is deciding how to construct and assemble the questionnaire items.
In some cases, you will be able to use questionnaires that other researchers have developed in order to answer your research questions. For example, many psychology researchers use standard scales that measure behavior or personality traits, such as self-esteem, prejudice, depression, or stress levels. The advantage of these ready-made measures is that other people have already gone to the trouble of making sure they are valid and reliable. So if you are interested in the relationship between stress and depression, you could distribute the Perceived Stress Scale (Cohen, Kamarck, & Mermelstein, 1983) and the Beck Depression Inventory (Beck, Steer, Ball, & Ranieri, 1996) to a group of participants and move on to the fun part of data analyses. (For further discussion, see Chapter 2, Section 2.2 (http://content.thuzelearning.com/books/Malec.5743.13.1/sections/sec2.2#sec2.2), Reliability and Validity.)

However, in many cases there is no perfect measure for your research question—either because no one has studied the topic before or because the current measures do not accurately assess the construct you are investigating. When this happens, you will need to go through the process of designing your own questionnaire. In this section, we discuss strategies for writing questions and choosing the most appropriate response format.

Five Rules for Designing Better Questionnaires

Each of the following rules is designed to make your questions as clear and easy to understand as possible in order to minimize the potential for error variance. We discuss each one and illustrate it with contrasting pairs of items: "bad" items that do not follow the rule and "better" items that do.

1. Use simple language. One of the simplest and most important rules to keep in mind is that people have to be able to understand your questions. This means you should avoid jargon and specialized language whenever possible.

BAD: "Have you ever had an STD?"
BETTER: "Have you ever had a sexually transmitted disease?"

BAD: "What is your opinion of the S-CHIP?"
BETTER: "What is your opinion of the State Children's Health Insurance Program?"

It is also a good idea to simplify the language as much as possible so that people spend time answering the question rather than trying to decode your meaning. For example, words like assist and consider can be replaced with words like help and think. This may seem odd—or perhaps even condescending to your participants—but it is always better to err on the side of simplicity. Remember, if people are forced to guess at the meaning of your questions, these guesses add error variance to their answers. In addition, when developing and administering surveys to various populations, it is important to use language and examples that are not culturally or linguistically biased. For example, participants from various cultures may not be familiar with questions involving the historical development of the United States or the nature of gender roles in the United States. Thus, with the changing U.S. population and the different languages being spoken in the home, it is important to use language that does not discriminate against particular populations. Also, it is advisable to avoid slang at all costs.

2. Be precise. Another way to ensure that people understand the question is to be as precise as possible in your wording. Questions that are ambiguous in their wording will introduce an extra source of error variance into your data because people may interpret these questions in varying ways.

BAD: "What kind of drugs do you take?" (Legal drugs? Illegal drugs? Now? In college?)
BETTER: "What kind of prescription drugs are you currently taking?"

BAD: "Do you like sports?" (Playing? Watching? Which sports?)
BETTER: "How much do you like watching basketball on television?"

3. Use neutral language. It is important that your questions be designed to measure your participants' attitudes, feelings, or behaviors rather than to manipulate their responses. That is, you should avoid leading questions, which are written in such a way that they suggest an answer.

BAD: "Do you beat your children?" (Clearly, beating isn't good; who would say yes?)
BETTER: "Is it acceptable to use physical forms of discipline?"

BAD: "Do you agree that the president is an idiot?" (Hmmm . . . I wonder what the researcher thinks. . . .)
BETTER: "How would you rate the president's job performance?"

This guideline can also be used to sidestep social desirability concerns. If you suspect that people may be reluctant to report holding an attitude, such as using corporal punishment with their children, it helps to phrase the question in a nonthreatening way—"using physical forms of discipline" versus "beating your children." Many current measures of prejudice adopt this technique. For example, McConahay's Modern Racism Scale contains items such as "Discrimination against Blacks is no longer a problem in the United States" (McConahay, 1986). People who hold prejudicial attitudes are more likely to agree with statements like this one than with more blunt ones like "I hate people from Group X."

4. Ask one question at a time. One remarkably common error in designing questions is the double-barreled question, which fires off more than one question at a time. When you fill out a new patient questionnaire at your doctor's office, these forms often ask whether you suffer from "headaches and nausea." What if you suffer from only one of these? Or what if you have a lot of nausea and only an occasional headache? The better approach is to ask about each of these symptoms separately.

BAD: "Do you suffer from pain and numbness?"
BETTER: "How often do you suffer from pain?" and "How often do you suffer from numbness?"

BAD: "Do you like watching football and boxing?"
BETTER: "How much do you enjoy watching professional football on TV?" and "How much do you enjoy watching boxing on TV?"

5. Avoid negations. One final and simple way to clarify your questions is to avoid questions with negative statements because these can often be difficult to understand. In the following examples, the first is admittedly a little silly, but the second comes from a real survey of voter opinions.

BAD: "Do you never not cheat on your exams?" (Wait—what? Do I cheat? Do I not cheat? What is this asking?)
BETTER: "Have you ever cheated on an exam?"

BAD: "Are you against rejecting the ban on pesticides?" (Wait—so am I for the ban? Against the ban? What is this asking?)
BETTER: "Do you support the current ban on pesticides?"
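Some of these wording rules can be turned into a rough automated screen for a draft question pool. The Python sketch below is a toy heuristic invented for illustration; it is not from the chapter or from any validated tool, and it only flags items for a human to re-read.

    import re

    # Toy heuristics for screening draft survey questions against some of the
    # wording rules above. Rough illustrative checks only, not a validated tool.
    NEGATIONS = {"not", "never", "no"}
    ABBREVIATION = re.compile(r"\b[A-Z]{2,}\b")   # all-caps shorthand such as "STD"

    def screen_question(text):
        """Return a list of possible wording problems in one draft question."""
        words = re.findall(r"[a-z']+", text.lower())
        flags = []
        if "and" in words:
            flags.append("possible double-barreled question (contains 'and')")
        if NEGATIONS.intersection(words):
            flags.append("contains a negation; consider rephrasing positively")
        if ABBREVIATION.search(text):
            flags.append("contains an abbreviation; spell it out in simple language")
        return flags

    for q in ["Do you suffer from pain and numbness?",
              "Do you never not cheat on your exams?",
              "Have you ever had an STD?"]:
        print(q, "->", screen_question(q) or ["no flags"])

Flags like these are only prompts; an item can contain "and" or a negation and still be perfectly clear, so the final judgment rests with the researcher and with pilot testing, which is discussed later in the chapter.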
Participant Response Options

In this section, we turn our attention to the issue of deciding how participants should respond to your questions. The decisions you make at this stage affect the type of data you ultimately collect, so it is important to choose carefully. We also review the primary decisions you will need to make about response options, as well as the pros and cons of each one.

One of the first choices you have to make is whether to collect open-ended or fixed-format responses. As the names imply, fixed-format responses require participants to choose from a list of options (e.g., "Pick your favorite color"), whereas open-ended responses ask participants to provide unstructured responses to a question or statement (e.g., "How do you feel about legalizing marijuana?"). Open-ended responses tend to be richer and more flexible but harder to translate into quantifiable data—analogous to the trade-off we discussed in comparing written versus oral survey methods. To put it another way, some concepts are difficult to reduce to a seven-point fixed-format scale, but these scales are easier to analyze than a paragraph of free-flowing text.

Another reason to think carefully about this decision is that fixed-format responses will, by definition, restrict people's options in answering the question. In some cases, these restrictions can even act as leading questions. In a study of people's perceptions of history, James Pennebaker and his colleagues (2006) asked respondents to indicate the "most significant event over the last 50 years." When this was asked in an open-ended way (i.e., "list the most significant event"), 2% of participants listed the invention of computers. In another version of the survey, this question was asked in a fixed-format way (i.e., "choose the most significant event"). When asked to select from a list of four options (World War II, Invention of Computers, Tiananmen Square, or Man on the Moon), 30% chose the invention of computers! In exchange for having easily coded data, the researchers accidentally forced participants into a smaller number of options and ended up with a skewed sense of the importance of computers in people's perceptions of history.

Fixed-Format Options

Although fixed-format responses can sometimes constrain or skew participants' answers, the reality is that researchers tend to use them more often than not. This decision is largely practical; fixed-format responses allow for more efficient data collection from a much larger sample. (Imagine the chore of having to hand-code 2,000 essays!) But once you have decided on this option for your questionnaire, the decision process is far from over. In this section, we discuss three possibilities you can use to construct a fixed-format response scale.

True/False

One option is to ask questions using a true/false format, which asks participants to indicate whether they endorse a statement. For example:

"I attended church last Sunday"    True    False
"I am a U.S. citizen"    True    False
"I am in favor of abortion"    True    False

This last example may strike you as odd, and in fact illustrates an important point about the use of true/false formats: They are best used for statements of fact rather than attitudes. It is relatively straightforward to indicate whether you attended church or whether you are a U.S. citizen. However, people's attitudes toward abortion are often complicated—one can be "pro-choice" but still support some common-sense restrictions, or "pro-life" but support exceptions (e.g., in cases of rape). The point is that a true/false question cannot even come close to capturing this complexity. However, for survey items that involve simple statements of fact, the true/false format can be a good option.

Multiple Choice

A second option is to use a multiple-choice format, which asks participants to select from a set of predetermined responses.

"Which of the following is your favorite fast-food restaurant?"
a. McDonald's
b. Burger King
c. Wendy's
d. Taco Bell

"Who did you vote for in the 2008 presidential election?"
a. John McCain
b. Barack Obama

"How do you travel to work on most days? (Select all that apply.)"
a. drive alone
b. carpool
c. take public transportation

As you can see in these examples, multiple-choice questions give you quite a bit of freedom in both the content and the response scaling of your questions. You can ask participants either to select one answer or, as in the last example, to select all applicable answers. You can cover everything from preferences (e.g., favorite fast-food restaurant) to behaviors (e.g., how you travel to work).

In addition to assessing preferences or behaviors, as in the preceding examples, multiple-choice questions are often used to examine knowledge and abilities. For example, intelligence and achievement tests utilize multiple-choice questions to assess the cognitive and academic abilities of individuals. In these cases, there is only one correct response choice and three to four incorrect or "distracter" response choices. Most research indicates that there should be at least three but no more than four incorrect response choices; providing more than four can make the question too difficult. It is also important that the correct response choice not obviously differ from the incorrect choices. Thus, all response choices, both correct and incorrect, should be approximately the same length and be plausible choices. If the incorrect response choices are obviously wrong, this will make the item too easy, which will affect its validity; it also allows individuals who do not know the answer to deduce the correct response. Regardless of whether you are developing multiple-choice questions to assess behaviors, knowledge, or abilities, all questions should be clear, unambiguous, and brief so that they can be easily understood. In addition, as with all question formats, it is best to avoid negative questions such as "Which of the following is not correct?" because they can be confusing to test-takers and cause the question to be invalid (i.e., not measure what it is supposed to measure). Finally, it is best practice to avoid response choices such as "All of the above" or "None of the above"; most individuals know that when such response choices are provided, they are usually the correct answer.

You may have already spotted a downside to multiple-choice formats. Whenever you provide a set of responses, you are restricting participants to those choices. This is the problem that Pennebaker and colleagues encountered when asking people about the most significant events of the last century. In each of the preceding examples, the categories fail to capture all possible responses. What if your favorite restaurant is In-and-Out Burger? What if you voted for Ralph Nader? What if you telecommute or ride your bicycle to work?

There are two relatively easy ways to avoid (or at least minimize) this problem. First, plan carefully when choosing the response options. During the design process, it helps to brainstorm with other people to ensure you are capturing the most likely responses. However, in many cases it is almost impossible to provide every option that people might think of. The second solution is to provide an "other" response to your multiple-choice question. This allows people to write in an option that you neglected to include. For example, our last question about traveling to work could be rewritten as follows:

"How do you travel to work on most days? (Select all that apply.)"
a. drive alone
b. carpool
c. take public transportation
d. other (please specify): __________________

This way, people who telecommute, bicycle, or even ride their trained pony to work will have a way to respond rather than skipping the question. And if you start to notice a pattern in these write-in responses (e.g., 20% of people adding "bicycle"), then you will have gained valuable knowledge to improve the next iteration of the survey.
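If the questionnaire responses are in digital form, spotting a pattern in the "other (please specify)" write-ins takes only a few lines. The Python sketch below uses invented data and an arbitrary 20% threshold purely for illustration.

    from collections import Counter

    # Toy illustration of reviewing "other (please specify)" write-ins to spot
    # options worth adding in the next version of the survey. Data are invented.
    write_ins = ["bicycle", "walk", "Bicycle", "telecommute", "bicycle", "walk",
                 "ride my trained pony", "bicycle", "telecommute", "bicycle"]

    counts = Counter(answer.strip().lower() for answer in write_ins)
    n = len(write_ins)

    for option, count in counts.most_common():
        share = count / n
        note = "  <- consider adding as a fixed option" if share >= 0.20 else ""
        print(f"{option}: {count}/{n} ({share:.0%}){note}")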
  • 27. strongly disagree "I would vote for a candidate who supported the death penalty." 1 2 3 4 5 Always o�en about half of the �me seldom never "The poli�cal party in power right now has messed things up." 1 2 3 4 5 strongly agree agree neither agree nor disagree disagree strongly disagree This format is well suited to capturing a�tudes and opinions and, in fact, is one of the most common approaches to a�tude research. Ra�ng scales are easy to score, and they give par�cipants some flexibility in indica�ng their agreement with or endorsement of the ques�ons. As a researcher, you have two cri�cal decisions to make about the construc�on of ra�ng scale items. Both have implica�ons for how you will analyze and interpret your results. First, you'll need to decide on the anchors, or labels, for your response scale. Ra�ng scales offer a good deal of flexibility in these anchors, as you can see in the preceding examples. You can frame ques�ons in terms of "agreement" with a statement or "likelihood" of a behavior; alterna�vely, you can customize the anchors to match your ques�on (e.g., "not at all necessary").
  • 28. Scales that use anchors of strongly agree and strongly disagree are also referred to as Likert scales. At a fairly simple level, the choice of labels affects the interpreta�on of the results. For example, if you asked the "poli �cal party" ques�on, you would have to be aware that the anchors were phrased in terms of agreement with the statement. In discussing these results, you would be able to discuss how much people agreed with the statement, on average, and whether agreement correlated with other factors. If this seems like an obvious point, you would be amazed at how o�en researchers (or the media) will take an item like this and spin the results to talk about the "likelihood of vo�ng" for the party in power—confusing an a�tude with a behavior! So, in short, make sure you are being honest when presen�ng and interpre�ng research data. At a more conceptual level, you need to decide whether the anchors for your ra�ng scale make use of a bipolar scale, which has polar opposites at its endpoints and a neutral point in the middle, or a unipolar scale, which assesses the presence or absence of a single construct. The difference between these scales is best illustrated by an example: Bipolar: How would you rate your current mood? 1 2 3 4 5 6 7 very happy happy slightly happy neither happy nor sad slightly sad sad very sad Unipolar: How would you rate your current mood? 1 2 3 4 5
  • 29. not at all sad slightly sad moderately sad very sad completely sad 1 2 3 4 5 not at all happy slightly happy moderately happy very happy completely happy With the bipolar op�on, par�cipants are asked to place themselves on a con�nuous scale somewhere between sad and happy, which are polar opposites. The assump�on in using a bipolar scale is that the endpoints represent the only two op�ons—par�cipants can be sad, happy, or somewhere in between. In contrast, with the unipolar op�on, par�cipants are asked to rate themselves on a con�nuous scale, indica�ng their level of either sadness or happiness. The assump�on in using a pair of unipolar scales is that it is possible to experience varying degrees of each item: For example, par�cipants can be moderately happy but also a li�le bit sad. The decision to use a bipolar or a unipolar scale comes down to the context. What is the most logical way to think about these constructs? What have previous researchers done? In the 1970s, Sandra Lipsitz Bem revolu�onized the way researchers thought about gender roles by arguing against a bipolar approach. Previously, gender role iden�fica�on had been measured on a bipolar scale from "masculine" to "feminine," the assump�on being that a person could be one or the other. Bem (1974) argued instead that people could easily have varying degrees of masculine and feminine traits. Her scale, the Bem Sex Role Inventory, asks respondents to rate themselves on a set of 60 unipolar traits. Someone with mostly feminine and hardly any masculine traits would be
  • 30. described as "feminine." Someone with high ra�ngs on both masculine and feminine traits would be described as "androgynous." And, someone with low ra�ngs on both masculine and feminine traits would be described as "undifferen�ated." You can view and complete Bem's scale online at this website (h�ps://openpsychometrics.org/tests/OSRI/) . The second cri�cal decision in construc�ng a ra�ng scale item is to decide on the number of points in the response scale. You may have no�ced that all the examples in this sec�on have an odd number of points (e.g., five or seven). This is usually preferable for ra �ng scale items because the middle of the scale (e.g., "3" or "4") allows respondents to give a neutral, middle - of-the-road answer. That is, on a scale from strongly disagree to strongly agree, the midpoint can be used to indicate "neither" or "I'm not sure." However, in some cases, you may not want to allow a neutral op�on in your scale. By using an even number of points (e.g., four or six), you can essen�ally force people to either agree or disagree with the statement; this type of scaling is referred to as forced choice. So how many points should your scale have? As a general rule, more points will translate into more variability in responses—the more choice people have, the more likely they are to distribute their responses among those choices. From a researcher's perspec�ve, the big ques�on is whether this variability is meaningful. For example, if you wanted to assess college students' a�tudes about a student fee increase, opinions will likely vary depending on the size of the fee and the ways in which it will be used. Thus, a five- or seven- point scale would be preferable to a two-point (yes or no) scale. However, past a certain point, increasing the
scale range ceases to be linked to meaningful variation in attitudes. In other words, the difference between a 5 and a 6 on a seven-point scale is fairly intuitive for your participants to grasp. But what is the real difference between an 80 and an 81 on a 100-point scale? When scales become too large, you risk introducing another source of error variance as participants impose their interpretations on the scaling. In sum, more points do not always translate into a better scale.

Back to our question: How many points should you have? The ideal compromise supported by many statisticians is to use a seven-point scale for bipolar scales. The reason has to do with the differences between scales of measurement. As you'll remember from our discussion in Chapter 2, the way variables are measured has implications for data analyses. For the most popular statistical tests to be legitimate, variables need to lie on either an interval scale (i.e., with equal intervals between points) or a ratio scale (i.e., with a true zero point). Based on mathematical modeling research, statisticians have concluded that the variability generated by a seven-point scale is most likely to mimic an interval scale (see, e.g., Nunnally, 1978). So a seven-point scale is often preferable because it allows us the most flexibility in data analyses. Note, however, that whereas seven-point scales produce reliable results with bipolar scales, unipolar scales tend to perform the best with five-point scales.

Finalizing the Questionnaire
Once you have finished constructing the questionnaire items, one last important step remains before beginning to collect data. This section discusses a few guidelines for assembling the items into a coherent questionnaire. The main issues at this stage are to think carefully about the order of the individual items; how many items to include; whether to include open-ended, fixed-format, or multiple-choice questions; and how to write the instructions in clear and concise language.

First, keep in mind that the first few questions will set the tone for the rest of the questionnaire. It is best to start with questions that are both interesting and nonthreatening to help ensure that respondents complete the questionnaire with open minds. For example:

BAD OPENING: "Do you agree that your child's teacher is incompetent?" (threatening and also a leading question)
BETTER OPENING: "How would you rate the performance of your child's teacher?"

BAD OPENING: "Would you support a 1% sales tax increase?" (boring)
BETTER OPENING: "How do you feel about raising taxes to help fund education?"

Second, strive whenever possible to have continuity in the different sections of your questionnaire. Imagine you are constructing a survey to give to college freshmen: you might have questions on family background, stress levels, future plans, campus engagement, and so on. It is best to have the questions grouped by topic on the survey. So, for instance, students would fill out a set of questions about future plans on one page
  • 33. and then a set of ques�ons about campus engagement on another page. This approach makes it easier for par�cipants to progress through the ques�ons without having to mentally switch between topics. Third, remember that individual ques�ons are always read in context. This means that if you start your college student survey with a ques�on about plans for the future and then ask about stress, respondents will likely have their future plans in mind when they think about their level of stress. Another example is a graduate school that would administer a gigan�c survey packet to every student enrolled in its Introductory Psychology course. One year, a faculty member included a measure of iden�ty, asking par�cipants to complete the statements "I am ______" and "I am not ______." As the students started to analyze data from this survey, they found that an astonishing 60% of students had filled in the blank with "I am not a homosexual." This response seemed pre�y unusual un�l they realized that the ques�onnaire immediately preceding this one in the packet was a measure of prejudice toward gay and lesbian individuals. So, as these students completed the iden�ty measure, they had homosexuality on their minds and felt compelled to point out that they were not homosexual. This proves once again that results can be skewed by the context. Finally, once you have assembled a dra� version of your ques�onnaire, do a test run. This test run, called pilot tes�ng, involves giving the ques�onnaire to a small sample of people, ge�ng their feedback, and making any necessary changes. One of the best ways to pilot test is to find a pa�ent group of friends to complete your ques�onnaire because this group will presumably be willing to give more extensive feedback. Another effec�ve
  • 34. way to pilot test a ques�onnaire is to administer it to individuals in the target group. In solici�ng their feedback, you should ask ques�ons like the following: Was anything confusing or unclear? Was anything offensive or threatening? How long did the ques�onnaire take you to complete? Did it get repe��ve or boring? Did it seem too long? Were there par�cular ques�ons that you liked or disliked? Why? The answers to these ques�ons will give you valuable informa�on to revise and clarify your ques�onnaire before devo�ng the resources for a full round of data collec�on. In the next sec�on, we turn our a�en�on to the ques�on of how to find and select par�cipants for this stage of the research. Research: Thinking Critically "Beautiful People Convey Personality Traits Better During First Impressions" Medical News Today A new University of Bri�sh Columbia study has found that people iden�fy the personality traits of people who are physically a�rac�ve more accurately than others during short encounters. The study, published in the December [2010] edi�on of
  • 35. Psychological Science, suggests people pay closer a�en�on to people they find a�rac�ve, and is the latest scien�fic evidence of the advantages of perceived beauty. Previous research has shown that individuals tend to find a�rac�ve people more intelligent, friendly, and competent than others. The goal of the study was to determine whether a person's a�rac�veness impacts others' ability to discern their personality traits, says Prof. Jeremy Biesanz, UBC Dept. of Psychology, who coauthored the study with PhD student Lauren Human and undergraduate student Genevieve Lorenzo. For the study, researchers placed more than 75 male and female par�cipants into groups of five to 11 people for three-minute, one-on-one conversa�ons. A�er each interac�on, study par�cipants rated partners on physical a�rac�veness and five major personality traits: openness, conscien�ousness, extraversion, agreeableness, and neuro�cism. Each person also rated his or her own personality. Researchers were able to determine the accuracy of people's percep�ons by comparing par�cipants' ra�ngs of others' personality traits with how individuals rated their own traits, says Biesanz, adding that steps were taken to control for the posi�ve bias that can occur in self-repor�ng. Despite an overall posi�ve bias towards people they found a�rac�ve (as expected from previous research), study par�cipants iden�fied the "rela�ve ordering" of personality traits of a�rac�ve
  • 36. par�cipants more accurately than others, researchers found. "If people think Jane is beau�ful, and she is very organized and somewhat generous, people will see her as more organized and generous than she actually is," says Biesanz. "Despite this bias, our study shows that people will also correctly discern the rela�ve ordering of Jane's personality traits—that she is more organized than generous—be�er than others they find less a�rac�ve." The researchers say this is because people are mo�vated to pay closer a�en�on to beau�ful people for many reasons, including curiosity, roman�c interest, or a desire for friendship or social status. "Not only do we judge books by their covers, we read the ones with beau�ful covers much closer than others," says Biesanz, no�ng the study focused on first impressions of personality in social situa�ons, like cocktail par�es. Although par�cipants largely agreed on group members' a�rac�veness, the study reaffirms that beauty is in the eye of the beholder. Par�cipants were best at iden�fying the personali�es of people they found a�rac�ve, regardless of whether others found them a�rac�ve. According to Biesanz, scien�sts spent considerable efforts a half-century ago seeking to determine what types of people perceive personality best, to largely mixed results. With this study, the team chose to inves�gate this long-standing ques�on from another direc�on, he says, focusing not on who judges personality best, but rather whether some people's
personalities are better perceived.

Think about it:

1. Suppose the following questions were part of the questionnaire given after the 3-minute one-on-one conversations in this study. Based on the goals of the study and the rules discussed in this chapter, identify the problem with each of the following questions and suggest a better item.

a. Jane is very neat.
(1 = strongly agree, 2 = agree, 3 = neither agree nor disagree, 4 = disagree, 5 = strongly disagree)
main problem:
better item:

b. Jane is generous and organized.
(1 = strongly agree, 2 = agree, 3 = neither agree nor disagree, 4 = disagree, 5 = strongly disagree)
main problem:
better item:

c. Jane is extremely attractive.
(TRUE / FALSE)
main problem:
better item:

2. What are the strengths and weaknesses of using a fixed-format questionnaire in this study versus open-ended responses?

3. The researchers state that they took steps to control for the "positive bias that can occur in self-reporting." How might social desirability influence the outcome of this particular study? What might the researchers have done to reduce the effect of social desirability?

George, R. (2010, December 31). Beautiful people convey personality traits better during first impressions. Medical News Today. Retrieved from http://www.medicalnewstoday.com/articles/212245.php

4.3 Sampling From the Population

By now, you should have a good feel for how to construct survey items. Once you have finalized your measures, the next step is to find a group of people to fill out the survey. But where do you find this group? And how many of them do you need? On the one hand, you want as many people as possible in order to capture
  • 39. the full range of a�tudes and experiences. On the other hand, researchers have to conserve �me and other resources, which o�en means choosing a smaller sample of people. In this sec�on, we will examine the strategies researchers can use in selec�ng samples for their studies. Researchers refer to the en�re collec�on of people who could possibly be relevant for a study as the popula�on. For example, if you were interested in the effects of prison overcrowding in this country, you would want to study the popula�on of prisoners in the United States. If you wanted to study vo�ng behavior in the next U.S. presiden�al elec�on, your popula�on would be United States residents eligible to vote. And if you wanted to know how well college students cope with the transi�on from high school, your popula�on would include every college student who was graduated from high school and is now enrolled in any college in the country. You may have spo�ed an obvious prac�cal complica�on with these popula�ons. How on earth are you going to get every college student, much less every prisoner, in the country to fill out your ques�onnaire? You can't; instead, researchers will collect data from a sample, a subset of the popula�on. Instead of trying to reach all prisoners, you might sample inmates from a handful of state prisons. Rather than a�empt to survey all college students in the country, researchers might restrict their studies to a collec�on of students at one university. The goal in choosing a sample for quan�ta�ve research is to make it as representa�ve as possible of the larger popula�on. This is the goal, though it is not always prac�cal. That is, if you choose students at one
university, they need to be reasonably similar to college students elsewhere in the country. If the phrase "reasonably similar" sounds vague, this is because the basis for evaluating a sample varies depending on the hypothesis and the key variables of your study. For example, if you wanted to study the relationship between family income and stress levels, you would need to make sure that your sample mirrored the population in the distribution of income levels. Thus, a sample of students from a state university might be a better choice than students from, say, Harvard (which costs about $50,000 per year). On the other hand, if your research question dealt with the pressures faced by students in selective private schools, then Harvard students could be a representative sample for your study.

Figure 4.1 is a conceptual illustration of both a representative and a nonrepresentative sample, drawn from a larger population. The population in this case consists of 144 individuals, split evenly between Xs and Os. Thus, we would want our sample to come as close as possible to capturing this 50/50 split. The sample of 20 individuals on the left is representative of the population because it is split evenly between Xs and Os. But the sample of 20 individuals on the right is nonrepresentative because it contains 75% Xs. Because this sample contains far fewer Os than we would expect from the population's even split, it does not accurately represent the population. This failure of the sample to represent the population is also referred to as sampling bias.

Figure 4.1: Representative and nonrepresentative samples of a population
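To make the idea behind Figure 4.1 concrete, here is a brief Python sketch (not from the textbook; the weighting used for the biased draw is invented purely for illustration). It simulates drawing 20 people from a 144-person population that is split 50/50 between Xs and Os, once by simple random sampling and once by a recruitment process that favors Xs. Exact percentages will vary from run to run, but the random draw will usually land near the 50/50 split, while the biased draw will not.

```python
import random

# Hypothetical population mirroring Figure 4.1: 144 people, half "X" and half "O".
population = ["X"] * 72 + ["O"] * 72

def percent_x(sample):
    """Return the percentage of Xs in a sample."""
    return 100 * sample.count("X") / len(sample)

random.seed(1)  # so the illustration is repeatable

# Simple random sample: every member of the population has an equal chance of selection.
random_sample = random.sample(population, 20)

# Biased draw: Xs are three times as likely to be chosen as Os, standing in for a
# recruitment process that systematically favors one group.
weights = [3 if person == "X" else 1 for person in population]
biased_sample = random.choices(population, weights=weights, k=20)

print(f"Population:    {percent_x(population):.0f}% X")
print(f"Random sample: {percent_x(random_sample):.0f}% X")
print(f"Biased sample: {percent_x(biased_sample):.0f}% X")
```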
  • 41. Stra�fied random sampling allows researchers to include all subgroups of a popula�on in a study. iStockphoto/Thinkstock So, where do these samples come from? As a researcher, you have two broad categories of sampling strategies at your disposal: probability sampling and nonprobability sampling. Probability Sampling Probability sampling is used when each person in the popula�on has a known chance of being in the sample. This is possible only in cases where you know the exact size of the popula�on. For instance, the 2010 popula�on of the United States was 308,745,538 (United States Census Bureau, 2010). If you were to have selected a U.S. resident at random, each resident would have had a one in 308,745,538 chance of being selected. Whenever you have informa�on about total popula�on, probability-sampling strategies are the most powerful approach because they greatly increase the odds of ge�ng a representa�ve sample. Within this broad category of probability sampling are three specific strategies: simple random sampling, stra�fied random sampling, and cluster sampling. Simple Random Sampling Simple random sampling, the most straigh�orward approach, involves randomly picking study par�cipants from a list of everyone in the popula�on. The term for
  • 42. this list is a sampling frame (e.g., imagine a list of every resident of the United States). To have a truly representa�ve random sample, several criteria need to be met: You must have a sampling frame, you must choose from it randomly, and you must have a 100% response rate from those you select. (As we discussed in Chapter 2, it can threaten the validity of your hypothesis test if people drop out of your study.) Stra�fied Random Sampling Stra�fied random sampling, a varia�on of simple random sampling, is used when subgroups of the popula�on might be le� out of a purely random sampling process. Imagine a city with a popula�on that is 80% Caucasian, 10% Hispanic, 5% African American, and 5% Asian. If you were to choose 100 residents at random, the chances are very good that your en�re sample would consist of Caucasian residents. As a result, you would inadvertently ignore the perspec�ve of all ethnic minority residents. To prevent this problem, researchers use stra�fied random sampling—breaking the sampling frame into subgroups and then sampling a random number from each subgroup. In the preceding city example, you could divide your list of residents into four ethnic groups and then pick a random 25 from each of these groups. The result would be a sample of 100 people who captured opinions from each ethnic group in the popula�on. Cluster Sampling Cluster sampling, another varia�on of random sampling, is used when you do not have access to a full sampling frame (i.e., a full list of everyone in the
  • 43. popula�on). Imagine that you wanted to do a study of how cancer pa�ents in the United States cope with their illness. Because there is not a list of every cancer pa�ent in the country, you have to get a li�le crea�ve with your sampling. The best way to think about cluster sampling is as "samples within samples." Just as with stra�fied sampling, you divide the overall popula�on into groups; however, cluster sampling is different in that you are dividing into groups based on more than one level of analysis. In the cancer example, you could start by dividing the country into regions, then randomly selec�ng ci�es from within each region, and then randomly selec�ng hospitals from within each city, and finally randomly selec�ng cancer pa�ents from each hospital. The result would be a random sample of cancer pa�ents from, say, Phoenix, Miami, Dallas, Cleveland, Albany, and Sea�le; taken together, these pa�ents would cons�tute a fairly representa�ve sample of cancer pa�ents around the country. Nonprobability Sampling The other broad category of sampling strategies is known as nonprobability sampling. These strategies are used in the (remarkably common) case in which you do not know the odds of any given individual being in the sample. This is an obvious shortcoming—if you do not know the exact size of the popula�on and do not have a list of everyone in it, there is no way to know that your sample is representa�ve! But despite this limita�on, researchers use nonprobability sampling on a regular basis. Nonprobability sampling is used in qualita�ve, quan�ta�ve, and mixed methods research whose focus is
  • 44. on selec�ng rela�vely small samples in a purposeful manner rather than selec�ng samples that are representa�ve of the en�re popula�on. Unlike probability sampling, nonprobability sampling does not include randomly selected, stra�fied samples that provide for generaliza�on. Rather, the procedures used to select samples are purposeful; that is, the samples are selected to obtain informa�on-rich cases that yield in-depth insights. The sampling techniques used for quan�ta�ve and qualita�ve research probably make up one of the biggest differences between the two methods. The following sec�ons will discuss some of the most common nonprobability strategies. All of the nonprobability strategies described (except convenience sampling) are considered categories of purposive sampling, or sampling with a purpose. Convenience sampling is considered a form of so-called accidental or haphazard (serendipitous) sampling, since the cases are selected based on ready availability. Even so, convenience sampling can be purposeful, in that researchers do their best to recruit in ways that are convenient and yet a�ract individuals that meet meaningful criteria. In many cases, it is not possible to obtain a sampling frame. When researchers study rare or hard-to-reach popula�ons, or inves�gate poten�ally s�gma�zing condi�ons, they o�en recruit by word of mouth. The term for this is snowball sampling—imagine a snowball rolling down a hill, picking up more snow (or par�cipants) as it goes. If you wanted to study how o�en homeless people took advantage of social services, you would be hard-pressed to find a sampling frame that listed the homeless popula�on. Instead, you could recruit a small group of homeless people and ask each of them to pass the word along to others, and so on. The resul�ng sample is unlikely to be representa�ve,
  • 45. but researchers o�en have to compromise for the sake of obtaining access to a popula�on. One of the most popular nonprobability strategies is known as convenience sampling, or simply enrolling people who show up for the study. Any �me you see results of a viewer poll on your favorite 24-hour news sta�on, the results are likely based on a convenience sample. CNN and Fox News do not randomly select from a list of their viewers; they post a ques�on on- screen or online, and people who are mo�vated enough to respond will do so. For that ma�er, a large majority of psychology research studies are based on convenience samples of undergraduate college students. O�en, experimenters in psychology departments adver�se their studies on a bulle�n board or website, and students sign up for studies for extra cash or to fulfill a research requirement for a course. Students o�en pick a par�cular study based on whether it fits their busy schedules or whether the adver�sement sounds interes�ng. Another example of convenience sampling is a researcher seeking to collect informa�on from human resources managers to study the issue of bullying in organiza�ons. The researcher might use a database of poten�al par�cipants who belong to one or more chapters of the Society of Human Resource Management (SHRM), an organiza�on for human resources professionals. In this example, the convenience component of the sample is using a body of exis�ng data; although these data are not representa�ve of all human resources managers or all organiza�ons, they are a useful proxy for a broad sample of human resource managers across industries.
  • 46. Other popular nonprobability strategies include extreme or deviant case sampling, typical case sampling, heterogeneous sampling, expert sampling, criterion sampling, and theory-based sampling. Whatever strategy you decide on, the goal here is to make you mindful that all the decisions that you make as a researcher have inherent strengths and weaknesses. Extreme or deviant case sampling is used when we want informa�on-rich data on unusual or special cases. For example, if we are interested in studying university diversity plans, it would be important to examine plans that work excep�onally well, as well as plans that have high expecta�ons but are not working for some reason or another. The focus is on examining outstanding successes and failures to learn lessons about those condi�ons. On the other hand, some�mes researchers are not interested in learning about unusual cases but rather want to learn about typical cases. Typical case sampling involves sampling the most frequent or "normal" case. It is o�en used to describe typical cases to people unfamiliar with a se�ng, program, or process. For example, a typical case sample may be used to study the prac�cum experiences of psychology students from universi�es that are rated average. One of the biggest drawbacks of this method involves the difficulty of knowing how to iden�fy or define a typical or normal case. Heterogeneous sampling, which is also called maximum varia�on sampling, may include both extreme and typical cases. It is used to select a wide variety of cases in rela�on to the phenomenon being inves�gated. The idea here is to obtain a sample that includes diverse
  • 47. characteris�cs from mul�ple dimensions. For example, if the researchers are interested in examining the experiences of community health clinic pa�ents, they may wish to conduct a focus group and then select 10 different groups that represent various demographics from several health clinics in the area. Heterogeneous sampling is beneficial when it is desirable to view pa�erns across a diverse set of individuals. It is also valuable in describing experiences that may be central or core to most individuals. Heterogeneous sampling can be problema�c in small samples, though, because it is difficult to obtain a wide variety of cases using this method. Expert sampling involves sampling a panel of individuals who have known exper�se (knowledge and training) in a par�cular area. While expert sampling can be used in both qualita�ve and quan�ta�ve research, it is used more o�en in qualita�ve research. Expert sampling involves a process of iden�fying individuals with known exper�se in an area of interest, obtaining informed consent from each expert, and then collec�ng informa�on from them either individually or as a group. The goal of criterion sampling is to select cases "that meet some predetermined criterion of importance" (Pa�on, 2002, p. 238). A criterion could be a par�cular illness or experience that is being inves�gated, or even a program or a situa�on. For example, criterion sampling could be used to inves�gate a range of topics: the experiences of individuals who have a�empted suicide, why students are exceedingly absent from online course rooms, or cases that have exceeded the standard wai�ng �me at a doctor's office. The idea with this method is that the par�cipants meet a par�cular criterion of interest.
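As a small, concrete illustration of criterion sampling, the sketch below (the data and field names are hypothetical, not drawn from the text) filters a pool of potential cases down to those that meet a predetermined criterion, in this case a clinic visit that exceeded a 30-minute standard waiting time. The same pattern applies whatever the criterion is: define it up front, then keep only the cases that satisfy it.

```python
# Hypothetical records of clinic visits; only the wait-time field matters here.
visits = [
    {"id": 1, "wait_minutes": 12},
    {"id": 2, "wait_minutes": 45},
    {"id": 3, "wait_minutes": 31},
    {"id": 4, "wait_minutes": 8},
]

# Criterion sampling: keep only cases that meet the predetermined criterion of importance.
criterion_sample = [visit for visit in visits if visit["wait_minutes"] > 30]

print(criterion_sample)  # visits 2 and 3 exceed the 30-minute standard
```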
  • 48. Theory-based sampling is a version of criterion sampling that focuses on obtaining cases that represent theore�cal constructs. As Pa�on (2002) discussed, theory- based sampling involves the researcher obtaining "sampling incidents, slices of life, �me periods, or people on the basis of their poten�al manifesta�on or representa�on of important theore�cal constructs" (p. 238). This type of sampling arose from grounded theory research and follows a more deduc�ve or theory- tes�ng approach. As Glaser (1978) described, theory-based sampling is "the process of data collec�on for genera�ng theory whereby the analyst jointly collects, codes, and analyzes his data and decides which data to collect next and where to find them" (p. 36). Thus, data collec�on is driven by the emerging theory, and par�cipants are selected based on their knowledge of the topic. Choosing a Sampling Strategy Although quan�ta�ve researchers strive for representa�ve samples, there is no such thing as a perfectly representa�ve one. There is always some degree of sampling error, defined as the degree to which the characteris�cs of the sample differ from the characteris�cs of the popula�on. Instead of aiming for perfec�on, then, researchers aim for an es�mate of how far from perfec�on their samples are. These es�mates are known as the error of es�ma�on, or the degree to which the data from the sample are expected to deviate from the popula�on as a whole. One of the main advantages of a probability sample is that we are able to calculate these errors of es�ma�on (or margins of error). In fact, you have likely
  • 49. encountered errors of es�ma�on every �me you see the results of an opinion poll. For example, CNN may report that "Candidate A is leading the race with 60% of the vote, ± 3%." This means Candidate A's percentage in the sample is 60%, but based on sta�s�cal calcula�ons, her real percentage is between 57% and 63%. The smaller the error (3% in this example), the more closely the results from the sample match the popula�on. Naturally, researchers conduc�ng these opinion polls want the error of es�ma�on to be as small as possible; imagine how nonpersuasive it would be to learn that "Candidate A has a 10-point lead, ± 20 points." In general, these errors are minimized when three condi �ons are met: The overall popula�on is smaller; the sample itself is larger; and there is less variability in the sample data. When samples are created using a probability method, all of this informa�on is available because these methods require knowing the popula�on. If probability sampling is so powerful, why are nonprobability strategies so popular? One reason is that convenience samples are more prac�cal; they are cheaper, easier, and almost always possible to enroll with rela�vely few resources because you can avoid the costs of large-scale sampling. A second reason is that convenience is o�en a good star�ng point for a new line of research. For example, if you wanted to study the predictors of rela�onship sa�sfac�on, you could start by tes�ng hypotheses in a controlled se�ng using college student par�cipants, and then you could extend your research to the study of adult married couples. Finally, and relatedly, in many types of qualita�ve research, it is acceptable to have a nonrepresenta�ve
  • 50. sample because you do not need to generalize your results. If you want to study the prevalence of alcohol use in college students, it may be perfectly acceptable to use a convenience sample of college students. Although, even in this case, you would have to keep in mind that you were studying drinking behaviors among students who volunteered to complete a study on drinking behaviors. There are also cases, however, where it is cri �cal to use probability sampling despite the extra effort it requires. Specifically, researchers use probability samples any �me it is important to generalize and any �me it is important to predict behavior of a popula�on. The best example for understanding these criteria is to think of poli�cal polls. In the lead-up to an elec�on, each campaign is invested in knowing exactly what the vo�ng public thinks of its candidate. In contrast to a CNN poll, which is based on a convenience sample of viewers, polls conducted by a campaign will be based on randomly selected households from a list of registered voters. The resul�ng sample is much more likely to be representa�ve, much more likely to tell the campaign how the en�re popula�on views its candidate, and therefore, much more likely to be useful. Determining a Sufficient Sample Size Several factors come into play when determining what a sufficient sample size is for a research study. Probably the most important factor is whether the study is a quan�ta�ve or a qualita�ve one. In quan�ta�ve research, there is one basic rule for sample size: The larger, the be�er. This is because a larger sample will be more representa�ve of the popula�on being studied. Having said
  • 51. that, even small samples using quan�ta�ve data can yield meaningful correla�ons if the researcher makes efforts to achieve a representa�ve sample or a meaningful convenience sample. How does one make a prac�cal decision, though, regarding the exact sample size a par�cular research situa�on requires? Gay, Mills, and Airasian (2009) provided the following guidelines for determining sufficient sample size based on the size of the popula�on: For popula�ons that include 100 or fewer individuals, the en�re popula�on should be sampled. For popula�ons that include 400–600 individuals, 50% of the popula�on should be sampled. For popula�ons that include 1,500 individuals, 20% of the popula�on should be sampled. For popula�ons larger than 5,000, about 8% of the popula�on should be sampled. Thus, as we can see, the larger the popula�on size, the smaller the percentage required to obtain a representa�ve sample. This does not mean that the sample shrinks as the popula�on increases, but rather the opposite: The sample size required actually increases when studying larger popula�ons. Although larger sample sizes are generally preferred in quan�ta�ve studies, the size of an adequate sample also depends on the similari�es and dissimilari�es among members of the popula�on. If the popula�on is fairly diverse for the construct being measured, then a larger sample will likely be required in order to obtain a representa�ve sample. Popula�ons whose members have similar characteris�cs will require smaller
  • 52. samples. Researchers have developed fairly sophis�cated methods for determining sufficient sample sizes through the use of sta�s�cal power analysis; however, such methods are beyond the scope of this book. Gay et al.'s (2009) guidelines are sufficient for most types of research. As discussed in Chapter 3, recommended samples for qualita�ve research are typically much smaller than for quan�ta�ve research, which strives to be representa�ve of larger popula�ons. In qualita�ve research, sample sizes are not based on numbers but rather on how well the variables of interest are represented (Houser, 2009). Unlike quan�ta�ve approaches, which require large samples, qualita�ve techniques do not have any rules regarding sample size. Thus, sample size depends more on what the researcher wants to know, the purpose of the inquiry, what the findings will be useful for, how credible they will be, and what can be done with available �me and resources. Qualita�ve research can be very costly and �me- consuming, so choosing informa�on-rich cases will yield the greatest return on investment. As noted by Pa�on (2002), "The validity, meaningfulness, and insights generated from qualita�ve inquiry have more to do with the informa�on-richness of the cases selected and the observa�onal/analy�cal capabili�es of the researcher than with sample size" (p. 245). Nonresponse Bias in Survey Research Some�mes par�cipants do not submit a survey or do not fill it out completely. When this occurs, it is considered nonresponse bias, which can affect the size and characteris�cs of the sample, as well as the external
  • 53. validity of the study. External validity refers to how well the results obtained from a sample can be extended to make predic�ons about the en�re popula�on, which will be further discussed in Chapter 5. Nonresponses are par�cularly problema�c if a large number of par�cipants or a specific group of par�cipants fails to respond to par�cular ques�ons. For example, perhaps all females skip a certain ques�on, or several par�cipants skip certain ques�ons because they are too personal. This omission creates bias not only in the characteris�cs of the sample (e.g., the sample may no longer be representa�ve) but also in the size of the sample that is required for the study. Nonresponses can occur for many reasons, including the survey being too long, the ques�ons being worded awkwardly, the survey topic being uninteres�ng, or, in the case with Web-based surveys, the par�cipants not knowing how to access or log into the website to complete the survey. There are a few ways to minimize the threat of nonresponse bias. These include increasing the sample size to account for the possibility of nonresponses, making sure that survey direc�ons and ques�ons are worded clearly, making sure the survey is not too long, providing rewards or incen�ves for comple�ng the survey, sending out reminders to complete the survey, and providing a cover le�er that describes the exact reasons for conduc�ng the survey. 4.4 Analyzing Survey Data Once you have designed a survey, chosen an appropriate
  • 54. sample, and collected some data, now comes the fun part. As with the quan�ta�ve descrip�ve designs covered in Chapter 3, the goal of analyzing survey data is to subject your hypotheses to a sta�s�cal test. Surveys can be used both to describe and predict thoughts, feelings, and behaviors. However, since we have already covered the basics of descrip�ve analysis in Chapter 3, this sec�on will focus on predic�ve analyses, which are designed to assess the associa�ons between and among variables. Researchers typically use three approaches to test predic�ve hypotheses: correla�onal analyses, chi-square analyses, and regression analyses. Each one has its advantages and disadvantages, and each is most appropriate for a different kind of data. Correla�onal analysis allows one to examine the strength, direc�on, and sta�s�cal significance of a rela�onship; chi-square analysis determines whether two nominal variables are independent from or related to one another; and simple linear regression is the method used to "predict" scores for one variable based on another. In this sec�on, we will walk through the basics of each analysis. Correlational Analysis In the beginning of this chapter, we encountered an example of a survey research ques�on: What is the rela�onship between the number of hours that students spend studying and their grades in the class? In this case, the hypothesis claims that we can predict something about a student's grades by knowing how many hours he or she spends studying. Imagine we collected a small amount of data to test this hypothesis, shown in Table 4.1. (Of course, if we really
wanted a good test of this hypothesis, we would need more than 10 people in the sample, but this will do as an illustration.)

Table 4.1: Data for quiz grade/hours studied example

Participant    Hours Studied    Quiz Grade
1              1                2
2              1                3
3              2                4
4              3                5
5              3                6
6              3                6
7              4                7
8              4                8
9              4                9
10             5                9

The Logic of Correlation

The important question here is whether and to what extent we can predict grades based on study time. One common statistic for testing these kinds of hypotheses is a correlation, which assesses the linear relationship between two variables. A stronger correlation between two variables translates into a stronger association
  • 56. between them. Or, to put it differently, the stronger the correla�on between study �me and quiz grade, the more accurately you can predict grades based on knowing how long the student spends studying. Before we calculate the correla�on between these variables, it is always a good idea to visualize the data on a graph. The sca�erplot in Figure 4.2 displays our sample data from the studying/quiz grade study. Figure 4.2: Scatterplot for quiz grade/hours studied example Figure 4.3: Curvilinear relationship between arousal and performance Each point on the graph represents one par�cipant. For example, the point in the top right corner represents a student who studied for 5 hours and earned a 9 on the quiz. The two points in the bo�om right represent students who studied for only 1 hour and earned a 2 and a 3 on the quiz. There are two reasons to graph data before conduc�ng sta�s�cal tests. First, a graph conveys a general sense of the pa�ern—in this case, students who study less appear to do worse on the quiz. As a result, we will be be�er informed going into our sta�s�cal calcula�ons. Second, the graph ensures that there is a linear rela�onship between the variables. This is a very important point about correla�ons: The math is based on how well the data points fit a
  • 57. straight line, which means nonlinear rela�onships might be overlooked. Figure 4.3 demonstrates a robust nonlinear finding in psychology regarding the rela�onship between task performance and physiological arousal. As this graph shows, people tend to perform their best on just about any task when they have a moderate level of arousal. When arousal is too high, it is difficult to calm down and concentrate; when arousal is too low, it is difficult to care about the task at all. If we simply ran a correla�on with data on performance and arousal, the correla�on would be zero because the points do not fit a straight line. Thus, it is cri�cal to visualize the data before jumping ahead to the sta�s�cs. Otherwise, you risk overlooking an important finding in the data. Interpre�ng Coefficients Once we are sa�sfied that our data look linear, it is �me to calculate our sta�s�cs. This is typically done using a computer so�ware program, such as SPSS (IBM), SAS/STAT (SAS), or Microso� Excel. The number used to quan�fy our correla�on is called the correla�on coefficient. This number ranges from –1 to +1 and contains two important pieces of informa�on: The size of our rela�onship is based on the absolute value of our correla�on coefficient. The farther our coefficient is from zero in either direc�on, the stronger the rela�onship between variables. For example, both a + .80 and a – .80 indicate strong rela�onships. The direc�on of the rela�onship is based on the sign of our
  • 58. correla�on coefficient. A + .80 would indicate a posi�ve correla�on, meaning that as one variable increases, so does the other variable. A – .80 would indicate a nega�ve correla�on, meaning that as one variable increases, the other variable decreases. (Refer back to Sec�on 2.1, Overview of Research Designs, for a review of these two terms.) So, for example, a + .20 is a weak posi�ve rela�onship and a – .70 is a strong nega�ve rela�onship. When we calculate the correla�on for our quiz grade study, we get a coefficient of .96, indica�ng a strong posi�ve rela�onship between studying and quiz grade. What does this mean in plain English? Students who spend more hours studying tend to score higher on the quiz. How do we know whether to get excited about a correla�on of .96? As with all of our sta�s�cal analyses, we look up this value in a cri�cal value table, or, more commonly, let the computer so�ware do this for us. This gives us a p value represen�ng the odds that our correla�on is the result of random chance. In this case, the p value is less than .001. This means that the chance of our correla�on being a random fluke is less than 1 in 1,000, so we can feel pre�y confident in our results. Note, however, that because the p value associated with r is dependent on sample size, even a �ny, unimportant correla�on can be sta�s�cally significant when drawing conclusions from a large popula�on. We now have all the informa�on we need to report this correla�on in a research paper. The standard way of repor�ng a correla�on coefficient includes informa�on about the sample size and p value, as well as the
coefficient itself. Our quiz grade study would be reported as shown in Figure 4.4.

Figure 4.4: Correlation coefficient diagram

So where does this leave our hypothesis? We started by predicting that students who spend more time studying would perform better on their quizzes than those who spend less time studying. We then designed a study to test this hypothesis by collecting data on study habits and quiz grades. Finally, we analyzed these data and found a significant, strong, positive correlation between hours studied and quiz grade. Based on this study, our hypothesis has been supported: students who study more have higher quiz grades! Of course, because this is a correlational study, we are unable to make causal statements. It could be that studying more for an exam helps you to learn more. Or, it could be the case that previous low quiz grades make students give up and study less. Or, the third variable of motivation could cause students to both study more and perform better on the quizzes. To tease these explanations apart and determine causality, we will need an experimental type of research design, which we will cover in Chapter 5.
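For readers who want to verify this result themselves, the short sketch below reruns the analysis on the Table 4.1 data in Python using scipy, one of many tools that can do this; the chapter itself mentions packages such as SPSS, SAS, and Excel. It reproduces the coefficient of about .96 and a p value below .001 reported above.

```python
from scipy.stats import pearsonr

# Data from Table 4.1: hours studied and quiz grade for the 10 participants.
hours_studied = [1, 1, 2, 3, 3, 3, 4, 4, 4, 5]
quiz_grade    = [2, 3, 4, 5, 6, 6, 7, 8, 9, 9]

# Pearson correlation coefficient and the p value for testing it against zero.
r, p_value = pearsonr(hours_studied, quiz_grade)

print(f"r = {r:.2f}, p = {p_value:.5f}")  # approximately r = 0.96, p < .001
```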
Regression Analysis

Correlations are the best tool for testing the linear relationship between pairs of quantitative variables. However, in many cases, we are interested in comparing the influence of several variables at once. Imagine that you wanted to expand the investigation about hours studying and quiz grade by looking at other variables that might predict students' quiz grades. We have already learned that the hours students spend studying positively correlate with their grades. But what about SAT scores? We might predict that students with higher standardized test scores will do better in all of their college classes. Or what about the number of classes that students have previously taken in the subject area? We might predict that increased familiarity with the subject would be associated with higher scores. In order to compare the influence of all three variables, we will use a slightly different analytic approach. Multiple regression analysis is a variation on correlational analysis in which more than one predictor variable is used to predict a single outcome variable. In this example, we would be attempting to predict the outcome variable of quiz scores based on three predictor variables: SAT scores, number of previous classes, and hours studied.

Multiple regression analysis requires an extensive set of calculations; consequently, it is always done using computer software. A detailed look at these calculations is beyond the scope of this book, but a conceptual overview will help you understand the unique advantages of this form of analysis. Essentially, the calculations for multiple regression are based on the correlation coefficients between each of our predictor variables, and between each of these variables and the outcome variable.
These correlations for our revised quiz grade study are shown in Table 4.2.

Table 4.2: Correlations for a multiple regression analysis

                    Quiz Grade    SAT Score    Previous Classes    Hours Studied
Quiz Grade          —             .14          .24*                .25*
SAT Score                         —            .02                 –.02
Previous Classes                               —                   .24*
Hours Studied                                                      —

If we scan the top row, we can see the correlations between quiz grade and the three predictor variables: SAT score (r = .14), previous classes (r = .24), and hours studied (r = .25). The remainder of the table shows correlations between the various predictor variables; for example, hours studied and previous classes correlate at r = .24. When we conduct multiple regression analysis using computer software, the software will use all of these correlations in performing its calculations.

The advantage of multiple regression is that it considers both the individual and the combined influence of the predictor variables. Figure 4.5 is a visual diagram of the individual predictors of quiz grades. The numbers along each line are known as regression coefficients, or beta weights. These are standardized coefficients that allow comparison across predictors. Their values are very similar to correlation coefficients but differ in an important way: They represent the effects of each predictor variable while controlling for the effects of all the other predictors. That is, the value of b = .21 linking hours studied with quiz grades is the independent contribution of hours studied, controlling for SAT scores and previous classes. If we compared the size of these regression coefficients, we would see that, in fact, hours spent studying were still the largest predictor of quiz grades (b = .21) compared with both SAT scores (b = .14) and previous classes (b = .19).

Figure 4.5: Predictors of quiz grades

Even if individual variables have only a small influence, they can add up to a larger combined influence. So, if we were to analyze the predictors of quiz grades in this study, we would find a combined multiple correlation coefficient of r = .34. The multiple correlation coefficient represents the combined association between the outcome variable and the full set of predictor variables. Note that in this case, the combined r of .34 is larger than any of the individual correlations (r) in Table 4.2, which ranged from .14 to .25. These numbers mean that we are better able to predict quiz grades from examining all three variables than we are from examining any single variable. Or, as the saying goes, the whole is greater than the sum of its parts!
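To give a feel for what this analysis looks like in practice, the sketch below fits a multiple regression in Python using only numpy. The raw scores are invented for illustration (the chapter reports only the correlations in Table 4.2, not the underlying data), so the printed coefficients will not match Figure 4.5; the point is the procedure: standardize each variable, then fit an ordinary least-squares model so that the resulting coefficients are beta weights that can be compared across predictors.

```python
import numpy as np

# Hypothetical raw data for 10 students (illustration only; not the textbook's data).
sat_score        = np.array([1050, 1120, 990, 1300, 1210, 1080, 1150, 1260, 1010, 1190])
previous_classes = np.array([0, 1, 0, 2, 3, 1, 2, 2, 0, 1])
hours_studied    = np.array([1, 1, 2, 3, 3, 3, 4, 4, 4, 5])
quiz_grade       = np.array([2, 3, 4, 5, 6, 6, 7, 8, 9, 9])

def zscore(x):
    """Standardize a variable so the regression coefficients become beta weights."""
    return (x - x.mean()) / x.std()

# Predictor matrix of standardized variables (no intercept is needed after z-scoring).
X = np.column_stack([zscore(sat_score), zscore(previous_classes), zscore(hours_studied)])
y = zscore(quiz_grade)

# Ordinary least squares: each beta is the effect of one predictor, controlling for the others.
betas, *_ = np.linalg.lstsq(X, y, rcond=None)

for name, b in zip(["SAT score", "previous classes", "hours studied"], betas):
    print(f"beta for {name}: {b:.2f}")
```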
Multiple regression analysis is an incredibly useful and powerful analytic approach, but it can also be a tough concept to grasp. Before we move on, let's revisit the concept in the form of an analogy. Imagine you've just eaten the most delicious hamburger of your life and are determined to understand what made it so good. Lots of factors will contribute to the taste of your hamburger: the quality of the meat, the type and amount of cheese, the freshness of the bun, perhaps the smoked chili peppers layered on top. If you were to
  • 63. approach this inves�ga�on using mul�ple regression analysis, you would be able to separate out the influence of each variable (e.g., How important is the cheese compared with the smoked peppers?) as well as take into account the full set of ingredients (e.g., Does the freshness of the bun really ma�er when the other elements taste so good?). Ul�mately, you would be armed with the knowledge of which elements are most important in cra�ing the perfect hamburger. And you would understand more about the perfect hamburger than if you had examined each ingredient in isola�on. Chi-Square Analyses Both correla�ons and regressions are well suited to tes�ng hypotheses about predic�on, as long as it is possible to demonstrate a linear rela�onship between two variables. But linear rela�onships require that variables be measured on one of the quan�ta�ve scales—that is, ordinal, interval, or ra�o scales (see Sec�on 2.3 (h�p://content.thuzelearning.com/books/Malec.5743.13.1/sec�o ns/sec2.3#sec2.3) , Scales and Types of Measurement, for a review). What if we wanted to test the associa�on between nominal, or categorical, variables? In these cases, we would need an alterna�ve sta�s�c called the chi- square sta�s�c, which determines whether two nominal variables are independent from or related to one another. Chi-square is o�en abbreviated with the symbol X2, which shows the Greek le�er chi with the superscript 2 for squared. (This sta�s�c is also referred to as the chi-square test for independence —a slightly longer but more descrip�ve synonym.) The idea behind this test is similar to that of the correla�on coefficient. If two variables are independent,
  • 64. then knowing the value of one variable does not tell you anything about the value of the other. As we will see in the following examples, a larger chi-square reflects a larger devia�on from what we would expect by chance and is thus an index of sta�s�cal significance. The Logic of Chi-Square To determine whether two variables are associated, the chi-square works by comparing the observed frequencies (collected data) with the expected frequencies if the variables were unrelated. If the results significantly deviate from these expected frequencies, then we conclude that our variables are related. And, consequently, we are able to predict one variable based on knowing the values of the other. Let's look at a couple of examples to make this more concrete. First, let's say we wanted to know whether gender is related to poli�cal party affilia�on. We might randomly select 100 men and 100 women and ask them whether they iden�fied as Republican or Democrat. Because both of these variables are nominal—that is, they iden�fy only categories, not quan�ta�ve measures— chi-square will be our best choice to test the associa�on between them. The first step in conduc�ng this analysis is to arrange our data in a con�ngency table, which displays the number of individuals in each of the combina�ons of our nominal variables. We encountered these tables before in our examples of observa�onal studies in Chapter 3 (Sec�on 3.1 (h�p://content.thuzelearning.com/books/Malec.5743.13.1/sec�o ns/sec3.1#sec3.1) ) but stopped short of conduc�ng the sta�s�cal analyses. So imagine we get the results shown in Table 4.3a from our survey of gender and party affilia�on.
Table 4.3a: Gender and party affiliation

              Male    Female
Democrat      60      60
Republican    40      40

In this case, there is no association between sex and party affiliation. It does not matter that the sample consists of 60% Democrats and 40% Republicans. What matters for our hypothesis test is that the pattern for males is the same as the pattern for females: for both sexes, the sample contains 1.5 times as many Democrats as Republicans. In other words, knowing a person's sex does not tell us anything about their political affiliation. For illustration purposes, imagine we tested the same hypothesis again but could recruit only 50 women, compared with 100 men. Now, if we found the same 60%/40% split among men again, and assuming that the variables were still not related, here's the question: What would we expect the split to look like among women? If the ratio of Democrats to Republicans remains at 1.5 to 1, we would expect to see the women divided into 30 Democrats and 20 Republicans (i.e., the same ratio; shown in Table 4.3b). This concept is referred to as the expected frequency, or the frequency you would expect to see if the variables were not related.
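To tie the logic to the computation, the sketch below runs a chi-square test of independence on the Table 4.3a frequencies in Python using scipy (one convenient option; the chapter does not prescribe a particular package). Because the observed frequencies already equal the frequencies expected under independence, the chi-square statistic comes out to 0, the numerical way of saying that sex and party affiliation are unrelated in these data.

```python
from scipy.stats import chi2_contingency

# Contingency table from Table 4.3a: rows are party, columns are male/female counts.
observed = [
    [60, 60],  # Democrat
    [40, 40],  # Republican
]

chi2, p_value, dof, expected = chi2_contingency(observed)

print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}, df = {dof}")
print("expected frequencies under independence:")
print(expected)
```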