Introduction
When you were a child, were you ever
• sent to your room for bad behavior, a consequence that continued until you changed your behavior?
• slapped on the hand for touching something that you were not
supposed to touch?
• yelled at if you walked into the street without first looking for
cars?
• given an allowance when you completed your chores?
• allowed to go on dates but only if you were home by curfew?
• given a sticker or badge when you did well on an assignment?
All of these examples could be categorized as behaviorist
techniques for reinforcing learning.
Learning can refer to the process of developing knowledge or a skill through instruction or study, or the process of modifying behavior through experience.
Understanding how learning is studied
is an important step if you want to suc-
cessfully apply psychological methods to
your own learning or to that of others,
whether in a classroom, in the workplace,
or even in your role as a parent or grand-
parent. It is also important to under-
stand that theories have evolved over
time and that inaccuracies often exist
in the literature that presents behavior
and learning studies (Abramson, 2013).
Applications of technology and methodological approaches continue to develop researchers’ awareness of possible inaccuracies and alternate approaches.
Your journey to a better understanding of learning begins with
behaviorism. This theoretical
foundation, which was first discussed in this book’s
introduction, argues that learning has
successfully occurred when the appropriate behavior is observed
(Ertmer & Newby, 1993).
However, behaviorism is an intricate theory, and its approach to
learning cannot be general-
ized so easily. There are many perspectives related to
behaviorism, and such variability makes
it critical that you understand behaviorism’s theoretical
foundation in more depth. Although
new methods are often used in the 21st century, behaviorism
still offers the field of learning
many relevant strategies for successful learning, educating, and
counseling today (Abramson,
2013).
In this chapter, we will first discuss the history of behaviorism,
as well as its evolution in the
scope of learning theory. In addition, the chapter will cover
behaviorism’s foundational ideas,
including connectionism, the law of effect, principles of
conditioning, and modeling and shap-
ing, and explain how behaviorism has been applied within the
domains of marketing and
education.
Jacob Wackerhausen/iStock/Thinkstock
Making mistakes is part of the learning process. It
allows people to modify behavior or thought pro-
cesses in order to develop knowledge or skills.
why this framework is often misunderstood and difficult to
clearly explain. They also provide you
with a foundation that will help you better understand the
advances and new reflections that
continue to be explored.
Excerpts from “On Behaviorism in the Cognitive Revolution:
Myth and Reactions”
By J. P. Watrin and R. Darwich
Throughout history, psychology has proved difficult to define. For a long time, it was treated as the study of the mind or human psyche. Some authors,
though, saw the emergence of
behaviorism as a revolution in psychological science (e.g.,
Gardner, 1985; Moore, 1999). Starting with J. B. Watson (1878–1958), the behaviorist school flourished at the beginning of the 20th century. It marked a remarkable rupture in the history of psychology, since it set the mind aside from scientific inquiry. From then on, behaviorism began a tradition of the study of behavior,
comprising several—and sometimes even conflicting—
theoretical systems (Moore, 1999).
In that context, behavior analysis emerged as one of the
behavioristic approaches, having
been developed from the works of B. F. Skinner (1904–1990).
With an emphasis on operant
behavior and an antimentalistic position [which rejects the mind
as the cause of behavior], it
became a leading system of behaviorism during the 1950s. [. . .]
From Behaviorism to Behavior Analysis
Behavior analysis constitutes a field and a psychological system
devoted to the study of [. . .] of behavior” as its goal. That
drastic movement would greatly contribute to the beginning of a
new tradition, whose name
seems to have been created by Watson himself: “behaviorism”
(Schneider & Morris, 1987).
In the following decades, several psychologists would be
identified as behaviorists. Names
such as Clark Hull (1884–1952) and Edward Tolman (1886–
1959) became associated with
the behaviorist movement, as they developed their own explanatory models of behavior
(e.g., Hull, 1943; Tolman, 1932). New forms of behaviorism
were thus being shaped and were
sometimes at odds with those that already existed (Moore,
1999). In the 1930s, the contribu-
tions of Skinner established his place among those
developments. Conceiving behavior as a
lawful process, Skinner’s experimental works on reflexes led
him to new concepts and meth-
ods of investigation (see Iversen, 1992). Reflex—and,
subsequently, all behavior—was no lon-
ger something that happened inside the organism; rather, it was
seen as a relation in which
a response is defined in function of a stimulus and vice versa
(Skinner, 1931). [. . .] In 1938,
Skinner published The Behavior of Organisms, in which he
summarized many of his positions
and refined the concept of operant behavior. Skinnerian
behaviorism (see section i.2) was
acquiring its shape. Its first developments laid the
fundamental concepts and methods of behavior
analysis. Because they relied on basic research, they
were also the first steps of the experimental analy-
sis of behavior.
In the 1940s, the first introductory course based on
Skinner’s psychology and the first conference on
experimental analysis of behavior took place (Keller
& Schoenfeld, 1949; Michael, 1980). In 1945, Skin-
ner wrote The Operational Analysis of Psychological
Terms, in which, for the first time in print, he defined
his thought as “radical behaviorism” (Skinner, 1945,
p. 294; see also Schneider & Morris, 1987). The term
would designate a philosophy that, on one hand,
defines private events (e.g., thinking, feelings) as
behavior and, therefore, as a legitimate subject mat-
ter of a behavioral analysis, but on the other hand
attacks explanatory mentalism, the explanation of
behavior by mental events (cf. Skinner, 1945, 1974).
Private events usually refer to a mental concept, but
they are behavior and, as such, cannot cause other
behavior. That antimentalism would become a cen-
tral feature of radical behaviorism. [. . .]
As the prominence of Skinner and his work began
to rise and the foundations for applied behavior
analysis were laid (Morris, Smith, & Altus, 2005),
Skinner would become central to the development
of behavior analysis. [. . .] Thus, behavior analysis
Nina Leen/The LIFE Picture Collection/Getty Images
Psychologist B. F. Skinner’s experi-
ments showed that behavior could
be related to a stimulus and did not
have to be only an occurrence inside
an organism. One of Skinner’s famous
experiments included a rat pressing a
lever to then be rewarded with food.
maj83688_02_c01_031-066.indd 34 8/31/17 3:06 PM
treated as a homogeneous school, as a linear tradition. The term
behaviorism, however, refers
to a variety of conflicting positions (Leigland, 2003; but see
also Moore, 1999). Indeed, after
Watson’s (1913) first use, many theories related to the study of
behavior were taken as
“behaviorists.” Once the term came into wide use, its ambiguity was soon recognized, as there was no single enterprise called “behaviorism”
(e.g., Hunter, 1922; Spence,
1948; Williams, 1931). Woodworth (1924) summarized the
problem:
If I am asked whether I am a behaviorist, I have to reply that I
do not know,
and do not much care. If I am, it is because I believe in the
several projects put
forward by behaviorists. If I am not, it is partly because I also
believe in other
projects which behaviorists seem to avoid, and partly because I
cannot see
any one big thing, to be called “behaviorism.” (p. 264)
Spence (1948) also noted that the term was mostly used when someone was defining his or her opposition to an actual (or alleged) behaviorism. Even so,
later developments were identi-
fied with “behaviorism,” such as behavior analysis itself.
Therefore, the term would still des-
ignate a very heterogeneous set of positions. Its indiscriminate
use, on the other hand, over-
looks the historical complexity and diversity of the behaviorist
school.
Moreover, references to a generic behaviorism set biases in the analysis of behavioristic systems. [. . .]

eating because we are hungry and sleeping because we are tired.
The laws of learning explain
how people learn best through these associations. As just one
example, the law of effect asserts
that learning is strengthened when it is associated
with a positive feeling. As Sandiford (1942) explains
in the following excerpts, the theory of connection-
ism and the laws of learning helped build a more
developed understanding of learning and contributed to its modern applications.
Before you begin reading, it is important to under-
stand the importance of what is known as “asso-
ciation doctrine” to Thorndike’s research. Although
Thorndike did not introduce his initial three laws of
learning until the early 20th century (Weibell, 2011),
ideas about behavioral associations began to take
shape more than 2,000 years ago. Greek philosopher
Aristotle (384–322 BCE) wrote in his major work on
ethics, “For we are busy that we may have leisure,
and make war that we may live in peace.” However,
his ideas about associations are most clearly seen in
the following passage:
When, therefore, we accomplish an act of reminiscence, we pass through a certain series of precursive movements, until we arrive at a movement on which [. . .]

Abracada/iStock/Thinkstock
A central theory of connectionism is that learning is conducted through stimuli and responses.
Excerpts from “Connectionism: Its Origin and Major Features”
By P. Sandiford
Features of Connectionism
The following outline gives the main distinguishing features of
connectionism:
1. Connectionism is an outgrowth of the association doctrine,
especially as pro-
pounded by Alexander Bain. Thorndike was a pupil of William
James, some of whose
teachings were derived from Bain and the British
associationists. Connectionism,
therefore, through associationism, has its roots deep in the
psychological past.
2. Connectionism is a theory of learning, but as learning is
many-sided, connectionism
almost becomes a system of psychology. It is as a theory of
learning, however, that it
must stand or fall.
3. Connectionism has an evolutionary bearing in that it links
human behavior to that
of the lower animals. Thorndike’s first experiments were with
chicks, fish, cats, and,
later, with monkeys. From his animal experiments he derived
his famous laws of
learning.
4. Connectionism boldly states that learning is connecting. The
connections presum-
ably have their physical basis in the nervous system, where the
connections between
neuron and neuron explain learning. Hence, connectionism is [. . .]

mechanistic aspects of behavior. Neither one finds it necessary
to evoke a soul in
order to explain behavior. Connectionism breaks with
behaviorism in regard to the
stress it places on the hereditary equipment of the behaving
organism.
8. Some connections are more natural than others. We grow into
reflexes and instincts
without very much stimulation from the environment except
food and air. In other
words, we mature into reflexes and instincts, but we have to
practice or exercise
in order to learn our habits. These hereditary patterns of
behavior (reflexes and
instincts) form the groundwork of learning. Most acquired
connections are based
on them and, indeed, grow out of them. Even such complex
bonds as those which
represent capacities (music, mathematics, languages, and the
like) have a hereditary
basis.
9. According to connectionism those things we call intellect and
intelligence are
quantitative rather than qualitative. A person’s intellect is the
sum total of the bonds
(associations) he has formed. The greater the number of bonds
he has formed, the
higher is his intelligence.
10. [. . .] Connectionism, above all other theories of learning,
seems to be one that the
classroom teacher can appreciate and apply. While the statistics
which summarize
the experiments have been decried as the products of a
mechanistic conception of
behavior, nevertheless they have done more to make education a
science than all the
theorizing of the past 2,000 years.
[. . .] Thorndike was such a voluminous writer that it is difficult
to summarize his position
on any single question, or, indeed, to pin him down to a specific
position. In order to remove
any doubt the reader may have on the matter, the following
recent statement of Thorndike’s
position is given:
A man’s life would be described by a list of all the situations
which he encoun-
tered and the responses which he made to them, including
among the latter
every detail of his sensations, percepts, memories, mental
images, ideas, judg-
ments, emotions, desires, choices, and other so-called mental
facts.
[. . .] A man’s nature at any given stage would be expressed by
a list of the responses (Rs)
which he would make to whatever situations or state of affairs
(Ss) could happen to him,
somewhat as the nature of a molecule of sugar might be
expressed by a list of all the reactions
that would take place between it and every substance which it
might encounter.
There would be one important difference, however. [. . .] In
human behavior our ignorance
often requires the acknowledgment of the principle of multiple response or varied reaction. [. . .]

The laws usually quoted are those given in Vol. II of
Thorndike’s Educational Psychology:
The Psychology of Learning (1913). These include the three
major laws: effect, exercise or
frequency, and readiness. [. . .] These laws grew out of the
experiments with animals, coupled
with such influences as the writings of Bain, Romanes, Lloyd
Morgan, Wilhelm Wundt, and
others, and have been modified by further experiments in which
human beings acted as the
subjects (Thorndike, 1932). New elements injected into the laws
of learning are belonging-
ness, impressiveness, polarity, identifiability, availability, and
mental system. This shows clearly
enough that the laws are not to be regarded as a closed system,
complete from the start, but
merely as tentative summaries of our knowledge of the way in
which learning takes place.
They will be discarded or modified whenever experiments
disclose that such is necessary or
desirable.
Table 1.1: Overview of associationistic milestones

Aristotle (384–322 BCE)
• Introduced the ideology of associations.
• Suggested that we could not perceive two sensations as one—that they would combine or fuse into one.

Thomas Hobbes (1588–1679)
• Suggested sequences of thought could be casual and illogical, as in dreams, or orderly and regulated as by some design.
• Suggested that hunger, sex, and thirst are physiological needs.

John Locke (1632–1704)
• Suggested “association of ideas”: Representations arise in consciousness.

David Hartley (1705–1757)
• Suggested that sensation (pleasure vs. pain) was generated by wave vibrations in the nerves.

David Hume (1711–1776)
• Noted that the associations in cause and effect are affected when additional objects are introduced.

James Mill (1773–1836)
• Advanced associationism to include more complex emotional states within the pain vs. pleasure sensation model.

Thomas Brown (1778–1820)
• Suggested nine secondary laws that strengthened Aristotle’s laws of association.
• Understood association as an active process of an active, holistic mind.

Alexander Bain (1818–1903)
• Suggested trial-and-error learning, reflexes, and instincts as the bases of habits, individual differences, and the pleasure-pain principle in learning.

Edward Thorndike (1874–1949)
[. . .] A modifiable bond is strengthened or weakened as
satisfaction or annoyance attends its
exercise. With chickens and cats, Thorndike had used as
motivating agents in their behavior
such original satisfiers as food and release from confinement for
the hungry cat, company
for the lonely chicken, and so forth. These acted as rewards for
certain actions which became
stamped in and learned. Thorndike really took the law of effect
for granted at first, as so many
before him had done. Gradually, however, it became one of his
most important principles of
education. [. . .]
In propounding the law of effect, Thorndike thought that the
two effects—satisfiers and
annoyers—were about equally potent, the one in stamping in the
connection, the other in
stamping it out. If a preference was indicated, it was toward the side of rewards, although he
explicitly asserted that rewards or satisfiers following responses
increased the likelihood of
repetitions of the connections so rewarded, while punishments
decreased the likelihood of
recurrence of the punished connection. [. . .]
The manner in which the confirming reaction develops and
operates is as follows: The con-
firming reaction is at first an aftereffect of the S → R situation
(where S is a stimulus and R is
a response), thus:
S → R → Confirming Reaction
to that particular annoying aftereffect in those particular
circumstances. This may be to run
away, to scream, or to perform other useless acts. Punishments,
compared with rewards, are
very unreliable forces in learning. Rewards are dependable
because they arouse confirming
reactions.
Thorndike is inclined to believe that the confirming reaction is
a reaction of the neurons
themselves. It is a neuronic force of reinforcement of the
original response or it is the afteref-
fect of the total situation response (Thorndike, 1933, 1940). [. .
.]
The Law of Exercise or Frequency
This law, like the law of effect, was at first almost taken for
granted by Thorndike. Does not
“practice make perfect”? Yet experience shows that exercise
does not always lead to perfec-
tion. Practice in sitting on a bent pin or in poking the fire with
the finger never leads to perfec-
tion in the art. The law of effect has to be invoked to explain
why practice does not necessarily
and invariably lead to improvement. Pleasurable reactions are
stamped in; painful ones are
stamped out. In terms of connectionism, repetition tends to
make the bond permanent. [. . .]
The law of exercise or frequency has two parts, use and disuse.
The law of use is stated: When a
modifiable connection is made between a situation and a
response, that connection’s strength
is, other things being equal, increased. The law of disuse runs:
When a modifiable connection is not made between a situation and a response over a [. . .]

somewhat stereotyped, in and
of itself, is unproductive for learning. It causes no adaptive
changes and has no useful selective
power. Repetition of a connection, that is, the situation and its
particular response, results in a
real though somewhat small strengthening influence. Mere
repetition of a connection causes
learning, but the learning is slow. For example, if a child is taught to sit in his or her seat after entering the room, but does not understand why or when the rule applies, the child will sit but has not necessarily learned the reasons for performing this behavior. If, however, the child learns that sitting upon entering a classroom is a procedure that ensures positive outcomes in the learning environment (such as rewards), the child will be more apt to apply this behavior in other settings as well.
Repetition of a “connection with belonging” (that
is, the procedure that is applied “fits” the situation)
increases the likelihood of learned adaptation to perform the behavior, even when the rewards may be
concealed or disguised. Belongingness is difficult to
describe but easy to illustrate. For example, the words of a
sentence belong together in a way
that the terminal word of one sentence and the initial word of
the next do not. An additional
example might include a child eating off a plate instead of
eating off the table. The behavior
makes logical sense to the individual. [. . .]
The Law of Readiness
Briefly, the law of readiness may be stated: When a bond is ready to act, to act gives satisfaction. [. . .]
Section 1.2 Theory of Connectionism and the Laws of Learning
some number sequences may possess more belongingness than
others. Thus 2, 4, 8,
16, etc., exhibit more belongingness than 1, 3, 4, 2, 5, 11, 13,
15.
2. Impressiveness: The strength or intensity of a stimulus or a
situation.
Example: Loud sounds are considered stronger and more
impressive than less intense
ones. Stimuli attended to, that is, in the focus of consciousness,
are more impressive
than marginal elements.
3. Vividness: The recognizability of a word (Miller & Dost,
1964).
Example: In some experiments, using word-number paired
associates such as dinner
26, basal 83, divide 37, kiss 63, the number of correct number
associations with kiss
and dinner, both impressive words, is larger than the number of
associations made
with basal and divide, both weak words.
4. Polarity: The tendency for stimulus-response sequences to
function more readily in
the order they were practiced than in the opposite order.
Example: Using foreign and vernacular phrases such as raison d’être; ohne Hast, ohne Rast; exeunt omnes; facilis descensus; obiter dicta, etc., it was
shown that the ends could
be supplied when the beginnings were given, more readily than
the beginnings could
be given when the ends were supplied; the first half evokes the
second half more often
than the second evokes the first.
5. Identifiability: If the connection can be easily identified it is
easily learned.
Example: Some concepts such as times, numbers, weights,
colors, mass, density, etc.,
have to be analyzed out and made identifiable before they can
be profitably used
by us.
6. Availability: The accessibility of the response.
Example: When something is easier to attain, the response to it is more easily accessible.
7. Mental systems: The habituation; limited physiological or
emotional response to a
frequently repeated stimulus (one’s habit).
Example: If in paper and pencil association experiments, the
stimulus word dear
evoked the response sir, this would be regarded as a simple
habit; but if it evoked
fear, some mental system must be at work.
[. . .] These modifications and additions to the laws of learning
do not destroy the main fabric
of the connectionist doctrine. Indeed, they illustrate one
important feature of connectionism,
namely, the willingness of its supporters to modify their
teachings and beliefs when experiments [. . .]

since the turn of the 20th century
and are aligned with the transformation of associative learning
concepts. Therefore, familiarity
with this area of learning is critical to an advanced education in
psychology, as well as a more
developed understanding of behaviorism and its evolution. For
this section of the chapter, we
will discuss conditioning. Section 1.4 will explore how
conditioning is then applied in the field
of learning. There are two types of conditioning: classical and
operant. Though both types have
an associative property, there are also clear differences between
the two. Classical condition-
ing involves repeatedly pairing two stimuli so that eventually
one of the stimuli prompts an
involuntary response that previously the other caused on its
own. Think of the classic example of
Pavlov’s dog: Repeatedly pairing food with a tone eventually
caused, or conditioned, the dog to
salivate at the tone alone.
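The gradual strengthening at work in Pavlov's example can be illustrated with a toy model. The sketch below is not from this chapter; it uses a simple Rescorla-Wagner-style update, with arbitrary values for the learning rate and the maximum associative strength, to show how repeated tone-food pairings make the conditioned response grow toward the unconditioned one.

```python
# Illustrative sketch (not from the chapter): a minimal Rescorla-Wagner-style
# update showing how repeated CS-US pairings strengthen a conditioned response.
# The learning_rate and us_strength values are arbitrary choices for the demo.

def condition(n_trials, learning_rate=0.3, us_strength=1.0):
    """Return the associative strength of the CS after n paired trials."""
    strength = 0.0  # the tone initially elicits no salivation
    for _ in range(n_trials):
        # On each paired presentation, strength moves a fraction of the way
        # toward the maximum supportable by the US (the food).
        strength += learning_rate * (us_strength - strength)
    return strength

for trials in (1, 5, 20):
    print(f"after {trials:2d} pairings: CR strength = {condition(trials):.2f}")
```

The closer the association already is to its maximum, the smaller each additional pairing's contribution, which matches the diminishing returns of continued paired training.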
In contrast, operant conditioning (also referred to as
instrumental conditioning or Skinner-
ian conditioning) introduces consequences to the associative
relationship between stimuli and
responses. Rather than using different stimuli to provoke the
same, involuntary response, differ-
ent stimuli are used to prompt or support the desired, voluntary
response, which may involve
the confirmation or discouragement of a behavior. In Figure 1.3,
for example, two types of rein-
forcement (positive and negative) are used to maintain the
desired response, and two types of
punishment (again, positive and negative) are used to change
the behavior. In this case, the child
being quiet at the physician’s office is the desired behavior.
Figure 1.3: Reinforcement and punishment in operant conditioning

When the child is quiet while in the physician’s office, reinforcement (positive or negative) maintains the behavior, and the next time in a professional environment, the child is again quiet. When the child misbehaves, the parent gives positive punishment by giving the child additional chores, or negative punishment by taking away the child’s TV time; the next time in a professional environment, the behavior improves.
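The four combinations of adding or removing a stimulus to maintain or discourage a behavior can be sketched in a few lines. This helper is hypothetical (the function name and labels are illustrative, not from the chapter): "positive" means a stimulus is added, "negative" means one is removed, and the outcome determines reinforcement versus punishment.

```python
# Hypothetical helper (not from the chapter): names the four
# operant-conditioning quadrants. "Positive" = a stimulus is added;
# "negative" = a stimulus is removed. Reinforcement maintains a
# behavior; punishment discourages it.

def quadrant(stimulus_added: bool, behavior_maintained: bool) -> str:
    sign = "positive" if stimulus_added else "negative"
    kind = "reinforcement" if behavior_maintained else "punishment"
    return f"{sign} {kind}"

# The two punishment cases from the chapter's example:
print(quadrant(stimulus_added=True, behavior_maintained=False))   # adding chores
print(quadrant(stimulus_added=False, behavior_maintained=False))  # removing TV time
```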
these two types of conditioning and consider how, with each
new development, more questions
arise about how associations occur and if they affect learning.
Excerpts from “The Classical Origins of Pavlov’s Conditioning”
By R. E. Clark
Classical Conditioning
In the most basic form of classical conditioning, the stimulus
that predicts the occurrence of
another stimulus is termed the conditioned stimulus (CS) (in
Pavlov’s experiment, the tone).
The predicted stimulus is termed the unconditioned stimulus
(US) (in Pavlov’s experiment,
the food). The CS is a relatively neutral stimulus that can be
detected by the organism, but
does not initially induce a reliable behavioral response. The US
is a stimulus that can reliably
induce a measurable response from the first presentation. The
response that is elicited by
the presentation of the US is termed the unconditioned response
(UR) (in Pavlov’s experi-
ment, the drool as a result of the food). The term
“unconditioned” is used to indicate that
the response is “not learned,” but rather it is an innate or
reflexive response to the US. With
repeated presentations of the CS followed by US (referred to as
paired training) the CS begins
to elicit a conditioned response (CR) (in Pavlov’s experiment,
the drool as a result of the
tone alone). Here the term “conditioned” is used to indicate that
the response is “learned.” See
Figure 1.4 for an illustration of these relationships.
Figure 1.4: A typical classical conditioning procedure
Section 1.3 Principles of Conditioning
Edwin Burket Twitmyer (1873–1943)
The phenomenon of classical conditioning was discovered
independently in the United States
and Russia around the turn of the 20th century. In the United
States, Edwin B. Twitmyer made
this discovery at the University of Pennsylvania while finishing
his dissertation work on the
“knee-jerk” reflex. When the patellar tendon is lightly tapped
with a doctor’s hammer, the
well-known “knee-jerk” reflex is elicited. Twitmyer had
initially intended to study the mag-
nitude of the reflex under normal and facilitating conditions
(Figure 1.5). In the facilitating
conditions the subjects were asked to verbalize the word “ah,”
or to clench their fists, or to
imagine clenching their fists (Twitmyer, 1902/1974). A bell that
was struck one-half second
before the patellar tendon was tapped served as signal for the
subjects to begin verbalizing or
fist clenching (or imagining fist clenching). Twitmyer observed:
[D]uring the adjustment of the apparatus for an earlier group of
experiments
with one subject . . . a decided kick of both legs was observed
to follow a tap of the
signal bell occurring without the usual blow of the hammers on
the tendons. . . .
Two alternatives presented themselves. Either (1) the subject
was in error in
his introspective observation and had voluntarily moved his
legs, or (2) the true
knee jerk (or a movement resembling it in appearance) had been
produced by a
stimulus other than the usual one. (as cited in Irwin, 1943, p.
452) [. . .]
Twitmyer apparently did not fully appreciate the potential
significance of this finding beyond
recording this initial observation, and the work was never
extended. It has been suggested
that Twitmyer’s failure to systematically investigate this
phenomenon and the lack of interest
exhibited by his colleagues who heard the presentation was
likely due in part to the prevailing
American zeitgeist where interest in delineating the components
of consciousness through
introspection was the principal perspective (Irwin, 1943; Coon,
1982). Thus, Twitmyer and
his contemporaries would have been predisposed to undervalue
the usefulness, to the field
of psychology, of something as basic as a modifiable reflex.
This was not the case in Russia.
Figure 1.5: Twitmyer’s “knee-jerk” reflex experiment
This photograph (circa 1903) shows a young subject and the
experimental apparatus Twitmyer used to
measure the magnitude of the knee-jerk reflex (see
http://www.psych.upenn.edu/history/twittext.htm
for details).
University of Pennsylvania Archive, photographer unknown.
Pavlov immediately recognized the significance of these
findings, findings that would ulti-
mately lead him to change the direction of his research to
explore this phenomenon. His ini-
tial results were officially presented to the International
Congress of Medicine held in Madrid,
Spain, in 1903. This report was entitled “Experimental
Psychology and Psychopathology in
Animals.” [. . .]
The Emergence of Classical Conditioning in the United States
Pavlov’s work on classical conditioning was essentially
unknown in the United States until
1906, when his lecture “The Scientific Investigation of the
Psychical Faculties or Processes
in the Higher Animals” was published in the journal Science
(Pavlov, 1906). In 1909 Rob-
ert Yerkes (1876–1956), who would later become president of
the American Psychological
Association, and Sergius Morgulis published an extensive
review of the methods and results
obtained by Pavlov, which they described as “now widely
known as the Pawlow [sic] salivary
reflex method” (Yerkes & Morgulis, 1909, p. 257).
Initially Pavlov and his associates used the term conditional
rather than conditioned. Yet Yer-
kes and Morgulis chose to use the term conditioned. They
explained their choice of terms in
a footnote:
Conditioned and unconditioned are the terms used in the only
discussion
of this subject by Pawlow [sic] which has appeared in English.
The Russian
terms, however, have as their English equivalents conditional
and unconditional. But as it seems highly probable that Professor Pawlow
[sic] sanctioned
the terms conditioned and unconditioned, which appear in the
Huxley lecture
(Lancet, 1906), we shall use them. (Yerkes & Morgulis, 1909, p.
259)
The terms conditioned reflex and unconditioned reflex were
used during the first two decades
of the 20th century, during which time this type of learning was
often referred to as “reflex-
ology.” In 1921, the first textbook devoted to conditioning
(General Psychology in Terms of
Behavior) adopted the terms conditioned and unconditioned
response to replace the term
reflex (Smith & Guthrie, 1921). De-emphasizing the concept of
a reflex and instead using a
more general term like response allowed a larger range of
behaviors to be examined with
conditioning procedures. [. . .]
John B. Watson (1878–1958) championed the use of classical
conditioning as a research tool
for psychological investigations. During 1915, his student Karl
Lashley conducted several
exploratory conditioning experiments in Watson’s laboratory.
Watson’s presidential address,
delivered in 1915 to the American Psychological Association,
was entitled “The Place of the
Conditioned Reflex in Psychology” (Watson, 1916). Watson was
highly influential in the rapid
incorporation of classical conditioning into American
psychology, though this influence did
not appear to extend to his student. Lashley became frustrated with his attempts to classically [. . .]
49. Types of Reinforcers
The range of possible consequences that can function as
reinforcers is enormous. To make
sense of this assortment, psychologists tend to place them into
two main categories: primary
reinforcers and secondary reinforcers. Primary reinforcers are
those that require little, if any,
experience to be effective. Food, drink, and sex are common
examples. While it is true that
experience will influence what would be considered desirable
for food, drink, or an appropri-
ate sex partner, there is little argument that these items,
themselves, are natural reinforcers.
Another kind of reinforcer that does not require experience is
called a social reinforcer. Exam-
ples are social contact and social approval. Even newborns show
a desire for social reinforc-
ers. Psychologists have discovered that newborns prefer to look
at pictures of human faces
more than practically any other stimulus pattern, and this
preference is stronger if that face is
smiling. As with the other primary reinforcers, experience will
modify the type of social recognition that is desired. Still, it is clear that most people will go to
great lengths to be noticed by
others or to gain their acceptance and approval.
Though these reinforcers are likely to be effective, most human
behavior is not motivated
directly by primary reinforcers. Money, entertainment, clothes,
cars, and computer games
are all effective rewards, yet none of these would qualify as
natural or primary reinforcers.
Because they must be acquired, they are called secondary
reinforcers. These become effec-
tive because they are paired with primary reinforcers. The
famous American psychologist B.
F. Skinner found that the sound of food being delivered was
sufficient to maintain a high rate
of bar pressing in experienced rats. Obviously, under normal
circumstances the sound of the
food occurred only if food was truly being delivered.
How a secondary reinforcer becomes effective is explained by two-factor theory,
so named because it combines instrumental and Pavlovian conditioning. For
example, when a rat receives food for pressing a bar (positive reinforcement),
a neutral stimulus is presented at the same time: the sound of the food
dropping into the food dish. The sound is paired with a stimulus that
naturally elicits a reflexive
response; that is, food elicits satisfaction. Over many trials, the
sound is paired consistently
with food; thus, it will be conditioned via Pavlovian methods to
elicit the same response as the
food. Additionally, this process occurred during the
instrumental conditioning of bar pressing
by using food as a reinforcer.
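The pairing process just described can be expressed as a toy simulation. The sketch below is our own illustration (the update rule, learning rate, and function name are assumptions, not a model from the chapter); it shows how a neutral sound, repeatedly paired with food, comes to carry reinforcing value of its own:

```python
def paired_value(trials, learning_rate=0.2):
    """Value the neutral sound acquires after `trials` pairings with food.

    Each pairing moves the sound's learned value a fraction of the way
    toward the value of food itself (set to 1.0 here).
    """
    value = 0.0  # before any pairings, the sound is neutral
    for _ in range(trials):
        value += learning_rate * (1.0 - value)
    return value

print(round(paired_value(1), 2))   # one pairing: weak value (0.2)
print(round(paired_value(20), 2))  # many pairings: close to food's value
```

After enough pairings the sound alone can maintain responding, which is the pattern Skinner observed when the sound of food delivery sustained bar pressing in experienced rats.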
This same process works for most everyday activities. For most
humans, money is an
extremely powerful reinforcer. Money itself, though, is not very
attractive. It does not taste
good, does not reduce any biological drives, and does not, on its
own, satisfy any needs. How-
ever, it is reliably paired with all of these things and therefore
becomes as effective as these
primary reinforcers. In a similar way, popular fashion in
clothing, hair styles, and personal
adornment; popular art or music; even behaving according to [. . .]

Thus, standing and cheering
at a basketball game will likely lead to approval (social
reinforcement), whereas this same
response is not likely to yield acceptance if it occurs at a
funeral.
A punisher is likewise defined as any consequence that reduces
the probability of a behav-
ior, with the same qualifications as for reinforcers. A behavior
that occurs in response to a
specified situation may receive a consequence that reduces the
likelihood that it will occur
in that situation in the future, but the same behavior in another
situation would not gen-
erate the same consequence. For example, drawing on the walls
of a freshly painted room
would usually result in an unpleasant consequence,
whereas the same behavior (drawing) in one’s color-
ing book would not.
The terms “positive” and “negative” are also much
more tightly defined. Earlier usage confused them with
the emotional values of good and bad, which required
the counterintuitive and confusing claim that a
positive reinforcer is withheld, or a negative reinforcer
presented, when there is clearly no reward and the
intent is, in fact, to reduce the probability of that
response (as described by Kimble). A better, less confusing
definition is to consider “positive” and “negative” as
arithmetic symbols, as for adding or subtracting. They
therefore are the methods of supplying reinforcement
(or punishment) rather than descriptions of the rein-
forcer itself. Thus, if a behavior occurs, and as a conse-
quence something is given that will result in an increase
in the rate of the behavior, this is positive reinforcement. [. . .]

A collar that provides an electric
shock when the dog strays too close to the property line is an
example of a device that deliv-
ers positive punishment. Loss of television privileges for
rudeness is an example of negative
punishment. See Table 1.2 for an overview of reinforcements
and punishments.
Table 1.2: Reinforcements and punishments

positive reinforcement: Adds something to the environment to encourage continuance of a desired behavior. Example: giving a child a reward (a treat, a toy, etc.).
positive punishment: Adds something to the environment to discourage continuance of an undesired behavior. Example: adding chores to a child’s weekly duties.
negative reinforcement: Takes something away from the environment to encourage continuance of a desired behavior. Example: taking away a child’s assigned chores for the week.
negative punishment: Takes something away from the environment to discourage continuance of an undesired behavior. Example: grounding a child from [. . .]

[. . .] amount of work, such as
stuffing envelopes. The pay is always the same; stuffing a
certain number of envelopes
always equals the same pay. An example of a fixed-interval (FI)
schedule is receiving the
daily mail. Checking the mailbox before the mail is delivered
will not result in reinforcement.
One must wait until the appropriate time. A variable-ratio (VR)
schedule example is a
slot machine. The more attempts, the more times the player
wins, but in an unpredictable
pattern. A variable-interval (VI) schedule example would be
telephoning a friend whose
line is busy. Continued attempts will be unsuccessful until the
friend hangs up the phone,
but when this will happen is unknown. See Table 1.3 for an
overview of ratio and interval
schedules.
Table 1.3: Ratio and interval schedules of reinforcement

fixed-ratio (FR): Reinforcement follows a set number of responses. Example: paying a person for each fixed number of envelopes stuffed.
fixed-interval (FI): Reinforcement follows a set amount of time. Example: paying a person every Friday for work completed.
variable-ratio (VR): Reinforcement follows an unpredictable number of responses. Example: a slot machine that pays out after an unpredictable number of plays.
variable-interval (VI): Reinforcement follows an unpredictable amount of time. Example: a busy phone line that frees up at an unknown moment.
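The four schedules can also be written as simple decision rules about when a response earns reinforcement. The sketch below is our own minimal illustration under assumed names and numbers (none of it comes from the chapter):

```python
import random

def fixed_ratio(responses, ratio=10):
    """FR: every `ratio`-th response is reinforced (e.g., pay per 10 envelopes)."""
    return responses > 0 and responses % ratio == 0

def fixed_interval(now, last_reinforced, interval=24):
    """FI: the first response after `interval` time units is reinforced
    (e.g., checking the mailbox pays off only after the daily delivery)."""
    return now - last_reinforced >= interval

def variable_ratio(win_probability=0.1, rng=random.random):
    """VR: each response has a fixed chance of paying off, so wins arrive
    after an unpredictable number of responses (e.g., a slot machine)."""
    return rng() < win_probability

def variable_interval(now, available_at):
    """VI: reinforcement becomes available after an unpredictable delay
    (e.g., a busy phone line frees up at an unknown moment)."""
    return now >= available_at

# The 10th envelope is reinforced; the 7th is not.
print(fixed_ratio(10), fixed_ratio(7))  # True False
```

Note that the two ratio schedules count responses while the two interval schedules consult a clock; that distinction is what produces the different response patterns each schedule sustains.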
Section 1.4 Behaviorism Applied
Classical and operant conditioning can often be difficult
concepts to understand at first glance,
and it can be helpful to think about how these types of learning
processes might happen in our
lives each day. For instance, have you ever rewarded your
children for doing what you asked?
As they became older, did you have to reward them every single
time, as you may have when
they were younger, or could you reward them every now and
again and still see the behavior
repeated? By fully understanding the principles of classical and
operant conditioning, you will
be more apt to identify—and perhaps even implement—differing
schedules of reinforcement in
your own life. The last section of this chapter will guide you
through two modern applications
of conditioning. Reinforcing Your Understanding: Conditioning
takes a closer look at Skinner’s
conditioning research.
Reinforcing Your Understanding: Conditioning
Refer to your e-book for an embedded video that considers
Skinner’s work in conditioning.
In his original research, Skinner used pigeons as subjects and
grain to teach the pigeons to
perform certain behaviors. Review this video to reinforce your
understanding of punishment
versus reinforcers and how the schedule and rate of reinforcers
affect learning.
1.4 Behaviorism Applied
Now that you are familiar with how behaviorism was shaped
and refined through continuous
research, consider how it can be applied in modern
environments. The excerpts in this section are
from two separate articles. Both selections demonstrate the
application of strategies based on
behaviorism. The first series of excerpts
is from Wells (2014) and illustrates how
such strategies are used to understand
consumer behaviors and then applied
to product marketing; consumer behav-
iors research aims to identify why peo-
ple buy what they buy. For example, an
organization can use what it knows
about its consumers when developing
campaigns; its marketing campaigns
will often apply some of the behavioral
principles. Do you recognize the exam-
ple in the pictured advertisement? Does
it trigger specific emotional responses
or beliefs about the product? Do you use
this specific brand of product? Many of
the advertisers’ decisions and consumer
behaviors associated with their prod-
ucts are based on behaviorism.
Ullstein bild/Getty Images
Do the vibrant colors and illustrations in the Apple
iPod advertisements elicit a positive feeling? Clas-
sical conditioning in advertising generally assumes
that favorability toward a certain product develops
from a positive commercial or advertisement.
. . . the CS (the brand) predicts the US (a slim female torso). In
each instance
“Now you see it, now you don’t” is sung as first the brand (CS)
and then a trim-
figured woman (US) is shown. (pp. 39–40)
Overall, there has been mixed support for classical conditioning
effects in advertising, but the
general suggestion is that positive attitudes toward an
advertised product (CS) might develop
through their association in a commercial with other stimuli that
are reacted to positively
(US), such as pleasant colors, music, and humor (Gorn, 1982).
Early work applying classical conditioning to advertising
appears to have been based on and
inspired by the work of Razran (1938), who paired a free meal
(US) with various political
statements (CS). He found that agreement with the slogans was
greater when people received
a free meal than when they did not. The work of Staats and
Staats (1958), who successfully
associated visually presented nonsense symbols (CS) with
several spoken words (US) such
as beauty, healthy, smart, and success, opened the door further
for a classical conditioning
approach to advertising. After the associative pairings, the
participants’ ratings of the CS indi-
cated that the core meaning in the US (i.e., either positive or
negative evaluation) had trans-
ferred to the nonsense syllables (Allen & Janiszewski, 1989). In
a second experiment, Allen
and Janiszewski associated each of two national names
(“Swedish” and “Dutch”) with either
18 positive or 18 negative words. The national name paired with
positive words was later [. . .]

[. . .] and US; they found that conditioning was greater as the number
of trials increased. Although
other studies have used different trial numbers, there remains no
agreement on an optimum
number of trials for conditioning to occur.
Extinction
Extinction is the prediction that the conditioned behavior will
disappear if the predictive
relationship between the CS and the US is broken by either
omitting the US entirely or by
presenting the CS and US randomly (McSweeney & Bierley,
1984). Till, Stanley, and Pirluck
(2008) explored the characteristic of extinction empirically.
Their study paired brands with
celebrities and measured attitudes toward the brands after
conditioning. Attitudes increased
with the use of well-liked and relevant celebrities. They then
attempted to extinguish these
effects but found that, once paired, the pairings were difficult to
eliminate, with brand atti-
tudes still affected 2 weeks after the procedure (Till et al.,
2008). Till and Priluck (2000) stud-
ied the characteristic of generalization, or the extent to which a
response conditioned to one
stimulus transfers to similar stimuli. Through two experimental
procedures, they found that
attitudes conditioned to a particular brand (Garra mouthwash)
could be transferred (gener-
alized) to a product with a similar name (Gurra, Gurri, and
Dutti) in the same category, as well
as a product with the same name in a different category (soap).
[. . .]
Operant Conditioning in Marketing and Consumer
Behavior Research
In operant conditioning, behavior is shaped and maintained by
its consequences (Foxall,
1986), meaning that the rate at which a behavior will be
performed is directly related to the
consequences of that behavior performed previously. [. . .]
According to Skinner, each behav-
ioral act can be broken down into three key parts: (1) the
response/behavior (R); (2) the
reinforcement/punishment (S+/-), which is a consequence of the
behavior; and (3) a discrimi-
native stimulus (Sd), which is a cue that signals the likelihood
of positive or negative conse-
quences arising from performing the behavior (Foxall, 1986,
2002). The three parts together,
labelled the three-term contingency, highlight that the
determinants of the behavior must
occur in the environment (Foxall, 1986, 1993):
Sd → R → S+/–
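The contingency can be sketched as a small update rule: a consequence following a response, emitted in the presence of the cue, raises or lowers the future rate of that response. The function name, labels, and step size below are our own illustrative assumptions:

```python
def updated_rate(rate, consequence, step=0.1):
    """Return the new response rate after one consequence.

    'S+' (reinforcement) raises the rate; 'S-' (punishment) lowers it;
    the rate is clamped to the interval [0, 1].
    """
    if consequence == "S+":
        return min(1.0, rate + step)
    if consequence == "S-":
        return max(0.0, rate - step)
    return rate  # no consequence: rate unchanged here

# Sd -> R -> S+: a sale sign (Sd) occasions buying (R); satisfaction with
# the product (S+) makes buying in that situation more likely next time.
rate = 0.5
for _ in range(3):
    rate = updated_rate(rate, "S+")
print(round(rate, 1))  # 0.8
```

The same rule, driven by "S-", captures how an aversive consequence such as a bad purchase experience erodes the likelihood of repeat buying.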
In general, behavior modifiers include positive and negative
reinforcement, and positive
and negative punishment. Positive reinforcement is generally a
reward or something that
strengthens the behavior (e.g., a pleasant experience or
satisfaction with a product, a posi-
tive response to a behavior), which likely leads the person to
buy the product again in future.
With negative reinforcement, the behavior is generally
performed to avoid unpleasantness
(e.g., buying a product to avoid an aggressive salesperson,
purchase and consumption of pain-
killers to relieve a headache; Simintiras & Cadogan, 1996).
Punishment is an aversive conse-
quence after a behavioral response and may lead to the
extinction of a behavior (Nord & Peter, [. . .]).

[. . .] at home. It is helpful if teachers
and parents work together with the student to ensure that the
contract is being fulfilled. [. . .]
Consequences occur immediately after a behavior.
Consequences may be positive or nega-
tive, expected or unexpected, immediate or long term, extrinsic
or intrinsic, material or sym-
bolic (a failing grade), emotional/interpersonal, or even
unconscious. Consequences occur
after the “target” behavior occurs, when either positive or
negative reinforcement may be
given. Positive reinforcement is presentation of a stimulus that
increases the probability of a
response. This type of reinforcement occurs frequently in the
classroom. Teachers may pro-
vide positive reinforcement by:
• Smiling at students after a correct response.
• Commending students for their work.
• Selecting them for a special project.
• Praising students’ ability to parents.
Negative reinforcement increases the probability of a response
that removes or prevents an
adverse condition. Many classroom teachers mistakenly believe
that negative reinforcement
is punishment administered to suppress behavior; however,
negative reinforcement increases
the likelihood of a behavior, as does positive reinforcement.
Negative implies removing a con-
sequence that a student finds unpleasant. Negative
reinforcement might include:
• Obtaining a score of 80% or higher makes the final exam
optional.
• Submitting all assignments on time results in the lowest grade
being dropped.
• Perfect attendance is rewarded with a “homework pass.”
Punishment involves presenting a strong stimulus that decreases
the frequency of a particu-
lar response. Punishment can be effective in quickly suppressing
undesirable behaviors. Examples
of punishment include:
• Students who fight are immediately referred to the principal.
• Late assignments are given a grade of “0.”
• Three tardies to class results in a call to the parents.
• Failure to do homework results in after-school detention
(privilege of going home is
removed).
Table 1.4 provides a comparison and examples of
reinforcements and punishments. Also see
Reinforcing Your Understanding: Reinforcement and
Punishment in the Classroom for a more
in-depth example.
Extinction decreases the probability of a response by withholding
reinforcement that previously followed it. Examples of extinction are:
• A student has developed the habit of saying the punctuation
marks when reading
aloud. Classmates reinforce the behavior by laughing when he
does so. The teacher
tells the students not to laugh, thus extinguishing the behavior.
[. . .] failing to follow the class rules.

Negative (something is removed):
negative reinforcement: Something is removed to increase a desired behavior. Example: give a free homework pass for turning in all assignments.
negative punishment: Something is removed to decrease an undesired behavior. Example: make students miss their recess time for not following the class rules.
Adapted from “Behaviorism” by M. Standridge, 2002, in M.
Orey (Ed.), Emerging Perspectives on Learning, Teaching, and
Technology
(http://epltt.coe.uga.edu/index.php?title=Behaviorism).
Copyright 2002 by M. Standridge. Adapted with permission.
Reinforcing Your Understanding: Reinforcement
and Punishment in the Classroom
Reinforcement and punishment are still often used as methods
for classroom management in
today’s schools. By shaping student behavior, instructors have
the ability to be more focused
on the concepts that need to be learned. The following student-
created video presents a
quality demonstration of reinforcement and punishment in a
classroom scenario. In this
video, the teacher, Mr. Andrews, uses each method to [. . .]

[. . .] but continues to talk after the bell rings. The teacher
gives the class one point for improvement, in that
all students are seated. Subsequently, the students
must be seated and quiet to earn points, which may
be accumulated and redeemed for rewards.
Cueing may be as simple as providing a child with
a verbal or nonverbal signal as to the appropriate-
ness of a behavior. For example, to teach a child to
remember to perform an action at a specific time,
the teacher might arrange for him to receive a cue
immediately before the action is expected rather
than after it has been performed incorrectly. For
example, if the teacher is working with a student
who habitually answers aloud instead of raising his
hand, the teacher should discuss a cue such as hand-
raising at the end of a question posed to the class.
Behavior Modification
Behavior modification is a method of eliciting bet-
ter classroom performance from reluctant students.
It has six basic components:
1. Specification of the desired outcome (What must be changed
and how will it be
evaluated?). One example of a desired outcome is increased
student participation in
class discussions.
2. Development of a positive, nurturing environment (by
removing negative stimuli
from the learning environment). In the above example, this
75. would involve a student-
teacher conference with a review of the relevant material, and
calling on the student
when it is evident that she knows the answer to the question
posed.
3. Identification and use of appropriate reinforcers (intrinsic
and extrinsic rewards).
A student receives an intrinsic reinforcer by correctly answering
in the presence of
peers, thus increasing self-esteem and confidence.
4. Reinforcement of behavior patterns as they develop, until the student has
established a pattern of success in engaging in class discussions.
5. Reduction in the frequency of rewards—a gradual decrease in
the amount of one-on-
one review with the student before class discussion.
6. Evaluation and assessment of the effectiveness of the
approach based on teacher
expectations and student results. Compare the frequency of
student responses in class
discussions to the amount of support provided, and determine
whether the student is
independently engaging in class discussions (Brewer, Campbell,
& Petty, 2000).
[. . .] Further methods for behavior modification could include
changing the environment,
using models for learning new behavior, recording behavior,
substituting new behavior to
break bad habits, developing positive expectations, and
increasing intrinsic satisfaction. [. . .]
strategies for encouraging appropriate and healthy behaviors in others. Reinforcing Your
Understanding: Applied Behavioral
Analysis (ABA) offers a glimpse at one young boy’s
experiences with reward-based therapy.
Reinforcing Your Understanding: Applied Behavioral Analysis
(ABA)
Behaviorism, more commonly referred to today as behavioral
analysis, is applied in a wide
range of professional areas, including, but not limited to,
learning, counseling, behavior
management, and the treatment of autism and other disorders
such as anorexia, bulimia,
and binge eating disorder. In each area, reinforcements are often
used to encourage desired
behaviors. Refer to your e-book for an embedded video clip that
demonstrates the benefits
of applied learning strategy when working with children who
have autism. In this example, a
2-year-old boy diagnosed with autism, Jake, receives ABA
therapy.
Summary & Resources
Chapter Summary
Behaviorism is a foundational framework that encourages those
interested in how we learn
to study, reflect on, and identify patterns that support the stimulus-response premise.
These ideas, dating back as far as Aristotle and his notions of association, have matured,
been challenged, and continue to be elaborated upon through years of reflection and research. As
explained by Watrin and Darwich (2012) in section 1.1,