1 The Foundations of Behaviorism
Learning Objectives
After reading this chapter, you should be able to do the
following:
• Explain the controversial history and arguments of
behaviorism.
• Describe associative learning.
• Explain connectionism and the law of effect.
• Compare and contrast classical and operant conditioning.
• Identify examples of ratio and interval schedules.
• Discuss settings where behaviorism, in the area of learning, is
applied.
Introduction
When you were a child, were you ever
• sent to your room for a bad behavior, a consequence that
continued to occur until
you changed your behavior?
• slapped on the hand for touching something that you were not
supposed to touch?
• yelled at if you walked into the street without first looking for
cars?
• given an allowance when you completed your chores?
• allowed to go on dates but only if you were home by curfew?
• given a sticker or badge for an assignment when you did well?
All of these examples could be categorized as behaviorist techniques for shaping behavior and, in turn, learning.
Learning can refer to the process of
developing knowledge or a skill through
instruction or study or the process of
modifying behavior through experience.
Understanding how learning is studied
is an important step if you want to suc-
cessfully apply psychological methods to
your own learning or to that of others,
whether in a classroom, in the workplace,
or even in your role as a parent or grand-
parent. It is also important to under-
stand that theories have evolved over
time and that inaccuracies often exist
in the literature that presents behavior
and learning studies (Abramson, 2013).
Advances in technology and methodological approaches continue to sharpen researchers’ awareness of possible inaccuracies and of alternative approaches.
Your journey to a better understanding of learning begins with
behaviorism. This theoretical
foundation, which was first discussed in this book’s
introduction, argues that learning has
successfully occurred when the appropriate behavior is observed
(Ertmer & Newby, 1993).
However, behaviorism is an intricate theory, and its approach to
learning cannot be general-
ized so easily. There are many perspectives related to
behaviorism, and such variability makes
it critical that you understand behaviorism’s theoretical
foundation in more depth. Although
new methods are often used in the 21st century, behaviorism
still offers the field of learning
many relevant strategies for successful learning, educating, and
counseling today (Abramson,
2013).
In this chapter, we will first discuss the history of behaviorism,
as well as its evolution in the
scope of learning theory. In addition, the chapter will cover
behaviorism’s foundational ideas,
including connectionism, the law of effect, principles of
conditioning, and modeling and shap-
ing, and explain how behaviorism has been applied within the
domains of marketing and
education.
Jacob Wackerhausen/iStock/Thinkstock
Making mistakes is part of the learning process. It
allows people to modify behavior or thought pro-
cesses in order to develop knowledge or skills.
1.1 The Evolution of Behaviorism to Behavior Analysis
Behaviorism was initially based on the premise that observable
environmental variables are the
basis of behaviors (Hilgard, 1956; Pierce & Cheney, 2004). The
theory itself has numerous frame-
works, some of which you read about in section i.2, and
continues to evolve today.
The excerpts in this section are from Watrin and Darwich
(2012). This article reflects upon the
evolution of behaviorism. Its attention to the multitude of beliefs about behaviorism models how to approach this area of learning psychology with skepticism and critical thought. Watrin and Darwich (2012) introduce J.
B. Watson (1913), who rede-
fined psychology as “a purely objective experimental branch of
natural science” (p. 158), pro-
posing the “prediction and control of behavior” as its goal, and
invite us to follow the path of
self-identified behaviorists who continued to reinvent how and
what behaviorism is and how it
should be applied. With explicit candor, these authors will help
you better understand exactly
why this framework is often misunderstood and difficult to
clearly explain. They also provide you
with a foundation that will help you better understand the
advances and new reflections that
continue to be explored.
Excerpts from “On Behaviorism in the Cognitive Revolution:
Myth and Reactions”
By J. P. Watrin and R. Darwich
In the course of history, there is a clear difficulty to define
psychology. For a long time, it was
treated as the study of mind or human psyche. Some authors,
though, saw the emergence of
behaviorism as a revolution in psychological science (e.g.,
Gardner, 1985; Moore, 1999). Start-
ing with J. B. Watson (1878–1958), the behaviorist school
flourished in the beginning of the
20th century. It was a remarkable rupture in the history of
psychology, once it put the mind
aside of scientific inquiry. From then on, behaviorism began a
tradition of study of behavior,
comprising several—and sometimes even conflicting—
theoretical systems (Moore, 1999).
In that context, behavior analysis emerged as one of the
behavioristic approaches, having
been developed from the works of B. F. Skinner (1904–1990).
With an emphasis on operant
behavior and an antimentalistic position [which rejects the mind
as the cause of behavior], it
became a forefront system of behaviorism during the 1950s. [. . .]
From Behaviorism to Behavior Analysis
Behavior analysis constitutes a field and a psychological system
devoted to the study of
behavior, here defined in terms of functional relations between
behavioral and environmen-
tal events (Catania, 1998). As a field, behavior analysis has
today three fundamental domains:
(a) the experimental analysis of behavior, a basic science
devoted to empirical research on
behavioral processes, especially in the laboratory; (b) applied
behavior analysis, a techno-
logical domain dedicated to apply behavior-analytic knowledge
to solve practical problems;
and (c) the conceptual analysis of behavior, which performs
theoretical reflections about the
subject matter and methods of investigation (Moore, 1999; see
also Moore & Cooper, 2003).
Those domains are interrelated and based in radical
behaviorism, a philosophy of science
that lays the foundations of behavior analysis.
The history of the field as a whole has its roots in the
behaviorist school. In 1913, Watson pub-
lished the article “Psychology as the Behaviorist Views It.”
Attacking the study of conscious-
ness, Watson (1913) redefined psychology as “a purely
objective experimental branch of
natural science” (p. 158), proposing the “prediction and control
of behavior” as its goal. That
drastic movement would greatly contribute to the beginning of a
new tradition, whose name
seems to have been created by Watson himself: “behaviorism”
(Schneider & Morris, 1987).
In the following decades, several psychologists would be
identified as behaviorists. Names
such as Clark Hull (1884–1952) and Edward Tolman (1886–
1959) became associated with
the behaviorist movement, once they developed their own
explanatory models of behavior
(e.g., Hull, 1943; Tolman, 1932). New forms of behaviorism
were thus being shaped and were
sometimes at odds with those that already existed (Moore,
1999). In the 1930s, the contribu-
tions of Skinner established his place among those
developments. Conceiving behavior as a
lawful process, Skinner’s experimental works on reflexes led
him to new concepts and meth-
ods of investigation (see Iversen, 1992). Reflex—and,
subsequently, all behavior—was no lon-
ger something that happened inside the organism; rather, it was
seen as a relation in which
a response is defined in function of a stimulus and vice versa
(Skinner, 1931). [. . .] In 1938,
Skinner published The Behavior of Organisms, in which he
summarized many of his positions
and refined the concept of operant behavior. Skinnerian
behaviorism (see section i.2) was
acquiring its shape. Its first developments laid the
fundamental concepts and methods of behavior
analysis. Because they relied on basic research, they
were also the first steps of the experimental analy-
sis of behavior.
In the 1940s, the first introductory course based in
Skinner’s psychology and the first conference on
experimental analysis of behavior took place (Keller
& Schoenfeld, 1949; Michael, 1980). In 1945, Skin-
ner wrote The Operational Analysis of Psychological
Terms, in which, for the first time in print, he defined
his thought as “radical behaviorism” (Skinner, 1945,
p. 294; see also Schneider & Morris, 1987). The term
would designate a philosophy that, on one hand,
defines private events (e.g., thinking, feelings) as
behavior and, therefore, as a legitimate subject mat-
ter of a behavioral analysis, but on the other hand
attacks explanatory mentalism, the explanation of
behavior by mental events (cf. Skinner, 1945, 1974).
Private events usually refer to a mental concept, but
they are behavior and, as such, cannot cause other
behavior. That antimentalism would become a cen-
tral feature of radical behaviorism. [. . .]
As the prominence of Skinner and his work began
to rise and the foundations for applied behavior
analysis were laid (Morris, Smith, & Altus, 2005),
Skinner would become central to the development
of behavior analysis. [. . .]

Nina Leen/The LIFE Picture Collection/Getty Images
Psychologist B. F. Skinner’s experiments showed that behavior could be related to a stimulus and did not have to be only an occurrence inside an organism. One of Skinner’s famous experiments included a rat pressing a lever to then be rewarded with food.

Thus, behavior analysis
constituted itself by the gradual establishment of its domains,
being consolidated as a field
in the late 1970s. Although Skinner became synonymous with
behavior analysis, the field
exceeded its pioneer. Behavior analysis took on a life of its
own. Other people took part in
the spreading of the field, such as Fred Keller (1899–1996),
Charles Ferster (1922–1981),
William Schoenfeld (1915–1996), and Murray Sidman (1923–).
They disseminated its knowl-
edge, just as they developed new concepts and methods (e.g.,
Sidman & Tailby, 1982). Skinner,
however, remained as the field’s main spokesman. Schultz and
Schultz (2004), for instance,
asserted that, “despite . . . criticisms, Skinner remained the
uncontested champion of behav-
ioral psychology from the 1950s to the 1980s. During this
period, American psychology was
shaped more by his work than by the ideas of any other
psychologist” (p. 344). [. . .]
The Generic (and Misrepresented) Nature of Behaviorism
[. . .] Behaviorism became a host of different and conflicting
systems, grouped under a single
label, as if they all shared the same position. Being vaguely
defined, behaviorism is frequently
treated as a homogeneous school, as a linear tradition. The term
behaviorism, however, refers
to a variety of conflicting positions (Leigland, 2003; but see
also Moore, 1999). Indeed, after
Watson’s (1913) first use, many theories related to the study of
behavior were taken as
“behaviorists.” Since the term began to be largely used, its
ambiguity was soon recognized,
seeing that there was no single enterprise called “behaviorism”
(e.g., Hunter, 1922; Spence,
1948; Williams, 1931). Woodworth (1924) summarized the
problem:
If I am asked whether I am a behaviorist, I have to reply that I
do not know,
and do not much care. If I am, it is because I believe in the
several projects put
forward by behaviorists. If I am not, it is partly because I also
believe in other
projects which behaviorists seem to avoid, and partly because I
cannot see
any one big thing, to be called “behaviorism.” (p. 264)
Spence (1948) also noted that the term was mostly used when
someone defines his or her
oppositions to an effective (or alleged) behaviorism. Even so,
later developments were identi-
fied with “behaviorism,” such as behavior analysis itself.
Therefore, the term would still des-
ignate a very heterogeneous set of positions. Its indiscriminate
use, on the other hand, over-
looks the historical complexity and diversity of the behaviorist
school.
Moreover, references to a generic behaviorism set biases in the
analysis of behavioristic sys-
tems. When behaviorism is vaguely defined, it is easier to
misrepresent any system by attrib-
uting features of other positions to it. Properties of particular
systems are ascribed to all.
Pinker (1999), for example, says the following:
Skinner and other behaviorists insisted that all talk about
mental events was
sterile speculation; only stimulus–response connection could be
studied in
the lab and the field. Exactly the opposite turned out to be true.
Before com-
putational ideas were imported in the 1950s and 1960s by
Newell and Simon
and the psychologists George Miller and Donald Broadbent,
psychology was
dull, dull, dull. (p. 84)
[. . .] In spite of the prior disputable use of the word
behaviorism, the conventional histori-
ography seems to have taken advantage of the term’s ambiguity
to legitimate the idea of a
revolution. A generic behaviorism was, then, presented,
underlying fallacious arguments. This
ambiguous treatment is dangerous for behavior analysis and
modern behaviorism, because
it creates and strengthens academic folklore (see also Todd &
Morris, 1992). Its deceptive
character gives rise to misrepresentations. [. . .]
Source: Watrin, J. P., & Darwich, R. (2012). On behaviorism in
the cognitive revolution: Myth
and reactions. Review of General Psychology, 16(3), 269–282.
Copyright © 2012, American
Psychological Association. Reprinted with permission.
Understanding the history of a theoretical framework can help
us better understand the devel-
opments that followed. In this case, behaviorism gave rise to
many subset groups that believed
that learning was a behavior and that behavior was observable—
yet differed in the degree to
which they held to these beliefs. As the article’s authors
observed, the word behaviorism can
often be used as a general grouping for the multiple researchers
aligned with this theory. As a
lifelong learner, you may find that further questioning this
ambiguity in your own studies will
help substantiate your understanding of this important area of
psychology.
1.2 Theory of Connectionism and the Laws of Learning
Edward Thorndike’s theory of connectionism and the laws of
learning were two concepts that
would emerge as behaviorism matured. The theory of
connectionism, also known as the
synaptic theory of learning, posits that learning occurs through
the habitual associations, or
connections, made between stimuli and responses. Examples of
behavioral associations include
eating because we are hungry and sleeping because we are tired.
The laws of learning explain
how people learn best through these associations. As just one
example, the law of effect asserts
that learning is strengthened when it is associated
with a positive feeling. As Sandiford (1942) explains
in the following excerpts, the theory of connection-
ism and the laws of learning helped build a more
developed understanding of learning and contrib-
uted to our more modern applications of today.
Before you begin reading, it is important to understand the role of what is known as “association doctrine” in Thorndike’s research. Although
Thorndike did not introduce his initial three laws of
learning until the early 20th century (Weibell, 2011),
ideas about behavioral associations began to take
shape more than 2,000 years ago. Greek philosopher
Aristotle (384–322 BCE) wrote in his major work on
ethics, “For we are busy that we may have leisure,
and make war that we may live in peace.” However,
his ideas about associations are most clearly seen in
the following passage:
When, therefore, we accomplish an act of reminiscence, we pass through a certain series of precursive movements, until we arrive at a movement on which the one we are in quest of is habitually consequent. Hence, too, it is that we hunt through the mental train, excogitating from the present or some other, and from similar or contrary or coadjacent. Through this process reminiscence takes place. For the movements are, in these cases, sometimes at the same time, sometimes parts of the same whole, so that the subsequent movement is already more than half accomplished. (Aristotle, ca. 350 BCE/1930, para. XX)

Abracada/iStock/Thinkstock
A central claim of connectionism is that learning occurs through connections between stimuli and responses.
Association doctrine can be explained as the linking of physiological and psychological processes. To understand the points of reference in the excerpts from Sandiford (1942), it helps to know that Thorndike’s beliefs about learning were partly founded on Alexander Bain’s view of psychology, which held that all knowledge is based on physical sensations rather than thoughts or ideas (Bain, 1873). Bain (1818–1903) founded the academic
journal called Mind, the first
journal of psychology and analytical philosophy. He postulated
an “associationist treatment of
higher mental processes” (Wade, 2001, p. 781).
Excerpts from “Connectionism: Its Origin and Major Features”
By P. Sandiford
Features of Connectionism
The following outline gives the main distinguishing features of
connectionism:
1. Connectionism is an outgrowth of the association doctrine,
especially as pro-
pounded by Alexander Bain. Thorndike was a pupil of William
James, some of whose
teachings were derived from Bain and the British
associationists. Connectionism,
therefore, through associationism, has its roots deep in the
psychological past.
2. Connectionism is a theory of learning, but as learning is
many-sided, connectionism
almost becomes a system of psychology. It is as a theory of
learning, however, that it
must stand or fall.
3. Connectionism has an evolutionary bearing in that it links
human behavior to that
of the lower animals. Thorndike’s first experiments were with
chicks, fish, cats, and,
later, with monkeys. From his animal experiments he derived
his famous laws of
learning.
4. Connectionism boldly states that learning is connecting. The
connections presum-
ably have their physical basis in the nervous system, where the
connections between
neuron and neuron explain learning. Hence, connectionism is
also known as the
synaptic theory of learning.
5. Connectionism is atomistic rather than holistic or organismic,
since it stresses the
analysis of behavior in order to discover the elements that are
connected or bonded
together. The sum total of a man’s life can be described by a list
of all the situations
he has encountered and the responses he has made to them. [. . .]
6. The connectionist principle of associative shifting (which
suggests that if a response
to a stimulus is sustained even if the stimulus is gradually
changed, the same
response will be likely in a new situation) has relationships with
Pavlovian condi-
tioning, which Thorndike regards as a special case of
associative learning.
7. Connectionism has also some affinities with Watsonian
behaviorism, which sug-
gested that introspection was not observable and thus not
scientific, stressing the
mechanistic aspects of behavior. Neither one finds it necessary
to evoke a soul in
order to explain behavior. Connectionism breaks with
behaviorism in regard to the
stress it places on the hereditary equipment of the behaving
organism.
8. Some connections are more natural than others. We grow into
reflexes and instincts
without very much stimulation from the environment except
food and air. In other
words, we mature into reflexes and instincts, but we have to
practice or exercise
in order to learn our habits. These hereditary patterns of
behavior (reflexes and
instincts) form the groundwork of learning. Most acquired
connections are based
on them and, indeed, grow out of them. Even such complex
bonds as those which
represent capacities (music, mathematics, languages, and the
like) have a hereditary
basis.
9. According to connectionism those things we call intellect and
intelligence are
quantitative rather than qualitative. A person’s intellect is the
sum total of the bonds
(associations) he has formed. The greater the number of bonds
he has formed, the
higher is his intelligence.
10. [. . .] Connectionism, above all other theories of learning,
seems to be one that the
classroom teacher can appreciate and apply. While the statistics
which summarize
the experiments have been decried as the products of a
mechanistic conception of
behavior, nevertheless they have done more to make education a
science than all the
theorizing of the past 2,000 years.
[. . .] Thorndike was such a voluminous writer that it is difficult
to summarize his position
on any single question, or, indeed, to pin him down to a specific
position. In order to remove
any doubt the reader may have on the matter, the following
recent statement of Thorndike’s
position is given:
A man’s life would be described by a list of all the situations
which he encoun-
tered and the responses which he made to them, including
among the latter
every detail of his sensations, percepts, memories, mental
images, ideas, judg-
ments, emotions, desires, choices, and other so-called mental
facts.
[. . .] A man’s nature at any given stage would be expressed by
a list of the responses (Rs)
which he would make to whatever situations or state of affairs
(Ss) could happen to him,
somewhat as the nature of a molecule of sugar might be
expressed by a list of all the reactions
that would take place between it and every substance which it
might encounter.
There would be one important difference, however. [. . .] In
human behavior our ignorance
often requires the acknowledgment of the principle of multiple
response or varied reaction
to the same S by a person who is, so far as we can tell, the same
person. (See Figure 1.1 for a
specific example.) [. . .]
If John Doe were really the same person in every particular way
on 100 occasions he would
always respond to S in one same way at each of its 100
occurrences, but he will not be. Even
when we can detect no differences in him there will be subtle
variation in metabolism, blood
supply, etc. [. . .]
The Associationistic Background
Ideas related to associationism date back to Aristotle, although
his view differed much from
our current understanding (Sandiford, 1942). Hence, there is a
large gap in associationism’s
history. Table 1.1 is adapted from the writing of Sandiford
(1942), and can help put into per-
spective the maturation of the ideas connected with
associationism. Each theorist brought
additional perspectives to this model for learning, and although
Table 1.1 provides only a broad
overview, the timeline demonstrates how the perspectives
changed as time moved forward.
Other Backgrounds of Connectionism
If Thorndike be regarded as the king-pin of connectionism, then
three main streams of influ-
ence may be found in his work. The first, that of associationism,
has already been traced. Bain
influenced Thorndike’s teaching both directly and through
William James. [. . .]
For experimentation on the learning ability of animals, new
apparatus, new devices, new
methods had to be invented. Thorndike introduced the maze, the
puzzle box, and the signal or
choice reaction experiment, all of which have become standard
equipment in animal psychol-
ogy and have been employed in thousands of studies since that
day. Figure 1.2 provides an
illustration of a puzzle box.
Thorndike’s Animal Intelligence, completed in 1898 as his
doctoral dissertation, not only was
the starting point of animal psychology as a science, but also
went far toward establishing
stimulus-response as the cornerstone of psychology. It is also
the source of the famous laws
of learning. [. . .]
Figure 1.1: Example of possible reactions to a stimulus
Psychologist Edward Thorndike proposed that humans have varied responses to the same incident or stimulus. However, he acknowledged that there are hereditary patterns of behavior such as reflexes.
S = Man is yelled at by a stranger.
R = Man smiles and walks away.
R = Man yells back at the stranger and storms away.
R = Man reacts physically to the stranger yelling and begins hitting him.
© Bridgepoint Education, Inc.
The Laws of Learning
Probably the best known of the contributions that connectionism
has made to educational
theory and practice are the so-called laws of learning. They are
not absolute laws, but rather
are they to be regarded simply as comprehensive formulations
of the rules which learning
obeys.
The laws usually quoted are those given in Vol. II of
Thorndike’s Educational Psychology:
The Psychology of Learning (1913). These include the three
major laws: effect, exercise or
frequency, and readiness. [. . .] These laws grew out of the
experiments with animals, coupled
with such influences as the writings of Bain, Romanes, Lloyd
Morgan, Wilhelm Wundt, and
others, and have been modified by further experiments in which
human beings acted as the
subjects (Thorndike, 1932). New elements injected into the laws
of learning are belonging-
ness, impressiveness, polarity, identifiability, availability, and
mental system. This shows clearly
enough that the laws are not to be regarded as a closed system,
complete from the start, but
merely as tentative summaries of our knowledge of the way in
which learning takes place.
They will be discarded or modified whenever experiments
disclose that such is necessary or
desirable.
Table 1.1: Overview of associationistic milestones

Aristotle (384–322 BCE)
• Introduced the ideology of associations.
• Suggested that we could not perceive two sensations as one—that they would combine or fuse into one.

Thomas Hobbes (1588–1679)
• Suggested sequences of thought could be casual and illogical, as in dreams, or orderly and regulated as by some design.
• Suggested that hunger, sex, and thirst are physiological needs.

John Locke (1632–1704)
• Suggested “association of ideas”: Representations arise in consciousness.

David Hartley (1705–1757)
• Suggested that sensation (pleasure vs. pain) was generated by wave vibrations in the nerves.

David Hume (1711–1776)
• Noted that the associations in cause and effect are affected when additional objects are introduced.

James Mill (1773–1836)
• Advanced associationism to include more complex emotional states within the pain vs. pleasure sensation model.

Thomas Brown (1778–1820)
• Suggested nine secondary laws that strengthened Aristotle’s laws of association.
• Understood association as an active process of an active, holistic mind.

Alexander Bain (1818–1903)
• Suggested trial-and-error learning, reflexes, and instincts as the bases of habits, individual differences, and the pleasure-pain principle in learning.

Edward Thorndike (1874–1949)
• Suggested the theory of connectionism.
• Suggested laws of learning.

Adapted from “Connectionism: Its Origin and Major Features” by P. Sandiford, in N. B. Henry (Ed.), The Forty-First Yearbook of the National Society for the Study of Education: Part II, The Psychology of Learning (pp. 102–108), 1942. Blackwell Publishing. © National Society for the Study of Education. Adapted with permission.
Figure 1.2: Thorndike’s puzzle box
In Thorndike’s design, a dish of food was placed outside of the
box, visible through the slats in the box.
Thorndike found that animal subjects placed in the box would
eventually locate the release apparatus,
and the time before the activation of this response was shorter
with each subsequent trial.
Adapted from Animal Intelligence (p. 30), by E. L. Thorndike,
1911, New York, NY: Macmillan.
The Law of Effect
[. . .] A modifiable bond is strengthened or weakened as
satisfaction or annoyance attends its
exercise. With chickens and cats, Thorndike had used as
motivating agents in their behavior
such original satisfiers as food and release from confinement for
the hungry cat, company
for the lonely chicken, and so forth. These acted as rewards for
certain actions which became
stamped in and learned. Thorndike really took the law of effect
for granted at first, as so many
before him had done. Gradually, however, it became one of his
most important principles of
education. [. . .]
In propounding the law of effect, Thorndike thought that the
two effects—satisfiers and
annoyers—were about equally potent, the one in stamping in the
connection, the other in
stamping it out. If a preference was indicated it was toward the
side of rewards, although he
explicitly asserted that rewards or satisfiers following responses
increased the likelihood of
repetitions of the connections so rewarded, while punishments
decreased the likelihood of
recurrence of the punished connection. [. . .]
The manner in which the confirming reaction develops and
operates is as follows: The con-
firming reaction is at first an aftereffect of the S → R situation
(where S is a stimulus and R is
a response), thus:
S → R → Confirming Reaction
Afterwards it functions as a force connecting and binding S to
R, thus:
S → Confirming Reaction → R
The confirming action is independent of a pleasurable result,
since pain may also set it in
action provided it is close enough to the satisfier in the
succession of connections. However,
it must not be thought that the effect of pain or the influence of
a punishment, which is an
annoying aftereffect, is exactly the opposite of the effect or
influence of a reward upon the
bond to which it belongs and of which it is the aftereffect.
It does not directly, invariably, and inevitably weaken the
mental connection. The influence of
reward or punishment is thus seen to depend upon what it leads
the person to do. The reward
tends to arouse the confirming reaction and so cause the
continuance or repetition of the
connection. Punishment does not necessarily lead to the arousal
of a tendency to discontinue
the punished connection or to repeat it less often, nor does it
necessarily stimulate a connec-
tion of an opposite kind. It arouses whatever original behavior
or past experience has linked
to that particular annoying aftereffect in those particular
circumstances. This may be to run
away, to scream, or to perform other useless acts. Punishments,
compared with rewards, are
very unreliable forces in learning. Rewards are dependable
because they arouse confirming
reactions.
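The asymmetry described here, where rewards reliably strengthen a connection while punishments act only indirectly, can be made concrete with a small simulation. The Python sketch below is not drawn from Thorndike or Sandiford; it is a toy model written for illustration, and every name and numeric value in it is an arbitrary assumption.

```python
import random

# Toy illustration of the law of effect (not Thorndike's own formulation).
# Each stimulus-response (S-R) bond has a strength; responses are emitted
# with probability proportional to bond strength.

bonds = {"press_lever": 0.1, "scratch_at_door": 0.1, "run_away": 0.1}

def emit_response():
    """Choose a response with probability proportional to its bond strength."""
    return random.choices(list(bonds), weights=list(bonds.values()), k=1)[0]

def reward(response, increment=0.2):
    """A satisfier arouses the confirming reaction and stamps the bond in."""
    bonds[response] += increment

def punish():
    """An annoyer does not simply subtract strength from the punished bond;
    it arouses whatever behavior has been linked to that annoyance, assumed
    (arbitrarily) here to be running away."""
    bonds["run_away"] += 0.05

for trial in range(50):
    response = emit_response()
    if response == "press_lever":
        reward(response)   # e.g., food is delivered
    else:
        punish()           # e.g., confinement continues

print(bonds)  # "press_lever" usually ends up with the strongest bond
```

Running the loop a few times shows why punishments are called “very unreliable forces in learning”: the rewarded bond grows directly, while punishment merely redirects behavior.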
Thorndike is inclined to believe that the confirming reaction is
a reaction of the neurons
themselves. It is a neuronic force of reinforcement of the
original response or it is the afteref-
fect of the total situation response (Thorndike, 1933, 1940). [. . .]
The Law of Exercise or Frequency
This law, like the law of effect, was at first almost taken for
granted by Thorndike. Does not
“practice make perfect”? Yet experience shows that exercise
does not always lead to perfec-
tion. Practice in sitting on a bent pin or in poking the fire with
the finger never leads to perfec-
tion in the art. The law of effect has to be invoked to explain
why practice does not necessarily
and invariably lead to improvement. Pleasurable reactions are
stamped in; painful ones are
stamped out. In terms of connectionism, repetition tends to
make the bond permanent. [. . .]
The law of exercise or frequency has two parts, use and disuse.
The law of use is stated: When a
modifiable connection is made between a situation and a
response, that connection’s strength
is, other things being equal, increased. The law of disuse runs:
When a modifiable connec-
tion is not made between a situation and a response over a
length of time, that connection’s
strength is decreased. The phrase “other things being equal”
refers mostly to the effect, the
satisfyingness or annoyingness of the situation. In other words,
the more you are able to do or
apply something to differing contexts, the strength of the
connection (what has been learned)
increases. When the concept cannot be used in varying
situations, reducing its usability, the
strength of what has been learned decreases.
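A compact way to see the laws of use and disuse working together is a short simulation. The sketch below is a hypothetical illustration written for this chapter rather than anything proposed by Thorndike; the gain and decay values are arbitrary assumptions.

```python
# Hypothetical illustration of the law of exercise: use strengthens a
# modifiable connection, disuse weakens it. Parameter values are arbitrary.

def exercise(strength, used, gain=0.10, decay=0.02):
    """Return the new connection strength after one period of time.

    used  -- True if the situation-response connection was exercised
    gain  -- increment applied when the connection is used
    decay -- loss applied when the connection goes unused
    """
    return strength + gain if used else max(0.0, strength - decay)

strength = 0.0
for day in range(30):          # a month of daily practice
    strength = exercise(strength, used=True)
for day in range(30):          # a month of neglect
    strength = exercise(strength, used=False)

print(round(strength, 2))      # weaker than at its peak, per the law of disuse
```

The point of the model is simply that strength accumulates while the connection is exercised and drains away while it is not.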
Watson, the behaviorist, claims that frequency and recency
explain learning and that it is
unnecessary to invoke the law of effect. The successful action
in maze learning, for example,
must occur in every series; therefore, the successful action is
learned mainly through fre-
quency. Apparently, Watson did not realize that unsuccessful
actions within the maze were
often repeated more frequently than the final and successful
one. Yet it is the successful one
that is finally stamped in (Watson, 1914). [. . .]
The repetition of a situation, while tending to make a reaction
somewhat stereotyped, in and
of itself, is unproductive for learning. It causes no adaptive
changes and has no useful selective
power. Repetition of a connection, that is, the situation and its
particular response, results in a
real though somewhat small strengthening influence. Mere
repetition of a connection causes
learning, but the learning is slow. For example, if a
child is taught to sit in his or her seat after enter-
ing the room, but does not understand why or its
applicability, the child will sit but has not necessar-
ily learned the reasons for performing this behavior.
If the child learns that when entering a classroom, it
is important to sit as a procedure that ensures posi-
tive outcomes in the learning environment (such as
rewards) the child will be more apt to apply this in
other settings as well.
Repetition of a “connection with belonging” (that
is, the procedure that is applied “fits” the situation)
increases the likelihood of learned adaption to per-
form the behavior, even when the rewards may be
concealed or disguised. Belongingness is difficult to
describe but easy to illustrate. For example, the words of a
sentence belong together in a way
that the terminal word of one sentence and the initial word of
the next do not. An additional
example might include a child eating off a plate instead of
eating off the table. The behavior
makes logical sense to the individual. [. . .]
The Law of Readiness
Briefly the law of readiness may be stated: When a bond is
ready to act, to act gives satisfac-
tion and not to act gives annoyance. When a bond which is not
ready to act is made to act,
annoyance is caused. Examples of a bond might include starting
an exercise program, asking
for someone’s hand in marriage, or starting a new career. If a
person is not ready to begin
exercising, marry, or start a new career, he or she will likely
feel annoyed by any pressure to
do so. [. . .]
Modifications and Additions to the Laws of Learning
Thorndike’s later experiments on learning, using human beings
as subjects, led to a modifica-
tion of the laws of exercise and effect. Numerous additions and
modifications were also made
and new terms—belongingness, impressiveness, vividness,
polarity, identifiability, availability,
and mental systems—found their way into the vocabulary of
connectionism.

Bigandt_Photography/iStock/Thinkstock
Learning how to write and using that skill in different situations over the course of someone’s life is an example of the law of exercise or frequency.

1. Belongingness: A factor of great importance in the learning process.
Example: Various words of a sentence fit or belong together; a sequence of numbers may belong together just because they are all numbers and not anything else, but
some number sequences may possess more belongingness than
others. Thus 2, 4, 8,
16, etc., exhibit more belongingness than 1, 3, 4, 2, 5, 11, 13,
15.
2. Impressiveness: The strength or intensity of a stimulus or a
situation.
Example: Loud sounds are considered stronger and more
impressive than less intense
ones. Stimuli attended to, that is, in the focus of consciousness,
are more impressive
than marginal elements.
3. Vividness: The recognizability of a word (Miller & Dost,
1964).
Example: In some experiments, using word-number paired
associates such as dinner
26, basal 83, divide 37, kiss 63, the number of correct number
associations with kiss
and dinner, both impressive words, is larger than the number of
associations made
with basal and divide, both weak words.
4. Polarity: The tendency for stimulus-response sequences to
function more readily in
the order they were practiced than in the opposite order.
Example: Using foreign and vernacular phrases such as raison d’être; ohne Hast, ohne Rast; exeunt omnes; facilis descensus; obiter dicta, etc., it was
shown that the ends could
be supplied when the beginnings were given, more readily than
the beginnings could
be given when the ends were supplied; the first half evokes the
second half more often
than the second evokes the first.
5. Identifiability: If the connection can be easily identified it is
easily learned.
Example: Some concepts such as times, numbers, weights,
colors, mass, density, etc.,
have to be analyzed out and made identifiable before they can
be profitably used
by us.
6. Availability: The accessibility of the response.
Example: When something is easier to attain, the response to it is more easily accessible.
7. Mental systems: The habituation; limited physiological or
emotional response to a
frequently repeated stimulus (one’s habit).
Example: If in paper and pencil association experiments, the
stimulus word dear
evoked the response sir, this would be regarded as a simple
habit; but if it evoked
fear, some mental system must be at work.
[. . .] These modifications and additions to the laws of learning
do not destroy the main fabric
of the connectionist doctrine. Indeed, they illustrate one
important feature of connectionism,
namely, the willingness of its supporters to modify their
teachings and beliefs when experi-
mental findings are not in harmony with them. [. . .]
Source: Sandiford, P. (1942). Connectionism: Its origin and
major features. In N. B. Henry
(Ed.), The forty-first yearbook of the National Society for the
Study of Education: Part II, The
psychology of learning (pp. 97–140). Blackwell Publishing. ©
National Society for the Study
of Education.
The theory of connectionism and laws of learning present clear
attributes and ideas about
learning behavior. Since their introduction in the early 1900s,
Thorndike’s insightful sugges-
tions, based on previous research, have left their mark on
research about learning and continue
to pose implications about how we learn. As you learn about
other areas where behaviorism is
applied in the learning domain, continue to consider how each
was derived and how they have
influenced the more modern theories we will discuss in future
chapters.
1.3 Principles of Conditioning
Conditioning and learning have been core topics in psychology
since the turn of the 20th century
and are aligned with the transformation of associative learning
concepts. Therefore, familiarity
with this area of learning is critical to an advanced education in
psychology, as well as a more
developed understanding of behaviorism and its evolution. For
this section of the chapter, we
will discuss conditioning. Section 1.4 will explore how
conditioning is then applied in the field
of learning. There are two types of conditioning: classical and
operant. Though both types have
an associative property, there are also clear differences between
the two. Classical condition-
ing involves repeatedly pairing two stimuli so that eventually
one of the stimuli prompts an
involuntary response that previously the other caused on its
own. Think of the classic example of
Pavlov’s dog: Repeatedly pairing food with a tone eventually
caused, or conditioned, the dog to
salivate at the tone alone.
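If it helps to see the pairing procedure spelled out step by step, the short Python sketch below mimics it. This is a deliberately simplified illustration written for this chapter, not a model taken from Pavlov or from the excerpts that follow; the increment, threshold, and number of trials are arbitrary assumptions.

```python
# Simplified sketch of classical conditioning: repeatedly pairing a neutral
# stimulus (tone) with an unconditioned stimulus (food) until the tone alone
# elicits the response. Threshold and increment values are arbitrary.

association = 0.0          # strength of the tone-food association
THRESHOLD = 1.0            # strength needed for the tone alone to elicit salivation

def present(tone=False, food=False):
    global association
    if tone and food:                      # a paired training trial
        association = min(THRESHOLD, association + 0.2)
    salivates = food or (tone and association >= THRESHOLD)
    return salivates

for trial in range(5):                     # acquisition: tone predicts food
    present(tone=True, food=True)

print(present(tone=True, food=False))      # True: the tone alone now elicits a CR
```

Before the loop runs, present(tone=True, food=False) returns False, which corresponds to the tone being a neutral stimulus; after enough paired trials it returns True, the conditioned response.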
In contrast, operant conditioning (also referred to as
instrumental conditioning or Skinner-
ian conditioning) introduces consequences to the associative
relationship between stimuli and
responses. Rather than pairing stimuli to provoke the same involuntary response, consequences are used to encourage or discourage a desired, voluntary response. In Figure 1.3,
for example, two types of rein-
forcement (positive and negative) are used to maintain the
desired response, and two types of
punishment (again, positive and negative) are used to change
the behavior. In this case, the child
being quiet at the physician’s office is the desired behavior.
Figure 1.3: Example of operant conditioning
Operant conditioning uses consequences to encourage or discourage a specific, voluntary response, rather than pairing stimuli to provoke the same involuntary response, as in classical conditioning.
© Bridgepoint Education, Inc.
Positive reinforcement: Child is quiet while in the physician’s office. → Parent gives positive reinforcement by offering a reward such as TV time. → The next time in a professional environment, the child is again quiet.

Negative reinforcement: Child is quiet while in the physician’s office. → Parent gives negative reinforcement by reducing the child’s chores. → The next time in a professional environment, the child is again quiet.

Positive punishment: Child misbehaves in the physician’s office. → Parent gives positive punishment by giving the child additional chores. → The next time in a professional environment, the behavior improves.

Negative punishment: Child misbehaves in the physician’s office. → Parent gives negative punishment by taking away the child’s TV time. → The next time in a professional environment, the behavior improves.
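Because the four cells of Figure 1.3 are easy to mix up, it can help to see them written out as a simple lookup. The Python sketch below was written for this discussion and is not part of the original figure; it only encodes the general rule that reinforcement (adding something pleasant or removing something unpleasant) makes a behavior more likely, while punishment makes it less likely.

```python
# The four operant-conditioning consequences from Figure 1.3, expressed as a
# lookup: (something is added?, that something is pleasant?) -> label.
CONSEQUENCES = {
    (True,  True):  "positive reinforcement",   # add TV time -> behavior repeats
    (False, False): "negative reinforcement",   # remove chores -> behavior repeats
    (True,  False): "positive punishment",      # add extra chores -> behavior drops
    (False, True):  "negative punishment",      # take away TV time -> behavior drops
}

def effect_on_behavior(label):
    """Reinforcement makes the behavior more likely; punishment, less likely."""
    return "more likely" if "reinforcement" in label else "less likely"

for (added, pleasant), label in CONSEQUENCES.items():
    action = "adds" if added else "removes"
    quality = "a pleasant" if pleasant else "an unpleasant"
    print(f"Parent {action} {quality} stimulus -> {label}: behavior becomes "
          f"{effect_on_behavior(label)}")
```

Note that “positive” and “negative” refer to whether a stimulus is added or removed, not to whether the consequence is good or bad.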
Each of these concepts will be more fully addressed in the next
two series of excerpts. The first
discusses classical conditioning and is from Clark (2004). The
article will go into detail about the
differing types of stimuli (conditioned versus unconditioned).
The second series of excerpts discusses operant conditioning
and is from Macias (2016). It will
provide a deeper look into reinforcers and punishments. As you
read, compare and contrast
these two types of conditioning and consider how, with each
new development, more questions
arise about how associations occur and if they affect learning.
Excerpts from “The Classical Origins of Pavlov’s Conditioning”
By R. E. Clark
Classical Conditioning
In the most basic form of classical conditioning, the stimulus
that predicts the occurrence of
another stimulus is termed the conditioned stimulus (CS) (in
Pavlov’s experiment, the tone).
The predicted stimulus is termed the unconditioned stimulus
(US) (in Pavlov’s experiment,
the food). The CS is a relatively neutral stimulus that can be
detected by the organism, but
does not initially induce a reliable behavioral response. The US
is a stimulus that can reliably
induce a measurable response from the first presentation. The
response that is elicited by
the presentation of the US is termed the unconditioned response
(UR) (in Pavlov’s experi-
ment, the drool as a result of the food). The term
“unconditioned” is used to indicate that
the response is “not learned,” but rather it is an innate or
reflexive response to the US. With
repeated presentations of the CS followed by US (referred to as
paired training) the CS begins
to elicit a conditioned response (CR) (in Pavlov’s experiment,
the drool as a result of the
tone alone). Here the term “conditioned” is used to indicate that
the response is “learned.” See
Figure 1.4 for an illustration of these relationships.
Figure 1.4: A typical classical conditioning procedure
An unconditioned stimulus (US), food, leads to an
unconditioned response (UR), salivation. Introducing a
conditioned stimulus (CS) of a tone before the food’s
presentation results in the tone eventually creating
a conditioned response (CR) of salivation, even without food.
From Psychology of Learning (p. 47), by D. A. Lieberman,
2012, San Diego, CA: Bridgepoint Education, Inc. Copyright
2012 by Bridgepoint
Education, Inc.
Conditioning: CS (tone) + US (food) → UR (salivation)
Result: CS (tone) → CR (salivation)
Edwin Burket Twitmyer (1873–1943)
The phenomenon of classical conditioning was discovered
independently in the United States
and Russia around the turn of the 19th century. In the United
States, Edwin B. Twitmyer made
this discovery at the University of Pennsylvania while finishing
his dissertation work on the
“knee-jerk” reflex. When the patellar tendon is lightly tapped
with a doctor’s hammer, the
well-known “knee-jerk” reflex is elicited. Twitmyer had
initially intended to study the mag-
nitude of the reflex under normal and facilitating conditions
(Figure 1.5). In the facilitating
conditions the subjects were asked to verbalize the word “ah,”
or to clench their fists, or to
imagine clenching their fists (Twitmyer, 1902/1974). A bell that
was struck one-half second
before the patellar tendon was tapped served as signal for the
subjects to begin verbalizing or
fist clenching (or imagining fist clenching). Twitmyer observed:
[D]uring the adjustment of the apparatus for an earlier group of
experiments
with one subject . . . a decided kick of both legs was observed
to follow a tap of the
signal bell occurring without the usual blow of the hammers on
the tendons. . . .
Two alternatives presented themselves. Either (1) the subject
was in error in
his introspective observation and had voluntarily moved his
legs, or (2) the true
knee jerk (or a movement resembling it in appearance) had been
produced by a
stimulus other than the usual one. (as cited in Irwin, 1943, p.
452) [. . .]
Twitmyer apparently did not fully appreciate the potential
significance of this finding beyond
recording this initial observation, and the work was never
extended. It has been suggested
that Twitmyer’s failure to systematically investigate this
phenomenon and the lack of interest
exhibited by his colleagues who heard the presentation was
likely due in part to the prevailing
American zeitgeist where interest in delineating the components
of consciousness through
introspection was the principal perspective (Irwin, 1943; Coon,
1982). Thus, Twitmyer and
his contemporaries would have been predisposed to undervalue
the usefulness, to the field
of psychology, of something as basic as a modifiable reflex.
This was not the case in Russia.
Figure 1.5: Twitmyer’s “knee-jerk” reflex experiment
This photograph (circa 1903) shows a young subject and the
experimental apparatus Twitmyer used to
measure the magnitude of the knee-jerk reflex (see
http://www.psych.upenn.edu/history/twittext.htm
for details).
University of Pennsylvania Archive, photographer unknown.
Ivan Petrovich Pavlov (1849–1936)
The Russian discovery of classical conditioning comes from the
pioneering work of Ivan
Petrovich Pavlov. [. . .] In 1904, Pavlov was awarded the Nobel
Prize in medicine for his work
on the physiology of digestion. This early research, which used
dogs as experimental subjects,
set the stage for observing the phenomenon of classical
conditioning. As early as 1880, Pavlov
and his associates observed that sham feedings, in which food
was eaten but failed to reach
the stomach (being lost through a surgically implanted
esophageal fistula), produced gastric
secretions, just like real food.
Pavlov’s laboratory modified this preparation in order to
simplify the forthcoming studies.
Rather than measure gastric secretions, they began measuring
salivation (see Figure 1.6).
Salivation was chosen because an efficient and highly practical
method of measuring saliva-
tion using a permanently implanted fistula had just been
developed in the laboratory (Pavlov,
1951; Windholz, 1986). In 1897, Stefan Wolfson (also
translated as Sigizmund Vul’fson), a
doctoral student of Pavlov, made an important observation:
We place before the nose of the dog a glass of carbon
bisulphide . . . from its
two salivary glands flows saliva . . . we stimulate the dog a few
times with
the same glass of carbon bisulphide. The saliva flows each time.
Now we sub-
stitute surreptitiously an identical glass containing water. The
dog salivates
again, although with a smaller quantity of saliva. (translated in
Windholz,
1986, p. 142)
Figure 1.6: Apparatus used in Pavlov’s study
Apparatus used in Pavlov’s study of salivary conditioning in
dogs. Saliva flowed through a tube connected
to the dog’s cheek and traveled to another room, where it could
be recorded.
Adapted from “The Method of Pawlow in Animal Psychology,”
by R. M. Yerkes & S. Morgulis, 1909, Psychological Bulletin,
6, 265. Copyright
1909 by R. M. Yerkes & S. Morgulis. Adapted with permission.
Pavlov immediately recognized the significance of these
findings, findings that would ulti-
mately lead him to change the direction of his research to
explore this phenomenon. His ini-
tial results were officially presented to the International
Congress of Medicine held in Madrid,
Spain, in 1903. This report was entitled “Experimental
Psychology and Psychopathology in
Animals.” [. . .]
The Emergence of Classical Conditioning in the United States
Pavlov’s work on classical conditioning was essentially
unknown in the United States until
1906, when his lecture “The Scientific Investigation of the
Psychical Faculties or Processes
in the Higher Animals” was published in the journal Science
(Pavlov, 1906). In 1909 Rob-
ert Yerkes (1876–1956), who would later become president of
the American Psychological
Association, and Sergius Morgulis published an extensive
review of the methods and results
obtained by Pavlov, which they described as “now widely
known as the Pawlow [sic] salivary
reflex method” (Yerkes & Morgulis, 1909, p. 257).
Initially Pavlov and his associates used the term conditional
rather than conditioned. Yet Yer-
kes and Morgulis chose to use the term conditioned. They
explained their choice of terms in
a footnote:
Conditioned and unconditioned are the terms used in the only
discussion
of this subject by Pawlow [sic] which has appeared in English.
The Russian
terms, however, have as their English equivalents conditional
and uncondi-
tional. But as it seems highly probable that Professor Pawlow
[sic] sanctioned
the terms conditioned and unconditioned, which appear in the
Huxley lecture
(Lancet, 1906), we shall use them. (Yerkes & Morgulis, 1909, p.
259)
The terms conditioned reflex and unconditioned reflex were
used during the first two decades
of the 20th century, during which time this type of learning was
often referred to as “reflex-
ology.” In 1921, the first textbook devoted to conditioning
(General Psychology in Terms of
Behavior) adopted the terms conditioned and unconditioned
response to replace the term
reflex (Smith & Guthrie, 1921). De-emphasizing the concept of
a reflex and instead using a
more general term like response allowed a larger range of
behaviors to be examined with
conditioning procedures. [. . .]
John B. Watson (1878–1958) championed the use of classical
conditioning as a research tool
for psychological investigations. During 1915, his student Karl
Lashley conducted several
exploratory conditioning experiments in Watson’s laboratory.
Watson’s presidential address,
delivered in 1915 to the American Psychological Association,
was entitled “The Place of the
Conditioned Reflex in Psychology” (Watson, 1916). Watson was
highly influential in the rapid
incorporation of classical conditioning into American
psychology, though this influence did
not appear to extend to his student. Lashley became frustrated
with his attempts to classically
condition the salivary response in humans (Lashley, 1916) and
permanently abandoned the
paradigm. In 1920, Watson’s work with classical conditioning
culminated in the now infa-
mous case of “Little Albert” (first mentioned in the Introduction
chapter).
Albert B. was an 11-month-old boy who had no natural fear of
white rats. Watson and Rosalie
Rayner used the white rat as a CS. The US was a loud noise that
always upset the child. By pair-
ing the white rat and the loud noise, Albert began to cry and
show fear of the white rat—a CR.
With successive training sessions over the course
of several months, Watson and Rayner were able to
demonstrate that this fear of white rats generalized
to other furry objects (Watson & Rayner, 1920). The
plan had been to then systematically remove this
fear using methods that Pavlov had shown would
eliminate or extinguish the conditioned response, in
this case, fear of furry white objects. Unfortunately,
“Little Albert,” as he has historically come to be
known, was removed from the study by his mother
on the day these procedures were to begin. Unfor-
tunately, there is no known reliable account of how
this experiment on classical conditioning of fear ulti-
mately affected Albert B. Nevertheless, this example
of classical conditioning may be the most famous
single case in the literature on classical conditioning.
The end of the beginning of classical conditioning
as a paradigm in the United States can be traced to
the 1927 publication of Pavlov’s book Conditioned
Reflexes, which was translated into English by a for-
mer student, G. V. Anrep (Pavlov, 1927). This made
all of Pavlov’s conditioning work available in Eng-
lish for the first time. The availability of 25 years’
worth of Pavlov’s research, in vivid detail, led to
increased interest in the experimental examination
of classical conditioning, an interest that has contin-
ued to this day. [. . .]
In 1935 B. F. Skinner entered this discussion in earnest when he published a paper titled "Two Types of Conditioned Reflexes and a Pseudo-Type" (Skinner, 1935). In this theoretical paper, Skinner attempted to add clarity and structure to the distinction between two types of conditioned reflexes. [. . .] It is clear that one type corresponds
to what would eventually be
termed operant conditioning and the second type corresponds to
Pavlov’s type of condition-
ing. [. . .]
Source: Clark, R. E. (2004). The classical origins of Pavlov’s
conditioning. Integrative Physi-
ological & Behavioral Science, 39(4), 279–294. Copyright ©
2004, Springer.
Operant Conditioning
First coined by behaviorist B. F. Skinner (1904–1990), the term operant describes behavior that operates on the environment and generates consequences (Skinner, 1953). Skinner proposed that when a behavior is reinforced, it becomes more likely to occur again; when a behavior is not reinforced, or is instead punished, it diminishes and may eventually be eliminated. These associations describe the core of operant conditioning. As noted at the start of this section, the following excerpts from Macias (2016) explain the roles of reinforcements and punishments in conditioning.
George Rinhart/Corbis Historical/Getty Images
Psychologist John B. Watson is well
known for the “Little Albert” case, in
which, over time, a young boy learned
to fear white rats. This is an example of
classical conditioning.
Excerpts from “Reinforcement”
By S. I. Macias
Types of Reinforcers
The range of possible consequences that can function as
reinforcers is enormous. To make
sense of this assortment, psychologists tend to place them into
two main categories: primary
reinforcers and secondary reinforcers. Primary reinforcers are
those that require little, if any,
experience to be effective. Food, drink, and sex are common
examples. While it is true that
experience will influence what would be considered desirable
for food, drink, or an appropri-
ate sex partner, there is little argument that these items,
themselves, are natural reinforcers.
Another kind of reinforcer that does not require experience is
called a social reinforcer. Exam-
ples are social contact and social approval. Even newborns show
a desire for social reinforc-
ers. Psychologists have discovered that newborns prefer to look
at pictures of human faces
more than practically any other stimulus pattern, and this
preference is stronger if that face is
smiling. Like the other primary reinforcers, experience will
modify the type of social recogni-
tion that is desired. Still, it is clear that most people will go to
great lengths to be noticed by
others or to gain their acceptance and approval.
Though these reinforcers are likely to be effective, most human
behavior is not motivated
directly by primary reinforcers. Money, entertainment, clothes,
cars, and computer games
are all effective rewards, yet none of these would qualify as
natural or primary reinforcers.
Because they must be acquired, they are called secondary
reinforcers. These become effec-
tive because they are paired with primary reinforcers. The
famous American psychologist B.
F. Skinner found that the sound of food being delivered was
sufficient to maintain a high rate
of bar pressing in experienced rats. Obviously, under normal
circumstances the sound of the
food occurred only if food was truly being delivered.
The account of how a secondary reinforcer becomes effective is called two-factor theory because it combines instrumental and Pavlovian conditioning (hence the label "two-factor"). For example, when a rat receives food for pressing a bar (positive reinforcement), a neutral stimulus is presented at the same time: the sound of the food dropping into the food dish. The sound is paired with a stimulus that
naturally elicits a reflexive
response; that is, food elicits satisfaction. Over many trials, the
sound is paired consistently
with food; thus, it will be conditioned via Pavlovian methods to
elicit the same response as the
food. Additionally, this process occurred during the
instrumental conditioning of bar pressing
by using food as a reinforcer.
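To make the two factors concrete, the following short Python sketch (an editorial illustration, not part of Macias's text) treats the Pavlovian factor as a value that the click sound accumulates over repeated food-plus-click pairings, and the instrumental factor as a check that the click can now stand in for food as a secondary reinforcer. The function names, the learning rate, and the threshold are illustrative assumptions.

# Editorial sketch: two-factor theory in miniature. All names and numbers
# are illustrative assumptions, not values from the excerpt.

def train_pairings(n_trials: int, alpha: float = 0.3) -> float:
    """Pavlovian factor: the click's conditioned value after n food+click pairings."""
    click_value = 0.0
    for _ in range(n_trials):
        click_value += alpha * (1.0 - click_value)  # each pairing moves the click toward food's value
    return click_value

def sustains_bar_pressing(click_value: float, threshold: float = 0.5) -> bool:
    """Instrumental factor: the click alone now works as a (secondary) reinforcer."""
    return click_value >= threshold

if __name__ == "__main__":
    value = train_pairings(20)
    print(round(value, 2), sustains_bar_pressing(value))  # e.g., 1.0 True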
This same process works for most everyday activities. For most
humans, money is an
extremely powerful reinforcer. Money itself, though, is not very
attractive. It does not taste
good, does not reduce any biological drives, and does not, on its
own, satisfy any needs. How-
ever, it is reliably paired with all of these things and therefore
becomes as effective as these
primary reinforcers. In a similar way, popular fashion in
clothing, hair styles, and personal
adornment; popular art or music; even behaving according to
the moral values of one’s family
or church group (or one’s gang) can all come to be effective
reinforcers because they are reli-
ably paired with an important primary reinforcer, namely, social
approval. The person who
will function most effectively as the approving agent changes
throughout life. One’s parents,
friends, classmates, teachers, teammates, coaches, spouse,
children, and colleagues at work
all provide effective social approval opportunities.
Reinforcers and Punishers
To maintain a reasonable degree of consistency, most
psychologists use the term “reinforce-
ment” exclusively for a process of using rewards to increase
voluntary behavior. The field of
study most associated with this technique is instrumental
conditioning. In this context, the
formal definition states that a reinforcer is any consequence to a
behavior that is emitted in
a specified situation that has the effect of increasing that
behavior in the future. It must be
emphasized that the behavior itself is not sufficient for the
consequence to be delivered. The
circumstances in which the behavior occurs are also important.
Thus, standing and cheering
at a basketball game will likely lead to approval (social
reinforcement), whereas this same
response is not likely to yield acceptance if it occurs at a
funeral.
A punisher is likewise defined as any consequence that reduces
the probability of a behav-
ior, with the same qualifications as for reinforcers. A behavior
that occurs in response to a
specified situation may receive a consequence that reduces the
likelihood that it will occur
in that situation in the future, but the same behavior in another
situation would not gen-
erate the same consequence. For example, drawing on the walls
of a freshly painted room
would usually result in an unpleasant consequence,
whereas the same behavior (drawing) in one’s color-
ing book would not.
The terms "positive" and "negative" are also much more tightly defined. Earlier usage confused them with the emotional values of good or bad, which required the counterintuitive claim that a positive reinforcer is withheld, or a negative reinforcer presented, even when there is clearly no reward and the intent is to reduce the probability of the response (as described by Kimble). A better, less confusing
definition is to consider “positive” and “negative” as
arithmetic symbols, as for adding or subtracting. They
therefore are the methods of supplying reinforcement
(or punishment) rather than descriptions of the rein-
forcer itself. Thus, if a behavior occurs, and as a conse-
quence something is given that will result in an increase
in the rate of the behavior, this is positive reinforce-
ment. Giving a dog a treat for executing a trick is a good
example. One can also increase the rate of a behavior
by removing something on its production. This is called
negative reinforcement. A good example might be
when a child who eats his or her vegetables does not
have to wash the dinner dishes. Another example is the
annoying seat belt buzzer in cars. Many people comply
with the rules of safety simply to terminate that aver-
sive sound.
The descriptors “positive” and “negative” can be applied to
punishment as well. If something is
added on the performance of a behavior which results in the
reduction of that behavior—that
is positive punishment. On the other hand, if this behavior
causes the removal of something
Seanfboggs/iStock/Thinkstock
Potty training a child is an example
of reinforcement, where a parent
may reward or cheer on the child
throughout the process to attain a
successful result.
that reduces the response rate—negative punishment. A dog
collar that provides an electric
shock when the dog strays too close to the property line is an
example of a device that deliv-
ers positive punishment. Loss of television privileges for
rudeness is an example of negative
punishment. See Table 1.2 for an overview of reinforcements
and punishments.
Table 1.2: Reinforcements and punishments

Positive reinforcement: adds something to the environment to encourage continuance of a desired behavior. Example: giving a child a reward (a treat, a toy, etc.).

Positive punishment: adds something to the environment to discourage continuance of an undesired behavior. Example: adding chores to a child's weekly duties.

Negative reinforcement: takes something away from the environment to encourage continuance of a desired behavior. Example: taking away a child's assigned chores for the week.

Negative punishment: takes something away from the environment to discourage continuance of an undesired behavior. Example: grounding a child from playing with his or her friends.
© Bridgepoint Education, Inc.
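One way to keep Table 1.2 straight is to reduce it to its two underlying questions: was something added or removed, and did the behavior become more or less likely afterward? The short Python sketch below is an editorial illustration of that mapping (not part of the excerpt); the function name and the examples in the comments are our own.

# Editorial sketch: classify a consequence by the two questions behind Table 1.2.

def classify_consequence(stimulus_added: bool, behavior_increased: bool) -> str:
    if stimulus_added and behavior_increased:
        return "positive reinforcement"   # e.g., giving a child a treat
    if stimulus_added and not behavior_increased:
        return "positive punishment"      # e.g., adding extra weekly chores
    if not stimulus_added and behavior_increased:
        return "negative reinforcement"   # e.g., taking away the week's chores
    return "negative punishment"          # e.g., grounding (removing playtime)

if __name__ == "__main__":
    print(classify_consequence(stimulus_added=False, behavior_increased=True))
    # -> negative reinforcement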
Why Reinforcers Work
Reinforcers (and punishers) are effective at influencing an
organism’s willingness to respond
because they influence the way in which an organism acquires
something that is desired, or
avoids something that is not desired. For primary reinforcers,
this concerns health and sur-
vival. Secondary reinforcers are learned through experience and
do not directly affect one’s
health or survival, yet they are adaptive because they are
relevant to those situations that are
related to well-being and an improved quality of life. Certainly
learning where food, drink,
receptive sex partners, or social acceptance can be located is
useful for an organism. Coming
to enjoy being in such situations is very useful, too. [. . .]
Patterns of Reinforcer Delivery
It is not necessary to deliver a reinforcer on every occurrence of
a behavior to have the
desired effect. In fact, intermittent reinforcement has a stronger
effect on the stability of
the response rate than reinforcing every response. If the
organism expects every response to
be reinforced, suspending reinforcement will cause the response
to disappear very quickly.
If, however, the organism is familiar with occasions of
responding without reinforcement,
responding will continue for much longer on the termination of
reinforcers.
There are two basic patterns of intermittent reinforcement: ratio
and interval. These pat-
terns, or rules, are known as schedules of reinforcement. Ratio
schedules are based on the
number of responses required to receive the reinforcer. Interval
schedules are based on the
amount of time that must pass before a reinforcer is available.
Both schedules have fixed and
variable types. On fixed schedules, whatever the rule is, it stays
that way. If five responses are
required to earn a reinforcer (a fixed ratio 5, or FR 5), every
fifth response is reinforced. A fixed
interval of 10 seconds (FI 10) means that the first response after
10 seconds has elapsed is
reinforced, and this is true every time (responding during the
interval is irrelevant). Variable
schedules change the rule in unpredictable ways. A VR 5
(variable ratio 5) is one in which, on
the average, the fifth response is reinforced, but it would vary
over a series of trials. A vari-
able interval of 10 seconds (VI 10) is similar. The required
amount of time is an average of 10
seconds, but on any given trial it could be different.
An example of a fixed-ratio (FR) schedule is pay for a specific
amount of work, such as
stuffing envelopes. The pay is always the same; stuffing a
certain number of envelopes
always equals the same pay. An example of a fixed-interval (FI)
schedule is receiving the
daily mail. Checking the mailbox before the mail is delivered
will not result in reinforcement.
One must wait until the appropriate time. A variable-ratio (VR)
schedule example is a
slot machine. The more attempts, the more times the player
wins, but in an unpredictable
pattern. A variable-interval (VI) schedule example would be
telephoning a friend whose
line is busy. Continued attempts will be unsuccessful until the
friend hangs up the phone,
but when this will happen is unknown. See Table 1.3 for an
overview of ratio and interval
schedules.
Table 1.3: Ratio and interval schedules of reinforcement

Fixed-ratio (FR): the reinforcer is delivered after a fixed number of responses. Example: piecework pay, where stuffing a set number of envelopes always earns the same pay.

Fixed-interval (FI): the reinforcer becomes available only after a fixed amount of time has passed. Example: checking the mailbox, which pays off only after the daily delivery time.

Variable-ratio (VR): the number of responses required varies unpredictably around an average. Example: a slot machine, which pays off after an unpredictable number of plays.

Variable-interval (VI): the amount of time required varies unpredictably around an average. Example: redialing a friend whose line is busy, which succeeds only after an unpredictable wait.
© Bridgepoint Education, Inc.
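The four schedules can also be expressed as a small piece of bookkeeping: ratio schedules count responses, interval schedules accumulate elapsed time, and variable schedules redraw the requirement around an average after each reinforcer. The Python sketch below is an editorial illustration of that logic only; the class name, the one-second default between responses, and the uniform range used for the variable schedules are assumptions rather than anything specified in the excerpt.

# Editorial sketch: when does a response earn the reinforcer under FR, FI, VR, or VI?
import random
from dataclasses import dataclass

@dataclass
class Schedule:
    kind: str        # "FR", "FI", "VR", or "VI"
    value: float     # required responses (ratio) or required seconds (interval)
    _count: int = 0
    _elapsed: float = 0.0
    _target: float = 0.0

    def __post_init__(self):
        self._target = self._draw()

    def _draw(self) -> float:
        # Fixed schedules keep the rule constant; variable schedules vary it
        # unpredictably around the same average.
        if self.kind in ("FR", "FI"):
            return self.value
        return random.uniform(0.5 * self.value, 1.5 * self.value)

    def respond(self, seconds_since_last: float = 1.0) -> bool:
        """Record one response; return True if this response earns the reinforcer."""
        self._count += 1
        self._elapsed += seconds_since_last
        if self.kind in ("FR", "VR"):
            met = self._count >= self._target
        else:  # FI or VI: responding during the interval is irrelevant
            met = self._elapsed >= self._target
        if met:
            self._count, self._elapsed = 0, 0.0
            self._target = self._draw()
        return met

if __name__ == "__main__":
    fr5 = Schedule("FR", 5)  # every fifth response is reinforced
    print([fr5.respond() for _ in range(10)])  # True on responses 5 and 10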
Response rates for fixed schedules follow a fairly specific
pattern. Fixed ratio schedules tend
to have a steady rate until the reinforcer is delivered; then there
is a short rest, followed by
the same rate. A fixed interval is slightly different. The closer
one gets to the required time, the
faster the response rate. On receiving the reinforcer there will
be a short rest, then a gradual
return to responding, becoming quicker and quicker over time.
This is called a “scalloped”
pattern. Exam studying illustrates the phenomenon nicely (though not strictly an FI schedule, it does have a temporal component): students are much more likely to study during the last few days before a test and very little during the days immediately after the test. As time passes, study behavior gradually begins again, becoming more concentrated the closer the next exam date comes.
Source: Macias, S. I. (2016). Reinforcement. In Salem Press
Encyclopedia of Health. Copyright
© EBSCO.
Classical and operant conditioning can often be difficult
concepts to understand at first glance,
and it can be helpful to think about how these types of learning
processes might happen in our
lives each day. For instance, have you ever rewarded your
children for doing what you asked?
As they became older, did you have to reward them every single
time, as you may have when
they were younger, or could you reward them every now and
again and still see the behavior
repeated? By fully understanding the principles of classical and
operant conditioning, you will
be more apt to identify—and perhaps even implement—differing
schedules of reinforcement in
your own life. The last section of this chapter will guide you
through two modern applications
of conditioning. Reinforcing Your Understanding: Conditioning
takes a closer look at Skinner’s
conditioning research.
Reinforcing Your Understanding: Conditioning
Refer to your e-book for an embedded video that considers
Skinner’s work in conditioning.
In his original research, Skinner used pigeons as subjects and
grain to teach the pigeons to
perform certain behaviors. Review this video to reinforce your
understanding of punishment
versus reinforcers and how the schedule and rate of reinforcers
affect learning.
1.4 Behaviorism Applied
Now that you are familiar with how behaviorism was shaped
and refined through continuous
research, consider how it can be applied in modern
environments. The excerpts in this section are
from two separate articles. Both selections demonstrate the
application of strategies based on
behaviorism. The first series of excerpts
is from Wells (2014) and illustrates how
such strategies are used to understand
consumer behaviors and then applied
to product marketing; consumer behavior research aims to identify why people buy what they buy. For example, an organization can draw on what it knows about its consumers when developing marketing campaigns, and those campaigns often apply behavioral principles. Do you recognize the example in the pictured advertisement? Does it trigger specific emotional responses or beliefs about the product? Do you use this specific brand of product? Many advertising decisions, and many of the consumer behaviors associated with the advertised products, are grounded in behaviorism.
Ullstein bild/Getty Images
Do the vibrant colors and illustrations in the Apple
iPod advertisements elicit a positive feeling? Clas-
sical conditioning in advertising generally assumes
that favorability toward a certain product develops
from a positive commercial or advertisement.
Excerpts from “Behavioural Psychology, Marketing, and
Consumer
Behaviour: A Literature Review and Future Research Agenda”
By V. K. Wells
Classical Conditioning in Marketing and Consumer
Behavior Research
[. . .] Allen and Janiszewski (1989), based on their work on
contingency awareness, provide
an anecdotal illustrative example of how classical conditioning
could work successfully and
be correctly used in advertising (a television commercial for
Diet Pepsi), in which most of the
work on classical conditioning in consumption and marketing
has taken place. They suggest
that:
This commercial features a repetitive musical jingle with a
series of brief
visual clips. The jingle lyrics—”Now you see it, now you don’t,
here you have
it, here you won’t”—are precisely coordinated with the image
presentation
. . . the CS (the brand) predicts the US (a slim female torso). In
each instance
“Now you see it, now you don’t” is sung as first the brand (CS)
and then a trim-
figured woman (US) is shown. (pp. 39–40)
Overall, there has been mixed support for classical conditioning
effects in advertising, but the
general suggestion is that positive attitudes toward an
advertised product (CS) might develop
through their association in a commercial with other stimuli that
are reacted to positively
(US), such as pleasant colors, music, and humor (Gorn, 1982).
Early work applying classical conditioning to advertising
appears to have been based on and
inspired by the work of Razran (1938), who paired a free meal
(US) with various political
statements (CS). He found that agreement with the slogans was
greater when people received
a free meal than when they did not. The work of Staats and
Staats (1958), who successfully
associated visually presented nonsense symbols (CS) with
several spoken words (US) such
as beauty, healthy, smart, and success, opened the door further
for a classical conditioning
approach to advertising. After the associative pairings, the
participants’ ratings of the CS indi-
cated that the core meaning in the US (i.e., either positive or
negative evaluation) had trans-
ferred to the nonsense syllables (Allen & Janiszewski, 1989). In a second experiment, Staats and Staats associated each of two national names ("Swedish" and "Dutch") with either
18 positive or 18 negative words. The national name paired with
positive words was later
evaluated more favorably than the one paired with negative
words. [. . .]
Acquisition
The first characteristic, acquisition, indicates that classically
conditioned responses do not
fully appear after only one pairing/trial, and the strength of the
response increases with the
number of pairings (McSweeney & Bierley, 1984). Whereas
early studies used only one or an
arbitrary number of pairings, experimenters quickly began
testing the optimum level of pair-
ings/trials, often experimenting with different numbers of
pairings in different experimental
groups. The focus of the first of the four experiments by Stuart,
Shimp, and Engle (1987) was
on testing the amount of conditioning produced by different numbers of CS-US pairings (1, 3, 10, and 20). They found that the groups given more pairings (10 and 20) demonstrated significantly higher levels of conditioning, and that conditioning generally grew stronger as the number of trials increased. Although other studies have used different trial numbers, there is still no agreement on an optimum number of trials for conditioning to occur.
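A simple saturating learning curve is enough to show why groups given more pairings show stronger conditioning. The Python sketch below is an editorial illustration only; the update rule and the 0.2 learning rate are assumptions and are not drawn from Stuart, Shimp, and Engle's procedure.

# Editorial sketch: conditioned strength grows, with diminishing returns,
# as the number of CS-US pairings increases.

def conditioned_strength(n_pairings: int, rate: float = 0.2) -> float:
    """Associative strength after n pairings, on a 0-to-1 scale."""
    strength = 0.0
    for _ in range(n_pairings):
        strength += rate * (1.0 - strength)  # each pairing closes part of the remaining gap
    return round(strength, 2)

if __name__ == "__main__":
    for n in (1, 3, 10, 20):
        print(n, conditioned_strength(n))  # 0.2, 0.49, 0.89, 0.99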
Extinction
Extinction is the prediction that the conditioned behavior will
disappear if the predictive
relationship between the CS and the US is broken by either
omitting the US entirely or by
presenting the CS and US randomly (McSweeney & Bierley,
1984). Till, Stanley, and Priluck
(2008) explored the characteristic of extinction empirically.
Their study paired brands with
celebrities and measured attitudes toward the brands after
conditioning. Attitudes increased
with the use of well-liked and relevant celebrities. They then
attempted to extinguish these
effects but found that, once paired, the pairings were difficult to
eliminate, with brand atti-
tudes still affected 2 weeks after the procedure (Till et al.,
2008). Till and Priluck (2000) stud-
ied the characteristic of generalization, or the extent to which a
response conditioned to one
stimulus transfers to similar stimuli. Through two experimental
procedures, they found that
attitudes conditioned to a particular brand (Garra mouthwash)
could be transferred (gener-
alized) to a product with a similar name (Gurra, Gurri, and
Dutti) in the same category, as well
as a product with the same name in a different category (soap).
[. . .]
Operant Conditioning in Marketing and Consumer
Behavior Research
In operant conditioning, behavior is shaped and maintained by
its consequences (Foxall,
1986), meaning that the rate at which a behavior will be
performed is directly related to the
consequences of that behavior performed previously. [. . .]
According to Skinner, each behav-
ioral act can be broken down into three key parts: (1) the
response/behavior (R); (2) the
reinforcement/punishment (S+/-), which is a consequence of the
behavior; and (3) a discrimi-
native stimulus (Sd ), which is a cue that signals the likelihood
of positive or negative conse-
quences arising from performing the behavior (Foxall, 1986,
2002). The three parts together,
labelled the three-term contingency, highlight that the
determinants of the behavior must
occur in the environment (Foxall, 1986, 1993):
Sd → R → S+/–
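Read programmatically, the three-term contingency says that the consequence delivered after a response R, in the presence of a given discriminative stimulus Sd, nudges the future probability of emitting R when that Sd reappears. The Python sketch below is an editorial illustration of that reading, not a model proposed by Wells or Foxall; the step size, the starting probability, and the store-sign example are assumptions.

# Editorial sketch: how consequences shift the likelihood of repeating R given the same Sd.

def update_probability(p_respond: float, consequence: str, step: float = 0.1) -> float:
    """Nudge the probability of repeating the response the next time the Sd appears."""
    if consequence == "S+":    # reinforcement strengthens the behavior
        p_respond += step * (1.0 - p_respond)
    elif consequence == "S-":  # punishment weakens it
        p_respond -= step * p_respond
    return p_respond

if __name__ == "__main__":
    # Sd = a "50% off" sign, R = entering the store, S+ = a satisfying purchase
    p = 0.5
    for outcome in ["S+", "S+", "S-", "S+"]:
        p = update_probability(p, outcome)
    print(round(p, 3))  # probability of entering next time the sign is seen (about 0.582)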
In general, behavior modifiers include positive and negative
reinforcement, and positive
and negative punishment. Positive reinforcement is generally a
reward or something that
strengthens the behavior (e.g., a pleasant experience or
satisfaction with a product, a posi-
tive response to a behavior), which likely leads the person to
buy the product again in future.
With negative reinforcement, the behavior is generally
performed to avoid unpleasantness
(e.g., buying a product to avoid an aggressive salesperson,
purchase and consumption of pain-
killers to relieve a headache; Simintiras & Cadogan, 1996).
Punishment is an aversive conse-
quence after a behavioral response and may lead to the
extinction of a behavior (Nord & Peter,
1980). An example of punishment is a product that does not do
the job it was designed to do
or is of poor quality, and thus the buyer no longer buys it.
Reinforcement, in both experimental procedures and real-life
situations, is provided on a
schedule. [. . .] Research has shown that intermittent schedules
of reinforcement develop high
rates of behavior resistant to extinction, and they are also more
economical because they use
fewer reinforcers, which can reduce the cost (Peter & Nord,
1982). Peter and Nord (1982)
suggest that most marketing activity in the real world
(differentiating brands and manipu-
lating marketing variables such as price and promotions) often
occurs on an intermittent
schedule.
In terms of marketing and consumer behavior, a full range of
behavior, such as actual purchas-
ing, visiting and browsing in a store, and searching for
information online, can be examined
under the three-term contingency. Foxall (1986, p. 404) also
documents that verbal behavior,
for example, sharing positive or negative word of mouth about a
product, can also be exam-
ined but notes that “behaviors which belong to different classes
(e.g. talking about how one
will vote and actually voting) will be consistent only when the
contingency of reinforcement
applicable to both are functionally equivalent.”
Discriminative stimuli serve to signal the probability of
behavior being reinforced and can
change the probability of a behavior being emitted. Nord and
Peter (1980) provide examples
of discriminative stimuli such as store signs (e.g., 50% off, buy
one get one free), store logos
(e.g., Kmart’s big red “K,” McDonald’s golden arches), or
distinctive brand marks (e.g., Levi’s,
Coca-Cola). Past learning history and experiences will have
taught customers that responding
to cues such as these in the past rewards them with satisfactory
value purchases. They may
also have learned that they are not rewarded when the symbols
or cues are absent. [. . .]
Source: Wells, V. K. (2014). Behavioural psychology,
marketing and consumer behaviour: A
literature review and future research agenda. Journal of
Marketing Management, 30(11/12),
1119–1158. Copyright © 2014 Routledge.
Behaviorism in Educational Environments
The second series of excerpts in this section is from Standridge
(2002). Standridge demonstrates
the application of behaviorism in education and considers the
importance of such strategies
when reinforcing preferred behaviors and discouraging
unwanted behaviors. Behavior modifi-
cation is an important strategy for creating positive
environments that support effective learn-
ing opportunities. The selection introduces the concepts of
modeling, cueing, and behavior modi-
fication. As you read, consider how similar strategies for
putting theory into practice could also
be used in organizations and family units.
Excerpts from “Behaviorism”
By M. Standridge
[ . . .] Behaviorist techniques have long been employed in
education to promote behavior that
is desirable and discourage that which is not. Among the
methods derived from behavior-
ist theory for practical classroom application are contracts,
consequences, reinforcement,
extinction, and behavior modification.
Contracts, Consequences, Reinforcement, and Extinction
Simple contracts can be effective in helping children focus on
behavior change. The relevant
behavior should be identified, and the child and counselor
should decide the terms of the
contract. Behavioral contracts can be used in school as well as
at home. It is helpful if teachers
and parents work together with the student to ensure that the
contract is being fulfilled. [. . .]
Consequences occur immediately after a behavior.
Consequences may be positive or nega-
tive, expected or unexpected, immediate or long term, extrinsic
or intrinsic, material or sym-
bolic (a failing grade), emotional/interpersonal, or even
unconscious. Consequences occur
after the “target” behavior occurs, when either positive or
negative reinforcement may be
given. Positive reinforcement is presentation of a stimulus that
increases the probability of a
response. This type of reinforcement occurs frequently in the
classroom. Teachers may pro-
vide positive reinforcement by:
• Smiling at students after a correct response.
• Commending students for their work.
• Selecting them for a special project.
• Praising students' abilities to their parents.
Negative reinforcement increases the probability of a response
that removes or prevents an
adverse condition. Many classroom teachers mistakenly believe
that negative reinforcement
is punishment administered to suppress behavior; however,
negative reinforcement increases
the likelihood of a behavior, as does positive reinforcement.
Negative implies removing a con-
sequence that a student finds unpleasant. Negative
reinforcement might include:
• Obtaining a score of 80% or higher makes the final exam
optional.
• Submitting all assignments on time results in the lowest grade
being dropped.
• Perfect attendance is rewarded with a “homework pass.”
Punishment involves presenting a strong stimulus that decreases
the frequency of a particu-
lar response. Punishment is effective in quickly eliminating
undesirable behaviors. Examples
of punishment include:
• Students who fight are immediately referred to the principal.
• Late assignments are given a grade of “0.”
• Three tardies to class results in a call to the parents.
• Failure to do homework results in after-school detention
(privilege of going home is
removed).
Table 1.4 provides a comparison and examples of
reinforcements and punishments. Also see
Reinforcing Your Understanding: Reinforcement and
Punishment in the Classroom for a more
in-depth example.
Extinction decreases the probability of a response by contingent
withdrawal of a previously
reinforced stimulus. Examples of extinction are:
• A student has developed the habit of saying the punctuation
marks when reading
aloud. Classmates reinforce the behavior by laughing when he
does so. The teacher
tells the students not to laugh, thus extinguishing the behavior.
• A teacher gives partial credit for late assignments; other
teachers think this is unfair;
the teacher decides to then give zeros for the late work.
• Students are frequently late for class, and the teacher does not
require a late pass,
contrary to school policy. The rule is subsequently enforced,
and the students arrive
on time.
Table 1.4: Reinforcement and punishment comparison

Positive reinforcement: something is added to increase desired behavior. Example: smile and compliment a student on good performance.

Positive punishment: something is added to decrease undesired behavior. Example: give a student detention for failing to follow the class rules.

Negative reinforcement: something is removed to increase desired behavior. Example: give a free homework pass for turning in all assignments.

Negative punishment: something is removed to decrease undesired behavior. Example: make students miss their time in recess for not following the class rules.

Adapted from "Behaviorism" by M. Standridge, 2002, in M. Orey (Ed.), Emerging Perspectives on Learning, Teaching, and Technology (http://epltt.coe.uga.edu/index.php?title=Behaviorism). Copyright 2002 by M. Standridge. Adapted with permission.
Reinforcing Your Understanding: Reinforcement
and Punishment in the Classroom
Reinforcement and punishment are still often used as methods
for classroom management in
today’s schools. By shaping student behavior, instructors have
the ability to be more focused
on the concepts that need to be learned. The following student-
created video presents a
quality demonstration of reinforcement and punishment in a
classroom scenario. In this
video, the teacher, Mr. Andrews, uses each method to
demonstrate operant conditioning in
scenarios with one particularly rambunctious student, Benjamin.
https://youtu.be/wLoMs-OzimU
Modeling, Shaping, and Cueing
Modeling is also known as observational learning (the learner imitates, or models, others' behavior). Albert Bandura has suggested that modeling is the basis for a wide variety of children's behaviors. Children acquire many favorable and
unfavorable responses by observing
those around them. A child who kicks another child after seeing
this on the playground, or
a student who is always late for class because his friends are
late, is displaying the results of
observational learning.
Shaping is the process of gradually changing the quality of a
response. The desired behavior is
broken down into discrete, concrete units, or positive
movements, each of which is reinforced
as it progresses toward the overall behavioral goal. In the
following scenario, the classroom
teacher employs shaping to change student behavior: The class
enters the room and sits down,
but continues to talk after the bell rings. The teacher
gives the class one point for improvement, in that
all students are seated. Subsequently, the students
must be seated and quiet to earn points, which may
be accumulated and redeemed for rewards.
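Shaping can be summarized as a loop: reinforce the current approximation until it is reliable, then raise the criterion one step closer to the goal. The Python sketch below is an editorial illustration of that loop applied to the classroom scenario above; the step labels and the three-trials-to-mastery rule are assumptions, not part of Standridge's text.

# Editorial sketch: shaping as successive approximations toward a goal behavior.

def shape(steps: list[str], trials_to_mastery: int = 3) -> list[str]:
    """Return a log of which approximation is reinforced on each trial."""
    log = []
    for step in steps:                      # the criterion is raised one step at a time
        for _ in range(trials_to_mastery):  # reinforce until the current step is reliable
            log.append(f"reinforce: {step}")
    return log

if __name__ == "__main__":
    classroom_goal = ["students seated", "seated and quiet", "seated, quiet, and on task"]
    for entry in shape(classroom_goal):
        print(entry)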
Cueing may be as simple as providing a child with
a verbal or nonverbal signal as to the appropriate-
ness of a behavior. For example, to teach a child to
remember to perform an action at a specific time,
the teacher might arrange for him to receive a cue
immediately before the action is expected rather
than after it has been performed incorrectly. For
example, if the teacher is working with a student
who habitually answers aloud instead of raising his
hand, the teacher should discuss a cue such as hand-
raising at the end of a question posed to the class.
Behavior Modification
Behavior modification is a method of eliciting bet-
ter classroom performance from reluctant students.
It has six basic components:
1. Specification of the desired outcome (What must be changed
and how will it be
evaluated?). One example of a desired outcome is increased
student participation in
class discussions.
2. Development of a positive, nurturing environment (by
removing negative stimuli
from the learning environment). In the above example, this
would involve a student-
teacher conference with a review of the relevant material, and
calling on the student
when it is evident that she knows the answer to the question
posed.
3. Identification and use of appropriate reinforcers (intrinsic
and extrinsic rewards).
A student receives an intrinsic reinforcer by correctly answering
in the presence of
peers, thus increasing self-esteem and confidence.
4. Reinforcement of the behavior pattern continues until the student has established a pattern of success in engaging in class discussions.
5. Reduction in the frequency of rewards—a gradual decrease in
the amount of one-on-
one review with the student before class discussion.
6. Evaluation and assessment of the effectiveness of the
approach based on teacher
expectations and student results. Compare the frequency of
student responses in class
discussions to the amount of support provided, and determine
whether the student is
independently engaging in class discussions (Brewer, Campbell,
& Petty, 2000).
[. . .] Further methods for behavior modification could include
changing the environment,
using models for learning new behavior, recording behavior,
substituting new behavior to
break bad habits, developing positive expectations, and
increasing intrinsic satisfaction. [. . .]
Source: Standridge, M. (2002). Behaviorism. In M. Orey (Ed.),
Emerging perspectives on learning,
teaching, and technology. Retrieved from
http://epltt.coe.uga.edu/index.php?title=Behaviorism
Erllre/iStock/Thinkstock
A child trying on an adult’s clothing
could be an example of observational
learning; once the child sees a par-
ent wearing high heels, a large coat,
or even makeup, the child may try to
model that behavior.
As we develop our understanding of how we learn, it is
important to recognize the crucial foun-
dations that characterize learning psychology, such as
behaviorism and behavior analysis.
Today, many different professions use and adapt behaviorist
methods to help people succeed
in their learning opportunities. Whether you want to become a
counselor, a teacher, a human
resources director, an employee development specialist, a
psychologist, a researcher, or simply
the best parent you can be, behaviorism offers you applicable
strategies for encouraging appro-
priate and healthy behaviors in others. Reinforcing Your
Understanding: Applied Behavioral
Analysis (ABA) offers a glimpse at one young boy’s
experiences with reward-based therapy.
Reinforcing Your Understanding: Applied Behavioral Analysis
(ABA)
Behaviorism, more commonly referred to today as behavioral
analysis, is applied in a wide
range of professional areas, including, but not limited to,
learning, counseling, behavior
management, and the treatment of autism and other disorders
such as anorexia, bulimia,
and binge eating disorder. In each area, reinforcements are often
used to encourage desired
behaviors. Refer to your e-book for an embedded video clip that
demonstrates the benefits
of applied learning strategy when working with children who
have autism. In this example, a
2-year-old boy diagnosed with autism, Jake, receives ABA
therapy.
Summary & Resources
Chapter Summary
Behaviorism is a foundational framework that encourages those
interested in how we learn
to study, reflect, and identify patterns that support the stimulus-
response premise. Dating
back as far as Aristotle and his ideas about associations, these
ideas have matured, been
challenged, and continue to be elaborated upon through years of
reflection and research. As
explained by Watrin and Darwich (2012) in section 1.1,
behaviorism is often misunderstood
and difficult to clearly explain.
However, additional articles in this chapter help to bridge the gaps created by this theoretical model's multifaceted evolution. At its core, behaviorism rests on the S → R relationship and on the claim that learning is the outward manifestation of a desired behavior; although a stimulus can be applied in differing ways to produce differing responses, this stimulus-response relationship remains the foundational component of the behaviorist ideology. See Figure 1.7 for a side-by-side presentation of the stimulus-response relationships in connectionism and conditioning.
Key Ideas
• Behaviorism suggests that learning has successfully occurred
when the appropriate
behavior is observed.
• Behaviorism suggests many relevant strategies for successful
learning, educating,
and counseling.
• Behavior analysis constitutes a field and a psychological
system devoted to the study
of behavior.
• Skinnerian behaviorism established the fundamental concepts
and methods of
behavior analysis.
Figure 1.7: Overview of the principles of conditioning
The foundations of behaviorism lie in the stimulus-response
theoretical model. This model can be
applied to connectionism and conditioning.
© Bridgepoint Education, Inc.
In connectionism: a stimulus (S) leads to a response (R), and the confirming reaction afterwards functions as a force connecting and binding S to R.

In the principles of conditioning:
Before conditioning: the bell (S) produces no response (R), but the food (US) produces salivation (UR).
During conditioning: the bell (CS) is followed by the food (US), which produces salivation (UR).
After conditioning: the bell (CS) alone produces salivation (CR).
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx
1 The Foundations of BehaviorismFergregoryiStockThinksto.docx

Behaviorism was initially based on the premise that observable environmental variables are the basis of behaviors (Hilgard, 1956; Pierce & Cheney, 2004). The theory itself has numerous frameworks, some of which you read about in section i.2, and continues to evolve today. The excerpts in this section are from Watrin and Darwich (2012). This article reflects upon the evolution of behaviorism. The attention placed on the multitude of beliefs about behaviorism sets the standard for approaching this area of learning psychology with skeptical thought and critical considerations. Watrin and Darwich (2012) introduce J. B. Watson (1913), who redefined psychology as "a purely objective experimental branch of natural science" (p. 158), proposing the "prediction and control of behavior" as its goal, and invite us to follow the path of self-identified behaviorists who continued to reinvent how and what behaviorism is and how it should be applied. With explicit candor, these authors will help you better understand exactly why this framework is often misunderstood and difficult to clearly explain. They also provide you with a foundation that will help you better understand the advances and new reflections that continue to be explored.

Excerpts from "On Behaviorism in the Cognitive Revolution: Myth and Reactions"
By J. P. Watrin and R. Darwich

In the course of history, there is a clear difficulty to define psychology. For a long time, it was treated as the study of mind or human psyche. Some authors, though, saw the emergence of behaviorism as a revolution in psychological science (e.g., Gardner, 1985; Moore, 1999). Starting with J. B. Watson (1878–1958), the behaviorist school flourished in the beginning of the 20th century. It was a remarkable rupture in the history of psychology, once it put the mind aside of scientific inquiry. From then on, behaviorism began a tradition of study of behavior, comprising several—and sometimes even conflicting—theoretical systems (Moore, 1999). In that context, behavior analysis emerged as one of the behavioristic approaches, having been developed from the works of B. F. Skinner (1904–1990). With an emphasis on operant behavior and an antimentalistic position [which rejects the mind as the cause of behavior], it became a forefront system of behaviorism during the 1950s. [. . .]

From Behaviorism to Behavior Analysis

Behavior analysis constitutes a field and a psychological system devoted to the study of behavior, here defined in terms of functional relations between behavioral and environmental events (Catania, 1998). As a field, behavior analysis has today three fundamental domains: (a) the experimental analysis of behavior, a basic science devoted to empirical research on behavioral processes, especially in the laboratory; (b) applied behavior analysis, a technological domain dedicated to apply behavior-analytic knowledge to solve practical problems; and (c) the conceptual analysis of behavior, which performs theoretical reflections about the subject matter and methods of investigation (Moore, 1999; see also Moore & Cooper, 2003). Those domains are interrelated and based in radical behaviorism, a philosophy of science that lays the foundations of behavior analysis.

The history of the field as a whole has its roots in the behaviorist school. In 1913, Watson published the article "Psychology as the Behaviorist Views It." Attacking the study of consciousness, Watson (1913) redefined psychology as "a purely objective experimental branch of natural science" (p. 158), proposing the "prediction and control of behavior" as its goal. That drastic movement would greatly contribute to the beginning of a new tradition, whose name seems to have been created by Watson himself: "behaviorism" (Schneider & Morris, 1987). In the following decades, several psychologists would be identified as behaviorists. Names such as Clark Hull (1884–1952) and Edward Tolman (1886–1959) became associated with the behaviorist movement, once they developed their own explanatory models of behavior (e.g., Hull, 1943; Tolman, 1932). New forms of behaviorism were thus being shaped and were sometimes at odds with those that already existed (Moore, 1999). In the 1930s, the contributions of Skinner established his place among those developments. Conceiving behavior as a lawful process, Skinner's experimental works on reflexes led him to new concepts and methods of investigation (see Iversen, 1992). Reflex—and, subsequently, all behavior—was no longer something that happened inside the organism; rather, it was seen as a relation in which a response is defined in function of a stimulus and vice versa (Skinner, 1931). [. . .]

In 1938, Skinner published The Behavior of Organisms, in which he summarized many of his positions and refined the concept of operant behavior. Skinnerian behaviorism (see section i.2) was acquiring its shape. Its first developments laid the fundamental concepts and methods of behavior analysis. Because they relied on basic research, they were also the first steps of the experimental analysis of behavior.
In the 1940s, the first introductory course based in Skinner's psychology and the first conference on experimental analysis of behavior took place (Keller & Schoenfeld, 1949; Michael, 1980). In 1945, Skinner wrote The Operational Analysis of Psychological Terms, in which, for the first time in print, he defined his thought as "radical behaviorism" (Skinner, 1945, p. 294; see also Schneider & Morris, 1987). The term would designate a philosophy that, on one hand, defines private events (e.g., thinking, feelings) as behavior and, therefore, as a legitimate subject matter of a behavioral analysis, but on the other hand attacks explanatory mentalism, the explanation of behavior by mental events (cf. Skinner, 1945, 1974). Private events usually refer to a mental concept, but they are behavior and, as such, cannot cause other behavior. That antimentalism would become a central feature of radical behaviorism. [. . .]

As the prominence of Skinner and his work began to rise and the foundations for applied behavior analysis were laid (Morris, Smith, & Altus, 2005), Skinner would become central to the development of behavior analysis. [. . .] Thus, behavior analysis constituted itself by the gradual establishment of its domains, being consolidated as a field in the late 1970s. Although Skinner became synonymous with behavior analysis, the field exceeded its pioneer. Behavior analysis took on a life of its own. Other people took part in the spreading of the field, such as Fred Keller (1899–1996), Charles Ferster (1922–1981), William Schoenfeld (1915–1996), and Murray Sidman (1923–). They disseminated its knowledge, just as they developed new concepts and methods (e.g., Sidman & Tailby, 1982). Skinner, however, remained as the field's main spokesman. Schultz and Schultz (2004), for instance, asserted that, "despite . . . criticisms, Skinner remained the uncontested champion of behavioral psychology from the 1950s to the 1980s. During this period, American psychology was shaped more by his work than by the ideas of any other psychologist" (p. 344). [. . .]

Nina Leen/The LIFE Picture Collection/Getty Images
Psychologist B. F. Skinner's experiments showed that behavior could be related to a stimulus and did not have to be only an occurrence inside an organism. One of Skinner's famous experiments included a rat pressing a lever to then be rewarded with food.

The Generic (and Misrepresented) Nature of Behaviorism

[. . .] Behaviorism became a host of different and conflicting systems, grouped under a single label, as if they all shared the same position. Being vaguely defined, behaviorism is frequently treated as a homogeneous school, as a linear tradition. The term behaviorism, however, refers to a variety of conflicting positions (Leigland, 2003; but see also Moore, 1999). Indeed, after Watson's (1913) first use, many theories related to the study of behavior were taken as "behaviorists." Since the term began to be largely used, its ambiguity was soon recognized, seeing that there was no single enterprise called "behaviorism" (e.g., Hunter, 1922; Spence, 1948; Williams, 1931). Woodworth (1924) summarized the problem:

If I am asked whether I am a behaviorist, I have to reply that I do not know, and do not much care. If I am, it is because I believe in the several projects put forward by behaviorists. If I am not, it is partly because I also believe in other projects which behaviorists seem to avoid, and partly because I cannot see any one big thing, to be called "behaviorism." (p. 264)

Spence (1948) also noted that the term was mostly used when someone defines his or her oppositions to an effective (or alleged) behaviorism. Even so, later developments were identified with "behaviorism," such as behavior analysis itself. Therefore, the term would still designate a very heterogeneous set of positions. Its indiscriminate use, on the other hand, overlooks the historical complexity and diversity of the behaviorist school.

Moreover, references to a generic behaviorism set biases in the analysis of behavioristic systems. When behaviorism is vaguely defined, it is easier to misrepresent any system by attributing features of other positions to it. Properties of particular systems are ascribed to all. Pinker (1999), for example, says the following:

Skinner and other behaviorists insisted that all talk about mental events was sterile speculation; only stimulus–response connection could be studied in the lab and the field. Exactly the opposite turned out to be true. Before computational ideas were imported in the 1950s and 1960s by Newell and Simon and the psychologists George Miller and Donald Broadbent, psychology was dull, dull, dull. (p. 84)

[. . .] In spite of the prior disputable use of the word behaviorism, the conventional historiography seems to have taken advantage of the term's ambiguity to legitimate the idea of a revolution. A generic behaviorism was, then, presented, underlying fallacious arguments. This ambiguous treatment is dangerous for behavior analysis and modern behaviorism, because it creates and strengthens academic folklore (see also Todd & Morris, 1992). Its deceptive character gives rise to misrepresentations. [. . .]

Source: Watrin, J. P., & Darwich, R. (2012). On behaviorism in the cognitive revolution: Myth and reactions. Review of General Psychology, 16(3), 269–282. Copyright © 2012, American Psychological Association. Reprinted with permission.

Understanding the history of a theoretical framework can help us better understand the developments that followed. In this case, behaviorism gave rise to many subset groups that believed that learning was a behavior and that behavior was observable—yet differed in the degree to which they held to these beliefs. As the article's authors observed, the word behaviorism can often be used as a general grouping for the multiple researchers aligned with this theory. As a lifelong learner, you may find that further questioning this ambiguity in your own studies will help substantiate your understanding of this important area of psychology.
1.2 Theory of Connectionism and the Laws of Learning

Edward Thorndike's theory of connectionism and the laws of learning were two concepts that would emerge as behaviorism matured. The theory of connectionism, also known as the synaptic theory of learning, posits that learning occurs through the habitual associations, or connections, made between stimuli and responses. Examples of behavioral associations include eating because we are hungry and sleeping because we are tired. The laws of learning explain how people learn best through these associations. As just one example, the law of effect asserts that learning is strengthened when it is associated with a positive feeling. As Sandiford (1942) explains in the following excerpts, the theory of connectionism and the laws of learning helped build a more developed understanding of learning and contributed to our more modern applications of today.

Before you begin reading, it is important to understand the importance of what is known as "association doctrine" to Thorndike's research. Although Thorndike did not introduce his initial three laws of learning until the early 20th century (Weibell, 2011), ideas about behavioral associations began to take shape more than 2,000 years ago. Greek philosopher Aristotle (384–322 BCE) wrote in his major work on ethics, "For we are busy that we may have leisure, and make war that we may live in peace." However, his ideas about associations are most clearly seen in the following passage:

When, therefore, we accomplish an act of reminiscence, we pass through a certain series of precursive movements, until we arrive at a movement on which the one we are in quest of is habitually consequent. Hence, too, it is that we hunt through the mental train, excogitating from the present or some other, and from similar or contrary or coadjacent. Through this process reminiscence takes place. For the movements are, in these cases, sometimes at the same time, sometimes parts of the same whole, so that the subsequent movement is already more than half accomplished. (Aristotle, ca. 350 BCE/1930, para. XX)

Abracada/iStock/Thinkstock
A central theory of connectionism is that learning is conducted through stimuli and responses.

Association doctrine can be explained as the linking of physiological and psychological processes. Important to understanding the points of reference in the excerpts from Sandiford (1942) is that Thorndike's beliefs about learning were somewhat founded on Alexander Bain's beliefs about psychology that suggested all knowledge is based on physical sensations (not thoughts or ideas) (Bain, 1873). Bain (1818–1903) founded the academic journal called Mind, the first journal of psychology and analytical philosophy. He postulated an "associationist treatment of higher mental processes" (Wade, 2001, p. 781).
Excerpts from "Connectionism: Its Origin and Major Features"
By P. Sandiford

Features of Connectionism

The following outline gives the main distinguishing features of connectionism:

1. Connectionism is an outgrowth of the association doctrine, especially as propounded by Alexander Bain. Thorndike was a pupil of William James, some of whose teachings were derived from Bain and the British associationists. Connectionism, therefore, through associationism, has its roots deep in the psychological past.
2. Connectionism is a theory of learning, but as learning is many-sided, connectionism almost becomes a system of psychology. It is as a theory of learning, however, that it must stand or fall.
3. Connectionism has an evolutionary bearing in that it links human behavior to that of the lower animals. Thorndike's first experiments were with chicks, fish, cats, and, later, with monkeys. From his animal experiments he derived his famous laws of learning.
4. Connectionism boldly states that learning is connecting. The connections presumably have their physical basis in the nervous system, where the connections between neuron and neuron explain learning. Hence, connectionism is also known as the synaptic theory of learning.
5. Connectionism is atomistic rather than holistic or organismic, since it stresses the analysis of behavior in order to discover the elements that are connected or bonded together. The sum total of a man's life can be described by a list of all the situations he has encountered and the responses he has made to them. [. . .]
6. The connectionist principle of associative shifting (which suggests that if a response to a stimulus is sustained even if the stimulus is gradually changed, the same response will be likely in a new situation) has relationships with Pavlovian conditioning, which Thorndike regards as a special case of associative learning.
7. Connectionism has also some affinities with Watsonian behaviorism, which suggested that introspection was not observable and thus not scientific, stressing the mechanistic aspects of behavior. Neither one finds it necessary to evoke a soul in order to explain behavior. Connectionism breaks with behaviorism in regard to the stress it places on the hereditary equipment of the behaving organism.
8. Some connections are more natural than others. We grow into reflexes and instincts without very much stimulation from the environment except food and air. In other words, we mature into reflexes and instincts, but we have to practice or exercise in order to learn our habits. These hereditary patterns of behavior (reflexes and instincts) form the groundwork of learning. Most acquired connections are based on them and, indeed, grow out of them. Even such complex bonds as those which represent capacities (music, mathematics, languages, and the like) have a hereditary basis.
9. According to connectionism those things we call intellect and intelligence are quantitative rather than qualitative. A person's intellect is the sum total of the bonds (associations) he has formed. The greater the number of bonds he has formed, the higher is his intelligence.
10. [. . .] Connectionism, above all other theories of learning, seems to be one that the classroom teacher can appreciate and apply. While the statistics which summarize the experiments have been decried as the products of a mechanistic conception of behavior, nevertheless they have done more to make education a science than all the theorizing of the past 2,000 years. [. . .]

Thorndike was such a voluminous writer that it is difficult to summarize his position on any single question, or, indeed, to pin him down to a specific position. In order to remove any doubt the reader may have on the matter, the following recent statement of Thorndike's position is given:

A man's life would be described by a list of all the situations which he encountered and the responses which he made to them, including among the latter every detail of his sensations, percepts, memories, mental images, ideas, judgments, emotions, desires, choices, and other so-called mental facts. [. . .] A man's nature at any given stage would be expressed by a list of the responses (Rs) which he would make to whatever situations or state of affairs (Ss) could happen to him, somewhat as the nature of a molecule of sugar might be expressed by a list of all the reactions that would take place between it and every substance which it might encounter. There would be one important difference, however. [. . .] In human behavior our ignorance often requires the acknowledgment of the principle of multiple response or varied reaction to the same S by a person who is, so far as we can tell, the same person. (See Figure 1.1 for a specific example.) [. . .] If John Doe were really the same person in every particular way on 100 occasions he would always respond to S in one same way at each of its 100 occurrences, but he will not be. Even when we can detect no differences in him there will be subtle variation in metabolism, blood supply, etc. [. . .]
  • 19. to the same S by a person who is, so far as we can tell, the same person. (See Figure 1.1 for a specific example.) [. . .] If John Doe were really the same person in every particular way on 100 occasions he would always respond to S in one same way at each of its 100 occurrences, but he will not be. Even when we can detect no differences in him there will be subtle variation in metabolism, blood supply, etc. [. . .] maj83688_02_c01_031-066.indd 38 8/31/17 3:06 PM © 2017 Bridgepoint Education, Inc. All rights reserved. Not for resale or redistribution. 39 Section 1.2 Theory of Connectionism and the Laws of Learning The Associationistic Background Ideas related to associationism date back to Aristotle, although his view differed much from our current understanding (Sandiford, 1942). Hence, there is a large gap in associationism’s history. Table 1.1 is adapted from the writing of Sandiford (1942), and can help put into per- spective the maturation of the ideas connected with associationism. Each theorist brought additional perspectives to this model for learning, and although Table 1.1 provides only a broad overview, the timeline demonstrates how the perspectives changed as time moved forward.
  • 20. Other Backgrounds of Connectionism If Thorndike be regarded as the king-pin of connectionism, then three main streams of influ- ence may be found in his work. The first, that of associationism, has already been traced. Bain influenced Thorndike’s teaching both directly and through William James. [. . .] For experimentation on the learning ability of animals, new apparatus, new devices, new methods had to be invented. Thorndike introduced the maze, the puzzle box, and the signal or choice reaction experiment, all of which have become standard equipment in animal psychol- ogy and have been employed in thousands of studies since that day. Figure 1.2 provides an illustration of a puzzle box. Thorndike’s Animal Intelligence, completed in 1898 as his doctoral dissertation, not only was the starting point of animal psychology as a science, but also went far toward establishing stimulus-response as the cornerstone of psychology. It is also the source of the famous laws of learning. [. . .] Figure 1.1: Example of possible reactions to a stimulus Psychologist Edward Thorndike proposed that humans have varied responses to the same incident or stimulus. However, he acknowledged that there are hereditary patterns of behavior such as reflexes. © Bridgepoint Education, Inc.
  • 21. R = Man smiles and walks away. S = Man is yelled at by a stranger. R = Man reacts physically to the stranger yelling and begins hitting him. R = Man yells back at the stranger and storms away. maj83688_02_c01_031-066.indd 39 8/31/17 3:06 PM © 2017 Bridgepoint Education, Inc. All rights reserved. Not for resale or redistribution. 40 Section 1.2 Theory of Connectionism and the Laws of Learning The Laws of Learning Probably the best known of the contributions that connectionism has made to educational theory and practice are the so-called laws of learning. They are not absolute laws, but rather are they to be regarded simply as comprehensive formulations of the rules which learning obeys.
  • 22. The laws usually quoted are those given in Vol. II of Thorndike’s Educational Psychology: The Psychology of Learning (1913). These include the three major laws: effect, exercise or frequency, and readiness. [. . .] These laws grew out of the experiments with animals, coupled with such influences as the writings of Bain, Romanes, Lloyd Morgan, Wilhelm Wundt, and others, and have been modified by further experiments in which human beings acted as the subjects (Thorndike, 1932). New elements injected into the laws of learning are belonging- ness, impressiveness, polarity, identifiability, availability, and mental system. This shows clearly enough that the laws are not to be regarded as a closed system, complete from the start, but merely as tentative summaries of our knowledge of the way in which learning takes place. They will be discarded or modified whenever experiments disclose that such is necessary or desirable. Table 1.1: Overview of associationistic milestones Theorists Milestones Aristotle (384–322 BCE) • Introduced the ideology of associations. • Suggested that we could not perceive two sensations as one— that they would combine or fuse into one. Thomas Hobbes (1588–1679) • Suggested sequences of thought could be casual and illogical, as in
  • 23. dreams, or orderly and regulated as by some design. • Suggested that hunger, sex, and thirst are physiological needs. John Locke (1632–1704) • Suggested “association of ideas”: Representations arise in consciousness. David Hartley (1705–1757) • Suggested that sensation (pleasure vs. pain) was generated by wave vibrations in the nerves. David Hume (1711–1776) • Noted that the associations in cause and effect are affected when addi- tional objects are introduced. James Mill (1773–1836) • Advanced associationism to include more complex emotional states within the pain vs. pleasure sensation model. Thomas Brown (1778–1820) • Suggested nine secondary laws that strengthened Aristotle’s laws of association. • Understood association as an active process of an active, holistic mind. Alexander Bain (1818–1903) • Suggested trial-and-error learning, reflexes, and instincts as the bases of habits, individual differences, and the pleasure-pain principle in learning. Edward Thorndike (1874–1949)
  • 24. • Suggested the theory of connectionism. • Suggested laws of learning. Adapted from “Connectionism: Its Origin and Major Features” by P. Sandiford, in N. B. Henry (Ed.), The Forty-First Yearbook of the National Society for the Study of Education: Part II, The Psychology of Learning (pp. 102–108), 1942. Blackwell Publishing. © National Society for the Study of Education. Adapted with permission. maj83688_02_c01_031-066.indd 40 8/31/17 3:06 PM © 2017 Bridgepoint Education, Inc. All rights reserved. Not for resale or redistribution. 41 Section 1.2 Theory of Connectionism and the Laws of Learning Figure 1.2: Thorndike’s puzzle box In Thorndike’s design, a dish of food was placed outside of the box, visible through the slats in the box. Thorndike found that animal subjects placed in the box would eventually locate the release apparatus, and the time before the activation of this response was shorter with each subsequent trial. Adapted from Animal Intelligence (p. 30), by E. L. Thorndike, 1911, New York, NY: Macmillan. The Law of Effect
  • 25. [. . .] A modifiable bond is strengthened or weakened as satisfaction or annoyance attends its exercise. With chickens and cats, Thorndike had used as motivating agents in their behavior such original satisfiers as food and release from confinement for the hungry cat, company for the lonely chicken, and so forth. These acted as rewards for certain actions which became stamped in and learned. Thorndike really took the law of effect for granted at first, as so many before him had done. Gradually, however, it became one of his most important principles of education. [. . .] In propounding the law of effect, Thorndike thought that the two effects—satisfiers and annoyers—were about equally potent, the one in stamping in the connection, the other in stamping it out. If a preference was indicated it was toward the side of rewards, although he explicitly asserted that rewards or satisfiers following responses increased the likelihood of repetitions of the connections so rewarded, while punishments decreased the likelihood of recurrence of the punished connection. [. . .] The manner in which the confirming reaction develops and operates is as follows: The con- firming reaction is at first an aftereffect of the S → R situation (where S is a stimulus and R is a response), thus: S → R → Confirming Reaction maj83688_02_c01_031-066.indd 41 8/31/17 3:06 PM
  • 26. © 2017 Bridgepoint Education, Inc. All rights reserved. Not for resale or redistribution. 42 Section 1.2 Theory of Connectionism and the Laws of Learning Afterwards it functions as a force connecting and binding S to R, thus: S → Confirming Reaction → R The confirming action is independent of a pleasurable result, since pain may also set it in action provided it is close enough to the satisfier in the succession of connections. However, it must not be thought that the effect of pain or the influence of a punishment, which is an annoying aftereffect, is exactly the opposite of the effect or influence of a reward upon the bond to which it belongs and of which it is the aftereffect. It does not directly, invariably, and inevitably weaken the mental connection. The influence of reward or punishment is thus seen to depend upon what it leads the person to do. The reward tends to arouse the confirming reaction and so cause the continuance or repetition of the connection. Punishment does not necessarily lead to the arousal of a tendency to discontinue the punished connection or to repeat it less often, nor does it necessarily stimulate a connec- tion of an opposite kind. It arouses whatever original behavior or past experience has linked
  • 27. to that particular annoying aftereffect in those particular circumstances. This may be to run away, to scream, or to perform other useless acts. Punishments, compared with rewards, are very unreliable forces in learning. Rewards are dependable because they arouse confirming reactions. Thorndike is inclined to believe that the confirming reaction is a reaction of the neurons themselves. It is a neuronic force of reinforcement of the original response or it is the afteref- fect of the total situation response (Thorndike, 1933, 1940). [. . .] The Law of Exercise or Frequency This law, like the law of effect, was at first almost taken for granted by Thorndike. Does not “practice make perfect”? Yet experience shows that exercise does not always lead to perfec- tion. Practice in sitting on a bent pin or in poking the fire with the finger never leads to perfec- tion in the art. The law of effect has to be invoked to explain why practice does not necessarily and invariably lead to improvement. Pleasurable reactions are stamped in; painful ones are stamped out. In terms of connectionism, repetition tends to make the bond permanent. [. . .] The law of exercise or frequency has two parts, use and disuse. The law of use is stated: When a modifiable connection is made between a situation and a response, that connection’s strength is, other things being equal, increased. The law of disuse runs: When a modifiable connec- tion is not made between a situation and a response over a
  • 28. length of time, that connection’s strength is decreased. The phrase “other things being equal” refers mostly to the effect, the satisfyingness or annoyingness of the situation. In other words, the more you are able to do or apply something to differing contexts, the strength of the connection (what has been learned) increases. When the concept cannot be used in varying situations, reducing its usability, the strength of what has been learned decreases. Watson, the behaviorist, claims that frequency and recency explain learning and that it is unnecessary to invoke the law of effect. The successful action in maze learning, for example, must occur in every series; therefore, the successful action is learned mainly through fre- quency. Apparently, Watson did not realize that unsuccessful actions within the maze were maj83688_02_c01_031-066.indd 42 8/31/17 3:06 PM © 2017 Bridgepoint Education, Inc. All rights reserved. Not for resale or redistribution. 43 Section 1.2 Theory of Connectionism and the Laws of Learning often repeated more frequently than the final and successful one. Yet it is the successful one that is finally stamped in (Watson, 1914). [. . .] The repetition of a situation, while tending to make a reaction
somewhat stereotyped, in and of itself, is unproductive for learning. It causes no adaptive changes and has no useful selective power. Repetition of a connection, that is, the situation and its particular response, results in a real though somewhat small strengthening influence. Mere repetition of a connection causes learning, but the learning is slow. For example, if a child is taught to sit in his or her seat after entering the room, but does not understand why or its applicability, the child will sit but has not necessarily learned the reasons for performing this behavior. If the child learns that when entering a classroom, it is important to sit as a procedure that ensures positive outcomes in the learning environment (such as rewards), the child will be more apt to apply this in other settings as well. Repetition of a "connection with belonging" (that is, the procedure that is applied "fits" the situation) increases the likelihood of learned adaptation to perform the behavior, even when the rewards may be concealed or disguised. Belongingness is difficult to describe but easy to illustrate. For example, the words of a sentence belong together in a way that the terminal word of one sentence and the initial word of the next do not. An additional example might include a child eating off a plate instead of eating off the table. The behavior makes logical sense to the individual. [. . .]

Bigandt_Photography/iStock/Thinkstock
Learning how to write and using that skill in different situations over the course of someone's life is an example of the law of exercise or frequency.

The Law of Readiness

Briefly, the law of readiness may be stated: When a bond is ready to act, to act gives satisfaction and not to act gives annoyance. When a bond which is not ready to act is made to act, annoyance is caused. Examples of a bond might include starting an exercise program, asking for someone's hand in marriage, or starting a new career. If a person is not ready to begin exercising, marry, or start a new career, he or she will likely feel annoyed by any pressure to do so. [. . .]

Modifications and Additions to the Laws of Learning

Thorndike's later experiments on learning, using human beings as subjects, led to a modification of the laws of exercise and effect. Numerous additions and modifications were also made, and new terms—belongingness, impressiveness, vividness, polarity, identifiability, availability, and mental systems—found their way into the vocabulary of connectionism.

1. Belongingness: A factor of great importance in the learning process. Example: Various words of a sentence fit or belong together; a sequence of numbers may belong together just because they are all numbers and not anything else, but some number sequences may possess more belongingness than others. Thus 2, 4, 8, 16, etc., exhibit more belongingness than 1, 3, 4, 2, 5, 11, 13, 15.
2. Impressiveness: The strength or intensity of a stimulus or a situation. Example: Loud sounds are considered stronger and more impressive than less intense ones. Stimuli attended to, that is, in the focus of consciousness, are more impressive than marginal elements.
3. Vividness: The recognizability of a word (Miller & Dost, 1964). Example: In some experiments, using word-number paired associates such as dinner 26, basal 83, divide 37, kiss 63, the number of correct number associations with kiss and dinner, both impressive words, is larger than the number of associations made with basal and divide, both weak words.
4. Polarity: The tendency for stimulus-response sequences to function more readily in the order they were practiced than in the opposite order. Example: Using foreign and vernacular phrases such as raison d'être; ohne Hast, ohne Rast; exeunt omnes; obiter dicta, etc., it was shown that the ends could be supplied when the beginnings were given, more readily than the beginnings could be given when the ends were supplied; the first half evokes the second half more often than the second evokes the first.
5. Identifiability: If the connection can be easily identified, it is easily learned. Example: Some concepts such as times, numbers, weights, colors, mass, density, etc., have to be analyzed out and made identifiable before they can be profitably used by us.
6. Availability: The accessibility of the response. Example: When something is easier to attain, the response to it is more easily accessible.
7. Mental systems: Habituation; a limited physiological or emotional response to a frequently repeated stimulus (one's habit). Example: If in paper and pencil association experiments the stimulus word dear evoked the response sir, this would be regarded as a simple habit; but if it evoked fear, some mental system must be at work. [. . .]

These modifications and additions to the laws of learning do not destroy the main fabric of the connectionist doctrine. Indeed, they illustrate one important feature of connectionism, namely, the willingness of its supporters to modify their teachings and beliefs when experimental findings are not in harmony with them. [. . .]

Source: Sandiford, P. (1942). Connectionism: Its origin and major features. In N. B. Henry (Ed.), The forty-first yearbook of the National Society for the Study of Education: Part II, The psychology of learning (pp. 97–140). Blackwell Publishing. © National Society for the Study of Education.

The theory of connectionism and the laws of learning present clear attributes and ideas about learning behavior. Since their introduction in the early 1900s, Thorndike's insightful suggestions, based on previous research, have left their mark on research about learning and continue to carry implications for how we learn. As you learn about other areas where behaviorism is applied in the learning domain, continue to consider how each was derived and how they have influenced the more modern theories we will discuss in future chapters.

1.3 Principles of Conditioning

Conditioning and learning have been core topics in psychology since the turn of the 20th century and are aligned with the transformation of associative learning concepts. Therefore, familiarity with this area of learning is critical to an advanced education in psychology, as well as to a more developed understanding of behaviorism and its evolution. For this section of the chapter, we will discuss conditioning itself. Section 1.4 will then explore how conditioning is applied in the field of learning.

There are two types of conditioning: classical and operant. Though both types have an associative property, there are also clear differences between the two. Classical conditioning involves repeatedly pairing two stimuli so that eventually one of the stimuli prompts an involuntary response that previously the other caused on its own. Think of the classic example of Pavlov's dog: Repeatedly pairing food with a tone eventually caused, or conditioned, the dog to salivate at the tone alone.

In contrast, operant conditioning (also referred to as instrumental conditioning or Skinnerian conditioning) introduces consequences to the associative relationship between stimuli and responses. Rather than using different stimuli to provoke the same, involuntary response, different stimuli are used to prompt or support the desired, voluntary response, which may involve the confirmation or discouragement of a behavior. In Figure 1.3, for example, two types of reinforcement (positive and negative) are used to maintain the desired response, and two types of punishment (again, positive and negative) are used to change the behavior. In this case, the child being quiet at the physician's office is the desired behavior; the brief sketch below expresses the same two-by-two logic before Figure 1.3 walks through the example.
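The four operant outcomes in Figure 1.3 differ along just two dimensions: whether a consequence is added to or removed from the environment, and whether the target behavior subsequently increases or decreases. The short Python sketch below is not part of any of the excerpted sources; it simply restates that two-by-two logic, and the function and label names are illustrative assumptions of our own.

```python
def classify_consequence(stimulus_change: str, behavior_change: str) -> str:
    """Label an operant consequence.

    stimulus_change: "added" (something is presented) or "removed" (something is taken away)
    behavior_change: "increases" or "decreases" (what happens to the target behavior afterward)
    """
    sign = "positive" if stimulus_change == "added" else "negative"
    effect = "reinforcement" if behavior_change == "increases" else "punishment"
    return f"{sign} {effect}"

# The four cells of Figure 1.3, using the quiet-at-the-physician's-office example:
print(classify_consequence("added", "increases"))    # positive reinforcement (reward of TV time)
print(classify_consequence("removed", "increases"))  # negative reinforcement (chores reduced)
print(classify_consequence("added", "decreases"))    # positive punishment (extra chores)
print(classify_consequence("removed", "decreases"))  # negative punishment (TV time taken away)
```

Note that "positive" and "negative" here are arithmetic (something added or subtracted), not evaluative, a point the Macias excerpt later in this section emphasizes.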
Figure 1.3: Example of operant conditioning

Operant conditioning includes using different stimuli to provoke a specific, desired response rather than provoking the same involuntary response, as in classical conditioning. © Bridgepoint Education, Inc.

• Positive reinforcement: The child is quiet while in the physician's office. The parent gives positive reinforcement by offering a reward such as TV time. The next time in a professional environment, the child is again quiet.
• Negative reinforcement: The child is quiet while in the physician's office. The parent gives negative reinforcement by reducing the child's chores. The next time in a professional environment, the child is again quiet.
• Positive punishment: The child misbehaves in the physician's office. The parent gives positive punishment by giving the child additional chores. The next time in a professional environment, the behavior improves.
• Negative punishment: The child misbehaves in the physician's office. The parent gives negative punishment by taking away the child's TV time. The next time in a professional environment, the behavior improves.

Each of these concepts will be more fully addressed in the next two series of excerpts. The first discusses classical conditioning and is from Clark (2004). The article will go into detail about the differing types of stimuli (conditioned versus unconditioned). The second series of excerpts discusses operant conditioning and is from Macias (2016). It will provide a deeper look into reinforcers and punishments. As you read, compare and contrast these two types of conditioning and consider how, with each new development, more questions arise about how associations occur and if they affect learning.
Excerpts from "The Classical Origins of Pavlov's Conditioning"
By R. E. Clark

Classical Conditioning

In the most basic form of classical conditioning, the stimulus that predicts the occurrence of another stimulus is termed the conditioned stimulus (CS) (in Pavlov's experiment, the tone). The predicted stimulus is termed the unconditioned stimulus (US) (in Pavlov's experiment, the food). The CS is a relatively neutral stimulus that can be detected by the organism but does not initially induce a reliable behavioral response. The US is a stimulus that can reliably induce a measurable response from the first presentation. The response that is elicited by the presentation of the US is termed the unconditioned response (UR) (in Pavlov's experiment, the drool as a result of the food). The term "unconditioned" is used to indicate that the response is "not learned," but rather is an innate or reflexive response to the US. With repeated presentations of the CS followed by the US (referred to as paired training), the CS begins to elicit a conditioned response (CR) (in Pavlov's experiment, the drool as a result of the tone alone). Here the term "conditioned" is used to indicate that the response is "learned." See Figure 1.4 for an illustration of these relationships.

Figure 1.4: A typical classical conditioning procedure

An unconditioned stimulus (US), food, leads to an unconditioned response (UR), salivation. Introducing a conditioned stimulus (CS) of a tone before the food's presentation results in the tone eventually creating a conditioned response (CR) of salivation, even without food. During conditioning, the tone (CS) is paired with food (US), which elicits salivation (UR); the result is that the tone (CS) alone elicits salivation (CR). From Psychology of Learning (p. 47), by D. A. Lieberman, 2012, San Diego, CA: Bridgepoint Education, Inc. Copyright 2012 by Bridgepoint Education, Inc.
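The relationships in Figure 1.4 can also be thought of quantitatively: the tendency of the CS to elicit the CR grows over repeated CS-US pairings and fades when the CS is presented alone. The Python sketch below is not drawn from Clark's article; the learning rule, rate parameter, and function names are illustrative assumptions of our own, with `strength` standing in for how reliably the CS evokes the CR.

```python
def simulate_pairings(n_paired: int, n_cs_alone: int, rate: float = 0.3):
    """Toy acquisition/extinction curve for a single CS.

    On each paired trial the associative strength moves a fraction of the way
    toward 1.0 (US present); on each CS-alone trial it decays toward 0.0.
    """
    strength = 0.0
    history = []
    for _ in range(n_paired):           # acquisition: CS followed by US
        strength += rate * (1.0 - strength)
        history.append(round(strength, 3))
    for _ in range(n_cs_alone):         # extinction: CS presented without US
        strength += rate * (0.0 - strength)
        history.append(round(strength, 3))
    return history

print(simulate_pairings(n_paired=10, n_cs_alone=5))
```

Running the sketch shows strength rising with each additional pairing and falling once the US is omitted, which parallels the acquisition and extinction characteristics discussed later in this chapter's marketing excerpt.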
Edwin Burket Twitmyer (1873–1943)

The phenomenon of classical conditioning was discovered independently in the United States and Russia around the turn of the 20th century. In the United States, Edwin B. Twitmyer made this discovery at the University of Pennsylvania while finishing his dissertation work on the "knee-jerk" reflex. When the patellar tendon is lightly tapped with a doctor's hammer, the well-known "knee-jerk" reflex is elicited. Twitmyer had initially intended to study the magnitude of the reflex under normal and facilitating conditions (Figure 1.5). In the facilitating conditions the subjects were asked to verbalize the word "ah," or to clench their fists, or to imagine clenching their fists (Twitmyer, 1902/1974). A bell that was struck one-half second before the patellar tendon was tapped served as a signal for the subjects to begin verbalizing or fist clenching (or imagining fist clenching). Twitmyer observed:

[D]uring the adjustment of the apparatus for an earlier group of experiments with one subject . . . a decided kick of both legs was observed to follow a tap of the signal bell occurring without the usual blow of the hammers on the tendons. . . . Two alternatives presented themselves. Either (1) the subject was in error in his introspective observation and had voluntarily moved his legs, or (2) the true knee jerk (or a movement resembling it in appearance) had been produced by a stimulus other than the usual one. (as cited in Irwin, 1943, p. 452) [. . .]

Twitmyer apparently did not fully appreciate the potential significance of this finding beyond recording this initial observation, and the work was never extended. It has been suggested that Twitmyer's failure to systematically investigate this phenomenon, and the lack of interest exhibited by his colleagues who heard the presentation, was likely due in part to the prevailing American zeitgeist, in which delineating the components of consciousness through introspection was the principal perspective (Irwin, 1943; Coon, 1982). Thus, Twitmyer and his contemporaries would have been predisposed to undervalue the usefulness, to the field of psychology, of something as basic as a modifiable reflex. This was not the case in Russia.

Figure 1.5: Twitmyer's "knee-jerk" reflex experiment

This photograph (circa 1903) shows a young subject and the experimental apparatus Twitmyer used to measure the magnitude of the knee-jerk reflex (see http://www.psych.upenn.edu/history/twittext.htm for details). University of Pennsylvania Archive, photographer unknown.

Ivan Petrovich Pavlov (1849–1936)

The Russian discovery of classical conditioning comes from the pioneering work of Ivan Petrovich Pavlov. [. . .] In 1904, Pavlov was awarded the Nobel Prize in medicine for his work on the physiology of digestion. This early research, which used dogs as experimental subjects, set the stage for observing the phenomenon of classical conditioning. As early as 1880, Pavlov and his associates observed that sham feedings, in which food was eaten but failed to reach the stomach (being lost through a surgically implanted esophageal fistula), produced gastric secretions, just like real food.

Pavlov's laboratory modified this preparation in order to simplify the forthcoming studies. Rather than measure gastric secretions, they began measuring salivation (see Figure 1.6). Salivation was chosen because an efficient and highly practical method of measuring salivation using a permanently implanted fistula had just been developed in the laboratory (Pavlov, 1951; Windholz, 1986). In 1897, Stefan Wolfson (also translated as Sigizmund Vul'fson), a doctoral student of Pavlov, made an important observation:

We place before the nose of the dog a glass of carbon bisulphide . . . from its two salivary glands flows saliva . . . we stimulate the dog a few times with the same glass of carbon bisulphide. The saliva flows each time. Now we substitute surreptitiously an identical glass containing water. The dog salivates again, although with a smaller quantity of saliva. (translated in Windholz, 1986, p. 142)

Figure 1.6: Apparatus used in Pavlov's study

Apparatus used in Pavlov's study of salivary conditioning in dogs. Saliva flowed through a tube connected to the dog's cheek and traveled to another room, where it could be recorded. Adapted from "The Method of Pawlow in Animal Psychology," by R. M. Yerkes & S. Morgulis, 1909, Psychological Bulletin, 6, 265. Copyright 1909 by R. M. Yerkes & S. Morgulis. Adapted with permission.
Pavlov immediately recognized the significance of these findings, findings that would ultimately lead him to change the direction of his research to explore this phenomenon. His initial results were officially presented to the International Congress of Medicine held in Madrid, Spain, in 1903. This report was entitled "Experimental Psychology and Psychopathology in Animals." [. . .]

The Emergence of Classical Conditioning in the United States

Pavlov's work on classical conditioning was essentially unknown in the United States until 1906, when his lecture "The Scientific Investigation of the Psychical Faculties or Processes in the Higher Animals" was published in the journal Science (Pavlov, 1906). In 1909 Robert Yerkes (1876–1956), who would later become president of the American Psychological Association, and Sergius Morgulis published an extensive review of the methods and results obtained by Pavlov, which they described as "now widely known as the Pawlow [sic] salivary reflex method" (Yerkes & Morgulis, 1909, p. 257).

Initially Pavlov and his associates used the term conditional rather than conditioned. Yet Yerkes and Morgulis chose to use the term conditioned. They explained their choice of terms in a footnote:

Conditioned and unconditioned are the terms used in the only discussion of this subject by Pawlow [sic] which has appeared in English. The Russian terms, however, have as their English equivalents conditional and unconditional. But as it seems highly probable that Professor Pawlow [sic] sanctioned the terms conditioned and unconditioned, which appear in the Huxley lecture (Lancet, 1906), we shall use them. (Yerkes & Morgulis, 1909, p. 259)

The terms conditioned reflex and unconditioned reflex were used during the first two decades of the 20th century, during which time this type of learning was often referred to as "reflexology." In 1921, the first textbook devoted to conditioning (General Psychology in Terms of Behavior) adopted the terms conditioned and unconditioned response to replace the term reflex (Smith & Guthrie, 1921). De-emphasizing the concept of a reflex and instead using a more general term like response allowed a larger range of behaviors to be examined with conditioning procedures. [. . .]

John B. Watson (1878–1958) championed the use of classical conditioning as a research tool for psychological investigations. During 1915, his student Karl Lashley conducted several exploratory conditioning experiments in Watson's laboratory. Watson's presidential address, delivered in 1915 to the American Psychological Association, was entitled "The Place of the Conditioned Reflex in Psychology" (Watson, 1916). Watson was highly influential in the rapid incorporation of classical conditioning into American psychology, though this influence did not appear to extend to his student. Lashley became frustrated with his attempts to classically condition the salivary response in humans (Lashley, 1916) and permanently abandoned the paradigm. In 1920, Watson's work with classical conditioning culminated in the now infamous case of "Little Albert" (first mentioned in the Introduction chapter).

Albert B. was an 11-month-old boy who had no natural fear of white rats. Watson and Rosalie Rayner used the white rat as a CS. The US was a loud noise that always upset the child. After the white rat and the loud noise were repeatedly paired, Albert began to cry and show fear of the white rat—a CR. With successive training sessions over the course of several months, Watson and Rayner were able to demonstrate that this fear of white rats generalized to other furry objects (Watson & Rayner, 1920). The plan had been to then systematically remove this fear using methods that Pavlov had shown would eliminate or extinguish the conditioned response, in this case, fear of furry white objects. Unfortunately, "Little Albert," as he has historically come to be known, was removed from the study by his mother on the day these procedures were to begin, and there is no known reliable account of how this experiment on classical conditioning of fear ultimately affected Albert B. Nevertheless, this example may be the most famous single case in the literature on classical conditioning.

The end of the beginning of classical conditioning as a paradigm in the United States can be traced to the 1927 publication of Pavlov's book Conditioned Reflexes, which was translated into English by a former student, G. V. Anrep (Pavlov, 1927). This made all of Pavlov's conditioning work available in English for the first time. The availability of 25 years' worth of Pavlov's research, in vivid detail, led to increased interest in the experimental examination of classical conditioning, an interest that has continued to this day. [. . .]

By 1935 B. F. Skinner entered this discussion in earnest when he published a paper titled "Two Types of Conditioned Reflexes and a Pseudo-Type" (Skinner, 1935). This was a theoretical paper in which Skinner attempted to add clarity and structure to distinguish two types of conditioned reflexes. [. . .] It is clear that one type corresponds to what would eventually be termed operant conditioning and the second type corresponds to Pavlov's type of conditioning. [. . .]

Source: Clark, R. E. (2004). The classical origins of Pavlov's conditioning. Integrative Physiological & Behavioral Science, 39(4), 279–294. Copyright © 2004, Springer.

Operant Conditioning
First coined by behaviorist B. F. Skinner (1904–1990), the word operant describes behavior that operates on the environment and generates consequences (Skinner, 1953). Basically, Skinner suggested that when a behavior was reinforced, it would increase or be validated. If a behavior was not reinforced but instead resulted in a punishment, then the behavior would diminish or be eliminated. These associations describe the core of operant conditioning. As noted at the start of this section, the following excerpts from Macias (2016) explain the roles of reinforcements and punishments in conditioning.

George Rinhart/Corbis Historical/Getty Images
Psychologist John B. Watson is well known for the "Little Albert" case, in which, over time, a young boy learned to fear white rats. This is an example of classical conditioning.

Excerpts from "Reinforcement"
By S. I. Macias

Types of Reinforcers

The range of possible consequences that can function as reinforcers is enormous. To make sense of this assortment, psychologists tend to place them into two main categories: primary reinforcers and secondary reinforcers. Primary reinforcers are those that require little, if any, experience to be effective. Food, drink, and sex are common examples. While it is true that experience will influence what would be considered desirable for food, drink, or an appropriate sex partner, there is little argument that these items, themselves, are natural reinforcers. Another kind of reinforcer that does not require experience is called a social reinforcer. Examples are social contact and social approval. Even newborns show a desire for social reinforcers. Psychologists have discovered that newborns prefer to look at pictures of human faces more than practically any other stimulus pattern, and this preference is stronger if that face is smiling. Like the other primary reinforcers, experience will modify the type of social recognition that is desired. Still, it is clear that most people will go to great lengths to be noticed by others or to gain their acceptance and approval.

Though these reinforcers are likely to be effective, most human behavior is not motivated directly by primary reinforcers. Money, entertainment, clothes, cars, and computer games are all effective rewards, yet none of these would qualify as natural or primary reinforcers. Because they must be acquired, they are called secondary reinforcers. These become effective because they are paired with primary reinforcers. The famous American psychologist B. F. Skinner found that the sound of food being delivered was sufficient to maintain a high rate of bar pressing in experienced rats. Obviously, under normal circumstances the sound of the food occurred only if food was truly being delivered.

How a secondary reinforcer becomes effective is called two-factor theory and is generally explained through a combination of instrumental and Pavlovian conditioning (hence the label "two-factor"). For example, when a rat receives food for pressing a bar (positive reinforcement), a neutral stimulus is also presented at the same time: the sound of the food dropping into the food dish. The sound is paired with a stimulus that naturally elicits a reflexive response; that is, food elicits satisfaction. Over many trials, the sound is paired consistently with food; thus, it will be conditioned via Pavlovian methods to elicit the same response as the food. Additionally, this process occurred during the instrumental conditioning of bar pressing by using food as a reinforcer.

This same process works for most everyday activities. For most humans, money is an extremely powerful reinforcer. Money itself, though, is not very attractive. It does not taste good, does not reduce any biological drives, and does not, on its own, satisfy any needs. However, it is reliably paired with all of these things and therefore becomes as effective as these primary reinforcers. In a similar way, popular fashion in clothing, hair styles, and personal adornment; popular art or music; even behaving according to the moral values of one's family or church group (or one's gang) can all come to be effective reinforcers because they are reliably paired with an important primary reinforcer, namely, social approval. The person who will function most effectively as the approving agent changes throughout life. One's parents, friends, classmates, teachers, teammates, coaches, spouse, children, and colleagues at work all provide effective social approval opportunities.

Reinforcers and Punishers

To maintain a reasonable degree of consistency, most psychologists use the term "reinforcement" exclusively for a process of using rewards to increase voluntary behavior. The field of study most associated with this technique is instrumental conditioning. In this context, the formal definition states that a reinforcer is any consequence to a behavior that is emitted in a specified situation that has the effect of increasing that behavior in the future. It must be emphasized that the behavior itself is not sufficient for the consequence to be delivered. The circumstances in which the behavior occurs are also important.
Thus, standing and cheering at a basketball game will likely lead to approval (social reinforcement), whereas this same response is not likely to yield acceptance if it occurs at a funeral.

A punisher is likewise defined as any consequence that reduces the probability of a behavior, with the same qualifications as for reinforcers. A behavior that occurs in response to a specified situation may receive a consequence that reduces the likelihood that it will occur in that situation in the future, but the same behavior in another situation would not generate the same consequence. For example, drawing on the walls of a freshly painted room would usually result in an unpleasant consequence, whereas the same behavior (drawing) in one's coloring book would not.

The terms "positive" and "negative" are also much more tightly defined. Former use confused these with the emotional values of good or bad, thereby requiring the counterintuitive and confusing claim that a positive reinforcer is withheld or a negative reinforcer presented when there is clearly no reward and, in fact, the intent is to reduce the probability of that response (such as described by Kimble). A better, less confusing definition is to consider "positive" and "negative" as arithmetic symbols, as for adding or subtracting. They therefore are the methods of supplying reinforcement (or punishment) rather than descriptions of the reinforcer itself. Thus, if a behavior occurs, and as a consequence something is given that will result in an increase in the rate of the behavior, this is positive reinforcement. Giving a dog a treat for executing a trick is a good example. One can also increase the rate of a behavior by removing something on its production. This is called negative reinforcement. A good example might be when a child who eats his or her vegetables does not have to wash the dinner dishes. Another example is the annoying seat belt buzzer in cars. Many people comply with the rules of safety simply to terminate that aversive sound.

Seanfboggs/iStock/Thinkstock
Potty training a child is an example of reinforcement, where a parent may reward or cheer on the child throughout the process to attain a successful result.

The descriptors "positive" and "negative" can be applied to punishment as well. If something is added on the performance of a behavior which results in the reduction of that behavior, that is positive punishment. On the other hand, if this behavior causes the removal of something that reduces the response rate, that is negative punishment. A dog collar that provides an electric shock when the dog strays too close to the property line is an example of a device that delivers positive punishment. Loss of television privileges for rudeness is an example of negative punishment. See Table 1.2 for an overview of reinforcements and punishments.

Table 1.2: Reinforcements and punishments
• Positive reinforcement: Adds to the environment to encourage continuance of a desired behavior. Example: giving a child a reward (a treat, a toy, etc.).
• Positive punishment: Adds to the environment to discourage continuance of an undesired behavior. Example: adding chores to a child's weekly duties.
• Negative reinforcement: Takes away from the environment to encourage continuance of a desired behavior. Example: taking away a child's assigned chores for the week.
• Negative punishment: Takes away from the environment to discourage continuance of an undesired behavior. Example: grounding a child from playing with his/her friends.
© Bridgepoint Education, Inc.

Why Reinforcers Work

Reinforcers (and punishers) are effective at influencing an organism's willingness to respond because they influence the way in which an organism acquires something that is desired, or avoids something that is not desired. For primary reinforcers, this concerns health and survival. Secondary reinforcers are learned through experience and do not directly affect one's health or survival, yet they are adaptive because they are relevant to those situations that are related to well-being and an improved quality of life. Certainly learning where food, drink, receptive sex partners, or social acceptance can be located is useful for an organism. Coming to enjoy being in such situations is very useful, too. [. . .]

Patterns of Reinforcer Delivery

It is not necessary to deliver a reinforcer on every occurrence of a behavior to have the desired effect. In fact, intermittent reinforcement has a stronger effect on the stability of the response rate than reinforcing every response. If the organism expects every response to be reinforced, suspending reinforcement will cause the response to disappear very quickly. If, however, the organism is familiar with occasions of responding without reinforcement, responding will continue for much longer on the termination of reinforcers.

There are two basic patterns of intermittent reinforcement: ratio and interval. These patterns, or rules, are known as schedules of reinforcement. Ratio schedules are based on the number of responses required to receive the reinforcer. Interval schedules are based on the amount of time that must pass before a reinforcer is available. Both schedules have fixed and variable types. On fixed schedules, whatever the rule is, it stays that way. If five responses are required to earn a reinforcer (a fixed ratio 5, or FR 5), every fifth response is reinforced.
  • 56. and interval. These pat- terns, or rules, are known as schedules of reinforcement. Ratio schedules are based on the number of responses required to receive the reinforcer. Interval schedules are based on the amount of time that must pass before a reinforcer is available. Both schedules have fixed and variable types. On fixed schedules, whatever the rule is, it stays that way. If five responses are required to earn a reinforcer (a fixed ratio 5, or FR 5), every fifth response is reinforced. A fixed maj83688_02_c01_031-066.indd 53 8/31/17 3:06 PM © 2017 Bridgepoint Education, Inc. All rights reserved. Not for resale or redistribution. 54 Section 1.3 Principles of Conditioning interval of 10 seconds (FI 10) means that the first response after 10 seconds has elapsed is reinforced, and this is true every time (responding during the interval is irrelevant). Variable schedules change the rule in unpredictable ways. A VR 5 (variable ratio 5) is one in which, on the average, the fifth response is reinforced, but it would vary over a series of trials. A vari- able interval of 10 seconds (VI 10) is similar. The required amount of time is an average of 10 seconds, but on any given trial it could be different. An example of a fixed-ratio (FR) schedule is pay for a specific
  • 57. amount of work, such as stuffing envelopes. The pay is always the same; stuffing a certain number of envelopes always equals the same pay. An example of a fixed-interval (FI) schedule is receiving the daily mail. Checking the mailbox before the mail is delivered will not result in reinforcement. One must wait until the appropriate time. A variable-ratio (VR) schedule example is a slot machine. The more attempts, the more times the player wins, but in an unpredictable pattern. A variable-interval (VI) schedule example would be telephoning a friend whose line is busy. Continued attempts will be unsuccessful until the friend hangs up the phone, but when this will happen is unknown. See Table 1.3 for an overview of ratio and interval schedules. Table 1.3: Ratio and interval schedules of reinforcement Schedule type Description Example fixed-ratio (FR) Amount of reinforcer stays the same. Paying a person $10/hour fixed-interval (FI) Time of reinforcement stays the same. Paying a person every Friday for work completed variable-ratio (VR) Reinforcers are administered in unpredictable amounts. Paying a person a bonus for time worked; amount is unknown but time may be known (such as end of the year)
  • 58. variable-interval (VI) Reinforcers are administered at unpredictable times. Paying a person a bonus of a predictable amount but at unpredictable times © Bridgepoint Education, Inc. Response rates for fixed schedules follow a fairly specific pattern. Fixed ratio schedules tend to have a steady rate until the reinforcer is delivered; then there is a short rest, followed by the same rate. A fixed interval is slightly different. The closer one gets to the required time, the faster the response rate. On receiving the reinforcer there will be a short rest, then a gradual return to responding, becoming quicker and quicker over time. This is called a “scalloped” pattern. (Though not strictly an FI schedule, it does have a temporal component, so it illus- trates the phenomenon nicely.) Students are much more likely to study during the last few days before a test and very little during the days immediately after the test. As time passes, study behavior gradually begins again, becoming more concentrated the closer the next exam date comes. Source: Macias, S. I. (2016). Reinforcement. In Salem Press Encyclopedia of Health. Copyright © EBSCO. maj83688_02_c01_031-066.indd 54 8/31/17 3:06 PM © 2017 Bridgepoint Education, Inc. All rights reserved. Not for
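Because the four schedules are stated as precise rules, they are easy to express procedurally. The sketch below is not from the Macias excerpt; it is a small illustrative Python simulation, with our own function names and simplifying assumptions (most notably, exactly one response per time step), that shows how often each rule delivers a reinforcer.

```python
import random

def run_schedule(schedule: str, value: int, steps: int = 200, seed: int = 1) -> int:
    """Count reinforcers earned when one response is made per time step.

    schedule: "FR", "VR", "FI", or "VI"; value: the ratio or interval size.
    This is a toy model: real organisms do not respond at a constant rate.
    """
    rng = random.Random(seed)
    reinforcers = 0
    responses_since = 0          # responses since the last reinforcer (ratio schedules)
    time_since = 0               # time steps since the last reinforcer (interval schedules)
    required = value if schedule != "VI" else rng.randint(1, 2 * value)
    for _ in range(steps):
        responses_since += 1
        time_since += 1
        if schedule == "FR":
            earned = responses_since >= value     # every `value`-th response
        elif schedule == "VR":
            earned = rng.random() < 1.0 / value   # on average every `value` responses
        elif schedule == "FI":
            earned = time_since >= value          # first response after the fixed interval
        elif schedule == "VI":
            earned = time_since >= required       # first response after a variable interval
        else:
            raise ValueError(f"unknown schedule: {schedule}")
        if earned:
            reinforcers += 1
            responses_since = 0
            time_since = 0
            if schedule == "VI":
                required = rng.randint(1, 2 * value)  # draw a new unpredictable interval
    return reinforcers

for name in ("FR", "VR", "FI", "VI"):
    print(name, 5, "->", run_schedule(name, 5), "reinforcers in 200 responses")
```

Because the simulated learner responds at a constant rate, the counts mainly show how often each rule pays off; they do not capture the different response patterns, such as the scalloped FI pattern, that emerge when the response rate itself can vary.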
Classical and operant conditioning can often be difficult concepts to understand at first glance, and it can be helpful to think about how these types of learning processes might happen in our lives each day. For instance, have you ever rewarded your children for doing what you asked? As they became older, did you have to reward them every single time, as you may have when they were younger, or could you reward them every now and again and still see the behavior repeated? By fully understanding the principles of classical and operant conditioning, you will be more apt to identify—and perhaps even implement—differing schedules of reinforcement in your own life. The last section of this chapter will guide you through two modern applications of conditioning. Reinforcing Your Understanding: Conditioning takes a closer look at Skinner's conditioning research.

Reinforcing Your Understanding: Conditioning
Refer to your e-book for an embedded video that considers Skinner's work in conditioning. In his original research, Skinner used pigeons as subjects and grain to teach the pigeons to perform certain behaviors. Review this video to reinforce your understanding of punishment versus reinforcers and how the schedule and rate of reinforcers affect learning.

1.4 Behaviorism Applied

Now that you are familiar with how behaviorism was shaped and refined through continuous research, consider how it can be applied in modern environments. The excerpts in this section are from two separate articles. Both selections demonstrate the application of strategies based on behaviorism.

The first series of excerpts is from Wells (2014) and illustrates how such strategies are used to understand consumer behaviors and then applied to product marketing; consumer behavior research aims to identify why people buy what they buy. For example, an organization can use what it knows about its consumers when developing campaigns; its marketing campaigns will often apply some of the behavioral principles. Do you recognize the example in the pictured advertisement? Does it trigger specific emotional responses or beliefs about the product? Do you use this specific brand of product? Many of the advertisers' decisions and consumer behaviors associated with their products are based on behaviorism.

Ullstein bild/Getty Images
Do the vibrant colors and illustrations in the Apple iPod advertisements elicit a positive feeling? Classical conditioning in advertising generally assumes that favorability toward a certain product develops from a positive commercial or advertisement.
Excerpts from "Behavioural Psychology, Marketing, and Consumer Behaviour: A Literature Review and Future Research Agenda"
By V. K. Wells

Classical Conditioning in Marketing and Consumer Behavior Research

[. . .] Allen and Janiszewski (1989), based on their work on contingency awareness, provide an anecdotal illustrative example of how classical conditioning could work successfully and be correctly used in advertising (a television commercial for Diet Pepsi), in which most of the work on classical conditioning in consumption and marketing has taken place. They suggest that:

This commercial features a repetitive musical jingle with a series of brief visual clips. The jingle lyrics—"Now you see it, now you don't, here you have it, here you won't"—are precisely coordinated with the image presentation . . . the CS (the brand) predicts the US (a slim female torso). In each instance "Now you see it, now you don't" is sung as first the brand (CS) and then a trim-figured woman (US) is shown. (pp. 39–40)

Overall, there has been mixed support for classical conditioning effects in advertising, but the general suggestion is that positive attitudes toward an advertised product (CS) might develop through their association in a commercial with other stimuli that are reacted to positively (US), such as pleasant colors, music, and humor (Gorn, 1982).

Early work applying classical conditioning to advertising appears to have been based on and inspired by the work of Razran (1938), who paired a free meal (US) with various political statements (CS). He found that agreement with the slogans was greater when people received a free meal than when they did not. The work of Staats and Staats (1958), who successfully associated visually presented nonsense symbols (CS) with several spoken words (US) such as beauty, healthy, smart, and success, opened the door further for a classical conditioning approach to advertising. After the associative pairings, the participants' ratings of the CS indicated that the core meaning in the US (i.e., either positive or negative evaluation) had transferred to the nonsense syllables (Allen & Janiszewski, 1989). In a second experiment, Staats and Staats associated each of two national names ("Swedish" and "Dutch") with either 18 positive or 18 negative words. The national name paired with positive words was later evaluated more favorably than the one paired with negative words. [. . .]

Acquisition

The first characteristic, acquisition, indicates that classically conditioned responses do not fully appear after only one pairing/trial, and the strength of the response increases with the number of pairings (McSweeney & Bierley, 1984). Whereas early studies used only one or an arbitrary number of pairings, experimenters quickly began testing the optimum level of pairings/trials, often experimenting with different numbers of pairings in different experimental groups. The first of the four experiments by Stuart, Shimp, and Engle (1987) focused on testing the amount of conditioning obtained with different numbers of CS-US pairings (1, 3, 10, and 20). They found that the groups subjected to higher numbers of pairings/trials (10 and 20) demonstrated significantly higher levels of conditioning, and that conditioning was greater as the number of trials increased. Although other studies have used different trial numbers, there remains no agreement on an optimum number of trials for conditioning to occur.

Extinction

Extinction is the prediction that the conditioned behavior will disappear if the predictive relationship between the CS and the US is broken, by either omitting the US entirely or presenting the CS and US randomly (McSweeney & Bierley, 1984). Till, Stanley, and Priluck (2008) explored the characteristic of extinction empirically. Their study paired brands with celebrities and measured attitudes toward the brands after conditioning. Attitudes increased with the use of well-liked and relevant celebrities. They then attempted to extinguish these effects but found that, once paired, the pairings were difficult to eliminate, with brand attitudes still affected 2 weeks after the procedure (Till et al., 2008). Till and Priluck (2000) studied the characteristic of generalization, or the extent to which a response conditioned to one stimulus transfers to similar stimuli. Through two experimental procedures, they found that attitudes conditioned to a particular brand (Garra mouthwash) could be transferred (generalized) to a product with a similar name (Gurra, Gurri, and Dutti) in the same category, as well as a product with the same name in a different category (soap). [. . .]

Operant Conditioning in Marketing and Consumer Behavior Research
In operant conditioning, behavior is shaped and maintained by its consequences (Foxall, 1986), meaning that the rate at which a behavior will be performed is directly related to the consequences of that behavior performed previously. [. . .] According to Skinner, each behavioral act can be broken down into three key parts: (1) the response/behavior (R); (2) the reinforcement/punishment (S+/−), which is a consequence of the behavior; and (3) a discriminative stimulus (Sd), which is a cue that signals the likelihood of positive or negative consequences arising from performing the behavior (Foxall, 1986, 2002). The three parts together, labelled the three-term contingency, highlight that the determinants of the behavior must occur in the environment (Foxall, 1986, 1993):

Sd → R → S+/−

In general, behavior modifiers include positive and negative reinforcement, and positive and negative punishment. Positive reinforcement is generally a reward or something that strengthens the behavior (e.g., a pleasant experience or satisfaction with a product, a positive response to a behavior), which likely leads the person to buy the product again in future. With negative reinforcement, the behavior is generally performed to avoid unpleasantness (e.g., buying a product to avoid an aggressive salesperson, purchase and consumption of painkillers to relieve a headache; Simintiras & Cadogan, 1996). Punishment is an aversive consequence after a behavioral response and may lead to the extinction of a behavior (Nord & Peter, 1980). An example of punishment is a product that does not do the job it was designed to do or is of poor quality, and thus the buyer no longer buys it.

Reinforcement, in both experimental procedures and real-life situations, is provided on a schedule. [. . .] Research has shown that intermittent schedules of reinforcement develop high rates of behavior resistant to extinction, and they are also more economical because they use fewer reinforcers, which can reduce the cost (Peter & Nord, 1982). Peter and Nord (1982) suggest that most marketing activity in the real world (differentiating brands and manipulating marketing variables such as price and promotions) often occurs on an intermittent schedule.

In terms of marketing and consumer behavior, a full range of behavior, such as actual purchasing, visiting and browsing in a store, and searching for information online, can be examined under the three-term contingency. Foxall (1986, p. 404) also documents that verbal behavior, for example, sharing positive or negative word of mouth about a product, can also be examined, but notes that "behaviors which belong to different classes (e.g. talking about how one will vote and actually voting) will be consistent only when the contingency of reinforcement applicable to both are functionally equivalent."

Discriminative stimuli serve to signal the probability of behavior being reinforced and can change the probability of a behavior being emitted. Nord and Peter (1980) provide examples of discriminative stimuli such as store signs (e.g., 50% off, buy one get one free), store logos (e.g., Kmart's big red "K," McDonald's golden arches), or distinctive brand marks (e.g., Levi's, Coca-Cola). Past learning history and experiences will have taught customers that responding to cues such as these in the past rewards them with satisfactory value purchases. They may also have learned that they are not rewarded when the symbols or cues are absent. [. . .]

Source: Wells, V. K. (2014). Behavioural psychology, marketing and consumer behaviour: A literature review and future research agenda. Journal of Marketing Management, 30(11/12), 1119–1158. Copyright © 2014 Routledge.

Behaviorism in Educational Environments

The second series of excerpts in this section is from Standridge (2002). Standridge demonstrates the application of behaviorism in education and considers the importance of such strategies when reinforcing preferred behaviors and discouraging unwanted behaviors. Behavior modification
is an important strategy for creating positive environments that support effective learning opportunities. The selection introduces the concepts of modeling, cueing, and behavior modification. As you read, consider how similar strategies for putting theory into practice could also be used in organizations and family units.

Excerpts from "Behaviorism"
By M. Standridge

[. . .] Behaviorist techniques have long been employed in education to promote behavior that is desirable and discourage that which is not. Among the methods derived from behaviorist theory for practical classroom application are contracts, consequences, reinforcement, extinction, and behavior modification.

Contracts, Consequences, Reinforcement, and Extinction

Simple contracts can be effective in helping children focus on behavior change. The relevant behavior should be identified, and the child and counselor should decide the terms of the contract. Behavioral contracts can be used in school as well as at home. It is helpful if teachers and parents work together with the student to ensure that the contract is being fulfilled. [. . .]

Consequences occur immediately after a behavior. Consequences may be positive or negative, expected or unexpected, immediate or long term, extrinsic or intrinsic, material or symbolic (a failing grade), emotional/interpersonal, or even unconscious. Consequences occur after the "target" behavior occurs, when either positive or negative reinforcement may be given.

Positive reinforcement is presentation of a stimulus that increases the probability of a response. This type of reinforcement occurs frequently in the classroom. Teachers may provide positive reinforcement by:
• Smiling at students after a correct response.
• Commending students for their work.
• Selecting them for a special project.
• Praising students' ability to parents.

Negative reinforcement increases the probability of a response that removes or prevents an adverse condition. Many classroom teachers mistakenly believe that negative reinforcement is punishment administered to suppress behavior; however, negative reinforcement increases the likelihood of a behavior, as does positive reinforcement. Negative implies removing a consequence that a student finds unpleasant. Negative reinforcement might include:
• Obtaining a score of 80% or higher makes the final exam optional.
• Submitting all assignments on time results in the lowest grade being dropped.
• Perfect attendance is rewarded with a "homework pass."

Punishment involves presenting a strong stimulus that decreases the frequency of a particular response. Punishment is effective in quickly eliminating undesirable behaviors. Examples of punishment include:
• Students who fight are immediately referred to the principal.
• Late assignments are given a grade of "0."
• Three tardies to class results in a call to the parents.
• Failure to do homework results in after-school detention (privilege of going home is removed).

Table 1.4 provides a comparison and examples of reinforcements and punishments. Also see Reinforcing Your Understanding: Reinforcement and Punishment in the Classroom for a more in-depth example.

Extinction decreases the probability of a response by contingent withdrawal of a previously reinforced stimulus. Examples of extinction are:
• A student has developed the habit of saying the punctuation marks when reading aloud. Classmates reinforce the behavior by laughing when he does so. The teacher tells the students not to laugh, thus extinguishing the behavior.
  • 71. © 2017 Bridgepoint Education, Inc. All rights reserved. Not for resale or redistribution. 60 Section 1.4 Behaviorism Applied • A teacher gives partial credit for late assignments; other teachers think this is unfair; the teacher decides to then give zeros for the late work. • Students are frequently late for class, and the teacher does not require a late pass, contrary to school policy. The rule is subsequently enforced, and the students arrive on time. Table 1.4: Reinforcement and punishment comparison Reinforcement (Behavior increases) Punishment (Behavior decreases) Positive (Something is added) Positive reinforcement: Something is added to increase desired behavior. Example: Smile and compliment student on good performance. Positive punishment: Something is added to decrease undesired behavior. Example: Give student detention for
  • 72. failing to follow the class rules. Negative (Something is removed) Negative reinforcement: Something is removed to increase desired behavior. Example: Give a free homework pass for turning in all assignments. Negative punishment: Something is removed to decrease undesired behavior. Example: Make students miss their time in recess for not following the class rules. Adapted from “Behaviorism” by M. Standridge, 2002, in M. Orey (Ed.), Emerging Perspectives on Learning, Teaching, and Technology (http://epltt.coe.uga.edu/index.php?title=Behaviorism). Copyright 2002 by M. Standridge. Adapted with permission. Reinforcing Your Understanding: Reinforcement and Punishment in the Classroom Reinforcement and punishment are still often used as methods for classroom management in today’s schools. By shaping student behavior, instructors have the ability to be more focused on the concepts that need to be learned. The following student- created video presents a quality demonstration of reinforcement and punishment in a classroom scenario. In this video, the teacher, Mr. Andrews, uses each method to
Reinforcing Your Understanding: Reinforcement and Punishment in the Classroom
Reinforcement and punishment are still often used as methods for classroom management in today's schools. By shaping student behavior, instructors can focus more on the concepts that need to be learned. The following student-created video presents a clear demonstration of reinforcement and punishment in a classroom scenario. In this video, the teacher, Mr. Andrews, uses each method to demonstrate operant conditioning in scenarios with one particularly rambunctious student, Benjamin. https://youtu.be/wLoMs-OzimU

Modeling, Shaping, and Cueing
Modeling is also known as observational learning: the learner imitates, or models, others' behavior. Albert Bandura suggested that modeling is the basis for a wide variety of children's behaviors. Children acquire many favorable and unfavorable responses by observing those around them. A child who kicks another child after seeing this on the playground, or a student who is always late for class because his friends are late, is displaying the results of observational learning.

Shaping is the process of gradually changing the quality of a response. The desired behavior is broken down into discrete, concrete units, or positive movements, each of which is reinforced as it progresses toward the overall behavioral goal. In the following scenario, the classroom teacher employs shaping to change student behavior:
The class enters the room and sits down, but continues to talk after the bell rings. The teacher gives the class one point for improvement, in that all students are seated. Subsequently, the students must be seated and quiet to earn points, which may be accumulated and redeemed for rewards.
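To make the idea of successive approximations concrete, here is a minimal, hypothetical Python sketch of the point system described above: the criterion for earning a point is tightened step by step, and only behavior that meets the current criterion is reinforced. The criteria and daily values are invented assumptions, not data from the chapter.

```python
# Shaping sketch: reinforce successive approximations of the target behavior
# ("seated AND quiet"), stepping up the criterion once the class meets it.

CRITERIA = [
    ("seated", lambda c: c["seated"]),                           # early criterion
    ("seated and quiet", lambda c: c["seated"] and c["quiet"]),  # final criterion
]

def award_points(daily_class_state, start_level=0):
    """Yield (day, criterion, point_earned), tightening the criterion over time."""
    level = start_level
    for day, state in enumerate(daily_class_state, start=1):
        name, met = CRITERIA[level]
        earned = met(state)
        yield day, name, earned
        # Once the current criterion is met, require the stricter one next time.
        if earned and level < len(CRITERIA) - 1:
            level += 1

week = [
    {"seated": True, "quiet": False},   # day 1: seated only -> point, criterion tightens
    {"seated": True, "quiet": False},   # day 2: no longer enough -> no point
    {"seated": True, "quiet": True},    # day 3: meets final criterion -> point
]
for day, criterion, earned in award_points(week):
    print(f"Day {day}: criterion '{criterion}', point earned: {earned}")
```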
Cueing may be as simple as providing a child with a verbal or nonverbal signal about the appropriateness of a behavior. For example, to teach a child to remember to perform an action at a specific time, the teacher might arrange for him to receive a cue immediately before the action is expected rather than after it has been performed incorrectly. If the teacher is working with a student who habitually answers aloud instead of raising his hand, the teacher might agree on a cue, such as hand-raising, at the end of a question posed to the class.

Behavior Modification
Behavior modification is a method of eliciting better classroom performance from reluctant students. It has six basic components:

1. Specification of the desired outcome (What must be changed, and how will it be evaluated?). One example of a desired outcome is increased student participation in class discussions.
2. Development of a positive, nurturing environment (by removing negative stimuli from the learning environment). In the above example, this would involve a student-teacher conference with a review of the relevant material and calling on the student when it is evident that she knows the answer to the question posed.
3. Identification and use of appropriate reinforcers (intrinsic and extrinsic rewards). A student receives an intrinsic reinforcer by correctly answering in the presence of peers, thus increasing self-esteem and confidence.
4. Reinforcement of the behavior pattern until the student has established a pattern of success in engaging in class discussions.
5. Reduction in the frequency of rewards, such as a gradual decrease in the amount of one-on-one review with the student before class discussion (see the sketch after this list).
6. Evaluation and assessment of the effectiveness of the approach based on teacher expectations and student results. Compare the frequency of student responses in class discussions to the amount of support provided, and determine whether the student is independently engaging in class discussions (Brewer, Campbell, & Petty, 2000). [. . .]

Further methods for behavior modification could include changing the environment, using models for learning new behavior, recording behavior, substituting new behavior to break bad habits, developing positive expectations, and increasing intrinsic satisfaction. [. . .]
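The last three components amount to a simple measurement loop: keep reinforcing, gradually thin the support, and check whether the target behavior persists. The Python sketch below is a hypothetical illustration of that record keeping; the weekly counts and the "independence" threshold are invented for the example, not drawn from the source.

```python
# Hypothetical weekly records for a behavior-modification plan:
# (week, times the student contributed to discussion, one-on-one review sessions).
records = [
    (1, 2, 3),
    (2, 4, 3),
    (3, 5, 2),
    (4, 5, 1),
    (5, 6, 0),
]

def is_independent(records, min_responses=4):
    """Step 6 as a check: does participation persist once support is withdrawn?

    Returns True if, in weeks with no one-on-one support, the student still
    contributes at least `min_responses` times. The threshold is an
    illustrative assumption, not a standard.
    """
    unsupported = [resp for _, resp, support in records if support == 0]
    return bool(unsupported) and all(resp >= min_responses for resp in unsupported)

for week, responses, support in records:
    print(f"Week {week}: {responses} contributions, {support} review sessions")
print("Independently engaging:", is_independent(records))
```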
Source: Standridge, M. (2002). Behaviorism. In M. Orey (Ed.), Emerging perspectives on learning, teaching, and technology. Retrieved from http://epltt.coe.uga.edu/index.php?title=Behaviorism

Erllre/iStock/Thinkstock
A child trying on an adult's clothing could be an example of observational learning; once the child sees a parent wearing high heels, a large coat, or even makeup, the child may try to model that behavior.

As we develop our understanding of how we learn, it is important to recognize the crucial foundations that characterize learning psychology, such as behaviorism and behavior analysis. Today, many different professions use and adapt behaviorist methods to help people succeed in their learning opportunities. Whether you want to become a counselor, a teacher, a human resources director, an employee development specialist, a psychologist, a researcher, or simply the best parent you can be, behaviorism offers you applicable
strategies for encouraging appropriate and healthy behaviors in others. Reinforcing Your Understanding: Applied Behavioral Analysis (ABA) offers a glimpse at one young boy's experiences with reward-based therapy.

Reinforcing Your Understanding: Applied Behavioral Analysis (ABA)
Behaviorism, more commonly referred to today as behavior analysis, is applied in a wide range of professional areas, including, but not limited to, learning, counseling, behavior management, and the treatment of autism and other disorders such as anorexia, bulimia, and binge eating disorder. In each area, reinforcements are often used to encourage desired behaviors. Refer to your e-book for an embedded video clip that demonstrates the benefits of an applied learning strategy when working with children who have autism. In this example, a 2-year-old boy diagnosed with autism, Jake, receives ABA therapy.

Summary & Resources

Chapter Summary
Behaviorism is a foundational framework that encourages those interested in how we learn to study, reflect, and identify patterns that support the stimulus-response premise. Dating back as far as Aristotle and his ideas about associations, these ideas have matured, been challenged, and continue to be elaborated upon through years of reflection and research. As explained by Watrin and Darwich (2012) in section 1.1,
behaviorism is often misunderstood and difficult to explain clearly. However, additional articles in this chapter help us to bridge the gaps created by the multifaceted metamorphosis of this theoretical model. Fundamentally, behaviorism can be characterized by the S → R relationship and the suggestion that learning is the outward manifestation of the desired behavior. Although there are differing methods of how a stimulus can be applied to gain differing responses, this stimulus-response relationship is a foundational component of the behaviorist ideology. See Figure 1.7 for a side-by-side presentation of the stimulus-response relationships in connectionism and conditioning.

Key Ideas
• Behaviorism suggests that learning has successfully occurred when the appropriate behavior is observed.
• Behaviorism suggests many relevant strategies for successful learning, educating,
and counseling.
• Behavior analysis constitutes a field and a psychological system devoted to the study of behavior.
• Skinnerian behaviorism established the fundamental concepts and methods of behavior analysis.

Figure 1.7: Overview of the principles of conditioning
The foundations of behaviorism lie in the stimulus-response theoretical model. This model can be applied to connectionism and conditioning. © Bridgepoint Education, Inc.

In connectionism: a stimulus (S) leads to a response (R) and is followed by a confirming reaction; afterwards, that confirming reaction functions as a force connecting and binding S to R.

In the principles of conditioning:
• Before conditioning: the bell (S) produces no response (R), but food (US) produces salivation (UR).
• During conditioning: the bell (CS), followed by food (US), produces salivation (UR).
• After conditioning: the bell (CS) alone produces salivation (CR).
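As a rough computational analogue of the acquisition process sketched in Figure 1.7, the hypothetical Python example below uses a simple incremental associative-strength update (in the spirit of standard associative learning models, which the chapter does not name): repeated bell-food pairings strengthen the bell-salivation association until the bell alone evokes a conditioned response. The learning rate, threshold, and trial count are illustrative assumptions.

```python
# Toy simulation of classical conditioning: repeated CS-US pairings build
# associative strength until the CS alone evokes the conditioned response.

LEARNING_RATE = 0.3   # how quickly the CS-US association grows (assumed value)
CR_THRESHOLD = 0.5    # strength needed for the bell alone to evoke a CR (assumed)

def run_conditioning(pairing_trials=10):
    strength = 0.0  # associative strength of bell (CS) -> salivation
    history = []
    for trial in range(1, pairing_trials + 1):
        # During conditioning: bell (CS) is followed by food (US).
        strength += LEARNING_RATE * (1.0 - strength)
        history.append((trial, round(strength, 3)))
    return strength, history

strength, history = run_conditioning()
for trial, s in history:
    print(f"Trial {trial}: associative strength = {s}")

# After conditioning: does the bell alone now produce salivation (CR)?
print("Bell alone elicits salivation:", strength >= CR_THRESHOLD)
```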