2. BURRHUS FREDERIC SKINNER
He believed in the stimulus-response
pattern of conditioned behavior.
He wrote “Walden Two” and “Science
and Human Behavior.”
He studied operant behavior
(voluntary behaviors used in operating
on the environment) – OPERANT
CONDITIONING.
3. OPERANT CONDITIONING
Learning is a result of a change in
overt behavior.
Changes in behavior are the result
of an individual's response to events
(stimuli) that occur in the
environment.
4. REINFORCEMENT
Key element in Skinner’s S-R
theory
REINFORCER
Anything that strengthens the
desired response. Reinforcers may
be positive or negative.
5. POSITIVE REINFORCER
Stimulus that is given or added to increase
the response.
Examples:
• Teacher promises extra time in the play area
to children who behave well during the
lesson.
• Mother who promises a new cell phone for
her son who gets good grades.
• Verbal praises, star stamps and stickers.
6. NEGATIVE REINFORCER
Stimulus that results in the increased
frequency of a response when it is
withdrawn or removed.
Not a punishment, but a reward.
Examples:
• Teacher announces that a student who gets
an average grade of 1.5 for the two grading
periods will no longer take the final
examination.
7. PUNISHMENT
Consequence intended to result in
reduced responses.
Examples:
• A student who always comes to class late is
not allowed to join a group work that has
already begun and, therefore, loses points
for that activity. The punishment is meant
to reduce the response of repeatedly coming
to class late.
8. EXTINCTION OR NON-REINFORCEMENT
Responses that are not reinforced are not
likely to be repeated. Example: ignoring a
student’s misbehavior may extinguish that
behavior.
SHAPING OF BEHAVIOR
Successive approximations of the behavior
are rewarded until one learns the
association between the behavior and the
reward.
9. BEHAVIORAL CHAINING
Comes about when a series of steps are needed
to be learned. Example: child can be given
reinforcement until the entire process of tying
the shoelace is learned.
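Shaping and chaining both reward progress toward a complete behavior. A minimal sketch in Python, treating the behavior as a 0-100 score and the rising criterion as illustrative assumptions (the numbers and variable names are hypothetical, not Skinner's):

```python
# Shaping: reinforce successive approximations of the target behavior,
# tightening the criterion each time it is met.
target = 100            # e.g. the complete shoelace-tying sequence
criterion = 20          # start with an easy approximation
reinforcements = 0

for score in [15, 25, 40, 55, 70, 85, 100]:      # successive attempts
    if score >= criterion:                        # close enough: reinforce
        reinforcements += 1
        criterion = min(criterion + 20, target)   # demand a closer approximation

print(reinforcements)  # 5 attempts earned a reward
```

By the end, only the full target behavior meets the criterion, which mirrors how the learner comes to associate the complete behavior with the reward.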
REINFORCEMENT SCHEDULES
Once the desired behavioral response is
established, reinforcement does not have to
be 100%; the response can be maintained
more successfully through a partial
reinforcement schedule.
11. FIXED INTERVAL SCHEDULES
The target response is reinforced after a
fixed amount of time has passed since
the last reinforcement.
Example:
The bird in a cage is given food
(reinforcer) every 10 minutes,
regardless of how many times it presses
the bar.
12. VARIABLE INTERVAL SCHEDULES
Similar to fixed interval schedules, but
the amount of time that must pass
between reinforcements varies.
Example:
The bird may receive food (reinforcer)
at varying intervals, not every 10
minutes.
13. FIXED RATIO SCHEDULES
A fixed number of correct responses
must occur before reinforcement may
recur.
Example:
The bird is given food every time it
presses the bar 5 times.
14. VARIABLE RATIO SCHEDULES
The number of correct responses
required for reinforcement varies.
Example:
The bird is given food (reinforcer) after it
presses the bar 3 times, then after 10 times,
then after 4 times. So the bird will not be
able to predict how many times it needs to
press the bar before it gets food again.
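The four schedules differ only in the rule that decides when a response earns the reinforcer. A minimal Python sketch (the function names, and modeling the variable-ratio schedule as a 1-in-mean-ratio chance per response, are illustrative assumptions, not part of the original formulation):

```python
import random

def fixed_interval(elapsed, last_reward, interval=10):
    """Reinforce once `interval` minutes have passed since the last
    reward, regardless of how many responses occurred."""
    return elapsed - last_reward >= interval

def fixed_ratio(response_count, ratio=5):
    """Reinforce every `ratio`-th correct response."""
    return response_count % ratio == 0

def variable_ratio(rng, mean_ratio=5):
    """Reinforce each response with probability 1/mean_ratio, so the
    number of presses needed varies unpredictably. (A variable interval
    schedule works the same way, but over elapsed time instead of
    response counts.)"""
    return rng.random() < 1 / mean_ratio

# Fixed ratio 5: the bird earns food on presses 5, 10, 15, ...
rewarded = [p for p in range(1, 16) if fixed_ratio(p, ratio=5)]
print(rewarded)  # [5, 10, 15]
```

The unpredictability of `variable_ratio` is exactly what the Analysis slide below attributes the steadier response rates to: the learner cannot tell which press will pay off.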
15. ANALYSIS
Variable Interval and especially
Variable Ratio Schedules produce
steadier and more persistent
rates of response, because the
learners cannot predict when
reinforcement will come, although
they know that they will eventually
succeed.
16. IMPLICATIONS OF OPERANT CONDITIONING
These implications apply to programmed
instruction.
1. Practice should take the form of question
(stimulus) – answer (response) frames
which expose the student to the subject in
gradual steps.
2. Require that the learner make a response
for every frame and receive immediate
feedback.
17. IMPLICATIONS OF OPERANT CONDITIONING
3. Try to arrange the difficulty of the
questions so that the response is always
correct and hence a positive reinforcement.
4. Ensure that good performance in the
lesson is paired with secondary reinforcers
such as verbal praise, prizes and good
grades.
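The four guidelines above can be sketched as a small question-answer frame loop. The frames and the feedback strings are hypothetical examples, not a prescribed implementation:

```python
# Programmed instruction: small frames, a required response per frame,
# and immediate feedback acting as the reinforcer.
frames = [
    ("2 + 2 = ?", "4"),
    ("5 x 3 = ?", "15"),
]

def check_frame(expected, answer):
    """Return (correct, feedback): immediate feedback for every response."""
    if answer.strip() == expected:
        return True, "Correct!"                     # positive reinforcement
    return False, f"Not quite; the answer is {expected}."

correct, feedback = check_frame("4", "4")
print(feedback)  # Correct!
```

Keeping each frame small and easy enough to answer correctly is what makes the feedback function as a reinforcer rather than a punisher.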
18. PRINCIPLES DERIVED FROM
SKINNER’S OPERANT CONDITIONING
1. Behavior that is positively reinforced will
recur; intermittent reinforcement is
particularly effective.
2. Information should be presented in small
amounts so that responses can be
reinforced (“shaping”).
3. Reinforcements will generalize across
similar stimuli (“stimulus generalization”)
producing secondary conditioning.