Operant conditioning
Published in: Technology, Education

  • OBJECTIVE 10 | Identify the two major characteristics that distinguish classical conditioning from operant conditioning.
  • OBJECTIVE 11 | State Thorndike’s law of effect, and explain its connection to Skinner’s research on operant conditioning.
  • OBJECTIVE 12 | Describe the shaping procedure, and explain how it can increase our understanding of what animals and babies can discriminate.
  • OBJECTIVE 13 | Compare positive and negative reinforcement, and give one example each of a primary reinforcer, a conditioned reinforcer, an immediate reinforcer, and a delayed reinforcer.
  • OBJECTIVE 14 | Discuss the strengths and weaknesses of continuous and partial reinforcement schedules, and identify four schedules of partial reinforcement.
  • OBJECTIVE 15 | Discuss the ways negative punishment, positive punishment, and negative reinforcement differ, and list some drawbacks of punishment as a behavior-control technique.
  • OBJECTIVE 16 | Explain how latent learning and the effect of external rewards demonstrate that cognitive processing is an important part of learning.
  • OBJECTIVE 17 | Explain how biological predispositions place limits on what can be achieved through operant conditioning.
  • OBJECTIVE 18 | Describe the controversy over Skinner’s views of human behavior.
  • OBJECTIVE 19 | Describe some ways to apply operant conditioning principles at school, at work, and at home.
  • OBJECTIVE 20 | Identify the major similarities and differences between classical and operant conditioning.

    1. Operant & Classical Conditioning: Classical conditioning forms associations between stimuli (CS and US). Operant conditioning, on the other hand, forms an association between behaviors and the resulting events.
    2. Operant & Classical Conditioning: Classical conditioning involves respondent behavior, which occurs as an automatic response to a certain stimulus. Operant conditioning involves operant behavior, behavior that operates on the environment, producing rewarding or punishing stimuli.
    3. Skinner’s Experiments: Skinner’s experiments extend Thorndike’s thinking, especially his law of effect. This law states that rewarded behavior is likely to occur again.
    4. Operant Chamber: Using Thorndike’s law of effect as a starting point, Skinner developed the operant chamber, or Skinner box, to study operant conditioning.
    5. Operant Chamber: The operant chamber, or Skinner box, comes with a bar or key that an animal manipulates to obtain a reinforcer like food or water. The bar or key is connected to devices that record the animal’s responses.
    6. Shaping: Shaping is the operant conditioning procedure in which reinforcers guide behavior toward the desired target behavior through successive approximations. Examples: a rat shaped to sniff mines; a manatee shaped to discriminate objects of different shapes, colors, and sizes.
    7. Types of Reinforcers: A reinforcer is any event that strengthens the behavior it follows. For example, a heat lamp positively reinforces a meerkat’s behavior in the cold.
    8. Primary & Secondary Reinforcers: Primary reinforcer: an innately reinforcing stimulus, like food or drink. Conditioned (secondary) reinforcer: a learned reinforcer that gets its reinforcing power through association with a primary reinforcer.
    9. Immediate & Delayed Reinforcers: Immediate reinforcer: a reinforcer that occurs instantly after a behavior, as when a rat gets a food pellet for a bar press. Delayed reinforcer: a reinforcer that is delayed in time, like a paycheck that comes at the end of the week. We may be inclined to pursue small immediate reinforcers (watching TV) rather than large delayed reinforcers (getting an A in a course), which require consistent study.
    10. Reinforcement Schedules: Continuous reinforcement reinforces the desired response each time it occurs. Partial reinforcement reinforces a response only part of the time. Though this results in slower acquisition in the beginning, it shows greater resistance to extinction later on.
    11. Ratio Schedules: Fixed-ratio schedule: reinforces a response only after a specified number of responses (e.g., piecework pay). Variable-ratio schedule: reinforces a response after an unpredictable number of responses; this is hard to extinguish because of the unpredictability (e.g., gambling, fishing).
    12. Interval Schedules: Fixed-interval schedule: reinforces a response only after a specified time has elapsed (e.g., preparing for an exam only as the exam draws close). Variable-interval schedule: reinforces a response at unpredictable time intervals, which produces slow, steady responding (e.g., a pop quiz).
    13. Schedules of Reinforcement (figure).
    14. Punishment: An aversive event that decreases the behavior it follows.
    15. Punishment: Although there may be some justification for occasional punishment (Larzelere & Baumrind, 2002), it usually leads to negative effects: (1) it results in unwanted fears; (2) it conveys no information to the organism; (3) it justifies inflicting pain on others; (4) unwanted behaviors reappear in its absence; (5) it causes aggression toward the punishing agent; (6) one unwanted behavior appears in place of another.
    16. Extending Skinner’s Understanding: Skinner downplayed inner thought processes and biological underpinnings, and many psychologists criticized him for discounting them.
    17. Cognition & Operant Conditioning: Evidence of cognitive processes during operant learning comes from rats exploring a maze without an obvious reward. The rats seem to develop cognitive maps, or mental representations, of the maze’s layout (environment).
    18. Latent Learning: Such cognitive maps are based on latent learning, which becomes apparent only when an incentive is given (Tolman & Honzik, 1930).
    19. Motivation: Intrinsic motivation: the desire to perform a behavior for its own sake. Extrinsic motivation: the desire to perform a behavior due to promised rewards or threats of punishment.
    20. Biological Predisposition: Biological constraints predispose organisms to learn associations that are naturally adaptive. Breland and Breland (1961) showed that animals drift toward their biologically predisposed instinctive behaviors.
    21. Skinner’s Legacy: Skinner argued that behaviors were shaped by external influences instead of inner thoughts and feelings. Critics argued that Skinner dehumanized people by neglecting their free will.
    22. Applications of Operant Conditioning (In School): Skinner introduced the concept of teaching machines that shape learning in small steps and provide reinforcement for correct responses.
    23. Applications of Operant Conditioning (In Sports): Reinforcement principles can enhance athletic performance.
    24. Applications of Operant Conditioning (At Work): Reinforcers affect productivity. Many companies now allow employees to share profits and participate in company ownership.
    25. Applications of Operant Conditioning: In children, reinforcing good behavior increases its occurrence, while ignoring unwanted behavior decreases its occurrence.
    26. Operant vs. Classical Conditioning (summary).
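The shaping procedure on slide 6 is essentially an algorithm: reinforce any response that comes closer to the target than earlier reinforced responses, then tighten the criterion. A minimal Python sketch, under the assumption that a "response" can be scored by its distance from the target (the function name, sample values, and `tighten` parameter are illustrative, not from the slides):

```python
def shape(samples, target, tighten=1.0):
    """Successive approximations: reinforce a response only if it is at
    least `tighten` closer to the target than the last reinforced one."""
    reinforced = []
    best = float("inf")  # distance of the best response reinforced so far
    for s in samples:
        if abs(s - target) <= best - tighten:  # a closer approximation
            reinforced.append(s)
            best = abs(s - target)
    return reinforced

# Reinforced responses drift toward the target (0) as the criterion tightens:
print(shape([10, 8, 12, 6, 7, 3, 1], target=0))  # → [10, 8, 6, 3, 1]
```

Note how responses 12 and 7 go unreinforced: they are no closer to the target than what was already reinforced, which is exactly what "successive approximations" demands.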
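The four partial schedules on slides 11 and 12 can likewise be sketched as simple reward rules. In this illustrative Python simulation (the schedule names "FR5", "VR5", "FI30", "VI30" and all numeric parameters are my own choices, not from the slides), a steady responder is scored under each rule:

```python
import random

def simulate(schedule, presses, seed=42):
    """Count reinforcers earned over a sorted list of response times (seconds).

    schedule: "FR5" (fixed ratio), "VR5" (variable ratio),
              "FI30" (fixed interval), or "VI30" (variable interval).
    """
    rng = random.Random(seed)
    rewards = 0
    count = 0        # responses since the last reinforcer (ratio schedules)
    ready_at = 0.0   # when the next reinforcer becomes available (interval schedules)
    target = 5       # responses currently required (variable ratio)

    for t in presses:
        count += 1
        if schedule == "FR5":              # every 5th response is reinforced
            if count >= 5:
                rewards, count = rewards + 1, 0
        elif schedule == "VR5":            # on average every 5th response
            if count >= target:
                rewards, count = rewards + 1, 0
                target = rng.randint(1, 9)  # unpredictable next requirement
        elif schedule == "FI30":           # first response after 30 s elapse
            if t >= ready_at:
                rewards, ready_at = rewards + 1, t + 30
        elif schedule == "VI30":           # first response after ~30 s on average
            if t >= ready_at:
                rewards, ready_at = rewards + 1, t + rng.uniform(1, 59)
    return rewards

presses = [float(i) for i in range(0, 300, 3)]  # one response every 3 s for 5 min
for s in ("FR5", "VR5", "FI30", "VI30"):
    print(s, simulate(s, presses))
```

Running it shows the classic pattern the slides describe: under ratio schedules, reinforcers scale with the number of responses, while interval schedules cap reinforcers by elapsed time no matter how rapidly the organism responds.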