Schedules of Reinforcement



1. Schedules of Reinforcement: The Effects of Intermittently Reinforcing Behavior
2. Schedules of Reinforcement
   - Behavior is not necessarily reinforced every time it occurs
   - In real life, behavior is seldom reinforced each time it occurs
   - A reinforcement schedule is a rule stating which instances of a behavior, if any, will be reinforced
   - Intermittent reinforcement refers to reinforcement that is not administered to each instance of a response
3. Advantages of Intermittent Reinforcement
   - Economizes on time and reinforcers, since reinforcement does not have to be administered for each instance of a behavior
   - Builds persistent behavior that is much more resistant to extinction
   - Delays the effects of satiation, since fewer reinforcers need to be delivered
4. Types of Schedules
   - Continuous reinforcement: every instance of a behavior is reinforced
   - Ratio schedules: reinforcement is based on the number of responses required
   - Interval schedules: reinforcement is based on the passage of time
   - Duration schedules: reinforcement is based on the continued performance of a response for a period of time
   - Fixed schedules: the requirements for reinforcement are always the same
   - Variable schedules: the requirements for reinforcement change randomly
5. Schedules of Reinforcement
   - Continuous reinforcement refers to reinforcement being administered to each instance of a response
   - Intermittent reinforcement lies between continuous reinforcement and extinction
6. An Example of Continuous Reinforcement
   - Each instance of a smile is reinforced
7. Fixed Ratio Reinforcement
   - A fixed number of responses is required for each reinforcement
   - These schedules are designated FR n, where n = the number of responses required
   - These schedules usually produce rapid rates of responding with short post-reinforcement pauses
   - The length of the pause is directly proportional to the number of responses required
8. An Example of Fixed Ratio Reinforcement
   - Every fourth instance of a smile is reinforced
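The FR rule above amounts to a simple response counter. A minimal sketch in Python (illustrative only, not part of the original slides; `fixed_ratio` is a hypothetical helper name):

```python
def fixed_ratio(n):
    """Return a respond() function that reinforces every nth response (FR n)."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count == n:
            count = 0
            return True   # reinforcer delivered
        return False      # response goes unreinforced
    return respond

# FR 4, as in the smile example: every fourth response is reinforced
fr4 = fixed_ratio(4)
outcomes = [fr4() for _ in range(8)]
# [False, False, False, True, False, False, False, True]
```

Note that the counter resets after each reinforcer, which is what produces the post-reinforcement pause described above: the organism is furthest from the next reinforcer immediately after earning one.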
9. Graph of Fixed Ratio Responding
10. Fixed Interval Reinforcement
   - These schedules require the passage of a specified amount of time before reinforcement will be delivered contingent on a response
   - No response during the interval is reinforced
   - The first response following the end of the interval is reinforced
   - This schedule usually produces a scalloped pattern of responding: little behavior early in the interval, with the rate of responding increasing as the interval nears its end
   - It also produces an overall low rate of responding
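The FI contingency — responses during the interval go unreinforced, and the first response after it elapses is reinforced — can be sketched the same way (illustrative Python; the 10-second interval is an arbitrary choice):

```python
def fixed_interval(interval):
    """Return a respond(t) function implementing an FI schedule.

    A response at time t is reinforced only if at least `interval`
    seconds have passed since the last reinforcement.
    """
    last = 0.0
    def respond(t):
        nonlocal last
        if t - last >= interval:
            last = t
            return True   # first response after the interval: reinforced
        return False      # response during the interval: unreinforced
    return respond

fi10 = fixed_interval(10.0)
outcomes = [fi10(t) for t in (3.0, 9.0, 11.0, 12.0, 25.0)]
# responses at t=3 and t=9 fall inside the interval and go unreinforced;
# the first responses after t=10 and t=21 are reinforced
# [False, False, True, False, True]
```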
11. Graph of Fixed Interval Responding
12. Variable Schedules of Reinforcement
   - Variable schedules differ from fixed schedules in that the behavioral requirement for reinforcement varies randomly from one reinforcement to the next
   - This usually produces a more consistent pattern of responding, without post-reinforcement pauses
   - Variable ratio schedules produce an overall high, consistent rate of responding
   - Variable interval schedules produce an overall low, consistent rate of responding
13. An Example of Variable Ratio Reinforcement
   - Random instances of the behavior are reinforced
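A VR schedule can be sketched by redrawing the response requirement at random after each reinforcer (illustrative Python; drawing uniformly from 1 to 2n−1 gives a mean requirement of n, which is just one of several ways to generate a VR n schedule):

```python
import random

def variable_ratio(mean, rng=random.Random(0)):
    """Return a respond() function for an approximate VR `mean` schedule.

    The number of responses required for each reinforcer is drawn
    uniformly from 1 .. 2*mean - 1, so the average requirement is `mean`.
    """
    target = rng.randint(1, 2 * mean - 1)
    count = 0
    def respond():
        nonlocal count, target
        count += 1
        if count >= target:
            count = 0
            target = rng.randint(1, 2 * mean - 1)  # new random requirement
            return True
        return False
    return respond

# over many responses, a VR 4 schedule reinforces about 1 in 4 responses,
# but the individual requirements are unpredictable
vr4 = variable_ratio(4)
earned = sum(vr4() for _ in range(1000))
```

Because the next reinforcer may come after the very next response, there is no point at which pausing is "safe," which is why these schedules eliminate the post-reinforcement pause.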
14. Graph of Variable Ratio Responding

15. Graph of Variable Interval Responding
16. Fixed and Variable Duration Schedules
   - The response is required to continue for a specified (fixed) or variable period of time for reinforcement to be delivered
   - These schedules produce a continuous rate of behavior, since that is the requirement for reinforcement
17. Extinction of Intermittently Reinforced Behavior
   - The less often and the more inconsistently a behavior is reinforced, the longer it will take to extinguish, other things being equal
   - Behaviors reinforced on a "thin" schedule are more resistant to extinction than behaviors reinforced on a denser schedule
   - Behavior reinforced on a variable schedule is more resistant to extinction than behavior reinforced on a fixed schedule
18. Reducing Reinforcer Density
   - Large amounts of behavior can be obtained with very little reinforcement using intermittent schedules
   - Initially, a behavior needs a dense schedule of reinforcement to establish it, preferably continuous reinforcement
   - As the behavior is strengthened, reinforcement can be gradually reduced in frequency
   - Start with as low a density as the behavior can tolerate and decrease the density as responding is strengthened
19. Schedule Strain
   - If density is reduced too quickly, signs of extinction may be observed:
     - the response rate may slow down
     - responding may become inconsistent
     - other responses may increase
   - This is known as schedule strain
   - If this happens, retreat to a denser reinforcement schedule
   - Adding a conditioned reinforcer between reinforcements can help bridge the gap
20. Variations of Reinforcement Schedules I: Limited Hold
   - Applied when a faster rate of responding is desired with a fixed interval schedule
   - On a plain fixed interval schedule, responding can be slow, because a response made long after the end of the interval is still reinforced
   - By limiting how long the reinforcer is available following the end of the interval, responding can be speeded up
   - If the response is not made within that period, the reinforcement is lost and another is not available until the end of the next interval
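The limited-hold rule can be sketched as an FI schedule whose set-up reinforcer expires (illustrative Python under one reading of the slide's rule — a missed reinforcer is lost and the next one is set up a full interval later; `limited_hold` is a hypothetical name):

```python
def limited_hold(interval, hold):
    """Return a respond(t) function for an FI schedule with a limited hold.

    A reinforcer set up at the end of each interval stays available for
    only `hold` seconds; if no response occurs in that window, it is lost.
    Assumes interval > hold.
    """
    available_at = interval  # time the next reinforcer becomes available
    def respond(t):
        nonlocal available_at
        # any reinforcer whose hold window has already closed is lost;
        # the next one is set up one interval later
        while t > available_at + hold:
            available_at += interval
        if t >= available_at:
            available_at = t + interval  # reinforced; new interval begins
            return True
        return False
    return respond

lh = limited_hold(interval=10.0, hold=2.0)
# the response at t=24 misses the hold window that closed at t=23,
# so it goes unreinforced; the next reinforcer is set up at t=31
outcomes = [lh(t) for t in (5.0, 11.0, 24.0, 32.0)]
# [False, True, False, True]
```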
21. Variations of Reinforcement Schedules II: Concurrent Schedules
   - Two or more basic schedules operate independently at the same time for two or more different behaviors
   - The organism has a choice of behaviors and schedules
   - This provides a better analog for real-life situations, because reinforcement is often available for more than one response class, from more than one source, or both
22. Concurrent Schedules (cont'd)
   - When similar reinforcement is scheduled for each of the concurrent responses:
     - the response receiving the higher frequency of reinforcement will increase in rate
     - the response requiring the least effort will increase in rate
     - the response providing the most immediate reinforcement will increase in rate
23. Matching Law and Maximizing
   - Matching law: the proportion of responses made on each schedule will match the proportion of reinforcers available under each schedule
   - Maximizing: subjects switch back and forth between alternatives to receive the maximum number of reinforcers
   - Concurrent ratio schedules: little switching back and forth
   - Concurrent interval schedules: subjects can earn close to all of the reinforcers on both schedules
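For two alternatives, the matching law is commonly written B1/(B1 + B2) = r1/(r1 + r2), where B is response rate and r is reinforcement rate. A minimal sketch of the prediction (illustrative Python; the reinforcement rates are made-up numbers):

```python
def matching_proportion(r1, r2):
    """Matching law prediction for two alternatives: the share of
    responses on alternative 1 equals its share of the reinforcers,
    r1 / (r1 + r2)."""
    return r1 / (r1 + r2)

# if schedule 1 yields 40 reinforcers/hour and schedule 2 yields 20,
# the matching law predicts about 2/3 of responses on schedule 1
share = matching_proportion(40, 20)
```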
24. Variations of Reinforcement Schedules III: Chained Schedules
   - Two or more basic schedule requirements are in place, one schedule occurring at a time but in a specified sequence
   - There is usually a cue that is correlated with a specific schedule and is present as long as that schedule is in effect
   - Reinforcement for responding in the first component is the presentation of the second
   - Reinforcement does not occur until the final component is performed
25. Variations of Reinforcement Schedules IV: Conjunctive Schedules
   - The requirements of two or more schedules must be met simultaneously
   - Task/interval interactions:
     - when the task requirements are high and the interval is short, steady work throughout the interval will result
     - when the task requirements are low and the interval is long, many nontask behaviors will be observed