Chapter 17 AES


  1. Chapter 17. NOTES. 98-07-06.

     IN SEARCH OF THE EVERYDAY VARIABLE RATIO

     BAD EXAMPLE

     The way the world pays off for one attempt and fails to pay off for another has produced the old saying "If at first you don't succeed, try, try again." On the variable-ratio schedule of reinforcement, we can only assume that the more often we attempt, the more often the response will produce a reinforcer.

     A door-to-door salesperson works under a variable-ratio schedule of reinforcement. Let us assume such a salesperson, selling brushes, calls on a particular house. After hearing his finest sales talk, the woman answering the door lets him know she is more than amply stocked. The salesperson leaves and knocks on the next door, where he again meets with failure. Perhaps he calls on 20 houses in a row but doesn't sell even a toothbrush. At the next house, the housekeeper fairly drags him through the door. "I've been waiting for you for months," the housekeeper says, and then proceeds to place an order for 50 of his finest items. The salesperson leaves and stops at the next house, where another customer is also waiting for his services.

     We can see that the salesperson is operating on a variable-ratio schedule. Behind each new door lurks the possibility of a long-awaited order, and so the salesperson pushes on. Like any other behavior, selling would extinguish if reinforcement were not available. Thus, although the salesperson's behavior is on a variable-ratio schedule, reinforcement must occur often enough to maintain the behavior.

     ANALYSIS

     I am having to remove this example of an everyday variable-ratio schedule of reinforcement from EPB 4.0 because a student correctly pointed out that it ain't, at least it ain't simple. Not only is it rule governed, but the response unit (each individual reinforceable response, like the lever press) really consists of a very elaborate stimulus-response chain. And, undoubtedly, the behavior is under the control of some sort of avoidance analog. Furthermore, this is probably best viewed as a discrete-trial procedure rather than the more typical free-operant procedure where we usually think about the application of variable-ratio schedules.

     1) Unfortunately, I haven't been able to come up with a clean, everyday variable-ratio example. If you've got one for me, I'd sure appreciate it. By the way, please don't send a gambling example; I've already eliminated that one from EPB 3.0, where I also explain why.

     2) A clear behavior mod. example would also be great.

     January 3, 2014
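The mechanics of the schedule described above can be sketched as a short simulation. This is a hypothetical illustration, not part of the original notes: the required response count is drawn uniformly around the nominal ratio, which is one simple way to realize a variable-ratio schedule (laboratory procedures often instead cycle through a fixed list of ratios whose average equals the nominal value).

```python
import random

def vr_schedule(mean_ratio, rng):
    """Yield the (random) number of responses required for each successive
    reinforcer on a variable-ratio schedule with the given mean.

    Assumption for illustration: the requirement is drawn uniformly from
    1 .. 2*mean_ratio - 1, so its expected value equals mean_ratio.
    """
    while True:
        yield rng.randint(1, 2 * mean_ratio - 1)

def simulate(mean_ratio=20, reinforcers=1000, seed=0):
    """Average responses emitted per reinforcer over many reinforcers."""
    rng = random.Random(seed)
    schedule = vr_schedule(mean_ratio, rng)
    counts = [next(schedule) for _ in range(reinforcers)]
    return sum(counts) / len(counts)

# Over many reinforcers the average cost per reinforcer approaches the
# nominal ratio, even though any single reinforcer may come cheap (the
# eager housekeeper) or dear (20 fruitless houses in a row).
print(simulate(mean_ratio=20))
```

The unpredictability of each individual requirement, with a stable long-run average, is what distinguishes a variable-ratio from a fixed-ratio schedule, and is why "the next door" always might pay off.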
  2. CONCEPTUAL QUESTIONS

     a) Gambling in the Skinner box:

        i) Design a Skinner box experiment for a chimpanzee where the contingencies are as much like those for human gambling as possible.

        ii) What changes would you have to make if your subject were a pigeon instead of a chimp?

     b) Why do fixed-ratio schedules produce post-reinforcement pauses?

        i) Answer in terms of stimulus control, with the recent delivery of a reinforcer acting as an SΔ.

        ii) Also have the act of responding, itself, function as an SD for more responding.

        iii) Show how your explanation correctly predicts the lack of post-reinforcement pauses following variable-ratio schedules.
