From Token Reinforcement to Economics and Back: Toward More Economically Realistic Models of Preference and Demand
Tuesday, May 28, 2013
9:00 AM–10:20 AM
101 J (Convention Center)
Area: EAB/TPC; Domain: Basic Research
Chair: Timothy D. Hackenberg (Reed College)
Discussant: Ana Carolina Trousdell Franceschini (University of Sao Paulo, Brazil)
Abstract: The present symposium includes laboratory and applied research on token reinforcement systems, emphasizing their relevance to economic conceptualizations. Hackenberg, Andrade, & Tan will present data on token accumulation as a model of economic saving, drawn from laboratory experiments with pigeons. Smith & Jacobs will present data from laboratory research with rats, showing how token-production choices are affected by relative token and exchange payoffs. Bullock, DeLeon, Chastain, & Frank-Crawford will present data from humans in a choice context, showing differential elasticity effects across different reinforcer types (food versus activity). Franceschini will discuss the research from an economic standpoint, emphasizing its relevance to monetary consumption and identifying promising areas for future research in behavioral economics.

Token Accumulation as a Model of Savings: Some Experiments With Pigeons in a Closed Token Economy
TIMOTHY D. HACKENBERG (Reed College), Leonardo F. Andrade (University of Connecticut School of Medicine), Lavinia C.M. Tan (Reed College)
Abstract: Pigeons made repeated choices between earning and exchanging reinforcer-specific tokens (green tokens exchangeable for food, red tokens exchangeable for water) and reinforcer-general tokens (white tokens exchangeable for either food or water) in a closed token economy. Food and food tokens could be earned on one panel; water and water tokens could be earned on a second panel; generalized tokens could be earned on either panel. Responses on one key (the token-production key) produced tokens according to a fixed-ratio schedule, whereas responses on a second key (the exchange-production key) produced exchange periods, during which all previously earned tokens could be exchanged for the appropriate commodity. Pigeons generally preferred the reinforcer-general tokens under baseline conditions, when the price of all tokens was equal and low (5 responses). Across conditions, the price of both reinforcer-specific and reinforcer-general tokens was increased, first for food and then for water. Pigeons tended to reduce their production of the tokens that increased in price (own-price demand elasticity) while increasing their production of the generalized tokens that remained at a fixed price (cross-price demand elasticity). The results show that generalized-type tokens functionally substitute for specific-type tokens. Moreover, the generalized tokens were often produced on one panel and exchanged on the opposite panel, suggesting a potentially useful distinction between consumption (roughly, the value of the reinforcer) and production (the costs of obtaining it).
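The own- and cross-price effects described above can be stated in standard economic terms. As a gloss (the symbols Q and P denote generic quantity consumed and price, and are not notation taken from the authors):

```latex
% Own-price elasticity: proportional change in consumption of
% commodity x with a proportional change in its own price.
\eta_{xx} = \frac{\partial \ln Q_x}{\partial \ln P_x}

% Cross-price elasticity: proportional change in consumption of
% commodity x as the price of a different commodity y changes.
\eta_{xy} = \frac{\partial \ln Q_x}{\partial \ln P_y}
```

A positive cross-price elasticity, as when generalized-token production rises with the price of the specific tokens, is the standard economic signature of substitutable goods.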

Concurrent Token-Production Schedule Performance in Rats: Manipulating the Exchange-Production Schedule Type and Value
TRAVIS RAY SMITH (Southern Illinois University Carbondale), Eric A. Jacobs (Southern Illinois University Carbondale)
Abstract: The effects of manipulating the exchange-production schedule on concurrent token-production performance were assessed in four rats. Lever pressing was maintained by a concurrent token-production schedule, with token deliveries assigned probabilistically to the right or left lever (1:6 ratio). The location of the rich lever remained constant within a session but varied randomly across daily sessions. Once assigned to a lever, token delivery was arranged by a random-interval 15-s schedule. Transitions to token exchange were arranged by fixed- or random-ratio schedules requiring 2 to 4 tokens per transition, depending upon the condition. During token exchange, depositing a token was reinforced with access to sweetened condensed milk. Across all exchange-production conditions, the generalized matching law provided an adequate description of the session-wide ratio of left to right lever presses. Sensitivity to the token ratios was best described by the current session's reinforcement ratio. However, considerable undermatching and pronounced sign tracking elicited by the tokens were observed in all conditions. In a second series of conditions, brief stimulus presentations replaced token deliveries during the token-production period to assess the impact of sign tracking on sensitivity to the token ratio; sensitivity, however, was largely unaffected.
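For reference, the generalized matching law invoked above is conventionally written in logarithmic form (Baum, 1974); the lever and reinforcer labels here are illustrative, not taken from the authors' analysis:

```latex
% B_L, B_R: responses on the left and right levers
% R_L, R_R: token reinforcers earned on each lever
% a: sensitivity to the reinforcement ratio; log b: response bias
\log \frac{B_L}{B_R} = a \log \frac{R_L}{R_R} + \log b
```

Undermatching of the sort reported corresponds to sensitivity values of 0 < a < 1.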

Reinforcer Demand, Reinforcer Type, and Token-Reinforcement Schedules
CHRISTOPHER E. BULLOCK (Kennedy Krieger Institute), Iser Guillermo DeLeon (Kennedy Krieger Institute), James Allen Chastain (Kennedy Krieger Institute), Michelle A. Frank-Crawford (Kennedy Krieger Institute)
Abstract: We generated demand curves for two sets of concurrently available reinforcers as a function of price increases for one option in children with developmental disabilities. Within a set, the reinforcers were either activities or edible items. Completion of fixed-ratio (FR) schedules produced 30-s access to the reinforcers. The schedule associated with the less preferred reinforcer was held constant at FR 1 and always involved immediate delivery, while the schedule requirement for the more preferred reinforcer increased across conditions. For one curve, the higher-preference reinforcer was delivered immediately following schedule completion; for the second, a token was delivered that participants could exchange for the same highly preferred reinforcer after 10 tokens had been earned. With activity reinforcers, demand curves were either similar or right-shifted (less elastic) under token-reinforcement schedules relative to no-token curves. For one participant, when token and no-token demand curves were compared using food reinforcers, the opposite occurred: curves were left-shifted (more elastic) under the token schedule relative to the no-token schedule. Results are discussed in terms of the properties of the particular reinforcer available in a token-reinforcement schedule and the relative value of massed vs. distributed reinforcement.
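One common quantitative framework for demand curves of this kind, offered here only as context rather than as the authors' analysis, is the exponential demand model of Hursh and Silberberg (2008):

```latex
% Q: consumption; Q_0: consumption at zero price; C: cost (e.g., FR size)
% k: constant setting the range of consumption in log units
% alpha: rate of change in elasticity with increasing price
\log_{10} Q = \log_{10} Q_0 + k \left( e^{-\alpha Q_0 C} - 1 \right)
```

In this framework, a left-shifted (more elastic) demand curve corresponds to a larger value of alpha.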