Association for Behavior Analysis International

The Association for Behavior Analysis International® (ABAI) is a nonprofit membership organization with the mission to contribute to the well-being of society by developing, enhancing, and supporting the growth and vitality of the science of behavior analysis through research, education, and practice.


38th Annual Convention; Seattle, WA; 2012

Event Details



Poster Session #88
EAB Poster Session 1
Saturday, May 26, 2012
5:00 PM–7:00 PM
Exhibit Hall 4AB (Convention Center)
1. The Reinforcement History Effects of Behavioral Variation and Repetition on Acquisition of Counting Behavior
Area: EAB; Domain: Basic Research
NAOKI YAMAGISHI (Ryutsu Keizai University)
Abstract:

The purpose of the research was to compare pigeon and human behavior on a response-differentiation task requiring variable versus repetitive counting behavior, followed by an acquisition task; the research thus examines the history effects of response variability on the acquisition of behavior. The differentiation task demanded variable counting behavior in one component and repetitive counting behavior in the other. The response unit of the procedure was counting behavior based on a fixed-consecutive-number schedule. The procedure of the schedule was as follows: there were 2 keys, and if 1 response to the right key followed at least 1 response to the left key, the number of responses to the left key was considered the number counted. The experimenter set a percentile schedule for shaping variable and repetitive counting responses. Parameters of the percentile schedule were arranged to equalize the average number counted at 6 and to differentiate only the SD of the number counted. The following task required counting a larger number under another percentile schedule in both conditions. The author found that pigeons differentiated variable and repetitive counting. Furthermore, a reinforcement history of variable counting enhanced acquisition of counting a larger number relative to a repetitive history. For the human experiment, the results will be discussed on site. The potential impact of the research is to indicate that behavioral variability has an adaptive function in acquisition tasks.
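The counting response unit described above can be sketched as follows; this is a minimal illustration, not the study's software, and the function name and example response sequence are hypothetical.

```python
def count_run(responses):
    """Return the 'number counted' for one response unit.

    responses: a sequence of 'L' (left-key) and 'R' (right-key) pecks.
    Under the fixed-consecutive-number arrangement described above,
    left-key responses accumulate until a right-key response ends the
    run; the count is the number of left-key responses emitted before
    that right-key response.
    """
    count = 0
    for r in responses:
        if r == 'L':
            count += 1
        elif r == 'R':
            return count  # the right-key response terminates the unit
    return None  # the unit was not completed

# Example: six left-key pecks followed by a right-key peck gives a count of 6,
# the average the percentile schedules were arranged to produce.
print(count_run(['L'] * 6 + ['R']))  # 6
```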

 
2. Differential Reinforcement of Lever Holding in Rats: Assessing Temporal Discounting on a Single Manipulandum
Area: EAB; Domain: Basic Research
CHARLES CASEY JOEL FRYE (Southern Illinois University), Eric A. Jacobs (Southern Illinois University - Carbondale), Michael Young (Southern Illinois University - Carbondale), Jerry Zhu (Southern Illinois University)
Abstract:

We assessed sensitivity to trade-offs between reinforcer amount and immediacy using a novel single-manipulandum procedure. Three male Long-Evans hooded rats were trained to hold down a lever. Lever holding was reinforced with access to sucrose solution. The volume of sucrose solution delivered varied as a function of hold duration according to one of five feedback functions: one linear, two negatively accelerated, and two positively accelerated. The form of the feedback function varied daily according to a pseudo-random sequence. Reinforcement volume reached a maximum after hold durations of 10 s. Under linear feedback conditions, the volume of solution delivered increased proportionally with hold duration. Under negatively accelerated feedback conditions, the volume of solution delivered increased quickly early in the hold and approached the maximum value. Under positively accelerated feedback conditions, the volume of solution delivered increased at a slow rate initially, but the rate of growth increased rapidly toward the end of the 10-s interval. Thus, under positively accelerated feedback conditions, there is a trade-off between reinforcement immediacy and reinforcement amount (i.e., releasing sooner for a smaller reinforcer or holding longer for a larger reinforcer). For two of three rats, the distribution of hold durations tracked daily changes in feedback conditions, indicating sensitivity to these contingencies.
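The three feedback-function forms can be illustrated with simple curves that share a maximum at a 10-s hold; the exponents and maximum volume below are placeholder assumptions, not the values used in the experiment.

```python
def linear_volume(hold_s, max_s=10.0, max_vol=1.0):
    """Volume grows proportionally with hold duration, capped at max_vol."""
    t = min(hold_s, max_s) / max_s
    return max_vol * t

def neg_accel_volume(hold_s, max_s=10.0, max_vol=1.0, p=0.5):
    """Negatively accelerated: rapid early growth that approaches max_vol."""
    t = min(hold_s, max_s) / max_s
    return max_vol * (t ** p)   # p < 1 -> concave (negatively accelerated)

def pos_accel_volume(hold_s, max_s=10.0, max_vol=1.0, p=2.0):
    """Positively accelerated: slow early growth, rapid growth near 10 s."""
    t = min(hold_s, max_s) / max_s
    return max_vol * (t ** p)   # p > 1 -> convex (positively accelerated)

# The amount/immediacy trade-off under positive acceleration: releasing at 5 s
# yields only 25% of the maximum volume, whereas holding the full 10 s yields 100%.
print(pos_accel_volume(5.0), pos_accel_volume(10.0))
```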

 
3. Effects of Variability in Delay to Reinforcement on Within-Session Decreases in Operant Responding
Area: EAB; Domain: Basic Research
MIKAELA MULDER (University of Alaska Anchorage), Eric S. Murphy (University of Alaska Anchorage), Shea Lowery (University of Alaska Anchorage), Alyssa Hoskie (University of Alaska Anchorage), Amanda Hesser (University of Alaska Anchorage)
Abstract:

Our study investigated the hypothesis that habituation to food reinforcement occurs more slowly when the delay to reinforcement is variable rather than constant. To test this hypothesis, four Wistar rats pressed a lever on a fixed-interval (FI) 8-s schedule of reinforcement to earn five 45-mg food pellets during 30-min sessions. In the constant condition, the delay to reinforcement was 10 s for each reinforcer delivery. In the variable condition, the reinforcers were delayed by either 1 or 19 s (M = 10 s). Rates of responding were higher, and within-session decreases in responding were more attenuated, during the variable-delay condition. Our results indicate that reinforcer effectiveness can be increased or decreased depending upon the variability in the delay to reinforcement. These findings are generally consistent with the idea that habituation (e.g., McSweeney & Murphy, 2009) accrues to food reinforcers and may have implications for behavioral treatments in applied settings.

 
4. The Predictability of a Visual Stimulus for Food and Its Effect on Induced Polydipsia
Area: EAB; Domain: Basic Research
MELISSA M. M. ANDREWS (Central Michigan University), Mark P. Reilly (Central Michigan University)
Abstract:

Previous research has shown that schedule-induced polydipsia can be produced by paired and unpaired auditory stimuli (Corfield-Sumner, Blackman & Stainer, 1977; Porter & Kenshalo, 1974; Rosenblith, 1970). The present study is an attempt to replicate and extend the previous findings by manipulating the correlation between a visual stimulus and food delivery. Rats were placed on a fixed-time 90-s schedule to induce drinking. After drinking stabilized, a 3-s presentation of 3 LEDs occurred halfway into the interfood interval (i.e., at 45 s). Drinking following the food delivery developed quickly and in large amounts, while post-stimulus drinking did not develop, presumably because the LEDs were 100% predictive of food 45 s later. Next, the probability with which the LEDs are followed by food will be manipulated across conditions. It is predicted that post-stimulus drinking will occur when the LED-food contingency is degraded, for example, when the LED presentation is followed by food 50% of the time instead of 100% of the time.

 
5. Within-Trial Contrast: Conditioning Effects on Preceding and Subsequent Stimuli
Area: EAB; Domain: Basic Research
JAMES NICHOLSON MEINDL (University of Memphis), Jonathan W. Ivy (Mercyhurst College), Neal Miller (The Ohio State University), Nancy A. Neef (The Ohio State University), Laura Baylot Casey (University of Memphis)
Abstract:

Stimuli that precede aversive events are typically preferred less than stimuli that precede non-aversive events. Stimuli that follow aversive events, however, may become preferred more than stimuli that follow non-aversive events. This effect has been labeled within-trial contrast. Although the effect has been replicated, only rarely have initial preferences for the antecedent and consequent stimuli, or for the aversive events, been established prior to training with those stimuli. This inconsistency could explain the different outcomes found by various researchers. Furthermore, it is unclear whether within-trial contrast alters reinforcer efficacy in addition to stimulus preferences. If so, within-trial contrast could represent a new method of conditioning reinforcers. The current experiment sought to replicate and extend research on within-trial contrast by (a) examining preference changes for both antecedent and consequent stimuli, (b) assessing preference for all stimuli and events both before and after training, and (c) assessing whether within-trial contrast altered reinforcer efficacy. The results indicate that antecedent stimuli preceding aversive events decreased in preference; however, within-trial contrast was demonstrated for only one participant. Furthermore, changes in preference for consequent stimuli were not correlated with changes in reinforcer efficacy, indicating that within-trial contrast may not be a viable strategy for conditioning reinforcers.

 
6. Simple Discrimination Control and Stimulus Generalization in a Go/no-go Procedure With Compound Stimuli in Pigeons
Area: EAB; Domain: Basic Research
HELOISA CURSI CAMPOS (Universidade de Sao Paulo), Paula Debert (Universidade de Sao Paulo)
Abstract:

A previous study employed a go/no-go procedure with compound stimuli to teach pigeons to peck two-component compounds A1B1, A2B2, B1C1, and B2C2 and to refrain from pecking A1B2, A2B1, B1C2, and B2C1. The test presented the compounds rotated 180°, and subjects pecked B1A1, B2A2, C1B1, and C2B2 but not B1A2, B2A1, C1B2, and C2B1. Pecks could have been controlled by the relation between components (i.e., conditional discrimination) or by the compounds as single stimuli (i.e., simple discrimination in training and stimulus generalization in the test). The present study manipulated the display of the components to verify whether the discriminative responding established in training involved simple discrimination control and whether the tests involved stimulus generalization. During training, pecks to A1B1, A2B2, B1C1, and B2C2 were followed by food, and pecks to A1B2, A2B1, B1C2, and B2C1 restarted the trials. Tests presented the training components rotated 180° (Test 1), rotated 90° to the right (Test 2), rotated 90° to the left (Test 3), separated by 1 cm (Test 4), and separated by 1 cm and also rotated 180° (Test 5). All four pigeons exhibited discriminative responding in Tests 1-3, two pigeons did so in Test 4 as well, and one pigeon did so in Test 5 as well. The results suggest that training involved simple discrimination control and that the tests involved stimulus generalization.

 
7. Use of Timeout to Decrease Pausing During Rich to Lean Transitions
Area: EAB; Domain: Basic Research
EMILY L. BAXTER (University of North Carolina, Wilmington), Christine E. Hughes (University of North Carolina, Wilmington), Kelsey G. Knight (University of North Carolina, Wilmington)
Abstract:

Relatively large post-reinforcement pauses (PRPs) are observed during transitions from a rich (i.e., high reinforcer magnitude) environment to a lean (i.e., low reinforcer magnitude) environment compared to the other transition types (i.e., rich-rich, lean-rich, lean-lean). In previous studies, two discriminative stimuli have been used to indicate the upcoming reinforcer (i.e., large or small). In contrast, in the current study, four pigeons responded on a multiple FR schedule in which four discriminative stimuli were used to represent each individual transition. The magnitude of the reinforcers was adjusted until the PRP duration in the presence of the rich-to-lean transition stimuli was at least 20 s longer than the PRPs during the other transitions. In the second phase, probe sessions were included in which a 15-, 30-, or 60-s timeout (i.e., blackout) was added after each food presentation. Results have been variable; however, the most common effect is a decrease in the PRP during rich-to-lean transitions.

 
8. Persistent Superstitious Keypecking Despite Multiple Disruptors in Two Pigeons
Area: EAB; Domain: Basic Research
ANDREW T. FOX (University of Kansas), Mark P. Reilly (Central Michigan University)
Abstract:

Three pigeons were exposed to conditions of decreasing contingency between keypecks and food deliveries by varying the percentage of total food deliveries that were response-dependent or response-independent. Two pigeons continued to keypeck at moderate rates even when the food was delivered 100% response-independently. Sessions of no-food extinction nearly eliminated keypecking, but keypecking returned when response-independent food deliveries were reinstated. Halving the rate of food delivery did not eliminate keypecking in either pigeon, but one pigeon ceased to keypeck when the rate of food delivery was doubled. The other pigeon persisted despite sessions in which the opportunity to peck was removed but response-independent food deliveries continued. The results have implications for the contingency-versus-contiguity debate in operant conditioning and for the recent debate over the signaling versus strengthening functions of reinforcers.

 
9. Effects of Signaled Reinforcement on Resistance to Change
Area: EAB; Domain: Basic Research
ASHLEY GOMEZ (Santa Clara University), Jesslyn Farros (California State University, Los Angeles), Matthew C. Bell (Santa Clara University)
Abstract:

Behavioral Momentum Theory (BMT) holds that resistance to change is determined by Pavlovian stimulus-reinforcer contingencies. Some research, however, suggests that the model is incomplete. Specifically, the role of the stimulus and the exact determinants of resistance to change are unclear. In a systematic replication of Nevin et al. (1990), we investigated the effect of signaling non-contingent food on response rate and resistance to change in a two-component multiple-schedule procedure. Both the target and control components reinforced responding according to a variable-interval 60-s schedule. The target component, however, also provided non-contingent access to food according to an additional variable-time 40-s schedule. In signaled conditions (SIG), the additional non-contingent food presentations were preceded by a 4-s signal; in unsignaled conditions (UNS), no stimulus change occurred. Following baseline training, behavior was disrupted with food presentations during the intercomponent interval. According to BMT, resistance to change should be higher in the UNS conditions than in the SIG conditions. The data show that this was the case for seven of the eight subjects. When additional non-contingent food was presented in the presence of a stimulus, the proportion of baseline responding was higher in the UNS condition than in the SIG condition. This finding supports BMT.

 
10. Response Patterns in Multi-Link Chain Schedules During Extinction
Area: EAB; Domain: Basic Research
MATTHEW C. BELL (Santa Clara University)
Abstract: How reinforcement controls responding in chain schedules of reinforcement is not well understood. A primary reinforcement hypothesis suggests direct control of responding in each link by primary reinforcement, with the association becoming weaker as links become further removed from reinforcement and with the stimuli associated with each link merely providing a discriminative function. A conditioned reinforcement hypothesis suggests that chain stimuli acquire conditioned value through their association with reinforcement and that this conditioned value is what controls responding. Each hypothesis predicts a different pattern of responding in extinction. A primary reinforcement hypothesis predicts a forward pattern of extinction, with responding in earlier links decreasing most rapidly because of their more tenuous connection with primary reinforcement. A conditioned reinforcement hypothesis predicts a backward pattern of extinction, with responding in later links decreasing more rapidly than in earlier links because the conditioned reinforcing value of later links maintains responding in earlier links. The present study presented pigeons with two chain schedules: one ended in reinforcement while the other ended in extinction. After responding had been established, the reinforcement contingencies were reversed. Of primary interest was the pattern of extinction. Preliminary results suggest support for the primary reinforcement hypothesis, with later-link responding decreasing fastest.
 
11. ABA and ABC Renewal Effects in a Positive Reinforcement Paradigm: Effects of Changes in Auditory Stimuli
Area: EAB; Domain: Basic Research
STEPHANIE L. KINCAID (West Virginia University), Toshikazu Kuroda (West Virginia University), Kennon A. Lattal (West Virginia University)
Abstract: With a contextual change following extinction of a response, recurrence of the response has been observed; this phenomenon is known as the renewal effect. Previous studies demonstrating the renewal effect involved a simultaneous change in multiple stimulus modalities (e.g., visual, auditory, olfactory, and tactile), thereby making it difficult to attribute the effect to any specific stimulus change. The present study investigated whether a change in a single stimulus modality (auditory) would be sufficient to observe the renewal effect. Lever pressing was established in rats with a variable-ratio schedule in the presence of a tone, followed by extinction under a second tone. Within-subject renewal tests occurred at different times in the presence of the original tone (ABA renewal) or of a novel tone (ABC renewal). Lever pressing recurred reliably for the former but not the latter, suggesting that ABA renewal is more robust than ABC renewal when contexts are differentiated by a single stimulus modality.
 
12. Effects of Magnitude of Reinforcement on the Resurgence of Computer-Based Responses
Area: EAB; Domain: Basic Research
NICHOLAS VANSELOW (Western New England University), Gregory P. Hanley (Western New England University)
Abstract: Resurgence of previously reinforced responses occurs when more recently reinforced responses are placed on extinction. Some studies have demonstrated that the probability and magnitude of resurgence may be affected by the length of the reinforcement history or the length of exposure to extinction. However, previous studies have not examined the effect of different magnitudes of reinforcement on the occurrence or magnitude of resurgence. Three studies were conducted with typical adults playing a computer game. In Study 1, we replicated the procedures of previous studies with a limited number of responses and an equal magnitude of reinforcement. In Study 2, we included nine possible responses but kept the magnitude of reinforcement consistent. In Study 3, magnitudes of 10 points, 5 points, and 1 point were used for different responses to determine whether magnitude of reinforcement affected the probability that resurgence would occur and, if resurgence did occur, the order in which the responses resurged. Implications of these effects for preventing resurgence of problem behavior are discussed.
 
13. An Appropriate Index for Resurgence for Pigeons
Area: EAB; Domain: Theory
SATOSHI OBATA (Tokiwa University), Tetsumi Moriyama (Tokiwa University)
Abstract:

Resurgence is defined as the recurrence of previously reinforced behavior when more recently reinforced behavior is extinguished. Most previous studies have not investigated the phenomenon quantitatively. If there were an index showing the magnitude of resurgence quantitatively, we could examine in more detail the functional relation between the independent variables of resurgence and the magnitude as a dependent variable. Thus, we calculated the ratio of resurgence (ROR) based on response rates of the target behavior in both the elimination and the resurgence conditions for pigeons. We used Formula 1 to calculate ROR, where m1 is the mean key-peck response rate over the last three sessions of the elimination condition for a pigeon and m2 is the mean key-peck response rate for each session of the resurgence condition for that pigeon. Table 1 shows ROR for pigeons from three studies. A positive value means that a subject showed resurgence; a value of zero or a negative value means that a subject did not show resurgence. The results showed clear variation in the magnitude of resurgence among pigeons. Thus, ROR is an appropriate index of the magnitude of resurgence.
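The abstract cites Formula 1 without reproducing it. One simple index consistent with the description (positive when m2 exceeds m1, zero or negative otherwise) is the proportional difference sketched below; this form is an assumption for illustration only and may not match the authors' Formula 1.

```python
def ratio_of_resurgence(m1, m2):
    """Illustrative resurgence index (assumed form, not necessarily the authors' Formula 1).

    m1: mean target response rate over the last three elimination sessions.
    m2: mean target response rate for one resurgence-condition session.
    Returns a value that is positive when responding recovers above the
    elimination baseline, and zero or negative when it does not.
    """
    if m1 == 0:
        return m2  # avoid division by zero when elimination responding is zero
    return (m2 - m1) / m1

print(ratio_of_resurgence(m1=2.0, m2=6.0))  # 2.0  -> resurgence
print(ratio_of_resurgence(m1=2.0, m2=1.0))  # -0.5 -> no resurgence
```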

 
14. On the Reinstatement of Destructive Behavior Displayed by Individuals With Autism: A Translational Analysis
Area: EAB; Domain: Applied Research
TERRY S. FALCOMATA (University of Texas at Austin), Summer G. Ducloux (University of Texas at Austin), Katherine Hoffman (University of Texas), Colin S. Muething (University of Texas at Austin)
Abstract:

The recovery of previously extinguished responding during response-independent delivery of previously reinforcing stimuli is referred to as reinstatement. Studies of this phenomenon are limited to a small number of operant-based basic evaluations and behavioral pharmacological studies. Thus, translational analyses of this phenomenon are needed to study its potential applied relevance across additional populations (e.g., autism and developmental disabilities). In this study, we examined reinstatement of destructive behavior exhibited by individuals with autism. Destructive behavior was first reinforced on a fixed-ratio 1 schedule of reinforcement, and high rates of responding were observed. Next, extinction was implemented and destructive behavior was extinguished. In the third component, a fixed-time 2-min schedule was implemented and destructive behavior was reinstated. This 3-component sequence of conditions was implemented 3 times with both subjects, and reinstatement occurred in all test conditions. Interobserver agreement was collected on at least 30% of all sessions and averaged above 90% for all participants. These results suggest that reinstatement (a) occurs across populations and (b) is a phenomenon that likely impacts clinical outcomes by contributing to treatment lapses during and following treatments for severe destructive behavior.

 
15. Sequence Acquisition With Delayed Reinforcement in Rats
Area: EAB; Domain: Basic Research
ROBIN M. KUHN (Central Michigan University), John R. Smethells (Central Michigan University), Andrew T. Fox (University of Kansas), Mark P. Reilly (Central Michigan University)
Abstract:

Response acquisition with delayed reinforcement is a reliable and general phenomenon. However, acquisition of response sequences under delay-of-reinforcement conditions has yet to be examined extensively in non-humans. In this study, two groups of four naïve rats learned a left-right lever-press sequence with a short (0.25 s) or long (5 s) unsignaled, resetting delay to reinforcement using a tandem FR 1 FT x schedule. All subjects in the short-delay group acquired the sequence within four sessions, whereas up to twenty sessions were required for all rats in the long-delay group to learn the sequence. Within-session analysis of homogeneous (e.g., left-left) and heterogeneous (e.g., right-left) response sequences revealed variations in the proportion of sequences emitted outside of and during the delay, suggesting differential control by the FR and FT components of the tandem schedule. The results bring to bear the selective effects of reinforcement on functional operants composed of more than one discrete response and extend previous findings regarding acquisition with delayed reinforcement to more complex behaviors.

 
16. Combined Influence of Variability in Amount of Reinforcement and Schedule on Within-Session Decreases in Responding
Area: EAB; Domain: Basic Research
ALYSSA HOSKIE (University of Alaska Anchorage), Eric S. Murphy (University of Alaska Anchorage), Mikaela Mulder (University of Alaska Anchorage), Shea Lowery (University of Alaska Anchorage), Amanda Hesser (University of Alaska Anchorage)
Abstract:

The present experiment tested the hypothesis that habituation contributes to within-session decreases in operant responding. In particular, we tested for the variety-effects property of habituation, which states that habituation should develop more slowly, and overall responsiveness should be higher, when reinforcers are presented in a variable rather than a constant manner. The experiment used a 2 (amount: constant vs. variable 5 food pellets) × 2 (schedule: fixed-interval 8-s vs. variable-interval 8-s) design. Four rats responded on either an FI 8-s or a VI 8-s schedule in which pressing a lever produced a constant amount of 5 food pellets or an average of 5 (1 or 9 with a probability of .50) food pellets per delivery during 30-min daily sessions. When both amount and schedule of reinforcement were constant, rates of responding were lower and within-session decreases in responding were steeper than when one or both reinforcement parameters were variable. These preliminary data suggest that varying one or more reinforcement parameters increases the effectiveness of a repeatedly presented reinforcer. The results of the experiment are consistent with the idea that habituation to the reinforcer contributes to within-session changes in operant responding.

 
17. Effects of Reinforcer Delay on Within-Session Changes in Responding
Area: EAB; Domain: Basic Research
KENJIRO AOYAMA (Doshisha University)
Abstract:

Delay of reinforcement is thought to devalue the reinforcer. This study tested the effects of reinforcement delay on within-session changes in responding. Six rats were trained to press a lever for food reinforcers during 30-min sessions. In the No-Delay condition, every lever press was reinforced by a food pellet immediately after the response. In the Delay condition, delivery of the reinforcer was delayed by 1 s. The experiment lasted 10 days, and the 2 conditions were alternated using an ABBA design. During the early part of the session, response rates in the Delay condition were lower than in the No-Delay condition. However, within-session decreases in responding were steeper in the No-Delay condition than in the Delay condition. As a result, response rates were similar between the 2 conditions during the later part of the session. In addition, response rates were well described as linear functions of the cumulative number of reinforcements in both conditions (R²s > .96). The regression line for the No-Delay condition had a larger y-axis intercept and a steeper slope than that for the Delay condition. However, the x-axis intercepts of the No-Delay and Delay conditions were similar. This pattern is different from the effect of reinforcer devaluation induced by taste-aversion learning.
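A minimal sketch of the within-session regression described above, fitting response rate as a linear function of the cumulative number of reinforcers; the data points below are invented for illustration only.

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept for y = slope * x + intercept."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
             / sum((xi - mean_x) ** 2 for xi in x))
    intercept = mean_y - slope * mean_x
    return slope, intercept

cum_reinforcers = [10, 20, 30, 40, 50]   # cumulative reinforcers (illustrative)
response_rate = [60, 52, 45, 37, 30]     # responses per minute (illustrative)

slope, intercept = fit_line(cum_reinforcers, response_rate)
x_intercept = -intercept / slope         # cumulative reinforcers at which predicted rate reaches zero
print(slope, intercept, x_intercept)
```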

 
18. Attenuating the Behavioral Disruption Engendered by Negative Shifts in Food Reinforcement via a Bonus Food Contingency at Session Completion
Area: EAB; Domain: Basic Research
ROBERT ALEXANDER SAUER (College of Charleston), Chad M. Galuska (College of Charleston)
Abstract:

Negative incentive shifts involve transitions from relatively favorable to unfavorable reinforcement contexts and are known to produce clinically relevant behavioral disruptions in both animals and humans. This series of experiments was designed to assess whether providing additional incentives at session completion attenuated these disruptions in an animal model. Long-Evans rats lever pressed under a multiple fixed-ratio (FR) schedule (e.g., FR 80) for reinforcers of 2 different magnitudes. Half of the ratio completions resulted in delivery of a large reinforcer (three 45-mg food pellets) and half resulted in a small reinforcer (1 pellet). The upcoming reinforcer magnitude was signaled by either the left lever (e.g., large) or the right lever (e.g., small) being inserted into the chamber at the start of each component. In each session, components were presented pseudo-randomly, yielding 4 different transitions between just-received and signaled upcoming reinforcers: small-small, small-large, large-small, and large-large. Consistent with prior studies, the negative incentive shift (the large-small transition) engendered extended pausing. To attenuate this pausing, a bonus period of reinforcer availability was added at session completion. A lever was reinserted into the operant chamber, and each press on it produced 1 food pellet until 50 pellets were earned. Implementing the bonus contingency on the lever associated with the small component, but not the large component, decreased within-session pausing during the large-small transition. Overall, the results suggest that strengthening stimulus-reinforcer and response-reinforcer associations via an enriched reinforcement context at session completion may decrease the behavioral disruption engendered by negative incentive shifts.

 
19. Effects of Food Deprivation on the Stimulus Control of Eating
Area: EAB; Domain: Basic Research
Varsovia Hernandez Eslava (Universidad Nacional Autonoma de Mexico), CARLOS A. BRUNER (Universidad Nacional de Mexico)
Abstract:

Rats feed periodically in bouts of about 10 minutes separated by inter-bout periods of about 180 minutes. Whether food is freely available or restricted by a schedule with similar durations (e.g., multiple FR1 EXT) has no effect on this pattern. This investigation involved altering the regularity of the inter-access interval to either 20 or 300 minutes while holding the access duration constant at 10 minutes in daily 24-hour sessions. In addition, the temporal location of a 5-minute neutral stimulus was varied within the inter-access interval to either 5, 10, or 20 minutes before the subsequent access. Rats ate more when the inter-access interval was 300 minutes than when it was 20 minutes. For both inter-access intervals, the amount of food eaten was a decreasing function of the length of the stimulus-to-access interval. However, the stimulus-control functions were more pronounced when the inter-access period was 300 minutes than when it was 20 minutes. These results show that food deprivation can be varied experimentally within single 24-hour periods and that the longer of the two deprivation periods enhanced stimulus control over eating.

 
20. The Effects of an Olfactory Stimulus (Fox Urine) on Reward Sensitivity and Bias in an Open Field Foraging Paradigm
Area: EAB; Domain: Basic Research
VALERI FARMER-DOUGAN (Illinois State University), Kari Chesney (University of Missouri)
Abstract:

That olfactory stimuli are important for learning and avoidance tasks is well supported in the literature. However, olfactory cues have rarely been used as avoided stimuli themselves. Results of recent studies suggest that odors that may have biological relevance to the animal (e.g., fox urine) should produce reliable avoidance responses. It was hypothesized that an odor with potential survival relevance, such as a predator scent, should affect reward sensitivity, and not just bias, when Sprague Dawley rats foraged for food reinforcers in an open-field matching-law paradigm. Five Sprague Dawley rats were individually placed for 20 minutes per session in one of two large open foraging fields (2.5 m by 1.25 m with 30-cm walls) containing two foraging pans in opposite corners of the field. Four separate concurrent variable-time variable-time (conc VT VT) reinforcement schedules were used across the four-week span of the experiment. During baseline (Monday through Wednesday), rats were exposed to the schedules with no fox urine present, but water-saturated cotton balls were placed in each feeder pan. On Thursdays, 10 droplets of commercially prepared fox urine were placed on the cotton ball in Feeder 2. The baited feeder remained constant across reinforcement schedules. On Fridays, the recovery day, water-saturated cotton balls were once again placed in both feeders. Using Baum's (1974) matching equation, reward sensitivity and bias were calculated for baseline, fox-urine, and recovery days. Given that the same feeder was baited across schedules, the generalized matching law would predict changes in bias but not in reward sensitivity. However, results showed significant changes in bias and significant attenuation of reward sensitivity when rats were presented with the fox urine, compared to baseline and recovery days.
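Reward sensitivity and bias here refer to the parameters of Baum's (1974) generalized matching law, log(B1/B2) = a·log(R1/R2) + log(b), where a is sensitivity and b is bias. The sketch below fits those parameters by least squares to invented counts; the data and function name are illustrative only.

```python
import math

def generalized_matching_fit(b1, b2, r1, r2):
    """Fit log(B1/B2) = a*log(R1/R2) + log(b) by ordinary least squares.

    b1, b2: behavior allocated to feeders 1 and 2 under each schedule.
    r1, r2: reinforcers obtained from feeders 1 and 2 under each schedule.
    Returns (sensitivity a, bias b). Data used below are invented.
    """
    x = [math.log10(ri1 / ri2) for ri1, ri2 in zip(r1, r2)]
    y = [math.log10(bi1 / bi2) for bi1, bi2 in zip(b1, b2)]
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    a = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))
    log_b = mean_y - a * mean_x
    return a, 10 ** log_b

# Illustrative allocation and reinforcer counts across four conc VT VT schedules.
sensitivity, bias = generalized_matching_fit(
    b1=[300, 420, 180, 520], b2=[310, 200, 400, 150],
    r1=[20, 35, 12, 45], r2=[22, 15, 30, 10])
print(sensitivity, bias)
```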

 
21. Can type of maintenance diet act as an establishing operation to change demand for reinforcers with hens?
Area: EAB; Domain: Basic Research
Surrey Jackson (University of Waikato), Therese Mary Foster (University of Waikato), James McEwan (University of Waikato), LEWIS A. BIZO (The University of Waikato)
Abstract:

This study investigated whether the food used as the maintenance diet affected demand when either the same or a different food served as the reinforcer. Hens' preferences among wheat, laying pellets, and puffed wheat were assessed using a free-access procedure. The hens were then maintained at 80% ± 10% of their free-feeding body weights by one of the foods while responding under progressive-ratio schedules (with the response requirement doubling after each reinforcer) for each of the three foods. Sessions terminated when the hen ceased responding for 300 s. All three foods were used as reinforcers and as the maintenance food. Response rates, post-reinforcement pauses, and demand functions (i.e., the relation between estimated consumption rate and response requirement) under each response requirement were examined. The hens were then maintained at 80% ± 10% of their free-feeding body weight by pellets and responded under fixed-ratio schedules with the response requirement doubling each session until the hen received no reinforcers in a session (each of the three foods was used as a reinforcer). Sessions terminated after 40 reinforcers or 40 min. There were no systematic relations between the individual hens' food preferences and any of the performance measures under either the progressive- or fixed-ratio schedules.
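A minimal sketch of the progressive-ratio bookkeeping described above (response requirement doubling after each reinforcer) and of pairing each requirement with consumption to form a demand function; the session values below are placeholders, not data from the study.

```python
def pr_requirements(start=1, n=8):
    """Response requirements that double after each reinforcer (e.g., 1, 2, 4, ...)."""
    reqs = [start]
    for _ in range(n - 1):
        reqs.append(reqs[-1] * 2)
    return reqs

def demand_points(requirements, consumption):
    """Pair each response requirement (price) with the consumption recorded at it.

    A demand function relates consumption to price (here, the ratio requirement).
    The consumption values used below are placeholders.
    """
    return list(zip(requirements, consumption))

reqs = pr_requirements(start=1, n=8)   # 1, 2, 4, ..., 128
consumed = [1, 1, 1, 1, 1, 1, 0, 0]    # illustrative: responding ceased at high prices
print(demand_points(reqs, consumed))
```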

 
22. Foraging by Free-Ranging Eastern Fox Squirrels and Response Effort
Area: EAB; Domain: Basic Research
BRADY J. PHELPS (San Diego State University), Caitlin Gilley (South Dakota State University), Caroline Hicks (South Dakota State University), Ryan A. Richmond (South Dakota State University)
Abstract: Preferred food for free-roaming eastern fox squirrels (Sciurus niger) will be available in feeders. The effects of response effort will be manipulated by adding 25-g weights to the feeder lids. In approximately 75 days of initial observation, when squirrels prefer a feeder based on access and escape routes, approximately 200 g or more of added weight is needed before a squirrel will forage at an adjacent feeder with identical food. Effects of an altered food were also examined.
 
 
