Association for Behavior Analysis International

The Association for Behavior Analysis International® (ABAI) is a nonprofit membership organization with the mission to contribute to the well-being of society by developing, enhancing, and supporting the growth and vitality of the science of behavior analysis through research, education, and practice.


43rd Annual Convention; Denver, CO; 2017

Event Details



Symposium #423
CE Offered: BACB
Training Staff to Implement Skill Acquisition, Reinforcement, and Data Collection Procedures
Monday, May 29, 2017
10:00 AM–10:50 AM
Convention Center Mile High Ballroom 2A
Area: PRA/AUT; Domain: Service Delivery
Chair: Carole Ann Deitchman (DataPath ABA)
CE Instructor: Carole Ann Deitchman, M.A.
Abstract: Developing effective staff training procedures is essential for the application of behavior-analytic strategies. Specifically, it is important to train staff to implement a variety of skill acquisition procedures while accurately measuring their effectiveness. Reducing staff training time without sacrificing integrity is also beneficial, and it may be important to take the preferences of staff trainees into consideration. This symposium will therefore review three studies on staff training involving skill acquisition, reinforcement procedures, and discontinuous data collection procedures. The first study used a modified behavioral skills training package to teach a behavior chain interruption strategy to staff trainees with no prior experience. The second study evaluated the effects of discontinuous data collection systems with varying interval lengths, such as 10-s and 30-s momentary time sampling and partial interval recording, on the number of errors made by staff trainees and on their preferences. The last study identified the most common errors during the implementation of a variable-ratio schedule of reinforcement and evaluated the effects of an intervention.
Instruction Level: Basic
 

Training Staff to Implement a Behavior Chain Interruption Procedure Using a Video Model With Voice-Over Instruction Plus Feedback

REBECCA STINGER (Caldwell University), Sharon A. Reeve (Caldwell University), Ruth M. DeBar (Caldwell University), Jason C. Vladescu (Caldwell University)
Abstract:

The current study evaluated the effects of staff training using a video model with voice-over instruction plus feedback to implement a multiple-stimulus-without-replacement preference assessment and a behavior chain interruption strategy, in a multiple-baseline design across participants. The dependent variable was the percentage of correct responses in undergraduate students' implementation of the preference assessment and behavior chain interruption procedure. Procedural integrity and interobserver agreement data (including agreement on procedural integrity) were collected for 50% of sessions across all measures and ranged from 90% to 100%. The results demonstrated low to no correct responding in baseline, with higher responding in a baseline condition that included written instructions. Once the video model was implemented, participants reached the mastery criterion of 100% correct steps within three sessions. A treatment extension evaluated skill acquisition by child participants diagnosed with autism, ages seven to nine. Undergraduate students scored 95% or higher on percentage of correct responses, and both child participants met the mastery criterion across two consecutive sessions within five sessions.

 

Evaluating Teacher Implementation of Discontinuous Data Collection in the Classroom

SHAWNA UEYAMA (Douglass Developmental Disabilities Center, Rutgers University), Kate E. Fiske (Douglass Developmental Disabilities Center, Rutgers University), Erica M. Dashow (Rutgers University)
Abstract:

Discontinuous data collection procedures such as momentary time sampling (MTS) and partial interval recording (PIR) provide practitioners with an alternative to continuous data collection. However, many studies on the accuracy of MTS and PIR are not conducted in applied settings and do not consider human error. The present study compared the use of MTS and PIR in a classroom setting using three teacher-student dyads, aiming to identify the procedure that had the least methodological and human error when used by teachers collecting data on stereotypy. Methodological error was measured by comparing teacher-collected estimates to duration data coded from video. Human error was quantified by calculating teachers' treatment integrity (TI) of an instructional protocol and their interobserver agreement (IOA) for each discontinuous data collection method. This study also compared the social validity of these procedures by examining teacher perceptions and preference. Results indicated that PIR significantly overestimated the occurrence of stereotypy, while MTS yielded accurate estimates. All three teachers erroneously perceived PIR to be more accurate than MTS. Results for human error indicated that these teachers maintained high TI and IOA. Lastly, findings from the present study suggest that the factors that affect preference are complex and vary across individuals.
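
As a point of reference for how these two discontinuous methods can produce different estimates, the sketch below simulates a session coded second by second and scores it with MTS and PIR against the true duration. The behavior stream, the 10-s interval length, and the helper functions are illustrative assumptions for this sketch, not values or code from the study.

```python
# Minimal sketch: comparing momentary time sampling (MTS) and partial
# interval recording (PIR) estimates against true duration data.
# The session stream and 10-s interval below are illustrative assumptions.

def mts_estimate(behavior, interval):
    """Percent of intervals in which the behavior is occurring at the
    sampled moment (here, the last second) of each interval."""
    samples = range(interval - 1, len(behavior), interval)
    scored = [behavior[t] for t in samples]
    return 100.0 * sum(scored) / len(scored)

def pir_estimate(behavior, interval):
    """Percent of intervals in which the behavior occurs at any point."""
    n_intervals = len(behavior) // interval
    scored = [
        any(behavior[i * interval:(i + 1) * interval])
        for i in range(n_intervals)
    ]
    return 100.0 * sum(scored) / len(scored)

def true_duration(behavior):
    """Percent of session time the behavior actually occurred."""
    return 100.0 * sum(behavior) / len(behavior)

if __name__ == "__main__":
    import random
    random.seed(0)

    # 10-minute session coded second by second: 1 = stereotypy occurring.
    # Brief, frequent bouts are the case where PIR overestimates most.
    session = []
    while len(session) < 600:
        session += [1] * random.randint(1, 3)    # short bout
        session += [0] * random.randint(5, 20)   # pause between bouts
    session = session[:600]

    interval = 10  # seconds per observation interval
    print(f"True duration: {true_duration(session):.1f}%")
    print(f"MTS estimate:  {mts_estimate(session, interval):.1f}%")
    print(f"PIR estimate:  {pir_estimate(session, interval):.1f}%")
```

Because PIR scores an interval if the behavior occurs at any point within it, brief but frequent bouts inflate its estimate, which is consistent with the overestimation reported above; MTS, which samples only one moment per interval, tracks true duration more closely on average.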

 
Reducing Error Patterns in Variable Ratio Schedules Using a Programmed Schedule of Reinforcement
ERICA M. DASHOW (Rutgers University), Stacy Lauderdale-Littin (Rutgers University), Kimberly Sloman (Douglass Developmental Disabilities Center, Rutgers University), Robert W. Isenhower (Douglass Developmental Disabilities Center, Rutgers University), Meredith Bamond (Douglass Developmental Disabilities Center, Rutgers University)
Abstract: When utilizing reinforcement, the type and schedule of reinforcement can impact the strength of learner responding (DeLuca & Holborn, 1990). For example, a variable-ratio (VR) schedule of reinforcement produces a high, steady rate of responding by the learner, supports maintenance of positive behaviors over time, prevents satiation on the reinforcer, teaches delayed gratification, and makes behavior more resistant to extinction. When using a VR schedule, the number of responses required before reinforcement is delivered should vary but, on average, equal the value specified by the schedule. However, this schedule of reinforcement can be difficult to implement with fidelity. Utilizing a multiple baseline design across participants, we sought to determine the most common errors during teacher implementation of VR schedules of reinforcement and to evaluate the effects of a programmed schedule of reinforcement in reducing those errors. Three teachers participated in this evaluation. Within baseline, the calculated mean remained close to the specified VR schedule; however, there was little overall variability. Results indicate that, with the use of a programmed schedule of reinforcement, instructors increased variability within the implemented reinforcement schedule and remained closer to the desired mean. Implications for use within the classroom will be discussed.
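
As an illustration of what programming a VR schedule ahead of time can look like, the sketch below pre-generates a list of response requirements that vary from trial to trial but average exactly to the target ratio, so the implementer follows the list rather than improvising each requirement. The construction method (balanced pairs around the target) and all names here are assumptions for illustration; the study's actual programmed schedule is not specified in this abstract.

```python
# Minimal sketch of a pre-programmed variable-ratio (VR) schedule:
# a fixed list of response requirements that vary trial to trial but
# average to the target ratio. This is one common way to build such a
# list, not necessarily the procedure used in the study.

import random

def programmed_vr_schedule(target_ratio, n_deliveries, spread=None, seed=None):
    """Return a list of response requirements whose mean equals target_ratio.

    spread: maximum deviation of any single requirement from the target
            (defaults to target_ratio - 1, i.e., requirements between 1
            and 2 * target_ratio - 1).
    """
    rng = random.Random(seed)
    if spread is None:
        spread = target_ratio - 1
    requirements = []
    for _ in range(n_deliveries // 2):
        offset = rng.randint(1, spread)
        # Add requirements in balanced pairs so the mean stays on target.
        requirements += [target_ratio - offset, target_ratio + offset]
    if n_deliveries % 2:
        requirements.append(target_ratio)  # odd count: one exact-ratio trial
    rng.shuffle(requirements)
    return requirements

if __name__ == "__main__":
    # Example: a VR5 schedule with 10 reinforcer deliveries.
    schedule = programmed_vr_schedule(target_ratio=5, n_deliveries=10, seed=1)
    print("Response requirements:", schedule)
    print("Mean ratio:", sum(schedule) / len(schedule))
```

Following a pre-generated list like this addresses the baseline pattern described above: variability is built into the schedule in advance while the obtained mean stays on target.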
 
