Analysis of Procedural Variations in Teaching on Learner Outcomes
Saturday, May 25, 2013
2:30 PM–3:50 PM
M100 H-I (Convention Center)
Area: EDC/AUT; Domain: Applied Research
Chair: Jason M. Hirst (University of Kansas)
CE Instructor: Florence D. DiGennaro Reed, Ph.D.
Abstract: The experimental literature documents the effectiveness of behavioral intervention across a wide range of teaching procedures, settings, populations, and target behaviors. Although these procedures are empirically supported, slight variations in procedural details may impede or promote learning. The purpose of this symposium is to highlight four studies evaluating slight changes to teaching procedures and their impact on learner outcomes. The first presentation will share findings from an evaluation of the efficacy and efficiency of two prompting procedures for teaching receptive labeling skills to three children with autism. The second presentation will share the results of an analysis of the effects of inter-trial interval length during discrete trial teaching on the efficiency of acquisition by three children with autism. The third presentation will summarize the results of a parametric analysis of treatment integrity level (i.e., feedback accuracy) on learner acquisition. Finally, the symposium will conclude with a review of the experimental literature on treatment integrity, highlighting the variables that influence maintenance and generalization of staff performance.
Keyword(s): behavioral education, discrete trial teaching, treatment integrity

A Comparison of Graduated Guidance and Simultaneous Prompting in Teaching Children With Autism Receptive Language
ARIANA RONIS BOUTAIN HOPSTOCK (University of Kansas), Jan B. Sheldon (University of Kansas), James A. Sherman (University of Kansas)
Abstract: A variety of prompting procedures have been used to teach children with autism new skills. This study compared the effectiveness and efficiency of a criterion-based graduated guidance prompting procedure with a simultaneous prompting procedure for 3 children with autism (ages 4-8). Using a parallel treatment design, researchers taught each participant 6 pairs of receptive labels, 3 with simultaneous prompting and 3 with graduated guidance. Results indicated that the criterion-based graduated guidance procedure was effective in teaching 2 pairs of skills to each participant. The simultaneous prompting procedure was effective in teaching 2 pairs of skills to 1 participant and 1 pair of skills to each of the other 2 participants. On average, the graduated guidance procedure required slightly fewer teaching trials than the simultaneous prompting procedure and produced fewer child errors during daily probe trials. Although the findings indicate that a criterion-based graduated guidance procedure may be slightly more effective and efficient than a simultaneous prompting procedure, the choice of prompting procedure for teaching children with autism must be made on an individual basis.

The Effects of Inter-trial Intervals on Receptive Tasks for Young Children With Autism
NICOLE ASHLEE CALL (University of Kansas), James A. Sherman (University of Kansas), Jan B. Sheldon (University of Kansas)
Abstract: Discrete trial teaching has been widely used to teach children with autism. Several studies have examined inter-trial intervals during discrete trial teaching and their relationship to the speed of learning, indicating that the duration of the inter-trial interval may have some effect. The purpose of this study was to examine the effects of inter-trial intervals on receptive labeling by three children (ages 4 to 7 years) diagnosed on the autism spectrum. An alternating treatment design was used to compare the effects of short inter-trial intervals (5-10 seconds) with those of longer inter-trial intervals (15-20 seconds) during discrete trial teaching. Participants were taught to point to pictures of objects, numbers, or people. The results were mixed. One participant learned all of the pairs in roughly the same number of trials with both interval lengths. The other two participants sometimes learned a pair of pictures in fewer trials with the short inter-trial intervals and sometimes with the long inter-trial intervals. Although participants learned the tasks in a similar number of teaching trials, all participants learned the tasks in less total teaching time when the short inter-trial intervals were used.

The Short- and Long-Term Effects of Inaccurate Feedback: An Examination of Academic Task Acquisition in an Analogue Educational Setting
Jason M. Hirst (University of Kansas), FLORENCE D. DIGENNARO REED (University of Kansas)
Abstract: In educational settings, feedback plays a significant role in learning. However, teachers do not always implement prescribed procedures with high integrity, and research has shown a reliable negative impact on outcome measures. Although the relation between fidelity of instruction and learning is becoming clear, there is no substantial body of evidence examining the implementation of feedback procedures. Human operant research has shown that participants are likely to follow inaccurate instructions even when doing so fails to maximize reinforcement. Feedback has been interpreted to serve a function similar to that of instructions. A logical empirical question is whether the accuracy of feedback influences behavior in similar ways. To that end, the present study examined the effects of 4 levels of feedback accuracy on the acquisition of a simple academic task. Four typically developing children, ages 4-5 years, were taught four discrimination tasks, each associated with a level of feedback accuracy, in a multielement design. The results suggest that learning occurred only when the feedback provided was 100% accurate. Additionally, during a second condition in which only accurate feedback was provided, a consistent delay in acquisition was observed for the tasks previously associated with inaccurate feedback.

Variables Affecting Maintenance and Generalization of Treatment Integrity by Direct Care Staff: A Review and Recommendations for Future Research
KERRY A. CONDE (Western New England University), Amanda Karsten (Western New England University)
Abstract: Treatment integrity, also known as procedural fidelity, refers to the accuracy with which a change agent implements an intervention (Fiske, 2008; St. Peter Pipkin, Vollmer, & Sloman, 2010; Vollmer et al., 2008; Watson, Foster, & Friman, 2006). The degree to which trainers and supervisors achieve maintenance and generalization of treatment integrity by direct-care staff may have important implications for treatment efficacy (e.g., DiGennaro Reed, Reed, Baez, & Maguire, 2011; Grow et al., 2009; Vollmer, Roane, Ringdahl, & Marcus, 1999; Wilder, Atwell, & Wine, 2006) and the validity of data-based treatment decisions (Vollmer et al., 2008). According to Fleming and Sulzer-Azaroff (1989), a primary concern for researchers and practitioners is the failure to maintain newly acquired skills. The purpose of the current paper is to review the experimental literature on strategies to promote maintenance and generalization of performance by direct-care staff. Results are discussed in terms of considerations for practicing behavior analysts and recommendations for future research.
|