Measuring and Evaluating Treatment Integrity
Sunday, May 28, 2017
10:00 AM–11:50 AM
Convention Center Mile High Ballroom 1A/B
Area: DDA/PRA; Domain: Translational
Chair: Brittany LeBlanc (University of Wisconsin-Milwaukee)
Discussant: Florence D. DiGennaro Reed (University of Kansas)
CE Instructor: Claire C. St. Peter, Ph.D.
Abstract: Treatment integrity refers to the extent to which a procedure is implemented as it was designed. Recent research on treatment integrity has evaluated the extent to which variations in measurement procedures affect obtained integrity values, and the extent to which reduced integrity affects treatment outcomes. In this symposium, we describe recent studies that have further advanced both of these areas of inquiry. Halbur et al. describe the extent to which variations in measurement systems may lead to different perceived levels of integrity in published studies. Smothermon et al. extend this line of research by showing that similar variations can over- or underestimate the performance of staff. Because integrity levels are often low without ongoing feedback, our final two studies evaluate the impact of reduced integrity on intervention outcomes: Brand et al. review the published literature in which treatment integrity was experimentally manipulated, and Mesches et al. describe a study in which integrity was manipulated during differential reinforcement of other behavior. Results across all studies continue to suggest that treatment integrity should be an important consideration for applied researchers and practitioners.
Instruction Level: Intermediate
Keyword(s): Measurement, Procedural Fidelity, Staff Training, Treatment Integrity

Procedural Integrity Data Collection and Analysis When Training Paraprofessionals to Implement Discrete-Trial Training
(Applied Research)
STEPHANIE SMOTHERMON (University of Houston-Clear Lake), Dorothea C. Lerman (University of Houston-Clear Lake), Kally M. Luck (University of Houston-Clear Lake), Taylor Custer (University of Houston-Clear Lake), Brittany Zey (University of Houston-Clear Lake)
Abstract: Collecting data on the integrity with which staff and caregivers implement prescribed treatments is a critical component of program evaluation. However, it can be challenging to collect accurate data on multiple procedural components in fast-paced instructional contexts. One possible approach is to evaluate performance across an entire session (e.g., whether the individual delivers prompts correctly during all trials of an observation) versus on a trial-by-trial basis (e.g., whether the individual delivers prompts correctly on each trial). In this study, we examined the sensitivity of data collected in this manner by comparing whole-session data to trial-by-trial data on the procedural integrity of 16 paraprofessionals who received training on how to implement discrete-trial teaching (DTT). We also compared the outcomes when data were collapsed across all procedural components versus individual components. Results suggested that whole-session data had adequate sensitivity but, in general, underestimated the performance of the individual implementing DTT. At the same time, trial-by-trial data were more likely to overestimate performance unless we examined the integrity of individual DTT components. Findings have important implications for assessing procedural integrity and selecting an appropriate mastery criterion during caregiver training.

An Examination of Treatment Integrity Criteria: Comparison of Training Outcomes Using Different Mastery Criteria
(Applied Research)
MARY HALBUR (University of Wisconsin-Milwaukee), Tiffany Kodak (University of Wisconsin-Milwaukee), Brittany LeBlanc (University of Wisconsin-Milwaukee), Samantha Bergmann (University of Wisconsin-Milwaukee), Mike Harman (University of Wisconsin-Milwaukee)
Abstract: Treatment integrity refers to the extent to which a treatment is accurately implemented according to the treatment plan (Gresham, 1989). There are multiple methods for collecting and calculating treatment integrity data. For example, experimenters may collect data on the trainee's implementation of each step of the intervention and calculate treatment integrity by dividing the number of correctly implemented treatment steps by the total number of steps per session. The present study describes variations in treatment integrity calculations and the implications of using each method of calculation. We will review trends in treatment integrity calculations used in staff/caregiver training studies and compare those methods to studies in which treatment integrity errors are evaluated. Raw data from staff/caregiver training studies published in the past seven years will be obtained and re-calculated using different criteria, and aggregated outcomes of these re-calculations will be presented and discussed. Clinical implications of the recalculations, suggestions for integrity calculations, and future research will be discussed.

Efficacy of Differential Reinforcement of Other Behavior Implemented With Reduced Treatment Integrity
(Applied Research)
GABRIELLE MESCHES (West Virginia University), Claire C. St. Peter (West Virginia University), Apral Foreman (West Virginia University), Lucie Romano (West Virginia University)
Abstract: Treatment integrity refers to the degree to which a treatment is implemented as it was designed. Treatment integrity failures are known to negatively affect differential reinforcement of alternative behavior (DRA) and response cost, but little is known about the effects of integrity failures on differential reinforcement of other behavior (DRO). We assessed the efficacy of a DRO procedure when implemented perfectly and when implemented with varying degrees of treatment integrity failures, using a human-operant preparation with university students (Experiment 1) and with a child who engaged in socially significant challenging behavior (Experiment 2). The DRO was effective at all levels for 3 human-operant participants, but lost efficacy when implemented with 60% or lower treatment integrity for the remaining 3 participants. We also obtained a loss of treatment effects at around 60% integrity in the clinical replication. These data suggest that DRO interventions may be effective with some level of failure in treatment integrity, but there may be a critical integrity level around 60% at which the intervention breaks down.

Effects of Treatment Integrity Errors on Responding: A Fifteen-Year Review
(Applied Research)
DENYS BRAND (University of Kansas), Florence D. DiGennaro Reed (University of Kansas), Amy J. Henley (University of Kansas), Elizabeth Gray (University of Kansas), Brittany Crabbs (University of Kansas)
Abstract: Treatment integrity measures the extent to which direct care staff implement procedures consistent with the prescribed protocols. Errors made by direct care staff when implementing teaching or treatment procedures may impede progress or harm consumers. In recent years, treatment integrity research has begun to assess how specific types of treatment integrity errors affect consumer behavior. Studies of this type involve systematically manipulating the degree to which treatment integrity errors are introduced and measuring their effect on consumer behavior. The current review evaluated articles published across seven behavior-analytic journals between 2001 and 2015. The main objectives of this review were to identify 1) the number of studies in which levels of treatment integrity were manipulated systematically, 2) the types of errors investigated, 3) which parts of the intervention procedure were manipulated, and 4) the degree to which these errors affected participant behavior. Fourteen studies from nine articles met the inclusion criteria. Results showed that a majority of studies involved children with disabilities, took place in a school setting, and manipulated errors during the consequence component of treatment.