Association for Behavior Analysis International

The Association for Behavior Analysis International® (ABAI) is a nonprofit membership organization with the mission to contribute to the well-being of society by developing, enhancing, and supporting the growth and vitality of the science of behavior analysis through research, education, and practice.


34th Annual Convention; Chicago, IL; 2008

Event Details


Symposium #455
Observational Methods: Some Must-Knows and Some Ought-to-Considers
Monday, May 26, 2008
3:00 PM–4:20 PM
Area: TPC/EAB; Domain: Theory
Chair: Jessica M. Ray (University of Central Florida)
Discussant: W. Kent Anger (Oregon Health & Science University)
Abstract: This symposium reviews issues in observational coding methodology, including an overview of an expert system designed for near-errorless training of observers and a taxonomy for systematizing the complex array of alternative sampling and recording methods. Also reviewed are the use, and potential conflation, of functional vs. structural categories; the number of categories within a common domain of reference; the number of concurrent category domains; and assessment strategies for determining accuracy and agreement. In addition, the use of observation to assess and evaluate client training effectiveness in applied field settings is reported.
Train-to-Code: An Adaptive Expert System for Errorless Training of Coding Skills.
JESSICA M. RAY (University of Central Florida), Roger D. Ray ((AI)2, Inc./Rollins College)
Abstract: Problems in training behavioral observers to high degrees of inter-individual accuracy/agreement and intra-individual stability in such measures are fundamental in descriptive research and behavioral intervention services. This paper presents design characteristics and results of formative evaluations of an artificially intelligent adaptive computerized expert system, called Train-to-Code, which shapes an individual’s observation and recording behaviors using nearly errorless training strategies to maximize both coding accuracy and stability. Using instructor-generated videos and corresponding expert coding files to supply prompting and feedback, Train-to-Code adaptively presents five alternative feedback-based training levels until expert-equivalent levels of interobserver accuracy and satisfactory intra-individual stability in coding accuracy occur without prompts or feedback.
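The abstract describes adaptive advancement through feedback levels once accuracy is both high and stable. As a generic sketch only (the criterion values, window size, and function name below are assumptions for illustration, not the actual Train-to-Code implementation), such level-advancement logic might look like:

```python
# Generic level-advancement sketch for criterion-based adaptive training.
# ACCURACY_CRITERION and STABILITY_WINDOW are assumed values, not taken
# from the Train-to-Code system described in the abstract.

ACCURACY_CRITERION = 0.9   # assumed mastery threshold per trial
STABILITY_WINDOW = 3       # assumed run of passing trials required to advance

def next_level(level, recent_accuracies, n_levels=5):
    """Advance when the last few trials all meet criterion; step back
    after a failing trial; otherwise stay at the current level."""
    if len(recent_accuracies) >= STABILITY_WINDOW and all(
        a >= ACCURACY_CRITERION for a in recent_accuracies[-STABILITY_WINDOW:]
    ):
        return min(level + 1, n_levels)       # stable mastery: move up
    if recent_accuracies and recent_accuracies[-1] < ACCURACY_CRITERION:
        return max(level - 1, 1)              # recent error: add support back
    return level

print(next_level(2, [0.95, 0.95, 0.95]))  # → 3 (stable, advance)
print(next_level(2, [0.95, 0.50]))        # → 1 (failed last trial, step back)
```

The key design point the abstract implies is that advancement depends on stability (a run of accurate trials), not a single accurate trial.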
Alternative Observational Recording Methods and Their Implications.
DAVID A. ECKERMAN (University of North Carolina, Chapel Hill), Roger D. Ray ((AI)2, Inc./Rollins College), Jessica M. Ray (University of Central Florida)
Abstract: Behavior analysts prefer direct to indirect measures of behavior and prefer direct measures produced by physical transducers such as a lever. Yet, when no physical transducer can be arranged to capture an aspect of behavior under study, combined observation by two or more humans can often provide direct behavioral measures of acceptable validity and reliability, especially when recording technology is used. How observations are made and recorded, however, determines what behavioral conclusions can be drawn. Alternative observational methods will be described, along with their strengths and limitations. Implications will be drawn for operational (vs. ostensive) as well as functional (vs. structural) definitions of categories and mutually exclusive (vs. non-exclusive) and exhaustive (vs. non-exhaustive) categories. Advantages of time-tagged vs. time-grouped behavioral measures will be compared, and their implications for a sequential analysis of different types of behavior will be explored.
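The time-tagged vs. time-grouped contrast above can be made concrete with a small sketch (the session data and function names are hypothetical, for illustration only): time-tagged records preserve each onset time, while time-grouped (interval) records keep only which codes occurred within each interval, discarding the ordering that sequential analysis requires.

```python
# Hypothetical session recorded as time-tagged events: (seconds, code).
time_tagged = [
    (1.2, "talk"), (3.8, "gesture"), (4.1, "talk"), (9.5, "talk"),
]

def to_time_grouped(events, interval=5.0):
    """Collapse time-tagged events into partial-interval records:
    for each interval, record only the set of codes that occurred."""
    grouped = {}
    for t, code in events:
        bin_idx = int(t // interval)
        grouped.setdefault(bin_idx, set()).add(code)
    return grouped

print(to_time_grouped(time_tagged))
# Interval 0 (0-5 s) contains both codes; interval 1 (5-10 s) only "talk".
# Note the loss: within interval 0 we can no longer tell that "talk"
# preceded "gesture", so sequential dependencies cannot be recovered.
```

This one-way information loss is why the abstract frames time-tagged measures as advantageous for sequential analysis.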
Issues in Observational Training and Accuracy/Agreement Attainments.
ROGER D. RAY ((AI)2, Inc./Rollins College), Laura Marie Milkosky (Rollins College), Nicole Catherine Hogan (Rollins College)
Abstract: Observational methods for coding behavior accurately and consistently are replete with difficulties and caveats. In addition to using different terms to describe, and different reference standards to measure, accuracy/reliability within and between observers, different disciplinary fields within psychology also apply different statistical evaluations and critical values to those measures. Also to be considered are the impacts of the number of categories used to classify behaviors and of the alternative frequencies of each type of behavior within the corpus of training and research data. This paper reviews and illustrates these issues with concrete data both from the literature and from our laboratory.
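One standard illustration of why the choice of statistic matters (the abstract does not name specific statistics; percent agreement and Cohen's kappa are used here as a well-known example, with hypothetical observer data): when one behavior category dominates, raw percent agreement can look high while chance-corrected agreement is near zero or worse.

```python
# Hypothetical codes from two observers over 100 intervals. "on" dominates,
# so agreeing by chance is easy.
obs_a = ["on"] * 90 + ["off"] * 5 + ["on"] * 5
obs_b = ["on"] * 90 + ["on"] * 5 + ["off"] * 5

def percent_agreement(a, b):
    """Proportion of intervals on which the two observers agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement for two observers, nominal codes."""
    n = len(a)
    p_o = percent_agreement(a, b)  # observed agreement
    codes = set(a) | set(b)
    # Expected chance agreement from each observer's marginal code rates.
    p_e = sum((a.count(c) / n) * (b.count(c) / n) for c in codes)
    return (p_o - p_e) / (1 - p_e)

print(percent_agreement(obs_a, obs_b))  # → 0.9 (looks excellent)
print(cohens_kappa(obs_a, obs_b))       # ≈ -0.05 (no better than chance)
```

The same data thus pass or fail depending on the statistic and critical value chosen, which is exactly the cross-disciplinary inconsistency the abstract points to, alongside the effect of category frequency imbalance on the chance-agreement term.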


