Dr. William R. Shadish is Professor and Founding Faculty at the University of California, Merced. He received his bachelor's degree in sociology from Santa Clara University in 1972, and his M.S. (1975) and Ph.D. (1978) degrees in clinical psychology from Purdue University. He completed a postdoctoral fellowship in methodology and program evaluation at Northwestern University from 1978 to 1981. His current research interests include experimental and quasi-experimental design, the empirical study of methodological issues, the methodology and practice of meta-analysis, and evaluation theory. He is author (with T. D. Cook & D. T. Campbell, 2002) of Experimental and Quasi-Experimental Designs for Generalized Causal Inference and of ES: A Computer Program and Manual for Effect Size Calculation, co-editor of five other volumes, and the author of over 100 articles and chapters. He was the 1997 President of the American Evaluation Association, winner of the association's 1994 Paul F. Lazarsfeld Award for Evaluation Theory, and winner of its 2000 Robert Ingle Award for service. He is a Fellow of both the American Psychological Association and the American Psychological Society, and a past editor of New Directions for Program Evaluation.
Abstract: Meta-analysis has become an essential tool for summarizing large bodies of primary research literature in the social sciences. Among its many applications is determining whether a given educational or clinical practice can be termed evidence- or research-based. With few exceptions, however, evidence from single subject research has not been included in meta-analyses. The reason is primarily technological rather than ideological: there is little agreement on optimal statistical methods for meta-analysis of single subject research, and those methods have not received the advanced statistical attention necessary to identify sampling distributions for pertinent effect size estimators, appropriate weights, homogeneity tests, and ancillary statistical methods such as fixed versus random effects models. This address will review the existing methodological literature on meta-analysis of single subject research, identify key strengths and weaknesses of some of these methods, and discuss statistical developments that may improve them.