A meta-analytic approach to evaluating the explanatory adequacy of theories


  • Alejandrina Cristia, Laboratoire de Sciences Cognitives et de Psycholinguistique, Département d’études cognitives, ENS, EHESS, CNRS, PSL University
  • Sho Tsuji, International Research Center for Neurointelligence, Institute for Advanced Studies, The University of Tokyo, Japan
  • Christina Bergmann, Language Development Department, Max Planck Institute for Psycholinguistics, The Netherlands




Keywords: meta-analysis, variability, replication, sample size, effect size, quantitative, open science, cumulative science, theory adjudication, explanatory adequacy


How can data be used to check theories' explanatory adequacy? The two traditional and most widespread approaches use single studies and non-systematic narrative reviews to evaluate theories' explanatory adequacy; more recently, large-scale replications have entered the picture. We argue here that none of these approaches fits the tenets of cumulative science. We propose instead Community-Augmented Meta-Analyses (CAMAs), which, like meta-analyses and systematic reviews, are built from all available data; like meta-analyses but unlike systematic reviews, can rely on sound statistical practices to model methodological effects; and, unlike any other approach, are broad-scoped, cumulative, and open. We explain how CAMAs entail a conceptual shift from meta-analyses and systematic reviews, a shift that is useful when evaluating theories' explanatory adequacy. We then provide step-by-step recommendations for how to implement this approach, and what it means when one cannot. This leads us to conclude that CAMAs highlight areas of uncertainty better than alternative approaches that bring data to bear on theory evaluation, and can trigger a much-needed shift towards a cumulative mindset with respect to both theory and data, leading us to do, and to view, experiments and narrative reviews differently.
