An Extended Commentary on Post-publication Peer Review in Organizational Neuroscience

Authors

  • Guy A. Prochilo, University of Melbourne
  • Winnifred R. Louis, University of Queensland
  • Stefan Bode, University of Melbourne
  • Hannes Zacher, Leipzig University
  • Pascal Molenberghs, University of Melbourne

DOI:

https://doi.org/10.15626/MP.2018.935

Keywords:

confidence intervals, EEG, effect sizes, fMRI, meta-science, organizational neuroscience, parameter estimation, post-publication peer-review, replication, reporting standards

Abstract

While considerable progress has been made in organizational neuroscience over the past decade, we argue that critical evaluations of published empirical works are not being conducted carefully and consistently. In this extended commentary we take as an example Waldman and colleagues (2017): a major review work that evaluates the state of the art of organizational neuroscience. In what should be an evaluation of the field’s empirical work, the authors uncritically summarize a series of studies that: (1) provide insufficient transparency to be clearly understood, evaluated, or replicated, and/or (2) misuse inferential tests in ways that lead to misleading conclusions, among other concerns. These concerns have been ignored across multiple major reviews and citing articles. We therefore provide a post-publication review (in two parts) of one-third of all studies evaluated in Waldman and colleagues’ major review work. In Part I, we systematically evaluate the field’s two seminal works with respect to their methods, analytic strategy, results, and interpretation of findings. In Part II, we provide focused reviews of secondary works, each centering on a specific concern that we suggest should be a point of discussion as the field moves forward. In doing so, we identify a series of practices that we recommend to improve the state of the literature. These include: (1) evaluating the transparency and completeness of an empirical article before accepting its claims, (2) becoming familiar with common misuses and misconceptions of statistical testing, and (3) interpreting results with explicit reference to effect size magnitude, precision, and accuracy, among other recommendations. We suggest that adopting these practices will motivate the development of a more replicable, reliable, and trustworthy field of organizational neuroscience.

Published

2019-11-11

Section

Commentaries