The Replication Dilemma: Potential Challenges in Measuring Replication Value—A Commentary on Isager et al. (2025)

DOI: https://doi.org/10.15626/MP.2024.4312

Keywords: replication, value, open science, metascience

Abstract

Isager et al. (2025) start from the assumption that researchers' replication efforts are constrained by limited resources, and they propose a simple, practically scalable framework of replication value to guide researchers and the scientific community at large toward a bigger bang for the buck. Specifically, they propose a framework that combines the citation impact and sample size of original articles into a metric for assessing replication value, implying that original studies with higher scores on this metric can be prioritized for replication efforts. We thoroughly agree with the authors' assumption and support the view of working towards an optimal framework that helps the community achieve maximum research impact from replication efforts. In this commentary, we discuss three important limitations that have to be considered before using such metrics.


References

Boyce, V., Prystawski, B., Abutto, A. B., Chen, E. M., Chen, Z., Chiu, H., Ergin, I., Gupta, A., Hu, C., Kemmann, B., Klevak, N., Lua, V. Y. Q., Mazzaferro, M., Mon, K., Ogunbamowo, D., Pereira, A., Troutman, J., Tung, S., Uricher, R., & Frank, M. C. (2024). Estimating the replicability of psychology experiments after an initial failure to replicate. https://doi.org/10.31234/osf.io/an3yb

Brainard, J. (2023). Fast-growing open-access journals stripped of coveted impact factors. https://doi.org/10.1126/science.adi0098

Calcagno, V., Demoinet, E., Gollner, K., Guidi, L., Ruths, D., & De Mazancourt, C. (2012). Flows of Research Manuscripts Among Scientific Journals Reveal Hidden Submission Patterns. Science, 338(6110), 1065–1069. https://doi.org/10.1126/science.1227833

Chorus, C., & Waltman, L. (2016). A Large-Scale Analysis of Impact Factor Biased Journal Self-Citations (W. Glanzel, Ed.). PLOS ONE, 11(8), e0161021. https://doi.org/10.1371/journal.pone.0161021

Devezer, B., Navarro, D. J., Vandekerckhove, J., & Ozge Buzbas, E. (2021). The case for formal methodology in scientific reform. Royal Society Open Science, 8(3), 200805. https://doi.org/10.1098/rsos.200805

Dong, Y., Johnson, R. A., & Chawla, N. V. (2016). Can Scientific Impact Be Predicted? IEEE Transactions on Big Data, 2(1), 18–30. https://doi.org/10.1109/TBDATA.2016.2521657

Fanelli, D. (2022). The "Tau" of Science - How to Measure, Study, and Integrate Quantitative and Qualitative Knowledge. https://doi.org/10.31222/osf.io/67sak

Hanson, M. A., Barreiro, P. G., Crosetto, P., & Brockington, D. (2023). The strain on scientific publishing. arXiv. https://doi.org/10.48550/ARXIV.2309.15884

Ibrahim, H., Liu, F., Zaki, Y., & Rahwan, T. (2024). Google Scholar is manipulatable. arXiv. https://doi.org/10.48550/ARXIV.2402.04607

Ioannidis, J. P. A., & Maniadis, Z. (2024). Quantitative research assessment: Using metrics against gamed metrics. Internal and Emergency Medicine, 19(1), 39–47. https://doi.org/10.1007/s11739-023-03447-w

Isager, P., van ’t Veer, A., & Lakens, D. (2025). Replication value as a function of citation impact and sample size. Meta-Psychology, 9. https://doi.org/10.15626/MP.2022.3300

Loudon, K., Treweek, S., Sullivan, F., Donnan, P., Thorpe, K. E., & Zwarenstein, M. (2015). The PRECIS-2 tool: Designing trials that are fit for purpose. BMJ, 350, h2147. https://doi.org/10.1136/bmj.h2147

MDPI. (2024). Sustainability Passes Rigorous Scopus Reevaluation Process. Retrieved March 12, 2025, from https://www.mdpi.com/about/announcements/7352

Meehl, P. E. (1990). Appraising and Amending Theories: The Strategy of Lakatosian Defense and Two Principles that Warrant It. Psychological Inquiry, 1(2), 108–141. https://doi.org/10.1207/s15327965pli0102_1

Röseler, L., Kaiser, L., Doetsch, C., Klett, N., Seida, C., Schütz, A., Aczel, B., Adelina, N., Agostini, V., Alarie, S., Albayrak-Aydemir, N., Aldoh, A., Al-Hoorie, A. H., Azevedo, F., Baker, B. J., Barth, C. L., Beitner, J., Brick, C., Brohmer, H., . . . Zhang, Y. (2024). The Replication Database: Documenting the Replicability of Psychological Science. Journal of Open Psychology Data, 12(1), 8. https://doi.org/10.5334/jopd.101

Sigurdson, M. K., Sainani, K. L., & Ioannidis, J. P. (2023). Homeopathy can offer empirical insights on treatment effects in a null field. Journal of Clinical Epidemiology, 155, 64–72. https://doi.org/10.1016/j.jclinepi.2023.01.010

Syed, M. (2023). The Slow Progress towards Diversification in Psychological Research. https://doi.org/10.31234/osf.io/bqzs5

Vickers, P., Adamo, L., Alfano, M., Clark, C., Cresto, E., Cui, H., Dang, H., Dellsén, F., Dupin, N., Gradowski, L., Graf, S., Guevara, A., Hallap, M., Hamilton, J., Hardey, M., Helm, P., Landrum, A., Levy, N., Machery, E., . . . Mitchell Finnigan, S. (2024). Development of a novel methodology for ascertaining scientific opinion and extent of agreement (N. Mubarak, Ed.). PLOS ONE, 19(12), e0313541. https://doi.org/10.1371/journal.pone.0313541

Yarkoni, T. (2022). The generalizability crisis. Behavioral and Brain Sciences, 45, e1. https://doi.org/10.1017/S0140525X20001685

Published: 2025-10-29

Section: Special Topic