Unfortunately, Journals in Industrial, Work, and Organizational Psychology Still Fail to Support Open Science Practices

Currently, journals in Industrial, Work, and Organizational (IWO) Psychology collectively do too little to support Open Science Practices. To address this problematic state of affairs, we first point out numerous problems that characterize the IWO Psychology literature. We then describe seven frequently voiced arguments, all of which lead to the conclusion that the time is not ripe for IWO Psychology to broadly adopt Open Science Practices. To counter this narrative and promote the necessary change, we reply to these arguments and explain how Open Science Practices can contribute to a better future for IWO Psychology with more reproducible, replicable, and reliable findings.

It is unfortunate how slowly positive change is coming to the Industrial, Work, and Organizational Psychology (IWO Psychology) and the broader Management literature.[1] The field is riddled with problems, such as (i) low statistical power, (ii) non-transparent research practices and a lack of data-sharing, (iii) a high prevalence of Questionable Research Practices (QRPs; e.g., hypothesizing after the results are known [HARKing] or the non-disclosure of unsupported hypotheses), (iv) many false positive findings, (v) publication bias and a substantial file drawer problem (i.e., findings that remain unpublished because they are not statistically significant), (vi) a bias towards novelty at the expense of replication studies and cumulative science, and (vii) the low replicability of its findings. Importantly, a promising cure for many of these problems has long been available: Open Science Practices (OSPs; see Table 1 for an overview of OSPs and their effectiveness).[2] However, as our own (Torka et al., 2023) and other research (Tipu and Ryan, 2021) shows, most IWO Psychology and Management journals do little to support researchers' use of OSPs. For instance, our analysis of the policies of IWO Psychology and Management journals showed that only five of 202 analysed journals (2.5%) offered registered reports as a publication option, and only one journal (0.5%) provided Open Science Badges (Torka et al., 2023). If anything, the journals seem to endorse "business as usual", which prevents overdue improvements to the state of the literature.
In the following, we illustrate that the listed problems do in fact exist in the IWO Psychology/Management field specifically, and that there are hardly any excuses for journals not to take action. We do so by presenting typical arguments that we observed in our own studies with scientists (a survey of scientists in IWO Psychology; Hüffmeier et al., n.d.) and journal editors (a survey of editors of IWO Psychology journals; Torka et al., 2023) and (over-)heard in informal conversations with colleagues. We then reply to these arguments (see Table 2 for an overview of the seven arguments and our refutations).
The first argument: OSPs are for scientific fields that demonstrably have documented problems with the replicability of their findings, like Social Psychology. It is of course true that replicability (or rather the lack thereof) is better documented in other fields, especially Social Psychology.[3] However, the replicability of reported results is low across many research domains. These domains include, but are not limited to, Management (Bergh et al., 2017) and neighbouring disciplines such as Marketing (Simmons and Nelson, 2019) and Economics (Camerer et al., 2016). There is little reason to assume that the situation is fundamentally different in IWO Psychology research (see also Goldfarb and King, 2016), because the incentives and publishing practices in all these fields are highly comparable and, thus, equally problematic. Finally, the methodological problems of IWO Psychology and Management are not restricted to replicability (see also the next argument).

[1] Because much IWO Psychology research is published in Management journals (e.g., Journal of Organizational Behavior or Journal of Management), the two fields cannot really be separated.

[2] Technically, some of these measures, such as journals' support of replications, do not necessarily make science more open, transparent, or accessible, although they do improve science. Other authors therefore speak of "open science and reform practices" rather than of OSPs (see Tenney et al., 2021). However, to keep with established conventions, we use the term "OSPs" in this manuscript.

[3] Replicability means that findings from new (replication) studies, which converge with those of the original studies, "can be obtained with other random samples drawn from a multidimensional space that captures the most important facets of the research design" (Asendorpf et al., 2013, p. 109; see, for instance, Open Science Collaboration, 2015).
The second argument: Show us the evidence that our field does in fact have severe methodological problems. Maybe then we will be willing to start supporting OSPs. We will use our above list to substantiate the prevailing methodological problems. First, low statistical power is very common in IWO Psychology and Management studies. One recent study found that only 37% of the considered studies had a power of at least .80 (Paterson et al., 2016; see also Mone et al., 1996). Second, at least in IWO Psychology and strategic management research, research practices are so non-transparent that it is often impossible to reproduce reported findings even when the data are available (see Bergh et al., 2017; for an overview, see Artner et al., 2021). Often, however, such efforts already fail one step earlier because researchers are unwilling to share their data in the first place (e.g., Tenopir et al., n.d.; Wicherts et al., 2006). Third, there is converging evidence across many studies that Questionable Research Practices (QRPs) are widespread in the field (e.g., Banks et al., 2016; O'Boyle Jr et al., 2017). The problem is even more prevalent for articles appearing in prestigious journals such as Organizational Behavior and Human Decision Processes or the Academy of Management Journal (Kepes et al., 2022).
Fourth, the risk that a statistically significant result is in fact a false positive (i.e., a Type I error, the rejection of a true null hypothesis) increases when statistical power is low, as is prevalent in our field (see above). Another perspective on the same issue is the rate of supported hypotheses in a field, which has been found to be especially high for the overarching field of Economics and Business (i.e., no further differentiation was made within this field; Fanelli, 2012), a clearly worrying finding for the state of the literature. Fifth, many pertinent journals do not publish statistically nonsignificant results (Tenney et al., 2021), deeming them either irrelevant or unworthy of publication. Such negative results therefore typically remain in researchers' file drawers (i.e., publication bias; Harrison et al., 2017; O'Boyle Jr et al., 2014). Sixth, nearly all journals in the field stress that new manuscripts must contribute theoretical and empirical extensions to the current knowledge (e.g., Group and Organization Management seeks "[...] the work of scholars and professionals who extend management and organization theory [...]. Innovation, conceptual sophistication, methodological rigor, and cutting-edge scholarship are the driving principles"). This coincides with the underrepresentation of replication studies in the field: In their quantitative analysis, Ryan and Tipu (2022) estimate that less than 1.5% of published research in the business and management literature consists of replication studies. The one-sided quest for novelty, together with the prevailing disinterest in replication studies that is well documented for most journals (Tipu and Ryan, 2021; see also Evanschitzky et al., 2007; Tenney et al., 2021), limits our collective ability to establish a cumulative knowledge base and to "differentiate 'truth from nonsense'" (Kidwell et al., 2014, p. 304).
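The .80 power benchmark mentioned above can be made concrete with a short calculation. The following sketch is our own illustration, not taken from the cited studies; the function name and the "medium" effect size (Cohen's d = 0.5) are assumptions for the example. It approximates the power of a two-sided, two-sample t-test via the standard normal approximation:

```python
from statistics import NormalDist

def two_sample_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample t-test using the
    normal approximation: power = Phi(d * sqrt(n/2) - z_crit)."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)        # two-sided critical value
    noncentrality = d * (n_per_group / 2) ** 0.5
    return z.cdf(noncentrality - z_crit)

# Power of a typical small study (n = 30 per group) for a medium effect:
print(round(two_sample_power(0.5, 30), 2))   # -> 0.49, far below .80

# Smallest per-group n reaching the conventional .80 benchmark:
n = 2
while two_sample_power(0.5, n) < 0.80:
    n += 1
print(n)                                     # -> 63 per group
```

The exact t-distribution-based calculation gives a slightly larger required sample (about 64 per group); dedicated tools such as G*Power or statsmodels' TTestIndPower provide it.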
The third argument: We would like to support OSPs, but they are made exclusively for experimental (laboratory) research. There are no suitable templates for other approaches (correlative [field] research, secondary data analyses, qualitative studies, etc.). This is not true, and it has not been for a while. While the first OSPs and preregistration templates were indeed often developed with a focus on experimental (laboratory) research (e.g., Van't Veer and Giner-Sorolla, 2021), many further developments followed. There are now templates for preregistering analyses of pre-existing data (Mertens and Krypotos, 2019), systematic reviews (Van den Akker et al., 2020), meta-analyses (Moreau and Gamble, 2022), and qualitative studies (Haven et al., 2020; Kern and Gleditsch, 2017). Moreover, extant templates originally developed for experimental research can be adapted to all kinds of research with relative ease. We argue that a preregistration that does not fit the template perfectly is better than no preregistration at all. While a preregistration should always contain certain information (e.g., how the sample size was determined and what measures will be used), every effort to limit researcher degrees of freedom (and thereby opportunities to engage in QRPs) via preregistration is a step in the right direction.
Based on our own experience, we can recommend the template from the aspredicted.org website, which is also offered via the Open Science Framework (OSF; http://osf.io). On the OSF, the template can be used without length restrictions, whereas the aspredicted.org website imposes a word limit. The template is simple, short, and easily adapted to a variety of study types. In different projects of our research group, we have used it for experimental studies, correlational studies, analyses of pre-existing data, meta-analyses, and qualitative studies. Thus, although its original focus may have been experimental research, it is clearly not restricted to such studies. That said, templates designed for specific study types are of course less generic and facilitate the declaration of necessary study-specific details (e.g., study eligibility criteria or the literature search strategy for meta-analyses; see Moreau and Gamble, 2022).
The fourth argument: If journals implement OSPs, it raises the bar and makes publishing more difficult. This especially applies to certain kinds of research, for instance research on minorities or on hard-to-reach or small populations.[4] Admittedly, such concerns about gatekeeping can be justified because the requirements for publication would increase. For instance, when preregistering a study, researchers are asked to provide an a priori justification of their sample size (see, for instance, the templates on the aspredicted.org website or by Van't Veer and Giner-Sorolla, 2021). This often means collecting (much) larger samples than when conducting a study without a sample size justification (Mone et al., 1996; Paterson et al., 2016). However, providing a sample size justification does not always mean that collecting a large sample is necessary (and often it is not done; see Bakker et al., 2020). Resource constraints and/or studying a hard-to-reach or small population are legitimate justifications for the realized sample size (Lakens, 2022; although collecting surprisingly large samples is possible more often than researchers might think at first, Vazire, 2015). While there are thus often good reasons to conduct and publish research with rather small samples, scientists should then actively acknowledge the potential, goals, and limits of their statistical analyses.
Moreover, it is debatable how problematic a higher bar for publishing would actually be. In fact, there has been at least some agreement for some time now (e.g., Nelson et al., 2012; Vazire, 2018) that individual researchers should publish fewer manuscripts of higher quality. To be able to make stronger scientific claims (Vazire, 2018), researchers should improve the methods they apply, including, but not limited to, the use of OSPs.
The fifth argument: It does not make much of a difference if journals actively support OSPs. Researchers do not want to use them. While it may be true that first initiatives to foster the use of OSPs in a field are not necessarily met with enthusiasm by most researchers, there is no reason to be pessimistic. As with any innovation, people adopt it at different speeds, and it takes a while for change to affect the habits of the majority. But researchers do willingly take up these new measures, especially if esteemed journals lead the way. The psychological flagship journal Psychological Science, for instance, has been an "early adopter" of OSPs since 2014 and has actively supported (but not enforced) their use. The journal saw several positive results of its new policy rather quickly: Since the introduction of Open Science Badges (see Table 1), the data-sharing rate for published articles has increased. In fact, when researchers earned an Open Data Badge rather than merely indicating data availability, the data "were more likely to be actually available, correct, usable, and complete" (Kidwell et al., 2016). The higher rate of published replication studies in the journal since the introduction of the "Preregistered Direct Replication" article format indicates another positive change.
The sixth argument: Journals that actively support OSPs experience a competitive disadvantage because scientists consider them less attractive target journals. Journals like Leadership Quarterly or the Journal of Business and Psychology endorsed and supported the use of OSPs relatively early. If anything, these journals have benefitted from this decision: Although their strongly positive development in terms of journal metrics such as the journal impact factor is certainly driven by various factors and decisions, their articulated support for OSPs at the very least did not prevent this development (see also the development of the Journal of Applied Psychology after it recently introduced transparency-related changes). And of course, there are other journals that have not yet embraced OSPs and did not show a comparably positive development in the same time span.
The seventh argument: Journals that do at least encourage some OSPs do more than others, and they therefore do enough. While some journals do actively support the use of selected OSPs (e.g., the Journal of Personnel Psychology or Group and Organization Management offer hybrid registered report submissions; see Gardner, 2020), these efforts are not very visible. Researchers typically have to search actively for this option. If they do not know it is offered or do not know what to look for, there is a good chance that they will not even find the option on a journal website. Moreover, supporting only one OSP does not and cannot address all of the problems we listed above. Addressing them all would require actively supporting the full range of OSPs (see Table 1).

IWO Psychology and Management Journals should do more to Support OSPs
Positive change is not coming to our field automatically. Illustrating this notion, a current study (Tenney et al., 2021) found that less than one percent of articles in the field's flagship journals are preregistered, less than one percent of the publications are replication studies (for comparable results, see Ryan and Tipu, 2022) or report null results, and for less than three percent, the authors indicate that they openly shared their data or materials. These low rates most likely reflect the journals' policies concerning OSPs (cf. Torka et al., 2023). Concerning replications, for example, Tipu and Ryan (2021) showed that only 4.7% of more than 600 analysed business and management journals explicitly considered replication studies, while "238 (39.7%) were implicitly dismissive of replication studies, and the remaining 3 (0.5%) journals were explicitly disinterested in considering replication studies for publication" (Tipu and Ryan, 2021, p. 101; for comparable results concerning replications and further OSPs, see Torka et al., 2023).

Table 1
Open Science Practices: definitions and their demonstrated and assumed benefits for the field

Transparency requirements for data, methods and code, and materials or stimuli
As a minimum requirement, authors indicate whether they will make their data, the analytic methods used in the analysis (i.e., methods and code), and the research materials used to conduct the research (i.e., materials or stimuli) available to any researcher.
• Studies employing high statistical power, complete methodological transparency, and preregistration are highly replicable, and more replicable than past studies in prior multi-lab replication efforts (Protzko et al., 2020).

Preregistration
Preregistration is defined as "specifying your research plan in advance of your study and submitting it to a registry" (Center for Open Science, n.d.-a).
• Preregistered studies are more transparent concerning the reporting of their findings than non-preregistered studies and also report a lower rate of confirmed hypotheses (Toth et al., 2021).
• Researchers with experience using preregistration (n = 299) reported mostly positive experiences; they believed that preregistration had improved the quality of their research projects and "that the benefits outweigh the challenges" (Sarafoglou et al., 2021).

Registered Reports
Registered reports are "a publishing format used by over 250 journals that emphasizes the importance of the research question and the quality of methodology by conducting peer review prior to data collection" (Center for Open Science, n.d.-c).
• Researchers who were blinded to the article type rated registered reports more positively across a number of criteria, including the rigour of the employed methods and analyses as well as the overall manuscript quality and the importance of the produced discoveries (Soderberg et al., 2021).

Open Science Badges
Open Science Badges "are incentives for researchers to share data, materials, or to preregister" (Center for Open Science, n.d.-b). Specific badges indicate that an article was preregistered, or that its data or materials have been made publicly available.
• Open Science Badges increase data and materials sharing (Kidwell et al., 2016).
• Shared data are "more likely to be actually available, correct, usable, and complete" when researchers earn an Open Data Badge than when they only indicate data availability (Kidwell et al., 2016).

Replications
Replications are "a fundamental feature of the scientific process" (Zwaan et al., 2018). When conducting replications, researchers critically test the robustness and validity of scientific discoveries.
• Replications ensure the robustness of published research because "a finding needs to be repeatable to count as a scientific discovery" (Zwaan et al., 2018).

Table 2
The seven arguments treated in this commentary and our refutations

(1) Argument: OSPs are for scientific fields that demonstrably have documented problems with the replicability of their findings, like Social Psychology.
Refutation: The replicability of research findings is consistently rather low for many scientific fields, including many neighbouring fields of IWO Psychology (beyond Social Psychology). Given the similarities in publishing practices and incentives across disciplines, it is unlikely that the situation is different in IWO Psychology.

(2) Argument: Show us the evidence that our field does in fact have severe methodological problems. Maybe then we will be willing to start supporting OSPs.
Refutation: IWO Psychology has the following well-documented methodological problems: (i) low statistical power, (ii) non-transparent research practices and a lack of data-sharing, (iii) a high prevalence of Questionable Research Practices, (iv) many false positive findings, (v) publication bias and a substantial file drawer problem, and (vi) a bias towards novelty at the expense of replication studies and cumulative science.

(3) Argument: We would like to support OSPs, but they are made exclusively for experimental (laboratory) research. There are no suitable templates for other approaches (correlative [field] research, secondary data analyses, qualitative studies, etc.).
Refutation: Suitable templates have been specifically developed for analyses of pre-existing (correlational) data, qualitative studies, meta-analyses, etc. Moreover, existing templates can easily be adapted.

(4) Argument: If journals implement OSPs, it raises the bar and makes publishing more difficult. This especially applies to certain kinds of research, for instance research on minorities or on hard-to-reach or small populations.
Refutation: Implementing OSPs would probably often raise the bar for publishing research. It is wrong, however, that OSPs always require larger sample sizes. Moreover, raising the bar would probably be good for the scientific enterprise.

(5) Argument: It does not make much of a difference if journals actively support OSPs. Researchers do not want to use them.
Refutation: It may take time, but researchers do want to use OSPs if journals implement them and incentivize their use, as the case of Psychological Science shows.

(6) Argument: Journals that actively support OSPs experience a competitive disadvantage because scientists consider them less attractive target journals.
Refutation: The limited evidence we have does not support this argument. Early OSP adopters among the journals (Journal of Business and Psychology, Leadership Quarterly) fared well in comparison to non-adopters.

(7) Argument: Journals that do at least encourage some OSPs do more than others, and they therefore do enough.
Refutation: These selected efforts are typically not sufficiently visible. Moreover, supporting only some OSPs cannot fully and effectively address the field's problems.
With this contribution, we would like to invite and challenge IWO Psychology and Management journals to foster researchers' use of as many OSPs as possible. To be clear, we do not suggest forcing researchers to use certain OSPs. We rather ask the journals to contribute to the needed cultural change in the field's research practices by (i) encouraging and incentivizing methodological transparency and the use of preregistrations (e.g., by offering Open Science Badges), (ii) offering registered reports as an equitable publishing format, and (iii) explicitly inviting well-designed replications. These measures, while cheap and easy to implement, can increase researchers' perceptions of a journal as an attractive outlet for their high-quality research, increase the quality of research overall and the resulting trust in it, and change the field for the better by addressing the systemic roots of QRPs and the low replicability of findings.