Meta-Psychology https://open.lnu.se/index.php/metapsychology <p>Meta-Psychology publishes theoretical and empirical contributions that advance psychology as a science through critical discourse related to individual articles, research lines, research areas, or psychological science as a field.</p> Linnaeus University Press en-US Meta-Psychology 2003-2714 Responsible Research Assessment requires structural more than procedural reforms https://open.lnu.se/index.php/metapsychology/article/view/3734 <p>In their target articles, Schönbrodt et al. (2022) and Gärtner et al. (2022) propose new metrics and their practical implementation to improve responsible research assessment. Generally, I welcome the inclusion of open science and scientific rigor in the evaluation of job candidates. However, the proposed reform mainly focuses on the first stage of selecting candidates, who then continue to a second stage of in-depth evaluation of research quality. Yet this second selection stage, although underdeveloped in the proposal, is likely more critical for responsible research assessment and hiring decisions. I argue that an adequate assessment of research quality at this second stage requires that the hiring committee include specific knowledge of the subfield for which the candidate is to be hired. This is rarely achieved given the current structural organization of departments, especially in German-speaking countries, and potentially explains the reliance on suboptimal indicators such as the h-index and the Journal Impact Factor. Therefore, I argue that responsible research assessment requires structural reform to ensure that institutions have several researchers in permanent positions with specific knowledge of different subfields, so that hiring committees can assess research quality adequately and responsibly at all evaluation stages.</p> Gidon T. Frischkorn Copyright (c) 2024 Gidon T. Frischkorn https://creativecommons.org/licenses/by/4.0/ 2024-03-17 2024-03-17 8 10.15626/MP.2023.3734 Responsible assessment of what research? Beware of epistemic diversity! https://open.lnu.se/index.php/metapsychology/article/view/3797 <p>Schönbrodt et al. (2022) and Gärtner et al. (2022) aim to outline in the target articles why and how research assessment could be improved in psychological science in accordance with DORA, resulting in a focus on abandoning the impact factor as an indicator of research quality and aligning assessment with methodological rigor and open science practices. However, I argue that their attempt is guided by a rather narrow statistical and quantitative understanding of knowledge production in psychological science. Consequently, the authors neglect the epistemic diversity within psychological science, leading to the potential danger of committing epistemic injustice. Hence, the criteria they introduce for research assessment might be appropriate for some approaches to knowledge production; they could, however, neglect or systematically disadvantage others. Furthermore, I claim that the authors lack some epistemic (intellectual) humility about their proposal. Further information is required regarding when and for which approaches their proposal is appropriate and, perhaps even more importantly, when and where it is not.
Similarly, many of the reform movement’s proposed improvements, like the one introduced in the target articles, probably amount to little more than trial and error, because their epistemic usefulness and the underlying mechanisms and theories have not been investigated. Finally, I argue that greater awareness of epistemic diversity in psychological science, combined with more epistemic (intellectual) humility, could attenuate the danger of epistemic injustice.</p> Sven Ulpts Copyright (c) 2024 Sven Ulpts https://creativecommons.org/licenses/by/4.0/ 2024-03-17 2024-03-17 8 10.15626/MP.2023.3797 Responsible research assessment in the area of quantitative methods research: A comment on Gärtner et al. https://open.lnu.se/index.php/metapsychology/article/view/3796 <p>In this commentary, we discuss the criteria proposed in Gärtner et al. (2022) for hiring or promoting quantitative methods researchers. We argue that the criteria do not reflect aspects that are relevant to quantitative methods researchers and the typical publications they produce. We introduce a new set of criteria that can be used to evaluate the performance of quantitative methods researchers in a more valid fashion. We discuss the need to balance scientific expertise and open science commitment in such ranking schemes.</p> Holger Brandt Mirka Henninger Esther Ulitzsch Kristian Kleinke Thomas Schäfer Copyright (c) 2024 Holger Brandt, Mirka Henninger, Esther Ulitzsch, Kristian Kleinke, Thomas Schäfer https://creativecommons.org/licenses/by/4.0/ 2024-03-17 2024-03-17 8 10.15626/MP.2023.3796 Response to responsible research assessment I and II from the perspective of the DGPs working group on open science in clinical psychology https://open.lnu.se/index.php/metapsychology/article/view/3794 <p>We comment on the papers by Schönbrodt et al. (2022) and Gärtner et al. (2022) on responsible research assessment from the perspective of clinical psychology and psychotherapy research. Schönbrodt et al. (2022) propose four principles to guide hiring and promotion in psychology: (1) In addition to publications in scientific journals, data sets and the development of research software should be considered. (2) Quantitative metrics can be useful, but they should be valid and applied responsibly. (3) Methodological rigor, research impact, and work quantity should be considered as three separate dimensions for evaluating research contributions. (4) The quality of work should be prioritized over the number of citations or the quantity of research output. From the perspective of clinical psychology, we endorse the initiative to update current practice by establishing a matrix of comprehensive, transparent, and fair evaluation criteria. In the following, we will both comment on and complement these criteria from a clinical-psychological perspective.</p> Jakob Fink-Lamotte Kevin Hilbert Dorothée Bentz Simon Blackwell Jan R. Boehnke Juliane Burghardt Barbara Cludius Johannes C. Ehrenthal Moritz Elsaesser Anke Haberkamp Tanja Hechler Anja Kräplin Christian Paret Lars Schulze Sarah Wilker Helen Niemeyer Copyright (c) 2024 Jakob Fink-Lamotte, Kevin Hilbert, Dorothée Bentz, Simon Blackwell, Jan R. Boehnke, Juliane Burghardt, Barbara Cludius, Johannes C.
Ehrenthal, Moritz Elsaesser, Anke Haberkamp, Tanja Hechler, Anja Kräplin, Christian Paret, Lars Schulze, Sarah Wilker, Helen Niemeyer https://creativecommons.org/licenses/by/4.0/ 2024-03-17 2024-03-17 8 10.15626/MP.2023.3794 Comment on “Responsible Research Assessment: Implementing DORA for hiring and promotion in psychology” https://open.lnu.se/index.php/metapsychology/article/view/3779 <p>In the target papers, Schönbrodt et al. (2022) and Gärtner et al. (2022) proposed broadening the range of research contributions considered, namely (i) providing strong empirical evidence, (ii) building open databases, and (iii) building and maintaining software packages, with each dimension scored independently in a marking scheme. Using simulations, we show that the current proposal places substantial weight on software development, potentially at the expense of other academic activities; this weight should be made explicit to committees before they use the proposed marking scheme. Following Gärtner et al.’s (2022) recommendations, we promote the use of flexible weights that more closely match an institution’s specific needs through the weighting of the relevant dimensions. We propose a Shiny app that implements the marking scheme with adaptive weights, both to help hiring committees define and foresee the consequences of their choice of weights and to increase the transparency and understandability of the procedure.</p> Victor Auger Nele Claes Copyright (c) 2024 Victor Auger, Nele Claes https://creativecommons.org/licenses/by/4.0/ 2024-03-17 2024-03-17 8 10.15626/MP.2023.3779 Research assessment using a narrow definition of “research quality” is an act of gatekeeping: A comment on Gärtner et al. (2022) https://open.lnu.se/index.php/metapsychology/article/view/3764 <p>Gärtner et al. (2022) propose a system for quantitatively scoring the methodological rigour of papers during the hiring and promotion of psychology researchers, with the aim of advantaging researchers who conduct open, reproducible work. However, the quality criteria proposed for assessing methodological rigour are drawn from a narrow post-positivist paradigm of quantitative, confirmatory research conducted from an epistemology of scientific realism. This means that research conducted from a variety of other approaches, including constructivist, qualitative research, becomes structurally disadvantaged under the new system. The implications of this for particular fields, researcher demographics, and the future of the discipline of psychology are discussed.</p> Tom Hostler Copyright (c) 2024 Tom Hostler https://creativecommons.org/licenses/by/4.0/ 2024-03-17 2024-03-17 8 10.15626/MP.2023.3764 Indicators for teaching assessment https://open.lnu.se/index.php/metapsychology/article/view/3763 <p>This commentary on Schönbrodt et al. (2022) and Gärtner et al. (2022) aims to complement the ideas regarding an implementation of DORA for the domain of teaching.
As there is neither a comprehensive assessment system based on empirical data nor a competence model for teaching competencies available yet, we describe some pragmatic ideas for indicators of good teaching and formulate desiderata for future research programs and validation.</p> Miriam Hansen Julia Beitner Holger Horz Martin Schultze Copyright (c) 2024 Miriam Hansen, Julia Beitner, Holger Horz, Martin Schultze https://creativecommons.org/licenses/by/4.0/ 2024-03-17 2024-03-17 8 10.15626/MP.2023.3763 Valuing Preprints Must be Part of Responsible Research Assessment https://open.lnu.se/index.php/metapsychology/article/view/3758 <p>This comment addresses the papers by Schönbrodt et al. (2022) and Gärtner et al. (2022) proposing reforms to the research assessment process. Given the prominent role of preprints in contemporary scientific practice, they must be an accepted and central component of research assessment.</p> Moin Syed Copyright (c) 2024 Moin Syed https://creativecommons.org/licenses/by/4.0/ 2024-03-17 2024-03-17 8 10.15626/MP.2023.3758 Responsible Research Assessment Should Prioritize Theory Development and Testing Over Ticking Open Science Boxes https://open.lnu.se/index.php/metapsychology/article/view/3735 <p>We appreciate the initiative to seek ways to improve academic assessment by broadening the range of relevant research contributions and by considering a candidate’s scientific rigor. Evaluating a candidate’s ability to contribute to science is a complex process that cannot be captured through one metric alone. While the proposed changes have some advantages, such as an increased focus on quality over quantity, the proposal’s focus on adherence to open science practices is not sufficient, as it undervalues theory building and formal modelling: a narrow focus on open science conventions is neither a sufficient nor a valid indicator of a “good scientist” and may even encourage researchers to choose easy, pre-registerable studies rather than engage in time-intensive theory building. Further, if only a minimum standard of easily achievable open science goals is set in a first step, most applicants will soon pass this threshold. At this point, one may ask whether the additional benefit of such a low bar outweighs the potential costs of such an endeavour. We conclude that a reformed assessment system should place at least as much emphasis on theory building as on adherence to open science principles and should not completely disregard traditional performance metrics.</p> Hannah Dames Philipp Musfeld Vencislav Popov Klaus Oberauer Gidon T. Frischkorn Copyright (c) 2024 Hannah Dames, Philipp Musfeld, Vencislav Popov, Klaus Oberauer, Gidon T. Frischkorn https://creativecommons.org/licenses/by/4.0/ 2024-03-17 2024-03-17 8 10.15626/MP.2023.3735 Commentary: 'Responsible Research Assessment II: A specific proposal for hiring and promotion in psychology' https://open.lnu.se/index.php/metapsychology/article/view/3715 <p>Based on four principles of a more responsible research assessment in academic hiring and promotion processes, Gärtner et al. (2022) suggested an evaluation scheme for published manuscripts, reusable data sets, and research software. This commentary responds to the proposed indicators for the evaluation of research software contributions in academic hiring and promotion processes. Acknowledging the significance of research software as a critical component of modern science, we propose that an evaluation scheme must emphasize the two major dimensions of rigor and impact.
Generally, we believe that research software should be recognized as valuable scientific output in academic hiring and promotion, with the hope that this incentivizes the development of more open and better research software.</p> Andreas Markus Brandmaier Maximilian Ernst Aaron Peikert Copyright (c) 2024 Andreas M. Brandmaier, Maximilian Ernst, Aaron Peikert https://creativecommons.org/licenses/by/4.0/ 2024-03-17 2024-03-17 8 10.15626/MP.2023.3715 Responsible Research is also concerned with generalizability: Recognizing efforts to reflect upon and increase generalizability in hiring and promotion decisions in psychology https://open.lnu.se/index.php/metapsychology/article/view/3695 <p>We concur with the authors of the two target articles that Open Science practices can help combat the ongoing reproducibility and replicability crisis in psychological science and should hence be acknowledged as responsible research practices in hiring and promotion decisions. However, we emphasize that another crisis is equally threatening the credibility of psychological science in Germany: the sampling or generalizability crisis. We suggest that scientists’ efforts to contextualize their research and to reflect upon and increase its generalizability should be incentivized as responsible research practices in hiring and promotion decisions. To that end, we present concrete suggestions for how efforts to combat this additional generalizability crisis could be operationalized within Gärtner et al.’s (2022) evaluation scheme. Tackling the replicability and generalizability crises in tandem will advance the credibility and quality of psychological science and teaching in Germany.</p> Roman Stengelin Manuel Bohn Alejandro Sánchez-Amaro Daniel Haun Maleen Thiele Moritz Daum Elisa Felsche Frankie Fong Anja Gampe Marta Giner Torréns Sebastian Grueneisen David Hardecker Lisa Horn Karri Neldner Sarah Pope-Caldwell Nils Schuhmacher Copyright (c) 2024 Roman Stengelin, Manuel Bohn, Alejandro Sánchez-Amaro, Daniel Haun, Maleen Thiele, Moritz Daum, Elisa Felsche, Frankie Fong, Anja Gampe, Marta Giner Torréns, Sebastian Grueneisen, David Hardecker, Lisa Horn, Karri Neldner, Sarah Pope-Caldwell, Nils Schuhmacher https://creativecommons.org/licenses/by/4.0/ 2024-03-17 2024-03-17 8 10.15626/MP.2023.3695 Comment on: Responsible Research Assessment I and Responsible Research Assessment II https://open.lnu.se/index.php/metapsychology/article/view/3685 <p>A long-term personnel policy in filling professorships, aimed at remedying deficits in psychological research, should be able to significantly improve the scientific quality of psychology: “The main reason is that the hiring and promotion of such researchers is most likely to contribute to the emergence of a credible scientific knowledge base” (Gärtner et al., in press).</p> Erich H.
Witte https://creativecommons.org/licenses/by/4.0/ 2024-03-17 2024-03-17 8 10.15626/MP.2023.3685 Interdisciplinary Value https://open.lnu.se/index.php/metapsychology/article/view/3679 <p>This is a commentary on interdisciplinary value in the special issue "Responsible Research Assessment: Implementing DORA for hiring and promotion in psychology."</p> Veli-Matti Karhulahti Copyright (c) 2024 Veli-Matti Karhulahti https://creativecommons.org/licenses/by/4.0/ 2024-03-17 2024-03-17 8 10.15626/MP.2023.3679 Commentary: “Responsible Research Assessment: Implementing DORA for hiring and promotion in psychology” https://open.lnu.se/index.php/metapsychology/article/view/3655 <p>A commentary on: Gärtner et al., 2022; Schönbrodt et al., 2022.</p> Alejandro Sandoval-Lentisco Copyright (c) 2024 Alejandro Sandoval-Lentisco https://creativecommons.org/licenses/by/4.0/ 2024-03-17 2024-03-17 8 10.15626/MP.2022.3655 A broader view of research contributions: Necessary adjustments to DORA for hiring and promotion in psychology. https://open.lnu.se/index.php/metapsychology/article/view/3652 <p>Recently, Schönbrodt et al. (2022) released recommendations for improving how psychologists could be evaluated for recruitment, retention, and promotion. Specifically, they provided four principles of responsible research assessment in response to current methods that rely heavily on bibliometric indices of journal quality and research impact. They build their case for these principles on the San Francisco Declaration on Research Assessment (DORA) perspective that decries reliance on invalid quantitative metrics of research quality and productivity in hiring and promotion. The paper makes clear the tension panels have to address in evaluating applications: with too little time for an in-depth evaluation of an individual’s career and contribution, they fall back on easy-to-understand, but perhaps invalid, metrics. This dilemma requires an alternative mechanism rather than simply a rejection of metrics. To that end, the authors are to be congratulated for operationalising what those alternatives might look like. Nonetheless, the details embedded in the principles seem overly narrow and restrictive.</p> Gavin Brown Copyright (c) 2024 Gavin Brown https://creativecommons.org/licenses/by/4.0/ 2024-03-17 2024-03-17 8 10.15626/MP.2022.3652 The Untrustworthy Evidence in Dishonesty Research https://open.lnu.se/index.php/metapsychology/article/view/3987 <p>Replicable and reliable research is essential for cumulative science and its applications in practice. This article examines the quality of research on dishonesty using a sample of 286 hand-coded test statistics from 99 articles. Z-curve analysis indicates a low expected replication rate, a high proportion of missing studies, and an inflated false discovery risk. The test of insufficient variance (TIVA) finds that 11 of 61 articles with multiple test statistics contain results that are “too good to be true”. Sensitivity analysis confirms the robustness of the findings.
In conclusion, caution is advised when relying on or applying the existing literature on dishonesty.</p> František Bartoš Copyright (c) 2024 František Bartoš https://creativecommons.org/licenses/by/4.0/ 2024-04-19 2024-04-19 8 10.15626/MP.2023.3987 Beyond a Dream: The Practical Foundations of Disconnected Psychology https://open.lnu.se/index.php/metapsychology/article/view/2740 <p><em>Disconnected</em> psychology is a form of psychological science in which researchers ground their work on the main principles of psychological method but remain detached from the “field” of other psychologists that constitutes <em>connected</em> psychology. It has previously been proposed that combining the two forms of psychology would result in the most significant advancement of psychological knowledge (Krpan, 2020). However, disconnected psychology may seem to be an “abstract utopia”, given that it has not previously been detailed how to put it into practice. The present article therefore sets out the practical foundations of disconnected psychology. In this regard, I first describe a hypothetical disconnected psychologist and discuss the relevant methodological and epistemological implications. I then propose how this variant of psychology could be integrated with the current academic system (i.e., with connected psychology). Overall, the present article transforms disconnected psychology from a hazy dream into substance that could eventually maximize psychological knowledge, even if implementing it would require a radical transformation of psychological science.</p> Dario Krpan Copyright (c) 2024 Dario Krpan https://creativecommons.org/licenses/by/4.0/ 2024-04-19 2024-04-19 8 10.15626/MP.2020.2740 Knowing What We're Talking About https://open.lnu.se/index.php/metapsychology/article/view/3638 <p>A theory crisis and a measurement crisis have been argued to be root causes of psychology's replication crisis. In both, the lack of conceptual clarification and the jingle-jangle jungle at the construct definition level as well as at the measurement level play a central role. We introduce a conceptual tool that can address these issues: Decentralized Construct Taxonomy specifications (DCTs). These consist of comprehensive specifications of construct definitions, corresponding instructions for quantitative and qualitative research, and unique identifiers. We discuss how researchers can develop DCT specifications and how DCT specifications can be used in research, practice, and theory development. Finally, we discuss the implications and the potential for future developments to answer the call for conceptual clarification and epistemic iteration. This contributes to the move towards a psychological science that progresses in a cumulative fashion through discussion and comparison.</p> Gjalt-Jorn Peters Rik Crutzen Copyright (c) 2024 Gjalt-Jorn Peters, Rik Crutzen https://creativecommons.org/licenses/by/4.0/ 2024-04-19 2024-04-19 8 10.15626/MP.2022.3638 Associations between Goal Orientation and Self-Regulated Learning Strategies are Stable across Course Types, Underrepresented Minority Status, and Gender https://open.lnu.se/index.php/metapsychology/article/view/2918 <p>In this pre-registered replication of findings from Muis and Franco [2009; Contemporary Educational Psychology, 34(4), 306-318], college students (N = 978) from across the United States and Canada were surveyed regarding their goal orientations and learning strategies.
A structural equation modeling approach was used to assess the associations between goal orientations and learning strategies. Six of the eight significant associations (75%) found by Muis and Franco replicated successfully in the current study. Mastery approach goals positively predicted endorsement of all learning strategies (Rehearsal, Critical Thinking, Metacognitive Self-Regulation, and Elaboration). Performance avoidance goals negatively predicted critical thinking, while positively predicting metacognitive self-regulation and rehearsal. Evidence for moderation by assignment type was found. No evidence of moderation of these associations by gender, underrepresented minority status, or course type (STEM, Humanities, or Social Sciences) was found. The reliability of common scales used in educational research and issues concerning the replication of studies using structural equation modeling are discussed.</p> Brendan Schuetze Veronica Yan Copyright (c) 2024 Brendan Schuetze, Veronica Yan https://creativecommons.org/licenses/by/4.0/ 2024-04-19 2024-04-19 8 10.15626/MP.2021.2918
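The abstract of "The Untrustworthy Evidence in Dishonesty Research" above reports results from the test of insufficient variance (TIVA). As a rough illustration of what such a test computes, the minimal sketch below converts a set of two-sided p-values to z-scores and uses a left-tailed chi-square test to check whether their variance falls short of the value of roughly 1 expected from sampling error alone. This is a sketch under those assumptions, not the article's actual analysis code; the function name `tiva` and the example p-values are hypothetical.

```python
# Minimal, illustrative sketch of a test of insufficient variance (TIVA).
# Hypothetical helper; not the analysis code used in the article above.
import numpy as np
from scipy import stats


def tiva(p_values):
    """Test whether the z-scores of a set of two-sided p-values vary less
    than expected from sampling error alone (variance of about 1).

    Returns the observed variance of the z-scores and the left-tailed
    chi-square p-value; a small p-value suggests insufficient variance,
    i.e., results that look 'too good to be true'.
    """
    p = np.asarray(p_values, dtype=float)
    z = stats.norm.isf(p / 2)        # convert two-sided p-values to |z|
    k = len(z)
    var_z = np.var(z, ddof=1)        # sample variance of the z-scores
    chi2_stat = (k - 1) * var_z      # compare against chi-square(k - 1)
    p_tiva = stats.chi2.cdf(chi2_stat, df=k - 1)
    return var_z, p_tiva


# Hypothetical example: five significant p-values clustered just below .05
var_z, p_tiva = tiva([0.049, 0.041, 0.032, 0.044, 0.038])
print(f"Variance of z-scores: {var_z:.3f}, TIVA p-value: {p_tiva:.5f}")
```

In this hypothetical example the z-scores cluster tightly just above the significance threshold, so their variance is far below 1 and the left-tailed chi-square p-value is very small, which is the pattern the test flags as insufficient variance.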