https://open.lnu.se/index.php/metapsychology/issue/feedMeta-Psychology2024-09-18T07:37:02+02:00Rickard Carlssonrickard.carlsson@lnu.seOpen Journal Systems<p>Meta-Psychology publishes theoretical and empirical contributions that advance psychology as a science through critical discourse related to individual articles, research lines, research areas, or psychological science as a field.</p>https://open.lnu.se/index.php/metapsychology/article/view/3695Responsible Research is also concerned with generalizability: Recognizing efforts to reflect upon and increase generalizability in hiring and promotion decisions in psychology2023-02-11T11:13:59+01:00Roman Stengelinroman_stengelin@eva.mpg.deManuel Bohnmanuel_bohn@eva.mpg.deAlejandro Sánchez-Amaroalex_sanchez@eva.mpg.deDaniel Haunhaun@eva.mpg.deMaleen Thielemaleen_thiele@eva.mpg.deMoritz Daummoritz.daum@uzh.chElisa Felscheelisa_felsche@eva.mpg.deFrankie Fongfrankie_fong@eva.mpg.deAnja Gampeanja.gampe@uni-due.deMarta Giner Torrénsmarta.giner@uni-muenster.deSebastian Grueneisensebastian.grueneisen@uni-leipzig.deDavid Hardeckerdavid_hardecker@eva.mpg.deLisa Hornlisa.horn@univie.ac.atKarri Neldnerkarri-neldner@eva.mpg.deSarah Pope-Caldwellsarah_caldwell@eva.mpg.deNils Schuhmachernschu_04@uni-muenster.de<p>We concur with the authors of the two target articles that Open Science practices can help combat the ongoing reproducibility and replicability crisis in psychological science and should hence be acknowledged as responsible research practices in hiring and promotion decisions. However, we emphasize that another crisis is equally threatening the credibility of psychological science in Germany: The sampling or generalizability crisis. We suggest that scientists’ efforts to contextualize their research, reflect upon, and increase its generalizability should be incentivized as responsible research practices in hiring and promotion decisions. 
To that end, we present concrete suggestions for how efforts to combat the additional generalizability crisis could be operationalized within Gärtner et al.'s (2022) evaluation scheme. Tackling the replicability and the generalizability crises in tandem will advance the credibility and quality of psychological science and teaching in Germany.</p>2024-03-17T00:00:00+01:00Copyright (c) 2024 Roman Stengelin, Manuel Bohn, Alejandro Sánchez-Amaro, Daniel Haun, Maleen Thiele, Moritz Daum, Elisa Felsche, Frankie Fong, Anja Gampe, Marta Giner Torréns, Sebastian Grueneisen, David Hardecker, Lisa Horn, Karri Neldner, Sarah Pope-Caldwell, Nils Schuhmacherhttps://open.lnu.se/index.php/metapsychology/article/view/3797Responsible assessment of what research? Beware of epistemic diversity!2023-03-30T16:48:11+02:00Sven Ulptssu@ps.au.dk<p>In the target articles, Schönbrodt et al. (2022) and Gärtner et al. (2022) aim to outline why and how research assessment could be improved in psychological science in accordance with DORA, resulting in a focus on abandoning the impact factor as an indicator of research quality and aligning assessment with methodological rigor and open science practices. However, I argue that their attempt is guided by a rather narrow statistical and quantitative understanding of knowledge production in psychological science. Consequently, the authors neglect the epistemic diversity within psychological science, leading to the potential danger of committing epistemic injustice. Hence, the criteria they introduce for research assessment might be appropriate for some approaches to knowledge production; they could, however, neglect or systematically disadvantage others. Furthermore, I claim that the authors lack some epistemic (intellectual) humility about their proposal. Further information is required regarding when and for which approaches their proposal is appropriate and, perhaps even more importantly, when and where it is not.
Similarly, many of the proposed improvements of the reform movement, such as those introduced in the target articles, are probably nothing more than trial and error, given the lack of investigation of their epistemic usefulness and of the underlying mechanisms and theories. Finally, I argue that with more awareness of epistemic diversity in psychological science, in combination with more epistemic (intellectual) humility, the danger of epistemic injustice could be attenuated.</p>2024-03-17T00:00:00+01:00Copyright (c) 2024 Sven Ulptshttps://open.lnu.se/index.php/metapsychology/article/view/3796Responsible research assessment in the area of quantitative methods research: A comment on Gärtner et al.2023-03-30T16:07:21+02:00Holger Brandtholger.brandt@uni-tuebingen.deMirka Henningermirka.henninger@uzh.chEsther Ulitzschulitzsch@ipn.uni-kiel.deKristian KleinkeKristian.Kleinke@uni-siegen.deThomas Schäferthomas.schaefer@medicalschool-berlin.de<p>In this commentary, we discuss the criteria proposed in Gärtner et al. (2022) for hiring or promoting quantitative methods researchers. We argue that the criteria do not reflect aspects that are relevant to quantitative methods researchers and the typical publications they produce. We introduce a new set of criteria that can be used to evaluate the performance of quantitative methods researchers in a more valid fashion.
We discuss the necessity to balance scientific expertise and open science commitment in such ranking schemes.</p>2024-03-17T00:00:00+01:00Copyright (c) 2024 Holger Brandt, Mirka Henninger, Esther Ulitzsch, Kristian Kleinke, Thomas Schäferhttps://open.lnu.se/index.php/metapsychology/article/view/3794Response to responsible research assessment I and II from the perspective of the DGPs working group on open science in clinical psychology2023-03-30T12:22:03+02:00Jakob Fink-Lamottejakob.fink-lamotte@uni-potsdam.deKevin Hilbertkevin.hilbert@hu-berlin.deDorothée Bentzdorothee.bentz@unibas.chSimon Blackwellsimon.blackwell@ruhr-uni-bochum.deJan R. Boehnkej.r.boehnke@dundee.ac.ukJuliane BurghardtJuliane.Burghardt@kl.ac.atBarbara Cludiusbarbara.cludius@psy.lmu.deJohannes C. Ehrenthaljohannes.ehrenthal@uni-koeln.deMoritz Elsaessermoritz.elsaesser@uniklinik-freiburg.deAnke HaberkampAnke.Haberkamp@staff.uni-marburg.deTanja Hechlertanja.hechler@uni-muenster.deAnja Kräplinanja.kraeplin@tu-dresden.deChristian Paretchristian.paret@zi-mannheim.deLars Schulzelars.schulze@fu-berlin.deSarah Wilkersarah.wilker@uni-bielefeld.deHelen Niemeyerhelen.niemeyer@fu-berlin.de<p>We comment on the papers by Schönbrodt et al. (2022) and Gärtner et al. (2022) on responsible research assessment from the perspective of clinical psychology and psychotherapy research. Schönbrodt et al. (2022) propose four principles to guide hiring and promotion in psychology: (1) In addition to publications in scientific journals, data sets and the development of research software should be considered. (2) Quantitative metrics can be useful, but they should be valid and applied responsibly. (3) Methodological rigor, research impact, and work quantity should be considered as three separate dimensions for evaluating research contributions. (4) The quality of work should be prioritized over the number of citations or the quantity of research output. 
From the perspective of clinical psychology, we endorse the initiative to update current practice by establishing a matrix of comprehensive, transparent, and fair evaluation criteria. In the following, we will both comment on and complement these criteria from a clinical-psychological perspective.</p>2024-03-17T00:00:00+01:00Copyright (c) 2024 Jakob Fink-Lamotte, Kevin Hilbert, Dorothée Bentz, Simon Blackwell, Jan R. Boehnke, Juliane Burghardt, Barbara Cludius, Johannes C. Ehrenthal, Moritz Elsaesser, Anke Haberkamp, Tanja Hechler, Anja Kräplin, Christian Paret, Lars Schulze, Sarah Wilker, Helen Niemeyerhttps://open.lnu.se/index.php/metapsychology/article/view/3779Comment on "Responsible Research Assessment: Implementing DORA for hiring and promotion in psychology”2023-03-29T17:37:42+02:00Victor Augervictor.auger.ac@gmail.comNele Claesnele.claes@uca.fr<p>In the target papers, Schönbrodt et al. (2022) and Gärtner et al. (2022) proposed to broaden the range of research contributions considered, namely (i) bringing strong empirical evidence, (ii) building open databases, and (iii) building and maintaining software packages, with each dimension scored independently in the marking scheme. Using simulations, we show that the current proposal places a significant weight on software development, potentially at the expense of other academic activities – a weight that should be made explicit to committees before they make use of the proposed marking scheme. Following Gärtner et al.'s (2022) recommendations, we promote the use of flexible weights that more closely match an institution’s specific needs through the weighting of the relevant dimensions.
We propose a Shiny app that implements the marking scheme with adaptive weights, both to help hiring committees define weights and foresee the consequences of their choices, and to increase the transparency and understandability of the procedure.</p>2024-03-17T00:00:00+01:00Copyright (c) 2024 Victor Auger, Nele Claeshttps://open.lnu.se/index.php/metapsychology/article/view/3764Research assessment using a narrow definition of “research quality” is an act of gatekeeping: A comment on Gärtner et al. (2022)2023-03-28T13:17:25+02:00Tom Hostlert.hostler@mmu.ac.uk<p>Gärtner et al. (2022) propose a system for quantitatively scoring the methodological rigour of papers during the hiring and promotion of psychology researchers, with the aim of advantaging researchers who conduct open, reproducible work. However, the quality criteria proposed for assessing methodological rigour are drawn from a narrow post-positivist paradigm of quantitative, confirmatory research conducted from an epistemology of scientific realism. This means that research conducted from a variety of other approaches, including constructivist, qualitative research, becomes structurally disadvantaged under the new system. The implications of this for particular fields, for researcher demographics, and for the future of the discipline of psychology are discussed.</p>2024-03-17T00:00:00+01:00Copyright (c) 2024 Tom Hostlerhttps://open.lnu.se/index.php/metapsychology/article/view/3763Indicators for teaching assessment2023-03-28T09:49:26+02:00Miriam Hansenhansen@paed.psych.uni-frankfurt.deJulia Beitnerbeitner@psych.uni-frankfurt.deHolger Horzhorz@psych.uni-frankfurt.deMartin Schultzeschultze@psych.uni-frankfurt.de<p>This commentary on Schönbrodt et al. (2022) and Gärtner et al. (2022) aims to complement the ideas regarding an implementation of DORA for the domain of teaching.
As neither a comprehensive assessment system based on empirical data nor a competence model for teaching competencies is available yet, we describe some pragmatic ideas for indicators of good teaching and formulate desiderata for future research programs and validation.</p>2024-03-17T00:00:00+01:00Copyright (c) 2024 Miriam Hansen, Julia Beitner, Holger Horz, Martin Schultzehttps://open.lnu.se/index.php/metapsychology/article/view/3758Valuing Preprints Must be Part of Responsible Research Assessment2023-03-24T21:52:15+01:00Moin Syedmoin@umn.edu<p>Comments on papers by Schönbrodt et al. (2022) and Gärtner et al. (2022) proposing reforms to the research assessment process. Given the prominent role of preprints in contemporary scientific practice, they must be an accepted and central component of research assessment.</p>2024-03-17T00:00:00+01:00Copyright (c) 2024 Moin Syedhttps://open.lnu.se/index.php/metapsychology/article/view/3735Responsible Research Assessment Should Prioritize Theory Development and Testing Over Ticking Open Science Boxes2023-03-14T13:53:30+01:00Hannah Dameshannah.dames@outlook.comPhilipp Musfeldp.musfeld@psychologie.uzh.chVencislav Popovv.popov@psychologie.uzh.chKlaus Oberauerk.oberauer@psychologie.uzh.chGidon T. Frischkorngidon.frischkorn@psychologie.uzh.ch<p>We appreciate the initiative to seek ways to improve academic assessment by broadening the range of relevant research contributions and by considering a candidate’s scientific rigor. Evaluating a candidate's ability to contribute to science is a complex process that cannot be captured through one metric alone.
While the proposed changes have some advantages, such as an increased focus on quality over quantity, the proposal's focus on adherence to open science practices is not sufficient, as it undervalues theory building and formal modelling: A narrow focus on open science conventions is neither a sufficient nor a valid indicator of a “good scientist” and may even encourage researchers to choose easy, pre-registerable studies rather than engage in time-intensive theory building. Further, when in a first step only a minimum standard for following easily achievable open science goals is set, most applicants will soon pass this threshold. At this point, one may ask whether the additional benefit of such a low bar outweighs the potential costs of such an endeavour. We conclude that a reformed assessment system should put at least equal emphasis on theory building and adherence to open science principles and should not completely disregard traditional performance metrics.</p>2024-03-17T00:00:00+01:00Copyright (c) 2024 Hannah Dames, Philipp Musfeld, Vencislav Popov, Klaus Oberauer, Gidon T. Frischkornhttps://open.lnu.se/index.php/metapsychology/article/view/3734Responsible Research Assessment requires structural more than procedural reforms2023-03-14T13:53:40+01:00Gidon T. Frischkorngidon.frischkorn@psychologie.uzh.ch<p>In their target articles, Schönbrodt et al. (2022) and Gärtner et al. (2022) propose new metrics and their practical implementation to improve responsible research assessment. Generally, I welcome the inclusion of open science and scientific rigor in the evaluation of job candidates. However, the proposed reform mainly focuses on the first stage of selecting candidates, who then continue to a second stage of in-depth evaluation of research quality. Yet this second selection stage is underdeveloped but likely more critical for responsible research assessment and hiring decisions.
I argue that an adequate assessment of research quality at this second stage requires that the hiring committee have specific knowledge of the subfield of the discipline for which the candidate is to be hired. This is rarely achieved given the current structural organization of departments, especially in German-speaking countries, and potentially explains the reliance on suboptimal indicators such as the h-index and the Journal Impact Factor. Therefore, I argue that responsible research assessment requires structural reform to ensure that institutions have several researchers in permanent positions with specific knowledge of different subfields, so that hiring committees can provide an adequate and responsible assessment of research quality at all evaluation stages.</p>2024-03-17T00:00:00+01:00Copyright (c) 2024 Gidon T. Frischkornhttps://open.lnu.se/index.php/metapsychology/article/view/3715Assessing rigor and impact of research software for hiring and promotion in psychology: A comment on Gärtner et al. (2022)2023-03-06T16:32:02+01:00Andreas Markus Brandmaierandreas.brandmaier@medicalschool-berlin.deMaximilian Ernsternst@mpib-berlin.mpg.deAaron Peikertpeikert@mpib-berlin.mpg.de<p>Based on four principles of a more responsible research assessment in academic hiring and promotion processes, Gärtner et al. (2022) suggested an evaluation scheme for published manuscripts, reusable data sets, and research software. This commentary responds to the proposed indicators for the evaluation of research software contributions in academic hiring and promotion processes. Acknowledging the significance of research software as a critical component of modern science, we propose that an evaluation scheme must emphasize the two major dimensions of rigor and impact.
Generally, we believe that research software should be recognized as valuable scientific output in academic hiring and promotion, with the hope that this incentivizes the development of more open and better research software.</p>2024-07-15T00:00:00+02:00Copyright (c) 2024 Andreas M. Brandmaier, Maximilian Ernst, Aaron Peikerthttps://open.lnu.se/index.php/metapsychology/article/view/3685Comment on: Responsible Research Assessment I and Responsible Research Assessment II2023-01-21T18:48:52+01:00Erich H. Wittewitte_e_h@uni-hamburg.de<p>A long-term personnel policy in filling professorships, aimed at remedying deficits in psychological research, should be able to significantly improve the scientific quality of psychology: “The main reason is that the hiring and promotion of such researchers is most likely to contribute to the emergence of a credible scientific knowledge base“ (Gärtner et al., in press). </p>2024-03-17T00:00:00+01:00Copyright (c) 2024 Erich H. Wittehttps://open.lnu.se/index.php/metapsychology/article/view/3679Interdisciplinary Value2023-01-12T20:25:27+01:00Veli-Matti Karhulahtivmkarhwu@jyu.fi<p>This is a commentary on interdisciplinary value in the special issue "Responsible Research Assessment: Implementing DORA for hiring and promotion in psychology."</p>2024-03-17T00:00:00+01:00Copyright (c) 2024 Veli-Matti Karhulahtihttps://open.lnu.se/index.php/metapsychology/article/view/3655Commentary: “Responsible Research Assessment: Implementing DORA for hiring and promotion in psychology”2022-12-20T09:44:46+01:00Alejandro Sandoval-Lentiscoalejandro.sandovall@um.es<p>A commentary on: Gärtner et al., 2022; Schönbrodt et al., 2022.</p>2024-03-17T00:00:00+01:00Copyright (c) 2024 Alejandro Sandoval-Lentiscohttps://open.lnu.se/index.php/metapsychology/article/view/3652A broader view of research contributions: Necessary adjustments to DORA for hiring and promotion in psychology.2022-12-15T16:29:08+01:00Gavin Browngt.brown@auckland.ac.nz<p>Recently Schönbrodt et al. 
(2022) released recommendations for improving how psychologists could be evaluated for recruitment, retention, and promotion. Specifically, they provided four principles of responsible research assessment in response to current methods that rely heavily on bibliometric indices of journal quality and research impact. They built their case for these principles on the San Francisco Declaration on Research Assessment (DORA) perspective, which decries reliance on invalid quantitative metrics of research quality and productivity in hiring and promotion. The paper makes clear the tension panels must address when evaluating applications: too little time to conduct an in-depth evaluation of an individual’s career and contribution, hence reliance on easy-to-understand, but perhaps invalid, metrics. This dilemma requires an alternative mechanism rather than simply a rejection of metrics. To that end, the authors are to be congratulated for operationalising what those alternatives might look like. Nonetheless, the details embedded in the principles seem overly narrow and restrictive.</p>2024-03-17T00:00:00+01:00Copyright (c) 2024 Gavin Brownhttps://open.lnu.se/index.php/metapsychology/article/view/4021ReproduceMe: Lessons from a pilot project on computational reproducibility2024-09-18T07:34:59+02:00Daniel H. Bakerdaniel.baker@york.ac.ukMareike Bergmareike.kiki.berg@gmail.comKirralise J. Hansfordkh1474@york.ac.ukBartholomew P.A. Quinnbpaq500@york.ac.ukFederico G. Segalafgs502@york.ac.ukErin L. Warden-Englishee728@york.ac.uk<p><span style="font-weight: 400;">If a scientific paper is computationally reproducible, the analyses it reports can be repeated independently by others. At present, most papers are not reproducible. However, the tools to enable computational reproducibility, based on free and open source software, are now widely available. We conducted a pilot study in which we offered ‘reproducibility as a service’ within a UK psychology department for a period of six months.
Our rationale was that most researchers lack either the time or the expertise to make their own work reproducible, but might be willing to allow this to be done by an independent team. Ten papers were converted into reproducible format using <em>R Markdown</em>, such that all analyses were conducted by a single script that could download raw data from online platforms as required, generate figures, and produce a PDF of the final manuscript. For some studies this involved reproducing analyses originally conducted using commercial software. The project was an overall success, with strong support from the contributing authors, who saw clear benefits from this work, including greater transparency and openness, and ease of use for the reader. Here we describe our framework for reproducibility, summarise the specific lessons learned during the project, and discuss the future of computational reproducibility. Our view is that computationally reproducible manuscripts embody many of the core principles of open science, and should become the default format for scientific communication.</span></p>2024-09-06T00:00:00+02:00Copyright (c) 2024 Daniel H. Baker, Mareike Berg, Kirralise J. Hansford, Bartholomew P.A. Quinn, Federico G. Segala, Erin L. Warden-Englishhttps://open.lnu.se/index.php/metapsychology/article/view/3958The many faces of early life adversity - Content overlap in validated assessment instruments as well as in fear and reward learning research2024-09-18T07:37:02+02:00Alina Koppoldal.koppold@gmail.comJulia Rugej.ruge@uke.deTobias Heckertobias.hecker@uni-bielefeld.deTina Lonsdorftina.lonsdorf@uni-bielefeld.de<p>The precise assessment of childhood adversity is crucial for understanding the impact of aversive events on mental and physical development. However, the plethora of assessment tools currently used in the literature, with unknown overlap in the childhood adversity types covered, hampers comparability and cumulative knowledge generation.
In this study, we conducted two separate item-level content analyses of a total of 35 questionnaires aiming to assess childhood adversity. These include 13 questionnaires that were recently recommended based on strong psychometric properties, as well as an additional 25 questionnaires that were identified through a systematic literature search. The latter provides important insights into the actual use of childhood adversity questionnaires in a specific, exemplary research field (i.e., the association between childhood adversity and threat and reward learning). Of note, only 3 of the recommended questionnaires were employed in this research field. Both item-wise content analyses illustrate substantial heterogeneity in the adversity types assessed across these questionnaires and hence highlight limited overlap in the content (i.e., adversity types) covered by different questionnaires. Furthermore, we observed considerable differences in structural properties across all included questionnaires, such as the number of items, the age ranges assessed, and the specific response formats (e.g., binary vs. continuous assessments, self vs. caregiver). We discuss implications for the interpretation, comparability, and integration of results from the existing literature and derive specific recommendations for future research. In sum, the substantial heterogeneity in the assessment and operationalization of childhood adversity emphasizes the urgent need for theoretical and methodological solutions to promote the comparability and replicability of childhood adversity assessment and to foster cumulative knowledge generation in research on the association of childhood adversity with physical as well as psychological health.</p>2024-08-07T00:00:00+02:00Copyright (c) 2024 Alina Koppold, Julia Ruge, Tobias Hecker, Tina B.
Lonsdorfhttps://open.lnu.se/index.php/metapsychology/article/view/1479How Close to the Mark Might Published Heritability Estimates Be?2024-06-13T10:07:55+02:00Michael Maraunmichael_maraun@sfu.caMoritz Heeneheene@psy.lmu.dePhilipp Sckopkephilipp.sckopke@psy.lmu.de<p>The behavioural scientist who requires an estimate of narrow heritability, h<sup>2</sup>, will conduct a twin study and input the resulting estimated covariance matrices into a particular mode of estimation, the latter derived under the supposition of the standard biometric model (SBM). It is known that the standard biometric model can be expected to misrepresent the phenotypic (genetic) architecture of human traits. The impact of this misrepresentation on the accuracy of h<sup>2</sup> estimation is unknown. We aimed to shed some light on this general issue by undertaking three simulation studies. In each, we investigated the parameter recovery performance of five modes (Falconer’s coefficient and the SEM models ACDE, ADE, ACE, and AE) when they encountered a constructed, non-SBM architecture, under a particular informational input. In study 1, the architecture was single-locus with dominance effects and genetic-environment covariance, and the input was a set of population covariance matrices yielded under the four twin designs: monozygotic-reared together, monozygotic-reared apart, dizygotic-reared together, and dizygotic-reared apart; in study 2, the architecture was identical to that of study 1, but the informational input was monozygotic-reared together and dizygotic-reared together; and in study 3, the architecture was multi-locus with dominance effects, genetic-environment covariance, and epistatic interactions. The informational input was the same as in study 1.
The results suggest that conclusions regarding the coverage of h<sup>2</sup> must be drawn conditional on a) the general class of generating architecture in play; b) the specifics of the architecture’s parametric instantiations; c) the informational input into a mode of estimation; and d) the particular mode of estimation employed. The results showed that the more complicated the generating architecture, the poorer a mode’s h<sup>2</sup> recovery performance. Random forest analyses furthermore revealed that, depending on the genetic architecture, h<sup>2</sup>, the dominance and locus additive parameters, and the proportions of alleles were involved in complex interaction effects impacting the h<sup>2</sup> parameter recovery performance of a mode of estimation. Data and materials: <a href="https://osf.io/aq9sx/">https://osf.io/aq9sx/</a></p>2024-05-22T00:00:00+02:00Copyright (c) 2024 Michael Maraun, Moritz Heene, Philipp Sckopkehttps://open.lnu.se/index.php/metapsychology/article/view/3638Knowing What We're Talking About2024-04-19T12:19:38+02:00Gjalt-Jorn Petersgjalt-jorn@behaviorchange.euRik Crutzenrik.crutzen@maastrichtuniversity.nl<p><span style="font-weight: 400;">A theory crisis and a measurement crisis have been argued to be root causes of psychology's replication crisis. In both, the lack of conceptual clarification and the jingle-jangle jungle at the construct definition level as well as at the measurement level play a central role. We introduce a conceptual tool that can address these issues: Decentralized Construct Taxonomy specifications (DCTs). These consist of comprehensive specifications of construct definitions, corresponding instructions for quantitative and qualitative research, and unique identifiers. We discuss how researchers can develop DCT specifications as well as how DCT specifications can be used in research, practice, and theory development.
Finally, we discuss the implications and the potential for future developments to answer the call for conceptual clarification and epistemic iteration. This contributes to the move towards a psychological science that progresses in a cumulative fashion through discussion and comparison.</span></p>2024-04-19T00:00:00+02:00Copyright (c) 2024 Gjalt-Jorn Peters, Rik Crutzenhttps://open.lnu.se/index.php/metapsychology/article/view/3308Evaluating the Replicability of Social Priming Studies2022-11-14T11:14:29+01:00Erik Mac Giollaerik.mac.giolla@psy.gu.seSimon Karlssonsimontkarlsson@live.seDavid A. Neequayedavid.neequaye@psy.gu.seMagnus Bergquistmagnus.bergquist@psy.gu.se<p>To assess the replicability of social priming findings, we reviewed the extant close replication attempts in the field. In total, we found 70 close replications that replicated 49 unique findings. Ninety-four percent of the replications had effect sizes smaller than the effect they replicated, and only 17% of the replications reported a significant p-value in the original direction. The strongest predictor of replication success was whether or not the replication team included at least one of the authors of the original paper. Twelve of the 18 replications with at least one original author produced a significant effect in the original direction, and the meta-analytic average of these studies suggests a significant priming effect (d = 0.40, 95% CI[0.23; 0.58]). In stark contrast, none of the 52 replications by independent research teams produced a significant effect in the original direction, and the meta-analytic average was virtually zero (d = 0.002, 95% CI[-0.03; 0.03]). We argue that these results have shifted the burden of proof back onto advocates of social priming. Successful replications from independent research teams will likely be required to convince sceptics that social priming exists at all.</p>2024-11-12T00:00:00+01:00Copyright (c) 2024 Erik Mac Giolla, Simon Karlsson, David A.
Neequaye, Magnus Bergquisthttps://open.lnu.se/index.php/metapsychology/article/view/2957Distinguishing Between Models and Hypotheses: Implications for Significance Testing2022-02-08T13:26:00+01:00David Trafimowdtrafimo@nmsu.edu<p>In the debate about the merits or demerits of null hypothesis significance testing (NHST), authorities on both sides assume that the <em>p</em> value that a researcher computes is based on the null hypothesis or test hypothesis. If the assumption is true, it suggests that there are proper uses for NHST, such as distinguishing between competing directional hypotheses. And once it is admitted that there are proper uses for NHST, it makes sense to educate substantive researchers about how to use NHST properly and avoid using it improperly. From this perspective, the conclusion would be that researchers in the business and social sciences could benefit from better education pertaining to NHST. In contrast, my goal is to demonstrate that the <em>p</em> value that a researcher computes is not based on a hypothesis, but on a model in which the hypothesis is embedded. In turn, the distinction between hypotheses and models indicates that NHST cannot soundly be used to distinguish between competing directional hypotheses or to draw any conclusions about directional hypotheses whatsoever. Therefore, it is not clear that better education is likely to prove satisfactory. 
It is the temptation issue, not the education issue, that deserves to be in the forefront of NHST discussions.</p>2024-11-11T00:00:00+01:00Copyright (c) 2024 David Trafimowhttps://open.lnu.se/index.php/metapsychology/article/view/2909Preregistration specificity and adherence: A review of preregistered gambling studies and cross-disciplinary comparison2022-02-11T12:35:36+01:00Robert Heirenerobheirene@gmail.comDebi LaPlantedebi_laplante@hms.harvard.eduEric Louderbackelouderback@cha.harvard.eduBrittany Keenbrittany.keen@live.com.auMarjan BakkerM.Bakker_1@tilburguniversity.eduAnastasia Serafimovskaaser8372@uni.sydney.edu.auSally Gainsburysally.gainsbury@sydney.edu.au<p>Study preregistration is one of several “open science” practices (e.g., open data, preprints) that researchers use to improve the transparency and rigour of their research. As more researchers adopt preregistration as a regular practice, examining the nature and content of preregistrations can help identify the strengths and weaknesses of current practices. The value of preregistration, in part, relates to the specificity of the study plan and the extent to which investigators adhere to this plan. We identified 53 preregistrations from the gambling studies field meeting our predefined eligibility criteria and scored their level of specificity using a 23-item protocol developed to measure the extent to which a clear and exhaustive preregistration plan restricts various researcher degrees of freedom (RDoF; i.e., the many methodological choices available to researchers when collecting and analysing data, and when reporting their findings). We also scored studies on a 32-item protocol that measured adherence to the preregistered plan in the study manuscript. We found gambling preregistrations had low specificity levels on most RDoF. 
However, a comparison with a sample of cross-disciplinary preregistrations (N = 52; Bakker et al., 2020) indicated that gambling preregistrations scored higher on 12 (of 29) items. Thirteen (65%) of the 20 associated published articles or preprints deviated from the protocol without declaring as much (the mean number of undeclared deviations per article was 2.25, SD = 2.34). Overall, while we found improvements in specificity and adherence over time (2017-2020), our findings suggest the purported benefits of preregistration—including increasing transparency and reducing RDoF—are not fully achieved by current practices. Based on our findings, we provide 10 practical recommendations to support and refine preregistration practices.</p>2024-07-01T00:00:00+02:00Copyright (c) 2024 Robert Heirene, Debi LaPlante, Eric Louderback, Brittany Keen, Marjan Bakker, Anastasia Serafimovska, Sally Gainsburyhttps://open.lnu.se/index.php/metapsychology/article/view/2740Beyond a Dream: The Practical Foundations of Disconnected Psychology2024-04-19T12:20:01+02:00Dario Krpand.krpan@lse.ac.uk<p><em>Disconnected</em> psychology is a form of psychological science in which researchers ground their work upon the main principles of psychological method but are detached from a “field” consisting of other psychologists that comprises <em>connected</em> psychology. It has previously been proposed that combining the two forms of psychology would result in the most significant advancement of psychological knowledge (Krpan, 2020). However, disconnected psychology may seem to be an “abstract utopia”, given that it has not previously been detailed how to put it into practice. The present article therefore sets the practical foundations of disconnected psychology. In this regard, I first describe a hypothetical disconnected psychologist and discuss relevant methodological and epistemological implications.
I then propose how this variant of psychology could be integrated with the current academic system (i.e., with connected psychology). Overall, the present article transforms disconnected psychology from a hazy dream into substance that could eventually maximize psychological knowledge, even if implementing it would require a radical transformation of psychological science. </p>2024-04-19T00:00:00+02:00Copyright (c) 2024 Dario Krpanhttps://open.lnu.se/index.php/metapsychology/article/view/3987The Untrustworthy Evidence in Dishonesty Research2024-04-19T12:19:25+02:00František Bartošf.bartos96@gmail.com<p>Replicable and reliable research is essential for cumulative science and its applications in practice. This article examines the quality of research on dishonesty using a sample of 286 hand-coded test statistics from 99 articles. Z-curve analysis indicates a low expected replication rate, a high proportion of missing studies, and an inflated false discovery risk. Test of insufficient variance (TIVA) finds that 11/61 articles with multiple test statistics contain results that are “too-good-to-be-true”. Sensitivity analysis confirms the robustness of the findings. In conclusion, caution is advised when relying on or applying the existing literature on dishonesty.</p>2024-04-19T00:00:00+02:00Copyright (c) 2024 František Bartošhttps://open.lnu.se/index.php/metapsychology/article/view/3716Re-analysis of a meta-analysis about tryptophan and depression2024-05-04T12:22:19+02:00Martin Plöderlm.ploederl@salk.at<p style="font-weight: normal; line-height: 200%; margin-bottom: 0cm;" align="left"><span style="font-size: small;">This is a reanalysis of a meta-analysis about L-tryptophan blood levels and depression, which became part of the controversy around a recent umbrella review about the role of serotonin in depression.
The reanalysis revealed major methodological limitations, raising doubts about the conclusion in the original publication that tryptophan levels are lower among depressed than among non-depressed individuals. The data are also compatible with a null effect, and no firm conclusion should be drawn.</span></p>2024-05-03T00:00:00+02:00Copyright (c) 2024 Martin Plöderlhttps://open.lnu.se/index.php/metapsychology/article/view/3649Investigating Heterogeneity in (Social) Media Effects: Experience-Based Recommendations2023-03-16T08:56:22+01:00Patti Valkenburgp.m.valkenburg@uva.nlIne Beyensi.beyens@uva.nlLoes Keijserskeijsers@essb.eur.nl<p><span id="page3R_mcid13" class="markedContent">We recently introduced a new, unified approach to investigate the effects of social media use on well-being. Using experience sampling methods among sizeable samples of respondents, our unified approach combines the strengths of nomothetic methods of analysis (e.g., mean comparisons, regression models), which are suited to understanding group averages and generalizing to populations, with idiographic methods of analysis (e.g., N=1 time series analyses), which are suited to assessing the effects of social media use on each individual person (i.e., person-specific effects). Our approach challenges existing knowledge of media effects based on the nomothetic-only approach. As with many innovations, our approach has raised questions. In this article, we discuss our experience with our unified media effects approach that we have been building since 2018. We will explain what exactly our approach entails and what it requires. For example, how many observations are needed per person? Which methods did we employ to assess the meaningfulness of variation around average effects? How can we generalize our findings to our target populations? And how can our person-specific results aid policy decisions?
Finally, we hope to answer questions from colleagues who are interested in replicating, extending, or building on our work.<br /></span></p>2024-07-01T00:00:00+02:00Copyright (c) 2024 Patti Valkenburg, Ine Beyens, Loes Keijsershttps://open.lnu.se/index.php/metapsychology/article/view/3322How should we investigate variation in the relation between social media and well-being?2022-09-16T16:45:14+02:00Niklas Johannesniklas.johannes@oii.ox.ac.ukPhilipp K. Masurp.k.masur@vu.nlMatti Vuorrematti.vuorre@oii.ox.ac.ukAndrew K. Przybylskiandy.przybylski@oii.ox.ac.uk<p>Most researchers studying the relation between social media use and well-being find small to no associations, yet policymakers and public stakeholders keep asking for more evidence. One way the field is reacting is by inspecting the variation around average relations—with the goal of describing individual social media users. Here, we argue that this approach produces findings that are not as informative as they could be. Our analysis begins by describing how the field got to this point. Then, we explain the problems with the current approach of studying variation and how it loses sight of one of the most important goals of a quantitative social science: generalizing from a sample to a population. We propose a principled approach to quantify, interpret, and explain variation in average relations by: (1) conducting model comparisons, (2) defining a region of practical equivalence and testing the theoretical distribution of relations against that region, (3) defining a smallest effect size of interest and comparing it against the theoretical distribution. We close with recommendations to either study moderators as systematic factors that explain variation or to commit to a person-specific approach and conduct N=1 studies and qualitative research.</p>2024-07-01T00:00:00+02:00Copyright (c) 2024 Niklas Johannes, Philipp K. Masur, Matti Vuorre, Andrew K. 
Przybylskihttps://open.lnu.se/index.php/metapsychology/article/view/2918Associations between Goal Orientation and Self-Regulated Learning Strategies are Stable across Course Types, Underrepresented Minority Status, and Gender2024-04-19T12:19:50+02:00Brendan SchuetzeBrendan.Schuetze@utexas.eduVeronica Yanveronicayan@austin.utexas.edu<p>In this pre-registered replication of findings from Muis and Franco [2009; Contemporary Educational Psychology, 34(4), 306-318], college students (N = 978) from across the United States and Canada were surveyed regarding their goal orientations and learning strategies. A structural equation modeling approach was used to assess the associations between goal orientations and learning strategies. Six of the eight significant associations (75%) found by Muis and Franco replicated successfully in the current study. Mastery approach goals positively predicted endorsement of all learning strategies (Rehearsal, Critical Thinking, Metacognitive Self-Regulation and Elaboration). Performance avoidance goals negatively predicted critical thinking, while positively predicting metacognitive self-regulation and rehearsal. Evidence for moderation by assignment type was found. No evidence was found that these associations were moderated by gender, underrepresented minority status, or course type (STEM, Humanities, or Social Sciences). 
The reliability of common scales used in educational research and issues concerning the replication of studies using structural equation modeling are discussed.</p>2024-04-19T00:00:00+02:00Copyright (c) 2024 Brendan Schuetze, Veronica Yanhttps://open.lnu.se/index.php/metapsychology/article/view/2639The Effect of Variety on Perceived Quantity2020-12-22T04:30:00+01:00Lukas Röselerlukas.roeseler@uni-bamberg.deGeorg Felsergfelser@hs-harz.deJana Asbergerjana.asberger@uni-erfurt.deAstrid Schützastrid.schuetz@uni-bamberg.de<p>Redden and Hoch (2009) found that variety in a set of items robustly decreased the perceived quantity of the sum of these items across multiple studies. For example, a set of multicolored M&M’s was estimated to contain fewer M&M’s than an equally large set of single-colored M&M’s (e.g., Redden & Hoch, 2009, Study 3). We conducted six close replication studies of the studies reported by Redden and Hoch and did not find this effect in any of them. A meta-analysis of the four original studies and six replication studies (N = 1,383) revealed no evidence for the phenomenon that variety reduces perceived quantity.</p>2024-08-27T00:00:00+02:00Copyright (c) 2024 Lukas Röseler, Georg Felser, Jana Asberger, Astrid Schütz