
Reproducibility Project

The Reproducibility Project is a series of large-scale, collaborative initiatives spearheaded by the Center for Open Science to systematically replicate experiments from high-impact scientific papers, primarily in psychology and cancer biology, with the goal of estimating the reproducibility of published findings and promoting open science practices. Launched in 2011, the flagship effort, known as the Reproducibility Project: Psychology (RPP), involved over 270 researchers from the Open Science Collaboration who attempted to replicate 100 experimental and correlational studies originally published in 2008 across three prominent journals: Journal of Personality and Social Psychology, Psychological Science, and Journal of Experimental Psychology: Learning, Memory, and Cognition. The replications employed high-powered designs—using sample sizes approximately five times larger than the originals—to enhance statistical precision, while adhering closely to the original methods, materials, and protocols whenever possible.

Key outcomes revealed that while 97% of the original studies reported statistically significant results (p < .05), only 36% of the replications did so; moreover, the mean effect size in replications (r = 0.197) was about half that of the originals (r = 0.403), and just 47% of original effect sizes fell within the 95% confidence intervals of the replication estimates. These results, published in Science in 2015, underscored systemic issues in psychological research, such as potential biases from selective reporting, p-hacking, and underpowered studies, while also demonstrating that the strength of evidence in the original studies was a stronger predictor of replication success than factors like the replicating team's experience. The project emphasized replication as a cornerstone of scientific validity, advocating for reforms like preregistration of studies and open data sharing to bolster credibility.

Building on the RPP's model, the Reproducibility Project: Cancer Biology (RPCB), initiated in 2013 and spanning eight years until 2021, targeted 193 experiments from 53 influential cancer biology papers published between 2010 and 2012 in top journals such as Cell, Nature, Science, and Cancer Cell. This effort partnered with the Science Exchange to conduct replications in independent laboratories, utilizing a Registered Reports format in which protocols were peer-reviewed and funded before execution, ensuring transparency and reducing bias. Of the 50 experiments from 23 papers that were fully completed (assessing 158 effects), replication effect sizes were 85% smaller than the originals on average, with 46% of effects succeeding on more criteria (e.g., statistical significance, effect size overlap) than failing; notably, positive results replicated at a 40% rate, while null results fared better at 80%. Challenges included the need for protocol modifications in 67% of cases and reagent sourcing issues, with original authors providing materials in 69% of requests. The findings, disseminated through an eLife collection starting in 2017 and culminating in a final analysis published in eLife in 2021, highlighted reproducibility hurdles in preclinical cancer research—such as variability in biological reagents and experimental conditions—while reinforcing the value of rigorous replication for advancing reliable biomedical knowledge.

Collectively, these projects have catalyzed broader discussions and reforms in scientific publishing, including incentives for replication studies, data-sharing mandates, and training in reproducible practices, influencing fields beyond psychology and cancer biology to prioritize methodological rigor.

Background

Origins and Motivation

The replication crisis in psychology gained prominence in the early 2010s, fueled by high-profile instances of research fraud and growing awareness of systemic issues in research practices. A pivotal event was the 2011 fraud scandal involving Dutch social psychologist Diederik Stapel, whose investigation revealed that he had fabricated data in dozens of publications over more than a decade, prompting widespread reevaluation of trust in psychological findings. This case exemplified broader concerns about data integrity, as it highlighted how undetected manipulation could permeate peer-reviewed literature. Compounding these issues, empirical demonstrations showed that undisclosed flexibility in data collection and analysis—commonly termed p-hacking—could inflate false-positive rates dramatically, allowing researchers to present insignificant results as statistically significant without transparent reporting. Surveys further revealed the prevalence of questionable research practices (QRPs), such as selectively reporting analyses or excluding data post hoc, with over half of psychologists admitting to at least one such practice under anonymity incentives.

In response to this mounting skepticism, the Open Science Collaboration (OSC) was formed in 2011, led by psychologist Brian Nosek at the University of Virginia, as an initiative to address reproducibility challenges through collective action. The OSC operated under the newly established Center for Open Science (COS), a nonprofit founded in 2013 to promote open science practices and infrastructure. By August 2012, the collaboration had attracted 72 volunteer researchers from 41 institutions, marking the beginning of a crowdsourced approach to scientific reform. This effort was formalized with the announcement of the Reproducibility Project in a seminal publication, positioning it as a transparent, community-driven endeavor to empirically assess the field's reliability.

The project's core motivations stemmed from the need to quantify reproducibility rates in psychological science amid eroding trust, while demonstrating the value of open practices to mitigate future crises. Leaders emphasized that prevailing incentives prioritized novel discoveries over rigorous confirmation, exacerbating QRPs and false findings, as evidenced by low replication rates in related fields like preclinical cancer research (around 25%). By employing preregistration of replication protocols and open data sharing, the initiative aimed to model an alternative paradigm that enhanced transparency and reduced bias, ultimately fostering greater public and scientific trust in psychology's empirical foundations.

Goals and Scope

The Reproducibility Project: Psychology aimed primarily to provide an unbiased estimate of the reproducibility of psychological science by conducting direct replications of 100 experimental and correlational studies that reported significant positive effects and were published in 2008 across three high-impact journals: the Journal of Experimental Psychology: Learning, Memory, and Cognition, the Journal of Personality and Social Psychology, and Psychological Science. This scope was deliberately narrowed to focus on core empirical research in cognitive and social psychology, selecting one key study per paper to represent the field's typical findings while ensuring feasibility within a collaborative framework. By targeting studies with statistically significant results, the project sought to assess the reliability of established effects under controlled replication conditions, without extending to null or non-significant outcomes that were less commonly emphasized in the literature.

A secondary objective was to model and demonstrate the viability of large-scale, collaborative open science practices to enhance transparency and rigor in psychological research. This included preregistering replication protocols to specify methods, hypotheses, and analysis plans in advance, thereby minimizing selective reporting and analytic flexibility; all materials, data, and progress were shared openly via the Open Science Framework (OSF), fostering community involvement from over 270 researchers across multiple institutions. These practices not only supported the project's execution but also served as a blueprint for broader adoption in the field, emphasizing accessibility and accountability.

The project's scope included deliberate limitations to maintain focus and practicality. It exclusively targeted psychological studies amenable to replication, excluding clinical trials, review articles, meta-analyses, and any research where original materials, stimuli, or author cooperation were unavailable or impractical to obtain. Replications were designed with high statistical power, achieving an average of 92% to detect effects of the magnitude reported in the originals, which allowed for robust evaluation while accounting for variability in sample sizes and procedures. This targeted approach ensured the effort remained manageable within the collaborative model, prioritizing depth over breadth in examining reproducibility.

Methodology

Study Selection

The study selection process for the Reproducibility Project: Psychology employed a quasi-random sampling method to achieve a representative sample of published psychological research while maintaining transparency and minimizing selection bias. The sampling frame was defined as all articles published in 2008 across three high-impact journals: Psychological Science, Journal of Personality and Social Psychology, and Journal of Experimental Psychology: Learning, Memory, and Cognition, totaling 488 articles. These journals were selected for their prestige and coverage of diverse subfields, including social, personality, cognitive, and learning psychology. Only empirical studies reporting at least one experiment with a statistically significant effect (p < 0.05) were eligible, resulting in 158 eligible studies after excluding reviews, commentaries, and non-empirical pieces.

To compile the final set, an initial pool of the first 20 eligible articles per journal was drawn, expanded by up to 10 additional articles per journal as needed to facilitate volunteer matching and ensure feasibility, yielding 111 studies targeted for replication. The process was crowdsourced through the Open Science Framework (OSF), where over 270 researchers volunteered to nominate, review, and claim studies based on their expertise and interest; coordinators matched teams to studies to promote diversity across subfields like social, cognitive, and personality psychology. The selection protocol, including criteria for identifying the key effect (typically a single statistical test such as a t-test or correlation from the last-reported experiment), was preregistered on the OSF to prevent post-hoc adjustments and enhance reproducibility.

Several challenges emerged, including the exclusion of studies due to inaccessible materials, ethical concerns, or author non-response, with approximately 47 articles (about 28% of the eligible pool) not advancing to selection for these reasons. Of the 111 targeted studies, 13 (roughly 12%) were ultimately dropped before completion owing to resource limitations, persistent material unavailability, or logistical issues, necessitating substitutions from the remaining articles in the same journals to reach the goal of 100 replications. Deviations from replicating the last-reported study occurred in 16 cases, justified by feasibility assessments or original author input, while ethical reviews ensured all selected studies met contemporary standards without compromising the original designs. This rigorous approach underscored the project's commitment to a transparent, unbiased basis for estimating reproducibility in psychological science.
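The quasi-random sampling rule described above (an initial pool of the first 20 eligible articles per journal, expandable by up to 10 more) can be expressed as a short sketch. The data structures and field names below are hypothetical; the actual selection was coordinated manually through the OSF.

```python
# Illustrative sketch (hypothetical records) of the quasi-random sampling rule:
# take the first 20 eligible 2008 articles per journal, holding up to 10 more
# per journal in reserve for expansion rounds during volunteer matching.

def build_selection_pool(articles_by_journal, initial=20, reserve=10):
    pool = {}
    for journal, articles in articles_by_journal.items():
        # Keep only empirical articles reporting a significant key effect.
        eligible = [a for a in sorted(articles, key=lambda a: a["order_in_2008"])
                    if a["has_significant_effect"]]
        pool[journal] = {
            "initial": eligible[:initial],
            "reserve": eligible[initial:initial + reserve],
        }
    return pool
```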

Replication Procedures

The Reproducibility Project: Psychology employed a standardized protocol to ensure methodological rigor and fidelity to the original studies. Replicators contacted original authors to obtain materials, such as stimuli and detailed procedures, and to seek clarifications on ambiguous aspects of the methods. Detailed replication plans were preregistered on the Open Science Framework (OSF), specifying hypotheses, exclusion criteria, and analysis pipelines to minimize flexibility in decision-making and enhance transparency. Sample sizes for each replication were calculated in advance to achieve at least 80% statistical power to detect the effect size reported in the original study, using a two-tailed alpha of 0.05; this often resulted in larger samples than the originals to improve detection reliability.

In executing the replications, the project prioritized direct replication, closely matching the original study designs rather than pursuing conceptual variations. Original stimuli and procedures were used whenever available, with adaptations made only when necessary, such as for online administration to facilitate larger participant pools. Replications were conducted across multiple independent laboratories and, in many cases, via online platforms such as Amazon Mechanical Turk to recruit diverse participants and meet power requirements efficiently.

Data handling emphasized openness and adherence to preregistered plans. All data, along with code for data preparation and analysis, were publicly shared on the OSF for each replication study. Analyses strictly followed the preregistered specifications, employing the same statistical tests as the originals—such as t-tests for between-group comparisons and ANOVA for multi-factor designs—and reporting effect sizes using standardized metrics like Cohen's d to allow direct comparability.
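The a-priori power analysis described above can be illustrated for an effect expressed as a correlation. The sketch below uses the standard Fisher-z approximation and is only a minimal illustration; the project's own calculations varied with each original design, and the example effect sizes are hypothetical.

```python
# Minimal sketch of an a-priori power analysis for a correlation effect size
# (Pearson's r), using the Fisher-z approximation. Not the project's code.
from math import atanh, ceil
from scipy.stats import norm

def required_n_for_correlation(r, power=0.80, alpha=0.05):
    """Approximate sample size needed to detect a correlation of size r
    with the given power in a two-tailed test at level alpha."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for two-tailed test
    z_beta = norm.ppf(power)            # quantile corresponding to desired power
    z_r = atanh(r)                      # Fisher z-transform of the effect size
    return ceil(((z_alpha + z_beta) / z_r) ** 2 + 3)

# Example: an original effect of r = 0.40 needs ~47 participants for 80% power,
# while r = 0.20 needs ~194 -- one reason replication samples were often larger.
print(required_n_for_correlation(0.40), required_n_for_correlation(0.20))
```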

Reproducibility Project: Cancer Biology

The Reproducibility Project: Cancer Biology (RPCB) utilized a distinct methodology focused on preclinical cancer research, partnering with the Science Exchange to conduct replications in independent contract research organizations (CROs). Study selection targeted 193 experiments from 53 high-impact papers published between 2010 and 2012 in journals including Cancer Cell, Cell, Nature, and Science. Eligible experiments were those reporting a key biological effect, selected to represent influential findings in cancer biology. Replication procedures employed a Registered Reports publishing format through eLife, where detailed protocols were peer-reviewed and provisionally accepted for publication prior to experimentation and funding, minimizing bias and ensuring execution fidelity. Original authors were invited to review and refine the protocols, with materials provided in 69% of cases. Replications aimed for high rigor, including independent labs, standardized protocols, and full methodological transparency. Challenges included protocol modifications in 67% of attempts due to feasibility constraints or biological variability, and only 50 experiments from 23 papers were fully completed, assessing 158 effects. All data, analysis code, and reports were openly shared via the OSF and the eLife collection.

Results

Overall Replication Rate

The results of the Reproducibility Project: Psychology were detailed in a seminal publication on August 28, 2015, in the journal Science, titled "Estimating the reproducibility of psychological science," authored by the Open Science Collaboration. This paper presented the aggregate outcomes from 100 replication attempts of psychological studies originally published in 2008 across three prominent psychology journals: Journal of Personality and Social Psychology, Psychological Science, and Journal of Experimental Psychology: Learning, Memory, and Cognition. A core quantitative finding was that 36% of the replications yielded statistically significant results (using an alpha level of 0.05) in the same direction as the original studies, compared to 97% significance in the originals. These outcomes were derived from preregistered replication protocols designed to closely mirror the originals while incorporating standardized reporting. Key metrics underscored the diminished scale of replication effects: the mean effect size (expressed as Pearson's r) across the original studies was 0.40, whereas the replications averaged 0.20—roughly half the magnitude.
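Two of the project's replication criteria, significance in the original direction and whether the original effect size falls inside the replication's 95% confidence interval, can be sketched for a single pair of correlations. This is a hypothetical illustration using the Fisher-z approximation, not the project's analysis code.

```python
# Sketch of two replication criteria for effects expressed as Pearson's r:
# (1) is the replication significant in the original direction, and
# (2) does the original effect fall inside the replication's 95% CI?
from math import atanh, tanh, sqrt
from scipy.stats import norm

def significant_same_direction(r_rep, n_rep, r_orig, alpha=0.05):
    """Two-tailed test of the replication correlation against zero."""
    z = atanh(r_rep) * sqrt(n_rep - 3)
    p = 2 * (1 - norm.cdf(abs(z)))
    return p < alpha and (r_rep * r_orig) > 0

def original_in_replication_ci(r_orig, r_rep, n_rep, level=0.95):
    """CI for the replication effect, back-transformed from Fisher z."""
    half_width = norm.ppf(1 - (1 - level) / 2) / sqrt(n_rep - 3)
    lo, hi = tanh(atanh(r_rep) - half_width), tanh(atanh(r_rep) + half_width)
    return lo <= r_orig <= hi

# Hypothetical example: original r = .40, replication r = .20 with n = 150.
print(significant_same_direction(0.20, 150, 0.40))   # True (p ~ .014)
print(original_in_replication_ci(0.40, 0.20, 150))   # False (CI ~ [.04, .35])
```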

Reproducibility Project: Cancer Biology

The Reproducibility Project: Cancer Biology (RPCB) results were published in a series of papers in eLife starting in 2017. Of the 50 experiments from 23 papers that were fully completed (assessing 158 effects), replication effect sizes were 85% smaller than originals on average, with 46% of effects succeeding on more criteria (e.g., statistical significance, effect size overlap) than failing. Notably, positive results replicated at a 40% rate, while null results fared better at 80%.
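The "succeeding on more criteria than failing" summary can be thought of as a simple per-effect tally of binary outcomes. The sketch below is illustrative only; the criterion names are hypothetical placeholders rather than the RPCB's exact scoring scheme.

```python
# Illustrative tally: an effect counts as a replication "success" when more of
# its applicable binary criteria succeed than fail (criterion names hypothetical).

def classify_effect(criteria_results):
    """criteria_results: dict mapping a criterion name to True (success),
    False (failure), or None (not applicable)."""
    scored = [v for v in criteria_results.values() if v is not None]
    successes = sum(scored)
    return "success" if successes > len(scored) - successes else "failure"

example = {
    "same_direction": True,
    "replication_significant": False,
    "original_in_replication_ci": True,
    "meta_analytic_estimate_significant": True,
}
print(classify_effect(example))  # "success": 3 of 4 criteria met
```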

Factors Influencing Reproducibility

Post-hoc analyses of the Reproducibility Project: Psychology data revealed several study characteristics associated with replication success. Studies in cognitive psychology replicated at a rate of 50% (21 out of 42 effects), significantly higher than the 25% rate (14 out of 55 effects) observed in social psychology, potentially due to differences in methodological complexity and effect robustness across subfields. Replication rates also varied by design type, with 47% success (23 out of 49) for studies testing main or simple effects compared to only 22% (8 out of 37) for those involving interaction effects, highlighting challenges in reproducing more intricate experimental setups. Social psychology studies, which frequently involved complex priming paradigms, contributed to the lower overall subfield rate, though specific priming effects were not isolated in the analyses.

Original effect sizes and statistical power emerged as key predictors of reproducibility. Larger original effect sizes correlated positively with replication success (Spearman's r = 0.304 for achieving P < 0.05), indicating that stronger initial findings were more likely to recur. Conversely, studies with smaller reported effects or implicitly lower power—common in the selected high-impact publications—were less replicable, as evidenced by the replication effect sizes averaging half the magnitude of the originals (mean r = 0.197 versus 0.403). This decline suggests possible overestimation of effects in the original literature, though direct tests for publication bias were not conducted and no single explanation emerged from the correlational analyses.

Analyses of other study features showed limited systematic influence on outcomes. Original sample sizes exhibited a weak negative correlation with replication success (Spearman's r = -0.150), implying no clear advantage for larger samples in this dataset, consistent with the project's conclusion that such characteristics did not substantially moderate reproducibility. Similarly, no significant differences appeared across the three journals from which studies were drawn (χ²(2) = 2.45, P = 0.48). The replication power, set at approximately 92% based on original effects, correlated positively with success (r = 0.368), underscoring the value of high-powered designs in detecting true effects. These findings contextualize the project's overall replication rate of 36%, emphasizing the role of evidential strength over procedural details.
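The predictor analyses summarized above amount to rank-correlating a study-level characteristic with a binary replication outcome. The sketch below illustrates that idea with made-up values; it is not the project's analysis code.

```python
# Sketch of a predictor analysis: rank-correlate a study characteristic
# (here, original effect size) with a binary replication outcome.
# The values below are hypothetical, for illustration only.
from scipy.stats import spearmanr

original_r = [0.55, 0.48, 0.40, 0.33, 0.28, 0.21, 0.18, 0.12]  # hypothetical
replicated = [1, 1, 1, 0, 1, 0, 0, 0]  # 1 = significant in original direction

rho, p_value = spearmanr(original_r, replicated)
print(f"Spearman's rho = {rho:.2f} (p = {p_value:.2f})")
```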

Impact and Reception

Immediate Reactions

The publication of the Reproducibility Project: Psychology in Science in August 2015 elicited significant media attention, positioning the findings as a stark indicator of a reproducibility crisis in the field. Major outlets such as Nature, The New York Times, and the BBC covered the results extensively, highlighting how only 36% of the attempted replications produced statistically significant effects in the same direction as the originals. This coverage often emphasized the project's implications for the reliability of psychological research, with headlines underscoring the low replication rate and calling for systemic reforms to address potential flaws in scientific practices. The project's high visibility was further amplified when Science designated it as one of the Breakthroughs of the Year for 2015, recognizing its role in sparking global discussions on scientific integrity.

Academic reactions to the project were mixed, reflecting both commendation for its methodological transparency and skepticism regarding its interpretations. Organizations such as the Association for Psychological Science (APS) endorsed the effort, praising its open collaboration and preregistered protocols as models for enhancing rigor in psychological research. However, some researchers downplayed the 36% replication rate, attributing non-replications largely to low statistical power in the original studies rather than inherent flaws in the findings themselves. These divergent views fueled initial controversy, including dedicated debates and sessions at conferences such as the 2016 Society for Personality and Social Psychology (SPSP) annual meeting, where participants grappled with the project's implications for study design and interpretation.

The project's immediate impact extended to its scholarly influence and policy shifts within the discipline. The seminal paper amassed over 1,000 citations shortly after publication, underscoring its rapid integration into academic conversations on reproducibility. It also prompted swift adoption of open science practices, including preregistration requirements in journals affiliated with APS, which implemented the Transparency and Openness Promotion (TOP) guidelines to encourage such measures and boost replicability.

Long-term Effects on Psychology

The Reproducibility Project catalyzed the widespread adoption of open science practices in psychology, particularly preregistration, which by the early 2020s was incorporated in approximately 40-50% of studies across various subfields, reflecting a marked increase from pre-2015 levels. This shift was driven by initiatives like Registered Reports, a preregistration-based publishing format now offered by numerous journals, enhancing transparency in study design and analysis plans to mitigate selective reporting. Complementing this, the Center for Open Science's TOP Guidelines, introduced in 2015 and updated in 2024 as TOP 2025 to further promote verifiability, have been integrated into the policies of over 5,000 journals and organizations, with more than 2,000 evaluated for their implementation of standards on citation, data sharing, and research design and analysis. Data sharing has also surged, facilitated by platforms like the Open Science Framework (OSF), where psychological studies' data availability statements rose notably post-2015, enabling independent verification and meta-analytic reuse.

These practices contributed to a profound cultural shift in psychology, elevating the "reproducibility crisis" to a mainstream concern that prompted institutional reforms. Professional organizations like the American Psychological Association (APA) responded with educational initiatives, including workshops and guidelines on teaching replicability and open science to students and researchers, fostering a norm of rigorous methodology. Funding agencies, such as the National Institutes of Health (NIH), prioritized reproducibility through 2016 policies emphasizing experimental rigor, transparency in grant applications, and training modules on bias reduction, which influenced grant allocations and researcher training across the field. By 2024, empirical studies documented improvements in replicability rates, reaching around 50-54% in targeted samples of recent psychological experiments, compared to the 36% rate from the original 2015 project, attributed to interventions such as preregistration and larger sample sizes.

The project's broader legacy lies in inspiring meta-research, a subfield dedicated to evaluating scientific practices, with subsequent large-scale replication efforts and methodological audits building directly on its framework to refine reproducibility metrics. Follow-up investigations through 2025 estimate psychology's overall replicability at 40-60% across diverse studies, signaling progress amid ongoing challenges, while surveys of researchers indicate reduced endorsement of questionable research practices (QRPs) like selective reporting, linked to heightened awareness and institutional incentives post-2015. This evolution has solidified open practices as core to psychological inquiry, promoting more reliable knowledge accumulation.

Criticisms and Responses

Key Criticisms

One major methodological critique of the Reproducibility Project: Psychology centered on the design of the replication studies, which, despite employing higher statistical power (averaging around 90%), imposed stricter protocols such as mandatory preregistration and elimination of researcher degrees of freedom compared to the originals. These changes, critics argued, could systematically reduce observed effect sizes and lead to an underestimation of reproducibility rates, as original studies often benefited from flexible analytic choices that maximized significance. For instance, Gilbert, King, Pettigrew, and Wilson (2016) analyzed the project data and argued that the replication success rate was higher than reported when accounting for differences in fidelity, estimating around 60% for protocols endorsed by the original authors. They further noted that many replication attempts deviated from original procedures in subtle but important ways, such as changes in stimuli presentation or participant instructions, effectively making them closer to conceptual replications that test generalizability rather than exact reproducibility. Critics also highlighted the potential value of conceptual replications over the project's emphasis on direct ones, arguing that direct copies might fail to capture the broader theoretical validity of findings if contextual variations are not explored.

Another key criticism concerned bias in study selection, as the project targeted only 100 experiments from three high-impact journals (Journal of Personality and Social Psychology, Psychological Science, and Journal of Experimental Psychology: Learning, Memory, and Cognition) published in 2008, all reporting statistically significant positive results. This focus, detractors contended, did not represent the full spectrum of psychological research, which includes diverse subfields, lower-impact journals, and unpublished or null findings, potentially skewing the sample toward effects that were more likely to appear inflated due to publication bias and thus harder to replicate exactly. By excluding null results and studies from other years or outlets, the selection process may have artificially elevated the perceived failure rate, as the originals were drawn from a "file drawer" of successful outcomes. A separate reanalysis using prediction intervals found that 77% of replication effect sizes fell within the 95% prediction intervals calculated from the original effect sizes, suggesting many failures were expected due to sampling variability rather than the absence of effects.

Regarding interpretation, the project's reported 36% replication rate—defined as the proportion of replications yielding significant effects in the same direction—was deemed misleading by some, as it emphasized success/failure without adequately considering effect size continuity or statistical uncertainty in small-sample originals. A Bayesian reanalysis by Etz and Vandekerckhove (2016) of the project data found that, after adjusting for overestimated original effect sizes and publication bias, the evidence supported the existence of nontrivial effects in most cases, with Bayes factors indicating moderate support for alternatives to the null hypothesis rather than widespread absence of effects. Critics accused the project of overemphasizing this rate in ways that fueled negative portrayals and eroded public trust in psychology, without providing sufficient context for the challenges of detecting small, true effects in underpowered original studies.
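The prediction-interval reanalysis mentioned above asks whether a replication estimate falls within the range expected given sampling error in both the original and the replication. The sketch below shows one way such an interval can be computed for correlations via the Fisher-z approximation, with hypothetical numbers; it is not the reanalysis authors' code.

```python
# Sketch of a 95% prediction interval for a replication correlation, given the
# original estimate and both sample sizes. The interval (on the Fisher-z scale)
# accounts for sampling error in BOTH studies. Values here are hypothetical.
from math import atanh, tanh, sqrt
from scipy.stats import norm

def prediction_interval(r_orig, n_orig, n_rep, level=0.95):
    """Prediction interval (on the r scale) for a replication estimate."""
    crit = norm.ppf(1 - (1 - level) / 2)
    se = sqrt(1 / (n_orig - 3) + 1 / (n_rep - 3))
    z = atanh(r_orig)
    return tanh(z - crit * se), tanh(z + crit * se)

# A small original study (n = 40, r = .40) and a larger replication (n = 150):
lo, hi = prediction_interval(0.40, 40, 150)
print(f"replication r expected in [{lo:.2f}, {hi:.2f}]")  # roughly [0.06, 0.66]
```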

Criticisms of the Reproducibility Project: Cancer Biology

Criticisms of the Reproducibility Project: Cancer Biology (RPCB) included concerns that one-time replication attempts in independent labs may not fully assess replicability, as biological variability and subtle protocol differences could lead to failures even for valid findings. Some argued that the selection of high-impact papers inherently biased the sample toward novel, preliminary results that are harder to replicate exactly, and that the Registered Reports format, while reducing bias, might constrain adaptations needed for successful replication. Additionally, challenges in obtaining reagents and detailed protocols from original authors were highlighted as systemic issues exacerbating low replication rates, though critics noted these reflect broader problems in preclinical research rather than flaws in the project design.

Project Responses and Defenses

In response to criticisms regarding the interpretation of their findings, the Open Science Collaboration issued an official reply in Science, emphasizing that the 36% replication rate was a tentative estimate intended to highlight variability in psychological science rather than a definitive measure of its overall reliability. They defended the use of direct replications as a standard and appropriate method for assessing reproducibility, arguing that such procedures closely mirrored the original studies to isolate the reliability of effects without introducing unnecessary modifications.

Subsequent follow-up work by the Center for Open Science addressed claims that the original replications were poorly executed. A 2020 study (Many Labs 5) re-replicating ten effects from the original project found consistent outcomes, with successful original replications more likely to succeed again and failed ones more likely to fail, thereby supporting the quality and reliability of the initial replication efforts. Further validation came from a 2024 analysis in Collabra: Psychology, which examined re-replications of initially failed studies and estimated an overall replicability rate of approximately 60% when combining first and second attempts, attributing improvements to larger samples and better practices post-crisis and underscoring the project's role in driving methodological advancements.

The collaboration also mounted a broader defense by positioning the project not merely as an exposé of problems but as a model for solutions, such as preregistration, data sharing, and transparent materials, which were implemented throughout to enhance scientific rigor. In associated responses, including preprints and official communications, they clarified that the low replication rates primarily reflected overestimation of effect sizes in original studies due to factors like small samples and questionable practices, rather than intentional fraud or systemic invalidity. For the RPCB, the organizers acknowledged logistical challenges like reagent sourcing and protocol adaptations as indicative of barriers in cancer biology, advocating for improved materials sharing and methodological transparency. They emphasized that the 46% partial success rate, while low, provided valuable insights into preclinical reliability without claiming it represented all of cancer biology.

References

  1. [1]
    Reproducibility Project: Cancer Biology - Center for Open Science
    The Reproducibility Project: Cancer Biology was an 8-year effort to replicate experiments from high-impact cancer biology papers published between 2010 and ...
  2. [2]
    Reproducibility Project: Psychology - OSF
    We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original ...
  3. [3]
    Estimating the reproducibility of psychological science
    Aug 28, 2015 · We conducted a large-scale, collaborative effort to obtain an initial estimate of the reproducibility of psychological science.
  4. [4]
    Reproducibility Project: Cancer Biology | Collections - eLife
    The Reproducibility Project: Cancer Biology was an initiative to independently replicate selected experiments from a number of high-profile papers in the field ...
  5. [5]
    Massive Collaboration Testing Reproducibility of Psychology ...
    Aug 27, 2015 · Reproducibility means that the results recur when the same data are analyzed again, or when new data are collected using the same methods. As ...
  6. [6]
    Report finds massive fraud at Dutch universities - Nature
    Nov 1, 2011 · Stapel took responsibility for collecting data through what he said was a network of contacts at other institutions, and several weeks later ...
  7. [7]
    False-Positive Psychology - Joseph P. Simmons, Leif D. Nelson, Uri ...
    False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant. Joseph P. Simmons, Leif D. Nelson, ...
  8. [8]
    An Open, Large-Scale, Collaborative Effort to Estimate the ...
    Nov 7, 2012 · Nosek B. A., Spies J. R., Motyl M. (2012). Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability.
  9. [9]
    About - Center for Open Science
    The Center for Open Science (COS) was founded in 2013 to start, scale, and sustain open research practices that will democratize access to research.
  11. [11]
    First results from psychology's largest reproducibility test - Nature
    Apr 30, 2015 · Crowd-sourced effort raises nuanced questions about what counts as replication.
  12. [12]
    Many Psychology Findings Not as Strong as Claimed, Study Says
    Aug 27, 2015 · Only 35 of 100 studies that the Reproducibility Project looked at held up fully to scrutiny. But some questioned the process of replication ...
  13. [13]
    More or Less, How Reliable is Psychology Science? - BBC
    Having replicated 100 psychological studies published in three psychology journals only 36 had significant results compared to 97% first time ...
  14. [14]
    Reproducibility Project Named Among Top Scientific Achievements ...
    Jan 5, 2016 · APS is strengthening the reproducibility of psychological science through the publication of Registered Replication Reports, which are large- ...
  15. [15]
    A reproducibility crisis? - American Psychological Association
    Oct 1, 2015 · Participants attempted to replicate 100 experimental and correlational psychology studies that had been published in three prominent psychology journals.
  16. [16]
    [PDF] Abstracts – SPSP 2016 - Society for Philosophy of Science in Practice
    In this paper, I address three major issues: (1) What did the Reproducibility Project really show, and in what sense can the follow-up studies meaningfully be.
  17. [17]
    Wagenmakers' Crusade Against p-values - Replicability-Index
    Dec 29, 2018 · ... the OSC, 2015, reproducibility project that already garnered over 1000 citations. In PPPV, Wagenmaker claims.
  18. [18]
    [115] Preregistration Prevalence - Data Colada
    Nov 13, 2023 · Overall about 43% of papers in the sample had at least one pre-registered study. That 43% was a bit lower than I expected, but it's a lower bound of sorts.
  19. [19]
    Prevalence of Registered Reports in experimental psychology journals
    Jun 20, 2025 · Registered Reports represented a small percentage of the total experimental psychology articles published over the period 2013–2023 at 1.2%.
  20. [20]
    TOP Guidelines - Center for Open Science
    The TOP Guidelines are a policy framework for open science, including seven research practices, two verification practices, and four verification study types.
  21. [21]
    The Evolution of Data Sharing Practices in the Psychological Literature
    Jun 29, 2021 · In Clinical Psychological Science, data sharing statement rates started to increase only two years following the implementation of badges.
  22. [22]
    How to teach replicability - American Psychological Association
    Jan 1, 2020 · A simple way to introduce open science practices is to read papers related to replication issues with your students as part of a seminar course or ongoing lab ...
  23. [23]
    Enhancing Reproducibility through Rigor and Transparency
    Sep 9, 2024 · This webpage provides information about the efforts underway by NIH to enhance rigor and reproducibility in scientific research.
  24. [24]
    Examining the replicability of online experiments selected by a ...
    Nov 19, 2024 · Overall, 54% of the studies were successfully replicated, with replication effect size estimates averaging 45% of the original effect size ...
  25. [25]
    A Meta-Psychological Perspective on the Decade of Replication ...
    Jan 5, 2020 · A first step toward this goal was the Reproducibility Project that focused on results published in three psychology journals in the year 2008.
  26. [26]
    Estimating the Replicability of Psychology Experiments After an ...
    Nov 19, 2024 · If effect sizes are inflated, replication studies that select a sample size with appropriate statistical power for the reported effect size ...
  27. [27]
    The replication crisis has led to positive structural, procedural, and ...
    Jul 25, 2023 · The emergence of large-scale replication projects yielding successful rates substantially lower than expected caused the behavioural, ...
  28. [28]
    Comment on “Estimating the reproducibility of psychological science”
    Mar 4, 2016 · The replication of empirical research is a critical component of the scientific process, and attempts to assess and improve the reproducibility ...
  29. [29]
    Power of replications in the Reproducibility Project
    Aug 27, 2015 · The Reproducibility Project demonstrates large scale collaborative efforts can work, so if you still believe in an effect that did not replicate ...
  30. [30]
    A Bayesian Perspective on the Reproducibility Project: Psychology
    We revisit the results of the recent Reproducibility Project: Psychology by the Open Science Collaboration. We compute Bayes factors—a quantity that can be ...
  31. [31]
    The replicability crisis and public trust in psychological science
    When large replication attempts of research in psychological science fail, and publicly expressed criticisms suggest that these failures are a result of QRPs, ...
  32. [32]
    Response to Comment on “Estimating the reproducibility ... - Science
    Mar 4, 2016 · Using the Reproducibility Project: Psychology data, both optimistic ... replication has infinite sample size (3, 4). OSC2015 ...
  33. [33]
    Replications of replications suggest that prior failures to replicate ...
    Nov 13, 2020 · Critics said that a well-known psychology replication project failed to replicate findings because the replications had problems.