Open peer review

Open peer review (OPR) is an umbrella term for a variety of modifications to the traditional peer review process in scholarly publishing, aimed at enhancing transparency and aligning with open science principles. These modifications typically involve disclosing the identities of reviewers and authors to each other, publishing review reports alongside accepted manuscripts, and enabling wider participation in the process beyond a small group of selected experts. Unlike conventional anonymous peer review, OPR seeks to foster accountability, reduce biases, and make the evaluation of research more accessible to the broader community. The origins of OPR trace back to the early 1980s, with the first documented use of the term in a 1982 proposal by Douglas Armstrong advocating for signed reviews to encourage fairer assessments. It gained momentum in the 1990s and saw a surge in the 2000s, coinciding with the rise of open-access publishing and broader calls for openness in science, as reflected in over 122 definitions identified in scholarly literature by 2017. This evolution has been influenced by critiques of traditional peer review's opacity and inefficiencies, including issues like reproducibility crises and limited access to evaluation processes. A systematic analysis identifies seven key traits of OPR: open identities (disclosure between authors and reviewers), open reports (publicly available review comments), open participation (involving diverse contributors), open interaction (dialogue during review), open pre-review manuscripts (e.g., via preprints), open final-version commenting (post-publication input), and open platforms (digital tools for review). These traits can be combined in various configurations, with open identities appearing in about 90% of definitions and open reports in 59%. Journals implement OPR differently; some, for instance, employ a consultative model in which reviewers collaborate openly before editorial decisions.
OPR offers several benefits, including increased accountability for reviewers—who spend an average of 8.5 hours (median 5 hours) per review—and recognition of their contributions through tools like DOIs for reports. It promotes constructive feedback, reduces inconsistencies in evaluations, and accelerates the dissemination of research via preprints with linked reviews, as seen in initiatives like Review Commons. However, challenges persist, such as potential retaliation against reviewers (particularly early-career or underrepresented individuals), reluctance to participate due to privacy concerns, and a lack of standardized evidence on its overall impact on review quality. Additionally, open participation may introduce non-expert input or biases, complicating the process. Adoption of OPR has grown significantly, with Clarivate's Transparent Peer Review covering 123 journals and over 19,000 articles by 2022, and ongoing expansions in 2025 through partnerships like PLOS's collaboration with the Gates Foundation for preprint-linked reviews. Pioneering journals such as UCL Open Environment (launched in 2019) publish signed reviews with DOIs, emphasizing transparency and recognition for reviewers at all career stages. Despite these advances, OPR remains variably implemented, with only a subset of publishers fully embracing it amid ongoing debates about its effectiveness.

Definitions and Variants

Core Definition

Open peer review represents a set of modifications to the traditional peer review process, designed to increase transparency and openness in academic evaluation. Unlike conventional models, it incorporates elements that make parts of the review process publicly accessible, aligning with broader open science principles to promote accountability and collaboration in research assessment. The core components of open peer review typically include open identities, where the names of reviewers and authors are disclosed to each other; open reports, in which reviewer comments and evaluations are published alongside the accepted article; and open participation, allowing broader community involvement beyond a select group of invited experts. These features aim to transform peer review from a closed, editor-mediated procedure into a more inclusive mechanism. In contrast to single-blind peer review, where reviewers remain anonymous to authors but authors are known to reviewers, and double-blind review, which maintains mutual anonymity, open peer review eliminates these veils to foster greater responsibility among participants and reduce potential biases stemming from hidden identities. By revealing identities and processes, it seeks to encourage constructive feedback and deter superficial or adversarial reviews. The primary goals of open peer review are to enhance scientific integrity through verifiable evaluations, mitigate biases inherent in anonymous systems, and democratize the evaluation of research by involving diverse perspectives, ultimately contributing to a more robust and trustworthy scholarly ecosystem.

Types of Open Peer Review

Open peer review encompasses several distinct models that vary in the degree of openness applied to identities, reports, and participation in the review process. These models build on the core principle of transparency but differ in their implementation, often combining elements to suit specific workflows. While traditional peer review relies on anonymity to mitigate bias, open models emphasize transparency and accountability by revealing aspects of the process. The open identities model involves the disclosure of reviewers' and authors' names to each other, typically from the outset of the review process. This approach fosters a more direct and courteous exchange, as participants are aware of one another's identities, potentially encouraging constructive and professional feedback without the shield of anonymity. For instance, in this model, reviewers sign their reports, allowing authors to respond personally and engage in dialogue. In the open reports model, the full content of reviewer comments, editor decisions, and sometimes the timeline of revisions are made publicly available alongside the final published article. This transparency enables readers to assess the quality of the reviews and the manuscript's evolution through iterations. Reports are typically unedited or lightly redacted for clarity, providing insight into the decision-making process without revealing personal details unless combined with other open elements. The open participation model extends the review beyond a select group of invited experts, inviting crowdsourced input from the broader research community. This often occurs through public comment sections or forums attached to manuscripts, allowing diverse perspectives to contribute to evaluation and refinement. Such participation democratizes the process, though it requires mechanisms to moderate input for relevance and quality. Hybrid models integrate multiple open elements, such as signed reviews that are published only after the manuscript's acceptance, balancing openness with anonymity protections during initial evaluation.
These combinations can include consultative reviews where reviewers and editors collaborate openly before final decisions, or optional disclosures that allow participants to choose their level of openness. Hybrids offer flexibility, adapting open principles to varying publication needs. A key distinction among these models lies in their timing relative to publication: pre-publication open review occurs before formal acceptance, where manuscripts undergo open scrutiny during the initial phase to inform decisions; in contrast, post-publication open review follows an online-first release, enabling ongoing community commentary on already disseminated work. This temporal divide affects the review's function, with pre-publication review focusing on gatekeeping and post-publication review emphasizing continuous evaluation.
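Because the traits described above combine freely, a journal's review model can be described as a configuration of independent flags. The Python sketch below is illustrative only — the class and field names are our own invention, not a standard schema — but it shows how the seven traits compose, and why so many distinct OPR variants exist in practice.

```python
from dataclasses import dataclass, fields

@dataclass(frozen=True)
class OPRConfiguration:
    """One configuration of the seven OPR traits (names are illustrative)."""
    open_identities: bool = False
    open_reports: bool = False
    open_participation: bool = False
    open_interaction: bool = False
    open_prereview_manuscripts: bool = False
    open_final_version_commenting: bool = False
    open_platforms: bool = False

    def traits(self):
        """Names of the traits enabled in this configuration."""
        return [f.name for f in fields(self) if getattr(self, f.name)]

# A hybrid model: signed reviews published after acceptance.
hybrid = OPRConfiguration(open_identities=True, open_reports=True)
print(hybrid.traits())  # ['open_identities', 'open_reports']

# Seven independent traits admit 2**7 - 1 = 127 non-empty combinations,
# which helps explain the proliferation of competing OPR definitions.
print(2 ** len(fields(OPRConfiguration)) - 1)  # 127
```

The count of non-empty combinations is purely combinatorial; real journals use only a handful of these configurations.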

Historical Development

Early Experiments

The roots of open peer review trace back to the late 1980s and 1990s, when growing critiques of traditional anonymous peer review in medical and scientific publishing highlighted its potential for bias, lack of accountability, and inefficiency. These concerns were prominently discussed at the First International Congress on Peer Review in Biomedical Publication, held in 1989 and organized by JAMA and the BMJ, which served as a key forum for advocating greater transparency in the review process to enhance fairness and quality. Pioneering experiments emerged in 1999, marking the transition from critique to implementation. The Journal of Medical Internet Research (JMIR), an open-access journal focused on digital health, launched that year as the first to fully adopt open peer review from its inception, publishing the names of its reviewers alongside accepted articles to promote openness and author ownership of content. Concurrently, the British Medical Journal (BMJ) initiated a trial of signed reviews, revealing reviewers' identities to authors while maintaining anonymity from readers, as part of a broader shift toward transparency in medical publishing. These early efforts were motivated by the desire to mitigate biases inherent in blind review systems, such as favoritism or reluctance to provide candid feedback, and to foster accountability that could elevate the overall quality of reviews. Proponents argued that identifying reviewers would encourage more thoughtful and constructive critiques, as attribution might deter superficial or overly harsh comments. Outcomes from these initial trials were mixed, reflecting both promise and practical hurdles. The BMJ's randomized controlled trial, involving 250 reviewers across 125 manuscripts, found no significant difference in review quality between open and anonymous groups (mean scores of 3.09 vs. 3.06 on a validated scale), though identified reviewers were slightly more likely to recommend acceptance and provided comments perceived as courteous by authors. However, the trial also revealed challenges, including a higher refusal rate among potential reviewers (35% vs. 23% in the control group), indicating slower recruitment due to the added visibility and responsibility. JMIR's model, while innovative, faced similar adoption barriers in an era before widespread digital tools, yet it laid groundwork for open peer review in online publishing without reported declines in submission rates.

Modern Implementations

In the mid-2000s, major publishers began experimenting with digital tools to enable broader participation in peer review. A notable example was Nature's 2006 trial, which ran from June to September and offered authors an optional parallel open track alongside traditional confidential review. Manuscripts opting into the open process were posted on a dedicated website for signed public commentary, with editors incorporating all feedback into decisions. However, only about 5% of eligible authors chose this route, and of the 71 papers posted, 33 received no comments while the remaining 38 garnered just 92 technical remarks, underscoring early challenges in attracting meaningful engagement and scalability. The 2010s marked a shift toward post-publication open peer review models, leveraging immediate online publication to decouple review from gatekeeping. F1000Research, launched in the early 2010s, pioneered this approach by publishing articles first—after basic checks—and then inviting named expert reviewers to submit public reports, which are displayed alongside the paper with author responses. This transparent, iterative process allows revisions to be versioned and published openly, fostering ongoing dialogue while aligning with open science principles. The open science movement further propelled these innovations by integrating post-publication commentary into accessible platforms. PubPeer, founded in 2012, emerged as a key tool for ongoing public discussion of published papers, allowing signed or anonymous comments on any article via DOIs, which has facilitated community-driven scrutiny and corrections in fields like the life sciences. Technological advancements in the 2000s and 2010s, including web-based forums and collaborative wikis, addressed prior scalability issues by enabling asynchronous, distributed participation.
Some platforms functioned as moderated online forums for threaded discussions, while experiments such as the 2010 Shakespeare Quarterly issue employed wiki-style interfaces for signed, communal editing of reviews, allowing multiple contributors to refine feedback collectively before finalization. These tools democratized access, reduced logistical barriers, and supported the evolution from isolated critiques to dynamic, community-sustained evaluation.

Adoption and Implementation

In Traditional Publishing

BioMed Central, launched in 2000, pioneered open peer review by requiring signed reviews from all journals in its BMC series and publishing these reports alongside accepted articles post-publication to enhance transparency. This approach, which aligns with variants like open reports, aimed to credit reviewers publicly while maintaining the integrity of the process. The British Medical Journal (BMJ) introduced signed peer reviews in 1999, disclosing reviewer identities to authors and editors to foster accountability and openness in the evaluation process. By 2014, BMJ expanded this model to include pre-publication histories for accepted articles, making signed reviews, author responses, and editorial decisions publicly available to promote full transparency. Nature Publishing Group offered optional transparent peer review starting in 2020, allowing authors to publish reviewer comments and responses alongside accepted manuscripts if they chose to participate. From June 2025, this became mandatory for all primary research articles submitted to Nature, with peer review reports now published as standard across its workflow. Despite these adoptions, legacy publishers have faced significant resistance to open peer review due to longstanding traditions of reviewer anonymity, which protect against potential reprisals or conflicts arising from disclosed identities. To counter this, incentives such as formal credit for signed reviews—through citable acknowledgments or integration with reviewer-recognition platforms—have been introduced to encourage participation and recognize reviewers' contributions in academic evaluations.

In Digital Platforms and Preprints

Open peer review has found significant application in digital platforms designed for collaborative scientific exchange, particularly those hosting preprints and conference submissions. One prominent example is OpenReview.net, launched in 2013 as an extension of earlier experimental systems aimed at advancing open scholarship through transparent peer review processes. This platform facilitates public reviews and author rebuttals for submissions to major conferences, such as the International Conference on Learning Representations (ICLR), where anonymous reviews are released publicly, followed by open discussion periods that allow community input and author responses. By making the entire review process visible, OpenReview.net promotes transparency and enables broader participation beyond traditional reviewers, fostering iterative improvements to manuscripts before final acceptance decisions. Another key initiative is PREreview, established in September 2017 to encourage community-driven peer reviews of preprints, particularly those posted on bioRxiv and medRxiv. PREreview operates as a collaborative platform where volunteers, including early-career researchers, provide constructive feedback on life sciences and health-related preprints, emphasizing inclusivity and training in equitable reviewing practices. These reviews are openly shared alongside the preprints, allowing authors to receive diverse perspectives without the delays associated with journal submission workflows, and they often include collaborative "group reviews" to build reviewer skills and community norms. Preprint servers like arXiv and SSRN further integrate elements of open peer review by enabling post-publication commentary that extends beyond formal processes. arXiv, a repository for physics, mathematics, and related fields, supports open pre-review by making manuscripts immediately available for public scrutiny and informal feedback through external forums, mailing lists, or linked discussions, without requiring gatekeeping prior to dissemination.
Similarly, SSRN, focused on the social sciences and humanities, allows preprints to garner open commentary via reader downloads, citations, and networked discussions, promoting rapid idea exchange in a non-peer-reviewed environment that contrasts with slower traditional journals. These integrations highlight how digital platforms democratize evaluation, enabling researchers to solicit input from global communities shortly after upload. The primary benefits of open peer review in these digital contexts lie in accelerating scientific communication and providing timely, multifaceted input that enhances quality without the bottlenecks of conventional timelines. Preprints with open reviews allow for early identification of errors or innovations, increasing visibility and collaboration while reducing bias through inclusive participation. This approach contrasts with traditional publication delays, often spanning months, by offering immediate access to expert and community critiques that inform revisions and future work.

Advantages and Challenges

Purported Benefits

Proponents of open peer review argue that disclosing reviewer identities fosters greater accountability in the review process. By making reviewers identifiable, this approach discourages overly harsh, superficial, or unconstructive feedback, as individuals are more likely to provide thoughtful and balanced critiques when their contributions are publicly associated with them. This mechanism is intended to elevate the overall quality of reviews, promoting a culture of professionalism among participants. Another key advantage lies in enhanced transparency and verifiability of the review process. With review reports published alongside the article, readers gain direct insight into the evaluation process, enabling them to assess the rigor of the scrutiny applied and identify any potential conflicts of interest among reviewers. This openness allows the community to verify the fairness of decisions and reconstruct the context in which a paper was vetted. Open peer review is also said to encourage broader participation from the research community. Unlike traditional anonymous systems that rely on a select group of elite experts, open models invite diverse contributions from a wider pool of knowledgeable individuals, thereby democratizing access to expertise and enriching the evaluation with multifaceted perspectives. Finally, signed reviews provide tangible recognition to reviewers for their intellectual labor, which can motivate higher-quality engagement and aid in professional advancement. This acknowledgment, such as through permanent identifiers like DOIs assigned to reports, incentivizes participation by valuing the time and effort invested, potentially improving reviewer retention in the long term.

Criticisms and Drawbacks

One major criticism of open peer review is the reluctance of potential reviewers to participate, particularly when their identities are publicly disclosed. Reviewers may fear retaliation from authors, especially influential or senior figures whose work they critique harshly, leading to reputational damage or professional repercussions such as exclusion from collaborations or conferences. This concern is heightened for early-career researchers, who may avoid signing reviews to prevent backlash from more established scientists, ultimately resulting in fewer volunteers willing to engage in the process. Such hesitation contrasts with traditional anonymous review, where confidentiality shields reviewers from direct consequences. Another drawback involves the potential introduction of new biases stemming from known reviewer and author identities. Personal relationships, institutional affiliations, or competitive pressures can influence judgments, as reviewers might soften critiques to curry favor or avoid conflicts with colleagues at the same institution. For instance, junior reviewers assessing senior authors' submissions may hesitate to provide forthright criticism due to power imbalances, thereby compromising the objectivity the model is intended to preserve. Additionally, demographic factors like gender or career stage can skew participation, with male and more experienced reviewers more inclined to sign reviews, potentially amplifying existing inequities in the evaluation process. In models of open participation peer review, where feedback is crowdsourced from the broader community, evaluations may favor popularity over scientific merit. High-visibility or high-profile papers often attract disproportionate attention and comments, creating a "Matthew effect" in which already prominent work receives amplified validation while lesser-known submissions go under-reviewed, skewing overall assessments.
This dynamic reinforces cumulative advantages for well-resourced or established researchers, undermining the goal of equitable scrutiny across all manuscripts. Privacy concerns further limit the effectiveness of open peer review, particularly in sensitive fields, where exposing reviewer opinions publicly can stifle candid input. Reviewers may self-censor to protect their reputations or avoid unintended conflicts, reducing the depth and candor of feedback in areas involving controversial or ethically charged topics. This exposure risks personal or institutional backlash, deterring thorough critiques and favoring superficial or overly positive responses.

Current Landscape and Future Prospects

In 2025, Nature implemented a mandate requiring all primary research articles to include published peer review reports and author responses as standard, building on successful pilots that demonstrated enhanced transparency. This universal transparent process applies to newly submitted articles selected for peer review, aiming to foster greater trust in the scientific record. MDPI expanded its open peer review model across all journals starting in 2018, with adoption rates increasing significantly by 2023 to approximately 36% of published articles, where reviewer identities and reports are made public to promote accountability and review quality. This approach allows authors to opt for open review, resulting in detailed, citable reviews that contribute to the scholarly record. By 2024, participation settled at around 21%, reflecting sustained integration in MDPI's publishing workflow. In 2025, PLOS advanced hybrid open models incorporating AI-assisted elements to mitigate reviewer shortages, including pilots that enable posting peer review comments on preprints during evaluation and portable reviews from prior submissions. These initiatives, such as a partnership with the Gates Foundation focused on global health research, combine automation for technical checks (e.g., reference completeness) with human oversight for contextual assessment, reducing workload and encouraging broader participation. Amid the 2025 peer review crisis—characterized by surging submissions and reviewer burnout—open peer review adoption has grown notably, driven by innovations like intelligent matching tools that streamline reviewer assignment. Platforms and journals report rising uptake of transparent practices to alleviate system strains. Studies from open review platforms indicate improved review quality through greater accountability and detail, as evidenced by engagement metrics on sites like F1000Research. As of October 2025, surveys indicate that around 32% of reviewers use generative AI in their review process.
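Intelligent reviewer-matching tools of the kind mentioned above generally score candidate reviewers by the similarity between a manuscript and each reviewer's publication record. The following is a minimal, hypothetical sketch of that idea using bag-of-words cosine similarity; production systems use richer embeddings and metadata, and the reviewer profiles here are invented for illustration.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def match_reviewers(abstract: str, profiles: dict, top_k: int = 2) -> list:
    """Rank candidate reviewers by textual overlap with a manuscript abstract."""
    doc = Counter(abstract.lower().split())
    scores = {name: cosine(doc, Counter(text.lower().split()))
              for name, text in profiles.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

profiles = {  # hypothetical reviewer expertise summaries
    "r1": "open peer review transparency scholarly publishing",
    "r2": "protein folding structural biology",
    "r3": "preprint servers open science publishing",
}
print(match_reviewers("transparency in open peer review publishing", profiles))
# → ['r1', 'r3']
```

The design choice is deliberate: scoring is transparent and auditable, which matters if such tools are to support rather than undermine open review.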

Debates and Evolving Practices

One ongoing debate in open peer review centers on the tension between full openness—where reviewer identities and reports are publicly disclosed—and hybrid models that incorporate optional anonymity to protect reviewers from potential retaliation or professional repercussions. Editors and publishers often favor hybrid approaches, which balance transparency against the need to maintain a willing reviewer pool. This approach addresses concerns that mandatory disclosure could deter participation, as surveys show many reviewers are reluctant due to privacy concerns. For instance, some publishers implement post-publication transparency alongside pre-acceptance double-anonymized review, where reviewers may choose to remain unnamed, enhancing accountability without compromising candid feedback. Emerging practices increasingly integrate artificial intelligence (AI) and automation to alleviate human reviewer shortages in open peer review, particularly for initial manuscript screening and quality checks. The peer review process currently demands approximately 100 million researcher hours annually, with a small proportion of scientists performing the majority of reviews, leading to imbalances and delays; AI tools can automate compliance verification, formatting assessments, and detection of methodological inconsistencies with 74% accuracy, freeing human reviewers for substantive open evaluations. Funding bodies such as Australia's National Health and Medical Research Council employ AI for reviewer matching in grant assessment processes, a model adaptable to open peer review platforms, where over 65% of researchers report AI aiding in identifying overlooked issues to improve review quality. These tools support hybrid open systems by handling preliminary tasks, though ethical guidelines emphasize responsible integration to avoid biases in automated decisions. Inclusivity remains a contentious issue, with critics arguing that open peer review may inadvertently favor well-connected researchers from the Global North, perpetuating biases in reviewer selection and authorship dominance.
For example, first authorship in large collaborative studies is disproportionately held by U.S.-based scholars, underrepresenting researchers from other regions, while language barriers and AI-detection tools further disadvantage non-native English speakers in open processes. Vulnerable groups, such as early-career scholars, face heightened risks of retaliation in fully open models, potentially deterring diverse participation and diluting critical input from underrepresented voices. To mitigate these concerns, calls for mandatory training in equity-centered reviewing have grown, with initiatives like Reviewer Zero advocating for programs that teach intersectional reviewing practices, alongside grassroots efforts such as the Coalition for Open Science Networks (COSN) to broaden reviewer pools globally. Looking ahead, future prospects for open peer review include standardization across funding bodies and technological integrations like blockchain to ensure immutable records of reviews. Pilot programs by funders, such as those exploring transparent review mandates, aim to harmonize practices and reward equitable participation, potentially reducing biases through shared reviewer databases. Blockchain-based systems, like the proposed Decentralised Academic Publishing (DAP) system, leverage tamper-proof ledgers to store review metadata and assign tokenized rewards (e.g., Ergion tokens) for timely, high-quality contributions, fostering a standardized, transparent ecosystem. Ongoing developments under initiatives like the EU's Horizon TruBlo project suggest these innovations could accelerate adoption, enhancing trust and fairness in open peer review while addressing current fragmentation.
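The tamper-proof ledgers proposed for systems like DAP rest on a simple mechanism that can be illustrated with a minimal hash chain: each review record is hashed together with the hash of the previous record, so altering any past entry invalidates every later link. This is a toy sketch of the principle only, not the DAP design, and the record fields are invented.

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a review record together with the previous link's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain: list, record: dict) -> None:
    """Append a review record, linking it to the current chain head."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "hash": record_hash(record, prev)})

def verify(chain: list) -> bool:
    """Recompute every link; any tampering with history breaks the chain."""
    prev = "0" * 64
    for link in chain:
        if link["hash"] != record_hash(link["record"], prev):
            return False
        prev = link["hash"]
    return True

chain = []
append(chain, {"reviewer": "r-001", "doi": "10.1234/example", "verdict": "minor revisions"})
append(chain, {"reviewer": "r-002", "doi": "10.1234/example", "verdict": "accept"})
print(verify(chain))                      # True
chain[0]["record"]["verdict"] = "reject"  # tamper with an earlier review
print(verify(chain))                      # False
```

A distributed ledger adds replication and consensus on top of this linking, which is what makes review histories auditable without a trusted central publisher.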

References

  1. [1]
    What is open peer review? A systematic review - PMC
    Apr 27, 2017 · “Open peer review” (OPR), despite being a major pillar of Open Science, has neither a standardized definition nor an agreed schema of its features and ...
  2. [2]
    Open peer review, pros and cons from the perspective of an early ...
    Oct 9, 2023 · Open peer review (OPR) has gained popularity in recent years as a tool to increase transparency, rigor, and inclusivity in research.
  3. [3]
    (Open!) Peer review – an overview - OPERAS Innovation Lab
    Jul 5, 2023 · Open peer review (OPR) is a recommended practice of open science to transform the PR process into an open scientific discourse. Active ...
  4. [4]
    Pioneering approaches in open peer review - The Official PLOS Blog
    Sep 11, 2025 · Open peer review is essential to a healthy research ecosystem. It promotes accountability and reduces potential for bias, by making the ...Missing: definition | Show results with:definition
  5. [5]
    Open peer review: what is it and what is UCL Press doing?
    Sep 23, 2024 · Peer review acts to validate and assess work and is the current system used to assess the quality of a manuscript before it is published.<|control11|><|separator|>
  6. [6]
    The current state of open peer review - Clarivate
    Sep 20, 2022 · Open or transparent peer review typically refers to the publication of reviews alongside articles published in a journal. In recent years, ...Missing: definition | Show results with:definition
  7. [7]
  8. [8]
    Open peer review, pros and cons from the perspective of an early ...
    Oct 9, 2023 · Peer review, or the evaluation of submissions to an academic journal by a panel of reviewers in the same subject area, is considered by many to ...
  9. [9]
    Peer review: concepts, variants and controversies - PMC
    Under the open model of peer review, the authors and reviewers are known to each other. Open reviews can be further subdivided into pre- and post-publication ...
  10. [10]
    A scoping review of recent evidence on key aspects of Open Peer ...
    Feb 8, 2024 · Combined with growing interest in Open Science practices, Open Peer Review (OPR) has become of central concern to the scholarly community.Introduction · Methodology · Results · Discussion
  11. [11]
    Three Decades of Peer Review Congresses - JAMA Network
    The editors of JAMA and the BMJ have held conferences every 4 years since 1989 to present research into the quality of publication processes, including ...Missing: experiments | Show results with:experiments
  12. [12]
    Welcome to the Journal of Medical Internet Research
    Aug 11, 1999 · Our peer-review process will be rigorous and constructive, helping authors to improve their manuscripts and guaranteeing a high-quality journal.Missing: open | Show results with:open
  13. [13]
    Opening up BMJ peer review
    ### Summary of BMJ's Motivations and Outcomes for Open Peer Review
  14. [14]
    Effect of open peer review on quality of reviews and on ... - NIH
    McNutt et al reported that reviewers who chose to sign their reviews were more constructive in their comments. This difference from our findings ...Missing: feedback | Show results with:feedback
  15. [15]
    Effect of open peer review on quality of reviews and on ... - The BMJ
    Jan 2, 1999 · We therefore conducted a randomised controlled trial to confirm that open review did not lead to poorer quality opinions than traditional review.Results · Response To Authors'... · Discussion
  16. [16]
    Chemists like to experiment, just not with opening peer review - C&EN
    Nov 26, 2018 · Nature tried an open peer review experiment in 2006, which failed when only 5% of authors opted in. “There was essentially no appetite for ...
  17. [17]
    Nature open peer review trial - ReimagineReview - ASAPbio
    Nature open peer review trial 2006 experiment with open commenting on prepublication manuscripts.
  18. [18]
    What is open peer review? A systematic review - F1000Research
    Open peer review is an umbrella term for adapting peer review models to align with Open Science, including open identities, publishing reports, and greater ...
  19. [19]
    PubPeer's secret is out: Founder of controversial website reveals ...
    Aug 31, 2015 · He launched PubPeer in late 2012. The site allows users to post under their own names or anonymously; even Stell and the Smith brothers don't ...
  20. [20]
    Lessons learned from open peer review: a publisher's perspective
    Dec 23, 2017 · There is a school of thought that publishing reviewer reports will encourage better-quality, more constructive comments. But is that actually ...
  21. [21]
    Why we embrace open peer review at BMJ Open - BMJ Blogs
    Sep 21, 2021 · The BMJ decided in 1999 to start adopting an open-peer review approach, albeit implemented in stages, beginning with sharing reviewer names and ...
  22. [22]
    Peer Review | Nature
    For manuscripts submitted before 16th June 2025, authors are provided the opportunity to opt out of this scheme at the completion of the peer review process, ...Missing: mandatory | Show results with:mandatory
  23. [23]
    Transparent peer review (TPR) now standard for all newly submitted ...
    Jun 19, 2025 · From this week, all primary research articles submitted to Nature will automatically undergo transparent peer review (TPR) as standard, if they ...
  24. [24]
    Guidelines for open peer review implementation
    Feb 27, 2019 · Platforms and publishers implement OPR tools to encourage wider and more transparent discourse within the review process.
  25. [25]
    About - OpenReview
    OpenReview.net is built over an earlier version described in the paper Open Scholarship and Peer Review: a Time for Experimentation published in the ICML 2013 ...
  26. [26]
    Call for Papers - ICLR 2026
    Reviewing Process​​ Submissions to ICLR are uploaded on OpenReview, which enables public discussion. Official reviews are anonymous and publicly visible.
  27. [27]
    Preprint Journal Clubs: Building A Community Of PREreviewers
    Feb 2, 2018 · In response to these results, and after discussions with various interest groups, we launched PREreview in September, 2017. PREreview today.
  28. [28]
    Recommendations for accelerating open preprint peer review to ...
    Feb 29, 2024 · Preprints are enabling new forms of peer review that have the potential to be more thorough, inclusive, and collegial than traditional journal ...
  29. [29]
    Why Preprints Benefit Research
    Preprints offer important benefits, including increasing visibility and attention, receiving early feedback, and establishing priority of discoveries and ideas.
  30. [30]
    Pros and cons of open peer review | Nature Neuroscience
    Advocates of open review argue that openness will force referees to think more carefully about the scientific issues and to write more thoughtful reviews; it ...
  31. [31]
    Opening peer-review: the democracy of science - PMC
    Research into the effect of open peer review suggests numerous benefits, in particular accountability, fairness and crediting reviewers for their efforts [5-7].
  32. [32]
    Open Peer Review | Essential Information from F1000Research
    Enable conversation within the research community with fully transparent peer review · Reduce the possibility of bias, as everything is openly available to all ...
  33. [33]
    Survey on open peer review: Attitudes and experience amongst ...
    Open participation could in principle increase incentives to peer review by enabling reviewers to themselves select the works that they consider themselves ...
  34. [34]
    Dynamics of cumulative advantage and threats to equity in open ...
    ... 'Matthew effect', whereby already successful scientists tend to receive ...
  35. [35]
    (PDF) Challenges to open peer review - ResearchGate
    Aug 6, 2025 · Purpose The purpose of this paper is to assess what the challenges to open peer review (OPR) are, relative to traditional peer review (TPR).
  36. [36]
    Transparent peer review to be extended to all of Nature's research ...
    Jun 16, 2025 · Nature started mandating peer review for all published research articles only in 1973 (M. Baldwin Notes Rec. 69, 337–352; 2015). But the ...
  37. [37]
    Celebrating Peer Review Week (25–29 September 2023) - MDPI
    Open peer review demonstrated its value and, as a result, was expanded to encompass all of our journals in 2018. As of 2023, approximately one-third of MDPI ...
  38. [38]
    How and Why MDPI Offers Open Peer Review
    Aug 12, 2025 · MDPI offers open peer review to increase transparency, where reports and reviewer identities are published, and to boost the quality of review ...
  39. [39]
    The promise and perils of AI use in peer review
    Sep 18, 2025 · The theme for this year's Peer Review Week explores how we can rethink Peer Review in the AI Era. Publishers like PLOS are carefully considering ...
  40. [40]
    The peer-review crisis: how to fix an overloaded system - Nature
    Aug 6, 2025 · Journals and funders are trying to boost the speed and effectiveness of review processes that are under strain.
  41. [41]
    The Growing Adoption of Open Peer Review Practices - F1000
    Jul 31, 2025 · There has been a growing uptake of open peer review practices that aim to improve transparency in this vital stage of the publishing process.
  42. [42]
    Ten considerations for open peer review - F1000Research
    Jun 29, 2018 · It aims to bring greater transparency and participation to formal and informal peer review processes. But what is meant by `open peer review', ...