Scientific misconduct

Scientific misconduct refers to fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results. Fabrication involves inventing data or results without conducting the underlying experiments; falsification entails manipulating research materials, equipment, or processes to produce misleading outcomes; and plagiarism constitutes the appropriation of another person's ideas, processes, results, or words without proper attribution. These practices deviate from accepted ethical standards in science, prioritizing personal or institutional gain over truthful inquiry. Empirical surveys reveal that self-reported rates of fabrication and falsification range from 1% to 2%, though broader questionable practices—such as selective reporting or failing to disclose conflicts of interest—affect a larger proportion of researchers, contributing to the replication crisis across disciplines. A meta-analysis estimates the pooled prevalence of research misconduct at approximately 3%, with higher admissions in some fields, such as biomedicine, where pressures to publish incentivize corner-cutting. Such behaviors erode public confidence in scientific claims and waste resources on irreproducible findings. The repercussions extend beyond individual perpetrators, including retracted publications that mislead subsequent research, diminished reputations for affected institutions, and collateral citation penalties for innocent collaborators averaging 8-9%. Detection often relies on post-publication scrutiny, with databases documenting thousands of retractions annually, underscoring systemic vulnerabilities in peer review and the "publish or perish" culture. Addressing misconduct demands robust institutional oversight and incentives aligned with rigorous, replicable evidence rather than output volume.

Definition and Scope

Core Components

Scientific misconduct is fundamentally characterized by three core components: fabrication, falsification, and plagiarism, as defined by the U.S. Office of Research Integrity (ORI). These acts occur in the context of proposing, performing, reviewing, or reporting research results and represent intentional deviations from the norms of scientific practice that undermine the reliability of the research record. This tripartite framework, established in federal policy under 42 CFR Part 93, excludes honest errors or differences of opinion, emphasizing intent to deceive as a key criterion for misconduct. Fabrication involves inventing data, results, or records that do not exist and presenting them as legitimate findings in the research process or outputs. For instance, this includes generating fictitious experimental outcomes or patient data without conducting the underlying studies. Such practices erode the foundational trust in the scientific record, as fabricated results cannot be replicated or verified through independent means. Falsification entails manipulating research materials, equipment, processes, or data—through alteration, selective omission, or suppression—such that the represented results mischaracterize the actual conduct or outcomes of the research. Examples include altering images in publications to exaggerate effects, trimming data sets to hide inconsistencies, or modifying statistical analyses to achieve desired significance levels. This component directly compromises accuracy and reproducibility, core tenets of scientific validity. Plagiarism consists of the unauthorized appropriation of another person's ideas, methods, results, or written work without proper attribution, effectively claiming intellectual ownership of unearned contributions. This extends beyond textual copying to include uncredited reuse of experimental designs or data interpretations, violating the principle of originality in scholarly communication. 
While less frequently associated with data integrity than fabrication or falsification, plagiarism undermines collaborative knowledge-building by obscuring the true provenance of scientific advancements. These components are not mutually exclusive and can overlap in complex cases, but investigations require evidence of intent beyond negligence. Institutional policies worldwide, such as those from the National Institutes of Health (NIH), align closely with this U.S. model, reinforcing its role as a benchmark for identifying conduct that threatens public trust in science.

Differentiation from Questionable Research Practices

Scientific misconduct is narrowly defined under U.S. federal policy as fabrication, falsification, or plagiarism in proposing, performing, reviewing, or reporting research results, with an emphasis on intentional misrepresentation that deviates materially from accepted scientific practices. This excludes honest errors, differences of interpretation, or honest differences of opinion, focusing instead on deliberate acts that undermine the integrity of the research record. Questionable research practices (QRPs), by contrast, refer to methodological, analytical, or reporting choices that fall short of outright fraud but can systematically bias results toward false positives or inflated effect sizes. Common examples include p-hacking (e.g., repeatedly analyzing data subsets until statistical significance emerges), selective outcome reporting (omitting non-significant results), failing to preregister hypotheses or analyses, and hypothesizing after results are known (HARKing). These practices often arise from incentives like publication pressure rather than explicit intent to deceive, though they erode reproducibility and trustworthiness in aggregate. The core distinction hinges on intent, severity, and institutional thresholds: misconduct requires provable, deliberate misrepresentation leading to invalidation of findings, triggering formal investigations and sanctions, whereas QRPs are subtler deviations that may not invalidate individual studies but contribute to broader replicability crises when widespread. Empirical surveys underscore this divide, estimating rates at around 2-8% for fabrication or falsification in recent years, compared to 50% or higher self-reported engagement in QRPs among researchers. However, boundary cases exist where QRPs escalate to misconduct if pursued knowingly to mislead, prompting calls for clearer guidelines to address gray areas without diluting accountability for deliberate fraud.
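The false-positive inflation produced by one such QRP, optional stopping (a form of p-hacking), can be illustrated with a small simulation. The batch size, peek schedule, and simple z-test below are illustrative assumptions, not drawn from any cited study:

```python
import math, random

def p_value(sample):
    # Two-sided z-test of mean 0, assuming known variance 1.
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2))

def study(peeks):
    # Collect batches of 20 null observations, testing after each batch;
    # declare "significance" as soon as p < 0.05 (optional stopping).
    data = []
    for _ in range(peeks):
        data.extend(random.gauss(0, 1) for _ in range(20))
        if p_value(data) < 0.05:
            return True
    return False

random.seed(0)
TRIALS = 2000
single = sum(study(1) for _ in range(TRIALS)) / TRIALS
peeked = sum(study(5) for _ in range(TRIALS)) / TRIALS
print(f"one fixed test: {single:.3f}, peeking 5 times: {peeked:.3f}")
```

Even though every dataset is pure noise, repeatedly testing as data accumulate roughly doubles or triples the nominal 5% false-positive rate, which is why preregistered analysis plans treat unplanned interim testing as a bias source.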

Historical Development

Pre-20th Century Examples

One prominent pre-20th-century case of scientific misconduct involved the fabrication of fossil-like artifacts in 1725–1726, orchestrated as a hoax against Johann Bartholomäus Adam Beringer, a professor of medicine and anatomy at the University of Würzburg. Beringer, who had previously dismissed the authenticity of fossils as mere "sports of nature" or divine creations rather than remnants of extinct life, began receiving unusual stones from local hillsides bearing intricate carvings of plants, animals, astronomical bodies such as the sun and moon, and even Hebrew inscriptions including what appeared to be the names of God. These "lying stones" (Lügensteine) were secretly carved from local limestone by students under the direction of Beringer's colleagues, the physician Ignaz Roderich and the mathematician Johann Georg von Eckhart (or Wagner in some accounts), who sought to ridicule Beringer's credulity and rigid views on natural history. Beringer, convinced of their genuineness as divinely imprinted lapides figurati, amassed over 2,000 specimens and published Lithographiae Wirceburgensis in April 1726, a 1,000-page treatise lavishly illustrated with engravings that cataloged and defended the stones as authentic geological phenomena predating the biblical Flood. The hoax unraveled later that year when the perpetrators, fearing exposure or perhaps regretting its scale, confessed; evidence included carving tools found in Roderich's possession and inconsistencies in the stones' origins, such as some bearing contemporary dates or being sourced from Beringer's own garden. Beringer faced professional ruin, including dismissal from his positions at the University of Würzburg, financial loss from printing costs (estimated at 1,600 florins), and public humiliation; he attempted to suppress the book by purchasing all available copies, though some survived and later editions revealed the hoax. 
The perpetrators faced no severe repercussions beyond reprimands, highlighting the era's limited institutional mechanisms for addressing misconduct, which often blurred pranks, theological disputes, and empirical inquiry in nascent scientific institutions. Earlier instances, such as 17th-century fabrications of zoological specimens (e.g., assembled "dragons" from disparate animal parts presented as real monsters), occasionally deceived scholars but were typically exposed through anatomical scrutiny rather than formal investigation, reflecting the pre-institutional character of scientific validation. These cases underscore that pre-20th-century misconduct frequently stemmed from personal rivalries or satirical intent rather than career advancement, with detection relying on peer scrutiny amid emerging empirical standards.

Modern Era and Institutional Responses

The modern era of scientific misconduct gained prominence in the late 20th century, marked by high-profile cases that exposed vulnerabilities in biomedical and physical sciences research. In 1983, Harvard cardiologist John Darsee was found to have fabricated data across multiple papers, leading to a 10-year ban from federal funding after investigations revealed systematic fraud in cardiac studies spanning years. Similarly, psychologist Stephen Breuning admitted in 1983 to falsifying data on behavioral interventions for intellectually disabled individuals, resulting in retractions and highlighting ethical lapses in clinical research. These incidents, alongside others like the 1981 case of Cyril Burt's twin studies data (posthumously judged fraudulent, though the verdict remains debated), spurred congressional scrutiny and public awareness of misconduct's prevalence, with U.S. cases numbering between 40 and 100 from 1980 to 1990. The 2000s saw escalated scandals, including physicist Jan Hendrik Schön's 2002 fabrication of nanotechnology results, which prompted retractions of 28 papers from Nature and Science, and South Korean researcher Hwang Woo-suk's 2004-2005 stem cell cloning fraud, involving falsified images and ethical violations that led to over 20 retractions. Retraction rates surged, rising from fewer than 100 annually before 2000 to nearly 1,000 by the mid-2010s, with misconduct—encompassing fraud, falsification, and plagiarism—accounting for approximately 67% of cases analyzed from 1992 to 2012. This trend persisted into the 2020s, with data problems driving over 75% of retractions by 2023, amid pressures from publication incentives and technological detection tools. Institutional responses formalized in the U.S. with the establishment of the Office of Research Integrity (ORI) in 1993, evolving from the earlier Office of Scientific Integrity (1989) to oversee Public Health Service-funded research. 
ORI's mandate includes investigating allegations, enforcing the federal definition of misconduct under 42 CFR Part 93 (fabrication, falsification, or plagiarism in proposing, performing, or reviewing research), and promoting education to prevent violations. It relies on institutions for initial inquiries, with ORI providing oversight and debarment recommendations, as seen in cases like the 2010s findings against researchers for image manipulation in grant applications. Globally, journals adopted standardized procedures via the Committee on Publication Ethics (COPE), founded in 1997, which issues guidelines for handling misconduct allegations, emphasizing prompt investigations and corrections. Retraction Watch, launched in 2010, has tracked over 35,000 retractions by 2024, facilitating transparency and pressuring publishers to retract flawed papers more swiftly. Universities implemented internal policies, appointing deciding officials to adjudicate claims and impose sanctions, though surveys indicate ongoing challenges, with 64% of integrity officers in 2025 favoring self-regulation over stricter external mandates. These measures reflect causal links between unchecked incentives—like "publish or perish"—and misconduct, prioritizing empirical oversight to safeguard scientific validity without stifling inquiry.

Forms of Misconduct

Fabrication and Falsification

Fabrication in scientific research entails inventing data, results, or entire experiments and subsequently recording or reporting them as genuine. This form of misconduct introduces entirely fictitious findings into the scientific record, often to support preconceived hypotheses or secure funding and publications. Falsification, by contrast, involves manipulating existing materials, equipment, processes, or data—through selective omission, alteration, or misrepresentation—such that the findings deviate from actual observations without accurately reflecting the underlying research. Both practices erode the foundational reliability of the scientific literature, potentially misleading subsequent studies, policy decisions, and resource allocation for years until detection occurs via audits, replication failures, or whistleblower reports. A prominent historical example of fabrication is the Piltdown Man hoax, where in 1912, fragments purportedly representing an early human ancestor were presented by Charles Dawson and others, only to be exposed in 1953 as a deliberate forgery involving a modified orangutan jaw and stained human skull bones. In a modern biomedical case, cardiologist John Darsee at Harvard Medical School fabricated data across numerous studies in the early 1980s, leading to the retraction of over 100 papers and his debarment from federal funding after an investigation revealed systematic invention of experimental results to simulate successful cardiac research outcomes. Similarly, South Korean researcher Hwang Woo-suk claimed in 2004 and 2005 to have created patient-specific stem cell lines via cloning, but investigations confirmed fabrication of core data, resulting in journal retractions and his dismissal, highlighting pressures in high-stakes fields like regenerative medicine. Falsification often manifests in subtler alterations, such as duplicating or splicing gel images to fabricate dose-response patterns. 
In neuroscience, a 2024 investigation into over 60 papers by NIH official Eliezer Masliah revealed apparent falsification of Western blots—key protein analysis visuals—across studies on Alzheimer's and Parkinson's diseases, with duplicated bands and implausible patterns suggesting manipulation to align with expected synaptic pathology results. Another instance involved physicist Jan Hendrik Schön, whose 2000s nanotechnology publications included falsified data traces mimicking molecular device breakthroughs; Bell Labs' 2002 probe confirmed fabrication and falsification in 16 papers, leading to widespread retractions and underscoring vulnerabilities in high-profile experimental fields. Empirical surveys estimate that 1.97% of scientists admit to having fabricated, falsified, or modified data at least once, though underreporting is likely due to career repercussions and institutional reluctance to pursue allegations. Detection typically relies on forensic tools like image analysis software or statistical outliers, but systemic incentives—such as publish-or-perish dynamics—perpetuate under-detection, as evidenced by rising retractions tied to these misconduct types.
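Beyond image forensics, simple numerical consistency checks have exposed fabricated summary statistics. The GRIM-style test, for example, asks whether a reported mean is even arithmetically possible given the sample size and an integer response scale. The sketch below is a minimal illustration with assumed scale bounds, not a reimplementation of any published tool:

```python
def grim_consistent(mean, n, lo=1, hi=7, decimals=2):
    # A mean of n integer responses on a lo..hi scale must equal some
    # integer sum divided by n; check every achievable rounded mean.
    target = round(mean, decimals)
    return any(abs(round(s / n, decimals) - target) < 1e-9
               for s in range(lo * n, hi * n + 1))

# With n=28 responses on a 1-7 scale, a reported mean of 3.54 is
# achievable (99/28 rounds to 3.54), but 3.55 cannot arise from any
# integer sum and would flag the paper for closer scrutiny.
print(grim_consistent(3.54, 28))
print(grim_consistent(3.55, 28))
```

Checks like this are cheap screens, not proof of fraud: an impossible mean may reflect a typo, a misreported n, or a non-integer scale, so flagged values only justify requesting the raw data.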

Plagiarism and Authorship Violations

Plagiarism in scientific research constitutes the appropriation of another person's ideas, processes, results, words, images, or structure without proper attribution, presenting them as one's own original work. This misconduct undermines the foundational principle of intellectual credit in scholarship, where novelty and proper sourcing are paramount. In practice, it includes verbatim copying of text, data, or methodologies from prior publications without citation, as well as paraphrasing ideas insufficiently acknowledged. Authorship violations encompass manipulations in crediting contributors to scientific outputs, such as including undeserving individuals (gift or honorary authorship), excluding substantial contributors (ghost authorship), or coercing authorship for senior figures. These practices distort accountability and inflate metrics like h-indexes, often driven by hierarchical pressures in labs where principal investigators demand inclusion regardless of contribution. Duplicate publication, a related issue, involves republishing substantially similar content without disclosure, which can mislead citation counts and evidence syntheses. Generative AI tools in scholarly writing have created new borderline cases that overlap with plagiarism and authorship violations. Undisclosed use of AI to draft, translate, or paraphrase manuscripts, or reuse of AI-generated passages that echo training data, is increasingly treated by editors as misrepresentation similar to plagiarism or ghost authorship, because the real origin of the text is hidden. In response, journal and publisher policies in the 2020s state that AI tools cannot be listed as authors and that human contributors remain fully responsible for any content produced with their assistance. 
In parallel, a few experimental projects have tried to avoid such opacity by assigning explicit authorial roles to AI-based identities: the Aisentica Research Group, for example, credits the AI persona Angela Bogdanova as a Digital Author Persona with an ORCID iD and a Zenodo DOI and lists this non-human figure as a contributor in philosophical and meta-theoretical publications and public essays. These practices remain rare and do not change the legal consensus that only humans can be authors, but they illustrate one proposed response to invisible AI assistance by making automated contribution visible in bylines and metadata instead of letting it act as an unacknowledged ghostwriter. Notable cases illustrate the scope. In the 1970s, Iraqi researcher Elias Alsabti plagiarized content from multiple medical journals, including copying figures and text from papers on cancer and virology, leading to retractions but minimal long-term sanctions due to lax enforcement at the time. More recently, in 2016, French physicist Étienne Klein faced accusations of serial plagiarism, lifting passages from colleagues and literary works into his popular science writings, prompting investigations by the French Academy of Sciences. Authorship disputes have arisen in retracted misconduct cases, such as those analyzed in 2022, where republished papers altered author lists, raising questions about who retains responsibility for original errors. Empirical data reveal plagiarism's prevalence in retractions, accounting for 9.8% of cases from 1996 to 2015, often intertwined with fraud. A meta-analysis estimated that 2.9% of researchers admit to plagiarism, with higher rates in manuscript submissions: one study found 30% contained plagiarism from the senior author's prior work. Authorship issues compound this, as evidenced by Office of Research Integrity findings of 19 plagiarism cases from 1992 to 2005, many involving improper credit in federally funded projects. 
Detection relies on tools like CrossCheck and manual reviews, but underreporting persists due to institutional reluctance to pursue high-profile violators. Consequences include paper retractions, funding bans, and career damage, though enforcement varies, with U.S. policy under 42 CFR Part 93 mandating investigations for plagiarism in public health service-supported research.

Data and Image Manipulation

Data manipulation in scientific research entails the intentional alteration, selective omission, or fabrication of numerical or experimental data to misrepresent findings, distinct from mere errors or questionable practices by virtue of deceptive intent. This form of misconduct often involves modifying raw data points, duplicating entries to inflate sample sizes, or applying unauthorized statistical adjustments to achieve statistical significance, thereby undermining the reproducibility of results. The U.S. Office of Research Integrity defines falsification, which encompasses data manipulation, as "manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record." Such practices erode trust in empirical evidence, as altered datasets can lead to erroneous conclusions propagated through citations and policy decisions. Image manipulation, a prevalent subtype particularly in biomedical and life sciences, involves digitally editing visual representations such as micrographs, Western blots, electrophoretic gels, or fluorescence images to fabricate or exaggerate evidence. Common techniques include duplicating image elements (e.g., gel bands copied across lanes to simulate replicates), splicing disparate images without disclosure, cloning artifacts to conceal impurities, or enhancing contrast to create false signals from noise. These alterations, often performed using software like Adobe Photoshop, can manufacture apparent biological effects, such as protein expression patterns that do not exist. Journals like Nature and Science have issued guidelines prohibiting undisclosed edits, emphasizing that even "beautification" crosses into misconduct if it distorts interpretation. Empirical studies highlight the scale of image manipulation in published work. 
A forensic analysis of figures from 960 biomedical research papers spanning 40 journals and four decades identified problematic images in 3.8% of publications, with approximately half showing hallmarks of deliberate tampering, such as inconsistent pixel patterns or error level discrepancies indicative of post-hoc editing. In radiology research, a survey of 310 professionals found 11.9% admitting to falsifying medical images, with cherry-picking nonrepresentative slices being the most frequent method (50.3% of cases). Detection relies on automated tools analyzing image forensics—e.g., identifying cloned regions via Fourier transform discrepancies—or manual scrutiny by platforms like PubPeer, which has flagged thousands of suspicious figures since 2012. Notable cases illustrate consequences. In September 2024, a Science investigation revealed falsified Western blots in over 50 papers from neuroscientist Eliezer Masliah's lab at the National Institute on Aging, including duplicated bands and anomalous splicing that misrepresented protein data in Alzheimer's and Parkinson's studies. Similarly, Harvard Medical School researcher Khalid Shah faced allegations in 2024 of data falsification and image plagiarism across 21 papers, prompting institutional probes and paper corrections. Cell biologist Yoshinori Watanabe's 2018 investigation confirmed manipulated images in multiple publications, leading to retractions. These incidents, often uncovered post-publication, have spurred mandatory raw data archiving and AI-assisted screening by journals, though under-detection persists due to resource constraints in oversight.
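The clone-detection idea behind such forensic tools can be sketched in miniature: hash every small patch of an image and flag exact repeats, which are vanishingly unlikely in genuinely noisy data. The synthetic "blot," patch size, and image dimensions below are assumptions for illustration; production tools use far more robust correlation- and frequency-based methods that also catch rescaled or re-compressed clones:

```python
import random
from collections import defaultdict

def duplicate_patches(img, k=4):
    # Hash every k x k patch; exact repeats across the image are
    # candidate cloned regions.
    h, w = len(img), len(img[0])
    seen = defaultdict(list)
    for y in range(h - k + 1):
        for x in range(w - k + 1):
            key = tuple(img[y + dy][x + dx] for dy in range(k) for dx in range(k))
            seen[key].append((y, x))
    return [locs for locs in seen.values() if len(locs) > 1]

random.seed(1)
# Synthetic noisy 20x20 "blot": coincidental 4x4 repeats are essentially impossible.
img = [[random.randrange(256) for _ in range(20)] for _ in range(20)]
clean = duplicate_patches(img)

# Simulate tampering: clone one 4x4 "band" to another lane.
for dy in range(4):
    for dx in range(4):
        img[12 + dy][14 + dx] = img[2 + dy][3 + dx]
tampered = duplicate_patches(img)
print(len(clean), len(tampered))
```

The clean image yields no duplicate patches, while the tampered one immediately reveals the cloned band's two locations, illustrating why exact pixel duplication is among the easiest manipulations to detect automatically.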

Underlying Causes

Individual Psychological and Ethical Lapses

Certain personality traits, particularly those encompassed by the Dark Triad—narcissism, Machiavellianism, and psychopathy—have been empirically linked to increased likelihood of research misbehavior among scientists. A 2016 cross-sectional study of 6,546 Dutch scientists found that Machiavellianism, characterized by manipulative tendencies and a focus on self-interest, was positively associated with self-reported research misbehavior, including falsification and selective reporting of data. Similarly, narcissism, involving grandiosity and entitlement, correlated with higher rates of misconduct, especially in biomedical fields where competitive pressures may amplify such traits. These associations suggest that individuals with elevated Dark Triad scores may prioritize personal gain over integrity, viewing ethical rules as obstacles rather than obligations. Ethical lapses often stem from mechanisms like moral disengagement and rationalization, where researchers justify misconduct by reframing it as serving a greater good, such as advancing knowledge or career necessities. For instance, a 2022 study in medical research indicated that high creative performance positively predicts scientific misconduct through moral licensing, wherein prior ethical behavior allows individuals to perceive rule-breaking as permissible to sustain productivity. This self-deception can manifest as downplaying the harm of data manipulation, with perpetrators convincing themselves that minor alterations do not undermine validity. Empirical surveys reveal that such rationalizations are common; a review of self-reports among psychologists showed 0.6% to 2.3% admitting to falsification, often attributed to internal states like overconfidence in one's judgment overriding objective standards. Cognitive biases further contribute, including overoptimism about experimental outcomes and confirmation bias, leading to unintentional drifts into falsification. 
Psychological analyses of misconduct cases highlight how low self-control and impulsivity exacerbate these tendencies, as individuals succumb to temptations without anticipating long-term repercussions. Ethical failures are compounded by a lack of intrinsic moral commitment, where researchers fail to internalize research integrity as a core value, instead treating it transactionally. Studies emphasize that while systemic factors interact, individual agency remains pivotal; traits like psychopathy, marked by callousness, directly impair empathy for the broader scientific community's reliance on truthful data. Interventions targeting these lapses, such as personality assessments in hiring or ethics training focused on self-awareness, have been proposed to mitigate risks at the individual level.

Systemic Incentives and Pressures

The "publish or perish" culture prevalent in academia ties career progression, tenure decisions, and resource allocation to the quantity and perceived impact of publications, fostering an environment where researchers face intense pressure to produce novel results rapidly. This system evaluates scientists primarily through metrics such as publication counts, journal impact factors, and citation indices like the h-index, often sidelining replication studies or null findings that do not advance careers. Empirical models demonstrate that such pressures erode trustworthiness by incentivizing selective reporting and corner-cutting, with simulations showing increased error rates and reduced reproducibility as publication demands intensify. Intense competition for funding exacerbates these dynamics, as grant success rates—such as approximately 20% for National Institutes of Health applications—compel researchers to tailor proposals toward high-risk, high-reward outcomes that align with reviewers' biases rather than exploratory or incremental work. This scarcity drives behaviors like hypothesis confirmation bias and premature data analysis to secure preliminary evidence for proposals, with surveys indicating that funding pressures correlate with higher incidences of questionable research practices across disciplines. Institutional evaluations reinforce this by linking lab resources and promotions to grant acquisition, creating a feedback loop where failure to publish competitively funded work risks professional obsolescence. Journals and peer review processes amplify systemic misalignment by prioritizing statistically significant, positive results that enhance prestige and revenue, while negative or inconclusive findings face rejection rates exceeding 90% in top outlets. This publication bias, coupled with the emphasis on "impactful" discoveries, discourages transparency in methods or data sharing, as researchers anticipate scrutiny that could undermine future funding. 
Cross-national surveys, including those from the Asian Council of Science Editors, reveal that over 70% of researchers perceive publication pressure as a direct threat to integrity, with fabrication and falsification rates self-reported at around 2% but likely undercounted due to non-disclosure incentives. These incentives disproportionately affect early-career researchers and those in high-stakes fields like biomedicine, where rapid publication cycles and corporate collaborations introduce additional financial stakes, further blurring lines between innovation and misconduct. While intended to spur productivity, the cumulative effect—quantified in studies as a 10-20% decline in replication success attributable to incentive-driven practices—undermines causal inference and empirical reliability in scientific outputs. Reforms proposed in response, such as weighting quality over quantity in evaluations, remain limited by entrenched institutional norms.
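The effect-size inflation produced by significance-gated publishing can be demonstrated with a simulation: when journals accept only "significant" results, the published literature overstates the true effect even with no fraud at all. The true effect, sample size, and acceptance rule below are illustrative assumptions:

```python
import math, random

random.seed(2)
TRUE_EFFECT, N, SIMS = 0.2, 25, 5000

published, everything = [], []
for _ in range(SIMS):
    # One "study": N observations with a small true effect, unit variance.
    sample = [random.gauss(TRUE_EFFECT, 1) for _ in range(N)]
    est = sum(sample) / N               # estimated effect size
    everything.append(est)
    if abs(est) * math.sqrt(N) > 1.96:  # only "significant" results are published
        published.append(est)

mean_all = sum(everything) / len(everything)
mean_pub = sum(published) / len(published)
print(f"true effect {TRUE_EFFECT}: all studies {mean_all:.3f}, "
      f"published only {mean_pub:.3f}")
```

The full set of studies estimates the effect accurately on average, but the published subset roughly doubles it, because under-powered studies clear the significance bar only when sampling noise happens to exaggerate the effect.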

Ideological and External Biases

Ideological biases in scientific research often stem from political homogeneity within academic fields, where a dominant viewpoint prevails and fosters conformity, thereby undermining research validity through biased question formulation, methods, and interpretations. This homogeneity minimizes skepticism toward ideologically aligned claims while amplifying scrutiny of dissenting ones, potentially incentivizing questionable research practices such as selective data reporting or p-hacking to produce conforming results. In disciplines like social sciences and humanities, such dynamics have been illustrated by the 2018 grievance studies affair, in which hoax papers parodying radical leftist ideologies—such as a rewritten chapter from Mein Kampf reframed as feminist scholarship—were accepted for publication or revision in peer-reviewed journals, exposing lowered evidentiary standards when submissions echoed prevailing activist-academic norms. External biases, including funding dependencies and institutional pressures, further exacerbate misconduct by tying researcher incentives to sponsor-preferred outcomes rather than empirical rigor. Sponsorship bias occurs when commercial or ideological funders exert influence, prompting distortions like conclusion modification or non-disclosure of unfavorable data to secure continued support. Systematic reviews of medical research, for instance, demonstrate that industry-funded trials systematically favor positive results through practices such as selective outcome reporting or suppression of negative findings, with effect sizes inflated by up to 25% compared to independent studies. Government and nonprofit grants, often aligned with policy agendas, similarly impose external constraints; for example, funding priorities emphasizing social justice or environmental advocacy can prioritize hypothesis-confirming research over null or contradictory evidence, leading to echo chambers that reward alignment over falsification. 
These biases intersect when ideological conformity aligns with funding streams, as seen in fields where public or philanthropic dollars disproportionately support progressive-framed inquiries, creating perverse incentives for fabrication or exaggeration to meet grant criteria and publication thresholds. While deliberate fraud remains rare, the cumulative effect erodes causal inference by privileging narrative fit over replicable evidence, with peer review often failing as a safeguard due to shared reviewer biases. Addressing this requires diversifying viewpoints and funding sources to restore adversarial scrutiny, though entrenched institutional cultures pose ongoing challenges.

Empirical Prevalence

Self-Reported and Detected Rates

Self-reported rates of scientific misconduct, derived from anonymous surveys of researchers, reveal low but persistent admission of unethical practices. A 2009 systematic review and meta-analysis of 18 surveys involving over 10,000 scientists estimated that 1.97% (95% CI: 0.86–4.45%) admitted to fabricating, falsifying, or selectively modifying data at least once in their careers, with rates dropping to 1.06% when limited to explicit fabrication or falsification. A 2021 meta-analysis of studies from 2011–2020 reported a pooled prevalence of 2.9% (95% CI: 2.1–3.8%) for research misconduct encompassing fabrication, falsification, and plagiarism, while questionable research practices (QRPs) such as selective reporting or improper statistical analysis showed higher rates at 12.5% (95% CI: 10.5–14.7%). Peer observations exceed self-reports, with 14.12% (95% CI: 9.91–19.72%) of respondents aware of colleagues' data fabrication or falsification in the earlier analysis, and up to 39.7% (95% CI: 35.6–44.0%) knowing of QRPs in the 2021 review. These figures vary by discipline, with biomedical fields showing elevated admissions, potentially reflecting greater publication pressures or survey focus. Detected rates, captured through institutional investigations, retractions, and regulatory findings, remain far lower than self-reports, suggesting widespread under-detection. The U.S. Office of Research Integrity (ORI) has averaged 10–15 annual findings of misconduct since the early 2000s, representing a negligible proportion—estimated at under 0.01%—of the roughly 1–2 million U.S.-funded biomedical publications and grants processed yearly. Retraction rates have increased dramatically, quadrupling over two decades to approximately 0.2% of published papers by 2023, with over 10,000 retractions logged that year in databases tracking global outputs of millions of articles. 
Among retracted papers, 60–70% involve misconduct such as fraud or plagiarism rather than inadvertent errors, as evidenced by analyses of retraction notices through 2023; fraud's share of retractions grew from negligible levels before the 1990s to over 40% by the 2000s, and retractions attributed to data problems exceeded 75% of cases by 2023. This rise partly reflects improved scrutiny tools such as statistical audits, tracking databases, and post-publication peer review platforms, yet the disparity with self-reported data implies that formal detection captures only a fraction of incidents, often relying on whistleblowers or replication failures. The divergence between self-reported rates (2–3% for core misconduct) and detected rates underscores limitations in both metrics: surveys likely understate because respondents are reluctant to admit grave violations, while detections favor high-profile cases in scrutinized fields like biomedicine. Because self-reported rates have remained roughly stable while annual retractions surged past 10,000 in 2023, the surge likely reflects improved detection rather than a proportional increase in incidence; empirical reviews suggest true prevalence could exceed official figures by orders of magnitude, since many QRPs never trigger retraction and ORI-level probes focus narrowly on federally funded U.S. research.
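The confidence intervals quoted in these surveys can be made concrete with a simple calculation. The sketch below computes a 95% Wilson score interval for a hypothetical single survey in which 197 of 10,000 respondents admit misconduct (figures chosen to mirror the 1.97% pooled estimate; the actual meta-analytic interval of 0.86–4.45% comes from random-effects pooling across 18 surveys, not from one sample):

```python
import math

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a proportion k/n.

    More accurate than the naive normal approximation when the
    proportion is small, as with misconduct admission rates."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical: 197 admissions among 10,000 respondents (~1.97%)
lo, hi = wilson_ci(197, 10000)
print(f"95% CI: {lo:.4f}-{hi:.4f}")  # roughly 1.7%-2.3%
```

The gap between this narrow single-sample interval and the much wider published one is itself informative: most of the uncertainty in prevalence estimates comes from heterogeneity across surveys, not from sampling error within any one of them.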
Disciplinary variations in misconduct prevalence are pronounced, with life sciences and medicine exhibiting the highest retraction rates—up to 4.8% of publications in clinical medicine and biomedical research—compared to near-zero rates in fields like mathematics or theoretical physics, where fabrication is harder to conceal because claims can be checked by independent computation, derivation, or large-scale experiments. Within biomedicine, subfields such as cell biology (19% of life science retractions) and cancer research (14%) show elevated rates, often linked to image manipulation and selective reporting under competitive funding pressures. In contrast, physical sciences and engineering report lower self-admitted misconduct (under 2%), attributable to collaborative verification norms and less emphasis on novel, high-impact results. Psychology stands out among social sciences, with over 64% of retractions tied to fraud or plagiarism, a vulnerability laid bare by the replication crisis revealed in large-scale efforts such as the Reproducibility Project beginning in 2011. These differences underscore how disciplinary cultures, methodological reproducibility, and incentive structures influence misconduct vulnerability, with empirical fields prone to "p-hacking" or data dredging faring worse than deductive ones.

Institutional Roles

Responsibilities of Researchers and Labs

Researchers must adhere to ethical standards prohibiting fabrication, falsification, and plagiarism in all aspects of their work, ensuring that experimental results are reported honestly and without selective omission of data. They are required to maintain accurate, complete, and contemporaneous records of research activities, including raw data, methodologies, and analyses, with retention periods typically extending at least three years post-publication or as mandated by funding agencies or institutions. Proper authorship practices demand that credit be limited to individuals making substantial intellectual contributions, with all authors reviewing and approving final manuscripts to prevent honorary or ghost authorship. Upon observing potential misconduct, researchers have a duty to report suspicions promptly through established institutional channels, protecting whistleblowers from retaliation where policies allow. Principal investigators (PIs) and laboratory heads hold supervisory responsibilities to oversee trainees and staff, conducting regular reviews of raw data and experimental protocols to verify accuracy and reproducibility. Labs must implement standardized data management systems, such as secure electronic storage and version-controlled notebooks, ensuring data accessibility for collaborators and auditors while preventing unauthorized alterations. Fostering an open culture through frequent group meetings, ethical discussions, and integrity training programs is essential to discourage questionable practices and encourage early identification of errors or biases. PIs should also establish clear lab policies on conflict resolution and misconduct reporting, integrating periodic audits to detect systemic issues before they escalate.
  • Training and Mentorship: Labs are obligated to provide ongoing education on responsible conduct, covering topics like data integrity and ethical dilemmas, with PIs modeling conscientious behavior.
  • Resource Allocation: Adequate provisions for reproducible methods, such as calibrated equipment and validated protocols, fall under lab oversight to minimize inadvertent errors mimicking misconduct.
  • Accountability Mechanisms: Internal guidelines should delineate consequences for lapses, aligning with institutional policies to reinforce collective responsibility.
Failure to uphold these duties can undermine research validity, as evidenced by cases where inadequate supervision contributed to undetected manipulations, though robust implementation correlates with lower misconduct rates in well-monitored environments.
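One minimal, hypothetical illustration of the tamper-evident record-keeping described above (a sketch, not any specific institutional system): a lab can fingerprint raw-data files at collection time, so that re-hashing later exposes any alteration. The `register_dataset` helper and file names here are invented for illustration.

```python
import hashlib
import json
import pathlib
import tempfile
import time

def register_dataset(path):
    """Record a SHA-256 fingerprint of a raw-data file.

    Re-hashing the file later and comparing against this record
    makes unauthorized alterations detectable."""
    digest = hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()
    return {
        "file": str(path),
        "sha256": digest,
        "recorded": time.strftime("%Y-%m-%d"),
    }

# Usage sketch with a throwaway data file
tmp = pathlib.Path(tempfile.mkdtemp())
p = tmp / "raw_counts.csv"
p.write_text("sample,count\nA,10\nB,12\n")
entry = register_dataset(p)
print(json.dumps(entry, indent=2))
```

Appending such entries to an append-only log (or committing them to version control) gives auditors a cheap integrity check without exposing the data itself.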

Oversight by Institutions and Funding Bodies

Institutions and funding bodies play critical roles in overseeing scientific misconduct, primarily through policy enforcement, investigation protocols, and conditional funding assurances. Under U.S. federal regulations, institutions receiving Public Health Service (PHS) funds, such as those from the National Institutes of Health (NIH), must establish administrative procedures to respond to allegations of research misconduct, defined as fabrication, falsification, or plagiarism in proposing, performing, or reporting research. These procedures typically involve an initial inquiry to assess credibility, followed by a formal investigation if warranted, overseen by a designated Research Integrity Officer (RIO) who coordinates the process and ensures compliance with due process. Institutions are required to protect whistleblowers from retaliation and maintain confidentiality during proceedings, though empirical analyses reveal deficiencies in report quality, including incomplete documentation of evidence and inconsistent application of standards across cases. Funding agencies like the NIH and National Science Foundation (NSF) impose oversight by mandating institutional assurances of integrity prior to awarding grants and retaining authority to review and intervene in misconduct findings. The Office of Research Integrity (ORI), within the U.S. Department of Health and Human Services, supervises PHS-funded research by monitoring institutional investigations, conducting oversight reviews, and imposing administrative actions such as debarment from funding or supervised research plans upon confirmed misconduct. For instance, after an institution's finding, ORI may require corrections to the scientific record or exclusion from federal funding for periods ranging from three to seven years, as seen in cases documented in ORI's administrative action summaries. 
NSF-funded research shows a higher incidence of plagiarism findings compared to NIH, potentially reflecting differences in oversight stringency or reporting thresholds. Agencies can also initiate independent inquiries if institutional handling appears inadequate, though their regulatory scope is limited to funded research, excluding broader fraud under civil or criminal law. Despite these frameworks, oversight effectiveness is constrained by institutional incentives to minimize reputational damage, leading to criticisms of superficial investigations and reluctance to publicize findings. In 2024, research universities and hospitals resisted ORI's proposed enhancements to federal oversight, arguing they would impose undue burdens without improving integrity, highlighting tensions between self-regulation and external accountability. Studies indicate variable institutional capacity to prevent misconduct, influenced by factors like leadership commitment and resource allocation, with many lacking robust proactive measures beyond reactive probes. Funding bodies' reliance on institutions for initial handling introduces principal-agent problems, where universities, dependent on grants comprising up to 60% of research budgets at major institutions, may under-report or delay to safeguard revenue streams. This dynamic underscores the need for independent audits and public disclosure of investigation outcomes to enhance credibility, as opaque processes erode trust in self-policing mechanisms.

Functions of Journals and Peer Review

Scientific journals function primarily as disseminators of peer-reviewed research, establishing a formalized process to evaluate and validate scientific claims before public release. Peer review, typically involving independent experts assessing manuscripts for methodological rigor, originality, and plausibility, serves as the core gatekeeping mechanism to filter out flawed or unsubstantiated work. This system aims to uphold research integrity by requiring authors to disclose methods, data, and potential conflicts, thereby enabling reviewers to identify errors or inconsistencies in reporting. In the context of scientific misconduct, journals and peer review are intended to deter fabrication, falsification, and plagiarism through pre-publication scrutiny, with editors enforcing ethical standards such as those outlined by the Committee on Publication Ethics (COPE). Reviewers evaluate whether results align with established knowledge and statistical norms, potentially flagging anomalies like improbable data patterns, though this relies heavily on the reviewers' expertise and diligence. Journals increasingly employ automated tools, such as plagiarism detection software and statistical checks for data fabrication (e.g., Benford's Law analysis), to supplement human review and identify suspicious patterns before acceptance. However, peer review's effectiveness in detecting deliberate misconduct is limited, as it operates on a foundation of trust in authors' self-reported data and raw materials, which reviewers seldom access directly. It excels at catching logical errors or sloppy analysis but rarely uncovers sophisticated fraud, such as manipulated images or invented datasets, which may evade detection unless serendipitously noticed. Studies indicate that peer review was not designed as a fraud-detection system, and expecting it to reliably police intentional deceit overlooks its reliance on communal verification post-publication rather than forensic auditing. 
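The text-overlap screening that plagiarism-detection software automates can be sketched with a simple word n-gram Jaccard similarity. Production services use far more robust matching (stemming, fuzzy alignment, massive corpora), so this is only an illustration of the principle:

```python
def ngram_set(text, n=3):
    """Set of word n-grams (lowercased) for overlap comparison."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b, n=3):
    """Jaccard similarity of two texts' n-gram sets.

    Values near 1.0 indicate heavy verbatim overlap; lightly
    paraphrased copying still scores well above chance."""
    sa, sb = ngram_set(a, n), ngram_set(b, n)
    union = sa | sb
    return len(sa & sb) / len(union) if union else 0.0

original = "fabrication involves inventing data or results without conducting the experiments"
copied   = "fabrication involves inventing data or results without running the experiments"
print(round(jaccard(original, copied), 2))
```

Changing a single word only breaks the n-grams that span it, so near-verbatim passages retain a high score while independently written text on the same topic scores near zero.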
Consequently, many instances of misconduct surface only after publication, prompting journals to issue expressions of concern, corrections, or retractions based on subsequent investigations or whistleblower reports. Journals mitigate these shortcomings through post-publication oversight, including transparent retraction policies and collaboration with institutions to investigate allegations, thereby preserving the scientific record. Editors play a pivotal role as initial gatekeepers, rejecting submissions with evident ethical lapses and fostering accountability via authorship guidelines that mandate verifiable contributions. Despite these functions, systemic pressures—such as publication volume and reviewer shortages—can compromise thoroughness, underscoring peer review's role as a probabilistic filter rather than an infallible safeguard against misconduct.

Impacts and Consequences

Erosion of Scientific Knowledge

Scientific misconduct introduces erroneous findings into the literature, which, once published, can become embedded in the cumulative knowledge base and influence subsequent research directions, hypotheses, and applications. Retractions, often the primary mechanism for correcting such errors, fail to fully excise the tainted information, as retracted papers continue to garner citations long after their withdrawal. A study analyzing over 2,000 retractions found that 67.4% were attributable to misconduct, including fraud (43.4%), underscoring the prevalence of fabricated or falsified data infiltrating peer-reviewed publications. These invalid results propagate when researchers unwittingly build upon them, leading to cascades of misguided experiments, resource allocation to dead ends, and reinforcement of flawed paradigms that resist correction due to entrenched citations and institutional inertia. Post-retraction citation persistence exacerbates this erosion, with analyses showing that retracted articles receive continued references for over a decade, often without acknowledgment of the retraction. In one examination of citation patterns, retracted papers were found to corrupt the scholarly record by influencing meta-analyses and reviews that aggregate uncorrected data, thereby perpetuating inaccuracies in fields like biomedicine and psychology. This dynamic contributes to the broader reproducibility crisis, where fraud and fabrication amplify irreproducible findings; high-profile fraud cases have been linked to systemic failures in replicating landmark studies, as fraudulent data sets false benchmarks that subsequent work struggles to match. For instance, in preclinical research, the integration of fraudulent results has delayed therapeutic advancements by diverting efforts toward non-viable pathways, with estimates suggesting billions in wasted funding annually on irreproducible or fraudulent outputs. 
The long-term ramifications include a diluted corpus of reliable knowledge, where the signal of genuine discoveries is obscured by noise from misconduct, eroding the self-correcting ethos of science. Historical precedents, such as the Piltdown Man hoax, which deceived anthropologists for over four decades before exposure in 1953, illustrate how accepted frauds can reshape entire subfields until dismantled by exhaustive scrutiny. In contemporary contexts, persistent citation of retracted works in high-impact journals sustains epistemological vulnerabilities, particularly in fast-paced domains like oncology, where irreproducible fraud-tainted studies have misled clinical trial designs and policy decisions. Mitigation requires not only vigilant detection but proactive purging of influenced literature, though the asymmetry between rapid publication and slow correction perpetuates knowledge degradation.

Effects on Public Trust and Policy Outcomes

Scientific misconduct erodes public confidence in scientific institutions by highlighting vulnerabilities in the research process and perceived lack of accountability. High-profile retractions, such as those involving falsified COVID-19 treatment data in 2020 from journals like The Lancet and The New England Journal of Medicine, have amplified fears among researchers that such incidents exacerbate existing distrust, particularly during crises where rapid decision-making relies on preliminary findings. Empirical assessments, including Pew Research Center surveys, reveal that while overall trust in scientists remains relatively high, only 10-20% of U.S. adults perceive repercussions for misconduct as commonplace across fields, fostering views of systemic leniency. Reports of data manipulation further compound this, as they signal broader integrity issues, prompting public skepticism toward expert consensus on contentious topics like vaccines and climate. A seminal case is the 1998 Lancet paper by Andrew Wakefield, retracted in 2010 after revelations of fraud including data falsification and undisclosed conflicts of interest. The study falsely implicated the MMR vaccine in autism, causing UK MMR vaccination coverage to plummet from 92% in 1998 to around 80% by the mid-2000s, directly contributing to measles resurgence. This led to policy shifts, including expanded public awareness campaigns and school entry mandates in multiple regions, yet outbreaks persisted; for example, a 2013 measles epidemic in Wales infected over 1,000 individuals, many unvaccinated due to lingering hesitancy. Such fallout delayed herd immunity goals and increased healthcare burdens, demonstrating how isolated fraud can cascade into widespread public health policy challenges. On policy outcomes, misconduct distorts evidence-based decision-making, leading to adoption of ineffective interventions or regulatory oversights with tangible harms. 
Falsified medical research has prolonged approval of hazardous treatments; for instance, reliance on manipulated trial data has resulted in patient injuries or deaths from substandard drugs or devices, prompting retrospective policy reforms like stricter FDA post-market surveillance. In oncology, proliferation of fraudulent papers—estimated to contribute to over 40% of retractions—wastes billions in redirected funding, skewing allocation toward invalid pursuits and undermining policies prioritizing high-impact therapies. Collectively, these incidents compromise the causal foundations of legislation, as policymakers citing tainted studies enact measures lacking empirical robustness, perpetuating inefficiencies in resource distribution and public safety protocols.

Ramifications for Whistleblowers and Careers

Whistleblowers who report scientific misconduct often face severe retaliation, including professional ostracism, demotion, or termination, which can derail their careers. An Office of Research Integrity (ORI)-commissioned study by the Research Triangle Institute analyzed cases of misconduct in science and found that 69% of whistleblowers experienced negative outcomes, such as pressure from superiors to withdraw allegations, ostracism from colleagues, and dismissal from their positions. Retaliation frequently intensifies during ongoing investigations, as institutions prioritize protecting accused researchers over supporting reporters, producing a chilling effect in which potential whistleblowers remain silent to safeguard their livelihoods. Specific instances illustrate these risks. In 2023, former Wayne State University neuroscientist Charles Y. Kreipke prevailed in a whistleblower lawsuit against the Detroit VA Medical Center after being investigated for alleged misconduct following his reports of data falsification by superiors; the VA had initially denied recognizing him as a whistleblower. Similarly, in a case under the Whistleblower Protection Act, assistant professor Judith Zimmerman was terminated in retaliation for exposing research irregularities, highlighting how reporting can trigger institutional backlash even when substantiated. Federal regulations under 42 CFR Part 93 mandate protections against such reprisals in U.S. Public Health Service-funded research, requiring institutions to investigate retaliation complaints and impose corrective actions, yet enforcement remains inconsistent, with ORI handling complaints but lacking robust deterrence. Long-term career ramifications for whistleblowers include stalled academic advancement, blacklisting in collaborative networks, and challenges securing future funding or positions.
A 2014 analysis of higher education whistleblowers described their post-reporting trajectories as bleak, with many facing prolonged unemployment or forced career shifts outside academia due to damaged reputations and severed professional ties. Researchers found guilty of misconduct, by contrast, typically endure sanctions such as grant debarment, publication retractions, and employment termination—for instance, ORI findings since 1992 have led to over 100 debarments averaging several years—yet whistleblowers rarely see their own careers restored or their losses mitigated, an asymmetry that punishes exposure even as it penalizes perpetrators, and thereby deters reporting. These dynamics perpetuate underreporting: studies indicate only a fraction of misconduct cases surface via whistleblowing, as institutional harmony is prioritized over accountability.

Prominent Examples

Seminal Historical Incidents

The Piltdown Man hoax, perpetrated in 1912, involved the presentation of forged fossils purportedly representing an early human ancestor discovered in Piltdown, England, by amateur archaeologist Charles Dawson. The skull combined a modern human cranium with an orangutan jaw, artificially stained and filed to appear ancient, fooling experts including Arthur Smith Woodward of the British Museum for over four decades. Exposed in 1953 through fluorine dating, microscopic analysis revealing tool marks, and nitrogen content tests showing inconsistencies, the forgery delayed advancements in paleoanthropology by diverting resources toward a false "missing link" that aligned with desires for a British origin of humanity. Dawson, linked to prior antiquities frauds, is widely regarded as the primary forger, possibly with accomplices like Pierre Teilhard de Chardin or Martin Hinton.
In 1903, French physicist Prosper-René Blondlot announced the discovery of N-rays, claimed to be a new form of radiation emitted by various substances and detectable only by the human eye on faintly glowing screens in darkened rooms. Over 300 papers from Blondlot's lab and confirmations by more than 100 scientists across Europe followed, despite irreproducibility by others, illustrating collective confirmation bias amid the post-X-ray era's enthusiasm for novel rays. American physicist Robert W. Wood debunked the phenomenon in 1904 by covertly removing a crucial prism from Blondlot's apparatus during a demonstration, after which the "rays" continued to be "detected," revealing the observations as subjective illusions. While Blondlot maintained his sincerity, the scandal underscored vulnerabilities in peer validation without rigorous controls.
British psychologist Sir Cyril Burt's studies on intelligence heritability, spanning the 1950s to 1960s, relied on fabricated data from nonexistent identical twins reared apart and fictional assistants like J. Conway and Margaret Howard, with identical correlation coefficients recycled across publications. Posthumously exposed in the 1970s by critics including Leon Kamin, who highlighted statistical anomalies and unverifiable sources, Burt's work influenced policy debates on IQ and social mobility, promoting genetic determinism amid eugenics echoes. Biographer L.S. Hearnshaw concluded deliberate fraud after investigating Burt's records, though defenders like Arthur Jensen argued the data patterns were plausible without proving invention; the consensus in psychology holds that the data were fabricated to fit hereditarian views.

Contemporary Cases from 2020 Onward

In 2020, a multinational registry analysis published in The Lancet claimed that hydroxychloroquine and chloroquine increased mortality and ventricular arrhythmia risks in COVID-19 patients, leading to the suspension of several clinical trials worldwide, including the WHO's Solidarity trial; the paper was retracted on June 4, 2020, after independent verification revealed that the raw data from Surgisphere Corporation, purportedly drawn from over 96,000 patient records across 671 hospitals, could not be accessed or confirmed by the authors or external auditors. A related New England Journal of Medicine paper by the same lead author, Sapan Desai of Surgisphere, on COVID-19 and cardiovascular disease was also retracted on the same date due to similar data integrity issues, with Surgisphere failing to provide verifiable sources despite claims of data from diverse global hospitals. In physics, Ranga Dias of the University of Rochester published a 2020 Nature paper asserting room-temperature superconductivity in a carbonaceous sulfur hydride under 267 GPa pressure, a claim that garnered significant attention and funding; the paper was retracted on September 26, 2022, following concerns raised by co-authors about manipulated data in resistance measurements and inconsistencies in spectroscopic evidence, with subsequent investigations revealing duplicated images and fabrication in supporting datasets. Dias's group faced further retractions, including a second Nature paper in November 2023 on lutetium hydride superconductivity and a fifth paper in June 2024 from Chemical Communications, amid allegations of data alteration and plagiarism, prompting the University of Rochester to launch an internal review in 2023.
Behavioral scientist Francesca Gino, a Harvard Business School professor specializing in dishonesty research, was found by a 2023 university investigation to have falsified data in at least four studies, including altering survey responses in a 2012 paper on signed dishonesty pledges and fabricating results in collaborations with Dan Ariely; Harvard placed her on unpaid leave in June 2023, barred her from campus, and revoked her tenure in May 2025 after the probe concluded intentional misconduct. Gino has denied the allegations, filing lawsuits against Harvard and bloggers at Data Colada who first flagged anomalies, while several co-authored papers were retracted, including from Psychological Science and Organizational Behavior and Human Decision Processes. These incidents reflect a surge in detected misconduct, with biomedical retractions quadrupling from 2000 to 2023 and two-thirds attributed to issues like data manipulation or plagiarism, often amplified by pressures during the COVID-19 pandemic and in high-stakes fields.

Mitigation Strategies

Enhanced Detection Mechanisms

Statistical methods have become central to detecting data fabrication and falsification in scientific research. Techniques like Benford's Law examine the expected distribution of leading digits in numerical data, where deviations may indicate manipulation, as demonstrated in analyses of clinical trial datasets. Unsupervised mixed-effects models identify anomalous patterns across study sites, successfully flagging fraud in multi-center trials by comparing variability against norms. For small-sample studies (N < 200), simpler heuristics—such as improbably uniform distributions, round-number biases, or implausible error patterns—have detected potential fraud in retracted psychology papers, with reviews confirming their utility despite limitations in false positives. Forensic image analysis tools address manipulation in figures, a common misconduct vector. Software detects duplications, splicing, or AI-generated alterations by comparing pixel patterns and metadata, with platforms like ImageTwin and Proofig AI screening submissions pre- and post-publication; for instance, Springer Nature's SnappShot employs AI to automate integrity checks, reducing manual review burdens. These methods have contributed to rising retraction rates, as heightened scrutiny reveals previously overlooked issues, though they require human validation to mitigate errors. Post-publication peer review platforms enhance ongoing vigilance beyond initial gatekeeping. PubPeer allows anonymous comments on published papers, with analyses showing two-thirds of postings target suspected misconduct like data anomalies, leading to institutional probes and retractions; by 2023, it had prompted proceedings in numerous cases. Open-source AI initiatives, such as the Black Spatula Project, further scan papers for statistical inconsistencies, analyzing hundreds of publications to flag errors systematically. 
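A minimal sketch of the Benford's Law screen described above: compare a dataset's leading-digit frequencies against the logarithmic expectation and summarize the deviation with a chi-square statistic. Real forensic use adds formal significance testing and domain checks, since not all legitimate data are expected to follow Benford's distribution; the example datasets here are synthetic.

```python
import math

# Benford's Law: P(leading digit = d) = log10(1 + 1/d)
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def leading_digit(x):
    """First significant digit of a nonzero number."""
    return int(f"{abs(x):e}"[0])  # scientific notation starts with the digit

def benford_chi2(values):
    """Chi-square statistic of observed leading digits vs. the Benford
    expectation; large values suggest the digits deviate from Benford."""
    vals = [v for v in values if v]
    n = len(vals)
    counts = {d: 0 for d in range(1, 10)}
    for v in vals:
        counts[leading_digit(v)] += 1
    return sum((counts[d] - n * p) ** 2 / (n * p) for d, p in BENFORD.items())

# Natural-looking data: geometric growth closely follows Benford's Law
natural = [1.07 ** k for k in range(1, 300)]
# Fabricated-looking data: values clustered uniformly in a "plausible" range
uniform = [4 + 3 * (k % 100) / 100 for k in range(300)]
print(benford_chi2(natural), benford_chi2(uniform))
```

Geometrically growing quantities spread their leading digits logarithmically and score low, while values invented around a comfortable midrange concentrate on a few digits and score dramatically higher.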
Collectively, these mechanisms—bolstered by transparency mandates like raw data sharing—have increased misconduct detections, though challenges persist in scaling across disciplines and verifying AI outputs.

Reforms to Research Incentives

Reforms to research incentives seek to address the systemic pressures that prioritize publication quantity and novelty over rigorous verification, which contribute to misconduct such as data fabrication and selective reporting. Traditional metrics like the number of publications and citations incentivize "publish or perish" dynamics, where researchers face career risks for pursuing replications or null results, as evidenced by surveys showing that over 50% of scientists have failed to replicate others' experiments while only 20-30% attempt their own. Proposals emphasize shifting evaluations toward quality indicators, including data transparency and reproducibility, to align personal advancement with scientific integrity. Academic institutions have begun revising promotion and tenure criteria to incorporate open science practices. For instance, in 2017, Harvard Medical School adjusted its faculty advancement standards to recognize contributions to reproducibility, moving away from sole reliance on high-impact publications. Similarly, guidelines from 2020 advocate requiring candidates to demonstrate open data sharing or pre-registration in at least one project for hiring or promotion, with institutions like those in the biomedical sciences piloting such changes to reduce fraud risks. The National Academies' Roundtable on Aligning Incentives for Open Scholarship, established in 2019, promotes evaluating team-based collaboration and public data access in career assessments, arguing that these foster long-term trust over short-term outputs. Funding agencies are implementing incentives for transparency and replication to counteract hypercompetition. The National Science Foundation (NSF) has allocated resources since 2023 for multi-institutional projects enhancing data management and public access, tying grants to compliance with open science policies. 
NASA's open science funding opportunities, ongoing as of 2025, prioritize projects enabling data sharing and reproducibility, with dedicated solicitations for tools that verify findings. In Europe, the ERIC Forum's 2021 recommendations urge funders to allocate resources for quality management and replication studies, including dedicated calls that reward negative or confirmatory results. A 2025 initiative backed by $1.5 million targets broader academic incentive reforms, focusing on reducing misconduct through sustained support for ethical practices over volume. These mechanisms aim to diminish the "fraud triangle" pressures of opportunity and rationalization by making integrity a competitive advantage.

Promotion of Reproducibility and Transparency

Efforts to promote reproducibility and transparency in scientific research have centered on standardized frameworks and policy reforms aimed at mitigating selective reporting and unverifiable claims. The Transparency and Openness Promotion (TOP) Guidelines, developed by the Center for Open Science and updated in 2025, outline modular standards across eight facets, including citation, data transparency, code transparency, and preregistration, with levels of implementation ranging from disclosure to verification. These guidelines encourage journals and funders to adopt practices that facilitate independent verification, with over 1,000 journals endorsing TOP standards by 2024 to enhance the verifiability of empirical claims. Preregistration of studies, which involves publicly archiving detailed research plans prior to data collection, has gained traction as a core transparency tool to distinguish confirmatory from exploratory analyses and curb practices like p-hacking. Platforms such as the Open Science Framework (OSF) host millions of preregistrations since 2013, with adoption rising in fields like psychology, where it has reduced questionable research practices by constraining post-hoc flexibility. Registered Reports, a publishing format where peer review occurs on the protocol stage with in-principle acceptance, further incentivize preregistration; by 2023, over 200 journals offered this model, leading to higher reproducibility rates in participating studies compared to traditional formats. Open data and materials sharing mandates represent another pillar, requiring researchers to deposit datasets, code, and protocols in public repositories upon publication to enable scrutiny and reuse. 
Funder policies, such as those from the National Institutes of Health (NIH), have integrated these requirements since 2016, with grants increasingly conditioned on data management plans that prioritize accessibility; a 2023 analysis found that journals enforcing open data policies saw a 20-30% increase in citation rates for compliant articles due to enhanced verifiability. Similarly, the NIH's Replication to Enhance Research Impact Initiative, launched in 2024, funds independent replications of high-impact preclinical studies using contract labs, aiming to validate findings and address reproducibility gaps identified in prior surveys, in which only about 50% of studies replicated successfully. These strategies collectively shift incentives toward robust practices, though challenges persist in enforcement and cultural adoption across disciplines; for instance, while TOP-compliant journals report improved transparency metrics, non-compliance remains common in fields with proprietary data concerns. Training initiatives, such as Reproducible Training Network modules disseminated via eLife since 2021, equip researchers with skills in these methods, fostering long-term behavioral change. Overall, empirical evaluations indicate that combining preregistration with open sharing reduces false positives and accelerates cumulative knowledge, as evidenced by meta-analyses showing that replicated effects are estimated more precisely than initial reports.
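The precision gain from replication can be sketched with standard inverse-variance (fixed-effect) pooling on simulated data (an illustrative calculation under assumed effect and sample sizes, not the method of any specific initiative named above): combining an original study with an equally sized independent replication shrinks the standard error of the pooled estimate by roughly a factor of √2.

```python
import math
import random

random.seed(7)

TRUE_EFFECT = 0.3  # assumed true effect size for the simulation

def estimate(n):
    """Sample mean of n noisy measurements of the effect, with its
    standard error."""
    xs = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(n)]
    m = sum(xs) / n
    v = sum((x - m) ** 2 for x in xs) / (n - 1)
    return m, math.sqrt(v / n)

orig_m, orig_se = estimate(100)  # original study
rep_m, rep_se = estimate(100)    # independent replication

# Fixed-effect meta-analysis: weight each estimate by its inverse variance.
w1, w2 = 1 / orig_se ** 2, 1 / rep_se ** 2
pooled_m = (w1 * orig_m + w2 * rep_m) / (w1 + w2)
pooled_se = math.sqrt(1 / (w1 + w2))

print(f"original: {orig_m:.3f} +/- {orig_se:.3f}")
print(f"pooled:   {pooled_m:.3f} +/- {pooled_se:.3f}")
```

The pooled standard error is always smaller than either study's alone, which is the statistical sense in which replicated effects are estimated more precisely than initial reports.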

References

  1. [1]
    Definition of Research Misconduct | ORI
    Research misconduct means fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results.
  2. [2]
    What Is Research Misconduct - NIH Grants & Funding
    Aug 19, 2024 · Research misconduct means fabricating, falsifying, and/or plagiarizing in proposing, performing, or reviewing research, or in reporting research results.
  3. [3]
    4.1.27 Research Misconduct - NIH Grants and Funding
    Research misconduct includes fabrication (making up data), falsification (manipulating data), and plagiarism (appropriating ideas without credit). Honest error ...
  4. [4]
    Scientific Misconduct: A Global Concern - PMC - NIH
    Scientific misconduct includes fabrication, falsification, or plagiarism in research, or any behavior failing to respect high ethical standards.
  5. [5]
    How Many Scientists Fabricate and Falsify Research? A Systematic ...
    A non-systematic review based on survey and non-survey data led to estimate that the frequency of “serious misconduct”, including plagiarism, is near 1% [11].View Figures (9) · View Reader Comments · Author Info
  6. [6]
    A Systematic Review and Meta-Analysis | Science and Engineering ...
    Jun 29, 2021 · This meta-analysis provides an updated meta-analysis that calculates the pooled estimates of research misconduct (RM) and questionable research practices (QRPs ...
  7. [7]
    (PDF) Prevalence of Research Misconduct and Questionable ...
    Aug 6, 2025 · In a 2021 meta-analysis of studies on research misconduct by Xie and colleagues [66] estimates that the self-reported prevalence of FFP is 2.9% ...
  8. [8]
    Research: Financial costs and personal consequences of ... - eLife
    Aug 14, 2014 · Research misconduct correlates with decreased productivity and funding. The personal consequences for individuals found to have committed ...
  9. [9]
    What a massive database of retracted papers reveals about science ...
    Oct 25, 2018 · Fraud accounted for some 60% of those retractions; one offender, anesthesiologist Joachim Boldt, had racked up almost 90 retractions after ...
  10. [10]
    Guilt by association: How scientific misconduct harms prior ...
    Our empirical analysis shows that prior collaborators face a citation penalty of 8–9% in the aftermath of a scientific misconduct case. We base this result on ...
  11. [11]
    42 CFR Part 93 -- Public Health Service Policies on Research ...
    Research misconduct means fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results. Research ...Research Misconduct Issues · 93.100 – 93.107 · Title 42 · Subpart B —DefinitionsMissing: core | Show results with:core
  12. [12]
    Research Integrity - Research Ethics & Compliance
    Feb 28, 2025 · Research Misconduct Definitions · Fabrication - making up data or results and recording them in the research record. · Falsification - ...Missing: core | Show results with:core
  13. [13]
    RESEARCH MISCONDUCT - On Being a Scientist - NCBI Bookshelf
    Defines misconduct as “fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results.”
  14. [14]
    Ethical Shades of Gray: International Frequency of Scientific ... - LWW
    Whereas deliberate scientific misconduct such as data fabrication is clearly unethical, other behaviors—often referred to as questionable research practices ( ...
  15. [15]
    Questionable Research Practices (and research misbehaviors) - AJE
    Nov 8, 2022 · Although fraud is rare, QRPs are fairly common in most research areas. In fact, an estimated one in two researchers has engaged in at least ...What is a questionable... · Not accurately recording your... · P-hacking
  16. [16]
    Fifty years of research on questionable research practises in science
    QRPs have been defined as 'design, analytic or reporting practices that have been questioned because of the potential for the practice to be employed with the ...<|control11|><|separator|>
  17. [17]
    Research misconduct and questionable research practices form a ...
    Mar 3, 2023 · In a recent article in this journal, it was argued that RDMM can take two forms: intentional research misconduct or unintentional questionable ...
  18. [18]
    Landmark research integrity survey finds questionable practices are ...
    Jul 7, 2021 · ... misconduct within the past 3 years: the fabrication or falsification of research results. This rate of 8% for outright fraud was more than ...
  19. [19]
    Is it Time to Revise the Definition of Research Misconduct? - PMC
    US federal policy defines research misconduct as fabrication of data, falsification of data, or plagiarism (FFP).
  20. [20]
    A Heavy Hoax: The “Lying Stones” of Johann Beringer
    Jul 25, 2019 · This tale of stony deception began when the twenty-seven year old Beringer was named professor at the University of Wurzburg in 1694, only a ...
  21. [21]
    Beringer's Lying Stones; fraud and absurdity in science
    Jul 11, 2017 · A tale of scientific fraud and personal hubris. My second-year university Geology wasn't particularly notable except for a bit of academic trickery.Missing: hoax | Show results with:hoax
  22. [22]
    Rocky Road: Johann Bartholomew Adam Beringer - Strange Science
    So great was his chagrin and mortification in discovering that he had been made the subject of a cruel and silly hoax, that he endeavored to buy up the whole ...<|control11|><|separator|>
  23. [23]
    [PDF] 1 Misinformation Age: What early modern scientific fakes ... - IUHPST
    Two scientific objects were produced – a counterfeit dragon and a falsified study – that subsequently deceived people about their true nature.
  24. [24]
    Historical Background | ORI - The Office of Research Integrity
    Some twelve cases of research misconduct were disclosed in this country between 1974-1981. Congressional attention to research misconduct was maintained ...Missing: high profile post
  25. [25]
    Misconduct in Science—Incidence and Significance - NCBI
    Cases of confirmed misconduct in science in the United States range between 40 and 100 cases during the period from 1980 to 1990.Missing: profile | Show results with:profile<|control11|><|separator|>
  26. [26]
    SCIENTIFIC FRAUD Part II: From Past to Present, Facts and Analyses
    Mar 9, 2022 · Scientific fraud has increased dramatically in recent times. The main reason has been the exponential increase of the number of researchers.
  27. [27]
    Misconduct accounts for the majority of retracted scientific publications
    67.4% of retractions were attributable to misconduct, including fraud or suspected fraud (43.4%), duplicate publication (14.2%), and plagiarism (9.8%).Missing: era 20th
  28. [28]
    Analysis of scientific paper retractions due to data problems
    The results show that since 2000, retractions due to data problems have increased significantly (p < 0.001), with the percentage in 2023 exceeding 75%. Among ...
  29. [29]
    About ORI - The Office of Research Integrity
    The Office of Research Integrity (ORI) oversees and directs Public Health Service (PHS) research integrity activities on behalf of the Secretary of Health ...
  30. [30]
    Federal Research Misconduct Policy | ORI
    In most cases, agencies will rely on the researcher's home institution to make the initial response to allegations of research misconduct. Agencies will usually ...
  31. [31]
    Case Summaries | ORI - The Office of Research Integrity
    This page contains cases in which administrative actions were imposed due to findings of research misconduct.Zhang, Liping · Bhan, Arunoday K. · Eckert, Richard L · Rutherford, BretMissing: modern era 20th
  32. [32]
    Organisational responses to alleged scientific misconduct
    We propose a theoretical model that explains how organisational responses to misconduct emerge and evolve as iterations of the processes of sensemaking, ...
  33. [33]
    'Wasted' research and lost citations: A scientometric assessment of ...
    Aug 23, 2025 · This study presents a large-scale scientometric analysis of 35,514 retracted publications indexed in Scopus between 2001 and 2024, ...
  34. [34]
    How to tackle research misconduct: survey finds stark disagreement
    Jul 25, 2025 · Around 64% of research-integrity officers said that the best way to deal with misconduct allegations is a self-regulation model, in which ...<|separator|>
  35. [35]
    Research misconduct: the poisoning of the well - PMC - NIH
    One of the many unanswered questions on scientific fraud or research misconduct is how commonly it occurs. The answer obviously depends on how it is defined ...Missing: era 20th
  36. [36]
    Detailed Case Histories - Fostering Integrity in Research - NCBI - NIH
    The following five detailed case histories of specific cases of actual and alleged research misconduct are included in an appendix to raise key issues and ...PAXIL CASE · THE HWANG STEM CELL... · THE TRANSLATIONAL OMICS...
  37. [37]
    Data integrity scandals in biomedical research: Here's a timeline
    May 17, 2023 · Naoki Mori: A prominent Japanese oncology researcher had more than 30 papers retracted as a result of image manipulation allegations.
  38. [38]
    Did a top NIH official manipulate Alzheimer's and Parkinson's ...
    Sep 26, 2024 · A Science investigation has now found that scores of his lab studies at UCSD and NIA are riddled with apparently falsified Western blots.
  39. [39]
    Plagiarism | ORI - The Office of Research Integrity
    Plagiarism has been traditionally defined as the taking of words, images, processes, structure and design elements, ideas, etc. of others and presenting them ...
  40. [40]
    Plagiarism in Research | Division of Research - Brown University
    Plagiarism is defined under federal regulations and by Brown University's research misconduct policy as “the appropriation of another person's ideas, processes, ...
  41. [41]
    Authorship: contributions, disputes, and misconduct | Royal Society
    Mar 10, 2022 · Fraudulent authorship and misrepresentation are generally considered to be misconduct. With this in mind, it is important to be aware of what ...
  42. [42]
    The vexing but persistent problem of authorship misconduct in ...
    Cases of authorship misconduct that involve unwilling participants, such as instances of coercion by a senior academic, or when a researcher is unaware of being ...
  43. [43]
    Famous Plagiarism Cases in Science: Biggest History Scandals
    Aug 22, 2025 · What Constitutes Plagiarism in Scientific Research? · Case 1: Elias Alsabti – The Phantom Researcher · Case 2: Bharat Aggarwal – Curcumin ...
  44. [44]
    Popular French physicist accused of plagiarizing colleagues and ...
    Dec 7, 2016 · Accusations of serial plagiarism against one of France's best-known scientists have shaken the country's scientific community and the media.<|control11|><|separator|>
  45. [45]
    Authorship Issues When Articles are Retracted Due to Research ...
    Jul 7, 2022 · In some cases, researchers have revised and republished articles that were retracted due to misconduct, which raises some novel questions concerning authorship.
  46. [46]
    How common is academic plagiarism? - Impact of Social Sciences
    Feb 8, 2024 · A recent meta-analysis (combining the results of multiple previous studies) estimated that 2.9 percent of researchers had admitted to plagiarism.
  47. [47]
    Prevalence of Plagiarism in Manuscript Submissions and Solutions
    30% of submitted manuscripts included plagiarism from a previous publication of the senior author. 16% of submitted manuscripts included plagiarism from another ...
  48. [48]
    Cases of Plagiarism Handled by the United States Office of ...
    This paper is a historical review of the 19 ORI plagiarism cases, describing the characteristics of those respondents, the PHS administrative actions taken ...
  49. [49]
    Case Four: Accusations of Falsifying Data | ORI
    Allan's letter was to the Dean of Academic Affairs. In it he claimed that Richard had required him to falsify data and that much of the data Richard had ...
  50. [50]
    Manipulation and Misconduct in the Handling of Image Data - PMC
    Common examples (see Figure 1A) involve sections of an image that have been cloned or blended to clean up a dirty preparation or to mask an unwanted blemish.
  51. [51]
    Detecting Image Manipulation In Academic Publications | Straive
    Jun 16, 2022 · Examples include blending together images from various microscope fields to generate a unique image that appears to be a single field.
  52. [52]
    The do's and don'ts of scientific image editing - Nature
    Apr 29, 2025 · Some researchers, for instance, will use editing software, such as Adobe Photoshop, to 'clone' parts of an image and paste them onto other ...
  53. [53]
    The Prevalence of Inappropriate Image Duplication in Biomedical ...
    Jun 7, 2016 · Overall, 3.8% of published papers contained problematic figures, with at least half exhibiting features suggestive of deliberate manipulation.
  54. [54]
    The Prevalence of Inappropriate Image Duplication in Biomedical ...
    Overall, 3.8% of published papers contained problematic figures, with at least half exhibiting features suggestive of deliberate manipulation. The prevalence ...<|control11|><|separator|>
  55. [55]
    Medical image falsification in radiology science - ScienceDirect.com
    11.9% of 310 radiology researchers admitted to falsifying medical images. Cherry-picking nonrepresentative images was the most common form (50.3%).
  56. [56]
    Automatic detection of image manipulations in the biomedical ...
    Mar 14, 2018 · Looking at single journals in the same sample, this percentage ranged from 0.3% in Journal of Cell Biology to 12.4% in International Journal of ...
  57. [57]
    Top Harvard Medical School Neuroscientist Accused of Research ...
    Feb 1, 2024 · Khalid Shah, a prominent neuroscientist at Brigham and Women's Hospital, is accused of falsifying data and plagiarizing images across 21 papers.
  58. [58]
    What's in a picture? Two decades of image manipulation awareness ...
    Aug 12, 2024 · The meeting concluded with a consensus that image manipulation was a serious problem that all biomedical research journals were eventually going ...
  59. [59]
    Personality Traits Are Associated with Research Misbehavior in ...
    Sep 29, 2016 · Machiavellianism may be a risk factor for research misbehaviour. Narcissism and research misbehaviour were more prevalent among biomedical ...Statistical Analysis · Results · Discussion
  60. [60]
    The Effect of the Dark Triad on Organizational Fraud
    The Dark Triad (DT) consists of three negative traits: narcissism, psychopathy, and Machiavellianism. Previous research found that each of the Dark Triad traits ...The Dark Triad · The Fraud Triangle · Iv. Results
  61. [61]
    Effect of medical researchers' creative performance on scientific ...
    Dec 18, 2022 · The findings provide theoretical and practical implications for the prevention of medical researchers' scientific misconduct. Peer Review ...
  62. [62]
    Perspective: Research Misconduct - Academic Medicine
    Why Did the Respondents Violate the Rules? These acts of research misconduct seemed to be the result of the interaction of psychological traits and/or states ...
  63. [63]
    Leading the charge to address research misconduct
    Sep 1, 2021 · With expertise in behavior change, motivations, and incentives, psychological researchers are tackling misconduct at both the individual and ...
  64. [64]
    Why do scientists commit misconduct? - Retraction Watch
    Aug 29, 2016 · The term fraud almost always apples to the most severe forms of scientific misconduct, falsification, fabrication and plagiarism. On the ...<|separator|>
  65. [65]
    Research Misconduct: The Peril of Publish or Perish - PMC - NIH
    One of the most pressing problems for academic researchers is the career pressure to publish or perish. It is becoming increasingly common, despite alternative ...
  66. [66]
    The misalignment of incentives in academic publishing and ... - PNAS
    This has led to a “publish or perish” culture in academia as well as publication bias: Researchers face significant expectations to continuously produce and ...Missing: misconduct | Show results with:misconduct
  67. [67]
    Modelling science trustworthiness under publish or perish pressure
    Jan 10, 2018 · To better understand the impact of publish or perish on scientific research, and to garner insight into what practices drive the ...Introduction · Model outline · Results · Discussion
  68. [68]
    How Competition for Funding Impacts Scientific Practice - NIH
    Feb 13, 2024 · Researchers across all groups experienced that competition for funding shapes science, with many unintended negative consequences.
  69. [69]
    The costs of competition in distributing scarce research funds - PNAS
    In addition to these, competitive grant funding may also incentivize direct violations of core values of research integrity such as accountability and ...
  70. [70]
    Publication Pressure vs Research Integrity: Global Insights from an ...
    Aug 11, 2025 · The academic imperative to publish, often captured by the phrase “publish or perish,” has become a global phenomenon, exerting significant ...
  71. [71]
    Finding a Good Balance between Pressure to Publish and Scientific ...
    Jul 19, 2022 · The uncontrolled pressure to publish could represent a pathway to scientific misconduct since metrics such as numbers of publications and ...
  72. [72]
    Challenging 'publish or perish' culture—researchers call ... - Phys.org
    Apr 16, 2025 · The longstanding "publish or perish" culture in academia is coming under renewed scrutiny, as a new study published in Proceedings of the ...
  73. [73]
    Political homogeneity can nurture threats to research validity - PubMed
    Political homogeneity within a scientific field nurtures threats to the validity of many research conclusions by allowing ideologically compatible values to ...
  74. [74]
    Political homogeneity can nurture threats to research validity
    Sep 8, 2015 · Ideological homogeneity can nurture threats to the validity of research conclusions and can be especially damaging to external and construct validity.Missing: integrity | Show results with:integrity
  75. [75]
    Grievance studies hoaxer: research 'replicating dominant ideology'
    Apr 6, 2023 · 'Ideological capture' of research and clampdowns on dissent are spreading, argues Peter Boghossian.
  76. [76]
    Funding (Sponsorship) bias - The Embassy of Good Science
    Feb 9, 2023 · When researchers distort the results or modify conclusions of their study due to pressure of commercial or not-for-profit funders of the study, they engage in ...Missing: fraud | Show results with:fraud
  77. [77]
    Can a good tree bring forth evil fruit? The funding of medical ...
    Systematic reviews analysing the influence of funding on the conduct of research have shown how Conflicts of Interest (COIs) create bias in the production ...
  78. [78]
    The Politicization of Research Ethics and Integrity and its ...
    Oct 28, 2024 · One is the political 'weaponization' of REI claims and allegations to discredit individuals, institutions or ideas and scientific concepts with ...
  79. [79]
    Political Homogeneity in Academia | Frontpage Mag
    May 17, 2018 · Such uniformity creates an ideological bubble that insulates academics from those with opposing perspectives.
  80. [80]
    Herding, social influences and behavioural bias in scientific research
    Deliberate fraud is rare. More usually, mistakes result from the excessive influence of scientific conventions, ideological prejudices and/or unconscious bias; ...
  81. [81]
    The Steep Price of Political Homogeneity (Opinion) - Education Week
    Jan 11, 2017 · Political homogeneity undermines scholarship both by limiting the questions that we ask and restricting our ability to eliminate error.
  82. [82]
    5 Incidence and Consequences | Fostering Integrity in Research
    In an earlier survey of scientists funded by the National Institutes of Health (NIH), less than 1 percent of respondents self-reported engaging in falsification ...
  83. [83]
    Retractions Increase 10-Fold in 20 Years - and Now AI is Involved
    Retractions are rising in medical research literature, even as more eyes examine peer-reviewed papers for accuracy. AI is powering an arms race in the world ...Missing: 2000-2025 | Show results with:2000-2025
  84. [84]
    Biomedical paper retractions have quadrupled in 20 years — why?
    May 31, 2024 · Of all the retracted papers, nearly 67% were withdrawn owing to misconduct and around 16% for honest errors. The remaining retractions did not ...
  85. [85]
    More than 10000 research papers were retracted in 2023 - Nature
    smashing annual records — as publishers struggle to clean up ...
  86. [86]
    Linking citation and retraction data reveals the demographics of ...
    The data suggest that approximately 4% of the top-cited scientists have at least 1 retraction. This is a conservative estimate, and the true rate may be higher ...
  87. [87]
    Understanding the patterns and magnitude of life science ...
    Jun 21, 2025 · Our results revealed a 12–20% increase in retractions over decades in conference proceedings as well as journals with most of the authors of these studies from ...
  88. [88]
    Science map of academic misconduct - PMC - NIH
    Feb 19, 2024 · This study reveals the widespread occurrence of academic misconduct across various topics, but the severity of misconduct varies significantly among them.
  89. [89]
    Scientific Misconduct in Psychology: A Systematic Review of ...
    Mar 29, 2019 · In survey studies, self-admission rates for data falsification ranged between 0.6% and 2.3%. Prevalence estimates for the involvement of other ...
  90. [90]
    Science map of academic misconduct
    Feb 19, 2024 · Our findings suggest that the magnitude of academic misconduct varied widely across disciplines or topics, and the patterns of reasons for ...
  91. [91]
    ORI Introduction to the Responsible Conduct of Research
    Introduction · Basic responsibilities · Research environment · Supervision and review · Transition to researcher · Questions · Resources.
  92. [92]
    Best Practices For Promoting Research Integrity
    Training and Responsibilities · Appropriate standards of scientific conduct (through instruction, by example, and ideally, by written guidelines) · Authorship ...
  93. [93]
    Preventing research misconduct | Amsterdam UMC
    Preventing research misconduct · training and supervision; · fostering a research culture promoting integrity; · proper data management; · stimulating fair ...
  94. [94]
    Research Policies: The Conduct of Science - HHMI
    These guidelines describe general standards for conduct in research and scholarship. They are intended to establish a common understanding of expectations and ...
  95. [95]
    Institutional policies | ORI - The Office of Research Integrity
    Research misconduct can put individuals at risk, if, for example, the misconduct affects information that is used for making medical or public decisions.
  96. [96]
    Research Misconduct - University Policies and Standards
    Aug 26, 2022 · Research Integrity Officer (RIO) is designated by the VPR to oversee the university's research misconduct process. The RIO serves as the intake ...
  97. [97]
    Quality of reports of investigations of research integrity by academic ...
    Feb 19, 2019 · Our analyses identify important deficiencies in the quality and reporting of institutional investigation of concerns about the integrity of a ...Abstract · Discussion · Author Information
  98. [98]
    Institutional Research Misconduct Reports Need More Credibility
    Mar 12, 2018 · This Viewpoint highlights the inadequacy and lack of transparency of most research institutions' responses to allegations of research misconduct<|separator|>
  99. [99]
    Frequently Asked Questions | ORI - The Office of Research Integrity
    Aug 4, 2016 · The Office of Research Integrity (ORI) oversees and directs U.S. Public Health Service (PHS) research integrity activities on behalf of the ...
  100. [100]
    NIH Actions and Oversight after Findings of Research Misconduct
    Sep 9, 2024 · Learn about the steps NIH and other associated federal agencies take when findings of research misconduct are confirmed.
  101. [101]
    Why is plagiarism apparently more common in research funded by ...
    Mar 15, 2024 · What's more, researchers funded by the NSF are around twice as likely to be found guilty of research misconduct than those funded by the US ...
  102. [102]
    Universities oppose plan to bolster federal research oversight
    Apr 2, 2024 · The Office of Research Integrity is considering stronger regulations for institutional investigations of alleged research misconduct.
  103. [103]
  104. [104]
    Institutional capacity to prevent and manage research misconduct
    Jul 12, 2023 · Gupta A. Fraud and misconduct in clinical research: A concern. Perspect Clin Res. 2013;4(2):144–7. Article Google Scholar.
  105. [105]
    7 Addressing Research Misconduct and Detrimental Research ...
    The current policy framework assigns specific responsibilities to institutions and to sponsoring agencies. While the current framework has achieved stability ...Uncovering Research... · Research Misconduct And... · Other Issues, Gaps, And...<|separator|>
  106. [106]
    Why Universities Should Make Misconduct Reports Public - PMC
    It closes with a call for disclosure of such reports as a default. Keywords: research misconduct, fraud, investigations, universities. Introduction. In 2012, ...
  107. [107]
    Peer review: a flawed process at the heart of science and journals
    Peer review sometimes picks up fraud by chance, but generally it is not a reliable method for detecting fraud because it works on trust. A major question, which ...
  108. [108]
    Peer Review in Scientific Publications: Benefits, Critiques, & A ...
    Despite the positive impacts of peer review, critics argue that the peer review process stifles innovation in experimentation, and acts as a poor screen against ...
  109. [109]
    Can peer review police fraud? | Nature Neuroscience
    We do not agree that the peer review system can or should detect deliberate fraud. Science is a communal enterprise built on trust.
  110. [110]
    How journals can prevent, detect and respond to misconduct
    Dyer C (2011) The fraud squad. Br Med J 342:d4017. Article Google Scholar. Schiermeier Q (2011) Research misconduct confirmed at German clinic – March 04, 2011.
  111. [111]
    Role of editors and journals in detecting and preventing scientific ...
    Opportunities for editors are new technologies for detecting misconduct, policies by editorial organization or national institutions, and greater transparency ...Missing: functions | Show results with:functions
  112. [112]
    Investigating and preventing scientific misconduct using Benford's Law
    Apr 11, 2023 · Investigating and preventing scientific misconduct using Benford's Law. Gregory M. Eckhartt &; Graeme D. Ruxton. Research Integrity and ...
  113. [113]
    Fraud and Peer Review: An Interview with Melinda Baldwin
    Mar 24, 2022 · I think peer review is not set up to detect fraud, and we shouldn't be surprised when fabricated data makes it through a peer review process.
  114. [114]
    The ability of different peer review procedures to flag problematic ...
    Nov 29, 2018 · Some argue that peer review was never intended to track fraud and cannot be expected to do so (Biagioli 2002; Smith 2006).
  115. [115]
    Editors as gatekeepers of responsible science - Biochemia Medica
    Journal article is the best publicly visible documentation of research activity so that fraud or misconduct in science is often first discovered in ...Missing: functions | Show results with:functions
  116. [116]
    The limitations to our understanding of peer review - PMC
    On the one hand, criticisms levied towards peer review can be seen as challenging scientific legitimacy and authority and therefore creates resistance towards ...
  117. [117]
    Continued use of retracted papers: Temporal trends in citations and ...
    Dec 1, 2021 · However, postretraction citations can persist for more than 10 years (Kim, Yi et al., 2019; Schneider et al., 2020), and retracted papers may ...
  118. [118]
    The Citation of Retracted Papers and Impact on the Integrity of the ...
    Feb 10, 2025 · Generally, retracted papers should not be cited, but there are exceptions, such as academic discussions of retracted papers.Why Do Academics Cite... · Retracted Papers May 'Corrupt... · A Reflection on the...
  119. [119]
    Reproducibility of Scientific Results
    Dec 3, 2018 · A number of recent high-profile cases of scientific fraud have contributed considerably to the amount of press around the reproducibility crisis ...
  120. [120]
    Unreliable | Columbia University Press
    Bias, Fraud, and the Reproducibility Crisis in Biomedical Research. Csaba ... Unreliable exposes the various factors that contribute to the reproducibility crisis ...
  121. [121]
    The impact of fraudulent and irreproducible data to the translational ...
    The impact of fraudulent and irreproducible data to the translational research crisis - solutions and implementation. J Neurochem. 2016 Oct:139 Suppl 2:253 ...
  122. [122]
    Reproducibility and transparency: what's going on and how can we ...
    Jan 27, 2025 · Reproducibility in science involves the ability of independent research groups to obtain the same main result as that reported in the original publication in ...
  123. [123]
    Retracted studies may have damaged public trust in science, top ...
    Jun 5, 2020 · Retractions by two of the world's leading journals could do lasting harm in an environment where many already distrust scientists.
  124. [124]
    5 key findings about public trust in scientists in the U.S.
    Aug 5, 2019 · And small shares see repercussions for misconduct as commonplace: No more than two-in-ten U.S. adults say scientists in each field of work face ...
  125. [125]
    Fudged research results erode people's trust in experts - Nature
    Jul 26, 2019 · Reports of research misconduct have been prominent recently and probably reflect wider problems of relying on dated integrity protections. The ...
  126. [126]
    The MMR vaccine and autism: Sensation, refutation, retraction ... - NIH
    The systematic failures which permitted the Wakefield fraud were discussed by Opel et al.[15]. IMPLICATIONS. Scientists and organizations across the world ...
  127. [127]
    Fifteen Years After A Vaccine Scare, A Measles Epidemic - NPR
    May 22, 2013 · A measles epidemic in Wales that has infected more than 1000 people is the fallout from a fraudulent paper linking the vaccine and autism ...
  128. [128]
    Fraud and Deceit in Medical Research | Voices in Bioethics
    Jan 26, 2022 · Scientific misconduct undermines the trust among researchers and the public's trust in science. Meanwhile, fraud in medical trials may lead to ...
  129. [129]
  130. [130]
    Financial costs and personal consequences of research misconduct ...
    Aug 14, 2014 · Most retractions are associated with research misconduct, entailing financial costs to funding sources and damage to the careers of those committing misconduct.
  131. [131]
    Protecting Whistleblowers--Tell ORI What You Think! | Science | AAAS
    Unfortunately, blowing the whistle on scientific misconduct can ... the False Claims Act [which] is an extremely powerful whistleblower protection law.
  132. [132]
    How ex-WSU neuroscientist won whistleblower case against Detroit ...
    Jun 14, 2023 · The VA maintained that it hadn't viewed Kreipke as a whistleblower when it investigated him for scientific misconduct. The agency also ...
  133. [133]
    Whistleblower Retaliation Verdicts and Settlements Under Federal ...
    Aug 20, 2024 · ... Whistleblower Act, Researcher was terminated in retaliation for reporting research misconduct. Judith Zimmerman, an assistant professor at the ...
  134. [134]
    Handling Misconduct - Whistleblowers | ORI
    Handling Misconduct - Whistleblowers · 1. Upon receipt of a whistleblower retaliation complaint, the responsible official shall notify the whistleblower of ...
  135. [135]
    Life after whistleblowing | Times Higher Education (THE)
    Jul 31, 2014 · Some of higher education's most prominent whistleblowers paint a bleak picture about the impact on their subsequent careers.
  136. [136]
    On the Willingness to Report and the Consequences of Reporting ...
    Feb 26, 2020 · Specifically, we elucidate the processes that affect researchers' ability and willingness to report research misconduct, and the likelihood of ...
  137. [137]
    The Problem of Piltdown Man | Science History Institute
    May 4, 2023 · Seduced by a racist idea, archaeologists hyped an outrageous hoax. ... Charles Dawson was a lawyer, though perhaps in name only. As a young man in ...
  138. [138]
    People and Discoveries: Piltdown Man is revealed as fake - PBS
    Weiner, Oakley, and Oxford anthropologist Wilfrid Le Gros Clark were now certain that the Piltdown fossil collection was a fake, and not just that, but a hoax.
  139. [139]
    N-rays exposed | Research | The Guardian
    Sep 1, 2004 · Blondlot emerged as either a fraud or a fool, and his career never recovered from this public ignominy. Today, n-rays serve as a reminder of ...
  140. [140]
    Rethinking Psychology gone wrong | BPS
    May 13, 2025 · Leslie Hearnshaw, Burt's biographer, concluded that Burt produced spurious data on monozygotic twins, fabricated figures on declining ...
  141. [141]
    The Cyril Burt Question: New Findings | Science
    Cyril Burt presented data in his classic paper "Intelligence and social mobility" that were in perfect agreement with a genetic theory of IQ and social ...
  142. [142]
    Lancet, NEJM retract controversial COVID-19 studies based on ...
    Jun 4, 2020 · The Lancet and the New England Journal of Medicine have retracted the articles because a number of the authors were not granted access to the underlying data.
  143. [143]
    Two elite medical journals retract coronavirus papers over data ...
    Jun 4, 2020 · The Lancet paper was what brought Surgisphere under scrutiny as it focused on the safety and effectiveness of the malaria drug ...
  144. [144]
    Retraction Note: Room-temperature superconductivity in a ... - Nature
    Sep 26, 2022 · 06 October 2022. The Retraction Note was updated to reflect reception of statements of disagreement from all authors.
  145. [145]
    Room-temperature superconductivity study retracted | Science | AAAS
    Sep 26, 2022 · On Monday Nature retracted the study, citing data issues other scientists have raised over the past 2 years that have undermined confidence in one of two key ...
  146. [146]
    Superconductor researcher loses fifth paper - Retraction Watch
    Jun 21, 2024 · Ranga Dias, the physics researcher whose work on room-temperature superconductors has been retracted after coauthors raised concerns about the data, has lost ...
  147. [147]
    Third room temperature superconductivity paper retracted as group's ...
    Nov 9, 2023 · In September 2022, against Dias's objections, Nature decided to retract the carbonaceous sulfur hydride paper, following questions 'raised ...
  148. [148]
    Honesty researcher committed research misconduct, according to ...
    Mar 15, 2024 · Honesty researcher Francesca Gino “committed research misconduct intentionally, knowingly, or recklessly,” according to an investigation completed last year by ...
  149. [149]
    Harvard Revokes Tenure From Francesca Gino, Business School ...
    May 27, 2025 · Datar placed Gino on unpaid administrative leave, barred her from campus, and revoked her named professorship in June 2023. The same month, Data ...
  150. [150]
    Embattled Harvard honesty professor accused of plagiarism - Science
    Apr 9, 2024 · Acland focused on plagiarism, rather than data issues, because of her experience detecting it in student work. She searched phrases from Gino's ...
  151. [151]
    Detection of Fraud in a Clinical Trial Using Unsupervised Statistical ...
    An unsupervised approach to central monitoring, using mixed-effects statistical models, is effective at detecting centers with fraud or other data anomalies in ...
  152. [152]
    (PDF) Nine Ways to Detect Possible Scientific Misconduct in ...
    Aug 6, 2025 · Nine relatively less complex ways for detecting potentially fabricated data in small samples (N < 200) are presented, using data from articles published since ...
  153. [153]
    Tools of the data detective: A review of statistical methods to detect ...
    Feb 1, 2025 · The purpose of the present study was to review a collection of existing statistical tools to detect data fabrication, assess their strengths and limitations.
  154. [154]
    Detection or Deception: The Double-Edged Sword of AI in Research ...
    Dec 12, 2024 · New artificial intelligence tools help scientists fight back against a rising tide of research misconduct, but is it enough?
  155. [155]
    Springer Nature unveils two new AI tools to protect research integrity
    Jun 12, 2024 · SnappShot, also developed in-house, is an AI-assisted image integrity analysis tool.
  156. [156]
    Scientific fraud: is poor experimental reproducibility a smoking gun
    May 27, 2024 · The number of retracted scientific papers has increased over the years, partly due to better detection methods and increased scrutiny.
  157. [157]
    Post-publication Peer Review with an Intention to Uncover Data ...
    Jun 28, 2023 · Recent findings from analyses of PubPeer postings have shown that two-thirds of the comments are posted to report some kind of misconduct ( ...
  158. [158]
    The PubPeer conundrum: Administrative challenges in research ...
    Aug 13, 2024 · Recently, PubPeer comments have led to a significant number of research misconduct proceedings – a development that could not have been ...
  159. [159]
    AI tools are spotting errors in research papers - Nature
    Mar 7, 2025 · The Black Spatula Project is an open-source AI tool that has so far analysed around 500 papers for errors. The group, which has around eight ...
  160. [160]
    AI for scientific integrity: detecting ethical breaches, errors ... - Frontiers
    Sep 1, 2025 · We also investigate emerging AI-powered systems aimed at identifying errors in published research, including tools for statistical verification, ...
  161. [161]
    Scientific Utopia: II. Restructuring Incentives and Practices to ...
    Exploring scientific misconduct: Isolated individuals, impure institutions, or an inevitable idiom of modern science? Journal of Bioethical Inquiry, 5, 271 ...
  162. [162]
    Faculty promotion must assess reproducibility - Nature
    Sep 14, 2017 · Promotion criteria at HMS have changed over time. It was once almost impossible to advance to professor by contributing mainly to important ...
  163. [163]
    Academic criteria for promotion and tenure in biomedical sciences ...
    Jun 25, 2020 · Changes to the criteria used to assess professors and confer ... Faculty promotion must assess reproducibility. Nature 2017;549:133 ...
  164. [164]
    Roundtable on Aligning Incentives for Open Scholarship
    The Roundtable on Aligning Incentives for Open Science was launched in 2019 to help shift incentives to better recognize and reward research excellence, ...
  165. [165]
    NSF Public Access Initiative | NSF - National Science Foundation
    NSF is funding a cohort of 10, three-year, multi-institutional projects to start in 2023 to build and enhance national coordination among researchers and other ...
  166. [166]
    NASA Open Science Funding Opportunities
    NASA offers funding opportunities for projects that enable open science. Learn more about current and past open science solicitations.
  167. [167]
    ERIC Forum publishes recommendations to increase reproducibility ...
    Oct 20, 2021 · The report includes seven ways to increase research quality and reproducibility ranging from reporting, communications, quality management and funding among ...
  168. [168]
    $1.5 million program targets changes to academic incentives
    Oct 6, 2025 · Weekend reads: A museum of scientific misconduct?; authorship misconduct; uproar over renamed phyla.
  169. [169]
    NSF Fellows' perceptions about incentives, research misconduct ...
    Apr 7, 2023 · The survey posed questions on cheating, research misconduct, formal integrity training and ethical environments, as well as the overall positives and negatives ...
  170. [170]
    TOP Guidelines - Center for Open Science
    Updated in 2025, TOP includes seven Research Practices, two Verification Practices, and four Verification Study types. These components provide recommendations ...
  171. [171]
    TOP 2025: An Update to the Transparency and Openness ... - OSF
    Feb 1, 2025 · The final version—TOP 2025—provides updated guidelines for promoting the verifiability of published empirical research claims. Show more.
  172. [172]
    TOP Factor - Transparency and Openness Promotion
    This TOP Factor is a metric that reports the steps that a journal is taking to implement open science practices, practices that are based on the core principles ...
  173. [173]
    Preregistration - Center for Open Science
    Preregistration is specifying your research plan in advance and submitting it to a registry, separating hypothesis-generating from testing research.
  174. [174]
    The preregistration revolution | PNAS
    Preregistration is a solution that helps researchers maintain clarity between prediction and postdiction and preserve accurate calibration of evidence.
  175. [175]
    Full article: The benefits of preregistration and Registered Reports
    Depending on how preregistration is organised, this practice can have additional benefits, such as making studies known to potential participants or other ...
  176. [176]
    Enhancing Reproducibility through Rigor and Transparency
    Sep 9, 2024 · This webpage provides information about the efforts underway by NIH to enhance rigor and reproducibility in scientific research.
  177. [177]
    Eleven strategies for making reproducible research and open ...
    Nov 23, 2023 · Reproducible research and open science practices have the potential to accelerate scientific progress by allowing others to reuse research ...
  178. [178]
    Replication to Enhance Research Impact Initiative
    The Replication Initiative supports replicating research and validating technologies, aiming to improve reproducibility and enhance research rigor.
  179. [179]
    NIH launches initiative to double check biomedical studies - Science
    Dec 26, 2024 · NIH launches initiative to double check biomedical studies. But so far, few investigators seem interested in having a contract lab repeat their experiments.
  180. [180]
    Rigor and Transparency Initiatives - Center for Open Science
    Rigor and Transparency Initiatives (RTIs) that raise awareness, shift incentives, and initiate culture and behavior change to accelerate discovery of knowledge.
  181. [181]
    A community-led initiative for training in reproducible research - eLife
    Jun 21, 2021 · The Reproducibility for Everyone initiative aims to provide researchers at all career stages and across many disciplines with training in ...
  182. [182]
    Undeclared AI-Assisted Academic Writing as a Form of Research Misconduct
    Article positing undeclared AI-assisted writing as research misconduct, similar to plagiarism or ghostwriting.
  183. [183]
    Angela Bogdanova, the First Digital Author Persona
    ORCID profile for the AI persona Angela Bogdanova, credited by Aisentica Research Group.