
Data fabrication

Data fabrication refers to the intentional invention of data or results in scientific research, without any underlying empirical basis, and the subsequent recording or reporting of them as genuine findings. This practice constitutes a primary form of research misconduct, distinct from falsification (which involves manipulation of existing data) and plagiarism, and is explicitly prohibited by federal regulations in the United States, where it is defined as conduct undermining the reliability of research outcomes. Empirical surveys indicate that data fabrication occurs across scientific disciplines, with self-reported admissions suggesting that approximately 2% of researchers have engaged in it at least once, though underreporting likely means this figure underestimates the true rate. Prevalence appears higher in competitive environments driven by "publish or perish" incentives, where career advancement hinges on publication volume rather than methodological rigor, and has been noted among both junior fieldworkers and senior investigators seeking to sustain funding or prestige. Detection challenges persist because fabricated datasets can mimic realistic patterns, but advances in forensic statistics, such as analyzing digit distributions for Benford's law violations or flagging improbably uniform variances, have enabled post-publication identification in cases where inconsistencies emerge. The ramifications of data fabrication extend beyond individual culpability, eroding trust in scientific institutions and diverting resources toward invalid pursuits, as retracted papers based on fabricated results propagate errors in subsequent studies. Regulatory responses include institutional investigations, funding bans, and journal retractions, often culminating in career termination for perpetrators, yet systemic pressures in academia, exacerbated by evaluation metrics favoring quantity over quality, perpetuate vulnerabilities. Efforts to mitigate fabrication emphasize transparent data sharing, preregistration of studies, and statistical auditing protocols to strengthen accountability in research claims.

Definition and Scope

Core Definition

Data fabrication is the deliberate invention of data, results, or observations that did not arise from actual research activities, followed by their recording, reporting, or presentation as authentic. This form of research misconduct, classified as an integrity violation by bodies such as the U.S. Office of Research Integrity (ORI), explicitly involves "making up data or results and recording or reporting them," distinguishing it from unintentional errors or methodological flaws. It encompasses scenarios where researchers fabricate entire datasets, such as inventing experimental outcomes, patient records, or survey responses that never existed, often to support preconceived hypotheses or meet publication pressures. The intent behind fabrication is central to its identification as misconduct, requiring evidence of knowing deception rather than mere error or carelessness in data handling. Unlike falsification, which alters genuine data through manipulation (e.g., selectively omitting data points or changing recorded values), fabrication generates wholly fictitious content, such as reporting non-existent trials or measurements, thereby misrepresenting the research record from inception. This practice erodes the epistemic foundations of science, as fabricated findings can propagate erroneous conclusions, mislead subsequent researchers, and waste resources on downstream work built upon falsehoods. Detection often relies on statistical anomalies, like improbable digit distributions in numerical data, or inconsistencies with raw records, though proving intent demands thorough investigation. Fabrication thus differs fundamentally from falsification, which entails manipulating, altering, or selectively omitting existing data, materials, equipment, or processes to misrepresent accurately obtained results, rather than creating nonexistent evidence outright.
For instance, changing numerical values in a dataset after collection constitutes falsification, whereas generating an entirely fictitious dataset without any underlying experiment exemplifies fabrication; both undermine scientific validity but originate from distinct causal mechanisms: modification of reality versus wholesale invention. Fabrication must also be distinguished from plagiarism, the third pillar of defined research misconduct, which centers on the unauthorized appropriation of another person's ideas, methods, results, or textual content without proper attribution, without necessarily involving the creation or alteration of empirical evidence. While plagiarism erodes intellectual credit and can propagate false claims if uncredited results are misrepresented, it does not require fabricating evidence; a plagiarized paper might report real data from elsewhere but fail to cite the source, contrasting with fabrication's direct assault on authenticity through invention. Federal policy under the U.S. Public Health Service explicitly limits research misconduct to these three categories (fabrication, falsification, and plagiarism), excluding practices such as authorship disputes unless they involve FFP elements. Beyond FFP, fabrication contrasts with questionable research practices (QRPs), such as p-hacking (manipulating analyses to achieve statistical significance) or HARKing (hypothesizing after results are known), which may inflate error rates and contribute to reproducibility crises but lack the deliberate deceit required for misconduct classification. QRPs often stem from incentives like publication pressure rather than intent to fabricate, though empirical surveys indicate they correlate with higher self-reported misconduct rates; unlike fabrication, they typically operate on real data and do not invent outcomes ex nihilo. Honest errors, such as calculation mistakes or unintentional oversights, further diverge from fabrication, as misconduct demands knowing or reckless deception, not mere error, per established policy.
This intentionality threshold ensures that systemic biases in academic evaluation, which favor novel results, do not conflate incentivized corner-cutting with outright fraud, though both erode trust in empirical claims.

Prevalence in Scientific Fields

A 2009 meta-analysis of survey data estimated that 1.97% (95% CI: 0.86–4.45) of scientists self-reported having fabricated, falsified, or modified data or results at least once in their careers, based on results pooled from seven studies. In contrast, surveys on misconduct observed among colleagues yielded higher rates, with 14.12% (95% CI: 9.91–19.72) reporting knowledge of falsification across twelve studies. These figures encompass fabrication, defined as inventing data entirely, as a subset of serious misconduct, though self-reports are prone to underestimation due to social desirability bias and fear of repercussions. Prevalence varies by discipline, with higher rates documented in biomedical and pharmacological fields compared to others when controlling for methodological factors in surveys. A 2023 meta-analysis focused on biomedical research found self-reported data fabrication at 4.0% (95% CI: 1.6–6.4) and falsification at 9.7% (95% CI: 5.6–13.9), while non-self-reported measures (e.g., colleague observations or indirect indicators) indicated substantially elevated figures of 21.7% (95% CI: 14.8–28.7) for fabrication and 33.6% (95% CI: 8.1–59.1) for falsification. Systematic reviews of surveys in some fields report self-admission rates for fabrication or falsification below 2%, but observed involvement of others ranges from 9.3% to 18.7%. Fields reliant on large-scale, collaborative experiments, such as physics and chemistry, exhibit lower reported rates, attributable to the difficulty of fabricating verifiable experimental outputs without detection. More recent surveys underscore ongoing concerns, with a 2021 anonymous study across Dutch universities estimating that 8% of participating researchers admitted to falsifying or fabricating data at least once. This figure aligns with a broader survey indicating one in twelve scientists engaged in such practices within the prior three years.
Detection challenges obscure the true prevalence, as only a fraction of cases lead to retractions or investigations; for instance, while retraction rates hover around 0.1–1% of published papers, survey-based estimates suggest the true incidence of fabrication is orders of magnitude higher, particularly in disciplines with opaque data practices or high publication pressures.

Historical Development

Pre-20th Century Instances

In 1725, colleagues of Johann Bartholomäus Adam Beringer, a professor of medicine and director of the botanical garden at the University of Würzburg, conspired to fabricate limestone specimens etched with images of plants, animals, astronomical bodies, Hebrew letters, and even a comet to ridicule his enthusiasm for natural history. These "lying stones" (Lügensteine) were planted in a local quarry and presented to Beringer as genuine fossils, prompting him to collect over 2,000 examples and publish Lithographiae Wirceburgensis in 1726, interpreting them as evidence of divine creation predating the biblical Flood. The hoaxers, including privy councilor Johann Georg von Eckhart and mathematician Johann Ignaz Roderick, confessed after Beringer accused them of theft to recover publication costs, leading to a court ruling against them for fraud; Beringer, however, lost his position and reputation, while the stones, many destroyed by him in embarrassment, survive in museums as artifacts of early paleontological deception. This incident exemplifies specimen fabrication to exploit scholarly vanity, predating modern scientific norms and highlighting vulnerabilities in pre-institutionalized verification processes. Earlier, in 1572, the naturalist Ulisse Aldrovandi acquired and cataloged a composite specimen purporting to be a dragon, assembled from parts including a grass snake's head and tail, a fish body (likely perch or carp), and toad legs, which he displayed in his museum and described as a genuine natural prodigy in material later published in his Monstrorum Historia. Aldrovandi's acceptance of the fake influenced natural history treatises for over a century, reinforcing mythological interpretations of zoological rarities amid the limited empirical standards for specimen authentication of the early modern period. Such fabrications, often by unnamed artisans or collectors, blurred lines between empirical observation and invention, as dissected specimens were scarce and authentication techniques rudimentary, allowing crude composites to pass as data supporting pre-scientific cosmologies.
In 1869, George Hull commissioned a 10-foot gypsum statue, buried and exhumed in Cardiff, New York, as a "petrified giant" to mock biblical literalism and profit from public curiosity, deceiving some initial examiners who debated its antiquity before gypsum tool marks revealed the fraud. Exhibited for admission fees generating thousands of dollars, the hoax even spawned an unauthorized replica exhibited by showman P. T. Barnum, but it collapsed under scrutiny from microscopic analysis and historical quarry records, underscoring 19th-century tensions between emerging geological science and religious claims reliant on anomalous "data." These cases, drawn from natural history and paleontology, illustrate data fabrication via physical artifacts before standardized peer review, often driven by personal vendettas, financial gain, or ideological provocation rather than career advancement in formalized academia.

20th Century Cases and Recognition

One prominent case occurred in 1974 when William Summerlin, a researcher at the Sloan-Kettering Institute for Cancer Research, claimed success in transplanting skin grafts between genetically dissimilar mice without rejection after a period of in vitro tissue culture, alongside related claims about corneal transplants. Laboratory technicians discovered the fraud when they removed black markings on white mice grafts using alcohol swabs, revealing that Summerlin had used a felt-tip pen to simulate graft acceptance. An internal investigation confirmed fabrication in at least two instances, leading to Summerlin's suspension, the retraction of related claims, and his eventual dismissal with a year's salary; the scandal prompted broader scrutiny of the institute's oversight under director Robert Good. In 1981, John R. Darsee, a young cardiologist in a Harvard Medical School laboratory, admitted to fabricating data in a dog experiment on cardiac physiology, which unraveled a pattern of deception spanning over 100 publications. Investigations revealed falsified results dating back to his time at Emory University, including manipulated autoradiographs and nonexistent experiments, resulting in the retraction of numerous papers co-authored with senior researchers such as Eugene Braunwald. Darsee received a 10-year ban from receiving federal research funds in 1983, and the case exposed systemic failures in supervision, as mentors had endorsed his prolific output without verifying raw data. That same year, Mark D. Spector, a graduate student in Efraim Racker's lab at Cornell University, resigned after falsifying data in a high-profile Science paper claiming that hexokinase isozyme III activates glycolysis to fuel cancer cell proliferation. Discrepancies in enzyme assays and glucose uptake measurements, confirmed upon retesting by colleagues, indicated deliberate alteration of results to support a novel mechanism; the paper was retracted, damaging Racker's reputation despite his non-involvement.
Spector's fraud involved manipulating experimental outcomes to align with expected hypotheses, highlighting vulnerabilities in fast-paced biochemical research. The Cyril Burt controversy, emerging posthumously after the British psychologist's death in 1971, involved allegations that his twin studies underpinning IQ heritability estimates (around 0.77 for identical twins reared apart) relied on fabricated data attributed to apparently nonexistent researchers J. Conway and Margaret Howard. Leslie Hearnshaw's 1979 biography detailed inconsistencies in correlation coefficients and sample sizes that remained static despite claimed additions, leading to widespread acceptance of fraud by the 1980s, though defenders such as Ronald Fletcher argued for negligence over intent based on archival evidence. The case, rooted in Burt's work from the 1950s-1960s, fueled debates on hereditarian research amid ideological pressures but underscored the risks of unverifiable longitudinal data. These incidents, clustered in the 1970s and 1980s, marked a shift in recognition of data fabrication as a systemic issue rather than isolated anomalies, prompting U.S. congressional hearings in 1981 and 1988 that criticized federal agencies for inadequate oversight. The U.S. Public Health Service established formal misconduct policies by 1989, defining fabrication as "making up data or results and recording or reporting them," while journals such as Nature began routine data audits. By the late 1980s, retractions for misconduct rose, with cases like Robert Slutsky's questionable data (1986) illustrating emerging statistical detection methods, fostering a culture of heightened vigilance amid growing research pressures.

21st Century Surge and Replication Crisis

The number of scientific paper retractions due to data fabrication and related misconduct has surged in the 21st century, with biomedical retractions quadrupling between 2000 and 2021, rising from approximately 11 per 100,000 papers to nearly 45 per 100,000 by 2020. Overall retractions increased tenfold from about 40 annually in the early 2000s to around 400 by the 2010s, reaching over 10,000 in 2023 alone, many attributed to deliberate data manipulation or falsification. Retractions specifically tied to data problems have exceeded 75% of total cases in recent years, reflecting improved detection amid persistent fabrication practices. This rise coincides with the replication crisis, in which systematic attempts to reproduce published findings have failed at high rates, exposing underlying issues including outright fabrication. For instance, anonymous surveys indicate that 1-2% of researchers admit to fabricating or falsifying data at least once, contributing to irreproducibility alongside practices like selective reporting. In fields like psychology and cancer biology, replication projects, such as those attempting to verify landmark studies, have succeeded in only 11-46% of cases, prompting scrutiny that uncovered fabricated datasets in several high-profile retractions. The crisis has amplified detection of fabrication by incentivizing data sharing and preregistration, revealing that non-replicable results often stem from manipulated evidence rather than mere errors. Exacerbating factors include the proliferation of paper mills, which produce fraudulent manuscripts at scale, with fake articles doubling every 1.5 years as of 2025, often evading initial peer review through fabricated data mimicking legitimate results. Heightened publication pressures and the "publish or perish" culture have correlated with this surge, as evidenced by misconduct self-reports rising in surveys of early-career researchers.
The replication crisis, in turn, has driven reforms like mandatory data repositories, though persistent fabrication underscores systemic vulnerabilities in incentive structures prioritizing novelty over verifiability.

Mechanisms and Execution

Techniques of Fabrication

Data fabrication in scientific research primarily entails the invention of data or results that have no basis in actual experimentation, observation, or data collection, often to support a preconceived hypothesis or achieve publishable results. One core technique involves reporting outcomes from experiments or studies that were never conducted, such as claiming measurements from nonexistent samples or trials. For instance, in the 1980s case of John Darsee, an NIH investigation revealed that he fabricated data from fictional experiments across multiple papers, including cardiac studies where results were invented without performing the required animal tests. Another prevalent method is the manual or algorithmic generation of numerical datasets that mimic plausible variability but fail to replicate natural data distributions. Fabricators may intuitively assign values, such as rounding to convenient figures or patterning sequences, leading to anomalies like uniform digit distributions or identical standard deviations across conditions, which deviate from expected patterns like Benford's law for leading digits in real financial or empirical data. In psychological research, cases like Lawrence Sanna's work showed suspiciously consistent variances, suggesting data was contrived to yield uniform effect sizes rather than emerging from genuine variability. In fields reliant on surveys, clinical trials, or qualitative inputs, fabrication can include inventing participant responses, patient outcomes, or interview transcripts. Researchers might fabricate survey data by assigning fabricated Likert-scale answers or demographic details to phantom subjects, ensuring the aggregate supports desired conclusions. This technique was implicated in cases where data collection logs were absent or inconsistent, as seen in broader reviews of misconduct in which self-reported fabrication rates reached 1.97% among surveyed scientists. Computational and simulation-heavy disciplines enable fabrication through altered code outputs or simulated "results" presented as empirical.
For example, a fabricator might generate pseudo-random numbers via statistical software such as R or Python to simulate experimental noise, then report these as authentic observations without disclosing their simulated nature. Such methods exploit the opacity of black-box analyses, though they often produce implausibly extreme effect sizes, like Cohen's d values exceeding typical field norms (e.g., d > 0.95 versus 0.21–0.76 in social psychology). In laboratory settings, fabrication extends to inventing raw instrument readings, such as assay peaks or cell counts, without running the assays. The Hwang Woo-suk scandal (2004–2005) exemplified this, where stem cell derivation data for 100 patients was partially fabricated by inventing results for untested cases, paired with falsified images to corroborate claims. These techniques underscore fabrication's reliance on exploiting trust in unverifiable claims, often amplified by co-author complicity or inadequate record-keeping.
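The effect-size red flag described above can be sketched in a few lines of Python. This is an illustration, not a validated screening tool: the groups, values, and the 0.95 ceiling are invented here, with the ceiling taken from the range the text cites as typical for one field.

```python
# Sketch: compute Cohen's d for two groups and screen it against a
# field-typical ceiling. All data and the threshold are hypothetical.
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d using the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    va, vb = stdev(group_a) ** 2, stdev(group_b) ** 2
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled_sd

def implausibly_large(d, field_max=0.95):
    """Heuristic screen: effects far above field norms warrant scrutiny."""
    return abs(d) > field_max

control = [4.9, 5.1, 5.0, 4.8, 5.2, 5.0]
treated = [7.1, 7.0, 6.9, 7.2, 7.0, 7.1]  # suspiciously tight, huge shift
d = cohens_d(treated, control)
print(round(d, 2), implausibly_large(d))
```

An effect this large (d well above 10) is far outside typical empirical ranges; by itself it proves nothing, but it is the kind of anomaly that triggers requests for raw data.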

Tools and Software Involved

Data fabrication in scientific research often involves the misuse of readily available image editing software to alter visual representations such as western blots, microscopy images, and gel electrophoresis results. Adobe Photoshop has been frequently implicated in such manipulations, enabling researchers to duplicate bands, erase artifacts, or splice elements from unrelated images to fabricate evidence of experimental outcomes. For instance, forensic analyses of retracted papers have revealed cloned regions and adjusted brightness levels consistent with Photoshop's clone-stamp and healing tools, which can produce seamless alterations indistinguishable to the naked eye. Open-source alternatives like GIMP serve similar purposes, allowing free access to advanced editing features that facilitate fraudulent adjustments without licensing costs. Statistical and data analysis software packages are commonly exploited to generate synthetic datasets that mimic real experimental variability while concealing their artificial origins. Programs such as Python and R, equipped with libraries like Faker or NumPy, enable the programmatic creation of fabricated numerical data, including randomized values tuned to yield desired p-values or correlations. Microsoft Excel is another tool routinely used for simpler fabrications, where formulas and pivot tables can retroactively insert or interpolate data points to align with hypotheses, as evidenced in cases of inconsistencies flagged in post-publication audits. These tools' flexibility allows for "benign" simulations that escalate to fabrication when undisclosed, particularly in fields reliant on large datasets such as genomics or epidemiology. The advent of generative artificial intelligence has introduced sophisticated capabilities for data fabrication, amplifying risks since 2023. Tools like ChatGPT and similar large language models can produce entirely fabricated scientific abstracts, methodologies, and even tabular data that appear statistically plausible, as demonstrated in experiments where AI-generated content evaded initial human scrutiny.
Image-generating AIs such as Midjourney and DALL·E facilitate the creation of realistic yet nonexistent experimental visuals, including cellular micrographs or protein structures, which have appeared in submitted manuscripts and prompted retractions when origins were traced. These AI systems lower the barrier to entry for fabrication by automating the generation of coherent, contextually appropriate fakes, though their outputs often contain subtle anomalies detectable via forensic scrutiny.
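To see why auditing protocols must assume programmatic fabrication is trivial, the following standard-library Python sketch invents Likert-scale survey responses biased toward a chosen mean; every participant, value, and parameter here is hypothetical, and the point is the ease of the exercise, not a recipe.

```python
# Sketch: inventing "plausible" survey rows with a few lines of stdlib
# code. Participant IDs and all values are fabricated for illustration.
import random

random.seed(42)  # deterministic for the example

def fabricate_responses(n, target_mean=5.5):
    """Invent n Likert-style (1-7) responses biased toward target_mean."""
    rows = []
    for i in range(n):
        score = min(7, max(1, round(random.gauss(target_mean, 0.8))))
        rows.append({"participant": f"P{i:03d}", "response": score})
    return rows

data = fabricate_responses(50)
sample_mean = sum(r["response"] for r in data) / len(data)
print(len(data), round(sample_mean, 1))
```

Output like this passes superficial plausibility checks, which is precisely why detection efforts focus on deeper statistical signatures (digit distributions, variance structure) rather than surface realism.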

Indicators of Fabricated Data

Fabricated data in scientific research often displays statistical patterns inconsistent with genuine empirical observations, such as unnatural digit distributions or implausibly low variability. These anomalies arise because human-generated numbers deviate from natural processes, which produce data governed by measurement error, biological variability, and environmental noise. One key indicator involves digit analyses, particularly violations of the Newcomb-Benford Law (NBL), which predicts a logarithmic distribution of leading digits (1–9) in many real-world datasets spanning multiple orders of magnitude. Fabricated data frequently fails this test, showing uniform or otherwise deviant leading digit frequencies, as fabricators intuitively assign digits more evenly rather than following the law's skew toward lower digits (e.g., '1' appearing about 30% of the time). This method requires at least 250 observations in ratio-level data within a 1–100,000 range but is ineffective for normally or uniformly distributed variables. Terminal (rightmost) digit distributions provide another forensic clue: genuine data from rounded measurements should approximate uniformity (digits 0–9 equally likely), yet fabricated sets often exhibit patterns or deviations detectable via chi-square tests for uniformity. Humans fabricating numbers tend to avoid repetitions or favor certain digits subconsciously, producing non-random clusters rather than true uniformity. Variance analyses reveal further red flags, such as implausibly similar standard deviations across conditions or studies, which can be assessed through simulations to estimate their rarity under random sampling. For instance, in the Lawrence Sanna case, the observed uniformity in variances occurred in only 0.015% of simulated genuine datasets, signaling fabrication. Genuine data typically shows greater heterogeneity due to uncontrolled factors.
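The terminal-digit screen described above can be sketched as a plain chi-square goodness-of-fit test against a uniform distribution over digits 0-9; the dataset below is invented to show the "round ending" bias (0s and 5s) the text describes.

```python
# Sketch of a terminal-digit uniformity screen. Assumes integer-valued
# measurements; the fabricated-looking values are invented for illustration.
from collections import Counter

def terminal_digit_chisq(values):
    """Chi-square statistic vs. uniform last digits; large values are suspicious."""
    digits = [int(str(v)[-1]) for v in values]
    expected = len(digits) / 10
    counts = Counter(digits)
    return sum((counts.get(d, 0) - expected) ** 2 / expected for d in range(10))

# The inventor favoured "round" endings: every value ends in 0 or 5.
fabricated = [120, 135, 140, 155, 160, 175, 180, 195, 200, 215] * 5
print(round(terminal_digit_chisq(fabricated), 1))  # prints 200.0
```

With 9 degrees of freedom the 5% critical value is about 16.9, so a statistic of 200 is an extreme departure from the uniformity genuine rounded measurements would show.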
Tools like GRIM (Granularity-Related Inconsistency of Means) and its extensions (GRIMMER, SPRITE) detect impossibilities in reported summary statistics, such as means or standard deviations that are mathematically inconsistent with integer-valued responses and the reported sample sizes. These flag fabrication when simulations from reported aggregates fail to produce viable distributions, as seen in over 150 inconsistencies across Cornell Food and Brand Lab publications. Additional indicators include extreme effect sizes exceeding literature norms (e.g., Cohen's d > 0.95 versus typical 0.21–0.76 in social psychology), clustered p-values suggesting manipulation, and absent multivariate associations expected in real phenomena. While these methods are probabilistic and require raw data or detailed summary statistics, they have successfully identified fabrication in fields like psychology and the biomedical sciences when combined. Internal inconsistencies, such as mismatched timelines or impossible precision in experimental logs, also warrant scrutiny in raw records.
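A minimal version of the GRIM logic can be sketched as follows, assuming a single integer-scale item per participant (multi-item scales need the item count folded into the denominator); the example means and sample size are invented.

```python
# Sketch of a GRIM consistency check: with n integer-valued responses,
# the true mean must equal some integer total divided by n.
def grim_consistent(reported_mean, n, decimals=2):
    """True if reported_mean could be round(s / n, decimals) for integer s."""
    target = round(reported_mean, decimals)
    s = round(reported_mean * n)
    # Checking neighbouring totals absorbs rounding of the reported mean;
    # this window is adequate for modest n (roughly n < 100 at 2 decimals).
    return any(round(k / n, decimals) == target for k in (s - 1, s, s + 1))

print(grim_consistent(5.19, 28))  # no integer total over 28 yields 5.19
print(grim_consistent(5.18, 28))  # 145 / 28 = 5.1786 rounds to 5.18
```

A single GRIM failure can be a transcription error; clusters of failures across a paper's tables are what investigators treat as evidence of invented summary statistics.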

Underlying Causes

Individual Motivations

Individual researchers may fabricate data primarily to advance their careers amid the "publish or perish" imperative, where securing high-impact publications is crucial for tenure, promotions, and funding in competitive academic settings. This pressure is exacerbated by declining grant success rates, such as U.S. National Institutes of Health funding rates dropping to about 18% by 2015, prompting individuals to prioritize output over rigor to avoid professional obsolescence. Early-career scientists, in particular, face demands for multiple first-authored papers in prestigious journals, leading some to falsify results not merely for survival but to achieve prominence. Psychological factors also drive fabrication, including narcissistic tendencies and overconfidence, where researchers rationalize deception to fit a self-image of exceptional ability or to attain "superstar" status. High-achieving individuals, such as those with strong creative performance, may engage in moral licensing, perceiving past ethical or innovative contributions as justification for bending rules, a mechanism found to fully mediate the link to misconduct in empirical studies of medical researchers. Loss aversion further incentivizes risk-taking, as the fear of career setbacks from null results outweighs ethical constraints, compounded by chronic stressors such as job insecurity. Personal gain and avoidance of failure represent self-interested motivations, with fabrication offering a low-effort path to desired outcomes like financial rewards from grants or ideological validation, especially when perceived detection risks are minimal. In resource-scarce environments, individuals may cut corners to meet imposed targets, rationalizing actions through avarice or the need to maintain an appearance of productivity despite incremental results. These drivers manifest in decisions to invent data rather than report honest null findings, prioritizing short-term personal benefits over long-term scientific integrity.

Institutional and Cultural Incentives

The academic reward system, often characterized as "publish or perish," compels researchers to produce a high volume of publications to secure tenure, promotions, and funding, thereby incentivizing shortcuts such as data fabrication to meet output expectations. This pressure is exacerbated by quantitative metrics like journal impact factors and citation counts, which prioritize novel, positive results over replication studies that offer lower rewards. Surveys indicate that nearly three-quarters of biomedical researchers attribute the reproducibility crisis partly to these systemic demands, where failure to publish frequently risks career stagnation. Institutional funding mechanisms further amplify these incentives by tying grant renewal to demonstrated productivity and impactful findings, often favoring hypothesis-confirming outcomes that align with grantors' priorities rather than null or contradictory results. Hyper-competition for limited resources, including faculty positions, intensifies this dynamic, as evidenced by studies linking perceived publication pressure to self-reported willingness to engage in misconduct among scholars. In biomedicine, where retractions due to misconduct, including fabrication, account for over 67% of cases analyzed from 1996 to 2015, such pressures correlate with elevated misconduct rates. Culturally, academia's emphasis on novelty and priority discourages transparency in data practices, normalizing "questionable research practices" like selective reporting that border on fabrication while stigmatizing failures to produce "exciting" results. This ethos, reinforced by evaluation systems that reward eye-catching claims, contributes to a tolerance for unverified findings in high-stakes environments, as seen in rising retraction trends linked to output-driven cultures in regions with intense academic competition. Reforms advocating shifts toward open science and rewarding rigor over volume have been proposed, yet adoption lags due to entrenched institutional norms.

Ideological and Funding Pressures

Intense competition for research funding incentivizes fabrication, as securing grants is essential for career advancement and lab sustainability. In the United States, National Institutes of Health (NIH) funding success rates hover around 20-25% for grant applications, creating pressure to produce novel, positive results that appeal to reviewers. Researchers have admitted that budget constraints and grant dependency lead to corner-cutting, including data manipulation to demonstrate impact. For instance, in 2015, biomedical researcher Dong-Pyou Han was sentenced to 57 months in prison for falsifying data in NIH grant applications related to HIV vaccine studies, which secured over $7.5 million in funding before detection. Similarly, a cancer researcher fabricated data in 16 grant applications in 2022, resulting in a lifetime ban from federal funding. Ideological conformity within academic institutions amplifies these risks, particularly in fields dominated by homogeneous viewpoints, where dissenting results face rejection or scrutiny. Surveys indicate that commitment to preconceived narratives over empirical fidelity correlates with data manipulation, as principal investigators rationalize alterations to align with expected outcomes. Political and institutional pressures distort priorities, with funding agencies favoring hypotheses that reinforce prevailing paradigms, such as in social or environmental sciences, where non-conforming findings may jeopardize renewals. This dynamic, compounded by academia's documented left-leaning skew (e.g., ratios of liberal to conservative faculty exceeding 10:1 in the humanities and social sciences), fosters environments where fabrication sustains career viability amid "publish or perish" demands tied to ideological alignment. Empirical reviews confirm that such systemic incentives contribute to falsification rates, with approximately 2% of scientists admitting to fabrication under these strains.

Detection Methods

Statistical and Forensic Approaches

Statistical approaches to detecting data fabrication rely on identifying patterns in numerical data that deviate from expectations under genuine data-generating processes. One prominent method is Benford's law, which states that in many real-world datasets spanning multiple orders of magnitude, the leading digits follow a logarithmic distribution in which the digit 1 appears as the first digit approximately 30.1% of the time, decreasing to 4.6% for 9. Fabricated data often violates this due to human biases toward uniform or arbitrary digit selection, as demonstrated in analyses of retracted papers where leading digit frequencies showed significant deviations from Benford predictions. Applications in misconduct investigations have flagged anomalies prompting further scrutiny, though the law's applicability requires datasets with sufficient scale and variability. Digit preference tests, including analyses of last or rightmost digits, exploit tendencies in fabricated data toward non-random distributions, such as excessive uniformity or avoidance of certain digits like 7. Chi-square goodness-of-fit tests assess deviations from the uniform distribution of trailing digits that genuine measurement error typically produces, a uniformity that fabricated numbers, often invented without genuine measurement noise, tend to fail. In psychological and clinical datasets, such tests have identified fabrication in studies with improbably even digit spreads, as humans subconsciously favor round numbers or patterns. Variance-based methods complement these by flagging unrealistically low variability or inflated correlations, common in invented datasets lacking natural noise; for instance, simulated fabrication experiments show variances below empirical thresholds in over 80% of cases.
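The Benford screen above can be sketched as a chi-square comparison of observed leading-digit frequencies against log10(1 + 1/d); the "genuine" data here is simulated as log-uniform (which follows Benford closely), while the contrast set has roughly uniform leading digits, mimicking naive fabrication. Both datasets are synthetic stand-ins, not real measurements.

```python
# Sketch: first-digit Benford screen via chi-square. Simulated data only.
import math
import random
from collections import Counter

def benford_chisq(values):
    """Chi-square of observed leading digits vs. Benford expectations."""
    leading = [int(str(abs(v)).lstrip("0.")[0]) for v in values if v != 0]
    n = len(leading)
    counts = Counter(leading)
    chisq = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)
        chisq += (counts.get(d, 0) - expected) ** 2 / expected
    return chisq

random.seed(0)
# Log-uniform data spans orders of magnitude and tracks Benford's law.
genuine = [10 ** random.uniform(0, 5) for _ in range(1000)]
# Uniform(1, 10) makes each leading digit roughly equally likely.
uniform = [random.uniform(1, 10) for _ in range(1000)]
print(benford_chisq(genuine) < benford_chisq(uniform))  # prints True
```

With 8 degrees of freedom, the genuine set's statistic sits near the chi-square mean of 8, while the uniform-digit set scores in the hundreds, which is the gap forensic analysts exploit.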
The GRIM test (Granularity-Related Inconsistency of Means) evaluates whether reported sample means from integer-scale data (e.g., Likert items) are mathematically possible given the sample size and number of items, by checking if the mean aligns with feasible sums rounded to the reported precision. Inconsistencies arise in fabricated summaries because inventors rarely compute exact feasible values, leading to impossibilities detectable via simple arithmetic; extensions like GRIMMER incorporate standard deviations for added rigor. This method exposed anomalies in numerous papers, with one review noting its role in a high-profile replication failure involving over 50 inconsistent means across datasets. Forensic approaches extend statistical scrutiny to data provenance and structural artifacts, such as examining multivariate dependencies for impossible linearities or copied subsequences indicative of duplication. In large-scale datasets, machine-learning classifiers trained on fabrication simulations detect outliers in feature distributions, achieving sensitivities above 90% in controlled tests but requiring raw data access. Forensic examinations of files can reveal metadata inconsistencies, like uniform timestamps suggesting batch invention, or pixel-level image manipulations, though these demand specialized tools and are less effective against sophisticated alterations. Combined statistical-forensic protocols, as in randomized trial audits, integrate these with outlier detection to quantify improbabilities, emphasizing that no single test proves fabrication but convergent evidence strengthens cases for investigation. Limitations include false positives in small or constrained datasets, underscoring the need for contextual validation over automated reliance.
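A minimal version of the GRIM consistency check can be written as follows (the function name is illustrative, and published implementations handle rounding conventions more carefully than this sketch):

```python
import math

def grim_consistent(mean, n, items=1, decimals=2):
    """Return True if a mean reported to `decimals` places could arise from
    `n` participants each answering `items` integer-scale questions.
    A reported mean is feasible only if some integer total of responses,
    divided by n * items, rounds to the reported value."""
    granules = n * items
    target = round(mean, decimals)
    # Only the integer totals bracketing mean * granules can reproduce the mean.
    for total in (math.floor(mean * granules), math.ceil(mean * granules)):
        if round(total / granules, decimals) == target:
            return True
    return False
```

For example, with n = 25 single-item responses every feasible mean is a multiple of 1/25 = 0.04, so a reported mean of 3.48 passes while 3.49 is mathematically impossible.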

Peer Review and Post-Publication Scrutiny

Peer review, a cornerstone of scientific validation, evaluates submitted manuscripts for methodological rigor, novelty, and plausibility but rarely detects data fabrication due to its reliance on summarized results rather than raw data or replication. Reviewers, often overburdened and assuming author honesty, focus on conceptual soundness rather than forensic auditing of datasets, allowing fabricated data that mimics expected patterns to pass undetected. An analysis of peer-review comments on retracted papers found that only 8.1% recommended rejection, indicating limited effectiveness in flagging the issues that later prompted retractions. Editors and reviewers typically do not proactively screen for fabrication, as such checks exceed standard protocols and require specialized tools beyond routine assessment. Post-publication scrutiny has emerged as a critical complement to peer review, enabling broader community oversight through platforms like PubPeer, where anonymous comments highlight anomalies such as inconsistent figures or statistical improbabilities. These comments have triggered investigations leading to numerous retractions and misconduct proceedings, with crowdsourced scrutiny facilitating the identification of image duplications and data irregularities in fields like biomedicine. For instance, Elisabeth Bik's manual post-publication reviews have exposed duplicated or manipulated images in hundreds of papers, prompting journals to retract or correct affected works. Independent statistical scrutiny post-publication, such as anesthesiologist John Carlisle's analysis of randomized controlled trials, has revealed falsified data in 14% of 526 evaluated manuscripts through red flags like improbable digit distributions and uniform variances. Despite these advances, post-publication efforts face challenges, including delayed responses from journals—editors took prompt action on only 21.5% of flagged papers—and resistance from institutions protective of their reputations.
Such scrutiny underscores systemic vulnerabilities in pre-publication gatekeeping, where fabrication often evades detection until replication failures or whistleblower alerts amplify concerns, as seen in the 2022 exposure of potential image fabrication in Alzheimer's research supporting the amyloid hypothesis. Overall, while peer review maintains baseline quality, post-publication mechanisms drive most fabrication detections, highlighting the need for integrated statistical protocols to enhance reliability.

Technological Aids and Databases

Technological aids for detecting fabrication encompass statistical software, forensic algorithms, and AI-driven systems that identify anomalies in numerical data, images, or textual content inconsistent with genuine data-generating processes. These tools leverage patterns expected in authentic datasets, such as digit distributions or variance structures, which fabricators often fail to replicate accurately. Statistical methods form a core set of aids, implemented via open-source packages and scripts. Benford's law, which posits that leading digits in naturally occurring numerical datasets follow a logarithmic distribution (e.g., '1' appearing about 30% of the time), detects fabrication by flagging deviations; a study of 12 known falsified articles found all violated this law, demonstrating high sensitivity though lower specificity due to applicability limits like data scale and type. Other tools include GRIM (Granularity-Related Inconsistency of Means), which verifies whether reported means align mathematically with sample sizes and measurement scales, identifying errors in up to 50% of sampled articles; GRIMMER, which extends this to standard deviations; and SPRITE, which simulates candidate distributions from reported summary statistics to test their realism. P-value analyses, via packages like ddfab, flag unnatural clustering using reversed Fisher's tests, distinguishing fabrication from practices like p-hacking. These methods' strengths lie in automation and empirical validation, but limitations include requirements for raw-data access, sensitivity to sample size, and potential false positives in small or specialized datasets. AI and machine learning tools target image and text manipulation, common in fabricated biomedical data. Proofig AI employs algorithms to detect cloning, splicing, deletions, and AI-edits in scientific images (e.g., Western blots), using forensic filters like color maps and similarity lines to highlight unnatural patterns.
Springer Nature's Geppetto tool assesses textual consistency across paper sections to uncover AI-generated or papermill content, scoring deviations that prompt human review and preventing hundreds of suspect submissions; SnappShot scans PDFs for duplicated gels or blots, expandable to other images. Such tools enhance pre-publication screening but rely on human oversight for confirmation. Databases aggregate signals for screening and investigation. The Retraction Watch Database, maintained by the Center for Scientific Integrity, catalogs thousands of retracted papers with reasons including fabrication, enabling queries for serial offenders or field-wide trends; for instance, it flagged 445 organ transplant papers by 2025, leading to 44 retractions tied to ethical violations and misconduct. PubPeer, a post-publication review platform, facilitates anonymous critiques of published figures and data, uncovering image duplications and statistical irregularities that have prompted numerous retractions and investigations, as evidenced in high-profile cases across disciplines. These resources promote transparency but face challenges like unverified claims requiring institutional follow-up.
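The digit-preference idea behind several of these statistical aids can likewise be sketched as a chi-square test for uniformity of terminal digits (an illustrative snippet, not any specific package's API):

```python
from collections import Counter

def last_digit_chi_square(values):
    """Chi-square statistic testing whether the last digits of integer
    measurements are uniform over 0-9, as genuine measurement noise tends
    to produce; heavy overuse of 'round' digits such as 0 and 5 is a
    classic sign of invented numbers."""
    digits = [abs(v) % 10 for v in values]
    n = len(digits)
    observed = Counter(digits)
    expected = n / 10  # uniform null hypothesis: each digit equally likely
    # df = 9; the p = 0.05 critical value is 16.92.
    return sum((observed.get(d, 0) - expected) ** 2 / expected for d in range(10))
```

As with the other screening tools, the output is a red flag to prioritize manual review, not proof of fabrication on its own.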

Notable Examples

Biomedical Research Scandals

Biomedical research has been marred by several high-profile instances of data fabrication, where researchers invented or manipulated results to support claims, often leading to retractions, halted trials, and eroded public trust. These scandals frequently involve clinical trials or high-stakes interventions, amplifying their consequences for patient care and policy. Investigations have revealed patterns of falsified data, duplicated images, and undisclosed conflicts, with fabrication estimated to affect a notable minority of publications in some fields. One of the earliest documented cases involved cardiologist John Darsee, who in the 1980s fabricated data across multiple studies while at Emory University and Harvard-affiliated labs. An NIH review in 1983 uncovered that Darsee invented results from non-existent experiments in at least 12 papers on cardiac function, leading to the retraction of over 100 publications co-authored by him and a 10-year research ban. The scandal prompted stricter oversight in U.S. biomedical funding but highlighted how lab hierarchies enabled subordinates to produce fraudulent data under senior supervision. In 1998, gastroenterologist Andrew Wakefield published a paper in The Lancet claiming a link between the MMR vaccine and autism based on 12 children, but subsequent investigations revealed he had manipulated diagnostic histories and timelines to fabricate the association. Funded partly by lawyers suing vaccine makers, Wakefield's undisclosed conflicts and ethical violations in invasive procedures on children led to the paper's retraction in 2010 and his removal from the UK medical register for serious professional misconduct that same year. The fraud fueled global vaccine hesitancy, contributing to measles outbreaks. Japanese researcher Yoshihiro Sato orchestrated one of the largest known fabrication schemes, inventing data for over 200 clinical trials on vitamin D, bisphosphonates, and fracture prevention from the 1990s to 2010s.
Exposed by statisticians analyzing implausibly uniform trial outcomes, Sato's work, published in prominent journals, included falsified patient enrollments from non-existent nursing homes and duplicated control groups, resulting in dozens of retractions. Despite university probes confirming misconduct, Sato avoided criminal charges, underscoring gaps in international enforcement for prolific fabricators. The 2020 Surgisphere scandal exemplified rapid dissemination of fabricated data during the COVID-19 pandemic. A Lancet study, based on purported data from 96,000 patients across 671 hospitals, claimed hydroxychloroquine increased mortality, prompting the WHO and national regulators to pause hydroxychloroquine trials in late May 2020. Independent audits revealed Surgisphere Corporation, led by Sapan Desai, had supplied fabricated or unverifiable datasets with inconsistencies like mismatched country records; The Lancet retracted the study on June 4, 2020, after the authors could not provide raw data. Co-authors Mandeep Mehra and others retracted a related NEJM paper, highlighting peer-review failures under crisis pressure. Thoracic surgeon Paolo Macchiarini's stem cell-seeded trachea transplants from 2011 onward involved falsifying patient survival and recovery data in publications claiming regenerative success. Karolinska Institutet investigations in 2015 and 2018 ruled his manipulations intentional misconduct, including exaggerated outcomes for procedures that caused deaths; 12 papers were retracted or corrected. Macchiarini's 2023 conviction for one case of bodily injury reflected ethical lapses in unproven therapies, with whistleblowers facing retaliation despite early warnings. Recent cases, such as Khalid Shah's alleged image duplication and data falsification in 21 cancer papers (flagged in 2024) and Dana-Farber Cancer Institute's 2024 retractions of six papers for manipulated images, indicate ongoing issues in elite institutions.
These incidents, often detected via post-publication scrutiny, have spurred calls for mandatory data sharing but reveal persistent vulnerabilities in high-pressure fields.

Physical Sciences Cases

In the physical sciences, data fabrication has occurred in high-profile instances, often involving fabricated experimental results in condensed-matter physics and related fields, leading to retractions and institutional investigations. These cases highlight vulnerabilities in complex experimental setups where data manipulation can evade initial scrutiny, though the physical sciences generally exhibit lower rates of detected misconduct than biomedical fields, owing to stronger emphasis on theoretical consistency and replication. Jan Hendrik Schön, a physicist at Bell Laboratories, fabricated data across 16 publications between 1998 and 2002, claiming breakthroughs in molecular electronics, single-molecule transistors, and organic superconductors. His reports, published in prestigious journals like Nature and Science, purported to demonstrate revolutionary nanoscale devices and quantum computing prototypes, garnering widespread acclaim and even speculation about a future Nobel Prize. An investigation by a Bell Labs panel, released on September 26, 2002, confirmed deliberate fabrication through duplicated spectra, impossible error patterns, and inconsistencies in raw data logs, while exonerating co-authors of complicity. The University of Konstanz revoked Schön's Ph.D. in 2004; after protracted litigation, German courts ultimately upheld the revocation, and he later transitioned to industry roles without further academic misconduct allegations. More recently, Ranga Dias, a physicist at the University of Rochester, faced accusations of data fabrication in superconductivity research claiming room-temperature superconductors under high pressure, detailed in a 2020 Nature paper and subsequent works. Investigations by the university, concluded in 2023, identified manipulated data including forged images and selective reporting, resulting in retractions of at least four papers by July 2023 and Dias's departure from the institution in November 2024.
Independent analyses revealed inconsistencies such as mismatched crystal structures and unattainable pressure conditions, undermining claims of hydride-based materials achieving zero electrical resistance at 15–25°C. The scandal drew scrutiny to funding pressures in high-stakes fields like energy materials, with Dias's lab receiving millions in grants prior to the revelations. Historical precedents include Emil Rupp's 1920s–1930s experiments on electron interference and canal rays, which falsely supported contested wave-interference interpretations and impressed figures including Albert Einstein. Exposed in December 1935 by Robert Ladenburg and others for lacking proper controls and fabricating deflection data, Rupp's work led to his resignation from the University of Frankfurt; no formal retractions occurred due to the era's norms, but the affair eroded trust in early experimental quantum physics. These cases underscore recurring patterns where fabrication exploits the opacity of proprietary data and the allure of paradigm-shifting discoveries in physics and chemistry.

Social and Climate Sciences Controversies

In social sciences, particularly psychology and behavioral economics, several high-profile cases of data fabrication have undermined public trust and highlighted vulnerabilities in research practices. Diederik Stapel, a former professor of social psychology at Tilburg University, was found to have fabricated data in at least 55 publications between 1996 and 2011, inventing experiments on topics such as racial stereotypes, meat consumption, and consumer behavior to produce desired outcomes aligning with prevailing ideological narratives. An independent investigation by three Dutch universities concluded that Stapel constructed datasets entirely from scratch, often without conducting surveys or experiments, leading to over 50 retractions and his dismissal in 2011. More recent incidents include Michael LaCour's 2014 study in Science, which claimed in-person canvassing by gay rights activists could durably shift voters' attitudes toward same-sex marriage; the dataset was later revealed as fabricated, with invented survey responses and impossible statistical patterns, prompting retraction in 2015 after co-author Donald Green identified inconsistencies. In 2023, Harvard Business School professor Francesca Gino faced allegations of data fabrication in multiple papers on dishonesty and incentives, including altering survey results in a 2012 study on self-signing honesty pledges; forensic analysis by data sleuths detected implausible patterns like duplicated response sets, leading to administrative leave and investigations by Harvard and journals. Similarly, Duke University's Dan Ariely was implicated in falsified data for the same 2012 paper on signing integrity statements, where insurance-mileage field data showed evidence of digital manipulation, as reported by data sleuths in 2021, though Ariely denied direct involvement. These cases, often involving fabricated evidence for politically resonant findings like attitude change or moral behavior, reflect how pressures for novel, confirmatory results in ideologically charged fields can incentivize misconduct.
In climate sciences, outright data fabrication has been less frequently documented than data-adjustment or selective-reporting controversies, though allegations persist amid polarized debates. The 2009 Climatic Research Unit (CRU) email leak, dubbed "Climategate," involved leaked correspondence from scientists suggesting efforts to withhold data from critics and adjust temperature records to emphasize warming trends, such as the "hide the decline" phrase regarding proxy data divergence; however, eight independent inquiries in the UK and the US found no evidence of fabrication or deliberate falsification, attributing issues to poor communication and archival lapses rather than invented data. A 2017 whistleblower account by NOAA scientist John Bates accused colleagues of using unverified, unarchived data in a 2015 paper aimed at debunking the global warming "pause," including premature publication of rushed adjustments to sea surface temperatures without proper validation; Bates maintained that while the practices were not outright fabrication, they violated NOAA protocols and prioritized narrative over rigor, leading to no retractions but heightened scrutiny of agency data handling. Such episodes underscore tensions between empirical transparency and institutional incentives in climate science, where modeled projections rather than raw observational fabrication dominate concerns, yet perceived biases in data curation have fueled skepticism without conclusive proof of systematic invention.

Ramifications

Scientific and Professional Consequences

Data fabrication, as a form of research misconduct, prompts swift institutional responses, including investigations by oversight bodies such as the U.S. Office of Research Integrity (ORI) or equivalent entities in other jurisdictions, often culminating in paper retractions. Retractions due to fabrication erode the foundational trust in peer-reviewed literature, as they invalidate prior findings and necessitate reevaluation of dependent studies, leading to wasted resources and delayed progress in the affected fields. For example, analyses indicate that misconduct-linked retractions, which frequently involve fabrication, account for a substantial portion of all retractions, with data problems—including fabrication—comprising over 75% of retraction reasons in recent years. On the professional front, individuals adjudicated for fabrication routinely experience career-ending penalties, such as dismissal from academic or research positions and exclusion from funding eligibility. In federally funded U.S. research, ORI findings of fabrication can result in debarment from federal funding for up to five years or longer, alongside requirements for supervised research or certification of corrective actions. A study of retraction cases linked to misconduct found that such events impose direct financial burdens on institutions and funders, estimated at hundreds of thousands of dollars per incident, while severely damaging the perpetrators' professional standing and future employability. Collaborators may also face indirect consequences, including scrutiny of their own work and reduced collaboration opportunities. Quantifiable impacts on careers reveal a post-fabrication decline: retracted authors experience an average 10% reduction in citations to their pre-retraction publications, signaling diminished credibility, and approximately 46% exit active publishing shortly after such events.
While not all retractions terminate careers—some researchers pivot or continue in less scrutinized roles—fabrication's intentional deceit typically invites harsher sanctions than errors or honest discrepancies, amplifying reputational harm and isolating offenders from the scientific community. These outcomes reinforce accountability but highlight variability in enforcement, as penalties depend on institutional rigor and the scale of the fabrication.

Broader Societal and Economic Impacts

Data fabrication in scientific research leads to substantial economic losses through wasted public and private funding. Between 1992 and 2012, papers retracted due to misconduct accounted for approximately $58 million in direct funding from the National Institutes of Health (NIH). On average, each journal article retracted for misconduct incurred direct costs of about $392,582, encompassing investigations, personnel time, and administrative expenses. These figures underestimate total economic harm, as they exclude indirect costs such as follow-up studies on invalidated findings, lost productivity from diverted resources, and diminished returns on subsequent research built upon fabricated data. Retractions stemming from data fabrication also impose opportunity costs by eroding the efficiency of research ecosystems. Misconduct accounts for over 67% of retractions, including 43.4% due to fraud or suspected fraud, resulting in billions in global funding squandered annually as "publish or perish" pressures incentivize fabrication over rigorous inquiry. Fabricated results can mislead commercial applications, as seen in pharmaceutical development, where invalid preclinical data leads to failed clinical trials costing hundreds of millions of dollars per candidate. Societally, data fabrication undermines public confidence in scientific institutions, fostering skepticism toward evidence-based policies. High-profile retractions, such as those involving falsified studies, have amplified doubts about research reliability, with surveys indicating declining trust in science post-scandals. This erosion manifests in reduced adherence to health guidelines; for instance, fabricated vaccine-autism links from the 1998 Wakefield study contributed to measles outbreaks by sustaining hesitancy despite subsequent debunking. Broader ramifications include distorted policy decisions with cascading effects. Reliance on fabricated or unreliable data can justify inefficient regulations or interventions, diverting societal resources from verifiable priorities.
In fields where left-leaning institutional biases may downplay misconduct in ideologically aligned research, selective enforcement exacerbates perceptions of institutional favoritism, further alienating the public from expert consensus. Over 10,000 retractions in 2023 alone, many tied to fabrication, signal a systemic integrity problem that risks long-term disengagement from scientific advancement.

Policy and Regulatory Responses

In the United States, federal policy defines research misconduct, including data fabrication, as "fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results," with fabrication specifically entailing making up data or results and recording or reporting them. The Office of Research Integrity (ORI), under the Department of Health and Human Services (HHS), oversees investigations for Public Health Service (PHS)-funded research, conducting formal inquiries and investigations upon institutional referral, issuing findings of misconduct, and recommending administrative actions such as debarment from federal funding or supervision requirements. Institutions receiving PHS funds must implement assured policies and procedures for prompt inquiry, investigation, and reporting of allegations, ensuring due process while protecting whistleblowers and respondents. A significant regulatory update occurred on September 17, 2024, when HHS issued a final rule revising PHS policies on research misconduct, effective for allegations arising on or after April 15, 2025, to enhance efficiency and coordination; changes include allowing institutions to add multiple respondents to ongoing proceedings without separate inquiries, mandating ORI oversight of institutional proceedings, and clarifying processes to reduce delays in resolving cases. In June 2025, ORI released revised sample policies and procedures to guide institutions in aligning with these requirements, emphasizing standardized definitions, timelines for assessments (within 60 days), and protections against retaliation. The National Institutes of Health (NIH), as a major PHS agency, enforces these rules through grant terms, requiring grantees to report findings and imposing sanctions like funding suspension for confirmed fabrication.
Publisher and journal responses complement federal regulations, with the International Committee of Medical Journal Editors (ICMJE) recommending prompt investigation of fabrication allegations, issuance of expressions of concern, and retraction of affected publications to maintain the integrity of the scientific record. The Committee on Publication Ethics (COPE) provides guidelines for editors to handle suspicions of data fabrication through confidential inquiries, collaboration with institutions, and public notices, though enforcement relies on voluntary adoption rather than legal mandate. Internationally, policies vary, with many countries adopting U.S.-style definitions of fabrication but lacking centralized enforcement; a comparative analysis of 18 nations found that while most have institutional procedures, only a minority impose criminal penalties for research fraud, leading to calls for harmonized standards to address cross-border collaborations. In the European Union, the European Code of Conduct for Research Integrity emphasizes institutional responsibility for investigating fabrication, with funding bodies requiring misconduct reporting, but without uniform sanctions across member states. Responses to high-profile scandals since 2020, such as those involving fabricated COVID-19 datasets, have prompted ad hoc measures like enhanced data-sharing mandates in clinical trials, yet systemic gaps persist in non-Western jurisdictions where enforcement is weaker.

Ongoing Debates

Extent of the Problem

A meta-analysis of surveys on research misconduct found that approximately 1.97% of scientists admitted to fabricating, falsifying, or modifying data at least once in their career, with fabrication specifically estimated at 1.9% (95% confidence interval: 1.0–3.5%). These self-reported figures derive from questionnaires across multiple studies, though they likely underestimate the true prevalence due to social desirability bias and fear of repercussions. In a 2022 survey of over 6,800 researchers, self-reported fabrication stood at 4.3% (95% CI: 2.9–5.7%) and falsification at 4.2% (95% CI: 2.8–5.6%), suggesting variability by field or country. Peer observations indicate higher incidence: over 14% of respondents in aggregated surveys reported witnessing fabrication, falsification, or selective modification by colleagues. Retraction data provides a lower bound for detected cases, with misconduct—including fabrication—accounting for the majority of retractions in scientific publications. Since 2000, retractions due to data problems have risen significantly, comprising over 75% of such actions by 2023, though they represent only a fraction of total publications (e.g., fewer than 0.1% annually across major databases). Approximately 4% of top-cited scientists have at least one retraction, a conservative figure given delays in detection and incomplete records. The true extent remains elusive, as most fabrication evades scrutiny without statistical anomalies or whistleblowers; experts estimate that underdetection multiplies observed rates severalfold, particularly in high-pressure fields like psychology and biomedicine, where replication crises amplify concerns. Variations exist by discipline and institution, with higher self-reported rates among graduate students (up to 17% for lab data fabrication in some U.S. surveys) and in developing countries, though cross-study comparisons are hampered by inconsistent definitions and reporting incentives.
Overall, while outright fabrication affects a minority, its opacity undermines trust in broader empirical outputs, prompting calls for enhanced forensic tools like statistical detectors.

Systemic Reforms vs. Individual Accountability

The debate over addressing data fabrication in scientific research pits advocates of individual accountability, who emphasize personal sanctions, against proponents of systemic reforms, who focus on altering institutional incentives and structures. Individual accountability measures include retractions of fraudulent papers, dismissal from positions, debarment from federal funding, and, in severe cases, criminal prosecution. For example, U.S. federal policy under the Office of Research Integrity defines misconduct as fabrication, falsification, or plagiarism, with penalties such as funding bans enforced in cases investigated by the Department of Health and Human Services. Surveys of researchers reveal strong support for harsh individual punishments, including permanent exclusion from government grants and loss of professional credentials, as effective deterrents. However, empirical assessments of these sanctions' preventive impact are sparse; a 1998 analysis concluded that the degree to which financial penalties and other personal repercussions curb repeat offenses remains empirically unexamined, potentially limiting their standalone efficacy. Critics of over-relying on individual measures argue that they fail to address root causes embedded in academic systems, such as intense publication pressures and misaligned incentives that reward novel, positive results over rigorous replication. The "publish-or-perish" paradigm, exacerbated by tenure and funding tied to output volume, fosters environments where fabrication offers career advantages, as evidenced by self-reported data from researchers indicating competitive pressures as a primary driver of questionable practices. Systemic reforms proposed include enhancing mentoring in labs, mandating preregistration of studies to reduce selective reporting, and shifting evaluation metrics toward replication success and data transparency rather than raw publication counts.
Initiatives like open-science frameworks have shown promise in curbing detrimental practices by increasing scrutiny, though their adoption remains uneven due to institutional resistance. Institutions often favor systemic narratives to diffuse blame, prioritizing reputational preservation over aggressive individual pursuit, which can perpetuate misconduct cycles. A 2021 study highlighted how higher education's structural imperatives lead to investigations that minimize fallout, such as downplaying fabrication in favor of "sloppy science" attributions, reflecting a bias toward protecting collective prestige over accountability for deliberate acts. While some bioethicists advocate criminalizing egregious fabrication as fraud—arguing that it imposes societal costs like misguided policies—others caution that legal escalation risks chilling legitimate research without resolving incentive distortions. Empirical complexity underscores the need for integration: personal sanctions provide immediate deterrence, but without reforms to incentives and oversight, fabrication persists, as isolated punishments overlook how rational actors respond to systemic rewards for output volume.

Bias in Accusations and Investigations

Accusations of data fabrication often exhibit disparities across scientific fields, with biomedical research accounting for a significant portion of documented cases due to factors such as high publication pressure, complex image-based data prone to manipulation, and advanced detection tools like statistical audits. For instance, between 2000 and 2023, numerous scandals involved fabricated results in high-impact journals, including image duplication in cancer studies and stem cell research. In contrast, social and behavioral sciences show fewer formal findings despite self-reported misconduct rates suggesting similar underlying pressures; psychology, in particular, has been highlighted as vulnerable, yet detections rely heavily on whistleblowers rather than systematic screening, potentially underrepresenting fabrication in narrative-driven studies. These field-specific patterns may stem from detection biases rather than prevalence differences, as biomedicine benefits from reproducible protocols and international collaboration, while softer sciences face looser evidentiary standards. Institutional investigations into allegations frequently demonstrate reluctance or conflicts of interest, prioritizing preservation of funding streams and institutional prestige over impartial inquiry. Reports indicate that officials can filter or downplay concerns due to personal or organizational biases, leading to delayed or incomplete probes, especially for prominent researchers whose work secures grants. In social sciences, this manifests as community-level tolerance for questionable practices, with critics noting patterns of downplaying misconduct in ideologically aligned research, as seen in initial institutional responses to the 2023 allegations against Harvard's Francesca Gino, involving fabricated data in dishonesty experiments.
Federal funding dynamics exacerbate this: surveys reveal that up to 34% of grant recipients admit altering data to match sponsor expectations, yet investigations rarely target the systemic incentives tied to political or bureaucratic priorities. The documented record of misconduct remains skewed, capturing only confirmed retractions or findings while excluding dismissed cases, exonerations, and unresolved allegations, which distorts perceptions of the problem's scope and biases scrutiny toward visible, high-stakes cases. Politicization further complicates matters: retractions can serve corrective purposes but are sometimes leveraged punitively or to discredit opposing views, particularly in contested domains such as climate research. In environments with ideological homogeneity, prevalent in academia, where dissenting perspectives face heightened scrutiny, accusations may disproportionately target contrarian findings while narrative-reinforcing work evades rigorous vetting, though quantitative evidence on this remains underdeveloped because institutional processes are opaque.
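The statistical audits mentioned above often begin with simple digit-distribution screens, such as a check of leading digits against Benford's law. The following is a minimal illustrative sketch in Python (the function names are my own, not from any named audit tool); in practice such screens only flag a dataset for closer review, since many legitimate datasets are not expected to follow Benford's distribution.

```python
from collections import Counter
from math import log10

def first_digit(x: float) -> int:
    """Return the leading nonzero digit of a nonzero number."""
    s = f"{abs(x):.10e}"  # scientific notation, e.g. "3.1400000000e+02"
    return int(s[0])

def benford_chi_squared(values) -> float:
    """Chi-squared statistic comparing observed first-digit frequencies
    against Benford's expected proportions log10(1 + 1/d) for d = 1..9."""
    values = [v for v in values if v != 0]  # zero has no leading digit
    n = len(values)
    observed = Counter(first_digit(v) for v in values)
    stat = 0.0
    for d in range(1, 10):
        expected = n * log10(1 + 1 / d)
        stat += (observed.get(d, 0) - expected) ** 2 / expected
    return stat
```

A statistic far above roughly 15.5 (the 5% critical value for 8 degrees of freedom) indicates that the first-digit distribution departs significantly from Benford's law; for data spanning several orders of magnitude, a large departure is one of the anomalies forensic reviewers look for.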

References

  1. [1]
    Definition of Research Misconduct | ORI
    (a) Fabrication is making up data or results and recording or reporting them. (b) Falsification is manipulating research materials, equipment, or processes, or ...
  2. [2]
    What Is Research Misconduct - NIH Grants & Funding
    Aug 19, 2024 · Research misconduct means fabricating, falsifying, and/or plagiarizing in proposing, performing, or reviewing research, or in reporting research results.
  3. [3]
    How Many Scientists Fabricate and Falsify Research? A Systematic ...
    ... prevalence of data fabrication, falsification and alteration) were conducted using only one outcome per study. For the same reason, in the regression ...View Figures (9) · View Reader Comments · Author Info
  4. [4]
    Perceived Prevalence of Data Fabrication and/or Falsification in ...
    Dec 7, 2018 · A survey by Bouter et al. [12] ranked data fabrication and data falsification as having highest impact on scientific truth, above plagiarism. In ...
  5. [5]
    Tools of the data detective: A review of statistical methods to ... - NIH
    Feb 1, 2025 · The purpose of the present study was to review a collection of existing statistical tools to detect data fabrication, assess their strengths and limitations.
  6. [6]
    RESEARCH MISCONDUCT - On Being a Scientist - NCBI Bookshelf
    Individuals, institutions, and even entire research fields can suffer grievous setbacks from instances of fabrication, falsification, and plagiarism.
  7. [7]
    Research Fraud: Falsification and Fabrication of Data
    Obviously, both falsification and fabrication of data and research are extremely serious forms of misconduct. Primarily because they can result in an inaccurate ...
  8. [8]
    The value of statistical tools to detect data fabrication - RIO Journal
    Apr 22, 2016 · We aim to investigate how statistical tools can help detect potential data fabrication in the social- and medical sciences.
  9. [9]
    Scientific Misconduct: A Global Concern - PMC - NIH
    Fabrication means making up data or results and recording or reporting them. · Falsification means manipulating research materials, equipment, or processes, or ...
  10. [10]
    What is Research Misconduct? Part 3: Fabrication
    May 29, 2019 · Unlike image duplication or textual similarities, data fabrication is harder to detect by just looking at a published paper. These cases are ...
  11. [11]
    Is it Time to Revise the Definition of Research Misconduct? - PMC
    Data fabrication and falsification directly threaten the goals of science because these behaviors lead to the publication of erroneous results, which undermines ...
  12. [12]
    USING DATA DIGITS TO IDENTIFY FABRICATED DATA
    ... data fabrication. ORI calls them "inconsequential" digits. The theory is, if the information is falsified, the miscreant typically devotes attention to ...
  13. [13]
    Research Misconduct and Medical Journals - PMC - PubMed Central
    (a) Fabrication is making up data or results and recording or reporting them. (b) Falsification is manipulating research materials, equipment, or processes, or ...
  14. [14]
    Misconduct in Biomedical Research: A Meta-Analysis and ...
    Data fabrication was 4.5% in self-reported and 21.7% in nonself-reported studies. Data falsification was 9.7% in self-reported and 33.4% in nonself-reported ...
  15. [15]
    Scientific misconduct in psychology: A systematic review of ...
    Feb 28, 2018 · Prevalence estimates for the involvement of other researcher in data falsification ranged between 9.3% and 18.7%. Self-admission rates for other ...
  16. [16]
    8% of researchers in Dutch survey have falsified or fabricated data
    Jul 22, 2021 · In the NIH study, 0.3% of more than 3,000 respondents admitted to data falsification. Gowri Gopalakrishna, an epidemiologist at the Free ...
  17. [17]
    Landmark research integrity survey finds questionable practices are ...
    Jul 7, 2021 · And one in 12 admitted to committing a more serious form of research misconduct within the past 3 years: the fabrication or falsification of ...
  18. [18]
    Beringer's Lying Stones; fraud and absurdity in science
    Jul 11, 2017 · A tale of scientific fraud and personal hubris. My second-year university Geology wasn't particularly notable except for a bit of academic trickery.
  19. [19]
    Fakes, Frauds and Fossils: The Lying Stones of Dr. Beringer
    Aug 14, 2010 · The youngsters admitted that the rocks were a fraud carved by themselves, incited by two colleges of Beringer, the mathematician Jean Ignace Roderique (1697- ...
  20. [20]
    (PDF) The first case of paleontological fraud. Beringers Lügensteine ...
    Mar 15, 2024 · ... Beringer's desire to publish an important paleontological discovery. The Lügensteine were then an intentional scientific fraud.
  21. [21]
    [PDF] 1 Misinformation Age: What early modern scientific fakes ... - IUHPST
    Both of the examples given above are instances of scientific fabrication. Two scientific objects were produced – a counterfeit dragon and a falsified study – ...
  22. [22]
    Top 10 Scientific Frauds and Hoaxes - Listverse
    Apr 9, 2008 · Top 10 Scientific Frauds and Hoaxes ; 10. Jan Hendrik Schön ; 9. The Cardiff Giant ; 8. The Perpetual Motion Machine ; 7. The Lying Stones ; 6. The ...Missing: 1900 | Show results with:1900<|separator|>
  23. [23]
    Rocky Road: Johann Bartholomew Adam Beringer - Strange Science
    ... fraud. To his credit, Beringer knew his find was important, but avoided naming the cause, leaving that for better minds than his own. In his day, no one ...
  24. [24]
    Case study 1, William Summerlin - NCBI
    When Summerlin used a black pen to alter a patch of black mouse skin transplanted onto a white mouse, animal care technicians quickly discovered the fraud.
  25. [25]
    Science fraud: from patchwork mouse to patchwork data - Weissmann
    Apr 1, 2006 · Summerlin pulled two white mice from the container. While they wriggled and squeaked in protest, he inspected the sites of the black skin ...
  26. [26]
    Inquiry at Cancer Center Finds Fraud in Research
    May 25, 1974 · ' Summerlin admitted to the committee that he had darkened the skin of two white mice with a pen to make it appear that the mice had accepted ...
  27. [27]
    Coping with fraud: the Darsee Case - PubMed
    Culliton reviews major developments in the John Darsee fraud case at Harvard Medical School, from the first known act of fraud to recent evidence.
  28. [28]
    Deeper problems for Darsee: Emory probe - JAMA Network
    The promising research career of 34-year-old John R. Darsee, MD, was aborted in 1981 when he was caught fabricating research results in the Harvard.
  29. [29]
    Some data and historical perspective on scientific misconduct ...
    Feb 17, 2020 · Former Harvard cardiologist John Darsee got a 10-year NIH funding ban in 1983 for a track record of serial misconduct going back many years ...<|control11|><|separator|>
  30. [30]
    Science: Fudging Data for Fun and Profit - Time Magazine
    Dec 7, 1981 · Findings that were touted only last summer as a fundamental breakthrough in the understanding of carcinogenesis have been branded fraudulent.
  31. [31]
    A view of misconduct in science - PubMed
    An eminent biochemist gives his personal view of misconduct in science, one largely based on an experience with the case of fraud by a young researcher.
  32. [32]
    Scientist Summarizes Evidence Against Burt's IQ Test Data | News
    Nov 9, 1978 · Cyril Burt fabricated data in support of his theories on the inheritability of IQ. ... Burt was a fraud in many ways." He added that "all this ...
  33. [33]
    [PDF] Did Sir Cyril Burt Fake His Research on Heritability of Intelligence ...
    "Scientifically, Burt's results are a fraud." Of course, the accusations do not totally invalidate Burt's theory, but they destroy the main evidence with.
  34. [34]
    Scientific Misconduct - Annual Reviews
    Scientific misconduct has occurred throughout the history of science. The US government began to take systematic interest in such misconduct in the 1980s.
  35. [35]
    The Evolution of the "Scientific Misconduct" Issue: An Historical ...
    Aug 7, 2025 · Although research misconduct pre-dates the 20 th century [8] ; studies into the prevalence of research misconduct, especially on data ...
  36. [36]
    Darsee and Slutsky cases in the 1980's
    One case involved Dr. John R. Darsee, a young clinical investigator in cardiology at the Brigham and Women s Hospital (a teaching affiliate of Harvard ...
  37. [37]
    Biomedical paper retractions have quadrupled in 20 years — why?
    May 31, 2024 · The retraction rate for European biomedical-science papers increased fourfold between 2000 and 2021, a study of thousands of retractions has found.Missing: fabrication | Show results with:fabrication
  38. [38]
    Two Cheers for the Retraction Boom - The New Atlantis
    the ultimate in academic take-backs — grew tenfold, from about 40 per year to about 400. The figure is ...
  39. [39]
    Scientific misconduct is on the rise. But what exactly is it?
    Mar 17, 2025 · In 2023, more than 10000 research papers were retracted because of scientific misconduct. But it's not always deliberate.Missing: century | Show results with:century
  40. [40]
    Analysis of scientific paper retractions due to data problems
    Previous research has pointed out that the long retraction times associated with data fabrication cases are a common challenge in uncovering this type of ...<|separator|>
  41. [41]
    Is science really facing a reproducibility crisis, and do we need it to?
    Mar 12, 2018 · In anonymous surveys, on average 1–2% of scientists admit to having fabricated or falsified data at least once (2). Much higher percentages ...
  42. [42]
    No raw data, no science: another possible source of the ...
    Feb 21, 2020 · A reproducibility crisis is a situation where many scientific studies cannot be reproduced. Inappropriate practices of science, ...
  43. [43]
    The reproducibility “crisis”: Reaction to replication ... - PubMed Central
    Aug 9, 2017 · The reproducibility crisis involves concerns that many studies fail to replicate results, with low levels of reproducibility in biomedical ...Missing: link | Show results with:link
  44. [44]
    Fraudulent Scientific Papers Are Rapidly Increasing, Study Finds
    Aug 4, 2025 · A statistical analysis found that the number of fake journal articles being churned out by “paper mills” is doubling every year and a half.
  45. [45]
    The entities enabling scientific fraud at scale are large, resilient, and ...
    Numerous recent scientific and journalistic investigations demonstrate that systematic scientific fraud is a growing threat to the scientific enterprise.
  46. [46]
    NSF Fellows' perceptions about incentives, research misconduct ...
    Apr 7, 2023 · The research misconduct rates (3.7% for self-reported and 11.9% for direct knowledge of colleagues) are of magnitude similar to those estimated ...Results · Guilty Of Distorting The... · Discussion
  47. [47]
    Misconduct in research on the rise - UOC
    Apr 7, 2025 · Research misconduct and questionable practice appear to be increasingly prevalent. This has a direct impact on public trust in science, the quality of research.
  48. [48]
    Data integrity scandals in biomedical research: Here's a timeline
    May 17, 2023 · This detailed timeline covers scandals and controversies impacting integrity in biomedical research over the decades.
  49. [49]
    Student Tutorial: Fabrication or Falsification | Academic Integrity ...
    Examples of fabrication or falsification include the following: Artificially creating data when it should be collected from an actual experiment ...
  50. [50]
    Data Fabrication and Falsification and Empiricist Philosophy of ... - NIH
    Aug 28, 2013 · For example, rules pertaining to data fabrication and falsification would apply only to data that describes things that can be observed. ...
  51. [51]
    Science Has a Nasty Photoshopping Problem - The New York Times
    Oct 29, 2022 · Scientists need to toughen up about preventing fabricated scientific results from being published.
  52. [52]
    AI-enabled image fraud in scientific publications - ScienceDirect.com
    Jul 8, 2022 · Inappropriately duplicating and fabricating images in scientific papers would have serious consequences. Editors and reviewers may be deceived, ...
  53. [53]
    3.4 Digital Images and Misconduct - Council of Science Editors
    Fraudulent manipulation refers to adjustment of image data that does affect the interpretation of the data. Examples include deleting a band from a gel to “fix” ...
  54. [54]
    Fake data - MOSTLY AI
    To generate fake data (also called mock data), one can use available open-source libraries such as Faker and generate it without a need to touch production.
  55. [55]
    How to fake scientific data for a research project - Quora
    Jan 11, 2021 · How do I fake scientific data for a research project? I'm in a total pinch and have only 2 days to get what should be weeks of data for plants.How is fake data spotted in scientific papers? - QuoraWhat is the best statistical software package for an academic ...More results from www.quora.com<|separator|>
  56. [56]
    Detection of data fabrication using statistical tools - OSF
    Aug 19, 2019 · Scientific misconduct potentially invalidates findings in many scientific fields. Improved detection of unethical practices like data ...Missing: software | Show results with:software
  57. [57]
    Artificial Intelligence Can Generate Fraudulent but Authentic ...
    May 31, 2023 · Conclusions: The study demonstrates the potential of current AI language models to generate completely fabricated scientific articles.
  58. [58]
    AI can be a powerful tool for scientists but it can also fuel research ...
    Mar 23, 2025 · AI makes it easy to fabricate research. Academic papers can be retracted if their data or findings are found to be no longer valid.Missing: software | Show results with:software
  59. [59]
    AI-generated research paper fabrication and plagiarism in the ...
    Mar 10, 2023 · Fabricating research within the scientific community has consequences for one's credibility and undermines honest authors.
  60. [60]
    Forensic Statistics to detect Data Fabrication
    Oct 24, 2020 · Data fabrication is a form of research misconduct that affects the credibility of research and decreases public trust in science. In addition, ...Missing: prevalence | Show results with:prevalence
  61. [61]
    Investigating and preventing scientific misconduct using Benford's Law
    Apr 11, 2023 · Here we suggest a practical approach for the investigation of work suspected of fraudulent data manipulation using Benford's Law.
  62. [62]
    Six Red Flags for Identifying Falsified Data in Randomized ...
    Oct 25, 2023 · Red flags include missing key methods, internal inconsistencies, checking author history, trusting instincts, and evaluating the journal.
  63. [63]
    Understanding the Causes - Fostering Integrity in Research - NCBI
    A range of possible reasons were posited: (1) career and funding pressures, (2) institutional failures of oversight, (3) commercial conflicts of interest, (4) ...
  64. [64]
    Leading the charge to address research misconduct
    Sep 1, 2021 · Motivated reasoning and narcissistic thinking were common, with primary investigators who fabricated or falsified data often talking about the ...
  65. [65]
    Effect of medical researchers' creative performance on scientific ...
    Dec 18, 2022 · Medical researchers' creative performance positively relates to scientific misconduct, and moral licensing plays a mediating role in the ...
  66. [66]
    Research Misconduct: The Peril of Publish or Perish - PMC - NIH
    Goodstein identified easiness of fabrication, the peril of publish or perish, and finances and ideology as strong incentives leading to the motivation to commit ...
  67. [67]
    'Publish or perish' culture blamed for reproducibility crisis - Nature
    Jan 20, 2025 · Sixty-two per cent of respondents said that pressure to publish “always” or “very often” contributes to irreproducibility, the survey found.
  68. [68]
    Publish or be ethical? Publishing pressure and scientific misconduct ...
    Dec 18, 2020 · The paper reports two studies exploring the relationship between scholars' self-reported publication pressure and their self-reported ...<|separator|>
  69. [69]
    Misconduct accounts for the majority of retracted scientific publications
    67.4% of retractions were attributable to misconduct, including fraud or suspected fraud (43.4%), duplicate publication (14.2%), and plagiarism (9.8%).Results · Geographic Origin And Impact... · Discussion
  70. [70]
    Incidence and Consequences - Fostering Integrity in Research - NCBI
    Apr 11, 2017 · Research misconduct and detrimental research practices constitute serious threats to science in the United States and around the world.
  71. [71]
    The 'publish or perish' mentality is fuelling research paper retractions
    Oct 3, 2024 · Published research papers can be retracted if there is an issue with their accuracy or integrity. And in recent years, the number of retractions has been ...
  72. [72]
    How Competition for Funding Impacts Scientific Practice - NIH
    Feb 13, 2024 · Only four of the in total 53 interviewees commented that competition for funding could result in research misconduct, three of which were either ...
  73. [73]
    How A Budget Squeeze Can Lead To Sloppy Science And ... - NPR
    Apr 14, 2017 · Stories of outright misconduct like this are rare in science. But the pressures on scientists manifest in many more subtle ways. If people are ...Missing: fraud | Show results with:fraud
  74. [74]
    [PDF] Data Management and Research Misconduct
    But on 1. July, Dong-Pyou Han, a former biomedical scientist at Iowa State University in Ames, was sentenced to 57 months for fabricating and falsifying data in ...
  75. [75]
    Cancer researcher banned from federal funding for faking data in ...
    Dec 15, 2022 · A former associate professor at Purdue University faked data in two published papers and hundreds of images in 16 grant applications, ...Missing: examples pressure<|control11|><|separator|>
  76. [76]
    Intentional bias & scientific fraud- Properties - InfluentialPoints
    It is now clear that intentional bias in science is not uncommon, and that political and commercial interests have seriously distorted scientific opinion in a ...Missing: ideological causing
  77. [77]
    Investigating and preventing scientific misconduct using Benford's Law
    Apr 11, 2023 · Here we suggest a practical approach for the investigation of work suspected of fraudulent data manipulation using Benford's Law.
  78. [78]
    Detecting academic fraud using Benford law: The case of Professor ...
    We investigate whether Benford's Law can be used to differentiate retracted academic papers that have employed fraudulent/manipulated data from other academic ...<|separator|>
  79. [79]
    Are these data real? Statistical methods for the detection of ... - NIH
    Jul 30, 2005 · In this paper we use statistical techniques to examine data from two randomised controlled trials. In one trial, the possibility of scientific misconduct had ...
  80. [80]
    Detection of data fabrication using statistical tools - OSF
    In two studies, we investigated the diagnostic performance of various statistical methods to detect fabricated quantitative data from psychological research.
  81. [81]
    The GRIM Test - Nicholas J. L. Brown, James A. J. Heathers, 2017
    Oct 18, 2016 · This technique evaluates whether the reported means of integer data such as Likert-type scales are consistent with the given sample size and ...
  82. [82]
    [PDF] The GRIMMER test: A method for testing the validity of reported ...
    Aug 29, 2016 · As mentioned, if all the statistics are fabricated, the. GRIM test alone will likely be sufficient to detect the fraud. ... The value of ...
  83. [83]
    Detecting fabrication in large-scale molecular omics data - PMC
    We develop methods of fabrication detection in biomedical research and show that machine learning can be used to detect fraud in large-scale omic experiments.
  84. [84]
    Detecting Data Falsification and Preventing Scientific Fraud
    Proofig AI detects data falsification by examining sub-images, using AI to detect advanced edits, cloning, deletion, and splicing, and using AI similarity ...<|separator|>
  85. [85]
    [PDF] Detection of data fabrication using statistical tools
    Aug 19, 2019 · Statistical methods to detect potential data fabrication can be based either on reported summary statistics that can often be retrieved from ...
  86. [86]
    Fraud and Peer Review: An Interview with Melinda Baldwin
    Mar 24, 2022 · I think peer review is not set up to detect fraud, and we shouldn't be surprised when fabricated data makes it through a peer review process.
  87. [87]
    Why does it take so long for journals/committees to detect fabricated ...
    Aug 26, 2024 · Why does fraudulent data pass peer review? Experimental results (where something gets measured) cannot be duplicated without strenuous ...
  88. [88]
    The effectiveness of peer review in identifying issues leading to ...
    We analyzed peer-review comments for a sample of retracted papers to check its effectiveness. · In our sample, only 8.1% of peer reviews suggested rejection.
  89. [89]
    5 tips for using PubPeer to investigate scientific research errors and ...
    Aug 1, 2023 · PubPeer, a website where researchers critique one another's work, has played a key role in helping journalists uncover scientific misconduct ...
  90. [90]
    The PubPeer conundrum: Administrative challenges in research ...
    Aug 13, 2024 · Recently, PubPeer comments have led to a significant number of research misconduct proceedings – a development that could not have been ...
  91. [91]
    Broken Illusions: When Scientists Fabricate or Falsify Data
    Apr 7, 2023 · These include increased scrutiny of manuscripts pre-publication, making post-publication instances of misconduct more open, raising awareness ...
  92. [92]
    [PDF] How do journals deal with problematic articles. Editorial response of ...
    The most surprising result is the low response rate of research journals when a scientific paper is reported of misconduct or error in PubPeer. Only 21.5 ...
  93. [93]
    Blots on a field? - Science
    Jul 21, 2022 · Some adherents of the amyloid hypothesis are too uncritical of work that seems to support it, he says. “Even if misconduct is rare, false ideas ...<|control11|><|separator|>
  94. [94]
    Application of Benford's law: a valuable tool for detecting ... - PubMed
    All 12 of the known falsified articles violated Benford-Newcomb's law, which indicated that this analysis had a high sensitivity. The low specificity of the ...
  95. [95]
    Springer Nature unveils two new AI tools to protect research integrity
    Jun 12, 2024 · Springer Nature is rolling out two new bespoke AI tools to support the identification of papers that contain AI-generated fake content and / or problematic ...
  96. [96]
    Retraction Watch – Tracking retractions as a window into the ...
    Did you know that Retraction Watch and the Retraction Watch Database are projects of The Center of Scientific Integrity? Others include the Medical Evidence ...Top 10 most highly cited · Leaderboard · Page 2 – Tracking retractions... · FAQ
  97. [97]
    Unmasking the surge of suspicious medical research publications
    May 18, 2023 · In 2020, up to one-third of neuroscience papers and nearly a quarter of medicine papers were likely falsified or plagiarized, highlighting a concerning ...<|separator|>
  98. [98]
    Overly Ambitious Researchers - Fabricating Data | Online Ethics
    Jan 1, 2000 · A historical case study about the cases of Dr. John Darsee and Dr. Stephen Breuning who both were found to have fabricated data as part of their research.
  99. [99]
    The MMR vaccine and autism: Sensation, refutation, retraction ... - NIH
    In 1998, Andrew Wakefield and 12 of his colleagues[1] published a case ... The fraud behind the MMR scare. BMJ. 2011;342:d22. [Google Scholar]; 10. Deer ...
  100. [100]
    Wakefield's article linking MMR vaccine and autism was fraudulent
    Jan 6, 2011 · 12 This uncovered the possibility of research fraud, unethical treatment of children, and Wakefield's conflict of interest through his ...Errata - March 15, 2011 · All rapid responses · Peer review
  101. [101]
    Researcher at the center of an epic fraud remains an enigma to ...
    Aug 17, 2018 · Sato, a bone researcher at a hospital in southern Japan, had fabricated data for dozens of clinical trials published in international journals.
  102. [102]
    University investigation finds misconduct by bone researcher with 23 ...
    Dec 6, 2017 · ... and authorship issues in 13 papers by Yoshihiro Sato, and plagiarism in another. Sato, a professor at Hirosaki University Medical School…
  103. [103]
    Retraction—Hydroxychloroquine or chloroquine with or ... - The Lancet
    RETRACTED: Hydroxychloroquine or chloroquine with or without a macrolide for treatment of COVID-19: a multinational registry analysis.
  104. [104]
    Who's to blame? These three scientists are at the heart of ... - Science
    Jun 8, 2020 · Three unlikely collaborators are at the heart of the fast-moving COVID-19 research scandal, which led to retractions last week by The Lancet and The New ...<|separator|>
  105. [105]
    Macchiarini guilty of misconduct, but whistleblowers share blame ...
    Jun 26, 2018 · The Karolinska Institute (KI) in Stockholm has finally, officially, found disgraced surgeon Paolo Macchiarini guilty of scientific misconduct.
  106. [106]
    Paolo Macchiarini, Fraud, and Oversight: A Case of Falsified Stem ...
    Sep 6, 2017 · According to Rasko and Power, Macchiarini failed to test his artificial airways in animals before implanting them in three human patients, and ...<|control11|><|separator|>
  107. [107]
    Top Harvard Medical School Neuroscientist Accused of Research ...
    Feb 1, 2024 · Top Harvard Medical School neuroscientist Khalid Shah allegedly falsified data and plagiarized images across 21 papers, data manipulation ...
  108. [108]
    Dana Farber, and other falsified research scandals. Thoughts?
    Feb 14, 2024 · It's been about a month since Dana Farber was caught falsifiying data, forcing them to retract 6 papers, and submit corrections for 31 others.
  109. [109]
    Commentary: Study highlights ethical ambiguity in physics
    Jun 1, 2015 · Most physicists have heard about the almost mythological fraud performed by Jan Hendrik Schön in the early 2000s.1 His case and that of 15 ...<|control11|><|separator|>
  110. [110]
    September 2002: Schön Scandal Report is Released
    Aug 8, 2022 · On September 26, 2002, an investigatory committee released a report concluding that Jan Hendrik Schön—and none of his coauthors—committed ...
  111. [111]
    Panel Says Bell Labs Scientist Faked Discoveries in Physics
    Sep 26, 2002 · The committee exonerated all 20 of Dr. Schön's collaborators of complicity or knowledge in the fraud. But it also suggested that perhaps Dr.
  112. [112]
    Disgraced Physicist Can Keep His Degree | Science | AAAS
    Jan Hendrik Schön can keep his doctorate, a judge in Freiburg, Germany, decided on Monday. In 2002, the physicist was the center of one of the biggest scandals ...
  113. [113]
    Superconductivity researcher who committed misconduct exits ...
    Nov 19, 2024 · The University of Rochester has confirmed that it no longer employs Ranga Dias, who was found by investigators to have fabricated data.
  114. [114]
    Controversial Physicist Faces Mounting Accusations of Scientific ...
    Jul 25, 2023 · Allegations of data fabrication have sparked the retraction of multiple papers from Ranga Dias, a researcher who claimed discovery of a ...
  115. [115]
    Retractions are part of science, but misconduct isn't - Nature
    Apr 24, 2024 · The guidelines are full of important advice, including that institutions, not publishers, should perform integrity or misconduct investigations.
  116. [116]
    Rupp's Research Exposed as Fraud | American Physical Society
    Nov 14, 2023 · December 1934: Emil Rupp's Research, Which Fooled Even Einstein, is Exposed as Fraud · After Rupp's rise to prominence for seemingly breakthrough ...
  117. [117]
    Dutch University Sacks Social Psychologist Over Faked Data - Science
    AMSTERDAM—A Dutch social psychologist whose eye-catching studies about human behavior were fodder for columnists and policy makers has lost his job after ...
  118. [118]
    Instances of Scientific Misconduct | Diederik Stapel
    Diederik Stapel, a former social psychology professor, conducted scientific fraud and fabricated his data at least 50 times.
  119. [119]
    We discovered one of social science's biggest frauds. Here's ... - Vox
    Jul 22, 2015 · One reason scientists often get caught lying is that they are required to publish in ways that make it easier to catch lies and mistakes, and ...
  120. [120]
    Harvard Professor Under Scrutiny for Alleged Data Fraud
    Jul 5, 2023 · Harvard University professor Francesca Gino, whose research frequently focused on dishonesty, is under scrutiny for allegedly fabricating data in at least four ...
  121. [121]
    Did an honesty researcher fabricate data? - NPR
    Jul 28, 2023 · Duke professor and behavioral scientist Dan Ariely has been accused of using falsified data in research into ways to make people more honest.
  122. [122]
    Faking it - American Psychological Association
    Sep 1, 2012 · Psychology has experienced a recent string of revelations of data fabrication and other lapses of scientific ethics.
  123. [123]
    Fact brief - Were scientists caught falsifying data in the hacked ...
    Jul 13, 2024 · Nine separate investigations found that climate scientists involved in the “climategate” controversy did not falsify data. In 2009, the ...
  124. [124]
    Former NOAA Scientist Confirms Colleagues Manipulated Climate ...
    Feb 5, 2017 · Dr. Bates' revelations and NOAA's obstruction certainly lend credence to what I've expected all along – that the Karl study used flawed data, ...
  125. [125]
    Climategate scientists cleared of manipulating data on global warming
    Jul 7, 2010 · Climate change sceptics claimed they showed scientists manipulating and suppressing data to back up a theory of manmade climate change.
  126. [126]
    Financial costs and personal consequences of research misconduct ...
    Aug 14, 2014 · Most retractions are associated with research misconduct, entailing financial costs to funding sources and damage to the careers of those committing misconduct.
  127. [127]
    RCR Casebook: Research Misconduct | ORI
    RCR Casebook: Research Misconduct · Fabrication is making up data or results and recording or reporting them. · Falsification is manipulating research materials, ...
  128. [128]
    The career effects of scandal: Evidence from scientific retractions
    Retractions lead to a 10% average drop in citations to the prior work of faculty authors. · When retractions involve misconduct, prominent scientists experience ...
  129. [129]
    How do retractions impact researchers' career paths and ...
    May 12, 2025 · About 46% of authors leave their publishing careers around the time of a retraction, a new study has found. SA Memon et al/Nat Hum Behav ...
  130. [130]
    What Studies of Retractions Tell Us - PMC - NIH
    While researchers caught in widespread misconduct likely will need to start looking for work outside the sciences, retractions per se are not a career killer.
  131. [131]
    Dollar Costs of Scientific Misconduct Smaller Than Feared
    Aug 28, 2014 · Each journal article retracted cost an average of $392,582 in direct costs. Ferric Fang, senior author and clinical microbiologist at the ...
  132. [132]
    The Costs and Underappreciated Consequences of Research ...
    Aug 17, 2010 · This article will present a model we have developed to estimate the monetary costs of scientific misconduct.
  133. [133]
    The 'publish or perish' mentality is fuelling research paper retractions
    Sep 23, 2024 · The huge number of retractions indicates a lot of government research funding is being wasted. More importantly, the publication of so much ...
  134. [134]
    The Cost of Scientific Misconduct | Pediatric Research - Nature
    Aug 11, 2015 · A 2009 study by Dr. Daniele Fanelli reported that about two percent of researchers owned up to having “fabricated, falsified or modified data or ...
  135. [135]
    Retracted studies may have damaged public trust in science, top ...
    Jun 5, 2020 · “It is unclear how something that at least looks suspect and at worst was fraudulent got through that process, and that does undermine trust,” ...
  136. [136]
    Why do so many Americans distrust science? - AAMC
    May 4, 2022 · All government and private institutions that develop health policies suffer harm from news reports about research scandals (such as fraud), ...
  137. [137]
    Large-scale falsification of research papers risks public trust in ... - NIH
    ... scientific) literature with fraudulent research papers. This trend risks loss of trust by the public. Should the public be worried about claims in ...
  138. [138]
    Waning Public Trust in Science | City Journal
    Oct 31, 2024 · The area where "scientists" have really lost credibility is in climate science. Despite the uncertainty of their science, and the absolute fraud ...
  139. [139]
    Policies - Regulations | ORI - The Office of Research Integrity
    Institutions that have a research misconduct assurance should update their policies and procedures to be in compliance with the new rule as soon as practical.
  140. [140]
    42 CFR Part 93 -- Public Health Service Policies on Research ...
    ORI or another appropriate HHS office will work with the institution to develop and/or advise on a process for handling allegations of research misconduct ...
  141. [141]
    Public Health Service Policies on Research Misconduct
    Sep 17, 2024 · ORI streamlined language to avoid repeatedly distinguishing research misconduct proceedings subject to part 93 from suspension and debarment ...
  142. [142]
    Long-Awaited Changes to Research Misconduct Rules Have Arrived
    Sep 26, 2024 · Part 93 specifically governs alleged research misconduct – ie falsification, fabrication or plagiarism -- in research funded by a Public Health Service (PHS) ...
  143. [143]
    Revised PHS Research Misconduct Sample Policies & Procedures
    Jun 12, 2025 · ORI has released a new Sample Policies and Procedures document, designed to assist institutions in meeting regulatory requirements under the ...
  144. [144]
    4.1.27 Research Misconduct - NIH Grants & Funding
    Fabrication is making up data or results and recording or reporting them. Falsification is manipulating research materials, equipment, or processes, or changing ...
  145. [145]
    Scientific Misconduct, Expressions of Concern, and Retraction - ICMJE
    Scientific misconduct in research and non-research publications includes but is not necessarily limited to data fabrication; data falsification including ...
  146. [146]
    An International Study of Research Misconduct Policies - NIH
    Research misconduct is defined as fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results.
  147. [147]
    A Systematic Review and Meta-Analysis | Science and Engineering ...
    Jun 29, 2021 · The literature review identifies articles that report the prevalence ... data fabrication (Hofmann et al., 2020). This discrepancy between ...
  148. [148]
    A survey among academic researchers in The Netherlands
    Feb 16, 2022 · 6,813 respondents completed the survey. Prevalence of fabrication was 4.3% (95% CI: 2.9, 5.7) and of falsification 4.2% (95% CI: 2.8, 5.6).
  149. [149]
    Linking citation and retraction data reveals the demographics of ...
    The data suggest that approximately 4% of the top-cited scientists have at least 1 retraction. This is a conservative estimate, and the true rate may be higher ...
  150. [150]
    Academic Dishonesty Statistics: Trends and Insights - OctoProctor
    Data Fabrication: 17% of graduate students admit to fabricating lab data. Unauthorized Collaboration: 54% of graduate students admit to unauthorized ...
  151. [151]
    Federal Research Misconduct Policy | ORI
    The policy addresses research misconduct. It does not supersede government or institutional policies or procedures for addressing other forms of misconduct ...
  152. [152]
    Preventing scientific misconduct. - American Journal of Public Health
    of financial penalties and other sanctions. The extent to which sanctions prevent further incidents of scientific misconduct is an unexplored empirical ...
  153. [153]
    How can institutions prevent scientific misconduct? - Retraction Watch
    Jul 16, 2012 · How can institutions prevent scientific misconduct? · Improvement in the quality of mentoring in training programs, and · A policy that ...
  154. [154]
    Promoting trust in research and researchers: How open science and ...
    Sep 20, 2022 · In this commentary, we argue that concepts such as responsible research practices, transparency, and open science are connected to one another.
  155. [155]
    Systemic Obstacles to Addressing Research Misconduct in Higher ...
    Aug 29, 2021 · Further, systemic imperatives in academic settings often incentivize institutional responses that focus on minimizing reputational harm rather ...
  156. [156]
    Should research misconduct be criminalized? - Sage Journals
    Jan 16, 2020 · Only serious cases of research misconduct should be considered as fraud and, hence, criminalized, i.e., merit criminal punishment such as fines ...
  157. [157]
    Addressing Research Misconduct and Detrimental Research Practices
    Addressing misconduct and detrimental research practices through the implementation of standards and best practices, such as effective mentoring at the lab ...
  158. [158]
    Scientific Misconduct and the Myth of Self-Correction in Science
    There is no evidence that psychology is more vulnerable to fraud than the biomedical sciences, and most frauds are detected through information from ...
  159. [159]
    7 Addressing Research Misconduct and Detrimental Research ...
    As is seen in other contexts such as financial or political misconduct, officials may have biases that filter how they hear concerns or lead to reluctance ...
  160. [160]
    It's worse than you might think: Passive corruption in the social ...
    Jun 23, 2023 · Gino is on administrative leave from Harvard amid allegations that research she co-authored contains fabricated data . . . The revelations have ...
  161. [161]
    Science Has a Major Fraud Problem. Here's Why Government ...
    Jan 9, 2024 · 34 percent of scientists receiving federal funding have acknowledged engaging in research misconduct to align research with their funder's ...
  162. [162]
    Is “the time ripe” for quantitative research on misconduct in science?
    Sep 1, 2020 · The public record on misconduct is biased. No evidence is available on those accused of it but who were exonerated or simply not judged guilty. ...
  163. [163]
    Full article: The politicization of retraction - Taylor & Francis Online
    ... science “self-corrects.” The prevailing consensus is that retraction is appropriate only when the reported findings are unreliable due to research misconduct ...
  164. [164]
    Herding, social influences and behavioural bias in scientific research
    Deliberate fraud is rare. More usually, mistakes result from the excessive influence of scientific conventions, ideological prejudices and/or unconscious bias; ...