
Reporting bias

Reporting bias is a form of bias in scientific research where the dissemination of study findings is systematically influenced by the nature, direction, or significance of the results, leading to selective disclosure or suppression of information. This occurs across various stages, including study design, conduct, analysis, and publication, and is considered a major type of research bias that undermines the integrity of the evidence base used by clinicians, policymakers, and researchers.

Reporting bias manifests in several interrelated subtypes, each contributing to an incomplete or skewed representation of research outcomes. Publication bias arises when studies with statistically significant or "positive" results are more likely to be published than those with null or negative findings, potentially exaggerating effects in meta-analyses. Selective outcome reporting bias involves reporting only a subset of prespecified outcomes—typically those showing favorable results—while omitting or incompletely describing others, with evidence indicating that up to 62% of randomized controlled trials (RCTs) alter their outcomes post-hoc based on results. Other forms include time lag bias, where publication timing depends on result favorability; language bias, favoring English-language journals for positive findings; and duplicate or multiple publication, which inflates the visibility of certain results. These biases are particularly prevalent in clinical trials and can affect fields like medicine, psychology, and the social sciences.

The consequences of reporting bias are profound, as it can lead to overestimation of intervention efficacy, misguided healthcare decisions, and patient safety risks. For example, in systematic reviews, adjusting for selective outcome reporting has been shown to reduce treatment effect estimates by a median of 39%, highlighting how unreported data distort conclusions. A notable case is the withdrawal of the painkiller rofecoxib (Vioxx) in 2004, where selective reporting of cardiovascular risks in early trials contributed to an estimated 88,000–140,000 excess cases of serious coronary heart disease, many preventable through fuller disclosure. To counteract these issues, strategies such as mandatory prospective registration of trials on platforms like ClinicalTrials.gov, adherence to standardized reporting guidelines like CONSORT, and promotion of open science practices (e.g., via data repositories) are essential for enhancing transparency and minimizing bias.

Overview and Fundamentals

Definition and Scope

Reporting bias refers to the systematic distortion in the dissemination of research findings that occurs when the reporting of results is selectively influenced by their direction, statistical significance, or perceived novelty, rather than by the study's methodological quality or rigor. This form of bias arises during the post-study phase, where decisions about what, how, where, and when to report findings can skew the available evidence, often favoring positive or statistically significant outcomes over null or negative ones. The scope of reporting bias extends across various dimensions of dissemination, including the selective choice of which outcomes or entire studies to report (e.g., highlighting positive results while omitting negative ones), the manner in which data are presented (e.g., emphasizing favorable interpretations or downplaying limitations), and the timing or venue of publication (e.g., rapid reporting of novel findings in high-impact journals). Both intentional actions, such as deliberate suppression to align with sponsor interests, and unintentional factors, like journal preferences for exciting results, contribute to this bias, which also affects arenas beyond primary research, such as media coverage of scientific news. A primary subtype, publication bias, exemplifies this by disproportionately favoring the dissemination of studies with positive results.

The concept of reporting bias has roots dating back centuries, but the specific issue of publication bias—a key subtype—was first formally documented in 1959 by Sterling, who noted that 97% of published psychological studies reported statistically significant ("positive") results. Empirical investigations and broader recognition grew in the 1980s, with analyses like Simes (1986) providing evidence of selective dissemination in clinical trials.

Unlike selection biases that emerge during study design or participant recruitment, reporting bias specifically involves distortions in the communication and accessibility of completed research, thereby affecting the cumulative evidence base without altering the underlying data. Reporting bias primarily arises during the dissemination phase of research, after data collection and analysis are complete, when findings are selectively reported or suppressed to align with desired narratives or expectations. In contrast, selection bias occurs earlier, during the enrollment of study participants, where systematic differences in who is included can distort the sample's representativeness. Performance bias, meanwhile, emerges during the implementation of interventions, often due to differences in how treatments are delivered or how participants adhere to protocols, affecting the comparability of groups. To illustrate these differences, the following table compares reporting bias with selection and performance biases across key dimensions:
| Bias Type | Timing in Research Process | Primary Impact | Example |
|---|---|---|---|
| Reporting bias | Post-analysis (dissemination) | Inflates effect sizes in meta-analyses by omitting unfavorable results | Selective emphasis on positive outcomes in reports, skewing systematic reviews |
| Selection bias | Pre-analysis (participant enrollment) | Distorts generalizability by uneven sampling | Excluding certain demographics from a study population, leading to non-representative findings |
| Performance bias | During conduct (intervention delivery) | Compromises internal validity through unequal treatment administration | Knowledge of group assignment influencing caregiver behavior in a randomized trial |
Unlike confirmation bias, which influences pre-reporting stages by predisposing researchers to interpret data in ways that support preconceived hypotheses, reporting bias specifically pertains to the choices made in presenting those interpretations. Sponsorship bias, often driven by financial conflicts of interest, contributes to reporting bias by exerting external pressure on what results are highlighted or downplayed, but it is not synonymous with it, as it represents a specific driver rather than the broader phenomenon of selective reporting. Reporting bias can amplify publication bias—where non-significant results are less likely to be published—by encompassing not only non-publication but also distortions within published works, making its scope broader and the resulting evidence base more skewed. In systematic reviews, reporting bias is exemplified by the "file drawer problem," in which unreported null or negative studies remain hidden, disproportionately skewing the synthesized evidence toward positive findings and eroding the reliability of meta-analytic conclusions.

Contexts of Occurrence

In Scientific Research and Publishing

Reporting bias in scientific research arises from systemic pressures that incentivize the selective dissemination of favorable outcomes, often at the expense of null or negative findings. Career advancement in academia is heavily tied to publication records, encapsulated in the "publish or perish" culture, where researchers face intense pressure to produce novel, positive results to secure tenure, promotions, and funding. Funding bodies prioritize projects with high potential impact, further encouraging the pursuit and reporting of statistically significant outcomes over exploratory or inconclusive work. Journal impact factors exacerbate this by rewarding publications in high-profile venues that favor groundbreaking discoveries, leading researchers to emphasize positive results and downplay contradictions.

In the publishing ecosystem, peer-review processes can perpetuate reporting bias, as editors and reviewers often reject manuscripts with null results due to perceived lack of novelty or interest. This gatekeeping contributes to widespread underreporting, with estimates that 40% to 50% of research with negative outcomes remains unpublished across disciplines, distorting the literature toward positive findings. For instance, empirical analyses of U.S. state-level publication data show that heightened publication pressures correlate with increased bias in reported results, as scientists adjust analyses or suppress unfavorable findings to meet expectations.

Systemic factors in scientific publishing have evolved since the early 2000s, with the rise of trial registries and open-access models aiming to mitigate bias, though challenges persist. Registries, such as those established for prospective study protocols, have improved transparency by mandating pre-registration, reducing opportunities for selective reporting in registered studies, but incomplete compliance limits their full impact. Open-access journals, by removing space constraints common in traditional subscription-based outlets, can accommodate a broader range of results, including null findings, potentially alleviating publication bias; however, high article processing charges in some open-access venues may introduce new inequities favoring well-funded research. Traditional journals, meanwhile, continue to prioritize high-impact positive results to maintain prestige, sustaining the bias despite these reforms.

In Clinical and Medical Studies

In clinical and medical studies, reporting bias manifests prominently through the selective omission of adverse events and null or negative results in publications, often driven by pharmaceutical sponsors seeking to highlight favorable outcomes. For instance, published studies on antidepressants report adverse events in only 46% of cases, compared to 95% in unpublished versions, resulting in a median of 64% of adverse events being missed in the published literature. This underreporting can lead to incomplete safety profiles for drugs, potentially endangering patients by downplaying risks such as increased suicidality or cardiovascular events. Pharmaceutical companies have historically delayed or suppressed negative reports; in the case of selective publication of antidepressant trials, of 37 nonsupporting trials, 19 (51%) were not published and 17 (46%) were published in a way that conveyed a positive outcome, distorting the overall evidence base.

These biases have profound medical implications, undermining evidence-based medicine by inflating perceived treatment benefits and leading to misguided clinical decisions. A seminal analysis of FDA-registered antidepressant trials revealed that selective reporting overestimated drug efficacy by 32% overall, with the apparent effect size rising from 0.31 (FDA data) to 0.41 (published literature), equivalent to inflations of 20–30% for some individual drugs. Such distortions can result in overprescribing, as seen with certain antidepressants for which null results from unpublished trials were systematically excluded, prompting widespread prescribing despite limited true benefits. Outcome reporting bias, a related issue in trials, further exacerbates this by selectively emphasizing positive secondary endpoints over predefined primary failures. A 2022 review confirmed that while reporting bias has lessened since regulatory changes, unpublished negative trials continue to inflate efficacy estimates by about 20%.

Regulatory efforts have aimed to mitigate these issues, notably through the 2007 Food and Drug Administration Amendments Act (FDAAA), which mandated prospective trial registration and results reporting on ClinicalTrials.gov to curb selective dissemination. Post-FDAAA, registration rates for applicable trials increased from 70% (pre-2007) to 100%, publication rates rose from 89% to 97%, and concordance between published results and FDA interpretations improved from 84% to 97%, reducing opportunities for bias. Prior to such mandates, in the pre-2000s era, non-reporting of negative trials was rampant, with estimates indicating that around 50% of trials overall—and a higher proportion of those with unfavorable outcomes—remained unpublished, as evidenced by early FDA reviews showing that favorable results were over three times more likely to be published without alteration.

A particularly insidious form of reporting bias in medical studies involves ghostwriting, where pharmaceutical sponsors hire professional writers to draft manuscripts that are then attributed to academic experts, subtly influencing content to favor products. For example, GlaxoSmithKline (GSK) commissioned ghostwriters to produce studies, editorials, and even textbook chapters on Paxil (paroxetine), portraying it as safe and effective for adolescent depression despite internal data showing inefficacy and risks; these works, bylined by prominent academics, appeared in peer-reviewed journals and shaped prescribing practices until regulatory scrutiny in 2004. In August 2025, the pivotal ghostwritten paper was retracted due to misrepresentation of its data.
Similarly, Wyeth used ghostwriting vendors to promote hormone replacement therapy (HRT) products like Prempro in over 50 journal articles between 1997 and 2003, downplaying risks and exaggerating unproven preventive benefits, thereby embedding marketing in credible literature.
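As a simple arithmetic check on the antidepressant figures above (no new data, just the two pooled estimates already quoted), the 32% overestimate follows directly:

\frac{0.41 - 0.31}{0.31} \approx 0.32

That is, the published literature implied roughly a one-third larger apparent effect size than the complete FDA dataset.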

Types of Reporting Bias

Publication Bias

Publication bias refers to the selective publication of findings based on their direction or strength, with studies reporting positive or statistically significant results being more likely to be published than those with negative or null outcomes. This phenomenon arises primarily from incentives within the research ecosystem, where journals, editors, and researchers prioritize novel or "exciting" results that demonstrate strong effects, often sidelining less favorable findings due to perceived lack of interest or impact. As a result, the published literature becomes distorted, overrepresenting positive outcomes and creating a skewed evidence base that misrepresents the true distribution of results. For example, in antidepressant trials, analysis of FDA-reviewed studies showed that 94% of published trials reported positive or favorable outcomes, compared to 51% of the full set of trials submitted to the FDA.

The issue was formally highlighted by Robert Rosenthal in 1979 through the "file drawer problem" analogy, which posits that null or contradictory results are disproportionately stored away unpublished, while significant findings fill journal pages, potentially undermining the validity of cumulative knowledge in a field. Evidence from meta-analyses further underscores the scope, revealing publication bias in a substantial proportion (e.g., around 41%) of meta-analyses across disciplines like psychology; in head and neck cancer research, trials with nonsignificant results were published at rates 11 times lower than those with positive results.

The consequences of publication bias are profound, as it systematically inflates effect sizes in meta-analyses, leading to overstated treatment benefits or associations that can influence policy, funding, and clinical practice. A prominent historical example involves hormone replacement therapy (HRT) in the 1990s, where observational studies predominantly reported cardiovascular benefits; the discrepancy with later randomized evidence was attributed to confounding factors and selective emphasis rather than comprehensive reporting of risks, as revealed by the 2002 Women's Health Initiative trial. To address such distortions, Begg and Mazumdar introduced a rank correlation test in 1994 for detecting funnel plot asymmetry, in which an asymmetric scatter of study precision against effect size signals potential bias from suppressed null results.

Time Lag Bias

Time lag bias refers to the phenomenon in scientific reporting where studies yielding positive or statistically significant results are published more rapidly than those with negative, null, or less favorable outcomes, thereby temporarily distorting the available body of evidence toward overestimation of effects. This bias contributes to an imbalance in the timely dissemination of research, as favorable findings are often expedited through submission and peer-review processes, while unfavorable ones encounter delays. The mechanism typically involves a publication lag of 1–2 years longer for negative results. For instance, a systematic review of pediatric antidepressant trials found a median time from study completion to publication of 2.2 years (0.9) for positive trials versus 4.2 years (1.9) for negative trials, a statistically significant difference (log-rank χ² = 4.35, p = 0.037). Similarly, a Cochrane review of randomized controlled trials reported median times from completion to publication of 2 years for positive results compared to 2.6 years for negative or null results, highlighting how such delays accumulate across studies to skew meta-analyses conducted in the interim.

Analyses of clinical trials in infectious diseases, such as vaccine research, have illustrated these lags, with negative outcomes often postponed, impeding the real-time evidence synthesis essential during emerging health threats such as pandemics. The pattern persisted into the COVID-19 era, where rapid sharing of preliminary positive findings outpaced the publication of rigorous negative trials, complicating guideline development. Key causes include researchers' reluctance to submit null results promptly due to perceived career risks and journals' preferences for "breakthrough" stories that align with timely positive narratives, fostering a cycle of delayed unfavorable data.

The consequences manifest as temporary overestimation of intervention efficacy, potentially leading to misguided clinical practices; for example, early enthusiasm for treatments such as hydroxychloroquine was amplified by swift reports of preliminary benefits, only later tempered by delayed publications of large negative randomized trials that clarified limited or adverse effects. In the pediatric antidepressant case, considering only publications within 3 years of completion yielded a number needed to treat (NNT) of 7, but including delayed studies increased it to 17, underscoring how lags can inflate apparent benefits by 10–20% or more in interim estimates during evidence reviews. Time lag bias represents a continuum with publication bias, where extreme delays may culminate in non-publication altogether.
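Because the number needed to treat is the reciprocal of the absolute risk reduction (ARR), the shift from 7 to 17 quoted above can be made concrete with a one-line check:

\text{NNT} = \frac{1}{\text{ARR}} \quad\Rightarrow\quad \text{ARR} = \tfrac{1}{7} \approx 14.3\% \text{ (early literature)} \quad \text{vs.} \quad \text{ARR} = \tfrac{1}{17} \approx 5.9\% \text{ (all studies)}

So the early, positively skewed literature implied roughly 2.4 times the absolute benefit that the complete evidence ultimately supported.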

Duplicate Publication Bias

Duplicate publication bias refers to the redundant reporting of the same or substantially overlapping research findings across multiple journals, conferences, or other outlets without adequate disclosure or cross-referencing, which can artificially inflate the perceived impact of the work and distort systematic reviews. This practice often aims to boost publication counts and academic productivity metrics, but it undermines the integrity of the literature by introducing double-counting through overrepresentation of specific results. A key mechanism driving duplicate publication is "salami slicing," where researchers divide a single study's data, methods, or outcomes into several minimally publishable units to generate more papers from one dataset. Audits of biomedical literature have estimated the prevalence of such duplicates at 5–10%, with one audit of 1,234 articles in biomedical journals identifying covert duplicates in 5.3% of cases. These rates vary by field but highlight a persistent issue, particularly in high-stakes areas like medicine, where publication volume influences funding and career advancement.

Notable examples emerged in cardiology during the 2000s, where clinical trials on cardiovascular drugs were republished across journals, sometimes in different languages, without clear acknowledgment of prior versions. Such duplications have skewed meta-analyses by allowing the same dataset to be weighted multiple times, potentially exaggerating treatment efficacy and leading to misguided clinical decisions. This redundancy can also exacerbate citation bias, as duplicated works accumulate citations that should be attributed to the original study. The International Committee of Medical Journal Editors (ICMJE) has addressed this through its uniform requirements, which since 1997 have mandated that authors disclose any prior or overlapping publications to prevent redundant reporting. Despite these guidelines, enforcement remains inconsistent due to challenges in detecting overlaps during peer review and limited penalties for violations, allowing duplicate publications to continue affecting the literature.

Location Bias

Location bias, a subtype of reporting bias, refers to the disproportionate likelihood of studies originating from high-income countries or prestigious institutions being published in high-impact journals, while research from low- and middle-income countries (LMICs) or less prominent locations is systematically underrepresented. This bias manifests through selective acceptance in top-tier venues such as the New England Journal of Medicine (NEJM) and The Lancet, where editorial preferences and resource disparities favor Western or high-resource settings, marginalizing findings from elsewhere. As a result, the global scientific literature skews toward results from affluent regions, distorting the evidence base for policy and practice.

Empirical evidence from bibliometric analyses in the 2010s and early 2020s highlights this underrepresentation. A study of over 10,000 articles in leading biomedical journals (including NEJM, Nature Medicine, and The Lancet) from 2010–2019 found that 72.6% of publications originated from just four high-income countries (48.2%, 15.9%, 5.3%, and 3.2% for the top four contributors), while contributions from low-resource regions were minimal, with only 32 of 77 contributing countries producing 10 or more articles. Similarly, a 2017 survey of 6,491 articles across five high-impact journals showed LMICs (grouped as "Rest of the World") accounting for just 11.9% of content, an increase from 6.5% in 2000 but still small for countries holding 88.3% of the global population. In specific domains like tropical diseases, trials from endemic settings face heightened barriers; authorship analyses of malaria and neglected tropical disease studies indicate that although most laboratory work (68–90% of cases) is performed in the affected regions, publication rates in top journals remain low due to collaborative dependencies on high-income partners. These patterns suggest an underrepresentation of 20–40% for LMIC research relative to global output in various meta-analyses.

The causes of location bias include structural prestige hierarchies in academic publishing, where high-impact journals prioritize submissions from established networks, and intersecting factors like language barriers that disproportionately affect non-English-speaking regions. Prestige bias exacerbates this, as articles in Western-dominated journals garner 2–3 times higher citation rates due to visibility and self-citation patterns, creating a feedback loop that reinforces exclusion. For example, domestic self-citations account for 74.5% of total citations in top journals, favoring U.S.-based outputs.

The impacts of location bias perpetuate inequities by underreporting effective, low-cost interventions developed in resource-limited settings, such as community-based treatments for infectious diseases in LMICs. This skew limits the adoption of contextually relevant evidence in international guidelines, contributing to "safari research," in which data from developing regions is extracted without local authorship or dissemination, further eroding trust and capacity in those areas. Overall, it hinders equitable knowledge diffusion and policy-making for underrepresented populations.

Citation Bias

Citation bias refers to the tendency of researchers to selectively cite studies based on their outcomes, favoring those with positive, statistically significant, or confirmatory results over null or negative findings. This bias manifests when authors disproportionately reference research that aligns with their hypotheses or preconceived notions, often overlooking contradictory evidence. For instance, meta-analyses of citation patterns across scientific disciplines indicate that articles reporting positive results are cited approximately twice as often as those with negative results, thereby amplifying the visibility of supportive findings in the literature.

A key mechanism underlying citation bias is the Matthew effect, whereby prominent or high-profile authors and institutions receive disproportionately more citations due to their established reputations, independent of the work's intrinsic merit. This cumulative advantage, first conceptualized by Robert K. Merton, leads to a feedback loop in which successful researchers garner further recognition, while lesser-known contributors are marginalized. Bibliometric analyses have demonstrated this effect across fields, with networking and prestige accounting for significant portions of the uneven citation distribution; for example, studies decomposing citation dynamics found that prestige alone can explain up to 40% of the variance in citation counts for influential papers. In the social sciences, such patterns exacerbate inequalities, as evidenced by research showing that citation practices often reinforce existing hierarchies, with top-cited works dominating subsequent reviews.

Examples of citation bias are evident in controversial areas like climate change denial, where the contrarian literature frequently cherry-picks and over-cites a narrow subset of skeptical studies while systematically ignoring the broader consensus from thousands of peer-reviewed papers affirming human-caused warming. A review of climate change denial books revealed that over 90% lack peer review and recycle a limited pool of non-consensus sources, creating a distorted evidence base through selective referencing. Similarly, gender biases in citations amplify reporting distortions; a 2022 analysis in Proceedings of the National Academy of Sciences of elite scholars' citation patterns found that papers led by women receive fewer citations overall, particularly from male authors, which can undervalue research on topics where women are more represented and perpetuate imbalances in perceived scientific validity.

The consequences of citation bias include the perpetuation of echo chambers within literature reviews and meta-analyses, where unrepresentative citation patterns skew the perceived weight of evidence and hinder comprehensive scientific progress. This selective reinforcement can lead to misguided decisions and prolonged debates over established facts, as distorted citation networks prioritize confirmatory findings over diverse perspectives.

Language Bias

Language bias in scientific reporting refers to the systematic underrepresentation or undervaluation of research published in non-English languages, leading to skewed evidence bases in reviews and meta-analyses. This phenomenon arises primarily from the English-language dominance of major indexing databases such as MEDLINE and EMBASE, which favor English-language journals and systematically exclude a substantial portion of global output from non-English sources. Nearly 80% of all indexed scientific publications worldwide are in English, despite the production of valuable research in other languages, including clinical trials that often remain overlooked due to language barriers and selective indexing. For instance, among Chinese-sponsored randomized clinical trials, those published in English were over three times more likely to report positive results and achieve higher visibility, while non-English versions faced lower citation rates and limited inclusion (only 21.6% of Chinese-language articles indexed).

Empirical evidence from 2010s reviews underscores the distorting effects of this exclusion on meta-analytic results. Excluding non-English studies has been shown to alter effect sizes in over half of examined meta-analyses, with biases reaching up to 23% in magnitude for certain topics, such as those assessing ecological impacts or plant life spans. In ecological contexts, comparisons between English- and non-English-language studies revealed significant discrepancies; for example, English-language findings on rice-field effects were positive, while non-English ones were negative, reversing the overall meta-analytic direction upon inclusion. Similarly, leaf life span analyses showed English studies yielding 23% more negative effect sizes than non-English counterparts, inflating bias by 7% when non-English work was omitted. These patterns highlight how language restrictions introduce systematic asymmetries, particularly in fields reliant on regional data.

Illustrative cases include research on traditional medicines for respiratory conditions and on herbal formulations, which are frequently undercited because they appear in local non-English journals with limited international accessibility. A pivotal development occurred in 2005, when analyses of database coverage revealed that even "no language restrictions" searches in MEDLINE and EMBASE missed numerous non-English journals, prompting Cochrane guidelines to emphasize broader inclusion of multilingual sources—yet implementation gaps persist, with non-English studies still comprising less than 30% of included studies in many reviews. This overlap with location bias further marginalizes geographically specific findings, as non-English work from non-Anglophone regions is doubly disadvantaged.

The consequences of language bias extend to profound cultural and regional knowledge erosion, as non-English research often embeds unique indigenous or localized insights inaccessible to English-dominant syntheses. In traditional medicine, this can mean the irrecoverable loss of plant-based remedies tied to endangered languages, reducing opportunities for global pharmacological discoveries and perpetuating inequities in scientific representation.

Outcome Reporting Bias

Outcome reporting bias occurs when researchers selectively report or emphasize certain outcomes from a study based on their results, such as omitting unfavorable or statistically insignificant findings while highlighting favorable ones, or introducing new outcomes post hoc to align with positive results. This form of bias compromises the integrity of trial results by distorting the full spectrum of evidence. Empirical reviews of randomized controlled trials from the 2000s indicate that outcome reporting bias affects approximately 25–40% of trials, with systematic analyses estimating that 40–62% of trials have at least one primary outcome changed, introduced, or omitted.

The primary mechanisms driving outcome reporting bias involve decisions made after data analysis, such as altering the emphasis on pre-specified outcomes or redefining them to favor statistically significant results. For instance, trialists may downgrade originally planned primary endpoints that show no benefit while elevating secondary or exploratory measures that appear promising. In oncology trials, a common example is shifting focus from overall survival (OS), a rigorous but harder-to-achieve endpoint, to progression-free survival (PFS), a surrogate marker that may yield positive results more readily, thereby masking a potential lack of long-term benefit. Such post-hoc adjustments often stem from knowledge of the results, enabling selective presentation without transparent disclosure. This bias can be exacerbated by broader issues like publication bias, where only studies with positive findings are pursued for publication.

A notable real-world example is the 2004 Vioxx (rofecoxib) scandal, where Merck downplayed secondary outcomes related to cardiovascular risks in the VIGOR trial, instead framing the increased thrombotic events as evidence of cardioprotection from the comparator drug naproxen, which delayed recognition of the drug's harms and contributed to its market withdrawal after widespread use. The introduction of mandatory trial registration, such as the requirement adopted by the ICMJE in 2005, has since helped expose such discrepancies by allowing comparisons between pre-registered protocols and published reports, revealing outcome changes in up to 31% of adequately registered trials.

The consequences of outcome reporting bias extend to misleading clinical practice, as distorted evidence can infiltrate systematic reviews and meta-analyses, leading to overestimation of treatment benefits and underappreciation of risks in guidelines. For example, selective emphasis on favorable endpoints has been shown to inflate effect sizes in Cochrane reviews, potentially guiding inappropriate therapeutic recommendations and contributing to patient harm. This undermines trust in trial evidence and perpetuates research waste by prioritizing biased narratives over comprehensive data.
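At their core, the registry-versus-publication audits described above compare a trial's prespecified outcome list with its published one. The following minimal Python sketch illustrates that comparison on hypothetical data; the function name, the normalization step, and the example outcomes are illustrative assumptions, not a standard auditing tool.

```python
# Hypothetical sketch: flag discrepancies between a trial's registered and
# published outcomes -- the basic operation behind registry-based audits.
def outcome_discrepancies(registered: set[str], published: set[str]) -> dict:
    reg = {o.strip().lower() for o in registered}
    pub = {o.strip().lower() for o in published}
    return {
        "omitted": sorted(reg - pub),      # prespecified but never reported
        "introduced": sorted(pub - reg),   # reported but never prespecified
    }

# Example mirroring the OS-to-PFS switch described above.
print(outcome_discrepancies(
    registered={"Overall survival", "Grade 3+ adverse events"},
    published={"Progression-free survival", "Grade 3+ adverse events"},
))
# {'omitted': ['overall survival'], 'introduced': ['progression-free survival']}
```

Real audits additionally have to reconcile differences in outcome wording and timing (e.g., "mortality at 12 months" versus "1-year survival"), which a simple set comparison like this does not capture.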

Knowledge Chasm Bias

Knowledge chasm bias, also known as knowledge reporting bias, refers to the systematic underreporting of applied or implementation knowledge in favor of basic-science findings, creating a gap where fundamental discoveries receive extensive coverage while practical applications, scaling challenges, or real-world failures remain largely undocumented. This bias manifests in fields like medicine, where laboratory breakthroughs showing promise in controlled settings are widely published, but subsequent hurdles in delivery, adherence, or uptake during rollout are often overlooked or minimally reported. The term was introduced in the context of implementation science by Greenhalgh et al. in their foundational work on the diffusion of innovations, highlighting how such gaps hinder the progression from evidence generation to practical use.

The mechanisms driving knowledge chasm bias stem from academic and funding incentives that prioritize "blue-sky" or discovery-oriented research over translational efforts, as basic science often aligns better with promotion criteria, grant availability, and high-impact journal preferences. Differing priorities among researchers, practitioners, and funders create further barriers to reporting implementation outcomes, with basic research dominating publication outputs while a much smaller portion addresses deployment or real-world application. A notable example occurs in mental health, where interventions developed in the 1990s for community programs, such as psychosocial support models, were frequently reported in initial efficacy trials but rarely documented during scaling efforts, leading to lost insights on adaptation and sustainability. Details of program implementation, including barriers like resource constraints or cultural mismatches, are typically underreported in the published literature, perpetuating cycles of reinvention rather than building on prior experience. This selective focus relates to outcome reporting bias in its emphasis on positive or selective results within studies, but knowledge chasm bias more broadly addresses systemic underdocumentation across research stages.

The impacts of knowledge chasm bias are profound, as it slows the translation of evidence into practice, exacerbating the "know-do gap" where effective interventions fail to reach populations in need, and contributing to inefficiencies in health systems worldwide. By limiting the visibility of implementation challenges and successes, this bias delays reforms and innovation, ultimately hindering advancements in care.

Detection and Measurement

Funnel Plot Analysis

Funnel plot analysis serves as a primary graphical technique for identifying potential reporting biases, particularly publication bias, in meta-analyses of clinical trials and other studies. It visualizes the distribution of study effect sizes against a measure of study precision to assess whether smaller studies with less precise estimates are symmetrically represented around the pooled effect. In an unbiased scenario, the plot forms a symmetrical inverted funnel shape, with high-precision (larger) studies clustering near the average effect size and lower-precision (smaller) studies scattering more widely but evenly on both sides. Asymmetry in this distribution, often appearing as a gap or "hole" on the side of smaller or non-significant effects (typically the left side for positive outcomes), indicates that certain results may be missing, suggesting selective reporting or non-publication of unfavorable findings.

The construction of a funnel plot involves plotting each study's effect size estimate—such as an odds ratio, risk ratio, or mean difference—on the horizontal axis and its precision, conventionally the reciprocal of the standard error (1/SE), on the vertical axis. The vertical axis emphasizes precision because the standard error decreases with increasing sample size, placing larger studies higher on the plot and closer to the true effect. Interpretation focuses on symmetry: under no bias, the scatter should mirror the distribution expected from sampling variation alone. For instance, an asymmetric plot might show an overabundance of small studies with large positive effects but few with negative ones, pointing to suppression of null or adverse results. Detecting publication bias is the method's most common application, since studies with non-significant outcomes are less likely to be disseminated.

The funnel plot was originally proposed by Light and Pillemer in 1984 as a simple visual aid for synthesizing research reviews, drawing on the idea that study precision influences the variability of effect estimates. It gained prominence through refinements by Egger et al. in 1997, who demonstrated its utility in detecting bias across meta-analyses and paired it with quantitative assessments to enhance reliability. A notable application arose in meta-analyses of antidepressant trials, where funnel plots exposed asymmetry, revealing that selective publication inflated apparent drug efficacy by excluding negative small-scale studies.

Despite its widespread adoption, funnel plot analysis has limitations, including reliance on subjective visual judgment, which can lead to inconsistent interpretations among observers. Asymmetry is not always indicative of bias and may instead stem from clinical heterogeneity, methodological differences between studies, or chance, particularly when fewer than 10 studies are included. Thus, funnel plots should not be used in isolation but complemented by statistical tests for confirmation, as they provide suggestive rather than definitive evidence of reporting bias.
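As an illustration of this construction, the following Python sketch simulates a set of hypothetical studies, suppresses the non-significant ones to mimic reporting bias, and draws the resulting funnel plot. It assumes numpy and matplotlib are available, and all study data are simulated rather than drawn from any analysis cited here.

```python
# Minimal funnel-plot sketch with simulated data (not from any cited study).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=1)

# Simulate 40 hypothetical studies around a true log odds ratio of 0.3;
# smaller standard errors correspond to larger, more precise studies.
true_effect = 0.3
se = rng.uniform(0.05, 0.5, size=40)
effects = rng.normal(true_effect, se)

# Crude reporting-bias simulation: keep only "significant" studies,
# i.e. those whose 95% CI excludes zero, mimicking suppressed null results.
published = np.abs(effects / se) > 1.96

fig, ax = plt.subplots()
ax.scatter(effects, 1 / se, alpha=0.3, label="all studies (unbiased)")
ax.scatter(effects[published], 1 / se[published], marker="x",
           label="published only (biased)")
ax.axvline(true_effect, linestyle="--", color="gray")
ax.set_xlabel("effect size (log odds ratio)")
ax.set_ylabel("precision (1 / SE)")
ax.legend()
plt.show()
```

In the biased subset, the lower-left region of the funnel (small, null-leaning studies) empties out—exactly the gap that the visual inspection described above looks for.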

Statistical Tests for Asymmetry

Statistical tests for asymmetry provide quantitative methods to detect reporting bias in meta-analyses by evaluating deviations from symmetry in the distribution of study effects, often building on visual assessments like funnel plots. These tests assess whether smaller studies with non-significant or negative results are systematically underreported, leading to an overestimation of overall effects. Common approaches include regression-based and rank correlation tests, which are applied after standard meta-analytic models to test for publication or reporting bias.

Egger's test is a widely used method that models the standardized effect size against study precision. The test regresses the effect estimate divided by its standard error (the standardized effect) on the inverse of the standard error (the precision):

\frac{\hat{\theta}_i}{\text{SE}(\hat{\theta}_i)} = \beta_0 + \beta_1 \cdot \frac{1}{\text{SE}(\hat{\theta}_i)} + \epsilon_i

where \hat{\theta}_i is the effect estimate for study i, \text{SE}(\hat{\theta}_i) is its standard error, \beta_0 is the intercept, \beta_1 is the slope, and \epsilon_i is the error term. The null hypothesis states that the intercept \beta_0 = 0, indicating no asymmetry; a significant intercept (typically p < 0.05) suggests bias, as it implies that smaller studies (lower precision) have systematically larger effects. This test is sensitive to small-study effects and is recommended for meta-analyses with at least 10 studies.

Begg's rank correlation test, a non-parametric alternative, evaluates the association between the standardized effect sizes and their variances using Kendall's tau rank correlation coefficient. It ranks studies by effect size and variance (or standard error), testing for correlation under the null hypothesis of no bias (tau = 0); a significant correlation (p < 0.05) indicates asymmetry, suggesting underreporting of smaller, less favorable studies. This method is less affected by outliers than Egger's test but has lower power in some scenarios.

These tests are routinely applied in meta-analyses to quantify reporting bias, particularly in fields like surgery where selective reporting is prevalent. For instance, a 2019 review of high-impact orthopaedic surgery meta-analyses found evidence of publication bias in 34% of cases using Egger's test, highlighting the issue in surgical outcome research from the 2010s.

To adjust for detected bias, the trim-and-fill method estimates and imputes potentially missing studies based on funnel plot symmetry, iteratively "trimming" asymmetric points and "filling" in mirrored counterparts before recalculating the pooled effect. This approach assumes symmetry in the complete dataset and is implemented as a sensitivity analysis rather than a definitive correction. Another metric, Orwin's fail-safe N, calculates the number of additional studies with null effects (effect size = 0) needed to reduce the overall meta-analytic effect to a trivial level, providing a robustness estimate against bias. Setting the average of the k observed effects, \text{ES}_\text{obs}, equal to a predefined trivial level \text{ES}_\text{trivial} after adding N null studies gives

N = \frac{k\,(\text{ES}_\text{obs} - \text{ES}_\text{trivial})}{\text{ES}_\text{trivial}}

where \text{ES}_\text{trivial} is a predefined small effect (e.g., 0.05 for Cohen's d); a large N (e.g., > 5k + 10) suggests resilience to unreported null studies. Tools like the R package metafor facilitate these analyses, offering functions for Egger's and Begg's tests, trim-and-fill, and fail-safe N within a unified framework for meta-analytic estimation and diagnostics.
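For concreteness, here is a hedged Python sketch of the three computations just described, written directly against numpy, scipy, and statsmodels rather than the metafor functions mentioned above. The function names are illustrative, and the Begg implementation follows the standard standardized-deviate construction around a fixed-effect pooled estimate.

```python
# Illustrative implementations of the asymmetry diagnostics described above.
import numpy as np
import statsmodels.api as sm
from scipy import stats

def eggers_test(effects, ses):
    """Egger's regression: standardized effect on precision (1/SE).
    A non-zero intercept suggests small-study asymmetry."""
    X = sm.add_constant(1.0 / ses)            # intercept column + precision
    fit = sm.OLS(effects / ses, X).fit()
    return fit.params[0], fit.pvalues[0]      # intercept and its p-value

def beggs_test(effects, ses):
    """Begg-Mazumdar rank correlation between standardized deviates
    (from the fixed-effect pooled estimate) and study variances."""
    v = ses ** 2
    w = 1.0 / v
    pooled = np.sum(w * effects) / np.sum(w)
    v_star = v - 1.0 / np.sum(w)              # variance of (effect - pooled)
    deviates = (effects - pooled) / np.sqrt(v_star)
    tau, p_value = stats.kendalltau(deviates, v)
    return tau, p_value

def orwin_fail_safe_n(effects, es_trivial=0.05):
    """Orwin's fail-safe N: null studies needed to dilute the mean
    effect down to a predefined trivial level."""
    k = len(effects)
    es_obs = float(np.mean(effects))
    return k * (es_obs - es_trivial) / es_trivial

# Toy usage on simulated data with a built-in small-study effect
# (smaller studies report systematically larger effects).
rng = np.random.default_rng(seed=2)
ses = rng.uniform(0.05, 0.4, size=25)
effects = rng.normal(0.2 + 0.8 * ses, ses)

print("Egger intercept, p:", eggers_test(effects, ses))
print("Begg tau, p:", beggs_test(effects, ses))
print("Orwin fail-safe N:", orwin_fail_safe_n(effects))
```

A production analysis would more likely call metafor's regtest(), ranktest(), trimfill(), and fsn() in R, which implement these diagnostics with more careful handling of weighting and variance estimators.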

Mitigation and Prevention

Editorial and Policy Guidelines

The International Committee of Medical Journal Editors (ICMJE) recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals—first established as uniform requirements in 1978 and updated periodically, most recently in January 2025—mandate prospective registration of clinical trials in a public registry before enrollment of the first participant and require full reporting of all prespecified outcomes to prevent selective disclosure. These guidelines, adopted by over 500 biomedical journals, aim to enhance transparency and reduce outcome reporting bias by ensuring that trial protocols are publicly available for verification against published results. Complementing these, the CONSORT (Consolidated Standards of Reporting Trials) statement, initially developed in 1996 and revised multiple times, including in 2025, provides an evidence-based checklist of 30 essential items for transparent reporting of randomized trials, emphasizing complete description of methods, results, and all outcomes—both positive and negative—to minimize distortions. Endorsed by major journals and organizations, CONSORT facilitates critical appraisal of trials and has been shown to improve the quality of published reports when followed. Journal-specific policies further reinforce these standards; for instance, many journals require authors to declare all funding sources, adhere to ICMJE criteria, and provide data availability statements, including full disclosure of results and protocols, to promote unbiased reporting.

Regulatory frameworks add enforceable measures. The European Union's Clinical Trials Regulation (EU No 536/2014), adopted in 2014 and applicable from 31 January 2022, obligates sponsors to post summary results of interventional trials in the EU Clinical Trials Register within 12 months of completion (or 6 months for pediatric trials), covering demographics, outcomes, safety, and conclusions, to ensure public access and curb selective reporting; from 31 January 2023, all new applications must use the Clinical Trials Information System (CTIS). In the United States, the Food and Drug Administration Amendments Act of 2007, specifically Section 801, requires submission of basic results summaries to ClinicalTrials.gov within 12 months of primary completion for applicable trials involving FDA-regulated products, with civil monetary penalties of up to $16,581 per day for noncompliance as of 2025, marking a pivotal enforcement mechanism against non-reporting.

The implementation of these post-2005 registry requirements has demonstrably lowered reporting bias risks, with prospective registration linked to reduced selective outcome reporting and lower risk of bias across domains such as blinding in meta-analyses of trials. Despite these advances, challenges persist, as compliance with registration and results posting remains incomplete—around 74% for industry sponsors but far lower (around 26%) for some non-commercial sponsors, based on 2025 audits of registry data—often due to resource constraints or varying enforcement across jurisdictions.

Researcher Best Practices

Researchers can mitigate reporting bias by pre-registering study protocols on platforms such as the Open Science Framework (OSF.io), which locks in planned outcomes, hypotheses, and analyses prior to data collection, thereby preventing selective reporting of favorable results. This approach ensures that all pre-specified outcomes are reported, enhancing the transparency and credibility of findings while countering the tendency to omit null or negative results. Complementing pre-registration, researchers should adopt pre-analysis plans that detail intended statistical methods and decision rules in advance, which directly counters p-hacking practices that inflate the likelihood of false positives through iterative data manipulation. Evidence from large-scale analyses of test statistics indicates that combining pre-registration with such plans significantly reduces evidence of p-hacking and the associated bias. Furthermore, committing to publish all results—regardless of their direction or significance—fosters a culture of complete disclosure, aligning with the "all trials registered, all results reported" principle promoted by international initiatives since the early 2010s.

For systematic reviews and meta-analyses, adhering to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines promotes unbiased reporting by standardizing the documentation of search strategies, study selection, and risk of bias assessments. Researchers must also routinely disclose any conflicts of interest, as undisclosed financial or professional ties can subtly influence the emphasis on positive outcomes in publications. Incorporating awareness of reporting bias into research training equips investigators to identify and avoid selective practices during study design and dissemination. Such education emphasizes the ethical imperative of full disclosure, helping to cultivate habits that prioritize scientific integrity over publication incentives. Journals can further support these practices through incentives like open science badges, awarded by organizations such as the Center for Open Science for preregistration, open data, and open materials, which have demonstrably increased rates of transparent practices among researchers.

Examples and Implications

Historical Case Studies

One prominent historical example of reporting bias involves the clinical trials for rofecoxib (Vioxx), a COX-2 inhibitor developed by Merck for the treatment of arthritis and acute pain. From the 1990s through 2004, Merck selectively reported positive outcomes on pain relief and arthritis symptom reduction in publications and regulatory submissions, while omitting or downplaying evidence of cardiovascular risks from internal and external studies. For instance, in the 2000 VIGOR trial, Merck attributed increased heart attack rates in the Vioxx group to a protective effect of the comparator naproxen, rather than highlighting the drug's inherent risks, thereby understating adverse events. This outcome reporting bias was exposed in 2004 by FDA epidemiologist David Graham, a whistleblower who testified on suppressed data showing a 3.7-fold increased risk of heart attack and sudden cardiac death. The revelation, combined with the APPROVe trial results confirming doubled cardiovascular event risks after 18 months of use, prompted Merck to voluntarily withdraw Vioxx from the market on September 30, 2004.

Another illustrative case spans the 1970s and 1980s trials of tamoxifen, an antiestrogen used for breast cancer treatment and prevention. Early randomized controlled trials, such as those from the National Surgical Adjuvant Breast and Bowel Project (NSABP) and others, emphasized tamoxifen's efficacy in reducing recurrence by up to 50%, with widespread publication of these benefits driving its adoption as standard adjuvant therapy. However, these studies selectively reported positive outcomes while underreporting or inadequately monitoring endometrial adverse effects, which had been observed in preclinical models as early as the 1970s but were not systematically disclosed in human trial results until the late 1980s. Isolated case reports of endometrial cancers in tamoxifen users emerged around 1985, but recognition of the elevated risk—approximately a 2- to 4-fold increase—was substantially delayed until the 1990s meta-analyses by the Early Breast Cancer Trialists' Collaborative Group, which pooled data from over 37,000 women and quantified approximately 3 excess endometrial cancers per 1,000 women treated for 5 years.

In both cases, outcome reporting bias—favoring beneficial efficacy endpoints over safety signals—interacted with publication bias, whereby trials showing unfavorable risks were either not fully published or had adverse data minimized in presentations. For Vioxx, this interplay concealed an estimated 27,785 excess cases of heart attacks and sudden cardiac deaths in the U.S. from 1999 to 2003, based on Merck's own trial data as analyzed by Graham. Similarly, for tamoxifen, the selective emphasis on efficacy delayed comprehensive risk disclosure, potentially exposing thousands of postmenopausal women to undetected endometrial risks during the drug's initial decades of use. These distortions not only prolonged market exposure to harmful agents but also eroded trust in clinical evidence.

The Vioxx and tamoxifen cases catalyzed significant transparency reforms in the 2000s, including the World Health Organization's 2005 International Clinical Trials Registry Platform, which promoted prospective trial registration, and the U.S. Food and Drug Administration Amendments Act of 2007, which required results reporting for certain trials to combat selective disclosure. These measures aimed to ensure balanced reporting of all outcomes, influencing global standards for pharmaceutical accountability and ethical research practices.

Broader Impacts on Evidence Synthesis

Reporting bias profoundly distorts systematic reviews and meta-analyses, often leading to inflated estimates of treatment efficacy. For instance, reanalyses of meta-analyses of drug trials demonstrate that incorporating unpublished data from regulatory submissions reduces efficacy estimates by a median of 27% (interquartile range 7% to 67%), with 46% of comparisons showing lower efficacy and 16% revealing increased harms compared to published data alone. This overestimation—for example, 11% to 69% (median 32%) for antidepressants—skews the evidence base toward positive outcomes, compromising the reliability of synthesized results.

In the context of evidence synthesis frameworks like GRADE, reporting bias—particularly publication bias—serves as a key domain for downgrading the certainty of evidence, potentially shifting ratings from high to low when small studies or industry-funded trials predominate and funnel plot asymmetry is evident. This leads to a bias cascade, wherein selective reporting in primary studies propagates through meta-analyses and into clinical guidelines, amplifying distortions and producing recommendations that overestimate benefits or underestimate risks. Such propagation affects policies by promoting the over-adoption of ineffective or harmful interventions, diverting resources from more efficacious alternatives and contributing to suboptimal care.

The broader societal and economic ramifications are substantial, including billions in misguided healthcare spending on treatments whose apparent efficacy stems from biased reporting rather than robust evidence. For example, during the COVID-19 pandemic, incomplete and inconsistent reporting in clinical trials of candidate interventions contributed to challenges in evidence synthesis and timely public health responses. Overall, these impacts erode trust in scientific processes and perpetuate inequities in health outcomes.

Looking ahead, the integration of artificial intelligence tools in the 2020s offers potential for mitigating reporting bias in evidence synthesis, as large language models like ChatGPT-4o achieve fair to moderate agreement with human assessors in risk-of-bias evaluations using tools like RoB 2. However, these advancements introduce new risks, such as algorithmic biases inherited from training data, which could perpetuate or exacerbate distortions in meta-analyses if not carefully managed.

References

  1. [1]
    Glossary | Cochrane Bias
    Glossary of main types of bias. Reporting Bias arises when the dissemination of research findings is influenced by the nature and direction of results.
  2. [2]
    Reporting biases | Catalog of Bias
    Reporting biases are a systematic distortion from selective disclosure or withholding of information, considered a significant form of scientific misconduct.Background · Impact · Preventive steps
  3. [3]
    Selective Outcome Reporting as a Source of Bias in Reviews ... - NCBI
    Reporting biases arise when the dissemination of research findings is influenced by the nature and direction of the results, and can arise from processes ...
  4. [4]
    Reporting bias in medical research - a narrative review - PMC
    Apr 13, 2010 · The aim of this narrative review is to gain an overview of reporting bias in the medical literature, focussing on publication bias and selective ...
  5. [5]
    Study Bias - StatPearls - NCBI Bookshelf
    In academic research, bias refers to a type of systematic error that can distort measurements and/or affect investigations and their results.
  6. [6]
    Types of Bias in Research | Definition & Examples - Scribbr
    Rating 5.0 (292) Bias in research is any deviation from the truth which can cause distorted results and wrong conclusions.Information bias · Response bias · Selection bias · Cognitive bias
  7. [7]
    Biases and Confounding | Health Knowledge
    In reporting bias, individuals may selectively suppress or reveal information, for similar reasons (for example, around smoking history). Reporting bias can ...
  8. [8]
    Different Types of Bias in Research - CASP
    This is the tendency for researchers and editors to handle the reporting of experimental results that are positive (i.e., showing a significant finding) ...
  9. [9]
    Reporting Bias: Definition, Types, Examples & Mitigation - Formplus
    May 13, 2022 · Reporting bias is a type of selection bias that occurs when the results of a study are skewed due to the way they are reported.
  10. [10]
    Biases | Catalog of Bias - The Catalogue of Bias
    Arises when the variables under study are affected by the selection of hospitalized subjects leading to a bias between the exposure and the disease under study.Biases | Catalog of Bias | Page 2 · Biases | Catalog of Bias | Page 3 · Page 5
  11. [11]
    Industry Sponsorship bias | Catalog of Bias
    Industry sponsorship bias refers to the tendency of a scientific study to support the interests of the study's financial sponsor.Background · Impact · Preventive steps
  12. [12]
  13. [13]
    Reporting bias in clinical trials: Progress toward transparency and ...
    Jan 19, 2022 · Study publication bias occurs when trials showing negative or no effect are not published; outcome reporting bias occurs when authors fail to ...
  14. [14]
    Assessing risk of bias due to missing evidence in a meta-analysis
    Risk of bias in a meta-analysis result can arise when either an entire study report or a particular study result is unavailable selectively (e.g. because the P ...Inclusion of results from... · The ROB-ME tool for... · Planning the risk of bias...
  15. [15]
    The File Drawer Problem (Publication Bias)
    May 13, 2012 · Publication bias is also called the file drawer problem, especially when the nature of the bias is that studies which fail to reject the null hypothesis.
  16. [16]
    Publication bias (File Drawer Problem) - FORRT
    Feb 7, 2025 · The bias arises when the evaluation of a study's publishability disproportionately hinges on the outcome of the study.
  17. [17]
    Do Pressures to Publish Increase Scientists' Bias? An Empirical ...
    Apr 21, 2010 · Papers are less likely to be published and to be cited if they report “negative” results (results that fail to support the tested hypothesis).
  18. [18]
    How the publish-or-perish principle divides a science: the case of ...
    Dec 17, 2020 · In this paper we evaluate whether perceived work pressure (publishing, acquisition funds, teaching, administration) is associated with different attitudes ...
  19. [19]
    Modelling science trustworthiness under publish or perish pressure
    Jan 10, 2018 · This analysis suggests that trustworthiness of published science in a given field is influenced by false positive rate, and pressures for positive results.
  20. [20]
    Withholding results to address publication bias in peer-review
    Aug 9, 2025 · This distorts the evidence base, leading to false conclusions and undermining scientific progress. Central to this problem is a peer-review ...
  21. [21]
    [PDF] Trial Registration: Understanding and Preventing Reporting Bias in ...
    Publication bias and other types of reporting biases can be minimized through prospective trial registration that is now an accepted part of medical research.
  22. [22]
    How open access can reduce publication bias | The BMJ
    Apr 28, 2010 · Online open access journals frequently have little or no restrictions on space, and can consider a wider range of articles that traditional
  23. [23]
    Publication bias: What are the challenges and can they be overcome?
    In addition, it has been suggested that open-access journals with relatively high publication charges might introduce a new bias (e-publication bias).Missing: traditional | Show results with:traditional
  24. [24]
    Complaints on 'Publish or perish' from 1990 by the well-known ...
    Apr 7, 2025 · Complaints on 'Publish or perish' from 1990 by the well-known psycologist Meehl. His general complaints on scientific practices in "Why ...
  25. [25]
    Reporting of Adverse Events in Published and Unpublished Studies ...
    Published studies report much less adverse event information (46%) than unpublished (95%), with a median of 64% of adverse events missed in published studies.
  26. [26]
    Selective Publication of Antidepressant Trials and Its Influence on ...
    Jan 17, 2008 · Selective publication of clinical trials, and the outcomes within those trials, can lead to unrealistic estimates of drug effectiveness and alter the apparent ...
  27. [27]
    Association of the FDA Amendment Act with trial registration ... - NIH
    Jul 18, 2017 · Selective clinical trial publication and outcome reporting has the potential to bias the medical literature. The 2007 Food and Drug ...
  28. [28]
    Reporting Bias in Drug Trials Submitted to the Food and Drug ...
    Nov 25, 2008 · In this study, the researchers test the hypothesis that not all the trial results in NDAs are published in medical journals.
  29. [29]
    Half of all clinical trials have never reported results - AllTrials
    Aug 20, 2015 · The estimate that around 50% of trials have never published results comes from a large systematic review of publication bias funded by the NHS in 2010.
  30. [30]
    Drug Company Used Ghostwriters to Write Work Bylined by ...
    Dec 2, 2010 · Newly released documents show how medical ghostwriters--paid for by a UK drug company--penned material published in medical journals and even a textbook.
  31. [31]
    The Haunting of Medical Journals: How Ghostwriting Sold “HRT”
    Sep 7, 2010 · Given the growing evidence that ghostwriting has been used to promote HT and other highly promoted drugs, the medical profession must take steps ...
  32. [32]
  33. [33]
    [PDF] The "File Drawer Problem" and Tolerance for Null Results
    The "File Drawer Problem" and Tolerance for Null Results. Robert Rosenthal, Harvard University. 1979, Vol. 86, No. 3, 638-641. For any given research area, one ...
  34. [34]
    Trials with nonsignificant results 11 times less likely to be published
    Sep 16, 2025 · Head and neck cancer trials with no statistically significant results got published 11 times less often than those with significant findings.
  35. [35]
    Meta-analyses in psychology overestimate effects | Royal Society
    Jul 5, 2023 · However, publication bias—the preferential publishing of statistically significant studies—often causes meta-analyses to overestimate mean ...
  36. [36]
    Does publication bias explain the divergent findings on menopausal ...
    May 4, 2021 · Of 16 prospective studies on this subject, 15 found decreased relative risks of CHD among women using HT compared to nonusers, supporting a ...
  37. [37]
    Operating characteristics of a rank correlation test for publication bias
    Operating characteristics of a rank correlation test for publication bias. Biometrics. 1994 Dec;50(4):1088-101. Authors: C B Begg, M Mazumdar.
  38. [38]
    Effect of the Statistical Significance of Results on the Time to ...
    Positive trials were submitted for publication significantly more rapidly after completion than were negative trials (median, 1.0 vs 1.6 years; P=.001) and were ...
  39. [39]
    Time-Lag Bias in Trials of Pediatric Antidepressants - PMC - NIH
    It is well established in the general medical literature that trials with negative findings are published less frequently than their positive counterparts.
  40. [40]
    Time to publication for results of clinical trials - PubMed
    Nov 27, 2024 · The median time to publication was approximately 4.8 years from the enrolment of the first trial participant and 2.1 years from the trial completion date.
  41. [41]
    Time to publication for results of clinical trials - PubMed
    Apr 18, 2007 · Objectives: To study the extent to which time to publication of a clinical trial is influenced by the significance of its result.
  42. [42]
    Risk of publication bias in therapeutic interventions for COVID-19
    This article describes publication bias, its most frequent causes, its characteristics, the regulatory tools to avoid it, and some statistical techniques to ...
  43. [43]
    Different Patterns of Duplicate Publication: An Analysis of Articles ...
    Feb 25, 2004 · The prevalence of covert duplicate articles (without a cross-reference to the main article) was 5.3% (65/1234). Of the duplicates, 34 (33%) were ...
  44. [44]
    Duplicate publication of articles used in meta-analysis in Korea
    Apr 9, 2014 · Duplicate publication can result in an inappropriate weighting of the study results. The purpose of our study was to assess the incidence and ...
  45. [45]
    Misconduct Policies in High-Impact Biomedical Journals | PLOS One
    Data showing a prevalence of duplicate publications of 8.5% in otolaryngology journals, many published within 12 months of the first article, prompted ...
  46. [46]
    Duplicate or Redundant Publication: Can We Afford It?
    The recommendations of the ICMJE on this point are clear. In practice, however, redundant publication is still considered a minor offense and, in fact ...
  47. [47]
    [PDF] uniform requirements for manuscripts submitted to biomedical journals
    If redundant or duplicate publication is attempted or occurs without such notification, authors should expect editorial action to be taken. At the least, prompt ...
  48. [48]
    Recommendations | Overlapping Publications - ICMJE
    ICMJE recommends no simultaneous submissions, no duplicate publication without clear reference, and that authors disclose work already reported in other ...
  49. [49]
    A bibliometric analysis of geographic disparities in the authorship of ...
    Dec 11, 2023 · In this retrospective study, we examine possible bias related to the geographical location of the conducted research to the number of ...
  50. [50]
    Under-representation of low and middle income countries (LMIC) in ...
    Jul 4, 2023 · We aimed to quantify the contribution of LMIC in high impact medical journals and compare the results with the previous survey conducted in 2000.
  51. [51]
    Under-representation of developing countries in the research literature
    Oct 4, 2004 · Under-representation of developing countries in the research literature: ethical issues arising from a survey of five leading medical journals.
  52. [52]
    Who is telling the story? A systematic review of authorship for ...
    Oct 18, 2019 · The proportion of studies with laboratory tests performed in Africa ranged from 68% to 90% across the six diseases under study.
  53. [53]
    Scientific citations favor positive results: a systematic review and ...
    Our meta-analyses show that positive articles are cited about two times more often than negative ones. Our results suggest that citations are mostly ...
  54. [54]
    Unpacking the Matthew effect in citations - ScienceDirect.com
    This paper reviews the role of citations in science and decomposes the Matthew effect in citations into three components: networking, prestige, and ...
  55. [55]
    Gendered citation patterns among the scientific elite - PNAS
    Sep 26, 2022 · We identify gender disparities in the patterns of peer citations and show that these differences are strong enough to accurately predict the scholar's gender.
  56. [56]
    [PDF] Climate Change Denial Books and Conservative Think Tanks
    Feb 22, 2013 · It appears that at least 90% of denial books do not undergo peer review, allowing authors or editors to recycle scientifically unfounded claims ...
  57. [57]
    Unveiling the ethical void: Bias in reference citations and its ... - PMC
    Jul 26, 2024 · Citation bias receives scant attention in discussions of ethics. However, inaccurate citation may lead to significant distortions in scientific understanding.
  58. [58]
    Exclusion of the non-English-speaking world from the scientific ...
    In some fields, such as the natural and social sciences, over 95% of the papers are published in English (Liu, 2017). In addition, nearly 80% of all indexed ...
  59. [59]
    Assessment of Language and Indexing Biases Among Chinese ...
    May 28, 2020 · This cohort study evaluates the existence of language and indexing biases among Chinese-sponsored randomized clinical trials on drug ...
  60. [60]
    The prevalence of and factors associated with inclusion of non ...
    Aug 23, 2018 · A review of 50 meta-analyses found that including non-English studies influenced effect estimates in more than half of the meta-analyses: in ...
  61. [61]
    Ignoring non‐English‐language studies may bias ecological meta ...
    May 29, 2020 · Ignoring non-English-language literature may bias outcomes of ecological meta-analyses, due to systematic differences in effect sizes ...
  62. [62]
    Brazilian medicinal plants to treat upper respiratory tract and ... - PMC
    No systematic review has evaluated Brazilian medicinal plants (BMP) to treat upper respiratory tract and bronchial illness (URTI).
  63. [63]
    Traditional Japanese Kampo Medicine: Clinical Research between ...
    The Japanese traditional herbal medicine, Kampo, has gradually reemerged and 148 different formulations (mainly herbal extracts) can be prescribed.
  64. [64]
    "No Language Restrictions" in Database Searches: What Does This ...
    The aim of this study was to investigate the coverage of non-English journals by MEDLINE® and EMBASE, the two major biomedical databases used for ...
  65. [65]
    Excluding non-English publications from evidence-syntheses did not ...
    For 38 of the 40 outcomes, the exclusion of non-English studies did not markedly alter the size or direction of effect estimates or statistical significance. In ...
  66. [66]
    Language bias | Catalog of Bias - The Catalogue of Bias
    Reading and using only English language research could provide a biased assessment of a topic, and can lead to biased results in systematic reviews.
  67. [67]
    The Hidden Bias of Science's Universal Language - The Atlantic
    Aug 21, 2015 · The vast majority of scientific papers today are published in English. What gets lost when other languages get left out?
  68. [68]
    Extinction of Indigenous languages leads to loss of exclusive ...
    Sep 20, 2021 · The extinction of Indigenous languages equates to a loss of traditional knowledge about medicinal plants, which could reduce chances for the discovery of ...
  69. [69]
    Outcome reporting bias | Catalog of Bias - The Catalogue of Bias
    Outcome reporting bias is the selective reporting of pre-specified outcomes in clinical trials, which can compromise trial validity.
  70. [70]
    Outcome Reporting Bias in Trials - the ORBIT website
    However, outcome reporting bias, which has been defined as the selection for publication of a subset of the original recorded outcome variables based on the ...
  71. [71]
    Systematic Review of the Empirical Evidence of Study Publication ...
    We review and summarise the evidence from a series of cohort studies that have assessed study publication bias and outcome reporting bias in randomised ...
  72. [72]
    Outcome reporting bias in trials: a methodological approach ... - NIH
    Sep 28, 2018 · Bias arises when trialists select outcome results for publication based on knowledge of the results. Hutton and Williamson first defined outcome ...
  73. [73]
    Definitions, measurement, and reporting of progression-free survival ...
    Over the past decade, the primary outcome in evaluating oncology medicines has shifted from overall survival (OS) to progression-free survival (PFS), since PFS ...
  74. [74]
    Outcome reporting bias in trials: a methodological approach for ...
    Sep 28, 2018 · Empirical evidence suggests that outcome reporting bias is a threat to the validity of the evidence base and contributes to research waste. We ...
  75. [75]
    Failing the Public Health — Rofecoxib, Merck, and the FDA
    On September 30, 2004, after more than 80 million patients had taken this medicine and annual sales had topped $2.5 billion, the company withdrew the drug ...
  76. [76]
    Comparison of Registered and Published Primary Outcomes in ...
    Sep 2, 2009 · Among articles with trials adequately registered, 31% (46 of 147) showed some evidence of discrepancies between the outcomes registered and the ...
  77. [77]
    Reporting bias in clinical trials: Progress toward transparency ... - NIH
    Jan 19, 2022 · Reporting bias in clinical trials occurs when outcomes affect publication, including study publication bias (negative results not published) ...
  78. [78]
    Diffusion of Innovations in Service Organizations: Systematic ...
    If there is dedicated and ongoing funding for its implementation, the innovation is more likely to be implemented and routinized (for strong direct evidence, ...
  79. [79]
    How can we improve the translational landscape for a faster cure of ...
    One reason for this is that different incentives drive industry, academia, and funding bodies. These communities therefore lack common goals and often ...
  80. [80]
    Lost in translation: the valley of death across preclinical and clinical ...
    Nov 18, 2019 · A rift that has opened up between basic research (bench) and clinical research and patients (bed) who need their new treatments, diagnostics and prevention.
  81. [81]
    Is it time to drop the 'knowledge translation' metaphor? A critical ...
    We conclude that research should move beyond a narrow focus on the 'know–do gap' to cover a richer agenda, including: (a) the situation-specific practical ...
  82. [82]
    Bias in meta-analysis detected by a simple, graphical test - The BMJ
    Sep 13, 1997 · A simple analysis of funnel plots provides a useful test for the likely presence of bias in meta-analyses.
  83. [83]
    Conducting Meta-Analyses in R with the metafor Package
    Aug 5, 2010 · The metafor package in R allows for fitting meta-analytic models, including moderators, meta-regression, Mantel-Haenszel, Peto's method, plots, ...
  84. [84]
    An Evaluation of Publication Bias in High-Impact Orthopaedic
    Of the studies that assessed publication bias, 31.9% demonstrated evidence of publication bias. Only 43% and 22% of studies that involved use of the PRISMA ...
  85. [85]
    Trim and Fill: A Simple Funnel-Plot–Based Method of Testing and ...
    We show that they provide effective and relatively powerful tests for evaluating the existence of such publication bias. After adjusting for missing studies, we ...
  86. [86]
    A Fail-Safe N for Effect Size in Meta-Analysis - Robert G. Orwin, 1983
    Rosenthal's (1979) concept of fail-safe N has thus far been applied to probability levels exclusively. This note introduces a fail-safe N for effect size. (A minimal sketch of the fail-safe N idea and Egger's test follows the reference list.)
  87. [87]
    Recommendations | Clinical Trials - ICMJE
    ICMJE requires clinical trials to be registered in a public registry before first enrollment, and trials after 2019 must include a data sharing plan in ...
  88. [88]
    ICMJE Recommendations ("The Uniform Requirements")
    The ICMJE Recommendations are for the conduct, reporting, editing, and publication of scholarly work in medical journals, and are freely available on the ICMJE ...
  89. [89]
    CONSORT 2025 Statement: updated guideline for reporting ...
    Apr 18, 2025 · CONSORT 2025 Statement: updated guideline for reporting randomised trials. Reporting guidelines for main study types.
  90. [90]
    CONSORT 2025 explanation and elaboration: updated guideline for ...
    Apr 14, 2025 · The CONSORT (Consolidated Standards of Reporting Trials) statement aims to improve the quality of reporting and provides a minimum set of items to be included ...
  91. [91]
    Best Practices in Research Reporting | PLOS One
    This policy on inclusivity in global research aims to improve transparency in the reporting of research performed outside of researchers' own country or ...
  92. [92]
    Clinical trial results posting in EudraCT mandatory for sponsors
    Jun 19, 2014 · For any interventional clinical trials that ended on or after 21 July 2014, sponsors will have to post results within six or twelve months ...
  93. [93]
    [PDF] Civil Money Penalties Relating to the ClinicalTrials.gov Data Bank
    Civil penalties may be assessed for failing to submit required clinical trial info, submitting false info, or failing to submit/knowingly submitting false ...
  94. [94]
    Clinical trial registration was associated with lower risk of bias ... - NIH
    Jan 23, 2022 · Prospective clinical trial registration was associated with low risks of selection bias due to inadequate allocation concealment, performance bias, and ...
  95. [95]
    Preregistration - Center for Open Science
    The issue: All preregistered analysis plans must be reported. Selective reporting undermines diagnosticity of reported statistical inferences. Possible response ...
  96. [96]
    Full article: Encouraging pre-registration of research studies
    Mar 8, 2019 · Pre-registration can help address publication bias by ensuring experimental analyses are open and transparent. It does not prevent post hoc ...
  97. [97]
    Do Pre-Registration and Pre-analysis Plans Reduce p-Hacking and ...
    We investigate whether these tools reduce the extent of p-hacking and publication bias by collecting and studying the universe of test statistics.
  98. [98]
    [PDF] trials registered. All results reported. - AllTrials
    All trials registered. All results reported. September 2013. The AllTrials campaign calls for all past and present clinical trials to be registered and their ...
  99. [99]
    PRISMA statement
    The main PRISMA reporting guideline (PRISMA 2020) primarily provides guidance for the reporting of systematic reviews evaluating the effects of interventions.
  100. [100]
    Conflict of interest disclosure in biomedical research
    May 3, 2016 · For researchers, the disclosure of conflicts of interest may exacerbate biases in the presentation of research by creating the impetus to ...
  101. [101]
    Ensuring the quality and specificity of preregistrations - PMC
    Researchers face many, often seemingly arbitrary choices in formulating hypotheses, designing protocols, collecting data, analyzing data, and reporting results.
  102. [102]
    Open Science Badges
    Open Science Badges enhance openness by incentivizing data sharing and signaling content availability and accessibility. They increase data sharing rates.
  103. [103]
    Merck Manipulated the Science about the Drug Vioxx
    Oct 12, 2017 · In fact, Vioxx has since been found to significantly increase cardiovascular risk, leading Merck to withdraw the product from the market in 2004 ...
  104. [104]
    [PDF] Testimony of David J. Graham, MD, MPH, November 18, 2004
    Nov 18, 2004 · At this meeting a senior manager from ODS labeled our Vioxx study “a scientific rumor.” Eight days later, Merck pulled Vioxx from the market ...
  105. [105]
    Vioxx (rofecoxib) Questions and Answers - FDA
    Apr 6, 2016 · The new study shows that Vioxx may cause an increased risk in cardiovascular events such as heart attack and strokes during chronic use. 7. What ...
  106. [106]
    Tamoxifen Therapy for Breast Cancer and Endometrial Cancer Risk
    Reports of endometrial cancers diagnosed among women receiving tamoxifen therapy for breast cancer began to appear in the literature as early as 1985 ( 9 ).
  107. [107]
    Moving Towards Transparency of Clinical Trials - PMC - NIH
    As new policies promote transparency of clinical trials through registries and results databases, further issues arise and require examination.
  108. [108]
    Effect of reporting bias on meta-analyses of drug trials - The BMJ
    Jan 3, 2012 · We hypothesised that inclusion of unpublished data in meta-analyses would decrease drugs' efficacy and increase their harms compared with meta- ...
  109. [109]
    Reporting bias in medical research - a narrative review - Trials
    Apr 13, 2010 · The aim of this narrative review is to gain an overview of reporting bias in the medical literature, focussing on publication bias and selective outcome ...
  110. [110]
    GRADE guidelines: 5. Rating the quality of evidence--publication bias
    In GRADE, publication bias can lower evidence quality, even if individual studies have low bias. It's suspected with small, commercially funded studies, and ...
  111. [111]
    Investigating the impact of trial retractions on the healthcare ...
    Apr 23, 2025 · Retracted trials have a substantial impact on the evidence ecosystem, including evidence synthesis, clinical practice guidelines, and evidence based clinical ...
  112. [112]
    Publication and related biases in health services research
    Jun 1, 2020 · Delay in publication arising from the direction or strength of the study findings, referred to as time lag bias, was assessed in one of the ...
  113. [113]
    Cognitive Bias and Public Health Policy During the COVID-19 ...
    Jun 29, 2020 · This Viewpoint reviews common cognitive biases that led health centers and the public to favor patient- over population health–oriented ...
  114. [114]
    Comparing Cochrane Authors' and ChatGPT's Risk of Bias ...
    Aug 31, 2025 · This study shows that ChatGPT-4o can perform risk of bias assessments using RoB 2 with fair to moderate agreement with human reviewers.
  115. [115]
    Confronting the Mirror: Reflecting on Our Biases Through AI in ...
    Sep 24, 2024 · AI models that predict patient outcomes may inherit biases if the data used reflects historical inequalities in treatment or access to care.
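
Entries [82], [85], [33], and [86] above name concrete diagnostics for publication bias that readers can compute themselves. The Python sketch below illustrates two of them under simplifying assumptions: Egger's test as an ordinary least-squares regression of standardized effects on precision, and Rosenthal's original probability-level fail-safe N (Orwin's entry [86] extends the idea to effect sizes, and trim-and-fill [85] is omitted). The effect sizes and standard errors are invented for illustration; a real analysis should use an established package such as metafor [83].

# Minimal sketch of two publication-bias diagnostics; illustrative only.
import numpy as np
from scipy import stats

def egger_test(effects, std_errors):
    """Egger's regression test: a non-zero intercept when regressing
    standardized effects on precision suggests funnel-plot asymmetry."""
    z = np.asarray(effects) / np.asarray(std_errors)  # standardized effects
    precision = 1.0 / np.asarray(std_errors)
    X = np.column_stack([np.ones_like(precision), precision])
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    k = len(z)
    resid = z - X @ beta
    s2 = resid @ resid / (k - 2)                      # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)
    t_stat = beta[0] / np.sqrt(cov[0, 0])             # H0: intercept = 0
    p_value = 2 * stats.t.sf(abs(t_stat), df=k - 2)
    return beta[0], p_value

def fail_safe_n(z_scores, alpha=0.05):
    """Rosenthal's fail-safe N: number of unpublished null studies needed
    to raise the combined one-tailed Stouffer p-value above alpha."""
    z = np.asarray(z_scores)
    k = len(z)
    z_crit = stats.norm.isf(alpha)    # one-tailed, as in Rosenthal (1979)
    return max(0.0, z.sum() ** 2 / z_crit ** 2 - k)

# Hypothetical data: five studies with effect sizes and standard errors.
effects = np.array([0.51, 0.40, 0.62, 0.35, 0.28])
se = np.array([0.10, 0.15, 0.22, 0.25, 0.30])
intercept, p = egger_test(effects, se)
print(f"Egger intercept: {intercept:.2f} (p = {p:.3f})")
print(f"Fail-safe N: {fail_safe_n(effects / se):.0f}")

Both functions deliberately mirror the original formulations rather than later refinements; with only a handful of studies such tests have little power, which is one reason systematic reviews pair them with visual inspection of the funnel plot itself [82].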