
Impact factor

The journal impact factor (JIF), also known simply as the impact factor, is a bibliometric metric calculated annually by Clarivate that quantifies the average number of citations received in a given year by citable items—such as research articles and reviews—that a scholarly journal published during the two preceding years. Introduced as a tool for journal selection in abstracting and indexing services, the JIF has become a standard proxy for assessing journal prestige within academic fields, though its application extends to evaluating researchers, departments, and funding decisions despite methodological limitations. Devised by Eugene Garfield, founder of the Institute for Scientific Information (ISI), the concept of the impact factor emerged in the mid-1950s amid efforts to develop citation-based tools for navigating scientific literature; Garfield first proposed it in a 1955 Science article as a means to rank journals by citation frequency. The formula computes, for a given year y, the ratio of citations in y to recent journal content (from y-1 and y-2) over the number of citable publications in those prior years, with journal self-citations monitored and, in extreme cases, penalized under Clarivate's policies. First published in ISI's Journal Citation Reports in 1975, the metric gained traction with the growth of Web of Science indexing, but its two-year window favors fast-citing disciplines like biomedicine over slower fields such as mathematics. While the JIF enables rough comparisons of journal influence within similar domains, it faces substantial criticism for conflating journal-level aggregates with individual article quality, encouraging citation inflation through practices like excessive self-citation or coordinated citation stacking, and distorting research priorities toward high-impact outlets at the expense of innovative or interdisciplinary work. Empirical studies reveal that the JIF correlates imperfectly with actual citation distributions: a minority of articles drive most citations, rendering the average misleading as a quality indicator. In response to these flaws, initiatives like the Declaration on Research Assessment (DORA) advocate decoupling evaluations from the JIF, emphasizing that overreliance perpetuates systemic biases in academic hiring and promotion, often prioritizing metrics over substantive contributions.

History

Origins and invention

The concept of the impact factor originated with Eugene Garfield's proposal in 1955 to use citation counts as a quantitative measure of journal influence, articulated in his article "Citation Indexes for Science: A New Dimension in Documentation through Association of Ideas," published in Science. Garfield envisioned citation indexing not only for tracing idea associations but also for evaluating journals' relative importance based on how frequently their articles were referenced by subsequent publications, addressing the limitations of subjective assessments in an era of burgeoning scientific output. This idea laid the groundwork for the Science Citation Index (SCI), which Garfield's Institute for Scientific Information (ISI) launched in 1964 as the first comprehensive tool for systematic citation tracking across thousands of journals. The SCI enabled the practical computation of citation-based metrics by compiling references from peer-reviewed articles, initially covering approximately 600 journals selected through preliminary impact assessments developed by Garfield and his collaborator Irving H. Sher in the early 1960s. The invention responded to the post-World War II explosion in scientific publishing, in which annual journal output surged from thousands to tens of thousands of titles, complicating library acquisitions, researchers' literature searches, and decisions about which periodicals merited indexing or subscription amid limited resources. Garfield argued that citation frequency provided an objective proxy for a journal's utility and prestige, reducing reliance on reputational heuristics or circulation figures that often favored established but less impactful outlets.

Early adoption and standardization

The Journal Citation Reports (JCR), introduced in 1975 by the Institute for Scientific Information (ISI, now part of Clarivate), marked the initial standardization of journal impact factors by compiling citation data from the Science Citation Index (SCI) to calculate and rank impact metrics for covered journals using 1974 citations. This annual publication formalized impact factors as a quantifiable measure of journal citation influence, initially serving librarians and publishers in evaluating journal value for subscriptions and indexing amid the post-World War II expansion of scholarly publishing. The JCR aggregated citations to recent citable items, providing transparent rankings that shifted journal evaluation from subjective reputation to empirical citation patterns. Following its launch, the JCR integrated impact factors into early scientific workflows, particularly for journal selection and inter-journal comparison within the natural sciences, as ISI's SCI database covered over 2,000 journals by the mid-1970s. By the late 1970s, the inclusion of data from the Social Sciences Citation Index (SSCI, initiated in 1973) extended standardized impact factor calculations to approximately 1,400 additional journals, broadening applicability beyond the physical and life sciences while relying on ISI's unified citation indexing framework. This expansion facilitated cross-disciplinary benchmarking, though humanities coverage remained limited to the separate Arts & Humanities Citation Index, without routine impact factors until much later. As academic output surged in the 1980s, the JCR's impact factors gained traction in institutional decisions, including collection development for libraries facing budget constraints and a proliferation of serials that exceeded 100,000 titles globally by 1990. Their adoption standardized journal assessment in library workflows, reducing reliance on anecdotal judgment and enabling data-driven choices in an era of increasing specialization, though initial use emphasized practical utility over formal academic assessments.

Key developments and expansions

The proliferation of online bibliographic databases in the 1990s, such as the transition of the Science Citation Index to digital formats, facilitated greater access to scientific literature and produced a marked increase in overall citation volumes, which in turn elevated journal impact factors across disciplines. This digital shift coincided with the emergence of open-access initiatives and expanded indexing, contributing to higher citation rates but also prompting early concerns about inflationary pressures on the metric. By the mid-1990s, average impact factors had begun trending upward, reflecting broader dissemination of research but complicating cross-temporal comparisons. To mitigate challenges in comparing journals across disparate fields with varying citation norms, the Journal Citation Reports (JCR) emphasized subject category assignments, enabling within-discipline rankings and normalization of impact factors. These categories, refined over successive editions, grouped journals by topical alignment, such as partitioning multidisciplinary outlets, to prevent apples-to-oranges evaluations between high-citation fields like biomedicine and lower-citation fields like mathematics. This structural adaptation addressed inherent cross-disciplinary biases in raw impact factor values, promoting more equitable assessments for journal selection and evaluation. In the early 2000s, amid growing scrutiny over potential misapplications, Eugene Garfield, the metric's originator, reiterated its intended role as a journal-level tool rather than a measure of individual articles or careers, defending it against charges of oversimplification while acknowledging limitations like self-citation inflation. Concurrently, impact factors gained traction in national research evaluations, such as the UK's Research Assessment Exercise (RAE), where they served as supplementary indicators alongside peer review in exercises from 1996 onward, influencing institutional funding allocations despite debates over their supplementary status. These integrations highlighted the metric's expanding influence, though they also intensified discussions about its robustness for policy decisions.

Calculation

Core methodology and formula

The journal impact factor (JIF) quantifies how often the average citable item in a journal is cited, using a precise formula based on recent citations. For year y, it is calculated as the total number of citations in y to citable items published in the journal during y-1 and y-2, divided by the total number of such citable items published in those two years:

\text{JIF}_y = \frac{\text{Citations in } y \text{ to items from } y-1 \text{ and } y-2}{\text{Citable items published in } y-1 \text{ and } y-2}

This two-year window prioritizes citations to recent substantive content, reflecting short-term influence rather than long-term legacy. Citable items are restricted to full-length, peer-reviewed articles and reviews, which are deemed capable of attracting meaningful scholarly attention; non-citable formats such as editorials, short communications, letters, meeting abstracts, and news notes are excluded from the denominator to avoid diluting the metric with low-citation-potential material. Clarivate performs the computation annually, drawing exclusively from citation and publication data indexed in the Web of Science Core Collection, ensuring consistency across journals. The resulting JIF values are released mid-year via the Journal Citation Reports (JCR); for instance, the 2023 JIF—based on 2023 citations to citable items from 2021 and 2022—was published in June 2024. An illustrative calculation for a high-impact journal: 74,090 citations in one year to items published in the two preceding years, divided by 1,782 citable items, yields a JIF of 41.577 for that year, highlighting how large citation volumes relative to publication output drive elevated scores.
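A minimal sketch of this calculation in Python, with function and variable names chosen here purely for illustration:

```python
def journal_impact_factor(citations_to_window: int, citable_items: int) -> float:
    """Two-year JIF for year y.

    citations_to_window: citations received in year y by items the journal
        published in years y-1 and y-2.
    citable_items: articles and reviews published in y-1 and y-2
        (editorials, letters, and news items are excluded).
    """
    if citable_items == 0:
        raise ValueError("no citable items in the two-year window")
    return citations_to_window / citable_items

# The illustrative figures above: 74,090 citations over 1,782 citable items.
print(round(journal_impact_factor(74_090, 1_782), 3))  # 41.577
```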

Data sources and eligibility criteria

The impact factor relies exclusively on citation data sourced from Clarivate's Web of Science Core Collection, which indexes approximately 21,000 peer-reviewed journals across the sciences, social sciences, arts, and humanities as of 2023. This database serves as the foundational repository for the Journal Citation Reports (JCR), ensuring that only resolved citations—typically linked via DOIs or parsed bibliographic references—are counted toward the numerator of the impact factor formula. Non-resolved or unparsable references from non-indexed sources are excluded to maintain accuracy and verifiability. Journals qualify for inclusion in the Web of Science Core Collection through a rigorous selection process emphasizing basic publishing standards, such as regular and timely issuance, adherence to international ethical guidelines (e.g., COPE standards), and provision of English-language abstracts and keywords for global accessibility. Editorial content must demonstrate rigor, with articles exhibiting original scholarly value and relevance to the journal's scope, excluding proceedings papers or non-substantive items unless they meet citable document criteria (primarily research articles and reviews). Selection further prioritizes editorial quality and international diversity, requiring that a majority of editorial board members possess verifiable publication records in Web of Science-indexed journals and exhibit networks spanning multiple countries to mitigate regional biases. Geographic representation among authors, editors, and cited works is assessed to ensure that the journal's influence extends beyond national boundaries, with undue concentration in self-referential patterns (e.g., journal self-citations exceeding 20-30% thresholds in monitored cases) potentially leading to exclusion or suppression from JCR rankings. Citation analysis during evaluation verifies the journal's external impact, confirming that references are drawn from a diverse pool of high-quality sources rather than promotional stacking or isolated networks. Only journals meeting these multifaceted criteria contribute to impact factor computations, with ongoing re-evaluation to uphold standards against practices like coercive citation requests.

Recent methodological updates

In response to evolving publishing practices, Clarivate incorporated Early Access (online-ahead-of-print) articles into the Journal Impact Factor (JIF) calculation starting with the 2021 release of the Journal Citation Reports (JCR), with full implementation reflected in subsequent years including 2022, to better capture contemporary citation dynamics and reduce discrepancies between print and digital publication timelines. This adjustment addressed concerns that counting citations to early-access content while omitting those items from the denominator artificially inflated metrics for journals relying on rapid online dissemination, thereby enhancing the accuracy of the two-year window. A significant integrity-focused refinement occurred in the 2025 JCR release (based on 2024 data), in which Clarivate announced the exclusion of citations to and from retracted articles in the JIF numerator, while retaining retracted items in the denominator to preserve transparency and accountability in publication counts. This change, effective from May 2025, responds to documented increases in retractions and aims to prevent citations involving retracted work from distorting journal-level impact assessments, without altering the core formula's structure. Clarivate has also implemented ongoing procedural safeguards for journal title changes, mergers, and splits, requiring unification of metrics across the affected titles for up to two years post-event to deter strategic manipulations that could artificially boost or suppress JIF values. These policies, refined on the basis of historical data from journal transitions, ensure continuity in evaluation while maintaining methodological consistency across evolving publication landscapes.
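The retraction policy amounts to a filter on the numerator only. A minimal sketch under assumed data structures (the citation pairs and set of retracted IDs are hypothetical stand-ins for Clarivate's internal records):

```python
def adjusted_jif(citations, citable_items, retracted_ids):
    """Drop citations to or from retracted articles from the numerator,
    while retracted items still count in the denominator.

    citations: iterable of (citing_id, cited_id) pairs falling in the
        two-year window; citable_items: IDs of items published in the window.
    """
    numerator = sum(
        1
        for citing, cited in citations
        if citing not in retracted_ids and cited not in retracted_ids
    )
    # Retracted items are deliberately NOT removed from the count below.
    return numerator / len(citable_items)
```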

Applications

Journal ranking and selection

Impact factors enable the categorization of journals into quartiles (Q1–Q4) within specific subject areas, where Q1 denotes the top 25% by impact factor, Q2 the next 25%, Q3 the following quartile, and Q4 the bottom 25%. This system, derived from Journal Citation Reports data, facilitates relative prestige assessments by normalizing for field-specific citation practices, allowing selectors to identify high-performing outlets among thousands of titles. For instance, in the multidisciplinary sciences, journals like Nature achieve Q1 status with impact factors exceeding 50, serving as aspirational targets for manuscript submissions due to their visibility and citation potential. Libraries and universities incorporate impact factors into serials reviews to evaluate subscription viability, often prioritizing Q1 and Q2 journals for renewal while scrutinizing lower-quartile titles against budget constraints and usage metrics. In such processes, high-impact benchmarks guide decisions; CA: A Cancer Journal for Clinicians recorded an impact factor of 503.1 in 2024, exemplifying elite journals retained for their influence despite costs. However, cancellations typically balance impact data with local circulation—low-impact journals with strong usage may persist, whereas underused high-impact ones face cuts during fiscal pressures, as seen in post-2008 economic analyses. Funders and institutions increasingly apply impact factor thresholds in steering open-access transitions, favoring high-quartile journals for transformative agreements that shift spending from subscriptions to article-processing charges. For researchers, these metrics inform submission strategies, with high-impact outlets prioritized for enhanced dissemination, though selectors caution against sole reliance on impact factors due to their aggregation biases across disciplines.
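A quartile assignment can be reproduced from a journal's rank within its category; a sketch following the percentile convention Z = rank / total (so a journal ranked 78 of 314 has Z ≈ 0.248 and lands in Q1):

```python
def jcr_quartile(rank: int, category_size: int) -> str:
    """Map an impact-factor rank within a subject category to Q1-Q4."""
    z = rank / category_size
    if z <= 0.25:
        return "Q1"
    if z <= 0.50:
        return "Q2"
    if z <= 0.75:
        return "Q3"
    return "Q4"

print(jcr_quartile(78, 314))  # Q1
```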

Academic evaluation and career advancement

In academic evaluations, journal impact factors (JIFs) are frequently employed as proxies for research quality, influencing hiring, promotion, and tenure decisions by signaling the perceived quality and reach of a researcher's work. Institutions often require candidates to include metrics such as the average JIF across their publications in tenure dossiers, treating it as shorthand for scholarly impact and productivity. A 2019 analysis of review, promotion, and tenure (RPT) documents from 128 North American institutions revealed that JIFs were referenced in 40% of cases, predominantly to denote journal quality, impact, or prestige, with 87% of those institutions incorporating it into tenure assessments. This practice extends to hiring, where JIFs appear on CVs and in selection criteria to gauge a candidate's fit with institutional standards. National differences in JIF reliance are pronounced, with heavier emphasis in Asian systems compared to more moderated approaches in Europe. In China, the National Natural Science Foundation of China (NSFC) has long prioritized publications in high-JIF, Science Citation Index (SCI)-indexed journals for grant evaluations and career progression, a policy rooted in reforms from the 1990s that tied funding and promotions to JIF thresholds as indicators of international competitiveness. This has incentivized researchers to target high-JIF outlets for eligibility in NSFC competitions, where average JIF metrics contribute to scoring project proposals and researcher appraisals. In Europe, adoption of the San Francisco Declaration on Research Assessment (DORA) since 2012, endorsed by bodies like Science Europe in 2022, has prompted shifts toward qualitative peer review over JIFs in evaluations, reducing their weight in hiring and promotions at signatory universities while still acknowledging journal prestige contextually. Supporting JIF use in researcher assessments, empirical data from biomedical fields show moderate positive correlations between journal impact factors and independent quality evaluations. For example, a study of physician ratings of clinical journals found a strong association with JIFs (r² = 0.82, p = 0.001), suggesting alignment between citation-based metrics and expert judgments of quality and rigor. In grant contexts, such as NSFC panels, higher average JIFs of prior outputs correlate with success rates, as they proxy for anticipated dissemination and peer validation, though direct causation remains mediated by field-specific norms. These correlations, typically ranging from 0.4 to 0.6 in peer-reviewed biomedical assessments, underpin JIFs' role in prioritizing candidates with demonstrated publication trajectories in influential venues.

Funding, policy, and institutional decision-making

Performance-based research funding systems (PRFS), implemented in more than a dozen countries including the United Kingdom and Australia, allocate government research grants to universities and institutions based on evaluated outputs such as publications weighted by journal prestige, where impact factors serve as a key bibliometric proxy for quality. These systems typically assign points or scores to publications in high-impact-factor journals to compute institutional performance metrics, influencing annual block funding distributions that can total billions in public expenditure; for instance, Australia's system directs approximately AUD 2 billion yearly through such evaluations. Despite methodological critiques, impact factors enable scalable comparisons across disciplines, shaping national priorities toward fields whose journals exhibit higher average citation rates. In China, institutional and policy incentives historically tied research funding and bonuses to journal impact factors to boost global competitiveness, with universities disbursing cash rewards scaled to impact factor thresholds—sometimes exceeding $100,000 USD per paper in top-tier journals like Nature or Science—drawing from government-supported budgets. This approach, prevalent through the 2010s, correlated with a surge in Chinese-authored high-impact publications, from under 1% of global Nature/Science papers in 2000 to over 15% by 2016, though it prompted concerns over quality dilution and a 2020-2023 policy shift prohibiting such monetary rewards in evaluations. University-level decision-making similarly leverages impact factors for internal resource allocation, such as seed grants or lab funding, prioritizing principal investigators with track records in high-factor journals to maximize institutional prestige and attract external sponsorship. Panels at agencies like the U.S. National Institutes of Health and National Science Foundation, while formally scoring on merit criteria excluding direct metrics, informally reference publication venues' impact factors to gauge project feasibility and investigator productivity during peer review. This practice persists amid declarations like DORA advocating restraint, as high-impact outputs signal potential for broader dissemination and follow-on funding.

Benefits

Standardization and comparability

The impact factor serves as a standardized metric calculated annually using a uniform formula across approximately 22,000 journals included in Clarivate's Web of Science, providing a consistent benchmark for evaluating journal influence irrespective of publication size or discipline. This approach applies the same citation-to-publication ratio—dividing the number of citations in a given year by the citable items published in the preceding two years—to all eligible titles, ensuring methodological consistency that minimizes variability in assessment criteria. By converting disparate citation patterns into a single numerical value, the impact factor diminishes subjectivity in journal evaluations, replacing ad hoc qualitative judgments with data-driven rankings that stakeholders can replicate and verify. This enables direct cross-journal comparisons, such as ranking titles within or across categories based on relative citation efficiency, which supports informed decision-making in publishing ecosystems. Annual releases of impact factors also permit temporal comparisons, allowing observation of trends such as a journal's rising or declining reception over successive years, as evidenced by multi-year time series in citation databases. For example, libraries routinely incorporate impact factors into subscription renewal processes, canceling lower-impact titles to redirect funds toward higher-cited ones amid fiscal pressures, thereby aligning collections with demonstrated scholarly usage. This linkage of journal standing to empirically observable citation flows enforces accountability, as prestige accrues from verifiable patterns of scholarly engagement rather than unquantified reputation alone.

Empirical correlations with research quality

Empirical analyses have identified moderate positive correlations between journal impact factors (IFs) and indicators of research quality, including peer-assessed journal prestige and article-level citation rates. A 2017 systematic review and meta-analysis of studies comparing IFs to perception-based rankings by experts reported a pooled correlation coefficient indicating a moderate positive relationship, suggesting that higher IFs align with scholarly evaluations of journal quality across disciplines. In clinical medicine, for instance, the correlation between IFs and mean peer ratings of journals reached 0.67 (p < 0.01), reflecting consistent alignment within that field. Higher IFs also associate with more substantive peer review processes that contribute to output quality. An analysis of 10,000 peer review reports from 1,644 biomedical journals found that reviews for higher-IF journals (up to 74.7) contained a greater proportion of comments on study methods (51.8% in the top-IF group versus 40.4% in the lowest), indicating more thorough scrutiny of methodological rigor compared to lower-IF outlets. This selective filtering likely elevates article validity, as evidenced by meta-epidemiological data from 2,459 study results across 446 Cochrane reviews, in which higher IFs predicted results closer to pooled "truth" estimates, with relative deviation decreasing by 0.023 per IF unit (95% CI: -0.32 to -0.21; p < 0.001). Regarding citation-based influence, articles published in higher-IF journals systematically garner more citations, even after accounting for self-selection, due to both pre-publication selection of impactful work and post-publication prestige effects. Causal inference studies confirm that placement in high-IF journals augments citations beyond what an article's inherent qualities alone would attract, with within-field analyses preserving IF as a predictor of relative performance. These patterns hold despite field variations, underscoring IFs' utility in signaling prospective scientific influence and validity.

Incentives for scientific rigor and dissemination

High-impact journals, motivated by the need to sustain elevated impact factors, implement more stringent peer review processes that emphasize methodological scrutiny. A 2023 analysis of over 1,000 peer review reports from biomedical journals found a positive correlation between journal impact factor and review thoroughness, with higher-impact outlets providing more detailed feedback on study methods (e.g., an average of 1.5 additional methodological comments per report compared to lower-impact journals) while de-emphasizing novelty. This selective pressure elevates overall publication standards by incentivizing authors to produce robust, error-minimized work to meet acceptance thresholds. Empirical data indicate that this rigor translates to fewer retractions attributable to errors in high-impact venues. A comprehensive analysis of over 2,000 retracted publications revealed a strong negative correlation (r = -0.91, p < 0.0001) between journal impact factor and retraction rates for honest errors, such as analytical mistakes, contrasting with no such pattern for misconduct-driven retractions. By prioritizing verifiable quality over volume, impact factors thus discourage the submission and acceptance of flawed work, fostering a publication ecosystem in which only high-utility findings advance. Impact factors also promote broader dissemination practices, as journals adapt policies to maximize citations and visibility. For instance, PLOS ONE's impact factor reached an early peak of 4.4 in 2010 before settling near 4.0 by 2013, reflecting an open-access model that expanded readership and citation potential across disciplines and encouraged inclusive criteria favoring methodologically sound work over perceived novelty. Similarly, mandates for data sharing in high-impact outlets correlate with citation uplifts; studies show articles with publicly shared datasets receive up to 69% more citations, independent of journal prestige, prompting editors to incentivize such transparency to bolster their metrics. At a systemic level, impact factors allocate scarce academic attention toward meritorious outputs, enabling subsequent research to build on validated, high-diffusion findings. This merit-aligned mechanism ensures that resources—such as funding and collaboration opportunities—flow preferentially to impactful work, reinforcing a cycle of rigorous inquiry and knowledge accumulation without relying on subjective assessments.

Criticisms

Field-specific and scale biases

The impact factor's reliance on arithmetic means renders it particularly susceptible to skewness in citation distributions, which are empirically observed to follow highly right-skewed, log-normal, or power-law patterns across journals and fields. In such distributions, a small fraction of highly cited articles disproportionately influences the mean, with studies showing that approximately 15-20% of papers often account for 50% or more of a journal's citations, thereby inflating the impact factor beyond the typical article's performance. This skewness undermines the metric's representativeness, as medians or other robust measures would yield substantially lower values, highlighting a scale bias in which outliers distort journal-level assessments. Field-specific citation norms exacerbate these scale issues, with disciplines exhibiting divergent citation rates and half-lives that preclude valid cross-field comparisons using raw impact factors. For instance, biomedical journals typically exhibit higher factors (often 5-10 or more for leading titles), driven by rapid publication cycles and frequent referencing practices, whereas mathematics and some physics journals fall below 2, reflecting sparser and more deliberate citation habits. These disparities arise from inherent differences in reference densities—e.g., biomedical papers citing 30-50 references versus 10-20 in mathematics or engineering—rather than quality variances, making normalized or field-adjusted metrics necessary for equitable evaluation. In the humanities and social sciences, the impact factor's applicability is further limited by slower citation accumulation cycles, lower overall citation rates, and a reliance on non-journal formats like monographs, which fall outside Web of Science indexing. Top humanities journals, when indexed, rarely exceed impact factors of 1-2 due to regionally focused audiences and delayed peer influence, contrasting sharply with the sciences' global, fast-paced citation environments. Consequently, applying unadjusted impact factors across fields distorts prestige rankings and resource allocation, as evidenced by the systematic underrepresentation of the humanities in high-impact-factor databases.
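A hedged illustration of why the mean outruns the median under such skew, using an arbitrary log-normal sample rather than real journal data:

```python
import random

random.seed(0)
# Simulated per-article citation counts; parameters are arbitrary and
# chosen only to produce the right-skew described above.
citations = sorted(int(random.lognormvariate(1.0, 1.2)) for _ in range(500))

mean = sum(citations) / len(citations)
median = citations[len(citations) // 2]
top_decile_share = sum(citations[-50:]) / max(sum(citations), 1)

print(f"mean={mean:.2f} median={median} top-10% share={top_decile_share:.0%}")
# The mean (the impact factor's statistic) lands well above the median,
# and the top decile of articles carries a disproportionate citation share.
```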

Manipulation and editorial distortions

Journals have engaged in excessive self-citation practices to artificially elevate their impact factors, in which authors affiliated with the journal disproportionately cite its prior publications, inflating the citation numerator. Clarivate Analytics suppresses impact factors for journals exhibiting self-citation rates that distort performance metrics, such as when self-citations exceed 20-30% of total citations, far above the norm for high-quality journals. In the 2018 JCR release, 14 journals were denied impact factors due to elevated self-citation levels, demonstrating coordinated efforts among editorial boards and authors to boost rankings. Similarly, in 2025, 20 journals lost their impact factors for excessive self-citation or citation stacking, in which clusters of papers mutually cite one another to amplify counts. Publishers strategically favor review articles over original research to manipulate impact factors, as reviews garner higher citation rates—often 2-5 times more than primary articles—while contributing equally to the denominator of citable items. This selective emphasis increases the numerator without proportionally expanding the publication base, a tactic observed in fields where journals shifted toward review-heavy issues in the early 2000s to climb rankings. Such distortions undermine the metric's validity, as the inflated factor reflects editorial curation rather than broad scholarly influence. Coerced citations and citation farms further enable manipulation, with editors pressuring submitting authors to include superfluous references to the journal's past works, a practice deemed inappropriate by 85% of surveyed researchers yet complied with by nearly 60% to secure publication. Citation farms involve organized groups in which a small set of authors generate disproportionate citations—sometimes over 50% from mutual referencing—to prop up impact factors, evident in audits revealing that 4.2% of journals in 2010 derived more than half their impact factor numerator from self-citations. These tactics, documented in business and social sciences journals, highlight systemic incentives for editorial interference over merit-based evaluation.
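The self-citation rate that triggers these reviews is a simple share of incoming citations; a sketch using hypothetical journal names:

```python
def self_citation_rate(citing_sources: list[str], journal: str) -> float:
    """Fraction of a journal's incoming citations that it issued itself,
    one list entry per citation received in the JCR year."""
    if not citing_sources:
        return 0.0
    return sum(1 for src in citing_sources if src == journal) / len(citing_sources)

# 35 of 100 incoming citations originate from the journal itself: 35%,
# well above the 20-30% band that draws scrutiny.
sources = ["J. Example"] * 35 + ["Other Journal"] * 65
print(f"{self_citation_rate(sources, 'J. Example'):.0%}")
```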

Misapplication to individuals and systemic harms

The journal impact factor (JIF), as a journal-level mean, cannot reliably proxy the citation performance of individual articles due to the highly skewed distribution of citations within journals, where a small fraction of papers typically garners the majority of citations. For instance, analyses of citation data across disciplines show that the top 10% of papers often account for 50% or more of a journal's total citations, following power-law or log-normal distributions that render averages misleading for assessing any single publication. This skewness implies that many articles in high-JIF journals receive few or no citations, undermining the validity of attributing journal-level prestige to personal outputs without direct evaluation of content or impact. Extrapolating the JIF to individuals perpetuates errors in hiring, promotion, and tenure decisions, where researchers are judged by the venues of their publications rather than the content or verifiable influence of their work. Scientometric experts consistently argue against this extrapolation, noting that it conflates journal metrics with personal merit, leading to undervaluation of substantive contributions in lower-JIF outlets or overvaluation of marginal papers in elite ones. Such misapplication distorts career trajectories, favoring quantity and venue prestige over rigor or reliability, as evidenced by institutional reliance on JIF thresholds despite their disconnect from article-level performance. At a systemic level, the "publish or perish" imperative amplified by JIF chasing incentivizes practices like p-hacking—manipulating data or analyses until they cross statistical significance—and salami slicing, where single studies are fragmented into minimal publishable units to inflate output counts for high-JIF submissions. Surveys of biomedical researchers reveal that nearly 75% perceive a reproducibility crisis, attributing it to pressures prioritizing novel, positive results in prestigious journals over robust, replicable methods, which discourages preregistration or null findings. This culture fosters suppression of negative results, as high-JIF outlets selectively publish confirmatory outcomes, exacerbating the replication crisis through selective reporting and reduced scientific self-correction. Consequently, resources and talent shift toward gaming metrics rather than advancing knowledge, eroding trust in published findings across fields.

Responses and reforms

Academic declarations and guidelines

The San Francisco Declaration on Research Assessment (DORA), initiated in 2012 at the American Society for Cell Biology meeting, calls for evaluating research based on its intrinsic qualities rather than journal-level metrics like the impact factor. It explicitly recommends against using journal impact factors as surrogates for the quality of individual articles or scientists' contributions in hiring, promotion, or funding decisions, advocating instead for assessments that consider the content of publications, other research outputs such as datasets and software, and broader societal impacts. As of 2025, DORA has been signed by over 26,000 individuals and organizations across 168 countries, including funders, institutions, and publishers. Complementing DORA, the Leiden Manifesto, published in 2015, outlines ten principles for the responsible use of quantitative metrics in research evaluation. These principles emphasize that metrics should support, not replace, qualitative expert judgment; be contextualized against specific institutional missions and disciplines; and avoid simplistic rankings that ignore variability in citation practices across fields. Key tenets include keeping data collection and analysis transparent to allow scrutiny, recognizing the distinct contributions of scholars beyond citations (such as mentoring or policy influence), and ensuring metrics align with diverse research goals rather than prioritizing publication counts or journal prestige. Despite these declarations, empirical evidence indicates limited systemic adoption, with journal impact factors remaining prevalent in academic evaluations. One analysis of review, promotion, and tenure documents from 99 institutions worldwide found the Journal Impact Factor referenced in 81% of cases, often as a proxy for quality. Similarly, studies post-DORA show persistent correlations between journal rankings and peer assessments, suggesting that entrenched reliance on such metrics endures amid institutional inertia and a lack of enforceable alternatives. This ongoing use has prompted calls for stronger implementation mechanisms, though declarations alone have not curtailed the impact factor's dominance in funding and career decisions.

Publisher and database adaptations

Clarivate, administrator of the Journal Citation Reports (JCR), introduced the Journal Citation Indicator in May 2021 as a field-normalized complement to the traditional Journal Impact Factor, calculating average citation impact relative to a global baseline where a score of 1.0 denotes performance at the field average across categories. This adaptation addresses scale and discipline biases by enabling cross-field comparisons without raw citation-volume disparities. To curb self-citation excesses, Clarivate maintains thresholds triggering journal reviews and potential suppression from the JCR; for example, in the 2025 release, 20 journals lost Impact Factors due to elevated self-citation rates exceeding 25% or citation stacking patterns. Beginning with the 2025 JCR (incorporating 2024 data), citations to or from retracted articles are fully excluded from the Impact Factor numerator, preventing artificial inflation from invalid references and reinforcing metric integrity against post-publication retractions. Publishers have responded with policy enhancements promoting transparency in practices that could affect Impact Factors. Nature, for instance, expanded transparent peer review to all articles by June 2025, publishing reviewer comments and author responses alongside accepted papers to foster accountability and reduce selective incentives tied to citation metrics. Springer Nature's editorial policies similarly prioritize independence from Impact Factor pressures, mandating disclosure of data, materials, and code to support reproducible research over metric optimization. These steps, alongside Clarivate's exclusions, have sustained the metric's use by systematically flagging and mitigating distortions, as reflected in persistent but targeted JCR suppressions.

Policy shifts in evaluation practices

Numerous academic institutions and funding agencies have revised evaluation criteria to curtail the prominence of journal impact factors (IFs), favoring multifaceted reviews that prioritize research content, investigator expertise, and societal relevance over proxy metrics. This trend accelerated following widespread adoption of the Declaration on Research Assessment (DORA) in 2012, which explicitly advises against employing journal-based metrics like IFs as surrogates for individual article or researcher quality. By 2025, over 2,500 organizations worldwide, including universities, funders, and publishers, had signed DORA, committing to policy reforms that integrate qualitative peer assessments and diverse indicators. Prominent funding bodies exemplify these changes. The European Research Council (ERC), a major grant provider, announced in July 2021 its decision to eliminate IFs and analogous metrics from evaluations, shifting emphasis to the intrinsic merit of proposals via panels assessing scientific excellence and feasibility. Similarly, the U.S. National Institutes of Health (NIH) has maintained review frameworks since the early 2000s that exclude bibliometric scores like IFs, focusing instead on five core criteria: significance, investigator qualifications, innovation, approach, and environment; in the 2020s, the NIH implemented a simplified review framework for applications due from January 2025 onward, further streamlining assessment to highlight rigorous, high-potential science without metric quantification. At the institutional level, universities have enacted parallel prohibitions. Utrecht University in the Netherlands discontinued IF usage in hiring, promotion, and tenure decisions around 2021, substituting evaluations of open science contributions and qualitative impact reviews. Other DORA signatories, such as those in the UK and elsewhere in Europe, have integrated similar guidelines into faculty handbooks, mandating narrative assessments over numerical thresholds. Outcomes of these reforms show variability. Policies have demonstrably curbed overt IF gaming in compliant settings by decoupling career advancement from journal prestige, yet empirical analyses indicate persistent reliance in competitive fields like biomedicine, where informal metric considerations endure despite formal bans, potentially slowing holistic adoption. Longitudinal studies remain limited, but early indicators suggest an enhanced focus on reproducible research amid reduced publication-volume pressures in reformed institutions.

Alternatives

Other journal-level metrics

Several journal-level metrics have emerged as complements or alternatives to the Journal Impact Factor (JIF), aiming to mitigate its vulnerabilities such as susceptibility to citation outliers, lack of prestige weighting, and insufficient normalization for field-specific citation practices. These metrics, derived primarily from large databases like Scopus, incorporate iterative algorithms or normalization techniques to provide a more nuanced assessment of a journal's influence within its scholarly network. The SCImago Journal Rank (SJR) quantifies the prestige of a journal by modeling citations as transfers of scientific influence, akin to an eigenvector centrality measure similar to Google's PageRank. Developed using Scopus data, SJR calculates a journal's score iteratively: each citation's value is weighted by the prestige of the citing journal, with the process repeated until convergence, typically over three years of citations to prior publications. This approach addresses the JIF's equal treatment of all citations by diminishing the influence of self-citations and elevating those from high-prestige sources, thereby better capturing a journal's standing in the global citation network rather than raw volume. SJR values are dimensionless, with higher scores indicating greater relative prestige; for instance, journals like CA: A Cancer Journal for Clinicians consistently rank high due to citations from influential peers. The Source Normalized Impact per Paper (SNIP) provides field-normalized impact by dividing a journal's raw citation rate by the aggregate citation potential of its subject category. Computed from Scopus data, SNIP uses citations in the current year to publications from the prior three years, dividing the journal's raw impact per paper (RIP) by the citation potential of its field, which accounts for varying publication volumes and citation densities across disciplines. Unlike the JIF, which favors fields with naturally higher citation rates like biomedicine, SNIP penalizes low-citation fields less by rewarding contextual performance and inherently downweights uncited papers through its per-paper averaging. This makes SNIP particularly useful for cross-disciplinary comparisons, with values above 1 indicating above-average impact in the field; it was introduced by the Centre for Science and Technology Studies (CWTS) at Leiden University in 2010 to promote fairer evaluations. The h-index adapted for journals measures the balance between a journal's productivity and its citation endurance, defined as the largest number h such that at least h articles published in the journal have each received at least h citations. Typically calculated over all-time citations using databases like Google Scholar or Scopus, the journal h-index resists distortion from a handful of highly cited outliers that inflate JIF means, instead emphasizing consistent impact across a core set of papers. For example, a journal with h=50 has 50 papers cited at least 50 times each, providing a robust indicator less prone to manipulation via selective citing practices. Originally proposed by Jorge Hirsch in 2005 for authors, its journal application gained traction for evaluations where median-like robustness is preferred over arithmetic averages, though it accumulates over time and may undervalue newer journals.
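A minimal sketch of the journal h-index computation, applied to a hypothetical list of per-article citation counts:

```python
def h_index(citation_counts: list[int]) -> int:
    """Largest h such that at least h items have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Four papers with at least 4 citations each, but not five with at least 5.
print(h_index([100, 60, 50, 50, 3, 2]))  # 4
```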

Article-level and author-centric measures

Article-level metrics evaluate the influence of individual publications by normalizing citation counts against field-specific expectations, circumventing the aggregation inherent in journal impact factors. Author-centric metrics aggregate these assessments to gauge a researcher's sustained contributions, emphasizing consistency alongside citation thresholds rather than recency-biased averages. These approaches prioritize direct evidence of influence on subsequent work, though they remain susceptible to practices like self-citation or field-specific citation-rate variations. The Relative Citation Ratio (RCR), introduced by the U.S. National Institutes of Health (NIH) in 2016, computes an article's scientific influence as the ratio of its actual citations per year to the expected citations for publications in the same field, year, and type. For instance, an RCR exceeding 1 signifies above-average impact relative to NIH-funded peers, with percentiles enabling cross-field comparisons; the median RCR across NIH grants was approximately 0.99 in analyses of over 900,000 publications from 2009–2013. Unlike raw citation counts, the RCR adjusts for publication age and discipline-specific norms, such as lower citation rates in mathematics versus biomedicine, but requires comprehensive databases like PubMed for accuracy. It has been integrated into NIH's iCite tool for portfolio analysis, revealing that R01-funded papers average RCRs around 1.1. For authors, the h-index, proposed by physicist Jorge E. Hirsch in a 2005 Proceedings of the National Academy of Sciences paper, defines h as the maximum value for which a researcher has h papers each receiving at least h citations. This metric resists inflation from a few highly cited outliers, correlating empirically with career recognition like Nobel Prizes in physics, where laureates averaged h around 40–50 by award year. However, it disadvantages early-career researchers due to accumulation time and fields with sparse citations, prompting variants like the time-normalized h. The g-index extends the h-index by identifying the largest g such that the researcher's top g articles collectively amass at least g² citations, amplifying credit for highly influential works while incorporating lower-cited ones marginally; a sketch follows after this paragraph. Developed by Leo Egghe in 2006, it yields higher values than h—for example, an h of 20 might correspond to a g of 30 if citations skew heavily—better capturing the uneven impact distributions observed in empirical studies of publication sets. Like the h-index, g-index values rise with career length but offer a more nuanced view of global citation performance. Altmetrics track alternative indicators of engagement, including social media shares, downloads, and policy citations, aggregated into scores like the Altmetric Attention Score, which weights sources by reach, counting news outlets more heavily than individual social media posts. Introduced around 2011, these metrics reveal rapid dissemination—for a sample of biomedical articles, social media mentions correlated modestly (r=0.2–0.3) with eventual citations but highlighted forms of engagement absent from citation data. They complement citations by evidencing societal reach, as in cases where datasets garner code forks or blog posts spark discussions, yet risks include gaming via bots or conflating visibility with validity, with scores varying widely by discipline and publicity.
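A companion sketch of the g-index on the same hypothetical citation list used above, showing how cumulative citations let heavily cited outliers raise g above h:

```python
def g_index(citation_counts: list[int]) -> int:
    """Largest g such that the top g papers together have >= g**2 citations."""
    total, g = 0, 0
    for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
        total += cites
        if total >= rank * rank:
            g = rank
    return g

papers = [100, 60, 50, 50, 3, 2]
print(g_index(papers))  # 6, versus an h-index of 4 for the same set
```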

Emerging multidimensional assessments

The CWTS Leiden Ranking employs field-normalized bibliometric indicators to evaluate university-level scientific performance, incorporating metrics such as the mean normalized citation score (MNCS), which adjusts for differences in citation practices across disciplines, alongside publication counts and collaboration breadth, to provide a balanced view of research output and influence. This approach counters biases inherent in unnormalized counts by benchmarking against global field averages, enabling comparisons of institutional impact without favoring high-citation fields like biomedicine over others. The 2024 edition covers over 1,500 universities, emphasizing normalized scores that reflect contributions to scientific advancement rather than raw volume. Dimensions badges integrate citation data with altmetrics to visualize multifaceted article-level impact, displaying not only citation tallies but also online mentions, policy citations, and media attention, which capture societal and practical reach beyond academic echo chambers. Introduced in 2018 and refined thereafter, these badges embed in publisher platforms to offer immediate, interactive overviews, blending quantitative scholarly metrics with qualitative diffusion indicators for a holistic snapshot. ORCID integrations further enable this by linking persistent researcher identifiers to diverse data sources, facilitating aggregated profiles that combine bibliometric scores, funding records, and collaboration networks for author-centric evaluations. Such linkages, as implemented in systems like Stanford's centralized researcher profiles since 2024, support comprehensive assessments by verifying affiliations and outputs, reducing duplication errors in multidimensional scoring. Emerging frameworks extend these integrations toward predictive and contextual depth; for instance, one proposed model assesses research across responsiveness to societal needs, accessibility of outputs, and reflexivity in methods, among other dimensions, applied in institutional self-evaluations to incorporate societal and long-term effects. Similarly, the 2025 AACSB Global Research Impact Framework advocates principles for evaluating impact through scholarly, educational, societal, and economic dimensions, prioritizing qualitative narratives and testimonials over isolated metrics to align with real-world causal chains. These tools address the impact factor's temporal myopia by forecasting long-term trajectories via normalized baselines and AI-assisted analysis of citation trajectories, though empirical validation remains nascent.
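A minimal sketch of the MNCS idea, with hypothetical field baselines standing in for the world-average citations per field, year, and document type:

```python
def mncs(article_citations: list[int], field_baselines: list[float]) -> float:
    """Mean Normalized Citation Score: average of each article's citations
    divided by the expected (world-average) citations for its field."""
    ratios = [cites / base for cites, base in zip(article_citations, field_baselines)]
    return sum(ratios) / len(ratios)

# Two articles at twice their field averages and one at half:
print(mncs([20, 8, 2], [10.0, 4.0, 4.0]))  # (2.0 + 2.0 + 0.5) / 3 = 1.5
```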

References

  1. [1]
    The Clarivate Impact Factor
    Thus, the impact factor of a journal is calculated by dividing the number of current year citations to the source items published in that journal during the ...
  2. [2]
    The History and Meaning of the Journal Impact Factor - JAMA Network
    Jan 4, 2006 · I first mentioned the idea of an impact factor in Science in 1955. With support from the National Institutes of Health, the experimental ...
  3. [3]
    [PDF] The History and Meaning of the Journal Impact Factor
    Jan 4, 2006 · The History and Meaning of the Journal Impact Factor. Eugene Garfield, PhD. IFIRST MENTIONED THE IDEA OF AN IMPACT FACTOR IN. Science in 1955.1 ...
  4. [4]
    A closer look at the Journal Impact Factor numerator - Clarivate
    Apr 26, 2017 · The Journal Impact Factor is defined as citations to the journal in the current year to items in the previous two years divided by the count ...
  5. [5]
    The Impact Factor Fallacy - PMC - NIH
    Aug 20, 2018 · The impact factor fallacy is using JIF to measure quality, based on false beliefs, and it's an inaccurate estimate of citations and easy to ...
  6. [6]
    Should we ditch impact factors? - PMC - NIH
    The impact factor is a pointless waste of time, energy, and money, and a powerful driver of perverse behaviours in people who should know better.
  7. [7]
    Read the Declaration | DORA
    Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an ...Missing: Europe | Show results with:Europe
  8. [8]
    [PDF] Garfield, E. "Citation Indexes for Science
    60, 371 (1954). Citation Indexes for Science. A New Dimension in Documentation through Association of Ideas. Eugene Garfield. "The uncritical citation of ...
  9. [9]
    [PDF] The History and Meaning of the Journal Impact Factor
    Sep 16, 2005 · The History and Meaning of the Journal Impact Factor. Presented by. Eugene Garfield. Chairman Emeritus, Thomson ISI. 3501 Market Street ...
  10. [10]
    Science Citation Index Anniversary
    The Science Citation Index (SCI) first became commercially available in 1964 as a five-volume print edition of indexed scientific work. Dr. Eugene Garfield's ...
  11. [11]
    the history and the meaning of the Journal Impact Factor - Clarivate
    In the paper, Garfield discusses how he and Dr. Irving H. Sher came up with the journal impact factor, its proper and improper uses, citation analysis, ...
  12. [12]
    Learning from history: Understanding the origin of the JCR - Clarivate
    May 23, 2018 · By 1975 ISI was preparing the 11th release of the Science Citation Index (SCI). ... The Journal Impact Factor (JIF) takes the absolute citation ...
  13. [13]
    Celebrating 50 years of Journal Citation Reports | Clarivate
    Jun 18, 2025 · This year marks a significant milestone as we celebrate 50 years: 1975 marked the first year for the JCR, introducing the Journal Impact Factor ...Missing: ISI history<|separator|>
  14. [14]
  15. [15]
    Journal Citation Reports | Clarivate
    The Journal Impact Factor is a single journal-level measurement that should not be considered in isolation. We continue to advise that the JIF is a useful ...Journal Impact Factor (JIF) · Search · Publishers · Find journals for your research
  16. [16]
  17. [17]
    [PDF] The weakening relationship between the Impact Factor and papers ...
    The relationship between Impact Factor and citations was strong in the 20th century, but has been decreasing since 1990 due to the digital age, where papers ...
  18. [18]
    Evolution of number of citations per article in Materials Science
    Nov 7, 2023 · The aim of this work is to show the coincidence between the overall increase in impact factor and the appearance of online databases and open ...
  19. [19]
    Trends in the Impact Factor of Scientific Journals
    The median value (50th percentile) of the 1999 impact factors (0.76) was the impact factor of the 47th percentile of the 2004 distribution. Of note, the most ...Missing: key developments
  20. [20]
    Journal Citation Reports - PMC - NIH
    Apr 1, 2019 · The Journal Citation Report is one of the only sources of citation data on journals covering the areas of science and medicine, technology, and social sciences.Missing: ISI | Show results with:ISI
  21. [21]
    JCR Terminology - Journal Citation Reports - The Impact Factor and ...
    Apr 4, 2024 · Journal Ranking: The Journal Ranking table shows the ranking of the current journal in its subject categories based on the journal Impact Factor ...
  22. [22]
    Metrics or Peer Review? Evaluating the 2001 UK Research ...
    Jan 1, 2009 · Impact Factor: 1.8 / 5-Year Impact Factor: 2.6 · Journal ... Evaluating the 2001 UK Research Assessment Exercise in Political Science.<|separator|>
  23. [23]
    the United Kingdom's research assessment exercise - PMC
    Why journals' impact factors should not be used to measure quality of publications*. Impact factors can conceal large differences in citation rates for ...
  24. [24]
    Re: UK Research Assessment Exercise (RAE) review
    Nov 20, 2002 · I consider the impact factor (IF) properly used as a valid measure in comparing of journals; I also consider the IF properly used as a
  25. [25]
    The Journal Impact Factor Denominator: Defining Citable (Counted ...
    Sep 9, 2009 · The impact factor is calculated by considering all citations in 1 year to a journal's content published in the prior 2 years, divided by the ...Missing: formula two-
  26. [26]
    Journal evaluation process and selection criteria - Clarivate
    Content significance. The content in the journal should be of interest, importance, and value to its intended readership and to Web of Science subscribers.
  27. [27]
    Journal Citation Reports: Document Types Included in the Impact ...
    The Impact Factor is calculated by dividing the number of citations in the Journal Citation Reports year (the numerator) by the total number of citable items ...
  28. [28]
    Using the Clarivate Impact Factor
    The Clarivate Impact Factor, as explained in the last essay 1 , is one of the evaluation tools provided by Clarivate Journal Citation Reports (JCR).
  29. [29]
    Journal self-citation in the Journal Citation Reports – Science ...
    For each journal, the self-citation rate is defined as the number of journal self-citations expressed as a percentage of the total citations to the journal in ...
  30. [30]
    [PDF] Journal Selection Process for Web of Science
    1. Basic Journal Publishing Standards. 2. Editorial Content. 3. International Diversity. 4. Citation Analysis. Web of Science Journal ...
  31. [31]
    What's next for JCR: defining 'Early Access' | Clarivate
    Nov 24, 2020 · In this article, we provide further details on how we define Early Access content and how it will be introduced into the 2021 JCR release.
  32. [32]
    ISJ editorial: Addressing the implications of recent developments in ...
    Jan 4, 2023 · In 2020, the calculation was slightly modified to include within the denominator the count of articles appearing in Early View (i.e. articles ...
  33. [33]
    Understanding the Decline of Impact Factors in INFORMS Journals ...
    Sep 5, 2024 · This change meant that early-access content was included in the numerator of the 2020 impact factor calculation. Under the phased-rollout ...
  34. [34]
    JCR 2025: Excluding Retraction Citations to Reinforce Integrity
    May 15, 2025 · From 2025 (2024 data), citations to and from retracted articles will no longer contribute towards the Journal Impact Factor.
  35. [35]
    Clarivate to stop counting citations to retracted articles in journals ...
    May 15, 2025 · Clarivate will no longer include citations to and from retracted papers when calculating journal impact factors, the company announced today.
  36. [36]
    Citations of retracted articles will no longer count towards impact factor
    Oct 3, 2025 · However, retracted articles will still be included in the article count (JIF denominator), maintaining transparency and accountability.” 1.
  37. [37]
    Full article: Metrics, integrity, and editorial responsibility
    Aug 6, 2025 · Beginning with the 2025 Journal Citation Reports, citations to and from retracted or withdrawn articles will be excluded from the Journal Impact ...
  38. [38]
    Content collection and indexing process | Clarivate
    A journal may undergo various changes such as title changes, title mergers and splits, publication frequency (e.g., monthly to quarterly), or publisher changes ...
  39. [39]
    How is the Impact Factor calculated for a journal whose name ...
    Sep 9, 2014 · According to JCR rules, when a journal changes its name, only the old journal title is assigned an IF during that year, and not the new journal title.
  40. [40]
    Guide to Journal Rankings: What are Quartiles – Q1, Q2, Q3 & Q4 ...
    Mar 19, 2024 · Quartiles categorize journals based on their Impact Factor or other metrics, with Q1 representing the top 25% and Q4 the bottom 25%. How can one ...
  41. [41]
    Articles Journal Citation Reports: Quartile rankings and other metrics
    When sorted by Impact Factor, if a journal is ranked 78 out of 314 in a category, Z = 78/314 = 0.248, which places it in Q1 (the top 25%).
  42. [42]
    Q1 to Q4: Understanding Journal Quartiles for Selection - Enago
    Feb 20, 2025 · Q1 journals represent the top 25% of publications with the highest impact, featuring the most influential and widely cited research. Conversely, Q4 ...
  43. [43]
    Scientific Journal Rankings - SJR
    Top-ranked journals: 1. Ca-A Cancer Journal for Clinicians; 2. MMWR Recommendations and Reports (Open Access); 3. Nature Reviews Molecular Cell Biology; 4. ...
  44. [44]
    Impact factors: arbiter of excellence? - PMC - NIH
    The impact factor calculation developed by Eugene Garfield, ISI, was initially used to evaluate and select journals for listing in Current Contents. It covered ...
  45. [45]
    The Usefulness of Impact Factors in Serial Selection
    This paper investigates the usefulness of ISI Journal Impact Factors in making serial selection and deselection decisions. It shows that Impact Factors do ...
  46. [46]
    [PDF] JCR-Impact-Factor-2024
    Jul 4, 2024 · Top-ranked journals by impact factor: 1. CA-A Cancer Journal for Clinicians (CA-CANCER J CLIN, ISSN 0007-9235), 503.1; 2. Nature Reviews Drug Discovery (NAT REV DRUG DISCOV, ISSN 1474-1776), 122.7.
  47. [47]
    [PDF] THE USEFULNESS OF IMPACT FACTORS IN SERIAL SELECTION
    Conversely, only after either consultation with users or a use study should one cancel a serial with a low Impact Factor but high local use. The concept of ...
  48. [48]
    View of Factors in Science Journal Cancellation Projects
    Several surveys and case studies have identified factors that librarians take into consideration when making journal cancellation decisions. Spencer and Millson ...
  49. [49]
    The “impact” of the Journal Impact Factor in the review, tenure, and ...
    Notes that despite the JIF's widespread use, and potential misuse, in current academic evaluation systems, little is known ...
  50. [50]
    The Benefits and Drawbacks of Impact Factor for Journal Collection ...
    The author concludes that impact factor, if used appropriately and in combination with other criteria, is a valid tool that can assist journal collection ...
  51. [51]
    Use of the Journal Impact Factor in academic review, promotion, and ...
    Jul 31, 2019 · We analyzed how often and in what ways the Journal Impact Factor (JIF) is currently used in review, promotion, and tenure (RPT) documents.
  52. [52]
    How Journal Impact Factor Affects Your Career - Social Science Space
    May 8, 2019 · Appearing on journal websites, academic CVs, and in hiring decisions—it's both the most widely used and the most criticized research metric that ...
  53. [53]
    China's Research Evaluation Reform: What are the Consequences ...
    In the 1990s, China created a research evaluation system based on publications indexed in the Science Citation Index (SCI) and on the Journal Impact Factor.
  54. [54]
    Science Europe signs Declaration on Research Assessment
    It focuses on reducing the use of journal-based metrics in assessments, encouraging evaluations of the intrinsic quality of research outputs instead of the ...
  55. [55]
    The Commission signs the Agreement on Reforming Research ...
    Nov 8, 2022 · The overall goal is to reduce dependence on journal-based metrics such as journal impact measures and citations towards a culture where ...
  56. [56]
    Relationship between journal impact factor and the thoroughness ...
    Aug 29, 2023 · Our study shows that the peer reviews submitted to journals with higher Journal Impact Factor may be more thorough than those submitted to lower ...
  57. [57]
    Performance-based university research funding systems
    Performance-based research funding systems (PRFS) are national systems of research output evaluation used to distribute research funding to universities.
  58. [58]
    MLE on Performance-based Research Funding Systems (PRFS)
    Use of journal impact factors is widespread, despite the growing understanding that these are inappropriate as indicators of the quality or impact of ...
  59. [59]
    Performance-based research funding: Evidence from the largest ...
    A performance-based research funding system (PRFS) is a nationwide incentive scheme that promotes and rewards university research performance.
  60. [60]
    The Truth about China's Cash-for-Publication Policy
    Jul 12, 2017 · The first study of payments to Chinese scientists for publishing in high-impact journals has serious implications for the future of research.
  61. [61]
    Analysis of Chinese universities' financial incentives for academic ...
    Many Chinese universities offer large cash bonuses for papers published in leading academic journals. Recent research into these bonus structures can help ...
  62. [62]
    China to end the reign of the impact factor? - Compuscript
    Sep 29, 2023 · With this new policy, cash incentives to publish in high-impact journals are no longer permitted, and Chinese researchers will no longer be ...
  63. [63]
    "Journal Impact Factor" by Scott H. Church - UNL Digital Commons
    Examines the Journal Impact Factor (JIF) as a scholarly metric and its influence on grant review committees' assessments of the quality of scholars' work.
  64. [64]
    Journal Citation Reports 2024: Simplifying Evaluation | Clarivate
    Jun 20, 2024 · The 2024 release builds on this by including unified rankings for 229 science and social science categories which now include journals from the Emerging ...
  65. [65]
    Impact factor and other standardized measures of journal citation
    This paper discusses the limitations of the impact factor, with suggestions on how it can be used and how it should not be used.
  66. [66]
    Differences in impact factor across fields and over time - Althouse
    Dec 12, 2008 · Here we quantify inflation over time and differences across fields in impact factor scores and determine the sources of these differences.
  67. [67]
    [0804.3116] Differences in Impact Factor Across Fields and Over Time
    Apr 19, 2008 · Here we quantify inflation over time and differences across fields in impact factor scores and determine the sources of these differences.
  68. [68]
    Differences in Impact Factor Across Fields and Over Time
    Aug 6, 2025 · Yet journal impact factors have increased gradually over time, and moreover impact factors vary widely across academic disciplines. Here we ...
  69. [69]
    Correlation Between Perception-Based Journal Rankings and the ...
    This study, based on systematic review and meta-analysis, aimed to collect and analyze evidence on correlation between perception-based journal rankings and ...
  70. [70]
    Peer assessment of journal quality in clinical neurology - PMC - NIH
    The correlation coefficient was 0.67, indicating significant correlation between impact factors and mean journal ratings (P < 0.01).
  71. [71]
    Study results from journals with a higher impact factor are closer to ...
    Our results indicate that study results published in higher-impact journals are on average closer to truth.
  72. [72]
    Inferring the causal effect of journals on citations - MIT Press Direct
    We find that high-impact journals select articles that tend to attract more citations. At the same time, we find that high-impact journals augment the citation ...
  73. [73]
    Impact factor as a metric to assess journals where OM research is ...
    Jun 2, 2011 · The rankings of journal outlets for OM research based on impact factors are positively correlated with the quality rankings for those journals ...
  74. [74]
    Misconduct accounts for the majority of retracted scientific publications
    The relationship between journal impact factor and retraction rate was also found to vary with the cause of retraction. Journal impact factor showed a highly ...
  75. [75]
    PLoS ONE's 2010 Impact Factor - The Scholarly Kitchen
    Jun 28, 2011 · PLoS ONE received its 2010 journal impact factor today, 4.411, placing the open access journal in 12th spot among 85 Biology journals.
  76. [76]
    Does open data boost journal impact: evidence from Chinese ...
    Feb 14, 2021 · Our results show that open data has significantly increased the citations of journal articles, and articles published before the open data ...
  77. [77]
    How Related are Journal Impact and Research Impact?
    Apr 11, 2023 · Journal impact is not an indicator of the quality of individual studies published in that journal. The journal's impact “score” is retrospective ...
  78. [78]
    Skewed citation distributions and bias factors: Solutions to two core ...
    Highlights: Skewed distributions and bias factors are central problems of the journal impact factor. We suggested McCall's area transformation for the ...
  79. [79]
    Does the journal impact factor predict individual article citation rate ...
    Aug 11, 2022 · A significant proportion of publications analyzed had either zero citations (25%) or one citation (23%) during the two-year period, ...
  80. [80]
    What's wrong with the journal impact factor in 5 graphs | News - Nature
    Apr 3, 2018 · The impact factor is based on a two-year citation window, which makes it “ill-suited across all disciplines, as it covers only a small fraction of citations ...
  81. [81]
    [PDF] Differences in Impact Factor Across Fields and Over Time
    Impact factors increase over time due to more citations. Field-specific differences are due to varying citation fractions. For example, economics has a lower  ...
  82. [82]
    What Is a Good Impact Factor for an Academic Journal? - AKJournals
    First introduced in 1975 by Eugene Garfield, the impact factor was initially ... A journal's impact factor is measured by and announced in Clarivate's ...
  83. [83]
    Humanities, Citations and Currency: Hierarchies of Value and ...
    Jan 15, 2020 · Impact factors constitute but one variable within many phenomena that influence citation rates. Informed peer review and measurement are two ...
  84. [84]
    Publication Metrics: Journal impact - LibGuides at University of Sussex
    Jul 9, 2025 · These biases are particularly problematic in the social sciences and humanities, in which research is more regionally and nationally engaged.
  85. [85]
    Explanation of Missing, Dropped, or Suppressed Journals
    Journals can sometimes be missing, suppressed, or dropped from coverage in Journal Citation Reports. The reason could be as simple as the journal ceased ...
  86. [86]
    Impact Factor Denied to 20 Journals For Self-Citation, Stacking
    Jun 27, 2018 · This year, Clarivate Analytics, publishers of the Journal Citation Reports (JCR), suppressed 20 journals, 14 for high levels of self-citation and six for ...
  87. [87]
    Citation issues cost these 20 journals their impact factors this year
    Jun 18, 2025 · For the first time, the company excluded citations to retracted ... In 2024, Clarivate suppressed 17 impact factors, a substantial increase from ...
  88. [88]
    Peer review and journal impact factor: the two pillars of ... - NIH
    Review articles are heavily cited and inflate the impact factor of journals. ... One may posit that the sheer number of review articles belies their function. The ...
  89. [89]
    (PDF) Business Journals Combat Coercive Citation - ResearchGate
    ... And this coercive practice is successful: only 11% of the respondents asked to coerce avoided doing so, and most of them added the number of ...
  90. [90]
    How Much Citation Manipulation Is Acceptable?
    May 30, 2017 · In 2010, for example, 4.2% of all science journals in the JCR had greater than 50% self-cites in their Impact Factor numerator. The self- ...
  91. [91]
    Quantitative research assessment: using metrics against gamed ...
    Nov 3, 2023 · Conversely, in a citation farm, a handful of citing authors may account for > 50% or even > 80% of the citations received.
  92. [92]
    Universality of citation distributions: Toward an objective measure of ...
    The use of relative indicators is widespread, but empirical studies (19–21) have shown that distributions of article citations are very skewed, even within ...
  93. [93]
    [PDF] Skewness of citation impact data and covariates of citation ... - arXiv
    According to Seglen (1992) the skewed citation distributions can be considered as the basic probability distribution of citations which reflects “the wide range ...
  94. [94]
    Will Citation Distributions Reduce Impact Factor Abuses?
    Jul 18, 2016 · Citation distributions are skewed: every journal contains a small percentage of highly performing articles, and these outliers can distort the ...
  95. [95]
    Use of the journal impact factor for assessing individual articles
    May 14, 2020 · The journal impact factor (IF) is the most commonly used indicator for assessing scientific journals. IFs are calculated based on the Web of ...
  96. [96]
    The Misused Impact Factor | Science
    Oct 10, 2008 · One measure often used to determine the quality of a paper is the so-called “impact factor” of the journal in which it was published.
  97. [97]
    Impact factors and their significance; overrated or misused? - Nature
    Apr 9, 2005 · The journal impact factor (IF) is in widespread use for the evaluation of research and researchers, and considerable controversy surrounds it.
  98. [98]
    The misalignment of incentives in academic publishing and ... - PNAS
    The incentive to produce more papers is likely to have a larger impact on people from structurally disadvantaged backgrounds—both from the researcher side and ...
  99. [99]
    'Publish or perish' culture blamed for reproducibility crisis - Nature
    Jan 20, 2025 · Nearly three-quarters of biomedical researchers think there is a reproducibility crisis in science, according to a survey published in November.
  100. [100]
    IU researchers co-author study challenging 'publish or perish ...
    Apr 15, 2025 · A system that favors high-impact journal metrics over rigorous, reproducible science can contribute to the “replication crisis”—a situation ...
  101. [101]
    Salami publishing: Walking on thin (sl)ice - PMC - NIH
    Salami slicing refers to breaking up the results of a single research project – the sausage – into multiple papers – each representing a slice and scattering ...
  102. [102]
    San Francisco Declaration on Research Assessment (DORA)
    Help promote best practices in the assessment of scholarly research. ... 26,573 individuals and institutions have signed our declaration to date.
  103. [103]
  104. [104]
    Signers | DORA
    26,573 individuals and organizations in 168 countries have signed DORA to date.
  105. [105]
    Leiden manifesto for research Metrics - Home
    10 principles for the measurement of research performance: the Leiden Manifesto for Research Metrics published as a comment in Nature.
  106. [106]
    The Leiden Manifesto for Research Metrics
    1. Quantitative evaluation should support qualitative, expert assessment. 2. Measure performance against the research missions of the institution, group or ...
  107. [107]
    Meta-Research: Use of the Journal Impact Factor in academic ... - eLife
    Jul 31, 2019 · We analyzed how often and in what ways the Journal Impact Factor (JIF) is currently used in review, promotion, and tenure (RPT) documents.
  108. [108]
    Beyond declarations: Metrics, rankings and responsible assessment
    We show that the strong association between journal rankings and expert evaluations has not changed, despite institutional endorsements of DORA.
  109. [109]
    Introducing the Journal Citation Indicator: A new, field-normalized ...
    May 20, 2021 · In essence, the Journal Citation Indicator provides a field-normalized measure of citation impact where a value of 1.0 means that, across the ...
  110. [110]
    Journal Citation Indicator. Just Another Tool in Clarivate's Metrics ...
    May 24, 2021 · The Journal Citation Indicator provides a single journal-level metric that can be easily interpreted and compared across disciplines.
  111. [111]
    Transparent peer review to be extended to all of Nature's research ...
    Jun 16, 2025 · Since 2020, Nature has offered authors the opportunity to have their peer-review file published alongside their paper. Our colleagues at Nature ...
  112. [112]
    Editorial policies - Springer Nature
    These policies underpin our commitment as a leading research publisher to editorial independence and supporting research excellence.
  113. [113]
    ERC joins DORA and abandons Impact Factor - NFFA.eu
    Jul 21, 2021 · ERC says goodbye to the Impact Factor for evaluating candidates. On July 14th, the European Research Council (ERC) published its new plan for ...
  114. [114]
    Impact factor abandoned by Dutch university in hiring and promotion ...
    Aug 9, 2025 · The University of Utrecht has ceased to use the impact factor and is beginning to use commitment to Open Access as a criterion for employing or evaluating its ...
  115. [115]
    An Analysis of the Declaration on Research Assessment (DORA ...
    Dec 15, 2023 · Methods – The researcher developed a list of 77 UK universities that are signatories to DORA and have institutional repositories. Using ...
  116. [116]
    Games academics play and their consequences: how authorship, h ...
    Dec 4, 2019 · Games academics play and their consequences: how authorship, h-index and journal impact factors are shaping the future of academia.
  117. [117]
    [PDF] SCImago Journal Rank (SJR) indicator
    The SCImago Journal Rank (SJR) is based on the transfer of prestige from one journal to another; such prestige is transferred through the references that a ...
  118. [118]
    Methodology - CWTS Journal Indicators
    SNIP. The source normalized impact per publication, calculated as the number of citations given in the present year to publications in the past three years ...
  119. [119]
    SJR - Help
    SJR (SCImago Journal Rank): trend graph of this metric and the value obtained in the last year available; Best Quartile and ...
  120. [120]
    Measuring a journal's impact - Elsevier
    Source Normalized Impact per Paper (SNIP) is a sophisticated metric that intrinsically accounts for field-specific differences in citation practices. It does ...
  121. [121]
    h-index for journals - Research Metrics
    Sep 11, 2025 · The h-index can be calculated for journals as well as authors. The value h is the largest number where at least h articles in that publication were cited at least h times.
  122. [122]
    What is a good H-index? | Elsevier Author Services Blog
    Basically, the H-index score is a standard scholarly metric. Here you can find what is considered a Good H-index and how to get your work to that point.
  123. [123]
    Relative Citation Ratio (RCR): A New Metric That Uses ... - NIH
    We use the RCR metric here to determine the extent to which National Institutes of Health (NIH) awardees maintain high or low levels of influence on their ...
  124. [124]
    An index to quantify an individual's scientific research output - PNAS
    I propose the index h, defined as the number of papers with citation number ≥h, as a useful index to characterize the scientific output of a researcher.
  125. [125]
    Full article: Understanding the 'g-index' and the 'e-index'
    May 6, 2021 · The g-index is defined as 'the largest number such that the top g articles received together at least g² citations'. ... The citations are ...
  126. [126]
    Metrics: h and g-index - Harzing.com
    Feb 6, 2016 · The g-index is calculated based on the distribution of citations received by a given researcher's publications, such that: given a set of ...
  127. [127]
    The Altmetric Attention Score: What Does It Mean and Why Should I ...
    The Altmetric score represents a weighted count of the amount of attention for a research output from a variety of sources.
  128. [128]
    CWTS Leiden Ranking
    The CWTS Leiden Ranking 2024 provides insights into the scientific performance of over 1500 universities using advanced bibliometric indicators, with list, ...
  129. [129]
    [PDF] Bibliometrics for Research Management and Research Evaluation
    The Leiden Ranking is produced annually by CWTS based on data from Web of Science. It offers bibliometric indicators of scientific output, impact, and ...
  130. [130]
    Information - Indicators - CWTS Leiden Ranking
    The CWTS Leiden Ranking 2024 offers a sophisticated set of bibliometric indicators that provide statistics at the level of universities on scientific impact.
  131. [131]
    Showcase your impact with Dimensions Badges
    Mar 22, 2018 · The Dimensions Badges give an immediate insight into the impact an article has had through citation data and altmetrics.
  132. [132]
    Dimensions Badges: A new way to see citations - Altmetric
    Jan 26, 2018 · Dimensions badges are interactive visualizations that showcase the citation data for individual publications.
  133. [133]
    Case Study: Stanford University Integrates ORCID into a Centralized ...
    Aug 6, 2024 · Implementing ORCID at this level can be a more streamlined approach than managing multiple ORCID integrations into many local systems.
  134. [134]
    MARIA evaluation model: multidimensional method for self ...
    The MARIA method is a light-touch evaluation tool that looks at research impact from multiple dimensions, including responsiveness, accessibility, and reflexivity, and considers its ethics and sustainability.
  135. [135]
  136. [136]
    (PDF) Multidimensional impact of research: developing ...
    Oct 31, 2023 · This article introduces the impact assessment process of 23 research projects of the Capes prInt Program aimed at internationalizing Brazilian ...