
h-index

The h-index is a bibliometric indicator proposed by Jorge E. Hirsch in 2005 to quantify an individual's scientific research output by integrating measures of productivity and citation impact in a single value. It is defined as the largest number h such that a researcher has at least h papers, each cited at least h times, while the remaining papers (if any) have fewer than h citations. This metric emerged as a response to limitations in traditional indicators like total citation counts, which can be skewed by a few highly cited works, or mere publication counts, which ignore impact.

To calculate the h-index, a researcher's publications are ranked in descending order of citation counts, and h is the highest rank at which the citation number is at least equal to the rank itself—for instance, if the top five papers have at least five citations each, but the sixth has fewer than six, then h = 5. The index is computed using databases such as Web of Science, Scopus, or Google Scholar, though values may vary slightly due to differences in coverage and update frequencies. Key properties include its monotonic increase over time as new impactful papers are added, relative robustness to outliers (unlike total citations), and simplicity, making it a practical tool for comparative assessments.

Widely adopted since its inception, the h-index is used in academic hiring, promotions, tenure decisions, and institutional rankings, such as those guided by the National Assessment and Accreditation Council (NAAC) in India. It has also been adapted beyond individuals to evaluate journals (based on their article citation profiles), research groups, universities, and even national research outputs, providing a standardized way to gauge collective productivity and influence. Notable variants include the g-index (which weights highly cited papers more) and the contemporary h-index (which emphasizes recent citations), addressing some of its original constraints.

Despite its advantages—such as combining quantity and quality in one intuitive number and resisting inflation by any single highly cited paper—the h-index has notable limitations. It disadvantages early-career researchers and those in emerging or niche fields with lower citation norms, favors steady citation accumulation over true originality (potentially undervaluing groundbreaking but slowly recognized work, like Einstein's early theories), and does not account for co-authorship contributions or interdisciplinary differences. Critics argue it promotes "gaming" behaviors and should be supplemented with qualitative evaluations for a fuller picture of scholarly merit.

Background and Definition

Definition

The h-index, proposed by physicist Jorge E. Hirsch, is defined as the largest integer h such that a researcher has at least h publications, each of which has received at least h citations. This metric integrates both the productivity of a researcher, reflected in the number of publications, and their impact, gauged by citation counts, providing a single value that encapsulates these dimensions without favoring extreme outliers. Conceptually, the h-index addresses limitations in traditional bibliometric measures by balancing the sheer volume of publications against their citation-based quality, positioning it as a more equitable alternative to total citation counts—which can be skewed by a few highly cited works—or journal impact factors, which assess venues rather than individual contributions. Hirsch introduced it to better quantify a scientist's overall output in a field-independent manner, emphasizing consistent scholarly influence over isolated successes.

Key properties of the h-index include its non-decreasing nature over time, as accumulating citations can only maintain or elevate the value of h, ensuring it reflects ongoing or enduring recognition. It is robust to uncited or lowly cited publications, which fall outside the threshold and thus do not diminish the index, while also mitigating the distorting effects of highly cited outliers by requiring a core set of comparably impactful works. This design captures the breadth and sustained impact of a researcher's oeuvre, motivated by Hirsch's aim to evaluate long-term contributions rather than dependence on singular breakthroughs.

History

The h-index was proposed by Jorge E. Hirsch, a professor at the University of California, San Diego, in 2005 to provide a more balanced measure of a researcher's cumulative scientific output than traditional bibliometric indicators such as total publications or total citations, which Hirsch argued were susceptible to distortion by outliers or sheer volume without sustained impact. He first disseminated the idea through a preprint on arXiv on August 3, 2005, followed by a peer-reviewed article in the Proceedings of the National Academy of Sciences on November 15, 2005, titled "An index to quantify an individual's scientific research output." This proposal emerged during a period of expanding bibliometric applications in academic evaluations, including tenure decisions, promotions, and funding allocations, where there was growing demand for metrics that integrated both productivity and citation influence without over-relying on highly cited anomalies.

To exemplify the metric, Hirsch applied it to prominent physicists such as Edward Witten, yielding an h-index of 110 based on data from the Institute for Scientific Information (ISI) database, where 110 of Witten's papers had at least 110 citations each. The index quickly gained traction, particularly within physics owing to its initial circulation on arXiv—a platform central to that discipline—before extending to broader scientific domains as researchers recognized its simplicity and robustness across databases. By late 2008, Hirsch's original paper had been cited about 200 times, reflecting its swift integration into scientometric discourse. Key milestones in its adoption included the feasibility of computing the h-index using major citation databases by 2007, such as Web of Science and emerging tools like Scopus and Google Scholar, which facilitated widespread practical application. Concurrently, debates proliferated in scientometrics journals, with analyses extending the index to journals, topics, and other publication sets while scrutinizing its sensitivity to field-specific citation norms and long-term career stages.

Computation

Calculation Method

The h-index is computed by first compiling a list of an author's publications along with the number of citations each has received. The publications are then sorted in descending order based on their citation counts, denoted as c_1 \geq c_2 \geq \cdots \geq c_n, where n is the total number of publications and c_i represents the citations for the i-th paper in this ranked list. The h-index is the largest integer h such that the first h papers each have at least h citations, meaning c_h \geq h. This procedure can be formalized as h = \max \{ i \in \{1, \dots, n\} \mid c_i \geq i \}, with h = 0 if no such i exists. In practice, this involves stepping through the ranked list until the citation threshold is violated; for instance, if the 5th paper has 5 or more citations but the 6th has fewer than 6, then h = 5. Edge cases arise when an author has no publications or when all publications are uncited, in which case the h-index is 0.

Self-citations are typically included in the citation counts during calculation, as excluding them requires additional filtering that is not standard in most databases; however, their effect on the h-index is generally smaller than on total citation metrics, since the index depends on a threshold rather than exact counts. For small publication sets, the h-index can be calculated manually by sorting citations in a spreadsheet. Larger datasets are handled automatically by specialized software and databases, such as Publish or Perish, which retrieves data from sources like Google Scholar and computes the index via user queries. Similarly, Web of Science and Scopus provide built-in author search functions that generate citation reports including the h-index, drawing from their curated indexes of peer-reviewed literature.
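The ranking procedure above translates directly into a short program. The following is a minimal Python sketch, assuming citation counts are already available as a list of integers (the function name and sample data are illustrative):

```python
def h_index(citations):
    """Compute the h-index from a list of per-paper citation counts.

    Sort counts in descending order and find the largest 1-based rank i
    such that the i-th paper has at least i citations (c_i >= i).
    """
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break  # once c_i < i, no later rank can qualify
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: the 4th paper has 4 >= 4, the 5th has 3 < 5
print(h_index([]))                # 0: no publications
print(h_index([0, 0, 0]))         # 0: all papers uncited
```

Because the list is sorted in descending order, the loop can stop at the first violation of c_i >= i, mirroring the manual procedure described above.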

Required Input Data

To compute the h-index for an individual researcher, the essential input data consists of a complete list of their publications paired with the corresponding number of times each has been cited by other works. This data is primarily sourced from established academic databases, including Web of Science, Scopus, and Google Scholar, each of which aggregates publication records and tracks citations across scholarly literature. These databases enable users to retrieve an author's profile, sort publications by citation count, and derive the h-index directly or manually from the exported data.

Data quality plays a critical role in ensuring the reliability of h-index calculations, as differences in database coverage introduce biases that can significantly alter results. Google Scholar offers broad inclusion of sources such as preprints, theses, and gray literature, often yielding higher citation counts, whereas Web of Science and Scopus emphasize peer-reviewed journals and books, resulting in more selective but potentially lower coverage for interdisciplinary or emerging fields. Additionally, time lags in updating citation records affect accuracy: new citations can take weeks to months to be indexed, and near-complete coverage of recent publications may take several months, with the delay varying by database.

The scope of input data for h-index computation is generally career-long, encompassing all citations accumulated over an author's professional lifespan to reflect sustained impact. However, it can be narrowed to field-specific subsets or defined time windows to emphasize recent or discipline-tailored productivity, though this requires manual filtering of database outputs (a short sketch of this filtering appears at the end of this subsection). Database limitations often lead to the exclusion or underrepresentation of non-journal formats like books and book chapters, particularly in Web of Science and Scopus, which prioritize indexed serials and may overlook contributions prevalent in the humanities or social sciences. Accurate h-index derivation presupposes a thorough compilation of the author's publication record, as omissions can skew the ranking of citations and lower the final value. Handling co-authorship is inherent to the metric's design, with the h-index assigned at the individual level; each co-author receives full credit for citations to a shared publication, without fractional allocation based on author count, which can inflate scores in collaborative fields.
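As a rough sketch of the time-window filtering mentioned above, the snippet below restricts the computation to papers published within a given range of years before applying the standard procedure. The paper data, years, and function name are hypothetical, standing in for a manual export from a citation database:

```python
from datetime import date

def windowed_h_index(papers, start_year, end_year=None):
    """h-index restricted to papers published in [start_year, end_year].

    `papers` is a list of (publication_year, citation_count) pairs,
    e.g., assembled by hand from a database export.
    """
    if end_year is None:
        end_year = date.today().year
    counts = sorted(
        (c for year, c in papers if start_year <= year <= end_year),
        reverse=True,
    )
    return max((i for i, c in enumerate(counts, start=1) if c >= i), default=0)

# Hypothetical career: (year, citations) pairs.
papers = [(2008, 40), (2012, 22), (2019, 9), (2021, 6), (2023, 2)]
print(windowed_h_index(papers, 2019))  # 2: only the 2019-2023 papers count
```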

Illustrations and Applications

Examples

To illustrate the h-index, consider a simple case of an author with six publications receiving 10, 8, 5, 3, 1, and 0 citations, respectively. Sorting these in descending order yields the sequence 10, 8, 5, 3, 1, 0. The value of h is the largest number such that the first h papers each have at least h citations; here, h = 3 because the first three papers have 10 ≥ 3, 8 ≥ 3, and 5 ≥ 3 citations, but the fourth has only 3 < 4.

The h-index emphasizes balanced productivity and sustained impact over isolated high-citation outliers. For instance, an author with ten publications each cited exactly ten times achieves h = 10, reflecting broad influence across their body of work. In contrast, an author with one publication cited 100 times and nine others cited zero times has h = 1, as only one paper meets the threshold of at least 1 citation. This comparison underscores the metric's resistance to skew from a single blockbuster paper.

A real-world application appears in Jorge E. Hirsch's 2005 analysis of prominent physicists using citation data from the ISI Web of Science database. Theoretical physicist Edward Witten, known for contributions to string theory, had an h-index of 110 at that time, indicating 110 papers each with at least 110 citations. By November 2025, Witten's h-index had increased to 214 based on Google Scholar metrics, demonstrating the metric's evolution with accumulating citations over time.

The following table presents the simple example above, with papers ranked by descending citation count; the h-index corresponds to the rank at which citations first fall below the rank (the first three rows, shown in bold, form the h core):
| Rank | Citations |
| --- | --- |
| **1** | **10** |
| **2** | **8** |
| **3** | **5** |
| 4 | 3 |
| 5 | 1 |
| 6 | 0 |
Hirsch further illustrated the index graphically in his original work, plotting each paper's citation count against its rank (sorted in descending order) and identifying h as the intersection with the 45-degree line where citations equal rank.
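For readers who want to verify the arithmetic, the short snippet below recomputes the worked example with the rank-threshold rule:

```python
# Six papers with 10, 8, 5, 3, 1, and 0 citations, as in the table above.
citations = sorted([10, 8, 5, 3, 1, 0], reverse=True)
h = max((rank for rank, c in enumerate(citations, start=1) if c >= rank), default=0)
print(h)  # 3: the 4th-ranked paper has only 3 < 4 citations
```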

Practical Uses

The h-index is widely employed in academic hiring and promotion processes, serving as a quantitative measure of a researcher's productivity and impact during tenure reviews and advancement decisions. For instance, in physics, an h-index of approximately 12 is often considered a benchmark for tenure, while 18 signifies suitability for full professorship. This provides a more balanced signal than mere publication counts or total citations, helping committees gauge sustained influence in hiring and promotion evaluations.

In grant applications, the h-index functions as a supplementary indicator of an applicant's scientific standing, particularly in competitive funding programs like those from the European Research Council (ERC), where it informs background assessments of applicants despite not being a formal criterion. Studies of ERC awardees highlight average h-indices around 16 for consolidators, underscoring its role in contextualizing proposal merit. Similarly, it aids resource allocation in broader funding contexts by offering a stable estimator of achievement.

At the institutional level, aggregated h-indices of faculty are utilized for university rankings and departmental evaluations, enabling comparisons of research performance across institutions. For example, plotting individual faculty h-indices against career length helps rank academic departments by overall scholarly output. This approach has been applied in global university assessments, where institutional h-indices correlate with broader research impact metrics. Journal evaluations occasionally incorporate aggregated author h-indices to assess editorial quality and influence, though the journal impact factor remains primary.

Field-specific variations necessitate contextual benchmarks for fair cross-disciplinary comparisons, as citation practices differ significantly. In biomedicine, mid-career researchers often require an h-index exceeding 20 due to higher publication and citation rates, whereas in mathematics, an h-index above 10 suffices for similar career stages, reflecting lower average citations per paper—approximately six times fewer than in the life sciences. These disparities highlight the h-index's sensitivity to disciplinary norms, making normalized interpretations essential for evaluations.

The h-index integrates seamlessly with researcher profiling platforms, enhancing its accessibility in professional contexts. On ResearchGate, it is automatically computed from user-uploaded publications and citations, facilitating self-assessment and networking. ORCID identifiers link to external databases like Scopus for h-index derivation, supporting standardized profiles in grant submissions and institutional reporting. In policy frameworks, such as Italy's national research assessment exercise (VQR 2011–2014), bibliometric tools incorporating h-like metrics informed funding allocations, though peer evaluation predominated.

Limitations

Criticisms

The h-index is highly dependent on the academic field, as citation rates and publication norms differ substantially across disciplines, leading to inflated values in rapidly citing fields like biomedicine compared to slower-accumulating areas such as mathematics or the humanities. This field-specific variation renders cross-disciplinary comparisons unreliable and potentially biased, as researchers in high-citation fields can achieve higher h-indices without necessarily demonstrating superior impact relative to their peers. For example, clinical and life sciences often see quicker citation growth due to practical applications and larger audiences, while the social sciences experience more gradual accrual, disadvantaging scholars in the latter.

Another key limitation is the h-index's disregard for publication age, which systematically favors established researchers with extended careers who have accumulated citations over decades, while penalizing early-career scientists or those whose influential work predates widespread digital indexing. This temporal bias ignores the maturation time for citations and can undervalue groundbreaking but older contributions, as seen in cases where long-active senior scientists maintain exceptionally high h-indices (e.g., 332 after approximately 48 years of publishing, as of November 2025) partly due to longevity rather than recent productivity alone. Consequently, the metric disadvantages newcomers and fails to reflect career-stage dynamics equitably.

Co-authorship presents further challenges, as the h-index attributes full credit to every co-author on multi-authored papers without adjusting for team size, contribution levels, or author position, thereby overcrediting individuals in large collaborations common in fields like physics or biomedicine. This approach inflates h-indices through citation multiplication—e.g., a three-author paper with 60 citations credits all 60 citations to each author, counting as 180 in total across the group—while ignoring partial contributions and encouraging unethical practices like gift authorship to boost scores without added effort. Such issues create unfairness between solo and collaborative researchers, particularly across disciplines with very different collaboration norms.

Broader methodological flaws compound these problems: the h-index remains insensitive to publication venue quality, equating citations from prestigious journals with those from lower-impact outlets or review articles, and it prioritizes publication volume over depth, incentivizing prolific but superficial output rather than innovative, high-quality work. Statistically, it performs poorly with the right-skewed citation distributions typical of scholarly literature, capturing only a fraction of the available information by overlooking highly cited outliers and uncited papers, which can yield identical h-indices for researchers with vastly different profiles—such as one with balanced output versus another with a few blockbusters amid many low-citation papers. These characteristics underscore the metric's somewhat arbitrary nature and limited reliability for holistic impact assessment.
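The final point is straightforward to demonstrate: the sketch below builds two hypothetical citation profiles with very different shapes and totals that nevertheless share the same h-index.

```python
def h_index(citations):
    ranked = sorted(citations, reverse=True)
    return max((i for i, c in enumerate(ranked, start=1) if c >= i), default=0)

balanced = [8, 8, 7, 7, 6, 6, 2, 1]      # steady, even impact
skewed = [500, 80, 40, 9, 8, 6, 0, 0]    # a few blockbusters, two uncited papers

print(h_index(balanced), sum(balanced))  # 6 45
print(h_index(skewed), sum(skewed))      # 6 643
```

Both profiles yield h = 6, even though the second author has roughly fourteen times as many total citations, illustrating how much of the citation distribution the metric discards.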

Manipulation Risks

The h-index is susceptible to manipulation through strategic self-citations, where authors excessively reference their own prior work to artificially elevate citation counts and thereby increase their h-index value. Simulations have demonstrated that deliberate self-citation patterns can significantly inflate the index; for instance, an author with an initial h-index of 10 could raise it to 15 or higher by systematically citing their own publications in subsequent papers. This practice raises ethical concerns, as it distorts evaluations of scholarly impact and can mislead hiring, promotion, or funding decisions.

Citation rings among collaborators exacerbate this issue, involving coordinated mutual citations within small groups to boost h-indices without reflecting genuine impact. Such networks, often among co-authors or affiliated researchers, create inflated citation loops that are difficult to detect from the h-index value alone. Ethical guidelines from bodies like the Committee on Publication Ethics (COPE) condemn these tactics as a form of citation manipulation that undermines research integrity; many databases now provide h-index calculations excluding self-citations, and COPE guidance addresses citation manipulation as of 2024.

Publication tactics like "salami slicing"—dividing a single body of work into multiple minimally distinct papers—allow authors to accumulate more publications, each potentially garnering citations to raise the h-index threshold. This strategy increases the number of papers considered in the h-index calculation, even if individual impacts remain low, prioritizing quantity over substantive contribution. Studies across several fields have linked such practices to broader pressures from metric-driven evaluations, contributing to ethical debates on research assessment.

Database gaming further enables manipulation by selectively reporting h-indices from sources with varying stringency, such as favoring Google Scholar over Scopus or Web of Science. Google Scholar often yields higher h-indices due to its broader, less curated coverage, including non-peer-reviewed materials that can contain self-planted citations, making it more prone to inflation than the stricter filtering of Web of Science. This selective use can present an overly favorable profile in evaluations.

Empirical evidence underscores these risks: analyses across disciplines show that self-citations can inflate h-indices, particularly in collaborative or high-output contexts. High-profile retraction scandals involving fabricated data and plagiarism—such as those uncovered in biomedical research—have highlighted how manipulated metrics like the h-index can propagate until exposed, leading to widespread reevaluations of affected scholars' careers. These cases emphasize the need for robust safeguards in metric usage to mitigate ethical harms.
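To make the self-citation concern concrete, here is a deliberately simplified toy simulation (an illustrative assumption, not any published model): each new paper directs a few self-citations at the papers sitting just below the current h threshold, which is the most efficient way to push the index upward.

```python
def h_index(citations):
    ranked = sorted(citations, reverse=True)
    return max((i for i, c in enumerate(ranked, start=1) if c >= i), default=0)

def simulate_strategic_self_citation(citations, new_papers, self_cites_per_paper):
    """Toy model: each of `new_papers` new publications adds
    `self_cites_per_paper` citations to the author's papers
    ranked just below the current h threshold."""
    counts = list(citations)
    for _ in range(new_papers):
        counts.append(0)            # the new paper itself starts uncited
        counts.sort(reverse=True)
        h = h_index(counts)
        # boost papers at ranks h+1, h+2, ... (0-based indices h, h+1, ...)
        for idx in range(h, min(h + self_cites_per_paper, len(counts))):
            counts[idx] += 1
    return h_index(counts)

profile = [15, 12, 11, 10, 10, 9, 8, 3, 2, 1]
print(h_index(profile))                                  # 7 before
print(simulate_strategic_self_citation(profile, 10, 3))  # 9 after ten such papers
```

Even this crude strategy lifts the index by a couple of points within a handful of papers, which is why detection usually relies on inspecting self-citation rates rather than the h value itself.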

Extensions

Variants

The variants of the h-index address key limitations of the original metric, such as its sensitivity to career duration, unequal crediting in multi-author works, underemphasis on highly influential papers, disciplinary differences in citation norms, and challenges in assessing collective outputs.

The contemporary h-index, denoted as h_c, accounts for the recency of publications by weighting citations to give more importance to recent work, enabling fairer comparisons across career stages. It is computed by multiplying each paper's citation count by an age-dependent decay factor—in the original proposal by Sidiropoulos et al., the adjusted score is S^c(i) = \gamma \cdot (Y_{\text{now}} - Y(i) + 1)^{-\delta} \cdot c_i, with parameter choices such as \gamma = 4 and \delta = 1—then applying the standard h-index procedure to these adjusted scores. This adjustment highlights researchers who maintain productivity over time rather than relying on citations accumulated by older work.

The individual h-index, denoted as h_I, adjusts the h-index to account for co-authorship in collaborative research. It is defined as h_I = \frac{h^2}{N_t}, where h is the standard h-index and N_t is the total number of authors across the h most-cited papers (equivalent to dividing h by the average number of authors in those papers). This normalization reduces inflation from large collaborations and better reflects individual contributions in fields with frequent multi-author papers.

The g-index extends the h-index by prioritizing highly cited publications to capture broader impact from standout works. It is the largest integer g such that the g most-cited papers collectively receive at least g^2 citations. Unlike the h-index, which treats all qualifying papers equally, the g-index amplifies the role of top performers, making it higher for authors with skewed citation distributions (e.g., a few blockbusters amid average outputs). A short implementation of the g-index and the individual h-index appears in the sketch below.

Field-normalized variants of the h-index adjust for heterogeneous citation rates across disciplines, often via percentiles, to ensure equitable cross-field evaluations. In this adaptation, each paper's raw citation count is replaced by its citation percentile (e.g., top 10% in its field and publication year), and the h-index is recalculated on these normalized scores; a researcher has an h-index of k if k papers fall in the top k\% within their field. This method preserves the h-index's structure while accounting for baseline differences, such as higher citation norms in biomedicine versus mathematics.

The group h-index, denoted as h_G, applies the h-index to teams or institutions while correcting for varying group sizes to assess collective performance fairly. It is defined as h_G = \frac{h}{\sqrt{N_G}}, where h is the group's standard h-index and N_G is the number of members. This scaling prevents larger teams from automatically outperforming smaller ones due to sheer volume, highlighting efficiency in collaborative impact.

The total number of citations received by an author's publications serves as a straightforward measure of overall scholarly impact, aggregating all citations across an individual's body of work. However, this metric can be heavily skewed by a small number of highly cited "big hits," such as review articles or collaborative papers where the author is one of many contributors, potentially misrepresenting sustained productivity. In contrast, the h-index provides a more balanced assessment by requiring a threshold where multiple papers meet or exceed the h citation level, resisting the influence of outliers and better capturing consistent research influence.
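The g-index and individual h-index lend themselves to compact implementations. The sketch below follows the definitions given above; the citation list and author counts are hypothetical:

```python
def g_index(citations):
    """Largest g such that the g most-cited papers together
    have at least g**2 citations."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(ranked, start=1):
        total += c
        if total >= i * i:
            g = i
    return g

def individual_h_index(h, total_authors_in_h_core):
    """h_I = h**2 / N_t, where N_t is the total author count
    across the h core papers."""
    return h * h / total_authors_in_h_core

citations = [30, 17, 6, 5, 2, 1]
print(g_index(citations))        # 6: cumulative sums 30, 47, 53, 58, 60, 61 all reach i**2
print(individual_h_index(3, 9))  # 1.0: an h of 3 spread across 9 total authors
```

For this citation list the plain h-index is 4 while the g-index is 6, reflecting the extra weight the g-index gives to the two highly cited papers.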
The impact factor (IF), introduced by Eugene Garfield in the 1960s, evaluates the average number of citations received by articles published in a journal over a specific period, typically two years, to gauge the prestige and influence of the publication venue itself. Unlike the author-centric h-index, which accumulates over an entire career and incorporates both productivity and citation distribution, the IF is strictly journal-level and does not directly reflect an individual's contributions or long-term output. This distinction makes the IF useful for assessing publication quality in hiring or funding decisions but less suitable for evaluating personal research trajectories.

The i10-index, a metric provided by Google Scholar, counts the number of an author's publications that have received at least 10 citations each, offering a simple indicator of the breadth of moderately impactful work. While easier to compute than the h-index, it lacks nuance by applying a fixed low threshold, potentially overvaluing quantity over the quality distribution captured by the h-index's variable threshold. For instance, an author with many papers just above 10 citations might have a high i10-index but a lower h-index if citations are unevenly distributed, as the sketch below illustrates.

Eigenfactor and related PageRank-inspired metrics, such as those developed by Carl Bergstrom and colleagues, assess influence through a network analysis of journal citations, weighting links from high-prestige sources more heavily and excluding self-citations to model the flow of scientific attention. These approaches differ fundamentally from the h-index by focusing on interconnected citation graphs at the journal level rather than author productivity and threshold-based impact, ignoring personal authorship networks entirely.

Researchers often select the h-index for scenarios requiring a balanced view of productivity and sustained impact, such as promotions or peer evaluations, whereas total citations suit volume-based assessments, impact factors inform venue choices, i10-indices highlight publication counts, and network metrics like the Eigenfactor evaluate journal ecosystems.
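The divergence between the i10-index and the h-index is easy to see in code; the clustered citation profile below is hypothetical:

```python
def h_index(citations):
    ranked = sorted(citations, reverse=True)
    return max((i for i, c in enumerate(ranked, start=1) if c >= i), default=0)

def i10_index(citations):
    """Google Scholar's i10-index: number of papers with >= 10 citations."""
    return sum(1 for c in citations if c >= 10)

clustered = [11] * 30  # thirty papers, each cited just above the i10 threshold
print(i10_index(clustered), h_index(clustered))  # 30 11
```

The fixed threshold counts all thirty papers, while the h-index saturates at 11 because no paper has more than 11 citations.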
