
Academic journal

An academic journal is a periodical publication featuring articles authored by experts that report original findings, critical reviews of existing research, or theoretical advancements within a specific discipline, with content typically vetted through peer review to maintain scholarly rigor. These journals serve as primary vehicles for disseminating verified knowledge, enabling researchers to build upon prior work while establishing priority for discoveries. Originating in the 17th century amid the Scientific Revolution, the first academic journal, the Journal des sçavans, appeared in France on January 5, 1665, followed shortly by the Philosophical Transactions of the Royal Society in March 1665, marking the shift from personal correspondence to systematic publication of scientific and scholarly output. Over centuries, the proliferation of journals paralleled the expansion of specialized fields, with peer review evolving as a core mechanism—formalized in the 20th century but inconsistently applied—to filter for validity and novelty, though its efficacy remains debated due to variability in reviewer expertise and susceptibility to subjective biases. Today, academic journals underpin career advancement through metrics like impact factors, yet they grapple with challenges including publication delays, escalating access costs via subscription models, and the rise of predatory outlets that mimic legitimacy without substantive review, underscoring tensions between open dissemination and quality control in an era of exponential output. Despite these issues, journals remain indispensable for archival permanence and interdisciplinary dialogue, with digital platforms enhancing accessibility while introducing new vulnerabilities to error and misinformation if oversight falters.

Definition and Core Functions

Role in Disseminating Knowledge

Academic journals primarily disseminate knowledge by publishing peer-reviewed articles that communicate original research findings, methodologies, and analyses to specialized scholarly audiences worldwide. This process enables researchers to contribute verifiable data and evidence-based conclusions, forming the foundational record for subsequent studies and applications in various disciplines. Unlike informal channels such as conferences or preprints, journals enforce structured scrutiny to prioritize claims supported by reproducible evidence, thereby reducing the propagation of unsubstantiated assertions. The peer review system, typically involving independent experts evaluating manuscripts for methodological soundness, logical coherence, and empirical validity, serves as a gatekeeping mechanism that elevates the reliability of disseminated content over unvetted alternatives. Journals thus facilitate the incremental advancement of knowledge, where citations to prior publications create traceable lineages of discovery, allowing fields to evolve through critique and extension rather than isolated efforts. This archival function, supported by indexing in databases like Scopus or Web of Science, ensures long-term accessibility, with over 2.5 million articles published annually across scientific domains as of 2023, underpinning policy, innovation, and education. In practice, journals' role extends to synthesizing knowledge through review articles that aggregate and appraise disparate studies, highlighting causal patterns and gaps in understanding. However, evidence from replication efforts reveals limitations, such as low replication rates in fields like psychology—where only about 36% of studies from top journals replicated successfully in a large-scale project—indicating that while journals disseminate efficiently, they do not inherently guarantee causal accuracy without post-publication verification.
Open access models, adopted by journals representing roughly 20% of global output by 2024, further amplify dissemination by removing paywalls, correlating with 18-50% higher citation impacts per studies on accessibility. Despite these strengths, institutional biases in editorial boards and reviewer pools can skew content toward prevailing paradigms, as documented in analyses of publication patterns favoring positive findings over null or negative results.

Distinction from Non-Academic Publications

Academic journals are distinguished from non-academic publications, such as popular magazines, newspapers, and trade periodicals, primarily through their rigorous peer-review process, whereby submitted manuscripts undergo evaluation by independent experts in the field to assess methodological soundness, originality, and validity before acceptance. In contrast, non-academic outlets typically rely on editorial review by staff journalists or editors without specialized peer scrutiny, prioritizing timeliness, readability, and broad appeal over empirical verification. This absence of peer review in non-academic sources can lead to faster publication but increases vulnerability to unsubstantiated claims or sensationalism, as evidenced by retractions or corrections in outlets like Time or Newsweek that lack the iterative validation of scholarly vetting. Authorship in academic journals features credentialed researchers or academics affiliated with universities or institutions, who present original empirical research, theoretical advancements, or data-driven analyses supported by reproducible methods. Non-academic publications, however, often credit professional writers, freelancers, or subject enthusiasts without equivalent expertise requirements, focusing on interpretive summaries, opinions, or secondary reporting derived from press releases or interviews rather than primary research. For instance, analyses of publication patterns show that scholarly articles average 20-50 references per paper to enable verification, whereas popular articles rarely exceed a handful of informal attributions. The content and structure further diverge: academic journals emphasize technical depth, statistical rigor, and specialized terminology aimed at advancing disciplinary knowledge, often spanning 5,000-10,000 words with abstracts, methodologies, and appendices. Non-academic formats favor concise narratives, visuals, and advertisements tailored for lay audiences, prioritizing brevity (under 2,000 words) and narrative flair over methodological detail or hypothesis testing.
These differences ensure academic journals serve as cumulative repositories for verifiable knowledge, while non-academic ones function more as disseminators of current events or trends, though the latter may occasionally popularize scholarly work without the same safeguards for accuracy.
Characteristic | Academic Journals | Non-Academic Publications (e.g., Magazines, Newspapers)
Review Process | Peer-reviewed by field experts for validity and rigor | Editorially reviewed for style and market fit; no expert validation
Primary Purpose | Disseminate original research to build scholarly consensus | Inform, entertain, or opine for general readership
Citations/References | Extensive bibliographies for verification | Minimal or none; often anecdotal
Visuals and Format | Sparse illustrations; plain, text-heavy layout | Glossy photos, infographics, ads for engagement

Historical Evolution

Origins in the Scientific Revolution

The Scientific Revolution, spanning the 16th to 18th centuries, emphasized empirical observation, experimentation, and mathematical reasoning, fostering a need for systematic dissemination of findings beyond personal correspondence or unpublished manuscripts. Prior to this, knowledge sharing relied on letters among scholars and occasional books, but the volume of discoveries—such as those in astronomy by Galileo and Kepler, or in mechanics by Descartes—demanded a more efficient mechanism to verify claims, replicate experiments, and build cumulative knowledge. This shift aligned with the formation of learned societies, like the Royal Society of London chartered in 1662, which prioritized collective scrutiny over individual authority. The earliest academic periodical appeared in France with the Journal des sçavans, launched on January 5, 1665, by Denis de Sallo under the patronage of Jean-Baptiste Colbert. Intended to review books, report legal decisions, and cover scholarly news across the humanities and sciences, it published 13 weekly issues before suppression due to controversial content, resuming in 1666 under new editors. Though broader than strictly scientific, it established the serial format for ongoing intellectual exchange. Shortly after, on March 6, 1665, Henry Oldenburg, secretary of the Royal Society, issued the first number of Philosophical Transactions, the inaugural journal dedicated to science. Oldenburg self-published it, drawing from Society correspondence to feature observations, experiments (e.g., Robert Boyle's experiments on cold), and foreign reports, aiming to create a public repository immune to secrecy or biases. These pioneering journals institutionalized peer-like validation through printed scrutiny, enabling causal chains of inquiry where later workers could reference and test prior results.
Philosophical Transactions endured disruptions like the 1665 plague and the Great Fire of London, continuing under Royal Society oversight after Oldenburg's 1677 death, while inspiring continental equivalents such as Germany's Miscellanea curiosa (1672) by the Academia Naturae Curiosorum. By prioritizing verifiable data over speculative philosophy, they laid the groundwork for modern scientific publishing, countering the era's alchemical secrecy and aristocratic gatekeeping with open, if rudimentary, archival access.

19th- and 20th-Century Expansion

The proliferation of academic journals in the 19th century was propelled by the expansion of scientific research, the establishment of specialized disciplines, and advancements in printing technology, such as steam-powered presses that reduced production costs and enabled higher volumes. At the century's outset, approximately 100 periodicals existed globally, increasing to around 10,000 by 1900, reflecting the growing output of empirical investigations amid industrialization and the professionalization of science. In one national market alone, scientific titles rose from 11 in 1800 to over 110 by century's end, paralleled by medical journals expanding from 9 to more than 150, as learned societies and universities formalized dissemination channels for specialized knowledge. This growth facilitated the creation of disciplinary communities, with journals like the Annalen der Physik (founded 1799) exemplifying the shift toward field-specific publication amid burgeoning fields such as chemistry and physics.

Into the 20th century, academic journals continued their exponential expansion, driven by further disciplinary fragmentation and international research collaboration, with annual growth rates of 3.3% to 4.7% in active journals from 1900 onward. By mid-century, estimates placed the total at 30,000 to 60,000 active scholarly titles, encompassing not only the sciences but also burgeoning social sciences and humanities outlets like the American Journal of Sociology (1895). This era saw journals adapt to increased article volumes, with publications like Philosophical Transactions of the Royal Society growing in size to accommodate denser reporting of experimental results, underscoring the causal link between rising researcher numbers—tied to university proliferation—and publication demand. Such developments prioritized archival permanence over ephemeral formats, though they also introduced challenges like fragmented literature that later necessitated indexing systems.

Post-World War II Growth and Professionalization

Following World War II, scientific research output surged due to unprecedented government funding and institutional expansion, particularly in the United States and Europe, leading to a corresponding proliferation of academic journals. The exponential growth in publications averaged around 5% annually, with a doubling time of approximately 13 years, as documented in analyses of global scientific output. This boom was propelled by Cold War-era priorities, including massive investments in defense-related R&D and the establishment of agencies like the National Science Foundation in 1950, which formalized federal support for basic research. By the 1950s, the number of journals cataloged reached about 60,000, reflecting the scaling of publishing infrastructure to accommodate rising paper volumes.

The post-war period also saw the professionalization of journal operations, with peer review evolving from ad hoc editorial judgments to a standardized, systematic process. Prior to the war, many journals relied on editors' solitary assessments, but the influx of submissions overwhelmed this model, prompting widespread adoption of external refereeing by the late 1940s and 1950s. For instance, prominent journals intensified referee use post-war to maintain credibility amid volume increases, while medical publications such as the British Medical Journal formalized anonymous refereeing around 1947. This shift, driven by funding bodies' emphasis on independent validation of grant-supported work, elevated journals' role as gatekeepers, though it introduced delays and biases later critiqued in empirical studies. Learned societies, traditional stewards of journals, faced operational strains from the expansion, often partnering with commercial publishers to handle printing, distribution, and marketing. This commercialization professionalized workflows—introducing dedicated editorial staff, indexing systems like the Science Citation Index, and early metrics for journal prestige—but also sowed seeds for profitability tensions, as subscription revenues soared with library budgets.
By 1961, narrower estimates counted active peer-reviewed scholarly journals at around 30,000, underscoring the sector's maturation into a structured industry amid largely unchecked growth.

Publishing Mechanisms

Types of Scholarly Articles

Original research articles constitute the primary format for reporting novel empirical findings or theoretical advancements derived from systematic investigations. These articles typically adhere to the IMRaD structure—encompassing an introduction outlining the research problem and hypotheses, methods detailing procedures for replication, results presenting data without interpretation, and discussion analyzing implications and limitations—and form the foundational mechanism for knowledge generation in fields such as the natural and social sciences. Review articles synthesize and critically evaluate existing literature on a defined topic, offering an overview of the field's current state, unresolved questions, and prospective research avenues. Unlike original research, they do not generate new data but aggregate and interpret findings from dozens to hundreds of primary studies, often resulting in high citation counts due to their utility in contextualizing subsequent work; editors frequently invite these contributions for their authoritative perspective. Short reports, letters, or brief communications deliver concise accounts of preliminary or time-sensitive original research results, prioritizing rapid dissemination over exhaustive detail. Subject to stringent word limits, these formats suit competitive environments like medical breakthroughs or funding-dependent inquiries, where they may prompt further validation studies; they are formatted similarly to full research articles but with abbreviated sections. Case studies document detailed examinations of particular instances, such as unique clinical cases, organizational events, or environmental phenomena, to illustrate rare occurrences or test theories in context-specific settings. Prevalent in disciplines like medicine and business, they emphasize descriptive depth over statistical generalizability, alerting practitioners to potential patterns without claiming broad applicability.
Methodological articles introduce or refine experimental techniques, protocols, or analytical tools, demonstrating improvements over prior approaches through validation against benchmarks. These contributions focus on procedural innovations rather than substantive findings, enabling reproducibility and efficiency gains across studies, and are structured akin to original research articles with emphasis on procedure description and testing. While these categories predominate, variations exist by discipline—such as meta-analyses within reviews for quantitative synthesis or theoretical papers in the humanities emphasizing argumentation over data—and journals may include supplementary formats like editorials or commentaries for interpretive discourse, though these less frequently undergo full peer review for empirical claims.

Editorial Processes and Peer Review

The editorial process in academic journals typically begins with manuscript submission, followed by an initial administrative check by the editorial office to ensure compliance with formatting, ethical standards, and scope requirements. If the submission passes this stage, the editor-in-chief or handling editor conducts a preliminary assessment of the paper's novelty, methodological soundness, and fit for the journal, often rejecting unsuitable manuscripts without external review—a desk rejection that accounts for a significant portion of initial decisions. This phase filters out approximately 20-50% of submissions in many fields before peer review, reflecting high selectivity driven by limited publication slots. Upon advancing, the editor assigns 2-4 independent experts as peer reviewers, selected based on expertise, prior publications, and absence of conflicts of interest, with the process emphasizing validity, significance, and originality. Reviewers provide confidential reports recommending acceptance, revision (minor or major), or rejection, typically within 4-8 weeks, though delays are common due to reviewer workload, extending the full first-decision timeline to 5-12 weeks on average. Authors then receive the editor's decision, often with reviewer comments, prompting revisions that may iterate 1-3 times before final acceptance, copyediting, and publication. Peer review variants include single-anonymous (reviewers know authors' identities, but not vice versa, predominant in ~70% of journals), double-anonymous (both identities masked to reduce bias), open (identities disclosed, sometimes publishing reviews), and post-publication (ongoing critique after online release). Double-anonymous review aims to mitigate prestige or affiliation biases, yet evidence shows persistent disparities, such as Western-authored papers facing lower rejection rates post-initial denial compared to non-Western ones. Overall rejection rates hover at 68% across disciplines, rising to 80-95% in top-tier journals, underscoring the gatekeeping role but also selectivity pressures.
Despite its intent to uphold rigor, peer review exhibits systemic flaws, including failure to detect errors—studies planting deliberate flaws in manuscripts found reviewers identifying only 30-40% of major issues—and biases like favoring established paradigms or cronyism among acquainted peers. These limitations stem from reviewers' unpaid, voluntary status and lack of incentives for thorough scrutiny, compounded by field-specific inter-rater unreliability where agreement on flaws varies widely. Empirical data reveal that peer review enhances quality marginally but does not guarantee validity or truth, as evidenced by replication crises in psychology and biomedicine where peer-reviewed findings later failed verification at rates exceeding 50%. Academic institutions' ideological homogeneity exacerbates these pressures, privileging incremental over disruptive work.

Specialized Review Formats

Specialized review formats in academic journals deviate from conventional anonymized pre-publication review to address issues such as bias, irreproducibility, and opacity, often incorporating transparency or preemptive evaluation. These models include open peer review, registered reports, and post-publication review, each designed to mitigate limitations like reviewer anonymity potentially enabling unaccountable critiques or editorial decisions favoring positive results over rigorous methods. Adoption varies by discipline, with greater uptake in fields like psychology and medicine where reproducibility concerns are acute, though evidence on their superiority remains mixed due to challenges in large-scale comparisons. Open peer review reveals reviewer identities to authors and sometimes publishes review reports alongside accepted articles, aiming to foster accountability and reduce sabotage by incentivizing constructive feedback. Journals such as F1000Research and BMJ implement this by disclosing names and reports post-review, with studies indicating it can improve review quality through reputational stakes but may deter candid criticism due to interpersonal dynamics. A 2023 analysis found open models correlate with higher citation rates in some contexts, yet critics note potential biases from self-selection among reviewers willing to go public. Registered reports shift initial peer review to the study protocol stage, prior to data collection, offering in-principle acceptance if methods are sound, thereby countering selective reporting and p-hacking. Promoted by the Center for Open Science since 2013, this format has been adopted by over 300 journals including Royal Society Open Science and Cortex, with evidence from comparative trials showing reduced effect sizes relative to traditional reviews, suggesting less inflation from flexible analyses. A second review occurs post-results, but acceptance hinges primarily on protocol rigor, addressing the publication bias documented in meta-analyses where null findings are underrepresented by up to 60%.
Post-publication peer review enables scrutiny after online-first release, often via platforms like PubPeer or journal-integrated systems, allowing rapid dissemination while validation continues. Models like F1000Research publish articles immediately, then solicit signed reviews, which are versioned and public, facilitating iterative improvements and exposing flaws missed in preprint stages. This approach aligns with preprint ecosystems, as seen in the publish-review-curate model, but risks amplifying unvetted claims if reviewer engagement lags, with data from 2024 indicating variable review volumes across fields. Empirical critiques highlight that while it enhances transparency, it demands robust participation to counter misinformation, unlike gated traditional processes. Other variants, such as collaborative or overlay reviews, involve community input or review layered on preprints, as in DARIAH's model for research data, prioritizing methodological soundness over novelty. These formats, while innovative, face scalability issues, with adoption limited to niche journals; a 2023 survey reported only 10-15% of outlets using non-traditional models fully. Overall, specialized formats promote causal rigor by decoupling acceptance from outcomes, yet their efficacy depends on disciplinary norms and incentives, with ongoing debates over whether they resolve or merely relocate biases inherent in expert gatekeeping.

Assessment of Prestige and Impact

Metrics and Ranking Systems

The most widely used metric for evaluating academic journal prestige is the Journal Impact Factor (JIF), calculated annually by Clarivate Analytics through its Journal Citation Reports. The JIF for a given year is determined by dividing the number of citations in that year to citable items (primarily articles and reviews) published in the journal during the preceding two years by the total number of such citable items published in those two years. Introduced in the 1960s by Eugene Garfield as part of the Science Citation Index, the JIF draws from Web of Science data and covers approximately 21,000 journals across the sciences and social sciences, with values ranging from below 1 for niche outlets to over 100 for top multidisciplinary titles like Nature (JIF 64.8 in 2022). An alternative citation-based measure is CiteScore, provided by Elsevier using Scopus data, which assesses over 28,000 serial titles including journals and conference proceedings. CiteScore for a year is computed as the average citations per document, where citations received in that year to documents published in the prior four years are divided by the number of documents (including articles, reviews, and conference papers) from those four years, offering broader coverage than JIF's two-year window. Launched in 2016, CiteScore percentiles rank journals within subject categories, with top-quartile journals in high-citation fields exceeding 10, and it explicitly excludes non-peer-reviewed content to emphasize scholarly documents. Prestige-oriented rankings include the SCImago Journal Rank (SJR), derived from Scopus data and developed by the SCImago research group, which weights citations by the prestige of the citing journal rather than treating all citations equally. SJR calculates an average prestige per article using an iterative algorithm akin to PageRank, aggregating data over three years and assigning quartiles (Q1 highest to Q4 lowest) across 27 subject areas for nearly 30,000 sources; for instance, Q1 journals in physics typically have SJR values above 1.0.
Metric | Provider/Database | Citation Window | Key Features | Coverage (approx. titles)
JIF | Clarivate/Web of Science | 2 years | Counts citable items (articles/reviews); field-normalized categories | 21,000
CiteScore | Elsevier/Scopus | 4 years | Includes broader documents; percentiles available | 28,000+
SJR | SCImago/Scopus | 3 years | Prestige-weighted; size-independent | 30,000
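The prestige weighting behind SJR can be illustrated with a simplified PageRank-style iteration over a journal-to-journal citation matrix: a citation transfers more weight when it comes from a journal that is itself heavily cited. This is a sketch of the general idea only, not the actual SJR formula (which adds normalization and size-independence terms), and all numbers are hypothetical.

```python
# Simplified PageRank-style prestige iteration over a citation matrix.
# NOT the real SJR algorithm; a sketch of prestige-weighted citation.

def prestige(cites, damping=0.85, iters=50):
    """cites[i][j] = citations from journal i to journal j.
    Returns a prestige score per journal; each journal distributes
    its current score across the journals it cites."""
    n = len(cites)
    p = [1.0 / n] * n  # uniform starting scores
    for _ in range(iters):
        new = []
        for j in range(n):
            s = 0.0
            for i in range(n):
                out = sum(cites[i])  # total outgoing citations of i
                if out:
                    s += p[i] * cites[i][j] / out
            new.append((1 - damping) / n + damping * s)
        p = new
    return p

# Hypothetical example: journal 0 is heavily cited by the other two,
# so it ends up with the highest prestige score.
cites = [[0, 1, 1],
         [4, 0, 1],
         [5, 1, 0]]
scores = prestige(cites)
```

Because every citation is weighted by the citing journal's own score, two journals with identical raw citation counts can receive different prestige values, which is the key difference from a plain impact factor.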
Google Scholar Metrics provide open-access alternatives via h5-index and h5-median, ranking publications by the largest h such that h articles from the past five years (e.g., 2020–2024) each received at least h citations, covering over 100 disciplines without paywalls. These systems collectively inform institutional evaluations, funding decisions, and tenure processes, though variations in methodology lead to divergent rankings across providers.
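The JIF and h5-index definitions above reduce to short computations. The sketch below applies each formula to hypothetical citation counts: a JIF-style two-year average, and the largest h such that h articles each have at least h citations.

```python
# Two journal metrics from the definitions above, on hypothetical data.

def impact_factor(citations_to_prior_two_years: int,
                  citable_items_prior_two_years: int) -> float:
    """JIF for year Y: citations received in Y to items published in
    Y-1 and Y-2, divided by the count of citable items from Y-1 and Y-2."""
    return citations_to_prior_two_years / citable_items_prior_two_years

def h5_index(citations_per_article: list[int]) -> int:
    """Largest h such that h articles from the past five years each
    received at least h citations."""
    counts = sorted(citations_per_article, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank  # the rank-th article still has >= rank citations
        else:
            break
    return h

# Hypothetical journal: 600 citations in 2024 to 150 citable items
# published in 2022-2023, and six articles with the listed citations.
print(impact_factor(600, 150))        # 4.0
print(h5_index([10, 8, 5, 4, 3, 1]))  # 4
```

Note that the h5-index is robust to a single runaway paper (raising the top article from 10 to 1,000 citations leaves it unchanged), whereas the JIF, being a mean, is not; that asymmetry underlies the skewness critiques in the next section.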

Empirical Critiques of Prestige Measures

Empirical analyses have revealed that journal prestige measures, particularly the Journal Impact Factor (JIF), suffer from high skewness in underlying citation distributions, where a small fraction of articles generates the majority of citations. For instance, studies of citation data show that in 2010, approximately 50% of citations across fields stemmed from the top 10% most-cited papers, with skewness most pronounced in certain disciplines. This renders the JIF overly sensitive to outliers, as the metric averages citations over all citable items but is dominated by exceptional papers, leading to unreliable representations of typical influence. Consequently, using JIF to proxy article or journal quality introduces systematic errors, as median citation rates—less affected by skew—often diverge substantially from mean-based JIF values. Further critiques highlight the weak predictive power of prestige metrics for individual research outcomes or quality. Analyses of thousands of articles demonstrate that JIF and citation counts correlate inconsistently or negatively with independent expert assessments of research quality, such as methodological rigor or novelty evaluated via peer review. Journal rankings based on JIF also exhibit instability, with re-ranking exercises showing frequent changes due to variations in citation windows or self-citation adjustments, undermining their reliability for evaluative purposes. Moreover, prestige-driven evaluations fail to account for field-specific norms; JIFs in biomedicine and cell biology routinely exceed those in economics or social sciences by factors of 5-10 or more, reflecting differing publication volumes and citation practices rather than inherent superiority. Prestige measures also incentivize gaming behaviors that distort scholarly incentives, as evidenced by documented cases of manipulation. Citation cartels—coordinated excessive citing among journals or groups—have led Clarivate to deny JIF calculations to dozens of outlets, including 10 journals in 2020 for self-citation rates exceeding thresholds or "citation stacking."
Such practices inflate metrics without enhancing content quality, while broader empirical links tie higher JIFs to elevated retraction rates, with correlations showing prestigious journals retracting papers at rates up to 10 times higher than lower-tier ones, potentially due to pressures for novel, high-stakes results over replicable findings. Replication studies further indicate inverse relationships, where higher-impact outlets exhibit lower replication success rates, suggesting prestige prioritizes saliency over robustness. These findings underscore how reliance on such metrics perpetuates a Matthew effect, amplifying visibility for already prominent work while marginalizing solid but less cited contributions.
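The skewness critique turns on the gap between a mean (which is what a JIF-style average is) and a median under a heavy-tailed citation distribution. A tiny synthetic example with hypothetical citation counts makes the divergence concrete: one outlier paper drags the mean far above what a typical article in the journal receives.

```python
# Synthetic (hypothetical) citation counts for ten articles in one
# journal: nine lightly cited papers and one outlier.
citations = [0, 0, 1, 1, 1, 2, 2, 3, 4, 120]

# JIF-style average: dominated by the single outlier.
mean = sum(citations) / len(citations)

def median(values):
    """Median of a list: middle value, or mean of the two middle values."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

print(mean)               # 13.4 -- mean-based "impact", outlier-driven
print(median(citations))  # 1.5  -- what a typical article receives
```

Removing the single 120-citation paper drops the mean to about 1.6 while barely moving the median, which is why median citation rates are often proposed as the more honest summary of a journal's typical influence.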

Economic Realities

Publication Costs and Funding Models

Academic journals traditionally operate under a subscription-based funding model, where access fees paid by libraries, institutions, and individual subscribers cover production, distribution, and editorial costs. This system generates substantial revenues for commercial publishers; for instance, the global scholarly publishing market was valued at $26.5 billion in 2020, with major players like Elsevier achieving profit margins of approximately 40%, exceeding those of industries such as software and consumer goods. These margins persist despite digital distribution reducing marginal costs, as publishers leverage market concentration—five firms control over 50% of articles—and institutional "big deal" bundling that discourages cancellations. The rise of open access (OA) has introduced author-pays models, primarily through article processing charges (APCs), shifting costs upstream to authors, their institutions, or funders. In gold OA journals, APCs fund immediate free access, with global averages ranging from $1,626 to $2,000 per article as of recent estimates, though medians in fields like the health sciences reach $2,820. Hybrid journals, which offer optional OA within subscription titles, command higher APCs—often $3,710 versus $1,735 for fully OA equivalents—enabling publishers to extract revenues from both subscribers and authors, a practice termed "double-dipping." APC revenues have grown rapidly, reaching $1.9 billion industry-wide in 2023, with projections to $3.2 billion by 2028, fueled by funder mandates like those from the NIH and Plan S. Alternative models include diamond OA, where journals forgo both APCs and subscriptions, relying on subsidies from learned societies, universities, or grants; these account for a minority of outputs but avoid cost barriers for authors and readers alike. Overall per-article costs vary by scale: small operations (100 articles/year) incur about $354 in direct expenses, dropping under $100 for larger ones due to economies of scale, though commercial publishers layer on higher markups.
Funding typically derives from research grants—e.g., NIH allocations for APCs—or institutional budgets, but disparities arise: high-APC journals correlate with prestige metrics like the impact factor, pressuring underfunded researchers in developing regions. Empirical analyses indicate that while OA expands access, it has amplified publisher profits without proportionally reducing total system costs, as APC hikes outpace inflation.

Subscription Versus Open Access Economics

In the traditional subscription model, academic journals derive revenue primarily from institutional subscriptions paid by universities, libraries, and research organizations, granting access to content for a defined period or volume. Publishers such as Elsevier and Springer Nature have reported substantial profits under this system, with Elsevier achieving a 37% operating margin on $3.5 billion in revenue in 2022, yielding $1.3 billion in profits, largely sustained by subscription fees despite academics providing peer review and content without direct compensation. This model enables publishers to bundle journals into large packages, often leading to escalating costs for subscribers—U.S. academic libraries spent over $1.3 billion on serials in 2020 alone—while restricting access to paying institutions and excluding unaffiliated researchers in lower-resource settings. Open access (OA) models shift costs upstream to authors or funders via article processing charges (APCs), making articles freely readable upon publication without subscription barriers. In gold OA journals, APCs cover production and dissemination, with medians at $2,000 in 2023 for fully OA outlets and $3,230 for hybrid options in subscription journals, though top-tier hybrids like those in Nature-branded portfolios exceed $10,000. Publishers have increasingly monetized OA—recent annual OA revenues reached $589.7 million at one major publisher and $221.4 million at another—often alongside retained subscription income in hybrid setups, raising concerns of "double dipping" where institutions pay both ways. This cost transfer promotes broader dissemination—nearly half of global research output was openly accessible by 2020, enhancing citations in fields like physics by up to 35%—but burdens authors in resource-limited regions, where APC waivers are inconsistent and high fees deter submissions.
Economically, subscription models incentivize publishers to prioritize high-prestige content for bundled sales, yielding margins rivaling tech giants (e.g., Elsevier's near 40%), but they perpetuate access inequities, as evidenced by lower citation rates for paywalled articles compared to OA equivalents in controlled studies. OA, by contrast, aligns incentives toward volume and visibility, potentially accelerating research impact through wider reach—empirical reviews from 2000–2023 show positive effects on societal translation and discovery—but risks APC inflation and quality dilution if funders cap fees, as proposed in policies like Europe's Plan S, which could compress publisher revenues unless offset by scale. Hybrid transitions have not eliminated high profits, with publishers adapting by layering APCs atop subscriptions, but pure OA journals indexed in Scopus or Web of Science match subscription peers in impact metrics, suggesting viability without inherent quality trade-offs. Scholarly societies, reliant on subscriptions for operations, face viability challenges in full OA shifts, often requiring subsidies or consortia to sustain non-profit missions amid for-profit dominance.

Publisher Incentives and Market Dynamics

Academic publishers, particularly the dominant for-profit entities, operate under strong incentives to maximize revenue through high-margin models that leverage subsidized academic labor for content creation and peer review. Major players such as Elsevier, Wiley, Springer Nature, Taylor & Francis, and SAGE—collectively known as the "Big Five"—achieve profit margins exceeding 30%, with Elsevier reporting approximately 40% in recent years, surpassing the margins of major technology companies. These margins stem from low marginal production costs in digital formats, where academics contribute manuscripts and reviews without compensation, while institutions fund both research and access. In the traditional subscription model, publishers bundle thousands of journals into packages sold to libraries, creating "big deal" contracts that inflate costs and revenue streams despite stagnant or declining usage for many titles. This structure incentivizes publishers to acquire and consolidate journals to increase bundle value, contributing to market concentration in which the top five publishers control nearly 50% of scholarly output, up from 39% in prior decades. The oligopolistic dynamics reduce competitive pressure on prices, as libraries face inelastic demand for prestige titles, leading to serials crises in which subscription expenditures crowd out monographs and other resources. The shift toward open access (OA) alters these incentives by replacing reader-side payments with author-side article processing charges (APCs), often exceeding $3,000 per article for hybrid or gold OA journals controlled by the largest publishers. Publishers benefit from this model by capturing fees on top of residual subscription income in hybrid setups, potentially boosting overall profits as OA mandates proliferate, with estimates suggesting APC revenues rival or exceed traditional margins. However, this creates quantity-over-quality pressures, as revenue ties directly to publication volume rather than selectivity, exacerbating volume-driven dynamics without proportional improvements in dissemination or rigor.
Market dynamics further entrench this system through high barriers to entry for independents, despite digital tools lowering technical hurdles, as prestige accrues to established imprints via citation networks and institutional habits. Non-collusive coordination among major publishers sustains elevated prices, with antitrust concerns arising from practices like non-disclosure agreements in negotiations that obscure cost data. Empirical analyses indicate that this concentration correlates with reduced citation rates due to access barriers, underscoring a causal tension between publisher profits and broader scientific progress. Efforts to disrupt the oligopoly, such as diamond OA or consortial bargaining, face resistance from publishers' scale advantages in marketing and infrastructure.

Major Challenges and Crises

Reproducibility and Replication Failures

The reproducibility crisis in science refers to the widespread failure of many peer-reviewed studies to yield consistent results when independently replicated, undermining the reliability of published scientific knowledge. Large-scale replication efforts have demonstrated low success rates across disciplines, with original studies often reporting statistically significant effects that fail to materialize in rigorous re-tests. For instance, the Open Science Collaboration's 2015 project attempted to replicate 100 psychology experiments from three leading journals published in 2008, finding that while 97% of the originals showed significant results, only 36% of replications did, with effect sizes in successful replications averaging less than half of the originals. Similar issues pervade biomedical fields; Amgen scientists in 2012 reported an inability to replicate 47 of 53 "landmark" cancer biology papers from top journals, citing discrepancies in experimental details and data handling. Academic journals exacerbate these failures through systemic incentives that prioritize novel, positive findings over null or replication results, fostering publication bias in which non-significant outcomes are systematically underreported. An analysis of social and behavioral science publications found that nonreplicable papers receive more citations than replicable ones, even after replication failures are documented, as journals and readers favor sensational claims that advance careers and funding prospects. This bias stems from "publish or perish" pressures, where tenure and grants hinge on high-impact outputs in prestigious outlets, deterring replication submissions; many journals explicitly discourage or reject them due to perceived lack of novelty.
Surveys of researchers corroborate the scope: a 2016 Nature poll indicated over 70% had failed to reproduce others' experiments, while a 2024 PLOS Biology survey revealed 72% of biomedical scientists view the field as in crisis, attributing it partly to journal gatekeeping that amplifies questionable practices like selective reporting. Field-specific replication rates highlight uneven but pervasive problems, ranging from roughly 25% in early estimates for some fields, through around 50% in others, to 46% in the Reproducibility Project: Cancer Biology's 2022 assessment of preclinical studies. Journals' peer-review processes, intended as quality filters, often overlook reproducibility flaws because reviewers weigh novelty and plausibility over methodological rigor or data transparency, compounded by underpowered studies (typically below 50% power) that inflate false positives. These dynamics not only propagate erroneous findings into policy and practice—such as ineffective drug candidates based on irreproducible preclinical data—but also erode public trust, as evidenced by resource waste estimated in billions annually from pursuing non-replicable leads. Despite reforms like pre-registration mandates in some journals, core incentive misalignments persist, with replication papers still garnering fewer citations on average than originals or novel claims.
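The connection between low statistical power and unreliable literatures can be made concrete with a short calculation. The sketch below (an illustrative model in the style popularized by Ioannidis, not a figure from the surveys cited above; the parameter values are hypothetical) computes the share of statistically significant results that reflect real effects:

```python
def positive_predictive_value(power: float, alpha: float, prior_true: float) -> float:
    """Share of statistically significant findings that reflect real effects,
    given test power, the false-positive rate alpha, and the fraction of
    tested hypotheses that are actually true."""
    true_positives = power * prior_true          # real effects correctly detected
    false_positives = alpha * (1 - prior_true)   # null hypotheses wrongly rejected
    return true_positives / (true_positives + false_positives)

# At 50% power, alpha = 0.05, and 1 in 10 tested hypotheses true,
# only about 53% of "significant" results are real effects:
print(round(positive_predictive_value(0.50, 0.05, 0.10), 2))  # 0.53
# Raising power to 80% improves, but does not cure, the problem:
print(round(positive_predictive_value(0.80, 0.05, 0.10), 2))  # 0.64
```

The calculation illustrates why underpowered studies, combined with journals' preference for positive results, can fill the published record with findings that later fail to replicate.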

Ideological Biases in Gatekeeping

Surveys of political affiliations reveal a pronounced left-leaning skew in academia, particularly in the social sciences and humanities, with liberal-to-conservative ratios often exceeding 10:1 and some departments reporting zero conservative tenure-track faculty. This homogeneity extends to peer reviewers and editors, who are drawn from the same pools, potentially introducing systematic biases into gatekeeping processes such as manuscript evaluation and editorial decisions. Empirical evidence indicates viewpoint discrimination in peer review, where research diverging from progressive orthodoxies faces disproportionate rejection. In social psychology, a field with a documented 14:1 liberal-to-conservative ratio among researchers, anonymous surveys revealed that 19% of respondents would discriminate against a conservative-leaning paper, rising to 24% for explicitly Republican-associated authors. A 2015 analysis by Duarte et al. highlighted mechanisms such as embedded values and moral purity concerns, whereby reviewers favor hypotheses aligning with egalitarian priors while scrutinizing or dismissing those implying innate group differences. The 2017–2018 Grievance Studies project further exposed vulnerabilities in ideological gatekeeping. Researchers submitted 20 fabricated papers mimicking grievance-studies rhetoric—including a rewrite of a chapter of Mein Kampf as feminist praxis and a study purporting to document rape culture at dog parks—to prominent journals; four were accepted and three published, despite methodological flaws, illustrating how alignment with activist scholarship can bypass rigorous scrutiny. This contrasts with documented rejections of empirically robust work challenging consensus views, as in cases where evolutionary or hereditarian arguments encounter editorial resistance absent for congruent narratives. Such biases manifest causally through self-reinforcing cycles: dominant ideologies shape research agendas, hiring, and funding, marginalizing dissenters and entrenching ideological assumptions in review criteria.
Norwegian survey experiments on research evaluations found ideological skews influencing assessments, with left-leaning evaluators rating conservative-aligned work lower even when it was quality-equivalent. Consequences include stifled hypothesis testing, as alternative explanations in contested areas such as group differences or policy effects are underrepresented, undermining the self-correcting character of scientific inquiry. Reforms such as review blinded to ideology or viewpoint-diversity mandates have been proposed, though implementation remains limited.

Predatory and Low-Quality Outlets

Predatory journals exploit the open-access publishing model by charging authors article processing charges (APCs) without delivering rigorous peer review, editorial oversight, or other standard scholarly services, prioritizing financial gain over scholarly integrity. These outlets often mimic legitimate journals through deceptive websites, fabricated impact factors, and promises of rapid publication, deceiving researchers into submitting work that receives minimal scrutiny. The phenomenon gained prominence with librarian Jeffrey Beall's compilation of potential predatory publishers, first publicized around 2012 and maintained until 2017, which highlighted criteria such as aggressive solicitation via spam emails, fictitious editorial boards, lack of proper indexing in databases like Scopus or Web of Science, and grammatical errors in communications. Common hallmarks include unusually fast peer-review timelines—often weeks rather than months—high acceptance rates without substantive feedback, and hosting on platforms that evade established quality controls. By 2024, databases tracking predatory outlets identified over 18,000 such journal titles, with the number rising to approximately 19,771 by mid-2025, reflecting sustained growth despite awareness efforts. Annual output from these journals escalated from 53,000 articles in 2010 to 420,000 by 2014, and they continue to produce hundreds of thousands of articles yearly, which burdens citation databases and dilutes the scholarly record. Low-quality outlets differ from predatory ones in intent and transparency; while predatory journals actively deceive through false claims of legitimacy, low-quality journals may operate with lax standards—such as superficial review or high-volume acceptance—but without overt deception, often in niche fields lacking robust oversight. Examples include certain mega-journals that prioritize quantity over selectivity, leading to inconsistent rigor, though they typically disclose policies upfront, unlike predatory mimics.
These outlets erode trust in scholarly publishing by disseminating unvetted or flawed research, complicating evidence synthesis as predatory publications flood meta-analyses and citation networks with material of low evidentiary value. They exacerbate resource waste, with authors paying fees upward of $1,000–$3,000 per article for negligible services, and undermine public confidence in evidence-based knowledge, particularly in fields like medicine, where misinformation can propagate. Responses include blacklists such as Cabell's Predatory Reports and successors to Beall's list, alongside initiatives such as the Think. Check. Submit. campaign, which educates researchers on verifying journal credentials through ISSN validity, DOAJ inclusion, and independent metrics. Institutional policies increasingly penalize publications in identified predatory venues during hiring and funding evaluations, though challenges persist due to the fluidity of these operations and varying definitions across disciplines.
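The hallmarks listed above lend themselves to a simple screening checklist. The sketch below is an illustrative weighted scorer loosely based on criteria popularized by Beall's list and the Think. Check. Submit. campaign; the flag names and weights are hypothetical choices for the example, not any organization's official rubric:

```python
# Hypothetical red-flag weights for screening a journal before submission.
RED_FLAGS = {
    "spam_solicitation": 2,     # aggressive email invitations to submit
    "fake_impact_factor": 3,    # metrics not verifiable in standard indexes
    "no_major_indexing": 2,     # absent from Scopus, Web of Science, DOAJ
    "review_under_4_weeks": 2,  # implausibly fast peer-review promises
    "unverifiable_editors": 3,  # editorial board members cannot be confirmed
    "opaque_apc": 1,            # fees revealed only after acceptance
}

def risk_score(observations: dict) -> int:
    """Sum the weights of observed red flags; higher means more suspicious."""
    return sum(w for flag, w in RED_FLAGS.items() if observations.get(flag))

journal = {"spam_solicitation": True, "review_under_4_weeks": True,
           "no_major_indexing": True}
print(risk_score(journal))  # 6: worth verifying credentials before submitting
```

No single flag is decisive—legitimate niche journals can lack indexing, for example—so a cumulative score is a better prompt for manual verification than any one criterion alone.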

Technological and Structural Shifts

Transition to Digital and Electronic Formats

The transition to digital formats for academic journals began experimentally in the late 1970s, with the first peer-reviewed electronic scholarly journal appearing in that period, though it remained a one-time effort limited by the technology available. By the late 1980s, the earliest persistent e-journals emerged in plain-text format, distributed via networks like BITNET and early Internet precursors. These initial efforts focused on overcoming print limitations such as distribution delays and costs, enabling near-instantaneous sharing among researchers connected to academic networks. The 1990s marked acceleration driven by the World Wide Web's emergence around 1991, which facilitated graphical interfaces and broader accessibility. In 1990, Postmodern Culture launched as the first fully online-only peer-reviewed journal, eschewing print entirely and hosted on a university server. The following year, physicist Paul Ginsparg established arXiv, an e-print server for preprints in high-energy physics, which by 1996 served over 35,000 users and demonstrated digital formats' potential for rapid dissemination outside traditional publishing gatekeeping. By 1995, 44 new peer-reviewed electronic journals had appeared, reflecting explosive growth as universities and societies experimented with digital platforms. Commercial publishers initially supplemented print with digital versions in the mid-1990s, with widespread online availability by the early 2000s through archive aggregators such as JSTOR and through publisher portals. This hybrid phase allowed parallel print and electronic editions, but digital adoption surged due to searchability, hyperlinks, and global access, reducing reliance on physical libraries. Empirical data from the period show electronic journals' use rising rapidly; for instance, faculty at research institutions reported increasing reliance on e-journals by the early 2000s, correlating with infrastructure expansion.
Full transitions to electronic-only formats became common for many titles in the following decades, as print costs proved unsustainable amid declining subscriptions and digital efficiencies, though legacy print persisted in some fields for archival purposes. This shift delivered causal advantages such as faster publication cycles and database integration, though it introduced dependencies on stable digital infrastructure and long-term preservation.

Integration of AI and Automation

AI tools are increasingly integrated into academic journal workflows to automate manuscript screening, peer review processes, and production stages, aiming to reduce editorial workload and expedite publication timelines. Publishers such as Wiley introduced AI-enhanced systems in September 2025 for portable peer review and automated manuscript transfers, facilitating seamless movement between journals while preserving review histories. These systems perform initial checks for completeness, formatting compliance, and basic statistical validity, allowing editorial teams to focus on substantive evaluation. In peer review, AI algorithms match manuscripts to suitable reviewers based on expertise derived from publication histories and citation patterns, while conducting preliminary assessments of novelty and methodological soundness. One analysis demonstrated that such automation can balance reviewer workloads and shorten cycles by up to 30% in high-volume journals, though human validation remains essential to preserve rigor. Large language models (LLMs) have been trialed for generating summaries or flagging inconsistencies, with 12 reviewers across disciplines reporting in a February 2025 study that LLMs aided in identifying logical gaps but risked overlooking domain-specific nuances without oversight. Organizations like the American Association for Cancer Research (AACR) imposed bans on AI use in peer reviews in late 2023, correlating with a 50% decline in detected AI-generated text by mid-2025. Detection of AI-generated content poses ongoing challenges, as tools evaluated in 2023–2025 studies often misclassify human-authored text or fail against sophisticated LLMs, with accuracy rates below 70% for scientific abstracts. Journals have responded by mandating disclosure of AI assistance in authorship and reviews, yet anomalies such as repetitive phrasing or factual inconsistencies in published papers indicate undetected infiltration, prompting calls for specialized forensic software.
A study in June 2024 identified rising LLM use in abstract writing, with over 10% of biomedical abstracts showing machine-like patterns after ChatGPT's 2022 release. Alongside the use of AI as largely invisible infrastructure in manuscript screening and peer review, a small number of experimental projects have explored assigning more explicit, quasi-authorial roles to AI systems in connection with journal-adjacent scholarship. Within the philosophical framework of the Aisentica Research Group, for example, the AI-based Digital Author Persona Angela Bogdanova has been registered with an ORCID iD and credited as a co-author of a semantic specification deposited on the research repository Zenodo (DOI 10.5281/zenodo.15732480), alongside human collaborators. While mainstream journal policies informed by bodies such as COPE and the ICMJE continue to prohibit listing AI tools as authors and instead require disclosure of their use as methodological assistance, this kind of experiment positions an AI identity as a named entity within scholarly metadata rather than a purely hidden tool. Such cases highlight emerging tensions for academic journals: how to integrate machine-originated contributions into editorial and authorship workflows while preserving human accountability for research integrity, responsibility, and legal authorship status. Beyond review, automation streamlines production through XML tagging, figure validation, and accessible-format generation, as seen in tools like Typefi, which eliminate manual layout for complex scholarly outputs. These advancements, while boosting throughput—evidenced by reduced turnaround times at AI-adopting publishers—raise causal concerns about embedded biases from training data, potentially amplifying existing academic gatekeeping flaws if not transparently audited. Policies emphasizing hybrid human-AI models, as advocated in January 2025 guidance, seek to harness efficiency gains without eroding credibility.
Open access (OA) in academic journals has expanded significantly since the early 2000s, driven by funder mandates and technological advancements enabling digital dissemination without traditional subscription barriers. By 2024, gold OA—the model providing immediate, fee-based access upon publication—accounted for 40% of global articles, reviews, and conference papers, up from 14% in 2014, reflecting compound growth fueled by policies like the 2002 Budapest Open Access Initiative and subsequent national requirements. However, recent data indicate a slowdown, with OA article output growing only 2.1% in 2023–2024 after a post-COVID spike, leading to a loss of market share relative to subscription models as transformative agreements stabilize hybrid publishing economics. Major publishers have adapted through hybrid journals and fully OA titles, with revenues from OA journal publishing rising from $1.9 billion in 2023 to $2.1 billion in 2024, projected to reach $3.2 billion by 2028 amid increasing article processing charges (APCs) averaging $2,000–$3,000 per paper. For instance, one major publisher reported 44% of its primary research output as OA in 2024, up from 38% in 2022, correlating with a 31% rise in downloads and heightened impact metrics for OA articles, though critics argue this incentivizes volume over rigor in fee-dependent systems. Transformative agreements, negotiated by consortia to offset APCs via block funding, have proliferated since 2019, enabling over one million immediate OA articles by 2024, but face phase-out by funders like cOAlition S after 2024 to prioritize non-hybrid routes.
Plan S, implemented from 2021 by 24 national funders and agencies, has accelerated gold OA in affected fields by mandating immediate access under open licenses, yet independent evaluations in 2024 reveal mixed outcomes: higher OA compliance among affected papers but no disproportionate boost in overall OA rates compared to non-mandated research, alongside unintended shifts toward APC-funded journals that burden unfunded authors. This has spotlighted diamond OA—no-APC, community-sustained models—as an alternative, with global summits in 2024 emphasizing its role in equitable access for non-Western scholars, though scalability remains limited by reliance on institutional subsidies rather than market incentives. Emerging trends include institutional repositories for green self-archiving and collaborative funding to mitigate inequities, particularly in the humanities and social sciences, where subscription legacies persist. While OA enhances dissemination—evidenced by 21% higher downloads in lower-income regions—empirical analyses underscore causal trade-offs: elevated publication volumes without corresponding quality controls, as APC-based models align publisher incentives with output quantity over selective gatekeeping. Future trajectories hinge on balancing openness with rigor, as 2025 policies from entities like the Gates Foundation prioritize zero-embargo access while scrutinizing hybrid costs. In academic publishing, a large share of copyright disputes stems from standard publication agreements requiring authors to transfer exclusive rights to publishers, thereby restricting authors' subsequent use, sharing, or adaptation of their own work despite the research often being funded publicly or institutionally. This transfer typically encompasses reproduction, distribution, and derivative works, leaving authors with limited permissions for personal archiving or preprint sharing, which has fueled conflicts as digital platforms enable broader dissemination.
Publishers justify such clauses by citing investments in peer-review management, editing, and long-term archiving, yet critics, including academic librarians and open-access advocates, argue that these practices enable monopolistic control over publicly funded knowledge. A notable escalation occurred over unauthorized online sharing, exemplified by the 2017 lawsuit filed by Elsevier and the American Chemical Society against ResearchGate, a platform hosting over 20 million research items. The publishers alleged infringement of U.S. copyright through the unauthorized uploading and distribution of approximately 50 full-text articles, many behind paywalls; the case, centered in U.S. federal district court, highlighted tensions over platforms' "making available" rights and authors' sharing habits. Settled in September 2023 without admission of liability, the agreement required ResearchGate to develop proactive detection tools for copyrighted content and to collaborate on takedown procedures, reflecting broader industry efforts to curb systematic infringement while preserving legitimate sharing. Disputes also arise in educational contexts under fair use doctrines, as seen in the protracted litigation initiated in April 2008 by Cambridge University Press, Oxford University Press, and SAGE Publications against Georgia State University. The suit targeted the university's practice of posting digital excerpts from academic books—totaling thousands of pages annually—for course reserves without obtaining permissions, which publishers claimed constituted unauthorized reproduction exceeding fair use limits under 17 U.S.C. § 107. Initial district court rulings in 2012 and 2014 largely favored Georgia State, applying the four fair-use factors (purpose, nature, amount, and market effect) to deem most uses transformative and non-substitutive, though the 11th Circuit's 2018 reversal remanded for re-evaluation of the third factor, underscoring inconsistent judicial interpretations of "amount used" in scholarly contexts; as of 2020, publishers had dropped further appeals after partial losses.
Ownership complexities intensify in multi-author collaborations, prevalent in fields such as particle physics or genomics, where papers may list hundreds of contributors, complicating unanimous consent for transfers. Legal analyses have questioned the enforceability of such agreements when not all listed authors sign or are even aware of the terms, potentially rendering transfers invalid under joint authorship doctrines in jurisdictions such as the United States, where exclusive transfers of undivided interests require the agreement of all co-owners. For instance, a 2021 study of group-authored publications argued that publishers' reliance on lead-author signatures risks voiding exclusivity, exposing works to infringement risks or unauthorized reuse, and recommended explicit multi-party clauses to mitigate disputes. These conflicts reveal underlying causal dynamics: commercial publishers' profit models, generating billions in revenue from subscription fees despite minimal marginal costs post-digitization, clash with academia's ethos of knowledge dissemination, particularly as taxpayer-funded research underpins much output. While publishers maintain that robust copyrights incentivize quality curation, evidence from library budget strains—such as U.S. academic serials expenditures rising 400% from 1986 to 2016, adjusted for inflation—suggests market distortions favoring ownership retention by intermediaries over originators. Shifts toward open-access models, where authors retain copyright via Creative Commons licenses, have reduced some disputes but introduced new ones over article processing charges and hybrid journal compliance.

Handling Fraud, Retractions, and Integrity

Academic journals address research integrity through established protocols for investigating allegations of misconduct, issuing retractions, and upholding ethical standards, primarily guided by frameworks like those from the Committee on Publication Ethics (COPE). Retraction serves to correct the scientific record rather than punish authors, triggered by evidence of major errors, misrepresentation, unethical practices, or compromised peer-review processes. COPE recommends that retraction notices clearly state the reasons, link bidirectionally to the original article, and be published promptly upon confirmation of issues, with journals notifying relevant indexing services and institutions to facilitate removal from databases where appropriate. Despite these mechanisms, peer review prior to publication often fails to detect misconduct, shifting reliance to post-publication scrutiny via platforms like PubPeer and Retraction Watch, which crowdsource identification of anomalies such as image duplication or statistical improbabilities. Empirical data reveal a sharp rise in retractions, with over 10,000 issued globally in 2023 alone, equating to roughly 1 in 500 published papers, a tenfold increase over two decades. Misconduct drives the majority, accounting for 67.4% of cases in a comprehensive analysis, including fraud or suspected fraud (43.4%), plagiarism (9.8%), and duplicate publication (14.2%), while honest errors comprise the remainder. This stems from multiple causal factors: intensified publication pressures in a "publish or perish" environment incentivizing data fabrication; proliferation of paper mills producing fabricated manuscripts for sale; and enhanced detection via statistical tools such as the GRIM test for spotting manipulated datasets and software for identifying AI-generated content. For instance, one major publisher retracted 2,923 papers in 2024, with 41% involving post-2023 publications often linked to paper-mill operations or third-party manipulations.
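One widely cited post-publication consistency check of this kind is the GRIM test (Brown and Heathers), which asks whether a mean reported for integer-valued data—such as Likert-scale responses—is arithmetically possible given the sample size. A minimal sketch:

```python
def grim_consistent(reported_mean: str, n: int) -> bool:
    """GRIM test: can a mean reported at this decimal precision arise from
    n integer-valued observations? True if some integer total reproduces
    the reported mean after rounding."""
    decimals = len(reported_mean.split(".")[1]) if "." in reported_mean else 0
    target = float(reported_mean) * n
    # Only integer totals adjacent to mean * n can possibly match.
    for total in range(max(0, round(target) - 1), round(target) + 2):
        if f"{total / n:.{decimals}f}" == reported_mean:
            return True
    return False

print(grim_consistent("3.48", 25))  # True:  87 / 25 = 3.48 exactly
print(grim_consistent("3.48", 17))  # False: no 17-item total rounds to 3.48
```

A failed GRIM check does not prove fraud—it can also indicate a typo or an unreported exclusion—but, applied across a paper's descriptive statistics, it flags results that warrant requesting the raw data.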
Journals handle fraud allegations through internal investigations or delegation to authors' institutions, demanding raw data for verification and employing forensic analyses, though delays persist due to resource constraints and reluctance to retract high-profile works. Notable cases illustrate enforcement gaps: the 1998 Wakefield study linking MMR vaccines to autism, retracted in 2010 by The Lancet after revelations of data falsification and undisclosed conflicts, continued influencing public health debates for over a decade post-retraction. Similarly, the 2020 Surgisphere COVID-19 dataset paper in The Lancet, retracted amid unverifiable data claims, highlighted vulnerabilities in rapid-review pandemic publishing. Recent COPE updates address evolving threats like paper mills and AI, urging retractions for any misrepresentation, including identity theft or fabricated peer reviews, and emphasizing proactive editor training. Persistent challenges undermine efficacy: retracted papers often retain citations, with studies showing 3.3% of top-cited researchers having authored retracted work, signaling entrenched issues in high-impact fields. Journals mitigate this via watermarking retracted articles and public notices, but systemic incentives—career advancement tied to publication volume over replicability—sustain misconduct, as evidenced by misconduct comprising 65.3% of retractions in the biomedical literature, led by paper mills and peer-review compromises. Enhanced integrity requires bolstering preemptive measures, such as mandatory data repositories and adversarial replication mandates, alongside cultural shifts away from quantity-driven metrics.

References

  1. [1]
    What Is A Scholarly Journal? - Public Administration Research Guide
    Feb 28, 2025 · A scholarly journal is a periodical that contains articles written by experts in a particular field of study and reports the results of research in that field.
  2. [2]
    Q. What's a scholarly journal, academic journal, or peer-reviewed ...
    Jul 14, 2025 · Scholarly/academic journals publish research articles by experts, and these articles usually go through a peer-review process.
  3. [3]
    What are Journals for? - PMC - PubMed Central - NIH
    Journals register work, certify research, disseminate knowledge, archive work, and act as a scientific filter, also providing navigation.
  4. [4]
    How It Works and Where You Fit: The History of Academic Publishing
    Oct 10, 2025 · First journal: Journal des Sçavans, France, January 5, 1665 (image source: Wikimedia; public domain) · Royal Society of London for the ...
  5. [5]
    A Brief History of Academic Journals: Foundations of Knowledge ...
    Jun 12, 2024 · The first academic journal, Journal des Sçavans, was published in Paris in 1665 as a forum to disseminate information and connect with scholars.
  6. [6]
    Scientific Publishing in Biomedicine: A Brief History of Scientific ...
    The first scientific journal that had peer review was the Edinburgh Medical Journal; its papers have been peer-reviewed since 1733 (13, 23). Philosophical ...
  7. [7]
    The limitations to our understanding of peer review
    Apr 30, 2020 · Now, publication of peer-reviewed journal articles plays a pivotal role in research careers, conferring academic prestige and scholarly ...<|control11|><|separator|>
  8. [8]
    Peer Review in Scientific Publications: Benefits, Critiques, & A ...
    The major advantage of a peer review process is that peer-reviewed articles provide a trusted form of scientific communication. Since scientific knowledge is ...
  9. [9]
    The present and future of peer review: Ideas, interventions, and ...
    Jan 27, 2025 · What is wrong with the peer review system? Is peer review sustainable? Useful? What other models exist? These are central yet contentious ...
  10. [10]
    Dissemination of scientific information through open access by ...
    Apr 15, 2024 · Academic journals produce public records of knowledge that alter the landscape of disciplines and hence play a significant role in the diffusion ...
  11. [11]
    Scrutinizing science: Peer review
    In science, peer review helps provide assurance that published research meets minimum standards for scientific quality.
  12. [12]
    [PDF] Academic Journal Pricing and Research Dissemination
    In these two fields, as in economics, subscription-based journals serve as a main media for knowledge dissemination, and journal publication is a major ...
  13. [13]
    Open access improves the dissemination of science: insights from ...
    Oct 15, 2024 · Among various sources, academic and peer-reviewed publications are widely regarded as the most reliable. The OA movement provides ...
  14. [14]
    What is the difference between scholarly journals and popular ...
    Mar 22, 2023 · Articles in scholarly journals (also known as academic, peer-reviewed, or refereed journals) are different from articles in popular magazines for many reasons.<|separator|>
  15. [15]
    Understanding Journals: Peer-Reviewed, Scholarly, & Popular
    Aug 8, 2025 · Scholarly Journals. Although peer-reviewed journals are always scholarly in nature, scholarly journals are not always peer-reviewed.
  16. [16]
    Scholarly vs Popular Sources - Guides - UAA/APU Consortium Library
    Nov 12, 2024 · Scholarly (or academic) journals contain articles written by researchers who are experts in their field. Authors are usually employed by ...
  17. [17]
    Q. What's the difference between scholarly journals and popular ...
    Jul 14, 2025 · While both kinds of periodicals may have information about the same topic, the presentation, depth and type of information will be different.
  18. [18]
    Distinguish between Popular and Scholarly Journals - Library Guides
    Jul 29, 2025 · The purpose of distinguishing between these types of works is to determine their degree of authority and depth of research on a given topic.
  19. [19]
    Scholarly vs. Popular: Characteristics of Scholarly Resources
    Aug 8, 2024 · Scholarly journals (also called academic, professional, or peer-reviewed journals), are written by experts for other experts. They are considered more ...
  20. [20]
    Scholarly Journal Articles - Scholarly vs Popular Periodicals
    This guide explains the differences between scholarly and popular periodicals, and why scholarly peer-reviewed journal articles are considered more credible and ...
  21. [21]
    Scholarly vs Popular - Research Essentials
    Sep 16, 2025 · Scholarly journals contain articles and letters written by scholars and report results of research and other scholarly activities.
  22. [22]
    Journals vs. Magazines - Journals and Magazines
    Feb 23, 2023 · Below is a listing of general characteristics which can be used to identify differences between popular magazines and scholarly journals.
  23. [23]
    A. Scholarly Journals vs. Popular Magazines - UAHT
    Scholarly journals are concerned with academic study and contain research. They often use a process of peer review prior to publishing an article.
  24. [24]
    Scholarly vs. Popular Periodicals - Instruction - Rasmuson Library
    Sep 24, 2025 · Most scholarly journals are peer reviewed or refereed. This refers to a process in which submitted articles undergo rigorous evaluation by a group of academics ...
  25. [25]
    Science periodicals in the nineteenth and twenty-first centuries - PMC
    Oct 5, 2016 · Science periodicals grew from 100 to 10,000 in the 19th century, facilitating science growth. They help create scientific communities and ...
  26. [26]
    [PDF] A-History-of-Scientific-Journals.pdf - UCL Discovery
    1.1 Title page of the first volume of the Transactions, 1665–6. 1.2 'Introduction' to the first issue of the Transactions, 1665.
  27. [27]
    History of Philosophical Transactions | Royal Society
    Philosophical Transactions is the world's first and longest-running scientific journal. It was launched in March 1665 by Henry Oldenburg (c.1619-1677), ...
  28. [28]
    Journal des sçavans: The First Scientific Journal Begins Publication
    On January 5, 1665 French writer Denis de Sallo, Sieur de la Coudraye (pseudonym Sieur d'Hédonville) published from Paris the first issue of ...
  29. [29]
    Home | Philosophical Transactions of the Royal Society of London
    It was launched in March 1665 by Henry Oldenburg (c.1619-1677), the Society's first Secretary, who acted as publisher and editor.
  30. [30]
    Henry Oldenburg: The first journal editor - PMC - NIH
    In 1665, Oldenburg came up with a proposal to the Royal Society where he decided to bring out a printed version of the scientific communications of the society.
  31. [31]
    Why We Publish: The Past, Present, and Future of Science ...
    Apr 30, 2013 · And thus the scientific journal was born, in 1665, as the Philosophical Transactions of the Royal Society. Now scientific work could finally ...
  32. [32]
    [PDF] Scientific, Medical, and Technical Periodicals in Nineteenth-Century ...
    In 1800, there were nine British medical titles, and 11 scientific titles. By 1900, there were over 150 medical and 110 scientific titles.
  33. [33]
    19th century – A History of Scientific Journals
    Throughout the nineteenth century the number of people conducting scientific research, or working in a scientific job, was increasing rapidly. One of the ...
  34. [34]
    Scopus 1900–2020: Growth in articles, abstracts, countries, fields ...
    Science is not static, with the number of active journals increasing at a rate of 3.3%–4.7% per year between 1900 and 1996 (Gu & Blackmore, 2016; Mabe & Amin, ...
  35. [35]
    A, Increased number of scholarly journals from their birth (1665). The...
    May 16, 2025 · The number of active scholarly peer-reviewed journals has been estimated to be 30,000 in 1961 (24) (27), indicating a linear growth at least in ...
  36. [36]
    The rate of growth in scientific publication and the decline in ... - NIH
    The data indicated a growth rate of about 5.6% per year and a doubling time of 13 years. The number of journals recorded for 1950 was about 60,000 and the ...
  37. [37]
    350 years of scientific periodicals | Notes and Records - Journals
    Jul 15, 2015 · As the nineteenth century wore on, the increased desire to publish articles was reflected in the increased bulk of the volumes of Philosophical ...
  38. [38]
    [PDF] The growth and number of journals - Serials
    The number of active, peer-reviewed journals is estimated at 14,694, with growth since 1665. Estimates range from 10,000 in 1951 to 71,000 in 1987.
  39. [39]
    Credibility, peer review, and Nature, 1945–1990 | Notes and Records
    Jul 1, 2015 · This paper examines the refereeing procedures at the scientific weekly Nature during and after World War II. In 1939 former editorial ...
  40. [40]
    The History and Practice of Peer Review - 2014 - Wiley Online Library
    Nov 22, 2013 · Peer review did not become standard practice for many scientific journals until after World War II (WW II). For example, the British medical ...
  41. [41]
    Credibility, peer review, and Nature, 1945–1990 - PMC
    Jul 1, 2015 · This paper examines the refereeing procedures at the scientific weekly Nature during and after World War II.
  42. [42]
    Self-help for learned journals: Scientific societies and the commerce ...
    Mar 18, 2021 · In the decades after the Second World War, learned society publishers struggled to cope with the expanding output of scientific research and ...
  43. [43]
  44. [44]
    Types of Scholarly Articles - Resource Guides - SUNY Oswego
    Oct 15, 2024 · The types of scholarly articles are: original research, brief communications, review articles, case studies, methods/methodologies, and ...
  45. [45]
    An introduction to the journal review and editorial process - PMC - NIH
    Step 1: Editorial office checks manuscript against journal requirements · Step 2: Editors perform an initial review of the manuscript · Step 3: Peer review of the ...
  46. [46]
    Editorial and Peer Review Process | PLOS One - Research journals
    The editor reviews the manuscript against our publication criteria and determines whether reviews from additional experts are needed to evaluate the manuscript.
  47. [47]
    Journal Rejections: How Common Are They? - Enago Academy
    Nov 5, 2020 · From 2005 to 2010 the overall acceptance rates decreased slightly from 40.6% to 37.1%. The major reason is probably the increased share of ...
  48. [48]
    The Peer Review Process - Wiley Author Services
    The peer review process · 1. Submission of paper · 2. Editorial Office assessment · 3. Appraisal by the Editor-in-Chief (EIC) · 4. EIC assigns an Associate Editor ( ...
  49. [49]
    How long will the peer review process take?
    The typical time from submission to first decision on a manuscript is approximately 5 weeks. Authors can ensure the fastest review times by providing all ...
  50. [50]
    How Long Should Authors Wait for a Journal's Response ... - AJE
    May 4, 2023 · On average, the length of time it takes an editor to process a paper submitted to their journal and send it out for peer review is 2-3 weeks.
  51. [51]
    Types of Peer Review - AJE
    Apr 2, 2024 · In this blog, we will review the most common types of peer review processes. We will also discuss some pros and cons of each.
  52. [52]
    Western scientists more likely to get rejected papers published
    Jul 2, 2024 · Some 62% were rejected. The researchers scoured a bibliometric database to see whether the same (or similar) work was subsequently published ...
  53. [53]
    Journal Rejection Rate by Field: 2025 Data Analysis & Strategies
    “Our analysis of over 2,000 journals across disciplines reveals that acceptance rates average 32%, with a range from just over 1% to 93.2%. Fields with formal ...
  54. [54]
    Why Do Research Papers Get Rejected? - PMC - NIH
    Rejection Rates. Rejection rates of various top-tier journals including ours vary between 80 and 85%. Some journals have reported it to be around 90–95% [3–5].
  55. [55]
    What errors do peer reviewers detect, and does training improve ...
    A second study introduced 10 major and 13 minor errors in a manuscript and distributed it to 262 reviewers of the Annals of Emergency Medicine. Reviewers failed ...
  56. [56]
    Peer Review Bias: A Critical Review - Mayo Clinic Proceedings
    Failure of peer reviewers to assess the quality of studies. Poor ... systematic reviews on various peer review practices. The International Congress on ...
  57. [57]
    Problems with Peer Review Shine a Light on Gaps in Scientific ...
    Apr 13, 2023 · Inter-rater reliability scores can be calculated to determine what scientific errors scientists commonly fail to detect, and course content or ...
  58. [58]
    [PDF] ADVANCES IN INFORMATION SCIENCE Bias in Peer Review
    Dec 6, 2012 · Failures in impartiality lead to outcomes that result from the “luck of the reviewer draw” (Cole, Cole, & Simon, 1981, p. 885), fail to uphold ...
  59. [59]
  60. [60]
    Types of Peer Review - Wiley Author Services
    The three most common types of peer review are single-anonymized, double-anonymized, and open peer review.
  61. [61]
    Understanding peer review - Author Services - Taylor & Francis
    Peer review is vitally important to uphold the high standards of scholarly communications, and maintain the quality of individual journals. It is also an ...
  62. [62]
    Open peer review urgently requires evidence: A call to action - PMC
    Oct 4, 2023 · OPR models are also increasingly the de facto standard for related innovations, including preprint peer review, registered reports [7], and ...
  63. [63]
    Open Peer Review | Essential Information from F1000Research
    Our peer review process is formal, invited, and fully open (meaning the reports are published alongside the article, along with reviewer names and affiliations) ...
  64. [64]
    8 Things you should know about open peer review - Blog
    Oct 12, 2021 · Gates Open Research operates a post-publication, open peer review model. Articles are published online and then undergo formal, open peer ...
  65. [65]
    Registered Reports - Center for Open Science
    Registered Reports is a publishing format that emphasizes the importance of the research question and the quality of methodology by conducting peer review ...
  66. [66]
    Registered Reports | Royal Society Open Science - Journals
    A Registered Report is a form of journal article in which methods and proposed analyses are pre-registered and peer-reviewed prior to research being conducted.
  67. [67]
    What We Publish | PLOS One
    Registered Reports are research articles that undergo peer review at the study design or protocol stage, prior to conducting experiments, data collection or ...
  68. [68]
    Peer review | Discover F1000's open, post-publication process
    Post-publication peer review allows research to be viewed and cited immediately, while signaling that the article is awaiting review by experts in the field.
  69. [69]
    Peer-reviewed preprints and the Publish-Review-Curate model
    Oct 28, 2024 · The Publish-Review-Curate (PRC) model involves depositing preprints, peer review, and then curation, which is a selective process of organizing ...
  70. [70]
    Alternative Peer Review Models: Through the Editor's Lens
    Sep 25, 2023 · Alternative peer review models include manuscript seminars, open peer review, direct access for board members, and review workshops.
  71. [71]
    Introducing the DARIAH Overlay Journal: an alternative and ...
    Feb 29, 2024 · An overlay journal is an innovative type of scholarly publication that operates on top of pre-existing data repositories and preprint servers.
  72. [72]
    Journal Citation Reports | Clarivate
    Evaluate journals with multiple indicators, including the Journal Impact Factor (JIF), alongside descriptive open access statistics and contributor information.
  73. [73]
    Journal Citation Reports: Document Types Included in the Impact ...
    The Impact Factor is calculated by dividing the number of citations in the Journal Citation Reports year (the numerator) by the total number of citable items ...
  74. [74]
    A closer look at the Journal Impact Factor numerator - Clarivate
    Apr 26, 2017 · The Journal Impact Factor is defined as citations to the journal in the current year to items in the previous two years divided by the count of ...
  75. [75]
    Scopus CiteScore - Elsevier
    CiteScore metrics, powered by Scopus, measure citation impact for journals, book series, and more, using data from 7000+ publishers across 334 disciplines.
  76. [76]
    Introducing CiteScore 2024: A Comprehensive and Transparent ...
    Jun 9, 2025 · CiteScore is a metric for journal citation impact, calculated for active titles using Scopus data, and is more comprehensive than JIF.
  77. [77]
    Scopus metrics - Elsevier
    Scopus metrics help measure the impact of scholarly research, including the h-index, citations, and journal metrics such as the CiteScore, SNIP and SJR.
  78. [78]
    [PDF] SCImago Journal Rank (SJR) indicator
    The SCImago Journal Rank (SJR) is based on the transfer of prestige from a journal to another one; such prestige is transferred through the references that a ...
  79. [79]
    Google Scholar Metrics Help
    Available Metrics. The h-index of a publication is the largest number h such that at least h articles in that publication were cited at least h times each.
  80. [80]
    English - Google Scholar Metrics
    h5-index is the h-index for articles published in the last 5 complete years. It is the largest number h such that h articles published in 2020-2024 have at ...
  81. [81]
    Skewness of citation impact data and covariates of ...
    In 2010, nearly half of the citation impact is accounted for by the 10% most-frequently cited papers; the skewness is largest in the humanities.
  82. [82]
    Citation inequality and the Journal Impact Factor: median, mean ...
    Dec 17, 2020 · Skewed citation distribution is a major limitation of the Journal Impact Factor (JIF) representing an outlier-sensitive mean citation value per journal.
  83. [83]
    Use of the journal impact factor for assessing individual articles
    May 14, 2020 · Most scientometricians reject the use of the journal impact factor for assessing individual articles and their authors.
  84. [84]
    Citation counts and journal impact factors do not capture some ...
    Aug 17, 2022 · Both citation counts and impact factors were weak and inconsistent predictors of research quality, so defined, and sometimes negatively related to quality.
  85. [85]
    Reliability of journal impact factor rankings
    Nov 15, 2007 · This article proposes that caution should be exercised in the interpretation of journal impact factors and their ranks.
  86. [86]
    [PDF] Differences in Impact Factor Across Fields and Over Time
    Average impact factors differ not only over time, but across fields. For example, in 2006 the highest impact factor in the field of economics is 4.7, held by ...
  87. [87]
    [PDF] Uses and Misuses of the Journal Impact Factor
    Journals in fundamental life sciences (e.g., biochemistry and cell biology) and neurological science on average have higher impact factors than do journals ...
  88. [88]
    Ten journals denied 2020 Impact Factors because of excessive self ...
    Jun 30, 2021 · Clarivate suppressed 10 journals for excessive self-citation which inflates the Impact Factor, or for “citation-stacking,” sometimes referred to as taking part ...
  89. [89]
    Reproducibility of science: Fraud, impact factors and carelessness
    Indeed there is a very tight positive correlation between the Impact Factor of a journal and the fraction of papers that are retracted [12]. This correlation ...
  90. [90]
    Higher impact factor journals have lower replicability indexes
    Feb 7, 2022 · Higher impact factor journals have lower replicability indexes, and the relationship is negative, meaning impact factor does not predict actual ...
  91. [91]
    Deep impact: unintended consequences of journal rank - PMC
    Supporting these weak correlations between journal rank and future citations are data reporting classification errors (i.e., whether a publication received too ...
  92. [92]
    [PDF] The Cost of Knowledge: Academic Journal Pricing and Research ...
    Dec 22, 2024 · The Web of Science recorded 24,974 journals in October 2021, and the market value of scholarly publishing was $26.5 billion in 2020. The market ...
  93. [93]
    The money behind academic publishing
    Aug 17, 2020 · Elsevier has a profit margin approaching 40 %, which is higher than that of companies such as Microsoft, Google and Coca Cola, and the curve ...
  94. [94]
    The oligopoly's shift to open access: How the big five academic ...
    Nov 1, 2023 · For gold OA journals that do rely on APCs, the average author fees range from US$1,371 to $2,000 (Crawford, 2022; Jahn & Tullney, 2016; ...
  95. [95]
    High Prices and Market Power of Academic Publishing Reduce ...
    Apr 24, 2024 · Elsevier has a roughly 40% profit margin, comparable to highly profitable companies such as Microsoft, Google, and Coca–Cola. No wonder academic ...
  96. [96]
    Weighing the Cost: Open Access Article Publishing Charges ...
    Jan 23, 2024 · APCs are the most common funding method for this type of journal, and the current global average of APCs is $1,626. ... For a more direct ...
  97. [97]
    Open Access Publishing Metrics, Cost, and Impact in Health ... - NIH
    Oct 16, 2024 · The median (IQR) APC for all journals was $2820.00 ($928.00-$3300.00). Associations were observed between impact factor and APC (β coefficient, ...
  98. [98]
    Paying to publish: A cross-sectional analysis of article processing ...
    Overall, the median cost to publish open access was significantly greater for hybrid journals compared with open access journals ($3710 vs $1735; P<0.0001).
  99. [99]
    Open Access Journal Publishing 2024-2028 - The Freedonia Group
    Open Access sales reached $1.9 billion in 2023 and $2.1 billion in 2024 (up from $1.8 billion in 2022), and are expected to grow to $3.2 billion by 2028.
  100. [100]
    NIH decision looms about caps on scholarly-journal publishing fees
    Oct 6, 2025 · Journals that are free to read, or open access, usually charge authors a fee, known as an article processing charge (APC), to publish in them.
  101. [101]
    Business Models for Journals - Open Access Network
    Open access is often funded in the author-pays model with publication charges (APC). Many open access journals are free of charge.
  102. [102]
    Current market rates for scholarly publishing services - PMC
    Small journals with 100 articles would face average per article total publication costs of US$353.71, while journals with 1,000 or more articles would only face ...
  103. [103]
    Study examines 5 commercial academic publisher profits from open ...
    Dec 22, 2023 · This study illustrates how the dominant scholarly journal publishers are using open access article publication to increase their profits.
  104. [104]
    (PDF) Estimating global article processing charges paid to six ...
    This study presents estimates of the global expenditure on article processing charges (APCs) paid to six publishers for open access between 2019 and 2023.
  105. [105]
    Shaking up the scholarly publishing market – Why caps on APCs ...
    Sep 11, 2025 · For example, 54% of the $55.2 million Elsevier revenue estimated to stem from APCs on NIH-funded articles ($13.9 from gold, $41.4 from hybrid ...
  106. [106]
    Research Companies: Elsevier - SPARC
    Elsevier operates at a 37% reported operating profit margin compared to Springer Nature which operates at a 23% margin. Elsevier has often pursued tone-deaf ...
  107. [107]
    Scientists paid large publishers over $1 billion in four years to have ...
    Nov 21, 2023 · Elsevier's annual income was $3.5 billion, with $1.3 billion in profit, according to its 2022 accounts. “This means that for every $1,000 that ...
  108. [108]
    The economic impact of open science: a scoping review - Journals
    Sep 17, 2025 · This article summarizes a comprehensive scoping review of the economic impact of open science (OS), examining empirical evidence from 2000 ...
  109. [109]
    Is the pay-to-publish model for open access pricing scientists out?
    Aug 1, 2024 · In 2023, the median APC for gold OA was $2000, and for hybrid, $3230, Haustein's study found. At the high end, the Nature portfolio of hybrid ...
  110. [110]
    How the big five academic publishers profit from article processing ...
    Among the five publishers, Springer Nature made the most revenue from OA ($589.7 million), followed by Elsevier ($221.4 million), Wiley ($114.3 million), ...
  111. [111]
    Open access is working — but researchers in lower-income ... - Nature
    Jun 11, 2024 · Almost half of the world's research output in 2020 is now available to be read or downloaded without payment, according to the Global State of ...
  112. [112]
    How Scientific Publishers' Extreme Fees Put Profit Over Progress
    May 31, 2023 · With more than $2 billion in annual revenue, the publisher's profit margin approaches 40 percent—rivaling that of Apple and Google. “Elsevier ...
  113. [113]
    Does it pay to pay? A comparison of the benefits of open-access ...
    Feb 27, 2024 · Here, we tested if paying to publish open access in a subscription-based journal benefited authors by conferring more citations relative to ...
  114. [114]
    Open access versus subscription journals: a comparison of scientific ...
    Jul 17, 2012 · Our results indicate that OA journals indexed in Web of Science and/or Scopus are approaching the same scientific impact and quality as subscription journals.
  115. [115]
    Scholarly Associations and the Economic Viability of Open Access ...
    The paper considers a number of economic issues that scholarly associations are confronting in moving their journals online, with a focus on the possible ...
  116. [116]
    Quantifying Consolidation in the Scholarly Journals Market
    Oct 30, 2023 · Over this period, the top 5 largest publishers increased their share of the market from 39% to 49%, the top 10 largest increased from 47% to 58% ...
  117. [117]
    The Economics of Scientific Publishing - PMC - PubMed Central - NIH
    Jun 30, 2023 · The peculiar nature of scientific publishing has allowed for a high degree of market concentration and a non-collusive oligopoly.
  118. [118]
    The Oligopoly of Academic Publishers in the Digital Era | PLOS One
    Jun 10, 2015 · Due to the publisher's oligopoly, libraries are more or less helpless, for in scholarly publishing each product represents a unique value and ...
  119. [119]
    The gold rush: Why open access will boost publisher profits
    Jun 4, 2019 · Under open access, anyone can access research articles, reducing the need for libraries to subscribe to journals. This might benefit libraries, ...
  120. [120]
    (PDF) The Oligopoly's Shift to Open Access. How the Big Five ...
    This study estimates fees paid for gold and hybrid open access articles in journals published by the oligopoly of academic publishers, which acknowledge funding ...
  121. [121]
    The misalignment of incentives in academic publishing and ... - PNAS
    For most researchers, academic publishing serves two goals that are often misaligned—knowledge dissemination and establishing scientific credentials.
  122. [122]
    The challenge of open access incentives - Science
    Oct 20, 2022 · In open access models based on author publication fees, the publishers make more money by publishing more articles. Quantity incentives increase.
  123. [123]
    Challenging the Academic Publisher Oligopoly
    Nov 16, 2022 · The largest academic publishers—the oligopoly—package their products in bundles, much like US cable-TV providers do. Before streaming, the US ...
  124. [124]
    Estimating the reproducibility of psychological science
    Aug 28, 2015 · We conducted a large-scale, collaborative effort to obtain an initial estimate of the reproducibility of psychological science.
  125. [125]
    In cancer science, many "discoveries" don't hold up | Reuters
    Mar 28, 2012 · During a decade as head of global cancer research at Amgen, C. Glenn Begley identified 53 "landmark" publications -- papers in top journals ...
  126. [126]
    Nonreplicable publications are cited more than replicable ones
    May 21, 2021 · Our main finding is that papers that fail to replicate in (5–7) are cited more than those that are replicable. We find no significant change in ...
  127. [127]
    Reproducibility and replicability in research: What 452 professors ...
    Mar 26, 2025 · In 2016, a survey published in Nature reported that more than 70% of researchers have attempted and failed to reproduce other scientists' ...
  128. [128]
    72% of biomedical researchers say field in crisis: survey
    Nov 5, 2024 · 72% of respondents agreed that the field is facing a reproducibility crisis. What's more, 62% of the researchers blamed a culture of “publish or perish” for ...
  129. [129]
    Preclinical cancer research suffers another reproducibility blow
    Jan 13, 2022 · The Reproducibility Project: Cancer Biology (RP:CB) was able to replicate just 46% of the preclinical experiments it re-ran, the team behind the ...
  130. [130]
    Do replications receive fewer citations? A counterfactual approach
    Mar 25, 2025 · Most replications receive fewer citations than their matched counterfactuals, but a sizable portion, and sometimes even a majority, receive more.
  131. [131]
    Homogenous: The Political Affiliations of Elite Liberal Arts College ...
    Thus, 78.2 percent of the academic departments in my sample have either zero Republicans, or so few as to make no difference. My sample of 8,688 tenure track, ...
  132. [132]
    Political diversity will improve social psychological science
    Jul 18, 2014 · Increased political diversity would improve social psychological science by reducing the impact of bias mechanisms such as confirmation bias, and by empowering ...
  133. [133]
    Political diversity will improve social psychological science - PubMed
    Increased political diversity would improve social psychological science by reducing bias and empowering dissenting minorities to improve the quality of the ...
  134. [134]
    Is Social Psychology Biased Against Republicans? - The New Yorker
    Oct 30, 2014 · Over all, close to nineteen per cent reported that they would have a bias against a conservative-leaning paper; twenty-four per cent, against a ...
  135. [135]
    Academic Grievance Studies and the Corruption of Scholarship
    Jan 22, 2020 · We undertook this project to study, understand, and expose the reality of grievance studies, which is corrupting academic research.
  136. [136]
    Powerless Conservatives or Powerless Findings? | PS
    Jun 25, 2020 · A relative lack of conservatives in political science can lead to bias in publications against political science research supporting conservative viewpoints.
  137. [137]
    Ideological biases in research evaluations? The case of research on ...
    May 23, 2022 · The ideological skew might influence research evaluations, but empirical evidence is limited. We conducted a survey experiment where Norwegian ...
  138. [138]
    Predatory Journals: What They Are and How to Avoid Them - NIH
    List of Predatory Journals. https://predatoryjournals.com/journals/. Accessed February 28, 2020. 32. Cabells International. Cabells International. https:// ...
  139. [139]
    Predatory Journals: What the Researchers and Authors Should Know
    Feb 21, 2024 · Predatory journals are publications that exploit the academic publishing model by charging fees to authors without providing the expected ...
  140. [140]
    How to spot predatory journals: 4 tips and 2 checklists
    Sep 30, 2025 · Predatory journals are publications that present themselves as legitimate academic journals but prioritize profit by charging authors a fee ...
  141. [141]
    How to recognize predatory journals - Beall's List
    One can use the criteria used by Jeffrey Beall for determining predatory publishers. They were used to create the original Beall's list and are used now for the ...
  142. [142]
    Beall's List – of Potential Predatory Journals and Publishers
    A list of potential predatory publishers created by librarian Jeffrey Beall. We will only update links and add notes to this list.
  143. [143]
    Distinguishing Predatory from Reputable Publishing Practices - PMC
    There are 5 key practices that distinguish reputable journals. First, reputable journals rely on peer reviewers who are experts in their respective fields.
  144. [144]
    Predatory Journals
    Predatory journals charge fees without quality checks, lack peer review, and may use fake academics and misleading claims.
  145. [145]
    Predatory Publishing in 2025 - Research Information
    Dec 9, 2024 · This year we have seen the number of journals included in Cabell's Predatory Reports database rise to an all-time high of 18,000 titles ...
  146. [146]
    Prevalence of predatory journals in OpenAlex
    Jun 3, 2025 · At the time of our licensed access to the service, May 2025, the list comprised reports on 19,771 predatory journals. Based on the consideration ...
  147. [147]
    The Dark Side of Publishing: Predatory Journals and Congresses ...
    Sep 9, 2025 · Indeed, the number of articles published in predatory journals increased from 53,000 in 2010 to 420,000 in 2014. As of May 2022, Cabells' ...
  148. [148]
    Mega Journals 2: Promising or Predatory? - Researcher Connect
    While mega journals are not necessarily predatory journals, authors should be cautious and critical in order to select journals which are ...
  149. [149]
    Journals Operating Predatory Practices Are Systematically Eroding ...
    Jun 19, 2025 · Such predatory practices result in the systematic degradation of research quality and its “truthfulness”. Moreover, they undermine the science ...
  150. [150]
    Problems and challenges of predatory journals - PubMed Central
    The companies publishing predatory journals are an emerging problem in the area of scientific literature as they only seek to drain money from authors.
  151. [151]
    Confronting the Rise of Predatory Publishing: Implications for Public ...
    This trend has continued, with recent analyses indicating that predatory journals now publish hundreds of thousands of articles annually. Several factors ...
  152. [152]
    Full article: Evolution of Scientific Productivity on Predatory Journals
    May 22, 2024 · Researchers who publish in predatory journals may unknowingly contribute to the dissemination of misinformation, which can have negative ...
  153. [153]
    Cabell's New Predatory Journal Blacklist: A Review
    Jul 25, 2017 · The purpose of a predatory-journal blacklist is to identify and call attention to such scam operations, in order to make it harder for them to fool authors.
  154. [154]
    Resources - Think. Check. Submit.
    In our resources you can learn about various topics such as predatory publishing, metrics, preservation, peer review and much more.
  155. [155]
    Blacklists and Whitelists To Tackle Predatory Publishing: a Cross ...
    The considerable overlap between the two blacklists indicates that Cabell's list may use Beall's list as a source of predatory publishers. The relatively ...
  156. [156]
    How To Cope With Predatory Journals - PMC - PubMed Central - NIH
    Jan 7, 2025 · It is difficult to pinpoint the number of predatory journals, but in 2021, it was estimated to be more than 15,000. Therefore, as of 2025, ...
  157. [157]
    [PDF] Back to basics: what is the e-journal? - White Rose Research Online
    The earliest e-journals date from the late 1980s, in plain text format, and some of them can still be viewed in their original form today.
  158. [158]
    Electronic Journals: A Short History and Recent Developments - OiTiO
    One of the most important factors in the development of electronic journals in the past two years is the emergence of the World Wide Web as an enabling ...
  159. [159]
    Scholarly Publishing: a Brief History - AJE
    Scholarly publishing is a unique and ever-evolving industry with a long history; The earliest journals date back to the 17th century; Recent innovations ...
  160. [160]
  161. [161]
    [PDF] Faculty Use of Electronic Journals at Research Institutions - ALAIR
    The meteoric rise in the number of electronic journals pub- lished during the 1990s (see Figure 1) is documented in the. ARL Directory of Electronic Journals, ...
  162. [162]
    The Second Digital Transformation of Scholarly Publishing
    Jan 29, 2024 · The scholarly publishing sector is undergoing its second digital transformation. The first saw a massive shift from paper to digital, ...
  163. [163]
    As the world turns: scientific publishing in the digital era - PMC
    Feb 26, 2024 · A quarter of the way into the 21st Century the technology of encoding and transmitting information in digital form is in full flower.
  164. [164]
    Wiley Advances Research Exchange Platform with AI-Driven ...
    Sep 29, 2025 · AI-enhanced transfers and portable peer review capabilities mark significant step toward frictionless scholarly publishing.
  165. [165]
    The Rise of Intelligent Workflows in Academic Publishing 2025
    Apr 28, 2025 · Discover how AI-driven intelligent workflows are transforming academic publishing in 2025, boosting speed, accuracy, compliance, ...
  166. [166]
    AI in Academic Peer Review: Opportunities, Challenges in 2025
    Sep 10, 2025 · AI can streamline the peer review process by automating reviewer matching, balancing workloads, and conducting initial quality checks. By ...
  167. [167]
    Accelerating editorial processes in scientific journals: Leveraging AI ...
    The study showcases how AI can contribute to improved manuscript quality, increased efficiency in review processes, and enhanced accessibility.
  168. [168]
    Exploring the Impact of Generative AI on Peer Review: Insights from ...
    Feb 11, 2025 · This study investigates the perspectives of 12 journal reviewers from diverse academic disciplines on using large language models (LLMs) in the peer review ...
  169. [169]
    AI tool detects LLM-generated text in research papers and peer ...
    Sep 11, 2025 · The analysis found that AI-generated text in peer-review reports dropped by 50% in late 2023, after the AACR banned peer reviewers from using ...
  170. [170]
    Evaluating the efficacy of AI content detection tools in differentiating ...
    Sep 1, 2023 · This study investigates the capabilities of various AI content detection tools in discerning human and AI-authored content.
  171. [171]
    Detecting generative artificial intelligence in scientific articles
    We observed that most detectors are not effective in detecting a text generated by a generative AI, and even a human text could be identified as being generated ...
  172. [172]
    Obvious artificial intelligence‐generated anomalies in published ...
    Sep 6, 2024 · Academic journals are encouraged to integrate specialized software tools tailored for the detection of AI-generated content. Comparable to ...
  173. [173]
    Detecting machine-written content in scientific articles
    Jun 13, 2024 · UChicago researchers evaluated thousands of scientific abstracts and spotted a growing trend that researchers are using AI tools in scientific writing.
  174. [174]
    Scientific & Scholarly Publishing Automation - Typefi
    Typefi automates publishing, turning content into outputs, generating complex elements, and creating accessible formats without manual work, using InDesign.
  175. [175]
  176. [176]
    Peer Review in the AI Era: A Call for Developing Responsible Integration
    Jan 24, 2025 · This editorial addresses the critical and timely challenge of integrating AI into peer review, a rapidly emerging issue poised to reshape ...
  177. [177]
    Uptake of Open Access (OA) - STM Association
    Between 2014 and 2024, the percentage share of global articles, reviews and conference papers made available via gold has increased by 26%, from 14% to 40%.
  178. [178]
    News & Views: Open Access Loses Share – Market Sizing 2024 ...
    Jul 16, 2024 · OA article output grew by 2.1%. Based on underlying trends, we estimate a 2023-2025 CAGR (average growth each year) in OA output of around 10%.
  179. [179]
    Open access loses market share for the first time in years
    Oct 15, 2024 · Following a “post-COVID spike”, growth has now slowed back down to long-term trends, with OA losing market share.
  180. [180]
    With 44% of its published articles now open access (OA), Springer ...
    Aug 8, 2024 · 44% of the publisher's primary research is now published OA in its hybrid and fully OA journals, up from 38% in 2022.
  181. [181]
    Springer Nature's 2024 Open Access (OA) report highlights growing ...
    Jul 31, 2025 · Downloads of OA book and journal content rose by over 31% in 2024. Wider global reach - downloads of OA content increased by 21% in Lower-middle ...
  182. [182]
    MARKET WATCH - ESAC Initiative
    Apr 30, 2025 · the growth of open access via transformative agreements and the ... publication of well over one million articles immediately open access by 2024.
  183. [183]
    A mixed review for Plan S's drive to make papers open access
    Oct 15, 2024 · The open-access percentage among Coalition S–backed papers did increase by more than the ones not subject to an open-access mandate in one ...
  184. [184]
    cOAlition S confirms the end of its financial support for Open Access ...
    Jan 26, 2023 · The leadership of cOAlition S reaffirms that, as a principle, its members will no longer financially support these arrangements after 2024.
  185. [185]
    Galvanising the Open Access Community: A Study on the Impact of ...
    Oct 15, 2024 · This is the report arising from the study on the impact of Plan S commissioned by the cOAlition S group of funders to assess the impact of their policy ...
  186. [186]
    Recap of the 2nd Global Summit on Diamond Open Access, 2024
    Sep 2, 2025 · This report offers a brief recap of the 2nd Global Summit on Diamond Open Access, held from 11–13 December 2024 at the University of Cape Town.
  187. [187]
    Diamond Open Access in Scholarly Publishing (2024) - OPERAS
    Smaller publishers tend to be squeezed out, and this trend can increase the concentration of the industry. The presence of a portal is also useful for the ...
  188. [188]
    Academic Publishing Trends in 2024: Navigating the Change
    Apr 2, 2024 · One notable trend in academic publishing is the increasing prevalence of open access publishing, fueled by growing support from funding agencies ...
  189. [189]
    A decade of open access policy at the Gates Foundation based on ...
    Jan 28, 2025 · This article provides an in-depth look at the Gates Foundation's open access (OA) policy journey as 2025 marks a decade of OA policy for the foundation.
  190. [190]
    Managing Ownership of Copyright in Research Publications to ...
    Dec 18, 2023 · This article explains current dynamics in academic publishing and research ownership. It seeks to explain the complex interface of copyright law.
  191. [191]
    Publishers settle copyright infringement lawsuit with ResearchGate
    Sep 18, 2023 · In 2017, the two publishers sued ResearchGate for allegedly violating US copyright laws, specifically relating to 50 research papers uploaded by ...
  192. [192]
    Three Major Academic Publishers Appeal to the 11th Circuit in ...
    Cambridge filed its case on April 15, 2008, for what it deemed “systematic, widespread and unauthorized copying and distribution” at Georgia State University.
  193. [193]
    Copyright transfer in group-authored scientific publications | Insights
    Apr 14, 2021 · We examine the issue of copyright transfer from authors to a publisher. A key argument is the potential invalidation of a copyright transfer agreement.
  194. [194]
    [PDF] Copyright Ownership & the Impact on Academic Libraries
    ' Much of the interests of academic libraries concerning copyright ownership actually center on concerns about the rapidly escalating costs of journal ...
  195. [195]
    Retraction guidelines - COPE: Committee on Publication Ethics
    Aug 29, 2025 · Retraction is to correct literature, not punish authors, for major errors, misrepresentation, unethical practices, or compromised peer review.
  196. [196]
    Retraction Watch – Tracking retractions as a window into the ...
    A journal says it will retract a 2019 paper on an Alzheimer's treatment after an institutional investigation found research misconduct, according to emails seen ...
  197. [197]
    Retractions Increase 10-Fold in 20 Years - and Now AI is Involved
    In 2023 retractions increased to 1 in 500 papers. “Statistically speaking, there is a not-zero chance that one of the papers you're citing has been retracted.”
  198. [198]
    Misconduct accounts for the majority of retracted scientific publications
    67.4% of retractions were attributable to misconduct, including fraud or suspected fraud (43.4%), duplicate publication (14.2%), and plagiarism (9.8%).
  199. [199]
    Biomedical paper retractions have quadrupled in 20 years — why?
    May 31, 2024 · Unreliable data, falsification and other issues related to misconduct are driving a growing proportion of retractions.
  200. [200]
    Investigating and preventing scientific misconduct using Benford's Law
    Apr 11, 2023 · Here we suggest a practical approach for the investigation of work suspected of fraudulent data manipulation using Benford's Law.
  201. [201]
    Springer Nature retracted 2923 papers last year
    Feb 17, 2025 · 38.5% (1126) of retractions were for articles published after January 2023. 41% of the retractions for content published after January 2023 ...
  202. [202]
    Research Fraud: Falsification and Fabrication of Data
    Keep records of all raw data; if falsification or fabrication are suspected, the journal or other investigative body will demand to review your information.
  203. [203]
    Top 10 most highly cited retracted papers
    Below, we present the list of the 10 most highly cited retractions as of May 23, 2025. Readers will see some familiar entries, such as the infamous Lancet paper ...
  204. [204]
    New COPE retraction guidelines address paper mills, third parties ...
    Sep 4, 2025 · COPE also recommends retracting papers with “any form of misrepresentation,” including “deception; fraud (eg, a paper mill); identity theft or ...
  205. [205]
    Worryingly high prevalence of retraction among top-cited researchers
    Nov 12, 2024 · Of 217,097 scientists who are top on the basis of career-long citation impact, 7,083 (3.3%) are authors of retracted papers, as are 8,747 (3.9%) ...
  206. [206]
    Misconduct as the main cause for retraction. A descriptive study of ...
    The main cause of retraction was misconduct (65.3%), and the leading reasons were plagiarism, data management and compromise of the review process.
  207. [207]
    How to fight fake papers: a review on important information sources ...
    Jul 6, 2024 · Software tools provided by publishers to fight fake papers. Publishers should be responsible for ensuring that editorial boards can maintain the ...
  208. [208]
    Semantic Specification on Zenodo
    Research repository deposit crediting AI persona Angela Bogdanova as co-author in experimental scholarly metadata.
  209. [209]
    Authorship and AI tools
    COPE position statement prohibiting AI tools from authorship and requiring disclosure of use.
  210. [210]
    Defining the Role of Authors and Contributors
    ICMJE recommendations stating that AI and AI-assisted technologies should not be listed as authors.