
Scientific journal

A scientific journal is a periodical that disseminates original research findings, reviews, and scholarly analyses in specific scientific disciplines, with articles typically subjected to peer review by independent experts to assess technical validity, originality, and scientific merit. Scientific journals originated in the 17th century amid the Scientific Revolution, with the Journal des Sçavans launching in January 1665 as the first academic periodical, followed by Philosophical Transactions of the Royal Society in March 1665, which became the earliest journal dedicated exclusively to scientific content and remains in continuous publication. These journals serve as the primary mechanism for archiving peer-validated knowledge, facilitating cumulative progress in science by allowing researchers to cite, critique, and extend prior work, while also functioning as benchmarks for career advancement and funding allocation in academia. Central to their operation is the peer review process, whereby submissions are evaluated anonymously by domain specialists to filter out flawed or unsubstantiated claims, though evidence indicates variability in its stringency and occasional failures to detect errors or biases.

Definition and Role

Core Purpose and Functions

Scientific journals primarily serve to disseminate original findings and scholarly analyses within specific disciplines, enabling the scientific community to build cumulatively on verified results. This function establishes a permanent, archival record of discoveries, which supports replication, critique, and further inquiry essential to the empirical validation of hypotheses. By publishing peer-reviewed articles, journals certify the methodological rigor and novelty of contributions, thereby advancing collective understanding and prioritizing claims based on evidence over assertion. A core mechanism is the peer review process, where independent experts evaluate submissions for validity, accuracy, and originality prior to publication, acting as a filter to minimize errors and biases in the literature. This scrutiny helps ensure that only work meeting established standards of evidence and logical coherence enters the permanent record, though it is not infallible and relies on the expertise of reviewers drawn from the relevant field. Journals thus function not only as disseminators but also as gatekeepers, fostering accountability by associating authors' names with scrutinized outputs and enabling traceability through citations. Additional functions include hosting review articles that synthesize existing data, short communications for rapid reporting of significant results, and occasionally editorials or commentaries to debate implications or methodological issues. Through indexing in bibliographic databases and adherence to standardized formats, journals facilitate discovery and comparison across global research efforts, underpinning funding decisions, academic promotions, and policy formulations grounded in published evidence. Ultimately, their role reinforces the scientific method by demanding explicit linkages between observations, experiments, and conclusions, while archiving prevents loss of institutional knowledge and supports longitudinal analysis of scientific progress.

Distinction from Non-Scientific Publications

Scientific journals are distinguished from non-scientific publications by their adherence to rigorous peer review, whereby submitted manuscripts undergo anonymous evaluation by independent experts in the field to assess methodological soundness, validity of conclusions, and contribution to existing knowledge before acceptance. This process serves as a primary quality-control mechanism, filtering out unsubstantiated claims and ensuring that published work meets minimum standards of scientific rigor, unlike non-scientific outlets such as magazines or newspapers, which typically rely on editorial discretion without expert scrutiny. Content in scientific journals centers on original empirical research, featuring detailed descriptions of experimental designs, raw data, statistical analyses, and reproducible protocols that allow for independent verification and potential falsification—core tenets of the scientific method. In contrast, non-scientific publications often present secondary interpretations, opinion pieces, or anecdotal reports aimed at general audiences, lacking primary data or testable hypotheses, and prioritizing accessibility or narrative appeal over evidentiary depth. For instance, while a journal article might include quantitative results from controlled trials with p-values and confidence intervals, a popular magazine article on the same topic would summarize findings without methodological appendices or calls for replication. This demarcation extends to authorship and audience: scientific journals are written by domain specialists for fellow researchers, employing technical terminology and formal structures like abstracts, methods sections, and references to prior peer-reviewed work, fostering cumulative knowledge advancement. Non-scientific publications, however, cater to lay readers with simplified language, illustrations, and editorially selected viewpoints that may reflect journalistic biases rather than empirical evidence, often without disclosing conflicts of interest or data sources in equivalent detail. Although peer review is not infallible—critics note risks of oversight or bias—it remains the institutional hallmark separating vetted scientific literature from unchecked dissemination in broader publications.

Historical Development

Origins in the Scientific Revolution

The emergence of scientific journals coincided with the Scientific Revolution of the 17th century, a period marked by the advocacy of empirical methods and experimentation over deference to ancient authority. Prior to periodicals, scientific communication relied on private letters within the Republic of Letters, personal monographs, and oral presentations at informal gatherings, which limited wide dissemination and verification of findings. The advent of journals facilitated regular, public sharing of observations and experiments, enabling cumulative progress in natural philosophy. The first academic periodical, the Journal des Sçavans, appeared in Paris on 5 January 1665, edited by Denis de Sallo (under the pseudonym Sieur de Hédouville) at the behest of Jean-Baptiste Colbert, minister to Louis XIV. Intended to chronicle advancements in the Republic of Letters, it encompassed literature, law, theology, and nascent natural science, featuring book reviews, legal decisions, and abstracts of scholarly works rather than original articles. Though not exclusively scientific, it included coverage of mathematical and natural historical developments, setting a precedent for structured scholarly news. The journal faced suppression in 1665 over satirical content but resumed publication in 1669 under new editorship. In England, Henry Oldenburg, the inaugural secretary of the Royal Society—chartered in 1662 to promote the improvement of natural knowledge—launched Philosophical Transactions on 6 March 1665, predated only by the French journal. This publication focused singularly on scientific content, printing letters, experiment accounts, and instrument descriptions submitted by Fellows and correspondents, without direct Royal Society endorsement initially. Oldenburg's role involved soliciting contributions, translating foreign works, and distributing copies internationally, which amplified the Society's influence amid events like the Great Plague of 1665–1666 that disrupted meetings. These early journals institutionalized the verification of claims through communal scrutiny, though formal peer review evolved later; Philosophical Transactions emphasized factual reporting over endorsement, allowing readers to assess validity. By 1666, both periodicals had established a model of periodic issuance—weekly or monthly—contrasting with one-off pamphlets, and they numbered among fewer than a dozen such ventures by century's end, primarily in Europe. Their advent powered the Scientific Revolution by bridging isolated investigators, as evidenced by rapid uptake: Philosophical Transactions reached volume 2 by 1667, covering topics from microscopy to astronomy.

19th-Century Expansion and Institutionalization

The 19th century witnessed exponential growth in scientific journals, driven by the professionalization of science and expanding research output from universities, laboratories, and academies. Worldwide, the number of science periodicals increased from around 100 at the century's start to approximately 10,000 by 1900. This proliferation paralleled the rise of dedicated scientific workers, whose empirical investigations demanded efficient channels for sharing findings amid rapid industrialization and institutional development. Institutionalization manifested through the integration of journals with scientific societies, which formalized publication as a core function of professional communities. Established outlets like the Royal Society's Philosophical Transactions grew substantially in volume to handle surging submissions, reflecting heightened demand for archival and communicative roles. Societies across Europe and North America, such as the British Association for the Advancement of Science (founded 1831), leveraged periodicals to foster discourse and standardization, though formats remained experimental and unstable. A pivotal development was the launch of Nature on November 4, 1869, by astronomer Norman Lockyer and publisher Alexander Macmillan, intended as a weekly digest of scientific news to bridge gaps left by slower society transactions. This initiative underscored the era's push toward timely, accessible reporting, catering to an enlarging audience of practitioners and enthusiasts. By century's close, specialization spurred discipline-focused journals and adaptations like the Royal Society's 1895 sectional committees, institutionalizing peer scrutiny and thematic organization amid disciplinary fragmentation.

20th-Century Growth and Post-War Boom

The number of scientific journals expanded steadily throughout the early 20th century, reflecting the professionalization of science and the proliferation of specialized fields. Annual growth rates for active journals averaged 3.3% to 4.7% from 1900 to 1996, driven by rising research productivity amid industrialization and institutional support for research. This period saw the establishment of prominent multidisciplinary outlets, such as Science (relaunched in 1900 under the American Association for the Advancement of Science), and increased output from learned societies, though publication volumes remained constrained by limited funding and manual production processes. Following World War II, scientific publishing entered a pronounced boom, propelled by unprecedented public investment in research infrastructure and personnel. In the United States, wartime innovations demonstrated science's strategic value, leading to the creation of the National Science Foundation in 1950 with an initial budget of $3.5 million, which grew to support basic research across disciplines. Federal R&D expenditures escalated rapidly, reaching about 2% of GDP by the 1960s, with agencies like the National Institutes of Health expanding grants that funded biomedical research and journal submissions. This funding surge expanded the scientific workforce—U.S. researchers numbered around 100,000 in 1950 and doubled within a decade—generating far more manuscripts and necessitating additional journals or expanded issues to accommodate the output. The post-war era marked a shift to unrestricted exponential growth in journals, with an annual rate of approximately 4.1% from 1945 onward, contrasting with pre-war constraints from economic depressions and global conflicts. Internationally, similar patterns emerged as governments emulated U.S. models; for instance, Europe's recovery involved reinvesting in science, while Cold War competition amplified funding. This boom strained traditional nonprofit models, as society journals grappled with rising costs for printing and distribution, paving the way for commercial publishers to acquire titles and capitalize on subscription revenues from institutional libraries. By the late 20th century, the journal count had risen dramatically, from roughly 10,000 at mid-century to over 100,000 by 2000, underscoring how funding-driven productivity reshaped dissemination.

Digital Transition and 21st-Century Shifts

The digital transition of scientific journals commenced in the early 1990s, driven by the internet's capacity for rapid dissemination. The arXiv preprint server, launched in August 1991 by physicist Paul Ginsparg at Los Alamos National Laboratory, marked an initial milestone by enabling physicists to share manuscripts electronically before formal peer review, bypassing print delays. This model addressed the limitations of physical mailing and photocopying, which had previously constrained distribution to high-energy physics communities. By the mid-1990s, searchable online databases emerged, with PubMed debuting in January 1996 as a free interface to the MEDLINE database, initially covering over 9 million biomedical citations and abstracts. Commercial publishers followed suit; Elsevier's ScienceDirect platform launched in March 1997, providing electronic access to full-text articles from more than 1,000 journals, including searchable PDFs and HTML formats. Pioneering fully digital, peer-reviewed journals also appeared in 1996, operating entirely online and focusing on internet-related scholarship without a print counterpart. These developments shifted journals from print-centric models—reliant on physical production and library subscriptions—to hybrid systems incorporating digital supplements like data files and images. Into the 2000s, the transition intensified, with most major journals adopting online-first publication by the early 2010s, allowing articles to appear digitally months before print issues. This enabled innovations such as Digital Object Identifiers (DOIs) via CrossRef, established in 2000, for persistent linking and citation tracking across platforms. Electronic submission systems, standardized by tools like ScholarOne and Editorial Manager around 2005, streamlined peer review by facilitating anonymous digital exchanges, reducing processing times from months to weeks in many cases. Publication volumes surged, growing at an average annual rate of 4.1% and reaching about 2.5 million peer-reviewed articles per year by 2017, fueled by lower digital reproduction costs and expanded global authorship. Preprint proliferation extended beyond physics, with discipline-specific servers like bioRxiv (2013) and medRxiv (2019) accelerating knowledge sharing, particularly during the COVID-19 pandemic, when over 100,000 SARS-CoV-2-related preprints appeared on medRxiv and bioRxiv by mid-2020. Digital formats supported richer content, including interactive datasets, videos, and code repositories linked via platforms like Figshare (launched 2011), enhancing reproducibility but revealing gaps in data-sharing compliance, with only 20-30% of articles in top journals providing accessible raw data as of 2016. Global output rose 59% from 2012 to 2022, shifting production leadership from the U.S. and Europe toward China, which accounted for 21% of papers by 2022. These changes democratized access but strained quality controls, as evidenced by retraction rates climbing from 0.01% of publications in 2000 to 0.04% by 2020, often uncovered through digital tools like PubPeer for post-publication scrutiny.

Publishing Models

Traditional Subscription-Based Systems

In the traditional subscription-based model of scientific journal publishing, access to articles is granted through payments made by readers, institutions, or libraries for subscriptions, while authors typically incur no direct publication fees. This system, predominant since the inception of formal scientific journals in the 17th century, relies on revenue from these subscriptions to cover editorial, review, production, and distribution costs. For instance, Philosophical Transactions of the Royal Society, the world's oldest scientific journal established in 1665, was initially sustained by sales to subscribers and remains a foundational example of this approach, even as the Royal Society has historically offered individual or package subscriptions for its titles. Libraries and universities, as primary subscribers, negotiate access to journal portfolios, often through "Big Deals"—bundled packages providing comprehensive coverage of a publisher's titles at discounted rates per journal but with escalating overall costs. These deals, pioneered by Academic Press (later acquired by Elsevier) in 1996, enable broader access but reduce libraries' ability to cancel underused titles, effectively locking in expenditures and contributing to diminished flexibility. By 2021, such bundles had become standard for major publishers, with academic libraries licensing large portions of content this way, though analyses indicate they yield less value per dollar spent compared to selective subscriptions due to inclusion of lower-impact journals. The model's economics have drawn scrutiny amid the "serials crisis," in which subscription prices have risen faster than library budgets and inflation, eroding access since the late 1980s. For example, the average cost of a journal increased 59% from $226 in 2000 to $360 by around 2005, with comparable increases across individual disciplines. Large commercial publishers, such as Elsevier, derive substantial revenue from this system; its scientific, technical, and medical division reported €3.26 billion in revenue for 2022, with adjusted operating profits reaching £1.17 billion in 2024 and profit margins near 40%. Despite enabling rigorous gatekeeping and wide institutional access, the subscription model perpetuates inequities in access for unaffiliated researchers and strains public budgets, as taxpayers support both grant-funded research and subsequent subscription barriers. Nonprofit society publishers often charge lower rates than for-profits, but market consolidation favors the latter, with bundled deals amplifying revenue concentration.

Open Access and Hybrid Approaches

Open access (OA) in scientific journals entails making articles freely available online without financial or legal barriers beyond attribution, enabling unrestricted reading, downloading, and reuse. The formalization of OA principles occurred through the Budapest Open Access Initiative in February 2002, which recommended two complementary paths: self-archiving accepted manuscripts in public repositories (green OA) and direct publication in OA journals that do not impose subscription fees on readers (gold OA). Gold OA typically relies on article processing charges (APCs) paid upfront by authors, institutions, or funders to cover editorial, production, and dissemination costs, with a global average APC of approximately US$1,626 as of 2023 data aggregated across journals. The OA movement traces its precursors to early digital repositories like arXiv, launched in 1991 for physics preprints, which demonstrated the feasibility of free online dissemination prior to widespread journal adoption. By 2024, gold OA accounted for about 40% of global research articles, reviews, and conference papers, up from 14% in 2014, driven by mandates from funders such as Europe's Plan S (initiated in 2018) requiring publicly funded research to be open access by 2021. Diamond OA, a subset of gold without author fees (often funded by societies or governments), remains marginal, comprising less than 10% of output despite its appeal for equity. Evidence indicates OA articles garner higher citation rates—up to 47% more in some fields—due to increased visibility, though causation is debated, as self-selection (higher-quality papers opting for OA) may contribute. Hybrid approaches integrate OA into traditional subscription journals, permitting authors to pay an APC (often US$2,000–5,000, varying by publisher) to render specific articles openly accessible while non-OA content remains behind paywalls. Introduced in the mid-2000s by major publishers such as Springer and Elsevier as a bridge to full OA, hybrid models now dominate transitional publishing, with 82% of Springer Nature's 2024 hybrid articles funded via institutional "transformative agreements" that bundle subscriptions and APCs. Proponents argue hybrids facilitate compliance with funder policies and enhance article reach without disrupting journal revenue streams, potentially boosting overall citations for OA selections. Critics, including analyses from library consortia, contend that hybrids enable "double dipping," where publishers derive revenue from both subscriptions and APCs for overlapping content pools, leading to net cost increases for institutions without commensurate gains—evidenced by hybrid APCs exceeding subscription-derived per-article costs in some cases. Hybrid uptake has slowed full transitions, with fully OA journals shrinking to 75% of OA article output among member publishers as hybrids absorbed the balance. Sustainability challenges persist, as APC escalation (averaging 5–10% annual increases in high-impact journals) burdens authors from low-resource settings, where fees can exceed annual research budgets, prompting waivers or no-fee alternatives in only about 20% of journals. Despite these tensions, hybrid and gold OA collectively surpassed closed-access articles globally by 2021, signaling a transition tempered by economic and quality-control concerns.

Economic Incentives and Predatory Practices

The subscription-based model of scientific journals generates substantial revenue for large publishers, with RELX (parent of Elsevier) reporting €2.7 billion in scientific, technical, and medical publishing revenue in 2023, contributing to group-wide profit margins of approximately 33%. These profits arise from institutional subscriptions funded largely by public research and library budgets, despite authors and peer reviewers receiving no direct compensation, as incentives prioritize career advancement through publication counts over monetary rewards. The "publish or perish" culture in academia amplifies this dynamic, pressuring researchers to maximize output for tenure, grants, and promotions, which increases submission volumes and sustains publisher revenues without corresponding improvements in quality. The transition to open access (OA) models, particularly gold OA reliant on article processing charges (APCs), has introduced new economic incentives, with global APC revenues exceeding $2 billion annually by 2020 among major publishers and median APCs rising to incentivize higher-fee journals. In hybrid systems, authors pay APCs to make subscription articles freely accessible, shifting costs from readers to writers while publishers retain dual revenue streams, but this aligns poorly with academic pressures that reward publication quantity, often leading to selective reporting of positive results and practices like salami slicing or data manipulation. Such incentives have contributed to a surge in retractions, with flawed research proliferating due to deadline pressures and metric-driven evaluations, undermining the reliability of the scientific literature. Predatory journals exploit these OA APC incentives by charging fees—often $500 to $3,000—while providing minimal or no peer review, editorial oversight, or indexing, masquerading as legitimate outlets to deceive authors seeking quick publications. Emerging prominently in the 2010s alongside OA expansion, these operations, frequently based in low-regulation regions, have proliferated to thousands of titles, preying on the publish-or-perish imperative in systems where promotions hinge on publication tallies, including cash-per-paper bonuses in some countries exceeding $100,000 for high-impact work. Their impact includes diluting the literature with low-quality or fabricated findings, facilitating undeserved academic advancements, and eroding trust in OA models, as evidenced by higher retraction rates tied to systemic publication pressures. Efforts to combat predation, such as lists maintained by scholars like Jeffrey Beall until 2017, highlight how economic misalignments—favoring volume over rigor—enable such practices to thrive amid inequities in global research.

Editorial Processes

Submission and Peer Review Mechanisms

Manuscripts are typically submitted electronically through dedicated online submission systems managed by journal publishers, such as Editorial Manager for Elsevier journals or ScholarOne for Wiley titles, where authors upload files including the main text, figures, and supplementary materials while adhering to specific formatting guidelines. Accompanying submissions often include a cover letter justifying the work's novelty and fit for the journal, along with declarations of conflicts of interest and funding sources. Initial administrative checks verify completeness and compliance, followed by editorial screening to assess scope alignment, ethical standards, and preliminary merit, rejecting unsuitable papers without external review to streamline the process. Upon passing initial hurdles, editors select 2–4 expert reviewers from the field, often drawing from databases or personal networks, to conduct peer review, a step evaluating scientific validity, methodological rigor, novelty, and significance. Reviewers provide confidential reports recommending acceptance, minor/major revisions, or rejection, with editors synthesizing these alongside their own assessment to render a decision (a simplified sketch of this aggregation follows below). Common formats include single-anonymized review, where reviewers know authors' identities but not vice versa; double-anonymized, masking both parties to reduce bias; and open review, revealing identities for transparency, though the latter remains rare due to concerns over reviewer candor. The process duration varies by field and journal, with median times to first peer-reviewed decision ranging from 21 to 263 days across analyzed publications, often averaging 40–60 days for initial editorial feedback and extending to several months total including revisions. Despite aims for efficiency, delays arise from reviewer recruitment challenges and iterative revisions, sometimes spanning years in cycles of rejection and resubmission elsewhere. Peer review, while intended as an impartial gatekeeper, exhibits vulnerabilities including confirmation bias, where reviewers favor findings aligning with established paradigms; prestige bias favoring authors from well-known institutions; and inconsistent judgments among reviewers, leading to variable outcomes for similar manuscripts. These systemic issues, compounded by human subjectivity and failure to detect flaws like non-reproducible results, have prompted critiques that peer review stifles innovation and inadequately filters low-quality work, eroding trust in published findings. Efforts to mitigate include statistical checks for bias in reviewer assignments and reviewer training, though evidence of broad efficacy remains limited.
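The aggregation step described above can be made concrete with a small sketch. The following Python is a minimal illustration under simplifying assumptions: the recommendation labels, thresholds, and function names are invented for this example, not any journal's or vendor's actual rules, and real editors exercise discretion beyond any fixed policy.

```python
from collections import Counter

# Recommendation categories as typically solicited by submission systems.
RECOMMENDATIONS = ("accept", "minor_revision", "major_revision", "reject")

def editorial_decision(reviews: list[str]) -> str:
    """Aggregate 2-4 reviewer recommendations into a provisional outcome.

    Thresholds are illustrative assumptions: two rejections carry the
    decision, a single rejection triggers editor adjudication, and
    otherwise the most demanding revision request wins, since all
    requested revisions must be addressed before acceptance.
    """
    if not 2 <= len(reviews) <= 4:
        raise ValueError("editors typically consult 2-4 reviewers")
    if any(r not in RECOMMENDATIONS for r in reviews):
        raise ValueError(f"unknown recommendation in {reviews}")
    tally = Counter(reviews)
    if tally["reject"] >= 2:      # reviewer consensus against publication
        return "reject"
    if tally["reject"] == 1:      # split reports: editor must adjudicate
        return "editor_adjudication"
    for level in ("major_revision", "minor_revision", "accept"):
        if tally[level]:
            return level

print(editorial_decision(["accept", "minor_revision", "major_revision"]))
# -> major_revision
```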

Editorial Decision-Making and Gatekeeping

Editors synthesize peer reviewer reports, often 2–4 per manuscript, alongside their independent assessment of scientific merit, novelty, methodological soundness, and broader impact to render decisions of acceptance, rejection, or revision. This process typically follows external peer review for submissions passing initial editorial screening, with editors weighing reviewer consensus while exercising discretion to override divergent opinions if they deem the work's intrinsic quality warrants it. Gatekeeping manifests through stringent selectivity, enabling journals to uphold standards and allocate limited space amid high submission volumes; top-tier outlets accept fewer than 6% of submissions and reject over 90%, frequently via desk rejection without review to expedite triage. This filtering legitimizes published findings, shapes research agendas, and distributes prestige, yet empirical analyses indicate rejected manuscripts often achieve comparable citation impacts to accepted ones upon republication elsewhere, questioning the absolute efficacy of editorial judgments. Biases can distort decisions, including favoritism toward authors from prestigious institutions, which correlates with higher invitation rates for revision in some journals, disadvantaging submissions from less prominent affiliations. Nepotistic practices, such as elevated publication rates for hyper-prolific authors linked to editorial boards, have been documented in subsets of biomedical journals, undermining impartiality. While some studies find no systematic gender bias in post-review decisions, geographical and institutional prejudices persist, potentially amplifying inequities in global scientific output. Gatekeeping failures include overlooking fraud or irreproducibility, as evidenced by delayed retractions in prominent cases where initial approval preceded post-publication scrutiny revealing flaws missed in review. In fields prone to replication crises, conservative thresholds prioritizing novelty over confirmatory rigor have perpetuated questionable claims, with editors sometimes resisting negative or null results despite their validity. Reforms, such as eLife's shift to review-without-gatekeeping by publishing all reviewed preprints sans accept/reject binaries, aim to mitigate over-reliance on editorial vetoes while preserving expert scrutiny.

Post-Publication Corrections and Retractions

Post-publication corrections address honest errors in published scientific articles, such as inaccuracies in data reporting, methodological descriptions, or statistical analyses that do not undermine the overall conclusions, while retractions are issued for severe flaws, including fabrication, falsification, plagiarism, or ethical violations rendering the work unreliable. The Committee on Publication Ethics (COPE) recommends that corrections be clearly labeled, linked to the original article, and limited to substantive changes, whereas retractions involve withdrawing the article from the literature with a notice explaining the reasons, distinguishing misconduct from error, and advising against citing the retracted work except to discuss the retraction itself. Journals typically initiate these processes upon notification from authors, readers, or institutions, often involving investigation by editors, peer reviewers, or external bodies; for instance, COPE guidelines emphasize timely notices—ideally within months of identification—and watermarking PDFs to prevent further dissemination of flawed versions. Retraction notices must detail the specific issues, such as compromised peer review, including paper-mill or third-party involvement, and journals are encouraged to coordinate with third-party databases for visibility. Post-publication platforms like PubPeer facilitate detection by allowing anonymous comments on images or data anomalies, contributing to retractions in cases overlooked during initial review. Retraction rates have risen sharply, from approximately 1 in 5,000 papers in 2002 to 1 in 500 by 2023, with biomedical fields seeing a quadrupling over two decades and overall rates reaching about 0.2% amid over 35,000 total retractions tracked by 2025. This increase reflects expanded publication volumes, improved detection tools, and heightened scrutiny, though data problems—encompassing fabrication, duplication, or analytical errors—account for over 75% of recent cases. Misconduct drives nearly 67% of retractions, compared to 16% for honest errors, with common triggers including plagiarism (frequent in the social sciences) and image manipulation (prevalent in the life sciences). Corrections occur more frequently but receive less attention, with studies showing constant correction rates across fields while retractions surge due to systemic pressures like "publish or perish" incentives that prioritize quantity over verification. Challenges persist in enforcement, including delays averaging years from identification to retraction—exacerbated by institutional reluctance to investigate misconduct—and continued citations of retracted papers, which decline but persist at 2-5 per year post-retraction versus pre-retraction peaks. Academic institutions, often biased toward protecting reputational incentives, underreport misconduct, leading to incomplete records; for example, only 44% of papers flagged in 2019 had been retracted by 2025 despite evidence of issues. Retractions impact careers, reducing future output for authors, yet they enhance scientific integrity by signaling self-correction, with top-cited scientists facing a conservative 4% retraction rate. Emerging COPE updates address paper mills and third-party manipulation, urging proactive editor involvement to counter these trends.

Content Structure and Formats

Standard Article Components

Scientific journal articles, particularly those reporting original empirical research, adhere to a standardized structure known as IMRaD (Introduction, Methods, Results, and Discussion) to facilitate logical presentation, reproducibility, and reader comprehension. This format emerged in the mid-20th century as scientific publishing professionalized, replacing less structured narratives, and is now the norm across disciplines like biology, physics, and the social sciences, though variations exist in humanities or theoretical fields. Preceding the main body are front-matter elements such as the title, author list with affiliations, abstract, and keywords, while appendices may include references, acknowledgments, and supplementary materials. Journals like PLOS ONE and Nature mandate adherence to this outline, with specific length limits and formatting to ensure consistency. The title concisely captures the article's core contribution, often limited to 10-15 words, emphasizing key variables, methods, or findings without abbreviations or hype to aid indexing and searchability. Author details follow, listing contributors in order of contribution, with a corresponding author designated for correspondence, and disclosures of conflicts of interest to uphold transparency. The abstract, typically 150-250 words, provides a standalone summary covering background, objectives, methods, principal results (with quantitative data), and conclusions, enabling readers to assess relevance without the full text. Keywords (3-10 terms) are appended for database indexing, selected from controlled vocabularies like MeSH in biomedicine. In the introduction, authors contextualize the research gap with 1-2 pages of literature review, state hypotheses or objectives, and outline significance, avoiding exhaustive history to focus on unresolved questions. The methods section details protocols for replication, including materials, experimental design, statistical analyses, and ethical approvals (e.g., IRB review for human subjects), with sufficient specificity—such as reagent sources or software versions—to enable reproduction, often supplemented by online protocols. Results present findings objectively via text, tables, and figures (e.g., graphs showing p-values or effect sizes), without interpretation, typically in chronological or thematic order, with raw data deposited in repositories like Figshare for larger datasets. The discussion interprets results in light of hypotheses, compares with prior studies, addresses limitations (e.g., sample size or confounders), and suggests implications or future directions, often concluding with broader impacts while avoiding overstatement. References follow a journal-specific style (e.g., APA or Vancouver), citing 20-100 sources, primarily peer-reviewed, to credit prior work and deter plagiarism via tools like Crossref. Additional elements include acknowledgments for funding or contributions, figures/tables with captions and legends for visual representation (limited to 6-8 per article in many journals), and supplementary information for extended methods or datasets, increasingly required for reproducibility amid replication crises in fields like psychology. This modular design supports modular review and digital parsing, though rigid adherence can constrain interdisciplinary work; the sketch below encodes the conventions as simple checks.
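As a concrete illustration of the front-matter and IMRaD conventions above, the following Python sketch encodes the rough numeric norms cited in the text (title length, abstract length, keyword count) as checks. It is a hypothetical helper under stated assumptions, not any journal's actual submission validator; real guidelines vary by venue and must be checked case by case.

```python
from dataclasses import dataclass, field

@dataclass
class Manuscript:
    """Front matter plus IMRaD body sections of a typical research article."""
    title: str
    abstract: str
    keywords: list[str]
    sections: dict[str, str] = field(default_factory=dict)  # e.g. "methods" -> text

def front_matter_issues(ms: Manuscript) -> list[str]:
    """Flag departures from the conventions described above.

    The numeric limits are the rough norms cited in the text; actual
    journal guidelines differ.
    """
    issues = []
    if len(ms.title.split()) > 15:
        issues.append("title exceeds ~15 words")
    if not 150 <= len(ms.abstract.split()) <= 250:
        issues.append("abstract outside 150-250 words")
    if not 3 <= len(ms.keywords) <= 10:
        issues.append("keyword count outside 3-10")
    for part in ("introduction", "methods", "results", "discussion"):
        if part not in ms.sections:
            issues.append(f"missing IMRaD section: {part}")
    return issues

ms = Manuscript(
    title="Effects of Light on Circadian Gene Expression in Drosophila",
    abstract=" ".join(["word"] * 200),   # placeholder 200-word abstract
    keywords=["circadian rhythm", "Drosophila", "gene expression"],
    sections={"introduction": "...", "methods": "...", "results": "..."},
)
print(front_matter_issues(ms))  # -> ['missing IMRaD section: discussion']
```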

Field-Specific Variations and Innovations

In medicine, clinical trial reports follow specialized formats like the CONSORT 2025 guidelines, which require a structured abstract with subheadings for background, methods, results, and conclusions, alongside a participant flow diagram and checklist items covering trial design, randomization processes, interventions, outcomes, and harms to facilitate critical appraisal and replication. Physics papers, particularly theoretical ones, adapt IMRaD by emphasizing model derivations, equations, and simulation validations over lengthy experimental protocols, often structuring content around analytical frameworks, numerical results, and error analyses tailored to high-precision computations. In biology, articles incorporate detailed subsections within methods for organism handling, molecular techniques, and bioinformatics pipelines, with extensive supplementary files hosting sequence data, phylogenies, and raw micrographs to manage volume beyond print limits. Innovations extend beyond static text to interactive and multimedia integrations. Journals in fields like structural biology enable embedded videos of protein dynamics or rotatable 3D models, allowing readers to manipulate visualizations directly, as seen in platforms supporting dynamic supplements for enhanced interpretability of complex datasets. Data papers, which catalog and describe reusable datasets without primary analysis, have proliferated in ecology and the environmental sciences, featuring metadata schemas, access protocols, and validation metrics to promote FAIR principles (findable, accessible, interoperable, reusable). Software articles, common in computational disciplines, detail code repositories, benchmarks, and usage examples, often with executable demos to verify functionality and foster community contributions. These formats address limitations of linear narratives, prioritizing verifiability and extensibility amid growing data complexity.

Evaluation and Metrics

Citation-Based Impact Measures

The Journal Impact Factor (JIF), calculated by Clarivate Analytics and published in Journal Citation Reports, serves as a primary citation-based metric for assessing journal influence. It quantifies the average number of citations received by articles published in a journal during the two preceding years, divided by the number of citable items (primarily research articles and reviews) published in those years. For instance, the 2023 JIF for a journal is computed as citations in 2023 to items from 2021 and 2022, divided by citable items from 2021 and 2022. This two-year window emphasizes recent impact but excludes other document types, like editorials, from the denominator. CiteScore, derived from Elsevier's Scopus database, offers an alternative by averaging citations per document over a four-year window, encompassing a wider array of content including conference papers and book chapters. It is calculated as the number of citations in year Y to documents published in Y-1 through Y-4, divided by the total documents published in Y-1 through Y-4, and updated annually with percentile rankings relative to subject categories. Unlike JIF, CiteScore includes all document types in the denominator, potentially broadening its applicability across disciplines. The SCImago Journal Rank (SJR), based on Scopus data, differentiates itself by weighting citations according to the prestige of the citing journal, using an iterative algorithm similar to Google's PageRank to transfer "prestige" through citation networks. SJR employs a three-year citation window and normalizes scores so the average journal receives a value of 1, prioritizing influential citations over sheer volume. This approach aims to mitigate biases from citations in low-prestige outlets. Other metrics include the Eigenfactor Score, which evaluates a journal's network influence over five years using Web of Science data, accounting for citation directionality and discounting journal self-citations to reflect broader scholarly importance. The derived Article Influence Score divides this by the journal's article count and scales it relative to an average of 1.0. The h-index, adaptable to journals, denotes the largest h such that h articles have received at least h citations each, often computed over all time or recent periods via databases like Google Scholar or Scopus. The formula below restates the JIF definition in symbols, and the code sketch following the table implements two of these measures.
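Restated in symbols (notation introduced here for clarity, not Clarivate's own), the worked 2023 example above is:

\[
\mathrm{JIF}_{2023} \;=\; \frac{C_{2023}(P_{2021}) + C_{2023}(P_{2022})}{N_{2021} + N_{2022}},
\]

where \(C_{2023}(P_y)\) is the number of citations received in 2023 by citable items published in year \(y\), and \(N_y\) is the number of citable items published in year \(y\).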
Metric | Database | Citation Window | Key Features
Journal Impact Factor (JIF) | Web of Science | 2 years | Average citations per citable item; focuses on articles/reviews
CiteScore | Scopus | 4 years | Includes all documents; provides percentiles
SCImago Journal Rank (SJR) | Scopus | 3 years | Prestige-weighted citations via PageRank-like method
Eigenfactor Score | Web of Science | 5 years | Network centrality; discounts self-citations
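The following Python sketch implements the two-year JIF-style average and the h-index exactly as defined above. The toy citation data and function names are invented for illustration; production metrics additionally apply document-type filters and database-specific cleaning not modeled here.

```python
def two_year_impact_factor(cites_in_year: dict[int, int],
                           citable_items: dict[int, int],
                           year: int) -> float:
    """JIF-style average: citations received in `year` by citable items
    published in the two preceding years, divided by those items' count.
    Keys of both dicts are the publication years of the cited items."""
    numerator = cites_in_year.get(year - 1, 0) + cites_in_year.get(year - 2, 0)
    denominator = citable_items.get(year - 1, 0) + citable_items.get(year - 2, 0)
    return numerator / denominator if denominator else 0.0

def h_index(citations_per_paper: list[int]) -> int:
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations_per_paper, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Toy journal: citations received in 2023 to items published in 2021/2022.
print(two_year_impact_factor({2021: 300, 2022: 180}, {2021: 90, 2022: 70}, 2023))
# -> 3.0
print(h_index([10, 8, 5, 4, 3]))  # -> 4 (four papers with >= 4 citations)
```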

Limitations and Gaming of Metrics

Citation-based metrics, such as the Journal Impact Factor (JIF), exhibit significant limitations in assessing research quality, as they aggregate citations across journals without distinguishing between influential and mediocre articles. Citation distributions within journals are highly skewed, with a small number of papers receiving the majority of citations, meaning JIFs are disproportionately driven by outliers rather than average output. This leads to overlapping distributions across journals of varying prestige, undermining JIF as a reliable proxy for individual article merit. Empirical analyses show that JIF correlates weakly or inconsistently with expert assessments, sometimes inversely, particularly in fields where citations accumulate slowly beyond the standard two-year window. Field-specific biases further erode metric validity; disciplines with shorter publication cycles or review articles inflate JIFs relative to those with longer citation lags, such as mathematics or the humanities. The h-index, intended to balance productivity and impact at the researcher level, disadvantages early-career scientists—whose values are capped by publication count—and favors high-citation fields like physics over others with sparser referencing norms. It also ignores context, such as negative citations or self-citations, failing to capture true scholarly influence. Gaming practices exploit these flaws, incentivizing manipulation over substantive contributions. Excessive self-citation, where authors or journals disproportionately reference prior work, artificially boosts metrics; for instance, editorial policies urging citations to the publishing journal have been documented to elevate JIFs without enhancing content value. Citation cartels—coordinated groups of researchers or journals citing each other at rates far exceeding field norms—emerged prominently by 2016, with Clarivate Analytics issuing warnings to over 50 journals in 2017 for such patterns, including reciprocal referencing between titles such as Applied Computing and Informatics and journals in management information systems. These networks, often spanning special issues or institutional affiliations, can inflate metrics by 20-50% in affected clusters, distorting evaluations for funding and promotions. Other tactics include "salami slicing"—dividing research into minimal publishable units to multiply publication opportunities—and "sneaked references," where irrelevant citations are inserted to pad counts, detected in up to 10% of papers in some analyses. Paper mills and related services have facilitated organized citation inflation since the mid-2010s, offering paid citations to game h-indices and JIFs. Such behaviors, driven by institutional pressures tying career advancement to metrics, undermine trust in science by prioritizing quantifiable signals over empirical rigor. Despite efforts like Clarivate's exclusion of excessive self-citations from JIF calculations, gaming persists, highlighting the need for multifaceted evaluation beyond aggregates.

Alternative Quality Indicators

Alternative quality indicators for scientific journals encompass metrics and assessments that extend beyond citation counts, addressing shortcomings such as susceptibility to manipulation, field-specific biases, and failure to capture non-academic influence or methodological rigor. These alternatives emphasize transparency policies, broader societal engagement, and intrinsic editorial standards to better evaluate a journal's contribution to reliable knowledge dissemination. For instance, the TOP Factor, developed by the Center for Open Science, scores journals on eight criteria including data transparency, code sharing, and preregistration policies, with scores ranging from 0 to 6 based on policy stringency and enforcement; high-scoring journals like Psychological Science (TOP Factor 5.4 as of 2020) demonstrate commitment to reproducibility, which empirical studies link to reduced error rates in published findings. Altmetrics provide real-time indicators of attention by tracking mentions in social media, news outlets, blogs, policy documents, and downloads, offering a complement to delayed citation data. Originating from platforms like Altmetric.com, these metrics aggregate sources such as Twitter (now X) shares and Mendeley saves to quantify dissemination speed and public discourse, with advantages including applicability to diverse outputs like datasets and software, unlike citation-focused tools. However, altmetrics face criticism for vulnerability to gaming—such as coordinated campaigns inflating scores without reflecting substantive impact—and correlation with hype rather than validity, as evidenced by studies showing weak alignment with citation-based quality in some fields. Editorial board composition and retraction rates serve as proxies for gatekeeping rigor, with journals featuring diverse, high-expertise editors from institutions like the NIH or equivalent demonstrating lower publication of flawed research. Retraction rates, tracked via databases like Retraction Watch, inversely correlate with quality; for example, elite journals maintain rates below 0.1% annually, while predatory outlets exceed 1%, signaling effective pre- and post-publication scrutiny. Usage statistics, including abstract views and full-text downloads from publisher platforms, further indicate practical relevance, though they must be normalized for field size to avoid conflating popularity with verifiability. These indicators, when combined, foster a multidimensional assessment resistant to the inflationary pressures of citation metrics.

Challenges and Controversies

Reproducibility and Replication Failures

The reproducibility crisis in scientific publishing denotes the systematic failure to independently verify many findings reported in peer-reviewed journals, undermining the reliability of published knowledge. A landmark effort, the Reproducibility Project: Psychology, coordinated by the Open Science Collaboration and published in Science in 2015, attempted to replicate 100 experimental and correlational studies originally appearing in three prominent journals (Journal of Experimental Psychology: General, Journal of Personality and Social Psychology, and Psychological Science). Of 97 studies with replication attempts, only 36 achieved statistical significance at the conventional p < 0.05 threshold, with replication effect sizes averaging half those of the originals, indicating inflated initial estimates due to factors like selective reporting or low statistical power (the simulation below illustrates this inflation mechanism). In biomedical fields, replication rates are similarly dismal, as evidenced by industry attempts to validate foundational studies for drug development. In 2012, researchers at Amgen reported successfully reproducing only 11% (6 out of 53) of landmark preclinical cancer biology papers relied upon for therapeutic targets, attributing discrepancies to incomplete experimental details, selective data presentation, and methodological inconsistencies in the originals. Similarly, a 2011 analysis by Bayer HealthCare scientists found that just 20-25% of published preclinical studies in oncology and other areas could be confirmed internally, prompting calls for enhanced rigor in reporting to mitigate translational failures. These corporate validations, conducted with substantial resources unavailable to most academic labs, highlight how journal-published results often prioritize novelty and positive outcomes over verifiable robustness, exacerbating resource waste in downstream applications. Surveys of researchers underscore broad recognition of the problem within academia, though publication practices perpetuate it. A 2016 poll of over 1,500 scientists found more than 70% had failed to replicate others' experiments, with over 50% unable to reproduce their own, linking the issues to "publish or perish" incentives that favor eye-catching results over replication attempts—rarely published due to perceived low novelty. A 2024 survey of biomedical researchers echoed this, with 72% affirming a reproducibility crisis and only 36% of those attempting self-replications publishing the outcomes, often due to null findings facing rejection. Recent large-scale efforts, such as a 2025 project replicating high-impact biomedical experiments, confirmed rates below 50%, aligning with prior estimates and revealing persistent gaps despite awareness. Journals contribute causally through gatekeeping that amplifies questionable findings: emphasis on statistical significance incentivizes practices like p-hacking or HARKing (hypothesizing after results are known), while replication studies encounter barriers to publication absent groundbreaking implications. This systemic tilt toward Type I errors (false positives) erodes public trust, as non-replicable results inform policy, funding, and clinical trials; for instance, irreproducible preclinical data has contributed to high attrition rates in pharmaceutical pipelines, with billions in annual losses. Reforms like preregistration and journals dedicated to replications (e.g., eLife's policies) aim to counteract these failures, but adoption remains uneven, as traditional metrics undervalue confirmatory work.
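The effect-size inflation reported by the Reproducibility Project is what one would expect when underpowered studies are filtered through a p < 0.05 significance threshold. The simulation below is an illustrative sketch of that selection mechanism only—it is not the Project's methodology, and the parameter values (true effect 0.2, 20 subjects per arm) are assumptions chosen to make the effect visible.

```python
import math
import random
import statistics

def simulate(true_d=0.2, n_per_arm=20, studies=20_000, z_crit=1.96, seed=1):
    """Simulate two-group studies with true standardized effect `true_d`.

    Each study observes an effect estimate with standard error
    sqrt(2/n) (unit-variance outcomes); only estimates significant at
    p < .05 are 'published'. Returns the mean published effect and the
    fraction of studies passing the filter.
    """
    rng = random.Random(seed)
    se = math.sqrt(2 / n_per_arm)
    published = []
    for _ in range(studies):
        d_hat = rng.gauss(true_d, se)      # one study's observed effect
        if abs(d_hat) / se > z_crit:       # significance filter at p < .05
            published.append(d_hat)
    return statistics.mean(published), len(published) / studies

mean_pub, rate = simulate()
print(f"true effect 0.20; mean published effect {mean_pub:.2f}; "
      f"{rate:.0%} of studies pass the filter")
# With roughly 10% power, published effects run about three times the
# true effect, so faithful replications regress to much smaller estimates.
```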

Ideological Biases in Review and Publication

Scientific journals, particularly in the social sciences and humanities, display ideological biases in review and publication, largely attributable to the political homogeneity of academic gatekeepers such as editors and reviewers. Surveys of U.S. faculty reveal that over 60% self-identify as liberal, with liberal or far-left identifiers comprising nearly 60% in recent data, resulting in Democrat-to-Republican ratios exceeding 10:1 in many disciplines. This skew, more pronounced in elite institutions, fosters environments where conservative or dissenting perspectives face systemic disadvantages, as reviewers predisposed to one ideology may undervalue or reject incongruent findings. Empirical studies confirm discriminatory tendencies in review processes. A 2012 survey of social psychologists found that 14% of respondents indicated they would discriminate against conservatives when reviewing manuscripts, with higher rates (up to one in six) for decisions on symposium invitations or hires, patterns that extend to grant reviews. In legal scholarship, analyses of law review selections show ideological screening by student editors, who favor articles aligning with liberal viewpoints, driven by their ability to gauge quality through an ideological lens. Such biases manifest in lower acceptance rates for research challenging progressive orthodoxies, such as studies on reverse discrimination, which undergo heightened methodological scrutiny compared to aligned topics. Quantitative assessments of published content further evidence a liberal tilt. An analysis of over 9,000 articles on 12 controversial topics rated their ideological stance on a 1 (liberal) to 5 (conservative) scale, yielding a mean of 2.71—indicating slight liberal bias overall, with stronger skews on certain topics (mean 1.91). The 2018 Grievance Studies project, involving fabricated papers infused with progressive ideological tropes, resulted in seven acceptances or revise-and-resubmit decisions at peer-reviewed journals, exposing how conformity to prevailing dogmas can override evidentiary standards in fields susceptible to ideological capture. These patterns, while less documented in the natural sciences, reflect broader institutional incentives prioritizing narrative alignment over empirical rigor, eroding peer review's purported neutrality.

Commercialization and Access Inequities

Commercial publishers in scientific journals, such as Elsevier under RELX, generate substantial profits from content largely produced and peer-reviewed by unpaid academics funded by public grants. In 2024, RELX's scientific, technical, and medical division, encompassing Elsevier, reported an adjusted operating profit of £1.17 billion, contributing to the parent company's overall profit rise to £3.2 billion, a 10% increase from prior years. RELX achieved a profit margin nearing 40% in 2023, with global industry revenues exceeding £19 billion, often criticized for extracting value from taxpayer-supported research without commensurate costs of production, as actual publishing expenses represent only 10-15% of revenues charged to authors or institutions. Paywalls in subscription-based journals exacerbate access inequities, restricting availability of approximately 75% of scholarly documents across disciplines, thereby limiting dissemination to researchers in under-resourced institutions and developing countries lacking institutional subscriptions. This barrier disproportionately affects global-south scholars, where high subscription costs—far exceeding those of consumer media—divert budgets from other needs, despite research often being publicly funded, creating a "double payment" dynamic where taxpayers finance both creation and restricted access. The shift toward open-access models, intended to mitigate paywalls, introduces new inequities through article processing charges (APCs), which range from $500 to over $6,000 per article and correlate positively with journal impact factors. Median APCs reached $2,820 in health sciences journals by 2024, with hybrid journals charging significantly more ($3,710 versus $1,735 for fully open-access ones), placing financial burdens on authors without grant support and perpetuating advantages for well-funded researchers in wealthy institutions. High-impact journals impose steeper APCs, widening gaps for unaffiliated or low-resource scholars, as evidenced by global analyses showing APC-funded publishing reinforcing rather than resolving publication inequalities.

Societal Impact and Reforms

Contributions to Scientific Progress

Scientific journals have advanced knowledge accumulation since their inception in the 17th century by providing a structured mechanism for disseminating experimental results and theoretical insights beyond personal correspondence or lectures. The launch of Philosophical Transactions by the Royal Society in March 1665 established the periodical format, enabling geographically dispersed researchers to access, critique, and build upon findings such as Robert Boyle's observations on air pressure. This archival function created a cumulative record, fostering incremental progress through verifiable replication and extension of prior work. Peer-reviewed publication in journals has elevated the evidentiary threshold for accepted claims, promoting methodological rigor and reducing reliance on anecdotal or untested assertions. By subjecting manuscripts to expert scrutiny, journals mitigate the propagation of flawed ideas, as evidenced by historical instances where peer feedback refined submissions prior to print, such as early contributions to the Royal Society's proceedings. Empirical analyses indicate that reviewed articles exhibit higher citation rates and spur subsequent discoveries, underscoring the process's role in channeling resources toward validated hypotheses. For example, Albert Einstein's 1905 papers on special relativity and the photoelectric effect, published in Annalen der Physik, directly catalyzed quantum theory and modern physics by integrating mathematical formalism with experimental data. Journals have specialized into disciplinary silos, accelerating domain-specific advancements by concentrating expertise and enabling rapid iteration within focused communities. In medicine, serial publications of incremental findings—such as those detailing penicillin's isolation in the British Journal of Experimental Pathology (1929) and its subsequent refinement—facilitated therapeutic breakthroughs that halved infection-related mortality rates in the mid-20th century. Similarly, the 1953 Nature article by Watson and Crick on DNA's double-helix structure galvanized molecular biology, leading to technologies like polymerase chain reaction (PCR) amplification by the 1980s. These platforms also incentivize hypothesis testing through prestige and career rewards tied to publication, driving empirical validation over speculation. By standardizing formats for data presentation and citation, journals enhance the comparability of outputs, permitting meta-analyses and cross-validation that amplify collective insight. Historical shifts, including the 19th-century proliferation of field-specific titles, correlated with growth in patentable innovations and technological applications, as scientists leveraged published syntheses for practical ends. Despite occasional delays or conservatism, the system's emphasis on transparency and replication has empirically correlated with paradigm shifts, such as the acceptance of plate tectonics through accumulated journal-documented observations.

Criticisms of Systemic Flaws

The peer review process, foundational to scientific journals, exhibits systemic weaknesses in detecting errors, fraud, and biases, often failing to ensure the reliability of published work. Empirical studies have demonstrated that peer reviewers frequently overlook deliberately inserted methodological flaws; for instance, one experiment found that reviewers detected an average of only about 30% of fabricated errors in manuscripts. Similarly, a fraudulent paper with obvious scientific inconsistencies was accepted by 157 open-access journals, highlighting the process's vulnerability to deception despite its gatekeeping role. These failures stem from overburdened reviewers, inconsistent standards, and a lack of accountability in reviewing, with little evidence that the process systematically improves manuscript quality beyond basic screening. Incentive structures in academia exacerbate these issues by prioritizing publication quantity over rigor, fostering questionable research practices and misconduct. The "publish or perish" culture correlates with higher rates of data fabrication, selective reporting, and p-hacking, as researchers face career pressures tied to output metrics rather than replicability or long-term impact; surveys indicate that publication pressure explains variance in self-reported misconduct across demographics and fields. Journals' emphasis on novel, positive results amplifies publication bias, where null or negative findings are underrepresented, distorting the scientific record and incentivizing hypothesis-driven designs that yield positive results at the expense of truth-seeking exploration. This misalignment, rooted in tenure and promotion evaluations, has prompted federal scrutiny of journals' role in perpetuating flawed incentives. Commercial dynamics further entrench these flaws, as profit-oriented publishers exploit publicly funded research while imposing barriers to access and validation. High subscription fees and article processing charges create inequities, yet fail to deliver commensurate value, with incomplete data or code sharing in publications hindering independent verification and reuse. The system's reliance on voluntary, unpaid peer labor, combined with opaque editorial processes, undermines accountability, allowing predatory practices to proliferate alongside prestigious outlets. Reforms targeting these structural incentives, such as preregistration mandates or open-data policies, remain inconsistently adopted, perpetuating a cycle where systemic flaws compromise the journals' role in advancing verifiable knowledge.

Emerging Alternatives and Future Trajectories

Preprint servers have emerged as a prominent alternative to traditional journal publication, enabling rapid dissemination of research without initial peer review. Platforms like arXiv, launched in 1991, and bioRxiv, established in 2013, hosted over 2.3 million submissions by 2023, allowing authors to share findings immediately and solicit community feedback. This model accelerates scientific communication, with studies showing preprints garnering citations faster than journal versions in fields like physics and biology, though they lack formal validation and raise concerns about unvetted errors or scooping.

Post-publication peer review platforms represent another shift, inverting the traditional sequence by publishing articles first and inviting open scrutiny afterward. F1000Research, introduced in 2013, publishes all submissions transparently and assigns DOIs upon upload, with signed referee reports published alongside each article, fostering accountability but exposing works to potential flaws before refinement. PubPeer, active since 2012, facilitates anonymous commenting on published papers via DOIs, aiding detection of image manipulations in over 10,000 articles by 2023, though its informal nature limits structured evaluation. These systems address pre-publication bottlenecks but depend on voluntary reviewer participation, yielding uneven coverage compared to journal gatekeeping.

Decentralized science (DeSci) initiatives leverage blockchain technology for publishing, aiming to bypass centralized publishers' costs and biases through immutable ledgers and token-based incentives. Projects on Ethereum-based platforms enable smart-contract-driven reviews and governance via decentralized autonomous organizations (DAOs), with early adopters reporting reduced article processing charges (traditional APCs average $2,000–$3,000) while ensuring provenance tracking, the basic mechanism of which is sketched below. By 2023, DeSci ecosystems included over 50 protocols integrating tools for funding, review, and data sharing, though scalability issues and low mainstream uptake persist due to technical barriers and entrenched academic incentives.

Future trajectories may converge on models combining preprints with overlay validation layers, potentially diminishing reliance on high-impact journals amid rising output, with global article totals exceeding 3 million annually by 2022. Reforms targeting the underlying social dilemmas, such as coordinated shifts to communal platforms, could replace legacy systems, but empirical evidence suggests metrics like impact factors will persist unless alternatives are tied to institutional rewards. Blockchain's transparency might mitigate the ideological filtering observed in some editorial processes, yet widespread adoption hinges on demonstrating superior rigor and efficiency over current practices.
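
As a rough illustration of the provenance guarantees these systems aim for, the sketch below implements a toy append-only hash chain in Python. It is a minimal, hypothetical construction: the class and field names are invented, and it models tamper-evidence only, not the consensus, token incentives, or DAO governance of any actual DeSci protocol. Each record commits to a manuscript's content hash and to the previous record, so any retroactive edit invalidates every later link.

```python
import hashlib
import json
import time

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class PublicationLedger:
    """Toy append-only ledger made tamper-evident by hash chaining."""

    def __init__(self):
        self.records = []

    def append(self, manuscript: bytes, author: str, action: str) -> dict:
        record = {
            "index": len(self.records),
            "author": author,
            "action": action,                    # e.g. "submitted", "reviewed"
            "timestamp": time.time(),
            "content_hash": sha256(manuscript),  # commits to the exact text
            "prev_hash": self.records[-1]["hash"] if self.records else "0" * 64,
        }
        record["hash"] = sha256(json.dumps(record, sort_keys=True).encode())
        self.records.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every link; returns False if any record was altered."""
        prev = "0" * 64
        for r in self.records:
            body = {k: v for k, v in r.items() if k != "hash"}
            if r["prev_hash"] != prev:
                return False
            if r["hash"] != sha256(json.dumps(body, sort_keys=True).encode()):
                return False
            prev = r["hash"]
        return True

ledger = PublicationLedger()
ledger.append(b"Manuscript v1 text ...", author="alice", action="submitted")
ledger.append(b"Manuscript v1 text ...", author="bob", action="reviewed")
print(ledger.verify())                    # True: chain intact
ledger.records[0]["author"] = "mallory"   # attempt to rewrite history
print(ledger.verify())                    # False: tampering detected
```

Verification is just recomputing the chain, which is why such ledgers make silent alteration of the record detectable even without a trusted central publisher.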

References

  1. [1]
    scientific journal - Understanding Science
    Publication that contains firsthand reports of scientific research, often reviewed by experts. In these articles, scientists describe a study.
  2. [2]
    Definition of peer-reviewed scientific journal - National Cancer Institute
    A publication that contains original articles that have been written by scientists and evaluated for technical and scientific quality and correctness by ...
  3. [3]
    What is a scientific journal? How is it used for research? - FAQ
    Mar 19, 2021 · Scientific journals represent the most vital means for disseminating research findings and are usually specialized for different academic disciplines or ...
  4. [4]
    Scientific Publishing in Biomedicine: A Brief History of Scientific ...
    The publication of the first scientific journals dates back to about 358 years ago. In 1665, following the publication of Le Journal des Sçavans (Journal of the ...
  5. [5]
    History of scientific journals | Royal Society
    In 2017 we digitised original copies of all the journals from 1665 to 1996 to create the Royal Society Journals Archive, a resource that provides a fascinating ...
  6. [6]
    Getting Published in Scientific Journals | Science | AAAS
    Publication in peer-reviewed journals is how scientists communicate their results to the scientific community; it is also an enduring record of your small--or ...
  7. [7]
    Scrutinizing science: Peer review
    Peer-reviewed journals are publications in which scientific contributions have been vetted by experts in the relevant field. Peer-reviewed articles provide ...
  8. [8]
    Peer review - Why, when and how - ScienceDirect.com
    Peer review ensures scientific information is truthful, valid, and accurate, and helps authors improve their work before publication.
  9. [9]
    The Purpose of Publication and Responsibilities for Sharing - NCBI
    The purpose of publication is to create a public record of knowledge, move science forward, and authors must share data for others to build upon.
  10. [10]
    What purpose does a scientific journal serve? - Quora
    Oct 20, 2016 · The main function of a research journal is registration, certification, dissemination and archiving. The academic journal is still perceived ...
  11. [11]
    The changing roles of scientific journals - PMC - NIH
    Oct 4, 2024 · Diversification of journal types. The original purpose of journals was to communicate research findings to inform the scientific community ...
  12. [12]
    2.3 Reviewer Roles and Responsibilities - Council of Science Editors
    Peer review is the principal mechanism by which the quality of research is judged. Most funding decisions in science and the academic advancement of scientists ...
  13. [13]
    Scientific Papers | Learn Science at Scitable - Nature
    Scientific papers are for sharing your own original research work with other scientists or for reviewing the research conducted by others.
  14. [14]
    Peer Review in Scientific Publications: Benefits, Critiques, & A ...
    Peer review is now standard practice by most credible scientific journals, and is an essential part of determining the credibility and quality of work submitted ...
  15. [15]
    Understanding peer review - Author Services - Taylor & Francis
    Peer review is the independent assessment of your research paper by experts in your field. The purpose of peer review is to evaluate the paper's quality and ...
  16. [16]
    The essential role of peer review - PMC - NIH
    Peer reviewing is the manner in which we self-monitor our work. We should make sure that it remains an important factor in the whole process that transfers ...
  17. [17]
    Q. What's the difference between scholarly journals and popular ...
    Jul 14, 2025 · While both kinds of periodicals may have information about the same topic, the presentation, depth, and type of information will be different.
  18. [18]
    Scholarly vs. Popular: Characteristics of Scholarly Resources
    Aug 8, 2024 · Scholarly journals vs. popular magazines: magazine articles are chosen and checked by editors, while journal articles must pass a review by experts in the same field ...
  19. [19]
    What is the difference between scholarly and peer reviewed journals?
    A scholarly publication is regarded as scholarly if it is authored by experts for experts. The publication is academic in focus as it reports original ...
  20. [20]
    Journals vs. Magazines - Journals and Magazines
    Feb 23, 2023 · Characteristics of journals vs. magazines; audience: magazines address non-professionals and a general audience in non-technical language, while journals address professors and researchers ...
  21. [21]
    [PDF] The Republic of Letters and the Origins of Scientific Knowledge ...
    The advent of scientific journals in the 17th century helped power the Scientific Revolution by allowing researchers to communicate across time and space ...
  22. [22]
    Journal des sçavans: The First Scientific Journal Begins Publication
    The Journal des sçavans, the first French literary and scientific journal, was published on January 5, 1665, and was the earliest scientific journal in Europe.
  23. [23]
    The birth and early days of the Philosophical Transactions - Journals
    In a like way the Philosophical Transactions was not the earliest scientific periodical to come forth, since the first number of the Journal des Sçavans ...
  24. [24]
    History of the Royal Society
    They became influential figures in the early years of the Society: Oldenburg by establishing the journal Philosophical Transactions of the Royal Society in 1665 ...
  25. [25]
    Philosophical Transactions in the 17th Century
    When the inaugural meeting of what was to become the Royal Society was held at Gresham College in late November 1660, many of Oldenburg's philosophical friends ...
  26. [26]
    Science periodicals in the nineteenth and twenty-first centuries - PMC
    Oct 5, 2016 · Science periodicals grew from 100 to 10,000 in the 19th century, facilitating science growth. They help create scientific communities and ...
  27. [27]
    Science periodicals in the nineteenth and twenty-first centuries
    Oct 5, 2016 · As the historical papers in this volume show, the rise of the scientific journal in the nineteenth century was marked by instability and ...
  28. [28]
    350 years of scientific periodicals - PMC - PubMed Central - NIH
    As the nineteenth century wore on, the increased desire to publish articles was reflected in the increased bulk of the volumes of Philosophical Transactions; ...
  29. [29]
    History of Nature
    The first issue of Nature was published on 4 November 1869. Many earlier publishing adventures in science had failed dismally, but Nature's eventual success ...
  30. [30]
    19th century – A History of Scientific Journals
    Throughout the nineteenth century the number of people conducting scientific research, or working in a scientific job, was increasing rapidly. One of the ...
  31. [31]
    Scopus 1900–2020: Growth in articles, abstracts, countries, fields ...
    Science is not static, with the number of active journals increasing at a rate of 3.3%–4.7% per year between 1900 and 1996 (Gu & Blackmore, 2016; Mabe & Amin, ...
  32. [32]
    Growth rates of modern science: a latent piecewise growth curve ...
    Oct 7, 2021 · The results of the unrestricted growth of science calculations show that the overall growth rate amounts to 4.10% with a doubling time of 17.3 years.
  33. [33]
    NSF and postwar US science | Physics Today - AIP Publishing
    May 1, 2020 · After World War II, scientists began to directly connect the state of US education with national security concerns.
  34. [34]
    How the US became a science superpower | University of California
    Sep 11, 2025 · By the 1960s, the federal government was spending about two percent of U.S. GDP on research and development. · What are some examples of those ...
  35. [35]
    How Academic Science Gave Its Soul to the Publishing Industry
    This international boost in research support in turn fed explosive growth in scientific publication. Many journals at the time were financially stressed and ...
  36. [36]
    A, Increased number of scholarly journals from their birth (1665). The...
    May 16, 2025 · The number of scientific journals has exponentially grown from 10 at the end of the 17th century to 100,000 at the end of the 20th century.
  37. [37]
    The arXiv preprint server hits 1 million articles | Nature
    Dec 30, 2014 · The repository, launched as an 'electronic bulletin board' in August 1991, just before the dawn of the World Wide Web, took 17 years to ...
  38. [38]
    Lessons from arXiv's 30 years of information sharing - PMC - NIH
    Aug 4, 2021 · arXiv began in the print-only era in 1991. Started at Los Alamos National Laboratory, and known as xxx.lanl.gov until 1998, it was intended to ...
  39. [39]
    PubMed Celebrates its 10th Anniversary!. NLM Technical Bulletin ...
    Oct 5, 2006 · PubMed was first released in January 1996 as an experimental database under the Entrez retrieval system with full access to MEDLINE · On January ...
  40. [40]
    ScienceDirect 25 years of discovery - Elsevier
    Launch of ScienceDirect. This year, 2024, marks a milestone in ScienceDirect's distinguished history: the celebration of 25 years of research, scholarship ...
  41. [41]
    About the Journal | First Monday
    First Monday is one of the first openly accessible, peer-reviewed journals on the Internet, solely devoted to the Internet.
  42. [42]
    As the world turns: scientific publishing in the digital era
    Feb 26, 2024 · A quarter of the way into the 21st Century the technology of encoding and transmitting information in digital form is in full flower.
  43. [43]
    Changes in scientific publishing and possible impact on authors ...
    May 29, 2024 · This article describes and comments on the major changes that recently deeply modified the scientific publishing system and analyzes how they potentially ...
  44. [44]
    21st Century Science Overload
    Jan 7, 2017 · We passed the 50 million mark in terms of the total number of science papers published since 1665, and approximately 2.5 million new scientific papers are ...
  45. [45]
    Publications Output: U.S. Trends and International Comparisons | NSF
    Dec 11, 2023 · From 2012 to 2022, the global yearly publication total grew by 59%. In terms of growth for these two largest producers, China and the United ...
  46. [46]
    What's the Big Deal? - Ithaka S+R
    Jun 22, 2021 · For nearly two decades, a number of academic libraries have licensed a great deal of their journal content in the form of large bundles from ...
  47. [47]
    Full article: Lessons Learned from Reevaluating Big Deals with Unsub
    Oct 26, 2022 · Bundles of journals started to be sold to libraries by Academic Press (now part of Elsevier) in 1996 (Coghill, 2019; Galbraith & Hess, ...
  48. [48]
    Is It Such a Big Deal? On the Cost of Journal Use in the Digital Era
    5 The dilemma of dramatically increasing subscription rates and budget cuts faced by modern university libraries has been dubbed the serials crisis.
  49. [49]
    The “Serials Crisis” Explained… - Tufts University
    Rapidly rising journal subscription prices have severely eroded the ability of libraries, universities, and scholars to purchase the publications necessary for ...
  50. [50]
    The term “serials crisis” has come to be common shorthand for the ...
    The average cost of a political science journal has increased 59% since 2000, from $226 to $360, followed by sociology (54%), business and economics (49%), and ...
  51. [51]
    The Cost of Elsevier | In the Dark - telescoper.blog
    Mar 8, 2023 · This annual report for RELX contains the accounts for Elsevier for 2022 in which I found the following headline figures: Revenue: €3.26 billion.
  52. [52]
    Elsevier parent company reports 10% rise in profit, to £3.2bn
    Feb 13, 2025 · Elsevier parent company reports 10% rise in profit, to £3.2bn · Scientific arm of Relx reports adjusted operating profit of £1.17 billion in 2024.
  53. [53]
    Evaluating big deal journal bundles - PNAS
    Jun 16, 2014 · We report the results of this investigation and compare the bundled subscription prices charged by for-profit and nonprofit publishers.
  54. [54]
    [PDF] Budapest Open Access Initiative
    Feb 14, 2002 · The public good they make possible is the world-wide electronic distribution of the peer-reviewed journal literature and completely free and.
  55. [55]
    Article processing charges for open access journal publishing: A ...
    Jul 3, 2023 · The average APC per journal increased slightly, but the average per article increased from US $904 to US $1626, indicating that authors chose to ...
  56. [56]
    Uptake of Open Access (OA) - STM Association
    Between 2014 and 2024, the percentage share of global articles, reviews and conference papers made available via gold has increased by 26%, from 14% to 40%.
  57. [57]
    Open Access: History, 20-Year Trends, and Projected Future for ...
    Feb 8, 2023 · Open access began with arXiv.org in 1991, with the BOAI in 2001. Open access articles increased from 30% in 2010 to 50% in 2019.
  58. [58]
    Springer Nature Says It Has Reached 50 Percent Open Access
    Apr 14, 2025 · In 2024, 82 percent of open-access articles in our hybrid journals were published via transformative agreements. These agreements are also ...
  59. [59]
    Hybrid Journals - Scientific News
    Oct 11, 2025 · Advantages: wider visibility for open access articles; authors can meet funding agency requirements for OA publication; increased citations ...
  60. [60]
    Article-processing charges as a barrier for science in low-to-medium ...
    However, most of the open access journals apply article-processing charges (APCs), which can cost more than USD 10,000.00. In regions where support for ...
  61. [61]
    Fully OA journals output shrank in 2023, but hybrid OA made up the ...
    Feb 5, 2025 · In 2023, 75% of the OASPA members' Open Access articles included in this dataset were published in fully OA journals, down from 79% last year.
  62. [62]
    [PDF] NSF 25-347 Open-Access Publishing in a Global Context
    Aug 6, 2025 · Closed-access articles, which decreased from 53% to 29%, with the percentage of Gold OA articles surpassing Closed-access articles in 2021.
  63. [63]
    Overpaid bankers? Think again. Scientific publishers top the charts
    Aug 31, 2023 · RELX reports revenues from scientific publishing of EUR 2.7 billion, which corresponds to close to half of its total operating revenues.
  64. [64]
    [PDF] 2023 Annual Report - RELX
    Feb 22, 2024 · ... scientific, technical and medical research products; reports revenue of 8,553 rising to 9,161 (+7%) and EBITDA of 3,174 rising to 3,544 ...
  65. [65]
    Why Do Academic Publishers Reap the Financial Rewards While ...
    Jul 31, 2024 · The main reason that authors are not paid for academic publications is that their primary incentive for publishing isn't monetary. While ...
  66. [66]
    Publish or perish: Where are we heading? - PMC - NIH
    The increasing number of publication have led to rise in unethical practices, dubious research practices such as salami slicing, plagiarism, duplicate ...
  67. [67]
    The effects of the publish or perish culture on publications in the field ...
    This study aims to reveal the effects of publish or perish culture, which is a product of the marketization of science, in the field of educational ...
  68. [68]
    Article Processing Charges (APCs) and the new enclosure of research
    Aug 11, 2022 · In 2020 we estimate the annual revenues from article processing charges (APCs) among major scholarly journal publishers to have exceeded 2 ...
  69. [69]
    Is the pay-to-publish model for open access pricing scientists out?
    Aug 1, 2024 · Many authors are opting to publish in journals with higher article-processing charges, driving the median paid APC higher than the median ...
  70. [70]
    Scientific Utopia: II. Restructuring Incentives and Practices to ...
    Publishing norms emphasize novel, positive results. As such, disciplinary incentives encourage design, analysis, and reporting decisions that elicit positive ...
  71. [71]
    Publish or Perish Culture and its Importance - Paperpal
    Dec 8, 2023 · The pressure to publish can lead to unethical practices, including plagiarism, self-plagiarism, and data manipulation. Researchers may be ...
  72. [72]
    The 'publish or perish' mentality is fuelling research paper retractions
    Oct 3, 2024 · Published research papers can be retracted if there is an issue with their accuracy or integrity. And in recent years, the number of retractions has been ...
  73. [73]
    Predatory Journals: What They Are and How to Avoid Them - NIH
    Predatory journals—also called fraudulent, deceptive, or pseudo-journals—are publications that claim to be legitimate scholarly journals, but misrepresent ...
  74. [74]
    How common are cash-per-publication incentives in different ...
    Aug 9, 2023 · They say that cash-per-publication incentives are common and that scientists who publish in the top Western journals can earn in excess of $100 ...
  75. [75]
    Predatory journals and their effects on scientific research community
    Presence of predatory journals has helped many pseudo-researchers to be promoted in countries where academic promotion is still based on “publish or perish” ...
  76. [76]
    Full article: Understanding 'Predatory' Journals and Implications for ...
    Aug 1, 2024 · This article draws on a broad body of research to explore the factors that enable predatory journals to thrive, some of which arise from inequities.
  77. [77]
    7 steps to publishing in a scientific journal - Elsevier
    These guidelines focus on preparing a full article (including a literature review), whether based on qualitative or quantitative methodology.
  78. [78]
    Submission & Peer Review - Wiley Author Services
    Your submission checklist: · Review your chosen journal's submission requirements · Register for an ORCID iD and associate it to your manuscript when you submit.
  79. [79]
    Making your submission - Author Services - Taylor & Francis
    1. Read your chosen journal's submission requirements · 2. Write a compelling cover letter · 3. Get familiar with Taylor & Francis Editorial policies · 4. Learn ...
  80. [80]
    Editorial process - How to publish a scientific paper
    Aug 19, 2025 · 1. Submission: The manuscript is submitted by the corresponding author, and receives a submission or tracking ID number. 2. Preliminary editorial screening.
  81. [81]
    Understanding the Publishing Process - PLOS
    Jan 14, 2025 · The publishing process involves initial checks, editorial screening, peer review, revision, and formal acceptance, with journals overseeing the ...
  82. [82]
    UNDERSTANDING THE PEER REVIEW PROCESS - PMC - NIH
    Peer review is a vital part of the quality control mechanism that is used to determine what is published, and what is not.
  83. [83]
    Peer review process | What is peer review - Editor Resources
    The process involves both the journal editors and independent expert reviewers, who evaluate the submitted articles. Peer reviewers can recommend whether or not ...
  84. [84]
    Peer Review - What is Peer Review: Definition, Process, Types and ...
    Mar 22, 2024 · Peer review, also known as refereeing, is a process utilized by journals to assess the quality, validity, and relevance of scholarly articles ...
  85. [85]
    Types of Peer Review - Wiley Author Services
    The three most common types of peer review are single-anonymized, double-anonymized, and open peer review.
  86. [86]
    How to perform a high-quality peer review - PMC - NIH
    Feb 3, 2025 · Peer review serves as a quality control mechanism, ensuring that only robust, accurate and meaningful scientific work is published. For medical ...
  87. [87]
    Review and Publication Times and Reporting Across Journals on ...
    May 27, 2025 · Median (range) time to first decision before peer review was 10.0 (2.0-67.0) days; to first peer-reviewed decision, 60.5 (21.0-263.0) days; and ...
  88. [88]
    Editorial and Peer Review Process | PLOS One - Research journals
    The time to render a first decision averages about 43 days, but times vary depending on how long it takes for the editor to receive and assess reviews.
  89. [89]
    The present and future of peer review: Ideas, interventions ... - PNAS
    Jan 27, 2025 · Peer review is intended to support scientific integrity, correct errors, and democratize decisions about publication and funding.
  90. [90]
    Peer Review Bias: A Critical Review - ScienceDirect.com
    Peer review bias can be defined as a violation of impartiality in the evaluation of a submission.
  91. [91]
    Peer Review Bias: A Critical Review - Mayo Clinic Proceedings
    Failure of peer reviewers to assess the quality of studies Poor reproducibility among reviewers, inability to recruit knowledgeable reviewers, attrition of ...
  92. [92]
    Reducing bias in the peer‐review process - Frair - The Wildlife Society
    Mar 24, 2025 · Systemic biases in the scholarly review process erode public trust in science. It is incumbent upon editorial boards to consider policy and ...
  93. [93]
    Exploring Bias in Scientific Peer Review: An ASCO Initiative
    Oct 10, 2022 · To investigate implicit bias (IB) in the peer review process across ASCO and Conquer Cancer Foundation and to propose potential mitigation strategies.
  94. [94]
    Behind the curtain of the editorial process: how editors decide! - PMC
    Mar 1, 2023 · Editors use associate editors and referees, with EiCs making final decisions. They consider if the work is new, true, and if anyone cares, ...
  95. [95]
    Making an Editorial Decision | BMC Medicine - BioMed Central
    Editors decide based on reviewer reports and their own reading. They may reject if concerns are unaddressable, request revisions if acceptable after changes, ...
  96. [96]
    A model of the editorial process in academic journals - ScienceDirect
    Editors of academic journals make their acceptance or rejection decisions about submitted papers based on their own prior assessment of the intrinsic quality of ...
  97. [97]
    Why is it so hard to get a paper published in Science or Nature, and ...
    Jun 3, 2025 · Science, for example, says on their web site that the acceptance rate is <6%. Why is this? Both Nature and Science are considered top journals.
  98. [98]
    Journal acceptance rates? : r/AskAcademia - Reddit
    Nov 29, 2018 · I know the Nature-branded Nature Press journals in general have an approximately 5% acceptance rate, which is largely due to an ~85% desk reject ...
  99. [99]
    Measuring the effectiveness of scientific gatekeeping - PNAS
    The decisions of gatekeepers—editors and peer reviewers—legitimize scientific findings, distribute professional rewards, and influence future research.
  100. [100]
    Impact of institutional affiliation bias in the peer review process
    Mar 11, 2025 · A common bias involves favouring submissions from renowned institutions, which makes it harder for authors from lesser-known institutions to get published.
  101. [101]
    Are editors of top-tier journals biased in their decisions by the ...
    Feb 9, 2024 · The answer is yes. Here's a recent example. Our analysis shows that editors tend to be more likely to invite high-scoring manuscripts for revision or ...
  102. [102]
    A survey of biomedical journals to detect editorial bias and nepotistic ...
    This study explores the relationship between hyper-prolific authors and a journal's editorial team, finding a subset of journals where a few authors, often ...
  103. [103]
    Gender and geographical bias in the editorial decision-making ...
    We did not identify evidence of gender bias during the editorial decision-making process for papers sent out to peer review.
  104. [104]
    Advisory Note on Bias in Science Publishing
    Unacceptable bias occurs when the decision to send a paper for review, or the decision to accept it, is influenced by factors other than the scientific content ...
  105. [105]
    For Science's Gatekeepers, a Credibility Gap - The New York Times
    May 2, 2006 · Fraud is a substantial problem, and the attitude toward it has changed little over the years, other editors say. Some journals fail to retract ...
  106. [106]
    The peer review system has flaws. But it's still a barrier to bad science
    Sep 20, 2017 · The peer review system has flaws. But it's still a barrier to bad science · Scientific truth is built on replication · Why do we need peer review?
  107. [107]
    Scientific Publishing: Peer review without gatekeeping - eLife
    Oct 20, 2022 · eLife is changing its editorial process to emphasize public reviews and assessments of preprints by eliminating accept/reject decisions after peer review.
  108. [108]
    Retraction guidelines - COPE: Committee on Publication Ethics
    Aug 29, 2025 · Essential guidance for editors on when and how to retract articles, including the content of retraction notices, timing, and handling ...
  109. [109]
    Retractions: Guidance from the Committee on Publication Ethics ...
    Dec 1, 2009 · Notices of retraction should mention the reasons and basis for the retraction, to distinguish cases of misconduct from those of honest error; ...
  110. [110]
    How to correct a published paper | COPE
    Visibility of corrections and retractions. Find out COPE's position on how visible a correction, expression of concern, or retraction should be.
  111. [111]
    New COPE retraction guidelines address paper mills, third parties ...
    Sep 4, 2025 · COPE also recommends retracting papers with “any form of misrepresentation,” including “deception; fraud (eg, a paper mill); identity theft or ...
  112. [112]
    Post-publication peer review and the identification of methodological ...
    We aimed to determine to what extent systematic reviewers and post-preprint and post-publication peer review identified methodological and reporting issues in ...
  114. [114]
    Biomedical paper retractions have quadrupled in 20 years — why?
    May 31, 2024 · Of all the retracted papers, nearly 67% were withdrawn owing to misconduct and around 16% for honest errors.
  115. [115]
    Clarivate to stop counting citations to retracted articles in journals ...
    May 15, 2025 · But the overall retraction rate has risen recently, to about 0.2%, which, along with a decrease in the time it takes to retract papers, ...
  116. [116]
    Analysis of scientific paper retractions due to data problems
    The results show that since 2000, retractions due to data problems have increased significantly (p < 0.001), with the percentage in 2023 exceeding 75%. Among ...
  117. [117]
    Opening the black box of article retractions: exploring the causes ...
    Dec 18, 2024 · While great emphasis has been placed on articles retracted due to scientific misconduct, studies show many retractions are due to honest errors.
  118. [118]
    Retractions of scientific publications: responsibility and accountability
    Jun 15, 2014 · While the frequency of corrections has been constant throughout the various scientific fields (11), the frequency of retracted publications has ...
  119. [119]
    Self-correction in science: The effect of retraction on the frequency of ...
    Dec 7, 2022 · Retraction led to a decrease in average annual citation frequency from about 5 before, to 2 citations after retraction. In contrast, for non- ...
  120. [120]
    Continued use of retracted papers: Temporal trends in citations and ...
    Dec 1, 2021 · Our temporal analyses show that retracted papers continued to be cited, but that old retracted papers stopped being cited as time progressed.
  121. [121]
    Retraction Watch – Tracking retractions as a window into the ...
    A sleuth who has identified several hundred articles describing clinical women's health research with untrustworthy data, leading to nearly 300 retractions, ...
  122. [122]
    Characterizing the effect of retractions on publishing careers - Nature
    Apr 11, 2025 · Retracting academic papers is a fundamental tool of quality control, but it may have far-reaching consequences for retracted authors and their careers.
  123. [123]
    Linking citation and retraction data reveals the demographics of ...
    The data suggest that approximately 4% of the top-cited scientists have at least 1 retraction. This is a conservative estimate, and the true rate may be higher ...
  124. [124]
    The introduction, methods, results, and discussion (IMRAD) structure
    IMRAD is a standardized structure for scientific articles, consisting of introduction, methods, results, and discussion.
  125. [125]
    Scientific Writing: Sections of a Paper - Guides - Duke University
    Typically scientific journal articles have the following sections: Abstract. Introduction. Materials & Methods. Results. Discussion. References used:.
  126. [126]
    Submission Guidelines | PLOS One
    Limit manuscript sections and sub-sections to 3 heading levels. Make sure heading levels are clearly indicated in the manuscript text. Layout and spacing.
  127. [127]
    How to Write and Publish a Research Paper for a Peer-Reviewed ...
    Apr 30, 2020 · Structure of the Introduction Section. The introduction section should be approximately three to five paragraphs in length. · Methods Section.
  128. [128]
    Anatomy of a Scholarly Article - NCSU Libraries - NC State University
    Every source cited in the paper will be in the References section. Most scholarly articles cite 20-50 sources, or more! If you found interesting or relevant ...
  130. [130]
    CONSORT 2025 statement: updated guideline for reporting ... - Nature
    Apr 15, 2025 · The CONSORT 2025 statement consists of a 30-item checklist of essential items that should be included when reporting the results of a randomized trial.
  131. [131]
    The Consolidated Standards of Reporting Trials (CONSORT ... - NIH
    The CONSORT allows readers to quickly review eligibility criteria and the exclusions that were applied to understand the final study population.
  132. [132]
    [PDF] How to write a Physics Journal Article
    This is a guide to the format, structure, and style of modern physics journal articles. You will note that there are significant differences between journal ...
  133. [133]
    [PDF] Writing Physics Papers
    Why are we Writing Papers? • What Physics Journals are there? • Structure of a Physics Article. • Style of Technical Papers. • Hints for Effective Writing ...
  134. [134]
    [PDF] How to Write a Paper in Scientific Journal Style and Format
    Use 12pt font, centered title, centered section headings, and include sections like abstract, introduction, methods, results, discussion, and literature cited.
  135. [135]
    Interactive Publication: The document as a research tool - PMC
    Multimedia and large stores of research data are increasingly considered indispensable to the scientific publishing enterprise. While “multimedia documents” ...
  136. [136]
    (PDF) Interactive formats: considerations for scientific publications
    Interactive media can be defined as the integration of various digital media content, including multimedia elements (text, graphics, audio, animation and video) ...
  137. [137]
    Innovative Research Formats - DN Life Science
    Innovative Research Formats refers to non-traditional ways of presenting scientific information that go beyond the standard research article format.
  138. [138]
    Journal Citation Reports: Learn the Basics
    Jan 24, 2025 · The Journal Impact Factor identifies the frequency with which an average article from a journal is cited in a particular year. You can use this ...
  139. [139]
    Journal Citation Reports: Document Types Included in the Impact ...
    The Impact Factor is calculated by dividing the number of citations in the Journal Citation Reports year (the numerator) by the total number of citable items ...
  140. [140]
    Scopus CiteScore - Elsevier
    Calculated using data from Scopus, CiteScore metrics help you evaluate journals, book series and conference proceedings to empower well-informed decisions.
  141. [141]
    What is CiteScore and why should you care about it? - Elsevier
    Jun 3, 2021 · CiteScore is a metric that measures the impact of scholarly journals by calculating the average number of citations per article over a ...
  142. [142]
    SJR - About Us
    The SCImago Journal & Country Rank is a publicly available portal that includes the journals and country scientific indicators developed from the information ...
  143. [143]
    Journal Metrics - Research Impact - Research Guides at Ohio State ...
    Eigenfactor (see link at right) uses the source journals in Journal Citation Reports and an algorithm based on network theory and similar to Google's PageRank.
  144. [144]
    Impact Factor, Citation Analysis, and other Metrics
    Nov 22, 2024 · The h-index is an index that attempts to measure both the scientific productivity and the apparent scientific impact of a scientist. The index ...
  145. [145]
    Exposing the data behind the impact factor highlights its limitations
    Jul 6, 2016 · Citation distributions are broad, ranging from zero to more than 100 in many journals. · Distributions are skewed and overlapping, with the ...
  146. [146]
    Citation counts and journal impact factors do not capture some ...
    Aug 17, 2022 · Both citation counts and impact factors were weak and inconsistent predictors of research quality, so defined, and sometimes negatively related to quality.
  147. [147]
    What's wrong with the journal impact factor in 5 graphs | News - Nature
    Apr 3, 2018 · The impact factor is based on a two-year citation window, which makes it “ill-suited across all disciplines, as it covers only a small fraction of citations ...
  148. [148]
    Journal Impact Factor: Its Use, Significance and Limitations - PMC
    Impact factor is commonly used to evaluate the relative importance of a journal within its field and to measure the frequency with which the “average article” ...
  149. [149]
    The H-index is an unreliable research metric for evaluating the ...
    Jul 18, 2024 · The H-index is a widely used research metric for assessing the reputation of scientists. It is a numerical indicator that measures publication impact.
  150. [150]
    Halt the h-index - Leiden Madtrics
    May 19, 2021 · Another problem of the h-index is that it does not account for differences in publication and citation practices between and even within fields.
  151. [151]
    What's wrong with the h-index, according to its inventor - Nature
    Mar 24, 2020 · One downside is that it can deter researchers from innovative thinking. For instance, a student working under a professor with a high h-index ...
  152. [152]
    The impact factor game: an agent-based exploration of self-citation ...
    Oct 9, 2025 · The manipulation of the Impact Factor (IF) through editorial decisions inflating self-citations is a growing concern in academic publishing.
  153. [153]
    Scientific 'cartels' band together to cite each others' work - STAT News
    Jan 13, 2017 · A small number of scientists band together to reference each other's work, gaming the citation system to make their studies appear to be ...
  154. [154]
    Visualizing Citation Cartels - The Scholarly Kitchen
    Sep 26, 2016 · A visualization of the citation network of papers published in ACI (blue) and MIM (red) from 2013 through 2015.
  155. [155]
    Detecting anomalous citation groups in journal networks - Nature
    Jul 15, 2021 · In citation networks, a citation cartel is manifested as a group of journals that excessively cite papers published in other journals within the ...
  156. [156]
    Citation gaming induced by bibliometric evaluation: A country-level ...
    In particular, we exported from SCIval two metrics: (1) Citation Count including self-citations, and (2) Citation Count excluding self-citations. For both ...
  157. [157]
    When scientific citations go rogue: Uncovering 'sneaked references'
    Jul 9, 2024 · Citation counts heavily influence research funding, academic promotions and institutional rankings. Manipulating citations can lead to unjust ...
  158. [158]
    Quantitative research assessment: using metrics against gamed ...
    Nov 3, 2023 · This review examines several gaming practices, including authorship-based, citation-based, editorial-based, and journal-based gaming as well as gaming with ...
  159. [159]
    Citation manipulation through citation mills and pre-print servers
    Feb 14, 2025 · Citations are widely considered in scientists' evaluation. As such, scientists may be incentivized to inflate their citation counts.
  160. [160]
    Citation metrics for appraising scientists: misuse, gaming and proper ...
    Feb 4, 2020 · The number of publications, for example, is currently extremely easy to game, as there are thousands of journals (many of them unnecessary ...
  161. [161]
    New Measure Rates Quality of Research Journals' Policies to ...
    Feb 10, 2020 · TOP Factor assesses journal policies for the degree to which they promote core scholarly norms of transparency and reproducibility.
  162. [162]
    TOP Factor rates journals on transparency, openness | News - Nature
    Feb 18, 2020 · New tool seeks to change editorial practices. A new journal rating system aims to encourage scientific editors and publishers to rethink —and, ...
  163. [163]
    Do altmetrics point to the broader impact of research? An overview ...
    Altmetrics is a term to describe web-based metrics for the impact of publications and other scholarly material by using data from social media platforms.
  164. [164]
    A critical review on altmetrics: can we measure the social impact ...
    Altmetrics measure the digital attention received by a research output. They allow us to gauge the immediate social impact of an article by taking real-time ...
  165. [165]
    The Pros and Cons of the Use of Altmetrics in Research Assessment
    The potential advantages of altmetrics for research evaluation are that they may reflect important non-academic impacts and may appear before citations when an ...
  166. [166]
    What performance metrics should Journal Editors report? Or, is ...
    Sep 14, 2020 · Impact Factor is a measure of how often the average paper is cited in a particular year in a particular journal (Glänzel & Moed, 2002).
  167. [167]
    Reliability of editors' subjective quality ratings of peer reviews of ...
    Conclusions: Subjective editor ratings of individual reviewers were moderately reliable and correlated with reviewer ability to report manuscript flaws.
  168. [168]
    Estimating the reproducibility of psychological science
    Aug 28, 2015 · We conducted a large-scale, collaborative effort to obtain an initial estimate of the reproducibility of psychological science.
  169. [169]
    Reproducibility Project: Psychology - OSF
    We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original ...
  170. [170]
    Most laboratory cancer studies cannot be replicated, study shows
    Apr 4, 2012 · Most laboratory studies of cancer are wrong, says a former head of global cancer research at the biotechnology company Amgen.
  171. [171]
    how much can we rely on published data on potential drug targets?
    Aug 31, 2011 · The validity of published data on potential targets is crucial for companies when deciding to start novel projects.
  172. [172]
    Raise standards for preclinical cancer research - Nature
    Mar 28, 2012 · Unfortunately, Amgen's findings are consistent with those of others in industry. A team at Bayer HealthCare in Germany last year reported that ...
  173. [173]
    Biomedical researchers' perspectives on the reproducibility of ...
    Nov 5, 2024 · Key findings include that 72% of participants agreed there was a reproducibility crisis in biomedicine, with 27% of participants indicating ...
  174. [174]
    Huge reproducibility project fails to validate dozens of biomedical ...
    Apr 25, 2025 · The teams were able to replicate the results of less than half of the tested experiments. That rate is in keeping with that found by other large ...
  175. [175]
    Is There a Reproducibility Crisis? On the Need for Evidence-based ...
    The reasons provided for the reproducibility crisis (methodological/statistical shortcomings, a focus on novelty resulting in publication bias/file drawer ...
  176. [176]
    Reproducibility: The science communities' ticking timebomb. Can we ...
    Sep 27, 2022 · The fact that up to 65% of researchers have tried and failed to reproduce their own research is astonishing, to say the least.
  177. [177]
    Reproducibility and replicability in research: What 452 professors ...
    Mar 26, 2025 · In 2016, a survey published in Nature reported that more than 70% of researchers have attempted and failed to reproduce other scientists' ...
  178. [178]
    The Hyperpoliticization of Higher Ed: Trends in Faculty Political ...
    Higher education has recently made a hard left turn—sixty percent of faculty now identify as “liberal” or “far left.” This left-leaning supermajority is ...
  179. [179]
    Partisan Professors - CTSE@AEI.org - American Enterprise Institute
    Dec 2, 2024 · The results show that among those registered to vote by party ID, professors are almost all Democrats. That ratio is even stronger among those ...
  180. [180]
    Homogenous: The Political Affiliations of Elite Liberal Arts College ...
    Why Political Homogeneity Is Troubling. Political homogeneity is problematic because it biases research and teaching and reduces academic credibility. In a ...
  181. [181]
    Is research in social psychology politically biased? Systematic ...
    In a survey of social psychologists, Inbar and Lammers (2012) found that many respondents reported a willingness to discriminate against conservative colleagues ...
  182. [182]
    Yes, Ideological Bias in Academia is Real, and Communication ...
    Mar 6, 2018 · To the first question: Evidence indicates political skew in academia toward the left. Over a decade ago, Gross and Simmons's study on the ...
  183. [183]
    [PDF] Political Discrimination in the Law Review Selection Process
    Feb 12, 2018 · We find evidence that the ideological discrimination is driven by student editors' superior ability to ascertain the quality of articles that ...
  184. [184]
    The Disappearing Conservative Professor | National Affairs
    The study found that the proposals on reverse discrimination were the hardest to get approved, often because their research designs were scrutinized more ...
  185. [185]
    The Gatekeepers of Academia: Investigating Bias in Journal ...
    Aug 22, 2025 · Results suggest a slight liberal bias across topics, such that more liberal articles were published, with notable differences based on the topic ...
  186. [186]
    “The Grievance Studies Affair” Project: Reconstructing and ...
    May 4, 2020 · Other possible sources of bias include the gender of the reviewer and of the author, and the latter's academic position, institutional ...
  187. [187]
    Academic Grievance Studies and the Corruption of Scholarship
    Jan 22, 2020 · Peer review may need reform to prevent it from being susceptible to political, ideological, and other biases, but it remains the best system we ...
  188. [188]
    Academics Are Pushing Back on For-Profit Academic Publishing
    Jun 20, 2025 · Relx, the parent company of the “biggest player in this business,” Elsevier, reaped a profit margin of almost 40 percent in 2023, “rivalling ...
  189. [189]
    Is the staggeringly profitable business of scientific publishing bad for ...
    Sep 1, 2024 · With total global revenues of more than £19bn, it weighs in somewhere between the recording and the film industries in size, but it is far more ...
  190. [190]
    Scientific Papers Do Not Have to Be So Expensive | Age of Awareness
    Jan 12, 2024 · All this is happening even though it has been found that publishing a journal only costs 10 to 15% of what publishers charge authors to make ...
  191. [191]
    Worldwide inequality in access to full text scientific articles
    Oct 30, 2019 · One is forced to note that paywalls still limit access to approximately 75% of scholarly documents in all disciplines (Bosman & Kramer, 2018; ...
  192. [192]
    What's driving the inequality in scientific research?
    Jul 10, 2015 · Access is another issue. These coveted journals generally reside behind paywalls. This excludes those who cannot afford to pay for it, like ...
  193. [193]
    Why Are Subscription-Based Journals So Expensive? - Orvium
    May 14, 2021 · Academic journal subscriptions are much more expensive than magazine subscriptions. While a yearly subscription to the Journal of Coordination ...
  194. [194]
    Article Processing Charges (APCs) - Open Access Scholarship
    Aug 26, 2025 · The cost of APCs varies from journal to journal, but can range anywhere from $500 to $6,000. In a 2022 study of Gold OA journals indexed in the ...
  195. [195]
    Open Access Publishing Metrics, Cost, and Impact in Health ... - NIH
    Oct 16, 2024 · The median (IQR) APC for all journals was $2820.00 ($928.00-$3300.00). Associations were observed between impact factor and APC (β coefficient, ...
  196. [196]
    Paying to publish: A cross-sectional analysis of article processing ...
    Overall, the median cost to publish open access was significantly greater for hybrid journals compared with open access journals ($3710 vs $1735; P<0.0001).
  197. [197]
    Scrutinising what Open Access Journals Mean for Global Inequalities
    Nov 8, 2020 · In the current article, we tested our hypothesis by which high-impact journals tend to have higher Article Processing Charges (APCs) by comparing journal IF ...
  198. [198]
    350 years of scientific periodicals | Notes and Records - Journals
    Jul 15, 2015 · Over those 350 years, scientific periodicals have performed many roles. As well as storing records of research for the future, they have enabled geographically ...
  199. [199]
    The Peer Review Process: Past, Present, and Future
    Jun 16, 2024 · The aim of the peer review process is to help journal editors assess which manuscripts to publish, excluding papers that are not on topic or ...
  200. [200]
    The impact of peer review on the contribution potential of scientific ...
    Sep 15, 2021 · The peer-reviewing process has long been regarded as an indispensable tool in ensuring the quality of a scientific publication.
  201. [201]
    Peer Review | Baldwin - Encyclopedia of the History of Science
    Throughout the eighteenth century and well into the nineteenth, journal articles were just one way among many for researchers to communicate recent scientific ...
  202. [202]
    The role of journals in the growth of scientific knowledge
    Specialized scientific journals not only facilitate development of networks of scientists that focus on solving very specific problems (Kuhn, 1970), but also ...
  203. [203]
    When did peer review start | Researcher blog - F1000
    Oct 12, 2023 · In this blog, we explore the origins and evolution of peer review from its earliest roots to the present day.
  204. [204]
    The Rise and Fall of Scientific Journals and a Way Forward
    Jan 30, 2025 · Scientific journals initially aided science, but now face issues like commercial publishers and slow publishing. A new model with open access ...
  205. [205]
    Standards in the Face of Uncertainty: Peer Review Is Flawed and ...
    Thus the review process seemed not to be influenced by social issues. Two studies were carried out to determine how many deliberately inserted mistakes would be ...
  206. [206]
    Challenges and Controversies in Peer Review - ScienceDirect.com
    Despite numerous obvious scientific errors in the fraudulent paper, it was accepted for publication at 157 medical journals: “Of the 255 papers that ...
  207. [207]
    Peer review: a flawed process at the heart of science and journals
    So peer review is a flawed process, full of easily identified defects with little evidence that it works. Nevertheless, it is likely to remain central to ...
  208. [208]
    Factors associated with scientific misconduct and questionable ... - NIH
    Mar 26, 2019 · Our findings suggest that the variables of age, number of publications, geographical location, work role, and publication pressure explain ...
  209. [209]
    The misalignment of incentives in academic publishing and ... - PNAS
    In this perspective, we discuss the complex issues of incentive alignment in academic publishing and alternative publication models aimed at addressing these ...
  210. [210]
    Scientific Publishing Industry Faces Federal Scrutiny
    Oct 2, 2025 · Federal officials are raising long-standing concerns with research journals and the academic incentive structures propping them up.
  211. [211]
    The Business of Scientific Publishing | Science | AAAS
    May 16, 2024 · Publishers of academic/scientific/scholarly journals charged very high subscription fees to libraries as a captive audience.
  212. [212]
    Scientific Publishing: Enough is Enough - by Seemay Chou
    Jun 2, 2025 · Incomplete information: Key components of publications, such as data or code, often aren't shared to allow full review, reuse, and replication.
  213. [213]
    Preprints: What Role Do These Have in Communicating Scientific ...
    Because they are not journals, preprint servers have no Impact Factor and authors retain copyright of their articles. Each online version of an article ...
  214. [214]
    The evolution, benefits, and challenges of preprints and their ...
    Feb 20, 2022 · The concerns over preprints included a lack of scientific integrity, stealing ideas/ scooping data, priority issues regarding research ideas, ...
  215. [215]
    The consistency of impact of preprints and their journal publications
    Besides speeding up the publishing process, uploading new manuscripts on preprint servers has the benefits of citation advantage, pre-publication comments and ...
  216. [216]
    Open Peer Review | Essential Information from F1000Research
    Open peer review at F1000Research is fully transparent, post-publication, and formal, with reports and reviewer details published alongside the article.
  217. [217]
    Reviewing post-publication peer review - PMC
    Apr 4, 2015 · ... platforms where the review can readily take place, such as Faculty of 1000 (F1000), ResearchGate, and PubPeer, as well as blogs (Table 1).
  218. [218]
    A Guided Tour of Post-Publication Review Sites
    Nov 14, 2014 · PubPeer refers to itself as the online journal club that allows users to search for papers via DOIs, PMIDs, arXiv IDs, keywords and authors ...
  219. [219]
    Decentralising scientific publishing: can the blockchain improve ...
    We present a decentralised solution for managing scientific communication, based on distributed ledger technologies, also called blockchains.
  220. [220]
    A Novel Blockchain-Based Scientific Publishing System - MDPI
    We propose a decentralized blockchain-based scientific publication platform to eliminate the traditional publication system deficiencies.
  221. [221]
    Exploring the decentralized science ecosystem: insights ... - Frontiers
    Feb 27, 2025 · This study presents an analysis of the Decentralized Science (DeSci) landscape in 2023, focusing on organizational structures, technological foundations, and ...
  222. [222]
    What is DeSci? (Decentralized Science) - Chainlink
    Aug 1, 2025 · DeSci introduces a decentralized, transparent, and incentive-aligned infrastructure built on the innovations of blockchains, Web3, and DeFi.
  223. [223]
    Beyond the journal: The future of scientific publishing | FEBS Network
    Jun 28, 2024 · Some proponents of reform have argued that the solution is to replace traditional journals with a decentralized network under the governance of ...
  224. [224]
    Replacing academic journals | Royal Society Open Science
    Jul 19, 2023 · Replacing traditional journals with a more modern solution is not a new idea. Here, we propose ways to overcome the social dilemma underlying the decades of ...
  225. [225]
    Replacing academic journals - PMC - PubMed Central
    Replacing traditional journals with a more modern solution is not a new idea. Here, we propose ways to overcome the social dilemma underlying the decades of ...