
Metascience

Metascience, also known as meta-research or the science of science, employs scientific methods to investigate and enhance the processes, practices, and outcomes of scientific research itself. It focuses on evaluating key elements such as peer review, reproducibility, research funding, incentives, and dissemination to address systemic inefficiencies and improve the overall rigor and impact of research. Emerging as an interdisciplinary field, metascience integrates insights from philosophy, sociology, and statistics to study how science functions and why it sometimes falters. The roots of metascience trace back to philosophical inquiries into the scientific method, but it solidified as a distinct discipline in the 2010s, propelled by widespread concerns over the reproducibility crisis across fields like psychology and medicine. Notable early contributions include analyses of publication and citation patterns through scientometrics, which quantify bibliographic data to reveal biases in scientific output. By the mid-2010s, high-profile initiatives, such as replication projects in psychology, highlighted metascience's role in identifying flaws like p-hacking and selective reporting, leading to reforms in statistical practices and publication policies. Key areas of metascience encompass research integrity, open science practices, equity in scientific careers, and the influence of emerging technologies like artificial intelligence on research workflows. It aims to realign evaluation systems—such as grant allocations and journal metrics—to prioritize societal benefits over narrow academic metrics, thereby fostering greater transparency and inclusivity. Recent developments include the establishment of dedicated policy units, like the UK's Metascience Unit in 2024, and international collaborations such as the Metascience Alliance launched in 2025, which unites over 25 institutions to advance evidence-based improvements in global research ecosystems. Through empirical studies and interventions, metascience not only critiques science but actively drives its evolution to better serve researchers and society.

Definition and Overview

Definition

Metascience, also known as meta-research or the science of science, is the application of scientific methodologies to the study of science itself, employing empirical techniques such as hypothesis testing, observation, and experimentation to examine scientific practices, institutions, and outputs. This approach treats science as an object of investigation, seeking to understand its processes, incentives, and outcomes through quantifiable methods rather than purely theoretical or descriptive means. For instance, metascience investigates how peer review operates, how funding decisions influence research directions, and the reliability of published findings, all while aiming to enhance the quality and efficiency of scientific endeavors. Unlike epistemology, which is a branch of philosophy focused on the nature, sources, and limits of knowledge through conceptual analysis, metascience adopts an empirical stance, using observational data and statistical models to test claims about scientific production. Similarly, while science studies encompasses an interdisciplinary array of fields like the history, sociology, and philosophy of science to explore its cultural and social dimensions, metascience narrows its focus to practical improvements in research practices, emphasizing actionable insights over broad interpretive frameworks. This distinction underscores metascience's commitment to evidence-based reforms, distinguishing it from the more theoretical orientations of related disciplines. Central to metascience are concepts such as the examination of biases in publication and reporting, which can distort findings and undermine reproducibility; efforts to boost efficiency by addressing wasteful practices like redundant studies or publish-or-perish pressures; and assessments of science's societal impact, including how research translates into public benefits or exacerbates inequalities. These elements collectively aim to foster a more robust and equitable scientific enterprise. The field's roots trace to foundational work in the sociology of science, such as Robert K. Merton's analysis of scientific norms, which laid groundwork for empirical study of scientific institutions and reward systems.

Scope and Importance

Metascience encompasses the systematic study of scientific practices, processes, and outcomes across various dimensions, including institutional structures such as funding mechanisms and peer review systems, research practices like study design and data sharing, and outputs including publications and their broader impacts. This field employs quantifiable methods to assess how these elements influence the reliability and efficiency of scientific conclusions, both within specific disciplines (intrafield analysis) and across multiple fields (cross-disciplinary analysis). By examining these components, metascience provides a framework for understanding the social and methodological dynamics that shape scientific progress, independent of any particular discipline or methodology.

The importance of metascience lies in its capacity to address systemic challenges in science, such as the reproducibility crisis, which has been estimated to waste approximately $28 billion annually in the United States on preclinical research alone due to irreproducible findings. Empirical studies have highlighted reproducibility challenges in various fields, underscoring the need for evidence-based reforms to enhance scientific reliability and reduce inefficiencies. Through rigorous analysis, metascience drives faster scientific advancement by identifying and mitigating barriers to robust knowledge production.

On a societal level, metascience informs policy decisions, ethical standards, and governance structures in science, thereby bolstering public trust by promoting responsible research practices and transparency. It facilitates the acceleration of innovation in critical domains, such as artificial intelligence and climate research, by optimizing resource allocation and evaluation processes to align scientific outputs with pressing global needs. Ultimately, these efforts ensure that science serves the public good more effectively, fostering ethical conduct and sustainable progress.

History

Origins and Early Concepts

The roots of metascience trace back to early philosophical reflections on the nature and method of scientific inquiry, particularly in the pre-20th century period. Francis Bacon, a 17th-century English philosopher and statesman, laid foundational groundwork through his advocacy for an empirical, inductive approach to knowledge production, emphasizing systematic observation and experimentation over speculative deduction. In his seminal work Novum Organum (1620), Bacon critiqued the "idols" or biases that distort human understanding and proposed "tables of discovery" to interpret natural phenomena gradually, from particulars to general axioms, thereby promoting a collaborative, methodical reform of natural philosophy. These ideas exemplified polymathic efforts to reflect on scientific practice itself, influencing later thinkers and prefiguring metascience's focus on improving scientific processes.

In the 19th century, precursors to metascience emerged through broader philosophical examinations of scientific methodology amid the rapid industrialization and institutionalization of science. Thinkers like William Whewell and John Stuart Mill contributed to this by analyzing the logic of induction and the historical development of scientific concepts, with Whewell's History of the Inductive Sciences (1837) and The Philosophy of the Inductive Sciences (1840) highlighting the interplay between discovery, hypothesis, and verification in scientific progress. These works represented early attempts to study science as a social and cognitive enterprise, bridging philosophy and the emerging empirical study of knowledge production, though they remained largely non-sociological.

The 20th century marked a shift toward more systematic and empirical metascience, beginning with influences from the philosophy of science in the 1930s. Karl Popper's Logik der Forschung (1934) introduced falsifiability as a demarcation criterion for scientific theories, arguing that genuine science advances through bold conjectures that risk refutation via empirical testing, rather than untestable verification. This emphasis on critical rationalism provided an analytical framework for evaluating scientific validity, influencing later metascience by underscoring the need to scrutinize methodological rigor. Around the same time, philosopher Charles W. Morris introduced the term "metascience" in 1938, referring to semiotics as a metascience that analyzes the relations between signs and scientific knowledge.

The empirical turn in metascience gained momentum in the mid-20th century through the establishment of sociology of science as a distinct field during the 1930s and 1950s. Robert K. Merton played a pivotal role in this establishment, pioneering sociological analyses of scientific institutions and norms. In his 1938 book Science, Technology and Society in Seventeenth-Century England, Merton explored the social factors driving scientific growth, linking Puritan values to increased scientific activity without reducing it to economic or technical causes alone. His 1942 paper "The Normative Structure of Science" further defined the ethos of science through four institutional imperatives—communalism (sharing discoveries as common property), universalism (judging claims by objective criteria), disinterestedness (motivation by curiosity over personal gain), and organized skepticism (deferred judgment pending scrutiny)—collectively known as the CUDOS norms. Written amid wartime challenges to scientific autonomy, Merton's framework highlighted how these norms foster scientific productivity and integrity, solidifying sociology of science as an academic pursuit by the 1950s. This foundational work paved the way for later empirical expansions in metascience.

Modern Developments

The field of metascience experienced significant growth from the 1970s to the 2000s, building on foundational ideas in scientometrics pioneered by Derek J. de Solla Price in the early 1960s, such as his analysis of exponential growth in scientific publications and the shift toward "big science" characterized by large-scale collaborations and resource-intensive research. Price's quantitative models, including the logistic growth curve for scientific output, matured during this period as empirical tools became more accessible, enabling broader analyses of scientific productivity and citation networks. The launch of the journal Scientometrics in 1978 marked a key institutional milestone, fostering dedicated quantitative research and leading to a surge in publications, with the field seeing spectacular expansion from the 1990s onward due to digital databases and computational advances. Concurrently, Thomas Kuhn's 1962 concept of scientific paradigms influenced the shift toward empirical metascience by inspiring studies that examined how disciplinary frameworks shape research practices and knowledge production, moving beyond philosophical debate to data-driven investigations.

The 2010s brought heightened attention to metascience through the reproducibility crisis, particularly in psychology, where concerns over unreliable findings prompted large-scale replication efforts. In 2011, Brian Nosek and the Open Science Collaboration initiated the Reproducibility Project: Psychology, which was coordinated by the Center for Open Science after its founding in 2013, aiming to systematically replicate studies from top journals to assess reliability. The project's landmark 2015 results, published by the Open Science Collaboration, revealed that only 36% of 100 replicated experiments produced significant effects matching the originals, with replication effect sizes averaging half the original magnitude, underscoring systemic issues in statistical power and publication bias. This crisis extended beyond psychology, galvanizing metascience research into incentive structures that prioritized novel over replicable results, though reforms focused on cultural shifts rather than overhauling evaluation systems.

In the 2020s, metascience has increasingly integrated artificial intelligence and machine learning to analyze vast scientific datasets, enabling predictive modeling of research trends and efficiency. For instance, AI tools now assist in mapping publication and citation patterns and identifying biases at scale, accelerating insights into scientific progress. This period also saw institutional advancements, such as the establishment of the UK's Metascience Unit in 2024, a government-backed initiative to fund evidence-based studies on research systems and policy. Global efforts culminated in events like the Metascience 2025 conference, which emphasized AI's transformative effects on scientific workflows, collaboration, and discovery processes.

Methods and Approaches

Data Analysis and Scientometrics

Data analysis and scientometrics in metascience involve the quantitative examination of scientific outputs, such as publications, citations, and collaborations, to uncover patterns, trends, and impacts in the production of knowledge. These methods draw on bibliometrics—the statistical analysis of written publications—and scientometrics—the broader study of science as a measurable social phenomenon—to provide empirical insights into how science evolves and functions. By leveraging large-scale datasets, researchers can measure aspects like research productivity, impact, and biases without relying on subjective assessments, enabling a data-driven understanding of scientific dynamics.

Core techniques in this domain include citation analysis, which evaluates the influence of scientific works by tracking how often they are referenced by subsequent research, revealing knowledge flows and impact hierarchies. For instance, citation counts aggregate references to quantify a paper's or author's reach, while normalized metrics adjust for field-specific differences in citation practices. Another key metric is the h-index, proposed by physicist Jorge E. Hirsch in 2005, defined as the largest number h such that an author has h publications each with at least h citations; this balances productivity and impact, offering a robust alternative to simple citation totals. Network analysis of co-authorship, meanwhile, models collaborations as graphs where nodes represent researchers and edges denote joint publications, allowing the identification of clusters, centrality, and collaboration patterns that drive scientific communities.

Essential tools and datasets for these analyses include proprietary databases like Scopus and Web of Science, which provide comprehensive, curated collections of abstracts, citations, and metadata spanning millions of documents across disciplines. Scopus, maintained by Elsevier, covers over 90 million records from journals, books, and conference proceedings, facilitating bibliometric queries for trend analysis. Web of Science, from Clarivate, offers similar coverage with advanced indexing, enabling precise citation tracking and historical data from 1900 onward. Bibliometric software such as VOSviewer uses these datasets to create visual maps of scientific landscapes, overlaying co-citation or keyword networks to delineate emerging fields, knowledge structures, and interdisciplinary connections.

In metascience applications, these techniques measure researcher and institutional productivity through metrics like the h-index, which has been widely adopted for tenure evaluations despite critiques of its simplicity. They also detect biases, such as publication bias, where non-significant results are underrepresented; funnel plots visualize this by plotting study effect sizes against precision, with asymmetry indicating potential suppression of null findings. Progress in science is quantified via indices like the disruption index (D), introduced by Wu, Wang, and Evans in 2019 based on the CD index developed by Funk and Owen-Smith in 2017, which assesses a publication's novelty by comparing citations to its predecessors versus successors, highlighting breakthroughs that redirect research trajectories. These approaches also integrate with journal and publication studies to contextualize output metrics within broader publication ecosystems.
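As a concrete illustration of the metric defined above, the h-index can be computed directly from a list of citation counts. The following Python sketch uses invented numbers and is illustrative only:

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break
    return h

# Hypothetical author with five papers: h-index is 3
# (three papers with at least 3 citations each).
print(h_index([10, 8, 5, 2, 1]))  # -> 3
```

The same ranking logic underlies most bibliometric toolkits, though production systems also handle author disambiguation and field normalization.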

Journalology and Publication Studies

Journalology, also known as publication studies, examines the mechanisms, practices, and challenges within scientific publishing, focusing on how research is disseminated through journals and the operational dynamics that influence this process. This field analyzes the peer review system, journal metrics, and systemic issues to improve the integrity and equity of scholarly communication. Key investigations reveal inefficiencies in traditional practices, such as the variability in peer review outcomes and the unintended consequences of evaluation metrics.

Peer review, the cornerstone of scientific validation, has been scrutinized for its efficacy, with studies demonstrating low inter-rater reliability among reviewers. For instance, analyses of conference peer reviews have reported kappa values ranging from 0.3 to 0.4, indicating only fair agreement on manuscript quality and recommendations, which raises questions about the consistency and objectivity of editorial decisions. Similarly, journal impact factors, intended to gauge a publication's influence, are often misused, incentivizing practices like salami slicing—dividing a single study into multiple minimal publications to inflate output and boost career metrics—thereby fragmenting the scientific literature and distorting productivity measures. This pressure stems from institutional evaluations heavily reliant on such factors, leading to ethical concerns in scholarly publishing.

Biases in the publication pipeline further complicate journal operations, with gender and geographic disparities evident in acceptance rates. Research across disciplines shows that manuscripts with female first authors often receive lower evaluation scores or reduced acceptance probabilities compared to those with male authors, particularly in fields like earth sciences and medicine, exacerbating the underrepresentation of women in authorship. Geographic biases similarly disadvantage authors from non-Western regions; systematic reviews indicate that submissions from low- and middle-income countries face higher rejection rates, even when controlling for quality, due to reviewer preferences for familiar institutional affiliations and cultural contexts. These inequities highlight how editorial and reviewer demographics, often skewed toward high-income, Western institutions, perpetuate global imbalances in scientific visibility.

The proliferation of predatory journals has intensified since the 2010s, exploiting open access publishing models with promises of rapid publication for fees, often without rigorous review. By 2023, estimates identified over 10,000 such journals worldwide, contributing to the publication of low-quality or fraudulent research and eroding trust in scholarly outputs. This rise correlates with the expansion of author-pays open access publishing, where profit-driven entities mimic legitimate publishers, leading to widespread retractions and challenges in distinguishing credible venues.

Reforms in scientific publishing aim to address these issues through structural changes. Open access initiatives like Plan S, launched in 2018 by cOAlition S—a coalition of research funders—mandate immediate, full open access for publicly funded research starting in 2021, promoting equitable dissemination without paywalls while supporting sustainable publishing models. Additionally, trials of double-blind peer review, where neither authors nor reviewers know identities, have shown promise in mitigating biases; a randomized study at a major journal found it reduced favoritism toward prestigious authors, leading to fairer evaluations without compromising review quality. Article-level citation metrics complement impact factors by providing a more nuanced assessment of individual contributions. These efforts collectively seek to enhance transparency, reduce inequities, and foster a more robust scholarly ecosystem.
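To make the inter-rater reliability figures above concrete, Cohen's kappa can be computed from paired reviewer decisions. The sketch below, in Python with invented accept/reject recommendations, contrasts observed agreement with the agreement expected by chance:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical recommendations from two reviewers on six manuscripts.
a = ["accept", "reject", "reject", "accept", "reject", "reject"]
b = ["accept", "reject", "accept", "reject", "reject", "reject"]
print(round(cohens_kappa(a, b), 2))  # -> 0.25, only "fair" agreement
```

Kappa values near 0.3-0.4, as reported in the conference studies cited above, indicate that much of reviewers' raw agreement is what chance alone would produce.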

Experimental and Survey Techniques

Experimental and survey techniques in metascience involve empirical approaches to directly test and evaluate scientific practices, such as through controlled interventions and polls of researchers' behaviors and attitudes. These methods allow for causal inferences about how practices like preregistration affect research integrity, contrasting with observational approaches by enabling manipulation of variables in real or simulated scientific settings. By focusing on interventions and self-reported data, these techniques help identify effective reforms to enhance reproducibility and reduce biases in science.

Experimental designs in metascience often employ randomized controlled trials or field experiments to assess interventions aimed at improving scientific rigor. A prominent example is preregistration, where researchers pre-specify hypotheses and analysis plans before data collection to mitigate selective reporting. The AllTrials campaign, launched in January 2013, advocated for the registration of all past and present clinical trials, including their methods and results, to promote transparency and reduce non-reporting biases; this initiative has influenced policy and practice in clinical research globally. Similarly, the Registered Reports format, introduced in journals around 2012 and expanded thereafter, pre-accepts studies based on protocol quality rather than results, thereby reducing p-hacking—the practice of manipulating data analysis to achieve statistical significance. A 2021 analysis of Registered Reports found they lead to a higher proportion of null or non-significant results compared to traditional publications, indicating reduced publication bias and p-hacking. Lab-based experiments have also tested cognitive biases in scientific decision-making; for instance, a 2020 field experiment submitted fabricated manuscripts to psychology journals, revealing that reviewers favored statistically significant results over original but non-significant findings, highlighting confirmation bias in peer review.

Survey methods complement experiments by capturing researchers' attitudes, experiences, and self-reported practices on a larger scale. These polls often reveal discrepancies between perceived and actual behaviors, informing targeted interventions. A seminal 2016 survey published in Nature polled over 1,500 scientists across disciplines, finding that more than 70% had tried and failed to reproduce others' experiments, and over 50% failed to reproduce their own; respondents identified selective reporting and low statistical power as major barriers to reproducibility. Building on such efforts, surveys of early-career researchers often reveal concerns about research incentives influencing career decisions, with high fluidity in intentions to pursue scientific careers. Cohort studies, which track groups of researchers over time, provide longitudinal insights into career paths. As of 2025, emerging methods include AI-assisted analysis of large datasets for bias detection and ongoing international surveys tracking the adoption of open science practices.

Ethical considerations are paramount in metascience experiments and surveys, as they often involve participants—typically researchers themselves—who may face professional risks. Institutional Review Boards (IRBs) oversee these studies to ensure compliance with principles of respect for persons, beneficence, and justice, requiring informed consent, minimal risk, and equitable participant selection. For instance, surveys must anonymize responses to avoid stigmatizing individuals for admitting questionable practices, while experiments simulating peer review must prevent harm to participants' reputations or careers. IRBs classify metascience research involving deception or identifiable professional data as potentially requiring full board review, emphasizing the need to balance scientific gain with participant welfare.

Key Research Areas

Reproducibility and Replication

Reproducibility in science refers to the ability to obtain consistent results when repeating an experiment under the same conditions, while replication involves independently verifying findings through new experiments. The reproducibility crisis, a major focus of metascience, reflects widespread concerns that a significant portion of published research cannot be reliably reproduced or replicated, undermining the reliability of scientific knowledge. This issue has been quantified across fields, with estimates suggesting that approximately 50% of findings may not hold up upon scrutiny, driven by methodological and statistical practices that inflate false positives.

Key contributors to non-reproducibility include p-hacking, where researchers selectively analyze data or perform multiple statistical tests until achieving a statistically significant result (typically p < 0.05), and HARKing (hypothesizing after the results are known), where post-hoc hypotheses are presented as pre-planned without disclosure. These practices increase the likelihood of false discoveries, as they exploit flexibility in data analysis without accounting for multiple comparisons or exploratory adjustments. Publishing biases, such as favoring novel or positive results, further exacerbate irreproducibility by discouraging null findings and incentivizing questionable research practices.

Large-scale replication efforts have empirically documented the scope of the crisis. In psychology, the 2015 Reproducibility Project: Psychology attempted to replicate 100 studies from top journals, finding that only 36% produced significant effects in the same direction as the originals, with replication effect sizes about half as large. Similarly, the Reproducibility Project: Cancer Biology, which targeted experiments from 50 high-impact cancer papers published between 2010 and 2012, reported in 2021 that 46% of replicated effects met more success criteria than failure criteria, though overall effect sizes were 85% smaller than in the originals. These studies underscore the challenge of replicating even well-cited work, attributing partial successes to factors like smaller sample sizes in replications and variations in experimental conditions.

To address non-reproducibility, metascience has promoted interventions centered on transparency and open science. Mandating the sharing of raw data and analysis code enables independent verification of results, reducing errors from undisclosed methods and facilitating meta-analyses; for instance, platforms like the Open Science Framework encourage depositing materials in accessible repositories to support exact reproductions. Additionally, open science badges, introduced in 2013 by the Center for Open Science, provide visual incentives in journals for practices like data sharing, code availability, and preregistration, with evidence showing they increase uptake of these behaviors without substantial burden on authors. These tools aim to embed openness into routine scientific workflows, fostering a culture of verifiable research.
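A short simulation makes the p-hacking mechanism concrete: when a study of a true null effect measures several outcomes but reports only the best p-value, the false-positive rate climbs far above the nominal 5%. This sketch assumes NumPy and SciPy and uses purely simulated data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def p_hacked_study(n=20, k_outcomes=5):
    """Simulate a null-effect study measuring k outcomes, reporting only min p."""
    p_values = []
    for _ in range(k_outcomes):
        treatment = rng.normal(0, 1, n)  # no true effect anywhere
        control = rng.normal(0, 1, n)
        p_values.append(stats.ttest_ind(treatment, control).pvalue)
    return min(p_values)

trials = 2000
false_positive_rate = sum(p_hacked_study() < 0.05 for _ in range(trials)) / trials
print(f"False-positive rate with 5 outcomes: {false_positive_rate:.2f}")  # ~0.23
```

The expected rate is roughly 1 - 0.95^5 ≈ 0.23, which is why undisclosed outcome switching is so corrosive and why preregistering a single primary outcome is an effective countermeasure.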

Evaluation, Incentives, and Governance

The evaluation of scientific research is predominantly shaped by the "publish or perish" culture, a pressure originating in the early 20th century that ties career advancement, funding, and institutional prestige to publication volume. This system incentivizes prolific output over quality, leading to an explosion in low-impact publications—such as the rise from 16,000 journals in 2001 to 23,750 by 2006—and practices like salami slicing, where single studies are fragmented into multiple papers. Critics argue it diverts focus from teaching and practical applications, with only 45% of articles in top journals receiving citations within five years, and up to 25% of those being self-citations. Such incentives exacerbate biases, including a preference for novel but risky results, contributing to issues like the reproducibility crisis, where publication pressure leads to selective reporting.

Traditional evaluation metrics, such as citation counts, measure academic influence but overlook broader societal impact, prompting the development of altmetrics in the 2010s as complementary tools. Altmetrics, introduced via the 2010 Altmetrics Manifesto, track online attention through social media mentions, policy citations, and blog discussions, capturing impacts in fields like the social sciences and humanities where citations lag. However, studies show weak positive correlations between altmetrics and citations—for instance, Twitter and Mendeley activity aligns moderately with citation rates but identifies different impact dimensions—with altmetric coverage rising from 15% of publications in 2011 to over 25% by 2013. While altmetrics enhance evaluations by quantifying public engagement, they risk inflating attention over substance without qualitative context.

Contributorship norms further complicate evaluation by inadequately crediting diverse roles in collaborative research, often marginalizing technical and support contributions. The International Committee of Medical Journal Editors (ICMJE) established authorship criteria in 1985, requiring substantial intellectual input, drafting or revision, final approval, and accountability—criteria updated in 2013 to emphasize overall work integrity. These guidelines, while standardizing inclusion, can exclude non-author contributors like mentors or trainees from formal credit, fostering inequities in global collaborations where power imbalances limit low-income country researchers' roles. To address this, the Contributor Roles Taxonomy (CRediT), developed in the mid-2010s and standardized in 2022, delineates 14 roles, including "Supervision" for oversight and leadership of research activities, enabling granular attribution beyond authorship lists.

Governance structures in science aim to mitigate these biases through policy oversight, with funding agencies implementing rigor-focused initiatives. In 2015, the National Institutes of Health (NIH) launched requirements for grant applications to enhance rigor and reproducibility, mandating descriptions of scientific premises, experimental designs accounting for biological variables like sex, and authentication of key resources. These policies, effective from 2016, address evaluation flaws by prioritizing transparency over volume, influencing criteria without adding scored elements. Science policy bodies like the American Association for the Advancement of Science (AAAS) further support governance by articulating evidence-based positions, hosting workshops to bridge science and policy, and fostering international advocacy for equitable systems.
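Correlations between altmetrics and citations of the kind reported above are typically estimated with rank-based statistics, since both distributions are heavily skewed. A minimal sketch, assuming SciPy and entirely synthetic paired scores:

```python
from scipy import stats

# Hypothetical paired scores for the same ten papers.
citations = [12, 3, 45, 0, 7, 22, 5, 60, 1, 9]   # citation counts
altmetric = [2, 30, 10, 1, 25, 4, 8, 90, 3, 0]   # online-attention scores

# Spearman's rho compares rank orders rather than raw values.
rho, p = stats.spearmanr(citations, altmetric)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```

A moderate positive rho with plenty of scatter is the typical empirical pattern, consistent with the view that altmetrics capture a related but distinct dimension of impact.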

Science Communication and Public Engagement

Science communication in metascience examines the processes by which scientific knowledge is disseminated to non-expert audiences, including the general public, policymakers, and journalists, to foster informed societal participation and influence research practices. This involves analyzing how communication strategies affect understanding, trust in science, and the broader societal impact of research outputs. Metascience highlights the need for effective dissemination to bridge the gap between specialized scientific findings and public understanding, ensuring that evidence informs decision-making while addressing barriers like misinformation and inaccessible jargon.

A major challenge in science communication is the rapid spread of misinformation, particularly during crises such as the 2020s infodemics, where false information proliferated on social media, undermining public health responses and contributing to vaccine hesitancy. Studies show that infodemics exacerbated anxiety and delayed mitigation efforts by overwhelming accurate information with unverified claims. Additionally, inaccuracies in journalistic reporting pose significant issues; for instance, a seminal analysis found that 40% of university press releases on health research contained exaggerated advice, which often propagated into news coverage, leading to overstated benefits or causal claims. This highlights how errors originating in institutional communications can amplify distortions in public perceptions of science.

To measure and enhance public impact, tools like altmetrics have emerged, capturing non-traditional indicators of engagement such as social media mentions, blog posts, and policy citations to gauge how research resonates beyond academia. Altmetrics provide a complementary view to citation counts, quantifying societal reach—for example, by tracking shares on platforms like Twitter to assess real-time public interest and influence. Preprints further support rapid communication by allowing researchers to share findings before peer review; arXiv, launched in 1991, pioneered this model in physics and has since expanded across disciplines, enabling faster dissemination during urgent events like pandemics while promoting open access.

Engagement strategies in metascience emphasize participatory approaches, such as citizen science, where non-experts contribute to data collection and analysis, enhancing public involvement and democratizing research. Projects like these not only generate valuable data but also build trust and scientific literacy through direct collaboration. Furthermore, effective communication influences policy by translating research into actionable insights; for example, media and social channels bridge research with governance, as evidenced by studies showing that policy documents citing scientific work amplify societal benefits like improved public health measures. These strategies underscore metascience's role in aligning scientific progress with public needs.

Science Education and Misconceptions

Metascience plays a crucial role in science education by emphasizing the integration of critical thinking and inquiry-based approaches into curricula, enabling students to understand not just scientific content but the processes and limitations of science itself. The Next Generation Science Standards (NGSS), released in 2013, exemplify this by designing curricula around three dimensions: disciplinary core ideas, science and engineering practices, and crosscutting concepts, which foster inquiry-based problem-solving and critical evaluation of evidence. For instance, NGSS performance expectations require students to engage in practices such as asking questions, developing models, and analyzing data, promoting metacognitive awareness of scientific reasoning. This framework shifts from rote memorization to active engagement, preparing students to apply metascience principles like evaluating hypotheses and recognizing biases in everyday decision-making.

Training in metascience for students further enhances this by explicitly teaching the nature of scientific knowledge, including its provisionality and social construction, to build skills in discerning valid from flawed scientific claims. Educational programs incorporating metascience elements, such as lessons on cognitive biases and logical fallacies, help students "think like scientists" by applying scientific reasoning to real-world phenomena, as outlined in inquiry-based pedagogies. For example, curricula that address the hallmarks of scientific thinking—empirical testing, skepticism, and error correction—equip learners with tools to navigate complex scientific debates, reducing reliance on intuitive errors.

Common misconceptions in science education often stem from intuitive reasoning patterns, such as teleological thinking, where students attribute biological features to purposeful design rather than evolutionary processes. In evolution education, this manifests as explanations like "organisms have traits because they need them," rooted in a design stance that conflicts with the principles of natural selection and persists across age groups. Similarly, anti-science attitudes, exemplified by vaccine hesitancy, arise from distrust in scientific sources, amplified by misinformation and social identities post-2010s, leading to rejection of evidence-based interventions like vaccinations. Studies on science messaging highlight how perceived biases exacerbate these attitudes, particularly among groups influenced by conspiracy narratives.

Evidence-based interventions in science education, informed by metascience, demonstrate the efficacy of active learning strategies in addressing misconceptions and improving outcomes. A 2020 meta-analysis of 146 studies found that active learning reduced achievement gaps in examination scores by 33% and in failure rates by 45% for underrepresented students in undergraduate courses. These approaches, including collaborative problem-solving and guided inquiries, help dismantle misconceptions by prompting students to confront and revise erroneous beliefs through iterative evaluation. By prioritizing such pedagogies, educators can cultivate resilient scientific literacy, mitigating anti-science views through targeted, empirically supported methods.

Scientific Progress and Evolution

Factors of Success and Progress

Metascience examines several key factors that contribute to success and progress in scientific endeavors, with novelty serving as a critical driver. Novelty metrics, such as the disruption index, quantify how scientific papers challenge or consolidate existing knowledge. Developed by Wu, Wang, and Evans, this index measures a paper's disruptiveness by analyzing citation patterns: it calculates the relative frequency with which subsequent works cite the focal paper alone versus alongside its references, yielding scores where higher values indicate greater disruption of prior paradigms. Disruptive papers, often from smaller teams, have historically propelled breakthroughs by introducing ideas that redirect research trajectories, as evidenced by their outsized influence in fields like physics and the life sciences.

Social elements, including collaboration networks and team diversity, further accelerate progress. Collaboration networks enable knowledge exchange and resource sharing, with "super ties"—strong, repeated partnerships—boosting productivity by 17% and citations per paper through enhanced idea refinement and access to diverse expertise. Diversity within teams amplifies innovation; gender-diverse scientific teams produce approximately 7% more novel research and are about 15% more likely to have high-impact publications compared to same-gender teams, due to broader perspectives that challenge assumptions and generate unconventional solutions.

Progress studies reveal patterns in scientific advancement, often highlighting historical rates of deceleration offset by expanding research effort. Work by Bloom et al. demonstrates slowing progress in fields like semiconductors and agriculture, where research productivity has declined despite exponential increases in effort, implying diminishing returns on ideas. However, labor advantages in research—such as the approximately 23-fold increase in the number of researchers engaged in R&D since the 1930s—have sustained overall growth by distributing tasks across larger teams, though this requires careful management to avoid coordination inefficiencies. Debates persist on accurately measuring such progress, given methodological challenges in indices like disruption.
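The disruption index described above reduces to simple set arithmetic over the papers citing a focal work and the papers citing its references. A minimal sketch with hypothetical citation data:

```python
def disruption_index(citers_of_focal, citers_of_refs):
    """CD-style index: (n_i - n_j) / (n_i + n_j + n_k)."""
    citers_of_focal, citers_of_refs = set(citers_of_focal), set(citers_of_refs)
    n_i = len(citers_of_focal - citers_of_refs)  # cite focal only: disruptive signal
    n_j = len(citers_of_focal & citers_of_refs)  # cite focal and its refs: consolidating
    n_k = len(citers_of_refs - citers_of_focal)  # bypass the focal paper entirely
    return (n_i - n_j) / (n_i + n_j + n_k)

# Hypothetical citing papers p1..p5.
print(disruption_index(
    citers_of_focal={"p1", "p2", "p3", "p4"},
    citers_of_refs={"p3", "p4", "p5"},
))  # -> (2 - 2) / (2 + 2 + 1) = 0.0, neither disruptive nor consolidating
```

Scores approach +1 when later work cites the paper while ignoring its intellectual ancestors (disruption) and -1 when the paper is cited only alongside them (consolidation).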

Controversies, Debates, and Challenges

A central controversy in metascience revolves around the suppression of null results, which fosters publication bias and skews the scientific record toward positive findings. This practice inflates false discovery rates, as smaller studies with low power are more likely to produce spurious positives when only significant results are published. Ioannidis's seminal analysis demonstrates that, under common conditions like flexible study designs and high researcher bias, the majority of published research findings may be false, emphasizing the need for systemic changes to address this distortion. Efforts to mitigate this include Registered Reports, a peer-review format that accepts studies based on methodological rigor prior to data collection, thereby reducing incentives to withhold null outcomes and promoting a more balanced evidence base.

Another ongoing challenge in metascience involves pooling data in meta-analyses, particularly when heterogeneity in effect sizes complicates synthesis and interpretation. Heterogeneity arises from variations in study populations, interventions, or methodologies, often leading to debates over model choice—fixed-effects models assuming uniformity versus random-effects models accommodating variability—and the validity of overall estimates. In the social sciences, for instance, this issue is pronounced due to contextual differences across studies, where high heterogeneity signals substantive variation rather than mere noise, yet it risks overgeneralization if not carefully explored through subgroup analyses or meta-regressions. The I² statistic, commonly used to quantify heterogeneity, can be biased in small meta-analyses, further fueling discussions on robust reporting standards to avoid misleading conclusions (see the sketch at the end of this subsection).

Ethical dilemmas in metascience increasingly focus on equity in global research collaboration, highlighted by the North-South divide that intensified in the 2020s amid unequal resource distribution and pandemic responses. Structural inequities in the international economic order limit participation from Global South researchers, perpetuating a dominance of Northern perspectives in knowledge production and exacerbating gaps in collaborative efforts like open science initiatives. As of 2025, studies show significant underrepresentation of Global South researchers in high-impact publications, with participation rates below 20% in fields like climate research. During the COVID-19 infodemic, for example, disparities in access to information and technology widened these divides, raising concerns about fair participation and benefit-sharing in scientific advancements.

Post-2023, AI integration in research has sparked debates over biases that undermine ethical standards, as algorithms trained on non-diverse datasets propagate inequalities in hypothesis generation, data analysis, and peer review. Such biases, embedded across the AI lifecycle from data acquisition to deployment, can lead researchers to adopt skewed judgments, as evidenced in tasks where AI recommendations reinforce human prejudices even after disuse. This raises profound concerns in metascience about ensuring algorithmic fairness to prevent the amplification of systemic inequities in scientific outputs.

Key challenges also encompass the over-interpretation of metascience findings, where results from replication studies or bibliometric analyses are extrapolated beyond their scope, potentially eroding trust in science. Questionable practices, such as selective emphasis on dramatic failures without contextual nuance, contribute to this issue and mirror flaws critiqued in primary research. Additionally, resistance to metascience-driven reforms persists due to entrenched academic incentives prioritizing novelty over rigor, with critics highlighting potential unintended consequences like reduced innovation or overlooked equity considerations in reform agendas. These tensions intersect with debates on the drivers of scientific success, where interpretive overreach can obscure genuine enablers of advancement.
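To ground the heterogeneity debate above, the sketch below computes Cochran's Q and the I² statistic for a small fixed-effect meta-analysis; the effect sizes and variances are invented for illustration:

```python
def i_squared(effects, variances):
    """I^2 = max(0, (Q - df) / Q): share of variability beyond sampling error."""
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))  # Cochran's Q
    df = len(effects) - 1
    return max(0.0, (q - df) / q) if q > 0 else 0.0

# Hypothetical standardized mean differences from four studies.
effects = [0.10, 0.35, 0.50, 0.62]
variances = [0.02, 0.03, 0.02, 0.04]
print(f"I^2 = {i_squared(effects, variances):.0%}")  # -> ~50%, substantial heterogeneity
```

With only four studies, an I² near 50% carries wide uncertainty, which is precisely the small-sample bias concern raised above.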

Knowledge Integration and Topic Mapping

Knowledge integration in metascience involves systematic methods to synthesize disparate research findings into coherent overviews, enabling researchers to assess cumulative evidence and identify patterns across studies. Systematic reviews compile and evaluate existing research on a specific question, while meta-analyses statistically combine quantitative results from multiple studies to estimate overall effects. These approaches reduce bias and enhance reliability by following structured protocols, such as the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines introduced in 2009, which provide a 27-item checklist and flow diagram for transparent reporting.

Living systematic reviews extend this framework by continuously updating syntheses as new evidence emerges, particularly useful in rapidly evolving fields like medicine. Developed in the 2010s by organizations such as Cochrane, these reviews incorporate continual evidence surveillance and ongoing appraisal to maintain timeliness, with Cochrane publishing its first living reviews in 2017 and issuing guidance on production processes by 2019. Challenges in integration, such as debates over pooling heterogeneous results, underscore the need for rigorous statistical methods to ensure validity.

Topic mapping techniques visualize the structure and evolution of scientific fields by analyzing relationships between publications. Semantic analysis tools, powered by natural language processing and machine learning, extract concepts and themes from abstracts and full texts to generate interactive maps of research landscapes. For instance, Semantic Scholar, an AI-driven platform developed by the Allen Institute for AI in the 2010s and advanced in the 2020s with features like topic-based searches and paper recommendations, facilitates discovery of related works through semantic similarity rather than just keywords. Citation networks complement this by modeling interconnections via directed graphs, where nodes represent papers and edges denote citations, allowing tracking of knowledge flow over time. Seminal work in this area includes co-citation analysis, introduced by Henry Small in 1973, which clusters frequently co-cited documents to delineate subfields and intellectual structures.

Applications of these methods include pinpointing research gaps by highlighting underexplored areas in topic maps or review syntheses, as seen in systematic reviews that quantify evidence voids through gap maps or bibliometric analyses under PRISMA. They also reveal intrafield developments, such as paradigm shifts, where co-citation clusters identify bursts of interconnected citations signaling theoretical transitions, for example, in the emergence of new frameworks in physics or biology. By integrating diverse data sources, these tools support metascience efforts to map scientific progress without relying on anecdotal assessments.
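A minimal sketch of Small-style co-citation mapping, assuming the networkx library and toy reference lists; two documents are linked each time a later paper cites both, and connected clusters approximate subfields:

```python
import itertools
import networkx as nx

# Hypothetical reference lists: each citing paper lists the works it cites.
reference_lists = [
    ["small1973", "garfield1955", "price1963"],
    ["small1973", "garfield1955"],
    ["kuhn1962", "popper1934"],
    ["kuhn1962", "popper1934", "merton1942"],
]

G = nx.Graph()
for refs in reference_lists:
    for a, b in itertools.combinations(sorted(set(refs)), 2):
        # Increment the co-citation weight for each pair cited together.
        weight = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
        G.add_edge(a, b, weight=weight)

# Connected components stand in for intellectual clusters.
for cluster in nx.connected_components(G):
    print(sorted(cluster))
```

Real topic-mapping pipelines replace connected components with community-detection algorithms and weight thresholds, but the underlying data structure is the same.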

Reforms and Interventions

Pre-registration and Transparency

Pre-registration in scientific research refers to the practice of documenting and publicly archiving a study's hypotheses, methods, procedures, and analysis plans prior to gathering or analyzing data, thereby committing researchers to their intended approach and minimizing post-hoc adjustments that could introduce bias. This process typically involves creating a detailed, time-stamped protocol and submitting it as a read-only file to an online registry, where it remains accessible for verification against the final report. Platforms such as ClinicalTrials.gov, established in 2000 by the U.S. National Library of Medicine to register clinical trials, have long supported this for medical research, requiring details like study design, primary outcomes, and participant criteria before enrollment begins. For non-clinical fields, the Open Science Framework (OSF), launched in 2013 by the Center for Open Science, offers a versatile, free tool for preregistering diverse study types, including observational and experimental designs across disciplines.

Empirical evidence demonstrates that pre-registration enhances research credibility by curbing questionable research practices like p-hacking—manipulating analyses to achieve statistical significance—and selective reporting of results. In clinical trials, mandatory pre-registration has been linked to a substantial increase in null findings, from about 17% before 2000 to over 50% afterward among large National Heart, Lung, and Blood Institute-funded studies, indicating reduced inflation of positive effects due to bias. Similarly, across fields, pre-registered studies exhibit higher rates of non-significant results compared to non-registered ones, with meta-scientific reviews confirming lower evidence of publication bias in pre-registered work. Adoption has accelerated, particularly in psychology, where surveys of articles published in 2022 indicate that 7-14% incorporate pre-registration, reflecting growing institutional encouragement through badges and journal policies.

Despite these advantages, pre-registration encounters significant challenges, including inconsistent enforcement due to its largely voluntary nature outside regulated areas like clinical trials, where compliance relies on journal and funder mandates rather than universal oversight. This leads to under-adoption and frequent undisclosed deviations, with studies showing that up to 40% of pre-registered protocols experience unacknowledged changes. Variations across fields further complicate implementation; for instance, exploratory or field-based research in the ecological or social sciences often requires adaptive designs that clash with rigid pre-specification, potentially stifling flexibility while still demanding transparent solutions.

Reporting Standards

Reporting standards in metascience emphasize the need for transparent, complete, and standardized documentation of methods, results, and analyses to enhance reproducibility and trustworthiness. These standards address longstanding issues in scientific reporting, such as incomplete disclosures that can obscure flaws or biases in studies. By providing checklists and guidelines, they guide researchers and journals toward consistent practices that facilitate replication, meta-analyses, and public scrutiny.

Key frameworks have emerged to promote rigorous reporting across research types. The Consolidated Standards of Reporting Trials (CONSORT), first published in 1996, offers a 25-item checklist for randomized controlled trials, covering trial design, participant flow, and outcome measures to minimize ambiguity in clinical reports. Similarly, the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines, introduced in 2007, provide a 22-item checklist tailored to cohort, case-control, and cross-sectional studies, ensuring clear description of study rationale, methods, and limitations. For animal research, the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines, launched in 2010 and updated in 2020, include 10 main items and 31 sub-items to detail experimental procedures, statistical analyses, and ethical considerations, aiming to improve the quality of preclinical studies.

A major challenge these standards tackle is selective reporting, where researchers omit unfavorable outcomes or alter analyses post-hoc, potentially inflating effect sizes. For instance, one analysis of education research found that nonsignificant outcomes were 30% more likely to be omitted from published studies than significant ones, highlighting biases in what reaches the literature. To counter this, standards advocate for the full disclosure of datasets, protocols, and all pre-specified outcomes, often linking to preregistration as a foundational step to align reporting with original plans.

The evolution of reporting standards has progressed toward broader openness, with the Transparency and Openness Promotion (TOP) Guidelines, released in 2015 by the Center for Open Science, establishing modular levels (0-3) of compliance across eight areas, including data, code, and materials sharing, adopted by over 1,000 journals to enforce varying degrees of transparency. The Guidelines were updated in 2025 to include specific guidance on disclosing AI use in research processes and enhancing verifiability of computational methods. In the 2020s, updates have incorporated guidance on AI-assisted research, such as disclosing generative tools like large language models in methods sections to address concerns in automated analyses and writing. These advancements reflect metascience's ongoing push to adapt standards to emerging technologies while maintaining core principles of completeness and verifiability.

Incentive Reforms and Governance Changes

Incentive reforms in metascience seek to realign scientific rewards away from publication quantity and high-impact journals toward robust practices and societal value. A key initiative is the Declaration on Research Assessment (DORA), launched in 2012, which advocates against over-relying on journal impact factors for evaluating researchers and instead emphasizes diverse outputs like datasets, software, and qualitative impacts. DORA, now signed by over 3,500 organizations worldwide, promotes assessments that consider the full context of research contributions to foster quality over metrics that incentivize sensationalism. Complementing this, funding agencies have introduced support for replication studies to address reproducibility issues; for instance, the U.S. National Science Foundation (NSF) issued a 2018 Dear Colleague Letter encouraging proposals for projects enhancing replicability and reproducibility across disciplines, allocating resources to verify prior findings and build cumulative knowledge.

Governance changes further emphasize collaborative and transparent structures. Team-based rewards, such as group-level funding and shared credit, are proposed to encourage collaboration over individual competition, with evaluations shifting to weigh team outcomes like shared resources and interdisciplinary impacts. Similarly, recognizing negative results through dedicated awards and publication incentives counters publication bias; the European College of Neuropsychopharmacology (ECNP), for example, has awarded the Best Negative Data Prize since 2017 for null or non-confirmatory findings in neuropsychopharmacology, providing recognition and incentives such as prize money to the recipients. Policy experiments test these ideas empirically; the UK's Metascience Unit, established in 2024 under the Department for Science, Innovation and Technology, runs randomized controlled trials on grant allocation and peer review to optimize funding efficiency, with an initial £10 million budget for evaluating interventions like simplified assessment criteria.

These reforms target flaws in traditional evaluation systems, such as overemphasis on novelty, by promoting metrics that reward reliability and societal impact. Early implementations suggest potential for significant gains in scientific productivity; models from the metascience literature indicate that incentive realignments could boost knowledge accumulation by reducing research waste and enhancing reliability, with projections of up to 160% increases in annual output under optimized structures. Overall, such changes aim to create a self-improving research ecosystem where governance supports long-term progress over short-term outputs.

Applications Across Disciplines

In Medicine and Health Sciences

Metascience in medicine and health sciences examines the processes, incentives, and methodologies underlying medical research, with a particular emphasis on clinical trials and evidence synthesis to enhance reliability, transparency, and efficiency. This field addresses systemic challenges in generating robust evidence for treatments, diagnostics, and interventions, where failures in rigor and reporting can have life-threatening consequences. By applying metascience principles, researchers aim to refine trial designs, improve data transparency, and mitigate biases, ultimately accelerating the translation of discoveries into effective healthcare practices.

A major issue in clinical research is the lack of transparency in clinical trials, addressed by mandates for prospective registration. The Food and Drug Administration Amendments Act (FDAAA) of 2007 required the registration of applicable clinical trials on ClinicalTrials.gov within 21 days of enrolling the first participant, aiming to prevent selective reporting and enhance public access to trial protocols and results. This legislation expanded the scope to include phase 2 through 4 drug and device trials for all diseases, significantly increasing the number of registered studies from about 1,400 before 2007 to over 300,000 by 2023. Compliance remains imperfect, with studies showing that only around 70-80% of trials fully adhere to requirements, underscoring ongoing metascience efforts to enforce transparency.

Reproducibility challenges in drug development further highlight metascience concerns, as preclinical studies often fail to predict clinical outcomes. Approximately 90% of drugs that successfully complete preclinical testing do not advance to approval due to inefficacy, toxicity, or other issues, reflecting limitations in experimental design, statistical power, and translational validity. This high attrition rate—exacerbated by publication bias favoring positive results—has prompted metascience interventions to standardize preclinical protocols and promote data sharing, reducing wasted resources estimated at billions annually in the United States. General replication rates in biomedical research hover around 50-60% for key findings, but in drug development, the stakes are heightened by regulatory and ethical imperatives.

Metascience has contributed pivotal tools for evidence synthesis and bias assessment in medicine. The Cochrane Collaboration, founded in 1993, pioneered systematic reviews and meta-analyses to aggregate high-quality evidence, producing over 8,000 reviews by 2023 that inform guidelines on clinical interventions. These meta-analyses statistically combine data from multiple trials to estimate treatment effects more precisely, often revealing discrepancies in individual studies due to heterogeneity or bias, and have become foundational for evidence-based medicine. Complementing this, the ROBINS-I tool, developed in 2016, provides a structured framework for evaluating risk of bias in non-randomized studies of interventions, addressing domains such as confounding, selection, and measurement errors through signaling questions and algorithms. Widely adopted in systematic reviews, ROBINS-I has improved the rigor of evidence from observational research, which constitutes a significant portion of medical evidence outside randomized trials.

In the 2020s, metascience has integrated artificial intelligence (AI) to optimize clinical trial design, enhancing efficiency and inclusivity. AI algorithms now assist in patient stratification, site selection, and simulation of trial outcomes, with potential to improve efficiency by identifying optimal protocols from historical data. For instance, AI models predict recruitment feasibility and adverse events, as highlighted in FDA discussions on AI's role in clinical research.

The COVID-19 pandemic accelerated metascience applications in rapid evidence synthesis, with initiatives like living systematic reviews updating meta-analyses in real time to guide policy on vaccines and treatments. Living evidence projects and rapid reviews synthesized thousands of studies within weeks, demonstrating how metascience can support crisis decision-making while minimizing outdated or low-quality evidence.
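The basic computation behind the meta-analyses described in this section is inverse-variance pooling: each study's effect is weighted by the reciprocal of its variance. A fixed-effect sketch with hypothetical log odds ratios (real Cochrane reviews typically add random-effects modeling and heterogeneity diagnostics):

```python
import math

def fixed_effect_pool(effects, variances):
    """Inverse-variance fixed-effect pooled estimate with a 95% CI."""
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1 / sum(weights))  # standard error of the pooled estimate
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical log odds ratios and variances from three trials.
effect, ci = fixed_effect_pool([-0.35, -0.20, -0.45], [0.04, 0.02, 0.05])
print(f"pooled log OR = {effect:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

Pooling narrows the confidence interval relative to any single trial, which is exactly the precision gain that makes synthesis so central to evidence-based medicine.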

In Psychology and Social Sciences

Metascience has played a pivotal role in addressing the replication crisis in psychology and the social sciences, particularly following high-profile controversies that exposed vulnerabilities in research practices. In 2011, psychologist Daryl Bem published a study in the Journal of Personality and Social Psychology claiming experimental evidence for precognition, or the ability to perceive future events, based on nine experiments showing statistically significant effects. This work ignited widespread debate, as subsequent replication attempts, including a large-scale effort involving multiple labs, failed to reproduce the results, with no evidence of precognitive effects observed across hundreds of participants. The controversy highlighted issues like selective reporting and p-hacking, contributing to broader concerns about the reliability of psychological findings.

Large-scale replication projects in the 2010s further quantified replicability, revealing substantial variability across studies. The Many Labs initiatives, coordinated by teams of researchers, attempted to reproduce effects from prominent papers using standardized protocols and larger samples. For instance, Many Labs 1 (2014) tested 13 effects and found replication rates ranging from 0% to 100%, with an overall success rate of about 77%, though often with diminished effect sizes. Subsequent projects, such as Many Labs 2 (2018), examined variation across international samples and reported rates as low as 25% for some effects, underscoring factors like cultural differences and sample characteristics. These efforts demonstrated that while some findings held up, many did not, prompting a reevaluation of methodological rigor in behavioral research.

In response, metascience-driven interventions have emphasized transparency and statistical robustness to mitigate these issues. The Open Science Framework (OSF), launched by the Center for Open Science, has seen widespread adoption in psychology since the mid-2010s, enabling preregistration of studies, data sharing, and reproducible workflows to reduce questionable research practices. Over 1,000 journals now integrate OSF tools, facilitating higher replication rates in recent metascience evaluations. Additionally, post-2015 advancements in Bayesian statistical methods have gained traction for assessing evidence strength more reliably than traditional null-hypothesis testing, allowing researchers to quantify uncertainty and prior beliefs in replication contexts. These approaches, applied alongside enhanced reporting standards such as those of the American Psychological Association, have improved the credibility of psychological studies.

The broader impacts of metascience in these fields extend to policy applications, particularly in behavioral public policy during the 2010s and 2020s. Nudge units, such as the UK's Behavioural Insights Team, have incorporated metascience to evaluate and scale interventions like default opt-ins for savings or health behaviors, using meta-analyses to confirm small but consistent effects (Cohen's d ≈ 0.21–0.43). Comprehensive reviews from these units, including randomized controlled trials at scale, show that rigorous replication and evidence synthesis enhance policy effectiveness, informing global efforts in areas like tax compliance and environmental nudges.
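One way to see how Bayesian methods quantify replication evidence is a conjugate normal-normal update, in which the original study serves as a prior and the replication as new data; both estimates are weighted by their precision. The numbers below are hypothetical and the model deliberately simplified:

```python
def normal_update(prior_mean, prior_var, obs_mean, obs_var):
    """Precision-weighted combination of a prior estimate and a new observation."""
    w_prior, w_obs = 1 / prior_var, 1 / obs_var
    post_var = 1 / (w_prior + w_obs)
    post_mean = post_var * (w_prior * prior_mean + w_obs * obs_mean)
    return post_mean, post_var

# Hypothetical: original effect d = 0.60 (SE 0.20); replication d = 0.15 (SE 0.10).
mean, var = normal_update(0.60, 0.20**2, 0.15, 0.10**2)
print(f"posterior d = {mean:.2f} (SD {var**0.5:.2f})")  # -> 0.24, pulled toward the replication
```

Because the larger replication is more precise, the posterior sits much closer to its estimate, a formal version of the intuition that well-powered replications should dominate our beliefs.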

In Physics and Natural Sciences

In physics and the natural sciences, metascience examines the reliability and dynamics of theoretical predictions against empirical data, particularly in fields where large-scale experiments and simulations dominate. A prominent issue is the mismatch between theoretical hype and experimental realities, as seen in quantum computing during the 2020s. Despite optimistic projections of near-term breakthroughs in fault-tolerant quantum systems, practical implementations have been limited by high error rates and scalability challenges, with current devices capable only of small-scale demonstrations rather than solving complex real-world problems. This discrepancy highlights metascience's role in critiquing overpromising narratives that can distort funding and research priorities. Similarly, the scale of experimental collaborations has grown exponentially, exemplified by the Large Hadron Collider (LHC) experiments, where papers from the ATLAS and CMS collaborations often involve over 5,000 authors, enabling the pooling of expertise but complicating attribution and coordination.

Metascience tools in these disciplines focus on quantifying uncertainties and tracing intellectual influences to enhance rigor. Error propagation is essential for validating simulations in high-energy physics and quantum many-body physics, where forward and Monte Carlo methods carry uncertainties through complex models to assess the robustness of predictions against observational data. For instance, in quantum simulations of many-body systems, forward error propagation quantifies how initial uncertainties amplify, informing the design of more reliable computational frameworks. Citation analysis further reveals the mechanics of paradigm shifts, such as the impact of Einstein's general relativity, where bibliometric studies show a gradual replacement of Newtonian citations over decades rather than an abrupt revolution, underscoring the cumulative nature of theoretical transitions. These tools help metascience identify when entrenched paradigms resist anomalous data, promoting more adaptive scientific practices.

Recent developments in the 2020s have applied metascience to evaluate progress in fusion energy and observational frontiers. Analyses of fusion research trajectories indicate steady but incremental advances, with key performance parameters improving by factors of 10,000 over six decades, yet remaining just short of net energy gain at scale; metascience critiques emphasize the need for diversified funding approaches to bridge gaps between experiments like ITER and practical reactors. In astronomy, the Legacy Survey of Space and Time (LSST) at the Vera C. Rubin Observatory, which began commissioning in 2023 and released first images in 2025, mandates open-data policies, providing 500 petabytes of public images and catalogs to accelerate discoveries in dark energy and transient events while enabling metascience studies on data-sharing impacts. These initiatives demonstrate how metascience fosters transparency and collaboration in resource-intensive fields.
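The following sketch shows forward error propagation in its simplest Monte Carlo form: uncertain inputs are sampled repeatedly and pushed through a model so that the spread of the output quantifies how input uncertainties amplify. The model here is a toy pendulum-period formula with invented uncertainties, not an actual high-energy or many-body simulation.

```python
# Minimal Monte Carlo (forward) error propagation sketch. Input
# uncertainties below are illustrative, not from any real experiment.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Uncertain inputs, sampled as mean +/- standard deviation:
g = rng.normal(9.81, 0.02, N)        # gravitational acceleration, m/s^2
length = rng.normal(1.00, 0.005, N)  # pendulum length, m

# Model: period of a simple pendulum, T = 2*pi*sqrt(L/g).
T = 2 * np.pi * np.sqrt(length / g)
print(f"Monte Carlo: T = {T.mean():.4f} +/- {T.std():.4f} s")

# Cross-check against first-order analytic propagation:
# sigma_T / T = 0.5 * sqrt((sigma_L/L)^2 + (sigma_g/g)^2)
rel = 0.5 * np.sqrt((0.005 / 1.00) ** 2 + (0.02 / 9.81) ** 2)
print(f"Analytic:    sigma_T ≈ {rel * T.mean():.4f} s")
```

Production codes in high-energy physics apply the same principle to far higher-dimensional inputs, which is why agreement between sampled and analytic propagation is a standard validation check.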

In Computer Science and Information Technologies

Metascience in computer science and information technologies examines the processes, incentives, and systemic factors influencing research practices in fields characterized by rapid innovation, large-scale datasets, and algorithmic decision-making. This includes scrutinizing how benchmarks drive progress, how biases propagate in machine learning systems, and how peer review shapes publication outcomes, often revealing reproducibility challenges and ethical gaps that undermine reliability. Unlike more stable disciplines, computer science's emphasis on software and empirical validation amplifies metascience's role in promoting robust and equitable outcomes.

A key area of metascience application is benchmark reproducibility, where standardized datasets like ImageNet have been central to advancing models but have exposed significant issues. In 2018, amid the deep learning boom, researchers highlighted a crisis in machine learning reproducibility, noting that variations in random seeds, hardware, and implementation details led to inconsistent results across reported benchmarks, with experiments often failing to replicate due to unstandardized protocols. This crisis persisted, as evidenced by studies showing that even widely used datasets suffer from data leakage and non-deterministic training, resulting in inflated performance metrics that mislead progress tracking. For instance, a 2023 analysis of machine-learning-based science found reproducibility rates below 50% in benchmark-driven fields, prompting calls for standardized evaluation pipelines to restore trust.

Post-2020, metascience has increasingly focused on AI fairness audits to address biases in algorithmic decision-making, particularly in high-stakes applications like hiring and lending. These audits systematically evaluate models for demographic disparities, using metrics such as demographic parity and equalized odds to quantify and mitigate unfair outcomes (a minimal sketch of both metrics closes this section). A seminal framework proposed in 2022 outlines a multidisciplinary approach for auditing AI systems, emphasizing interdisciplinary validation to ensure fairness across protected groups, and has influenced regulatory guidelines. Recent advancements, including differentially private methods for auditing without compromising privacy, have enabled scalable bias detection in production systems, with 2025 studies demonstrating up to 30% bias reduction in audited models.

Tools like automated machine learning (AutoML) exemplify metascience's meta-optimization strategies, automating hyperparameter tuning and model selection to enhance research efficiency. AutoML leverages meta-learning techniques to learn from prior tasks, predicting optimal configurations for new problems and reducing manual trial-and-error in algorithm design. Bilevel optimization in AutoML frameworks, as detailed in a 2024 review, treats the outer loop as a hyperparameter search over an inner model training loop, achieving 10-20% performance gains over traditional methods in benchmark suites. This approach not only accelerates discovery but also meta-optimizes the scientific process itself by standardizing reproducible pipelines.

Conference peer review studies have applied metascience to evaluate biases in reviewing processes, with the 2014 NeurIPS experiment revealing substantial inconsistency. In this study, 150 papers were reviewed twice by independent committees under double-blind conditions, finding that approximately 26% received conflicting accept/reject decisions, highlighting reviewer subjectivity and low inter-rater agreement (kappa ≈ 0.2). The adoption of double-blind reviewing at NeurIPS since 2014 aimed to reduce author prestige bias, but follow-up analyses confirmed persistent variability, informing reforms like reviewer calibration tools.
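To illustrate the agreement statistic quoted above, the sketch below computes Cohen's kappa for two simulated review committees; the decision vectors and agreement rate are stand-ins, not the actual NeurIPS data.

```python
# Minimal sketch of Cohen's kappa for two committees' accept/reject
# decisions on the same papers. Decisions are simulated, with roughly
# 74% raw agreement (close to what the 2014 NeurIPS experiment reported).
import numpy as np

rng = np.random.default_rng(1)
committee_a = rng.integers(0, 2, 150)  # 1 = accept, 0 = reject
committee_b = np.where(rng.random(150) < 0.74,
                       committee_a,        # agree with committee A
                       1 - committee_a)    # or flip the decision

p_observed = np.mean(committee_a == committee_b)

# Chance agreement from each committee's marginal accept rate:
pa, pb = committee_a.mean(), committee_b.mean()
p_chance = pa * pb + (1 - pa) * (1 - pb)

kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"observed agreement {p_observed:.2f}, kappa = {kappa:.2f}")
```

Kappa near 0.2 despite ~74% raw agreement shows why raw agreement overstates consistency: with a roughly balanced accept rate, two committees flipping coins would already agree about half the time.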
A 2021 revisit of the NeurIPS experiment underscored that such inconsistencies disproportionately affect novel work, prompting ongoing metascience efforts to refine review mechanisms.

Emerging metascience discussions center on AI acceleration's impact on scientific workflows, particularly how generative models are transforming hypothesis generation and code synthesis in computer science research. At the 2025 Metascience Conference, panels debated AI's role in speeding up discovery cycles, with evidence from sessions indicating that tools like large language models could compress research timelines by 20-50% while risking over-reliance on black-box outputs. These conversations emphasize the need for metascience to guide ethical integration, ensuring AI augments rather than supplants human oversight in iterative research.
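Returning to the fairness audit metrics named earlier in this section, here is a minimal executable sketch of demographic parity and equalized-odds gaps on toy data; a real audit would use a trained model's predictions and validated protected-group labels rather than random arrays.

```python
# Minimal sketch of two fairness audit metrics: the demographic parity
# gap (difference in positive-prediction rates across groups) and
# equalized-odds gaps (differences in true/false positive rates).
# All arrays below are toy data, not output of any real audited system.
import numpy as np

rng = np.random.default_rng(7)
group = rng.integers(0, 2, 1000)   # protected attribute (0 or 1)
y_true = rng.integers(0, 2, 1000)  # ground-truth outcomes
y_pred = rng.integers(0, 2, 1000)  # model decisions being audited

def positive_rate(mask):
    """Fraction of positive predictions within the masked subset."""
    return y_pred[mask].mean()

# Demographic parity gap: P(pred=1 | group=0) - P(pred=1 | group=1)
dp_gap = positive_rate(group == 0) - positive_rate(group == 1)

# Equalized odds: compare TPR and FPR across the two groups.
tpr = [positive_rate((group == g) & (y_true == 1)) for g in (0, 1)]
fpr = [positive_rate((group == g) & (y_true == 0)) for g in (0, 1)]

print(f"demographic parity gap: {dp_gap:+.3f}")
print(f"TPR gap: {tpr[0] - tpr[1]:+.3f}, FPR gap: {fpr[0] - fpr[1]:+.3f}")
```

Gaps near zero indicate parity on these criteria; auditing frameworks typically report several such metrics together, since the criteria cannot in general all be satisfied at once.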

Organizations and Resources

Institutes and Policy Units

The Center for Open Science (COS), founded in 2013 as a nonprofit technology organization in Charlottesville, Virginia, aims to increase the openness, integrity, and reproducibility of research through tools, training, and policy alignment. It develops infrastructure like the Open Science Framework for sharing research materials and promotes the Transparency and Openness Promotion (TOP) Guidelines to standardize practices across journals and funders. COS engages policymakers to shift norms, demonstrating how open practices reduce reproducibility issues and enhance evidence quality, as evidenced by its 2020 impact report on preregistration's role in improving peer review reliability.

The Meta-Research Innovation Center at Stanford (METRICS), launched in April 2014 with initial funding from the Laura and John Arnold Foundation, conducts meta-research to transform practices in biomedicine and other fields by evaluating biases, reporting standards, and reproducibility. METRICS builds multidisciplinary teams, offers postdoctoral fellowships, and trains leaders in meta-research methods to develop solutions like improved evaluation metrics for scientific claims. Its work has informed reforms by generating evidence on systemic flaws, such as publication biases, through collaborations with global affiliates from over 20 institutions.

In 2024, UK Research and Innovation (UKRI) established the Metascience Unit with a £10 million budget (2024–2027) to scientifically test and optimize research funding processes, including grant allocation and support mechanisms. The unit designs policy trials, such as randomized controlled trials and pilots on peer review, akin to ARPA-style high-risk experimentation, to assess effectiveness and disseminate findings to UKRI, the Department for Science, Innovation and Technology (DSIT), and international funders. Early experiments, including distributed peer review models, have shortened assessment timelines by approximately three months while maintaining quality, contributing to 2020s evidence for broader reforms in funding efficiency.

These institutes engage in international collaborations to scale metascience impacts, such as COS's leadership in the Metascience Alliance, a 2025 pilot coalition of funders, publishers, and researchers from multiple countries formed to align priorities on open practices, which, as of November 2025, includes 39 organizations. METRICS hosts the METRICS International Forum, a biweekly webinar series connecting global meta-researchers to share policy trial insights. The Metascience Unit partners with international bodies on grants, including a £4 million AI-focused fellowship program with partner funders to study technology's effects on research workflows. Collectively, their evidence generation has driven reforms, including 2020s reports advocating peer review innovations to mitigate biases and boost research reliability across funding systems.

Journals, Conferences, and Tools

Key journals dedicated to metascience and meta-research include Research Integrity and Peer Review, an open-access journal launched in 2016 by BioMed Central (now part of Springer Nature), which focuses on empirical studies of peer review processes, research integrity, publication practices, and solutions to integrity challenges in scholarly publishing. The journal emphasizes transparent peer review and has published influential work on topics like bias in editorial decisions and reporting standards, serving as a primary venue for advancing metascience through rigorous analysis of scientific workflows. Other prominent outlets include dedicated collections within broader journals, such as eLife's Meta-Research series (initiated in 2018), which aggregates studies on reproducibility, statistical power, and gender biases in science, and PLOS's Meta-Research Collection in PLOS Biology and PLOS ONE (expanded since 2016), highlighting interdisciplinary meta-research on research practices across fields.

Conferences play a vital role in fostering collaboration in metascience, with the Metascience 2025 Conference serving as a landmark event held from June 30 to July 2, 2025, in London, attracting over 800 participants from more than 65 countries to discuss innovations in research institutions, peer review, funding reforms, AI applications in research, and meta-economics. The conference featured panels on policy interventions like the UK Metascience Unit and launched the Metascience Alliance to coordinate global efforts in improving scientific practices. Earlier gatherings, such as the Metascience 2019 Symposium, organized by the Fetzer Franklin Fund and researchers including Brian Nosek of the Center for Open Science, brought together leading scholars to establish metascience as a discipline, focusing on questions of scientific incentives, replication, and research quality through workshops and keynotes.

Essential tools in metascience enable transparency, tracking, and evaluation of research outputs. The Retraction Watch Database, a project of the Center for Scientific Integrity (whose Retraction Watch blog launched in 2010) now maintained in partnership with Crossref, is a comprehensive, publicly accessible repository that catalogs over 60,000 retractions from the scholarly literature as of mid-2025, updated daily with details on reasons for withdrawal, such as misconduct or errors, to support meta-research on reliability. Preregistration platforms, including the Open Science Framework (OSF) Preregistration service by the Center for Open Science and AsPredicted by the Wharton Credibility Lab, allow researchers to timestamp and publicly commit to study plans, hypotheses, and analysis strategies before data collection, reducing selective reporting and enhancing credibility across disciplines. Altmetrics APIs, provided by Altmetric.com since 2011, track non-traditional impact indicators like social media mentions, policy citations, and downloads for scholarly works, enabling metascience analyses of broader research dissemination and influence beyond citation counts.
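As a usage illustration for the altmetrics tooling described above, the sketch below queries Altmetric's public v1 endpoint for a single DOI. The endpoint and response field names reflect Altmetric's commonly documented public API and should be verified against current documentation before use; the DOI is just an example.

```python
# Minimal sketch: fetch attention indicators for one paper from the
# public Altmetric v1 API. Endpoint and field names are assumptions
# based on Altmetric's public documentation; check before relying on them.
import json
import urllib.error
import urllib.request

doi = "10.1126/science.aac4716"  # example DOI
url = f"https://api.altmetric.com/v1/doi/{doi}"

try:
    with urllib.request.urlopen(url, timeout=10) as resp:
        record = json.load(resp)
    print("title:", record.get("title"))
    print("Altmetric score:", record.get("score"))
    print("news mentions:", record.get("cited_by_msm_count", 0))
    print("policy mentions:", record.get("cited_by_policies_count", 0))
except urllib.error.HTTPError as err:
    # The API returns 404 when no attention record exists for the DOI.
    print("no Altmetric record or request failed:", err.code)
```

Batch analyses in meta-research typically loop such queries over a corpus of DOIs and join the results with citation data to compare scholarly and public attention.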

    Reproducibility Project: Psychology - OSF
    We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original ...Missing: crisis | Show results with:crisis
  25. [25]
    UK Metascience Unit - UKRI
    Sep 18, 2025 · Established in 2024, the UK Metascience Unit is a small team of policymakers, analysts and funding delivery specialists who work across UK ...
  26. [26]
    Programme - Metascience 2025
    A year in the work of the UK Metascience Unit: looking back and ahead. T1.2 Mission metascience: pathways for optimising decision-making in STI policy.
  27. [27]
    A tale of two databases: the use of Web of Science and Scopus in ...
    Feb 22, 2020 · This paper conducts a comparative, dynamic, and empirical study focusing on the use of Web of Science (WoS) and Scopus in academic papers published during 2004 ...
  28. [28]
    An index to quantify an individual's scientific research output - PNAS
    I propose the index h, defined as the number of papers with citation number ≥h, as a useful index to characterize the scientific output of a researcher.
  29. [29]
    Scopus | Abstract and citation database - Elsevier
    Scopus is a trusted, source-neutral abstract and citation database curated by independent subject matter experts who are recognized leaders in their fields.Scopus metrics · Scopus content · Scopus AI · Scopus data
  30. [30]
    A Dynamic Network Measure of Technological Change - PubsOnLine
    Mar 22, 2016 · This article outlines a network approach to the study of technological change. We propose that new inventions reshape networks of interlinked technologies.Missing: paper | Show results with:paper
  31. [31]
    Full article: To Slice or Perish - Taylor & Francis Online
    Jan 26, 2023 · Salami slicing occurs when the intent to advance scientific knowledge is superseded by motive of external, secondary gain. Among the driving ...
  32. [32]
    Researchers' Individual Publication Rate Has Not Increased in a ...
    Therefore, the widespread belief that pressures to publish are causing the scientific literature to be flooded with salami-sliced, trivial, incomplete, ...<|separator|>
  33. [33]
    Gender differences in peer review outcomes and manuscript impact ...
    In contrast to these observations on submitted manuscripts, gender differences in peer‐review outcomes were observed in a survey of >12,000 published ...
  34. [34]
    The role of geographic bias in knowledge diffusion: a systematic ...
    Jan 15, 2020 · This study synthesizes the evidence from randomized and controlled studies that explore geographic bias in the peer review process.
  35. [35]
    Rising number of 'predatory' academic journals undermines ...
    Sep 19, 2023 · In 2021, another estimate said there were 15,000 predatory journals. This trend could weaken public confidence in the validity of research on ...
  36. [36]
    'Plan S' and 'cOAlition S' – Accelerating the transition to full and ...
    Plan S is an initiative for Open Access publishing that was launched in September 2018. The plan is supported by cOAlition S, an international consortium of ...About · About cOAlition S and Plan S · Plan S Principles · Diamond Open Access
  37. [37]
    Reviewer bias in single- versus double-blind peer review - PNAS
    Our analysis shows that single-blind reviewing confers a significant advantage to papers with famous authors and authors from high-prestige institutions.
  38. [38]
    Why Most Published Research Findings Are False | PLOS Medicine
    Aug 30, 2005 · It can be proven that most claimed research findings are false. Here I will examine the key factors that influence this problem and some ...
  39. [39]
    The Extent and Consequences of P-Hacking in Science - PMC - NIH
    Mar 13, 2015 · One type of bias, known as “p-hacking,” occurs when researchers collect or select data or statistical analyses until nonsignificant results become significant.
  40. [40]
    Estimating the reproducibility of psychological science
    Aug 28, 2015 · Overall, this analysis suggests a 47.4% replication success rate. ... replicability: A “many labs” replication project. Soc. Psychol. 45 ...
  41. [41]
    Investigating the replicability of preclinical cancer biology - eLife
    Dec 7, 2021 · The Reproducibility Project: Cancer Biology was set up to provide evidence about the replicability of preclinical research in cancer biology.
  42. [42]
    Reproducible research practices, transparency, and open access ...
    Nov 20, 2018 · In this study, we investigate the reproducibility and transparency practices across the published biomedical literature from 2015–2017.<|control11|><|separator|>
  43. [43]
    Open Science Badges
    What are Open Science Badges? Badges to acknowledge open science practices are incentives for researchers to share data, materials, or to preregister; Badges ...
  44. [44]
    Publish or perish: Where are we heading? - PMC - NIH
    “Publish or perish” is now becoming the way of life. It is race to get more and more publications to one's credit. The current trend is forcing scientists to ...
  45. [45]
    'Publish or perish' culture blamed for reproducibility crisis - Nature
    Jan 20, 2025 · Sixty-two per cent of respondents said that pressure to publish “always” or “very often” contributes to irreproducibility, the survey found.
  46. [46]
    The Use of Altmetrics in Promotion and Tenure | EDUCAUSE Review
    Mar 7, 2016 · Altmetrics can help fill in the knowledge gaps that citations leave, allowing researchers to understand the use of their research by diverse ...
  47. [47]
    [PDF] Do 'altmetrics' correlate with citations? Extensive comparison of ...
    Jan 17, 2014 · Altmetrics show positive but weak correlations with citations, and do not always filter highly cited publications better than journal citation ...
  48. [48]
    Defining the Role of Authors and Contributors - ICMJE
    Who Is an Author? The ICMJE recommends that authorship be based on the following 4 criteria: Substantial contributions to the conception or design of the work; ...Missing: 1985 2013
  49. [49]
    Using scientific authorship criteria as a tool for equitable inclusion in ...
    Introduction. In 1985, the International Committee of Medical Journal Editors (ICMJE) created a standardised set of criteria for authorship.
  50. [50]
    CRediT
    - **Official CRediT Site**: Contributor Role Taxonomy (CRediT) resource hub.
  51. [51]
    NOT-OD-15-103: Enhancing Reproducibility through Rigor and ...
    Jun 9, 2015 · The results of these pilots have informed the revised instructions to address NIH expectations about the rigor and transparency in NIH supported ...Missing: initiatives | Show results with:initiatives
  52. [52]
    Shaping Science Policy | American Association for the Advancement ...
    AAAS programs help the public better understand science and its role in evidence-based policy-making. AAAS also works with global partners to strengthen ...
  53. [53]
    The communications gap between scientists and public - NIH
    There is also a growing emphasis on science communication as a two‐way process by which the course of research can be influenced by societal feedback, according ...Peer Pressure And Perception · Targeting Audiences · Communicating Uncertainty
  54. [54]
    Infodemics and health misinformation: a systematic review of reviews
    During the COVID-19 pandemic specifically, studies show that social media is contributing to the spread of misinformation about the vaccine, and that ...
  55. [55]
    A Comprehensive Analysis of COVID-19 Misinformation, Public ...
    Aug 21, 2024 · Background: The COVID-19 pandemic was marked by an infodemic, characterized by the rapid spread of both accurate and false information, ...
  56. [56]
    The association between exaggeration in health related science ...
    Dec 10, 2014 · Exaggeration in news is strongly associated with exaggeration in press releases. Improving the accuracy of academic press releases could represent a key ...
  57. [57]
    What is Altmetric Score? Meaning & Why Care?
    The Altmetric score represents a weighted count of the amount of attention for a research output from a variety of sources.
  58. [58]
    Twitter in scholarly communication - Altmetric
    Jun 12, 2018 · Stefanie Haustein explores the overall role Twitter plays in scholarly communication, especially social media based metrics.Missing: definition examples
  59. [59]
    Citizen Science as an Ecosystem of Engagement - NIH
    Jun 22, 2022 · We propose a volunteer-centric framework that explores how the dynamic accumulation of experiences in a project ecosystem can support broad learning objectives.
  60. [60]
    Mechanisms for enhancing public engagement with citizen science ...
    Oct 7, 2020 · It investigates how citizen science can apply democratic processes to be more responsive, while drawing on insights from behaviour change ...
  61. [61]
    Societal and scientific impact of policy research: A large-scale ...
    Our findings emphasize the crucial role of science communication channels like news media and social media in bridging the gap between research and policy.Societal And Scientific... · 5. Results · 5.3. Effects Of Different...
  62. [62]
    FAQs - Next Generation Science Standards
    How are critical thinking and communications skills, which are fundamental to student success in today's global economy, addressed in the Next Generation ...
  63. [63]
  64. [64]
    Redefining Critical Thinking: Teaching Students to Think like Scientists
    A course focused on cognitive biases, logical fallacies, and the hallmarks of scientific thinking adapted for each grade level may provide students with the ...
  65. [65]
    Scientific Thinking and Critical Thinking in Science Education
    Sep 5, 2023 · Scientific thinking and critical thinking are two intellectual processes that are considered keys in the basic and comprehensive education of citizens.
  66. [66]
    Students' “teleological misconceptions” in evolution education - NIH
    Jan 9, 2020 · Teleology, explaining the existence of a feature on the basis of what it does, is usually considered as an obstacle or misconception in ...
  67. [67]
    Why are people antiscience, and what can we do about it? - PNAS
    Jul 12, 2022 · Antiscience attitudes are more likely to emerge when a scientific message comes from sources perceived as lacking credibility.
  68. [68]
    Combating antiscience: Are we preparing for the 2020s?
    Mar 27, 2020 · In 2019, the World Health Organization listed “vaccine hesitancy” as a leading global health threat [15]. Too often, the public health ...
  69. [69]
    Common Origins of Diverse Misconceptions: Cognitive Principles ...
    Oct 13, 2017 · In summary, teleological thinking may underlie a variety of seemingly unrelated biological misconceptions, and may thereby play a role in ...
  70. [70]
    A systematic literature review to clarify the concept of vaccine ...
    Aug 22, 2022 · Vaccine hesitancy (VH) is considered a top-10 global health threat. The concept of VH has been described and applied inconsistently.
  71. [71]
    Large teams develop and small teams disrupt science and technology
    Feb 13, 2019 · Wu, L., Wang, D. & Evans, J.A. Large teams develop and small teams disrupt science and technology. Nature 566, 378–382 (2019). https://doi ...
  72. [72]
    Quantifying the impact of weak, strong, and super ties in scientific ...
    We find that super ties contribute to above-average productivity and a 17% citation increase per publication.<|control11|><|separator|>
  73. [73]
    Gender-diverse teams produce more novel and higher-impact ...
    Aug 29, 2022 · Science teams made up of men and women produce papers that are more novel and highly cited than those of all-men or all-women teams.
  74. [74]
    Are Ideas Getting Harder to Find? - American Economic Association
    More generally, everywhere we look we find that ideas, and the exponential growth they imply, are getting harder to find. Citation. Bloom, Nicholas, Charles I.
  75. [75]
    Papers and patents are becoming less disruptive over time - Nature
    Jan 4, 2023 · We find that papers and patents are increasingly less likely to break with the past in ways that push science and technology in new directions.
  76. [76]
    The past, present and future of Registered Reports - Nature
    Nov 15, 2021 · Registered Reports are a form of empirical publication in which study proposals are peer reviewed and pre-accepted before research is undertaken.
  77. [77]
    Heterogeneity in effect size estimates - PMC - NIH
    We provide a framework for studying heterogeneity in the social sciences and divide heterogeneity into population, design, and analytical heterogeneity.
  78. [78]
    How to understand and report heterogeneity in a meta-analysis
    In any meta-analysis it is important to report not only the mean effect size but also how the effect size varies across studies. A treatment that has a moderate ...
  79. [79]
    The heterogeneity statistic I2 can be biased in small meta-analyses
    Apr 14, 2015 · I 2 has a substantial bias when the number of studies is small. The bias is positive when the true fraction of heterogeneity is small.The Naïve Estimator · Random-Effects Model · Fixed-Effects Model
  80. [80]
    UN expert urges structural reforms to bridge North–South divide and ...
    Sep 12, 2025 · “Structural inequities remain embedded in the global economic order,” Katrougalos told the UN Human Rights Council. “Historical legacies and ...Missing: ethical equity science 2020s
  81. [81]
    Bridging the Infodemic Equity Gap: North-South Digital Health ...
    Oct 16, 2025 · A forward agenda is outlined for harmonized indicators, evaluation methods, and ethical safeguards needed to reduce inequities in future health ...
  82. [82]
    AI pitfalls and what not to do: mitigating bias in AI - PMC - NIH
    AI bias can occur from task definition, data acquisition, limited diversity, and hidden signals, and is present across the AI lifecycle.
  83. [83]
    Humans inherit artificial intelligence biases | Scientific Reports
    Oct 3, 2023 · The current research aims to examine how biased AI recommendations can influence people's decision-making in a health-related task and to test ...
  84. [84]
    Questionable Metascience Practices - Journal of Trial and Error
    Apr 24, 2023 · A QMP is a research practice, assumption, or perspective that has been questioned by several commentators as being potentially problematic for metascience.
  85. [85]
    Symposium: Critical Perspectives on the Metascience Reform ...
    Metascience is often defined as the scientific investigation of science itself with the aim to improve science. This 'improving' part of metascience has ...Missing: scope | Show results with:scope
  86. [86]
  87. [87]
    [PDF] Guidance for the production and publication of Cochrane living ...
    This guidance outlines methods, production, and publication processes for living systematic reviews, including LSR enablers, managing searches, and ...
  88. [88]
    Living Systematic Reviews: An Emerging Opportunity to Narrow the ...
    Feb 18, 2014 · Living systematic reviews are high quality, up-to-date online summaries of health research, updated as new research becomes available, and ...
  89. [89]
    About Us - Semantic Scholar
    Semantic Scholar provides free, AI-driven search and discovery tools, and open resources for the global research community. We index over 200 million academic ...<|control11|><|separator|>
  90. [90]
    Detection of paradigm shifts and emerging fields using scientific ...
    Centrality analysis, path analysis, cluster analysis, etc. are used to identify the key papers of paradigm shifts, emerging fields, relatively important ...
  91. [91]
    Home | ClinicalTrials.gov
    ClinicalTrials.gov is a website and online database of clinical research studies and information about their results.Frequently Asked Questions · Learn About Studies · FDAAA 801 and the Final Rule
  92. [92]
    Open Science Framework (OSF) - PMC - PubMed Central
    It is developed and maintained by the Center for Open Science (COS), a nonprofit organization founded in 2013 that conducts research into scientific practice, ...
  93. [93]
    Preregistration - Center for Open Science
    When you preregister your research, you're simply specifying your research plan in advance of your study and submitting it to a registry.
  94. [94]
    Likelihood of Null Effects of Large NHLBI Clinical Trials Has ...
    Aug 5, 2015 · Citation: Kaplan RM, Irvin VL (2015) Likelihood of Null Effects of Large NHLBI Clinical Trials Has Increased over Time. PLoS ONE 10(8): ...
  95. [95]
    The preregistration revolution - PNAS
    Preregistration is a solution that helps researchers maintain clarity between prediction and postdiction and preserve accurate calibration of evidence.
  96. [96]
    Prevalence of Transparent Research Practices in Psychology
    Dec 24, 2024 · Relatively few articles had a preregistration (field-wide: 7% [2.5%, 12%]; prominent: 14% [8.5%, 19%]), materials (field-wide: 16% [9%, 24%]; ...
  97. [97]
    A survey on how preregistration affects the research workflow
    Jul 6, 2022 · The goal of this exploratory study was to identify the perceived benefits and challenges of preregistration from the researcher's perspective.<|control11|><|separator|>
  98. [98]
    Preregistration in practice: A comparison of preregistered and non ...
    Nov 10, 2023 · Because preregistration theoretically prevents HARKing and p-hacking, preregistered publications should contain a lower proportion of positive ...
  99. [99]
    Reporting guidelines | EQUATOR Network
    Guidelines for Reporting Outcomes in Trial Reports: The CONSORT-Outcomes 2022 Extension; 142; Guidelines for Reporting Outcomes in Trial Protocols: The SPIRIT ...CONSORT 2025 Statement · Observational Studies: STROBE · Experimental studies
  100. [100]
    STROBE - Strengthening the reporting of observational studies in ...
    STROBE stands for an international, collaborative initiative of epidemiologists, methodologists, statisticians, researchers and journal editorsChecklists · Translations · Publications · Commentaries
  101. [101]
    ARRIVE Guidelines: Home
    The ARRIVE guidelines (Animal Research: Reporting of In Vivo Experiments) are a checklist of recommendations for the full and transparent reporting of research ...Guidelines 2.0Author checklistsAboutArrive 2.0Author checklist
  102. [102]
    TOP Guidelines - Center for Open Science
    TOP 2015 (OA) included eight policy recommendations and was updated to reflect feedback from journal implementers.
  103. [103]
    Evolution of Research Reporting Standards - PubMed Central - NIH
    Jul 16, 2024 · This study explores the impact of AI technologies, specifically ChatGPT, on past reporting standards and the need for revised guidelines.
  104. [104]
    Read the Declaration | DORA
    The Declaration on Research Assessment (DORA) recognizes the need to improve the ways in which the outputs of scholarly research are evaluated.
  105. [105]
    San Francisco Declaration on Research Assessment (DORA)
    The Declaration on Research Assessment (DORA) recognizes the need to improve the ways in which researchers and the outputs of scholarly research are evaluated.The Declaration · Signers · A decade of DORA | DORA · About DORA
  106. [106]
    How Science Can Reward Cooperation, Not Just Individual ...
    Jan 8, 2024 · 1. Shift scientific evaluation to more strongly weigh group outcomes. · 2. Provide more funding for supportive positions. · 3. Create group-level ...Missing: governance | Show results with:governance
  107. [107]
    Rewarding negative results keeps science on track - Nature
    Nov 21, 2017 · One, announced earlier this month by the European College of Neuropsychopharmacology, offers a €10,000 (US$11,800) award for negative results in ...
  108. [108]
    The UK launched a metascience unit. Will other countries follow suit?
    Aug 7, 2024 · The metascience unit will fund studies that analyse UK research in the hope of boosting its quality and efficiency.
  109. [109]
    FDAAA Certification to Accompany Drug, Biological Product
    Mar 28, 2018 · Food and Drug Administration Amendments Act (FDAAA) of 2007 ... mandates the expansion of the clinical trials data bank (ClinicalTrials.gov).
  110. [110]
    Evaluation of Compliance With the FDA Amendments Act of 2007
    May 24, 2021 · This cross-sectional study evaluates clinical trials' rates of compliance with the legal requirements of the US Food and Drug ...Missing: mandate | Show results with:mandate<|separator|>
  111. [111]
    Why 90% of clinical drug development fails and how to improve it?
    Despite this validated effort, the overall success rate of clinical drug development remains low at 10%–15%5, 6, 7. Such persistent high failure rate raises ...
  112. [112]
    90% of drugs fail clinical trials
    Mar 12, 2022 · Lastly, 10% of failures were attributed to lack of commercial interest and poor strategic planning. This high failure rate raises the question ...
  113. [113]
    Meta-analysis and The Cochrane Collaboration - Systematic Reviews
    Nov 26, 2013 · Accordingly, the first Cochrane meeting on statistics was held at the UK Cochrane Centre in Oxford in July 1993, masterminded by Iain Chalmers ...
  114. [114]
    a tool for assessing risk of bias in non-randomised studies ... - PubMed
    We developed ROBINS-I (“Risk Of Bias In Non-randomised Studies - of Interventions”), a new tool for evaluating risk of bias in estimates of the comparative ...
  115. [115]
    The Role of Artificial Intelligence in Clinical Trial Design and ... - FDA
    May 30, 2024 · AI, including machine learning, is all gaining traction in clinical research, changing the clinical trial landscape, and is increasingly being ...
  116. [116]
    Examples and Lessons Learned from COVID-19 - PMC
    Jun 26, 2024 · Our objective is to present three exemplar cases of rapid evidence synthesis products from the Veterans Healthcare Administration Evidence ...
  117. [117]
    [PDF] A rapid response to the COVID-19 outbreak: the meta-evidence project
    Jun 23, 2021 · The meta-evidence project was a dynamic tool to capture emerging COVID-19 evidence, connect research teams, and aid decision-making.<|control11|><|separator|>
  118. [118]
    Failing the Future: Three Unsuccessful Attempts to Replicate Bem's ...
    Mar 14, 2012 · Bem's experiments have attracted considerable controversy, with much of the debate focusing on various statistical issues. For example, some ...Data Analysis · Results · Discussion
  119. [119]
    A Bayesian Perspective on the Reproducibility Project: Psychology
    Is psychology suffering from a replication crisis? What does “failure to replicate” really mean? American Psychologist. 2015;70(6):487. doi: 10.1037 ...
  120. [120]
    The effectiveness of nudging: A meta-analysis of choice architecture ...
    Over the past decade, choice architecture interventions or so-called nudges have received widespread attention from both researchers and policy makers.Missing: 2020s | Show results with:2020s
  121. [121]
    Don't believe the hype — quantum tech can't yet solve real ... - Nature
    Apr 16, 2025 · Investors and the public should know what quantum devices can and, more importantly, can't do.Missing: 2020s | Show results with:2020s<|separator|>
  122. [122]
    Physics paper sets record with more than 5,000 authors - Nature
    as far as anyone knows — broken the record for the largest number of contributors to a single research ...
  123. [123]
    ATLAS Collaboration | ATLAS Experiment at CERN
    It is one of the largest collaborative efforts ever attempted in science, with over 5000 members and almost 3000 scientific authors.
  124. [124]
    Propagation of errors and quantitative quantum simulation with ...
    Below, we analyse errors occuring in current experimental platforms, evaluating the sources of error in the simulation of dynamics after a quench in the Hubbard ...
  125. [125]
    Tutorial: Guide to error propagation for particle counting ...
    This article covers practical applications of forward error propagation in the context of particle counting measurements.
  126. [126]
    [PDF] How accurately does Thomas Kuhn's model of paradigm change ...
    All in all, the citation analysis of the cosmology papers indicates that a paradigm shift is not a short-term revolutionary process but instead a process ...
  127. [127]
    [PDF] A bibliometric perspective on Kuhnian paradigm shifts
    Jan 1, 2011 · The emergence of general relativity was a true scientific paradigm shift, in which the long-respected laws of Newtonian physics were replaced ...Missing: analysis patterns
  128. [128]
    60 years of progress - ITER
    Fusion research has increased key fusion plasma performance parameters by a factor of 10,000 over 60 years; research is now less than a factor of 10 away ...Missing: metascience 2020s
  129. [129]
    [PDF] Powering the Future Fusion & Plasmas - DOE Office of Science
    This report provides a decade-long vision for the field of fusion energy and plasma science and presents a path to a promising future of new scientific.Missing: metascience | Show results with:metascience
  130. [130]
    About Rubin Observatory - LSST.org
    Rubin Observatory project is to conduct the 10-year Legacy Survey of Space and Time (LSST). LSST will deliver a 500 petabyte set of images and data products ...
  131. [131]
    Legacy Survey of Space and Time (LSST) - Rubin Observatory
    Rubin Observatory will generate a new snapshot of the entire southern sky every few nights with the decade-long Legacy Survey of Space and Time (LSST).
  132. [132]
    AutoML: A systematic review on automated machine learning with ...
    AutoML technologies optimize the process of designing along with selecting machine learning models leading to more efficient with effective solutions. These ...
  133. [133]
    The Machine Learning Reproducibility Crisis - Pete Warden's blog
    Mar 19, 2018 · I was recently chatting to a friend whose startup's machine learning models were so disorganized it was causing serious problems as his team ...
  134. [134]
    Leakage and the reproducibility crisis in machine-learning-based ...
    Aug 4, 2023 · Reproducibility issues arose despite the use of standard, widely used datasets, often because of the lack of standard modeling and evaluation ...
  135. [135]
    Investigating the Impact of Randomness on Reproducibility in ... - arXiv
    The reproducibility crisis in machine learning is a growing concern that questions the reliability and validity of reported research findings [1] .
  136. [136]
    Auditing the AI auditors: A framework for evaluating fairness and ...
    A standardized approach for evaluating the fairness and bias of AI systems that make predictions about humans across disciplinary perspectives.
  137. [137]
    A Framework for Evaluating Fairness and Bias in High Stakes AI ...
    Oct 9, 2025 · A standardized approach for evaluating the fairness and bias of AI systems that make predictions about humans across disciplinary perspectives.
  138. [138]
    Quantitative Auditing of AI Fairness with Differentially Private ... - arXiv
    Apr 30, 2025 · Fairness auditing of AI systems can identify and quantify biases. However, traditional auditing using real-world data raises security and ...
  139. [139]
    Meta-Learning - AutoML.org
    Meta-Learning aims to improve learning across different tasks or datasets instead of specializing on a single one.
  140. [140]
    Bilevel optimization for automated machine learning
    AutoML refers to a set of technologies that streamline the entire process of applying ML to complex problems by automating many of the traditionally manual ...
  141. [141]
    [2109.09774] Inconsistency in Conference Peer Review: Revisiting ...
    Sep 20, 2021 · In this paper we revisit the 2014 NeurIPS experiment that examined inconsistency in conference peer review.
  142. [142]
    A Retrospective on the 2014 NeurIPS Experiment - Neil Lawrence
    Jun 16, 2021 · The objective of the NeurIPS experiment was to determine how consistent the process of peer review is. One way of phrasing this question is to ...Missing: blind | Show results with:blind
  143. [143]
    Recordings from Metascience 2025 Plenary Sessions now available
    Sep 1, 2025 · Recordings from Metascience 2025 Plenary Sessions now available. Catch up on the big conversations shaping the future of research. Categories:.
  144. [144]
    About - Center for Open Science
    The Center for Open Science (COS) was founded in 2013 to start, scale, and sustain open research practices that will democratize access to research.Team · Board · Partners · Finances
  145. [145]
    About Us - Meta Research Innovation Center at Stanford
    Launched in April 2014 with a founding grant from the Laura and John Arnold Foundation, the Meta-Research Innovation Center at Stanford (METRICS) is a research ...
  146. [146]
  147. [147]
    UK metascience unit releases first results - C&EN
    Jul 8, 2025 · Other initiatives from the unit include using $5 million to jointly fund 23 international projects on a variety of topics related to metascience ...Missing: 2024 | Show results with:2024
  148. [148]
    Metascience Alliance - Center for Open Science
    The Metascience Alliance brings together researchers, funders, institutions, publishers, policymakers, infrastructure providers, entrepreneurs, and others.
  149. [149]
    METRICS International Forum
    Metrics International Forum is a biweekly online webinar on meta-research topics. It occurs Thursday's 9:00 am PT time.
  150. [150]
    International fellowships to explore AI's impact on science - UKRI
    Oct 9, 2025 · The AI Metascience Fellowship Programme is part of broader efforts by each funding agency to ensure our understanding of the implications of AI ...Missing: big 2020s
  151. [151]
    Research Integrity and Peer Review: Home
    Research Integrity and Peer Review is an international, open access, peer reviewed journal that encompasses all aspects of integrity in research publication.Submission guidelinesAbout
  152. [152]
  153. [153]
  154. [154]
    Meta-Research: A Collection of Articles - eLife
    Nov 23, 2018 · This collection of articles highlights the breadth of meta-research with articles on topics as diverse as gender bias in peer review, statistical power in ...
  155. [155]
    PLOS Biology and PLOS ONE Meta-research Collection
    Nov 16, 2021 · This Collection highlights some of PLOS Biology's Meta-Research articles since 2016 as well as some of PLOS ONE's articles over the past few years.Missing: reforms efficiency gains projections
  156. [156]
    Metascience 2025 Conference
    The Metascience 2025 Conference is a global gathering for knowledge sharing, community building, and defining research priorities, bringing together ...Programme · Virtual symposia · Registration is now closed! · Prior Conferences
  157. [157]
    Metascience 2019 Symposium - The Emerging Field of Research on ...
    This symposium served as a formative meeting for metascience as a discipline. The meeting brought together leading scholars that are investigating questions ...Missing: Summer | Show results with:Summer
  158. [158]
    Retraction Watch – Tracking retractions as a window into the ...
    Journal retracts 'bizarre' placebo effect paper. Did you know that Retraction Watch and the Retraction Watch Database are projects of The Center of Scientific ...Database User GuideTop 10 most highly cited
  159. [159]
    Retraction Watch - Crossref
    Jan 19, 2025 · The database contains retractions gathered from publisher websites and is updated every working day by Retraction Watch.
  160. [160]
    AsPredicted: Home
    WHAT IS ASPREDICTED? A platform that makes pre-registrations easy to make and evaluate. All pre-registrations can be downloaded as single page PDFs that are ...Missing: metascience | Show results with:metascience