Research Excellence Framework
The Research Excellence Framework (REF) is the United Kingdom's periodic system for evaluating the quality and broader effects of research conducted in higher education institutions (HEIs).[1] It assesses research through expert peer review across 34 disciplinary units of assessment, focusing on three core elements: scholarly outputs such as publications and datasets; societal, economic, and cultural impacts arising from research; and the vitality and sustainability of the research environment, including staff resources and infrastructure.[2][3] Outcomes directly inform the allocation of approximately £2 billion in annual public funding for universities' research capabilities by the four UK funding bodies—Research England, the Scottish Funding Council, Higher Education Funding Council for Wales, and Department for the Economy in Northern Ireland.[4] Introduced in 2014 to replace the Research Assessment Exercise (RAE), which had evaluated research since 1986 primarily on publication quality, the REF expanded criteria to include impacts and environment, aiming to better capture research's real-world contributions and institutional support structures.[5][6] The 2014 and 2021 exercises rated submissions on a scale from unclassified to world-leading (4*), with REF 2021 results showing 41% of activity as world-leading and 43% as internationally excellent, underscoring the UK's sustained research strengths across disciplines and regions.[7] These assessments have incentivized investments in research infrastructure and outreach, contributing to the UK's global research prominence, as evidenced by consistent high rankings in international bibliometric indicators.[8][9] Despite these outcomes, the REF has drawn substantial criticism for its administrative burdens, with preparation and submission costs running to millions of pounds at individual institutions and total system-wide expenses exceeding £200 million per cycle, diverting resources from actual research.[10][3] Institutions 
have exploited rules through practices like selective staff and output submissions, potentially inflating average scores while excluding lower-rated work, which undermines the framework's goal of comprehensive quality assurance.[11] The emphasis on demonstrable impacts has also prompted concerns over subjective judgments in panel evaluations and incentives for short-term, quantifiable outcomes over fundamental or long-horizon inquiry, though empirical reviews indicate mixed evidence on whether it systematically distorts research priorities.[10][11]
Origins and Evolution
Pre-REF Assessments: The Research Assessment Exercise
The Research Assessment Exercise (RAE) served as the UK's principal system for periodically evaluating the quality of research conducted in higher education institutions, operating from 1986 to 2008 before being succeeded by the Research Excellence Framework.[12] Conducted approximately every five years on behalf of the higher education funding councils, the RAE employed peer review to rank research outputs submitted by university departments across various subject areas, known as units of assessment (UoAs).[13] Its core purpose was to inform the allocation of quality-related (QR) research funding, shifting from an earlier equity-based model to one that concentrated resources in high-performing areas, thereby aiming to enhance overall research excellence and international competitiveness.[10] Originating amid concerns over declining UK research citation impact, the first exercise in 1986—termed the Research Selectivity Exercise (RSE) and led by the University Grants Committee under Peter Swinnerton-Dyer—introduced standardized peer review of departmental research outputs to justify public funding and redistribute approximately 40% of institutional grants toward stronger performers.[10] This initial round laid the groundwork for selectivity, with a follow-up in 1989 expanding to 152 UoAs and adopting a five-point rating scale to refine funding decisions, influencing around 50% of grants.[12] By the 1992 RAE, formally renamed as such after the 1992 Further and Higher Education Act, the process assessed quality across expanded institutions, using a 1-5 rating scale where higher grades received disproportionately more funding—such as grade 5 departments getting four times the allocation of grade 2—tying over 90% of research budgets to performance outcomes.[10] Subsequent rounds iteratively refined the methodology while maintaining peer review as the dominant approach. 
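The grade-based selectivity described above amounts to a simple weighted-volume formula: each department's share of the QR pot is proportional to its rating weight multiplied by its research-active staff volume. The sketch below is purely illustrative; the grade weights are hypothetical (chosen only to honour the cited 4:1 ratio between grades 5 and 2), and `allocate_qr` is not an official funding-council routine:

```python
# Illustrative RAE-style selective QR allocation.
# Grade weights are hypothetical, set so that a grade-5 department earns
# four times a grade-2 department per researcher, as cited for the 1992
# exercise; the real coefficients varied between funding rounds.
GRADE_WEIGHTS = {1: 0.0, 2: 1.0, 3: 1.5, 4: 2.5, 5: 4.0}

def allocate_qr(pot, departments):
    """Split a QR funding pot across departments in proportion to
    rated volume (grade weight x research-active FTE)."""
    volumes = {name: GRADE_WEIGHTS[grade] * fte
               for name, (grade, fte) in departments.items()}
    total = sum(volumes.values())
    return {name: pot * v / total for name, v in volumes.items()}

# Two equally sized departments with different ratings:
shares = allocate_qr(1_000_000, {
    "Dept A": (5, 20),   # grade 5, 20 FTE
    "Dept B": (2, 20),   # grade 2, 20 FTE
})
# Dept A receives four times Dept B's allocation.
```

The same mechanism explains why low grades could attract no funding at all: a zero weight (as for grade 1 here) removes a department from the pot entirely.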
The 1996 RAE required submissions of up to four publications per researcher, introduced a 1-5* scale (with 5* denoting internationally excellent work), and covered 69 UoAs, excluding funding for the lowest-rated categories to sharpen incentives.[12] In 2001, emphasis shifted toward evaluating the merit of individual outputs rather than departmental averages, incorporating provisions for special circumstances like interdisciplinary work, with a 6* funding category subsequently applied to departments rated 5* in both the 1996 and 2001 exercises.[10] The final 2008 iteration featured 67 UoAs, shifted to granular quality profiles assigning percentages of staff outputs to categories 1 through 4* (where 4* signified world-leading quality), and rewarded "pockets of excellence" within institutions to avoid penalizing uneven departmental strengths.[12] Throughout its tenure, the RAE's peer-reviewed assessments—conducted by expert panels drawing on publications, research income, and staff details—prioritized output quality over quantity, enabling funding councils to benchmark institutions and allocate billions in annual QR funding selectively.[13] This approach demonstrably boosted UK research productivity, as evidenced by rising mean ratings across rounds (particularly from 1996 to 2001) and sustained high citation impacts, though it faced critiques for encouraging game-playing, such as selective staff submissions, and burdening academics with preparation costs.[10] By concentrating funding—QR grants totaled around £2 billion annually by the late 2000s—the RAE fostered critical mass in elite departments but widened disparities between high- and low-rated units, informing the REF's design to incorporate broader elements like research environment and societal impact.[12]
Inception and Design of the REF
The Research Excellence Framework (REF) emerged as a response to limitations identified in the preceding Research Assessment Exercises (RAEs), with its inception rooted in a UK government announcement in the December 2006 Pre-Budget Report to replace the RAE with a new system combining expert peer review and metrics for greater efficiency and broader assessment scope.[14] Following the 2008 RAE, the Higher Education Funding Council for England (HEFCE), alongside counterpart bodies in Wales, Scotland, and Northern Ireland, led the development of REF to inform the selective allocation of Quality-Related Research (QR) funding based on research excellence across UK higher education institutions.[14] The framework was formally positioned as the successor to the RAE, aiming to evaluate not only research outputs but also their societal impacts and the sustainability of research environments.[15] Development involved extensive consultations to refine the design, including an initial HEFCE consultation in November 2007 on the balance between metrics and expert review, which favored a hybrid approach over a purely metrics-based system initially proposed.[14] A second consultation in September 2009 addressed core features such as the inclusion of impact assessment, leading to a pilot exercise on impact in 2010 that informed final guidelines.[14] These processes ensured stakeholder input from academia and funding bodies, resulting in the publication of detailed assessment guidance by HEFCE in 2011, which outlined submission requirements for the inaugural REF covering the period from 2008 to 2013.[16] Submissions opened in January 2013, with results published on December 18, 2014, and subsequent funding allocations in March 2015.[14] The REF's design emphasized a unified framework applicable across all disciplines, structured around three differentially weighted elements in the overall quality profile: research outputs (65% 
weighting), impacts (20%), and research environment (15%).[15] Outputs were evaluated by peer panels for originality, significance, and rigour, typically comprising up to four items per researcher from January 1, 2008, to December 31, 2013.[14] Impacts were assessed via case studies demonstrating economic or societal benefits traceable to research conducted from January 1, 1993, onward, with submissions required to link impacts to specific units of assessment.[17] The environment component drew on templates detailing strategy, staffing, and infrastructure from January 1, 2008, to July 31, 2013, to gauge vitality and sustainability.[14] Peer review by expert sub-panels, organized into main panels by broad subject areas, formed the core methodology, supplemented by selective metrics for outputs where appropriate.[15] In contrast to the RAE's focus primarily on peer-reviewed outputs, REF introduced impact and environment as explicit criteria to capture broader research contributions and institutional capacity, while incorporating metrics to reduce burden and enhance objectivity—though expert judgment remained predominant following consultation feedback against full metric reliance.[14] This hybrid model addressed criticisms of the RAE's high costs and game-playing incentives by emphasizing demonstrable real-world effects, though institutions retained discretion over which staff to submit until REF 2021 mandated returns for all staff with significant research responsibility.[14] The design prioritized transparency, equity, and equality, with guidelines requiring institutions to address staff circumstances affecting submissions.[15] Overall, REF aimed to incentivize high-quality, impactful research while informing funding decisions that rewarded excellence, with 82% of UK research rated world-leading or internationally excellent in the 2014 exercise.[18]
Implementation of REF 2014
Institutions submitted data to the REF 2014 through an online system managed by the Higher Education Funding Council for England (HEFCE), with the census date for eligible staff set at 31 October 2013 and the submission deadline on 29 November 2013.[19][20] A total of 154 higher education institutions (HEIs) participated, producing 1,911 submissions across 36 units of assessment (UOAs) grouped under four main panels: Panel A (medicine, health, and life sciences), Panel B (physical sciences and engineering), Panel C (social sciences), and Panel D (arts and humanities).[21][18] Each submission included research outputs (typically four per selected researcher), impact case studies (with evidence of socioeconomic benefits from research conducted between 1993 and 2013), and details on the research environment, weighted at 65%, 20%, and 15% respectively in the overall evaluation.[22] The assessment process relied on peer review by over 1,000 expert panel members, including academics and research users, appointed between 2010 and 2012 to ensure disciplinary expertise and independence.[23] Panels applied standardized criteria to score outputs, impacts, and environments on a scale of 4* (world-leading) down to 1*, plus unclassified, with calibration exercises in early 2014 to align scoring across sub-panels and main panels.[24] Impact evaluation involved reviewing 6,975 case studies and institutional templates, allocated to 2-4 reviewers per item, followed by individual scoring, moderation meetings, and limited auditing of selected cases to verify evidence.[23] Sub-panels handled initial reviews within clusters, escalating discrepancies for resolution, while main panels oversaw cross-panel consistency without direct rescoring.[23] Results were finalized after panel deliberations throughout 2014 and published on 18 December 2014, providing quality profiles for each submission rather than overall institutional rankings.[25][18] These outcomes informed 
research funding allocations by the four UK higher education funding bodies for the period 2015-16 to 2020-21, distributing approximately £2 billion annually based on the volume and quality of research, with 4* rated activity attracting the highest funding intensity.[14] The implementation marked the first inclusion of impact assessment in the UK's periodic research evaluation, introducing requirements for demonstrable evidence of real-world benefits, though panels noted variability in evidence quality and the need for clearer pathways linking outputs to impacts.[23]
REF 2021 and Key Modifications
The Research Excellence Framework (REF) 2021 represented the second periodic assessment of UK higher education research, with submissions required by 31 March 2021 following delays from the original November 2020 deadline, and results published on 12 May 2022.[26] In response to the Stern Review's recommendations following REF 2014, the framework shifted toward greater inclusivity by mandating submission of all academic staff with a "significant responsibility for research," eliminating the selective approach used previously and resulting in a 46% increase in submitted staff full-time equivalents to 76,132 across 157 institutions.[26] This change aimed to reduce gaming of the system but raised administrative burdens, with institutions required to declare staff circumstances—such as career breaks or health issues—affecting 41% of submitted staff to adjust output expectations, up from 29% in REF 2014.[26] Assessment weightings were adjusted to emphasize broader research contributions: outputs weighted at 60% (down from 65% in 2014), impact at 25% (up from 20%), and environment unchanged at 15%.[4] Outputs were assessed on originality, significance, and rigour, with a required average of 2.5 per full-time equivalent researcher and between one and five per individual; eligible outputs were limited to those first published between 1 January 2014 and 31 December 2020, with portability constrained so that institutions could return work demonstrably generated while the researcher was employed there, and pre-2014 outputs were ineligible unless substantially revised.[26] Impact evaluation expanded to include case studies demonstrating effects from research conducted between 1 January 2000 and 31 December 2020, with impacts realized from 1 August 2013 to 31 December 2020 (the end date extended due to pandemic disruptions), requiring evidence of rigorous pathways from research to outcomes across economic, societal, cultural, and other domains.[26] The research environment component transitioned to an institutional-level narrative (via REF 5a template) supplemented by unit-specific data, focusing on strategy, resources, and support for 
research, people, and collaboration, rather than solely unit-of-assessment templates as in 2014.[26] An open access policy mandated that journal articles and conference proceedings published from 1 April 2016 to 31 July 2020 be deposited in institutional repositories within three months of acceptance or publication, with exceptions for national security or publisher restrictions, to promote accessibility while balancing compliance challenges. Interdisciplinary research received explicit recognition through panel criteria and identifiers, with 6,340 cross-referrals across 34 units of assessment.[26] COVID-19 prompted targeted revisions, including a four-month timeline extension, virtual or hybrid panel assessments from May 2021, and allowances for narrative extensions on disruptions; physical outputs (13,176 items) were handled flexibly via USB distribution to 1,100 assessors.[26] Overall, 185,594 outputs and 6,781 impact case studies were submitted, with total costs at £16.7 million—16% above REF 2014 but under budget—reflecting efficiencies from digital processes despite heightened equality, diversity, and inclusion measures like diverse panel recruitment from over 130 universities.[26] These modifications prioritized fairness and reduced environmental footprint, with hybrid meetings replacing 250 in-person sessions from 2014.[26]
Developments Toward REF 2029
In May 2023, UK Research and Innovation (UKRI) published initial decisions for the next Research Excellence Framework exercise, initially termed REF 2028 but later confirmed as REF 2029, following recommendations from the Future Research Assessment Programme (FRAP). These decisions emphasized reducing administrative burden, shifting assessment from individual researchers to institutional and disciplinary levels, and rebalancing evaluation elements to better reflect research culture and societal benefits.[27] Central reforms include a new structure with three weighted assessment elements: Contributions to Knowledge and Understanding (CKU) at 50%, Engagement and Impact (E&I) at 25%, and People, Culture, and Environment (PCE) at 25%, subject to validation via a PCE pilot. CKU submissions decouple outputs from specific staff, requiring only institutional employment links and allowing unit-level reductions for special circumstances, while eliminating minimum output quotas per researcher. E&I broadens to include institutional statements and fewer mandatory case studies (a minimum of one for units submitting up to 9.99 full-time-equivalent staff), removing the prior 2* quality threshold. PCE introduces institution-wide narratives on research environment, informed by advisory panels on people/diversity and research diversity. Submissions occur across 34 units of assessment, using average staff volume from Higher Education Statistics Agency data, with no individual staff returns. 
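The element weightings above combine arithmetically: each element receives a quality profile (the percentage of activity judged at each star level), and the overall profile for a submission is the weighted average of the element profiles. The sketch below applies the proposed REF 2029 weights; the sub-profiles are invented for illustration and are not real results:

```python
# Combine element-level quality profiles into an overall profile
# using the proposed REF 2029 weightings (CKU 50%, E&I 25%, PCE 25%).
WEIGHTS = {"CKU": 0.50, "EI": 0.25, "PCE": 0.25}

def overall_profile(profiles):
    """Each profile maps star levels to percentages summing to 100;
    the overall profile is their weighted average under WEIGHTS."""
    levels = ["4*", "3*", "2*", "1*", "u/c"]
    return {lvl: round(sum(WEIGHTS[e] * profiles[e][lvl] for e in WEIGHTS), 1)
            for lvl in levels}

# Hypothetical sub-profiles for one unit of assessment:
profiles = {
    "CKU": {"4*": 40, "3*": 45, "2*": 10, "1*": 5, "u/c": 0},
    "EI":  {"4*": 50, "3*": 40, "2*": 10, "1*": 0, "u/c": 0},
    "PCE": {"4*": 30, "3*": 50, "2*": 20, "1*": 0, "u/c": 0},
}
print(overall_profile(profiles))
# → {'4*': 40.0, '3*': 45.0, '2*': 12.5, '1*': 2.5, 'u/c': 0.0}
```

The same weighted-average construction underlies the earlier exercises; only the weights differ (65/20/15 in REF 2014, 60/25/15 in REF 2021).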
Open access policies for journal articles and conference contributions, refined after a 2024 consultation, mandate post-publication deposits within three months starting January 1, 2026, permit certain Creative Commons licenses beyond CC-BY, and add exceptions while excluding longform outputs initially.[2][28][29] Guidance documents advanced progressively, with the overview published on January 16, 2025, detailing submission processes and eligibility—such as participation requests by June 30, 2027, for institutions lacking research degrees. Detailed CKU and Code of Practice guidance followed on June 12, 2025, stressing transparency, equity, and inclusion in staff identification, alongside support for diverse roles like technicians. Expert peer review panels were appointed in 2025 to handle assessments. However, on September 5, 2025, UKRI announced a pause in criteria-setting and final guidance publication, initiated by Science Minister Lord Patrick Vallance, to align with government higher education priorities; this short stock-taking period allows continued work on PCE piloting, potential lighter tracks for less research-intensive institutions, and funding modifications for collaboration, with autumn sector engagement planned. The exercise aims for results in December 2029 to inform annual allocation of approximately £2 billion in public funding.[2][28][30][31][5]
Assessment Framework
Units of Assessment and Institutional Eligibility
The Research Excellence Framework (REF) structures its assessments around discipline-specific units of assessment (UOAs), into which higher education institutions (HEIs) submit their research outputs, impact case studies, and environment data.[32] These UOAs ensure evaluations are conducted by expert sub-panels with relevant disciplinary knowledge, covering the full spectrum of academic research. For REF 2021, there were 34 UOAs grouped under four main panels: Main Panel A (Medicine, Health and Life Sciences, UOAs 1–6), Main Panel B (Physical Sciences, Engineering and Mathematics, UOAs 7–11), Main Panel C (Social Sciences, UOAs 12–26), and Main Panel D (Arts and Humanities, UOAs 27–34).[32][33] Institutions select UOAs based on the primary disciplinary focus of their staff's research, with submissions required to align with the defined scope of each unit to facilitate peer review by specialists. This structure, retained for REF 2029 with minor adjustments to sub-panel configurations, allows for granular assessment while aggregating results at the institutional level for funding purposes.[34] Institutional eligibility for REF submissions is restricted to UK HEIs recognized by the four funding councils—Research England, the Scottish Funding Council, the Higher Education Funding Council for Wales, and the Department for the Economy in Northern Ireland—as undertaking research activities eligible for quality-related (QR) funding.[35][2] Only these bodies' recognized providers may participate, ensuring the framework supports the allocation of public research funding to domestic institutions. 
In REF 2021, 157 such HEIs submitted across the UOAs, reflecting the population of qualifying UK universities and colleges with sufficient research volume.[33] Eligibility does not extend to non-UK entities or private providers without funding body recognition, maintaining the REF's focus on publicly funded higher education research.[2] Institutions must also comply with submission requirements, including a code of practice for staff selection and data provision, verified against funding council lists prior to assessment.[36] For REF 2029, the criteria remain aligned with this model, emphasizing institutional research enabled within the UK HE sector.[2]
Submission Processes and Data Requirements
Institutions submit data to the REF through an online submission system managed by the Higher Education Statistics Agency (HESA), following preparation guided by the official REF guidance documents. For REF 2021, submissions were due by midday on 31 March 2021, extended from the original November 2020 deadline due to the COVID-19 pandemic, with institutions importing structured data files covering staff, outputs, impacts, and environment for each unit of assessment (UoA).[37][35] Prior to final submission, higher education institutions (HEIs) must develop and submit a Code of Practice (CoP) outlining internal procedures for staff selection, output portability, and data integrity, which undergoes review and approval by the four UK funding bodies to promote equity and auditability.[35][38] Key data requirements for REF 2021 included Category A staff details—defined as individuals employed by the HEI on or after 1 August 2013 with a minimum of 0.2 full-time equivalent (FTE) research involvement and significant responsibility for research by 31 July 2020—linked to a minimum average of 2.5 research outputs per FTE, such as peer-reviewed journal articles, books, or other scholarly works meeting open access criteria where applicable.[39][35] Impact case studies required evidence of societal or economic benefits arising from research conducted between 1 January 2000 and 31 December 2020, submitted via standardized templates (REF3) with details on underpinning research, pathways to impact, and verifiable outcomes.[39] Environment submissions demanded quantitative data on research income, doctoral completions, and staffing profiles (REF4), alongside narrative statements (REF5) of up to 10,000 words per UoA on research strategy and support.[39] All elements were subject to audit trails, with non-compliance risking exclusion.[35] For REF 2029, submission processes shift toward reduced individual-level scrutiny, eliminating mandatory staff submissions and decoupling outputs from specific researchers to 
emphasize institutional contributions.[2][28] Outputs are reclassified as Contributions to Knowledge and Understanding (CKU), selected via CoP-guided processes without minimum per-person quotas, allowing up to five CKU per researcher nomination but prioritizing institutional relevance over individual attribution; these must demonstrate significant advancement in knowledge within the UoA.[40] Impact evaluation expands to include a broader range of activities with evidence from 1 January 2014 onward, while environment assessments focus on people, culture, and infrastructure through templates capturing institutional strategies, with submissions anticipated in late 2027.[2] Open access mandates evolve, requiring post-publication deposit by 1 January 2026 for eligible outputs, exempting monographs contracted before that date.[2] These changes aim to lower administrative burdens, estimated at a 50% reduction in output volume compared to REF 2021, while maintaining peer-reviewed rigor.[2]
Peer Review Panels and Expertise
The Research Excellence Framework (REF) employs a system of expert peer review conducted by four main panels and 34 sub-panels, each aligned with subject-based units of assessment (UoAs).[41] Main panels oversee the overall assessment process, ensuring consistent application of criteria across sub-panels, calibrating standards, and approving final results; they include chairs of sub-panels, interdisciplinary experts, international members for benchmarking, and specialists in research application.[42] Sub-panels perform the core evaluation of institutional submissions, applying discipline-specific criteria to outputs, impacts, and environments through expert judgment informed by peer review protocols.[41] Panel membership is recruited via nominations from subject associations, learned societies, and research organizations, with selections prioritizing individuals of established reputation and deep expertise in relevant fields to reflect the UK's research community's strengths.[43] For REF 2021, nominations closed on December 20, 2017, emphasizing balance in expertise, including senior academics for outputs and environments, research users (non-academic experts) for impact case studies, and international advisers; equality and diversity considerations guide appointments without compromising scholarly rigor.[43] Sub-panels may expand during the assessment phase by adding specialists, such as those for interdisciplinary work or language-specific outputs, to address gaps in coverage.[41] Expertise within panels draws from academic leaders, with main panel chairs for REF 2021 exemplifying domain authority: Professor John Iredale for Main Panel A (medicine, health, life sciences), Professor David Price for Main Panel B (physical sciences, engineering, mathematics), Professor Jane Millar for Main Panel C (social sciences), and Professor Dinah Birch for Main Panel D (arts and humanities).[42] This composition enables rigorous, field-specific scrutiny, where sub-panels 
assess quality thresholds (e.g., 4* for world-leading, 3* for internationally excellent), with each output typically examined by at least two panel members and supplemented by bibliometric data where appropriate, while main panels harmonize judgments to mitigate subjectivity.[42] Panels operate in phases—criteria-setting, submission review, and assessment—with documented working methods to maintain transparency and accountability.[41]
Evaluation Criteria
Outputs and Their Quality Assessment
Research outputs in the REF, such as peer-reviewed journal articles, monographs, conference contributions, software, exhibitions, and performances, are evaluated through expert peer review by sub-panels organized into main panels covering broad disciplinary areas.[44] These panels consist of academics and, in some cases, users of research, who assess outputs against three core criteria: originality, referring to the novelty and innovative contribution to knowledge, methods, or understanding; significance, encompassing the influence and importance of the work within its field or beyond; and rigour, evaluating the soundness of the methodology, analysis, and intellectual coherence.[44][45] In REF 2021, outputs carried a 60% weighting in the overall quality profile for each unit of assessment (UoA), with sub-panels reviewing a total of 185,594 items across 34 UoAs, often involving double review and cross-referral for interdisciplinary or specialist expertise.[44] Quality is graded on a five-point scale: 4* for world-leading in terms of originality, significance, and rigour; 3* for internationally excellent; 2* for internationally recognized; 1* for nationally recognized; and unclassified for work below the standard of national recognition.[44][45] Institutions submit an average of 2.5 outputs per full-time equivalent (FTE) researcher in the submitting pool, with a minimum of one and maximum of five per individual, allowing flexibility for varied productivity while emphasizing quality over quantity; only outputs published during the assessment period or attributable to the institution are eligible.[44] Peer review prioritizes qualitative judgment of the output's content, explicitly avoiding reliance on citation metrics or journal prestige to align with declarations like DORA, though panels may consider contextual evidence of impact within the assessment.[45] For REF 2029, the assessment evolves into the "Contribution to Knowledge and Understanding" (CKU) element, 
weighted at 50%, which builds primarily on outputs but incorporates broader disciplinary contributions evidenced through institutional statements and quantitative data.[2] This shift aims to capture cumulative advancement rather than isolated items, with sub-panels applying the same originality, significance, and rigour criteria but emphasizing evidence of field-wide progress, potentially reducing gaming through output maximization.[2] Outputs remain central, but the framework encourages recognition of diverse formats, including open research practices, while maintaining peer-led evaluation to ensure rigour against verifiable standards.[2]
Impact Measurement and Case Studies
Impact in the Research Excellence Framework (REF) constitutes 25% of the overall assessment weight in REF 2021, up from 20% in REF 2014, with evaluation based exclusively on submitted impact case studies rather than broader institutional narratives.[44][46] These case studies must evidence tangible effects or benefits beyond academia, such as changes to the economy, society, culture, public policy, services, health, or the environment, arising from research conducted at the submitting higher education institution (HEI).[47] Eligible impacts are limited to those occurring between 1 August 2013 and 31 December 2020 for REF 2021 (the end date extended from 31 July 2020 due to the pandemic), while the underpinning research must have been produced between 1 January 2000 and 31 December 2020.[48][49] Institutions submit case studies using a standardized template outlined in Annex G of the REF guidance, which requires sections detailing the underpinning research (including outputs and context), references to that research, and a narrative of the impact achieved, supported by corroborative evidence such as testimonials, metrics, or third-party documentation.[50][51] The template emphasizes demonstrable causal links between the research and impacts, with evidence assessed for rigour and plausibility rather than exhaustive quantification.[50] In REF 2021, UK HEIs collectively submitted over 6,800 impact case studies across 34 units of assessment, covering diverse sectors including health policy, environmental policy, and economic development.[46][52] Assessment occurs through peer review by REF sub-panels, comprising academic experts and research users (non-academics from beneficiary sectors), who evaluate case studies holistically against two primary criteria: reach, defined as the breadth of audiences, sectors, or geographical areas affected, and significance, encompassing the scale, magnitude, and beneficial changes realized.[44][53] These criteria are not 
scored separately but integrated to assign an overall quality profile on a five-point scale: 4* (world-leading in terms of reach and significance), 3* (internationally excellent), 2* (recognised internationally), 1* (recognised nationally), or unclassified (below national quality).[54][55] Panels verify claims through the provided evidence, cross-referencing with outputs and environment templates, and may consult external assessors for specialized validation, prioritizing empirical substantiation over anecdotal assertions.[53][56] This methodology has been credited with incentivizing evidence-based demonstration of research benefits, as evidenced by REF 2021 analyses showing impacts spanning local to global scales, though critiques highlight potential inconsistencies in peer judgments due to subjective interpretations of causality and evidence thresholds.[57][58] Sub-panel guidelines require calibrated scoring via mock exercises and inter-panel moderation to mitigate variability, ensuring assessments reflect verifiable contributions rather than institutional prestige alone.[55] In practice, high-rated case studies often feature quantifiable metrics, such as policy adoptions influencing millions or economic returns exceeding research costs by factors of 10 or more, underscoring the framework's emphasis on causal realism in impact attribution.[46]
Research Environment Evaluation
The research environment in the Research Excellence Framework (REF) evaluates the vitality, sustainability, and support structures underpinning research activities within units of assessment (UoAs), including strategies for staff development, research student supervision, resource allocation, and collaboration. In REF 2021, this element accounted for 15% of the overall assessment weighting, with the remaining portions allocated to outputs (60%) and impact (25%).[59] Submissions comprised a narrative template limited to 10 pages per UoA, supplemented by quantitative indicators such as full-time equivalent (FTE) staff numbers, doctoral completions (targeting 20-30% of submitted staff as research students over the assessment period), and trends in research income from 2013/14 to 2019/20.[60][61] Peer review panels applied a star-based grading scale (4* for world-leading quality to 1* for nationally recognized, with unclassified below that), focusing on criteria such as the unit's capacity to sustain high-quality research, support for diverse researchers including early-career staff and those with protected characteristics, and evidence of research infrastructure like facilities and funding success rates.[62] Panels calibrated scores through benchmarking against national norms and cross-panel consistency exercises, emphasizing demonstrable evidence over unsubstantiated claims; for instance, high scores required proof of equitable practices, such as progression data for under-represented groups, rather than aspirational statements.[63] Quantitative metrics informed but did not determine scores, as human judgment predominated to assess contextual factors like discipline-specific challenges.[64] For REF 2029, the environment weighting is set to rise to 25%, with a shift toward greater emphasis on institutional-level elements, including research culture metrics piloted in areas like strategy, responsibility, connectivity, development, and equity.[5] This includes potential 
standalone assessment of people and culture (e.g., via surveys on workload, openness, and inclusivity), alongside continued UoA-specific infrastructure evaluation, aiming to better capture systemic factors influencing research sustainability amid critiques that prior templates underemphasized cultural dynamics.[65] Panels will retain peer review primacy but integrate more standardized indicators to enhance comparability, with guidance stressing evidence of causal links between environment features and research outcomes.[66]
Grading Scales and Profiles
The Research Excellence Framework (REF) evaluates research quality using a five-point scale applied separately to outputs, impacts, and the research environment within each unit of assessment (UoA). This scale comprises grades of 4* (world-leading), 3* (internationally excellent), 2* (recognised internationally), 1* (recognised nationally), and unclassified (below nationally recognised standards), with assessments calibrated against criteria of originality, significance, and rigour for outputs; reach and significance for impacts; and vitality and sustainability, research support, and collaborative environment for the environment. Panels determine grade boundaries through calibration exercises to ensure consistency, with 4* reserved for work demonstrating exceptional advancement in knowledge or practice, while lower grades reflect diminishing international or national comparability. For each submission, peer review panels produce distinct quality profiles indicating the percentage of research activity assigned to each grade level. Output profiles reflect the distribution across submitted items (e.g., publications or artefacts), weighted by staff time; impact profiles derive from case studies and additional evidence, capturing non-academic benefits; and environment profiles assess the UoA as a whole based on submitted templates and supporting data like staffing and income metrics. These element-specific profiles enable granular analysis of strengths, with unclassified portions typically minimal in high-performing UoAs but indicating areas failing basic quality thresholds.[67] An overall quality profile is then calculated as a weighted composite: 60% from outputs, 25% from impacts, and 15% from environment, yielding a distribution of percentages across the scale that informs institutional rankings and funding.
This aggregation, while promoting balanced evaluation, has drawn scrutiny for potentially diluting element-specific insights, as the weighted average smooths disparities (e.g., strong outputs offsetting weaker impacts).[63] Grade point averages (GPAs), derived by assigning numerical values (4 for 4*, etc.) to profile percentages, provide a supplementary metric for comparisons, though profiles themselves prioritize nuanced distributions over scalar summaries.[68]
| Grade | Description |
|---|---|
| 4* | World-leading in originality, significance, and rigour (outputs); outstanding reach and significance (impacts); world-leading support for sustained excellence (environment). |
| 3* | Internationally excellent in originality, significance, and rigour (outputs); significant reach and importance (impacts); internationally excellent vitality and sustainability (environment). |
| 2* | Recognised internationally in originality, significance, and rigour (outputs); recognised but modest additional reach and significance (impacts); recognised internationally as conducive to excellent research (environment). |
| 1* | Recognised nationally in originality, significance, and rigour (outputs); limited additional reach and significance (impacts); recognised nationally as supportive of research (environment). |
| Unclassified | Does not meet recognised national standards. |
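The profile arithmetic described above—combining the three element sub-profiles with 60/25/15 weights and reducing the result to a GPA—can be sketched in a few lines of Python. All profile percentages below are invented for illustration and correspond to no actual submission:

```python
# Sketch of REF 2021 overall-profile aggregation and GPA calculation.
# Element weights: outputs 60%, impact 25%, environment 15% (REF 2021).
# All profile percentages are invented, illustrative figures.

ELEMENT_WEIGHTS = {"outputs": 0.60, "impact": 0.25, "environment": 0.15}
GRADES = ("4*", "3*", "2*", "1*", "unclassified")
GRADE_POINTS = {"4*": 4, "3*": 3, "2*": 2, "1*": 1, "unclassified": 0}

def overall_profile(profiles):
    """Weighted composite of element quality profiles (percent per grade)."""
    return {
        g: sum(ELEMENT_WEIGHTS[e] * profiles[e][g] for e in ELEMENT_WEIGHTS)
        for g in GRADES
    }

def gpa(profile):
    """Grade point average: profile percentages weighted by grade points."""
    return sum(GRADE_POINTS[g] * profile[g] for g in GRADES) / 100

profiles = {
    "outputs":     {"4*": 40, "3*": 45, "2*": 12, "1*": 3, "unclassified": 0},
    "impact":      {"4*": 50, "3*": 38, "2*": 12, "1*": 0, "unclassified": 0},
    "environment": {"4*": 63, "3*": 25, "2*": 12, "1*": 0, "unclassified": 0},
}

combined = overall_profile(profiles)  # 4* share works out to 45.95%
overall_gpa = gpa(combined)           # roughly 3.30 on the 0-4 scale
```

The weighted profile still sums to 100 per cent, and the GPA is a linear compression of the same information—which is precisely the loss of element-specific detail noted in the critique of aggregation.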
Outcomes and Resource Allocation
REF 2021 Results and Institutional Performance
The REF 2021 results, published on 12 May 2022, evaluated submissions from 157 UK higher education institutions (HEIs) across 34 subject-based units of assessment, covering research activity from the 2014–2020 period.[69][67] Overall, 41 per cent of submitted research activity received a 4* rating for world-leading quality, while 43 per cent was graded 3* for internationally excellent quality, reflecting a high baseline of performance across the sector.[69] Impact case studies and environment templates similarly garnered strong evaluations, with the increased weighting of impact (to 25 per cent from 20 per cent in REF 2014) contributing to elevated overall scores, particularly in social sciences and arts/humanities where mean GPAs rose by approximately 0.24–0.25 points.[70] Institutional performance was measured via overall quality profiles, yielding a grade point average (GPA) on a 1–4 scale (weighted 60 per cent outputs, 25 per cent impact, 15 per cent environment), alongside research power (GPA adjusted for submission volume).[71] Imperial College London achieved the highest overall GPA of 3.63, surpassing the previous leader, the Institute of Cancer Research (ICR), which ranked second despite its narrower submission scope of just two units.[72] The University of Cambridge and London School of Economics placed joint third, underscoring the dominance of selective, research-intensive institutions in quality metrics.[69] Specialized entities like the ICR and London School of Hygiene and Tropical Medicine (LSHTM) excelled in niche areas, with LSHTM posting the top impact GPA of 3.78.[73] Larger comprehensive universities, such as University College London and the University of Manchester, demonstrated superior research power due to higher submission volumes, retaining positions in the top five for aggregate output despite slightly lower per-unit GPAs compared to elite peers.[72] Mid-tier and post-1992 institutions showed gains in select units, with overall sector
improvements attributed to refined assessment criteria emphasizing real-world impact over pure volume.[70] However, disparities persisted: "Golden Triangle" HEIs (Oxford, Cambridge, Imperial, UCL, LSE, King's College London) captured a disproportionate share of top grades, comprising most of the top 10 by GPA, though their collective funding allocation faced marginal dilution from broader excellence distribution.[72] These outcomes highlighted competitive stratification, with 93 per cent of research at high performers like UCL rated 3* or above, versus more variable profiles at smaller or teaching-focused providers.[74]
Mechanisms for Funding Distribution
The Research Excellence Framework (REF) serves as the primary mechanism for distributing approximately £2 billion in annual Quality-Related (QR) funding across UK higher education institutions, administered by the four UK funding bodies: Research England for England, the Scottish Funding Council (SFC) for Scotland, the Higher Education Funding Council for Wales (HEFCW) for Wales, and the Department for the Economy (DfE) for Northern Ireland.[5][75] This block grant supports research infrastructure, staff costs, and knowledge exchange, with institutions retaining discretion over its use. Allocations are formulaic, deriving from REF's overall quality profiles—comprising outputs, impact, and environment sub-profiles—adjusted for research volume measured in full-time equivalent (FTE) staff. Funding emphasizes high-quality research rated 3* (internationally excellent) and 4* (world-leading), using multipliers such as 1.0 for 4*, 0.6 for 3*, and lower for lesser grades, multiplied by volume to compute "REF power."[75][76] In England, which receives the largest share (around £1.3 billion mainstream QR annually), Research England applies REF 2021 results through a weighted formula: 70% for outputs, 15% for impact, and 15% for environment.[77] This differs from REF's assessment weightings (60% outputs, 25% impact, 15% environment) to prioritize publication quality in funding decisions. 
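A minimal sketch of the quality-weighted volume ("REF power") logic behind such formulas follows. The multipliers mirror the illustrative figures quoted above (1.0 for 4*, 0.6 for 3*, nothing below); actual funding-body weightings differ by nation and year, and the unit names, FTE counts, profiles, and pot size are all invented:

```python
# Illustrative sketch of quality-weighted volume ("REF power") and the
# pro-rata split of a fixed QR pot. Multipliers follow the illustrative
# figures in the text (1.0 for 4*, 0.6 for 3*, zero below); real
# funding-body weightings vary. All names and numbers are hypothetical.

QUALITY_MULTIPLIERS = {"4*": 1.0, "3*": 0.6, "2*": 0.0, "1*": 0.0}

def ref_power(profile, fte):
    """FTE volume scaled by the quality-weighted share of the profile."""
    weighted_share = sum(
        QUALITY_MULTIPLIERS[g] * pct / 100 for g, pct in profile.items()
    )
    return fte * weighted_share

units = {
    "unit_a": ({"4*": 50, "3*": 40, "2*": 8, "1*": 2}, 120.0),  # profile %, FTE
    "unit_b": ({"4*": 30, "3*": 50, "2*": 15, "1*": 5}, 200.0),
}

powers = {name: ref_power(p, fte) for name, (p, fte) in units.items()}
pot = 10_000_000  # hypothetical fixed QR pot in pounds
shares = {name: pot * pw / sum(powers.values()) for name, pw in powers.items()}
```

Because the pot is fixed, allocation is zero-sum: in this sketch the higher-quality unit_a (power 88.8) still receives less than the much larger unit_b (power 120.0), illustrating the distinction between per-unit quality and research power.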
For 2025–26, total QR allocations reached £1,987 million, incorporating mainstream QR (£1,303 million), research degree program supervision (£344 million), charity support (£219 million), and business research (£114 million), with no changes to weightings or methods from prior years to ensure allocation stability.[77] Post-REF results, such as those from 2021, inform allocations for several years, blending with historical performance to mitigate volatility.[76] The devolved funding bodies adapt REF outcomes to national priorities, though specifics vary and are less transparently formulaic than in England. Scotland's SFC, for instance, integrates REF quality shares but incorporates philanthropic income and regional equity factors.[75] Wales and Northern Ireland similarly use REF to gauge excellence while adjusting for local policies, such as supporting smaller institutions or specific disciplines, resulting in allocations that may diverge from pure REF power metrics.[75] Across all bodies, QR funding post-REF 2021 has included targeted supplements, like flexible allocations for research culture (2022–2024), but core mechanisms remain tied to empirical REF scores rather than competitive bidding.[75] This approach aims to reward sustained excellence while enabling devolved flexibility, though it has prompted debates on inter-nation disparities.[75]
Rankings, Incentives, and Competitive Dynamics
The REF produces rankings at the level of higher education institutions (HEIs) and their units of assessment (UoAs) via aggregated quality profiles, which quantify the proportion of research rated at each grade: world-leading (4*), internationally excellent (3*), recognised internationally (2*), and recognised nationally (1*). For REF 2021, results released on 12 May 2022 indicated that 41 per cent of research activity across all submissions achieved 4* status, while 43 per cent was rated 3*.[69][67] These profiles feed into broader institutional rankings by research power—computed as the volume of submitted activity multiplied by the grade point average (GPA)—enabling comparisons that highlight top performers, such as the Institute of Cancer Research topping biological sciences in weighted quality.[6] Rankings are not absolute but relative, with full datasets available for download, allowing analysis by UoA, institution, or overall power.[67] Incentives arise chiefly from REF's integration with Quality-related Research (QR) funding, where outcomes determine the distribution of approximately £2 billion annually from UK funding councils to HEIs, with higher-rated research attracting a disproportionate share—often weighted heavily toward 4* activity.[4][78] This structure motivates institutions to enhance research portfolios, evidenced by empirical studies showing accelerated publication outputs and submission volumes in the lead-up to REF deadlines, such as a detectable spike in research activity preceding the 2014 assessment.[79] For researchers, incentives include career advancement tied to REF-eligible contributions, though panels emphasize quality over quantity, with 100 per cent of eligible staff now required to be returned under REF 2029 rules to promote inclusivity in assessment.[5] Competitive dynamics manifest in a funding contest among over 150 submitting HEIs, where gains for high performers come at the expense of others, driving strategies like selective
recruitment of REF-optimizing academics—often timed near submission cycles to boost volume and GPA.[79] This rivalry has elevated overall UK research standards, as REF monitoring correlates with sustained performance uplifts in metrics like citation impact, yet it also amplifies institutional hierarchies, with pre-1992 universities dominating top rankings due to scale advantages.[80] Post-REF 2021, such dynamics influenced QR allocations, rewarding powerhouses while pressuring lower-ranked entities to consolidate or specialize, though critiques note potential for short-termism over long-term innovation.[81][82]
Effects on UK Research Ecosystem
Evidence of Quality Improvements
Empirical analyses using difference-in-differences methods comparing UK universities to matched US institutions as a control group demonstrate that the 2014 REF cycle prompted a 41.37% increase in research outputs per department between 2009 and 2014, with a concurrent rise in the proportion of publications in high-quality journals (rated 3, 4, or 4* under the Academic Journal Guide by the Chartered Association of Business Schools).[83] This shift towards more prestigious venues suggests an enhancement in the perceived or assessed quality of submitted outputs, as institutions strategically prioritized work aligned with REF's emphasis on excellence.[83] The effect was more pronounced in Russell Group universities, indicating competitive pressures amplified quality-oriented behaviors in top-tier institutions.[83] Bibliometric indicators further corroborate sustained quality gains post-REF implementation. UK research exhibits field-weighted citation impacts 26% above the global average for internationally co-authored papers, reflecting robust influence and reception in peer communities.[84] REF's periodic assessments have been linked to these trends by incentivizing investments in research environments, with over 80% of evaluated outputs in REF 2021 classified as world-leading or internationally excellent, up from prior RAE cycles when adjusted for methodological changes.[85] However, per-author productivity remained stable, implying that volume-driven expansions via hiring contributed to aggregate improvements without proportional efficiency gains.[83] Longer-term incentives from REF appear to foster diversification in research topics post-evaluation periods, potentially enhancing overall innovation and quality by reducing deadline-induced conservatism.[79] Analyses of 3.6 million UK publications reveal that outputs immediately preceding REF deadlines suffer from lower citations and publication in lower-impact journals, but this reverses afterward, supporting the
framework's role in elevating baseline standards through accountability and funding ties.[79] These patterns align with causal evidence from performance-based systems, where evaluation cycles correlate with heightened focus on rigorous, impactful work.[79]
Societal and Economic Contributions
The Research Excellence Framework (REF) has driven societal contributions by systematically evaluating and incentivizing research impacts beyond academia, as demonstrated through impact case studies (ICSs) submitted by UK higher education institutions. In REF 2021, institutions provided 6,781 ICSs, of which 6,361 were publicly analyzed, covering benefits in health, policy, environment, and culture across local, national, and international scales.[46] These cases, peer-assessed for quality, highlight multidisciplinary research—72% drawing from at least two fields—translating into tangible outcomes such as enhanced public health and community cohesion. For instance, the University of Oxford's development of the AstraZeneca COVID-19 vaccine, supported by REF-evaluated research, resulted in over 2.5 billion doses supplied globally by the end of 2021, contributing to pandemic mitigation efforts.[46] Similarly, the RECOVERY trial, informed by UK research excellence, saved approximately 650,000 lives worldwide by optimizing COVID-19 treatments.[46] Economic contributions stem from REF's role in allocating around £2 billion annually in public funding to high-performing institutions, prioritizing research with verifiable non-academic returns.[4] In REF 2021 analyses, 34% of ICSs referenced financial metrics or return on investment (ROI), with 58 cases quantifying ROI in detail, often exceeding national averages.[46] Business and entrepreneurship featured in 298 ICSs, fostering SME growth, innovation, and job retention. The University of Manchester's Urban Living Labs, for example, secured £26 million in infrastructure investment, doubling cycling rates and reducing van trips by 20,000 km annually in the region.[46] Another case, the Centre for Global Eco-Innovation, achieved a 5.5:1 ROI ratio, surpassing the UK average of 2.8:1 through technology transfer and commercialization.[46] Earlier REF cycles provide longitudinal evidence of sustained effects.
REF 2014's 6,975 ICSs evidenced societal benefits in 75% of cases (e.g., health advancements aiding millions) and economic impacts in 25%, including £4.7 billion in activity from University of Cambridge research alone.[86] These outcomes, reaching thousands of businesses and global beneficiaries, underscore REF's mechanism in channeling resources toward high-impact work, though self-reported elements in ICSs warrant panel validation to mitigate potential overstatement. Overall, REF has amplified UK research's role in sectors like health (e.g., 192 ICSs on cancer diagnostics) and environment (80 on net zero), with 46% of analyzed ICSs linked to UK Research and Innovation funding.[46]
| Key REF Impact Examples | Sector | Societal Benefit | Economic/Quantified Benefit |
|---|---|---|---|
| AstraZeneca Vaccine (REF 2021) | Health | Global pandemic response | 2.5 billion doses by end of 2021[46] |
| RECOVERY Trial (REF 2021) | Health | Treatment optimization | ~650,000 lives saved[46] |
| Urban Living Labs (REF 2021) | Urban Planning | Improved mobility | £26m investment; 20,000 km reduced trips[46] |
| Centre for Global Eco-Innovation (REF 2021) | Technology | Innovation transfer | 5.5:1 ROI[46] |
| Cambridge Research (REF 2014) | Economy | Industry advancements | £4.7bn activity generated[86] |