Full Fact
Full Fact is a British independent fact-checking charity headquartered in London. Founded in 2009 and publicly launched in 2010, it verifies claims made by politicians, media outlets, public institutions, and users of social media platforms, using empirical evidence, data analysis, and transparent methodologies to identify and correct misinformation.[1] The organization, which gained charitable status in 2014, operates with a cross-party board of trustees and employs over 40 staff across editorial, policy, AI development, and advocacy teams; these teams conduct manual and automated checks, follow up on corrections, and campaign for systemic improvements in information ecosystems.[1] Key achievements include developing AI tools that monitor millions of sentences daily for factual accuracy, deployed in more than 40 countries; partnerships such as one with Google.org to enhance global fact-checking capabilities; and targeted efforts during events like the COVID-19 pandemic to address health-related falsehoods.[1] While Full Fact emphasizes impartiality through diverse governance and non-partisan funding safeguards, it has faced accusations of selective scrutiny and underlying bias in its evaluations, akin to critiques leveled at other fact-checking bodies amid broader debates over institutional leanings in media verification.[2][3]
Origins and Development
Founding and Initial Launch
Full Fact was incorporated on 29 July 2009 as a private limited company under the name FACTCHECK, with its registered office in London.[4] The organization was established by a cross-party group of trustees drawn from political, journalistic, and academic backgrounds, aiming to improve standards in UK public debate through independent fact-checking of claims made by politicians, media, and others.[1] This founding reflected a response to growing concerns over misinformation and declining trust in public discourse, particularly amid events such as the 2009 parliamentary expenses scandal, though the trustees emphasized a non-partisan approach from inception.[5] The company name changed to FULL FACT LTD in March 2011, aligning with its operational identity.[6] Initial operations focused on manual verification of high-profile claims, with early trustees including figures such as Michael Samuel MBE, a co-founder who later served as chair.[7] Full Fact launched its public fact-checking activities in 2010, beginning with scrutiny of statements during that year's UK general election, becoming one of the first dedicated fact-checking organizations in the country.[1] This launch involved publishing corrections and analyses on a website, prioritizing transparency by detailing the verification methods and sources for each claim assessed.[1] By 2014, Full Fact achieved charitable status from the Charity Commission after earlier rejections of its application, formalizing its objectives to advance public education by promoting accurate information and accountability.[1] Early funding came from diverse philanthropic sources, avoiding reliance on government or political parties to maintain independence, though specifics on initial donors were not publicly detailed beyond general transparency commitments.[1] The organization's foundational methodology emphasized evidence-based rebuttals over opinion, setting it apart from contemporaneous partisan commentary outlets.[1]
Expansion and Key Milestones
Full Fact's operational expansion accelerated after its early years, with volunteer contributions exceeding 4,000 hours during the 2015 UK general election to bolster fact-checking capacity.[7] This period saw the organization build specialized teams in areas such as health and social care, exemplified by the 2016 appointment of Claire Milne to lead verification in those domains.[7] Financial growth underpinned these developments, as annual income rose from £627,000 in 2016 to £956,000 in 2017, driven by increased corporate and foundation support.[8] A pivotal shift occurred in 2018 with the launch of third-party fact-checking partnerships, including one with Facebook (now Meta), which integrated Full Fact's verifications into social media platforms to address circulating claims at scale.[7] In 2019, the creation of a dedicated AI team under Andrew Dudfield advanced automated tools for monitoring millions of sentences daily, enhancing efficiency in claim detection.[7] These technological investments coincided with further revenue expansion, culminating in £2,915,195 total income by 2024, funded partly by £443,482 from Google.org for AI initiatives and £353,475 from Meta for partnership programs.[8] Internationally, Full Fact pursued collaborative expansions rather than direct offices, partnering with Africa Check by 2020 to extend operations into South Africa, Nigeria, and Kenya, and with Chequeado in Argentina to combat regional misinformation.[9] Participation in the European Fact-Checking Standards Network further solidified its role in cross-border standards.[7] Organizational restructuring included the 2023 appointment of Chris Morris as Chief Executive, who directed coverage of high-profile events such as the 2024 UK general election and the 2024 US presidential election, amid team growth across editorial, operations, AI, public affairs, fundraising, and training functions.[7]
Organizational Framework
Governance and Leadership
Full Fact operates as a non-profit company limited by guarantee, incorporated on 29 July 2009 and registered as a charity (number 1158683) with the Charity Commission for England and Wales on 17 November 2014, following two prior rejections of its application due to concerns over exclusively charitable purposes.[10] The organization's governance is structured to prioritize independence and impartiality: a volunteer Board of Trustees holds ultimate responsibility for oversight of all activities, including strategic direction, financial management, and compliance with charitable objectives, while deliberately staying out of day-to-day operations and individual editorial decisions to safeguard fact-checking integrity.[7][5] The Board comprises ten trustees in a cross-party composition drawn from the UK's three main political parties (Conservative, Labour, and Liberal Democrat) alongside independent experts, a design intended to mitigate partisan influence and reflect diverse viewpoints.[7][11] Current trustees include Chairman Michael Samuel MBE, a co-founder with experience in business and philanthropy; Antonia Cox, a former Conservative policy adviser; Tim Gordon, ex-CEO of the Liberal Democrats; Baroness Janet Royall, former Labour Leader of the House of Lords; and specialists such as Professor Anand Menon in European policy and Dr. Claire Wardle in misinformation research.[7] Samuel announced his intention to step down as chair in September 2025 after an extended tenure.[12] Executive leadership is headed by Chief Executive Chris Morris, appointed in October 2023 following a career at the BBC, where he pioneered its Reality Check unit; Morris is responsible for overall strategy, external relations, and operational delivery.[13][7] He succeeded founder Will Moy, who led the organization from its 2010 launch until 2023.[14] The editorial team reports to Editor Steve Nowottny, who oversees verification processes, while Chief Operating Officer Laura Dewis manages internal strategy implementation and resource allocation.[7] This separation of governance from operations aligns with charity regulations and Full Fact's commitments under the International Fact-Checking Network's code of principles, which emphasize non-partisanship and transparency.[11]
Funding Sources and Financial Transparency
Full Fact, registered as a charity (no. 1158683) in England and Wales, derives its funding from diverse non-governmental sources, including individual donations, charitable trusts, grants from technology companies, and revenue from training services and interest.[8] The organization receives no government funding and emphasizes independence by prohibiting funders from exerting editorial influence, with a formal policy of refusing donations that could compromise neutrality.[8][15] In its most recent reported year (2024), total income reached £2,915,195, reflecting a strategy to broaden revenue streams beyond reliance on large tech grants.[8] Significant contributions come from technology firms, which have historically accounted for a substantial portion of funding; in 2022, approximately 40% of Full Fact's budget derived from Google and Meta (formerly Facebook).[16] In 2024, Google.org provided £443,482 specifically for AI-related fact-checking initiatives, while Meta contributed £353,475 for third-party fact-checking services.[8] Charitable trusts supplement these, with the Mohn Westlake Foundation granting £250,000 for core operations and the Nuffield Foundation £100,000.[8] Individual donations and gift aid, often in the form of small contributions, have grown notably, exceeding Google's 2021 funding by nearly 50% and forming a core unrestricted revenue stream estimated at £160,803 to £403,139 annually in recent years.[8][16] Additional income includes £31,934 from training sales and £21,741 in interest.[8] Financial transparency is maintained through mandatory public filings with the UK Charity Commission, where annual accounts and trustees' reports detail income breakdowns, expenditures, and assets.[8][17] Full Fact discloses major donors (typically those contributing over £5,000) on its website and adheres to International Fact-Checking Network standards requiring funding transparency to mitigate potential conflicts of interest.[8][11] This approach includes internal safeguards, such as segregated project funding and no quid pro quo for specific fact-checks, though critics have questioned whether dependence on tech giants affects impartiality in an era of platform-driven content moderation.[8]
Fact-Checking Methodology
Claim Selection Criteria
Full Fact selects claims for fact-checking based on their relevance to public debate and potential impact on public understanding or decision-making. The organization prioritizes statements made by politicians, media outlets, and other prominent figures that enter the public domain through broadcasts, newspapers, or online sharing.[18] Claims are chosen with an emphasis on those with the greatest potential to harm people's lives, such as misinformation influencing health choices, economic perceptions, or policy support.[18] Key selection factors include virality and prominence: claims that gain widespread traction online, appear repeatedly in high-visibility media, or are reiterated by political candidates receive higher priority. Full Fact aligns its focus with recurring public concerns tracked by the Ipsos MORI Issues Index, encompassing areas such as crime, immigration and law, education, health, social care, and the economy.[18] To maintain balance, the organization endeavors to cover claims from across the political spectrum and from multiple sides of debates, avoiding systematic favoritism toward any viewpoint. Public submissions are welcomed via tips, but not all can be addressed; submitters are encouraged to explain a claim's potential impact to aid prioritization.[18] Full Fact adheres to the International Fact-Checking Network's (IFCN) code of principles, which mandates transparency in explaining claim selection methodologies.[19]
Verification Procedures
Full Fact's verification procedures emphasize rigorous evidence-based assessment, prioritizing primary sources to evaluate claims for accuracy and potentially misleading elements. Researchers first dissect the claim, scrutinizing both its explicit content and implicit assumptions to uncover any distortions or omissions that could mislead the public.[18] Contact is made with the claimant to solicit the original source material and afford a right of reply, except where the claim's basis is immediately apparent from public records; this step aims to clarify intent and obtain any supporting data directly from the originator.[18][5] Verification then proceeds through compilation of evidence from publicly accessible materials, favoring unaltered primary documents such as official statistical datasets, raw data tables from government agencies, legal texts, and peer-reviewed studies over interpretive summaries like press releases or media reports. Assessments reference at least two such primary sources to substantiate conclusions, ensuring verifiability by independent readers.[18][5] For claims involving specialized knowledge, researchers consult independent experts, disclosing their identities and qualifications when their input is cited in the published verdict; this external validation helps address complexities beyond general public data, such as nuanced interpretations of scientific or economic metrics.[18] Internal safeguards include mandatory peer review by a second researcher prior to publication, with the executive director conducting additional oversight for claims touching on politically charged issues to mitigate bias risks.
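The pre-publication safeguards described above (at least two primary sources, review by a second researcher, and extra oversight for politically sensitive claims) can be modeled as a simple checklist. The sketch below is purely illustrative, not Full Fact's actual tooling; all class and field names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Assessment:
    """Hypothetical model of a draft fact-check awaiting publication."""
    claim: str
    primary_sources: list = field(default_factory=list)  # e.g. official datasets, legal texts
    peer_reviewed: bool = False          # second-researcher review completed?
    politically_sensitive: bool = False  # triggers additional senior oversight
    senior_review: bool = False

def ready_to_publish(a: Assessment) -> tuple[bool, list]:
    """Check the minimum safeguards; return (ok, list of unmet requirements)."""
    unmet = []
    if len(a.primary_sources) < 2:
        unmet.append("needs at least two primary sources")
    if not a.peer_reviewed:
        unmet.append("needs review by a second researcher")
    if a.politically_sensitive and not a.senior_review:
        unmet.append("politically sensitive claims need additional oversight")
    return (not unmet, unmet)

draft = Assessment(
    claim="Example statistical claim",
    primary_sources=["official statistics dataset", "Hansard record"],
    peer_reviewed=True,
)
ok, unmet = ready_to_publish(draft)  # passes: two sources, peer reviewed
```

A draft with only one source, or without a second-researcher review, would return the unmet requirements instead of a pass.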
Articles are formatted with upfront summaries of the claim, the verdict (e.g., correct, misleading, incorrect), and the key evidence, followed by contextual explanations and direct links to sources, enabling audiences to replicate the verification independently.[18][5] If discrepancies arise, Full Fact presses claimants to furnish supporting evidence or issue corrections, escalating to public campaigns for transparency where initial responses prove inadequate. Potential conflicts, including affiliations of staff, trustees, or funders with the subject matter, are explicitly disclosed to uphold procedural integrity.[18]
Correction and Follow-Up Protocols
Full Fact maintains a structured process for addressing feedback and correcting errors in its own fact-checks, emphasizing transparency and adherence to the European Fact-Checking Standards Network's Code of Standards. Feedback is submitted via a contact form and undergoes dual review: an initial assessment within days (no more than two weeks), followed by evaluation by a senior team member, with all submissions logged in a database for tracking. While individual responses are not guaranteed owing to volume, identified errors prompt updates to articles, clearly marked as changes; major revisions are publicized on the original dissemination channels, and serious errors carry explanatory notes. An overview of corrections is available on a dedicated page, ensuring public accountability.[20] In handling complaints about its work, Full Fact offers an internal review process, with the option for trustees to appoint an independent reviewer if needed; unresolved issues can be escalated to the International Fact-Checking Network (IFCN) complaints portal under the European Code. This protocol aligns with broader commitments to non-partisanship and corrections, as verified through IFCN assessments. Feedback outcomes are reported to trustees, balancing responsiveness with editorial independence.[20] For external claims, Full Fact's follow-up protocols involve promptly contacting originators, such as politicians, media outlets, or public figures, to request corrections, prioritizing quick action and prominent notification of affected audiences. Specific guidance includes editing social media posts, adding comments, notifying broadcasters or editors, or submitting formal corrections to parliamentary records such as Hansard. In 2020, the organization requested approximately 60 such corrections, demonstrating routine application of this approach.
Success varies, with examples including media outlets such as the Daily Mail issuing corrections on immigration claims and politicians such as Alastair Campbell retracting statements on asylum hotels following Full Fact's interventions.[21][22][23][24] These protocols extend to monitoring outcomes, such as whether misleading claims are retracted or cease circulating, and to advocating systemic improvements, such as extending parliamentary correction mechanisms to all MPs rather than ministers alone. Full Fact's methodology does not mandate tracking every correction but focuses on impactful follow-ups to reduce the persistence of misinformation, informed by evidence that originator-led corrections are most effective in shifting beliefs. Where requests fail, it may escalate through public fact-check publications or campaigns for adherence to codes such as the Ministerial Code or the Broadcasting Code.[25][26][22]
Framework for Information Incidents
The Framework for Information Incidents, developed by Full Fact since 2020, serves as a voluntary tool to enable collaboration among counter-misinformation actors, including technology companies, governments, civil society organizations, official information providers, and media outlets, in identifying and addressing clusters of inaccurate or misleading claims that proliferate around specific events or topics.[27] It defines an "information incident" as a proliferation of such claims or narratives capable of influencing public behavior or perceptions, emphasizing coordinated responses over isolated efforts to mitigate potential harms like eroded trust or incited actions.[28] The framework introduces a structured severity scale and response guidelines to standardize assessments, drawing on empirical indicators such as content velocity, engagement metrics, and the types of accounts involved in dissemination.[27] Central to the framework is a five-level severity scale, calibrated by factors including the scale of spread, the potential for harm, and the resources demanded for response:
- Level 1 (Business as Usual): Characterized by baseline levels of low-volume misinformation; actors focus on proactive resilience-building, such as routine monitoring and capacity enhancement, without escalated interventions.[27]
- Level 2 (Emerging Incident): Involves detectable increases in misleading content velocity or engagement; responses prioritize monitoring, light-touch preparations such as internal alerts, and targeted fact-checks to prevent escalation. An example is the 5G conspiracy theories that began circulating in the UK in 2019, linking the technology to health risks or surveillance; these later escalated to arson attacks on masts but remained containable through localized debunking.[27]
- Level 3 (Incident Occurring): Features rapid proliferation affecting moderate audiences; actors implement swift, coordinated measures such as amplified debunking, platform labeling, or cross-sector alerts. The 2019 Notre Dame Cathedral fire in France exemplified this, with false narratives about arson or insurance fraud spreading amid chaos, necessitating quick verifications by officials and media.[27]
- Level 4 (Severe Incident): Marked by widespread harm potential and high engagement; demands intensified coordination, including resource surges, policy adjustments, and multi-stakeholder task forces. The 2021 Afghan refugee crisis in Turkey illustrated this, where anti-migrant rumors fueled pogroms, requiring joint efforts in content moderation and public communications.[27]
- Level 5 (High-Impact Incident): Rare crises with systemic threats, such as the 2020 COVID-19 pandemic, which involved global-scale disinformation on vaccines and origins; responses entail maximum collaboration, including emergency protocols and long-term strategy shifts across sectors.[27]
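The five-level scale above can be sketched as a simple classifier over the framework's indicators. Note that the numeric thresholds below are hypothetical illustrations: the framework describes its levels qualitatively, not with fixed cut-offs, and the indicator names are assumptions for the sketch.

```python
from enum import IntEnum

class Severity(IntEnum):
    """The framework's five levels, from routine monitoring to systemic crisis."""
    BUSINESS_AS_USUAL = 1
    EMERGING_INCIDENT = 2
    INCIDENT_OCCURRING = 3
    SEVERE_INCIDENT = 4
    HIGH_IMPACT_INCIDENT = 5

def assess_severity(velocity: float, reach: float, harm_potential: float) -> Severity:
    """Map indicators (each normalised to 0-1) onto a severity level.

    The worst single indicator drives the level, reflecting that e.g. a
    low-reach but high-harm narrative still warrants escalated response.
    Thresholds are invented for illustration only.
    """
    score = max(velocity, reach, harm_potential)
    if score < 0.2:
        return Severity.BUSINESS_AS_USUAL
    if score < 0.4:
        return Severity.EMERGING_INCIDENT
    if score < 0.6:
        return Severity.INCIDENT_OCCURRING
    if score < 0.8:
        return Severity.SEVERE_INCIDENT
    return Severity.HIGH_IMPACT_INCIDENT

level = assess_severity(velocity=0.3, reach=0.1, harm_potential=0.1)
# a modest uptick in content velocity -> Emerging Incident (Level 2)
```

In practice the framework leaves the judgment to the collaborating actors; a sketch like this only shows how shared indicators could feed a common vocabulary of levels.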