
Community Notes

Community Notes is a crowdsourced feature on the social media platform X that enables eligible contributors to propose contextual notes providing factual clarifications or additional information to potentially misleading posts, with notes surfaced algorithmically based on agreement from users representing diverse political viewpoints. Originally developed by Twitter as Birdwatch and piloted in 2021 as a community-driven approach to counter misinformation through user-submitted notes, the system was rebranded and expanded platform-wide as Community Notes following Elon Musk's acquisition of the company in 2022, emphasizing a decentralized alternative to centralized fact-checking. The platform's "bridging-based" algorithm prioritizes notes endorsed across ideological divides, aiming to mitigate echo chambers and promote consensus-driven corrections over top-down moderation, which has distinguished it from traditional institutional fact-checking often critiqued for partisan skew. Empirical analyses have demonstrated its efficacy in curbing misinformation spread, including reduced virality of false claims on topics like vaccines and elections, increased user trust in attached fact-checks compared to anonymous or elite sources, and higher rates of self-correction among posters when notes appear promptly. Despite these successes, the system has encountered challenges, such as delays in note deployment during fast-moving events, underrepresentation of non-English languages and regions like South Asia, and criticisms from advocacy groups alleging insufficient intervention against certain disinformation narratives, though academic evaluations generally affirm its accuracy and neutrality over alternatives prone to institutional biases.

Development and History

Origins as Birdwatch on Twitter

Birdwatch originated as an experimental pilot program launched by Twitter on January 25, 2021, aimed at addressing misinformation through user-generated contextual notes attached to potentially misleading tweets. The initiative allowed a limited cohort of invited participants in the United States, termed "Birdwatchers," to identify tweets containing disputed information and draft notes providing factual context or clarifications, with visibility determined by algorithmic assessments of note helpfulness across diverse contributor viewpoints. This crowdsourced model was positioned by Twitter as a scalable alternative to centralized fact-checking, inspired by collaborative platforms like Wikipedia, amid heightened scrutiny over the platform's handling of election-related claims in 2020. Under then-CEO Jack Dorsey, Birdwatch represented Twitter's strategic pivot toward decentralized moderation, reflecting internal recognition that top-down content controls had proven insufficient for managing the output of the platform's 192 million daily active users. Eligibility for participation required users to have a verified phone number, no recent violations of Twitter's rules, and an account in good standing, ensuring initial contributors numbered in the low thousands. The program's core mechanism involved a "bridging-based" rating system, where notes achieving broad agreement—particularly across users with differing political alignments—were prioritized for display, an approach intended to mitigate echo-chamber effects but later critiqued for potential underrepresentation of minority viewpoints. Early implementation focused on English-language tweets in the U.S., with Twitter emphasizing transparency by publishing datasets of note contributions and ratings to allow external scrutiny of the system's performance. Despite its innovative intent, Birdwatch faced immediate questions regarding contributor demographics and ideological balance, as initial participants skewed toward established users rather than a random cross-section, raising concerns about systemic biases in note generation.

Acquisition and Relaunch on X

Following Elon Musk's acquisition of Twitter for $44 billion on October 27, 2022, the platform's existing Birdwatch feature—initially piloted in limited U.S. markets since January 2021—was rapidly reoriented toward broader implementation as a core moderation tool. Musk, who had previously expressed support for the concept's potential despite critiquing Twitter's prior centralized moderation, prioritized its expansion to counter what he described as ineffective legacy systems. On November 5, 2022, Musk announced the renaming of Birdwatch to Community Notes, stating it held "incredible potential for improving information accuracy" through crowd-sourced contributions rather than top-down interventions. This relaunch involved algorithmic refinements to the "bridging" system, which selects notes based on agreement across diverse contributor perspectives, and an initial push to increase eligible participants beyond the restrictive U.S.-only, waitlist-based access of the pre-acquisition era. By early 2023, Community Notes achieved platform-wide visibility, attaching to millions of posts monthly and demonstrating higher rating volumes than under Birdwatch. The transition emphasized empirical validation over institutional gatekeeping, with Musk publicly endorsing notes that corrected narratives he viewed as biased. However, early implementation faced delays in note deployment speeds, attributed to scaling the volunteer contributor base from thousands to over 100,000 by late 2023, amid X's reduced trust-and-safety staff following post-acquisition layoffs. This shift marked a departure from Twitter's pre-Musk reliance on contracted fact-checkers, favoring a decentralized model verifiable through public contributor ratings and algorithmic transparency reports.

Global Expansion and External Adoptions

Community Notes, initially limited to English-language users in the United States following its pilot phase, expanded to all U.S. users on October 6, 2022, and became available globally on December 11, 2022, enabling contributions and displays in multiple languages beyond English. By May 2024, the feature operated in 70 countries, with ongoing additions of languages and topics to broaden coverage, though growth has varied by region; for instance, contributor submissions in English doubled between 2023 and 2024. Despite this, implementation remains uneven, with minimal notes in South Asian languages—only 1,737 out of 1.85 million public notes as of mid-2025—highlighting linguistic imbalances in non-Western regions. External adoptions of the Community Notes model have emerged primarily through Meta's platforms. In March 2025, Meta began testing a crowdsourced notes system modeled on X's approach across Facebook, Instagram, and Threads in the United States, initially supporting languages including Spanish, Chinese, Vietnamese, French, and Portuguese, with plans for further expansion. This replaced Meta's prior third-party fact-checking program, shifting to user-driven contributions rated by diverse participants to determine visibility, though early critiques noted potential limitations in handling visual content compared to X's text-focused origins. By September 2025, Meta had fully rolled out the feature, emphasizing its role in adding context to potentially misleading posts without centralized fact-checking. Beyond Meta, adoption by other major platforms remained limited as of late 2025, though the model's influence has prompted discussions about the scalability of crowdsourced moderation.

Operational Mechanics

Contribution and Proposal Process

Users eligible to participate in Community Notes must possess an X account that is at least six months old, a verified phone number from a trusted carrier, and no recent violations of platform rules; applications to join are reviewed on a rolling basis via the official signup process. Once approved as a contributor, participants initially rate existing proposed notes to build a track record, with the ability to propose new notes unlocked only after achieving a Rating Impact score of at least five, demonstrating consistent helpful ratings across diverse viewpoints. To propose a note, a qualified contributor navigates to the target post on X, selects the three-dot menu in the top right, and chooses "Write a Community Note," which directs them to the Community Notes platform. There, they answer multiple-choice questions specifying the aspects of the post warranting additional context, such as claims of misleading information, lack of important details, or other substantive issues, followed by drafting a concise explanatory note limited to essential facts. Proposals must include hyperlinks to supporting sources, prioritizing verifiable evidence over opinion, and are subject to daily or per-author limits based on the contributor's prior Writing Impact and success rate—for instance, low-impact contributors may be restricted to one note per author per week if their helpful-rating hit rate falls below certain thresholds. Contributors can edit or delete proposals at any time before final rating via the note's menu options. Effective proposals adhere to guidelines emphasizing brevity, clarity, and neutrality: notes should use short sentences, avoid advocacy or unsubstantiated claims, and focus solely on adding factual context without altering the original post's meaning. Notes asserting a post is not misleading provide supplementary information but are less likely to display publicly unless rated highly helpful, whereas those addressing misleading elements require demonstration of broad agreement across ideological lines for visibility. Upon submission, proposals enter the rating queue for evaluation by other contributors, contributing to the proposer's Writing Impact score regardless of outcome, which influences future proposal allowances—higher scores enable more frequent contributions, with limits scaling up to approximately 200 times the contributor's hit rate for established users.
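
The gating described above—eligibility checks, a rate-first apprenticeship, and guideline requirements for proposals—can be summarized in a short sketch. The Python below is illustrative only: the class and field names are hypothetical, and apart from the documented Rating Impact threshold of five and the requirement to cite sources, specific values such as the character cap are assumptions rather than X's actual implementation.

```python
from dataclasses import dataclass

RATING_IMPACT_TO_UNLOCK_WRITING = 5  # documented threshold for unlocking note writing

@dataclass
class Contributor:
    account_age_months: int
    has_verified_phone: bool
    recent_rule_violation: bool
    rating_impact: int            # built up by helpfully rating others' notes

@dataclass
class NoteProposal:
    text: str
    source_urls: list             # proposals are expected to cite supporting sources
    reasons: list                 # multiple-choice reasons, e.g. "missing important context"

def can_write_notes(c: Contributor) -> bool:
    """Eligibility plus the rate-first unlock condition."""
    eligible = (c.account_age_months >= 6
                and c.has_verified_phone
                and not c.recent_rule_violation)
    return eligible and c.rating_impact >= RATING_IMPACT_TO_UNLOCK_WRITING

def passes_basic_guidelines(p: NoteProposal, max_chars: int = 560) -> bool:
    """Hypothetical guideline check: cite at least one source and stay concise.
    The 560-character cap is illustrative, not X's actual limit."""
    return bool(p.source_urls) and 0 < len(p.text) <= max_chars
```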

Bridging Algorithm and Note Selection

The bridging algorithm in Community Notes employs a matrix factorization model to evaluate proposed notes based on ratings from contributors, aiming to identify content that garners agreement across diverse ideological perspectives rather than relying on raw majority approval. This approach decomposes user ratings into latent factors: each rater is assigned a "friendliness" value reflecting their general tendency to rate notes positively, and both raters and notes receive a "polarity" score capturing ideological alignment. The predicted rating for a note j by user i is modeled as \hat{y}_{ij} = \mu + b_i + h_j + p_i \cdot q_j, where \mu is a global baseline, b_i is user friendliness, h_j is note helpfulness, and p_i, q_j are polarities; parameters are optimized via regularized gradient descent to minimize prediction error across all ratings. By isolating the helpfulness term h_j, the algorithm discounts ratings attributable to echo chambers, elevating notes where positivity persists despite polarity differences—effectively requiring "bridging" support from opposing viewer clusters inferred from cross-note voting patterns. Note selection proceeds once a note accumulates sufficient ratings, typically from at least eight eligible contributors spanning multiple "perspective bridges" to ensure robustness against narrow consensus. The system computes an overall score for each note by weighting ratings according to the inferred reliability of raters' judgments on similar topics, derived from their historical accuracy in bridging contexts; notes exceeding a helpfulness threshold (e.g., h_j \geq 0.4) and meeting minimum rating volume are candidates for display. This probabilistic ranking favors notes resilient to partisan rating patterns, as evidenced by simulations showing suppression of ideologically slanted content lacking broad endorsement, while promoting fact-based corrections applicable to varied audiences. Multiple candidate notes on the same post are ranked similarly, with the highest-scoring one surfaced if it qualifies, preventing dominance by any single viewpoint. The algorithm's open-source implementation, hosted on GitHub since its public release, allows verification and experimentation, with the publicly released scoring code emphasizing scalability across millions of ratings. Empirical tuning has refined it to handle real-world polarization, where traditional upvote/downvote systems falter by amplifying tribal signals; instead, bridging exploits the signal that even polarized users rate factual notes similarly when stripped of affective priors. Critics note potential vulnerabilities to coordinated manipulation if rater demographics skew heavily, though diversification requirements (e.g., via geographic and activity-based eligibility) mitigate this by enforcing exposure to heterogeneous rating pools. As of 2025, ongoing iterations incorporate multilingual adaptations, maintaining core principles while adjusting for language-specific polarity inference.
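
The matrix-factorization idea can be illustrated with a minimal sketch that fits a global baseline, per-rater and per-note intercepts, and one latent polarity factor per rater and note via stochastic gradient descent on binary helpful/not-helpful ratings, then ranks notes by the fitted intercept h_j. This is a simplified reconstruction of the bridging principle, not X's production scorer: the learning rate, regularization, single-dimensional polarity factors, and the reuse of the 0.4 threshold here are assumptions.

```python
import numpy as np

def fit_bridging_model(ratings, n_raters, n_notes,
                       epochs=200, lr=0.05, reg=0.03, seed=0):
    """ratings: list of (rater_id, note_id, y) with y in {0.0, 1.0}
    (1.0 = rated helpful). Returns (mu, rater_intercepts, note_intercepts,
    rater_factors, note_factors); the note intercept plays the role of h_j."""
    rng = np.random.default_rng(seed)
    mu = 0.0
    b = np.zeros(n_raters)            # rater "friendliness" intercepts
    h = np.zeros(n_notes)             # note helpfulness intercepts
    p = rng.normal(0, 0.1, n_raters)  # rater polarity factors
    q = rng.normal(0, 0.1, n_notes)   # note polarity factors
    for _ in range(epochs):
        for i, j, y in ratings:
            pred = mu + b[i] + h[j] + p[i] * q[j]
            err = y - pred
            mu += lr * err
            b[i] += lr * (err - reg * b[i])
            h[j] += lr * (err - reg * h[j])
            # update both polarity factors using their pre-update values
            p[i], q[j] = (p[i] + lr * (err * q[j] - reg * p[i]),
                          q[j] + lr * (err * p[i] - reg * q[j]))
    return mu, b, h, p, q

def helpful_notes(h, threshold=0.4):
    """Notes whose fitted intercept clears the (illustrative) helpfulness bar."""
    return [j for j, hj in enumerate(h) if hj >= threshold]
```

On synthetic data with two polarized rater clusters, a note that both clusters rate positively ends up with a large intercept h_j, while a note endorsed by only one cluster has its support absorbed into the polarity product p_i · q_j and falls below the display threshold.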

Contributor Eligibility and Incentives

Community Notes contributors must meet specific eligibility criteria to participate, primarily to minimize disruptions from bad actors and ensure basic account reliability. Accounts require a minimum age of six months and must have no recent notices of violations of X's rules, with "recent" generally interpreted as violations since January 2023. Users can sign up directly via the Community Notes website if eligible, after which they gain access to rating existing notes but must demonstrate consistent helpful ratings to unlock note-writing privileges. To propose notes, contributors face quantitative thresholds tied to performance metrics, ensuring only those with a track record of accuracy can contribute substantially. For instance, contributors with negative helpfulness scores are limited to one active note proposal at a time, while others may propose based on formulas such as their writing impact score plus five or hit rate multiplied by 200, whichever is higher. Helpfulness scores, calculated from peer ratings of past contributions, amplify the influence of high-performing users in the system's "bridging" algorithm, which prioritizes agreement across diverse ideological viewpoints to select notes for display. This scoring mechanism fosters accountability, as poor performance reduces future input limits and visibility. Incentives for participation are primarily non-monetary, emphasizing intrinsic motivations like enhancing platform accuracy and earning reputational weight within the system. Contributors receive no direct compensation, and the program relies on voluntary engagement to crowdsource context for misleading posts. Instead, the primary incentive is amplified influence via elevated helpfulness scores, which allow seasoned contributors to shape more notes effectively, alongside the broader goal of promoting diverse perspectives in content moderation. As of April 2025, the program had attracted over one million contributors across 200 countries, suggesting sustained appeal through these reputational and communal benefits rather than extrinsic rewards.
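
Read literally, the limits described above reduce to a simple formula. The sketch below encodes that reading—negative-score contributors capped at one active proposal, others allowed the larger of writing impact plus five or hit rate times 200—with variable names and edge-case handling chosen for illustration rather than taken from X's code.

```python
def max_open_proposals(writing_impact: int, hit_rate: float,
                       helpfulness_score: float) -> int:
    """Illustrative reading of the stated proposal limits.

    hit_rate is the fraction of a contributor's past notes rated helpful (0.0-1.0).
    """
    if helpfulness_score < 0:
        return 1  # negative-score contributors: one active proposal at a time
    return max(writing_impact + 5, int(hit_rate * 200))

# Example: writing impact 12 and a 20% hit rate allow
# max(12 + 5, 0.20 * 200) = max(17, 40) = 40 open proposals.
print(max_open_proposals(writing_impact=12, hit_rate=0.20, helpfulness_score=1.5))
```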

Deployment and Integration

Display and Visibility Rules

Community Notes are displayed beneath original posts on X when the attached note achieves a "helpful" status through the platform's scoring algorithm, which prioritizes consensus across diverse contributor perspectives to mitigate ideological bias. This status is determined by aggregating ratings from contributors whose past voting patterns indicate differing viewpoints, ensuring that only notes bridging divides—rather than those appealing solely to one side—meet the visibility threshold. The algorithm weighs votes accordingly, requiring a sufficient volume of helpful ratings from this diverse pool to elevate a note from a pending "needs more ratings" state to publicly visible on the post. Unlike traditional moderation labels that may deprioritize or hide content, Community Notes add context without altering the post's visibility, reach, or algorithmic ranking unless the post independently violates X's rules. Notes become eligible for display once they surpass an internal threshold of weighted helpful votes, though exact numerical criteria remain implementation details of the open-source scoring algorithm rather than fixed public quotas. This approach aims to surface substantively corrective context while withholding polarizing or low-consensus proposals, with notes on media or links potentially propagating to posts containing matching images, videos, or links across the platform. Visibility is universal for qualifying notes on supported posts, which were rolled out to all U.S. users in October 2022 before expanding globally in December 2022, though regional availability and language support continue to evolve. Users encounter notes directly under the relevant post when viewing it on X's web, iOS, or Android apps, with the note's text, citations, and anonymized contributor alias presented in a collapsible format for readability. If a note later accumulates sufficient unhelpful ratings or fails ongoing reviews, it may be demoted or removed from display, triggering re-evaluation by the system.
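
The status transitions described above can be pictured as a small mapping from a note's scored helpfulness and rating volume to its display state. The thresholds and status names below approximate the public scoring documentation (helpful at roughly 0.40, a minimum rating volume, and a lower bound below which a note is marked not helpful), but the exact cutoffs used here are assumptions for illustration.

```python
CRH_THRESHOLD = 0.40    # approximate intercept needed for "Currently Rated Helpful"
CRNH_THRESHOLD = -0.05  # assumed bound below which a note is treated as not helpful
MIN_RATINGS = 5         # assumed minimum rating volume before any status is assigned

def note_status(helpfulness_intercept: float, num_ratings: int) -> str:
    """Map a scored note to the display status it would receive (illustrative)."""
    if num_ratings < MIN_RATINGS:
        return "NEEDS_MORE_RATINGS"           # not shown publicly yet
    if helpfulness_intercept >= CRH_THRESHOLD:
        return "CURRENTLY_RATED_HELPFUL"      # note is displayed under the post
    if helpfulness_intercept <= CRNH_THRESHOLD:
        return "CURRENTLY_RATED_NOT_HELPFUL"  # hidden; visible only to contributors
    return "NEEDS_MORE_RATINGS"

# A displayed note can drop back to NEEDS_MORE_RATINGS (and off the post) if
# later ratings pull its intercept below the helpful threshold.
```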

Multilingual Support and Platform Adaptations

Community Notes on X supports multiple languages, though implementation varies significantly by linguistic region. The system initially focused on English, with expansion to other languages yielding substantial note volumes: approximately 1.12 million notes in English and roughly 165,000 in Japanese, as analyzed in early 2025. Efforts to broaden coverage include onboarding contributors in Arabic-speaking countries such as Algeria, Bahrain, and Egypt, among others, starting November 22, 2023, enabling note proposals in Arabic. However, non-Western languages face systemic challenges in adoption and visibility. In South Asian languages including Hindi and Urdu, public notes constitute only 1,737 out of 1.85 million total notes, or 0.094%, due to limited contributor pools and bridging-algorithm hurdles requiring diverse rater agreement. Less than 40% of drafted notes in these languages achieve public display, compared to over 65% for English drafts, exacerbating misinformation exposure in high-volume regions. These disparities stem from the algorithm's emphasis on cross-ideological agreement, which performs unevenly without balanced multilingual contributor bases. The Community Notes model has influenced adaptations on other platforms, promoting crowdsourced context over centralized fact-checking. Meta initiated testing of its version on Facebook, Instagram, and Threads in the United States on March 18, 2025, with broader rollout by September 2025, allowing users to propose and rate contextual additions to misleading posts while retaining algorithmic selection for display. This shift followed Meta's January 7, 2025, decision to end third-party fact-checking partnerships, prioritizing user-driven notes to address perceived biases in traditional moderation. Similarly, TikTok introduced a comparable feature in September 2025, enabling users to add and evaluate contextual notes on feed content, aligning with industry trends toward decentralized moderation systems. These implementations adapt X's bridging mechanism but tailor it to platform-specific algorithms and user interfaces, such as the focus on visual content integration on image- and video-centric platforms.

Interaction with User Behavior and Moderation

Community Notes on X influence user behavior by attaching contextual annotations to potentially misleading posts, which empirical studies show reduces overall engagement with and diffusion of false information. An analysis of over 200,000 posts found that those with attached Notes received significantly lower rates of likes, reposts, and replies compared to similar unnoted content, thereby curbing virality. Similarly, research published in PNAS demonstrated that Notes moderate interactions more effectively when posts reach audiences via reposts, altering diffusion patterns and limiting the spread of misinformation across networks. This mechanism encourages behavioral adjustments among users, including greater skepticism toward flagged content and increased likelihood of retracting or correcting false claims. A study from the University of Illinois's Gies College of Business involving experimental exposure to noted posts revealed that participants were more prone to revise misleading statements they had shared, attributing this to the crowd-sourced credibility of the Notes system. Additionally, exposure to Community Notes has been linked to heightened trust in fact-checking processes, with a nationwide survey of 1,810 U.S. adults showing improved perceptions of annotation reliability over traditional top-down corrections. In terms of platform moderation, Community Notes function as a decentralized, user-driven complement to centralized enforcement, prioritizing contextual addition over content removal to foster open discourse. This approach aligns with X's post-acquisition moderation shifts, emphasizing community input to mitigate biases inherent in institutional fact-checking bodies. By rating proposed Notes for helpfulness, users indirectly shape outcomes, with the system's bridging algorithm selecting annotations that achieve cross-ideological consensus, thus reducing the platform's reliance on opaque algorithmic or human-led decisions. However, challenges persist, as declining Note publication rates in high-volume scenarios can limit real-time efficacy against rapidly spreading falsehoods.

Empirical Evidence of Effectiveness

Internal Metrics from X

X evaluates the quality of Community Notes through three primary internal metrics: accuracy, informativeness, and helpfulness. Accuracy is assessed by having professional reviewers rate published notes for factual correctness, with ongoing monitoring to detect any declines that could trigger guardrails. Informativeness is tested via user experiments comparing comprehension of original posts against versions with attached notes. Helpfulness is measured through surveys eliciting user feedback on whether notes provide substantive context, weighted toward responses from diverse ideological perspectives to ensure broad applicability. These metrics underpin operational safeguards, including alerts for suboptimal performance and circuit breakers that temporarily halt note visibility to non-contributors if severe issues arise, such as sustained low accuracy or helpfulness scores. No public activations of these circuit breakers have occurred, indicating stable internal performance as of the latest evaluations. The evaluation framework draws from earlier Birdwatch prototypes, emphasizing cross-viewpoint agreement via the bridging algorithm to filter notes. Scale metrics reflect growth in participation: as of May 2024, over 500,000 contributors operated in 70 countries, expanding to more than 1 million contributors by May 2025. X publicly releases anonymized datasets of notes, ratings, and statuses to enable external verification, though internal impact analytics on metrics like engagement reduction remain unpublished.
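
Because the notes, ratings, and status-history datasets are published as tab-separated files, outside researchers can recompute headline figures such as the share of notes currently rated helpful. The pandas sketch below assumes file and column names similar to recent public releases (e.g. notes-00000.tsv, noteStatusHistory-00000.tsv, a shared noteId key, and a currentStatus column); actual names may differ between data drops.

```python
import pandas as pd

# File and column names follow recent public data releases approximately;
# adjust to whatever the current download page provides.
notes = pd.read_csv("notes-00000.tsv", sep="\t")
status = pd.read_csv("noteStatusHistory-00000.tsv", sep="\t")

merged = notes.merge(status[["noteId", "currentStatus"]], on="noteId", how="left")

total = len(merged)
helpful = (merged["currentStatus"] == "CURRENTLY_RATED_HELPFUL").sum()
print(f"{helpful} of {total} notes ({helpful / total:.1%}) currently rated helpful")
```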

Independent Academic Studies

A 2024 study published in PNAS analyzed 40,078 posts on X from March to June 2023 using synthetic control methods, finding that Community Notes reduced reposts by 46.1%, likes by 44.1%, replies by 21.9%, and views by 13.5% in the 48 hours following attachment, with lifetime reductions of 11.6% for reposts and similar magnitudes for other metrics; the notes also curtailed the depth of repost cascades without affecting their breadth. However, a 2024 study in Proceedings of the ACM on Human-Computer Interaction employing difference-in-differences and regression discontinuity designs on data from Community Notes' rollout since 2021 found no significant reduction in engagement (retweets or likes) with misleading tweets, attributing the limited impact to delays in note attachment that fail to interrupt early spread. Research on note quality has yielded positive assessments in specific domains. A 2024 JAMA study of 657 Community Notes addressing COVID-19 vaccination misinformation from December 2022 to 2023 rated 97% as entirely accurate and 93% as drawing from high- or moderate-credibility sources, with predominant topics including adverse events (51%) and conspiracy theories (37%); inter-annotator agreement was high (κ=0.90 for accuracy). Similarly, a 2025 analysis of 41,128 notes from October 2022 to June 2023 showed that references to external sources increased helpfulness odds by 2.33 times, while high-bias sources reduced ratings by 10.16% relative to low-bias ones, and the system penalized both left- and right-leaning biases, favoring citations of third-party fact-checkers. Experimental evidence supports increased user trust in Community Notes. A pre-registered 2024 survey with 1,810 U.S. participants exposed to misleading posts found higher trust in crowd-sourced notes compared to professional fact-checks or platform labels, particularly among those valuing community input. These findings indicate that while Community Notes demonstrate accuracy and bias-mitigating mechanisms in controlled evaluations, their platform-wide impact on engagement remains contested, with methodological differences—such as observational versus causal designs—explaining divergent outcomes across studies.

Comparative Analyses with Traditional Fact-Checking

Community Notes differs from traditional fact-checking organizations, such as PolitiFact or Snopes, primarily in its decentralized, crowdsourced methodology versus centralized expert-driven processes. Traditional fact-checkers rely on professional journalists or specialists who select claims for verification, often using subjective rating scales like PolitiFact's Truth-O-Meter, which evaluates statements on a spectrum from "True" to "Pants on Fire." In contrast, Community Notes enables any eligible X user to propose contextual notes on posts, with selection determined by a "bridging" algorithm that prioritizes notes achieving agreement across raters from diverse political viewpoints, aiming to surface broadly consensus-driven corrections without top-down editorial control. This approach fosters scalability through user participation but remains reactive, attaching notes only after a post gains traction and sufficient ratings, whereas traditional fact-checkers can proactively target viral claims off-platform. Empirical analyses reveal interdependence rather than outright replacement, as Community Notes frequently incorporates outputs from professional fact-checkers. A large-scale annotation of notes found they cite fact-checking sources up to five times more than prior estimates, with notes addressing broader misinformation narratives twice as likely to reference such organizations compared to other evidence types. Overlap in targets between Community Notes and Snopes is rare, but when it occurs, assessments align highly; Notes contributors, however, prioritize posts from larger, more influential accounts and focus disproportionately on content from ideologically opposed users, and the two differ in how quickly they respond. Traditional models, limited by editorial resources, cover fewer claims at scale, whereas Notes leverages crowd input for broader platform-specific reach, though both exhibit selection biases—traditional fact-checking toward high-profile political statements and Notes toward contentious, user-flagged content. On the question of bias, traditional fact-checkers face criticism for systemic imbalances, with studies documenting disproportionate scrutiny of conservative claims—for instance, one analysis of PolitiFact found roughly 75% of "false" ratings applied to Republican statements versus 25% for Democrats, potentially reflecting selection or evaluative skew. Community Notes mitigates this through its cross-ideological rating requirement, yielding more balanced coverage; pre-2022 data showed contributor demographics leaning liberal (50% versus 24% conservative), but post-acquisition shifts narrowed the ideological gap to 8%, enhancing perceived neutrality. Neither approach eliminates bias entirely—traditional fact-checking via institutional leanings and Notes via participant self-selection—but the bridging mechanism in Notes reduces echo-chamber effects more effectively than expert-only judgments. Regarding effectiveness, experimental evidence indicates Community Notes outperforms simpler fact-check formats in building user trust and correcting perceptions. In a survey of 1,347 U.S. participants exposed to misleading and non-misleading posts, Notes increased trustworthiness ratings by 4.8 to 8.2 percentage points over basic expert or community flags, with stronger effects among Biden supporters, and improved identification of misleading posts by 9.6 points versus 7.1 for expert flags. Traditional fact-checks enhance factual knowledge but fail to curb sharing or alter entrenched beliefs, while Notes reduce retweet likelihood by up to 50% and increase post deletion probability by 80% when applied. However, aggregate platform analyses show mixed engagement impacts, with one study finding no overall reduction in misleading tweet interactions after the rollout, suggesting Notes' influence is context-dependent and strongest on corrected instances. Scalability favors Notes for rapid, platform-embedded corrections, though its reliance on professional sources underscores hybrid potential rather than pure replacement of expert fact-checking.

Criticisms and Limitations

Declining Usage and Publication Challenges

In 2025, participation in Community Notes on X experienced a marked decline, with submission volumes plummeting compared to prior years. Data analyzed by NBC News indicated that the feature's usage dropped sharply, contributing to concerns over its viability as a primary misinformation countermeasure on the platform. This downturn followed a period of growth in contributor numbers through 2024, when over 126,000 individuals submitted notes, but momentum failed to carry into the following year amid broader platform traffic reductions from about 4.7 billion visits in January 2025 to 4.4 billion by May 2025. Publication rates for submitted notes have similarly contracted, with more than 90 percent of contributions across languages remaining unpublished as of mid-2025. For English-language notes specifically, the rate of notes achieving "helpful" status—required for display—fell from 9.5 percent in 2023 to 4.9 percent in early 2025, despite an increase in overall submissions totaling 1.76 million notes from January 2021 to March 2025 across 55 languages. Many notes languish in evaluation limbo, unevaluated by sufficient contributors, exacerbating the bottleneck in a system dependent on crowd-sourced ratings for visibility. Core publication challenges stem from the algorithmic requirement for "bridging" agreement, where notes must garner supportive ratings from contributors across ideological divides to be deemed helpful and displayed. This threshold, intended to mitigate partisan bias, has resulted in persistently low success proportions, with the share of helpful notes broadly declining since May 2024 and highlighting systemic risks to the program's sustainability. Transparency gaps in rating processes and contributor motivations further complicate evaluations, as notes often fail to attract the diverse input needed for approval, even as total contributor activity rose on some metrics prior to the 2025 drop-off. These dynamics have prompted critiques that the model, while theoretically robust, struggles in practice with scaling participation and ensuring timely, unbiased consensus.

Allegations of Bias and Ideological Skew

Critics, particularly from conservative perspectives, have alleged that Community Notes exhibits a left-leaning ideological skew, pointing to the higher frequency of notes applied to right-leaning posts and the reliance on sources perceived as left-leaning. For instance, analyses of over 300,000 notes from 2021 onward revealed that left-leaning notes often displayed more negative sentiment and stronger ties to real-world events, while right-leaning notes scored lower on certain quality measures, suggesting potential disparities in tone and framing that could disadvantage conservative viewpoints. Such observations have fueled claims that the crowdsourced system, despite its design, inherits biases from contributor pools or source selection, echoing broader skepticism toward moderation tools influenced by platform demographics. Empirical data, however, indicates that observed asymmetries in flagging—such as Republicans' posts receiving notes twice as often as Democrats' (66% versus 33% of flagged content)—stem from partisan differences in sharing rates rather than bias in the rating process. A June 2025 study by researchers at the Oxford Internet Institute, Panthéon-Sorbonne, and MIT Sloan, analyzing millions of ratings, found no evidence of partisan bias among raters, with the "bridging" algorithm requiring cross-ideological agreement to promote notes, effectively penalizing one-sided contributions. This mechanism aims to foster consensus across viewpoints, as demonstrated in contexts like German politics, where no consistent party preference emerged in published notes despite varying proposal volumes. Source citation patterns provide mixed signals on skew: approximately 52% of notes referenced left-leaning outlets, compared to 14% right-leaning and 34% neutral, potentially reflecting source availability or contributor preferences. Yet notes linking to high-bias sources of any stripe were rated 3% less helpful per unit of bias intensity, with external, unbiased references boosting perceived utility by over twofold, underscoring the system's incentives against overt partisanship. Academic analyses, while data-rich, warrant caution due to potential institutional leanings among research institutions, which may underemphasize conservative critiques; nonetheless, the bridging model's emphasis on diverse rater agreement appears to mitigate echo-chamber effects more robustly than traditional fact-checking.

Specific Controversies and Case Studies

In October 2024, the Center for Countering Digital Hate (CCDH), an advocacy group focused on combating online hate and disinformation, analyzed 283 X posts containing misleading claims about U.S. election integrity, such as assertions that the 2020 election was stolen or that voting systems were inherently unreliable; none received a published Community Note despite receiving proposed notes from contributors. These posts collectively amassed over 14 million views before any contextual intervention, with CCDH attributing the failure to the system's "bridging" algorithm requiring cross-ideological agreement, which stalled publication when notes were downvoted by users opposing corrections. X disputed the CCDH's methodology, arguing it selectively highlighted unresolved proposals while ignoring the system's overall performance, though independent verification confirmed delays on high-engagement political content. A notable case arose in February 2025 when a Community Note was attached to an X post citing a poll showing high favorability ratings for Ukrainian President Volodymyr Zelenskyy; the note clarified the poll's representativeness and context, prompting Musk to publicly criticize it as misleading and call for its removal, stating it misrepresented public sentiment. Musk, who owns X, argued the note ignored broader evidence of declining support for Zelenskyy, highlighting a perceived tension between the crowdsourced system's independence and platform governance; the note remained visible, but the incident fueled debates over algorithmic neutrality when notes contradict the owner's views. This event exemplified allegations of vulnerability to manipulation, as reports indicated "toxic" users strategically downvoted notes on geopolitically sensitive topics to prevent publication, with over 90% of proposed notes in similar categories failing to meet the threshold for display in 2025. Another controversy involved inconsistent application during the July 2024 change at the top of the U.S. Democratic presidential ticket, where posts amplifying unverified claims about Kamala Harris's candidacy—such as exaggerated delegate counts—received delayed or absent notes, allowing rapid diffusion before corrections surfaced; researchers noted that while notes eventually appeared on some, the initial lag enabled millions of impressions, underscoring the system's dependence on contributor volume and agreement amid polarized audiences. Critics, including misinformation researchers, pointed to this as evidence of scalability limits, with notes succeeding more reliably on non-partisan topics but faltering on real-time political events due to ideological clustering among raters. X countered that such cases represent a minority, with internal data showing notes reducing engagement on flagged posts by up to 20-30% when published, though external analyses questioned the bridging mechanism's resilience against coordinated manipulation.

Broader Implications

Impact on Misinformation Dynamics

Community Notes has demonstrated measurable effects on curbing the amplification of misleading content on X by attaching contextual annotations to posts, which appear once sufficient contributors rate them as helpful. A University of Washington-led analysis of over 100 million posts found that those flagged with Community Notes experienced significantly reduced repost rates and overall virality compared to similar unnoted content, with false-information diffusion dropping by up to 20-30% in networked spread models. Similarly, an experimental study exposing users to Community Notes on misleading posts reported a 62% average reduction in subsequent sharing behaviors, attributing this to the notes' provision of sourced counter-evidence that prompts user reevaluation. These dynamics suggest a causal mechanism whereby visible corrections interrupt momentum-driven propagation, particularly for high-engagement falsehoods, without suppressing true content equivalently. The system's bridging-based consensus model, which prioritizes notes appealing across ideological lines, fosters broader acceptance and alters diffusion trajectories by elevating epistemically robust contributions over partisan ones. Evidence indicates that notes citing unbiased or diverse sources receive higher helpfulness ratings, leading to their prominence and thereby diminishing the relative visibility of unchecked claims in users' feeds. This has correlated with increased user trust in platform moderation, as community-sourced notes outperform algorithmic flags or third-party labels in perceived credibility across political spectra, with participants in controlled experiments rating them 15-25% more trustworthy. Consequently, exposure to such notes encourages self-correction among posters, with studies observing higher retraction rates for noted misleading tweets, reshaping incentives toward accuracy in posting. However, aggregate platform-wide reductions in engagement remain inconsistent, as some quasi-experimental roll-out analyses detect no statistically significant drop in overall interactions with pre-noted deceptive posts, potentially due to entrenched habits or selective exposure. Short-term boosts in author engagement post-note have also been noted, indicating that while diffusion slows for specific instances, systemic resilience against coordinated campaigns may depend on contributor scale and algorithmic promotion. Overall, Community Notes introduces a decentralized friction in falsehood cascades, promoting a more contested information environment that privileges verifiable claims through collective scrutiny rather than unilateral suppression.

Shifts in Platform Governance

The introduction of Community Notes marked a pivotal transition in X's approach to content moderation, evolving from the limited Birdwatch pilot program launched by Twitter in January 2021 to a globally accessible, crowdsourced system following Elon Musk's acquisition of the platform in October 2022. Birdwatch had restricted participation to select U.S. users and relied on a smaller pool of volunteer contributors for proposed notes, which were displayed experimentally on a subset of tweets; in contrast, Community Notes opened eligibility to contributors worldwide who met basic criteria, such as maintaining an account for at least six months and avoiding repeated violations of platform rules, thereby democratizing the process. This expansion reflected a deliberate pivot away from top-down moderation by platform staff or partnered organizations toward algorithmic facilitation of user-generated context, with notes ranked for visibility based on agreement from raters across diverse ideological perspectives to prioritize "bridging" contributions that transcend echo chambers. The governance model emphasized transparency and resilience against manipulation, incorporating an open-source algorithm that evaluates note helpfulness from contributor ratings, aiming to surface context without suppressing original posts—a departure from pre-acquisition practices in which Twitter frequently labeled or removed content deemed misleading by internal teams or external partners often criticized for institutional biases favoring progressive viewpoints. Musk articulated this shift as a means to harness crowd judgment over elite gatekeeping, arguing that centralized fact-checking by entities with perceived ideological skews, such as media outlets or academic-affiliated verifiers, undermined neutrality; instead, Community Notes sought to mitigate such risks by requiring agreement from demographically and politically varied raters, with data indicating that notes achieving broad agreement were more likely to address misinformation effectively. By 2023, the system had scaled to millions of contributions, reducing reliance on centralized enforcement tools and aligning with X's broader philosophy of minimal intervention to foster open discourse, though it retained platform oversight for appeals and enforcement against manipulation. The adoption of Community Notes influenced platform governance by redistributing authority from centralized trust-and-safety teams—slashed post-acquisition—to a distributed network of users, enabling rapid scaling without proportional increases in staff and positioning X as a model for decentralized moderation amid declining trust in traditional gatekeepers. This approach prompted competitors like Meta to pilot similar community note systems in 2025, citing limitations in expert-driven fact-checking, such as inherent biases that skewed toward institutional narratives on politically contested topics like elections. However, the model's dependence on user participation introduced challenges in maintaining consistent coverage, as evidenced by fluctuations in note volume and occasional interventions by platform leadership to address perceived gaming, underscoring an ongoing tension between decentralization and oversight in self-governing digital ecosystems.

Sustainability and Future Directions

Participation in Community Notes has declined sharply in 2025, with submissions cratering compared to prior years, raising concerns about the program's long-term viability as a crowdsourced system reliant on volunteer contributors. One analysis indicates that over 90% of submitted notes remain unpublished and stuck in review limbo, exacerbating contributor fatigue and potentially deterring new participants because of the low odds of seeing a note go live. An October 2025 study highlights systemic threats, including dependency on an effective onboarding pipeline and the positive feedback loop whereby published notes encourage future authoring; disruptions here could lead to a shrinking contributor base, as the bridging algorithm favors cross-ideological agreement but struggles with low-volume topics or coordinated low-quality inputs. To address these challenges, X has implemented algorithmic improvements, such as enhanced detection of coordinating contributors to bolster robustness against manipulation attempts, as announced in May 2025. In June 2025, the platform began piloting a feature that leverages the same bridging approach to elevate content achieving agreement across typically divided user groups, aiming to promote broadly consensual information and potentially increase engagement incentives. Looking ahead, integration of artificial intelligence represents a key direction for scalability, with X deploying AI agents to generate and publish notes starting in July 2025, enabling faster responses to misinformation at volumes beyond human capacity while drawing on web-wide context. This hybrid approach seeks to mitigate participation shortfalls by automating initial drafting and verification, though it introduces risks of AI hallucinations or biases if outputs are not rigorously aligned with the existing bridging criteria. Sustained empirical monitoring of contributor retention and note publication rates will be essential to evaluate whether these enhancements counteract the declining trends, as the system's causal reliance on diverse, incentivized human input remains a foundational constraint.

References

  1. [1]
    About Community Notes on X | X Help
    Community Notes aim to create a better informed world by empowering people on X to collaboratively add context to potentially misleading posts.
  2. [2]
    Community Notes - X
    Community Notes: a collaborative way to add helpful context to posts and keep people better ...
  3. [3]
    Introducing Birdwatch, a community-based approach to misinformation
    Jan 25, 2021 · Birdwatch allows people to identify information in Tweets they believe is misleading and write notes that provide informative context.
  4. [4]
    From Birdwatch to Community Notes, from Twitter to X - arXiv
    Oct 10, 2025 · Community Notes (formerly known as Birdwatch) is the first large-scale crowdsourced content moderation initiative that was launched by X ...
  5. [5]
    The algorithmic heart of Community Notes - by Tom Stafford
    Jan 9, 2025 · Community Notes, originally called Birdwatch, was launched in 2021 and rolled out platform wide in 2023 (Musk acquired Twitter in 2022).
  6. [6]
    Study Finds X's Community Notes Provides Accurate Responses to ...
    Apr 24, 2024 · A new UC San Diego-led study published in JAMA finds that X's Community Notes, a crowdsourced approach to addressing misinformation, helped counter false ...
  7. [7]
    Community notes increase trust in fact-checking on social media - NIH
    Here, we presented n = 1,810 Americans with 36 misleading and nonmisleading social media posts and assessed their trust in different types of fact-checking ...
  8. [8]
    Community notes reduce engagement with and diffusion of false ...
    Finally, Community Notes is an evolving system and our analysis reflects the effects of the system during the study period, March–June 2023. Like all social ...
  9. [9]
    Study: Community Notes on X could be key to curbing misinformation
    Nov 18, 2024 · Gies research reveals that crowd-sourced fact-checking can effectively curb misinformation, as users are more likely to retract false or ...
  10. [10]
    Community Notes help reduce the virality of false information on X ...
    Sep 18, 2025 · A University of Washington-led study of X found that posts with Community Notes attached were less ...
  11. [11]
    X's Community Notes and the South Asian Misinformation Crisis
    Jun 30, 2025 · This report examines the performance of X's Community Notes feature in South Asia, with a focus on adoption, linguistic representation, ...
  12. [12]
    Do Community Notes work? - Impact of Social Sciences - LSE Blogs
    Jan 14, 2025 · Tom Stafford assesses the evidence for the effectiveness of community notes as a form of collective intelligence.
  13. [13]
    Twitter's 'Birdwatch' Aims to Crowdsource Fight Against Misinformation
    Feb 10, 2021 · With a new pilot program called Birdwatch, Twitter is hoping to crowdsource the fact-checking process, eventually expanding it to all 192 million daily users.
  14. [14]
    Twitter Launches Birdwatch, A Wikipedia-Style Approach To Fight ...
    Jan 25, 2021 · Birdwatch will allow regular users, called "Birdwatchers," to identify tweets they think have misinformation and write notes with more ...
  15. [15]
    Twitter launches Birdwatch, a fact-checking program intended to ...
    Jan 25, 2021 · Twitter has launched its Birdwatch program, meant to address misinformation on the platform by allowing users to fact-check tweets.
  16. [16]
    Birdwatch: Inside Twitter's Plan to Fact-Check Tweets - Bloomberg
    Mar 4, 2021 · Like many projects Twitter and Chief Executive Officer Jack Dorsey undertake, Birdwatch is ambitious and idealistic. If it works, it could ...
  17. [17]
    Partisanship and the evaluation of news in Twitter's Birdwatch ...
    Birdwatch operates by allowing participants to identify tweets as misleading or not, write free-response fact-checks of tweets, and evaluate the quality of ...
  18. [18]
    How Twitter's Birdwatch fact-checking project really works
    Nov 9, 2022 · Started as a small pilot program in January 2021, Birdwatch is now open to any Twitter user in the U.S. who signs up. Its participants are ...
  19. [19]
  20. [20]
    Musk Renames Birdwatch 'Community Notes,' Touts 'Improving ...
    Nov 5, 2022 · Elon Musk said Twitter's Birdwatch feature will be renamed 'Community Notes' and is aimed at 'improving information accuracy' amid growing content-moderation ...
  21. [21]
  22. [22]
    [PDF] From Birdwatch to Community Notes, from Twitter to X - arXiv
    Oct 10, 2025 · On January 23, 2021, X (formerly Twitter) launched Community Notes (formerly Birdwatch), the first large-scale community-driven initiative ...
  23. [23]
    [PDF] HOW X'S COMMUNITY NOTES SYSTEM FALLS SHORT ON ...
    This allows misleading posts about voter fraud, election integrity, and political candidates to spread and be viewed millions of times. Posts without Community ...
  24. [24]
    [PDF] Did the Roll-Out of Community Notes Reduce Engagement ... - arXiv
    The Community Notes feature was later expanded to all users in the U. S. on October 6, 2022, and to global users on December 11, 2022 [69]. In an attempt to ...
  25. [25]
    Use of Community Notes on Elon Musk's X has plummeted in 2025
    Jun 6, 2025 · Submissions to X's Community Notes, which add user-generated context and corrections to the platform's posts, have cratered this year.
  26. [26]
    A Deep Dive into X's Community Notes: An Analysis of English and ...
    Jul 9, 2025 · The Origin and Evolution of the Program​​ Community Notes, formerly known as Birdwatch, is X's flagship initiative to crowdsource content ...
  27. [27]
    Testing Begins for Community Notes on Facebook, Instagram and ...
    Mar 13, 2025 · Many of you will be familiar with X's Community Notes system, in which users add context to posts. Meta won't decide what gets rated ...
  28. [28]
    Meta's shift to Community Notes model proves that we can fix big ...
    Jan 7, 2025 · Meta's recent decision to replace its third-party fact-checking program with a user-driven "Community Notes" system modeled after X (formerly Twitter)
  29. [29]
    How Meta's take on Community Notes misses the mark - Platformer
    Mar 13, 2025 · That core innovation aside, though, Community Notes have several important limitations, according to former Twitter executives I've interviewed.
  30. [30]
    Community Notes: A New Way to Add Context to Posts
    Sep 10, 2025 · As we announced in March 2025, Meta has rolled out a Community Notes feature that lets people add more context to Facebook, Instagram and ...
  31. [31]
    Threats to the sustainability of Community Notes on X - arXiv
    Oct 1, 2025 · The Community Notes system pioneered by Twitter, now known as X, uses a bridging algorithm to identify user-generated context with upvotes ...
  32. [32]
    Twitter users need to unlock the ability to write Community Notes
    Dec 21, 2022 · The ability to write Community Notes will be unlocked when a contributor has achieved a Rating Impact of at least five. Notes in need of ratings ...
  33. [33]
    Writing notes - X
    Anyone can read and rate Community Notes, but only contributors who've unlocked the ability to write can add new notes to posts. Here's how to add a note:.
  34. [34]
    What Are X / Twitter Community Notes And How Do They Fight ...
    Jul 26, 2024 · Answer multiple-choice questions regarding why you're writing a community note. · Write a brief note to share more context on the post. · Specify ...
  35. [35]
    What do I think about Community Notes?
  36. [36]
    Understanding Community Notes and Bridging-Based Ranking
    Jan 1, 2024 · The Community Notes algorithm can be used in any forum with high entropy (lots of downvotes) as a way to identify posts with high Information Value.
  37. [37]
    Documentation and source code powering Twitter's Community Notes
    Welcome to Community Notes's public repository. This repository is a place for us to transparently host our content, algorithms, and share updates about the ...
  38. [38]
    Signing up - X
    Eligibility. To become a Community Notes contributor, accounts must have: No recent notice of violations of X's Rules. Intended to reduce the likelihood of ...
  39. [39]
    How to sign up for Community Notes on Twitter / X - Mashable
    Oct 24, 2023 · An account can sign up for Community Notes if the user has not recently violated the platform's rules and has been on the platform for at least 6 months.
  40. [40]
    Getting started - X
    1. Sign up. Anyone who meets the eligibility criteria can sign up to become a Community Notes contributor. As there are important nuances in each market, we'll ...
  41. [41]
    Contributor helpfulness scores - X
    Helpfulness scores are a way to give more influence to people with a track record of making high-quality contributions to Community Notes.
  42. [42]
    X CEO Linda Yaccarino Says Community Notes Reach 1M Users
    Apr 30, 2025 · Yaccarino said that the platform's Community Notes feature now boasts 1 million contributors across 200 countries and every major language.
  43. [43]
    Challenges - X
    First, as described above, Community Notes uses a bridging based algorithm to identify notes that are likely to be helpful to people from many points of view.
  44. [44]
    Note ranking algorithm - X
    The Community Notes ranking algorithm leverages contributor ratings to identify notes that are found helpful by people from different perspectives. Since ...
  45. [45]
    Notes on Media & Links - X
    Community Notes on posts that contain ... visible on all posts that our system identifies as containing a matching image, video or link. These notes, when ...
  46. [46]
    Community Notes - X
    Nov 22, 2023 · Welcome new contributors in Algeria, Bahrain, Egypt, Israel, Jordan, Kuwait, Lebanon, Morocco, Oman, ...
  47. [47]
    X Community Notes Fail South Asian Users, Study Finds - MediaNama
    Jul 4, 2025 · Less than 40% of community notes drafted in South Asian languages on X meet the visibility criteria, compared to over 65% success rate for those in English.
  48. [48]
    How X's Community Notes Leave South Asians Disproportionately ...
    Jul 25, 2025 · X's Community Notes face delays and low coverage in Hindi, Urdu, and other South Asian languages, leaving misinformation unchecked, ...
  49. [49]
    Meta is ending its fact-checking program in favor of a ... - NBC News
    Jan 7, 2025 · Meta CEO Mark Zuckerberg announced a series of major changes to the company's moderation policies and practices, saying the election felt ...
  50. [50]
    Why are social media sites betting on crowdsourced fact-checking?
    Sep 8, 2025 · TikTok is the latest social media platform to enable users to write and rate posts to give context to content on their feeds.
  51. [51]
    Making Meta's Community Notes Work: Current Challenges and ...
    Jul 25, 2025 · Coinciding with the start of the Trump administration, Meta ended its longstanding fact-checking partnership with independent experts.
  52. [52]
    Fact check: Are X's community notes fueling misinformation? - DW
    Aug 5, 2025 · "Community notes" are supposed to provide helpful context to potentially misleading content on X. But they are appearing less often and, ...
  53. [53]
    Evaluation - X
    Temporarily raising the threshold required for notes to be publicly visible on a post; Temporarily pausing the scoring of newly created notes; Temporarily ...
  54. [54]
    communitynotes/birdwatch_paper_2022_10_27.pdf at main · twitter/communitynotes
  56. [56]
    X's Community Notes Reaches 1M Contributors - Yahoo Finance
    May 12, 2025 · X's Community Notes program continues to expand - but is it the most effective content moderation approach?
  57. [57]
    Downloading data - Community Notes - X
    The Community Notes data is released as five separate files, including Notes, Note Status History, and Ratings.
  58. [58]
    Did the Roll-Out of Community Notes Reduce Engagement With ...
    Nov 8, 2024 · We find no evidence that the introduction of Community Notes significantly reduced engagement with misleading tweets on X/Twitter.
  59. [59]
    Characteristics of X (Formerly Twitter) Community Notes Addressing ...
    Apr 24, 2024 · This study evaluated the topics, accuracy, and credibility of X (formerly Twitter) Community Notes addressing COVID-19 vaccination.
  60. [60]
    References to unbiased sources increase the helpfulness ... - Nature
    Jul 16, 2025 · Community-based fact-checking is a promising approach to address misinformation on social media at scale. However, an understanding of what ...
  61. [61]
    A Brief Review of Fact-Checking in the Digital Era - R Street Institute
    Mar 5, 2025 · This essay aims to assess literature on traditional fact-checking and compare it to newer decentralized models.
  62. [62]
    How the Crowd Selects Fact-Checking Targets on Social Media
    May 28, 2024 · In this study, we empirically analyze differences in how contributors to Community Notes and Snopers select their targets when fact-checking social media posts.
  63. [63]
    Can Community Notes Replace Professional Fact-Checkers?
  64. [64]
    Use of Community Notes on Elon Musk's X has plummeted in 2025 ...
    Jun 6, 2025 · Worldwide, traffic to X has ticked down since January from about 4.7 billion visits to 4.4 billion in May, according to estimates provided to ...
  65. [65]
    More than 90% of X's Community Notes are never published and ...
    Jul 10, 2025 · "As the volume of notes submitted grows, the system's internal visibility bottleneck becomes more apparent – especially in English," the study ...
  66. [66]
    New study finds Republicans flagged for posting misleading tweets ...
    Jun 17, 2025 · New research from the Oxford Internet Institute at Oxford University, in partnership with researchers at Panthéon-Sorbonne and MIT Sloan ...
  67. [67]
    Crowdsourced Fact-Checking or Biased Commentary? Analyzing ...
    May 23, 2025 · Analyzing Political Bias in Twitter's Community Notes.” In the video, we give an overview of why crowd-sourced fact-checking and the ...
  69. [69]
    [PDF] arXiv:2503.10560v1 [cs.SI] 13 Mar 2025
    Mar 13, 2025 · This suggests that the rating mechanism on the Community Notes platform successfully penalizes one-sidedness and politically motivated reasoning ...
  70. [70]
    Do Community Notes have a party preference? – Digital Society Blog
    Feb 20, 2025 · To arrive at a final rating, X uses a so-called bridging algorithm. This algorithm takes into account the position of registered users on the ...
  71. [71]
    X's Community Notes fail to address US election misinformation
    Oct 30, 2024 · ... was stolen and that voting systems are unreliable, CCDH said. In the cases where Community Notes were displayed, the original misleading ...
  72. [72]
    Elon Musk has problem with X Community Notes after Ukraine ...
    Feb 21, 2025 · Musk disagreed with Community Notes regarding a poll that found high favorability ratings for Ukraine President Volodymyr Zelenskyy. Community ...
  73. [73]
    Toxic X users sabotage Community Notes that could derail disinfo ...
    Oct 31, 2024 · In a report, the CCDH flagged 283 misleading X posts fueling election disinformation spread this year that never displayed a Community Note. Of ...
  74. [74]
    Elon Musk Wants People on X to Police Election Posts. It's Not ...
    Jul 25, 2024 · Less than an hour after President Biden endorsed Vice President Kamala Harris as the Democratic candidate for president on Sunday, users on ...
  75. [75]
    Community-based fact-checking reduces the spread of misleading ...
    Sep 13, 2024 · This suggests that once the number of helpfulness ratings exceeded the display threshold, the efficacy of community notes in reducing the spread ...
  76. [76]
    Did the Roll-Out of Community Notes Reduce Engagement ... - arXiv
    Jul 16, 2023 · We find no evidence that the introduction of Community Notes significantly reduced engagement with misleading tweets on X/Twitter.
  77. [77]
    Meta Community Notes and Content Moderation in a Free Market | ITIF
    Jan 16, 2025 · Meta announced on January 7, 2025 that it was ending its third-party fact-checking program on its social media platforms—Facebook, ...
  78. [78]
    Elon Musk's Community Notes Feature on X Is Working - Bloomberg
    May 22, 2024 · F.D. Flam is a Bloomberg Opinion columnist covering science. She is host of the "Follow the Science" podcast.
  79. [79]
    Meta says it will follow X, replace fact-checking with community notes
    Jan 7, 2025 · The company said it decided to end the program because expert fact checkers had their own biases and too much content ended up being fact ...
  80. [80]
    [2510.00650] Threats to the sustainability of Community Notes on X
    Oct 1, 2025 · Our analysis shows the positive effect on future note authoring of having a note published. This highlights the risk of the current system, ...
  81. [81]
    Elon Musk on X: "Improvements to @CommunityNotes" / X
    May 15, 2025 · Robustness update: We've extended Community Notes ability to detect coordinating contributors with additional features targeting ...
  83. [83]
    X Will Deploy AI to Write Community Notes, Expand Fact-Checking
    Jul 1, 2025 · Elon Musk's X will start to publish Community Notes written by artificial intelligence agents, a move to increase the speed of the social network's fact- ...
  84. [84]
    AI joins Community Notes on X, aims to fight misinformation faster
    Jul 4, 2025 · Community Notes now has AI help. X's latest system lets large language models write notes, while humans still decide what's helpful.