
Content moderation on Facebook

Content moderation on Facebook consists of the policies, technologies, and human oversight mechanisms employed by Meta Platforms, Inc., to identify, assess, and restrict content deemed to violate the platform's Community Standards, which target harms including violence, hate speech, adult nudity, harassment, and misinformation, while navigating legal obligations in over 190 countries. Meta's system processes tens of billions of pieces of content daily through a combination of algorithms for proactive detection, human moderators for nuanced review, and partnerships with independent fact-checkers, resulting in the removal or demotion of millions of violating items each day—typically less than 1% of total uploads—according to the company's enforcement reports. Key achievements include substantial reductions in the prevalence of hate speech and terrorist propaganda on the platform, as measured by Meta's internal metrics, alongside the establishment of an independent Oversight Board in 2020 to adjudicate high-profile appeals and refine policies. However, the practice has faced persistent controversies over inconsistent enforcement, algorithmic errors leading to erroneous removals, and evidence of non-neutral outcomes in political content moderation, with empirical analyses indicating that interventions have disproportionately impacted certain viewpoints, particularly conservative-leaning material, amid broader debates on viewpoint neutrality versus harm prevention. In early 2025, Meta shifted toward "more speech and fewer mistakes" by curtailing automated enforcement for certain categories, emphasizing human judgment to reduce over-removal, a response to accumulated critiques of prior over-reliance on AI-driven decisions that amplified mistakes.

Historical Development

Inception and Early Policies (2004-2015)

Facebook was founded on February 4, 2004, initially as a restricted network for Harvard students, where content moderation was rudimentary and primarily reactive, centered on addressing basic violations through user-submitted reports handled by a small internal team. With limited scale at launch—fewer than 1,000 users in the first month—the platform's early terms emphasized user conduct aligned with its collegiate focus, prohibiting disruptive behaviors but lacking comprehensive formal standards. Moderation relied heavily on community flagging, as automated tools were absent, reflecting the causal reality that low volume permitted manual oversight without systemic strain. By 2005, as Facebook expanded beyond universities, initial community standards emerged to tackle emerging issues like nudity and denial of historical events such as the Holocaust, marking the shift toward codified policies in response to intellectual property complaints and visual content violations. These guidelines, part of the evolving Statement of Rights and Responsibilities, introduced ad-hoc human reviews for reported infractions, including prohibitions on explicit imagery and unauthorized use of copyrighted material, though enforcement remained inconsistent due to resource constraints. Amid rapid growth to roughly 120 million monthly active users, the moderation team consisted of just 12 individuals reviewing all flagged posts, underscoring scaling challenges where empirical volume outpaced human capacity. The period from 2010 to 2015 saw incremental adoption of algorithmic aids, particularly for spam detection, as machine-learning classifiers were deployed by teams like the internal "Black Ops" unit to filter automated abuse and suspicious accounts proactively. User base expansion to over 500 million by 2010 amplified these needs, prompting team growth to approximately 1,000 moderators by 2013, many hired via outsourcing contractors. A pivotal event was the 2012 mandatory rollout of Timeline profiles, which retroactively surfaced historical user content, broadening visibility and exposing previously private or archived material to wider audiences, thereby necessitating policy expansions to handle increased reports of harassment and IP disputes tied to older posts. This change highlighted causal tensions between enhanced user engagement and moderation burdens, as greater content exposure correlated with elevated violation volumes without proportional staffing until later years.

Scaling Challenges and Initial Reforms (2016-2020)

Following the 2016 U.S. presidential election, Facebook encountered substantial challenges in moderating content at scale, as the platform's algorithms amplified misinformation and foreign election interference, affecting over 2 billion monthly active users by then. In response, the company initiated major expansions, investing billions in AI-driven detection tools and human moderators to proactively identify and remove violating posts, with reported expenditures exceeding $13 billion on safety and security since 2016. This included hiring thousands of additional content reviewers globally to handle the surging volume of reports and automated flags. The 2018 Cambridge Analytica scandal, involving the unauthorized harvesting of data from up to 87 million users for political targeting, amplified demands for moderation reforms by highlighting vulnerabilities to manipulative content ecosystems. Although primarily a privacy breach, it catalyzed broader policy shifts toward greater transparency in enforcement against misinformation and coordinated inauthentic behavior. Concurrently, on April 24, 2018, Facebook publicly released its previously internal Community Standards guidelines, outlining enforceable rules against hate speech, graphic violence, and false news, while expanding user appeals processes for removed content. Global scaling efforts relied heavily on outsourcing to third-party firms in low-wage regions, including India and the Philippines, where contractors reviewed thousands of posts daily under high-pressure conditions to cover linguistic and cultural diversity. This approach enabled rapid workforce growth but strained operational consistency. In Myanmar, the platform's failure to curb anti-Rohingya hate speech—spiking during the 2017-2018 ethnic violence that displaced over 700,000 people—prompted targeted interventions, such as deploying more Burmese-language moderators and temporarily amplifying removals of inflammatory content after an internal review admitted the platform fomented offline harm. By 2019, these reforms yielded measurable outcomes, including the removal of over 25 million pieces of terrorist propaganda since early 2018, predominantly through hashing techniques matching known extremist content, with human review for edge cases. Initial third-party fact-checking partnerships, launched in December 2016 with organizations such as PolitiFact and FactCheck.org, began demoting disputed stories in users' feeds, marking an early pivot toward external validation for misinformation claims ahead of the 2018 midterms.

Post-Pandemic Adjustments and Recent Shifts (2021-2025)

Following heightened scrutiny during the COVID-19 pandemic and the 2020 U.S. presidential election, Facebook implemented stringent policies targeting misinformation and potential incitement to violence. In early 2021, amid concerns over election integrity, the platform temporarily altered its rules to limit exceptions for "newsworthy" content that might otherwise glorify violence or incite unrest, a shift prompted by the January 6 Capitol riot. On January 7, 2021, Facebook indefinitely suspended then-President Donald Trump's accounts, citing risks of further violence, a decision later adjusted to a two-year suspension in June 2021 after Oversight Board review, which upheld the initial action but criticized its vagueness. These measures extended to aggressive removal of COVID-19 misinformation, including claims contradicting public health guidance, though internal documents later revealed inconsistent enforcement favoring user engagement. The establishment of the Oversight Board in May 2020 provided an independent appeals mechanism, with its first major rulings in 2021 addressing high-profile cases like Trump's suspension and emphasizing procedural transparency. However, whistleblower Frances Haugen's October 2021 disclosures, based on leaked internal research, highlighted systemic tensions: algorithms amplified divisive content for growth, civic integrity teams were under-resourced, and safety interventions often clashed with profit motives, exacerbating harms like teen mental health issues and political polarization. Haugen testified before Congress that Facebook's leadership, under Mark Zuckerberg, consistently overrode recommendations to curb misinformation, prioritizing engagement metrics over empirical risk assessments. These revelations fueled external pressure but did not immediately alter core enforcement trajectories. By 2023-2025, Meta pursued reforms reducing proactive interventions, reinstating Trump's accounts in January 2023 with enhanced guardrails against repeat violations. In January 2025, the company terminated its third-party fact-checking program—criticized by Zuckerberg as biased and overly restrictive—and adopted a Community Notes model inspired by X (formerly Twitter), enabling user-generated context labels to promote "more speech and fewer mistakes" over top-down corrections. Concurrently, updates to hateful conduct policies relaxed prohibitions on certain derogatory statements, such as labeling individuals as "mentally ill" or denying their existence, framing these as protected opinions rather than direct attacks, though critics argued this increased risks to vulnerable groups without sufficient evidence of reduced harms. Meta reported an approximately 50% drop in enforcement errors from Q4 2024 to Q1 2025, attributing it to focused prioritization of severe violations like illegal content over borderline cases, with U.S. over-removals declining over 75% weekly by mid-2025. The Oversight Board, in April 2025, condemned these rapid overhauls as "hasty," lacking impact evaluations and potentially undermining accountability, as evidenced by unchanged appeal volumes despite policy shifts. These adjustments reflect a pivot toward viewpoint-neutral enforcement, driven by internal audits and external political pressures, though empirical data on net effects remains contested amid rising reports of unchecked harmful content.

Moderation Methods and Technologies

Automated Detection and AI Systems

Facebook utilizes hash-matching algorithms, such as PhotoDNA, to proactively flag known child sexual abuse material (CSAM) by generating perceptual hashes of images and videos that match against established databases of verified exploitative content. Similar hashing techniques apply to terrorist propaganda through shared databases coordinated by the Global Internet Forum to Counter Terrorism (GIFCT), enabling rapid identification of prohibited visual and textual elements without relying on exact file matches. Since the mid-2010s, natural language processing (NLP) models have formed a core component of automated detection for hate speech, employing transformer-based architectures to parse linguistic patterns, context, and intent in user-generated text at scale. These systems evolved from rule-based filters to machine-learning classifiers trained on labeled datasets, achieving reported proactive detection rates exceeding 97% for certain violation categories by 2021. By the 2020s, Meta transitioned to multimodal frameworks capable of jointly analyzing text, images, and videos, addressing limitations of unimodal approaches in detecting violations embedded in memes, edited media, or combined formats. Advancements included real-time scoring for misinformation variants, leveraging embeddings from large language models to identify novel iterations of debunked claims without prior human fact-checker input. Automated systems face inherent challenges, including elevated false positive rates—where benign content is erroneously flagged due to ambiguous phrasing or cultural variances—and false negatives, where subtle violations evade detection owing to adversarial evasion tactics or insufficient training data diversity. A 2025 Northeastern University study highlighted algorithmic mismatches, revealing that content moderation typically activates after the majority of views have occurred, as recommendation engines prioritize engagement signals over real-time violation probabilities. To mitigate over-removal in ambiguous cases, classifier outputs feed into recommendation algorithms that demote borderline content by reducing its distribution proportionally to violation confidence scores, preserving platform utility while curbing visibility rather than enforcing outright deletion. This integration aims to balance scale with precision but amplifies disparities when prioritization favors high-engagement posts preceding full detection.
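The two mechanisms described above—fingerprint matching against databases of known violating media, and confidence-proportional demotion of borderline content—can be sketched in simplified form. The snippet below is an illustrative approximation only: real systems use perceptual hashes (PhotoDNA, PDQ) that tolerate re-encoding and cropping rather than the cryptographic hash used here for self-containment, and the thresholds and scoring formula are assumptions, not Meta's actual parameters.

```python
import hashlib
from dataclasses import dataclass

# Placeholder fingerprint standing in for an entry in a hash-sharing database.
KNOWN_VIOLATING_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(media_bytes: bytes) -> str:
    """Stand-in fingerprint; production systems use perceptual hashing instead."""
    return hashlib.sha256(media_bytes).hexdigest()

def matches_known_violation(media_bytes: bytes) -> bool:
    """Stage 1: exact-match lookup against known violating fingerprints."""
    return fingerprint(media_bytes) in KNOWN_VIOLATING_HASHES

@dataclass
class RankedPost:
    post_id: str
    base_rank_score: float       # engagement-driven ranking score
    violation_confidence: float  # classifier probability of a policy violation

def apply_borderline_demotion(post: RankedPost,
                              removal_threshold: float = 0.95,
                              demotion_floor: float = 0.6) -> float:
    """Stage 2: return an adjusted ranking score.

    Above the removal threshold the post gets no distribution; between the
    floor and the threshold its reach is scaled down in proportion to the
    classifier's confidence; below the floor it is left untouched.
    Thresholds are illustrative, not Meta's actual values.
    """
    c = post.violation_confidence
    if c >= removal_threshold:
        return 0.0  # treated as removal / zero distribution
    if c >= demotion_floor:
        # Linear demotion: confidence near the threshold yields near-zero reach.
        scale = 1.0 - (c - demotion_floor) / (removal_threshold - demotion_floor)
        return post.base_rank_score * scale
    return post.base_rank_score
```

Under this sketch, a post scored at 0.9 confidence keeps only a small fraction of its original reach, while one at 0.5 is ranked normally, mirroring the "reduce distribution rather than delete" approach the paragraph describes.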

Human Review Processes

Meta relies on a global network of human reviewers, primarily contractors, to verify content flagged by automated systems and handle nuanced enforcement decisions. Meta has engaged tens of thousands of such contractors worldwide, with operations scaled to more than 40,000 safety and security personnel, including moderators, by 2021. These teams operate from hubs in low-wage countries such as the Philippines, India, and Kenya, where labor costs are significantly lower than in the United States—often $1.50 to $3 per hour compared to $15 or more domestically. Reviewers undergo specialized training on Meta's Community Standards, emphasizing cultural and contextual nuances to interpret policies across languages and regions. In the tiered process, AI systems initially detect and prioritize potential violations, routing uncertain or high-priority cases—such as those involving potential self-harm or terrorism—to human queues for final assessment; a simplified sketch of this triage appears below. Human judgment is essential for edge cases where algorithms lack sufficient context, with reviewers evaluating millions of items daily across over 50 languages from more than 20 global sites. Exposure to disturbing material, including videos of suicides, assaults, and child abuse, has led to widespread reports of psychological harm among reviewers, including PTSD symptoms, anxiety, and nightmares. Turnover rates are elevated, with many reviewers lasting only about two years in the role due to burnout and inadequate support, despite provisions like required PTSD disclosure forms in some contracts. Users can appeal initial decisions through an internal process, with escalated cases potentially reaching the independent Oversight Board after exhausting company reviews. Meta processes millions of appeals annually, though exact figures vary by quarter; for instance, the Oversight Board received over one million appeals in its first full year (2021), selecting only a fraction for review and overturning Meta's decisions in about 14 cases initially, with reversal rates rising to around 90% among the cases it later selected. Internal overturn rates on appeals hover lower, often below 10% for certain violation types, reflecting the challenges in scaling consistent human oversight. Outsourcing to third-party firms in low-regulation environments has raised concerns over labor conditions, with reports of inconsistent enforcement due to rushed quotas, minimal oversight, and insufficient adaptations for local contexts. These arrangements prioritize cost efficiency but contribute to errors in content classification, as evidenced by lawsuits alleging inadequate psychological safeguards and variable decision accuracy in regions like Kenya and Ethiopia.
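As a rough illustration of the tiered routing just described—automated detection followed by prioritized human review—the following sketch encodes one plausible triage policy. The category names, thresholds, and priority formula are hypothetical assumptions introduced for clarity, not Meta's internal logic.

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    AUTO_ACTION = "auto_action"     # high-confidence, unambiguous violation
    HUMAN_REVIEW = "human_review"   # uncertain or high-severity case
    NO_ACTION = "no_action"         # score too low to act on

# Assumed set of categories always routed to human reviewers regardless of score.
HIGH_SEVERITY = {"child_safety", "terrorism", "imminent_violence"}

@dataclass
class Flag:
    content_id: str
    category: str
    model_confidence: float  # classifier score in [0, 1]
    predicted_reach: int     # estimated future views

def triage(flag: Flag,
           auto_threshold: float = 0.98,
           review_threshold: float = 0.5) -> tuple[Decision, float]:
    """Return a routing decision and, for human review, a queue priority."""
    severe = flag.category in HIGH_SEVERITY
    if flag.model_confidence >= auto_threshold and not severe:
        return Decision.AUTO_ACTION, 0.0
    if flag.model_confidence >= review_threshold or severe:
        # Prioritize by potential harm: severity, confidence, and likely reach.
        priority = ((2.0 if severe else 1.0)
                    * flag.model_confidence
                    * (1 + flag.predicted_reach / 1_000_000))
        return Decision.HUMAN_REVIEW, priority
    return Decision.NO_ACTION, 0.0
```

In this toy policy, a 0.6-confidence terrorism flag with high predicted reach jumps the queue ahead of a 0.9-confidence spam flag, reflecting the severity-and-reach prioritization the paragraph attributes to the tiered process.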

User Reporting and Community-Driven Tools

Facebook's user reporting system, introduced shortly after the platform's launch in 2004, enables individuals to flag content suspected of violating community standards, such as hate speech, harassment, or nudity, through contextual menus on posts, profiles, or comments. These reports are processed confidentially, with reviewers assessing them against established policies before deciding on actions like removal or account restrictions, often integrating automated systems for initial triage to prioritize high-severity cases amid billions of daily interactions. By 2025, user-initiated reports continued to serve as a primary reactive mechanism, complementing proactive detection and triggering a notable share of enforcement outcomes, though exact proportions vary by violation type as detailed in Meta's periodic disclosures. In a shift toward decentralized moderation, Meta rolled out Community Notes in early 2025, allowing eligible users to propose and rate contextual additions to potentially misleading posts on Facebook, Instagram, and Threads without mandating content removal. Inspired by X's (formerly Twitter's) model, the feature began testing in March 2025 and expanded following the January 2025 announcement to phase out third-party fact-checking programs, which the company cited as error-prone and overly restrictive. Notes become visible only after achieving cross-ideological consensus via user ratings (sketched below), aiming to foster broader participation and reduce top-down enforcement mistakes, with users initially receiving notifications when posts they had interacted with gain notes, to enhance awareness. This community-driven approach has demonstrated potential in surfacing nuanced context, but it faces challenges including vulnerability to coordinated abuse, where partisan users might amplify echo chambers through biased ratings, and inconsistent application across diverse linguistic or cultural contexts. Critics, including civil society advocates, argue that relying heavily on volunteer contributions risks under-moderating harms to vulnerable groups without robust safeguards, though Meta emphasizes algorithmic safeguards and transparency in note selection to mitigate such issues. Overall, these tools represent an effort to balance centralized enforcement with user agency, prioritizing additive context over deletion to minimize errors in high-volume environments.
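The cross-ideological visibility gate described above can be approximated with a simple rule: a note is surfaced only when raters from clusters that usually disagree independently find it helpful. This is a minimal sketch under assumed thresholds and a pre-computed rater clustering; X's published bridging algorithm (and Meta's adaptation of it) relies on matrix-factorization scoring rather than the explicit clusters used here.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Rating:
    rater_id: str
    rater_cluster: str   # e.g., "A" / "B", assigned from prior rating behavior
    helpful: bool

def note_is_visible(ratings: list[Rating],
                    min_ratings_per_cluster: int = 5,
                    min_helpful_share: float = 0.7) -> bool:
    """Show a note only if each rater cluster independently rates it helpful.

    Thresholds are illustrative assumptions, not the production values.
    """
    by_cluster: dict[str, list[Rating]] = defaultdict(list)
    for r in ratings:
        by_cluster[r.rater_cluster].append(r)

    if len(by_cluster) < 2:
        return False  # no cross-cluster signal yet

    for cluster_ratings in by_cluster.values():
        if len(cluster_ratings) < min_ratings_per_cluster:
            return False  # not enough ratings from this cluster
        helpful_share = sum(r.helpful for r in cluster_ratings) / len(cluster_ratings)
        if helpful_share < min_helpful_share:
            return False  # this cluster does not find the note helpful
    return True
```

The design intent this models is that a note endorsed only by one ideological cluster never surfaces, which is the mechanism intended to blunt the coordinated-rating abuse mentioned above.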

Policy Framework and Enforcement

Core Community Standards

Facebook's Core Community Standards establish the substantive rules prohibiting specific types of content deemed to pose risks to users or platform integrity, categorized broadly into safety, objectionable content, integrity, and respecting rights. Safety standards address violence and incitement, prohibiting credible threats of harm, promotion of self-injury, or coordination of criminal acts, as well as bullying and harassment that targets individuals based on personal attributes. Objectionable content covers hate speech—now termed hateful conduct—which bans attacks on people based on protected characteristics like race, ethnicity, religion, or sexual orientation, alongside restrictions on adult nudity, sexual activity, and graphic violence unless contextually justified. Integrity policies target spam, fraudulent schemes, and intellectual property infringements, while respecting rights focuses on privacy violations such as non-consensual sharing of intimate images or doxxing. These standards evolved significantly after 2018, incorporating expansions to combat misinformation and election interference following high-profile incidents like Russian influence campaigns in the 2016 U.S. elections, leading to policies against false claims about voting processes and coordinated inauthentic behavior—defined as networks of accounts deceiving users about their origins or coordination for manipulative purposes. By 2020, Facebook had removed more than 100 such networks, including those tied to foreign actors. In 2025, amid shifts toward prioritizing free expression, Meta relaxed certain hateful conduct rules, permitting speech that does not directly target protected groups or incite imminent harm, such as general criticisms of ideologies or institutions, while ending third-party fact-checking in favor of Community Notes to reduce perceived over-removal of political speech. The underlying rationale draws from a balance between enabling open discourse and preventing demonstrable real-world harms, with Meta emphasizing removal only for content showing causal pathways to violence or exploitation, as articulated by CEO Mark Zuckerberg in discussions of policy reviews following the 2020 unrest. For instance, political discourse is allowed absent direct threats or calls to violence, enabling debate on elections or ideologies provided it avoids personal targeting. Bans on coordinated inauthentic behavior exemplify this, targeting deceptive operations like state-sponsored networks rather than organic user opinions.

Transparency Reports and Metrics

Meta publishes quarterly Community Standards Enforcement Reports via its Transparency Center, providing self-reported data on content moderation actions across Facebook and Instagram. These reports quantify total violations detected, actions taken (such as removals or restrictions), and breakdowns by policy category, including hate speech, violent content, and spam. Proactive detection—via machine-learning classifiers and hash-matching systems—accounts for the majority of actions, with rates consistently above 80% for most categories by the early 2020s, rising from lower figures in prior years; for example, proactive detection reached 95% of hate speech removals in Q2 2020. The Q1 2025 report, released May 29, 2025, highlighted a roughly 50% reduction in enforcement mistakes following 2025 policy updates emphasizing fewer over-removals and greater speech allowances. Overall quarterly actions numbered in the billions, with spam and fake accounts comprising the largest volumes in Q2 2025, per trends spanning 2017-2025. Hate speech removal volumes decreased after the early-2025 changes amid narrowed proactive moderation scopes under revised policies, aligning with self-reported lower prevalence rates. Separate disclosures cover government requests for user data, with biannual or periodic reports detailing request volumes and compliance rates that vary by country—often higher in regions with robust legal oversight but lower where requests lack judicial warrants. Ad metrics, via the Ad Library, track active ads about social issues, elections, or politics, including total counts, spend amounts, and targeting details, with quarterly summaries of violations leading to ad removals. These metrics offer quantifiable baselines for enforcement efficacy and harm trends, though as self-reported figures, they rely on Meta's internal methodologies without independent audit in the disclosures.
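As a worked example of how the two headline figures in these reports relate, the snippet below computes a proactive detection rate and a prevalence estimate from hypothetical quarterly numbers; the figures are placeholders chosen to resemble the magnitudes cited above, not values taken from any actual report.

```python
# Proactive rate: share of actioned content found by Meta's systems before any
# user report. Prevalence: estimated share of sampled content views that
# contained violating material.

def proactive_rate(actioned_total: int, actioned_before_user_report: int) -> float:
    return actioned_before_user_report / actioned_total

def prevalence(violating_views_in_sample: int, sampled_views: int) -> float:
    return violating_views_in_sample / sampled_views

# Hypothetical quarter: 25.2M pieces actioned, 24.0M caught proactively,
# and 7 violating views per 10,000 sampled views.
print(f"proactive rate: {proactive_rate(25_200_000, 24_000_000):.1%}")  # ~95.2%
print(f"prevalence:     {prevalence(7, 10_000):.2%}")                   # 0.07%
```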

Oversight Mechanisms

The Oversight Board, established by Meta in 2020, serves as an independent body comprising experts in areas such as law, human rights, and journalism to review select content moderation appeals escalated from users. Its decisions are binding on Meta for the specific cases under review but carry advisory weight for broader policy recommendations, with the company required to respond publicly within 30 days. In 2025, the Board issued decisions critiquing Meta's rapid policy adjustments, such as those following political shifts, including analyses of contested symbols in July and initial post-reform cases in April, highlighting tensions between enforcement speed and contextual accuracy. Appeals processed through the Oversight Board demonstrate enforcement inconsistencies, with the panel overturning Meta's original decisions in approximately 80-90% of reviewed cases across over 100 rulings as of 2024, though it handles only a fraction of the millions of annual user appeals. Broader internal appeal data indicate error rates in content removals around 10-20%, prompting reviews but underscoring variability in automated and human judgments. Internally, Meta conducts audits of moderation systems, including efforts to enhance AI explainability by analyzing algorithmic decision factors in escalated cases, as examined in Oversight Board reviews of automated detection flaws. Third-party fact-checkers, previously contracted for labeling potentially misleading content without direct removal authority, were slated for phase-out in the United States on January 7, 2025, shifting oversight toward decentralized models. This reform introduced Community Notes, a user-contributed system rolled out across Facebook, Instagram, and Threads starting in March 2025, aimed at adding context to disputed posts via crowd-sourced input rather than centralized fact-checking, thereby reducing reliance on external partners and internal gatekeeping. The Oversight Board's co-chair expressed concerns over this transition, arguing it could undermine accountability without adequate safeguards against the persistence of harmful misinformation.

Controversies and Debates

Allegations of Political and Ideological Bias

Allegations of conservative bias in Facebook's content moderation have centered on claims of disproportionate enforcement against right-leaning viewpoints, particularly during the 2020 U.S. presidential election. Internal documents leaked by whistleblower Frances Haugen in 2021 revealed company debates over handling right-wing content, including accusations of making "special exceptions" for certain publishers amid concerns about misinformation and polarization. These leaks highlighted internal pressures to curb content perceived as harmful from conservative sources, with employees citing risks of amplifying divisive narratives that aligned with left-leaning media critiques. Conservative organizations have argued this reflected systemic left-leaning influences within Meta's workforce and policies, leading to higher scrutiny and removal rates for posts questioning election integrity or criticizing progressive policies, though Meta has denied intentional ideological targeting. Empirical studies have provided mixed evidence on ideological disparities in feeds and moderation outcomes. A 2023 study published in Nature analyzed user feeds during the 2020 election period and found that over 50% of political content came from like-minded ideological sources, yet this prevalence did not significantly increase polarization or exposure to extremes, suggesting algorithmic recommendations reinforced existing preferences without overt bias in suppression. Conversely, research on user-moderator interactions in social platforms indicated that political biases among moderators—often left-leaning—could amplify echo chambers by disproportionately flagging or removing opposing viewpoints, as seen in analyses of comment moderation dynamics. Incentives within Meta's system, driven by advertiser sensitivities and regulatory pressures from left-leaning institutions, have been cited as encouraging over-caution toward conservative speech labeled as "harmful" or "misinformative," while under-enforcing similar extremism from progressive sources normalized in mainstream discourse. Counter-evidence from targeted cases points to inconsistencies beyond a simple left-right divide. A 2025 study examined removals of Arabic-language posts during the 2021 Israel-Palestine escalation and found that much of the deleted content did violate platform rules on violence and incitement, challenging claims of wrongful over-moderation against pro-Palestine (often left-aligned) advocacy. Similarly, research on Meta's automated systems revealed disparities in racial classification for hate speech detection, where algorithmic biases led to uneven flagging across demographic groups, potentially exacerbating ideological enforcement gaps independent of explicit political intent. These findings underscore that while user and employee biases may tilt moderation leftward, technical and contextual factors introduce additional variances, with academic sources—often institutionally left-biased—sometimes downplaying conservative over-removal in favor of broader "polarization" narratives.

Over-Moderation and Censorship Claims

Critics have argued that Facebook's moderation practices result in over-removal of legitimate speech, creating a chilling effect on open discourse. In October 2020, Facebook temporarily limited the visibility of a New York Post article detailing alleged emails from Hunter Biden's laptop, citing concerns over hacked materials and potential disinformation following an FBI warning about Russian influence campaigns. CEO Mark Zuckerberg later acknowledged in August 2022 that the platform's fact-checkers demoted the story, which he described as a precautionary measure that may have erred on the side of caution. Pro-life advocacy groups have frequently reported erroneous flaggings and removals of their content under misinformation and violence policies. For instance, in October 2020, the Susan B. Anthony List claimed Facebook blocked its political advertisements containing factual claims about late-term abortions, deeming them violative despite appeals. Similarly, the Human Coalition documented repeated removals of posts and videos in 2020, including those depicting client testimonies without graphic imagery, which were removed as promoting violence. These incidents highlight false positives in automated and human review systems, where nuanced political expression is conflated with prohibited content. Such over-moderation has been linked to self-censorship among users, with studies indicating that fear of removal discourages participation in borderline discussions. A 2020 report estimated that Facebook's content moderators erred on approximately 300,000 decisions daily out of millions reviewed, underscoring systemic inaccuracies that prioritize erring toward removal to mitigate legal and reputational risks. Critics from right-leaning perspectives contend this erodes Section 230 protections by demonstrating editorial curation akin to publishing, potentially inviting greater liability. While moderation has successfully curtailed child exploitation and terrorist propaganda, the trade-off includes stifled legitimate debate, prompting Meta's January 2025 policy overhaul to adopt looser guidelines and a Community Notes system, explicitly aimed at reducing over-removals and false positives. This shift replaces third-party fact-checking with user-driven corrections, reflecting an admission that prior aggressive enforcement amplified mistakes in pursuit of safety.

Under-Moderation and Persistent Harmful Content

In Myanmar during 2017-2018, Facebook's inadequate moderation of Burmese-language content enabled the rapid spread of hate speech and incitement against the Rohingya Muslim minority, contributing to ethnic violence that the United Nations described as a textbook example of ethnic cleansing. The platform's algorithms amplified inflammatory posts, with limited local moderators failing to curb viral dissemination until after significant real-world harm had occurred, including mass displacement and killings. Facebook later acknowledged that the platform was used to incite violence, but initial under-investment in regional content review exacerbated the issue. Following Meta's 2025 adjustments, which reduced reliance on third-party fact-checkers and shifted toward community-driven notes, the company's own integrity reports indicated upticks in persistent harmful content. Meta's Q1 2025 integrity report documented a slight rise in bullying and harassment prevalence on Facebook, from 0.06-0.07% to 0.07-0.08%, attributed to spikes in such material after the policy changes. Similarly, the prevalence of violent and graphic content increased, with the company noting enforcement actions but acknowledging delays in removal that allowed broader exposure. Research from Northeastern University highlighted systemic moderation lags, where content recommendation algorithms outpace detection and removal, enabling harmful posts to garner most views before takedown. This mismatch facilitated viral spread of threats, including in election contexts where misinformation persisted despite interventions, though causal impacts on voter behavior remain empirically contested with mixed study outcomes. Human rights organizations raised alarms in 2025 that relaxed policies heighten violence risks in fragile regions like Myanmar, potentially echoing prior failures by permitting nuanced incitements to evade proactive filters. While Meta has enhanced proactive removals for explicit categories such as terrorist propaganda—actioning millions of pieces quarterly—gaps persist in addressing subtler, context-dependent harms like coordinated harassment campaigns, underscoring trade-offs between scaled automation and comprehensive threat mitigation. Empirical evidence links under-moderation to amplified real-world risks in high-stakes scenarios, yet claims of direct "genocide fueling" warrant scrutiny given confounding factors like offline actors and platform penetration limits in affected areas.

High-Profile Incidents and Whistleblower Revelations

In early 2020, Facebook implemented policies to remove content claiming that COVID-19 was man-made or engineered, classifying such discussions as misinformation amid the initial scientific consensus favoring natural origins. This included suppression of the lab-leak hypothesis, which was treated as a conspiracy theory lacking sufficient evidence at the time. By May 26, 2021, following renewed scientific and intelligence assessments elevating the lab-leak possibility—such as U.S. intelligence reviews and renewed media coverage—Facebook reversed the policy, ceasing removals of posts asserting man-made origins while still prohibiting false claims about specific bioweapon development or genetic targeting. The shift highlighted internal tensions between rapid misinformation controls and evolving empirical data, prompting criticism that early over-removal stifled legitimate scientific debate. Following the January 6, 2021, U.S. Capitol riot, Facebook indefinitely suspended then-President Donald Trump's accounts on January 7, citing risks of further incitement based on his posts praising participants and questioning election integrity. The decision, upheld by the Oversight Board in May 2021 after review, extended beyond his term and sparked international backlash, including calls from European leaders for consistent application of rules to avoid perceived U.S.-centric bias. Trump's posting privileges were not restored until January 2023, with new guardrails implemented. This incident fueled lawsuits, such as those alleging viewpoint discrimination, and congressional scrutiny over selective enforcement favoring certain political narratives. Frances Haugen, a former Facebook product manager, leaked internal documents in September 2021 and testified before the U.S. Senate Commerce Committee on October 5, 2021, revealing that algorithms prioritized engagement over safety, exacerbating polarization, mental health harms among teens via Instagram, and misinformation spread. Haugen disclosed research showing profit motives often overrode user well-being, with executives aware of these risks yet delaying reforms; for instance, internal studies linked platform design to increased divisiveness but were not publicly acted upon decisively. Her revelations triggered congressional hearings, whistleblower filings of the documents with regulators, and debates on regulatory needs, underscoring causal links between unchecked algorithmic amplification and real-world harms like the January 6 events. In January 2025, after the U.S. election, Meta announced major moderation shifts, including ending third-party fact-checking on January 7 in favor of a Community Notes system and reducing proactive content demotions labeled as "censorship" by CEO Mark Zuckerberg. The Oversight Board criticized these "hasty" changes in April 2025 for lacking impact assessments, noting potential unexamined risks to vulnerable users without evidence of rigorous pre-implementation review. This prompted calls for investigations into policy reversal rationales and their effects, revealing ongoing profit-safety trade-offs amid political pressures.

Government Interventions and Global Variations

United States Regulatory Pressures

In the United States, regulatory pressures on Facebook's content moderation have centered on debates over Section 230 of the Communications Decency Act, which immunizes platforms from liability for user-generated content while allowing moderation discretion. Following 2018 congressional hearings where CEO Mark Zuckerberg faced questions on alleged anti-conservative bias, Republican lawmakers advanced reforms to condition protections on viewpoint neutrality, arguing that editorial-like moderation forfeits immunity. For instance, in 2020, House Republicans introduced legislation specifically targeting platforms' filtering of conservative messages, and President Trump threatened to veto national defense authorization bills unless Section 230 was repealed or reformed to address perceived censorship. The Department of Justice's 2020 review recommended narrowing Section 230 to reduce platforms' unchecked power over speech, emphasizing accountability for biased enforcement. These efforts persisted through 2024, with bills like S. 941 aiming to strip immunity for hosting or censoring accounts linked to foreign adversaries, though broader bias-focused reforms stalled amid partisan divides. Congressional scrutiny intensified via hearings probing moderation's impact on elections and public discourse. Zuckerberg testified before the Senate in November 2020 on content dissemination practices, defending against claims of suppressing conservative viewpoints during the election cycle. A 2023 House Judiciary Committee report documented how federal officials and experts pressured platforms, including Facebook, to censor political speech ahead of the 2020 election, with higher compliance rates for requests aligning with Democratic priorities—such as demoting the New York Post's Hunter Biden laptop story—compared to Republican inquiries on over-moderation. Empirical transparency data from Meta showed elevated responsiveness to executive branch flagging of "misinformation," often favoring institutional narratives on topics like election integrity, though platforms resisted similar conservative-led probes into alleged censorship. Whistleblower Frances Haugen's 2021 testimony further fueled demands for oversight, alleging internal prioritization of engagement over safety, prompting bipartisan but uneven legislative pushes. Executive actions added informal coercion, particularly under the Biden administration, which Zuckerberg later confirmed pressured Meta in 2021 to suppress COVID-19 content labeled as misinformation, leading to regretted compliance and expanded moderation rules. This dynamic extended to election policies, where platforms adjusted algorithms to limit "harmful" narratives, often under threat of regulatory reprisal, as evidenced by White House communications urging removals. The Federal Trade Commission (FTC) imposed a record $5 billion penalty on Facebook in July 2019 for privacy violations under a prior consent decree, mandating structural changes to data practices that indirectly constrained moderation tools reliant on user profiling. By 2025, amid shifting political winds, Meta pivoted toward reduced intervention—ending third-party fact-checking for a community-driven model—citing past overreach as censorship, a move welcomed by Republican critics but decried by civil rights groups as risking unchecked harm. These pressures underscore a causal link between regulatory threats and moderation favoring certain viewpoints, with empirical patterns revealing asymmetric enforcement.

European Union Compliance and Fines

The European Union's Digital Services Act (DSA), which entered into force on November 16, 2022, and became fully applicable to very large online platforms like Facebook from February 17, 2024, imposes obligations on Meta to conduct risk assessments for systemic harms, including the dissemination of illegal content such as hate speech and terrorist material, and to implement mitigation measures like enhanced moderation algorithms and user reporting tools. Non-compliance can result in fines of up to 6% of a platform's global annual turnover, with additional penalties for repeated violations. As a designated very large online platform (VLOP) with over 260 million monthly active users in the EU, Facebook must provide transparent reports on moderation actions and ensure straightforward mechanisms for users to flag illegal content. Preceding the DSA, national laws like Germany's Network Enforcement Act (NetzDG), effective January 1, 2018, required platforms with more than 2 million users to remove or disable manifestly illegal content—such as incitement to hatred or defamation—within 24 hours of notification and to document handling of user complaints, with fines up to €50 million for failures in reporting or enforcement. In July 2019, German authorities fined Facebook €2 million for under-reporting the number of complaints about illegal content between July 2017 and December 2018, highlighting early enforcement challenges under NetzDG. Similar national requirements in other member states, though partially aligned under the DSA's harmonization efforts, emphasized rapid takedowns, contributing to a patchwork of stricter EU-wide standards compared to other regions. Compliance efforts have led to elevated moderation activity in the EU; Meta's DSA transparency report for October 2023 to March 2024 documented the proactive removal of over 69 million pieces of content deemed illegal under EU law on Facebook, including 1.16 million items for hate speech violations, alongside processing 601,863 user notices with a 20.3% removal rate. Empirical assessments indicate DSA enforcement has correlated with reduced prevalence of harmful content in EU languages, such as a noted decrease in violating Lithuanian-language posts following implementation. However, on October 24, 2025, the European Commission preliminarily found Meta in breach of DSA transparency obligations, citing ineffective and overly complex systems for users to report illegal content on Facebook and Instagram, potentially exposing the company to significant fines. Critics argue these frameworks incentivize over-moderation to avoid penalties, resulting in the removal of lawful speech and chilling effects on expression; an empirical study of NetzDG found evidence of platforms erring toward overblocking to minimize risk, with users self-censoring due to broad definitions of illegal content under German law. Such outcomes reflect the EU's emphasis on proactive enforcement against perceived harms, though enforcement often prioritizes categories that overlap with political discourse, raising concerns about disproportionate impacts on non-mainstream viewpoints without equivalent scrutiny of under-moderation elsewhere.

Authoritarian Regimes and Content Restrictions

Meta has demonstrated high levels of compliance with content removal requests from governments in authoritarian regimes, often restricting access to material critical of state policies or leaders to maintain operations in those markets. In such contexts, compliance rates frequently exceed 90%, enabling selective suppression of dissent while platforms adapt policies to align with regime stability demands. In Vietnam, Meta restricted over 4,300 items between July and December 2024 following requests from the Ministry of Public Security and other authorities, citing violations of decrees prohibiting content that distorts or slanders the state. Government reports indicate compliance rates as high as 96% for takedown requests, with platforms like Facebook serving as tools for censoring anti-regime posts amid broader concessions to avoid blocks. Pakistan's government, via the Pakistan Telecommunication Authority, prompted Meta to restrict over 8,200 items in the same period for offenses including blasphemy, sectarian enmity, and content condemning national independence, reflecting patterns of enforcement against perceived threats to religious or state authority. Earlier data show compliance around 85% for blasphemous material removals, underscoring consistent accommodation of demands that prioritize regime sensitivities over unrestricted speech. In Myanmar, government takedown requests have been infrequent, with only private reports leading to 11 restrictions in late 2024, but Meta's 2017-2018 moderation lapses permitted unchecked proliferation of hate speech targeting the Rohingya, fostering conditions for offline violence through inadequate enforcement against regime-aligned incitement. This selective under-moderation, despite internal awareness, effectively abetted authoritarian narratives by failing to curb content that mobilized ethnic pogroms. Efforts to enter China, where Facebook remains blocked, involved developing a dedicated censorship tool reportedly overseen by senior leadership, alongside considerations of user data sharing, illustrating preemptive adaptations to authoritarian controls despite public commitments to free expression. Such compliance patterns, higher in autocracies than democratic settings, have drawn scrutiny for platforms' role in entrenching illiberal information ecosystems, countering claims that attribute suppression solely to state coercion without acknowledging corporate facilitation.

Other Regional Cases

In the Middle East, Meta has occasionally removed content amid government pressures to suppress dissent, as seen in Iran during the 2022 protests, where the platform initially deleted posts featuring the slogan "death to Khamenei" on grounds of potential incitement to violence, only for the Oversight Board to overturn the decision in January 2023, citing undue restriction on political expression. In Uganda, the 2023 Anti-Homosexuality Act prompted heightened online targeting of LGBTQ+ individuals, including harassment and doxxing on Facebook, exacerbating harms under local laws that criminalize related advocacy, though Meta's responses have prioritized global policies over full alignment with such restrictions. In Southeast Asian nations, compliance with strict local statutes has led to proactive content takedowns. Thailand's authorities have enforced lèse-majesté laws through repeated demands, culminating in 2020 prosecutions against Facebook for failing to remove 436 specified posts within deadlines and a September 2025 regulation imposing 24-hour removal obligations for flagged monarchy-critical material. In Indonesia, platforms including Facebook have faced mandates to excise blasphemy-accused content, with cases like the 2022 prosecution of a former minister over a social media post highlighting how such removals aid enforcement against religious minorities amid rising prosecutions under expanded 2025 criminal code provisions. In Latin America, Brazil's 2022 presidential election saw Meta remove over 140,000 posts violating election interference policies, including coordinated inauthentic behavior and false claims about voting processes, as part of efforts to curb disruptions tied to supporters of then-President Jair Bolsonaro. Across these regions, Meta's ad-hoc adaptations to local regulations are reflected in transparency data showing surging government demands; for example, first-half 2024 reports logged over 99,000 user data requests from India—many linked to content disputes—outpacing U.S. figures and underscoring reliance on legal compliance over uniform standards.

Effectiveness, Impacts, and Empirical Assessments

Quantitative Metrics of Success and Failure

Meta's content moderation systems achieve high proactive detection rates for explicit violations such as child sexual exploitation material, with over 96.7% of such content actioned before user reports in late 2023, and subsequent reports indicating rates exceeding 99% through advanced hashing and machine-learning models. In Q4 2024, Facebook alone submitted over 2 million CyberTipline reports to the National Center for Missing & Exploited Children, reflecting the scale of automated detections. Similarly, for fake accounts, Meta proactively detected and removed over 99% of the 4.3 billion accounts actioned in 2024. These metrics demonstrate efficacy in prioritized, unambiguous categories where technical precision is feasible. Following January 2025 policy shifts emphasizing AI-user hybrids like Community Notes and reduced third-party interventions, Meta reported an approximately 50% reduction in enforcement errors—erroneous takedowns—between Q4 2024 and Q1 2025, alongside overall decreases in content removals without proportional rises in violations. Daily operations involve reviewing billions of content pieces across platforms, with millions of violations actioned, though these represent under 1% of total uploads. Proactive rates for less objective categories, such as hate speech, hover around 95%, with systems actioning 3.4 million such posts in early 2025 periods. Failures manifest in false positive removals, particularly for nuanced categories like hate speech and bullying, where appeals data shows significant overturn rates—up to 36% of appeals tied to such violations—indicating persistent over-removal risks estimated at 5-10% in algorithmic outputs based on internal benchmarks and third-party analyses. Interventions often occur post-viral dissemination, as detection lags for emerging or context-dependent violations, allowing millions of views before takedown. Oversight Board audits have critiqued these gaps, urging Meta to disclose internal accuracy audits for human and automated reviews to verify claimed improvements. Despite reductions in reported violations post-reforms, the Board's findings highlight that aggregate efficacy remains uneven across violation types.
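To make the relationship between these figures concrete, the sketch below treats the appeal overturn rate as a rough proxy for the false-positive share among appealed removals and derives a lower-bound count of wrongful removals. All numbers are hypothetical placeholders, since appealed items are not a random sample of all enforcement actions.

```python
# Hypothetical placeholders only; restored-on-appeal counts give a lower bound,
# not a measurement, of wrongful removals.

def overturn_rate(appeals_decided: int, decisions_reversed: int) -> float:
    """Share of appealed decisions that were reversed on review."""
    return decisions_reversed / appeals_decided

def restored_items(total_removals: int, appeal_rate: float, overturn: float) -> float:
    """Lower-bound estimate of wrongful removals: only appealed-and-overturned items count."""
    return total_removals * appeal_rate * overturn

quarterly_removals = 30_000_000   # hypothetical removals in a quarter
appeals_decided = 1_500_000       # hypothetical: 5% of removals are appealed
decisions_reversed = 540_000      # hypothetical: 36% of those appeals succeed

rate = overturn_rate(appeals_decided, decisions_reversed)
print(f"appeal overturn rate: {rate:.0%}")  # 36%
print(f"restored items (lower bound on wrongful removals): "
      f"{restored_items(quarterly_removals, appeals_decided / quarterly_removals, rate):,.0f}")  # 540,000
```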

Studies on Bias and Societal Outcomes

A 2022 Stanford Human-Centered Artificial Intelligence (HAI) policy brief examined user perceptions of content moderation legitimacy on Facebook, finding that participants viewed expert panels as more legitimate than algorithmic or paid contractor processes, with algorithms perceived as opaque and unrepresentative despite their efficiency. This perception gap persisted across political ideologies, suggesting that reliance on AI moderation may erode trust in enforcement decisions, particularly when users question the fairness of automated judgments without human oversight. Research from 2024 highlights biases in AI-driven moderation on platforms like Facebook, where the role of speaker intent in abusive speech is frequently underappreciated or ignored, leading to over-removal of contextually non-harmful posts. Algorithms often prioritize surface-level signals like keywords over nuanced intent, resulting in inconsistent application that disadvantages users expressing legitimate dissent mistaken for abuse. Such oversights contribute to claims of viewpoint discrimination, as evidenced by empirical audits showing disparate enforcement against conservative-leaning content in politically charged topics. A 2023 study published in Nature analyzed Facebook feeds and found that exposure to like-minded sources is prevalent—comprising about 65% of political content—but does not causally increase polarization, as experimentally reducing such exposure via algorithmic tweaks yielded no measurable shift in users' partisan attitudes or affective polarization. This challenges narratives attributing rising societal divides primarily to algorithmic echo chambers, instead pointing to offline factors as stronger drivers of ideological entrenchment. Empirical assessments of moderation's societal outcomes reveal mixed causal effects on harm reduction. A PNAS study demonstrated that proactive moderation can reduce the spread of highly harmful content by up to 20-30% on fast-paced platforms, correlating with lower offline engagement in targeted harms like coordinated harassment campaigns. However, recent policy shifts emphasizing free expression on Facebook led to a 5-10% rise in detected violent content and harassment reports in 2024-2025, even as overall removals declined, indicating potential trade-offs where relaxed enforcement amplifies certain risks without proportionally curbing others. Evidence also links unmoderated hate speech to offline violence spillovers, such as ethnic targeting, though causal chains remain contested due to confounding variables like pre-existing tensions. Longitudinal causal studies are scarce owing to platforms' data opacity, limiting robust inference on net societal impacts; most evidence derives from short-term experiments or correlational audits, underscoring gaps in understanding sustained effects on discourse quality or democratic health.

Economic and User Behavior Effects

Meta Platforms' content moderation operations have historically imposed significant economic burdens, with the company disclosing expenditures surpassing $13 billion on safety and security initiatives from 2016 through 2021, encompassing human reviewers, AI systems, and related infrastructure. In efforts to mitigate these costs, Meta accelerated adoption of AI-driven moderation by 2025, reducing reliance on human evaluators for risk assessment, while outsourcing partners like Accenture previously commanded annual contracts around $500 million as of 2021. A pivotal shift occurred in January 2025, when Meta discontinued its third-party fact-checking program in favor of a Community Notes system, enabling users to append contextual annotations to misleading posts—mirroring X's model and projected to lower operational expenses by decentralizing verification. This transition reflects a broader incentive structure in which moderation expenses, potentially billions annually, compete against revenue imperatives, prompting optimizations toward scalable, lower-cost mechanisms without fully sacrificing enforcement. User behavior exhibits patterns of self-censorship in response to moderation policies, as evidenced by Meta's internal analysis revealing instances where individuals draft but withhold posts anticipating negative reactions or platform penalties. Research on North American users corroborates this, documenting avoidance of statements on historically contentious topics due to fears of cancellation or algorithmic deprioritization, though quantitative prevalence varies across studies without converging on uniform estimates such as the 20-30% figures sometimes cited. Perceptions of over-removal have correlated with retention declines; Meta admitted in late 2024 to frequent erroneous content and account suspensions, eroding trust and contributing to observed drops in platform engagement time by 2025. Post-2022, some users migrated to platforms like X perceived as less stringently moderated, particularly following its ownership change, though aggregate data indicates limited exodus from Facebook, with alternatives attracting niche dissatisfied cohorts amid broader fragmentation. Engagement algorithms amplify these dynamics by prioritizing sensational, emotionally provocative content to sustain user attention and ad impressions—core to Meta's $132 billion in 2023 advertising revenue—yet this design is inherently in tension with harm mitigation, as high-engagement material often skirts or exceeds moderation thresholds. Empirical assessments link such algorithmic favoritism to heightened polarization, complicating causal isolation of moderation's role in user retention versus inherent platform stickiness. Advertiser responses have imposed episodic revenue pressures tied to moderation lapses; the 2020 "Stop Hate for Profit" boycott, mobilizing brands against perceived lax enforcement, yielded an estimated 8% temporary ad revenue dip, though top participants accounted for merely 6% of total spend, limiting systemic impact. By contrast, 2025's moderation relaxations, including reduced fact-checking and content removals, elicited no widespread pullbacks, with Q1 ad inflows exceeding $41 billion despite brand-safety apprehensions, underscoring advertisers' tolerance for policy shifts when performance metrics remain robust.

References

  1. [1]
    Community Standards | Transparency Center
    The Community Standards outline what is and isn't allowed on Facebook, Instagram, Messenger and Threads.Hateful Conduct · Bullying and Harassment · Locally Illegal Content... · Spam
  2. [2]
    Meta Transparency Center
    We use technology and review teams to detect, review and take action on millions of pieces of content every day on Facebook and Instagram.How Meta prioritizes content... · Meta Content Library and API · How Meta improves
  3. [3]
    More Speech and Fewer Mistakes - About Meta
    Jan 7, 2025 · Allowing More Speech. Over time, we have developed complex systems to manage content on our platforms, which are increasingly complicated for us ...
  4. [4]
    Integrity Reports, Fourth Quarter 2024 | Transparency Center
    Feb 27, 2025 · We're publishing our fourth quarter reports for 2024, including the Community Standards Enforcement Report, Adversarial Threat Report, Widely Viewed Content ...
  5. [5]
    Oversight Board | Improving how Meta treats people and ...
    Providing an independent check on Meta's content moderation. The Oversight Board's mission is to improve how Meta treats people and communities around the world ...
  6. [6]
    What Does Research Tell Us About Technology Platform ...
    May 20, 2025 · Empirical research over the past decade reveals that social media content moderation has not always been neutral in its social or political impact.
  7. [7]
    [PDF] Reasoning about Political Bias in Content Moderation
    These allegations of political bias, however, are based on anecdotes, and there is little support from logical reasoning and empirical evidence (Jiang, ...
  8. [8]
    [PDF] Facebook report - The Future of Free Speech
    Release/Launch Date: February 4, 2004. • Number of Users/Visitors: 2.910 billion monthly active users103. • Short Overview of Content Moderation Process: ...
  9. [9]
    History of Facebook: Facts and Latest Developments - TheStreet
    Feb 18, 2020 · History of Facebook ... Founded on February 4, 2004, by Mark Zuckerberg, Eduardo Saverin, Dustin Moskovitz and Chris Hughes in a Harvard dorm room ...
  10. [10]
  11. [11]
    Facebook reveals its censorship guidelines for the first time
    Apr 24, 2018 · Facebook's content policies, which began in earnest in 2005, addressed nudity and Holocaust denial in the early years. They have ballooned from ...Missing: date | Show results with:date
  12. [12]
    Facebook Terms of Service (Statement of Rights and Responsibilities)
    Nov 15, 2013 · Facebook Terms of Service (Statement of Rights and Responsibilities) summarized/explained in plain English.
  13. [13]
    How does Facebook use machine learning? - Quora
    May 5, 2013 · The Black Ops team uses classification to detect spam content and users (Spam Detection). The discovery team (which includes search and ...How does Facebook use AI and machine learning?How do Facebook posts get marked as spam? Is it done ...More results from www.quora.com
  14. [14]
    [PDF] Who Moderates the Social Media Giants?
    Jun 1, 2020 · By 2013, when Willner left the company, he says, Facebook had more than a billion users and about. 1,000 moderators, most of them now outsourced ...
  15. [15]
    Still Protesting? Facebook Will Soon Force You To Switch To Timeline
    Jul 30, 2012 · Over the next few months, anyone still refusing to voluntarily switch to the Timeline profile redesign will be automatically migrated, Facebook tells me.
  16. [16]
    Facebook privacy scare illuminates the evolution of online ...
    Sep 24, 2012 · Facebook users worldwide today are claiming that private messages sent in 2008 and 2009 are now publicly viewable on their Timeline profile pages.
  17. [17]
    Facebook says it has spent $13 billion on safety and security efforts ...
    Sep 21, 2021 · Facebook Inc. said it has spent more than $13 billion on safety and security efforts since the 2016 U.S. election, and now has 40,000 employees ...
  18. [18]
    How Facebook got addicted to spreading misinformation
    Mar 11, 2021 · The company's AI algorithms gave it an insatiable habit for lies and hate speech. Now the man who built them can't fix the problem.
  19. [19]
    The Silent Partner Cleaning Up Facebook for $500 Million a Year
    Oct 28, 2021 · He has promoted the use of artificial intelligence to weed out toxic posts and touted efforts to hire thousands of workers to remove the ...
  20. [20]
    Cambridge Analytica and Facebook: The Scandal and the Fallout ...
    Apr 4, 2018 · Revelations that digital consultants to the Trump campaign misused the data of millions of Facebook users set off a furor on both sides of the Atlantic.
  21. [21]
    Publishing Our Internal Enforcement Guidelines and Expanding Our ...
    Apr 24, 2018 · We are publishing the internal implementation guidelines that our content reviewers use to make decisions about what's allowed on Facebook.
  22. [22]
    Facebook releases long-secret rules on how it polices the service
    Apr 24, 2018 · The new community standards do not incorporate separate procedures under which governments can demand the removal of content that violates local ...
  23. [23]
    Some Facebook content reviewers in India complain of low pay ...
    Feb 28, 2019 · On a busy day, contract employees in India monitoring nudity and pornography on Facebook and Instagram will each view 2000 posts in an ...
  24. [24]
    Content moderators at YouTube, Facebook and Twitter see the ...
    Jul 25, 2019 · Social media giants have tasked a workforce of contractors with reviewing suicides and massacres to decide if such content should remain online.
  25. [25]
    Facebook admits it was used to 'incite offline violence' in Myanmar
    Nov 6, 2018 · Facebook says it is tackling problems highlighted in an independent report on its role in ethnic violence.
  26. [26]
    Facebook Admits It Was Used to Incite Violence in Myanmar
    Nov 6, 2018 · A report commissioned by Facebook found the company failed to keep its platform from being used to “foment division and incite offline violence” in Myanmar.
  27. [27]
    Community Standards Enforcement Report, November 2019 Edition
    Nov 13, 2019 · Expanded data on terrorist propaganda: Our Dangerous Individuals and Organizations policy bans all terrorist organizations from having a ...
  28. [28]
    Facebook partners with fact-checking organizations to begin ...
    Dec 15, 2016 · Facebook is introducing tools designed to make it easier to report links shared in the News Feed as fake news, and it's working with four independent fact- ...
  29. [29]
    In Response to Oversight Board, Trump Suspended for Two Years
    Jun 4, 2021 · In Response to Oversight Board, Trump Suspended for Two Years; Will Only Be Reinstated if Conditions Permit.
  30. [30]
    Former President Trump's suspension - The Oversight Board
    May 5, 2021 · The Board has upheld Facebook's decision on January 7, 2021, to restrict then-President Donald Trump's access to posting content on his Facebook page and ...
  31. [31]
    Facebook Let 2020 Election Misinformation Flow, Report Says | TIME
    Mar 23, 2021 · The company could have prevented billions of views on pages sharing misinformation related to the 2020 U.S. election, according to Avaaz.
  32. [32]
    Facebook whistleblower Frances Haugen - CBS News
    Oct 4, 2021 · But tonight, Frances Haugen is revealing her identity to explain why she became the Facebook whistleblower. Facebook's response to 60 Minutes' ...
  33. [33]
    4 takeaways from Facebook whistleblower Frances Haugen's ... - NPR
    Oct 5, 2021 · Technology · Whistleblower tells Congress that Facebook products harm kids and democracy · Haugen was an insider, making her a powerful critic.
  34. [34]
    Meta's new hate speech rules allow users to call LGBTQ people ...
    Jan 7, 2025 · Changes to its hate speech guidelines were among broader policy shifts Meta made to its moderation practices. ...
  35. [35]
    Meta's oversight board rebukes company over policy overhaul
    Apr 23, 2025 · Meta Platforms' Oversight Board on Wednesday sharply rebuked the Facebook and Instagram owner over a policy overhaul in January that cut ...
  36. [36]
    Meta Shares Latest Data on Policy Enforcements and Content Trends
    Aug 27, 2025 · Since we began our efforts to reduce over-enforcement, we've cut enforcement mistakes in the U.S. by more than 75% on a weekly basis.” Which ...
  37. [37]
    New Technology to Fight Child Exploitation - About Meta
    Oct 24, 2018 · We're using artificial intelligence and machine learning to proactively detect child nudity and previously unknown child exploitative content when it's ...
  38. [38]
    Meta Launches New Content Moderation Tool as It Takes Chair of ...
    Dec 13, 2022 · This January, Meta will become chair of cross-industry counter-terrorism organization GIFCT. · We're launching a new free tool to help platforms ...
  39. [39]
    How Facebook uses super-efficient AI models to detect hate speech
    Nov 19, 2020 · Facebook AI recently developed a new Transformer architecture called Linformer. It makes it possible to use them efficiently at scale.
  40. [40]
  41. [41]
    Our New AI System to Help Tackle Harmful Content - About Meta
    Dec 8, 2021 · We're introducing new AI technology that can adapt more easily to take action on new or evolving types of harmful content faster.
  42. [42]
    Here's how we're using AI to help detect misinformation - AI at Meta
    Nov 19, 2020 · We're now introducing new AI systems to automatically detect new variations of content that independent fact-checkers have already debunked.
  43. [43]
    The Limitations of Automated Tools in Content Moderation
    One of the primary concerns around the deployment of automated solutions in the content moderation space is the fundamental lack of transparency that exists ...
  44. [44]
    Facebook's content moderation 'happens too late,' says new ...
    May 29, 2025 · Research from Northeastern University finds a “mismatch” between the speed of Facebook's content moderation and its recommendation algorithm.
  45. [45]
    How technology detects violations - Transparency Center
    Oct 18, 2023 · Our technology proactively detects and removes the vast majority of violating content before anyone reports it.
  46. [46]
    Meta's Gruesome Content Broke Him. Now He Wants It to Pay | WIRED
    Feb 6, 2023 · While US-based moderators made around $15 per hour, moderators in places like India, the Philippines, and Kenya make much less, according to ...
  47. [47]
    Inside Facebook's African Sweatshop - Time Magazine
    Feb 17, 2022 · At an external Facebook content moderation facility in Kenya, employees are paid as little as $1.50 per hour for traumatizing work.
  48. [48]
    How Does Facebook Moderate Content? AI, Human Review & Policies
    Apr 3, 2025 · These moderators receive specialized training to assess flagged posts against Facebook's detailed guidelines. ... cultural nuances. Scale ...
  49. [49]
    How Facebook Trains Content Moderators - VICE
    Feb 25, 2019 · It's just interesting that we have to take those cultural nuances into account, when ensuring that people even know about the resources that are ...
  50. [50]
    How Meta prioritizes content for review - Transparency Center
    Nov 12, 2024 · Our human review teams use their expertise in certain policy areas and locales to make difficult, often nuanced judgment calls. Every time ...
  51. [51]
    Facebook is now using AI to sort content for quicker moderation
    Nov 13, 2020 · The new moderation workflow now uses machine learning to sort the queue of posts for review by human moderators.
  52. [52]
    The people behind Meta's review teams - Transparency Center
    Jan 19, 2022 · We have over 20 sites around the world, where these teams can review content in over 50 languages. As an essential branch of our content ...
  53. [53]
    The secret lives of Facebook moderators in America | The Verge
    while the average Facebook employee has a total compensation of $240,000. In stark ...
  54. [54]
    Facebook moderators tell of strict scrutiny and PTSD symptoms
    Feb 26, 2019 · Facebook says it has hotline for whistleblowers after report paints picture of contractors' working conditions.
  55. [55]
    Outsourced Facebook content moderators suffering trauma 'in silence'
    May 12, 2021 · Staff turnover in these outsourcing companies is high with most people only lasting two years in the position, she said. Those who are ...
  56. [56]
    [PDF] Facing Contracting Issues:The Psychological and Financial Impacts ...
    Contractual Labor Arrangements: Financial & Legal Risks to Platforms. A. Worker Classification in the Gig Economy.
  57. [57]
    [PDF] 2021 Annual Report - The Oversight Board
    Users submitted more than a million appeals to the Board during this period, with the vast majority of appeals to restore content to Facebook or Instagram ...
  58. [58]
    Meta's Oversight Board Received 400K Appeals in 2023
    Jun 27, 2024 · Indeed, according to the Oversight Board, it issued more than 50 decisions in 2023, overturning Meta's original decision in around 90% of cases.
  59. [59]
    A million appeals for justice, and 14 cases overturned
    Jul 15, 2022 · A million appeals for justice, and 14 reversals. That's the scorecard from the Facebook Oversight Board's first annual report, released this ...
  60. [60]
    Meta's content moderators face worst conditions yet at secret… | TBIJ
    Apr 27, 2025 · Content warning: This story contains references to violence, suicide, child abuse and self-harm. A suicide attempt, depression, ...
  61. [61]
    Content moderation is a new factory floor of exploitation – labour…
    Jun 26, 2025 · Data labelers and content moderators are the unseen gatekeepers of our digital lives, but they often work in exploitative conditions.
  62. [62]
    Report Content on Facebook | Facebook Help Center
    Report Content on Facebook | Facebook Help Center.
  63. [63]
    Find out what happens when you report something to Facebook and ...
    Your name and other personal information will be kept completely confidential when we reach out to the person responsible.
  64. [64]
    Meta Transparency Reports
    A report on how well we're helping people protect their IP, including combating copyright and trademark infringement, and counterfeit goods.
  65. [65]
    Testing Begins for Community Notes on Facebook, Instagram and ...
    Mar 13, 2025 · Originally published on March 13, 2025 at 5:00AM PT. In January, Meta announced that we will end our third party fact checking program and move ...
  66. [66]
    Meta adds new features to Community Notes fact checks, including ...
    Sep 10, 2025 · Now users will be notified when they've interacted with a post on Facebook, Instagram, or Threads that receives a Community Note.
  67. [67]
    Meta's Community Notes program is promising, but needs to ...
    May 20, 2025 · Meta's Community Notes program is promising, but needs to prioritize transparency · Community-driven moderation · Positive impact? · Challenges and ...
  68. [68]
    Meta's New Content Policy Will Harm Vulnerable Users. If It Really ...
    Jan 9, 2025 · The use of automated tools for content moderation leads to the biased removal of this language, as well as essential information. In 2022, Vice ...
  69. [69]
    Meet the Next Fact-Checker, Debunker and Moderator: You
    Jan 7, 2025 · Researchers believe that community fact-checking is effective when paired with in-house content moderation efforts. But Meta's hands-off ...
  70. [70]
    Hateful Conduct - Transparency Center
    We recognize that people sometimes share content that includes someone else's hate speech to condemn it or raise awareness. ...
  71. [71]
    Expanding Our Policies on Voter Suppression - About Meta
    Oct 15, 2018 · Last month, we extended this policy further and are expressly banning misrepresentations about how to vote, such as claims that you can vote ...
  72. [72]
    Removing Coordinated Inauthentic Behavior - About Meta
    Jul 8, 2020 · Today, we removed four separate networks for violating our policy against foreign interference and coordinated inauthentic behavior (CIB).
  73. [73]
    Mark Zuckerberg says Facebook will 'review' policies on speech ...
    Jun 5, 2020 · Mark Zuckerberg says Facebook will 'review' policies on speech promoting state violence. He also says he supports the Black Lives Matter ...
  74. [74]
    Removing Coordinated Inauthentic Behavior - About Meta
    Oct 8, 2020 · We removed 78 Facebook accounts, 45 Pages, 93 Groups and 46 Instagram accounts engaged in coordinated inauthentic behavior.
  75. [75]
    Community Standards Enforcement Report - Transparency Center
    Our quarterly report on how well we're doing at enforcing our policies on Facebook and Instagram.
  76. [76]
    [PDF] Content Moderation By The Numbers - NetChoice
    Such as violence, terrorism/violent extremism, child sexual exploitation, abuse/harassment, hateful conduct, suicide/self-harm, sensitive media, including ...
  77. [77]
    Facebook claims it proactively detected 95% of hate speech ...
    Aug 11, 2020 · Facebook claims it proactively detected 95% of hate speech removed in Q2 2020. About 22.5 million pieces of content published to Facebook were ...
  78. [78]
    Trends in Meta's Content Moderation 2017–2025: Spam, Fake ...
    Oct 13, 2025 · Meta's Q2 2025 Community Standards Report highlights that most enforcement targets spam and fake accounts, which dominate volumes, ...
  79. [79]
    Meta claims hate speech is low, but so is the bar - SMEX
    Jul 29, 2025 · Meta claims that its Community Standards Enforcement Report for the first quarter of 2025 resulted in significant improvement in moderation ...
  80. [80]
    Meta Ad Library Report - Facebook
    This report shows the number of ads about social issues, elections or politics, and the total amounts spent for those ads, reaching Accounts Center accounts in ...
  81. [81]
    Oversight Board recommendations - Transparency Center
    The Oversight Board can issue recommendations for Meta's content policies and how we enforce our policies on Facebook, Instagram, and Threads.
  82. [82]
    Meta's Oversight Board, DOI Policy, and Kolovrat Symbol Decision
    Jul 29, 2025 · The Kolovrat decision reveals more than a dispute over a symbol—it lays bare the deep fault lines in Meta's content moderation architecture.
  83. [83]
    The Meta Oversight Board in the Trump Era - Verfassungsblog
    May 26, 2025 · In late April 2025, the Meta Oversight Board (“Board”) published its first six decisions since Meta implemented sweeping changes to its ...
  84. [84]
    The Governance, Legitimacy and Efficacy of Facebook's Oversight ...
    Aug 18, 2024 · Of the more than 100 decisions by the Board, a total of 80% had been overturned. Interestingly, the percentage of overturned decisions has been ...
  85. [85]
    When Freedom Bites Back: Meta, Moderation, and the Limits of ...
    Jan 21, 2025 · It cites a 10-20% error rate in content removals—a staggering figure when multiplied across billions of daily posts. While Meta will maintain ...
  86. [86]
    Content Moderation in a New Era for AI and Automation
    The ways in which social media companies enforce their content rules and curate people's feeds have dramatically evolved over the 20 years since Facebook ...
  87. [87]
    Meta to end fact-checking program on Facebook and Instagram - NPR
    Jan 7, 2025 · CEO Mark Zuckerberg called the company's previous content moderation policies "censorship," repeating talking points from President-elect ...
  88. [88]
    Community Notes: A New Way to Add Context to Posts
    Sep 10, 2025 · Meta has rolled out a Community Notes feature that lets people add more context to Facebook, Instagram and Threads posts that are potentially misleading or ...
  89. [89]
    Meta Oversight Board co-chair responds to company's ... - NPR
    Jan 10, 2025 · Meta Oversight Board co-chair responds to company's decision to end fact-checking. Heard on All Things ...
  90. [90]
    Facebook documents show inner turmoil over approach ... - Engadget
    Oct 24, 2021 · Internal documents show Facebook's infighting over right-wing content, including claims it makes 'special exceptions' for publishers.
  91. [91]
  92. [92]
    The Facebook Papers: What you need to know - NPR
    Oct 25, 2021 · The documents, known collectively as the Facebook Papers, were shared in redacted form with Congress after whistleblower Frances Haugen, a ...
  93. [93]
    Like-minded sources on Facebook are prevalent but not polarizing
    Jul 27, 2023 · Fourth, our study examines the prevalence of echo chambers using the estimated political leanings of users, Pages, and groups who share content ...
  94. [94]
    U-M study explores how political bias in content moderation on ...
    Oct 28, 2024 · Our research documents political bias in user-driven content moderation, namely comments whose political orientation is opposite to the moderators' political ...
  95. [95]
    Walid Magdy finds that deleted Facebook posts on Middle East ...
    Apr 25, 2025 · A recent study which found that hundreds of posts about an outbreak of violence in Israel and Palestine in 2021 that were removed by a social media company did ...
  96. [96]
    [PDF] Meta's AI Bias: To What Extent Do Meta's Algorithmic ...
    This research examines how Meta's algorithmic processes for classifying racist content affect fairness, bias, and social justice across Facebook, Instagram, and ...
  97. [97]
    Zuckerberg tells Rogan FBI warning prompted Biden laptop story ...
    Aug 26, 2022 · In an interview with Joe Rogan, Mark Zuckerberg says the story was flagged after an FBI warning.
  98. [98]
    Mark Zuckerberg admits Facebook censored Hunter Biden laptop ...
    Aug 26, 2022 · Facebook CEO Mark Zuckerberg has admitted that the social networking site censored a story about US President Joe Biden's son during the 2020 US presidential ...
  99. [99]
    Anti-abortion rights group SBA List claims Facebook blocked its ads ...
    Oct 16, 2020 · The Susan B. Anthony List, one of the biggest anti-abortion rights groups in the US, said Friday that Facebook blocked its political advertisements for ...
  100. [100]
    Inside Social Media's War Against Pro-Life Information - Andy Biggs
    Feb 3, 2020 · The Texas-based pro-life group Human Coalition has repeatedly seen its content censored by tech companies.
  101. [101]
    [PDF] The Myth of the Chilling Effect - Harvard Journal of Law & Technology
    such, Facebook's content moderation decisions likely create a chilling effect, much like public regulations that censor speech are purposed to do. In ...
  102. [102]
    Report: Facebook Makes 300,000 Content Moderation Mistakes ...
    Jun 9, 2020 · Facebook content moderators review posts, pictures, and videos that have been flagged by AI or reported by users about 3 million times a day.
  103. [103]
    Meta's content moderation changes closely align with FIRE ...
    Jan 9, 2025 · Mark Zuckerberg announces sweeping changes to bolster free expression on Facebook, Instagram, and Threads that track FIRE's 2024 Social Media Report.
  104. [104]
    Myanmar: Facebook's systems promoted violence against Rohingya
    Sep 29, 2022 · Facebook owner Meta's dangerous algorithms and reckless pursuit of profit substantially contributed to the atrocities perpetrated by the Myanmar military ...
  105. [105]
    Report: Facebook Algorithms Promoted Anti-Rohingya Violence | TIME
    Sep 28, 2022 · Amnesty International claims Meta ignored warnings of human rights risks in Myanmar and implemented inadequate safeguards.
  106. [106]
    Facebook: We didn't do enough to prevent Myanmar violence - CNN
    Nov 6, 2018 · The United Nations has called for Myanmar's military leaders to be investigated and prosecuted for genocide, crimes against humanity and war ...
  107. [107]
    Integrity Reports, First Quarter 2025 | Transparency Center
    May 29, 2025 · We're publishing our first quarter reports for 2025, including the Community Standards Enforcement Report, where following the changes announced in January we' ...
  108. [108]
    Facebook sees rise in violent content and harassment after policy ...
    May 29, 2025 · Facebook saw an uptick in violent content, bullying and harassment despite an overall decrease in the amount of content taken down by Meta.
  109. [109]
    Meta Says Online Harassment Is up After Content Moderation ...
    May 29, 2025 · Meta reported that online bullying and harassment on Facebook rose slightly in Q1 2025 compared to Q4 2024. "There was a small increase in ...
  110. [110]
    Researchers call for new way of thinking about content moderation
    Apr 29, 2025 · Study finds Facebook usually takes down posts too late to prevent most people from viewing them.
  111. [111]
    The effects of Facebook and Instagram on the 2020 election - NIH
    May 13, 2024 · We provide the largest-scale evidence available to date on the effect of Facebook and Instagram access on political knowledge, attitudes, ...
  112. [112]
    Meta's new content policies risk fueling violence and genocide
    Feb 17, 2025 · Meta's algorithms prioritize and amplify some of the most harmful content, including advocacy of hatred, misinformation, and content inciting racial violence.
  113. [113]
    Bullying and Harassment - Transparency Center
    We do not tolerate bullying and harassment on Facebook and Instagram. Because we recognize bullying can be especially harmful for minors, our policies provide ...
  114. [114]
    Facebook, Telegram, and the Ongoing Struggle Against Online Hate ...
    Sep 7, 2023 · Case studies from Myanmar and Ethiopia show how online violence can exacerbate conflict and genocide—and what social media companies can do in ...
  115. [115]
    An Update on Our Work to Keep People Informed and Limit ...
    Apr 16, 2020 · We are expanding our efforts to remove false claims on Facebook and Instagram about COVID-19, COVID-19 vaccines and vaccines in general during the pandemic.
  116. [116]
    Facebook no longer treating 'man-made' Covid as a crackpot idea
    May 27, 2021 · The findings have reinvigorated the debate about the so-called Wuhan lab-leak theory, once dismissed as a fringe conspiracy theory.
  117. [117]
    Facebook lifts ban on posts claiming Covid-19 was man-made
    May 27, 2021 · Facebook has lifted a ban on posts claiming Covid-19 was man-made, following a resurgence of interest in the “lab leak” theory of the disease's onset.
  118. [118]
    Facebook's reversal on banning claims that covid-19 is man-made ...
    May 27, 2021 · Facebook reversed course Thursday and said that it would no longer remove posts that claim the virus is man-made.
  119. [119]
    Facebook shows 'true and ugly colors' with Wuhan stories ban
    May 27, 2021 · Facebook's reversal comes as the lab leak theory gains steam, amplified by a recent report in The Wall Street Journal that said three ...
  120. [120]
    Facebook Bans President Trump From Posting For The Rest Of His ...
    Jan 7, 2021 · Facebook said Thursday it is banning President Trump until the end of his presidency and possibly longer. It is the most forceful action a ...
  121. [121]
    Ending Suspension of Trump's Accounts With New Guardrails to ...
    Jan 25, 2023 · We'll be reinstating Mr. Trump's Facebook and Instagram accounts in the coming weeks with new guardrails in place to deter repeat offenses.
  122. [122]
    Trump's banishment from Facebook and Twitter: A timeline.
    May 13, 2022 · Trump, social media sites including Twitter and Facebook were urged to limit hate speech and the glorification of violence on their platforms.
  123. [123]
    Facebook whistleblower testifies before Senate committee - CNBC
    Oct 5, 2021 · Facebook whistleblower Frances Haugen told a Senate panel Tuesday that Congress must intervene to solve the “crisis” created by her former employer's products.
  124. [124]
    The Key Takeaways from Frances Haugen's Facebook Testimony
    Oct 25, 2021 · In her testimony, she encouraged lawmakers to demand more documents and internal research from Facebook, stating that it was only through ...
  125. [125]
    Meta 'hastily' changed moderation policy with little regard to impact ...
    Apr 23, 2025 · Mark Zuckerberg's Meta announced sweeping content moderation changes “hastily” and with no indication it had considered the human rights impact.
  126. [126]
    Meta Oversight Board calls on company to investigate how content ...
    Apr 23, 2025 · Meta's Oversight Board is calling on the company to evaluate how recent changes to its content moderation policies could impact the human rights of some users.
  127. [127]
    Trump and Section 230: What to Know | Council on Foreign Relations
    President Trump has threatened to veto a major defense funding bill over a law that protects social media companies from liability for what their users post.
  128. [128]
    President Trump's Freedom of Speech Order Takes Aim at Social ...
    Jan 27, 2025 · Chairman Carr favors Section 230 reforms that track the positions outlined in a July 2020 Petition for Rulemaking filed at the FCC by the ...
  129. [129]
    Social Media: Content Dissemination and Moderation Practices
    Mar 20, 2025 · This report provides an overview of social media platforms and their content moderation practices. ... platforms operated by Meta Platforms ...
  130. [130]
    DEPARTMENT OF JUSTICE'S REVIEW OF SECTION 230 OF THE ...
    The US Department of Justice analyzed Section 230 of the Communications Decency Act of 1996, which provides immunity to online platforms from civil liability.
  131. [131]
    Removing Section 230 Immunity for Official Accounts of Censoring ...
    To remove immunity protections from social media platforms which host accounts of censoring foreign adversaries, and for other purposes. IN THE SENATE OF THE ...
  132. [132]
    'Censorship teams' vs. 'working the refs': Key moments from ... - Politico
    Nov 17, 2020 · Facebook CEO Mark Zuckerberg made his third congressional appearance since July. Twitter CEO Jack Dorsey joined him in front of the Senate ...
  133. [133]
    [PDF] The Weaponization of “Disinformation” Pseudo-Experts and ...
    Nov 6, 2023 · These reports of alleged mis- and disinformation were used to censor Americans engaged in core political speech in the lead up to the 2020 ...
  134. [134]
    What We Learned from the Facebook Whistleblower Hearing
    Oct 5, 2021 · Frances Haugen spent hours detailing to lawmakers how the social network harms young people. Facebook disagreed with her testimony but said ...
  135. [135]
    Mark Zuckerberg says Meta was 'pressured' by Biden administration ...
    Aug 27, 2024 · Zuckerberg in his letter to the judiciary committee said the pressure he felt in 2021 was “wrong” and he came to “regret” that his company ...
  136. [136]
    Zuckerberg says the White House pressured Facebook to 'censor ...
    Aug 27, 2024 · Meta CEO Mark Zuckerberg says senior Biden administration officials pressured Facebook to “censor” some COVID-19 content during the pandemic.
  137. [137]
    FTC Imposes $5 Billion Penalty and Sweeping New Privacy ...
    Jul 24, 2019 · The $5 billion penalty against Facebook is the largest ever imposed on any company for violating consumers' privacy and almost 20 times greater ...
  138. [138]
    Digital Services Act: keeping us safe online - European Commission
    Sep 22, 2025 · Large platforms that do not comply with the DSA may face fines of up to 6% of their global annual turnover and will be required to take ...
  139. [139]
    [PDF] Regulation (EU) 2022/2065 Digital Services Act Transparency ...
    Apr 26, 2024 · This report is for Facebook under the EU Digital Services Act, covering the period from Oct 1, 2023 to March 31, 2024, and includes data on ...
  140. [140]
    German Content Moderation and Platform Liability Policies
    Jul 17, 2024 · However, the FSM ceased these operations once NetzDG was repealed under the broader EU Digital Services Act (DSA), which will be discussed later ...
  141. [141]
    Germany fines Facebook for under-reporting complaints | Reuters
    Jul 2, 2019 · German authorities have fined Facebook 2 million euros (£1.7 million) for under-reporting complaints about illegal content on its social ...
  142. [142]
    [PDF] Impact of the Digital Services Act
    May 25, 2025 · The decrease in harmful posts in the Lithuanian language may be a result of improved Facebook content moderation after the DSA enforcement.
  143. [143]
  144. [144]
    Evaluating the regulation of social media: An empirical study of the ...
    This study compiles an original data set of Facebook posts and comments to analyze potential overblocking and chilling effects of a German law
  145. [145]
    Germany's NetzDG and the Threat to Online Free Speech
    Oct 10, 2017 · Beyond incentivizing overenforcement, the NetzDG further threatens online free speech due, in part, to the breadth of Germany's defamation law.
  146. [146]
    Meta's Vietnam playbook: comply, delete and keep quiet - Asia Times
    Oct 16, 2025 · The government boasted of a 96% compliance rate for its takedown requests by Meta/Facebook, 92% by YouTube and 97% by TikTok. Those ...
  147. [147]
    World Report 2024: Vietnam | Human Rights Watch
    The Vietnamese authorities constantly request social media companies, including Meta (Facebook ... contents (a 91% compliance rate [with government requests]).
  148. [148]
    Content Restrictions Based on Local Law | Transparency Center
    Summary of Government Requests for Content Restrictions in Vietnam.
  149. [149]
    Pakistan - Content Restrictions Report
    We restricted access in Pakistan to over 8,200 items reported by the Pakistan Telecommunication Authority (PTA) for allegedly violating local laws, ...
  150. [150]
    Facebook removed blasphemous content on government's request
    Mar 27, 2017 · Facebook has removed about 85 per cent blasphemous material at the request of Pakistan, the government today informed a top court.
  151. [151]
    Content Restrictions Based on Local Law | Transparency Center
    Summary of Government Requests for Content Restrictions in Myanmar.
  152. [152]
    Why Facebook is losing the war on hate speech in Myanmar - Reuters
    Aug 15, 2018 · Reuters found more than 1000 examples of content attacking the Rohingya and other Muslims on Facebook. A secretive operation to combat the ...
  153. [153]
    [PDF] Human Rights Impact Assessment: Facebook in Myanmar
    Government takedown requests for content are not common, but they are increasing, and at least 61 people were prosecuted for online speech from June 2016 to May.
  154. [154]
    Zuckerberg's Meta considered sharing user data with China ...
    Mar 9, 2025 · Meta went to extreme lengths, including developing a censorship system, in a failed attempt to bring Facebook to millions of internet users in China.
  155. [155]
    Facebook's Zuckerberg oversaw censorship tool for China
    Apr 11, 2025 · Meta compromised U.S. national security and freedom of speech to establish businesses in China, whistleblower says.
  156. [156]
    Iran protests: Facebook wrong to remove 'death to Khamenei ... - BBC
    Jan 10, 2023 · An oversight board for Facebook's parent company says the platform was wrong to remove a post with a common protest slogan against Iran's ...
  157. [157]
    Iran protest slogan - The Oversight Board
    Jan 9, 2023 · The Oversight Board has overturned Meta's original decision to remove a Facebook post protesting the Iranian government, which contains the slogan “marg bar... ...
  158. [158]
    Uganda: Criminalization shrinks online civic space for LGBTQ people
    Oct 23, 2024 · Online attacks against Uganda's LGBTQ communities have drastically increased, owing to overly broad laws that criminalize various aspects of the lives of LGBTQ ...
  159. [159]
    How Uganda's anti-LGBTQ+ laws entrap people online - Access Now
    May 15, 2025 · Research into cases of digital entrapment in Uganda shows how anti-LGBTQ+ laws and policies endanger people and undermine human rights.
  160. [160]
    Thailand takes first legal action against Facebook, Twitter over content
    Sep 24, 2020 · Thailand launched legal action on Thursday against tech giants Facebook and Twitter for ignoring requests to take down content, in its first ...
  161. [161]
    Thailand's 24-Hour Content Takedown Rule - Lexology
    Sep 29, 2025 · It establishes a strict 24-hour takedown rule for online content. Social media platforms must now remove any material that the Ministry of ...
  162. [162]
    Yet Another Victim of Indonesia's Blasphemy Law
    Aug 12, 2022 · Indonesia's toxic blasphemy law has claimed another victim, this time a former government minister over a social media post deemed insulting to Buddhists.
  163. [163]
    Revised blasphemy laws to go into effect in Indonesia - Christian Post
    Sep 27, 2025 · Indonesia's revised criminal code will take effect in three months, expanding its blasphemy laws from one to six articles and introducing ...
  164. [164]
    How Meta Is Preparing for Brazil's 2022 Elections
    Aug 12, 2022 · We want to share our work to protect the integrity of presidential elections taking place in Brazil in October 2022.
  165. [165]
    Brazil: Facebook approves all ads containing electoral ...
    Our efforts in Brazil's previous election resulted in the removal of 140,000 posts from Facebook and Instagram for violating our election interference policies ...
  166. [166]
  167. [167]
    Government Requests for User Data | Transparency Center
    Our report on the nature and extent of the requests we receive from governments for user data.
  168. [168]
    A content analysis of metrics on online child sexual exploitation and ...
    During October to December, 2023, Facebook detected and took action on 96.70 % of child sexual exploitation content before user reports, with the remaining 3.30 ...
  169. [169]
    [PDF] FINAL SUBMITTED Meta 2025 ACPDM transparency report .docx
    We displayed warnings on over 6.4 million distinct pieces of content on Facebook, and over 509,000 on Instagram in Australia (including reshares), based on ...
  170. [170]
    Meta's content moderation pullback cuts takedown errors, removals
    May 30, 2025 · Erroneous content takedowns dropped by half between Q4 2024 and the end of Q2 2025, per its Q1 Integrity Report. · The overall amount of content ...
  171. [171]
    [PDF] Oversight Board Q2 2022 transparency report
    The Board continues to raise with Meta the questions this poses for the accuracy of the company's content moderation and the appeals process the company applies ...
  172. [172]
    2024 Annual Report Highlights Board's Impact in the Year of Elections
    Aug 27, 2025 · Improved the indicators it gives to moderators when they are reviewing long-form videos for potential violations, for more accurate enforcement ...
  173. [173]
    Algorithms and the Perceived Legitimacy of Content Moderation
    Dec 15, 2022 · This brief explores people's views of Facebook's content moderation processes, providing a pathway for better online speech platforms and ...
  174. [174]
    Comparing the Perceived Legitimacy of Content Moderation ... - arXiv
    Feb 13, 2022 · We present a survey experiment comparing the perceived institutional legitimacy of four popular content moderation processes.
  175. [175]
    The unappreciated role of intent in algorithmic moderation of ...
    Jul 29, 2025 · User reports can be influenced by personal biases or coordinated attacks, leading to false positives or negatives (Jhaver et al., 2019). Uneven ...
  176. [176]
    (PDF) The Unappreciated Role of Intent in Algorithmic Moderation of ...
    May 17, 2024 · As social media has become a predominant mode of communication globally, the rise of abusive content threatens to undermine civil discourse.
  177. [177]
    Researchers Examine 'Like-Minded Sources' on Social Media
    Jul 27, 2023 · In the control group, 53.7% of the content in their Facebook news feed was from like-minded sources, compared to 36.2% for the treatment group.
  178. [178]
    The effectiveness of moderating harmful online content - PNAS
    It is significant as it indicates that DSA moderation can effectively stop the most harmful content. Third, despite a significant difference in the ...