
Shadow banning

Shadow banning, also referred to as stealth banning or ghost banning, is a technique used by platforms to diminish the visibility of specific users' posts, comments, or accounts to other users without notifying or acknowledging the restriction to the affected individual. This method typically involves algorithmic adjustments that prevent content from appearing in search results, feeds, recommendations, or notifications, while allowing the user to continue posting as normal, often leading to perceptions of reduced engagement without evident cause. Platforms implement shadow banning primarily to curb spam, harassment, misinformation, or other violations of community guidelines, though its opaque nature has fueled widespread allegations of selective enforcement. The practice originated on early online forums and evolved with the rise of algorithmic feeds on major platforms such as Twitter (now X), Facebook, Instagram, and YouTube, gaining notoriety around 2017 amid user complaints of unexplained drops in reach. Internal mechanisms, such as Twitter's "visibility filtering" tools, have been documented to downrank content deemed problematic, including temporary labels that reduced amplification of tweets from conservative accounts or on politically sensitive topics like election integrity. Empirical audits and studies confirm that such interventions can disproportionately affect certain demographics or viewpoints, with one analysis of Twitter revealing shadow bans correlating with user behaviors and content that challenged prevailing narratives, potentially amplifying algorithmic biases embedded in moderation systems. Controversies surrounding shadow banning center on its potential for covert censorship and viewpoint discrimination, as platforms have historically denied its existence while employing equivalent practices under euphemistic terms like "deboosting" or "reduced reach." Revelations from the Twitter Files in 2022-2023 exposed coordinated efforts with government entities to suppress stories and accounts, including right-leaning voices, prompting debates over ideological bias in content moderation despite platforms' claims of viewpoint neutrality. Modeling studies further illustrate how targeted shadow banning can shift network-wide opinions or heighten polarization without overt bans, raising concerns about platforms' unaccountable influence on public discourse. Following Elon Musk's acquisition of Twitter in 2022, the platform publicly phased out undisclosed visibility filtering in favor of labeled restrictions, marking a shift toward greater transparency amid ongoing disputes over enforcement.

Definition and Mechanisms

Core Definition

Shadow banning, also known as stealth banning, ghost banning, or hell banning, refers to the practice of an online platform limiting the visibility or reach of a user's content or account to other users without notifying the affected individual. This typically involves algorithmic or manual interventions that prevent posts from appearing in public feeds, search results, recommendations, or notifications, while allowing the user to continue posting and viewing their own content as if unaffected. The result is a partial exclusion that isolates the user from the broader community, often evading detection because engagement metrics like views or likes may decline gradually or inconsistently. Unlike explicit suspensions or removals, which notify users and provide appeal mechanisms, shadow banning operates covertly to maintain community harmony without overt confrontation, functioning as a subtle tool for addressing spam, trolling, policy violations, or low-quality content. Legal definitions, such as those in proposed U.S. state legislation, codify it as the act of blocking or partially blocking a user or the user's content from an online community, emphasizing the lack of transparency. The practice traces its conceptual roots to early bulletin board systems and forums in the 1980s and 1990s, where moderators would redirect disruptive users' posts to invisible threads visible only to themselves, predating modern social media but adapting to algorithmic scale. Platforms frequently contest the term's pejorative implications, framing such visibility reductions as routine algorithmic filtering for quality and relevance rather than deliberate suppression, though documented cases reveal targeted applications against specific viewpoints or behaviors. This distinction fuels debates over intent, with shadow banning enabling direct control over discourse flows—amplifying compliant content while demoting outliers—without the backlash of acknowledged censorship. Detection often requires external verification, such as alt-account testing or third-party analytics, highlighting the opacity inherent to the mechanism.

Technical Methods of Implementation

Shadow banning is implemented primarily through algorithmic systems that automate the enforcement of platform policies by reducing the discoverability and reach of user content without notifying the affected users. These systems classify posts using models trained on features such as text semantics, user behavior patterns, and engagement signals to assign visibility scores or flags, enabling large-scale suppression.

A core technique is downtiering or algorithmic downranking, where content receives a lowered priority in recommendation algorithms, causing it to appear less prominently—or not at all—in users' feeds, explore pages, or personalized timelines. This is achieved by adjusting ranking factors like relevance scores or affinity metrics, often without altering the content's existence on the platform. For example, on platforms like Twitter (now X), replies from flagged accounts may be hidden behind an interface barrier, visible only after users select "show more replies," effectively limiting exposure unless actively sought.

Another method involves search and suggestion bans, which exclude content from search indexes, autocomplete suggestions, or hashtag results. Content flagged for policy proximity—such as borderline violations of guidelines on misinformation or harassment—is systematically omitted from these discovery mechanisms, reducing organic traffic while preserving the illusion of normal posting functionality for the user. Historical instances include Twitter's exclusion of certain accounts from search suggestions and Reddit's quarantining of subreddits, which removes them from default search visibility.

Ghost banning represents a more severe variant, where posts remain visible to the original poster but are invisible to others, simulating account suspension without triggering user alerts or backend changes detectable by standard interfaces. This is facilitated by user-specific rendering filters in the platform's frontend, which apply visibility restrictions based on account-level flags derived from moderation rules.

Platforms like YouTube implement exclusion from recommendations for videos "close" to violating community guidelines, using probabilistic scoring to withhold promotion from homepages or suggested-videos sections. Similarly, Meta's systems on Facebook and Instagram algorithmically limit the distribution of political content, as evidenced by reduced audience reach for posts containing terms like "vote," through feed prioritization adjustments that favor non-controversial material. These techniques collectively enable platforms to moderate at scale while minimizing user backlash, though exact parameters remain proprietary and subject to iterative updates as policies evolve.
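
The exact models and thresholds are proprietary, but the general pattern described above—attaching moderation flags to posts or accounts and multiplying a base relevance score by a deboost factor before ranking—can be illustrated with a minimal sketch. The flag names and factors below are hypothetical placeholders, not any platform's actual parameters.

```python
from dataclasses import dataclass, field

# Hypothetical flags and multipliers; real platforms' parameters are proprietary.
DEBOOST_FACTORS = {
    "borderline_policy": 0.3,   # downrank content near a guideline violation
    "search_blacklist": 0.0,    # exclude from search and suggestion surfaces
    "reply_deboost": 0.5,       # push replies behind a "show more" barrier
}

@dataclass
class Post:
    post_id: str
    relevance: float                          # base relevance score from the ranker
    flags: set = field(default_factory=set)   # moderation flags on the post or author

def visibility_score(post: Post) -> float:
    """Apply the most severe applicable deboost factor to the base relevance score."""
    factor = min((DEBOOST_FACTORS[f] for f in post.flags if f in DEBOOST_FACTORS),
                 default=1.0)
    return post.relevance * factor

def rank_feed(posts: list[Post], limit: int = 10) -> list[Post]:
    """Order candidates by adjusted score; deboosted posts sink or disappear entirely."""
    scored = [(visibility_score(p), p) for p in posts]
    return [p for score, p in sorted(scored, key=lambda x: -x[0]) if score > 0][:limit]
```

In such a pipeline the content is never deleted; it simply sorts below the cutoff of what a feed or search surface displays, which is why affected users see no change when viewing their own posts.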

Historical Development

Early Origins in Online Forums

The practice of shadow banning predates modern social media, emerging in the mid-1980s within bulletin board systems (BBS) as a moderation tool for managing disruptive users. Citadel software, a popular system for early online communities, implemented a feature known as the "twit bit," which administrators could enable for problematic individuals. When activated, this mechanism allowed affected users to view their own posts and interactions as normal but rendered them invisible to all other participants, effectively isolating trolls or spammers without alerting them to the restriction. This stealthy approach aimed to reduce forum disruptions by preventing banned users from adapting their behavior or escalating conflicts through awareness of the sanction. The specific term "shadow ban" was coined in the early 2000s by moderators on Something Awful, an influential web forum established in 1999 and known for its irreverent community and early internet culture. Rich Kyanka, the site's founder, reported that moderators used the phrase to describe a technique where offending users' posts were hidden from the broader audience while remaining visible to the posters themselves, often applied humorously or sparingly to extreme violators. Something Awful's implementation built on BBS precedents but adapted them to web-based threading, emphasizing deception to maintain community harmony without overt confrontation. This method reflected broader early forum moderation challenges, where volunteer administrators sought to curb abuse in nascent digital spaces lacking advanced algorithmic tools. These early techniques, including variants later termed "hellbanning" around 2011, prioritized anti-spam and anti-troll efficacy over transparency, allowing moderators to observe continued misbehavior for evidentiary purposes. However, they drew criticism even then for potential misuse, as users remained unaware of their diminished visibility, complicating self-correction and fostering distrust in moderation processes. By the early 2010s, such practices had become a staple in online forums, influencing subsequent content control strategies as communities scaled.

Rise with Mainstream Social Media

As mainstream social media platforms scaled to billions of users in the 2010s, algorithmic techniques proliferated to manage spam, misinformation, and abuse without explicit bans, often resulting in reduced visibility that users termed "shadow banning." Facebook, for instance, filed a patent in 2011 (granted in 2015) describing a system to demote "low-quality" posts from users' feeds without notification, prioritizing content from trusted sources to enhance user experience. This approach aligned with the platform's growth from 500 million users in 2010 to over 1.5 billion by 2015, necessitating automated filtering over manual oversight. Allegations of biased implementation intensified in 2016, ahead of the U.S. presidential election, when a Gizmodo investigation claimed curators and algorithms suppressed conservative-leaning stories in the Trending Topics section, including stories from conservative outlets. Facebook denied systemic bias but acknowledged human intervention in topic selection and responded by automating more of the process to reduce subjectivity, effectively embedding visibility reductions deeper into its algorithms. Critics, including Republican lawmakers, argued this reflected institutional preferences among employees, though internal reviews found no evidence of deliberate partisan suppression. Twitter faced similar scrutiny by 2018, as its user base exceeded 300 million monthly active users, prompting expanded anti-abuse measures like de-emphasizing "troll-like behaviors" in replies and search results. In July of that year, Vice News reported that prominent Republican accounts, including those of several members of Congress and party officials, were omitted from autocomplete search suggestions while Democrats appeared, a practice attributed to neutral filters against manipulative behavior but which conservatives labeled shadow banning. The platform adjusted its algorithms in response, but the incident amplified claims of ideological throttling, particularly amid post-2016 efforts to combat election-related misinformation through downranking of flagged content. These developments marked shadow banning's transition from niche forum tactics to a core, opaque feature of mainstream moderation, driven by regulatory pressures and advertiser demands for "safe" environments.

Key Revelations and Policy Shifts Post-2022

The Twitter Files, a series of internal documents released starting December 2, 2022, under Elon Musk's direction, revealed that pre-acquisition Twitter employed systematic visibility filtering and deamplification—mechanisms functionally equivalent to shadow banning—targeting accounts and content deemed problematic, including those discussing COVID-19 origins or holding conservative viewpoints, without user notification. These practices contradicted prior executive statements, such as former CEO Jack Dorsey's 2018 claim that Twitter did not shadow ban based on political views but merely ranked content algorithmically. Specific disclosures included "secret blacklists" for downranking replies from disfavored users and temporary search bans on topics like the Hunter Biden laptop story in 2020, applied unevenly across ideological lines. The files highlighted reliance on opaque internal visibility-filtering tools for temporary reach reductions, often coordinated with government entities, underscoring a gap between public transparency pledges and operational reality. In response, Musk's October 27, 2022, acquisition prompted immediate scrutiny of shadow banning, with him pledging on his first day to investigate and eliminate such practices. By November 24, 2022, the platform revised its suspension policy to limit permanent bans to spam or illegal conduct, shifting from proactive ideological filtering to reactive enforcement. January 2023 updates included plans for software to display users' "true account status," explicitly indicating whether a shadow ban or deboosting had been applied. The platform adopted a "freedom of speech, not freedom of reach" framework by mid-2023, permitting posting of controversial content while algorithmically limiting its distribution to mitigate harm without outright suppression, a departure from pre-2022 hidden filters. Further shifts materialized by mid-2025, with X dismantling its human-led Trust and Safety operation—criticized in the Files for enabling biased deprioritization—and replacing much of it with AI-based systems to reduce subjective interventions. These changes aimed to address abuses revealed in the Files, though reports of residual deboosting against Musk critics emerged, with observers attributing them to targeted enforcement rather than systemic opacity. Broader policies evolved to prioritize user notifications for visibility limits, contrasting with earlier denials and fostering accountability, albeit amid ongoing debates over enforcement consistency.

Practices Across Platforms

Twitter and X

Prior to Elon Musk's acquisition of Twitter in October 2022, the platform employed visibility filtering mechanisms, internally described as temporary labels, that reduced the reach of tweets deemed to violate rules against hateful conduct or misinformation, without notifying users. These practices, revealed through the Twitter Files—a series of internal documents released starting in December 2022—included algorithmic de-amplification of accounts, such as those of Libs of TikTok and conservative commentator Dan Bongino, whose notifications were hidden and search suggestions suppressed. Independent journalist Bari Weiss documented in Twitter Files installment No. 2 how teams built blacklists to prevent disfavored tweets from trending and throttled visibility for right-leaning voices, a process Twitter executives had publicly denied constituted "shadow banning" while distinguishing it from full content undiscoverability. Following the acquisition, Musk publicly committed to eliminating shadow banning, criticizing prior practices as deceptive and implementing greater transparency, including a planned feature to notify users of visibility limitations. The rebranded X shifted to a "freedom of reach" policy in 2023, allowing deboosting of content violating guidelines on hateful conduct, harassment, or the promotion of harm, while maintaining that outright shadow banning—rendering posts invisible except to the author—does not occur. Algorithmic updates emphasized engagement-based ranking, prioritizing replies and verified accounts, but internal audits post-2022 indicated persistent de-amplification for policy breaches, with the company defending it as necessary to prevent abuse without infringing core speech rights. Controversies persisted into 2024 and 2025, with users accusing X of shadow banning critics of Musk, including prominent conservative figures whose reach dropped amid disputes over immigration policy. Reports from April 2025 highlighted reduced visibility for accounts posting negatively about Musk, prompting claims of retaliation despite official denials. In July 2024, a German regional court ruled X violated data protection rules by not disclosing algorithmic restrictions to a user, exposing the company to daily fines until compliance and underscoring ongoing opacity in visibility decisions. X's engineering team attributed such reductions to automated filters and user-reported blocks, rejecting allegations of deliberate targeting, though empirical studies noted algorithmic skew toward high-engagement content, potentially disadvantaging niche or contrarian voices.

Other Major Platforms

Meta platforms, including Facebook and Instagram, utilize algorithmic reductions in content distribution that function similarly to shadow banning, often without user notification. In a December 2023 report, Human Rights Watch detailed systemic suppression of Palestine-related content on Facebook and Instagram through practices like temporary visibility limits and account demotions, affecting thousands of posts and users. Belgian courts in August 2024 ruled that Meta's automated visibility reductions for terms-of-use violations constitute shadow banning via decision-making algorithms lacking transparency. Meta representatives have denied formal shadow banning, asserting instead that distribution adjustments target violations like spam or hate speech, though independent analyses show posting patterns can trigger these independently of content.

YouTube applies shadow banning mechanisms, particularly to comments and recommendations, where algorithmic filters hide or deprioritize content deemed violative without strikes or alerts. User reports and analyses from 2024-2025 cite sudden drops in views, likes, and comment visibility as hallmarks, often linked to suspected guideline breaches or controversial topics. Google's policies emphasize guideline enforcement through reduced recommendations rather than overt bans, but evidence from channel audits indicates undisclosed suppression affecting engagement metrics.

TikTok explicitly reduces discoverability for content violating its guidelines, limiting appearances in For You page feeds, searches, and recommendations without direct user notice. A 2024 Yale study demonstrated how such shadow banning can polarize opinions by subtly altering content exposure, with simulations showing shifts in user positions after targeted suppression. Reported triggers include posting restricted material or using banned hashtags, with affected accounts describing reach drops of up to 90% as of 2025.

Reddit has implemented shadow banning since at least the early 2010s to address spam and abuse, applying sitewide filters that auto-remove posts and comments across subreddits without notifying the user. Detection methods, updated as of 2025, involve checking via Reddit's appeal process or external tools, which have revealed false positives in cases like VPN usage or rapid posting; one rough external check is sketched below. Appeals have overturned bans for power users in subreddits like r/anime, highlighting the opacity of automated enforcement.
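
For Reddit specifically, the commonly reported external check relies on a shadow-banned account's public profile returning "not found" to logged-out visitors while remaining visible to the owner. The sketch below illustrates that heuristic; the endpoint behavior is as commonly reported by users, a 404 can also indicate a deleted or suspended account, and Reddit's own appeal page remains the authoritative route.

```python
import requests

def reddit_profile_visible(username: str) -> bool:
    """Return True if the public profile endpoint is reachable without logging in.

    Heuristic only: shadow-banned accounts are commonly reported to return 404
    here for other users, but so do deleted or suspended accounts.
    """
    resp = requests.get(
        f"https://www.reddit.com/user/{username}/about.json",
        headers={"User-Agent": "shadowban-visibility-check/0.1"},
        timeout=10,
    )
    return resp.status_code == 200

if __name__ == "__main__":
    name = "example_user"  # hypothetical account name
    print(f"u/{name} publicly visible: {reddit_profile_visible(name)}")
```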

Evidence of Usage and Controversies

Empirical Evidence and Studies

A 2023 audit study in the Journal of Communication examined shadowbanning on Twitter through repeated tests on a stratified random sample of American accounts. Researchers identified 2,476 instances of shadowbans across 1,731 accounts, equating to 6.2% of the 27,718 active accounts tested experiencing at least one such restriction. Shadowbans manifested as reduced visibility of replies, search deprioritization, or reply deboosting, with bot-like behavioral patterns increasing the likelihood, while verified accounts faced lower risks. Replies to offensive tweets and political content from both left- and right-leaning sources were disproportionately downtiered, hidden from recipients without notification to the original poster. Further quantitative analysis by Le Merrer et al. in 2021, drawing on Twitter data, revealed that users engaging with shadowbanned accounts were over four times more likely to themselves face shadowbanning, rising from a base rate of 2.3% to 9.3% for interactors. This effect underscores algorithmic enforcement extending beyond initial targets to their networks, based on observational data from account interactions. Empirical detection on other platforms remains limited, with most evidence relying on user audits or internal disclosures rather than large-scale academic audits. A 2021 study by Ma and Kou documented algorithmic demotion reducing video recommendations for certain creators, correlating with socioeconomic harms, though not explicitly termed shadowbanning. Platforms like Facebook and Instagram have fewer peer-reviewed quantifications, though self-reported surveys indicate perceived restrictions: a 2021 poll of 1,205 U.S. users found 9.2% believed they had experienced shadowbanning within the prior year, with Facebook the most frequently cited platform at 8.1%. These findings, while suggestive, highlight challenges in distinguishing intentional shadowbanning from algorithmic quirks, as platforms often attribute visibility drops to neutral ranking factors.
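
Audit studies of this kind generally work by comparing what a control session (logged out, or an unrelated account) can see against what the author sees. The sketch below illustrates that comparison in schematic form; fetch_search_results and fetch_replies are hypothetical stand-ins for whatever scraping or API access an auditor actually has, not real platform endpoints.

```python
# Schematic audit checks: a post the author can see but a control session cannot
# is a candidate shadow ban. Both fetch_* callables are hypothetical stand-ins
# supplied by the auditor (scraper, API client, or browser automation).

def is_search_banned(author_post_ids: list[str], fetch_search_results) -> bool:
    """Possible search ban: none of the author's recent posts surface in search
    results retrieved from an unauthenticated control session."""
    if not author_post_ids:
        return False
    visible = sum(
        1 for post_id in author_post_ids
        if post_id in fetch_search_results(post_id)
    )
    return visible == 0

def is_reply_deboosted(reply_id: str, thread_id: str, fetch_replies) -> bool:
    """Possible reply deboost: the reply is absent from the default reply listing
    shown to a control account but appears once hidden replies are expanded."""
    default_view = fetch_replies(thread_id, expand_hidden=False)
    expanded_view = fetch_replies(thread_id, expand_hidden=True)
    return reply_id not in default_view and reply_id in expanded_view
```

Repeating such checks across a large random sample of accounts, as the audit above did, is what distinguishes systematic measurement from the anecdotal single-account tests most users rely on.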

Political and Ideological Bias Claims

Claims of political and ideological bias in shadow banning primarily center on allegations that major platforms disproportionately apply visibility reductions to conservative or right-wing content and accounts, often without transparent justification. These assertions gained traction in 2018 when President Donald Trump publicly accused platforms of shadow banning Republicans, citing reduced search visibility for prominent conservative figures on Twitter. Internal documents released via the Twitter Files in late 2022 and 2023 provided evidence of such practices, including "visibility filtering" tools that suppressed tweets from Republican politicians and right-leaning journalists without user notification, such as temporary search bans on prominent conservative accounts during the COVID-19 pandemic. Further disclosures revealed algorithmic adjustments prioritizing left-leaning media outlets while deprioritizing conservative ones, with executives acknowledging in 2020 that certain right-wing accounts faced "temporary labels" limiting reach based on perceived misinformation risk, though platforms maintained these were not viewpoint-specific. Critics, including former staff cited in the files, argued this reflected a systemic left-leaning bias among moderation teams, corroborated by a 2023 analysis showing overrepresentation of progressive viewpoints in content policy decisions. Platforms like Twitter under pre-2022 leadership denied ideological targeting, asserting actions followed neutral rules on spam or abuse, but the opacity of these systems fueled perceptions of discrimination. On other platforms, similar claims emerged, such as YouTube's alleged demotion of conservative channels after the 2016 election, with a 2021 internal leak indicating algorithmic tweaks to counter "right-wing extremism" that inadvertently reduced visibility for mainstream GOP content. Facebook whistleblower Frances Haugen testified in 2021 to congressional committees about prioritization algorithms favoring "authoritative" sources, which disproportionately sidelined conservative outlets during events like the 2020 election. Empirical studies offer mixed support: a 2024 Yale analysis found pro-Trump hashtag accounts faced higher suspension rates, potentially extending to shadow banning, while an MIT study attributed disparities to conservatives posting more violative content such as misinformation, not inherent bias. Counterclaims of bias against liberals are rarer and less substantiated by internal evidence, with most platform defenses emphasizing behavior-based enforcement over viewpoint targeting; however, a 2023 survey indicated 58% of U.S. adults perceived political viewpoint censorship on social media, predominantly among Republicans. These debates highlight tensions between platforms' self-reported neutrality and documented practices favoring certain ideological alignments, particularly at pre-Musk Twitter, where internal logs showed explicit discussions of suppressing "Trump-adjacent" narratives.

Notable Incidents and Examples

In July 2018, Twitter faced widespread accusations of shadow banning after users observed that searching for prominent Republican accounts, including those of several members of Congress and the Republican National Committee chair, failed to autocomplete their names in the platform's search bar, while Democratic equivalents appeared normally. This discrepancy, affecting visibility in search results without altering the accounts' follower counts or tweet functionality, prompted then-President Donald Trump to publicly claim on July 26 that Twitter was "SHADOW BANNING prominent Republicans." Twitter acknowledged the issue as an unintended algorithmic adjustment related to combating spam and manipulation, and reversed it within hours following backlash, restoring search functionality. The release of the Twitter Files in December 2022 provided internal documentation of systematic visibility filtering practices predating Elon Musk's acquisition. Journalist Bari Weiss, granted access to Twitter's archives, detailed how the platform maintained secret "blacklists" and "deboosting" mechanisms that reduced the reach of tweets from conservative users, including podcaster Dan Bongino and Stanford epidemiologist Jay Bhattacharya, often without user notification or appeal options. For example, Bhattacharya's account was flagged for temporary search bans and placed on a "Trends Blacklist," limiting its algorithmic promotion despite high engagement metrics. Twitter employees internally referred to these as "visibility filtering" tools to suppress potentially "problematic" content, though the company publicly denied political motivations. On YouTube, creators have reported algorithmic shadow banning since at least 2019, with sudden drops in video recommendations and search rankings for content deemed "borderline" by automated systems, even without formal strikes. A 2019 policy shift explicitly aimed to reduce the visibility of such videos to limit the spread of misinformation, leading to cases of conservative commentators experiencing 90% view declines overnight; YouTube attributed this to quality filters rather than targeted political suppression. Facebook has encountered similar allegations, particularly during 2018 U.S. congressional hearings where lawmakers cited reduced post reach for conservative pages as evidence of bias. Internal audits later revealed algorithmic demotions for "engagement bait" and low-quality content, which disproportionately affected right-leaning outlets according to some analyses, though Facebook maintained these were neutrality-driven efforts.

Justifications and Criticisms

Platform Defenses as Moderation Tool

Platforms defend reduced visibility measures—such as algorithmic downranking or deboosting of borderline content—as a calibrated technique that restricts the amplification of harmful material while permitting its initial posting, thereby upholding commitments to free expression alongside platform integrity. This method targets the viral spread of content deemed violative of standards on hate speech, misinformation, or coordinated inauthentic behavior, without the escalatory effects of suspensions that could alienate users or invite legal challenges. By intervening at the distribution layer rather than the publication stage, platforms claim to foster safer environments that sustain user engagement and advertiser confidence, since unchecked harmful content erodes trust and usability. On X (formerly Twitter), Elon Musk endorsed this approach in a November 2022 policy announcement, emphasizing "freedom of speech, but not freedom of reach," whereby negative or hateful posts undergo maximum deboosting and demonetization to deter manipulation without prohibiting expression. Musk positioned visibility filtering as a safeguard against platform exploitation by bad actors, arguing it prevents the degradation seen in less moderated spaces while avoiding overt censorship accusations. Subsequent updates, including status notifications for affected accounts promised in December 2022, were framed as enhancing transparency in these algorithmic adjustments. Meta's Facebook and Instagram similarly apply downranking to problematic content, as outlined in their transparency reports, which detail reductions in feed distribution for violations to limit exposure without removal. Internal evaluations indicate these interventions curbed engagement with misinformation-disseminating groups by 16-31% and with such websites by approximately 45%, demonstrating efficacy in containing falsehoods that could otherwise amplify rapidly. Meta contends this granular control supports scalable moderation via automation, reducing reliance on resource-intensive human review while minimizing overreach compared to blanket bans. YouTube, owned by Google, integrates downranking into its recommendation systems to deprioritize borderline or policy-violating videos, defending it as a means to prioritize user safety and experience by curbing the promotion of misleading or abusive material. This tactic, refined through ongoing policy adjustments as of June 2025, allows retention of educational or contextual treatments of harmful topics, provided such content does not dominate recommendations, thus balancing informational access against risk proliferation. Platforms collectively assert that such tools are indispensable for managing vast volumes—billions of posts daily—where full visibility would equate to endorsement, and selective de-amplification preserves core functionalities like discovery and connectivity.

Critiques of Opacity and Potential Abuse

Critics argue that the opacity inherent in shadow banning practices undermines due process and accountability, as affected individuals receive no notification of reduced visibility, preventing appeals or adjustments to their behavior. This lack of transparency, often termed "black box" moderation, allows platforms to implement algorithmic deamplification without scrutiny, fostering perceptions of arbitrary enforcement. For instance, a 2021 study described how platforms leverage their perceived authority over algorithms to gaslight users by denying that shadow banning exists, eroding confidence in content distribution mechanisms. Such practices complicate empirical verification, as users must resort to external tests or third-party tools to detect restrictions, highlighting a deficit in procedural safeguards for digital expression. The potential for abuse arises from this secrecy, enabling selective suppression that can distort public discourse without oversight. Internal documents released via the Twitter Files in December 2022 revealed "visibility filtering" tools used to quietly limit the reach of accounts deemed problematic, including those of conservative commentators, contradicting prior executive denials of viewpoint-based shadow banning. These mechanisms, applied without user notification, facilitated what reporters described as "secret blacklists," raising concerns over ideological bias in moderation decisions made by unelected staff. A 2024 analysis modeled how shadow banning could be strategically deployed to shape network opinions, demonstrating its efficacy in amplifying favored narratives while marginalizing dissent, particularly when banning rates are kept low enough to evade detection. Further critiques point to the risk of institutional capture, where platforms' internal biases—often aligned with prevailing cultural or political elites—lead to disproportionate targeting of heterodox views. The Twitter Files exposed instances of deamplification applied to high-profile right-leaning accounts, such as those questioning COVID-19 policies or election integrity, without transparent criteria, suggesting moderation served external pressures rather than neutral rule enforcement. This opacity not only invites abuse but also insulates platforms from legal or market repercussions, as users cannot prove discrimination absent disclosure. Legal scholars have proposed transparency mandates, arguing that undetectable sanctions violate principles of fair notice akin to those in traditional legal systems, yet platforms resist, citing competitive harms or safety risks. Empirical studies reinforce that such hidden tools exacerbate echo chambers, as suppressed content fails to challenge dominant ideologies, potentially entrenching misinformation from unopposed sources.

Societal and Psychological Impacts

Effects on Individual Users

Shadow banning typically manifests for individual users as an abrupt and unexplained decline in content visibility, resulting in sharply reduced metrics such as views, likes, shares, and comments. This demotion in algorithmic distribution prevents posts from appearing in followers' feeds, search results, or recommendation algorithms, effectively isolating the user from their audience without any notification or appeal process. Users often discover the issue through secondary indicators, like third-party tools showing normal posting activity but negligible reach, leading to a lingering sense of uncertainty. The psychological consequences include heightened frustration, anxiety, and erosion of self-efficacy, as users experience a disruption in the social feedback loops that platforms cultivate for validation and community building. Affected individuals report feelings of invisibility and powerlessness, which can undermine their digital self-concept and prompt paranoia about platform surveillance or bias. In qualitative studies of content creators, particularly those from marginalized groups on platforms like TikTok, shadow banning correlates with emotional distress, including demotivation to create content and negative shifts in platform trust. This lack of transparency exacerbates the impact, as users cannot distinguish between algorithmic demotion and organic decline, fostering self-doubt and behavioral adjustments like content dilution to regain visibility. For users reliant on social media for professional or financial purposes, such as influencers or small business owners, the effects extend to tangible economic harm through stalled audience growth and diminished opportunities. Engagement drops can reduce ad revenue, sponsorship deals, or referral traffic by limiting exposure to potential customers, with some creators reporting sustained visibility suppression lasting weeks or months. In response, individuals may engage in "invisible labor," such as testing alternate accounts or altering posting strategies, which consumes time and resources without guaranteed recovery. Persistent experiences have driven some users to migrate to alternative platforms or reduce overall online activity, contributing to personal disillusionment with online expression.

Broader Implications for Public Discourse

Shadow banning distorts the flow of information in online public discourse by selectively reducing the visibility of content without user notification, which can create artificial perceptions of unpopularity or minority status for certain viewpoints. Empirical modeling of social networks demonstrates that such interventions can silently shift user opinions toward platform-preferred positions or amplify overall polarization, as reduced exposure to dissenting views reinforces existing biases in recommendation algorithms. This mechanism exacerbates the spiral of silence, in which individuals withhold expression due to the illusion of low support, thereby homogenizing discussion and limiting exposure to the diverse perspectives essential for robust debate. Analyses indicate that shadow banning facilitates control over political discourse, potentially hampering grassroots movements while favoring established narratives, as platforms prioritize "healthy" engagement metrics over unfiltered exchange. The opacity inherent in shadow banning undermines trust in platforms as neutral arenas for democratic deliberation, fostering perceptions of algorithmic governance as arbitrary or ideologically driven. Studies of creator experiences reveal chilling effects, where fear of invisible penalties discourages candid participation, pushing users toward performative compliance rather than substantive engagement. This erosion of credibility is compounded by documented disparities in application, such as the disproportionate filtering of conservative-leaning accounts revealed in internal platform disclosures, which fuel accusations of viewpoint discrimination despite platforms' claims of neutrality. Over time, reliance on such tools risks fragmenting public discourse into insulated silos, where algorithmic suppression entrenches echo chambers and diminishes the corrective role of open contention in refining collective understanding. In democratic contexts, these dynamics pose risks to informed participation, as shadow banning can subtly shape electoral narratives or debates by throttling the viral potential of unpopular but factually grounded critiques. Research highlights how algorithmic curation, including shadow banning, intersects with misinformation dynamics by creating feedback loops that prioritize engagement over accuracy, potentially amplifying claims from dominant voices while muting challenges. Critics argue this contravenes first principles of free expression, where visibility throttling without transparency equates to censorship, impairing the open contestation of ideas necessary for societal progress. Evidence from platform audits underscores the need for verifiable metrics on visibility reductions to mitigate these distortions, though platforms often resist disclosure, citing competitive harms.
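
The modeling claim above can be illustrated with a toy opinion-dynamics simulation. The sketch below uses a simple DeGroot-style averaging model in which a stylized "shadow ban" zeroes out the banned users' influence on everyone else; all parameters are illustrative and not drawn from the cited studies.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
opinions = rng.uniform(-1, 1, n)                 # opinions on a [-1, 1] axis
weights = rng.random((n, n))
weights /= weights.sum(axis=1, keepdims=True)    # row-stochastic influence matrix

def simulate(opinions, weights, banned=None, steps=50, stubbornness=0.5):
    """Each step, users blend their own opinion with the average of what they see.
    'Banned' users keep posting, but their influence on others is removed."""
    w = weights.copy()
    if banned is not None and len(banned) > 0:
        w[:, banned] = 0.0                       # others no longer see banned users
        w /= np.clip(w.sum(axis=1, keepdims=True), 1e-12, None)
    x = opinions.copy()
    for _ in range(steps):
        x = stubbornness * x + (1 - stubbornness) * (w @ x)
    return x

baseline = simulate(opinions, weights)
banned = np.where(opinions < -0.5)[0]            # suppress one side of the spectrum
shifted = simulate(opinions, weights, banned=banned)
print(f"mean opinion without ban: {baseline.mean():+.3f}")
print(f"mean opinion with ban:    {shifted.mean():+.3f}")
```

Because banned users keep posting and are never told anything changed, the shift in the network's average position occurs without any visible enforcement action, which is the core concern raised by these models.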

European Union Regulations

The Digital Services Act (DSA), Regulation (EU) 2022/2065, entered into force on November 16, 2022, and became fully applicable to all online platforms from February 17, 2024, imposing obligations to enhance transparency in content moderation practices, including restrictions on visibility akin to shadow banning. The DSA defines "restriction of visibility" to encompass demotion in ranking or recommender systems, as well as limitations on the distribution of or access to information, thereby addressing undetectable moderation techniques by requiring platforms to notify affected users and provide reasons for such actions. Under Article 17, hosting services must furnish a "statement of reasons" for decisions suspending or terminating service provision, restricting visibility, or removing content, detailing the factual basis, the legal or contractual grounds, and whether automated means were used, with users entitled to appeal these decisions internally or via out-of-court bodies. Article 27 mandates transparency in recommender systems, obligating platforms to set out the main parameters used to match content with recipients and any options to modify them, while very large platforms must additionally offer users at least one recommendation option not based on profiling. This framework curtails shadow banning by prohibiting opaque demotions without disclosure, though exceptions apply for protecting minors or preventing systemic risks, subject to justification. For very large online platforms (VLOPs) with over 45 million monthly EU users, such as Meta and X, additional requirements include annual risk assessments of recommender systems' impacts on civic discourse and mandatory independent audits, with non-compliance risking fines of up to 6% of global annual turnover. Enforcement by the European Commission and national digital services coordinators has targeted VLOPs, with investigations into platforms such as X and TikTok in 2024-2025 over inadequate transparency in moderation, including failures to effectively handle user complaints about visibility restrictions. The DSA's framework thus shifts from self-regulation to accountable processes, though critics argue its emphasis on systemic risks could enable broader suppression under the guise of risk mitigation, as evidenced by ongoing debates over the disinformation code of conduct integrated into the DSA framework from July 2025.
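
In practice, the statement-of-reasons obligation means a visibility restriction has to be accompanied by a structured, user-facing notice rather than applied silently. The sketch below shows one way such a record might be assembled; the field names are hypothetical and do not reproduce the official DSA Transparency Database schema.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Illustrative record shape loosely following the kinds of information the DSA's
# statement-of-reasons obligation describes (facts, ground relied on, automation
# used, redress options). Hypothetical field names, not an official schema.

@dataclass
class StatementOfReasons:
    decision_type: str            # e.g. "visibility_restriction" rather than "removal"
    content_id: str
    facts_and_circumstances: str  # factual basis for the decision
    ground: str                   # legal provision or terms-of-service clause relied on
    automated_detection: bool     # whether automated means flagged the content
    automated_decision: bool      # whether the decision itself was automated
    redress_options: list[str]    # internal complaint, out-of-court body, court
    issued_at: str

def notify_user(content_id: str, reason: str) -> str:
    """Build the notice a platform would send instead of silently demoting content."""
    record = StatementOfReasons(
        decision_type="visibility_restriction",
        content_id=content_id,
        facts_and_circumstances=reason,
        ground="Terms of service, clause on coordinated inauthentic behavior",  # example
        automated_detection=True,
        automated_decision=False,
        redress_options=["internal complaint system", "out-of-court dispute settlement"],
        issued_at=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(record), indent=2)

print(notify_user("post-12345", "Post demoted in ranking after spam-behavior signals."))
```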

United States Developments

In the United States, concerns over shadow banning intensified following the 2022 acquisition of Twitter by Elon Musk, who released internal documents known as the Twitter Files. These disclosures, beginning in December 2022, revealed that Twitter employees had applied "visibility filtering"—a mechanism reducing the reach of specific accounts and content without user notification—to high-profile users, including political figures and journalists, often in response to internal assessments of potential harm or to external pressures. For instance, the files detailed the temporary suppression of accounts like that of Stanford professor Jay Bhattacharya in 2021 for COVID-19 policy critiques, and of the New York Post's October 2020 article on Hunter Biden's laptop, where algorithmic deboosting limited distribution. Congressional scrutiny followed, with the House Judiciary Committee's Select Subcommittee on the Weaponization of the Federal Government holding hearings in February 2023 featuring former Twitter executives, including Vijaya Gadde and Yoel Roth. Testimony and documents highlighted inconsistencies in prior denials of shadow banning, such as 2018 public statements rejecting viewpoint-based suppression, contrasted against internal practices of temporary labels reducing visibility by up to 90% for select tweets. These revelations fueled Republican-led probes into government-platform coordination, including FBI communications influencing moderation decisions, though executives maintained their actions addressed platform integrity rather than ideology. State-level responses emerged to curb perceived platform overreach. Florida's Senate Bill 7072, enacted in May 2021, bars large social media firms from "censoring, deplatforming, or shadow banning" candidates for office, journalists, or users with substantial followings (over 500,000), mandating explanations for moderation decisions. Texas's House Bill 20, signed in September 2021, imposes similar restrictions on viewpoint-based deprioritization or demonetization. Tech coalitions challenged these laws as First Amendment violations, arguing they compelled speech by overriding constitutionally protected editorial discretion. The U.S. Supreme Court addressed the challenges in Moody v. NetChoice and NetChoice v. Paxton during its 2023-2024 term. On June 27, 2024, the Court unanimously vacated the Eleventh and Fifth Circuit rulings on the laws, remanding for analysis of whether particular provisions regulate non-expressive conduct (for example, algorithmic outputs) or compel expressive moderation choices, and emphasizing that facial challenges must account for the laws' full range of applications. Lower courts proceeded accordingly, with ongoing litigation testing shadow banning prohibitions amid claims that platforms' opacity undermines accountability without violating editorial rights. Federally, the Federal Trade Commission advanced scrutiny in February 2025 by issuing a request for public comment on "technology platform censorship," explicitly targeting shadow banning, demonetization, and account restrictions tied to users' speech or affiliations. FTC Chairman Andrew Ferguson sought public submissions from affected individuals to assess unfair or deceptive practices, potentially invoking Section 5 of the FTC Act against discriminatory denial of access, with comments due by May 2025. This inquiry reflects growing official wariness of unchecked algorithmic moderation, though platforms contend such tools prevent harm without constituting bans. Individual lawsuits, such as a 2024 German regional court ruling against X (formerly Twitter) for undisclosed visibility reductions violating transparency duties—echoed in U.S. filings—underscore mounting legal pressures for disclosure.

Global Perspectives and Challenges

In authoritarian regimes, shadow banning serves as a covert tool for suppressing dissent, often through government pressure on platforms to reduce the visibility of critical content without overt bans. For instance, in Turkey, Twitter (now X) complied with government requests to restrict accounts critical of President Recep Tayyip Erdoğan just before the 2023 elections, effectively reducing the visibility of opposition voices to align with local demands. Similarly, in India, users alleged algorithmic demotion of content opposing government narratives as early as 2020, coinciding with heightened scrutiny of foreign apps amid national security concerns. These practices reflect a broader pattern in which platforms, fearing operational bans, acquiesce to regimes that weaponize moderation systems, as noted in analyses of digital authoritarianism. In Latin American democracies, observers highlight shadow banning's role in stifling reporting on corruption and organized crime, with rights groups documenting its use to silence journalists and civil society voices. In Brazil, platforms have faced accusations of demoting content critical of political elites, exacerbating distrust amid polarized elections. The Inter-American human rights framework views such opacity as violating rights to expression and information, urging platforms to disclose algorithmic interventions. However, enforcement remains inconsistent, as U.S.-based firms prioritize compliance with local laws over uniform transparency, leading to fragmented global standards. Key challenges include jurisdictional conflicts and the scalability of transparency mandates across borders. Platforms operating in over 190 countries must navigate divergent legal expectations, such as India's 2021 IT Rules requiring grievance mechanisms but lacking specific anti-shadow banning provisions, versus more prescriptive frameworks elsewhere. Authoritarian governments exploit platforms' reluctance to publicize shadow banning techniques, pressing for targeted suppressions without public disclosure, as evidenced by Twitter's roughly 83% compliance rate with such government requests in 2021-2022. This opacity fosters distrust and algorithmic folk theories—user beliefs in unseen manipulations—undermining public discourse worldwide. Broader hurdles involve balancing moderation against free expression in multicultural contexts, where cultural norms influence what constitutes "harmful" content. In regions such as South Asia and the Middle East, shadow banning of religious or ethnic discourse risks amplifying tensions through perceived biases, yet platforms rarely disclose metrics, complicating empirical assessment. International bodies, including the UN, have called for global norms on algorithmic transparency, but adoption lags due to sovereignty concerns, perpetuating a patchwork in which users in less regulated states face heightened risks of undetected suppression.

References

  1. [1]
    What is a Shadowban? | Later Social Media Glossary
    Shadowbans are used by platforms to limit the spread of harmful or unwanted content without completely banning the user from the platform.
  2. [2]
    What Is Shadow Banning? - Neil Patel
    Shadow banning is when your social media posts can no longer be viewed by other users. You can still access your content, but content is less likely to appear.Shadow Banning Defined · Instagram Shadow Ban · TikTok Shadow Ban
  3. [3]
    Understanding Shadow Ban: what it is and how to prevent it
    Sep 17, 2024 · Shadowban is if a social media company shadowbans someone's posts, it limits who can see them, usually without the person who has published them knowing.Is shadow banning real? · How do I know if I've got... · How can shadow banning...
  4. [4]
    Shadow Banned - Sprout Social
    Shadow banned means that your social content has been blocked or restricted in some capacity without you receiving any notice. The concept of shadow banning ...<|separator|>
  5. [5]
    Everything we know about 'shadowbans' on social media
    Oct 16, 2024 · “Shadowban” gained popularity around 2017 to describe what users claimed was sneaky activity by social media companies to hide their content or opinions.
  6. [6]
    [PDF] Latest 'Twitter Files' reveal secret suppression of right-wing ...
    Dec 8, 2022 · “We do not shadow ban,” Gadde and Beykpour said in 2018, per Weiss. “And we certainly don't shadow ban based on political viewpoints or ideology ...
  7. [7]
    [PDF] Twitter's Secret Blacklists | The Free Press - Congress.gov
    Mar 28, 2023 · Not surprisingly, when we searched the Twitter Files for evidence of. "shadow banning”—or "shadowbanning"-nothing came up. That's mostly a ...
  8. [8]
  9. [9]
    The shadow banning controversy: perceived governance and ...
    Mar 12, 2022 · Meanwhile, platforms like Instagram, Twitter and TikTok vehemently deny the existence of the practice. In this paper, it will be argued that ...
  10. [10]
    The Twitter Files should disturb liberal critics of Elon Musk
    Jan 1, 2023 · The Twitter Files provide evidence of collusion between tech companies, liberal politicians and the “deep state” to silence conservatives.
  11. [11]
    How Shadow Banning Can Silently Shift Opinion Online - Yale Insights
    May 9, 2024 · In a new study, Yale SOM's Tauhid Zaman and Yen-Shao Chen show how a social media platform can shift users' positions or increase overall polarization.
  12. [12]
    Shaping opinions in social networks with shadow banning - PMC - NIH
    Here we present an optimization based approach to shadow banning that can shape opinions into a desired distribution and scale to large networks.
  13. [13]
    [PDF] Sunsetting “Shadowbanning” - Yale Law School
    Jul 17, 2023 · 33 See Brett Samuels, Trump: 'We Will Look into' Twitter for 'Shadow Banning' Republicans,. THE HILL (July 26, 2018), https://thehill.com ...<|separator|>
  14. [14]
  15. [15]
    What Is 'Shadow Banning'? - The New York Times
    real or imagined — that social media companies are taking stealth actions to limit a post's visibility.
  16. [16]
    [PDF] “What are you doing, TikTok?” : How Marginalized Social Media ...
    In a 2022 report, Nicholas stated that shadowbanning means “to limit or eliminate the exposure of a user, or content or material posted by a user, to other ...
  17. [17]
    SB1428 - 551R - I Ver - Arizona Legislature
    7. "Shadowban" means: (a) The act of blocking or partially blocking a user or the user's content from an online community ...
  18. [18]
    HB1333 - Hawaii State Legislature
    (5) "Shadow ban" means action by a social media platform, through any means, whether the action is determined by a natural person or an algorithm, to limit ...
  19. [19]
    shadow banning Meaning & Origin | Slang by Dictionary.com
    Mar 22, 2022 · The term shadow banning has been applied to moderation practices used by many popular online platforms, such as Reddit, Twitter, and Facebook.
  20. [20]
    Shadowbanning | Business & Information Systems Engineering
    Oct 28, 2024 · Overall, we consider shadowbanning to be a predominantly punitive mechanism with its specific forms differing in their degree of severity ...<|control11|><|separator|>
  21. [21]
    An end to shadow banning? Transparency rights in the Digital ...
    This paper offers a legal perspective on the phenomenon of shadow banning: content moderation sanctions which are undetectable to those affected.
  22. [22]
    'Black box gaslighting' challenges social-media algorithm ...
    Jan 20, 2022 · This is colloquially known as “shadowbanning,” a term influencers use to describe situations in which accounts display uncharacteristic drops ...
  23. [23]
    Content Moderation with Shadowbanning - PubsOnLine
    Sep 25, 2025 · Their findings reveal that shadowbanning can manipulate opinions and amplify polarization. Other empirical studies on shadowbanning rely on ...Content Moderation With... · 3. Notation And Assumptions · 5. Shadowbanning Strategy...Missing: controversies | Show results with:controversies
  24. [24]
    Where Did the Concept of 'Shadow Banning' Come From? - VICE
    Jul 31, 2018 · The idea that Silicon Valley giants use shadow banning to silence conservatives was popularized by far right publications over the last two ...
  25. [25]
    What is shadow banning? - Gwe Cambrian Web
    May 3, 2019 · So, shadow banning came about in the earliest years of the internet, when bulletin board systems (BBS) for software like Citadel had what they ...
  26. [26]
    Suspension, Ban or Hellban? - Coding Horror
    Jun 4, 2011 · I've always associated hellbanning with the Something Awful Forums. Per this amazing MetaFilter discussion, it turns out the roots of ...
  27. [27]
    History of shadow banning | | lompocrecord.com
    Sep 6, 2018 · Shadow bans started in the early days of online discussion groups and the tools used to police disruptive participants.
  28. [28]
    Shadowbanning Is Big Tech's Big Problem - The Atlantic
    Apr 28, 2022 · When the word shadowban first appeared in the web-forum backwaters of the early 2000s, it meant something more specific. It was a way for online ...Missing: origins | Show results with:origins
  29. [29]
    Conservatives Accuse Facebook of Political Bias
    May 9, 2016 · Facebook responded to a report by the website Gizmodo that accused the social network of suppressing stories from conservative news sources.
  30. [30]
    Facebook to change trending topics after investigation into bias claims
    May 23, 2016 · Facebook has denied allegations that the team responsible for its trending topics section deliberately suppressed conservative views – but says it will improve ...
  31. [31]
    Inside Facebook's GOP charm offensive - POLITICO
    May 16, 2016 · After accusations of anti-conservative bias began to burn Facebook last week, the social media giant quietly reached out to Republican Party ...
  32. [32]
    Thune Statement on Facebook Response to Questions About ...
    Thune Statement on Facebook Response to Questions About “Trending Topics” Bias Allegations. May 23, 2016. WASHINGTON – U.S. Senate Commerce Committee ...
  33. [33]
    Shadow banning: Did Twitter try to boost Dems and hurt GOP?
    Jul 26, 2018 · Shadow banning, broadly defined, is any practice that limits someone's reach on the web. In its strongest form on Twitter, a user could send all ...<|separator|>
  34. [34]
    Shadow banning - Wikipedia
    Platforms frequently targeted by these accusations include Twitter, Facebook, YouTube and Instagram. To explain why users may come to believe ...
  35. [35]
    What is shadowbanning? How do I know if it has happened to me ...
    Nov 3, 2022 · The term “shadowbanning” first appeared in 2001, when it referred to making posts invisible to everyone except the poster in an online forum.
  36. [36]
    Latest 'Twitter Files' reveal secret suppression of right-wing ...
    Dec 8, 2022 · “We don't shadow ban, and we certainly don't shadow ban based on political viewpoints,” Dorsey wrote in a tweet. “We do rank tweets by default ...<|separator|>
  37. [37]
    Elon Musk Will Be 'Digging' Into Shadowbans on First Day at Twitter
    Oct 28, 2022 · Elon Musk says he'll be 'digging' into shadowbans on his first day at Twitter as conservatives urge him to overturn their previous punishments.<|separator|>
  38. [38]
    Musk changes Twitter suspension policy - POLITICO
    Nov 24, 2022 · Elon Musk on Thursday announced a policy change for Twitter in which only accounts that are pushing spam or breaking the law will remain banned.Missing: shadow | Show results with:shadow
  39. [39]
    Twitter Files of internal company documents attract extensive media ...
    Jan 4, 2023 · Musk said Twitter is working on a software update that will "show your true account status, so you know clearly if you've been shadowbanned, the ...
  40. [40]
    Shadowbanning and Elon Musk's Free Speech Narrative - X
    Jul 26, 2025 · Musk's 2022 promise to eliminate shadowbanning (per the Twitter Files and his public statements) and his August 2023 pledge to address ...
  41. [41]
    Explore Twitter Files impact
    Yet, the files' revelation of shadowbanning and algorithmic deprioritization led X to dismantle its Trust and Safety team by mid-2025, replacing it with AI- ...
  42. [42]
    platform policies - X
    Since Elon Musk's 2022 takeover, X has shifted its content moderation approach, focusing on reach restrictions rather than suspensions unless the violation is ...
  43. [43]
    Are the 'Twitter Files' a Nothingburger? - Cato Institute
    Dec 14, 2022 · People who use the term “shadow banning” loosely to mean lowering the visibility of a person's tweets claim that the execs lied, because they ...
  44. [44]
    'Twitter Files' sheds light on longtime practice of shadow banning ...
    Dec 9, 2022 · “A new #TwitterFiles investigation reveals that teams of Twitter employees build blacklists, prevent disfavored tweets from trending, and ...
  45. [45]
    Twitter Files 2: Elon Musk's Hyped Up Exposé Unveils 'Secret ...
    Dec 9, 2022 · Musk said politicians running for office had been shadow banned on Twitter and that the firm is working on a feature to show users whether ...
  46. [46]
    Auditing Political Exposure Bias: Algorithmic Amplification on Twitter ...
    Jun 23, 2025 · Our findings indicate that X 's algorithm skews exposure toward a few high-popularity users across all monitoring accounts, with right-leaning ...Missing: controversies | Show results with:controversies
  47. [47]
    Has Elon Musk shadow banned his conservative critics ... - MSN
    In a recent flare-up within the MAGA community, accusations have surfaced against Elon Musk, suggesting he might be shadow banning his conservative critics on X ...
  48. [48]
    Twitter Appears to Be Shadow Banning Accounts That Criticize Elon ...
    Apr 23, 2025 · Twitter Appears to Be Shadow Banning Accounts That Criticize Elon Musk ; suspended the accounts of journalists, most notably in 2022, ; targeting ...
  49. [49]
    X is reportedly shadow banning accounts that criticise Elon Musk
    Apr 24, 2025 · Social media platform X/Twitter appears to be “shadow banning” accounts that are critical of Elon Musk, a report has suggested.
  50. [50]
    More bad news for Elon Musk after X user's legal challenge to ...
    Jul 12, 2024 · If X continues to violate Europe's data protection rules, the company is on the hook for fines of up to €4000 per day.
  51. [51]
    Meta's Broken Promises: Systemic Censorship of Palestine Content ...
    Dec 21, 2023 · Meta does not formally acknowledge the practice of shadow banning, effectively denying users transparency, as well as adequate access to ...
  52. [52]
    Recent judicial scrutiny of the shadow banning practices
    Aug 20, 2024 · Meta's shadow banning amounts to automated decision-making. In the case in Belgium, due to violations of Facebook's terms of use and ...
  53. [53]
  54. [54]
    Recognizing and responding to shadow bans – RJI
    Sep 30, 2024 · Shadow banning—the practice of reducing the visibility of social media content without the content creator's knowledge—can restrict the reach of ...
  55. [55]
    How to Check a YouTube Shadowban: Your 101 Guide - Bitdefender
    Dec 26, 2024 · The most obvious sign of a YouTube shadowban is a sharp and unexplained decline in view counts, likes, and comments on your videos, without changes to your ...
  56. [56]
    Analyzing Shadowbanning on YouTube: Does It Really Exist?
    May 7, 2025 · Is shadowbanning on YouTube real or just a myth? We break down how it works, possible causes, and how to protect your channel's visibility ...
  57. [57]
    Community Guidelines strike basics on YouTube - Google Help
    Community Guidelines are the rules of the road for how to behave on YouTube. These policies apply to all types of content on our platform.
  58. [58]
    TikTok and Our Shadowbanned Future - Cato Institute
    Oct 27, 2022 · Unlike other platforms, TikTok openly shadowbans, saying it “may reduce discoverability” for objectionable content by “redirecting search ...
  59. [59]
    TikTok Shadow Ban Explained: What It Is, and How to Fix It - Outfy
    Sep 9, 2025 · What Triggers a TikTok Shadow Ban? · 1. Posting Restricted Content · 2. Bullying or Harassment · 3. Copyright Infringement · 4. Using Banned or ...
  60. [60]
    Detecting a Shadowban - Reddit
    Jan 14, 2025 · There are two ways to check for a shadowban yourself so you don't have to rely on anyone else. 1) Go to https://www.reddit.com/appeal. Log in to the account ...
  61. [61]
    Users that over-turn a shadow ban do not have their post histories ...
    Sep 10, 2024 · In the last 18 months we have had a handful of r/anime power users get their accounts shadow banned. For these users this was a false-positive ...
  62. [62]
    Shedding Light on Shadowbanning
    Apr 26, 2022 · Some interview respondents also felt gaslit by social media companies' public denial of shadowbanning, even in the face of their own evidence.
  63. [63]
    Do Social Media Platforms Suspend Conservatives More?
    Oct 15, 2024 · Our research found that accounts sharing pro-Trump or conservative hashtags were suspended at a significantly higher rate than those sharing pro-Biden or ...
  64. [64]
    Social media users' actions, rather than biased policies, could drive ...
    Oct 2, 2024 · MIT Sloan research has found that politically conservative users tend to share misinformation at a greater volume than politically liberal users.
  65. [65]
    What is 'shadow banning', and why did Trump tweet about it?
    Jul 26, 2018 · Why are conservatives talking about “shadow bans”? “Twitter 'SHADOW BANNING' prominent Republicans,” Donald Trump tweeted Thursday morning.
  66. [66]
    What Is a 'Shadow Ban,' and Is Twitter Doing It to Republican ...
    Jul 26, 2018 · Vice claimed that Twitter was shadow banning some Republicans because their accounts “no longer appear in the auto-populated drop-down search box.”
  67. [67]
    Twitter Tried to Curb Abuse. Now It Has to Handle the Backlash
    Jan 16, 2018 · Another video shows a series of current and former employees explaining "shadowbans," a practice by which Twitter will sometimes make it more ...
  68. [68]
    [PDF] Content Moderation By The Numbers - NetChoice
    Without the ability to take down posts that are illegal or inappropriate, using social media would not only be less enjoyable, but potentially far more ...
  69. [69]
    How Social Media Firms Moderate Their Content
    Jan 24, 2022 · Taking down someone's content reduces that user's (and some other users') enjoyment of the site, while not taking it down can also offend others ...
  70. [70]
    Twitter 'Shadow Bans' Compared to Elon Musk's Plan to 'Deboost ...
    Dec 9, 2022 · Musk's tweet from last month stated: "New Twitter policy is freedom of speech, but not freedom of reach. Negative/hate tweets will be max ...
  71. [71]
    Elon Musk says Twitter is rolling out a new feature that will flag ...
    Dec 9, 2022 · “Twitter is working on a software update that will show your true account status, so you know clearly if you've been shadowbanned, the reason ...
  72. [72]
    Reducing the distribution of problematic content | Transparency Center
    Apr 25, 2025 · Meta regularly publishes reports to give our community visibility into community standards enforcement, government requests and internet ...
  73. [73]
    Measuring the effect of Facebook's downranking interventions ...
    Jun 13, 2022 · Facebook's downranking reduced engagement by 16-31% for groups and about 45% for websites with two or more false links. Total engagement does ...
  74. [74]
    YouTube Makes Adjustments to Its Moderation Guidelines - ADWEEK
    Jun 9, 2025 · The changes allow controversial content to remain on YouTube as long as it is considered to be in the public's interest.
  75. [75]
    How Does YouTube Moderate Content? - NeoWork
    Feb 25, 2025 · Content moderation helps remove inappropriate, offensive, or harmful content that could negatively impact the user experience and drive people ...
  76. [76]
    “Shadowbanning is not a thing”: black box gaslighting and the ...
    Oct 28, 2021 · Black box gaslighting captures how platforms may leverage perceptions of their epistemic authority on their algorithms to undermine users' confidence.
  77. [77]
    The Transparency Theater of the Twitter Files - WIRED
    Dec 12, 2022 · Gadde and Beykpour set forth a clear definition of shadowbanning: “deliberately making someone's content undiscoverable to everyone except the ...
  78. [78]
    Shaping opinions in social networks with shadow banning | PLOS One
    Intelligent policies should be enacted to prevent such abuse by social media platforms. Conventional measures such as shadow ban rates may not reveal the bias ...
  79. [79]
    Twitter Files spark debate about 'blacklisting' - BBC
    Dec 13, 2022 · Revelations about Twitter's content moderation decisions have raised questions about political bias.
  80. [80]
    "What are you doing, TikTok?" : How Marginalized Social Media ...
    Apr 26, 2024 · In this paper, we use qualitative surveys and interviews to understand how marginalized social media users make sense of shadowbanning.
  81. [81]
    [PDF] “What are you doing, TikTok?” : How Marginalized Social Media ...
    Several studies have empirically examined social media shadowbanning and users' perceptions of it. In one study, some Facebook users felt stifled and silenced ...
  82. [82]
    Digital silence: the psychological impact of being shadow banned ...
    Sep 19, 2025 · Shadow banning, also known as stealth banning, silently prevents or restricts a user's reach on social media platforms. It is a kind of ...
  83. [83]
    Study looks at 'shadowbanning' of marginalized social media users
    Apr 22, 2024 · Some marginalized groups say social media platforms restrict the visibility of their online posts, according to a new University of Michigan study.
  84. [84]
    "Dialing it Back:" Shadowbanning, Invisible Digital Labor, and how ...
    Jan 10, 2025 · Content creators with marginalized identities are disproportionately affected by shadowbanning on social media platforms, which impacts their ...
  85. [85]
    (PDF) "Dialing it Back:" Shadowbanning, Invisible Digital Labor, and ...
    Aug 7, 2025 · Content creators with marginalized identities are disproportionately affected by shadowbanning on social media platforms, which impacts their ...
  86. [86]
    Spotlight on Shadowbanning - Center for Democracy and Technology
    Oct 4, 2021 · What are the effects of shadowbanning on speech? As with any form of content moderation, we want to understand the chilling effects that ...
  87. [87]
    Shadowbanning: Sorting Fact from Fiction | TechPolicy.Press
    Jul 13, 2022 · Still, shadowbanning is harmful both as a practice and as a rhetorical device. Systems that automatically shadowban have no way of correcting ...
  88. [88]
    How Algorithms Can Influence Content Visibility on Social Media
    Oct 22, 2024 · These studies suggest that shadow banning is indeed a widespread practice across social media platforms (Le Merrer et al., 2021; Bartley et al. ...
  89. [89]
    Official Journal L 277/2022 - EUR-Lex - European Union
    In particular, the concept of 'illegal content' should be defined broadly to cover information relating to illegal content, products, services and activities.
  90. [90]
    Recital 55 - CMS DigitalLaws
    Recital 55: Restriction of visibility may consist in demotion in ranking or in recommender systems, as well as in limiting accessibility by one or more ...
  91. [91]
    How the Digital Services Act enhances transparency online
    Sep 24, 2025 · Once the DSA is fully in force in February 2024, the obligation to provide users with statements of reasons explaining content moderation ...
  92. [92]
  93. [93]
    EU Disinformation Code Takes Effect Amid Censorship Claims and ...
    Jul 1, 2025 · The DSA now requires large tech platforms to meet tougher transparency and audit rules to curb disinformation, reports Ramsha Jahangir.
  94. [94]
  95. [95]
    The Cover Up: Big Tech, the Swamp, and Mainstream Media ...
    Feb 8, 2023 · Former Twitter employees testified on their decision to restrict protected speech and interfere in the democratic process.
  96. [96]
    Hearing on the Weaponization of the Federal Government
    FBI handed out security clearances to folks at Twitter. They communicated with Twitter on this secret teleporter app where messages disappear after certain ...
  97. [97]
    Transcript: House Oversight Hearing with Former Twitter Executives
    Feb 9, 2023 · The hearing focused on Republican concerns over Twitter's handling of a 2020 New York Post story, and its interactions with government.
  98. [98]
    Big Tech Wields Unchecked Power to Suppress Constitutional Speech
    Feb 8, 2023 · That brings us to the specific topic of today's hearing: Twitter's censorship of a news article that shed light on Joe Biden's involvement in ...
  99. [99]
    Moody v. NetChoice, LLC and NetChoice, LLC v. Paxton
    Dec 14, 2023 · In May 2021, Florida enacted S.B. 7072, which restricts social-media platforms from moderating content by censoring, shadow banning, ...
  100. [100]
    Why the Texas and Florida Social Media Cases are Important for ...
    Feb 23, 2024 · Florida's law prohibits platforms from “censor[ing], deplatform[ing], or shadow ban[ning]” certain types of users like journalists or political ...
  101. [101]
    The Supreme Court considers state laws regulating social media ...
    Feb 21, 2024 · “The provisions that prohibit deplatforming candidates, deprioritizing and 'shadow-banning' content by or about candidates . . . or shadow ...
  102. [102]
    Federal Trade Commission Launches Inquiry on Tech Censorship
    Feb 20, 2025 · The Federal Trade Commission launched a public inquiry to better understand how technology platforms deny or degrade users' access to services.
  103. [103]
    Request for Public Comments Regarding Technology Platform ...
    The Federal Trade Commission invites public comment to better understand how technology platforms deny or degrade (such as by “demonetizing” and “shadow banning ...
  104. [104]
    Under Elon Musk, Twitter has approved 83% of censorship requests ...
    May 24, 2023 · The most recent example was the blocking of accounts critical of President Recep Tayyip Erdogan, two days before the elections held in Turkey ...
  105. [105]
    Censorship claims emerge as TikTok gets political in India - BBC
    Jan 30, 2020 · Ajay Barman, 22, is a fading TikTok star in India. Not because he is past his prime, but because - he alleges - he's been "shadow banned" ...
  106. [106]
    Full article: Digital authoritarianism: a systematic literature review
    Nov 24, 2024 · Violations of Rights: surveillance, spyware, malware, and shadow banning. Broadly the same range of digital practices described in Deibert's ...
  107. [107]
    How Shadow Banning Silences Voices Across Latin America
    Oct 16, 2025 · Demanding light in the shadows. Shadow-banning violates fundamental principles enshrined in international human rights law. The Inter-American ...
  108. [108]
  109. [109]
    [PDF] Shedding Light on Shadowbanning - CDT
    Apr 17, 2022 · The final section of this paper recommends three ways social media services can mitigate the harms of shadowbanning: sharply limiting the ...