
Deplatforming

Deplatforming refers to the exclusion or removal of individuals, groups, content, or behaviors from digital platforms, particularly social media networks, typically justified by platform operators as necessary to mitigate harms such as hate speech, misinformation, or violations of service terms. This moderation tactic has proliferated since the mid-2010s amid the growth of user-generated online ecosystems, serving as a primary mechanism for platforms to enforce community guidelines against perceived malicious actors. Empirical analyses reveal deplatforming's causal effects are often limited and context-dependent: bans can temporarily disrupt targeted networks, reducing coordinated harmful activity on the originating site by up to 30-50% in some cases, yet they frequently drive displaced users to alternatives, where engagement and revenue from content rebound or exceed prior levels, as observed in shifts from mainstream video hosts to decentralized ones. For instance, large-scale removals of norm-violating influencers diminish immediate attention to them but fail to eradicate the broader dissemination of their content, with studies documenting near-complete offsets via increased activity on successor platforms. The practice's defining controversies stem from its inherent trade-offs between harm reduction and speech curtailment, including risks of inconsistent enforcement that may reflect operator biases rather than neutral standards, and systemic questions about whether siloed bans adequately address harms in an interconnected web where content migrates rather than dissipates. Proponents view it as essential for curbing real-world harms linked to online extremism, while critics highlight evidence of inefficacy and the potential for amplifying echo chambers, underscoring unresolved debates on platform accountability absent robust, transparent metrics for long-term impact.

Definition and Conceptual Framework

Core Definition and Mechanisms

Deplatforming refers to the systematic exclusion of individuals, groups, or entities from digital platforms by revoking access to communication tools, thereby limiting their ability to disseminate information publicly. This process typically involves permanent or temporary bans enforced by platform operators, often predicated on alleged breaches of terms of service (TOS) prohibiting conduct deemed harmful, such as incitement to violence, dissemination of misinformation, or promotion of hate speech. Unlike mere content removal, deplatforming targets the entity's overall presence, ejecting users or entities from the platform to prevent further engagement. Operational mechanisms encompass direct platform-level actions like account suspensions or terminations, which sever user access and posting privileges on social networks or hosting services. Broader enforcement extends to ancillary infrastructure, including domain registrars denying renewal or transfer (e.g., withholding DNS services), web hosts deactivating sites, and payment processors halting transactions under risk policies. Advertiser boycotts can indirectly amplify these measures by pressuring platforms to act, as coordinated withdrawals reduce revenue tied to controversial content. These steps cascade across interconnected services, as platforms often coordinate with upstream providers to enforce compliance without requiring judicial oversight. Causal dynamics stem from platforms' structural dominance via network effects, where value accrues exponentially with user scale, fostering winner-take-most markets that concentrate gatekeeping authority in few hands. This monopoly-like leverage enables unilateral enforcement, as users and creators depend on these hubs for reach, lacking viable alternatives due to switching costs and audience lock-in. Absent common-carrier regulations, private operators wield broad discretionary power, implementing bans rapidly through automated and human review without adversarial process.
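The layered enforcement described above can be thought of as a dependency chain. The sketch below is a simplified illustration under assumed layer names (not any provider's actual policy): a ban at any core delivery layer (registrar, host, DDoS protection) severs public reach, whereas bans confined to ancillary layers such as payments only demonetize.

```python
# Illustrative sketch of cascading enforcement layers; layer names and the
# reachability rule are simplifying assumptions, not any provider's policy.

CORE_DELIVERY = {"domain_registrar", "web_host", "ddos_protection"}
MONETIZATION = {"payment_processor", "ad_network"}

def publicly_reachable(banned: set) -> bool:
    """The site stays on the open web only while no core delivery layer bans it."""
    return not (banned & CORE_DELIVERY)

def monetizable(banned: set) -> bool:
    """Revenue requires reachability plus at least one unbanned monetization layer."""
    return publicly_reachable(banned) and bool(MONETIZATION - banned)

print(publicly_reachable({"payment_processor", "ad_network"}))  # True: still online
print(monetizable({"payment_processor", "ad_network"}))         # False: demonetized
print(publicly_reachable({"domain_registrar"}))                 # False: severed at DNS
```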

Distinctions from Shadowbanning and Content Moderation

Deplatforming entails the outright termination of a user's account on a platform, resulting in complete exclusion from core functions such as posting, interacting, or accessing the service, whereas shadowbanning involves algorithmic downranking or reduced visibility of content without user notification or apparent changes to account status. This distinction manifests in intent and detectability: shadowbanning operates covertly to limit reach while preserving the user's illusion of normalcy, often as a subtler tool to evade backlash, in contrast to deplatforming's explicit imposition of bans that signal violations publicly. Content moderation, by comparison, comprises a spectrum of interventions including the deletion or labeling of individual posts, temporary suspensions, or warning notations, targeting specific infractions rather than eradicating an entire presence. Deplatforming escalates beyond these by holistically revoking platform access, which causally severs all affiliated dissemination from the banned entity, amplifying disruptions to audience engagement compared to piecemeal removals that permit residual activity. For instance, pre-2022 policies on platforms like Twitter authorized permanent suspensions for repeated "hate speech" breaches, effecting full deplatforming distinct from isolated deletions under the same rules. Demonetization further differs from deplatforming, as it restricts revenue generation—such as ad eligibility or other monetization features—while allowing continued content publication and user interaction, serving as a financial penalty rather than existential exclusion. This graduated approach preserves partial utility, underscoring deplatforming's more absolute curtailment of expressive capabilities.

Historical Development

Pre-Internet and Early Online Instances

In the pre-internet era, deplatforming primarily involved physical efforts to deny speakers access to venues, often through protests, disinvitations, or disruptions at public events, particularly on university campuses where private institutions exercised control over their facilities. A prominent example occurred in the 1960s with George Lincoln Rockwell, founder of the American Nazi Party in 1959, whose planned appearances elicited organized opposition aimed at preventing his speeches. At the University of Michigan in February 1966, student groups and faculty debated and protested his invitation by a conservative organization, framing the event as a test of institutional authority to exclude controversial figures, though the speech ultimately proceeded amid heightened security. Similarly, Rockwell's 1966 lecture at Brown University, his alma mater, faced vocal protests but was not canceled, highlighting early tensions between exclusionary pressures and procedural allowances for speech. These incidents reflected a reliance on venue-level exclusion to limit dissemination of views deemed objectionable, leveraging the platform owner's discretion over physical spaces without the scalability of digital networks. The transition to early online environments extended these dynamics into virtual exclusion, where operators of dial-up bulletin board systems (BBSs), operational from the late 1970s through the 1990s, routinely banned users for rule violations, effectively revoking their access to community discussions and file exchanges. BBSs, which peaked with tens of thousands active by the early 1990s, operated as private servers where sysops enforced house rules to preserve limited resources and user harmony, often ejecting participants for off-topic posts, harassment, or illegal content distribution like pirated software. This practice mirrored pre-digital property rights to curate spaces but introduced permanence via account termination, as banned users could not easily relocate without new hardware or phone lines. A landmark commercial instance unfolded in December 1995, when CompuServe, a major online service provider with millions of subscribers, preemptively blocked global access to about 200 sex-oriented forums following a German prosecutor's investigation into obscenity and child pornography violations under local laws. The decision stemmed from a raid on CompuServe's German partner, prompting the company to restrict content to avoid extraterritorial legal risks, despite operating under U.S. law. This event underscored the causal amplification of deplatforming through centralized digital infrastructure, where a single policy shift could exclude thousands from niche communities, prioritizing compliance over unfettered access in an era before widespread alternatives.

Rise During Social Media Expansion (2000s-2015)

As social media platforms proliferated in the mid-2000s—Facebook in 2004, YouTube in 2005, Reddit in 2005, and Twitter in 2006—moderation practices evolved from basic spam and illegal content removal to address scaling challenges, including advertiser sensitivities and user complaints about harassment. Early efforts emphasized compliance with anti-terrorism laws, such as suspending terrorist-linked accounts; Twitter removed Al-Shabaab's @HSM_Press on September 21, 2013, and Al-Qaeda's @shomokhalislam on September 29, 2013, amid U.S. government pressure. Platforms began issuing transparency reports to document these actions, with Twitter's inaugural 2012 report revealing over 1,000 account suspensions for policy violations, rising in subsequent periods as user numbers grew from millions to hundreds of millions. The 2014 Gamergate controversy accelerated this shift, exposing platforms to coordinated harassment campaigns against gaming industry figures, primarily via Twitter and anonymous imageboards, where users amplified threats and doxxing. In response, Twitter revised its rules on December 2, 2014, to explicitly target "trolls" and abuse, enabling reports of targeted harassment or threats of violence, which resulted in suspensions like that of @chatterwhiteman for attacks on developer Brianna Wu. This policy update marked a move toward proactive enforcement of terms of service for subjective harms like "abusive behavior," with Twitter reporting an 84% increase in global government content removal requests by early 2015, alongside rising user-flagged violations. Reddit followed suit in 2015, formalizing an anti-harassment policy that banned subreddits promoting targeted abuse. On June 10, 2015, it quarantined or removed communities such as r/fatpeoplehate, r/hamplanethatred, r/transfags, and r/neofag, citing repeated personal attacks outside site norms. By August 2015, further updates led to bans of additional offensive groups, including racist ones, as CEO Steve Huffman emphasized curbing content that intimidated users from participation. These changes reflected infrastructural adaptations to platform growth, prioritizing "safe spaces" through TOS invocations for harassment, though critics noted inconsistent application favoring certain viewpoints.

Peak and Polarization (2016-2022)

The period from 2016 to 2022 marked a surge in deplatforming on major platforms, coinciding with intensified U.S. political polarization following the 2016 presidential election. Platforms like Facebook and Twitter responded to widespread concerns over "fake news" influencing the election outcome by expanding content removal and flagging mechanisms. In December 2016, Facebook announced plans to flag disputed stories using user reports and partnerships with independent fact-checkers, including ABC News, the Associated Press, FactCheck.org, PolitiFact, and Snopes, to demote or remove misleading content. These measures represented an escalation from prior ad-based or algorithmic approaches, prioritizing proactive interventions amid accusations that false narratives had swayed voter behavior, though empirical studies later questioned the scale of fake news' electoral impact. Polarization deepened through subsequent election cycles, with platforms facing pressure from governments, advertisers, and advocacy groups to curb perceived misinformation, often targeting right-leaning accounts and narratives. By 2020, amid the COVID-19 pandemic and presidential contest, removal rates for violating content rose significantly; for instance, Facebook reported quarterly takedowns in the tens of millions for hate speech and false information, reflecting policy expansions beyond election-specific issues. This trend linked causally to real-time events, as platforms adjusted rules reactively—such as Twitter's introduction of labels on world leaders' misleading posts in May 2020—rather than through predefined, transparent criteria. The apex occurred in early 2021 following the January 6 U.S. Capitol riot, triggering unprecedented mass deplatformings across platforms. Twitter suspended tens of thousands of accounts associated with the events, including high-profile figures, citing violations of policies against incitement of violence and coordinated harmful activity; this included the permanent ban of then-President Donald Trump on January 8, 2021, after internal deliberations deemed his posts posed ongoing risks. Similar actions by Facebook and others involved temporary suspensions of posting privileges for political figures and the effective shutdown of alternative platforms like Parler via app store removals, framed as emergency measures to prevent violence. These decisions often bypassed standard review processes, with platforms invoking exceptions to long-standing norms against banning elected officials. Subsequent disclosures from the Twitter Files in late 2022 exposed the ad hoc nature of many such interventions, drawing from internal emails and messages spanning prior years. Employees described key bans as "one-off ad hoc decisions" deviating from published rules, influenced by a predominantly left-leaning internal culture that prioritized suppressing content deemed harmful to democratic norms over consistent rule application. This revealed causal drivers rooted in external pressures and ideological priors rather than scalable policies, exacerbating perceptions of viewpoint discrimination amid polarization; for example, teams debated interventions based on potential real-world fallout rather than policy violations per se, leading to inconsistent application across ideological lines. Such practices peaked during election-adjacent crises, underscoring how platforms' reactive scaling of deplatforming amplified divides without robust empirical validation of uniform threat levels.

Notable Examples

High-Profile Political Deplatformings

Following the U.S. Capitol riot on January 6, 2021, former President Donald Trump faced widespread deplatforming across major social media platforms. Twitter permanently suspended his @realDonaldTrump account on January 8, 2021, citing the risk of further incitement of violence based on his posts praising participants in the events. Facebook and Instagram indefinitely suspended his accounts the prior day, January 7, 2021, after he posted content interpreted as endorsing the violence, with the suspension upheld by the company's Oversight Board in May 2021. YouTube restricted his channel uploads for at least seven days initially, later extending limitations, while platforms like Snapchat and Twitch also removed his presence or content. In Brazil, deplatforming targeted allies of then-President Jair Bolsonaro amid investigations into misinformation dissemination. On July 24, 2020, Facebook and Twitter complied with a Supreme Federal Tribunal order to suspend 16 accounts and related profiles belonging to high-profile Bolsonaro supporters, including lawmakers and influencers, as part of a probe into disinformation networks. Bolsonaro's personal accounts remained active, but platforms enforced content-specific removals, such as a video posted on October 25, 2021, falsely claiming COVID-19 vaccines increased AIDS risk, which was deleted from both Facebook and YouTube for violating misinformation policies. Left-leaning political deplatformings have been less frequent among high-profile figures but include actions against accounts linked to Antifa activism. In 2017 and subsequent years, Twitter suspended multiple prominent Antifa-associated accounts for policy violations including doxxing, threats of violence, and harassment, prompting claims from activists of targeted suppression of leftist organizing. During the COVID-19 pandemic, platforms suspended accounts promoting anti-lockdown protests if content was flagged as misinformation, though such cases often involved cross-ideological skeptics rather than strictly left-leaning politicians; for instance, isolated suspensions targeted organizers inciting unrest without permits, but verifiable high-profile examples remain sparse compared to right-leaning instances. Empirical analyses of suspension patterns reveal geopolitical and ideological asymmetries, with accounts sharing pro-Trump or conservative hashtags suspended at significantly higher rates than those aligned with liberal or pro-Biden content during the 2020 U.S. election period, based on audits of over 100,000 suspension actions. This disparity extends internationally, where platforms' enforcement has disproportionately impacted right-leaning political expressions in studies of global account takedowns.

Influencers and Media Figures

In August 2018, Alex Jones and his platform Infowars were banned from Apple, Facebook, YouTube, and Spotify, with Twitter following in September, for alleged violations of policies against hate speech, harassment, and abusive behavior; these actions occurred amid defamation lawsuits filed by families of Sandy Hook shooting victims, whom Jones had repeatedly claimed staged a hoax. The coordinated removals significantly reduced Jones' online reach, though he migrated to alternative platforms. In August 2022, self-described misogynist influencer Andrew Tate faced bans from Meta (Facebook and Instagram), TikTok, YouTube, and Twitch, cited for promoting misogynistic views and associating with "dangerous organizations and individuals" under platform rules, as Romanian authorities investigated him for rape, human trafficking, and organized exploitation. Tate's Twitter account was reinstated in November 2022 after Elon Musk's acquisition of the platform, leading to a surge in followers exceeding six million by early 2023. Deplatformings of left-leaning influencers remain infrequent by comparison; one instance involved Facebook's May 2019 ban of Nation of Islam leader Louis Farrakhan for longstanding antisemitic rhetoric, including references to Jews as "termites." Data from platform enforcement analyses reveal an empirical asymmetry, with accounts using pro-Trump or conservative hashtags suspended at significantly higher rates than those with pro-Biden or liberal equivalents, potentially reflecting differences in content patterns or enforcement priorities amid institutional biases toward left-leaning norms.

Organizational and Group Cases

The neo-Nazi website The Daily Stormer was deplatformed in August 2017 after it published content celebrating the death of Heather Heyer during the Unite the Right rally in Charlottesville, Virginia. Domain registrar GoDaddy terminated its .com registration on August 14, 2017, citing violation of terms prohibiting content that promotes violence; when the site transferred to Google Domains, Google also refused registration the same day, stating it violated policies against offensive content. Cloudflare followed on August 16, 2017, by ceasing DDoS protection and traffic proxying, with CEO Matthew Prince explaining the decision stemmed from the site's role in inciting harm, though he acknowledged it set a dangerous precedent beyond automated enforcement. This coordinated withdrawal of domain, hosting infrastructure, and security services forced the site offline from the clear web, relocating to Russian domains and the dark web. The alternative social media platform Parler, positioned as a free-speech haven for conservative users, underwent extensive deplatforming in January 2021 following the U.S. Capitol riot on January 6. Apple removed Parler from the App Store on January 9, 2021, for failing to implement adequate content moderation to prevent incitement of violence, as evidenced by posts related to the riot; Google had suspended it from the Play Store the prior day on similar grounds. Amazon Web Services then suspended Parler's hosting on January 10, 2021, after reviewing 98 posts that allegedly encouraged violence in violation of its terms, rendering the site inaccessible and halting operations until it secured alternative hosting. Parler's reliance on these third-party app distribution and cloud infrastructure providers amplified the deplatforming's effects, temporarily eliminating its mobile access for millions of users and underscoring vulnerabilities for group-affiliated platforms dependent on major tech providers. Such cases highlight the networked nature of organizational deplatforming, where refusals by intermediary services like content delivery networks, domain registrars, and payment processors create cascading disruptions beyond primary hosting. For instance, the Proud Boys, a pro-Western chauvinist group, saw its official Facebook and Instagram accounts banned in late 2018 after repeated violations of community standards on hate speech and violence, with the platform designating it a hate organization; similar restrictions applied across other services post-2020, limiting coordinated group communications. While predominantly affecting right-leaning entities in high-profile instances, platforms have enforced policies against left-leaning groups for specific violations, such as suspending accounts tied to doxxing or incitement during 2020 unrest, though these often targeted individual actors rather than formalized organizations.

Empirical Evidence on Impacts

Effects on Deplatformed Individuals and Reach

Deplatforming typically results in a substantial reduction in the reach and attention directed toward the affected individual, as evidenced by quasi-experimental analyses of norm-violating influencers. A longitudinal study of 101 deplatformed influencers across platforms found that deplatforming, whether temporary or permanent, led to decreased overall attention metrics, including search interest and mentions, with effects persisting beyond the initial period. Similarly, evaluations of deplatforming as a moderation strategy indicate it minimizes the dissemination of associated content by limiting access to mainstream audiences, though banned users may exhibit heightened activity on alternative venues. In the case of Donald Trump, deplatforming from Twitter, Facebook, and other major platforms following the January 6, 2021, U.S. Capitol events correlated with an immediate and sharp decline in his visible online footprint. Analyses reported a 73% plunge in election-related misinformation volume linked to Trump and allies post-ban, reflecting diminished amplification through algorithmic feeds and user networks previously sustaining his 88 million followers. Trump subsequently migrated to Truth Social, launched in 2022, which attracted over 2 million sign-ups within days but achieved engagement levels orders of magnitude below his prior mainstream presence, with active user metrics stabilizing below 5 million by mid-2022. Alex Jones, banned from YouTube, Facebook, Apple, and Spotify in August 2018, experienced an initial disruption in video distribution and ad revenue streams tied to those platforms, prompting a shift to proprietary sites like Infowars.com and alternative hosts such as Banned.video. Despite claims of financial harm during subsequent legal proceedings, financial disclosures revealed Jones' net worth rose from approximately $5 million pre-bans to $50-100 million by 2022, driven by direct e-commerce sales of supplements and merchandise to a loyal subscriber base exceeding 100,000 paid members. Empirical matching of banned creators' channels to alt-platform equivalents, using donation data as a revenue proxy, further shows that while mainstream reach fragments, monetization can recover or exceed prior levels for those with established off-platform infrastructure. Across influencers, migration to alternatives like Gab, Rumble, or Telegram often yields temporary surges in niche engagement—termed attention spikes—due to media coverage of the bans, but long-term trajectories reveal sustained fragmentation of audiences and reduced cross-platform reach. Banned users demonstrate higher retention on alternative sites yet lower overall visibility, as alternative ecosystems lack the scale and algorithmic push of incumbents, leading to 20-50% effective reductions in traceable metrics. This pattern underscores deplatforming's causal role in constraining individual influence to ideologically aligned niches, with success varying by pre-existing direct channels.

Broader Platform and Ecosystem Dynamics

Deplatforming on mainstream platforms typically reduces the volume and visibility of targeted content within those ecosystems, as suspended users lose access to large audiences and algorithmic amplification. However, this localized suppression is frequently counterbalanced by cross-platform migrations, where deplatformed individuals and communities redistribute their activity to alternative sites, increasing posting density and engagement intensity on those venues. A 2023 analysis of Parler's deplatforming following the January 6, 2021, U.S. Capitol events found that while user activity on Parler itself plummeted, overall participation in fringe communities did not decline; instead, displaced users surged onto platforms like Gab and Telegram, maintaining or elevating their posting frequency and interaction rates. This pattern of migration aligns with network-theoretic principles, wherein deplatforming disrupts mainstream ties but strengthens connections within homophilous subgroups, concentrating users into denser, more insular clusters on alternatives. Empirical data from Twitter bans, examined in a 2023 study, show that prohibited accounts often relocate to ideologically congruent platforms such as Gab, where they exhibit sustained productivity and follower growth, albeit with reduced exposure to diverse viewpoints. Such consolidations amplify echo chamber effects, as network effects—including homophilous ties to similar users—foster rapid reinforcement of shared narratives without the moderating influence of broader discourse. Cross-site dynamics further illustrate how deplatforming reshapes the broader ecosystem: while mainstream platforms experience a net decrease in controversial content volume, alternative venues absorb the influx, leading to heightened partisan segregation and diminished inter-group bridging. For example, in post-2021 deplatforming waves, banned users from Twitter demonstrated resilience by aggregating on sites with laxer moderation, resulting in more cohesive communities but lower cross-ideological reach compared to pre-ban patterns. This redistribution underscores a causal dynamic where interventions inadvertently bolster fringe networks' internal cohesion rather than achieving systemic suppression.
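As a toy illustration of this consolidation effect (synthetic graphs only, not data from the studies above), the following sketch contrasts the density of a broad, sparse mainstream network with the density of a small banned community that reconnects almost completely on an alternative platform; the networkx generators and parameters are arbitrary illustrative choices.

```python
# Toy sketch of post-ban consolidation: a sparse mainstream network versus the
# same small community reconstituted on an alt-platform where members mostly
# follow one another. Synthetic graphs; parameters are illustrative assumptions.
import networkx as nx

mainstream = nx.barabasi_albert_graph(200, 2, seed=1)  # sparse, hub-dominated network
banned_community = range(20)                           # users removed from the mainstream graph

# On the alternative platform, the displaced users densely interconnect.
alt_platform = nx.complete_graph(banned_community)

print(f"mainstream density:      {nx.density(mainstream):.3f}")   # roughly 0.02
print(f"reconstituted community: {nx.density(alt_platform):.3f}") # 1.000
```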

Assessments of Harm Reduction Effectiveness

Empirical studies on deplatforming's role in reducing harm, measured primarily through metrics like content dissemination, user engagement, and toxicity, yield mixed findings, with evidence of short-term platform-specific declines but limited proof of sustained ecosystem-wide or offline benefits. A 2023 analysis of six disruptions targeting hate-based organizations on Facebook found that removing key members decreased hateful content production by an average of 69%, consumption by 62%, and intra-group engagement by 55%, suggesting localized containment of coordinated hate speech. Similarly, deplatforming high-profile accounts following the January 6, 2021, U.S. Capitol events on Twitter (now X) reduced the circulation of misinformation URLs by those users by over 70% and by their followers by approximately 40%, as measured through difference-in-differences models comparing pre- and post-ban periods. These effects persisted for months, indicating that targeted removals can curb amplification on the affected platform. However, such interventions often fail to eliminate harm across the broader online ecosystem, as users migrate to alternative venues with laxer moderation. Deplatforming of the Parler platform in January 2021, following its association with post-election unrest, did not diminish overall activity among its users on other fringe platforms like Gab or Telegram; instead, migration sustained or redirected engagement without net reduction in fringe participation. Conspiracy-oriented communities exhibit particular resilience, with one study showing that while deplatforming initially shrinks group size and connectivity, these networks reconstitute faster than non-conspiracy counterparts, maintaining cross-group ties and content volume over time. A large-scale ban of nearly 2,000 subreddits in 2020 led to 15.6% of affected users departing the site and a 6.6% average drop in activity among remaining users, but it also prompted shifts to less regulated spaces, complicating claims of overall harm reduction. Deplatforming norm-violating influencers further demonstrates attention reduction but underscores the limits of such proxies for true harm abatement. Longitudinal tracking of 101 influencers deplatformed from major platforms revealed that permanent bans decreased attention by 64% and Wikipedia views by 43% after 12 months, with temporary suspensions yielding smaller but positive effects; misinformation-focused deplatformings amplified these drops. Yet these metrics capture visibility rather than causal impacts on real-world harms like violence or radicalization, where long-term data is scarce and confounded by migration dynamics. Systemic reviews note the absence of robust causal evidence linking deplatforming to decreased offline incidents, as displaced actors often intensify activity in unregulated environments, potentially heightening fringe echo chambers without verifiable societal gains. Overall, while deplatforming achieves tactical online suppressions, empirical gaps persist in demonstrating net harm reduction beyond immediate platform boundaries.
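The difference-in-differences comparison referenced above can be illustrated with a minimal sketch; the numbers below are synthetic placeholders, not figures from the cited studies.

```python
# Minimal difference-in-differences sketch with synthetic numbers (not data from
# the studies cited above): banned accounts are compared against matched,
# never-banned controls so that platform-wide trends are netted out.

# mean daily shares of flagged URLs, before and after the ban date
treated_pre, treated_post = 120.0, 35.0   # accounts that were deplatformed
control_pre, control_post = 100.0, 95.0   # matched accounts that were not

did_estimate = (treated_post - treated_pre) - (control_post - control_pre)
print(f"difference-in-differences estimate: {did_estimate:+.1f} shares/day")
# a negative estimate attributes the drop to the ban rather than to the
# general decline captured by the control group
```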

Arguments in Favor of Deplatforming

Platform Liability and Safety Imperatives

Platforms operate as private entities with the discretion to moderate content, yet face potential liability for facilitating harm through user-generated material, prompting proactive deplatforming to mitigate legal risks. Under Section 230 of the Communications Decency Act of 1996, interactive computer services enjoy immunity from liability for third-party content, but subsection (c)(2) explicitly shields platforms engaging in "good faith" efforts to restrict offensive or harmful material, such as content promoting violence. This framework incentivizes moderation, as failure to act could expose platforms to claims under other statutes, including aiding and abetting liability or negligence in distributing dangerous content. Proponents argue that such imperatives align with platforms' roles as curators, compelling them to prioritize user safety over unrestricted hosting to avoid lawsuits and regulatory scrutiny. Safety considerations extend to averting real-world violence incited by amplified extremist rhetoric, with deplatforming positioned as a necessary tool for containment. The 2019 Christchurch mosque shootings, where attacker Brenton Tarrant livestreamed the assault on Facebook and disseminated a manifesto across platforms, exemplified how rapid online propagation can inspire copycat acts, galvanizing industry-wide removals of similar content. Platforms responded by enhancing detection algorithms and policies to excise terrorist manifestos and livestreamed violence, arguing that unchecked spread constitutes a direct pathway to physical harm. Advocates cite post-deplatforming patterns as evidence of efficacy in curbing extremism's momentum, though these observations rely on correlations rather than definitive causation. Research indicates that removing hate-oriented accounts diminishes overall platform toxicity, with one study finding that excising hundreds of such entities causally improved network health by reducing toxic interactions. Similarly, deplatforming norm-violating influencers has been linked to a 63% drop in their online attention after 12 months, limiting exposure to audiences prone to radicalization. These measures, per supporters, safeguard communities by disrupting pathways from incitement to offline violence, fulfilling platforms' duty to foster environments free from foreseeable perils.

Purported Benefits for Public Discourse

Proponents of deplatforming contend that it ameliorates public discourse by diminishing the proliferation of misinformation and divisive content, thereby enabling more rational and evidence-based exchanges among users. By excising accounts deemed to propagate falsehoods or hate speech, platforms purportedly shield audiences from manipulative narratives that could otherwise polarize communities or incite unrest. This perspective posits that sustained exposure to unchecked harmful content erodes trust in institutions and facts, whereas targeted removals restore a baseline of verifiable information. Empirical claims supporting this include analyses of post-January 6, 2021, deplatformings on Twitter, where the intervention reportedly curtailed misinformation circulation not only from the banned accounts but also from their followers, reducing overall reach by measurable margins. Similarly, research on deplatforming "bad actors" has found it effective in policing hate speech and curtailing misinformation spread, with platforms experiencing lower incidences of coordinated false narratives after such actions. In the context of health-related discourse, initiatives like Facebook's removal of anti-vaccine content during the COVID-19 pandemic were advanced as mechanisms to limit the viral transmission of dangerous health falsehoods, preserving space for authoritative messaging. Regarding norm enforcement, advocates assert that deplatforming reinforces civil standards by deterring escalatory behaviors, such as harassment or calls to violence, which proponents link to degraded discourse quality. Expert assessments indicate that banning extremists can diminish aggregate hate speech volumes on platforms, fostering environments where moderate voices predominate and constructive debate thrives over antagonism. Platform analyses of hate organization removals further claim causal improvements in site-wide health metrics, including reduced toxicity and enhanced user retention among non-extremist demographics. These benefits are often highlighted in academic and policy circles, though some claims originate from entities with incentives to justify moderation practices.

Criticisms and Opposing Views

Free Speech Implications and Censorship Risks

Deplatforming by private technology platforms raises profound concerns regarding free speech, as these entities have evolved into the primary venues for public discourse, effectively serving as contemporary equivalents to traditional town squares. Unlike government actors bound by the First Amendment, platforms wield unilateral power to remove users or content, enabling what amounts to viewpoint discrimination without legal oversight or appeal mechanisms. This dynamic allows private gatekeepers to shape narratives by silencing dissenting perspectives, potentially stifling the open exchange of ideas essential to democratic deliberation. A key risk is the slide from targeted restrictions on imminent violence or threats to broader suppression of lawful dissent. Initially framed as necessary to prevent harm, such as incitement to physical violence, deplatforming policies have expanded to encompass challenges to prevailing orthodoxies, including queries about election integrity. For instance, following the 2020 U.S. presidential election, platforms like Twitter systematically suspended or restricted accounts questioning vote counts or procedural irregularities, reclassifying such speech from permissible debate to "election misinformation" warranting removal. This illustrates how vague standards can erode protections for non-violent expression, transforming platforms into arbiters of truth rather than neutral conduits. Empirical analyses further highlight enforcement asymmetries that exacerbate these risks, with conservative-leaning users disproportionately affected. A 2024 Yale School of Management study examining hashtag usage found that accounts promoting pro-Trump or conservative content faced suspension rates significantly higher than those with pro-Biden or liberal equivalents, even when controlling for policy violations. Such disparities suggest selective application of rules, potentially driven by internal biases or external pressures, which undermines claims of neutral moderation and concentrates power in unelected hands. Critics contend this not only discriminates against specific ideologies but also chills expression across the spectrum, as users anticipate uneven scrutiny.

Evidence of Ineffectiveness and Unintended Consequences

Deplatforming efforts have frequently failed to achieve net reductions in harmful online activity due to user migration to alternative platforms, where engagement often persists or intensifies. A 2023 study analyzing the January 2021 deplatforming of Parler following the U.S. Capitol riot found that while Parler's user base declined sharply, affected users increased their posting volumes on other platforms such as Gab and Telegram by comparable margins, resulting in no overall decrease in social media activity. Similarly, examinations of deplatformed users from mainstream sites like Reddit reveal heightened toxicity and activity on fringe clones, suggesting that isolated bans displace rather than diminish norm-violating behavior across the ecosystem. Unintended backfire effects have also emerged, where deplatforming correlates with reinforced commitment among fringe audiences. Research on the removal of hate organization leaders from Facebook in multiple disruptions between 2018 and 2021 showed short-term reductions in platform-specific hate speech, but target audiences exhibited sustained or redirected engagement on successor groups or external channels, potentially entrenching ideologies through perceived martyrdom narratives. Observational data from deplatforming events indicate radicalization risks in extremist communities, with bans sometimes amplifying internal cohesion and grievance narratives as users frame exclusions as evidence of systemic opposition, though causal links remain challenging to isolate without experimental controls. Empirical assessments of deplatforming's efficacy are limited by the absence of randomized controlled trials, relying instead on quasi-experimental designs prone to confounding factors like concurrent events or self-selection in platform migrations. Long-term studies are scarce, with most evidence drawn from short windows post-intervention, obscuring whether displaced activity eventually dissipates or evolves into offline harms. This methodological gap underscores an overconfidence in deplatforming's harm reduction potential, as cross-platform tracking reveals persistent aggregate exposure rather than elimination of targeted content.

Asymmetry, Bias, and Power Concentration

Deplatforming practices demonstrate a notable empirical asymmetry, with right-leaning accounts and figures facing suspensions and bans at higher rates than their left-leaning counterparts. A 2024 analysis of Twitter data revealed that accounts using pro-Trump or conservative hashtags were suspended at significantly higher rates than those using pro-Biden or liberal hashtags, even after accounting for activity levels. This pattern aligns with broader observations from platform data, where conservative users experienced elevated enforcement actions during politically charged periods, such as post-2020 election moderation waves targeting election-related claims predominantly from the right. Comparable left-leaning rhetoric, including calls for violence associated with 2020 urban unrest, rarely triggered equivalent high-profile deplatforming, highlighting selective application despite similar potential for incitement. While some studies attribute higher conservative suspension rates to elevated sharing of misinformation or rule-violating content by those users, this does not fully explain inconsistencies in enforcement thresholds or the rarity of symmetric actions against left-leaning violations. Internal platform cultures contribute causally, as evidenced by employee political donations skewing overwhelmingly Democratic—often exceeding 95% at major firms—fostering norms that prioritize suppression of dissenting views over neutral rule application. Releases from the Twitter Files exposed internal deliberations where moderation teams hesitated on left-leaning content while accelerating actions against conservative accounts, reflecting a bias toward protecting prevailing institutional narratives rather than uniform standards. The concentration of power in a handful of tech monopolies amplifies these biases, enabling opaque, unaccountable decisions without democratic oversight or appeal mechanisms. Platforms like Facebook and pre-acquisition Twitter commanded over 70% of U.S. social media traffic in the early 2020s, allowing executives to wield censorship authority akin to state powers amid pressures from advertisers boycotting right-leaning content and governments requesting removals disproportionately affecting conservative speech. This structural dynamic incentivizes enforcement aligned with elite consensus—often left-leaning due to Silicon Valley demographics—over impartiality, as competitive alternatives remain marginal and reliant on mainstream gateways for reach.

United States Framework and Challenges

Section 230 of the Communications Decency Act, enacted on February 8, 1996, immunizes providers and users of interactive computer services from civil liability for content created by third parties, while also protecting platforms from liability for good-faith efforts to block or restrict access to material deemed obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable. This dual protection—under subsections (c)(1) treating platforms as non-publishers of user content and (c)(2) safeguarding moderation decisions—allows companies to deplatform users or remove posts without incurring distributor or publisher liability that might otherwise apply under common law. Absent Section 230, platforms might face heightened legal risks for moderation, potentially constraining aggressive deplatforming practices. The statute does not impose common carrier obligations on platforms, such as mandatory neutrality in hosting speech, distinguishing them from telecommunications providers regulated under Title II of the Communications Act. This enables unilateral content curation as private editorial choices, free from federal mandates to carry all lawful speech. However, challenges emerge from the interplay with the First Amendment, which prohibits government abridgment of speech but does not compel private entities to host content, allowing platforms to enforce policies reflecting their own expressive interests. A key challenge involves allegations of government "jawboning"—persuasive or coercive pressure on platforms to moderate content—which courts scrutinize as potential state action violating the First Amendment when it crosses into compulsion rather than mere advocacy. In Missouri v. Biden, filed on May 5, 2022, by attorneys general from Missouri and Louisiana alongside individual plaintiffs, the suit claimed Biden administration officials, including from the White House, FBI, and CDC, pressured platforms like Facebook, YouTube, and Twitter to censor disfavored views on COVID-19 origins, vaccines, and election integrity through repeated demands, threats of antitrust scrutiny, and public shaming. On July 4, 2023, U.S. District Judge Terry Doughty granted a preliminary injunction, finding the government likely engaged in a "far-reaching and widespread censorship campaign" via coercion exceeding protected persuasion. The Fifth Circuit, in a September 8, 2023, decision, largely affirmed, ruling that officials' actions constituted viewpoint discrimination and exceeded First Amendment bounds by jawboning platforms into suppressing conservative-leaning speech. These rulings highlight enforcement challenges: while Section 230 empowers private moderation, government involvement risks invalidation if proven coercive, yet proving such causation amid platforms' independent policies remains an evidentiary hurdle, as platforms retain discretion to align with or ignore official entreaties. Pre-2023 frameworks thus preserve platform autonomy but expose systemic vulnerabilities where official pressure amplifies deplatforming of non-mainstream views without direct statutory redress for private biases.

International and EU Approaches

In the European Union, the Digital Services Act (DSA), which entered into force on November 16, 2022, and began phased application from February 17, 2024, regulates intermediary services including social media platforms to address illegal content and systemic risks such as disinformation and harm to civic discourse. For very large online platforms (VLOPs) with over 45 million users, the DSA mandates risk assessments and mitigation measures, which may include user deplatforming to prevent the amplification of harmful content; non-compliance can result in fines up to 6% of global annual turnover imposed by the European Commission. Platforms must provide transparency reports detailing moderation actions, including suspensions, with statements of reasons and appeal mechanisms for affected users, aiming to balance enforcement with accountability. The DSA distinguishes between general obligations for all platforms—such as expeditious removal of notified illegal content—and enhanced duties for VLOPs, where deplatforming decisions target "systemic risks" like election interference, without granting platforms immunity from liability for user-generated harms. Critics, including legal scholars, argue this framework empowers unelected regulators to influence moderation practices extraterritorially, potentially pressuring platforms to err toward over-removal to avoid penalties, though data as of 2025 shows an initial focus on audits rather than mass deplatformings. In the United Kingdom, the Online Safety Act, which received royal assent on October 26, 2023, imposes proactive duties on user-to-user services to prevent exposure to priority illegal harms, including terrorism content and child sexual exploitation material, requiring platforms to assess risks and implement removal systems, with fines up to 10% of qualifying worldwide revenue or £18 million enforced by Ofcom. Duties for illegal content became enforceable on March 17, 2025, compelling platforms to use tools like hash matching and URL detection for swift deplatforming of offending accounts, while smaller services face lighter tailored obligations. The Act's emphasis on "safety by design" has led to concerns over chilled speech, as platforms may preemptively suspend users to meet vague harm thresholds, diverging from U.S. immunity models by holding companies directly accountable for systemic failures. Outside Europe, Brazil exemplifies judicial-driven deplatforming, where the Supreme Federal Tribunal (STF) has ordered platforms to suspend accounts disseminating electoral misinformation, as in 2022 rulings by Justice Alexandre de Moraes targeting networks linked to former President Jair Bolsonaro's election challenges, resulting in blocks of Telegram channels and accounts for non-compliance with content removal directives. These monocratic decisions, upheld under provisions of Brazil's 1988 Constitution, fined platforms up to 10% of local revenue and threatened nationwide bans, culminating in the 2024 suspension of X (formerly Twitter) after repeated defiance of orders to block specific users. Such approaches highlight risks of executive-judicial overreach, as individual justices wield broad discretion without legislative checks, contrasting with regulatory models like the DSA by prioritizing rapid enforcement over transparency and potentially enabling politicized targeting of opposition figures.

Recent Developments (2023-2025)

In February 2025, the U.S. Federal Trade Commission (FTC) initiated an inquiry into technology platform censorship practices, examining how platforms deny or degrade user access to services through mechanisms such as deplatforming, shadowbanning, and demonetization. The inquiry, launched on February 20, seeks public comments to assess potential anticompetitive effects and consumer harms from such moderation, signaling increased regulatory scrutiny of platforms' content controls beyond traditional safety concerns. Legislative efforts intensified in 2025, with the bipartisan STOP HATE Act, announced on July 24, proposing fines of up to $5 million per day for companies failing to report and enforce moderation against terrorist content and hate speech. Sponsored by Representative Josh Gottheimer with bipartisan co-sponsorship and support from the Anti-Defamation League, the bill mandates transparency in moderation outcomes but has drawn criticism for potentially outsourcing censorship decisions to advocacy groups, raising risks of viewpoint discrimination under the guise of safety. On platforms like X (formerly Twitter), Elon Musk's ownership led to policy shifts reducing deplatforming asymmetry, including high-profile reinstatements and active engagement; former President Donald Trump, previously banned across major platforms post-January 6, 2021, resumed posting on X in August 2024 ahead of an interview with Musk, contributing to his visibility during the 2024 election cycle. Analyses post-2024 election highlighted the failure of sustained deplatforming efforts against Trump, as alternative channels and policy relaxations enabled his return to mainstream discourse without evident suppression of influence. Public support for content restrictions declined in 2025, with a Pew Research Center survey from April showing only 52% of U.S. adults favoring government limits on false information online, down from 60% in 2023, and similar drops for tech company actions on violent content. Globally, Pew's April 2025 report across 35 countries underscored broad prioritization of free expression, though variances persisted in perceptions of online freedoms. Debates over Section 230 reforms accelerated amid AI-generated deepfakes, with a July 2024 bipartisan House bill conditioning immunity on platforms' efforts to detect and label such content, while broader proposals call for sunsetting the provision by late 2025 to address evolving liabilities. Concurrently, platforms like Meta announced in January 2025 the end of third-party fact-checking, shifting to user-generated community notes, with further policy adjustments later in the year, reflecting a turn away from aggressive intervention toward reduced enforcement intensity.

Alternatives and Future Trajectories

Migration to Alternative Platforms

Following the deplatforming of prominent conservative figures and platforms after the January 6, 2021, U.S. Capitol events, users migrated en masse to alternatives like Parler and Gab, with Parler peaking at over 15 million users amid a surge that propelled it to the top of app stores before its removal. Gab experienced a comparable influx, as deplatforming from mainstream sites drove millions in new registrations and revenue, according to a 2022 Stanford Internet Observatory analysis. Similarly, Telegram saw heightened adoption by U.S. far-right extremists between 2020 and 2023, serving as a hub for militia and radical networks due to its lax moderation and encrypted channels. These shifts illustrate a pattern where deplatformed communities seek ideologically aligned spaces, often resulting in temporary spikes in user acquisition. Truth Social, launched in 2022 by former President Donald Trump after his Twitter suspension in January 2021, exemplifies loyalty-driven migration, attracting a dedicated base of Trump supporters unwilling to engage mainstream platforms. While its user base remains modest—around 2% of U.S. adults report using it for news, with Trump maintaining 4.43 million followers compared to his prior 88 million on Twitter—the platform sustains engagement through niche appeal, though it struggles with broader scalability due to limited infrastructure and algorithmic reach. Alternative platforms like these face inherent challenges in achieving mainstream viability, as smaller networks constrain content virality and monetization, yet they cultivate intense user retention among ideologically committed groups. Empirically, such migrations correlate with diminished overall online attention and engagement for deplatformed entities—studies estimate a 43-63% reduction in search and visibility metrics post-exile—while fostering persistent echo chambers that amplify homogeneous viewpoints. Research on Gab, Parler, and Telegram highlights how these environments reinforce right-leaning insularity, with users recycling similar narratives and exhibiting heightened radicalization risks absent cross-ideological exposure. Deplatforming thus redirects activity to fringe ecosystems without eradicating it, potentially sustaining long-term user loyalty at the cost of broader integration.

Decentralization and Technological Countermeasures

Decentralization efforts in online platforms aim to mitigate deplatforming risks by distributing content and hosting across multiple nodes, relays, or servers, thereby eliminating centralized chokepoints where a single authority can enforce bans. Federated systems like those using the ActivityPub protocol allow independently operated instances to interconnect voluntarily, enabling content to propagate beyond any one server's policies. This structure inherently reduces the leverage of deplatforming, as users and data can migrate or replicate across independent operators without relying on gatekeepers. Mastodon, a prominent federated microblogging network, exemplifies this approach, with users hosting their own servers that federate via ActivityPub. Following Elon Musk's acquisition of Twitter in October 2022, Mastodon experienced rapid growth, expanding from approximately 500,000 active monthly users to nearly 9 million by November 2024, driven in part by users seeking alternatives amid concerns over centralized moderation. Donations to the project surged 488% in 2022, reaching €325,900, reflecting increased community support for self-sustained, decentralized infrastructure. Such federation ensures that deplatforming on one instance does not erase content, as it persists and remains accessible via interconnected peers. Protocols like Nostr further advance censorship resistance through a relay-based architecture, where messages are stored and forwarded by independent servers without a central authority. Launched in 2020, Nostr saw accelerated adoption post-2022, with client applications reporting over 18 million users by mid-2025. An analysis of the network revealed 616 million post replications across 17.8 million unique posts, averaging 34.6 relays per post, demonstrating high redundancy that safeguards against targeted removals. This replication mechanism causally undermines deplatforming by ensuring content availability persists even if specific relays enforce bans, as users can connect to alternative relays. Blockchain-based platforms provide immutable storage and economic incentives to resist censorship, leveraging distributed ledgers to verify and propagate content without intermediaries. Platforms such as DTube, built on blockchain and peer-to-peer storage for video sharing, enable users to upload and access media in a manner resistant to unilateral takedowns, as transactions are recorded on-chain and verifiable by consensus. Self-hosting tools, including software for running personal instances of federated services or custom servers, allow individuals to bypass platform dependencies entirely; for instance, deploying Mastodon or similar software on private hardware evades bans by granting full operational control. Empirical assessments indicate these technologies contest centralized platform dominance by fostering resilient networks, though challenges persist in achieving mass adoption.
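The replication-based resilience described above can be illustrated with a small simulation; this is not real Nostr client code, and the relay names and counts are illustrative assumptions rather than measurements.

```python
# Illustrative simulation (not real Nostr client code): replicate a post across
# many independent relays and check whether it remains retrievable when most
# of them delete it. Relay names and counts are assumptions for illustration.
import random

RELAYS = [f"relay_{i}" for i in range(35)]  # roughly the average replication cited above

def publish(post_id: str, relays: list) -> dict:
    """Store the post on every relay; returns a relay -> set-of-posts map."""
    return {relay: {post_id} for relay in relays}

def reachable(post_id: str, stores: dict, banning_relays: set) -> bool:
    """The post survives as long as any non-banning relay still serves it."""
    return any(post_id in posts
               for relay, posts in stores.items()
               if relay not in banning_relays)

stores = publish("note-abc", RELAYS)
banning = set(random.sample(RELAYS, 30))       # even if 30 of 35 relays remove it
print(reachable("note-abc", stores, banning))  # True: five relays still serve it
```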

References

  1. [1]
    Deplatforming - The Yale Law Journal
    Nov 2, 2023 · 34 As a result, “deplatforming” is the exclusion or ejection of not only individuals or entities, but also content or particular behavior from a ...
  2. [2]
    The systemic impact of deplatforming on social media - PMC
    Deplatforming, or banning malicious accounts from social media, is a key tool for moderating online harms. However, the consequences of deplatforming for ...
  3. [3]
    Disrupting hate: The effect of deplatforming hate organizations ... - NIH
    Jun 5, 2023 · How does removing the leadership of online hate organizations from online platforms change behavior in their target audience?
  4. [4]
    Does Deplatforming Work? Unintended consequences of banning
    Being deplatformed on YouTube results in a 30% increase in weekly Bitcoin revenue and a 50% increase in viewership on Bitchute.
  5. [5]
    Deplatforming Norm-Violating Influencers on Social Media Reduces ...
    We find that both permanent and temporary deplatforming reduce online attention toward influencers.
  6. [6]
    The Unintended Consequence of Deplatforming on the Spread of ...
    Nov 1, 2023 · “Deplatforming” refers to the banning of individuals, communities, or entire websites that spread misinformation or hate speech.
  7. [7]
    [PDF] Understanding the Effect of Deplatforming on Social Networks
    Most research in this space has looked at deplatforming in a siloed fashion, evaluating the effect that these actions have on the platforms the accounts where ...
  8. [8]
    Efficacy and Unintended Consequences of a Massive Deplatforming ...
    Jun 13, 2024 · Here, we assess the effectiveness of The Great Ban, a massive deplatforming operation that affected nearly 2,000 communities on Reddit. By ...
  9. [9]
    DEPLATFORM Definition & Meaning - Merriam-Webster
    transitive verb. : to remove and ban (a registered user) from a mass communication medium (such as a social networking or blogging website)
  10. [10]
    DEPLATFORM Definition & Meaning - Dictionary.com
    to prohibit (a person or people) from sharing their views in a public forum, especially by banning a user from posting on a social media website or application.
  11. [11]
    [PDF] Deplatforming and Freedom: - James Madison Institute
    In extreme cases, users and even competing applications find themselves “deplatformed,” or cut off from connectivity, by service providers for actions that are ...
  12. [12]
    [PDF] Network Effects and Market Power: What Have We Learned in the ...
    SINCE THE EARLY YEARS OF PLATFORM and antitrust analysis, network effects have been an important consideration when analyzing potential market power.
  13. [13]
    [PDF] Assessing the Strength of Network Effects in Social Network Platforms
    One factor which tends to limit the strength of network effects on social media platforms is tight network clustering. While users value having their close ...
  14. [14]
    [PDF] Scholarship@Vanderbilt Law Deplatforming
    34 As a result, “deplat- forming” is the exclusion or ejection of not only individuals or entities, but also content or particular behavior from a platform.35 ...
  15. [15]
    The shadow banning controversy: perceived governance and ...
    Mar 12, 2022 · Shadow banning refers to a controversial and hard to detect type of social media content moderation. The term was reportedly coined by ...
  16. [16]
    Shadowbanning in Relation to (De)platformization
    Jul 6, 2021 · Shadowbanning is an attempt to deplatform creators whose content the company deems undesirable. Users have found creative ways around Instagram's increasingly ...
  17. [17]
    [PDF] Reduction / Borderline content / Shadowbanning - Yale Law School
    The broader public debate about content moderation has overwhelmingly focused on removal: social media platforms deleting content and ...
  18. [18]
    The Art of the Shadowban. Web3 social is supposed to be different…
    May 8, 2024 · Whereas deplatforming involves the outright removal of an account, shadowbanning involves restricting an account. Shadowbanning may actually be ...
  19. [19]
    A Guide to Content Moderation for Policymakers - Cato Institute
    May 21, 2024 · Content moderation is how companies create their preferred online space using rules to decide what content is allowed, protected by the First ...
  20. [20]
    Deplatforming/De-platforming
    Dec 17, 2021 · 'Deplatforming', or 'de-platforming', refers to the ejection of a user from a specific technology platform by closing their accounts, banning them, or blocking ...
  21. [21]
    Twitter, other tech companies slip on removing hate speech, EU ...
    Nov 24, 2022 · European Union data shows that Twitter took longer to review hateful content and removed less of it in 2022 compared with the previous year.
  22. [22]
    Demonetization – Glossary of Platform Law and Policy Terms
    Dec 17, 2021 · Compared to deplatforming, which entails the removal of a user, demonetization is a less stringent sanction, as it only removes potential ...
  23. [23]
    Demonetization: Definition & Historical vs. Current Examples
    This type of demonetization can apply to individual posts or an entire channel, but it's different from “deplatforming,” which refers to banning content ...
  24. [24]
    Invitation to a Nazi | University of Michigan Heritage Project
    The contest over George Lincoln Rockwell's appearance at Michigan became a classic battle in microcosm over the proper role of the First Amendment on a college ...
  25. [25]
    When An Actual Nazi Spoke on an American College Campus
    Nov 28, 2016 · 50 years ago in 1966, American Nazi Party co-founder George Lincoln Rockwell lectured at his alma mater Brown University.
  26. [26]
    The Lost Civilization of Dial-Up Bulletin Board Systems - The Atlantic
    Nov 4, 2016 · The first BBS came to life in 1978 during a particularly bad Chicago blizzard. ... Once the web arrived in the mid-1990s, it seemed inevitable ...
  27. [27]
  28. [28]
    Social Media Usage: 2005-2015 - Pew Research Center
    Oct 8, 2015 · Today, 90% of young adults use social media, compared with 12% in 2005, a 78-percentage point increase. At the same time, there has been a 69- ...
  29. [29]
    Twitter suspensions - Wikipedia
    Between 2014 and 2016, Twitter suspensions were frequently linked to ISIL-related accounts. A "Twitter suspension campaign" began in earnest in 2015, and on ...
  30. [30]
    Twitter's New Transparency Report Shows Increase in Government ...
    Jan 29, 2013 · Removal demands to Twitter by governments around the world also increased sharply since six months ago. In the first half of 2012, there were ...
  31. [31]
    How Gamergate foreshadowed the toxic hellscape that the internet ...
    Mar 24, 2025 · In 2014, a group of anonymous gamers launched a harassment campaign against women in the gaming industry. More than a decade later, the tactics ...
  32. [32]
    A New Harassment Policy for Twitter - The Atlantic
    Dec 2, 2014 · The social network is trying to balance openness and the prevention of abuse. ... It's no secret that ...
  33. [33]
    A #GamerGate Target Wants Twitter to Make Harassment Harder
    Oct 14, 2014 · The Twitter account used to harass game developer Brianna Wu—@chatterwhiteman— has been removed from the site.
  34. [34]
    Twitter implements anti-harassment policy - UPI.com
    Dec 3, 2014 · Twitter has stepped up its strategy to stop harassment on its website with a new policy and set of anti-harassment tools.
  35. [35]
    Twitter transparency report: US among biggest offenders requesting ...
    Feb 9, 2015 · In the same period Twitter received an 84% increase in government and government-sanctioned demands to remove content from its service.Missing: harassment | Show results with:harassment
  36. [36]
    Reddit bans five subforums over harassment concerns - The Guardian
    Jun 10, 2015 · Five subreddits were removed: r/fatpeoplehate and r/hamplanethatred, both of which are about hating overweight people; r/transfags and r/neofag, ...
  37. [37]
    Reddit Sets New Content Policy, Banning Some Racist Communities
    Aug 5, 2015 · Reddit, the popular community news site, formalized a new set of guidelines that aim to restrict some of the risqué and potentially offensive content posted to ...
  38. [38]
    Reddit CEO Steve Huffman Changes Content Policy, Lists Bannable ...
    Jul 16, 2015 · Earlier this year, Reddit took a stand and banned non-consensual pornography. This was largely accepted by the community, and the world is a ...
  39. [39]
    Reddit bans r/fatpeoplehate, four other subreddits under new ...
    Jun 12, 2015 · Reddit's newly minted anti-harassment policy has led to the axing of some subreddits but has left other blatantly racist subreddits untouched.
  40. [40]
    Social Media and Fake News in the 2016 Election
    Following the 2016 US presidential election, many have expressed concern about the effects of false stories ("fake news"), circulated largely through social ...
  41. [41]
    Facebook to begin flagging fake news in response to mounting ...
    Dec 15, 2016 · Disputed articles will be marked with the help of users and outside fact checkers amid widespread criticism that fake news influenced the US ...
  42. [42]
    Checking in with the Facebook fact-checking partnership
    Apr 4, 2018 · Facebook and five US news and fact-checking organizations—ABC News, the Associated Press, FactCheck.org, PolitiFact, and Snopes—created a ...
  43. [43]
    [PDF] Social Media and Fake News in the 2016 Election
    We use fake news headlines that PolitiFact rated as “Pants on Fire” or “False.” Third, we use a list of 21 major fake news articles that appeared between August ...
  44. [44]
  45. [45]
    With fact-checks, Twitter takes on a new kind of task | Reuters
    May 30, 2020 · In addition to disputing misleading claims made by U.S. President Donald Trump about mail-in ballots this week, Twitter has added ...
  46. [46]
    Deplatforming Accounts After the January 6th Insurrection at the US ...
    Jun 5, 2024 · A new paper in Nature suggests Twitter's intervention against tens of thousands of accounts had a measurable effect, writes Gabby Miller.
  47. [47]
    What the Twitter Files Reveal About Free Speech and Social Media
    Jan 11, 2023 · In the hours after the January 6th insurrection, executives at Twitter had to decide what to do about Donald Trump's account.
  48. [48]
    Parler and the Road to the Capitol Attack: Executive Summary
    Twitter and Facebook to ban Donald Trump from their platforms while Apple, Google, and Amazon all moved to cut off access to Parler, a social media site ...
  49. [49]
    Elon Musk is using the Twitter Files to discredit foes and push ... - NPR
    Dec 14, 2022 · They're a collection of internal emails and Slack chats capturing Twitter employees discussing company policies and fraught moderation calls.
  50. [50]
    Why the Twitter Files Are in Fact a Big Deal - Jacobin
    Dec 29, 2022 · It led one employee to call the ban a “one-off ad hoc decision,” not the only example of Twitter employees describing acts of censorship as “one ...
  51. [51]
    The real revelation from the 'Twitter Files': Content moderation is ...
    Dec 14, 2022 · The Twitter Files reports appear aimed at calling into question the integrity of Twitter's former leadership and riling up the right-leaning user base.
  52. [52]
    Twitter Files spark debate about 'blacklisting' - BBC
    Dec 13, 2022 · Revelations about Twitter's content moderation decisions have raised questions about political bias.
  53. [53]
    Permanent suspension of @realDonaldTrump - Blog - X
    Jan 8, 2021 · We have permanently suspended the account due to the risk of further incitement of violence. In the context of horrific events this week, we made it clear on ...
  54. [54]
    Former President Trump's suspension - The Oversight Board
    May 5, 2021 · The Oversight Board has upheld Facebook's decision to suspend Mr. Trump's access to post content on Facebook and Instagram on January 7, 2021.
  55. [55]
    Facebook Bans President Trump From Posting For The Rest Of His ...
    Jan 7, 2021 · Twitter forced Trump to remove three posts, including the video, and suspended his account as a warning for 12 hours. Officials at Twitter said ...
  56. [56]
    Twitter and Facebook block accounts of Jair Bolsonaro supporters ...
    Jul 25, 2020 · Twitter and Facebook suspended the accounts Friday of 16 allies of Brazilian President Jair Bolsonaro after a Supreme Court judge ordered them blocked.
  57. [57]
    Facebook, YouTube take down Bolsonaro video over false vaccine ...
    Oct 25, 2021 · Facebook (FB.O) and YouTube have removed from their platforms a video by Brazilian President Jair Bolsonaro in which the far-right leader made a false claim.
  58. [58]
    Trudeau vows to freeze anti-mandate protesters' bank accounts - BBC
    Feb 14, 2022 · Canadian Prime Minister Justin Trudeau has taken the unprecedented step of invoking the Emergencies Act to crack down on anti-vaccine mandate protests.
  59. [59]
    Do Social Media Platforms Suspend Conservatives More?
    Oct 15, 2024 · Our research found that accounts sharing pro-Trump or conservative hashtags were suspended at a significantly higher rate than those sharing pro-Biden or ...
  60. [60]
    The Geopolitics of Deplatforming: A Study of Suspensions of ...
    Feb 14, 2024 · The study relies on a sample of politically-interested users to assess the effect of deplatforming on political conversations related to Iranian ...
  61. [61]
    Facebook, Apple, YouTube and Spotify ban Infowars' Alex Jones
    Aug 6, 2018 · All but one of the major content platforms have banned the American conspiracy theorist Alex Jones, as the companies raced to act in the wake of Apple's ...
  62. [62]
    Alex Jones and Infowars Content Is Removed From Apple ...
    Aug 6, 2018 · Top technology companies erased most of the posts and videos on their services from Alex Jones, the internet's notorious conspiracy theorist.
  63. [63]
    Twitter Bans Alex Jones And InfoWars; Cites Abusive Behavior - NPR
    Sep 6, 2018 · Twitter said it has "permanently suspended" the conspiracy theorist and his InfoWars outlet, citing tweets and videos posted Wednesday that ...
  64. [64]
    Twitter bans Alex Jones and Infowars for abusive behaviour - BBC
    Sep 6, 2018 · Twitter follows other tech giants in banning the conspiracy theorist for "abusive behaviour".
  65. [65]
    Andrew Tate is banned from social media platforms - NPR
    Aug 20, 2022 · Andrew Tate, an influencer and former professional kickboxer known for his misogynistic remarks, has been banned from Facebook, Instagram and TikTok.
  66. [66]
    YouTube joins Facebook in banning Andrew Tate - BBC
    Aug 23, 2022 · The Google-owned social media site took action following Meta's decision to ban Mr Tate from Facebook and Instagram. The former kickboxer rose ...
  67. [67]
    'Dangerous misogynist' Andrew Tate removed from Instagram and ...
    Aug 19, 2022 · Self-described sexist removed for violating Meta's policies on 'dangerous organizations and individuals'
  68. [68]
    Andrew Tate is banned from TikTok, Instagram, YouTube - Vox
    Aug 24, 2022 · Editor's note, December 28: Andrew Tate's Twitter account was reinstated in November, following a five-year ban.
  69. [69]
    Andrew Tate: How the arrest of controversial influencer has ...
    Apr 29, 2023 · Andrew Tate has gained more than six million followers in the five months since his Twitter account was reinstated.
  70. [70]
    Facebook Bans Alex Jones, Louis Farrakhan And Other 'Dangerous ...
    May 3, 2019 · Facebook Bans White Supremacists And Anti-Semites From Platform The social media platform said it was banning the high-profile individuals ...
  71. [71]
    Facebook Bars Alex Jones, Louis Farrakhan and Others From Its ...
    May 2, 2019 · Louis Farrakhan, the outspoken black nationalist minister who has frequently been criticized for his anti-Semitic remarks, was also banned.
  72. [72]
    Google And GoDaddy Ban White Supremacist Site After Virginia Rally
    Aug 14, 2017 · The Daily Stormer, a neo-Nazi website that promoted the "Unite the Right" rally in Charlottesville, Va., moved its site to Google before its registration there ...
  73. [73]
    Why We Terminated Daily Stormer - The Cloudflare Blog
    Aug 16, 2017 · Cloudflare terminated the account of the Daily Stormer. We've stopped proxying their traffic and stopped answering DNS requests for their sites.
  74. [74]
    Cloudflare CEO says removing The Daily Stormer is slippery slope
    Aug 17, 2017 · Cloudflare CEO Matthew Prince's decision to terminate protection of neo-Nazi website the Daily Stormer because he "woke up in a bad mood" could be dangerous.
  75. [75]
    Neo-Nazi site Daily Stormer resurfaces with Russian domain ... - Vox
    Aug 16, 2017 · The notorious site bounced around various domains and the dark web before landing on a .ru domain.
  76. [76]
    Amazon, Apple and Google Cut Off Parler, an App That Drew Trump ...
    Jan 13, 2021 · Apple, Google and Amazon kick Parler off their platforms. Apple and Google said they would remove Parler from their App Stores. Amazon said it ...
  77. [77]
    Parler has now been booted by Amazon, Apple and Google - CNN
    Jan 9, 2021 · Amazon said it would remove Parler from its cloud hosting service, Amazon Web Services, Sunday evening, effectively kicking it off of the public ...
  78. [78]
    Parler: Amazon to remove site from web hosting service - BBC
    Jan 9, 2021 · Amazon told Parler it had found 98 posts on the site that encouraged violence. Apple and Google have removed the app from their stores.
  79. [79]
    Parler drops offline after Amazon withdraws support - CNBC
    Parler, a social media app popular with conservatives and supporters of President Donald Trump, has gone offline after Amazon withdrew its support.
  80. [80]
    Explainer: What is Parler and why has it been pulled offline? | Reuters
    Jan 13, 2021 · Before Apple Inc (AAPL.O) banned Parler from its App Store on Saturday, the social media site topped the charts as the most popular ...
  81. [81]
    Does banning extremists online work? It depends. - Vox
    Feb 3, 2022 · Social media bans can make it harder to recruit new followers, but existing supporters can become more toxic.
  82. [82]
    Deplatforming Reduces Overall Attention to Online Figures, Says ...
    Jan 6, 2024 · The researchers observed “similar effects for both temporary and permanent deplatforming,” but note that “users banned for spreading ...
  83. [83]
    [PDF] Evaluating the Effectiveness of Deplatforming as a Moderation ...
    Social Science Research Network. 2021. Analysis shows that deplatforming is effective in minimizing the reach of disinformation and extreme speech, as ...
  84. [84]
    Misinformation went down after Twitter banned Trump
    Jan 16, 2021 · Online misinformation about election fraud plunged 73 percent after several social media sites suspended President Trump and key allies last week.
  85. [85]
    With Trump Out Of Office, Disinformation Online Is On A Decline - NPR
    Mar 4, 2021 · His Twitter audience actually grew by 30% over 2020. So his ban led to an immediate drop-off in misinformation online, according to a number of ...
  86. [86]
    Alex Jones Got Even Richer After Being Thrown Off Social Media
    Aug 15, 2022 · The far-right radio host and professional conspiracy theorist built a career, an audience of millions, and an astonishing fortune by spreading lies.
  87. [87]
  88. [88]
    Understanding the Effect of Deplatforming on Social Networks
    Jun 22, 2021 · Deplatforming may cause banned users to move to alternative platforms with increased activity and toxicity, although their reach may decrease.
  89. [89]
    Deplatforming did not decrease Parler users' activity on fringe social ...
    Mar 21, 2023 · Deplatformed users often migrate to alternative platforms, which raises concerns about the effectiveness of deplatforming.
  90. [90]
    Deplatforming did not decrease Parler users' activity on fringe social ...
    Mar 21, 2023 · Deplatforming is a common practice among online platforms to reduce content deemed harmful. However, its effectiveness has been debated, as ...
  91. [91]
    Deplatforming did not decrease Parler users' activity on fringe social ...
    Deplatformed users often migrate to alternative platforms, which raises concerns about the effectiveness of deplatforming. Here, we study the deplatforming of ...
  92. [92]
    The systemic impact of deplatforming on social media - ResearchGate
    Oct 18, 2023 · Deplatforming, or banning malicious accounts from social media, is a key tool for moderating online harms. However, the consequences of ...
  93. [93]
    Disrupting hate: The effect of deplatforming hate organizations on ...
    Jun 5, 2023 · We study the effects of six network disruptions of designated and banned hate-based organizations on Facebook, in which known members of the organizations were ...
  94. [94]
    Post-January 6th deplatforming reduced the reach of misinformation ...
    Jun 5, 2024 · For simplicity, we refer to these URLs as 'misinformation' but the measurement does not distinguish misinformation from disinformation. We ...
  95. [95]
    Deplatforming did not decrease Parler users' activity on fringe social ...
    Mar 21, 2023 · Our results indicate that deplatforming a major fringe platform in isolation was ineffective at reducing overall user activity on fringe social media.
  96. [96]
    Online conspiracy communities are more resilient to deplatforming
    Oct 31, 2023 · Our findings show that deplatforming can effectively reduce the activity, size, and connections with other groups of both types of communities ( ...
  97. [97]
    Deplatforming Norm-Violating Influencers on Social Media Reduces ...
    Jan 2, 2024 · Through a difference-in-differences approach, we find that deplatforming reduces online attention toward influencers. After 12 months, we ...
  98. [98]
    Section 230: An Overview | Congress.gov
    Jan 4, 2024 · Courts have interpreted Section 230 as creating broad immunity that allows the early dismissal of many legal claims against interactive ...
  99. [99]
    Interpreting the ambiguities of Section 230 - Brookings Institution
    Oct 26, 2023 · ... Section 230 to eliminate distributor liability is that such immunity is necessary to encourage platforms to aggressively moderate content.
  100. [100]
    It's Time to Update Section 230 - Harvard Business Review
    Aug 12, 2021 · Section 230 has two key subsections that govern user-generated posts. The first, Section 230(c)(1), protects platforms from legal liability ...
  101. [101]
    The Christchurch mosque shooting, the media, and subsequent gun ...
    In March 2019, a mass shooting at two Christchurch mosques, livestreamed to Facebook, resulted in the deaths of 51 people.
  102. [102]
    Five years on from Christchurch: Assessing the evolution of the ... - ISD
    Mar 15, 2024 · In the five years since the Christchurch attack, the act of white supremacist terror has become the basis for a wave of copycats inspired by both its method ...
  103. [103]
    Deplatforming Norm-Violating Influencers on Social Media Reduces ...
    May 2, 2025 · Through a difference-in-differences approach, we find that deplatforming reduces online attention toward influencers. After 12 months, we ...
  104. [104]
    [PDF] GAO-24-106262, COUNTERING VIOLENT EXTREMISM
    Jan 31, 2024 · This “deplatforming” has the benefit of exposing fewer users on mainstream platforms to violent extremist content, some experts said; however, ...
  105. [105]
    Post-January 6th deplatforming reduced the reach of misinformation ...
    Jul 1, 2024 · We show that the intervention reduced circulation of misinformation by the deplatformed users as well as by those who followed the deplatformed users.
  106. [106]
    Can Deplatforming Users on Social Media Reduce Misinformation?
    Jun 5, 2024 · Researchers have found that deplatforming bad actors on social media can be used to police and reduce misinformation.
  107. [107]
    The efficacy of Facebook's vaccine misinformation policies ... - Science
    Sep 15, 2023 · We therefore conducted an evaluation of Facebook's attempts to remove antivaccine misinformation from its public content throughout the COVID-19 ...
  108. [108]
    Does 'deplatforming' work to curb hate speech and calls for violence ...
    Jan 15, 2021 · Dubbed “deplatforming,” these actions restrict the ability of individuals and communities to communicate with each other and the public.
  109. [109]
    Is Deplatforming Enough To Fight Disinformation And Extremism?
    Jan 25, 2021 · Experts say deplatforming can be an important first step in cutting off the oxygen to disinformation and violence, which seemed to be confirmed ...
  110. [110]
    Social Media: Private Enterprise Meets the Town Square
    Jul 10, 2024 · Therefore, Trump's actions constituted viewpoint discrimination. In a past case in 2017, Matal v. Tam, 137 S. Ct. 1744 at 1758, the Supreme ...
  111. [111]
    [PDF] Public Forum Doctrine and Viewpoint Discrimination in the Social ...
    Micah Telegen, You Can't Say That!: Public Forum Doctrine and Viewpoint Discrimination in the Social ... be viewpoint discriminatory because of the platform's ...
  112. [112]
    [PDF] Harm and Hegemony: The Decline of Free Speech in the United ...
    24, 2021) (testimony of Professor Jonathan Turley); The Right of The People Peaceably To Assemble: Protecting Speech By Stopping Anarchist Violence: Hearing ...
  113. [113]
    [PDF] Deplatforming Norm-Violating Influencers on Social Media Reduces ...
    We find that deplatforming reduces online attention toward influencers. Specifically, after 12 months, we estimate that online attention toward ...
  114. [114]
    Evaluating the Effectiveness of Deplatforming as a Moderation ...
    Oct 18, 2021 · Deplatforming refers to the permanent ban of controversial public figures with large followings on social media sites.
  115. [115]
    The Twitter bias hearings point to favoritism, but not for liberals
    Feb 10, 2023 · Analysis of the data reveals, however, “that the tendency for conservative users to be suspended at higher rates than liberal users can be ...
  116. [116]
    False equivalencies: Online activism from left to right - Science
    Sep 4, 2020 · Such ideological asymmetries between left- and right-wing activism hold critical implications for democratic practice, social media governance, ...
  117. [117]
    Differences in misinformation sharing can lead to politically ... - Nature
    Oct 2, 2024 · Right and left, partisanship predicts (asymmetric) vulnerability to misinformation. Harvard Kennedy School (HKS) Misinformation Review https ...
  118. [118]
    Most liberal tech companies, ranked by employee donations - CNBC
    Jul 2, 2020 · Netflix employees send 98% of their political contributions to Democrats, while those at Qualcomm split theirs evenly between parties.
  119. [119]
    Tech workers' political donations overwhelmingly skew Democratic
    Oct 26, 2018 · Partisan donations this political cycle from workers at technology giants skew overwhelmingly Democratic, according to a report released Friday ...
  120. [120]
    [PDF] Twitter Kept Entire 'Database' of Republican Requests to Censor Posts
    Feb 8, 2023 · Some Republicans even believe the release of the “Twitter Files” is the “tip of the spear” of their crusade against the alleged liberal bias of ...
  121. [121]
    [PDF] Regulating Online Content Moderation - Georgetown Law
    Today, a small number of politically-unaccountable technology oligarchs exercise state-like censorship powers without any similar limitation. We have ...
  122. [122]
    The Rise of Content Cartels | Knight First Amendment Institute
    Feb 11, 2020 · Content moderation more generally is going through a crisis of legitimacy. There is growing awareness about the arbitrary and unaccountable way ...
  123. [123]
    Media Platforming and the Normalisation of Extreme Right Views
    Aug 12, 2025 · This demarcation strategy has become rare over the past two decades, as there are few cases where far right actors are completely ignored ( ...
  124. [124]
    47 U.S. Code § 230 - Protection for private blocking and screening ...
    section 230 of the Communications Act of 1934 (47 U.S.C. 230; commonly known as the 'Communications Decency Act of 1996') was never intended to provide ...
  125. [125]
    DEPARTMENT OF JUSTICE'S REVIEW OF SECTION 230 OF THE ...
    The US Department of Justice analyzed Section 230 of the Communications Decency Act of 1996, which provides immunity to online platforms from civil liability.
  126. [126]
    Missouri, et al. v. Biden, et al. (f/k/a Murthy, et al. v. Missouri, et al.)
    CASE SUMMARY. Public statements, emails, and publicly released documents establish that the President of the United States and other senior officials in the ...
  127. [127]
    [PDF] Case 3:22-cv-01213-TAD-KDM Document 293 Filed 07/04/23 Page ...
    Jul 4, 2023 · The States allege that the Defendants have caused harm to the states of Missouri and Louisiana by suppressing and/or censoring the free speech ...
  128. [128]
    [PDF] 23-30445-CV0.pdf - Fifth Circuit Court of Appeals
    Sep 8, 2023 · A group of social-media users and two states allege that numerous federal officials coerced social-media platforms into censoring certain social ...
  129. [129]
    The EU's Digital Services Act - European Commission
    Oct 27, 2022 · Its main goal is to prevent illegal and harmful activities online and the spread of disinformation. It ensures user safety, protects fundamental ...
  130. [130]
    Questions and answers on the Digital Services Act*
    The DSA requires platforms to be transparent in their content moderation: The Digital Services Act sets rules on transparency of content moderation decisions.
  131. [131]
    Article 15, the Digital Services Act (DSA)
    Article 15, Transparency reporting obligations for providers of intermediary services - the Digital Services Act (DSA)
  132. [132]
    Article 22 Digital Services Act: Building trust with trusted flaggers
    Under the rule of law, state entities require a legal basis to engage in content moderation, even if it simply means encouraging platforms to enforce their own ...
  133. [133]
    The Digital Services Act and the EU as the Global Regulator of the ...
    This essay discusses the Digital Services Act (DSA), the new regulation enacted by the EU to combat hate speech and misinformation online.
  134. [134]
    Online Safety Act: explainer - GOV.UK
    The Act requires all companies to take robust action against illegal content and activity. Platforms are now required to implement measures to reduce the risks ...
  135. [135]
    The UK's Online Safety Act | ITIF
    Jun 9, 2025 · The Act mandates platforms detect and remove illegal content including terrorism material, child sexual abuse material, and fraud, while ...
  136. [136]
    Online Safety Act - GOV.UK
    Jul 24, 2025 · As of 17 March 2025, platforms have a legal duty to protect their users from illegal content online. Ofcom are actively enforcing these duties ...
  137. [137]
    Brazil: Freedom on the Net 2022 Country Report
    The ban did not ultimately take effect; it was lifted two days later, after Telegram quickly complied with court orders to remove certain content and appoint a ...
  138. [138]
    Online content moderation lessons from outside the US | Brookings
    Jun 17, 2020 · This post explains the general trends in regulation around the world, highlighting two specific cases, Europe and Brazil, where similar reforms ...
  139. [139]
    The Case of the Rumble Ban in Brazil - Global Freedom of Expression
    The Federal Supreme Court (STF) of Brazil, in a ruling by Justice Alexandre de Moraes, suspended the operations of the social network Rumble in the country.
  140. [140]
    Federal Trade Commission Launches Inquiry on Tech Censorship
    Feb 20, 2025 · The Federal Trade Commission launched a public inquiry to better understand how technology platforms deny or degrade users' access to services.
  141. [141]
    Request for Public Comments Regarding Technology Platform ...
    The Federal Trade Commission invites public comment to better understand how technology platforms deny or degrade (such as by “demonetizing” and “shadow banning ...
  142. [142]
    RELEASE: Gottheimer, Bacon, ADL Announce Legislation to ...
    Jul 24, 2025 · Gottheimer and Bacon are announcing the bipartisan STOP HATE Act to help stop terrorism and disinformation on social media and online. This ...
  143. [143]
  144. [144]
    The STOP HATE Act: How Congress Plans to Outsource Censorship ...
    The STOP HATE Act is not about fighting terrorism—it's about control. It represents an attempt to use the moral authority of anti-terrorism efforts to create an ...
  145. [145]
    Trump returns to X with two-hour Elon Musk chat hit by technical glitch
    Aug 13, 2024 · United States Republican presidential nominee and former President Donald Trump has returned to the social media platform X as he attempts to ...
  146. [146]
    Why the attempt to deplatform Trump failed so utterly - Vox
    Nov 13, 2024 · The 2024 election has conclusively proven something that we really should have known since 2016: America's gatekeepers have failed.
  147. [147]
    Support dips for restrictions on false, violent online content
    Apr 14, 2025 · In the new survey, about half of U.S. adults (52%) support the government taking these steps, down from 60% in 2023, the first year we asked ...
  148. [148]
    Global Views of Press, Speech and Internet Freedoms
    Apr 24, 2025 · This Pew Research Center analysis focuses on public opinion of free speech, freedom of the press and freedom on the internet in 35 countries.
  149. [149]
    Exclusive: New House bill amends Section 230 to combat AI ...
    Jul 30, 2024 · A new bipartisan House bill would condition the tech industry's treasured Section 230 legal protections on efforts by tech platforms to combat deepfake ...
  150. [150]
    A Final Bow for Section 230? Latest Plea for Reform Calls for Sunset ...
    Jun 11, 2024 · In particular, Congress has focused on AI regulation, introducing a bill that would expressly remove most immunity under the CDA for a provider ...
  151. [151]
    Meta Says It Will End Its Fact-Checking Program on Social Media ...
    Jan 7, 2025 · The social networking giant will stop using third-party fact-checkers on Facebook, Threads and Instagram and instead rely on users to add notes to posts.
  152. [152]
    Fact-checked out: Meta's strategic pivot and the future of content ...
    Feb 24, 2025 · Meta changed content moderation by ending fact-checking, overhauling hate speech policies, and introducing "community notes" where users add ...
  153. [153]
    The Limits of Deplatforming - The Regulatory Review
    Nov 9, 2023 · In a new paper, three experts argue that removing harmful social media platforms from the internet may push users to seek out other, more extreme outlets.
  154. [154]
    New report analyzes dynamics on alt-platform Gab | FSI
    Jun 1, 2022 · The deplatforming events following January 6 were a huge boost to Gab, and may have resulted in millions of dollars of income for Gab ...
  155. [155]
    [PDF] US Extremism on Telegram: Fueling Disinformation, Conspiracy ...
    The authors used OSINT and observational methods on a sample of 125 Telegram channels containing hate speech and violent extremist content from far-right and ...
  156. [156]
    Truth Social: Banned from Twitter, Trump returns with a new platform
    Feb 21, 2022 · The app had similarities to Twitter, commentators noted - Mr Trump was banned from Twitter, Facebook and YouTube last year. And some early ...
  157. [157]
    Truth Social Is Rising as the Anti-Mastodon - WIRED
    Nov 23, 2022 · It's a small but loyal group. Around 2 percent of US adults say they use Truth Social for news—compared to 14 percent getting news on Twitter, ...
  158. [158]
    Trump's Truth Social Really Is a (Tiny, Conservative) Phenomenon
    Nov 1, 2022 · When Trump was banned from Twitter in January 2021, he had around 88 million followers; according to his Truth Social profile, he now has 4.43 ...
  159. [159]
    [2509.08676] Echo Chambers and Information Brokers on Truth Social
    Sep 10, 2025 · Abstract:This study examines the structural dynamics of Truth Social, a politically aligned social media platform, during two major ...
  160. [160]
    Labour pains: Content moderation challenges in Mastodon growth
    Mar 31, 2025 · After Elon Musk took over Twitter in October 2022, the number of users on the alternative social media platform Mastodon rose dramatically.
  161. [161]
    Amid Twitter chaos, Mastodon grew donations 488% in 2022 ...
    Oct 2, 2023 · According to Mastodon's annual report, released today, the company says it's seen a 488% increase in donations, totaling €325.9K, or roughly ...
  162. [162]
    How Top X Rivals Fared Since Elon Musk Sparked Twitter Exodus
    Nov 12, 2024 · Mastodon, meanwhile, has expanded from 3.5 million users in November 2022 to almost 9 million this month. Newsweek reached out to X, Mastodon, ...
  163. [163]
    An Empirical Analysis of the Nostr Social Network: Decentralization ...
    We find 616M post replications for 17.8M posts. This means, on average, a post is replicated across 34.6 relays (§5.1, §5.3). Another challenge stemming from ...
  164. [164]
    Nostr: Pioneering an Open-Source Social Media Network in the Age ...
    As of now, Nostr clients are used by more than 18 million users and still growing. This talk proposes an introduction to the Nostr protocol, highlighting its ...
  165. [165]
    How Blockchain Can Transform Social Media Platforms - Coinmetro
    May 15, 2025 · DTube: DTube is a decentralized video-sharing platform built on blockchain. It allows users to share videos without the risk of censorship.
  166. [166]
    Platforms, blockchains and the challenges of decentralization
    May 2, 2023 · This commentary explores the feasibility of blockchain technologies (and cryptocurrencies) in contesting the power of centralized, corporate platforms.