
Section 230

Section 230 of Title 47 of the United States Code provides civil immunity to providers and users of "interactive computer services"—a term encompassing websites, apps, and online platforms—from liability for third-party content they host, stating that no such provider "shall be treated as the publisher or speaker of any information provided by another information content provider." It further shields platforms from liability for good-faith actions to block or restrict access to material they deem "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected." Enacted as part of the Communications Decency Act within the Telecommunications Act of 1996 and signed into law by President Bill Clinton, the provision was co-authored by Representatives Christopher Cox (R-CA) and Ron Wyden (D-OR) in response to early court rulings, such as Stratton Oakmont, Inc. v. Prodigy Services Co., which held that moderation efforts could render platforms liable as publishers of user content. The law's core intent was to foster internet innovation and free expression by eliminating the threat of publisher-level liability, which had deterred early online services from hosting forums or discussions; its authors described it as enabling platforms to moderate without forfeiting protections, thereby balancing free expression with voluntary self-regulation. Federal courts have interpreted these immunities broadly, routinely dismissing claims against platforms at early stages unless plaintiffs demonstrate the platform itself created or materially contributed to the offending content, a threshold rarely met in practice. This framework underpinned the explosive growth of social media, e-commerce, and user-driven sites from the late 1990s onward, as companies could host vast amounts of speech without routine exposure to defamation, negligence, or other suits over user posts. Yet Section 230 has drawn significant controversy for insulating dominant platforms from accountability amid rising concerns over harmful content, including misinformation, harassment, and child exploitation, while permitting aggressive moderation policies that critics contend enable viewpoint-based suppression rather than mere "Good Samaritan" filtering of objectionable material. Original congressional findings emphasized promoting "the continued development of the Internet" free from "a patchwork of State laws" that could stifle it, but evolving interpretations have decoupled immunity from editorial neutrality, allowing platforms to curate feeds algorithmically without publisher liability—a shift some attribute to overbroad judicial deference rather than statutory text. Reform attempts have included the 2018 Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA), which carved out exceptions for sex-trafficking facilitation, and various proposals to condition protections on neutrality or proportionality in moderation; executive actions, such as the 2020 executive order seeking to tie immunity to consistent policy enforcement, were largely invalidated by courts as exceeding statutory bounds. Recent rulings, including Gonzalez v. Google (2023), have reaffirmed core immunities without resolving ambiguities in algorithmic recommendations or systemic practices, leaving ongoing debates over whether the law now entrenches unaccountable power in a few tech intermediaries.

Origins and Enactment

Pre-1996 Context and Rationale

In the early 1990s, as commercial services like CompuServe and Prodigy expanded access to bulletin board systems and forums, U.S. courts grappled with applying traditional defamation law to internet intermediaries, creating uncertainty over their potential liability for third-party content. Under print media precedents, distributors like bookstores enjoyed protection from liability for third-party materials unless they exercised editorial control, but publishers faced stricter liability for defamatory statements. This distinction raised questions about whether platforms hosting user posts would be treated as passive conduits or active editors, potentially discouraging investment in digital infrastructure amid fears of lawsuits. The 1991 federal district court decision in Cubby, Inc. v. CompuServe Inc. exemplified a hands-off approach, ruling that CompuServe could not be held liable as a publisher for allegedly defamatory content in third-party newsletters it distributed without reviewing or editing them, analogizing the service to a news vendor with no knowledge of the material's falsity. CompuServe's lack of editorial intervention shielded it from liability, suggesting that unmoderated platforms might avoid responsibility akin to distributors under defamation law. However, this outcome incentivized inaction on problematic content, as any attempt at filtering could risk reclassifying the intermediary as a liable publisher. Contrastingly, the 1995 New York state court ruling in Stratton Oakmont, Inc. v. Prodigy Services Co. imposed publisher status on Prodigy for defamatory anonymous posts criticizing the investment firm, because Prodigy had promoted itself as a family-oriented service, employed human moderators, and used automated software to screen messages for offensiveness—actions deemed sufficient editorial control. Unlike Cubby, this decision held that proactive moderation transformed the platform into a publisher subject to full liability for user content, even without specific knowledge of defamation. The conflicting precedents engendered a regulatory chill: services faced a dilemma where abstaining from moderation preserved distributor protections but allowed harmful content to proliferate, while voluntary efforts to remove objectionable material invited expansive lawsuits. This legal ambiguity, coupled with rising concerns over online indecency and defamation amid the internet's commercialization under the 1990s telecommunications deregulation, prompted lawmakers to seek clarity to foster platform self-regulation without stifling growth. Advocates argued that absent immunity, intermediaries would either over-censor to mitigate risks or under-moderate to evade publisher status, both outcomes threatening free expression and innovation in an emerging medium projected to handle vast user interactions. The Prodigy case, in particular, galvanized congressional attention, as it demonstrated how traditional liability regimes ill-suited to scalable digital forums could deter good-faith efforts to curb offensive speech, influencing proposals for federal protections tied to the broader Communications Decency Act.

Passage of the Communications Decency Act

The Communications Decency Act (CDA) originated as Senate Bill S. 314, introduced by Senator James Exon (D-NE) on February 1, 1995, with the primary aim of prohibiting the transmission of obscene or indecent communications to minors over interactive computer services and telecommunications devices. This legislation sought to extend broadcast-style regulations to the internet, imposing criminal penalties for transmitting materials deemed harmful to minors, amid growing concerns over pornography and explicit content accessible to children. In parallel, the case of Stratton Oakmont, Inc. v. Prodigy Services Co. (1995) highlighted liability risks for online services that moderated user content, treating Prodigy as a publisher rather than a mere distributor and holding it accountable for defamatory statements. To address this and promote voluntary moderation without incurring publisher-level liability, Representatives Christopher Cox (R-CA) and Ron Wyden (D-OR) proposed an amendment in June 1995, initially under the Internet Freedom and Family Empowerment Act, granting civil immunity to providers and users of "interactive computer services" for third-party content while preserving good-faith efforts to restrict objectionable material. This Cox-Wyden provision was incorporated into the House version of the Telecommunications Act during deliberations, serving as a counterbalance to the CDA's restrictive measures by emphasizing industry self-regulation over federal censorship. The Senate passed its version of the Telecommunications Act of 1996 on June 15, 1995, by a vote of 81-18; after conference reconciliation with the House, the final bill—encompassing the CDA as Title V, including Section 230—cleared both chambers overwhelmingly before President Bill Clinton signed it into law on February 8, 1996, at the Library of Congress. Section 230 itself encountered minimal opposition in Congress, passing as a relatively uncontroversial element within the expansive deregulation-focused legislation, which aimed to foster competition in telecommunications while shielding emerging online platforms from lawsuits that could stifle innovation and speech.

Statutory Text and Interpretation

Key Provisions of Section 230(c)

Section 230(c)(1) stipulates that "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." This provision grants broad immunity to online intermediaries, preventing courts from imposing publisher liability—such as for defamation or negligence in content selection—for third-party material hosted or transmitted on their platforms. The clause distinguishes between the original creator of content (the "information content provider") and the interactive computer service, which merely facilitates access without assuming editorial responsibility. Courts have interpreted this to cover a wide array of claims, including torts arising from user-generated posts, provided the platform did not materially contribute to the unlawful content's illegality. Section 230(c)(2), often termed the "Good Samaritan" provision, exempts providers and users from civil liability for actions taken in good faith to restrict access to or availability of material deemed obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, regardless of whether such material is constitutionally protected. It further shields platforms for enabling technical tools that allow content providers or users to self-restrict such material. Unlike the broader immunity of subsection (c)(1), this protection applies only to voluntary moderation efforts motivated by a subjective good-faith belief in the content's offensiveness, and it does not extend to failures to remove harmful material or to restrictions falling outside the specified categories. The "good faith" requirement has been scrutinized in some judicial rulings, though it rarely results in a loss of protection, owing to courts' deference to platforms' stated moderation rationales. This subsection incentivizes proactive filtering without fear of retaliatory lawsuits from content creators whose material is blocked.

Distinction Between Publisher and Distributor Liability

Under common law, publishers of defamatory content face liability for third-party material they disseminate, as their editorial control implies endorsement or responsibility for its truthfulness, whereas distributors—such as newsstands or bookstores—are subject only to negligence-based liability if they fail to remove known unlawful content after notice. This distinction originated in cases like Cubby, Inc. v. CompuServe Inc. (1991), where a federal court treated an early online service as a distributor akin to a news vendor, shielding it from liability absent actual knowledge of the defamatory material. The tension arose in Stratton Oakmont, Inc. v. Prodigy Services Co. (1995), where a New York state court ruled that Prodigy's active moderation—using software filters and human reviewers to enforce content standards—elevated it to publisher status, exposing it to liability for user-generated defamatory posts about the brokerage firm. Prodigy's own promotional materials boasting of a "family-oriented" environment and editorial oversight were cited as evidence of publisher-like control, creating a disincentive for online services to moderate content lest they forfeit distributor protections. Section 230(c)(1) addressed this dilemma by prohibiting treatment of interactive computer services as the "publisher or speaker" of third-party content, with legislative history indicating intent to immunize platforms from publisher liability to promote "good Samaritan" moderation without fear of enhanced responsibility. The statute's text does not explicitly reference distributor liability, but courts have interpreted the immunity to encompass both, rejecting attempts to impose post-notice removal duties as incompatible with the broad immunity. In Zeran v. America Online, Inc. (1997), the Fourth Circuit affirmed this expansive reading, holding that Section 230 forecloses distributor liability as a "subset" of publisher liability, as any failure-to-remove claim after notice would effectively treat the service as a publisher by imposing editorial obligations. The court emphasized that Congress aimed to avoid the Prodigy chilling effect, allowing platforms like AOL to remove objectionable content without risking liability for remaining third-party speech. Subsequent rulings, such as Batzel v. Smith (2003) and Carafano v. Metrosplash.com, Inc. (2003), reinforced that even selective moderation does not strip immunity, provided the platform does not materially contribute to the content's illegality. This has drawn criticism for blurring traditional lines, potentially shielding platforms from accountability for amplified harmful content, though defenders argue it fosters voluntary moderation by removing incentives to host unmoderated cesspools. Empirical data indicates over 200 federal cases by 2023 have upheld the immunity against such claims, underscoring its durability despite calls for reform.

Scope of Immunity

Qualifying Entities and Services

Section 230(c)(1) grants immunity from treatment as a publisher or speaker to "providers or users" of an "interactive computer service" with respect to information provided by another information content provider. The statute defines an interactive computer service as "any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server," explicitly including services or systems that provide access to the Internet, as well as those operated or offered by libraries or educational institutions. This definition, codified in 47 U.S.C. § 230(f)(2), was intentionally broad to promote the development of the Internet by shielding entities facilitating user interactions from liability for third-party content. Qualifying providers encompass a diverse range of online intermediaries, including internet service providers (ISPs) that enable connectivity, web hosting services that store user data, and platforms hosting user-generated content such as forums or marketplaces. Social media networks like Facebook and search engines like Google have been deemed interactive computer services by federal courts, as they enable multiple users to access and interact with server-hosted content. For example, in the foundational case Zeran v. America Online, Inc. (1997), the Fourth Circuit Court of Appeals ruled that AOL qualified for immunity as a provider of interactive services for third-party postings on its bulletin boards, emphasizing the statute's aim to avoid imposing editorial burdens on online hosts. Similarly, classified-ad sites like Craigslist and review platforms like Yelp have successfully invoked Section 230 protections in defamation suits, confirming their status as qualifying services when hosting user-submitted material. Users of these services—typically individuals accessing or contributing content—also qualify for immunity when not acting as the original content creators, though this protection applies narrowly to their role in transmitting or receiving third-party content rather than originating it. Courts have extended eligibility to access providers and websites facilitating multi-user interactions, but have excluded entities primarily functioning as traditional publishers without interactive elements, such as newspapers not hosting user comments. The immunity does not extend to providers who materially contribute to the illegality of content, as determined in cases like Fair Housing Council of San Fernando Valley v. Roommates.com, LLC (2008), where the Ninth Circuit denied full protection to a platform that actively structured discriminatory user inputs via dropdown menus. This distinction underscores that qualification hinges on the service's role in enabling rather than creating or substantially developing prohibited content.

Protected vs. Non-Protected Content

Section 230(c)(1) of the Communications Decency Act immunizes providers and users of interactive computer services from being treated as the publisher or speaker of any information provided by another information content provider, thereby protecting platforms from liability for third-party content they host, transmit, or moderate. This protection applies broadly to user-generated material, such as posts, comments, reviews, and uploads on forums, social media, or marketplaces, even if the content is defamatory, obscene, or otherwise unlawful, as long as the platform does not create or develop it. Courts have consistently upheld this immunity in early cases like Zeran v. America Online, Inc. (1997), where an online service was shielded from defamation claims arising from third-party postings about the Oklahoma City bombing. Non-protected content arises when the platform itself qualifies as an "information content provider," defined under Section 230(f)(3) as any entity responsible, in whole or in part, for the creation or development of the information at issue. In such instances, immunity does not attach, exposing the platform to potential liability under standard publisher or distributor standards. For example, a platform's original editorial content, such as staff-written articles or videos, falls outside protection because the service provider is the sole creator. Similarly, courts have denied immunity where platforms materially contribute to the unlawful aspects of content, as in Fair Housing Council of San Fernando Valley v. Roommates.com, LLC (2008), where the site's mandatory questionnaire prompts elicited discriminatory housing preferences, rendering Roommates partially responsible for developing the illegal listings. The line between protection and non-protection hinges on the degree of platform involvement: passive hosting or good-faith moderation under Section 230(c)(2) preserves immunity, but active development—such as designing features that encourage or shape illegal content—does not. Judicial interpretations emphasize a narrow view of "development," rejecting claims that routine editing (e.g., for clarity or partial removal) transforms a platform into a content creator, as affirmed in cases like Carafano v. Metrosplash.com, Inc. (2003). Exceptions carved out by subsequent laws, such as the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) of 2018, further limit immunity for content facilitating sex trafficking, treating platforms as non-protected actors in those contexts. This distinction incentivizes platforms to avoid direct content authorship while permitting voluntary removals of objectionable material without risking loss of broad third-party protections.

Judicial Evolution

Foundational Cases (1997-2008)

The foundational judicial interpretations of Section 230(c)(1) emerged shortly after its enactment, with courts broadly construing the provision to grant interactive computer services immunity from liability for third-party content, thereby rejecting prior precedents that treated moderating services as assuming publisher status. In Zeran v. America Online, Inc. (1997), the U.S. Court of Appeals for the Fourth Circuit addressed a defamation claim against America Online for anonymous postings on its bulletin boards advertising T-shirts mocking the Oklahoma City bombing victims, which included the plaintiff's phone number. The court held that Section 230 preempts state-law distributor liability claims, emphasizing that Congress intended to immunize providers even after receiving notice of offending content, as imposing a notice-and-takedown duty would undermine the policy of encouraging self-regulation and create a chilling effect on moderation. This ruling, the first appellate decision on Section 230, established that providers are not "publishers or speakers" of user-generated information, insulating them from traditional liabilities like defamation and distinguishing the statute from pre-1996 cases such as Stratton Oakmont, Inc. v. Prodigy Services Co. (1995), where moderation led to publisher status. Concurrent with Zeran, the U.S. Supreme Court in Reno v. ACLU (1997) invalidated other provisions of the Communications Decency Act for overbreadth under the First Amendment but left Section 230 intact, affirming its role as a standalone immunity mechanism for online intermediaries amid broader constitutional protection of online speech. Early reinforcement of broad immunity appeared in cases like Ben Ezra, Weinstein & Co. v. America Online, Inc. (10th Cir. 2000), where the court dismissed claims over inaccurate stock quotes provided by third parties, ruling that Section 230 bars suits against providers for failing to correct or remove third-party content. Similarly, in Carafano v. Metrosplash.com, Inc. (9th Cir. 2003), the court granted immunity to a dating website for a fraudulent profile created by an anonymous user, which defamed actress Christianne Carafano by falsely portraying her as promiscuous and seeking extramarital affairs; the site's multiple-choice prompts did not materially contribute to the illegality, as the core defamatory content originated from the user. These decisions uniformly interpreted Section 230 to promote platform growth by shielding providers from liability, provided they did not develop the objectionable content themselves. By the mid-2000s, courts extended immunity to diverse claims, including privacy invasions and unfair competition, as in Doe v. America Online, Inc. and Nemet Chevrolet, Ltd. v. Consumeraffairs.com, Inc. (4th Cir. 2009), the latter affirming dismissal of claims over allegedly fraudulent consumer reviews. However, the period culminated in Fair Housing Council of San Fernando Valley v. Roommates.com, LLC (9th Cir. 2008), the first major decision to qualify immunity, holding that a roommate-matching site lost protection under Section 230 for required dropdown menus soliciting discriminatory preferences on sex, sexual orientation, and family status—violating the Fair Housing Act—because the site's design materially contributed to unlawful content creation, rendering it an "information content provider" jointly responsible with users. Immunity still applied to unmoderated free-text comments, illustrating a nascent distinction between passive hosting and active elicitation of illegal material. This case marked the onset of scrutiny over platforms' role in structuring user inputs, though earlier rulings had solidified expansive protections fostering innovation.

Exceptions and Narrowing (2008-2018)

In Fair Housing Council of San Fernando Valley v. Roommates.com, LLC (2008), the Ninth Circuit Court of Appeals articulated a key limitation on Section 230 immunity, holding that an interactive computer service loses protection under subsection (c)(1) when it materially contributes to the development of unlawful third-party content, thereby functioning as an "information content provider" rather than a passive host. Roommates.com operated a roommate-matching service that required users to select from mandatory dropdown menus specifying preferences for sex, sexual orientation, the presence of children, and other characteristics protected under the Fair Housing Act (FHA), prompting discriminatory listings; the court ruled this active solicitation transformed the site into a co-creator of illegal content for those features, denying immunity and allowing FHA claims to proceed. However, the court preserved immunity for the site's optional free-form "additional comments" section, where users inputted unprompted text, as Roommates.com provided only a neutral tool without shaping the illegality. This "material contribution" standard, derived from the statutory distinction in 47 U.S.C. § 230(c)(1) between hosting and creating content, marked a judicial narrowing by emphasizing that immunity requires neutrality in content formation; platforms cannot claim protection for features designed to elicit or structure prohibited material. The decision built on prior precedents like Carafano v. Metrosplash.com, Inc. (2003) but applied the analysis more stringently to mandatory interactive elements, influencing subsequent analyses of platform design choices. Post-Roommates rulings through 2018 largely upheld broad immunity but invoked the exception in analogous contexts, such as when sites integrated or edited user inputs to amplify harm. In Jones v. Dirty World Entertainment Recordings LLC (2014, Sixth Circuit), the court extended immunity to a gossip site hosting user-submitted posts alleging sexual misconduct, rejecting claims of material contribution despite commentary added by the site's operator, as the core defamatory content originated with users. Conversely, district courts occasionally denied immunity under Roommates where platforms curated or prompted illegal specifics, though such denials remained rare and fact-bound, reinforcing that passive moderation or removal does not forfeit protection. Federal enforcement actions further tested boundaries without broadly eroding immunity; for instance, the Federal Trade Commission (FTC) prevailed against Accusearch in 2009 over its solicitation and sale of confidential phone records, but that case proceeded on the theory that the site contributed to the development of the unlawful content and committed direct FTC Act violations outside Section 230's core scope. By 2018, the Roommates framework had established that exceptions hinged on affirmative development of illegality, not mere failure to police, preserving Section 230's incentive for hosting while carving out accountability for complicit design.

Contemporary Challenges (2018-Present)

In 2018, Congress passed the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA-SESTA), which amended Section 230 by carving out immunity for platforms facilitating sex trafficking, marking the first significant legislative alteration to the statute since its enactment. This change responded to bipartisan concerns over websites like Backpage.com enabling exploitation, but critics argued it increased platform caution, leading to over-removal of legal adult content and harming sex workers' safety. FOSTA-SESTA's implementation prompted lawsuits testing the amendment's scope, with courts upholding narrowed immunity while platforms faced heightened liability risks for user content related to commercial sex. Growing scrutiny of platform moderation intensified after the 2016 U.S. presidential election, with accusations of anti-conservative bias fueling calls to reinterpret Section 230's publisher-distributor distinction. In response to Twitter's May 2020 fact-check labels on President Donald Trump's tweets about mail-in voting fraud, Trump issued Executive Order 13925 on May 28, 2020, directing federal agencies to limit Section 230 protections for platforms engaging in "editorial acts" akin to publishing, such as inconsistent moderation. The order aimed to treat selective content restrictions as forfeiting immunity but faced legal challenges and was largely enjoined by courts, which viewed it as exceeding executive authority without congressional action. It highlighted tensions over platforms' "Good Samaritan" blocking provisions under Section 230(c)(2), intended for objectionable content removal, now contested as enabling viewpoint discrimination. The 2022-2023 Twitter Files, internal documents released after Elon Musk's acquisition of Twitter (now X), exposed moderation practices prioritizing left-leaning viewpoints, such as suppressing the New York Post's Hunter Biden laptop story in October 2020 under FBI influence, bolstering arguments that platforms act as curators rather than neutral conduits, potentially eroding Section 230 eligibility. These revelations prompted renewed conservative pushes for reform, including proposals to condition immunity on political neutrality or transparency in algorithmic decisions, though platforms maintained such actions fell within protected moderation discretion. Judicial challenges reached the U.S. Supreme Court in 2023 with Gonzalez v. Google and Twitter v. Taamneh, consolidated cases alleging platforms aided terrorism by recommending ISIS content; the Court unanimously declined to narrow Section 230, holding in Taamneh that the aiding-and-abetting claims failed and vacating and remanding Gonzalez without reaching the immunity question. In 2024, the Court declined to review Doe v. Snap Inc., leaving intact lower-court holdings that Section 230 barred claims that Snapchat's design features encouraged predatory behavior and underscoring the statute's bar on treating platforms as publishers of third-party content. These decisions preserved the status quo amid over 50 reform bills introduced since 2018, including carve-outs for civil rights violations, paid ads, and child sexual abuse material (CSAM), yet none achieved passage due to partisan divides and fears of over-censorship. By 2025, debates persisted over Section 230's application to emerging technologies like AI-generated content, with proposals for sunset clauses or carve-outs for algorithmic harms, but entrenched interests and free speech concerns stalled comprehensive overhaul. Platforms faced state-level challenges, such as Texas and Florida laws mandating non-discrimination in moderation, which the Supreme Court vacated and remanded in Moody v. NetChoice (2024) for further First Amendment review, underscoring ongoing tensions.

Broader Impacts

Economic and Innovation Effects

Section 230 has facilitated investment and innovation in online platforms by shielding interactive computer services from liability for third-party content, enabling the proliferation of user-generated content (UGC) models central to social media and e-commerce. Without this immunity, platforms would face prohibitive legal risks from hosting vast volumes of unvetted material, such as the 500 hours of video uploaded to YouTube per minute or the 510,000 comments posted on Facebook per minute as of 2018, deterring experimentation with scalable UGC-dependent services. This has lowered barriers to entry, allowing nascent platforms to prioritize development over constant litigation defense, which can cost startups $3,000 for pre-complaint responses and $15,000–$150,000 for motions to dismiss absent immunity. Economically, Section 230 underpins a $2.6 trillion internet economy that supports 8.9 million jobs, as measured by U.S. industry data for 2022, by permitting platforms to aggregate UGC like online reviews—which influence 67% of U.S. purchases—without distributor liability. Analyses estimate that weakening or repealing these protections could erode $440 billion in GDP and 4.25 million jobs over a decade, based on models projecting litigation surges from unmoderated content volumes exceeding 600 billion posts annually. On investment, the provision has spurred venture capital inflows into UGC-reliant startups, with U.S. platform firms securing 2–3 times more total investment than European counterparts lacking equivalent protections; post-1996 enactment, such investments tripled. U.S. companies are five times more likely to raise over $10 million and ten times more likely to exceed $100 million in funding compared to peers, reflecting reduced risk premiums for platforms moderating while hosting third-party inputs. This dynamic has fostered competitive markets, with 71% of investors citing discomfort in funding intermediaries without Section 230 safeguards, thereby sustaining a diverse ecosystem beyond dominant incumbents.

Effects on Free Speech and Content Moderation

Section 230(c)(1) immunizes interactive computer services from liability for third-party content, enabling platforms to host extensive user-generated material without facing lawsuits for defamation, negligence, or other harms originating from users. This protection has facilitated the proliferation of speech, contributing to the internet's role as a primary venue for public discourse and information dissemination since its enactment in 1996. Empirical analysis of over 500 Section 230 cases from 1996 to 2009 found that intermediaries prevailed in nearly all instances where immunity applied, confirming the provision's broad shield against distributor liability and its role in reducing incentives for preemptive content removal. Complementing this, Section 230(c)(2) permits platforms to restrict access to content deemed "objectionable," whether to themselves or others, without forfeiting immunity under subsection (c)(1). This has empowered providers to implement policies targeting illegal activities like child exploitation or threats, with major platforms reporting the removal of millions of such instances annually, including over 20 million pieces of child exploitation content in 2022 alone. However, the same mechanism allows extensive curation of lawful speech, including political expression, leading to variability in enforcement across platforms. The dual provisions have engendered a tension in free speech dynamics: while immunity under (c)(1) mitigates a chilling effect by discouraging blanket removal to evade liability, (c)(2) grants platforms unchecked discretion in moderation, potentially amplifying private editorial control over public forums. Absent Section 230, platforms might adopt conservative over-moderation to minimize risks, as pre-1996 distributor precedents suggested, thereby suppressing marginal or controversial speech more than current practices. Studies indicate that proposals conditioning immunity on "neutrality" could exacerbate this by forcing platforms to host harmful content or face lawsuits, indirectly curtailing user expression. Accusations of political bias have intensified scrutiny, with claims that platforms leverage Section 230 to suppress conservative content disproportionately, as internal reviews at Twitter (pre-2022 acquisition) revealed algorithmic and human moderation biases favoring left-leaning narratives. Yet, empirical data on viewpoint bias remains contested; while over-removal affects diverse voices, including minority communities, the law's structure treats platforms as private actors exercising First Amendment rights rather than common carriers obligated to neutrality. This framework has sustained innovation in speech-hosting services but fueled calls for transparency in moderation algorithms, with some platforms disclosing that 94% of extremist content removals in 2023 were proactive via automated detection, raising questions about opaque decision-making shielded by immunity.

Criticisms from Conservative Perspectives

Alleged Platform Bias and Censorship

Conservatives have alleged that online platforms exploit Section 230's protections to engage in systematic viewpoint discrimination, particularly against right-leaning viewpoints, while facing no liability for such moderation decisions. Under Section 230(c)(2), providers may restrict access to material deemed "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable," but critics contend this provision enables subjective bias rather than neutral enforcement. High-profile cases, such as the permanent suspension of former President Donald Trump's accounts on Twitter and Facebook following the January 6, 2021, events, exemplify claims of disproportionate targeting of conservative figures, with platforms citing risks of further incitement despite similar tolerance for left-leaning rhetoric. The Twitter Files, internal documents released starting in December 2022 after Elon Musk's acquisition of the platform, provided evidence of algorithmic and manual interventions that reduced visibility of conservative voices without public disclosure. For instance, in 2022 disclosures by journalist Bari Weiss, Twitter applied "visibility filtering" or shadowbanning to accounts including Stanford professor Jay Bhattacharya, who opposed COVID-19 lockdowns, and conservative activist Charlie Kirk, limiting their reach while internal communications revealed no equivalent throttling of progressive counterparts. Additionally, on October 14, 2020, Twitter blocked sharing of the New York Post's report on Hunter Biden's laptop, citing hacked-materials policies influenced by FBI briefings, a decision later criticized as suppressing potentially election-relevant information ahead of the November 2020 vote. These revelations fueled arguments that platforms' editorial choices transform them into publishers, undermining Section 230's distributor immunity. Further allegations point to deplatforming of conservative outlets like Alex Jones' InfoWars in 2018 by Facebook, YouTube, and Apple, justified under "hate speech" policies but contested as selective given ongoing tolerance for analogous extreme content from other ideologies. Platforms' reliance on Section 230 to shield these actions has prompted conservative calls for reform, asserting that biased curation—evidenced by internal biases and disparate enforcement—violates the law's intent of fostering neutral intermediaries rather than ideologically driven gatekeepers. While some studies attribute higher conservative moderation rates to greater posting of policy-violating material, such as misinformation, the Twitter Files and deplatforming patterns substantiate claims of non-neutral application, particularly amid documented government-platform coordination on content flagging during the 2020 election cycle.

Overreach in Content Curation

Conservative critics contend that online platforms have exploited Section 230 immunity to engage in extensive content curation that functions as viewpoint-based censorship, particularly targeting right-leaning speech, while evading the liability traditionally imposed on publishers. They argue that proactive curation—such as algorithmic demotion, shadowbanning, and fact-check labeling—transforms platforms from neutral conduits into editorial gatekeepers, undermining the statute's intent to protect passive hosts rather than active curators. This overreach, in their view, allows dominant platforms like Twitter (now X) and Facebook to suppress dissenting narratives on topics including election integrity and COVID-19 policies without accountability. A pivotal example occurred on May 28, 2020, when President Donald Trump issued Executive Order 13925, "Preventing Online Censorship," in response to Twitter's addition of a fact-check label to one of his tweets regarding mail-in voting fraud claims. The order asserted that platforms' curation practices, including editorial interventions like fact-checking third-party content, rendered them ineligible for Section 230(c)(1) protections against publisher liability, as such actions demonstrated they were not mere distributors but active shapers of information flow. It directed federal agencies to limit Section 230 defenses for platforms exhibiting "biased" moderation and proposed tying immunity to transparent, non-discriminatory policies, highlighting conservative concerns over perceived anti-conservative bias amplified by legal shields. Further illustrations include the January 2021 deplatforming of former President Trump following the Capitol riot, where platforms cited violations of policies against incitement, yet conservatives such as Senators Josh Hawley and Ted Cruz argued this exemplified arbitrary enforcement that chilled political discourse without due process. Platforms' removal of Parler from app stores in the same period was similarly critiqued as collusive overreach, forcing a conservative-leaning network offline under the guise of moderation while the removing platforms retained Section 230 benefits for their own editorial choices. These actions fueled demands for reforms conditioning immunity on neutrality, with the Department of Justice's September 2020 legislative proposal clarifying that platforms assuming "information content provider" roles through heavy curation forfeit protections. Empirical analyses cited by conservatives, such as internal leaks from platforms revealing disproportionate flagging of conservative content, reinforce claims of selective censorship enabled by Section 230's broad latitude, though platforms counter that moderation targets policy violations universally. Critics maintain this discrepancy erodes public trust and innovation by entrenching monopolistic control over discourse, urging statutory amendments to penalize over-moderation as a forfeiture of immunity.

Criticisms from Progressive Perspectives

Inadequate Protections Against Harm

Critics from progressive perspectives contend that Section 230's grant of immunity to interactive computer services for third-party content fails to impose sufficient obligations on platforms to prevent or mitigate foreseeable harms, thereby enabling the proliferation of dangerous material without adequate accountability. This view holds that the law's structure, particularly subsection (c)(1), disincentivizes proactive safeguards against harms such as cyber-harassment, algorithmic amplification of extremist content, and child exploitation, as platforms can host or distribute injurious content while avoiding civil liability. A primary concern involves harms to minors, where Section 230 has been described as facilitating abuse and exploitation by shielding platforms from responsibility for conduct like grooming or sextortion schemes. For instance, investigations have highlighted platforms' roles in enabling predatory interactions, with critics arguing that the immunity provided by Section 230 acts as a barrier to lawsuits that could enforce better detection and removal of such material, contributing to rising incidents of online child victimization reported by federal authorities. Progressive advocates, including Democratic lawmakers, assert this inadequacy extends to mental health impacts, where addictive algorithms prioritize engagement over safety, exacerbating issues like teen depression linked to cyberbullying and harmful challenges on sites like Instagram and TikTok. Another focal point is the facilitation of hate speech and extremism that incite real-world violence, with platforms allegedly profiting from unmoderated content that radicalizes users. Examples cited include the 2022 Buffalo supermarket shooting, where the perpetrator referenced ideologies encountered online, prompting state attorneys general to probe companies for inadequate content controls under existing laws, yet limited by Section 230's protections. Whistleblower testimony, such as that from former Facebook employee Frances Haugen in 2021, has emphasized how Section 230 enables "dangerous algorithms" to boost extremist material, urging reforms to mandate transparency and liability for foreseeable harms like election interference or public health misinformation during the COVID-19 pandemic. In response, Democratic-led proposals seek to narrow Section 230's scope to compel greater platform responsibility. The SAFE TECH Act, reintroduced in February 2023 by Senators Mark Warner, Mazie Hirono, Amy Klobuchar, and others, would strip immunity for certain algorithmic recommendations and paid content, permit civil suits for cyber-stalking, discrimination based on protected classes, and wrongful death arising from platform misuse, while allowing injunctive relief to halt irreparable harms. Proponents argue this would force providers to implement reasonable care standards without transforming them into publishers, addressing gaps exposed in cases of harassment and bias amplification. President Biden, in a January 2020 interview with The New York Times editorial board, called for outright revocation of Section 230, claiming it permits tech firms to disseminate known falsehoods with impunity. These efforts reflect a broader progressive push for treating platforms as distributors with notice-based duties akin to traditional distributor liability, though implementation remains stalled amid debates over enforcement mechanisms.

Enabling Exploitation and Misinformation

Progressive critics maintain that Section 230(c)(1) immunizes interactive computer services from liability for third-party content that facilitates child sexual exploitation, including sex trafficking and the distribution of child sexual abuse material (CSAM), thereby reducing incentives for platforms to prevent or mitigate such harms proactively. In Jane Doe No. 1 v. Backpage.com, LLC (2016), the U.S. Court of Appeals for the First Circuit affirmed dismissal of trafficking claims against the classifieds site, holding that Backpage's moderation of ads—such as removing terms like "underage" or "barely legal"—did not transform it into a liable publisher under Section 230, despite allegations of deliberate facilitation for profit. This ruling, echoed in subsequent cases, has been cited by advocacy groups as exemplifying how immunity enables platforms to host exploitative content under the guise of neutrality, even when aware of patterns like repeated postings in high-trafficking areas. The 2018 Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA-SESTA) introduced a partial exception for civil liability in sex trafficking cases, yet progressive and victim-advocacy voices argue that core immunities persist for related abuses, such as algorithmic recommendations of exploitative content or failure to verify user ages. The National Center for Missing & Exploited Children (NCMEC) CyberTipline processed 20.5 million reports of suspected child sexual exploitation in 2024, including nearly 63 million files of abusive imagery, with platforms submitting the majority; reports of AI-generated material surged 1,325% year-over-year, underscoring ongoing platform vulnerabilities. Child-safety organizations have testified that Section 230's framework, by treating platforms as passive distributors, exacerbates harms through "harmful design features" like infinite scrolling and engagement maximization, without sufficient accountability beyond federal reporting mandates. On misinformation, critics from progressive circles assert that Section 230 enables its unchecked proliferation by shielding platforms from consequences for hosting or amplifying false narratives, which undermine democratic processes and public health. During the 2020 U.S. presidential election, platforms disseminated unsubstantiated claims of voter fraud, contributing—per some analyses—to diminished trust, with surveys indicating 91% of users holding social media responsible for the spread of such content. In the COVID-19 context, algorithmic prioritization of sensational content correlated with vaccine hesitancy; a review of studies linked misinformation exposure to harmful outcomes, including delayed adherence to public health guidelines and excess deaths estimated in the hundreds of thousands globally from hesitancy-fueled non-compliance. Democratic proposals, such as those tying immunity to verified reductions in health misinformation, reflect this view, positing that absent liability, platforms prioritize ad revenue over curating accurate information. Such critiques often draw from academic and policy research, though empirical causation between platform-hosted misinformation and behavioral harms remains debated, with meta-analyses showing modest effects overshadowed by preexisting beliefs and offline influences; many studies originate from institutions with documented left-leaning biases, potentially inflating perceived platform culpability.

Reform Proposals and Developments

Legislative Reforms (FOSTA-SESTA and Beyond)

The Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA-SESTA), enacted on April 11, 2018, marked the first significant legislative amendment to Section 230 since its inception. This bipartisan measure, signed into law by President Donald Trump, carved out an exception to Section 230's liability protections by clarifying that the immunity does not extend to civil or criminal enforcement against interactive computer services that knowingly facilitate sex trafficking. Specifically, it amended 47 U.S.C. § 230(e)(5) to permit liability under federal and state sex-trafficking laws, such as the Trafficking Victims Protection Reauthorization Act, when platforms promote or assist in such activities. Proponents, including lawmakers like Sen. Rob Portman (R-OH) and Rep. Ann Wagner (R-MO), argued it addressed a loophole exploited by sites like Backpage.com, which was seized by federal authorities in April 2018 amid trafficking allegations. FOSTA-SESTA's implementation prompted immediate platform responses, including Craigslist's shutdown of its personals section and heightened moderation on other sites hosting adult content to avoid perceived facilitation risks. Empirical data post-enactment revealed mixed outcomes: federal prosecutions for online-facilitated trafficking increased, with the Department of Justice reporting over 200 indictments tied to platforms by 2020, but studies indicated no significant decline in overall trafficking reports to the National Human Trafficking Hotline, which rose from 10,359 to 11,500 in 2019. Critics, including sex worker advocacy groups and civil liberties organizations, contended the law conflated consensual sex work with trafficking, driving activities underground and elevating risks such as violence and exploitation without third-party screening options; a 2020 survey by the Hacking//Hustling collective found 65% of sex workers experienced reduced online safety post-FOSTA-SESTA. Courts have interpreted FOSTA narrowly, as in a 2022 Ninth Circuit ruling holding that mere hosting of third-party content does not trigger liability absent a knowing benefit from trafficking. Subsequent reform efforts have targeted child sexual abuse material (CSAM) and platform accountability without broadly repealing Section 230. The Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act, first introduced in 2020 by Sens. Lindsey Graham (R-SC) and Richard Blumenthal (D-CT), sought to condition Section 230 immunity on platforms adopting "best practices" certified by a multi-stakeholder commission, potentially exposing non-compliant sites to state-level civil suits for CSAM distribution. Reintroduced in 2023 as S.1207 and H.R.2732, it advanced through the Senate Judiciary Committee with unanimous approval in May 2023 but stalled in broader congressional action, with opponents warning of encryption backdoors and over-moderation incentives. As of October 2025, EARN IT remains unpassed, reflecting persistent bipartisan support for targeted carve-outs amid concerns over free speech chilling effects. By 2025, bills like the Kids Online Safety Act (KOSA, S.1748 in the 119th Congress) have gained traction, reintroduced on May 14, 2025, by Sens. Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT). KOSA imposes duties on "covered platforms" to mitigate harms to minors under 17, including default safeguards and risk assessments, while preserving Section 230 for good-faith compliance but allowing civil suits for knowing failures. Unlike FOSTA-SESTA's direct exception, KOSA emphasizes proactive safeguards without mandating content removal, though implementation could indirectly pressure moderation practices; it advanced in committees but faced delays amid debates over parental rights and algorithmic censorship.
Other proposals include more than 50 bills introduced since 2018 proposing further exceptions for civil rights violations, paid advertising, or algorithmic amplification, yet none have achieved FOSTA-SESTA's passage, highlighting Section 230's resilience despite ongoing scrutiny.

Executive and Regulatory Actions

On May 28, 2020, President Donald Trump issued Executive Order 13925, titled "Preventing Online Censorship," directing federal agencies, including the Federal Communications Commission (FCC) and the Federal Trade Commission (FTC), to evaluate and potentially limit Section 230 immunities for platforms engaging in what the order described as "editorial acts" tantamount to publishing, particularly when restricting content in bad faith. The order responded to instances like Twitter's labeling of Trump's tweets on election fraud as misleading, arguing that such actions undermined Section 230's original intent to protect neutral platforms rather than active censors. It directed the FTC to consider action against deceptive moderation practices, ordered a review of federal advertising spending on platforms accused of viewpoint-based restrictions, and discouraged federal agencies from contracting with platforms violating these principles. The Department of Justice (DOJ) under Attorney General William Barr conducted a review of Section 230, culminating in a September 23, 2020, proposal for legislative reforms to Congress, recommending narrowed immunity for platforms that fail to address unlawful content or engage in inconsistent moderation. Concurrently, the National Telecommunications and Information Administration (NTIA) petitioned the Federal Communications Commission (FCC) on July 27, 2020, for rulemaking to clarify Section 230's scope, emphasizing that platforms lose immunity when acting as publishers through biased curation. FCC Chairman Ajit Pai affirmed the agency's interpretive authority over Section 230 as part of the Communications Act but declined to initiate formal rulemaking before the administration ended. Under the Biden administration, Executive Order 14029 in May 2021 revoked Trump's 2020 order, rescinding related policy directives to refocus on holding platforms accountable for harms like misinformation and child exploitation rather than restricting moderation. The White House convened listening sessions in 2022, announcing principles for Section 230 reform, including stripping immunity for platforms failing to remove illegal content and requiring transparency in algorithmic amplification. The administration argued before the Supreme Court in cases like Gonzalez v. Google (2023) that Section 230 does not immunize recommendations of terrorist content, though the Court declined to narrow the statute broadly. Regulatory efforts persisted without major rule changes; the FTC issued a request for public comment in early 2025 on platform moderation practices, signaling scrutiny of platforms' Section 230 claims amid concerns over deceptive algorithms and user harms. The FCC, following the Court's 2024 Loper Bright decision overturning Chevron deference, faced constraints on interpreting Section 230, with debates over its authority to impose conditions on immunity. No binding regulations emerged by October 2025, leaving platforms' liabilities largely intact pending legislative action.

2024-2025 Sunset and Overhaul Debates

In May 2024, House Energy and Commerce Committee Chair Cathy McMorris Rodgers (R-WA) and Ranking Member Frank Pallone (D-NJ), along with Communications and Technology Subcommittee Chair Bob Latta (R-OH) and Ranking Member Doris Matsui (D-CA), unveiled a bipartisan draft bill to sunset Section 230 immunity effective the final week of 2025. The measure sought to eliminate the existing liability shield without an immediate replacement, pressuring technology companies and lawmakers to negotiate and pass updated legislation tailored to modern dynamics, including algorithmic recommendations and pervasive content distribution. A House Communications and Technology Subcommittee hearing on the proposal occurred on May 22, 2024, featuring testimony from legal experts, industry representatives, and advocates. Supporters, including the bill's drafters, contended that Section 230's broad protections have enabled unchecked platform power, failing to evolve with technologies like AI-driven feeds that amplify harmful content, and that sunsetting would incentivize targeted reforms rather than perpetual inaction. Critics, such as civil society groups and the Association of Research Libraries, argued the abrupt expiration would flood courts with lawsuits, compel over-moderation by smaller platforms and nonprofits, and disproportionately burden users by eroding safe harbors for user-generated content, without guaranteeing superior replacements. Debates intensified into 2025 amid stalled legislative progress, with no sunset enacted by October 2025. Overhaul advocates proposed alternatives like stripping immunity for algorithmic amplification or paid advertisements to address biases and harms without full repeal, as outlined by groups including Public Knowledge. Regulatory pushes gained traction, with incoming Federal Communications Commission Chairman Brendan Carr and President-elect Donald Trump signaling intent to reinterpret Section 230 narrowly via agency enforcement, potentially conditioning protections on transparency in content curation. Conservative reformers emphasized curbing perceived left-leaning censorship, while progressives prioritized accountability for misinformation and exploitation, highlighting partisan divides in envisioning post-230 liability.

International Comparisons

European Union Digital Services Act

The European Union's Digital Services Act (DSA), Regulation (EU) 2022/2065, establishes a framework for regulating digital services, including online intermediaries and platforms, to address illegal content, disinformation, and systemic risks while imposing due diligence obligations that condition liability protections. Adopted on October 19, 2022, and entering into force in November 2022, the DSA applies general rules to all intermediary services from February 17, 2024, with enhanced requirements for very large online platforms (VLOPs)—those reaching over 45 million EU users—applying from late August 2023. In contrast to Section 230 of the U.S. Communications Decency Act, which grants broad immunity to platforms for third-party content regardless of moderation efforts, the DSA ties limited immunity to proactive compliance, exposing non-compliant providers to civil liability and fines up to 6% of global annual turnover for systemic failures. Under the DSA, online platforms must expeditiously remove or disable access to notified illegal content, such as terrorist propaganda or illegal hate speech, and provide statements of reasons for content removal decisions along with appeal mechanisms. Intermediaries are required to verify seller identities in online marketplaces and prohibit deceptive "dark patterns" that manipulate user choices. VLOPs face additional mandates, including annual risk assessments for systemic harms like election interference or public health threats, mitigation measures such as independent audits, and data access for researchers to evaluate impacts. The regulation restricts targeted advertising based on sensitive data like political views and mandates design choices for recommender systems to allow users to opt for less personalized feeds. This regulatory approach diverges sharply from Section 230's emphasis on platform discretion, as the DSA's enforcement—overseen by the European Commission for VLOPs and national Digital Services Coordinators for others—imposes obligations that incentivize over-removal of content to mitigate fine risks, potentially chilling protected speech. Critics, including organizations focused on digital rights and free expression, argue that vague definitions of "systemic risks" and mandatory algorithmic interventions could export EU-style content controls globally via the "Brussels effect," where U.S. platforms adapt DSA compliance measures for all users, indirectly affecting worldwide operations and eroding intermediary neutrality akin to Section 230's protections. Evidence from early enforcement, such as the Commission's designation of 19 VLOPs and very large search engines including Facebook, TikTok, and X in April 2023, indicates heightened moderation pressures, with platforms facing investigations for non-compliance by mid-2025, though comprehensive data on speech impacts remains limited due to the regulation's recency. Proponents claim the DSA fosters accountability without publisher liability, but analyses suggest it may amplify biases in moderation by compelling platforms to prioritize regulatory signals over user-driven expression.

Legislation in Australia, UK, and Other Nations

In Australia, there is no statutory equivalent to Section 230 providing broad immunity for online intermediaries against liability for third-party content. The Broadcasting Services Act 1992 offers limited exemptions for internet service providers acting as mere conduits or caching content, but these do not extend to platforms hosting user-generated material, leaving them potentially liable as publishers under defamation law if they fail to remove content after notice. The Online Safety Act 2021, which commenced operation on January 23, 2022, imposes regulatory obligations on designated internet services, including social media platforms, to address specific online harms such as cyber-abuse, image-based abuse, and cyberbullying targeting children. The eSafety Commissioner can issue formal removal notices for non-consensual intimate images or targeted abusive content, with non-compliance penalties reaching AUD 555,000 for individuals or up to AUD 5.55 million—or higher based on benefits derived—for corporations, and potential court-ordered blocking of services. This framework emphasizes proactive compliance and enforcement rather than immunity, with the Act's 2024 statutory review confirming its focus on expanding protections against harms like self-harm promotion and hateful language without introducing safe harbors. In the United Kingdom, the Online Safety Act 2023, receiving royal assent on October 26, 2023, establishes a duty of care for user-to-user services and search engines with significant user bases to identify, assess, and mitigate risks from illegal content, including terrorism material, child sexual abuse material, and fraud, with enhanced protections for children against priority harms like pornography and harmful challenges. Regulated platforms must implement proportionate safety measures, conduct risk assessments, and remove illegal content expeditiously upon awareness, enforced by Ofcom with fines up to £18 million or 10% of qualifying worldwide revenue, whichever is greater, and potential business disruption measures like service blocking. Unlike Section 230's blanket protections, the Act conditions any limited exemptions on fulfilling proactive obligations, exposing non-compliant intermediaries to direct regulatory liability for systemic failures rather than granting immunity for user content. Other nations have adopted varied approaches, often diverging from Section 230's model by imposing greater accountability on intermediaries without equivalent immunities. In Canada, no comprehensive federal safe harbor exists akin to Section 230; while federal law includes provisions shielding intermediaries from certain liabilities, Canadian courts have rejected broad interpretations of immunity, as seen in rulings holding platforms accountable for user content in defamation or harms cases, and the Online Harms Act (Bill C-63, introduced February 2024) further mandates reporting and removal of harmful content with fines up to CAD 10 million or 3% of global revenue. India's Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, require significant social media intermediaries to appoint grievance officers, trace originators of messages in serious cases, and remove unlawful content within 36 hours of complaints, stripping safe harbor protections for non-compliance and enabling government blocking, prioritizing rapid enforcement over U.S.-style non-liability. These regimes generally reflect a trend toward conditional liability tied to due diligence duties, contrasting Section 230 by emphasizing harm prevention through regulatory oversight.

References

  1. [1]
    Section 230: An Overview | Congress.gov
Jan 4, 2024 · Section 230 of the Communications Act of 1934, enacted as part of the Communications Decency Act of 1996, provides limited federal immunity to providers and ...
  2. [2]
    [PDF] Communications Act of 1934
230. [47 U.S.C. 230] PROTECTION FOR PRIVATE BLOCKING AND SCREENING OF OFFENSIVE MATERIAL. (a) FINDINGS.--The Congress finds the following: (1) The rapidly ...
  3. [3]
    [PDF] brief - Supreme Court of the United States
Jan 19, 2023 · The following year, Representatives Cox and Wyden shepherded Section 230 through near-unanimous passage in the House of Representatives (420 ...
  4. [4]
    DEPARTMENT OF JUSTICE'S REVIEW OF SECTION 230 OF THE ...
    The US Department of Justice analyzed Section 230 of the Communications Decency Act of 1996, which provides immunity to online platforms from civil liability.
  5. [5]
    Section 230: Legislative History | Electronic Frontier Foundation
    Section 230 (47 U.S.C. § 230) was Congress' response to two court cases decided in New York in the early 1990's that had conflicting results.
  6. [6]
    [PDF] Section 230: A Juridical History | Stanford Law School
    Feb 26, 2025 · Section 230 of the Communications Decency Act of 1996 provides online entities immunity from lawsuits related to content authored by third ...
  7. [7]
    S.314 - 104th Congress (1995-1996): Communications Decency Act ...
    Introduced in Senate (02/01/1995). Communications Decency Act of 1995 - Amends the Communications Act of 1934 to prohibit the use of any telecommunications ...
  8. [8]
    S.652 - Telecommunications Act of 1996 104th Congress (1995-1996)
1995-06-15: Passed/agreed to in Senate: Passed Senate with amendments by Yea-Nay Vote. 81-18. Record Vote No: 268.
  9. [9]
    President Signs Telecommunications Act - Clinton White House
PRESIDENT CLINTON SIGNS THE TELECOMMUNICATIONS ACT OF 1996. February 8, 1996. President Clinton today will sign the Telecommunications Act of 1996 in the Main ...
  10. [10]
    Overview of Section 230: What It Is, Why It Was Created, and What It ...
    Feb 22, 2021 · In the early days of the Internet, Congress had not yet clarified the issue of intermediary liability for online services, leaving it to the ...
  11. [11]
    47 U.S. Code § 230 - Protection for private blocking and screening ...
    47 U.S. Code § 230 - Protection for private blocking and screening of offensive material ; (1) No effect on criminal law ; (2) No effect on intellectual property ...
  12. [12]
    Section 230: A Brief Overview | Congress.gov
    Feb 2, 2024 · Section 230 of the Communications Act of 1934, 47 USC § 230, provides limited immunity from legal liability to providers and users of interactive computer ...
  13. [13]
    Interpreting the ambiguities of Section 230 - Brookings Institution
    Oct 26, 2023 · The Court was set to decide whether Section 230 immunizes platforms for the act of recommending third-party content to users—a question of ...
  14. [14]
    [PDF] The Erosion of Publisher Liability in American Law, Section 230, and ...
    CompuServe found that the internet-based company was not liable, another court arrived at the opposite conclusion in Stratton Oakmont v. Prodigy. Congress ...
  15. [15]
    Interpreting the Ambiguities of Section 230
    Apr 17, 2024 · In this essay I argue that Section 230, despite its simple-seeming language, is a deeply ambiguous statute.
  16. [16]
    Stratton Oakmont, Inc. v. Prodigy Services Co. - Quimbee
    Stratton Oakmont (Stratton) (plaintiff) sued Prodigy for libel based on a subscriber's allegedly defamatory messages posted on one of the boards.
  17. [17]
    Stratton Oakmont v. Prodigy Services: The Case that Spawned ...
    Feb 18, 2022 · On a partial summary judgment motion brought by Stratton, the court considered Prodigy's own statements and went through the classical libel ...
  18. [18]
    Stratton Oakmont, Inc. v. Prodigy Services Co. - Tom W. Bell
    (d) STRATTON was a "cult of brokers who either lie for a living or get fired." Plaintiffs commenced this action against PRODIGY, the owner and operator of the ...
  19. [19]
    Zeran v. America Online, Inc., 129 F.3d 327 (4th Cir. 1997)
    Zeran v. AOL held AOL not liable for a false ad on its bulletin board, interpreting CDA 230 to protect service providers from liability for user content.
  20. [20]
    Zeran v. America Online, Inc., 958 F. Supp. 1124 (E.D. Va. 1997)
    In this case, Zeran seeks to hold AOL liable for its alleged negligence in allowing the bogus notices to remain and reappear after learning of their fraudulent ...
  21. [21]
    Zeran v. America Online, Inc. (4th Cir.) (1997) - Free Speech Center
    Jan 1, 2009 · “In this case, AOL is legally considered to be a publisher,” the Fourth Circuit wrote. ... “Optimal Liability System for Online Service Providers: ...
  22. [22]
    [PDF] Zeran v. AOL - Santa Clara Law Digital Commons
    • No distinction between publisher and distributor liability. • No notice-based liability. • “230 forbids the imposition of publisher liability on a service.
  23. [23]
  24. [24]
    47 U.S.C. § 230 and the Publisher/Distributor/Platform Distinction
    May 28, 2020 · Section 230 makes Internet platforms and other Internet speakers immune from liability for material that's posted by others.
  25. [25]
    The Test of Time: Section 230 of the Communications Decency Act ...
    § 230(c)(1). The law defines "information content provider" as "any person or entity that is responsible, in whole or in part, for the creation or development ...
  26. [26]
    Zeran v. America Online E-Resource by Eric Goldman, Jeff Kosseff
    Sep 29, 2020 · The Fourth Circuit's 1997 ruling in Zeran v. America Online played a critical role in Section 230 jurisprudence and, by extension, the ...
  27. [27]
    The Exceptions to Section 230: How Have the Courts Interpreted ...
    Feb 22, 2021 · Meanwhile, 230(c)(2) states that providers are not liable for “any action voluntarily taken in good faith to restrict access to or availability ...
  28. [28]
    Carafano v. Metrosplash.com, Inc., 339 F.3d 1119 (9th Cir. 2003)
    In this appeal, we consider to what extent a computer match making service may be legally responsible for false content in a dating profile provided by someone ...
  29. [29]
    Carafano v. Metrosplash.com, Inc., 339 F.3d 1119 (2003) - Quimbee
    However, § 230 of the Communications Decency Act provided providers of interactive computer services “broad immunity for publishing content provided primarily ...
  30. [30]
    Fair Housing Council of San Fernando Valley v. Roommates.com ...
    The Ninth Circuit held that Roommates.com could not claim immunity under CDA § 230 where as a condition of use, it required users to choose among set answers.
  31. [31]
    [PDF] FAIR HOUSING COUNCIL v. ROOMMATES.COM
Apr 3, 2008 · The district court held that Roommate is immune under section 230 of the CDA, 47 U.S.C. § 230(c), and dismissed the federal claims without ...
  32. [32]
    Section 230 Under Fire: Recent Cases, Legal Workarounds, and ...
    Apr 6, 2025 · When these elements are met, Section 230 can immunize the platform from a wide array of state law claims (defamation, negligence, etc.), as well ...
  33. [33]
    What Has Congress Been Doing on Section 230? - Lawfare
    May 27, 2025 · The original model here is SESTA/FOSTA, legislation passed in 2018 that limited Section 230 protections for content related to sex trafficking.
  34. [34]
    Executive Order on Preventing Online Censorship – The White House
    May 28, 2020 · Section 230(c) was designed to address early court decisions holding that, if an online platform restricted access to some content posted by ...
  35. [35]
    Trump and Section 230: What to Know | Council on Foreign Relations
    In May 2020, Trump issued an executive order aimed at limiting the legal protection offered by Section 230. The move came after Twitter appended fact checks to ...
  36. [36]
    The Twitter Files, Section 230, and the Free Market
    Mar 22, 2023 · Daniel Lyons cites Adam Smith when arguing that the absence of Section 230 protections will lead to more censorship and less competition driven ...
  37. [37]
    Don't use 'Twitter Files' to eliminate Section 230 - The CGO
Jan 5, 2023 · Section 230 means Twitter isn't held liable for legal content generated by its users. Doing away with these protections would mean the company ...
  38. [38]
    [PDF] 23-961 Doe v. Snap, Inc. (07/02/2024) - Supreme Court
Jul 2, 2024 · The courts below concluded that §230 of the Communications Decency Act of 1996 bars Doe's claims. 47 U. S. C. §230. The Court of Appeals denied ...
  39. [39]
    A Final Bow for Section 230? Latest Plea for Reform Calls for Sunset ...
    Jun 11, 2024 · The intent of the legislation is “not to have Section 230 actually sunset,” but to prompt Congress “to advance a long-term reform solution to Section 230.”
  40. [40]
    Intermediary Liability and Future Challenges for Section 230
    Feb 27, 2025 · ” Continued polarization of Internet policy and instability of Section 230's future presents a challenge to policy makers and businesses alike.
  41. [41]
  42. [42]
  43. [43]
    An Economic Case for Section 230 - Disruptive Competition Project
    Sep 6, 2019 · In addition to the competitive and innovative incentives Section 230 provides, it holds strong economic benefits as well.
  44. [44]
  45. [45]
    Repealing Section 230 Would Cost Americans Over $1.3 Trillion
    Jul 9, 2024 · The enormous financial costs of Section 230 repeal to investors result from removing the legal protections that underpin the $2.6 trillion digital economy.
  46. [46]
  47. [47]
  48. [48]
    Report: Section 230 Enables American Innovation to ... - NetChoice
Jun 25, 2019 · Section 230 continues to enable strong economic growth. · Section 230 enables a world-leading, innovative and competitive tech industry.
  49. [49]
  50. [50]
    An Empirical Study of Intermediary Immunity Under Section 230 of ...
    Jun 18, 2010 · While section 230 has largely protected intermediaries from liability for third-party speech, it has not been the free pass many of its ...
  51. [51]
    How Section 230 reform endangers internet free speech | Brookings
    Jul 1, 2020 · Proposals to reform Section 230 of the Communications Decency Act, which grants internet platforms legal immunity for most of the content posted by their users.
  52. [52]
    The Internet as a Speech Machine and Other Myths Confounding ...
    Yes, Section 230 is in need of reform, but it must be the right kind of reform. Our reservations stem from misconceptions riddling the debate. Those now ...
  53. [53]
    Fact-Checking the Critiques of Section 230: What Are the Real ...
    Feb 22, 2021 · Section 230 immunity does not apply to federal criminal law. The government can still prosecute online services that engage in illegal activity.
  54. [54]
    Joint Center Releases Second Research Brief on the Section 230 ...
    Apr 29, 2025 · This brief also highlights how platform immunity under Section 230 can shield moderation decisions even when they result in over-removal, ...
  55. [55]
    Content Moderation Issues Online: Section 230 Is Not to Blame
    Section 230 presents issues such as over-moderation by Interactive Computer Service (ICS) providers that can go as far as to be considered censorship and under ...
  56. [56]
    Twitter, Facebook, Google have repeatedly censored conservatives ...
    Mar 29, 2022 · Twitter, Facebook, Google have repeatedly censored conservatives despite liberal doubts. Debate over Big Tech regulation remains pressing issue for both ...
  57. [57]
    Summarizing the Section 230 Debate: Pro-Content Moderation vs ...
    Jul 5, 2022 · Right now, platforms are protected by Section 230, but current federal proposals could have unintended side effects in wake of new abortion laws ...
  58. [58]
    [PDF] Latest 'Twitter Files' reveal secret suppression of right-wing ...
    Dec 8, 2022 · Independent journalist Bari Weiss detailed in a series of posts how Twitter used so-called “shadow banning” to limit the visibility of tweets ...
  59. [59]
    The Cover Up: Big Tech, the Swamp, and Mainstream Media ...
    Feb 8, 2023 · Former Twitter employees testified on their decision to restrict protected speech and interfere in the democratic process.
  60. [60]
    Section 230 Reform: What Websites Need to Know Now
Jul 2, 2025 · Section 230 of the Communications Decency Act of 1996 has been credited with “creating” the internet by immunizing websites and platforms from lawsuits.
  61. [61]
    Section 230 Reform: Flawed Arguments and Unintended ...
    Jan 19, 2022 · Right-leaning critics accuse platforms of misusing the moderator's privilege by disproportionately targeting conservative content for removal.
  62. [62]
    Republican Midterm Agenda: Section 230, Censorship, and Big Tech
    Oct 18, 2022 · Republicans have had eyes on Section 230 reform for years claiming social media companies overly flag, deplatform, and discriminate against conservative ...
  63. [63]
    Preventing Online Censorship - Federal Register
Jun 2, 2020 · Section 230(c) was designed to address early court decisions holding that, if an online platform restricted access to some content posted by ...
  64. [64]
    Remarks by President Trump Announcing an Executive Order on ...
like “fact check” content — onto other people's content, and when they curate their collection, and when ...
  65. [65]
    The Justice Department Unveils Proposed Section 230 Legislation
Sep 23, 2020 · The legislative proposal also adds language to the definition of “information content provider” to clarify when platforms should be responsible ...
  66. [66]
    Why repealing or weakening Section 230 is a very bad idea - FIRE
    Feb 20, 2023 · Conservatives complain that it allows platforms to censor conservative voices with impunity, while liberals criticize the law for allowing ...
  67. [67]
    Section 230 Reform: Left and Right Want It, for Very Different Reasons
    Apr 12, 2021 · Many on the left, however, believe Big Tech companies are not moderating enough content, particularly what they view as harmful or extremist ...
  68. [68]
    TRANSCRIPT: Children's Safety in the Digital Era - Tech Policy Press
    Feb 20, 2025 · Sextortion is only one of the many harms due to our children due to Big Tech's lack of accountability. Big Tech is the big tobacco of this ...
  69. [69]
    [PDF] Why Section 230 hurts kids, and what to do about it - Congress.gov
    Dec 8, 2020 · Since Section 230 of the 1996 Communications Decency Act was passed, it has been a get-out-of-jail-free card for companies like Facebook and ...
  70. [70]
    Legislation to Reform Section 230 Reintroduced in the Senate, House
    Feb 28, 2023 · The SAFE TECH Act would force online service providers to address misuse on their platforms or face civil liability.
  71. [71]
  72. [72]
  73. [73]
  74. [74]
    [PDF] A REVIEW OF PROPOSALS TO REFORM SECTION 230
May 5, 2021 · The Democratic proposals are like the Republican proposals, however, in that they seek to accomplish their objectives by revoking Section 230 ...
  75. [75]
    Advanced Constitutional Law : Jane Doe No. 1 v. Backpage.Com, LLC
    Mar 14, 2016 · A. Trafficking Claims. The appellants challenge the district court's conclusion that section 230 of the CDA shields Backpage from liability for ...
  76. [76]
    [PDF] The Problem Isn't Just Backpage: Revising Section 230 Immunity
    Section 230 provides immunity to platforms like Backpage, even when they facilitate illegal activity, such as sex trafficking, and even when they know about it.
  77. [77]
    [PDF] the indecency and injustice of section 230
Mar 21, 2018 · to be, particularly when it comes to sex trafficking, pornography, child sex-abuse images, and exploitation. It is clear that, whatever ...
  78. [78]
    NCMEC Releases New Data: 2024 in Numbers - MissingKids.org
    May 8, 2025 · ... child sexual exploitation: online enticement and child sex trafficking. NCMEC is already seeing an increase in child sex trafficking reports ...
  79. [79]
    [PDF] 1 Prepared Written Testimony Dawn Hawkins Senior Advisor ...
Mar 26, 2025 · ... Section 230,” National Center on Sexual Exploitation, February ... sexual abuse and sex trafficking, and their own harmful design features.
  80. [80]
    Generative AI CSAM is CSAM - MissingKids.org
    Mar 11, 2024 · ... child safety. In 2023, NCMEC's CyberTipline received 4,700 reports related to Child Sexual Abuse Material (CSAM) or sexually exploitative ...
  81. [81]
    [PDF] Disrupting the Narrative: Diving Deeper into Section 230 Political ...
Jul 11, 2023 · Id. at 1, 3 (finding that ninety-one percent of users believe social media companies are responsible for the spread of misinformation).
  82. [82]
    The disaster of misinformation: a review of research in social media
    Feb 15, 2022 · The spread of misinformation in social media has become a severe threat to public interests. For example, several incidents of public health ...
  83. [83]
    The social media Infodemic of health-related misinformation and ...
    This paper discusses the role of social media algorithms in the spread of misinformation during the COVID-19 pandemic.
  84. [84]
    A Crash Course on Section 230: What it is and why it matters
    Feb 17, 2023 · Proposals from congressional Democrats have included efforts to make platforms liable for health misinformation, or in cases where online ...
  85. [85]
    Section 230 Reform and Disinformation | McCain Institute
Aug 14, 2023 · In 1997, the Supreme Court in Reno v. ACLU sided with the ACLU stating the censorship placed an “unacceptably heavy burden on protected speech” ...
  86. [86]
    (Why) Is Misinformation a Problem? - PMC - NIH
    We examined different disciplines (computer science, economics, history, information science, journalism, law, media, politics, philosophy, psychology, ...
  87. [87]
    [PDF] Act - One Hundred Fifteenth Congress of the United States of America
To amend the Communications Act of 1934 to clarify that section 230 of such Act does not prohibit the enforcement against providers and users of interactive ...
  88. [88]
    FOSTA Signed into Law, Amends CDA Section 230 to Allow ...
Apr 11, 2018 · The law is intended to limit the immunity provided under Section 230 of the Communications Decency Act (“CDA Section 230”) for online services ...
  89. [89]
    How Congress Really Works: Section 230 and FOSTA
    May 20, 2023 · Congress sought to amend Section 230 so that it could not block lawsuits that would normally be allowed under federal sex trafficking laws. Most ...
  90. [90]
    Sex Sells, But Not Online: Tracing the Consequences of FOSTA ...
Dec 4, 2021 · FOSTA-SESTA has pierced the shield of net neutrality, curtailed free speech, and severely impacted the emotional, physical, and financial well-being of people.
  91. [91]
    The impact of FOSTA-SESTA and the removal of Backpage on sex ...
    ... FOSTA-SESTA, became US law. Its stated goal was to reduce human trafficking by amending Section 230 of the Communications Decency Act and holding Internet ...
  92. [92]
    The Impact of FOSTA-SESTA and the Removal of Backpage 2020
... FOSTA-SESTA, became US law in 2018. The stated goal of this law was to reduce human trafficking by amending section 230 of the Communications Decency Act.
  93. [93]
    Ninth Circuit Interprets FOSTA Restriction on Section 230 Narrowly
Oct 28, 2022 · On October 24, the Ninth Circuit ruled that Section 230 of the Communications Decency Act shielded Reddit Inc. from liability under the ...
  94. [94]
    Graham, Blumenthal Reintroduce EARN IT Act
    Apr 19, 2023 · “The EARN IT Act removes Section 230 blanket liability protection from service providers in the area of child sexual abuse material on their ...
  95. [95]
    Senate Judiciary Committee Unanimously Approves EARN IT Act
    May 4, 2023 · Section 230 of the Communications Decency Act gives “interactive computer services” significant immunity from civil liability, as well as state ...
  96. [96]
    Text - S.1748 - 119th Congress (2025-2026): Kids Online Safety Act
    May 14, 2025 · —It shall be unlawful for any covered platform to design, embed, modify, or manipulate a user interface of a covered platform with the purpose ...
  97. [97]
    S.1748 - Kids Online Safety Act 119th Congress (2025-2026)
This bill requires covered online platforms, including social media platforms, to implement tools and safeguards to protect users and visitors under the age of ...
  98. [98]
    Whatever happened to the Kids Online Safety Act? - The Verge
    Apr 29, 2025 · Nearly four months into 2025, KOSA has yet to be reintroduced in Congress. It's clear changes will be required to suit House Republican leadership.
  99. [99]
    NTIA Petition for Rulemaking to Clarify Provisions of Section 230 of ...
    Jul 27, 2020 · ... (FCC or Commission) initiate a rulemaking to clarify the provisions of section 230 of the Communications Act of 1934, as amended. NTIA ...
  100. [100]
    Chairman Pai Statement on Section 230
    Oct 15, 2020 · Federal Communications Commission Chairman Ajit Pai issued a statement on Section 230 of the Communications Act.
  101. [101]
    The President Revokes Prior Administration's Executive Order on ...
    May 17, 2021 · The President Revokes Prior Administration's Executive Order on CDA Section 230. New Media and Technology Law Blog on May 17, 2021 ...
  102. [102]
    White House renews call to 'remove' Section 230 liability shield
    Sep 9, 2022 · The Biden administration used the “listening session” to announce six reform principles, including the removal of Section 230 liability ...
  103. [103]
    Biden admin tells Supreme Court law protecting social media ...
    Dec 8, 2022 · Section 230 holds that social media companies cannot be treated as the publisher or speaker of any information provided by other users. The law ...
  104. [104]
    FTC Seeks Comment on Tech Content Moderation Policies
    Feb 21, 2025 · The RFI signals a focus on tech platform practices in the new Administration and is likely to be a prelude to further investigation efforts by the FTC.
  105. [105]
    The FCC Still Can't Interpret Section 230 - The Federalist Society
    May 1, 2025 · The Commission's ability to interpret Section 230 is particularly constrained because the Supreme Court overturned Chevron in Loper Bright Enterprises v. ...
  106. [106]
    The FCC's Authority to Interpret Section 230 of the Communications ...
    Oct 21, 2020 · The FCC has the authority to interpret all provisions of the Communications Act, including amendments such as Section 230.
  107. [107]
    Energy and Commerce Leaders Unveil Bipartisan Draft Legislation ...
May 12, 2024 · It's time to make that a reality, which is why we are unveiling today bipartisan draft legislation to sunset Section 230.
  108. [108]
    Lawmakers debate ending Section 230 in order to save it | The Verge
    May 22, 2024 · A pair of legislators have a plan to save Section 230: kill it so that Congress is forced to come up with a better version.
  109. [109]
    Legislative Proposal to Sunset Section 230 of the Communications ...
    May 22, 2024 · Communications and Technology Subcommittee Hearing: "Legislative Proposal to Sunset Section 230 of the Communications Decency Act". May 22, 2024 ...
  110. [110]
    Legislative Proposal to Sunset Section 230 of the Communications ...
Legislative Proposal to Sunset Section 230 of the Communications Decency Act, 118th Congress (2023-2024)
  111. [111]
    Sunsetting Section 230 Will Hurt Internet Users, Not Big Tech
    May 20, 2024 · This Wednesday, Congress will hold a hearing on a bill that would end Section 230 in 18 months. As EFF has said for years, Section 230 is ...
  112. [112]
    Sunsetting Section 230 Will Limit Free Expression
    May 21, 2024 · This week, the House Energy and Commerce Subcommittee on Communications and Technology will hold a hearing on a legislative proposal to “sunset” ...
  113. [113]
    What would happen if Section 230 went away? Legal expert ...
    Apr 11, 2025 · Opponents warn that repealing Section 230 could lead to increased censorship, a flood of litigation and a chilling effect on innovation and free expression.
  114. [114]
    Public Knowledge Proposes Section 230 Reforms That Address ...
Mar 10, 2025 · We have two proposals for Section 230 reform we believe can pass our own tests: Remove the platforms' liability shield for paid advertising ...
  115. [115]
    Content Moderation: Reforming FCC's Section 230 - Akin Gump
    Dec 6, 2024 · President-elect Donald Trump and Commissioner Brendan Carr have both expressed support for the FCC taking action to limit the scope of Section 230.
  116. [116]
    [PDF] The Failed Experiment of Section 230 of the Communications ...
Apr 3, 2025 · It corrects the false argument made by the tech industry in their attempt to redefine § 230's origin as one singularly focused on internet ...
  117. [117]
    Tech Regulation Digest: Sunsetting Section 230—The Future of ...
    Mar 3, 2025 · In 2024, platforms shielded by Section 230 controlled 65 percent of total US digital ad spend, according to the St. Louis Fed. This legal ...
  118. [118]
    The Digital Services Act package | Shaping Europe's digital future
Aug 22, 2025 · The Digital Services Act and Digital Markets Act aim to create a safer digital space where the fundamental rights of users are protected.
  119. [119]
    The EU's Digital Services Act - European Commission
Oct 27, 2022 · The DSA regulates online platforms to prevent illegal activities, protect user rights, and ensure a fair online environment, while fostering ...
  120. [120]
    [PDF] The Digital Services Act - Latham & Watkins LLP
    The DSA imposes obligations on all information society services that offer an intermediary service to recipients who are located or established in the EU, ...
  121. [121]
    [PDF] Fighting Disinformation Online: The Digital Services Act in the ...
    Jan 1, 2025 · This article discusses the threats disinformation poses to online users and provides a case study on how the European. Union's Digital Services ...
  122. [122]
    A guide to the Digital Services Act, the EU's new law to rein in Big Tech
    Clear rules for dealing with illegal content: The DSA updates the process by which digital service providers must act to rapidly delete illegal content based on ...
  123. [123]
    The EU Digital Services Act (DSA): everything you need to know
    What are the key obligations under the DSA? · Restrictions on targeted advertising · Recommender Systems · Prohibition on dark patterns · Removal of illegal content ...
  124. [124]
    The Digital Service Act: Overview and Key Obligations - WILLIAM FRY
    Feb 1, 2024 · The DSA aims to protect the fundamental rights of users of digital services and establish a level playing field for businesses.
  125. [125]
    EU Digital Services Act - ISD
    Aug 13, 2024 · The European Union's Digital Services Act (DSA), which fully came into force on 17 February 2024, is the world's first systemic online safety law.
  126. [126]
    The Digital Services Act and the Brussels Effect on Platform Content ...
    The EU's latest regulation of social media platforms—the Digital Services Act (DSA)—will create tension and conflict with the U.S. speech regime applicable ...
  127. [127]
    Europe's Digital Services Act: On a Collision Course With Human ...
Oct 27, 2021 · The DSA is now steaming full-speed-ahead on a collision course with even more algorithmic filters - the decidedly unintelligent “AIs” that the ...
  128. [128]
    The Brussels Effect?: Potential Domestic Impacts of International ...
Aug 23, 2023 · While the DSA may be seeking to limit the impact of “bad” content, it is also likely to impact access to content more generally. The Act gives ...
  129. [129]
    Digital Services Act: keeping us safe online - European Commission
    Sep 22, 2025 · Since February 2024, it applies to all other platforms in the European Union, except for micro and small enterprises. Enforcement of the DSA is ...
  130. [130]
    The enforcement framework under the Digital Services Act
    Feb 12, 2025 · Starting from 17 February 2024, the Commission can: apply fines up to 6% of the worldwide annual turnover in case of: Breach of DSA obligations ...
  131. [131]
    The Digital Services Act's lesson for U.S. policymakers: Co ...
    Aug 23, 2022 · ... Section 230, which deals with intermediary liability. This would at best be an indirect and highly risky way of getting to a solution. While ...
  132. [132]
    Europe Wants To Be the World's Speech Police
Mar 6, 2025 · Global adoption of EU standards could have devastating consequences for free speech online. ... EU's Digital Services Act (DSA) by ...
  133. [133]
    How Other Countries Have Dealt With Intermediary Liability | ITIF
    Feb 22, 2021 · The first, Section 230(c)(1), prevents online services from facing liability for third-party content on their platforms. The second, Section 230 ...
  134. [134]
    Internet intermediary liability for defamatory thi... - Clayton Utz
    Sep 28, 2023 · If a plaintiff obtains judgment against a defendant in a defamation matter, the Court may make orders against a third party digital intermediary ...
  135. [135]
    Learn about the Online Safety Act | eSafety Commissioner
Aug 29, 2025 · The Online Safety Act 2021 expands Australia's protections against online harm, to keep pace with abusive behaviour and toxic content.
  136. [136]
    [PDF] Statutory Review of the Online Safety Act 2021
    Apr 26, 2024 · The review covers online harms like abuse, exploitation, self-harm, and hateful language, and the Act's focus on Basic Online Safety ...
  137. [137]
    Online Safety Act: explainer - GOV.UK
    Any site that allows users to share content or interact with each other is in scope of the Online Safety Act. These laws also require sites to rapidly remove ...
  138. [138]
    Online Safety Act 2023 - Legislation.gov.uk
(1)This section sets out the duties about risk assessments which apply in relation to all regulated user-to-user services. (2)A duty to carry out a suitable and ...
  139. [139]
    Section 230 - Online Safety Act 2023 - Legislation.gov.uk
    Changes to legislation: There are currently no known outstanding effects for the Online Safety Act 2023, Section 230.
  140. [140]
    Did US-style “Section 230” Internet Platform Immunity Sneak into ...
May 15, 2023 · Did US-style “Section 230” Internet Platform Immunity Sneak into Canada through CUSMA? No, It did Not (As Google Just Learned–the Hard Way).
  141. [141]
    CIPPIC Releases New Report on Intermediary Liability in Canada ...
    Jul 15, 2020 · While CUSMA provides for a liability shield quite similar to Section 230, the provisions differ in that CUSMA permits courts to order ...