
Murthy v. Missouri

Murthy v. Missouri is a landmark Supreme Court case decided on June 26, 2024, addressing allegations that senior Biden administration officials, including White House staff, the Surgeon General, and FBI agents, violated the First Amendment by pressuring social media platforms to suppress user content labeled as misinformation on COVID-19 policies and the 2020 election. The 6–3 majority opinion, authored by Justice Barrett, held that the plaintiffs—Missouri and Louisiana, along with individual epidemiologists, journalists, and a healthcare activist whose posts were removed—lacked Article III standing due to insufficient traceability of past harms to government actions and speculative future injury risks. The dispute arose amid extensive federal communications with companies like Facebook, YouTube, and X (formerly Twitter), in which officials flagged posts for removal or demotion, sometimes expressing frustration over non-compliance and hinting at regulatory consequences such as antitrust scrutiny or Section 230 reforms. Plaintiffs contended this constituted coercive "jawboning," transforming platforms' editorial decisions into state action subject to First Amendment scrutiny, supported by evidence from discovery, including internal emails showing platforms yielding after repeated entreaties. Lower courts, including the Western District of Louisiana and the Fifth Circuit, found preliminary evidence of likely coercion and issued injunctions barring such influence, though the latter narrowed the scope to targeted officials. While the majority emphasized platforms' independent choices and attenuated causation—citing platforms' pre-existing policies and voluntary changes post-2022—the dissent by Justice Alito, joined by Justices Thomas and Gorsuch, argued the record demonstrated a coordinated campaign of pressure amounting to coercion, with standing established for at least one plaintiff via direct platform suppressions linked to government demands. The ruling avoided merits adjudication, leaving unresolved whether such government-platform interactions cross constitutional lines, but highlighted ongoing debates over public-private speech coordination amid documented surges in federal-platform contacts during the pandemic.

Background and Context

Pre-Lawsuit Government-Social Media Interactions

Following the inauguration of President Joe Biden on January 20, 2021, White House officials promptly contacted Twitter to demand the removal of a tweet by Robert F. Kennedy Jr. criticizing COVID-19 vaccine safety, with an email urging action "ASAP" just three days into the administration. This early interaction set a pattern of direct communications aimed at content moderation, particularly on health-related topics. In March 2021, White House Digital Strategy Director Rob Flaherty emailed Facebook executives expressing frustration over vaccine-related content, demanding greater transparency on suppression efforts and criticizing the platform's handling of posts discouraging vaccination. Facebook responded by outlining policy adjustments, including outright removal of vaccine misinformation and algorithmic demotion of vaccine-skeptical content that did not violate explicit rules. Similar pressures extended to YouTube, where Flaherty emailed on April 12 inquiring how the platform could "crack down" on vaccine-skeptical content and proposing collaborations to target "borderline content." A follow-up meeting on April 21 emphasized concerns at senior levels, prompting the platform to schedule additional briefings and refine its policies. By April 2021, a White House senior advisor contacted Facebook regarding a viral meme likening the vaccines to asbestos, expressing outrage and demanding its removal during a call, while pushing for broader measures. Flaherty followed up on April 9 and 14, questioning the prominence of high-profile posts doubting vaccine efficacy and urging limits on viral dissemination across Facebook's platforms. On May 10, Facebook detailed steps to boost vaccine acceptance, but Flaherty critiqued persistent promotion of anti-vaccine pages. The U.S. Surgeon General's Office amplified these efforts in July 2021, issuing an advisory on July 15 declaring health misinformation, especially on COVID-19 and vaccines, an "urgent threat" and calling on platforms to enhance detection, removal, and demotion of such content. The Surgeon General's office met with Facebook on August 6, followed by an email requesting a two-week update on actions, which led to Facebook implementing four aggressive policy options by August 19 to combat misinformation. Publicly, President Biden stated on July 16 that platforms like Facebook were "killing people" by allowing misinformation to persist. Parallel FBI engagements with Twitter and other platforms, ongoing since at least 2018, intensified pre-2022 with regular meetings to flag potential election-related misinformation, including foreign influence operations, though specific Biden-era demands focused on domestic narratives like COVID-19 and post-2020 election claims. These interactions, documented in internal platform records, involved repeated flagging of content for review or suppression without formal legal process.

Emergence of Censorship Allegations

Allegations of federal government censorship on social media platforms first gained prominence during the COVID-19 pandemic in 2020 and 2021, as officials from agencies including the White House, Centers for Disease Control and Prevention (CDC), and Food and Drug Administration (FDA) repeatedly contacted companies such as Facebook, Twitter, and YouTube to flag and remove content deemed misinformation on topics like vaccine efficacy, mask mandates, and virus origins. On July 16, 2021, President Joe Biden publicly stated that social media platforms were "killing people" by failing to adequately suppress vaccine hesitancy content, prompting platforms to adjust algorithms and policies in response. These interactions escalated into claims of coercion when internal records later showed persistent demands; for instance, in an August 26, 2024, letter to the House Judiciary Committee, Meta CEO Mark Zuckerberg revealed that senior Biden administration officials had pressured the company for months in 2021 to censor COVID-19-related posts, including those containing humor or satire, and that Meta had demoted such content despite internal reservations about overreach. Parallel concerns emerged over election-related suppression, particularly Twitter's October 14, 2020, decision to block sharing of the New York Post's report on Hunter Biden's laptop contents, which the platform justified under its hacked materials policy amid prior FBI briefings warning of potential Russian disinformation operations—briefings that plaintiffs later argued misled companies into preemptively censoring authentic material. The scope of alleged government influence broadened with the December 2022 release of the Twitter Files, a series of internal company documents released by Elon Musk through selected journalists after he acquired the platform, which documented over 10,000 federal requests from agencies including the FBI and Department of Homeland Security (DHS) to moderate content since at least 2018. Independent journalists reviewing the files, such as Matt Taibbi, highlighted the FBI's role in weekly meetings with Twitter executives, maintenance of a dedicated portal for content flags, and payments exceeding $3.4 million to the company in 2020 for processing such requests, often targeting conservative-leaning posts on COVID-19 policies and 2020 election claims. These revelations, corroborated by Freedom of Information Act (FOIA) requests and depositions from officials such as the CDC's digital media chief Carol Crawford, who acknowledged routine communications flagging content to platforms, fueled accusations that voluntary cooperation masked unconstitutional jawboning, where public and private threats induced compliance to avoid regulatory reprisals like antitrust scrutiny or Section 230 reforms. Congressional hearings, including those by the House Judiciary Committee's Select Subcommittee on the Weaponization of the Federal Government starting in 2023, amplified the claims by subpoenaing records showing demands for action against specific users and for algorithmic changes, attributing the pattern to a systemic effort rather than isolated advisories. While platforms maintained that many actions aligned with independent policies, the documented volume of communications—far exceeding prior administrations—shifted public and legal discourse toward viewing them as coercive, setting the stage for litigation asserting First Amendment violations.

Plaintiffs' Case

State and Individual Plaintiffs

The states of Missouri and Louisiana initiated the lawsuit on May 5, 2022, led by then-Missouri Attorney General Eric Schmitt and Louisiana Attorney General Jeff Landry, alleging that federal officials coerced social media platforms to suppress speech critical of government policies, including on COVID-19 vaccines, election integrity, and climate change. Missouri claimed harms to its officials' and citizens' speech, such as platform demotions affecting state communications, while Louisiana cited instances like Facebook flagging a state representative's post on children and COVID-19 vaccines. Both states argued these actions violated the First Amendment by turning private moderation into state action through government pressure. The individual plaintiffs comprised five social media users whose content faced removal, demotion, or restrictions, which they attributed to federal influence on platforms: three physicians—Jayanta Bhattacharya (Stanford epidemiologist), Martin Kulldorff (former Harvard epidemiologist), and Aaron Kheriaty (former UC Irvine psychiatrist)—who alleged censorship of COVID-19-related views on natural immunity, vaccine efficacy, and lockdowns starting in 2021 across multiple platforms; Jim Hoft, owner of the Gateway Pundit news website, who claimed platforms suppressed election-fraud allegations, including a platform's removal of one of its stories in December 2020; and Jill Hines, a healthcare activist and co-director of Health Freedom Louisiana opposing vaccine and mask mandates, who reported deletions of Facebook groups, page restrictions, and reduced visibility for posts on vaccine trial data and risks from October 2020 through 2023. These plaintiffs sought injunctive relief to halt ongoing government-platform communications perceived as coercive, asserting traceable injuries from self-censorship fears and diminished audience reach, though the Supreme Court later ruled none demonstrated Article III standing for such broad injunctions. Their claims drew on discovery revealing emails and meetings between officials and platforms, but focused here on personal and professional impacts from alleged viewpoint discrimination. The plaintiffs asserted that federal officials, including those from the White House, FBI, CDC, and Surgeon General's office, violated the First Amendment by coercing or significantly encouraging social media platforms to suppress protected speech on topics such as COVID-19 vaccines, election integrity, and the Hunter Biden laptop story. This alleged "jawboning" transformed private moderation decisions into state action, as platforms altered policies and removed content in response to government pressure rather than independent judgment. The claims centered on a coordinated campaign involving thousands of communications, public threats, and private demands, which the district court deemed likely to succeed on the merits for establishing coercion or significant encouragement. Evidence of White House coercion included repeated emails demanding "immediate" action, such as a January 23, 2021, directive to Twitter to remove a tweet, and a February 6, 2021, request to suspend a parody account, resolved within 45 minutes. Officials like Rob Flaherty accused Facebook of "hiding the ball" on vaccine misinformation in March 2021 and proposed "stronger demotions" for anti-vaccine content, while referencing potential Section 230 reforms and antitrust scrutiny. Biden's July 16, 2021, statement that platforms were "killing people" by allowing misinformation prompted further demands, with internal documents showing a shift in post removal rates from 0.2% to 94% following pressure.
The Fifth Circuit characterized this as "unrelenting" pressure crossing into coercion through implied threats of adverse consequences. FBI actions involved flagging content for removal, with agent Elvis Chan contacting platforms one to five times monthly, resulting in content takedowns in approximately 50% of cases, including pre-2022 election misinformation about poll hours. Pre-2020 election warnings about "hack-and-leak" operations influenced Twitter's policies, contributing to the suppression of the laptop story on October 14, 2020, after FBI briefings that plaintiffs characterized as misleading, given that the agency had possessed the laptop since December 2019. The district court cited these as examples of government authority leveraging inherent power to induce compliance without explicit threats. CDC and NIAID evidence included flagging specific posts, such as 16 instances on May 6, 2021, and providing "misinformation hot topics" lists that platforms adopted for labeling and removal. Dr. Anthony Fauci and NIH Director Francis Collins coordinated to "take down" the Great Barrington Declaration in October 2020, with Collins emailing Fauci to call for a "devastating" response. The Surgeon General's July 15, 2021, advisory demanded algorithm redesigns and consequences for "super-spreaders," paired with private follow-ups expressing disappointment. CISA functioned as a "switchboard" for flagging election-related misinformation via the Election Integrity Partnership starting July 9, 2020. Lower courts found these efforts, particularly by the White House and FBI, constituted coercion, while CDC actions amounted to significant encouragement through authoritative guidance.

Lower Court Proceedings

District Court Filing and Discovery

The lawsuit, originally titled Missouri v. Biden, was filed on May 5, 2022, in the United States District Court for the Western District of Louisiana (case number 3:22-cv-01213) by the attorneys general of Missouri and Louisiana, alleging that senior Biden administration officials and federal agencies had violated the First Amendment by coercing social media platforms to suppress speech on topics including COVID-19 origins, vaccine efficacy, election integrity, and the Hunter Biden laptop story. Individual plaintiffs—epidemiologists Jayanta Bhattacharya and Martin Kulldorff, psychiatrist Aaron Kheriaty, website owner Jim Hoft, and health activist Jill Hines—who claimed personal censorship of their views, joined the suit on August 2, 2022. The district court, presided over by Judge Terry A. Doughty, permitted limited and expedited discovery focused on communications between government officials and social media companies to support the plaintiffs' motion for a preliminary injunction, overriding initial government objections that such discovery was unduly burdensome. Plaintiffs sought and obtained production of thousands of emails, internal documents, and records of meetings, revealing patterns of persistent government flagging of content for removal or demotion, including directives from White House officials to Facebook, Twitter, and YouTube executives. Discovery included motions for depositions of key figures, such as White House digital strategy director Rob Flaherty, whose communications showed repeated demands for platforms to remove or demote specific posts, which plaintiffs argued demonstrated coercion rather than mere persuasion. The government resisted broader depositions, claiming privilege and irrelevance, but the court authorized targeted written discovery and some oral examinations, yielding evidence of over 10,000 flagged items and platform responsiveness to agency complaints, which the plaintiffs cited as establishing a coordinated censorship enterprise. This phase produced a voluminous record, including FBI briefings to platforms on "hack-and-leak" operations and CDC collaborations with platforms on misinformation flagging, which Judge Doughty later deemed sufficient to infer likely First Amendment violations in his July 4, 2023, preliminary injunction ruling.

Preliminary Injunction and Findings

On July 4, 2023, District Judge Terry A. Doughty of the Western District of Louisiana granted the plaintiffs' motion for a preliminary injunction in Missouri v. Biden, finding a substantial likelihood of success on the merits of their First Amendment claims against certain federal defendants. The court determined that defendants, including officials from the White House, Surgeon General's office, Centers for Disease Control and Prevention (CDC), National Institute of Allergy and Infectious Diseases (NIAID), Federal Bureau of Investigation (FBI), Cybersecurity and Infrastructure Security Agency (CISA), and State Department, had engaged in coercion or significant encouragement of social media platforms to suppress protected speech, particularly content challenging COVID-19 policies, the Hunter Biden laptop story, and 2020 election integrity. This conduct, the court concluded, constituted viewpoint discrimination subject to heightened First Amendment scrutiny, as it targeted conservative-leaning viewpoints without adequate justification. The court's findings rested on extensive discovery evidence, including over 1,400 pages of documents, emails, and depositions revealing persistent government pressure. For instance, White House Deputy Assistant to the President Rob Flaherty emailed platforms on January 23, 2021, demanding immediate action to remove a tweet by Robert F. Kennedy, Jr., criticizing vaccines, and followed up aggressively when compliance lagged. NIAID Director Anthony Fauci and NIH Director Francis Collins coordinated a "devastating take down" of the Great Barrington Declaration advocating focused protection over lockdowns, labeling its authors as fringe epidemiologists in emails and public statements. FBI agents, including San Francisco field office agent Elvis Chan, regularly flagged content for removal, achieving a reported 50% success rate in censorship requests, while misleading platforms about the authenticity of the Hunter Biden laptop story prior to the 2020 election. CISA operated as a "switchboard" for misinformation reports, often prioritizing domestic conservative content under the guise of protecting "cognitive infrastructure." Surgeon General Vivek Murthy issued advisories and Requests for Information post-July 2021 to amplify pressure on platforms to curb COVID-19 "misinformation," coordinated with entities like Stanford's Virality Project. The court rejected defendants' claims of mere persuasion, citing the platforms' policy changes and content removals directly following these communications as evidence of effective coercion blurring public-private lines. Judge Doughty applied the standard four-factor test for preliminary injunctions, concluding that plaintiffs faced irreparable harm from ongoing suppression of their speech and associational rights, as monetary damages could not redress constitutional violations. The balance of equities and public interest favored an injunction, as the government's interest in combating misinformation did not override First Amendment protections against compelled silence. The court characterized the alleged scheme as "an almost dystopian scenario" and arguably "the most massive attack against free speech in United States' history," emphasizing empirical patterns of entwinement over isolated jawboning. The injunction prohibited the covered defendants—enumerated to include President Joseph R. Biden Jr., White House officials such as Karine Jean-Pierre and Rob Flaherty, Surgeon General Vivek Murthy, HHS Secretary Xavier Becerra, NIAID's Fauci, CDC representatives, FBI and DOJ officials such as Laura Dehmlow, CISA Director Jen Easterly, and State Department personnel—from taking actions to coerce or significantly encourage social media firms to suppress viewpoint-protected speech. Specific bans included:
  • Communicating with platforms to flag or request removal/suppression of disfavored content.
  • Convening meetings or engaging in persistent follow-ups aimed at policy changes facilitating content suppression.
  • Threatening adverse consequences, such as antitrust scrutiny or Section 230 reforms, to induce compliance.
No relief applied to agencies like the FDA, Treasury, or Election Assistance Commission, where evidence of coercion was deemed insufficient. The order took effect after a brief administrative stay, binding defendants and those acting in concert.

Fifth Circuit Appeal

Appellate Analysis of Coercion

The Fifth Circuit Court of Appeals analyzed the plaintiffs' First Amendment claims by determining whether the defendants' interactions with social media platforms amounted to coercion or significant encouragement, thereby attributing the platforms' content moderation decisions to state action. The court drew on precedents like Bantam Books, Inc. v. Sullivan (1963), which held that informal threats of enforcement can constitute coercive censorship, and applied a four-factor coercion test drawn from other circuits' jawboning decisions, including the Second Circuit's National Rifle Association of America v. Vullo: (1) a demeaning or threatening tone in communications; (2) invocation of governmental authority to demand compliance; (3) credible threats of adverse consequences, such as regulatory changes or legal reforms; and (4) the recipients' reasonable perception of compulsion. For significant encouragement, the court required evidence of the government's active, meaningful control over private decisions, beyond mere approval or acquiescence, as articulated in Blum v. Yaretsky (1982). This framework distinguished permissible government persuasion—such as sharing information or public criticism—from unconstitutional jawboning that effectively compels speech suppression. The panel found substantial evidence of coercion by White House officials, including repeated demands that platforms like Twitter and Facebook remove specific content "ASAP," accusations that failure to act was "killing people," and implicit threats to overhaul Section 230 immunity if moderation proved insufficient. Platforms' internal records showed they viewed these interactions as pressuring algorithmic and policy changes, with one executive noting White House frustration led to fears of "retaliation." Similarly, the Surgeon General's office engaged in coercive communications, issuing public calls for platforms to report vaccine "hesitancy" content and privately urging removals under threat of legislative reform, which the court deemed leveraged the office's authority to compel action. FBI communications were also deemed coercive, involving weekly meetings and content flagging that resulted in approximately 50% removal rates for election-related posts, alongside suggestions to adjust platform policies on "malinformation"—true but potentially harmful speech—backed by the agency's law enforcement authority. Platforms perceived these as directives, with internal documents reflecting compliance to avoid antagonizing federal law enforcement. In contrast, the CDC's actions did not rise to coercion but constituted significant encouragement through authoritative guidance on misinformation, such as lists of disfavored narratives on vaccines and masks, which platforms adopted to inform demotions and removals, effectively ceding control over moderation criteria. The court rejected arguments that these were isolated or voluntary engagements, noting a persistent "system" of pressure including follow-ups on non-compliance and alignment with broader campaigns against "disinformation." No such findings applied to other defendants like the Cybersecurity and Infrastructure Security Agency (CISA) or National Institute of Allergy and Infectious Diseases (NIAID), where evidence showed mere facilitation or criticism without compulsion. Overall, the panel concluded that plaintiffs demonstrated a substantial likelihood of success on the merits against the White House, Surgeon General, FBI, and CDC, as their conduct likely violated the First Amendment by transforming private moderation into government-directed censorship of protected viewpoints on elections, COVID-19, and other topics.
The court affirmed the preliminary injunction but narrowed its scope to prospectively bar these officials from similar coercive or encouraging practices.

Reaffirmation of Injunction

On September 8, 2023, a three-judge panel of the United States Court of Appeals for the Fifth Circuit, in Missouri v. Biden, affirmed in part the district court's preliminary injunction, finding that plaintiffs were likely to succeed on the merits of their First Amendment claims against specified federal officials and agencies. The court upheld the injunction specifically as to the White House, Surgeon General Vivek Murthy, the Centers for Disease Control and Prevention (CDC), and the Federal Bureau of Investigation (FBI)—and, in a modified opinion issued on panel rehearing in October 2023, extended it to the Cybersecurity and Infrastructure Security Agency (CISA)—determining that these entities had engaged in conduct amounting to coercion or significant encouragement of social media platforms to suppress protected speech. The panel distinguished permissible government persuasion from unconstitutional jawboning, concluding that the officials' actions exceeded mere requests by involving implicit threats of adverse consequences, such as regulatory reforms to Section 230 liability protections or antitrust enforcement, alongside persistent demands for content removal and policy changes. For instance, the court cited communications pressuring platforms to alter algorithms and moderation practices, including demands for "fundamental reforms" if platforms failed to comply, which platforms acknowledged influenced their decisions to censor viewpoints on topics such as COVID-19 origins, election integrity, and gender-transitioning minors. FBI contacts with platforms, including preemptive briefings and repeated flagging of accounts, were deemed coercive due to the agency's authority and the platforms' subsequent suppression of flagged content. Regarding standing, the Fifth Circuit held that individual plaintiffs demonstrated traceability and redressability through evidence of past suppression induced by government-platform coordination, creating a credible threat of future censorship absent an injunction. State plaintiffs' injuries were similarly linked to harms to their residents' speech rights and the states' interests in protecting those rights. The panel reversed the injunction as to other defendants, including the National Institute of Allergy and Infectious Diseases (NIAID) and the State Department, finding insufficient evidence of coercion by those entities. In modifying the injunction, the panel narrowed its scope to prohibit only coercing or significantly encouraging social media companies to suppress the viewpoints of their users, vacating broader prohibitions against any communication with platforms while preserving the core relief against the affirmed defendants. This adjustment aimed to target unconstitutional conduct without unduly restricting legitimate government dialogue, emphasizing that the First Amendment bars government actions converting private moderation into state-compelled censorship. The decision reinforced prior precedents like Bantam Books, Inc. v. Sullivan (1963), where informal censorship pressures were deemed unconstitutional, but required a showing of threats or control beyond isolated persuasion.

Supreme Court Review

Oral Arguments and Key Issues

Oral arguments in Murthy v. Missouri were held before the Supreme Court on March 18, 2024. Louisiana Solicitor General J. Benjamin Aguiñaga argued on behalf of the plaintiffs, asserting that federal officials had engaged in a systematic campaign of pressure against social media platforms, compelling them to suppress disfavored speech on topics including COVID-19 origins, vaccine efficacy, and election integrity. He cited a 20,000-page evidentiary record, including emails and depositions, as demonstrating "arguably the most massive attack against free speech in American history," per the district court's findings, and argued that standards from cases like Norwood v. Harrison applied to prohibit such government encouragement of censorship. Solicitor General Elizabeth Prelogar represented the government, maintaining that all the communications constituted permissible persuasion rather than coercion, framed within the urgent context of combating misinformation during a public health crisis and threats to election security. She contended that platforms retained independent judgment, as evidenced by instances where they rejected or partially complied with requests, and invoked Bantam Books, Inc. v. Sullivan as the governing test for coercion, which requires implicit threats rather than mere aggressive advocacy. Prelogar emphasized that no tangible harms were traceable solely to government actions, noting platforms' pre-existing moderation policies and voluntary changes post-contact. A central issue debated was Article III standing, particularly traceability of plaintiffs' injuries—such as account restrictions or content removals—to government conduct rather than platforms' autonomous decisions. Justice Kagan pressed Aguiñaga on causation gaps, referencing a May 2021 email where a platform acted independently two months after a query and asking whether "a lot of things can happen in two months" severed the link. Justice Sotomayor expressed doubt about the evidence's reliability, stating, "I don't know what to make of all this," while Justice Alito defended the lower courts' factual findings on traceability, arguing against reversal of determinations endorsed by both district and appellate levels. The distinction between coercion and permissible jawboning emerged as another key issue, with justices probing the threshold for unconstitutional pressure. Justice Alito highlighted the government's "partnership" with platforms, including regular meetings and suggested rules, as well as pointed language like Vivek Murthy's public statements implying accountability for failing to remove content, questioning whether such "hectoring" crossed into coercion. Prelogar conceded that certain inducements could coerce but denied that references to Section 230 reforms qualified as threats, attributing the rhetoric to pandemic exigencies rather than implicit menaces. Justice Gorsuch inquired about standards broader than coercion, while Justice Barrett explored whether platforms fully ceding moderation decisions to the government would constitute state action, eliciting government agreement that it could. Justices also touched on emergency powers and viewpoint discrimination, with Justice Jackson asking whether the government could press for suppression of harmful speech during crises; Aguiñaga affirmed that the First Amendment still applies, rejecting exceptions for viewpoint-based suppression. Justice Kavanaugh analogized routine government-media contacts to non-coercive interactions, underscoring platforms' editorial discretion under the First Amendment.
Overall, conservative justices like Alito, Thomas, and Gorsuch appeared more receptive to merits concerns over aggressive tactics, while the majority leaned toward standing barriers, foreshadowing the Court's ultimate 6–3 ruling vacating the injunction for lack of traceability and redressability.

Majority Decision on Standing

In Murthy v. Missouri, the Supreme Court held in a 6–3 decision, authored by Justice Amy Coney Barrett, that neither the individual nor the state plaintiffs established Article III standing to seek injunctive relief against the federal defendants. The majority emphasized that standing requires a concrete and particularized injury that is fairly traceable to the defendant's conduct and likely to be redressed by a favorable judicial decision, rejecting the Fifth Circuit's broader approach of assessing traceability at a "high level of generality." For the individual plaintiffs—including Jill Hines, Jim Hoft, and several medical professionals—the Court acknowledged past instances of restrictions on their content, such as Facebook's demotion of Hines's posts about vaccines. However, the majority found no sufficient causation, as the record demonstrated that platforms like Facebook had independently developed and applied content-moderation policies prior to significant government communications, including flagging Hines's content before White House involvement in 2021. Platforms continued such moderation even after government contacts ceased, underscoring their autonomous decision-making. On redressability, the Court concluded that enjoining the defendants would not likely prevent future platform suppression, given the platforms' ongoing independent policies and the absence of evidence linking specific government actions to ongoing harms at the time of filing in August 2022. The state plaintiffs—Missouri and Louisiana—similarly failed to demonstrate standing, as their claimed injuries involved suppression of residents' and officials' speech, such as a Louisiana representative's post on election integrity. The Court determined that states lack standing to sue in parens patriae for generalized harms to individual citizens, absent a quasi-sovereign interest, such as protection against harms to interstate commerce, which was not shown here. Traceability was deficient due to the platforms' pre-existing practices, and redressability faltered for the same reasons as with the individuals: an injunction against federal officials would not compel platforms to alter their content decisions. The majority reasoned that the platforms' content-moderation decisions remained their own, rendering any assumption of government-induced future suppression mere conjecture. This ruling vacated the Fifth Circuit's judgment without addressing the merits of the plaintiffs' First Amendment claims.

Dissenting Opinions

Justice Samuel Alito filed a dissenting opinion, in which Justices Clarence Thomas and Neil Gorsuch joined. Alito contended that the majority erred in dismissing the case for lack of standing, particularly as to Jill Hines, whose injuries from Facebook's suppression of her COVID-19-related speech satisfied Article III requirements of concrete injury, traceability to government actions, and redressability by injunction. He criticized the majority for imposing an unduly stringent standard and shirking the judicial responsibility to address what lower courts had found to be likely unconstitutional coercion, thereby permitting a coercive model of government influence over platforms to persist unchecked. Alito maintained that extensive evidence from discovery demonstrated the federal government's coercive pressure on social media companies, distinguishing it from permissible persuasion. White House officials, including Rob Flaherty and Andrew Slavitt, sent emails expressing frustration and implicit threats, such as references to content "killing people" and to potential antitrust action and reform of Section 230 of the Communications Decency Act. Surgeon General Vivek Murthy issued a July 2021 advisory urging platforms to combat misinformation more aggressively, followed by public statements demanding immediate action. Platforms responded by altering moderation policies, including Facebook's demotion of vaccine-hesitant posts in 2021 and commitments to "gain [the government's] trust" through increased cooperation. Drawing on precedents like Bantam Books, Inc. v. Sullivan (372 U.S. 58 (1963)) and National Rifle Association of America v. Vullo (602 U.S. 175 (2024)), Alito emphasized that coercion arises not only from overt threats but also from government officials' use of authority to intimidate, leading platforms to suppress disfavored views on topics like election integrity and COVID-19 policies. He cited Hines's specific harms, including the removal of her Facebook group and a 90-day posting restriction after sharing articles questioning official narratives, as directly linked to government-demanded changes in platform algorithms and enforcement. Such actions, Alito argued, presumptively violate the First Amendment by outsourcing censorship to private entities, undermining democratic discourse and scientific debate. On the merits, Alito concluded that plaintiffs were likely to succeed, as the government's persistent, outcome-altering demands constituted unconstitutional jawboning rather than benign advocacy. He warned that affirming the preliminary injunction would not silence government speech but would prevent indirect suppression of protected expression, preserving platforms' editorial independence. The dissent urged reversal of the majority's standing dismissal to confront this "disturbing" threat to free speech.

Evidence of Government Influence

Specific Communications and Depositions

In the district court proceedings, numerous emails and messages from Biden administration officials to social media platforms were documented, illustrating persistent demands for content removal or suppression, particularly regarding COVID-19 vaccines, election integrity, and the Hunter Biden laptop story. For instance, on January 23, 2021, White House official Clarke Humphrey emailed Twitter requesting the removal of an anti-vaccine tweet by Robert F. Kennedy Jr., copying Rob Flaherty, then Director of Digital Strategy; Twitter complied by labeling the post. On February 6, 2021, Flaherty emailed Twitter to remove a parody account mimicking Finnegan Biden, which was resolved within 45 minutes. That same week, on February 7, Twitter proposed a "Partner Support Portal" to Flaherty for expediting White House censorship requests amid frequent demands. Further communications escalated pressure on vaccine-related content. On March 15, 2021, Flaherty emailed Facebook accusing it of being a "top driver" of vaccine hesitancy based on a Washington Post article and demanding immediate policy changes, followed by a suggestion for a meeting to discuss. CDC official Carol Crawford's deposition confirmed regular flagging of misinformation to platforms, with the understanding that such reports would prompt them to prioritize "credible health information," including weekly meetings dedicated to this purpose. On July 16, 2021, after Biden publicly stated that platforms were "killing people" by allowing misinformation, Twitter suspended journalist Alex Berenson for contradicting official COVID narratives. FBI communications focused on election-related suppression. FBI agent Elvis Chan's deposition revealed routine sharing of "tactical information" with platforms via encrypted channels like Signal, expecting content removal with approximately 50% success in 2020, including warnings about potential "hack-and-leak" operations that influenced decisions to downrank the Hunter Biden laptop story as suspected Russian disinformation. Twitter's then-Head of Trust and Safety, Yoel Roth, declared that FBI briefings shaped platform policies, leading to preemptive moderation of such stories ahead of the 2020 election. Surgeon General Vivek Murthy's office also engaged directly; advisor Eric Waldo's deposition acknowledged using the position's "bully pulpit" for public and private pressure to curb health misinformation, including preemptive calls to platforms before Murthy's July 15, 2021, advisory labeling certain accounts as "superspreaders" warranting action. CISA's Brian Scully testified in deposition that the agency operated a "switchboarding" system routing flagged disinformation to platforms, with increased biweekly industry meetings by October 2022 and interns whose work overlapped with external groups such as the Stanford Internet Observatory. NIAID Director Anthony Fauci's deposition denied personal involvement in suppressing lab-leak discussions but acknowledged staff flagging of content and coordination with platforms on public health messaging, while confirming public efforts to discredit alternative COVID origin theories. These interactions, detailed in the district court's findings, often combined private flagging with public threats—such as altering Section 230 protections or antitrust scrutiny—prompting platform policy shifts, though platforms occasionally resisted milder requests while yielding to repeated or escalated ones.

Debunking Claims of Mere Persuasion

The government's defense in Murthy v. Missouri portrayed its interactions with social media platforms as lawful "jawboning"—informal persuasion protected under the First Amendment, akin to public advocacy without threats of reprisal. However, evidence adduced in the district and appellate courts demonstrated a pattern of communications that exceeded advisory suggestions, incorporating demands for immediate compliance, repeated follow-ups, and implicit or explicit threats of regulatory overhaul, which platforms interpreted and acted upon under pressure. Specific White House emails exemplified this coercive dynamic. For instance, officials issued directives such as requiring the removal of flagged posts "ASAP" and ordering account suspensions "immediately," while expressing frustration when platforms failed to act swiftly on prior flaggings. The White House further demanded detailed data on moderation policies at least 12 times, framing non-responsiveness as unacceptable and tying it to broader failures. The Fifth Circuit characterized these as "unrelenting pressure" that compelled platform changes, rather than optional dialogue, noting platforms' internal acknowledgments of aligning policies with White House concerns. Threats of adverse consequences amplified the coercive element. Officials referenced "fundamental reforms" to Section 230 of the Communications Decency Act and "increased enforcement actions" if platforms did not enhance moderation, implying an "unspoken 'or else'" backed by executive authority. President Biden publicly accused platforms of "killing people" by hosting certain content, followed by announcements reviewing their legal immunities, while aides linked inaction to risks of "insurrection"-like unrest. The Fifth Circuit deemed these "express threats" and "implied references to adverse consequences," distinguishing them from mere persuasion by evidencing platforms' fear-driven compliance, such as preemptive content removals in response to FBI warnings about election-related hack-and-leak operations. This evidence counters assertions of benign influence, as platforms' depositions and internal records revealed viewpoint-specific suppressions—targeting conservative-leaning posts on elections, COVID-19, and school policies—that correlated directly with government escalations, not independent moderation evolution. While the Supreme Court vacated the injunction on standing grounds without adjudicating the merits, the lower courts' merits analysis, grounded in these documented interactions, concluded that the conduct involved coercive pressure over protected speech, undermining claims of persuasion alone.

Criticisms and Viewpoints

Government and Supporter Perspectives

The U.S. government maintained that its officials' interactions with social media platforms, including those of the White House, Surgeon General's office, CDC, and FBI, involved permissible persuasion and advisory communications rather than coercion. These discussions, spanning 2020 to 2022, focused on flagging content deemed misinformation related to vaccines, election security, and foreign influence operations, with officials providing data-driven feedback and requesting assurances on platform policies. The Department of Justice argued in its briefs that such engagements constituted traditional use of the "bully pulpit" to inform private actors, without issuing threats or demands that overrode platforms' editorial discretion. Government representatives emphasized platforms' independent decision-making as evidence against coercion claims, noting instances where companies rejected specific requests, consulted external experts, or expanded policies prior to federal input—for example, Facebook's February 2021 policy updates predating intensified White House communications. They further pointed to platforms' continued content restrictions in 2023, even as government contacts diminished, attributing moderation to companies' own incentives amid regulatory scrutiny such as proposed Section 230 reforms. During oral arguments on March 18, 2024, Solicitor General Elizabeth Prelogar defended these practices as essential for addressing public health threats and security crises, arguing that general references to potential antitrust actions were not targeted threats but broad policy discussions. Supporters of the government's approach, including organizations focused on public health and election integrity, asserted that such communications were vital for mitigating tangible harms, such as misinformation contributing to excess deaths during the pandemic and foreign election interference. The Knight First Amendment Institute described the interactions as necessary government speech to counter systemic misinformation on platforms with vast reach, warning that broad injunctions would impair officials' ability to collaborate on lawful content prioritization. Other advocates echoed that the case underscored the government's factual role in urging voluntary platform actions, without endorsing viewpoint-based suppression, and noted that the principle applies across administrations facing similar challenges.

Conservative and Free Speech Critiques

Conservative legal scholars and free speech advocates argued that the Court's 6–3 decision on June 26, 2024, improperly resolved Murthy v. Missouri on Article III standing grounds, evading a substantive First Amendment analysis of government coercion against social media platforms. They contended this procedural dismissal left unaddressed evidence of federal officials' persistent pressure campaigns, which allegedly suppressed speech on topics including COVID-19 origins, vaccine side effects, and election integrity, often targeting conservative-leaning content. In his dissent, joined by Justices Thomas and Gorsuch, Justice Alito emphasized that the record demonstrated coercion rather than voluntary persuasion, pointing to over 10,000 pages of communications in which officials repeatedly demanded platforms remove or demote posts, framed relationships as "partnerships," and threatened regulatory reforms to Section 230 immunity. Alito highlighted specific instances, such as White House statements that platforms were "killing people" by hosting misinformation, followed by content suppression, and FBI warnings about hack-and-leak operations that prompted preemptive moderation of the Hunter Biden laptop story. He argued these actions met traditional coercion tests—repeated demands, invocation of governmental authority, and tangible policy changes—contrasting with the majority's view of attenuated causation. Free speech organizations, including the Foundation for Individual Rights and Expression (FIRE) and the National Coalition Against Censorship, supported the plaintiffs via amicus briefs, warning that government "jawboning" erodes platform independence and chills protected expression across ideologies, not merely partisan lines. Critics expressed concern that the ruling incentivizes future administrations to intensify off-the-record pressures without judicial scrutiny, potentially normalizing viewpoint discrimination under the guise of combating "misinformation." Commentators described the decision as a missed chance to delineate permissible government advocacy from unconstitutional influence, noting platforms' documented responsiveness to federal entreaties despite internal resistance. These viewpoints underscored fears of entrenched executive overreach in digital discourse, absent clearer doctrinal boundaries.

Aftermath and Implications

Case Remand and Ongoing Litigation

Following the Supreme Court's 6–3 decision on June 26, 2024, which reversed the Fifth Circuit's affirmation of standing and remanded the case for proceedings consistent with its holding that neither the states nor the individual plaintiffs demonstrated redressable injury under Article III, the litigation returned to the United States District Court for the Western District of Louisiana. The Court's analysis emphasized that social media platforms had independently altered their moderation policies post-2021, undermining any causal link between alleged government coercion and ongoing harms, thus failing the redressability prong for injunctive relief. On remand, the district court, bound by the jurisdictional standing ruling, dismissed the plaintiffs' claims without reaching the merits, as the Court's determination precluded preliminary or permanent injunctive relief against the federal defendants. This effectively concluded the core litigation initiated by Missouri, Louisiana, and the individual plaintiffs, with no viable path forward under the established record. As of October 2025, no active proceedings remain in Murthy v. Missouri itself, though Missouri's attorney general has indicated intent to pursue additional discovery related to pre-decision communications, potentially for evidentiary purposes in future litigation or legislative challenges; however, such efforts face the same standing barriers absent new factual developments. Related suits alleging jawboning, such as Kennedy v. Biden in the Fifth Circuit (decided November 4, 2024, partially remanding on alternative standing theories), continue in parallel but do not revive the original consolidated action.

Policy and Legislative Responses

Following the Supreme Court's June 26, 2024, decision in Murthy v. Missouri, which dismissed the case for lack of standing without addressing the merits of government influence on social media platforms, lawmakers introduced legislation aimed at enabling future challenges to such practices. On July 31, 2024, U.S. Representative Harriet Hageman (R-WY) introduced the Standing to Challenge Government Censorship Act (H.R. 9236), which would address the Article III standing barriers identified in Murthy by allowing states, state officials, and individuals to seek injunctive relief against federal officials or agencies for allegedly coercing private entities to suppress protected speech. The bill explicitly references the Murthy ruling's barrier to redress and seeks to codify a cause of action for unlawful viewpoint discrimination effected through government pressure on platforms. Advocacy organizations also pushed for related reforms to promote transparency in government-tech interactions. The Foundation for Individual Rights and Expression (FIRE) urged Congress to enact its proposed SMART Act, which would mandate public disclosure of federal requests to companies for content removal or suppression, arguing that Murthy's outcome heightened risks of informal censorship without accountability. As of October 2025, neither the Hageman bill nor the SMART Act has advanced beyond introduction, reflecting partisan divides on the scope of permissible government communication with private platforms. No significant executive branch policy shifts were announced in direct response to the ruling; the Biden administration maintained that its communications with social media firms constituted lawful persuasion rather than coercion, consistent with pre-decision practices. State attorneys general, including those from Missouri and Louisiana who were plaintiffs, continued litigation on remand, focusing on establishing traceability of harms to specific government actions after the post-2022 changes in platform policies. These efforts underscore ongoing debates over balancing the public interest in combating misinformation against First Amendment protections, with critics of the decision warning of unchecked administrative influence absent legislative intervention.

References

  1. [1]
    [PDF] 23-411 Murthy v. Missouri (06/26/2024) - Supreme Court
    Jun 26, 2024 · MURTHY v. MISSOURI. Opinion of the Court mendations for groups with a history of COVID–19 or vac- cine misinformation.” 54 Record 16,870 ...
  2. [2]
    [PDF] the censorship-industrial complex: how top biden white house
    May 1, 2024 · Contrary to their claims of wanting to combat alleged so-called “misinformation” and foreign disinformation, the Biden Administration pressured ...
  3. [3]
    [PDF] The White House Covid Censorship Machine - Congress.gov
    Mar 28, 2023 · They included “removing vaccine misinformation” and “reducing the virality of content discouraging vaccines that does not contain actionable ...
  4. [4]
    [PDF] Confronting Health Misinformation - HHS.gov
    Jul 14, 2021 · I am urging all Americans to help slow the spread of health misinformation during the COVID-19 pandemic and beyond. Health misinformation is ...
  5. [5]
    [PDF] What's actually in the Twitter Files? Capitalisn't Research Brief
    Matt Taibbi, December 16, 2022: Twitter, the FBI Subsidiary. Twitter's ... Twitter's Covid-19 misinformation policy. Yoel Roth,. Twitter's former head ...<|control11|><|separator|>
  6. [6]
    [PDF] FBI And DHS Directors Mislead Congress About Censorship
    Nov 2, 2023 · White House officials also demanded that social media companies censor accurate information about the side effects of the Covid vaccine.
  7. [7]
    Did Biden's White House pressure Mark Zuckerberg to censor ...
    Aug 27, 2024 · In July 2021, Biden said social media platforms such as Meta-owned Facebook are “killing people” with COVID-19 misinformation. Though he ...
  8. [8]
    Mark Zuckerberg says Meta was 'pressured' by Biden administration ...
    Aug 27, 2024 · President Biden said in July of 2021 that social media platforms are “killing people” with misinformation surrounding the pandemic. Though Biden ...
  9. [9]
    Zuckerberg says Biden administration pressured Meta to censor ...
    Aug 27, 2024 · ... White House requests to take down misinformation about the coronavirus and vaccines ... "In 2021, senior officials from the Biden ...<|control11|><|separator|>
  10. [10]
    The Cover Up: Big Tech, the Swamp, and Mainstream Media ...
    Feb 8, 2023 · In October 2020, Twitter censored the New York Post's story about the Biden family's business schemes based on the contents of Hunter Biden's ...
  11. [11]
    Yes, you should be worried about the FBI's relationship with Twitter
    Dec 23, 2022 · The Twitter Files show the FBI, DHS, and other federal agencies have had regular contacts with Twitter since as early as 2018 concerning alleged ...
  12. [12]
    [PDF] Written Statement Matt Taibbi “Hearing on the Weaponization of the ...
    Mar 9, 2023 · Following the trail of communications between Twitter and the federal government across tens of thousands of emails led to a series of ...
  13. [13]
    H. Rept. 117-648 - Congress.gov
    ... misinformation or disinformation on Hunter Biden or Dr. Anthony ... misinformation or disinformation concerning certain individual social media users.
  14. [14]
    Missouri v. Biden, 3:22-cv-01213 – CourtListener.com
    Citation: Missouri v. Biden, 3:22-cv-01213, (W.D. La.) Date Filed: May 5, 2022. Date of Last Known Filing: Oct. 20, 2025.
  15. [15]
    [PDF] Case 3:22-cv-01213-TAD-KDM Document 293 Filed 07/04/23 Page ...
    Jul 4, 2023 · In this case, Plaintiffs allege that Defendants suppressed conservative-leaning free speech, such as: (1) suppressing the Hunter Biden laptop ...
  16. [16]
    Missouri, et al. v. Biden, et al. (f/k/a Murthy, et al. v. Missouri, et al.)
    Jayanta Bhattacharya and Martin Kulldorff, as well as Dr. Aaron Kheriaty and Jill Hines. Social media platforms, acting at the federal government's behest, ...
  17. [17]
    [PDF] United States Court of Appeals for the Fifth Circuit
    Sep 8, 2023 · The district court agreed with the Plaintiffs and granted preliminary injunctive relief. In reaching that decision, it reviewed the conduct of ...Missing: "court | Show results with:"court
  18. [18]
    Case: Missouri v. Biden - Civil Rights Litigation Clearinghouse
    Case: Missouri v. Biden. 3:22-cv-01213 | U.S. District Court for the Western District of Louisiana. Filed Date: May 5, 2022.
  19. [19]
    [PDF] Case 3:22-cv-01213-TAD-KDM Document 86 Filed 10/14/22 Page 1 ...
    Plaintiffs cannot demonstrate that still more discovery—particularly burdensome depositions—is warranted at this stage of the case. See ECF No. 34. Through ...
  20. [20]
    [PDF] Case 3:22-cv-01213-TAD-KDM Document 391 Filed 09/17/24 Page ...
    Sep 17, 2024 · As a result of this Court's award of limited, expedited preliminary injunction-related discovery, Plaintiffs uncovered a coordinated censorship ...
  21. [21]
    [PDF] hearing before the united states house of representatives
    Mar 30, 2023 · Last July, the Plaintiffs in Louisiana and Missouri v. Biden received limited discovery of communications about misinformation and censorship ...<|separator|>
  22. [22]
    [PDF] Case 3:22-cv-01213-TAD-KDM Document 137 Filed 12/01/22 Page ...
    The Court Should Authorize Written Discovery from Rob Flaherty to Ascertain. Whether a Deposition of Mr. Flaherty is Needed. The Fifth Circuit noted that “with ...Missing: phase | Show results with:phase
  23. [23]
    [PDF] Case 3:22-cv-01213-TAD-KDM Document 399 Filed 10/29/24 Page ...
    Oct 29, 2024 · It has already established that more than 10 depositions will be necessary to meet its burden of proof[.]"). And what Plaintiffs did manage to ...
  24. [24]
    [PDF] 23-30445-CV0.pdf - Fifth Circuit Court of Appeals
    Sep 8, 2023 · subtle asks accompanied by a “system” of pressure (e.g., threats and follow- ups) are clearly coercive. Still, it is rare that coercion is so ...
  25. [25]
    Fifth Circuit Sides with Missouri Attorney General Andrew Bailey ...
    Sep 8, 2023 · The United States Fifth Circuit Court of Appeals has upheld his free speech case, Missouri v. Biden, by enjoining the White House, Surgeon General, FBI, and ...Missing: reaffirmation | Show results with:reaffirmation
  26. [26]
  27. [27]
    Summary of Murthy V. Missouri Oral Arguments | Cato at Liberty Blog
    Mar 19, 2024 · The Supreme Court heard oral arguments on the Murthy v. Missouri case, which looks at the issue of when government communications with social media companies ...
  28. [28]
    Supreme Court Justices Question Standing, Evidence in Murthy v ...
    Mar 20, 2024 · On Monday, March 18, the US Supreme Court heard the oral argument for Murthy v. Missouri. Gabby Miller and Ben Lennett assess the hearing.
  29. [29]
    Murthy v. Missouri | Oyez
    Mar 18, 2024 · The plaintiffs argue that the defendants used public statements and threats of regulatory action, such as reforming Section 230 of the Communications Decency ...
  30. [30]
    Murthy v. Missouri Supreme Court Decision Signals End of Baseless ...
    Jun 26, 2024 · The US Supreme Court announced its decision in Murthy v. Missouri, a lawsuit arguing that the Biden administration improperly coordinated with social media ...
  31. [31]
    Knight Institute Comments on Supreme Court Ruling in Murthy v ...
    Jun 26, 2024 · WASHINGTON—The U.S. Supreme Court today reversed the Fifth Circuit's ruling in Murthy v. ... coercion and persuasion; articulate the First ...
  32. [32]
    The Supreme Court Was Right on Murthy v. Missouri
    Jun 28, 2024 · Murthy v. Missouri is a complicated case that highlights serious questions about balancing First Amendment rights with a collective need for ...
  33. [33]
    Will the Supreme Court's Decision in Murthy v. Missouri Lead to ...
    Jul 11, 2024 · Both cases presented similar allegations of censorship by stealth, which occurs when a private entity limits its customers' or members ...
  34. [34]
    Murthy v. Missouri (2024) | The First Amendment Encyclopedia
    Jun 26, 2024 · The 6-3 majority opinion in Murthy v. Missouri, authored by Amy Coney Barrett, concluded that the lower courts had erred in extending standing to the parties.
  35. [35]
    Reflections on Murthy v. Missouri: Opportunities Missed, Lessons ...
    Jul 2, 2024 · The Supreme Court dismissed claims that the government had unlawfully coerced social media companies into removing protected speech; ...
  36. [36]
    Unwavering Advocates: NCAC, FIRE And Other Free Speech ...
    Mar 20, 2024 · The Murthy case transcends politics and partisanship, as censorship knows no party allegiance. NCAC's involvement underscores a nonpartisan ...
  37. [37]
    [PDF] Who Can Stand? Murthy v. Missouri and Social Media Censorship
    Jan 16, 2025 · the evidence in Murthy came to light because of discovery efforts and a congressional subpoena,117 forming a “voluminous record.”118 The ...
  38. [38]
    [PDF] Government and Social Media Corruption After Murthy v. Missouri
    Jan 6, 2025 · The Supreme Court in Murthy v. Missouri in 2024 dismissed a suit by multiple plaintiffs alleging that the Biden Administration's efforts to.
  39. [39]
    SCOTUS Clears the Way for Attorney General Bailey to Obtain More ...
    Jun 26, 2024 · The record is clear: the deep state pressured and coerced social media companies to take down truthful speech simply because it was conservative ...Missing: findings | Show results with:findings
  40. [40]
    Kennedy v. Biden, No. 24-30252 (5th Cir. 2024) - Justia Law
    Nov 4, 2024 · The district court concluded that the Supreme Court's Missouri decision foreclosed Sampognaro's listener-standing theory, but that Kennedy ...
  41. [41]
    Hageman Introduces the Standing to Challenge Government ...
    Jul 31, 2024 · Importantly, the bill is drafted to resolve the standing question in Murthy v Missouri which prevented the states and citizens from receiving ...
  42. [42]
    Government transparency is critical when it comes to fighting ... - FIRE
    Jul 11, 2024 · Congress should pass FIRE's SMART Act to rein in informal government censorship. In its Murthy v. Missouri ruling last week, the Supreme Court ...
  43. [43]
    Free Speech Online at Risk? Implications of the Murthy v. Missouri ...
    Sep 17, 2024 · ... jawboning operation to limit unfavorable speech online. Zuckerberg's ... Murthy v. Missouri. Originally filed as Missouri v. Biden ...