Free Speech or Free Rein? How Murthy v. Missouri Became a Soapbox for Misinformation Advocacy

The U.S. Supreme Court case Murthy v. Missouri has significant national security implications and could hamstring private and public efforts to combat disinformation.

A person holds a smartphone in Cupertino, California, September 2022. (Getty/Justin Sullivan)

On March 18, 2024, the U.S. Supreme Court will hear oral arguments in Murthy v. Missouri, originally filed as Missouri v. Biden. The case is emblematic of broader debates over the role of government in regulating online platforms and over the protections the First Amendment affords to speech online. The plaintiffs, the states of Missouri and Louisiana along with five social media users, allege that governmental communication with social media platforms regarding concerns about COVID-19 misinformation and election interference amounted to coercion, violating the First Amendment.

The First Amendment safeguards free speech by prohibiting government censorship and undue influence on individuals and private sector entities, including social media companies. Concerns arise when government actions, such as threats or pressure, coerce social media companies into removing or censoring content. However, far from being coerced into censorship, social media companies have actively sought collaboration with government entities and have organized themselves to share critical information to combat foreign interference in U.S. elections and address misinformation. This proactive stance signals a clear demand for information sharing and underscores a collaborative effort to navigate the complexities of moderating content that could harm public welfare.


Murthy v. Missouri makes its way to the Supreme Court after an extreme decision by Judge Terry Doughty in the U.S. District Court for the Western District of Louisiana. His ruling initially barred a wide range of federal government entities from engaging in any form of communication with social media platforms, particularly on issues such as the COVID-19 pandemic, election misinformation, and foreign malign influence threats, meaning any action taken by or at the direction of foreign actors or their proxies. The U.S. Court of Appeals for the 5th Circuit affirmed Doughty’s injunction in part, narrowing its application to the White House, the FBI, the Centers for Disease Control and Prevention (CDC), the Office of the Surgeon General, and the Cybersecurity and Infrastructure Security Agency (CISA). The Supreme Court has paused the injunction until the justices can hear arguments on the matter.


The underlying case was filed before Doughty, a Trump appointee who was assigned 90 percent of the cases in his division and who has been a favorite jurist of extreme right-wing interests seeking to strike down progressive policies, such as COVID-19 mandates for Head Start child care workers and the Biden administration’s pause on oil and gas leases on federal land; the latter ruling was reversed by the 5th Circuit. In Murthy, the plaintiffs alleged that this governmental communication with social media platforms violated the First Amendment. Although the government never threatened any official action, Doughty blocked all communication and information sharing between the platforms and federal agencies, labeling the government’s efforts to stem mis- and disinformation an “Orwellian ‘Ministry of Truth.’”

Though the injunction is on hold, it has already created confusion on both the government’s and companies’ sides regarding content moderation collaboration. For example, the U.S. Department of State canceled its monthly meeting with Facebook officials to discuss 2024 election preparations and hacking threats, citing the need for further guidance. The chair of the Senate Select Committee on Intelligence, Sen. Mark Warner (D-VA), highlighted a number of dangers this injunction could pose from a national security standpoint in the amicus brief he filed with the Supreme Court and in his recent interview with the Center for American Progress. Specifically, Warner cautioned that government-platform collaboration is a key tool in combating foreign malign influence operations—whether related to elections or other issues—that would be compromised if the court upholds the bar on communication. This is especially concerning as the swift rise of artificial intelligence is posing an “unprecedented threat” of mis- and disinformation to American democracy in this presidential election year.

The who, what, and how of Murthy v. Missouri

Who:

The state attorneys general of Missouri and Louisiana and five social media users filed the lawsuit. Two of the plaintiffs are notable for their apparent broader roles in disseminating mis- and disinformation through social media. Jim Hoft is the founder of the Gateway Pundit, a far-right website known for spreading debunked conspiracy theories that Facebook flagged in 2019 as a “common misinfo offender” and that is at the center of multiple lawsuits for allegedly engaging in “the deliberate spread of dangerous and inflammatory political disinformation designed to sow distrust in democratic institutions.” Another plaintiff, Jill Hines, is an anti-vaccine advocate who was featured in the debunked film “Vaxxed” and is part of the anti-vaccine movement that social media platforms have been combating for years. Notably, Hoft and Hines have filed a separate lawsuit against Stanford University and internet researchers to prevent them from communicating with the government and social media companies about misinformation on the internet.

What:

As noted above, the 5th Circuit revised the district court’s injunction but kept in place significant prohibitions on communication between social media companies and the White House, the Office of the Surgeon General, the CDC, CISA, and the FBI. Specifically, the injunction bars the federal agencies, in broad and undefined terms, from two key actions:

  1. They may not “coerce” or “significantly encourage” social media platforms to make content moderation decisions.
  2. They may not “meaningfully control” social media platforms’ content moderation processes.

By imposing such restrictions, the 5th Circuit’s decision could hamper the government’s ability to collaborate with platforms in identifying and mitigating harmful content that poses a threat to public safety and democracy.

How:

If the Supreme Court upholds this broad and ill-defined injunction, it could create confusion on both the government’s and the companies’ sides regarding content moderation collaboration. That confusion might deter the dialogue necessary to identify and mitigate harmful content, affecting how social media platforms manage content. This may be especially true for disinformation from foreign malign actors who seek to sow division in American society and undermine U.S. democracy, as detailed in Warner’s amicus brief.

For example, the injunction could lead to a scenario where federal agencies cannot share critical information with platforms regarding national security threats and election integrity. The injunction’s failure to clearly define what constitutes permissible and impermissible government communications with social media platforms only exacerbates this problem. This ambiguity leaves both government officials and platforms uncertain of the legal boundaries for their interactions.

Implications of upholding the injunction

National security: Under a strict reading of the injunction, any direct outreach from government agencies to platforms regarding the moderation of content that poses a national security threat could be restricted. This could impede the government’s capacity to promptly communicate with platforms about critical issues such as foreign interference in elections, disinformation campaigns, or the online activities of terrorist groups. In past elections, U.S. intelligence agencies identified and flagged sophisticated malign influence campaigns orchestrated by foreign actors, notably Russia and Iran, aimed at influencing electoral outcomes and sowing discord. Direct communication with social media platforms allowed these agencies to share intelligence about covert state-sanctioned operations, leading to the identification and removal of numerous accounts and posts intended to spark real-world violence and destabilize democratic governments. Although government entities frequently flag such operations to social media platforms, the platforms’ subsequent actions, such as removing accounts and content, are often carried out independently and at each platform’s discretion: sometimes in direct response to government alerts, other times on the basis of the platforms’ own investigations and monitoring systems. The lack of clarity in the injunction could lead platforms to require that government communications occur solely through court-sanctioned mechanisms such as subpoenas and warrants. This indirect effect could slow or diminish the response to fast-moving national security threats on social media platforms, leaving the public at greater risk from unchecked malicious online activities.

Increased threat vulnerability: Sharing information about threats is a key strategy in cybersecurity and content management to strengthen defenses against disinformation campaigns. The absence of clear channels for threat sharing could inadvertently benefit bad actors who engage in disinformation campaigns and often target platforms that are perceived as vulnerable—those with less robust user protections, weaker content moderation policies, and limited capabilities to detect and remove fake content and inauthentic accounts. Without the ability to leverage government intelligence and resources, these platforms could become more vulnerable to manipulation, allowing disinformation to proliferate more freely. The inability to share threat information could lead to a disparate response to disinformation, with some platforms better equipped to manage risks than others due to varying internal resources and capabilities.

Increased confusion and inconsistencies: Social media platforms recognize the importance of not becoming conduits for foreign malign influence. But without authoritative guidance, platforms motivated to prevent the spread of harmful content could become excessively cautious. They might proactively remove or restrict a wide range of discussions on sensitive issues such as COVID-19 to curb as much damaging information as possible, or they might adopt a blanket approach that disallows any content related to contentious or highly regulated topics, moves that would starkly contrast with the plaintiffs’ supposed objectives in this case. Alternatively, platforms may choose not to remove any information at all, potentially exacerbating the spread of misinformation. On the other side of the equation, in the NetChoice cases, the Supreme Court is considering whether private social media companies may themselves remove mis- and disinformation; a ruling that they may not would pave the way for foreign malign influence operations to spread on social media unmitigated.


Conclusion

Without concrete guidelines, the risk of unintentionally crossing the boundary from lawful information sharing or persuasion to unlawful coercion or encouragement becomes a tangible concern for government officials. This lack of clarity may lead to an overly cautious approach in which agencies refrain from any communication that could be misconstrued as coercive or overly influential, even when that communication is intended to serve the public interest, thereby increasing vulnerability to mis- and disinformation and national security threats. Furthermore, upholding the injunction would set a precedent that may prompt similar constraints on the government’s voluntary interactions with private entities in other sectors, such as financial institutions or critical infrastructure like utilities.


Authors

Nicole Alvarez

Senior Policy Analyst

Devon Ombres

Senior Director, Courts and Legal Policy

