Supreme Court Clarifies First Amendment and Standing Standards Applicable to Social Media Content Moderation Policy Challenges

Social media companies have long moderated the content that appears on a person’s home page by, for instance, deleting explicit posts or “downgrading” posts containing misinformation. Based on the belief that these moderation policies tend to favor one side of the political spectrum, state governments and private actors have increasingly sought to curtail them through regulation or private lawsuits. Until recently, it was not entirely clear whether (and, if so, how) the First Amendment protected social media companies’ ability to moderate content, or whether private individuals even had standing to raise First Amendment challenges to these policies, given that social media companies are private entities. The Court began to clarify these issues in two recent cases, NetChoice, LLC v. Paxton and Murthy v. Missouri,1 which will have wider impacts on how companies, not just social media companies, may moderate their employees’ comments on social media.

NetChoice is the more consequential of the two cases. There, Florida and Texas had passed laws limiting social media companies’ ability to “censor” social media posts by, among other things, “deleting, altering, labeling, or deprioritizing them … based on their content or source.” The laws also require social media companies to notify any user whose post has been “censored” and to explain the censorship. Although the Supreme Court remanded the challenges to these laws to the Fifth and Eleventh Circuits for procedural reasons,2 it clarified several important aspects of the law governing social media content moderation.

In particular, the Supreme Court explained that by moderating and curating posts in the first place, social media companies engage in their own “expressive conduct” that is protected by the First Amendment. This is true even though (1) social media companies moderate only a small amount of the content posted on their sites and (2) it is unlikely that anyone believes social media companies are expressing their own viewpoints when moderating posts.

The Supreme Court also held that a state’s interest in “balancing” political speech online is not sufficient to overcome the First Amendment’s protections; in other words, a state cannot justify a law that infringes a social media company’s First Amendment rights based only on an interest in “balancing” or “diversifying” the viewpoints expressed online. Importantly, in reaching these conclusions, the Supreme Court relied on cases addressing government attempts to impose moderation policies on newspapers, television stations, newsletters, and other expressive activities. The Supreme Court has thus clarified that, going forward, the First Amendment applies to online speech with the same force as it applies to offline speech.

Although less consequential, Murthy v. Missouri still holds important lessons for private parties that may look to bring an action against social media companies or other entities that provide a platform for public comment. In that case, Missouri, Louisiana, and five individual social media users sued federal officials, alleging that individuals in the FBI, White House, and other executive agencies had “coerced” or “significantly encouraged” platforms such as Twitter, Facebook, and YouTube to demote or remove posts about the COVID-19 pandemic and the 2020 general election. The Supreme Court held that none of these plaintiffs had standing to challenge this “coercion” for several reasons.

Most relevant here, the Supreme Court explained that because the social media companies had already been moderating posts about the COVID-19 pandemic and the election before the alleged “coercion,” the plaintiffs could not establish any injury traceable to the government’s conduct. The Supreme Court also rejected the plaintiffs’ argument that their “right to listen” (i.e., the right to hear someone else’s speech) gave them standing to challenge restrictions on someone else’s speech.

At bottom, the long-term takeaway from Murthy is that any litigant seeking to challenge moderation policies must establish specific injuries stemming directly from the moderation; litigants cannot rely on the mere existence of moderation policies. A real question remains, however, about the propriety of a government agency demanding that a social media company (or other entity) take certain action or else face adverse government action in some other context (e.g., a threat that the entity will face consequences on other regulated issues if it does not accede to the agency’s demands). Because the Supreme Court did not address this question, it is unclear how agencies will use this lingering ambiguity to contact and make demands of private entities based only on perceived misinformation, rather than on existential threats (e.g., national security or breaches of the peace).

Taken together, NetChoice and Murthy establish significant hurdles for any challenge to social media companies’ content moderation policies, or to their interactions with government agencies that demand moderation of specific topics (perceived misinformation or otherwise). These hurdles come on top of Section 230 of the Communications Decency Act, which provides social media companies with very broad immunity from defamation lawsuits over statements made by their users.3 Navigating this legal landscape is challenging, but not impossible.

Organizations that offer online public comment features may consider reviewing those platforms, understanding the policies of the social media companies the organization uses, and updating their own policies governing public comment. This may include updating the user agreement that tacitly governs an individual’s access to the organization’s website to make clear that the organization may remove any content at will. To the extent an organization maintains internal employee comment platforms, similar notices may be appropriate.

Footnotes 

  1. The NetChoice opinion is available at https://www.supremecourt.gov/opinions/23pdf/22-277_d18f.pdf. Murthy v. Missouri is available at https://www.supremecourt.gov/opinions/23pdf/23-411_3dq3.pdf.
  2. Specifically, the Supreme Court held that the Fifth and Eleventh Circuits had improperly evaluated the plaintiffs’ claims under an “as applied” standard rather than a “facial” standard.
  3. For more information on Section 230 and similar state laws, see R. Feinberg & I. Joyce, What’s Up Dox? (Oct. 25, 2021), https://www.swlaw.com/publications/legal-alerts/whats-up-dox.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Snell & Wilmer | Attorney Advertising

Written by:

Snell & Wilmer