In the Supreme Court’s NetChoice Rulings, the Court Leaves the Door Open for Future Social Media Content Moderation Regulations


[co-author: Michael J. Cox]

Are social media companies more like newspapers or phone companies? This oft-debated question in social media legal circles, while seemingly trivial, represents a momentous debate over whether, and how much, social media companies should be allowed to moderate user-generated content on their platforms. If social media companies are more like newspapers, they have the right to censor, tailor, or remove content as they see fit, much as an editor at a publication has the right to choose which stories make the headlines. If, on the other hand, social media companies are more like phone companies, then the government has more freedom to limit the companies' editorial powers, ensuring that they serve merely as a conduit for their users to express themselves freely.

Since the Communications Decency Act was passed in 1996, Section 230 of that Act has granted providers of interactive computer services immunity from lawsuits arising out of content posted on their platforms, and it is widely credited with allowing many internet companies to grow without being encumbered by excessive litigation over their users' posts. Section 230 has also come under fire, however, for contributing to the proliferation of unchecked hate speech and misinformation, even though the statute allows companies to moderate or remove content they consider objectionable.

In 2020, social media posts containing misinformation about the pandemic and the presidential election were rampant, and many social media companies decided to step in by adding disclaimers to certain pieces of content, removing content, and occasionally suspending or banning users from their platforms. Many users cried foul, and some among them, including members of state governments, accused these social media companies of harboring a political bias against certain voices. This controversy spawned two laws, one in Florida and one in Texas, meant to combat the alleged bias of social media companies against users with controversial political views. Florida's law fined social media platforms for banning candidates for political office for more than 14 days. Texas's law prohibited platforms with more than 50 million monthly active users from removing or hiding user content based on its political viewpoint. NetChoice, an internet business trade association that advocates for free speech, sought preliminary injunctions against the attorneys general of Texas and Florida to prevent them from enforcing the laws. The cases, Moody v. NetChoice and NetChoice v. Paxton, ended up at the Supreme Court and were decided jointly on July 1, 2024.

While the Supreme Court did not definitively decide whether the Florida and Texas laws violated the First Amendment, the Court did signal disapproval of the laws in its opinion before ultimately remanding the cases to their respective courts of appeals. At the same time, because the Supreme Court did not rule on whether a substantial number of the laws' applications were unconstitutional relative to their plainly legitimate sweep, the Court left the door open for potentially more significant regulation of social media. Thus, while many publications framed the decision as a win for social media companies, the Supreme Court's choice to sidestep the issue rather than hold the laws unconstitutional was a surprising outcome, one that offers significant room for tailored future regulations aimed at chipping away at social media companies' content moderation abilities.

So, are social media companies more like newspapers or phone companies? While the Supreme Court did not provide a clear answer, its opinion suggests the distinction may not make as much of a difference as once thought. The Court's discussion seemed to favor a nuanced approach to content moderation regulation, weighing factors like the interests of the state, the mode of moderation (e.g., curating what people see versus what they say), the degree to which content moderation is automated by AI versus intentionally programmed, and the extent to which moderation is driven by certain viewpoints and intended to silence others. Moreover, the Court's decision to decline to rule on a significant question of content moderation policy tracks with its holding in Murthy v. Missouri in late June, which also sidestepped the question of whether the government could influence the content moderation practices of social media companies without violating the First Amendment.

For the time being, social media companies, and any companies that allow user-submitted content on their websites, should be on the lookout for future legislation that attempts to add or remove filters placed on user content. For now, Section 230 still generally permits interactive internet companies to curtail or allow content on their platforms with impunity, but depending on the future ambitions of legislators, the interests of the states, and the content sought to be controlled, the days of a hands-off approach to regulating content hosting may be numbered.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Pillsbury - Internet & Social Media Law Blog
