[co-author: Stephanie Kozol]*
Missouri’s attorney general (AG) announced on X.com (formerly Twitter) that he is “issuing a rule requiring Big Tech to guarantee algorithmic choice for social media users.” [X.com post (January 17, 2025, roughly 3:35 p.m. EST)] He intends to use his authority “under consumer protection law” (in Missouri, the Merchandising Practices Act) “to ensure Big Tech companies are transparent about the algorithms they use and offer consumers the option to select alternatives.” [X.com post] The Missouri AG touts this rule as the “first of its kind” in an “effort to protect free speech and safeguard consumers from censorship.” [Press release]
Although the text of the rule has not been released, the press release indicates that the rule will “clarify that it is an unfair, deceptive, fraudulent, or otherwise unlawful practice” to “operate a social media platform unless the platform permits users the opportunity to select a third-party content moderator of their choice, rather than rely on the content moderation provided directly by the social media platform.” [Press release] To comply, social media platforms (a term the release does not define) must:
- Provide users with a screen “to choose among competing content moderators” upon activation and every six months after that;
- Ensure that the choice screen does not default to any selection;
- Ensure that the choice screen does not “favor the social media platform’s content moderator over those of third parties;”
- If a user chooses a different third-party content moderator, permit that “content moderator interoperable access to data on the platform in order to moderate what content is viewed by the user;” and
- Refrain from “moderat[ing], censor[ing], or suppress[ing] content” if a user’s “chosen content moderator would otherwise permit viewing that content.” [Press release]
Aside from allowing users to choose their own content moderator, the Missouri AG has not described how the rule would impact social media companies’ algorithms. The details of the rule should be forthcoming as the AG has promised to hold forums and follow a public comment process.
The proposed rule allegedly follows the “roadmap” laid down in the recent U.S. Supreme Court decision in Moody v. NetChoice, LLC, 603 U.S. 707 (2024). In that case, laws enacted by Florida and Texas sought to regulate social media companies’ content moderation practices. Although the Court did not ultimately decide the constitutionality of those laws, it did outline some applicable free speech principles related to state actors regulating social media. It explained that (1) curating and editing a third party’s speech is itself protected speech, (2) private speakers excluding disfavored speech is an expressive choice, and (3) states cannot advance some points of view or “better balance” the marketplace of ideas by burdening others’ speech. Because the record was undeveloped and the lower courts’ legal analysis had missteps, the Court remanded the case for further proceedings, although several justices opined on how the First Amendment might apply.
This proposed rule is another data point in the upward trend of state AGs exercising their powers under consumer protection statutes and other long-standing state laws against “Big Tech” and in the arena of artificial intelligence (AI). In the past two weeks, the New Jersey and Oregon AGs have both issued AI guidance under their respective states’ anti-discrimination and consumer protection statutes, joining their colleagues from Texas and Massachusetts. State AGs are making clear that they will not hesitate to enforce state consumer protection, privacy, or anti-discrimination laws as they deem necessary. Though the “devil is in the details,” the Missouri AG’s proposed rule attempts to traverse perilous ground implicating the First Amendment and other constitutional freedoms. This development is worth watching as the details of the rule emerge.
*Senior Government Relations Manager