[co-author: Raj Gambhir]
Welcome to this week’s issue of AI: The Washington Report, a joint undertaking of Mintz and its government affairs affiliate, ML Strategies.
The accelerating advances in artificial intelligence (“AI”) and the practical, legal, and policy issues AI creates have exponentially increased the federal government’s interest in AI and its implications. In these weekly reports, we hope to keep our clients and friends abreast of the legislative, executive, and regulatory activity on AI taking shape in Washington.
In this issue, we discuss the Federal Trade Commission’s (“FTC” or “Commission”) complaint against Automators AI (“Automators”). Our key takeaways are:
- On August 8, 2023, the FTC filed a complaint against Automators and affiliated entities and individuals, alleging that they had “caused consumers across the country over $22 million in harm” through false and misleading claims.
- Some of the claims under FTC scrutiny concern the efficacy of the AI tools leveraged by Automators and affiliated entities and individuals. This makes the Automators complaint the Commission’s first individual case concerning AI-related misrepresentations.
- The Automators complaint demonstrates the willingness of the FTC and other agencies to regulate AI in the absence of clear guidance from Congress.
FTC Draws on Regulatory Authority to Bring a First-of-Its-Kind Individual Case Involving AI
As we have discussed in a previous newsletter, FTC guidance has signaled that the Commission is prepared to crack down on false or misleading claims related to AI. In an August 8, 2023 complaint filed against Automators AI (“Automators”) and affiliated entities and individuals, the Commission has utilized this authority for the first time in an individual case. This complaint, along with a district court’s subsequent decision to temporarily halt Automators’ operations, demonstrates that agencies are willing and able to regulate AI in the absence of explicit AI-specific regulatory authority from Congress.
The FTC’s Case Against Automators
On August 8, 2023, the FTC filed a complaint alleging that Automators and affiliated entities and individuals had “caused consumers across the country over $22 million in harm” by deceiving consumers “into purchasing a ‘venture capital–backed’ and ‘artificial intelligence–integrated’ ecommerce business opportunity…”
The FTC alleges that since early 2020, three individuals identified in the complaint have operated successive business entities (most recently, Automators) that have promised clients large profits through the operation of “third-party stores on platforms like Amazon, Walmart, and Facebook.” For instance, Automators allegedly claimed that through its services, clients could expect to make “over $10,000 per month in sales.”
According to the complaint, Automators supported its claims regarding expected client profits with appeals to the efficacy of its AI tools. One Automators advertisement allegedly claimed that the company uses “AI tools for our 1 on 1 Amazon coaching program, helping students achieve over $10,000/month in sales!” Another advertisement claimed that coaches could help clients leverage tools like ChatGPT “to scale an Amazon store to [$10,000] a month and beyond.”
As the majority of clients allegedly “do not recoup their investment, let alone make the advertised amounts,” the Commission charges that “Automators’ earnings claims regarding its business opportunities are false and unsubstantiated.” Due to these allegedly misleading representations and other business practices, the FTC claims that Automators and affiliated entities and individuals have violated the FTC Act, the Business Opportunity Rule, and the Consumer Review Fairness Act (“CRFA”).
On August 11, 2023, the US District Court for the Southern District of California entered a temporary restraining order against Automators. In its order, the court held that the FTC “has shown that immediate and irreparable harm will result from Defendants' ongoing violations of the FTC Act, the Business Opportunity Rule, and the CRFA unless Defendants are restrained and enjoined by order of this Court.” The court has set a preliminary injunction hearing in the case for September 19, 2023.
The Commission Follows Through on Its Warnings
In a previous newsletter, we discussed recent statements and actions by the FTC signaling the Commission’s willingness to apply its existing regulatory authority to the domain of AI. Settlements with Cambridge Analytica, Everalbum Inc., and WW International Inc. have demonstrated the Commission’s willingness to utilize its authority under Section 5 of the FTC Act to mandate the destruction of algorithms developed with illegally collected data.[1] The April 2023 “Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems” has signaled the resolve of the FTC and other consumer protection authorities to prevent automated systems from perpetuating unlawful discrimination.
Now, the Commission’s complaint against Automators has demonstrated the FTC’s resolve to regulate false or misleading claims related to AI. The FTC signaled its willingness to apply its authority in this way in a February 2023 blog post entitled “Keep your AI claims in check.” As we discussed in a previous newsletter covering the FTC and AI, this blog post warns marketers of AI products “not to overpromise what your algorithm or AI-based tool can deliver.” Almost six months after the publication of that post, the FTC has followed through on that warning with its complaint against Automators.
Conclusion: Enforcement, With or Without New Regulation
Many newsletters in this series have covered Congress’s progress toward enacting a comprehensive regulatory framework for AI. In our review, we’ve found that even the most optimistic legislators estimate that it may take months for such a framework to be proposed and implemented. In the interim, the use of novel generative AI tools has rapidly expanded, giving rise to new questions, such as the relationship between copyright law and generative AI, and exacerbating ongoing issues such as illegal bias and fraud.
In the absence of clear guidance from Congress, regulatory agencies have begun to extend their existing authority to address these issues. For example, the Copyright Office has issued guidance on the eligibility of AI-generated works for copyright protection, applying its interpretation of the Copyright Act to the novel domain of artificial intelligence. The FTC’s recent actions on AI, including the Automators case, can be seen in a similar light. By interpreting statutes such as the FTC Act and the Fair Credit Reporting Act as covering the conduct of operators offering products and services that utilize AI, the FTC has asserted the authority to regulate AI in certain circumstances.
Whether the Commission will continue to succeed in bringing these AI-related enforcement actions, and whether Congress will grant the FTC explicit regulatory authority over AI, remains to be seen. What is certain, however, is that the current FTC will continue to bring cases against AI-related entities, like Automators, whose conduct may run afoul of the statutes and rules the agency already enforces. We will continue to monitor, analyze, and issue reports on these developments.
[1] This enforcement paradigm, known as “algorithmic disgorgement,” is discussed at length in a paper co-authored by FTC Commissioner Rebecca Kelly Slaughter. We provide a brief summary of Slaughter’s analysis in our newsletter on recent FTC statements and actions on AI.