The pace of AI tool development has been staggering. Each week in 2023 feels like years’ worth of innovation. Ever-more-powerful AI tools arrive daily to push the boundary of possibility forward. They also raise many types of legal issues, from IP ownership and infringement, to data privacy, to false advertising.
When it comes to advertising claims, the Federal Trade Commission (FTC) is paying close attention. So far this year, the FTC has published several blog posts highlighting its focus on AI (here, here, and here). The FTC also issued a Joint Statement with several other U.S. agencies addressing discrimination and bias in AI (more below). This is not new for the FTC. It provided detailed AI guidance for businesses in 2020, and has held hearings and provided other guidance over the years.
Of course, the FTC and its attorneys are not providing this guidance as a hobby. When the FTC spends time releasing extensive public guidance in a particular area, investigations and enforcement actions are certain to follow.
So, what does all of this guidance tell us about FTC enforcement priorities in the AI space? A few themes emerge.
Do not overstate the use of AI or its capabilities in your product or service. Think “AI-washing.” This principle targets companies that try to cash in on AI hype by falsely touting the use of AI in products or services, or embellishing the capabilities of AI-based products or services. As in any other industry, claims about the use of AI in products or services must be truthful, non-misleading, and substantiated. The FTC has reiterated that, in its view, “artificial intelligence” is an ambiguous term, which opens the door to consumer perception and substantiation issues. Is your product really AI-powered?
Do not understate the use of AI or its risks in your product or service. Conversely, companies whose products or services actually do run on AI should not hide the robot behind the curtain. For instance, AI chatbots should not masquerade as humans. The same goes for customer reviews and ratings: AI-generated reviews and ratings that look like human-written ones are becoming more prevalent. Relatedly, companies should not embed advertising content within generative AI features or outputs without identifying it as advertising. This is similar to the “native advertising” and “material connection” disclosure issues seen over the years in every medium from print, to social, to the metaverse. And, as the FTC has stated, “[p]eople should know if an AI product’s response is steering them to a particular website, service provider, or product because of a commercial relationship.” Finally, companies should disclose the important risks of using AI, including foreseeable downstream uses and the need to train staff and contractors.
Do not use AI tools that discriminate against certain groups (of humans, for now). The FTC is concerned that the datasets, models, algorithms, and other technology underlying automated systems like AI may perpetuate bias and discrimination in areas such as finance, health, education, housing, and employment. Simply put, AI systems are trained on existing data, and any biases or flawed assumptions baked into that data may become part of how the AI makes decisions. The April 25, 2023 Joint Statement, issued by the FTC together with the Consumer Financial Protection Bureau (CFPB), the U.S. Department of Justice (DOJ) Civil Rights Division, and the Equal Employment Opportunity Commission (EEOC), underscored the regulatory focus in this area. The FTC also raised this issue in the context of targeted ads in a recent blog post: “Companies thinking about novel uses of generative AI, such as customizing ads to specific people or groups, should know that design elements that trick people into making harmful choices are a common element in FTC cases.”
It’s hard to imagine an industry that is not already using AI. All businesses should be considering legal risks and policies regarding AI across the enterprise, including marketing and advertising. The broad principles above provide a starting point for advertising compliance as the world sprints forward into an AI-powered future.