Truth-in-AI and Robo-Deception: How Regulation Is Evolving to Address Deepfakes, Robocalls and More to Avoid the Erosion of Consumer Trust

Pillsbury Winthrop Shaw Pittman LLP

Takeaways

  • The rise of generative AI in advertising challenges existing regulatory frameworks and threatens consumer trust.
  • Businesses using AI-generated content must grapple with legal, ethical and practical implications surrounding false advertising.
  • Despite the lack of controlling authority, businesses should aim to safeguard their credibility and provide transparency.

While major legal cases involving AI have largely focused on copyright issues, few have directly addressed truthful advertising of AI products and AI-generated content. Meanwhile, the ease with which AI can deceive consumers and the public, along with the fear of malicious interference in political elections, has underscored the urgency of considering legislation and regulations capable of addressing these issues directly.

Regulatory bodies are increasingly cognizant of the rise in AI’s use and have made their enforcement authority known. Last year, several U.S. agencies, including the Consumer Financial Protection Bureau (CFPB) and the Federal Trade Commission (FTC), jointly pledged to protect individuals from potential harms associated with AI and emerging technologies. Their goal is to monitor the development and use of AI, balancing innovation with legal protections and ensuring that the use of AI does not perpetuate unfair or deceptive practices. This effort is consistent with the FTC’s mission and past efforts, including warning advertisers against the deceptive and fraudulent use of chatbots, deepfakes and voice clones, which could result in FTC enforcement actions. Meanwhile, the Federal Communications Commission (FCC) recently ruled that robocalls using AI-generated voices are illegal. This ruling was prompted by an incident in which robocalls simulating President Biden’s AI-generated voice were distributed to thousands of voters ahead of a state primary.

Lawmakers have taken notice as well. The AI Disclosure Act, introduced by Representative Ritchie Torres (D-NY), aims to mandate disclaimers on all AI-generated materials, such as videos, photos, text and audio. In Florida, the legislature acted to restrict the manipulative use of AI in the creation of political advertising. In April of this year, Representative Adam Schiff (D-CA) introduced the Generative AI Copyright Disclosure Act, which seeks to increase transparency by requiring creators of generative AI systems to file a notice with the Copyright Office detailing all copyrighted works contained in the training dataset before releasing the systems to the public. Other states have also taken action to regulate AI; however, state legislation governing practices that cross state lines may achieve only a piecemeal approach to providing the clarity that protects consumers and businesses from deceptive practices.

Global Perspective
While existing U.S. laws against false advertising provide a framework, they do not fully address the latest concerns posed by AI-generated content or deepfakes to the extent that Europe’s Artificial Intelligence Act (AI Act) and similar legislative initiatives do. The AI Act, recently approved by the EU Parliament, seeks to address the consumer deception risks posed by AI and AI-generated content, and positions Europe to play a leading role globally in adopting appropriate legislation. As part of the AI Act’s transparency obligations, developers and deployers must ensure that end-users are aware when they are interacting with AI, such as chatbots and deepfakes. In South America, Brazil is spearheading AI regulation, proposing legislation aimed at governing the development and application of this transformative technology. Likewise, China recently established a set of measures for generative AI usage and development, requiring generative AI service providers to establish and implement internal control systems, conduct regular self-inspections, mark generated content with appropriate labels and report any violations to the authorities.

In the absence of U.S. federal legislation, existing state and federal consumer protection laws still play some role, as does corporate self-governance, particularly among companies that have adopted proactive measures to maintain transparency and credibility. OpenAI recently introduced Sora, its text-to-video tool that can rapidly generate up to one minute of startlingly realistic video content. Although not yet released to the general public, Sora’s AI-generated videos have been widely circulated, generating excitement—and skepticism—about Sora’s hyperrealism and susceptibility to misuse for deepfakes, political propaganda and consumer deception. OpenAI plans to safeguard Sora against being used for misinformation, hateful content and bias by working with experts to test the platform. OpenAI also joins Meta in proactively incorporating tools to combat the potentially deceptive nature of AI-generated content by planning to include standards that allow publishers, companies and others to embed metadata in media for verifying its origin and related information.

The speed at which the technology evolves will continue to raise questions about the proper solution. For now, these companies are leading the way in attempting to address these issues head-on. For example, ahead of the 2024 presidential election, Midjourney has taken steps to prevent users from creating fake images of presidential candidates. And Midjourney is not alone in its efforts to thwart election misinformation: earlier this year, a number of major technology companies signed a pact to voluntarily adopt “reasonable precautions” to prevent artificial intelligence tools from being used to disrupt elections around the world. Because customers value transparency, businesses should work to safeguard their credibility by implementing innovative, proactive solutions, such as clearly labeling AI-generated content. This will allow companies to strike a balance between leveraging the benefits of AI and maintaining ethical advertising practices.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Pillsbury Winthrop Shaw Pittman LLP
