Artificial Intelligence Briefing: Lawsuits, Agreements and Comments – Oh My!

Faegre Drinker Biddle & Reath LLP

Recent developments in AI continue to highlight its increasing prevalence and the regulatory challenges it poses. The DOJ has sued RealPage, alleging its AI software enables rent price-fixing, while NIST partnered with OpenAI and Anthropic to advance AI safety research. In California, Senate Bill 1047 proposes strict regulation of large AI models, sparking debate over the balance between innovation and risk. Meanwhile, a North Carolina musician has been charged with fraudulently generating royalties from AI-produced music. The CFPB emphasized that existing consumer protection laws apply to AI in finance, and government lawsuits target generative AI products, from sexually explicit deepfake websites to misrepresented AI tools in health care. Finally, the U.S. signed an international AI treaty, underscoring the need for ethical AI development aligned with human rights and democratic principles. We provide further details below.

Regulatory, Legislative and Litigation Developments

  • DOJ Sues RealPage for Helping Fix Rental Rates. The U.S. Department of Justice (DOJ) and eight states filed an antitrust lawsuit against RealPage, alleging the company's AI-powered revenue management software enables landlords to collude and artificially inflate apartment rents. The complaint claims RealPage’s software uses algorithms and nonpublic, competitively sensitive data shared among competing landlords to recommend higher rents than would occur in a competitive market. The DOJ alleges this has harmed renters by suppressing competition and driving up housing costs. The lawsuit marks the first major civil antitrust case where the role of AI algorithms in pricing manipulation is central to the allegations. If successful, the case could have significant implications for the use of AI and data sharing in pricing across industries.
  • NIST Signs Agreements With Anthropic and OpenAI. On August 29, the National Institute of Standards and Technology (NIST) entered into agreements with Anthropic and OpenAI providing for collaboration on AI safety research, testing and evaluation. The groundbreaking agreements give the U.S. AI Safety Institute at NIST access to major new models from Anthropic and OpenAI before their public release. The U.S. AI Safety Institute will research and assess the models’ capabilities and safety risks and, in collaboration with the UK AI Safety Institute, provide feedback to the companies on potential safety improvements.
  • California Legislature Passes Several AI-Related Bills. Before closing its legislative session on August 31, 2024, California’s state legislature passed a slew of bills aimed at regulating artificial intelligence. One of the more divisive, Senate Bill 1047, seeks to regulate potential extreme risks of AI development by requiring powerful AI models that cost more than $100 million to train, or $10 million to fine-tune, to undergo safety testing before public release. The bill would allow the California Attorney General to sue AI developers whose models cause certain severe harms. SB 1047 has divided tech industry leaders and legislators along sometimes unexpected lines: Elon Musk and former officers of AI companies have cautiously supported the bill, while AI researchers and Bay Area congressional Democrats, including former Speaker Nancy Pelosi, Rep. Anna Eshoo and Rep. Ro Khanna, have urged the governor to veto it. Proponents say the bill proactively addresses large-scale harms AI could cause and is limited to large AI models. Opponents counter that it is premature, fails to address harms AI already causes, such as deepfakes and misinformation, and would deter smaller AI entrepreneurs from entering the space to provide needed innovation. SB 1047 and the other state AI legislation now await Governor Gavin Newsom’s decision to sign or veto by September 30.
  • Musician Charged With Music Streaming Fraud Aided by AI. Federal prosecutors in the Southern District of New York have charged Michael Smith, a North Carolina resident, with orchestrating a scheme to collect fraudulent royalties from AI-generated music on streaming platforms. According to the indictment, Smith, working with the CEO of an AI music company, used AI to generate hundreds of thousands of songs and published them on major streaming services such as Spotify, Apple Music, YouTube and Amazon Music. Smith then used thousands of automated “bot” accounts on those services to stream the songs, generating over $10 million in royalties.
  • CFPB Weighs in on Uses, Opportunities and Risks of Artificial Intelligence in the Financial Services Sector. On August 12, 2024, the Consumer Financial Protection Bureau (CFPB) submitted a comment in response to a Department of the Treasury Request for Information, re-emphasizing that there is no exception to federal consumer protection laws in the financial services sector for “fancy new technology” like AI, and that existing laws and regulations such as the Equal Credit Opportunity Act, Consumer Financial Protection Act and Fair Credit Reporting Act continue to apply. The comment stresses the need for firms to perform regular testing to assess disparate impact risks that may result from the use of AI tools in lending decisions, loan servicing and debt collection practices. Financial services institutions must also be able to provide accurate and specific reasons when taking adverse actions against a consumer, regardless of the complexity or opacity of any underlying models. Other areas of risk mentioned by the CFPB include consumer-facing customer service tools (such as chatbots) that incorporate large language models, which can give rise to liability if they provide inaccurate information, fail to provide access to legally required dispute resolution processes, or violate a consumer’s privacy or security rights; and the increasing prevalence of “fraud screening” tools used to assess creditworthiness. The message from the CFPB is that “[i]f firms cannot manage using a new technology in a lawful way, then they should not use the technology.” Notably, the CFPB did not propose new rules or guidance to govern AI. Accordingly, financial institutions should assess their use of AI tools for compliance with existing laws and regulations, particularly those mentioned in the comment.
  • Government Attorneys Bring Statewide Actions Against Technology Companies Over Generative AI Products. Since our last briefing, two government attorneys, each acting on behalf of the people of their state, have filed actions against technology companies over the creation and dissemination of generative AI products. First, the San Francisco City Attorney filed a first-of-its-kind class action under California Civil Code § 1708.86, which prohibits the creation and dissemination of sexually explicit deepfakes. The lawsuit, brought on behalf of the people of California against a number of tech companies, alleges the defendants operate websites that allow users to create sexually explicit images in violation of numerous California criminal and civil statutes. Just one week later, the Texas Attorney General filed a petition seeking court approval of an Assurance of Voluntary Compliance (AVC) with a generative AI software manufacturer whose product, aimed at physicians and medical staff, purported to create summaries, charts and draft clinical notes with hallucination rates of less than .001%. The AVC alleges that those representations were false, misleading or deceptive in violation of the Texas Deceptive Trade Practices – Consumer Protection Act, and it places disclosure requirements and limitations on how the company may describe and represent the hallucination rate of its software.
  • U.S., EU and Others Sign International AI Treaty. On September 5, 2024, the U.S. joined eight other countries and the European Union in signing the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. This international treaty establishes a legal framework that requires parties to adopt measures ensuring AI systems are developed and used in ways that respect human rights, uphold democratic principles and maintain the rule of law. It emphasizes accountability, transparency, privacy and the need to protect against discrimination and misuse of AI, particularly in contexts that may undermine human dignity or individual autonomy. The Convention also requires states to establish procedural safeguards and risk mitigation measures for certain AI systems, to bolster international cooperation concerning AI, and to create independent mechanisms to oversee compliance with treaty obligations.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Faegre Drinker Biddle & Reath LLP
