AI News Roundup – Treaty on AI standards, AI fake-song fraud charges, new YouTube tools for controlling AI content, and more

McDonnell Boehnen Hulbert & Berghoff LLP

To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.

  • Over 50 countries and supranational organizations, including the U.S., the United Kingdom and the European Union, have unveiled the world’s first binding treaty on AI standards, according to a report from the Financial Times. The Council of Europe’s “Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law,” drafted over the past two years, emphasizes human rights and democratic values in its approach to regulating both public- and private-sector AI systems. The treaty requires signatories to be accountable for harmful and discriminatory outcomes of AI systems, to respect equality and privacy rights, and to provide legal recourse for victims of AI-related rights violations. While the agreement is described as “legally enforceable,” critics note that it lacks strong sanctions such as fines, relying primarily on monitoring for compliance. Supporters nonetheless see it as an important first step in international cooperation on AI. Věra Jourová, vice president of the European Commission for values and transparency, told the Financial Times that the framework “should bring trust and reassurance that AI innovations are respectful of our values — protecting and promoting human rights, democracy and rule of law.”
  • The New York Times reports on federal charges brought against a North Carolina man who allegedly used AI to create fake songs in order to collect royalties from music streaming services. Michael Smith, 52, is accused of orchestrating a sophisticated scheme that netted him approximately $10 million over seven years. Prosecutors allege that Smith used AI to generate hundreds of thousands of fake songs attributed to non-existent bands, which he then uploaded to streaming platforms such as Spotify, Apple Music and Amazon Music. To make the scheme appear legitimate, Smith reportedly created thousands of fake streaming accounts and used bots to play the AI-generated music on loop. The indictment claims that Smith went to great lengths to avoid detection, including spreading his activity across numerous fake songs and generating plausible names for the AI-created artists and tracks. If convicted, Smith faces up to 20 years in prison on each charge of wire fraud and money laundering conspiracy.
  • Professionals in the entertainment industry are learning how to use AI technologies amid fears that machines may replace their jobs, according to the Los Angeles Times. The article profiles several Hollywood workers, including cinematographers, editors, costume designers and voice actors, who are taking courses and experimenting with AI tools to understand the technology’s potential impact on their respective fields. While some see AI as a helpful assistant for tasks like creating preliminary storyboards or generating placeholder shots, others view it as a threat to their livelihoods. Industry organizations such as the workers’ union IATSE have also created AI commissions to study the technology and consider new contract provisions and regulations. Despite these concerns, many in the industry, on both the management and labor sides, have recognized the need to understand AI in order to effectively regulate its use in entertainment production.
  • Hong Kong’s South China Morning Post reports on a new agreement among major Chinese and American technology corporations to create an international AI supply chain standard. China’s Ant Group, Tencent Holdings and Baidu reached an agreement with Microsoft, Google and Meta to develop the world’s first international standard for large language model (LLM) security in supply chains. The initiative, unveiled at a conference in Shanghai, falls under the AI Safety, Trust, and Responsibility program of the World Digital Technology Academy (WDTA), which was established under a United Nations framework. The standard aims to address security risks such as data leaks, model tampering and supplier non-compliance throughout the LLM lifecycle. The collaboration highlights the growing importance of international cooperation on AI standards as the technology continues to advance and affect sectors worldwide. The agreement follows earlier generative AI rules and standards, including the European Union’s AI Act, and comes amid increasing calls for AI safety measures from businesses and governments alike.
  • National Novel Writing Month, known as NaNoWriMo, has faced backlash from authors over its decision to allow the use of AI, according to an explainer from The Washington Post. The nonprofit organization, which challenges writers to complete a 50,000-word novel draft each November, stated that it would not condemn the use of AI tools, citing the potentially “classist” and “ableist” implications of doing so. The stance has drawn strong reactions from the writing community, with several prominent authors resigning from NaNoWriMo’s boards and at least one sponsor withdrawing support. Critics argue that the organization’s position undermines the creative process and ignores concerns about AI-enabled plagiarism. NaNoWriMo has since updated its statement to acknowledge concerns about unethical uses of AI while maintaining its neutral stance on the technology. The controversy echoes broader debates in creative industries about AI’s role in content creation and intellectual property, reflected in ongoing lawsuits in which publishers and artists allege that AI companies infringed their copyrights by training models on protected works.
  • YouTube has announced new tools to give creators greater control over AI-generated content that uses their likenesses and voices. In a blog post, the company’s Vice President of Creator Products, Amjad Hanif, said that “[a]s AI evolves, we believe it should enhance human creativity, not replace it.” To that end, YouTube said it is developing new technologies to protect creators and artists, including synthetic-singing identification within its existing Content ID recognition system to detect and manage AI-generated content that simulates singing voices, as well as tools to help people detect and manage AI-generated content depicting their faces on the platform. The company is also working on ways to give creators more control over how third parties may use their content for AI development. YouTube emphasized that while it uses content to improve its own AI features, it opposes unauthorized scraping of content by third parties. The blog post also addresses the responsible use of AI-generated content on the platform, reminding creators that such content must adhere to the Community Guidelines and encouraging careful review before publishing.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© McDonnell Boehnen Hulbert & Berghoff LLP
