AI News Roundup – California AI regulation bill, AI model collapse, AI updates to Amazon’s Alexa voice assistant, and more

McDonnell Boehnen Hulbert & Berghoff LLP

To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.

  • California’s legislature has sent S.B. 1047, a sweeping AI regulation bill, to the desk of Governor Gavin Newsom, according to KQED, San Francisco’s NPR affiliate. As summarized in our previous coverage (here and here), the bill would, among other things, require that powerful AI models adhere to specific safety protocols before deployment and would place reporting requirements on the developers of such models. The bill has been controversial and went through several rounds of revisions before reaching its current form. Notably, the bill no longer establishes a division within the California Department of Technology “to ensure continuous oversight and enforcement” of the bill, no longer establishes criminal penalties for perjury related to the bill’s provisions, and no longer contains provisions that would allow California’s attorney general to sue AI companies for negligent safety practices before a catastrophic event has occurred; rather, a company may be sued for civil penalties only after a harm has occurred. The bill has attracted opposition from a broad range of political and industry leaders, including former Speaker of the U.S. House of Representatives Nancy Pelosi, who represents San Francisco; OpenAI Chief Strategy Officer Jason Kwon; and companies such as Google and Meta. Notable supporters include Elon Musk, CEO of X (formerly Twitter); Anthropic, developer of the Claude AI chatbot; and prominent AI researchers Yoshua Bengio and Geoffrey Hinton. Governor Newsom has yet to indicate whether he will sign or veto the bill, which he must do by September 30.
  • MLCommons, an AI and machine learning industry consortium, released the results of its latest AI inferencing hardware competition, according to IEEE Spectrum. The MLPerf Inference v4.1 benchmarks featured first-time submissions of AMD Instinct accelerators, Google’s latest Trillium accelerators, and chips from startup UntetherAI, as well as the debut of Nvidia’s new Blackwell chip. While Nvidia’s GPUs still dominated overall performance, particularly in the data center category, competitors showed promising results in power efficiency and specific use cases. Notably, Nvidia’s Blackwell chip outperformed the previous generation by 2.5x on the LLM Q&A task, leveraging 4-bit precision and increased memory bandwidth. UntetherAI’s speedAI240 demonstrated superior power efficiency in image recognition tasks while also excelling in edge computing scenarios. Additionally, companies such as Cerebras and FuriosaAI announced new inference chips outside of the MLPerf competition, indicating a rapidly evolving and competitive landscape in AI inferencing hardware as companies rush to catch up to industry leader Nvidia.
  • The New York Times’ Upshot reports on a growing problem in the AI industry: AI systems being trained on AI-synthesized data. As generative AI systems flood the internet with vast amounts of text and images, AI companies risk inadvertently ingesting this synthetic content when training future models. This creates a feedback loop that can lead to a phenomenon called “model collapse,” in which AI output gradually deteriorates in quality and diversity. Researchers have shown that repeatedly training AI on its own output can produce distorted images, incoherent text, and a narrowing range of responses (a simplified simulation of this feedback loop appears after this roundup). This issue not only affects the quality of AI-generated content but also poses challenges for the industry’s growth, potentially increasing training costs and energy consumption. To address these problems, experts suggest that companies pay for high-quality human-generated data, develop better AI-detection methods such as watermarking, and carefully curate any synthetic data used in training.
  • OpenAI and Anthropic have agreed to work with the U.S. federal government’s AI Safety Institute to evaluate their in-development models, according to reports from Bloomberg. The collaboration, announced this past week, will give the AI Safety Institute early access to major new AI models from these companies so that it can assess their capabilities and potential risks. This initiative aims to enhance safety testing and develop methods to mitigate potential issues associated with advanced AI technologies. The AI Safety Institute, part of the National Institute of Standards and Technology (NIST) in the Department of Commerce, will work closely with its UK counterpart to provide feedback on potential safety improvements. Both OpenAI and Anthropic have expressed strong support for the institute’s mission, viewing it as crucial for responsible AI development and U.S. leadership in the field.
  • WIRED reports that several major companies and publishers have opted their websites out of being used to train Apple’s new AI products. The opt-out is implemented through Applebot-Extended, a recently introduced extension of Apple’s web crawler Applebot that lets website owners tell Apple not to use their data for AI training. Prominent organizations such as Facebook, Instagram, Craigslist, The New York Times, The Financial Times, Condé Nast, and others have chosen to exclude their data from Apple’s AI training efforts. This decision reflects a growing trend of websites blocking AI crawlers to protect their intellectual property as lawsuits over copyright and AI training proceed around the country. The article notes that while only about 6% to 7% of high-traffic websites currently block Applebot-Extended, the number is gradually increasing, particularly among major news and media outlets. Some publishers view this as a strategic move, potentially withholding data until partnership or licensing agreements are in place (as Condé Nast did with OpenAI this past month). The situation highlights the evolving landscape of web crawling and data usage in the AI era, with robots.txt files and other formerly obscure elements of web-scraping architecture becoming a crucial battleground for access to AI training data (a sample robots.txt rule appears after this roundup).
  • Amazon’s upcoming updates to its Alexa voice assistant products will feature Anthropic’s Claude AI models, according to Reuters. The report, based on information from five unnamed sources familiar with the matter, states that Amazon turned to Anthropic’s AI after its in-house AI software struggled with performance issues. The new “Remarkable” version of Alexa, expected to launch in October 2024, will offer more advanced capabilities powered by Claude’s generative AI and is planned to be a paid service costing between $5 and $10 per month, while the classic voice assistant will remain free of charge. This move is part of Amazon’s strategy to make Alexa more profitable and competitive in the rapidly evolving AI landscape. The company aims to enhance Alexa’s functionality, enabling it to handle complex queries, offer shopping advice, and serve as an improved home automation hub. While Amazon acknowledges using various technologies to power Alexa, including its own models, the reliance on Anthropic’s Claude marks a significant shift in its approach to AI development for its voice assistant.
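
To make the “model collapse” dynamic described in the third item above more concrete, the following is a minimal, illustrative Python sketch of the feedback loop (a toy example of our own construction, not drawn from the Times article or any particular study). A trivial “model,” just a mean and a standard deviation, is fit to data, used to generate synthetic data, refit on that synthetic data, and so on; the diversity of the output, measured by its standard deviation, tends to shrink toward zero across generations.

    import random
    import statistics

    def train(samples):
        """Fit a toy 'model': just the mean and standard deviation of the training data."""
        return statistics.mean(samples), statistics.pstdev(samples)

    def generate(model, n):
        """Produce n synthetic data points by sampling from the fitted model."""
        mu, sigma = model
        return [random.gauss(mu, sigma) for _ in range(n)]

    random.seed(0)
    human_data = [random.gauss(0.0, 1.0) for _ in range(10)]  # diverse, human-generated data
    model = train(human_data)

    # Each new generation is trained only on the previous generation's synthetic output.
    for generation in range(1, 101):
        model = train(generate(model, 10))
        if generation % 10 == 0:
            print(f"generation {generation:3d}: output std dev = {model[1]:.4f}")

    # The printed standard deviation tends toward zero as the generations progress,
    # a toy counterpart of the "narrowing range of responses" described in the article.

A production training pipeline is vastly more complex, but the mechanism is the same: each round of training sees only what the previous model produced, so rare or unusual content is progressively lost.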
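
Similarly, for the Apple item, the opt-out that publishers are exercising is expressed through a site’s robots.txt file. The exact rules vary from site to site, but a minimal sketch of the pattern looks like the following; the Applebot-Extended rule is the Apple-documented way to withhold content from AI training, and the accompanying Applebot rule is included only to show that ordinary crawling for search-related features can still be permitted.

    # Tell Apple not to use this site's content to train its AI models.
    User-agent: Applebot-Extended
    Disallow: /

    # Ordinary Applebot crawling (e.g., for Siri and Spotlight suggestions) remains allowed.
    User-agent: Applebot
    Allow: /

Because robots.txt is purely advisory and each AI crawler must be named individually, publishers have to keep these files up to date as new crawlers appear, which helps explain why the WIRED article treats them as a battleground for AI training data.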


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© McDonnell Boehnen Hulbert & Berghoff LLP
