New Tech, Old Theories: The DOJ Antitrust Division's Workshop on Artificial Intelligence

Axinn, Veltrop & Harkrider LLP

Overview

On May 30, 2024, the DOJ’s Antitrust Division hosted a workshop for global antitrust authorities, academics, financiers, and private sector representatives to discuss competition across the AI stack, potential threats AI poses to content creators, and AI regulation. Attendees included Jonathan Kanter (DOJ), Doha Mekki (DOJ), Susan Athey (DOJ), Karen Croxson (CMA), Vera Jourova (EC), Condoleezza Rice (Stanford), Andrew Ng (DeepLearning.AI), Chris Wolf (VMware), and Venky Ganesan (Menlo Ventures), among others. Representatives from Microsoft, Amazon, Nvidia, OpenAI, and Google were not included as panelists.

Key Themes

All eyes on major technology players, including Nvidia and OpenAI. Jonathan Kanter set the tone early with his opening remarks, referring to the advantages that what he called “dominant firms” and “existing power[s]” enjoy in accessing the data and computing power required for AI. He also cited the potential for the “problems market power on the internet has caused in journalism” to spread to other content creation markets via AI. Beyond Mr. Kanter’s more general concerns, the enforcement priority clearly appears to be identifying “bottlenecks” across the full AI “stack,” from access to chips, engineering talent, data, and cloud computing capacity to AI distribution channels, and closely scrutinizing deals and partnerships among key players across the stack. And, while participants recognized the benefits of vertical integration, some called for scrutiny of players who are purportedly “dominant” at one level of the stack and may threaten consumer choice at other layers. One private sector attendee warned about certain tech companies “throwing their weight around” and attempting “regulatory capture” by proposing AI regulation that favors established players by driving up the costs of regulatory compliance, which disproportionately burdens their smaller rivals.

Since the workshop, the DOJ and FTC have reportedly reached a deal dividing responsibility for investigating the roles of Microsoft, OpenAI, and Nvidia in the AI industry: the DOJ will focus on Nvidia, while the FTC takes Microsoft and OpenAI. That follows the FTC’s January announcement of its inquiry into Microsoft’s investment in OpenAI, as well as Amazon’s and Google’s separate investments in Anthropic.

When to regulate, and how? Many other private sector attendees (along with academics and Dr. Condoleezza Rice) urged the government not to regulate AI hastily. AI is moving incredibly swiftly, but a popular view was that there is not yet an adequate understanding of the risks AI poses, an understanding that could be improved by developing AI benchmarks and evaluation standards. Ill-informed regulation may hamper competition, threaten innovation, “fix” problems that never existed, act as a guise for protectionism, or disadvantage the U.S. relative to other countries in the global race for technology dominance. While regulation is almost certainly inevitable, the popular view was that any regulation should focus on how people use and apply AI (and be tailored to those individual use cases and their specific risks), rather than on AI research or the technology itself.

Open source v. proprietary models? There was a robust debate around the benefits and drawbacks of open-source versus proprietary AI models. Many took the side of open source (and interoperability) as fostering transparency and competition: broader access to the baseline technology and data allows smaller competitors to innovate on top of it and helps prevent “lock-in” with larger technology companies. On the other hand, open-source AI technology may be more susceptible to misuse to promote disinformation (a particular focus area of the Biden Administration and Democrats like Senator Klobuchar), to build weapons, or to launch cyberattacks.

Take-Aways

  1. For now, the antitrust agencies appear focused on what they perceive to be the potential for certain tech companies to control “bottlenecks” across the AI stack. Any AI partnership, collaboration, investment, or acquisition involving these tech players is going to be subject to intense scrutiny. Expect recurring calls for additional transparency (including regarding data collection), interoperability, and open sourcing, as well as scrutiny of vertical integration even where it is defended as a competitive virtue.
  2. New AI regulation or litigation is unlikely in the immediate future, but AI is clearly under the agency spotlight, with many investigations ongoing across the globe. For the moment, agencies and legislatures appear squarely in “investigation” mode, armed only with preliminary theories of potential AI harms (including harms to competition). That could change quickly, however, should a true AI leader begin to emerge, particularly given AI’s geopolitical implications.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Axinn, Veltrop & Harkrider LLP | Attorney Advertising

