Generative AI hardware - the other arms race

A&O Shearman

Generative AI has captivated the world since November 2022. But this transformative technology would not be possible without the specialized hardware and infrastructure necessary for the training and operation of the large language models that underpin Generative AI – and model developers have been facing challenges in securing hardware that is powerful enough. In this article, we explore three core themes: (i) the constraints facing the AI hardware industry, (ii) the impact of AI regulations and export controls, including the rise of new network models, and (iii) strategies for protecting IP in AI hardware design.

What are AI chips?

With the release of OpenAI’s ChatGPT in 2022 and the proliferation of competing large language models, the demand for Generative AI use cases has skyrocketed and, with it, the demand for processing power and computer chips powerful enough to develop and operate AI models.

But AI chips are not ordinary computer chips. They require greater processing power than general-purpose central processing units (CPUs) and are specially designed to handle the massive volumes of data and calculations that Generative AI algorithms demand. They are faster and more energy efficient than CPUs, which they achieve by incorporating greater numbers of smaller transistors, executing calculations in parallel rather than sequentially, storing an entire AI algorithm on a single chip, and using programming languages that optimize AI code execution. While there are different types of AI chips for different tasks, the most widely used are graphics processing units (GPUs), which are most often used for training AI algorithms.
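To make the parallel-versus-sequential distinction concrete, the illustrative Python sketch below (our own simplification, not vendor code) shows the core workload of an AI model – a large matrix multiplication – computed two ways. Both produce the same answer, but the vectorized form expresses the work as thousands of identical, independent operations, which is precisely the shape of computation a GPU can spread across its many cores.

```python
# Illustrative sketch: why parallel execution matters for AI workloads.
# A neural-network layer is, at its core, a large matrix multiplication.
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.standard_normal((64, 128))   # a batch of 64 input vectors
weights = rng.standard_normal((128, 32))  # one layer's weight matrix

# Sequential view: compute each output element one at a time, CPU-style.
sequential = np.zeros((64, 32))
for i in range(64):
    for j in range(32):
        for k in range(128):
            sequential[i, j] += inputs[i, k] * weights[k, j]

# Parallel view: a single matrix multiply, expressed so that hardware
# can execute the underlying multiply-accumulate operations simultaneously.
parallel = inputs @ weights

# Same result either way; the difference is how the work can be scheduled.
assert np.allclose(sequential, parallel)
```

The batch and layer sizes here are arbitrary toy values; production models multiply matrices orders of magnitude larger, which is why the ability to run these operations simultaneously, rather than one after another, dominates hardware design.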

Because of these technical requirements, AI chips are more expensive, more complex, and more difficult to produce than CPUs. The latest generation of Generative AI systems requires state-of-the-art AI chips – older AI chips, with their larger, slower, and more power-hungry transistors, incur huge energy consumption costs that quickly balloon to unaffordable levels. The shortage of AI chips has become a major bottleneck for the AI industry, as well as a potential risk factor for investors: for the first time, Microsoft’s annual report identified the availability of GPUs as a possible risk factor. Nvidia’s latest generation of AI chips had been expected to ship this year but was delayed until 2025 due to design flaws, which may cause disruption for customers such as Google, Microsoft, and Meta.

This points to the increasing need for management and boards to be even more alive to the risks that may threaten the supply chains of critical hardware. This includes gaining a deeper understanding of the current market dynamics and the players that dominate the AI chip arms race. Nvidia’s technology, built over the past decade, is currently considered the world leader in this generation of AI chip design, with industry giants such as Intel and AMD moving quickly to catch up. Because AI hardware is key to both training and operating AI models, it can cause a bottleneck for the companies racing to build large language models and for their customers that run Generative AI applications. At the same time, AI chip manufacturers remain focused on their own supply issues, as competing AI GPUs are often produced at the same foundries, which have their own production limits.

AI regulations: a double-edged sword

In addition to understanding the supply chain for Generative AI hardware, companies must also keep on top of legal and geopolitical considerations, which often culminate in regulatory restrictions. Most countries have focused on regulating the consumer-facing applications of Generative AI – such as chatbots, deepfakes, and automated decision making – rather than the technology underpinning those applications. But the US is managing a different risk: supply. It has imposed restrictions on the export of AI chips and chip manufacturing equipment with a view to limiting China’s access to AI computing power. Although AI regulations in the Asia Pacific region have also been directed towards applications, the manufacturing of Generative AI chips and associated processes may be impacted by the decision to capture AI hardware and chips within the US’s sanctions and export controls. These moves have significant implications for the global AI hardware industry, as well as for the users and customers of Generative AI services – particularly in China.

However, these restrictions have not stopped China from pursuing its Generative AI ambitions. If anything, the constraints will force innovation and self-reliance in Generative AI hardware. China has been developing its own domestic AI chip industry and accessing Generative AI computing power via the cloud. Nvidia is also reportedly planning to release a version of its latest AI chip aimed at the Chinese market that will comply with heightened US trade restrictions. To address this cloud loophole, the US Congress is now considering further measures to block China from remotely accessing American semiconductors and chips that it cannot purchase under the export controls. In July 2023, Representative Jeff Jackson introduced a bill that would prohibit US persons and subsidiaries from providing support for the remote use or cloud use of any integrated circuits. To learn more, read our recent article on sanctions and export controls here.

While they may spur innovation in China, the US sanctions have also caused disruptions and shortages in other sectors that rely on Generative AI chips, such as smartphones and smart cars. For Huawei, one of China’s leading technology companies, the US sanctions have forced a diversification into Generative AI chip production, at the expense of the chips it produces for use in smartphones and driver assistance functionality.

New network models: a way forward

In response to these constraints and challenges, chip manufacturers and their customers are developing novel strategies to work around the shortages of, and restrictions on, Generative AI chips – and to contain the growing costs that supply chain and regulatory constraints impose.

On one level, we are seeing the rise of AI “Infrastructure-as-a-Service” models, where Generative AI hardware is virtualized and accessed in a manner similar to ordinary cloud-based computing offerings. Given the constraints on AI hardware, access to and availability of AI chips are critical to developing these capabilities. This has led chip manufacturers to partner directly with cloud service providers, commercializing capacity rather than supplying chips only to OEMs. Manufacturers thereby reach more end users without incurring much additional risk, while end users gain access to GPU computing power without suffering delays in hardware manufacture and supply.

We also see the complementary rise of “edge AI”, where AI chips are deployed directly on local edge devices – like sensors or internet-of-things devices – rather than on centralized servers in data centers. This reduces latency and bandwidth consumption, permitting edge systems to run Generative AI applications efficiently, although this approach is limited to running smaller or more narrowly focused language models.

These evolving models can, however, give rise to challenging legal issues for companies engaging in Generative AI cloud computing or edge AI.

We are seeing chip manufacturers applying significantly more commercial leverage in AI chip supply arrangements – mandating significant pre-commitments, revenue sharing or equity models. We are also seeing regulators engage with the scope of export restrictions for Generative AI hardware – including whether to extend it to cloud models. This will be front of mind for service providers and their customers further down the service chain. For edge AI, industries must contend with additional regulations – for example, car companies developing smart cars or autonomous vehicles must ensure compliance with privacy, data processing, and telecommunications regulations, and this may extend to AI regulations depending on the adoption of Generative AI models and hardware.

Protecting trade secrets

In response to increased demand, technology hardware companies are investing in the accelerated development of chip design and technologies to drive increasingly energy-efficient and powerful AI hardware.

While the classic means of protecting this valuable intellectual property is patents, patent protection requires the disclosure of the innovation to the world – including to competitors. Trade secret protections, by contrast, have the benefit of preserving the secrecy of valuable intellectual property.

Typically, for information to qualify as a trade secret, it must: (a) not be generally known, (b) be the subject of reasonable steps to keep it secret, and (c) have actual or potential value arising from the fact that it is not generally known. So long as this test is met, trade secrets can cover all kinds of information, and that information need not be novel or unique, as is required for patents. For example, the recipe for Coca-Cola and the formula for WD-40 are both trade secrets. The protection provided by trade secrets can – potentially – last indefinitely, without the need to file an application or seek approval. Unlike patents, however, trade secrets confer no exclusionary rights: owners cannot exclude others from the underlying information and can only prevent others from misappropriating it.

The major advantage of relying on trade secrets as the manner of protection for intellectual property is that it keeps one’s competitors in the dark for as long as that information remains “secret”. And, in an area such as AI chip technology - where what is known and being discovered is evolving so rapidly - keeping any new discovery a secret can be immensely valuable.

Conclusion

Although demand for Generative AI development and applications is exploding, there are constraints in the manufacture and supply of the underlying hardware. These, together with the imposition of regulatory restrictions, pose challenges for the industry. Companies that use Generative AI in their businesses – whether internally or as part of their market offerings – must remain alert to the supply chain and geopolitical considerations affecting the Generative AI chip supply. In response, companies may look to new computing and networking models, such as AI “Infrastructure-as-a-Service” and “edge AI”, that are beginning to gain ground. In doing so, however, they must address the novel legal issues that can arise. Companies looking to develop new intellectual property in Generative AI chip design to meet increasing demand must also seek the appropriate protection. As Generative AI becomes integrated into our businesses and lives, the stakes for securing the necessary AI hardware infrastructure will only increase. It is essential for companies to stay informed, proactive, and adaptable in this dynamic environment.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© A&O Shearman
