As we enter the second full year of the artificial intelligence (AI) revolution, a clear understanding of the technology and its legal implications is crucial for every General Counsel (GC).
From understanding the technology itself and its limitations to navigating legal uncertainties and establishing best practices, this alert covers the most pressing AI-related issues in the 2024 legal landscape. In addition to legal considerations, such as IP protection and compliance under the newly effective Corporate Transparency Act (CTA), GCs will also gain insight into business use cases and the opportunities AI offers to maximize financial returns.
1. Understanding the Technology
Much of the current literature and discussion on AI conflates the various types and subsets of AI technologies. “AI” broadly refers to any machine or software designed to mimic human intelligence. “Machine learning,” another widely used term, is a subset of AI in which systems learn from data to make predictions or decisions. While AI and machine learning have been around for decades (think IBM’s Watson), it was the wide release of generative AI (Gen AI) tools like ChatGPT in 2022 that arguably kicked off the current era in AI. Broadly speaking, Gen AI tools create new content such as text, images, songs, computer code, and video in response to user prompts. Each subset of AI tools and technology has its own unique advantages and legal risks, so understanding the distinctions between them is critical for businesses to achieve their goals while effectively managing risk.
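To make those distinctions concrete, consider the minimal sketch below. It is illustrative only: the scikit-learn classifier stands in for any machine-learning system that infers decisions from historical data, and generate_marketing_copy is a hypothetical placeholder for a vendor’s Gen AI service, not a real API.

```python
# Illustrative sketch only: contrasts machine learning (learns a decision
# rule from labeled data) with generative AI (creates new content from a
# prompt). The Gen AI call is a hypothetical stand-in, not a real API.
from sklearn.tree import DecisionTreeClassifier

# Machine learning: the model infers a rule from historical examples.
X = [[620], [700], [745], [810]]   # e.g., applicant credit scores
y = [0, 1, 1, 1]                   # past approve/deny decisions
model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[680]]))      # outputs a decision; no new "content"

# Generative AI: the tool produces novel text/images/code from a prompt.
def generate_marketing_copy(prompt: str) -> str:
    """Hypothetical wrapper around a vendor's Gen AI text service."""
    raise NotImplementedError("stand-in for a third-party Gen AI API")
```

The legal risk profiles differ accordingly: the first model’s risks center on the decisions it automates, while a generative tool’s risks center on the new content it creates.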
2. Knowing the Limitations
As we delve deeper into the world of AI, it is critical to recognize its limitations, especially in the context of Gen AI. One should not assume that anything created using Gen AI is suitable for commercial use. AI-generated content may be inaccurate or biased, or may infringe third-party intellectual property rights, particularly copyrights, trademarks, and publicity rights. Gen AI tools, for instance, have been known to reproduce the likenesses of famous celebrities and create songs that sound like real musicians. Trademarks or designs created using Gen AI could be confusingly similar to existing third-party marks. Further, Gen AI-authored software may contain code that is subject to restrictive or cumbersome third-party or open-source license agreements. Commercial entities using such content without the necessary rights or approvals may face significant liability. In light of these limitations, it is critical for GCs to remain skeptical and diligent when evaluating potential uses of Gen AI tools and content.
3. Hedging Against Legal Uncertainty
The next year may determine whether the current moment in Gen AI will be remembered as its “Napster Era” or its “Spotify Era.” A handful of pending lawsuits are challenging a core assumption among Gen AI developers: that the use of third-party copyrighted works to train Gen AI tools is “fair use” under the US Copyright Act. If a court concludes that this assumption is incorrect, Gen AI developers could be exposed to staggering damages for copyright infringement. Such a ruling could potentially open the floodgates for a new wave of lawsuits alleging vicarious or contributory copyright infringement. On the other hand, a finding of “fair use” would bring much-needed clarity to the industry and might alleviate some of the legal risks. As we await an authoritative court decision on this issue, GCs should consider whether and how to account for this legal uncertainty when evaluating the risk of specific uses their business teams may propose.
4. Establishing Best Practices to Protect Your IP
A recent decision by the US Copyright Office Review Board could have far-reaching implications for works of art created in part by Gen AI. On December 11, 2023, the Review Board affirmed a refusal to register a work of art partially created by Gen AI, concluding that the work lacked the “human authorship” necessary to claim copyright protection. This marks the third time in recent months that the Review Board has issued a written opinion analyzing the impact of Gen AI on copyright protection, and it continues a trend of courts and the Copyright Office rejecting copyright protection for AI-generated works. The ruling has significant implications for rights owners. If a work contains too much Gen AI content, it could lose copyright protection, either in whole or in part. Moreover, copyright applicants must disclose the inclusion of AI-generated content in their copyright applications. Failure to do so can lead to cancellation of the copyright registration and, consequently, loss of access to federal courts and of the ability to seek statutory damages from infringers. GCs should work closely with creative teams to implement best practices and policies that help reduce the risk that any particular work product will be unprotectable under US copyright law.
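One way creative teams might operationalize such a policy is a simple provenance log that flags AI-generated material so it can be identified and disclosed in copyright applications. The sketch below is illustrative only; the field names are hypothetical assumptions about a company’s asset inventory.

```python
# Illustrative provenance log so AI-generated material can be identified
# and disclosed in copyright applications; field names are hypothetical.
from dataclasses import dataclass

@dataclass
class AssetRecord:
    asset_id: str
    author: str
    contains_ai_content: bool
    ai_tool_used: str | None = None  # which Gen AI tool, if any

assets = [
    AssetRecord("IMG-001", "design team", contains_ai_content=True,
                ai_tool_used="(Gen AI image tool)"),
    AssetRecord("IMG-002", "design team", contains_ai_content=False),
]

# Surface everything that would need disclosure in an application.
to_disclose = [a.asset_id for a in assets if a.contains_ai_content]
print("Disclose in application:", to_disclose)
```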
5. Helping the Business Identify and Evaluate Potential Use Cases
There is no shortage of Gen AI tools currently available on the market covering a wide variety of potential uses. While business teams may ultimately make the final call on which use cases and tools best fit your company’s needs, legal has a critical role to play, both in prioritizing use cases and in evaluating third-party tools and technology. For example, legal can help the business identify and distinguish between “high” risk and “low” risk use cases. Legal should also work closely with the IT department on vendor selection and negotiations, paying particular attention to data security, IP protection, and tool transparency to ensure the chosen AI tools align with the company’s legal and compliance frameworks.
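As a purely illustrative example of that triage, the sketch below scores a proposed use case against a handful of legal risk factors. The factors, weights, and thresholds are hypothetical assumptions, not a standard and not legal advice.

```python
# Hypothetical triage rubric for Gen AI use cases; the factors, weights,
# and thresholds are illustrative assumptions only.
RISK_FACTORS = {
    "output_published_externally": 3,   # public content raises IP/publicity risk
    "trains_on_third_party_content": 3, # implicates the fair-use uncertainty
    "processes_personal_data": 2,       # privacy and data-security exposure
    "informs_employment_decisions": 3,  # discrimination/bias exposure
    "internal_drafting_aid_only": -2,   # human-reviewed internal use is lower risk
}

def triage(use_case: dict[str, bool]) -> str:
    """Sum the weights of the factors present and map to a risk tier."""
    score = sum(w for factor, w in RISK_FACTORS.items() if use_case.get(factor))
    return "high" if score >= 4 else "low" if score <= 0 else "medium"

print(triage({"internal_drafting_aid_only": True}))    # low
print(triage({"output_published_externally": True,
              "trains_on_third_party_content": True})) # high
```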
6. Building Your Own Tools or Applications
While “off-the-shelf” Gen AI solutions may be sufficient for many businesses, others may go a step further by building custom AI tools or “fine-tuning” a third-party tool to better suit their needs. Bespoke AI tools and applications are likely to become increasingly common as Gen AI technology improves and adoption increases. To take one example, the market for bespoke Gen AI legal tools is already flourishing, with tools designed specifically for contract drafting, legal research, and more. Companies that build their own tools or fine-tune existing tools can improve output, make tools easier to use, and potentially reduce their infringement risk. However, GCs will also have to contend with critical issues, such as ensuring the bespoke tool operates in a transparent and reliable manner, securing any necessary rights and permissions for content in company data sets, carefully drafting vendor contracts, and adequately protecting IP and confidential data.
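On the rights-and-permissions point, the sketch below shows one hedged way a team might gate a fine-tuning corpus on documented permissions before any training occurs; the field names and approved-license list are hypothetical assumptions about a company’s data inventory.

```python
# Illustrative sketch: filter a candidate fine-tuning corpus down to records
# with documented rights. Field names and the approved-license list are
# hypothetical assumptions about a company's data inventory.
APPROVED_LICENSES = {"company-owned", "cc0", "licensed-for-ml-training"}

def has_training_rights(record: dict) -> bool:
    """A record clears only if its license is approved AND consent is documented."""
    return (record.get("license") in APPROVED_LICENSES
            and record.get("consent_obtained", False))

corpus = [
    {"text": "...", "license": "company-owned", "consent_obtained": True},
    {"text": "...", "license": "unknown", "consent_obtained": False},
]

training_set = [r for r in corpus if has_training_rights(r)]
print(f"{len(training_set)} of {len(corpus)} records cleared for training")
```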
7. Updating Template Agreements and Terms
With the rising use of Gen AI, it is crucial for companies to reevaluate their standard contracts and agreements to ensure they account for Gen AI-related uses. This could include template publicity releases, copyright license agreements, employment agreements, and consulting services agreements. GCs should ask: do these agreements grant the right to use acquired content or information for machine learning model training or deployment? Do the Software as a Service (SaaS) licenses or agreements include appropriate warranties, permissions, work product ownership provisions, and disclaimers? Similarly, companies should take a fresh look at their Terms of Use to ensure they adequately address potential AI-related uses. For instance, companies may want to consider broadening the scope of any licenses they obtain to user-generated content or data. They may also want to consider prohibiting certain third-party conduct on their sites, such as web scraping or other unauthorized data harvesting, and informing users that certain features or access to services may be restricted or modified in response to legal or regulatory changes.
8. Preparing for More Governmental Oversight
As we previously examined, federal agencies have been actively evaluating the use of AI. For the past three months, the government has sharpened its focus on how US data can be, and is being, used in AI systems across various sectors, including chip processing, access to cloud computing, and AI training with national security implications. US agencies have launched workforce education and R&D funding initiatives in addition to the various regulatory proposals required by President Joe Biden’s Executive Order, such as the US Department of Commerce’s (DOC) rule requiring reporting by companies that provide computing power for foreign AI training or that maintain large computing clusters for training AI models. US agencies are also assessing the risk AI systems pose to critical infrastructure and weighing the AI safety test results and other vital information obtained from the developers of powerful AI systems. Companies working in the defense, technology, economic, or public health sectors will need to consider how to respond to further requests and how they plan to safeguard consumer data that is used to train, or is processed by, AI systems. The US Federal Trade Commission (FTC) is also increasingly focused on the effects of AI’s rapid development and deployment, particularly in relation to consumer products and privacy laws, and on how AI tools impact the work of creators. This focus echoes the FTC’s launch of an investigation into the role that major cloud service providers are playing in Gen AI companies and follows the five-year ban the FTC imposed on Rite Aid’s use of facial recognition technology in December 2023. US companies developing or deploying general-purpose AI models or AI systems in the European Union (EU) will have to plan for compliance with the EU AI Act in addition to the US federal and state laws and regulations now in development.
9. Complying with Employment Laws
From recording employee productivity to maximizing efficiency in human resources, AI holds tremendous potential for employers. However, the capabilities and potential pitfalls of AI are challenging the limits of federal and state labor laws, including the Fair Labor Standards Act (FLSA), which has defined the 40-hour work week for nearly a century. AI tools that monitor the keystrokes, mouse activity, and/or webcams of remote and in-person employees may undercount (or overcount) compensable time spent away from a computer. Similarly, cameras and sensors that might be used on factory floors to monitor worker productivity might not account for compensable time off the floor, such as changing into or out of uniforms or gear. Moreover, Gen AI tools used in employment decisions may perpetuate unlawful discrimination or bias. Before implementing any such tools, companies should carefully consider their obligations under the FLSA and how much they can rely on an AI tool.
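To make the undercounting risk concrete, consider the illustrative sketch below. The 10-minute idle threshold and the sample timeline are hypothetical assumptions, not a description of any actual monitoring product.

```python
# Illustrative only: shows how an activity-based monitoring heuristic can
# undercount compensable time. The idle threshold and sample data are
# hypothetical assumptions, not a description of any real tool.
IDLE_THRESHOLD_MIN = 10  # monitor treats longer gaps as "off the clock"

# (gap_minutes, description) between successive keystroke/mouse events
gaps = [
    (2, "reading on screen"),
    (25, "donning protective gear"),  # likely compensable, but no keystrokes
    (5, "typing"),
    (40, "in-person team meeting"),   # compensable, but away from the computer
]

monitored = sum(g for g, _ in gaps if g <= IDLE_THRESHOLD_MIN)
actual = sum(g for g, _ in gaps)
print(f"monitor credits {monitored} min; actual worked time {actual} min")
# monitor credits 7 min; actual worked time 72 min
```

Even in this toy example, the activity-based heuristic credits only a fraction of the time actually worked, which is precisely the gap that could create FLSA exposure.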
10. A Note for the AI Startups
Are you the GC for an AI startup? If so, don’t overlook the CTA. As of January 1, 2024, this federal law requires many companies to disclose information about the individuals who own or control them to the Financial Crimes Enforcement Network (FinCEN), a bureau of the US Department of the Treasury (USDT). Covered companies must report the identities of their “beneficial owners,” generally the individuals who own certain interests in the company or have the right to exercise certain control over it. Beneficial owners are individuals, not corporate entities or trusts. A reporting company that is formed or registered in 2024 must file its initial report with FinCEN within 90 days of its formation or registration. Companies that were formed or registered prior to January 1, 2024, must file their initial report by January 1, 2025. After filing its initial report with FinCEN, a reporting company must update its FinCEN filing within 30 days of a change in its beneficial ownership or certain other reported information. If you are using AI to manage your entity formation and reporting obligations, be mindful of this new law (and others like it), especially as many Gen AI tools do not account for changes in legal and regulatory requirements.
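As a rough illustration of the initial-report deadline mechanics described above (a sketch only, and no substitute for confirming FinCEN’s current rules):

```python
# Illustrative sketch of the CTA initial-report deadlines described above:
# 90 days for companies formed or registered in 2024; January 1, 2025 for
# companies formed before 2024. Not legal advice; confirm current FinCEN rules.
from datetime import date, timedelta

def initial_report_deadline(formation_date: date) -> date:
    if formation_date < date(2024, 1, 1):
        return date(2025, 1, 1)
    # Companies formed after 2024 may be subject to a different window;
    # that case is not modeled here.
    return formation_date + timedelta(days=90)

print(initial_report_deadline(date(2023, 6, 15)))  # 2025-01-01
print(initial_report_deadline(date(2024, 3, 1)))   # 2024-05-30
```

Remember that the separate 30-day window for updating a filed report runs from each change in beneficial ownership, so the monitoring obligation is ongoing rather than a one-time deadline.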