Companies in all industries remain focused on implementing policies and training programs to govern employees’ use of generative AI (GenAI) tools. Meanwhile, early adopters with policies in place continue to evaluate and update them.
To do so, company leadership and legal counsel must calibrate the company’s risk tolerance, weighing the significant risks of using GenAI tools against the benefits—both generally and in the company’s specific competitive, regulatory, and consumer context.
We are seeing companies consider the full spectrum of risk-tolerance-based policies, from full-stop prohibition to passive allowance. For conceptual simplicity, the sliding scale can be grouped into four “levels,” from least restrictive to most restrictive:
Level 1: Permitting the use of GenAI tools for internal or external purposes. This may include warnings or education regarding high-risk uses.
Level 2: Permitting the use of GenAI tools for internal experimentation and ideation only—not for use in “live” or external content.
Level 3: Permitting the use of only certain GenAI tools, which have been vetted and pre-approved (“whitelisted”) on a case-by-case basis. Companies may specify how whitelisted tools may be used (e.g., internal and external use, internal use only, certain use cases only).
Level 4: Full prohibition of GenAI tools. This may include technological measures to block access on company networks and devices.
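For Level 4 policies, the “technological measures” referenced above often amount to DNS- or proxy-level domain blocking. The minimal Python sketch below illustrates one crude form of this, generating hosts-file entries for company-managed devices; the domains shown are hypothetical placeholders, not a vetted blocklist.

```python
# Hypothetical sketch: emit hosts-file entries that black-hole known
# GenAI tool domains on company-managed devices, a simple form of the
# network blocking a Level 4 policy might employ.
# Domains are illustrative placeholders only.

BLOCKED_DOMAINS = [
    "chat.example-genai-vendor.com",
    "api.example-genai-vendor.com",
    "app.example-image-generator.com",
]

def hosts_entries(domains: list[str]) -> str:
    """Return lines suitable for appending to /etc/hosts (or the
    Windows hosts file), routing each domain to the null address."""
    return "\n".join(f"0.0.0.0 {d}" for d in domains)

print(hosts_entries(BLOCKED_DOMAINS))
```

In practice, enforcement is usually handled by managed DNS, web proxies, or endpoint software rather than hosts files, but the default-block principle is the same.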
Some companies implement a restrictive policy as a temporary stop-gap to buy time to get their arms around the risks and benefits of GenAI. After further business and legal analysis, those companies may (or may not) decide to revise the policy to something less restrictive.
Where on the scale a particular company falls is often driven by the nature of the business. There are obvious examples. A cutting-edge manufacturer rich with trade secrets likely falls on the restrictive end. The same may be true for companies in highly regulated industries. On the other hand, an online “content farm” with an advertising-based revenue model based on speed and volume of content generation may be more willing to accept risk. Many businesses end up somewhere in the middle.
To move beyond philosophical discussions and implement a policy, many companies assemble a cross-functional AI committee (or convene an existing information security committee or similar group) to undertake a practical cost-benefit analysis. The following questions provide a starting point for risk calibration (a simple scoring sketch follows the list):
- Do we have existing company policies or risk frameworks regarding information security, software licensing, procurement, or the like that may apply, or can be adapted, to GenAI?
- Do we operate in a highly regulated industry?
- Are we B2B, B2C, or a combination?
- Is our business dependent on trade secrets and other competitively sensitive information?
- Is our business dependent on owning or licensing IP rights (e.g., copyright, patent, trademark)?
- Is our business dependent on creating digital assets or generating digital content?
- Does our business hold sensitive personal data?
- Would use of GenAI provide specific competitive advantages?
- Is use of GenAI necessary to compete in our industry?
- How do our various business teams want to use GenAI?
- Are our specific GenAI use cases low-risk or high-risk?
- Can we easily separate high-risk use cases from low-risk use cases, given the nature and organizational structure of our business?
- Can we use GenAI for our intended use cases without inputting confidential, proprietary, personal, or other sensitive data?
- Will we use the output from GenAI tools in external materials, products, or services?
- Do we plan to tout in external marketing that our products or services are powered by or involve GenAI?
- Is controlling and monitoring employee use of GenAI realistic, given the nature and organizational structure of our business?
- Do we rely on third-party agencies and vendors to create our marketing materials, design our products, etc.?
- Do we plan to use GenAI tools to make decisions in areas such as finances, health, education, housing, employment, or other areas in which regulators have warned about the risks of AI bias and discrimination?
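One way to move from discussion to decision is to convert answers like these into a rough score. The Python sketch below is purely illustrative: the factor names, weights, and thresholds are assumptions invented for this example, not a validated rubric, and any real calibration would be tailored by counsel to the company’s circumstances.

```python
# Hypothetical sketch: mapping risk-calibration answers to a suggested
# policy "level" (1 = least restrictive, 4 = most restrictive).
# Factor names, weights, and thresholds are illustrative assumptions.

# Positive weights push toward restriction.
RISK_FACTORS = {
    "highly_regulated_industry": 3,
    "trade_secret_dependent": 3,
    "holds_sensitive_personal_data": 2,
    "output_used_externally": 2,
    "high_risk_use_cases": 2,
    "decisions_in_regulated_domains": 3,  # finance, health, housing, etc.
}

# Negative weights push toward permissiveness.
BENEFIT_FACTORS = {
    "competitive_advantage": -2,
    "necessary_to_compete": -3,
    "can_avoid_sensitive_inputs": -1,
    "can_segregate_high_risk_uses": -1,
}

def suggest_policy_level(answers: dict[str, bool]) -> int:
    """Return a suggested policy level (1-4) from yes/no answers.

    `answers` maps factor names to True/False; unanswered factors
    are treated as False. Thresholds are arbitrary placeholders.
    """
    score = sum(
        weight
        for factors in (RISK_FACTORS, BENEFIT_FACTORS)
        for name, weight in factors.items()
        if answers.get(name, False)
    )
    if score >= 8:
        return 4  # full prohibition
    if score >= 4:
        return 3  # whitelisted tools only
    if score >= 1:
        return 2  # internal experimentation only
    return 1      # permitted, with training on high-risk uses

# Example: a regulated, trade-secret-heavy business holding sensitive
# personal data, but which views GenAI as necessary to compete, might
# land at Level 3.
print(suggest_policy_level({
    "highly_regulated_industry": True,
    "trade_secret_dependent": True,
    "holds_sensitive_personal_data": True,
    "necessary_to_compete": True,
}))  # -> 3
```

The point is not the arithmetic but the discipline: forcing the committee to state which factors matter, how much, and where the cutoffs between policy levels sit.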
Importantly, AI policies must clearly define the GenAI tools subject to the policy. Nearly all software today includes some AI functionality, and that trend is accelerating. To avoid throwing out the baby with the bathwater, the policy may need to differentiate GenAI tools from software that merely includes prior-generation AI functionality.
To go a step further, even “true” GenAI tools are not created equal. Policies may treat publicly available GenAI tools that lack important IP, confidentiality, and other protections differently from commercially licensed GenAI tools with more robust contractual and technological protections. This is where “whitelisting” may be useful: a company may choose to start with a Level 3-type policy, which prohibits GenAI tools by default but provides a list of approved, commercially licensed tools. Based on recent announcements from the largest tech companies, GenAI is already featured (or imminently will be) in ubiquitous software used by nearly all businesses. As such, a blanket prohibition on the use of all GenAI, without exceptions, may not be feasible.
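To make the whitelisting concept concrete, the sketch below models a default-deny registry in which each approved tool carries its own permitted uses, as described above. The tool names and use categories are hypothetical placeholders, not endorsements of any product.

```python
# Hypothetical sketch of a Level 3 ("whitelist") policy registry:
# every GenAI tool is prohibited unless expressly approved, and each
# approved tool is limited to its approved uses. Tool names and use
# categories are illustrative placeholders only.

from enum import Enum, auto

class Use(Enum):
    INTERNAL_IDEATION = auto()   # brainstorming, drafts never shipped
    INTERNAL_CODE = auto()       # code kept in-house
    EXTERNAL_CONTENT = auto()    # marketing copy, customer-facing text

# Default-deny: tools absent from this registry are prohibited.
APPROVED_TOOLS: dict[str, set[Use]] = {
    "vendor-a-enterprise-llm": {Use.INTERNAL_IDEATION, Use.INTERNAL_CODE},
    "vendor-b-image-generator": {Use.INTERNAL_IDEATION},
}

def is_permitted(tool: str, use: Use) -> bool:
    """True only if the tool is whitelisted for this specific use."""
    return use in APPROVED_TOOLS.get(tool, set())

# A public chatbot that was never vetted is denied by default...
assert not is_permitted("public-chatbot", Use.INTERNAL_IDEATION)
# ...and even an approved tool is limited to its approved uses.
assert is_permitted("vendor-a-enterprise-llm", Use.INTERNAL_CODE)
assert not is_permitted("vendor-a-enterprise-llm", Use.EXTERNAL_CONTENT)
```

The default-deny structure mirrors the legal posture of a Level 3 policy: the burden is on each tool (and each use case) to earn approval, rather than on the company to enumerate everything prohibited.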
To be clear, however, commercially licensed GenAI tools may not be risk-free. The providers of such tools may agree not to disclose information input as prompts by users, which may address some confidentiality concerns. But other open legal and regulatory issues surrounding GenAI may still apply. For example, ongoing court cases will test whether scraping and using third-party content as “training data,” and creating outputs based on such data, infringe copyright or violate the right of publicity or other laws. While some GenAI tool providers claim to hold ownership of, or licensed rights to, all training data, it is difficult (or impossible) for licensees to verify those claims independently. Representations, warranties, indemnification, and other contractual terms may shift or mitigate risk, but likely will not eliminate it. Separately, the law continues to develop regarding who, if anyone, owns IP rights in outputs created by GenAI tools.
Obviously, there is a lot to consider. If you are still working through GenAI policy, training, and compliance issues, you are certainly not alone.