Your business may want to jump on the Generative AI (GAI) bandwagon and discover how your company can become more productive and competitive, reduce costs, and make the most of new technology. There are many intriguing and effective GAI programs available for use.
However, there are important considerations your business should evaluate prior to adopting a GAI program. There are many new risks that GAI can create for your business, employees, and consumers. You should ensure that you are aware of these risks and take steps to mitigate them prior to launching a GAI program.
The OECD’s AI Principles[1] are one of many published sets of considerations that businesses should weigh before launching GAI in the workplace. The OECD’s AI Principles generally include:
1. Inclusive growth, sustainable development and well-being.
- Include relevant stakeholders in evaluating whether to implement GAI, including executives, legal, data privacy, subject matter experts, human resources, marketing/customer support, etc.
- Consider the potential beneficial and negative outcomes of the GAI on GAI users and the people whose information will be processed by the GAI.
- GAI uses an immense amount of power, and it should not be used without considering the carbon footprint it creates.
2. Human rights and democratic values, including fairness and privacy.
- Ensure compliance with applicable laws, including existing laws in intellectual property, e.g., copyright law, and data protection laws. This includes making sure the GAI is non-discriminatory, ensures the autonomy of individuals, honors privacy and data protection rights, and is fair to individuals.
- Make sure GAI is not subject to distortion from misinformation and disinformation.
- Implement mechanisms and safeguards in the GAI, including human oversight and control, and the ability to quickly stop the GAI from functioning if needed.
- Have policies and procedures governing the use of GAI in your business.
3. Transparency and explainability.
- Be transparent with users that GAI is being used. Obtain their consent, where required by applicable laws.
- Provide meaningful information to users for a general understanding of AI systems, their capabilities and limitations.
- Provide plain and easy-to-understand information on the sources of data/input of GAI training and the logic that leads to the prediction, content, recommendation or decision of the GAI output.
4. Robustness, security and safety.
- All AI systems should be robust, safe, and secure throughout their lifecycles so that conditions of normal use, foreseeable use or misuse, or other adverse conditions do not pose an unreasonable threat to safety or pose security risks.
- Have an AI Incident Response Plan in place.
- Mechanisms should be in place to ensure that if GAI is causing undue harm or undesired behavior, it can be overridden, repaired, or decommissioned safely, as needed.
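The override mechanism described above can be sketched in code. The following is a minimal illustration, not a definitive implementation: the class and function names (`KillSwitch`, `generate_reply`) are hypothetical, and a production system would tie the flag to an operations dashboard or feature-flag service rather than an in-process object.

```python
# Minimal sketch of a "kill switch" that lets human operators quickly
# disable GAI output. All names here are illustrative assumptions.
import threading


class KillSwitch:
    """Thread-safe flag operators can flip to stop GAI functioning."""

    def __init__(self) -> None:
        self._enabled = True
        self._lock = threading.Lock()

    def disable(self, reason: str) -> None:
        # Record why the system was stopped (for later accountability).
        with self._lock:
            self._enabled = False
            self.reason = reason

    def is_enabled(self) -> bool:
        with self._lock:
            return self._enabled


def generate_reply(switch: KillSwitch, prompt: str) -> str:
    if not switch.is_enabled():
        # Fail safe: fall back to a non-AI response path instead of
        # producing model output.
        return "This feature is temporarily unavailable."
    return f"[model output for: {prompt}]"
```

The design point is that the fallback path must not depend on the model itself, so the system degrades safely the moment a human pulls the switch.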
5. Accountability.
- Identify person(s) or departments who are responsible for the proper functioning and oversight of the AI systems.
- These persons should maintain documentation of the data sets used to train the GAI, the processes and decisions made during the AI system lifecycle, and snapshots of the algorithm’s functionality at particular times, so the company can refer back to these snapshots if the AI begins to work improperly.
- Conduct periodic risk assessments of the GAI’s functionality and outputs.
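The documentation duties above can be sketched as a simple audit record. This is an illustrative sketch only: the record fields (`model_version`, `training_datasets`, `decisions`) are assumptions about what such documentation might capture, not requirements drawn from the OECD text.

```python
# Illustrative sketch of lifecycle documentation for accountability.
# Field names are assumptions; adapt them to your own governance program.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone


@dataclass
class ModelSnapshot:
    model_version: str
    training_datasets: list[str]   # data sets used to train the GAI
    decisions: list[str]           # key lifecycle decisions and rationale
    captured_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


audit_log: list[dict] = []


def record_snapshot(snap: ModelSnapshot) -> None:
    """Append a timestamped record the company can refer back to later."""
    audit_log.append(asdict(snap))
```

Keeping these records append-only and timestamped makes it possible to compare a misbehaving model against an earlier snapshot during a periodic risk assessment.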
In addition to published sets of AI use principles, there are many AI frameworks available for companies to consider when evaluating and implementing AI in their business. For example, NIST’s AI Risk Management Framework[2] can be helpful for a variety of U.S. companies.
[1] See OECD, AI Principles, https://www.oecd.org/en/topics/sub-issues/ai-principles.html (last visited Aug. 27, 2024).
[2] See National Institute of Standards and Technology, AI Risk Management Framework, https://www.nist.gov/itl/ai-risk-management-framework (last visited Aug. 27, 2024).