ChatGPT has taken the world by storm. Though ChatGPT is a form of artificial intelligence (AI), the risks that it presents to businesses are very real. Employers are struggling to devise policies to regulate the use of ChatGPT and other AI tools in the workplace. There is no “one-size-fits-all” policy for employers – every business will need to evaluate its own needs and risks when developing a policy. There are, however, a few policy elements that are universal.
At the most basic level, an employer’s AI policy should identify (1) the prohibited uses of AI; (2) the uses of AI permitted only after authorization has been granted by a designated internal authority or expert; and (3) the uses of AI permitted without any prior authorization. Beyond those basic elements, employers should remember to be “SMARTT.” The acronym SMARTT highlights additional important elements to include in an AI workplace policy: security, measurements, authorization, reporting, training, and transparency.
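For employers that want to operationalize this three-tier structure, it can help to express the tiers in a machine-readable form that internal tools can check and audit. The following is a minimal sketch only; the tier names, the example use cases in the POLICY mapping, and the classify_use helper are hypothetical illustrations, not drawn from any particular employer’s policy.

```python
from enum import Enum

class AIUseTier(Enum):
    """Hypothetical three-tier classification of workplace AI uses."""
    PROHIBITED = "prohibited"                          # never allowed
    REQUIRES_AUTHORIZATION = "requires_authorization"  # allowed only with sign-off
    PERMITTED = "permitted"                            # allowed without prior approval

# Illustrative mapping of use cases to tiers; a real policy would define
# its own list based on the employer's risk assessment.
POLICY = {
    "upload_client_data": AIUseTier.PROHIBITED,
    "draft_external_marketing_copy": AIUseTier.REQUIRES_AUTHORIZATION,
    "brainstorm_internal_meeting_agenda": AIUseTier.PERMITTED,
}

def classify_use(use_case: str) -> AIUseTier:
    """Treat any use the policy does not list as requiring authorization."""
    return POLICY.get(use_case, AIUseTier.REQUIRES_AUTHORIZATION)

print(classify_use("upload_client_data"))     # AIUseTier.PROHIBITED
print(classify_use("summarize_public_news"))  # defaults to REQUIRES_AUTHORIZATION
```

Note the design choice in the sketch: any use the policy does not explicitly list defaults to the middle tier, so unanticipated uses fail safe by requiring sign-off rather than being silently permitted.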
Security
AI workplace policies should include security measures to protect confidential business information. For example, a policy should make clear that employees must never upload confidential client or consumer information to an AI tool without the express consent of the client or consumer.
Measurements
Policies should also address three aspects of measurement: (1) criteria for measuring the level of risk that particular uses of AI pose; (2) a recordkeeping system that captures how AI is actually used in the workplace; and (3) an oversight process that measures the accuracy of, and bias in, the information AI generates. Policies should require users of AI to keep an updated record of each use of the tool, including the prompt entered, the date of the use, the purpose of the use, and the result of the use.
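To keep such records consistent and easy for a review team to process, the log can follow a fixed schema with one entry per use. Below is a minimal sketch assuming a simple structured log entry; the class name, the fields beyond those listed above (such as user and tool), and the JSON serialization are illustrative assumptions, not part of any standard.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIUsageRecord:
    """One entry in an employee's AI-use log, mirroring the fields above."""
    user: str
    tool: str            # e.g., "ChatGPT"
    prompt: str          # the prompt entered
    purpose: str         # the business purpose of the use
    result_summary: str  # the result of the use
    used_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )                    # the date of the use, stamped automatically

# Example: recording a single use and serializing it for the review team.
record = AIUsageRecord(
    user="jdoe",
    tool="ChatGPT",
    prompt="Summarize the attached (non-confidential) meeting notes.",
    purpose="Internal meeting recap",
    result_summary="Three-paragraph summary; reviewed for accuracy before use.",
)
print(json.dumps(asdict(record), indent=2))
```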
Authorization
As mentioned, one of the most important elements of an employer’s policy is clarifying which specific uses of AI require prior authorization and who within the organization may grant it.
Reporting
Policies should include a reporting process that requires all recorded uses of AI in the workplace to be reported to an internal technology team or committee. The policy should make that team responsible for reviewing reported uses, evaluating their risks against the policy’s risk-measurement criteria, and monitoring both reported and unreported uses to determine whether any of the employer’s policies have been violated.
Training
Employers should mandate training of employees on how to use AI in compliance with company policies. Such training should cover the permitted and prohibited uses of AI, the tools’ inherent flaws and the risks they present, and the employer’s systems and workplace policies designed to mitigate those risks. Because AI tools evolve quickly, training should be periodic so that employees stay informed about the technology’s capabilities and limitations as it continues to develop.
Transparency
Policies should also require full transparency. Employees should indicate when content was produced by AI or when AI played a role in completing their work, and disclosure of AI use should be required both internally and externally.