Use of Generative AI Poses Risk to Companies

Robinson+Cole Data Privacy + Security Insider

Many companies are exploring the use of generative artificial intelligence technology (“AI”) in day-to-day operations. Some companies prohibit the use of AI until they get their heads around the risks. Others are allowing the use of AI technology and waiting to see how it all shakes out before determining a company stance on its use. And then there are the companies that are doing a bit of both and beta testing its use.

No matter which camp you are in, it is important to set a strategy for the organization now, before users adopt AI and the horse is out of the barn, much as we are seeing with the issues around TikTok. Once users become accustomed to the technology in day-to-day operations, it will be harder to pull them back. Users do not necessarily understand the risks posed to the organization when they use AI in their work.

Hence the need to evaluate the risks, set a corporate strategy for the use of AI in the organization, and disseminate that strategy to employees in a clear and meaningful way.

We have learned much from the explosion of technology, applications, and tools over the last few decades, through our experience with social media, tracking technology, disinformation, malicious code, ransomware, security breaches, and data compromise. As an industry, we responded to each of those risks in a haphazard way. It would be prudent to learn from those lessons and try to get ahead of the use of AI technology to reduce the risk posed by its use.

One suggestion is to form a group of stakeholders from across the organization to evaluate the risks posed by the use of AI, determine how the organization can reduce those risks, set a strategy for the use of AI within the organization, and put controls in place to educate and train users. Setting a strategy for AI is no different from addressing any other risk to the organization, and similar processes can be used to develop a plan and program.

There are myriad resources to consult when evaluating the risks of using AI. One I found helpful is A CISO’s Guide to Generative AI and ChatGPT Enterprise Risks, published this month by the Team8 CISO Village.

The report outlines the risks to consider, categorizes them as high, medium, or low, and then explains how to make risk decisions. It is spot on and a great resource if you are just starting the conversation within your organization.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Robinson+Cole Data Privacy + Security Insider
