Center for Audit Quality comes to the rescue for audit committees tasked with AI oversight

Cooley LLP

In a 2023 article in Fortune, a survey of 2,800 managers and executives conducted by professional services firm Aon showed that business leaders “weren’t very concerned about AI…. Not only is AI not the top risk that they cited for their companies, it didn’t even make the top 20. AI ranked as the 49th biggest threat for businesses.” Has “the threat of AI been overhyped,” Aon asked, or could it be that the “survey participants might be getting it wrong”? If they were, it wasn’t for long. Fast forward less than a year, and another Fortune article, citing a report from research firm Arize AI, revealed that 281 of the Fortune 500 companies cited AI as a risk, representing “56.2% of the companies and a 473.5% increase from the prior year, when just 49 companies flagged AI risks. ‘If annual reports of the Fortune 500 make one thing clear, it’s that the impact of generative AI is being felt across a wide array of industries—even those not yet embracing the technology,’ the report said.” This widespread recognition of the potential risks of genAI will likely compel companies to focus their attention on risk oversight, and that will almost certainly entail oversight by the audit committee. To assist audit committees in that process, the Center for Audit Quality (CAQ) has released a new resource: an excellent report, Audit Committee Oversight in the Age of Generative AI.

As noted in thecorporatecounsel.net, the report is designed to assist audit committees in those oversight efforts. According to the report, a “recent CAQ survey found that one in three audit partners see companies in their primary industry sector deploying or planning to deploy AI in their financial reporting process…. The rise of genAI is raising important questions about when and how to invest in appropriate technologies that may have an impact on the finance organization and the speed of transformation.” Yet 66% of respondents in the survey said that their audit committees had “spent insufficient time in the past 12 months discussing AI governance.”

With so many potential risks associated with the use of genAI, it will be important for audit committees to raise their level of knowledge about AI to enable and facilitate oversight of those risks. While the report focuses in large part on the use of genAI in processes relevant to financial reporting and internal control over financial reporting (ICFR), it also includes some basic information about AI, designed to provide audit committees with a “foundational understanding of some fundamental principles of genAI, including key features of the technology and how it differs from other technologies that companies may be using,” along with other guidance of general application. For example, for those whose knowledge of AI is, like mine, practically zero, the report discusses the differences among AI, machine learning, deep learning and genAI. In addition, the report discusses, in very basic terms, how genAI works, explaining that, because genAI technologies are

“predictive technologies,…the outputs are based on what the genAI technology has determined is a probable response[,] a key distinction from other technologies that may have historically been used in a company’s financial reporting processes. If a user asks the same question multiple times, they might get different answers each time. Different answers may result because genAI technologies are designed to generate varied responses and are trained on diverse datasets, which leads to a wide range of probable responses to a single prompt. Accordingly, genAI technologies are especially helpful for tasks that need creativity or diversity of responses, including generating new content or information, but genAI may not always provide reliable or repeatable information. GenAI technologies do not work like search engines finding facts within their training data but are instead creating new coherent, human-like text.” 
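To make the report’s point about “probable” responses more concrete, the toy sketch below (my own illustration, not drawn from the CAQ report, and not how any actual genAI product works internally) samples a completion from a small, made-up probability distribution. Because the output is sampled rather than looked up, asking the identical question several times can return different answers.

import random

# Purely illustrative toy "model": given a prompt, sample the next word from a
# made-up probability distribution instead of retrieving one fixed answer.
NEXT_WORD_PROBS = {
    "The revenue forecast is": [
        ("conservative", 0.40),
        ("aggressive", 0.35),
        ("uncertain", 0.25),
    ],
}

def generate(prompt: str) -> str:
    words, weights = zip(*NEXT_WORD_PROBS[prompt])
    # random.choices draws one word in proportion to its weight, so repeated
    # calls with the same prompt can (and often will) differ.
    return f"{prompt} {random.choices(words, weights=weights)[0]}"

if __name__ == "__main__":
    # The same prompt, asked three times, may produce three different responses,
    # loosely mirroring the "varied responses" behavior the report describes.
    for _ in range(3):
        print(generate("The revenue forecast is"))

A search engine or a traditional rules-based financial reporting tool, by contrast, would return the same stored result every time, which is the distinction the report is drawing.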

The report also discusses the challenge that genAI can be a “‘black box,’ meaning that the process to arrive at a specific output is not readily explainable or interpretable, resulting from the inherent complexity of AI algorithms and the nonlinearity of the relationships between the underlying data and the outputs or decisions made.” With regard to financial reporting, the report acknowledges, “explainability and interpretability may become increasingly important for effective human oversight of the technology,” especially as the use of genAI becomes more sophisticated over time.

In the context of financial reporting, the report indicates that, in general, companies “will initially use it to augment processes (rather than fully automate them), which enables efficiency but does not eliminate human judgment and decision making. Particularly in financial reporting processes and ICFR, humans continue to be involved to oversee, understand, and evaluate the relevance and reliability of the outputs from genAI technology. In the future, companies may evolve to deploy more advanced and complex use cases or decrease the level of human involvement.”

Among other concerns, the report advises, companies will need to consider privacy and security needs, including determining whether use of publicly available genAI technologies (such as some genAI chatbots) is appropriate, given that the data may be saved to be used by the third-party technology provider for further development of the genAI model. For genAI technologies used in financial reporting processes and ICFR, “companies may want to be sure that information entered into genAI technologies is not tracked, saved, or used by third parties” to ensure that the company retains control “over how information entered into the genAI technology is managed and saved.” The report cautions that “GenAI technologies may also be susceptible to cyber-attacks which could impact the reliability of outputs provided by the technology or put the company’s confidential data at risk.” In addition, according to the report, the “use of genAI can introduce increased risks of fraud for companies, including risks of fraud perpetrated by management and risks that the company is a victim of fraud perpetrated by external parties.”

Strong oversight and governance, including by the audit committee, will be critical in successfully deploying AI technologies, the report advises. Among the key considerations highlighted by the report are identifying who within the company is responsible for oversight; development of frameworks and policies for responsible, acceptable and ethical use of genAI, along with a process for monitoring compliance; and identifying those genAI uses that are subject to the oversight, framework and policies. The report notes that it is “important for companies to track and monitor the use of genAI throughout the company, including use by third-party service providers, in order to understand the impact of those technologies on processes and to identify, assess, and manage risks arising from their use.” Companies will also want to “establish processes to monitor the ongoing effectiveness of genAI technologies to verify that they continue to operate effectively and as intended.” Other issues for consideration include the “knowledge and skills of employees who will operate genAI technologies, training provided to employees regarding use of prompts, output reliance, and other relevant topics, and the policies and procedures established to promote human review of outputs from genAI technologies.” Companies will also need to be knowledgeable about the “regulatory environment and any contractual agreements, laws, or regulations that impact how the company may use genAI.”

The report (see Appendix A) also provides a slew of questions for the audit committee to pose to management and the auditor regarding a number of important issues, including governance considerations, data privacy and security, selection and design of genAI technologies, deploying and monitoring genAI technologies, fraud and the regulatory environment. For example, the report counsels that audit committees will want to understand “where genAI is being deployed and why management has selected the specific genAI technology for use,” including “how management determines whether to build or buy genAI technologies that have appropriate capabilities to meet the company’s needs.”  In that regard, the report suggests the following questions for management:

  • “How does management identify processes that are appropriately suited for augmentation by genAI?
  • How does management design genAI technologies, including determining which genAI technologies to use (such as selecting an existing genAI technology, using a foundation model with added customizations, or developing the company’s own model) and the data needed for those technologies?
  • How does management select third-party genAI technologies for use?”

On this topic, the audit committee may want to ask the auditor how “the company’s use of a foundation model or development of its own model impact[s] the auditor’s risk assessment.”

The report concludes by reminding us that the AI regulatory environment is rapidly evolving, with “increased calls for stronger regulations related to the safe and responsible development and use of AI, including genAI. Although existing regulations in many countries already govern the use and protection of data or emerging technologies and are applicable to AI, many countries have also begun to adopt new regulations and frameworks specifically to mitigate security and safety risks of AI as well as to advance the ethical and responsible use of AI. It is important for audit committees to exercise oversight and understand whether management involves the appropriate parties to monitor, evaluate, and comply with applicable laws and regulations,” including compliance departments, legal counsel and other external advisors.

There’s a lot more in this useful resource, so be sure to check out the CAQ report!


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Cooley LLP
