
On September 23, 2024, DOJ announced updates to its Evaluation of Corporate Compliance Programs guidance (Guidance). These are the latest in a series of updates (including in 2019, 2020, and 2023) since DOJ first released the Guidance in 2017. Prosecutors use the Guidance to evaluate corporate compliance programs in the context of criminal investigations. While largely unchanged overall, the updated Guidance includes additions focused on emerging risk factors that DOJ has identified to account for changing circumstances. The additions fall into three main areas: (i) an evaluation of how companies assess and manage risk related to new technologies such as artificial intelligence (AI); (ii) a set of questions to evaluate whether companies are encouraging employees to report misconduct; and (iii) an assessment of whether compliance programs have appropriate access to data to evaluate corporate risks and compliance program effectiveness.
The first area of updates in the Guidance involves evaluating how companies assess and manage risk around AI and other new technologies. As stated in the Guidance, “[w]here relevant, prosecutors should consider the technology—especially new and emerging technology—that the company and its employees are using to conduct company business, whether the company has conducted a risk assessment regarding the use of that technology, and whether the company has taken appropriate steps to mitigate any risk associated with the use of that technology.” The Guidance poses several questions for compliance officers to consider as they work to manage emerging risks and ensure compliance with the law, including:
- Is management of risks related to use of AI and other new technologies integrated into broader enterprise risk management (ERM) strategies?
- How is the company curbing any potential negative or unintended consequences resulting from the use of technologies, both in its commercial business and in its compliance program?
- How is the company mitigating the potential for deliberate or reckless misuse of technologies, including by company insiders?
- How does the company train its employees on the use of emerging technologies such as AI?
- How quickly can the company detect and correct decisions made by AI or other new technologies that are inconsistent with the company’s values?
The second area involves asking questions to determine whether companies encourage employees to report misconduct and whether companies engage in practices that chill reporting of potential noncompliance. One particular measure of interest to DOJ is how a company assesses its employees’ willingness to report misconduct. A related point of emphasis is the importance of a company’s commitment to whistleblower protection and anti-retaliation. Key questions on this point include: (1) whether a company trains employees on both internal anti-retaliation policies and external anti-retaliation and whistleblower protection laws; and (2) whether employees who report misconduct internally are disciplined in the same manner as employees involved in the misconduct who did not report it.
The third area assesses whether compliance personnel have knowledge of, and access to, data resources that help measure a compliance program’s efficiency and effectiveness. The Guidance asks how a company is managing the quality of its data sources and how the company is measuring the accuracy, precision, or recall of the data analytics models it uses. The Guidance also inquires whether a company allocates assets, resources, and technology to compliance and risk management functions at a level that is proportionate to other areas of the company.
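The Guidance does not define accuracy, precision, or recall; in conventional usage they are simple ratios comparing a model’s outputs to verified outcomes. The Python sketch below is a hedged illustration only, assuming a hypothetical compliance-alert model whose flags are compared against the results of manual review; the figures and labels are invented for illustration and do not come from the Guidance.

```python
# Hypothetical example: how a compliance team might measure a
# data analytics model against manually reviewed outcomes.
# 1 = flagged/confirmed as a compliance issue, 0 = not.

model_flags     = [1, 1, 0, 1, 0, 0, 1, 0, 0, 1]   # model's predictions
review_outcomes = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]   # reviewer-confirmed results

pairs = list(zip(model_flags, review_outcomes))
true_pos  = sum(1 for f, o in pairs if f == 1 and o == 1)  # real issues the model flagged
false_pos = sum(1 for f, o in pairs if f == 1 and o == 0)  # flags that were not real issues
false_neg = sum(1 for f, o in pairs if f == 0 and o == 1)  # real issues the model missed
true_neg  = sum(1 for f, o in pairs if f == 0 and o == 0)  # correctly ignored items

accuracy  = (true_pos + true_neg) / len(pairs)    # share of all calls that were correct
precision = true_pos / (true_pos + false_pos)     # share of flags that were real issues
recall    = true_pos / (true_pos + false_neg)     # share of real issues the model caught

print(f"accuracy={accuracy:.2f} precision={precision:.2f} recall={recall:.2f}")
```

On this invented data the model is 70% accurate, 60% of its flags are real issues (precision), and it catches 75% of the real issues (recall); tracking metrics of this kind over time is one way a company could document how it monitors the effectiveness of its compliance analytics.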
Finally, DOJ continues to emphasize the importance of being proactive about risk management. The Guidance includes new language instructing prosecutors to consider whether a company’s approach to risk management is proactive or reactive. To ensure risks are well managed, companies should be prepared to demonstrate that they can proactively identify potential misconduct or compliance program issues as early as possible.
A copy of the Guidance is available here. The meaning of “artificial intelligence,” as used in the Guidance, is set forth on pages 26-27 of a White House OMB memo that is available here.