In the face of rapid technological change, the Department of Justice usually has to play catch-up, as it did when cryptocurrency and blockchain entered the United States economy. Fraudsters and other criminals embrace new technologies to prey on victims; the Justice Department eventually catches up, but the lag often gives "innovative" criminals a market advantage.
DOJ has now set its sights on criminals using artificial intelligence ("AI") technology to execute their criminal ventures. Criminals are using AI to carry out voice-impersonation telephone scams and to gain illegal access to company funds through "fake vendor" and other AI-assisted schemes.
In two recent public appearances, Attorney General Merrick Garland and Deputy Attorney General Lisa Monaco announced AI-focused initiatives.
During an interview, Attorney General Garland announced the hiring of computer scientists to fight AI-driven crime. In particular, AG Garland noted that AI accelerates the threat of cyberattacks against companies and individuals. To respond to this evolving threat, DOJ plans to employ AI solutions to defend its own systems against cyberattacks.
DOJ has hired Jonathan Mayer, a Princeton University academic, to assist with DOJ’s initiative. DOJ intends to hire additional computer science Ph.D.s to take advantage of AI technology.
In a recent speech, DAG Monaco announced an AI initiative to enhance DOJ's ability to detect and prosecute crimes committed through AI technology. As part of this initiative, DOJ will seek harsher criminal sentences for AI-enhanced crimes. DOJ also plans to study ways in which AI can be used internally to advance its operations.
Building on the Biden Administration’s Executive Order on the “Safe, Secure, and Trustworthy Development and Use of AI,” DAG Monaco noted that AI is “accelerating risks to our collective security.” In particular, DAG Monaco identified two specific areas on which DOJ will focus its AI enforcement efforts.
National Security: Last year, DOJ and the Commerce Department created the Disruptive Technology Strike Force (“DTSF”) to enforce export control laws. The DTSF has placed AI at the top of its enforcement priorities to protect against the delivery of sensitive technologies to foreign adversaries.
Election Security: DAG Monaco noted that foreign adversaries may employ AI to accelerate disinformation campaigns on social media to “misinform voters by impersonating trusted sources and spreading deepfakes.” In addition, DAG Monaco noted that AI can be used for “chatbots, fake images and even cloned voices” to spread disinformation.
DAG Monaco stated that DOJ will continue to prosecute AI-enhanced crimes using existing legal tools, and noted that “discrimination using AI is still discrimination,” “price fixing using AI is still price fixing,” and “identity theft using AI is still identity theft.”
Citing firearms offenses as an analogy, DAG Monaco explained that, just as using a firearm can heighten the danger of a crime and trigger a stiffer penalty, the use of AI can increase the danger of a crime and therefore should be subject to an enhanced sentence.
In the future, DOJ may revise its Corporate Enforcement Policy to incorporate sentencing enhancements reflecting the use of AI-assisted technology. DOJ is already using AI to support certain activities, such as tracing the source of opioids, triaging tips submitted to the FBI and synthesizing huge volumes of evidence.
DOJ intends to study the use of AI within the Department and has appointed a Chief AI Officer to bring together law enforcement and civil rights resources. In addition, DOJ created an Emerging Technology Board to advise the Attorney General on the responsible and ethical uses of AI. The board's members include representatives from civil society, academia, science and industry.