Deputy U.S. Attorney General Lisa Monaco recently sparked debate by stating, “Like a firearm, AI can enhance the danger of a crime.” And just as prosecutors can seek enhanced sentences for offenses involving firearms, the Department of Justice (DOJ) will be seeking enhancements for certain offenses involving artificial intelligence (AI).
In a recent speech[1] at Oxford University, Monaco announced that, moving forward, DOJ prosecutors will seek “stiffer sentences” for offenses made “significantly more dangerous by the misuse of AI.” The speech, however, raised more questions than it answered, offering little guidance on how the DOJ plans to pursue such enhanced sentences in practice, or on whether federal courts will agree with the DOJ’s approach.
Key takeaways
- On February 14, in a speech at Oxford University in the U.K., Deputy U.S. Attorney General Monaco announced that federal prosecutors will pursue harsher sentences for offenses made more dangerous by AI.
- Over the next six months, the DOJ will convene individuals from various sectors and industries to give their perspectives on AI. These convenings will inform a report to President Biden on the use of AI in the justice system.
- The DOJ – acknowledging that it is not “exempt from AI governance” – is creating guidance with other federal agencies to regulate the government’s use of AI. This guidance could provide a framework for other entities seeking to implement AI usage policies.
Background
The Deputy U.S. Attorney General recently traveled to the U.K. to promote U.S. and U.K. collaboration in combating threats to global security. While giving a speech at Oxford, Monaco framed AI as one such perceived threat, describing its ability to disrupt elections around the world. She also spoke on the ways the DOJ believes bad actors can use AI to perpetrate unlawful conduct, from price fixing to identity theft.
The DOJ, however, has a plan to deter misuse of AI. As Monaco announced, “where Department of Justice prosecutors can seek stiffer sentences for offenses made significantly more dangerous by the misuse of AI, they will. … And if we [the DOJ] determine that existing sentencing enhancements don’t adequately address the harms caused by misuse of AI, we will seek reforms to those enhancements to close that gap.”
While the announcement represents a major new policy enforcement priority for the DOJ, Monaco provided little guidance on how the government will actually seek to use sentencing enhancements to combat the purported dangers of AI. In particular, Monaco did not specify which sentencing enhancement(s) from the U.S. sentencing guidelines, if any, might apply to the misuse of AI. She did note, however, that if existing sentencing enhancements do not sufficiently address the harms caused by AI’s misuse, the DOJ will “seek reforms” to “close that gap.” This suggests that the DOJ might seek amendments to the sentencing guidelines that explicitly address the potential dangers of AI should the existing guidelines prove insufficient.
Monaco also did not explain how the DOJ will determine whether AI has made an offense “significantly more dangerous” so as to warrant an enhanced sentence. Although, in her speech, Monaco compared the use of AI in an offense to the use of a firearm in an offense, the dangers of firearms are clear and there are explicit statutory penalties and sentencing guidelines applicable to firearm use. It is less obvious whether the mere use of AI in an offense automatically makes the offense more dangerous or harmful, and Monaco’s comments suggest that the DOJ will seek enhancements not in all such cases but rather in an undefined category of cases where it believes AI substantially augmented the dangerousness of the offense.
It is also unclear how courts will receive the DOJ’s arguments for enhancing sentences (especially absent an amendment to the sentencing guidelines addressing AI). As technology has changed, so too have the ways bad actors engage in unlawful conduct. And while certain forms of AI represent cutting-edge technology in 2024, other technologies that are commonplace today were similarly groundbreaking years ago. If prosecutors sought enhanced sentences for every offense made more dangerous by technology, the resulting sentences could be disproportionately harsh and fail to reflect how routine such technology has become. Courts might therefore hesitate to set a precedent allowing enhanced sentencing for offenses involving future technology. Courts might also not be receptive to the position that offenses involving AI are somehow unique and thus warrant enhanced sentences.
Moreover, Monaco did not define “AI” or explain what the DOJ considers to be “AI.” There are different kinds of AI, which run the gamut from less to more advanced. “Traditional AI,” for example, builds off algorithms that can be combined and directed in expected ways, following specific sets of rules and inputs. This type of technology is already commonplace, e.g., voice-controlled speakers on cellphones, scripted chatbots and automated manufacturing approaches. “Generative AI,” on the other hand, generates new, often unique (and sometimes unexpected) outputs that include text, audio, images, video and code. Tech companies have been, and are, developing technologies using subsets, variations and even combinations of these forms of AI. Monaco, however, did not account for these nuances or specify which forms of AI the DOJ will consider for purposes of enhanced sentencing.
What can we expect?
Although Monaco did not reveal many details regarding the DOJ’s plan to seek enhanced sentences for AI-related offenses, she did share some steps the DOJ is taking that may soon provide guidance. First, acknowledging that the DOJ itself “cannot be exempt from AI governance,” Monaco announced that the DOJ is currently “undertaking a major effort with our fellow federal agencies to create guidance to govern our own use of AI.” According to Monaco, the guidance would ensure that the DOJ, and the U.S. government as a whole, “applies effective guardrails for AI uses that impact rights and safety.” Indeed, as Monaco did not shy away from mentioning, the U.S. government currently employs AI to fight and investigate crime. According to Monaco, the government has already used AI to classify and trace sources of drugs, to understand the millions of tips submitted to the FBI annually, and to synthesize large volumes of evidence collected in major cases (such as the January 6 insurrection). This forthcoming guidance, even if only applicable to the government, could serve as a framework for other entities, including corporations, seeking to implement governance and controls for their own AI use. Indeed, the DOJ may, in the future, consider such guidance in evaluating the adequacy and effectiveness of corporate compliance programs.
Second, Monaco announced the “Justice AI” initiative, which will, over the next six months, “convene individuals from across civil society, academia, science and industry” to provide their varied perspectives on AI. The discussions will “include foreign counterparts grappling with many of the same questions” (underscoring the DOJ’s view of AI as a threat to global security). The initiative will inform a report to President Biden regarding the use of AI in the justice system by the end of the year. This report could illuminate the government’s specific plans for seeking enhanced sentences against bad actors who misuse AI, as well as any other government plans to combat the perceived threats of AI.
Conclusion
As the announcement from Monaco shows, the DOJ is taking the perceived threat of AI seriously. In Monaco’s words, “every new technology is a double-edged sword, but AI may be the sharpest blade yet.” Enhanced sentencing for offenses involving AI is just one step in tackling that perceived threat; however, the DOJ has not yet specified which sentencing enhancements from the sentencing guidelines might apply to AI-related offenses or what it takes for AI to make an offense “significantly more dangerous.” The DOJ’s forthcoming guidance regarding its own AI use as well as the Justice AI convenings and report to President Biden will hopefully clarify the government’s plans for addressing AI and potentially provide a framework for entities facing similar AI-related questions and concerns.
[1] https://www.justice.gov/opa/speech/deputy-attorney-general-lisa-o-monaco-delivers-remarks-university-oxford-promise-and