[co-author: Stephanie Kozol]*
On September 5, the attorneys general (AGs) of 54 U.S. states and territories called on Congress to address bad actors who generate child sexual abuse material (CSAM) using artificial intelligence (AI). Framing the issue as a “race against time,” the letter highlights the harms of AI-generated CSAM and asks Congress to study the problem and propose solutions.
What Happened
In their letter, the AGs highlighted that AI is already being used to generate CSAM. They allege that AI tools are being used to create so-called deepfakes (i.e., media of an individual whose face or body has been digitally altered to appear as someone else) by studying real photographs of abused children to generate new exploitative images, or by combining photographs of abused and nonabused children to animate media depicting nonexistent children.
The AGs further state that even non-deepfake CSAM generated by AI causes harm. The letter provides four reasons why such CSAM is harmful: (1) the images are often still based on images of abused children; (2) even when the CSAM is not based on images of abused children, using images of nonabused children risks real-life future harms to those children and their parents; (3) CSAM continues to support the growth of the child exploitation market; and (4) the wide availability of AI tools makes generating such images easy.
The AGs concluded by asking Congress to establish an expert commission to study how AI can be used to generate CSAM, specifically requesting that the commission operate on an ongoing basis. The AGs also asked Congress to act on any recommendations the commission proposes, to ensure that prosecutors have adequate tools to prosecute violative conduct.
*Senior Government Relations Manager