In a rare example of bipartisan agreement, the dramatically named DEFIANCE (Disrupt Explicit Forged Images and Non-Consensual Edits) Act unanimously passed the Senate. The bill allows victims of AI-generated deepfake pornography to sue anyone who knowingly creates, receives, or distributes such images. It only addresses civil liability and only applies to pornographic content, but we would not be surprised to see more expansive legislation in this area in the future.
Utah’s stringent social media laws survived a court challenge when a federal judge dismissed a claim by NetChoice that the legislation was preempted by federal law under Section 230. In dismissing the claim, U.S. District Judge Robert J. Shelby found that the law’s restrictions on autoplay, seamless pagination, and notifications for minors’ accounts fall outside the scope of Section 230. Utah has become a flashpoint in the debate over social media, its impact on children, and related First Amendment concerns.
Meta has removed more than 60,000 accounts involved in sexual extortion scams aimed predominantly at men in the United States. The accounts were based in the fraud hotbed of Nigeria and used sexually compromising photos—both real and fake—to blackmail victims. The vast majority of the now-deleted accounts were on Instagram, but Meta also deactivated more than 7,000 Facebook accounts, pages, and groups that offered advice (including scripts) on how to defraud people. Although the scams mostly targeted adults, there were also attempts against minors, which Meta reported to the U.S. National Center for Missing and Exploited Children.
In another uncharacteristically bipartisan vote, the Senate voted 91-3 in favor of the Children and Teens’ Online Privacy Protection Act (COPPA 2.0) and the Kids Online Safety Act (KOSA). This is significant legislation that would create a legal “duty of care” requiring social media companies to prevent and mitigate harm to minors. Violators would be subject to penalties enforced by the Federal Trade Commission. Opponents argue that the bill’s definition of harm is too broad and could lead to censorship of content addressing politically polarizing issues, gender equality, or abortion rights. The bill will now go to the House, where its passage is less certain. Should it pass, this legislation would dramatically impact the operations of social media companies. We’re watching this very closely.
Red-hot AI startup Perplexity has entered into agreements with several major media companies including Time, Fortune, and Automattic (owner of WordPress.com). Perplexity will begin featuring its partners’ content in AI-assisted search results and sharing a percentage of the ad revenue with them. This move comes on the heels of plagiarism accusations by news organizations such as Forbes and Condé Nast, the latter of which reportedly sent a cease-and-desist letter to the startup.
The U.S. Copyright Office issued a report calling on Congress to take urgent action against the distribution of unauthorized deepfakes. The report stems from the Office’s 2023 initiative to examine the intellectual property implications of generative AI. Technology has always outpaced lawmakers’ efforts to keep up with bad actors, but AI deepfake capability has advanced at a staggering speed. Microsoft has also called for federal legislation addressing deepfakes.