This week, we hosted a conversation about the future of AI regulation across the 50 states following California Governor Gavin Newsom’s veto of the controversial AI safety bill, SB 1047. Key takeaways are below.
Joining us were:
Key Takeaways
- In vetoing the legislation, Gov. Newsom created a path to address what he saw as its flaws. He voiced concern about regulation that is untethered to specific, articulated risks and said he would bring in a trio of experts to evaluate generative AI guardrails. SB 1047 didn’t account for whether an AI system was deployed in high-risk environments involving critical decision-making or the use of sensitive data, and it applied stringent standards to basic AI functions, says Yavorsky, co-leader of Orrick’s AI practice. She therefore predicts the trio of experts will focus on a new bill that addresses these flaws. (53:23)
- States will focus on specific harms, transparency and labeling. What can we learn from the 17 AI bills Gov. Newsom signed in the past month? Many of these bills targeted specific harms such as deepfakes, pornography, and election disinformation, along with transparency and labeling requirements. Moving forward, Kemp expects state lawmakers to follow California’s lead and focus on specific harms. (8:37)
- The key to state legislative advocacy is to narrow the focus. Focus on the states that have the highest probability of passing comprehensive AI bills that would impact the business, as well as precedent-setting states, Kudon says. In addition, look to states that have acted on similar issues; for AI, state privacy laws are a useful guide. Learn about the states that Kudon (31:46) and Harkins (57:49) are watching.
- Industry stakeholders should take a principled approach. As a business, consider what you can take a stand for, says Harkins. (43:57) Companies have an important role to play in educating lawmakers on how the technology works and its upside, and in being open to addressing the challenges. Learn about a few of the trade associations Harkins recommends (49:24).
- More is coming at the federal and international levels. States are being motivated by inaction at the federal level. However, Harkins says there’s more activity at the federal level than you might expect. (46:41) The U.S. is coordinating with the G7 to develop a code of conduct for advanced AI developers, and leading AI companies are advancing the discussion on frontier systems in D.C. In addition, lawmakers in Sacramento are sharing ideas with EU lawmakers in Brussels.