Congress is increasingly focused on potential new approaches to the responsible development and use of artificial intelligence (AI), with some members calling for enhanced oversight of AI systems. Most recently, on June 21, Senate Majority Leader Chuck Schumer (D-NY) released a SAFE Innovation Framework for AI (Innovation Framework), which is designed to provide policy guidance for the development of AI, emphasizing principles such as security, accountability, and explainability.
Senator Schumer’s announcement of a Framework is just the latest in a string of Congressional activities related to AI. In the past several months, the Senate Judiciary Committee has held hearings on Oversight of A.I.: Rules for Artificial Intelligence (which included discussion of requiring certification or licensing of AI tools) and on Artificial Intelligence and Human Rights, and multiple legislative proposals have been put forward.
Below, we outline some of the legislative proposals and hearings from the last several months, including bills that would have a particular impact on government contractors that work with agencies.
Pending Legislation and Proposals
SAFE Innovation Framework for AI: Senator Schumer’s Innovation Framework highlights many of the same principles advocated in the White House Office of Science and Technology Policy’s (OSTP) Blueprint for an AI Bill of Rights, the Department of Defense’s ethical principles, and the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework. The five elements of the Innovation Framework are: (1) Security, (2) Accountability, (3) Foundations, (4) Explainability, and (5) Innovation. The Framework articulates these policy goals to influence not only how AI systems are designed but also how they are used, supporting US-led AI innovation that promotes democratic principles, addresses bias and misinformation concerns, and safeguards national security.
Proposals with Impact on Government Contractors: On June 8, Senators Gary Peters (D-MI), Mike Braun (R-IN), and James Lankford (R-OK) introduced S. 1865, the Transparent Automated Governance Act (TAG Act), which seeks to encourage transparency when AI decision-making is utilized by the federal government. The TAG Act proposes that the Office of Management and Budget (OMB) release guidance for federal agencies utilizing AI for critical decisions. At a high level, the Act defines “critical decisions” to include an “assignment of a score or classification, related to the status, rights, property, or wellbeing of specific individuals or groups,” where the outcome is likely to have different effects on different individuals or groups or to affect access to, or the cost of, essential services or benefits. Further, the Act would require the government to have alternative, non-automated methods of making such decisions.
The language of the TAG Act makes clear that it would apply to third parties acting on behalf of an agency (i.e., government contractors) that use an automated system to determine or substantially influence the outcome of critical decisions. The TAG Act would require the government to provide plain-language notice and an opportunity to appeal when such procedures are used. If the TAG Act is passed, more granular requirements will largely be dictated by future OMB guidance on transparent automated governance and the proper use of algorithmic decision-making, which agencies would in turn implement. Companies interested in providing AI to the federal government should take note of OMB’s guidance if this law goes into effect and update their compliance programs accordingly. Given the TAG Act’s mandate that OMB solicit input from the private sector when developing this guidance, government contractors may have an opportunity to weigh in before the guidance is implemented.
Two other pieces of related pending legislation, both introduced by Senator Michael Bennet (D-CO), might also have implications for companies that intend to sell AI products or services to the government. First, the Oversee Emerging Technology Act (S. 1577) provides for the appointment of an emerging technology lead for each covered agency, who will, among other things, “provide input for procurement policies.” Government contractors should expect this input to make its way into the solicitation requirements of covered agencies. Second, the Assuring Safe, Secure, and Ethical Systems for AI (Assess AI) Act (S. 1356) contemplates the creation of a cabinet-level AI Task Force that would have the ability to set the standard for “AI risk assessment and auditing” for federal agencies. Whether contractors are capable of meeting the standard set will likely become part of the contract solicitation, evaluation, and administration processes.
Congressional Hearing on AI and Human Rights
On June 13, 2023, the Senate Judiciary Committee’s Subcommittee on Human Rights and the Law held a hearing entitled, “Artificial Intelligence and Human Rights.” Members of the subcommittee in attendance included Senator Jon Ossoff (D-GA), Senator Marsha Blackburn (R-TN), Senator Dick Durbin (D-IL), and Senator Richard Blumenthal (D-CT). The hearing primarily focused on issues such as “deepfake” ransom scams, AI’s potential risks to the information ecosystem and its potential for use in scams, the possible risks of using AI in facial recognition, China’s use of AI for surveillance, and the general need for the United States to lead internationally on AI innovation.
Notably, Senators and witnesses stressed that, although AI carries risks, it also promises benefits. Participants nevertheless discussed potential legislative approaches to address AI’s potential harms. For example, Senators Ossoff and Blackburn suggested that there may be a need for new criminal statutes for certain AI-specific harms, such as deepfake scams, and both advocated for a national privacy law to help protect against certain AI abuses. Senators Blumenthal and Durbin questioned witnesses about whether Section 230 of the Communications Decency Act applies to AI. Senator Blackburn also warned about China’s more authoritarian uses of AI, but stressed that the United States should not handicap its own AI development and thereby cede AI leadership to China.
The subcommittee hearing is just one of what will likely be many AI-related Congressional hearings. As lawmakers grapple with the challenges and potential benefits of AI, they will likely consider future regulations, which could take the form of AI development oversight requirements, new criminal or regulatory statutes, or further calls for a national privacy law.
Takeaways
With the uptick in Congressional interest in responsible AI development, companies that are developing and deploying AI tools—including those that hope to secure AI-related government contracts—should pay close attention to Congressional priorities and consider proactively implementing guiding principles to address AI risks.