Earlier this month, the National Telecommunications and Information Administration (NTIA) published its AI Accountability Policy Request for Comment (RFC). The RFC seeks comment on artificial intelligence (AI) system accountability measures and policies. NTIA will use input from the RFC to draft a report on AI accountability policy development, which will focus specifically on “the AI assurance ecosystem.” Several of the RFC’s questions address the possibility of future AI assurance regulations relating to AI audits and risk assessments.
The RFC comes alongside other federal efforts to address AI, such as the White House’s Blueprint for an AI Bill of Rights, the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework (AI RMF), and the Federal Trade Commission’s consideration of potential privacy rules.
Businesses that develop or deploy AI systems should consider responding to NTIA, as the agency’s report could influence the direction of future policy and regulation. Comments on the RFC are due June 12, 2023.
Accountability for Trustworthy AI:
NTIA states that the RFC’s key objective is to assess the current state of accountability for AI systems and whether it is adequate. The RFC seeks input on assurance measures to support “trustworthy AI,” which is intended to “encapsulate a broad set of technical and socio-technical attributes of AI systems such as safety, efficacy, fairness, privacy, notice and explanation, and availability of human alternatives.” NTIA also seeks comment on how accountability measures might “mask or minimize” AI risks, and asks for ways that government action might support and enforce AI accountability practices.
While the RFC acknowledges that many entities are already engaged in accountability regarding cybersecurity, privacy, and other digital technology risks, “[t]he selection of AI and other automated systems for particular scrutiny is warranted because of their unique features and fast-growing importance in American life and commerce.”
As part of its inquiry, the RFC discusses important observations and trends with respect to AI accountability, including the following:
- Growing Regulatory Interest in AI Accountability Mechanisms. The RFC notes that governments—both in the U.S. and abroad—are starting to require accountability mechanisms, including audits and risk assessments of AI systems. Specifically, the RFC cites the EU’s Digital Services Act, which requires audits of very large online platforms’ systems; the draft EU Artificial Intelligence Act, which requires conformity assessments of certain high-risk AI tools before deployment; and New York City Local Law 144, which requires bias audits of certain automated hiring tools. NTIA also references the American Data Privacy and Protection Act (H.R. 8152) and the Algorithmic Accountability Act of 2022 (H.R. 6580), neither of which has been enacted. Additionally, the RFC notes that federal regulators have been addressing AI risks in certain sectors, such as through the Federal Reserve’s SR 11-7 guidance on model risk management. The RFC also highlights concerns about potential harms from AI systems. Specifically, NTIA states that information services like social media, generative AI models, and search engines pose unique risks, such as “harms related to the distortion of communications through misinformation, disinformation, deep fakes, privacy invasions, and other content-related phenomena.”
- Audits and Assessments. NTIA frames assessments and audits as “among the most common” mechanisms to provide AI trustworthiness. NTIA notes that there are differing definitions for audits and assessments, but that audits tend to mean external review of an AI system to test performance against benchmarks, while assessments often refer to internal review to identify risk. The RFC states that audits and assessments generally focus on “harmful bias and discrimination, effectiveness and validity, data protection and privacy, and transparency and explainability.” NTIA also explains that audits may be conducted internally or by independent third parties, and that they may be public or given limited circulation to regulators. NTIA situates audits in a larger “socio-technical context” and notes that the most useful audits and assessments “should extend beyond the technical to broader questions about governance and purpose. These might include whether the people affected by AI systems are meaningfully consulted in their design and whether the choice to use the technology in the first place was well-considered.”
- Legal Standards and Policy Considerations in Accountability Mechanisms. The RFC explains that some accountability mechanisms use legal standards as a baseline—as an example, discrimination laws can form the basis of AI audits or legal compliance actions. The RFC highlights that some firms and startups are offering AI testing for bias and/or disparate impact. However, NTIA also cautions that “for some features of trustworthy AI, consensus standards may be difficult or impossible to create.”
- The Need for Flexibility with Respect to Accountability Mechanisms. Importantly, the RFC also highlights the diverse range of AI systems being deployed and emphasizes the need for flexibility with respect to accountability mechanisms. Specifically, NTIA explains that AI systems are being utilized across a wide swathe of use cases. The RFC notes that the “appropriate goal and method to advance AI accountability will likely depend on the risk level, sector, use case, and legal or regulatory requirements associated with the system under examination.” Further, NTIA notes that there are often tradeoffs when considering AI accountability measures. For example, NTIA explains that some mechanisms may require datasets that include sensitive data that could create privacy or security risks. Notably, the RFC also explains that several private bodies are working to develop metrics and benchmarks for trustworthy AI, but that it will be difficult to harmonize standards, especially where goals involve contested ethical judgments. NTIA notes “[i]n some contexts, not deploying AI systems at all will be the means to achieve the stated goals.”
Request for Comments:
NTIA seeks comment on several areas for addressing barriers and complexities proposed by commentators, such as: “mandating impact assessments and audits, defining ‘independence’ for third-party audits, setting procurement standards, incentivizing effective audits and assessments through bounties, prizes, and subsidies, creating access to data necessary for AI audits and assessments, creating consensus standards for AI assurance, providing auditor certifications, and making test data available for use.”
The RFC also seeks comment on several areas related to AI accountability, grouping the questions into the following categories: (1) AI Accountability Objectives, (2) Existing Resources and Models, (3) Accountability Subjects, (4) Accountability Inputs and Transparency, (5) Barriers to Effective Accountability, and (6) AI Accountability Policies.
- AI Accountability Objectives. The RFC seeks comment on the scope and goals of AI accountability mechanisms, asking particular questions about policy tradeoffs and the general purpose of accountability mechanisms, such as audits and assessments. Notably, the RFC asks whether AI accountability practices can have “meaningful impact in the absence of legal standards and enforceable risk thresholds[.]”
- Existing Resources and Models. The RFC also asks about the state of existing AI accountability mechanisms currently in use, focusing on lessons that can be learned and the best definitions for accountability policies. The RFC seeks comment on the most and least useful laws and regulations that already require AI audits and assessments.
- Accountability Subjects. The RFC requests input on how to best consider AI value chains and supply chains, as well as the AI system lifecycle, when implementing accountability mechanisms.
- Accountability Inputs and Transparency. The RFC seeks comment on documentation and recordkeeping obligations as well as the flow of information needed for AI accountability.
- Barriers to Effective Accountability. The RFC asks about the most significant barriers to AI accountability in the private sector, such as trade secret protections, a lack of standards and benchmarks, and the costs of AI audits and assessments. Notably, NTIA asks whether the lack of a federal data protection/privacy law, or of a federal law on AI systems generally, is a barrier to effective AI accountability.
- AI Accountability Policies. The RFC asks several questions about possible AI accountability policies and/or regulation. Among other regulatory questions, NTIA specifically asks about desirable features of a federal law focused on AI systems as well as whether AI accountability regulation should focus on increasing access to AI systems for auditors.