The White House has announced a set of binding Artificial Intelligence (AI) policies for federal agencies, intended to protect the privacy, rights, and safety of the American people. Apart from federal contractors developing AI systems for agencies, these policies are not enforceable against private sector entities. However, the purchasing power of the federal government will encourage AI developers and service providers to build ethical, responsible, safe, and transparent features into the AI tools they create.
President Biden’s executive order on AI and these policies have the force and effect of law within the executive branch, but absent congressional legislation, the United States continues to rely on private industry to set its own standards. Nevertheless, industry leaders should be mindful that the government’s internal standards may become a de facto baseline for AI systems, much as the European Union’s General Data Protection Regulation (GDPR) established foundational data privacy concepts well beyond the EU’s jurisdictional reach.
When it comes to actual compliance standards for AI systems, the private sector should look to the AI Accountability Policy Report published by the National Telecommunications and Information Administration (NTIA), an agency within the Department of Commerce. NTIA’s report states, “Providing targeted guidance, support, and regulations will foster an ecosystem in which AI developers and deployers can properly be held accountable, incentivizing the appropriate management of risk and the creation of more trustworthy AI systems.” NTIA expects this ecosystem to be maintained through audits and independent evaluations.