A provision that would significantly disrupt the use of artificial intelligence in the workplace is buried deep in a bipartisan federal proposal to legislate data privacy. If the American Privacy Rights Act is passed by Congress as currently proposed, employers would be required to notify applicants and workers when AI is used for workplace decisions – and also allow workers to force employers to remove AI from the equation if they choose to “opt out” of its use for consequential employment decisions. Many questions remain and the proposal is still a long way off from becoming the law of the land. But this issue bears close monitoring due to its potential to be an absolute game-changer. What do employers need to know about this breaking news development?
Summary of Data Privacy Proposal
You can read about the 10 things that employers need to know about the bipartisan data privacy proposal here. The sole focus of this Insight will be on the AI implications for employers.
2 Key Requirements for Employers
The AI portion of the proposal would require employers to do two things if they use AI for certain workplace purposes:
- NOTICE: Provide notice to applicants and employees that AI is being used; and
- OPT-OUT RIGHTS: Give applicants and employees the opportunity to opt out of the use of AI.
What AI is Covered?
The proposal applies to “covered algorithms,” a term that expressly includes the use of AI. It defines a covered algorithm as any computational process, including one derived from machine learning, statistics, or other data processing “or artificial intelligence techniques.” This definition is incredibly broad and could sweep in a wide array of programs employers use to manage human capital.
If the proposal becomes law, employers will need to take stock of exactly which products they are using that may contain some aspect of machine learning, statistics, or other data processing (including AI) to ensure compliance. The proverbial “I didn’t know” defense will not work.
What AI Uses are We Talking About?
The proposal says that the obligation would kick in when employers use AI to make “or facilitate” a “consequential decision.”
- We have somewhat of an idea what “consequential decisions” are from the statute’s definition section – they are decisions that could impact someone’s access to, or equal enjoyment of, a job offer or some other workplace determination. How broadly that phrase could be read is currently open to debate, however. It would certainly include decisions like whether an employer offers an applicant a job or fires a worker. It probably includes such key actions as promoting or demoting an employee. But what if AI is used to help create a performance review? Or to aid in job placements during a restructuring? Or even to help create a job description or job posting?
- More concerning is the ambiguity related to whether AI is being used to make “or facilitate” such decisions. That use of “facilitate” could be interpreted broadly. Certainly an AI program that selects an employee for termination would fall under this category. But what if an AI-fueled resume screener culls through thousands of resumes to pluck out the best 50 for human review? What if an AI program provides daily recommendations to management about human capital improvements and human judgment decides which to pursue and which to ignore? And what if an application’s use of AI is not readily apparent? Employers increasingly rely on AI programs as a support system for all kinds of tasks, and a broad reading of this statute could ensnare all sorts of commonplace actions.
What Would the Notice Require?
Once we determine what AI uses would be covered by the law, employers would be required to provide notice of that use to applicants and workers. To comply with the law, employers would need to provide “meaningful information” to the applicant or employee about how the AI tool makes or facilitates the consequential decision, including the range of potential outcomes.
The form of the notice would need to be:
- clear, conspicuous, and not misleading;
- provided in each language in which the business provides a product or service; and
- reasonably accessible to and usable by individuals with disabilities.
More Importantly – What Would the Opt-Out Requirement Entail?
The next step could be the toughest. Employers who use AI to make or facilitate consequential decisions (however that phrase is ultimately defined) would also be required to provide an opportunity for applicants and workers to “opt out” of such use.
This leaves many questions unanswered. What if an AI tool is designed to review thousands or millions of data sets to provide an efficient summary of information to an employer, and one employee opts out of AI use? Would an employer have to scrap the entire system, or could it conceivably arrive at a usable work-around by introducing human judgment into the process with respect to that one worker? At the other end of the spectrum, could opt-out rates become so high that they defeat the purpose of deploying the AI software in the first place? And if employees choose to opt out, are employers required to provide an alternative pathway for the employment decision?
Questions Might Be Answered – But Not in Time
The statute would require the Federal Trade Commission to coordinate with the Commerce Department (neither of which is necessarily known for its detailed grasp of employment-related dynamics, unlike the EEOC or the Department of Labor) to issue guidance regarding this law – but the agency would have a two-year deadline from the law’s effective date to do so. This gap could cause real problems for employers, since enterprising plaintiffs’ attorneys could take action against employers during this limbo period while questions remain unanswered.
This is especially concerning given the fact that the proposed law gives applicants and employees the right to file private lawsuits in court against employers for alleged AI-related violations. The law would allow them to recover actual damages plus attorneys’ fees.
Small Businesses Would Be Excluded
The silver lining is that smaller employers would not be covered by the proposed law. If your average annual gross revenue for the period of the three preceding calendar years (or for the period during which you have been in existence if less than three years) did not exceed $40 million, you can breathe a sigh of relief.
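For employers trying to gauge whether they fall under this carve-out, the revenue test is simple arithmetic: average annual gross revenue over the lookback period measured against the $40 million figure. The short sketch below is purely illustrative; the function name and inputs are ours rather than the bill’s, and it ignores any other coverage criteria the final law might contain.

```python
# Illustrative only: a rough sketch of the revenue prong of the small-business
# carve-out described above. Names and structure are hypothetical and do not
# come from the bill's text; other coverage criteria are ignored.

REVENUE_THRESHOLD = 40_000_000  # $40 million, per the proposal as summarized above


def exceeds_revenue_threshold(annual_gross_revenues: list[float]) -> bool:
    """Return True if average annual gross revenue over the lookback period
    (the three preceding calendar years, or the period of existence if shorter)
    exceeds the $40 million threshold."""
    if not annual_gross_revenues:
        raise ValueError("At least one year (or partial period) of revenue is required")
    lookback = annual_gross_revenues[-3:]  # at most the three preceding years
    average = sum(lookback) / len(lookback)
    return average > REVENUE_THRESHOLD


# Example: roughly $35M, $42M, and $44M over the three preceding years averages
# about $40.3M, so this hypothetical employer would not qualify for the carve-out.
print(exceeds_revenue_threshold([35_000_000, 42_000_000, 44_000_000]))  # True
```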
California Also Considering Similar Proposal
As we reported a few weeks ago, California lawmakers are pondering a similar measure. A bill aimed at curbing “algorithmic discrimination” would prohibit employers from using AI tools to make consequential workplace decisions that result in such discrimination.
That bill also includes a notice requirement and an opt-out provision – but that mechanism would only be triggered and require human decision-making if “technically feasible,” an escape hatch that does not exist in the federal proposal. You can read all about this and other California proposals here.