As companies increasingly leverage automated technologies in their recruiting and hiring processes, legislators and regulators are focusing on establishing guardrails to ensure fairness. As a result, companies whose recruiting and hiring technologies surpass a certain automation threshold may now be subject to comprehensive compliance frameworks requiring proper notice, risk assessments, and ongoing audits.
What Is the Threshold for Applicability?
Lawmakers around the world are targeting varied uses of automated hiring tools, but the threshold for triggering additional regulation generally requires:
- Autonomy: The tool operates with a degree of autonomy, such that its processing is not entirely or substantially dependent on human involvement.
- Influence: The tool replaces or has a substantial influence over a traditionally human decision-making process.
- Impact: The decision made by the tool, or based on the tool’s output, has a legal or similarly significant effect on an individual’s life, including in relation to their access to or the terms of employment or job opportunities (such as hiring, promotion, termination, task allocation or pay).
Are My Hiring and Recruiting Practices Subject to AI Regulation?
While the regulatory frameworks around automated technologies are changing rapidly and vary by jurisdiction, here are seven questions companies can ask now to assess whether their recruiting and hiring practices may be subject to AI regulation:
- Do candidates interact directly with the automated tool?
Certain jurisdictions require notification when a job applicant directly interacts with an automated system. This may include notice before an applicant interacts with the automated system (including in California, Colorado, Illinois, New York City and Europe) or notice after an applicant has already started interacting with the automated system and asks the system whether it is a human (including in Utah).
- Does the automated tool leverage sensitive personal data about candidates in making decisions?
If the automated tool leverages sensitive personal data, privacy laws may also impose particular requirements or restrictions. For example, the EU GDPR prohibits companies from basing automated decision-making on sensitive or “special categories” of personal data, with limited exceptions. Many other privacy laws require express consent before sensitive personal data may be processed at all.
- Can individuals only hear about or access a job opening when the automated tool determines they are a good fit?
Automated tools used to target particular people for job openings might qualify as high-risk AI systems under the EU AI Act, subjecting them to more rigorous regulatory standards. In the United States, the Equal Employment Opportunity Commission has also focused on algorithmic targeting of job advertisements and has investigated such practices for potential violations of Title VII. These systems may also be treated as automated decision-making tools subject to heightened regulation in some jurisdictions, since regulators may view the initial choice of who can see or access the job opening as a pre-hire employment decision with respect to anyone who is excluded.
- Does the automated tool choose which applications to reject and which applications to move forward without human review?
When an automated system rejects applicants or moves them through an employer’s hiring process without human involvement, or in a way that overrides human decisions, AI regulations are likely to apply. For example, when employers and employment agencies use automated decision-making tools without sufficient human involvement, New York City Local Law 144 may require them to conduct annual bias audits of the tools, notify applicants subject to the tools, and allow applicants to request an alternative selection process or accommodation.
- Do HR teams weigh the automated tool’s assessment of the candidate as a significant factor in their hiring decision?
In several jurisdictions, heightened regulatory requirements apply when an automated tool’s output is used as a significant factor in making an employment decision. If HR teams use an automated tool’s evaluation of a candidate (for example, a predictive “fit” for the job) as a substantial factor in their decision-making, regulations are likely to apply (such as in Colorado).
- Is the automated system intended to support HR decision-making and make hiring processes more efficient?
Forthcoming California employment regulations define “automated decision systems” to include not only tools that replace human decision-making or serve as a significant factor in decision-making, but also those that merely “facilitate” human decision-making. If HR uses an AI system to support its hiring processes (for example, using an AI tool’s assessment of a candidate as a starting point for deciding whether to move the candidate forward), AI regulations like those soon to take effect in California may apply.
- Does the automated system use facial recognition or analysis software?
Some jurisdictions regulate or prohibit the use of facial recognition and facial analysis tools. Many require appropriate notice and consent before such tools can be used, while others ban certain uses outright. The EU AI Act, for example, prohibits the use of AI systems to infer the emotions of employees in work settings or of candidates during the selection and hiring process (with limited exceptions) and classifies any other AI system intended for emotion recognition as a high-risk AI system subject to heightened regulatory requirements.
Next Steps
If the answer to one or more of these questions is “Yes,” your company’s recruiting and hiring practices may be subject to current or forthcoming AI regulation, such as the Colorado AI Act, the Illinois Automated Decision Tools Act, NYC Local Law 144, California’s proposed employment regulations regarding automated-decision systems and the European Union Artificial Intelligence Act, as well as generally applicable privacy laws with special requirements for automated systems, including the EU’s General Data Protection Regulation and the California Consumer Privacy Act.
To meet compliance obligations, or to put guardrails in place that avoid the use of automated tools subject to heightened regulation, we recommend companies consider adopting an AI Recruiting, Hiring & Human Resources Policy that sets out the company’s approach to using automated tools in the employment context.
How these obligations apply in practice, and which policies should govern the use of automated employment tools, is context-dependent and requires nuanced analysis. If you think your company’s use of automated tools may be subject to these emerging regulations, or if you need assistance addressing their potential application, please reach out to Julie Totten, Alexandra Stathopoulos, Alexandria Elliot, Nick Farnsworth, Tom Zick or other members of the Orrick team.