Over the past decade, employers and recruiting platforms have increasingly relied on advanced technology, such as artificial intelligence (AI) and algorithms, to assist with screening, interviewing, and selecting candidates. Proponents of AI recruiting tools argue that they improve efficiency in the hiring process by quickly identifying qualified candidates. Moreover, because algorithms can consider and weigh significantly more factors than human decision makers, the technology could arguably reduce the impact of implicit bias in hiring decisions. However, several legal issues remain for employers considering the use of AI-based hiring technology.
First, anti-discrimination laws still apply to hiring decisions that are aided by an algorithm or AI. As a result, the unchecked use of AI may give rise to liability for disparate treatment or disparate impact claims. For example, some hiring programs use AI-powered “chatbots” to communicate with potential candidates. Employers should pay close attention to the types of questions asked and information elicited by chatbots to limit the inadvertent receipt of information employers are not otherwise permitted to rely upon in making employment decisions. Further, machine learning, a common component of algorithm-assisted decision-making, can unintentionally encode its programmers’ biases, or can learn and repeat the user’s bias in the selection process. A learned bias could, in turn, result in a discrimination claim. As an illustration, software designed to disregard candidates with gaps in their employment history may disproportionately affect women, who are statistically more likely to leave the workforce to care for children. By contrast, software that correlates data across multiple factors, or that flags issues such as employment gaps for review rather than rejecting a candidate on a single factor, allows the relevant circumstances to be considered and reduces the risk of a legal claim.
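To make that design distinction concrete, the short Python sketch below contrasts the two approaches. The field names and six-month threshold are hypothetical, not any vendor’s actual code; the point is only where the decision is made.

```python
from dataclasses import dataclass, field

@dataclass
class Candidate:
    name: str
    employment_gap_months: int
    flags: list = field(default_factory=list)

def hard_filter(candidates, max_gap_months=6):
    # Riskier design: silently drops anyone with a long employment gap,
    # a single-factor rejection that may disproportionately exclude women.
    return [c for c in candidates if c.employment_gap_months <= max_gap_months]

def flag_for_review(candidates, max_gap_months=6):
    # Safer design: annotates the record so a human reviewer can weigh the
    # circumstances rather than the tool deciding on one factor alone.
    for c in candidates:
        if c.employment_gap_months > max_gap_months:
            c.flags.append(f"employment gap of {c.employment_gap_months} months")
    return candidates
```

The difference is not the data collected but who makes the final call: the flagging version keeps a human in the loop on the very factor most likely to carry a disparate impact.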
Even otherwise “neutral” AI tools may have unanticipated consequences. Consider a tool that asks employers to rank potential matches selected by an algorithm for a particular job. For every potential applicant the employer rates well, the platform presents the job posting to other, similar candidates. In theory, this is a facially neutral way to identify potentially qualified candidates for the position. However, if the applicants the employer rates highly all share a similar race, gender, or age, and if the software treats those categories as relevant matching factors, the tool may inadvertently steer the posting toward, and draw applications primarily from, candidates in those same groups. Employers may want to review the categories any AI tool, including a candidate-matching tool, treats as relevant criteria, and designate as irrelevant any information they do not want to receive.
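The feedback loop is easier to see in a toy simulation. The Python sketch below uses entirely made-up candidates and a deliberately crude matching rule; it is meant only to illustrate how ratings that happen to skew toward one group can quickly narrow the pool to which the posting is shown.

```python
import random

random.seed(0)  # reproducible toy example

# 1,000 made-up candidates spread across four demographic groups.
pool = [{"id": i, "group": random.choice("ABCD")} for i in range(1000)]

def employer_rates_highly(candidate):
    # Stand-in for the employer's ratings; suppose candidates from
    # group "A" happen to dominate the early positive ratings.
    return candidate["group"] == "A"

shown = random.sample(pool, 50)  # initial, roughly representative slate
for _ in range(3):
    liked_groups = {c["group"] for c in shown if employer_rates_highly(c)}
    # The matching step: surface more candidates "similar" to the liked
    # ones, here crudely modeled as membership in the same groups.
    shown = [c for c in pool if c["group"] in liked_groups][:50]

print({g: sum(c["group"] == g for c in shown) for g in "ABCD"})
# After a few rounds the posting is shown almost exclusively to group "A".
```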
AI-assisted hiring also raises candidate privacy considerations. AI functions best when it has a large dataset from which to draw in suggesting decisions. Because a resume provides only a limited amount of information about a potential candidate, several well-known platforms locate and synthesize information available online to help determine the candidate’s likelihood of success upon hire. Candidates may be unaware that their private information, some of which they may not have known was available and accessible to the employer, has been used during the recruiting process. Moreover, various state and federal laws govern the use and storage of certain protected information. To the extent an employer’s algorithm uses such protected information, the employer should ensure it complies with those privacy laws.
Finally, a few states and local governments have passed or are considering legislation restricting the use of AI in recruiting. Illinois was one of the first jurisdictions to require employers to notify candidates that AI will be used in video interviews, obtain consent to use AI, and explain to the candidate how the AI tool works and what information is used to evaluate the candidate. Likewise, beginning on January 1, 2023, employers in New York City will be subject to compliance obligations relating to their use of AI-based employment decision tools. Further, the Equal Employment Opportunity Commission (“EEOC”) recently launched an initiative focused on ensuring that AI does not become a “high-tech pathway to discrimination.” Through this initiative, the EEOC issued a technical assistance document regarding the Americans with Disabilities Act and the use of software, algorithms, and artificial intelligence to assess job applicants, which provides insight into how the EEOC views the use of these types of technology in the workplace. We anticipate the EEOC will continue to update this publication to address other employment laws.
Given these considerations, employers that use or plan to use AI recruiting tools should carefully review the technology used to make hiring decisions. Agreements with AI vendors should require the vendors to comply with all applicable employment and non-discrimination laws. Employers should understand the factors a particular algorithm uses, confirm those factors are non-discriminatory, and conduct regular audits of the tools to ensure compliance with federal, state, and local employment and privacy laws.
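One common starting point for such an audit is the “four-fifths” rule of thumb from the EEOC’s Uniform Guidelines on Employee Selection Procedures. The short Python sketch below, using hypothetical numbers and labels, shows how an employer that can export selection outcomes by group might apply it; it is an illustration, not a substitute for a validated audit.

```python
def selection_rates(outcomes):
    # outcomes: {group_label: (number_selected, number_of_applicants)}
    return {g: selected / total for g, (selected, total) in outcomes.items()}

def four_fifths_check(outcomes):
    # Flags any group whose selection rate falls below 80% of the highest
    # group's rate, the EEOC's rough indicator of potential adverse impact.
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: {"impact_ratio": rate / top, "passes": rate / top >= 0.8}
            for g, rate in rates.items()}

# Hypothetical numbers for illustration only.
print(four_fifths_check({"group_1": (48, 100), "group_2": (30, 100)}))
# group_2's impact ratio is 0.625, below 0.8, so it would be flagged
# for further investigation.
```

The four-fifths rule is a screening heuristic rather than a legal conclusion: a flagged ratio is a reason to investigate the tool and its inputs, not proof of discrimination.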