Artificial Discrimination: AI Vendors May Be Liable for Hiring Bias in Their Tools

Clark Hill PLC

A federal court in California largely rejected workplace screening company Workday’s motion to dismiss a hiring discrimination lawsuit brought against the company for its role in screening and evaluating job applicants. Central to the Court’s decision was its conclusion that “drawing an artificial distinction between software decision-makers and human decision-makers would potentially gut anti-discrimination laws in the modern era.” The decision matters to any business that leverages vendor AI or automated tools to evaluate and make decisions concerning the rights of individuals, including hiring, promotion, and firing decisions, and to the third-party service providers and vendors who supply those tools. The decision makes clear that both employers and service providers are responsible for ensuring that the AI tools they use are unbiased and do not unintentionally discriminate.

The Equal Employment Opportunity Commission (EEOC) filed an amicus brief in support of Plaintiff in this case, further solidifying the EEOC’s position that federal employment laws can be used to address discrimination by algorithmic or AI tools. Indeed, the EEOC remains focused on eradicating barriers to quality jobs for underrepresented communities and ending systemic discrimination (discrimination built into recruiting, hiring, and employment policies and practices), and has acknowledged that the use of AI tools that are not screened for unintentional biases can contribute to this type of discrimination.

Prudent businesses and service providers are well advised to:

  • Educate themselves on the technology being used
  • Ask for bias testing or assessment results
  • Use a sample or test group before widespread roll-out to look at the data and results and self-correct if necessary
  • Periodically audit the results, either internally or through a third party, to ensure they remain reliable, consistent, and unbiased (a minimal example of one such check follows this list)
  • Form cross-operational teams to vet and select qualified service providers and AI partners
  • Consider risk mitigation measures in your contractual arrangements to prevent unanticipated financial impacts
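
One common starting point for the bias testing and periodic audits described above is the EEOC’s “four-fifths” guideline: compare each group’s selection rate to the rate of the most-selected group, and treat ratios below 0.8 as a signal for closer review. The sketch below illustrates that calculation in Python; the data, group labels, and column layout are purely hypothetical, and any real assessment should be designed and interpreted with counsel and qualified vendors.

```python
# Minimal sketch of a periodic disparate impact self-audit, assuming screening
# outcomes can be exported with a protected-class attribute and a pass/fail
# flag. All records and group labels below are hypothetical illustrations.
from collections import defaultdict

# Hypothetical screening outcomes: (group, passed_initial_screen)
records = [
    ("group_a", True), ("group_a", False), ("group_a", True), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

def selection_rates(rows):
    """Selection rate per group: share of applicants who passed the screen."""
    totals, passed = defaultdict(int), defaultdict(int)
    for group, ok in rows:
        totals[group] += 1
        passed[group] += int(ok)
    return {g: passed[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Each group's selection rate divided by the highest group's rate.
    Under the EEOC's four-fifths guideline, ratios below 0.8 are commonly
    treated as possible evidence of disparate impact warranting review."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

rates = selection_rates(records)
ratios = adverse_impact_ratios(rates)
for group, ratio in ratios.items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rates[group]:.2f}, impact ratio {ratio:.2f} [{flag}]")
```

A ratio below 0.8 is not itself proof of unlawful discrimination, and a ratio above 0.8 does not guarantee compliance; the point of the sketch is simply that the underlying arithmetic is straightforward enough to run routinely as part of a governance program.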

The decision is Mobley v. Workday, Inc., Case No. 23-CV-770 (N.D. Cal. July 12, 2024) (ECF No. 80), in which Judge Rita F. Lin of the U.S. District Court for the Northern District of California granted in part and denied in part Workday’s Motion to Dismiss Plaintiff’s Amended Complaint concerning allegations that Workday’s algorithm-based screening tools discriminated against applicants on the basis of race, age, and disability.

Plaintiff Mobley is an African American man over the age of 40 who holds a bachelor’s degree in finance from Morehouse College, an all-male Historically Black College and University, as well as a graduate degree earned with honors. Plaintiff also alleges that he suffers from anxiety and depression.

According to the amended complaint, Plaintiff Mobley applied to more than 100 jobs with companies that use Workday’s screening tools on the Workday platform. These screening tools include a “Workday-branded assessment and/or personality test” and the use of “pymetrics” or behavioral assessments. According to Plaintiff, Workday’s screening tools “likely . . . reveal mental health disorders or cognitive impairments,” so those who suffer from anxiety and depression are “likely to perform worse . . . and are screened out.”

Plaintiff was allegedly denied employment through Workday’s platform for every one of his applications. Some of the rejections came within minutes of Plaintiff submitting his application on the Workday platform, and some arrived in the middle of the night, providing plausible evidence that the rejections were automatic and the product of an algorithm. For example, Plaintiff claims that in one instance he applied for a position at 12:55 a.m. and his application was rejected less than an hour later.

Based on these allegations, Plaintiff claims that Workday’s algorithmic decision-making tools discriminate against job applicants who are African American, over the age of 40, and/or disabled. Plaintiff brought claims under Title VII of the Civil Rights Act of 1964 (“Title VII”), the Civil Rights Act of 1866 (“Section 1981”), the Age Discrimination in Employment Act of 1967 (“ADEA”), and the ADA Amendments Act of 2008 (“ADA”) for intentional discrimination on the basis of race and age, and for disparate impact discrimination on the basis of race, age, and disability. Plaintiff also brought a claim against Workday for aiding and abetting race, disability, and age discrimination under California’s Fair Employment and Housing Act (“FEHA”).

Under many of these anti-discrimination laws, an employer, employment agency, or agent may be liable. Workday argued it did not fit within any of these categories, but the Court ultimately found that Workday was an agent of employers and could be directly liable under the agency theory.

Specifically, the Court found that Plaintiff plausibly alleged that Workday’s customers delegated to Workday their traditional function of rejecting candidates or advancing them to the interview stage. The Court determined that, if it reasoned otherwise and accepted Workday’s arguments, companies would “escape liability for hiring decisions by saying that function has been handed over to someone else (or here, artificial intelligence).”

The Court noted that Workday allegedly plays a “crucial role in deciding which applicants can get their ‘foot in the door’ for an interview,” such that “Workday’s tools are engaged in conduct that is at the heart of equal access to employment opportunities.” With regard to artificial intelligence, the Court observed that “Workday’s role in the hiring process was no less significant because it allegedly happens through artificial intelligence,” and it declined to “draw an artificial distinction between software decision-makers and human decision-makers,” as any such distinction would “gut anti-discrimination laws in the modern era.” This is a critical reminder to all businesses, including employers and their vendors, that delegating a function to an automated or AI system does not insulate the employer or vendor from liability for the decisions made by those tools.

For this reason, the Court denied Workday’s motion to dismiss Plaintiff’s federal discrimination claims.

The Court also allowed Plaintiff’s disparate impact claims to proceed but rejected his intentional discrimination claims against Workday. With respect to disparate impact, the Court found that “the zero percent success rate at passing Workday’s initial screening,” combined with Plaintiff’s allegations of bias in Workday’s training data and tools, plausibly supported an inference that Workday’s algorithmic tools disproportionately reject applicants based on factors other than qualifications, such as a candidate’s race, age, or disability. The Court therefore denied Workday’s motion to dismiss the disparate impact claims under Title VII, the ADEA, and the ADA.

This part of the decision affirms the responsibility of employers and vendors to evaluate AI tools used in employment processes for disparate impact, an assessment that at least one local law (New York City Local Law 144) already affirmatively requires. It is critical to work with sophisticated outside counsel and competent vendors on such assessments and to implement any remediation, which may be subject to regulatory review and customer inquiry.

Last, the Court held that Workday could not be held liable for intentional discrimination because the Amended Complaint alleged only that “Workday was aware of the discriminatory effects of its applicant screening tools,” and that allegation of awareness alone was not enough to satisfy Plaintiff’s pleading burden for intentional discrimination. Accordingly, the Court granted Workday’s motion to dismiss Plaintiff’s intentional discrimination claims under Title VII, the ADEA, and § 1981, without leave to amend.

This past year has seen the onset of discrimination claims brought by plaintiffs’ firms against employers, insurers, vendors, and others for their use of AI tools in making decisions that affect the federally protected rights of individuals. These claims are likely only the beginning of the rise of AI discrimination litigation.

Whether used for credit lending, claims handling, or candidate evaluation and hiring, AI tools that drive decisions affecting the rights of individuals should be subject to a robust governance framework. That framework will likely include bias/fairness assessments by vendors, or by businesses of their vendors, opt-outs for individuals who may be subjected to automated decisions by AI tools, and an AI incident response plan that addresses potential discriminatory harms and outcomes as they arise.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Clark Hill PLC
