AI Workplace Screener Faces Bias Lawsuit: 5 Lessons for Employers and 5 Lessons for AI Developers

Fisher Phillips

A California federal court just allowed a frustrated job applicant to proceed with an employment discrimination lawsuit against an AI-based vendor after more than 100 employers that use the vendor’s screening tools rejected him. The judge’s July 12 decision allows the proposed class action against Workday to continue based on employment decisions made by Workday’s customers, on the theory that Workday served as an “agent” of the employers that rejected him and that its algorithmic screening tools discriminated against him based on his race, age, and disability status. The lawsuit can teach valuable lessons to employers and AI developers alike. What are five things that employers can learn from this case, and what are five things that AI developers need to know?

AI Job Screening Tool Leads to 100+ Rejections

Here is a quick rundown of the allegations contained in the complaint. It’s important to remember that this case is in the very earliest stages of litigation, and Workday has not yet even provided a direct response to the allegations – so take these points with a grain of salt and recognize that they may even be proven false.

  • Derek Mobley is a Black man over the age of 40 who self-identifies as having anxiety and depression. He has a degree in finance from Morehouse College and extensive experience in various financial, IT help-desk, and customer service positions.
  • Between 2017 and 2024, Mobley applied to more than 100 jobs with companies that use Workday’s AI-based hiring tools – and says he was rejected every single time. He would see a job posting on a third-party website (like LinkedIn), click on the job link, and be redirected to the Workday platform.
  • Thousands of companies use Workday’s AI-based applicant screening tools, which include personality and cognitive tests. The tools interpret a candidate’s qualifications through algorithmic methods and can automatically reject applicants or advance them through the hiring process.
  • Mobley alleges the AI systems reflect illegal biases and rely on biased training data. He notes that his race could be inferred because he graduated from a historically Black college, his age could be determined from his graduation year, and his mental disabilities could be revealed through the personality tests.
  • He filed a federal lawsuit against Workday alleging race discrimination under Title VII and Section 1981, age discrimination under the ADEA, and disability discrimination under the ADA.
  • But he didn’t file just any type of lawsuit. He filed a class action claim, seeking to represent all applicants like him who weren’t hired because of the alleged discriminatory screening process.
  • Workday asked the court to dismiss the claim on the basis that it was not the employer making the employment decisions about Mobley, but after more than a year of procedural wrangling, the judge gave Mobley the green light to continue his lawsuit.

Judge Gives Green Light to Discrimination Claim Against AI Developer

Direct Participation in Hiring Process is Key – The judge’s July 12 order says that Workday could potentially be held liable as an “agent” of the employers who rejected Mobley. The employers allegedly delegated traditional hiring functions – including automatically rejecting certain applicants at the screening stage – to Workday’s AI-based algorithmic decision-making tools. That means that Workday’s AI product directly participated in the hiring process.

Middle-of-the-Night Email is Critical – One of the allegations Mobley raises to support his claim that Workday’s AI decision-making tool automatically rejected him was an application he submitted to a particular company at 12:55 a.m. He received a rejection email less than an hour later at 1:50 a.m., making it appear unlikely that human oversight was involved.

“Disparate Impact” Theory Can Be Advanced – Once the judge decided that Workday could be a proper defendant as an agent, she then allowed Mobley to proceed against Workday on a “disparate impact” theory. That means the company didn’t necessarily intend to screen out Mobley based on race, age, or disability, but that it could have set up selection criteria that had the effect of screening out applicants based on those protected criteria. In fact, in one instance, Mobley was rejected for a job at a company where he was currently working on a contract basis doing very similar work.

Not All Software Developers On the Hook – This decision doesn’t mean that all software vendors and AI developers could qualify as “agents” subject to a lawsuit. Take, for example, a vendor that develops a spreadsheet system that simply helps employers sort through applicants. That vendor shouldn’t be part of any later discrimination lawsuit, the court said, even if the employer later uses that system to purposefully sort the candidates by age and rejects all those over 40 years old.

5 Tips for Employers

This lawsuit could just as easily have been filed against any of the 100+ employers that rejected Mobley, and they may still be added as parties or sued in separate actions. That is a stark reminder that employers need to tread carefully when implementing AI hiring solutions through third parties. A few tips:

  • Vet Your Vendors – Ensure your AI vendors follow ethical guidelines and have measures in place to prevent bias before you deploy the tool. This includes understanding the data they use to train their models and the algorithms they employ. Regular audits and evaluations of the AI systems can help identify and mitigate potential biases – but it all starts with asking the right questions at the outset of the relationship and along the way.
  • Work with Counsel on Indemnification Language – It’s not uncommon for contracts between business partners to include language shifting the cost of litigation and resulting damages from employer to vendor. But make sure you work with counsel when developing such language in these instances. Public policy doesn’t often allow you to transfer the cost of discriminatory behavior to someone else. You may want to place limits on any such indemnity as well, like certain dollar amounts or several months of accrued damages. And you’ll want to make sure that your agreements contain specific guidance on what type of vendor behavior falls under whatever agreement you reach.
  • Consider Legal Options – Should you be targeted in a discrimination action, consider whether you can take action beyond indemnification when it comes to your AI vendors. Breach of contract claims, deceptive business practice lawsuits, or other formal legal actions to draw the third party into the litigation could work to shield you from shouldering the full responsibility.
  • Implement Ongoing Monitoring – Regularly monitor the outcomes of your AI hiring tools. This includes tracking the demographic data of applicants and hires to identify any patterns that may suggest bias or create a potential disparate impact. This proactive approach can help you catch and address issues before they become legal problems – one simple way to run such a check is sketched after this list.
  • Add the Human Touch – Consider where you will insert human decision-making at critical spots along your hiring process to prevent AI bias, or the appearance of bias. An automated process that simply screens for check-the-box requirements such as necessary licenses, years of experience, educational degrees, and similar objective criteria is low risk, but completely replacing human judgment on subjective decisions is among the riskiest uses of AI. And make sure you train your HR staff and managers on the proper use of AI in hiring and other employment-related decisions.
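
To make the monitoring tip concrete, here is a minimal Python sketch of a periodic disparate-impact check built on the EEOC’s “four-fifths” rule of thumb. It assumes you can export applicant records with a self-reported demographic group and a hired/rejected flag; the record layout and function names are illustrative, not part of any vendor’s product, and a flagged ratio is a signal to investigate with counsel, not a legal conclusion.

```python
# A minimal sketch of a "four-fifths rule" check on hiring outcomes.
# Assumes exported applicant records with a self-reported demographic
# group and a 0/1 hired flag; field names are illustrative only.
from collections import defaultdict

def selection_rates(records):
    """Compute the hire rate for each demographic group."""
    applied, hired = defaultdict(int), defaultdict(int)
    for rec in records:
        applied[rec["group"]] += 1
        hired[rec["group"]] += rec["hired"]
    return {g: hired[g] / applied[g] for g in applied}

def four_fifths_flags(records, threshold=0.8):
    """Flag groups whose selection rate falls below 80% of the highest
    group's rate -- the EEOC's rule-of-thumb indicator of potential
    disparate impact (a prompt to investigate, not a legal finding)."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Example: group B is hired at half the rate of group A -> flagged.
sample = ([{"group": "A", "hired": 1}] * 30 + [{"group": "A", "hired": 0}] * 70
          + [{"group": "B", "hired": 1}] * 15 + [{"group": "B", "hired": 0}] * 85)
print(four_fifths_flags(sample))  # {'B': 0.5}
```

Run on a regular cadence (for example, quarterly and after any model or criteria change), this kind of check creates exactly the paper trail of proactive monitoring that the tip above recommends.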

5 Tips for Vendors

While not a complete surprise given all the talk from regulators and others in government about bias in automated decision-making tools, this lawsuit should grab the attention of any developer of AI-based hiring tools. Taken together with the recent ACLU action against Aon Consulting over its AI screening platforms, it appears the era of government merely expressing concerns has given way to action. While plaintiffs’ attorneys and government enforcement officials have typically focused on employers when it comes to alleged algorithmic bias, it was only a matter of time before they turned their attention to the developers of these products. Here are some practical steps AI vendors can take now to address the threat.

  • Commit to Trustworthy AI – Make sure the design and delivery of your AI solutions are both responsible and transparent. This includes reviewing marketing and product materials.
  • Review Your Work – Engage in a risk-based review process throughout your product’s lifecycle. This will help mitigate any unintended consequences.
  • Team With Your Lawyers – Work hand-in-hand with counsel to help ensure compliance with best practices and all relevant workplace laws – not just laws prohibiting intentional discrimination, but also those barring unintentional “disparate impact” discrimination of the kind alleged in the Workday lawsuit.
  • Develop Bias Detection Mechanisms – Implement robust testing and validation processes to detect and eliminate bias in your AI systems. This includes using diverse training data and regularly updating your algorithms to address any identified biases – one example of such a test is sketched after this list.
  • Lean Into Outside Assistance – Collaborate with external auditors or third-party reviewers to ensure impartiality in your bias detection efforts.
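
One concrete way to implement the bias-detection tip above is a paired-input (“counterfactual”) test: score the same resume twice, changing only a field that can proxy for a protected trait, and confirm the score barely moves. The Python sketch below is illustrative only – score() is a stand-in for your real model, the field names are assumptions, and the 0.05 tolerance is arbitrary.

```python
# A minimal sketch of a counterfactual bias test for a resume-screening
# model. score() is a placeholder -- swap in a call to your real model.
def score(resume: dict) -> float:
    """Placeholder scorer; returns a 0-1 suitability score."""
    return min(1.0, 0.5 + 0.03 * resume["years_experience"])

def proxy_gap(resume: dict, field: str, alt_value) -> float:
    """Score the same resume twice, changing only one field that can
    proxy for a protected trait, and return the absolute score gap."""
    perturbed = {**resume, field: alt_value}
    return abs(score(resume) - score(perturbed))

# Graduation year can reveal age (a point made in the lawsuit), so a
# well-behaved screener's score should be insensitive to it.
resume = {"years_experience": 12, "grad_year": 1990}
gap = proxy_gap(resume, "grad_year", 2015)
assert gap < 0.05, f"age-proxy sensitivity detected: gap={gap:.3f}"
print(f"grad_year score gap: {gap:.3f}")  # 0.000 for this placeholder
```

The same pattern extends to other proxies the complaint highlights, such as the name of a candidate’s alma mater, and these tests can run automatically as part of every model release.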

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Fisher Phillips | Attorney Advertising
