Client Alert: Avoiding Legal Pitfalls and Risks in Workplace Use of Artificial Intelligence


Recent surveys indicate that employees make widespread use of generative AI and other artificial intelligence (AI) tools in the workplace. This is hardly surprising, given the efficiencies that AI tools offer for content generation, predictions, recommendations and a seemingly endless number of other applications.

Yet few employers have adopted the formal workplace AI policies needed to properly train employees on the safe adoption and use of AI tools and to mitigate a growing list of material security, accuracy, privacy, intellectual property and other legal and operational risks.

Some of those risks have recently made headline news, as media companies, authors, artists, and celebrities have sued generative AI platforms in federal court for alleged copyright infringement. For example, The New York Times has sued the creators of ChatGPT and other popular AI platforms over copyright issues associated with its articles. The Authors Guild, on behalf of its author-members, and a number of authors, such as John Grisham, have likewise sued OpenAI, claiming that their materials were taken by OpenAI without consent. In addition, several visual artists have filed a putative class action lawsuit against AI companies, on behalf of themselves and other artists, alleging that those companies used images of the artists’ artwork, registered with the U.S. Copyright Office, in the development of AI image generators.

Key Considerations in Workplace AI Policies

These headlines underscore only some of the many legal risks posed by the use of AI in the workplace. An important tool for mitigating those risks is an AI policy. An effective AI policy should highlight for employees the growing spectrum of risks and limitations unique to AI tools and set the parameters for using those tools in the workplace. While every policy should be tailored to the organization’s specific needs and priorities, we have outlined below some of the key considerations that any workplace AI policy should address.

Data Privacy and Security Considerations

Using AI tools to process personal information can result in the disclosure of protected information to third parties. As we have previously reported, unauthorized disclosures can lead to allegations of data breaches and false and deceptive practices, among other risks. Consequently, data privacy and security considerations must form a critical component of any AI policy, which should ensure that your organization appropriately evaluates any legal or regulatory compliance issues that may arise from collecting, processing, or sharing data using an AI tool.

As an example, if your organization is subject to the European Union’s General Data Protection Regulation (“GDPR”), care should be taken to evaluate, among other requirements, (i) any restrictions that may apply to your anticipated use case(s) and (ii) all rights that must be extended to data subjects to the extent profiling or automated decision-making is involved. Even if your organization is not subject to the GDPR, other comprehensive consumer privacy laws, including a growing number of US state consumer privacy laws, must be considered. Although generally less stringent than the GDPR, many US state privacy laws nevertheless contain restrictions and obligations related to profiling and automated decision-making, and a growing number of those laws have been amended in recent years in ways that align more closely with the GDPR.

Confidentiality Considerations

Perhaps the most widely used form of AI tool is generative AI, a type of AI that can generate code, images, or text, such as chatbots (ChatGPT, Google’s Bard, Microsoft Copilot) and image generators (DALL-E 2, Midjourney). Many of these generative AI tools are free and available to all users, regardless of context (workplace or private use), and are known as “open” systems because they do not limit how user input is subsequently stored and used by the system. This unfettered use and storage of content by AI tools may compromise the confidentiality of that content and should therefore be adequately addressed by your AI policy. Even with “closed” systems, which may limit the sharing of your information with third parties, your organization’s AI policy should provide sufficient guidance to ensure that data privacy and security concerns are fully understood and properly evaluated by the appropriate staff before any AI tool is implemented.

Copyright Considerations

While the governing terms of a generative AI tool may purport to grant you and your staff copyright ownership of any output or content created, the company behind the AI tool may not have sufficient rights to make that grant because some or all of the output is owned by others. This scenario is illustrated by the lawsuit noted above that The New York Times brought against the creators of ChatGPT and other popular AI platforms over copyright issues associated with its articles. According to the complaint in that lawsuit, many articles published by The New York Times were impermissibly used to train certain generative AI tools, which in turn generated output that merely recited The New York Times’ content. As this example illustrates, your organization may, in some instances, receive infringing content from a generative AI tool, and any AI policy must address these risks. These challenges are compounded by the fact that a user cannot always determine which sources an AI tool used to generate a response, further limiting the user’s ability to assess the output for potential intellectual property violations.

Data Accuracy

In addition to output that may infringe third-party copyrights, generative AI tools may produce incorrect or even nonsensical information (known as “hallucinations”). Further, the results or decisions produced by an AI tool may reflect biased or incomplete data sets on which the tool was trained. Understanding these limitations and accounting for them in your AI policy is therefore critical, especially if the intended use cases involve decision-making or content creation by AI tools. Several best practices should be considered in this context, including internal transparency about when AI tools are used and proper human oversight to ensure that quality control measures mitigate inaccuracies, bias, and discriminatory effects.

Provide Guidelines for Onboarding AI Tools

As with all workplace policies, simply having a static policy on paper is not sufficient to guard against legal and regulatory risks, especially in the context of fast-evolving technologies like AI. If your organization adopts an AI policy that permits the use of AI tools, the policy should require training before any user is allowed to use an AI tool and explain the process for onboarding and licensing appropriate AI tools. Those policies and processes should be tailored to your organization and should reflect its established governance and oversight frameworks.

Federal and State Laws Regarding the Use of AI in the Workplace

A casual observer of AI may believe that President Biden’s October 30, 2023 Executive Order 14110 (“Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”), which calls for a coordinated U.S. government approach to ensure the responsible and safe development and use of AI,[1] was the federal government’s first meaningful step toward adopting sensible AI regulations. In fact, a number of federal and state laws and regulations, and some litigation, concerning the use of AI in the workplace preceded Executive Order (“EO”) 14110. So, in addition to adopting and implementing an AI policy, employers should be mindful of the AI-related laws and regulations adopted both before and after the EO. Some of those laws and regulations are addressed below.

US Government Regulation of AI in the Workplace

Risks of Algorithmic Discrimination: Bias Concerns in Hiring and Promotion

One way in which the U.S. government seeks to regulate the use of AI in the workplace is by preventing algorithmic discrimination in the application of AI tools. In 2021, the Federal Trade Commission warned businesses that discriminatory bias resulting from the use of algorithm-based tools may violate consumer protection laws. The FTC has said that businesses could be prosecuted under the Equal Credit Opportunity Act or the Fair Credit Reporting Act for biased and unfair AI-generated decisions, and that unfair and deceptive practices involving AI could also fall under Section 5 of the FTC Act.

Also in 2021, the U.S. Equal Employment Opportunity Commission (“EEOC”) — the federal agency that enforces federal workplace anti-discrimination laws like Title VII of the Civil Rights Act of 1964, as amended, the Americans with Disabilities Act Amendments Act of 2008, and the Age Discrimination in Employment Act, among other laws — launched the “Artificial Intelligence and Algorithmic Fairness Initiative” to “ensure that the use of software, including AI, machine learning, and other emerging technologies used in hiring and other employment decisions, complies with the federal civil rights laws that the EEOC enforces.” In May 2023, the EEOC published a technical assistance document with guidance for employers on how to monitor their AI tools used to make hiring, promotion, and termination decisions for discrimination.

More recently, the EEOC issued two technical assistance documents pertaining to the use of AI in hiring and its interplay with certain anti-discrimination laws. The first is titled “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees”; the second addresses disparate impact discrimination when using AI tools and is titled “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act.”

After the issuance of President Biden’s EO, various federal agencies weighed in on the use of AI, including in the U.S. workplace. As noted in Whiteford’s recent Client Alert, in a joint statement issued on April 4, 2024, five federal agencies, including the EEOC and the U.S. Department of Labor (“DOL”), announced that they will apply their enforcement authority to scrutinize the use of artificial intelligence selection tools in employment because of the potential for discriminatory bias in employee selection. As those agencies collectively put it, “[a]lthough many of these tools offer the promise of advancement, their use also has the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes.”

US DOL and OFCCP: AI Impact on EEO and Wage/Hour Practices

Equal Employment Opportunity Considerations

All U.S. workplaces must provide equal employment opportunities to job applicants and employees without regard to an individual’s legally protected characteristics. On April 29, 2024, the DOL’s Office of Federal Contract Compliance Programs (“OFCCP”), which oversees federal contractors’ compliance with federal laws and regulations, issued new guidance for federal contractors using AI tools in hiring and other employment actions. Under that guidance, federal contractors and subcontractors must comply with EEO laws that prohibit employment discrimination based on a person’s legally protected status, such as race and sex, and must take affirmative action to recruit and advance qualified minorities, women, persons with disabilities, and covered veterans. OFCCP’s new guidance clarifies that federal contractors’ EEO obligations extend to the use of automated systems, including AI, when making employment decisions. Additionally, the OFCCP updated its compliance review process to require federal contractors to document their use of AI and automated systems in recruitment, screening, and hiring.

Wage/Hour Considerations

As recently noted in our Client Alert addressing AI and wage/hour claims, employers may use AI and other technologies to track employees and determine employee work hours, set work schedules, assign tasks, manage break time requests, and assess worker productivity. Employers may also use AI-automated timekeeping systems and rely on those systems to calculate pay. However, if an employer uses an AI tool that relies on faulty data or produces inaccurate output, the employer may face legal liability for violating the federal Fair Labor Standards Act (“FLSA”) as well as applicable state labor laws.

On April 29, 2024, the DOL also issued guidance for its Wage and Hour Division (“WHD”) field staff regarding the application of the FLSA and other federal labor standards that the DOL enforces to employers’ use of AI. The guidance addresses some of the potential pitfalls of using AI tools and recommends proper human supervision over AI tools to avoid legal risks and potential violations of the laws enforced by the WHD.
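As a purely hypothetical illustration of the kind of human supervision the DOL recommends, the sketch below recomputes weekly pay from raw time records using the FLSA’s baseline overtime rule (one and one-half times the regular rate for hours over 40 in a workweek) and flags discrepancies with an AI timekeeping system’s output for human review. The record layout, the tolerance, and the sample figures are assumptions for demonstration only; actual pay calculations depend on state law and each employer’s pay practices.

```python
# Hypothetical cross-check of an AI timekeeping system's weekly pay output.
# The record layout, $0.01 tolerance, and sample figures are assumptions;
# real calculations must reflect state law and the employer's pay practices.

OVERTIME_THRESHOLD = 40.0   # FLSA baseline: overtime after 40 hours/workweek
OVERTIME_MULTIPLIER = 1.5   # time and one-half the regular rate

def expected_weekly_pay(hours: float, hourly_rate: float) -> float:
    """Recompute straight-time plus FLSA overtime pay from raw hours."""
    regular = min(hours, OVERTIME_THRESHOLD)
    overtime = max(hours - OVERTIME_THRESHOLD, 0.0)
    return regular * hourly_rate + overtime * hourly_rate * OVERTIME_MULTIPLIER

# Sample records: (employee, hours worked, hourly rate, pay the AI tool computed)
records = [
    ("E-1001", 38.0, 20.00, 760.00),
    ("E-1002", 45.0, 20.00, 900.00),   # AI output missed the overtime premium
]

for emp, hours, rate, ai_pay in records:
    expected = expected_weekly_pay(hours, rate)
    if abs(expected - ai_pay) > 0.01:  # flag discrepancies for human review
        print(f"{emp}: AI computed ${ai_pay:.2f}, expected ${expected:.2f} -- review")
```

A check of this kind does not substitute for legal review; it simply surfaces AI-computed results that a human should examine before payroll is finalized.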

Court Case Challenging the Use of AI in the Employment Context

One recent proposed class action, Mobley v. Workday, Inc., No. 23-cv-00770 (N.D. Cal., filed Feb. 21, 2023), bears mention. Workday is a human resources software vendor whose AI screens which job candidates’ resumes get passed along to employers. Mobley, a job applicant who is African-American and over the age of 40, filed a proposed class action lawsuit in federal court in California alleging that Workday’s applicant screening software uses algorithms that disproportionately disqualify (and thus result in disparate impact discrimination against) job candidates who are African-American, age 40 and over, and/or disabled, in violation of Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act, the ADA Amendments Act of 2008, and California’s Fair Employment and Housing Act.

On July 12, 2024, the court granted in part and denied in part Workday’s motion to dismiss the Amended Complaint. The court granted the motion without leave to amend as to the Title VII, ADEA, and ADA claims to the extent they were based on an employment agency theory, and as to the intentional discrimination claims under Title VII, the ADEA, and Section 1981; it granted the motion with leave to amend as to the FEHA claim. The court otherwise denied the motion. Notably, in partially denying the motion, the court held that, under the federal anti-discrimination laws (i.e., Title VII, the ADEA, and the ADA) and the case law interpreting them, a third-party agent may be liable as an employer where the agent has been delegated functions traditionally exercised by an employer. The court reasoned that the plaintiff had plausibly alleged that Workday’s customers delegated traditional hiring functions, including the rejection of applicants, to Workday’s algorithmic decision-making tools, and that Workday is therefore an agent of its client-employers and an “employer” under those anti-discrimination laws.

The court found that “although there are allegedly variances in Workday’s screening tools based on customer hiring preferences,” the plaintiff had plausibly alleged, in connection with his disparate impact discrimination claim, “that there is a common component that discriminates against applicants based on a protected trait. This is supported by allegations that Mobley was rejected from over one hundred jobs that he was allegedly qualified for, across many different industries and employers.” The court thus found that the plaintiff had plausibly alleged a specific employment practice that caused disparate impact discrimination. For now, at least in California, vendors of software platforms that screen job applicants may be exposed to liability for discriminatory outcomes, even absent any intent to discriminate, where they have been delegated responsibility over traditional employment functions and are thus acting as an agent of their client (and can therefore be deemed an “employer”).

Local and State Legislation

New York City and the State of Colorado are two of the first jurisdictions in the United States to enact legislation governing employers’ use of automated employment decision-making tools like resume and job-screening software, in order to prevent discrimination in hiring.

New York City

New York City enacted a law (Local Law 144 of 2021), effective January 1, 2023 (the “AEDT Law”), prohibiting employers and employment agencies from using automated employment decision tools (“AEDTs”), such as algorithm-based resume and job candidate screening software, to screen job applicants and make hiring and promotion decisions, unless:

  1. the tools undergo a bias audit within one year before their use (and annually thereafter),
  2. information about the audit is made publicly available on a website, including (a) a summary of the bias audit results, (b) the date of the most recent bias audit of the AEDT, and (c) the distribution date of the AEDT, and
  3. certain other notices are provided to job applicants.

The purpose of the bias audit is to determine whether the AEDT has a disproportionately negative impact on women and minorities, as illustrated in the sketch below.
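Purely for illustration, the sketch below shows the kind of impact-ratio arithmetic such an audit typically involves: the rate at which the tool advances candidates in each demographic category is compared against the rate for the most frequently advanced category, and ratios well below 1.0 (for example, below the familiar four-fifths benchmark used in adverse impact analysis) can signal a disproportionately negative impact. The data, category labels, and threshold here are hypothetical assumptions; an actual bias audit must follow the methodology prescribed by the AEDT Law’s implementing rules.

```python
# Hypothetical AEDT impact-ratio calculation (illustrative only).
# The data, category labels, and 0.8 benchmark are assumptions; a real
# bias audit must follow the AEDT Law's implementing rules.

# Candidates screened by the AEDT and how many it advanced, by category.
screened = {"category_a": 400, "category_b": 300, "category_c": 250}
advanced = {"category_a": 200, "category_b": 120, "category_c": 75}

# Selection rate: the share of each category's candidates the tool advanced.
selection_rates = {g: advanced[g] / screened[g] for g in screened}

# Impact ratio: each category's selection rate relative to the highest rate.
highest_rate = max(selection_rates.values())

for group, rate in selection_rates.items():
    ratio = rate / highest_rate
    flag = "  <-- potential disproportionate impact" if ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}{flag}")
```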

The AEDT Law also requires any employer or employment agency that uses an AEDT to screen an employee or candidate for hire or promotion to notify that employee or candidate, if they reside in New York City, that an AEDT will be used to assess or evaluate their candidacy, and to disclose the job qualifications and characteristics that will be used in that assessment. The notice must allow a candidate to request an alternative selection process or an accommodation. Additionally, if information about the type of data collected for the AEDT, the source of that data, and the employer’s or employment agency’s data retention policy is not disclosed on the employer’s or employment agency’s website, it must be made available to a candidate or employee within 30 days of a written request (unless otherwise prohibited by law).

Colorado

Colorado became the first state in the nation to enact a comprehensive law (SB 205) regulating the use of AI tools in hiring. The law, which takes effect on February 1, 2026, requires covered entities to provide detailed notices to consumers who are negatively affected by AI-assisted decisions, along with an opportunity to appeal those decisions and have their applications reviewed by a human; it is enforceable by Colorado’s attorney general rather than through a private right of action. The law’s final requirements will depend on rulemaking or guidance from Colorado’s attorney general and on any changes to the law that Governor Jared Polis has asked the legislature to make.[2]

Next Steps

In sum, AI tools hold the potential to increase the pace of innovation and workplace efficiency, but the technology also presents a variety of legal and other risks. To help minimize those risks, employers and employees should treat all AI tools with due caution, for example by avoiding any presumption that an AI tool is reliable or secure from a data privacy or data security standpoint, or that any use of AI tools is inherently compliant with applicable laws and regulations (or cannot, in its application, result in a violation of law).
Instead, as noted in Whiteford’s recent Client Alert, employers may want to take note of the “Promising Practices” put forward by the DOL’s OFCCP to help federal contractors prevent discrimination when using AI. Those practices encourage federal contractors to, among other things:

  1. Verify AI tools and vendors.
  2. Understand the specifics of each AI tool (data, reliability, safety, etc.).
  3. Provide advance notice of AI tool uses and practices in an employee handbook or separate policy.
  4. Monitor the use of AI tools in making employment decisions and track the resulting data to standardize the system(s).
  5. Provide effective training.
  6. Create internal governance structures with clear standards and monitoring requirements.
  7. Conduct routine tests of AI tools to ensure that they are working properly.
  8. Don’t rely solely on AI tools in employment decisions; instead, ensure meaningful human oversight of AI-supported decisions to promote accuracy and avoid unintentional bias.
  9. Confer with your employment counsel before using an AEDT in hiring or promotion or implementing AI-driven tools for EEO or wage/hour compliance, and stay abreast of federal, state and local laws and regulations concerning the use of AI.

[1] Additional detail on the Executive Order’s application to the use of AI in healthcare is available in a Client Alert recently published by Whiteford.

[2] On March 13, 2024, Utah’s Artificial Intelligence Policy Act (“AIPA”) (SB 149) was signed into law; it took effect on May 1, 2024. This consumer protection law is the first state law in the U.S. addressing the use of generative AI technology. The AIPA requires certain disclosures by businesses when a person is interacting with generative AI or viewing material created by generative AI.
Additionally, on July 1, 2024, Tennessee’s ELVIS (Ensuring Likeness Voice and Image Security) Act took effect. The law updates Tennessee’s right of publicity statute, which protected a person’s name, photograph, and likeness, to also prohibit the unauthorized use of a person’s voice, protecting songwriters, performers, and other music industry professionals from the misuse of AI.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Whiteford
