The FTC Urges Companies to Confront Potential AI Bias … or Else

It might be a little meta to have a blog post about a blog post, but there’s no way around it when the FTC publishes a post to its blog warning companies that use AI to “[h]old yourself accountable—or be ready for the FTC to do it for you.” When last we wrote about facial recognition AI, we discussed how the courts are being used to push for AI accountability and how Twitter has taken the initiative to understand the impacts of its machine learning algorithms through its Responsible ML program. Now we have the FTC weighing in with recommendations on how companies can use AI in a truthful, fair and equitable manner—along with a not-so-subtle reminder that the FTC has tools at its disposal to combat unfair or biased AI and is willing to step in and do so should companies fail to take responsibility.

The FTC’s blog post is an important read for those using (or considering using) AI algorithms in their business. As summarized briefly below, the post previews the laws the FTC may invoke to enforce compliance and offers high-level guidance on how the agency expects companies to approach AI usage.

First, the FTC advises readers that it “has decades of experience enforcing three laws important to developers and users of AI,” the applicability of which will vary with the type of potential harm under consideration:

  • Section 5 of the FTC Act – Potentially applicable in cases of unfair or deceptive practices, such as the sale or use of racially biased algorithms.
  • The Fair Credit Reporting Act – Potentially applicable in cases where an algorithm is used to deny people employment, housing, credit, insurance, or other benefits.
  • The Equal Credit Opportunity Act – Potentially applicable in cases of credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or receipt of public assistance.

Second, as a piece of umbrella guidance, the FTC advises that “a practice is unfair if it causes more harm than good.” Put another way, if a company’s AI “causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition—the FTC can challenge the use of that model as unfair.” To that end, the FTC post provides some more specific guidance on how companies should approach AI usage.

  • Know the data set being used to train your AI. Companies need to consider whether the data set used to train their AI is missing information from particular populations. If there are data gaps, the AI model should be designed to account for those shortcomings, and its use should be limited accordingly (see the first sketch after this list).
  • Be truthful about data collection. The FTC advises that companies “be careful about how they get the data that powers their model.” To that end, companies in possession of user or consumer data should double-check that there will not be any issues in using that data to train or power their AI (e.g., ensure the company has obtained the requisite user permissions and has properly informed users that their data may be used in this manner).
  • Test your AI for discriminatory outcomes. Companies need to know whether their AI is producing discriminatory outcomes. Accordingly, the FTC believes it’s “essential to test your AI algorithm—both before you use it and periodically after that—to make sure that it doesn’t discriminate on the basis of race, gender, or other protected class” (see the second sketch after this list).
  • Be transparent and allow for independent review. The FTC post explains that the use of transparent practices and the opportunity for independent review of AI outcomes may improve the ability to find and correct for bias in AI. As such, the FTC recommends that companies “think about ways to embrace transparency and independence—for example, by using transparency frameworks and independent standards, by conducting and publishing the results of independent audits, and by opening your data or source code to outside inspection.”
  • Don’t exaggerate what your AI algorithm can do. As a general rule, a company’s statements to its customers and the public must be truthful, non-deceptive, and backed up by evidence in order to avoid running afoul of the FTC Act. Thus, those using or developing AI must be careful not to overpromise what their algorithm can deliver, and should not claim that an algorithm is “unbiased” when it may be built on a flawed data set.
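
To make the first of these points concrete, below is a minimal, hypothetical sketch of a pre-training data-gap check. It is not drawn from the FTC’s post: it simply compares each demographic group’s share of a training set against an externally sourced reference share (e.g., census figures). The `report_data_gaps` helper, the column names, and the reference shares are all illustrative assumptions, and pandas is assumed to be available.

```python
# Hypothetical sketch: surface data gaps before training by comparing
# each group's share of the training data to a reference share.
# Column names and reference shares below are made up for illustration.
import pandas as pd

def report_data_gaps(df, group_col, reference_shares, tolerance=0.05):
    """Return groups under-represented by more than `tolerance`."""
    observed = df[group_col].value_counts(normalize=True)
    gaps = {}
    for group, expected in reference_shares.items():
        share = float(observed.get(group, 0.0))
        if expected - share > tolerance:
            gaps[group] = {"expected": expected, "observed": round(share, 3)}
    return gaps

# Illustrative usage with made-up data and reference shares:
train_df = pd.DataFrame({"group": ["A"] * 90 + ["B"] * 10})
print(report_data_gaps(train_df, "group", {"A": 0.6, "B": 0.4}))
# -> {'B': {'expected': 0.4, 'observed': 0.1}}
```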
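Similarly, for the periodic outcome testing the FTC describes, a first-pass screen might compute the “four-fifths rule” ratio, a common rough indicator of disparate impact in selection rates. This too is a hypothetical sketch, not the FTC’s method: the data, column names, and the 0.8 threshold are illustrative assumptions, and a low ratio is a signal to investigate further, not a legal conclusion.

```python
# Hypothetical sketch: screen logged model decisions with the
# "four-fifths rule": each group's favorable-outcome rate divided by
# the best-treated group's rate; ratios below 0.8 are a common red flag.
import pandas as pd

def disparate_impact_ratios(df, group_col, outcome_col):
    """Selection rate per group relative to the best-treated group."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Illustrative usage on made-up logged decisions (1 = approved):
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   1,   1,   0,   0],
})
ratios = disparate_impact_ratios(decisions, "group", "approved")
print(ratios[ratios < 0.8])  # groups worth investigating (here B, ~0.33)
```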

As noted at the top of this post, failing to put practices in place to account for and eliminate biased outcomes in AI is an invitation for FTC enforcement. In view of ongoing U.S. court cases over AI discrimination and proposed European Union AI regulations, the FTC’s guidance should encourage companies that use or develop AI to start taking proactive, concrete steps to guard against potential bias and discrimination.

© Pillsbury - Internet & Social Media Law Blog
