This week, the Federal Trade Commission (FTC) issued a proposed consent order to settle allegations against IntelliVision Technologies Corp. (IntelliVision) for making false, misleading, and unsubstantiated claims that its facial recognition software, powered by artificial intelligence (AI), was free of gender and racial bias.
According to the proposed consent order, IntelliVision must cease publicizing misrepresentations of its facial recognition software’s accuracy and efficacy, as well as its claims that the software was developed with all different genders, ethnicities, and skin tones in mind. The FTC’s complaint alleges that IntelliVision had no evidence to support its claim that the software had “one of the highest accuracy rates on the market and performs with zero gender or racial bias.” Additionally, the complaint alleges that IntelliVision claimed to have trained its AI-powered software on millions of images when, in fact, it trained the software on the facial images of roughly 100,000 unique individuals and then created variations of those same images.
Lastly, the FTC alleges that IntelliVision did not have evidence to substantiate its advertising claim that the software’s anti-spoofing technology does not allow the system to be “tricked” by a photo or video image.
The Director of the FTC’s Bureau of Consumer Protection, Samuel Levine, warned businesses, “Companies shouldn’t be touting bias-free artificial intelligence systems unless they can back those claims up. Those who develop and use AI systems are not exempt from basic deceptive advertising principles.”
This is only the second case in which the FTC has alleged that AI facial recognition technology was misrepresented. In December 2023, the FTC entered into a consent order with Rite Aid over its failure to implement reasonable procedures governing its use of AI facial recognition in its stores and to prevent harm to consumers.