Dechert Cyber Bits - Issue 66

Dechert LLP

Articles in this Issue:

  • FTC Settles Allegations of Overinflated Reviews with AI-Enabled Review Platform Sitejabber
  • California Privacy Protection Agency Launches Investigation to Ensure Data Brokers’ Compliance
  • Global Privacy Authorities Issue Statement on Data Scraping
  • U.S. Regulators Seek to Protect Employees from Use of AI Surveillance and Scoring
  • UK Data Regulator Issues Guidance on AI Recruitment Tools
  • Dechert Tidbits

FTC Settles Allegations of Overinflated Reviews with AI-Enabled Review Platform Sitejabber

On November 6, 2024, the Federal Trade Commission (“FTC”) announced a proposed settlement with GGL Projects, Inc., doing business as Sitejabber (“Sitejabber” or the “Company”), an AI-enabled consumer review platform. The FTC alleged that certain practices by the Company misled consumers and constituted unfair or deceptive acts in violation of Section 5(a) of the FTC Act. The Commissioners voted 5-0 in favor of the proposed settlement order, with Commissioners Holyoak and Ferguson issuing concurring statements.

According to the FTC’s complaint, over “130,000 businesses across a range of industries” have profile pages on Sitejabber’s site that feature “a prominent star rating of the business and purport to display real customer reviews.” The FTC alleged that some of this data was collected by Sitejabber through surveys administered before the customer had received or experienced the product or service ordered, and that these surveys were used to inflate ratings by presenting them as if they came from purchasers who had actually used the products or services. In addition, the FTC charged that Sitejabber had supplied its clients with a digital tool that allowed them to falsely present these early customer reviews and ratings as if they were from purchasers or users who had received their products or services. Sitejabber has not admitted any wrongdoing in connection with the settlement.

Under the FTC’s proposed order with the Company (the “Proposed Order”), Sitejabber would be prohibited from misrepresenting product ratings, reviews, and other consumer feedback, and from providing its clients with the “means or instrumentality” to misrepresent reviews; the Proposed Order also contains several other standard provisions relating to notification and compliance. In their concurring statements, Commissioners Holyoak and Ferguson both applaud the FTC’s “proper” use of the “means and instrumentalities” theory of liability.

Takeaway: This enforcement action follows on the heels of “Operation AI Comply,” which we previously covered here, in which the FTC brought actions against multiple companies that allegedly “relied on artificial intelligence as a way to supercharge deceptive or unfair conduct that harms consumers.” Unlike some recent FTC orders, this one was approved by a unanimous 5-0 vote, demonstrating the FTC’s continued prioritization of the fairness of the consumer review ecosystem, particularly where AI-powered tools are involved. Platform and tool providers, especially those offering AI-powered products, should consider assessing whether their tools could be used to deceive consumers and taking steps to limit any potential for misuse. More generally, companies using AI should consider implementing robust compliance measures to confirm that claims about their products are accurate and transparent and do not mislead consumers.

California Privacy Protection Agency Launches Investigation to Ensure Data Brokers’ Compliance

The California Privacy Protection Agency (“CPPA”) is conducting a “public investigative sweep” to ensure data brokers comply with California’s Delete Act, which governs the business practices of data brokers. Among other things, the Delete Act requires covered businesses to register annually, pay a fee, and disclose certain information regarding consumer deletion requests and the collection of sensitive categories of information. Data brokers, defined as businesses that collect and sell personal information about consumers with whom they do not have a direct relationship, must register with the CPPA by January 31 if they operated as data brokers in the previous year.

The Delete Act’s annual fee requirement funds the CPPA’s data broker registry and the development of the Data Broker Requests and Opt-Out Platform (“DROP”), set to launch in 2026. DROP will allow consumers to request the deletion of their personal information from all data brokers in a single action, enhancing consumers’ control over their personal data. The CPPA emphasizes the importance of these measures due to the potential privacy threats posed by the extensive volume of data sold by brokers. Consumers can access the data broker registry on the CPPA's website and submit complaints regarding non-compliant data brokers.

Takeaway: The CPPA’s investigation regarding data brokers’ compliance with the Delete Act underscores the agency’s concerns with this industry and its commitment to enforcement of its provisions. It is important to remember that companies that sell third-party consumer information may qualify as “data brokers” under California law even if that is not their primary business. This “sweep” is an opportunity for brokers to review their compliance and make adjustments, if necessary.

Global Privacy Authorities Issue Statement on Data Scraping

Sixteen global privacy authorities, including the UK's Information Commissioner's Office (“ICO”) and the Office of the Privacy Commissioner of Canada (“OPC”), have issued a joint statement on data scraping. Data scraping involves extracting large amounts of publicly available data from websites, often without the consent of the data subjects or the website owners, raising significant privacy and security concerns. The subject has been a matter of increasing concern as data scraping to support AI development (through use of the data to train AI models, among other things) has attracted more regulatory attention.

The joint statement expands on an earlier statement on the same subject issued in 2023. This follow-up comes after extensive engagement with industry stakeholders, including some of the largest social media platforms, to address growing concerns around the practice of data scraping.

The joint statement highlights the risks associated with data scraping, such as the potential for misuse of personal information, identity theft, and other forms of cybercrime. It provides additional guidance for companies to implement measures to protect personal data and comply with privacy laws. The authorities also call for greater transparency from companies regarding their data scraping practices and the steps they are taking to mitigate associated risks.

Takeaway: Whilst the 2023 joint statement on data scraping focused on steps social media platforms could take to control third-party data scraping, the new joint statement also addresses social media companies’ scraping of data from their own platforms. Social media platforms’ use of public user data to train their own AI models is increasingly under scrutiny.

U.S. Regulators Seek to Protect Employees from Use of AI Surveillance and Scoring

On October 24, 2024, the Consumer Financial Protection Bureau (“CFPB”) issued guidance on employers’ use of third-party technologies to assess employee performance. According to the CFPB, companies are using AI-driven technologies, including non-transparent “black box” algorithms, to score employees’ effectiveness. The CFPB has expressed two primary concerns about these technologies. First, they often involve the collection of employees’ personal and biometric data without consent. Second, the CFPB warns that the use of such tools may violate the Fair Credit Reporting Act (“FCRA”), which aims to “protect people from the abuse and misuse of background dossiers and scores.” Thus, under the FCRA, organizations must be transparent about their monitoring technologies, obtain employee consent, and allow employees to correct any inaccuracies in the collected data.

The CFPB, in collaboration with the Department of Labor (“DOL”), held a field hearing to discuss the necessity of the new guidelines. CFPB Director Rohit Chopra emphasized that the guidance seeks to enhance employee protections as new technologies reshape the workplace. The DOL’s Acting Secretary of Labor, Julie Su, further noted that these protections extend to contractors and other third-party workers, who have the same rights as employees when they are subject to workplace surveillance or monitoring. Acknowledging concerns about the potential negative impacts of AI surveillance, regulatory bodies, including the CFPB and the DOL, have stated that they remain committed to balancing innovation with the enforcement of workers’ rights and protections.

Takeaway: Despite these concerns about the potential negative impacts of AI surveillance, U.S. agencies with jurisdiction over employment issues appear committed to allowing AI applications to be used for hiring and other workplace-related tasks, but not at the expense of workers’ legal rights and protections, including anti-discrimination laws. (See our report here.) Consistent with that approach, the joint effort by the CFPB and the DOL to guard against the potential abuse of AI in the employment context underscores the continuing importance of maintaining transparency, obtaining legally valid employee consents, and ensuring the accuracy of any data collected and used in the employment context.

UK Data Regulator Issues Guidance on AI Recruitment Tools

The UK’s Information Commissioner’s Office (“ICO”) issued recommendations to developers and providers of AI recruitment tools, with the goal of improving the protection of job seekers’ personal data.

The ICO’s recommendations follow an audit of several providers of AI recruitment products, which raised numerous data privacy concerns. The ICO found that AI tools were often collecting far more personal data than necessary and storing that data for an indefinite period. The ICO also determined that databases of job seekers’ data were being developed without adequately informing the job seekers. The audit further revealed privacy compliance issues with tools used to filter candidates, including the potential for discrimination.

The ICO has published a list of key questions for organizations to consider before procuring an AI tool for recruitment. These questions highlight the need for companies to clearly determine the lawful basis for their processing of job seekers’ personal data, ensure that AI tools are used in a transparent manner, and limit the processing of personal data to what is necessary.

Takeaway: AI is becoming more prevalent in the recruitment process as organizations look to streamline evaluation processes and identify candidates more efficiently. However, recruitment is a sensitive area in which biases can arise in decision-making that have a significant impact on individuals. The ICO’s guidance is a useful tool for compliance with data protection laws, but organizations looking to use AI in recruitment also will want to take into account employment laws and be mindful that AI systems used for recruitment are treated as “high risk” under the EU AI Act.

Dechert Tidbits

EDPB Adopts Review of EU-U.S. Data Privacy Framework

The European Data Protection Board (“EDPB”) adopted its first report under the EU-U.S. Data Privacy Framework for data exports to the U.S., assessing the framework's implementation and effectiveness. The report is generally supportive of the Data Privacy Framework, but highlights certain areas for ongoing improvement.

DOJ to Restrict Data Swapping with Countries of Concern

The Department of Justice (“DOJ”) has issued a Notice of Proposed Rulemaking that would restrict the transfer of data to six “countries of concern.” The proposed rules come in response to an executive order issued by President Biden earlier this year directing the DOJ to put rules in place restricting the transfer of sensitive data to countries or individuals of concern. The new rules apply to six categories of data (including genomic, biometric, health, and financial data) and designate six countries of concern subject to the restrictions: Cuba, China, Iran, North Korea, Russia, and Venezuela. The regulation, set to have significant implications for vendor engagements, investment activities, and employment agreements, includes substantial investigative and enforcement authorities for the DOJ. Companies involved in cross-border data transactions will need to develop compliance programs to adhere to these new requirements.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Dechert LLP

