Dechert Cyber Bits - Issue 60

Dechert LLP

New BIPA Ruling: Dismissal of Claims Against Samsung Over its Face App Data

On July 24, 2024, a federal judge in Illinois dismissed the case GT v. Samsung Electronics America, Inc., in which a putative class of Samsung phone and tablet users alleged violations of the Illinois Biometric Information Privacy Act (“BIPA”). Among other rulings, the court’s decision addresses the kind of “collection” activity a defendant must have undertaken to be liable under BIPA.

The plaintiffs had alleged that Samsung possesses and collects consumers’ biometric data on cell phone and tablet devices through a pre-installed app that scans faces, creates face templates and organizes the resulting images into groups. In dismissing the case, the court determined that companies must actually receive or access user biometrics in order to “collect” or be “in possession of” that data as governed by BIPA. The court also held that the at-issue data was not governed by BIPA because the data, as alleged, could not be used to identify the plaintiffs—indeed, the faces grouped together by the app were unidentified. The court dismissed all claims, while granting the plaintiffs leave to amend.

Takeaway: The results in this case demonstrate that, for purposes of BIPA: (i) control of technology alone does not demonstrate possession or collection; and (ii) to qualify as user biometrics under BIPA, data must identify an individual; the documentation of an individual’s features alone is insufficient. As the plaintiffs’ bar makes attempts to stretch BIPA to its limits, it is encouraging to see this kind of careful analysis by the court ensuring that the elements of the claims can be met.

Second GDPR Review Finds Enforcement and Consistency Issues

The European Commission (“Commission”) has published its second report on the application of the GDPR (“Second Report”), finding that “the risk-based, technology-neutral approach provides strong protection for data subjects and proportionate obligations for data controllers and processors.” A copy of the first report can be found here.

However, the Second Report also identified areas for further progress, including:

  • Strengthening enforcement of the GDPR, starting with adoption of the Commission’s proposed procedural rules for cross-border cases, which are currently being negotiated by the European Parliament and the Council and are intended to harmonize procedural aspects of cases to support swifter resolution and remedies;
  • More proactive support for small and medium-sized enterprises in their compliance efforts, with the Commission encouraging data protection authorities (“DPAs”) to provide tailor-made guidance and tools;
  • Clearer and more actionable guidance from DPAs and the European Data Protection Board, as stakeholder feedback noted that guidance was often overly theoretical, too long and not reflective of the GDPR’s risk-based approach; and
  • More consistent interpretation and enforcement of the GDPR across the EU. Stakeholders flagged current inconsistencies on topics such as legal bases for clinical trials, controller and processor categorizations, and the processing of criminal convictions data.

Takeaway: The Second Report highlights that, while the GDPR has delivered important results thus far, challenges in ensuring consistency across EU member states remain. The Commission has already taken steps to attempt to address this through its proposed procedural rules, but these are still progressing through the EU legislative process. In the meantime, businesses operating across the EU will want to monitor the steady flow of decisions from the Court of Justice of the EU (and our Cyber Bits commentary on those) as it seeks to offer definitive interpretations of GDPR requirements.

Texas Attorney General Settles with Meta Platforms, Inc. for $1.4 Billion

On July 30, 2024, the Texas Attorney General (“Texas AG”) announced a major $1.4 billion settlement with Meta Platforms, Inc. (“Meta”), the largest settlement ever reached by a single state attorney general. The Texas AG alleged that Meta engaged in the unauthorized capture of biometric data in connection with its “tag” feature, improperly (and in many cases automatically) applying facial recognition technology to identify users in photos and videos. Meta settled the matter without any admission of wrongdoing.

Specifically, the Texas AG alleged that Meta, in violation of Texas’s Capture or Use of Biometric Identifier Act (“CUBI”) and Deceptive Trade Practices Act: (i) captured biometric data from user photos and videos without their informed consent and used the data for a commercial purpose; (ii) disclosed such biometric data to third parties; (iii) failed to destroy the biometric data in a reasonable time; and (iv) misrepresented to users how it used biometric data. This is the second time that Meta has been faced with allegations of wrongdoing arising from the use of its “tag” feature. In 2022, Meta agreed to a $650 million settlement for violations of Illinois’ Biometric Information Privacy Act.

Takeaway: CUBI has been on the books since 2009, but the Meta settlement is the Texas AG’s first under CUBI, and the penalties are massive. Coming on the heels of Texas’s state-specific consumer privacy law (the Texas Data Privacy and Security Act) going into effect just last month, the settlement signals that the Texas AG likely intends to actively, and aggressively, pursue privacy and data protection enforcement actions against companies doing business in Texas.

UK Data Regulator Puts Social Media and Video Sharing Platforms on Notice Over Children’s Privacy Practices

The UK Information Commissioner’s Office (“ICO”) has published a blog post about its ongoing review of social media platforms and video sharing platforms under its Children’s Code Strategy (“Review”). This Review is part of the ICO’s broader effort to enhance children’s privacy, one of the ICO’s key priorities for 2024-2025.

The Review involved the ICO assessing 34 social media and video sharing platforms by creating accounts using proxies for children of different ages, replicating the sign-up process that children would follow. The ICO stated that it observed key account settings and the privacy information presented to users, but did not interact with other users.

The Review revealed varying levels of compliance with the Children’s Code, leading the ICO to question eleven platforms about their default privacy settings, geolocation and age assurance practices. The ICO is also speaking to some platforms about targeted advertising. The blog post states that “where platforms do not comply with the law, they will face enforcement action.” The ICO’s Deputy Commissioner stressed that online services and platforms have a duty of care to children and that failures to protect their personal information would lead to action from the ICO.

The Review also identified areas requiring further investigation, particularly the use of children’s personal information in recommender systems (algorithms that analyze user details to tailor content) and recent advancements in age assurance for identifying children under the age of 13. The ICO has opened a call for stakeholder input on these matters, which runs until October 11, 2024.

Takeaway: The Review highlights the proactive way in which the ICO approaches enforcement, creating accounts to observe the process that a child would follow. Businesses operating in areas that are a key priority for the ICO (such as children’s data, online tracking and biometrics) should be particularly mindful of this approach and will want to take proactive compliance efforts with a view to avoiding scrutiny.

FTC Confirms (Again) that Hashing Does Not Render Personal Information Anonymous

On July 24, 2024, the Federal Trade Commission (“FTC”) reaffirmed its 2012 publication, Does Hashing Make Data “Anonymous?,” when it stated on its Technology Blog that hashing sensitive data does not anonymize that data.

Specifically, the FTC explained that data is only anonymized “when it can never be associated back to a person.” The FTC described hashing as running a piece of data through a mathematical function that converts it into a new value. While on its face that value appears random, in reality it is not: the same input will always produce the same output. In the FTC’s view, this means that the “hashed” data is merely a unique identifier from which a person or device can still be traced, meaning that the data is not—in fact—anonymized.
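The determinism the FTC describes can be illustrated in a few lines of Python (the email addresses below are purely illustrative). Because the same input always yields the same hash, the hash serves as a stable pseudonymous identifier, and anyone holding a list of candidate inputs can re-identify it simply by hashing each candidate and comparing:

```python
import hashlib

def hash_identifier(value: str) -> str:
    """Deterministically hash an identifier (e.g., an email address)."""
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

# The same input always produces the same output...
h1 = hash_identifier("alice@example.com")
h2 = hash_identifier("alice@example.com")
assert h1 == h2  # deterministic: a stable pseudonym, not random noise

# ...so a party with a list of candidate emails can "reverse" the hash
# by hashing each candidate and matching it against the stored value.
candidates = ["bob@example.com", "alice@example.com", "carol@example.com"]
lookup = {hash_identifier(c): c for c in candidates}
recovered = lookup[h1]
print(recovered)  # prints "alice@example.com" -- the hash is re-identified
```

This is a sketch of the general principle, not a description of any particular company’s practice: hashing obscures the literal value but preserves linkability, which is why the FTC treats hashed identifiers as personal information rather than anonymous data.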

The FTC has actively brought cases against companies that relied on hashing or similar techniques to downplay data sensitivity, as seen in its actions against Nomi Technologies (2015), BetterHelp (2022), and InMarket (2024), making clear that the “opacity of an identifier cannot be an excuse for improper use or disclosure.”

Takeaway: In the FTC’s view, personal information is only truly rendered anonymous when it cannot ever be traced back to an individual, with no exceptions. There is not universal agreement as to whether, as a practical matter, data hashed using certain methods could actually be re-identified, and some government agencies may be oversimplifying that process. Nonetheless, companies relying on hashing and other obfuscation techniques will want to assess their specific protocols to determine whether the underlying data is properly characterized in their disclosures, in line with current regulatory thinking.

Dechert Tidbits

Ill. Gov. Pritzker Signs BIPA Amendments Clarifying Liability Rules

On August 2, 2024, Illinois Governor J.B. Pritzker signed PA 103-0769 into law, modifying the Biometric Information Privacy Act (“BIPA”) to drastically reduce statutory damages under BIPA from being calculated per incident to being calculated per person.

Kids Online Safety and Privacy Act Passed by U.S. Senate

On July 30, 2024, the U.S. Senate passed a bipartisan bill combining the proposed Kids Online Safety Act, the Children and Teens’ Online Privacy Protection Act, and the Filter Bubble Transparency Act, with the goal of enhancing the Federal Trade Commission’s ability to protect children’s privacy and safety online. The legislation now moves to the U.S. House, where its prospects are uncertain.

NIST Provides Hundreds of Recommendations to Address Generative AI Risks

The National Institute of Standards and Technology (“NIST”) has released over 200 recommendations to mitigate risks associated with generative AI. The suggested actions, which cover data privacy, intellectual property and environmental impact, are not binding, but a number of major tech companies have voluntarily agreed to follow government guidelines regarding the responsible use of AI. Key suggestions include documenting data origins, using well-defined contracts, conducting regular AI output assessments and addressing cybersecurity and environmental concerns.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Dechert LLP
