Dechert Cyber Bits - Issue 63

Dechert LLP

FTC Staff Report on Social Media Platforms’ Privacy and Security Practices

On September 19, 2024, the Federal Trade Commission (“FTC” or the “Commission”) announced the release of its staff report, “A Look Behind the Screens: Examining the Data Practices of Social Media and Video Streaming Services” (the “Report”).

The 129-page Report examined certain social media and video streaming companies (the “Companies”) and their platforms (the “Platforms”) and found, among other things, that:

  • The Companies collected and retained “troves of data” from both users and nonusers, yet failed to adequately control and handle it;
  • The Companies engaged in extensive targeted advertising as one of their main sources of revenue;
  • Both users and non-users of the Platforms had their data collected and fed into algorithms and artificial intelligence (“AI”) systems; and
  • The Companies treated teens on the Platforms like adults.

Based on these findings, the Commission made several recommendations, including calls that: (i) Congress pass comprehensive federal privacy legislation to limit surveillance, establish baseline protections, and grant consumers data rights; (ii) companies implement appropriate data collection and retention policies; (iii) companies not use tracking technologies to collect sensitive information; and (iv) companies adopt greater protections for teenage users. While all five FTC Commissioners voted to issue the Report, four Commissioners issued separate statements. Notably, Commissioners Holyoak and Ferguson each issued partial dissents expressing concerns about suppressing online free speech and misclassifying advertising AI systems as harmful.

Takeaway: While the FTC used its 6(b) authority to investigate these technology companies, the FTC could in the future bring enforcement actions under Section 5 of the FTC Act against any company that does not implement the recommendations from the Report. Prudent companies should consider conducting a gap analysis of their practices as compared to the Report’s findings to determine their risk level and implement fixes as needed.

California Legislature Passes Several New AI Laws, But Governor Vetoes the Most Controversial Measure

California’s legislature was active in the AI space this year, advancing four measures to Governor Gavin Newsom’s desk for signature this fall. Governor Newsom signed three of these measures into law but vetoed the most sweeping—and controversial—of them, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (the “Bill” or “SB 1047”).

On September 19, 2024, Governor Newsom signed three narrow bills aimed at addressing ethical concerns surrounding AI and protecting individuals from the misuse of digital content. The new laws require AI-generated content to be watermarked for transparency (SB 942), criminalize the creation and distribution of AI-generated sexually explicit images intended to cause serious emotional distress (SB 926), and require social media platforms to address reports of such content and to block and delete it where necessary (SB 981). SB 942 will go into effect on January 1, 2026, and SB 926 and SB 981 will go into effect on January 1, 2025.

While the Governor signed these three bills, he did not sign SB 1047, a far-reaching AI safety measure. The Bill, which was first introduced by Senator Scott Wiener (D-San Francisco) and passed the California Senate 37-1 with strong bipartisan support, sought to impose various requirements on developers of AI models, including “implementing the capability to promptly enact a full shutdown” of an AI model and “implement[ing] a written and separate safety and security protocol” regarding the model. The Bill also would have required companies to take “reasonable care” to prevent their AI models from causing catastrophic harm. The Bill likewise sought to create a new state agency, the Board of Frontier Models within the Government Operations Agency, which would have been tasked with developing a framework to “advance the development and deployment of artificial intelligence that is safe, ethical, equitable, and sustainable[.]”

SB 1047 divided tech companies and legislators alike. For example, while Elon Musk and AI developer Anthropic voiced support for the Bill—with some caveats—numerous other technology giants, such as OpenAI and Meta, penned letters urging Governor Newsom to veto the measure. Though Governor Newsom indicated that SB 1047 was “well-intentioned,” he did not think that the Bill was “the best approach to protecting the public[.]” Governor Newsom expressed support for measures regulating the AI space but cautioned that such measures “must be based on empirical evidence and science.” He also outlined various measures in furtherance of developing such evidence and science, but concluded that at this juncture, he could not sign SB 1047 into law.

Takeaway: The AI bills that were passed by the California legislature and signed by the Governor were narrow in scope and use-case specific. SB 1047, on the other hand, was an attempt to regulate how the AI industry should build its technology, starting with its most powerful models. Ultimately, the Governor found the Bill’s approach to be incompatible with the kind of regulation he believes the AI market needs at this moment—namely, laws capable of “protecting against actual threats without unnecessarily thwarting the promise of this technology to advance the public good.” California’s experience suggests that although state legislatures appear eager to draft their own legislation governing AI, such regulations may be further away than most expect. But with 32 of the 50 leading AI companies based in California, expect California’s government to continue to be a leading voice in determining the future shape of AI regulation.

“Operation AI Comply”: The FTC Targets 5 Companies for “Deceptive AI” Practices in Massive Sweep

On September 25, 2024, the Federal Trade Commission (“FTC”) announced five enforcement actions against companies alleged to have engaged in deceptive and unfair consumer practices through the use or sale of artificial intelligence (“AI”)—an enforcement sweep that the FTC has titled “Operation AI Comply.” Those included in the sweep were: (i) DoNotPay; (ii) Ascend Ecom; (iii) Ecommerce Empire Builders; (iv) Rytr; and (v) FBA Machine. The allegations in each action are briefly summarized here.

  • DoNotPay. DoNotPay uses AI technology to offer legal services, claiming to provide “the world’s first robot lawyer.” In its complaint, the FTC alleged that DoNotPay overstated the capabilities of its services because it did not conduct testing on the quality of the AI’s output or retain attorneys in connection with its business. The parties have agreed to a proposed order in which DoNotPay must, among other things: (i) pay $193,000; and (ii) cease making misrepresentations regarding the AI’s abilities. DoNotPay does not admit to any wrongdoing in connection with this matter.
  • Ascend Ecom. The FTC alleged that Ascend Ecom represented to consumers that its AI technology could earn them passive income through online storefronts. In its complaint, the FTC further alleged, among other things, that Ascend Ecom defrauded consumers out of $25 million through unsubstantiated claims regarding its AI-powered tools and persuaded consumers to withhold negative reviews. The U.S. District Court for the Central District of California has issued a temporary restraining order prohibiting Ascend Ecom from continuing the alleged scheme. Ascend Ecom disputed the allegations.
  • Ecommerce Empire Builders (“EEB”). EEB offers customers “AI-powered Ecommerce Empire[s]” which customers can build by enrolling in training programs or purchasing already-made store fronts. In its complaint, the FTC alleged, among other things, that EEB did not have evidence to substantiate its money-making claims regarding its AI technology and customers made little to no money on their storefronts. The company disputed the allegations. The U.S. District Court for the Eastern District of Pennsylvania has issued a temporary restraining order prohibiting EEB from continuing the alleged scheme.
  • Rytr. Rytr offers customers an AI “writing assistant” that can be used for, among other things, creating testimonials and reviews based upon a generic input. In its complaint, the FTC alleged that the reviews were false because they were not related to the users’ input and that customers could use the product to mass-produce false reviews. The parties agreed to a proposed order which, among other things, prohibits Rytr from marketing or selling any generated testimonial service. Commissioners Holyoak and Ferguson issued dissenting statements, with Commissioner Holyoak’s found here and Commissioner Ferguson’s found here. Both took the position that the action is inconsistent with the FTC’s Section 5 authority and bad for innovation. Rytr did not admit to any wrongdoing in connection with the matter.
  • FBA Machine. FBA Machine represented to customers that its AI-powered storefronts would generate guaranteed income akin to a “7-figure business” and marketed the scheme as risk free. In its complaint, the FTC alleged that FBA Machine defrauded its customers out of more than $15.9 million through unsubstantiated claims regarding its AI-powered tools. FBA Machine disputed the allegations. The U.S. District Court for the District of New Jersey issued a temporary restraining order prohibiting FBA Machine from continuing its alleged scheme.

Takeaway: The FTC will continue to scrutinize the marketing claims that companies make about their respective AI products, solutions, and services. Simply put, companies should have a reasonable basis for the claims they make about their AI technologies, and feel-good or hyperbolic marketing speak (e.g., “our solution can replace a live human!”) can turn into a costly enforcement action. Operation AI Comply builds on the FTC’s earlier enforcement actions involving AI, beginning with Rite Aid in late 2023, which we covered here, and which sets forth what the FTC considers the baseline for a “comprehensive algorithmic fairness program.” Companies may want to revisit their AI marketing claims to confirm that they meet the FTC’s expectations and that their statements regarding their uses of AI are accurate.

Texas Attorney General and Pieces Technology Reach First-Of-Its-Kind Generative AI Settlement

On September 18, 2024, Texas Attorney General Ken Paxton (“Texas AG”) announced a first-of-its-kind settlement with Pieces Technology (“Pieces”), a healthcare artificial intelligence (“AI”) company that creates generative AI technology to assist providers with charting and drafting clinical notes in inpatient medical facilities and hospitals. According to the Texas AG, Pieces violated the Texas Deceptive Trade Practices-Consumer Protection Act (“DTPA”) by misrepresenting the precision of its AI through its statement that the AI had a hallucination rate of less than 1 per 100,000, thereby deceiving hospitals “about the accuracy and safety of the company’s products.” After investigating, the Texas AG alleged that Pieces’ metrics regarding the hallucination rate were likely inaccurate but did not go into detail on this point in the settlement. Pieces “vigorously denies” wrongdoing in connection with the matter and, in a public comment, stated that it “accurately set forth and represented its hallucination rate.”

Under the Assurance of Voluntary Compliance (“Assurance”), Pieces is required to, among other things: (i) clearly and conspicuously disclose the meaning and definition of any metrics used in marketing and the method or procedure used to calculate those metrics; (ii) cease making misrepresentations regarding its products; and (iii) clearly and conspicuously disclose any harmful uses or misuses of its products to current and future customers. No monetary penalty was imposed.

Takeaway: The Texas AG’s action and settlement, in conjunction with the FTC’s recent Operation AI Comply, discussed above, make clear that companies should carefully vet their marketing claims concerning AI. This action follows the Texas AG’s recent comments regarding how to protect Texas residents against the misuse of AI. Those operating in Texas should tread carefully, as the Texas AG’s office has proven to be one of the most active and aggressive enforcers of state consumer protection laws in the technology space in recent months, as we covered here.

Dechert Tidbits

Irish Data Regulator Fines Meta €91 Million For GDPR Security Violations

The Irish Data Protection Commission (“DPC”) announced a final decision after an inquiry into Meta Platforms Ireland Limited’s (“MPIL”) GDPR compliance that was initiated in 2019. The inquiry began after MPIL reported that it had inadvertently stored user passwords without cryptographic protection or encryption. The decision, which emphasized the principles of integrity and confidentiality, resulted in a reprimand and a €91 million fine for MPIL’s failure to implement appropriate security measures and properly document and report personal data breaches.

EU AI Pact Gains Over 100 Signatories

The European Commission announced that over 100 companies, including multinationals and SMEs from various sectors, have signed the EU AI Pact and its voluntary pledges. The Pact encourages early adoption of the AI Act’s principles, with a focus on AI governance, mapping of high-risk AI systems, and enhancing AI literacy. Additional commitments include human oversight, risk mitigation, and transparent labeling of AI-generated content. The European Commission also launched the AI Factories initiative to drive AI innovation in key sectors such as healthcare, energy, defence, and aerospace.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Dechert LLP
