Dechert Cyber Bits - Issue 56

Dechert LLP

Articles in this Issue
  • SEC Fines the New York Stock Exchange’s Parent Company $10 million for Failure to Promptly Notify Its Subsidiaries of Cybersecurity Breach
  • UK Data Regulator Consults on Individuals’ Rights in Generative AI
  • Colorado Governor Signs AI Bill
  • FTC Warns Vehicle Manufacturers Regarding Collection and Misuse of Consumer Data
  • FTC Releases Fiscal Year 2023 Annual Report
  • Dechert Tidbits

SEC Fines the New York Stock Exchange’s Parent Company $10 million for Failure to Promptly Notify Its Subsidiaries of Cybersecurity Breach

On May 22, 2024, the Securities and Exchange Commission (“SEC”) imposed a $10 million fine on the Intercontinental Exchange (“ICE”) alleging that due to ICE’s failure to promptly notify its subsidiaries (including the New York Stock Exchange) of a 2021 breach, ICE caused those subsidiaries to fail to report the breach to the SEC in violation of the SEC’s Regulation Systems Compliance and Integrity Rule (“Reg SCI”).

The breach at issue involved the exploitation of a zero-day vulnerability in one of ICE’s virtual private network (“VPN”) devices that allowed a threat actor to install malicious code on the VPN device in an attempt to illicitly collect information passing through that device. The SEC alleged that ICE was notified of the vulnerability by a third party and was immediately able to determine that a VPN device used to remotely access ICE’s corporate network was infected with malicious code. While ICE eventually determined that the incident was a de minimis event – one that had no or minimal impact – ICE did not inform its subsidiaries of the intrusion until after it had made that determination, four days later. The SEC concluded that, under Reg SCI, ICE’s subsidiaries should have reported the breach to the Commission because the de minimis determination was not made immediately upon discovering the intrusion. Because ICE did not inform the legal and compliance officers at its wholly owned subsidiaries – in violation of its own cyber incident reporting policies – the subsidiaries were in turn prevented from fulfilling their obligations to notify the Commission of the event.

In its press release, the SEC’s enforcement division emphasized that prompt notifications of breaches are crucial for protecting markets and investors, especially for critical market intermediaries. ICE and its subsidiaries, without admitting or denying the SEC’s findings, agreed to a cease-and-desist order in addition to the monetary penalty levied on ICE.

Takeaway: This enforcement action reflects the SEC’s current aggressive enforcement posture. This is the new reality under the recently adopted, more onerous SEC reporting obligations, which require companies to assess and react, even before they have accurate and full information, and often while the team is busy trying to thwart the attack. It also underscores the need for companies to know how they will respond before an incident occurs and how to document their decisions along the way, to fend off the kind of Monday morning quarterbacking that many regulators engage in. Incident response plans should reflect familiarity with obligations informed not only by the relevant regulations, but also by internal company policies. Lastly, the action underscores the need to follow your own policies in a breach.

UK Data Regulator Consults on Individuals’ Rights in Generative AI

As part of its series of consultations on privacy in the context of generative AI, the UK Information Commissioner’s Office (“ICO”) considered how generative AI developers and users can comply with requests from individuals to exercise their rights under data protection laws, such as the right to be given access to their data and the right to rectification of inaccurate personal data.

In the context of generative AI, personal data can be included in training data, outputs of generative AI systems and user queries. In its analysis, the ICO highlighted that generative AI developers often collect personal data directly from individuals or from other sources such as web-scraping. The ICO emphasized that in both kinds of cases, developers must provide clear information to individuals about how their data is processed. The ICO also invited views on several key areas, including the effectiveness of input and output filters in preventing the unintentional output of personal data, and the use of privacy-enhancing technologies and pseudonymization techniques to limit data identifiability.

Takeaway: The ICO’s consultation on generative AI underscores the importance of data protection and individual rights in the development and use of AI, as well as the ICO’s position as the de facto AI regulator in the UK. The inclusion of data subject rights in the ICO’s generative AI consultation series may indicate that this is an area it will be monitoring particularly closely as the use of AI continues to expand. Businesses looking to use generative AI tools will therefore want to ensure that they cover compliance with rights requests as part of their due diligence and assessment of a proposed AI tool.

Colorado Governor Signs AI Bill

On May 17, 2024, Colorado Governor Jared Polis signed Senate Bill 24-205, Consumer Protections for Artificial Intelligence (“Colorado AI Act”) into law. As it stands, the law will not go into effect until February 1, 2026, just short of two years from now. However, Governor Polis also released a signing statement outlining his reservations surrounding the passage of this first-of-its-kind artificial intelligence (“AI”) legislation.

The Colorado AI Act seeks to eliminate bias and discrimination within AI decision making. The Act focuses predominantly on automated decision-making systems; specifically, systems that are designated as “high risk” because, when deployed, they either: (i) make consequential decisions; or (ii) are substantial factors in such decisions. Under the statute’s terms, “consequential” decisions are those that pertain to, among other things, an individual’s education, employment, or health care.

Among other things, the Colorado AI Act requires: (i) developers to provide extensive disclosure packets to deployers regarding the creation and training of high-risk AI systems; (ii) deployers to implement a risk management policy and program if using high-risk AI systems; and (iii) deployers and developers to make clear to consumers when an AI system is being used for a task, regardless of whether it is considered high risk.

In his signing statement, Governor Polis acknowledged concerns surrounding the passage of the Colorado AI Act, including the potential for regulation to chill innovation or hamper technological development. He called on the federal government to pass nationwide legislation that would pre-empt the Colorado AI Act to promote regulatory cohesion amongst the states, and he requested that the Colorado General Assembly improve the law prior to it taking effect. It is unclear what, if anything, the Assembly will do to address the Governor’s concerns.

Takeaway: Colorado is the first state to pass new legislation regarding the regulation of AI. So far, efforts to create federal legislation relating to data protection and AI have failed, and the passage of this law could mark the beginning of a patchwork of AI regulation on a state-by-state basis. State data breach notification laws began appearing in the mid-aughts, and more recently, after the federal government again failed to act, 18 states have enacted comprehensive privacy laws (starting with the CCPA in California). This may be a case of déjà vu.

FTC Warns Vehicle Manufacturers Regarding Collection and Misuse of Consumer Data

On May 14, 2024, the Federal Trade Commission (“FTC”) issued a Technology Blog publication regarding the collection of sensitive consumer data by manufacturers of “connected cars.” Connected cars allow a consumer to, among other things, remotely unlock or lock a vehicle or check the location of a vehicle on a smartphone device.

The FTC noted that these features provide connected vehicle manufacturers with access to an assortment of sensitive consumer data – including biometric, geolocation, and personal data – and raised concerns over the collection, storage, and use of this data. The FTC highlighted three takeaways from recent enforcement actions, warning manufacturers that: (i) geolocation data is considered sensitive and receives enhanced protection (X-Mode; InMarket); (ii) the covert disclosure of sensitive information is an unfair practice (BetterHelp; Cerebral); and (iii) it can be unlawful to make automated decisions with the use of sensitive information (Rite Aid).

Takeaway: This warning indicates that the FTC is cracking down on data collection by vehicle manufacturers. We anticipate that investigations, enforcement actions and settlements are likely to follow. Given this emphasis, connected car manufacturers will want to be on high-alert regarding their data collection and disclosure practices and take steps to proactively remediate issues identified in the cited FTC orders. If vehicle manufacturers are not clear on exactly what data is collected, what its source is or what the disclosures are surrounding it, now is the time to review all of those things—from gathering the relevant facts to policy review.

FTC Releases Fiscal Year 2023 Annual Report

On May 15, 2024, the Federal Trade Commission (“FTC”) released its Fiscal Year 2023 Annual Report (“Annual Report”) outlining its efforts to protect consumers from unlawful practices amid a tide of technological advancements. As part of this effort, the FTC reported that it filed 43 complaints in federal court, entered 19 administrative orders, and reviewed compliance in 370 matters. Enforcement actions focused on, among other issues, alleged COPPA violations, alleged misuses of health and other sensitive personal data, and cases involving allegedly problematic data security practices.

The Annual Report also recognized the agency’s development of an aggressive and adaptive approach to combating allegedly unlawful practices in an advancing technological landscape. According to FTC Chair Lina Khan, in Fiscal Year 2023 the FTC was able to “stay on the cutting edge of next-generation technologies,” as “artificial intelligence and algorithmic decision-making tools [] proliferat[ed].” On the same day that the FTC released its Annual Report, Chair Khan testified before the House Appropriations Subcommittee on Financial Services and General Government to discuss her request for an increase to the FTC’s budget in Fiscal Year 2025, during which she described her agency’s need for “critical IT investments” to keep up with “big data.”

Takeaway: The FTC has aggressively pursued allegedly wrongful practices regarding the collection and use of consumer data. The Annual Report makes clear that future enforcement targets may include what the Commission regards as problematic uses of AI technology and algorithmic tools. We expect the FTC will continue its active enforcement unabated.

Dechert Tidbits

EU Launches AI Office to Shape Future AI Governance

On May 29, 2024, the European Commission launched the new AI Office to support the development of safe and trustworthy AI and to ensure the coherent implementation of the AI Act, expected to come into force in the coming weeks. The AI Office will operate as the EU’s centre of AI expertise, providing advice on best practices for AI adoption, regulation, compliance, safety, and innovation.

UK Data Protection Reform Dropped

Proposed legislation to amend the UK GDPR has been dropped from the parliamentary agenda ahead of the UK parliamentary election due to take place on July 4, 2024. If, as widely anticipated, the current Conservative government is not re-elected, it seems unlikely that the proposed legislation will be revived.

Vermont Passes Comprehensive Privacy Law with a Private Right of Action

On May 10, 2024, the Vermont House and Senate passed H.121, the Vermont Data Privacy Act (“VDPA”). The VDPA is slated to take effect on July 1, 2025, but is still awaiting signature by Governor Phil Scott. If signed, the VDPA would become one of the most comprehensive data privacy laws in the country because of, among other things, its: (i) private right of action; (ii) eventual application to all businesses serving state residents; and (iii) data minimization standards. If signed as is, the private right of action would be of limited duration, taking effect in 2027 and expiring in 2029.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Dechert LLP | Attorney Advertising
