Articles in this issue
- European Parliament Approves EU AI Act
- U.S. House of Representatives Passes Bill to Prevent Sales of Americans' Data to "Foreign Adversaries"
- CJEU Ruling Pressures IAB Europe To Reform Adtech Framework
- EU Data Regulators Can Order Erasure of Personal Data Even If Not Requested by Data Subject
- California Privacy Protection Agency Releases Strategic Plan for 2024-2027
- Dechert Tidbits
European Parliament Approves EU AI Act
On March 13, 2024, the European Parliament approved the EU Artificial Intelligence Act (“AI Act”). A first-of-its-kind legal framework for AI, the AI Act has extraterritorial effect, impacting EU and non-EU businesses that develop, provide, or use AI within the EU. In its current version, serious breaches of the AI Act can result in fines of up to 7% of global annual turnover or EUR 35 million (whichever is higher). The AI Act adopts a tiered, “risk-based” approach, classifying AI systems into four categories:
- Prohibited AI: Certain AI systems that are considered to threaten individuals’ fundamental rights are banned. These include biometric categorisation systems based on sensitive characteristics, systems that manipulate human behaviour or exploit vulnerabilities, systems for emotion recognition in the workplace and schools, and social scoring.
- High Risk AI: AI systems in this category include those used in critical infrastructure sectors such as energy and transport as well as the insurance and banking sectors, education assessments, recruitment and employment. They are subject to strict requirements, including technical documentation, data governance, human oversight, security, conformity assessments, and reporting obligations. This category is the focus of the AI Act and is subject to the most extensive obligations.
- General Purpose AI Models: These are models that display significant generality and can perform a wide range of distinct tasks and be integrated into a variety of systems. They must meet certain transparency requirements, assess and mitigate risks, and undertake testing, among other things.
- Other/Low Risk AI: Examples include AI systems such as chatbots or those performing general tasks like content creation or image and speech recognition. The primary obligations for these systems revolve around ensuring transparency for the end-user (i.e., informing the end user that they are interacting with an AI system).
The AI Act is now undergoing final legal-linguistic checks and awaits the Council of the EU’s formal endorsement before publication in the Official Journal, expected around May or June 2024. It will enter into force 20 days after publication, and its requirements will then be gradually phased into effect, with most provisions applicable within 24 months.
Takeaway: Businesses developing or using AI will want to begin mapping their current usage of AI, determining whether they fall within the territorial scope of the AI Act, and assessing which risk-level category each AI deployment falls under, so that they are in compliance as the requirements phase into effect. Developers of AI (in particular of systems that would fall within the high-risk category) will want to keep the requirements of the AI Act front of mind during the development process, so that such systems are well positioned to comply as the AI Act comes into effect rather than requiring potentially costly fixes after implementation. Early preparation will also position businesses favorably in navigating the AI Act’s anticipated “Brussels effect” across different regions.
U.S. House of Representatives Passes Bill to Prevent Sales of Americans' Data to "Foreign Adversaries"
On March 21, 2024, the U.S. House of Representatives passed the Protecting Americans’ Data from Foreign Adversaries Act of 2024 (“H.R. 7520” or the “Bill”). H.R. 7520 focuses on the bulk sale of data to “foreign adversary” countries, such as China, North Korea, Iran, and Russia. The Bill would make it unlawful for data brokers to provide a foreign adversary, or any company in which a foreign adversary holds a 20% or greater ownership interest, with access to the “sensitive data” of any U.S. person. The Bill defines “sensitive data” broadly to include, among other things, government identification numbers, health data, financial data, and log-in credentials. A U.S. person’s private communications, calendar information, intimate imagery, and information identifying online activities over time and across websites are also protected under the Bill.
H.R. 7520 would empower the U.S. Federal Trade Commission (“FTC”) to treat violations of the ban on sensitive data disclosure as an unfair trade practice under the FTC’s existing enforcement powers. H.R. 7520 does not yet have a counterpart in the U.S. Senate and its fate therefore remains uncertain.
Takeaway: H.R. 7520 demonstrates continuing concerns over the collection and sale of U.S. consumer data, and reflects an increasing focus of lawmakers on the dangers associated with U.S. consumer data falling into the hands of foreign adversaries. Data brokers and companies that share sensitive personal information need to be aware of who is purchasing their data, and whether it was lawfully acquired, as legislative and regulatory focus on the data broker space continues unabated.
CJEU Ruling Pressures IAB Europe To Reform Adtech Framework
The Court of Justice of the European Union (“CJEU”) ruled on the Transparency and Consent Framework (“TCF”) operated by IAB Europe, the European-level trade association for the digital marketing and advertising industry. In 2022, the Belgian data regulator fined IAB Europe for GDPR violations. IAB Europe appealed, and the appeal court referred questions to the CJEU. The TCF was designed to support compliance with the GDPR in relation to consents for targeted advertising by using ‘TC Strings,’ digital signals that capture website users’ preferences about how their data is used.
The CJEU held that TC Strings constitute personal data where the information contained in the string may be linked to an identifier, such as an IP address, that enables the data subject to be identified. This is the case even if IAB Europe cannot access the data used by its members.
The CJEU also indicated that, through the TCF, IAB Europe exerts sufficient influence over the recording of preferences in a TC String for IAB Europe to be a ‘joint controller’ under the GDPR (even where IAB Europe does not itself have access to the data). IAB Europe is not a controller, however, in relation to the subsequent use of the preference information by IAB members and others in the adtech ecosystem (such as data brokers and advertising platforms).
Takeaway: The CJEU’s comments and factual assumptions are subject to confirmation by the Belgian court, but the CJEU has given a clear steer that IAB Europe is a joint controller in connection with the TCF. This will put pressure on IAB Europe to remedy privacy issues with the TCF. Organizations that participate in the TCF can take comfort in the fact that an action plan for bringing the TCF into compliance has been approved by the Belgian data regulator.
EU Data Regulators Can Order Erasure of Personal Data Even If Not Requested by Data Subject
The Court of Justice of the European Union (“CJEU”) has confirmed that data regulators have the power to order erasure of unlawfully processed personal data of their own volition.
In 2020, a Hungarian municipal administration decided to provide financial support to certain individuals who had been made vulnerable by the COVID-19 pandemic. To verify eligibility for this aid, the administration requested personal data from other Hungarian state institutions. The Hungarian data regulator launched an investigation into the scheme and found that the municipal administration had failed to comply with its transparency obligations under the GDPR. It fined the administration and ordered erasure of the unlawfully processed data.
The municipal administration appealed the erasure order relying on a decision of the Hungarian Supreme Court that found that erasure was a right of data subjects and could not be enforced by data regulators if the data subject had not exercised that right. The CJEU disagreed, finding that even if a data subject has not made an erasure request, the GDPR imposes an obligation on data controllers to erase personal data that is processed unlawfully, and regulators are empowered to order compliance with that erasure obligation.
Takeaway: Finding that data regulators can independently order the deletion of unlawfully processed personal data is a logical outcome, and the CJEU has confirmed an important power in a regulator’s enforcement arsenal. The CJEU’s decision also emphasizes that, rather than waiting for data subjects to exercise their rights, organizations would be well-advised to pro-actively manage their data protection compliance, including on issues where data subjects have express rights.
California Privacy Protection Agency Releases Strategic Plan for 2024-2027
The California Privacy Protection Agency (“Agency”) released its strategic plan for 2024-2027 (the “Plan”). The Plan outlines the Agency’s goals to: (1) “strengthen public education, outreach and engagement,” (2) “vigorously enforce privacy laws,” (3) “strengthen Californians’ privacy rights,” and (4) ensure “operational excellence.” The Plan also lays out the Agency’s approach to enforcing the California Consumer Privacy Act (“CCPA”).
The Plan includes objectives designed to effectuate each of the four goals. For example, to “vigorously enforce the law,” the Plan sets out several objectives, including advancing “strategic enforcement priorities that will provide the greatest impact to Californians,” undertaking successful “enforcement actions” to “protect consumers through quality, diligent, and timely investigations,” and identifying “trends through complaint data and adjust[ing] audit and enforcement protocols to mitigate consumer harm.” Further, to “strengthen public education, outreach, and engagement,” the Plan proposes developing “supplemental business guidance” and instituting “a statewide public education campaign.”
Takeaway: The Agency’s Plan, which the Agency calls a “road map for the future,” is ambitious. It emphasizes development of protocols and processes intended to foster a robust regulatory landscape, including measures meant to facilitate timely responses to privacy-related issues, encourage cross-industry collaboration, and bolster partnerships between the Agency and non-government entities. A key objective of the Plan is encouraging compliance by empowering consumers through educational efforts and by facilitating businesses’ understanding of their obligations through “supplemental business guidance.” Enforcement, notably, remains a priority: the Plan commits the Agency to protecting consumer privacy rights through “engagement with the regulated community, timely investigations, and enforcement actions.”
Dechert Tidbits
UN Passes Resolution Promoting Collective Action on "Safe, Secure and Trustworthy" AI
On March 21, 2024, the United Nations General Assembly unanimously adopted the first global resolution on artificial intelligence (“AI”). In a joint statement from the resolution’s co-sponsors, the United States explained that the resolution, titled "Seizing the Opportunities of Safe, Secure, and Trustworthy Artificial Intelligence Systems for Sustainable Development," calls on Member States to promote AI systems that are safe, secure, and trustworthy. The resolution also seeks to guide Member States’ leveraging of AI in their efforts against poverty, global health inequality, food insecurity, and education inequality.
EDPS Criticizes Council of Europe's Proposal for Convention On AI
The Council of Europe is negotiating a “Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law” that would oblige members of the Council of Europe to respect human dignity, the rule of law, and democratic principles when artificial intelligence is used. The European Data Protection Supervisor has expressed concern that the value of the treaty has been undermined by limiting its scope to public bodies (with merely an option for member states to opt in private companies) and by excluding technologies developed for national security.
New Hampshire Enacts Privacy Legislation
New Hampshire Governor Chris Sununu signed SB 255, the state’s first consumer privacy law, into law earlier this month. Similar to other U.S. state privacy laws, SB 255 empowers New Hampshire consumers to access the personal data companies process, understand how that data is processed, and delete that data upon request. The law also contains data minimization principles. The law does not apply to financial institutions or data regulated by the federal Gramm-Leach-Bliley Act. Violations of the law will be enforced exclusively by the New Hampshire Attorney General. SB 255 takes effect on January 1, 2025.