In a significant move to regulate the growing impact of artificial intelligence, Oregon lawmakers recently passed Senate Bill 1571, which requires campaigns to disclose when they use AI to manipulate audio or video, including deepfakes, to influence voters. Although SB 1571 applies only to political campaigns, the Attorney General has issued guidance that may help businesses minimize their legal risks in connection with the use of AI.
In an official Guidance document issued on December 24, Oregon Attorney General Ellen Rosenblum acknowledged both the benefits and the risks of AI. She recognized AI's ability to streamline tasks and analyze data, but also highlighted significant concerns surrounding privacy, discrimination, and accountability. Oregon businesses should be vigilant about how AI intersects with the Oregon Unlawful Trade Practices Act, the Oregon Consumer Privacy Act, Oregon's data breach notification statute, and the Oregon Equality Act. Below, we break down how these laws apply to AI and discuss ways businesses can reduce their legal exposure.
Oregon’s Unlawful Trade Practices Act
The UTPA was designed to protect consumers against unfair and deceptive business practices, including misrepresentations in consumer transactions. It mirrors Section 5 of the Federal Trade Commission Act and is intended to reach emerging technologies like AI. The Guidance issued by AG Rosenblum identifies several scenarios in which the UTPA would directly apply to AI technology, including the following:
- Known material defects. AI developers, as well as companies that deploy AI in their operations (referred to in the Guidance as "deployers"), may be held liable under ORS 646.608(1)(t) if their product regularly generates false or misleading information and they fail to disclose these limitations to purchasers and users.
- Misrepresentation of characteristics, benefits, uses, or sponsorship. Developers or deployers who falsely claim that their AI has specific characteristics, uses, benefits, or qualities, or who falsely claim sponsorship, approval, affiliation, or connection, may be held liable under ORS 646.608(1)(e). For example, using deepfakes to fabricate a celebrity endorsement or affiliation may directly violate this provision.
- False urgency. Developers or deployers who use AI to falsely claim that a discount is offered for a limited time, when a similar discount is available year-round, may create a deceptive "flash sale" in violation of ORS 646.608(1)(j).
- Price gouging. Companies that use AI to set unconscionable prices during a state of emergency may violate ORS 646.607(3).
- AI-generated robocalls. AI-generated voice calls or robocalls containing false information may violate ORS 646.608(1)(ff).
- Unconscionable tactics. Using AI to knowingly take advantage of a consumer's ignorance, or to knowingly permit a consumer to enter into a transaction from which the consumer receives no material benefit, may subject a company to liability under ORS 646.607(1).
Oregon Consumer Privacy Act
The OCPA imposes strict requirements on the use of consumer data and is particularly relevant when that data is used to train AI systems. The OCPA requires clear and conspicuous privacy notices, and AG Rosenblum makes clear that this requirement is especially pertinent for AI developers and deployers seeking to train their models with consumer data. Liability under the OCPA may arise in the following areas:
- Notice and consent. AI developers and deployers must disclose the use of personal data in a clear and conspicuous privacy notice. The notice must clearly disclose that the business intends to use personal information to train its AI and must clearly explain to consumers their statutory rights. These rights include (1) the right to know whether the company is processing their data, (2) the right to request access to their data, (3) the right to amend and correct inaccuracies, (4) the right to delete their personal data, and (5) the right to opt out of the use of AI models for profiling in decisions that have legal or similarly significant impacts like housing, education, or lending.
- Sensitive data. AI developers and deployers who use sensitive data as specified under the OCPA must first obtain explicit consent before using the data to train their AI model.
- Controller liability. Developers that purchase or use another company's data set for model training may qualify as a "controller" under the OCPA, meaning they are treated as the entity that determines the purposes and means of processing the personal data.
- Retroactive privacy notices. The OCPA prohibits retroactive privacy notices that purport to legitimize the use of previously collected personal data to train AI models. Developers must instead obtain explicit, affirmative consent for the secondary use of previously collected data and must give consumers a mechanism to withdraw that consent (see the sketch after this list).
- Data protection assessments. Oregon businesses must conduct a data protection assessment before processing personal data for profiling or other activities that present a "heightened risk." Notably, AG Rosenblum considers the use of personal data to train AI models to present such a heightened risk.
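To make the consent and secondary-use rules above concrete from an engineering standpoint, here is a minimal, hypothetical sketch of how a deployer might gate records before including them in an AI training set. All names (TrainingRecord, is_trainable, and so on) are illustrative assumptions, not terms from the OCPA or the Guidance; whether any given pipeline actually complies is a legal question for counsel, not a function of this code.

```python
from dataclasses import dataclass

@dataclass
class TrainingRecord:
    """Hypothetical per-consumer consent state; field names are illustrative only."""
    consumer_id: str
    collected_for: str            # purpose disclosed at collection
    sensitive: bool               # "sensitive data" as defined by the OCPA
    explicit_consent: bool        # affirmative opt-in for this use
    consent_withdrawn: bool       # consumer used the withdrawal mechanism
    opted_out_of_profiling: bool  # opt-out of significant-decision profiling

def is_trainable(record: TrainingRecord, purpose: str, model_profiles: bool) -> bool:
    """Return True only if the record may be used to train a model for `purpose`.

    Encodes three rules from the Guidance discussed above:
    1. Sensitive data requires explicit consent before training use.
    2. Secondary use of previously collected data requires fresh,
       affirmative consent (a retroactive notice is not enough), and
       withdrawn consent must be honored.
    3. Opt-outs from profiling in significant decisions must be honored.
    """
    if record.consent_withdrawn:
        return False
    if record.sensitive and not record.explicit_consent:
        return False
    if record.collected_for != purpose and not record.explicit_consent:
        return False  # secondary use without affirmative consent
    if model_profiles and record.opted_out_of_profiling:
        return False
    return True

# Usage: filter a data set before a training run for a profiling model.
records = [
    TrainingRecord("c1", "loan_servicing", False, False, False, False),
    TrainingRecord("c2", "ai_training", True, True, False, False),
]
eligible = [r for r in records if is_trainable(r, "ai_training", model_profiles=True)]
print(len(eligible))  # 1 -- c1's data was collected for a different purpose
```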
Oregon Consumer Information Protection Act
The Oregon Consumer Information Protection Act is the state's data breach notification statute. It requires businesses to notify consumers in the event of a breach and to maintain statutory baseline safeguards for protecting consumers' personal information. Developers or deployers of AI may therefore need to notify affected individuals and the Oregon Attorney General if a security breach affects personal data held within AI systems.
Oregon Equality Act
The Oregon Equality Act prohibits discrimination based on protected characteristics, including race, color, religion, sex, sexual orientation, gender identity, national origin, marital status, age, and disability. AI systems are trained on data generated and curated by humans and can therefore inadvertently produce discriminatory results. For example, an AI loan approval system that consistently denies loans to qualified applicants from certain ethnic backgrounds may violate the Oregon Equality Act. AG Rosenblum aims to head this off by calling on developers and deployers to address these concerns during the development process by scrutinizing potentially discriminatory inputs and testing for biased outcomes.
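As one purely illustrative example of testing for biased outcomes, a developer might screen a model's decisions for group-level disparities before deployment. The sketch below (all names hypothetical) compares approval rates across groups against a 0.8 threshold; that figure echoes the EEOC's "four-fifths" rule of thumb from employment law and is used here only as a screening heuristic, not as a legal standard under the Oregon Equality Act.

```python
from collections import defaultdict

def approval_rates(decisions):
    """Compute the approval rate per group from (group, approved) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][1] += 1
        if approved:
            counts[group][0] += 1
    return {g: a / t for g, (a, t) in counts.items()}

def flag_disparate_impact(decisions, threshold=0.8):
    """Flag groups whose approval rate falls below `threshold` times the
    best-performing group's rate. An illustrative pre-deployment screen,
    not a compliance test."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

# Usage with toy data: (applicant_group, model_approved)
decisions = [("A", True)] * 80 + [("A", False)] * 20 \
          + [("B", True)] * 50 + [("B", False)] * 50
print(flag_disparate_impact(decisions))  # {'B': 0.5}: 0.5 < 0.8 * 0.8
```

A flagged group does not by itself establish a violation; it signals that the inputs and model behavior deserve the closer scrutiny the Guidance calls for.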
Looking ahead: Increased legal scrutiny and evolving liability
AI and other emerging technologies will undoubtedly continue to reshape business standards in Oregon and across the nation. The Guidance signals that the Attorney General intends to apply existing consumer protection, privacy, and anti-discrimination laws to AI, so businesses that develop or deploy AI should review their practices against the statutes discussed above and stay up to date to ensure compliance.