The Evolving Landscape of AI Regulation in Financial Services

Goodwin

Artificial intelligence (AI) is increasingly woven into financial services operations, transforming everything from consumer interactions through chatbots and targeted marketing to essential functions like underwriting, credit decisions, fraud detection, fair lending analysis, and collections. Financial institutions also rely on AI to analyze consumer complaints, manage customer relationships, and craft business strategies. But as AI adoption accelerates, the question of which agencies will regulate its use remains unsettled.

When AI gained momentum in financial services, federal agencies initially took charge. The Federal Housing Finance Agency and the Consumer Financial Protection Bureau issued AI compliance directives in September 2022, April 2023, and September 2023. Other federal agencies, including the Federal Trade Commission, Department of Justice, Office of the Comptroller of the Currency, Federal Reserve, and Equal Employment Opportunity Commission, quickly followed with their own AI oversight statements.

However, no federal consensus or binding law on AI regulation emerged. As federal momentum faded, state regulators stepped in, passing legislation focused on bias, transparency, and compliance in AI-driven decision-making for lending and employment. Several states also clarified that discriminatory AI behavior would be assessed under their Unfair or Deceptive Acts or Practices (UDAP) laws, creating a patchwork of oversight.

Earlier this year, the Trump administration moved to deregulate the use of AI. President Trump signed Executive Order 14179 on January 23, 2025, revoking President Biden’s comprehensive AI executive order, which had sought to place guardrails on AI use. Shortly thereafter, the One Big Beautiful Bill (OBBB) Act was introduced. The OBBB Act, which passed the House on May 22, 2025, seeks a 10-year moratorium on state and local AI regulation, with exceptions only for laws that encourage AI adoption or avoid imposing requirements on AI systems. If the bill passes the Senate, state regulators would be stripped of their ability to enforce AI-specific regulations — both those pending and those already enacted — for a decade, leaving only UDAP laws and other generally applicable laws as backstops.

The ongoing evolution of AI regulation is challenging to follow even for the most sophisticated compliance teams and in-house counsel, yet understanding it is critical to remaining competitive in the financial services industry today. Below, to map where AI regulation currently stands, we provide an overview of UDAP statements and guidance related to AI (which would likely remain in force regardless of whether the OBBB Act is enacted in its current form), followed by enacted and pending AI legislation that could be preempted and thus rendered unenforceable — or put on a 10-year hold — if the OBBB Act passes the Senate.

Further, we offer practical guidance on adhering to consumer protection principles amid an uncertain and emerging regulatory landscape.

State Guidance on Application of UDAP and Existing Laws to AI

State enforcement through existing consumer protection laws would remain intact under the federal moratorium. Several states have already issued guidance explicitly stating that their UDAP laws or existing consumer protection laws apply to AI:

  • California issued a legal advisory on January 13, 2025, explicitly stating that existing consumer protection laws apply to AI-driven decisions. The legal advisory cautioned entities that develop or use AI systems to ensure that their systems comply with California law, including its Consumer Privacy Act and Unfair Competition Law.
  • Oregon provided guidance on AI-related compliance requirements on December 24, 2024. The guidance emphasized that AI development and its use must prioritize consumer protection, privacy, and fairness. The guidance, though not exhaustive, highlighted several Oregon laws that are not specific to AI but may apply in the AI context, including its Unlawful Trade Practices Act, Consumer Privacy Act, Consumer Information Protection Act, and Equality Act. 
  • Massachusetts issued an advisory on April 16, 2024, to clarify for consumers, developers, suppliers, and users of AI systems that existing state laws and regulations apply “to this emerging technology to the same extent as they apply to any other product or application.” The advisory highlighted an entity’s respective obligations under the Massachusetts Consumer Protection Act, the Massachusetts Anti-Discrimination Law, and the Data Security Law.
  • The New York Department of Financial Services issued an industry letter directed to the entities it regulates on October 16, 2024, providing guidance on the risks posed by AI. The letter does not impose any explicit additional requirements on regulated entities but illustrates how the existing cybersecurity regulation framework in 23 NYCRR Part 500 should be used to assess and address the cybersecurity risks presented by AI.

Enacted State AI-Specific Legislation Relating to Financial Services

Several states have gone beyond UDAP enforcement and introduced legislation or initiatives specifically targeting AI use in financial services, employment decisions, and data privacy. However, if enacted in its current form, the OBBB Act would render the enacted laws below, as well as the pending legislation in the next section, unenforceable.

  • California enacted the Generative Artificial Intelligence: Training Data Transparency Act (Assembly Bill 2013) in the autumn of 2024. This act, which becomes effective on January 1, 2026, purports to tackle one of AI’s biggest challenges: the black box problem of understanding how an AI system arrives at its decision. The act requires developers of AI systems and services to publicly disclose specified information related to the datasets used to train, test, and validate their models and products.
  • Colorado enacted two laws in 2024 that directly target the use of AI in the consumer finance industry:
    • Senate Bill 24-205: This consumer protection statute, which becomes effective February 1, 2026, requires financial institutions to disclose how AI-driven lending decisions are made, including the data sources that informed the AI model and how its performance was evaluated. Colorado has publicly stated that the law aims to reduce the risk of discrimination in employment, housing, and credit decisions that rely on AI-based “consequential decisions.”
    • House Bill 24-1468: The law, which became effective on June 6, 2024, renamed the previously established task force on facial recognition technologies as the Artificial Intelligence Impact Task Force and revised its membership and issues of study. The law also increased the size of the task force from 15 to 26 members; the task force will now include experts in generative AI and advocates for individuals historically affected by AI-driven discrimination, bias, and facial recognition technologies.
  • Illinois amended the Consumer Fraud and Deceptive Business Practices Act in the summer of 2024. The amendment, which becomes effective January 1, 2026, expands regulatory oversight of the use of predictive data analytics and AI applications used to determine a consumer’s creditworthiness, including the assignment of specific risk factors. 
  • New York City enacted the Bias Audit Law (Local Law 144) in 2021, which became effective as of July 2023. The law mandates that companies operating and hiring in New York City must conduct independent audits of automated employment decision tools to assess potential biases in AI-driven hiring processes.
  • The Texas attorney general introduced a data privacy and security initiative in the summer of 2024, which established a team that is focused on “aggressive enforcement of Texas privacy laws” within the Consumer Protection Division of the Office of the Attorney General. This team will specifically address AI risks in consumer transactions to “protect Texans’ sensitive data from illegal exploitation by tech, AI, and other companies.”
  • Utah passed the Artificial Intelligence Policy Act in 2024, which became effective on May 1, 2024. The act established an Office of AI Policy and requires that if a business uses AI, such as a chatbot, to interact with an individual in connection with commercial activities, it must clearly and conspicuously disclose to the individual that they are interacting with AI, not a human (see the illustrative sketch following this list). The act also granted enforcement powers to the Utah Division of Consumer Protection, with penalties of up to $2,500 per violation plus attorneys’ and investigative fees.
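
For illustration only, the minimal sketch below shows one way a text-based chatbot front end might surface this kind of “clear and conspicuous” disclosure before any substantive interaction begins. The names and wording here are hypothetical assumptions, not language drawn from the Utah act, and this is not a compliance template.

```python
# Minimal sketch, assuming a text-based chatbot. AI_DISCLOSURE and
# greet_user are hypothetical names, not statutory requirements.

AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a human. "
    "Type 'agent' at any time to request a human representative."
)

def greet_user(user_name: str) -> str:
    """Return the opening chatbot message, leading with the AI
    disclosure so it appears before any substantive exchange."""
    return f"{AI_DISCLOSURE}\n\nHello {user_name}, how can I help you today?"

print(greet_user("Alex"))
```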

Proposed State AI-Specific Legislation Relating to Financial Services

Several states have proposed legislation specifically targeting AI use in financial services, employment decisions, and data privacy. Under the federal moratorium, these proposals would likely advance no further in their respective states and would eventually fall flat.

  • California introduced the following legislation in the 2025–2026 legislative session: 
    • Senate Bill (SB) 813: This bill provides civil immunity to developers for harms caused by an AI model or application if it is certified by a private “multistakeholder regulatory organization” (MRO) that has been designated by the attorney general. An MRO would be tasked with certifying that the developers of the models or applications exercised heightened care and compliance with best practices for the prevention of personal injury and property damage. The definition of “developer” is broad and could encompass a financial institution that customizes an AI model or application for its use. A hearing was held in the Senate Appropriations Committee on May 23, 2025. 
    • SB 833: This bill requires California state agencies that oversee critical infrastructure and deploy AI systems to establish a human oversight mechanism to monitor their AI systems’ operations and to conduct annual safety and human oversight compliance assessments of their AI and automated decision systems (ADS). The legislation defines “critical infrastructure” to include “financial services,” which could include the oversight of any vendor contracted by a state agency to provide those services. The Senate passed SB 833 on June 3, 2025, and it moved to the Assembly, where it is currently in the Consumer Privacy and Protection Committee.   
    • SB 7: This bill regulates the use of ADS in the employment setting, including a written notice provision when ADS is used by an employer. SB 7 was passed by the Senate on June 2, 2025, and moved to the Assembly. 
    • Assembly Bill (AB) 1018: This bill creates a comprehensive regime designed to ensure human oversight over ADS used in “consequential decisions” to mitigate bias and unreliability in these systems. The legislation defines “consequential decisions” to include “a decision that materially impacts the cost, terms, quality, or accessibility of … [f]inancial services, including a financial service provided by a mortgage company, mortgage broker, or creditor.” AB 1018 was passed in the Assembly on June 2, 2025, and moved to the Senate for consideration. 
  • Connecticut introduced SB 2, focused on AI governance, transparency, and the alignment of AI applications with state regulatory requirements. The bill also establishes an AI task force and requires the Department of Economic and Community Development to establish several oversight programs, including an AI regulatory sandbox program. The bill passed the Senate on May 14, 2025, and is currently awaiting a vote in the House.
  • Hawaii introduced SB 59, prohibiting discriminatory “algorithmic eligibility determinations.” The bill defines “[a]lgorithmic eligibility determination” as “a determination based in whole or in significant part on an algorithmic process that utilizes machine learning, artificial intelligence, or similar techniques to determine an individual’s eligibility for, or opportunity to access, important life opportunities.” The bill further defines “important life opportunities” to include “access to, approval for, or offer of credit.” SB 59 has been referred to the Senate Labor and Technology Committee for review. 
  • Illinois introduced SB 2203, which requires annual impact assessments for any automated decision tools. Further, when consequential decisions are involved, the bill requires companies to notify consumers that an automated decision tool is being used. The bill adopts the same definition of “consequential decisions” as AB 1018 in California (i.e., “a decision that materially impacts the cost, terms, quality, or accessibility of … [f]inancial services, including a financial service provided by a mortgage company, mortgage broker, or creditor”). SB 2203 has been referred to the Senate Assignments Committee.

Now What?

With the proposed decade-long federal moratorium and the patchwork of pending state legislation, the future of AI regulation remains uncertain. One consistent theme across all potential outcomes is an emphasis on transparency. Whether AI is used in customer-facing chatbots or in back-end decision-making processes such as lending, state AI-specific legislation and existing state consumer protection legislation alike are converging on the need for clear disclosure and accountability in AI deployment.

Despite the present uncertainty, financial institutions should still take measures to ensure their AI systems comply with the basic tenets of consumer protection law. Companies would be wise to act now and implement the following best practices to stay ahead of the evolving regulatory landscape and to ensure compliance with existing consumer protection laws:

  • Build a robust AI governance framework. Establish oversight bodies that include compliance, legal, risk, and technical stakeholders. Implement clear accountability structures for AI system outcomes. Document the AI system life cycle — data sources, model development, and deployment decisions. 
  • Prioritize transparency and explainability. Avoid black box models and, where feasible, use explainable AI (XAI), especially in high-stakes areas such as credit scoring and fraud detection. Ensure traceability of model decisions for both internal audit processes and future regulatory requirements (a minimal decision-logging sketch follows this list).
  • Align with emerging global standards. Monitor existing frameworks such as the EU AI Act and the OECD AI Principles, and consider adopting voluntary standards to stay ahead of the regulation, because certain states may look abroad for legal models to apply to evolving technologies. 
  • Maintain data hygiene and governance. Ensure high-quality, unbiased data inputs and clear data lineage. Conduct data privacy impact assessments, especially under the General Data Protection Regulation (GDPR), the California Consumer Privacy Act, or other data protection laws.
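
As a concrete illustration of the traceability point above, the sketch below logs each automated credit decision with its inputs, model version, and per-feature score contributions from a toy linear model. Every name here (the feature weights, DecisionRecord, the log file path) is a hypothetical assumption used to show the shape of an auditable decision log, not a production scoring system.

```python
import json
import time
from dataclasses import dataclass, asdict

# Hypothetical weights for a toy linear credit-scoring model.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.6, "years_history": 0.2}
MODEL_VERSION = "credit-score-demo-0.1"
APPROVAL_THRESHOLD = 0.5

@dataclass
class DecisionRecord:
    timestamp: float
    model_version: str
    inputs: dict
    contributions: dict  # per-feature contributions, for explainability
    score: float
    approved: bool

def score_applicant(inputs: dict) -> DecisionRecord:
    """Score an applicant and append an auditable record of the decision."""
    contributions = {name: WEIGHTS[name] * inputs[name] for name in WEIGHTS}
    score = sum(contributions.values())
    record = DecisionRecord(
        timestamp=time.time(),
        model_version=MODEL_VERSION,
        inputs=inputs,
        contributions=contributions,
        score=score,
        approved=score >= APPROVAL_THRESHOLD,
    )
    # Append-only JSONL log: one line per decision, for audit and review.
    with open("decision_log.jsonl", "a") as log:
        log.write(json.dumps(asdict(record)) + "\n")
    return record

record = score_applicant({"income": 1.2, "debt_ratio": 0.8, "years_history": 1.0})
print(record.approved, record.contributions)
```

Per-feature contributions of this kind are what make an automated decision traceable after the fact; in practice, such logging would be paired with model documentation, retention policies, and access controls.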


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Goodwin
