Machine Learning: New Technology Implicates Old Problems

Kilpatrick

[co-author: Bennett Gillogly]

The financial services industry has seen explosive growth in the use of Artificial Intelligence (AI) to supplement, and often supplant, existing processes, both customer-facing and internal. Given the potential created by rapid advancements in AI sophistication and functionality, more and more financial services firms are leveraging the technology to deploy new use cases for improved decision-making – particularly in the areas of anti-money laundering, fraud prevention, risk management, and lending.

While the first wave of AI was generally focused on automating manually intensive and repetitive tasks, banks are now turning to machine learning (ML) systems to uncover more dynamic ways of interpreting their vast swaths of customer data. Whereas AI, at a fundamental level, permits a machine to imitate intelligent human behavior, ML is a specific application (or subset) of AI that enables systems to learn and improve automatically – e.g., to reduce errors or maximize the likelihood that their predictions will be true – without being explicitly programmed to make such adjustments.

This development has exciting potential to expand the products available to underbanked communities and to improve services and the customer experience as a whole. However, the move toward ML also warrants revisiting the legal and regulatory concerns that have been implicated at every stage of advancing computer automation in recent decades. Specifically, financial services firms utilizing or considering ML processes must recognize that existing biases in underlying data sets make the decisions of unchecked ML algorithms particularly susceptible to violating federal and state anti-discrimination laws. Such firms therefore must tread cautiously when deploying this technology.

ML has more potential than traditional AI to result in de facto discrimination because of the way in which ML identifies and acts on historical trends in the data. Unlike de jure discrimination, which manifests itself in actions deliberately designed to produce discriminatory outcomes, de facto discrimination refers to the unintentional perpetuation of discriminatory outcomes against people in protected classes. This type of discrimination often results in “disparate impact” claims, in which the focus is on the discriminatory consequences of the defendant’s actions rather than any discriminatory intent. For example, the United States Supreme Court in Texas Dep’t of Housing & Community Affairs v. The Inclusive Communities Project, Inc., 135 S. Ct. 2507 (2015), held that such disparate impact claims are cognizable under the Fair Housing Act (FHA). In the wake of that ruling, residential leasing companies using ML for ad targeting have had to take extra precautions to ensure, for example, that their algorithms don’t violate the FHA by directing housing advertisements only at certain ages, races, or genders.

In the fintech industry, ML is already changing the loan underwriting business by replacing traditional credit scoring models with decisions by AI-based neural networks. Prior to AI and ML, credit default probability was based primarily on a rigid analysis of the linear relationship between an applicant’s salary and total monthly payments, often resulting in a denial of credit to deserving borrowers. By expanding the dataset and analyzing a broader range of consumer behaviors – such as payment and debt histories, residence status, and an applicant’s overall relationship with their checking and savings accounts – ML is identifying complex patterns and trends that more accurately indicate a person’s creditworthiness. As these algorithms continue to draw novel conclusions about the relationships between alternative variables and the relative credit risks they pose, consumers will benefit from untapped financial opportunities, both in new loan originations and in reduced premiums on existing loans. However, if the quality and diversity of the dataset reflect the same prejudices that have historically prevented certain groups of people from receiving loans, and the ML algorithms aren’t built to identify and address those inherent biases, then ML decisions will likely replicate the same discriminatory methodologies that have been outlawed and litigated for decades.
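
To make the contrast concrete, the sketch below is illustrative only: the data are synthetic, and the feature names, thresholds, and model choice are assumptions rather than a description of any actual underwriting system. It compares a rigid debt-to-income cutoff with a simple model trained on a broader set of behavioral features.

    # Illustrative sketch only: synthetic applicants, hypothetical features.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    # Hypothetical applicant features
    salary = np.clip(rng.normal(60_000, 15_000, n), 15_000, None)
    monthly_debt = np.clip(rng.normal(1_500, 600, n), 0, None)
    on_time_rate = rng.beta(8, 2, n)            # payment history
    avg_balance = rng.lognormal(7.5, 0.6, n)    # checking/savings relationship
    dti = monthly_debt * 12 / salary            # debt-to-income ratio

    # Synthetic default outcome driven by behavior, not by DTI alone
    logit = -2.0 + 3.0 * dti - 2.5 * on_time_rate - 0.00005 * avg_balance
    default = rng.random(n) < 1 / (1 + np.exp(-logit))

    # Legacy approach: approve only if DTI is under a fixed cutoff
    rule_approved = dti < 0.35

    # ML approach: score the broader feature set and approve low-risk applicants
    X = np.column_stack([dti, on_time_rate, avg_balance])
    model = LogisticRegression(max_iter=1000).fit(X, default)
    ml_approved = model.predict_proba(X)[:, 1] < 0.2

    print("approval rate  rule: %.2f  ml: %.2f"
          % (rule_approved.mean(), ml_approved.mean()))
    print("default rate among approved  rule: %.3f  ml: %.3f"
          % (default[rule_approved].mean(), default[ml_approved].mean()))

In this toy setting, the broader feature set can approve more applicants at a lower default rate than the fixed cutoff – but, as discussed below, the same mechanism can just as easily absorb and reproduce historical bias.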

The Equal Credit Opportunity Act of 1974 (ECOA) prohibits a lender from considering a potential borrower’s race, color, religion, national origin, sex, marital status, or age (or the fact that the applicant receives public assistance) when issuing a loan. If lending institutions do not purge these elements from the training data used to formulate their ML algorithms, they risk creating an auditable trail of discriminatory – albeit automated – lending decisions. Even after the data is scrubbed, however, disparate impact discrimination could arise based on historic trends if the algorithm goes unchecked by coders during the testing and deployment stages. For example, if an ML algorithm identifies a pattern associating higher credit risk with certain zip codes, it is possible that the outcomes will disproportionately affect consumers in a protected class. Furthermore, credit decisions generated using ML would still be required to meet ECOA’s mandate that lenders provide specific justifications for a denied loan application – a prospect that becomes increasingly troublesome when these protected variables are commingled with more traditional data elements.
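
The proxy problem is easy to demonstrate. In the sketch below (again synthetic data; the 80% approval-rate ratio is used only as a rough screening heuristic, not as a statement of any legal standard), the protected attribute is excluded from training entirely, yet a correlated zip-code feature still produces a large approval-rate gap.

    # Illustrative sketch only: the protected attribute is excluded from training,
    # but a correlated proxy (zip code) still drives disparate approval rates.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 10_000

    # Synthetic protected-class membership and a zip code correlated with it
    protected = rng.random(n) < 0.3
    high_risk_zip = np.where(protected, rng.random(n) < 0.8, rng.random(n) < 0.2)

    # Historical outcomes already reflect past lending disparities by zip code
    past_default = rng.random(n) < np.where(high_risk_zip, 0.25, 0.10)

    # The model never sees the protected attribute -- only the proxy
    X = high_risk_zip.astype(float).reshape(-1, 1)
    model = LogisticRegression().fit(X, past_default)
    approved = model.predict_proba(X)[:, 1] < 0.2   # approve "low-risk" applicants

    # Screen the output: compare approval rates across groups
    rate_protected = approved[protected].mean()
    rate_other = approved[~protected].mean()
    print("approval rate  protected: %.2f  other: %.2f  ratio: %.2f"
          % (rate_protected, rate_other, rate_protected / rate_other))
    # Ratios well below ~0.8 would typically warrant further review.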

The Supreme Court has not yet weighed in on whether its holding on disparate impact claims in Inclusive Communities also applies to ECOA, and the Trump Administration has demonstrated a lack of support for extending that ruling any further. However, fintech companies developing ML for the lending industry, and the lending companies utilizing such technology, should nonetheless expect that the outcomes of their algorithms will be heavily scrutinized by civil liberties and consumer protection groups in ways that could lead to reputational risk and costly litigation.

As algorithmic automation continues to progress with systems that learn from and act on historic trends in data, fintech companies should continuously monitor the domestic and international regulatory landscape as these issues continue to play out in other industries. In the United States, for example, a group of Democratic lawmakers – Senators Cory Booker (D-NJ) and Ron Wyden (D-OR), and Rep. Yvette D. Clarke (D-NY) – recently introduced the Algorithmic Accountability Act (“AAA”), which would require certain covered entities to conduct impact assessments of their automated decision systems and to address any biased or discriminatory results. Although the AAA still has a long way to go before becoming law, it is notable that the proposal extends further than similar regulations in Europe by mandating that companies fix any algorithm found to be in violation. In California, the new California Consumer Privacy Act goes into effect on Jan. 1, 2020 and is intended to replicate many of the stringent controls put in place by the General Data Protection Regulation (“GDPR”) in Europe. Article 22 of the GDPR already prohibits some types of automated decision-making based on certain categories of personal information and provides consumers with a right to opt out of certain decisions based solely on automated processing, while Article 13(2)(f) gives consumers a right to receive an explanation of certain decisions based solely on automated processing. Further, the European Union’s Open Banking Initiative and Revised Payment Services Directive (a/k/a PSD2, Directive (EU) 2015/2366) give consumers more control over the use of their data.

Any company operating in the heavily regulated financial services industry should expect that these types of regulations will soon reach all of its data retention and algorithmic automation practices. A failure to recognize and address these risks is likely to result in widespread and costly litigation, as well as regulatory backlash. Fintech companies, financial services firms, and financial institutions should therefore take proactive steps to head off this issue by, at a minimum, closely monitoring and auditing their current AI and ML systems to identify potential bias or discrimination in both the input and output of those systems.
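
What that monitoring might look like in practice will vary by institution. The hypothetical audit routine below is a sketch only – the function, column names, and thresholds are assumptions for illustration, and it assumes protected-class data is retained separately for testing purposes – but it checks both sides: whether any input feature acts as a proxy for a protected attribute, and whether approval rates diverge across groups in the output.

    # Hypothetical audit sketch: names and thresholds are illustrative only.
    import numpy as np
    import pandas as pd

    def audit_lending_model(df: pd.DataFrame, protected_col: str,
                            decision_col: str, feature_cols: list,
                            proxy_corr_threshold: float = 0.3,
                            impact_ratio_threshold: float = 0.8) -> dict:
        """Flag potential proxy features in the inputs and disparate impact in the outputs."""
        protected = df[protected_col].astype(float).to_numpy()
        approved = df[decision_col].astype(bool).to_numpy()
        findings = {}

        # Input check: features strongly correlated with the protected attribute
        # may act as proxies even when that attribute is excluded from training.
        corrs = {col: abs(np.corrcoef(df[col].astype(float), protected)[0, 1])
                 for col in feature_cols}
        findings["proxy_candidates"] = {c: round(v, 2) for c, v in corrs.items()
                                        if v > proxy_corr_threshold}

        # Output check: approval-rate ratio between protected and other applicants
        rate_p = approved[protected == 1].mean()
        rate_o = approved[protected == 0].mean()
        findings["approval_rate_ratio"] = round(rate_p / rate_o, 2)
        findings["flagged"] = findings["approval_rate_ratio"] < impact_ratio_threshold
        return findings

Whatever form the audit ultimately takes, the key point from a compliance perspective is that both the training inputs and the resulting decisions are reviewed on a recurring basis and the results documented.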

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Kilpatrick | Attorney Advertising
