FTC Hosts FinTech Forum on Artificial Intelligence and Blockchain Technologies

Hogan Lovells

[co-author: Laurie Lai]

On Thursday, March 9th, the Federal Trade Commission (FTC) hosted a forum on the consumer implications of recent developments in artificial intelligence (AI) and blockchain technologies. This was the FTC’s third forum on FinTech issues. Previous FinTech Forums covered marketplace lending, as well as crowdfunding and peer-to-peer payments.

In opening remarks, the FTC acknowledged the benefits of technological developments in AI and blockchain technologies: AI promises better decision-making and personalized consumer technologies, while blockchain technologies promise to increase the efficiency of financial transactions and eliminate the need for middlemen, among other benefits. But the FTC stressed that advancements in these technologies must be coupled with an awareness of, and active engagement in, identifying and minimizing associated risks. For AI, this means countering biased or incomplete results, improving the transparency of decision-making, and addressing a general lack of consumer awareness and understanding. For blockchain, it means strengthening data security, increasing oversight, and preventing abuse of the technology. The need to carefully consider the challenges raised by technological advancements was echoed by panelists throughout the forum, suggesting that the FTC will likely expect companies in these industries to have assessed, and taken steps to mitigate, the novel risks they face as they continue to innovate and break new ground in these spaces.

This is the first of two entries on the March 9th FinTech Forum. Today’s post focuses on Artificial Intelligence, with coverage of blockchain technologies to follow.

Artificial Intelligence (AI)

The AI panel discussion focused on topics familiar to those in the space, such as how the values of privacy, autonomy, and fairness are affected by the advent of AI systems, as well as how to ensure safety and security in the development and deployment of individual and connected AI systems.

  • Privacy

Panelists recognized that several of the Fair Information Practice Principles (FIPPs), which have served as a foundation of privacy law for decades, appear to be in tension with emerging AI technologies. For example, the principle of data minimization states that organizations should collect only the personal information that is directly relevant and necessary to accomplish specified purposes, and retain it only as long as necessary to fulfill those purposes. But AI systems often require enormous amounts of information and may use it in ways not anticipated at the time of collection. The panelists suggested that one way to resolve this tension is to develop principles for using pseudonymous or anonymous data in algorithms where possible. In addition, panelists discussed whether practitioners should consider coding certain limitations into the algorithms themselves to prevent incidental reidentification.
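To make the pseudonymization idea concrete, here is a minimal Python sketch (our illustration, not anything presented at the forum) that replaces a direct identifier with a salted one-way hash before the data is handed to a training pipeline; the column names and the use of pandas are assumptions.

```python
import hashlib
import os

import pandas as pd

# Illustrative only: the identifier column and data set are hypothetical.
SALT = os.urandom(16)  # kept secret and stored separately from the data

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a salted, one-way hash."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()

records = pd.DataFrame({
    "email": ["alice@example.com", "bob@example.com"],
    "purchase_amount": [42.50, 17.25],
})

# Drop the raw identifier and keep only the pseudonym plus the attributes
# the model actually needs: data minimization applied in practice.
records["user_id"] = records["email"].map(pseudonymize)
training_data = records.drop(columns=["email"])
print(training_data)
```

Because the salt is random and held separately from the data set, the pseudonym cannot be reversed by a simple lookup, though true anonymization would require more than hashing alone.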

  • Autonomy

The effectiveness with which AI systems use information about individuals to cater to their particular interests gives these systems the potential to nudge consumers, even subconsciously, toward one decision over another. Skilled marketers armed with knowledge of psychology and consumer preferences already do this adeptly, but AI systems dramatically amplify the ability to influence consumer opinion. As AI continues to develop, participants noted that marketing and advertising regulators will likely review how transparency principles apply to personalized AI technologies. Industry members may proactively address these concerns by implementing measures to prevent algorithms from unnecessarily or unintentionally steering consumer behavior.

  • Fairness and Bias

Concerns about fairness and bias are commonplace in the world of AI, as evidence of algorithmic bias continues to emerge. A recent ProPublica finding of racial bias in criminal recidivism models is just one example of how AI systems trained on data that reflects systemic bias may then perpetuate those biases.

Algorithmic bias may be rooted in one or more issues, each of which requires a different form of remediation. Deirdre Mulligan, Associate Professor at the UC Berkeley School of Information, proposed that organizations structure their algorithmic design processes to eliminate three types of bias: (1) intentional bias, which stems from features deliberately built into the algorithm; (2) process bias, which results from blind spots during the design process that reflect the designers’ values; and (3) complexity bias, which arises from the interactions among many different potential sources of bias.

An often-discussed aspect of process bias is the accuracy and integrity of the data inputs: what data can and should be used in developing or operating AI. Panelists suggested that companies build into their AI design processes a step to evaluate what data is considered relevant, whether there are gaps or asymmetries in the available data, how to clean the data, and whether the data is truly representative. Companies may wish to review their algorithmic design processes to ensure that this data evaluation step includes these and other similar considerations.
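As a rough illustration of what such a data evaluation step might look like in code, the sketch below (ours, with hypothetical group labels and an arbitrary five-point threshold) compares the demographic mix of a training set against a reference population to flag gaps or asymmetries.

```python
import pandas as pd

# Hypothetical group counts in the training set and shares in the
# reference population; both are invented for illustration.
training_counts = pd.Series({"group_a": 8200, "group_b": 1300, "group_c": 500})
population_share = pd.Series({"group_a": 0.60, "group_b": 0.25, "group_c": 0.15})

training_share = training_counts / training_counts.sum()
gap = training_share - population_share

for group, delta in gap.items():
    if abs(delta) > 0.05:  # flag groups over- or under-represented by >5 points
        print(f"{group}: training share {training_share[group]:.0%} vs "
              f"population share {population_share[group]:.0%} (gap {delta:+.0%})")
```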

Panelists also emphasized that programs designed to identify and eliminate AI bias must consider not only the net effect of an algorithm, but also the ways in which it may disproportionately create winners and losers. Even if an algorithm displays no overt discrimination and produces a net benefit to society, where the losers are disproportionately members of one particular group, that fact should be identified and addressed in the algorithmic design process.
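A small, invented example makes the point concrete: in the sketch below the algorithm’s aggregate effect is positive, yet the losses fall almost entirely on one group. The data and group labels are hypothetical.

```python
import pandas as pd

# Hypothetical outcomes: +1 is a benefit, -1 a harm.
results = pd.DataFrame({
    "group":   ["a"] * 80 + ["b"] * 20,
    "benefit": [1] * 70 + [-1] * 10 + [1] * 8 + [-1] * 12,
})

print("Net effect:", results["benefit"].sum())     # positive overall
print(results.groupby("group")["benefit"].mean())  # group a gains, group b loses on average
```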

All of the participants agreed that eliminating bias is a costly and difficult task. One promising solution, put forth by Rayid Ghani of the University of Chicago and embraced by the other panelists, was to use one AI system to audit another. This proposal, of course, raises the question of how to prevent one AI system from reinforcing the biases of the other. Nonetheless, Ghani’s suggestion was well received and may be a promising avenue for further exploration.
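One simple way to picture the auditing idea (our sketch, not a description of Ghani’s actual proposal) is an “auditor” model that tries to predict a protected attribute from the audited model’s scores: if the auditor succeeds well above chance, the scores are encoding group membership. The data below is synthetic, and the scikit-learn usage is an assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=2000)            # hypothetical protected attribute
scores = rng.normal(loc=group * 0.8, scale=1.0)  # audited model's outputs, shifted by group

X_train, X_test, y_train, y_test = train_test_split(
    scores.reshape(-1, 1), group, test_size=0.25, random_state=0)

auditor = LogisticRegression().fit(X_train, y_train)
print("Auditor accuracy:", auditor.score(X_test, y_test))  # well above 0.5 here
```

Of course, an auditor of this kind inherits its own blind spots, which is precisely the circularity the panelists flagged.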

  • Safety and Security

As a final principle, the panelists discussed how to make AI systems safe and secure. Mulligan emphasized that achieving safety with AI systems requires considering “composability”: how algorithms may interact with one another in an ecosystem of connected devices. A product that is completely secure in isolation may no longer be so when connected to other Internet of Things devices. For example, autonomous vehicles built by different manufacturers and trained on different data sets may not interact properly with one another if their underlying reasoning differs. Panelists warned that AI developers should not only consider the context in which algorithms were created and initially tested, but also anticipate the risks associated with potential future uses and interactions between AI systems.

  • Tradeoffs among Privacy, Fairness, Accuracy, and Transparency

Finally, the panelists discussed the inevitable tradeoffs among these values: to achieve accuracy and fairness, it might be necessary to compromise privacy interests in order to collect individualized data. Similarly, providing transparency may introduce inaccuracies by enabling people to use the published rules to “game” the system. As the use of AI systems becomes increasingly commonplace and expands to new sectors, the panelists stressed that companies developing these systems should consider and incorporate these principles as appropriate, given the risks they face and the nature of the data and the field in which they operate.

*           *           *

Video and transcripts from the forum will be available here.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Hogan Lovells
