Vanderbilt Journal of Entertainment & Technology Law


Abstract

Consumer interaction with the financial market ranges from applying for credit cards, to financing the purchase of a home, to buying and selling securities. With each transaction, the lender, bank, or brokerage firm is likely utilizing artificial intelligence (AI) behind the scenes to augment its operations. While AI’s ability to process data at high speeds and in large quantities makes it an important tool for financial institutions, it is imperative to remain attentive to the risks and limitations that accompany its use. In the context of financial markets, AI’s lack of decision-making transparency, often called the “black box problem,” along with AI’s dependence on quality data, presents additional complexities when one considers the aggregate effect of algorithms deployed across the market. Owing to these issues, the benefits of AI must be weighed against the particular risks that accompany the spread of this technology throughout the markets.

Financial regulation, as it stands, is complex and expensive, and it often involves overlapping regulations and regulators. Thus far, financial regulators have responded by publishing guidance and standards for firms utilizing AI tools, but they have stopped short of demanding access to source code, setting specific standards for developers, or otherwise altering traditional regulatory frameworks. While regulators are no strangers to regulating new financial products or technology, fitting AI within the traditional frameworks of prudential regulation, registration requirements, supervision, and enforcement actions leaves concerning gaps in oversight.

This Article examines the suitability of the current financial regulatory frameworks for overseeing AI in the financial markets. It suggests that regulators consider developing multi-faceted approaches to promote AI accountability. Recognizing the potential for harm and the likelihood of regulatory arbitrage if these regulatory gaps remain unattended, this Article suggests focusing on key elements for future regulation, namely the human developers and the regulation of data, to truly “hold AI accountable.” Holding AI accountable requires identifying the different ways in which sophisticated algorithms may harm the markets and consumers if ineffectively regulated, and developing an approach that can flexibly respond to these broad concerns. Notably, this Article cautions against reliance on self-regulation and recommends that future policies take an adaptive approach to address current and future AI technologies.
