Does the existing anti-manipulation framework effectively deter algorithmic manipulation? With the dual increase of algorithmic trading and the occurrence of “mini-flash crashes” in the market linked to manipulation, this question has become more pressing in recent years. In the past thirty years, the financial markets have undergone a sea change as technological advancements and innovations have fundamentally altered the structure and operation of the markets. Key to this change is the introduction and dominance of trading algorithms. Whereas initial algorithmic trading relied on preset electronic instructions to execute trading strategies, new technology is introducing artificially intelligent (“AI”) trading algorithms that learn dynamically from data and respond intuitively to market changes. These technological developments have exposed significant shortcomings in the effectiveness of anti-manipulation laws, particularly regarding one of their fundamental goals: deterring market manipulation.
Preventing manipulation remains a key feature of the legal regime governing the financial markets. Rampant manipulation undermines the viability of the market and, in the case of algorithmic manipulation, increases systemic risks within the market. Deterring algorithmic manipulation is thus essential to the viability and stability of the market. But credible and effective deterrence of wrongdoing requires certainty of punishment, which is increasingly unattainable with respect to algorithmic manipulation under the existing legal regime. Specifically, the law of manipulation tethers liability to scienter, which algorithms cannot legally form. Further, deciphering the intent of the human behind the algorithm can be a near-impossible task in all but the most egregious cases. The scienter-focused nature of the anti-manipulation framework therefore diminishes the disciplinary power of the law, weakening deterrence and incentivizing algorithmic manipulation.
This Article demonstrates that the scienter-centric analysis undergirding anti-manipulation laws creates gaps in the detection and punishment of algorithmic manipulation that weaken the current legal regime’s deterrent effect. The acute failure of the law to punish algorithmic manipulation incentivizes potential wrongdoers to utilize algorithms to cloak their misdeeds, exposing the markets to significant systemic harm. Notably, unlike other scholars and policymakers who view transparency as the ultimate solution to increase accountability for algorithms, this Article highlights the potential limitations of relying primarily on transparency. Rather, the Article urges changes to the legal framework to modernize its applicability: eschew the scienter requirement and, instead, focus on the resulting harm of the algorithm to the market. Together, these proposals are likely to credibly deter algorithmic manipulation, safeguarding the viability, efficiency, and stability of the markets.
Gina-Gail S. Fletcher, Deterring Algorithmic Manipulation, 74 Vanderbilt Law Review. Available at: https://scholarship.law.vanderbilt.edu/vlr/vol74/iss2/2