Vanderbilt Journal of Entertainment & Technology Law

First Page: 145

Abstract

The advent of sophisticated artificial intelligence (AI) agents, or bots, raises the question: How do we ensure that these bots act appropriately? Within a decade, AI will be ubiquitous, with billions of active bots influencing nearly every industry and daily activity. Given the extensiveness of AI activity, it will be nearly impossible to explicitly program bots with detailed instructions on permitted and prohibited actions, particularly as they face unpredictable, novel situations. Rather, if risks to humans are to be mitigated, bots must have some overriding moral or legal compass--a set of "AI Laws"--to allow them to adapt to whatever scenarios they face.

This Article demonstrates how to develop a "Bot Legal Code," a system of AI Laws that can ensure AI compliance with legal (but not moral) requirements. Most proposals for AI Laws--such as Isaac Asimov's Laws of Robotics--have emphasized that bots must be moral or "good," but moral precepts operate at a level of abstraction that computers simply cannot grasp. In contrast, a bot can understand legal requirements specifically because the law eschews abstraction in a variety of ways, including through a rich history of case law and rules of conflict resolution. In making this argument, this Article draws a parallel between AI architecture (on the one hand) and legal rules and standards (on the other), demonstrating that AI architecture is already optimized for understanding rules through explicit coding and standards through data processing. This Article then describes the ideal qualities for the Bot Legal Code and addresses how government and peer production communities can develop open-source software to implement the Bot Legal Code.
