
Abstract
Even as artificial intelligence (“AI”) promises to turbocharge social and economic progress, its human costs are becoming apparent. For example, self-driving technology will someday make traffic jams a thing of the past, but technologists now acknowledge that it will never eliminate all traffic deaths. By design, AI behaves in unexpected ways. That is how it finds unanticipated solutions to complex problems. But unpredictability also means that AI will sometimes harm us. To curtail these harms, scholars and lawmakers have proposed strict regulations (to help ensure firms develop safe algorithms) and strict corporate liability (for injuries that nonetheless occur). These rigid approaches go too far. They dampen innovation and disadvantage domestic firms in the international technology race.
The law needs a more nuanced approach to holding corporations liable for their AI, one that balances progress with fairness. Tort law offers a compelling template. The challenge is to adapt its distinctly human notion of fault to algorithms. Tort law’s central liability standard is negligence, which compares the defendant’s behavior to an external reasonableness benchmark. But there is no clear comparison class for AI. Assessing algorithms by reference to how reasonable people behave would set too low a bar—AI can and should outperform humans on many tasks. Assessing AI instead by reference to itself is often impossible—in many contexts, there are not enough algorithms to establish a meaningful baseline.
This Article offers a novel negligence standard for AI. Instead of comparing any given AI to humans or to other algorithms, the law should evaluate it against both. By this hybrid measure, an algorithm would be deemed negligent if it causes injury more frequently (e.g., causes more accidents per mile driven) than the combined incident rate for all actors—both human and AI—engaged in the same type of conduct. This negligence standard has three attractive features. First, it offers a baseline even when there are very few comparable algorithms. Second, it incentivizes firms to release all and only algorithms that make us safer overall. Third, the standard evolves over time, demanding more of AI as algorithms improve and proliferate.
Recommended Citation
Mihailis E. Diamantis, Reasonable AI: A Negligence Standard, 78 Vanderbilt Law Review 573 (2025)
Available at: https://scholarship.law.vanderbilt.edu/vlr/vol78/iss2/4