There is a pervading sense of unease that artificially intelligent machines will soon radically alter our lives in ways that are still unknown. Advances in artificial intelligence (AI) are developing at an extremely rapid rate as computational power continues to grow exponentially. Even if existential concerns about AI do not materialize, there are enough concrete examples of problems associated with current applications of AI to warrant concern about the level of control that exists over developments in this field. Some form of regulation is likely necessary to protect society from harm. However, advances in regulatory capacity have not kept pace with developments in new technologies, including AI. This is partly because regulation has become decentered; that is, the traditional command role of public regulators such as governments has dissipated, and other participants, including those from within the industry, have taken the lead. Other contributing factors are dwindling government resources on one hand and the increased power of technology companies on the other. These factors have left the field of AI development relatively unregulated. Whatever the reason, it is now more difficult for traditional public regulatory bodies to control the development of AI. In the vacuum, industry participants have begun to self-regulate by promoting soft law options such as codes of practice and standards. This Article argues that despite the reduced authority of public regulatory agencies, the risks associated with runaway AI require regulators to begin to participate in what is largely an unregulated field. In an environment where resources are scarce, governments or public regulators must develop new ways of regulating. This Article proposes solutions for regulating the development of AI ex ante through a two-step process: first, governments can set expectations and send signals to influence participants in AI development.
The Authors adopt the term "nudging" to refer to this type of influencing. Second, public regulators must participate in and interact with the relevant industries. By doing this, they can gather information and knowledge about those industries, begin to assess risks, and then be in a position to regulate the areas that pose the most risk first. To conduct a proper risk analysis, regulators must have sufficient knowledge and understanding of the target of regulation to be able to classify various risk categories. The Authors propose an initial classification based on the literature that can help to identify pressing issues for further research and to deepen understanding of the various applications of AI and the relative risks they pose.
Michael Guihot, Anne F. Matthew & Nicolas P. Suzor, Nudging Robots: Innovative Solutions to Regulate Artificial Intelligence, 20 Vanderbilt Journal of Entertainment and Technology Law.
Available at: https://scholarship.law.vanderbilt.edu/jetlaw/vol20/iss2/2