Personalization on digital platforms drives a broad range of harms, including misinformation, manipulation, social polarization, subversion of autonomy, and discrimination. In recent years, policy makers, civil society advocates, and researchers have proposed a wide range of interventions to address these challenges. This Article argues that the emerging toolkit reflects an individualistic view of both personal data and data-driven harms that will likely be inadequate to address growing harms in the global data ecosystem. It maintains that interventions must be grounded in an understanding of the fundamentally collective nature of data, wherein platforms leverage complex patterns of behaviors and characteristics observed across a large population to draw inferences and make predictions about individuals.
Using the lens of the collective nature of data, this Article evaluates various approaches to addressing personalization-driven harms currently under consideration. It also offers concrete guidance for future legislation in this space and for meaningful transparency that goes far beyond current proposals. It outlines what meaningful transparency must constitute: a collective perspective providing a third party with ongoing insight into the information gathered and observed about individuals and how it correlates with any personalized content they receive across a large, representative population. These insights would enable the third party to understand, identify, quantify, and address cases of personalization-driven harms. This Article discusses how such transparency can be achieved without sacrificing privacy and provides guidelines for legislation to support its development.
Ayelet Gordon-Tapiero, Alexandra Wood, and Katrina Ligett,
The Case for Establishing a Collective Perspective to Address the Harms of Platform Personalization,
25 Vanderbilt Journal of Entertainment and Technology Law
Available at: https://scholarship.law.vanderbilt.edu/jetlaw/vol25/iss4/1