Abstract
Livestreams on social media represent a unique form of User-Generated Content. Delivering events and experiences in real time, while turning passive listeners into an involved audience, livestreaming facilitates users’ engagement and participation. Streaming promotes freedom of expression and facilitates diversity and pluralism in news reporting. It further allows users to maintain truth and authenticity in a potentially fabricated realm of Generative Artificial Intelligence (AI). Speakers are eager to capitalize on the instantaneous nature of livestreaming; some use it to document newsworthy events or promote business interests, while others exploit it by spreading unlawful and malicious content in pursuit of harmful goals. In the United States, Section 230 of the Communications Decency Act protects social media companies from being sued for their decisions to host or remove a wide range of content. Online livestreams are hence primarily governed by private ordering mechanisms: social media platforms’ Terms-of-Use and Community Guidelines, along with their AI-based moderation systems. These private ordering mechanisms, however, tend to take a content-based approach to content moderation. That is, all account holders face similar contractual restrictions that are embedded in a single optimization model. This predictive model assesses the legitimacy of new User-Generated Content based on content-based labeling (lawful vs. unlawful), but overlooks important factors related to the identity of the speaker. Current systems of content moderation are often blamed for lack of accuracy and bias. Livestreaming amplifies these concerns because it is exploited by livestreamers, whether for positive, democratic causes or for harmful ones. Accordingly, livestreaming serves as a significant test case for rethinking content moderation approaches. This Article proposes a speaker-based approach to livestreaming moderation.
Beyond the current one-size-fits-all-users approach, it suggests leveraging Machine Learning (ML) and AI to personalize the general optimization function of AI-based filtering models and tailor it to the specific characteristics of account holders, including their preferences, interests, and past digital activity. Indeed, advanced predictive capabilities are already exploited by social media platforms for content curation purposes, to match content to users and maximize platforms’ revenues. Accordingly, under the proposed approach, potentially violent users would be subject to stricter moderation thresholds, while non-violent speakers could face more lenient thresholds. This nuanced approach to content moderation could enhance the integrity of livestreams, ensuring that livestreams that foster democratic discourse and informed citizenship outweigh harmful and unlawful streams.
Recommended Citation
Maayan Perel, A Speaker-Based Approach to Content Moderation: The Case of Livestreaming, 28 Vanderbilt Journal of Entertainment and Technology Law 121 (2025).
Available at: https://scholarship.law.vanderbilt.edu/jetlaw/vol28/iss1/1