
Determining Free Speech Rules for AI Algorithms

The online content we consume through search engines, social media platforms, and news applications is largely determined by AI algorithms. They curate, prioritize, recommend, and generate content for millions of Americans, playing a significant role in online public discourse. Yet courts have not decided whether or how the First Amendment should apply to these technologies, while lawmakers, scholars, and tech companies have reached little consensus on the matter.

In their law review article “Algorithmic Speech,” Vanderbilt Law Professor Francesca Procaccini and Wendy K. Tam, Professor of Computer Science at Vanderbilt, argue that content-crafting AI algorithms do fall within the First Amendment’s coverage and are entitled to free speech protections determined by context. The authors go on to demonstrate how these algorithms have created a “fundamentally new speech context” that requires a new constitutional standard to govern how they can be regulated.

Why Are AI Algorithms Covered by the First Amendment?

Given the ubiquity of these algorithms in shaping public discourse online, the authors pose a “first-order constitutional question”: do the acts of designing, training, and deploying these systems qualify as First Amendment-protected speech?

Yes, the authors argue, because the AI algorithms — whether simple rule-based systems, sophisticated machine-learning models, or the large language models (LLMs) powering tools like ChatGPT — all embed human choices, values, and intentions at every stage of their design and use. They are not neutral, autonomous machines running on their own. They are, as the authors put it, “the agent(s) of human expression.”

Courts agree that certain categories of expressive algorithms merit First Amendment protection, and that not every algorithm is expressive; where they remain divided is on whether more autonomous and complex AI algorithms qualify. Through several concrete examples, the authors show that these algorithms are expressive for First Amendment purposes.

For example, TikTok’s AI recommendation algorithm awards videos different point values — 1 point for a “like,” 3 for a “share,” 5 for a rewatch — reflecting deliberate human choices about what constitutes engagement. Elon Musk has tweeted about planned “algorithm tweaks” that will influence the user experience, and Meta executives have publicly announced shifts in Facebook’s content philosophy for the same reason. These are not incidental details; they reveal that the humans behind these platforms are actively shaping what billions of people see every day through the AI algorithms that power their platforms.

Human expression can be found even in circumstances where an algorithm’s outputs seem unpredictable or opaque, the authors show. The “randomness” built into machine-learning models is a deliberate design choice. The computer is not acting independently, but rather executing the expressive intent of its creators.

The Contextual Freedom of Speech

The article argues that First Amendment protections have always been “incredibly varied, contextual, and accommodating.” The authors cite the diverse and context-specific doctrines governing, for example, speech in public forums, courtroom proceedings, police interrogations, government offices, public schools, and prisons. They also point out the nuanced rules that shape constitutional protection for speech in private environments like the home, private schools, and private workplaces and organizations.

“Rather than impose generalized free speech rules broadly applicable to all speech contexts,” they argue, “the free speech right is governed by doctrines that reflect the myriad social dynamics that shape specific speech environments.”

How Does Free Speech Evolve with Technology?

Procaccini and Tam point to the evolution of free speech law in the wake of developments in communications technology like the printing press, radio, and internet. “These types of technological innovations instigate structural transformations in the ecology of human expression,” they write. In turn, free speech law adapts accordingly.

What makes the current moment distinctive, the authors contend, is that AI algorithms have not merely changed how speech is transmitted — they have restructured the very conditions under which public discourse occurs. The traditional model of a speaker communicating a message to a listener has given way to something far more complex: a system in which opaque algorithms, optimized for user engagement and advertising revenue, decide what speech gets amplified and what gets buried.

The result, as the authors put it, is that “the marketplace of ideas has been supplanted by the marketplace of engagement.” Content that is emotionally provocative, outrage-inducing, or addictive tends to be amplified, while nuanced or complex information gets deprioritized. Personalized feeds mean that different users inhabit entirely different informational universes, eroding the shared reality that democratic self-governance depends upon.

At the same time, the authors note a paradox in what appears to be a proliferation of choices. While users can technically choose from dozens of platforms — Facebook, Instagram, TikTok, Reddit, YouTube, and more — all of these platforms run on essentially the same engagement-maximizing logic. The authors liken this to “having a plethora of public school options in a county that all follow the same curriculum.” More platforms have not meant more genuine alternatives.

A New Constitutional Standard for Regulating AI Algorithms

In response to this new speech environment, Procaccini and Tam propose a new constitutional standard for evaluating regulations of AI algorithms. Under their framework, a law regulating algorithmic speech would be constitutional if it pursues an important governmental interest in health, safety, or individual autonomy; is unrelated to the suppression of particular viewpoints; and is no more restrictive than necessary given current technological capacities.

Two categories of regulation would pass muster under the proposed standard. The first involves new tort liability frameworks that would hold platforms accountable for algorithmic harms — for example, when recommendation systems systematically amplify content that causes concrete harm to users. The authors suggest drawing on models from civil rights and labor law, including concepts like “pattern or practice” liability and vicarious liability, tailored to the unique enforcement challenges of the online environment.

The second category involves what the authors call “autonomy-enhancing interventions” — measures designed to give users more genuine control over the content they encounter. These include requiring platforms to make their engagement-based algorithms opt-in rather than opt-out by default, mandating easier data deletion tools, requiring platforms to offer open interfaces to allow third-party developers to build alternative recommendation systems, and incentivizing the development of decentralized, interoperable networks that break up the current concentration of power among a handful of dominant platforms.

The Supreme Court is already grappling with these questions of regulation. In its 2024 decision in Moody v. NetChoice, the Court confirmed that social media algorithms can constitute protected speech but left important questions unresolved, including how to treat AI-powered recommendation systems, engagement algorithms, and the outputs of LLMs. Multiple justices expressed uncertainty about how the First Amendment should apply when a platform “hands the reins to an AI tool.”

As the authors conclude, the task for courts and policymakers is neither to freeze free speech doctrine in place nor to abandon its core principles, but rather “to adapt long-settled principles to the emerging realities of the AI era.”

“Algorithmic Speech” is forthcoming in the California Law Review. Francesca Procaccini is an Associate Professor of Law at Vanderbilt Law School. Wendy Tam is the Stevenson Chair, Professor of Computer Science, Political Science, Law, and Biomedical Informatics at Vanderbilt University.