
New VPA Report Recommends “Net Neutrality for AI”

Innovation in artificial intelligence applications, including the rising tide of AI agents, depends on startups' access to AI foundation models offered by Anthropic, OpenAI, and Google. Each of these companies also competes with those startups, creating conflicts of interest. A new report by Asad Ramzanali, director of artificial intelligence and technology policy at the Vanderbilt Policy Accelerator, and Akhil Rajan analyzes the foundation model market, highlights a case study of a startup that experienced unfair treatment from a foundation model provider, and recommends a requirement for AI neutrality similar to “net neutrality” rules in broadband.

Just as net neutrality restricts internet service providers from prioritizing different types of internet traffic, including traffic from services they own or with which they have paid partnerships, AI neutrality would prohibit companies from prioritizing particular AI applications, including ones they own or in which they have material interests, with limited exceptions.

Startups developing AI products usually build their software atop a foundation model accessed via an application programming interface (API). The market for foundation model APIs is dominated by Anthropic, OpenAI, and Google, which together command nearly 90% of the market by revenue, according to Menlo Ventures. These three companies also offer their own AI applications.

“Because of the essential role that AI model providers—like Anthropic, OpenAI, and Google—play in enabling a wide swath of AI startups, they have become gatekeepers of innovation,” said Asad Ramzanali. “Increasingly, those AI model providers compete with their customers, creating unavoidable conflicts of interest that policymakers can help mitigate.”

“Fair, competitive, and contestable markets are critical for innovation,” said Akhil Rajan. “AI neutrality requirements are one way that policymakers can help prevent large companies from exploiting their control of critical infrastructure to pick winners and losers.”

The report uses the AI coding agent Windsurf as a case study. Anthropic notably cut off the company’s access to its Claude large language model after public reports indicated that OpenAI was in talks to acquire Windsurf.

Ramzanali and Rajan’s report recommends a neutrality rule that would prohibit AI model providers that offer an API to external parties from unjustly or unreasonably discriminating among similarly situated users in terms of access, latency, cost, and quality of service. The most straightforward way to enact such a requirement would be for Congress to pass legislation requiring AI model providers to maintain neutrality among customers, and the report includes model legislation.

To learn more, read the full paper or visit the VPA’s Substack.

About the VPA

The Vanderbilt Policy Accelerator for Political Economy and Regulation (VPA) focuses on cutting-edge topics in political economy and regulation to swiftly bring research, education, and policy proposals from infancy to maturity. To learn more about our work, visit vu.edu/vpa.
