Converting Regulations into AI Agents

John Nay, MS’13 / Ph.D.’17, a Vanderbilt Law Visiting Scholar and the founder of norm.ai, presented a public lecture on converting regulations into AI agents earlier this month. His talk centered on business and regulatory AI agents, their broader uses, and the implications of such technologies.

Nay began with an overview of the various uses for business AI agents. “AI agents are being deployed by businesses to do anything that the business would want them to do,” he said. “So this could be customer service, it could be marketing, it could be any other business activity that they would want to at least partially automate and then regulate.” Regulatory AI agents, by contrast, serve to monitor business AI agents.

He then discussed the creation and implementation of regulatory AI agents, built by marrying symbolic systems and interpretable AI with generative AI and large language models. Together, these agents do much of the work that has traditionally required human judgment of legal standards, while also closing a gap in translation.

“Typically, you go from a domain expert to a software engineer and then build an underlying system, but there’s a lot of translation there,” Nay said. “And so the idea is to have domain experts — whether it’s for law, policy, or another topic — directly build the AI agents.”

From this point, the resulting program can be applied in a variety of ways: to a multitude of laws and across a broader industry.

“You’re building a computational representation of something that could be used by many different programs,” Nay said. “It could be used by the regulators and many others to better understand the regulation.”

He used a decision tree as an example of how these regulatory agents work, decomposing the problem into many constituent units to create a reliable and comprehensive analysis. When given a problem or piece of work to evaluate, the agents run the work through the tree, evaluating it at each step against specific provisions and ultimately determining whether the work is compliant.

Nay pointed out that this cannot be done with simple code. “As you can imagine, a lot of these standards are very nuanced, and you can’t capture them in traditional code, and those are calls to large language models,” he said. “That’s where we are marrying together this interpretable, more traditional way of representing decision processes and code along with large language models.”
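The hybrid Nay describes can be sketched in code. Below is a minimal, illustrative decision tree in which a simple provision is a plain predicate and a nuanced provision would, in a real system, be a call to a large language model; here that call is stubbed with a placeholder. The `Node` structure, `evaluate` walk, and provision names are all hypothetical, not norm.ai's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Node:
    """One provision of the regulation, with a check and an optional next step."""
    provision: str
    check: Callable[[dict], bool]      # returns True if the work satisfies this provision
    on_pass: Optional["Node"] = None   # next provision to evaluate, if any

def evaluate(node: Optional[Node], work: dict) -> tuple[bool, list[str]]:
    """Walk the tree, recording each provision checked; stop at the first failure."""
    trail: list[str] = []
    while node is not None:
        trail.append(node.provision)
        if not node.check(work):
            return False, trail
        node = node.on_pass
    return True, trail

# A simple provision: capturable in traditional code.
def has_required_disclosure(work: dict) -> bool:
    return "disclosure" in work.get("sections", [])

# A nuanced provision: in a real system this node would call an LLM to judge
# whether the language is "fair and balanced"; stubbed here as a field check.
def language_is_fair_and_balanced(work: dict) -> bool:
    return work.get("tone") == "balanced"  # placeholder for an LLM judgment

tree = Node("Required disclosure present", has_required_disclosure,
            on_pass=Node("Language fair and balanced", language_is_fair_and_balanced))

ok, trail = evaluate(tree, {"sections": ["disclosure"], "tone": "balanced"})
print(ok, trail)  # True ['Required disclosure present', 'Language fair and balanced']
```

Because every node records which provision it checked, a failure comes back with the exact provision that was violated, which is what makes the decision-tree representation interpretable in a way a single opaque model call is not.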

Agents are built out with guidance from the domain experts who hold the regulatory knowledge but no longer need to translate it for another person.

“We’re able to zoom in and unpack each other’s decision tree and provide a context of large language models upfront before we run it on anything and build out this whole representation of the regulation with expert guidance embedded directly in it. [This is done] through the lawyers and other domain experts that are able to use a no-code tool to embed their expertise and give more guidance to the models from outside counsel.”

From there, the model can then be applied to whole industries with ease.

“It’s a lot of upfront work, but then you’re amortizing that over potentially the whole industry eventually, and everyone who is subject to the regulation, but then it can also change,” Nay said.

Humans currently regulate business AI agents, creating a compliance gap – humans cannot keep up with the speed and scale at which the agents operate – that can potentially be filled with regulatory AI. “[A] future state with regulatory AI agents enables you to still have humans supervising business AI agents, and in the loop, ultimately, at the highest level, but have much of the lower part be automated,” he explained.

In this light, regulatory AI agents serve two purposes: to help with compliance and to serve as a check on business AI agents as their use continues to grow.

Read more about how regulators can use AI in Nay’s recently published paper.