AI Agents and the Law

Last month, John Nay, a Fellow at Stanford University’s Center for Legal Informatics (CodeX), CEO at Norm Ai, and a Visiting Scholar in Artificial Intelligence & Law at Vanderbilt Law, gave a presentation at the Law School on the potential for AI agents to reduce regulatory burdens.  

Here are some key takeaways: 

As Artificial Intelligence improves, its purpose remains the same. 

AI capabilities have advanced rapidly in recent years through new models and AI agents that, in Nay’s estimation, require a reevaluation of how society views and treats AI. Yet as complex as the technology is, AI’s function has remained the same throughout its development: a tool of efficiency and innovation for humans. Nay described three stages in how AI systems are managed. AI used to require constant human feedback at every new input step; humans were a necessary part of the machine learning process. It later advanced to needing only periodic human review. Nay’s hope for the future is that AI becomes self-learning to the degree that some agents will not require human review at all.  
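As a loose illustration of these three oversight modes (not something Nay presented, and with entirely hypothetical names), the toy Python sketch below shows an agent loop whose actions are checked by a human approver at every step, at periodic intervals, or not at all:

```python
from typing import Callable, List

def run_agent(steps: List[str],
              approve: Callable[[str], bool],
              oversight: str = "every_step",
              review_interval: int = 3) -> List[str]:
    """Toy agent loop illustrating three human-oversight modes:
    'every_step' - a human approves each action (constant feedback),
    'periodic'   - a human reviews every Nth action (periodic review),
    'autonomous' - no human review (the self-learning stage Nay hopes for).
    """
    executed = []
    for i, action in enumerate(steps, start=1):
        needs_review = (
            oversight == "every_step"
            or (oversight == "periodic" and i % review_interval == 0)
        )
        if needs_review and not approve(action):
            continue  # the human rejected this action, so it is skipped
        executed.append(action)
    return executed

# Example usage with a human approver that blocks one hypothetical action.
if __name__ == "__main__":
    actions = ["draft memo", "file document", "send notice", "archive case"]
    human_ok = lambda a: a != "file document"
    print(run_agent(actions, human_ok, oversight="every_step"))
    print(run_agent(actions, human_ok, oversight="periodic"))
    print(run_agent(actions, human_ok, oversight="autonomous"))
```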

There are difficulties in aligning AI with our societal framework, values, and goals. Nay addressed two particular challenges: specification and societal alignment.  

Specification, the process of getting an AI to do what a human wants, is difficult because of the variety of language inputs the AI must understand and balance, as well as the nuances of legal language. To succeed in its task, an AI must distinguish laws from policies and rules from standards, all while taking context and historical precedent into account.  

The societal aspect of the alignment issue concerns how an AI’s actions and decisions impact the world; according to Nay, any framework for AI depends on countless other entities with overlapping endorsements.  

Potential solutions come with benefits and drawbacks. Nay outlined three possible approaches to the alignment problem:  

  • Descriptive ethics takes human behavior observed in the world, applies machine learning to analyze that behavior, and uses the resulting data to train the AI model on how to act. This method effectively teaches AI to replicate human behavior. Nay noted that this approach requires a system to distinguish good behavior from bad so that the AI does not mimic the latter (a rough sketch of that filtering step follows this list).  
  • Prescriptive ethics gives the AI explicit statements of which actions are good and which are bad: a clear normative stance on issues. The downside of this approach is that it supplies no empirical data for the AI to learn from, leaving it unable to advise on real-world scenarios.  
  • The middle ground between these two solutions trains AI on real-world data that is both interpretable and normative. The challenge of this approach lies in the nature of the training materials: public policies and laws often focus on what not to do rather than what to do.  
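To make the descriptive-ethics point concrete, here is a minimal, hypothetical sketch (not from the talk) of the filtering step Nay mentioned: observed behavior is labeled good or bad, and only the good examples are kept as training data so the model does not learn to mimic bad behavior. The schema and names are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BehaviorExample:
    """One observed behavior plus a normative label (hypothetical schema)."""
    description: str
    is_good: bool  # supplied by a separate good/bad classifier or human labeler

def build_training_set(observed: List[BehaviorExample]) -> List[str]:
    """Keep only behavior judged good, so the model is not trained to mimic bad behavior."""
    return [ex.description for ex in observed if ex.is_good]

# Example usage with toy data.
observed = [
    BehaviorExample("disclosed a conflict of interest", is_good=True),
    BehaviorExample("shredded requested records", is_good=False),
    BehaviorExample("filed the report on time", is_good=True),
]
training_texts = build_training_set(observed)
# training_texts would then feed a fine-tuning step (omitted here).
```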

Law-informed AI could be very beneficial for the future of the legal profession. Obstacles aside, Nay expressed optimism about the use of AI in law, citing the opportunity to make processes more efficient and to give citizens a well-rounded understanding of policies. AI has the potential to improve laws, improve the practice of law, analyze policies, and cut through “legal sludge.” To Nay, AI has a very promising future in law, given the speed of its evolution, its accessibility, and its potential to be designed in a human-centered way.