AI Agents: Balancing Autonomy and Accountability

By Chloe Roslin

In recent years, artificial intelligence (AI) has worked its way into nearly every aspect of human life. From advanced systems used in medicine and self-driving cars to basic tools that help high school students write essays, AI has become so prevalent that it is hard to imagine a world without it. While traditional AI systems have undeniably revolutionized the modern world, their lack of true autonomy has long preserved a clear distinction between machines and humans. With the recent development of AI agents, however, that gap has begun to close.

Unlike traditional AI chatbots such as ChatGPT, AI agents are goal-directed systems capable of processing information, making decisions, and acting.[1] Because these systems can take independent action rather than merely responding to user input, they are transforming the legal field by producing actionable outcomes tailored to specific legal needs.[2] Attorneys can now offload certain tasks to AI agents, increasing efficiency and reducing client costs.[3]

Yet, while the benefits of AI agents are significant, their adoption raises complex legal and ethical questions. Because AI agents act independently, often without human prompting, their behavior can be unpredictable.[4] In law, this unpredictability can carry serious implications for client confidentiality and attorney accountability.[5] When a human acts on behalf of a client, liability for a breach of duty that injures that client is clear. When a machine acts, however, determining responsibility becomes far more complicated. Attorneys may face strict liability for negligent actions taken by an AI agent, forcing the profession to grapple with whether the promise of efficiency outweighs the risk of continuous, unsupervised, autonomous machine action.[6]

As AI agents become more deeply embedded in the legal profession, the balance between innovation and accountability will be critical. Their potential to enhance the legal field is immense, but so too are the risks that come with their use. The coming years will likely define how the law adapts to govern these autonomous systems and whether attorneys can integrate them responsibly without compromising the ethical foundation of the legal field.

Chloe Roslin is a second-year law student from Bloomfield Hills, Michigan. She graduated from the University of Michigan with a major in Movement Science and a minor in the Sociology of Health and Medicine, and she plans to work in transactional law in New York after graduation.


[1] Ken D. Kumayama, Pramode Chiruvolu & Daniel Weiss, AI Agents: Greater Capabilities and Enhanced Risks, REUTERS (Apr. 22, 2025, 8:51 AM CT), https://www.reuters.com/legal/legalindustry/ai-agents-greater-capabilities-enhanced-risks-2025-04-22/.

[2] David A. McCarville, AI Agents Are Revolutionizing the Legal Profession and Expanding Access to Justice, FENNEMORE (July 3, 2025), https://www.fennemorelaw.com/ai-agents-are-revolutionizing-the-legal-profession-and-expanding-access-to-justice/.

[3] Id.

[4] Danny Tobey et al., The Rise of "Agentic" AI: Potential New Legal and Organizational Risks, DLA PIPER (June 9, 2025), https://www.dlapiper.com/en/insights/publications/ai-outlook/2025/the-rise-of-agentic-ai–potential-new-legal-and-organizational-risks.

[5] See McCarville, supra note 2.

[6] See Tobey et al., supra note 4.