The Use of Generative A.I. Technology in Judicial Decision-Making: A Promising Tool or Recipe for Disaster?

By Sam Sidle

As generative artificial intelligence (A.I.) technology has proliferated over the past few years, judges around the country have begun using it in their judicial decision-making. While some judges have effectively implemented the technology as a useful tool for statutory interpretation, others have used it to generate factually and legally inaccurate court orders. As a result, there is a need for continued regulation and oversight of generative A.I. use from the bench, along with clear frameworks establishing best practices.

First, the positive. Judge Newsom (11th Cir.) has, in a series of recent concurrences, advocated for the use of generative A.I. technology to discern the ordinary public meaning of statutory terms.[1] In Newsom’s view, because of the broad-based training that large language models (LLMs) receive, the ideal output of textualist jurisprudence (the ordinary meaning of a term) and the output of LLMs (the aggregated meaning of a term according to the training data) appear to be in alignment. For that reason, LLMs, in contrast to traditional tools such as dictionaries, may be particularly useful for determining how ordinary people would understand statutory terms. In Snell, Newsom summarizes the potential benefits as follows: LLMs can “understand” context; they are accessible; the research behind them is relatively transparent; and they hold advantages over other empirical interpretive methods, such as surveys of ordinary people or corpus linguistics. Because of these advantages, the use of generative A.I. technology “offers the promise of ‘democratizing’ the interpretive enterprise.”[2] In a separate concurrence, Newsom noted another key advantage: that the “substantively-identical-and-yet-marginally-different answers . . . pretty closely mimic what we would expect to see, and in fact do see, in every day speech patterns.”[3] While Newsom recognizes some of the inherent risks of using LLMs to interpret statutes, he still views them as an exciting new tool for developing theories of statutory interpretation.

Some judges, however, have used LLMs in ways that are downright destructive for the judiciary. For example, as noted by Senate Judiciary Committee Chairman Chuck Grassley, multiple judges around the country have used generative A.I. to draft factually inaccurate court orders that “misquoted state law, referenced individuals who didn’t appear in the case[,] and attributed fake quotes to defendants, among other significant inaccuracies.”[4] As generative A.I. tools continue to develop, it is clear that some oversight or internal policy changes are required. The judges admonished in Grassley’s review, for example, have implemented changes such as requiring all drafts to undergo independent review, with all cited cases printed and attached, and prohibiting law clerks and interns from using A.I. when drafting opinions or orders.[5]

Although A.I. presents promising new opportunities for the judiciary’s work, new guideposts, internal rules, and oversight systems are necessary to ensure its responsible use.

Sam Sidle is a second-year law student from Baltimore, Maryland. Before law school, he attended Vanderbilt University and studied Economics and American History. He will be working in New York City this summer.


[1] See Snell v. United Specialty Ins. Co., 102 F.4th 1208 (11th Cir. 2024) (Newsom, J., concurring).

[2] See Snell, 102 F.4th at 1228 (Newsom, J., concurring).

[3] United States v. Deleon, 116 F.4th 1260, 1275 (11th Cir. 2024) (Newsom, J., concurring).

[4] Press Release, U.S. Senate Comm. on the Judiciary, Grassley Releases Judges’ Responses Owning Up to AI Use, Calls for Continued Oversight and Regulation (Oct. 23, 2025), https://www.judiciary.senate.gov/press/rep/releases/grassley-releases-judges-responses-owning-up-to-ai-use-calls-for-continued-oversight-and-regulation.

[5] See id.
