Using Math to Get the Law Right

Edward K. Cheng, Hess Professor of Law, develops a method for detecting and correcting case publication bias.

By Grace Renshaw

Soon after he began to study law, Edward K. Cheng realized how often scientific questions factored into legal decisions made by lawyers and judges—many of whom, he jokes, “go to law school because they don’t like math and science.” By the time Cheng entered the legal academy, his goal was to improve the accuracy of scientific testimony and statistical data presented in the courtroom to support better legal decision-making.

Cheng came to the study of law with a solid background in applied math and science. He earned a B.S.E. in electrical engineering at Princeton University and an M.Sc. from the London School of Economics before earning his J.D. at Harvard Law School. He joined the faculty at Brooklyn Law School in 2003 after working as a law clerk on the D.C. Circuit and serving as a Searle Fellow at Northwestern Law School.

During his tenure at Brooklyn Law, Cheng decided to bolster his ability to apply rigorous statistical analysis to his study of evidence law by earning a Ph.D. in statistics at Columbia University. His doctorate was awarded in 2018, but Cheng has been using his hard-won mathematical skills to examine problems in evidence law, such as case publication bias, for years. His study, “Detection and Correction of Case Publication Bias,” published in the peer-reviewed Journal of Legal Studies in 2018, develops a sophisticated technique for identifying case publication bias using multiple systems estimation, a statistical approach for estimating the size of hidden populations, such as the number of unrecorded human rights violations. “Judges and attorneys commonly rely on case law surveys to make decisions, and serious problems can arise if the available published cases don’t accurately represent the group as a whole,” Cheng said. “My goal was to determine how surveys of published case law might be statistically biased and propose a method to detect and correct that bias.”
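The core intuition behind multiple systems estimation can be shown with a two-list capture-recapture calculation. The sketch below is a minimal illustration, not Cheng’s actual model, and every count in it is hypothetical: the overlap between two incomplete lists of cases (say, two legal databases) is used to estimate how many cases exist in total, including those neither list captured.

```python
# Minimal two-list capture-recapture sketch (the simplest form of multiple
# systems estimation). All counts below are hypothetical and chosen only to
# illustrate the idea; Cheng's published model is considerably more involved.

def chapman_estimate(n1: int, n2: int, overlap: int) -> float:
    """Chapman's bias-corrected Lincoln-Petersen estimate of total population size.

    n1:      cases found in source 1 (e.g., one legal database)
    n2:      cases found in source 2 (e.g., another database)
    overlap: cases appearing in both sources
    """
    return (n1 + 1) * (n2 + 1) / (overlap + 1) - 1

# Hypothetical example: 120 rulings in one source, 90 in another, 60 in both.
observed_distinct = 120 + 90 - 60                 # 150 rulings visible somewhere
estimated_total = chapman_estimate(120, 90, 60)   # about 180 rulings overall

print(f"Distinct rulings actually observed: {observed_distinct}")
print(f"Estimated total, including unpublished: {estimated_total:.0f}")
```

Multiple systems estimation generalizes this two-list logic to several overlapping sources at once; the Chapman correction used here is simply a standard small-sample adjustment to the classic Lincoln-Petersen estimator.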

Cheng first tested his method for detecting publication bias on groups of cases where the publication bias was known. He then applied it to a set of rulings by trial judges on whether to admit expert testimony related to false confessions as evidence. “Confessions are one of the most devastating potential pieces of evidence against a criminal defendant, but in recent years we’ve discovered that defendants sometimes confess to crimes they didn’t commit,” he said. Defense attorneys occasionally hire experts to testify about how and why their client’s confession was false; Cheng wanted to determine whether a survey of the available admissibility rulings on such expert testimony reflected what was happening in the real world.

When Cheng compared the results of his traditional case survey to the results yielded by his statistical model, he discovered a substantial difference. The survey indicated that judges admitted experts in 16 percent of cases; his statistical model indicated they did so in at least 28 percent of cases. “An attorney or judge looking at the available legal databases would likely conclude that false-confession expert testimony has been poorly received by courts,” he said. “But the model suggests a higher likelihood the expert testimony will be admitted.”
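A toy simulation makes the mechanism behind that gap concrete. The sketch below is purely illustrative; the publication probabilities are invented for the example and are not drawn from Cheng’s paper. It shows how, if rulings excluding an expert are published more often than rulings admitting one, the admission rate visible in a legal database falls well below the true rate.

```python
import random

# Toy simulation of how selective publication skews an observed admission
# rate. All probabilities here are invented for illustration and are not
# taken from Cheng's study.

random.seed(0)
TRUE_ADMIT_RATE = 0.28      # assumed true rate at which experts are admitted
PUBLISH_IF_ADMITTED = 0.4   # rulings admitting the expert are published less often
PUBLISH_IF_EXCLUDED = 0.8   # rulings excluding the expert are published more often

published_rulings = []
for _ in range(10_000):
    admitted = random.random() < TRUE_ADMIT_RATE
    publish_prob = PUBLISH_IF_ADMITTED if admitted else PUBLISH_IF_EXCLUDED
    if random.random() < publish_prob:
        published_rulings.append(admitted)

observed_rate = sum(published_rulings) / len(published_rulings)
print(f"True admission rate:                 {TRUE_ADMIT_RATE:.0%}")
print(f"Admission rate in published rulings: {observed_rate:.0%}")
# A survey limited to published rulings would report roughly 16 percent,
# even though the underlying rate in this toy world is 28 percent.
```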

Hess Professor of Law Edward K. Cheng is the host of Excited Utterance, a podcast in which he interviews other scholars of evidence law and discusses their work. It now has nearly 3,500 followers. Since joining Vanderbilt’s law faculty in 2010, Cheng has won seven student-selected Hall-Hartman Awards. He and Alex Nunn ’16, an assistant professor of law at the University of Arkansas, have an article, “Beyond the Witness: Bringing a Process Perspective to Modern Evidence Law,” forthcoming in the Texas Law Review. (Vanderbilt University/Joe Howell)

Cheng is quick to point out that his method for correcting case publication bias should be a temporary solution. “Complicated statistical models are often required only when the available data are flawed,” he said.

His proposed fix is simple: judges should publish more of their opinions. “If trial judges release their opinions regardless of outcome and state court systems make all trial transcripts more widely available, legal research databases could obtain and provide more case materials to lawyers and judges seeking to understand prior cases and outcomes,” he said.

Cheng cites the legal analytics company Lex Machina’s initiative to collect and provide a comprehensive database of intellectual property cases as a good example. “If these kinds of databases were available for all types of legal cases, a statistical model to correct bias would be unnecessary—and that would be a good thing,” he said.

For Associate Dean for Academic Affairs Chris Serkin, Cheng’s conclusion that the real “publication bias” problem is that too few cases are actually published shows that his work is motivated by a desire to improve the legal system. “Ed’s research focuses on investigating data that either cannot be directly observed or that are subject to deliberate manipulation,” said Serkin. “He uses sophisticated statistical modeling to shine light in these areas and provide a clear-eyed view of what’s actually happening—and ends by recommending a pragmatic solution: more and better data. This is the work of someone committed to the truth.”
