AI engines have grown increasingly adept at producing realistic images and audio, heightening concerns around the misuse of deepfake evidence in the legal process. Yet scholars, policymakers, and courts have yet to arrive at a consensus on how to manage this new challenge.
In his essay “Deepfakes, Photographs, and Trust in Evidence,” Vanderbilt Law Professor and evidence expert Edward Cheng argues that the rise of deepfakes does not warrant new rules of authentication or other corrective measures. Instead, he offers a base-rate model of authentication, drawing parallels between deepfakes and past technological advancements before highlighting what he sees as the larger problem with legal proof today: a polarized society.
A model for assessing the authenticity of evidence in the age of deepfakes
Professor Cheng’s proposed model relies on “base rates of trust,” which depend on the prevalence of forgery in a particular context. Evidence that is difficult to falsify (like paper currency, absent suspicious circumstances) comes with a presumption of authenticity, while evidence that is easier to fabricate (oral statements and testimony) is subject to greater skepticism.
The base rate for any given form of evidence depends, he argues, on available technology. The trust ranking of printed materials declined as high-quality printing equipment became cheaper and more prevalent. Conversely, the same technological advancement increased the relative trust ranking of handwritten materials, which became more difficult to forge than typed ones.
“This base rate perspective strongly suggests that deepfakes are an evolutionary, not revolutionary problem,” Cheng explains. Audiovisual evidence that used to be regarded as highly reliable will more frequently be called into question. Cheng admits that jurors who are unaware of or unfamiliar with deepfake technology may improperly assess the evidence, but he notes that this risk is “scarcely different” from other common juror mistakes, like overestimating the reliability of eyewitness identifications or their ability to detect lies based on demeanor.
“In general, no special evidentiary rules exist to address these problems,” he writes. “Cross-examination, the presentation of expert evidence, and in rare cases, the use of cautionary jury instructions are usually sufficient responses.”
Photographs: the original deepfake?
Cheng draws parallels between the deepfake challenge and the history of photographic evidence. Early on, courts worried that photographs would supplant juries and witness testimony, so the legal system moved to “defang” them. Thus, even today, photographs are frequently treated by courts as “pictorial testimony,” equivalent to a diagram or drawing that serves as an illustrative aid to witness testimony. To the legal system, the value of a photograph derives from its sponsoring witness, not the photograph itself. From a layperson’s perspective, though, this construction gets things exactly backwards. At least before deepfakes, we verified witness accounts using recorded images, not the other way around. “Pictorial testimony was thus a legal construct that never captured the essence of what photographs were as evidence,” Cheng writes.
Cheng argues that the irony of deepfakes is that they vindicate the previously “clunky” pictorial testimony approach. As deepfakes become pervasive, the evidentiary value of images will indeed rest entirely on their sponsoring witnesses. Images will no longer hold independent or inherent reliability. “We may see a shift back to pictorial testimony theory in earnest. Alternatively, we may see the emergence of technological fixes to guarantee reliability. Either way though, the existing conceptual framework remains perfectly serviceable,” he writes.
A matter of trust
Whatever course of action courts may take to address deepfake evidence, Cheng argues that the deepfake problem reveals a strong conceptual link between legal proof and trust. The value of evidence comes in large part from the extent to which we trust it, represented by the base rates of reliability that we ascribe to different forms of evidence.
In an increasingly polarized world, however, such a trust theory of evidence carries “troubling normative implications,” he writes. Distrust of marginalized groups, the mainstream medical and scientific establishment, or people of different political persuasions can lead to “wildly disparate base rates” and prevent the legal system from reaching consensus on trusted sources of information.
“[I]f legal proof is about trust, and we fundamentally cannot agree on whom or what to trust, then there is very little that the rules of evidence can do,” Cheng writes. Instead, he argues that the solution must be found in “broader social and political discourse.”
“This is the evidentiary challenge of our age,” he concludes. “How do we get back to some kind of consensus on what constitutes good or acceptable evidence?”
“Deepfakes, Photographs, and Trust in Evidence” is forthcoming in Virginia Law Review Online. Edward K. Cheng is the Hess Chair in Law and Director of the Cecil D. Branstetter Litigation & Dispute Resolution Program at Vanderbilt Law.