Lying Eyes: Deepfakes, Rules of Evidence, and Disinformation
“Who are you going to believe…Me? Or your lying eyes?”
Richard Pryor, Live on the Sunset Strip
As a lawyer who has worked with innovative and disruptive technology for about 10 years, I have become increasingly concerned with the role disinformation, particularly AI-enabled “deepfakes,” could play in corrupting the reliability of evidence on which decisions are made. These decisions, reliant on the veracity of photo and video evidence, can take a variety of forms – political leaders making strategic decisions in international affairs, military commanders making targeting decisions, entertainment icons whose reputations can be burnished or destroyed by images, business leaders making decisions about product releases, and judicial outcomes in domestic and international courts. The rules for assessing the reliability of photographs or videos in evidence provide a useful template for assessing the reliability and veracity of photographic or videographic evidence in other contexts. In the age of AI-enabled deepfakes, that process will become more complex, technical, and deliberate. Deepfakes make “lying eyes” no longer a joke, but an unpleasant fact as we grapple with the technology of truth.
Traditionally, in court, photo or video evidence is assessed in a two-part linear process that is, ultimately, a purely epistemological one:
(1) a competent witness testifies to authenticate a photo or video as an accurate depiction of the occurrence or matter being represented, based on his or her memory; and
(2) the finder of fact assesses the photo or video for probative value.
Because most people’s knowledge of the legal system comes from movies and television, two scenes illustrate the two steps in the process reasonably well.
Part 1: Authentication:
In the military courtroom thriller “A Few Good Men,” defense counsel Daniel Kaffee wields a “transfer order” before fact witness Colonel Nathan R. Jessup. The defense theory was that the transfer order away from Guantanamo Bay, Cuba, had been falsified to create the illusion that the murdered Private Santiago would have been safe from the lethal hazing of his fellow Marines, but for the intercession of the rogue Corporal Harold Dawson and Private Louden Downey. The movie glosses over the transfer order’s authenticity. In reality, for the prosecution to introduce the transfer order as a business record showing intent to transfer the Marine off the island to keep him safe, the personnel officer who signed the transfer order would have had to take the stand and answer questions like these:
- What is your duty position?
- What duties do you perform routinely in that position?
- Are you familiar with transfer orders?
- What are transfer orders?
- What is their purpose?
- When do you prepare transfer orders?
- Where are these documents stored after they are prepared?
- Is it a regular part of your duties to keep and maintain records of this type?
- Are these documents of the type that would be kept under your custody or control?
- Are you familiar with this transfer order (Private Santiago’s transfer order)?
- Do you recall preparing this transfer order?
- Do you recall directing someone under your supervision to prepare this transfer order?
- Do you recall whether that person actually prepared the transfer order?
- Was this transfer order prepared in the ordinary scope of business in your military unit?
- Do you recognize the signature on the document?
- Whose signature is that? (the personnel officer’s)
After satisfactorily laying this foundation, the prosecutor would have moved the document into evidence. Of course, in the case of Private Santiago’s transfer order, the personnel officer would have had to perjure himself to lay this foundation, because the transfer order was fabricated after the murder to cover the tracks of the officers directing the hazing that led to the Marine’s death. The point of authentication is that a live witness testifies truthfully under oath that a certain document, photo, or video is genuine, real, and an accurate record or depiction.
Only after authentication and introduction is the exhibit – the piece of documentary, photographic or videographic evidence – assessed for probative value – that is, how convincing it is to establish the fact it purports to establish.
Part 2: Probative Value:
In the 1992 crime drama/comedy “My Cousin Vinny,” Joe Pesci played first-time trial lawyer and many-time bar exam taker Vinny Gambini, defending his young cousin and his cousin’s friend, who are falsely accused of murder in a convenience store heist gone bad. In one pivotal scene, attorney Gambini illustrates this process with surprising skill as he uses witness and girlfriend Mona Lisa Vito to assess a key photo’s probative value, even as the parties appear to skip the photo authentication step entirely. For the purposes of this exercise, we can assume both sides stipulated to the authenticity of the photo showing dual tire marks leaving the scene of the crime and jumping a curb.
Mona Lisa Vito: The car that made these two, equal-length tire marks had positraction. You can’t make those marks without positraction, which was not available on the ’64 Buick Skylark!
Vinny Gambini: And why not? What is positraction?
Mona Lisa Vito: It’s a limited slip differential which distributes power equally to both the right and left tires. The ’64 Skylark had a regular differential, which, anyone who’s been stuck in the mud in Alabama knows, you step on the gas, one tire spins, the other tire does nothing.
Juror #1: That’s right.
Vinny Gambini: Is that it?
Mona Lisa Vito: No, there’s more! You see? When the left tire mark goes up on the curb and the right tire mark stays flat and even? Well, the ’64 Skylark had a solid rear axle, so when the left tire would go up on the curb, the right tire would tilt out and ride along its edge. But that didn’t happen here. The tire mark stayed flat and even. This car had an independent rear suspension. Now, in the ’60s, there were only two other cars made in America that had positraction, and independent rear suspension, and enough power to make these marks. One was the Corvette, which could never be confused with the Buick Skylark. The other had the same body length, height, width, weight, wheelbase, and wheel track as the ’64 Skylark, and that was the 1963 Pontiac Tempest.
Vinny Gambini: And because both cars were made by GM, were both cars available in metallic mint green paint?
Mona Lisa Vito: They were!
Vinny Gambini: Thank you, Ms. Vito. No more questions. Thank you very, very much.
[kissing her hands]
Vinny Gambini: You’ve been a lovely, lovely witness.
Having established Ms. Vito’s bona fides to testify on the matter of automobile technical and mechanical characteristics, to which the prosecutor ultimately stipulated, this line of questioning turned out to be extraordinarily probative on the issue of whether there were two similar cars in the area of the murder – one driven by Mr. Gambini’s cousin, and one driven by the murderer who robbed the store.
The danger of deepfakes, in court or in extrajudicial life, is that they constitute synthetic or manufactured evidence. In court, they cannot be authenticated absent perjury or an egregious testimonial error that can be exposed on cross-examination, so arguably Model Rule of Evidence 901 is already sufficient to keep out deepfake evidence. If deepfake evidence is nonetheless admitted, however, the integrity of the second prong – probative value – is threatened by its mere existence. The control for ensuring unreliable deepfakes do not corrupt the evidentiary system of American courts therefore hinges on authentication.
Authenticating digital images:
Because digital photographs and videos can be so easily manipulated, altered, or changed, authenticating digital media requires a witness – sometimes the photographer or videographer, and sometimes a third-party witness familiar with the scene depicted in the photograph or video – to give “pictorial testimony,” in which the witness testifies from memory that the media is a true and accurate representation of what the witness saw. As with all evidence, courts rely on the adverse party to expose altered evidence through cross-examination or impeachment by extrinsic evidence. This is not a new problem.
There is no heightened authenticity standard for digital media, even though they are relatively easy to manipulate. Over time, some legal commentators have argued for a more stringent, demanding foundation, including, perhaps, a requirement to certify the photograph is unaltered. The current standard only requires introduction of prima facie evidence of authenticity, which shifts the burden to the opponent to challenge the authenticity. Faith in the adversarial system substitutes for a higher initial threshold of reliability. This is a fairly well settled proposition.
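The certification some commentators envision can be sketched in software: if a cryptographic digest of the image file is recorded when the photograph is captured or logged into evidence, any later alteration of even a single byte changes the digest. The sketch below is purely illustrative – the helper names `file_hash` and `matches_certified_hash` are hypothetical, not part of any evidentiary rule or existing forensic tool.

```python
import hashlib

def file_hash(path: str, algorithm: str = "sha256") -> str:
    """Compute a cryptographic digest of a file's bytes, reading in chunks
    so large video files do not need to fit in memory."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_certified_hash(path: str, certified_digest: str) -> bool:
    """True only if the file's current digest equals the digest recorded
    when the image was originally captured or logged into evidence."""
    return file_hash(path) == certified_digest.lower()
```

Note the limit of such a check: a matching digest shows only that the file is unchanged since the digest was recorded; it says nothing about whether the original capture was genuine – which is precisely the gap deepfakes exploit.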
The New and Unique Challenge of Deepfakes and an Evidentiary Rule Modification
Deepfakes present a new and largely unprecedented challenge because they often are not manipulated or altered media, but media fabricated entirely from the ground up. Dr. Maura Grossman, a computer scientist at Canada’s University of Waterloo, and the Honorable Paul W. Grimm (Retired), a former U.S. District Judge from Maryland and now a professor at Duke University, recognize the unique and extraordinary threat deepfakes present to the evidentiary system. They co-authored a Proposed Modification of Current Rule 901(b)(9), addressing authentication issues regarding artificial intelligence evidence, submitted to the Advisory Committee on Evidence Rules, which met on October 27.
Federal Rule of Evidence 901 (and Model Rule of Evidence 901) currently states:
(a) General provision. The requirement of authentication or identification as a condition precedent to admissibility is satisfied by evidence sufficient to support a finding that the matter in question is what its proponent claims.
(b) Illustrations. By way of illustration only, and not by way of limitation, the following are examples of authentication or identification conforming to the requirements of this rule:
(1) Testimony of witness with knowledge. Testimony that a matter is what it is claimed to be.
(2) Nonexpert opinion on handwriting. Nonexpert opinion as to the genuineness of handwriting, based upon familiarity not acquired for purposes of the litigation.
(3) Comparison by trier or expert witness. Comparison by the trier of fact or by expert witnesses with specimens which have been authenticated.
(4) Distinctive characteristics and the like. Appearance, contents, substance, internal patterns, or other distinctive characteristics, taken in conjunction with circumstances.
(5) Voice identification. Identification of a voice, whether heard firsthand or through mechanical or electronic transmission or recording, by opinion based upon hearing the voice at any time under circumstances connecting it with the alleged speaker.
(6) Telephone conversations. Telephone conversations, by evidence that a call was made to the number assigned at the time by the telephone company to a particular person or business, if
(A) in the case of a person, circumstances, including self-identification, show the person answering to be the one called, or
(B) in the case of a business, the call was made to a place of business and the conversation related to business reasonably transacted over the telephone.
(7) Public records or reports. Evidence that a writing authorized by law to be recorded or filed and in fact recorded or filed in a public office, or a purported public record, report, statement, or data compilation, in any form, is from the public office where items of this nature are kept.
(8) Ancient documents or data compilation. Evidence that a document or data compilation, in any form,
(A) is in such condition as to create no suspicion concerning its authenticity,
(B) was in a place where it, if authentic, would likely be, and
(C) has been in existence 20 years or more at the time it is offered.
(9) Process or system. Evidence describing a process or system used to produce a result and showing that the process or system produces an accurate result.
(10) Methods provided by statute or rule. Any method of authentication or identification provided by Act of Congress or by other rules prescribed by the Supreme Court pursuant to statutory authority.
Dr. Grossman and Judge Grimm propose an amendment to subparagraph (9) which states:
(9) Evidence about a Process or System. For an item generated by a process or system:
(A) Evidence describing it and showing that it produces a reliable result; and
(B) If the proponent concedes that — or the proponent provides a factual basis for suspecting that — the item was generated by artificial intelligence, additional evidence that:
(i) Describes the software or program that was used; and
(ii) Shows that it produced reliable results in this instance.
The revised rule places primacy on “reliability” over “accuracy” and clarifies that the trial judge also has an obligation to decide the preliminary question of admissibility under Federal Rule of Evidence 104(a). Dr. Grossman and Judge Grimm maintain the judge’s role as evidentiary gatekeeper while supplying new and revised standards that would apply to the unique category of AI-assisted or AI-generated evidence, including deepfakes. The question remains whether the subtle shift to “reliability” over “accuracy” would be sufficient to bar the introduction of deepfakes as probative evidence, given that the process continues to rely on litigants’ and judges’ understanding of the technology and the ability of an opponent of the evidence to test it adequately through cross-examination and extrinsic impeachment evidence.
The struggle over this issue in the Federal Rules of Evidence, which are orderly and process bound, portends a far less methodical process for assessing the “reliability” of deepfake evidence in other contexts less bound by rules and procedure. Deepfakes will challenge the courts in the coming years and decades, but threaten the foundations of society through disinformation in other contexts.
Judging the authenticity of deepfakes outside of court, where images and videos affect people’s decisions in real time, is fraught with even more risk. In court, there is a process for vetting images and videos by applying rules of evidence and admitting only evidence that presents sufficient indicia of reliability. In the 24-hour global news cycle, the process is less structured and more fluid, and people make decisions more rapidly based on what they see, without properly evaluating whether it is real. The danger becomes more acute as AI improves and deepfakes appear more authentic to the naked eye. Deepfakes represent the most formidable form of disinformation and challenge to authenticity and truth for all time – or certainly since kings could order the authoring of whole manuscripts and have them circulated as history. Finally, the impact on markets that rely on uniqueness and genuineness, such as art, cryptocurrency, and NFTs, is apparent – the replication of counterfeits in an AI-driven enterprise could devalue entire markets to zero in a flash.
Ironically, technology has simultaneously made truth more accessible – through photography, image enhancement, satellite imagery, GPS, electronic recordkeeping, and the application of science – and more elusive – through deepfakes, disinformation, and other forms of spoofing facts. One thing is certain: understanding the technology of truth is no longer optional for judges, legal professionals, diplomats, generals and admirals, business groundbreakers, and political leaders. Nothing is necessarily as it seems anymore, and deliberate but fast processes are required to separate fact from fiction – in court, in business, in government, and in diplomacy.
Crenshaw, Ware & Martin PLC is a 100-year-old Norfolk business law firm. Our legal professionals stay on the cutting edge of issues that affect business, including those creating, distributing or working with advanced technologies such as cybersecurity systems, artificial intelligence and autonomous systems. Contact Managing Partner Darius Davenport at DDavenport@cwm-law.com, Business Disputes and Government Contracting Practice Group Chair Ryan Snow at WRSnow@cwm-law.com, Litigation Practice Group Chair Jim Chapman at JChapman@cwm-law.com, or Attorney Robert Bracknell at RBracknell@cwm-law.com for assistance and counsel on these complex issues at the intersection of law, business and technology.