Tricks, Lies, and the Administration of Justice

There currently exists almost limitless access to technology that can be used to modify images. No longer the sole province of specialists with high-end computers, the democratization of image modification capabilities means that untrained civilians can create wholly false images and transform real images into something other than a true depiction of reality. When this is done for entertainment purposes, most people will see it for what it is – a bit of fun. However, when this is done for the express purpose of gaining an unfair advantage in a judicial setting and obstructing the course of justice, it is no longer fun – it threatens the very fabric of the judicial process. A recent civil case in California serves to illustrate this point.

The California case

In Mendones et al. v. Cushman and Wakefield, Inc. et al. (Case number 23CV028772, September 9, 2025, Superior Court of California, County of Alameda), the plaintiffs sought an order of summary judgment in their litigation against the defendants. If the plaintiffs were successful, that would have brought the civil action to an end (in their favour). In support of their application, the plaintiffs tendered several video recordings that had obviously been improperly modified or created using generative AI. Fortunately, the judge who had conduct of the case readily determined that the tendered video recordings were not legitimate. In particular, the judge found that some of the video recordings were deepfakes based upon various obvious anomalies in the video content. Further, while the plaintiffs alleged that the same person was depicted in certain videos, they were clearly different people. In other video recordings, it was apparent that some creative licence had been used in adding people to a camera view. This was actually done quite well except for the fact that the video recording was in black and white, but the added person was in colour. The judge ordered that the plaintiffs provide the following information:

Plaintiffs must provide all the metadata of the purported audio and video testimonials for “Salinas” (Ex. 3), Barbara Clark (Ex. 6D), Juliann Smith (Ex.27), Geri Haas (Exs. 6A & 6C), Sarah Davis (Ex. 6B), and an unidentified person (Ex. 21). Please include the file format, date created, date modified, file type, identity of device capturing video, lens used, shutter speed, file type, and the like. Please also identify the operator of the camera that captured the purported video testimonials noted above.
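For readers unfamiliar with what such an order demands, some of the requested fields (file format, dates created and modified, file size) live at the filesystem level and are trivial to retrieve, while others (capture device, lens, shutter speed) live in embedded EXIF or container metadata and require dedicated forensic tools such as ExifTool. The sketch below, which is purely illustrative and not part of the case record (the function name `basic_metadata` and the field names are my own), shows only the filesystem-level portion:

```python
# Illustrative sketch: filesystem-level metadata of a media file.
# This covers only a subset of what the court ordered; capture device,
# lens, and shutter speed reside in embedded EXIF/container metadata
# and require dedicated tools (e.g. ExifTool) to extract.
from datetime import datetime, timezone
from pathlib import Path


def basic_metadata(path: str) -> dict:
    """Return a small dictionary of filesystem-level metadata."""
    p = Path(path)
    st = p.stat()
    return {
        "file_name": p.name,
        "file_format": p.suffix.lstrip(".") or "unknown",
        "size_bytes": st.st_size,
        # st_mtime is the last-modified time; a true "date created" is
        # platform-dependent (st_birthtime on macOS/BSD, not exposed
        # uniformly elsewhere), which is itself a reason forensic
        # examiners treat filesystem timestamps with caution.
        "date_modified": datetime.fromtimestamp(
            st.st_mtime, tz=timezone.utc
        ).isoformat(),
    }
```

A call such as `basic_metadata("clip.mp4")` would return the file's name, format, size, and modification date. The point of the sketch is the gap it exposes: nothing at this level identifies the camera or its settings, so a party unable to produce the embedded capture metadata the court requested has some explaining to do.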

In their attempt to provide the requisite information, the plaintiffs provided information that only served to heighten the judge’s concerns about the tendered images. Having found that the plaintiffs sought to deceive the court, the judge addressed the issue of what sanctions should be imposed. The judge noted:

Plaintiffs submitted at least two exhibits created by GenAI. Further, to an even greater extent than expressed by the Hayes court, the use of deepfakes in a case significantly undermines the Court’s ability to administer justice, significantly erodes the public’s confidence in the judicial system, and significantly burdens under-resourced and overworked courts with the time-consuming task of assessing whether evidence presented to it during pretrial proceedings was a deepfake. As such, a more severe sanction is appropriate.

In this case, the judge was of the view that the appropriate sanction was to dismiss the civil claim with prejudice, thus bringing an end to the litigation, whether or not it had any substantive merit.

Commentary

This case benefitted from a trifecta of important factors. First and foremost, the judge took the time to thoroughly examine the tendered exhibits, visually assessing their authenticity without the benefit of expert assistance. Second, the use of deepfake technology and image alteration in this case was amateurish and readily apparent to an observant person tasked with making decisions based upon the images. Third, the plaintiffs were not credible, and their proffered explanations only served to dig a deeper hole for them. Thus, this case represents attempts to obstruct the course of justice that reside at one end of the spectrum – namely, where there is a high likelihood of detection. The malfeasance in this case was so obvious that expert witnesses were not required to assist the court. However, not every case will be resplendent with such low-hanging fruit. At the other end of the spectrum are cases wherein still images and video recordings have been modified in such a way that the opposing party and the court are highly unlikely to detect the deceptions without expert assistance. And therein lies the problem.

Criminal and civil cases frequently involve the parties submitting image-based evidence for consideration. In most cases, there is no need to question the integrity of the party presenting such evidence or of the evidence itself. But there are cases where that is not so – where suspicion should be the default. Opposing parties and the court do not have the luxury of forensic imagery experts being available for every case. There are not enough resources to provide such services on demand, even if cost were of no concern. So how do counsel and the court contend with the more subtle attempts by litigants to subvert justice by placing a false evidential construct before the court?

Since neither the parties to litigation nor the court can expose all tendered image-based evidence to expert analysis, it is essential that lawyers and judges receive adequate training on the use of image-based evidence, including challenges regarding its authenticity and integrity, and the need to seek expert assistance in some cases. Improving the visual literacy of lawyers and judges is intended to heighten their awareness of potential issues associated with image-based evidence, not to make them experts. Lawyers and judges are not expected to solve the problem themselves, but they must be able to determine when there is a need to retain the services of a forensic imagery expert. Forewarned is forearmed. The goal must always be to seek the truth. When one or more parties is intent on deceiving others, it must fall to opposing counsel and the court to be alert to potential image-based evidential integrity issues.

This is easier said than done. If neither counsel nor the court has any particular reason to question the integrity of image-based evidence that has been tendered in a case, AI-generated and improperly modified images may be included in the evidential record and assessed as legitimate images. This has no doubt happened many times already. When there is reason to be concerned, counsel and the court can take the appropriate steps. This was exemplified in a 2024 Washington state murder prosecution wherein the defence sought to introduce an AI-enhanced video as part of its case. This led to a full exposition of the issue with experts, the submissions of counsel, and a considered judgment of the court in the context of a Frye hearing. This case (State of Washington v. Puloka) was the subject of an earlier article on this site. The Mendones and Puloka cases are good examples of what should be done when concerns have been raised about the integrity and validity of tendered images.
