Posted in Forensic Video Analysis

AI Enhanced Video Ruled Inadmissible in US Court

Posted on April 17, 2024

It was only a matter of time before counsel would tender AI enhanced video evidence before a court in support of their trial position. This occurred in February 2024 in a triple homicide prosecution in Washington state, when defence counsel called an expert witness to present such visual evidence. The prosecution challenged the evidence through its own expert witness, who articulated the myriad problems with AI enhanced imagery. On March 29, 2024, the Court delivered its ruling in this significant Frye hearing. This article discusses the case and, more generally, the use of AI enhanced imagery for fact finding purposes.

The Case

State of Washington v. Puloka (Superior Court of Washington for King County, March 29, 2024) is a currently unreported decision. The defendant was alleged to have opened fire outside a Seattle-area bar, killing three people and injuring two others, and defence counsel sought to introduce AI enhanced cell phone video through an expert witness. The expert used at least one AI tool to enhance seven videos that were tendered before the Court in the Frye hearing. The defence intended to admit at least one of these videos, an AI enhanced version of iPhone video recorded by a witness to the event. The original recording was streamed to Snapchat and remained available in its original format; thus, the Court had the benefit of viewing both the original video and the AI enhanced version.

The defence expert was not a forensic expert, nor did he claim to be one. He identified himself as a videographer and filmmaker with over thirty years of experience and acknowledged that he had no forensic training. Commenting on the quality of the video recordings he was asked to enhance, he noted that the original video was of low resolution and contained substantial motion blur. He stated that he used Topaz Labs AI software to generate a clearer version of the video, which was further processed in Adobe Premiere Pro. The expert stated that the Topaz Labs tool uses AI to ‘intelligently scale up the video’ for the purpose of increasing resolution. He stated that the tool adds sharpness, definition, and smoother edges to objects in the original video, in contrast to the ‘blocky’ edges of the original.

The expert could not say whether Topaz Labs AI was used by the forensic community. He said that peer usage was ‘corporate’ (whatever that means). He was unaware of any testing, publications, or discussion groups within his video production peer group that evaluated the reliability of AI tools for video enhancement purposes. He was unaware of what videos the AI enhancement models were trained on or whether they used generative AI. He agreed that any such algorithms were opaque and proprietary. In short, the defence expert did not know very much about the tool that he chose to use for forensic purposes. Defence counsel argued that the use of AI was not based on novel science and, further, that the relevant scientific community was not forensic video analysts but rather the ‘video production community’.

Unlike the defence expert, the prosecution expert was a very experienced forensic video analyst. He stated that his focus as an analyst is on image integrity, rather than creating a smoother, more attractive product for the viewer. He noted that the AI tool used by the defence added approximately sixteen times the number of original pixels to the images to generate the enhanced version. He testified that the AI tool was unknown to the forensic video community and had not been reviewed by any forensic video expert. He demonstrated that the AI tool generated false image details and that such enhancement was not acceptable in the forensic community because it did not merely enhance the video; it changed the meaning of portions of it. Specifically, it eliminated motion blur and smoothed edges, with the result that objects in the original video failed to maintain their original shape and colour.
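To put the pixel figure in perspective, the following is a minimal sketch of the arithmetic involved. The frame dimensions and scale factor are hypothetical, chosen only for illustration, since the actual resolutions are not stated in the ruling as summarized here.

```python
# Hypothetical illustration of how an AI upscaler multiplies pixel counts.
# A fourfold enlargement in each dimension yields sixteen times the pixels,
# and every one of the added pixels is synthesized rather than recorded.
orig_w, orig_h = 480, 360                       # hypothetical original frame size
scale = 4                                       # assumed 4x enlargement per dimension
new_w, new_h = orig_w * scale, orig_h * scale   # 1920 x 1440

original_pixels = orig_w * orig_h               # 172,800
upscaled_pixels = new_w * new_h                 # 2,764,800
print(upscaled_pixels / original_pixels)        # 16.0
```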

He further noted that the AI tool removed visual information from the original video and added new information. It removed artifacts, altered shapes, and compromised an expert's ability to forensically determine which frames in the compressed video were reference (intra-coded), predictive, or bi-directional frames. In sum, the AI tool made proper forensic image analysis impossible. He described the AI process as opaque and contrasted it with approved and transparent image enlargement techniques such as nearest neighbour, bi-cubic, and bi-linear interpolation, which are reproducible across many video processing programs. Reference was made to SWGDE guidance on image enlargement, which cautions against using AI for forensic image processing. The prosecution expert stated that the relevant scientific community was the forensic video analysis community.
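The distinction the expert drew can be illustrated with a short sketch. The following minimal example in Python uses the Pillow library and a hypothetical frame file name; it shows the kind of transparent, reproducible enlargement he described, in contrast to AI upscaling, whose output depends on an undisclosed trained model.

```python
# A minimal sketch (Pillow 9.1+), assuming a hypothetical low-resolution frame
# saved as "frame.png". Each method below computes new pixels from the originals
# by a fixed, published rule, so any examiner can reproduce the result exactly.
from PIL import Image

frame = Image.open("frame.png")                  # hypothetical original frame
target = (frame.width * 4, frame.height * 4)     # 4x enlargement per dimension

nearest  = frame.resize(target, Image.Resampling.NEAREST)   # copies the nearest original pixel
bilinear = frame.resize(target, Image.Resampling.BILINEAR)  # weighted average of neighbouring pixels
bicubic  = frame.resize(target, Image.Resampling.BICUBIC)   # cubic fit over a 4x4 neighbourhood

nearest.save("frame_nearest.png")
bilinear.save("frame_bilinear.png")
bicubic.save("frame_bicubic.png")

# An AI upscaler, by contrast, fills the new pixels with values predicted by a
# trained model whose weights and training data are not disclosed, so the output
# cannot be reproduced or audited in the same way.
```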

The Ruling

The Court ruled that the use of AI tools to enhance video recordings in a criminal case is a novel technique and therefore the proper subject of a Frye hearing. To gain admission under Frye, it must be shown that the proposed technique or methodology has achieved general acceptance in the relevant scientific community, which in this case must be the forensic video analysis community. The Court noted that Topaz Labs AI has not been peer reviewed by that community, nor is it at present reproducible or generally accepted. The defence was not able to produce any decision from any US court approving the use of AI enhanced video in a criminal or civil trial, nor did it offer any articles, publications, or secondary legal sources approving of such methodology. The Court found that the AI enhanced video did not accurately show what happened and that it used opaque methodology to generate what it ‘thinks’ should be shown. Thus, the probative value of such evidence was substantially outweighed by the danger of unfair prejudice. The best evidence was the original video recording, and the AI enhanced version was therefore ruled inadmissible.

Commentary

In my recent book, Image-Based Evidence in International Criminal Prosecutions: Charting a Path Forward (Oxford University Press, 2024), one of the themes I develop is that there is a general lack of visual literacy among counsel and the courts. I do not make this comment to be dismissive or disrespectful but rather to make the point that lawyers and judges are not imbued with any specialized knowledge of image-based evidence simply by virtue of their professions. Typically, any in-depth knowledge of images, image processing, and forensic methodology stems from case-specific education, which may be either deficient or missing altogether. This case is an example of that problem.

Some information about the use of AI in video processing may be instructive. AI refers to ‘computational models of human behavior and thought processes that are designed to operate rationally and intelligently’, or, in short, computer programs capable of autonomous self-improvement.(i) Machine learning is a type of AI that allows computers to learn directly from examples, data, and their own processing experience, ultimately carrying out complex tasks by learning from data rather than by following pre-programmed rules.(ii) Machine learning and AI may make changes to media that are not transparent to viewers.(iii) There is sound reason to be concerned about image alteration that occurs innocently or inadvertently, as well as alteration that is done intentionally, because in both cases the final product is an altered image that has the effect of misleading viewers.

The primary problem in this case is that defence counsel employed the services of an expert whose vocation is to produce visually pleasing products with no concern for forensic reliability and accuracy, and who used a wholly unsuitable tool to create evidence for use at trial. At the very least, defence counsel fundamentally misunderstood the nature of the evidence that had been commissioned or, worse, was aware of the problem and hoped that neither the prosecution nor the Court would challenge it. While defence lawyers are certainly expected to take advantage of opportunities to expose latent or patent reasonable doubt in a case, retaining an expert who uses black box, non-forensic technology to hallucinate evidence is a step (or two) too far.

Experts who provide fact finding services in a courtroom require a forensic mindset that has been acquired through robust training and objectively suitable qualifications. AI enhanced video is not a forensic enhancement of the original media. Rather, using machine learning, AI hallucinates (creates) what it thinks the video should look like. One can hardly criticize a production video expert for trying to make a video recording appear more pleasing and helpful to their client. That is what we would expect to see in commercially viable videos and those delightful social media videos of kittens playing. However, such opaque, untested, and forensically unsound methodology has no role in a courtroom. Just because it can be done doesn’t mean it should be done. Fortunately, in this case, the prosecution was alert to this issue and had a competent forensic expert to educate the Court as to the folly and danger in what was being tendered as putative evidence. Lastly, the Court wisely exercised its gatekeeper role to exclude this highly problematic evidence.

This case is novel, and it raises an important issue in the application of video enhancement technology in the context of a criminal trial. While this may be one of the first known cases in which counsel sought to hallucinate their version of the truth for the court, it is unlikely to be the last. In the event of a conviction, the defence will likely file an appeal, and if so, it will be interesting to see whether this Frye ruling forms part of the grounds of appeal. An appellate decision on the point, while not binding outside of Washington state, could nonetheless prove instructive. In other articles on my website, I have written about the dangers of using experts whose focus is production value rather than forensic reliability. Since the goal of a criminal trial should be the ascertainment of the truth, choosing the wrong kind of expert frequently misses the mark.

Endnotes

i. Marie-Helen Maras and Alex Alexandrou, ‘Determining Authenticity of Video Evidence in the Age of Artificial Intelligence and in the Wake of Deepfake Videos’, The International Journal of Evidence and Proof 23, no. 3 (2019): 255-62, at 256; John Fletcher, ‘Deepfakes, Artificial Intelligence, and Some Kind of Dystopia: The New Faces of Online Post-Fact Performance’, Theatre Journal 70, no. 4 (2018): 455-71.
ii. The Royal Society, ‘Machine Learning: The Power and Promise of Computers that Learn by Example’ (2017) https://royalsociety.org/-/media/policy/projects/machine-learning/publications/machine-learning-report.pdf.
iii. Jeff Ward, ‘10 Things Judges Should Know About AI’, Judicature 103, no. 1 (2019): 12-18.