What happens when AI deepfakes fool a judge?

We keep hearing about AI “hallucinations.” But the Mendones case out of California may be one of the first major reported cases where a court had to deal directly with deepfake evidence. On September 9, 2025, the Superior Court of California, Alameda County, dismissed Mendones v. Cushman & Wakefield, Inc., No. 23CV028772, with prejudice after finding that the self-represented plaintiffs submitted deepfake videos and altered images in support of their motion for summary judgment.

For a while now, I have warned that this day would come. Generative AI has given us remarkable tools, but it has also created technologies that can mimic human voices, generate images, and even create fake videos.

The Mendones case makes it real. The judge reviewed video exhibits that were supposed to capture witness testimony. Instead, they appeared “robotic” and lacked natural expressiveness. Mouth movements did not sync with the words being spoken, and the metadata did not match the story.

Confronted with deepfakes, doctored photographs, and metadata that raised red flags, the judge decided to dismiss the case entirely. Interestingly, the court stated that it did not have “the time, funding, or technical expertise” to determine the authenticity of all of the exhibits offered by plaintiffs, even though it suspected more of the evidence had been altered.

And that makes sense. As you know, judges are already managing heavy caseloads, so if every disputed voicemail, video, or screenshot required a forensic investigation, the system would collapse under its own weight. Strong sanctions like dismissal send a powerful message, but they cannot be the only answer. We also need proactive safeguards and clear rules.

Earlier this year, Louisiana passed Act 250, requiring attorneys to verify the authenticity of digital evidence before offering it in court. It creates procedures for raising authenticity concerns and mandates disclosure when parties know evidence has been altered or AI-generated. Importantly, it pushes these disputes to the pretrial phase, before a jury ever sees them.

Meanwhile, the Federal Advisory Committee on Evidence Rules is weighing two different paths. One is a proposed new Rule 707, which would require AI-generated evidence to meet the same reliability standards as expert testimony. The other path involves proposed amendments to Rule 901, including a possible new subsection 901(c) that would specifically address evidence suspected of being AI-fabricated.

But it is important to recognize that Mendones involved self-represented litigants. Rules like Act 250 or a proposed Rule 707 might not have stopped what happened here. Even so, they are a start. And in this case, the court caught the problem pretrial anyway, before the evidence ever reached a jury.

Sam Altman said not long ago that “AI kinda sucks today.” He is right. A lot of what we see is still a bit off or easy to spot. But six months ago, we could usually tell if a photo or voice clone was fake. Today, we often cannot.

What happens six months from now?

That is the problem. It is not about what GenAI can do today; it is about the pace of change. If courts can barely keep up now, how will they fare when the fakes become indistinguishable from reality?

The Mendones case is a warning shot. It shows the cost of letting AI forgeries seep into the system. The deepfakes in that case were crude enough that the judge could spot them, but the technology has already advanced to the point where many of us would struggle to tell the difference. Thankfully, Louisiana and the federal courts are beginning to sketch a better path, but we are racing the clock. Because once trust is broken, no amount of technology can put it back together again.

Deepfakes in the courtroom are no longer hypothetical. They are here. And the clock is ticking. The crude fakes of today will soon look primitive, yet they are already capable of wasting judicial resources and undermining trust.

The technology will keep moving. The question is whether the law will move with it. The time to act is not just now, it is yesterday.
