There Was Only One Cat Lawyer, So Why Are There So Many Avianca Cases?

We can all close our eyes and see the image of an attorney trapped on Zoom with a filter he could not turn off, declaring to Judge Roy Ferguson, "I'm not a cat." This embarrassing and hysterical moment quickly went viral and is still discussed to this day. Thankfully, appearing as a cat caused no real harm to the attorney or his client. Yet that single tech mishap triggered a wave of training seminars and CLE courses on virtual courtroom appearances. The legal profession learned its lesson fast.

Two years later came the first true generative‑AI fiasco, and it should have ended the experiment then and there. In Mata v. Avianca, a New York lawyer relied on ChatGPT to locate supporting precedent, and the tool produced citations to cases that never existed. A federal judge sanctioned the attorney, stressing that “[a]n attempt to persuade a court or oppose an adversary by relying on fake opinions is an abuse of the adversary system.” This incident, too, went viral.

Yet the warnings went unheeded. Today, briefs containing phantom authorities still arrive in dockets nationwide, even from the companies that build these tools. Just last month, in the music‑publishers' copyright suit against Anthropic, counsel from a major firm filed an expert report that included hallucinations. Opposing counsel discovered that Claude, Anthropic's own chatbot, had fabricated key citation details when used to format an academic reference. A federal magistrate labeled the lapse "a very serious and grave issue."

If an AI developer's elite lawyers, armed with every incentive to get it right, cannot keep hallucinations out of court filings, what hope do overworked solo practitioners have? Generative models remain sophisticated guessers, not research assistants. A fabricated citation looks real, sounds authoritative, and slips past hurried reviewers. So be careful when using these tools, especially if you have never used them before. I used to think court fines and newspaper headlines would stop the madness, but unfortunately they have done little to fix the problem, which only seems to be growing.

The solution is not to ban the technology but to demand competence. Perhaps bars should require every lawyer who touches a generative model to complete mandatory CLE training on how these systems work, why they hallucinate, and how to verify outputs before using them professionally. And annual refreshers should follow. The pace of change leaves no room for one‑and‑done education.

The kitten on Zoom was funny because everyone instantly recognized something was wrong. AI hallucinations are dangerous precisely because they look right.

If you cannot verify it, do not file it.

Subscribe to my Substack newsletter today so you don’t miss out on a post. https://judgeschlegel.substack.com
