Nearly One Thousand Cases Later, Is It Time for Mandatory AI Literacy?

Should state bars, in coordination with their state Supreme Courts, require a mandatory one-hour CLE on artificial intelligence fundamentals for every licensed lawyer?

That question may sound aggressive at first glance, but consider where we are.

In 2023, I wrote an open letter to judges across the country arguing that the legal profession already had the rules it needed to address the use of artificial intelligence. The duty of candor to the tribunal, the duty of competence and Rule 11 all predate generative AI. Lawyers have always been required to verify the authorities they cite and to ensure that what they file is grounded in fact and law. I argued then that layering on new AI-specific prohibitions would not solve the problem. What we needed was education within the framework that already governs us.

The intervening years have tested that thesis.

Three years on, the rules remain intact and courts are enforcing them, yet the problem persists. Publicly tracked cases involving AI-generated hallucinations in court filings now approach one thousand worldwide, according to the most comprehensive database tracking the issue. The true number is almost certainly higher.

Last month, in the Eastern District of Louisiana, Judge Brandon Long sanctioned a lawyer after discovering that authorities cited in a filing did not exist. The lawyer acknowledged the mistake and accepted responsibility. Around the same time, a panel of the United States Court of Appeals for the Fifth Circuit confronted hallucinated citations at the appellate level. Writing for the panel, Chief Judge Jennifer Walker Elrod made clear that AI-generated fabrications in court filings are a growing problem and that ignorance of the risks associated with generative AI is no longer a defensible excuse. She also acknowledged that generative AI can be helpful if used properly and carefully.

In fact, the Fifth Circuit considered adopting a generative-AI-specific rule in 2024 and declined, concluding that existing professional obligations are sufficient. That balance is important. The court recognizes the value of the tool, but it insists that responsibility remains with the lawyer who signs the filing.

The enforcement tools are there. Courts are using them. Yet fabricated authority continues to appear in filings.

That persistence tells us something important. This is not a regulatory gap. It is a literacy gap.

The instinct in moments like this is to respond courtroom by courtroom. One judge drafts a standing order requiring disclosure of AI use. Another requires certification language in pleadings. A third contemplates a local rule. Each action is understandable, but collectively they create fragmentation. Lawyers who practice across jurisdictions must track slightly different requirements depending on where they file. We have spent decades trying to reduce unnecessary variation in procedural rules. There is no reason to recreate that pattern here.

If hallucinated authority continues to appear more than three years after generative AI entered mainstream legal practice, then it is time to address the issue upstream.

State bars, working in coordination with their state Supreme Courts, could require a mandatory one-hour CLE on AI fundamentals as part of annual education requirements. Such a course would not be a speculative debate about whether artificial intelligence is good or bad, nor would it be a marketing presentation about efficiency gains. It would be a focused explanation of how large language models generate text, why they predict language probabilistically rather than retrieve verified truth, why hallucinations occur, and how lawyers must verify AI-generated authority before incorporating it into a filing. It would also address confidentiality considerations and the differences between consumer tools and secure enterprise systems so that lawyers understand the risks alongside the benefits.
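The core point such a course would teach, that a language model samples statistically plausible next words rather than retrieving verified facts, can be illustrated with a toy sketch. This is purely illustrative Python, not any real model; the words and probabilities are invented for the example:

```python
import random

# Toy illustration: a language model assigns probabilities to candidate
# next words and samples one. Nothing here consults a database of real cases.
# These continuations of the fragment "Smith" are invented for the example.
next_word_probs = {
    "v.": 0.55,
    "Jones,": 0.30,
    "& Associates,": 0.15,
}

def sample_next_word(probs):
    """Pick the next word weighted by probability, as in model decoding."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Every output is fluent and plausible, but nothing guarantees that the
# citation the model assembles refers to an authority that exists.
print("Smith", sample_next_word(next_word_probs))
```

The sketch compresses the lesson: fluency is a statistical property, not evidence of accuracy, which is why verification of every AI-generated authority remains the lawyer's job.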

We already require continuing legal education to maintain professional competence. Generative AI is now embedded in drafting and research platforms used daily in practice. Expecting responsible use without requiring foundational understanding is unrealistic.

We do not need another rule layered on top of duties that already exist. We need education that allows those duties to be fulfilled in a world where drafting tools can generate persuasive but fabricated authority in seconds.

If sanctions are being imposed and the same problem continues, the appropriate response is not to multiply local directives. It is to raise the baseline of competence across the profession.

One mandatory hour of real AI fundamentals, statewide, would be a modest step with meaningful impact.

Enough is enough.
