The Pragmatic Court, Part 1

Courts are increasingly surrounded by AI products, whether we asked for them or not. They are showing up in the tools we already use, inside the platforms our staff open every morning, and inside the contracts that get renewed every year. That is the reality. So the question is not whether courts will encounter AI. The question is whether we will use it with intention, whether we will pick the right entry points, and whether we will put sensible boundaries in place before bad habits harden into routine.

A lot of the public debate still frames this as if courts must either build an AI system or refuse AI altogether. That is not how modernization actually happens. Most courts are going to use what arrives through normal channels, meaning the products and features bundled into enterprise software, cloud platforms, document systems, and case management tools. We are going to learn them the way we learned every other major shift, by using them, by making mistakes early in low-stakes settings, by writing down what works, and by setting limits where the risks are higher.

That posture matters, because the easiest way to get courts stuck is to make this sound like a high-ceremony decision. The moment AI is framed as a moonshot project, the conversation becomes procurement, fear, and paralysis. The more responsible approach is simpler. Courts should treat AI as a new class of productivity tool, then decide where it belongs in court work and where it does not.

This is also why I am less interested in arguments about whether AI can think like a judge and more interested in whether courts can use these products in ways that increase efficiency without creating new risk. Most of what slows courts down is not judicial reasoning. It is the operational reality of a high-volume system. Filings that are missing required components. Deadlines that get miscalculated. Records that arrive incomplete. Briefs that do not track the rules. Small problems that become delay, and delay that becomes cost, confusion, and uneven treatment.

If AI is going to be useful in courts, it will be useful first as a support tool that helps courts manage that operational load. And courts do not need a sweeping strategy document to start. Courts need competence, shared norms, and a practical understanding of what these products do well and what they do poorly. The fastest way to build that competence is to start where the risk is low, the review is easy, and the value is immediate.

That is what I mean by low-barrier entry points. Courts should begin with uses where the work is repeatable, the output is easy to review, and a mistake does not quietly alter a litigant’s rights. This is how courts learned e-filing, electronic notices, remote hearings, and every other major technological change. Not by writing philosophical treatises, but by choosing sensible starting points and learning how to use the tool in real time.

Once you start thinking this way, the conversation becomes practical. Which tasks in your court are repetitive, time-consuming, and easy to verify? Where are you doing work that is essentially administrative but still high friction? Where are staff and judges rewriting the same communications over and over? Where is the work routine but still important, and still consuming real judicial time?

Those are the places where AI products can help quickly, and that is also where courts can build the discipline that will matter later. Because the danger is not that courts will use AI. The danger is that courts will use it carelessly, without boundaries, and then defend the practice after the fact when something goes wrong.

In the next post, I am going to lay out what low-barrier entry points look like in courts. I will stay focused on real workflows rather than hypotheticals, and I will also describe the guardrails that make these uses legitimate. The point is not to make courts feel high tech. The point is to save time where the work is repeatable, keep judgment where it belongs, and make sure every use stays inside a clear lane you can defend.
