The Social Contract and AI-Driven Judicial Opinions
At the ABA Tech Show last month, I got into an interesting discussion about judges using AI. I said I'm fine with judges using AI to summarize, conduct legal research, draft orders, and polish their opinions after they've done the thinking themselves. But I draw the line at judges using AI to help with the "white page problem." You know, that moment when you're staring at a blank screen trying to figure out where to start. That's when the real work happens.
The Judicial Social Contract
The legitimacy of judicial authority depends on more than institutional structure or legal correctness. It rests on a deeper moral and civic understanding: that judges will personally engage with the cases before them, applying human judgment, experience, and ethical reasoning to the facts and the law.
This is not a formal rule. It is a social contract. And as Rousseau argued, legitimacy arises not merely from the outcomes of a decision, but from participation in a process that is recognizably human and just. The judiciary's authority is accepted because the public believes that judges are not just following rules, but struggling with them, weighing competing principles, and making meaning of them in context.
This expectation is central to democratic governance. Judicial decision-making is not mechanical. It is interpretive, deliberative, and deeply human. When we are governed by courts, we are governed by people who have looked the law in the eye and taken responsibility for what it demands.
Why the White Page Problem Matters
The "white page problem," that moment of staring at a blank screen before the writing begins, is not a mere productivity hurdle. It is part of the intellectual and moral work of judging. In organizing one's thoughts, drafting an opinion, and committing to a particular reasoning path, the judge is doing more than typing. They are wrestling with complexity, doubt, and consequence.
To outsource this stage to AI is to bypass an essential part of the judicial process. The resulting opinion may be well-written and logically coherent, but it has not passed through the crucible of human deliberation. The judge may have reviewed it, approved it, even edited it—but the act of authorship, and the responsibility that comes with it, have been diluted.
A Question of Authorship
Judicial opinions carry the authority of the state, but their legitimacy depends on being authored by someone who has done the work. If the initial framing, logic, and analytical structure come from an AI, who is the real author?
Even if the judge agrees with the final result, the foundation has been laid by a machine. That presents a serious problem—not just for accountability, but for the trust that the public places in the courts. The social contract assumes that judges are not merely endorsing opinions, but owning them.
But What If the AI Is Better?
A common counterpoint is that AI might actually improve judicial opinions—making them more coherent, less biased, and more consistent. Isn't that a virtue?
Only if we confuse correctness with legitimacy.
A technically perfect opinion is not necessarily a legitimate one. The public expects, and deserves, more than formal accuracy. We expect that someone—a person—considered the arguments, bore the weight of ambiguity, and took moral and intellectual responsibility for the result.
An AI can simulate coherence. It cannot simulate judgment. And when judgment is displaced, so too is legitimacy. No one wants to hear that they can't see their kids this weekend because the judge used Grok.
Enhancing Rather Than Replacing Judgment
To be clear, this is not a call to reject AI from the judicial process entirely. There is a meaningful role for technology in helping judges refine their thinking, check citations, and improve the clarity of their work.
The key distinction is this: a judge who writes the first draft, establishes the analytical framework, and reaches conclusions independently, using AI only afterward to polish the final product or challenge those conclusions, maintains their end of the social contract.
In this context, AI serves as a tool for second-order reflection, not first-order reasoning. The judge remains the originator of the judgment, and the ultimate author of the opinion.
Moving Forward
As courts integrate AI into their workflows, we must draw a bright line between enhancement and outsourcing. The tools we adopt should assist, not replace, the essential human struggle at the heart of judicial work.
Our legal system depends not just on outcomes, but on the legitimacy of the process. And that legitimacy depends, in turn, on the humanity of the decision-maker. If we allow machines to write what only people should decide, we risk trading efficiency for authority, and that is a bargain no democracy can afford.
Subscribe to my Substack newsletter today so you don’t miss out on a post. https://judgeschlegel.substack.com