The 2026 Prediction: From AI to Delegation
In 2026, I predict that agents will begin appearing in chambers. They will arrive the way most operational change arrives in courts: quietly and without much fanfare.
Specifically, I expect many chambers will use at least one agent in a narrow support role: editing and proofreading, checking citations, creating timelines, drafting bench memos, summarizing exhibits and transcripts, or perhaps all of the above.
For the past two years, though, judges have largely discussed generative AI as if it were merely a better search engine or a smarter autocomplete. That framing was already outdated by the time it took hold. As judges grow more comfortable with the technology, it is inevitable that they will begin experimenting with agents, especially in chambers without law clerks.
Thus, courts should engage this issue now, and with discipline. Agents shift the question from what AI can do to what we should allow it to do. They also force the next question: what guardrails must we establish when these tools begin to act autonomously or semi-autonomously?
The primary risk is not adoption. It is drift. The chambers that succeed will be those that refuse the drift, define the lane, and enforce it.
So to test this idea, I built two distinct enterprise-level agents, Danny and Beth, over the weekend.*
Danny is an English major, designed for writing discipline. It identifies awkward phrasing, inconsistent party names, vague referents, redundancy, and sentences that carry too much weight. In judicial work, clarity is not cosmetic. Clarity is a component of legitimacy.
Beth is a seasoned law clerk built with much tighter guardrails. It prepares neutral bench memos and drafts writ dispositions for smaller records after receiving direction from me. Beth is never used for drafting opinions on appeals involving larger records, and it is never a substitute for internal chambers review.
Beth also features a mandatory initial check. If I instruct it to start drafting, it will refuse unless I have reviewed the record and formed a preliminary direction. If I have not, it stops and asks that I do so before proceeding.
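For readers curious what a gate like this looks like in practice, here is a minimal sketch. The names (DraftRequest, judge_reviewed_record, and so on) are hypothetical illustrations of the idea, not Beth's actual instructions or implementation.

```python
# Illustrative sketch of a "mandatory initial check" gate: the agent
# refuses to draft until the judge has reviewed the record and formed
# a preliminary direction. All names here are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftRequest:
    matter: str
    judge_reviewed_record: bool          # has the judge read the record?
    preliminary_direction: Optional[str] # the judge's initial view, if any

def gate_draft_request(req: DraftRequest) -> str:
    """Return a refusal message until both preconditions are met."""
    if not req.judge_reviewed_record:
        return "Refused: please review the record before I begin drafting."
    if not req.preliminary_direction:
        return "Refused: please state a preliminary direction first."
    return f"Proceeding with a draft in {req.matter} per your direction."
```

The point of the sketch is the ordering: the human judgment comes first, and the tool enforces that ordering rather than trusting the user to remember it.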
The most valuable lesson came from watching early versions fail. They were generic, vague, and fell short of the standards expected of chambers work product. Tools do not create structure or discipline. Process creates discipline.
That is why I built a record verification pass into Beth’s workflow. Before it presents a final draft, it must check factual assertions against the record and produce a certification block with record cites. This block lists what it verified and what it could not verify. That is the nonnegotiable line for any agent in any chambers.
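The shape of that certification block can be sketched in a few lines. This is an assumption-laden toy, not Beth's code: the naive substring match stands in for whatever verification the real agent performs, and every name is hypothetical.

```python
# Illustrative sketch of a record-verification pass that produces a
# certification block listing what was verified against the record and
# what was not. The substring check is a deliberately crude stand-in.
def certify(assertions: dict, record: dict) -> str:
    """assertions maps a factual claim to its record cite (e.g. 'R. 14');
    record maps cites to the text found there."""
    verified, unverified = [], []
    for claim, cite in assertions.items():
        page = record.get(cite, "")
        bucket = verified if claim.lower() in page.lower() else unverified
        bucket.append(f"- {claim} ({cite})")
    lines = ["CERTIFICATION", "Verified against the record:"]
    lines += verified or ["- none"]
    lines += ["Could not verify:"]
    lines += unverified or ["- none"]
    return "\n".join(lines)
```

Whatever the internals, the output contract is the audit trail: every assertion lands in one of two lists, and the "could not verify" list is presented rather than hidden.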
If a tool cannot be audited, it does not belong. If it tempts you to delegate judgment, it does not belong.
Agents can assist in the modernization of the courts in 2026, but only if we refuse to use them as shortcuts around the safeguards that make the justice system legitimate. The shiny objects will change. The responsibility will not. The line is ours to draw.
*Beth is built around the “AI in Chambers: A Framework for Judicial AI Use” that I released a few months ago. Also, different GenAI tools respond differently to the same instructions, so experiment first before even considering implementation.
