The Handoff

Watch a 4x100 relay and you learn something quickly. The race is not always won by the fastest four runners. It is won by the team that knows the right moment to pass the baton.

That exchange is where the race becomes fragile. One runner is arriving at full speed while the next runner is already moving. If the pass comes too soon, the baton never settles into the next hand. If it comes too late, the exchange runs out of space, happens outside the zone, and the team is disqualified. A team can have enough speed to win and still lose everything because the handoff went wrong.

This is how you should think about AI in the practice of law.

I have written before that the white page still matters. The first move should belong to the human being. A judge or lawyer should not begin by asking the machine what the case is about, what the argument should be, or what the answer might be. The human being should encounter the problem first. But the white page only answers who starts the race. It does not answer when the baton should be handed off.

That is where legal AI becomes more complicated. Judges and lawyers will use AI. That part is settled. The harder part is timing. If the tool enters too early, before the human being has done enough of the work to know what should stay in human hands, the tool begins carrying something it should not carry. That is not assistance. That is a mistimed handoff.

A judge may ask for a neutral summary before having a real command of the record. The summary may look organized, balanced, and accurate. But if the judge has not yet developed an independent sense of what matters, the summary can become the frame. It can decide which facts feel central, which arguments seem stronger, and which issues deserve attention. The problem is not that the judge used a tool. The problem is that the handoff came too early.

Lawyers face the same risk. A lawyer may ask AI to organize facts, draft arguments, answer counterarguments, and refine tone. None of that sounds reckless. Much of it may be useful. But if the lawyer has not already formed the theory, understood the pressure points, and identified the weaknesses, the tool is not merely helping execute the strategy. It is helping create it.

This is why “human in the loop” has always felt incomplete to me. A runner can be on the track and still botch the exchange. A lawyer can review the output and still have handed off the wrong task. A judge can edit the result and still have allowed the tool to frame the analysis before the judge had enough command to test it. Being in the loop is not the same as controlling the timing of the handoff.

The law has always allowed support. Judges have law clerks and lawyers have associates. But support has boundaries because responsibility has boundaries. AI does not come with the instinct to know when the baton should be handed off, so the transfer line has to be defined before the race begins.

That is the second lesson from the relay. The teams that win do not figure out the exchange on race day. They practice the handoff until timing becomes discipline. Too many judges and lawyers are learning AI on live matters. They are entering the race before they have practiced the exchange, and the cost of a bad handoff in court is not a missed medal. It is a missed deadline, a distorted argument, a bad filing, a ruling that rests on the wrong frame, or a sanctions order that ends up in the news.

AI can make legal work faster and can even make some legal work better. But speed is not the same as control. In a relay, the fastest team can still lose if the baton leaves one hand before the next runner is ready to carry it.

The white page still matters because the first move should be human. The handoff matters because starting the work is not enough. The goal is not to keep our hands off the tool. The goal is to know when the baton is ready to pass.
