The Difference Between Perceived Danger and Actual Danger

Twice in the past two days my car tried to save me from a danger that did not exist. The first time, a pedestrian had already stepped into one of the lanes on a four-lane road. The second time, a person on a scooter was moving at a good pace from the opposite side and had already begun to cross the highway. But in both situations, they had timed their movements and were waiting for me to pass before continuing behind me. Anyone who spends real time on the road can read that. The pace, the angle of the body, the way a person glances at your car, and the small pause at the edge of a lane are all cues that tell us what someone intends to do long before they do it.

My car could not read any of that. It registered bodies in motion and treated both moments as emergencies. The system locked the brakes and threw me forward even though there was no actual danger. There was only the appearance that something might happen. The machine could not tell the difference.

This is the heart of the problem with the idea that AI will replace, or should replace, human judgment anytime soon. Judgment, in this case, was not just a matter of seeing an approaching object or measuring its proximity. It was about knowing whether the risk was real. Judgment is exercised by weighing context, reading intent, and recognizing the difference between a theoretical possibility and a likely event.

That distinction matters even more in court. People do not always communicate cleanly or confidently when something important is at stake. Stress, fear, culture, and personality shape how they speak and move. Someone may hesitate or stumble not because they are lying but because they are trying to tell the truth. Another may appear calm not because they are honest but because they have practiced their story many times over. These are the signals that require interpretation. No system has the lived experience to make those calls like we do.

There is another point that cannot be ignored. My car cannot be questioned about why it braked. It cannot explain what it thought was happening or why its reading differed from mine. It does not have reasons. It has codes and triggers. When judges assess credibility, weigh conflicting accounts, and determine what someone meant, they are not just predicting behavior. They are taking responsibility for a judgment that can be examined, challenged, defended, or reversed on appeal.

Accountability is not a luxury. It is the structure that allows a judge to wield authority. Even if AI someday becomes more accurate at forecasting outcomes, accuracy is not the sole measure of judgment. Judgment requires reasons that human beings can understand and a person who stands behind them when they are wrong.

AI is valuable in its place. It can process information, check rules, connect the record, and surface patterns we might overlook. It can also help us work more efficiently and effectively. But reading intent is different. Understanding what a person means when they move or pause or choose a word is different. Separating what looks dangerous from what is actually dangerous is different.

My car did what it was designed to do. It treated the possibility that the pedestrian and the scooter rider were going to cross in front of me as a certainty. Human beings do not live that way. We read nuance. We adjust in real time based on context and experience. And we accept responsibility when we get it wrong.

Until AI can distinguish perceived danger from actual danger, and accept responsibility for that distinction, we should be very cautious about when and where to implement it.

The work of judging, on the road or in the courtroom, requires a human being.

Remember, AI is just a tool.
