Embracing AI in the Justice System: Lessons from the Road with Full Self-Driving Technology

Recently, I've experienced the technological marvel of a Full Self-Driving (FSD) system. And after only two weeks of commuting with this AI at the wheel, I've gained a few more insights about technology's promise and limitations—insights that directly apply to our ongoing efforts to modernize the justice system through artificial intelligence.

A Push of a Button: The Wonders of FSD

FSD is amazing technology. With a single push of a button, my car takes over, driving me to work without any input from me. It has navigated city streets and highways nearly flawlessly, even in pouring rain. The first time I watched it handle a downpour without missing a beat, I was genuinely astonished. This is the kind of efficiency and capability that makes you sit up and take notice—a machine doing something complex, reliably, and autonomously.

But as impressive as FSD is, it's not perfect. Over the past few weeks, I've noticed quirks that remind me technology has limits. Sometimes, it drives in a lane I don't like—not because it's unsafe, but because I have my own preferences from years of driving this route. Other times, it changes lanes when I wouldn't, making moves that feel unnecessary or abrupt. These moments aren't deal-breakers, but they highlight a gap between the AI's logic and my human judgment.

Weathering the Storm: Consistency vs. Context

One observation really stood out to me: FSD behaves similarly whether it's clear skies or raining. That consistency can be a strength, but it's also a flaw. For example, when approaching an exit, the car waited far too long to change lanes. In dry conditions, that's fine—no harm done. But in the rain, it felt risky; a human driver would likely have gotten over earlier to account for slick roads and reduced visibility. The AI didn't adjust its approach based on the weather, revealing a lack of contextual awareness that could spell trouble in more critical situations.

This mirrors a challenge we face with AI in the justice system. AI can process cases or data with remarkable consistency, treating every situation with the same logic. That's great for routine tasks like sorting filings or identifying precedents. But justice isn't always about consistency—it's about context. Every case has unique nuances, and applying a one-size-fits-all approach risks missing the bigger picture, just like FSD missing the need to adapt in the rain.

Anticipation: The Human Edge

Another moment on the road with FSD got me thinking. One day, the car got behind a bus, but I knew a stop was coming up because I've driven past that bus stop countless times. If I were behind the wheel, I'd have changed lanes before the bus stopped and made my turn from there. FSD, however, didn't anticipate this—it followed the bus without hesitation and got stuck behind it for a minute because it lacked the foresight I've gained from years of experience. It wasn't a disaster, but it showed how AI can miss what a human instinctively understands.

In the courtroom, this kind of anticipation is invaluable. Judges and lawyers draw on years of experience to read between the lines—sensing when a witness is credible, a defendant is remorseful, or a case is about to take an unexpected turn. AI can analyze data and spot patterns, but it can't replicate the human ability to anticipate based on intuition and lived experience. That's a gap we can't ignore as we integrate AI into legal decision-making.

Fatigue and Oversight: A Cautionary Tale

Perhaps the most thought-provoking lesson came from my own reaction to FSD after a few drives. At first, I'd correct it when it chose a lane I didn't prefer—nudging it back to my comfort zone. But after a while, I got tired of fighting it. It wasn't worth the effort for a commute, so I let it drive in the lane it wanted. That worked fine for getting to work, but it got me thinking: what if that attitude crept into the justice system?

Imagine judges or court staff growing weary of overseeing AI decisions—double-checking outputs, tweaking results—and eventually just letting the system run on its own. In a car, that might mean ending up in a less-than-ideal lane. In a courtroom, it could mean flawed rulings, overlooked details, or diminished accountability. The stakes are infinitely higher in justice, where lives and liberties hang in the balance. My experience with FSD underscored the danger of over-reliance—a risk we must guard against as AI takes on bigger roles in our courts.

AI in the Justice System: Promise and Perils

These observations from the road offer a powerful analogy for modernizing the justice system with AI. The promise is clear: AI can handle repetitive, data-intensive tasks with speed and precision, much like FSD navigates a commute. In my own court, I've seen technology cut through backlogs, streamline case management, and make justice more accessible. These are wins worth celebrating.

But the perils are just as real. AI might make decisions that don't align with a judge's seasoned judgment, like picking a lane I wouldn't choose. It might struggle to adapt to the unique "weather" of each case, applying rigid logic where flexibility is needed. And without the ability to anticipate—like spotting a bus stop ahead—it could miss critical context that shapes fair outcomes. Worst of all, if we let oversight slip due to fatigue or complacency, we risk ceding too much control to a system that isn't ready to fully take the wheel.

Addressing the Optimists' View

Some tech optimists might argue that these limitations are merely temporary growing pains. They suggest that future AI will develop contextual awareness, anticipatory capabilities, and even forms of intuition that rival human judgment. While technological advancement will certainly continue, we must acknowledge fundamental differences between algorithmic decision-making and human reasoning. Even the most sophisticated AI lacks lived experience, moral intuition, and the ability to truly understand human suffering and redemption—elements central to justice. Rather than waiting for AI to potentially overcome these limitations, we should design systems that leverage AI's strengths while preserving irreplaceable human judgment. Maybe that’s why it’s really known as Supervised FSD.

Striking the Right Balance

So, where does this leave us? I'm as pro-technology as ever—AI has the power to revolutionize our courts for the better. But my time with FSD has reinforced a core belief: AI should assist, not replace, human judgment. It's a tool, not a substitute. We can harness it for efficiency—processing filings, analyzing data, reducing delays—while reserving the nuanced, high-stakes decisions for human judges who bring experience, empathy, and foresight to the table.

To make this work, we need robust oversight. Judges and court staff must stay engaged, not just monitoring AI but understanding its limits.

Conclusion: Navigating the Future Together

FSD has shown me the thrill of cutting-edge technology—and the importance of keeping a hand near the wheel. As we integrate AI into the justice system, we're on a similar journey. The road ahead is full of potential: a smarter, faster, more accessible system that serves everyone better. But it's a road we must travel with care, balancing innovation with the human touch that defines justice.

Finally, legal education must evolve to include technology literacy, ensuring the next generation of legal professionals can effectively collaborate with and appropriately challenge these systems. The road to technological justice requires not just innovation but intentional, inclusive governance that keeps humanity at its center.

I'll keep using FSD, marveling at its capabilities while staying ready to intervene. In the same way, I'll keep pushing for AI in our courts—excited by its possibilities, but committed to ensuring it enhances, rather than eclipses, the judgment that keeps our system fair and true. Technology can drive us forward, but it's up to us to steer it right.

If you are a judge or law clerk curious about using GenAI, here are some general guidelines designed to ensure that AI use in the courts strengthens public confidence while advancing the fair administration of justice.

Subscribe to my Substack newsletter today so you don’t miss out on a post. https://judgeschlegel.substack.com
