Who's Piloting the Ship?

GPT-5 was released last week, and after reading about the launch and experimenting with it over the weekend, I have a few thoughts. They are very different from most of what I have seen written, though, because I am no technologist, and I am not qualified to say whether it is better than GPT-4o. But I am an expert in the justice system, and that lens gives me a different focus.

According to OpenAI's own description, GPT-5 uses a unified system with a real-time router that decides which internal model handles a given request, choosing between slower reasoning models for complex problems and faster models for simple queries. This routing logic is continuously trained on user interactions, shaping every response the system generates. While OpenAI has explained the general concept, we still know very little about how these decisions are actually made.
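For readers who want to picture what a "router" is, here is a deliberately simplified sketch in Python. Every detail in it, the model names, the complexity heuristic, the threshold, is an assumption invented for illustration; OpenAI has not published its actual routing logic, which is exactly the problem.

```python
# A toy illustration only. The model names, the complexity heuristic,
# and the 0.5 threshold are assumptions made up for this sketch;
# OpenAI has not disclosed how its real router works.

def estimate_complexity(prompt: str) -> float:
    """Crude stand-in heuristic: longer, question-heavy prompts score higher."""
    words = len(prompt.split())
    questions = prompt.count("?")
    return min(1.0, words / 200 + 0.2 * questions)

def route(prompt: str) -> str:
    """Pick a hypothetical backend model based on estimated complexity."""
    if estimate_complexity(prompt) > 0.5:
        return "slow-reasoning-model"  # hypothetical name
    return "fast-general-model"        # hypothetical name

print(route("What is 2 + 2?"))  # -> fast-general-model
print(route("Analyze the due-process implications of automated routing. " * 30))
# -> slow-reasoning-model
```

Even in this toy version, a single hard-coded threshold decides which "brain" answers you. In the real system, the equivalent choices are continuously retrained on user interactions, out of public view.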

Those routing rules were not written by a legislature. And they were not debated in a public hearing or reviewed by a court. They were written by a small group of engineers, some of whom are very young and have spent their entire careers in startup culture. They may be brilliant coders, but they are still people with their own experiences, blind spots, and beliefs. And they are the ones steering how this enormously powerful system "thinks."

I have also read accounts from developers and users who built their workflows on top of older models. Some say their systems stopped working as expected when GPT-5 launched. According to these reports, older models were no longer available in the consumer interface, and existing chats were migrated to GPT-5. For those relying on specific model behaviors, this appeared to disrupt carefully built structures. These accounts also raise concerns about how much control the company has over the direction of the system, and how few alternatives remain if you prefer, or depend on, a different version.

Consider what all of this means in practice. When GPT-5 answers health questions, unseen rules may determine what gets emphasized or left out. When it writes code for critical systems, hidden parameters may shape its approach to security and reliability. When it weighs in on legal issues or drafts contracts, invisible instructions may shape its interpretation of fairness, duties, and rights. OpenAI has also described introducing "safe-completions" to constrain certain outputs. But who decides what is safe? What values guide those decisions? What vision of justice is being coded into these systems?

Someone is always piloting the ship. The question is who, and in what direction. And unlike the justice system, there is no appeals court, no independent review, and no parallel authority to take over if the pilot veers off course.

In the justice system, we already have an answer. Judges are elected or appointed under a legal framework that demands transparency, reasoning that can be read and challenged, and accountability to the public. You may not like every decision, but you know who made it and why. In AI, the "why" can be buried in routing algorithms and safety parameters no one outside the company ever sees.

So when people say AI will deliver "better justice" or be free of bias, remember that these systems are not neutral and they are definitely not omniscient. They are making decisions and being steered in a direction chosen by someone. Whether you agree with that course depends entirely on your own viewpoint and who is actually at the wheel.

As these systems become more powerful (with OpenAI describing GPT-5 as putting "Ph.D. level experts in your pocket") and as government officials begin to rely on them for policy analysis, drafting regulations, and even judicial research and writing, we need to ask harder questions. Should AI companies be required to disclose how their systems prioritize information when used by government? Should safety parameters be subject to review when they influence public policy? Who should have input into these design decisions that increasingly shape not only our access to information, but the machinery of government itself?

The marvel of GPT-5's capabilities should not distract us from these governance questions. The more seamlessly these systems integrate into our lives, the more critical it becomes to understand who is making the decisions that guide them, and to ensure those decisions reflect more than just the perspectives of a small group of technologists.

Subscribe to my Substack newsletter today so you don’t miss out on a post. https://judgeschlegel.substack.com
