Even the Robots Are Prejudiced?
There was a time when some believed artificial intelligence would save us from ourselves. The promise was simple. AI would remove human emotion, eliminate prejudice, and deliver decisions based solely on logic and data. The machine would be objective, and fairness would follow.
But a recent study in the Proceedings of the National Academy of Sciences tested several large language models, including GPT-3.5 and GPT-4, with a simple task: choose between content written by a human and content generated by AI. The result? The AI systems consistently favored “their” own. These models did not just exhibit the well-known tendency to hallucinate citations. They also demonstrated a new kind of bias: a preference for machine-generated content over human-written work.
Researchers have labeled this phenomenon “AI-AI bias.” But let us be clear about what that means. These systems favor machine-generated work while dismissing the human perspective.
And yet we are now seriously considering using these tools in some of the most sensitive parts of the justice system. We are exploring their use in evaluating evidence, generating sentencing recommendations, summarizing briefs, and even drafting legal opinions. The belief is that these tools will help eliminate human prejudice. But we have not stopped to ask what happens if the machines bring biases of their own and begin favoring machine over man.
This is not a hypothetical concern. Imagine a scenario in which opposing counsel submits an AI-generated brief. The judge’s AI assistant reviews the filing and flags it as persuasive. In response, a human attorney, working without AI, submits a thoughtful, well-reasoned brief. But the system gives that filing less weight simply because it was not written by a machine.
That is not fairness. And if we are not careful, we will automate injustice under the illusion of impartiality.
I have said before that judges are not editors of machine output. We are constitutional actors. Our role is not only to apply the law but to preserve the integrity of the process. That includes ensuring that the very tools we use do not quietly undermine that process. So if we are going to integrate these tools into the justice system, which I believe we should, then we must demand that they be built to reflect our values, not replace them. These findings about AI bias are not reasons to abandon AI in law; they are urgent calls to get it right.
The rule of law belongs to us, not to the robots and their AI friends. It is our responsibility to make sure these tools serve justice, not silently redefine it.
Subscribe to my Substack newsletter today so you don’t miss out on a post. https://judgeschlegel.substack.com