The Spice Must Flow, Part Two: The Hidden Hand Signals
When I first wrote about Dune and AI, my focus was automation bias. The danger was that lawyers and judges would begin treating AI not as assistance, but as validation, much the way powerful men in Dune grew so dependent on Bene Gesserit counsel that they stopped trusting their own judgment without it.
But Dune contains a second warning, and it may be the more important one.
The Bene Gesserit did not merely advise powerful men. They also communicated with one another in front of those men, through subtle signals and a shared understanding the men could not perceive. The lord believed he was receiving advice in real time. What he often did not see was that the advice had already been shaped by a hidden network operating around him.
That is where the analogy becomes more unsettling for the AI era.
Today, a judge or lawyer interacts with an AI system as if it were a single helpful assistant. A question goes in. An answer comes back. The exchange feels direct.
It is not.
Modern AI systems operate through layers the user never sees. Model choices. Fine-tuning. Ranking systems. Safety filters. Retrieval pipelines. Policy constraints. Reinforcement methods. Commercial incentives. Sometimes multiple models are involved. Sometimes prompts are routed and reframed. Sometimes the range of possible answers is narrowed before the user ever sees what the system considers responsive.
The interface looks like one voice speaking. In reality, there may be an unseen architecture shaping what that voice is allowed to say and how it says it.
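The "one voice" illusion can be made concrete with a small sketch. Everything here is hypothetical, invented purely for illustration; `route`, `reframe`, and `policy_filter` are not real APIs, and the routing and filtering rules are stand-ins for the far more complex machinery a production system would use. The point is only structural: the user calls a single function, while several hidden stages decide which model answers, what prompt it actually receives, and what parts of its answer survive.

```python
# Hypothetical sketch of a layered AI pipeline behind a single interface.
# All function names and rules are invented for illustration.

def route(question: str) -> str:
    """Pick a backend model the user never sees (toy routing rule)."""
    return "model-b" if "precedent" in question.lower() else "model-a"

def reframe(question: str) -> str:
    """Rewrite the prompt before any model sees it (toy policy prefix)."""
    return f"Answer conservatively: {question}"

def policy_filter(answer: str) -> str:
    """Narrow the range of answers after generation (toy word swap)."""
    for word in ("guaranteed", "certainly"):
        answer = answer.replace(word, "arguably")
    return answer

def backend_model(model: str, prompt: str) -> str:
    """Stand-in for an actual model call."""
    return f"[{model}] response to: {prompt}"

def ask(question: str) -> str:
    """The only function the user sees: it looks like one direct exchange."""
    model = route(question)          # hidden: model selection
    prompt = reframe(question)       # hidden: prompt rewriting
    raw = backend_model(model, prompt)
    return policy_filter(raw)        # hidden: output filtering
```

A user who calls `ask("What precedent controls here?")` sees a single answer, with no indication that the question was routed, reframed, and filtered along the way. Each hidden stage is a place where the range of possible answers was narrowed before the user ever saw what the system considered responsive.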
That should matter a great deal in law.
In the legal system, we care about more than the answer. We care about process. We care about who made the decision, what inputs shaped it, what assumptions were built in, and whether those assumptions can be tested or challenged. If judges and lawyers begin relying on systems whose internal shaping is invisible, then part of the real judgment process may be taking place somewhere outside the courtroom and chambers.
That is the deeper problem. Not just automation bias. Invisible influence.
The human decision-maker may not realize how much of the reasoning has already been structured before he even begins to engage with the output. The machine is not merely helping him think. It may already be narrowing what appears reasonable, relevant, persuasive, or true.
The formal decision-maker still holds the title and still appears to be in charge. But influence is already being coordinated in ways he cannot fully see.
Used well, AI can expand capacity. It can accelerate research, organize information, improve drafting, and reduce pointless friction. But in law, capacity is not enough. Legitimacy depends on judgment. And judgment depends on understanding who or what is shaping the path to the answer.
The first danger is that we defer too easily to automated advice.
The second is that we never see the conversation behind it.
A legal system built on accountable reasoning cannot afford invisible architecture between the question and the answer.
And right now, it seems that is exactly what is being built.