What happens when the patient trusts the chatbot more than the doctor?
Last week I read that ChatGPT had launched a dedicated health experience, ChatGPT Health. I filed it away as one more sign that these tools are moving from novelty to routine, assuming I would circle back later. Then I saw that Utah had approved a pilot allowing an AI system, operating within a regulated sandbox, to handle certain prescription refills without a physician. That is when I started drafting this piece about how AI in medicine may eventually reach the justice system. But right as I was ready to publish, I saw that Claude had launched its own healthcare push too, Claude for Healthcare. In the span of days, three separate signals pointed in the same direction.
This is how normalization happens. First, consumer products build a clearly labeled health lane and invite people to treat a chatbot as a companion for medical questions. Then a state takes a step beyond conversation and runs a controlled experiment where the AI is allowed to act, at least in limited circumstances. Then a competitor moves quickly and the whole category hardens overnight.
If you’ve been around long enough, you remember when “Dr. Google” and WebMD were all we talked about. But those sites mostly handed you links and left you to sort through them. The new tools do something far more persuasive. They don’t just retrieve information. They turn information into a narrative, delivered in a tone that sounds calm, personal, and confident. They make complexity feel manageable, and they do it in seconds. That matters because anything that feels effortless eventually starts to feel like something a responsible person should do. Over time, that feeling turns into expectation.
Now picture the next routine doctor’s visit in this new world. The patient doesn’t walk in with vague concerns and a few web printouts. The patient walks in with a story already formed: not just symptoms, but meaning. What it might be. What it probably isn’t. What tests make sense. And what the next steps should be. The patient didn’t go to medical school, but arrives sounding prepared, and the story is coherent enough that it feels like knowledge rather than speculation.
That is when my mind goes to the justice system, because courts do not operate in a vacuum. Standards of care are not written on stone tablets. They are lived expectations that get argued about after something goes wrong, and the courtroom is where those expectations are tested and sometimes reshaped.
So the thought exercise is not whether these tools are good or bad. The thought exercise is what happens when a tool becomes normal and then starts showing up in the stories people tell about what should have happened.
One possible ripple effect is expectation inflation. If patients come to believe answers are always one prompt away, they may start assuming thoroughness should be quick, cheap, and routine. In a normal visit, that assumption could be harmless, even helpful, because it produces better questions and more engaged patients. But after a bad outcome, that same assumption could become fuel. Hindsight has a way of turning uncertainty into negligence, and a chatbot transcript has a way of making the counterfactual feel simple. The formal standard of care might not change on paper right away, but the public’s sense of what “reasonable care” looks like could shift over time as expectations shift.
At the same time, physicians will increasingly be surrounded by tools like this as well. So when the patient’s AI tool points one way and the physician’s AI tools point the other way, who carries the burden of explanation? When they align and both are wrong, who bears responsibility? When the tool is framed as a health product and endorsed by habit, does the failure to use it, or the failure to engage seriously with a patient who relied on it, start to look like omission?
The same tension could show up in informed consent. We’re used to thinking of consent as a conversation where the physician explains risks and alternatives and the patient asks questions. But what happens when the patient arrives anchored to a narrative already formed, one the patient trusts, and the physician’s job becomes not just explanation, but realignment?
I don’t have answers, and I’m not pretending to. What I have is a pattern worth noticing. Three announcements in a matter of days are not just product updates. They are signals of a race into healthcare, and races have a way of moving faster than governance, faster than professional norms, and faster than the legal system’s ability to understand what changed.
If health chatbots are being normalized for medical decisions, does that normalization increase litigation simply by inflating expectations and sharpening hindsight? Does it begin to tug on the standard of care over time because culture quietly decided that ignoring the “best available tool” must be a breach? And does it add new weight to informed consent, because the conversation is no longer starting from scratch, but starting from a script the patient already believes?
These are the questions I can’t shake, and this feels like the moment to ask them while there is still room to shape how this integrates into medicine and the justice system rather than accepting whatever emerges by default.