What if Doctors Stop Verifying the Results?

I read an article over the weekend about a woman who went in for a routine check-up and was told that she had had a heart attack and needed to follow up with her cardiologist ASAP. Or, more precisely, the AI that read her EKG said she had had a heart attack. Unfortunately, her primary care physician didn’t make an independent interpretation and passed along the AI’s conclusion as settled fact. Only later, when a cardiologist reviewed the test, did she learn that there had been no heart attack at all.

The failure wasn’t just in the machine. The machine did what machines do: it processed patterns and produced probabilities. The failure was in the absence of human judgment. The doctor relied on the algorithm without reviewing the patient’s chart and presented the diagnosis as fact.

This case reveals a troubling progression from AI as assistant, to AI as colleague, to AI as replacement for clinical judgment. Doctors are overworked and AI promises efficiency and relief. But efficiency cuts both ways. If physicians become conditioned to trust the machine more than their own eyes, errors will slip through. And those errors are not trivial. They are life-altering.

The law will inevitably be drawn into this tension. The standard of care has always asked what a reasonably prudent physician would do under the circumstances. AI does not change this obligation. It makes it more critical than ever.

Medical malpractice panels and courts will soon face these cases. They will have to decide whether reliance on AI without verification is consistent with professional duty or a breach of it. They will need to determine whether the machine is an aid or whether it has become an abdication of responsibility.

When AI is right, it saves time. When it is wrong, as in this woman’s case, it does not just create a medical error. It creates a cascade of consequences, including unnecessary treatments, psychological trauma, insurance complications, and shattered trust.

Malpractice review panels, in particular, will need to evolve. They will no longer be judging only the doctor’s training, experience, and clinical choices. They will also be asked to evaluate how AI was used, whether its output was properly reviewed, and whether its role enhanced or undermined the physician’s independent judgment. In other words, the panel will not just be reviewing medicine. It may also be determining whether medicine, as a profession of human judgment, still exists at all.
