Medical Records Meet AI: A Looming Challenge for Courts

The Associated Press sounded an alarm in a recent report that should concern every legal professional: healthcare providers using AI to transcribe doctor-patient interactions are discovering that some of these tools don't just transcribe; they hallucinate, fabricating details that never occurred. Even more troubling, in the name of privacy protection, the original audio recordings are deleted after transcription, leaving no way to verify or correct these AI-generated fictions.

This isn't a hypothetical concern. Just last month, my wife experienced this firsthand during her annual check-up. Her doctor asked permission to use AI to transcribe their conversation, assuring her the audio would be deleted afterward for privacy. When she later reviewed her medical records, however, she discovered that the AI had included details exaggerated beyond what she had actually discussed with her doctor during the visit. The tool had apparently extrapolated and drawn conclusions from a flawed understanding of the conversation, introducing inaccuracies into the medical record. My wife, being conscientious, caught the errors and intends to speak with her doctor to correct them. But how many patients actually review their medical records in detail? And of those who do spot errors, how many will take the time and effort to have them corrected? The sad reality is that most of these AI-generated inaccuracies will likely go unnoticed and unchallenged until something goes wrong: when a future medical decision is based on fabricated information, or when these records become critical evidence in a legal proceeding.

The implications for our justice system are staggering. Medical records form the backbone of countless legal proceedings, from personal injury cases to medical malpractice suits. And under the current rules of evidence, certified medical records hold a privileged position in our courts: they're typically self-authenticating and fall within the business-records exception to the hearsay rule that might otherwise keep them out of evidence.

The AP article highlights an even more alarming possibility: AI hallucinations could lead to misdiagnosis and improper treatment. When fabricated symptoms or medical history appear in a patient's record, subsequent healthcare providers may base their clinical decisions on these AI-generated fictions. From my position on the bench, this raises novel questions of liability. If a physician relies on AI-corrupted records and delivers inappropriate care, how do we apportion fault among the treating physicians, the provider who implemented the AI system, and perhaps even the AI vendor itself? These cases would stretch traditional medical malpractice doctrine in new directions, potentially requiring us to develop entirely new frameworks for causation and liability in AI-mediated healthcare.

The challenge extends beyond mere accuracy. The traditional presumption of reliability that medical records have long enjoyed in our courts rests upon human observation and professional judgment. When AI systems intervene in this process, that presumption may demand reassessment. Rather than maintaining blanket rules about medical records' admissibility, courts might need to adopt more nuanced approaches that account for how these records were created. A medical record produced through direct physician documentation might warrant different treatment than one generated through AI transcription.

We may need to begin asking more pointed questions when medical records are offered into evidence. Was AI transcription involved? If so, what verification processes were in place? Do timestamped markers exist that could link transcribed text back to specific moments in the original conversation? While protecting patient privacy through prompt deletion of recordings is important, this practice leaves parties without the means to verify contested transcriptions.

The traditional rules of evidence were crafted in an era of direct human documentation. As AI increasingly assists with the creation of medical records, we may need to evolve our approach. This might mean requiring additional foundation before admitting AI-transcribed records and developing new standards for challenging the accuracy of AI-generated content.

These changes need not upend our rules of evidence wholesale. Instead, they require thoughtful adaptation of existing principles to new technological realities. The fundamental goal remains unchanged: ensuring that juries can rely on the evidence before them to render just decisions. How we achieve that goal, however, must evolve with the technology that increasingly shapes the evidence we consider.

As our justice system continues to grapple with artificial intelligence, this latest challenge reminds us that AI's impact isn't limited to obvious applications like legal research or document review. It's seeping into the very evidence we rely on to determine truth in our courtrooms. The question isn't whether we'll need to adapt our evidentiary rules for an AI world – it's how quickly we can do so while preserving the integrity of our judicial process.

Subscribe to my Substack newsletter today so you don’t miss out on a post. https://judgeschlegel.substack.com
