AI in the Courtroom: When Victims “Speak” from Beyond the Grave
I know the story about the AI-generated victim impact statement was reported a week or so ago, but I’ve been carefully considering whether to weigh in. As a former prosecutor who has tried many murder cases and a former trial court judge who has sentenced countless defendants, I wanted to be careful about how I wrote something related to a grieving victim’s family, who have every right to express themselves. This article is in no way meant to be critical of the family; it is meant to address the overarching question of how we should approach the proper integration of AI into the justice system.
As many of you know, I’m a strong proponent of using AI and technology to improve our justice system, but not if it undermines public confidence. So this recent news, in which an AI-generated victim impact statement may have led to a sentence harsher than the one prosecutors requested, demands thoughtful examination. Having presided over many emotional sentencings, I can only imagine the power of that moment when the victim’s family used artificial intelligence to let their loved one “speak” directly to the court and to his killer.
I deeply respect this family’s decision. Victim impact statements are a sacred right, and I’ve seen firsthand how they help families process their grief. However, after spending many years in the courtroom, I can’t ignore the broader implications of allowing this type of impact statement.
Victim impact statements already carry tremendous emotional weight. They turn case numbers into names and statistics into stories. I’ve watched hardened defendants break down in tears and seen families find a measure of peace through this difficult process. That’s exactly what they’re for.
But using AI to recreate the voice and image of someone who is no longer with us introduces something entirely different. It moves beyond mere emotional appeal, entering a realm that feels almost supernatural. It bypasses our rational faculties in a way that could overwhelm the careful balance judges must strike at sentencing. Imagine a murdered child “speaking” to the court through AI. What judge could remain truly impartial in the face of such a powerful, haunting recreation?
This emotional power cuts both ways. In the Arizona case, it may have contributed to a longer sentence. But what happens when an AI-recreated victim pleads for mercy and asks for leniency? Would a judge, despite years of experience, feel compelled to honor that request, even if community safety demanded otherwise?
Sentencing isn’t just about emotional impact. It demands a careful balance of punishment, deterrence, rehabilitation, and public safety. So when we introduce something this emotionally charged into the process, we risk disrupting that delicate equilibrium.
I also noticed something telling about this Arizona case. The AI recreation wasn’t played during the trial, only at sentencing. Clearly, everyone understood it might unduly influence the jury’s fact-finding. But aren’t judges susceptible to the same human emotions?
This case is just a glimpse of the challenges we’ll face as AI becomes more deeply integrated into our courtrooms. Consider a murder case that must be retried after a key witness has died. Normally, prosecutors would offer the transcript of the witness’s prior testimony. But what if they instead created an AI recreation of that witness delivering the exact same testimony?
Technically, the legal basis for admission would be identical. But the impact of seeing and hearing the deceased witness “speak” again would be profoundly different from having a transcript read aloud. Would jurors find this recreation more credible than someone just reading the text from the witness stand?
These aren’t abstract concerns. As AI becomes increasingly lifelike, we need to carefully consider where to draw the line. Our legal system currently lacks any established framework for evaluating AI-generated evidence. How do we ensure these digital recreations accurately represent the deceased? What standards should govern their admissibility?
As we stand on this new frontier, the question isn’t whether this technology is powerful, but whether its probative value is substantially outweighed by the danger of unfair prejudice. A justice system swayed by digital recreations risks losing its fundamental commitment to impartiality.
So should we permit these digital recreations at sentencing, or should they be excluded entirely? One approach would be to preserve traditional victim impact statements, delivered in person or in writing before sentencing. Then, if the judge permits, an AI-generated digital representation of the victim could be presented immediately after sentencing, while the defendant remains in the courtroom. This approach would keep the judge’s decision free from the potentially overwhelming emotional impact of an AI recreation while still honoring the family’s desire to let their loved one “speak” in a meaningful setting and to express their grief in their own way.
This is a difficult question with no easy answers. It’s a challenge the justice system will need to confront as technology continues to evolve. Balancing the rights of victims’ families with the need for impartial justice is no small task, but it’s one we must face if we are to preserve the integrity of our courts.
Subscribe to my Substack newsletter today so you don’t miss a post. https://judgeschlegel.substack.com