AI in the Courtroom: Experts, Judges, and the Twist in Weber
As a sitting judge with expertise in the intersection of AI and law, I've long anticipated the day when artificial intelligence would significantly impact our legal proceedings. Well, it looks like that day has arrived, as evidenced by the recent Matter of Weber case in the Surrogate's Court of Saratoga County, New York. This case, involving a trust accounting dispute, not only highlights the challenges we face but may also offer a roadmap for how courts handle experts who rely on AI going forward.
The crux of the Weber case revolved around allegations that a trustee had breached her fiduciary duty in managing a property in the Bahamas. During the proceedings, the court was presented with expert testimony on potential damages. It was this expert's methodology that brought the issue of AI to the forefront of the case.
The court confronted an expert witness, Charles Ranson, who relied on Microsoft Copilot, an AI chatbot, to cross-check his calculations for a supplemental damages report. However, Ranson could not recall the specific inputs he gave Copilot, nor could he explain how the AI arrived at its outputs. This lack of transparency raised serious concerns for the court about the reliability and admissibility of his testimony.
What makes this case truly groundbreaking is the court's response. The judge took the unprecedented step of establishing new guidelines, holding that attorneys now have an affirmative duty to disclose the use of AI in generating evidence. Furthermore, such evidence should be subject to a Frye hearing to determine its admissibility. This proactive approach sets a clear precedent that other courts may wish to follow in future cases when handling expert opinions and reports formed with the assistance of AI.
But did anyone catch that the judge used AI too? In an unexpected move, the judge directly engaged with AI, using Microsoft Copilot to test the reliability of the expert's methodology. The court entered prompts similar to those used by the expert and found inconsistent results. This hands-on approach bears a striking resemblance to Judge Newsom's recent use of ChatGPT in the Snell case.
While this direct engagement demonstrates a commendable effort to understand and evaluate the technology, it also raises important questions. Are we, as judges, going too far by conducting our own AI experiments in cases pending before us? The parallels between this case and Judge Newsom's actions highlight a growing trend of judicial engagement with AI that warrants careful consideration. (See previous posts at www.judgeschlegel.com/blog for my thoughts on this issue).
The Weber case brings to light several critical issues we'll need to grapple with. The 'black box' problem of AI systems makes it nearly impossible for opposing counsel to effectively cross-examine experts or for the court to assess the reliability of the underlying data and methodology. We must reconsider whether we can treat AI-generated information under the existing hearsay exception for expert testimony, or if we need to establish new standards given the unique challenges AI presents.
Moreover, we need to carefully consider the implications of judges using AI tools to evaluate evidence or assist in decision-making, balancing the need for judicial understanding of technology with the risk of overstepping traditional roles.
As we navigate this new terrain, we may need to develop new standards specifically for AI-assisted expert testimony. These could require experts to provide detailed information about the AI system used, including its known limitations, the specific inputs provided, and steps taken to verify the AI's outputs.
The Weber case should serve as a wake-up call for the legal community. We're entering uncharted territory where the foundations of expert testimony (reliability, transparency, and the ability to withstand cross-examination) are being challenged by the black-box nature of AI systems. Moving forward, we must stay informed about AI developments and engage in ongoing discussions about adapting our evidentiary rules to this new reality. We need to consider not only the use of AI by experts but also its role in judicial decision-making. The integrity of our judicial process depends on our ability to effectively evaluate and challenge the evidence presented, even when that evidence is generated by artificial intelligence.
The proactive approach suggested by the judge in the Matter of Weber may be a great way forward, but it also underscores the need for ongoing discussions about the boundaries of AI use in our courtrooms, by experts and judges alike. As we face this new era, we must ensure that our legal system evolves to meet these challenges while maintaining its fundamental principles of fairness and justice.
Subscribe to my Substack newsletter today so you don’t miss out on a post. https://judgeschlegel.substack.com