Apple's AI Integration: A Boon for Consumers, a Challenge for Courts
Apple's recent announcement at the 2024 Worldwide Developers Conference (WWDC) has gotten some folks pretty excited. The integration of OpenAI's capabilities into Apple's ecosystem, branded as Apple Intelligence, introduces several features designed to enhance the user experience. Siri, now able to call on ChatGPT, promises to be a game changer. And the Clean Up tool in the Photos app, akin to Google's Magic Eraser, will let us effortlessly remove unwanted objects from our photos.
While these advancements are exciting from a consumer's perspective, they terrify me as a judge because they present significant challenges for the justice system. The proliferation of tools that can alter reality, like Apple's Clean Up, raises serious concerns about the integrity of photographic evidence in court. Traditionally, photographs introduced as evidence were printed at local stores like Walgreens or CVS and presented in court, with a witness simply verifying their accuracy and authenticity before they were admitted. That process, though, relies heavily on the honesty of the witness and on our inherent trust in photographs.
The ability to manipulate images easily with tools like Clean Up means that key details or figures can be erased from photos, posing a significant risk to the integrity of evidence used at trial. Such alterations could go undetected, leading to wrongful verdicts. This scenario underscores the urgent need for the legal system to adapt to these technological advancements.
Courts and litigants may want to consider stricter verification processes for digital evidence. In the short term, this might include requiring parties to verify where a photograph was taken before trial and to produce the original digital file for metadata analysis, so the integrity of the file can be checked before it is used in court. In the long term, provenance standards like C2PA, or hardware capable of detecting manipulations in digital images, will hopefully mature to the point that courtrooms across the country can rely on them. Unfortunately, that may take a while.
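To make the metadata point concrete, here is a minimal sketch of the kind of first-pass check an examiner might run on a produced digital file. It is written in Python using the Pillow imaging library, and the file name is hypothetical; it simply prints the photo's embedded EXIF metadata and flags the Software tag, which often records the last program that saved the file. A camera model in that field is unremarkable; a photo editor is worth probing.

```python
# Illustrative sketch only: read a photo's EXIF metadata for basic provenance clues.
# Assumes the Pillow library (pip install Pillow); "exhibit_7.jpg" is a hypothetical file.
from PIL import Image, ExifTags

def summarize_exif(path):
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF metadata found -- possibly stripped, which is itself notable.")
        return
    for tag_id, value in exif.items():
        tag = ExifTags.TAGS.get(tag_id, tag_id)  # translate numeric tag IDs to names
        print(f"{tag}: {value}")
        # The Software tag often records the last program that saved the file.
        if tag == "Software":
            print(f"  -> file was last written by: {value}")

summarize_exif("exhibit_7.jpg")
```

Of course, EXIF data is itself trivially editable, which is why a check like this is only a starting point, and why cryptographically signed provenance standards like C2PA are the more promising long-term answer.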
Recent developments also highlight the ongoing struggle to adapt legal standards to the challenges posed by AI. Professor Maura Grossman and Judge Paul Grimm (Ret.) proposed (1) a modification to Federal Rule of Evidence 901(b)(9) and (2) a new Rule 901(c) to address the challenges posed by AI-generated evidence, particularly deepfakes. Their proposal aimed to establish stricter guidelines for authenticating AI-generated media in court. But the U.S. Judicial Conference's Advisory Committee on Evidence Rules decided in April not to adopt the proposed changes immediately, signaling that the proposal needs further refinement and discussion.
Grossman and Grimm's proposal underscores the importance of developing robust standards for admitting and authenticating AI-generated evidence. As AI continues to evolve, so too must our legal standards and practices, to safeguard the integrity of evidence and uphold the principles of justice.