TikTok’s New Deepfake Feature - Symphony Avatars

Have y’all seen the new announcement from TikTok on AI? TikTok recently announced Symphony Avatars, a feature that lets users create personalized, AI-generated avatars. While TikTok asserts that its avatars will be labeled “AI Generated” to distinguish them from real footage, the broader implications of this technology cannot be ignored.

The introduction of deepfakes on a platform as massive as TikTok means this technology will become more sophisticated and accessible to everyone in a very short period of time. This democratization of deepfake technology, which lets users create convincing avatars at no cost, will also increase public skepticism toward video content. That is particularly concerning from my vantage point in the legal profession, where the authenticity of evidence is paramount.

In a recent article, I discussed the potential need for new rules in light of Apple’s announcements regarding AI integration into its products, including the new Clean Up feature that allows users to alter photos directly on their phones. You can read more about that HERE.

Given the rise of such technologies, the time to take a hard look at modifying rules related to lawyers’ duties to the Court may be sooner rather than later. While I am generally cautious about quickly modifying rules, believing most current ones are sufficient to handle the new world of generative AI, one area that might need re-evaluation is ABA Model Rule 3.3(a)(3) and its state equivalents, which prohibit lawyers from knowingly offering false evidence. However, in the age of deepfakes, I would suggest that we strengthen this rule as it relates to photographic, video, and audio evidence.

Lawyers should not be allowed to close their eyes and offer “smoking gun” evidence their clients give them without asking some serious questions first. Instead, lawyers should be required to verify the authenticity of any evidence, to the best of their ability, before presenting it in court. That means becoming more aware of the potential for manipulation inherent in digital media and implementing stringent internal checks before offering any evidence. Such a requirement could help mitigate the risk of deepfakes undermining the integrity of judicial proceedings. More specifically, lawyers should not be permitted to offer evidence they knew “or should have known” was false. Furthermore, lawyers and litigants should not be permitted to challenge evidence in front of the jury on a “deepfake” theory unless they have a reasonable belief that the evidence being offered has been manipulated. The “liar’s dividend,” in which genuine evidence is dismissed as fake simply because deepfakes exist, is real and needs to be discouraged at all costs.

For reference, you can view ABA Model Rule 3.3 HERE.

As we navigate this new landscape, it is essential to strike a balance between embracing innovative technologies and safeguarding the integrity of our justice system. Ensuring the authenticity of evidence that originates from a digital file is a critical step in maintaining trust in the legal process amidst the rise of AI-generated content.

Feel free to share your thoughts or reach out if you have any questions or insights on this topic. Let's continue this important conversation about the intersection of technology and the law.
