AI and Ineffective Assistance of Counsel: New Questions for Criminal Defense

We all remember the Avianca case, in which a lawyer cited fake cases generated by ChatGPT in a federal civil matter. But did you hear about the 2023 criminal case in which AI was used to draft a closing argument in a federal conspiracy trial?

In United States v. Michel, Grammy-winning rapper Prakazrel "Pras" Michel of The Fugees was convicted on 10 federal counts. After his conviction, he filed a motion for a new trial, arguing that his trial attorney, David Kenner, had provided ineffective assistance of counsel by using generative AI to draft portions of the closing argument. Michel's new legal team contended that the AI-generated closing argument reportedly confused the charges, misattributed song lyrics, and missed key weaknesses in the prosecution's case. After an evidentiary hearing at which both Michel's new attorneys and Kenner testified, the court issued a detailed ruling denying the motion.

Inside the Court's Analysis

According to the court's opinion, Kenner admitted during the evidentiary hearing that he had indeed used an AI program to generate a portion of his closing argument. The AI was given the prompt: "I am a passionate attorney who believes in my clients [sic] innocence. Write a powerful, emotionally compelling closing argument and integrate lyrics from Ghetto Superstar by the band the Fugges [sic]."

The court found that the AI's output included a misattribution of lyrics from Puff Daddy's "I'll Be Missing You" to Michel's group The Fugees, an error that Kenner failed to catch. However, after analyzing the issue under the Strickland standard for ineffective assistance claims, the court concluded that Michel failed to demonstrate prejudice sufficient to warrant a new trial.

Critical to the court's reasoning was that the AI-generated content "did not relate to any evidence in the case" but consisted mainly of "general sympathetic statements and one lyrical quote." The court found that Michel had not shown there was a reasonable probability that correctly attributing the song lyrics would have changed the outcome of his trial.

Questions Without Easy Answers

The Michel case raises some interesting questions:

  • Under what circumstances would AI-generated content constitute performance "below an objective standard of reasonableness"? Is it the use of AI itself or failure to adequately verify its output?

  • How can a defendant demonstrate that AI-related errors specifically caused prejudice such that there is a reasonable probability that, but for counsel's error, the result of the proceeding would have been different?

  • How does the "prevailing professional norms" component of Strickland's deficiency prong apply when technology is rapidly evolving?

While the court didn't find the AI use grounds for a new trial in this specific instance, its analysis provides an initial framework for how courts might approach these issues in the future.

The Flip Side: Could NOT Using AI Eventually Constitute Ineffective Assistance?

While the Michel case highlights potential dangers of improper AI use, an equally intriguing question lurks on the horizon: Could failing to use AI eventually constitute ineffective assistance in certain contexts?

Consider the public defender with an overwhelming caseload:

  • What if AI could review hundreds of hours of body camera footage that a time-strapped attorney simply cannot get through?

  • What if AI could analyze thousands of jailhouse phone calls to identify potentially exculpatory evidence that would otherwise go undiscovered?

  • What if AI could meticulously compare witness statements to identify inconsistencies that a sole practitioner might miss due to their enormous caseload?

As these technologies become more reliable and commonplace, will courts eventually need to consider whether an attorney's failure to utilize available technological tools constitutes a failure to provide competent representation?

Imagine a scenario where an overworked public defender manually reviews just 10% of available body camera footage before trial, missing key exculpatory evidence that an AI review could have flagged. If such technologies become standard practice, could their non-use become grounds for an ineffective assistance claim?

Where Do Courts Draw the Line?

The legal profession now stands at a crossroads. AI tools for legal research, document review, and trial preparation are developing rapidly. Some offer genuine potential to improve representation, particularly for public defenders and prosecutors with limited resources. Others may represent dangerous shortcuts that undermine the human judgment at the heart of effective advocacy.

Courts will navigate this terrain as they have other technological developments—by applying existing precedent to new fact patterns. In the Michel case, the court followed the well-worn path of a Strickland analysis, focusing on whether the AI use caused prejudice sufficient to change the outcome. This approach isn't novel—it's the same high bar courts have consistently applied to ineffective assistance claims regardless of the technology involved. The key legal issue may not be AI use itself, but rather an attorney’s failure to diligently supervise AI output to prevent avoidable errors—an expectation courts have already affirmed in prior rulings.

But should courts establish bright-line rules about when AI use is appropriate, so that attorneys have a clear standard to measure against? Should they focus on process rather than technology, examining how attorneys verify and take responsibility for AI-generated work? Should they distinguish between different types of legal work, perhaps permitting more technological assistance for research while requiring more direct human involvement in advocacy?

Looking Forward

As criminal defense attorneys and prosecutors alike incorporate AI into their practices, courts will inevitably confront more cases that raise these issues. The Michel case represents a predictable extension of how courts have addressed technological changes in the practice of law over the years.

For defense attorneys, the message cuts both ways: the court didn't categorically reject AI use, but the case highlights potential pitfalls. Using AI without proper supervision may lead to errors, yet as the technology improves and becomes standard practice, failing to use helpful AI tools might one day raise similar concerns.

For courts, the challenge will be balancing technological innovation with the fundamental human elements of legal representation that the Sixth Amendment was designed to protect. As AI capabilities advance, this balancing act will only become more difficult—and more essential.

The outcome of Michel's motion demonstrates the continued relevance and adaptability of the Strickland framework. Rather than creating new standards for AI, the court predictably applied the same two-pronged test of deficient performance and prejudice that courts have used to evaluate attorney performance for decades. The technology may be new, but the legal analysis follows familiar patterns that have successfully accommodated technological change throughout the digital revolution.

Ultimately, the Michel case and evolving AI tools remind us that the core principle remains unchanged: legal advocacy depends on human judgment, and attorneys will remain accountable for the responsible integration of technology into their practices.

Subscribe to my Substack newsletter today so you don’t miss out on a post. https://judgeschlegel.substack.com
