Two recent federal court decisions are the first to address a question of growing importance to businesses and individuals involved in litigation: Can your opposing party obtain your communications with generative AI tools like ChatGPT or Claude through discovery? The short answer: it depends. The rulings reached opposite conclusions, but together they offer guidance on how to protect AI-generated materials in litigation.
AI Communications Are Not Privileged When Used Without Counsel's Direction
In United States v. Heppner,1 a criminal defendant ran queries through the publicly available version of Anthropic’s Claude on his own initiative—not at the direction of his lawyer—while under government investigation. When the government sought access to those AI exchanges, the defendant argued they were protected from discovery by the attorney-client privilege and the work-product doctrine. Judge Rakoff rejected both arguments. He held that an AI chatbot “is not an attorney,” and that privilege requires a “trusting human relationship” that does not exist between a person and an AI tool.
The court also found that users of publicly accessible AI platforms do not have “substantial privacy interests” in their communications, particularly where the platform’s privacy policy reserves the right to disclose user data to third parties, including government authorities. Because the defendant used AI on his own, without counsel’s involvement, neither the privilege nor the work-product doctrine applied. Judge Rakoff noted, however, that the analysis could differ if a lawyer had directed the client’s AI use, in which case the AI tool might function “akin to a highly trained professional who may act as a lawyer’s agent.”
Work-Product Protection May Apply When AI Reflects a Litigant’s Own Thinking
On the same day as the Heppner ruling, Magistrate Judge Patti of the Eastern District of Michigan reached the opposite result. In Warner v. Gilbarco, Inc.,2 a pro se plaintiff in an employment dispute used OpenAI’s ChatGPT to help draft filings and analyze her case. When the defendants moved to compel production of all the plaintiff’s AI communications, the court denied the request.
The court held that the plaintiff’s ChatGPT exchanges reflected her own mental impressions and litigation strategy, making them protected work product. Critically, the court ruled that sharing information with an AI tool is not the same as disclosing it to an adversary and therefore does not waive work-product protection. The court warned that accepting the defendants’ position “would nullify work-product protection in nearly every modern drafting environment.” The court also characterized the defendants’ pursuit of AI-related materials as “a distraction from the merits of this case” and directed that their “preoccupation with Plaintiff’s use of AI needs to abate.”
What Made the Difference?
The key distinction between these cases was the relationship between the AI user and legal counsel. In Heppner, the defendant acted entirely on his own—counsel had no involvement in the AI use and did not direct it, so the court found no basis for privilege or work-product protection. In Warner, the plaintiff was representing herself, and the court treated her AI communications as an extension of her own case-preparation process—the functional equivalent of a lawyer’s notes or draft arguments.
The law in this area is new and evolving, but these decisions offer several lessons for companies and individuals who may use AI tools during litigation or investigations:
- Involve counsel before using AI for litigation-related tasks. The Heppner court strongly implied that AI use directed by a lawyer may receive greater protection. Those involved in a dispute or investigation should coordinate with their legal team before turning to AI tools for research, analysis, or drafting.
- Treat AI communications as potentially discoverable. Publicly available AI platforms may not offer confidentiality protections, and their terms of service often permit disclosure of user data. Assume that anything shared with or generated by a consumer AI tool could be subject to a discovery request or subpoena.
- Document the purpose and context of AI use. Courts in both cases examined why and how AI was used. Maintaining a clear record that AI tools were used as part of litigation preparation—at counsel’s direction and for the purpose of developing legal strategy—may strengthen privilege and work-product arguments.
- Review and update organizational AI policies. Companies should ensure their policies address the use of AI tools in the context of litigation, investigations, and legal matters—including guidance on what types of information may and may not be entered into AI platforms.
1No. 25 Cr. 503 (JSR) (S.D.N.Y. Feb. 17, 2026).
2No. 2:24-cv-12333 (E.D. Mich. Feb. 10, 2026).