March 25, 2026

Your AI Chats Might Be Used Against You In Court

What Recent Decisions Mean for Any Party Using AI In Legal Disputes

Imagine that you are in the legal fight of your life. Your lawyers gave you advice protected by the attorney-client privilege, but you want to use an AI model to get a second opinion on legal arguments and strategy. To get the AI model’s advice, you use prompts that contain the key facts and issues plus advice that your lawyers gave you. You plan to share the AI model’s results with your lawyers, but you have not talked with them about your plan to use the AI model.

Your opponent files a motion seeking access to your AI chats about the legal dispute, and you are shocked when the court grants that motion. The court holds that your use of the AI model is not protected by the attorney-client privilege or the work product doctrine. The court also holds that you waived any attorney-client privilege or other protection covering your lawyer’s advice when you shared it with the AI model.

That scenario is based on a real case. It happened to Bradley Heppner in his federal criminal case, and that case highlights potential dangers when parties use AI models to assist them in legal disputes.

The questions and answers below explain the Heppner decision, a contrasting decision from the U.S. District Court for the Eastern District of Michigan, and a ruling requiring OpenAI to provide 20 million AI chat logs to plaintiffs in a copyright infringement lawsuit.

QUESTIONS AND ANSWERS
Q: What were the facts in the Heppner case?
The U.S. government brought a criminal case against Bradley Heppner alleging securities fraud, wire fraud, and other crimes. Before Mr. Heppner was indicted, the government sent a grand jury subpoena and told his lawyers that he was the target of an investigation. His lawyers met with him, and – at the time – those discussions were protected by the attorney-client privilege.

Later, Mr. Heppner used the free, consumer version of Anthropic’s AI tool, Claude, to analyze the government’s potential claims and develop defense arguments. His chats with the AI tool were reflected in approximately 31 documents. His lawyers had not asked him to use the AI tool, but he shared the AI model’s output with his lawyers after the fact. The government learned of the AI documents after seizing his electronic devices pursuant to a search warrant. When Mr. Heppner’s lawyers argued that the government could not review those documents because they were protected as attorney-client privileged and work product, the government filed a motion asking the court to let the government review the documents.

Q: What did the court decide in the Heppner case?
On February 10, 2026, Judge Jed Rakoff ruled from the bench that the 31 AI-generated documents were not protected by the attorney-client privilege or work product doctrine. He confirmed his reasoning in a written opinion issued on February 17, 2026.

Q: Why did the court in the Heppner case hold that the AI documents were not protected by the attorney-client privilege?
In the Heppner case, the court identified three main reasons why the attorney-client privilege did not prevent the government from obtaining copies of the AI documents:

First, an AI model is not a lawyer, and a party’s communications with an AI model lack the key features that are the foundation of the attorney-client privilege. The attorney-client privilege is based on a “trusting human relationship” that a party has “with a licensed professional who owes fiduciary duties and is subject to discipline.”

Second, the AI model did not adequately preserve confidentiality. Among other things, the AI model’s terms of service for the free consumer version that Mr. Heppner used expressly permitted Anthropic to (a) disclose user data to the government and other third parties under certain circumstances, and (b) use data to train its AI model. Users had no reasonable expectation of privacy because the AI company retained the data in the ordinary course of its business.

Third, Mr. Heppner did not make the documents privileged when he sent them to his lawyers after the fact. Settled law holds that a non-privileged document does not become privileged simply because it is later transmitted to a lawyer.

Q: Why did the court in the Heppner case hold that the AI documents were not protected work product?
The work product doctrine generally protects materials prepared by – or at the direction of – a lawyer during or in anticipation of litigation. Mr. Heppner created the AI documents entirely on his own initiative. Because the documents were not prepared by a lawyer or at a lawyer’s direction, the court held they did not qualify as work product.

Q: Can using an AI tool waive the privilege that protects a person’s communications with the person’s lawyers?
Yes, in certain situations. This is one of the most significant results in the Heppner case. Not only were the parts of the prompts that Mr. Heppner created on his own unprotected, but he waived the privilege and work product protections that originally covered his lawyers’ communications with him by entering that legal advice into his AI prompts.

Q: Has another court reached a different conclusion based on different facts?
Yes. On the same day that Judge Rakoff first ruled in the Heppner case, Magistrate Judge Anthony Patti of the U.S. District Court for the Eastern District of Michigan issued a written order in Warner v. Gilbarco, Inc. In that case, Magistrate Judge Patti held that a plaintiff representing herself as a pro se litigant did not waive work product protection when she used an AI model to prepare legal documents and develop her legal strategy.

Q: Why did the Michigan court protect the AI materials?
In the Michigan case, the court reasoned as follows:

First, Ms. Warner’s AI chats and AI-generated documents were protected by the work product doctrine because she prepared them in anticipation of litigation while representing herself. When Ms. Warner used ChatGPT to research legal questions and draft filings, that was inseparable from her litigation strategy. On those facts, the court viewed her use of the AI model as classic work product.

Second, Ms. Warner did not waive work product protection by using the AI model. Voluntary disclosure to any third party may waive the attorney-client privilege, but work product protection is waived only if the information is disclosed either (a) to the opposing party, or (b) in some other way that makes it sufficiently likely the opposing party will receive the information. Disclosing information to ChatGPT did not trigger a waiver under that standard because AI programs “are tools, not persons, even if they may have administrators somewhere in the background.”

Third, allowing the defendant to receive Ms. Warner’s AI prompts and the model’s responses would effectively force disclosure of Ms. Warner’s opinions regarding the litigation. That result could undermine work product protection in virtually every modern drafting environment.

Q: Why did the courts reach different conclusions in the Heppner and Warner cases?
Because the cases involved different facts, the two decisions may not be as contradictory as they might first appear. Mr. Heppner was a criminal defendant who had lawyers but used the AI model on his own initiative without his lawyers’ direction or knowledge. In contrast, Ms. Warner was representing herself as a pro se litigant.

But in addition to the factual differences, the courts in the two cases adopted different views on fundamental points. Those different views included whether using a consumer AI tool counts as disclosure to a “third person” that can destroy confidentiality protections. Judge Rakoff seemed inclined to treat the AI platform as equivalent to a third person, while Magistrate Judge Patti characterized AI programs as tools, not third persons. The two courts also took different approaches on some of the applicable legal principles.

Q: Can AI companies be required to turn over your prompts and the AI model’s answers?
Yes. In certain circumstances, AI companies may be required to turn over your prompts and the AI model’s answers. For example, in a ruling that Judge Rakoff cited in his Heppner opinion, a court ordered OpenAI to produce approximately 20 million ChatGPT conversation logs to plaintiffs suing OpenAI for copyright infringement. The conversation logs were anonymized and were covered by a confidentiality order, but those facts may not have provided much comfort to any users who had included confidential information in their AI chats. Confidentiality orders formally provide specified protection during pretrial stages of a lawsuit, but the protection may be less robust at later stages of the case such as trial.

The significance of this ruling for AI users is straightforward. OpenAI retained billions of chat logs in the ordinary course of business. When plaintiffs sought those logs to support their copyright claims, the court held that users who voluntarily submitted their communications to OpenAI lacked a sufficient privacy interest to block production.

The court rejected OpenAI’s argument that it should be permitted to search only for conversations referencing the plaintiffs’ specific copyrighted information. Instead, the court ordered production of the full 20 million log sample.

In the Heppner case, Judge Rakoff cited the OpenAI ruling for the proposition that AI users do not have substantial privacy interests in their conversations with AI platforms that retain user data in the normal course of business. Taken together, the OpenAI ruling and the Heppner decision reinforce the same fundamental point: some courts may not protect information that you have voluntarily shared with an AI model, particularly depending on the model’s terms of service.

Q: What might happen on these issues in the future?
As courts face new issues that AI creates, the law is uncertain and evolving. Congress or state legislatures may pass new statutes clarifying the consequences of a party’s use of AI during lawsuits. Also, courts might amend their rules of civil procedure or evidence to clarify those consequences. But in the meantime, results may depend on how each judge views things. For example, some judges might treat confidentiality risks posed by AI models as largely hypothetical and too remote to justify waiving privilege or other protections. Other judges, though, may conclude that parties abandoned any reasonable expectation of privacy when they voluntarily shared their information with an AI model.

When judges analyze these issues, some will reach rulings that seem very fact-specific. For example, some rulings might focus on the AI model’s particular terms of service. In some situations, using free consumer AI models and some types of individual or team paid models might risk waiver, while using some types of enterprise AI models might not. Other decisions might turn on facts such as whether the party’s lawyer asked them to use the AI model. For now, one thing is clear: a party who uses an AI model to work on a lawsuit takes a risk.

Q: What should parties do right now if they are involved in a legal dispute?
If you are involved in a legal dispute, take the following steps:
  • If you have already used an AI tool in connection with the legal dispute, tell your lawyers promptly so they can assess the potential risk and develop a strategy.
  • Unless your lawyers advise you otherwise, stop using AI tools for anything related to your legal matter.
  • Do not input any information that you received from your lawyers into an AI tool without first discussing that with your legal team.
  • If your company uses AI tools, review your company’s AI use policies with a lawyer.
  • Ask your lawyers about any protocols for using AI.

THE BOTTOM LINE
AI tools can be extremely valuable, but using them carelessly in the context of a legal dispute may have serious consequences. If a court concludes that your AI interactions were not privileged or otherwise protected, the other side may be able to see your AI chats. Before using any AI tool in connection with a legal matter, talk to your lawyers.

McGrath North’s lawyers are ready to assist you in developing your AI strategy, including how to obtain the benefits of AI when working with your legal team.