Generative AI is changing how businesses and individuals access legal information. AI hallucinations are a well-known headache, but they aren’t the only risk. A brand-new federal decision out of New York is an even bigger caution sign: using a public AI tool to help “think through” your legal situation can cost you legal protections you assumed were automatic.
In United States v. Heppner (S.D.N.Y. Feb. 17, 2026), U.S. District Judge Jed S. Rakoff held that a defendant’s communications with a public generative AI platform about legal strategy were protected by neither the attorney-client privilege nor the work-product doctrine.
That’s the headline. Here’s what happened and what it means for you.
What Happened in Heppner
The defendant, facing federal charges, used the consumer version of Anthropic’s chatbot Claude to generate written analyses of defense arguments. Law enforcement later seized materials that included roughly 31 documents memorializing those AI exchanges.
After the seizure, the defense argued the materials should be protected because the defendant had used information he received from counsel and intended to share the AI-generated work with his lawyer.
The court rejected that.
Why Attorney-Client Privilege Didn’t Apply
Attorney-client privilege generally requires: (1) a communication, (2) between attorney and client, (3) made in confidence, (4) for the purpose of obtaining legal advice.
The court found the AI communications failed key parts of that test:
- The AI platform is not your attorney (we often wish this point were more obvious to AI users). Privilege protects communications between a lawyer and client, with narrow exceptions.
- A public AI platform is not automatically an agent of the lawyer, so routing legal advice through it does not keep the communication within the privileged circle. The court also emphasized that confidentiality is the very point of privilege.
- No reasonable expectation of confidentiality. The court pointed to the platform’s terms and privacy practices that allowed retention and potential disclosure or use of user inputs and outputs. If you voluntarily share sensitive information with a third party that reserves broad rights, privilege can be lost.
Practice Pointer: If you put privileged or sensitive legal strategy into a consumer AI tool, you are taking a real risk that a court will treat it as if you had disclosed it to a third party. In plain English: you may as well have put it on a billboard.
What About the Work-Product Doctrine?
The defendant had a second argument: even if it’s not privileged, it should be protected as work product.
The court said no, for a practical reason that matters a lot in the real world: work product generally protects materials prepared by or at the direction of counsel in anticipation of litigation. Here, the defendant used the AI tool on his own initiative, not because his lawyer told him to do it.
And the court drew a line worth remembering: the doctrine is meant to protect lawyers’ strategic thinking, not independent side work a client generates using public tech tools.
Practical Takeaways for Clients and Businesses
If you’re an individual with a legal problem:
- Don’t paste emails from your lawyer into ChatGPT, Claude, or any public AI tool.
- Don’t summarize legal advice you received and ask the AI to “improve it” or “tell me what to do next.”
- If you want to use AI at all, ask your lawyer first. Don’t assume it’s safe.
If you run a business (or manage people who do):
You should treat this as a governance issue, not a tech curiosity.
At a minimum, your AI policy should:
- Prohibit entering confidential, proprietary, regulated, or privileged information into public AI tools.
- Set clear rules for use cases that are usually fine (brainstorming marketing copy, rewriting non-confidential drafts, formatting checklists).
- Require an approval path for any higher-risk use (client data, HR issues, litigation, contracts, financials).
- Address who can use enterprise AI tools and under what contractual safeguards.
Yes, enterprise AI tools with negotiated confidentiality terms may raise different questions. But the law is moving fast and not always in ways you’ll like. This decision is a warning shot.
The Simple Rule that Keeps You Out of Trouble
If you wouldn’t forward it to a stranger, don’t paste it into a public AI prompt.
Need An AI-Use Policy or Training?
If you want help drafting or tightening an AI-use policy, assessing where your team is exposed, or training leadership on practical AI compliance, we can help. Reach out to schedule a call.
