In February 2026, Judge Jed Rakoff of the Southern District of New York ruled that conversations with AI chatbots are not protected by attorney-client privilege or work product doctrine. The case — United States v. Heppner — is the first federal ruling on AI communications in criminal cases, and it changes how everyone should think about what they type into ChatGPT, Claude, or any AI tool.
The short version: anything you tell an AI chatbot can be subpoenaed and used against you in court.
## What happened
Bradley Heppner was indicted on securities and wire fraud charges. Without consulting his lawyers, he used Claude to outline potential defense strategies and anticipated arguments — generating approximately 31 written exchanges with the platform.
The government subpoenaed those chat logs. Heppner’s lawyers argued they were protected by attorney-client privilege (confidential communication with a legal advisor) and work product doctrine (materials prepared in anticipation of litigation).
Judge Rakoff rejected both arguments:
- **Not attorney-client privilege**, because Claude is not an attorney. The privilege requires a confidential communication between a client and a licensed lawyer; an AI chatbot, however sophisticated, is not a lawyer and cannot form an attorney-client relationship.
- **Not work product**, because Heppner created the exchanges on his own initiative, without direction from his attorneys. The work product doctrine protects materials prepared by or at the direction of an attorney, not materials a defendant creates independently using a public tool.
## What this means for developers
### Your code-related AI chats are discoverable
If you use Claude Code, Codex CLI, or ChatGPT to discuss proprietary code, those conversations are stored by the provider and can be subpoenaed. This matters if:
- You’re involved in an IP dispute
- Your company is under investigation
- You’re working on code that’s subject to litigation
- You discuss trade secrets with an AI
### Your company’s AI usage creates legal exposure
Every employee using AI tools is generating discoverable records. If your company is sued, opposing counsel can subpoena AI chat logs from every provider your employees use.
### Data retention policies matter
Each provider retains data differently:
| Provider | Default retention | Can you delete? | API vs consumer |
|---|---|---|---|
| OpenAI (ChatGPT) | 30 days (API), longer (consumer) | API: yes, Consumer: limited | API data not used for training |
| Anthropic (Claude) | 30 days (API) | API: yes | API data not used for training |
| Google (Gemini) | Varies | Settings-dependent | Workspace: admin controls |
For sensitive work, use the API with data retention disabled rather than the consumer chat interface. See our AI data retention guide for detailed provider comparisons.
## What the ruling does NOT say
The ruling doesn’t say AI chats are always discoverable in all contexts. It specifically addresses:
- Public consumer AI platforms (not self-hosted or enterprise deployments)
- Communications without attorney involvement (if your lawyer directs you to use AI as part of legal work, different rules may apply)
- Criminal proceedings (civil cases may develop different standards)
If you run a self-hosted AI model on your own infrastructure, there are no third-party provider logs to subpoena; the data never leaves your network. Your own copies remain discoverable from you directly, like any other internal record.
## Practical recommendations
### For individual developers
- Don’t discuss legal matters with AI chatbots. If you’re involved in any legal proceeding, keep AI conversations strictly technical.
- Use API access with retention disabled for sensitive work. Consumer chat interfaces retain more data.
- Assume everything you type is permanent and discoverable. Treat AI chats like emails — don’t write anything you wouldn’t want read in court.
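The last point can be partly automated. Here is a minimal sketch of a pre-send scrubber, assuming you route prompts through a wrapper before they reach any hosted tool; the patterns, names, and placeholder format are illustrative, not part of any provider’s SDK:

```python
import re

# Hypothetical pre-send scrubber: strip obvious secrets from a prompt
# before it is sent to a hosted AI tool. Patterns are illustrative only
# and nowhere near exhaustive.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "aws_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "bearer_token": re.compile(r"(?i)bearer\s+[A-Za-z0-9._-]{20,}"),
}

def scrub(prompt: str) -> str:
    """Replace each match with a labeled placeholder like [REDACTED:email]."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt

print(scrub("Contact alice@example.com, key AKIA1234567890ABCDEF"))
```

A scrubber like this reduces what ends up in a provider’s logs, but it does not change the legal analysis: whatever does get sent is still discoverable.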
### For companies
- Create an AI usage policy that addresses legal discovery. Employees should know their AI chats can be subpoenaed.
- Use enterprise AI plans with admin controls over data retention.
- Consider self-hosted AI for sensitive work — no third-party data to subpoena.
- Train employees on what not to discuss with AI tools (trade secrets, legal strategy, HR matters).
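A usage policy is easier to follow when it is enforced in code. Here is a minimal sketch of a policy gate that flags prompts touching forbidden topics before they leave the network; the topic names and keywords are placeholder assumptions, not a real policy:

```python
# Hypothetical AI-usage-policy gate: flag prompts that touch topics the
# policy forbids (legal strategy, trade secrets, HR matters) before they
# reach an external provider. Keywords are illustrative placeholders.
BLOCKED_TOPICS = {
    "legal": ["lawsuit", "subpoena", "settlement", "privilege"],
    "trade_secret": ["proprietary algorithm", "customer list"],
    "hr": ["termination", "disciplinary", "salary"],
}

def check_prompt(prompt: str) -> list[str]:
    """Return the policy topics a prompt appears to touch (empty = allowed)."""
    text = prompt.lower()
    return [
        topic
        for topic, keywords in BLOCKED_TOPICS.items()
        if any(keyword in text for keyword in keywords)
    ]

violations = check_prompt("Draft our settlement strategy for the lawsuit")
if violations:
    print(f"Blocked by AI policy: {violations}")
```

A keyword filter is crude, but even a crude gate gives employees a concrete reminder of the policy at the moment it matters.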
### For AI product builders
- Implement data retention controls — let users delete their data.
- Offer self-hosted deployment for enterprise customers.
- Document your data handling in your privacy policy.
- Consider encrypted storage for chat logs.
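The first recommendation can be sketched as one small design: a hypothetical chat-log store with the two controls this ruling makes relevant, user-initiated deletion and automatic expiry after a retention window. Class and method names are illustrative, not from any real product:

```python
import time

# Hypothetical chat-log store with user-initiated deletion and a
# retention sweep that drops messages older than a TTL.
class ChatLogStore:
    def __init__(self, retention_seconds: float):
        self.retention_seconds = retention_seconds
        self._logs: dict[str, list[tuple[float, str]]] = {}

    def append(self, user_id: str, message: str) -> None:
        self._logs.setdefault(user_id, []).append((time.time(), message))

    def delete_user(self, user_id: str) -> None:
        """User-initiated deletion: drop everything for this user."""
        self._logs.pop(user_id, None)

    def purge_expired(self) -> None:
        """Retention sweep: drop messages older than the retention window."""
        cutoff = time.time() - self.retention_seconds
        for user_id, entries in list(self._logs.items()):
            kept = [(ts, msg) for ts, msg in entries if ts >= cutoff]
            if kept:
                self._logs[user_id] = kept
            else:
                del self._logs[user_id]

    def messages(self, user_id: str) -> list[str]:
        return [msg for _, msg in self._logs.get(user_id, [])]
```

In production the sweep would run as a scheduled job against a database rather than an in-memory dict, but the shape is the same: deletion on request, expiry by default.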
## The bigger picture
This ruling is the beginning, not the end. Courts will continue to define how AI communications fit into existing legal frameworks. The trend is clear: AI chats are treated like any other electronic communication — discoverable, subpoenable, and not inherently privileged.
For developers, the practical takeaway is simple: don’t put anything in an AI chat that you wouldn’t put in an email to a colleague. Both are equally discoverable.
Related: AI and GDPR · AI Data Retention Policies · Where Does Your Code Go? · Best AI Coding Agents for Privacy · Self-Hosted AI for Enterprise · Best Encrypted Cloud Storage