In February 2026, a federal judge in New York made a ruling that should have landed on every CISO’s desk the same day it was issued. Most of them still haven’t heard about it.

In United States v. Heppner, Judge Jed Rakoff of the Southern District of New York ruled that a defendant’s conversations with Anthropic’s Claude — a publicly available AI chatbot — were not protected by attorney-client privilege or the work product doctrine. It was the first ruling of its kind in the country, and the reasoning has implications that stretch far beyond the courtroom.

If your employees are using ChatGPT, Claude, Gemini, or any other consumer AI tool for anything remotely sensitive, this ruling just made your risk exposure a lot more concrete.

What Actually Happened in the Heppner Case

Here’s the short version. Bradley Heppner was indicted on federal securities and wire fraud charges in October 2025. After receiving a grand jury subpoena and retaining defense counsel, he used a consumer version of Claude to prepare reports outlining potential defense strategies and legal arguments. He later shared those reports with his lawyers.

When the FBI searched his home, they seized his devices and found 31 documents from his Claude conversations. The government argued those documents weren’t privileged. Heppner’s defense team — from Quinn Emanuel, no less — argued they were, because the information came from counsel and was created for the purpose of obtaining legal advice.

Judge Rakoff disagreed on every count.

The Three Reasons Privilege Failed

The court’s reasoning boiled down to three points, each of which matters for anyone thinking about AI governance in an enterprise setting.

First, Claude isn’t a lawyer. The court held that attorney-client privilege requires a communication between a client and an attorney. Claude is neither. Heppner wasn’t directed by his counsel to use the tool — he did it on his own initiative. That alone was enough to disqualify the privilege claim, but the court didn’t stop there.

Second, the conversation wasn’t confidential. This is the part that should worry security leaders. The court looked at Anthropic’s privacy policy and noted that it explicitly states the platform collects user inputs and outputs, may use that data to train its models, and reserves the right to disclose data to third parties — including government authorities. In the court’s view, you can’t claim a reasonable expectation of confidentiality when the platform’s own terms tell you there isn’t one.

Third, the purpose wasn't to obtain legal advice from counsel. The court emphasized that what matters is the user's intent at the moment of the communication: Heppner was seeking output from the AI tool itself, not advice from his attorneys. Forwarding the results to counsel afterward can't retroactively make them privileged.

Why the Blast Radius Extends Beyond Lawyers

Most of the commentary on Heppner has come from law firms writing for other lawyers. That makes sense — the immediate privilege implications are a legal professional's nightmare. But the real blast radius is much wider.

Think about what your employees are doing with AI tools right now. They’re pasting customer data into ChatGPT to draft emails. They’re feeding internal financial projections into Claude to format presentations. They’re dropping source code with API keys into Copilot. They’re asking Gemini to summarize confidential strategy documents.

Every single one of those interactions shares information with a third-party platform whose privacy policy — like the one Judge Rakoff scrutinized — typically allows data collection, model training, and disclosure to third parties.

Before Heppner, the risk was theoretical. “Someone might subpoena those conversations someday.” Now there’s a federal court ruling that says: yes, they can, and no, you can’t stop it by claiming privilege after the fact.

This matters for any organization in a regulated industry. Healthcare companies with HIPAA obligations. Financial firms with SEC and FINRA compliance requirements. Government contractors with CMMC and ITAR restrictions. And law firms themselves, who now need to worry about what their clients are doing with AI, not just their own attorneys.

The Enterprise AI Loophole (That Isn’t Quite a Loophole)

One of the more interesting threads in the post-Heppner commentary is the distinction between consumer and enterprise AI platforms. Judge Rakoff’s ruling leaned heavily on the fact that Heppner used a publicly available version of Claude with consumer-grade privacy terms. Multiple law firms have noted that the analysis might differ for enterprise platforms with stricter data handling.

Venable’s analysis put it directly: the opinion applies established doctrine to a specific fact pattern — “a defendant acting alone, using a publicly available Gen AI platform governed by consumer-facing privacy terms.” Enterprise tools operating under signed data processing agreements with no-training covenants and contractual confidentiality commitments present a materially different picture.

But here’s the thing — as Bloomberg Law’s analysis pointed out, several products marketed as “business” or “enterprise” AI tools offer no more legal protection than the consumer versions. The label alone doesn’t protect you. You need to actually examine the terms, the data handling, the retention policies, and the training practices.

And even if you get the enterprise licensing right, that only covers the tools you’ve approved. It does nothing about the employee who pastes confidential data into the free version of ChatGPT because they didn’t know they weren’t supposed to.

What To Do About It: Practical Steps for Security Leaders

The law firms writing about Heppner have produced some genuinely useful guidance. Synthesizing across the major analyses from Gibson Dunn, Venable, Ogletree, Debevoise, and others, here’s what the practical playbook looks like:

Define an acceptable use policy for AI. If you don’t already have one, this is overdue. It should cover which tools are approved, what types of data can and cannot be entered, and what the consequences are for violations. Make it specific — “don’t share confidential information” is too vague to be useful.

Audit your enterprise AI contracts. Don’t assume the enterprise tier of your AI vendor is actually protecting you. Review the data processing agreement. Look for explicit no-training clauses, zero data retention commitments, and contractual confidentiality obligations. If those don’t exist, you have a problem.

Implement technical controls, not just policies. Policies are necessary but insufficient. People make mistakes. They copy-paste without thinking. They don’t read privacy policies. If the only thing standing between your sensitive data and a public AI platform is a policy document on the intranet, you’re relying on human perfection — and Heppner is a case study in what happens when you do.
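To make this concrete, here is a minimal sketch of what a pre-submission filter can look like — a hypothetical illustration, not AegisPrompt's actual implementation, and far short of a production data loss prevention control. It scans outbound prompt text for a few common sensitive patterns before the text is allowed to leave the user's machine:

```python
import re

# Illustrative patterns only. A real control would use a much larger
# set, plus validation logic (e.g. Luhn checks for card numbers) to
# cut down on false positives.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "private_key_header": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of the sensitive patterns found in the text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

def should_block(text: str) -> bool:
    """Block submission if any sensitive pattern matches."""
    return bool(scan_prompt(text))
```

In a browser-based control, a check like this would run locally against the prompt before the request proceeds, and what gets reported upstream would be the flagged category names ("aws_access_key", "us_ssn"), never the raw data itself.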

Train everyone, not just legal. The Heppner risk isn’t limited to lawyers and their clients. It applies to any employee using AI tools to work with information that could be subject to regulatory scrutiny, litigation holds, or confidentiality obligations. That includes finance, HR, product teams, and engineering.

Ensure counsel directs any AI use for legal matters. One of the key threads in the Heppner ruling is that privilege might have been preserved if counsel had directed Heppner to use the AI tool. For any work that touches legal strategy, litigation preparation, or regulatory response, make sure an attorney is supervising and documenting the use of AI.

The Bigger Picture: AI Governance Is No Longer Optional

Harvard Law Review published an analysis of Heppner just last week, arguing that the court’s reasoning “veers toward categorically excluding a client’s use of generative AI from attorney-client privilege” and that a more nuanced, fact-dependent analysis would be appropriate. The New York State Bar Association called it a “wake-up call and a warning.” Morgan Lewis extended the analysis to tax departments. Healthcare Law Insights applied it to life sciences. The ripple effects are still expanding.

What all of this points to is a simple reality: organizations can no longer treat AI governance as a future problem. The legal framework is being built right now, in real time, and the decisions being made today — by courts, by regulators, and by your employees at their desks — are setting precedents that will define how AI risk is managed for years to come.

The companies that take this seriously now — implementing real technical controls, auditing their AI contracts, and building governance frameworks with actual teeth — will be in a fundamentally different position than those who wait for the next ruling to force their hand.

A Note on Prevention

Full disclosure: AegisPrompt was built specifically to address the technical gap in this problem. It’s a Chrome extension that detects and blocks sensitive data — PII, API keys, financial identifiers, and custom restricted terms — before it gets submitted to AI platforms like ChatGPT, Claude, Gemini, and others. Everything runs locally in the browser, and an admin dashboard provides visibility into what’s being flagged across your organization.

No single tool solves the full scope of what Heppner raises. This is a policy problem, a training problem, a contract problem, and a technology problem all at once. But the technology piece — having something that actually intercepts sensitive data before it reaches a third-party platform — is the part that most organizations are still missing.

If you’re evaluating how to close that gap, AegisPrompt offers a 30-day free trial with no credit card required.


Sources and Further Reading:

  • United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. Feb. 17, 2026)
  • Harvard Law Review, “United States v. Heppner” (March 2026)
  • Gibson Dunn, “AI Privilege Waivers: SDNY Rules Against Privilege Protection for Consumer AI Outputs” (February 2026)
  • Venable LLP, “AI, Privilege, and the Heppner Ruling: What the Court Actually Held—And How to Structure AI Use Safely” (February 2026)
  • Ogletree Deakins, “The Intersection of AI and Attorney-Client Privilege—A Cautionary Tale” (February 2026)
  • Debevoise & Plimpton, “SDNY Rules AI-Generated Documents Are Not Protected by Privilege” (February 2026)
  • Bloomberg Law, “Heppner Shows Attorney-Client Privilege’s Fragility in AI Era” (March 2026)
  • Morgan Lewis, “Using AI in Tax Workflows? What Heppner Means for Tax Departments” (March 2026)
  • New York State Bar Association, “Loose AI Prompts Sink Ships: How Heppner Shook the Legal Community” (March 2026)
  • Healthcare Law Insights, “AI, Privilege, and Confidential Business Information: The Heppner Case and What It Means for Life Sciences Teams” (March 2026)