When AI Breaks Privilege: Courts Confront Generative AI in the Age of Enterprise Adoption 

By Marco Nasca, Esq.

When executives, HR leaders, and attorneys use generative AI to analyze legal risk, a record is created. Courts are increasingly being asked whether that record is discoverable — and whether it is protected.  

In a recent analysis of In re OpenAI, Inc. Copyright Infringement Litigation, I examined the court’s decision ordering the production of millions of AI-generated chat logs, reinforcing that generative AI interactions are treated as ordinary electronic evidence governed by established discovery principles. A recent ruling from the Southern District of New York now addresses a related and equally consequential question: whether those interactions are shielded by attorney-client privilege or the work product doctrine. 

In United States v. Heppner, No. 25-cr-00503 (S.D.N.Y. Feb. 17, 2026), the court concluded that, under the circumstances presented, they were not. 

Traditional Privilege Doctrine and the Third-Party Problem 

In Heppner, the defendant, charged with securities and wire fraud, sought to withhold AI prompt transcripts and AI-generated analyses, arguing that they were either confidential communications seeking legal advice or work product prepared in anticipation of litigation. The court rejected both arguments.

Attorney-client privilege protects confidential communications between attorney and client made for the purpose of obtaining legal advice. The doctrine is narrowly construed because it withholds otherwise relevant evidence from the truth-seeking process. Communications with third parties generally waive privilege unless the third party is necessary to facilitate the provision of legal advice and operates within a defined agency relationship. 

The defendant argued that the AI platform should be treated analogously to standard software tools — akin to word processors or cloud-based document systems. The court rejected that comparison. Unlike a passive software interface, the AI provider collected user inputs and outputs, retained data, and reserved rights to use information under its published policies. Critically, the court emphasized that the relationship lacked the type of “trusting human relationship” or fiduciary obligation that underpins privilege. In that context, the defendant could not demonstrate a reasonable expectation of confidentiality. 

The doctrinal analysis was conventional. The implication is not.  

Work Product and the Protection of Counsel’s Mental Processes 

The court likewise rejected application of the work product doctrine. While work product protection is broader than attorney-client privilege, it is designed to safeguard the mental impressions, conclusions, and legal strategies of counsel. 

Materials may qualify even if prepared by non-lawyers, but only where they are created at the direction of counsel and serve to protect or reflect counsel’s litigation strategy. In Heppner, the AI-generated materials were created by the defendant on his own initiative. They did not reflect counsel’s mental processes, nor were they prepared at counsel’s direction in anticipation of litigation. Under those circumstances, the doctrine did not apply. 

Importantly, the ruling does not foreclose the possibility that AI-generated materials could qualify for protection in different circumstances. But where communications occur voluntarily, outside counsel’s direction, and without reasonable confidentiality safeguards, protection will not attach. 

Corporate Legal Departments and Decentralized AI Use 

For corporate legal teams, the ruling underscores a governance reality that is often overlooked: AI adoption within enterprises is rarely centralized. Legal departments may implement structured policies, but business units, compliance teams, and operational leaders frequently experiment independently. 

Managers may use AI to summarize internal disputes before elevating them to legal. Compliance teams may test regulatory interpretations. Finance leaders may model exposure scenarios. These interactions often occur before counsel is formally engaged and without clear instruction regarding confidentiality or retention. 

From a privilege standpoint, subject matter alone does not create protection. A conversation about legal risk is not privileged unless it is a confidential communication with counsel for the purpose of seeking legal advice and conducted in a manner that preserves a reasonable expectation of confidentiality. When AI tools are used outside that structure, the resulting transcripts may not be protected. If litigation follows, those transcripts may be discoverable. 

The HR and Executive Governance Gap 

The most immediate exposure may arise within Human Resources, but it can extend throughout the organization.

Imagine an HR director responding to a discrimination complaint. Before contacting counsel, she uses an AI platform to draft a preliminary investigative framework. She prompts the system to suggest how to evaluate the allegations, how to document performance deficiencies defensibly, and how to structure interview summaries. The system generates suggested language and analytical approaches. Those interactions are stored within the AI provider’s environment.

If litigation later arises, those transcripts may become part of the evidentiary landscape. They may reflect exploratory reasoning, internal concerns, or hypothetical justifications that would never appear in a final memorandum prepared under counsel’s supervision. Even if the ultimate employment decision was lawful and appropriately vetted, the AI interaction itself may shape how the narrative is constructed in discovery. 

Now extend that analysis to the C-suite. 

A CFO uses an AI tool to model financial exposure in connection with a pending regulatory inquiry. A CEO asks an AI system to summarize potential litigation risks associated with a strategic acquisition. A board member explores how to structure a workforce reduction to mitigate shareholder claims. In each case, the subject matter is inherently legal in dimension, yet the interaction occurs independently of counsel. 

Under settled doctrine, those communications are not privileged merely because they concern legal risk. Privilege attaches to structured communications with counsel. When executives independently engage AI systems to analyze legal exposure, they may generate discoverable artifacts that precede — and potentially complicate — formal legal advice. 

As generative tools migrate from experimentation to executive decision-making infrastructure, the line between business analysis and legal advice may blur operationally. It does not blur doctrinally. Courts are unlikely to collapse that distinction. 

Parallel Considerations for Law Firms 

Law firms face related considerations. Attorneys are increasingly using AI tools to summarize deposition transcripts, outline arguments, and test analytical approaches. While enterprise deployments may include contractual confidentiality protections, privilege and work product protection ultimately turn on how and why the tool is used. 

Materials prepared at counsel’s direction for litigation strategy are more likely to qualify for protection than independent exploratory prompts conducted outside structured workflows. Courts will continue to examine purpose, direction, and confidentiality — not technological sophistication. 

The novelty of the interface does not alter the governing standard. 

When Evidentiary Doctrine Shapes Discovery Reality 

Although Heppner is an evidentiary ruling, evidentiary determinations inevitably shape discovery obligations. If AI transcripts and outputs are not privileged by default, they may constitute discoverable electronically stored information. 

That possibility introduces practical questions that extend beyond doctrine. Where are AI transcripts stored? How long are they retained? Do provider policies permit retention, model training, or third-party disclosures? Can these materials be defensibly preserved and collected if litigation arises? 

As I noted in my analysis of AI chat logs in In re OpenAI, Inc. Copyright Infringement Litigation, courts are increasingly treating new forms of digital interaction as ordinary evidence. Privilege analysis now joins that broader judicial engagement. The intersection of evidentiary doctrine and data governance is no longer theoretical.

Governance as the Differentiator 

None of this suggests that organizations or law firms should retreat from generative AI. The efficiencies and analytical benefits are substantial. However, privilege attaches to structure, direction, and confidentiality — not to convenience or subject matter. 

As AI becomes embedded across legal, HR, compliance, and executive workflows, governance must mature accordingly. That governance cannot be abstract. It requires clarity around when AI use falls under legal supervision, when interactions must occur within enterprise-controlled environments that preserve reasonable expectations of confidentiality, and when sensitive decision-making processes demand documented direction by counsel. It also requires coordination between legal and HR functions, defined retention parameters for AI-generated transcripts, and an understanding of how provider policies may affect confidentiality analysis. 

In practice, this means distinguishing between productivity uses of AI and legally sensitive uses; ensuring that enterprise deployments are aligned with confidentiality expectations; and integrating AI artifacts into existing preservation and litigation hold protocols where appropriate. Absent that structure, organizations risk treating generative tools as informal aides while courts treat them as formal records. 

Courts are not resisting generative AI. As Judge Rakoff noted, AI’s novelty does not exempt it from longstanding legal principles. Artificial intelligence, at least for now, remains a third party in the eyes of the court. Organizations that embed it within structured governance workflows, rather than layering it informally across sensitive decision-making processes, will be better positioned to harness its benefits without unintended evidentiary consequences.  

About the Author 

Marco Nasca is the Chief Innovation Officer at Lineal Services and a 2001 graduate of DePaul University College of Law. For more than two decades, he has worked at the forefront of eDiscovery and legal technology, advising corporations and law firms on the defensible application of technology in complex litigation, investigations, and regulatory matters. His work focuses on structured data, emerging technologies, and the practical implications of evolving judicial doctrine. 

About Lineal 

Lineal is a global provider of legal technology and services, helping law firms and corporate legal departments navigate complex data challenges across eDiscovery, investigations, and information governance. Built around a data-centric approach, Lineal combines advanced technology with deep subject-matter expertise to deliver scalable, defensible solutions across both structured and unstructured data. Through its Amplify platform and global service teams, Lineal enables clients to gain faster insight, control costs, and adapt to the evolving demands of modern discovery. 

Lineal is not a law firm. The information contained in this article is provided for general informational purposes only and should not be construed as legal advice.