The recent rise in the prevalence of AI chatbots and tools has generated significant conversation in the data privacy world about the risks these tools pose to compliance with data privacy laws and to the confidentiality, integrity, and availability of information entered into them.
Bringing the conversation to the legal field, a New York federal court just issued the first ruling to tackle head-on whether conversations with a public AI chatbot can be protected by attorney–client privilege or the work product doctrine. The short answer: they cannot.
Bottom line: If you or anyone at your company pastes legal advice, investigation materials, or other sensitive information into a public AI tool, you are opening the henhouse door and handing your privileged information directly to the fox.
Meet the Fox
In case you missed it, here is what happened: A target of a federal investigation used a public AI platform to create strategy-focused “reports” about the facts and law of his case. Federal agents later seized electronic devices containing those AI exchanges during a search.
The defendant claimed privilege and work product protection, arguing that because he eventually shared the AI outputs with his lawyers (and had fed in information he had gotten from his lawyers), the materials should be shielded. The court said no, for three straightforward reasons:
- The AI is not your lawyer. There was no attorney–client relationship between the defendant and the AI platform. Privilege protects confidential communications with your attorney. Chatbots do not qualify.
- There was no real expectation of confidentiality. The AI provider’s terms of service allowed the company to collect user inputs and outputs, use them to train its models, and even disclose them to third parties, including regulators. Under those conditions, nobody can reasonably claim they expected the conversation to stay private.
- The chats were not about getting legal advice from counsel. The defendant initiated these conversations on his own. The AI tool itself disclaimed giving legal advice. That is a far cry from the kind of attorney-directed communication privilege is designed to protect.
The court also rejected the work product argument. Work product protection is meant to shield a lawyer’s thinking and strategy. These documents were created by the client, on his own, using a public tool, not prepared by or at the direction of counsel.
The court went on to make it clear that even information that starts out privileged loses its protection once it is pasted into a public chatbot. In other words, the moment the fox is in the henhouse, the damage is done.
The Fox Has a Bigger Appetite Than You Think
You do not need to face a criminal case for this ruling to hit home.
While the court’s findings were narrowly tied to the facts of the case, its rationale for applying long-standing privilege principles to AI naturally extends to other common legal scenarios. Litigation discovery rules, regulatory investigations, enforcement actions, and internal investigations all depend on keeping certain information confidential and privileged.
You Left the Henhouse Door Open
For attorney-client privilege to apply, there must be an actual attorney-client relationship; the communication must be made for the purpose of obtaining legal advice; and the parties must maintain a reasonable expectation of confidentiality.
Typing a question into a public chatbot or Generative AI tool checks none of those boxes, no matter how legal the subject matter feels.
Under the court’s framework, every AI chat about a legal issue that happens outside the attorney-client relationship is a potential exhibit waiting to be produced.
And employees are having these conversations with chatbots every day, from asking what compliance obligations may apply to certain activities, to preparing terms for inclusion in internal memos and policies, to gut-checking whether a security incident rises to the level of a data breach. None of these conversations involve an attorney, none carry a reasonable expectation of confidentiality, and under the court’s reasoning, all of them could end up as exhibits in a lawsuit, a regulatory proceeding, or worse. Every one of those conversations is another gap in the fence.
Once the Fox Is Inside, You Cannot Walk It Back Out
Privilege protection is not a permanent label that follows information wherever it goes. The moment privileged content is shared with a public AI tool, that act of sharing constitutes a waiver of privilege, making the information fully discoverable by adversaries (such as individuals impacted by a data breach), regulators, and opposing parties.
The court’s waiver analysis leads to an uncomfortable conclusion: any person who copies privileged material into a public AI tool, whether to summarize, brainstorm, or reorganize, is stripping that material of its protection in real time.
It does not matter whether the content is an investigation report, a legal memo, or a lawyer’s negotiation strategy. If it goes into a chatbot whose terms allow the provider to access, train on, or disclose user data, the privilege may already be gone by the time the user hits ‘enter.’ In each case, the user may believe the interaction is harmless, but under this ruling, those exchanges could be fair game in litigation, a regulatory review, or a government investigation. The fox is in the henhouse before anyone realizes the door was open.
Paying for the Premium Coop Does Not Keep the Fox Out
In this case, the court zeroed in on the AI provider’s specific privacy policy, which allowed it to collect user inputs and outputs and use that data to train its models and disclose it to third parties. The court’s reasoning is not limited to free or open-access tools.
Following its logic, any AI platform, whether free, paid, or commercially licensed, could present the same problem if its terms of service reserve the right to review, train on, or disclose user data. That means paying for a premium subscription or even a corporate license does not automatically fix the confidentiality problem. A fancier henhouse is still vulnerable if the door is unlocked.
The Growing AI Discovery Risk
Going forward, expect opposing parties and regulators to ask pointed questions about AI usage in legal processes, such as depositions, custodian interviews, regulatory investigations, audits, or subpoena negotiations. Being asked during a regulatory investigation, “Did you use any AI tools to determine your data privacy compliance obligations or prepare your internal policies?” or receiving a subpoena that specifically demands “all communications with AI-based tools, including prompts, inputs, and outputs, related to the investigation, mitigation, or remediation of the data breach” are no longer hypothetical scenarios. They are the natural next step after this ruling.
Three Ways to Keep the Fox Out of Your Henhouse
- Set Clear Rules and Explain Why They Matter. Create or update a straightforward rule: no one uses public, consumer-grade AI tools for anything related to legal advice, attorney-provided materials, investigations, audits, disputes, or trade secrets. For example: “Employees may not input, paste, upload, or otherwise transmit any legal advice, attorney communications, investigation materials, draft policies, audit findings, or trade secrets into any AI tool that is not expressly approved by the Legal Department.” A clear, concrete rule is far more effective than a vague directive to “use caution.”
- Build a Secure Coop and Put Lawyers on Guard. Deploy an enterprise AI solution that contractually and technically prevents model training on your data, blocks provider access and disclosure, restricts the use of your information in outputs generated for other customers or clients (and vice versa, so you do not receive third parties’ confidential information and face potential misappropriation claims), and keeps all interactions in your controlled environment. Review and negotiate your AI vendor agreements to ensure they include robust confidentiality, transparency, data segregation, and audit rights. But do not stop at the technology. The court hinted that the outcome might have been different if a lawyer had directed the defendant’s use of AI. That means attorney direction is not just a best practice; it may be the key to preserving privilege. Require that any AI-assisted work involving legal content happen only under counsel’s direction, within a documented workflow built for privileged communications.
- Train Your Teams to Check the Latch. The risk this case exposed is not just about what AI produces. It is also about what employees put into AI platforms. Every prompt, every uploaded document, and every pasted paragraph is a potential disclosure to a third party, regulatory agency, or investigator. Build a “pause before you paste” culture: before anyone touches a chatbot, they should ask whether what they are about to type or upload is privileged, confidential, or sensitive. If the answer is yes, or even maybe, stop and consult legal first. Make AI governance a part of your regular compliance training so that the rules are reinforced, not just read once during onboarding. Enact strict restrictions on employees using systems or tools that are not approved by your IT and legal departments (such as public AI tools on company or personal devices). Run periodic tabletop exercises: short, scenario-based sessions in which teams walk through realistic situations drawn from this ruling’s facts.
Who Is Guarding Your Henhouse?
This ruling did not break new legal ground. It applied longstanding privilege principles to a new technology and reached the conclusion most lawyers and data privacy professionals have been discussing for some time. But that is exactly what makes it so important. The court confirmed that public AI tools are third parties, that sharing information with them can waive privilege just as easily as forwarding a confidential email to a stranger, and that no amount of after-the-fact attorney involvement can drive the fox back out once it is inside.
Until more courts weigh in, the playbook is clear: set enforceable rules that explain the consequences, build secure AI channels with real confidentiality protections and lawyers on guard, instill a “pause before you paste” culture, and train your teams so that no one ends up like the defendant in this case, sitting on a pile of AI chat transcripts that just became the other side’s best evidence.
Companies that act now can keep reaping AI’s benefits. Those that do not are gambling that their employees will never leave the henhouse door open, and this ruling shows exactly what happens when they do.