What Law Firms Need to Know in 2026 About AI and Legal Privilege
Generative AI has become a transformative force across the legal sector, driving efficiency, assisting with research, and accelerating document drafting. But as tools like ChatGPT become more capable and more deeply embedded in daily workflows, law firms face a critical question: does the use of public AI systems risk waiving legal privilege?
Recent developments across the industry, spanning legal scholarship, bar-association commentary, policy updates, and statements from major AI providers, show that the answer is more nuanced than many lawyers expect. Below is a clear breakdown of what firms need to know.
1. Client Communications with ChatGPT Are Not Privileged
Attorney–client privilege protects communications made for the purpose of seeking or receiving legal advice. But privilege attaches only when the communication is with a licensed attorney.
That means anything a client types into ChatGPT is not privileged, no matter how “legal” the question may be. As Bloomberg Law notes, prompts submitted to ChatGPT “aren’t privileged because the initial communication isn’t made with an attorney,” regardless of the tool’s security features.
OpenAI has also confirmed this directly: conversations with ChatGPT do not carry confidentiality protections comparable to attorney–client privilege.
2. AI Chat Data May Be Stored, Retrieved, and Even Subpoenaed
Another major risk: the data you submit to public AI systems may be retained by the provider, even if you believe you’ve deleted it.
In recent litigation, courts have already ordered OpenAI to preserve user chat logs, including deleted conversations, which has serious implications for discoverability. Lexology reports that OpenAI could “be required to produce” user content in legal proceedings, meaning sensitive information submitted to the system can be treated like any other third-party business record.
Moreover, as ComplexDiscovery highlights, all interactions with ChatGPT are digitally recorded and potentially accessible through subpoenas.
This stands in stark contrast to privileged attorney communications, which are generally shielded from compelled disclosure.
3. OpenAI’s Usage Policies Now Prohibit Tailored Legal Advice
In late 2025, OpenAI introduced new usage policies that explicitly prohibit the system from providing individualized legal advice without licensed professional oversight. These changes reinforce a critical point:
ChatGPT is not a lawyer and cannot be relied upon as one.
The Blizzard & Zimmerman summary of the updated policy makes this explicit: users cannot expect ChatGPT to analyze case‑specific facts or provide strategic legal guidance.
This policy shift does not create privilege—but it underscores the platform’s limitations and the need for human legal judgment.
4. What About Lawyers Using AI Tools? Privilege Is Not Automatically Lost
The more nuanced issue is whether lawyers risk waiving privilege when they use generative AI to support client work.
Here, the answer is encouraging—but bounded.
As Bloomberg Law explains, using third‑party technology (including cloud‑based AI tools) does not automatically waive privilege. Courts have long accepted that lawyers can use external tools as long as they take “reasonable precautions” to safeguard confidentiality.
However, Frantz Ward LLP cautions that using a public generative AI platform without proper safeguards—such as by inputting identifiable client information—can constitute disclosure to a third party and risk waiving privilege.
In other words:
The tool is not the problem. The way the tool is used is.
5. Why Public AI Tools Fail to Meet Legal‑Ethical Safeguards
Regardless of their intelligence, generative AI systems lack several attributes fundamental to legal practice:
- No duty of confidentiality
- No ethical obligations
- No privilege protections
- No malpractice liability
As Lexology notes, even the most advanced models cannot replicate the safeguards inherent to human legal professionals, and disclosures made through AI tools may have few (or no) legal remedies.
This makes unguarded use of public AI systems particularly risky for legal practice.
6. Practical Guidance for Law Firms
To navigate privilege concerns while benefiting from AI efficiencies, firms should adopt clear policies and governance frameworks:
Do:
✔ Use enterprise or “zero‑retention” AI platforms with contractual confidentiality protections
✔ Strip identifying information before using GenAI for drafting or research (see the sketch after these lists)
✔ Update internal policies and include AI in confidentiality training
✔ Inform clients about the firm’s approach to AI use
✔ Monitor evolving ABA and state‑bar guidelines
Do Not:
✘ Input client names, facts, strategy, or confidential documents into public AI tools
✘ Assume AI‑based document review or drafting is secure without proper controls
✘ Treat ChatGPT output as legal advice
✘ Let clients rely on ChatGPT for case‑specific questions
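For the “strip identifying information” item above, the sketch below illustrates one minimal approach: a simple Python pre-processing step that scrubs known client terms and common identifiers from a prompt before it is sent to any public model. The client names, patterns, and placeholder labels are hypothetical assumptions for illustration, not a vetted redaction standard.

```python
import re

# Illustrative only: in practice the term list would come from the firm's
# matter-management system, and real matter data would go through dedicated
# redaction tooling plus human review before reaching any external model.
CLIENT_TERMS = ["Acme Holdings", "Jane Doe", "Matter 2024-0113"]  # hypothetical

# Generic patterns for common identifiers (emails, US-style phone numbers).
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.\w{2,}"), "[REDACTED_EMAIL]"),
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[REDACTED_PHONE]"),
]

def redact(prompt: str) -> str:
    """Replace known client terms and common identifiers with placeholders."""
    for term in CLIENT_TERMS:
        prompt = re.sub(re.escape(term), "[REDACTED]", prompt, flags=re.IGNORECASE)
    for pattern, placeholder in PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

if __name__ == "__main__":
    draft = ("Summarise the indemnity clause in the Acme Holdings supply "
             "agreement and send it to jane.doe@example.com")
    print(redact(draft))
    # -> Summarise the indemnity clause in the [REDACTED] supply
    #    agreement and send it to [REDACTED_EMAIL]
```

Even with pre-processing of this kind, the safest default remains the one stated above: keep client names, facts, and strategy out of public AI tools entirely.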
Conclusion: Privilege Isn’t Dead—But It Can Be Broken
The rise of generative AI represents a defining moment for the legal profession. The technology offers extraordinary potential, but it also poses real risks—especially where privileged communications are concerned.
- Clients using ChatGPT for legal issues have no privilege.
- Lawyers using ChatGPT without safeguards may inadvertently waive privilege.
Law firms that proactively manage these risks through policy, training, and secure technology choices can harness the benefits of AI while protecting one of the profession’s most fundamental obligations: preserving the confidentiality and trust of their clients. Speak to us today and we will help you draft an AI Governance Policy at no cost.

