On June 22, 2023, Judge P. Kevin Castel of the Southern District of New York sanctioned attorneys Steven Schwartz and Peter LoDuca of the firm Levidow, Levidow & Oberman for submitting a brief containing six entirely fabricated case citations generated by ChatGPT. The case – Mata v. Avianca – became an instant landmark, not because AI hallucination was a new phenomenon, but because it forced the legal profession to confront what happens when attorneys outsource legal reasoning to systems they do not understand and cannot verify.

Mata v. Avianca, however, was the wrong crisis. The hallucination problem is solvable through verification. The real crisis – the one the legal profession is only beginning to reckon with – is what happens to attorney-client privilege when lawyers type confidential client information into AI systems owned and operated by third parties with their own data retention policies, training pipelines, and metadata collection practices.

A 2025 survey by the American Bar Association found that 58% of attorneys reported using generative AI tools in their practice. Of those, 41% acknowledged having entered client-specific information into AI systems. Only 19% said their firm had a formal AI use policy. The legal profession is running a massive, uncontrolled experiment with the foundational principle of its professional identity.

Privilege, Confidentiality, and the Third-Party Problem

Attorney-client privilege and the duty of confidentiality are related but distinct legal protections. Privilege is an evidentiary doctrine: it prevents compelled disclosure of communications between attorney and client made for the purpose of obtaining legal advice. Confidentiality is an ethical obligation: Rule 1.6 of the ABA Model Rules of Professional Conduct prohibits lawyers from revealing information relating to the representation of a client unless the client gives informed consent, the disclosure is impliedly authorized to carry out the representation, or an exception under Rule 1.6(b) applies.

The critical distinction for AI use is the scope of each protection. Privilege is narrow – it covers communications, not underlying facts, and it can be waived. Confidentiality is broad – it covers all information relating to the representation, regardless of source, and the duty persists even after the representation ends.

Both protections are threatened by AI tool use, but the mechanisms differ.

Privilege Waiver Through Disclosure

The attorney-client privilege is waived when the privileged communication is disclosed to a third party outside the attorney-client relationship. The question is whether entering client information into an AI system constitutes disclosure to a third party.

The answer depends on the relationship between the attorney and the AI provider. If the AI provider is a “service provider” or “agent” of the attorney – comparable to a legal secretary, a document review vendor, or an e-discovery platform – then the disclosure falls within the Kovel doctrine (United States v. Kovel, 296 F.2d 918 (2d Cir. 1961)), which extends privilege to agents necessary for the attorney’s representation.

But the Kovel doctrine requires that the third party’s involvement be necessary for the attorney’s work and that the relationship be structured to maintain confidentiality. An attorney using ChatGPT’s free tier – where, per OpenAI’s data practices, conversations may be used for model training – is not in a structured confidential relationship. The argument for privilege protection is weak at best. As one ethics scholar put it: if you tell your secrets to a system designed to learn from them, you have not maintained confidentiality.

The Confidentiality Breach

Even if privilege were somehow preserved, the duty of confidentiality under Rule 1.6 operates independently. A lawyer who enters client information into an AI system must ensure that the disclosure is either authorized by the client or falls within an exception to Rule 1.6.

ABA Formal Opinion 477R (2017) addressed technology-based confidentiality obligations, concluding that lawyers must make “reasonable efforts” to prevent unauthorized access to client information when using technology. The opinion identified several factors for weighing those efforts: the sensitivity of the information, the likelihood of disclosure if additional safeguards are not employed, the cost and difficulty of implementing safeguards, and the extent to which the safeguards adversely affect the lawyer’s ability to represent clients.

Applied to AI, this analysis requires lawyers to evaluate the data handling practices of every AI tool they use – a task that requires understanding the technical architecture of systems like those analyzed in the AI provider privacy scoreboard. The distinction between a provider that trains on user data and one that does not is dispositive. A lawyer who enters privileged information into a training-enabled system is harder to defend than one who uses a system with contractual guarantees against training use.

What the State Bars Have Said

The response from state bar ethics committees has been rapid but inconsistent. By early 2026, over 35 state bars had issued formal opinions or guidance on attorney AI use. The emerging consensus, such as it is, coalesces around several principles:

California (Practical Guidance, 2024): Attorneys must exercise competence in understanding AI tools, must not reveal confidential information without client consent, and must supervise AI-generated work product. The guidance emphasized that AI tools are not “other lawyers” and do not benefit from work product protection.

New York (NYSBA Task Force Report, 2024): Recommended that attorneys treat AI outputs as drafts requiring substantive review, disclose AI use to clients where material, and evaluate the data handling practices of AI vendors. The report specifically flagged the risk of “data commingling” – where one client’s information in a training dataset could influence outputs for another client.

Florida (Ethics Opinion 24-1, 2024): Required attorneys to obtain informed consent from clients before using AI tools with client information, to ensure adequate data protection, and to maintain competence in AI technology. The opinion took a notably conservative position, suggesting that the safest approach is to avoid entering any client-identifying information into AI systems.

Texas (Ethics Opinion 709, 2024): Held that the duty of competence extends to understanding the confidentiality implications of AI tools, that attorneys must ensure AI tool use is consistent with the duty of confidentiality, and that attorneys must supervise and verify AI-generated work product.

The inconsistency across jurisdictions creates a compliance burden for multi-state practitioners and national law firms. A practice that is acceptable in one state may violate ethics rules in another. And because disciplinary enforcement is state-based, the consequences of getting it wrong vary widely.

The Mata v. Avianca Fallout

The sanctions in Mata v. Avianca were modest – a $5,000 penalty imposed jointly on the attorneys and the firm – but the reputational damage was severe. More importantly, the case triggered a cascade of court-level responses. By early 2026, over 30 federal district courts had adopted standing orders or local rules addressing AI use in litigation.

Some courts require disclosure of AI use in briefs and filings. Others require attorneys to certify that AI-generated content has been verified for accuracy. One early standing order in the Northern District of Texas requires attorneys to certify either that no generative AI was used in drafting a filing or, if it was, that every citation has been independently verified.

These judicial responses address the hallucination problem – the fabrication of legal authorities. They do not address the confidentiality problem. A brief drafted with AI that contains only accurate, verified citations satisfies the court’s verification requirement. But if confidential client information was entered into an AI system during drafting, the ethical breach has already occurred, regardless of the accuracy of the final product.

The profession has focused on the visible risk (fake cases in court filings) while largely ignoring the invisible risk (confidential data flowing into training pipelines). The invisible risk is far larger in scope and far harder to remediate.

Firm-Level Responses: The Emerging AI Governance Frameworks

Large law firms have responded to the AI challenge with varying degrees of sophistication.

Restrictive approach: Some firms (including several AmLaw 50 firms) have banned general-purpose external AI tools entirely in favor of dedicated systems running on firm-controlled infrastructure. Allen & Overy’s partnership with Harvey AI was an early example; the system processes client data within an environment subject to the firm’s own data governance policies. Davis Polk, Latham & Watkins, and Clifford Chance have deployed similar internal systems.

Managed approach: Other firms permit the use of approved AI tools under specific conditions – typically requiring Enterprise-tier subscriptions with contractual data protection guarantees, prohibiting the entry of client-identifying information, and requiring review and approval by a designated AI governance committee.

Unmanaged approach: A significant number of firms, particularly smaller practices, have issued no formal guidance. In these firms, individual attorneys make their own decisions about AI tool use, creating an uneven risk landscape across the practice.

The managed approach faces a specific practical challenge: how do you prevent attorneys from entering client information into AI systems when the boundary between “client information” and “general legal question” is often blurred? An attorney researching a legal theory may describe the factual scenario in enough detail to identify the client, even without using the client’s name. The PII stripping approach – where identifying information is tokenized and replaced before the prompt reaches the AI system – offers a technical solution, but adoption among law firms remains negligible.
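To make the approach concrete, here is a minimal Python sketch of prompt-side tokenization. It is illustrative only: the regex patterns, token format, and function names are assumptions rather than any vendor’s API, and regexes catch only structured identifiers – a production system would layer a trained named-entity model on top to catch client names and distinctive fact patterns.

```python
import re
from typing import Dict, Tuple

# Illustrative patterns only - structured identifiers that regexes can
# catch reliably. Client names and fact patterns need an NER model.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def tokenize_pii(prompt: str) -> Tuple[str, Dict[str, str]]:
    """Swap identifiers for opaque tokens; the mapping stays local."""
    token_map: Dict[str, str] = {}
    counts: Dict[str, int] = {}
    for kind, pattern in PATTERNS.items():
        def _sub(m: "re.Match[str]", kind: str = kind) -> str:
            counts[kind] = counts.get(kind, 0) + 1
            token = f"<{kind}_{counts[kind]}>"
            token_map[token] = m.group(0)
            return token
        prompt = pattern.sub(_sub, prompt)
    return prompt, token_map

def restore_pii(text: str, token_map: Dict[str, str]) -> str:
    """Re-insert the original values into the model's response locally."""
    for token, original in token_map.items():
        text = text.replace(token, original)
    return text
```

Under this scheme the AI provider sees only a placeholder like `<EMAIL_1>` where an address appeared; the token map that reverses the substitution never leaves the firm.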

The Malpractice Dimension

Beyond ethical violations, AI use creates malpractice exposure. Legal malpractice requires a duty (the attorney-client relationship), a breach (failure to meet the standard of care), causation, and damages.

Two malpractice scenarios are now clearly foreseeable:

Hallucination-based malpractice: An attorney relies on AI-generated legal research without verification, a fabricated case is cited, and the client suffers adverse consequences. After Mata v. Avianca, this risk is well-understood, and the standard of care clearly requires verification of AI-generated citations.

Confidentiality-breach malpractice: An attorney enters privileged client information into an AI system, that information is subsequently exposed through a data breach, training data extraction, or model memorization (the model memorization problem), and the client suffers damages from the disclosure. This scenario has not yet produced reported case law, but the elements are all present, and the risk is growing as AI training data extraction techniques become more sophisticated.

Malpractice insurance carriers are watching. In 2025, several major legal malpractice insurers began including AI-specific questions in their renewal applications, asking firms about AI use policies, training programs, and data handling practices. Firms without formal AI governance may face higher premiums or coverage limitations. The insurance market, as often happens, may drive behavioral change faster than regulators.

What Ethical AI Use Looks Like for Lawyers

Given the current regulatory and ethical landscape, the minimum defensible position for attorney AI use requires:

Before Use

  • Client consent: Obtain informed consent from clients before using AI tools with their information. The consent should specify which tools will be used, what safeguards are in place, and what data handling commitments the AI provider has made.
  • Vendor assessment: Evaluate the AI provider’s data practices with the same rigor applied to e-discovery vendors or cloud storage providers. This means reading the terms of service, understanding the training policy, and confirming a business associate agreement (BAA) or equivalent data protection agreement where available. The framework in the AI compliance checklist applies directly.
  • Tool selection: Use Enterprise-tier or API-tier products with contractual data protection guarantees. Consumer-tier AI products should never be used with client information.

During Use

  • De-identification: Remove or tokenize all client-identifying information before entering it into AI systems. Use hypothetical fact patterns rather than actual case details where possible.
  • Prompt hygiene: Treat every prompt as a potential exhibit. If a prompt would be problematic if produced in discovery, it should not be sent to an AI system; a minimal outbound check is sketched after this list.
  • Verification: Every AI-generated legal citation, factual claim, and analytical conclusion must be independently verified against primary sources.
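A complementary sketch, under the same assumptions as the tokenization example above: a last-line gate that refuses to transmit any prompt still containing a structured identifier. The pattern table and function name are illustrative, not any vendor’s interface.

```python
import re

# Same illustrative patterns as in the tokenization sketch above.
BLOCKLIST = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def assert_clean(prompt: str) -> str:
    """Raise rather than let an identifier reach an external endpoint."""
    for kind, pattern in BLOCKLIST.items():
        if pattern.search(prompt):
            raise ValueError(f"outbound prompt still contains a {kind}")
    return prompt
```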

After Use

  • Documentation: Maintain records of AI tool use, including what tools were used, for what purpose, and what safeguards were applied. This documentation supports the “reasonable efforts” standard under Rule 1.6; a minimal record schema is sketched after this list.
  • Monitoring: Track developments in AI ethics opinions, court orders, and bar guidance. The landscape is changing quarterly.
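As a sketch of what such records might look like – the schema below is hypothetical, with field names invented for illustration rather than drawn from any bar rule – an append-only log of one JSON line per AI use is enough to reconstruct who used what tool, for what purpose, and with which safeguards:

```python
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class AIUseRecord:
    """One audit entry per AI interaction; all field names hypothetical."""
    matter_id: str                # internal matter number, never a client name
    tool: str                     # e.g. "vendor-enterprise-tier"
    purpose: str                  # research, drafting, summarization, ...
    safeguards: list = field(default_factory=list)  # e.g. ["pii-tokenized"]
    client_consent: bool = False  # informed consent on file for this use
    verified_by: str = ""         # attorney who verified the output
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_record(record: AIUseRecord, path: str = "ai_use_log.jsonl") -> None:
    """Append-only JSON Lines log; supports a 'reasonable efforts' showing."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```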

The deeper question is whether these measures are sufficient, or whether the zero-knowledge architecture approach – where the AI processing system itself is structurally incapable of retaining or learning from client data – is the only architecture that truly satisfies the duty of confidentiality. When the infrastructure guarantees that nothing persists, the burden of ensuring confidentiality shifts from individual attorney behavior to system design. For a profession built on confidentiality, that shift may prove essential.

The Stealth Cloud Perspective

Attorney-client privilege is not a policy preference – it is the oldest privilege for confidential communications known to the common law. When lawyers use AI systems that retain, log, or train on client data, they are not just creating compliance risk; they are undermining the structural foundation of legal representation. The only architecture that fully honors the duty of confidentiality is one where client data cannot persist, because the system was built to make persistence impossible.