Two AI companies. Two of the most capable large language models available. Two different corporate philosophies about what happens to the words you type into a chat window.

OpenAI, valued at $300 billion as of early 2026, operates ChatGPT with approximately 300 million weekly active users. Anthropic, valued at approximately $60 billion, operates Claude with a smaller but rapidly growing user base concentrated in enterprise and developer markets. Both companies produce frontier-capability language models. Both companies collect user data. The differences lie in how much they collect, how long they keep it, what they do with it, and how transparent they are about each decision.

For the 300+ million people using these tools to draft legal documents, process medical information, brainstorm business strategy, and handle personal communications, the privacy architecture of their chosen AI provider is not a theoretical concern. It is a daily operational risk.

Feature Comparison

| Criteria | ChatGPT (OpenAI) | Claude (Anthropic) |
|---|---|---|
| Consumer Data Retention | 30 days (content); up to 36 months (account/usage data) | 30 days (feedback/safety); content not retained after response generation (per current policy) |
| Training on Consumer Data | Yes, by default (opt-out available via settings) | No, by default (consumer conversations not used for training) |
| Training on API Data | No (API data not used for training unless customer opts in) | No (API data not used for training by default) |
| Enterprise Data Isolation | ChatGPT Enterprise/Team: data not used for training; SOC 2 Type 2 | Claude for Business/Enterprise: data not used for training; SOC 2 Type 2 |
| Data Processing Location | US-based infrastructure (Microsoft Azure) | US-based infrastructure (AWS, GCP) |
| Human Review of Prompts | Yes – safety review, red team evaluation, content moderation | Yes – safety review, trust & safety evaluation |
| Opt-Out Mechanism | Settings toggle; API; ChatGPT Team/Enterprise | Default no-training; API; Business/Enterprise tiers |
| Data Deletion Request | Available via privacy portal; 30-day processing | Available via privacy request; processing timeline varies |
| Third-Party Sharing | Microsoft (infrastructure partner); service providers | AWS/GCP (infrastructure); service providers |
| Privacy Regulation Compliance | GDPR (EU DPA appointed); CCPA; limited jurisdiction-specific | GDPR; CCPA; compliance program expanding |
| Transparency Reports | Annual transparency report (government requests) | Published usage policy; transparency in safety documentation |
| Client-Side Encryption | None – prompts arrive in plaintext at OpenAI servers | None – prompts arrive in plaintext at Anthropic servers |

Deep Analysis

OpenAI: Scale, Ambition, and the Training Data Question

OpenAI’s data practices reflect the tension between building the most capable AI and respecting user privacy. The company’s position has evolved substantially since ChatGPT’s November 2022 launch, driven by regulatory pressure (Italy’s temporary ban in April 2023, GDPR enforcement actions across Europe), competitive pressure (from Anthropic and open-source models), and enterprise customer demands.

Consumer tier: training by default. ChatGPT’s consumer product (free and Plus tiers) uses conversation data for model improvement by default. When a ChatGPT Plus subscriber drafts a sensitive business email, discusses a medical condition, or brainstorms a legal strategy, that conversation may be reviewed by OpenAI employees (for safety evaluation and quality assessment) and may contribute to training data for future model iterations.

OpenAI introduced an opt-out mechanism in April 2023: users can disable “Chat History & Training” in settings, which prevents conversations from being used for model training. Even with training disabled, however, conversations are retained for 30 days for safety monitoring and abuse prevention, and during that window content may be reviewed by human safety raters.

The practical impact: the default setting for the product used by hundreds of millions of people feeds those users’ conversations into the model training pipeline. Users who do not know about the opt-out (likely the majority) or who never enable it contribute their data by default. This is not a privacy policy designed for privacy – it is a data collection policy with a privacy escape hatch.

API tier: no training by default. OpenAI’s API (used by developers and enterprises to build applications on GPT-4 and successors) does not use customer data for training by default. API requests are retained for 30 days for abuse monitoring, then deleted. This bifurcation – consumer data is training material, API data is not – reveals the commercial logic: consumer users pay with data as well as money; enterprise users pay enough money to keep their data.

ChatGPT Enterprise and Team. These tiers, launched in August 2023 and January 2024 respectively, provide stronger guarantees: no training on conversation data, SOC 2 Type 2 certification, data encryption at rest and in transit, admin controls for data retention, and an enterprise-grade privacy agreement. Enterprise pricing starts at approximately $60 per user per month.

For organizations handling genuinely sensitive data, Enterprise is the minimum acceptable tier. The consumer product’s default training policy is incompatible with attorney-client privilege, HIPAA requirements, financial confidentiality, and most corporate information security policies.

For a deeper analysis of OpenAI’s data practices, see our dedicated OpenAI analysis.

Anthropic: Privacy as Architectural Philosophy

Anthropic’s approach to data privacy reflects a different set of founding assumptions. The company, founded in 2021 by former OpenAI researchers including Dario and Daniela Amodei, has positioned safety and responsible AI development as its primary differentiators. Privacy practices align with this positioning.

Consumer tier: no training by default. Claude’s consumer product does not use conversation data for model training by default. This is the single most significant policy difference between Claude and ChatGPT at the consumer level. A Claude Free or Pro user’s conversations are not ingested into the training pipeline. Anthropic states that conversations may be retained for up to 30 days for safety monitoring and trust & safety evaluation, but not for model improvement.

This default matters enormously at scale. The difference between opt-out (ChatGPT: you must actively disable training) and default-off (Claude: training is not enabled unless you consent) determines the privacy experience for the majority of users who never change default settings.

Safety review and human evaluation. Both OpenAI and Anthropic employ human reviewers who may access conversation content for safety evaluation – identifying harmful content, evaluating model responses for safety violations, and investigating abuse reports. This is a meaningful privacy caveat for both platforms: conversations that users assume are fully automated may be read by human employees.

Anthropic has published more detailed documentation about its data handling practices through its usage policy and has been generally more transparent about the conditions under which human review occurs. However, neither company publishes comprehensive statistics on the volume of conversations reviewed by humans or the specific criteria that trigger human review.

API and Enterprise tiers. Claude’s API does not use customer data for training by default, consistent with the consumer policy. Claude for Business and Enterprise provides additional controls: SOC 2 Type 2 certification, SSO integration, admin data controls, and contractual guarantees about data usage.

For a detailed examination of Anthropic’s privacy architecture, see our Anthropic analysis.

The Infrastructure Layer: What Both Companies Share

Beneath the policy differences, ChatGPT and Claude share a fundamental architectural characteristic: both process prompts in plaintext on their servers.

When you type a prompt into ChatGPT or Claude, that prompt travels (encrypted by TLS) to the provider’s infrastructure, where it is decrypted and processed by the language model to generate a response. During processing, the plaintext prompt exists on the provider’s servers – in memory during inference, potentially on disk in processing queues, and in logs for some retention period.
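To make that concrete, here is a minimal sketch of what a standard chat request looks like at the application layer. The endpoint and body shape follow OpenAI’s public chat completions API (Claude’s Messages API is structurally similar); the prompt text and model name are illustrative. TLS encrypts the bytes in transit, but the provider terminates TLS at its own edge – from that point on, the prompt string inside the JSON body is plaintext to the provider.

```typescript
// A standard chat request. TLS protects this in transit; nothing protects
// the prompt from the provider itself, which decrypts and reads the body.
const response = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    // The API key identifies the account, so the provider can link
    // this prompt to a specific user.
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "gpt-4o",
    messages: [
      // This string arrives at the provider's servers exactly as written.
      { role: "user", content: "Summarize my client's settlement position" },
    ],
  }),
});
const data = await response.json();
```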

This is not a deficiency unique to either provider. It is the standard architecture of cloud AI services. But it means that:

  1. The provider can read your prompt. Both OpenAI and Anthropic have the technical capability to access, read, store, and analyze any prompt sent to their services. Their policies govern what they choose to do with this capability.

  2. Legal compulsion applies. Both companies are US corporations subject to US law. Court orders, subpoenas, and national security letters can compel the production of stored data, including prompts and conversation logs within their retention windows. Both companies publish information about government requests, but the legal infrastructure for compulsion exists.

  3. Data breaches expose plaintext. If either company’s infrastructure is breached during the retention window, prompts in storage are exposed. OpenAI disclosed a data breach in March 2023 where a vulnerability in an open-source library exposed some users’ conversation titles, first messages, and payment information. The breach was limited but demonstrated that stored conversation data is a target.

  4. Infrastructure providers have access. OpenAI runs on Microsoft Azure; Anthropic on AWS and GCP. These infrastructure providers operate the physical servers on which prompts are processed. While contractual and technical controls limit infrastructure provider access, the physical layer adds another entity in the trust chain.

Data Retention: The Devil in the Specifics

Both companies retain data. The differences are in duration, scope, and purpose.

OpenAI retains conversation content for 30 days (for safety monitoring, even when training opt-out is enabled), account data for up to 36 months, and usage metadata (timestamps, session duration, model used, token counts) indefinitely for service improvement. With training enabled (the consumer default), conversation content may be retained indefinitely as part of training datasets.

Anthropic retains conversation data for up to 30 days for safety evaluation, after which it is deleted. Trust & safety flagged content may be retained longer for investigation. Account data and usage metadata are retained for service operation purposes.

The 30-day safety retention window is present for both providers and represents an irreducible privacy compromise: both companies maintain the ability to access recent conversations for safety and abuse prevention. For users whose threat model includes the AI provider itself (or entities that can compel the AI provider), this retention window is a vulnerability that no policy setting eliminates.

Verdict

Anthropic’s Claude provides meaningfully better default privacy for consumer users. The no-training-by-default policy is the single most important privacy difference between the two platforms. A user who creates an account and uses the product without changing any settings receives better privacy protection from Claude than from ChatGPT. For individual users, professionals, and organizations evaluating AI tools with privacy as a priority, Claude’s defaults are superior.

OpenAI’s ChatGPT provides comparable privacy at the Enterprise tier, where the training exclusion, SOC 2 certification, and enterprise data controls create a privacy posture similar to Claude’s business offerings. For organizations willing to pay enterprise pricing and configure the appropriate settings, ChatGPT’s privacy is adequate. For consumer-tier users who do not disable training, ChatGPT’s privacy is substantively weaker than Claude’s.

Neither platform provides architectural privacy guarantees. Both process prompts in plaintext. Both retain data for safety review. Both are subject to legal compulsion. Both employ human reviewers who may access conversations. The differences are in policy and defaults – meaningful differences, but policy differences nonetheless. Policy can change. Defaults can be overridden. Retention windows can be extended.

The Stealth Cloud Perspective

The ChatGPT-versus-Claude comparison is a comparison of policies, not architectures. Anthropic has better policies. OpenAI has larger scale. Neither company has built an architecture where user privacy is a mathematical guarantee rather than a corporate commitment.

Stealth Cloud starts from the premise that the AI provider should never see the user’s actual data. The Ghost Chat architecture processes AI conversations through a pipeline designed to make privacy a structural property:

  1. PII tokenization – a client-side WebAssembly NER module strips personally identifiable information before any prompt leaves the browser.
  2. Client-side encryption – the sanitized prompt is encrypted with AES-256-GCM using keys generated in the browser via the Web Crypto API (see the first sketch after this list).
  3. Metadata stripping – the edge worker removes all identifying metadata before forwarding to the LLM provider (see the second sketch below).
  4. Ephemeral processing – the prompt is decrypted in a temporary V8 isolate, processed, and the isolate is destroyed. Nothing persists.
  5. Cryptographic shredding – session keys are destroyed on completion, rendering any hypothetically captured ciphertext permanently unreadable.
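As a rough illustration of steps 2 and 5, the following sketch uses only the standard Web Crypto API available in modern browsers. It assumes the PII-tokenization step has already produced `sanitized`; the function name, return shape, and key-lifecycle handling are illustrative assumptions, not the production Ghost Chat code, and how the ephemeral isolate in step 4 obtains the session key is deliberately out of scope here.

```typescript
// Sketch of steps 2 (client-side encryption) and 5 (cryptographic shredding),
// assuming `sanitized` is the output of the PII-tokenization step.
async function encryptSanitizedPrompt(sanitized: string) {
  // Generate an AES-256-GCM session key in the browser. `extractable: false`
  // means the raw key bytes can never be read out of the CryptoKey object.
  let key: CryptoKey | null = await crypto.subtle.generateKey(
    { name: "AES-GCM", length: 256 },
    false,
    ["encrypt", "decrypt"],
  );
  const iv = crypto.getRandomValues(new Uint8Array(12)); // 96-bit GCM nonce
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    new TextEncoder().encode(sanitized),
  );
  return {
    iv,
    ciphertext,
    // "Shredding" here means dropping the only reference to the
    // non-extractable key: once it is gone, no code path can decrypt the
    // ciphertext again, so any captured copy is permanently unreadable.
    shred: () => { key = null; },
  };
}
```

Because the key is non-extractable and never leaves the browser, the guarantee is enforced by the runtime rather than by anyone’s retention policy.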

Under this architecture, the LLM provider (whether OpenAI, Anthropic, or any other) receives a sanitized prompt with no PII, no user identity, and no metadata linking it to a specific person. The provider’s retention policy becomes irrelevant because the retained data contains nothing identifiable. The provider’s training policy becomes irrelevant because the training data contains no user PII. The provider’s response to a subpoena becomes irrelevant because there is nothing meaningful to produce.
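Step 3 in the pipeline above can likewise be pictured as an edge handler that forwards only the encrypted payload and nothing else. This is an illustrative, Cloudflare-Workers-style sketch under assumed names, not the production edge worker; the upstream URL is a hypothetical placeholder.

```typescript
// Illustrative sketch of step 3: forward the encrypted payload, drop
// everything that could identify the sender. The upstream URL is a
// hypothetical placeholder, not a real endpoint.
export default {
  async fetch(request: Request): Promise<Response> {
    // Build a fresh header set rather than copying the client's headers,
    // so cookies, client IP hints, user-agent strings, and auth tokens
    // are never forwarded toward the LLM provider.
    const headers = new Headers({ "Content-Type": "application/octet-stream" });
    return fetch("https://llm-gateway.example.com/v1/complete", {
      method: "POST",
      headers,
      body: await request.arrayBuffer(), // encrypted, sanitized payload only
    });
  },
};
```

The design point is the allow-list direction: rather than trying to enumerate and delete every identifying header, the worker starts from an empty set, so anything not explicitly added cannot leak.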

This is not a claim that Anthropic’s policies do not matter – they do, and Anthropic’s defaults are genuinely better for users who interact with AI directly. It is a claim that policy-based privacy has a ceiling that architecture-based privacy does not. The Stealth Cloud Manifesto articulates this distinction: privacy is not a setting you configure. It is a property of the system’s design. When the system is designed so that the provider cannot access user data, the provider’s policies about user data become a redundant safeguard rather than the primary one.

The question is not which AI provider has a better privacy policy. The question is whether a privacy policy is the right mechanism for protecting data that should never be exposed in the first place.

Read more: OpenAI Data Practices | Anthropic Privacy Architecture | What is Stealth Cloud?