Not all AI providers treat your data equally. Some train on your conversations by default. Some retain your prompts for years. Some encrypt data at rest; others don’t. Some give you the right to audit what they’ve stored; others offer nothing but a link to a privacy policy last updated when the product launched.
We evaluated 12 major AI providers across six privacy dimensions, scoring each from A (strongest protection) to F (weakest protection). The results reveal an industry where privacy practices range from genuinely protective to functionally nonexistent – and where marketing claims frequently diverge from architectural reality.
This scoreboard is based on publicly available privacy policies, terms of service, technical documentation, third-party audits, and regulatory filings as of March 2026. We contacted each provider for clarification on ambiguous policies; responses (or lack thereof) are noted where relevant.
Methodology
Each provider was scored on six dimensions:
Data Retention – How long does the provider store your prompts and responses? Shorter is better. Zero persistence is ideal.
Training Use – Does the provider use your data to train or fine-tune models? Training by default, with opt-out as the user's only recourse, is penalized. Training only on explicit opt-in – or no training use at all – is rewarded.
Encryption – What encryption is applied to data at rest and in transit? End-to-end encryption (where the provider cannot access plaintext) is the gold standard. Server-side encryption (where the provider holds the keys) is baseline.
Jurisdiction – Where is the provider headquartered and where is data processed? Jurisdictions with strong privacy laws and limited government access authority score higher. US-only providers with CLOUD Act exposure score lower.
Opt-Out Quality – How effective, accessible, and complete is the provider’s opt-out mechanism? Can users delete data? Is the opt-out granular? Does it cover all processing purposes?
Audit Rights – Can users access, export, or audit the data the provider holds about them? Are there third-party audit certifications? Is there transparency reporting?
Grades are assigned on the following scale:
- A: Industry-leading protection, architectural guarantees
- B: Strong protection, meaningful controls, minor gaps
- C: Adequate protection, standard industry practice
- D: Below-average protection, significant gaps
- F: Minimal or no protection in this dimension
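The article does not publish an aggregation formula for the Overall column, and the published overalls clearly include qualitative weighting (two providers with the same average grade points can land on different letters). The sketch below shows one plausible GPA-style rollup – grade points averaged across the six dimensions, with +/- shading – purely for illustration; the thresholds and function names are invented for this example.

```typescript
type Grade = "A" | "B" | "C" | "D" | "F";

const GRADE_POINTS: Record<Grade, number> = { A: 4, B: 3, C: 2, D: 1, F: 0 };
const LETTERS: Grade[] = ["F", "D", "C", "B", "A"];

// Map a numeric average back onto a letter, adding +/- shading when the
// average lands noticeably above or below the letter's midpoint.
function toLetter(avg: number): string {
  const nearest = Math.min(4, Math.max(0, Math.round(avg)));
  const letter = LETTERS[nearest];
  const delta = avg - nearest;
  if (delta >= 0.15 && letter !== "A") return letter + "+";
  if (delta <= -0.15 && letter !== "F") return letter + "-";
  return letter;
}

// Average the six dimension grades into an overall letter.
function overallGrade(dimensions: Grade[]): string {
  const total = dimensions.reduce((sum, g) => sum + GRADE_POINTS[g], 0);
  return toLetter(total / dimensions.length);
}

// xAI's row (D, D, C, D, D, D) averages ~1.17 grade points -> "D+".
console.log(overallGrade(["D", "D", "C", "D", "D", "D"]));
```

A rollup like this reproduces many rows exactly (Mistral's B-, xAI's D+, Baidu's D-) but not all of them, which is why the overalls should be read as editorial judgments informed by the dimension grades rather than a mechanical average.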
The Scoreboard
| Provider | Data Retention | Training Use | Encryption | Jurisdiction | Opt-Out Quality | Audit Rights | Overall |
|---|---|---|---|---|---|---|---|
| Stealth Cloud | A | A | A | A | A | A | A |
| Anthropic (Claude) | B | B | C | D | B | B | B |
| OpenAI (ChatGPT) | C | D | C | D | C | C | C |
| Google (Gemini) | C | D | C | C | C | C | C |
| Microsoft (Copilot) | C | C | C | D | C | C | C |
| Mistral | B | B | C | B | B | C | B- |
| Cohere | B | C | C | C | B | B | B- |
| Perplexity | C | C | C | D | D | D | C- |
| xAI (Grok) | D | D | C | D | D | D | D+ |
| Meta (Llama/Meta AI) | D | F | C | D | D | D | D |
| Inflection (Pi) | D | D | C | D | D | D | D+ |
| Baidu (Ernie Bot) | D | D | C | F | F | F | D- |
Provider-by-Provider Analysis
Stealth Cloud – Overall: A
Data Retention: A – Zero-persistence architecture. Prompts exist only in volatile memory during processing and are cryptographically shredded after response delivery. No logs, no backups, no conversation history on the server. Data retention period: zero.
Training Use: A – Architecturally impossible. The infrastructure cannot access prompt content in plaintext due to zero-knowledge design. There is no training pipeline, no data collection mechanism, and no technical capability to use customer data for model improvement.
Encryption: A – End-to-end encryption via AES-256-GCM with client-side key generation. The provider never possesses decryption keys. PII stripping occurs client-side before encryption, providing defense in depth.
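To make the encryption claim concrete, here is a minimal sketch of the client-side pattern just described – strip obvious PII, then encrypt with a locally generated AES-256-GCM key – using the standard WebCrypto API. This illustrates the general technique, not Stealth Cloud's actual code: the scrubbing rules, function names, and wire format are all invented for the example.

```typescript
// Naive client-side PII scrubber (illustrative only; real systems use
// much richer detection than two regexes).
function stripPii(text: string): string {
  return text
    .replace(/[\w.+-]+@[\w-]+\.[\w.-]+/g, "[email]") // email addresses
    .replace(/\+?\d[\d\s().-]{7,}\d/g, "[phone]");   // phone-like digit runs
}

// Encrypt a prompt with AES-256-GCM using a key generated on the client.
// Only { iv, ciphertext } would go over the wire; the key never leaves
// this process, so the server sees ciphertext and nothing else.
async function encryptPrompt(prompt: string) {
  const key = await crypto.subtle.generateKey(
    { name: "AES-GCM", length: 256 },
    false,                      // non-extractable: the key cannot be exported
    ["encrypt", "decrypt"],
  );
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh 96-bit nonce
  const plaintext = new TextEncoder().encode(stripPii(prompt));
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    plaintext,
  );
  return { key, iv, ciphertext: new Uint8Array(ciphertext) };
}
```

Because the key is generated as non-extractable and never transmitted, no later change in server-side policy can expose the plaintext – the difference between a contractual promise and an architectural guarantee.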
Jurisdiction: A – Swiss domicile (Zug) under the revFADP, one of the world’s strongest data protection frameworks. Edge processing via Cloudflare enables jurisdictional routing. Not subject to US CLOUD Act or equivalent extraterritorial data access mechanisms.
Opt-Out Quality: A – No opt-out needed because there is no data collection to opt out of. The architectural approach makes consent toggles unnecessary.
Audit Rights: A – Users have full visibility into data processing through client-side tooling. Because no data persists server-side, there is nothing to audit beyond the zero-persistence guarantee itself – which is verifiable through architectural analysis.
Learn more about Stealth Cloud’s approach.
Anthropic (Claude) – Overall: B
Data Retention: B – Anthropic retains conversations for 90 days by default for safety monitoring, then deletes them. Enterprise and API customers can negotiate custom retention periods. The 90-day default is shorter than most competitors' but falls short of zero persistence.
Training Use: B – Anthropic’s published policy states it does not train on free-tier or paid-tier conversations by default. This is a meaningful distinction from OpenAI and Google. However, the policy is contractual, not architectural – it depends on Anthropic’s continued adherence to its stated practices.
Encryption: C – Standard TLS in transit, encryption at rest with provider-managed keys. Anthropic holds the decryption keys, meaning it has technical access to conversation content. Not end-to-end encrypted.
Jurisdiction: D – US-headquartered (San Francisco). Subject to the CLOUD Act, national security letters, and other US government data access mechanisms. No EU or Swiss data processing alternatives for consumer users.
Opt-Out Quality: B – No training opt-out required since training is off by default. Data deletion available through account settings. Conversation history can be disabled. The absence of a needed opt-out toggle is itself a privacy advantage.
Audit Rights: B – Data export available. Anthropic publishes transparency reports and maintains SOC 2 certification. Users can request information about their stored data.
Read our deep-dive on Anthropic’s privacy architecture.
OpenAI (ChatGPT) – Overall: C
Data Retention: C – Conversations retained for 30 days after account deletion. Active account conversations stored indefinitely unless manually deleted. Enterprise customers have configurable retention. The default retention posture is “store everything” for active accounts.
Training Use: D – Free-tier conversations are used for training by default. The opt-out toggle (Settings > Data Controls) prevents future training but cannot reverse training that has already occurred – the fundamental limitation of any opt-out model. ChatGPT Enterprise and API customers with zero-retention agreements are excluded from training by default.
Encryption: C – TLS in transit, AES-256 at rest with provider-managed keys. OpenAI has technical access to all conversation content. Human reviewers access conversations for safety evaluation.
Jurisdiction: D – US-headquartered (San Francisco). Full exposure to CLOUD Act and domestic surveillance authorities. Data processed on US and Microsoft Azure infrastructure. The jurisdictional risk is significant for non-US users.
Opt-Out Quality: C – Training opt-out available but not default. It was previously bundled with disabling conversation history (the two are now decoupled). The toggle's effectiveness is limited by the retroactivity problem. Data deletion is available but subject to retention periods.
Audit Rights: C – Data export available (Settings > Export data). SOC 2 Type II certified. Limited transparency reporting on government data requests.
Read our analysis of OpenAI’s data practices.
Google (Gemini) – Overall: C
Data Retention: C – Conversations with Gemini are retained for up to 36 months by default for product improvement. Auto-delete settings, which are off by default, can shorten retention to 3 or 18 months. The 3-year default is among the longest in the industry.
Training Use: D – Conversations are used for model improvement by default. Human reviewers access conversations for quality assessment. Google’s privacy policy allows broad use of interaction data for “developing new products and services.”
Encryption: C – Google’s standard infrastructure encryption (TLS in transit, AES-256 at rest). Google holds all decryption keys. Data accessible to Google employees with appropriate access levels.
Jurisdiction: C – US-headquartered (Mountain View). Subject to CLOUD Act. However, Google operates data centers globally and offers data residency options for enterprise customers (Google Cloud), providing some jurisdictional flexibility not available to consumer Gemini users.
Opt-Out Quality: C – Activity controls allow users to disable conversation storage for training. Auto-delete timers available. The opt-out is accessible but requires navigation through multiple settings layers.
Audit Rights: C – Google Takeout provides comprehensive data export. Google publishes detailed transparency reports on government data requests. SOC 2 and ISO 27001 certified.
Read our analysis of Google’s Gemini data pipeline.
Microsoft (Copilot) – Overall: C
Data Retention: C – Consumer Copilot conversations retained according to Microsoft’s standard data retention policies. Enterprise Copilot (Microsoft 365 Copilot) inherits the tenant’s Microsoft 365 retention policies, providing more control.
Training Use: C – Microsoft states that enterprise Copilot data is not used for training foundation models. Consumer Copilot data handling is less clear, with policies referencing “product improvement” without specifying whether this includes model training.
Encryption: C – Standard Microsoft encryption (TLS in transit, BitLocker at rest for enterprise). Enterprise customers benefit from Microsoft’s broader security infrastructure. Consumer Copilot uses standard Microsoft account security.
Jurisdiction: D – US-headquartered (Redmond). Subject to CLOUD Act. Microsoft Azure offers data residency in multiple regions, but the legal jurisdiction of the corporate entity remains US.
Opt-Out Quality: C – Enterprise customers have granular controls through Microsoft 365 admin settings. Consumer controls are limited to conversation history deletion and general Microsoft privacy settings.
Audit Rights: C – Enterprise customers have access to Microsoft 365 compliance tools, audit logs, and eDiscovery. Consumer audit rights are limited. Microsoft publishes transparency reports.
Mistral – Overall: B-
Data Retention: B – Mistral retains API data for 30 days for abuse monitoring, then deletes it. La Plateforme (their hosted platform) offers configurable retention. The 30-day default is shorter than most competitors' and reflects a privacy-forward posture.
Training Use: B – Mistral states that API data is not used for model training. Free-tier (Le Chat) data handling policies are less restrictive. The API no-training commitment is clear and contractually binding.
Encryption: C – Standard TLS in transit, encryption at rest. Provider-managed keys. No end-to-end encryption option.
Jurisdiction: B – French-headquartered (Paris). Subject to EU GDPR and the EU AI Act. Not subject to US CLOUD Act. The EU jurisdictional position provides stronger privacy protections than US-based alternatives.
Opt-Out Quality: B – No training opt-out needed for API users (off by default). Le Chat users have deletion controls. The policy is clear and relatively accessible.
Audit Rights: C – Data export available for API users. GDPR data subject access rights apply. Limited public transparency reporting.
Cohere – Overall: B-
Data Retention: B – Cohere offers flexible retention policies with options for immediate deletion after processing. Enterprise customers can configure zero-retention policies.
Training Use: C – Cohere’s standard API terms include provisions for using data to improve services, though enterprise agreements can exclude training. The default position is less clear than Anthropic or Mistral.
Encryption: C – Standard encryption in transit and at rest. Provider-managed keys. Cohere offers private deployment options that can address encryption concerns.
Jurisdiction: C – Canadian-headquartered (Toronto). Subject to PIPEDA, which provides stronger privacy protections than US federal law but weaker than EU GDPR. Not subject to US CLOUD Act directly, though the Five Eyes intelligence-sharing relationship creates indirect exposure.
Opt-Out Quality: B – Enterprise customers have granular control over data processing. API users can request data deletion. The controls are accessible and relatively well-documented.
Audit Rights: B – SOC 2 certified. Data export available. PIPEDA-based data access rights apply. Cohere provides reasonable transparency about data handling.
Perplexity – Overall: C-
Data Retention: C – Perplexity retains search queries and conversation history to provide personalized results and improve the service. Retention periods are not clearly specified in public documentation.
Training Use: C – Perplexity’s privacy policy permits use of interaction data for service improvement, which may include model training. The specifics are less transparent than larger providers.
Encryption: C – Standard TLS in transit. Encryption at rest details not prominently documented.
Jurisdiction: D – US-headquartered (San Francisco). Full CLOUD Act exposure.
Opt-Out Quality: D – Limited opt-out controls. Account deletion available but data processing opt-out is not granular.
Audit Rights: D – Limited data export capabilities. No public transparency reporting on government data requests.
xAI (Grok) – Overall: D+
Data Retention: D – Grok conversations are retained according to xAI’s data policies, which provide limited specificity on retention periods. The integration with X (formerly Twitter) creates additional data sharing concerns.
Training Use: D – xAI’s terms permit use of conversations for model improvement. The default posture is training-permissive. The integration with X’s data ecosystem raises questions about the scope of data use.
Encryption: C – Standard encryption in transit and at rest. No distinguishing encryption features.
Jurisdiction: D – US-headquartered. Full CLOUD Act exposure. The association with X introduces additional data governance complexity.
Opt-Out Quality: D – Limited opt-out controls for training data use. The integration with X’s settings creates confusion about which controls apply to which data flows.
Audit Rights: D – Limited data export and audit capabilities. No prominent transparency reporting.
Meta (Llama/Meta AI) – Overall: D
Data Retention: D – Meta AI conversations through WhatsApp, Messenger, and Instagram are subject to Meta’s general data retention policies, which are designed for social media and apply broad retention periods.
Training Use: F – Meta has been the most aggressive of major providers in claiming rights to use interaction data for AI training. The use of WhatsApp and Instagram conversations for Meta AI training has drawn regulatory scrutiny in the EU and Brazil. Meta paused EU training on user data following GDPR enforcement pressure.
Encryption: C – WhatsApp provides end-to-end encryption for standard messages, but Meta AI interactions within WhatsApp are processed server-side, breaking the end-to-end encryption model. This is a significant and often misunderstood limitation.
Jurisdiction: D – US-headquartered (Menlo Park). Subject to CLOUD Act. Meta’s data practices have drawn enforcement actions in multiple jurisdictions, including the EU’s record GDPR fine.
Opt-Out Quality: D – Opt-out mechanisms for AI training vary by jurisdiction and have been criticized as inadequate. The EU opt-out initially required a free-text justification, a hurdle Meta dropped under regulatory pressure. Controls are fragmented across multiple Meta platforms.
Audit Rights: D – Basic data download available through Meta’s account settings. GDPR data subject access rights apply for EU users. Meta publishes transparency reports but has faced criticism about their completeness.
Inflection (Pi) – Overall: D+
Data Retention: D – Pi retains conversation history to provide continuity across sessions. Retention policies are not prominently detailed.
Training Use: D – Inflection’s terms permit use of conversations for model improvement and training. Pi’s positioning as a personal companion encourages particularly intimate data sharing, which amplifies the privacy impact.
Encryption: C – Standard encryption in transit and at rest. No end-to-end encryption.
Jurisdiction: D – US-headquartered. Full CLOUD Act exposure.
Opt-Out Quality: D – Limited opt-out controls. Account deletion available but data processing opt-out not well-documented.
Audit Rights: D – Limited data export and audit capabilities.
Baidu (Ernie Bot) – Overall: D-
Data Retention: D – Data retained per Chinese regulatory requirements and Baidu’s internal policies. Limited transparency on retention periods for international users.
Training Use: D – Ernie Bot conversations are used for model improvement. Baidu’s terms permit broad data use for AI development.
Encryption: C – Standard encryption practices. Details not prominently documented for international users.
Jurisdiction: F – Chinese-headquartered (Beijing). Subject to China’s Cybersecurity Law and National Intelligence Law, which require cooperation with state intelligence activities. Government data access is essentially unlimited. Chinese regulation imposes real privacy obligations on private companies but provides no meaningful protection against state access.
Opt-Out Quality: F – No meaningful opt-out for training data use. Chinese regulatory requirements mandate data retention that conflicts with user deletion requests.
Audit Rights: F – Limited data export or audit capabilities for international users. No meaningful transparency reporting on government data access.
Key Takeaways
The Privacy Spectrum Is Wide
The gap between the most and least privacy-protective providers is enormous. A user’s choice of AI provider determines more about their data protection than any setting or toggle within any given provider. Provider selection is itself a privacy decision.
Policy vs. Architecture
Most providers’ privacy protections are policy-based: they promise not to do things they are technically capable of doing. Only zero-knowledge architectures provide architectural guarantees – protections enforced by system design rather than corporate goodwill.
The distinction matters because policies change. Every provider reserves the right to update its terms of service. An A-grade privacy policy today can become a D-grade policy tomorrow with a single terms update. Architectural guarantees, by contrast, require rebuilding the system to revoke.
Jurisdiction Matters More Than You Think
The global regulatory heatmap reveals that US-based providers face structural privacy disadvantages due to the CLOUD Act and the absence of comprehensive federal privacy legislation. European providers (Mistral) and Swiss-domiciled providers (Stealth Cloud) benefit from stronger legal frameworks. Users should factor provider jurisdiction into their privacy calculus alongside technical features.
“Enterprise” Is Not a Privacy Solution
Enterprise tiers consistently score better than free and consumer tiers, but the improvement is incremental, not fundamental. Enterprise agreements add contractual protections and compliance certifications, but they don’t change the underlying architecture. The provider still has technical access to your data. The improvement is in the legal contract, not in the system design.
The Hidden Cost of Free
The pattern across the scoreboard is clear: free-tier products consistently score worse on privacy than paid alternatives. Free users are the product. Their data subsidizes the service. The AI training tax falls disproportionately on those who pay nothing – because their data is the payment.
The Stealth Cloud Perspective
This scoreboard measures what providers promise. The only grade that matters is the one measuring what the architecture guarantees. Policies are revocable; zero-persistence infrastructure is not. We built Stealth Cloud to score A across every dimension not through better policies but through architecture that makes privacy violations technically impossible – because the only trustworthy promise is one enforced by mathematics.