The global market for privacy-enhancing technologies reached $2.8 billion in 2025, up from $1.1 billion in 2022, with Gartner projecting it to exceed $25 billion by 2030. This growth is not driven by consumer demand – consumers have no idea what a TEE or FHE is. It is driven by three converging forces: regulatory mandates (GDPR, CCPA, EU AI Act, DORA, Brazil’s LGPD), the collapse of third-party tracking infrastructure (the death of third-party cookies in Chrome, Apple’s ATT framework), and the AI training data crisis (lawsuits, opt-outs, and the data poisoning movement making unconsented scraping increasingly risky).

The PET ecosystem is not a single technology. It is a stack of complementary techniques, each addressing a different point in the data lifecycle, each with distinct trust models, performance characteristics, and maturity levels. No single PET solves every privacy problem. The architecture that works is compositional – layering multiple PETs to create defense-in-depth.

This is a technical survey of every major PET category in 2026: what it does, how it works, where it is deployed, and where it falls short.

The PET Comparison Table

| Technology | What It Protects | Trust Model | Performance Overhead | Maturity | Key Limitation |
| --- | --- | --- | --- | --- | --- |
| Trusted Execution Environments (TEE) | Data in use | Hardware manufacturer | 1-2x | Production | Hardware trust requirement |
| Secure Multi-Party Computation (MPC) | Multi-party inputs | Threshold (t-of-n honest) | 100-10,000x | Production (specific use cases) | Communication overhead |
| Fully Homomorphic Encryption (FHE) | Data in use (single party) | Client only | 10,000-100,000x | Early production | Performance |
| Differential Privacy (DP) | Statistical outputs | Varies (local vs global) | Minimal | Production | Utility trade-off |
| Zero-Knowledge Proofs (ZKP) | Verification without revelation | Mathematical | 1,000-10,000x (prover) | Production (blockchain) | Prover computation |
| Federated Learning (FL) | Training data locality | Aggregator | 2-10x | Production | Gradient leakage |
| Data Clean Rooms (DCR) | Cross-org data collaboration | Platform operator | 1-5x | Production | Single vendor trust |
| Cryptographic Shredding | Data destruction | Key management | Negligible | Production | Key management complexity |

Trusted Execution Environments (TEEs)

What They Are

A Trusted Execution Environment is a hardware-isolated region of a processor that provides confidentiality and integrity for code and data, even from the operating system, hypervisor, and physical machine operator. Data enters the TEE encrypted, is decrypted inside the enclave, processed, re-encrypted, and returned – the host system never sees the plaintext.

Major Implementations

  • Intel SGX (Software Guard Extensions). The most widely deployed TEE, introduced with Skylake client processors in 2015 and brought to Xeon Scalable with Ice Lake (2021). The original enclave page cache was limited to 128 MB; SGX2 on Ice Lake Xeon expands it to hundreds of gigabytes per processor. Azure Confidential Computing runs on SGX hardware.

  • Intel TDX (Trust Domain Extensions). The successor to SGX for VM-level isolation. TDX protects entire virtual machines rather than individual enclaves, making it compatible with existing application architectures. Available since 4th Gen Xeon Scalable (2023).

  • AMD SEV-SNP (Secure Encrypted Virtualization - Secure Nested Paging). AMD’s VM-level TEE, competing directly with Intel TDX. SEV-SNP encrypts VM memory with a per-VM key managed by the AMD Secure Processor. Google Confidential VMs run on SEV-SNP, and AWS exposes it on select EC2 instance types (AWS Nitro Enclaves are a separate, hypervisor-enforced isolation mechanism, not SEV-SNP).

  • ARM Confidential Compute Architecture (CCA). ARM’s TEE framework for ARM v9 processors, targeting mobile and edge devices. CCA introduces “Realms” – isolated execution environments managed by the Realm Management Monitor.

  • Apple Secure Enclave. A dedicated security coprocessor in every Apple device since the A7 chip (2013). Handles biometric data, key management, and Face ID processing. Apple’s Private Cloud Compute extends this to server-side processing.

Market Position

TEEs are the most mature and highest-performing PET for general-purpose computation. They introduce minimal overhead (typically 1-5% for computation, with some memory overhead for encryption) and support arbitrary code execution. This makes them the only PET currently viable for latency-sensitive workloads like AI inference.

The Trust Problem

The fundamental criticism of TEEs is the trust model: you must trust the hardware manufacturer. Intel, AMD, and ARM control the silicon, the firmware, and the attestation infrastructure. Side-channel attacks affecting SGX (Foreshadow, Plundervolt, and SGAxe, along with the broader Spectre and Meltdown class) have demonstrated that hardware isolation is not impervious. Each attack has been patched, but the pattern suggests that hardware-based isolation is a best-effort guarantee, not a mathematical one.

For Stealth Cloud, TEEs serve as one layer of defense – not the sole guarantee. The architecture does not depend on hardware trust alone but combines TEEs with cryptographic shredding, end-to-end encryption, and zero-persistence design.

Secure Multi-Party Computation (MPC)

2026 Market Status

The MPC market has consolidated around two dominant use cases: cryptocurrency custody (Fireblocks, Coinbase, BitGo) and privacy-preserving data collaboration (Inpher, Cape Privacy, Sharemind).

Fireblocks alone secures over $50 billion in digital assets using MPC threshold wallets. The technology’s trust model – distributing key shares across independent parties so that no single party can access the complete key – has proven its value in an industry plagued by centralized custody failures (Mt. Gox, FTX, Celsius).

Maturity Assessment

MPC is production-ready for specific, well-defined computations: key management, sealed-bid auctions, aggregation queries, and simple statistical functions. For general-purpose computation (arbitrary programs, ML inference), MPC remains 100-10,000x slower than plaintext, limiting it to low-throughput, high-value applications.

The primary research frontier is reducing communication complexity. Protocols like SPDZ (with preprocessing), TinyOT, and Overdrive reduce online communication costs, but the preprocessing phase still requires substantial bandwidth between parties.
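
The aggregation use case above reduces to additive secret sharing, the primitive beneath these protocols. A minimal stdlib-only sketch (illustrative only – production protocols such as SPDZ add information-theoretic MACs, malicious-security checks, and a preprocessing phase):

```python
import secrets

P = 2**61 - 1  # public prime modulus; all arithmetic is mod P

def share(value, n=3):
    """Split `value` into n additive shares that sum to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Three parties each hold a private salary. Each party sends one share
# of its input to every participant; each participant publishes only
# the sum of the shares it holds. The totals combine to the aggregate,
# and no single party ever sees another's input.
salaries = [70_000, 85_000, 120_000]
all_shares = [share(s) for s in salaries]        # all_shares[i][j] goes to party j
partial_sums = [sum(col) % P for col in zip(*all_shares)]
total = reconstruct(partial_sums)
print(total)  # 275000
```

Any n-1 shares are uniformly random, so nothing leaks unless all n parties collude – the t-of-n trust model from the table above.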

Fully Homomorphic Encryption (FHE)

2026 Market Status

FHE is transitioning from research to early production. Key milestones:

  • Apple shipped homomorphic encryption in iOS 18 for Live Caller ID Lookup and Enhanced Visual Search (its Private Cloud Compute relies on hardware isolation rather than FHE)
  • Google uses FHE for privacy-preserving ad measurement in Privacy Sandbox
  • Zama raised $73 million (Series A, 2024) to commercialize TFHE for blockchain and ML
  • DARPA’s DPRIVE program delivered first-generation FHE ASIC demonstrators in 2025

Total investment in FHE startups exceeded $380 million between 2020 and 2025. The market is pre-revenue at the platform level – most FHE deployments are internal to large technology companies – but the hardware acceleration pipeline (Intel HEXL, Cornami, Niobium Microsystems) could catalyze commercial adoption by 2028.

Maturity Assessment

FHE is production-ready for simple computations (addition, counting, basic statistics) and early-stage for ML inference (logistic regression, small neural networks). Large language model inference under FHE remains infeasible due to the multiplicative depth required for transformer attention mechanisms. The 10,000-100,000x performance overhead is the binding constraint. Hardware acceleration is the critical unlock.
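
The "simple computations" tier can be illustrated with a toy Paillier cryptosystem – additively homomorphic rather than fully homomorphic, and using deliberately tiny primes, but it shows the core idea that a server can combine ciphertexts without ever decrypting them:

```python
import math
import secrets

# Toy Paillier keypair. Tiny primes for illustration only; real
# deployments use 2048-bit moduli, and real FHE uses lattice schemes.
p, q = 1009, 1013
n = p * q
n2 = n * n
g = n + 1                          # standard simplified generator
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m):
    r = secrets.randbelow(n - 1) + 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Homomorphic addition: multiplying ciphertexts adds the plaintexts.
a, b = encrypt(12), encrypt(30)
print(decrypt((a * b) % n2))       # 42
```

Lattice-based FHE schemes (BGV, CKKS, TFHE) additionally support multiplication on ciphertexts, which is exactly where the multiplicative-depth constraint on transformer attention comes from.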

Differential Privacy (DP)

2026 Market Status

Differential privacy is the most widely deployed PET by user reach. Apple deploys it across hundreds of millions of iOS devices. Google uses it in Chrome, Google Maps, and Gboard training. The U.S. Census Bureau used it for the 2020 Decennial Census. LinkedIn, Microsoft, and Uber have all disclosed DP deployments.

The open-source ecosystem is mature: Google’s DP library (C++, Java, Go, Python), OpenDP (Harvard), IBM’s Diffprivlib, and PyDP provide production-quality implementations.

Maturity Assessment

DP is the most mature PET for aggregate analytics. It is well-understood theoretically, has established best practices, and has been deployed at nation-scale. The primary challenge is calibration: choosing epsilon values that balance privacy and utility is a policy decision with no universal answer. Continuous data collection raises composition concerns (cumulative privacy budget depletion over time).

DP does not protect data during computation – it protects statistical outputs. It is complementary to, not a replacement for, encryption-based PETs.
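
The output-perturbation model is easy to see in a counting query protected by the Laplace mechanism. A minimal stdlib-only sketch, with `dp_count` as an illustrative name:

```python
import random

def laplace_noise(scale):
    # Laplace(scale) sampled as the difference of two exponentials.
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def dp_count(records, predicate, epsilon):
    """Counting query (sensitivity 1) released with epsilon-DP noise."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [23, 35, 41, 58, 29, 62, 33]
# Lower epsilon -> larger noise scale -> stronger privacy, less utility.
print(dp_count(ages, lambda a: a > 40, epsilon=0.5))
```

Each released query consumes privacy budget; under sequential composition the epsilons add up, which is the cumulative-depletion concern noted above.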

Zero-Knowledge Proofs (ZKPs)

2026 Market Status

ZKPs are the most commercially successful PET in blockchain, where ZK rollups (zkSync, StarkNet, Polygon zkEVM, Scroll) collectively secure over $18 billion in total value locked and process 40+ million monthly transactions.

Outside blockchain, ZKP adoption is growing in identity verification (Microsoft ION, EU eIDAS 2.0 Digital Identity Wallet), financial compliance (ING, JPMorgan Onyx), and AI inference verification (EZKL, Modulus Labs).

Maturity Assessment

ZKPs are production-ready for blockchain scaling and identity verification. For general-purpose private computation, the prover-side computational overhead (1,000-10,000x) and circuit complexity constraints limit adoption. The distinction between ZKPs and other PETs is important: ZKPs prove statements about data without revealing the data. They do not, by themselves, perform arbitrary computation on private data – that requires MPC or FHE.
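
The "prove without revealing" property shows up concretely in a Schnorr sigma protocol made non-interactive with Fiat-Shamir – an ancestor of modern SNARK/STARK systems. A toy sketch with insecurely small group parameters (real systems use elliptic curves):

```python
import hashlib
import secrets

# Toy group: p = 2q + 1 with q prime; g generates the order-q subgroup.
p, q, g = 23, 11, 4

x = 7                     # prover's secret
y = pow(g, x, p)          # public statement: y = g^x mod p

def fiat_shamir_challenge(t):
    h = hashlib.sha256(f"{g},{y},{t}".encode()).digest()
    return int.from_bytes(h, "big") % q

def prove(x):
    r = secrets.randbelow(q)
    t = pow(g, r, p)                  # commitment
    c = fiat_shamir_challenge(t)      # challenge derived from transcript
    s = (r + c * x) % q               # response
    return t, s

def verify(y, t, s):
    c = fiat_shamir_challenge(t)
    # Holds iff the prover knew x, yet (t, s) reveals nothing about x.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

t, s = prove(x)
print(verify(y, t, s))    # True
```

The verifier checks one equation and learns nothing about x – the same shape, at vastly larger scale, as a rollup proving the validity of a transaction batch.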

Federated Learning (FL)

What It Is

Federated learning trains ML models across distributed datasets without centralizing the data. Each participating device or organization trains a local model on its local data and sends only model updates (gradients or model weights) to a central aggregator, which combines them into a global model.
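
The aggregation step is typically example-weighted averaging of the client updates (the FedAvg pattern). A minimal sketch, with `fed_avg` as an illustrative name:

```python
def fed_avg(client_updates):
    """client_updates: list of (num_examples, weight_vector) pairs.
    Returns the example-weighted average of the weight vectors."""
    total = sum(n for n, _ in client_updates)
    dim = len(client_updates[0][1])
    return [sum(n * w[i] for n, w in client_updates) / total
            for i in range(dim)]

# Two clients report local model weights plus their local dataset size;
# the aggregator never sees any raw training example.
clients = [(100, [0.25, 0.5]), (300, [0.75, 0.0])]
print(fed_avg(clients))  # [0.625, 0.125]
```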

Key Deployments

  • Google Gboard: Federated learning has trained the keyboard prediction model across hundreds of millions of Android devices since 2017. User typing data never leaves the device.
  • Apple: Uses federated learning for Siri improvements, QuickType predictions, and Hey Siri detection.
  • Healthcare: The MELLODDY consortium (10 pharmaceutical companies, 2020-2023) used federated learning to train drug discovery models across proprietary compound libraries without any company sharing its data.

The Gradient Leakage Problem

Federated learning’s privacy guarantee is weaker than it appears. Research has demonstrated that model gradients – the “updates” that are shared with the aggregator – can leak significant information about the training data:

  • Gradient inversion attacks (Zhu et al., 2019): Given a model gradient, it is possible to reconstruct individual training examples (images, text) with high fidelity for small batch sizes.
  • Membership inference attacks: Determine whether a specific data point was in a participant’s training set by analyzing the gradient updates.

The standard mitigation is to combine federated learning with differential privacy (DP-SGD applied to the gradients before sharing) or secure aggregation (encrypting individual gradients so the aggregator can only see the aggregate, not individual contributions). Google’s production FL deployments use both.
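
The pairwise-masking idea behind secure aggregation can be sketched directly: each pair of clients shares a random mask that one adds and the other subtracts, so the masks cancel in the sum. A simplified illustration (production protocols, e.g. Bonawitz et al.'s design, derive the masks from key agreement and tolerate client dropouts):

```python
import secrets

P = 2**31 - 1  # public modulus for masked, integer-quantized updates

def pairwise_masks(n_clients, dim):
    """For each pair (i, j) with i < j, draw a shared random mask;
    client i adds it and client j subtracts it."""
    masks = [[0] * dim for _ in range(n_clients)]
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            m = [secrets.randbelow(P) for _ in range(dim)]
            for k in range(dim):
                masks[i][k] = (masks[i][k] + m[k]) % P
                masks[j][k] = (masks[j][k] - m[k]) % P
    return masks

updates = [[5, 1], [2, 7], [3, 2]]          # per-client gradient updates
masks = pairwise_masks(3, 2)
masked = [[(u + m) % P for u, m in zip(upd, msk)]
          for upd, msk in zip(updates, masks)]
# The aggregator sums the masked vectors; the masks cancel, so it
# learns only the aggregate, never an individual contribution.
aggregate = [sum(col) % P for col in zip(*masked)]
print(aggregate)  # [10, 10]
```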

Maturity Assessment

Federated learning is production-ready for mobile keyboard models, recommendation systems, and healthcare analytics. It is less mature for large-scale model training (the communication overhead of synchronizing large models across thousands of participants is substantial) and requires DP or secure aggregation to address gradient leakage.

Data Clean Rooms (DCRs)

What They Are

Data clean rooms are controlled environments where multiple organizations can combine and analyze their datasets without either party seeing the other’s raw data. The clean room enforces policies on what queries can be run, what results can be extracted, and what granularity of output is permitted.
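
At its core this is policy enforcement on query outputs. A minimal sketch of one common rule, an aggregation threshold that suppresses small cohorts – `MIN_COHORT` and the query shape are hypothetical, not any vendor's API:

```python
MIN_COHORT = 50  # hypothetical policy: suppress groups below this size

def run_query(rows, group_key, metric):
    """Group rows and return only aggregates for cohorts large enough
    that no individual can be singled out."""
    groups = {}
    for row in rows:
        g = groups.setdefault(row[group_key], [0, 0.0])
        g[0] += 1              # count
        g[1] += row[metric]    # sum
    return {k: {"count": c, "total": t}
            for k, (c, t) in groups.items() if c >= MIN_COHORT}

rows = ([{"region": "EU", "spend": 10.0}] * 60
        + [{"region": "APAC", "spend": 5.0}] * 3)
print(run_query(rows, "region", "spend"))
# The 3-row APAC cohort is suppressed; only the EU aggregate is released.
```

Note that this enforcement runs wherever the clean room runs – which is exactly why the operator's own access to the raw rows is the trust question discussed below.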

Key Providers

  • Google Ads Data Hub: Launched 2018. Allows advertisers to join their first-party data with Google’s ad data inside a BigQuery-based clean room. Output is restricted to aggregated, privacy-safe metrics.
  • AWS Clean Rooms: Launched 2023. Provides a multi-party clean room service within AWS, supporting SQL-based analysis with configurable output restrictions.
  • Snowflake Data Clean Rooms: Leverages Snowflake’s data sharing architecture to create cross-organization analysis environments.
  • InfoSum, Habu, LiveRamp: Specialized clean room platforms for the advertising and media industries.

The Advertising Use Case

The collapse of third-party cookies has made data clean rooms the primary mechanism for cross-party ad measurement. Advertiser A can measure whether users who saw their ads on Publisher B’s platform subsequently made a purchase – without Publisher B seeing purchase data or Advertiser A seeing browsing data.

The global data clean room market was valued at $720 million in 2025 and is projected to reach $3.5 billion by 2030, driven almost entirely by advertising and media use cases.

Maturity Assessment

Data clean rooms are production-ready and widely deployed in advertising. However, their privacy model is the weakest of any PET category: the clean room operator (Google, AWS, Snowflake) typically has access to both parties’ data during processing. The “clean room” is a policy enforcement layer, not a cryptographic guarantee. Trust is placed in the platform operator, not in mathematics.

More advanced clean room architectures are emerging that combine clean room policy enforcement with TEEs (processing data inside enclaves so the platform operator cannot see it) or MPC (distributing computation across parties). AWS Clean Rooms offers a “Cryptographic Computing” option, C3R (Cryptographic Computing for Clean Rooms), which keeps collaboration data encrypted under customer-held keys.

Cryptographic Shredding

2026 Market Status

Cryptographic shredding is not a standalone market category but a foundational technique embedded across cloud infrastructure. Every major cloud provider supports it through key management services:

  • Google Cloud KMS: Customer-Managed Encryption Keys with scheduled key destruction
  • AWS KMS: Customer master keys with configurable deletion waiting periods
  • Azure Key Vault: Soft-delete and purge operations for key destruction

The technique is particularly critical for GDPR compliance (right to erasure) and has been endorsed by NIST, the European Data Protection Board, and multiple national data protection authorities.
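
The mechanics are simple: encrypt each subject's data under its own key, then "erase" by destroying the key rather than hunting down every ciphertext copy. A toy sketch using a hash-based keystream as a stand-in for AES-GCM (illustration only, not a secure cipher; class and method names are hypothetical):

```python
import hashlib
import secrets

def xor_stream(key, data):
    """Toy keystream cipher standing in for AES-GCM."""
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

class Store:
    def __init__(self):
        self.keys = {}    # per-subject keys (in practice: a KMS)
        self.blobs = {}   # durable ciphertext, possibly replicated/backed up

    def put(self, subject, plaintext):
        key = self.keys.setdefault(subject, secrets.token_bytes(32))
        self.blobs[subject] = xor_stream(key, plaintext)

    def get(self, subject):
        return xor_stream(self.keys[subject], self.blobs[subject])

    def shred(self, subject):
        # Destroy only the key: every ciphertext copy, including backups
        # we can no longer reach, becomes unrecoverable.
        del self.keys[subject]

store = Store()
store.put("user-42", b"date of birth: 1990-01-01")
assert store.get("user-42") == b"date of birth: 1990-01-01"
store.shred("user-42")
# The blob still exists, but without the key it is indistinguishable
# from random noise.
```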

Maturity Assessment

Fully mature. The only implementation challenge is key management complexity: organizations must maintain a reliable mapping between data subjects and their encryption keys, ensure keys are properly rotated, and guarantee that key destruction is irreversible. For ephemeral architectures like Stealth Cloud, where keys exist only in session memory, these management challenges are eliminated by design.

The Composition Principle

No single PET addresses every threat. The effective privacy architecture is a composition:

Data at rest: AES-256-GCM encryption with cryptographic shredding for destruction.

Data in transit: End-to-end encryption with post-quantum key exchange.

Data in use: TEEs for general computation, FHE for specific workloads, MPC for multi-party scenarios.

Statistical outputs: Differential privacy for aggregate analytics.

Verification: Zero-knowledge proofs for compliance and correctness assertions.

Training data: Federated learning with DP for model training on distributed data.

Cross-organization collaboration: Data clean rooms (preferably TEE-backed or MPC-backed) for joint analysis.

The stack is additive. Each layer addresses a different attack vector. A system that encrypts data at rest but processes it in plaintext is vulnerable. A system that uses TEEs but does not destroy keys after session end retains risk. A system that uses DP for analytics but not encryption for storage has a gap. Defense-in-depth means deploying PETs at every layer where data is exposed.

Convergence of PETs and AI

The single largest driver of PET adoption is AI. Every AI workload that processes personal data – training, fine-tuning, inference, embedding generation, RAG retrieval – creates a privacy exposure point that PETs can address. The AI training data practices of major companies are under legal, regulatory, and public scrutiny. PETs offer a path to continue AI development while respecting data rights.

Specific convergence points:

  • FHE + ML inference: Encrypted AI inference (Zama Concrete ML, Microsoft SEAL)
  • DP + model training: Differentially private fine-tuning (Google DP-SGD, OpenAI’s DP research)
  • TEE + LLM serving: Confidential AI inference (Apple Private Cloud Compute, Azure Confidential VMs for AI)
  • ZKP + AI verification: Proving correct model inference without revealing inputs or weights (EZKL, Modulus Labs)

Hardware Acceleration

The performance gap between plaintext and encrypted computation is narrowing. FHE ASICs (DARPA DPRIVE, Cornami), TEE improvements (Intel TDX 2.0, ARM CCA Realms), and ZKP hardware accelerators (Cysic, Ingonyama) are all in development. The trajectory suggests that by 2030, encrypted computation will be within 10-100x of plaintext performance for most workloads – a threshold where the privacy benefit justifies the cost for a wide range of applications.

Regulatory Mandates

The EU AI Act (effective 2025-2026) mandates risk assessments for high-risk AI systems that implicitly require PETs for compliance. The EU Data Act (2024) imposes data sharing obligations that require privacy-preserving mechanisms. Brazil’s LGPD, India’s DPDP Act, and Saudi Arabia’s PDPL all include provisions that make PET adoption increasingly necessary.

Gartner predicts that by 2028, 60% of large enterprises will use at least one PET in production, up from 25% in 2024.

The Stealth Cloud Perspective

The PET landscape is a toolkit, not a solution. Each technology addresses a specific threat vector, and none is sufficient alone. Stealth Cloud composes the full stack – AES-256-GCM for encryption, cryptographic shredding for destruction, TEEs for compute isolation, and zero-knowledge proofs for verification – into an architecture where privacy is not a feature toggle but a structural property. The question for any system claiming privacy is not “which PET do you use?” but “which layer did you leave unprotected?”