Data has three states: at rest, in transit, and in use. Encryption for the first two has been standard practice for over a decade. AES-256 for storage. TLS 1.3 for transport. The third state — data in use, actively being processed by a CPU — remained unprotected until hardware vendors began shipping silicon that could compute on encrypted data without ever exposing the plaintext to the host operating system.
This is confidential computing: the use of hardware-based trusted execution environments to protect data during processing. It is the missing piece of the encryption lifecycle, and it fundamentally changes what cloud infrastructure can guarantee about privacy. When a CPU can prove — through cryptographic attestation — that it is running authorized code on encrypted data, and that no software on the host machine (including the hypervisor and the cloud provider’s management stack) can access that data, the trust model of cloud computing inverts.
The Confidential Computing Consortium — a Linux Foundation project with members including Intel, AMD, ARM, Google, Microsoft, Red Hat, and VMware — has grown from 7 founding members in 2019 to over 50 in 2026. Confidential computing VM instances are now available on all three major cloud providers. The technology has moved from research labs to production infrastructure — but understanding what it does and does not guarantee requires examining the silicon.
Intel TDX: Trust Domains
Intel Trust Domain Extensions (TDX), first shipped in 4th Generation Xeon Scalable processors (Sapphire Rapids) and expanded in 5th Generation (Emerald Rapids), introduces the concept of Trust Domains (TDs) — hardware-isolated virtual machines that are protected from the host VMM (Virtual Machine Monitor), other TDs, and even Intel’s own management firmware.
Architecture
TDX operates through a new CPU mode called SEAM (Secure Arbitration Mode). The TDX Module — a signed Intel firmware component loaded during boot — runs in SEAM mode and manages the lifecycle of Trust Domains. The key architectural components are:
Memory encryption. TDX uses MKTME (Multi-Key Total Memory Encryption) to encrypt each TD’s memory with a unique AES-128-XTS key. The key is generated by the CPU hardware and is never accessible to software — not the hypervisor, not the TDX Module, not any management agent. Physical memory attacks (cold boot, DMA) yield only ciphertext.
TD partitioning. Each TD has its own private page tables, enforced by the CPU hardware. The hypervisor can allocate and deallocate memory pages to TDs but cannot read or write the contents of those pages. This is enforced at the hardware level — a hypervisor attempting to access TD memory receives an encrypted (and therefore useless) view.
Interrupt and exception handling. TDX virtualizes interrupt delivery so that the hypervisor cannot use interrupt injection as a side channel. TD exits (when the TD must yield to the hypervisor for I/O or other operations) are controlled by the TD itself, reducing the information leakage through the hypervisor interface.
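The observable property these components combine to deliver — TD software reads its own plaintext, while the hypervisor addressing the same pages sees only ciphertext — can be modelled in a short sketch. This is a toy model, not real cryptography: a hash-based stream cipher stands in for the AES-128-XTS engine, and a Python dict stands in for the hardware key slots that software can never read.

```python
import hashlib
import os

def keystream(key: bytes, page_addr: int, length: int) -> bytes:
    # Toy stream cipher: SHA-256(key || page address || counter).
    # A stand-in for the AES-128-XTS memory engine, NOT real crypto.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(
            key + page_addr.to_bytes(8, "big") + counter.to_bytes(4, "big")
        ).digest()
        counter += 1
    return out[:length]

class MKTMEModel:
    """Conceptual model: one hardware-held key per Trust Domain."""
    def __init__(self):
        self._keys = {}    # td_id -> key; never exposed to "software"
        self._memory = {}  # page_addr -> ciphertext

    def create_td(self, td_id: str):
        self._keys[td_id] = os.urandom(16)  # generated in hardware

    def td_write(self, td_id: str, page_addr: int, plaintext: bytes):
        ks = keystream(self._keys[td_id], page_addr, len(plaintext))
        self._memory[page_addr] = bytes(a ^ b for a, b in zip(plaintext, ks))

    def td_read(self, td_id: str, page_addr: int) -> bytes:
        ct = self._memory[page_addr]
        ks = keystream(self._keys[td_id], page_addr, len(ct))
        return bytes(a ^ b for a, b in zip(ct, ks))

    def hypervisor_read(self, page_addr: int) -> bytes:
        # The hypervisor can address the page but holds no key slot:
        # it gets only ciphertext.
        return self._memory[page_addr]

mem = MKTMEModel()
mem.create_td("td0")
mem.td_write("td0", 0x1000, b"patient record #4711")
assert mem.td_read("td0", 0x1000) == b"patient record #4711"
assert mem.hypervisor_read(0x1000) != b"patient record #4711"
```

The same model explains why physical attacks (cold boot, DMA) fail: they land on `_memory`, never on `_keys`.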
Attestation
TDX attestation allows a remote party to verify that a TD is running the expected software stack on genuine Intel hardware. The attestation flow:
- The TD generates an attestation report containing a measurement of its initial state (code, data, configuration).
- The report is signed by the CPU’s hardware attestation key, which is derived from Intel’s Key Provisioning infrastructure.
- The remote verifier checks the report against Intel’s attestation service, confirming that the report was generated by genuine Intel TDX hardware and that the TD’s measurement matches the expected value.
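The measurement at the heart of this flow is a hash chain over the TD's initial contents. The sketch below is illustrative — real TDX builds the MRTD value by SHA-384-extending the digest across specific TD-build operations, not raw page lists as shown here — but it captures the property the verifier relies on: any change to the loaded software changes the measurement.

```python
import hashlib

def measure_initial_state(pages: list[bytes]) -> bytes:
    # Illustrative measurement: extend a SHA-384 digest with the hash
    # of each page added at TD build time.
    m = b"\x00" * 48
    for page in pages:
        m = hashlib.sha384(m + hashlib.sha384(page).digest()).digest()
    return m

# Hypothetical TD image components:
kernel = b"vmlinuz bytes ..."
initrd = b"initrd bytes ..."
config = b"td config: 4 vcpus, 8 GiB"

mrtd = measure_initial_state([kernel, initrd, config])

# The remote verifier independently computes the expected value for
# the approved image and compares:
expected = measure_initial_state([kernel, initrd, config])
assert mrtd == expected

# A single modified byte in the loaded software yields a different
# measurement, so the attestation check fails:
assert measure_initial_state([kernel + b"\x90", initrd, config]) != expected
```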
The attestation chain roots to Intel’s manufacturing process — each CPU contains a unique identity key burned in during fabrication. This creates a trust dependency on Intel: you must trust that Intel’s manufacturing process is not compromised, that Intel’s attestation service is honest, and that Intel has not been compelled to provide attestation keys to a third party.
For organizations whose threat model includes nation-state adversaries with potential access to Intel’s supply chain, this trust dependency is significant. Intel is a US company subject to US legal process, including classified orders under FISA. Whether a FISA order could compel Intel to compromise its attestation infrastructure is a legal question without a public answer.
Performance
TDX imposes a performance overhead of 2-8% for compute-intensive workloads, measured across standard benchmarks. Memory-intensive workloads see higher overhead (10-15%) due to the encryption/decryption of memory pages. I/O-intensive workloads are most affected (15-25%) due to the cost of TD exits for device access.
Azure’s benchmark data for TDX-enabled instances shows a 5% average overhead for general-purpose workloads, making TDX viable for production deployments without significant capacity increases.
AMD SEV-SNP: Secure Encrypted Virtualization
AMD’s Secure Encrypted Virtualization (SEV), first introduced in EPYC Naples processors (2017), predates Intel TDX by several years. The current generation — SEV-SNP (Secure Nested Paging), introduced in 3rd Generation EPYC (Milan) and carried forward in 4th Generation (Genoa) — represents the most mature confidential computing platform in production.
Architecture
SEV-SNP builds on three generations of technology:
SEV (2017). Basic memory encryption per VM using AES-128. The hypervisor cannot read VM memory, but can remap memory pages (potentially redirecting a VM’s reads to attacker-controlled data).
SEV-ES (2019). Encrypted State adds register state protection. VM register contents are encrypted when the hypervisor handles VM exits, preventing the hypervisor from reading or modifying CPU register values.
SEV-SNP (2021). Secure Nested Paging adds integrity protection. The hardware maintains a Reverse Map Table (RMP) that tracks the ownership of every physical page. The hypervisor cannot remap, replay, or alias memory pages — eliminating the entire class of memory remapping attacks that SEV’s initial design was vulnerable to.
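The RMP's protection can be illustrated with a conceptual model. This is a simplification — the real RMP is a hardware structure with more fields, walked by the CPU on every nested page-table translation — but it shows why remapping fails: a page is only accessible at the exact guest address the guest validated it for.

```python
class ReverseMapTable:
    """Conceptual RMP: one entry per physical page, recording its owner
    VM, the guest address it was validated at, and whether the guest
    has validated it. Illustrative only."""
    def __init__(self):
        self.entries = {}  # phys_page -> (owner_vm, guest_page, validated)

    def assign(self, phys_page: int, owner_vm: str, guest_page: int):
        # Hypervisor assigns a page to a VM (it controls allocation).
        self.entries[phys_page] = (owner_vm, guest_page, False)

    def validate(self, owner_vm: str, phys_page: int, guest_page: int):
        # Guest accepts the page exactly once (models PVALIDATE).
        owner, gpa, _ = self.entries[phys_page]
        assert owner == owner_vm and gpa == guest_page
        self.entries[phys_page] = (owner, gpa, True)

    def check_access(self, vm: str, phys_page: int, guest_page: int) -> bool:
        # Hardware check on every access: page must belong to this VM,
        # be validated, and be mapped at the same guest address it was
        # validated for. This blocks remap, replay, and alias attacks.
        owner, gpa, validated = self.entries[phys_page]
        return owner == vm and validated and gpa == guest_page

rmp = ReverseMapTable()
rmp.assign(0x42, "vm1", 0x1000)
rmp.validate("vm1", 0x42, 0x1000)
assert rmp.check_access("vm1", 0x42, 0x1000)      # legitimate access
assert not rmp.check_access("vm1", 0x42, 0x2000)  # hypervisor remap fails
assert not rmp.check_access("vm2", 0x42, 0x1000)  # other VM fails
```

Original SEV lacked the `check_access` step, which is exactly the gap that made remapping attacks possible.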
The AMD Secure Processor (ASP) — a dedicated ARM Cortex-A5 core embedded in every EPYC CPU — manages the SEV key hierarchy. Each VM receives a unique VEK (VM Encryption Key) generated by the ASP. The VEK never leaves the ASP and is not accessible to any x86 software.
SNP Attestation
SEV-SNP attestation is architecturally similar to TDX attestation but roots to AMD’s hardware:
- The VM requests an attestation report from the ASP.
- The ASP generates a report containing the VM’s launch measurement, the platform’s TCB (Trusted Computing Base) version, and a guest-provided nonce.
- The report is signed by the VCEK (Versioned Chip Endorsement Key), a key derived from the chip’s unique identity and the current firmware version.
- The verifier validates the report against AMD’s Key Distribution Service (KDS).
AMD’s attestation model has one advantage over Intel’s: VCEK derivation is deterministic from the chip ID and firmware version, allowing offline verification without contacting AMD’s service. Intel’s attestation requires online interaction with Intel’s provisioning infrastructure.
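The determinism that enables offline verification can be sketched as a key-derivation function. The real VCEK is an ECDSA P-384 key derived inside the AMD Secure Processor from fused per-chip secrets and the reported TCB version; the HMAC construction below is only a stand-in to model the two properties that matter — the same (chip, TCB) inputs always yield the same key, and a different TCB version yields a different key.

```python
import hashlib
import hmac

def derive_vcek(chip_secret: bytes, tcb_version: int) -> bytes:
    # Illustrative KDF modelling deterministic VCEK derivation.
    # NOT the real derivation; see AMD's SEV-SNP ABI specification.
    return hmac.new(chip_secret,
                    b"VCEK" + tcb_version.to_bytes(8, "big"),
                    hashlib.sha384).digest()

chip_secret = b"fused-at-fab-secret"  # hypothetical per-chip secret

# The ASP signs the report with the key derived for the current TCB;
# a verifier that has cached the chip's endorsement from AMD's KDS
# can re-derive and check offline, with no per-attestation lookup.
report = b"launch-measurement || nonce || tcb=7"
sig = hmac.new(derive_vcek(chip_secret, 7), report, hashlib.sha384).digest()
assert hmac.compare_digest(
    sig, hmac.new(derive_vcek(chip_secret, 7), report, hashlib.sha384).digest())

# TCB versioning: reports signed under old firmware cannot masquerade
# as reports from patched firmware, because the derived key differs.
assert derive_vcek(chip_secret, 6) != derive_vcek(chip_secret, 7)
```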
The trust dependency on AMD mirrors the trust dependency on Intel. AMD is a US company, subject to the same legal framework. The attestation infrastructure is a critical trust anchor — its compromise would undermine the confidential computing guarantee for every AMD SEV deployment globally.
Performance
SEV-SNP’s performance overhead varies by workload:
- Compute-bound: 1-5% overhead (dedicated AES engines in the memory controller handle encryption with minimal latency)
- Memory-bound: 5-10% overhead
- I/O-bound: 8-20% overhead
- Database workloads: 3-8% overhead (AMD’s benchmarks using PostgreSQL and MySQL)
SEV-SNP’s performance profile is generally stronger than TDX for compute-heavy workloads, reflecting AMD’s longer optimization cycle and architectural decisions around memory encryption offloading.
ARM CCA: Realms for the Edge
ARM Confidential Compute Architecture (CCA), introduced with the ARMv9 architecture, extends confidential computing to the ARM ecosystem — including mobile devices, IoT, and edge infrastructure.
Architecture
CCA introduces a new execution state called Realm. Realms are hardware-isolated execution environments protected from the normal world (the OS and hypervisor), the secure world (TrustZone), and other Realms. This four-world model (Normal, Secure, Realm, Root) provides finer-grained isolation than Intel or AMD architectures.
The Realm Management Monitor (RMM) — a new firmware component analogous to Intel’s TDX Module — manages Realm lifecycle. Isolation is enforced by the Granule Protection Table (GPT), which tags each granule of physical memory with its owning world, while memory encryption uses per-Realm keys managed by hardware.
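The GPT's per-granule world check can be modelled conceptually. This is an illustrative simplification of the access rules in ARM's Realm Management Extension: the Root world (the monitor) can reach every physical address space, while each other world is confined to the granules tagged for it.

```python
from enum import Enum

class World(Enum):
    NORMAL = "normal"   # OS and hypervisor
    SECURE = "secure"   # TrustZone
    REALM = "realm"     # confidential workloads
    ROOT = "root"       # monitor firmware

class GranuleProtectionTable:
    """Conceptual GPT: tags each memory granule with its owning world;
    hardware checks every access against the tag. Illustrative only."""
    def __init__(self):
        self.granules = {}  # granule_addr -> World

    def assign(self, addr: int, world: World):
        self.granules[addr] = world

    def check_access(self, accessor: World, addr: int) -> bool:
        if accessor == World.ROOT:
            return True                      # monitor reaches everything
        return accessor == self.granules[addr]

gpt = GranuleProtectionTable()
gpt.assign(0x8000, World.REALM)
assert gpt.check_access(World.REALM, 0x8000)       # the Realm itself
assert gpt.check_access(World.ROOT, 0x8000)        # the monitor
assert not gpt.check_access(World.NORMAL, 0x8000)  # OS/hypervisor blocked
assert not gpt.check_access(World.SECURE, 0x8000)  # TrustZone blocked too
```

The last assertion is the point of the four-world model: Realm memory is protected not only from the normal world but from the legacy secure world as well.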
CCA’s edge computing focus is strategically significant for privacy infrastructure. Ephemeral edge workloads — processing sensor data, running local AI inference, handling real-time analytics — benefit from hardware-enforced confidentiality without requiring cloud connectivity. Data processed in a CCA Realm on an edge device never needs to leave the device in plaintext.
Status
ARMv9 Neoverse cores already power cloud silicon such as AWS Graviton4 and Microsoft Cobalt, with full CCA (Realm Management Extension) support arriving in Neoverse V3 and subsequent designs. Cloud availability of CCA-enabled instances lags behind Intel TDX and AMD SEV, with initial offerings expected to scale through 2026-2027.
Attestation: The Trust Chain
Attestation is the mechanism that makes confidential computing verifiable rather than merely claimed. Without attestation, a user must trust the cloud provider’s assertion that confidential computing is active. With attestation, the user can cryptographically verify the claim.
The attestation process follows a standard pattern across all three hardware vendors:
1. Workload measures its own initial state (code hash, configuration)
2. Hardware signs the measurement with a platform-specific key
3. Remote verifier receives the signed measurement
4. Verifier checks:
a. Signature validity (was this produced by genuine hardware?)
b. Platform identity (which specific CPU/firmware version?)
c. Measurement match (is the workload running the expected code?)
d. Freshness (is this attestation recent, not replayed?)
5. If all checks pass: provide secrets (encryption keys, data access tokens)
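The five-step pattern above maps directly onto a verifier routine. In this sketch an HMAC with a shared key stands in for the hardware's asymmetric attestation key, which folds check (b) into check (a) — on real platforms the signature chains to a vendor-issued certificate that also identifies the specific CPU and firmware version.

```python
import hashlib
import hmac
import os
import time

HARDWARE_KEY = os.urandom(32)   # stand-in for the CPU attestation key
TRUSTED_MEASUREMENTS = set()    # allowlist of approved workload hashes
MAX_AGE_SECONDS = 300

def hardware_sign(measurement: bytes, nonce: bytes, ts: float) -> bytes:
    # Models step 2: hardware signs the measurement (plus nonce and
    # timestamp). Real platforms use ECDSA keys certified by the vendor.
    msg = measurement + nonce + int(ts).to_bytes(8, "big")
    return hmac.new(HARDWARE_KEY, msg, hashlib.sha256).digest()

def verify_and_release(measurement, nonce, ts, sig, expected_nonce, secret):
    msg = measurement + nonce + int(ts).to_bytes(8, "big")
    if not hmac.compare_digest(
            sig, hmac.new(HARDWARE_KEY, msg, hashlib.sha256).digest()):
        return None  # checks (a)/(b): not produced by genuine hardware
    if measurement not in TRUSTED_MEASUREMENTS:
        return None  # check (c): workload is not running approved code
    if nonce != expected_nonce or time.time() - ts > MAX_AGE_SECONDS:
        return None  # check (d): stale or replayed attestation
    return secret    # step 5: all checks passed, release the secret

# Step 1: the workload's measured initial state.
workload = hashlib.sha256(b"approved-container-image-v1").digest()
TRUSTED_MEASUREMENTS.add(workload)

nonce, ts = os.urandom(16), time.time()
sig = hardware_sign(workload, nonce, ts)
assert verify_and_release(workload, nonce, ts, sig, nonce, b"db-key") == b"db-key"

# A tampered workload produces a different measurement and gets nothing:
bad = hashlib.sha256(b"tampered-image").digest()
assert verify_and_release(bad, nonce, ts, hardware_sign(bad, nonce, ts),
                          nonce, b"db-key") is None
```

Note the asymmetry the pattern creates: secrets are never provisioned on trust; they are withheld until the hardware proves the environment.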
The Attestation Gap
Attestation proves what code is running at boot time. It does not prove what happens after boot. A workload that passes attestation can subsequently be modified by a compromised runtime (though hardware memory protection limits this attack surface). Research in runtime attestation — continuous verification of workload integrity during execution — is active but not yet production-ready.
For Stealth Cloud architectures, this gap suggests a defense-in-depth approach: attestation for initial trust establishment, combined with zero-persistence design (limiting the window during which runtime compromise can exfiltrate data) and micro-segmentation (limiting what a compromised workload can access).
The Confidential Computing Consortium
The Confidential Computing Consortium (CCC), established under the Linux Foundation in 2019, serves three functions:
Standards development. The CCC’s Attestation SIG is developing vendor-neutral attestation protocols that allow verifiers to validate attestation reports from Intel, AMD, and ARM hardware using a common framework.
Open-source projects. Key CCC projects include:
- Open Enclave SDK. A cross-platform SDK for building confidential computing applications that run on both Intel SGX and ARM TrustZone.
- Enarx. A deployment framework that enables confidential workloads to run on any TEE hardware without code changes. Enarx abstracts the differences between Intel SGX, AMD SEV, and other TEE platforms.
- Gramine. A library OS that enables unmodified Linux applications to run inside Intel SGX enclaves.
- Veracruz. A framework for privacy-preserving collaborative computation, enabling multiple parties to compute on shared data without any party seeing the others’ inputs.
Industry coordination. The CCC’s membership has expanded from 7 founding members (Alibaba, ARM, Google, Huawei, Intel, Microsoft, Red Hat) to over 50, including AMD, Meta, NVIDIA, VMware, Fortanix, Anjuna, and Edgeless Systems. This breadth suggests that confidential computing is transitioning from a competitive differentiator to a baseline capability.
Threat Model: What Confidential Computing Protects Against
Confidential computing’s guarantees are specific and bounded:
Protected against:
- Cloud provider employees accessing customer data in memory
- Hypervisor-level attacks that read VM memory or registers
- Physical attacks on server memory (cold boot, DMA)
- Co-tenant side-channel attacks (with TEE hardware, not standard VMs)
- Legal process served on the cloud provider (ciphertext only, no plaintext)
Not protected against:
- Compromised software inside the TEE (the code must be trusted)
- Hardware-level supply chain attacks on the CPU manufacturer
- Side-channel attacks that exploit TEE implementation bugs (historical examples: Foreshadow against SGX, CacheOut, Plundervolt)
- Network traffic analysis (encrypted data patterns may reveal information)
- Denial of service (the hypervisor can still terminate VMs)
The threat model makes confidential computing highly effective against the most common cloud privacy concern — provider access to customer data — while leaving residual risks that require complementary controls. Software-defined perimeters address network visibility. Ephemeral infrastructure addresses persistence. Client-side encryption with external key management addresses key exposure.
Confidential AI: The Emerging Application
The highest-growth application for confidential computing is AI workload protection. Training AI models on sensitive data (medical records, financial transactions, private communications) creates an acute privacy challenge: the model must process plaintext data to learn, but the data owner does not want to expose plaintext to the infrastructure operator.
Confidential computing addresses this directly:
Confidential training. Model training runs inside a TEE. Training data is decrypted only within the hardware-protected enclave. The cloud provider — and anyone with access to the host — sees only encrypted memory. Azure’s Confidential GPU offering (using NVIDIA H100 with confidential computing support) enables this for large-scale model training.
Confidential inference. AI inference runs inside a TEE. User prompts are decrypted, processed, and the response is encrypted — all within the hardware boundary. The provider never sees the prompt or response in plaintext. This is directly applicable to privacy-preserving AI chat and is a core architectural component of Stealth Cloud AI services.
Confidential fine-tuning. Organizations can fine-tune foundation models on their proprietary data within a TEE, producing a customized model without exposing training data to the model provider or the cloud operator.
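The confidential inference flow can be sketched from the client's side. Everything here is illustrative: the toy XOR cipher stands in for an AEAD such as AES-GCM, and the session key would in practice come from an attested key exchange (the client verifies the enclave's attestation report before trusting the key), which is elided below.

```python
import hashlib
import os

def toy_encrypt(key: bytes, msg: bytes) -> bytes:
    # XOR with a hash-derived keystream -- a placeholder for an AEAD
    # negotiated during the attested key exchange. NOT real crypto.
    ks, c = b"", 0
    while len(ks) < len(msg):
        ks += hashlib.sha256(key + c.to_bytes(4, "big")).digest()
        c += 1
    return bytes(a ^ b for a, b in zip(msg, ks))

toy_decrypt = toy_encrypt  # XOR stream: same operation both directions

# 1. Client verifies the enclave's attestation (elided), then derives
#    a session key bound to that attested session.
session_key = os.urandom(32)  # in practice: attested ECDH exchange

# 2. The prompt crosses the network and sits in host-visible buffers
#    only as ciphertext; it is decrypted inside the TEE boundary.
prompt = b"summarize this medical record"
wire = toy_encrypt(session_key, prompt)
assert wire != prompt                        # the provider sees this
inside_tee = toy_decrypt(session_key, wire)  # plaintext exists only here
assert inside_tee == prompt

# 3. The response is encrypted before it leaves the TEE, so the
#    round trip never exposes plaintext to the host.
response = b"summary: ..."
assert toy_decrypt(session_key, toy_encrypt(session_key, response)) == response
```

The design choice worth noting is that the provider operates the TEE but never holds the session key: confidentiality follows from attestation plus key exchange, not from the provider's policies.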
NVIDIA’s H100 was the world’s first confidential computing-capable GPU, with hardware-enforced isolation between GPU memory and the host system. The H200 and B100 extend these capabilities with higher performance and broader TEE integration. NVIDIA projects that by 2027, all datacenter GPUs will include confidential computing support as a standard feature.
The implications for the privacy cloud market are substantial. Confidential AI removes the primary objection to cloud-based AI processing for sensitive workloads — that the cloud provider can access the data. If the hardware can be trusted (through attestation), the provider does not need to be.
The $5.8 Billion Question
Confidential computing infrastructure generated approximately $5.8 billion in spending in 2025, growing at 38% CAGR. By 2030, the market is projected to exceed $28 billion. This growth is driven by three forces:
Hardware availability. Intel TDX and AMD SEV-SNP are now standard in server-class CPUs. The incremental cost of confidential computing hardware is approaching zero — the capability is built into silicon that would be purchased regardless.
Cloud provider adoption. Azure, GCP, and AWS all offer confidential computing instances. Azure’s portfolio is the most comprehensive, with confidential VMs, containers, and managed services. As the technology becomes available as a checkbox rather than a specialty product, adoption accelerates.
Regulatory pressure. The EU’s AI Act requires risk assessments for AI systems processing personal data. Confidential computing provides a technical measure that satisfies regulatory requirements for data protection during AI processing — making it not just a security feature but a compliance tool.
The Stealth Cloud Perspective
Confidential computing is the hardware foundation that makes Stealth Cloud architecture possible rather than aspirational. Without hardware-enforced data protection during processing, zero-knowledge cloud operations require trusting the provider’s software stack — a trust assumption that zero-trust principles reject. TEE hardware shifts the trust anchor from the provider’s operational practices to the silicon manufacturer’s attestation — a narrower, more verifiable, and more auditable trust dependency. The remaining challenge is ensuring that the silicon itself is trustworthy, which is why multi-vendor TEE support, open attestation standards, and hardware audit frameworks are not optional for privacy infrastructure — they are existential.