As of 2026, every major cloud provider encrypts data at rest by default. AWS enabled default S3 encryption in January 2023. Azure has encrypted storage by default since 2017. Google Cloud has encrypted at rest since inception. This encryption is real, functional, and meaningfully protective against one specific threat: physical theft of storage media from a datacenter.

Against every other threat — legal compulsion, insider access, provider compromise, government surveillance — default encryption is a locked door where the landlord keeps a copy of the key. The data is encrypted. The provider holds the decryption capability. The encryption protects the provider’s customers from external attackers. It does not protect them from the provider itself.

Understanding this distinction — encryption that protects from external threats versus encryption that protects from the infrastructure operator — is the essential lens for evaluating cloud-native encryption architecture. Most cloud encryption falls into the first category. Meaningful privacy requires the second.

The Three Domains of Encryption

Cloud encryption operates across three domains, each with different threat models and different levels of maturity.

Encryption at Rest

Data stored on disk is encrypted using symmetric encryption — typically AES-256 in GCM or XTS mode. The encryption is transparent: applications read and write plaintext, and the storage layer handles encryption and decryption.

The key hierarchy for encryption at rest typically follows an envelope encryption model:

  1. Data Encryption Keys (DEKs): Unique per data object (file, database row, storage block). Generated by the storage service.
  2. Key Encryption Keys (KEKs): Encrypt the DEKs. Managed by the cloud provider’s KMS.
  3. Master Keys: Encrypt the KEKs. Stored in HSMs within the provider’s infrastructure.

When a hyperscaler says “your data is encrypted at rest with AES-256,” they mean: your data is encrypted with a DEK, which is encrypted with a KEK, which is encrypted with a master key, all of which the provider controls. The encryption protects against physical media theft. It does not protect against any entity with access to the KMS — including the provider itself and any government with legal authority over the provider.

According to Thales’s 2025 Cloud Security Study, 38% of organizations believe that cloud provider default encryption is sufficient for their compliance requirements. Among those organizations, 72% had not evaluated whether their threat model includes the cloud provider as a potential adversary. This gap between perceived and actual protection is the central risk of relying on provider-managed encryption.

Encryption in Transit

Data moving between services is encrypted using TLS (Transport Layer Security). As of 2026, TLS 1.3 is the standard, with TLS 1.2 as a fallback. All major cloud providers require TLS for API connections and encrypt inter-datacenter traffic.

TLS encrypts the payload but does not encrypt all metadata. The SNI (Server Name Indication) field in the TLS handshake reveals the destination hostname. Connection timing, packet sizes, and IP addresses are visible to network observers. Encrypted Client Hello (ECH), standardized in 2025, addresses SNI leakage but is not yet universally deployed.

Within service mesh architectures, mutual TLS (mTLS) provides both encryption and authentication for inter-service traffic. This is a meaningful improvement over standard TLS because it verifies identity in both directions and prevents a compromised service from impersonating another.

Encryption in Use

The hardest problem. Data being processed — in CPU registers, in RAM, in cache — must be decrypted to be operated on. Traditional encryption protects data at rest and in transit but leaves it exposed during processing.

Confidential computing addresses this gap using hardware-based trusted execution environments (TEEs): Intel SGX enclaves, AMD SEV-SNP, and ARM TrustZone/CCA. These technologies encrypt data in memory and restrict access to it — even the hypervisor and cloud provider cannot read data inside a TEE.

The privacy significance: confidential computing is the first technology that structurally prevents the cloud provider from accessing customer data during processing. It moves encryption from “protection against external attackers” to “protection against the infrastructure operator.”

AMD SEV-SNP is available on Azure (Confidential VMs), GCP (Confidential GKE Nodes), and AWS (via AMD instances). Intel TDX is available on Azure and GCP as of early 2026. Adoption remains limited — Gartner estimated that fewer than 5% of cloud workloads ran in confidential computing environments in 2025 — but the technology is production-ready.

The Key Management Hierarchy

Encryption is only as strong as its key management. A flawless AES-256 implementation is worthless if the key is stored in an environment variable, logged in a debug message, or accessible to anyone with provider console access.

Provider-Managed Keys

The default. The cloud provider generates, stores, rotates, and controls access to encryption keys. The customer never sees the key material. This is operationally simple and provides no protection against the provider.

Customer-Managed Keys (CMK)

The customer creates and manages keys within the provider’s KMS (AWS KMS, Azure Key Vault, Google Cloud KMS). The customer controls access policies and rotation schedules. The key material resides in the provider’s HSMs but is accessible only through customer-defined policies.

CMK provides meaningful protection against accidental cross-customer data access and limits the blast radius of internal provider breaches. It does not protect against the provider itself, because the key material resides in the provider’s hardware.

Bring Your Own Key (BYOK)

The customer generates keys externally and imports them into the provider’s KMS. The customer “owns” the key — they generated it and can delete the provider’s copy. However, during active use, the key material exists in the provider’s HSMs. BYOK is a trust improvement over provider-managed keys but not a trust elimination.

External Key Management (EKM)

The key material never enters the provider’s infrastructure. Encryption and decryption requests are routed from the provider’s services to the customer’s external key manager (on-premises HSM, third-party KMS like Thales CipherTrust or Fortanix DSM). The provider sees only ciphertext and an API call to the external KMS.

EKM provides the strongest protection against the provider. If the customer revokes EKM access, the provider loses the ability to decrypt the data — immediately, unilaterally, and verifiably. This is the “kill switch” that BYOK and CMK cannot provide.

The 2025 KMIP Industry Adoption Survey found that EKM is used by fewer than 8% of cloud-encrypted workloads. The barriers are operational complexity, latency (every cryptographic operation requires a network call to the external KMS), and cost (dedicated HSM infrastructure is expensive). These barriers are real but not insurmountable for organizations whose threat model includes the provider.

Client-Side Encryption: The Architectural Shift

All the encryption models discussed above share a common assumption: the cloud provider is involved in the encryption process. Even with EKM, the provider’s services interact with the key management system to encrypt and decrypt data.

Client-side encryption eliminates the provider entirely from the cryptographic path. Data is encrypted in the client environment — browser, mobile app, on-premises application — before it is transmitted to the cloud. The cloud provider stores, processes, and transmits ciphertext. The provider never has access to plaintext data or encryption keys.

How Client-Side Encryption Works

The typical client-side encryption flow:

  1. Key generation: The client generates a data encryption key using a cryptographically secure random number generator. Web Crypto API provides crypto.getRandomValues() and crypto.subtle.generateKey() for browser-based key generation.

  2. Encryption: The client encrypts the data using AES-256-GCM with the generated key. GCM (Galois/Counter Mode) provides both confidentiality and authenticity — the ciphertext includes an authentication tag that detects tampering.

  3. Transmission: The ciphertext is sent to the cloud service. The service stores it without any ability to decrypt it.

  4. Key management: The data encryption key is either held locally (browser storage, hardware key), encrypted with a master key derived from the user’s authentication (wallet signature, passphrase), or split across multiple locations using Shamir’s Secret Sharing.

  5. Decryption: When the data is needed, the client retrieves the ciphertext from the cloud, decrypts it locally, and processes the plaintext in the client environment.

The Performance Question

Client-side encryption introduces CPU overhead on the client device. Modern devices handle this efficiently: AES-NI hardware acceleration has shipped in mainstream x86 processors since 2010, and the equivalent ARM cryptography extensions arrived with ARMv8. The Web Crypto API leverages hardware AES acceleration in all modern browsers.

Benchmarks on a mid-range 2025 laptop show:

Operation        Time (AES-256-GCM via Web Crypto API)
Encrypt 1 KB     0.02 ms
Encrypt 1 MB     1.8 ms
Encrypt 100 MB   180 ms
Decrypt 1 KB     0.02 ms
Decrypt 1 MB     1.7 ms
Decrypt 100 MB   175 ms

For the typical cloud application — API requests, chat messages, document fragments — the encryption overhead is imperceptible. For bulk data operations (database migrations, large file transfers), the overhead is measurable but modest relative to network transfer time.

What Client-Side Encryption Cannot Do

Client-side encryption limits what the cloud can do with the data. If the data is encrypted before reaching the server, the server cannot:

  • Search the data (without specialized techniques like searchable encryption or homomorphic encryption, both of which remain impractical at scale for general-purpose computation)
  • Index the data for fast retrieval
  • Analyze the data for insights or recommendations
  • Validate the data against business rules
  • Compress the data efficiently (encrypted data is incompressible)

These limitations are real and represent the fundamental tradeoff of client-side encryption: privacy versus server-side functionality. Applications that require server-side search, analysis, or processing must either accept server-side plaintext access or adopt specialized cryptographic techniques.

For applications where the server is a storage and relay layer — messaging, file storage, chat interfaces — client-side encryption imposes minimal functional cost. The server does not need to understand the data to store and deliver it.

Encryption Architecture Patterns

Pattern 1: Envelope Encryption with External Root Key

The most common production pattern. Data is encrypted with a DEK. The DEK is encrypted with a KEK held in an external key manager. The encrypted DEK is stored alongside the ciphertext. Decryption requires calling the external KMS to unwrap the DEK.

Privacy strength: The cloud provider never holds the root key. If the external KMS is unavailable, the data is inaccessible. Revoking KMS access is equivalent to crypto-shredding.

Operational cost: Every data access requires a KMS call. Caching unwrapped DEKs in memory reduces call frequency but introduces a window where the key exists in cloud infrastructure RAM.

Pattern 2: End-to-End Client Encryption

Data is encrypted and decrypted exclusively in client environments. The server stores and transmits ciphertext. No server-side key material exists.

Privacy strength: Maximum. The server is structurally incapable of reading the data. Zero-knowledge architecture by construction.

Operational cost: Server-side search, indexing, and analysis are impossible without client-side processing or specialized cryptographic techniques. Key management becomes the client’s responsibility, introducing key loss risk.

Pattern 3: Confidential Computing with Attestation

Data is decrypted inside a hardware-attested TEE. The TEE’s integrity is verified via remote attestation before keys are released. The cloud provider cannot access data inside the TEE even with hypervisor-level access.

Privacy strength: High, but dependent on hardware trust (the CPU manufacturer must be trusted). Side-channel attacks against Intel SGX (Foreshadow, Plundervolt) have demonstrated that TEE isolation is not absolute.

Operational cost: Limited to hardware that supports confidential computing. Performance overhead of 5-30% depending on workload and TEE technology. Application modifications may be required for enclave-compatible deployment.

Pattern 4: Layered Encryption

Combining multiple patterns. Client-side encryption for data content. TLS for transport. Service mesh mTLS for inter-service communication. Confidential computing for processing that requires server-side decryption. Each layer addresses a different threat vector.

Privacy strength: Defense in depth. Compromise of any single layer does not expose plaintext data.

Operational cost: Highest complexity. Multiple key management systems, multiple encryption libraries, and multiple performance overhead sources. Requires careful architecture to avoid the “encryption everywhere, security nowhere” pattern where overlapping encryption schemes introduce bugs without adding security.

The Post-Quantum Consideration

Current cloud encryption relies on AES-256 for symmetric encryption and RSA or ECDH for key exchange. AES-256 is believed to be quantum-resistant (Grover’s algorithm reduces effective key length to 128 bits, which remains secure). RSA and ECDH are not — Shor’s algorithm, running on a sufficiently capable quantum computer, can break both.

NIST finalized its post-quantum cryptography standards in August 2024: ML-KEM (Kyber) for key encapsulation and ML-DSA (Dilithium) for digital signatures. Cloud providers are beginning integration:

  • AWS has integrated ML-KEM into its TLS libraries for API connections.
  • Google has deployed hybrid post-quantum key exchange (X25519+ML-KEM) in Chrome and Google Cloud TLS endpoints.
  • Azure has published guidance on post-quantum migration for Key Vault and Azure TLS.

For cloud encryption architecture, the post-quantum transition primarily affects key exchange (TLS handshakes, KMS API authentication) rather than data encryption (AES-256 remains secure). However, the “harvest now, decrypt later” threat — adversaries storing encrypted traffic today to decrypt with future quantum computers — means that data encrypted with non-quantum-resistant key exchange is already at risk if it must remain confidential for decades.

Organizations handling data with long-term confidentiality requirements (medical records, government secrets, intellectual property) should be implementing hybrid post-quantum TLS today, not waiting for full standard adoption.

Measuring Encryption Effectiveness

Encryption exists on a spectrum. These metrics help organizations assess where they fall:

Metric                           Target                    What It Measures
Encryption coverage              100% of data at rest      Percentage of data stores with encryption enabled
Key management sovereignty       100% for sensitive data   Percentage of encryption keys held outside the provider
Default encryption posture       Encrypt-by-default        Whether new resources are encrypted without manual configuration
Certificate rotation frequency   <24 hours for mTLS        How often TLS/mTLS certificates are rotated
Key rotation frequency           Annual minimum            How often data encryption keys are rotated
Client-side encryption coverage  100% for PII              Percentage of sensitive data encrypted before leaving the client
Crypto-shred capability          <1 hour                   Time to destroy all keys and render data unrecoverable

The most revealing metric is key management sovereignty. An organization that encrypts 100% of its data at rest but uses provider-managed keys for 100% of that encryption has achieved protection against media theft and nothing else. An organization that uses external key management for all sensitive data has achieved protection against the provider — a meaningfully different security posture.

The Stealth Cloud Perspective

The cloud encryption landscape in 2026 offers a full spectrum of options from provider-managed keys (convenient, minimal protection) to client-side encryption with no server-side key material (operationally constrained, maximum protection). Most organizations cluster at the convenient end of this spectrum because the operational cost of stronger encryption has historically been prohibitive.

Stealth Cloud is built on the conviction that client-side encryption with zero server-side key access is the only encryption architecture that delivers genuine privacy. Not privacy from external attackers (which provider-managed encryption already provides) but privacy from the infrastructure itself.

The architectural choices follow directly: AES-256-GCM encryption in the browser via Web Crypto API. Keys derived from wallet signatures, held only in the client environment. PII stripping before encryption so that even the plaintext seen by the client’s encryption layer contains no identifying information. Ephemeral server-side infrastructure that processes only ciphertext and retains nothing after the request completes.

This is not encryption as a compliance checkbox. It is encryption as an architectural constraint that structurally prevents the wrong entity from seeing the data. The distinction matters because compliance-driven encryption protects against auditors, while architecture-driven encryption protects against reality. In reality, providers can be compelled, insiders can be compromised, and infrastructure can be subpoenaed. Cryptography that accounts for these realities is cryptography that works. Everything else is a locked door with the key under the mat.