Amazon Web Services generated $107.6 billion in revenue in 2024. Microsoft Azure crossed $96 billion. Google Cloud hit $43.1 billion. Together, these three companies control approximately 67% of global cloud infrastructure spending — and every dollar of that revenue is built on a model that treats user privacy as an acceptable casualty of scale.

This is not a conspiracy. It is an architecture decision.

The public cloud was designed to maximize resource utilization across the largest possible customer base. Multi-tenancy, centralized control planes, and metadata aggregation are not bugs — they are the foundational engineering choices that make hyperscale economics work. But those same choices create structural privacy risks that no amount of compliance certifications can fully mitigate.

Understanding these trade-offs is not optional for any organization handling sensitive data. It is the prerequisite for making an informed infrastructure decision.

The Multi-Tenancy Problem

Public cloud providers achieve their margins by running workloads from thousands of customers on shared physical hardware. A single AWS Nitro host may simultaneously process data for a healthcare startup, a defense contractor, and a cryptocurrency exchange. The isolation between these workloads depends entirely on the hypervisor layer — a software boundary that, however well-engineered, remains a software boundary.

Between 2018 and 2025, researchers disclosed over 40 side-channel vulnerabilities affecting shared cloud environments. Spectre, Meltdown, and their successors demonstrated that CPU-level data leakage across tenant boundaries is not theoretical — it is a persistent architectural limitation of shared silicon.

AWS, Azure, and GCP have invested billions in mitigation. AWS developed the Nitro System specifically to reduce the hypervisor attack surface. Azure introduced confidential computing enclaves. GCP shipped Shielded VMs. These are genuine engineering achievements. But they are patches on a model that, by design, places unrelated workloads in physical proximity.

The alternative — dedicated hardware per customer — exists in every major cloud provider’s catalog. AWS Dedicated Hosts, Azure Dedicated Hosts, and GCP Sole-Tenant Nodes all offer single-tenancy. The price premium ranges from 30% to 70% over standard instances. Most organizations cannot justify the cost, which means most organizations accept shared tenancy and the residual risk it carries.
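The premium compounds quickly at fleet scale. A minimal sketch of the arithmetic, using a placeholder $1.00/hour shared-instance rate (an assumption for illustration, not a quoted price) against the 30–70% premium range cited above:

```python
# Illustrative cost arithmetic for single-tenancy. The hourly rate is a
# placeholder assumption; the 30-70% premium range is from the text above.

HOURS_PER_YEAR = 8_760

def dedicated_premium(shared_hourly: float, premium: float,
                      hours: int = HOURS_PER_YEAR) -> float:
    """Annual extra cost of single-tenancy at a given premium over shared instances."""
    return shared_hourly * premium * hours

low = dedicated_premium(1.00, 0.30)   # 30% premium
high = dedicated_premium(1.00, 0.70)  # 70% premium
print(f"Annual single-tenancy premium per host: ${low:,.0f} to ${high:,.0f}")
```

At a hundred hosts, that hypothetical range is an annual six-to-seven-figure line item, which is why most organizations accept the shared-tenancy default.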

For workloads requiring zero-persistence architecture, shared tenancy introduces an additional concern: data remnants. When a VM is terminated on shared hardware, the memory and storage it occupied are reallocated to the next tenant. Cloud providers zero memory on deallocation, but the window between deallocation and zeroing — measured in microseconds — represents a non-zero risk surface that ephemeral infrastructure models eliminate entirely.
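The same zero-before-release discipline applies at the application layer. A minimal sketch, with the caveat that Python gives no guarantees against interpreter-level copies, so this illustrates the principle rather than a hardened implementation:

```python
# Application-level analogue of zero-on-deallocation: overwrite a mutable
# buffer holding secret material before releasing it. Note: CPython may
# have made internal copies; this demonstrates the discipline, not a
# guarantee against remnants.

def scrub(buf: bytearray) -> None:
    """Overwrite every byte in place so the secret does not linger in the buffer."""
    for i in range(len(buf)):
        buf[i] = 0

secret = bytearray(b"api-key-material")
scrub(secret)
assert all(b == 0 for b in secret)  # buffer no longer holds the key bytes
```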

Data Residency: Where Your Data Actually Lives

Every major public cloud provider operates across multiple geographic regions. AWS spans 34 regions. Azure operates in more than 60. GCP covers 40. This global footprint is marketed as a feature — low-latency access from anywhere, disaster recovery across continents, compliance with local data residency laws.


The reality is more complicated. While compute instances run in specific regions, the control plane — the management layer that orchestrates everything — often centralizes in the United States. API calls, billing data, support tickets, and operational telemetry frequently traverse US-based infrastructure regardless of where the customer’s workload resides.

In 2022, the Austrian Data Protection Authority ruled that a company’s use of Google Analytics violated GDPR because telemetry data was transmitted to US servers. The same logic applies to cloud control planes. An EU-based organization running workloads in Frankfurt on AWS still sends IAM authentication requests, CloudTrail logs, and billing metadata through AWS’s globally distributed (and US-anchored) management infrastructure.

This is not hypothetical non-compliance. It is the structural consequence of how hyperscale clouds are built. The control plane cannot be regionalized without fundamentally re-architecting the service — which is precisely what sovereign cloud initiatives are attempting.

The Jurisdiction Problem

The most consequential privacy risk of public cloud is not technical. It is legal.

The US CLOUD Act of 2018 (Clarifying Lawful Overseas Use of Data Act) grants US law enforcement the authority to compel any US-headquartered company to produce data stored on its servers, regardless of where those servers are physically located. A subpoena served to Amazon in Virginia can demand data stored in AWS’s Bahrain region. A warrant issued to Microsoft in Seattle can reach Azure’s Switzerland North datacenter.

This is not speculation. Microsoft fought this exact battle in United States v. Microsoft Corp. (2018), challenging a warrant for data stored in Ireland. The case was mooted by the CLOUD Act’s passage, which codified the government’s position: if the company is American, the data is reachable.

The CLOUD Act includes an executive agreement framework that theoretically allows foreign governments to negotiate bilateral access treaties. As of early 2026, only the UK, Australia, and Canada have signed such agreements. The EU has not. Switzerland has not. This means data stored by US cloud providers in European or Swiss datacenters remains subject to US government access without equivalent protections under local law.

For organizations bound by Swiss data protection law (FADP/revFADP), GDPR, or sector-specific regulations like HIPAA, this creates an irreconcilable conflict: the data is simultaneously subject to two incompatible legal regimes. The standard public cloud response — contractual clauses, Binding Corporate Rules, supplementary measures — represents legal mitigation, not architectural resolution.

The Stealth Cloud model addresses this at the infrastructure level: data that cannot be decrypted by the provider cannot be meaningfully produced in response to a subpoena, regardless of where the hardware sits.
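The mechanism behind that claim is client-side encryption with keys the provider never holds. A deliberately minimal sketch of the principle, using a standard-library one-time pad purely for illustration (a real deployment would use an authenticated cipher such as AES-GCM with customer-held keys, not this scheme):

```python
import secrets

# Principle: if the key never leaves the client, the provider (or anyone
# serving legal process on it) holds only unusable ciphertext. A one-time
# pad is used here only because it needs nothing beyond the standard
# library; it is not a production scheme.

def encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    key = secrets.token_bytes(len(plaintext))            # key stays client-side
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

record = b"sensitive customer record"
ciphertext, key = encrypt(record)     # only `ciphertext` is ever uploaded
assert decrypt(ciphertext, key) == record
```

A subpoena served on the storage provider can compel production of `ciphertext`, but without `key` the response is cryptographically meaningless.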

Metadata: The Data You Did Not Know You Were Generating

Even organizations that encrypt all data at rest and in transit on public cloud generate enormous volumes of unencrypted metadata. Every API call, every DNS lookup, every load balancer health check, every CloudWatch metric, and every access log creates a record that describes who accessed what, when, from where, and how often.

This metadata is not encrypted by default. It is not covered by customer-managed encryption keys. And it is extraordinarily revealing.

In 2024, researchers at ETH Zurich demonstrated that cloud API access patterns alone — without any access to the underlying data — could identify the type of application running, the approximate number of users, and in some cases the specific software stack with 89% accuracy. Metadata analysis of S3 access patterns could determine whether a bucket contained medical records, financial data, or media files based solely on object sizes, access frequencies, and request timing.
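The intuition is easy to demonstrate. The sketch below is not the cited study’s method; the features, thresholds, and labels are invented for illustration, showing only that coarse metadata (object sizes, request timing) already separates workload types:

```python
import statistics

# Illustration only: invented thresholds, not the researchers' classifier.
# Even two coarse metadata features, mean object size and timing regularity,
# distinguish record-oriented workloads from media storage.

def profile(object_sizes: list[int], intervals: list[float]) -> str:
    mean_size = statistics.mean(object_sizes)
    jitter = statistics.pstdev(intervals)  # low jitter = steady, machine-driven access
    if mean_size < 64_000 and jitter < 1.0:
        return "transactional: record-sized objects on a steady cadence"
    if mean_size > 5_000_000:
        return "media: large objects, bursty access"
    return "mixed"

# Small, uniform objects fetched on a steady schedule look like records:
print(profile([2_048, 4_096, 3_072], [0.5, 0.6, 0.5]))
# Very large objects fetched in irregular bursts look like media files:
print(profile([80_000_000, 120_000_000], [30.0, 400.0]))
```

No object contents are touched at any point; every input to `profile` is visible to the provider by default.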

Cloud providers retain this metadata for operational purposes: debugging, capacity planning, abuse detection, billing. Retention periods range from 90 days (the GCP default) to effectively indefinite: CloudTrail retention on AWS is whatever the customer configures, while AWS’s own operational logs have no customer-visible retention policy at all.

This metadata layer is fully accessible to the cloud provider and, by extension, to any legal process served on them. A CLOUD Act request need not demand the encrypted database — the metadata surrounding it may be sufficient to answer the question the government is actually asking.

The Compliance Theater Problem

Public cloud providers maintain extensive compliance programs. AWS lists 143 compliance certifications. Azure advertises over 100. GCP publishes more than 40. SOC 2, ISO 27001, FedRAMP, HIPAA BAA, PCI DSS — the alphabet soup of compliance is prominently featured in every sales deck.

These certifications attest that the provider’s infrastructure meets specific control requirements. They do not attest that any individual customer’s deployment is compliant. The distinction matters enormously.

A HIPAA Business Associate Agreement with AWS means AWS commits to safeguarding Protected Health Information according to HIPAA requirements. It does not mean that the customer’s application architecture, access controls, or data handling practices are HIPAA-compliant. The shared responsibility model places the burden of application-level compliance squarely on the customer.

More critically, compliance certifications assess controls at a point in time. SOC 2 Type II covers a specific audit period. ISO 27001 is valid for three years with annual surveillance audits. Between assessments, the provider’s actual practices may deviate from the certified baseline without the customer’s knowledge.

The three-paradigms framework of cloud computing clarifies this: public cloud compliance certifies the provider’s infrastructure, sovereign cloud adds jurisdictional controls, and Stealth Cloud eliminates the need to trust the provider’s compliance posture entirely by making the data architecturally inaccessible.

The Economic Lock-In Dimension

Privacy-motivated migration away from public cloud faces a secondary barrier: economic lock-in. AWS, Azure, and GCP have each built proprietary service ecosystems that create deep technical dependencies. An organization using AWS Lambda, DynamoDB, S3, SQS, and CloudFront has not merely chosen a hosting provider — it has adopted a proprietary application platform.

The cost of extracting a mature application from a hyperscale cloud is substantial. Gartner estimated in 2025 that cloud migration projects (in either direction) cost between $5,000 and $25,000 per workload, with complex applications exceeding $100,000. For organizations running hundreds of workloads, the migration cost alone creates a multi-million-dollar barrier to exit.

This lock-in is not accidental. It is the business model. Hyperscale cloud margins improve with customer dependency. Each proprietary service adopted increases switching costs and reduces the likelihood of migration — even when the organization’s privacy requirements change.

The strategic response is to architect for portability from the outset. Container-native workloads on Kubernetes, infrastructure-as-code with Terraform, and provider-agnostic encryption with customer-held keys reduce (but do not eliminate) the lock-in penalty. Zero-trust architecture principles further decouple security from the provider’s native controls.
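In application code, portability comes down to keeping cloud SDKs behind a narrow interface. A sketch of the pattern, where the class and method names are illustrative rather than any published API:

```python
from typing import Protocol

# Provider-agnostic storage: application code depends on a small interface,
# and each cloud's SDK is confined to one adapter class. Names here are
# illustrative, not a real library API.

class BlobStore(Protocol):
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class InMemoryStore:
    """Test double; a hypothetical S3Store or GCSStore adapter would
    satisfy the same Protocol using the respective SDK."""
    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}
    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data
    def get(self, key: str) -> bytes:
        return self._blobs[key]

def archive_report(store: BlobStore, key: str, payload: bytes) -> None:
    # Call sites never import a cloud SDK directly, so changing providers
    # means swapping one adapter, not rewriting application logic.
    store.put(key, payload)

store = InMemoryStore()
archive_report(store, "reports/q1", b"quarterly figures")
assert store.get("reports/q1") == b"quarterly figures"
```

The migration cost never reaches zero — data gravity and managed-service feature gaps remain — but the blast radius of a provider change shrinks from the whole codebase to a handful of adapters.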

What Public Cloud Does Well

Intellectual honesty requires acknowledging where public cloud excels. No other infrastructure model offers:

  • Global scale on demand. Scaling from 10 to 10 million users without provisioning hardware is a genuine engineering miracle.
  • Capital efficiency. Converting infrastructure from CapEx to OpEx transformed how companies form and grow.
  • Managed complexity. Services like Aurora, BigQuery, and Cosmos DB abstract operational burdens that would consume entire engineering teams.
  • Innovation velocity. The rate of new service launches across AWS, Azure, and GCP is unmatched by any alternative.

For workloads where privacy is not the primary constraint — public websites, non-sensitive analytics, development environments, open-source project hosting — public cloud remains the rational choice. The privacy cost is real, but it is not uniformly relevant.

The problem arises when organizations apply the public cloud model indiscriminately. When the same architecture that hosts the marketing website also processes medical records, financial transactions, or private communications, the privacy trade-offs of the scale-at-any-cost model become unacceptable.

The Stealth Cloud Perspective

Public cloud is the correct infrastructure for workloads that do not require privacy. For everything else, the architectural assumptions of hyperscale — shared tenancy, centralized control planes, provider-accessible metadata, US legal jurisdiction — represent structural risks that contractual safeguards cannot fully address. The Stealth Cloud model does not compete with public cloud on scale. It exists because scale and privacy are, under current hyperscale architectures, fundamentally incompatible objectives that demand a different engineering foundation.