Amazon Web Services reported $115.6 billion in revenue for fiscal year 2025, an increase of 19.1% over 2024. Microsoft’s Intelligent Cloud segment, which includes Azure alongside server products and enterprise services, reported $109.4 billion. Google Cloud crossed the $50 billion threshold for the first time, reporting $52.3 billion – a 28.7% year-over-year increase that made it the fastest-growing of the three hyperscalers by percentage.
Together, these three companies generated approximately $277 billion across these segments in 2025 (a total that includes some of Microsoft's on-premises server revenue). That figure exceeds the individual GDP of roughly 140 countries. It also represents the economic output of an industry whose fundamental business model depends on one thing: concentrating the world's data into infrastructure that these three companies own, operate, and, crucially, can access.
Understanding the revenue breakdown of the hyperscalers is not an exercise in financial analysis for its own sake. How these companies make money determines what data they need, how long they retain it, and what incentives they have to resist the privacy-first architectures that would reduce their informational advantage. The revenue structure is the privacy structure.
AWS: The Incumbent’s Economics
Amazon Web Services remains the largest cloud provider by revenue, though its growth rate has decelerated relative to Microsoft and Google. The $115.6 billion in 2025 revenue breaks down into several distinct categories, each with different privacy implications.
Infrastructure as a Service (IaaS): ~$52 billion
AWS’s core compute and storage business – EC2, S3, EBS, and related services – generated approximately $52 billion, or 45% of total revenue. This is the lowest-margin segment (estimated operating margin of 22-26%) but the foundation on which everything else is built.
The privacy calculus of IaaS is straightforward: AWS provides the servers, customers run their workloads. Under the shared responsibility model, AWS is responsible for security “of” the cloud (physical infrastructure, hypervisor) while customers are responsible for security “in” the cloud (their data, applications, configurations). This model gives customers theoretical control over their data but practical dependence on AWS infrastructure that AWS can access at the hypervisor level.
Notably, AWS processes an estimated 3.2 exabytes of customer data daily across its global infrastructure. While Amazon’s privacy policies limit internal access to customer data, the sheer volume of data flowing through AWS-controlled infrastructure creates an information asymmetry that no contractual provision can fully resolve. The infrastructure operator always has a structural advantage over the infrastructure tenant.
Platform and AI Services: ~$31 billion
AWS’s managed services – RDS, Lambda, SageMaker, Bedrock, and the growing portfolio of AI services – generated approximately $31 billion at substantially higher margins (estimated 35-42%). These services are more profitable because they require less raw compute per dollar of revenue and create deeper customer lock-in.
The privacy implications are more significant here. When a customer uses EC2, AWS provides the machine; the customer controls the workload. When a customer uses SageMaker or Bedrock, AWS mediates the interaction between the customer’s data and the ML models. Customer data necessarily passes through AWS-managed processing layers, and AWS’s visibility into that data increases commensurately.
AWS Bedrock, which provides API access to foundation models from Anthropic, Meta, Mistral, and others, processed an estimated 1.8 billion API calls per month by late 2025. Each call involves customer prompts flowing through AWS infrastructure to third-party models and back. AWS’s position as intermediary gives it metadata visibility into every interaction: which models customers use, how frequently, the size and nature of prompts (at a structural level even without reading content), and usage patterns that reveal strategic priorities.
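To make the metadata-visibility point concrete, the record below sketches the kind of per-call fields an API intermediary observes even when it treats prompt and response bodies as opaque. The field names and values are illustrative, not AWS's actual logging schema:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class GatewayCallRecord:
    """Hypothetical metadata an inference intermediary sees per API call,
    without ever reading prompt or response content."""
    tenant_id: str       # which customer account made the call
    model_id: str        # which foundation model was invoked
    timestamp: str       # when the call occurred
    prompt_bytes: int    # size of the request payload
    response_bytes: int  # size of the response payload
    latency_ms: int      # round-trip time

record = GatewayCallRecord(
    tenant_id="acct-1234",
    model_id="example-foundation-model",
    timestamp=datetime.now(timezone.utc).isoformat(),
    prompt_bytes=4_812,
    response_bytes=19_307,
    latency_ms=1_240,
)

# Aggregated over millions of calls, these fields alone reveal which models
# a customer relies on, how heavily, and with what cadence.
print(asdict(record))
```

None of these fields contains customer content, yet together they map a customer's AI strategy, which is precisely the structural advantage of the intermediary position.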
Data and Analytics: ~$18 billion
Redshift, Athena, QuickSight, and the broader data analytics portfolio generated approximately $18 billion. This category has the highest estimated margins (40-48%) and the most direct privacy implications, because the entire value proposition is helping customers analyze their data – which means AWS’s infrastructure is purpose-built to read, index, and process the contents of customer datasets.
Enterprise and Marketplace: ~$14.6 billion
AWS Marketplace, professional services, support contracts, and training revenue accounted for the remainder. This segment is notable primarily for its stickiness: enterprise support contracts create multi-year relationships that make cloud migration extraordinarily expensive.
Azure: The Enterprise Trojan Horse
Microsoft’s Intelligent Cloud segment is more difficult to decompose because Microsoft bundles Azure revenue with on-premises server products and enterprise services. Analysts estimate that Azure-specific revenue was approximately $78 billion in fiscal 2025, with the remaining $31.4 billion attributable to Windows Server, SQL Server, and related on-premises products.
Azure’s growth story is inseparable from Microsoft 365 and the broader enterprise relationship. Organizations that use Microsoft 365 (over 400 million paid seats as of 2025) are pre-integrated with Azure’s identity, compliance, and infrastructure services. The migration path from Microsoft 365 to Azure is frictionless by design, and approximately 67% of Azure customers cite existing Microsoft licensing relationships as a factor in their cloud selection.
The Copilot Revenue Layer
Microsoft’s most consequential revenue development in 2025 was the scaling of Microsoft 365 Copilot, which generated an estimated $8.2 billion in its first full year of general availability. Copilot is priced at $30 per user per month on top of existing Microsoft 365 subscriptions, and Microsoft reported 22 million paid Copilot seats by Q4 2025.
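The seat and price figures above can be sanity-checked with back-of-envelope arithmetic. This ignores mid-year seat growth, discounts, and any non-seat revenue, so it is a rough annualized run rate rather than a revenue calculation:

```python
# Back-of-envelope annualization of Copilot revenue from the figures above.
price_per_seat_month = 30    # USD list price per user per month
seats_q4 = 22_000_000        # paid seats reported by Q4 2025

annualized_run_rate = price_per_seat_month * seats_q4 * 12
print(f"${annualized_run_rate / 1e9:.2f}B")  # prints "$7.92B"
```

A $7.92 billion run rate is in the same ballpark as the estimated $8.2 billion figure, which is what one would expect given that seat counts and estimates move around within the year.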
The privacy implications of Copilot are profound and underappreciated. Copilot operates across a user’s entire Microsoft 365 environment: emails in Outlook, documents in OneDrive, conversations in Teams, presentations in PowerPoint. To function, it requires access to the full corpus of a user’s organizational data. Microsoft’s infrastructure processes this data through AI models to generate responses, and the telemetry from these interactions – what users ask about, which documents they reference, how they communicate – constitutes one of the most comprehensive corporate intelligence datasets ever assembled.
Microsoft states that Copilot data is not used for model training and is processed within the customer’s Microsoft 365 compliance boundary. But the structural reality is that Microsoft’s systems necessarily read, parse, and analyze the content of customer documents and communications to generate Copilot responses. The corporate espionage vector is not that Microsoft will deliberately misuse this data. It is that the infrastructure concentrates an unprecedented volume of sensitive corporate information in systems that a single vendor operates and can – under legal compulsion, security breach, or policy change – access.
Azure OpenAI: The AI Revenue Accelerator
Azure OpenAI Service, which provides enterprise access to GPT-4, DALL-E, and other OpenAI models through Azure infrastructure, generated an estimated $4.6 billion in 2025. The service is positioned as the “safe” way for enterprises to use OpenAI models – data processed through Azure OpenAI is not used for model training and is governed by Azure’s enterprise data protection commitments.
This positioning is revealing. Microsoft is effectively monetizing the privacy anxiety that OpenAI’s consumer product creates. The message to enterprise buyers is: “OpenAI’s models are powerful, but you cannot trust the consumer version with your corporate data. Pay us a premium, and we will mediate the interaction through our enterprise infrastructure.”
It is a sound strategy that also reveals the fundamental incentive structure. Microsoft profits from being the trusted intermediary between enterprises and AI models. The more privacy anxiety exists around direct AI model access, the more valuable Microsoft’s intermediary position becomes. This creates a structural disincentive for Microsoft to support architectures that would allow enterprises to use AI models without a centralized intermediary – architectures like client-side encryption with zero-knowledge proxies that would remove the intermediary from the data flow entirely.
Google Cloud: The Data Company’s Cloud
Google Cloud’s $52.3 billion in 2025 revenue marks its maturation from distant third into a credible enterprise alternative, though the company’s broader data-driven business model creates unique privacy tensions for its cloud division.
Google’s total advertising revenue in 2025 was approximately $265 billion – more than five times its cloud revenue. This ratio matters because it reveals Alphabet’s core business model: monetizing information about people. Google Cloud exists within a corporate structure whose primary expertise is extracting value from data, and whose institutional incentives are aligned with data accumulation rather than data minimization.
BigQuery and Vertex AI: ~$12 billion
Google’s data analytics and AI platform services generated approximately $12 billion, growing faster than any other category. BigQuery, Google’s serverless data warehouse, processes over 110 petabytes of data daily across its customer base. Vertex AI, the company’s managed ML platform, handled an estimated 900 million inference requests per month by late 2025.
Google’s technical advantage in AI and data processing is real and substantial. Its infrastructure for processing large datasets is arguably superior to both AWS and Azure. But that technical superiority comes with a structural privacy question that Google has never fully answered: how does a company whose core competency is extracting value from data operate a cloud division whose enterprise customers need assurance that their data will not be used for value extraction?
Google’s enterprise data processing agreements and Cloud Data Processing Addendum provide contractual boundaries, but the corporate incentive structure pushes in the opposite direction. Every piece of data that passes through Google Cloud infrastructure is data that the most sophisticated data analysis organization in history could, in principle, analyze. The restraint is contractual and voluntary. It is not architectural.
The Margin Story
Google Cloud achieved sustained operating profitability in 2024, reporting an operating margin of approximately 9% – modest compared to AWS’s estimated 28-32% and Azure’s estimated 24-28%. Google has been willing to accept lower margins to gain market share, particularly in AI workloads where it competes on the strength of its TPU hardware and Gemini model ecosystem.
The margin gap also reflects Google Cloud’s relative weakness in enterprise sales relationships. AWS and Azure benefit from organizational inertia and deep integration with corporate IT environments. Google Cloud wins primarily on technical merit, which means it competes more aggressively on price. For privacy-conscious buyers, this creates a paradox: Google Cloud may offer better AI capabilities, but Alphabet’s business model creates the most significant structural privacy concerns of any hyperscaler.
The Revenue Model’s Privacy Implications
The hyperscalers’ combined $277 billion in 2025 cloud revenue is generated by a business model with three structural privacy characteristics that no amount of policy or compliance can eliminate.
Characteristic 1: Data Gravity Creates Lock-In
Cloud providers benefit from data gravity – the principle that applications and services migrate toward where data is stored, because moving large datasets is expensive and slow. Once an organization’s data resides in a hyperscaler’s infrastructure, the cost of migrating away increases with every gigabyte added.
AWS, Azure, and Google all charge egress fees for data leaving their networks: roughly $0.08-0.09 per gigabyte for standard transfers. For an enterprise with 500 terabytes of data, the egress fees alone run roughly $40,000 to $45,000, and that excludes the engineering effort to reconfigure applications, retrain staff, and revalidate compliance certifications. Data gravity is the cloud’s moat, and it is a moat built from customer data.
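The switching-cost arithmetic is easy to reproduce. Real egress pricing is tiered and varies by region and destination, so this uses the flat rates from the example above:

```python
# Rough egress-fee estimate for moving a dataset out of a hyperscaler.
dataset_tb = 500
gb_per_tb = 1_000                    # decimal terabytes
rate_low, rate_high = 0.08, 0.09     # USD per GB, standard transfer

gb = dataset_tb * gb_per_tb
low, high = gb * rate_low, gb * rate_high
print(f"${low:,.0f} - ${high:,.0f}")  # prints "$40,000 - $45,000"
```

And this is only the bandwidth bill; in practice the application rework and compliance revalidation usually dwarf it.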
The privacy implication is that switching costs make it economically irrational for organizations to leave a cloud provider even when privacy concerns arise. The sovereign cloud movement has highlighted this dynamic: European organizations that want to move data to European-sovereign infrastructure face migration costs that can exceed their annual cloud spend.
Characteristic 2: Managed Services Increase Data Exposure
The most profitable cloud services are managed services that require the provider to process customer data. The revenue incentive to push customers from IaaS (low margin, low data exposure) to managed services (high margin, high data exposure) is embedded in every cloud provider’s financial structure.
AWS, Azure, and Google all report that managed services are growing faster than raw IaaS, and all three are investing heavily in AI services that require even deeper access to customer data. The financial incentive and the privacy risk point in the same direction: toward more data flowing through more provider-managed processing layers.
Characteristic 3: AI Amplifies the Data Dependency
The AI revenue layer – Bedrock, Azure OpenAI, Vertex AI – represents the highest-growth, highest-margin opportunity for all three hyperscalers. But AI services are also the most data-intensive. They require access to customer prompts, documents, and organizational context to function. The better they work, the more data customers feed them. The more data customers feed them, the deeper the provider’s informational advantage.
This creates a flywheel that runs counter to privacy. Cloud providers that want to grow AI revenue need customers to trust them with more data. Customers who use AI services create more data within the cloud environment. More data means higher switching costs, deeper lock-in, and greater structural privacy exposure.
The cloud market trajectory for 2026 points toward this flywheel accelerating. AI workloads are projected to account for 35% of new cloud spending in 2026, up from 22% in 2025. Each dollar of AI cloud spending creates more data exposure than each dollar of traditional cloud spending. The revenue growth story and the privacy risk story are the same story.
What the Numbers Mean for Privacy Architecture
The hyperscalers are not malicious actors. They are rational economic actors whose revenue structure creates incentives that are fundamentally misaligned with data minimization. A company that generates $115 billion per year by storing, processing, and analyzing customer data does not have a structural incentive to build infrastructure that minimizes the data it touches.
This is not a problem that better privacy policies, stronger encryption at rest, or more rigorous compliance certifications can solve. It is a structural problem that requires a structural answer.
The structural answer is architecture that removes the cloud provider from the data flow. Client-side encryption that ensures the provider never sees plaintext. Zero-persistence infrastructure that retains nothing after processing. Zero-trust architecture that assumes the infrastructure operator is a potential adversary. These are not features to be added to existing cloud services. They are architectural principles that fundamentally change the relationship between the infrastructure operator and the data.
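The client-side encryption principle can be sketched in a few lines. The snippet below is a toy illustration using a SHA-256 counter-mode keystream so that it runs with only the Python standard library; a production system would use an authenticated cipher such as AES-GCM from a vetted cryptography library. The architectural point is what matters: the key never leaves the client, so the provider stores and processes only ciphertext.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data against a SHA-256 counter-mode keystream.
    Illustration only -- real systems should use AES-GCM or equivalent."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# The client generates and keeps the key; only ciphertext is ever uploaded.
key = secrets.token_bytes(32)
nonce = secrets.token_bytes(16)
plaintext = b"quarterly forecast: confidential"

ciphertext = keystream_xor(key, nonce, plaintext)  # what the provider stores
recovered = keystream_xor(key, nonce, ciphertext)  # only the key holder can do this

assert recovered == plaintext
```

Under this design the provider's infrastructure can still store, replicate, and serve the data, but it cannot read it, and therefore cannot be compelled, breached, or incentivized into exposing it.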
The Stealth Cloud Perspective
The $277 billion cloud revenue machine runs on a simple bargain: organizations surrender control of their data in exchange for infrastructure they could not economically build themselves. For most of computing history, this bargain was the only option. The economics of scale made centralized infrastructure irresistible, and the privacy cost was accepted as an externality.
That bargain is beginning to fracture. Edge computing reduces the infrastructure advantage of centralized data centers. Cryptographic advances make client-side processing feasible for an increasing range of workloads. And the AI revolution – which the hyperscalers expected to deepen their data advantage – has instead made organizations acutely aware of the cost of concentrating sensitive data in infrastructure they do not control.
Stealth Cloud operates on a different economic model. We do not monetize data gravity because we do not create data gravity. We do not profit from managed service data exposure because our architecture cannot expose data even to ourselves. The zero-knowledge, zero-persistence design means that our revenue model is aligned with our customers’ privacy interests rather than opposed to them.
The hyperscalers’ $277 billion in revenue is the measure of how much the world is willing to pay for the cloud bargain in its current form. The privacy premium research suggests that a meaningful segment of that market is willing to pay more – substantially more – for a version of the bargain that does not require surrendering data control. The question is not whether that market exists. The revenue data and the privacy premium data together confirm that it does. The question is who will build the infrastructure to serve it.