The average HTTP request from a user in Frankfurt to an AWS us-east-1 server in Virginia traverses approximately 6,500 kilometers, crosses at least two transatlantic submarine cables, passes through 12-18 network hops, and takes 85-120 ms round-trip. During that journey, the request’s content — encrypted by TLS, but still identifiable by its metadata — is observable by every network operator, transit provider, and intelligence agency with access to the submarine cable landing points.
The NSA’s UPSTREAM program, disclosed in the Snowden documents, collected data from the precise points where undersea cables make landfall. GCHQ’s TEMPORA program tapped over 200 fiber optic cables in the UK, processing 21 petabytes of data per day. The architectural decision to centralize computation in a small number of cloud regions and force global traffic through submarine cable chokepoints is, from a privacy perspective, a surveillance subsidy.
Edge computing inverts this model. Instead of transmitting user data to a distant data center, computation moves to the network location nearest the user. A user in Frankfurt is served by compute infrastructure in Frankfurt. The data never leaves the city. It never crosses a border. It never traverses a submarine cable. It exists, is processed, and is destroyed within a radius measured in kilometers, not continents.
This is not a performance optimization with privacy as a side effect. It is a fundamental change in the data exposure model of cloud infrastructure.
The Data Transit Problem
Every meter of network cable between the user and the compute node is a potential interception point. The longer the path, the more jurisdictions the data crosses, the more transit providers handle it, and the more opportunities exist for lawful interception, unlawful surveillance, and traffic analysis.
Submarine Cable Surveillance
Approximately 97% of intercontinental internet traffic travels through submarine fiber optic cables. There are approximately 550 active submarine cable systems globally, operated by a concentrated set of companies (SubCom, NEC, Alcatel Submarine Networks). The physical points where these cables make landfall — Cornwall in the UK, Long Island in the US, Marseille in France — are documented, fixed, and accessible to state intelligence agencies.
The Five Eyes intelligence alliance (US, UK, Canada, Australia, New Zealand) has demonstrated the capability and willingness to intercept submarine cable traffic at scale. The UK’s Investigatory Powers Act 2016 legalized bulk interception of communications data from fiber optic cables. The US Foreign Intelligence Surveillance Court has authorized collection from cable landing points under FISA Section 702.
For cloud infrastructure centralized in US East Coast data centers, the privacy implication is mechanical: data from European, Asian, and African users must cross submarine cables — and those cables are surveilled. The data is encrypted by TLS, but TLS protects content, not metadata. The source and destination IP addresses, the timing of the request, the size of the payload, and the frequency of access are all visible to anyone monitoring the cable.
Transit Provider Exposure
Between the user’s ISP and the cloud provider’s data center, traffic typically transits 2-5 intermediate autonomous systems (AS). Each transit provider has the technical capability to inspect, log, and store traffic metadata. Some transit providers have been required by national law to retain metadata (Germany’s Telecommunications Act, for example, mandated a 10-week retention of connection metadata, though the CJEU ruled the German scheme incompatible with EU law in 2022).
Edge computing eliminates transit hops by co-locating compute with the user’s access network. A Cloudflare Worker executing in the same Internet Exchange Point (IXP) as the user’s ISP may require zero transit hops — the data moves from the ISP’s network to Cloudflare’s network within the same physical facility.
Cross-Border Data Transfer
The GDPR restricts the transfer of personal data outside the European Economic Area (EEA) unless the destination country provides “adequate” data protection (Article 45) or appropriate safeguards are in place (Article 46). The Schrems II decision (2020) invalidated the EU-US Privacy Shield, finding that US surveillance laws (FISA Section 702, EO 12333) were incompatible with EU fundamental rights.
The EU-US Data Privacy Framework, adopted in 2023, partially restored transatlantic data flows — but legal challenges are already underway (NOYB filed a challenge within months of adoption), and the framework’s durability is uncertain. A third invalidation (“Schrems III”) would again disrupt any cloud architecture that transfers European user data to US data centers.
Edge computing avoids this legal uncertainty entirely for the computation layer. If a German user’s request is processed in a German data center, no cross-border transfer occurs. The GDPR restrictions on international transfer apply to the storage and movement of personal data — not to the processing of data that never leaves the jurisdiction.
Edge Architecture Patterns
Pattern 1: Edge Termination, Central Processing
The minimal edge deployment: TLS termination and caching at the edge, with request forwarding to a central origin server for computation. This is the standard CDN model (Cloudflare, Akamai, Fastly) and provides latency reduction and DDoS protection but does not prevent cross-border data transfer for computation.
Privacy benefit: limited. The request content still travels to the central origin. The edge observes the metadata. The centralized origin processes the content in a potentially distant jurisdiction.
Pattern 2: Edge Compute, Central Storage
Computation executes at the edge, but persistent state is stored in a central or regional data store. Cloudflare Workers with KV or R2 follow this pattern: the Worker runs at the edge, but KV reads may be served from a nearby cache or from the central store.
Privacy benefit: moderate. The decrypted data exists only at the edge, in the user’s jurisdiction, for the duration of the computation. Persistent data (if any) is stored centrally, but can be encrypted with client-held keys so the central store holds only ciphertext.
Pattern 3: Full Edge Processing
All computation and all state management occur at the edge. No data is transmitted to a central origin. Cloudflare Workers with Durable Objects can implement this pattern: the Durable Object is instantiated at a specific edge location, manages state locally, and is destroyed when the session ends.
Privacy benefit: maximum. The user’s data never leaves the edge data center nearest to them. No cross-border transfer. No transit exposure. No centralized data store. The data exists in one location, for a bounded duration, and is destroyed locally.
For zero-persistence architecture, Pattern 3 is the target: process at the edge, store nothing, destroy locally.
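The session-scoped lifecycle of Pattern 3 can be modeled with a plain in-memory sketch. This is not the Durable Objects API itself, only an illustration of state that exists for one session and is explicitly wiped at the end:

```typescript
// Illustrative model of session-scoped, zero-persistence state: the state
// lives in memory for one session and is destroyed locally when it ends.
// Nothing is ever written to disk or replicated elsewhere.

class EphemeralSession {
  private state: Map<string, string> | null = new Map();

  set(k: string, v: string): void {
    if (!this.state) throw new Error("session already destroyed");
    this.state.set(k, v);
  }

  get(k: string): string | undefined {
    if (!this.state) throw new Error("session already destroyed");
    return this.state.get(k);
  }

  // End of session: clear and invalidate. Subsequent access fails loudly.
  destroy(): void {
    this.state?.clear();
    this.state = null;
  }
}
```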
Edge Computing and AI Inference
The rise of AI inference workloads makes edge computing both more valuable and more challenging for privacy.
The AI Centralization Problem
Large language model inference currently requires significant computational resources: NVIDIA A100 or H100 GPUs with 40-80 GB of VRAM, running in data centers with high-power density and cooling infrastructure. These resources are concentrated in a small number of cloud regions. OpenAI’s inference cluster is primarily in Azure’s US regions. Anthropic operates on AWS and GCP, primarily in US data centers. Google’s Gemini runs on Google’s TPU clusters, concentrated in a few global data centers.
This means that every AI prompt — regardless of where the user is located — travels to one of a handful of centralized inference clusters. A user in Zurich asking Claude a question sends their prompt across the Atlantic to a US data center. The prompt’s content, the user’s metadata, and the AI’s response all traverse the submarine cable surveillance infrastructure.
According to Statista, the number of AI API calls per day exceeded 1 billion in 2025, with projections reaching 10 billion by 2027. Each call represents a data transit event that edge computing could, in principle, eliminate.
Edge AI Inference
The technology for edge AI inference is maturing. Smaller models (7B-13B parameters) can run on edge hardware with acceptable latency. Cloudflare’s Workers AI provides inference for open-source models (Llama, Mistral) directly at the edge, with the model weights distributed across Cloudflare’s data centers. No user prompt needs to leave the edge location.
For privacy, edge AI inference is transformative: the prompt is processed in the same jurisdiction where it was created, by infrastructure under the edge operator’s control, with no third-party LLM provider seeing the data at all. The trust chain collapses from three parties (user, edge provider, LLM provider) to two (user, edge provider).
The trade-off is model capability. GPT-4-class models require too much compute for current edge hardware. The user must choose between the privacy of edge inference with a smaller model and the capability of centralized inference with a larger model.
For Stealth Cloud’s architecture, the approach is pragmatic: use edge inference when the model quality is sufficient for the user’s needs, and proxy to centralized providers (with full PII stripping and metadata removal) when the user requires a frontier model. The choice is the user’s — and the privacy implications of each choice are made explicit.
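The routing choice described above can be sketched as a pure function. The model names, the `EDGE_MODELS` catalog, and the minimal `stripPII` placeholder are all illustrative assumptions, not Stealth Cloud's actual configuration:

```typescript
// Hypothetical routing sketch: prefer edge inference when the requested model
// is in the edge catalog; otherwise sanitize and proxy to a central provider.

type Route =
  | { kind: "edge"; model: string }
  | { kind: "central"; model: string; sanitizedPrompt: string };

const EDGE_MODELS = new Set(["llama-3-8b", "mistral-7b"]); // assumed edge catalog

function stripPII(prompt: string): string {
  // Minimal placeholder: redact email addresses. A real engine does far more.
  return prompt.replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, "[EMAIL]");
}

function routeRequest(model: string, prompt: string): Route {
  if (EDGE_MODELS.has(model)) {
    return { kind: "edge", model }; // prompt never leaves the edge location
  }
  // Frontier model requested: only the sanitized prompt exits the edge.
  return { kind: "central", model, sanitizedPrompt: stripPII(prompt) };
}
```

The point of the sketch is that the privacy consequence of each branch is explicit in the return type: an edge route carries the prompt no further, while a central route carries only sanitized text.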
Jurisdictional Containment
Edge computing enables a privacy architecture that was previously impossible: jurisdictional containment of computation.
The Swiss Data Circle
Consider a user in Zurich using Stealth Cloud’s Ghost Chat:
1. The user’s browser connects to Cloudflare’s Zurich data center via Swiss ISP infrastructure. The TLS connection terminates in Zurich.
2. The Cloudflare Worker executing in Zurich decrypts the request payload (using keys derived from the user’s AES-256-GCM session) in the isolate’s memory.
3. The PII stripping engine processes the decrypted prompt in the same isolate, producing a sanitized version.
4. If edge inference is selected, the prompt is processed by a model running on Cloudflare’s Zurich GPU infrastructure. The response is generated, encrypted, and returned to the user — all within Zurich.
5. If a centralized LLM provider is selected, the sanitized, PII-stripped prompt exits the Zurich data center to the provider’s inference endpoint. The prompt contains no identifying information, and the user’s IP address is not forwarded — the LLM provider sees a Cloudflare IP.
In the edge-inference path, the user’s data never leaves Switzerland. The entire computation — decryption, PII stripping, inference, encryption — occurs within Swiss territorial jurisdiction, subject to the Swiss Federal Data Protection Act (revDSG) and Swiss constitutional privacy protections.
In the centralized-provider path, the data that leaves Switzerland is sanitized and anonymized. The raw user data — including PII, identifying metadata, and the encryption keys — remains in Zurich throughout. What exits is a sanitized prompt from a Cloudflare IP, carrying no information that links it to the specific user.
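The PII-stripping step in the sequence above can be sketched as a series of redaction passes. The patterns below are illustrative assumptions, not Stealth Cloud's actual rules; a production engine combines many detectors (pattern matching plus named-entity recognition):

```typescript
// Illustrative PII redaction pass, run inside the edge isolate on the
// decrypted prompt before any text leaves the jurisdiction.

const REDACTIONS: Array<[RegExp, string]> = [
  [/[\w.+-]+@[\w-]+\.[\w.]+/g, "[EMAIL]"],        // email addresses
  [/\+?\d[\d\s().-]{7,}\d/g, "[PHONE]"],          // phone-number-like sequences
  [/\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b/g, "[IBAN]"], // IBAN-like account numbers
];

function stripPII(prompt: string): string {
  // Apply each redaction in order; later patterns see earlier replacements.
  return REDACTIONS.reduce((text, [pattern, label]) => text.replace(pattern, label), prompt);
}
```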
Data Localization Compliance
The EU’s Data Act (entered into force January 2024, with application dates through September 2025) and the GDPR’s data transfer restrictions create a complex compliance landscape for organizations processing European data. Edge computing simplifies compliance by keeping data within the required jurisdiction by default.
Cloudflare’s Data Localization Suite (DLS) provides three controls:
- Customer Metadata Boundary: Ensures that log metadata (not request content, which Workers does not log by default) is processed and stored only in the selected region (EU, US, etc.).
- Geo Key Manager: Ensures that TLS private keys are stored and used only in the selected region.
- Regional Services: Ensures that Workers and other compute services execute only in data centers within the selected region.
Combined, these controls provide verifiable jurisdictional containment: data enters the region, is processed in the region, and the keys never leave the region.
Edge Security Considerations
Edge computing is not a privacy panacea. It introduces its own security considerations.
Physical Security
Centralized cloud data centers have fortress-level physical security: mantrap entry, biometric access, 24/7 surveillance, dedicated security staff. Edge data centers — co-location facilities, Internet Exchange Points, street-level network cabinets — have varying physical security postures. A Cloudflare data center in a Tier III co-location facility has strong physical security. A smaller edge node in a less controlled environment has less.
For confidential computing, physical security is less relevant because the CPU hardware encrypts data in memory — even physical access to the server does not expose the contents of a TEE. For non-TEE edge deployments, physical security of the edge location is a valid concern.
Edge Node Compromise
A compromised edge node — through physical access, supply chain attack, or remote exploitation — exposes the data of all users served by that node. In a centralized model, compromising one data center exposes everyone. In an edge model, compromising one edge node exposes only the users served by that node — a much smaller blast radius.
The trade-off: there are more edge nodes to defend (330+ for Cloudflare, vs. 33 AWS regions), but each node is a smaller target. The attacker must compromise many nodes to achieve broad exposure, making mass surveillance through edge compromise more expensive than the equivalent attack on centralized infrastructure.
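The blast-radius comparison can be made concrete with a toy model, assuming users are spread evenly across locations (the node counts echo the figures above; real traffic distribution is uneven):

```typescript
// Toy blast-radius model: fraction of the user base exposed when k locations
// are compromised, assuming an even spread of users across nodes.

function exposedFraction(totalNodes: number, compromisedNodes: number): number {
  return Math.min(compromisedNodes, totalNodes) / totalNodes;
}

console.log(exposedFraction(330, 1)); // one edge node: roughly 0.3% of users
console.log(exposedFraction(1, 1));   // a single central origin: everyone
```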
Consistency and Coordination
Edge-native architectures sacrifice strong consistency for low latency. Data replicated across edge nodes may be stale. Session state managed at one edge node is not instantly available at another. For zero-persistence architectures, this is not a problem — there is no persistent state to keep consistent. For architectures that maintain session state, the coordination overhead of edge-distributed state management is a real engineering challenge.
Cloudflare Durable Objects solve this for session-scoped state: the Durable Object is pinned to a specific edge location, providing strong consistency within the session without requiring global coordination.
The Privacy Cost of Centralization
The economic argument for centralized cloud infrastructure is efficiency: concentrating compute in large data centers enables economies of scale, reduces per-unit costs, and simplifies operations. The privacy cost of this efficiency is rarely quantified.
A centralized architecture for AI chat:
- Data transit: Every prompt crosses an average of 6,500 km (for European users accessing US infrastructure), through 12-18 network hops, across at least one jurisdiction boundary.
- Metadata exposure: Source IP, destination IP, timing, and payload size are visible at every hop.
- Surveillance exposure: Submarine cable monitoring, transit provider logging, and destination-country surveillance laws apply.
- Blast radius: A breach of the central data center exposes all users globally.
- Jurisdictional risk: The data is subject to the legal regime of the data center’s location, not the user’s location.
An edge-native architecture for the same workload:
- Data transit: The prompt travels from the user to the nearest edge data center — typically under 50 km, through 1-3 network hops, within the same jurisdiction.
- Metadata exposure: Limited to the local ISP and the edge provider.
- Surveillance exposure: Limited to the user’s national jurisdiction. No submarine cable transit.
- Blast radius: A compromise of one edge node exposes only users served by that node.
- Jurisdictional risk: The data is processed in the user’s jurisdiction, subject to their local legal protections.
The cost difference between these models is measurable. The edge-native model has higher per-unit compute costs (smaller data centers, less efficient hardware utilization) but dramatically lower privacy exposure. For organizations and individuals whose data has value to state-level adversaries, the privacy cost of centralization is the dominant concern.
The Stealth Cloud Perspective
Stealth Cloud is edge-native by conviction, not by convenience. Every Ghost Chat request is processed at the Cloudflare Workers edge node nearest the user. This decision shapes every layer of the request path.
A user in Zurich connects to Cloudflare Zurich. Their wallet-based authentication (SIWE) is verified at the edge. Their encrypted payload is decrypted in a V8 isolate in Zurich. Their PII is stripped in Zurich. If edge inference is available for their selected model, the entire request lifecycle completes in Zurich. If a centralized provider is required, only the sanitized, anonymized prompt leaves Zurich — and it leaves through Cloudflare’s network, masking the origin.
The zero-trust architecture means we do not trust the edge node any more than we would trust a centralized server. Client-side encryption ensures the edge node processes only what it must. PII stripping ensures it never sees identifying information. Ephemeral infrastructure ensures it retains nothing after the request completes.
But the edge node provides something a centralized server cannot: jurisdictional containment. The user’s data is processed under the legal protections of their own jurisdiction, for the minimum time required, with the minimum exposure necessary. The alternative — shipping that data across oceans, through surveilled chokepoints, to be processed under a foreign legal regime — is the status quo of cloud computing. Edge computing offers a different status quo: data processed where it is created, by infrastructure that exists to serve the user, not to surveil them.