Solomon Hykes, the creator of Docker, posted in 2019: “If WASM+WASI existed in 2008, we wouldn’t have needed to create Docker. That’s how important it is.” Seven years later, that statement reads less like a prediction and more like a description of what is happening. WebAssembly is becoming the universal runtime for edge computing, and its security properties — sandboxed execution, capability-based access, no filesystem access by default — make it uniquely suited for privacy-sensitive workloads.
The convergence is not coincidental. WebAssembly was designed for the browser — an environment where untrusted code runs on user devices and must be prevented from accessing anything it should not. The security model that makes WASM safe for running arbitrary web applications also makes it safe for running privacy-sensitive workloads on infrastructure you do not fully trust. The same sandbox that protects a user from a malicious website protects sensitive data from a curious infrastructure operator.
This is the runtime that makes zero-knowledge edge computing practical. Not as a theoretical possibility, but as a deployed reality processing billions of requests daily.
What WebAssembly Is
WebAssembly (WASM) is a binary instruction format for a stack-based virtual machine. It is not JavaScript. It is not a programming language. It is a compilation target — code written in Rust, C, C++, Go, AssemblyScript, or other languages compiles to WASM bytecode that runs in a WASM runtime.
The key properties:
Near-native performance. WASM bytecode executes at 80-95% of native speed. The bytecode is designed for efficient JIT (Just-In-Time) or AOT (Ahead-Of-Time) compilation to the host CPU’s instruction set. A Rust function compiled to WASM runs approximately 10-20% slower than the same function compiled to native x86-64, and substantially faster than interpreted languages.
Deterministic execution. Given the same inputs, a WASM module produces the same outputs regardless of the host platform. There are no undefined behaviors from hardware differences, OS differences, or runtime version differences. This determinism is critical for auditing — you can verify that a WASM module behaves identically on every edge node.
Sandboxed by default. A WASM module cannot access the host filesystem, network, environment variables, or system calls unless the host explicitly grants access. The module operates in its own linear memory space with bounds-checked access. Memory safety violations (buffer overflows, use-after-free) that would compromise native code cannot escape the WASM sandbox.
Compact. WASM binaries are significantly smaller than equivalent container images. A “Hello World” container image (Alpine Linux + Go binary) is approximately 10-15 MB. The equivalent WASM module is approximately 100-500 KB. This size difference directly translates to faster startup times and lower bandwidth for deployment to edge locations.
WASI: The System Interface
WebAssembly System Interface (WASI) is the standardized API through which WASM modules interact with the world outside their sandbox. WASI provides:
- Filesystem access: Read and write files, but only within directories that the host pre-opens. A WASM module cannot traverse beyond its granted directory scope.
- Network access: Socket creation and IO, but only if the host grants the wasi:sockets capability.
- Clock access: Time functions for the module, controllable by the host.
- Random number generation: Cryptographic randomness, provided by the host.
- Environment variables: Only variables the host explicitly passes.
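The grant decision can be made concrete with a small model. This is a toy sketch of the capability-based pattern, not the real WASI host API (a production host would use a runtime such as Wasmtime, where a pre-opened directory is granted with a flag like --dir on the CLI); the Capability variants and Host type here are illustrative only.

```rust
use std::collections::HashSet;

// Toy model of WASI's capability-grant decision (illustrative, not the
// real host API). The module requests a capability; the host grants a
// subset up front; anything ungranted simply does not exist from the
// module's point of view.
#[derive(Debug, PartialEq, Eq, Hash)]
enum Capability {
    PreopenedDir(String), // filesystem access, scoped to one directory
    Sockets,              // network, i.e. the wasi:sockets grant
    Clock,
    Random,
    EnvVar(String),
}

struct Host {
    granted: HashSet<Capability>,
}

impl Host {
    // The host decides: a request for an ungranted capability is refused.
    fn request(&self, cap: &Capability) -> Result<(), String> {
        if self.granted.contains(cap) {
            Ok(())
        } else {
            Err(format!("capability denied: {:?}", cap))
        }
    }
}

fn main() {
    let host = Host {
        // Grant a single pre-opened directory -- no sockets, no env vars.
        granted: [Capability::PreopenedDir("/data".into())]
            .into_iter()
            .collect(),
    };
    assert!(host.request(&Capability::PreopenedDir("/data".into())).is_ok());
    assert!(host.request(&Capability::Sockets).is_err()); // no network channel exists
    println!("module runs with filesystem scope /data and nothing else");
}
```

The point of the model: denial is the default, and the grant set is fixed by the host before the module runs, which is the inverse of the container default-allow-then-restrict posture described above.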
WASI uses a capability-based security model. The module declares what capabilities it needs. The host decides which capabilities to grant. This is fundamentally different from the container model, where a container inherits broad Linux capabilities by default and restrictions are applied through configuration (seccomp profiles, AppArmor, dropped capabilities).
The privacy implication: a WASM module running a PII detection engine can be granted access to its input data and output channel and nothing else. No filesystem. No network. No environment variables. No clock (if timing attacks are a concern). The module is structurally incapable of exfiltrating data because it has no channel through which to do so.
WASI Preview 2 and the Component Model
WASI Preview 2, stabilized in early 2024, introduces the Component Model — a way to compose WASM modules into larger applications where each component has its own sandbox and communicates with other components through typed interfaces.
The Component Model enables:
- Module composition: A PII detection component, an encryption component, and a routing component can be composed into a pipeline where each component sees only its designated inputs and outputs.
- Language interoperability: The PII detection component can be written in Rust, the encryption component in C (wrapping libsodium), and the routing component in Go. Each compiles to WASM and communicates through WIT (WASM Interface Type) definitions.
- Granular sandboxing: Each component has its own memory space and capability grants. A compromised PII detection component cannot access the encryption component’s key material because they operate in separate sandboxes.
For privacy architectures, the Component Model is significant because it allows security-critical operations to be isolated at fine granularity within a single application. The encryption component that handles key material operates in a separate sandbox from the components that handle user data. Even a memory corruption vulnerability in the data processing component cannot leak key material from the encryption component.
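The typed boundary between such components is expressed in WIT. A hypothetical set of definitions for the pipeline described above might look like this (the package, interface, and function names are illustrative, not from any real project):

```wit
// Hypothetical WIT definitions for a PII pipeline; names are illustrative.
package example:pipeline;

interface detector {
  // Receives plaintext, returns tokenized text; sees nothing else.
  detect: func(input: string) -> string;
}

interface encryptor {
  // Key material never crosses this interface; only ciphertext does.
  encrypt: func(tokenized: list<u8>) -> list<u8>;
}

world pii-pipeline {
  export detector;
  export encryptor;
}
```

Because each interface declares exactly what crosses the boundary, a component's sandbox can be audited from its WIT signature alone: the encryptor above has no function that returns a key.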
WASM on Edge Platforms
Cloudflare Workers
Cloudflare Workers was the first major platform to deploy WASM at edge scale. Workers supports both JavaScript/TypeScript (running in V8 isolates) and WASM modules (running in the same V8 environment or in dedicated WASM runtimes).
Workers’ WASM support enables:
- Rust and C/C++ at the edge: Performance-critical code (cryptography, PII detection, data transformation) compiled to WASM runs at near-native speed across Cloudflare’s 310+ locations.
- Predictable performance: WASM’s ahead-of-time compilation eliminates JavaScript’s JIT warm-up variability. The first request to a WASM module is as fast as the thousandth.
- Sub-millisecond cold starts: WASM modules start faster than JavaScript because there is no parsing or initial compilation step — the bytecode is pre-compiled.
Cloudflare reported in their 2025 developer survey that 34% of Workers deployments now include WASM components, up from 12% in 2023. The primary use cases: data transformation (28%), cryptography (22%), and image processing (19%).
Fastly Compute
Fastly’s Compute platform runs WASM modules as the primary execution model (not as an addition to a JavaScript runtime). Each request receives its own WASM instance with a fresh memory space. The instance is destroyed after the response is sent.
Fastly’s approach provides stronger per-request isolation than Cloudflare’s shared-V8 model. Each WASM instance has its own linear memory, and memory is not shared across requests. This eliminates the class of cross-request information leakage attacks that are theoretically possible in shared-runtime models.
Fastly’s cold start times for WASM modules are under 50 microseconds — orders of magnitude faster than container-based serverless platforms and competitive with Cloudflare’s V8-based cold starts.
Fermyon Spin and wasmCloud
Open-source WASM platforms are emerging for self-hosted edge deployments:
- Fermyon Spin: A framework for building and running WASM microservices. Spin applications define components that handle HTTP triggers, and each component runs in its own sandbox. Spin is designed for environments where Cloudflare or Fastly’s managed platforms are not an option (on-premises, private cloud, air-gapped networks).
- wasmCloud: A distributed application platform that uses WASM components connected by a capability-based linking system. Components communicate through well-defined interfaces, and capabilities (network access, key-value storage) are injected at runtime by the host.
These platforms bring WASM’s security properties to environments outside the major edge providers — relevant for organizations that need edge computing with privacy properties but cannot use third-party managed platforms.
Security Properties: A Detailed Assessment
Memory Safety
WASM’s linear memory model provides bounds-checked array access. A module that attempts to read or write beyond its allocated memory space triggers a trap (runtime error) rather than accessing adjacent memory. This eliminates buffer overflow attacks that are the foundation of most native code exploits.
However, memory safety within the WASM module’s own memory space is not guaranteed by WASM itself. A Rust program compiled to WASM benefits from Rust’s memory safety guarantees in addition to WASM’s sandbox. A C program compiled to WASM can still have use-after-free and buffer overflow bugs within its own linear memory — but these bugs cannot escape the WASM sandbox to affect the host or other modules.
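The trap-on-out-of-bounds behavior has a close analogy in safe Rust, which is useful for building intuition. The sketch below is an analogy, not WASM itself: where safe Rust returns None for a checked access, a WASM runtime traps and halts the module rather than letting the access touch adjacent memory.

```rust
// Analogy in safe Rust (not WASM itself): an out-of-bounds read is
// refused rather than allowed to reach adjacent memory. The equivalent
// access on WASM linear memory triggers a trap that halts the module.
fn read_byte(memory: &[u8], addr: usize) -> Option<u8> {
    memory.get(addr).copied() // bounds-checked: None instead of adjacent data
}

fn main() {
    let linear_memory = vec![0u8; 64 * 1024]; // one 64 KiB WASM page
    assert_eq!(read_byte(&linear_memory, 0), Some(0));
    assert_eq!(read_byte(&linear_memory, 64 * 1024), None); // would trap in WASM
    println!("out-of-bounds access refused");
}
```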
Control Flow Integrity
WASM enforces structured control flow — indirect calls go through a function table with type checking. The module cannot jump to arbitrary memory addresses. This eliminates Return-Oriented Programming (ROP) and Jump-Oriented Programming (JOP) attacks that exploit native code’s ability to redirect execution to arbitrary addresses.
Speculative Execution Attacks
Spectre and Meltdown-class attacks exploit speculative execution in CPUs to leak data across security boundaries. WASM runtimes implement mitigations:
- V8 (Cloudflare Workers): Site isolation, constant-time operations for sensitive code paths, and timer resolution reduction
- Wasmtime: Process-level isolation options, compiler mitigations for Spectre v1
The 2024 Lucet security audit (conducted by NCC Group on Fastly’s WASM runtime) found that WASM’s sandbox effectively contains Spectre v1 leakage within the module’s own memory space, preventing cross-module or cross-tenant data leakage in properly configured multi-tenant environments.
Supply Chain
WASM modules are compiled artifacts. The supply chain risk exists in the source code, the compiler, and the build environment — the same attack surface as any compiled software. The supply chain security practices that apply to container images (signed builds, reproducible compilation, SLSA compliance) apply equally to WASM modules.
WASM’s advantage: the smaller binary size and simpler dependency tree (no OS, no system libraries) reduces the supply chain surface compared to container images. A WASM module compiled from Rust with no external dependencies has a supply chain consisting of the Rust compiler, the WASM target backend, and the source code. The equivalent container has the OS base image, package manager dependencies, system libraries, and the application code.
Performance Benchmarks
Academic and industry benchmarks comparing WASM to alternative edge runtimes:
Startup Time
| Runtime | Cold Start (Median) | Cold Start (p99) |
|---|---|---|
| WASM (Wasmtime AOT) | 0.04ms | 0.12ms |
| WASM (V8 via Workers) | 0.5ms | 2ms |
| V8 Isolate (JavaScript) | 1.5ms | 5ms |
| Container (gVisor) | 50ms | 150ms |
| Container (runc) | 100ms | 300ms |
| Firecracker microVM | 125ms | 200ms |
WASM starts 100-3,000x faster than container-based alternatives. For edge workloads where every request may hit a cold instance, this is the difference between imperceptible latency and noticeable delay.
Throughput
The ETH Zurich Systems Group published WASM throughput benchmarks in late 2025:
| Workload | Native (Rust) | WASM (Wasmtime) | Overhead |
|---|---|---|---|
| JSON parsing (10KB) | 1.2M ops/s | 1.05M ops/s | 12.5% |
| AES-256-GCM encrypt (1KB) | 4.8M ops/s | 4.2M ops/s | 12.5% |
| SHA-256 hash (1KB) | 6.1M ops/s | 5.3M ops/s | 13.1% |
| Regex matching | 2.3M ops/s | 1.9M ops/s | 17.4% |
| HTTP request routing | 3.4M ops/s | 3.0M ops/s | 11.8% |
The consistent 10-18% overhead is a consequence of WASM’s bounds-checked memory access and indirect call overhead. For privacy-sensitive operations (encryption, hashing, PII detection), the overhead is modest and the security benefits — sandboxed execution with no filesystem access — are substantial.
Memory Footprint
| Runtime | Per-Instance Memory |
|---|---|
| WASM module (typical) | 1-5 MB |
| V8 Isolate | 3-10 MB |
| gVisor container | 20-50 MB |
| Firecracker microVM | 5-20 MB |
| Standard container (runc) | 30-100 MB |
WASM’s lower memory footprint enables higher density — more concurrent workloads per host — which translates to lower cost per request at scale.
WASM for Privacy Workloads
Three privacy-critical workload patterns are particularly well-suited to WASM execution:
Pattern 1: Client-Side PII Detection
A WASM module running in the browser can detect and tokenize PII before data leaves the client device. The module is compiled from a Rust NER (Named Entity Recognition) model, runs at near-native speed in the browser, and operates in a sandbox that prevents the PII detection logic from exfiltrating detected data.
This is the pattern Stealth Cloud’s Ghost Chat uses: the PII engine runs as a WASM module in the user’s browser. The module receives plaintext, outputs tokenized text, and the plaintext PII never leaves the browser’s memory space.
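The tokenize-before-send shape of this pattern can be sketched in a few lines. This is a toy stand-in, not the real NER engine: a production detector uses a trained model, where the crude rule below just treats anything containing an @ as an email address. The function names and token format are invented for illustration; the mapping from token back to plaintext stays on the client.

```rust
use std::collections::HashMap;

// Toy sketch of client-side tokenization (not a real PII detector).
// Detected values are replaced with opaque tokens before the text leaves
// the client; the token -> plaintext vault never leaves the client.
fn tokenize_emails(text: &str, vault: &mut HashMap<String, String>) -> String {
    text.split_whitespace()
        .map(|word| {
            // Crude stand-in for detection: anything with '@' is "an email".
            if word.contains('@') {
                let token = format!("<PII_{}>", vault.len());
                vault.insert(token.clone(), word.to_string());
                token
            } else {
                word.to_string()
            }
        })
        .collect::<Vec<_>>()
        .join(" ")
}

fn main() {
    let mut vault = HashMap::new();
    let out = tokenize_emails("contact alice@example.com today", &mut vault);
    assert_eq!(out, "contact <PII_0> today");
    assert_eq!(vault["<PII_0>"], "alice@example.com");
    println!("{out}"); // only tokenized text crosses the network
}
```

Run inside a browser WASM sandbox, this function has exactly one input and one output channel, so even the detector itself has no way to ship the vault anywhere.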
Pattern 2: Edge Encryption Proxy
A WASM module at the edge receives encrypted requests, validates them (without decrypting content), routes them to the appropriate backend, and returns encrypted responses. The module handles transport-layer concerns (authentication, routing, rate limiting) without accessing the encrypted payload.
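The envelope/payload split at the heart of this pattern can be sketched as follows. The struct fields and backend hostnames are illustrative, and the authentication check is deliberately trivial; the point is that routing consults only cleartext metadata while the payload stays opaque bytes.

```rust
// Sketch of an edge proxy's envelope/payload split (field names and
// backends are illustrative). The module routes on cleartext metadata
// and forwards the payload untouched and undecrypted.
struct Envelope {
    tenant: String,     // cleartext routing metadata
    auth_token: String, // validated at the edge
    payload: Vec<u8>,   // ciphertext, opaque to this module
}

fn route(req: &Envelope) -> Result<&'static str, &'static str> {
    // Authentication and routing use only the envelope.
    if req.auth_token.is_empty() {
        return Err("unauthenticated");
    }
    match req.tenant.as_str() {
        "eu" => Ok("backend-eu.internal"),
        "us" => Ok("backend-us.internal"),
        _ => Err("unknown tenant"),
    }
}

fn main() {
    let req = Envelope {
        tenant: "eu".into(),
        auth_token: "t0k3n".into(),
        payload: vec![0x17, 0x03, 0x03], // never decrypted here
    };
    assert_eq!(route(&req), Ok("backend-eu.internal"));
    println!("routed; payload still encrypted ({} bytes)", req.payload.len());
}
```

Combined with a capability grant that includes no decryption key and no filesystem, the module is structurally limited to exactly this envelope-level work.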
Pattern 3: Verifiable Computation
WASM’s deterministic execution enables verifiable computation: a client can verify that an edge node executed the correct WASM module by comparing the output to a locally-computed result. If the module is deterministic (same inputs produce same outputs), the client can audit the edge node’s behavior without trusting it.
This verification property supports zero-trust architecture at the compute layer: trust the code (which is verifiable), not the infrastructure (which is not).
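Verify-by-recomputation reduces to a comparison, assuming the workload is pure and deterministic (which WASM guarantees for an identical module and identical inputs). In this sketch the "module" is a stand-in checksum (an FNV-1a fold), not a real WASM engine; the audit function is what the client runs.

```rust
// Stand-in for a deterministic WASM workload: same bytes in, same value out.
// (FNV-1a fold used purely as a simple deterministic computation.)
fn module(input: &[u8]) -> u64 {
    input.iter().fold(0xcbf29ce484222325u64, |h, &b| {
        (h ^ b as u64).wrapping_mul(0x100000001b3)
    })
}

// The client trusts the code, not the node: it re-runs the same module
// locally and compares the result to what the edge node returned.
fn audit(input: &[u8], edge_output: u64) -> bool {
    module(input) == edge_output
}

fn main() {
    let input = b"request body";
    let claimed = module(input); // what an honest edge node would return
    assert!(audit(input, claimed));
    assert!(!audit(input, claimed ^ 1)); // a tampered result is detected
    println!("edge output verified by local recomputation");
}
```

In practice a client would spot-check a sample of requests rather than recompute all of them; even sampling changes the operator's incentives, because any single audited request can expose tampering.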
The Stealth Cloud Perspective
WebAssembly is not merely a convenient runtime for edge computing. It is the architectural enabler that makes privacy-first edge computing technically viable.
The properties align precisely with the requirements. Zero-knowledge operation requires that the runtime cannot access data outside its designated scope — WASM’s capability-based sandbox provides this. Ephemeral execution requires sub-millisecond startup and teardown — WASM delivers 40-microsecond cold starts. Client-side PII processing requires near-native performance in the browser — WASM’s 10-18% overhead over native is imperceptible for per-message processing. Cross-platform deployment requires a universal binary format — WASM runs identically on every browser and every edge node.
Stealth Cloud’s architecture uses WASM at two critical points: in the browser (for PII detection and client-side encryption) and at the edge (for request routing, authentication, and transport-layer processing). At both points, the WASM sandbox ensures that the module processes data according to its specification and cannot exfiltrate, store, or leak data beyond its designated outputs.
This is the runtime that Docker’s creator said would have made Docker unnecessary. For privacy infrastructure, the statement can be strengthened: WASM does not merely replace containers. It provides a security model that containers were never designed to deliver — a model where the runtime itself enforces that code can only do what it is allowed to do, access what it is allowed to access, and output what it is allowed to output. For infrastructure where the fundamental question is “who can see the data,” WASM provides an answer that is architectural, not contractual.