Definition

A Data Protection Impact Assessment (DPIA) is a structured risk assessment process mandated by GDPR Article 35. It requires data controllers to evaluate the necessity, proportionality, and compliance of personal data processing operations that are likely to result in high risk to the rights and freedoms of natural persons. Unlike the broader Privacy Impact Assessment, a DPIA has specific legal triggers, prescribed content requirements, and enforcement consequences under EU law.

GDPR Article 35(3) identifies three scenarios where a DPIA is mandatory: systematic and extensive evaluation of personal aspects based on automated processing, including profiling; large-scale processing of special categories of data (health, biometric, genetic, racial or ethnic origin, political opinions, religious beliefs) or criminal conviction data; and systematic monitoring of a publicly accessible area on a large scale. National supervisory authorities publish additional lists of processing activities requiring DPIAs.
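Treated as a checklist, the Article 35(3) triggers can be sketched roughly as follows. This is an illustrative model only; the field names are assumptions, not legal terms of art, and national supervisory-authority lists add further triggers beyond these three:

```typescript
// Illustrative checklist of the three GDPR Article 35(3) triggers.
// Field names are assumptions for this sketch, not legal terminology.
interface ProcessingProfile {
  automatedProfiling: boolean;          // systematic, extensive evaluation incl. profiling
  largeScaleSpecialCategories: boolean; // health, biometric, genetic, etc., or criminal data
  largeScalePublicMonitoring: boolean;  // systematic monitoring of publicly accessible areas
}

// A DPIA is mandatory if any one trigger applies.
function dpiaMandatory(p: ProcessingProfile): boolean {
  return (
    p.automatedProfiling ||
    p.largeScaleSpecialCategories ||
    p.largeScalePublicMonitoring
  );
}
```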

Why It Matters

The European Data Protection Board reported in 2024 that supervisory authorities across the EEA received over 14,000 DPIA consultations in the preceding 12 months—a 23% increase over 2023, driven substantially by organizations deploying generative AI systems. The French CNIL specifically identified LLM-based services as high-risk processing requiring DPIAs when they process personal data from user prompts or training datasets.

Failure to conduct a required DPIA carries penalties of up to EUR 10 million or 2% of annual worldwide turnover under GDPR Article 83(4)(a). The Belgian Data Protection Authority fined a company EUR 100,000 in 2023 solely for failure to conduct a DPIA before launching a facial recognition system—before any data breach occurred. The violation was the absence of the assessment itself, not any downstream harm.

For AI systems, DPIAs are becoming a de facto prerequisite for deployment. The EU AI Act requires conformity assessments for high-risk AI systems that substantively mirror DPIA methodology. Organizations deploying LLMs that process personal data—clinical documentation tools, legal research assistants, customer service chatbots—face overlapping DPIA obligations under both GDPR and the AI Act.

How It Works

A DPIA follows the structure prescribed by GDPR Article 35(7) and elaborated in WP248rev.01 guidance from the Article 29 Working Party:

  1. Systematic description: Document processing operations, purposes, legal basis, data flows, categories of personal data, and affected data subjects.

  2. Necessity and proportionality: Assess whether processing is necessary, whether less invasive alternatives exist, and whether data minimization principles are satisfied.

  3. Risk assessment: Identify risks to data subjects’ rights—re-identification, unauthorized access, discriminatory profiling, loss of control. Evaluate each for likelihood and severity.

  4. Mitigation measures: Identify countermeasures including encryption, pseudonymization, PII stripping, access controls, and cryptographic shredding.

  5. Supervisory authority consultation: If residual high risks remain after mitigation, GDPR Article 36 requires consultation with the supervisory authority, which has eight weeks to respond.

  6. Ongoing review: DPIAs must be updated when processing operations change, new risks emerge, or the processing context evolves.
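Steps 3 through 5 are often operationalized as a likelihood-by-severity matrix. A minimal sketch, assuming a common 4-point scale and an arbitrary consultation threshold (both are conventions used in practice, not values prescribed by the GDPR):

```typescript
type Level = 1 | 2 | 3 | 4; // 1 = remote/negligible .. 4 = likely/severe

interface Risk {
  description: string; // e.g. "re-identification", "unauthorized access"
  likelihood: Level;   // likelihood after mitigation measures are applied
  severity: Level;     // severity of impact on data subjects
}

// Residual risk score: likelihood x severity, yielding a 1..16 scale.
function riskScore(r: Risk): number {
  return r.likelihood * r.severity;
}

// If any residual risk remains at or above the threshold after mitigation,
// Article 36 prior consultation with the supervisory authority is required.
function requiresConsultation(risks: Risk[], threshold = 9): boolean {
  return risks.some((r) => riskScore(r) >= threshold);
}
```

The threshold of 9 here is illustrative; organizations calibrate it against their own risk appetite and supervisory-authority guidance.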

Stealth Cloud Relevance

Stealth Cloud reduces the scope and complexity of DPIAs for AI interactions by eliminating server-side personal data processing. A DPIA for Ghost Chat evaluates the following processing chain:

  1. The user inputs a prompt in the browser.

  2. The PII stripping engine removes personal identifiers client-side.

  3. The sanitized prompt is encrypted via the Web Crypto API.

  4. The encrypted payload is processed in ephemeral V8 isolates without persistence.

  5. Session keys are cryptographically shredded on completion.
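A toy version of the client-side stripping step, assuming simple pattern-based detection. The patterns and replacement tags below are illustrative assumptions; a production engine such as the one described here would use far broader detection than two regular expressions:

```typescript
// Illustrative pattern-based PII stripping. Runs entirely in browser
// memory; only the sanitized text proceeds to the encryption step.
const PII_PATTERNS: Array<[RegExp, string]> = [
  [/[\w.+-]+@[\w-]+\.[\w.-]+/g, "[EMAIL]"], // email addresses
  [/\+?\d[\d\s().-]{7,}\d/g, "[PHONE]"],    // phone-like digit runs
];

function stripPII(prompt: string): string {
  // Apply each pattern in turn, replacing matches with a neutral tag.
  return PII_PATTERNS.reduce((text, [re, tag]) => text.replace(re, tag), prompt);
}
```

Because this pass runs before encryption and transmission, it is the only point in the chain where personal data exists in cleartext, which is what narrows the DPIA's scope.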

The DPIA finding for server-side processing: no personal data is processed, stored, or accessible. The risk assessment yields minimal residual risk because the architecture eliminates the data that gives rise to the risk.

This does not exempt Stealth Cloud from DPIA requirements entirely. The client-side PII detection engine processes personal data (briefly, in browser memory) and must be assessed. The Swiss FADP (Federal Act on Data Protection) also requires impact assessments for high-risk processing. But the architectural approach transforms the DPIA from a complex multi-system assessment into a focused evaluation of a single client-side component—the PII stripping engine—because that is the only component that ever encounters personal data.

The Stealth Cloud Perspective

A DPIA quantifies risk to individuals from data processing. Stealth Cloud minimizes that risk architecturally—not by adding safeguards to risky processing, but by ensuring personal data never enters the processing pipeline. The DPIA still applies. The risk score approaches zero because the architecture was designed to make it so.