On March 31, 2023, the Italian Data Protection Authority (Garante per la protezione dei dati personali) ordered OpenAI to stop processing Italian users’ data, effectively banning ChatGPT in Italy. The decision cited four GDPR violations: absence of a legal basis for collecting and processing personal data for training purposes, no age verification to prevent minors from accessing the service, failure to inform users about data processing, and inaccuracy of information provided by the model. OpenAI restored service 28 days later after implementing changes, but the Italian action signaled something that European regulators had been building toward for years: the GDPR framework and American AI services are fundamentally misaligned, and the enforcement apparatus is beginning to act on that misalignment.
The problem is not that AI companies ignore European privacy law. Most have GDPR compliance teams, Data Protection Officers, and pages of privacy documentation. The problem is structural: the GDPR’s requirements for data processing – lawful basis, purpose limitation, data minimization, storage limitation, and individual rights – were designed for a world where data processing has a defined purpose, a limited scope, and a knowable endpoint. AI training has none of these properties. A large language model trained on user data processes that data for a purpose (“improving the model”) that is boundless in scope, indefinite in duration, and irreversible in effect.
European companies that use American AI APIs face a compliance challenge with no clean solution. The data transfer mechanisms are fragile. The legal bases are contested. The enforcement gap creates an illusion of compliance that the next court decision or regulatory action could shatter overnight.
The Schrems II Problem and the Data Transfer Question
The Court of Justice of the European Union’s Schrems II decision (July 2020) invalidated the EU-U.S. Privacy Shield, the primary mechanism for transferring personal data from the EU to the U.S. The court found that U.S. surveillance laws – particularly Section 702 of the Foreign Intelligence Surveillance Act and Executive Order 12333 – provided insufficient protection for European citizens’ data.
In response, the EU and U.S. negotiated the EU-U.S. Data Privacy Framework (DPF), which came into effect in July 2023. The DPF was built on Executive Order 14086, which imposed new limitations on U.S. signals intelligence collection and established a Data Protection Review Court through which EU citizens can challenge surveillance.
The DPF restored a legal mechanism for EU-U.S. data transfers. But the legal community is divided on its durability. The first annulment action against the DPF reached the EU General Court in September 2023, filed by French parliamentarian Philippe Latombe, and Max Schrems’ organization, NOYB (None of Your Business), has signaled that it will challenge the framework as well, arguing that the executive order’s protections are insufficient and reversible – a U.S. president could modify or revoke EO 14086 at any time. Some legal scholars estimate a 40-60% probability that the CJEU will invalidate the DPF (“Schrems III”), which would once again leave EU-U.S. data transfers without a primary legal mechanism.
For AI API use specifically, the stakes of the DPF challenge are existential. If the framework is invalidated, European companies would need to fall back on Standard Contractual Clauses (SCCs) for data transfers to U.S.-based AI providers. But SCCs require a Transfer Impact Assessment (TIA) demonstrating that the data will receive “essentially equivalent” protection to GDPR in the recipient country. Given that U.S. surveillance law was the specific concern in Schrems II, completing a TIA that credibly demonstrates essential equivalence for data processed on U.S. servers remains deeply challenging.
The practical reality is that most European companies using AI APIs have not completed rigorous TIAs. They rely on the DPF or on SCCs with boilerplate TIAs that do not engage meaningfully with the surveillance risk. This creates a compliance posture that is defensible only so long as no one examines it closely – which is precisely the posture that Data Protection Authorities are beginning to scrutinize.
Legal Basis for AI Data Processing
GDPR requires a lawful basis for every instance of personal data processing. Article 6 provides six possible bases: consent, contractual necessity, legal obligation, vital interests, public interest, and legitimate interests. For AI companies processing European user data, the choice of legal basis determines the entire compliance architecture.
Consent (Article 6(1)(a))
Consent under GDPR must be freely given, specific, informed, and unambiguous. It must be as easy to withdraw as to give. For AI services, consent is the most commonly cited basis for training on user data.
The problem: GDPR consent for AI training is nearly impossible to obtain validly at scale. “Specific” consent requires the user to understand what they are consenting to. Consenting to “model improvement” is not specific – the user cannot know how their data will influence the model, what outputs it will affect, or who will ultimately interact with those outputs. “Informed” consent requires a level of transparency about AI training processes that most AI companies have not provided and may be technically unable to provide (since the specific impact of an individual’s data on model weights is not deterministic or traceable).
The EDPB (European Data Protection Board) Task Force on ChatGPT, which reported in May 2024, concluded that obtaining valid consent for AI model training presents “significant challenges” and that the distinction between consent for service provision (using the chatbot) and consent for training (improving the model) must be maintained. Users who consent to use a service are not, by that act alone, consenting to have their data used for training.
Legitimate Interests (Article 6(1)(f))
Legitimate interests requires a three-part test: the processing must serve a legitimate interest of the controller; it must be necessary to achieve that interest; and – the balancing step – the interests and fundamental rights of the data subject must not override the controller’s interest.
Several AI companies have shifted from consent to legitimate interests as the legal basis for training. This avoids the granular consent requirements but introduces the balancing test. The EDPB’s position has been skeptical: given the scale of personal data processing in AI training, the irreversibility of the training process (data cannot be “untrained” from a model), and the sensitivity of much of the data processed through AI chatbots, the argument that the data subject’s interests do not override the controller’s interest in model training is a hard one to sustain.
The Italian Garante’s initial action against OpenAI found that neither consent nor legitimate interests had been adequately established as a legal basis. This finding, if adopted more broadly by other European DPAs, would mean that AI companies processing European user data for training lack a valid legal basis under GDPR – making the processing itself unlawful regardless of any other safeguards.
The Purpose Limitation Problem
Even with a valid legal basis, GDPR’s purpose limitation principle (Article 5(1)(b)) requires that data be collected for specified, explicit, and legitimate purposes and not further processed in a manner incompatible with those purposes. A user who engages with an AI chatbot to get a question answered provides data for the purpose of receiving a response. Using that same data to train a model is a distinct purpose – and the compatibility of these purposes under GDPR’s framework is far from established.
The EDPB has indicated that AI model training is a separate processing purpose from service provision and requires its own legal basis and its own transparency obligations. Every AI provider that trains on user data must therefore maintain a dual legal basis: one for inference (providing the response) and one for training (improving the model). Most privacy policies do not draw this distinction clearly.
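To make the dual-basis requirement concrete, here is a minimal sketch – in Python, with field names, bases, and retention periods that are illustrative assumptions, not any provider’s actual records – of how inference and training might be kept apart as separate processing activities in an Article 30-style record:

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class ProcessingPurpose:
    """One entry in an Article 30-style record of processing activities."""
    name: str
    legal_basis: str              # e.g. "contract" (Art. 6(1)(b)) or "consent" (Art. 6(1)(a))
    data_categories: list[str]
    retention: timedelta | None   # None = indefinite, itself a storage-limitation problem

# Inference and training are distinct purposes and need distinct bases.
# All values below are illustrative assumptions, not any provider's policy.
INFERENCE = ProcessingPurpose(
    name="service_provision",             # answering the user's prompt
    legal_basis="contract",               # necessary to deliver the requested service
    data_categories=["prompt", "response"],
    retention=timedelta(days=30),         # e.g. a bounded abuse-monitoring window
)

TRAINING = ProcessingPurpose(
    name="model_training",                # improving the model
    legal_basis="consent",                # must be separate, specific, and withdrawable
    data_categories=["prompt", "response"],
    retention=None,                       # effectively irreversible once trained in
)
```

The point of the structure is the separation: withdrawing consent for model_training must not affect service_provision, and consenting to the service must not silently enroll the user in training.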
The DPA Enforcement Gap
Despite GDPR’s theoretical power, enforcement against AI companies has been slow, inconsistent, and fragmented across the EU’s 27 member states plus the three EEA EFTA countries.
The one-stop-shop mechanism – under which an AI company is supervised primarily by the DPA of the member state where it has its main establishment – has created bottlenecks. Ireland’s Data Protection Commission (DPC) serves as lead supervisory authority for most major U.S. tech companies (including Meta, Google, Apple, Microsoft, and LinkedIn), because they have European headquarters in Dublin. The DPC has faced persistent criticism for slow investigations and perceived reluctance to impose significant penalties on large tech firms that are major Irish employers.
For AI companies specifically, the enforcement picture in early 2026:
- OpenAI: Under investigation or subject to enforcement action by DPAs in Italy, France, Spain, Poland, and Austria. The Italian ban was the most visible action. The French CNIL issued a notice of potential violation in late 2024.
- Meta: The EDPB intervened in Meta’s plan to use European user data for AI training, resulting in Meta pausing the program in Europe in June 2024. The pause remained in effect through early 2026, making Meta AI less capable in Europe than in other markets.
- Google: The AEPD (Spain) and CNIL (France) opened inquiries into Gemini’s data processing practices in 2025. No enforcement actions have been concluded.
- Microsoft: Microsoft and OpenAI are each independently subject to GDPR, and the relationship between them creates a complex supervisory question. Microsoft’s Azure OpenAI Service is covered by Microsoft’s GDPR compliance infrastructure, but the underlying model training occurs at OpenAI.
The cumulative financial exposure is significant. GDPR AI fines to date have been modest compared to the penalties imposed for non-AI violations (Meta’s EUR 1.2 billion fine for data transfers, Amazon’s EUR 746 million fine for advertising targeting). But the enforcement trajectory is clear: DPAs are building AI-specific expertise and case law, and the penalties will escalate.
What European Companies Actually Face
For a European company evaluating whether to use an American AI API, the compliance analysis involves:
Data Transfer Assessment
- Does the AI provider self-certify under the EU-U.S. DPF?
- If yes, is the company comfortable with the risk that the DPF may be invalidated?
- If the DPF is invalidated, does the company have SCCs in place with the AI provider?
- Has the company completed a Transfer Impact Assessment that honestly evaluates the risk of U.S. government access to the data?
- Can the company implement supplementary measures (encryption where the key is held only by the European company – see the sketch after this list) that provide additional protection?
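As a sketch of the supplementary-measure idea in that last item – assuming Python and the `cryptography` package, with all names illustrative – client-side encryption with a key that never leaves EU infrastructure looks like this:

```python
# pip install cryptography
from cryptography.fernet import Fernet

# Key generated and held only inside the EU controller's infrastructure;
# the U.S. provider only ever sees ciphertext.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b"support transcript containing personal data"
ciphertext = fernet.encrypt(record)        # safe to transfer or store abroad

# ... later, back inside EU jurisdiction ...
assert fernet.decrypt(ciphertext) == record
```

The limitation is structural: this works for storage and transfer, but an AI API must read the plaintext to produce a response, which is exactly why supplementary measures are so hard to apply to inference traffic.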
Legal Basis Assessment
- What legal basis does the AI provider rely on for processing the company’s data?
- If the company’s employees enter personal data of the company’s customers into the AI system, what legal basis covers that secondary processing?
- Has the company assessed whether the AI provider’s training practices are compatible with the purpose for which the data was originally collected?
Data Subject Rights Assessment
- Can the company respond to access requests from data subjects whose data may have been processed through the AI system?
- Can the company ensure deletion of data from the AI system if a data subject exercises the right to erasure?
- Can the company provide meaningful information about automated decision-making if the AI system’s outputs influence decisions about individuals?
Risk Assessment
- What is the company’s exposure if the DPF is invalidated?
- What is the company’s exposure if the DPA investigates the AI tool’s use?
- What are the contractual remedies if the AI provider changes its data practices?
- Has the company’s DPO approved the use of the AI tool?
For many European companies, honest answers to these questions produce an uncomfortable conclusion: the legal basis for using most American AI APIs with personal data is uncertain, the data transfer mechanism is fragile, and the regulatory risk is quantifiable but unpredictable.
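One way to force those honest answers is to turn the checklists above into a reviewable artifact. A minimal sketch in Python; the question keys are paraphrases of the lists above, not a regulatory standard:

```python
# Illustrative only: the checklists above as a reviewable artifact.
ASSESSMENT = {
    "data_transfer": {
        "provider_certified_under_dpf": None,        # fill in per provider: True/False
        "sccs_in_place_as_fallback": None,
        "tia_completed_and_documented": None,
        "supplementary_measures_feasible": None,
    },
    "legal_basis": {
        "provider_basis_identified": None,
        "secondary_processing_basis_identified": None,
        "training_compatible_with_original_purpose": None,
    },
    "data_subject_rights": {
        "can_answer_access_requests": None,
        "can_effect_erasure": None,
        "can_explain_automated_decisions": None,
    },
    "risk": {
        "dpf_invalidation_exposure_quantified": None,
        "dpo_approval_on_record": None,
    },
}

def unresolved(assessment: dict) -> list[str]:
    """Every question still unanswered (None) or answered 'no' (False)."""
    return [
        f"{section}.{question}"
        for section, questions in assessment.items()
        for question, answer in questions.items()
        if answer is not True
    ]

# A False or None here is a documented gap, not a footnote.
print(unresolved(ASSESSMENT))
```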
The Sovereign AI Response
The compliance challenges with American AI APIs are driving European investment in sovereign AI alternatives:
Mistral AI (France): Has raised over EUR 1 billion and positions itself as a European alternative with data processing within EU jurisdiction. Mistral’s models are available through European cloud infrastructure (OVHcloud, Scaleway), enabling data processing that never leaves the EU.
Aleph Alpha (Germany): Focused specifically on the European enterprise market with GDPR-compliant infrastructure and data sovereignty guarantees. Has partnered with German federal agencies for sovereign AI deployment.
European cloud providers: OVHcloud, Scaleway, Hetzner, and IONOS are positioning themselves as GDPR-compliant alternatives to AWS, Azure, and GCP for AI workloads. The French government’s “Cloud de Confiance” label and the Franco-German Gaia-X project both emphasize data sovereignty for AI.
The sovereign AI movement addresses the data transfer problem by eliminating it: if data never leaves European jurisdiction, Schrems II and the DPF question become irrelevant. But sovereign AI faces a capability gap. European models, while improving rapidly, have not reached parity with frontier models from OpenAI, Anthropic, and Google. European companies choosing sovereign AI are currently accepting a capability trade-off for compliance certainty.
The alternative – infrastructure that enables the use of frontier models while providing zero-knowledge and zero-persistence guarantees that satisfy GDPR requirements regardless of where the processing occurs – represents a third path. If the AI processing layer retains no data, creates no logs, and is architecturally incapable of associating prompts with individuals, the GDPR analysis changes fundamentally. Processing of anonymized data falls outside GDPR’s scope entirely (Recital 26). The challenge is demonstrating that the anonymization is genuine and irreversible – a challenge that Stealth Cloud’s architecture addresses through cryptographic guarantees rather than contractual promises.
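To illustrate the architectural claim – and only to illustrate it; this toy sketch is not Stealth Cloud’s implementation, and regex scrubbing alone does not satisfy Recital 26’s re-identification test – an anonymizing, non-persisting relay reduces to two properties: identifiers are removed before the prompt leaves EU jurisdiction, and nothing about the request is retained afterward:

```python
import re
from typing import Callable

# Toy identifier patterns; production anonymization must survive Recital 26's
# "means reasonably likely to be used" re-identification test, which these do not.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrub(prompt: str) -> str:
    """Remove direct identifiers before the prompt leaves EU jurisdiction."""
    return PHONE.sub("[PHONE]", EMAIL.sub("[EMAIL]", prompt))

def relay(prompt: str, call_model: Callable[[str], str]) -> str:
    """Forward a scrubbed prompt and retain nothing.

    `call_model` is a stand-in for any frontier-model client. The relay
    attaches no user identifier, writes no log line, and persists no state:
    once this function returns, the request has left no trace here.
    """
    return call_model(scrub(prompt))
```

The GDPR-relevant property is not the scrubbing itself but the verifiability: the claim in the paragraph above is that these properties can be enforced cryptographically rather than promised contractually.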
What Happens Next
Three scenarios will shape the GDPR-AI relationship over the next 24 months:
Scenario 1: Schrems III invalidates the DPF. European companies lose the primary legal mechanism for AI API data transfers. Panic ensues. Companies scramble to implement SCCs with supplementary measures or switch to European providers. This scenario forces the sovereignty question.
Scenario 2: DPAs issue AI-specific enforcement guidance. The EDPB publishes binding guidance on legal bases for AI training, data subject rights in AI contexts, and the application of purpose limitation to AI development. This scenario provides clarity but may impose requirements that current AI architectures cannot meet.
Scenario 3: The EU AI Act’s requirements come into full effect. The Act’s provisions for high-risk AI systems (which include many enterprise applications) impose transparency, documentation, human oversight, and accuracy requirements that interact with GDPR’s data protection requirements. This scenario creates a dual compliance burden that favors companies with strong governance frameworks and disfavors those relying on minimal-compliance approaches.
All three scenarios point in the same direction: the compliance cost of using American AI APIs in Europe will increase, not decrease. European companies that have not built compliance infrastructure for their AI use are accumulating regulatory debt that compounds with every month of non-compliant processing. The manifesto for a new approach to cloud infrastructure is, in part, a response to this reality.
The Stealth Cloud Perspective
The GDPR-AI conflict is not a regulatory nuance – it is a market fracture. European companies need frontier AI capability. American providers offer it. But the legal infrastructure connecting them is held together with contractual clauses that a single court decision can dissolve. The only durable solution is architecture that makes the data transfer question irrelevant: processing that is genuinely anonymous, genuinely ephemeral, and verifiably both. That is not a policy position. It is an engineering requirement.