In 2023, the Federal Trade Commission settled with Amazon for $25 million over allegations that the company indefinitely retained children’s voice recordings collected through Alexa-enabled devices and used them for product development purposes, in violation of the Children’s Online Privacy Protection Act (COPPA). The settlement was one of the largest COPPA enforcement actions in the statute’s history. It was also, by the standards of Amazon’s $574 billion annual revenue, a rounding error.

The Alexa case is instructive not because it is exceptional but because it illustrates the structural mismatch between a regulatory framework designed for website registration forms in 1998 and the AI systems that children interact with in 2026. COPPA requires verifiable parental consent before collecting personal information from children under 13. But when a child asks ChatGPT to help with homework, confides in Character.AI about their feelings, or interacts with an AI tutor embedded in their school’s learning platform, the concept of “verifiable parental consent” encounters a reality it was never designed to address.

Common Sense Media reported in 2025 that 58% of U.S. children aged 12 to 17 had used an AI chatbot at least once, with 26% using one weekly. Among 8-to-11-year-olds, the figure was 31% for any use. These children are generating conversational data of extraordinary sensitivity – academic struggles, social anxieties, family situations, health questions, identity exploration – and feeding it into systems with data retention policies written for adult users and training pipelines optimized for model improvement, not child safety.

COPPA in the AI Era: A Square Peg in a Circular Pipeline

COPPA was enacted in 1998; its implementing rule was substantively overhauled in 2013 and updated again only recently (discussed below). Its core mechanism is straightforward: operators of websites and online services directed to children, or operators with actual knowledge that they are collecting data from children under 13, must obtain verifiable parental consent before collection and must provide parents with access to and control over their children’s data.

Why COPPA Fails for Conversational AI

COPPA’s framework presumes a model of data collection that AI chatbots fundamentally alter:

The registration model. COPPA was designed for services that collect data through structured forms – name, email, birthday, address. AI chatbots collect data through unstructured natural language conversation. A child who tells a chatbot about their family, school, medical conditions, or emotional state has disclosed personal information through conversational context, not through a form field. The legal treatment of this disclosure remains contested.

The knowledge problem. COPPA’s obligations trigger when an operator has “actual knowledge” of a child user. Most AI chatbots have no reliable mechanism to detect whether a user is a child. Age self-declaration at sign-up is the standard approach, and it is precisely as effective as asking a child to honestly state their age before entering a website – which is to say, not effective at all. A 2025 Thorn survey found that 45% of children aged 9-12 who used AI chatbots had entered a false birthdate to access the service.

The consent architecture. Even where platforms attempt to implement age-gating, the consent architecture is problematic. Verifiable parental consent methods approved by the FTC – credit card transactions, government ID verification, video call confirmation – create their own privacy concerns. Requiring a parent to submit government identification to a technology company in order for their child to use an AI chatbot transfers the privacy problem from the child to the parent without solving it.

The training data problem. COPPA gives parents the right to request deletion of their child’s data. But if that data has already been used to train or fine-tune a model, deletion from a database does not remove the data’s influence from model weights. The model memorization problem means that a child’s personal disclosure may persist in an AI model’s learned representations indefinitely, even after the original conversation has been deleted from the operator’s servers.
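To make that gap concrete, here is a minimal sketch of a parental deletion handler, with hypothetical table and checkpoint names – nothing below reflects any platform’s actual schema. The point is structural: the deletion request can reach operational storage, but it has no pathway into a model checkpoint that was already trained on the data.

```python
import sqlite3
from pathlib import Path

def handle_parental_deletion_request(db_path: str, child_user_id: str) -> None:
    """Honor a COPPA deletion request against operational storage.

    What this CAN do: erase the child's conversations from the database.
    What this CANNOT do: remove those conversations' influence from any
    model checkpoint that was fine-tuned on them before the request.
    """
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS conversations (user_id TEXT, content TEXT)")
    conn.execute("DELETE FROM conversations WHERE user_id = ?", (child_user_id,))
    conn.commit()
    conn.close()

    # Hypothetical fine-tuned checkpoint. Note that no code path above
    # touches it: if the child's disclosures were in its training set,
    # they persist in the weights after the database rows are gone.
    checkpoint = Path("checkpoints/assistant-ft.safetensors")
    print(f"Rows deleted; {checkpoint} unchanged by this request.")
```

Techniques marketed as machine unlearning attempt to close this gap, but none yet offers a deletion guarantee comparable to dropping a database row.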

The COPPA 2.0 Debate

The FTC finalized an updated COPPA rule in early 2025 that expanded the definition of personal information to include biometric identifiers and tightened restrictions on persistent identifiers used for behavioral advertising. However, the update did not fundamentally address the conversational AI data collection paradigm. Several congressional proposals for a “COPPA 2.0” have stalled over disagreements about age verification mandates, platform liability, and the scope of covered services.

The regulatory gap is widening. Every month that COPPA goes without an update for the AI era, millions more children generate conversational data within systems that were designed for adult users and governed by legal frameworks written for website registration.

The Landscape of Children’s AI Interactions

Children interact with AI across a broader range of contexts than most parents realize, and each context carries distinct privacy risks.

General-Purpose Chatbots

ChatGPT, Claude, Gemini, and Copilot are the most widely used AI chatbots, and none of them were designed for children. Their terms of service universally require users to be 13 or older (18 in some jurisdictions), but enforcement is minimal.

OpenAI reported 200 million weekly active users in 2024. If even 5% of those users were under 13 – a conservative estimate given Common Sense Media’s survey data – that represents 10 million children weekly generating conversational data on a platform that explicitly excludes them from its user base while retaining their data under adult privacy policies.

The content of children’s AI conversations is particularly sensitive. Research by the Internet Watch Foundation in 2024 found that children’s chatbot interactions frequently included disclosures about bullying, family conflict, mental health struggles, sexual orientation and gender identity exploration, and academic difficulties – categories of information that would be protected under both COPPA and FERPA if collected in an educational context, but that receive no special protection when volunteered to a commercial AI chatbot.

Character.AI and Social Chatbots

Character.AI, Replika, and similar platforms that allow users to create and interact with AI personas have become particularly popular with teenagers. Character.AI disclosed 20 million monthly active users in 2024, with a user demographic skewing significantly younger than other AI platforms.

The privacy risk here is qualitative, not just quantitative. Children and teenagers form emotional attachments to AI characters and disclose information they would not share with a parent, teacher, or therapist. A teenager exploring their identity through conversations with an AI companion is generating a detailed psychological profile of extraordinary intimacy. The platform’s data retention policies determine whether that profile persists for days, years, or indefinitely – and whether it becomes training data for future models.

In 2024, families of teenagers filed separate lawsuits against Character.AI over harms following intensive interactions with the platform, including a wrongful death suit brought by the mother of a 14-year-old who died by suicide. The cases raised privacy-adjacent questions about the retention and use of these vulnerable users’ conversational data, and about the platform’s obligations when AI interactions reveal indicators of self-harm.

Educational AI

AI tutoring systems deployed in schools – Khan Academy’s Khanmigo, Duolingo’s AI features, and dozens of edtech platforms with embedded AI – collect student data in an educational context where FERPA protections should apply but where the boundaries between educational records and commercial data processing are increasingly blurred.

A 2024 audit by the Electronic Frontier Foundation found that 89% of edtech platforms using AI features shared student data with third-party analytics providers, and 41% used student interaction data to improve AI models that were also deployed in non-educational commercial products. The student who interacts with an AI tutor on a school-issued Chromebook may be generating training data for a commercial product marketed to entirely different users in entirely different contexts.

Voice Assistants and Smart Devices

Children are heavy users of voice-activated AI assistants in home environments. A 2023 study published in the Journal of Children and Media found that children aged 5-10 in households with smart speakers interacted with the device an average of 4.3 times per day. These interactions are recorded, transmitted to cloud servers for processing, and retained according to policies that parents rarely review.

The Amazon Alexa COPPA settlement demonstrated that voice recordings of children were retained for years and used for model improvement – the same training tax applied to adult users, but imposed on children who could not meaningfully consent and whose parents were unaware of the practice.

The Age Verification Paradox

Age verification is widely proposed as the solution to children’s AI privacy. It is, in practice, a paradox: every effective age verification method creates its own privacy problem.

Methods and Their Costs

Self-declaration (typing your age or checking a box) is the most privacy-preserving method and the least effective. It deters no motivated child and satisfies COPPA’s requirements only in the narrowest technical sense.

Credit card verification uses a parental credit card transaction as a proxy for age confirmation. It requires transmitting financial data to the platform, creates a linkable identity between parent and child, and excludes children from low-income households whose parents do not have credit cards.

Government ID verification provides strong age confirmation but requires the most invasive data collection. Uploading a driver’s license or passport to an AI chatbot company creates a honeypot of identity documents that becomes a high-value target for data breaches. The 2024 National Public Data breach, which exposed 2.9 billion identity records including those of minors, demonstrated the scale of risk when identity verification data is centrally stored.

Facial age estimation uses AI to estimate a user’s age from a selfie or video. This method introduces biometric data collection at the point of access – solving one privacy problem (children accessing adult services) by creating another (biometric data collection from all users, including children). The irony of deploying AI surveillance to protect children from AI privacy risks appears to be lost on its proponents.

Device-level age signals use parental control settings, device management profiles, or operating system age indicators as verification proxies. These methods are less invasive but trivially circumvented by children using unmanaged devices, shared household devices, or friends’ devices.
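As an illustration of the trade-off device-level signals make, here is a sketch of how a service might consume a hypothetical OS-attested age bracket – every name and field below is invented for illustration, not any real platform API. The design choice worth noting is failing closed: an unknown or merely self-declared age gets the child-safe defaults.

```python
from dataclasses import dataclass
from enum import Enum

class AgeBracket(Enum):
    UNKNOWN = "unknown"
    UNDER_13 = "under_13"
    TEEN = "13_to_17"
    ADULT = "18_plus"

@dataclass
class DeviceAgeSignal:
    """Hypothetical attestation from a parental-control profile or
    managed device. attested_by_os is False for self-declared ages."""
    bracket: AgeBracket
    attested_by_os: bool

def data_handling_defaults(signal: DeviceAgeSignal) -> dict:
    """Map an age signal to privacy defaults, failing closed."""
    trusted_adult = signal.attested_by_os and signal.bracket is AgeBracket.ADULT
    return {
        "retain_conversations": trusted_adult,
        "use_for_model_training": False,  # off for every bracket by default
        "behavioral_profiling": trusted_adult,
        "require_parental_consent": signal.bracket in (AgeBracket.UNKNOWN, AgeBracket.UNDER_13),
    }

# A self-declared "adult" still gets the child-safe defaults:
print(data_handling_defaults(DeviceAgeSignal(AgeBracket.ADULT, attested_by_os=False)))
```

Even this design fails exactly where noted above: a child on an unmanaged or borrowed device presents no attested signal at all, and the service cannot distinguish that case from a privacy-conscious adult.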

The fundamental paradox: protecting children’s privacy in AI systems requires collecting additional sensitive data to verify their age, which itself creates privacy risks. Every verification method trades one category of privacy harm for another.

What Children’s Data Is Worth

The economic incentive to collect and retain children’s data is substantial, even when – especially when – the collection violates regulatory frameworks.

Children’s data has long-term commercial value that adult data cannot match. A profile established at age 10 and enriched over a lifetime of AI interactions represents a longitudinal behavioral dataset of extraordinary marketing and modeling value. Early conversational data captures cognitive development, preference formation, identity development, and social relationship patterns at their most formative stage.

Data brokers have historically valued children’s data at a premium. A 2023 report by the Government Accountability Office found that children’s data was sold at 2x to 5x the per-record price of adult data in commercial data markets. The premium reflects the data’s predictive power over long time horizons and its scarcity relative to the volume of adult data available.

For AI model training specifically, children’s conversational data fills a critical gap. AI chatbots trained primarily on adult text data perform poorly in conversations with children – misunderstanding vocabulary, missing context, and providing age-inappropriate responses. Children’s conversational data is the training signal needed to fix this problem, creating a direct financial incentive for platforms to collect and retain it even when doing so violates their own terms of service.

International Approaches to Children’s AI Privacy

The global regulatory response to children’s AI privacy varies significantly, reflecting different cultural attitudes toward both children’s rights and data protection.

The United Kingdom

The UK’s Age Appropriate Design Code (Children’s Code), implemented in 2021, requires online services likely to be accessed by children to implement 15 privacy standards by default. The code’s provisions – including data minimization, high privacy settings by default, and restrictions on behavioral profiling of children – represent the most comprehensive children’s digital privacy framework globally. The Information Commissioner’s Office (ICO) has issued enforcement guidance specifically addressing AI services, requiring providers to conduct Data Protection Impact Assessments that account for child users even when the service is nominally directed at adults.

The European Union

The GDPR framework sets the age of digital consent between 13 and 16, depending on the member state. For AI services, Article 22’s restrictions on automated decision-making apply with heightened scrutiny when the data subject is a child. The EU AI Act’s risk classification framework identifies AI systems interacting with vulnerable populations, including children, as subject to elevated compliance requirements.

China

China’s Provisions on the Management of Children’s Personal Information Online (2019) and the subsequent Interim Measures for the Management of Generative Artificial Intelligence Services (2023) explicitly address children’s data in AI contexts. The Chinese framework requires separate consent for children’s data processing, prohibits the use of children’s data for model training without specific authorization, and mandates domestic data residency for children’s personal information.

Australia

Australia’s Online Safety Act and the proposed Children’s Privacy Code, expected to take effect in 2026, would require age assurance for social media and AI services and mandate privacy-by-design for services likely to be used by children.

The Limits of Consent

The deepest challenge in children’s AI privacy is not regulatory gaps or technological limitations. It is that the consent model underpinning all data protection law is fundamentally incompatible with how children interact with technology.

Consent requires understanding. A child cannot meaningfully understand what it means for their conversational data to train a language model, how model memorization might surface their disclosures in other users’ interactions, or what the long-term implications of a permanent digital behavioral profile will be for their adult life. Parental consent substitutes the parent’s understanding, but parents are demonstrably unable to track, evaluate, and manage the privacy implications of every AI system their child encounters.

The consent model also presumes a transaction: the user agrees to data practices in exchange for service access. But children do not choose their AI environments. Schools deploy AI tutors. Household devices embed AI assistants. Social platforms integrate AI features. The child’s “consent” to these systems is as meaningful as their “consent” to the school curriculum or the furniture in their living room.

A privacy architecture that actually protects children cannot rely on consent. It must rely on technical constraints that make harmful data practices impossible regardless of whether anyone – child, parent, or regulator – notices or objects. This is the distinction between privacy as policy and privacy as architecture – and for children, the architectural approach is the only one with any realistic chance of working.

The Stealth Cloud Perspective

Children deserve the strongest privacy protections and receive the weakest. Every regulatory framework governing children’s AI privacy depends on the same fragile mechanism: trusting the entity that wants the data to not collect it, or to delete it when asked. Stealth Cloud’s zero-persistence architecture eliminates this dependency entirely. Data that never exists beyond the moment of processing cannot be retained for training, cannot be sold to data brokers, and cannot be subpoenaed, breached, or exploited – regardless of the user’s age. The right architecture makes the age verification paradox irrelevant: when no user’s data is retained, there is no children’s data to protect, because there is no data at all.
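What that looks like in code is defined less by what is present than by what is absent. The sketch below is an illustration of the zero-persistence idea under stated assumptions – a stand-in generate function, with no claim to represent Stealth Cloud’s actual implementation:

```python
def handle_chat_turn(prompt: str, generate) -> str:
    """Process one conversational turn with zero persistence.

    The guarantee is structural: this path contains no database client,
    no file handle, no logger, and no analytics call. Whether the user
    is 9 or 49 is irrelevant, because nothing here can retain either
    the question or the answer.
    """
    reply = generate(prompt)  # inference over in-memory state only
    return reply
    # When this frame returns, prompt and reply become unreachable.
    # A production design would add memory scrubbing and attestation,
    # but the core property is already visible: no write path exists.

if __name__ == "__main__":
    stand_in_model = lambda p: f"(reply to {len(p)} characters, never stored)"
    print(handle_chat_turn("a child's private question", stand_in_model))
```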