Google completed its $2.1 billion acquisition of Fitbit in January 2021, and by September 2024 it was integrating Fitbit’s health data into Google’s broader AI infrastructure. The integration gave Google access to the continuous biometric data of 29 million active Fitbit users – heart rate variability, sleep staging, skin temperature, blood oxygen levels, stress response metrics, menstrual cycle tracking, and granular physical activity data collected 24 hours a day, 7 days a week.

Google had committed to the European Commission, as a condition of the acquisition’s regulatory approval, that it would not use Fitbit health data for advertising purposes for a period of 10 years. The commitment was narrower than it appeared: it restricted the use of health data for advertising but did not restrict its use for AI model training, product development, health research, or the development of new services. By 2025, Fitbit health data was being processed through Google’s AI systems for “health insights,” “wellness recommendations,” and “research partnerships” – uses that fell outside the advertising restriction while still monetizing the biometric data of millions of users.

The global wearable device market shipped 440 million units in 2024, according to IDC. Apple Watch, Samsung Galaxy Watch, Garmin, Oura Ring, Whoop, and an expanding ecosystem of fitness trackers, smart rings, and health-monitoring devices collect continuous physiological data from hundreds of millions of people. These devices are, collectively, the largest source of continuous health data ever assembled. And the AI systems that process this data operate under privacy frameworks designed for step counters, not for the clinical-grade biometric surveillance platforms that modern wearables have become.

What Wearables Collect

Modern AI-powered wearables collect physiological data at a granularity and continuity that would require hospitalization to replicate in a clinical setting.

The Biometric Data Stream

A current-generation smartwatch or fitness tracker continuously monitors:

  • Heart rate and heart rate variability (HRV): Sampled every 1-5 seconds, 24 hours a day. HRV is a sensitive indicator of autonomic nervous system function, reflecting stress, recovery, illness onset, and emotional state.
  • Blood oxygen saturation (SpO2): Monitored during sleep and activity, indicating respiratory health, sleep apnea risk, and altitude adaptation.
  • Skin temperature: Variations of 0.1 degrees Celsius are tracked, reflecting circadian rhythm, illness onset, and menstrual cycle phase.
  • Electrodermal activity (EDA): Measures skin conductance changes related to stress, emotional arousal, and autonomic nervous system activation.
  • Sleep staging: Classifies sleep into light, deep, and REM phases using motion, heart rate, and respiratory data.
  • Physical activity: Steps, distance, elevation, exercise type classification, and metabolic equivalents.
  • GPS location: Continuous or frequent location tracking during outdoor activity, and periodic tracking at all times for devices with cellular connectivity.

The Apple Watch Series 10 added sleep apnea detection. The Samsung Galaxy Watch 7 incorporated skin temperature sensing and irregular heart rhythm detection. Oura Ring Generation 3 added daytime stress monitoring and blood oxygen tracking. Each device generation expands the biometric data stream.

The Inferred Health Profile

The raw sensor data is only the beginning. AI models running on the device and in the cloud derive inferred health indicators from the raw signals:

Cardiovascular fitness estimates based on heart rate during exercise and recovery patterns. These estimates correlate closely with VO2 max measurements performed in clinical settings.

Stress scores computed from HRV, EDA, and behavioral patterns. These scores create a continuous emotional surveillance record – a moment-by-moment log of the wearer’s stress and calm states.

Illness detection from anomalies in resting heart rate, temperature, HRV, and sleep patterns. Research has demonstrated that wearable data can detect COVID-19 infection up to 5 days before symptom onset. The same detection capability applies to other infections, inflammatory conditions, and autonomic dysfunction.
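The core of this detection is anomaly scoring against a personal baseline. The sketch below is a toy illustration of the general approach, not any published study’s algorithm – real systems fuse multiple signals (temperature, HRV, sleep) and use carefully tuned thresholds:

```python
from statistics import mean, stdev

def resting_hr_anomaly(history: list[float], today: float,
                       z_threshold: float = 2.5) -> bool:
    """Flag today's resting heart rate as anomalous relative to a
    personal baseline, using a simple z-score. A toy sketch of the
    approach in wearable illness-detection research."""
    baseline = mean(history)
    spread = stdev(history)
    return (today - baseline) / spread > z_threshold

# 28 nights of resting heart rate (bpm), then a spike of the kind
# seen in pre-symptomatic infection.
baseline_days = [54, 55, 53, 56, 54, 55, 53, 54, 56, 55, 54, 53, 55, 54,
                 56, 55, 53, 54, 55, 54, 53, 56, 55, 54, 55, 53, 54, 55]
print(resting_hr_anomaly(baseline_days, today=63.0))  # True (elevated)
print(resting_hr_anomaly(baseline_days, today=55.0))  # False (normal)
```

The privacy implication is the point: the same per-user baseline that enables early illness detection is, by construction, a longitudinal physiological profile of that individual.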

Mental health indicators derived from sleep disruption, HRV patterns, physical activity changes, and social behavior metrics (reduced movement, changed routines). A 2024 study in Lancet Digital Health demonstrated that wearable data could predict depressive episodes with 78% sensitivity and 82% specificity using only passively collected biometric and behavioral signals.
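Sensitivity and specificity figures are easy to over-read when a screen runs continuously over an entire user population. A quick Bayes calculation shows why – the 8% prevalence below is an illustrative assumption, not a figure from the study:

```python
def positive_predictive_value(sensitivity: float, specificity: float,
                              prevalence: float) -> float:
    """P(condition | positive flag), via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Sensitivity/specificity as cited above; prevalence is assumed.
ppv = positive_predictive_value(sensitivity=0.78, specificity=0.82,
                                prevalence=0.08)
print(f"{ppv:.0%}")  # roughly 27%
```

At that base rate, roughly three out of four users flagged as depressed would be false positives – yet under current frameworks, the flag itself becomes part of their inferred health profile.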

Reproductive health tracking including menstrual cycle prediction, fertility window estimation, and pregnancy detection from temperature and HRV patterns. This category of health inference has become acutely privacy-sensitive in the post-Dobbs legal landscape, where reproductive health data can have legal consequences in U.S. states that restrict abortion access, while other jurisdictions protect the same data far more strongly.

The Regulatory Gap

The most consequential privacy failure in the wearable AI ecosystem is the regulatory gap between the clinical sensitivity of the data and the legal framework governing it.

HIPAA Does Not Apply

HIPAA protects health information held by covered entities – healthcare providers, health plans, and healthcare clearinghouses. Wearable device manufacturers and the AI platforms that process their data are not covered entities. The health data collected by your Apple Watch, Fitbit, Oura Ring, or Whoop band receives no HIPAA protection, regardless of how clinically relevant or medically sensitive it may be.

This regulatory gap means that data clinically indistinguishable from what a hospital cardiac monitor produces – continuous ECG, heart rate variability, blood oxygen saturation – receives fundamentally different legal protection depending on whether it was collected by a medical device in a clinical setting or by a consumer wearable on your wrist.

The healthcare privacy gap has been documented extensively for health apps and direct-to-consumer testing services. For wearables, the gap is uniquely dangerous because of the data’s continuous nature. A health app collects data when you use it. A wearable collects data while you exist.

FTC and State Protections

In the absence of HIPAA coverage, wearable health data falls under general consumer protection frameworks. The FTC has enforcement authority over deceptive and unfair data practices, and has used this authority against health data misuse (including the Flo Health settlement in 2021, where the period-tracking app shared health data with Facebook and Google analytics despite promises of confidentiality).

State health data privacy laws provide additional but inconsistent protection. Washington state’s My Health My Data Act (2023) covers consumer health data regardless of HIPAA status, including wearable health data. California’s CCPA provides data access and deletion rights. Several other states have enacted or are considering similar protections. But the patchwork creates compliance complexity without comprehensive coverage.

The EU Framework

The EU’s GDPR classifies health data as a special category requiring explicit consent or another specific lawful basis. This provides stronger baseline protection for European wearable users. However, the practical enforcement of GDPR consent requirements for continuous biometric data collection is challenging – the granularity and persistence of wearable data collection make meaningful informed consent difficult to achieve and maintain.

The EU’s European Health Data Space regulation, which entered into force in 2025 with phased application in the years that follow, creates a framework for secondary use of health data (including wearable data) for research and public health purposes. The regulation includes privacy protections but also creates new pathways for wearable data to flow to researchers, pharmaceutical companies, and government health agencies.

Where Your Wearable Data Goes

The data supply chain for wearable health data extends far beyond the device manufacturer.

Cloud Processing Infrastructure

Most wearable health analysis occurs in the cloud, not on the device. The raw sensor data is transmitted to the manufacturer’s cloud infrastructure, where AI models process it into health insights. Apple’s HealthKit data syncs to iCloud. Fitbit data syncs to Google’s cloud. Whoop data is processed on AWS infrastructure.

The cloud processing pipeline creates points of access for the manufacturer’s employees (for model development and debugging), for cloud infrastructure providers (who physically host the data), and for law enforcement (through subpoenas and court orders served on the manufacturer or cloud provider).

Research Partnerships

Wearable manufacturers routinely share health data with academic and commercial research partners. Fitbit’s research program has shared aggregated and individual-level health data with dozens of research institutions. Apple’s Research app facilitates data sharing for health studies. Oura has partnered with research institutions studying sleep, reproductive health, and COVID-19 detection.

These research partnerships provide genuine scientific value. They also create data flows that extend wearable health data to new entities, often with data use agreements that permit retention for years and analysis for purposes broader than the original study.

Insurance and Employer Programs

The intersection of wearable data and insurance underwriting represents one of the most privacy-sensitive data flows in the wearable ecosystem.

Employer wellness programs frequently incentivize or require wearable device use, collecting employee health data through partnerships with wearable manufacturers and wellness platforms like Virgin Pulse, Limeade, and Vitality. A 2024 survey by the Kaiser Family Foundation found that 53% of large employers with wellness programs incorporated wearable device data, with 24% providing financial incentives (premium discounts, HSA contributions) tied to meeting activity or health metric targets.

The health data collected through these programs flows to the wellness platform vendor, and potentially to the employer’s insurance broker, stop-loss carrier, and population health management consultants. An employee who wears a company-provided fitness tracker is generating a continuous health data stream that informs decisions about their health insurance, disability risk assessment, and potentially their employment status.

Life insurance companies have introduced programs that use wearable data for dynamic premium adjustment. John Hancock sells all new life insurance policies with its Vitality program, which adjusts premiums and rewards based on activity levels and health metrics shared from fitness trackers. The program creates a direct financial linkage between continuous biometric surveillance and insurance pricing.

Data Broker Markets

Wearable health data has entered commercial data broker markets through multiple channels. A 2024 investigation by Duke University’s Sanford School of Public Policy found that health data derived from wearable devices – including heart rate patterns, sleep quality scores, and activity levels – was available for purchase from data brokers at prices ranging from $0.12 to $0.32 per record.

The investigation documented that data broker health datasets included wearable-derived data linked to demographic information, enabling purchasers to target individuals by health status and behavioral patterns. The data was available to any purchaser without verification of identity, purpose, or compliance with health data regulations.

The Reproductive Health Data Crisis

The U.S. Supreme Court’s Dobbs decision in 2022, which eliminated the federal right to abortion, transformed reproductive health data from a privacy concern into a potential legal liability.

Period-tracking features in wearable devices and companion apps collect data that could be used to establish pregnancy and pregnancy termination. In states where abortion is criminalized, this data is potentially subpoenable as evidence.

The concern is not speculative. In 2022, prosecutors in Nebraska obtained Facebook messages related to a teenager’s abortion and used them as evidence in criminal proceedings. Wearable health data – which includes temperature patterns, HRV changes, and menstrual cycle data that can indicate pregnancy with high accuracy – represents an even more comprehensive and difficult-to-delete evidence source.

Wearable manufacturers have responded with varying degrees of urgency. Apple implemented end-to-end encryption for HealthKit data synced to iCloud. Fitbit (Google) and Garmin have not implemented equivalent encryption for reproductive health data. Oura Ring encrypts data in transit and at rest but retains decryption capability.

The Protection Gap

For users in restrictive jurisdictions, the lack of HIPAA coverage for wearable data means there is no legal privilege protecting reproductive health data collected by consumer devices. Law enforcement can obtain this data through standard subpoenas served on the device manufacturer, without the heightened protections required for medical records.

The architectural solution is clear: reproductive health data should be encrypted with keys that only the user holds, stored only on the user’s device, and processed locally rather than in the cloud. The fact that most wearable manufacturers have not implemented this architecture reflects commercial priorities – cloud-based processing enables research partnerships, insurance integrations, and data monetization that on-device processing would foreclose.
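The architecture can be sketched in a few lines. The toy cipher below (an HMAC-based keystream standing in for a real authenticated cipher such as AES-GCM) exists only to make the structural point: the key never leaves the device, only ciphertext is synced, and deleting the key renders every cloud copy unreadable.

```python
import hmac, hashlib, secrets

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy stream cipher: HMAC-SHA256 in counter mode as a keystream.
    Illustrates the architecture only; production systems would use an
    authenticated cipher (e.g., AES-GCM), never a hand-rolled one."""
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        pad = hmac.new(key, nonce + block.to_bytes(8, "big"),
                       hashlib.sha256).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

# 1. Key is generated and kept on the user's device -- never uploaded.
device_key = secrets.token_bytes(32)

# 2. Only ciphertext leaves the device for cloud sync.
record = b'{"cycle_day": 14, "skin_temp_delta_c": 0.4}'
nonce = secrets.token_bytes(16)
ciphertext = keystream_xor(device_key, nonce, record)

# 3. The user can still decrypt locally...
assert keystream_xor(device_key, nonce, ciphertext) == record

# 4. ...and deleting the key "cryptographically shreds" every synced
#    copy: without device_key, the ciphertext is unrecoverable.
device_key = None
```

Nothing in this design is technically novel; it is the standard end-to-end encryption pattern, withheld here by business model rather than by engineering difficulty.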

Protecting Your Wearable Health Data

Review data sharing settings. Every major wearable platform includes data sharing controls. Review and restrict sharing with research programs, wellness platforms, and third-party apps. Disable features you don’t use.

Disable cloud sync for sensitive data. Where possible, configure wearable data to process and store locally rather than syncing to cloud services. Apple’s HealthKit supports on-device storage with selective sync. Other platforms offer more limited local-only options.

Use the most privacy-protective platform available. Apple’s combination of on-device processing, end-to-end encryption, and strong privacy commitments makes it the most privacy-preserving major wearable platform. This is not an endorsement of Apple’s broader privacy practices, but a recognition that the alternatives are measurably worse for health data.

Be cautious with employer wellness programs. Understand what data is collected, who receives it, how long it’s retained, and whether participation is truly voluntary. If your employer incentivizes wearable use, consider whether the financial benefit is worth the health data exposure.

Exercise data deletion rights. Under CCPA, GDPR, and emerging state health data laws, request deletion of wearable health data from manufacturers and third-party recipients periodically. The data you delete today is data that cannot be subpoenaed, breached, or sold tomorrow.

The Stealth Cloud Perspective

Wearable AI devices produce the most intimate continuous data stream in the consumer technology ecosystem – a 24/7 biometric diary of your physical and emotional state. This data deserves the strongest privacy protections available, yet receives some of the weakest. The gap between the data’s sensitivity and its legal protection is an architectural failure, not a regulatory oversight. Stealth Cloud was designed on the principle that sensitive data should exist only for the moment it is needed and should be cryptographically shredded thereafter. The wearable industry’s model – collect everything, retain indefinitely, monetize broadly – is the antithesis of this principle. The right to know your own health data should not require surrendering that data to a cloud infrastructure where it can be brokered, subpoenaed, and weaponized against you.