In January 2025, the SEC issued a risk alert identifying AI as a focus area for upcoming examinations of registered investment advisers and broker-dealers. In March 2025, the EDPB published its opinion on AI model development and deployment under GDPR. In June 2025, the EU AI Act’s first provisions came into force. In August 2025, the NIST AI Risk Management Framework received its first major update since its initial release. In every regulated industry – from healthcare to finance to defense to education – the regulatory apparatus is converging on the same question: what is your organization doing with AI, and how are you governing it?

Most organizations cannot answer that question with precision. A 2025 survey by Gartner found that 68% of enterprises reported using generative AI tools in at least one business function. Of those, only 24% had a comprehensive AI governance framework in place. The remaining 76% were operating with some combination of ad hoc policies, departmental guidelines, or – in 29% of cases – no AI governance at all.

The gap between AI adoption and AI governance is the single largest unmanaged risk in enterprise technology. And the person responsible for closing that gap is, in most organizations, the Chief Information Security Officer.

What follows is not a theoretical framework. It is a practical checklist – 20 questions that a CISO should be able to answer about their organization’s AI use. Each question is grounded in specific regulatory requirements, audit expectations, and incident scenarios. If your CISO cannot answer these questions today, they are operating blind.

Section 1: Data Flow Mapping

Question 1: What AI Tools Are in Use Across the Organization?

Why it matters: You cannot govern what you do not know about. Shadow AI – unauthorized use of consumer AI tools by employees – is the primary vector for uncontrolled data exposure. The Samsung ChatGPT incident demonstrated that employees will use whatever AI tools are available, regardless of corporate policy.

What good looks like: A continuously updated inventory of all AI tools in use, including sanctioned enterprise tools, departmental tools, and detected shadow AI usage. The inventory includes the tool name, vendor, tier (consumer/enterprise/API), classification of the data processed, number of users, and business justification.

How to audit: Network traffic analysis for connections to known AI API endpoints. Endpoint detection for installed AI applications. Employee surveys (anonymous) about AI tool usage. Procurement and expense system review for AI tool subscriptions.
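
In practice, the network-side part of that audit can start from proxy or DNS logs. The sketch below, assuming a CSV proxy log with a dest_host column and a hand-maintained (and necessarily incomplete) endpoint list, flags traffic to known AI APIs; both the log format and the endpoint list are illustrative assumptions.

```python
# Sketch: flag outbound connections to known AI API endpoints in a proxy log.
# The endpoint list and log format are illustrative assumptions, not a complete catalog.
import csv
from collections import Counter

KNOWN_AI_ENDPOINTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
    "api.cohere.ai",
}

def detect_shadow_ai(proxy_log_path: str) -> Counter:
    """Count requests per AI endpoint from a proxy log with a 'dest_host' column."""
    hits = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("dest_host", "").lower()
            if any(host == ep or host.endswith("." + ep) for ep in KNOWN_AI_ENDPOINTS):
                hits[host] += 1
    return hits

if __name__ == "__main__":
    for host, count in detect_shadow_ai("proxy_log.csv").most_common():
        print(f"{host}: {count} requests")
```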

Question 2: What Data Flows Into Each AI Tool?

Why it matters: The risk profile of an AI tool depends entirely on what data it processes. An AI tool used for public-facing marketing copy presents negligible data risk. The same tool used to summarize confidential board materials presents existential risk.

What good looks like: A data flow map for each AI tool showing: data sources (what systems or users provide input), data classification of inputs (public, internal, confidential, restricted, regulated), processing location (on-premise, vendor cloud, third-party API), output destinations, and metadata generated.
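
One way to make that map auditable is to hold each tool’s flow as a structured record. The sketch below simply mirrors the fields listed above; the schema and example values are assumptions, not a prescribed standard.

```python
# Sketch: a structured record per AI tool, mirroring the data flow map fields above.
# Field names and example values are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AIDataFlow:
    tool: str
    data_sources: List[str]            # systems or users providing input
    input_classification: str          # public / internal / confidential / restricted / regulated
    processing_location: str           # on-premise / vendor cloud / third-party API
    output_destinations: List[str]
    metadata_generated: List[str] = field(default_factory=list)

flows = [
    AIDataFlow(
        tool="enterprise-chat-assistant",
        data_sources=["employee prompts", "document connector"],
        input_classification="confidential",
        processing_location="vendor cloud",
        output_destinations=["user session", "audit log"],
        metadata_generated=["usage telemetry", "prompt embeddings"],
    ),
]

# Example query: which tools process confidential or regulated data off-premise?
high_risk = [f.tool for f in flows
             if f.input_classification in ("confidential", "restricted", "regulated")
             and f.processing_location != "on-premise"]
print(high_risk)
```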

How to audit: Technical data flow analysis using DLP (Data Loss Prevention) tools configured to monitor AI tool traffic. Review of AI tool integration configurations. Sample analysis of actual prompts (with appropriate privacy protections) to verify data classification compliance.

Question 3: Where Does Data Go After It Reaches the AI Provider?

Why it matters: The AI provider’s internal data pipeline determines the actual risk, regardless of contractual promises. Understanding whether data is used for training, retained for abuse monitoring, shared with sub-processors, or stored in specific jurisdictions requires technical analysis, not just policy review. The OpenAI data practices analysis demonstrates the complexity of even a single provider’s data pipeline.

What good looks like: Documented understanding of each AI provider’s data pipeline, verified through technical assessment, not just policy review. This includes: training data use policy, retention periods for different data categories, sub-processor list, data center locations, and abuse monitoring practices.

How to audit: Provider questionnaires completed by the vendor’s engineering team (not just their sales team). Independent security assessments where available. Review of the provider’s SOC 2 report and privacy documentation. Comparison against the AI provider privacy scoreboard.

Question 4: Is Sensitive Data Being Transmitted to AI Systems Without Authorization?

Why it matters: Employees transmit sensitive data to AI systems through prompts, file uploads, and API integrations. DLP systems configured for email and cloud storage may not monitor AI tool traffic.

What good looks like: DLP rules that cover AI tool endpoints. Real-time monitoring for sensitive data patterns (PII, financial data, health records, source code, legal documents) in AI tool traffic. Automated alerting for policy violations. Client-side PII stripping technology deployed for approved AI tool access.

How to audit: DLP rule configuration review. Test transmissions of synthetic sensitive data to verify detection. Traffic analysis for unmonitored channels.
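
As a rough illustration of the test-transmission step, the sketch below checks outbound prompt text against a handful of simplified sensitive-data patterns. Real DLP rules are far more extensive; the regexes and the synthetic test string here are assumptions for illustration only.

```python
# Sketch: scan outbound prompt text for a few sensitive-data patterns.
# Patterns are simplified illustrations, not production DLP rules.
import re

PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

# Synthetic test transmission, as suggested in the audit step above.
synthetic = "Patient SSN 123-45-6789, contact jane.doe@example.com"
violations = scan_prompt(synthetic)
assert violations, "DLP test failed: synthetic sensitive data was not detected"
print("Detected:", violations)
```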

Section 2: Vendor Assessment

Question 5: Do We Have Data Processing Agreements With Every AI Vendor?

Why it matters: Without a DPA, the organization has no contractual control over how its data is processed, retained, or used. Under GDPR, processing personal data without a DPA is itself a violation (Article 28).

What good looks like: Executed DPAs with every AI vendor that processes organizational data. DPAs that specifically address: training data restrictions, data retention limits, data deletion procedures, sub-processor notifications, breach notification timelines, and audit rights.

How to audit: Legal review of all AI vendor agreements. Gap analysis against GDPR Article 28 requirements and/or applicable sector-specific regulations (HIPAA BAAs, FINRA requirements).

Question 6: What Are the Contractual Restrictions on AI Vendors’ Use of Our Data?

Why it matters: The contractual language matters less than the architecture, but the contract is the enforceable boundary. Terms like “service improvement” can encompass model training. Terms like “abuse monitoring” can encompass indefinite retention.

What good looks like: Clear contractual prohibitions on training use, with specific definitions. Retention limits that match or are shorter than the organization’s own policies. Deletion rights that are exercisable and verifiable. IP indemnification for data leakage through model outputs.

How to audit: Legal analysis of contract terms against a standardized vendor assessment framework. Comparison of terms across vendors to identify outliers.

Question 7: What Happens if the AI Vendor Changes Its Data Practices?

Why it matters: AI vendor terms of service change frequently. A vendor that does not train on user data today may change that policy tomorrow. The organization needs notification rights and exit options.

What good looks like: Contractual requirements for advance notification of material data practice changes. Defined exit procedures including data deletion confirmation. Capability to migrate to alternative providers if terms become unacceptable.

How to audit: Contract review for change notification provisions. Vendor communications monitoring for policy updates. Documented exit procedures for each vendor.

Section 3: Retention and Deletion

Question 8: How Long Is Data Retained by Each AI Provider?

Why it matters: Data retention determines the exposure window. A provider that retains conversation content for 30 days creates 30 days of breach exposure. A provider with indefinite retention creates permanent exposure. Zero-persistence architecture eliminates retention risk entirely.

What good looks like: Documented retention periods for each AI provider, broken down by data category (conversation content, metadata, account data, usage telemetry). Retention periods that align with the organization’s data retention policies and regulatory requirements.

How to audit: Provider documentation review. Direct inquiry to provider’s data protection team. Verification through data subject access requests (submit DSAR to the provider and verify what data is returned and after what period).
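
A simple way to operationalize the alignment check is to compare each provider’s documented retention periods, per data category, against the organization’s own limits. Every retention value in the sketch below is a placeholder, not any provider’s actual terms.

```python
# Sketch: flag provider retention periods that exceed internal policy limits.
# All retention values are placeholders, not any provider's actual terms.
provider_retention_days = {
    "vendor-a": {"conversation_content": 30, "metadata": 365, "usage_telemetry": 730},
    "vendor-b": {"conversation_content": 0,  "metadata": 90,  "usage_telemetry": 90},
}

internal_policy_limit_days = {
    "conversation_content": 30,
    "metadata": 180,
    "usage_telemetry": 365,
}

for vendor, categories in provider_retention_days.items():
    for category, days in categories.items():
        limit = internal_policy_limit_days.get(category)
        if limit is not None and days > limit:
            print(f"{vendor}: {category} retained {days}d, exceeds policy limit of {limit}d")
```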

Question 9: Can We Verify That Deleted Data Is Actually Deleted?

Why it matters: Deletion in AI systems is more complex than in traditional databases. Data may exist in conversation logs, model training datasets, backup systems, monitoring logs, and embedding vectors. “Deleted” from the user interface may not mean deleted from all systems.

What good looks like: Documented deletion procedures for each AI provider. Technical verification that deletion covers all data instances (not just the primary datastore). Contractual right to receive deletion confirmation or certification.

How to audit: Submit test deletion requests and verify through subsequent access requests. Review provider’s deletion procedures documentation. Independent assessment where available.
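
The test-deletion step reduces to a reconciliation: record which identifiers were submitted for deletion, then check a later DSAR export for any of them. The export format and field names in the sketch below are assumptions.

```python
# Sketch: verify that identifiers submitted for deletion no longer appear in a
# later data subject access request (DSAR) export. Export format is an assumption.
import json

def verify_deletion(deleted_ids: set[str], dsar_export_path: str) -> set[str]:
    """Return identifiers that were requested for deletion but still appear in the export."""
    with open(dsar_export_path) as f:
        export = json.load(f)  # assumed: a list of records, each with an "id" field
    return {record["id"] for record in export if record.get("id") in deleted_ids}

if __name__ == "__main__":
    leftovers = verify_deletion({"conv-1001", "conv-1002"}, "dsar_export.json")
    print("Still present after deletion request:", leftovers or "none")
```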

Question 10: Are Our Data Retention Policies Updated for AI?

Why it matters: Most enterprise data retention policies were written before AI tool use was widespread. They may not cover AI-specific data categories: prompts, model outputs, conversation histories, AI-generated documents, and AI tool usage metadata.

What good looks like: Data retention policies explicitly address AI tool data. Policies specify retention periods for AI interactions, distinguish between content and metadata, and identify regulatory requirements applicable to each category. Policies are reviewed and updated at least annually.

How to audit: Policy review for AI-specific provisions. Gap analysis against regulatory requirements (GDPR storage limitation, HIPAA minimum necessary, SEC books and records).

Section 4: Incident Response

Question 11: Do We Have an AI-Specific Incident Response Plan?

Why it matters: An AI-related data incident – unauthorized data exposure through an AI system, training data extraction, model memorization leakage – requires different response procedures than a traditional data breach. The model memorization problem creates a category of incident that has no precedent in pre-AI security.

What good looks like: Incident response plan with AI-specific scenarios, escalation procedures, and communication templates. Clear classification criteria for AI incidents (unauthorized data submission, provider breach, model extraction, prompt injection). Defined notification timelines aligned with regulatory requirements.
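
Classification criteria become easier to exercise in a tabletop when the categories and escalation paths are written down explicitly. The sketch below enumerates the incident categories named above; the owners and notification windows are illustrative assumptions, not regulatory guidance.

```python
# Sketch: AI incident categories and escalation routing.
# Categories follow the text above; owners and timelines are illustrative assumptions.
from enum import Enum

class AIIncident(Enum):
    UNAUTHORIZED_DATA_SUBMISSION = "unauthorized data submission to an AI tool"
    PROVIDER_BREACH = "breach at the AI provider"
    MODEL_EXTRACTION = "proprietary data surfacing in model outputs"
    PROMPT_INJECTION = "prompt injection against an internal AI system"

ESCALATION = {
    AIIncident.UNAUTHORIZED_DATA_SUBMISSION: ("security operations", "24h triage"),
    AIIncident.PROVIDER_BREACH: ("CISO + legal", "regulatory notification clock may start immediately"),
    AIIncident.MODEL_EXTRACTION: ("CISO + legal + vendor", "48h containment plan"),
    AIIncident.PROMPT_INJECTION: ("application security", "24h triage"),
}

def route(incident: AIIncident) -> str:
    owner, timeline = ESCALATION[incident]
    return f"{incident.value} -> escalate to {owner} ({timeline})"

print(route(AIIncident.PROVIDER_BREACH))
```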

How to audit: Tabletop exercises using AI-specific incident scenarios. Review of incident classification criteria for AI coverage. Communication plan review.

Question 12: How Would We Detect if Proprietary Data Appeared in an AI Model’s Outputs?

Why it matters: If organizational data enters a training pipeline, it may surface in other users’ outputs. Detection is the first step in remediation. Without monitoring, the organization may never know its data has leaked.

What good looks like: Monitoring program that periodically queries AI services for organization-specific data patterns (proprietary terms, code snippets, document fragments). Canary data – synthetic, unique strings inserted into AI tool interactions – that can later be searched for in model outputs.
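
A minimal sketch of the canary approach follows: generate unique, searchable strings, record where they were inserted, and periodically check sampled model outputs for them. The token format and registry structure are arbitrary assumptions.

```python
# Sketch: generate and check canary strings for AI tool interactions.
# Token format and registry structure are illustrative assumptions.
import json
import secrets

def make_canary(tool: str) -> dict:
    """Create a unique canary string tied to a specific AI tool."""
    return {"token": f"CANARY-{secrets.token_hex(8)}", "tool": tool}

def check_outputs(canaries: list[dict], model_output: str) -> list[dict]:
    """Return canaries that appear in a sampled model output."""
    return [c for c in canaries if c["token"] in model_output]

registry = [make_canary("enterprise-chat-assistant")]
print(json.dumps(registry, indent=2))

# Later, during periodic monitoring of sampled outputs:
sampled_output = "..."
hits = check_outputs(registry, sampled_output)
if hits:
    print("Possible training-data leakage:", hits)
```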

How to audit: Review of monitoring procedures. Verify canary data program implementation. Test detection capability with synthetic data.

Section 5: Governance and Reporting

Question 13: Who Owns AI Governance in the Organization?

Why it matters: Without clear ownership, AI governance falls between existing functions (IT, legal, compliance, security) and receives inadequate attention from all of them.

What good looks like: Designated AI governance owner (CISO, Chief AI Officer, or cross-functional committee) with explicit authority over AI tool approval, policy development, vendor assessment, and incident response. Clear escalation paths to executive leadership and the board.

How to audit: Organizational chart review. RACI matrix for AI governance functions. Interview key stakeholders to verify functional clarity.

Question 14: Do We Have an AI Acceptable Use Policy?

Why it matters: Without a clear policy, employees make individual decisions about AI tool use based on convenience, not risk. The policy sets the organizational standard.

What good looks like: Documented AI acceptable use policy that specifies: approved tools for each data classification, prohibited uses, personal use guidelines, data handling requirements for AI tool use, and consequences for policy violations. The policy is communicated to all employees and included in onboarding.

How to audit: Policy document review. Employee awareness testing. Compliance monitoring.

Question 15: Are Employees Trained on AI Data Risks?

Why it matters: The best technical controls are undermined by untrained users. Employees must understand why AI data practices matter and what behavior is expected.

What good looks like: AI-specific training program covering data classification for AI use, approved tools, prohibited behaviors, incident reporting, and practical examples. Training is mandatory, tracked, and refreshed annually.

How to audit: Training completion records. Assessment scores. Post-training behavior monitoring.

Question 16: How Do We Report AI Risk to the Board?

Why it matters: AI risk is enterprise risk. Boards increasingly expect reporting on AI governance as part of their risk oversight responsibilities. SEC disclosure guidance and EU AI Act requirements create board-level reporting obligations, and frameworks such as the NIST AI RMF set the governance expectations boards increasingly apply.

What good looks like: Quarterly AI risk report to the board covering: AI tool inventory changes, data incident summary, regulatory developments, vendor risk assessment updates, and compliance posture. Metrics that track AI governance maturity over time.
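
Several of those maturity metrics can be derived directly from the tool inventory and incident log. The sketch below uses placeholder inputs; the specific metrics shown are examples, not a required set.

```python
# Sketch: derive a few board-level AI governance metrics from inventory and incident data.
# All inputs are placeholder values.
inventory = [
    {"tool": "enterprise-chat-assistant", "sanctioned": True,  "dpa_signed": True},
    {"tool": "consumer-chatbot",          "sanctioned": False, "dpa_signed": False},
    {"tool": "code-assistant",            "sanctioned": True,  "dpa_signed": False},
]
incidents_this_quarter = 2

total = len(inventory)
metrics = {
    "ai_tools_in_use": total,
    "pct_sanctioned": round(100 * sum(t["sanctioned"] for t in inventory) / total, 1),
    "pct_with_dpa": round(100 * sum(t["dpa_signed"] for t in inventory) / total, 1),
    "ai_incidents_this_quarter": incidents_this_quarter,
}
print(metrics)
```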

How to audit: Board report review. Interview board members and executives about reporting adequacy. Benchmark against industry AI governance reporting practices.

Section 6: Regulatory Compliance

Question 17: Which Regulations Apply to Our AI Use?

Why it matters: Different regulations impose different requirements. An organization that processes health data (HIPAA), financial data (GLBA), European personal data (GDPR), children’s data (COPPA), or government data faces distinct compliance obligations for AI use.

What good looks like: Regulatory mapping document that identifies all applicable regulations and maps them to specific AI tool uses. Updated as new regulations come into effect (EU AI Act provisions, state privacy laws).

How to audit: Legal review of regulatory applicability. Gap analysis between regulatory requirements and current AI governance practices.

Question 18: Are We Prepared for the EU AI Act?

Why it matters: The EU AI Act imposes transparency, documentation, human oversight, and accuracy requirements on high-risk AI systems. Many enterprise AI applications – HR decision support, credit scoring, healthcare AI – fall within the high-risk category.

What good looks like: AI Act compliance assessment identifying which organizational AI uses fall within high-risk categories. Compliance roadmap with timelines aligned to the Act’s phased implementation. Technical documentation meeting the Act’s transparency and explainability requirements.

How to audit: AI system inventory classified against the Act’s risk categories. Gap analysis against specific Article requirements.
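
The classification exercise can be captured as a simple gap check: map each AI use to a risk tier and flag high-risk uses that lack required technical documentation. The tier assignments in the sketch below are simplified assumptions, not legal analysis.

```python
# Sketch: classify AI uses against simplified EU AI Act risk tiers and flag
# high-risk uses missing documentation. Assignments are assumptions, not legal advice.
ai_uses = [
    {"use": "CV screening for hiring",   "risk_tier": "high",    "technical_docs": False},
    {"use": "marketing copy generation", "risk_tier": "minimal", "technical_docs": False},
    {"use": "credit scoring support",    "risk_tier": "high",    "technical_docs": True},
]

gaps = [u["use"] for u in ai_uses if u["risk_tier"] == "high" and not u["technical_docs"]]
print("High-risk uses missing technical documentation:", gaps)
```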

Question 19: What Is Our Maximum Regulatory Fine Exposure?

Why it matters: AI privacy fines are accelerating. Understanding the organization’s maximum exposure under each applicable regulation allows risk-appropriate investment in compliance.

What good looks like: Quantified risk assessment showing maximum fine exposure under each applicable regulation. Probability-weighted risk model incorporating enforcement trends and the organization’s compliance posture. Risk transfer analysis (insurance coverage for AI-related incidents).
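
At its simplest, the probability-weighted model sums, per regulation, the maximum fine multiplied by an estimated enforcement probability. Every figure in the sketch below is a placeholder assumption.

```python
# Sketch: probability-weighted regulatory fine exposure.
# Maximum fines and probabilities are placeholder assumptions, not legal estimates.
exposures = {
    "GDPR":      {"max_fine": 20_000_000, "probability": 0.05},
    "EU AI Act": {"max_fine": 15_000_000, "probability": 0.03},
    "Sectoral":  {"max_fine":  2_000_000, "probability": 0.02},
}

expected = sum(e["max_fine"] * e["probability"] for e in exposures.values())
worst_case = sum(e["max_fine"] for e in exposures.values())
print(f"Expected annual exposure: {expected:,.0f}; worst case: {worst_case:,.0f}")
```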

How to audit: Legal and financial analysis of regulatory exposure. Insurance policy review for AI coverage. Comparison against enforcement trends.

Question 20: Do We Have a Regulatory Horizon Scanning Process for AI?

Why it matters: AI regulation is evolving faster than any other area of technology law. New requirements emerge monthly from legislatures, regulators, courts, and standard-setting bodies. An organization that is compliant today may not be compliant next quarter.

What good looks like: Dedicated regulatory monitoring process tracking AI-related legislative proposals, regulatory guidance, enforcement actions, and court decisions across all relevant jurisdictions. Regular briefings to AI governance leadership. Compliance roadmap updates triggered by material regulatory developments.

How to audit: Review of horizon scanning process. Verify coverage of key jurisdictions and regulatory bodies. Test response time to significant regulatory developments.

Using This Checklist

This checklist is not aspirational. It is the minimum standard for enterprise AI governance in 2026. Organizations that cannot answer these 20 questions are carrying unquantified risk on their balance sheets – risk that regulators, auditors, customers, and investors will increasingly demand be addressed.

The organizations that will navigate the AI governance challenge most effectively are those that recognize a fundamental principle: AI governance is not primarily a policy problem. It is an architecture problem. The best policies, the most thorough training, and the most rigorous vendor assessments are all undermined if the underlying infrastructure – the processing layer where data actually lives – does not enforce the governance principles structurally. Policies are aspirational. Architecture is deterministic.

Every question on this checklist becomes simpler to answer when the AI processing layer guarantees zero persistence and zero knowledge. If data cannot be retained, retention policies are trivially satisfied. If data cannot be used for training, training restrictions need not be negotiated. If data is cryptographically shredded after inference, incident response for data exposure simplifies dramatically. The checklist remains necessary. But the right infrastructure makes every answer easier.

The Stealth Cloud Perspective

Twenty questions may seem like a lot. In practice, the right architecture collapses most of them into a single answer: nothing persists, nothing is learned, nothing can leak. Governance is essential, but it should be the verification layer above a system that is correct by construction – not the only thing standing between sensitive data and an uncontrolled training pipeline.