Navigate GDPR, HIPAA, SOC 2, EU AI Act, and FTC requirements. Vendor assessment framework, policy templates, and board reporting strategies.
AI regulation is fragmenting globally. The EU leads with the comprehensive AI Act (phased obligations taking effect 2025-2027). The US approach is piecemeal: FTC enforcement against deceptive AI claims, sector-specific regulation (HIPAA for healthcare, FINRA for finance), and emerging state privacy laws. This guide covers the major frameworks and how they interact with AI.
The core principle: AI doesn't create new legal obligations—it amplifies existing ones. Data privacy law applies to AI processing. Consumer protection law applies to AI-generated claims. Employment law applies to AI hiring decisions. Understanding your baseline legal obligations is prerequisite to AI compliance.
GDPR Article 6 requires a lawful basis for processing personal data. For AI systems, that means identifying one of the six bases (consent, contract, legal obligation, vital interests, public task, or legitimate interests) before any personal data enters the system.
Key requirement: You cannot process personal data in AI systems without an identified lawful basis. "We're using AI" is not sufficient justification—you need explicit lawful-basis documentation.
GDPR Articles 12-22 grant data subjects rights (access, correction, deletion, portability). AI systems complicate these rights. If an AI model makes decisions about a data subject, they may have a right to meaningful information about the logic involved (Articles 13-15) and, for solely automated decisions with legal effect, a right to human intervention (Article 22).
Compliance requirement: Document how you handle data subject access requests in AI systems. If AI is used for automated decision-making with legal effect (hiring, credit decisions), provide explicit mechanisms for data subjects to request human review.
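As an illustration (all names and thresholds hypothetical), the human-review mechanism can be wired into a decision pipeline so that no adverse automated outcome is finalized without a reviewer:

```python
def route_decision(model_score: float, human_review_requested: bool,
                   threshold: float = 0.5) -> str:
    """Route an automated decision (e.g. hiring, credit) per GDPR Art. 22.

    Any adverse outcome, and any outcome the data subject contests,
    goes to a human reviewer instead of being finalized automatically.
    """
    if human_review_requested:
        return "human_review"   # data subject exercised their right
    if model_score >= threshold:
        return "approved"       # favorable automated outcome
    return "human_review"       # adverse outcomes are never fully automated

print(route_decision(0.8, human_review_requested=False))  # → approved
print(route_decision(0.8, human_review_requested=True))   # → human_review
```

The key design choice is that the escape hatch is unconditional: a review request always overrides the model, regardless of score.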
GDPR Article 35 requires DPIAs for high-risk processing. AI is frequently high-risk, especially when processing sensitive data or making automated decisions affecting individuals. DPIAs for AI systems should address: model bias, training data sources, decision transparency, and data retention.
Compliance requirement: Complete DPIA for any AI system processing personal data. Review quarterly as models evolve or data changes.
If your AI vendor processes personal data on your behalf, GDPR Article 28 requires a Data Processing Agreement (DPA). This is often overlooked with modern AI vendors: cloud AI vendors may claim they are not processors, but if they access your personal data on your instructions, a DPA is required.
Compliance requirement: Ensure all AI vendors have signed DPAs before deployment. Verify data location, deletion rights, and sub-processor notification requirements.
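A minimal sketch of a vendor compliance register that blocks deployment until the GDPR Article 28 paperwork is in place (the field names and vendor are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class AIVendor:
    name: str
    handles_personal_data: bool
    dpa_signed: bool
    data_location: str         # e.g. "EU", "US"
    subprocessor_notice: bool  # contractual notice of sub-processor changes

def deployment_blockers(vendor: AIVendor) -> list[str]:
    """Return the reasons this vendor cannot yet be deployed."""
    issues = []
    if vendor.handles_personal_data and not vendor.dpa_signed:
        issues.append("missing signed DPA (GDPR Art. 28)")
    if vendor.handles_personal_data and not vendor.subprocessor_notice:
        issues.append("no sub-processor notification clause")
    return issues

vendor = AIVendor("ExampleAI", handles_personal_data=True,
                  dpa_signed=False, data_location="US",
                  subprocessor_notice=False)
print(deployment_blockers(vendor))
```

In practice the register would also track data location and deletion rights, but even this skeleton makes "no DPA, no deployment" enforceable rather than aspirational.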
HIPAA requires Business Associate Agreements for any vendor handling Protected Health Information (PHI). Many AI platforms (including ChatGPT and Claude) offer BAAs only on enterprise plans; standard consumer terms are insufficient for healthcare use.
Compliance requirement: Verify BAA status before deploying any AI tool in healthcare. Request a BAA from the vendor if one is not available on standard plans. Without a BAA, AI processing of PHI violates HIPAA.
The HIPAA Security Rule calls for encryption of PHI in transit and at rest (formally an "addressable" specification, but expected in practice). AI systems processing PHI must implement encryption and access controls, and cloud AI vendors should offer HIPAA BAAs with encryption options.
Compliance requirement: Require HIPAA BAAs, encryption, and audit logging for any AI system processing PHI. Document in your compliance audit trail.
SOC 2 comes in two levels: Type I (point-in-time assessment of controls) and Type II (assessment of controls operating over a review period, typically 6-12 months). Type II is significantly more rigorous and valuable for enterprise evaluation.
Vendor assessment: Require SOC 2 Type II certification from AI vendors handling sensitive data. Type I alone is insufficient for enterprise deployments.
SOC 2 assesses five trust services criteria: Security (data protection), Availability (system uptime), Processing Integrity (accuracy), Confidentiality (restricted access), and Privacy (personal information handling). When evaluating AI vendors, verify which trust services criteria their report actually covers.
Vendor assessment: Request SOC 2 audit reports and check scope carefully—only Security is mandatory in a SOC 2 audit; the other criteria are opt-in. For AI vendors handling sensitive data, require Confidentiality and Privacy in scope as well. Audit currency matters: reports cover a fixed period and should be refreshed annually.
The EU AI Act (2025-2027 enforcement) categorizes AI systems by risk level. Most business AI falls into lower-risk categories, but understanding risk classification is essential.
If your AI system is high-risk (hiring, credit decisions, healthcare), the EU AI Act requires: risk assessment, technical documentation, data governance, human oversight mechanisms, and conformity assessment (for EU deployment). Compliance deadline: 2027 for most requirements.
Compliance requirement: Map your AI systems to risk categories. For high-risk systems, develop conformity assessment and documentation plans now, targeting 2027 EU compliance deadline.
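The risk-mapping step can start as a maintained lookup from use-case domain to risk tier. A hypothetical sketch (the domain set below is illustrative, not the Act's authoritative Annex III list):

```python
# Illustrative high-risk domains; consult EU AI Act Annex III for the
# authoritative list of high-risk categories.
HIGH_RISK_DOMAINS = {"hiring", "credit", "healthcare", "education",
                     "law_enforcement", "critical_infrastructure"}

def classify(domain: str) -> str:
    if domain in HIGH_RISK_DOMAINS:
        return "high-risk"        # conformity assessment + documentation
    return "limited/minimal"      # transparency duties may still apply

# Map each internal AI system to its risk tier.
systems = {"resume-screener": "hiring", "support-chatbot": "customer_service"}
plan = {name: classify(domain) for name, domain in systems.items()}
print(plan)
```

The value of the exercise is the inventory itself: every AI system gets a named owner of its classification, and high-risk entries feed directly into the conformity-assessment plan.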
The FTC has issued guidance on AI endorsements, deceptive AI marketing, and security. Key points: don't claim AI capabilities you don't have, disclose AI-generated content, and implement reasonable security for AI systems.
Compliance requirement: Audit AI marketing claims for truthfulness. If claiming AI-generated content, disclose it. Maintain security audit logs for AI systems.
CCPA grants consumers rights to access, delete, and opt out of data sales. CPRA (effective January 2023) adds opt-out rights for "profiling," including AI decisions. The California Privacy Protection Agency is actively enforcing.
Compliance requirement: If operating in California, document CCPA rights for consumers. If using AI for profiling, provide explicit opt-out mechanism. Maintain audit trail of consumer requests.
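An append-only JSON-lines log is one simple way to keep the audit trail of consumer requests (the function and field names are hypothetical):

```python
import json
import tempfile
from datetime import datetime, timezone

def log_consumer_request(log_path: str, consumer_id: str,
                         request_type: str) -> None:
    """Append a CCPA/CPRA consumer request to a JSON-lines audit trail."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "consumer_id": consumer_id,
        "request_type": request_type,  # "access" | "delete" | "opt_out_profiling"
        "status": "received",
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Demo against a throwaway file.
path = tempfile.NamedTemporaryFile(suffix=".jsonl", delete=False).name
log_consumer_request(path, "consumer-123", "opt_out_profiling")
```

Append-only JSONL keeps each request timestamped and tamper-evident enough for most audits; a production system would also track status transitions (received, verified, fulfilled) and fulfillment deadlines.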
Do we need legal review for every AI deployment? Recommended practice: yes, at minimum for the first deployment of each AI system type (hiring AI, customer-facing AI, healthcare AI). Subsequent deployments of the same type need a compliance check, not full legal review.
What if a vendor lacks SOC 2 certification? Request their compliance roadmap and timeline for certification. If handling sensitive data, use vendors with SOC 2; for non-sensitive use cases, lack of certification is acceptable with documented risk acceptance.
Are the penalties real? Yes. GDPR violations: up to 20 million EUR or 4% of global annual revenue, whichever is higher. HIPAA violations: $100 to $50,000 per violation, tiered by culpability. FTC enforcement: monetary penalties and operational restrictions. The consequences are serious.
How often should AI compliance be audited? Minimum annually, with quarterly audits recommended for high-risk systems. Supplement with continuous monitoring and event-based audits (after regulatory changes, system changes, or incidents).
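The audit cadence can be encoded so due dates are computed rather than remembered (a sketch; the 91/365-day intervals approximate quarterly and annual cycles):

```python
from datetime import date, timedelta

def next_audit_due(last_audit: date, high_risk: bool) -> date:
    """Annual minimum for all AI systems; roughly quarterly for high-risk."""
    return last_audit + timedelta(days=91 if high_risk else 365)

print(next_audit_due(date(2024, 1, 1), high_risk=True))   # → 2024-04-01
```

Event-based triggers (regulatory changes, incidents) would simply reset `last_audit` early rather than wait for the computed date.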
What if an AI vendor is acquired or shuts down? Include vendor continuity requirements in contracts: data return/deletion guarantees, transition assistance, and notification obligations in case of acquisition or closure.
How should AI bias be handled? Document bias assessment for high-risk systems. If bias is detected, document the mitigation (retraining, monitoring, or system replacement). Failure to address known bias is a regulatory liability.
Board-level guidance on AI compliance:
AI compliance is no longer optional—it's existential for enterprise deployment. Organizations that treat compliance as an afterthought face regulatory risk, reputational damage, and operational disruption. Proactive compliance is a competitive advantage.
AI Compliance Checklist