Why Compliance Matters: The Legal Risk
AI systems used in hiring can discriminate against protected groups (race, gender, age, disability) even unintentionally. In 2023, the EEOC published technical assistance making clear that employers are legally liable for discrimination caused by AI tools they use, including tools built by vendors. High-profile cases have already emerged: Amazon scrapped an internal resume-screening algorithm after it was found to penalize women's resumes, and HireVue's video-interview analysis drew a federal regulatory complaint.
Compliance is no longer optional. Organizations deploying AI for HR must conduct bias audits, maintain audit trails, and be prepared to defend their tools.
EEOC Guidance on AI in Hiring (May 2023)
Key Requirements
- Non-discrimination: AI tools must not discriminate based on protected characteristics (race, color, religion, sex, national origin, age, disability, genetic information)
- Validation: If disparate impact is detected, employers must show the tool is job-related and consistent with business necessity, i.e., that its predictions correlate with job performance rather than protected-class membership
- Transparency: Employers should be able to explain AI decisions if challenged
- Monitoring: Employers should regularly audit AI tools and maintain records of monitoring
Disparate Impact: The 80% Rule
The EEOC uses the "four-fifths rule" (80% rule), drawn from the Uniform Guidelines on Employee Selection Procedures, as a benchmark for detecting disparate impact: if the selection rate for any protected group is less than 80% of the rate for the group with the highest selection rate, the tool may be discriminating.
Example: Your AI resume screening tool results in:
- Men: 60% pass rate
- Women: 30% pass rate
The 80% rule says: (30% / 60%) = 50%. Since this is less than 80%, there's likely disparate impact. You must then prove that your AI tool is validly predicting job performance for both groups.
NYC Local Law 144 (Enforcement Began July 2023)
Requirements for Employers Using AI in Hiring
- Pre-deployment audit: Conduct and document a bias audit before using any AI tool for hiring
- Annual audits: Audit the tool annually and retain results
- Notification: Notify candidates at least 10 business days before an automated employment decision tool is used, and disclose the job qualifications and characteristics it assesses
- Alternative process: Candidates may request an alternative selection process or a reasonable accommodation instead of AI screening
Applies to: Any employer using AI to screen, evaluate, or rank candidates for roles based in New York—even if the employer is not headquartered there.
Penalties: Civil penalties of up to $500 for a first violation and up to $1,500 for each subsequent violation; each day a non-compliant tool is used counts as a separate violation.
EU AI Act (2024+)
Key Requirements
The EU AI Act classifies hiring AI as "high-risk." Required safeguards include:
- Impact assessments: Document potential harms before deployment
- Human review: High-risk decisions must have human oversight
- Transparency: Explain how the AI works and its limitations
- Documentation: Maintain audit trails and training data records
- Governance: Assign responsibility for monitoring and compliance
Penalties: Violations can result in fines of up to €35 million or 7% of global annual turnover for prohibited practices, and up to €15 million or 3% for non-compliance with high-risk system requirements (ceilings that exceed GDPR's).
California Automated Decision Systems Accountability Act (2024)
Requirements
- Disclose use of AI in hiring and provide information on algorithmic decision factors
- Maintain records of bias testing for 3 years
- Respond to candidate requests for explanation of AI decisions
Disparate Impact Testing: How to Audit Your AI
Step 1: Calculate Selection Rates by Group
For each protected group (race, gender, age), calculate: Selection Rate = Number Hired / Number Screened
- Men: 100 screened, 60 hired = 60% selection rate
- Women: 100 screened, 30 hired = 30% selection rate
Step 2: Apply the 80% Rule
Lowest selection rate / Highest selection rate = 30% / 60% = 50%
Since 50% is less than 80%, disparate impact is indicated.
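Steps 1 and 2 can be automated. A minimal Python sketch, assuming per-group counts of candidates screened and hired (the function name and data shape are illustrative, not from any specific compliance tool):

```python
def adverse_impact_ratio(groups: dict[str, tuple[int, int]]) -> tuple[float, bool]:
    """Apply the four-fifths (80%) rule to per-group hiring counts.

    groups maps a group label to (number_screened, number_hired).
    Returns (impact_ratio, disparate_impact_indicated).
    """
    # Step 1: selection rate = hired / screened for each group
    rates = {g: hired / screened for g, (screened, hired) in groups.items()}
    # Step 2: lowest selection rate divided by highest selection rate
    ratio = min(rates.values()) / max(rates.values())
    # Ratio below 0.8 indicates disparate impact under the 80% rule
    return ratio, ratio < 0.8

# Worked example from the text: 60% vs 30% selection rates
ratio, flagged = adverse_impact_ratio({"men": (100, 60), "women": (100, 30)})
print(f"impact ratio = {ratio:.0%}, disparate impact indicated: {flagged}")
# → impact ratio = 50%, disparate impact indicated: True
```

The same function can be run monthly over live screening data to satisfy the ongoing-monitoring obligations described later in this guide.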
Step 3: Validation Study
If disparate impact is detected, commission a validation study. Work with an industrial-organizational psychologist to prove that your AI tool:
- Correlates with actual job performance (hire candidates with high scores; measure their subsequent performance)
- Doesn't correlate with protected characteristics (even after controlling for performance)
- Uses legitimate, job-related predictors (not proxies for protected class)
Compliance Checklist for HR AI Tools
Pre-Deployment
- Document the purpose and scope of the AI tool (what decisions will it make?)
- Conduct a bias audit with an independent third party
- Validate that tool correlates with job performance
- Review with employment counsel; document legal opinion
- Prepare disclosure language for candidates
- Establish audit schedule (annual minimum)
Deployment
- Notify candidates that AI was used in screening
- Offer an alternative selection process or accommodation on request (per NYC Law 144)
- Maintain audit logs of all AI decisions
- Track selection rates by protected groups (monthly)
- Train recruiters and hiring managers on responsible AI use
- Establish escalation process for unusual outcomes
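One way to satisfy the audit-log item above is to append each screening decision to an append-only store. A minimal JSON-lines sketch, in which the helper, field names, and tool name are all illustrative (align the actual schema with counsel's record-keeping requirements):

```python
import datetime
import json
import uuid

def log_ai_decision(path, candidate_id, tool, score, outcome, factors):
    """Append one AI screening decision to a JSON-lines audit log."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "candidate_id": candidate_id,   # pseudonymized ID, not raw PII
        "tool": tool,                   # name and version of the AI tool
        "score": score,
        "outcome": outcome,             # e.g. "advanced" / "rejected"
        "key_factors": factors,         # supports candidate explanation requests
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage
log_ai_decision("audit_log.jsonl", "cand-0042", "resume-screen-v3",
                0.87, "advanced", ["years_experience", "skills_match"])
```

Because each line is a self-contained JSON record with a timestamp and the factors behind the decision, the same file can feed the monthly selection-rate reports and answer individual explanation requests.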
Ongoing Monitoring
- Monthly: Review selection rate reports; flag any group whose selection rate falls below 80% of the highest group's rate
- Quarterly: Audit tool performance; validate continued accuracy
- Annually: Commission independent third-party bias audit; publish results
- Annually: Update training data; retrain model if needed
- Continuously: Investigate complaints; maintain records of complaints and resolutions
Documentation
- Keep copies of all bias audits (at least 3 years)
- Maintain training data and model specifications
- Document validation study results
- Log all candidate requests for explanation or human review
- Keep records of monitoring and any corrective actions taken
Key Takeaways
- You are liable: Employers are responsible for discrimination caused by AI tools they use, even when it is unintentional
- Audit before deploying: Don't wait for a lawsuit; conduct bias audits upfront
- Transparency matters: Tell candidates AI was used and explain decisions
- Keep records: Maintain audit logs and monitoring data for at least 3 years
- Use vendors with audits: Reputable AI vendors publish third-party bias audits; demand to see them
- Get legal review: Have employment counsel review before deployment