Regulatory Compliance

Illinois AI Video Interview Act: What Employers and HR Tech Vendors Must Know

TL;DR:

  • Illinois’s AI Video Interview Act (AIVICA) requires employers to notify applicants, explain how AI works, and obtain consent before using AI to analyze video interviews — plus delete recordings on request within 30 days.
  • BIPA and AIVICA impose concurrent obligations — a federal court ruled in Deyerler v. HireVue (Feb. 2024) that complying with one doesn’t excuse you from the other.
  • Illinois HB 3773 (effective January 1, 2026) expanded AI employment obligations further — employers now face discrimination liability and detailed notice requirements for any AI used in hiring, promotions, or discipline.

Illinois was regulating AI in hiring before most states could even spell “algorithmic accountability.”

The Artificial Intelligence Video Interview Act (AIVICA, 820 ILCS 42) took effect on January 1, 2020 — making Illinois one of the first states in the country to put guardrails on AI-driven hiring. And with HB 3773 taking effect on January 1, 2026, Illinois didn’t just update the rulebook — it wrote a whole new chapter.

If you’re an employer using AI video interviewing tools (HireVue, myInterview, Spark Hire, or similar), an HR tech vendor building these systems, or a compliance officer trying to figure out what’s actually required — here’s everything you need to know.

What the AI Video Interview Act Actually Requires

AIVICA is short, specific, and surprisingly readable. It has four core requirements under Sections 5, 10, 15, and 20:

1. Pre-Interview Notification (Section 5)

Before asking applicants for Illinois-based positions to submit video interviews, employers must:

  • Notify the applicant that AI may be used to analyze their video interview
  • Explain how the AI works and what types of characteristics it evaluates
  • Obtain consent from the applicant to be evaluated by the AI

This isn’t a buried-in-the-terms-of-service situation. All three steps must happen before the interview. No consent = no AI analysis. The statute is explicit: “An employer may not use artificial intelligence to evaluate applicants who have not consented.”
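The three-step gate can be expressed as a simple check in an application pipeline. This is a minimal illustrative sketch, not language from the statute — the `Applicant` fields and function name are hypothetical, and a real system would also log when and how each step occurred:

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    """Hypothetical applicant record; field names are illustrative."""
    name: str
    notified: bool = False   # Section 5: told that AI may analyze the video
    explained: bool = False  # Section 5: told how the AI works / what it evaluates
    consented: bool = False  # Section 5: affirmative consent on record

def may_run_ai_analysis(applicant: Applicant) -> bool:
    """All three Section 5 steps must be complete before any AI evaluation."""
    return applicant.notified and applicant.explained and applicant.consented

# An applicant who was notified but never consented cannot be AI-evaluated.
a = Applicant(name="Jane Doe", notified=True, explained=True, consented=False)
print(may_run_ai_analysis(a))  # False
```

The point of the sketch: consent is a hard precondition, not a flag checked after the fact.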

2. Sharing Restrictions (Section 10)

Employers may only share applicant videos with “persons whose expertise or technology is necessary in order to evaluate an applicant’s fitness for a position.” That means your AI vendor can see it. Your marketing team cannot.

3. Deletion Rights (Section 15)

If an applicant requests it, employers must delete their video interview within 30 days — and instruct anyone who received copies to delete them too, including electronically generated backups.

This is stronger than it looks. “All electronically generated backup copies” means your cloud storage, your vendor’s copies, and any downstream backups. If your AI vendor retains training data from applicant videos, you’ve got a compliance problem.
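In practice this means a deletion request has to fan out to every party holding a copy. A rough sketch of that workflow, assuming you maintain an inventory of copy holders (the inventory contents and function name here are hypothetical):

```python
from datetime import date, timedelta

DELETION_DEADLINE_DAYS = 30  # Section 15: delete within 30 days of the request

def deletion_plan(request_date: date, copy_holders: list[str]) -> dict:
    """Compute the statutory deadline and fan the deletion instruction out to
    every party holding a copy: vendor, cloud storage, downstream backups."""
    deadline = request_date + timedelta(days=DELETION_DEADLINE_DAYS)
    return {
        "deadline": deadline,
        "instructions": [
            f"delete all copies and backups held by {holder}"
            for holder in copy_holders
        ],
    }

plan = deletion_plan(date(2026, 3, 1), ["AI vendor", "cloud storage", "backup archive"])
print(plan["deadline"])  # 2026-03-31
```

If any holder in that fan-out (your AI vendor, say) can't actually delete on instruction, that's the gap to fix contractually before the first request arrives.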

4. Demographic Reporting (Section 20)

Here’s the one most employers miss: if you rely solely on AI analysis to determine which applicants get in-person interviews, you must collect and report demographic data (race and ethnicity) to the Illinois Department of Commerce and Economic Opportunity (DCEO) annually by December 31.

The DCEO then analyzes this data and reports to the Governor and General Assembly on whether the data shows racial bias in AI hiring.
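The annual report is essentially an aggregation over applicant demographics. A toy version of that aggregation, assuming applicant records carry self-reported race and ethnicity fields (the field names and function name are illustrative, not a DCEO-specified format):

```python
from collections import Counter

def dceo_summary(applicants: list[dict]) -> dict:
    """Aggregate race and ethnicity counts across applicants screened
    solely by AI, for the annual Section 20 report."""
    race = Counter(a["race"] for a in applicants)
    ethnicity = Counter(a["ethnicity"] for a in applicants)
    return {"race": dict(race), "ethnicity": dict(ethnicity)}

sample = [
    {"race": "Black", "ethnicity": "Not Hispanic or Latino"},
    {"race": "White", "ethnicity": "Hispanic or Latino"},
    {"race": "Black", "ethnicity": "Not Hispanic or Latino"},
]
print(dceo_summary(sample)["race"])  # {'Black': 2, 'White': 1}
```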

| Requirement | What to Do | When |
| --- | --- | --- |
| Notification | Tell applicants AI will analyze their video | Before the interview |
| Explanation | Explain what the AI evaluates | Before the interview |
| Consent | Get affirmative consent | Before the interview |
| Sharing limits | Only share with evaluators | Ongoing |
| Deletion | Delete video + all copies within 30 days | Upon applicant request |
| Demographic reporting | Report race/ethnicity data to DCEO | Annually by Dec. 31 |

Where BIPA and AIVICA Collide

Here’s where it gets expensive.

Illinois’s Biometric Information Privacy Act (BIPA, 740 ILCS 14) regulates the collection and use of biometric identifiers — including facial geometry. BIPA carries statutory damages of $1,000 per negligent violation and $5,000 per willful or reckless violation, plus a private right of action.

AI video interview tools that analyze facial expressions, micro-expressions, or facial geometry to score candidates are collecting biometric identifiers under BIPA. And in February 2024, a federal court in the Northern District of Illinois confirmed exactly that.

Deyerler v. HireVue (N.D. Ill., Feb. 26, 2024)

In Deyerler v. HireVue, a class of plaintiffs alleged that HireVue’s AI-powered facial expression analysis technology collected biometric identifiers without complying with BIPA. HireVue argued two things:

  1. Facial scans aren’t “biometric identifiers” because they weren’t used to affirmatively identify individuals
  2. AIVICA preempts BIPA because it more specifically addresses AI video interviews

The court rejected both arguments. It found that BIPA’s definition of biometric identifiers explicitly includes “facial geometry,” and that AIVICA and BIPA impose different but concurrent obligations. Complying with AIVICA doesn’t satisfy BIPA, and vice versa.

The practical impact: if your AI video tool analyzes facial features in any way, you need to comply with both statutes simultaneously. That means:

  • AIVICA’s notification + consent + deletion requirements, and
  • BIPA’s written release requirements, biometric data retention policy, and prohibitions on selling or profiting from biometric data

The BIPA exposure is massive. In 2025, courts approved major BIPA class settlements including a $51.75 million settlement against Clearview AI and a $47.5 million settlement against a technology company for processing facial recognition data without consent.

HB 3773: Illinois’s Broader AI Employment Law (Effective Jan. 1, 2026)

AIVICA covers one specific use case — AI analysis of video interviews. But on August 9, 2024, Governor Pritzker signed HB 3773, making Illinois the second state (after Colorado) to pass broad legislation on AI in employment decisions.

Effective January 1, 2026, HB 3773 amends the Illinois Human Rights Act (IHRA) to prohibit employers from using AI that “has the effect of subjecting employees to discrimination on the basis of protected classes.” Key features:

  • Broad AI definition: Covers any “machine-based system that infers how to generate outputs such as predictions, content, recommendations, or decisions” — including generative AI
  • Wide scope: Recruitment, hiring, promotion, discharge, discipline, tenure, and terms of employment
  • Zip code prohibition: Can’t use zip codes as a proxy for protected classes
  • Notice requirement: Employers must notify employees when AI is used in employment decisions
  • IDHR rulemaking: The Illinois Department of Human Rights was directed to issue implementing rules

IDHR Draft Rules (December 2025)

In December 2025, the IDHR unveiled draft notice rules that significantly expand what employers must disclose when using AI. Under the draft rules, employer notifications must include:

  • The name of the AI product and its developer/vendor
  • Which employment decisions the AI influences
  • The purpose of the AI system and what data it collects
  • Job positions affected
  • A point of contact for questions
  • The right to request a reasonable accommodation

The notice obligation triggers whenever AI is used “to influence or facilitate” a covered employment decision — regardless of whether the AI actually causes discrimination. Using AI-powered resume screening, video interview analysis, or even chatbot recruiting? You need to disclose it.
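One practical way to operationalize the draft rules is to validate each notice against the required disclosure list before it goes out. This is a sketch under the assumption that the six items above are the complete set; the field keys and function name are hypothetical, not IDHR terminology:

```python
# The six disclosure items from the December 2025 IDHR draft rules,
# as keys a notice record would need to populate.
REQUIRED_NOTICE_FIELDS = [
    "ai_product_name_and_vendor",
    "decisions_influenced",
    "purpose_and_data_collected",
    "positions_affected",
    "point_of_contact",
    "accommodation_rights",
]

def missing_notice_fields(notice: dict) -> list[str]:
    """Return any required disclosure fields that are absent or empty."""
    return [f for f in REQUIRED_NOTICE_FIELDS if not notice.get(f)]

draft = {"ai_product_name_and_vendor": "ExampleScreen AI / Example Co."}
print(missing_notice_fields(draft))  # the five fields still missing
```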

The Real-World Stakes: HireVue and the ACLU

This isn’t theoretical. In March 2025, the ACLU of Colorado filed a complaint with the Colorado Civil Rights Division and the EEOC against Intuit and HireVue, alleging that HireVue’s AI-backed video interview platform discriminated against a deaf and Indigenous employee.

According to the complaint, the employee — an Intuit customer service representative applying for a promotion — requested human-generated captioning as an accommodation for her hearing disability. Intuit allegedly denied the request. After the AI-evaluated interview, the employee was rejected and received AI-generated feedback recommending she “practice active listening.” The ACLU alleged this constituted disability and race discrimination under the ADA, Title VII, and Colorado’s Anti-Discrimination Act.

HireVue denied the claims, stating that Intuit didn’t actually use an AI-backed assessment in this instance. The case remains under investigation by the EEOC and Colorado Civil Rights Division.

Whether or not this particular case succeeds, it illustrates the exact risk: AI video interview tools that analyze speech patterns, facial expressions, or communication styles can systematically disadvantage candidates with disabilities or those who speak different dialects.

The Federal Vacuum

Making all of this more complex: the federal government has largely exited the AI hiring guidance space.

In January 2025, the EEOC removed its AI employment guidance from eeoc.gov — including the “Artificial Intelligence and Algorithmic Fairness Initiative” page and all related technical assistance documents. As of March 2026, those pages are still down.

But the underlying law hasn’t changed. Title VII still prohibits disparate impact discrimination. The Uniform Guidelines on Employee Selection Procedures (UGESP) still apply to AI screening tools. And the EEOC’s own Strategic Enforcement Plan for FY 2024-2028 — still live on eeoc.gov — explicitly identifies “technology-related employment discrimination” as an enforcement priority.

The result: a patchwork where states are filling the gap. Illinois, Colorado, NYC (Local Law 144), and several others are writing detailed AI hiring rules while federal guidance returns a 404.

How Other States Are Copying Illinois’s Playbook

AIVICA was a first mover, and its influence is visible in the wave of state AI employment legislation that followed:

  • NYC Local Law 144 (effective July 2023): Requires annual bias audits and public notice for automated employment decision tools
  • Colorado SB 205 (effective February 2026): Broad AI consumer protection covering high-risk AI in employment, with impact assessment and disclosure requirements
  • Illinois HB 3773 (effective January 2026): Extended Illinois’s own framework beyond video interviews to all AI employment decisions
  • Pending bills in New York, New Jersey, Massachusetts, and others continue to draw on the consent-and-notice model Illinois pioneered

Compliance Checklist: What to Do Now

Whether you’re an employer, HR tech vendor, or compliance officer, here’s your action plan:

For Employers Using AI Video Interviews

  1. Audit your tools. Identify every AI-powered tool in your hiring pipeline — not just the obvious ones. Resume screeners, chatbot recruiters, and video analysis tools all count.

  2. Build a notification process. Create clear, plain-language disclosures that explain:

    • That AI will be used
    • How the AI works and what it evaluates
    • The applicant’s right to consent (or decline)
    • The applicant’s right to request deletion
  3. Get affirmative consent. Document it. A checkbox in the application flow works, but make sure it’s informed consent — not buried in page 47 of your terms.

  4. Map your BIPA exposure. If your video tool analyzes facial expressions or geometry, you need BIPA compliance in addition to AIVICA. That means a written biometric data policy, BIPA-specific consent, and data retention/destruction schedules.

  5. Implement deletion workflows. When an applicant requests deletion, you need to hit your AI vendor, your cloud storage, and any backup systems within 30 days.

  6. Set up demographic reporting. If you rely solely on AI to screen for in-person interviews, you must report race and ethnicity data to DCEO annually.

  7. Prepare for HB 3773 disclosures. Under the IDHR draft rules, you’ll need to disclose the AI product name, vendor, purpose, data collected, affected positions, and accommodation rights.

For HR Tech Vendors

  1. Build consent mechanisms into your product. Your employer clients need to collect AIVICA consent — make it easy for them.

  2. Support deletion requests. If an employer instructs you to delete applicant data, you need to comply — including backups.

  3. Document what your AI evaluates. Employers are required to explain your tool to applicants. Give them clear, accurate descriptions.

  4. Assess BIPA exposure. If your tool processes facial geometry, you’re in BIPA territory. Consider whether facial analysis is worth the legal risk.

So What?

Illinois’s AI hiring laws aren’t just about Illinois. They’re the template for where the rest of the country is heading.

AIVICA proved that AI-specific hiring regulation was workable. Deyerler v. HireVue proved that existing biometric privacy laws apply to AI video tools whether vendors like it or not. HB 3773 proved that states will keep expanding the scope. And the ACLU’s complaint against Intuit and HireVue proved that enforcement isn’t theoretical — it’s here.

If you’re using AI anywhere in your hiring pipeline and you haven’t mapped your obligations under AIVICA, BIPA, HB 3773, and the emerging state landscape — you’re already behind.

The good news: the compliance requirements are clear and manageable if you start now. Our AI Risk Assessment Template includes an AI use case inventory and risk assessment framework that covers employment AI — exactly the kind of documentation these laws require.


FAQ

Does the Illinois AI Video Interview Act apply to remote interviews with a live human?

No. AIVICA specifically applies when employers “ask applicants to record video interviews” and use “artificial intelligence analysis of the applicant-submitted videos.” Live video calls with a human interviewer (even if conducted remotely) are not covered. The law targets AI analysis of pre-recorded, asynchronous video submissions.

Can an applicant refuse consent? Do employers have to offer a non-AI alternative?

The statute prohibits employers from using AI to evaluate applicants “who have not consented.” However, AIVICA doesn’t explicitly require employers to offer a non-AI alternative interview path. This is a gray area — an employer could theoretically decline to consider an applicant who refuses consent, though doing so could raise discrimination concerns if it disproportionately screens out protected groups.

How do AIVICA and BIPA interact? Do I need to comply with both?

Yes. The court in Deyerler v. HireVue (N.D. Ill., Feb. 2024) ruled that AIVICA and BIPA impose “different but concurrent obligations.” If your AI video tool analyzes facial features or expressions, you need AIVICA consent and BIPA-compliant written releases, a publicly available biometric data policy, and data destruction schedules. Complying with one does not satisfy the other.

Rebecca Leung

Rebecca Leung has 8+ years of risk and compliance experience across first and second line roles at commercial banks, asset managers, and fintechs. Former management consultant advising financial institutions on risk strategy. Founder of RiskTemplates.
