State AI Laws Tracker 2026: Every US AI Regulation You Need to Know
TL;DR:
- As of March 2026, lawmakers in 45 states have introduced 1,561 AI-related bills — already surpassing the full-year 2024 total of 635. Colorado’s AI Act (effective June 30, 2026), Texas’s TRAIGA (effective January 1, 2026), and California’s ADMT regulations are the three to watch.
- There is no comprehensive federal AI law. States are filling the vacuum — fast. If your organization operates across state lines, you need a multi-state compliance strategy now.
- This tracker covers every enacted state AI law that affects your compliance program, organized by category: comprehensive AI laws, employment AI, biometric AI, and privacy laws with AI provisions.
The US has no federal AI law. Congress has debated hundreds of bills and passed exactly zero comprehensive ones. Meanwhile, state legislators have been on a tear — 1,561 AI-related bills introduced across 45 states as of March 2026, according to MultiState’s legislative tracker. That’s up from 1,208 bills in 2025 and just 635 in 2024.
For compliance teams, this creates a patchwork nightmare. Your hiring AI might be regulated under one set of rules in New York City, a completely different framework in Illinois, and a third standard in Colorado — all simultaneously. And that’s before California’s automated decision-making technology (ADMT) regulations layer on top.
This tracker breaks down every state AI law you need to know, organized by what it actually requires you to do. Bookmark it. You’ll be coming back.
The Big Three: Comprehensive State AI Laws
Three states have enacted — or are about to enforce — broad AI regulations that go beyond narrow use cases. These are the ones consuming the most compliance budget in 2026.
Colorado AI Act (SB 24-205)
| Detail | Info |
|---|---|
| Status | Enacted May 17, 2024 |
| Effective Date | June 30, 2026 (delayed from original Feb 1, 2026 via SB25B-004) |
| Enforcer | Attorney General only (no private right of action) |
| Penalties | Up to $20,000 per violation under Consumer Protection Act |
Colorado’s AI Act is the first comprehensive US state law regulating AI systems. It covers both developers (those who build AI) and deployers (those who use AI for consequential decisions), and it applies across employment, housing, credit, healthcare, education, insurance, government services, and legal services.
What developers must do:
- Exercise reasonable care to protect consumers from algorithmic discrimination
- Provide documentation (model cards, dataset cards) to deployers
- Publicly disclose summaries of high-risk AI systems offered
- Report known discrimination risks to the AG within 90 days
What deployers must do:
- Implement risk management policies governing AI deployment
- Complete annual impact assessments for each high-risk system
- Provide consumer disclosures before AI-assisted decisions
- Establish consumer appeal rights with human review
- Allow consumers to correct data used in AI decisions
The safe harbor: Organizations following NIST AI RMF or ISO 42001 get a rebuttable presumption of compliance. This is the strongest framework-based safe harbor in any US AI law — and it’s driving adoption of NIST AI RMF across industries.
The delay from February 1 to June 30, 2026 came after a special legislative session where lawmakers couldn’t reach a compromise on revisions. Colorado legislators have signaled further amendments during the 2026 regular session, so watch for changes before the June deadline.
For a deep dive, see our Colorado AI Act compliance guide.
Texas Responsible AI Governance Act (TRAIGA — HB 149)
| Detail | Info |
|---|---|
| Status | Enacted June 22, 2025 |
| Effective Date | January 1, 2026 |
| Enforcer | Texas Attorney General only |
| Penalties | Civil penalties for violations |
Texas became the third state to adopt a comprehensive AI law when Governor Abbott signed HB 149 on June 22, 2025. But don’t confuse this with Colorado’s approach. TRAIGA uses an intent-based liability framework rather than impact-based liability — a significant difference for compliance teams.
What’s prohibited:
- AI systems that intentionally manipulate behavior to incite self-harm, harm others, or encourage criminal activity
- AI systems developed with the sole intent of infringing constitutional rights
- AI systems deployed with intent to unlawfully discriminate against protected classes
- AI systems designed to create or distribute child sexual abuse material
The keyword throughout is intent. Unlike Colorado (which uses a “reasonable care” standard that could capture unintentional discrimination), Texas requires proof of deliberate misconduct. That gives businesses more legal certainty — but doesn’t mean you can ignore disparate impact. You still need documentation of legitimate business purposes and testing protocols to defend against enforcement.
Innovation-friendly provisions:
- Safe harbors for organizations that discover violations through internal testing or substantially comply with NIST AI RMF
- A first-in-the-nation 36-month regulatory sandbox administered by the Department of Information Resources
- Local preemption — Texas cities and counties can’t layer on additional AI regulations
TRAIGA also imposes specific requirements on government entities, including mandatory disclosure when consumers interact with an AI system and prohibitions on social scoring.
California ADMT Regulations (CCPA/CPRA)
| Detail | Info |
|---|---|
| Status | Finalized September 2025 |
| Effective Date | Staggered, beginning January 1, 2026 |
| Enforcer | California Privacy Protection Agency (CPPA) + AG |
| Penalties | $2,500 per violation; up to $7,500 per intentional violation |
California hasn’t passed a standalone AI law, but the CPPA finalized ADMT regulations in September 2025 that significantly expand AI requirements under the existing CCPA/CPRA framework. For practical purposes, these regulations function as California’s AI law.
Key requirements:
- Pre-use notice before ADMT is used in significant decisions (employment, credit, housing, insurance, healthcare, education)
- Consumer opt-out rights — individuals can request human review or alternative processes
- Access to logic — businesses must explain how automated decisions are made
- Risk assessments — required for ADMT used in significant decisions, with compliance beginning January 1, 2026
- Attestation — businesses must submit attestations and risk assessment summaries to the CPPA by April 1, 2028
The staggered compliance deadlines mean risk assessment requirements are already in effect. If you haven’t started, you’re behind.
Employment AI Laws
Employment is where state AI regulation started — and where enforcement has been most active.
NYC Local Law 144 (Automated Employment Decision Tools)
| Detail | Info |
|---|---|
| Status | Enacted; effective July 5, 2023 |
| Scope | AEDTs used in NYC hiring and promotion |
| Penalties | $500 first violation; $500–$1,500 per subsequent violation per day |
| Enforcer | NYC Department of Consumer and Worker Protection (DCWP) |
NYC’s Local Law 144 remains the most mature AI-in-hiring regulation in the US. It requires:
- Annual independent bias audits for disparate impact by race, ethnicity, and sex
- Public posting of audit summaries
- 10-day advance notice to candidates before AEDT use
The December 2025 New York State Comptroller audit found that DCWP had received only two AEDT complaints during the audit period and criticized the agency’s complaint-based enforcement approach as inadequate for detecting non-compliance. Expect a more aggressive enforcement phase in 2026 — DLA Piper has warned employers to anticipate increased investigations and daily penalties.
For our detailed coverage, see NYC Local Law 144 explained.
Illinois: BIPA + AI Video Interview Act
Illinois has been regulating AI longer than most states realize, through two separate laws:
Biometric Information Privacy Act (BIPA) — effective since 2008, BIPA requires written consent before collecting fingerprints, face geometry, iris scans, voiceprints, or hand geometry. What makes BIPA unique: it has a private right of action, meaning individuals can sue directly. Penalties run $1,000 per negligent violation and $5,000 per intentional violation.
The financial impact is staggering. Facebook (Meta) paid $650 million to settle BIPA claims over facial recognition in 2021 — the largest all-cash privacy settlement at the time. In 2025, Clearview AI settled for $51.75 million, and over 1,500 BIPA lawsuits have been filed in Illinois courts since the Rosenbach decision confirmed no actual harm is required to sue.
Artificial Intelligence Video Interview Act (AIVIA) — effective since January 1, 2020, AIVIA requires employers using AI to analyze video interviews to: (1) notify applicants that AI will be used, (2) explain how the AI works and what it evaluates, (3) obtain applicant consent before the interview, (4) limit who can view the video, and (5) delete videos upon applicant request.
Illinois continues expanding employment AI regulation: amendments to the Illinois Human Rights Act (HB 3773) add disclosure and nondiscrimination requirements for employers using AI in employment decisions, effective January 1, 2026.
Maryland: Facial Recognition in Hiring
Maryland’s HB 1202 prohibits employers from using facial recognition during job interviews unless the applicant consents via a signed waiver. Along with Illinois (AIVIA consent) and NYC (Local Law 144 notice), Maryland is one of only three jurisdictions imposing specific obligations before AI can be used in parts of the hiring process.
Biometric AI Laws
Beyond Illinois BIPA, several states regulate biometric data collection that directly affects AI deployments:
| State | Law | Private Right of Action | Key Requirements |
|---|---|---|---|
| Illinois | BIPA (2008) | ✅ Yes | Most stringent; written consent required; $1,000–$5,000/violation |
| Texas | CUBI (2009) | ❌ AG only | Notice and consent; up to $25,000/violation |
| Washington | HB 1493 | ❌ AG only | Notice required; enrollment consent |
| Arkansas | PIPA | ❌ AG only | Notice and consent requirements |
| Maryland | SB 169 | ❌ AG only | Facial recognition restrictions in employment |
Texas’s CUBI was recently amended by the Responsible AI Governance Act (HB 149) to create a limited exemption for AI development, effective January 1, 2026.
Privacy Laws With AI/Profiling Provisions
As of 2026, 19 states have comprehensive consumer privacy laws in effect — and nearly all of them include provisions affecting AI through profiling opt-out rights and data protection assessment requirements. The table below covers the laws most relevant to AI teams:
| State | Law | Effective | Profiling Opt-Out | Data Protection Assessments |
|---|---|---|---|---|
| California | CCPA/CPRA | Jan 1, 2023 | ✅ | ✅ |
| Virginia | VCDPA | Jan 1, 2023 | ✅ | ✅ |
| Connecticut | CTDPA | Jul 1, 2023 | ✅ | ✅ |
| Colorado | CPA | Jul 1, 2023 | ✅ | ✅ |
| Utah | UCPA | Dec 31, 2023 | Limited | Limited |
| Oregon | OCPA | Jul 1, 2024 | ✅ | ✅ |
| Montana | MCDPA | Oct 1, 2024 | ✅ | ✅ |
| Texas | TDPSA | Jul 1, 2024 | ✅ | ✅ |
| Delaware | DPDPA | Jan 1, 2025 | ✅ | ✅ |
| New Hampshire | NHDPA | Jan 1, 2025 | ✅ | ✅ |
| Nebraska | NDPA | Jan 1, 2025 | ✅ | ✅ |
| New Jersey | NJDPA | Jan 15, 2025 | ✅ | ✅ |
| Minnesota | MCDPA | Jul 31, 2025 | ✅ | ✅ |
| Maryland | MODPA | Oct 1, 2025 | ✅ | ✅ |
| Indiana | ICDPA | Jan 1, 2026 | ✅ | ✅ |
| Kentucky | KCDPA | Jan 1, 2026 | ✅ | ✅ |
| Rhode Island | RIDPA | Jan 1, 2026 | ✅ | ✅ |
Why this matters for AI teams: If your AI system profiles consumers — and most do — you likely need to honor opt-out requests and conduct data protection assessments in every state where you have users. Minnesota stands out for introducing an explicit right to contest automated decision-making, requiring businesses to explain profiling results and allow consumers to challenge them.
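The opt-out and assessment duties above can be sketched as a simple per-state lookup. A minimal sketch, assuming a small illustrative subset of the table — the function and field names are hypothetical, and this is an engineering illustration, not a legal determination:

```python
from dataclasses import dataclass

@dataclass
class StatePrivacyLaw:
    state: str
    law: str
    profiling_opt_out: bool
    assessments_required: bool

# Illustrative subset mirroring the table above
LAWS = [
    StatePrivacyLaw("CA", "CCPA/CPRA", True, True),
    StatePrivacyLaw("UT", "UCPA", False, False),  # Utah's rights are limited
    StatePrivacyLaw("MN", "MCDPA", True, True),
]

def profiling_obligations(user_state: str) -> list[str]:
    """Return the duties triggered by profiling a consumer in this state."""
    duties = []
    for law in LAWS:
        if law.state != user_state:
            continue
        if law.profiling_opt_out:
            duties.append(f"honor profiling opt-out ({law.law})")
        if law.assessments_required:
            duties.append(f"complete data protection assessment ({law.law})")
    return duties

print(profiling_obligations("CA"))
```

The point of encoding this as data rather than scattered if-statements: when a new state law takes effect, compliance updates one table instead of auditing application logic.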
Emerging 2026 Legislation to Watch
The 2026 legislative session is still underway, and several categories of AI bills are advancing rapidly:
AI Chatbot Safety
Washington passed two AI bills in March 2026, including HB 2225 regulating AI companion chatbots. Oregon passed a similar bill. As of March 2026, 78 chatbot bills were alive in 27 states, according to the Transparency Coalition.
Deepfake Regulation
Deepfake laws were the most widely enacted AI law category in 2024, passing in 22 states. The 2026 session is expanding protections beyond intimate images — Washington’s HB 1205 covers all forged digital likenesses.
Healthcare AI
Vermont advanced HB 814 to regulate mental health chatbot use and require notice when generative AI is used for patient communications. Multiple states are considering similar measures.
Government AI Accountability
Several states — including California, Vermont, and Washington — have enacted or proposed restrictions on government AI use, requiring human oversight and accountability frameworks.
Federal Context: Why States Are Leading
Several federal laws already apply to AI deployment, even without a comprehensive AI statute:
- Title VII (Civil Rights Act): Prohibits employment discrimination, including via AI
- ECOA / Fair Housing Act: Prohibits lending and housing discrimination via AI
- FCRA: Governs AI used in credit and background checks
- ADA: AI accessibility requirements
- HIPAA: AI processing protected health information
Federal agencies have issued guidance within their domains — the EEOC on AI in employment (May 2023), the FTC on deceptive AI practices, the CFPB on AI in consumer financial services — but none of this adds up to the comprehensive framework that states are building.
The NIST AI Risk Management Framework has emerged as the de facto compliance standard, particularly since Colorado’s safe harbor provision. Even without a federal mandate, implementing NIST AI RMF is the closest thing to a universal compliance strategy.
State AI Law Comparison Matrix
| Requirement | Colorado | Texas | California | NYC | Illinois |
|---|---|---|---|---|---|
| Comprehensive AI Law | ✅ Enacted | ✅ Enacted | Partial (ADMT regs) | ❌ (city only) | Partial |
| Employment AI | ✅ | ✅ | ✅ (CPRA) | ✅ (LL 144) | ✅ (AIVIA) |
| Impact Assessments | ✅ Required | Via testing docs | ✅ (CPRA) | ✅ (LL 144 audit) | ❌ |
| Consumer Opt-Out | ✅ | ✅ (govt entities) | ✅ | ❌ | Limited |
| Private Right of Action | ❌ | ❌ | Limited | ❌ | ✅ (BIPA) |
| Safe Harbor (NIST/ISO) | ✅ | ✅ | ❌ | ❌ | ❌ |
| Liability Standard | Reasonable care | Intent-based | Risk assessment | Bias audit | Consent-based |
Multi-State Compliance Strategy
If you’re operating across state lines — and in 2026, who isn’t? — here’s the practical approach:
1. Adopt NIST AI RMF as your baseline. It’s referenced in Colorado’s safe harbor, Texas’s safe harbor provisions, and emerging as the standard regulators look for. One framework, multiple compliance boxes checked.
2. Inventory every AI system. You can’t comply with laws you don’t know apply. Map your AI systems against the categories above — employment, consumer-facing, biometric, automated decision-making.
3. Build to the highest common denominator. Colorado’s “reasonable care” standard is stricter than Texas’s intent-based framework. California’s ADMT regulations require the most detailed consumer disclosures. Build once to the strictest standard rather than maintaining separate compliance programs.
4. Conduct impact assessments now. Colorado requires them by June 30, 2026. California’s risk assessment requirements are already in effect. NYC requires annual bias audits. Starting these assessments is the single highest-ROI compliance activity for 2026.
5. Document everything. Even under Texas’s intent-based standard, the absence of documentation makes it difficult to refute allegations. Maintain records of AI system purposes, design decisions, testing protocols, and intended use cases.
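Step 2 — the AI system inventory — can be sketched as a tagging exercise: label each system with the attributes that trigger the law categories in this tracker. System names and trigger rules below are illustrative assumptions, not a legal determination:

```python
# Minimal inventory sketch: each system is tagged with the attributes
# that trigger the law categories covered in this tracker.
SYSTEMS = [
    {"name": "resume-screener", "hiring": True, "biometric": False,
     "consequential_decision": True, "states": {"NY", "IL", "CO"}},
    {"name": "face-login", "hiring": False, "biometric": True,
     "consequential_decision": False, "states": {"IL", "TX"}},
]

def applicable_categories(system: dict) -> set[str]:
    """Map one system's attributes to the state-law categories it triggers."""
    cats = set()
    if system["hiring"] and "NY" in system["states"]:
        cats.add("NYC LL 144 (bias audit + notice)")
    if system["hiring"] and "IL" in system["states"]:
        cats.add("IL AIVIA (consent for AI video interviews)")
    if system["biometric"] and "IL" in system["states"]:
        cats.add("IL BIPA (written consent, private right of action)")
    if system["biometric"] and "TX" in system["states"]:
        cats.add("TX CUBI (notice and consent)")
    if system["consequential_decision"] and "CO" in system["states"]:
        cats.add("CO AI Act (impact assessment, disclosures)")
    return cats

for s in SYSTEMS:
    print(s["name"], "->", sorted(applicable_categories(s)))
```

Even a spreadsheet version of this mapping gives counsel a concrete starting point — the failure mode it prevents is discovering, mid-enforcement, that a system was in scope for a law nobody had mapped it to.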
So What?
The pace isn’t slowing down. In 2024, states introduced 635 AI bills. In 2025, that number hit 1,208 with 145 enacted into law. In 2026, we’re already at 1,561 bills introduced, and most legislative sessions are still underway.
If you’re waiting for a federal AI law to bring order to this chaos, you’ll be waiting a long time. The compliance obligations are here now — and they’re different in every state.
The organizations that are getting ahead are the ones building a single, robust AI governance program (anchored to NIST AI RMF) that satisfies the strictest applicable requirements. The ones that are falling behind are the ones treating each state as a separate problem.
Need help building the impact assessments and risk management framework these laws require? Our Data Privacy Compliance Kit includes templates and guides for data protection assessments, consumer rights management, and privacy impact analysis that map to these state requirements.
FAQ
How many states have AI laws in 2026?
As of March 2026, 19 states have comprehensive consumer privacy laws with AI-related provisions (profiling opt-out, data protection assessments). Three states — Colorado, Texas, and Illinois (via BIPA and AIVIA) — have laws specifically targeting AI systems. NYC has the most mature AI hiring regulation (Local Law 144). Lawmakers in 45 states have introduced 1,561 AI-related bills during the 2026 session.
Which state AI law is the strictest?
Colorado’s AI Act (SB 24-205) imposes the broadest obligations, requiring impact assessments, consumer disclosures, appeal rights, and risk management policies for any high-risk AI system. Illinois BIPA is the most financially punishing due to its private right of action — Facebook’s $650 million settlement and Clearview AI’s $51.75 million settlement demonstrate the scale of liability.
Is there a federal AI law in the United States?
No. As of April 2026, Congress has not passed comprehensive AI legislation. Existing federal laws (Title VII, ECOA, FCRA, ADA) apply to AI use within their domains, and agencies like the EEOC, FTC, and CFPB have issued AI guidance. The NIST AI Risk Management Framework is a voluntary standard, not a mandate — but Colorado and Texas both reference it in their safe harbor provisions.
Rebecca Leung
Rebecca Leung has 8+ years of risk and compliance experience across first and second line roles at commercial banks, asset managers, and fintechs. Former management consultant advising financial institutions on risk strategy. Founder of RiskTemplates.