AI Regulation Compliance in 2026: What's Required and What's Coming
TL;DR
- The EU AI Act’s high-risk system requirements hit August 2, 2026 — and they apply to any AI touching European customers, even if your servers sit in Dallas.
- Three major U.S. state AI laws take effect in 2026: Illinois and Texas on January 1, with Colorado following June 30.
- The SEC has made AI a 2025 exam priority, already fined firms for “AI washing,” and is pushing for mandatory AI disclosures. Regulators are watching.
The Regulatory Ground Shifted While You Were Building Your AI Strategy
Here’s the uncomfortable truth about AI regulation in 2026: the “wait and see” window closed.
State legislatures introduced 1,561 AI-related bills across 45 states in just the first three months of 2026. That’s up from roughly 200 in all of 2023. The EU AI Act is actively enforcing its first set of rules. The SEC has already settled its first “AI washing” cases. And if you’re a financial services firm still treating AI governance as a future problem, your regulators disagree.
This is your current-state briefing. What’s already in force, what’s hitting in the next six months, and what you should be doing about it right now.
What’s Already in Effect: The Rules That Apply Today
EU AI Act: Prohibited Practices and AI Literacy (Since February 2, 2025)
The EU AI Act’s first enforcement phase took effect February 2, 2025, banning the highest-risk AI practices outright. That includes:
- Social scoring systems used by governments or private entities
- Real-time biometric identification in public spaces (with narrow law enforcement exceptions)
- Manipulative AI designed to exploit vulnerabilities of specific groups
- Emotion recognition systems in workplaces and educational settings
The same date triggered AI literacy obligations — organizations deploying AI must ensure their staff understand what they’re working with. That’s not a suggestion. It’s a legal requirement with teeth.
How big are the teeth? Article 99 of the EU AI Act sets fines for prohibited practice violations at up to €35 million or 7% of global annual turnover, whichever is higher. For context, 7% of JPMorgan Chase’s 2024 revenue would be roughly $14 billion. Nobody’s testing that ceiling yet, but the penalty structure signals intent.
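For a sense of how that ceiling scales, here is a minimal arithmetic sketch; the turnover figure is a placeholder, not any firm's actual number:

```python
# Sketch: Article 99 ceiling for prohibited-practice violations.
# The cap is the HIGHER of a fixed amount and a share of global annual turnover.

FIXED_CAP_EUR = 35_000_000   # EUR 35 million
TURNOVER_SHARE = 0.07        # 7% of global annual turnover

def article_99_ceiling(global_annual_turnover_eur: float) -> float:
    """Maximum fine exposure for a prohibited-practice violation."""
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * global_annual_turnover_eur)

# Hypothetical firm with EUR 50 billion in global annual turnover:
print(f"Ceiling: EUR {article_99_ceiling(50_000_000_000):,.0f}")
# Ceiling: EUR 3,500,000,000
```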
Illinois: AI Discrimination Ban in Employment (Since January 1, 2026)
Illinois House Bill 3773 amended the Illinois Human Rights Act to expressly prohibit employers from using AI that “has the effect of subjecting employees to discrimination on the basis of protected classes.” Effective January 1, 2026.
What makes this law distinctive: it targets disparate impact, not just intentional discrimination. If your AI hiring tool, promotion algorithm, or performance scoring system produces discriminatory outcomes — even unintentionally — you’re on the hook. The Illinois Department of Human Rights is implementing draft notice rules requiring employers to disclose AI use in employment decisions.
Texas: Responsible AI Governance Act (Since January 1, 2026)
The Texas Responsible AI Governance Act (TRAIGA) took effect January 1, 2026, creating a comprehensive regulatory framework for AI development and deployment. Key provisions:
- Government disclosure requirements — agencies must tell consumers they’re interacting with AI
- Healthcare AI disclosure — providers must inform patients when AI is used in their care
- Broad AI definition — goes beyond generative AI to cover a wide range of automated systems
- Texas AG enforcement — no private right of action, but civil penalties start at $10,000 per violation
- Local preemption — TRAIGA overrides any municipal AI regulations in Texas
SEC: AI Is Now an Examination Priority
The SEC’s 2025 examination priorities explicitly flagged artificial intelligence as a focus area. Examiners are looking at two things:
- Whether firms’ AI claims are accurate — the SEC settled its first “AI washing” cases in March 2024, charging Delphia (USA) Inc. and Global Predictions Inc. with making false statements about their AI capabilities. Penalties: $225,000 and $175,000 respectively. Small dollar amounts, big signal.
- Whether firms have adequate AI controls — examiners are reviewing training and security controls firms use to “identify and mitigate new risks associated with AI.” If your advisory firm is deploying AI without documented governance, expect questions at your next exam.
The SEC’s Investor Advisory Committee recommended AI-related disclosure guidelines in December 2025, including requiring issuers to adopt a definition of “Artificial Intelligence” and disclose material AI risks. Rulemaking hasn’t started, but the direction is clear.
What’s Coming in the Next Six Months
Colorado AI Act: June 30, 2026
Colorado’s AI Act (SB 24-205) was originally set for February 1, 2026. Governor Jared Polis signed SB 25B-004 on August 28, 2025, delaying it to June 30, 2026 after a special legislative session failed to reach a compromise.
Colorado’s law is significant because it targets high-risk AI systems specifically — AI used in consequential decisions about employment, lending, housing, insurance, education, and healthcare. Both developers and deployers must:
- Conduct impact assessments
- Implement risk management programs
- Monitor for algorithmic discrimination
- Provide consumer notification when AI influences significant decisions
The delay bought compliance teams five extra months. Those months are nearly gone.
EU AI Act: High-Risk AI System Rules — August 2, 2026
The big one. On August 2, 2026, the EU AI Act’s high-risk AI system requirements become enforceable for standalone AI systems. For financial services, this means AI used in:
- Credit scoring and creditworthiness assessments of natural persons
- Risk assessment and pricing in life and health insurance
Both fall under the Act’s Annex III high-risk categories. (Note that AI used solely to detect financial fraud is expressly excepted from the creditworthiness category.) Requirements include:
| Requirement | What It Means |
|---|---|
| Risk management system | Document and continuously manage AI risks throughout the system lifecycle |
| Data governance | Training data must be relevant, representative, and free from errors |
| Technical documentation | Full system documentation before market deployment |
| Record-keeping | Automatic logging of system operations |
| Transparency | Users must be informed they’re interacting with AI |
| Human oversight | Systems must allow human intervention and override capability |
| Accuracy and robustness | Appropriate levels verified through testing |
| Conformity assessment | Third-party or self-assessment before deployment |
This applies to any company serving EU customers — regardless of where the company is headquartered. A US-based lender using AI for loan approvals that serves European customers falls within scope.
For high-risk systems embedded in regulated products (like medical devices), the deadline extends to August 2, 2027. But standalone high-risk AI in financial services? August 2026. That’s five months away.
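To make the record-keeping and human-oversight rows concrete, here is a minimal sketch of automatic decision logging; the schema and the example credit-scoring fields are illustrative assumptions, not anything the Act prescribes:

```python
import json
import logging
from datetime import datetime, timezone

# Sketch: automatic logging of AI system operations (field names are assumptions;
# the EU AI Act requires logging capability but does not mandate a schema).

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_decision_log")

def log_decision(system_id: str, inputs: dict, output: dict,
                 model_version: str, human_reviewer: str | None = None) -> None:
    """Append one structured record per automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "inputs": inputs,                 # what the model saw
        "output": output,                 # what it decided
        "human_reviewer": human_reviewer, # populated if a person approved or overrode
    }
    logger.info(json.dumps(record))

# Example: a hypothetical credit-scoring call
log_decision(
    system_id="credit-scoring-v2",
    inputs={"applicant_id": "A-1042", "debt_to_income": 0.31},
    output={"decision": "approve", "score": 712},
    model_version="2026-03-15",
)
```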
GPAI Model Fines: August 2, 2026
General-purpose AI model obligations became applicable August 2, 2025, but enforcement fines under Article 101 don’t kick in until August 2, 2026. If you’re building on or deploying GPAI models (including large language models), that’s when non-compliance starts costing money.
The Federal Wildcard: Trump’s Preemption Play
On December 11, 2025, President Trump signed an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence,” building on his January 2025 EO “Removing Barriers to American Leadership in Artificial Intelligence” (EO 14179).
The December EO attempts to use federal authority to challenge state AI regulations that could be characterized as barriers to AI development. The strategy includes conditioning federal broadband funding on policy alignment and advancing federal preemption through litigation and agency action.
What this means for compliance teams: Don’t bet on preemption. Executive orders don’t override enacted state laws. Colorado, Illinois, and Texas laws are on the books. Until Congress passes comprehensive federal AI legislation (and there’s no bill with real momentum), you comply with the states where you operate. The EO creates uncertainty, not exemption.
What the Banking Regulators Are Saying
The federal banking regulators haven’t issued AI-specific rules, but they’ve made clear that existing frameworks apply. The GAO’s 2025 report on AI in financial services (GAO-25-107197) confirmed that regulators consider existing model risk management guidance and third-party risk guidance applicable to AI.
That means:
- OCC Bulletin 2011-12 (model risk management) applies to AI models the same way it applies to traditional models
- OCC Bulletin 2025-26 clarified that community banks should apply model risk management proportionally to their risk exposure and model complexity
- Third-party risk guidance applies when you’re using vendor AI or AI-as-a-service
- The GAO recommended Congress grant NCUA authority to examine technology service providers — a gap that leaves credit union AI oversight weaker than bank oversight
The message from regulators is consistent: there’s no AI exemption. The CFPB’s Advanced Technology page states it plainly: they’re monitoring AI to “ensure it does not violate the rights of consumers.” Same laws, new technology, same expectations.
What Compliance Teams Should Do Right Now
Here’s your priority list, starting with the highest-impact items:
1. Map Your AI Inventory (Week 1-2)
You can’t comply with laws you don’t understand, and you can’t understand your exposure without knowing what AI you’re running. Build a complete inventory (a minimal record sketch follows the list):
- Every AI system in production (including vendor-provided tools)
- What decisions each system influences
- Which jurisdictions’ customers it touches
- Whether it qualifies as “high-risk” under EU AI Act or Colorado’s definitions
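Here is a minimal sketch of what one inventory record could look like, assuming a simple Python representation; the field names are illustrative, not drawn from any statute:

```python
from dataclasses import dataclass

# Sketch: one row in an AI system inventory (field names are illustrative).

@dataclass
class AISystemRecord:
    name: str                         # e.g., "credit-scoring-v2"
    owner: str                        # accountable business owner
    vendor_provided: bool             # built in-house vs. bought
    decisions_influenced: list[str]   # e.g., ["lending", "employment"]
    jurisdictions: list[str]          # e.g., ["US-IL", "US-TX", "EU"]
    high_risk_eu: bool = False        # provisional EU AI Act classification
    high_risk_colorado: bool = False  # provisional Colorado SB 24-205 classification
    notes: str = ""

inventory = [
    AISystemRecord(
        name="credit-scoring-v2",
        owner="Consumer Lending",
        vendor_provided=False,
        decisions_influenced=["lending"],
        jurisdictions=["US-IL", "EU"],
        high_risk_eu=True,
        high_risk_colorado=True,
    ),
]
```

Even a spreadsheet with these columns beats having no inventory; the point is a single source of truth you can map regulations against.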
2. Run a Regulatory Applicability Assessment (Week 2-3)
Cross-reference your AI inventory against the following (a rough screening sketch follows the list):
- EU AI Act high-risk categories (if you serve European customers)
- Colorado SB 24-205 high-risk definitions (employment, lending, housing, insurance)
- Illinois HB 3773 (any AI in employment decisions)
- Texas TRAIGA (broad definition — check if your systems qualify)
- SEC exam priorities (if you’re a registered adviser or broker-dealer)
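A rough first-pass screen could then walk that inventory and flag which regimes may apply. The mapping rules below are deliberately coarse assumptions, not legal analysis:

```python
# Sketch: coarse first-pass screen of which regimes may apply to one system.
# Rules are simplified assumptions; actual applicability turns on statutory definitions.

def applicable_regimes(system: dict) -> list[str]:
    regimes = []
    if "EU" in system["jurisdictions"] and system.get("high_risk_eu"):
        regimes.append("EU AI Act high-risk rules (August 2, 2026)")
    if "US-CO" in system["jurisdictions"] and system.get("high_risk_colorado"):
        regimes.append("Colorado SB 24-205 (June 30, 2026)")
    if "US-IL" in system["jurisdictions"] and "employment" in system["decisions_influenced"]:
        regimes.append("Illinois HB 3773 (in effect)")
    if "US-TX" in system["jurisdictions"]:
        regimes.append("Texas TRAIGA (in effect; check the broad definition)")
    if system.get("registered_adviser"):
        regimes.append("SEC exam focus")
    return regimes

print(applicable_regimes({
    "jurisdictions": ["US-IL", "EU"],
    "decisions_influenced": ["employment"],
    "high_risk_eu": True,
}))
```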
3. Close Documentation Gaps (Week 3-6)
The EU AI Act’s high-risk requirements are documentation-heavy. Start building the following (a simple gap-tracking sketch follows the list):
- Risk management documentation for each high-risk system
- Data governance records (training data provenance, quality checks)
- Technical documentation (system architecture, design choices, testing results)
- Human oversight procedures (who can intervene, when, how)
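One simple way to track those gaps per system is a checklist like the sketch below; the artifact names are made up for illustration:

```python
# Sketch: per-system documentation checklist (artifact names are illustrative).

REQUIRED_ARTIFACTS = [
    "risk_management_plan",
    "data_governance_record",     # training data provenance, quality checks
    "technical_documentation",    # architecture, design choices, test results
    "human_oversight_procedure",  # who can intervene, when, how
]

def documentation_gaps(completed: set[str]) -> list[str]:
    """Return the artifacts still missing for one high-risk system."""
    return [a for a in REQUIRED_ARTIFACTS if a not in completed]

# Example: a system with only its technical documentation drafted
print(documentation_gaps({"technical_documentation"}))
# ['risk_management_plan', 'data_governance_record', 'human_oversight_procedure']
```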
4. Test for Bias — Now, Not Later (Ongoing)
Illinois and Colorado both target discriminatory AI outcomes. If you haven’t run disparate impact testing on your AI systems, you’re flying blind. Test across protected classes. Document results. Fix what you find. Document the fixes.
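One common first-pass screen is the adverse impact ratio (the four-fifths rule of thumb from U.S. employment guidance). This sketch uses made-up numbers and is not a substitute for full statistical and legal review:

```python
# Sketch: adverse impact ratio (four-fifths rule) as a first-pass bias screen.
# Numbers are made up; a full analysis needs statistical testing and legal review.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

def adverse_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the most-favored group's rate."""
    return group_rate / reference_rate

group_a = selection_rate(selected=48, applicants=100)  # 0.48
group_b = selection_rate(selected=30, applicants=100)  # 0.30

ratio = adverse_impact_ratio(group_b, group_a)
print(f"Adverse impact ratio: {ratio:.2f}")            # 0.62
if ratio < 0.80:
    print("Below the four-fifths threshold: investigate, document, remediate.")
```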
5. Update Disclosures and Notices (Before June 30, 2026)
Multiple laws now require telling people when AI is involved in decisions about them. Review your customer-facing communications, employment processes, and healthcare interactions. Build disclosure mechanisms before Colorado’s June 30 deadline.
The Regulatory Timeline at a Glance
| Date | What Happens |
|---|---|
| Already in effect | EU AI Act prohibited practices; Illinois AI employment discrimination ban; Texas TRAIGA; SEC AI exam focus |
| June 30, 2026 | Colorado AI Act (SB 24-205) takes effect for high-risk AI systems |
| August 2, 2026 | EU AI Act high-risk AI system requirements enforceable; GPAI fine enforcement begins |
| August 2, 2027 | EU AI Act high-risk requirements for AI in regulated products |
So What?
The AI regulatory landscape in 2026 isn’t a future problem — it’s a present one. Two states have enforceable AI laws on the books today, and Colorado’s follows on June 30. The EU AI Act’s most impactful provisions hit in five months. Federal banking regulators are applying existing model risk guidance to AI without waiting for new rules. And the SEC is watching what firms claim about their AI capabilities.
The firms that treat this as a compliance exercise to check off will get caught flat-footed. The ones building real AI governance programs — model inventories, risk assessments, bias testing, documentation — will be ready regardless of which regulatory shoe drops next.
If you need a structured starting point, grab the AI Risk Assessment Template & Guide — it’s built for exactly this kind of regulatory environment.
Frequently Asked Questions
Does the EU AI Act apply to US companies?
Yes, if your AI systems process data about or make decisions affecting people in the EU. The EU AI Act has extraterritorial reach — similar to GDPR. A US-based bank using AI for credit decisions that serves European customers must comply with the high-risk AI system requirements by August 2, 2026.
Which US states have AI laws taking effect in 2026?
Illinois (HB 3773, effective January 1, 2026) bans AI-driven employment discrimination. Texas (TRAIGA, effective January 1, 2026) creates a broad AI governance framework. Colorado (SB 24-205, delayed to June 30, 2026) targets high-risk AI systems with impact assessment and consumer notification requirements. As of March 2026, 45 states have introduced 1,561 AI-related bills — more laws are coming.
Will federal AI regulation preempt state laws?
Not yet. President Trump’s December 2025 executive order signals intent to challenge state AI regulation, but executive orders cannot override enacted legislation. Until Congress passes a comprehensive federal AI law — and no bill has significant momentum — state laws remain enforceable. Compliance teams should plan for a multi-state compliance environment.
Rebecca Leung
Rebecca Leung has 8+ years of risk and compliance experience across first and second line roles at commercial banks, asset managers, and fintechs. Former management consultant advising financial institutions on risk strategy. Founder of RiskTemplates.