Regulatory Compliance

Colorado AI Act (SB 205) Compliance Guide: What Every Business Must Do Before June 2026

March 25, 2026 · Rebecca Leung

TL;DR:

  • Colorado SB 205 (the Colorado Artificial Intelligence Act) takes effect June 30, 2026 — the first comprehensive state AI law in the US
  • It covers both developers (who build AI) and deployers (who use AI for real decisions) of “high-risk AI systems”
  • Required actions: risk management policies, annual impact assessments, consumer notifications, and 90-day AG reporting for discovered discrimination

The countdown is on. As of June 30, 2026, Colorado becomes the first U.S. state to impose comprehensive, enforceable AI governance obligations on private-sector businesses. If your organization builds or uses AI to make decisions about people’s jobs, loans, housing, healthcare, or education — and those people include Colorado residents — you’re in scope.

This isn’t a “check the box and move on” law. Colorado SB 205, known as the Colorado Artificial Intelligence Act (CAIA), requires documented risk management programs, annual impact assessments, consumer notification processes, and reporting pipelines to the state Attorney General. And the AG has exclusive enforcement authority — violations are classified as unfair trade practices.

Here’s everything you need to know, and the concrete steps to get compliant before the deadline.


What Is Colorado SB 205?

Governor Jared Polis signed Senate Bill 24-205 on May 17, 2024, making Colorado the first U.S. state to enact a comprehensive private-sector AI governance law. The law adopts a risk-based framework modeled loosely on the EU AI Act — it doesn’t ban AI applications outright, but it imposes a “duty of reasonable care” on anyone developing or deploying AI systems that make consequential decisions affecting consumers.

The original effective date was February 1, 2026. After a special legislative session in August 2025, that was pushed to June 30, 2026. That extra window helped — but it didn’t change the law’s core requirements, and many organizations are still catching up.

The central obligation: Developers and deployers of high-risk AI systems must use reasonable care to protect consumers from algorithmic discrimination. That means documented processes, not just good intentions.


Who Is Covered? Developers vs. Deployers

SB 205 draws a clear line between two actors, and both are regulated:

  • Developer: Builds, trains, or substantially modifies an AI system. Examples: AI vendors, fintech companies, model providers, in-house ML teams that build their own models.
  • Deployer: Uses an AI system to make consequential decisions affecting Colorado consumers. Examples: banks, lenders, healthcare organizations, employers, insurers, landlords.

Most organizations reading this are deployers — you bought or licensed an AI tool and you’re using it to make real decisions. That still puts you squarely in scope. You can’t outsource accountability to your vendor.

Small Business Exemption (Narrow)

Organizations with fewer than 50 full-time employees are exempt — but only if they don’t train the AI system on their own data. If a small business fine-tunes or trains a model using proprietary data, the exemption disappears. This is intentional: anyone actively shaping model behavior with their own data remains accountable for downstream outcomes.
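To make the two-part exemption test concrete, here is a minimal sketch; the function name and inputs are illustrative, not statutory language:

```python
def small_business_exempt(full_time_employees: int,
                          trains_on_own_data: bool) -> bool:
    """Sketch of SB 205's narrow small-business exemption: exempt only
    if BOTH conditions hold -- fewer than 50 full-time employees AND
    no training or fine-tuning of the AI system on the business's
    own data."""
    return full_time_employees < 50 and not trains_on_own_data

# A 30-person lender that fine-tunes a vendor model on its own
# loan history loses the exemption:
assert small_business_exempt(30, trains_on_own_data=True) is False
```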

Key Exemptions

Not everything is covered. SB 205 explicitly exempts:

  • Financial institutions: Banks, out-of-state banks, credit unions (chartered by Colorado or federal), and their affiliates
  • HIPAA covered entities providing AI-generated healthcare recommendations that require a healthcare provider’s action
  • Insurers and fraternal benefit societies
  • Systems approved by federal agencies (FDA, FAA)
  • Systems used for research supporting federal certification/approval

For financial services teams: The financial institution exemption is meaningful — traditional banks and federally chartered credit unions are carved out. But if you’re a fintech, payments company, or non-bank lender, check carefully. The exemption is for “financial institutions” as a legal category, not everyone in the financial services ecosystem.


What Counts as a “High-Risk AI System”?

This is the threshold question. SB 205 defines a high-risk AI system as one that, when deployed, makes or is a substantial factor in making a “consequential decision.”

Two things must be true:

  1. The AI system is a substantial factor — it materially influences the outcome, not just a background input
  2. The decision is consequential — it has a material legal or similarly significant effect on a consumer

The Eight Consequential Decision Categories

  • Education: Enrollment decisions, scholarship awards, access to educational programs
  • Employment: Hiring, firing, promotions, scheduling, performance evaluations
  • Financial services: Loan approvals, credit limits, interest rates, access to banking
  • Government services: Public benefits, social services eligibility
  • Healthcare: Treatment decisions, diagnosis assistance, care access
  • Housing: Rental applications, mortgage approvals, tenancy decisions
  • Insurance: Coverage determinations, premium calculations, claims processing
  • Legal services: Access to legal representation

A decision is consequential if it affects the provision, denial, cost, or terms of any of the above. That last part matters: it’s not just binary access decisions — AI-driven pricing and terms adjustments are also in scope.

Systems excluded from “high-risk” status (unless repurposed for consequential decisions): anti-fraud tools (without facial recognition), spam filters, cybersecurity tools, spreadsheets, calculators, spell checkers, video games.
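If you're building an AI inventory (see the checklist below), a rough triage helper can encode the two-part test. This is a simplification for illustration: the category names are assumptions, and edge cases belong with counsel.

```python
# The eight consequential decision categories from SB 205
# (identifiers here are illustrative shorthand).
CONSEQUENTIAL_CATEGORIES = {
    "education", "employment", "financial_services",
    "government_services", "healthcare", "housing",
    "insurance", "legal_services",
}

def is_high_risk(decision_category: str, substantial_factor: bool) -> bool:
    """Rough encoding of SB 205's two-part high-risk test. Excluded
    tool types (spam filters, calculators, etc.) fail this test in
    ordinary use because they aren't a substantial factor in a
    consequential decision; repurposing one for such a decision
    brings it back in scope."""
    return substantial_factor and decision_category in CONSEQUENTIAL_CATEGORIES

# A resume screener that materially influences hiring decisions:
assert is_high_risk("employment", substantial_factor=True)
# A spam filter in ordinary use makes no consequential decision:
assert not is_high_risk("none", substantial_factor=False)
```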


Developer Obligations Under SB 205

If your organization builds or substantially modifies AI systems used by others:

1. Documentation Package for Deployers

Developers must provide deployers with comprehensive documentation, including:

  • General statements on foreseeable uses and risks of algorithmic discrimination
  • Training data summaries, including limitations
  • Purpose and output descriptions
  • Risk mitigation measures already implemented
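If you're a developer standardizing these handoffs, a simple structured record keeps the package consistent across systems. This is an illustrative sketch; SB 205 specifies the content, not the format, and the field names here are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class DeployerDocPackage:
    """Illustrative record of the documentation a developer hands to
    each deployer. Field names are assumptions, not statutory labels."""
    system_name: str
    foreseeable_uses_and_risks: str   # incl. algorithmic-discrimination risks
    training_data_summary: str        # data sources and known limitations
    purpose_and_outputs: str          # what the system decides and emits
    mitigations_implemented: list[str] = field(default_factory=list)
```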

2. Public Disclosure

Developers must maintain a publicly available summary of each high-risk AI system they offer — including how they assess and manage discrimination risks.

3. Attorney General Reporting

Within 90 days of discovering any known or reasonably foreseeable risk of algorithmic discrimination, developers must disclose this to the AG, affected deployers, and other relevant developers in the supply chain.


Deployer Obligations Under SB 205

This is where most compliance programs live or die. If you’re using AI to make consequential decisions about Colorado residents, here’s what you’re required to do:

1. Risk Management Policy

Implement a written policy covering the principles, processes, and personnel responsible for identifying, assessing, and mitigating discrimination risks from your AI systems. The law specifically recommends aligning with the NIST AI Risk Management Framework or ISO/IEC 42001 as reference frameworks.

This isn’t a paragraph in your employee handbook. Think: governance structure with named owners, a process for evaluating AI tools before deployment, ongoing monitoring procedures, and escalation paths when something goes wrong.

At most organizations, this sits with the Chief Risk Officer, Chief Compliance Officer, or a dedicated model risk management team. If you don’t have those roles, it defaults to Legal + whoever runs your technology decisions. Name it explicitly — the law expects accountability to have an owner.

2. Annual Impact Assessments

Before deploying any high-risk AI system, and then at least annually (and within 90 days of any substantial modification), deployers must complete a formal impact assessment covering:

  • Purpose and intended use: What decisions does this system make? In what context?
  • Foreseeable risks: What algorithmic discrimination risks exist? How will they be mitigated?
  • Data inputs and outputs: What categories of data does the system use? What does it output?
  • Transparency measures: Are consumers notified that AI is being used?
  • Post-deployment monitoring: How will you track performance, identify drift, and catch discrimination issues over time?

This is not a one-time exercise. It’s an annual governance artifact — and it needs to be audit-ready.
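One lightweight way to make assessments audit-ready is to capture them as structured records rather than free-form documents. The sketch below mirrors the five areas above; the field names and the 365-day refresh convention are assumptions, not a statutory template:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ImpactAssessment:
    """Illustrative impact-assessment record mirroring the five areas
    listed above. Structure is an assumption, not a statutory form."""
    system_name: str
    completed_on: date
    purpose_and_context: str
    foreseeable_risks_and_mitigations: str
    data_inputs_and_outputs: str
    transparency_measures: str
    monitoring_plan: str

    def refresh_due(self) -> date:
        # At least annually; a substantial modification also triggers
        # a fresh assessment within 90 days of the change.
        return self.completed_on + timedelta(days=365)
```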

3. Consumer Notification

When a high-risk AI system is used to make or substantially influence a consequential decision about a consumer, you must notify that consumer. The notification must include:

  • The system’s purpose and what data it uses
  • The consumer’s right to correct their data
  • The consumer’s right to appeal the decision

Think about where these notifications need to appear in your customer journey: credit application decisions, insurance underwriting, employment screening, housing eligibility.
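As a sketch of what the three required elements look like assembled into a single notice (the wording below is illustrative, not regulator-approved language):

```python
def consequential_decision_notice(system_purpose: str,
                                  data_categories: list[str]) -> str:
    """Builds a consumer notice carrying SB 205's required elements:
    system purpose and data used, right to correct, right to appeal."""
    return (
        f"This decision was made with the help of an automated system. "
        f"Purpose: {system_purpose}. "
        f"Data used: {', '.join(data_categories)}. "
        "You have the right to correct any inaccurate personal data we "
        "used, and the right to appeal this decision to a human reviewer."
    )

print(consequential_decision_notice(
    "evaluating rental applications",
    ["credit history", "income verification", "rental history"],
))
```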

4. Annual Reviews

Conduct annual reviews to confirm your AI systems are not producing algorithmic discrimination. Document the results. This feeds into the impact assessment cycle.
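One common quantitative check in reviews like this is the adverse impact ratio, the "four-fifths rule" borrowed from US employment-selection guidance. SB 205 doesn't mandate a specific metric, so treat this as one illustrative test among several, not the compliance standard:

```python
def adverse_impact_ratio(selected_group: int, total_group: int,
                         selected_reference: int, total_reference: int) -> float:
    """Selection rate of a protected group divided by the selection
    rate of the most-favored reference group. Values below ~0.8 are a
    conventional red flag worth investigating, not proof of
    discrimination on their own."""
    group_rate = selected_group / total_group
    reference_rate = selected_reference / total_reference
    return group_rate / reference_rate

# Loan approvals: 45 of 100 applicants in one group vs. 60 of 100 in
# the reference group -> ratio 0.75, below the 0.8 heuristic.
print(round(adverse_impact_ratio(45, 100, 60, 100), 2))
```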

5. Attorney General Reporting

Just like developers, deployers must report discovered algorithmic discrimination to the AG within 90 days. If your monitoring catches it, you’re on the clock.
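Deadline tracking is trivial to automate once discovery dates are logged. Calendar-day counting in the sketch below is an assumption; confirm the counting convention with counsel:

```python
from datetime import date, timedelta

def ag_report_deadline(discovery_date: date) -> date:
    """The 90-day reporting clock starts when algorithmic
    discrimination is discovered (calendar days assumed here)."""
    return discovery_date + timedelta(days=90)

print(ag_report_deadline(date(2026, 7, 15)))  # -> 2026-10-13
```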


The Affirmative Defense: How to Protect Yourself

Here’s the good news: SB 205 includes an affirmative defense that protects organizations that are actively managing their AI risk. You have a defense if you:

  1. Discover a violation through your own feedback mechanisms, testing, or internal review processes (not because a regulator found it first)
  2. Cure the violation promptly
  3. Can demonstrate compliance with a recognized AI risk management framework (NIST AI RMF or ISO/IEC 42001 are explicitly referenced)

Translation: the law rewards organizations that build real governance programs. Documentation, regular audits, and a formal testing cadence aren’t just compliance theater — they’re your legal protection.


Enforcement: What Happens If You Don’t Comply

The Colorado Attorney General has exclusive enforcement authority under SB 205. Violations are classified as unfair trade practices under Colorado consumer protection law.

The enforcement process:

  1. AG issues a notice of violation
  2. Organization has 60 days to cure the issue
  3. If the violation isn’t cured, enforcement actions follow

Penalties scale with scope of harm — the calculation is per-consumer, which means the size of your impacted population directly drives potential liability. A large-scale AI deployment affecting thousands of applicants for loans, jobs, or housing creates significantly higher exposure than a narrow-scope system.

Starting February 1, 2026, the AG can also require developers to disclose documentation within 90 days of a request. Even before full enforcement kicks in, the oversight teeth are there.


Your SB 205 Compliance Checklist

Here’s a practical roadmap by timeframe:

Now Through April 2026 — Inventory and Gap Assessment

  • Map all AI systems your organization uses or builds that touch Colorado residents
  • Classify each as high-risk or not under SB 205’s definitions
  • Identify your role for each system (developer, deployer, or both)
  • Confirm whether financial institution or other exemptions apply
  • Assess current documentation against SB 205 requirements

May 2026 — Policy and Process Build

  • Draft or update your AI Risk Management Policy (align to NIST AI RMF)
  • Complete initial impact assessments for all in-scope high-risk AI systems
  • Define consumer notification templates and deployment points
  • Build the 90-day AG reporting workflow and assign ownership
  • Establish ongoing monitoring protocols (bias testing, drift detection, performance reviews)
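For the drift-detection piece, one widely used metric is the Population Stability Index (PSI), which compares a model's score distribution at review time against its distribution at deployment. SB 205 doesn't prescribe a metric; the thresholds in the comment are industry rules of thumb, not regulatory lines:

```python
import math

def population_stability_index(expected: list[float],
                               actual: list[float]) -> float:
    """PSI across matching score buckets, a common drift metric.
    Rule-of-thumb thresholds (industry convention, not from SB 205):
    < 0.10 stable, 0.10-0.25 monitor, > 0.25 investigate."""
    psi = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # guard against log(0) on empty buckets
        a = max(a, 1e-6)
        psi += (a - e) * math.log(a / e)
    return psi

# Score distribution shifted between deployment and annual review:
print(round(population_stability_index(
    [0.25, 0.25, 0.25, 0.25],
    [0.30, 0.30, 0.20, 0.20]), 3))  # -> 0.041
```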

June 1–30, 2026 — Final Compliance Sprint

  • Legal review of all impact assessments and consumer-facing disclosures
  • Test notification delivery in production environments
  • Brief front-line staff on appeal rights and consumer inquiry handling
  • Finalize reporting escalation paths for discovered discrimination
  • Complete governance sign-off — your audit trail starts June 30

Ongoing After June 30, 2026

  • Annual impact assessment refresh for all in-scope systems
  • Within 90 days of any substantial AI system modification: update impact assessment
  • Annual bias and discrimination reviews with documented results
  • Monitor AG rulemaking — rules in six areas are being finalized and may clarify requirements
  • Maintain documentation for at least the current year + prior year

So What? The Bigger Picture

SB 205 is the opening shot. Colorado is setting the template that other states are watching. Texas, Virginia, and several other states have introduced or are developing similar legislation. The federal AI governance vacuum means state-level frameworks like Colorado’s are filling in — and they’re building a patchwork that multi-state businesses will have to navigate.

For financial services, healthcare, and HR tech teams: even if you benefit from the financial institution or HIPAA exemptions under Colorado’s law, the compliance posture SB 205 requires — documented risk management, regular impact assessments, consumer transparency, monitoring — is where every regulator is heading. Building the program now is a one-time investment that transfers across frameworks.

The organizations that will struggle in June 2026 are the ones who deployed AI fast without building governance. They have systems doing real work, and no paperwork to show for it. That’s the gap SB 205 is designed to expose.


FAQ

Does Colorado SB 205 apply to out-of-state businesses? Yes. If your AI system makes consequential decisions affecting Colorado residents — regardless of where your company is located — you’re in scope. It’s the location of the consumer, not the company, that triggers coverage.

What’s the difference between a developer and a deployer under SB 205? A developer builds or substantially modifies AI systems. A deployer uses them for real decisions. Many organizations are both — if you built an AI tool in-house and you’re also using it to make employment or lending decisions, you have obligations under both sets of requirements.

We use a third-party AI vendor. Are we still responsible? Yes. As the deployer, you’re responsible for the impact of AI decisions made using your vendor’s system. You can’t fully outsource compliance. You should: get documentation from your vendor (they’re required to provide it), conduct your own impact assessments, and include AI governance in your vendor due diligence process.


Need to build your AI Risk Management Program? The AI Risk Assessment Template & Guide gives you a ready-to-use framework for inventorying your AI systems, conducting impact assessments, and documenting your governance program — exactly what SB 205 requires.

Rebecca Leung

Rebecca Leung has 8+ years of risk and compliance experience across first and second line roles at commercial banks, asset managers, and fintechs. Former management consultant advising financial institutions on risk strategy. Founder of RiskTemplates.
