AI Impact Assessments: What They Are, Who Needs Them, and How to Conduct One

TL;DR:

  • Colorado SB 205 (effective June 30, 2026) and the EU AI Act (Article 27, effective August 2026) both require impact assessments before deploying high-risk AI systems.
  • An AI impact assessment evaluates who’s affected, what can go wrong, and what controls are in place — it’s broader than a privacy impact assessment (PIA/DPIA) and specifically targets algorithmic harm.
  • If you haven’t started building your assessment methodology, you’re already behind. Here’s a complete walkthrough with timelines, templates, and the regulatory specifics you need.

Three months from now, deploying a high-risk AI system in Colorado without a documented impact assessment will be a legal violation. Two months after that, the EU’s fundamental rights impact assessment requirement kicks in. Canada already mandates algorithmic impact assessments for federal agencies. And in May 2025, a federal judge in Mobley v. Workday preliminarily certified a nationwide collective action alleging that Workday’s AI hiring tools discriminated against applicants over 40 — a case that’s sharpening every company’s urgency to document how its AI systems make decisions.

The pattern is unmistakable: if your AI touches consequential decisions, you need a formal impact assessment. Not because it’s a best practice. Because it’s the law.

What Is an AI Impact Assessment?

An AI impact assessment is a structured evaluation of how an AI system affects the people subject to its decisions. It documents the system’s purpose, identifies who it impacts, analyzes risks of harm (especially algorithmic discrimination), and maps the controls in place to mitigate those risks.

Think of it as the AI-specific cousin of the Data Protection Impact Assessment (DPIA) required under GDPR — but broader. Where a DPIA focuses on personal data processing risks, an AI impact assessment covers the full range of potential harms: discrimination, fairness violations, safety risks, transparency gaps, and the knock-on effects of automated decision-making.

| Assessment Type | Scope | Trigger | Key Regulation |
| --- | --- | --- | --- |
| AI Impact Assessment | Algorithmic discrimination, fairness, rights impact, safety | High-risk AI deployment | Colorado SB 205, EU AI Act Art. 27 |
| DPIA (Data Protection Impact Assessment) | Personal data processing risks | High-risk data processing | GDPR Article 35 |
| Privacy Impact Assessment (PIA) | Broader privacy risks to individuals | New system/program with PII | US state privacy laws, NIST Privacy Framework |
| Algorithmic Impact Assessment (AIA) | Automated decision-making impacts | Government AI deployment | Canada’s Directive on Automated Decision-Making |

The EU AI Act explicitly acknowledges the overlap: Article 27(4) states that if you’ve already conducted a DPIA under GDPR Article 35, your fundamental rights impact assessment can build on that work rather than starting from scratch. But it can’t replace it — the AI-specific elements (bias analysis, fairness testing, human oversight design) go beyond what a DPIA covers.

Who Needs an AI Impact Assessment?

The short answer: anyone deploying AI for decisions that meaningfully affect people’s lives. The regulatory answer depends on your jurisdiction.

Colorado SB 205 (Effective June 30, 2026)

Colorado’s AI Act requires deployers of high-risk AI systems to complete an impact assessment before deployment and update it annually. A “high-risk AI system” is any system that makes or substantially influences a “consequential decision” — defined as a decision with material legal or similarly significant effect on:

| Consequential Decision Category | Examples |
| --- | --- |
| Employment | Hiring, promotions, terminations, compensation |
| Financial Services | Loan approvals, credit limits, interest rates |
| Education | Enrollment, scholarship eligibility, placements |
| Healthcare | Treatment recommendations, coverage decisions |
| Housing | Rental applications, tenant screening, lease terms |
| Insurance | Policy eligibility, premium pricing, claims |
| Government Services | Benefits eligibility, licensing, permits |
| Legal Services | Case assessment, resource allocation |

The assessment must include a statement of purpose and intended use cases, an analysis of whether the deployment poses risks of algorithmic discrimination, a description of the data inputs and outputs, steps taken to mitigate discrimination risks, and how the system is monitored post-deployment.

Colorado’s law also has teeth: the Attorney General has exclusive enforcement authority, and the law covers any company “doing business in Colorado” — meaning if your AI makes decisions affecting Colorado consumers, you’re in scope regardless of where you’re headquartered.

EU AI Act — Fundamental Rights Impact Assessment (FRIA)

Under Article 27 of the EU AI Act, deployers of high-risk AI systems in certain categories must conduct a fundamental rights impact assessment (FRIA) before putting the system into use. This applies when the deployer is:

  • A body governed by public law, or
  • A private entity providing public services, or
  • A deployer of AI systems for creditworthiness evaluation or risk assessment in life/health insurance

The FRIA must describe the deployer’s processes using the AI system, the period and frequency of use, the categories of persons and groups likely to be affected, the specific risks of harm to those groups, a description of human oversight measures, and the actions to be taken if risks materialize. The EU AI Office is developing a standardized questionnaire template to help deployers comply.

High-risk AI system obligations under the EU AI Act take effect August 2, 2026 — just weeks after Colorado’s deadline.

Canada’s Algorithmic Impact Assessment

Canada pioneered mandatory algorithmic impact assessments through its Directive on Automated Decision-Making, which applies to federal government agencies. The Treasury Board’s Algorithmic Impact Assessment tool uses a 65-question questionnaire to score systems on a four-level impact scale, with each level triggering progressively stricter requirements for peer review, monitoring, and human oversight.

While currently limited to government systems, Canada’s AIA is the most mature implementation globally and serves as a practical model for private-sector firms building their own assessment process.

Federal US Landscape

No comprehensive federal AI impact assessment law exists yet. However, the Algorithmic Accountability Act has been reintroduced across multiple sessions of Congress and would direct the FTC to establish mandatory impact assessment requirements for automated decision systems. Meanwhile, the CFPB, EEOC, DOJ, and FTC issued a joint statement on enforcement against discrimination and bias in automated systems, making clear that existing civil rights laws already apply to AI decisions — assessment or not.

How an AI Impact Assessment Differs from a Privacy Assessment

This is the question every GRC team asks first, and the answer matters for resourcing: an AI impact assessment is not a privacy assessment with a new name. The two overlap but serve different purposes.

A DPIA asks: “Does this data processing create risks to individuals’ privacy rights?”

An AI impact assessment asks: “Does this automated decision-making system create risks of harm — including but not limited to privacy — to the people subject to its outputs?”

The critical additions in an AI impact assessment:

  1. Fairness and bias analysis. Does the system produce disparate outcomes across protected classes? A DPIA doesn’t require statistical testing for demographic parity or equalized odds. An AI impact assessment does.

  2. Explainability evaluation. Can the system’s decisions be explained to affected individuals? The EU AI Act requires this for high-risk systems. DPIAs don’t address model explainability.

  3. Human oversight design. What human review mechanisms exist? Who can override the system? What triggers escalation? These are AI-specific governance questions.

  4. Performance degradation and drift. AI systems degrade over time. The assessment must address monitoring for model drift, concept drift, and performance decay — none of which appear in a traditional DPIA.

  5. Rights impact beyond privacy. AI can affect employment rights, creditworthiness, housing access, and insurance eligibility. An AI impact assessment evaluates these broader harms.

Practical implication: If you already run DPIAs, build your AI impact assessment as a companion process that extends (not replaces) the DPIA. Many of the data flow and data source questions overlap. The new work is in fairness testing, explainability documentation, and human oversight design.

How to Conduct an AI Impact Assessment: Step by Step

Here’s a methodology that satisfies Colorado SB 205, aligns with EU AI Act Article 27, and maps to the NIST AI Risk Management Framework GOVERN and MAP functions.

Step 1: Scope and Trigger (Days 1-5)

Determine whether an assessment is required. Not every AI system needs a full impact assessment. Apply these screening questions:

  • Does the system make or substantially influence a consequential decision (credit, employment, housing, insurance, healthcare, education, government services)?
  • Does it process personal data to profile individuals?
  • Is it deployed in a jurisdiction with mandatory assessment requirements?
  • Does the system use ML, deep learning, or generative AI techniques?

If yes to any of the first three, a full assessment is required. If only the fourth, a lighter-weight assessment is still recommended as a governance best practice.
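This screening logic can be captured in a small triage helper. A minimal sketch in Python; the function name, inputs, and tier labels are illustrative choices for this article, not drawn from any statute:

```python
def assessment_tier(consequential_decision: bool,
                    profiles_individuals: bool,
                    mandated_jurisdiction: bool,
                    uses_ml: bool) -> str:
    """Map the four screening questions above to a suggested tier."""
    # Yes to any of the first three questions -> full assessment required
    if consequential_decision or profiles_individuals or mandated_jurisdiction:
        return "full"
    # Only the fourth -> lighter-weight assessment as a best practice
    if uses_ml:
        return "lightweight"
    return "none"

print(assessment_tier(True, False, False, True))   # credit model: full
print(assessment_tier(False, False, False, True))  # internal ML tool: lightweight
```

Encoding the trigger criteria as code (or a decision table in your GRC tool) keeps screening consistent across business lines and makes the triage decision itself auditable.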

Identify the assessment owner. This is typically the business line deploying the AI system, NOT the AI/ML engineering team. At most mid-size banks, the assessment owner is the First Line risk lead or business unit head, with Second Line (risk management or compliance) providing oversight and challenge. At fintechs, this often falls to the Head of Compliance or VP of Product.

Assemble the assessment team:

| Role | Responsibility |
| --- | --- |
| Business Owner | Defines use case, intended outcomes, deployment context |
| AI/ML Engineering | Provides technical documentation, model details, training data info |
| Compliance/Legal | Maps regulatory requirements, reviews for legal sufficiency |
| Model Risk Management | Validates fairness testing methodology, reviews performance metrics |
| Data Privacy | Assesses data inputs, consent basis, PII handling |
| Affected Stakeholder Rep | Provides perspective on potential impact (may be internal proxy) |

Step 2: System Documentation (Days 5-15)

Document the AI system comprehensively. This forms the foundation of the assessment and is specifically required under both Colorado SB 205 and the EU AI Act.

Required documentation elements:

  • Purpose and intended use cases. What business problem does this system solve? What decisions does it inform or make? Be specific — “improves lending efficiency” isn’t sufficient. “Automates initial credit decisioning for consumer personal loans under $50K, approving or declining applications based on creditworthiness scoring” is.

  • System architecture. Model type (logistic regression, random forest, neural network, LLM, etc.), inputs, outputs, confidence thresholds, decision boundaries, and fallback logic.

  • Training data provenance. What data was used to train the model? Where did it come from? What time period does it cover? Were protected class attributes included (directly or as proxies)? Was the data representative of the population the system will be applied to?

  • Known limitations and failure modes. Every model has blind spots. Document them: edge cases where accuracy drops, population segments with lower performance, known biases in training data.

  • Performance metrics. Accuracy, precision, recall, F1 score, and — critically — these metrics broken down by protected class (age, race, gender, etc.).
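Breaking metrics down by group is mechanically simple. A pure-Python sketch for illustration (a production pipeline would typically use a tested metrics library such as scikit-learn rather than hand-rolled code; the data here is made up):

```python
def metrics_by_group(y_true, y_pred, groups):
    """Compute accuracy, precision, and recall separately per group.

    Inputs are parallel lists; labels are 0/1 (1 = approve/select).
    """
    out = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        t = [y_true[i] for i in idx]
        p = [y_pred[i] for i in idx]
        tp = sum(1 for a, b in zip(t, p) if a == 1 and b == 1)
        fp = sum(1 for a, b in zip(t, p) if a == 0 and b == 1)
        fn = sum(1 for a, b in zip(t, p) if a == 1 and b == 0)
        out[g] = {
            "accuracy": sum(1 for a, b in zip(t, p) if a == b) / len(idx),
            "precision": tp / (tp + fp) if tp + fp else None,
            "recall": tp / (tp + fn) if tp + fn else None,
        }
    return out

# Toy example: same overall accuracy can hide very different recall by group
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
groups = ["a", "a", "a", "b", "b", "b"]
print(metrics_by_group(y_true, y_pred, groups))
```

The point of the per-group breakdown is exactly what the toy example shows: aggregate numbers can look identical while one group's error profile is materially worse.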

Step 3: Stakeholder and Rights Analysis (Days 15-25)

This is where the AI impact assessment fundamentally diverges from a DPIA. You’re not just mapping data flows — you’re mapping who gets hurt and how.

Identify affected populations:

  • Who is directly subject to the system’s decisions?
  • Are any protected classes disproportionately represented in the affected population?
  • Are there vulnerable populations (elderly, non-English speakers, people with disabilities) who may be differently affected?
  • Who are indirect stakeholders (e.g., dependents of someone denied credit)?

Map potential harms:

| Harm Category | Examples | Relevant Law |
| --- | --- | --- |
| Discrimination | Higher denial rates for protected classes | ECOA, Fair Housing Act, Title VII, state civil rights laws |
| Autonomy | Decisions made without meaningful human review | EU AI Act Art. 14, Colorado SB 205 |
| Transparency | Inability to explain decisions to affected persons | ECOA adverse action requirements, EU AI Act Art. 13 |
| Economic | Unjustified higher pricing or worse terms | State UDAP laws, CFPB authority |
| Safety | Physical or psychological harm from incorrect decisions | Product liability, state tort law |
| Access | Exclusion from essential services | ADA, state accessibility laws |

The Massachusetts AG lesson: In July 2025, Massachusetts AG Andrea Joy Campbell settled with a student loan company whose AI underwriting model used a Cohort Default Rate — the average loan default rate associated with specific higher education institutions — as a scoring factor. The AG alleged this resulted in disparate impact against Black and Hispanic applicants under ECOA and state UDAP laws. The company had also used immigration status as a knockout factor until 2023, creating disparate impact based on national origin. The settlement required the company to stop using both factors and overhaul its fair lending testing. An impact assessment that properly mapped which populations were affected and tested for disparate impact would have caught both issues before they became enforcement actions.

Step 4: Bias and Fairness Testing (Days 20-30)

This is the technical core of the assessment. If you haven’t already built a bias testing methodology, now is the time.

Minimum testing requirements:

  1. Disparate impact analysis. Compare approval/denial rates (or score distributions) across protected classes. Apply the four-fifths rule as a screening tool: if the selection rate for any protected group is less than 80% of the rate for the highest-scoring group, investigate further.

  2. Outcome fairness metrics. Calculate demographic parity, equalized odds, and predictive parity across key demographics. No single metric is sufficient — different fairness definitions can conflict, and you need to document which definition you optimized for and why.

  3. Proxy variable analysis. Even if protected attributes aren’t direct inputs, correlated features (zip code, university attended, employment history) can serve as proxies. Test whether removing proxy variables changes outcome distributions.

  4. Intersectional analysis. Test not just individual protected classes but intersections (e.g., Black women over 50). Single-axis testing can mask compound discrimination.

  5. Temporal stability. Run the same tests on recent data vs. training data. If fairness metrics are degrading over time, that’s a drift signal requiring intervention.

Document all results — including unfavorable findings. Colorado SB 205’s “rebuttable presumption” defense (which provides a legal safe harbor for companies that demonstrate reasonable care) only works if your documentation is honest and complete.
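The four-fifths screening in item 1 above translates directly into code. A minimal sketch; the group names and rates are made up, and a flag means "investigate further", not a legal conclusion:

```python
def four_fifths_check(selection_rates):
    """Screen selection rates with the four-fifths rule.

    selection_rates: {group_name: selected / applicants}.
    Flags any group whose rate is below 80% of the highest group's rate.
    """
    benchmark = max(selection_rates.values())  # highest selection rate
    return {
        group: {
            "impact_ratio": rate / benchmark,
            "flag": rate / benchmark < 0.8,  # below 80% of the top rate
        }
        for group, rate in selection_rates.items()
    }

# Hypothetical approval rates: approvals / applications per group
result = four_fifths_check({"group_a": 0.60, "group_b": 0.42})
print(result["group_b"])  # impact ratio ~0.70 -> flagged for review
```

Run this per protected class and per intersection (item 4), and keep the raw ratios in the assessment file, not just the pass/fail flags.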

Step 5: Mitigation Planning (Days 25-35)

For every risk identified in Steps 3 and 4, document a specific mitigation:

| Risk Identified | Mitigation | Owner | Timeline | Verification |
| --- | --- | --- | --- | --- |
| Disparate impact in credit scoring for Hispanic applicants | Retrain model excluding zip code proxy; implement post-hoc fairness adjustment | ML Engineering + MRM | 60 days | Re-run disparate impact analysis post-fix |
| No human review for denials | Implement mandatory human review for all denials in bottom 10% of score distribution | Operations + Compliance | 30 days | Audit sample of 100 reviewed decisions monthly |
| Inability to explain individual decisions | Deploy SHAP-based local explanation module; create consumer-facing explanation template | ML Engineering + Legal | 90 days | User testing with 50 sample explanations |
| No ongoing monitoring for drift | Build automated drift detection dashboard with alerts at ±5% threshold | Data Engineering + MRM | 45 days | Monthly drift report review by Model Risk Committee |

Step 6: Review and Approval (Days 30-40)

The assessment isn’t complete until it’s been reviewed and challenged by independent parties:

  • Model Risk Management reviews fairness testing methodology and results
  • Legal/Compliance confirms regulatory requirement coverage
  • Senior management or AI governance committee approves deployment (with documented sign-off)
  • Internal Audit reviews the assessment process (not necessarily every assessment) on an annual basis

Colorado SB 205 note: The law requires the assessment to be provided to the Attorney General within 90 days of receiving a request. Structure your assessment so it can be produced quickly — don’t let it sit in someone’s personal drive.

Step 7: Ongoing Monitoring and Annual Updates (Continuous)

An impact assessment isn’t a one-time checkbox. Both Colorado SB 205 and the EU AI Act require ongoing monitoring and periodic reassessment.

Build these into your operational rhythm:

  • Quarterly: Review model performance metrics by protected class. Check for drift.
  • Annually (minimum): Full reassessment update. Colorado SB 205 explicitly requires annual updates.
  • Triggered reassessment: Any material change to the model, training data, deployment scope, or affected population should trigger a new assessment. A “material change” includes retraining on new data, expanding to new customer segments, changing decision thresholds, or adding new input features.
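The quarterly drift check can be automated against the metrics snapshot captured at assessment time. A minimal sketch; the metric names are made up, and the ±5% absolute threshold is an illustrative policy choice (it matches the alert level in the mitigation table above, but the right level depends on the system):

```python
def drift_alert(baseline, current, threshold=0.05):
    """Return metrics that moved more than `threshold` since baseline.

    baseline/current: {metric_name: value} snapshots, e.g. approval
    rates by protected class from the impact assessment vs. today.
    """
    alerts = []
    for name, base in baseline.items():
        delta = current.get(name, base) - base
        if abs(delta) > threshold:
            alerts.append((name, round(delta, 4)))
    return alerts

# Hypothetical approval rates by group, assessment-time vs. this quarter
baseline = {"approval_rate_group_a": 0.61, "approval_rate_group_b": 0.55}
current = {"approval_rate_group_a": 0.60, "approval_rate_group_b": 0.47}
print(drift_alert(baseline, current))  # group_b moved -0.08 -> alert
```

Wire the alert output into whatever triggers your reassessment workflow, so a breach opens a ticket rather than waiting for the annual review.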

30/60/90-Day Implementation Roadmap

If you’re starting from zero, here’s how to get compliant before the deadlines:

Days 1-30: Foundation

| Deliverable | Owner | Notes |
| --- | --- | --- |
| AI system inventory (which systems make consequential decisions?) | Business line leads + IT | Can’t assess what you can’t find — start with your model inventory |
| Impact assessment template and methodology document | Compliance + MRM | Align to NIST AI RMF MAP function and Colorado SB 205 requirements |
| Assessment trigger criteria (what requires a full assessment vs. screening) | Risk Management | Document in your AI governance policy |
| Identify highest-priority systems for assessment | CRO/CCO | Start with credit decisioning, hiring tools, customer-facing AI |

Days 31-60: First Assessments

| Deliverable | Owner | Notes |
| --- | --- | --- |
| Complete assessments for top 3-5 highest-risk AI systems | Assessment teams (per system) | Use the 7-step methodology above |
| Fairness testing infrastructure in place | ML Engineering + MRM | Automated testing pipeline, not manual spreadsheets |
| Human oversight mechanisms documented and operational | Operations + Business Lines | Kill switches, escalation paths, override procedures |
| Attorney General response package prepared (Colorado) | Legal + Compliance | Compile assessments so they’re producible within 90 days of request |

Days 61-90: Operationalize

| Deliverable | Owner | Notes |
| --- | --- | --- |
| Remaining system assessments completed | Assessment teams | Prioritize by risk tier |
| Ongoing monitoring dashboards live | Data Engineering + MRM | Drift detection, fairness metric tracking, alert thresholds |
| Annual reassessment calendar set | Compliance | Map each system to its next assessment date |
| Staff training completed | Compliance + HR | Assessment methodology, roles, escalation procedures |
| AI governance committee charter updated to include assessment oversight | Board/Senior Management | Formal governance integration |

So What?

The convergence is real: Colorado (June 2026), the EU (August 2026), and federal enforcement signals are all pointing the same direction. AI impact assessments are becoming table stakes for any company deploying AI in consequential decision-making.

The companies that get this right won’t just avoid fines — they’ll avoid the Mobley v. Workday scenario, where a federal court certified a nationwide collective action because no one bothered to document how the AI was making decisions or test whether those decisions were fair.

The companies that get it wrong? Look at what happened to that student loan company in Massachusetts: an AI model using school default rates as a scoring factor created disparate impact against Black and Hispanic borrowers. A proper impact assessment — one that mapped affected populations, tested for disparate impact, and flagged proxy variables — would have caught it before the Attorney General did.

Start with your highest-risk systems. Build the methodology once. Then scale it across your AI portfolio. Our AI Risk Assessment Template includes a pre-built impact assessment framework aligned to Colorado SB 205 and the NIST AI RMF that you can customize and deploy immediately.

FAQ

How often do I need to update an AI impact assessment?

Colorado SB 205 requires annual updates at minimum. The EU AI Act requires reassessment when there’s a material change to the system. Best practice: quarterly monitoring checks with a full annual reassessment, plus triggered reassessments for any significant model changes (retraining, new data sources, expanded deployment scope, or threshold changes).

Does an AI impact assessment replace a DPIA?

No — and a DPIA doesn’t replace an AI impact assessment either. The EU AI Act (Article 27(4)) explicitly says the fundamental rights impact assessment should complement any DPIA conducted under GDPR. The AI assessment adds fairness testing, explainability evaluation, and rights impact analysis that go beyond data privacy. Build them as companion processes that share data flow documentation but add AI-specific analysis layers.

What happens if I deploy a high-risk AI system without an impact assessment?

Under Colorado SB 205, the Attorney General can bring enforcement action. The law doesn’t specify penalty amounts (it relies on existing enforcement mechanisms), but violations of the duty of reasonable care open the door to AG action and potential injunctive relief. Under the EU AI Act, penalties for non-compliance with high-risk system requirements can reach €15 million or 3% of total worldwide annual turnover, whichever is higher. And even without specific AI assessment laws, deploying without one removes your best defense in any discrimination lawsuit — documented evidence that you tested for and mitigated bias before deployment.

Rebecca Leung

Rebecca Leung has 8+ years of risk and compliance experience across first and second line roles at commercial banks, asset managers, and fintechs. Former management consultant advising financial institutions on risk strategy. Founder of RiskTemplates.
