
AI Impact Assessment Guide & Template: A Practical Framework for 2026

March 30, 2026 Rebecca Leung

TL;DR

  • Regulators (CFPB, OCC, EU AI Act) are actively enforcing AI risk requirements — and “we didn’t assess it” is not a defense.
  • A solid AI impact assessment covers risk classification, data/model risk, fairness, explainability, and ongoing monitoring — not just a one-time checkbox.
  • This guide gives you the exact framework, risk tier table, and responsible-party assignments to run a credible assessment.

Your company just approved a new AI use case. Maybe it’s a credit underwriting model. Maybe it’s an AI chatbot handling customer complaints. Maybe it’s a fraud detection algorithm that flags accounts for review. Whatever it is, someone in the room said “great, let’s launch it” — and risk management is now staring at a blank Word doc.

That blank doc is exactly what this AI impact assessment guide and template is here to fix.

AI impact assessments aren’t theoretical compliance theater. The CFPB clarified in September 2023 that lenders using AI in credit decisions must still provide specific, principal reasons for adverse actions under ECOA — “the model said no” doesn’t cut it. The EU AI Act requires deployers of high-risk AI systems to conduct Fundamental Rights Impact Assessments (FRIAs) before deployment. And the Federal Reserve / OCC’s SR 11-7 has required rigorous model validation and risk management for bank models since 2011 — and regulators treat AI models the same way.

The common thread: you need to know what your AI does, to whom, and what can go wrong. That’s an impact assessment.


Why Most AI Assessments Fail (Before They Start)

The typical failure mode isn’t malice — it’s scope. Companies run “AI risk reviews” that are really just IT security questionnaires stapled to a vendor due diligence form. They ask: Does the vendor have SOC 2? Is the data encrypted? Who has access?

All valid questions. None of them are an impact assessment.

A real AI impact assessment asks:

  • What decisions does this system influence, and for whom?
  • What happens when it’s wrong — and how often will it be wrong?
  • Can you explain the output to the person affected?
  • Is there disparate impact on protected classes?
  • Who owns the model after launch, and what does monitoring look like?

Miss any of these and you’ve got a gap that regulators will find before you do.


Step 1: Risk-Tier Your AI Use Case First

Not every AI needs the same depth of assessment. A spell-checker built on AI and a credit scoring model built on AI are not the same animal. Start by classifying the use case.

| Risk Tier | Description | Examples | Assessment Depth |
| --- | --- | --- | --- |
| Tier 1 – Critical | Decisions with legal, financial, or safety consequences for individuals | Credit underwriting, fraud flagging, employment screening, benefits eligibility | Full impact assessment + independent validation |
| Tier 2 – High | Significant operational or reputational risk; consumer-facing with material influence | Customer service AI, collections prioritization, AML transaction monitoring | Full impact assessment |
| Tier 3 – Moderate | Internal operational tools; limited direct consumer impact | Internal chatbots, document summarization, employee productivity tools | Streamlined assessment + annual review |
| Tier 4 – Low | Narrow automation with no material decision-making | Grammar check, scheduling assistants, email autocomplete | Basic intake form; no formal assessment required |

Who does this: AI risk owner or model risk team, with input from the business unit. Document the tier decision and the rationale — you’ll need it if a regulator asks.

Timeline: Should happen in the intake phase, before any vendor is selected or any model is built.
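The tier logic above can be captured directly in the intake workflow. A minimal sketch, assuming a simplified three-question intake; the field names and thresholds are illustrative, not a substitute for documented human judgment:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    affects_individuals: bool  # legal, financial, or safety consequences for people
    consumer_facing: bool      # material influence on consumer outcomes
    internal_tool: bool        # internal operational use only

def classify_tier(uc: UseCase) -> int:
    """Map intake answers to the four-tier scheme above (illustrative only)."""
    if uc.affects_individuals:
        return 1  # Critical: full assessment + independent validation
    if uc.consumer_facing:
        return 2  # High: full impact assessment
    if uc.internal_tool:
        return 3  # Moderate: streamlined assessment + annual review
    return 4      # Low: basic intake form only

assert classify_tier(UseCase("credit underwriting", True, True, False)) == 1
assert classify_tier(UseCase("internal summarizer", False, False, True)) == 3
```

Even a rule this crude forces the business unit to answer the three questions that matter, and the recorded answers become the documented rationale a regulator will ask for.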


Step 2: Define Scope and Map Affected Populations

Once you’ve tiered the use case, document exactly what the AI does and who it affects. This sounds obvious. It almost never gets done properly.

Your scope document should answer:

  • Purpose: What specific decision or output does the model produce?
  • Inputs: What data does it consume? (Be specific — credit bureau data, transaction history, demographic proxies?)
  • Outputs: What does it generate? A score, a flag, a recommendation, a customer-facing message?
  • Decision authority: Is the AI making the final call, or is a human in the loop?
  • Affected population: Who are the people this touches? Size of the population? Any subgroups with protected characteristics?
  • Downstream consequences: What happens to someone who gets a “bad” output? Denied loan? Account frozen? Higher rate?

The EU AI Act’s Annex III defines specific categories of high-risk AI systems — biometric identification, critical infrastructure, education, employment, essential services, law enforcement, border management, administration of justice. If your use case lands in any of these, you’re in mandatory FRIA territory under the Act.


Step 3: Run the Core Risk Assessment

This is the substance of the assessment. Cover all five dimensions — skip one and your framework has a hole.

3A. Model Performance & Reliability Risk

| Assessment Area | Questions to Answer | Red Flags |
| --- | --- | --- |
| Accuracy | What’s the model’s error rate on the intended population? | No baseline accuracy metrics available |
| Data quality | How representative is the training data? How old is it? | Training data predates a major market event; no refresh cadence |
| Model drift | Is there a monitoring plan to detect performance degradation? | No post-deployment monitoring; model hasn’t been retrained in 2+ years |
| Robustness | How does the model perform on edge cases or adversarial inputs? | Only tested on “normal” distribution; no stress testing |

Under SR 11-7, banks must conduct conceptual soundness review, ongoing monitoring, and outcomes analysis for all models — AI included. If you’re a bank and your AI vendor can’t provide validation documentation, that’s a model risk management gap.
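Drift monitoring doesn’t have to be exotic. One widely used metric is the Population Stability Index (PSI), which compares the model’s baseline score distribution to the current one. A minimal sketch; the bin count and the common 0.25 alert threshold are conventions, and your monitoring plan should document whatever thresholds you actually adopt:

```python
import math

def psi(expected: list, actual: list, bins: int = 10) -> float:
    """Population Stability Index between a baseline and a current score
    distribution; PSI > 0.25 is a common 'significant shift' rule of thumb."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def fractions(values):
        counts = [0] * bins
        for v in values:
            counts[min(int((v - lo) / width), bins - 1)] += 1
        # floor each fraction at a tiny value so the log term stays defined
        return [max(c / len(values), 1e-4) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]          # uniform scores, 0.00–0.99
shifted  = [min(1.0, s + 0.3) for s in baseline]  # population shifted upward
assert psi(baseline, baseline) < 0.01             # stable: no alert
assert psi(baseline, shifted) > 0.25              # shifted: drift alert fires
```

Wire a check like this to the production scoring pipeline and the “no post-deployment monitoring” red flag in the table above disappears for the cost of a few lines of code and an alert route.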

3B. Fairness & Disparate Impact Risk

The CFPB, OCC, FDIC, NCUA, FHFA, and Fed issued a joint statement making clear that existing fair lending laws apply to AI. You need to test for disparate impact across protected classes, even if those variables aren’t inputs (proxy discrimination is real).

At minimum:

  • Run disparate impact analysis by race, sex, national origin, and age before deployment
  • Document the methodology and results
  • Set a threshold for acceptable disparity ratios (the 4/5ths rule from employment law is a common starting point)
  • Assign accountability for remediation if disparities are found
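The 4/5ths rule mentioned above reduces to simple arithmetic: compare each group’s favorable-outcome rate to the highest group’s rate and flag ratios below 0.8. A minimal sketch with hypothetical approval rates; the group labels and rates are invented for illustration:

```python
def adverse_impact_ratios(selection_rates: dict) -> dict:
    """Ratio of each group's favorable-outcome rate to the highest group's
    rate; under the 4/5ths rule, ratios below 0.8 warrant investigation."""
    best = max(selection_rates.values())
    return {group: rate / best for group, rate in selection_rates.items()}

# Hypothetical approval rates by group from pre-deployment testing
rates = {"group_a": 0.60, "group_b": 0.45}
ratios = adverse_impact_ratios(rates)
flagged = [g for g, r in ratios.items() if r < 0.8]

assert ratios["group_a"] == 1.0
assert flagged == ["group_b"]  # 0.45 / 0.60 = 0.75, below the 0.8 threshold
```

The calculation is the easy part; the assessment artifact is the documented methodology, the results, and the named owner for remediation when a group gets flagged.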

3C. Explainability Risk

The CFPB’s September 2023 circular is unambiguous: if an AI model makes a credit decision, lenders must provide the specific reasons — not generic “insufficient credit history” when the model actually flagged something else. This requires model explainability at the individual-decision level.

Ask your model vendor or data science team:

  • Can you generate specific, principal factors driving individual outputs?
  • Is the model interpretable by design (logistic regression, decision tree) or does it require post-hoc explanation (SHAP values, LIME)?
  • Do the explanations hold up under audit? Has anyone tested them?

If the answer to any of these is “we’re not sure,” you have an explainability risk that will become a regulatory problem.
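For a model that is interpretable by design, the “specific, principal factors” can be read directly off the per-feature contributions. A minimal sketch for a logistic-style scorer; the weights and feature names are invented for illustration, and a black-box model would need a post-hoc method like SHAP or LIME instead:

```python
import math

def principal_factors(weights: dict, x: dict, bias: float, top_n: int = 2):
    """For a logistic model, rank features by their weight * value
    contribution to this individual's score; the most negative
    contributions become the candidate adverse-action reasons."""
    contrib = {f: weights[f] * x[f] for f in weights}
    score = 1 / (1 + math.exp(-(bias + sum(contrib.values()))))
    ranked = sorted(contrib.items(), key=lambda kv: kv[1])  # most negative first
    return score, ranked[:top_n]

# Hypothetical credit-score weights (illustrative, not a real model)
w = {"utilization": -2.0, "delinquencies": -1.5, "income": 0.8}
applicant = {"utilization": 0.9, "delinquencies": 1.0, "income": 0.5}
score, reasons = principal_factors(w, applicant, bias=1.0)

assert [f for f, _ in reasons] == ["utilization", "delinquencies"]
```

The point of the sketch: the explanation is generated per decision, from this applicant’s inputs, not from a generic reason-code table — which is exactly what the CFPB circular demands.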

3D. Data Privacy & Security Risk

  • What PII or sensitive data does the model consume?
  • Is there a data retention limit? Can data be deleted on request?
  • How is the model protected against data extraction or model inversion attacks?
  • If a third-party vendor is involved, what’s their data use agreement? Are they using your data to train their models?

This one catches companies off-guard when they use off-the-shelf AI tools. “We didn’t know they were using our customer data for training” is not a data privacy defense.

3E. Third-Party & Concentration Risk

Most enterprise AI is vendor-provided. That means your risk assessment has to include the vendor:

  • What’s the vendor’s AI governance program?
  • Can you audit the model or get documentation of validation?
  • What’s the concentration risk if the vendor goes down, pivots, or gets acquired?
  • What are the SLAs, and what happens to your operations if the model is unavailable?

Step 4: Document Controls and Assign Owners

An assessment without control assignments is a report. Controls with named owners and dates are risk management.

| Risk Area | Key Controls | Owner | Review Frequency |
| --- | --- | --- | --- |
| Model performance | Monthly KPI monitoring dashboard; automated drift alerts | Model Risk Manager | Monthly |
| Fairness/disparate impact | Quarterly bias testing; pre-launch validation | Compliance / Fair Lending | Quarterly |
| Explainability | Decision log with SHAP factor outputs for adverse decisions | Product / Data Science | Per adverse decision |
| Data privacy | Data use agreement review; retention schedule enforcement | Privacy Officer | Annually + at vendor renewal |
| Human oversight | Escalation protocol for edge cases; override procedures documented | Business Unit Lead | Ongoing |
| Third-party | Annual vendor AI risk review; SLA monitoring | Vendor Risk / Procurement | Annually |

The NIST AI RMF 1.0 organizes AI risk management around four core functions: Govern, Map, Measure, Manage. Your controls map to the Measure and Manage functions. Don’t skip the Govern function — that’s where accountability structures live, and regulators look for it.


Step 5: Establish the Monitoring and Review Cadence

An AI impact assessment is not a one-and-done document. AI models degrade. Populations shift. Regulations change. Your assessment needs a built-in expiration date and a trigger for re-assessment.

Standard review triggers:

  • Annually (for all Tier 1 and Tier 2 use cases)
  • Material change to the model (algorithm change, data source change, significant retraining)
  • Material change to the use case (expanded population, new decision context)
  • Adverse event (regulatory inquiry, fair lending complaint, significant model failure)
  • Vendor change (new model version, M&A at vendor, contract renegotiation)

Governance checkpoint: The business unit should formally re-certify Tier 1 and Tier 2 assessments annually. This creates a paper trail showing ongoing oversight — which is exactly what regulators want to see.


The Minimum Viable AI Impact Assessment

If you need to start somewhere today, here’s the minimum you need for a credible Tier 2 or Tier 3 assessment:

  1. Use case description (what it does, who it affects, how it influences decisions)
  2. Risk tier classification with rationale
  3. Data inventory (inputs, sources, data classifications)
  4. Known limitations (accuracy, edge cases, known failure modes)
  5. Fairness testing results (even preliminary)
  6. Explainability approach (how you explain outputs to affected parties)
  7. Owner assignment (one named individual accountable for the model post-launch)
  8. Monitoring plan (what you’re measuring, how often, and who reviews it)
  9. Re-assessment trigger conditions
  10. Sign-off with date

That’s it. Ten fields. You can do this in a structured template without a 50-page policy framework — and it’ll hold up to a regulator asking “what did you review before you deployed this?”
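The ten fields above translate naturally into a structured record, which makes completeness checkable before sign-off. A minimal sketch; the field names and the example values are illustrative, and a real program would back this with a form or GRC system rather than code:

```python
from dataclasses import dataclass, asdict

@dataclass
class ImpactAssessment:
    """The ten minimum fields above as a structured record (illustrative)."""
    use_case_description: str   # 1. what it does, who it affects
    risk_tier: str              # 2. tier classification with rationale
    data_inventory: list        # 3. inputs, sources, data classifications
    known_limitations: list     # 4. accuracy, edge cases, failure modes
    fairness_results: str       # 5. even preliminary
    explainability_approach: str  # 6. how outputs are explained
    owner: str                  # 7. one named accountable individual
    monitoring_plan: str        # 8. metrics, cadence, reviewer
    reassessment_triggers: list # 9. conditions forcing re-review
    sign_off_date: str          # 10. sign-off with date

    def is_complete(self) -> bool:
        """Filing-ready only when every field is populated."""
        return all(bool(v) for v in asdict(self).values())

a = ImpactAssessment(
    use_case_description="Internal document summarizer for ops team",
    risk_tier="Tier 3 - internal tool, no direct consumer impact",
    data_inventory=["internal policy documents"],
    known_limitations=["hallucinated citations on long inputs"],
    fairness_results="N/A - no individual-level decisions",
    explainability_approach="source passages linked in every summary",
    owner="J. Doe, Ops Technology",               # hypothetical owner
    monitoring_plan="monthly spot-check of 25 summaries by ops lead",
    reassessment_triggers=["model version change", "new document types"],
    sign_off_date="",                             # not yet signed off
)
assert not a.is_complete()   # blocks launch until sign-off is dated
```

The `is_complete` gate is the whole trick: no launch approval until all ten fields hold something, which is precisely the paper trail an examiner asks for.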


So What?

Here’s the hard truth: AI risk is moving faster than most organizations’ governance programs. Regulators aren’t waiting. The CFPB is actively enforcing explainability. The EU AI Act’s high-risk AI provisions take effect in August 2026. The SEC has incorporated AI-washing into its examination priorities. And NIST released its Generative AI Profile (NIST AI 600-1) in July 2024, specifically because the original AI RMF didn’t fully address gen AI risks.

If your organization doesn’t have a structured AI impact assessment process right now, you’re one model failure or one regulatory exam away from a very uncomfortable conversation.

The good news: the framework isn’t that complicated. Risk-tier, scope, assess, control, monitor. That’s the whole thing. The discipline is in doing it every time, for every use case, before launch.


Need a head start? The AI Risk Assessment Template & Guide includes a pre-built impact assessment framework, risk tier classification guide, fairness testing checklist, and monitoring dashboard template — everything structured so you can run a credible assessment without starting from scratch.


Frequently Asked Questions

Q: Is an AI impact assessment required by law in the US?

There’s no single federal law requiring AI impact assessments in the US — yet. But several regulatory frameworks effectively require them. SR 11-7 requires banks to validate and manage model risk, which encompasses AI models. The CFPB’s ECOA adverse action guidance requires explainability for AI credit decisions. Executive Order 14110 (October 2023) directed federal agencies to develop AI safety standards, though it was rescinded in January 2025. And if you have EU operations or EU-resident customers, the EU AI Act’s FRIA requirement is binding for high-risk AI deployments. Practically speaking, if regulators examine your AI program and you can’t show evidence of pre-deployment assessment, you have a governance gap — whether or not there’s a specific statute.

Q: How is an AI impact assessment different from a standard model validation?

Model validation (as required under SR 11-7) focuses on technical soundness — is the model statistically valid, are the assumptions correct, does it perform as expected? An AI impact assessment is broader. It covers the model’s downstream effects on people: fairness, explainability, privacy, and what happens when the model gets it wrong. Think of model validation as the engineering review and the impact assessment as the risk management review. You need both for Tier 1 use cases.

Q: How long does an AI impact assessment take?

For a Tier 3 use case with an existing vendor model, a well-structured assessment takes 3–5 business days with the right template and stakeholder inputs. Tier 2 takes 2–4 weeks, including fairness testing and stakeholder review. Tier 1 (credit decisions, high-stakes consumer-facing AI) should take 4–8 weeks minimum, including independent model validation. If anyone tells you they can do a complete Tier 1 assessment in a week, they’re either skipping steps or lying.

Rebecca Leung


Rebecca Leung has 8+ years of risk and compliance experience across first and second line roles at commercial banks, asset managers, and fintechs. Former management consultant advising financial institutions on risk strategy. Founder of RiskTemplates.
