AI Model Risk Tiering: How to Classify AI Models by Risk Level

TL;DR:

  • Most MRM tiering methodologies were built for regression models — they fall apart when applied to LLMs, agentic AI, and deep learning systems.
  • AI-specific risk factors like autonomy level, explainability gap, data sensitivity, and velocity of change must be weighted alongside traditional materiality drivers.
  • A four-tier structure (Critical → High → Medium → Low) with defined oversight requirements at each level gives examiners what they need and gives your team a defensible framework.

Your Model Risk Tiering Methodology Is Probably Broken

Here’s a question most model risk management teams can’t answer cleanly: How do you tier a GPT-powered customer service chatbot versus a logistic regression credit scoring model?

Both are “models” under SR 11-7. Both need to be in your inventory. But they present fundamentally different risk profiles — and the tiering methodology you built in 2015 almost certainly doesn’t capture the difference.

The problem is real and growing. The GAO’s May 2025 report on AI in financial services (GAO-25-107197) found that “challenges in assessing the quality of AI inputs and outputs could heighten model risk,” specifically because AI models that learn continuously from live data can lead to “shifts in underlying data, variable relationships, or statistical characteristics, potentially leading to model underperformance or inaccurate outputs.” Traditional tiering methodologies — built around materiality and financial impact — simply don’t account for these AI-specific risk dimensions.

Meanwhile, the OCC issued Bulletin 2025-26 in October 2025, clarifying that model risk management should be “commensurate with the bank’s risk exposures, its business activities, and the complexity and extent of its model use.” That’s a green light for risk-based tiering — but it also means your tiering methodology needs to actually work for AI systems, not just check a box.

This guide walks through how to build an AI-ready model risk tiering framework from scratch, including the factors that matter, a four-tier structure with specific criteria, and the oversight requirements that should differ by tier.

Why Traditional Tiering Breaks Down for AI

Most existing tiering methodologies evaluate models on two or three dimensions:

  • Financial materiality — how much money is at risk if the model is wrong
  • Usage scope — how many decisions the model influences
  • Regulatory sensitivity — whether the model touches a regulated domain (capital, fair lending, BSA/AML)

These factors still matter. But they’re insufficient for AI systems because they miss several risk dimensions unique to machine learning and generative AI:

Explainability gap. A linear regression model is inherently interpretable — you can trace exactly how inputs map to outputs. A deep neural network or LLM is not. SR 11-7 requires “effective challenge” of models, and the OCC’s Comptroller’s Handbook on Model Risk Management expects documentation of model logic. When the model is a black box, the risk of undetected errors, bias, or drift increases significantly — and that should raise the tier.

Autonomy level. A model that scores loan applications for human review is fundamentally different from an agentic AI system that autonomously executes trades, responds to customer complaints, or modifies its own parameters. Higher autonomy means less opportunity for human intervention before harm occurs.

Velocity of change. Traditional models are updated quarterly or annually. Many AI models retrain continuously, ingest streaming data, or update weights in near-real-time. Faster change means more opportunities for drift, degradation, or unexpected behavior between validation cycles.

Data sensitivity. AI models — especially LLMs — often process or have been trained on data far beyond what traditional models touch. PII exposure, training data provenance, and the risk of memorization (where an LLM can reproduce sensitive training data verbatim) all factor into risk.

Third-party opacity. Vendor-supplied AI models, especially foundation models from providers like OpenAI, Anthropic, or Google, come with limited visibility into architecture, training data, and version changes. When the vendor can update the model underneath you without notice, your validation assumptions can break overnight.

The Six AI Risk Factors for Model Tiering

Build your tiering methodology around these six factors. Each should be scored on a defined scale, and the composite score determines the tier.

Factor 1: Decision Impact (Materiality)

This is your existing factor — keep it, but sharpen the definitions for AI use cases.

Score | Criteria
4 — Critical | Model directly determines credit decisions, pricing, capital calculations, or BSA/AML alert disposition
3 — High | Model substantially influences decisions on customer outcomes, risk limits, or regulatory reporting
2 — Moderate | Model supports operational decisions with indirect financial or customer impact
1 — Low | Model handles internal analytics, reporting dashboards, or information synthesis with no direct decision authority

Factor 2: Autonomy Level

Score | Criteria
4 — Fully autonomous | Model takes action without human review (automated trading, real-time fraud blocks, autonomous customer communications)
3 — Semi-autonomous | Model recommends actions that are implemented with minimal human review or auto-approved below thresholds
2 — Human-in-the-loop | Model generates outputs reviewed and approved by humans before action
1 — Advisory only | Model provides information or analysis that humans use as one input among many

Factor 3: Explainability Gap

Score | Criteria
4 — Opaque | No meaningful explainability possible (black-box deep learning, proprietary vendor model with no model card)
3 — Limited explainability | Post-hoc explanations available (SHAP, LIME) but model internals remain opaque
2 — Partially interpretable | Model structure is known, key features are identifiable, but some interactions are complex
1 — Fully interpretable | Model logic is transparent and directly traceable (linear/logistic regression, decision trees, rule-based systems)

Factor 4: Data Sensitivity

Score | Criteria
4 — Highly sensitive | Model processes or was trained on PII, NPI, protected-class data, or material non-public information
3 — Sensitive | Model uses customer-level data that could enable re-identification or includes protected-class proxies
2 — Internal | Model uses aggregated business data, market data, or internal operational metrics
1 — Public | Model uses only publicly available data with no customer or employee information

Factor 5: Velocity of Change

Score | Criteria
4 — Continuous | Model retrains or updates in real-time or near-real-time (online learning, streaming data ingestion)
3 — Frequent | Model is retrained monthly or more frequently, or vendor updates are pushed regularly
2 — Periodic | Model is retrained quarterly to annually on a fixed schedule
1 — Static | Model is fixed at deployment and only changes through formal redevelopment

Factor 6: Third-Party Dependency

Score | Criteria
4 — Full vendor black-box | Foundation model API with no visibility into architecture, training data, or versioning (e.g., GPT, Claude, Gemini via API)
3 — Vendor with limited transparency | Vendor-supplied model with some documentation but limited ability to validate internals
2 — Open-source or co-developed | Model is open-source or developed jointly with a vendor, with access to weights and architecture
1 — Fully in-house | Model developed entirely in-house with complete control over code, data, and infrastructure

The Four-Tier Structure

Sum the six factor scores (minimum 6, maximum 24) and map to tiers:

Tier | Score Range | Label | Oversight Intensity
Tier 1 | 20–24 | Critical | Maximum oversight — annual independent validation, quarterly performance review, board-level reporting, pre-deployment committee approval
Tier 2 | 15–19 | High | Enhanced oversight — annual validation (can be targeted scope), semi-annual performance review, senior management reporting
Tier 3 | 10–14 | Medium | Standard oversight — validation every 18–24 months, annual performance review, department-level reporting
Tier 4 | 6–9 | Low | Light oversight — validation at initial deployment and upon material change, periodic self-assessment, exception-based reporting
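The sum-and-map logic above can be sketched in a few lines. This is a minimal illustration, not a prescribed implementation: the factor key names are made up for this sketch, while the 1–4 score range and the 20/15/10 tier cutoffs come directly from the tables in this section.

```python
# Illustrative composite-score-to-tier mapping. Factor key names are
# assumptions for this sketch; score bounds and cutoffs match the tables above.

FACTORS = (
    "decision_impact",
    "autonomy_level",
    "explainability_gap",
    "data_sensitivity",
    "velocity_of_change",
    "third_party_dependency",
)

def composite_tier(scores: dict) -> tuple:
    """Sum the six factor scores (each 1-4, total 6-24) and map to a tier."""
    missing = set(FACTORS) - set(scores)
    if missing:
        raise ValueError("missing factor scores: %s" % sorted(missing))
    if any(not 1 <= scores[f] <= 4 for f in FACTORS):
        raise ValueError("each factor score must be between 1 and 4")
    total = sum(scores[f] for f in FACTORS)
    if total >= 20:
        return (1, "Critical")
    if total >= 15:
        return (2, "High")
    if total >= 10:
        return (3, "Medium")
    return (4, "Low")
```

Running the chatbot example from later in this guide (scores 2, 4, 4, 4, 3, 4) through this function yields a total of 21 and Tier 1 — Critical, matching the worked table.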

Tier-Specific Oversight Requirements

Getting the tier right matters because it drives everything downstream: validation frequency, documentation depth, committee escalation, and monitoring intensity.

Tier 1: Critical — Full Regulatory Treatment

Validation: Annual independent validation by qualified validators (internal model validation group or qualified third party). Full scope — conceptual soundness review, outcomes analysis, sensitivity testing, and for AI models, adversarial testing and bias evaluation.

Documentation: Complete model documentation package including model card, training data provenance, feature engineering rationale, hyperparameter decisions, known failure modes, performance thresholds, drift detection methodology, and fallback/rollback procedures. For LLMs: prompt engineering documentation, guardrail specifications, and output monitoring protocols.

Monitoring: Continuous or near-continuous performance monitoring with automated alerting. Drift detection thresholds set at ±3–5% from baseline. For LLMs: automated hallucination detection sampling, output quality scoring.
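A baseline-deviation check of the kind described above can be sketched simply. The metric names and the 5% threshold here are illustrative choices within the ±3–5% range the section suggests; production monitoring would also need windowing, statistical drift tests, and alert routing.

```python
# Minimal sketch of a relative-deviation drift check against baseline metrics.
# Metric names and the 5% threshold are illustrative assumptions.

def drift_alerts(baseline: dict, current: dict, threshold: float = 0.05) -> list:
    """Return the metrics whose relative change from baseline exceeds threshold."""
    alerts = []
    for metric, base in baseline.items():
        if base == 0:
            continue  # relative change undefined at zero; handle separately in practice
        change = abs(current[metric] - base) / abs(base)
        if change > threshold:
            alerts.append(metric)
    return alerts

baseline = {"auc": 0.84, "precision": 0.71}
current = {"auc": 0.79, "precision": 0.70}
print(drift_alerts(baseline, current))  # AUC moved ~6% from baseline, so it alerts
```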

Governance: Pre-deployment approval by model risk committee or equivalent senior governance body. Quarterly reporting to CRO or board risk committee. Any material change triggers re-validation.

Kill switch: Documented and tested shutdown procedure with named decision authority available 24/7. Automated circuit breakers for defined trigger conditions.

Tier 2: High — Enhanced Oversight

Validation: Annual validation, scope can be targeted to highest-risk components if full validation was completed within the last 24 months. Independent validators required but can be internal (model validation group or second-line risk).

Documentation: Full model documentation with model card. Training data documentation required but may reference centralized data governance records. Performance metrics and drift thresholds documented.

Monitoring: Monthly or quarterly performance review with defined metrics. Automated drift alerts recommended but not mandatory if manual review cadence is monthly.

Governance: Senior management approval for deployment. Semi-annual reporting to risk committee or CRO. Material changes require risk assessment before implementation.

Kill switch: Documented shutdown procedure with named decision authority during business hours. After-hours coverage through on-call escalation.

Tier 3: Medium — Standard Oversight

Validation: Validation every 18–24 months, or upon material change. Can be performed by model developers with independent review of results (effective challenge).

Documentation: Streamlined model documentation — purpose, inputs, outputs, known limitations, performance benchmarks. Model card recommended but not required.

Monitoring: Annual performance review with comparison to baseline metrics. Exception-based alerting for significant deviations.

Governance: Department-head approval for deployment. Annual reporting as part of aggregate MRM reporting. Material changes flagged to MRM team.

Tier 4: Low — Light Oversight

Validation: Initial validation at deployment, then upon material change only. Self-assessment by model owner with MRM team review of methodology.

Documentation: Brief model summary — what it does, what data it uses, who owns it, when it was last reviewed.

Monitoring: Periodic spot-checks. No continuous monitoring required.

Governance: Line-of-business approval sufficient. Included in model inventory with annual attestation that model remains fit for purpose.

Applying the Framework: Three Examples

Example 1: LLM-Powered Customer Service Chatbot

Factor | Score | Rationale
Decision Impact | 2 | Handles customer inquiries, doesn’t make financial decisions
Autonomy Level | 4 | Responds to customers autonomously in real-time
Explainability Gap | 4 | Black-box LLM, no interpretability into response generation
Data Sensitivity | 4 | Processes customer PII, account information in conversations
Velocity of Change | 3 | Vendor updates model versions regularly, fine-tuning updated monthly
Third-Party Dependency | 4 | Foundation model API, no visibility into internals
Total | 21 | Tier 1 — Critical

Many firms would intuitively tier this as “Medium” because it’s “just a chatbot.” But the scoring reveals the truth: it’s autonomously interacting with customers, processing sensitive data through an opaque vendor model. That’s a Tier 1 risk profile.

Example 2: Logistic Regression Credit Scoring Model

Factor | Score | Rationale
Decision Impact | 4 | Directly drives credit approval/denial decisions
Autonomy Level | 2 | Generates scores reviewed by underwriters
Explainability Gap | 1 | Fully interpretable, coefficient-level explanation
Data Sensitivity | 3 | Uses applicant financial data, potential protected-class proxies
Velocity of Change | 1 | Retrained annually on fixed schedule
Third-Party Dependency | 1 | Developed and maintained in-house
Total | 12 | Tier 3 — Medium

Wait — a credit scoring model in Tier 3? That might feel wrong. This is where the framework reveals an important nuance: while the decision impact is critical, the model itself is well-understood, fully interpretable, stable, and internally controlled. The traditional tiering based on materiality alone would put it at Tier 1. The AI-adjusted framework recognizes that operational risk from the model itself is moderate — but you’d still apply fair lending testing and regulatory compliance requirements regardless of tier (those are regulatory obligations, not MRM tier-dependent).

If your institution wants to ensure credit models always land in Tier 2 or above, add a regulatory floor rule: “Models that directly influence decisions covered by ECOA, Fair Housing Act, BSA/AML, or capital adequacy are automatically classified as Tier 2 or higher regardless of composite score.”
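A floor rule like the one quoted above is easy to layer on top of composite scoring. This sketch is illustrative: the domain labels are taken from the rule text, but how you tag models with domains is up to your inventory design. Note that a lower tier number means higher risk, so the floor caps the number at 2.

```python
# Illustrative regulatory floor rule: covered domains force Tier 2 or higher
# regardless of composite score. Domain tagging scheme is an assumption.

REGULATORY_FLOOR_DOMAINS = {"ECOA", "Fair Housing Act", "BSA/AML", "capital adequacy"}

def apply_floor(scored_tier: int, model_domains: set) -> int:
    """Cap the tier number at 2 when any covered regulatory domain applies."""
    if model_domains & REGULATORY_FLOOR_DOMAINS:
        return min(scored_tier, 2)
    return scored_tier

print(apply_floor(3, {"ECOA"}))  # credit model scored Tier 3 -> floored to Tier 2
print(apply_floor(4, set()))     # no covered domain -> stays Tier 4
```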

Example 3: Agentic AI Trade Execution System

Factor | Score | Rationale
Decision Impact | 4 | Executes trades with direct financial P&L impact
Autonomy Level | 4 | Fully autonomous execution within defined parameters
Explainability Gap | 3 | Deep learning with post-hoc SHAP explanations available
Data Sensitivity | 2 | Uses market data, no PII
Velocity of Change | 4 | Continuous learning from real-time market data
Third-Party Dependency | 1 | Built in-house
Total | 18 | Tier 2 — High

High but not quite Critical. The in-house development and available (if limited) explainability keep it just below the Tier 1 threshold. In practice, most firms would bump this to Tier 1 through a management override given the financial exposure — and that’s a feature of the framework, not a bug. The scoring gives you a starting point; professional judgment adjusts at the margins.

Building the Decision Tree

For teams that need a faster classification path than scoring six factors, here’s a simplified decision tree:

Step 1: Does the model make or directly drive consequential decisions (credit, employment, insurance, capital)?

  • Yes → Minimum Tier 2. Continue to Step 2.
  • No → Continue to Step 2.

Step 2: Does the model operate autonomously (takes actions without human review before execution)?

  • Yes → Raise one tier (e.g., from Tier 2 → Tier 1, or from unclassified → Tier 2 minimum). Continue to Step 3.
  • No → Continue to Step 3.

Step 3: Is the model an opaque AI system (deep learning, LLM, vendor black-box)?

  • Yes → Raise one tier. Continue to Step 4.
  • No → Continue to Step 4.

Step 4: Does the model process PII, NPI, or protected-class data?

  • Yes → Raise one tier (cap at Tier 1). Final classification.
  • No → Final classification.

The decision tree is less precise than full scoring but gets you to a defensible classification in under two minutes — useful for initial inventory triage when you’re classifying hundreds of models.
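The four steps above can be sketched as a short function. One assumption to flag: the tree doesn't state a starting tier for an unclassified model, so this sketch assumes Tier 3 as the default, which makes the "unclassified → Tier 2 minimum" example in Step 2 come out right.

```python
# Sketch of the four-step triage decision tree. The Tier 3 default for
# unclassified models is an assumption; lower tier numbers mean higher risk.

def triage_tier(consequential: bool, autonomous: bool,
                opaque_ai: bool, sensitive_data: bool) -> int:
    tier = 3  # assumed default for an unclassified model
    if consequential:
        tier = min(tier, 2)      # Step 1: minimum Tier 2
    if autonomous:
        tier = max(tier - 1, 1)  # Step 2: raise one tier
    if opaque_ai:
        tier = max(tier - 1, 1)  # Step 3: raise one tier
    if sensitive_data:
        tier = max(tier - 1, 1)  # Step 4: raise one tier, cap at Tier 1
    return tier

# LLM chatbot profile: not consequential, but autonomous + opaque + sensitive.
print(triage_tier(False, True, True, True))  # -> 1, matching the full scoring
```

Notice that the chatbot from Example 1 lands at Tier 1 under both the full six-factor scoring and this two-minute triage path, which is the consistency you want between the fast and thorough methods.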

How Tiering Maps to Regulatory Expectations

Your tiering methodology doesn’t exist in a vacuum. Multiple frameworks now define risk-based approaches that should inform your tier definitions:

SR 11-7 / OCC Bulletin 2011-12: Expects model risk management “commensurate with” model complexity and materiality. The OCC’s October 2025 clarification (Bulletin 2025-26) explicitly stated that “a community bank using relatively few models of only moderate complexity might conduct significantly fewer model risk management activities” — confirming risk-based tiering is not just acceptable but expected.

EU AI Act (Article 6): Classifies AI systems into four risk tiers — Unacceptable (banned), High-Risk (Annex III use cases including credit scoring, employment decisions, biometric identification), Limited Risk (transparency obligations), and Minimal Risk (unregulated). High-risk AI system obligations take full effect August 2, 2026. If your firm operates in or serves EU markets, your internal tiering should at minimum align with the EU’s high-risk classification for Annex III use cases.

Colorado AI Act (SB 205): Takes effect February 1, 2026. Defines a “high-risk AI system” as any AI system that is a “substantial factor in making a consequential decision” — covering credit, employment, housing, insurance, education, and legal services. Any model touching these domains needs Tier 2 or higher treatment with documented impact assessments.

NIST AI RMF: The AI RMF 1.0 GOVERN function calls for organizations to establish “policies, processes, procedures, and practices” that include risk categorization. The framework doesn’t prescribe specific tiers but its MAP function emphasizes that risk assessment should consider “context of use” — the same principle driving the six-factor approach above.

Implementation Roadmap: 30/60/90 Days

Days 1–30: Foundation

Owner: Model Risk Management team lead (or CRO designee)

  • Week 1: Assess current tiering methodology gaps against the six AI risk factors. Document which factors are missing.
  • Week 2: Draft updated tiering criteria with scoring rubrics for each factor. Circulate to model validators and first-line model owners for feedback.
  • Week 3: Define tier-specific oversight requirements (validation frequency, documentation standards, monitoring expectations, governance thresholds).
  • Week 4: Present proposed methodology to model risk committee for approval. Incorporate feedback.

Deliverables: Updated tiering methodology document, approved scoring rubrics, tier-oversight mapping matrix.

Days 31–60: Reclassification

Owner: Model Risk Management team + first-line model owners

  • Week 5–6: Re-score all AI models in the inventory using the new methodology. Flag models that change tier (especially those moving up).
  • Week 7–8: For models that moved up in tier, conduct gap assessment against the new tier’s oversight requirements. Prioritize validation and documentation gaps for Tier 1 and Tier 2 models.

Deliverables: Updated model inventory with new tier classifications, gap assessment for upgraded models, remediation plan with timelines.

Days 61–90: Operationalize

Owner: Model Risk Management team + IT/Engineering (for monitoring infrastructure)

  • Week 9–10: Implement automated monitoring and drift detection for Tier 1 models. Configure alerting thresholds and escalation workflows.
  • Week 11: Update MRM policies and procedures to reflect new tiering methodology. Train model owners on the updated framework.
  • Week 12: Conduct first quarterly reporting cycle using new tiers. Brief senior management or risk committee.

Deliverables: Updated MRM policy, monitoring dashboards for Tier 1 models, training materials, first quarterly report.

Common Mistakes to Avoid

Tiering by technology instead of risk. Don’t automatically make all AI models Tier 1 and all traditional models Tier 3. A simple decision tree model that autonomously blocks fraud transactions might be higher risk than an LLM that summarizes internal meeting notes. The factors matter more than the label.

Ignoring vendor model changes. If you tier a vendor model at deployment and never re-assess, you’ll miss risk creep. Major version updates (GPT-3.5 → GPT-4, for example) can change the risk profile significantly. Build re-tiering triggers into your vendor management process.

No management override mechanism. The scoring framework should be the starting point, not the final word. Build in a documented override process where the MRM team or risk committee can adjust a tier up or down based on factors the scoring doesn’t capture (strategic importance, concentration risk, regulatory scrutiny).

Treating regulatory floor rules as optional. If a model directly drives fair lending decisions, it’s Tier 2 minimum regardless of what the composite score says. Same for BSA/AML and capital adequacy models. Document these floor rules explicitly.

So What?

Your model risk tiering methodology is the foundation that everything else in your MRM program rests on — validation scope, documentation depth, monitoring intensity, and governance cadence all flow from it. If the tier is wrong, the oversight is wrong. And when examiners find models that are under-tiered and under-validated, that’s where MRAs and MRIAs come from.

The good news: updating your tiering methodology for AI isn’t a multi-year initiative. It’s a 90-day project with clear deliverables and measurable outcomes. The framework above gives you a defensible, regulatory-aligned structure. What matters now is doing it before your next exam — not after.

If you want a structured framework to assess AI model risks across all six dimensions, the AI Risk Assessment Template & Guide includes scoring rubrics, risk tier definitions, and ready-to-use assessment worksheets that align with SR 11-7, NIST AI RMF, and EU AI Act requirements.

Frequently Asked Questions

How often should AI model risk tiers be reassessed?

Re-assess tiers at minimum annually during the regular model inventory review cycle. Additionally, trigger a re-tiering assessment whenever there’s a material change — including vendor model version updates, changes in model scope or usage, new data inputs, or regulatory changes that affect the model’s domain. For Tier 1 models using third-party AI APIs, consider re-assessment whenever the vendor announces a major model update.

Does the EU AI Act require a specific tiering methodology?

The EU AI Act establishes its own risk classification (Unacceptable, High-Risk, Limited, Minimal) based on use case rather than model characteristics. It doesn’t prescribe an internal tiering methodology. However, for firms subject to both the EU AI Act and US banking regulations, your internal tiering should at minimum ensure that any model classified as “high-risk” under EU AI Act Annex III — including those used for credit scoring, employment decisions, and biometric identification — receives Tier 2 or higher treatment in your internal framework.

Should traditional statistical models still go through the same tiering process?

Yes. The six-factor framework works for all model types. Traditional models will typically score lower on explainability gap, velocity of change, and third-party dependency — which is appropriate. They’ll naturally land in lower tiers unless they carry high decision impact or autonomy scores. Running all models through the same framework ensures consistency and gives examiners a single, defensible methodology that covers your entire inventory.

Rebecca Leung

Rebecca Leung has 8+ years of risk and compliance experience across first and second line roles at commercial banks, asset managers, and fintechs. Former management consultant advising financial institutions on risk strategy. Founder of RiskTemplates.
