AI Risk

AI Model Inventory Management: What Examiners Ask For First (And What Banks Can't Find)

March 26, 2026 · Rebecca Leung

The first question your examiner will ask isn’t about bias, explainability, or governance frameworks. It’s simpler than that: “Can you show me your model inventory?”

And that’s where most banks stumble — not because they lack AI governance ambitions, but because they genuinely don’t know how many AI models they’re running.

A 2025 GAO report on AI use and oversight in financial services found that federal regulators (OCC, Federal Reserve, FDIC) consistently apply existing model risk management guidance — specifically SR 11-7 and OCC Bulletin 2011-12 — to AI and machine learning systems. The rules aren’t new. The inventory obligation has existed since 2011. What’s new is that “model” now means something far more complex, more numerous, and harder to track.


TL;DR

  • SR 11-7 (Fed) and OCC Bulletin 2011-12 require a comprehensive model inventory — and examiners are now applying this squarely to AI/ML systems
  • Most banks undercount their AI models because shadow AI, vendor-embedded models, and low-code AI tools fall through the cracks
  • A defensible inventory needs 12+ specific fields — not just a spreadsheet of model names
  • OCC Bulletin 2025-26 (October 2025) clarified community bank flexibility — but the inventory requirement itself is non-negotiable

Why the Model Inventory Is Ground Zero for AI Risk Exams

SR 11-7 is explicit: “Organizations should maintain an inventory of models implemented for use, under development for implementation, or recently retired.” The companion piece, OCC Bulletin 2011-12, says the same thing.

For a decade, compliance teams maintained tidy spreadsheets of their credit models, stress testing models, and pricing models. That worked when “model” meant a regression or a scorecard. It doesn’t work now.

Today’s model environment includes:

  • Traditional ML models (credit scoring, fraud detection, AML transaction monitoring)
  • LLMs and GenAI tools (customer service chatbots, internal policy Q&A tools, document summarizers)
  • Vendor-embedded AI (the AI your core banking platform uses to flag anomalies, the credit underwriting model inside your fintech partner’s API)
  • Low-code/no-code AI (tools built by business units using Microsoft Copilot, Salesforce Einstein, or similar platforms without MRM involvement)
  • Retired models still referenced in production pipelines or model documentation

Examiners are asking about all of them. If your inventory only captures the models your MRM team formally reviewed, you’re starting the exam in a hole.


What Most Banks Miss: The Shadow AI Problem

Shadow AI is the single biggest gap in most model inventories. It’s not rogue behavior — it’s the predictable result of business units moving faster than governance processes.

Three patterns that create shadow AI exposure:

1. Business unit tools. Operations, marketing, and HR teams subscribe to AI platforms directly. They’re not building models in the traditional sense — they’re deploying AI features. But if that AI is making decisions (routing complaints, screening job applicants, personalizing loan offers), it’s in scope for SR 11-7 purposes.

2. Vendor black boxes. Your TPRM process may have signed off on Vendor X as a credit underwriting partner without anyone asking “does their platform use AI/ML to generate recommendations?” Many fintechs embed AI decision engines in APIs that look like simple data outputs. If you’re consuming those outputs in a regulated process, their model is your model risk.

3. Proof-of-concepts that went live. A team builds a PoC using an ML library to automate a manual review process. It works. They keep using it. It never went through model development/validation. Two years later, it’s running in production and no one in MRM knows it exists.

A solid model inventory discovery process accounts for all three.


The 12 Fields Every AI Model Inventory Record Needs

SR 11-7 doesn’t prescribe a specific template, but regulators have been clear about what “comprehensive” looks like in practice. For AI/ML models, your inventory records need to go beyond the basics.

| Field | What to Capture | AI-Specific Notes |
| --- | --- | --- |
| Model ID | Unique identifier | Include version/iteration (v1.2, not just "Fraud Model") |
| Model Name & Description | Plain-language description | Include the business decision it supports |
| Model Type | Traditional ML / Deep Learning / LLM / Rules-based / Vendor | Critical for calibrating validation approach |
| Business Owner | Named individual, not just a team | Who answers when regulators ask? |
| MRM Owner / Validator | Named individual responsible for model risk | Separate from business owner |
| Deployment Status | In-use / In development / Retired / Paused | Include date of each status change |
| Risk Tier | High / Medium / Low | Document the tiering criteria applied |
| Model Function | What decision does it inform? | Credit / Fraud / Marketing / HR / Operations / Customer Service |
| Training Data Sources | Data sources used to train/tune the model | Note PII, protected class data, or third-party data |
| Last Validation Date | When was the last independent validation? | For AI models, include drift monitoring date |
| Known Limitations / Failure Modes | Documented weaknesses | What could go wrong? What has gone wrong? |
| Vendor / Internal Build | Who built it? | For vendor models, include contract reference and exit plan |

For LLMs and GenAI models, add two more fields:

  • Prompt/System Instruction Version — GenAI model behavior changes with prompt updates, not just model version updates
  • Output Review Mechanism — How are AI outputs reviewed before acting on them?
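To make the field list concrete, the record can be modeled as a simple data structure. This is an illustrative sketch in Python, not a regulatory template — the class and field names are my own shorthand for the table above, and any real implementation would live in your GRC system rather than code.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ModelInventoryRecord:
    """One inventory entry; fields mirror the table above (illustrative only)."""
    model_id: str                  # e.g. "FRAUD-TXN-v1.2" — include version, not just a name
    name: str
    description: str               # plain language, incl. the business decision it supports
    model_type: str                # "Traditional ML" / "Deep Learning" / "LLM" / "Rules-based" / "Vendor"
    business_owner: str            # named individual, not a team
    mrm_owner: str                 # validator, separate from the business owner
    status: str                    # "In-Use" / "In-Development" / "Retired" / "Paused"
    status_date: str               # ISO date of the last status change
    risk_tier: str                 # "High" / "Medium" / "Low"
    function: str                  # "Credit" / "Fraud" / "Marketing" / "HR" / ...
    training_data_sources: list[str] = field(default_factory=list)  # flag PII / protected-class data
    last_validation_date: Optional[str] = None   # for AI models, also track drift monitoring
    known_limitations: list[str] = field(default_factory=list)
    vendor: Optional[str] = None   # None for internal builds; else contract reference
    # GenAI-specific additions
    prompt_version: Optional[str] = None          # behavior shifts with prompt updates
    output_review_mechanism: Optional[str] = None # how outputs are reviewed before use
```

The optional GenAI fields default to `None`, so traditional ML records stay uncluttered while LLM records carry the two extra attributes.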

How to Discover Shadow AI: A Practical Approach

You can’t build a complete inventory from memory or by asking people if they use AI. You need active discovery.

Step 1: IT procurement and SaaS audit (Weeks 1–2)

Pull your SaaS subscription list and software procurement records. Flag anything with “AI,” “ML,” “intelligent,” “predictive,” “automated decision,” or vendor names associated with AI platforms (Salesforce, Microsoft 365 Copilot, ServiceNow, HireVue, etc.). Cross-reference against your current inventory. Gap = shadow AI candidate.
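The Step 1 screen boils down to a keyword filter over the subscription list, cross-referenced against the inventory. A minimal sketch, with an illustrative (not exhaustive) keyword set and made-up tool names — note that naive substring matching overflags (the substring "ai" matches "email"), so a production screen should match whole words:

```python
# Illustrative keyword list — extend with vendor names relevant to your stack
AI_KEYWORDS = ["ai", "ml", "intelligent", "predictive",
               "automated decision", "copilot", "einstein"]

def flag_shadow_ai_candidates(subscriptions, inventory_names):
    """Return SaaS entries that look AI-related but aren't in the model inventory."""
    known = {n.lower() for n in inventory_names}
    candidates = []
    for name in subscriptions:
        lowered = name.lower()
        if lowered in known:
            continue  # already tracked — not a gap
        if any(kw in lowered for kw in AI_KEYWORDS):
            candidates.append(name)
    return candidates

# Example run: two flagged gaps, one already-inventoried platform skipped
subs = ["Salesforce Einstein", "Zoom", "HireVue AI Screening", "Fraud Model Platform"]
print(flag_shadow_ai_candidates(subs, ["Fraud Model Platform"]))
# → ['Salesforce Einstein', 'HireVue AI Screening']
```

Everything the filter returns is a shadow AI candidate for follow-up, not an automatic inventory entry.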

Step 2: Business unit survey (Weeks 2–3)

Send a structured survey to all department heads: “Do you use any tools that make automated recommendations, predictions, or decisions — even if you don’t think of them as ‘AI’?” Include examples. Business units often don’t self-identify as AI users because they think of AI as something the data science team does. Lower the threshold in how you describe it.

Step 3: Vendor contract review (Weeks 3–4)

Pull active vendor agreements for technology services. Look for phrases like “machine learning,” “artificial intelligence,” “predictive analytics,” “algorithmic,” or “automated scoring.” Flag those vendors for follow-up: “Does your platform use AI or ML in the outputs/decisions we receive?”

Step 4: IT infrastructure scan (Ongoing)

Work with your IT team to identify model serving endpoints, API calls to ML inference services (AWS SageMaker, Azure ML, Google Vertex AI, OpenAI API), and any internal Python/R model deployment pipelines. Models running in production leave infrastructure footprints.
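One footprint worth scanning is outbound traffic to known ML inference services. A sketch of that idea — the hostname patterns are real service domains, but the two-column log format and host names are assumptions for illustration; adapt the parsing to whatever your network logs actually look like:

```python
# Destination patterns for common managed ML inference services
ML_ENDPOINT_PATTERNS = [
    "sagemaker",                   # AWS SageMaker endpoints
    "azureml",                     # Azure ML
    "aiplatform.googleapis.com",   # Google Vertex AI
    "api.openai.com",              # OpenAI API
]

def find_inference_traffic(log_lines):
    """Return (source, destination) pairs whose destination matches a known ML service."""
    hits = []
    for line in log_lines:
        source, host = line.split()[:2]  # assumed "source destination" log format
        if any(pattern in host for pattern in ML_ENDPOINT_PATTERNS):
            hits.append((source, host))
    return hits

logs = [
    "ops-app01 runtime.sagemaker.us-east-1.amazonaws.com",
    "hr-tool02 api.openai.com",
    "web01 www.example.com",
]
print(find_inference_traffic(logs))
# → [('ops-app01', 'runtime.sagemaker.us-east-1.amazonaws.com'), ('hr-tool02', 'api.openai.com')]
```

Each hit ties a system to an inference service — a starting point for asking which team owns that system and what model sits behind the call.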

Step 5: Periodic re-attestation (Quarterly)

Send a brief re-attestation to business unit leads quarterly: “Have you deployed or started using any new AI tools in the past 90 days?” Build this into your model governance calendar, not as a one-time project.


The Vendor Model Trap (And How to Close It)

The OCC and Federal Reserve have both indicated that banks remain responsible for model risk when they rely on vendor models. SR 11-7 is clear that the standard applies “regardless of whether models are developed in-house or acquired externally.”

In practice, that means:

  • You need to understand what the vendor model does — inputs, outputs, training methodology, known limitations
  • You need to assess the vendor model risk, even if you can’t validate it directly
  • You need a plan if the vendor model fails — this is both a model risk question and a third-party risk question

For vendor AI models, your inventory record should include a reference to the vendor contract, any model documentation the vendor has provided (model cards, validation summaries, accuracy benchmarks), and the escalation path if performance deteriorates.

When vendors won’t provide model documentation — which happens more often than it should — document that gap explicitly in your inventory and in your third-party risk assessment. Regulators understand that black-box vendor AI is a real challenge; what they don’t accept is silence about the problem.


OCC 2025-26: Flexibility for Community Banks (But Not on Inventory)

In October 2025, the OCC issued Bulletin 2025-26, clarifying that community banks have flexibility to tailor their model risk management practices — specifically, that the OCC does not require annual model validation for all models.

This is meaningful flexibility. It means a small community bank using a third-party credit scoring model doesn’t need the same validation cadence as a large regional bank running a proprietary deep learning underwriting model.

What 2025-26 does not change:

  • The obligation to maintain a model inventory
  • The expectation that validation frequency be risk-based and documented
  • The requirement to understand and manage vendor model risk

Community banks should read 2025-26 as an invitation to calibrate, not an invitation to skip. Document your tiering rationale, document your validation decisions, and keep your inventory current. That’s exactly what examiners want to see.


Maintenance: Who Owns the Inventory and How Often Does It Update?

The model inventory is only useful if it’s current. Stale inventories are a common MRA finding.

Ownership:

At most mid-size banks, the model inventory is owned by the Model Risk Management function (typically under the CRO). At smaller institutions without a dedicated MRM team, ownership often falls to the Chief Risk Officer or the compliance function. What matters is that ownership is named, not just implied.

Update triggers:

| Event | Inventory Action |
| --- | --- |
| New model goes live | Add record, status = In-Use |
| Model enters development | Add record, status = In-Development |
| Model is retired or decommissioned | Update status, add retirement date |
| Vendor changes their underlying model | Update vendor model record, trigger re-assessment |
| Business unit adopts new AI tool | Discovery → assessment → add record |
| Material model change (retraining, new features) | Update record, trigger validation review |

Cadence:

  • Formal full-inventory review: annually at minimum
  • Status-update sweep: quarterly
  • Re-attestation survey to business units: quarterly
  • Automated IT scanning for new model infrastructure: continuous

So What?

If an examiner walked in today and asked for your model inventory, what would you hand them?

If the answer is a spreadsheet that hasn’t been touched since your last exam cycle, with 12 models listed and zero GenAI entries — that’s a problem. Not because the examiner will issue an immediate MRA, but because the inventory is the foundation for everything else: validation scheduling, oversight intensity, resource allocation, and your ability to demonstrate a functioning AI governance program.

The goal isn’t a perfect inventory. The goal is a defensible one — accurate enough, updated regularly enough, and comprehensive enough that you can show examiners you know what you’re running and who’s responsible for it.

The AI Risk Assessment Template & Guide includes a model inventory template with all 12+ required fields, a shadow AI discovery questionnaire you can send to business units, and guidance on tiering your models by risk level — so you know where to spend your validation resources.


Frequently Asked Questions

Does SR 11-7 apply to AI and machine learning models?

Yes. Federal regulators (OCC, Federal Reserve, FDIC) have confirmed that SR 11-7 and OCC Bulletin 2011-12 apply to AI and ML systems. A May 2025 GAO report found that all seven federal financial regulators consider existing model risk guidance applicable to AI, regardless of whether AI-specific rules have been issued.

Do community banks need to maintain an AI model inventory?

Yes. OCC Bulletin 2025-26 (October 2025) gave community banks flexibility on validation frequency and scope, but the inventory requirement itself was not relaxed. Community banks should maintain an inventory and calibrate their validation activities based on the complexity and risk of their model use.

What if a vendor won’t tell us how their AI model works?

Document the gap. SR 11-7 acknowledges that vendor model transparency can be limited, but your MRM program needs to account for this risk. In your inventory and TPRM documentation, note that the vendor has not provided model documentation and describe the compensating controls you’ve implemented — such as output monitoring, performance benchmarking, and contractual audit rights.

Rebecca Leung

Rebecca Leung has 8+ years of risk and compliance experience across first and second line roles at commercial banks, asset managers, and fintechs. Former management consultant advising financial institutions on risk strategy. Founder of RiskTemplates.
