OCC Model Risk Management Meets AI: What Bulletin 2011-12 Means for Your ML Program
TL;DR:
- OCC Bulletin 2011-12 applies to AI/ML models — and the 2021 Comptroller’s Handbook makes that explicit with AI-specific guidance
- The OCC’s position: “Regardless of how AI is classified (as a model or not a model), the associated risk management should be commensurate with the level of risk”
- October 2025’s Bulletin 2025-26 signals a broader MRM guidance review is coming — your AI model risk program needs to be ready
The 2011 Guidance Your AI Program Can’t Ignore
OCC Bulletin 2011-12, “Sound Practices for Model Risk Management,” was issued jointly with the Federal Reserve on April 4, 2011. It was written when “models” meant regression-based credit scorecards and interest rate risk calculators. Nobody was thinking about transformer architectures or large language models.
But here’s the thing: it still governs how national bank examiners evaluate your AI and machine learning programs. The OCC hasn’t replaced it. They’ve expanded on it — and in the process, made their expectations for AI risk management much more explicit.
If you’re running AI models at an OCC-supervised institution and your MRM framework hasn’t been updated since 2020, you’re walking into exam findings.
How the OCC Extended 2011-12 to Cover AI
The evolution happened in three stages, and each one ratcheted up expectations.
Stage 1: The Original Guidance (2011)
OCC Bulletin 2011-12 established three pillars of model risk management:
| Pillar | Original Focus | AI Challenge |
|---|---|---|
| Model Development & Implementation | Documentation of methodology, assumptions, data inputs | AI models may have billions of parameters, opaque training processes, and emergent behaviors |
| Model Validation | Independent review, backtesting, sensitivity analysis | Traditional validation breaks down with black-box models — you can’t backtest a chatbot |
| Model Governance | Board oversight, policies, model inventory | Shadow AI, vendor-embedded models, and rapid deployment cycles outpace traditional governance |
The guidance defined a “model” as any “quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates.” That definition is broad enough to capture most ML models — but it also left a gray zone around AI tools that don’t produce “quantitative estimates” in the traditional sense.
Stage 2: The 2021 Comptroller’s Handbook
In August 2021, the OCC published a new Model Risk Management booklet for the Comptroller’s Handbook (OCC Bulletin 2021-39). This was the game-changer for AI teams. The handbook didn’t replace Bulletin 2011-12, but it provided significant additional interpretations and made AI-specific expectations explicit.
Key additions:
- AI is called out directly. The handbook singles out artificial intelligence and emphasizes the need for appropriate risk management when using AI, noting that “while some AI does not produce quantitative estimates and might not meet the 2011 guidance definition of a model, the risks of AI can be high depending on the complexity of the methodology and its use.”
- The classification debate is moot. The handbook states that “regardless of how AI is classified (i.e., as a model or not a model), the associated risk management should be commensurate with the level of risk of the function that the AI supports.” Translation: don’t play the “it’s not technically a model” game. Examiners won’t buy it.
- Bias gets explicit attention. The handbook makes special note of analyzing the potential for implicit bias in AI models and tools — directly linking MRM to fair lending compliance.
- Explainability matters for risk ratings. Model risk rating methodologies should consider explainability for AI models, meaning opaque models will typically be assigned higher inherent risk ratings.
- Risk-sensitive MRM is expected. A model’s risk level should drive the breadth, depth, priority, and frequency of MRM activities. High-risk AI models demand more intensive oversight than a simple linear regression.
Stage 3: Recent Developments (2025)
In October 2025, the OCC issued Bulletin 2025-26, clarifying model risk management for community banks. While focused on smaller institutions, the bulletin contained a critical signal: it announced a broader review of model risk management guidance, practices, and examiner feedback at banks of all sizes.
That review is happening now. If you’re a large or mid-size bank running AI models, this is the time to shore up your program — not after the updated guidance drops.
Meanwhile, in April 2025, Acting Comptroller Rodney Hood delivered remarks at the National Fair Housing Alliance’s Responsible AI Symposium, highlighting the OCC’s work to ensure AI is used “ethically and responsibly within the banking industry.” The OCC’s Office of Financial Technology continues to monitor AI adoption and bank-fintech partnerships, feeding into supervisory and policy development.
What OCC Examiners Actually Look For
In May 2022, Deputy Comptroller for Operational Risk Kevin Greenfield testified before Congress and laid out five supervisory expectations for banks using AI. These are effectively the exam playbook:
1. Risk and Compliance Management Programs
Examiners expect well-designed risk management and compliance programs that explicitly cover AI use. This includes controls for monitoring AI process outcomes to identify unwarranted risks or violations of consumer protection laws, including fair lending.
For banks subject to OCC heightened standards, the bar is even higher — risk governance and management practices should be adjusted when introducing or altering AI activities.
What to have ready: An AI-specific risk policy or addendum to your enterprise risk management framework that covers AI use cases, risk appetite for AI, and escalation procedures.
2. Model Risk Management
Many AI processes qualify as models under existing OCC guidance. Effective MRM practices for AI include:
- Appropriate due diligence and risk assessment before deployment
- Sufficient and qualified staffing (people who actually understand ML, not just traditional quant teams)
- Governance and controls commensurate with model complexity
- Documentation that covers training data provenance, feature engineering, hyperparameter choices, known failure modes, and performance metrics
What to have ready: An updated model inventory that captures all AI/ML models — including vendor-embedded models, shadow AI, and tools that might not meet the traditional model definition but carry material risk.
3. Third-Party Risk Management
Banks deploying AI from third-party providers (which is most banks) need robust due diligence, effective contract management, and ongoing oversight. This means you can’t just license an AI-powered underwriting tool and call it done.
What to have ready: Third-party AI vendor assessments that go beyond standard SOC 2 reviews to cover model methodology, training data practices, bias testing, and performance monitoring obligations.
4. New and Modified Products Principles
AI implementations should go through your new product/activity review process. Examiners want to see that you’ve assessed and understood the risks associated with AI activities before deploying them.
What to have ready: Evidence that AI use cases went through your new product approval process, with documented risk assessments and committee approvals.
5. Responsible Use of Alternative Data
If you’re using AI in underwriting with alternative data sources, the OCC expects you to manage consumer protection implications proactively — including an analysis of relevant requirements before implementation.
What to have ready: Fair lending analysis for any AI model that touches credit decisions, with documented testing for disparate impact across protected classes.
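One common screening heuristic for the disparate impact testing described above is the "four-fifths rule": compare each group's approval rate to the most favored group's rate and flag ratios below 0.8. The sketch below is illustrative only — the function name, group labels, and 0.8 cutoff are assumptions, and a real fair lending analysis also involves regression-based methods, practical significance testing, and legal review.

```python
# Hypothetical sketch: adverse impact ratios for a credit model's decisions.
# The 0.8 screening threshold is a conventional rule of thumb, not an
# OCC-mandated value.
from collections import defaultdict

def adverse_impact_ratios(decisions, reference_group):
    """decisions: iterable of (group, approved) tuples.
    Returns {group: approval_rate / reference_group_approval_rate}."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    rates = {g: approvals[g] / totals[g] for g in totals}
    ref_rate = rates[reference_group]
    return {g: rate / ref_rate for g, rate in rates.items()}

# Group A: 80% approval; Group B: 60% approval
decisions = [("A", True)] * 80 + [("A", False)] * 20 \
          + [("B", True)] * 60 + [("B", False)] * 40
ratios = adverse_impact_ratios(decisions, reference_group="A")
# B's ratio is 0.60 / 0.80 = 0.75, below the common 0.8 screening threshold
flagged = [g for g, r in ratios.items() if r < 0.8]
```

A ratio below the threshold is a trigger for deeper analysis, not a legal conclusion — document the follow-up investigation either way.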
Common MRA/MRIA Findings Related to AI
Based on the OCC’s exam procedures and public supervisory commentary, these are the areas where banks most frequently receive Matters Requiring Attention (MRAs) or the more severe Matters Requiring Immediate Attention (MRIAs):
| Finding | Why It Happens | How to Prevent |
|---|---|---|
| Incomplete model inventory | AI tools and vendor-embedded models aren’t captured | Run a shadow AI discovery exercise; include all tools that influence decisions, not just those meeting the strict model definition |
| Inadequate validation for AI models | Traditional backtesting applied to ML models without adaptation | Implement AI-appropriate validation: adversarial testing, bias probing, output consistency checks, benchmark evaluations |
| Missing or insufficient documentation | AI models deployed without full documentation of training data, methodology, and limitations | Require model cards or equivalent documentation before production deployment |
| No bias testing | Fair lending testing not extended to AI/ML credit models | Conduct disparate impact analysis across protected classes for every model touching credit, pricing, or marketing |
| Weak ongoing monitoring | No drift detection or performance monitoring post-deployment | Set automated monitoring for data drift, concept drift, and performance degradation with clear thresholds and escalation triggers |
| Insufficient challenge capability | Validation teams lack ML expertise to effectively challenge AI models | Invest in MRM team upskilling or engage qualified third-party validators with AI/ML expertise |
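The drift monitoring row above can be made concrete with one widely used metric: the Population Stability Index (PSI), which compares a model's production score distribution against its training-time baseline. This is a minimal sketch under stated assumptions — the ten-bucket layout and the conventional 0.1 (watch) / 0.25 (alert) cutoffs are industry rules of thumb, not regulatory requirements.

```python
# Illustrative PSI calculation: bucket the baseline scores, then measure
# how far the current production scores have shifted across those buckets.
import math

def psi(expected, actual, buckets=10):
    """Population Stability Index between two score samples.
    Buckets are defined on the expected (baseline) sample's range."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / buckets for i in range(buckets + 1)]
    edges[-1] = float("inf")  # catch production scores above the baseline range

    def fractions(sample):
        counts = [0] * buckets
        for x in sample:
            for i in range(buckets):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
        n = len(sample)
        # floor each bucket at a tiny fraction to avoid log(0)
        return [max(c / n, 1e-4) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((a_i - e_i) * math.log(a_i / e_i) for e_i, a_i in zip(e, a))

baseline = [i / 100 for i in range(100)]
stable = psi(baseline, baseline)                      # near zero: no drift
drifted = psi(baseline, [x + 0.5 for x in baseline])  # large: clear drift
```

Wiring a check like this to automated alerting, with documented thresholds and an escalation owner, is exactly the kind of evidence that closes out this finding.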
Building a Defensible OCC-Ready AI MRM Program
Here’s a 30/60/90-day roadmap for getting your program exam-ready:
Days 1–30: Foundation
Owner: Chief Risk Officer / Head of Model Risk Management
- Update model risk policy to explicitly reference AI/ML models and the 2021 Comptroller’s Handbook
- Complete an AI model discovery exercise across all business lines — include vendor models, employee-facing tools, and embedded AI
- Update your model inventory with every identified AI system, regardless of whether it meets the traditional “model” definition
- Assess each AI tool against the OCC’s risk-commensurate framework — assign risk tiers based on autonomy, data sensitivity, decision impact, and explainability
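The risk tiering step above can be sketched as a simple weighted score over the four dimensions named (autonomy, data sensitivity, decision impact, explainability). Everything here is an illustrative assumption a bank would calibrate to its own risk appetite: the 1–3 scales, the extra weight on decision impact, the tier cutoffs, and the control expectations attached to each tier.

```python
# Hypothetical risk-commensurate tiering sketch. Factor scores run 1 (low)
# to 3 (high); "explainability_gap" is high for opaque models.
def risk_tier(autonomy, data_sensitivity, decision_impact, explainability_gap):
    """Return an illustrative risk tier label for an AI tool."""
    for v in (autonomy, data_sensitivity, decision_impact, explainability_gap):
        if v not in (1, 2, 3):
            raise ValueError("factor scores must be 1, 2, or 3")
    # Weight decision impact most heavily: a wrong credit decision matters
    # more than an opaque internal tool with no customer-facing effect.
    score = 2 * decision_impact + autonomy + data_sensitivity + explainability_gap
    if score >= 12:
        return "Tier 1 (high): full validation, annual revalidation, board reporting"
    if score >= 8:
        return "Tier 2 (medium): targeted validation, periodic monitoring review"
    return "Tier 3 (low): inventory entry and baseline documentation"

# A fully autonomous, opaque, credit-decisioning model lands in Tier 1
tier = risk_tier(autonomy=3, data_sensitivity=3,
                 decision_impact=3, explainability_gap=3)
```

Whatever scheme you adopt, the point examiners will look for is that the tier drives the breadth, depth, and frequency of MRM activity — and that the scoring rationale is documented per tool.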
Days 31–60: Validation and Documentation
Owner: Model Validation Team / Independent Risk
- Develop AI-specific validation procedures covering adversarial testing, output consistency, bias probing, and benchmark evaluation
- Create or update model documentation standards for AI — require model cards with training data provenance, performance metrics, known limitations, and drift thresholds
- Conduct bias and fair lending testing for all AI models in credit, pricing, and marketing
- Review third-party AI vendor contracts for validation rights, performance guarantees, and data handling obligations
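The model card requirement above can be enforced as a deployment gate: documentation must populate every required field before a model ships. The field list and the gate itself are a hedged sketch, not a prescribed OCC format — map the fields to whatever documentation standard your policy defines.

```python
# Illustrative model-card completeness check used as a pre-deployment gate.
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class ModelCard:
    model_name: str
    training_data_provenance: Optional[str] = None
    performance_metrics: Optional[str] = None
    known_limitations: Optional[str] = None
    drift_thresholds: Optional[str] = None

def missing_fields(card: ModelCard) -> list:
    """Return documentation fields still unset; empty list means deployable."""
    return [f.name for f in fields(card) if getattr(card, f.name) is None]

# Hypothetical model name and provenance note, for illustration only
card = ModelCard(
    model_name="credit-scoring-gbm-v2",
    training_data_provenance="2019-2023 originations, bureau + internal data",
)
gaps = missing_fields(card)  # three fields still undocumented
```

A check like this is cheap to run in a CI pipeline or model registry workflow, and it turns "documentation before production deployment" from a policy statement into an enforced control.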
Days 61–90: Governance and Monitoring
Owner: Model Risk Governance Committee
- Implement ongoing monitoring for all deployed AI models — data drift detection, performance dashboards, automated alerting
- Establish AI-specific escalation protocols: who decides to retrain, retune, or shut down a model
- Train the MRM team on AI/ML concepts — ensure they can effectively challenge AI models, not just rubber-stamp them
- Conduct a mock exam: walk through every OCC exam procedure for MRM and verify you have defensible answers
The GAO’s May 2025 Wake-Up Call
In May 2025, the Government Accountability Office published GAO-25-107197, "Artificial Intelligence: Use and Oversight in Financial Services." The report found that while financial institutions are rapidly adopting AI for trading, credit decisions, and customer service, regulatory oversight is still catching up.
Key findings relevant to OCC-supervised banks:
- Regulators are using existing frameworks — not new AI-specific rules — to supervise AI. This means Bulletin 2011-12 and the Comptroller’s Handbook are your primary compliance targets.
- The OCC itself is exploring generative AI to help examiners identify relevant information in supervisory guidance and assess risk in bank documents. Examiners may soon be AI-assisted in evaluating your AI program.
- Model risk management gaps are a cross-agency concern. The GAO recommended that the NCUA update its model risk management guidance — highlighting that MRM frameworks across regulators need modernization.
The message is clear: regulators see AI risk management through the lens of existing model risk guidance, and they expect you to apply it rigorously.
The Classification Trap: Don’t Fall Into It
The biggest mistake banks make? Spending months debating whether an AI tool is “technically a model” under the 2011-12 definition — and using that debate as an excuse to defer risk management.
The OCC has been unambiguous: risk management should be commensurate with the level of risk, regardless of classification. A customer-facing chatbot that provides investment guidance might not fit the narrow model definition, but if it gives bad advice that costs a customer money, the regulatory consequences are the same.
The 2021 Comptroller’s Handbook drove this point home by noting that even non-model analytical tools — like a machine learning clustering algorithm used for customer risk rating — can pose substantial risk and require appropriate governance.
Practical approach: Stop debating classification. Instead, assess every AI tool by its risk impact and apply proportionate controls. Examiners will respect a risk-based approach far more than a semantic argument about model definitions.
So What?
OCC Bulletin 2011-12 isn’t going anywhere. The OCC has chosen to extend and reinterpret it for the AI era rather than replace it. That means the three pillars — development, validation, and governance — remain your foundation, but the interpretation of each pillar has evolved substantially.
The broader MRM guidance review announced in October 2025 means updated expectations are coming. Banks that build AI-aware MRM programs now will be ahead of the curve. Banks that wait will be scrambling to remediate MRAs.
If your model risk framework still treats AI as an afterthought — or worse, doesn’t mention it at all — the clock is ticking.
Need a head start? The AI Risk Assessment Template & Guide gives you a structured framework for assessing and documenting AI model risks in line with OCC expectations.
FAQ
Does OCC Bulletin 2011-12 apply to generative AI and LLMs?
Yes. The OCC’s 2021 Comptroller’s Handbook explicitly addresses AI and states that risk management should be commensurate with the level of risk, regardless of whether the AI tool meets the traditional model definition. If your institution deploys generative AI for any function that influences decisions, risk management controls are expected.
What’s the difference between an MRA and an MRIA?
A Matter Requiring Attention (MRA) identifies a deficiency that needs correction within a reasonable timeframe. A Matter Requiring Immediate Attention (MRIA) signals a more severe deficiency that poses an immediate threat to safety and soundness and requires urgent remediation. AI-related findings can escalate to MRIA status if they involve consumer harm or significant compliance gaps.
Do I need to validate AI models from third-party vendors?
Yes. The OCC expects banks to have effective third-party risk management that includes validation of vendor-supplied models. While vendor-funded validations can be used as a starting point, the bank must assess whether the validation is adequate and supplement it as needed. You can’t outsource responsibility for model risk to a vendor.
Rebecca Leung
Rebecca Leung has 8+ years of risk and compliance experience across first and second line roles at commercial banks, asset managers, and fintechs. Former management consultant advising financial institutions on risk strategy. Founder of RiskTemplates.