OCC Bulletin 2011-12 and SR 11-7 Are Officially Rescinded — What Banks Need to Know
TL;DR:
- On April 17, 2026, the OCC, Federal Reserve, and FDIC jointly rescinded the 15-year-old model risk management framework — including OCC Bulletin 2011-12, SR 11-7, SR 21-8, OCC 2021-19, and OCC 1997-24.
- The replacement (OCC Bulletin 2026-13, SR 26-2) is principles-based, tailored to institution size and complexity, and carries no supervisory criticism for non-compliance.
- Generative AI and agentic AI are explicitly excluded — separate guidance is coming “in the near future.”
- Banks should review current MRM programs now: the framework that shaped 15 years of exams just changed overnight.
The 15-Year Playbook Is Gone
On April 4, 2011, the OCC and Federal Reserve published SR 11-7 / OCC Bulletin 2011-12 — the foundational model risk management guidance that defined how banks governed, validated, and documented their models for the next decade and a half. It became the cornerstone of virtually every MRM program at a regulated financial institution. Examiners cited it. Banks built entire governance frameworks around it. Consultants made careers out of explaining it.
On April 17, 2026, it was officially rescinded.
The OCC, Federal Reserve, and FDIC issued joint replacement guidance — OCC Bulletin 2026-13 and SR 26-2 — moving from a prescriptive framework to risk-based principles tailored to each institution’s size, complexity, and model footprint. At the same time, the agencies made a pointed exclusion: generative AI and agentic AI models are out of scope for now.
This isn’t a cleanup edit. It’s a full replacement. If your MRM program is built around the 2011 framework, you have work to do.
What Was Rescinded
The April 17 joint action wiped out nearly three decades of cumulative model risk guidance:
| Guidance | Agency | Issued | What It Covered |
|---|---|---|---|
| Bulletin 2011-12 | OCC | April 2011 | Core model risk sound practices |
| SR 11-7 | Federal Reserve | April 2011 | Core model risk sound practices (parallel to OCC 2011-12) |
| SR 21-8 | Federal Reserve | April 2021 | BSA/AML model risk management |
| OCC Bulletin 2021-19 | OCC | April 2021 | BSA/AML model risk (parallel to SR 21-8) |
| OCC Bulletin 1997-24 | OCC | 1997 | Credit scoring models |
| Comptroller’s Handbook: Model Risk Management | OCC | 2021 | Full examination booklet |
That’s everything. If your program document cites any of the above as the operative regulatory framework, that citation is now outdated.
What the New Framework Says
The new interagency guidance, documented in OCC Bulletin 2026-13 and Federal Reserve SR 26-2, organizes model risk management around four core areas:
1. Model Development and Testing
Sound model development includes defining clear objectives, selecting appropriate methodologies, documenting assumptions, and testing performance before deployment. The guidance emphasizes that development practices should be proportionate to the model’s risk — a credit scoring model for a $500M community bank shouldn’t be governed identically to one at a systemically important financial institution.
2. Model Validation and Monitoring
Independent validation remains a cornerstone. The guidance preserves the principle that validation should be performed by staff with sufficient independence from developers, with documentation that supports examiner review. Ongoing monitoring — tracking model performance and drift, and checking that the conditions under which a model was built still hold — is explicitly addressed.
3. Governance and Controls
Clear model inventories, defined ownership, escalation procedures, and board-level oversight remain expectations. The guidance doesn’t eliminate these requirements — it shifts the framing from “you must have X” to “your governance should be commensurate with your risk profile.”
4. Third-Party and Vendor Model Considerations
This is a meaningful addition relative to the original 2011 framework. The new guidance explicitly addresses vendor and third-party model products — an acknowledgment of the reality that most institutions today don’t build their own credit, fraud, or risk models from scratch. Banks are expected to apply appropriate oversight to vendor models, including understanding the model’s design and limitations, even when the underlying methodology isn’t fully transparent.
The Headline Change: From Prescriptive to Tailored
The clearest practical shift in the new guidance is this line: practices should be “risk-based, tailored, and commensurate with a banking organization’s size, complexity, and extent of model use.”
That’s a departure from how 2011-12 / SR 11-7 were applied in practice. The original guidance set baseline expectations that examiners applied across institutions largely irrespective of size — community banks got examined against the same conceptual framework as money-center banks, even if the expectations were calibrated informally. The new guidance puts that calibration explicitly in the text.
For large institutions (formally defined as >$30 billion in assets), the new framework is the operative standard. For community banks, the guidance still applies when model risk is “significant” relative to the institution’s operations — but the level of rigor is explicitly tied to actual risk exposure, not a uniform baseline.
This doesn’t mean community banks can skip model governance. It means the conversation with your examiner is now explicitly about proportionality — and you should document why your program is appropriately scaled.
The GenAI Carve-Out: What It Means (and What’s Coming)
The most striking thing in the new guidance isn’t what’s in it — it’s what’s explicitly excluded.
Generative AI and agentic AI models are out of scope. The agencies described these technologies as “novel and rapidly evolving” and stated they will issue separate guidance “in the near future.”
That’s a deliberate choice. The agencies are signaling that trying to fit LLMs, AI agents, and generative model systems into the 2011 MRM framework doesn’t work — an argument practitioners on the ground have been making for two years.
What this means in practice:
- Your existing model inventory should flag generative AI and agentic AI as governed under a separate (pending) framework. Don’t try to retrofit them into the new guidance and call it done.
- Validation practices for these systems need to stand on their own until the separate guidance drops. The NIST AI RMF — particularly its GOVERN function — and your existing internal frameworks remain the primary reference points.
- Examiner conversations about these systems will get more consistent once the agency guidance lands. For now, document your approach thoroughly and be prepared to explain your methodology.
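The first point above — flagging GenAI and agentic AI in the model inventory — can be sketched in code. This is a hypothetical schema: the field names, type labels, and framework strings are illustrative assumptions, not anything the guidance prescribes.

```python
from dataclasses import dataclass

# Hypothetical inventory record -- field names and framework strings are
# illustrative assumptions, not prescribed by OCC 2026-13 / SR 26-2.
@dataclass
class ModelRecord:
    model_id: str
    name: str
    model_type: str  # e.g. "credit_scoring", "generative_ai", "agentic_ai"
    governing_framework: str = "OCC 2026-13 / SR 26-2"

    def __post_init__(self):
        # GenAI and agentic AI are explicitly out of scope for the new
        # guidance, so flag them as governed under the pending separate
        # framework rather than retrofitting them into 2026-13.
        if self.model_type in ("generative_ai", "agentic_ai"):
            self.governing_framework = (
                "PENDING separate AI guidance (interim: NIST AI RMF / internal)"
            )

inventory = [
    ModelRecord("M-001", "Retail PD model", "credit_scoring"),
    ModelRecord("M-002", "Complaints summarizer", "generative_ai"),
]

out_of_scope = [m for m in inventory if m.governing_framework.startswith("PENDING")]
print([m.model_id for m in out_of_scope])  # ['M-002']
```

However your real inventory is structured, the design point is the same: scope status should be a first-class field that is set automatically from model type, not a footnote someone remembers to add.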
Given the current regulatory pace, expect the AI-specific model risk guidance within 6-18 months. Until then, treat generative AI governance as a gap to actively manage, not a box you can check.
The “No Supervisory Criticism” Language — Read It Carefully
The joint guidance includes language that may cause some compliance officers to relax more than they should: “non-compliance with this guidance will not result in supervisory criticism.”
This is standard language for supervisory guidance (as opposed to rules), and it means the guidance itself doesn’t create an enforceable obligation that can trigger a supervisory action. What it does not mean:
- That examiners will ignore material model risk failures
- That MRA/MRIA findings related to model governance are off the table
- That banks with weak model risk programs are now protected
Examiners still have full authority to cite model risk deficiencies under safety and soundness standards — they just can’t cite “you violated the guidance” as the standalone basis for a finding. In practice, if your model governance is poor enough to generate a finding, it was probably poor enough to generate a finding under 2011-12 as well. The shift matters most at the margin, where examiners had used the prescriptive text of 2011-12 to push for specific program elements that may not have been strictly necessary for a given institution.
Think of it this way: you still need a solid MRM program. You just have more defensible space to argue that your program is appropriately calibrated for your institution.
What to Do Now
The transition from the 2011 framework to OCC 2026-13 / SR 26-2 doesn’t require rebuilding from scratch — but it does require reviewing what you have. Here’s a practical checklist:
Immediate (within 30 days):
- Update all MRM policy documents that cite OCC 2011-12 or SR 11-7 as the operative framework — cite OCC Bulletin 2026-13 and SR 26-2 instead
- Review model inventory to flag generative AI and agentic AI models that fall outside current guidance scope — document what framework you’re applying to them
- Confirm your BSA/AML model oversight documentation has been updated to reflect the rescission of SR 21-8 / OCC 2021-19
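The first item in that checklist — finding and repointing stale citations — is easy to semi-automate. A minimal sketch, assuming your policy documents are available as plain text; the regex list simply enumerates the rescinded issuances from the table above:

```python
import re

# Citations rescinded by the April 17 joint action (see the table above).
RESCINDED = [
    r"OCC\s+(?:Bulletin\s+)?2011-12",
    r"SR\s*11-7",
    r"SR\s*21-8",
    r"OCC\s+(?:Bulletin\s+)?2021-19",
    r"OCC\s+(?:Bulletin\s+)?1997-24",
]
PATTERN = re.compile("|".join(RESCINDED), re.IGNORECASE)

def flag_stale_citations(text: str) -> list[str]:
    """Return each rescinded citation found, so the document can be
    repointed to OCC Bulletin 2026-13 / SR 26-2."""
    return [m.group(0) for m in PATTERN.finditer(text)]

policy = "Model validation follows OCC Bulletin 2011-12 and SR 11-7."
print(flag_stale_citations(policy))  # ['OCC Bulletin 2011-12', 'SR 11-7']
```

Run against your policy repository, this gives you the remediation worklist in minutes rather than a manual page-by-page review.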
Near-term (60-90 days):
- Review third-party/vendor model oversight procedures against the new emphasis on vendor model governance — do you have sufficient documentation of third-party model design, limitations, and vendor validation practices?
- Revisit proportionality arguments in your MRM program: can you demonstrate that your governance level is “commensurate with your risk profile”? Document that rationale explicitly.
- If you’re a community bank with limited model exposure, assess whether a simplified program narrative tied to the new guidance improves your exam posture
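One way to make the proportionality rationale concrete is to write the tiering rule down as an explicit function. The thresholds and tier definitions below are purely illustrative assumptions, not anything from the guidance — the point is that a written rule is exactly the documented rationale the “commensurate with your risk profile” conversation calls for:

```python
# Hypothetical tiering rule -- thresholds and tier names are illustrative
# assumptions, not drawn from OCC 2026-13 / SR 26-2.
def governance_tier(materiality_usd: float, complexity: str,
                    customer_impact: bool) -> str:
    """Assign a governance tier 'commensurate with risk': larger exposure,
    opacity, or direct customer impact pushes a model into a heavier tier."""
    if materiality_usd >= 1e9 or (complexity == "opaque" and customer_impact):
        return "Tier 1: full independent validation + annual review"
    if materiality_usd >= 5e7 or customer_impact:
        return "Tier 2: targeted validation + ongoing monitoring"
    return "Tier 3: inventory entry + periodic review"

# A low-materiality, transparent internal model lands in the lightest tier.
print(governance_tier(2e6, "transparent", False))  # Tier 3: inventory entry + periodic review
```

Whatever thresholds you actually choose, encoding them once means every model in the inventory gets the same defensible answer, and the rule itself becomes an exhibit in your exam narrative.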
For the next exam cycle:
- Prepare an MRM program summary that references the new framework and articulates your tailoring rationale
- Brief model risk leadership and the board risk committee on the transition — the governance change is board-level news
- Monitor for the incoming AI-specific model risk guidance; build a placeholder in your AI governance framework now
The Bigger Picture
The rescission of 2011-12 / SR 11-7 reflects something broader: regulators recognize that the model risk landscape has fundamentally changed. In 2011, “models” meant FICO score variants, DFAST stress testing engines, and interest rate risk calculators. The 2011 framework was never designed to handle the scale, opacity, and velocity of modern model deployments — let alone generative AI.
The move to principles-based guidance is a tacit acknowledgment of that gap. The explicit exclusion of GenAI and agentic AI is an acknowledgment that those systems need a different conversation entirely.
For practitioners who’ve been applying SR 11-7 to LLMs and finding it a poor fit, the new guidance validates that instinct. The 2011 framework didn’t map cleanly onto these systems. The new framework explicitly steps back and says: we’re not going to pretend otherwise.
What comes next for AI model risk governance will matter a lot. The agencies’ pledge to issue separate AI guidance “in the near future” is the most consequential unfulfilled promise in bank risk management right now.
If your MRM program is overdue for a refresh — or you’re trying to get an AI risk governance structure in place before that separate guidance drops — the AI Risk Assessment Template gives you a structured framework to document model risk, tier your AI systems by risk level, and build the inventory examiners will want to see.
Related Template
AI Risk Assessment Template & Guide
Comprehensive AI model governance and risk assessment templates for financial services teams.
Rebecca Leung
Rebecca Leung has 8+ years of risk and compliance experience across first and second line roles at commercial banks, asset managers, and fintechs. Former management consultant advising financial institutions on risk strategy. Founder of RiskTemplates.