SR 11-7 Is Dead: What OCC Bulletin 2026-13 and Fed SR 26-2 Mean for Your Model Risk Program
TL;DR:
- On April 17, 2026, the Fed, OCC, and FDIC jointly issued revised model risk management guidance — rescinding SR 11-7, OCC Bulletin 2011-12, the BSA/AML model statement, and the MRM Comptroller’s Handbook booklet
- The new guidance is principles-based and non-binding: “non-compliance will not result in supervisory criticism”
- The applicability threshold jumped from $1B to $30B; the definition of “model” was narrowed; generative and agentic AI are explicitly out of scope
- Model risk teams need to reassess inventory, rewrite policies against principles instead of bullet points, and prepare for a separate AI guidance package
SR 11-7 Just Got Replaced After 15 Years
For fifteen years, “SR 11-7” was shorthand for how every US bank was supposed to manage model risk. Two pages of governance, three pages on validation, prescriptive language about “effective challenge” and outcomes analysis. It became the de facto standard for non-banks too — fintechs, insurers, and asset managers wrote MRM frameworks against it because it was the most concrete document anyone had.
That document is gone.
On April 17, 2026, the Federal Reserve, OCC, and FDIC jointly issued revised model risk management guidance. The Fed’s SR 26-2 supersedes SR 11-7 (April 2011) and SR 21-8 (the 2021 BSA/AML model statement). The OCC’s Bulletin 2026-13 rescinds Bulletin 2011-12, Bulletin 2021-19, Bulletin 1997-24 (“Credit Scoring Models”), and the Model Risk Management booklet of the Comptroller’s Handbook. The FDIC issued parallel guidance the same day.
This isn’t a tweak. It’s a rewrite that changes scope, applicability, enforcement posture, and how regulators think about validation. If your MRM policy still cites SR 11-7 by paragraph, you have work to do.
What Actually Changed
The 2011 framework was prescriptive: detailed expectations for board oversight, mandatory annual policy reviews, structural independence for validation, specific testing techniques. The 2026 framework is principles-based, narrower, and explicitly non-binding.
Here are the substantive shifts:
| Topic | SR 11-7 / OCC 2011-12 | SR 26-2 / OCC 2026-13 |
|---|---|---|
| Asset threshold | Most relevant for banks $1B+ | Most relevant for banks $30B+ |
| Enforceability | Supervisory criticism for non-compliance | “Does not set forth enforceable standards”; non-compliance won’t result in supervisory criticism |
| Definition of “model” | Quantitative method; broad coverage | Requires complexity; excludes simple arithmetic, rule-based processes |
| AI/ML | Implicit — covered as “models” | Conventional ML in scope; generative and agentic AI explicitly out, RFI coming |
| BSA/AML models | Separate interagency statement (2021) | No separate guidance; rolled into general framework |
| Validation independence | Organizational independence required | Quality depends on “rigor and effectiveness,” not reporting line |
| Vendor models | Brief treatment | Stand-alone section on third-party products |
| Specific testing | Backtesting, parallel outcomes, benchmarking expected | Generic principles; “validation approaches may differ” |
The most consequential line in the new guidance, repeated across all three agencies, is this: non-compliance will not result in supervisory criticism. That language doesn’t appear in SR 11-7. It signals a fundamental shift in how MRM will be examined.
What This Doesn’t Mean
Before the celebration starts, three reality checks:
1. Examiners still examine. Safety-and-soundness exams haven’t changed. If your trading models blow up, your CCAR submission gets rejected, or your CECL allowance is materially wrong because validation missed something, that’s still a finding. The MRM document is non-binding; the underlying expectation that banks manage model risk is not.
2. Smaller banks aren’t off the hook. The $30B threshold is “most relevant” — not exclusive. The guidance specifically calls out that smaller institutions with “significant exposure to model risk because of the prevalence and complexity of their models” should still apply it. If you’re a $5B community bank running an in-house ALM model, a CECL model, and a fair lending HMDA model, you have model risk regardless of where the threshold lands.
3. SOX, fair lending, and consumer compliance regimes still bite. A discriminatory pricing model is still a UDAAP and ECOA problem. A flawed CECL model is still an SEC disclosure problem. The CFPB’s enforcement of fair lending model risk doesn’t depend on SR 11-7 or its replacement. Other regulatory regimes haven’t softened.
What Model Risk Teams Need to Do Now
For practitioners — heads of MRM, model validation, model owners, CROs at $30B+ banks — here’s the work that actually needs to happen.
1. Reassess your model inventory under the narrower definition
The 2026 guidance excludes “simple arithmetic calculations, rule-based processes, and software lacking statistical/economic/financial theories.” That likely means your inventory is too broad.
What to do this week:
- Pull the current model inventory and tag each entry: keep, scope-out, or evaluate
- “Scope-out” candidates: deterministic Excel calculators, lookup tables, hard-coded business rules, simple aging buckets
- “Evaluate” candidates: anything where complexity is debatable — econometric overlays on simple calcs, rules engines with thresholds derived from data
- Don’t just delete. Document the rationale per item. Examiners will still ask why something fell off the inventory, and “the new guidance let me drop it” is not a defensible answer if the item was driving real decisions.
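The triage pass above can be sketched as a simple script. This is a hypothetical illustration, not a prescribed schema — the field names (`model_id`, `description`), keyword lists, and rationale text are all assumptions you'd replace with your own inventory taxonomy:

```python
# Hypothetical sketch: triaging a model inventory under the narrower 2026
# definition. Keyword lists and field names are illustrative assumptions.

SCOPE_OUT_KEYWORDS = {"lookup table", "aging bucket", "deterministic", "hard-coded rule"}
EVALUATE_KEYWORDS = {"overlay", "threshold", "rules engine"}

def triage(entry: dict) -> dict:
    """Tag an inventory entry keep / scope-out / evaluate, with a written rationale."""
    desc = entry["description"].lower()
    if any(k in desc for k in SCOPE_OUT_KEYWORDS):
        tag, rationale = "scope-out", "No statistical/economic/financial theory; simple calculation."
    elif any(k in desc for k in EVALUATE_KEYWORDS):
        tag, rationale = "evaluate", "Complexity is debatable; route to MRM for a scoping decision."
    else:
        tag, rationale = "keep", "Meets the revised definition of a model."
    return {**entry, "tag": tag, "rationale": rationale}

inventory = [
    {"model_id": "M-014", "description": "Deterministic Excel aging bucket calculator"},
    {"model_id": "M-221", "description": "Econometric overlay on prepayment rules engine"},
    {"model_id": "M-003", "description": "CECL lifetime loss estimation model"},
]
triaged = [triage(e) for e in inventory]
```

The point of carrying a `rationale` field on every entry is the documentation requirement: each scope-out decision leaves a written record the examiner can read.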
2. Rewrite your MRM policy against principles, not paragraphs
If your policy was structured as “Section 4.1 satisfies SR 11-7 paragraph X,” that mapping is now stale. The new guidance is shorter and less specific. A policy keyed to the old paragraphs reads as a compliance artifact rather than a working framework.
Restructure around the four pillars the agencies still recognize:
- Model development and use — what the model does, who uses outputs, what decisions depend on it
- Validation and monitoring — how you confirm the model works and keeps working
- Governance and controls — roles, ownership, escalation, change management
- Third-party and vendor models — how you handle products you can’t fully validate
3. Re-examine validation independence — but don’t break what works
The new guidance says validation quality is about rigor, not reporting line. That’s true. It’s also a regulatory invitation to weaken structures that took banks ten years to build.
Don’t take the bait. If your validation function reports up through the CRO independent of model owners, leave it there. The cost of dismantling that structure is real and the benefit is theoretical. The agencies are giving flexibility, not encouraging consolidation.
Where to use the new flexibility: low-risk, low-complexity models where independent validation was always overkill. Allocate validation resources by model materiality — not by checking the same box for every model in inventory.
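A minimal sketch of materiality-based allocation might look like the following. The tier names and the mapping itself are illustrative assumptions — nothing in the guidance prescribes these cutoffs:

```python
# Hypothetical sketch: allocating validation depth by materiality and
# complexity instead of one checklist for every model. Tiers are assumptions.

def validation_scope(materiality: str, complexity: str) -> str:
    """Map a model's materiality/complexity to a validation tier."""
    if materiality == "high" or complexity == "high":
        return "full independent validation"
    if materiality == "medium":
        return "targeted independent review"
    return "owner attestation plus periodic monitoring"
```

The design point: low-materiality, low-complexity models get a lighter touch, and the independent function's capacity concentrates where model failure actually costs something.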
4. Prepare for the AI guidance package
Generative and agentic AI are out of scope of the 2026 guidance. The agencies signaled an RFI is coming. That doesn’t mean “wait.” It means:
- Your existing AI governance work continues — the NIST AI RMF is the de facto reference
- Map your AI inventory now: what you have, what’s in production, what risk it carries
- Be ready to comment on the RFI when it drops; this is the only chance to shape framework expectations before they harden
- For BSA/AML transaction monitoring with ML overlays, expect closer scrutiny — the 2021 BSA/AML model statement was rescinded without replacement, and FinCEN doesn’t have its own model guidance
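The inventory-mapping step above can start as something this simple. The record fields follow no official schema — they are assumptions to adapt to your MRM taxonomy and whatever NIST AI RMF categories you use:

```python
# Hypothetical sketch of a minimal AI inventory record: what you have,
# whether it's in production, what risk it carries. Field names are assumed.

from dataclasses import dataclass, field

@dataclass
class AIInventoryItem:
    name: str
    ai_type: str                 # e.g. "conventional ML", "generative", "agentic"
    in_production: bool
    risk_tier: str               # e.g. "high" / "medium" / "low"
    decisions_affected: list = field(default_factory=list)

items = [
    AIInventoryItem("txn-monitoring-ml-overlay", "conventional ML", True, "high",
                    ["alert prioritization"]),
    AIInventoryItem("policy-drafting-assistant", "generative", False, "medium"),
]

# Generative/agentic items sit outside the 2026 MRM guidance but should
# still be tracked, both for governance and for the coming RFI.
out_of_scope = [i.name for i in items if i.ai_type in ("generative", "agentic")]
```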
For the deeper history of how the OCC has treated model risk for AI specifically, our walkthrough of OCC Bulletin 2011-12 and AI/ML programs covers the regulator’s prior position, which is the baseline you’ll be compared against until the new AI guidance arrives.
5. Handle BSA/AML models without a safety net
This one is going to hurt some shops. OCC Bulletin 2021-19 was the only piece of guidance that gave specific treatment to BSA/AML transaction monitoring and sanctions screening models. It acknowledged that these models often can’t be backtested in the traditional sense (you don’t have ground-truth labels for which alerts were “right”). It allowed for tuning rather than full revalidation. It was practical.
It’s gone.
BSA/AML models now fall under the general MRM framework. The general framework allows tailoring, but the specific accommodations for transaction monitoring tuning, suppression rules, and segmentation analysis are no longer codified. The risk: an examiner who came up under the 2021 statement is now without a reference document, and may default to applying the general principles more strictly.
What to do:
- Rewrite the BSA/AML model section of your policy without referencing the rescinded 2021 statement
- Document tuning methodology, alert disposition tracking, and segmentation rationale as part of ongoing monitoring (the same things 2021-19 called out — just labeled differently)
- Keep your MRM and BSA functions in lockstep; the model owner is BSA, the validator is MRM, and that division of labor still works
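Alert disposition tracking, for instance, can be as plain as a per-segment tally kept in the tuning file. This is an illustrative sketch — segment names and disposition labels are assumptions, and your case management system likely already produces this data:

```python
# Hypothetical sketch: per-segment alert disposition counts as ongoing
# monitoring evidence. Segments and disposition labels are illustrative.

from collections import Counter

alerts = [
    {"segment": "retail-wires", "disposition": "sar_filed"},
    {"segment": "retail-wires", "disposition": "closed_no_action"},
    {"segment": "retail-wires", "disposition": "closed_no_action"},
    {"segment": "correspondent", "disposition": "sar_filed"},
]

def disposition_rates(alerts: list) -> dict:
    """Count alert outcomes by segment, for the tuning documentation."""
    out: dict[str, Counter] = {}
    for a in alerts:
        out.setdefault(a["segment"], Counter())[a["disposition"]] += 1
    return out

rates = disposition_rates(alerts)
```

The same tally, run before and after a tuning change, is the evidence that a threshold adjustment did what the rationale said it would.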
For broader work on AI-driven monitoring, our AI model inventory and exam-readiness guide walks through inventory mechanics that translate directly to BSA/AML ML overlays.
5 Things to Check Monday Morning
If you do nothing else this week, do these five.
- Pull your MRM policy and search for “11-7.” Every reference is now wrong. Flag them for rewrite.
- Email your model owners and let them know the inventory definition is changing. Give them two weeks to flag candidates for scope-out review with rationale.
- Brief your audit committee or risk committee. “Two regulatory documents changed; we’re updating the framework” is the right level for the board. Don’t bury this in the next quarterly MRM report.
- Talk to your validation function. Confirm you’re not making structural changes based on the new flexibility — yet. Decide who owns the policy rewrite and what the timeline looks like.
- Document your scope-out decisions. Every model that comes off the inventory needs a written rationale. This is the single biggest exam exposure from the rewrite.
The Bigger Picture
The 2026 guidance is part of a broader regulatory pattern: agencies pulling back from prescriptive supervision and toward principles-based frameworks. SR 11-7 was unusually specific by today’s standards. Replacing it with something shorter and less binding is consistent with how the same agencies have handled BSA/AML modernization and third-party risk management over the past two years.
Practically, this gives sophisticated banks more room to tailor frameworks to actual risk. It also gives less sophisticated banks more room to underinvest. The institutions that will come out ahead are the ones that read the principles, build a defensible framework, and document the reasoning — not the ones that treat “non-binding” as “optional.”
If your MRM program needs to be rebuilt against the new framework, start with the AI side. That’s where the regulatory uncertainty is highest, where the audit committee questions are coming from, and where a documented framework will pay dividends regardless of how the upcoming RFI resolves.
The AI Risk Assessment Template & Guide gives you the inventory structure, control mappings, and validation criteria to anchor an AI-specific MRM extension that aligns with the new principles-based posture and the NIST AI RMF — without waiting for the next bulletin.
Sources:
- Federal Reserve, SR 26-2: Revised Guidance on Model Risk Management, April 17, 2026
- OCC, Bulletin 2026-13: Model Risk Management — Revised Guidance, April 17, 2026
- OCC, News Release NR-OCC-2026-29: OCC Issues Updated Model Risk Management Guidance, April 17, 2026
- FDIC, Agencies Issue Revised Model Risk Guidance, April 17, 2026
- Sullivan & Cromwell, Federal Banking Agencies Issue Revised Guidance on Model Risk Management
- Orrick, Agencies Overhaul Model Risk Management Guidance for Banks: Here’s What Changed
Related Template
AI Risk Assessment Template & Guide
Comprehensive AI model governance and risk assessment templates for financial services teams.
Rebecca Leung
Rebecca Leung has 8+ years of risk and compliance experience across first and second line roles at commercial banks, asset managers, and fintechs. Former management consultant advising financial institutions on risk strategy. Founder of RiskTemplates.