Regulatory Compliance

SR 11-7 Is Dead: What OCC Bulletin 2026-13 and Fed SR 26-2 Mean for Your Model Risk Program

TL;DR:

  • On April 17, 2026, the Fed, OCC, and FDIC jointly issued revised model risk management guidance — rescinding SR 11-7, OCC Bulletin 2011-12, the BSA/AML model statement, and the MRM Comptroller’s Handbook booklet
  • The new guidance is principles-based and non-binding: “non-compliance will not result in supervisory criticism”
  • The applicability threshold jumped from $1B to $30B; the definition of “model” was narrowed; generative and agentic AI are explicitly out of scope
  • Model risk teams need to reassess inventory, rewrite policies against principles instead of bullet points, and prepare for a separate AI guidance package

SR 11-7 Just Got Replaced After 15 Years

For fifteen years, “SR 11-7” was shorthand for how every US bank was supposed to manage model risk. Two pages of governance, three pages on validation, prescriptive language about “effective challenge” and outcomes analysis. It became the de facto standard for non-banks too — fintechs, insurers, and asset managers wrote MRM frameworks against it because it was the most concrete document anyone had.

That document is gone.

On April 17, 2026, the Federal Reserve, OCC, and FDIC jointly issued revised model risk management guidance. The Fed’s SR 26-2 supersedes SR 11-7 (April 2011) and SR 21-8 (the 2021 BSA/AML model statement). The OCC’s Bulletin 2026-13 rescinds Bulletin 2011-12, Bulletin 2021-19, Bulletin 1997-24 (“Credit Scoring Models”), and the Model Risk Management booklet of the Comptroller’s Handbook. The FDIC issued parallel guidance the same day.

This isn’t a tweak. It’s a rewrite that changes scope, applicability, enforcement posture, and how regulators think about validation. If your MRM policy still cites SR 11-7 by paragraph, you have work to do.

What Actually Changed

The 2011 framework was prescriptive: detailed expectations for board oversight, mandatory annual policy reviews, structural independence for validation, specific testing techniques. The 2026 framework is principles-based, narrower, and explicitly non-binding.

Here are the substantive shifts:

| Topic | SR 11-7 / OCC 2011-12 | SR 26-2 / OCC 2026-13 |
| --- | --- | --- |
| Asset threshold | Most relevant for banks $1B+ | Most relevant for banks $30B+ |
| Enforceability | Supervisory criticism for non-compliance | "Does not set forth enforceable standards"; non-compliance won't result in supervisory criticism |
| Definition of "model" | Quantitative method; broad coverage | Requires complexity; excludes simple arithmetic, rule-based processes |
| AI/ML | Implicit — covered as "models" | Conventional ML in scope; generative and agentic AI explicitly out, RFI coming |
| BSA/AML models | Separate interagency statement (2021) | No separate guidance; rolled into general framework |
| Validation independence | Organizational independence required | Quality depends on "rigor and effectiveness," not reporting line |
| Vendor models | Brief treatment | Stand-alone section on third-party products |
| Specific testing | Backtesting, parallel outcomes, benchmarking expected | Generic principles; "validation approaches may differ" |

The most consequential line in the new guidance, repeated across all three agencies, is this: non-compliance will not result in supervisory criticism. That language doesn’t appear in SR 11-7. It signals a fundamental shift in how MRM will be examined.

What This Doesn’t Mean

Before the celebration starts, three reality checks:

1. Examiners still examine. Safety-and-soundness exams haven’t changed. If your trading models blow up, your CCAR submission gets rejected, or your CECL allowance is materially wrong because validation missed something, that’s still a finding. The MRM document is non-binding; the underlying expectation that banks manage model risk is not.

2. Smaller banks aren’t off the hook. The $30B threshold is “most relevant” — not exclusive. The guidance specifically calls out that smaller institutions with “significant exposure to model risk because of the prevalence and complexity of their models” should still apply it. If you’re a $5B community bank running an in-house ALM model, a CECL model, and a fair lending HMDA model, you have model risk regardless of where the threshold lands.

3. SOX, fair lending, and consumer compliance regimes still bite. A discriminatory pricing model is still a UDAAP and ECOA problem. A flawed CECL model is still an SEC disclosure problem. The CFPB’s enforcement of fair lending model risk doesn’t depend on SR 11-7 or its replacement. Other regulatory regimes haven’t softened.

What Model Risk Teams Need to Do Now

For practitioners — heads of MRM, model validation, model owners, CROs at $30B+ banks — here’s the work that actually needs to happen.

1. Reassess your model inventory under the narrower definition

The 2026 guidance excludes “simple arithmetic calculations, rule-based processes, and software lacking statistical/economic/financial theories.” That likely means your inventory is too broad.

What to do this week:

  • Pull the current model inventory and tag each entry: keep, scope-out, or evaluate
  • “Scope-out” candidates: deterministic Excel calculators, lookup tables, hard-coded business rules, simple aging buckets
  • “Evaluate” candidates: anything where complexity is debatable — econometric overlays on simple calcs, rules engines with thresholds derived from data
  • Don’t just delete. Document the rationale per item. Examiners will still ask why something fell off the inventory, and “the new guidance let me drop it” is not a defensible answer if the item was driving real decisions.
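As a rough illustration, the triage above can be tracked as structured data so no scope-out leaves the inventory without a written rationale. This is a sketch with hypothetical field names and scope-out heuristics, not anything prescribed by the guidance:

```python
from dataclasses import dataclass

# Illustrative scope-out heuristics -- not from the guidance.
SCOPE_OUT_HINTS = ("lookup table", "deterministic", "hard-coded rule", "aging bucket")

@dataclass
class InventoryEntry:
    name: str
    description: str
    decision: str = "evaluate"   # keep | scope-out | evaluate
    rationale: str = ""          # required for every scope-out

def triage(entries):
    """Tag obvious scope-out candidates and return the names of any
    scope-outs that lack a written rationale (the real exam exposure)."""
    for e in entries:
        if any(hint in e.description.lower() for hint in SCOPE_OUT_HINTS):
            e.decision = "scope-out"
    return [e.name for e in entries
            if e.decision == "scope-out" and not e.rationale.strip()]
```

The point is not the script but the invariant it enforces: nothing comes off the inventory without a rationale on file.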

2. Rewrite your MRM policy against principles, not paragraphs

If your policy was structured as “Section 4.1 satisfies SR 11-7 paragraph X,” that mapping is now stale. The new guidance is shorter and less specific. A policy keyed to the old paragraphs reads as a compliance artifact rather than a working framework.

Restructure around the four pillars the agencies still recognize:

  • Model development and use — what the model does, who uses outputs, what decisions depend on it
  • Validation and monitoring — how you confirm the model works and keeps working
  • Governance and controls — roles, ownership, escalation, change management
  • Third-party and vendor models — how you handle products you can’t fully validate

3. Re-examine validation independence — but don’t break what works

The new guidance says validation quality is about rigor, not reporting line. That’s true. It’s also a regulatory invitation to weaken structures that took banks ten years to build.

Don’t take the bait. If your validation function reports up through the CRO independent of model owners, leave it there. The cost of dismantling that structure is real and the benefit is theoretical. The agencies are giving flexibility, not encouraging consolidation.

Where to use the new flexibility: low-risk, low-complexity models where independent validation was always overkill. Allocate validation resources by model materiality — not by checking the same box for every model in inventory.

4. Prepare for the AI guidance package

Generative and agentic AI are out of scope of the 2026 guidance. The agencies signaled an RFI is coming. That doesn’t mean “wait.” It means:

  • Your existing AI governance work continues — the NIST AI RMF is the de facto reference
  • Map your AI inventory now: what you have, what’s in production, what risk it carries
  • Be ready to comment on the RFI when it drops; this is the only chance to shape framework expectations before they harden
  • For BSA/AML transaction monitoring with ML overlays, expect closer scrutiny — the 2021 BSA/AML model statement was rescinded without replacement, and FinCEN doesn’t have its own model guidance

For the deeper history of how the OCC has treated model risk for AI specifically, our walkthrough of OCC Bulletin 2011-12 and AI/ML programs covers the regulator's prior position — the baseline you'll be compared against until the new AI guidance arrives.

5. Handle BSA/AML models without a safety net

This one is going to hurt some shops. OCC Bulletin 2021-19 was the only piece of guidance that gave specific treatment to BSA/AML transaction monitoring and sanctions screening models. It acknowledged that these models often can’t be backtested in the traditional sense (you don’t have ground-truth labels for which alerts were “right”). It allowed for tuning rather than full revalidation. It was practical.

It’s gone.

BSA/AML models now fall under the general MRM framework. The general framework allows tailoring, but the specific accommodations for transaction monitoring tuning, suppression rules, and segmentation analysis are no longer codified. The risk: an examiner who came up under the 2021 statement is now without a reference document, and may default to applying the general principles more strictly.

What to do:

  • Rewrite the BSA/AML model section of your policy without referencing the rescinded 2021 statement
  • Document tuning methodology, alert disposition tracking, and segmentation rationale as part of ongoing monitoring (the same things 2021-19 called out — just labeled differently)
  • Keep your MRM and BSA functions in lockstep; the model owner is BSA, the validator is MRM, and that division of labor still works
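As an illustration of what alert disposition tracking can look like in practice, here is a minimal sketch with hypothetical field names and outcome labels. It computes the share of alerts per monitoring rule that proved productive, which is the kind of tuning evidence ongoing monitoring should capture:

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class AlertDisposition:
    alert_id: str
    rule: str          # monitoring rule or scenario that fired
    segment: str       # customer segment the threshold applies to
    outcome: str       # "sar_filed" | "escalated" | "closed_no_action"

def productivity_by_rule(dispositions):
    """Share of alerts per rule that led to escalation or a SAR.
    Low-productivity rules are the tuning candidates to document."""
    fired = Counter(d.rule for d in dispositions)
    productive = Counter(d.rule for d in dispositions
                         if d.outcome in ("sar_filed", "escalated"))
    return {rule: productive[rule] / n for rule, n in fired.items()}
```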

For broader work on AI-driven monitoring, our AI model inventory and exam-readiness guide walks through inventory mechanics that translate directly to BSA/AML ML overlays.

5 Things to Check Monday Morning

If you do nothing else this week, do these five.

  1. Pull your MRM policy and search for “11-7.” Every reference is now wrong. Flag them for rewrite.
  2. Email your model owners and let them know the inventory definition is changing. Give them two weeks to flag candidates for scope-out review with rationale.
  3. Brief your audit committee or risk committee. “Two regulatory documents changed; we’re updating the framework” is the right level for the board. Don’t bury this in the next quarterly MRM report.
  4. Talk to your validation function. Confirm you’re not making structural changes based on the new flexibility — yet. Decide who owns the policy rewrite and what the timeline looks like.
  5. Document your scope-out decisions. Every model that comes off the inventory needs a written rationale. This is the single biggest exam exposure from the rewrite.
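Check 1 lends itself to a quick script. A minimal sketch, assuming your policy is available as plain text and using an illustrative (not exhaustive) list of rescinded citations:

```python
import re

# Citations that are stale after the April 2026 rescissions (illustrative list).
STALE_REFERENCES = [
    r"SR\s*11-7",
    r"SR\s*21-8",
    r"(Bulletin\s*)?2011-12",
    r"(Bulletin\s*)?2021-19",
]

def find_stale_citations(text):
    """Return (line_number, line) pairs that cite rescinded guidance."""
    hits = []
    for i, line in enumerate(text.splitlines(), start=1):
        if any(re.search(p, line, re.IGNORECASE) for p in STALE_REFERENCES):
            hits.append((i, line.strip()))
    return hits
```

Running this over the policy during the rewrite makes check 1 repeatable rather than a one-time search.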

The Bigger Picture

The 2026 guidance is part of a broader regulatory pattern: agencies pulling back from prescriptive supervision and toward principles-based frameworks. SR 11-7 was unusually specific by today’s standards. Replacing it with something shorter and less binding is consistent with how the same agencies have handled BSA/AML modernization and third-party risk management over the past two years.

Practically, this gives sophisticated banks more room to tailor frameworks to actual risk. It also gives less sophisticated banks more room to underinvest. The institutions that will come out ahead are the ones that read the principles, build a defensible framework, and document the reasoning — not the ones that treat “non-binding” as “optional.”

If your MRM program needs to be rebuilt against the new framework, start with the AI side. That’s where the regulatory uncertainty is highest, where the audit committee questions are coming from, and where a documented framework will pay dividends regardless of how the upcoming RFI resolves.

The AI Risk Assessment Template & Guide gives you the inventory structure, control mappings, and validation criteria to anchor an AI-specific MRM extension that aligns with the new principles-based posture and the NIST AI RMF — without waiting for the next bulletin.


Frequently Asked Questions

Did SR 11-7 get rescinded?
Yes. On April 17, 2026, the Federal Reserve, OCC, and FDIC jointly issued revised model risk management guidance. The Fed superseded SR 11-7 and SR 21-8 with SR 26-2. The OCC rescinded Bulletin 2011-12, Bulletin 2021-19 (BSA/AML model statement), Bulletin 1997-24, and the Model Risk Management booklet of the Comptroller's Handbook through Bulletin 2026-13.
Does the new guidance cover AI and machine learning models?
Conventional ML models are in scope, but generative AI and agentic AI models are explicitly excluded. The agencies said those technologies are "novel and rapidly evolving" and signaled future guidance through a request for information.
What is the asset threshold under the new guidance?
The revised guidance is most relevant to banking organizations with over $30 billion in total assets, up from the prior $1 billion threshold under SR 11-7. Smaller banks with significant model exposure may still find it relevant.
Is the new guidance enforceable?
No. The OCC and Fed both stated the guidance does not set forth enforceable standards or prescriptive requirements, and non-compliance will not result in supervisory criticism. Examiners will still evaluate model risk through safety-and-soundness exams, but the specific MRM document is non-binding.
What happened to BSA/AML model risk guidance?
OCC Bulletin 2021-19, which provided BSA/AML-specific model risk guidance, was rescinded without a replacement. BSA/AML transaction monitoring and sanctions screening models now fall under the general MRM framework, with no carve-outs.
Do I still need independent model validation?
The new guidance de-emphasizes organizational independence as a structural requirement. It says validation quality depends on "rigor and effectiveness" rather than reporting line. In practice, most banks will keep an independent validation function — but the agencies are no longer prescribing the structure.
Rebecca Leung

Rebecca Leung has 8+ years of risk and compliance experience across first and second line roles at commercial banks, asset managers, and fintechs. Former management consultant advising financial institutions on risk strategy. Founder of RiskTemplates.
