Regulatory Compliance

OCC, Fed, and FDIC Just Replaced SR 11-7. Here's What Changed in the New Model Risk Management Guidance

May 1, 2026 · Rebecca Leung

TL;DR

  • On April 17, 2026, the OCC, Federal Reserve, and FDIC jointly issued revised model risk management guidance — replacing the 15-year-old SR 11-7 and OCC Bulletin 2011-12 framework that defined how banks have governed models since 2011.
  • The new guidance is principles-based, explicitly non-enforceable on its own terms, and primarily targets banks with more than $30 billion in assets.
  • Generative AI and agentic AI are carved out — the agencies will issue a separate request for information on AI model risk.
  • Banks need to re-tier their model inventories, revisit validation cadences, and decide which low-risk rule-based processes can leave the perimeter entirely.

For 15 years, every conversation about model governance at a U.S. bank traced back to two documents: Federal Reserve SR 11-7 and OCC Bulletin 2011-12. They were technically “guidance,” but every examiner treated them as binding, every consultant built their MRM offering around them, and every regulator-facing slide deck cited them by number. That era ended on April 17, 2026.

The OCC, Federal Reserve, and FDIC released a joint revised model risk management framework — issued by the Fed as SR 26-2 and by the OCC as Bulletin 2026-13 — that rescinds SR 11-7, SR 21-8, OCC Bulletin 2011-12, OCC Bulletin 2021-19, OCC Bulletin 1997-24, and the Model Risk Management booklet of the Comptroller’s Handbook in one sweep. The agencies say the revision “reflects supervisory experience and industry feedback accumulated over the past fifteen years, as well as significant advancements in modeling practices.”

Translation: SR 11-7 was written before deep learning, before LLMs, before vendor models ate half the financial services stack, and before “model” stopped meaning “regression equation in a spreadsheet.” The agencies finally caught up.

But the new framework is not just SR 11-7 with new wallpaper. The structure is fundamentally different — and the parts that aren’t different are deliberately not different, which is its own signal.

What actually changed

The guidance is no longer enforceable on its own

This is the headline shift. The 2026 guidance explicitly states that “non-compliance with this guidance will not result in supervisory criticism.” That language doesn’t exist in SR 11-7. For 15 years, examiners treated SR 11-7 like a rule — citing institutions for missing model documentation sections, for not having a board-approved model risk policy, for failing to validate annually. Sullivan & Cromwell flagged this in their analysis as one of the most consequential changes in the rewrite.

The Fed preserved a backstop: supervisory action can still result from “violations of law or unsafe or unsound practices stemming from insufficient management of model risk.” So this isn’t a green light to dismantle your MRM program. But it is a green light to stop running it as a checklist exercise driven by examiner muscle memory.

Banks under $30 billion get explicit relief

The new scope is targeted: “banking organizations with over $30 billion in total assets regulated by the Federal Reserve.” Smaller institutions with complex or prevalent model use remain in scope, but the agencies are no longer pretending a $5 billion community bank should have the same MRM machinery as JPMorgan.

This matters for two reasons. First, plenty of mid-sized banks have spent the last decade building MRM functions sized for the SR 11-7 cookie cutter — three-lines-of-defense governance committees, formal model lifecycle policies, annual board reporting, dedicated independent validation teams. Some of that infrastructure was always overbuilt for the actual risk. The new guidance gives those institutions explicit air cover to right-size.

Second, vendor-model-heavy fintechs and bank partners can stop assuming the maximalist interpretation of MRM is the only defensible posture. If your bank partner is a $15 billion regional and you’re providing a third-party scoring model, the conversation about validation evidence just changed.

The definition of “model” got narrower

Davis Polk’s visual memo called this out plainly: the new guidance excludes “simple arithmetic calculations” and “deterministic rule-based processes” from the model definition. SR 11-7’s model definition was so broad that institutions ended up with thousands of “models” in inventory — many of them Excel formulas that calculated regulatory ratios.

If your model inventory has 4,000 entries and 2,500 of them are deterministic rule sets, a large share of those 2,500 can probably move out of the MRM perimeter and into a much lighter-weight controls process. That’s a real efficiency win, but it requires a defensible re-classification process. Don’t just delete entries — document the basis for each removal so the next examiner cycle has a paper trail.
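What that paper trail could look like, as a minimal sketch (the schema and field names below are invented for illustration, not anything the guidance prescribes):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DeclassificationRecord:
    """One row in the audit trail for an item leaving the MRM perimeter."""
    model_id: str
    name: str
    basis: str              # e.g. "deterministic rule set" or "simple arithmetic"
    approved_by: str        # second-line sign-off, not the model owner
    approved_on: date
    successor_control: str  # the lighter-weight controls process it moves into

def declassify(inventory: dict, record: DeclassificationRecord, audit_log: list) -> None:
    """Remove the entry from MRM scope without destroying the evidence."""
    audit_log.append(record)              # the record the next exam cycle will ask for
    inventory.pop(record.model_id, None)  # out of the inventory, not out of history
```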

Materiality replaced the universal lifecycle

SR 11-7 prescribed a single lifecycle: development, implementation, use, validation, ongoing monitoring. The 2026 guidance replaces that with a materiality-driven approach — institutions determine model importance based on purpose (regulatory vs. business use) and exposure (portfolio size, business impact, downside scenarios).

For low-materiality models, the standard is now meaningfully lighter: “identifying those models and monitoring” performance. Compare that to the SR 11-7 expectation of full validation, documentation, and ongoing testing for every model regardless of consequence.

The catch: you need a defensible materiality framework before you can use this flexibility. “Low materiality” is a determination, not a default. Put scoring logic and review cadence in writing, and tie it back to specific business impacts.
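One way to make that determination concrete is a simple purpose-plus-exposure rubric. Here is a minimal sketch assuming a three-tier taxonomy; every weight and cut-off is invented for illustration and would need calibrating to your own portfolio:

```python
def materiality_tier(regulatory_use: bool,
                     portfolio_exposure_usd: float,
                     downside_severity: int) -> str:
    """Score purpose + exposure into a tier. All weights and cut-offs here
    are illustrative; calibrate and document your own."""
    score = 3 if regulatory_use else 1          # purpose: regulatory vs. business use
    if portfolio_exposure_usd > 1_000_000_000:  # exposure: portfolio size
        score += 3
    elif portfolio_exposure_usd > 100_000_000:
        score += 2
    else:
        score += 1
    score += downside_severity                  # 1-3, from documented downside scenarios
    if score >= 7:
        return "high"
    return "medium" if score >= 5 else "low"
```

The point is not the arithmetic; it is that the scoring logic exists in writing, produces a tier you can defend, and can be re-run when a model’s purpose or exposure changes.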

Annual validation is gone as a default

The Sullivan & Cromwell read on this is unambiguous: the new guidance “eliminates mandatory annual validation cadences and detailed independence mechanisms.” Validation frequency now scales with materiality, model performance trends, and changes in the model’s use or environment.

For high-materiality models — credit underwriting, ALM, capital, AML transaction monitoring — annual validation will still be the right answer most of the time. For lower-tier models, you can plausibly move to triennial or event-driven validation. Document the trigger criteria. Build them into your model risk policy. Don’t just stop validating.
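A minimal sketch of what tier-based cadence with documented triggers can look like (the specific cadences are assumptions, not numbers from the guidance):

```python
from datetime import date, timedelta

# Illustrative cadences; the 2026 guidance prescribes none of these numbers.
CADENCE = {
    "high": timedelta(days=365),        # annual still makes sense for high-materiality
    "medium": timedelta(days=3 * 365),  # triennial
    "low": None,                        # monitoring only; validation is event-driven
}

def validation_due(tier: str, last_validated: date, trigger_fired: bool) -> bool:
    """Event triggers (performance degradation, material change in use or
    environment) override the calendar for every tier."""
    if trigger_fired:
        return True
    cadence = CADENCE[tier]
    return cadence is not None and date.today() - last_validated >= cadence
```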

Vendor model expectations changed substantially

Under SR 11-7, the standard for third-party models was “developmental evidence from vendors” plus independent validation. That bar broke down in practice — vendors often refused to share developmental evidence, and most banks ended up doing performance testing as a substitute.

The 2026 guidance formalizes the substitute. The new sound practice is “developing an understanding of the model” combined with “ongoing monitoring and outcome analysis.” That’s a much more honest description of what banks actually do with vendor models, and it raises the relative importance of:

  • Pre-acquisition model documentation review (what does the vendor actually share?)
  • Outcome analysis frameworks (do model outputs track expected behavior?)
  • Performance monitoring with thresholds (when does drift trigger escalation? A sketch follows this list.)
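On that last point, here is one common way to operationalize a drift threshold: the population stability index, a standard drift measure in credit model monitoring. The escalation bands below are the conventional practitioner ones, not anything the new guidance mandates.

```python
import math

def psi(expected_pct: list[float], actual_pct: list[float], eps: float = 1e-6) -> float:
    """Population stability index over matching score buckets (each list sums to 1)."""
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected_pct, actual_pct))

def drift_action(value: float) -> str:
    """Conventional PSI bands: below 0.10 stable, 0.10 to 0.25 watch, above 0.25 escalate."""
    if value < 0.10:
        return "stable"
    return "watch" if value < 0.25 else "escalate to model risk"
```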

If you’ve got a vendor model contract up for renewal, the SLA language around performance evidence and explainability is now the operative control — not a generic right-to-validate clause that vendors negotiate around.

AI got carved out — entirely

This is the part of the guidance the AI/ML governance crowd is going to spend the next year arguing about. The agencies explicitly excluded generative AI and agentic AI from the new framework, calling them “novel and rapidly evolving.” For those models, banks are told to “apply their broader risk management and governance practices.”

Traditional machine learning models — the gradient-boosted trees and random forests that have been embedded in credit and AML for a decade — remain in scope. Statistical models, regression, time-series — all in scope.

The agencies committed to issuing a separate request for information on AI model risk management, including generative and agentic AI. That RFI hasn’t dropped yet. Until it does, banks running gen-AI applications are operating in a gray zone where the new MRM guidance doesn’t formally apply, but examiner expectations and “broader risk management” obligations still exist.

If you’ve been treating gen-AI use cases as an extension of your existing MRM program, the new guidance gives you more flexibility on framing — but also more responsibility to articulate what your gen-AI governance actually is, since you can’t just point to SR 11-7 anymore.

What changed vs. what stayed the same

| Topic | SR 11-7 / OCC 2011-12 (2011) | SR 26-2 / OCC 2026-13 (2026) |
| --- | --- | --- |
| Enforcement | Treated as binding by examiners | Explicitly non-enforceable |
| Scope threshold | All banks | Banks > $30B; smaller if model-intensive |
| Model definition | Broad (catches Excel formulas) | Excludes simple arithmetic & deterministic rules |
| Validation cadence | Implicit annual standard | Risk-based, no fixed cadence |
| Vendor models | Developmental evidence + independent validation | Understanding + monitoring + outcome analysis |
| Effective challenge | Detailed independence requirements | “Sufficient independence to maintain objectivity” |
| Board duties | Specific (annual policy approval, formal reporting) | General accountability principle |
| AI / gen AI | Not addressed | Explicitly carved out, RFI forthcoming |
| Materiality framework | Implicit, lifecycle applied uniformly | Explicit (purpose + exposure) |

Control gap analysis: where MRM programs fail under the new framework

The temptation reading the 2026 guidance is to assume “less prescriptive” means “less work.” That’s wrong. Principles-based regimes are harder to operate than rules-based regimes, because every judgment call has to be defensible without a checklist to point to. The places MRM programs are most likely to break under the new framework:

1. Materiality framework that doesn’t actually drive different treatment. If every model in your inventory ends up “high materiality” because nobody wants to be the analyst who downgraded a model that later failed, the framework is theater. Write tier definitions with concrete thresholds, validate the distribution annually (a sanity check like the sketch after this list helps), and make sure low-tier models actually get lighter-weight treatment.

2. Vendor model “understanding” that’s really just a procurement form. The new vendor model standard requires substantive understanding of how the model works, what its limitations are, and what its outputs mean. A 12-question vendor questionnaire signed by procurement isn’t that. Build a vendor model review template that requires technical engagement before contract renewal — not just at onboarding.

3. Gen-AI governance that lives outside MRM but isn’t anywhere else either. If your MRM team is no longer responsible for generative AI use cases under the new guidance, and your AI governance committee is still ramping, you have a coverage gap. Map every gen-AI use case to a named owner — someone whose job description includes “I am accountable for this AI system’s risk.”

4. Documentation that hasn’t been re-anchored to the new guidance. Your model risk policy almost certainly cites SR 11-7 by name. Internal training references the 2011 lifecycle. Validation reports use SR 11-7 terminology. None of that is wrong, but examiners reading your documentation in late 2026 will be looking for evidence that you’ve engaged with the 2026 framework. Do the rewrite.
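On the first failure mode above, a trivial distribution check keeps the framework honest; the 40% ceiling below is an arbitrary illustration, so pick and document your own:

```python
from collections import Counter

def tier_distribution_ok(tiers: list[str], max_high_share: float = 0.40) -> bool:
    """False means the framework is rating 'everything high' and is not
    actually driving differentiated treatment."""
    return Counter(tiers)["high"] / len(tiers) <= max_high_share
```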

What MRM teams should do this quarter

For institutions over the $30 billion threshold:

  • Re-anchor your model risk policy to the 2026 guidance. Cite SR 26-2 or OCC Bulletin 2026-13 directly. Don’t just edit the SR 11-7 references.
  • Audit your model inventory for items that fall outside the new model definition (deterministic rule-based processes, simple arithmetic). Build a defensible declassification process — model name, basis for removal, sign-off, retention path.
  • Refresh your materiality framework. If you don’t have one, build one. Purpose + exposure scoring with clear tier definitions.
  • Re-baseline validation cadence by tier. Document trigger criteria for off-cycle validation (model performance degradation, material change in use, regulatory change).
  • Update your gen-AI inventory and governance ownership. Where does generative and agentic AI live in your risk taxonomy? Who is accountable? What controls apply now, ahead of the AI RFI?

For institutions under the $30 billion threshold:

  • Review the gap between current MRM program scope and the new guidance. Where have you been overbuilt for the actual model risk profile? Document a right-sizing rationale.
  • Don’t deconstruct your program. Examiners will still expect evidence of effective model governance. The change is about calibration, not abandonment.
  • Watch the AI RFI. Smaller institutions are often higher consumers of vendor-supplied AI capabilities. The forthcoming AI guidance will likely have different applicability thresholds than the current MRM framework.

For everyone:

  • Document your read on the new guidance — internally — before your next examiner conversation. “We’re working through it” is fine if it’s followed by “and here’s what we’ve changed in the last 30 days.” It’s not fine standing alone.

The 30/60/90 day checklist

Days 1–30

  • Read SR 26-2 and OCC Bulletin 2026-13 in full. Distribute to MRM team, model owners, internal audit.
  • Inventory every reference to SR 11-7, OCC 2011-12, SR 21-8 in your policies, procedures, training, and templates.
  • Draft a “what changed for us” memo for executive risk committee.

Days 31–60

  • Update model risk policy and procedures to align with 2026 framework.
  • Pilot the new materiality framework on a representative sample of models — does it produce sensible tier distributions?
  • Identify candidate models for declassification under the narrower model definition.

Days 61–90

  • Roll out updated MRM policy organization-wide.
  • Re-baseline validation cadences by tier, with documented triggers.
  • Build a gen-AI/agentic AI governance overlay that names accountable owners, pending the AI RFI.
  • Surface program changes through the next risk committee report and to internal audit.

Bottom line

The 2026 framework isn’t a rollback. It’s a reset. The agencies have given the industry latitude to operate MRM as a risk-based discipline rather than a documentation-heavy compliance ritual — and that latitude comes with the responsibility to make principled judgments and defend them. Programs that were running on autopilot against SR 11-7 muscle memory are going to look outdated to examiners by year-end. Programs that engage thoughtfully with the new framework will look mature.

If you’re wrestling with how to operationalize the gen-AI carve-out specifically, that’s where most of the energy will go in the back half of 2026. The AI Risk Assessment Template & Guide gives you a model inventory structure, materiality scoring, vendor questionnaire, and pre-deployment checklist that map cleanly to the 2026 MRM framework — including the AI use cases the new guidance explicitly leaves outside MRM scope.

For more on how the OCC’s MRM guidance has evolved and what AI specifically does to model risk frameworks, see our deep dives on the SR 11-7 era and AI model risk, the original OCC 2011-12 framework and AI, and agentic AI risk management.

Frequently Asked Questions

Does the new MRM guidance replace SR 11-7?
Yes. The Federal Reserve's SR 26-2 (April 17, 2026) explicitly rescinds SR letter 11-7 and SR 21-8. The OCC simultaneously rescinded Bulletin 2011-12, Bulletin 2021-19, Bulletin 1997-24, and the Model Risk Management booklet of the Comptroller's Handbook through Bulletin 2026-13. The 2011 framework is officially retired.
Is the new guidance enforceable?
Not directly. The agencies state non-compliance with the guidance itself will not result in supervisory criticism. But the Federal Reserve preserves a backstop: supervisory action can still result from violations of law or unsafe and unsound practices stemming from poor model risk management.
Does this apply to banks under $30 billion in assets?
The Fed says the guidance is most relevant to banking organizations over $30 billion in total assets, but smaller institutions with complex or prevalent model use remain in scope. Community banks with limited model use can scale practices down significantly.
Does the new guidance cover AI and generative AI?
No. Generative AI and agentic AI are explicitly carved out as “novel and rapidly evolving.” The agencies plan to issue a request for information specifically on AI model risk management. Traditional machine learning models that aren't generative or agentic remain in scope.
Do we still need annual model validation?
Not as a prescribed cadence. The 2011 guidance's de facto annual validation expectation is gone. The new framework calls for validation frequency tailored to model materiality, performance, and changes in use or environment — risk-based, not calendar-based.
What should our MRM team do this week?
Pull your existing model inventory, re-tier models using a materiality framework based on purpose and exposure, and identify which deterministic rule-based processes and simple arithmetic calculations can be removed from scope. Flag any generative or agentic AI use cases for separate governance pending the AI RFI.
Rebecca Leung

Rebecca Leung has 8+ years of risk and compliance experience across first and second line roles at commercial banks, asset managers, and fintechs. Former management consultant advising financial institutions on risk strategy. Founder of RiskTemplates.

Related Framework

AI Risk Assessment Template & Guide

Comprehensive AI model governance and risk assessment templates for financial services teams.
