NIST AI RMF 1.1: What's Changing and What It Means for Your AI Risk Program
The NIST AI Risk Management Framework is being revised — and if you’ve built your AI risk program around it, you need to know what’s changing, what’s staying, and what that means for your documentation.
The short version: a formal version “1.1” hasn’t dropped as a single published document, but the framework is in an active, mandated revision process that’s already reshaping how organizations should think about AI governance in 2026. Here’s what’s actually happening, what we can verify, and what your next 90 days should look like.
TL;DR
- The White House AI Action Plan (July 2025) directed NIST to revise the AI RMF — certain language is being removed, but the four core functions remain intact
- MEASURE 2.11 (bias and fairness evaluation) is still in the framework and still matters — financial services regulators haven’t stopped caring about discriminatory AI outcomes
- You don’t need to wait for a final revision to build a strong AI risk program — the core structure is stable and actionable now
What Is Actually Happening with NIST AI RMF in 2026
The NIST AI RMF 1.0 was released on January 26, 2023. Since then, NIST has released companion documents — most notably the Generative AI Profile (NIST-AI-600-1) in July 2024 — but has not, until this revision effort, formally revised the core framework itself.
On July 23, 2025, the White House released the AI Action Plan, which tasked NIST with revising the AI RMF to eliminate references to misinformation, Diversity, Equity, and Inclusion (DEI), and climate change. NIST has confirmed the revision is underway. The AI Resource Center’s own documentation states this directive, and NIST IR 8596 notes the framework “is currently in revision and will be included in a future version.”
What does that mean in practice? The framework’s political framing is being stripped — language that tied AI risk to DEI outcomes, misinformation harms, and climate-related impacts is being removed or revised. But the technical infrastructure — the four core functions (Govern, Map, Measure, Manage), the subcategories, and the Playbook — is expected to remain substantively intact.
This is important context: if your AI risk program was built on the DEI-focused bias language in the original framework, you may need to recalibrate your framing. If your program was built on the technical architecture of the four functions, you’re on solid ground.
The Four Functions: What’s Staying Stable
The NIST AI RMF is organized around four functions. These are not changing:
| Function | What It Does | Where Most Programs Fall Short |
|---|---|---|
| GOVERN | Sets AI risk culture, policies, accountability structures | No named AI risk owner; governance exists on paper only |
| MAP | Identifies AI use cases, classifies risk, maps context | AI inventory is incomplete; risk classification is ad-hoc |
| MEASURE | Evaluates AI performance, bias, drift, and reliability | Testing is one-time at deployment, not ongoing |
| MANAGE | Implements controls, monitors, responds to incidents | Response plans exist but aren’t tested |
The weakest link in most financial services AI programs is the gap between MEASURE and MANAGE. You test the model when you deploy it. Then it goes into production and you don’t look at it again until something breaks.
That’s not what the framework intends — and it’s not what regulators expect.
MEASURE 2.11: Bias and Fairness Evaluation
Here’s one of the most consequential technical requirements that isn’t going anywhere, regardless of the political revision: MEASURE 2.11 requires evaluation of fairness and bias in AI systems.
This isn’t soft guidance. The subcategory explicitly calls for testing whether an AI system produces disparate outcomes across demographic groups. In financial services, that maps directly to:
- Credit scoring models — does your model deny credit at higher rates for certain demographic groups?
- Fraud detection — are certain customer segments flagged at disproportionate rates?
- Underwriting algorithms — do pricing models produce discriminatory outcomes even unintentionally?
The CFPB, OCC, and Federal Reserve have all made clear that fair lending obligations extend to algorithmic decision-making. They don’t care whether you call it a “model” or an “AI system” — if it makes credit decisions, bias evaluation is required.
The NIST AI RMF 1.0 provided a structured methodology for this evaluation. That structure stays. What may change is the specific language around DEI framing — but the underlying requirement to test for and document disparate outcomes isn’t going away at the federal regulatory level.
Practical implication: If your AI risk program’s bias testing was structured around the NIST DEI language and you’re worried about whether that still applies, the answer is: the testing itself still applies. Reframe the documentation around fair lending compliance and model validation rather than DEI terminology, and you’ve accomplished both goals.
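To make the "test for disparate outcomes" requirement concrete, here is a minimal sketch of an adverse impact ratio calculation — the kind of selection-rate comparison MEASURE 2.11-style evaluations call for. The group names, counts, and the 0.80 cutoff (the widely used "four-fifths rule" heuristic) are illustrative assumptions, not figures from any real model:

```python
def adverse_impact_ratio(approvals: dict[str, int],
                         applicants: dict[str, int]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate."""
    rates = {g: approvals[g] / applicants[g] for g in applicants}
    reference = max(rates.values())
    return {g: rate / reference for g, rate in rates.items()}

# Hypothetical approval counts for a credit model, by demographic group.
applicants = {"group_a": 1000, "group_b": 800}
approvals = {"group_a": 620, "group_b": 380}

ratios = adverse_impact_ratio(approvals, applicants)
# A ratio under 0.80 ("four-fifths rule") is a common trigger for deeper review.
flagged = [g for g, r in ratios.items() if r < 0.80]
```

The four-fifths rule is a screening heuristic, not a legal safe harbor — a documented fair lending analysis goes well beyond this single ratio.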
The Generative AI Profile: Still Relevant, Still Underused
The NIST-AI-600-1 Generative AI Profile, released July 26, 2024, is a companion document to the core AI RMF that addresses risks specific to generative AI systems. Most organizations that adopted the AI RMF haven’t integrated this profile.
It covers 12 risk categories specific to GenAI, including:
- Confabulation (hallucination and factually incorrect outputs)
- Data privacy risks from training data and inference attacks
- Homogenization (over-reliance on a single AI provider or output style)
- Human-AI configuration risks from over-automation
If you’re using any LLM-based tools — and almost every financial services firm now is, whether through vendor products or internal builds — this profile is the right starting point for evaluating those systems. The core AI RMF alone wasn’t designed for generative systems.
What’s Being Removed and Why It Matters
The White House AI Action Plan is explicit about the direction of the revision: remove references to misinformation, DEI, and climate change from the framework. This reflects a shift in the federal AI governance posture — from a risk-mitigation orientation centered on societal harms to one that prioritizes innovation and American AI competitiveness.
Here’s what compliance practitioners should actually do with this information:
Don’t gut your bias testing. The removal of DEI language from the NIST framework does not remove your fair lending obligations under the Equal Credit Opportunity Act, the Fair Housing Act, or state-level equivalents. CFPB examiners are still going to ask whether your credit models produce disparate outcomes. NIST’s language revision doesn’t change that.
Do audit your framework alignment language. If your AI risk policy documents reference specific NIST subcategories that are expected to change, update the reference language — but preserve the underlying control. A control that says “we test for disparate impact in credit decisions per NIST MEASURE 2.11” can be updated to “we test for disparate impact in credit decisions per our fair lending obligations and model validation policy” without losing any substance.
Expect more sector-specific guidance. The federal AI governance posture is shifting toward industry-specific frameworks rather than a single cross-sector document. Financial services will see more from the OCC, Federal Reserve, and CFPB on AI risk — and those agencies haven’t softened their expectations on model governance, bias testing, or explainability.
What Your AI Risk Program Should Look Like Right Now
Stop waiting for a final revised framework before building your program. The core structure is stable. Here’s a concrete 90-day roadmap:
Days 1–30: Inventory and Classify
- Build or update your AI and model inventory — every system that makes, influences, or automates a decision. Assign risk classifications (high/medium/low) based on the decision type, customer impact, and regulatory exposure.
- At most mid-size banks and credit unions, this inventory either doesn’t exist or hasn’t been updated since initial model validation. If you don’t know what you’re running, you can’t manage it.
- Identify which systems are in-scope for MEASURE-level evaluation: credit models, fraud detection, pricing algorithms, any customer-facing automated decision.
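The inventory-and-classify step above can be sketched as a simple data structure plus a triage rule. This is a minimal illustration, not a complete methodology — the field names, decision-type categories, and tiering logic are assumptions; a real program would weigh more factors (regulatory exposure, dollar impact, reversibility):

```python
from dataclasses import dataclass


@dataclass
class ModelRecord:
    name: str
    decision_type: str        # e.g. "credit", "fraud", "pricing", "marketing"
    customer_facing: bool
    automated_decision: bool  # decides without human review in the loop

# Hypothetical set of decision types treated as inherently higher-risk.
HIGH_RISK_DECISIONS = {"credit", "fraud", "pricing", "underwriting"}


def classify(m: ModelRecord) -> str:
    """Rough high/medium/low triage based on decision type and automation."""
    if m.decision_type in HIGH_RISK_DECISIONS and m.automated_decision:
        return "high"
    if m.decision_type in HIGH_RISK_DECISIONS or m.customer_facing:
        return "medium"
    return "low"

inventory = [
    ModelRecord("credit_score_v3", "credit", True, True),
    ModelRecord("churn_predictor", "marketing", False, False),
]
tiers = {m.name: classify(m) for m in inventory}
```

Even a spreadsheet version of this structure beats no inventory — the point is that every system has a name, a decision type, and a tier someone can defend to an examiner.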
Days 31–60: Assess and Document
- Run bias evaluations on in-scope models. At minimum: adverse impact ratios across demographic groups, performance parity testing across customer segments.
- Document model performance baselines — accuracy, precision, recall, false positive/negative rates at the time of your evaluation. This becomes your drift detection baseline.
- Review vendor AI systems for equivalent documentation. If a vendor’s model is making decisions for you, you need their validation documentation. If they can’t provide it, that’s a risk finding.
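The baseline-documentation step above amounts to snapshotting metrics in a form later reviews can diff against. A minimal sketch, with hypothetical metric values and no particular storage backend assumed:

```python
import datetime
import json


def record_baseline(model_name: str, metrics: dict[str, float]) -> dict:
    """Snapshot evaluation metrics so later runs can be compared to them."""
    return {
        "model": model_name,
        "evaluated_at": datetime.date.today().isoformat(),
        "metrics": metrics,  # accuracy, precision, recall, FPR, FNR, etc.
    }

baseline = record_baseline("credit_score_v3", {
    "accuracy": 0.91, "precision": 0.84, "recall": 0.78,
    "false_positive_rate": 0.06, "false_negative_rate": 0.22,
})
# Persist alongside the model artifact so MANAGE-phase reviews can find it.
snapshot = json.dumps(baseline, indent=2)
```

Where the snapshot lives matters less than that it is versioned, dated, and tied to a specific model release.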
Days 61–90: Implement Ongoing Monitoring
- Set drift detection thresholds. A common starting point: flag models for re-evaluation if performance metrics shift more than 5–10% from baseline.
- Establish monitoring cadence — quarterly at minimum for high-risk models, annually for low-risk. Document who owns each review.
- Create an escalation path for model exceptions. When a model flags an alert, who sees it, who decides whether to pause or override the model, and how is that documented?
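The drift-threshold step above reduces to comparing current metrics against the documented baseline and flagging anything that shifted beyond tolerance. A minimal sketch using the 10% relative-shift starting point mentioned above — the metric values are hypothetical:

```python
def drift_flags(baseline: dict[str, float], current: dict[str, float],
                threshold: float = 0.10) -> list[str]:
    """Return metrics whose relative shift from baseline exceeds threshold."""
    flags = []
    for metric, base in baseline.items():
        if base == 0:
            continue  # avoid divide-by-zero; zero baselines need special handling
        shift = abs(current[metric] - base) / base
        if shift > threshold:
            flags.append(metric)
    return flags

baseline = {"accuracy": 0.91, "recall": 0.78}
current = {"accuracy": 0.89, "recall": 0.66}
to_review = drift_flags(baseline, current)  # recall shifted ~15% from baseline
```

Whether 10% is the right tolerance depends on the model and the decision it drives — the design choice that matters is that the threshold is written down before the metric moves, not after.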
The Monitoring Cadence Problem
This is where most programs fail. The NIST framework’s MANAGE function calls for ongoing monitoring and the ability to respond to model degradation — but most organizations interpret “ongoing” as “we’ll look at it if something breaks.”
That’s not ongoing monitoring. That’s reactive incident response dressed up as governance.
Regulators — especially the OCC through its model risk management guidance and the Federal Reserve through SR 11-7 — expect documented, scheduled review cycles with evidence of completion. “Ongoing” means:
- A named owner for each model
- A scheduled review date on the calendar
- A documented outcome from each review (no issues found counts — but it has to be documented)
- A process for what happens if performance degrades
If you have ten models in production and none of them have scheduled review dates with a named owner, your monitoring program doesn’t exist yet — regardless of what your policy says.
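The checklist above — named owner, scheduled date, documented outcome — is mechanical enough to automate. A minimal sketch of an overdue-review check; the cadence values for high and low tiers follow the quarterly/annual guidance above, while the medium-tier cadence and all model records are illustrative assumptions:

```python
import datetime

# Days allowed between documented reviews, by risk tier.
# High = quarterly, low = annual (per the cadence above);
# medium = semiannual is an assumption for illustration.
CADENCE_DAYS = {"high": 90, "medium": 180, "low": 365}


def overdue_reviews(models: list[dict], today: datetime.date) -> list[str]:
    """Models whose last documented review is older than their cadence allows."""
    late = []
    for m in models:
        due = m["last_review"] + datetime.timedelta(days=CADENCE_DAYS[m["tier"]])
        if today > due:
            late.append(m["name"])
    return late

models = [
    {"name": "credit_score_v3", "tier": "high", "owner": "model_risk_team",
     "last_review": datetime.date(2025, 10, 1)},
    {"name": "churn_predictor", "tier": "low", "owner": "marketing_analytics",
     "last_review": datetime.date(2025, 6, 1)},
]
late = overdue_reviews(models, datetime.date(2026, 3, 1))
```

Run something like this on a schedule and the "monitoring program that doesn't exist yet" problem becomes a visible list instead of an examiner finding.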
So What Does This Mean for Your Program?
The NIST AI RMF revision is real, ongoing, and will matter when it’s finalized. But the practical impact for most financial services AI risk programs is narrower than the headlines suggest:
- The technical architecture isn’t changing. Govern, Map, Measure, Manage — that structure stays. Build your program around it.
- Bias testing requirements aren’t going away. Federal fair lending law doesn’t care what NIST says about DEI. Test your models and document it.
- The Generative AI Profile is the gap most programs haven’t addressed. If you’re using LLMs, you need an evaluation framework specific to generative systems.
- Monitoring cadence is the weakest link. Most programs test at deployment and then stop. That’s not enough.
The organizations that are going to be well-positioned when the revised framework drops — and when examiners start asking for AI risk documentation — are the ones building programs right now, not waiting for a final document.
FAQ
Is NIST AI RMF 1.1 officially published?
As of late March 2026, NIST has confirmed the AI RMF is in active revision pursuant to the White House AI Action Plan (July 2025), but a formally published version 1.1 has not been released as a single document. NIST has been releasing companion guidance (the Generative AI Profile, updated Playbook content) that effectively expands the framework while the core document is under revision. Monitor nist.gov/itl/ai-risk-management-framework for official announcements.
What’s actually being removed from the NIST AI RMF?
The White House AI Action Plan directed NIST to remove references to misinformation, Diversity, Equity, and Inclusion, and climate change from the framework. The four core functions (Govern, Map, Measure, Manage) and the subcategory structure are not expected to change substantively. The revision is primarily about removing politically charged framing, not redesigning the risk management architecture.
If NIST removes DEI language, do I still need to test for bias in my AI models?
Yes. Bias testing requirements for financial institutions come primarily from fair lending law (Equal Credit Opportunity Act, Fair Housing Act) and banking regulator expectations — not from the NIST framework. The OCC, CFPB, and Federal Reserve have all indicated that algorithmic lending models are subject to fair lending scrutiny. NIST’s language revision doesn’t change your legal obligations.
Building an AI risk program from scratch — or updating one that’s overdue? The AI Risk Assessment Template & Guide gives you a structured framework aligned with NIST AI RMF, covering inventory, risk classification, bias evaluation, and monitoring documentation. Built for financial services practitioners who need to get something real on paper fast.
Rebecca Leung
Rebecca Leung has 8+ years of risk and compliance experience across first and second line roles at commercial banks, asset managers, and fintechs. Former management consultant advising financial institutions on risk strategy. Founder of RiskTemplates.