AI Risk

NIST AI RMF MAP Function: How to Frame AI Risk Context Before You Build or Deploy

April 21, 2026 · Rebecca Leung

Most AI governance programs treat MAP as the part that happens before the real work starts. Tick the box, write a one-page use-case description, and move on to validation and monitoring. That’s exactly backward.

The MAP function is where NIST AI RMF becomes operational. GOVERN establishes the infrastructure — policies, accountability structures, risk tolerance. MAP is where you apply that infrastructure to a specific AI system: what does it do, who does it affect, what can go wrong, and where does the risk actually live? Skip MAP or rush through it, and your downstream MEASURE and MANAGE activities are built on a foundation that doesn’t know what it’s managing.

The problem is that MAP requires thinking before building — and most teams are under pressure to build.

TL;DR:

  • MAP is NIST AI RMF’s risk context function: it frames what risks exist before you validate or monitor them
  • Five MAP categories cover context (MAP 1), system categorization (MAP 2), capabilities and costs (MAP 3), component risk mapping (MAP 4), and impact characterization (MAP 5)
  • The Treasury FS AI RMF (February 2026) translates MAP into financial-institution-specific control objectives — this is your implementation reference
  • Most organizations fail MAP on two gaps: missing system-specific risk tolerance documentation and incomplete third-party component mapping

MAP in Context: What Comes Before and After

The NIST AI RMF structures AI risk management across four functions: GOVERN, MAP, MEASURE, and MANAGE. The functions aren’t strictly sequential — the framework explicitly encourages iteration — but there’s a logical dependency chain.

GOVERN gives you the organization-level foundation: who owns AI risk, what your risk tolerance is, how oversight is structured. MAP takes that foundation and applies it to a specific AI system. MEASURE then tests whether the risks you identified in MAP are actually present and quantifiable. MANAGE determines what you do about it.

If GOVERN is the institutional infrastructure, MAP is the project-level intake process. Every new AI system — or significant change to an existing one — should go through MAP before it gets validated or deployed.

The Treasury’s Financial Services AI Risk Management Framework (FS AI RMF), released in February 2026 by the Cyber Risk Institute with input from 108 financial institutions, maps each NIST AI RMF function to specific control objectives for financial services. For MAP, the FS AI RMF provides exactly the kind of operational specificity the NIST framework deliberately leaves to implementers.


MAP 1: Establish and Understand the Context

This is the foundational work. Before any risk can be assessed, you need to know what you’re building, who it affects, and what the rules are. MAP 1 has six subcategories; the four below are the ones financial institutions most often skip or get wrong.

MAP 1.1 — Document Intended Purpose and Regulatory Context

MAP 1.1 requires documenting the AI system’s intended purposes, potentially beneficial uses, context-specific laws, and deployment settings — including user expectations and potential impacts.

For a financial institution, this is not a paragraph summary. It’s a structured document that answers:

  • What decisions does this model inform or make?
  • Which customer segments are affected?
  • What regulations apply? (SR 11-7, ECOA/Reg B, FCRA, GLBA, state AI laws like Colorado SB 205?)
  • How is human oversight structured — does a human review every output, or is it automated?
  • What are the known limitations of the system?

An LLM deployed for customer-facing FAQ responses has a different regulatory context than a credit scoring model. MAP 1.1 is where you make that explicit. An incomplete MAP 1.1 is also why “AI scope creep” happens — a model gets deployed for one purpose, gets used for another, and nobody notices because the original documentation was never specific enough to flag the drift.
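One way to make MAP 1.1 answers checkable rather than prose-only is a structured intake record. A minimal sketch in Python; the field names are illustrative assumptions, not FS AI RMF fields:

```python
from dataclasses import dataclass

@dataclass
class Map11IntakeRecord:
    """Illustrative MAP 1.1 risk-context record; field names are hypothetical."""
    system_name: str
    decisions_informed: list      # what decisions the model informs or makes
    customer_segments: list       # which customer segments are affected
    applicable_regulations: list  # e.g. ["SR 11-7", "ECOA/Reg B", "FCRA"]
    human_oversight: str          # "review-every-output" | "threshold-based" | "automated"
    known_limitations: list

    def is_complete(self):
        """Flag records too thin to catch scope creep later."""
        return all([
            self.decisions_informed,
            self.customer_segments,
            self.applicable_regulations,
            self.known_limitations,
        ])

record = Map11IntakeRecord(
    system_name="faq-llm",
    decisions_informed=["customer FAQ responses"],
    customer_segments=["retail deposit customers"],
    applicable_regulations=["GLBA", "UDAAP"],
    human_oversight="threshold-based",
    known_limitations=["no access to live rate data"],
)
print(record.is_complete())  # True: every context field is populated
```

A record that fails `is_complete()` is exactly the kind of documentation that can’t flag drift when the model’s actual use expands.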

MAP 1.2 — Ensure Interdisciplinary Participation

MAP 1.2 requires that diverse interdisciplinary teams participate in establishing context, with documented collaboration opportunities. In plain English: risk and compliance need a seat at the table before the product team ships.

This sounds obvious. It isn’t. The most common failure mode is that MAP 1.2 gets satisfied with a sign-off email rather than meaningful input — “compliance reviewed the use case description” is not the same as “compliance identified the ECOA implications of the feature’s output format.”

For financial institutions, MAP 1.2 is the moment to surface: legal considerations, fair lending implications, customer disclosure requirements, data privacy obligations, and third-party contracting issues. That’s a lot of ground to cover in a kickoff meeting. The FS AI RMF recommends structured intake forms with risk-type-specific prompts rather than open-ended conversations.

MAP 1.5 — Document Organizational Risk Tolerance

This is the subcategory most institutions get wrong. MAP 1.5 requires determining and documenting organizational risk tolerance levels for the specific AI system — not just pointing to a generic AI risk policy.

The distinction matters in practice. You might have an organization-level risk tolerance that says “low tolerance for AI in credit decisioning.” But what does that mean for a specific LLM being used to generate adverse action explanation letters? That’s not credit decisioning in the traditional SR 11-7 sense, but it affects credit customers and their legal rights under FCRA. MAP 1.5 is where you make that determination for each system.

At a minimum, document: risk tier assigned (High/Medium/Low), rationale for that tier, what types of failure are most consequential for this system, and what escalation threshold would trigger a supervisory review or model suspension.
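The minimum documentation above can be captured as data with an escalation check attached. A sketch under assumed tier names and thresholds; nothing here is prescribed by NIST or the FS AI RMF:

```python
# Illustrative MAP 1.5 record: system-specific risk tolerance with a
# documented escalation threshold. All values are made-up placeholders.
RISK_TOLERANCE = {
    "adverse-action-letter-llm": {
        "tier": "High",
        "rationale": "Outputs affect customers' FCRA adverse action rights",
        "worst_failures": ["inaccurate denial reason", "omitted disclosure"],
        "escalation_threshold": 0.02,  # error rate triggering supervisory review
    },
}

def needs_escalation(system, observed_error_rate):
    """Compare a monitored error rate against the documented threshold."""
    return observed_error_rate >= RISK_TOLERANCE[system]["escalation_threshold"]

print(needs_escalation("adverse-action-letter-llm", 0.035))  # True: above the 2% threshold
```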

MAP 1.6 — Gather Stakeholder Requirements

MAP 1.6 requires gathering system requirements from relevant stakeholders and addressing socio-technical implications in design decisions. For financial institutions, “relevant stakeholders” extends beyond the product team — it includes affected customers, community advocates, compliance, legal, and the business lines that will use the system’s outputs.

Most institutions stop at internal stakeholders. MAP 1.6 pushes for external input too — especially for AI systems that affect credit access, insurance pricing, or other high-stakes decisions. That doesn’t mean running a public comment process for every model. It does mean having a documented process for how customer feedback, complaint data, and community impact considerations inform model design decisions.


MAP 2: Categorize the AI System

MAP 2 requires classifying the AI system by type and documenting its operational characteristics. Three subcategories cover the what, the where, and the how.

MAP 2.1 — Define System Type and Tasks

MAP 2.1 requires defining the specific tasks and methods the AI system will support — whether classifiers, generative models, recommenders, or others. This isn’t academic taxonomy. System type determines what testing methodologies apply and what kinds of failures are most likely.

AI System Type | Common Financial Services Use Cases | Primary Risk Dimensions
Classifiers (supervised ML) | Credit decisioning, fraud detection, AML transaction monitoring | Bias, disparate impact, model drift
Generative AI (LLMs) | Customer service, adverse action letters, document summarization | Hallucination, confidentiality, fair lending implications
Recommenders | Product recommendations, upsell targeting | UDAAP risk, suitability concerns
Anomaly detection | Fraud, suspicious activity monitoring | False positive rates, disparate SAR rates
Automated decisioning | Underwriting, claims processing | Reg B adverse action, explainability

MAP 2.1 also requires documenting what happens when the system gets an input outside its training distribution — the “unknown unknowns” case. Most implementations say “model returns low confidence score” without specifying what the downstream handling process is.

MAP 2.2 — Document Knowledge Limits and Human Oversight

MAP 2.2 requires documenting the system’s knowledge limits and how human oversight integrates with its outputs. This is where AI governance gets into specifics that examiners increasingly look for: at what confidence threshold does a human review a model output? Who does that review? What authority do they have to override the model?

For credit decisioning systems, the OCC has been explicit that human oversight doesn’t mean rubber-stamping automated outputs. MAP 2.2 is the documentation that proves it isn’t.
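The MAP 2.2 questions (what threshold, who reviews, what authority) can be documented as routing logic rather than narrative. A sketch; the threshold value and role name are illustrative assumptions:

```python
# Illustrative MAP 2.2 oversight routing: at what confidence does a human
# review an output, and who can override it? Values are assumptions.
REVIEW_THRESHOLD = 0.85                # below this, a human must review
OVERRIDE_ROLE = "senior underwriter"   # hypothetical role with override authority

def route_output(confidence):
    """Return the documented handling path for a single model output."""
    if confidence < REVIEW_THRESHOLD:
        return {"path": "human_review",
                "reviewer_role": OVERRIDE_ROLE,
                "can_override": True}
    return {"path": "auto_release", "reviewer_role": None, "can_override": False}

print(route_output(0.60)["path"])  # human_review
```

Having this logic written down, with the override role named, is the kind of specificity that separates documented oversight from rubber-stamping.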


MAP 3: Understand Capabilities, Costs, and Benefits

MAP 3 shifts from “what is this system” to “what can go wrong and how much does it matter.” Five subcategories cover the benefit-cost framing.

MAP 3.1 and 3.2 — Benefits and Costs

MAP 3.1 requires documenting potential benefits; MAP 3.2 requires documenting potential costs, including non-monetary impacts from errors or trustworthiness failures.

For financial institutions, “costs” in MAP 3.2 should include:

  • Direct financial impact: Model errors that result in credit decisions that violate ECOA, triggering enforcement and remediation costs
  • Regulatory costs: Exam findings, consent orders, civil money penalties
  • Customer harm: Incorrect denials, adverse action errors, biased pricing
  • Reputational costs: Media coverage, customer complaints, community advocate pressure

The CFPB’s supervisory emphasis on AI-driven adverse action makes MAP 3.2 a critical input for any model that touches credit, debt collection, or insurance pricing. If you haven’t quantified the cost of a biased model output, you haven’t done MAP 3.
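Quantifying those costs can start as simply as expected annual cost per failure mode, likelihood times impact. A sketch with made-up placeholder figures; real estimates belong to your risk and finance teams:

```python
# Illustrative MAP 3.2 cost quantification. All figures are placeholders.
failure_modes = [
    {"name": "biased adverse action output", "annual_likelihood": 0.05,
     "impact_usd": 2_000_000},   # assumed enforcement + remediation cost
    {"name": "hallucinated rate quote", "annual_likelihood": 0.20,
     "impact_usd": 150_000},     # assumed customer remediation cost
]

def expected_annual_cost(modes):
    """Sum of likelihood-weighted impacts across documented failure modes."""
    return sum(m["annual_likelihood"] * m["impact_usd"] for m in modes)

print(expected_annual_cost(failure_modes))  # 130000.0
```

Even rough numbers like these give MEASURE a ranked list of what to test first.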

MAP 3.5 — Human Oversight Processes

MAP 3.5 requires defining and assessing processes for human oversight aligned with governance policies. This is where the GOVERN function’s accountability structures (GOVERN 2.1 roles and responsibilities) intersect with the specific AI system. Who provides oversight for this specific model’s outputs? What’s the escalation path? How is the override documented?


MAP 4: Map Risks Across All System Components

MAP 4 is the third-party risk management layer of the MAP function. Two subcategories cover risk mapping and internal controls.

MAP 4.1 — Third-Party Component Risk

MAP 4.1 requires approaches for mapping AI technology risks, explicitly including third-party components. For vendor-supplied AI, MAP 4.1 means:

  • Model design assessment: How was the vendor’s model trained? On what data? With what bias controls?
  • Data sourcing review: Where does the model’s training data come from? Are there known biases in that data source?
  • Risk controls: What controls has the vendor built in? Are they appropriate for your use case?
  • Explainability: Can the vendor explain why the model produces a given output? In a format that satisfies adverse action notice requirements?
  • Regulatory compliance: Has the vendor assessed their model for ECOA, FCRA, GLBA compliance in your specific context?

“We have a vendor SOC 2 and an API agreement” doesn’t satisfy MAP 4.1. The third-party AI vendor due diligence framework needs to produce answers to these questions before you deploy, not after your examiner asks.
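A structured questionnaire with a completeness gate makes “the vendor is handling it” impossible to submit. A sketch mirroring the bullets above; the question keys and gate logic are illustrative assumptions:

```python
# Illustrative MAP 4.1 vendor due diligence completeness check.
# Question keys are hypothetical, mirroring the bullets above.
REQUIRED_ANSWERS = [
    "training_data_sources", "bias_controls", "built_in_risk_controls",
    "explainability_method", "regulatory_assessment",
]

def vendor_gaps(answers):
    """Return questionnaire items the vendor has not substantively answered."""
    return [q for q in REQUIRED_ANSWERS if not answers.get(q)]

vendor_response = {
    "training_data_sources": "public credit bureau data, 2019-2024",
    "bias_controls": "pre-deployment disparate impact testing",
    "built_in_risk_controls": "",          # blank answer: counts as a gap
    "explainability_method": "SHAP-based reason codes",
    # "regulatory_assessment" missing entirely: also a gap
}
print(vendor_gaps(vendor_response))  # ['built_in_risk_controls', 'regulatory_assessment']
```

A nonempty gap list blocks onboarding until the vendor answers, which is the “before you deploy” posture MAP 4.1 asks for.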

MAP 4.2 — Internal Controls Documentation

MAP 4.2 requires identifying and documenting internal risk controls for all system components. This isn’t about controls for the AI system in isolation — it’s about controls for the full pipeline: data ingestion, preprocessing, model execution, output handling, and human review.


MAP 5: Characterize Impacts on Individuals and Society

MAP 5 is the impact assessment layer. Two subcategories require identifying potential harms and establishing stakeholder engagement processes.

MAP 5.1 — Identify Beneficial and Harmful Impacts

MAP 5.1 requires identifying and documenting the likelihood and magnitude of both beneficial and harmful impacts — covering socioeconomic effects, privacy considerations, potential bias or discrimination, and effects on affected populations.

For financial institutions, MAP 5.1 is where fair lending analysis lives in the NIST AI RMF context. An AI credit model that produces statistically unbiased outputs on aggregate may still disproportionately affect specific communities. MAP 5.1 requires you to characterize those impacts before the model goes live, not discover them in a CFPB complaint data review two years after deployment.

The FS AI RMF maps MAP 5.1 to specific control objectives around demographic analysis, disparate impact testing, and adverse impact reporting. If your MAP 5.1 doesn’t include a community-level impact analysis for credit or insurance models, it’s incomplete.
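One widely used screening statistic for that disparate impact testing is the four-fifths (80%) rule: compare approval rates across groups and flag ratios below 0.8. A sketch with made-up numbers; a real fair lending analysis controls for legitimate credit factors and goes through legal review:

```python
# Illustrative four-fifths rule check for MAP 5.1 screening. Data is made up.
def adverse_impact_ratio(approved_a, total_a, approved_b, total_b):
    """Ratio of group A's approval rate to the reference group B's rate."""
    rate_a = approved_a / total_a
    rate_b = approved_b / total_b
    return rate_a / rate_b

ratio = adverse_impact_ratio(approved_a=300, total_a=1000,
                             approved_b=500, total_b=1000)
print(round(ratio, 2), ratio < 0.8)  # 0.6 True -> flags for review under the 80% rule
```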

MAP 5.2 — Stakeholder Engagement and Feedback Integration

MAP 5.2 requires establishing documented practices and personnel for regular stakeholder engagement and feedback integration. This creates the ongoing loop that prevents MAP from being a one-time exercise — you document how you’ll continue to incorporate stakeholder input as the system evolves.


MAP in the FS AI RMF: What Financial Institutions Actually Implement

The Treasury Financial Services AI Risk Management Framework translates NIST’s MAP function into bank-ready control objectives. For MAP-equivalent activities, the FS AI RMF provides:

  • Use case documentation requirements: Minimum fields for AI use case intake forms, including regulatory applicability mapping
  • Risk tiering methodology: Criteria for classifying AI systems as High/Medium/Low risk with specific financial services examples
  • Third-party AI governance standards: What to require from AI vendors as a condition of onboarding, including documentation standards and ongoing monitoring obligations
  • Impact assessment frameworks: Structured approaches for characterizing fair lending and consumer harm risks before deployment

The FS AI RMF doesn’t replace NIST AI RMF — it operationalizes it. For financial institutions, start with FS AI RMF control objectives when implementing MAP, and use NIST AI RMF 1.0 for the conceptual framework behind each control.


MAP Implementation Roadmap: 30/60/90 Days

Timeframe | Priority | Deliverable
Days 1–30 | Foundation | Build MAP intake template covering MAP 1.1 (purpose/regulatory context), MAP 1.5 (system-specific risk tolerance), and MAP 2.1 (system type and tasks)
Days 1–30 | Inventory | Apply the MAP framework retroactively to the top 5 highest-risk deployed AI systems
Days 31–60 | Third-party | Complete MAP 4.1 assessment for all vendor AI tools using a structured questionnaire
Days 31–60 | Impact | Run MAP 5.1 impact characterization for any AI system touching credit, pricing, or employment decisions
Days 61–90 | Process | Operationalize the MAP 1.6 stakeholder input process — define how complaint data and customer feedback inform model updates
Days 61–90 | Integration | Connect MAP output to the MEASURE testing plan — costs identified in MAP 3.2 should directly inform MEASURE testing priorities
Ongoing | Maintenance | MAP refresh trigger: any material change to model inputs, outputs, use cases, or the regulatory environment
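The ongoing refresh trigger in the roadmap can be wired into change management as a simple gate. A sketch; the trigger tags are illustrative, drawn from the roadmap’s own list:

```python
# Illustrative MAP refresh gate for model change management.
# Trigger tags are assumptions based on the roadmap's refresh criteria.
REFRESH_TRIGGERS = {"inputs_changed", "outputs_changed",
                    "new_use_case", "regulatory_change"}

def needs_map_refresh(change_tags):
    """True if a proposed model change hits any documented refresh trigger."""
    return bool(REFRESH_TRIGGERS & set(change_tags))

print(needs_map_refresh(["new_use_case", "ui_tweak"]))  # True
print(needs_map_refresh(["ui_tweak"]))                  # False
```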

So What? Why MAP Determines Everything Downstream

MEASURE validates risks. MANAGE responds to them. But if your MAP phase was a one-paragraph use case description and a generic risk tier, your MEASURE testing will miss the most relevant failure modes — and your MANAGE responses will be calibrated to the wrong things.

MAP is where you decide: what does “model failure” look like for this specific system? A credit scoring model that produces a 0.5% disparity rate across demographic groups has a very different failure definition than an LLM customer service bot that occasionally generates inaccurate information about rates and fees. MAP 3.2 quantifies why each matters. MAP 5.1 characterizes who gets hurt when it happens.

Examiners reviewing AI governance are increasingly pulling MAP-equivalent documentation first — use case context, risk tier rationale, third-party component assessment. The institutions that survive these reviews cleanly are the ones that built MAP into their AI intake process from the beginning, not the ones that reverse-engineered documentation after deployment.

Check the AI Governance Program Checklist to see how MAP fits into the full exam readiness picture.

If you’re building your AI risk assessment process from scratch, the AI Risk Assessment Template & Guide includes an AI intake framework aligned with MAP requirements — use case documentation, risk tiering, third-party assessment, and impact characterization built for financial services teams.


FAQ

What’s the difference between MAP 1.5 (risk tolerance) and the GOVERN function’s risk tolerance framework?

GOVERN establishes the organization-level risk tolerance — your overall appetite for AI-related risk across the portfolio. MAP 1.5 requires applying that to a specific AI system: given our organizational risk tolerance, where does this specific system sit, and what thresholds would trigger escalation or suspension? One is the policy; the other is the application of the policy to a specific case.

Does every AI system need to go through the full MAP process?

The depth of MAP activity should scale with the risk tier of the system. A high-risk AI system affecting credit decisions for consumer customers should go through all five MAP categories with detailed documentation. A low-risk internal analytics tool might need a lighter MAP 1.1 and 2.1 only. Document your tiering methodology so examiners can see that lower-intensity MAP treatment was a deliberate, risk-based decision, not a shortcut.
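Risk-based MAP scoping like this is easy to document as an explicit mapping from tier to required activities. A sketch; the specific tier-to-activity mapping is an illustrative assumption, not a NIST or FS AI RMF prescription:

```python
# Illustrative tiering of MAP depth. The mapping itself is an assumption;
# document your own methodology so examiners see a deliberate choice.
MAP_SCOPE_BY_TIER = {
    "High":   ["MAP 1", "MAP 2", "MAP 3", "MAP 4", "MAP 5"],
    "Medium": ["MAP 1", "MAP 2", "MAP 3", "MAP 4"],
    "Low":    ["MAP 1.1", "MAP 2.1"],  # lighter treatment, documented as deliberate
}

def required_map_activities(tier):
    """Look up the documented MAP scope for a given risk tier."""
    return MAP_SCOPE_BY_TIER[tier]

print(required_map_activities("Low"))  # ['MAP 1.1', 'MAP 2.1']
```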

How often should MAP be revisited after deployment?

MAP isn’t just a pre-deployment activity. Material changes should trigger MAP review: significant changes to model inputs or outputs, deployment into a new use case or customer segment, changes in the regulatory environment that affect the system’s compliance posture, or evidence from post-deployment monitoring that the system’s risk profile has shifted. Build a MAP refresh trigger into your model change management process.

What’s the relationship between MAP and SR 11-7 model risk management?

SR 11-7 predates NIST AI RMF but covers similar territory in the model development and validation stages. MAP’s use case documentation (MAP 1.1), risk categorization (MAP 2.1), and benefit-cost analysis (MAP 3.1, 3.2) align with SR 11-7’s model development and inventory requirements. The primary gap is MAP 5 — impact characterization at the individual and societal level — which SR 11-7 doesn’t explicitly address and which is increasingly relevant for AI systems subject to fair lending and consumer protection scrutiny.

How does MAP relate to the AI system inventory requirement?

Your AI inventory (GOVERN 1.6) lists what AI systems you have. MAP generates the documentation that should be attached to each inventory entry: the risk context, system categorization, impact assessment, and third-party component mapping for each deployed model. Think of the inventory as the index; MAP documentation as the content for each record.

What is the MAP function in NIST AI RMF?

MAP is the second core function of NIST AI RMF (after GOVERN), organized into five categories: establishing context (MAP 1), categorizing the AI system (MAP 2), understanding capabilities and costs (MAP 3), mapping risks across all system components including third-party inputs (MAP 4), and characterizing impacts on individuals, groups, and society (MAP 5). MAP is where AI risk management becomes concrete — it translates the governance structures built in GOVERN into actual risk identification and framing for each AI system before deployment.

Is the NIST AI RMF MAP function mandatory for banks?

NIST AI RMF is voluntary, but the Treasury’s Financial Services AI Risk Management Framework (FS AI RMF), released February 2026 with 230 control objectives, operationalizes NIST AI RMF functions including MAP for financial institutions. OCC, Federal Reserve, and FDIC examiners use NIST AI RMF language in supervisory conversations, and institutions that can’t demonstrate MAP-equivalent activities — risk context documentation, stakeholder impact analysis, third-party component mapping — are increasingly getting questions in AI-related exam findings.

What does MAP 1.1 require in practice?

MAP 1.1 requires documenting the intended purposes, potentially beneficial uses, context-specific laws and regulations, and deployment settings for each AI system. For a financial institution, that means recording: what the model does, what decisions it informs, which regulations apply (SR 11-7, ECOA/Reg B, FCRA, GLBA), which customer segments are affected, and how human oversight is structured. It’s more than a use-case description — it’s a documented risk context that can be pulled during an exam.

How does MAP 4 handle third-party AI vendor risk?

MAP 4.1 requires organizations to map risks for all AI system components, explicitly including third-party software and data. For vendor-supplied AI, this means documenting the vendor’s model design approach, training data sourcing, risk control mechanisms, explainability and monitoring processes, and regulatory compliance posture. “The vendor is handling it” doesn’t satisfy MAP 4 — you need evidence that you’ve assessed those risks independently, not just accepted the vendor’s SOC 2 report and called it done.

What is the difference between MAP and MEASURE in NIST AI RMF?

MAP establishes the risk context and identifies what risks exist — it’s about framing and categorization. MEASURE quantifies those risks — it’s about testing, evaluation, verification, and validation (TEVV). In sequence: MAP tells you what you’re worried about and why, then MEASURE gives you the data to know whether those worries are warranted. Without a solid MAP phase, MEASURE becomes unfocused — you end up testing for the wrong things or missing critical risk dimensions entirely.

What are the most common MAP function gaps in AI governance exams?

Based on supervisory observations: (1) use cases where intended purpose was documented at deployment but never updated when the model’s actual use expanded; (2) missing documentation of organizational risk tolerance specific to the AI system (MAP 1.5 — most institutions have a generic AI risk tolerance, not system-specific thresholds); (3) incomplete third-party component mapping that covers the primary vendor model but misses downstream data providers or API dependencies; (4) impact characterization (MAP 5) that skipped affected populations outside the primary user group.
Rebecca Leung

Rebecca Leung has 8+ years of risk and compliance experience across first and second line roles at commercial banks, asset managers, and fintechs. Former management consultant advising financial institutions on risk strategy. Founder of RiskTemplates.

Related Framework

AI Risk Assessment Template & Guide

Comprehensive AI model governance and risk assessment templates for financial services teams.
