Compliance Strategy

How to Build an Annual Compliance Risk Assessment: Methodology, Scoring, and What Regulators Look For

TL;DR

  • Every regulated financial institution needs a documented compliance risk assessment. “We know our regulations” is not the same thing as having one.
  • Regulators — OCC, CFPB, FDIC, Federal Reserve — assess the compliance risk assessment as a core component of your Compliance Management System. Weak or absent CRAs generate MRAs.
  • The methodology that survives exam scrutiny: inherent risk scored on likelihood × impact, control effectiveness assessed independently, residual risk driving resource and testing prioritization.
  • The most common exam failure is a risk assessment that was built once and never updated — particularly when new products launched after the last update.
  • The CRA is not a filing exercise. It should drive your compliance testing calendar, your training prioritization, your audit scope, and your monitoring cadence.

An examiner walks into your compliance review and asks to see your compliance risk assessment. What happens next depends on which of the three camps you’re in.

Camp one: you pull out a current, documented methodology that maps your regulatory universe to your products and business activities, assigns inherent and residual risk scores, and has been updated in the last 12 months. The examiner reviews it, asks a few clarifying questions, and moves on.

Camp two: you pull out something that was built in 2021, has regulations listed but no scoring, and doesn’t reflect three product launches since then. The examiner asks follow-up questions you can’t answer. An MRA forms.

Camp three: you don’t have one. The examiner notes this. Nothing else that follows in the exam matters as much as this single gap.

Most organizations in financial services are in camp two. The compliance risk assessment exists — but it’s not doing the work it needs to do. Here’s how to build one that actually earns its keep.

What Regulators Actually Require

No single federal regulation mandates a “compliance risk assessment” by name, but every major banking and consumer protection regulator treats it as a required component of a functioning Compliance Management System.

The OCC Comptroller’s Handbook describes compliance risk management as including identification and measurement of compliance risk, monitoring and testing to verify control effectiveness, and reporting to management and the board. Identification and measurement require a structured risk assessment: an examiner deciding whether your CMS is “satisfactory” or “strong” is looking at whether you can demonstrate a systematic view of where compliance risk lives and how it’s managed.

The CFPB Examination Manual explicitly requires that an institution’s compliance management system include “risk assessment — a process to identify, assess, prioritize, and manage consumer compliance risks.” The manual describes this as one of the four core components of a CMS alongside board and management oversight, policies and procedures, and consumer complaint response.

The FDIC and Federal Reserve apply parallel expectations through their examination frameworks. The Federal Reserve’s Consumer Compliance Supervision program evaluates whether institutions have identified their consumer compliance risks and whether those assessments are used to allocate supervisory attention appropriately.

For UDAAP specifically, the CFPB guidance calls for full reassessment annually and more frequent review when significant changes occur — new product launches, regulatory changes, or compliance incidents in the same area.

The common thread: regulators don’t want a document that recites applicable laws. They want evidence that management has translated legal obligations into a prioritized view of where things can go wrong.

The Five-Step Compliance Risk Assessment Methodology

Step 1: Define Your Regulatory Universe

Start with a complete inventory of the laws, regulations, and supervisory guidance that apply to your business. This is your regulatory universe — and it should be specific to your products, services, customers, and jurisdictions, not a generic list of “laws that apply to financial services companies.”

A fintech offering consumer installment loans to residents of 22 states operates under a different regulatory universe than a commercial bank offering trade finance products. The inventory should reflect the actual regulatory touchpoints of your business.

Practical approach: organize by category (consumer protection, BSA/AML, fair lending, data privacy, securities, licensing) and within each category, list specific laws and the regulatory authority enforcing them. Flag each as “directly applicable” (you are a covered entity or your products trigger the requirement) versus “monitoring only” (may become applicable under foreseeable business changes).

Update this inventory whenever you launch a new product, enter a new state, serve a new customer segment, or when significant regulatory changes take effect.
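The inventory lends itself to simple structured records rather than free-form prose. A minimal Python sketch, with illustrative entries only (a real universe is specific to your products and jurisdictions, and these category/regulator pairings are examples, not a complete list):

```python
from dataclasses import dataclass

@dataclass
class RegulatoryItem:
    category: str       # e.g. consumer protection, BSA/AML, data privacy
    regulation: str     # specific law or rule
    authority: str      # enforcing regulator
    applicability: str  # "directly applicable" or "monitoring only"

# Illustrative entries only -- not a complete regulatory universe
universe = [
    RegulatoryItem("Consumer protection", "TILA / Regulation Z", "CFPB", "directly applicable"),
    RegulatoryItem("BSA/AML", "Bank Secrecy Act", "FinCEN", "directly applicable"),
    RegulatoryItem("Data privacy", "GLBA Safeguards Rule", "FTC", "monitoring only"),
]

# The "directly applicable" flag drives which items flow into Step 2 mapping
directly_applicable = [r.regulation for r in universe
                       if r.applicability == "directly applicable"]
```

Keeping the applicability flag as explicit data makes the Step 2 mapping and the interim-update triggers easy to automate against.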

Step 2: Map Regulations to Business Activities

A regulatory inventory sitting next to a business description is not a risk assessment — it’s a list. The risk assessment requires mapping: which regulations govern which products, services, channels, and transactions?

Create a matrix with regulations on one axis and business activities on the other. Where they intersect is where compliance risk lives. This mapping exercise does several things at once:

  • Identifies regulatory obligations most likely to be tested by examiners given your business mix
  • Surfaces gaps (regulations that apply but have no clear ownership or control coverage)
  • Establishes the foundation for the inherent risk scoring in Step 3
  • Forces product and compliance teams to have a conversation about how regulatory requirements actually apply to specific workflows

The mapping should be granular enough to be useful. “TILA applies to consumer lending” is too broad. “TILA requires Regulation Z disclosures at application, at credit approval, and at each periodic statement cycle for revolving credit — mapped to Loan Origination System workflow steps 3, 7, and 14” is the level of specificity that supports real control design.
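One way to make the matrix operational is to store it as a regulation-to-activities mapping and query it for coverage gaps. A sketch, with hypothetical mappings for illustration:

```python
# Regulation -> business activities it governs (illustrative mapping only)
reg_to_activities = {
    "TILA / Regulation Z": ["loan origination", "periodic statements"],
    "EFTA / Regulation E": ["payment dispute handling"],
    "UDAAP": [],  # applies to the business, but no activity mapped yet -- a gap
}

# Surface regulations with no mapped activity: these have no clear
# ownership or control coverage and need attention before scoring
gaps = [reg for reg, activities in reg_to_activities.items() if not activities]
```

An empty activity list is exactly the kind of gap the mapping exercise is meant to surface before inherent risk scoring begins.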

Step 3: Score Inherent Risk

Inherent risk is the risk of compliance failure before controls. Scoring it involves two dimensions:

Likelihood of violation: How probable is it that this regulatory requirement will be violated in the course of normal business operations? Factors include transaction volume (more transactions = more opportunities for error), complexity of the requirement (plain vanilla or technically demanding?), history of industry violations (is this area generating MRAs across peer institutions?), and process automation level (manual processes fail more often than automated ones).

Impact of violation: What are the consequences if a violation occurs? Factors include regulatory consequence (civil money penalty, enforcement action, license suspension, consent order?), consumer harm potential (financial harm to consumers, potential class exposure), and reputational impact.

| Inherent Risk Factor | Weight | Score Range | Notes |
| --- | --- | --- | --- |
| Regulatory consequence severity | 25% | 1–5 | 5 = license risk or criminal referral |
| Transaction volume in regulated area | 20% | 1–5 | 5 = >1M transactions/year |
| Consumer harm potential | 20% | 1–5 | 5 = direct financial harm to vulnerable customers |
| Process complexity | 15% | 1–5 | 5 = manual, multi-party, exception-heavy |
| Industry violation history | 10% | 1–5 | 5 = active CFPB/OCC enforcement in this area |
| Product complexity | 10% | 1–5 | 5 = novel product with uncertain regulatory treatment |

The resulting inherent risk score identifies your highest-risk areas before you’ve evaluated whether controls exist or work. High inherent risk areas should receive more robust control investment, more frequent testing, and more frequent assessment updates.
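The factor weighting above reduces to a weighted average. A minimal Python sketch, using the weights from the table; the example scores are illustrative, not a recommended rating for any product:

```python
# Weights mirror the factor table above and must sum to 1.0
WEIGHTS = {
    "regulatory_consequence": 0.25,
    "transaction_volume": 0.20,
    "consumer_harm": 0.20,
    "process_complexity": 0.15,
    "industry_violation_history": 0.10,
    "product_complexity": 0.10,
}

def inherent_risk(scores: dict) -> float:
    """Weighted average of 1-5 factor scores; result stays on the 1-5 scale."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 1.0"
    return sum(WEIGHTS[f] * scores[f] for f in WEIGHTS)

# Illustrative scoring for a hypothetical consumer lending area
example = {
    "regulatory_consequence": 4,
    "transaction_volume": 5,
    "consumer_harm": 4,
    "process_complexity": 3,
    "industry_violation_history": 3,
    "product_complexity": 2,
}
# inherent_risk(example) -> 3.75 on the 1-5 scale
```

Because the result stays on the same 1–5 scale as the inputs, the score can be compared directly across risk areas when prioritizing in Step 5.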

Step 4: Evaluate Control Effectiveness

For each area with meaningful inherent risk, document the controls that exist to prevent or detect violations and assess how effective those controls actually are.

Control assessment covers:

  • Design effectiveness: Is the control designed to actually address the risk? A disclosure control that generates a disclosure but doesn’t verify the customer received and acknowledged it has a design gap.
  • Operating effectiveness: Is the control actually running as designed? Manual controls that require human judgment fail at higher rates than automated controls that run on every transaction.
  • Testing: Is the control subject to independent testing? Self-reported control effectiveness — where the business line that owns the control is also evaluating it — has low credibility with examiners.
  • Control failure history: Has this control failed before? Known failures that haven’t been remediated should be reflected in the control effectiveness score.

Score control effectiveness from 1 (strong: automated, tested, no failures) to 5 (weak: manual, untested, or known failures).

This is where the connection to your RCSA methodology becomes direct: the RCSA’s control-level testing feeds into the CRA’s control effectiveness ratings. The two documents should be consistent — a high-effectiveness rating in the CRA should be supported by RCSA testing results that confirm it.
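The 1 (strong) to 5 (weak) scale above can be expressed as a simple rubric over the four assessment dimensions. A sketch of one such rubric; the point increments are illustrative assumptions, and real programs document richer criteria:

```python
def control_effectiveness(automated: bool, independently_tested: bool,
                          known_unremediated_failures: bool) -> int:
    """Map control attributes to a 1 (strong) .. 5 (weak) rating.

    Illustrative rubric only: the increments are assumptions, not a
    regulatory standard. Independent testing is weighted heaviest because
    self-attested effectiveness has low credibility with examiners.
    """
    score = 1
    if not automated:
        score += 1  # manual controls fail at higher rates
    if not independently_tested:
        score += 2  # self-reported effectiveness is not credible
    if known_unremediated_failures:
        score += 1  # unremediated failures must lower the rating
    return min(score, 5)
```

Whatever rubric you use, the key exam expectation is that it is written down and applied consistently, so the same control attributes always produce the same rating.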

Step 5: Calculate Residual Risk and Prioritize

Residual risk = inherent risk adjusted for control effectiveness. High inherent risk with strong controls produces moderate residual risk. Moderate inherent risk with weak controls produces high residual risk.

The residual risk ranking drives resource allocation decisions: where to focus compliance monitoring, how to prioritize the testing calendar, where to require more frequent management reporting, and what training to prioritize. A residual risk ranking that has no observable connection to how your compliance program allocates resources is a red flag examiners will note.

Document the prioritization explicitly. “Areas with residual risk scores above [X] receive quarterly testing and monthly management reporting. Areas with moderate residual risk receive annual testing. Low residual risk areas are assessed during the annual review cycle.” That kind of documented logic demonstrates that the risk assessment is a management tool, not a filing exercise.

The MRA Triggers: What Generates Exam Findings

Compliance risk assessment deficiencies that generate Matters Requiring Attention tend to cluster around five patterns:

1. Static assessments after dynamic business changes. The most common trigger: a product launched, a regulation changed, or an enforcement action put the industry on notice — and the risk assessment wasn’t updated. Examiners look at your product portfolio, identify anything launched in the past 12–24 months, and check whether the CRA reflects the risk associated with that product. If you launched a buy-now-pay-later product and your CRA still reads like it was written for your legacy installment loan portfolio, the examiner will notice.

2. No documented methodology. A spreadsheet with risk ratings and no explanation of how those ratings were derived is not defensible. Examiners want to understand: what factors drive the inherent risk score, who assessed control effectiveness and how, what does the residual risk score mean in operational terms. If the methodology isn’t written down, the examiner assumes it isn’t consistent.

3. Control assessment done by business lines without independent validation. Self-attestation is not a control. If the same business line that owns the process is also rating its controls as “effective,” the examiner will probe the testing documentation. Absent independent validation — from internal audit, compliance testing, or a third party — high control effectiveness ratings are not credible.

4. Residual risk inconsistent with known issues. If your compliance testing log, your audit findings, or your consumer complaint data shows material problems in an area rated “low residual risk” in your CRA, you have a credibility problem. The risk assessment and the operational compliance data should tell the same story. When they don’t, the examiner investigates why.

5. Assessment disconnected from resource allocation. A CRA that identifies high-risk areas but doesn’t result in more testing, training, or monitoring in those areas suggests the assessment isn’t actually driving the program. Examiners will ask: how does your risk assessment inform your compliance testing calendar? Be able to answer.

How to Keep the CRA Current

An annual update cycle with documented triggers for interim updates is the baseline.

Annual review: Reassess the full regulatory universe, re-score all risk areas, update control effectiveness based on the prior year’s testing results, adjust residual risk ratings, and update prioritization decisions. Document who participated, what methodology was used, and what changed from the prior year.

Interim triggers requiring an update outside the annual cycle:

  • New product or service launch
  • Entry into a new state or customer segment
  • Significant regulatory change or new supervisory guidance
  • Consumer complaint spike in a specific area
  • Adverse audit finding in a compliance area
  • Enforcement action against a peer institution in your product category

The regulatory change management process should have an explicit handoff to the CRA: when a new rule takes effect, who is responsible for updating the regulatory universe and re-evaluating affected risk areas?

Connect the CRA to your compliance management system. The CRA should drive your compliance testing calendar, your training prioritization, your consumer complaint monitoring focus, and your board reporting on top compliance risks. A CRA that sits in a SharePoint folder and doesn’t visibly influence how compliance resources are allocated is not a compliance risk assessment — it’s a document that looks like one.

So What Does a Strong CRA Actually Buy You?

Beyond exam performance, a functioning compliance risk assessment changes how compliance decisions get made. When a product manager wants to change a fee structure, the CRA tells you which regulatory requirements are affected and how much residual risk you’re currently running there. When the audit committee asks what your top compliance risks are, you can answer from a documented, methodology-driven ranking instead of improvising. When an enforcement action drops in your industry, you can check the CRA to see whether you have the same exposure.

Most organizations have the raw ingredients for a strong CRA: they know their regulations, they have control frameworks, they conduct some testing. What they’re missing is the integration layer — the documented methodology that translates all of those inputs into a single, prioritized view of where regulatory risk actually lives.

That integration layer is what examiners are looking for. It’s also what makes compliance a strategic function rather than a reactive one.


The RCSA (Risk & Control Self-Assessment) framework includes a pre-built worksheet with inherent risk scoring, control effectiveness assessment, residual risk calculation, and an executive summary dashboard — designed to complement your compliance risk assessment and satisfy examiner requests for documented methodology.


Frequently Asked Questions

What is a compliance risk assessment and why do regulators require it?
A compliance risk assessment (CRA) is a systematic process for identifying, measuring, and prioritizing compliance risks across your organization's products, services, customers, channels, and regulatory obligations. Regulators require it because it demonstrates that management has a structured view of where regulatory risk actually lives in the business — not just a list of applicable laws, but a ranked assessment of where violations are most likely to occur and what controls exist to prevent them. The OCC Comptroller's Handbook and the CFPB Exam Manual both treat the risk assessment as a foundational component of a compliance management system.
How often should a compliance risk assessment be updated?
The baseline frequency is annual. The CFPB's examination manual requires annual reassessment of the compliance risk landscape, with more frequent updates when material changes occur: new product launches, new customer segments, acquisitions, regulatory changes, or enforcement actions in your industry. An annual cycle with quarterly reviews for high-risk areas or recent changes is the standard that survives exam scrutiny. A CRA updated once in 2022 and never touched since is a red flag.
What's the difference between a compliance risk assessment and an RCSA?
A compliance risk assessment focuses specifically on regulatory and legal obligations — mapping what laws apply, where violations are most likely, and whether controls adequately mitigate those violations. An RCSA (Risk and Control Self-Assessment) is broader, covering all operational risks including compliance risk, and evaluates individual controls against specific risk events. In practice, a compliance risk assessment feeds into the RCSA: the CRA identifies where regulatory risk is high; the RCSA evaluates whether the specific controls in those areas are effective. They're complementary, not interchangeable.
What scoring methodology should a compliance risk assessment use?
The most defensible approach uses a two-dimensional inherent risk score (likelihood × impact before controls) followed by a control effectiveness assessment that produces a residual risk score. Inherent risk factors typically include: regulatory consequence of violation (fine, enforcement, license risk), volume/frequency of transactions in the regulated area, product or service complexity, and customer vulnerability (e.g., subprime lending carries higher consumer harm potential than commercial lending). Control effectiveness factors include: control design, whether controls are automated or manual, testing frequency, and control failure history. Residual risk = inherent risk adjusted by control effectiveness.
What are the most common exam findings related to compliance risk assessments?
The most cited deficiencies are: (1) the risk assessment was not updated when new products were launched or existing products changed; (2) the methodology isn't documented — examiners can't tell how risks were scored or what assumptions drive the ratings; (3) the risk assessment covers regulations on paper but doesn't actually map regulatory requirements to specific business activities, products, or transactions; (4) the control assessment is self-reported by business lines with no independent validation; and (5) residual risk ratings don't align with known control failures or audit findings — a residual rating of 'low' while the audit found material deficiencies in the same area.
Does every company need a standalone compliance risk assessment document?
Yes, if you're a regulated financial institution subject to OCC, FDIC, Federal Reserve, or CFPB examination. These regulators expect to see a documented compliance risk assessment as part of your compliance management system. The format can vary — it doesn't need to be a 200-page document — but it must demonstrate systematic identification and prioritization of compliance risks, documented methodology, and evidence that the assessment drives resource allocation and control investment decisions. A spreadsheet with clear methodology often satisfies examiners better than an elaborate document with no methodology disclosed.
Rebecca Leung

Rebecca Leung has 8+ years of risk and compliance experience across first and second line roles at commercial banks, asset managers, and fintechs. Former management consultant advising financial institutions on risk strategy. Founder of RiskTemplates.

Related Framework

RCSA (Risk & Control Self-Assessment)

141 pre-populated fintech risks with control assessments, questionnaire framework, and testing calendar.
