AI Risk Management

AI Oversight: Who Owns AI Risk in Your Organization?

TL;DR

  • Only 28% of organizations say the CEO takes direct responsibility for AI governance — and just 17% report board oversight (McKinsey, 2025). When nobody owns AI risk, everybody pays for it.
  • The three lines of defense model maps cleanly to AI risk — but most firms haven’t actually done the mapping. Business units build models, risk functions don’t understand them, and audit doesn’t know what to test.
  • This article gives you a concrete RACI matrix, three lines of defense breakdown, and committee structure for AI risk ownership that survives regulatory scrutiny.

“Who owns AI risk at your organization?”

Ask this question at your next leadership meeting. Watch the room. The CRO points to the data science team. The CISO says it’s a model risk issue. The CDO thinks compliance handles it. The business line VP who actually deployed the model three months ago looks at their phone.

This is the AI accountability gap — and regulators have noticed.

The “Everyone Owns It” Trap

Here’s a stat that should make risk professionals uncomfortable: according to McKinsey’s State of AI 2025 survey, only 28% of organizations say their CEO takes direct responsibility for AI governance oversight. Just 17% report their board of directors is formally involved. On average, respondents say two leaders share AI governance responsibilities — which in practice means neither one has clear accountability.

Meanwhile, only 25% of organizations have fully implemented AI governance programs, and just 27% of boards have formally incorporated AI governance into their committee charters.

Translation: three-quarters of organizations are deploying AI without a clear owner for the risk it creates. At regulated financial institutions, that’s not a strategic gap — it’s a ticking examination finding.

Why This Matters Now: Regulators Are Done Waiting

The regulatory posture on AI accountability has shifted from “we’re watching” to “show us your governance structure — now.”

SR 11-7 Still Applies (and Now It’s About AI)

The Federal Reserve’s SR 11-7 and the OCC’s companion guidance (OCC 2011-12) remain the foundational model risk management framework for banking. Originally issued in 2011 for traditional quantitative models, the guidance is now squarely applied to AI and machine learning models during examinations.

The core principle hasn’t changed: the board of directors and senior management are accountable for model risk governance. SR 11-7 requires that institutions establish clear roles for model development, validation, and use — including independent review. When examiners find gaps, they don’t ask “whose model is this?” They ask “whose governance framework failed?”

In September 2025, the OCC issued Bulletin 2025-26, clarifying model risk management expectations for community banks. The message: even smaller institutions are expected to have clear ownership and oversight of model risk, including AI models.

The GAO Raised the Alarm

The GAO’s May 2025 report on AI Use and Oversight in Financial Services (GAO-25-107197) found that federal financial regulators are primarily overseeing AI through existing guidance — not new AI-specific rules. This means institutions can’t wait for a dedicated “AI regulation” before building governance structures. Examiners are using SR 11-7, third-party risk management guidance, and fair lending frameworks right now to evaluate AI deployments.

The GAO also highlighted that NCUA lacks authority to examine technology service providers — a significant gap given credit unions’ increasing reliance on third-party AI services. If your institution outsources AI to a vendor, the ownership question gets even harder: you own the risk even when you don’t own the model.

SEC Is Enforcing AI Accountability

On March 18, 2024, the SEC charged two investment advisers — Delphia (USA) Inc. and Global Predictions Inc. — with making false and misleading statements about their use of AI. Delphia paid a $225,000 civil penalty; Global Predictions paid $175,000. These were the SEC’s first “AI-washing” enforcement actions.

The lesson isn’t just “don’t lie about AI.” It’s that someone at those firms should have been responsible for ensuring AI-related claims were accurate — and either that person didn’t exist, or they failed. Clear AI risk ownership would have caught this before it became an enforcement action.

UK FCA: Personal Accountability for AI Failures

Across the Atlantic, the UK’s Financial Conduct Authority has gone further. Under the Senior Managers and Certification Regime (SM&CR), individuals with responsibility for AI oversight may be held personally accountable if their firm fails to adequately develop, test, monitor, or manage AI risks. This isn’t theoretical — it’s written into the accountability framework. Personal liability changes behavior faster than policy documents ever will.

Three Lines of Defense for AI Risk

The IIA’s Three Lines Model (updated July 2020) maps directly to AI risk — but most firms haven’t done the work to apply it. Here’s what each line actually owns:

First Line: Business Units and Model Developers

Who: Data science teams, business line managers, product owners, engineers who build and deploy AI.

What they own:

  • Day-to-day risk decisions for AI models they develop and use
  • Model documentation: purpose, data inputs, assumptions, limitations
  • Pre-deployment testing for accuracy, bias, and performance
  • Ongoing monitoring of model outputs and drift detection (a minimal drift check is sketched after this list)
  • Escalating issues when models behave unexpectedly
  • Maintaining model inventories for their business unit
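
Drift detection is the piece first-line teams can most easily automate. Below is a minimal sketch of a Population Stability Index (PSI) check, assuming you have the score distribution from development as a baseline and a recent production sample to compare; the function, synthetic data, and the 0.1/0.25 rules of thumb are illustrative, not regulatory thresholds.

```python
import numpy as np

def population_stability_index(baseline, recent, bins=10):
    """Compare a recent score distribution against a development-time baseline.

    Common rule of thumb (illustrative, not a regulatory threshold):
    < 0.1 stable, 0.1-0.25 monitor, > 0.25 investigate.
    """
    # Bin edges come from the baseline so both samples are bucketed identically
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf

    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    recent_pct = np.histogram(recent, bins=edges)[0] / len(recent)

    # Floor empty buckets so the log term stays finite
    base_pct = np.clip(base_pct, 1e-6, None)
    recent_pct = np.clip(recent_pct, 1e-6, None)

    return float(np.sum((recent_pct - base_pct) * np.log(recent_pct / base_pct)))

# Hypothetical usage: development-time scores vs. last month's production scores
rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 5, 10_000)
production_scores = rng.beta(2, 4, 5_000)  # a slightly shifted population
print(f"PSI = {population_stability_index(baseline_scores, production_scores):.3f}")
```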

The common failure: First-line teams build models but don’t document them. Or they document them once and never update the documentation. When the examiner asks for the model inventory, it’s six months stale — and three shadow models aren’t listed at all.

Second Line: Risk Management and Compliance

Who: Chief Risk Officer (CRO), model risk management (MRM) team, compliance officers, legal.

What they own:

  • Setting AI risk appetite and policies
  • Independent model validation (separate from the developers)
  • Challenging first-line risk assessments — not rubber-stamping them
  • Monitoring compliance with internal policies and external regulations
  • Aggregating AI risk across the enterprise (not just one business unit)
  • Reporting to the board on AI risk posture

The common failure: The second line doesn’t have enough technical expertise to challenge AI models effectively. They can review a credit scorecard, but a gradient-boosted model or an LLM-powered decision engine? Many MRM teams are still staffing up. Meanwhile, models go into production with validation that amounts to “the numbers look reasonable.”

Third Line: Internal Audit

Who: Chief Audit Executive (CAE), internal audit team, or co-sourced audit providers.

What they own:

  • Independent assurance that the first and second lines are working
  • Testing whether AI governance policies are actually followed
  • Evaluating the adequacy of model risk management frameworks
  • Reporting to the board/audit committee on gaps and deficiencies
  • Assessing whether AI risks are properly identified and escalated

The common failure: Audit doesn’t audit AI. The audit plan was written before the firm deployed 15 machine learning models, and nobody updated it. Or worse — audit tests the process (is there a policy?) but not the substance (does the model actually work as documented?).

The RACI Matrix: Who Does What

Abstract ownership discussions go nowhere. Here’s a concrete RACI matrix for key AI risk activities:

AI Risk Activity | Business Unit (1L) | MRM/Risk (2L) | Internal Audit (3L) | Board/Committee
Model development & deployment | R/A | C | I | I
Model documentation | R/A | C | I | I
Pre-deployment validation | C | R/A | I | I
Ongoing performance monitoring | R | A | I | I
Bias and fairness testing | R | A | I | I
Model risk appetite & policy | I | R | C | A
Enterprise model inventory | R | R/A | I | I
AI risk reporting to board | I | R | C | A
Independent assurance/audit | I | C | R/A | I
Regulatory exam readiness | R | R/A | C | I
Vendor/third-party AI oversight | R | A | I | I
Incident response (model failure) | R | A | I | I

R = Responsible (does the work), A = Accountable (owns the outcome), C = Consulted, I = Informed

The critical insight: accountability (the “A”) should never sit with the same person who builds the model. That’s the independence requirement baked into SR 11-7 — and it’s where most organizations fail.
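
If the matrix is going to drive anything beyond a slide deck (exam-prep checklists, workflow routing, inventory tooling), it helps to keep it machine-readable. Below is a minimal Python sketch of the same assignments, with a check that every activity has exactly one accountable owner; the keys and the helper function are illustrative, not a standard schema.

```python
# Roles: 1L = business unit, 2L = MRM/risk, 3L = internal audit, BOARD = board/committee
RACI = {
    "model_development_and_deployment": {"1L": "R/A", "2L": "C",   "3L": "I",   "BOARD": "I"},
    "model_documentation":              {"1L": "R/A", "2L": "C",   "3L": "I",   "BOARD": "I"},
    "pre_deployment_validation":        {"1L": "C",   "2L": "R/A", "3L": "I",   "BOARD": "I"},
    "ongoing_performance_monitoring":   {"1L": "R",   "2L": "A",   "3L": "I",   "BOARD": "I"},
    "bias_and_fairness_testing":        {"1L": "R",   "2L": "A",   "3L": "I",   "BOARD": "I"},
    "model_risk_appetite_and_policy":   {"1L": "I",   "2L": "R",   "3L": "C",   "BOARD": "A"},
    "enterprise_model_inventory":       {"1L": "R",   "2L": "R/A", "3L": "I",   "BOARD": "I"},
    "ai_risk_reporting_to_board":       {"1L": "I",   "2L": "R",   "3L": "C",   "BOARD": "A"},
    "independent_assurance_audit":      {"1L": "I",   "2L": "C",   "3L": "R/A", "BOARD": "I"},
    "regulatory_exam_readiness":        {"1L": "R",   "2L": "R/A", "3L": "C",   "BOARD": "I"},
    "vendor_third_party_ai_oversight":  {"1L": "R",   "2L": "A",   "3L": "I",   "BOARD": "I"},
    "incident_response_model_failure":  {"1L": "R",   "2L": "A",   "3L": "I",   "BOARD": "I"},
}

def check_single_accountable(raci):
    """Every activity should have exactly one accountable role (the 'A')."""
    problems = []
    for activity, roles in raci.items():
        accountable = [role for role, code in roles.items() if "A" in code]
        if len(accountable) != 1:
            problems.append((activity, accountable))
    return problems

print(check_single_accountable(RACI) or "Every activity has exactly one accountable owner.")
```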

Who Should Lead? The Realistic Answer

At most financial institutions, AI risk ownership ends up in one of four places. Each has tradeoffs:

CRO / Head of Model Risk Management — The strongest structure for regulated institutions. MRM teams already understand model lifecycle management under SR 11-7. The gap: many MRM teams were built for traditional models and need AI/ML expertise.

CISO / Head of Information Security — Common at fintechs where AI risk is conflated with data and cyber risk. Works for narrow use cases (fraud detection, security operations), but the CISO typically doesn’t own lending models or customer-facing AI decisions.

CDO / Chief Data Officer — Emerging pattern at firms where AI risk is viewed as a data quality and data governance issue. Risk: the CDO may own the data but not the business decisions the models drive.

Dedicated Chief AI Officer (CAIO) — Growing trend at large institutions. A 2026 CISO AI Risk Report (based on a survey of 235 CISOs, CIOs, and senior security leaders) found that organizations are increasingly creating dedicated AI oversight roles. The risk: creating a CAIO without giving them authority, budget, or a reporting line to the board just adds another acronym, not accountability.

The right answer for most mid-size banks and insurance companies: The CRO or Head of MRM owns AI risk as part of the broader model risk management program. They chair or co-chair an AI governance committee with cross-functional representation. AI isn’t a separate risk domain — it’s a new dimension of model risk, operational risk, compliance risk, and third-party risk that needs coordinated oversight.

Building the AI Governance Committee

If you don’t have a dedicated AI governance committee, you’re governing AI through hallway conversations and email chains. Here’s how to stand one up in 90 days:

Days 1-30: Foundation

  • Draft a charter. Define scope (all AI/ML models? Only production models? Vendor models too?), authority (advisory vs. approval), and reporting line (to the board risk committee or the full board).
  • Identify members. At minimum: CRO or MRM lead (chair), CISO, CDO, General Counsel or senior compliance attorney, senior data scientist or head of AI engineering, and at least one business line representative. Invite, don’t voluntell.
  • Inventory what exists. Compile a list of every AI/ML model in production, in development, or procured from a vendor. If you don’t know the full list (most firms don’t), that’s your first finding.
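
A spreadsheet is fine to start, but a consistent record structure is what keeps the inventory from going stale. Here is a minimal sketch of one inventory entry, assuming the kinds of fields an SR 11-7-style inventory typically tracks; the field names and the example record are illustrative, not a prescribed format.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ModelInventoryEntry:
    """One row in the enterprise AI/ML model inventory (illustrative fields only)."""
    model_id: str
    name: str
    business_unit: str                       # first-line owner
    purpose: str                             # what decision the model informs
    model_type: str                          # e.g. "gradient boosting", "LLM", "vendor scorecard"
    risk_tier: int                           # 1 = consumer-impacting, 3 = sandbox
    in_production: bool
    vendor: Optional[str] = None             # populated for third-party models
    owner: str = ""                          # a named individual, not a team mailbox
    last_validated: Optional[date] = None    # independent (second-line) validation date
    known_limitations: list[str] = field(default_factory=list)

# Hypothetical example entry
entry = ModelInventoryEntry(
    model_id="MDL-0042",
    name="Small-business loan pre-screen",
    business_unit="Commercial Lending",
    purpose="Prioritizes applications for manual underwriting review",
    model_type="gradient boosting",
    risk_tier=1,
    in_production=True,
    owner="J. Rivera",
    last_validated=date(2025, 3, 14),
    known_limitations=["Trained on pre-2024 application data"],
)
print(entry.model_id, "tier", entry.risk_tier, "last validated", entry.last_validated)
```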

Days 31-60: Operationalize

  • Establish a tiered review process. Not every model needs the same level of oversight. Define tiers based on risk (e.g., Tier 1: models that directly affect consumer credit decisions; Tier 2: internal efficiency models; Tier 3: experimental/sandbox models); a simple tiering rule is sketched after this list. SR 11-7 supports risk-based approaches.
  • Set meeting cadence. Monthly for Tier 1 model reviews and emerging risk discussions. Quarterly for full portfolio reviews and board reporting. Ad hoc for incident response.
  • Create a decision log. Every model approval, rejection, conditional approval, and escalation gets documented. This is your audit trail.
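
Tier definitions only work if they are applied the same way every time, so it helps to write them down as explicit rules rather than judgment calls. A minimal sketch using the illustrative tiers above; the attributes and cutoffs are examples to adapt to your own risk appetite.

```python
def assign_risk_tier(in_production: bool, affects_consumer_decisions: bool) -> int:
    """Map a model's attributes to an illustrative review tier.

    Tier 1: production models that directly affect consumer outcomes (e.g., credit decisions)
    Tier 2: other production models, such as internal efficiency models
    Tier 3: experimental / sandbox models
    """
    if in_production and affects_consumer_decisions:
        return 1
    if in_production:
        return 2
    return 3

# Hypothetical examples
print(assign_risk_tier(in_production=True, affects_consumer_decisions=True))    # 1: monthly committee review
print(assign_risk_tier(in_production=True, affects_consumer_decisions=False))   # 2: quarterly portfolio review
print(assign_risk_tier(in_production=False, affects_consumer_decisions=False))  # 3: sandbox register only
```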

Days 61-90: Mature

  • Run a tabletop exercise. Pick a scenario: a lending model denies applicants in a protected class at twice the rate of others, and nobody can explain why. Walk through who gets notified, what happens to the model, and who talks to the regulator. Find the gaps before the examiner does; the triggering disparity check can be as simple as the ratio sketch after this list.
  • Produce the first board report. Include: number of AI models in inventory, risk tier distribution, key findings from initial assessments, open issues and remediation timelines, and the committee’s assessment of the institution’s AI risk posture.
  • Schedule the annual review. The charter, membership, and scope get reviewed at least annually. AI is moving fast — your governance needs to keep pace.
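
For the tabletop scenario, the triggering signal can be as simple as a denial-rate ratio by group. A minimal sketch, assuming you can pull a decision extract with a group label for analysis; the synthetic data and the 2x flag mirror the scenario, and a real fair lending analysis belongs with compliance and counsel, not a ten-line script.

```python
from collections import Counter

def denial_rates(decisions):
    """decisions: iterable of (group, outcome) pairs where outcome is 'approve' or 'deny'."""
    totals, denials = Counter(), Counter()
    for group, outcome in decisions:
        totals[group] += 1
        if outcome == "deny":
            denials[group] += 1
    return {group: denials[group] / totals[group] for group in totals}

# Hypothetical monthly decision extract
decisions = [("group_a", "deny")] * 30 + [("group_a", "approve")] * 70 \
          + [("group_b", "deny")] * 15 + [("group_b", "approve")] * 85

rates = denial_rates(decisions)
ratio = rates["group_a"] / rates["group_b"]
print(rates, f"ratio = {ratio:.1f}x")
if ratio >= 2.0:  # the scenario's trigger; escalate per the incident-response RACI row
    print("Escalate: notify MRM, restrict or pause the model, document the decision")
```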

The Shadow AI Problem

Here’s the uncomfortable truth: your model inventory is incomplete. Business units are using AI tools — chatbots, code generators, analytics platforms, vendor-embedded models — that never went through the governance process. According to McKinsey’s 2025 survey data, AI governance is often jointly owned by two leaders, but when responsibility is shared without clear delineation, shadow AI flourishes in the gaps.

Shadow AI isn’t malicious. It’s a data analyst using ChatGPT to clean data. A loan officer using a vendor’s “AI-powered” underwriting tool that came bundled with the core banking system. A marketing team running customer segmentation through a SaaS platform nobody vetted.

How to address it:

  1. Expand your definition of “model.” If it uses data to generate outputs that inform business decisions, it’s in scope. Period.
  2. Create a safe harbor for self-reporting. If business units face punishment for disclosing shadow AI, they’ll hide it. Make it easy to register AI tools with no penalty for past use.
  3. Embed AI risk questions in procurement. Every vendor assessment should ask: “Does this product use AI or machine learning? How? On what data?” Add it to your standard due diligence questionnaire.
  4. Run periodic discovery sweeps. Technology teams can scan for API calls to AI services, unusual data flows, or unauthorized model deployments. Find it before the examiner does.
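
One cheap sweep is scanning outbound proxy or egress logs for calls to known AI service domains. A minimal sketch, assuming you can export logs as one URL per line; the domain watchlist and sample log lines are illustrative and should be replaced with whatever your network team actually captures.

```python
from collections import Counter
from urllib.parse import urlparse

# Illustrative watchlist -- extend with the vendors relevant to your environment
AI_SERVICE_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def scan_egress_log(lines):
    """Count requests to watch-listed AI hosts from an export of outbound URLs."""
    hits = Counter()
    for line in lines:
        host = urlparse(line.strip()).hostname or ""
        if any(host == d or host.endswith("." + d) for d in AI_SERVICE_DOMAINS):
            hits[host] += 1
    return hits

# Hypothetical log excerpt
sample = [
    "https://api.openai.com/v1/chat/completions",
    "https://www.example.com/pricing",
    "https://api.anthropic.com/v1/messages",
]
for host, count in scan_egress_log(sample).items():
    print(f"{host}: {count} request(s) -- is this tool in the model inventory?")
```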

So What? Why This Matters for Your Next Exam

Regulators aren’t going to ask “do you use AI?” They’re going to ask “who’s responsible for AI risk, and how do you know it’s working?”

If you can’t answer that clearly — with a governance structure, a RACI matrix, a model inventory, committee minutes, and board-level reporting — you’re going to get an MRA. Or worse.

The institutions that get ahead of this aren’t the ones with the most sophisticated AI models. They’re the ones where somebody wakes up every morning knowing that AI risk is their problem, and they have the authority, budget, and organizational support to do something about it.

Stop debating whether it’s the CRO’s job or the CISO’s job. Pick an owner. Stand up the committee. Build the inventory. Start the reporting. You can iterate on the structure later — but you can’t iterate on a governance framework that doesn’t exist.


Need a head start? The AI Risk Assessment Template & Guide includes the risk assessment framework, model inventory template, and governance documentation structure to get your AI oversight program from zero to audit-ready.


FAQ

Who should own AI risk at a financial institution?

At most regulated financial institutions, AI risk ownership belongs with the Chief Risk Officer (CRO) or the head of Model Risk Management (MRM), integrated into the existing model risk governance program under SR 11-7. The CRO typically chairs or co-chairs an AI governance committee with cross-functional representation from compliance, legal, data science, and business lines. Creating a separate Chief AI Officer role works for large enterprises but only if the role has real authority and a board reporting line.

How does the three lines of defense model apply to AI?

The three lines of defense model applies to AI the same way it applies to other risks: the first line (business units and model developers) owns day-to-day AI risk management and documentation; the second line (risk management, MRM, compliance) provides independent validation, policy-setting, and oversight; and the third line (internal audit) provides independent assurance that the first two lines are working effectively. The key principle is separation — the people building AI models should never be the same people validating them.

What happens if nobody owns AI risk at my organization?

Without clear AI risk ownership, institutions face regulatory findings (MRAs or consent orders), inability to respond to model failures quickly, incomplete model inventories, and potential fair lending or consumer protection violations. The SEC’s 2024 AI-washing enforcement actions against Delphia ($225,000 penalty) and Global Predictions ($175,000 penalty) show that regulators will act when AI oversight fails. In the UK, individuals can face personal liability under the FCA’s Senior Managers regime for AI governance failures.

Rebecca Leung

Rebecca Leung has 8+ years of risk and compliance experience across first and second line roles at commercial banks, asset managers, and fintechs. Former management consultant advising financial institutions on risk strategy. Founder of RiskTemplates.
