RiskTemplates

The 2025–2026 Risk & Compliance Landscape

What happened. What’s coming. What to do about it.
April 2026
RiskTemplates Research
risktemplate.com/research

Executive Summary

The 60-second version for people who skip to the end.

The risk and compliance landscape shifted more dramatically in the past 12 months than in any period since the 2008 financial crisis. Federal enforcement cratered. States rushed to fill the void. AI went from pilot programs to production deployments—while governance lagged behind. And compliance teams are being asked to cover more ground with fewer resources than ever.

The defining challenge of 2026 is the widening gap between what regulators and the market expect from risk and compliance programs—and what those programs can actually deliver with their current resources, tools, and talent.

This report looks back at 2025’s seismic shifts and forward at the emerging risks accelerating in 2026. It synthesizes enforcement data, regulatory developments, industry surveys, and labor market analysis to give CROs and CCOs a clear picture of where things stand—and a concrete action plan for what comes next.

• –90% drop in SEC monetary remedies YoY, $8.2B → $808M (Cornerstone)
• 42 CFPB enforcement actions dismissed since Feb 2025 (Protect Borrowers)
• 78% of orgs use AI; 63% have no AI governance policies (McKinsey)
• 30% of breaches now involve a third party, 2x YoY (Verizon DBIR)
• 80% of CCOs say staffing constraints affect performance (PwC)
• 38.5% of interviews trigger AI cheating flags (Fabric)

Key Takeaways

1. The Enforcement Cliff

Federal financial enforcement fell off a cliff in FY2025. The SEC filed just 313 enforcement actions—the lowest in a decade. Total monetary remedies plummeted to $808 million, a 90% drop from FY2024’s $8.2 billion. The CFPB was functionally gutted: 42 enforcement actions dismissed, $360M+ in consumer compensation pulled back, and enforcement staff slashed by ~80%.

Metric | FY2024 | FY2025 | Change
SEC monetary remedies | $8.2B | $808M | –90%
SEC enforcement actions | 583 | 313 | –46%
Whistleblower awards | $255M | $60M | –76%
Federal penalties (H1 vs H2 2025) | $3.93B | $654M | –83%

Sources: Cornerstone Research, Harvard Law, Paul Weiss, Wolters Kluwer

Entire enforcement categories were abandoned: crypto cases against Coinbase, Kraken, and Binance dismissed; ESG disclosure enforcement scrapped; off-channel communications enforcement concluded. The SEC experienced a 17% headcount reduction, and the Enforcement Division Director resigned after just 7 months. CFPB cases dropped include Capital One ($2B savings interest fraud), Zelle/JPMorgan/BofA/Wells Fargo, TransUnion, Vanderbilt Mortgage, and Rocket Homes.

2026: States Are Filling the Void

After the CFPB dropped its Capital One case, a state AG coalition secured a $425 million settlement plus $530 million in future interest—doubling the original proposed amount. NYDFS imposed $82M+ in fines. California expanded DFPI authority via SB 825. Seven state AGs sued to block the CFPB’s defunding.

Agency | Action | Amount
OCC | TD Bank BSA/AML failures | $1.75B
State AG Coalition | Capital One 360 Savings fraud | $425M + $530M future
NYDFS | Block/Cash App, Paxos, int’l bank | $111.5M combined
California DFPI | Safeguard Metals elder fraud | $51M

So What: The Two-Tier Enforcement World

2. AI: Everyone’s Using It, Nobody’s Governing It

The enforcement cliff is dramatic, but the AI governance gap may be the more dangerous story. In 2025, AI adoption in financial services hit an inflection point. The technology moved into production. The governance did not follow.

2025: Adoption Exploded, Governance Didn’t Keep Up

78% of organizations now use AI in at least one business function. Financial services commands 19.6% of the global AI market—the largest single-industry share, spending $3,200 per employee on AI (2.6x cross-industry average). GenAI tool adoption jumped from 10% in 2023 to 47% in 2025. And 54% of CROs say they have AI in production.

• 63% of organizations have no AI governance policies (SQ Magazine)
• 2.3/5 average responsible AI maturity (McKinsey 2026)
• 80%+ of employees use unapproved shadow AI tools (SQ Magazine)

Shadow AI is the immediate threat: 77% of employees paste sensitive business data into AI tools. Only 30% of organizations have visibility into employee AI usage. Companies with 1,000+ employees manage an average of 250+ unauthorized tools. Shadow AI adds $670K to average breach costs. AI incidents spiked 56.4% in 2024 (233 incidents, Stanford HAI), and FINRA’s 2026 report included a standalone GenAI section for the first time.
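Closing the visibility gap usually starts with egress data firms already collect. As a minimal sketch, the snippet below flags proxy-log traffic to known GenAI domains that are not on an approved-tools list; the domain set and the "user,domain" log format are illustrative assumptions, not a vetted blocklist.

```python
# Known consumer GenAI endpoints to watch for in egress logs.
# Illustrative set only, not a vetted or complete blocklist.
KNOWN_AI_DOMAINS = {
    "chat.openai.com", "api.openai.com", "claude.ai",
    "gemini.google.com", "copilot.microsoft.com",
}

def flag_shadow_ai(log_lines, approved):
    """Return (user, domain) pairs where traffic hit a known AI domain
    that is not on the approved-tools list. Log lines are assumed to be
    'user,domain' CSV records (an assumption; adapt to your proxy)."""
    hits = []
    for line in log_lines:
        user, domain = line.strip().split(",", 1)
        if domain in KNOWN_AI_DOMAINS and domain not in approved:
            hits.append((user, domain))
    return hits
```

Even a crude scan like this tends to surface far more tools than the sanctioned list admits, which is the point: you cannot write an acceptable-use policy for usage you cannot see.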

Where AI Risk Shows Up in Your Business

These are not theoretical failure modes. They are happening in production systems right now.

AI Risk | What It Is | Real-World Incident
Prompt Injection | Malicious inputs hijack model behavior, bypassing guardrails | Chevrolet of Watsonville (Dec 2023): A user prompt-injected the dealership’s ChatGPT-powered chatbot into “agreeing” to sell a 2024 Chevy Tahoe for $1 with “no takesies backsies” as a legally binding offer. Dealership took the bot offline. (VentureBeat)
Hallucination | Model generates confident but factually wrong outputs | Air Canada / Moffatt (Feb 2024): BC Civil Resolution Tribunal ruled the airline liable when its chatbot fabricated a bereavement fare refund policy that did not exist. Court rejected the “chatbot is a separate legal entity” defense. (CBS News)
Brand / Reputational | Model produces offensive or off-brand outputs | DPD (Jan 2024): After a system update, the UK delivery firm’s customer service chatbot was prompted into swearing and writing a poem calling DPD “useless” and “a customer’s worst nightmare.” Screenshots hit 1.1M+ views; DPD disabled the AI same day. (The Guardian)
Bias Amplification | Model reflects or amplifies training data biases | Massachusetts AG v. Earnest (Jul 2025): $2.5M settlement over AI underwriting model with “knockout rules” that auto-denied non-citizens without green cards and used cohort default rate as a proxy variable producing disparate impact for Black and Hispanic applicants. (Mass.gov)
Data Leakage (Third-Party) | Sensitive data from AI vendors or integrations exposed via supply chain | Salesloft Drift / UNC6395 (Aug 2025): Threat actors used stolen OAuth tokens from the Drift conversational AI integration to access 700+ customer Salesforce environments, exposing customer data across multiple enterprises. (Reco.ai)
Model Drift | Performance degrades as real-world data diverges from training data | Emerging risk area: No single named enforcement action to date, but FINRA’s 2026 report explicitly expects firms to monitor for drift. Highest-risk applications: AML transaction monitoring and credit underwriting models.
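Drift monitoring of the kind FINRA now expects can start with a simple distribution-shift statistic. The sketch below uses the Population Stability Index (PSI), a common choice in credit and AML model validation; the binning scheme and the 0.25 alert threshold are illustrative conventions, not a regulatory standard.

```python
import math
from collections import Counter

# Rule of thumb used in model validation: PSI < 0.1 stable,
# 0.1–0.25 moderate shift, > 0.25 material drift worth escalating.
DRIFT_THRESHOLD = 0.25

def bucket_fractions(values, bins):
    """Fraction of values per bucket; bins are upper edges, plus one
    overflow bucket. A 0.5 floor count avoids log(0) on empty buckets."""
    counts = Counter()
    for v in values:
        for i, edge in enumerate(bins):
            if v <= edge:
                counts[i] += 1
                break
        else:
            counts[len(bins)] += 1
    return [max(counts[i], 0.5) / len(values) for i in range(len(bins) + 1)]

def psi(baseline, production, bins):
    """Population Stability Index between training-time and production data."""
    base = bucket_fractions(baseline, bins)
    prod = bucket_fractions(production, bins)
    return sum((p - b) * math.log(p / b) for b, p in zip(base, prod))
```

Run this on model inputs and scores at a fixed cadence; a PSI above the threshold on an AML monitoring or underwriting feature is the kind of documented signal an examiner will ask to see.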

2026: AI Provider Concentration—The Next CrowdStrike?

Your firm likely depends on 2–3 AI providers the same way it depends on 2–3 cloud providers. Except this dependency formed in months, not decades—and with far less oversight.

• 88% of enterprise LLM spend controlled by 3 providers (Menlo Ventures)
• 11% of teams switched AI providers in the past year (Menlo Ventures)
• ~90% of AI training chips from a single vendor: NVIDIA (Carbon Credits)

Anthropic, OpenAI, and Google control ~77% of enterprise LLM workloads and ~88% of spend (Menlo Ventures 2025). Closed-source models power 87% of enterprise workloads. When OpenAI went down for 15+ hours in June 2025, every business built on its API went dark. When AWS went down in October 2025, Sony’s engineering teams lost access to their AI coding assistant, stalling code reviews and forcing manual processes. Average enterprise downtime cost: $23,750 per minute.

The FSB warned in October 2025 that “the market for these products and services is highly concentrated, which could expose financial institutions to operational vulnerabilities and systemic risk.” The ESRB flagged that when many banks use the same AI models, “their strategies and risk assessments can become synchronized”—during market stress, they may all recommend similar actions, intensifying instability.

The Regulatory Patchwork Is Closing In

Regulation | Status | Key Requirement
Colorado AI Act (SB 24-205) | Effective June 2026 | Impact assessments, consumer notifications, AG discrimination reporting
EU AI Act (high-risk) | Effective Aug 2026 | Risk management, transparency, human oversight; fines up to 7% revenue
Treasury FS AI RMF | Released March 2026 | 230 control objectives across 5 domains—de facto exam standard
1,561 state AI bills | 45 states, Q1 2026 | All 50 states introduced AI bills in 2025 for the first time (NCSL)

So What: Your AI Action Plan

3. Third-Party Risk & Operational Resilience

If the AI governance gap is the widest, the third-party risk gap is the most dangerous. 2025 proved that vendor concentration is not a theoretical exercise—it is a multi-billion dollar reality. And 2026 is adding new dimensions of vendor risk that most TPRM programs are not built to handle.

2025: The Year Everything Went Down

Incident | Date | Impact
CrowdStrike global outage | July 2024 | 8.5M systems crashed; $5.4B Fortune 500 losses; $1.15B banking sector
FIS multi-day outage | Jan 2025 | Capital One + 26 banks affected; customers locked out of deposits for 3–4 days
Fiserv infrastructure failure | May 2025 | 60+ apps down incl. Zelle (151M users); BofA, Capital One, Navy Federal, Truist affected; 12+ hours
AWS US-EAST-1 outage | Oct 2025 | 15+ hours; Robinhood, PayPal, Coinbase affected; $250M+ estimated losses

The concentration is staggering. Three core banking providers (Fiserv, Jack Henry, FIS) serve over 70% of all US banks and nearly half of all credit unions. Three cloud providers hold 63% of enterprise cloud spending. When Fiserv’s “planned enhancement” went wrong in May 2025, it did not just take down one bank—it knocked out 60+ applications across dozens of institutions simultaneously.

• 30% of breaches now involve a third party, 2x YoY (Verizon 2025 DBIR)
• $5.56M average financial services breach cost (IBM 2025)
• 65% of FS orgs hit by ransomware (Sophos 2024)

Meanwhile, 73% of financial institutions have only 1–2 full-time employees managing vendor risk—while overseeing 300+ vendors. The average company now manages 286 vendors (up from 237 in 2024). The KPMG 2026 Global TPRM Survey of 851 organizations concluded: “true integration and effectiveness in TPRM remain elusive for most.”

2026: The Risks That Are Compounding

Your Vendors Are Deploying AI—and You Don’t Know How

Core banking providers, transaction monitoring vendors, and credit scoring companies are all embedding AI into their products. FIS is leveraging AI across 200 petabytes of data. Fiserv launched CoreAdvance with AI-driven modernization. NICE Actimize released SAM-10 with multilayered ML for transaction monitoring. Zest AI received a $200M investment for AI credit scoring now used by Freddie Mac and Discover.

But most TPRM programs are still assessing these vendors with pre-AI questionnaires. The data is bleak:

• 72% of FIs are only partially aware which vendors use AI (Ncontracts 2026)
• 0% of organizations report feeling “extremely confident” managing vendor AI risk (Ncontracts 2026)

The Massachusetts AG’s $2.5 million settlement with Earnest (July 2025) is the template for what comes next: a vendor’s AI credit scoring model produced disparate impact based on race and immigration status. The NYDFS October 2025 Industry Letter now explicitly requires contract clauses on vendor AI use and whether your data may be used to train vendor AI models.

Business Continuity: The Gap Between “Having a Plan” and Surviving

Most firms claim they are prepared. The testing data says otherwise.

• 12% of orgs have comprehensive BCPs (HUB 2024)
• 71% do zero failover testing (Secureframe)
• 36% of critical systems fail to recover within target RTOs (Cutover 2025)

CrowdStrike exposed the core BCP failure: plans assumed cyber incidents, not a trusted vendor’s own software causing the outage. CrowdStrike deployed a fix in 78 minutes, but recovery took days to weeks because each machine required manual remediation. 90% of financial services ransomware victims had backup compromise attempts; 48% succeeded. Only 2% of organizations have reached high-level DR maturity.

So What: Your Vendors Are Your Risk

4. The Staffing Crisis—and How to Fix It

Fragmented enforcement. AI systems without governance. 286 vendors managed by 2 people. Every trend in this report converges on one bottleneck: the teams expected to handle all of it. And there are not enough of them.

The Numbers

The US has approximately 418,000 compliance officers and 183,000 information security analysts. That sounds like a lot until you consider the demand: 33,300 compliance officer openings and 16,000 infosec analyst openings are projected annually over the next decade.

• 80% of CCOs say staffing constraints affect performance (PwC)
• 55% of cybersecurity teams are understaffed (ISACA 2025)
• 56% of CCOs considering leaving for better comp (BarkerGilmore 2025)

Between 2016 and 2023, employee hours dedicated to regulatory compliance increased 61%—roughly triple the 20% growth in overall employee hours. Internal audit is getting squeezed too: budget cuts rose from 11% to 19% between 2024 and 2025, and 42% of audit teams lack needed skill sets. Compliance costs run ~$12,800 per employee annually, with large financial institutions spending $10,000+ per employee and up to $200M+ total—roughly 3% of operating expenses.

And the federal supervisory pullback is making it worse, not better. With the Fed shrinking its supervision division by ~30% and the OCC moving to risk-based exams, institutions need stronger internal second-line functions—at the exact moment when staffing is hardest.

What Good Looks Like: Right-Sizing Your Team

Company Stage | Assets | Min. Compliance FTEs | Trigger to Add
Startup | Pre-revenue–$100M | 1–2 (fractional OK) | First bank partnership or licensing
Growth | $100M–$1B | 3–8 | New product lines, MRA, state expansion
Mid-market | $1B–$10B | 8–20 | FDIC $10B threshold / Three Lines requirement
Regional | $10B–$50B | 20–50 | Fed SR 08-8 enhanced expectations
Large | $50B+ | 50–100+ | Firmwide compliance program required

Thresholds reference: Fed SR 08-8 (firmwide compliance program at $50B+), FDIC Three Lines Model at $10B+.
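For planning purposes, the sizing table above can be encoded as a simple lookup. The sketch below treats the report’s bands as heuristics, not regulatory minima:

```python
# Sizing bands from the table above, encoded as (asset ceiling, FTE band).
# These bands are this report's heuristics, not regulatory minima.
SIZING_BANDS = [
    (100e6, (1, 2)),    # Startup: pre-revenue–$100M (fractional OK)
    (1e9,   (3, 8)),    # Growth: $100M–$1B
    (10e9,  (8, 20)),   # Mid-market: $1B–$10B
    (50e9,  (20, 50)),  # Regional: $10B–$50B
]

def min_compliance_ftes(assets_usd):
    """Return the (low, high) compliance FTE band for a firm's asset size."""
    for ceiling, band in SIZING_BANDS:
        if assets_usd <= ceiling:
            return band
    return (50, 100)    # Large: $50B+ ("50–100+")
```

The triggers in the right-hand column matter more than the bands themselves: a new bank partnership, an MRA, or crossing the $10B FDIC threshold should force a re-check regardless of where assets sit.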

The technology multiplier: Building an in-house TPRM program typically runs $400,000–$500,000 annually for mid-sized institutions (personnel, technology, and assessments combined). RegTech does not replace headcount 1:1, but it changes the math—particularly for teams drowning in spreadsheet-based processes.

The Hiring Playbook: Who to Hire and What to Look For

Hire in this order: (1) BSA/AML Officer or CCO—regulatory accountability has to live somewhere. (2) Information Security Officer or CISO. (3) Risk/Compliance Analyst. (4) Vendor/Third-Party Risk Manager. (5) Internal Audit (can outsource initially).

The source debate: The strongest teams blend backgrounds. Industry practitioners who have built and run programs. Former regulators who know the exam playbook. Big 4 alumni for framework design and board-facing work. Avoid building a team that is entirely one type.

Certifications That Command Premium Pay

Certification | Avg. Salary | Source
CISSP | $161K–$176K | Infosec Institute
CISM | ~$155K | Skillsoft
CRISC | ~$148K–$151K | PayScale
CAMS | ~$85K median (+42% vs. non-certified) | ACAMS
CRCM | $90K–$120K | PayScale

Emerging skills in demand: AI literacy, data analytics, and RegTech automation. The shift from “can you read a regulation” to “can you analyze data to identify risk patterns” is accelerating. Per Robert Half’s 2026 Salary Guide, 87% of finance and accounting leaders offer higher pay to candidates with specialized certifications.

The AI Interview Problem: Your Next Hire Might Not Know How to Do the Job

New risk category: 38.5% of job interviews trigger AI cheating flags, based on Fabric’s analysis of 19,368 interviews (July 2025–January 2026). Cheating adoption doubled from 15% to 35% between June and December 2025. This is not hypothetical—it is happening in your hiring pipeline right now.

A growing ecosystem of commercial tools enables candidates to cheat during live video interviews. Invisible screen overlay tools like Cluely use low-level graphics hooks to render AI-generated answers that do not appear on screen share. AI copilots like Final Round AI (1.2M+ users) work as real-time teleprompters. Only 19% of managers are “extremely confident” their hiring process could catch a fraudulent applicant.

For compliance and risk roles, the stakes are uniquely high. A fraudulently hired BSA/AML analyst who cannot perform transaction monitoring creates direct regulatory exposure. A compliance officer who used AI to pass knowledge-based interviews but lacks genuine regulatory judgment is a ticking time bomb during your next exam.

Red Flags to Watch For

Eye movement patterns
  • Horizontal line-by-line sweeps (reading) vs. natural upward gaze (recall)
  • Sharp off-camera glances toward a second screen
  • Eyes snapping back to the left margin repeatedly
Response timing
  • Consistent 3–5 second pause before every answer regardless of difficulty
  • 15–20 second silence followed by a perfect, list-formatted response
Speech patterns
  • Answers too polished for spontaneous conversation—no filler words, no self-correction
  • Sudden vocabulary shifts mid-answer
  • Cannot elaborate when asked follow-up questions on their own answers
Behavioral tells
  • Insists on virtual-only, resists any in-person component
  • Performance improves dramatically when screen is shared vs. camera-only
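Some of the timing tells above can be checked mechanically if the interview platform exposes per-question response latencies. A rough sketch, with a pause window and uniformity cutoff that are illustrative assumptions rather than a validated detector:

```python
import statistics

def suspicious_timing(latencies_sec, window=(3.0, 5.0), max_stdev=0.8):
    """Flag a candidate whose pre-answer pauses sit uniformly inside a
    narrow window regardless of question difficulty. The window and
    stdev cutoff are illustrative, not a validated detector."""
    in_window = all(window[0] <= t <= window[1] for t in latencies_sec)
    uniform = statistics.pstdev(latencies_sec) <= max_stdev
    return in_window and uniform
```

Natural conversation produces messy timing: instant answers to easy questions, long pauses on hard ones. A signal like this should prompt a follow-up round, not an automatic rejection.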

So What: Protect Your Hiring Pipeline

5. Emerging Risks & What to Watch: 2026–2027

The next 12–18 months will test whether firms have been building or coasting. These are the risks accelerating toward your front door.

Regulatory Calendar: What Hits When

Date | Event | Impact
June 2026 | Colorado AI Act takes effect | First comprehensive US state AI law; impact assessments, consumer notifications, AG discrimination reporting
Aug 2026 | EU AI Act high-risk provisions enforced | Applies to any AI touching European customers; fines up to 7% of global revenue
Q3 2026 | NIST AI RMF 1.1 expected release | Updated framework will reset the baseline for AI risk management programs
2026–2027 | Texas TRAIGA enforcement begins | Another major state AI law enters the enforcement phase
Ongoing | State AG enforcement escalation | More coalitions, bigger settlements, broader scope as federal retreat continues
TBD | CFPB legal resolution | Federal judge ruled CFPB must stay funded; political/legal battle continues into 2027

Agentic AI: The Risk Nobody Is Ready For

AI systems that take autonomous actions—executing trades, processing claims, approving vendor payments, managing customer communications—create risk profiles that existing frameworks were not designed for. McKinsey’s 2026 AI Trust Maturity Survey found only one-third of organizations report maturity levels of 3+ in agentic AI governance.

The control challenges are fundamentally different: How do you audit a system that makes thousands of decisions per hour? What happens when an AI agent executes a transaction that violates policy but technically optimized its objective function? Who is liable when an autonomous system makes a decision that harms a customer? These questions do not have settled answers, and regulators are watching firms figure it out in production.

Deepfake and Synthetic Identity Fraud

Deepfake-driven fraud led to $200 million in losses in Q1 2025 alone, including a widely reported $25 million case involving AI-generated video used to authorize wire transfers. Real-time deepfake software can superimpose a different face onto a live video feed, and voice cloning tools can replicate anyone’s voice from seconds of audio.

For financial institutions, the attack surface is expanding: deepfake voice calls impersonating executives to authorize wire transfers, synthetic identities passing KYC verification, and AI-generated documents that pass initial document review. 17% of HR managers have directly encountered deepfake technology in video interviews.

The CFPB Snap-Back Risk

A federal judge ruled in December 2025 that the CFPB must remain funded. The agency is not dead—it is in legal limbo. For compliance teams, this creates a uniquely dangerous dynamic: firms that dismantled CFPB-related controls during the enforcement gap will face the most severe consequences when the pendulum swings. And it will swing. The 42 dismissed cases represent a roadmap of what the next administration’s CFPB will prioritize.

Do not unwind CFPB compliance programs. The firms that maintained their consumer protection controls through the enforcement gap will be positioned as leaders. The ones that cut corners will be first in line for enforcement when the agency rebounds.

The Buy Now, Pay Later Crackdown

Seven Democratic state AGs sent enforcement letters to 6 BNPL providers in December 2025. New York enacted legislation treating BNPL as loans under Truth in Lending Act standards. NYDFS proposed BNPL licensing and supervision rules in February 2026. This is moving from “emerging risk” to “active enforcement” in real time—and any institution offering or partnering with BNPL providers needs to reassess their compliance posture.

The Crypto Deregulation Experiment

Federal agencies are opening the door: the SEC dismissed major crypto cases and launched “Project Crypto” as a regulatory framework. The OCC rescinded prior-approval requirements for bank crypto activities. The FDIC eliminated its notification requirement for crypto/blockchain. For risk teams, this means crypto-adjacent products and partnerships may accelerate—without the enforcement backstop that previously deterred the worst actors. BSA/AML risk from crypto exposure remains elevated.

The Compliance Talent War Intensifies

Per BLS, information security analyst roles are projected to grow 29% over the next decade—one of the fastest-growing occupations in the country. CISO compensation in financial services averages $705K total (Heidrick & Struggles 2025). And 56% of CCOs are considering leaving for better compensation. The talent pool is not growing fast enough to match demand, and firms without competitive pay, clear career paths, and modern tooling will lose their best people to those that do.

The firms that will be best positioned in 2027 are not the ones waiting for regulatory clarity. They are the ones building governance frameworks now, staffing up now, and pressure-testing their resilience now—while their competitors wait for the exam finding.

Bringing It All Together: The 90-Day Action Plan

Every section of this report tells the same story from a different angle: the gap between what is expected of risk and compliance programs and their actual capacity to deliver is widening. The good news: the firms that act now—while competitors wait for regulatory clarity—will own the advantage for years.

Week 1–2: Visibility

Action | Why Now
Build your AI inventory. Catalog every AI tool in use—sanctioned and shadow. | FINRA, OCC, and NYDFS will ask. 63% of firms have no governance policies.
Map vendor concentration. Identify which vendors, if offline for 24 hours, would halt operations. | Three core providers serve 70%+ of US banks. Fiserv took down 60+ apps simultaneously.
Audit AI provider dependencies. Which teams rely on which models? What is the fallback? | Three providers control 88% of enterprise LLM spend.
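The three visibility actions reduce to one structured inventory. A minimal sketch, with illustrative field names: each AI system records its provider, whether it was sanctioned, and whether a documented fallback exists, so provider concentration and missing fallbacks drop out of the same data.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str          # e.g. "fraud-triage chatbot" (illustrative)
    provider: str      # model/API vendor the system depends on
    business_use: str
    sanctioned: bool   # approved through governance, or shadow AI
    has_fallback: bool # documented fallback if the provider goes dark

def concentration_report(inventory):
    """Answer the two Week 1–2 questions: how concentrated are we on
    each provider, and which systems have no fallback?"""
    total = len(inventory)
    shares = {p: n / total
              for p, n in Counter(s.provider for s in inventory).items()}
    no_fallback = [s.name for s in inventory if not s.has_fallback]
    return shares, no_fallback
```

A spreadsheet with these five columns is enough to start; the discipline is keeping it current as teams adopt new tools.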

Week 3–6: Governance

Action | Why Now
Adopt the FS AI RMF as your baseline. Map current state against Treasury’s 230 controls; build a phased roadmap. | De facto exam standard. You need a plan, not all 230 on day one.
Update TPRM questionnaires for AI. Add vendor AI usage, training data rights, bias testing, model change notification. | 72% of FIs only partially know which vendors use AI. NYDFS requires AI contract clauses.
Deploy shadow AI controls. Acceptable use policy, monitoring tools, quarterly audits. | 77% of employees paste sensitive data into AI tools today.
Maintain CFPB-era compliance programs. Do not unwind consumer protection controls. | CFPB is in legal limbo, not dead.

Week 7–12: Resilience

Action | Why Now
Run a compound-scenario BCP test. Simulate simultaneous vendor outage + cyber incident. | Only 12% have comprehensive BCPs. 71% do zero failover testing.
Prepare for Colorado AI Act (June) and EU AI Act (August). | Impact assessments and discrimination reporting are weeks from enforcement.
Redesign interviews for AI fraud. In-person rounds for compliance roles; scenario-based assessments. | 38.5% of interviews trigger cheating flags.
Review cyber insurance. Confirm coverage for vendor outages, business interruption, AI incidents. | After CrowdStrike, only 10–20% of Fortune 500 losses were insured.

KPIs Every Risk Program Should Track in 2026

A 90-day action plan gets you started. Ongoing metrics keep you ahead. These are the indicators every CRO should be reporting to the board—and the ones examiners will increasingly ask about.

AI Governance KPIs

Metric | Target | Why It Matters
% of AI systems in inventory with documented impact assessments | 100% for high-risk systems | Required by Colorado AI Act (June 2026) and EU AI Act (Aug 2026)
Shadow AI detection rate (unauthorized AI tools identified per quarter) | Rising quarter-over-quarter as visibility improves | 77% of employees paste sensitive data into unapproved AI tools
Time-to-detect for model drift in production AI systems | <30 days for consequential decision models | FINRA 2026 report explicitly expects drift monitoring
% of consequential AI decisions with human-in-the-loop review | 100% for credit, hiring, AML, customer-facing | Mass AG / Earnest settlement set the precedent; regulators expect oversight

Third-Party Risk KPIs

Metric | Target | Why It Matters
% of critical vendors with AI usage disclosed in last TPRM review | 100% for tier-1 vendors | 72% of FIs only partially know which vendors use AI
% of critical vendors with fourth-party dependency mapping completed | 100% for tier-1 vendors | NYDFS Oct 2025 letter and DORA both require this
Number of BCP scenarios tested that include vendor failure | At least 2 per year; at least 1 compound scenario | Only 12% of orgs have comprehensive BCPs; 71% do zero failover testing
RTO achievement rate in actual tests (not tabletops) | >90% for mission-critical systems | 36% of critical systems fail to recover within target RTOs when tested
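The RTO achievement KPI is straightforward to compute from live test results. A minimal sketch, assuming each test records a target and a measured recovery time in minutes:

```python
def rto_achievement(test_results, target_pct=90.0):
    """test_results: (system, target_rto_min, actual_recovery_min) tuples
    from live failover tests, not tabletops. Returns the achievement
    percentage, whether it clears the target, and the failing systems."""
    met = sum(1 for _, target, actual in test_results if actual <= target)
    pct = 100.0 * met / len(test_results)
    failing = [name for name, target, actual in test_results if actual > target]
    return pct, pct >= target_pct, failing
```

The failing-systems list is the part worth reporting to the board: an aggregate percentage hides which mission-critical system missed its window and by how much.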

Program Health KPIs

Metric | Target | Why It Matters
Compliance FTE ratio to business growth (new products, geographies, vendors) | Scale proportionally | Compliance burden grew 3x faster than headcount between 2016–2023
% of compliance work performed manually vs. automated | Declining toward <40% manual | Manual-process firms report 4x lower staffing satisfaction
Compliance team retention rate (rolling 12 months) | >85% | 56% of CCOs are actively considering leaving their current role
% of compliance-sensitive hires with in-person verification completed | 100% | Only 19% of managers are “extremely confident” they could catch a fraudulent applicant

The Board Deck Version: Talking Points for Your Next Update

You probably need to brief your board on what is changing in the risk and compliance landscape. Here are the key messages—each with the data to back it up. Use them.

Talking Point 1: “Federal enforcement is down, but our risk exposure is up.”

The SEC dropped enforcement by 90% year-over-year. The CFPB dismissed 42 cases. But state AGs secured a $425M Capital One settlement—doubling what the CFPB originally sought. NYDFS imposed $82M+ in fines. The regulatory landscape has fragmented, not disappeared. Programs built to the lower federal bar will be exposed when the pendulum swings—and the legal battle over the CFPB’s future continues into 2027.

Talking Point 2: “We are using AI. Our governance program is not keeping up.”

78% of organizations use AI. 63% have no AI governance policies. 77% of employees paste sensitive data into unapproved AI tools. The Massachusetts AG’s $2.5M settlement with Earnest in July 2025 is the template for what comes next. Colorado AI Act takes effect June 2026. EU AI Act high-risk provisions take effect August 2026. We need to inventory what we have, govern what we use, and prepare for both regulatory deadlines.

Talking Point 3: “Our vendor risk is concentrated and our BCP is untested.”

Three core banking providers serve 70%+ of US banks. Three AI providers control 88% of enterprise LLM spend. When Fiserv’s May 2025 outage hit, it took down 60+ applications across dozens of institutions simultaneously. Meanwhile, only 12% of organizations have comprehensive BCPs, and 71% do zero failover testing. We should run a compound-scenario test this quarter.

Talking Point 4: “We cannot govern what we cannot staff.”

80% of CCOs say staffing constraints affect performance. 56% are considering leaving. The compliance burden grew 3x faster than headcount between 2016 and 2023. CISO compensation in financial services averages $705K. Without competitive pay, modern tooling, and a clear career path for our team, we will lose our best people—right when the workload is at its peak.

Talking Point 5: “AI is now also a hiring risk.”

38.5% of job interviews trigger AI cheating flags. Candidates use invisible screen overlays and AI teleprompters to game live video interviews. For compliance-sensitive roles, a fraudulently hired BSA/AML analyst who cannot actually perform the work creates direct regulatory exposure. Google, McKinsey, and Cisco have already reinstated in-person interviews for critical roles. We should too.

2025 redefined the landscape. 2026 will reward the firms that saw it clearly and moved early. The data in this report is your starting point. What you do with it is up to you.

RiskTemplates · risktemplate.com · Immaterial Findings Newsletter