The 60-second version for people who skip to the end.
The risk and compliance landscape shifted more dramatically in the past 12 months than in any period since the 2008 financial crisis. Federal enforcement cratered. States rushed to fill the void. AI went from pilot programs to production deployments—while governance lagged behind. And compliance teams are being asked to cover more ground with fewer resources than ever.
This report looks back at 2025’s seismic shifts and forward at the emerging risks accelerating in 2026. It synthesizes enforcement data, regulatory developments, industry surveys, and labor market analysis to give CROs and CCOs a clear picture of where things stand—and a concrete action plan for what comes next.
Federal financial enforcement fell off a cliff in FY2025. The SEC filed just 313 enforcement actions—the lowest in a decade. Total monetary remedies plummeted to $808 million, a 90% drop from FY2024’s $8.2 billion. The CFPB was functionally gutted: 42 enforcement actions dismissed, $360M+ in consumer compensation pulled back, and enforcement staff slashed by ~80%.
| Metric | FY2024 | FY2025 | Change |
|---|---|---|---|
| SEC monetary remedies | $8.2B | $808M | –90% |
| SEC enforcement actions | 583 | 313 | –46% |
| Whistleblower awards | $255M | $60M | –76% |
| Federal penalties (H2 vs H1 2025) | $3.93B | $654M | –83% |
Sources: Cornerstone Research, Harvard Law, Paul Weiss, Wolters Kluwer
Entire enforcement categories were abandoned: crypto cases against Coinbase, Kraken, and Binance dismissed; ESG disclosure enforcement scrapped; off-channel communications enforcement concluded. The SEC experienced a 17% headcount reduction, and the Enforcement Division Director resigned after just 7 months. Dropped CFPB cases include Capital One ($2B savings interest fraud), the Zelle action against JPMorgan, BofA, and Wells Fargo, TransUnion, Vanderbilt Mortgage, and Rocket Homes.
After the CFPB dropped its Capital One case, a state AG coalition secured a $425 million settlement plus $530 million in future interest—doubling the original proposed amount. NYDFS imposed $82M+ in fines. California expanded DFPI authority via SB 825. Seven state AGs sued to block the CFPB’s defunding.
| Agency | Action | Amount |
|---|---|---|
| OCC | TD Bank BSA/AML failures | $1.75B |
| State AG Coalition | Capital One 360 Savings fraud | $425M + $530M future |
| NYDFS | Block/Cash App, Paxos, int’l bank | $111.5M combined |
| California DFPI | Safeguard Metals elder fraud | $51M |
The enforcement cliff is dramatic, but the AI governance gap may be the more dangerous story. In 2025, AI adoption in financial services hit an inflection point. The technology moved into production. The governance did not follow.
78% of organizations now use AI in at least one business function. Financial services commands 19.6% of the global AI market—the largest single-industry share, spending $3,200 per employee on AI (2.6x cross-industry average). GenAI tool adoption jumped from 10% in 2023 to 47% in 2025. And 54% of CROs say they have AI in production.
Shadow AI is the immediate threat: 77% of employees paste sensitive business data into AI tools. Only 30% of organizations have visibility into employee AI usage. Companies with 1,000+ employees manage an average of 250+ unauthorized tools. Shadow AI adds $670K to average breach costs. AI incidents spiked 56.4% in 2024 (233 incidents, Stanford HAI), and FINRA’s 2026 report included a standalone GenAI section for the first time.
These are not theoretical failure modes. They are happening in production systems right now.
| AI Risk | What It Is | Real-World Incident |
|---|---|---|
| Prompt Injection | Malicious inputs hijack model behavior, bypassing guardrails | Chevrolet of Watsonville (Dec 2023): A user prompt-injected the dealership’s ChatGPT-powered chatbot into “agreeing” to sell a 2024 Chevy Tahoe for $1 with “no takesies backsies” as a legally binding offer. Dealership took the bot offline. (VentureBeat) |
| Hallucination | Model generates confident but factually wrong outputs | Air Canada / Moffatt (Feb 2024): BC Civil Resolution Tribunal ruled the airline liable when its chatbot fabricated a bereavement fare refund policy that did not exist. Court rejected the “chatbot is a separate legal entity” defense. (CBS News) |
| Brand / Reputational | Model produces offensive or off-brand outputs | DPD (Jan 2024): After a system update, the UK delivery firm’s customer service chatbot was prompted into swearing and writing a poem calling DPD “useless” and “a customer’s worst nightmare.” Screenshots hit 1.1M+ views; DPD disabled the AI same day. (The Guardian) |
| Bias Amplification | Model reflects or amplifies training data biases | Massachusetts AG v. Earnest (Jul 2025): $2.5M settlement over AI underwriting model with “knockout rules” that auto-denied non-citizens without green cards and used cohort default rate as a proxy variable producing disparate impact for Black and Hispanic applicants. (Mass.gov) |
| Data Leakage (Third-Party) | Sensitive data from AI vendors or integrations exposed via supply chain | Salesloft Drift / UNC6395 (Aug 2025): Threat actors used stolen OAuth tokens from the Drift conversational AI integration to access 700+ customer Salesforce environments, exposing customer data across multiple enterprises. (Reco.ai) |
| Model Drift | Performance degrades as real-world data diverges from training data | Emerging risk area: No single named enforcement action to date, but FINRA’s 2026 report explicitly expects firms to monitor for drift. Highest-risk applications: AML transaction monitoring and credit underwriting models. |
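Model drift is the one row above that teams can operationalize directly in code today. As an illustrative sketch only (not a method FINRA prescribes), the Population Stability Index is a common way to quantify how far a production score distribution has moved from its training-era baseline:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline (training-era)
    score distribution and recent production scores.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift,
    > 0.25 significant drift worth investigating."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[-1] = float("inf")  # catch production scores above the baseline max

    def bucket_shares(values):
        counts = [0] * bins
        for v in values:
            for i in range(bins):
                if edges[i] <= v < edges[i + 1]:
                    counts[i] += 1
                    break
            else:
                counts[0] += 1  # scores below the baseline min
        n = len(values)
        # floor empty buckets so log() is defined; shares may then sum
        # to slightly more than 1, which is acceptable for a sketch
        return [max(c / n, 1e-4) for c in counts]

    e, a = bucket_shares(expected), bucket_shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

Run quarterly (or monthly for AML and underwriting models, the highest-risk applications named above), a PSI trend line gives the "<30 days to detect" metric later in this report something concrete to measure.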
Your firm likely depends on 2–3 AI providers the same way it depends on 2–3 cloud providers. Except this dependency formed in months, not decades—and with far less oversight.
Anthropic, OpenAI, and Google control ~77% of enterprise LLM workloads and ~88% of spend (Menlo Ventures 2025). Closed-source models power 87% of enterprise workloads. When OpenAI went down for 15+ hours in June 2025, every business built on its API went dark. When AWS went down in October 2025, Sony’s engineering teams lost access to their AI coding assistant, stalling code reviews and forcing manual processes. Average enterprise downtime cost: $23,750 per minute.
The FSB warned in October 2025 that “the market for these products and services is highly concentrated, which could expose financial institutions to operational vulnerabilities and systemic risk.” The ESRB flagged that when many banks use the same AI models, “their strategies and risk assessments can become synchronized”—during market stress, they may all recommend similar actions, intensifying instability.
| Regulation | Status | Key Requirement |
|---|---|---|
| Colorado AI Act (SB 24-205) | Effective June 2026 | Impact assessments, consumer notifications, AG discrimination reporting |
| EU AI Act (high-risk) | Effective Aug 2026 | Risk management, transparency, human oversight; fines up to 7% revenue |
| Treasury FS AI RMF | Released March 2026 | 230 control objectives across 5 domains—de facto exam standard |
| 1,561 state AI bills | Introduced across 45 states as of Q1 2026 | All 50 states introduced AI bills in 2025 for the first time (NCSL) |
If the AI governance gap is the widest, the third-party risk gap is the most dangerous. 2025 proved that vendor concentration is not a theoretical exercise—it is a multi-billion dollar reality. And 2026 is adding new dimensions of vendor risk that most TPRM programs are not built to handle.
| Incident | Date | Impact |
|---|---|---|
| CrowdStrike global outage | July 2024 | 8.5M systems crashed; $5.4B Fortune 500 losses; $1.15B banking sector |
| FIS multi-day outage | Jan 2025 | Capital One + 26 banks affected; customers locked out of deposits for 3–4 days |
| Fiserv infrastructure failure | May 2025 | 60+ apps down incl. Zelle (151M users); BofA, Capital One, Navy Federal, Truist affected; 12+ hours |
| AWS US-EAST-1 outage | Oct 2025 | 15+ hours; Robinhood, PayPal, Coinbase affected; $250M+ estimated losses |
The concentration is staggering. Three core banking providers (Fiserv, Jack Henry, FIS) serve over 70% of all US banks and nearly half of all credit unions. Three cloud providers hold 63% of enterprise cloud spending. When Fiserv’s “planned enhancement” went wrong in May 2025, it did not just take down one bank—it knocked out 60+ applications across dozens of institutions simultaneously.
Meanwhile, 73% of financial institutions have only 1–2 full-time employees managing vendor risk—while overseeing 300+ vendors. The average company now manages 286 vendors (up from 237 in 2024). The KPMG 2026 Global TPRM Survey of 851 organizations concluded: “true integration and effectiveness in TPRM remain elusive for most.”
Core banking providers, transaction monitoring vendors, and credit scoring companies are all embedding AI into their products. FIS is leveraging AI across 200 petabytes of data. Fiserv launched CoreAdvance with AI-driven modernization. NICE Actimize released SAM-10 with multilayered ML for transaction monitoring. Zest AI received a $200M investment for AI credit scoring now used by Freddie Mac and Discover.
But most TPRM programs are still assessing these vendors with pre-AI questionnaires, and the data is bleak.
The Massachusetts AG’s $2.5 million settlement with Earnest (July 2025) is the template for what comes next: a vendor’s AI credit scoring model produced disparate impact based on race and immigration status. The NYDFS October 2025 Industry Letter now explicitly requires contract clauses on vendor AI use and whether your data may be used to train vendor AI models.
Most firms claim they are prepared. The testing data says otherwise.
CrowdStrike exposed the core BCP failure: plans assumed cyber incidents, not a trusted vendor’s own software causing the outage. CrowdStrike deployed a fix in 78 minutes, but recovery took days to weeks because each machine required manual remediation. 90% of financial services ransomware victims had backup compromise attempts; 48% succeeded. Only 2% of organizations have reached high-level DR maturity.
Fragmented enforcement. AI systems without governance. 286 vendors managed by 2 people. Every trend in this report converges on one bottleneck: the teams expected to handle all of it. And there are not enough of them.
The US has approximately 418,000 compliance officers and 183,000 information security analysts. That sounds like a lot until you consider the demand: 33,300 compliance officer openings and 16,000 infosec analyst openings are projected annually over the next decade.
Between 2016 and 2023, employee hours dedicated to regulatory compliance increased 61%—roughly triple the 20% growth in overall employee hours. Internal audit is getting squeezed too: budget cuts rose from 11% to 19% between 2024 and 2025, and 42% of audit teams lack needed skill sets. Compliance costs run ~$12,800 per employee annually, with large financial institutions spending $10,000+ per employee and up to $200M+ total—roughly 3% of operating expenses.
And the federal supervisory pullback is making it worse, not better. With the Fed shrinking its supervision division by ~30% and the OCC moving to risk-based exams, institutions need stronger internal second-line functions—at the exact moment when staffing is hardest.
| Company Stage | Assets | Min. Compliance FTEs | Trigger to Add |
|---|---|---|---|
| Startup | Pre-revenue–$100M | 1–2 (fractional OK) | First bank partnership or licensing |
| Growth | $100M–$1B | 3–8 | New product lines, MRA, state expansion |
| Mid-market | $1B–$10B | 8–20 | FDIC $10B threshold / Three Lines requirement |
| Regional | $10B–$50B | 20–50 | Fed SR 08-8 enhanced expectations |
| Large | $50B+ | 50–100+ | Firmwide compliance program required |
Thresholds reference: Fed SR 08-8 (firmwide compliance program at $50B+), FDIC Three Lines Model at $10B+.
The technology multiplier: Building an in-house TPRM program typically runs $400,000–$500,000 annually for mid-sized institutions (personnel, technology, and assessments combined). RegTech does not replace headcount 1:1, but it changes the math—particularly for teams drowning in spreadsheet-based processes.
Hire in this order: (1) BSA/AML Officer or CCO—regulatory accountability has to live somewhere. (2) Information Security Officer or CISO. (3) Risk/Compliance Analyst. (4) Vendor/Third-Party Risk Manager. (5) Internal Audit (can outsource initially).
The source debate: The strongest teams blend backgrounds. Industry practitioners who have built and run programs. Former regulators who know the exam playbook. Big 4 alumni for framework design and board-facing work. Avoid building a team that is entirely one type.
| Certification | Avg. Salary | Source |
|---|---|---|
| CISSP | $161K–$176K | Infosec Institute |
| CISM | ~$155K | Skillsoft |
| CRISC | ~$148K–$151K | PayScale |
| CAMS | ~$85K median (+42% vs. non-certified) | ACAMS |
| CRCM | $90K–$120K | PayScale |
Emerging skills in demand: AI literacy, data analytics, and RegTech automation. The shift from “can you read a regulation” to “can you analyze data to identify risk patterns” is accelerating. Per Robert Half’s 2026 Salary Guide, 87% of finance and accounting leaders offer higher pay to candidates with specialized certifications.
A growing ecosystem of commercial tools enables candidates to cheat during live video interviews. Invisible screen overlay tools like Cluely use low-level graphics hooks to render AI-generated answers that do not appear on screen share. AI copilots like Final Round AI (1.2M+ users) work as real-time teleprompters. Only 19% of managers are “extremely confident” their hiring process could catch a fraudulent applicant.
For compliance and risk roles, the stakes are uniquely high. A fraudulently hired BSA/AML analyst who cannot perform transaction monitoring creates direct regulatory exposure. A compliance officer who used AI to pass knowledge-based interviews but lacks genuine regulatory judgment is a ticking time bomb during your next exam.
The next 12–18 months will test whether firms have been building or coasting. These are the risks accelerating toward your front door.
| Date | Event | Impact |
|---|---|---|
| June 2026 | Colorado AI Act takes effect | First comprehensive US state AI law; impact assessments, consumer notifications, AG discrimination reporting |
| Aug 2026 | EU AI Act high-risk provisions enforced | Applies to any AI touching European customers; fines up to 7% of global revenue |
| Q3 2026 | NIST AI RMF 1.1 expected release | Updated framework will reset the baseline for AI risk management programs |
| 2026–2027 | Texas TRAIGA enforcement begins | Another major state AI law enters the enforcement phase |
| Ongoing | State AG enforcement escalation | More coalitions, bigger settlements, broader scope as federal retreat continues |
| TBD | CFPB legal resolution | Federal judge ruled CFPB must stay funded; political/legal battle continues into 2027 |
AI systems that take autonomous actions—executing trades, processing claims, approving vendor payments, managing customer communications—create risk profiles that existing frameworks were not designed for. McKinsey’s 2026 AI Trust Maturity Survey found only one-third of organizations report maturity levels of 3+ in agentic AI governance.
The control challenges are fundamentally different: How do you audit a system that makes thousands of decisions per hour? What happens when an AI agent executes a transaction that violates policy but technically optimized its objective function? Who is liable when an autonomous system makes a decision that harms a customer? These questions do not have settled answers, and regulators are watching firms figure it out in production.
Deepfake-driven fraud led to $200 million in losses in Q1 2025 alone, including a widely reported $25 million case involving AI-generated video used to authorize wire transfers. Real-time deepfake software can superimpose a different face onto a live video feed, and voice cloning tools can replicate anyone’s voice from seconds of audio.
For financial institutions, the attack surface is expanding: deepfake voice calls impersonating executives to authorize wire transfers, synthetic identities passing KYC verification, and AI-generated documents that pass initial document review. 17% of HR managers have directly encountered deepfake technology in video interviews.
A federal judge ruled in December 2025 that the CFPB must remain funded. The agency is not dead—it is in legal limbo. For compliance teams, this creates a uniquely dangerous dynamic: firms that dismantled CFPB-related controls during the enforcement gap will face the most severe consequences when the pendulum swings. And it will swing. The 42 dismissed cases represent a roadmap of what the next administration’s CFPB will prioritize.
Seven Democratic state AGs sent enforcement letters to 6 BNPL providers in December 2025. New York enacted legislation treating BNPL as loans under Truth in Lending Act standards. NYDFS proposed BNPL licensing and supervision rules in February 2026. This is moving from “emerging risk” to “active enforcement” in real time—and any institution offering or partnering with BNPL providers needs to reassess their compliance posture.
Federal agencies are opening the door: the SEC dismissed major crypto cases and launched “Project Crypto” as a regulatory framework. The OCC rescinded prior-approval requirements for bank crypto activities. The FDIC eliminated its notification requirement for crypto/blockchain. For risk teams, this means crypto-adjacent products and partnerships may accelerate—without the enforcement backstop that previously deterred the worst actors. BSA/AML risk from crypto exposure remains elevated.
Per BLS, information security analyst roles are projected to grow 29% over the next decade—one of the fastest-growing occupations in the country. CISO compensation in financial services averages $705K total (Heidrick & Struggles 2025). And 56% of CCOs are considering leaving for better compensation. The talent pool is not growing fast enough to match demand, and firms without competitive pay, clear career paths, and modern tooling will lose their best people to those that do.
Every section of this report tells the same story from a different angle: the gap between what is expected of risk and compliance programs and their actual capacity to deliver is widening. The good news: the firms that act now—while competitors wait for regulatory clarity—will own the advantage for years.
| Action | Why Now |
|---|---|
| Build your AI inventory. Catalog every AI tool in use—sanctioned and shadow. | FINRA, OCC, and NYDFS will ask. 63% of firms have no governance policies. |
| Map vendor concentration. Identify which vendors, if offline for 24 hours, would halt operations. | Three core providers serve 70%+ of US banks. Fiserv took down 60+ apps simultaneously. |
| Audit AI provider dependencies. Which teams rely on which models? What is the fallback? | Three providers control 88% of enterprise LLM spend. |
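The vendor-concentration mapping above can start as something as simple as a script over the vendor inventory. A minimal sketch, with a made-up inventory schema (the `functions` and `tier` fields are illustrative, not a standard):

```python
from collections import defaultdict

# Hypothetical vendor inventory; field names are illustrative only.
vendors = [
    {"name": "CoreBankCo",  "functions": ["core_banking", "payments"], "tier": 1},
    {"name": "CloudCo",     "functions": ["hosting"],                  "tier": 1},
    {"name": "LLMCo",       "functions": ["genai_assist"],             "tier": 2},
    {"name": "BackupLLMCo", "functions": ["genai_assist"],             "tier": 2},
]

def single_points_of_failure(vendors):
    """Return functions served by exactly one vendor: a 24-hour outage
    of that vendor halts the function with no fallback."""
    by_function = defaultdict(list)
    for v in vendors:
        for fn in v["functions"]:
            by_function[fn].append(v["name"])
    return {fn: names[0] for fn, names in by_function.items()
            if len(names) == 1}
```

In this toy inventory, core banking, payments, and hosting each depend on a single vendor and get flagged; the GenAI assistant has a fallback and does not. The point of the exercise is the flagged list, which maps directly onto the Fiserv and AWS incidents above.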
| Action | Why Now |
|---|---|
| Adopt the FS AI RMF as your baseline. Map current state against Treasury’s 230 controls; build a phased roadmap. | De facto exam standard. You need a plan, not all 230 on day one. |
| Update TPRM questionnaires for AI. Add vendor AI usage, training data rights, bias testing, model change notification. | 72% of FIs only partially know which vendors use AI. NYDFS requires AI contract clauses. |
| Deploy shadow AI controls. Acceptable use policy, monitoring tools, quarterly audits. | 77% of employees paste sensitive data into AI tools today. |
| Maintain CFPB-era compliance programs. Do not unwind consumer protection controls. | CFPB is in legal limbo, not dead. |
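The shadow AI monitoring control above can begin with a pass over egress or proxy logs. A minimal sketch, assuming logs reduce to `user,domain` pairs and a hand-maintained domain watchlist; real programs would pull the watchlist from a CASB/SSE feed rather than hard-coding it:

```python
# Hypothetical watchlist of AI endpoints; maintain from a vendor feed
# or the firm's acceptable-use register in practice.
AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}
SANCTIONED = {"claude.ai"}  # tools approved under the acceptable-use policy

def flag_shadow_ai(proxy_log_lines):
    """Map each user to the set of unsanctioned AI endpoints they hit,
    given proxy log lines of the form 'user,domain'."""
    shadow = {}
    for line in proxy_log_lines:
        user, domain = line.strip().split(",")
        if domain in AI_DOMAINS and domain not in SANCTIONED:
            shadow.setdefault(user, set()).add(domain)
    return shadow
```

Even this crude version produces the quarter-over-quarter "shadow AI detection rate" metric described later in this report, and it separates policy-compliant use from the 77%-of-employees problem.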
| Action | Why Now |
|---|---|
| Run a compound-scenario BCP test. Simulate simultaneous vendor outage + cyber incident. | Only 12% have comprehensive BCPs. 71% do zero failover testing. |
| Prepare for Colorado AI Act (June) and EU AI Act (August). | Impact assessments and discrimination reporting are weeks from enforcement. |
| Redesign interviews for AI fraud. In-person rounds for compliance roles; scenario-based assessments. | 38.5% of interviews trigger cheating flags. |
| Review cyber insurance. Confirm coverage for vendor outages, business interruption, AI incidents. | After CrowdStrike, only 10–20% of Fortune 500 losses were insured. |
A 90-day action plan gets you started. Ongoing metrics keep you ahead. These are the indicators every CRO should be reporting to the board—and the ones examiners will increasingly ask about.
| Metric | Target | Why It Matters |
|---|---|---|
| % of AI systems in inventory with documented impact assessments | 100% for high-risk systems | Required by Colorado AI Act (June 2026) and EU AI Act (Aug 2026) |
| Shadow AI detection rate (unauthorized AI tools identified per quarter) | Rising quarter-over-quarter as visibility improves | 77% of employees paste sensitive data into unapproved AI tools |
| Time-to-detect for model drift in production AI systems | <30 days for consequential decision models | FINRA 2026 report explicitly expects drift monitoring |
| % of consequential AI decisions with human-in-the-loop review | 100% for credit, hiring, AML, customer-facing | Mass AG / Earnest settlement set the precedent; regulators expect oversight |
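The human-in-the-loop metric exists because bias is measurable. A hedged sketch of the disparate impact ratio at issue in cases like the Earnest settlement, using the EEOC "four-fifths" rule of thumb; this is illustrative only, as actual fair-lending testing uses more rigorous statistical methods:

```python
def disparate_impact_ratio(outcomes, protected_group, reference_group):
    """Approval-rate ratio between a protected group and a reference
    group. The four-fifths rule of thumb treats a ratio below 0.8 as
    evidence of adverse impact worth investigating.
    `outcomes` is a list of (group, approved) pairs, approved in {0, 1}."""
    def approval_rate(group):
        decisions = [approved for g, approved in outcomes if g == group]
        return sum(decisions) / len(decisions)
    return approval_rate(protected_group) / approval_rate(reference_group)
```

For example, a model approving 40% of protected-class applicants against 80% of reference-class applicants yields a ratio of 0.5, well under the 0.8 screen. Running this check on every model release is far cheaper than a $2.5M settlement.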
| Metric | Target | Why It Matters |
|---|---|---|
| % of critical vendors with AI usage disclosed in last TPRM review | 100% for tier-1 vendors | 72% of FIs only partially know which vendors use AI |
| % of critical vendors with fourth-party dependency mapping completed | 100% for tier-1 vendors | NYDFS Oct 2025 letter and DORA both require this |
| Number of BCP scenarios tested that include vendor failure | At least 2 per year; at least 1 compound scenario | Only 12% of orgs have comprehensive BCPs; 71% do zero failover testing |
| RTO achievement rate in actual tests (not tabletops) | >90% for mission-critical systems | 36% of critical systems fail to recover within target RTOs when tested |
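The RTO achievement metric above is straightforward to compute from live failover test results. A minimal sketch, assuming each test records the system name plus actual and target recovery times in minutes (the record format is an assumption, not a standard):

```python
def rto_report(test_results, threshold=0.90):
    """Summarize live failover tests (not tabletops).
    `test_results`: list of (system, actual_minutes, target_minutes).
    Returns the achievement rate, whether it meets the board-reported
    threshold, and which systems missed their RTO."""
    misses = [system for system, actual, target in test_results
              if actual > target]
    rate = 1 - len(misses) / len(test_results)
    return {"rate": rate, "meets_target": rate >= threshold, "misses": misses}
```

The `misses` list is the part the board should see: it names the systems that would have stayed down in a real event, which is exactly what the 36%-fail-to-recover statistic above is warning about.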
| Metric | Target | Why It Matters |
|---|---|---|
| Compliance FTE ratio to business growth (new products, geographies, vendors) | Scale proportionally | Compliance burden grew 3x faster than headcount between 2016–2023 |
| % of compliance work performed manually vs. automated | Declining toward <40% manual | Manual-process firms report 4x lower staffing satisfaction |
| Compliance team retention rate (rolling 12 months) | >85% | 56% of CCOs are actively considering leaving their current role |
| % of compliance-sensitive hires with in-person verification completed | 100% | Only 19% of managers are “extremely confident” they could catch a fraudulent applicant |
You probably need to brief your board on what is changing in the risk and compliance landscape. Here are the key messages—each with the data to back it up. Use them.
The SEC dropped enforcement by 90% year-over-year. The CFPB dismissed 42 cases. But state AGs secured a $425M Capital One settlement—doubling what the CFPB originally sought. NYDFS imposed $82M+ in fines. The regulatory landscape has fragmented, not disappeared. Programs built to the lower federal bar will be exposed when the pendulum swings—and the legal battle over the CFPB’s future continues into 2027.
78% of organizations use AI. 63% have no AI governance policies. 77% of employees paste sensitive data into unapproved AI tools. The Massachusetts AG’s $2.5M settlement with Earnest in July 2025 is the template for what comes next. Colorado AI Act takes effect June 2026. EU AI Act high-risk provisions take effect August 2026. We need to inventory what we have, govern what we use, and prepare for both regulatory deadlines.
Three core banking providers serve 70%+ of US banks. Three AI providers control 88% of enterprise LLM spend. When Fiserv’s May 2025 outage hit, it took down 60+ applications across dozens of institutions simultaneously. Meanwhile, only 12% of organizations have comprehensive BCPs, and 71% do zero failover testing. We should run a compound-scenario test this quarter.
80% of CCOs say staffing constraints affect performance. 56% are considering leaving. The compliance burden grew 3x faster than headcount between 2016 and 2023. CISO compensation in financial services averages $705K. Without competitive pay, modern tooling, and a clear career path for our team, we will lose our best people—right when the workload is at its peak.
38.5% of job interviews trigger AI cheating flags. Candidates use invisible screen overlays and AI teleprompters to game live video interviews. For compliance-sensitive roles, a fraudulently hired BSA/AML analyst who cannot actually perform the work creates direct regulatory exposure. Google, McKinsey, and Cisco have already reinstated in-person interviews for critical roles. We should too.
RiskTemplates · risktemplate.com · Immaterial Findings Newsletter