AI and Fair Lending: UDAAP Risk in Algorithmic Decisioning
TL;DR:
- The CFPB’s UDAAP-as-discrimination approach was vacated by a federal court in 2023 and the Bureau dropped its appeal in 2025 — but adverse action notice requirements under ECOA and Regulation B are fully intact and actively enforced
- CFPB Circular 2023-03 requires AI lenders to provide specific and accurate adverse action reasons — generic checklist responses don’t cut it when an algorithm made the decision
- State AGs are aggressively filling the federal void: New York’s FAIR Business Practices Act (December 2025) added “unfair” and “abusive” prongs to state law; state UDAP statutes in 40+ states are now the primary consumer protection enforcement mechanism
- The compliance obligations haven’t gone away — the enforcers have changed
The CFPB spent three years trying to use UDAAP to attack algorithmic discrimination. The courts said no. The Bureau retreated. And a lot of AI lending teams exhaled.
Too soon.
The federal enforcement posture may have shifted, but the underlying compliance obligations for AI-driven credit decisions are largely intact. Adverse action notice requirements still apply. State UDAP statutes are increasingly aggressive. Private litigation over algorithmic discrimination is active. And the new New York law signed in December 2025 just handed state AGs a weapon that makes federal UDAAP look modest by comparison.
Here’s what the regulatory landscape actually looks like for AI in fair lending — and what your compliance program still needs to have.
What UDAAP Means for AI Models (Before We Get to the Politics)
UDAAP stands for Unfair, Deceptive, or Abusive Acts or Practices. Sections 1031 and 1036 of the Dodd-Frank Act prohibit covered persons from engaging in any of the three. Each prong has a distinct standard.
Unfair: An act or practice is unfair if (1) it causes or is likely to cause substantial injury to consumers, (2) the injury is not reasonably avoidable by consumers, and (3) the injury is not outweighed by countervailing benefits to consumers or to competition. Regulators have argued that algorithmic discrimination — producing systematically worse outcomes for protected class members — qualifies as “unfair” under this standard, regardless of intent.
Deceptive: An act or practice is deceptive if it involves a material representation, omission, act, or practice that misleads or is likely to mislead a reasonable consumer. AI’s opacity creates a specific deceptive risk: if a lender tells a consumer their application was denied for reasons that don’t actually reflect what the model weighted, that’s a deceptive adverse action notice.
Abusive: An act or practice is abusive if it materially interferes with a consumer’s ability to understand a term or condition, or takes unreasonable advantage of a consumer’s circumstances. Targeting vulnerable populations with AI models that amplify existing financial distress is the core abusive concern.
The CFPB’s 2023 attempt to formally embed discrimination within the “unfair” prong — through revisions to its UDAAP exam manual — was vacated by the U.S. District Court for the Eastern District of Texas in September 2023. The Bureau didn’t appeal, and the current administration rescinded the underlying guidance in 2025. So that specific theory is off the table at the federal level.
But the three underlying UDAAP prongs remain. And they apply to AI.
The Obligation That Never Left: Adverse Action Notices
Here’s the UDAAP risk that’s concrete, active, and hasn’t been touched by any court or administration: the requirement to provide specific and accurate adverse action notices when AI models deny credit or change terms.
CFPB Circular 2022-03 established the baseline: creditors using complex algorithms must still identify the principal reasons for adverse actions, and those reasons must accurately reflect the model’s actual decision factors.
CFPB Circular 2023-03 sharpened the requirement: using sample form checklists is not sufficient if the listed reasons don’t actually reflect why the AI denied the application. And creditors cannot claim that model complexity excuses non-compliance. As the Bureau put it, “creditors cannot justify noncompliance with ECOA based on the mere fact that the technology they use to evaluate credit applications is too complicated, too opaque in its decision-making, or too new.”
What “Specific and Accurate” Actually Requires
The Regulation B sample forms were designed for traditional underwriting — things like “insufficient income” or “too many existing credit obligations.” When an AI model is making the decision, these generic reasons may not accurately reflect the actual factors the model weighted.
A gradient boosted model denying credit because of a complex interaction between payment timing patterns, merchant category mix, and mobile device metadata cannot accurately be described as “insufficient income.” That’s the deception risk — your adverse action notice is technically in the form-compliant format but is functionally misleading.
The practical requirements:
| Requirement | Traditional Underwriting | AI/ML Underwriting |
|---|---|---|
| Identify reasons | Manual review of application factors | Translate model feature importance to consumer-readable reasons |
| Specificity | Standard form list typically adequate | Must reflect actual model decision factors |
| Accuracy | Match documented underwriting criteria | Match actual model weights/outputs for that application |
| "Too complex" defense | N/A | Not available under ECOA |
At minimum, you need a process that:
- Captures the principal factors the model weighted for each adverse decision
- Translates those factors into consumer-intelligible language
- Reviews translations for accuracy before notice issuance
- Documents the translation methodology for examiner review
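The capture-translate-review loop above can be sketched in code. Below is a minimal illustration in Python, assuming the model emits signed per-application feature attributions (e.g., SHAP-style values). The feature names, the reason mapping, and the top-4 cutoff are hypothetical placeholders, not Regulation B language, and every mapping entry would itself need compliance review:

```python
# Hypothetical sketch: translate per-application model contributions into
# consumer-readable adverse action reasons. Feature names and reason text
# are illustrative placeholders, not Reg B sample-form language.

# Mapping from internal model features to consumer-intelligible reasons.
# Each entry must be reviewed against what the feature actually measures;
# this table is itself a compliance artifact.
REASON_MAP = {
    "util_ratio_90d": "High utilization of existing revolving credit",
    "late_pmt_count_12m": "Recent history of late payments",
    "inquiry_count_6m": "Number of recent credit inquiries",
    "tenure_months": "Limited length of credit history",
}

def adverse_action_reasons(contributions: dict[str, float], top_n: int = 4) -> list[str]:
    """Return the principal reasons for an adverse decision.

    `contributions` holds signed per-application attributions
    (negative = pushed the score toward denial). Features without a
    reviewed translation are surfaced for manual handling rather than
    silently dropped.
    """
    negative = sorted(
        (kv for kv in contributions.items() if kv[1] < 0),
        key=lambda kv: kv[1],  # most negative contributor first
    )
    reasons = []
    for feature, _weight in negative[:top_n]:
        reasons.append(
            REASON_MAP.get(feature, f"UNMAPPED FEATURE: {feature} (route to manual review)")
        )
    return reasons
```

A denial driven by `{"util_ratio_90d": -0.41, "late_pmt_count_12m": -0.25, "tenure_months": -0.12, "income_norm": 0.3}` would yield the three mapped reasons in order of contribution, and any unmapped feature would be flagged for human review instead of disappearing from the notice.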
This is not just a CFPB concern. ECOA and Regulation B are enforced by multiple federal agencies for the institutions they supervise — among them the OCC, the Federal Reserve, the FDIC, the NCUA, the FHFA, the USDA, and the SBA. The CFPB’s enforcement posture doesn’t determine what those agencies’ examiners will require.
The Federal Pullback and the State AG Gap-Fill
While the CFPB has retreated from active AI fair lending enforcement, state attorneys general have moved aggressively to fill the vacuum. This matters because state UDAP authority often extends to practices that federal UDAAP doesn’t cover — and state AGs don’t need federal coordination to act.
New York’s FAIR Business Practices Act (December 2025)
The most significant state-level development: Governor Hochul signed the Fostering Affordability and Integrity through Reasonable Business Practices Act (the FAIR Business Practices Act) on December 19, 2025. The law adds “unfair” and “abusive” prongs to New York’s General Business Law Section 349, which previously only prohibited “deceptive” acts and practices.
Before this law, the New York AG could pursue deceptive AI practices — like misleading adverse action notices — but not unfair or abusive ones. Now the AG has the full toolkit, and New York’s consumer protection law applies to any business affecting New York consumers, not just state-chartered entities.
State UDAP Landscape
Every state has some form of UDAP statute. What’s changed is enforcement intensity:
- Illinois AG: $20 million settlement with a major auto dealer in December 2024 for deceptive financing practices
- Pennsylvania AG: $11 million settlement with a rent-to-own provider for deceptive financing
- Massachusetts AG: $2.5 million settlement with Earnest Operations in July 2025 for AI underwriting that created disparate impact — using ECOA and state UDAP authority jointly
- New York AG: Actively pursuing CFPB enforcement matters that were abandoned in 2025
Morgan Lewis’s May 2025 analysis documents the scale of the shift: state AGs are using their consumer protection powers to target a broader range of financial products, often coordinating multistate coalitions for larger matters.
The practical implication: even if your federal examination risk is lower in 2026, your state-level risk is higher than it was in 2024.
The Three Remaining UDAAP Risk Areas for AI Lenders
With the formal UDAAP-as-discrimination theory off the table at the federal level, where does actual risk concentrate? Three areas:
1. Deceptive Adverse Action Notices
This is the clearest and most active risk. The CFPB’s 2023 guidance is still on the books even though the broader UDAAP exam manual changes were rescinded. The requirement that adverse action notices be specific and accurate applies to AI-driven credit decisions — and that’s a basis for ECOA enforcement by multiple agencies, not just CFPB.
If your process generates generic adverse action reasons by default without translating actual model factors, that’s a deceptive practice claim waiting to be made.
2. Unfair Algorithmic Targeting
The “unfair” prong doesn’t require discriminatory intent — it requires substantial harm, inability to avoid it, and lack of countervailing benefit. AI models that systematically produce worse outcomes for specific consumer segments, independent of protected class status, can still create unfair practices exposure under state law.
This includes: models trained on biased historical data that perpetuate past unfairness, models that charge higher prices to geographically concentrated populations without actuarial justification, and models that create unnecessarily harsh default consequences for specific behavioral segments.
3. Abusive Practices in High-Risk Product Targeting
The abusive prong — taking unreasonable advantage of consumer circumstances — has specific application to AI-driven offer targeting. If an AI model identifies consumers with constrained financial options and targets them with high-cost products, the “unreasonable advantage” argument materializes.
This is particularly relevant for: earned wage access, high-APR installment products, BNPL with deferred interest, and AI-optimized pricing in auto loans and mortgage servicing.
What Your Compliance Program Needs Right Now
Regardless of federal enforcement posture, these are non-negotiable:
Adverse Action Notice Process Documentation
Document how your AI model’s decision factors are translated into consumer-facing reasons. This needs to be a reviewable, auditable process — not something your tech team does ad hoc. The documentation should show:
- How feature importance outputs are captured for each adverse decision
- How features are mapped to consumer-intelligible language
- Who reviews translations for accuracy before notices go out
- How the process handles unusual or novel model outputs
Bias Testing Log
Even if CFPB isn’t examining your AI models for disparate impact, your state AG might be. The Massachusetts Earnest settlement was brought by a state AG using ECOA and state authority — not CFPB. Maintain a log of:
- What protected classes were tested
- What testing methodology was used
- What the results showed
- What corrective actions were taken, if any
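One way to structure that log is sketched below in Python. The adverse impact ratio (AIR) and the 0.8 screening threshold are borrowed from employment-testing practice and used here only as an illustrative first-pass screen, not a fair lending legal standard; all field names and figures are assumptions:

```python
# Sketch of a bias testing log entry keyed to the four fields above.
# The AIR metric and 0.8 threshold are an illustrative screen only,
# not a legal standard for credit decisions.
from dataclasses import dataclass
from datetime import date

def adverse_impact_ratio(approved_prot, total_prot, approved_ctrl, total_ctrl) -> float:
    """Protected-group approval rate divided by control-group approval rate."""
    return (approved_prot / total_prot) / (approved_ctrl / total_ctrl)

@dataclass
class BiasTestRecord:
    test_date: date
    protected_class: str          # what was tested
    methodology: str              # e.g. "AIR on Q4 decisions, proxied demographics"
    result: float                 # the metric value
    corrective_action: str = "none required"

# Hypothetical example: 312/480 protected-group approvals vs 450/600 control.
air = adverse_impact_ratio(312, 480, 450, 600)   # 0.65 / 0.75 ≈ 0.867
record = BiasTestRecord(
    test_date=date(2026, 1, 15),
    protected_class="sex (female vs male applicants)",
    methodology="adverse impact ratio on 2025-Q4 credit decisions",
    result=round(air, 3),
    corrective_action="none required" if air >= 0.8 else "escalate to model risk",
)
```

The point of the record, whatever metric you use, is that each run captures class, method, result, and disposition in one place an examiner or AG can review.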
See our guide on disparate impact testing for AI lending models for testing methodology detail.
State UDAP Applicability Matrix
If your consumer-facing AI products operate in New York, you now need to assess your practices against the FAIR Business Practices Act’s “unfair” and “abusive” standards, not just “deceptive.” Do that mapping now, before an AG’s civil investigative demand forces you to.
Model Documentation for Fair Lending Exam Prep
If you’re in an OCC-supervised institution, examiners are still conducting fair lending examinations and asking for AI model documentation. For each credit AI model, document:
- Training data characteristics and known limitations
- Protected class variables and proxies excluded from the model
- Disparate impact testing performed pre-deployment and post-deployment
- Adverse action reason generation process
- Model monitoring outputs and any identified drift
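A lightweight completeness check over those documentation fields keeps gaps visible between exams. This is a hypothetical sketch: the field names are illustrative and should be aligned to your own model inventory schema:

```python
# Hypothetical sketch: flag missing documentation fields per model.
# Field names are illustrative; align them with your model inventory.
REQUIRED_FIELDS = {
    "training_data_characteristics",
    "known_data_limitations",
    "excluded_protected_variables",   # protected classes and known proxies
    "disparate_impact_testing",       # pre- and post-deployment results
    "adverse_action_reason_process",
    "monitoring_and_drift",
}

def missing_documentation(model_record: dict) -> set[str]:
    """Return documentation fields that are absent or empty for a model."""
    return {f for f in REQUIRED_FIELDS if not model_record.get(f)}

record = {
    "model_id": "credit-gbm-v3",
    "training_data_characteristics": "2019-2024 originations, US retail",
    "excluded_protected_variables": ["sex", "race", "ZIP-code proxies"],
    "adverse_action_reason_process": "attribution outputs -> reviewed reason map",
}
gaps = missing_documentation(record)
# `gaps` here flags disparate impact testing, known data limitations,
# and monitoring/drift as undocumented for this model.
```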
Our AI governance program checklist has a complete pre-exam documentation framework.
The 30/60/90-Day Remediation Roadmap
Days 1–30: Audit Your Adverse Action Process
Pull your current adverse action notice templates. For AI-driven decisions, verify that the stated reasons accurately reflect your model’s actual decision factors for a sample of recent denials. If you’re using sample-form language that doesn’t map to actual model outputs, that’s your first remediation priority.
Days 31–60: Map State UDAP Exposure
Identify which states your products operate in and which have enacted or enhanced UDAP statutes since 2024. New York is the headline, but Illinois, Massachusetts, California, and Colorado all have robust consumer protection authority. For each state:
- What UDAP standard applies (unfair, deceptive, abusive)?
- Does your AI model’s decisioning or pricing create exposure under that standard?
- Is there pending state legislation that changes the calculus?
Days 61–90: Document and Test
Run or refresh your bias testing for each AI model used in adverse action decisions. Document results. If you identify issues, document your remediation plan and timeline. This documentation is your primary defense against state AG enforcement — it demonstrates proactive risk management rather than willful blindness.
So What?
The headline that “CFPB dropped UDAAP-as-discrimination” is technically accurate and practically misleading. The adverse action notice requirement — which is where the day-to-day compliance exposure lives for AI lenders — is fully intact. State AGs are more active than at any point in recent memory. And New York just created a state-level framework that matches the scope of federal UDAAP.
For AI lenders, the practical advice is unchanged from 2023: you cannot use “the model is too complex to explain” as a compliance answer. Your adverse action notices need to be specific and accurate. Your models need to be tested for disparate impact. Your documentation needs to be ready for examination.
What’s changed is who’s examining and where they’re looking. Your state regulatory risk is now higher than your federal risk. Plan accordingly.
Need a structured framework for documenting your AI model risk across SR 11-7, ECOA, and state fair lending requirements? The AI Risk Assessment Template & Guide includes a model inventory, pre-deployment checklist, adverse action documentation guide, and third-party AI vendor questionnaire — built for teams that need to show progress without hiring a dedicated model risk team.
Frequently Asked Questions
Does UDAAP apply to AI models used in credit decisions?
Yes. Lenders using AI or machine learning models for credit decisions are subject to UDAAP standards — including the prohibition on unfair or deceptive acts or practices. The CFPB’s 2022 guidance made clear that algorithmic complexity doesn’t exempt creditors from their legal obligations. State UDAP laws add another layer, especially as state AGs fill enforcement gaps left by federal pullback.
What adverse action notice requirements apply to AI credit models?
Under ECOA and Regulation B, creditors must provide specific and accurate reasons for adverse actions — including when the decision was made by an AI model. CFPB Circular 2023-03 made clear that using generic sample form reasons is not sufficient if they don’t accurately reflect the model’s actual decision factors. The “the model did it” explanation doesn’t satisfy Regulation B.
Can CFPB treat discrimination as an unfair practice under UDAAP?
The CFPB’s attempt to formally embed discrimination within UDAAP was vacated by a federal court in September 2023, and the Bureau abandoned its appeal in 2025. However, state AGs using state UDAP statutes — including New York’s FAIR Business Practices Act signed in December 2025 — can still pursue discriminatory AI practices under broad unfair and deceptive standards.
What is the biggest UDAAP risk in AI lending today?
The adverse action notice requirement. Any lender using AI models to deny credit, lower limits, or change credit terms must provide specific, accurate reasons — not generic checklist responses. This is an active compliance obligation that remains fully in force regardless of CFPB enforcement posture. State AGs and private litigants actively pursue violations.
How should compliance teams document AI fair lending controls?
Document your adverse action reason generation process — how the AI model’s outputs are translated into specific reasons, how reasons are reviewed for accuracy, and how edge cases are handled. Maintain records of bias testing results, disparate impact analyses, and any corrective actions. For state-regulated activity, also document compliance with applicable state UDAP requirements.
What is the New York FAIR Business Practices Act?
Signed by Governor Hochul on December 19, 2025, the FAIR Business Practices Act adds prohibitions against “unfair” and “abusive” acts or practices to New York’s General Business Law, giving the AG new tools beyond traditional “deceptive” practices. This substantially expands New York’s consumer protection reach and creates meaningful UDAP-equivalent enforcement risk for AI-driven financial products.
Related Template
AI Risk Assessment Template & Guide
Comprehensive AI model governance and risk assessment templates for financial services teams.
Rebecca Leung
Rebecca Leung has 8+ years of risk and compliance experience across first and second line roles at commercial banks, asset managers, and fintechs. Former management consultant advising financial institutions on risk strategy. Founder of RiskTemplates.