72% of Banks Don't Know Which Vendors Use AI. Here's How to Fix Your TPRM Program.
TL;DR:
- The 2026 Ncontracts TPRM survey found 72% of financial institutions are only “partially aware” of which vendors use AI — and not a single respondent felt “extremely confident” managing AI-related vendor risk.
- Most TPRM teams run with just one or two people managing 300+ vendors. AI risk assessment can’t be a separate workstream — it has to plug into existing due diligence.
- Below: a practical framework for identifying vendor AI use, 20 due diligence questions to add to your assessments today, and a 90-day implementation roadmap.
The Vendor AI Blind Spot Is Worse Than You Think
Here’s a number that should keep every TPRM professional up at night: 72% of financial institutions are only partially aware of which vendors use AI.
That’s from the 2026 State of Third-Party Risk Management Survey published by Ncontracts last week — 173 financial services professionals surveyed between November 2025 and January 2026. And the kicker? Not a single organization rated itself “extremely confident” in managing AI-related vendor risk. The majority landed at “slightly confident” (38%) or “moderately confident” (31%).
Larger institutions didn’t fare any better. In fact, 73% of organizations with 5,001+ employees fell into the lowest confidence tiers. Size and sophistication offer no advantage when your TPRM framework was built for a world where vendors ran deterministic software, not probabilistic models that can drift, hallucinate, and introduce bias your controls were never designed to catch.
This isn’t an abstract governance concern. When a vendor’s AI model makes a decision that affects your customers — credit scoring, fraud detection, customer communications, document processing — you own the regulatory consequences. The OCC, FDIC, and Fed made that crystal clear in their Interagency Guidance on Third-Party Relationships: the use of third parties doesn’t diminish a banking organization’s responsibility to operate in a safe and sound manner and comply with applicable laws.
Why Traditional Vendor Due Diligence Misses AI Risk
Most TPRM programs were built to assess three things: financial stability, information security, and regulatory compliance. The standard due diligence questionnaire asks about SOC 2 reports, business continuity plans, data encryption, and maybe a few questions about subcontractors.
That framework breaks down with AI for specific reasons:
AI outputs aren’t deterministic. Traditional software does the same thing every time given the same input. AI models can produce different outputs from identical inputs, drift over time as data distributions shift, and behave unpredictably in edge cases. Your vendor’s fraud detection model that performed flawlessly in testing might start flagging legitimate transactions from specific zip codes six months later — and nobody notices until customers complain.
Model risk compounds through the supply chain. Your vendor might use a third-party model (say, an OpenAI or Anthropic API) embedded inside their product. That’s a fourth-party AI dependency your current due diligence probably doesn’t cover. When that upstream model gets updated, your vendor’s product behavior changes — and they may not even test for it.
Bias and fairness aren’t covered by SOC 2. A vendor can have a clean SOC 2 Type II report and still deploy a lending model that produces disparate impact across protected classes. Information security controls and AI fairness controls are different domains entirely.
Explainability gaps create regulatory exposure. When a regulator asks why a specific customer was denied or flagged, “the vendor’s AI decided” isn’t an acceptable answer. If your vendor can’t provide model explainability at the individual decision level, you can’t meet your regulatory obligations under ECOA, the Fair Housing Act, or state UDAP laws.
How to Identify Which Vendors Actually Use AI
You can’t assess AI risk in vendors you haven’t identified — and most TPRM teams don’t have a complete inventory of which vendors use AI. Here’s a practical approach to building one:
Step 1: Send a Targeted Disclosure Request
Don’t bury AI questions in your annual due diligence questionnaire — send a standalone, focused request to all critical and high-risk vendors. Keep it short (vendors actually respond to short requests):
| Question | Why It Matters |
|---|---|
| Does your product/service use AI, machine learning, or automated decision-making in any capacity? | Establishes baseline awareness |
| If yes, does AI directly affect decisions about our customers (credit, fraud, pricing, communications)? | Identifies consumer-facing AI risk |
| Do you use third-party AI models or APIs (e.g., OpenAI, Google, AWS AI services) within your product? | Surfaces fourth-party AI dependencies |
| Has AI functionality been added or significantly modified in the last 12 months? | Catches recent changes your last assessment missed |
| Can you provide documentation of model validation, bias testing, and performance monitoring? | Tests vendor maturity — vague answers = red flag |
Step 2: Cross-Reference Against Your Vendor Inventory
Categorize responses into a tiered risk structure:
| Tier | Description | Assessment Frequency | Examples |
|---|---|---|---|
| Tier 1 — Direct AI Impact | Vendor AI directly affects customer-facing decisions | Quarterly review + annual deep assessment | Credit decisioning platforms, fraud detection, automated underwriting |
| Tier 2 — Indirect AI Impact | Vendor uses AI internally but it influences outputs you receive | Semi-annual review | Document processing, data analytics, customer service chatbots |
| Tier 3 — AI-Adjacent | Vendor uses AI for internal operations only (not affecting your customers) | Annual review | HR software, internal productivity tools |
| Tier 4 — No AI | Vendor confirmed no AI use | Standard due diligence cycle | Traditional infrastructure, office supplies |
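The tiering logic above can be sketched as a simple function over disclosure responses. This is an illustrative sketch — the `AIDisclosure` fields are assumed names mapping to the disclosure questions, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class AIDisclosure:
    """Vendor answers to the disclosure request (illustrative field names)."""
    uses_ai: bool                     # Q1: any AI/ML/automated decision-making?
    affects_customer_decisions: bool  # Q2: credit, fraud, pricing, communications?
    influences_outputs: bool          # internal AI that shapes deliverables we receive

def assign_tier(d: AIDisclosure) -> int:
    """Map a disclosure response to the four-tier risk structure."""
    if not d.uses_ai:
        return 4  # No AI: standard due diligence cycle
    if d.affects_customer_decisions:
        return 1  # Direct AI impact: quarterly review + annual deep assessment
    if d.influences_outputs:
        return 2  # Indirect AI impact: semi-annual review
    return 3      # AI-adjacent: annual review

# Example: a fraud-detection vendor whose model scores customer transactions
tier = assign_tier(AIDisclosure(uses_ai=True,
                                affects_customer_decisions=True,
                                influences_outputs=True))
print(tier)  # → 1
```

The key design choice mirrors the table: customer impact, not AI sophistication, drives the tier.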
Step 3: Don’t Trust Self-Reporting Alone
Vendors have incentives to understate AI use — either because they genuinely don’t know (their own developers embedded an API call they didn’t flag), or because they don’t want the compliance overhead. Supplement self-reporting with:
- Contract review: Search existing agreements for language around “automated decision-making,” “machine learning,” “algorithmic,” or “AI.” Many vendors added AI capabilities after the original contract was signed without formal amendment.
- Product release notes: Review the last 12 months of release notes or product updates. Look for terms like “smart,” “intelligent,” “automated,” “predictive,” or “enhanced.”
- Industry intelligence: Check vendor marketing materials — companies love to advertise AI capabilities to prospects while downplaying them in compliance questionnaires.
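For the contract and release-note reviews above, a keyword scan makes a quick first-pass triage — a rough sketch, not a substitute for legal review, and the term list is just the examples from the text:

```python
import re

# Terms from the contract-review and release-note checks above
AI_TERMS = [
    "automated decision-making", "machine learning", "algorithmic", r"\bAI\b",
    "smart", "intelligent", "automated", "predictive", "enhanced",
]
# Longer phrases listed first so they win over their substrings (e.g. "automated")
PATTERN = re.compile("|".join(AI_TERMS), re.IGNORECASE)

def flag_ai_language(text: str) -> list[str]:
    """Return the distinct AI-related terms found in a contract or release note."""
    return sorted({m.group(0).lower() for m in PATTERN.finditer(text)})

clause = "Vendor may employ machine learning and automated decision-making."
print(flag_ai_language(clause))  # → ['automated decision-making', 'machine learning']
```

Any contract that hits on these terms but predates an AI assessment goes on the amendment list.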
20 Vendor AI Risk Due Diligence Questions
Once AI-using vendors are identified, layer these questions into your existing due diligence process. Organized by domain:
Model Governance & Lifecycle
1. Who owns model risk management within your organization? (Title, reporting line, dedicated team vs. shared responsibility)
2. What is your model development lifecycle? (Require documentation of development, validation, deployment, monitoring, and retirement stages)
3. How do you validate models before production deployment? (Look for: independent validation, challenger models, out-of-sample testing, bias testing on protected classes)
4. What is your model change management process? (Version control, regression testing, client notification requirements)
5. How frequently are models retrained, and what triggers retraining? (Calendar-based vs. performance-triggered — both should exist)
Bias & Fairness
6. Do you test for disparate impact across protected classes before and after deployment? (Ask for methodology, not just a yes/no)
7. What fairness metrics do you use? (Demographic parity, equalized odds, predictive parity — the specific metric matters because they can conflict)
8. Can you provide results of your most recent bias audit? (If they can’t or won’t share even a summary, that’s a significant red flag)
9. Have you received any fair lending or discrimination complaints related to AI-driven decisions? (And how were they resolved?)
Data Governance
10. What data sources feed your AI models? (Especially: do they use alternative data, scraped data, or purchased data sets?)
11. How do you ensure training data quality and representativeness? (Garbage in, garbage out — but specifically, biased data in, biased decisions out)
12. Do you use our institution’s customer data to train or improve models used by other clients? (This is more common than anyone admits, and it’s a data privacy landmine)
13. What data retention policies apply to model training data and inference logs?
Explainability & Transparency
14. Can you provide individual-level explanations for AI-driven decisions affecting our customers? (Not just “feature importance” — actual reason codes a compliance officer can review)
15. What is your approach to model documentation? (Look for model cards, datasheets, or equivalent documentation)
16. How do you handle consumer requests for explanation of automated decisions? (Relevant under ECOA adverse action requirements and emerging state AI laws)
Monitoring & Incident Response
17. What ongoing performance monitoring do you conduct? (Drift detection thresholds, accuracy monitoring, fairness metric tracking — ask for specifics, not generalities)
18. What are your defined thresholds for model degradation, and what actions do they trigger? (e.g., “If accuracy drops below X% or the fairness metric exceeds Y, the model is automatically flagged for review”)
19. How do you notify clients of material changes to AI model behavior or performance? (Proactive notification vs. buried in release notes)
20. Have you experienced any AI-related incidents in the past 24 months? (Model failures, unintended bias findings, regulatory inquiries)
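The degradation-threshold example above (an accuracy floor plus a fairness ceiling) is the kind of answer a mature vendor should be able to show you. A minimal sketch of what that check looks like — the threshold values here are placeholders, not recommended limits:

```python
def check_model_health(accuracy: float, disparity_ratio: float,
                       min_accuracy: float = 0.90,
                       max_disparity: float = 1.25) -> list[str]:
    """Return the reasons (if any) a model should be flagged for review.

    accuracy        -- out-of-sample accuracy from the latest monitoring run
    disparity_ratio -- e.g., ratio of adverse-outcome rates across groups
    Thresholds are illustrative; real values come from model risk policy.
    """
    findings = []
    if accuracy < min_accuracy:
        findings.append(f"accuracy {accuracy:.2%} below floor {min_accuracy:.2%}")
    if disparity_ratio > max_disparity:
        findings.append(f"disparity ratio {disparity_ratio:.2f} exceeds {max_disparity:.2f}")
    return findings

# A model drifting on both dimensions gets flagged twice
print(check_model_health(accuracy=0.87, disparity_ratio=1.40))
```

If a vendor can’t articulate their monitoring in roughly these terms — named metrics, defined thresholds, defined actions — treat the answer as a generality.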
Making This Work With a 2-Person TPRM Team
Here’s the elephant in the room: 63% of TPRM programs run with just one or two dedicated staff managing 300+ vendors. Adding a comprehensive AI risk assessment layer sounds great in a conference presentation. In practice, it means one person doing the work of three.
The only way this scales is by integrating AI risk into your existing process — not bolting it on as a separate workstream.
Practical shortcuts that maintain rigor:
- Risk-tier the AI questions. Tier 1 vendors get all 20 questions above. Tier 2 gets questions 1-5 and 10-12. Tier 3 gets the initial disclosure request only. Don’t gold-plate assessments for vendors whose AI doesn’t touch your customers.
- Use the annual due diligence cycle you already have. Don’t create a separate AI assessment timeline. Add the relevant questions to your existing questionnaire and review schedule. One process, one timeline, one tracking mechanism.
- Template the analysis. Build a standardized AI risk scoring rubric so you’re not writing narrative assessments from scratch for each vendor. Score responses on a 1-5 scale across governance, fairness, transparency, and monitoring. Flag anything below 3 for deeper review.
- Leverage contract renewals. The best time to add AI risk provisions is during contract renewal. Add language requiring: (a) notification of material AI changes, (b) right to audit AI-related controls, (c) bias testing results upon request, and (d) incident notification for AI-related failures.
- Prioritize by customer impact. If a vendor’s AI touches credit decisions, you start there. Period. A chatbot vendor using AI for FAQ routing is a lower priority than a fraud detection platform making real-time approve/deny decisions.
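The scoring rubric described above is simple enough to template directly. A sketch, assuming the four domains and the below-3 flag rule from the text (the data shape itself is illustrative):

```python
DOMAINS = ("governance", "fairness", "transparency", "monitoring")

def score_vendor(scores: dict) -> dict:
    """Summarize a vendor's 1-5 rubric scores and flag domains scoring below 3."""
    missing = [d for d in DOMAINS if d not in scores]
    if missing:
        raise ValueError(f"missing domain scores: {missing}")
    flagged = [d for d in DOMAINS if scores[d] < 3]
    return {
        "average": sum(scores[d] for d in DOMAINS) / len(DOMAINS),
        "flagged_for_review": flagged,  # anything below 3 gets a deeper look
    }

# Example: a vendor with solid governance but weak fairness practices
result = score_vendor({"governance": 4, "fairness": 2,
                       "transparency": 3, "monitoring": 5})
print(result)  # fairness scored 2 → flagged for deeper review
```

One rubric, consistently applied, also gives you a defensible paper trail when an examiner asks how assessment conclusions were reached.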
90-Day Implementation Roadmap
This roadmap is designed for the reality of lean TPRM teams. It’s aggressive but achievable if leadership gives air cover.
Days 1-30: Discovery & Inventory
| Week | Deliverable | Owner | Details |
|---|---|---|---|
| 1 | Send AI disclosure request to all critical/high-risk vendors | TPRM Lead | Use the 5-question template above. Set a 15-business-day response deadline. |
| 2 | Review existing contracts for AI-related language | TPRM Analyst / Legal | Flag contracts that predate vendor’s AI capabilities for amendment. |
| 3 | Compile vendor AI inventory and assign risk tiers | TPRM Lead | Map each vendor to Tier 1-4 based on disclosure responses. |
| 4 | Present inventory and tiering to Risk Committee | TPRM Lead + CRO | Get buy-in on assessment approach and resource allocation. Bring the Ncontracts survey data — it helps justify the ask. |
Days 31-60: Assessment & Gap Analysis
| Week | Deliverable | Owner | Details |
|---|---|---|---|
| 5-6 | Deploy full AI due diligence questionnaire to Tier 1 vendors | TPRM Lead | All 20 questions. Give vendors 20 business days. Follow up at day 10 if no response. |
| 7 | Deploy abbreviated questionnaire to Tier 2 vendors | TPRM Analyst | Questions 1-5 and 10-12. Same timeline. |
| 8 | Score responses using standardized rubric and identify gaps | TPRM Lead | Document vendor gaps and remediation requirements. Draft findings for any vendor scoring below 3 in any domain. |
Days 61-90: Remediation & Integration
| Week | Deliverable | Owner | Details |
|---|---|---|---|
| 9-10 | Issue remediation requests to vendors with identified gaps | TPRM Lead | Specific, time-bound requirements with 60-day remediation window. |
| 11 | Update TPRM policy and procedures to incorporate AI risk | TPRM Lead + Compliance | Add AI risk tiering, assessment questions, and monitoring requirements to existing TPRM policy. Don’t create a separate policy — amend the existing one. |
| 12 | Update contract templates with AI risk provisions | TPRM Lead + Legal | Standard AI addendum for new contracts and renewals. Include notification, audit, bias testing, and incident response clauses. |
Ongoing (post-90 days): Quarterly monitoring reviews for Tier 1 vendors, semi-annual for Tier 2. Annual reassessment of vendor AI inventory (vendors adopt AI constantly — your inventory will be stale within a year).
The Regulatory Direction Is Clear
The interagency guidance on third-party relationships already establishes that outsourcing doesn’t outsource responsibility. The OCC’s Bulletin 2023-17, the Fed’s SR 23-7, and the FDIC’s FIL-29-2023 all say the same thing: banks must manage third-party risk commensurate with the level of risk and complexity of the relationship.
AI dramatically increases both the risk and complexity of vendor relationships. Regulators haven’t issued AI-specific TPRM guidance yet, but they don’t need to — the existing framework already requires it. When an examiner asks how you’re managing the risk of a vendor’s AI-driven credit decisioning model, “we ask about their SOC 2” isn’t going to cut it.
The FCC named third-party risk evaluation as one of its eight core best practices for preventing ransomware attacks in its January 2026 Public Notice. State-level AI legislation is accelerating. The direction is unmistakable: vendor AI risk is becoming a supervisory priority, and the institutions that can demonstrate structured oversight will be the ones that pass their next exam without findings.
So What? Start With What You Can Control
The Ncontracts survey paints a bleak picture, but it also reveals an opportunity: if 72% of institutions can’t even identify vendor AI use, the bar for demonstrating examiner-ready oversight is lower than most people think. A structured inventory, risk-tiered assessment process, and documented monitoring program puts a TPRM team ahead of the vast majority of peers.
Three things to do this week:
- Send the 5-question AI disclosure request to your top 20 critical vendors. Don’t wait for the annual cycle. This takes an hour to draft and send.
- Pull your vendor inventory and flag any relationship where the vendor’s marketing mentions AI capabilities that weren’t covered in your last assessment.
- Talk to Legal about adding AI risk provisions to your next three contract renewals. Start building the muscle now instead of scrambling later.
The vendor AI risk blind spot is real, but it’s fixable. It just takes the same disciplined, systematic approach that good TPRM programs already run — extended to cover a risk domain that didn’t exist five years ago.
Need a head start? The Third-Party Risk Management Kit includes vendor assessment templates, risk tiering frameworks, and due diligence questionnaires — ready to customize for your AI risk program.
FAQ
How do I assess vendor AI risk if my TPRM team is only one or two people?
Integrate AI risk questions into your existing due diligence process rather than creating a separate workstream. Risk-tier your vendors so Tier 1 (customer-facing AI) gets the full 20-question assessment while lower tiers get abbreviated versions. Use a standardized scoring rubric to avoid writing narrative assessments from scratch. The key is prioritization — start with the vendors whose AI directly affects customer decisions and work outward.
What if a vendor refuses to answer AI due diligence questions?
Document the refusal and escalate it through your vendor governance process. A vendor that can’t or won’t disclose basic information about how they use AI is a risk signal in itself. For critical vendors, consider adding AI transparency requirements to contract renewals. For non-critical vendors with alternatives, factor the refusal into your overall risk rating and explore replacement options. At minimum, flag it for your Risk Committee so the accepted risk is documented and visible.
Do regulators specifically require AI risk assessment for third-party vendors?
Not yet as standalone guidance, but the existing interagency framework (OCC Bulletin 2023-17, Fed SR 23-7, FDIC FIL-29-2023) already requires banks to manage third-party risk “commensurate with the level of risk and complexity of the relationship.” AI substantially increases both risk and complexity. Examiners are already asking about vendor AI use during TPRM reviews. Proactive documentation of AI risk assessment demonstrates the kind of risk-aware governance that avoids MRAs and matters requiring board attention.
Rebecca Leung
Rebecca Leung has 8+ years of risk and compliance experience across first and second line roles at commercial banks, asset managers, and fintechs. Former management consultant advising financial institutions on risk strategy. Founder of RiskTemplates.