Vendor Risk Questionnaire Template: The Questions That Actually Surface Third-Party Risk
TL;DR
- A completed vendor questionnaire where every answer is “Yes” is not a clean vendor — it’s a questionnaire designed to be answered yes
- The highest-signal questions are evidence questions, not status questions: “When was your last pen test and what did it find?” reveals more than “Do you conduct penetration tests?”
- Three question categories most questionnaires miss: AI vendor risk (flagged in the Treasury’s FS AI RMF, February 2026), fourth-party/sub-processor disclosure (required by DORA Article 28 and expected by U.S. regulators), and cloud concentration risk
- The 2023 OCC/FDIC/Fed Interagency Guidance on Third-Party Relationships requires questionnaire depth to be commensurate with vendor risk — Tier 1 critical vendors get the full treatment; Tier 3 office-supply vendors do not
Your vendor sent back a completed 120-question questionnaire. Every answer is “Yes.” No gaps, no caveats, no qualifications. That’s not a clean vendor — that’s a questionnaire designed to be answered yes.
The problem with most vendor questionnaires isn’t coverage. It’s that they ask status questions (“Do you have MFA enabled?”) instead of evidence questions (“On which system types is MFA enforced, and when did you last audit compliance?”). Status questions produce checkbox responses. Evidence questions reveal whether the status is real.
This is a domain-by-domain breakdown of the questions that actually surface third-party risk — including the three categories that most standard questionnaires miss entirely.
Why Most Questionnaires Produce Useless Answers
Three structural failures drive most questionnaire ineffectiveness:
Status without evidence. “Do you have a penetration test?” gets answered yes by any vendor who had one done three years ago. “When was your most recent external penetration test? Who conducted it? What were the critical findings, and what is the current remediation status?” forces a substantive response — or exposes that the vendor can’t answer it.
Missing AI risk coverage. The 2025 SIG covers 21 risk domains and maps to 31 reference frameworks including DORA, NIS2, and NIST CSF 2.0. AI-specific questions remain nascent in most standard templates despite the U.S. Treasury’s Financial Services AI Risk Management Framework (FS AI RMF, February 2026) formalizing expectations for assessment of vendor-supplied AI tools. Standard SOC 2 and ISO 27001 controls provide zero coverage for hallucination, training data provenance, model bias, or prompt injection.
No fourth-party visibility. SecurityScorecard data shows that 12.7% of third-party breaches extend into fourth-party incidents — vendors that appear independent but share common sub-processors, cloud regions, or infrastructure providers. Yet most questionnaires never ask who the vendor outsources to.
The upgrade that fixes all three: shift from status questions to evidence questions, add an AI section for any vendor using machine learning, and make sub-processor disclosure mandatory.
The Status-to-Evidence Shift
Before the domain breakdown, the single most useful questionnaire reform:
| Status Version (Produces checkboxes) | Evidence Version (Surfaces reality) |
|---|---|
| Do you have MFA enabled? | On which system types is MFA enforced without exception? When did you last audit MFA compliance across your user population? |
| Do you have a BCP? | When was your most recent BCP or DR test? What scenarios were tested? What gaps were identified, and what is the remediation status? |
| Do you encrypt data at rest? | What encryption standard and key length are used for data at rest? Where are key management systems hosted? Who has access to encryption keys? |
| Do you have cyber insurance? | What is your current policy limit? Does coverage include ransomware and business interruption? What are the sub-limits for first-party losses versus third-party liability? |
| Do you conduct background checks? | What background check components are included (criminal, credit, identity)? What roles are subject to periodic re-screening, and at what frequency? |
Information Security: The Questions That Matter
The SIG Lite (2025) and CAIQ Lite (124 questions) cover information security broadly. Within that domain, these are the questions that consistently surface real gaps — not clean checkboxes:
1. Penetration testing: When was your most recent external penetration test? Was it conducted by an independent third party — not your internal team — and can you provide the testing firm’s name? What were the most critical findings? What is the current remediation status for critical and high findings?
2. Privileged access management: What privileged access management (PAM) solution is in place for administrative accounts? When was your most recent privileged access review, and what percentage of accounts were modified or removed as a result?
3. Multi-factor authentication: Is multi-factor authentication enforced for all remote access, administrative portals, and cloud management consoles — without exception? (The Change Healthcare breach in February 2024, which compromised 192.7 million patient records, originated from a Citrix remote access portal lacking MFA — a single questionnaire item would have flagged it.)
4. Vulnerability management: What is your internal SLA for patching critical vulnerabilities after disclosure? What was your mean time to patch (MTTP) for critical findings in the past 12 months?
5. Access provisioning for third parties: How are your vendors and subcontractors granted access to your systems? Is access time-limited, reviewed quarterly, and individually logged? Can you provide your third-party access policy?
6. Incident response with evidence: What is your contractual commitment for notifying us of a confirmed breach affecting our data? Can you provide your incident response plan and the results of your most recent IR tabletop exercise?
7. Security certifications with documentation: What security certifications do you currently hold (SOC 2 Type II, ISO 27001, PCI DSS, HITRUST, FedRAMP)? Can you provide the full SOC 2 report — including exceptions, management responses, and the auditor’s opinion letter? (Summary reports without exception detail are not adequate for due diligence.)
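Question 4's mean-time-to-patch metric is easy to sanity-check once a vendor supplies disclosure and patch dates for critical findings. A minimal sketch in Python; the findings list and the 14-day SLA are illustrative, not drawn from any real vendor:

```python
from datetime import date

# Illustrative patch records: (disclosure_date, patch_date) per critical finding.
critical_findings = [
    (date(2025, 1, 10), date(2025, 1, 17)),
    (date(2025, 3, 2), date(2025, 3, 20)),
    (date(2025, 6, 15), date(2025, 6, 18)),
]

SLA_DAYS = 14  # assumed internal SLA for critical vulnerabilities

days_to_patch = [(patched - disclosed).days for disclosed, patched in critical_findings]
mttp = sum(days_to_patch) / len(days_to_patch)
sla_breaches = sum(1 for d in days_to_patch if d > SLA_DAYS)

print(f"MTTP: {mttp:.1f} days; SLA breaches: {sla_breaches} of {len(days_to_patch)}")
# Prints: MTTP: 9.3 days; SLA breaches: 1 of 3
```

A vendor that reports a 14-day SLA but cannot produce the underlying dates to support an MTTP figure is answering a status question, not an evidence question.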
Business Continuity and Disaster Recovery: What Most Templates Miss
Most questionnaires ask “Do you have a BCP?” and “What is your RTO?” and stop there. The evidence questions:
When was your most recent full DR test? What failure scenarios were tested? Did the test achieve your stated RTOs and RPOs? What gaps were identified, and what remediation actions resulted?
Geographic separation: Are your DR systems hosted in a geographically separate data center or cloud region from your primary systems — not just a separate availability zone within the same region?
Cloud region specifics: For cloud-hosted services, which cloud providers and regions host this service? What is the contractual SLA from the cloud provider? Is the service deployable in a region that avoids our existing cloud concentration?
Simultaneous failure scenario: If your primary and DR sites were both impacted simultaneously — regional weather event, cloud region outage, power grid failure — what is your tertiary recovery option?
Sub-Processors and Fourth-Party Risk: The Questions Nobody Asks
DORA Article 28 requires financial entities to assess ICT concentration risk before entering any vendor contract and to understand the sub-contracting chain for critical or important functions. The 2023 Interagency Guidance on Third-Party Relationships from the OCC, Federal Reserve, and FDIC extends the lifecycle obligation to include ongoing oversight of sub-processors — not just direct vendor relationships.
Most questionnaires skip these questions entirely:
Complete sub-processor list: Provide a complete list of sub-processors and subcontractors that will have access to our data or provide any component of the service we are contracting for. Include their primary function, data access scope, and country of operation.
Sub-processor certifications: For each sub-processor with access to our data, provide their SOC 2 Type II report or equivalent independent attestation. If unavailable for any sub-processor, describe the compensating controls you maintain.
Change notification: What is your process for notifying us when you add new sub-processors, replace existing ones, or when a sub-processor experiences a security incident affecting your service?
Fourth-party concentration: Do you use any shared infrastructure components — cloud hosting providers, CDN providers, DNS resolvers, payment processors — that are likely used by other vendors in our portfolio? (This surfaces fourth-party concentration: your vendors appear independent but share the same sub-processors, cloud regions, or infrastructure, meaning a single sub-processor incident can cascade across your entire vendor portfolio simultaneously.)
Sub-processor transition: What is your contingency plan if a critical sub-processor terminates services, becomes insolvent, or is subject to a regulatory action?
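Once sub-processor lists come back across the portfolio, the fourth-party concentration check reduces to a set computation: any sub-processor named by two or more vendors is a shared dependency. A sketch with invented vendor and sub-processor names:

```python
# Illustrative sub-processor disclosures collected from completed questionnaires.
portfolio = {
    "VendorA": {"AWS us-east-1", "Cloudflare", "Stripe"},
    "VendorB": {"AWS us-east-1", "Twilio"},
    "VendorC": {"Azure eastus", "Cloudflare"},
}

# Invert the mapping: which vendors depend on each sub-processor?
exposure: dict[str, set[str]] = {}
for vendor, subs in portfolio.items():
    for sub in subs:
        exposure.setdefault(sub, set()).add(vendor)

# A sub-processor shared by two or more vendors is a fourth-party concentration point.
concentration = {sub: sorted(vendors) for sub, vendors in exposure.items() if len(vendors) > 1}
for sub, vendors in sorted(concentration.items()):
    print(f"{sub}: shared by {', '.join(vendors)}")
# Prints:
# AWS us-east-1: shared by VendorA, VendorB
# Cloudflare: shared by VendorA, VendorC
```

This is why sub-processor disclosure has to be mandatory: the computation is trivial, but only if every vendor actually answers the question.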
The fourth-party risk post covers the full scope of fourth-party exposure and why the 2024 CrowdStrike and Change Healthcare incidents were fourth-party concentration events dressed as single-vendor failures.
AI Vendor Questions: The Gap in Every Standard Template
If your vendor uses AI in any component of the service — fraud detection, document processing, recommendation engines, chatbots, decision support, underwriting assistance — standard questionnaires provide no coverage for AI-specific risks. Ask every vendor: “Do any components of your service use AI or ML models, including models accessed via third-party API providers?” If yes:
AI component disclosure: Describe the AI or ML components used in this service. Are any decisions affecting our customers, our data, or our transactions automated by AI models without human review? If so, what human review or override process is in place?
Foundation model dependencies: Which third-party foundation models or AI APIs does your service rely on (e.g., OpenAI, Anthropic, Google Gemini, Cohere, Meta Llama)? What are your data processing and retention agreements with each provider? Does our data flow to these providers?
Output error management: How is AI output error — including hallucination, factual error, or unexpected model behavior — detected and handled before it reaches end users or affects decisions?
Bias and fairness auditing: When were your AI models last audited for bias, fairness, or disparate impact? By whom (internal or independent third party)? What were the findings, and what changes resulted?
Model versioning and rollback: What is your process for model versioning, change management, and rollback if a model update produces degraded performance or unexpected outputs?
Training data provenance: How is training data sourced? Does it include any of our organization’s data? If so, how is it isolated from other clients’ training data, and are we notified when our data is used for model training or fine-tuning?
Cloud Concentration Risk Questions
DORA Article 28 requires explicit concentration risk assessment for ICT third parties. U.S. regulators — through the 2023 Interagency Guidance and FFIEC cloud guidance — have made concentration risk an active examination focus for any institution with significant cloud or core processing dependencies.
Questions that surface concentration exposure most reliably:
Cloud provider and region specifics: Which cloud providers and which specific regions host the components of this service? Can this service be deployed in a region that avoids our existing concentration?
Shared facility exposure: For non-cloud services or hybrid architectures, what is your primary data center provider and physical location? Do any other financial services clients in our size tier share the same facility or network infrastructure?
Portfolio-level concentration check: If we contracted with your organization, are there specific cloud providers, CDN providers, or DNS infrastructure components that would increase our aggregate dependency on providers we may already be concentrated in?
Exit and transition timeline: In the event your service becomes unavailable, what is the realistic timeline for us to transition to an alternative provider? What data portability and transition assistance do you contractually commit to?
How to Use This as a Template
The framework above isn’t a single questionnaire — it’s a modular set of evidence questions organized by domain and risk type. The way to apply it:
1. Start with vendor risk tiering. Vendor risk tiering determines which question modules apply. A Tier 1 critical vendor handling customer data gets all modules. A Tier 3 vendor with no data access gets information security and BCP basics only.
2. Choose your base questionnaire. SIG Lite for non-cloud vendors; CAIQ Lite for cloud providers. The base questionnaire covers standard domains — supplement it with the AI and fourth-party modules above for all Tier 1 and Tier 2 vendors.
3. Require evidence alongside responses. Status answers are starting points. For each high-signal question, specify what evidence you’re requesting: SOC 2 report, penetration test summary, BCP test results, sub-processor list, cyber insurance certificate.
4. Document non-responses and partial responses. A refusal to disclose sub-processors, or to provide a full SOC 2 report rather than a summary, is a finding — not a pass. Document it in the due diligence record and escalate to the risk decision-maker before contracting.
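The tiering step can be encoded so a questionnaire builder selects modules automatically. A sketch under assumed module names; none of these identifiers come from the SIG or CAIQ, and the tier-to-module mapping simply mirrors the tiering logic described above:

```python
# Hypothetical mapping from vendor risk tier to questionnaire modules.
# Tier 1 gets the full treatment; Tier 3 gets infosec and BCP basics only.
MODULES_BY_TIER = {
    1: ["base", "infosec_evidence", "bcp_dr", "sub_processors", "ai_risk", "cloud_concentration"],
    2: ["base", "infosec_evidence", "bcp_dr", "sub_processors", "ai_risk"],
    3: ["base_infosec", "bcp_basics"],
}

def modules_for_vendor(tier: int, uses_ai: bool) -> list[str]:
    """Return the question modules for a vendor, dropping the AI module when no AI/ML is used."""
    modules = list(MODULES_BY_TIER[tier])
    if not uses_ai and "ai_risk" in modules:
        modules.remove("ai_risk")
    return modules

print(modules_for_vendor(1, uses_ai=False))
```

The design point is that the AI module is gated on the screening question ("Do any components of your service use AI or ML models?"), not on tier alone.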
For what to do when the questionnaire comes back — how to verify self-reported answers and what red flags look like in supporting documentation — vendor due diligence techniques covers evidence verification step by step.
So What?
A vendor questionnaire only creates value if it surfaces real information. The difference between a questionnaire that produces clean checkboxes and one that produces useful risk intelligence isn’t the number of questions — it’s whether the questions require evidence rather than status.
The three additions that most change what you learn: evidence-based question wording, an AI section for any vendor using machine learning, and mandatory sub-processor disclosure. Each of these addresses a documented failure mode in recent high-profile third-party incidents — from the MFA gap at Change Healthcare to the cascading fourth-party concentration revealed by the CrowdStrike outage.
The FINRA 2025 Annual Regulatory Oversight Report on Third-Party Risk is explicit: firms should have a documented rationale for the questionnaire content they use and be able to explain how it reflects the vendor’s specific risk profile. A questionnaire built for evidence — not checkbox — coverage satisfies that standard.
The Third-Party Risk Management (TPRM) Kit includes a pre-built vendor questionnaire template with separate modules for information security, BCP/DR, privacy, AI vendor risk, and fourth-party/sub-processor disclosure — alongside a risk tiering matrix, due diligence tracker, and contract requirements checklist.
Rebecca Leung
Rebecca Leung has 8+ years of risk and compliance experience across first and second line roles at commercial banks, asset managers, and fintechs. Former management consultant advising financial institutions on risk strategy. Founder of RiskTemplates.