Vendor Breach Response: What to Do When a Critical Supplier Reports an Incident
On February 21, 2024, Change Healthcare was taken offline by a ransomware attack. Within days, medical practices across the country couldn’t submit insurance claims, verify patient eligibility, or access pharmacy processing. One-third of all US medical transactions were disrupted — not because the medical practices were breached, but because their vendor was.
The call you’ll get isn’t from your own security team. It’s from a vendor. And how you respond in the next 72 hours determines whether you’re managing a third-party incident or starring in a regulatory enforcement action.
TL;DR
- Reg S-P requires service providers to notify you within 72 hours of a breach, and requires you to notify affected customers within 30 days — the clock runs even when the breach is at the vendor, not at you.
- Most vendor breach response failures are process failures, not security failures: no tested workflow, no contractual notification requirements, no pre-designated vendor liaison.
- The playbook has five phases: initial verification, scope assessment, parallel regulatory/legal tracks, customer notification decisions, and vendor remediation accountability.
- MOVEit affected 2,559+ organizations and 66 million individuals — most organizations that struggled had IR plans built for their own infrastructure, not for supplier breaches.
Why Vendor Breaches Break Your Normal IR Plan
Standard incident response plans assume you own the affected environment. You can pull logs, restrict access, run forensics, and control the timeline. A vendor breach strips all of that away.
Instead, you’re dependent on a supplier’s investigation team — whose interests in minimizing disclosed scope may not perfectly align with yours. You’re waiting for the vendor’s forensics report while your own regulatory notification clock is already running. You can’t access the affected systems independently, and you may not know which of your customers’ data was actually in scope until days after the initial notification.
This dependency gap is exactly what made MOVEit so damaging. The CL0p ransomware group exploited a zero-day vulnerability in Progress Software’s MOVEit Transfer product in late May 2023. By the time the breach was fully characterized, 2,559 organizations and more than 66 million individuals had been confirmed as impacted. Many affected organizations didn’t even know they used MOVEit directly — they were impacted through a vendor who used MOVEit and hadn’t disclosed that dependency.
Change Healthcare compounded the problem differently: the scope was so large and the affected systems so embedded in healthcare infrastructure that even well-resourced organizations couldn’t quickly determine which of their specific patient records were compromised.
These weren’t failures of cybersecurity controls. They were failures of third-party breach response process.
The Regulatory Clock: What’s Running and When
Before getting into the playbook, you need to know which clocks start the moment a vendor notifies you.
Reg S-P: 72 Hours In, 30 Days Out
The SEC’s amended Regulation S-P, effective December 3, 2025 for large covered institutions (and June 3, 2026 for smaller entities), creates a two-tier notification structure for vendor breaches:
- Vendor to firm: Service providers must notify the covered institution no later than 72 hours after discovering a breach affecting customer information.
- Firm to customers: Covered institutions must notify affected customers as soon as practicable, but no later than 30 days after discovery of the breach.
Critically: the 30-day customer notification clock runs from discovery, not from when the vendor tells you. If your vendor took 48 hours to notify you, you have roughly 28 days left to reach customers — not 30. And the covered institution remains responsible for notification even if it contractually delegates the mechanics to the vendor.
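The clock arithmetic is simple but easy to get wrong under pressure. A minimal sketch (the helper name and dates are illustrative, not from any regulation):

```python
from datetime import datetime, timedelta

# The Reg S-P customer notification window runs from the vendor's DISCOVERY
# of the breach, not from when the vendor notified you.
REG_SP_CUSTOMER_WINDOW = timedelta(days=30)

def days_remaining(vendor_discovered: datetime, now: datetime) -> float:
    """Days left on the 30-day customer notification clock."""
    deadline = vendor_discovered + REG_SP_CUSTOMER_WINDOW
    return (deadline - now) / timedelta(days=1)

# Vendor discovered the breach, then took 48 hours to notify you:
discovered = datetime(2026, 3, 1, 9, 0)
notified = discovered + timedelta(hours=48)
print(round(days_remaining(discovered, notified), 1))  # → 28.0
```

The point of encoding this: the input your team needs from the vendor is their discovery timestamp, not just the notification timestamp.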
Reg S-P covers broker-dealers, investment companies, and SEC-registered investment advisers. But its structure mirrors what other regulators are moving toward.
NYDFS Part 500: 72 Hours to the Regulator
Under 23 NYCRR Part 500, New York-licensed covered entities must notify the NYDFS Superintendent within 72 hours of determining that a cybersecurity event occurred — including events at third-party service providers affecting the covered entity’s systems or nonpublic information. The NYDFS October 2025 industry letter reinforced this: Section 500.11 requires covered entities’ policies and procedures to address contractual provisions for TPSP breach notification and establishes that the 72-hour DFS notification clock runs even for third-party incidents.
DORA: 72 Hours to Your Competent Authority
For EU-regulated financial entities subject to the Digital Operational Resilience Act, ICT-related incidents must be reported to the relevant competent authority within 72 hours of determining the incident meets major incident criteria. Third-party ICT provider incidents that cascade to your environment trigger this notification requirement regardless of whether the root cause was internal or external.
State Breach Notification Laws
All 50 states have breach notification laws with varying deadlines — most ranging from 30 to 90 days after discovery. If your affected customer data includes residents of multiple states, you’re running simultaneous notification clocks under different legal standards. Document which states are in scope the moment you have any indication of the breach’s geographic reach.
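When you are running several state clocks at once, the operative deadline is simply the earliest of them. A sketch — the state windows below are placeholders for illustration, not actual statutory values:

```python
from datetime import datetime, timedelta

# Placeholder notification windows per state in scope. In practice these come
# from counsel's survey of the applicable statutes, which vary by state.
state_windows = {
    "NY": timedelta(days=30),
    "TX": timedelta(days=60),
    "FL": timedelta(days=30),
}

def binding_deadline(discovery: datetime, states_in_scope: list[str]) -> datetime:
    """The earliest applicable state deadline governs your notification timeline."""
    return min(discovery + state_windows[s] for s in states_in_scope)

d = datetime(2026, 3, 1)
print(binding_deadline(d, ["NY", "TX"]).date())  # → 2026-03-31
```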
The Five-Phase Vendor Breach Response
Phase 1: Initial Verification (0–4 Hours)
When the vendor notification arrives, resist the urge to file it as merely informational until someone clarifies the scope. Your first moves:
Verify authenticity. Social engineering attacks sometimes involve fake vendor breach notifications designed to extract credentials or internal data. Call your pre-designated vendor security contact via a number you have on file — not one provided in the notification.
Invoke your incident response team. Even if you don’t know the scope yet, get your incident response lead, legal counsel, and CISO (or equivalent) on a call immediately. This is not something to work through an email chain.
Preserve the notification. Document the time you received the vendor’s notification, the channel through which it arrived, and its exact content. This timestamp becomes the anchor for your regulatory notification obligation calculations.
Initiate vendor escalation. Require the vendor to provide: (1) the date and time they first identified the incident, (2) the specific systems affected, (3) the data categories potentially in scope, (4) the estimated number of affected records, and (5) their current containment status. If the vendor can’t answer these questions within the first few hours, escalate to their executive leadership.
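The five required items can be captured as a structured intake record, so that what the vendor has and hasn’t answered is explicit rather than buried in an email thread. The field names below are illustrative, not from any standard:

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class VendorIncidentIntake:
    """The five facts to demand from the vendor in Phase 1."""
    identified_at: Optional[str] = None       # (1) date/time vendor first identified the incident
    affected_systems: Optional[str] = None    # (2) specific systems affected
    data_categories: Optional[str] = None     # (3) data categories potentially in scope
    estimated_records: Optional[int] = None   # (4) estimated number of affected records
    containment_status: Optional[str] = None  # (5) current containment status

    def missing(self) -> list:
        """Unanswered items — anything still here after the first few hours
        is grounds for escalating to the vendor's executive leadership."""
        return [f.name for f in fields(self) if getattr(self, f.name) is None]

intake = VendorIncidentIntake(identified_at="2026-03-01T09:00Z", containment_status="contained")
print(intake.missing())  # → ['affected_systems', 'data_categories', 'estimated_records']
```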
Phase 2: Scope Assessment (4–24 Hours)
Your goal in this phase is to determine whether your organization — and specifically your customers — are actually impacted, and to what degree.
Map vendor access to your data. Pull your vendor’s data processing inventory: What data did they hold? Which customers does it belong to? How was it transmitted, stored, and protected? If you don’t have this documentation immediately accessible, that’s an underlying TPRM gap the vendor breach just exposed.
Identify potentially affected customer populations. You need a list — however preliminary — of which customers’ data was potentially within scope of the affected vendor systems. This list drives your notification obligations. A customer who wasn’t in the vendor’s affected environment doesn’t need a notification. One who was does — even if the data wasn’t confirmed exfiltrated.
Assess your own exposure. Does the vendor have credentials, API keys, or access tokens to your systems that could enable lateral movement? If so, rotate those credentials immediately, regardless of the vendor’s assurances about containment.
Establish a vendor coordination cadence. Set daily calls — or more frequently initially — with a designated vendor incident manager. Define what updates you need and when. Don’t wait for the vendor to volunteer information.
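The core Phase 2 query — which customers were potentially in scope — is a join between the vendor’s list of affected systems and your own data-processing inventory. A minimal sketch, assuming an inventory keyed by vendor system (the system and customer names are hypothetical):

```python
# Your data-processing inventory for this vendor: which customers' data
# sat in which of the vendor's systems. If this map doesn't exist before
# the breach, building it becomes the first scoping task.
data_inventory = {
    "sftp-transfer": {"cust-001", "cust-002"},
    "billing-api": {"cust-002", "cust-003"},
    "archive": {"cust-004"},
}

def potentially_affected(affected_systems: list) -> set:
    """Customers whose data sat in any vendor system reported as affected.
    This preliminary list drives the notification analysis."""
    out = set()
    for system in affected_systems:
        out |= data_inventory.get(system, set())
    return out

print(sorted(potentially_affected(["sftp-transfer", "billing-api"])))
# → ['cust-001', 'cust-002', 'cust-003']
```

A customer absent from every affected system stays off the preliminary list; a customer present in any of them stays on it until access can be definitively ruled out.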
Phase 3: Parallel Regulatory and Legal Tracks (24–72 Hours)
By 72 hours post-notification, you need to have assessed whether regulatory reporting obligations have triggered and begun customer notification preparation in parallel.
Regulatory notification assessment. Run through your notification obligation checklist: Are you a Reg S-P covered institution? NYDFS licensed entity? EU-regulated under DORA? State-chartered bank subject to OCC or state banking department requirements? Each framework has its own materiality threshold and notification trigger. Your legal team needs to make a go/no-go notification decision for each applicable regulator within the first 72 hours.
Customer notification preparation. Even before final scope is confirmed, begin drafting customer notification materials. Waiting until scope is fully confirmed to start drafting is a common way organizations miss the 30-day window.
Preservation and documentation. Issue a legal hold. All communications with the vendor about the incident, all internal communications about scope, and all investigative steps must be preserved. Regulators and plaintiffs’ lawyers will ask for everything.
Insurance notification. Cyber liability policies typically have strict notification windows — often 72 hours to a few weeks from discovery. Notify your insurer now. Coverage disputes frequently arise from late notification, not from the incident itself.
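The regulatory assessment above amounts to: for each framework legal confirms applies, compute a concrete deadline and track it. A simplified sketch — the framework names and windows mirror this article, and the single shared start time is a simplification (in practice each clock has its own trigger, e.g. determination vs. discovery):

```python
from datetime import datetime, timedelta

# Notification windows per framework, as described earlier in this article.
FRAMEWORKS = {
    "NYDFS Part 500 (Superintendent)": timedelta(hours=72),  # from determination
    "DORA (competent authority)": timedelta(hours=72),       # from major-incident determination
    "Reg S-P (customers)": timedelta(days=30),               # from discovery
}

def deadlines(applicable: list, start: datetime) -> dict:
    """Concrete due dates for each framework legal has confirmed applies."""
    return {name: start + FRAMEWORKS[name] for name in applicable if name in FRAMEWORKS}

start = datetime(2026, 3, 1, 9, 0)
for name, due in deadlines(["NYDFS Part 500 (Superintendent)", "Reg S-P (customers)"], start).items():
    print(name, "->", due.isoformat())
```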
Phase 4: Customer Notification Decisions
Customer notification under vendor breaches is genuinely harder than notification for internal breaches because scope is often uncertain and evolving.
The legally defensible approach is to notify based on what could have been accessed, not what you can prove was exfiltrated. If a customer’s records were in a vendor system that was compromised, and you cannot definitively rule out their data being accessed, the prudent position is to notify. Over-notification in good faith is far more defensible than under-notification.
Key elements of the notification:
- Clear statement of what happened (the vendor experienced an incident)
- The data categories potentially affected
- What you are doing in response (vendor investigation, enhanced monitoring, etc.)
- What affected customers should do (monitor accounts, use provided credit monitoring if offered)
- Contact information for questions
Do not speculate about what data was accessed before you know. Do not minimize. Do not use the phrase “there is no evidence that” in a way that implies certainty you don’t have.
Phase 5: Remediation and Vendor Accountability
After the immediate crisis stabilizes, you have two objectives: verify the vendor has remediated the root cause, and hold them contractually accountable.
Independent validation. Don’t accept the vendor’s forensics report at face value. Depending on your risk exposure, request that the vendor share their forensics findings with your security team, engage a third-party firm to validate remediation, and provide evidence of the specific controls they’ve implemented to prevent recurrence.
Root cause review. Assess whether the breach exploited a gap that your vendor due diligence should have caught: an unpatched vulnerability, missing MFA, inadequate access controls, an undisclosed subprocessor. If it did, that’s a TPRM program gap, not just a vendor failure.
Contract enforcement. Review what your contract actually requires. Was the vendor obligated to notify you within 72 hours? Did they? Were they required to maintain specific security controls that they failed to maintain? Breach notification and indemnification provisions will be scrutinized closely after an incident. If the contract is silent on these obligations, that’s the first item on your TPRM remediation list.
What Your Contracts Need to Say Before the Call Comes
The organizations that handled MOVEit and Change Healthcare best had contracts with specific, enforceable requirements — not general “commercially reasonable” security language. The minimum contract provisions for any critical vendor:
| Provision | Minimum Standard |
|---|---|
| Incident notification timeline | 72 hours from vendor’s discovery (not confirmation) |
| Notification content requirements | Specific: affected systems, data categories, estimated records, containment status |
| Investigation cooperation | Access to forensics findings, designated vendor security liaison |
| Regulator notification assistance | Vendor to cooperate with your regulatory submissions |
| Audit rights post-incident | Right to engage third-party auditor to validate remediation |
| Subcontractor obligations | Vendor must impose equivalent notification requirements on its subcontractors |
| Indemnification | Clear allocation of liability for breach response costs and regulatory fines |
Vague “prompt notification” and “reasonable security measures” language has been tested in litigation repeatedly. It doesn’t hold. The NYDFS October 2025 industry letter specifically called out the need for contractual provisions addressing cybersecurity event notification and subcontractor requirements — not because it’s good practice, but because it’s a compliance requirement under Part 500.
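The table above doubles as a review checklist. A sketch of a provision gap check — the shorthand provision keys are illustrative:

```python
# Minimum contract provisions for a critical vendor, per the table above.
REQUIRED_PROVISIONS = {
    "incident_notification_72h",
    "notification_content_spec",
    "investigation_cooperation",
    "regulator_notification_assistance",
    "post_incident_audit_rights",
    "subcontractor_flowdown",
    "indemnification",
}

def contract_gaps(present: set) -> set:
    """Provisions missing from a vendor contract — each is a remediation item."""
    return REQUIRED_PROVISIONS - present

gaps = contract_gaps({"incident_notification_72h", "indemnification"})
print(len(gaps))  # → 5
```

Running this across your critical-vendor contracts before an incident turns the table from aspiration into an actionable remediation backlog.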
So What Does This Mean for Your TPRM Program?
Start with your three most critical vendors. For each one, answer: Do you have a pre-designated security contact you can call (not email) in the first hour of a breach? Does your contract specify 72-hour notification? Do you have a documented data map of what customer data they hold?
If the answer to any of those questions is no, you have a gap that will compound any future vendor incident into a regulatory problem.
Then look at your IR plan. Does it have a dedicated vendor breach section? Or does it assume you’re always the affected entity? A plan that doesn’t account for third-party breach response is a plan that will fail you when it matters most.
The Change Healthcare attack didn’t take sophisticated technical capability. It took a credential, an internet-facing VPN, and an organization that wasn’t ready to respond when someone else’s breach became theirs.
For a complete framework on managing the full vendor lifecycle from onboarding through incident response, see Vendor Risk Management: The Complete Process from Onboarding to Offboarding. For the questions that surface third-party risk before a breach occurs, see Vendor Risk Questionnaire Template: The Questions That Actually Surface Third-Party Risk. For how to triage and classify any incident — including vendor-sourced ones — under the SEC’s 4-day materiality clock, see Incident Triage Techniques: Severity Classification, Materiality, and the SEC 4-Day Clock.
Sources: SEC Final Rule — Regulation S-P Amendments (34-100155) | NYDFS Industry Letter October 21, 2025: Guidance on Managing Risks Related to Third-Party Service Providers | ORX News Deep Dive: MOVEit Transfer Data Breaches | SVMIC: Managing Vendor Risk — Lessons Learned After the Change Healthcare Breach | Protiviti: NYDFS 2025 Guidance on Third-Party Oversight & Security
Rebecca Leung
Rebecca Leung has 8+ years of risk and compliance experience across first and second line roles at commercial banks, asset managers, and fintechs. Former management consultant advising financial institutions on risk strategy. Founder of RiskTemplates.