FFIEC 36-Hour Incident Notification Rule: What Banking Organizations Must Report, When, and to Whom
TL;DR
- Banking organizations must notify their primary federal regulator within 36 hours of determining a qualifying “notification incident” — not of first detecting a potential incident.
- The threshold is a “notification incident”: an event that has materially disrupted — or is reasonably likely to materially disrupt — banking operations, product delivery, or financial stability.
- Bank service providers face a separate, narrower obligation: notify affected banking organization customers (not regulators) as soon as possible when disruptions last four or more hours.
- This is an operational resilience rule, not a consumer breach notification rule — it runs parallel to your state breach notification obligations, not instead of them.
Your CISO just walked into your office. There’s been a ransomware attack. Backup systems are down. Core banking operations are impaired. Three things need to happen in the next few hours: containment, executive escalation, and — somewhere on a checklist that probably isn’t short enough — notifying your federal regulator.
That last one is exactly what the computer-security incident notification rule requires. And whether it’s the first thing your team reaches for or the last, it has a hard deadline.
Background: What the Rule Is and Where It Came From
In November 2021, the OCC, Federal Reserve, and FDIC jointly finalized the Computer-Security Incident Notification rule. The rule took effect April 1, 2022, with full compliance required by May 1, 2022. It is codified at 12 CFR Part 53 (OCC), 12 CFR Part 225 (Fed), and 12 CFR Part 304 (FDIC).
The agencies framed the purpose plainly: regulators need early awareness of significant cyber and operational events so they can assess systemic risk, deploy examiner resources, and coordinate responses before problems compound. The rule wasn’t designed to punish banks for getting attacked. It was designed to ensure regulators aren’t the last to know.
This is not your state breach notification rule. It’s not the SEC’s cybersecurity disclosure requirement. It runs parallel to those obligations — and in a real incident, you’ll likely be managing multiple notification timelines simultaneously.
The Two-Tier Framework: Banking Organizations vs. Bank Service Providers
The rule creates two distinct notification obligations that practitioners frequently conflate.
Tier 1: Banking Organizations
A banking organization supervised by the OCC, Federal Reserve, or FDIC must notify its primary federal regulator as soon as possible and no later than 36 hours after the banking organization determines that a notification incident has occurred.
Two elements carry the most weight here.
What is a “notification incident”? Not every cyberattack or system failure qualifies. Under the rule, a notification incident is a computer-security incident that has materially disrupted or degraded — or is reasonably likely to materially disrupt or degrade — any of the following:
- The banking organization’s ability to carry out banking operations, activities, or processes, or deliver banking products and services to a material portion of its customer base
- A business line that, upon failure, would result in a material loss of revenue, profit, or franchise value
- Operations whose failure would pose a threat to the financial stability of the United States
The agencies provided examples in the final rule preamble: a major computer-system failure, a DDoS attack that disrupts customer account access for an extended period, ransomware hitting a core banking system. What’s excluded: phishing attempts that don’t result in successful compromise, scheduled maintenance, minor outages affecting a small subset of customers, incidents contained before materially impacting operations.
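The any-of-three logic above can be sketched as a simple predicate. This is an illustration, not the rule text: the function and parameter names are invented, and the booleans stand in for the actual-or-reasonably-likely materiality judgment, which is the hard part in practice.

```python
def is_notification_incident(disrupts_operations: bool,
                             material_business_line_impact: bool,
                             threatens_financial_stability: bool) -> bool:
    """Hypothetical sketch: an incident qualifies if it has materially
    disrupted or degraded -- or is reasonably likely to materially
    disrupt or degrade -- ANY one of the three prongs. One prong alone
    is sufficient; the prongs are not cumulative."""
    return (disrupts_operations
            or material_business_line_impact
            or threatens_financial_stability)

# Ransomware impairing a core banking system: the first prong alone suffices.
print(is_notification_incident(True, False, False))   # True
# An unsuccessful phishing attempt disrupts nothing: no prong is met.
print(is_notification_incident(False, False, False))  # False
```

The `or` chain is the point: teams sometimes treat the prongs as a cumulative checklist, when meeting any single one triggers the obligation.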
When does the 36-hour clock start? This is where most compliance gaps emerge. The agencies were explicit: the clock starts when the banking organization determines that a notification incident has occurred — not when the incident is first detected.
A bank that detects a ransomware alert at 2am but needs time to assess whether core systems are materially impacted doesn’t start the 36-hour clock until it concludes they are. The agencies acknowledged that a “reasonable amount of time” is needed to investigate before a determination can be made.
This matters operationally. But “reasonable” has limits. Once the facts reasonably support a determination that a notification incident has occurred, the investigation period ends and the clock runs. The determination can’t be indefinitely deferred to extend the notification window.
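As a minimal sketch of that timing rule (the timestamps and function name here are hypothetical, for illustration only), the deadline is computed from the determination timestamp, not the detection timestamp:

```python
from datetime import datetime, timedelta, timezone

NOTIFICATION_WINDOW = timedelta(hours=36)

def notification_deadline(determination_time: datetime) -> datetime:
    """The 36-hour window runs from when the banking organization
    determines a notification incident has occurred -- not from
    first detection of a potential incident."""
    return determination_time + NOTIFICATION_WINDOW

# Detection of the ransomware alert at 2:00 AM does not start the clock...
detected = datetime(2024, 3, 1, 2, 0, tzinfo=timezone.utc)
# ...the determination six hours later, after impact assessment, does.
determined = datetime(2024, 3, 1, 8, 0, tzinfo=timezone.utc)

deadline = notification_deadline(determined)
print(deadline.isoformat())  # 2024-03-02T20:00:00+00:00
```

The gap between `detected` and `determined` is the "reasonable amount of time" the agencies acknowledged; documenting when and why the determination was made is what makes that gap defensible.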
Tier 2: Bank Service Providers
Bank service providers (BSPs) — cloud platforms, core banking systems, payment processors, technology vendors — operate under a different framework.
When a BSP determines it has experienced a computer-security incident that has caused, or is reasonably likely to cause, a material service disruption or degradation for four or more hours, it must notify each affected banking organization customer as soon as possible.
Key differences from the banking organization rule:
- No specific hour deadline. The standard is “as soon as possible,” not 36 hours.
- Four-hour threshold. Disruptions lasting less than four hours don’t trigger the notification obligation.
- Banking organizations, not regulators. BSPs notify their banking customers directly — the regulator notification responsibility stays with the bank.
- Scheduled maintenance is excluded. Disruptions from previously communicated maintenance, testing, or software updates don’t count.
This framework creates an important dependency: a bank relying on a BSP for critical services may not receive notification until the BSP has made its own determination — which could be hours into a disruption that is already consuming the bank’s 36-hour window. Closing that gap in vendor contracts is a practical necessity.
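The BSP trigger conditions can be captured in a short sketch. The function and its parameters are illustrative assumptions, not regulatory language: in practice the "material" and "reasonably likely to last four hours" judgments are made by the provider, which is exactly the dependency described above.

```python
from datetime import timedelta

def bsp_must_notify(disruption: timedelta,
                    material: bool,
                    scheduled_maintenance: bool) -> bool:
    """Hypothetical sketch of the BSP trigger: notify affected banking
    organization customers when a material disruption has lasted (or is
    reasonably likely to last) four or more hours -- unless it stems
    from previously communicated maintenance, testing, or updates."""
    if scheduled_maintenance:
        return False  # the rule excludes pre-communicated maintenance windows
    return material and disruption >= timedelta(hours=4)

print(bsp_must_notify(timedelta(hours=6), material=True, scheduled_maintenance=False))  # True
print(bsp_must_notify(timedelta(hours=3), material=True, scheduled_maintenance=False))  # False
print(bsp_must_notify(timedelta(hours=8), material=True, scheduled_maintenance=True))   # False
```

Note there is no deadline parameter at all: once the trigger is met, the standard is "as soon as possible," which is why contractual notification windows matter.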
What to Actually Say When You Call the Regulator
The rule is intentionally minimal on format requirements. The initial notification does not need to be a formal written report. A telephone call or email to the bank’s primary federal supervisor is sufficient. The purpose is early awareness, not a complete incident disclosure.
Contact information by regulator:
- OCC-supervised banks: 1-800-613-6743 (24-hour supervisory information line, confirmed in OCC Bulletin 2021-55)
- Federal Reserve-supervised banks: SR 22-4 provides district-specific contact information
- FDIC-supervised banks: The FDIC Regional Office and your case manager
The follow-up — a more detailed written account of the incident, your response, and remediation — typically comes during subsequent examiner dialogue. But the 36-hour notification is the gate event.
A practical recommendation: keep regulator notification contact information directly in your incident response playbook, alongside state breach notification deadlines and SEC disclosure timelines. In a real incident, spending 20 minutes searching for a phone number is 20 minutes you don't have.

How This Intersects With Other Notification Regimes
The 36-hour rule covers operational disruption — the degradation of a bank’s ability to function. It is separate from — and runs parallel to — several other notification obligations.
Consumer breach notification: If the incident involves compromise of customer personal information, state breach notification laws apply on their own timelines (typically 30–90 days, with some states tighter). Assessing consumer data exposure and operational disruption must happen simultaneously during incident triage, not sequentially.
SEC 8-K disclosure: Public companies must assess whether the incident constitutes a “material” cybersecurity incident under SEC rules and, if so, disclose on Form 8-K within four business days of making that determination. The materiality standard and the notification incident standard are different assessments by different functions.
FTC Safeguards Rule: Non-bank financial institutions covered by GLBA must notify the FTC within 30 days of discovering a breach affecting 500 or more customers. If you’re a fintech operating outside the banking organization definition but within GLBA’s scope, this is your primary federal notification obligation.
Managing all of these timelines in a real incident requires pre-built decision trees that run in parallel — assigning ownership, documenting the determination process, and tracking each notification timeline independently.
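One way to sketch those parallel tracks is a simple deadline table built at determination time. Everything here is an illustrative simplification: each regime's clock actually starts from its own trigger (determination, discovery, a separate materiality finding), the SEC's four business days are approximated below as four calendar days, and the owner names are placeholders.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class NotificationTrack:
    regime: str
    owner: str          # placeholder for who owns the determination and filing
    deadline: datetime

def build_tracks(determined: datetime) -> list[NotificationTrack]:
    """Illustrative only: real deadlines run from regime-specific
    triggers, and SEC business days are not calendar days."""
    return [
        NotificationTrack("FFIEC 36-hour rule", "Compliance",
                          determined + timedelta(hours=36)),
        NotificationTrack("SEC Form 8-K (approx.)", "Legal / disclosure counsel",
                          determined + timedelta(days=4)),
        NotificationTrack("FTC Safeguards Rule", "Privacy",
                          determined + timedelta(days=30)),
    ]

determined = datetime(2024, 3, 1, 8, 0, tzinfo=timezone.utc)
for t in sorted(build_tracks(determined), key=lambda t: t.deadline):
    print(f"{t.deadline:%Y-%m-%d %H:%M}  {t.regime}  ({t.owner})")
```

Sorting by deadline makes the operational point: the 36-hour regulator notification is almost always the first gate, which is why it belongs at the top of the incident response checklist rather than the bottom.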
The BSP Gap: What Contracts Need to Address
One underappreciated implication of the rule is what it means for your vendor contracts.
If a critical vendor experiences an outage that impairs your banking operations, that BSP must notify you “as soon as possible” — but that’s measured from when they make their own determination, which may not happen until after your 36-hour clock has started running.
Best practice: negotiate notification timelines into critical vendor contracts that are tighter than the regulatory floor. If a vendor’s service goes down, you want to know within one to two hours — not “as soon as possible” measured from a determination timeline you can’t control. This is especially important for vendors providing core banking infrastructure, payment processing, and customer-facing systems.
Also build into your vendor breach response playbook an explicit decision point: “Does this vendor incident constitute a notification incident for our banking organization?” A vendor outage that impairs your operations may trigger your 36-hour obligation even if the incident originated externally. Waiting for the vendor to tell you that is not a defensible approach.
Building a 36-Hour-Ready Program
Most banking organizations that struggle with this rule aren’t missing the policy. They’re missing the operational infrastructure to execute it under pressure.
What ready looks like:
| Component | What It Requires |
|---|---|
| Contact directory | Regulator phone numbers and emails in the IR playbook — not a website URL you’ll search at 3am |
| Determination standard | Written criteria for what constitutes a “determination,” signed off by legal and compliance |
| Escalation path | Who makes the notification determination? Does it require legal sign-off? How does compliance get looped in during a fast-moving incident? |
| Vendor contract language | Critical BSPs required to notify within your internal window, not just “as soon as possible” |
| Tabletop testing | At least one exercise per year where a scenario explicitly tests the notification determination — when does this become a notification incident? |
The incident response team owns containment. Compliance or legal typically owns the notification determination. In a fast-moving incident, both tracks must run simultaneously — not sequentially.
Common Gray Areas
“Has it materially disrupted operations — or is it likely to?” The “reasonably likely” standard creates a forward-looking obligation. If a ransomware attack has encrypted 40% of your systems and your security team believes core banking will be affected within hours, the notification obligation may arise before core systems are actually down.
System upgrade failures: The agencies confirmed in the final rule preamble that a planned system upgrade that fails and leads to widespread customer and employee access outages qualifies as a notification incident. It doesn’t need to be a cyberattack.
Third-party-caused outages: If your cloud provider goes down and your banking operations are materially affected, you may have a notification incident — even though the source was your vendor. Assess the impact on your operations, not the origin of the event.
So What?
If your incident response plan doesn’t explicitly address the notification determination — who makes it, based on what evidence, and how they contact the regulator — you have a program gap that examiners are actively probing. The rule has been in effect since May 2022, and supervisory teams are now asking about notification procedures as a standard part of cybersecurity reviews.
The answer isn’t just having the right contact number on file. It’s a documented determination standard, a tested escalation path, and vendor contracts that give you the notification lead time you need to meet your own deadlines.
The Incident Response & Breach Notification Kit includes a severity classification matrix that maps incident types to notification obligations — including the 36-hour rule, state breach notification timelines for all 50 states, and SEC disclosure triggers — in a single decision framework your team can use under real-time pressure, not after the fact.
Related Template
Incident Response & Breach Notification Kit
Step-by-step incident response playbooks and breach notification templates for all 50 states.
Frequently Asked Questions
What exactly counts as a 'notification incident' under the 36-hour rule?
When exactly does the 36-hour clock start — detection or determination?
Are non-bank fintechs covered by this rule?
What are bank service providers required to do under this rule?
What does the actual notification to a regulator involve?
How does the 36-hour rule interact with other notification requirements?
Rebecca Leung
Rebecca Leung has 8+ years of risk and compliance experience across first and second line roles at commercial banks, asset managers, and fintechs. Former management consultant advising financial institutions on risk strategy. Founder of RiskTemplates.
Keep Reading
NYDFS Hits Delta Dental With $2.25M — The First 2026 Cyber Action Is About Notice and Retention, Not the Breach
NYDFS's first 2026 cybersecurity enforcement penalizes Delta Dental for a six-month notification delay and lengthened MOVEit retention settings — not for getting hit. What practitioners should pull from the consent order.
May 13, 2026
Incident Response · Ransomware Incident Response Playbook: The 24-Hour Checklist for Financial Institutions
When ransomware hits your bank or fintech, the first 24 hours determine your regulatory exposure, recovery timeline, and whether your next call is to your CEO or your lawyer. Here's the phase-by-phase playbook.
May 11, 2026
Incident Response · Incident Triage Techniques: Severity Classification, Materiality, and the SEC 4-Day Clock
How to classify incident severity correctly, build a working materiality decision process for SEC 8-K purposes, and avoid the documentation failures that turned early Form 8-K filings into SEC comment letters.
May 7, 2026