AI Risk

Shadow AI: How to Find and Govern Unauthorized AI Use in Your Organization

March 25, 2026 · Rebecca Leung

TL;DR:

  • 71% of office workers admit to using AI tools without IT approval, and shadow AI incidents now carry an average breach premium of $4.63 million vs. $3.96 million for standard breaches (IBM 2025 Cost of a Data Breach Report).
  • Your examiners will ask for an AI model inventory — and shadow AI belongs in it. Firms that can’t answer “what AI tools are employees using?” are already behind.
  • Detection comes first: network monitoring, procurement audits, SaaS discovery tools, OAuth authorization reviews, and employee surveys are your starting points. Governance follows.

The AI Your Employees Are Using Isn’t the AI You Approved

Here’s the uncomfortable truth: your employees are using ChatGPT, Claude, Copilot, Gemini, and dozens of AI browser extensions to do their jobs right now. Most of them haven’t asked permission, and most of them won’t. They’ve found tools that make them faster, and they’re not stopping because of a policy memo you sent six months ago that half the organization never read.

This is shadow AI — AI tools used within an organization without formal approval, procurement review, security assessment, or oversight. And in financial services, where you’re handling nonpublic personal information (NPI), trading data, customer credit profiles, and proprietary models, that’s not just a technology management problem. It’s a compliance and regulatory exposure.

The scale of the problem is hard to overstate. According to Reco’s 2025 State of Shadow AI Report, 71% of office workers admit to using AI tools without IT department approval. A 2024 CybSafe/National Cybersecurity Alliance survey of 7,000 employees found that 38% share confidential data with AI platforms without their employer’s knowledge. And according to IBM’s 2025 Cost of a Data Breach Report, shadow AI incidents now account for 20% of all breaches and cost organizations an average of $4.63 million per incident — roughly $670,000 more than a standard breach.

The regulators are watching. MAS concluded Project MindForge Phase 2 on March 20, 2026 — a collaboration between 24 banks, insurers, and capital market firms — culminating in an AI Risk Management Toolkit that explicitly addresses AI inventorisation and shadow AI identification as foundational governance requirements. The message is clear: you need to know what AI is in use before you can manage it.

Why Shadow AI Is Different from Shadow IT

Shadow IT (rogue SaaS apps, unmanaged endpoints) has existed for two decades. Shadow AI is categorically different — and more dangerous — for three reasons.

Training data doesn’t stay local. When an employee pastes client data or proprietary code into a consumer AI tool, that input may be used to improve the underlying model. Samsung learned this in April 2023 when employees uploaded proprietary semiconductor source code and internal meeting notes to ChatGPT to debug problems. The data was transmitted to OpenAI’s servers before Samsung even knew it had happened. The company banned generative AI tools on company devices shortly after.

The harm is often invisible and immediate. Unlike shadow SaaS (where you might eventually discover an unapproved vendor through a procurement review), AI data exposure happens in milliseconds, at the prompt level, with no audit trail on your side. By the time you find out — if you find out — the data has already been processed by a third party with its own retention, training, and security practices.

AI decisions create regulatory accountability. If an employee uses an unapproved AI tool to assist with credit decisions, customer communications, or regulatory filings, and that tool produces a biased or erroneous output, your firm bears the accountability — not the AI vendor. The CFPB and OCC don’t accept “we didn’t know it was being used” as a defense.

Step 1: Find the Shadow AI You Don’t Know About

You can’t govern what you can’t see. Discovery is the non-negotiable first step, and it requires multiple detection channels running in parallel.

Network and DNS Monitoring

Your network logs are capturing traffic to every AI endpoint your employees hit — they just haven’t been reviewed with that lens. Configure your SIEM, firewall, or CASB to flag traffic to known AI tool domains: chat.openai.com, claude.ai, gemini.google.com, copilot.microsoft.com, perplexity.ai, and the hundreds of AI API endpoints increasingly embedded in third-party software.

Specific tactics (a log-scan sketch follows this list):

  • Pull DNS query logs for AI-related domains over the past 90 days — the volume will surprise you
  • Review proxy and web filter logs for AI tool categories
  • Flag unusual outbound data volumes to AI API endpoints (large POST requests often indicate prompt/document uploads)
  • If you use a CASB (Cloud Access Security Broker) like Netskope, Zscaler, or Palo Alto Prisma, check for pre-built shadow AI detection rules; many now ship with them
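
If your resolver or SIEM can export query logs to CSV, a short script is enough for a first pass. A minimal sketch, assuming a CSV export with timestamp, client, and query columns (adjust the field names to your schema); the domain list is illustrative, not exhaustive:

```python
"""Scan exported DNS query logs for traffic to known AI tool domains."""
import csv
from collections import Counter

# Illustrative, not exhaustive; extend as new tools emerge.
AI_DOMAINS = {
    "chat.openai.com", "api.openai.com", "claude.ai", "api.anthropic.com",
    "gemini.google.com", "copilot.microsoft.com", "perplexity.ai",
}

def scan_dns_log(path: str) -> Counter:
    """Count queries per (client, domain) pair for known AI endpoints."""
    hits: Counter = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # assumed columns: timestamp, client, query
            qname = row["query"].rstrip(".").lower()
            # Match the domain itself or any subdomain of it.
            if any(qname == d or qname.endswith("." + d) for d in AI_DOMAINS):
                hits[(row["client"], qname)] += 1
    return hits

if __name__ == "__main__":
    for (client, domain), count in scan_dns_log("dns_export.csv").most_common(25):
        print(f"{client}\t{domain}\t{count}")
```

Run it against a 90-day export and sort by count; the top of that list is your de facto AI tool inventory, whether or not anything was ever approved.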

SaaS and OAuth Authorization Audits

Employees grant AI tools access to corporate systems constantly. A “Copilot” browser extension might have OAuth authorization to read your email. An AI writing tool might have “edit documents” access granted through Google Workspace. These connections often persist long after the employee stops using the tool.

Audit steps (a Microsoft Graph sketch follows this list):

  • Pull all OAuth authorizations across Google Workspace, Microsoft 365, Salesforce, and any other major SaaS platform your firm uses
  • Flag any authorization granted to apps with “AI” in the name or description — or unknown app names
  • Review browser extension inventories from endpoint management tools (Jamf, Intune, etc.) and flag AI-related extensions
  • Look for “high privilege” authorizations — OAuth apps with access to read/write email, calendar, or files deserve the most scrutiny
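
For Microsoft 365, the Graph API exposes delegated grants directly. A minimal sketch, assuming you have already obtained an access token with Directory.Read.All (the GRAPH_TOKEN environment variable is a placeholder for however you acquire it); the AI keyword list and "high privilege" scope set are illustrative, and application-level permissions and Google Workspace need their own passes:

```python
"""Flag likely AI apps among delegated OAuth grants in Microsoft 365."""
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = os.environ["GRAPH_TOKEN"]  # placeholder: acquire via your auth flow
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

AI_KEYWORDS = ("ai", "gpt", "copilot", "claude", "gemini")   # illustrative
HIGH_PRIV = {"Mail.Read", "Mail.ReadWrite", "Files.ReadWrite.All",
             "Calendars.ReadWrite"}                          # illustrative

def get_all(url: str) -> list[dict]:
    """Follow @odata.nextLink pagination and return every item."""
    items = []
    while url:
        resp = requests.get(url, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        items.extend(body.get("value", []))
        url = body.get("@odata.nextLink")
    return items

def app_name(sp_id: str) -> str:
    """Resolve a service principal id to its display name."""
    resp = requests.get(f"{GRAPH}/servicePrincipals/{sp_id}?$select=displayName",
                        headers=HEADERS, timeout=30)
    return resp.json().get("displayName", sp_id) if resp.ok else sp_id

for grant in get_all(f"{GRAPH}/oauth2PermissionGrants"):
    name = app_name(grant["clientId"])
    scopes = set((grant.get("scope") or "").split())
    looks_ai = any(k in name.lower() for k in AI_KEYWORDS)
    high_priv = scopes & HIGH_PRIV
    if looks_ai or high_priv:
        print(f"FLAG: {name} | scopes: {sorted(scopes)} | "
              f"ai-name={looks_ai} high-priv={sorted(high_priv)}")
```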

Procurement and Expense Review

Many AI tools are free or offer personal paid tiers — employees pay out of pocket and expense it. Others go through departmental purchasing without IT involvement.

Detection approach (an expense-scan sketch follows this list):

  • Search expense reports for charges from OpenAI, Anthropic, Midjourney, Jasper, Copy.ai, and similar vendors
  • Review SaaS spend across all corporate credit cards for AI-related charges
  • Ask department heads to self-report AI tools in use — you’ll get incomplete answers but it surfaces the obvious ones
  • Check vendor contracts for AI functionality embedded in existing tools (productivity suites, CRMs, coding platforms often add AI features without explicit notification)
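
A first pass over an expense export can be equally simple. A minimal sketch, assuming a CSV with employee, merchant, and amount columns; the vendor keyword list is illustrative, and card descriptors vary (e.g. 'OPENAI *CHATGPT SUBSCR'), so match loosely and review flagged lines by hand:

```python
"""Flag expense lines that look like AI vendor charges."""
import csv

# Illustrative keyword list; match against lowercase merchant descriptors.
AI_VENDOR_KEYWORDS = ("openai", "anthropic", "midjourney", "jasper",
                      "copy.ai", "perplexity")

with open("expenses.csv", newline="") as f:
    for row in csv.DictReader(f):  # assumed columns: employee, merchant, amount
        merchant = row["merchant"].lower()
        if any(k in merchant for k in AI_VENDOR_KEYWORDS):
            print(f"{row['employee']}\t{row['merchant']}\t{row['amount']}")
```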

Employee Surveys and Structured Disclosure Programs

The fastest way to understand your shadow AI landscape is to ask. Most employees aren’t trying to create risk — they’re trying to do their jobs. A non-punitive disclosure survey will surface tools you’d never find through technical monitoring alone.

Run a survey with questions like:

  • “What AI tools have you used in the past 90 days, including personal tools used for work purposes?”
  • “Have you used any AI tool to process, summarize, or draft content involving customer information?”
  • “Are there AI tools you’d like to use that aren’t currently available to you?”

The last question is particularly important. If you don’t know what employees want, you can’t build an approved alternative.

Step 2: Assess the Risk of What You Find

Once you have an inventory of actual AI tool usage, you need to triage it. Not all shadow AI is equal. A marketing analyst using an AI image generator carries different risk than a loan officer using an unapproved AI tool to draft denial letters.

Build a risk rating for each discovered tool using these factors:

| Risk Factor | Higher Risk | Lower Risk |
|---|---|---|
| Data sensitivity | Customer NPI, financial data, proprietary IP | Publicly available info, anonymized data |
| Decision impact | Credit, employment, pricing, customer communications | Internal drafting, code autocomplete, scheduling |
| Data retention by vendor | Unknown or confirmed training use | Zero retention, enterprise API terms |
| Regulatory exposure | GLBA, FCRA, ECOA, state AI laws triggered | General productivity use only |
| Volume of use | Widespread, embedded in daily workflows | Occasional, one-time use |
| Vendor security posture | No SOC 2, no MFA, no audit logging | Enterprise-grade, security reviewed |

A discovered tool in the “higher risk” column across multiple factors needs immediate action: stop use pending assessment, or move to an approved enterprise alternative fast. A tool in the “lower risk” column might get documented and monitored while a formal review proceeds.
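
One way to make the triage repeatable is to encode the table as a scoring rubric. A minimal sketch: each factor is scored 1 (lower risk) or 2 (higher risk), and the tier thresholds are illustrative, not a validated methodology:

```python
"""Triage discovered AI tools using the six factors from the table above."""
FACTORS = ("data_sensitivity", "decision_impact", "vendor_retention",
           "regulatory_exposure", "usage_volume", "vendor_security")

def risk_tier(ratings: dict[str, int]) -> str:
    """ratings maps each factor to 1 (lower risk) or 2 (higher risk)."""
    missing = set(FACTORS) - ratings.keys()
    if missing:
        raise ValueError(f"unrated factors: {missing}")
    score = sum(ratings[f] for f in FACTORS)
    if score >= 10:    # mostly in the "higher risk" column
        return "HIGH: stop use pending assessment"
    if score >= 8:
        return "MEDIUM: expedite formal review"
    return "LOW: document and monitor"

# Example: an unapproved tool drafting adverse-action letters with NPI.
print(risk_tier({"data_sensitivity": 2, "decision_impact": 2,
                 "vendor_retention": 2, "regulatory_exposure": 2,
                 "usage_volume": 1, "vendor_security": 2}))
```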

Step 3: Build a Governance Framework That Employees Will Actually Follow

The wrong response to shadow AI discovery is a company-wide ban. Samsung tried that — and employees found workarounds within weeks. The better response is to build a governance structure that channels AI adoption into approved, observable pathways.

Establish an AI Tool Approval Process

Create a lightweight process for employees to request approval of new AI tools before use. The process shouldn’t require a six-month security review for a productivity tool — but it should require:

  • A completed AI tool intake form (tool name, vendor, use case, data types involved)
  • A security questionnaire focused on data handling and retention
  • Sign-off from IT Security and, if NPI is involved, the Privacy/Compliance team
  • A decision timeline commitment (target: 10 business days for standard tools)

Without a fast, functional approval process, employees will bypass it. If the official channel takes three months and approval is uncertain, people will use the tool first and ask forgiveness later.
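
Capturing intake requests as structured data, rather than email threads, makes the queue measurable against that timeline. A minimal sketch of the form above as a dataclass; the field names, the data-type vocabulary, and the example tool ("SummarizeBot" from "ExampleAI Inc.") are hypothetical:

```python
"""Capture an AI tool intake request as structured, queryable data."""
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class AIToolIntake:
    tool_name: str
    vendor: str
    use_case: str
    data_types: list[str]              # e.g. ["public", "internal", "npi"]
    requested_by: str
    submitted: date = field(default_factory=date.today)

    @property
    def needs_privacy_signoff(self) -> bool:
        # NPI involvement pulls in Privacy/Compliance, per the process above.
        return "npi" in self.data_types

    @property
    def decision_due(self) -> date:
        # 10-business-day target, approximated here as 14 calendar days.
        return self.submitted + timedelta(days=14)

# Hypothetical request: tool and vendor names are invented for illustration.
req = AIToolIntake("SummarizeBot", "ExampleAI Inc.",
                   "meeting note summarization", ["internal"], "jdoe")
print(req.needs_privacy_signoff, req.decision_due)
```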

Build and Maintain an Approved AI Tool List

Publish an approved tool list internally — a living document showing which AI tools are approved for what use cases and data types. Format it for easy consumption:

| Tool | Approved Uses | Data Classification Allowed | Notes |
|---|---|---|---|
| Microsoft Copilot (M365) | Drafting, summarization, internal research | Public, Internal | Use enterprise tenant only; no NPI |
| ChatGPT Enterprise | Analysis, research, drafting | Public, Internal | Must use enterprise account; no client data |
| GitHub Copilot | Code completion and review | Internal | No proprietary algorithm code |
| [Unapproved tools] | N/A | N/A | Request approval before use |

Update this list quarterly or when approvals change. Make it easy to find on your intranet.

Define Acceptable Use Boundaries

Every AI governance framework needs clear, behavioral-level rules — not abstract principles. Don’t write “employees should use AI responsibly.” Write:

  • Prohibited without explicit approval: Inputting customer names, account numbers, SSNs, transaction history, or any GLBA-protected NPI into any AI tool
  • Prohibited entirely: Using any AI tool to make or substantially influence credit decisions, adverse action determinations, or customer disclosures without documented human review
  • Require disclosure: Any AI-generated content used in regulatory filings, customer communications, or model documentation must be disclosed as AI-assisted and reviewed by a human
  • Incident reporting: Employees who realize they have input sensitive data into an unapproved AI tool must report it to Information Security within 24 hours — no exceptions, no punishment for self-reporting

The last point matters more than you think. If employees fear punishment for disclosure, you’ll never know about incidents until they become breaches.
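
The first prohibition can be partially enforced in code. A minimal sketch of a pre-prompt NPI screen; the regexes are illustrative and catch only well-formatted SSNs and long digit runs, so treat this as a backstop to training and DLP controls, not a replacement:

```python
"""Screen text for obvious NPI patterns before it reaches an AI tool."""
import re

# Illustrative patterns only; these miss names, reformatted numbers, and
# free-text account details.
NPI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_or_account": re.compile(r"\b\d{13,19}\b"),  # crude: long digit runs
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of any NPI patterns found in `text`."""
    return [name for name, pat in NPI_PATTERNS.items() if pat.search(text)]

hits = screen_prompt("Please summarize the account 4111111111111111 dispute")
if hits:
    print(f"Blocked: prompt appears to contain {hits}; report per policy")
```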

Step 4: Add Shadow AI to Your Model Inventory

MAS Project MindForge’s Operationalisation Handbook is explicit on this point: AI inventorisation must cover actual AI usage across the organization — not just approved deployments. Regulators reviewing your AI model inventory will ask whether it captures the full picture, including shadow tools discovered through monitoring.

Your model inventory entry for shadow AI doesn’t need to look like a full model risk card. At minimum, document:

  • Tool name and vendor
  • Departments/teams using it
  • Estimated user count
  • Use cases identified
  • Data types involved
  • Current status: approved, pending review, or prohibited
  • Risk rating from your assessment

This gives you a defensible answer when an examiner asks “what AI tools are in use at your organization?” Most firms currently can’t answer that question. The ones that can — and have documented it — are already ahead.
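
The entry itself can be as simple as a row in a version-controlled CSV. A minimal sketch using the fields listed above; the file name, the status vocabulary, and the example tool are assumptions:

```python
"""Append discovered shadow AI tools to a flat model-inventory file."""
import csv
import os

FIELDS = ["tool_name", "vendor", "departments", "est_users",
          "use_cases", "data_types", "status", "risk_rating"]

def record_tool(entry: dict, path: str = "ai_inventory.csv") -> None:
    """Append one inventory row, writing the header on first use."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(entry)

record_tool({
    "tool_name": "SummarizeBot",        # hypothetical example entry
    "vendor": "ExampleAI Inc.",
    "departments": "Marketing; Ops",
    "est_users": 14,
    "use_cases": "meeting summaries",
    "data_types": "internal",
    "status": "pending review",
    "risk_rating": "MEDIUM",
})
```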

Step 5: Ongoing Monitoring, Not One-Time Discovery

Shadow AI is not a project with a finish line. New AI tools appear weekly. Employees find workarounds. Approved tools add new features that change their risk profile. Your governance framework needs continuous visibility, not periodic audits.

Minimum ongoing controls:

  • Quarterly SaaS/OAuth authorization review: Revoke unused authorizations, flag new AI apps
  • Monthly network traffic review: Run shadow AI domain lists against DNS/proxy logs; update the domain list as new tools emerge
  • Annual employee AI use survey: Track adoption trends and unmet demand
  • AI tool approval queue monitoring: If requests are piling up without decisions, the bottleneck creates shadow AI pressure — fix the process
  • Incident tracking: Log every shadow AI incident (data exposure, policy violation, unauthorized use report) and review quarterly for patterns

Cisco’s 2025 AI security research found that 46% of organizations have experienced internal data leaks through generative AI. That’s not a future risk. It’s happening now, in your organization, whether your monitoring is catching it or not.

What MAS Project MindForge Says About This

The March 2026 MindForge AI Risk Management Toolkit — developed by a 24-firm consortium including major banks and insurers under MAS oversight — addresses shadow AI as part of the broader AI inventorisation and scope/oversight framework. The toolkit’s Operationalisation Handbook identifies identification of AI usage through organizational systems, policies, and procedures as a foundational requirement before risk materiality assessment can even begin.

The logic is straightforward: you cannot assess risk you haven’t discovered. MAS’s approach formalizes what risk managers have been saying informally: the AI model inventory must capture actual usage, not just approved programs. For global banks operating in Singapore or referencing Singapore’s framework, this creates a clear expectation that shadow AI detection is a required governance activity — not optional housekeeping.

The Governance Spectrum: Prohibition vs. Rapid Approval

Financial services firms tend toward one of two failure modes on shadow AI. The first is blanket prohibition: ban all unapproved AI, issue a strongly worded policy, and assume compliance. This doesn’t work. Employees find workarounds, tools evolve, and the ban creates a culture where AI use goes underground rather than surfacing for review.

The second failure mode is inaction — acknowledging shadow AI exists but not building infrastructure to address it. This is where most firms actually sit today, and it’s the riskier position.

The right answer lives between those extremes: a governance framework that makes it easy to get approval for appropriate AI use, maintains visibility into what’s happening, and applies meaningful controls to high-risk uses while not impeding legitimate productivity.

For most financial services firms, the practical model is:

  • Fast-track approval (5 business days): General productivity tools with no access to NPI, using enterprise APIs with zero-retention terms
  • Standard approval (10 business days): Tools with NPI access or used in customer-facing workflows; requires full security review and compliance sign-off
  • High scrutiny (30+ days or committee review): AI tools involved in credit decisions, regulatory reporting, or model risk management functions
  • Prohibited pending policy development: Agentic AI tools that can take autonomous actions (book trades, send customer communications, modify records)

Make the first tier easy enough that employees choose the approved path over the shadow path.
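
The routing logic is simple enough to encode alongside the intake form. A minimal sketch mapping a request's attributes to the four tracks above; the boolean flags and their names are assumptions for illustration:

```python
"""Route an AI tool approval request to one of the four tracks above."""
def approval_track(*, agentic: bool, credit_or_regulatory: bool,
                   touches_npi: bool, customer_facing: bool) -> str:
    if agentic:
        return "PROHIBITED pending agentic-AI policy"
    if credit_or_regulatory:
        return "HIGH SCRUTINY (30+ days / committee review)"
    if touches_npi or customer_facing:
        return "STANDARD (10 business days, full security + compliance review)"
    return "FAST-TRACK (5 business days)"

# A general productivity tool with no NPI access lands in the fast lane.
print(approval_track(agentic=False, credit_or_regulatory=False,
                     touches_npi=False, customer_facing=False))
```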

So What? Your 30-Day Starting Point

Shadow AI governance doesn’t have to be a multi-year program. In 30 days, a small team can get meaningful visibility and basic controls in place.

Week 1 — Discover:

  • Pull DNS/proxy logs for known AI domains; identify top tools by traffic volume
  • Run an OAuth authorization audit across Microsoft 365 and Google Workspace
  • Send an employee disclosure survey

Week 2 — Assess:

  • Categorize discovered tools by data sensitivity and decision impact
  • Identify your highest-risk shadow AI instances; escalate immediately if NPI is involved
  • Draft your approved tool list based on what’s already in use that meets security standards

Week 3 — Govern:

  • Publish the approved tool list and acceptable use boundaries
  • Launch the AI tool approval process (keep it lightweight)
  • Brief department heads on what you found and what’s changing

Week 4 — Control:

  • Add shadow AI to your model inventory
  • Configure network monitoring rules for ongoing detection
  • Brief your compliance and risk committee on current state and ongoing program

This won’t be perfect. It will be defensible — which is what regulators actually care about. The firms that face the most scrutiny aren’t the ones that found problems; they’re the ones that clearly weren’t looking.


Managing shadow AI is step one of any serious AI risk program. The AI Risk Assessment Template & Guide includes a ready-to-use AI model inventory template, acceptable use policy framework, risk tiering methodology, and vendor assessment checklist — everything you need to build a defensible AI governance program from scratch. Practical, examiner-ready, and built for financial services teams who don’t have time to start from a blank page.


Frequently Asked Questions

What is shadow AI, and why is it a compliance risk? Shadow AI refers to AI tools used within an organization without formal approval, security review, or oversight. In financial services, the compliance risk is significant: employees routinely input NPI, customer data, and proprietary information into these tools, creating GLBA exposure, potential exam findings, and data breach risk. IBM’s 2025 Cost of a Data Breach Report found shadow AI incidents carry an average cost of $4.63 million — roughly $670,000 more than standard breaches.

How do you detect shadow AI in a financial services firm? Detection requires multiple channels: DNS and network traffic monitoring for known AI tool domains, OAuth authorization audits across SaaS platforms, expense report reviews for AI vendor charges, browser extension audits via endpoint management tools, and anonymous employee surveys. No single method surfaces everything — you need all of them running in parallel. SaaS security platforms like Reco, DoControl, or Netskope automate much of this.

Should firms ban shadow AI or build an approval process? Blanket bans don’t work — employees find workarounds and usage goes underground, which is worse than visible unauthorized use. The better approach is a fast, functional approval process with a published approved tool list, clear acceptable use rules, and tiered controls based on data sensitivity and use case. Make the approved path easier than the shadow path and most employees will take it.

Rebecca Leung

Rebecca Leung has 8+ years of risk and compliance experience across first and second line roles at commercial banks, asset managers, and fintechs. Former management consultant advising financial institutions on risk strategy. Founder of RiskTemplates.
