Generative AI Acceptable Use Policy Template: What to Include and Why It Matters

Your employees are already using ChatGPT. The question is whether you have a policy governing it — or whether you’re going to find out the hard way that they’ve been feeding it customer data.

Samsung learned this lesson in April 2023. Engineers at the Korean tech giant uploaded proprietary source code and internal meeting notes to ChatGPT in three separate incidents, and the AI tool retained that data. Samsung’s response was swift: a blanket ban on generative AI tools. That may have been the right call for a semiconductor company. It’s probably not workable for a financial institution that needs to stay competitive.

The better answer — the one that doesn’t leave you flying blind or locked out of a transformational technology — is a generative AI acceptable use policy (AUP). Here’s how to build one that actually holds up.


TL;DR

  • A gen AI AUP isn’t about banning tools — it’s about drawing clear lines between permitted and prohibited use
  • Data classification is the linchpin: what goes into an AI tool determines almost every downstream risk
  • Regulators are already watching: NIST released its Generative AI Profile (NIST AI 600-1) in July 2024; the FTC launched five enforcement actions under Operation AI Comply in September 2024
  • Your policy needs a tool approval workflow, not just a prohibited use list

Why Financial Services Firms Need a Dedicated Gen AI AUP — Not Just an AI Policy

Most financial institutions already have an AI policy — something tied to model risk management, referencing SR 11-7 (the Federal Reserve’s 2011 guidance on model risk), maybe with a nod to the FFIEC’s technology risk management guidance. That framework was built for models: quantitative systems that produce outputs used in decisions. Credit scoring models. Fraud detection algorithms. Interest rate risk tools.

Generative AI is different in ways that matter for policy design:

  • It’s employee-facing, not system-facing. An underwriting model sits behind an API. ChatGPT, Microsoft Copilot, and Google Gemini sit in front of your employees, who are prompting them with whatever they have open.
  • The training data risk is bidirectional. You can control what goes into your fraud model. You cannot always control what a vendor’s LLM retains from user inputs.
  • The output risk is qualitative, not quantitative. A credit model’s output is a score — auditable, versioned, explainable. A generative AI output is text. It can be confidently wrong. It can be fabricated entirely.
  • The regulatory landscape is evolving in real time. NIST’s AI 600-1 Generative AI Profile, released July 26, 2024, identified 12 unique risk categories for generative AI — including data privacy, harmful bias and homogenization, and “confabulation” (NIST’s term for hallucination) — that simply don’t appear in traditional model risk frameworks.

SR 11-7 is still your foundation. But you need a gen AI AUP as a layer on top.


The 8 Core Sections of a Generative AI Acceptable Use Policy

1. Scope and Definitions

Be explicit about what this policy covers. “AI” is too broad. Define:

  • Generative AI tools: Applications that use large language models (LLMs), image generators, or multimodal AI to produce text, code, images, summaries, or other content in response to user prompts. Examples: ChatGPT, Claude, Microsoft Copilot, Google Gemini, GitHub Copilot, DALL-E, Midjourney.
  • Enterprise-licensed tools: Approved gen AI tools procured and configured by your organization with appropriate data governance controls.
  • Consumer/personal tools: Public-facing versions of gen AI tools accessed via personal accounts (e.g., the free tier of ChatGPT), which may use inputs for model training.
  • Third-party AI-embedded tools: Vendor products that have incorporated gen AI features — document review software, contract management platforms, customer service tools.

Who it covers: All employees, contractors, and third parties with access to company systems and data. Not just the tech team.

What it doesn’t cover: Internally developed AI models subject to your model risk management policy (those go through validation, governance committees, etc.).


2. Data Classification Rules — The Most Critical Section

This is where most policies fail. They say “don’t put sensitive data into AI tools” without defining what sensitive means or giving employees a usable framework.

Use a three-tier approach:

| Data Tier | Examples | AI Tool Rule |
| --- | --- | --- |
| Public / Non-Confidential | Published reports, publicly available information, generic business documents | May be used with any approved AI tool |
| Internal / Confidential | Internal processes, non-public strategies, employee information | Enterprise-licensed tools only (with data retention controls); prohibited in consumer tools |
| Restricted / Regulated | Customer PII, NPI, transaction data, MNPI, attorney-client privileged content, exam findings | Prohibited in all external gen AI tools; only permitted in enterprise tools with explicit CISO/Legal approval |

The critical distinction between enterprise-licensed and consumer tools isn’t brand — it’s contract. An enterprise Microsoft 365 Copilot license includes data processing agreements and commitments that Microsoft won’t use your data to train models. The free tier of ChatGPT does not make the same commitment. Your policy needs to enforce that line.
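
If approved tools sit behind a gateway or DLP layer, that contractual line can be enforced in code rather than left to memory. Below is a minimal sketch in Python; the registry, tool names, and tier caps are hypothetical illustrations, not a real API.

```python
from enum import IntEnum

class DataTier(IntEnum):
    """Higher value = more sensitive; ordering allows direct comparison."""
    PUBLIC = 1      # Published reports, publicly available information
    INTERNAL = 2    # Internal processes, non-public strategies, employee info
    RESTRICTED = 3  # Customer PII/NPI, MNPI, privileged content, exam findings

# Hypothetical registry: the highest tier each tool is contractually cleared for.
# Consumer/free-tier tools are deliberately absent: no entry means no clearance.
TOOL_MAX_TIER: dict[str, DataTier] = {
    "m365_copilot_enterprise": DataTier.INTERNAL,  # DPA executed, no training on inputs
}

def tool_permits(tool_id: str, data_tier: DataTier) -> bool:
    """True only if the tool's contractual clearance covers the data tier."""
    max_tier = TOOL_MAX_TIER.get(tool_id)
    return max_tier is not None and data_tier <= max_tier
```

With this registry, tool_permits("m365_copilot_enterprise", DataTier.RESTRICTED) returns False, which matches the Copilot example in section 3 below.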

Specific financial services prohibitions to call out explicitly:

  • Customer account numbers, SSNs, DOBs — never in external gen AI tools
  • Material non-public information (MNPI) related to M&A, earnings — prohibited; this isn’t just a data policy issue, it’s a securities law issue
  • Suspicious activity report (SAR) information — prohibited; federal law (31 U.S.C. § 5318(g)) restricts disclosure
  • Examination findings and MRAs — prohibited; treat as restricted by default
  • Attorney-client privileged communications — prohibited

3. Approved and Prohibited Tools List

Publish and maintain a pre-approved tool list — not a prohibited list. The prohibited list approach chases an infinite tail. New tools launch daily. The better model:

Pre-approved tools (can be used for permitted purposes without additional approval):

  • List your enterprise-licensed gen AI tools here with their approved data tiers
  • Example: “Microsoft 365 Copilot — approved for Internal/Confidential data; prohibited for Restricted/Regulated data”

Approved with restrictions (require CISO/Compliance review before use):

  • AI tools embedded in approved vendor platforms
  • Task-specific gen AI tools under evaluation

Prohibited by default:

  • Any gen AI tool not on the pre-approved list
  • Consumer/free-tier accounts of any gen AI tool, even from approved vendors
  • Gen AI tools operated by vendors without executed data processing agreements

Review cadence: Reassess the pre-approved list quarterly. Gen AI capabilities change fast; a tool that was safe six months ago may have changed its data handling practices.
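
These three buckets translate naturally into a default-deny lookup: anything unknown lands in the prohibited bucket automatically. A sketch in the same spirit, again with hypothetical identifiers.

```python
from enum import Enum

class ApprovalStatus(Enum):
    PRE_APPROVED = "pre_approved"            # usable at its permitted data tier
    RESTRICTED_REVIEW = "restricted_review"  # CISO/Compliance review before use
    PROHIBITED = "prohibited"                # the default for anything unlisted

# Hypothetical approved-tools list, reassessed quarterly per the policy.
TOOL_STATUS = {
    "m365_copilot_enterprise": ApprovalStatus.PRE_APPROVED,
    "vendor_contract_ai_plugin": ApprovalStatus.RESTRICTED_REVIEW,
}

def approval_status(tool_id: str) -> ApprovalStatus:
    # Default-deny: new tools launch daily, and none of them is approved implicitly.
    return TOOL_STATUS.get(tool_id, ApprovalStatus.PROHIBITED)
```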


4. Use Case Approval Workflow

Not every gen AI use case carries the same risk. A paralegal using Copilot to summarize a contract template is different from a compliance officer using it to analyze exam findings. Your policy needs a tiered approval mechanism:

Tier 1 — Self-service (no approval needed)

  • Uses involving only public data
  • Uses within pre-approved tools at their permitted data tier
  • Examples: summarizing public news articles, drafting initial templates, generating ideas for marketing copy using only public information

Tier 2 — Manager approval + use case log

  • Uses involving internal/confidential data
  • Any novel use case not explicitly addressed in policy
  • Examples: summarizing internal policy documents, drafting internal memos using non-public information

Tier 3 — Compliance/CISO review

  • New tools not on the approved list
  • Any use involving restricted data (with rare exceptions for approved enterprise tools)
  • Customer-facing AI outputs (emails, letters, chatbot responses)
  • Any use that feeds AI output directly into a regulatory filing or customer disclosure

Responsible party for workflow administration: At most mid-size banks, this sits with the CRO or Head of Compliance. At fintechs without a dedicated CRO, assign it to the Head of Compliance or VP of Engineering — but whoever owns it needs actual authority to say no.
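
The routing logic is simple enough to encode directly in an intake form or ticketing workflow. Here is a self-contained sketch of the decision order; the parameter names are hypothetical, and the most restrictive trigger wins.

```python
def approval_tier(
    data_tier: str,           # "public", "internal", or "restricted"
    tool_pre_approved: bool,  # is the tool pre-approved at this data tier?
    customer_facing: bool,    # does output reach a customer or a filing?
    novel_use_case: bool,     # use case not explicitly addressed in policy?
) -> int:
    """Return the review tier: 1 = self-service, 2 = manager, 3 = Compliance/CISO."""
    # Tier 3 triggers: restricted data, tools off the approved list,
    # or anything customer-facing or filed with a regulator.
    if data_tier == "restricted" or not tool_pre_approved or customer_facing:
        return 3
    # Tier 2 triggers: internal/confidential data or a novel use case.
    if data_tier == "internal" or novel_use_case:
        return 2
    # Tier 1: public data, pre-approved tool, routine use.
    return 1
```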


5. Output Review Requirements

Generative AI hallucinates. That’s not a bug that will be fully fixed — it’s an inherent characteristic of how LLMs work. NIST’s AI 600-1 calls it “confabulation”: the model generates plausible-sounding but factually incorrect content with full confidence.

Your policy needs to require human review of AI outputs before they’re acted on:

Always required (regardless of data tier or tool):

  • Any AI-generated content that will be sent to customers
  • Any AI-generated legal or regulatory analysis used in a filing or decision
  • Any AI-generated output used in a credit decision, underwriting determination, or adverse action
  • Any AI-generated content used in an audit response or regulatory communication

Best practice for mandatory review: Require the reviewer to document that they reviewed the output, not just that they used an AI tool. “I reviewed this AI-generated summary and verified accuracy against the source documents” — not “AI-assisted.”
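
One lightweight way to capture that documentation is a structured log entry rather than free text. The sketch below uses hypothetical field names; adapt them to whatever your use case log already records.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class OutputReviewRecord:
    """Hypothetical log entry documenting the review itself, not just tool use."""
    reviewer: str
    review_date: date
    tool_used: str
    output_description: str     # what the AI produced and where it went
    sources_checked: list[str]  # documents the output was verified against
    verified_accurate: bool     # the reviewer's attestation

# Example: an entry that attests to verification against named sources.
record = OutputReviewRecord(
    reviewer="j.doe",
    review_date=date(2025, 1, 15),
    tool_used="m365_copilot_enterprise",
    output_description="Summary of Q4 vendor contract amendments",
    sources_checked=["contract_v3.pdf", "amendment_2.pdf"],
    verified_accurate=True,
)
```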

Hallucination risk is especially acute for: regulatory citations, case law references, statistics and data points, historical facts, and any time the model is asked to analyze documents it doesn’t actually have access to.


6. Training and Annual Attestation

Policy without training is wallpaper. Require:

  • Initial training before any employee uses an approved gen AI tool — cover data classification rules, prohibited uses, and how to log Tier 2/3 use cases
  • Annual refresher — given how fast gen AI is evolving, once a year is a minimum; semi-annual is better
  • Annual attestation — employees sign off that they’ve read and will comply with the policy
  • Role-specific modules for high-risk functions: compliance, legal, customer-facing roles, anyone with access to restricted data

Where this connects to enforcement: The FTC launched Operation AI Comply in September 2024, taking five enforcement actions against companies that used AI to facilitate deceptive practices. One takeaway: “we didn’t know employees were using it that way” is not a defense. Documented training is how you establish that you took reasonable steps.


7. Third-Party and Vendor AI Use

Your vendors are using gen AI too. They may be using it on your customers’ data without your knowledge. Your policy needs a third-party component:

For existing vendors: Add gen AI disclosure requirements to your next vendor review cycle. Ask: Does your product use generative AI? Which model(s)? How is customer data handled? What are your data retention practices?

For new vendor onboarding: Your TPRM questionnaire should explicitly include gen AI usage. This isn’t optional — your customer data doesn’t become less protected just because it’s in a vendor’s environment.

Contractual requirements: Any vendor processing your customer data using gen AI tools needs explicit contractual commitments covering: (1) data not used for third-party model training, (2) data retention and deletion rights, (3) right to audit AI-related controls.

The OCC, FDIC, and Federal Reserve issued interagency guidance on third-party risk management in June 2023 that explicitly frames these expectations. Your vendor AI oversight connects directly to that framework.


8. Monitoring, Incident Reporting, and Enforcement

Monitoring: Depending on your data loss prevention (DLP) tooling, you may be able to monitor for prompts containing sensitive data patterns. At minimum, require employees to self-report unauthorized uses — and make that reporting genuinely low-stakes so it actually happens.
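
In practice, “sensitive data patterns” means pattern-level screening before a prompt leaves your network. A minimal sketch follows; these patterns are illustrative only and would need tuning (plus validation logic to cut false positives) against your institution’s own data formats.

```python
import re

# Illustrative patterns only. Real DLP rules need refinement: checksum
# validation, context rules, and the format variants your institution uses.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "account_number": re.compile(r"\b\d{10,16}\b"),
    "routing_number": re.compile(r"\b\d{9}\b"),
}

def flag_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

# Example: this prompt would be blocked or escalated before submission.
hits = flag_prompt("Summarize activity on account 1234567890 for SSN 123-45-6789")
assert hits == ["ssn", "account_number"]
```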

Incident reporting: Define what constitutes an AI-related incident requiring escalation. Triggers should include:

  • Unauthorized input of restricted data into any gen AI tool
  • AI output that was acted on and later found to be materially incorrect, resulting in a customer impact
  • Discovery that a vendor’s gen AI tool accessed or retained customer data outside of agreed terms

Enforcement: Policy without teeth isn’t policy. State clearly that violations may result in disciplinary action up to and including termination. For violations involving regulated data, specify that Legal/Compliance will assess whether regulatory reporting is required.


Implementation Roadmap: 30/60/90/120 Days

Days 1–30: Foundation

  • Assign policy owner (Compliance, CRO, or CISO — one owner, not a committee)
  • Inventory current gen AI tool usage across the organization (send a brief survey; you’ll be surprised)
  • Draft data classification tiers specific to your institution’s data types
  • Identify enterprise-licensed tools already in use; pull data processing agreements

Days 31–60: Policy Draft and Legal Review

  • Draft the full AUP covering all 8 sections above
  • Legal and Compliance review — confirm alignment with SR 11-7, existing AML/privacy policies, and applicable state laws
  • IT review — identify DLP monitoring capabilities and gaps
  • Build the approved tools list

Days 61–90: Training and Rollout

  • Develop training module (can be brief — 20 minutes is enough for initial rollout)
  • Communicate policy to all staff before it takes effect
  • Launch use case log/approval workflow (a SharePoint list or Jira queue works fine for Tier 2/3)
  • Collect initial attestations

Days 91–120: Operationalize

  • Complete first quarterly review of approved tools list
  • Complete first round of vendor AI disclosure requests
  • Incorporate gen AI questions into new vendor onboarding checklist
  • Review any use case logs for patterns; adjust policy as needed
  • Document everything — if a regulator asks whether you’ve addressed gen AI risk, this paper trail is your answer

What Regulators Expect to See

Regulators haven’t issued gen AI-specific AUP requirements for banks (yet), but the signals are consistent: existing risk frameworks apply. When examiners review your gen AI governance, they’re going to look for:

  1. Board/executive awareness — has AI risk been presented to the board or board risk committee?
  2. Policies and procedures — is there a written policy? Is it current?
  3. Training — can you demonstrate that employees know the rules?
  4. Third-party oversight — are you managing AI risks in your vendor relationships?
  5. Monitoring — is there any oversight mechanism, or is usage completely unchecked?

The NIST AI RMF Generative AI Profile (NIST AI 600-1) is the closest thing to a voluntary compliance standard right now. Build your governance documentation around its four functions (GOVERN, MAP, MEASURE, and MANAGE); that structure maps cleanly to what regulators expect.


So What?

Not having a gen AI acceptable use policy isn’t a neutral position. It’s a decision — a decision that your employees can use any tool, for any purpose, with any data, with no accountability. That’s how Samsung ends up with source code in OpenAI’s training pipeline and a policy of total prohibition to clean up the mess.

You don’t have to choose between innovation and control. A solid gen AI AUP gives your employees the guardrails they need to use these tools productively while protecting you from the data leaks, regulatory findings, and reputational damage that come from no guardrails at all.

Build the policy before the incident. Not after.


If you’re building your gen AI governance framework from scratch, the AI Risk Assessment Template & Guide walks through a comprehensive AI risk inventory that pairs directly with an acceptable use policy — including data classification frameworks and use case risk tiering tailored for financial services.


FAQ

What’s the difference between a generative AI acceptable use policy and a model risk management policy?

Model risk management (MRM) policies — typically anchored to Federal Reserve SR 11-7 — govern quantitative models used in bank decisions: credit models, market risk models, stress testing tools. Generative AI AUPs govern employee use of LLM-based tools for productivity, content generation, and analysis. Gen AI tools can be models under SR 11-7 if they’re used in decisions (e.g., an AI tool that produces credit recommendations), but most employee-facing gen AI use falls outside the SR 11-7 scope. You need both.

Do I need a gen AI acceptable use policy if we only use Microsoft 365 Copilot?

Yes. Even an enterprise-licensed, properly contracted gen AI tool carries risk without a use policy. Employees still need to know what data they can prompt it with, what outputs require human review, and what use cases need additional approval. The contract with Microsoft protects you from third-party data exposure — it doesn’t protect you from an employee using Copilot to draft a customer letter that contains a fabricated regulatory citation.

How often should we update the gen AI acceptable use policy?

At minimum, annual review. In practice, review it whenever: (1) a new AI tool is added or removed from the approved list, (2) a significant regulatory development occurs (new agency guidance, enforcement action), (3) an internal incident reveals a gap in the policy, or (4) a vendor changes its AI data handling practices. Given the pace of change in this space, semi-annual reviews are worth the effort.

Rebecca Leung

Rebecca Leung has 8+ years of risk and compliance experience across first and second line roles at commercial banks, asset managers, and fintechs. Former management consultant advising financial institutions on risk strategy. Founder of RiskTemplates.
