EU AI Act GPAI Obligations: What Providers and Downstream Deployers Must Do in 2026
Your AI vendor signed the EU AI Act’s GPAI Code of Practice in August 2025. That tells you they’re working toward compliance. What tells the regulator that you are?
Most compliance teams treating the EU AI Act as a “provider problem” — something OpenAI, Google, or Anthropic handles — are misreading how the obligations flow downstream. The Act creates a two-tier structure for general-purpose AI models: foundational obligations on every GPAI provider, and elevated obligations on those whose models cross a systemic risk threshold. Downstream companies building on those models aren’t exempted — they need documentation from providers to satisfy their own AI system requirements.
With enforcement fines of up to 3% of global annual turnover going live August 2, 2026, this is no longer a deferred compliance project.
TL;DR
- EU AI Act GPAI model obligations went live August 2, 2025; enforcement fines (up to 3% global turnover or €15M) begin August 2, 2026.
- All GPAI providers face four Article 53 obligations: technical documentation, downstream use instructions, a copyright compliance policy, and a public training data summary.
- Models with training compute above 10²⁵ FLOPs trigger Article 55 systemic risk obligations — adversarial testing, incident reporting to the AI Office, and cybersecurity requirements.
- Downstream deployers aren’t Article 53 providers for the underlying model, but they need the provider’s documentation to complete their own AI system compliance.
- Open-source models are partially exempt from Article 53 — but lose that exemption if they carry systemic risk.
What Qualifies as a GPAI Model
The EU AI Act defines a general-purpose AI (GPAI) model as an AI model trained with large amounts of data using self-supervision at scale, displaying significant generality, and capable of competently performing a wide range of distinct tasks (Article 3(63)).
Practically: GPT-5, Gemini 3, Claude, Llama 4, and similar frontier models qualify. The definition focuses on functional generality and training methodology — not a single compute threshold.
A separate, higher threshold applies to systemic risk designation: a model is presumed to carry systemic risk if cumulative training compute exceeds 10²⁵ FLOPs (Article 51(1)(a)). The AI Office can also designate models below this threshold based on market reach, deployment scope, or other factors. Providers must notify the AI Office within two weeks of reaching or reasonably foreseeing the 10²⁵ FLOPs threshold.
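The threshold arithmetic is simple enough to sanity-check internally. Below is a minimal sketch using the common 6 × parameters × training-tokens approximation for training compute — an illustrative heuristic widely used in the research community, not the Act's official accounting methodology, and the example model figures are hypothetical:

```python
# Sketch: estimating whether cumulative training compute presumptively
# triggers Article 51's systemic-risk threshold.
# The 6 * N * D rule (~6 FLOPs per parameter per training token) is an
# illustrative heuristic, not the Act's prescribed methodology.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51(1)(a)


def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6.0 * n_parameters * n_training_tokens


def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the estimate meets or exceeds the 1e25 FLOPs presumption."""
    return estimated_training_flops(n_parameters, n_training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS


# Hypothetical 400B-parameter model trained on 15T tokens: ~3.6e25 FLOPs,
# comfortably above the presumption; a 7B model on 2T tokens is not.
print(presumed_systemic_risk(400e9, 15e12))  # True
print(presumed_systemic_risk(7e9, 2e12))     # False
```

A provider near the line should not rely on a back-of-envelope estimate alone — the two-week notification duty attaches when the threshold is reached or reasonably foreseeable.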
The GPAI chapter applies to providers — companies that develop or place GPAI models on the EU market, including via API. A US company making a model accessible to EU users is within scope. No EU office required.
Article 53: The Four Obligations Every GPAI Provider Faces
Article 53 establishes four baseline obligations for all GPAI model providers. These apply to models placed on the market from August 2, 2025 forward. Providers of models placed on the market before August 2, 2025 have until August 2, 2027 to comply.
1. Technical Documentation (Annex XI)
Providers must prepare and maintain documentation covering:
- Model architecture and training methodology
- Training data sources and volumes
- Computational resources and energy consumption
- Performance benchmarks and evaluation results
- Known limitations and foreseeable misuse scenarios
This documentation is submitted to the AI Office and national supervisory authorities on request. The AI Office published a mandatory documentation template on July 24, 2025 — providers use this template, not a bespoke format.
2. Instructions for Use (Annex XII)
Providers must supply downstream AI system providers with sufficient information to understand:
- Intended use cases and appropriate deployment contexts
- Capabilities and performance characteristics
- Known limitations and failure modes
- Required precautions for specific use cases
If you’re building an enterprise AI application on top of a third-party GPAI model, this is the documentation you should be requesting from your vendor. Inability to produce it is a gap in your third-party AI vendor due diligence process.
3. Copyright Compliance Policy
Providers must implement a policy to comply with EU copyright law, specifically honoring the text and data mining (TDM) opt-out regime under Article 4(3) of Directive 2019/790. When a rights-holder has reserved their works from TDM use, the GPAI provider must have a mechanism to identify and honor that reservation.
The GPAI Code of Practice’s Copyright chapter provides a framework for demonstrating compliance. This is one of the more operationally complex obligations — particularly for models trained on web-scraped datasets — and a live area of EU litigation.
4. Public Training Data Summary
Providers must publish a public summary of training data using the AI Office’s mandatory template (published July 24, 2025). The summary discloses:
- Types of content used (text, image, code, audio, etc.)
- Data sources and collection methods
- Content filtering or curation applied
This is a public-facing obligation — not a private regulatory submission. It is distinct from, and less detailed than, the Annex XI technical documentation.
Open-source exception: Obligations (1) and (2) above don’t apply to GPAI models released under a free and open-source license whose parameters, architecture, and usage information are publicly available. That exemption disappears for any open-source model that also qualifies as systemic risk — which is why models like Llama 4 still have exposure if they cross the 10²⁵ FLOPs threshold.
Article 55: Systemic Risk — The Higher Tier
Providers whose models cross the 10²⁵ FLOPs threshold — or are designated by the AI Office — face four additional obligations under Article 55, on top of the baseline Article 53 requirements.
| Obligation | What It Requires |
|---|---|
| Model evaluations | Standardized testing and adversarial testing to identify systemic risks |
| Risk assessment and mitigation | Assess and mitigate possible systemic risks at EU level |
| Incident reporting | Track and report serious incidents and corrective measures to the AI Office without undue delay |
| Cybersecurity protection | Adequate security for the model itself and its physical infrastructure |
“Serious incidents” under the incident reporting obligation are events that constitute or may constitute a criminal offense, cause harm to persons or property, or disrupt critical infrastructure. Reporting is to the AI Office — not national authorities.
The models currently presumed to meet the systemic risk threshold include the most capable frontier models from OpenAI, Google DeepMind, Anthropic, and Meta. Systemic risk tier providers must run adversarial testing programs; the GPAI Code of Practice’s Safety and Security chapter provides the framework for documenting them.
The GPAI Code of Practice: Your Compliance Path
On July 10, 2025, the European AI Office published the final GPAI Code of Practice. The European Commission endorsed it via adequacy decisions on August 1, 2025. Adherence is voluntary — but compliance with the CoP creates a presumption of compliance with the underlying AI Act obligations it covers.
The CoP has three chapters:
| Chapter | Covers |
|---|---|
| Transparency | Annex XI technical documentation and Annex XII instructions for use |
| Copyright | Copyright policy documentation and TDM opt-out compliance |
| Safety and Security | Systemic risk model evaluation, adversarial testing, incident reporting |
Major AI providers — Microsoft, Google, Amazon, OpenAI, and Anthropic — signed the CoP in August 2025. Meta declined, a decision the AI Office has specifically noted as it builds its oversight program. Non-signatories can still demonstrate compliance through the underlying regulatory requirements, but they forgo the CoP’s presumption of compliance and face higher scrutiny in any AI Office review.
What Downstream Deployers Actually Need
A company building an AI application on top of a GPAI model — say, a financial institution deploying an LLM for credit assessment or customer service — is a downstream AI system provider under the Act. They are not subject to Article 53’s provider obligations for the underlying GPAI model (that’s the model vendor’s obligation). But they are subject to their own AI system-level risk assessment requirements.
To complete that assessment, they need the upstream provider’s documentation. This creates a documentation chain:
GPAI Model Provider → [Annex XI/XII documentation] → Downstream System Provider → [system-level risk assessment] → Deployer/Operator
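The chain above can be sketched as a minimal record a downstream system provider might keep per upstream model in its inventory. All field names here are illustrative — the Act mandates the documentation, not any particular schema:

```python
from dataclasses import dataclass, field

# Illustrative per-vendor record for a downstream AI system provider's
# model inventory. Field names are hypothetical, not mandated by the Act.


@dataclass
class UpstreamModelRecord:
    vendor: str
    model_name: str
    annex_xii_received: bool = False      # instructions for use from the provider
    cop_signatory: bool = False           # vendor's GPAI Code of Practice status
    presumed_systemic_risk: bool = False  # >= 1e25 FLOPs or AI Office designation
    open_gaps: list = field(default_factory=list)

    def ready_for_risk_assessment(self) -> bool:
        """Annex XII documentation is the minimum input a downstream
        provider needs for its own system-level risk assessment."""
        return self.annex_xii_received


record = UpstreamModelRecord(vendor="ExampleAI", model_name="example-1")
record.open_gaps.append("Annex XII instructions for use not yet received")
print(record.ready_for_risk_assessment())  # False until the documentation arrives
```

A structure like this also gives the compliance team a single place to evidence the chain when a regulator, or the upstream provider responding to an AI Office request, asks what was done with the documentation.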
If you’re deploying GPT-5, Gemini, or Claude via API to build an EU-facing product, the compliance question isn’t “do I have GPAI obligations?” — it’s “have I received and documented the upstream provider’s technical documentation and used it in my own system-level risk assessment?”
For financial institutions, this maps directly to model risk governance expectations. Vendor documentation gaps are model risk documentation gaps. The AI governance checklist your regulators actually test has specific expectations around third-party model documentation, and examiners will ask to see it.
How This Intersects with US AI Governance Frameworks
US companies navigating both NIST AI RMF requirements and EU AI Act obligations often ask which framework to lead with. The short answer: they’re largely compatible, and the documentation chain the EU AI Act requires maps onto what NIST calls GOVERN and MAP functions.
The NIST AI RMF vs EU AI Act compliance mapping breaks this out in detail. For GPAI model compliance specifically, the practical overlap is:
- NIST GOVERN 1.2 (AI risk policies) → Article 53 copyright compliance policy
- NIST MAP 1.1 (AI context establishment) → Annex XII instructions for use documentation
- NIST MAP 2.1 (impact assessment) → Article 55 systemic risk assessment
If you’re already running a NIST-aligned AI risk program, most of the documentation infrastructure for EU AI Act compliance is closer than it looks.
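One way to operationalize the overlap above is a simple crosswalk in your GRC tooling. This is a sketch following the mapping listed in the text, not an official NIST-to-EU correspondence:

```python
# Illustrative crosswalk between NIST AI RMF subcategories and EU AI Act
# GPAI obligations, following the mapping in the text above.

NIST_TO_EU_AI_ACT = {
    "GOVERN 1.2": "Article 53(1)(c) copyright compliance policy",
    "MAP 1.1": "Annex XII instructions for use documentation",
    "MAP 2.1": "Article 55 systemic risk assessment",
}


def eu_obligations_for(nist_ids: list) -> list:
    """Return EU AI Act obligations mapped to the given NIST AI RMF IDs,
    skipping IDs with no mapping."""
    return [NIST_TO_EU_AI_ACT[i] for i in nist_ids if i in NIST_TO_EU_AI_ACT]


print(eu_obligations_for(["GOVERN 1.2", "MAP 2.1"]))
```

In practice the crosswalk lets you tag existing NIST-aligned control evidence with the EU obligation it also satisfies, rather than building a parallel documentation set.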
So What? The Compliance Checklist
For GPAI model providers (new model, placed on market after August 2, 2025):
| Action | Article | Deadline |
|---|---|---|
| Complete Annex XI technical documentation | 53(1)(a) | At market placement |
| Prepare Annex XII instructions for use | 53(1)(b) | At market placement |
| Implement TDM copyright opt-out compliance policy | 53(1)(c) | At market placement |
| Publish AI Office training data summary | 53(1)(d) | At market placement |
| Determine whether 10²⁵ FLOPs threshold applies | Art. 51 | Ongoing; notify AI Office within 2 weeks |
| If systemic risk: build adversarial testing program | 55(1)(a) | At market placement |
| If systemic risk: establish incident reporting mechanism to AI Office | 55(1)(c) | At market placement |
| Consider signing GPAI Code of Practice | — | Ongoing |
For downstream deployers building on GPAI models:
| Action | Why It Matters |
|---|---|
| Request Annex XII instructions for use from each GPAI model vendor | Foundation for your AI system risk assessment |
| Confirm vendor GPAI Code of Practice status | Signals documentation quality; non-signatories face elevated scrutiny |
| Document whether the model is systemic risk | Affects your own system-level obligations and exposure analysis |
| Incorporate vendor documentation into your AI model inventory | Required for model risk governance under OCC/Fed AI guidance |
| Review vendor contracts for EU AI Act documentation provisions | Sets clear expectations before enforcement actions begin |
Enforcement powers enter full application August 2, 2026. The AI Office’s first priorities will be providers — but documentation gaps surface when providers respond to oversight requests and ask downstream firms what they’ve done with the documentation. Getting the chain right now costs far less than explaining gaps under a formal inquiry.
Sources: EU AI Act Article 53 — GPAI Provider Obligations · EU AI Act Article 55 — Systemic Risk Obligations · GPAI Code of Practice Final Version · EC Guidelines for GPAI Model Providers · Latham & Watkins: GPAI Obligations in Force
Related Template
AI Risk Assessment Template & Guide
Comprehensive AI model governance and risk assessment templates for financial services teams.
Rebecca Leung
Rebecca Leung has 8+ years of risk and compliance experience across first and second line roles at commercial banks, asset managers, and fintechs. Former management consultant advising financial institutions on risk strategy. Founder of RiskTemplates.