AI Risk

EU AI Act GPAI Obligations: What Providers and Downstream Deployers Must Do in 2026

May 2, 2026 Rebecca Leung

Your AI vendor signed the EU AI Act’s GPAI Code of Practice in August 2025. That tells you they’re working toward compliance. What tells the regulator that you are?

Most compliance teams treating the EU AI Act as a “provider problem” — something OpenAI, Google, or Anthropic handles — are misreading how the obligations flow downstream. The Act creates a two-tier structure for general-purpose AI models: foundational obligations on every GPAI provider, and elevated obligations on those whose models cross a systemic risk threshold. Downstream companies building on those models aren’t exempted — they need documentation from providers to satisfy their own AI system requirements.

With enforcement fines of up to 3% of global annual turnover going live August 2, 2026, this is no longer a deferred compliance project.

TL;DR

  • EU AI Act GPAI model obligations went live August 2, 2025; enforcement fines (up to 3% global turnover or €15M) begin August 2, 2026.
  • All GPAI providers face four Article 53 obligations: technical documentation, downstream use instructions, a copyright compliance policy, and a public training data summary.
  • Models with training compute above 10²⁵ FLOPs trigger Article 55 systemic risk obligations — adversarial testing, incident reporting to the AI Office, and cybersecurity requirements.
  • Downstream deployers aren’t Article 53 providers for the underlying model, but they need the provider’s documentation to complete their own AI system compliance.
  • Open-source models are partially exempt from Article 53 — but lose that exemption if they carry systemic risk.

What Qualifies as a GPAI Model

The EU AI Act defines a general-purpose AI (GPAI) model as an AI model trained with large amounts of data using self-supervision at scale, displaying significant generality, and capable of competently performing a wide range of distinct tasks (Article 3(63)).

Practically: GPT-5, Gemini 3, Claude, Llama 4, and similar frontier models qualify. The definition focuses on functional generality and training methodology — not a single compute threshold.

A separate, higher threshold applies to systemic risk designation: a model is presumed to carry systemic risk if cumulative training compute exceeds 10²⁵ FLOPs (Article 51(1)(a)). The AI Office can also designate models below this threshold based on market reach, deployment scope, or other factors. Providers must notify the AI Office within two weeks of reaching or reasonably foreseeing the 10²⁵ FLOPs threshold.
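The 10²⁵ FLOPs presumption can be sanity-checked with the common "6ND" heuristic (training compute ≈ 6 × parameters × training tokens). The heuristic and the example figures below are illustrative assumptions of this sketch — the Act does not prescribe an estimation method:

```python
# Illustrative check of the Article 51(1)(a) systemic-risk presumption.
# Assumption: training compute ~ 6 * params * tokens (a common industry
# heuristic, not a method prescribed by the EU AI Act).

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51(1)(a)

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate using the 6ND approximation."""
    return 6 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    """True when estimated compute meets or exceeds the Art. 51 threshold."""
    return estimated_training_flops(params, tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical model: 400B parameters trained on 15T tokens.
flops = estimated_training_flops(400e9, 15e12)
print(f"{flops:.2e}")                         # 3.60e+25
print(presumed_systemic_risk(400e9, 15e12))   # True — above 1e25
```

A provider near the line would rely on actual accelerator accounting, not this approximation — but the sketch shows why today's frontier training runs comfortably clear the presumption.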

The GPAI chapter applies to providers — companies that develop or place GPAI models on the EU market, including via API. A US company making a model accessible to EU users is within scope. No EU office required.


Article 53: The Four Obligations Every GPAI Provider Faces

Article 53 establishes four baseline obligations for all GPAI model providers. These apply to models placed on the market from August 2, 2025 forward. Providers of models placed on the market before August 2, 2025 have until August 2, 2027 to comply.

1. Technical Documentation (Annex XI)

Providers must prepare and maintain documentation covering:

  • Model architecture and training methodology
  • Training data sources and volumes
  • Computational resources and energy consumption
  • Performance benchmarks and evaluation results
  • Known limitations and foreseeable misuse scenarios

This documentation is submitted to the AI Office and national supervisory authorities on request. The AI Office published a mandatory documentation template on July 24, 2025 — providers use this template, not a bespoke format.

2. Instructions for Use (Annex XII)

Providers must supply downstream AI system providers with sufficient information to understand:

  • Intended use cases and appropriate deployment contexts
  • Capabilities and performance characteristics
  • Known limitations and failure modes
  • Required precautions for specific use cases

If you’re building an enterprise AI application on top of a third-party GPAI model, this is the documentation you should be requesting from your vendor. If you can’t produce it on request, that’s a gap in your third-party AI vendor due diligence process.

3. Copyright Compliance Policy

Providers must implement a policy to comply with EU copyright law, specifically honoring the text and data mining (TDM) opt-out regime under Article 4(3) of Directive 2019/790. When a rights-holder has reserved their works from TDM use, the GPAI provider must have a mechanism to identify and honor that reservation.

The GPAI Code of Practice’s Copyright chapter provides a framework for demonstrating compliance. This is one of the more operationally complex obligations — particularly for models trained on web-scraped datasets — and a live area of EU litigation.

4. Public Training Data Summary

Providers must publish a public summary of training data using the AI Office’s mandatory template (published July 24, 2025). The summary discloses:

  • Types of content used (text, image, code, audio, etc.)
  • Data sources and collection methods
  • Content filtering or curation applied

This is a public-facing obligation — not a private regulatory submission. It is distinct from, and less detailed than, the Annex XI technical documentation.

Open-source exception: Obligations (1) and (2) above don’t apply to open-source GPAI models with publicly available parameters. That exemption disappears for any open-source model that also qualifies as systemic risk — which is why models like Llama 4 still have exposure if they cross the 10²⁵ FLOPs threshold.


Article 55: Systemic Risk — The Higher Tier

Providers whose models cross the 10²⁵ FLOPs threshold — or are designated by the AI Office — face four additional obligations under Article 55, on top of the baseline Article 53 requirements.

  • Model evaluations: standardized and adversarial testing to identify systemic risks
  • Risk assessment and mitigation: assess and mitigate possible systemic risks at EU level
  • Incident reporting: track and report serious incidents and corrective measures to the AI Office without undue delay
  • Cybersecurity protection: adequate security for the model itself and its physical infrastructure

Under the incident reporting obligation, “serious incidents” are events that constitute or may constitute a criminal offense, cause harm to persons or property, or disrupt critical infrastructure. Reporting goes to the AI Office, not national authorities.

The models currently presumed to meet the systemic risk threshold include the most capable frontier models from OpenAI, Google DeepMind, Anthropic, and Meta. Systemic risk tier providers must run adversarial testing programs; the GPAI Code of Practice’s Safety and Security chapter provides the framework for documenting them.


The GPAI Code of Practice: Your Compliance Path

On July 10, 2025, the European AI Office published the final GPAI Code of Practice; the European Commission and the AI Board confirmed its adequacy on August 1, 2025. Adherence is voluntary — but compliance with the CoP creates a presumption of compliance with the underlying AI Act obligations it covers.

The CoP has three chapters:

  • Transparency: Annex XI technical documentation and Annex XII instructions for use
  • Copyright: copyright policy documentation and TDM opt-out compliance
  • Safety and Security: systemic risk model evaluation, adversarial testing, incident reporting

Major AI providers — Microsoft, Google, Amazon, OpenAI, and Anthropic — signed the CoP in August 2025. Meta declined, a decision the AI Office has specifically noted as it builds its oversight program. Non-signatories can still demonstrate compliance through the underlying regulatory requirements, but they forgo the CoP’s presumption of compliance and face higher scrutiny in any AI Office review.


What Downstream Deployers Actually Need

A company building an AI application on top of a GPAI model — say, a financial institution deploying an LLM for credit assessment or customer service — is a downstream AI system provider under the Act. They are not subject to Article 53’s provider obligations for the underlying GPAI model (that’s the model vendor’s obligation). But they are subject to their own AI system-level risk assessment requirements.

To complete that assessment, they need the upstream provider’s documentation. This creates a documentation chain:

GPAI Model Provider → [Annex XI/XII documentation] → Downstream System Provider → [system-level risk assessment] → Deployer/Operator
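The downstream end of this chain is, in practice, a vendor-by-vendor checklist. A minimal sketch of such an internal record follows — the field names and schema are this sketch's own invention; the Act does not prescribe any data model:

```python
from dataclasses import dataclass, field

@dataclass
class GPAIVendorRecord:
    """Illustrative internal record of upstream GPAI documentation.

    Hypothetical schema for tracking the documentation chain; field names
    are this sketch's own, not terms defined by the EU AI Act.
    """
    vendor: str
    model: str
    annex_xii_instructions_received: bool = False   # Art. 53(1)(b)
    annex_xi_summary_received: bool = False         # Art. 53(1)(a)
    training_data_summary_published: bool = False   # Art. 53(1)(d)
    systemic_risk: bool = False                     # Art. 51 / Art. 55
    gaps: list[str] = field(default_factory=list)

    def documentation_complete(self) -> bool:
        """True when every upstream document in the chain is in hand."""
        checks = {
            "Annex XII instructions for use": self.annex_xii_instructions_received,
            "Annex XI technical documentation summary": self.annex_xi_summary_received,
            "public training data summary": self.training_data_summary_published,
        }
        self.gaps = [name for name, ok in checks.items() if not ok]
        return not self.gaps

record = GPAIVendorRecord(vendor="ExampleAI", model="example-large",
                          annex_xii_instructions_received=True)
print(record.documentation_complete())  # False — two documents still outstanding
print(record.gaps)
```

The point of the structure is that "complete" is computed, not asserted: the same record that feeds your system-level risk assessment also surfaces the gap list an examiner would ask about.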

If you’re deploying GPT-5, Gemini, or Claude via API to build an EU-facing product, the compliance question isn’t “do I have GPAI obligations?” — it’s “have I received and documented the upstream provider’s technical documentation and used it in my own system-level risk assessment?”

For financial institutions, this maps directly to model risk governance expectations. Vendor documentation gaps are model risk documentation gaps. The AI governance checklist your regulators actually test has specific expectations around third-party model documentation, and examiners will ask to see it.


How This Intersects with US AI Governance Frameworks

US companies navigating both the NIST AI RMF and EU AI Act obligations often ask which framework to lead with. The short answer: they’re largely compatible, and the documentation chain the EU AI Act requires maps onto the NIST GOVERN and MAP functions.

The NIST AI RMF vs EU AI Act compliance mapping breaks this out in detail. For GPAI model compliance specifically, the practical overlap is:

  • NIST GOVERN 1.2 (AI risk policies) → Article 53 copyright compliance policy
  • NIST MAP 1.1 (AI context establishment) → Annex XII instructions for use documentation
  • NIST MAP 2.1 (impact assessment) → Article 55 systemic risk assessment
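The bullets above amount to a lookup table. Encoded as one — with the caveat that this pairing is the article's own reading, not an official NIST or Commission crosswalk:

```python
# Illustrative NIST AI RMF -> EU AI Act mapping, mirroring the bullets
# above. The pairing is an interpretive assumption, not an official
# crosswalk published by either body.

NIST_TO_EU_AI_ACT = {
    "GOVERN 1.2": "Article 53(1)(c) copyright compliance policy",
    "MAP 1.1":    "Annex XII instructions for use documentation",
    "MAP 2.1":    "Article 55 systemic risk assessment",
}

def eu_counterpart(nist_id: str) -> str:
    """Return the mapped EU AI Act obligation, or a fallback string."""
    return NIST_TO_EU_AI_ACT.get(nist_id, "no direct GPAI counterpart mapped")

print(eu_counterpart("MAP 1.1"))  # Annex XII instructions for use documentation
```

A table like this is how the "documentation infrastructure is closer than it looks" claim gets operationalized: each NIST artifact you already produce becomes evidence for a specific EU obligation.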

If you’re already running a NIST-aligned AI risk program, most of the documentation infrastructure for EU AI Act compliance is closer than it looks.


So What? The Compliance Checklist

For GPAI model providers (new model, placed on market after August 2, 2025):

  • Complete Annex XI technical documentation (Art. 53(1)(a)): at market placement
  • Prepare Annex XII instructions for use (Art. 53(1)(b)): at market placement
  • Implement TDM copyright opt-out compliance policy (Art. 53(1)(c)): at market placement
  • Publish AI Office training data summary (Art. 53(1)(d)): at market placement
  • Determine whether the 10²⁵ FLOPs threshold applies (Art. 51): ongoing; notify the AI Office within 2 weeks
  • If systemic risk: build an adversarial testing program (Art. 55(1)(a)): at market placement
  • If systemic risk: establish an incident reporting mechanism to the AI Office (Art. 55(1)(c)): at market placement
  • Consider signing the GPAI Code of Practice: ongoing

For downstream deployers building on GPAI models:

  • Request Annex XII instructions for use from each GPAI model vendor: foundation for your AI system risk assessment
  • Confirm vendor GPAI Code of Practice status: signals documentation quality; non-signatories face elevated scrutiny
  • Document whether the model is systemic risk: affects your own system-level obligations and exposure analysis
  • Incorporate vendor documentation into your AI model inventory: required for model risk governance under OCC/Fed AI guidance
  • Review vendor contracts for EU AI Act documentation provisions: sets clear expectations before enforcement actions begin

Enforcement powers enter full application August 2, 2026. The AI Office’s first priorities will be providers — but documentation gaps surface when providers respond to oversight requests and ask downstream firms what they’ve done with the documentation. Getting the chain right now costs far less than explaining gaps under a formal inquiry.


Sources: EU AI Act Article 53 — GPAI Provider Obligations · EU AI Act Article 55 — Systemic Risk Obligations · GPAI Code of Practice Final Version · EC Guidelines for GPAI Model Providers · Latham & Watkins: GPAI Obligations in Force

Frequently Asked Questions

Which AI models qualify as GPAI models under the EU AI Act?
A general-purpose AI (GPAI) model qualifies if it displays significant generality and can competently perform a wide range of distinct tasks — including generating text, images, audio, or video — and was trained using large-scale data with self-supervised learning at scale. The definition focuses on functional generality, not a single compute threshold. A separate higher bar — 10²⁵ FLOPs of training compute — creates a presumption of systemic risk. Practically, GPT-5, Gemini 3, Claude, Llama 4, and similar frontier models qualify as GPAI models. Open-weight models are also within scope if they meet systemic risk criteria, which is why Meta faces elevated scrutiny despite declining to sign the GPAI Code of Practice.
What are the four Article 53 obligations for GPAI model providers?
All GPAI model providers must: (1) prepare and maintain technical documentation per Annex XI, covering model architecture, training methodology, energy consumption, and performance benchmarks; (2) provide downstream AI system providers with Annex XII documentation sufficient to understand the model's capabilities, limitations, and appropriate use cases; (3) establish and implement a policy to comply with EU copyright law — specifically honoring text and data mining opt-outs under Article 4(3) of Directive 2019/790; and (4) publish a public summary of training content using the AI Office's mandatory template (published July 24, 2025). Open-source model providers are exempt from obligations (1) and (2) unless the model also qualifies as systemic risk.
What is the systemic risk threshold and what additional obligations does it trigger?
A GPAI model is presumed to present systemic risk when cumulative training compute exceeds 10²⁵ FLOPs (Article 51(1)(a)). The AI Office can also designate models below this threshold based on market reach, deployment scope, or other risk factors. Providers of systemic-risk models face four additional Article 55 obligations: (1) conduct model evaluations including adversarial testing to identify systemic risks; (2) assess and mitigate possible systemic risks at Union level; (3) track, document, and report serious incidents and corrective measures to the AI Office without undue delay; and (4) ensure adequate cybersecurity protection for the model and its physical infrastructure. Notification to the AI Office is required within two weeks of reaching or foreseeing the 10²⁵ FLOPs threshold.
Does the EU AI Act apply to US companies building or deploying AI models?
Yes. The EU AI Act applies to providers who place AI systems or GPAI models on the EU market regardless of where the company is incorporated. A US company making its AI model accessible to EU users via API or direct product is within scope. US companies that build on third-party GPAI models to create EU-facing applications are downstream providers under the Act — they must obtain technical documentation from their upstream GPAI model provider and incorporate it into their own system-level risk assessment. There is no EU office required for the Act to apply, mirroring GDPR's extraterritorial scope.
What are the fines for non-compliant GPAI providers?
Under Article 101, which enters full application on August 2, 2026, the AI Office can impose fines of up to 3% of total worldwide annual turnover for the preceding financial year, or €15 million — whichever is higher — for violations of GPAI model obligations. Providing incorrect, incomplete, or misleading information to the AI Office can trigger fines of up to 1% of annual turnover. For violations related to high-risk AI systems (separate from GPAI obligations), the ceiling rises to 7% of annual turnover. The AI Office is the sole enforcement authority for GPAI model obligations — national authorities handle AI systems that aren't GPAI models.
What documentation should downstream deployers request from their GPAI model providers?
Downstream AI system providers building on GPAI models should request: (1) Annex XII instructions for use specifying intended use cases, appropriate deployment contexts, and known failure modes; (2) a summary of Annex XI technical documentation covering the model's capabilities, limitations, and performance benchmarks; (3) confirmation of the provider's copyright compliance policy and whether a training data summary has been published per the AI Office template; and (4) whether the model is classified as systemic risk and, if so, what adversarial testing has been conducted. This documentation forms the foundation for your own AI system-level risk assessment and is an examiner expectation under AI governance frameworks.
Rebecca Leung

Rebecca Leung has 8+ years of risk and compliance experience across first and second line roles at commercial banks, asset managers, and fintechs. Former management consultant advising financial institutions on risk strategy. Founder of RiskTemplates.
