
EU AI Act Article 5 Prohibited AI Systems: The Compliance Checklist Financial Institutions Can't Ignore

May 12, 2026 · Rebecca Leung

Your legal team almost certainly briefed you on the EU AI Act’s high-risk AI requirements coming August 2026. They may have mentioned the GPAI provider obligations that kicked in last August. What’s discussed less often: the prohibitions have been fully in force since February 2025, and the enforcement machinery — including fines up to €35 million — became operational in August 2025.

If your institution hasn’t conducted an Article 5 audit, you’re already more than a year behind on the requirements that carry the AI Act’s steepest penalties.

TL;DR

  • EU AI Act Article 5 prohibitions took effect February 2, 2025; the enforcement regime with fines up to €35M or 7% of global turnover launched August 2, 2025
  • Credit scoring and risk underwriting are explicitly carved out of the social scoring prohibition — but certain behavioral analytics and employee monitoring tools are not
  • Contact center emotion monitoring, biometric categorization inferring protected characteristics, and cross-context social scoring are directly in scope
  • Financial institutions operating in or providing AI to the EU should audit their model inventory against Article 5 now — national market surveillance authorities are operational in several member states

What Article 5 Actually Prohibits

The EU AI Act’s Article 5 identifies specific AI practices so fundamentally incompatible with EU values that they cannot be permitted in any form, for any purpose. Unlike the high-risk tier — which imposes documentation, transparency, and oversight requirements — these practices are banned outright.

The prohibited categories cover eight distinct areas:

| Prohibited Practice | What It Bans |
| --- | --- |
| Harmful manipulation | AI using subliminal techniques or exploiting psychological vulnerabilities to distort behavior against a person’s interests |
| Exploitation of vulnerabilities | AI targeting people based on age, disability, or economic hardship to manipulate behavior harmfully |
| Social scoring by authorities | Evaluating individuals on social behavior to determine their treatment in unrelated contexts |
| Individual criminal profiling | AI predicting a person’s risk of committing a crime based solely on profiling or personality traits, unrelated to actual behavior |
| Untargeted facial image scraping | Building or expanding facial recognition databases by scraping images from the internet or CCTV without specific authorization |
| Emotion recognition in workplaces | AI inferring emotional states of employees or students in professional or educational settings |
| Biometric categorization for sensitive traits | Using biometric data to infer race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation |
| Real-time remote biometric ID for law enforcement | Live biometric surveillance in publicly accessible spaces by law enforcement (with narrow, specific exceptions) |

The European Commission published non-binding guidelines on prohibited AI practices on February 4, 2025 — two days after the prohibitions took effect — to help organizations interpret scope and intent. The guidelines include concrete examples and are the primary interpretive authority pending formal enforcement guidance.

The Financial Services Interpretation: What’s Permitted, What Isn’t

Credit Scoring: Explicitly Permitted

The most important clarification for financial services practitioners: lawful credit scoring is not prohibited.

The European Commission’s guidelines address this directly. Credit scoring, risk scoring, and underwriting are “essential aspects of the services of financial and insurance businesses” and are not per se prohibited under Article 5(1)(c), provided:

  1. The scoring is based on relevant financial data — payment history, debt ratios, account behavior
  2. Any detrimental treatment is justified and proportionate to actual financial behavior
  3. The system doesn’t import social behavior from unrelated contexts to make its determinations

A traditional credit model that scores applicants on payment history, debt-to-income ratios, and account conduct is not social scoring. A model that incorporates social media activity, friend networks, neighborhood associations, or lifestyle inferences to determine creditworthiness operates in significantly more dangerous territory — not necessarily prohibited, but requiring careful legal analysis to establish the relevant-data nexus.
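
To make the relevant-data nexus concrete, here is a minimal sketch of how a model validation team might bucket a credit model's feature list against the three criteria above. The feature taxonomies are illustrative assumptions, not regulatory categories; anything flagged goes to legal review, not automated rejection.

```python
# Illustrative sketch: screening credit model features for Article 5(1)(c) exposure.
# The taxonomies below are hypothetical examples, not regulatory definitions.

RELEVANT_FINANCIAL = {"payment_history", "debt_to_income", "account_conduct",
                      "credit_utilization", "delinquency_count"}

UNRELATED_SOCIAL_CONTEXT = {"social_media_activity", "friend_network",
                            "neighborhood_association", "lifestyle_inference"}

def screen_features(model_features: set[str]) -> dict[str, set[str]]:
    """Bucket a model's features for Article 5 legal review."""
    return {
        "relevant_financial": model_features & RELEVANT_FINANCIAL,
        "needs_legal_review": model_features & UNRELATED_SOCIAL_CONTEXT,
        "unclassified": model_features - RELEVANT_FINANCIAL - UNRELATED_SOCIAL_CONTEXT,
    }

buckets = screen_features({"payment_history", "debt_to_income", "social_media_activity"})
if buckets["needs_legal_review"]:
    print(f"Escalate to legal: {buckets['needs_legal_review']}")
```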

The same logic applies to insurance risk underwriting: actuarially sound risk assessment using relevant health or life data is not the target. The prohibition aims at systems that evaluate people’s social behavior in one domain to disadvantage them in another, unrelated context.

Fraud Detection and AML

AML and fraud detection systems are generally classified as high-risk under Annex III rather than prohibited under Article 5. But the line requires care. A fraud model scoring transaction behavior against known fraud patterns is not social scoring. A behavioral analytics system that builds reputational scores across customer touchpoints and applies them to credit or service eligibility decisions — especially using data from contexts unrelated to the financial relationship — is closer to the prohibited zone.

The test the Commission applies: does the system evaluate people based on behavior in unrelated contexts to produce detrimental treatment in the financial context? If yes, that’s social scoring regardless of the label you put on it.
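
That two-prong test can be encoded as a triage rule in a model inventory. A hedged sketch, assuming a hypothetical SystemProfile record; a True result means "escalate for legal review," not a legal conclusion.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Hypothetical inventory record for an AI scoring system."""
    uses_unrelated_context_data: bool     # behavior from outside the financial relationship
    produces_detrimental_treatment: bool  # e.g., denial, worse pricing, reduced service

def social_scoring_flag(profile: SystemProfile) -> bool:
    """Flag only when both prongs of the Commission's test are present."""
    return profile.uses_unrelated_context_data and profile.produces_detrimental_treatment
```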

Where Financial Institutions Have Real Exposure

Most banks and fintechs aren’t running obviously prohibited systems. The risk is more often in AI tools that crossed a line without anyone recognizing it — particularly in three areas.

Contact Center Employee Monitoring

This is the clearest financial services exposure point. The Commission’s guidelines give as a primary example of prohibited emotion recognition: “a call center using webcams and voice recognition to track employees’ emotions such as anger.”

Banks and fintechs have deployed exactly these tools — vendor-provided software that analyzes agent voice tone, facial expressions via video, or behavioral patterns to measure engagement, compliance adherence, or customer service quality. If any of these tools infer emotional state from biometric data (voice patterns, facial features), they are prohibited in EU workplace contexts, regardless of the stated operational justification.

The prohibition applies based on where the employee is located, not where the vendor is headquartered. If your EU contact center agents are monitored by an AI emotional analytics tool, that’s in scope.
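
For tool reviews, the scoping question reduces to three conditions drawn from the discussion above: biometric inputs, emotional-state inference, and EU workplace deployment. A minimal sketch with hypothetical field names, intended as a screening aid rather than a legal determination:

```python
from dataclasses import dataclass

@dataclass
class MonitoringTool:
    """Hypothetical review record for a workforce monitoring tool."""
    name: str
    biometric_inputs: bool        # voice patterns, facial features, etc.
    infers_emotional_state: bool  # engagement, sentiment, anger detection
    monitors_eu_workplace: bool   # based on where the employees are located

def article_5_1_f_exposure(tool: MonitoringTool) -> bool:
    """All three conditions must hold to flag the tool for Article 5(1)(f) review."""
    return (tool.biometric_inputs
            and tool.infers_emotional_state
            and tool.monitors_eu_workplace)

tool = MonitoringTool("voice-sentiment-qa", biometric_inputs=True,
                      infers_emotional_state=True, monitors_eu_workplace=True)
print(article_5_1_f_exposure(tool))  # True -> escalate before continued use
```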

Behavioral Biometrics in Workforce Management

Keystroke dynamics, typing rhythm analysis, and activity monitoring tools that go beyond productivity tracking into emotional or psychological state inference are a growing compliance risk. Some workforce management platforms have incorporated these capabilities as “employee wellbeing” features. If the inference engine is making determinations about employee emotional state from biometric behavioral signals, the Article 5(1)(f) prohibition applies.

Customer Segmentation Using Biometric Inference

Any AI system that processes customer images, voice data, or other biometric inputs to infer protected characteristics — race, religion, sexual orientation — violates Article 5(1)(g), even if the stated purpose is neutral segmentation, product personalization, or market research. The prohibition covers the act of inference, not just the use of the inferred data.

This is especially relevant for firms using computer vision in branch environments, voice analytics platforms, or any third-party tool that “enriches” customer profiles with demographic inference layers.

Third-Party Vendor Exposure

Many financial institutions are downstream deployers of AI tools built by vendors. Under the EU AI Act, a deployer (the Act's term for the organization using an AI system under its authority) that uses a system for a prohibited practice is directly liable — the prohibition is on use, not just development. Your vendor due diligence questionnaires need to explicitly ask whether any system component incorporates prohibited practices, and your vendor contracts should include representations on Article 5 compliance.
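
One way to operationalize this: capture the Article 5 attestations as structured questionnaire fields rather than free text, so gaps are mechanically visible. A sketch with hypothetical field names; the article references map to the prohibitions discussed above.

```python
from dataclasses import dataclass, fields

@dataclass
class Article5VendorAttestation:
    """Hypothetical vendor questionnaire section covering prohibited practices."""
    vendor: str
    no_emotion_recognition_workplace: bool = False      # Art. 5(1)(f)
    no_biometric_categorization_sensitive: bool = False # Art. 5(1)(g)
    no_social_scoring_components: bool = False          # Art. 5(1)(c)
    no_untargeted_facial_scraping: bool = False         # Art. 5(1)(e)
    contract_includes_art5_representation: bool = False

def attestation_gaps(att: Article5VendorAttestation) -> list[str]:
    """Return the attestation fields the vendor has not yet confirmed."""
    return [f.name for f in fields(att)
            if isinstance(getattr(att, f.name), bool) and not getattr(att, f.name)]

att = Article5VendorAttestation(vendor="acme-voice-analytics",
                                no_social_scoring_components=True)
print(attestation_gaps(att))  # unconfirmed items to chase before approval
```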

Enforcement: Who’s Watching and What They Can Do

Each EU Member State is required to designate national market surveillance authorities (MSAs) responsible for AI Act enforcement. As of early 2026, designation is proceeding:

  • Germany has designated the Bundesnetzagentur (Federal Network Agency) as its primary MSA for prohibited practices
  • France published a draft MSA designation framework in September 2025 but had not finalized authority designation as of early 2026
  • Several member states met the August 2025 designation deadline; others continue implementation

No public enforcement actions for prohibited AI practices have been announced as of mid-2026. This reflects MSA capacity ramp-up rather than regulatory disinterest. When the first enforcement cases emerge, they will be high-visibility and the penalties will be severe.

The penalty structure under Article 99: up to €35 million or 7% of total worldwide annual turnover, whichever is higher — the maximum penalty tier in the entire Act, exceeding the fines for high-risk AI non-compliance (€15M / 3%) or GPAI model violations.
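
The "whichever is higher" mechanic means the turnover prong dominates for any large institution. A quick sketch of the arithmetic, using an assumed turnover figure purely for illustration:

```python
def max_article_99_fine(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the Article 5 penalty tier: the greater of the
    EUR 35M fixed cap and 7% of total worldwide annual turnover."""
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

# For a hypothetical bank with EUR 50B in annual turnover, the turnover
# prong dominates: 7% of 50B is EUR 3.5B, far above the EUR 35M floor.
print(f"EUR {max_article_99_fine(50e9):,.0f}")  # EUR 3,500,000,000
```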

Article 5 Compliance Checklist

Work through this against every AI system your institution operates or procures where EU individuals are involved (a sketch for recording the answers as structured audit records follows the checklist):

AI Inventory and Scoping

  • Does your model inventory identify which systems process biometric data (voice, facial, physical characteristics)?
  • Have you flagged all systems that evaluate employee behavior, emotional state, or wellbeing?
  • Have you reviewed customer analytics tools for biometric categorization components?
  • Have you asked third-party AI vendors to confirm their tools don’t incorporate Article 5-prohibited practices?

Specific Practice Review

  • Is any AI used to monitor employee emotional states, engagement levels, or sentiment via biometric signals?
  • Does any behavioral analytics tool assess employees on personality traits unrelated to defined job functions?
  • Does any customer-facing AI infer protected characteristics from biometric data?
  • Have any vendors offered a “social scoring,” behavioral reputation, or lifestyle-inference scoring feature?
  • Do your fraud/AML models use behavioral data from contexts unrelated to the financial relationship in ways that produce detrimental treatment?

Governance and Documentation

  • Do your AI governance policies explicitly reference Article 5 and the prohibited practices?
  • Is there a designated owner for EU AI Act compliance monitoring?
  • Have you reviewed AI vendor contracts for Article 5 compliance representations?
  • Have you documented your credit scoring systems to confirm alignment with the social scoring carve-out criteria?

Procurement Controls

  • Is Article 5 compliance verification required before procurement approval for new AI tools?
  • Does your AI vendor risk assessment questionnaire explicitly address prohibited practices?
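
As referenced above, here is a minimal sketch of carrying the checklist as per-system audit records, so that answers are dated and reviewable when an MSA inquiry arrives. Question keys and field names are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative subset of the checklist; keys are shorthand for the questions above.
SCOPING_QUESTIONS = [
    "processes_biometric_data",
    "monitors_employee_emotional_state",
    "infers_protected_characteristics",
    "vendor_art5_attestation_on_file",
]

@dataclass
class Article5Audit:
    """Hypothetical per-system audit record for the checklist above."""
    system_name: str
    reviewed_on: date
    answers: dict[str, bool] = field(default_factory=dict)

    def unanswered(self) -> list[str]:
        """Checklist questions still awaiting a documented answer."""
        return [q for q in SCOPING_QUESTIONS if q not in self.answers]

audit = Article5Audit("contact-center-voice-analytics", date(2026, 5, 12))
audit.answers["processes_biometric_data"] = True
print(audit.unanswered())  # remaining questions before the review is complete
```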

So What?

Article 5 prohibitions aren’t an August 2026 problem — they’ve been law since February 2025 and enforceable since August 2025. The enforcement quiet period reflects implementation ramp-up, not regulatory retreat.

For most financial institutions, the highest near-term exposure is not credit scoring — it’s employee monitoring tools deployed in contact centers, workforce analytics platforms acquired through HR without compliance review, and third-party AI integrations that were never subjected to an Article 5 prohibited-practices assessment.

Run the inventory. Pull the vendor contracts. Test the contact center monitoring tools against the emotion recognition prohibition. Document what you found and the decisions you made. That paper trail is your compliance evidence when the MSA inquiry arrives.


For a complete pre-deployment AI risk assessment aligned to the EU AI Act, SR 11-7, and NIST AI RMF requirements, see the AI Risk Assessment Template. For the full high-risk AI documentation obligations due August 2026, see EU AI Act High-Risk AI in Financial Services. Teams managing both the EU AI Act and NIST AI RMF simultaneously will find the crosswalk between the two frameworks useful for reducing duplicative documentation effort.


Frequently Asked Questions

Is credit scoring prohibited under EU AI Act Article 5?
No. The European Commission's February 2025 guidelines explicitly clarify that lawful credit scoring and risk underwriting are not prohibited under Article 5(1)(c), provided they use relevant financial data and don't produce unjustified or disproportionate treatment based on unrelated social context data.
When did EU AI Act Article 5 prohibitions take effect?
The prohibited practices became applicable on February 2, 2025. The penalty and enforcement regime took effect on August 2, 2025, when national market surveillance authorities gained their formal enforcement powers.
What are the fines for violating EU AI Act Article 5?
Up to €35 million or 7% of total worldwide annual turnover, whichever is higher — the largest penalty tier in the entire EU AI Act, higher than the fines for high-risk AI non-compliance.
Does the emotion recognition prohibition apply to customer-facing AI tools?
No. The European Commission's guidelines clarify that the workplace emotion recognition ban covers employees in professional or educational settings. AI tools that detect customer emotions — such as chatbots analyzing sentiment based on message content or voice tone — fall outside the Article 5(1)(f) prohibition.
What counts as prohibited biometric categorization under Article 5?
AI systems that use biometric data to infer race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. This includes systems that categorize individuals based on facial features, voice patterns, or physical characteristics to deduce these protected attributes.
Are AML and fraud detection AI systems affected by Article 5?
Not directly. AML and fraud detection systems are generally classified as high-risk under Annex III, not prohibited under Article 5. However, behavioral analytics tools that evaluate individuals on traits unrelated to the financial service being provided may trigger the social scoring prohibition and warrant careful legal review.
Rebecca Leung

Rebecca Leung has 8+ years of risk and compliance experience across first and second line roles at commercial banks, asset managers, and fintechs. Former management consultant advising financial institutions on risk strategy. Founder of RiskTemplates.

