Which AI Systems Are Banned Under the EU AI Act?
The AI Act does not ban all risky AI; it prohibits a closed list of specific practices deemed to carry unacceptable risk. Understanding these boundaries is essential for both product teams and deployers.
Core prohibited-practice themes (Article 5 context)
- Manipulative or exploitative AI uses that materially distort behavior and cause likely harm.
- Social scoring and similar practices that lead to unjustified or disproportionate detrimental treatment.
- Certain biometric and surveillance practices in sensitive contexts, with narrow exceptions.
- Other explicitly restricted use classes where rights and dignity risks are considered too severe.
Why SMEs should care even if they are not building surveillance products
Risk can emerge indirectly:
- behavior scoring used for workforce control,
- opaque profiling that affects access decisions,
- emotion-inference features in workplace or education contexts,
- synthetic media use without adequate disclosure safeguards.
Product and procurement red-flag checklist
- Does the feature infer psychological traits in rights-sensitive decisions?
- Could its output lead to adverse treatment without due process?
- Are there contexts where users cannot reasonably opt out?
- Is there clear documentation of safeguards and legal basis?
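As a sketch, the checklist above can be encoded as structured screening data so that every feature proposal is evaluated against the same questions. The question wording, class names, and example feature below are hypothetical illustrations, not language from the Act.

```python
from dataclasses import dataclass, field

# Hypothetical red-flag questions mirroring the checklist above.
RED_FLAGS = [
    "infers psychological traits in rights-sensitive decisions",
    "output could lead to adverse treatment without due process",
    "users cannot reasonably opt out in some contexts",
    "safeguards and legal basis are not clearly documented",
]

@dataclass
class FeatureScreen:
    """Records yes/no answers to each red-flag question for one feature."""
    feature: str
    answers: dict[str, bool] = field(default_factory=dict)

    def flagged(self) -> list[str]:
        # Any question answered True is a red flag requiring legal review.
        return [q for q, hit in self.answers.items() if hit]

# Example: a feature that trips the first red-flag question.
screen = FeatureScreen(
    feature="attention-scoring widget",
    answers={q: False for q in RED_FLAGS} | {RED_FLAGS[0]: True},
)
print(screen.flagged())
```

Keeping the questions as shared data, rather than ad hoc review notes, makes screening results comparable across features and auditable later.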
Governance control: "no-go gate"
Introduce a mandatory no-go review before release for any feature touching biometric, behavioral, or rights-sensitive decision pathways. If the legal rationale is unclear, block deployment until it is resolved.
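A minimal sketch of such a gate, assuming a hypothetical release pipeline in which each feature declares the pathways it touches and whether its legal rationale has been documented (the category names and function are illustrative, not prescribed by the Act):

```python
from enum import Enum

class GateDecision(Enum):
    RELEASE = "release"
    BLOCK = "block"

# Hypothetical sensitive-pathway categories triggering the no-go review.
SENSITIVE_PATHWAYS = {"biometric", "behavioral", "rights_sensitive"}

def no_go_gate(pathways: set[str], legal_rationale_documented: bool) -> GateDecision:
    """Block release when a feature touches a sensitive pathway
    and its legal rationale is not yet documented."""
    touches_sensitive = bool(pathways & SENSITIVE_PATHWAYS)
    if touches_sensitive and not legal_rationale_documented:
        return GateDecision.BLOCK
    return GateDecision.RELEASE

# A biometric feature with no documented legal rationale is blocked.
print(no_go_gate({"biometric"}, legal_rationale_documented=False))
```

The point of the design is that the gate fails closed: absence of a documented legal rationale defaults to BLOCK, so deployment requires an explicit sign-off rather than the absence of an objection.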
Final takeaway
Treat prohibited-practice screening as a product quality gate, not a last-stage legal review. Early gating prevents expensive rework and severe exposure.