
GPAI Enforcement Begins August 2: What SMEs Using ChatGPT, Claude, and Gemini Need to Know


7 min read

The General-Purpose AI (GPAI) obligations under the EU AI Act took legal effect on August 2, 2025. For the past twelve months, providers of GPAI models — OpenAI, Anthropic, Google, Mistral, Meta, and others — have been operating under a grace period during which the AI Office's enforcement powers were dormant.

That grace period ends on August 2, 2026. From that date, the AI Office can issue requests for information, demand model access, order recalls, and impose fines of up to €15 million or 3% of global annual turnover, whichever is higher, for violations of the GPAI provisions.

This affects providers directly. But it also reaches every SME that deploys a GPAI tool — and that is almost every SME in Europe today. If your business uses ChatGPT, Claude, Gemini, Mistral, or any model fine-tuned on top of one of them, the August 2 enforcement transition is your problem too.

Provider vs deployer — which one are you?

The AI Act draws a sharp line between two roles:

  • A provider develops and places an AI system on the market under its own name. For GPAI, this means the labs training the foundation models.
  • A deployer uses an AI system under its authority, in the course of professional activity. This is where almost every SME sits.

If your SaaS product calls the OpenAI API to generate user-facing text, you are an OpenAI deployer. If you embed Claude into your customer support workflow, you are an Anthropic deployer. The model provider's GPAI obligations under Articles 51-56 stay with them. Your obligations are the deployer obligations under Articles 4, 13, 14, 26, and 50 — and the Article 50 transparency duties in particular hit hard at deployers using generative AI.

The clearest practical rule: if you didn't train the model, you are a deployer, not a provider. Even if you fine-tuned it. Even if you wrap it in your own product.

(For a deeper provider-side view, see our earlier piece on how the EU will fine GPAI model providers. And if you are still unsure of your role, run the provider-or-deployer checker.)

The Code of Practice landscape

Alongside the GPAI obligations, the European Commission published a voluntary General-Purpose AI Code of Practice on July 10, 2025. It has three chapters:

  • Transparency — for all GPAI providers, covering model documentation, training data summaries, and downstream-deployer information.
  • Copyright — for all GPAI providers, addressing how training-data copyright is managed and respected.
  • Safety and Security — only for the small set of providers whose models meet the systemic-risk threshold under Article 55.

The Code is voluntary, but signing it provides a safe harbour: signatories are presumed to comply with the corresponding Article 53 (and, where applicable, Article 55) obligations. Non-signatories must demonstrate compliance through their own documentation, and they will face heightened scrutiny from the AI Office and from national authorities.

For SMEs as deployers, the practical implication is vendor selection: a GPAI vendor that has signed the Code of Practice is materially less likely to be hit by enforcement action that could disrupt your service. Confirming Code signatory status should now be part of standard AI-vendor due diligence.

What deployers actually owe

Even though the headline GPAI rules apply to providers, deployers carry their own statutory load. The four duties that bite hardest in the run-up to August 2 are:

  • Article 50 transparency. If your AI system interacts with people, you must disclose that fact. If it generates synthetic audio, image, video, or text content, that content must be labelled as AI-generated. If it generates deepfakes, those must be marked. If it uses emotion recognition or biometric categorisation, affected persons must be informed. The deadline is August 2, 2026, and the Digital Omnibus does not delay it.

  • November 2, 2026 watermarking technical compliance. The AI Act requires that synthetic content be marked in a machine-readable format so it can be detected as AI-generated downstream. Watermarking technology must be in place by November 2, 2026, three months after the headline transparency deadline.

  • Article 4 AI literacy. In force since February 2, 2025. Every organisation putting AI systems into use must ensure its staff have appropriate AI literacy — the level depends on role, technical knowledge, and the context of use. See our Article 4 guide for what counts as "appropriate" and how to demonstrate it.

  • Use-case documentation. Especially relevant where a GPAI deployment crosses into Annex III territory — recruitment screening, employee evaluation, credit scoring, grading in education, access to essential public services. Deployers need to record purpose, data flows, affected persons, and human-oversight arrangements.
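The two content-facing duties above (the Article 50 disclosure and the machine-readable marking) can be sketched in a few lines. This is a minimal illustration only: the label text, the metadata keys, and the provenance format are our own assumptions, not a format prescribed by the Act or by any vendor.

```python
import json

# Illustrative disclosure text; the Act requires plain-language disclosure
# but does not prescribe wording. Adapt to your product's voice.
AI_DISCLOSURE = "This content was generated by an AI system."

def label_ai_output(text: str, model_name: str) -> dict:
    """Bundle model output with a human-readable disclosure and a
    machine-readable provenance marker detectable downstream."""
    return {
        "content": text,
        "disclosure": AI_DISCLOSURE,   # shown to the user in plain language
        "provenance": {                # machine-readable (hypothetical schema)
            "ai_generated": True,
            "model": model_name,
        },
    }

record = label_ai_output("Your refund has been approved.", model_name="example-model")
print(json.dumps(record, indent=2))
```

In practice the provenance marker would live in content metadata (for example, an image or video container) rather than a JSON sidecar; the expected formats should become clearer once the AI Office publishes its transparency guidelines.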

Penalty tiers

The fines are structured in three bands. Here is what each maps to:

  • Prohibited practices (Article 5: social scoring, real-time biometric ID, manipulative AI, etc.): up to €35M or 7% of global annual turnover
  • High-risk system non-compliance and GPAI obligations (Articles 16-29 and 51-56): up to €15M or 3% of global annual turnover
  • Providing incorrect, incomplete, or misleading information to authorities: up to €7.5M or 1% of global annual turnover

For the largest providers, 3% of global turnover dwarfs the €15 million figure. For SMEs, the percentage cap scales the maximum down, but even a modest fine can be existential for a small business. The Act provides for proportionality in fining decisions, but proportionality is not exemption.
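The arithmetic of the middle tier is easy to check. A short sketch (not legal advice): the default branch takes the higher of the two amounts, as the tier states, and the `sme` branch takes the lower amount instead, a simplifying assumption standing in for the proportionality point above.

```python
def max_gpai_fine(global_turnover_eur: int, sme: bool = False) -> int:
    """Maximum fine under the €15M / 3% GPAI tier, in whole euros.

    Default: whichever amount is higher. With sme=True: whichever is
    lower (a simplified model of SME proportionality, not the statute).
    """
    fixed = 15_000_000
    pct = 3 * global_turnover_eur // 100   # 3% of turnover, exact integer maths
    return min(fixed, pct) if sme else max(fixed, pct)

# A provider with €1bn turnover: 3% is €30M, which exceeds €15M.
print(max_gpai_fine(1_000_000_000))         # 30000000
# A €10M-turnover SME: 3% is €300k, far below €15M.
print(max_gpai_fine(10_000_000, sme=True))  # 300000
```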

What this means for SMEs

  1. Vendor due diligence. Before August 2, confirm whether your GPAI vendors are signatories to the Code of Practice. The official Signatory Taskforce list is the source of truth. If your vendor has not signed, ask why — it is a legitimate procurement question, and a non-signatory should be able to point you at equivalent documentation.

  2. Add Article 50 transparency labels everywhere relevant. Audit every customer-facing AI feature. If a user is interacting with an AI system, say so in plain language. If the system generates content, label that content. If it produces deepfakes, mark them as artificially generated. This is a technical and copy-editing job — but it is the single most likely thing a regulator looks for in a casual inspection.

  3. Document every use case. For every AI system you deploy, record: purpose, inputs, outputs, affected users, who reviewed it, who can override it, what data it was trained or fine-tuned on, and which Annex III category (if any) it touches. This becomes your defence file if questioned. Ten minutes of writing per system is cheap insurance.

  4. Roll out AI literacy training. Article 4 has been in force for fifteen months. If your team has not had structured AI literacy training, that is a finding waiting to happen. Free starting point: the ClearAct AI literacy checklist walks through what regulators expect and what counts as evidence of compliance.

  5. Watch for the AI Office Q2 2026 transparency guidelines. The Commission has confirmed that practical guidelines on Article 50 transparency will be published in the second quarter of 2026 — almost certainly before the August 2 deadline. These will move from principle to procedure (what exactly counts as a sufficient disclosure, how labels must be presented, what watermarking formats are accepted). When they drop, they will become the operational standard immediately.
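The documentation step in item 3 can be as lightweight as one structured record per system. A minimal sketch; the field names, example values, and schema are our own hypothetical suggestions, not a format mandated by the Act.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AIUseCaseRecord:
    # Illustrative schema for a per-system "defence file" entry.
    system: str
    purpose: str
    inputs: list
    outputs: list
    affected_users: str
    reviewed_by: str
    can_override: str
    training_data_notes: str
    annex_iii_category: str = "none"   # e.g. "recruitment screening" if applicable

record = AIUseCaseRecord(
    system="Support chatbot (GPAI model via vendor API)",
    purpose="Draft first-line replies to customer tickets",
    inputs=["ticket text"],
    outputs=["suggested reply"],
    affected_users="customers contacting support",
    reviewed_by="Head of Support",
    can_override="any support agent (replies are drafts, never auto-sent)",
    training_data_notes="vendor foundation model; no fine-tuning on our data",
)
print(json.dumps(asdict(record), indent=2))
```

Exporting each record as dated JSON (or even a dated row in a spreadsheet) gives you the "dated artefacts" the conclusion below calls for.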

The bottom line

August 2, 2026 is the day the AI Office stops being a guidance shop and becomes a fining authority for GPAI. Providers face the immediate exposure. But the deployer obligations attached to the same date — Article 50 transparency, Article 4 literacy (already in force), watermarking three months later — are what most SMEs will actually be measured against.

The path to compliance is not heroic. It is vendor due diligence, transparency labelling, use-case documentation, and literacy training — done deliberately, with dated artefacts, before the deadline rather than after. Treat August 2 as the day a real regulatory environment begins, not the day a paper one continues.


Want a free 2-minute assessment of where your AI systems sit under the GPAI rules? Take the ClearAct risk quiz. Not sure of your role? Run the provider-or-deployer checker. Article 4 readiness is the first thing a regulator will ask about — start with the AI literacy checklist.
