
EU Parliament Approves AI Act Delay to December 2027 and Bans Nudifier Apps


8 min read

On March 26, 2026, the European Parliament voted overwhelmingly — 569 in favour, 45 against, 23 abstentions — to approve its position on the Digital Omnibus amendments to the EU AI Act. The vote marks the Parliament's formal entry into trilogue negotiations with the Council and Commission.

Two headline outcomes: high-risk AI deadlines are moving to December 2027, and AI-powered nudification tools are being banned as a prohibited practice under Article 5.

This is the most significant legislative step since the Council adopted its negotiating mandate on March 13. With both institutions now aligned on the direction of travel, the question is no longer whether deadlines will move, but exactly when and how.

What the Parliament approved

High-risk deadline extensions

The Parliament's position aligns with the Council on pushing back high-risk AI obligations:

AI System Type                           Original Deadline   Parliament Position
Stand-alone high-risk AI (Annex III)     August 2, 2026      December 2, 2027
Product-embedded high-risk AI (Annex I)  August 2, 2027      August 2, 2028

The rationale is straightforward: harmonised standards under the AI Act are not ready. The European standardisation organisations (CEN, CENELEC) have not delivered the technical standards that providers need to demonstrate conformity. Without these standards, requiring full compliance creates legal uncertainty rather than legal clarity.

The Parliament explicitly tied the delay to standards readiness — not to a general desire to weaken the regulation.

What is NOT delayed

The Parliament did not touch obligations that are already in force or approaching their original deadlines:

  • Prohibited practices (Article 5): In force since February 2, 2025
  • AI literacy (Article 4): In force since February 2, 2025
  • GPAI model obligations (Articles 51-56): In force since August 2, 2025
  • Transparency obligations (Article 50): Still scheduled for August 2, 2026
  • Transparency labelling deadline: Set at November 2, 2026 for watermarking of AI-generated audio, image, video, and text

These dates remain legally binding regardless of the Omnibus outcome.
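For teams tracking these unchanged dates programmatically, the list above can be captured in a small lookup. A minimal sketch (the dates come from the list; the key names and structure are our own, not an official schema):

```python
from datetime import date

# Obligations whose deadlines are NOT moved by the Omnibus proposal
# (dates as listed above; key names are illustrative, not official)
UNCHANGED_DEADLINES = {
    "prohibited_practices_art5": date(2025, 2, 2),   # already in force
    "ai_literacy_art4": date(2025, 2, 2),            # already in force
    "gpai_obligations_art51_56": date(2025, 8, 2),   # already in force
    "transparency_art50": date(2026, 8, 2),
    "transparency_labelling": date(2026, 11, 2),     # watermarking deadline
}

def obligations_due_by(cutoff: date) -> list[str]:
    """Return obligations whose deadline falls on or before `cutoff`."""
    return sorted(k for k, d in UNCHANGED_DEADLINES.items() if d <= cutoff)

# Everything except watermarking is binding by September 2026:
print(obligations_due_by(date(2026, 9, 1)))
```

Even a five-line table like this beats rediscovering the dates in a crisis; the point is to keep the unmoved deadlines visible while the headline coverage focuses on the delayed ones.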

Ban on AI nudifier tools

The Parliament added a new prohibited practice under Article 5: a ban on AI systems designed to create or manipulate images that are sexually explicit or intimate and resemble an identifiable real person without that person's consent.

Key details of the Parliament's position:

  • Scope: AI systems that generate realistic depictions of a real person's intimate body parts or place a real person's likeness in sexually explicit situations
  • Exception: Systems with effective safety measures that prevent users from creating such content are not covered by the ban
  • Timeline: Parliament rapporteur Michael McNamara indicated the ban could apply almost immediately after publication — potentially as early as July 2026. The Council, however, is proposing a later date around February 2027
  • Penalty tier: As a prohibited practice under Article 5, violations carry the maximum penalty: up to €35 million or 7% of global annual turnover
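The "€35 million or 7%" ceiling is whichever figure is higher, which matters for larger firms. A quick sketch of that arithmetic (the function name is ours; the figures are from the Article 5 penalty tier described above):

```python
def max_article5_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound of an Article 5 fine: the greater of EUR 35M or
    7% of global annual turnover, per the penalty tier above."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# A company with EUR 1bn turnover faces up to EUR 70M, not EUR 35M:
print(max_article5_fine(1_000_000_000))  # 70000000.0
```

For any business with turnover above €500 million, the percentage-based ceiling is the binding one.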

The ban responds directly to the wave of AI nudification tools targeting women and children across Europe, including the high-profile Grok deepfake controversy in late 2025.

For SMEs using any image generation or manipulation AI: review your systems now. Ensure usage policies explicitly prohibit nudification, and implement technical safeguards. This ban has near-unanimous political support and will survive trilogue.

Where the Council and Parliament differ

While both institutions agree on the direction, there are points of divergence that trilogue will need to resolve:

Issue                    Council Position                                  Parliament Position
Nudifier ban start date  ~February 2027                                    As early as July 2026
SME exemption scope      Extended to small mid-caps (up to 500 employees)  To be negotiated
Sandbox deadline         Extended to December 2027                         To be negotiated
AI Office powers         Reinforced investigative authority                To be negotiated

These differences are significant but not fundamental. Both institutions want the same outcome — the negotiations will focus on specifics and timing.

What happens next: the trilogue

With both the Council (March 13) and Parliament (March 26) having adopted their positions, the legislative process moves to trilogue negotiations — the closed-door three-way talks between the Council, Parliament, and Commission where the final text is agreed.

Expected timeline

Date              Event
March 13, 2026    Council adopted negotiating mandate ✓
March 26, 2026    Parliament adopted its position ✓
April–May 2026    Trilogue negotiations begin
Mid-2026          Final text expected (optimistic)
Late 2026         Publication in Official Journal
December 2, 2027  Proposed new deadline for stand-alone high-risk AI
August 2, 2028    Proposed new deadline for product-embedded high-risk AI

Political signals are strongly in favour of rapid conclusion. Both institutions have the same goal — get clarity on the timeline before the original August 2026 deadline creates enforcement chaos. The Commission is also motivated, having been criticised for missing its own deadline on high-risk guidance.

Realistic best case: trilogue concludes by June or July 2026, with the amending regulation published shortly after. This would give legal certainty before the original August deadline.

Realistic worst case: trilogue extends into late 2026. In this scenario, the August 2, 2026 deadline would technically remain in force for high-risk systems, but enforcement would be practically impossible given the political consensus to delay. Regulators would likely exercise forbearance — but that is a political expectation, not a legal guarantee.

What this means for SMEs

1. The delay is now highly likely — but not yet law

With both the Council and Parliament aligned on December 2027 for stand-alone high-risk AI, the probability that the original August 2026 deadline will hold is near zero. However, until the amending regulation is published in the Official Journal, the original deadlines remain legally binding.

Do not treat the Parliament vote as permission to stop. Treat it as strong evidence that you have more time to do things properly.

2. Transparency obligations are NOT delayed

If your AI systems interact with people, generate content, or use biometric data, the August 2, 2026 transparency deadline under Article 50 is unchanged. This includes:

  • Disclosing that a person is interacting with an AI system
  • Labelling AI-generated or manipulated content (text, images, audio, video)
  • Marking deepfakes as artificially generated
  • Informing affected persons about emotion recognition or biometric categorisation

The November 2, 2026 deadline for watermarking compliance is also unchanged.
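One practical way to start on these labelling duties is to attach a machine-readable disclosure record to every piece of AI-generated content. A minimal illustrative sketch, and only a sketch: the field names below are our own, not prescribed by Article 50 or any standard:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AIContentDisclosure:
    """Illustrative disclosure record for AI-generated or manipulated
    content. Field names are our own, not mandated by the Act."""
    content_id: str
    media_type: str          # "text" | "image" | "audio" | "video"
    ai_generated: bool
    is_deepfake: bool        # deepfakes must be marked as such
    generator: str           # system or model that produced the content

disclosure = AIContentDisclosure(
    content_id="img-0042",
    media_type="image",
    ai_generated=True,
    is_deepfake=False,
    generator="internal-image-model",
)
print(json.dumps(asdict(disclosure)))
```

A record like this does not by itself satisfy Article 50 (the disclosure must actually reach the affected person, and watermarking has its own technical requirements), but it gives you an auditable trail to build the user-facing labels on.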

3. Use the extra time for fundamentals

If December 2027 is confirmed, you gain approximately 16 months of additional preparation time for high-risk obligations. Use it for:

  • AI system inventory: 83% of organisations still have no formal inventory. Build one now — document purpose, data flows, affected persons, and risk classification for every AI system.
  • Risk assessment: Determine which of your AI systems qualify as high-risk under Annex III. This analysis takes time and requires understanding your actual use cases, not just your vendor's marketing.
  • Risk management system: Article 9 requires a lifecycle risk management system. Start building it incrementally rather than rushing a paper exercise later.
  • Compliance ownership: Designate an internal owner for AI Act compliance. 74% of organisations still lack one.
  • AI literacy training: Article 4 is already in force. Use this period to build genuine team competence.
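The inventory recommended above does not need special tooling to get started. An illustrative sketch of one entry, with fields mirroring the bullet (purpose, data flows, affected persons, risk tier); this is our own shape, not an official template:

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"    # transparency obligations apply
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    """One entry in an AI system inventory (illustrative template):
    purpose, data flows, affected persons, and risk classification."""
    name: str
    purpose: str
    data_flows: list[str] = field(default_factory=list)
    affected_persons: list[str] = field(default_factory=list)
    risk_tier: RiskTier = RiskTier.MINIMAL

inventory = [
    AISystemRecord(
        name="cv-screening",
        purpose="Rank incoming job applications",
        data_flows=["applicant CVs", "HR database"],
        affected_persons=["job applicants"],
        risk_tier=RiskTier.HIGH,  # employment use cases sit in Annex III
    ),
]
print(sum(1 for r in inventory if r.risk_tier is RiskTier.HIGH))  # 1
```

A spreadsheet with the same columns works just as well; what matters is that every system, including vendor tools, has a row before you attempt the Annex III classification.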

4. Prepare for the nudifier ban now

If you use, develop, or distribute AI image generation or manipulation tools:

  • Audit every tool for nudification capability
  • Update acceptable use policies
  • Implement or verify technical safeguards
  • Document your compliance measures

The ban carries the highest penalty tier (€35M / 7% of turnover) and has overwhelming political support. Do not wait for the exact effective date.

The bottom line

The Parliament vote on March 26 was the strongest signal yet that the EU AI Act timeline for high-risk systems is shifting to December 2027. The direction is set. The institutional consensus is clear. Trilogue will sort the details.

For SMEs, the correct posture has not changed: keep building your compliance infrastructure, but now with more confidence that you have time to do it well. Focus immediately on transparency obligations (August 2026), AI literacy (already in force), and the nudifier ban (coming soon). Use the expected extra time for thorough, not superficial, high-risk compliance work.

The organisations that come out strongest will be those that used 2026 to build real compliance capability — not those that treated the delay as a holiday.


Not sure if the delay affects your AI systems? Take the free ClearAct risk assessment quiz — it takes 2 minutes and tells you exactly which risk tier you fall into. You can also check whether you are a provider or deployer under the regulation, or explore your potential fine exposure.

