
EU AI Act in Crisis: Missed Deadlines, Missing Standards — What SMEs Should Do Now


6 min read

The EU AI Act implementation is hitting serious turbulence. In February 2026, three separate failures converged: the Commission missed a legally mandated deadline, standardization bodies fell behind, and political debate over the Digital Omnibus proposal intensified. For SMEs trying to plan their compliance work, this creates genuine confusion.

This article breaks down exactly what happened, what it means, and what you should do about it.

What happened on February 2, 2026

The AI Act required the European Commission to publish guidelines on Article 6 by February 2, 2026. Article 6 is the provision that determines whether your AI system counts as high-risk — the single most consequential classification in the entire regulation.

These guidelines were supposed to include:

  • Practical examples of AI systems that are and are not high-risk
  • Clarification on post-market monitoring obligations
  • A comprehensive list of use cases to help organizations self-assess

The Commission missed the deadline.

Deputy Director-General Renate Nikolay acknowledged the delay, stating: "These standards are not ready, and that's why we allowed ourselves in the AI omnibus to give us a bit more time."

The Commission indicated it is still integrating months of stakeholder feedback and plans to publish a draft by the end of February, with final adoption potentially in March or April 2026.

AI Act negotiator Laura Caroli was more direct in her criticism: "There was one thing that was fixed from the very beginning... It's just not there, and it is supposed to give clarity."

The standardization crisis

The problem runs deeper than one missed deadline. The two European standardization bodies responsible for creating technical standards for the AI Act — CEN and CENELEC — missed their own 2025 fall deadline.

These technical standards are critical because they define how organizations demonstrate compliance. Without them, even willing companies cannot fully prepare because the specific measurements, documentation formats, and testing procedures remain undefined.

Current estimates put standard completion at the end of 2026 — meaning the standards needed for August 2026 compliance will not exist when that deadline arrives.

The Digital Omnibus: delay or not?

In November 2025, the Commission proposed the Digital Omnibus on AI, which would push high-risk obligations for Annex III systems from August 2026 to December 2027. Key points:

  • The delay would only take effect once the Commission confirms that relevant standards and guidance are ready
  • A backstop mechanism ensures rules become enforceable by December 2027 regardless
  • Providers of generative AI systems released before August 2026 would get an extra six months (until February 2027) for transparency obligations
  • The proposal also simplifies some high-risk AI definitions

The political split

The European Data Protection Board (EDPB) and European Data Protection Supervisor (EDPS) issued a joint opinion in February 2026, warning that the omnibus proposal "may weaken the protection of individuals" and viewing the high-risk delay critically.

Meanwhile, industry groups including the Computer and Communications Industry Association Europe (representing Alphabet, Meta, and Apple) are lobbying for a relaxed approach. Civil society organizations want full enforcement on schedule.

The European Parliament has appointed rapporteurs, and the co-rapporteurs have published a draft report. Their position: reinstate AI literacy obligations while still delaying high-risk rules to December 2027.

The omnibus is not law yet. It must pass through the full EU legislative process. Organizations that pause compliance based on a proposal that may be amended or rejected are taking a significant gamble.

What is NOT delayed

Even in the most optimistic omnibus scenario, these obligations remain on the August 2, 2026 timeline:

Obligation | Status
Transparency rules (Article 50) | On schedule for August 2026
AI literacy requirements (Article 4) | Already in force since February 2025
Prohibited AI practices (Article 5) | Already in force since February 2025
GPAI model obligations (Articles 51-56) | Already in force since August 2025
Codes of practice for GPAI | First draft published, being finalized

If you deploy AI that interacts with people, generates synthetic content, or uses emotion recognition or biometric categorization, your transparency obligations are not affected by the omnibus debate.

What SMEs should do right now

1. Do not wait for political clarity

The worst response is paralysis. Whether high-risk deadlines land in August 2026 or December 2027, the preparation work is identical. Starting now means spreading cost and effort over months instead of scrambling at the last minute.

2. Complete your AI system inventory

You cannot assess risk if you do not know what AI systems your organization uses. Map every AI tool — from your CRM's lead scoring to your support chatbot to your HR screening tool.
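A simple structured inventory makes the later risk assessment much easier. The sketch below shows one possible record format in Python; the fields, example systems, and the `suspected_high_risk` flag are illustrative assumptions, not an official template.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in a hypothetical AI system inventory (illustrative schema)."""
    name: str                   # e.g. "Support chatbot"
    vendor: str                 # who supplies the system
    purpose: str                # what it is used for
    interacts_with_people: bool # relevant for Article 50 transparency duties
    suspected_high_risk: bool   # preliminary flag, pending Article 6 guidance

inventory = [
    AISystemRecord("CRM lead scoring", "ExampleCRM Inc.",
                   "Rank sales leads", False, False),
    AISystemRecord("Support chatbot", "ExampleBot GmbH",
                   "Answer customer questions", True, False),
    AISystemRecord("HR screening tool", "ExampleHR Ltd.",
                   "Filter job applications", False, True),
]

# Quick triage: which systems need a closer high-risk assessment?
needs_review = [r.name for r in inventory if r.suspected_high_risk]
```

Even a spreadsheet with these columns is enough to start; the point is that every system gets a row before any classification work begins.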

3. Assess your role: provider or deployer

Under Article 3, your obligations depend on whether you are a provider (building/training/modifying AI) or a deployer (using AI under your authority). Most SMEs are deployers, but role-shift rules in Article 25 can make you a provider if you substantially modify a system.
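As a rough self-check, the Article 25 role-shift triggers can be sketched as a simple predicate. This is a simplified reading for illustration only — the legal test is more nuanced, and the parameter names are my own shorthand, not terms from the Act.

```python
def becomes_provider(puts_own_name_on_system: bool,
                     substantially_modifies_system: bool,
                     changes_purpose_to_high_risk: bool) -> bool:
    """Simplified sketch of Article 25 role-shift triggers.

    A deployer may be treated as a provider if any trigger applies.
    Illustrative only; consult the Act's text and legal advice.
    """
    return (puts_own_name_on_system
            or substantially_modifies_system
            or changes_purpose_to_high_risk)

# A deployer who merely uses a vendor's tool as intended stays a deployer:
unchanged_use = becomes_provider(False, False, False)

# Substantially modifying the system can shift the role to provider:
modified_use = becomes_provider(False, True, False)
```

If any of these triggers might apply to you, budget for provider obligations rather than assuming the lighter deployer set.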

4. Lock down transparency obligations now

These are confirmed for August 2026 with no omnibus delay:

  • Disclose when people interact with AI systems
  • Label AI-generated content appropriately
  • Mark deepfakes and synthetic media
  • Maintain documentation of your transparency measures
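The mapping from system characteristics to these duties can be expressed as a small lookup, useful when triaging an inventory. This is a hedged sketch of the Article 50 duties listed above — the function name and flags are my own, and the Act contains exemptions and details not modeled here.

```python
def transparency_duties(interacts_with_people: bool,
                        generates_synthetic_content: bool,
                        produces_deepfakes: bool) -> list[str]:
    """Map system characteristics to simplified Article 50 duties.

    Illustrative sketch only; the regulation's actual text governs.
    """
    duties = []
    if interacts_with_people:
        duties.append("disclose AI interaction")
    if generates_synthetic_content:
        duties.append("label AI-generated content")
    if produces_deepfakes:
        duties.append("mark deepfakes and synthetic media")
    if duties:  # documentation duty accompanies any applicable measure
        duties.append("document transparency measures")
    return duties

chatbot_duties = transparency_duties(True, False, False)
```

Running each inventory row through a check like this gives a first-pass list of which transparency measures to document before August 2026.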

5. Start risk management documentation

Even if the formal conformity assessment timeline shifts, having a documented risk management process benefits your organization regardless of regulatory timing. It reduces liability exposure and builds the evidence base you will eventually need.

6. Monitor the omnibus process

The legislative process will produce a final text in the coming months. Track it through the EU AI Act implementation timeline and adjust your roadmap accordingly — but always plan for the earlier deadline.

The bigger picture

The implementation struggles are real, but they do not change the fundamental trajectory. The EU AI Act is the world's first comprehensive AI regulation. It will be enforced. The question is not whether, but when.

Organizations that use this uncertainty as an excuse to delay preparation will find themselves in exactly the position the regulation was designed to prevent: deploying AI systems without adequate risk management, transparency, or oversight.

The smart play is straightforward: prepare as if August 2026 is real, and treat any delay as bonus time.

Not sure where your organization stands? Take the free ClearAct risk assessment — it takes 2 minutes and gives you a concrete starting point.
