
EU Council Votes to Delay AI Act High-Risk Deadline to December 2027


10 min read

On March 13, 2026, the Council of the European Union formally adopted its negotiating mandate on the Digital Omnibus package — the regulatory simplification proposal that amends, among other things, the EU AI Act (Regulation (EU) 2024/1689). This is the most significant development in AI Act compliance planning since the regulation entered into force.

The headline: high-risk AI deadlines are moving backward. Stand-alone high-risk AI systems would shift from August 2, 2026 to December 2, 2027. Product-embedded high-risk AI systems would move from August 2, 2027 to August 2, 2028.

But there is a critical qualifier that every compliance officer needs to understand: the Council position is not final law. The European Parliament must still agree, and trilogue negotiations between the Council, Parliament, and Commission are required before any of this becomes binding.

Here is exactly what happened, what it means, and what you should do about it.

What the Council position changes for high-risk AI

The original AI Act timeline set August 2, 2026 as the date when most high-risk AI obligations under Annex III would begin to apply. This included risk management systems (Article 9), data governance (Article 10), technical documentation (Article 11), record-keeping (Article 12), transparency (Article 13), human oversight (Article 14), accuracy and robustness (Article 15), and the corresponding deployer obligations.

The Council's Digital Omnibus position restructures this into two tiers:

Stand-alone high-risk AI systems (Annex III)

New proposed deadline: December 2, 2027 (was August 2, 2026)

These are AI systems classified as high-risk under Annex III based on their intended purpose — for example, AI used in recruitment screening, credit scoring, law enforcement risk assessment, or access to essential services. If you operate an AI system that falls under one of the eight high-risk use-case areas in Annex III, this is the deadline shift that affects you.

Product-embedded high-risk AI systems (Annex I)

New proposed deadline: August 2, 2028 (was August 2, 2027)

These are AI systems that are safety components of products already covered by EU harmonized legislation listed in Annex I — medical devices, machinery, toys, radio equipment, civil aviation, vehicles, and similar regulated products. The extra time reflects the complexity of aligning AI Act requirements with existing product-safety conformity assessment procedures.

What does not change

The Council position does not touch obligations that are already in force:

  • Prohibited AI practices (Article 5): in force since February 2, 2025
  • AI literacy requirements (Article 4): in force since February 2, 2025
  • GPAI model obligations (Articles 51-56): in force since August 2, 2025

Transparency obligations under Article 50 — disclosure of AI interaction, labeling of synthetic content, deepfake marking — remain on the August 2, 2026 timeline and are not affected by the Omnibus proposal.

New prohibited practice: AI-generated nudification

The Council position adds a new entry to the list of prohibited AI practices under Article 5: the use of AI systems to generate non-consensual intimate imagery, commonly known as nudification or deepfake pornography.

This addition was accelerated by the Grok deepfake scandal in December 2025, when xAI's Grok chatbot was widely used to generate non-consensual intimate images. The incident drew sharp public and regulatory backlash across Europe and created the political momentum to include an explicit ban.

Under the Council's proposed text, AI systems that generate or manipulate images to create realistic depictions of a real person's intimate body parts or likeness in sexually explicit situations, without that person's consent, would be classified as a prohibited practice. The ban would sit alongside existing Article 5 prohibitions on manipulative AI, social scoring, and unauthorized biometric identification.

For SMEs: if you operate any image generation or manipulation tools, audit them now. Ensure your usage policies explicitly prohibit nudification use cases, and implement technical safeguards where feasible. This prohibition will likely survive trilogue intact — it has broad political support across all institutions.

SME-specific changes in the Council position

The Council position includes three changes that directly benefit smaller organizations:

1. Extended SME exemptions to small mid-caps

Under the original AI Act text, certain reduced obligations and fee exemptions applied to micro, small, and medium-sized enterprises as defined by Commission Recommendation 2003/361/EC (up to 250 employees, up to EUR 50 million turnover). The Council position extends these benefits to small mid-cap companies — enterprises with up to 500 employees.

This is significant. Many growing tech companies and AI-adopting businesses exceed the 250-employee SME threshold but still lack the compliance infrastructure of large corporations. The extension means more organizations can benefit from simplified conformity assessment procedures, reduced registration fees, and proportionate documentation requirements.

2. AI regulatory sandboxes extended to December 2027

The original AI Act required each Member State to establish at least one AI regulatory sandbox by August 2, 2026 (Article 57). The Council position pushes this to December 2, 2027.

Sandboxes are important for SMEs because they allow testing of AI systems under regulatory supervision before full market deployment. The extended timeline means sandboxes and the high-risk compliance deadline would align — giving organizations that use sandboxes a coherent path from testing to compliance.

3. Expanded access to sensitive data for bias detection

The Council position broadens the provisions under Article 10(5) that allow processing of sensitive personal data (such as racial or ethnic origin, political opinions, health data, and sexual orientation) for the specific purpose of bias detection, correction, and monitoring in high-risk AI systems.

For SMEs, this is practical relief. One of the most challenging aspects of AI Act compliance is demonstrating that your AI systems do not produce discriminatory outcomes — but detecting discrimination often requires analyzing exactly the kind of sensitive data that GDPR and the AI Act otherwise restrict. The expanded provisions give clearer legal basis for this necessary work.

Reinforced AI Office powers

The Council position strengthens the role and enforcement authority of the EU AI Office, which was established under Article 64 of the AI Act to oversee compliance at the Union level.

Key reinforcements include:

  • Expanded investigative powers for cross-border AI Act enforcement
  • Greater coordination authority over national market surveillance bodies
  • Enhanced role in monitoring GPAI model compliance

For SMEs, this signals that enforcement infrastructure is being built in parallel with the compliance timeline — even as deadlines shift, the enforcement architecture is not being weakened.

This is NOT final law

This point cannot be overstated. The Council adopting its negotiating mandate is a necessary step, but it is only one of three institutional positions required.

What still needs to happen

  1. European Parliament position — The Parliament's committees are currently drafting their own position on the Digital Omnibus. The IMCO and LIBE committees are jointly responsible, and rapporteurs have been appointed. Parliament may agree with the Council timeline, push for different dates, or add conditions.

  2. Trilogue negotiations — Once Parliament adopts its position, formal trilogue negotiations between Council, Parliament, and Commission begin. This is where the final text is hammered out. Trilogues on AI policy can be contentious — recall that the original AI Act trilogue took months.

  3. Formal adoption and publication — After trilogue agreement, both Council and Parliament must formally vote to adopt the final text. It then gets published in the Official Journal of the European Union.

Realistic timeline for final adoption

Political signals suggest all three institutions want to conclude quickly. The most optimistic scenario puts final adoption in mid-2026, with the amending regulation entering into force shortly after publication.

But "most optimistic" is not the same as "guaranteed." If Parliament pushes back on timeline specifics, or if political attention shifts (elections, geopolitical events, other legislative priorities), trilogue could extend into late 2026 or beyond.

Until the amending regulation is published in the Official Journal, the original AI Act deadlines remain legally binding. Organizations that halt compliance work based on the Council position alone are taking a legal risk.

What SMEs should do now

1. Do not stop compliance work

This is the most important message. Whether high-risk deadlines land in August 2026 or December 2027, the compliance requirements are identical. The work is the same — only the timeline changes. Organizations that pause now will face a compressed sprint later.

2. Use the potential extra time strategically

If the December 2027 deadline is confirmed through trilogue, you gain approximately 16 additional months. Use that time to:

  • Build your AI system inventory properly — do not rush a superficial list. Document purpose, data flows, affected persons, decision impact, and current safeguards for every AI system.
  • Conduct thorough risk assessments — high-risk classification under Article 6 requires careful analysis. Use the extra time to get it right.
  • Implement risk management systems incrementally — Article 9 requires a risk management system that operates throughout the AI system lifecycle. Start now and iterate.
  • Train your teams — AI literacy (Article 4) is already in force. Use the breathing room to build genuine competence, not checkbox training.
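To make the inventory step concrete: a structured record per AI system is far more useful than a spreadsheet row. The sketch below shows one possible shape for such a record; the class and field names are illustrative assumptions, not AI Act terminology, and should be adapted to your own governance framework.

```python
from dataclasses import dataclass, field

# Illustrative inventory record for one AI system. Field names are
# assumptions chosen to mirror the checklist above (purpose, data flows,
# affected persons, decision impact, safeguards), not official terms.
@dataclass
class AISystemRecord:
    name: str
    intended_purpose: str            # what the system is used for
    data_flows: list[str]            # input sources and downstream consumers
    affected_persons: list[str]      # e.g. applicants, employees, customers
    decision_impact: str             # how outputs influence decisions about people
    safeguards: list[str] = field(default_factory=list)  # current controls
    annex_iii_candidate: bool = False  # flag for later high-risk screening

# Hypothetical example entry for a recruitment screening tool
cv_screener = AISystemRecord(
    name="cv-screener",
    intended_purpose="Rank incoming CVs for recruiter review",
    data_flows=["ATS export", "recruiter dashboard"],
    affected_persons=["job applicants"],
    decision_impact="Influences which candidates are invited to interview",
    safeguards=["human review of all rejections"],
    annex_iii_candidate=True,  # recruitment is an Annex III use-case area
)
print(cv_screener.annex_iii_candidate)  # True
```

A record like this also feeds directly into the Article 6 classification analysis and the Article 11 technical documentation later, which is why rushing a superficial list now costs time twice.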

3. Lock down what is already required

Regardless of Omnibus outcomes:

  • Article 5 screening is mandatory now. Review every AI use case against the prohibited practices list — including the likely addition of nudification.
  • Article 4 literacy is mandatory now. Ensure relevant staff understand the AI systems they use and their limitations.
  • Article 50 transparency applies from August 2026. If your AI interacts with people, generates content, or uses biometric processing, prepare your disclosure and labeling mechanisms.
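For the Article 5 screening, it helps to keep a written record of each decision rather than an informal review. The following is a minimal sketch of such a screening log; the category strings paraphrase the prohibited practices and are not authoritative legal text, so actual screening still needs legal review.

```python
# Illustrative, non-authoritative paraphrase of Article 5 categories,
# including the Council's proposed nudification ban. This list is an
# assumption for demonstration, not a substitute for the legal text.
PROHIBITED_CATEGORIES = [
    "subliminal or manipulative techniques causing harm",
    "exploitation of vulnerabilities",
    "social scoring",
    "untargeted facial image scraping",
    "emotion recognition in workplace or education",
    "biometric categorisation of sensitive attributes",
    "real-time remote biometric identification in public (law enforcement)",
    "non-consensual intimate imagery (proposed Omnibus addition)",
]

def screen_use_case(description: str, flagged: list[str]) -> dict:
    """Record a screening decision for one AI use case."""
    return {
        "use_case": description,
        "flags": flagged,
        "status": "blocked" if flagged else "cleared",
    }

print(screen_use_case("Chatbot answering customer FAQs", [])["status"])  # cleared
```

The point is not the code itself but the habit: every use case gets an explicit, dated "cleared" or "blocked" entry, which is exactly the kind of evidence discussed under point 5 below.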

4. Monitor trilogue progress

Track the legislative procedure through official EU sources:

  • Council press releases: consilium.europa.eu
  • European Parliament legislative observatory: europarl.europa.eu/oeil
  • EUR-Lex procedure file for the Digital Omnibus amending regulation

Adjust your internal roadmap as the trilogue progresses, but always plan against the earlier deadline until the later one is confirmed in law.

5. Document your compliance posture now

Even with shifting deadlines, having documented evidence of your compliance efforts protects you. If enforcement actions begin before you are fully compliant, demonstrating that you have been actively and systematically working toward compliance is vastly better than showing nothing.

What happens next: expected timeline

  • March 13, 2026: Council adopts negotiating mandate (completed)
  • Q2 2026: European Parliament expected to adopt its position
  • Q2-Q3 2026: Trilogue negotiations between Council, Parliament, and Commission
  • Mid-to-late 2026: Final adoption and publication in the Official Journal (optimistic scenario)
  • August 2, 2026: Original high-risk deadline (still legally binding until amended)
  • December 2, 2027: Proposed new deadline for stand-alone high-risk AI (Annex III)
  • August 2, 2028: Proposed new deadline for product-embedded high-risk AI (Annex I)

The bottom line

The Council vote on March 13, 2026 is a strong political signal that high-risk AI deadlines will move. The direction is clear, and institutional momentum favors adoption. But political signals are not law.

For SMEs, the rational response is straightforward: continue your compliance work at a sustainable pace, use any confirmed delay as breathing room to do it properly, and do not treat the Council position as permission to stop.

The organizations that will be best positioned — whether deadlines move or not — are those building compliance infrastructure now. If December 2027 is confirmed, you get more time to do it well. If something derails the Omnibus in trilogue and August 2026 holds, you will not be caught unprepared.

Either way, the work is the same. Only the pressure changes.


Not sure where your organization stands on EU AI Act compliance? Take the free ClearAct risk assessment quiz — it takes 2 minutes and gives you a concrete starting point. You can also explore your potential fine exposure or check whether you are a provider or deployer under the regulation.
