
EU AI Act Regulatory Sandboxes: Only 8 of 27 Member States Are Ready


7 min read

On April 1, 2026, the European Parliament's research service (EPRS) published its analysis of AI regulatory sandbox implementation across the EU. The finding is stark: only 8 of 27 EU member states are on track to establish the operational sandboxes required by the AI Act before the August 2, 2026 deadline.

Under Article 57 of the EU AI Act, every member state must establish at least one AI regulatory sandbox by August 2, 2026. These sandboxes provide a controlled environment where AI providers and deployers can develop, test, and validate AI systems under regulatory supervision — before full market deployment and compliance obligations kick in.

The readiness gap matters because sandboxes are not optional infrastructure. They are a core mechanism of the AI Act's enforcement and compliance architecture, designed specifically to help organisations — particularly SMEs — navigate the regulation's requirements without guessing.

What the numbers show

The EPRS report categorises member states into three tiers of readiness:

  • Operational or near-operational (8): Spain, France, Netherlands, Germany, and others
  • Actively implementing (5): in various stages of legislation and authority designation
  • No communicated plans (14): no publicly disclosed sandbox timelines or frameworks

Spain: the frontrunner

Spain is the most advanced member state. Its sandbox — operated by AESIA (Agencia Española de Supervisión de la Inteligencia Artificial) — has been operational since 2025 and has already hosted 12 high-risk AI systems in its first cohort.

In December 2025, AESIA published practical guidelines based on its sandbox experience, including lessons learned on conformity assessment processes, documentation requirements, and risk management implementation for high-risk AI systems. These guidelines are the closest thing to real-world compliance guidance currently available.

Germany: sandbox with Bundesnetzagentur

Germany designated the Bundesnetzagentur (Federal Network Agency) as its national AI authority responsible for sandboxes. The German sandbox framework focuses on innovation support alongside compliance testing, with particular attention to enabling SMEs and startups to access the sandbox without excessive administrative burden.

The laggards

Fourteen member states have not publicly communicated their sandbox plans. This group includes several countries with significant AI ecosystems and a substantial number of organisations that will need compliance support.

The delay creates a geographic lottery: an SME in Spain has access to a functioning sandbox with published compliance guidance. An SME in a member state that has not yet started has no equivalent resource.

Why sandboxes matter for SMEs

Regulatory sandboxes are not just a nice-to-have for large corporations. They are one of the most practically valuable tools in the AI Act for smaller organisations. Here is why:

1. Test compliance before market deployment

The sandbox allows you to bring your AI system to the national authority, explain what it does, and work through the compliance requirements together. You get feedback on whether your risk classification is correct, whether your documentation meets the standard, and whether your technical measures are sufficient — before you face enforcement.

For SMEs without large compliance teams or specialist legal counsel, this is the closest thing to a compliance dress rehearsal.

2. Reduced fees and simplified procedures

Under the AI Act, SMEs and startups benefit from reduced sandbox participation fees. Article 57(7) explicitly requires member states to facilitate access for small and medium-sized enterprises, including startups, and to keep administrative requirements proportionate.

3. Priority access to guidance

Sandbox participants receive direct engagement with the national authority. Questions that would otherwise require expensive legal opinions get answered through supervised testing. This is particularly valuable in the early years when case law and enforcement precedents do not yet exist.

4. Cross-border recognition

AI systems that have been tested in one member state's sandbox benefit from a presumption of compliance when deployed across the EU. Article 57(9) establishes that results from sandbox testing should be recognised by other member states' authorities. For SMEs selling AI products across multiple EU markets, this reduces the need to repeat compliance exercises in every country.

What the Commission was supposed to do — and has not

The AI Act gave the European Commission responsibility for adopting implementing acts to support sandbox implementation. These acts were meant to provide common rules, templates, and standards to ensure sandboxes across different member states operate consistently.

As of April 2026, the Commission has not yet adopted any implementing acts for sandboxes. This leaves member states to design their sandboxes independently, leading to divergent approaches, different application procedures, and inconsistent standards — exactly the fragmentation the implementing acts were supposed to prevent.

The EPRS report explicitly flags this as a contributing factor to the readiness gap: without Commission guidance, many member states have been waiting for clarity that has not come.

The Omnibus complication

The Digital Omnibus proposal — which is currently in trilogue after the Parliament's March 26 vote — includes a provision to extend the sandbox deadline from August 2026 to December 2027, aligning it with the proposed new high-risk compliance deadline.

If the Omnibus is adopted, the 14 member states with no current plans would get an additional 16 months. But this also means sandboxes would not be available to help organisations prepare during the period when preparation is most needed.

For the 8 member states that are already operational or close to it, the Omnibus extension does not change much — their sandboxes will be available regardless.

What SMEs should do now

If your member state has an operational sandbox

Apply. The sandbox is the single best resource available for testing your compliance approach. Contact your national AI authority and inquire about the application process.

Currently operational or near-operational sandboxes:

  • Spain: AESIA — aesia.gob.es
  • France: Contact the national AI authority (AIA function within CNIL/Arcep)
  • Germany: Bundesnetzagentur — bundesnetzagentur.de
  • Netherlands: Check with the Autoriteit Persoonsgegevens / designated AI authority

If your member state is not ready

You have three options:

  1. Use Spain's published guidance: AESIA's December 2025 sandbox guidelines are the most detailed practical compliance guidance available. They are publicly accessible and applicable to high-risk AI systems regardless of your country.
  2. Monitor your national authority: Check your government's AI policy page for updates on sandbox establishment. In many cases, the authority has been designated even if the sandbox is not yet operational.
  3. Use the cross-border provision: Article 57 allows organisations to apply to sandboxes in other member states. If your home country is not ready, consider applying to a sandbox in a country that is.

Regardless of sandbox availability

Do not wait for a sandbox to begin compliance work. Sandboxes are helpful for validation, but the foundational work — AI system inventory, risk classification, documentation, governance — can and should be done independently.
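For organisations starting that foundational work, even a simple structured inventory helps. The sketch below shows one way to model an AI system inventory with risk classification in Python; the field names, tiers, and example entries are illustrative assumptions for this article, not an official AI Act schema.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    # Simplified tiers loosely following the AI Act's risk-based approach;
    # the exact labels here are this sketch's assumption, not legal categories.
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk"   # e.g. transparency obligations
    MINIMAL_RISK = "minimal-risk"

@dataclass
class AISystemRecord:
    """One entry in an internal AI system inventory."""
    name: str
    purpose: str
    risk_tier: RiskTier
    compliance_owner: str           # a named person, not a team alias
    documentation_complete: bool = False

def high_risk_systems(inventory: list[AISystemRecord]) -> list[AISystemRecord]:
    """Return the records that would carry full high-risk obligations."""
    return [r for r in inventory if r.risk_tier is RiskTier.HIGH_RISK]

# Hypothetical example entries:
inventory = [
    AISystemRecord("CV screening model", "Rank job applicants",
                   RiskTier.HIGH_RISK, "Jane Doe"),
    AISystemRecord("Support chatbot", "Answer customer questions",
                   RiskTier.LIMITED_RISK, "John Roe"),
]
```

Keeping the inventory in a machine-readable form like this makes it easy to answer the first questions a sandbox or national authority will ask: which systems you run, what they do, how you classified them, and who owns compliance for each.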

Use ClearAct's free tools to get started.

The bigger picture

The sandbox readiness gap is a symptom of a broader implementation challenge. The EU AI Act is ambitious regulation meeting uneven institutional capacity. The Commission has not delivered its implementing acts. Two-thirds of member states have not built their sandboxes. Harmonised standards are late. The high-risk deadline is being pushed back precisely because the supporting infrastructure is not in place.

For SMEs, this is frustrating — but it is also an opportunity. The organisations that use the available tools (including the sandboxes that do exist), build their compliance infrastructure now, and document their efforts will be in the strongest position when enforcement begins. The playing field is uneven, but the direction is clear.

The regulation is real. The deadlines — whether August 2026 for transparency or December 2027 for high-risk — are approaching. And the organisations that prepared while others waited will have a decisive advantage.


Ready to start your EU AI Act compliance journey? Take the free risk assessment quiz to find your risk tier, then explore the compliance checklists for your specific obligations. For comprehensive support, try ClearAct Pro — AI-generated compliance reports, template filling, and AI system inventory management.
