The EU AI Act required every Member State to designate or establish its national competent authorities by August 2, 2025. Eight months later, the picture is fragmented in ways that directly affect how — and whether — SMEs can comply.
Some countries are model implementers: Germany has its market-surveillance regime in statute. Spain set up AESIA, the first dedicated national AI agency in Europe. Italy split AI oversight cleanly between AgID and the National Cybersecurity Agency. Others are visibly behind: France still has not designated its national competent authorities, more than eight months past the deadline. And some are technically compliant in a way that creates new problems for SMEs — Ireland has spread enforcement across fifteen different regulators.
This is the SME guide to who actually enforces the EU AI Act in the country where you operate, drawing on the April 2026 Cullen International snapshot of national implementation laws, regulatory bodies, and penalties.
The state of play, country by country
| Country | NCA designated? | Authority | Notes |
|---|---|---|---|
| Germany | ✅ Yes | BNetzA + BaFin | KI-MIG implementation law in force; financial-sector high-risk AI carved out to BaFin |
| France | ❌ No | None | DDADUE Bill provisions on NCAs withdrawn; no formal regulator to file with |
| Ireland | ⚠️ Fragmented | 15 sectoral regulators | Sector-by-sector model; cross-sector SMEs face multiple supervisors |
| Italy | ✅ Yes | AgID + National Cybersecurity Agency (ACN) | AgID handles innovation and conformity; ACN handles enforcement and sanctions |
| Spain | ✅ Yes | AESIA (Royal Decree 729/2023) | First EU dedicated AI agency; supported by CNMC and other sectoral regulators |
| Netherlands | ⚠️ Draft | AP + RDI (coordinating); 8 sectoral authorities | April 20, 2026 consultation opened; full law expected Q4 2026 |
| Poland | ⚠️ Adopted | KRiBSI (Commission for the Development and Security of AI) | March 31, 2026 Council of Ministers adoption; structurally inside Ministry of Digital Affairs |
The headline takeaway: fewer than half of the EU's seven largest economies have a fully operational, statutorily designated AI Act regulator as of April 2026. The August 2026 enforcement date for high-risk obligations is approaching. The institutional readiness is not.
Germany — the model implementation
Germany passed the KI-Marktüberwachungsgesetz- und Innovationsförderungsgesetz (KI-MIG) — the AI Market Surveillance and Innovation Promotion Act — to operationalise the AI Act domestically. The law designates two authorities:
- Bundesnetzagentur (BNetzA) — the Federal Network Agency — as the central market surveillance authority for the great majority of high-risk AI systems.
- Bundesanstalt für Finanzdienstleistungsaufsicht (BaFin) — the Federal Financial Supervisory Authority — for high-risk AI systems directly linked to regulated financial activities (credit scoring, insurance pricing, AML monitoring, etc.).
This dual structure means SMEs using AI in financial services know to engage with BaFin; everyone else has a clear single contact point in BNetzA. Documentation, registration, and incident reporting all flow through one of these two bodies.
The snap German federal election in February 2025 slowed implementation modestly, but the statute is now in force and BNetzA is hiring AI specialists. For SMEs in or selling into Germany, this is the cleanest regulatory environment in the EU today.
France — still no national authority
France presents the opposite picture. The DDADUE Bill of November 2025 initially included provisions to formalise the designation of national competent authorities. Those provisions were withdrawn before the bill was submitted to Parliament, leaving France without a statutory basis for its NCA structure.
Practically, this means:
- There is no single regulator an SME can register with in France today.
- Enforcement responsibility is implicit (CNIL on data protection grounds, sectoral regulators on sector-specific grounds), but not explicitly assigned under the AI Act.
- French SMEs face legal uncertainty about who would receive a serious-incident report under Article 73, who would issue corrective measures under Article 79, or who would impose fines under Article 99.
The August 2026 enforcement date does not pause for France. If a French SME is found to be non-compliant with high-risk AI obligations, the legal mechanism for a fine exists in EU law — but the practical mechanism for imposing that fine domestically does not. This is not a comfortable position for either side.
If you operate in France, the only defensible posture is to document your AI Act compliance as if all the regulators were in place, and be prepared to engage with whichever authority is ultimately designated.
Ireland — fragmented across 15 regulators
Ireland chose a structurally different path: rather than designate one or two central authorities, it spread market-surveillance responsibility across fifteen sector-specific regulators, each covering AI use in their existing remit (healthcare, financial services, communications, employment, education, transport, and so on).
For an SME that uses AI in a single sector, this works well — the sectoral regulator simply adds AI Act enforcement to its existing mandate. The trouble starts when an SME crosses sectors:
- An HR-tech SaaS company providing AI-driven recruitment tools to financial services clients now has at least two relevant Irish regulators.
- An AI vendor selling into education, healthcare, and local government potentially has three.
- Each regulator may interpret the same provision (for example, what counts as "appropriate human oversight" under Article 14) slightly differently.
Cross-sector SMEs in Ireland should expect to maintain separate compliance dossiers per regulator, with consistent core documentation but regulator-specific cover summaries.
What this means if you operate cross-border
The single biggest practical risk in the April 2026 snapshot is regulatory divergence. The AI Act is one regulation, but it is now being enforced by — and in some cases not enforced by — twenty-seven different national systems with very different levels of capacity and coherence.
A single AI product offered in Germany, France, Italy, and Ireland today faces:
- One clearly designated regulator in Germany (BNetzA or BaFin).
- No designated regulator in France — but live legal obligations.
- A bifurcated regulator in Italy (AgID for conformity, ACN for sanctions).
- Multiple sector-specific regulators in Ireland, one per sector served.
That is four substantively different supervisory environments for the same product. Article 50 transparency obligations, Article 9 risk management requirements, and Article 17 quality management systems must be implemented in a way that is defensible against the most demanding of the four — not the most lenient.
What this means for SMEs
Identify your primary jurisdiction. Where is your establishment? Where do you have the most users? That is your home regulator. Germany, Spain, and Italy already have statutory clarity. France does not — if your home is France, you should still build documentation as if BNetzA-grade scrutiny were coming.
Map your secondary jurisdictions. Every Member State where you offer an AI system, market it, or have substantial users is a potential supervisor. List them. Under Article 70, each Member State has a single point of contact, even where multiple authorities share enforcement.
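The mapping exercise above is simple enough to keep as a small script alongside your compliance records. The sketch below encodes the April 2026 snapshot from the table in this guide — the authority names are the ones reported here, not an authoritative registry, so verify each entry against the national implementation law before relying on it.

```python
# Illustrative jurisdiction map, based on the April 2026 snapshot in this
# guide. Authority names come from the article's table and are a starting
# point for your own mapping, not legal advice.

SUPERVISORS = {
    "DE": ["BNetzA", "BaFin"],        # BaFin only for financial-sector high-risk AI
    "FR": [],                          # no NCA designated yet; obligations still apply
    "IE": ["15 sectoral regulators"],  # sector-by-sector model
    "IT": ["AgID", "ACN"],             # conformity vs. enforcement split
    "ES": ["AESIA"],
}

def supervisors_for(markets):
    """Return the potential supervisors for each Member State you serve."""
    return {
        country: SUPERVISORS.get(country, ["(check national implementation law)"])
        for country in markets
    }

if __name__ == "__main__":
    for country, authorities in supervisors_for(["DE", "FR", "IT"]).items():
        print(country, "->", authorities or "no designated NCA — document compliance anyway")
```

An empty list (as for France) is a prompt to document compliance proactively, not a sign that no obligations exist; countries missing from the map should send you to the relevant national statute.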
Monitor local statute, not just EU-level news. The AI Act sets the floor. National implementation laws (Germany's KI-MIG, the Netherlands' April 2026 consultation, Poland's KRiBSI proposal, France's eventual replacement bill) set the practical reality. Subscribe to your home regulator's updates and at least one cross-border legal tracker.
Maintain country-specific compliance dossiers. Your core technical documentation under Annex IV is universal. Your cover summary — explaining how your AI system fits each Member State's regulatory framework — is jurisdiction-specific. Build a template now; fill it in once per active country.
If you operate in France, watch the replacement bill carefully. The withdrawal of the DDADUE Bill's NCA provisions was political, not substantive. France will eventually designate authorities — likely a CNIL-led structure with sector-specific support. The shape of that designation will materially affect French SMEs' compliance burden. Do not assume the gap will continue indefinitely.
The bottom line
The EU AI Act is one regulation with twenty-seven enforcement environments, and the gap between the most prepared (Germany, Spain) and the least prepared (France) is now wide enough to materially affect how SMEs should operate. The best posture is to plan for the strictest plausible regulator in each market you serve, and treat the absence of a designated authority as a delayed risk, not an absent one.
Compliance is judged by the regulation, not by the regulator. When France finally designates its authorities, every SME that has been operating in France during the gap will be evaluated against what the Act required from August 2026 onwards — not from when France caught up. The forward-looking move is to behave as though the structure is already in place.
For a structured walkthrough of what your AI systems need by August, see our compliance checklists. And for the wider context on how this implementation gap developed, see our earlier piece on the AI Act's missed deadlines and missing standards.
Not sure which Member State authority will supervise your AI systems? Take the free ClearAct risk assessment quiz — it takes 2 minutes and tells you exactly which risk tier you fall into, and you can then walk through our compliance checklists for your tier.