On April 1, 2026, Vision Compliance released its annual EU AI Act Readiness Report. The headline finding: 78% of organisations surveyed have not taken meaningful steps toward AI Act compliance.
This is not a niche concern. The EU AI Act applies to any organisation that develops, deploys, or distributes AI systems in the EU market — regardless of where the company is headquartered. And while the Parliament recently voted to delay high-risk obligations to December 2027, several categories of obligation are already in force, and others take effect in August 2026.
The report paints a picture of widespread unpreparedness that should concern every compliance officer, CTO, and business leader using AI.
The three critical gaps
The Vision Compliance report identified three areas where the readiness deficit is most severe:
1. No AI system inventory (83%)
83% of organisations have no formal inventory of their AI systems.
This is the most fundamental gap. You cannot comply with the EU AI Act if you do not know what AI systems you operate. The regulation requires different obligations depending on the risk classification of each AI system — but classification is impossible without first identifying what you have.
An AI system inventory is not a spreadsheet listing your vendor names. Under the AI Act, it means documenting:
- What the system does — its intended purpose and actual use cases
- Who it affects — the natural persons subject to decisions or outputs
- What data it processes — input data, training data, and output data
- How decisions are made — transparency of the decision logic
- What risk level it carries — classification under Annex III or Article 6
- Who is responsible — the provider, deployer, or both
Without this inventory, every subsequent compliance step — risk assessment, documentation, conformity assessment — has no foundation.
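To make this concrete, here is a minimal sketch of one inventory entry as structured data, in Python. The field names, the RiskLevel values, and the example system are illustrative assumptions rather than anything the Act prescribes; the point is that each item in the list above becomes a concrete, queryable attribute.

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    """Illustrative labels for the Act's risk tiers."""
    UNACCEPTABLE = "unacceptable"   # Article 5: prohibited
    HIGH = "high"                   # Annex III / Article 6
    LIMITED = "limited"             # Article 50 transparency duties
    MINIMAL = "minimal"             # no specific obligations
    UNCLASSIFIED = "unclassified"   # not yet assessed

@dataclass
class AISystemRecord:
    """One inventory entry; fields mirror the list above."""
    name: str                       # working name of the system
    intended_purpose: str           # what the system does
    affected_persons: list[str]     # who it affects
    data_processed: list[str]       # input, training, and output data
    decision_logic: str             # how decisions are made
    risk_level: RiskLevel = RiskLevel.UNCLASSIFIED
    responsible_party: str = ""     # provider, deployer, or both

# Hypothetical example: a vendor-supplied CV screening tool.
cv_screener = AISystemRecord(
    name="CV screening tool",
    intended_purpose="Rank incoming job applications",
    affected_persons=["job applicants"],
    data_processed=["CVs", "application form answers", "shortlist scores"],
    decision_logic="Vendor ML ranking model; criteria documented by vendor",
    risk_level=RiskLevel.HIGH,      # employment is an Annex III category
    responsible_party="deployer (the vendor is the provider)",
)
```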
2. No compliance owner (74%)
74% of organisations have no designated internal owner for AI Act compliance.
This is a governance gap that predicts failure. Regulations without clear internal ownership do not get implemented. The AI Act is complex — it touches legal, technical, procurement, HR, and operational teams. Without someone who is accountable for driving compliance across these functions, nothing moves.
The compliance owner does not need to be a full-time role in an SME. But someone must be explicitly responsible for:
- Tracking regulatory developments and deadlines
- Coordinating the AI system inventory
- Driving risk assessments and classification decisions
- Ensuring documentation requirements are met
- Managing relationships with AI providers (obtaining necessary information)
- Reporting to leadership on compliance status
In many SMEs, this will be the DPO, the CTO, the Head of Legal, or the COO — whoever has both the authority and the cross-functional visibility to make things happen.
3. No technical documentation process (61%)
61% of organisations have no process for generating the technical documentation required for high-risk AI systems.
Under the AI Act, providers of high-risk AI systems must produce and maintain technical documentation that demonstrates compliance with every requirement in Articles 8-15. This includes documentation of the risk management system, data governance measures, design choices, testing results, performance metrics, and instructions for use.
For deployers, the obligation is lighter but still real: you must keep records of your use, retain the logs the system generates (to the extent they are under your control), and be able to demonstrate that you have implemented appropriate human oversight.
The 61% figure suggests that most organisations have not even begun thinking about what documentation they will need — let alone building the processes to produce it.
Why this matters now
The temptation is to read the readiness numbers and conclude: "Everyone is behind, so it is fine." That logic is dangerous for three reasons.
Obligations are already in force
The AI Act is not a future regulation. Two major categories of obligation have been binding since February 2, 2025:
- Prohibited practices (Article 5): If you operate AI systems that engage in subliminal manipulation, exploitation of vulnerabilities, social scoring, or (soon) nudification, you are already in violation. Penalties: up to €35 million or 7% of global turnover.
- AI literacy (Article 4): Every organisation deploying AI must ensure that staff who use AI systems have sufficient understanding of how those systems work, their capabilities, and their limitations. This is a current, binding obligation — not a future one.
Transparency deadlines are unchanged
The Digital Omnibus delays affect high-risk obligations but do not touch transparency requirements under Article 50. These become enforceable on August 2, 2026 — four months from now.
If your AI systems interact with humans, generate content, detect emotions, or categorise people biometrically, you must have transparency mechanisms in place by August.
Enforcement infrastructure is being built
The EU AI Office is operational. National market surveillance authorities are being designated. Spain's regulatory sandbox is already hosting AI systems. The enforcement architecture is not waiting for companies to be ready — it is being built in parallel.
Being part of the 78% is not a comfortable position. It means you are behind the curve at the exact moment the curve is accelerating.
How to close the gaps
If your organisation is in the 78%, the path forward is not complicated. It requires commitment, not complexity.
Step 1: Build your AI inventory (weeks 1-4)
Start simple. Survey every department and ask: "What AI-powered tools, systems, or services do you use?" Include:
- Vendor AI products (ChatGPT, Copilot, automated HR tools, AI customer service, etc.)
- Internal AI systems (ML models, recommendation engines, automated decision systems)
- AI components embedded in other products (analytics platforms, CRM AI features, etc.)
For each system, document the purpose, affected persons, data processed, and current safeguards. This does not need to be perfect on day one — it needs to exist and be maintainable.
Step 2: Designate a compliance owner (week 1)
This is a decision, not a project. Pick someone with cross-functional authority, give them the mandate, and make AI Act compliance a named responsibility in their role. In an SME, this is often the same person responsible for GDPR — the operational overlap is significant.
Step 3: Classify your AI systems (weeks 3-6)
With your inventory in hand, determine the risk classification of each system:
- Unacceptable risk (Article 5): Banned. If any of your systems fall here, stop using them immediately.
- High risk (Annex III): Subject to full compliance requirements. The Annex III categories cover biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice.
- Limited risk (Article 50): Subject to transparency obligations only.
- Minimal risk: No specific obligations, but voluntary codes of conduct encouraged.
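If the inventory is structured data, a first pass at this classification can even be scripted. The sketch below is an illustration under loud assumptions: the keyword sets are hypothetical stand-ins for the legal texts, and any borderline result still needs legal review.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical keyword sets; Article 5, Annex III, and Article 50 are the
# authoritative lists, not these strings.
PROHIBITED_USES = {"social scoring", "subliminal manipulation",
                   "exploitation of vulnerabilities"}
ANNEX_III_AREAS = {"biometrics", "critical infrastructure", "education",
                   "employment", "essential services", "law enforcement",
                   "migration", "justice"}
TRANSPARENCY_TRAITS = {"interacts with humans", "generates content",
                       "detects emotions", "categorises biometrically"}

def triage(use_case: str, domain: str, traits: set[str]) -> RiskLevel:
    """First-pass triage only; borderline results need legal review."""
    if use_case in PROHIBITED_USES:
        return RiskLevel.UNACCEPTABLE   # banned: stop using it
    if domain in ANNEX_III_AREAS:
        return RiskLevel.HIGH           # full compliance requirements
    if traits & TRANSPARENCY_TRAITS:
        return RiskLevel.LIMITED        # Article 50 transparency duties
    return RiskLevel.MINIMAL

print(triage("candidate ranking", "employment", {"generates content"}))
# RiskLevel.HIGH: employment is an Annex III area
```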
Step 4: Address immediate obligations (weeks 2-8)
In parallel with inventory and classification:
- AI literacy (Article 4): Train relevant staff on how the AI systems they use work, what their limitations are, and what oversight is required. This is already legally required.
- Prohibited practices (Article 5): Screen every AI use case against the banned list. If anything is borderline, get legal advice.
- Transparency (Article 50): For systems that interact with people or generate content, build disclosure and labelling mechanisms before August 2026.
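For most limited-risk systems, the Article 50 engineering is modest. Below is a minimal sketch of two disclosure mechanisms, assuming a chatbot and a content pipeline; the wording, function names, and HTML attribute are illustrative, since the Act requires clear disclosure but does not mandate a particular implementation.

```python
# Illustrative Article 50-style disclosures; the wording and markup are
# assumptions, not text prescribed by the Act.

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human agent."

def wrap_chatbot_reply(reply: str, first_turn: bool) -> str:
    """Prepend the AI disclosure on the first turn of a conversation."""
    return f"{AI_DISCLOSURE}\n\n{reply}" if first_turn else reply

def label_generated_content(html: str) -> str:
    """Attach a machine- and human-readable label to AI-generated content."""
    return (f'<div data-ai-generated="true">{html}'
            f"<p><em>AI-generated content</em></p></div>")

print(wrap_chatbot_reply("How can I help you today?", first_turn=True))
```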
Step 5: Build high-risk compliance infrastructure (months 3-12)
For systems classified as high risk, begin building:
- Risk management system (Article 9)
- Data governance framework (Article 10)
- Technical documentation (Article 11 / Annex IV)
- Record-keeping and logging (Article 12)
- Transparency and user information (Article 13)
- Human oversight mechanisms (Article 14)
- Accuracy and robustness testing (Article 15)
This is the work that takes the most time and the most cross-functional coordination. With the likely delay to December 2027, you have time to do it thoroughly — but only if you start now.
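Of the seven items above, record-keeping is often the most approachable starting point, because it can be bolted onto existing systems. Here is a minimal sketch of Article 12-style operation logging; the schema and field names are assumptions for illustration, since the Act requires traceability of the system's operation rather than a specific format.

```python
import datetime
import json

def log_decision(system_id: str, input_ref: str, output: str,
                 overseer: str, logfile: str = "ai_operation_log.jsonl") -> None:
    """Append one traceable record per automated decision (illustrative schema)."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system_id": system_id,     # ties the log back to the inventory entry
        "input_ref": input_ref,     # a reference, not raw personal data
        "output": output,
        "human_overseer": overseer, # supports the Article 14 oversight trail
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage for the CV screening example:
log_decision("cv-screener-01", "application#4821",
             "shortlisted", "hr.lead@example.com")
```

An append-only JSONL file is only a starting point; in production you would write to whatever durable log store you already operate, with a retention period aligned to the Act's requirements.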
The cost of waiting
The 78% figure will shrink over the coming months as awareness grows and deadlines approach. The organisations that move now have a structural advantage:
- Better quality compliance: Building an AI inventory under time pressure produces a checklist. Building one with adequate time produces a governance asset.
- Lower cost: Rushing compliance work at the last moment is expensive — consultants charge premium rates, internal teams work overtime, and corners get cut.
- Competitive advantage: In regulated industries, demonstrating AI governance maturity to customers, partners, and regulators creates trust. Being early is a differentiator.
- Reduced enforcement risk: Regulators will scrutinise organisations that show no evidence of compliance effort more closely than those making visible, documented progress.
The bottom line
The 78% readiness gap is an opportunity for every organisation willing to act. The compliance requirements are known. The tools are available. The deadlines — whether August 2026 for transparency or December 2027 for high-risk — give you enough time to do this properly.
The organisations that act now will spend less, comply better, and face fewer surprises. The organisations that wait will eventually do the same work — just under worse conditions.
Do not be part of the 78%. Start this week.
Find out exactly where your organisation stands. Take the free ClearAct risk assessment quiz — 2 minutes, no login required. Already know your risk level? Start your AI system inventory with ClearAct Pro, or explore compliance checklists tailored to your risk tier.