High-risk classification under the EU AI Act is where compliance moves from theory to operational reality. If your system is high-risk, you are no longer in "best-practice" territory—you are in mandatory-controls territory.
For SMEs, the challenge is rarely legal access to information. The challenge is correctly identifying exposure early enough to build controls before a customer audit, incident, or deadline forces rushed decisions.
This guide explains how to determine whether you are affected, what the law actually requires, and how to act quickly without overengineering.
What "high-risk" means in legal terms
Under Article 6 of Regulation (EU) 2024/1689, an AI system becomes high-risk through two main routes: as a safety component of a product covered by Annex I harmonisation legislation (Article 6(1)), or through the use contexts listed in Annex III (Article 6(2)). The key lesson: high-risk is not about how "advanced" a model sounds. It is about impact and context.
A simple model used in recruitment ranking can be high-risk.
A sophisticated writing assistant used for harmless drafting may not be.
The decision point is legal classification, not model marketing.
The two mistakes SMEs make most
- Under-classification: "We only use vendor software, so this cannot be our problem."
- Over-classification: "Everything with AI is high-risk, so we should freeze everything."
Both are expensive. Under-classification creates legal and commercial exposure. Over-classification wastes scarce team capacity.
What works: a documented triage method tied to the Act.
Annex III contexts you should examine first
For most SMEs, high-risk exposure appears in repeatable hotspots:
- Employment and worker management (screening, ranking, promotion support)
- Education/vocational contexts where outcomes affect progression
- Access to essential private/public services (e.g., creditworthiness-related decisions)
- Certain biometric or identity-sensitive applications
- Safety-impacting environments depending on system role
If your use case influences rights, opportunities, essential access, or safety, classify it as a high-risk candidate until proven otherwise.
Article 6(3): when an Annex III use might be exempt
Many teams ignore Article 6(3), which can remove some Annex III systems from high-risk where criteria are genuinely met (for example narrowly procedural contexts that do not materially influence outcomes).
Do not use this as a loophole by default. Use it as a structured test with written rationale, because buyers and regulators may challenge unsupported claims.
Provider vs deployer: why role clarity changes workload
- Provider duties are broader and lifecycle-heavy.
- Deployer duties remain substantial in day-to-day operation.
Even as a deployer, you still need controlled use, trained personnel, oversight mechanisms, recordkeeping posture, and incident-ready operations.
In mixed environments, one company can be deployer for one use case and provider-like for another (for example, after substantial modification or branding decisions linked to Article 25 role shifts).
The operational core of high-risk compliance (Articles 9-15)
If a use case is high-risk, these are not optional:
Risk management (Article 9)
Establish a continuous process to identify, evaluate, mitigate, and monitor risk throughout the system's lifecycle, not only during procurement.
Data and governance quality (Article 10)
Ensure training/validation/testing data governance and quality practices are suitable for intended purpose and risk profile.
Technical documentation (Article 11 + Annex IV)
Maintain usable documentation explaining system purpose, limitations, controls, and evidence. If your documentation cannot be understood by operations/compliance teams, it is not sufficient in practice.
Logging (Article 12)
Capture records that support traceability of key events, especially those tied to consequential outputs and override decisions.
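As a sketch, a decision-event record that supports this kind of traceability might look like the following. The field names and schema are illustrative assumptions; the Act does not prescribe an exact log format.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision_event(workflow_id, model_version, model_input, model_output,
                       human_override=False, reviewer=None, path=None):
    """Build one traceability record per consequential output.

    Illustrative fields only: the goal is to be able to reconstruct what
    the model saw, what it produced, and whether a human intervened.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "workflow_id": workflow_id,
        "model_version": model_version,
        # Hash rather than store raw input when it may contain personal data.
        "input_sha256": hashlib.sha256(model_input.encode()).hexdigest(),
        "output": model_output,
        "human_override": human_override,
        "reviewer": reviewer,
    }
    if path is not None:
        # Append-only JSON Lines keeps records tamper-evident and auditable.
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")
    return record
```

The override flag and reviewer field matter most in audits: they connect Article 12 logging to Article 14 oversight evidence.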
Transparency and instructions (Article 13)
Users and operators need clear, practical instructions—not generic vendor brochures.
Human oversight (Article 14)
Oversight must be designed into workflow: who can intervene, when, with what authority, and with what evidence trail.
Accuracy, robustness, cybersecurity (Article 15)
You need performance thresholds and incident response plans proportionate to harm potential.
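A minimal sketch of what "performance thresholds" can mean in operation: a periodic check that compares live metrics against declared operating bounds and reports breaches for escalation. The metric names and threshold values below are assumptions to be set per use case, not figures from the Act.

```python
# Declared operating bounds for one workflow (illustrative values).
THRESHOLDS = {"accuracy": 0.90, "false_positive_rate": 0.05}

def check_performance(metrics):
    """Return the list of breached thresholds.

    An empty list means the system is within its declared bounds;
    any entry should feed the incident/escalation path.
    """
    breaches = []
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name)
        if value is None:
            breaches.append(f"{name}: metric missing")
        elif name == "accuracy" and value < limit:
            breaches.append(f"accuracy {value:.2f} below floor {limit:.2f}")
        elif name == "false_positive_rate" and value > limit:
            breaches.append(
                f"false_positive_rate {value:.2f} above ceiling {limit:.2f}")
    return breaches
```

A missing metric is treated as a breach on purpose: if you cannot measure against your own threshold, you cannot evidence Article 15 conformity.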
A practical high-risk screening flow for SMEs
Use this 8-question triage in product/compliance review:
- Does output affect rights, access, employment, education, safety, or essential services?
- Does context resemble Annex III categories?
- Could model error materially disadvantage a person/group?
- Is AI output used directly in decision path?
- Can trained humans override outcomes in real time?
- Are there logs showing how outputs were generated/used?
- Is there documented role ownership and escalation path?
- Is Article 6(3) exemption being claimed, and if yes, with evidence?
If the answers show meaningful impact combined with low control maturity, escalate immediately.
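The triage above can be sketched as a simple decision aid. The question keys and the escalation rule (two or more impact signals with two or fewer controls in place) are illustrative assumptions, not legal tests.

```python
# First four keys are impact signals; last four are control-maturity signals.
TRIAGE_QUESTIONS = [
    "affects_rights_or_access",     # Q1
    "resembles_annex_iii",          # Q2
    "error_can_disadvantage",       # Q3
    "output_in_decision_path",      # Q4
    "human_override_available",     # Q5 (control)
    "logs_available",               # Q6 (control)
    "role_ownership_documented",    # Q7 (control)
    "art_6_3_claim_with_evidence",  # Q8 (control)
]

def triage(answers):
    """answers: dict of question key -> bool (missing keys count as False)."""
    impact = sum(answers.get(q, False) for q in TRIAGE_QUESTIONS[:4])
    controls = sum(answers.get(q, False) for q in TRIAGE_QUESTIONS[4:])
    if impact >= 2 and controls <= 2:
        return "escalate: high-risk candidate with weak controls"
    if impact >= 2:
        return "treat as high-risk candidate; verify controls"
    return "monitor; re-run triage on any change of context"
```

The point is not the scoring itself but forcing a recorded, repeatable answer per workflow instead of ad-hoc judgment calls.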
Concrete examples
Example 1: CV ranking plugin for a 200-person company
Likely high-risk candidate in employment context. Required actions: restrict autonomous ranking use, add structured reviewer rubric, log override decisions, and document limitations communicated to HR.
Example 2: AI fraud scoring in a fintech workflow
May influence access and account actions; requires strong human oversight and error-resolution path. False positives can create serious consumer harm and complaints.
Example 3: AI tutor that only suggests reading resources
Could be lower risk than automated grading/progression systems, but still needs transparency and monitoring for harmful outputs.
What a regulator or enterprise client will test first
They usually start with practical questions, not theory:
- Show your high-risk classification rationale.
- Show who owns each sensitive AI workflow.
- Show evidence of oversight intervention and logging.
- Show last review date and open risk actions.
If you cannot answer these in one sitting, your program is likely immature.
60-day remediation plan when exposure is likely
Days 1-10: freeze and map
- Freeze uncontrolled expansion of sensitive AI use cases.
- Build list of all candidate high-risk workflows.
- Assign accountable owner per workflow.
Days 11-25: classify and design controls
- Map each use case to Article 6/Annex III.
- Test Article 6(3) only where evidence supports it.
- Define oversight trigger points and override authorities.
Days 26-45: operationalize
- Implement logging for decision-relevant steps.
- Publish operator instructions and user-facing notices where needed.
- Start documentation pack aligned with Article 11/Annex IV expectations.
Days 46-60: validate and harden
- Run incident simulation on at least one high-risk candidate workflow.
- Track unresolved gaps with owners and deadlines.
- Set monthly governance review cadence.
Commercial upside of getting this right
High-risk maturity is not only defensive. It improves procurement outcomes, lowers insurance and contractual friction, and increases trust with regulators and partners.
In many enterprise deals, weak AI governance now blocks revenue before legal enforcement ever begins.
Useful internal tools to accelerate work
- Initial exposure triage: /quiz
- Common terms and definitions for cross-team alignment: /glossary
- Financial downside scenario planning: /fine-calculator
These are useful for turning legal requirements into operational execution.
Common failure patterns
- Assuming vendor documentation automatically covers your deployment context.
- Treating oversight as symbolic approval instead of real intervention authority.
- Keeping risk decisions in private chats instead of auditable systems.
- Failing to retrigger classification after product/process changes.
What to prepare for customer due diligence questionnaires
Enterprise customers increasingly ask AI governance questions during procurement. A strong answer set usually includes:
- Classification logic tied to Article 6 and Annex III
- Oversight and intervention mechanics tied to Article 14
- Logging and traceability tied to Article 12
- Transparency handling tied to Article 13 and Article 50
- Training records and role readiness tied to Article 4
If your responses are policy-heavy but evidence-light, expect follow-up audits.
High-risk incident readiness: a practical drill
Run a tabletop exercise once per quarter for one sensitive workflow:
- Simulate harmful output reaching a real decision path.
- Trigger escalation owner and verify response time.
- Check whether logs reconstruct model input/output and human intervention.
- Test customer communication and remediation path.
- Record findings and corrective actions.
This drill validates whether Articles 9, 12, and 14 controls work in reality, not just in documents.
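The drill steps above can be tracked in a structured findings record so failed steps become assigned corrective actions. The step names and record layout are an illustrative sketch, not a required format.

```python
from datetime import datetime, timezone

# One entry per drill step from the quarterly tabletop exercise.
DRILL_STEPS = [
    "harmful_output_simulated",
    "escalation_owner_responded",
    "logs_reconstruct_decision",
    "customer_comms_tested",
    "corrective_actions_recorded",
]

def record_drill(workflow_id, results):
    """results: dict of step -> bool. Failed or skipped steps become
    open actions for the next governance review."""
    failed = [s for s in DRILL_STEPS if not results.get(s, False)]
    return {
        "workflow_id": workflow_id,
        "date": datetime.now(timezone.utc).date().isoformat(),
        "passed": not failed,
        "open_actions": failed,
    }
```

Keeping these records per quarter gives you exactly the "show evidence of oversight intervention" answer that regulators and enterprise clients ask for first.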
When to escalate to legal counsel immediately
Escalate quickly when you see any of these:
- Potential Article 5 prohibited-practice overlap
- Uncertain Article 6(3) exemption claim with weak evidence
- Expanded deployment context that may move a use case into Annex III
- Repeat incidents suggesting systemic fairness or safety weakness
Fast legal escalation prevents downstream rework and contractual risk.
High-risk governance RACI example for SMEs
A simple responsibility matrix prevents confusion:
- Product owner: maintains use-case intent, feature boundaries, release notes
- Compliance/legal lead: validates Article 6/Annex III rationale and Article 6(3) claims
- Operations lead: owns incident workflow and response SLAs
- Data/engineering lead: owns logging quality, model/version traceability, and performance monitoring
- HR/training lead: tracks AI literacy completion under Article 4
RACI clarity reduces "everyone thought someone else owned it" failures.
Documentation minimum for each high-risk candidate workflow
Store one structured record per workflow containing:
- Intended purpose and non-intended use constraints
- Role classification and legal references (Articles 3, 6, Annex III)
- Risk list with mitigations and open issues (Article 9 context)
- Oversight flow with escalation thresholds (Article 14)
- Logging fields and retention owner (Article 12)
- User/operator communication requirements (Article 13 and, where relevant, Article 50)
This format keeps reviews fast and defensible.
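The per-workflow record above might be captured as a structured type like this. Field names mirror the checklist; they are a suggested layout, not a format the Act prescribes.

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowRecord:
    """One structured record per high-risk candidate workflow."""
    workflow_id: str
    intended_purpose: str
    non_intended_uses: list         # explicit out-of-scope constraints
    legal_references: list          # e.g. ["Art. 6", "Annex III(4)"]
    risks: list = field(default_factory=list)           # Article 9 context
    oversight_flow: str = ""                            # Article 14
    logging_fields: list = field(default_factory=list)  # Article 12
    user_notices: str = ""                              # Articles 13 / 50

    def open_issues(self):
        """Risks without a documented mitigation stay on the review agenda."""
        return [r for r in self.risks if not r.get("mitigation")]
```

A record like this keeps the classification rationale, the control design, and the open risk actions in one place, which is what "fast and defensible" reviews depend on.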
Final takeaway
High-risk status is not a stigma. It is a signal that your AI use case can create meaningful harm if unmanaged. Under the EU AI Act, that signal carries concrete obligations.
Anchor your work in the law: Article 6 and Annex III for classification, Article 6(3) for narrow exceptions, and Articles 9-15 for core control architecture.
For SMEs, the right goal is not perfect paperwork. The goal is a working, testable control system with clear ownership and evidence. Build that now, and you convert compliance from a deadline panic into a durable business capability.