
Your Right to an AI Explanation: Article 86 of the EU AI Act

8 min read

When a company uses an AI system to make or support a decision about you — whether to approve your loan, shortlist your job application, or determine your insurance premium — you have a right to understand what happened. Article 86 of Regulation (EU) 2024/1689, the EU AI Act, establishes this right. It is one of the most practically significant provisions in the entire regulation, and it takes effect on August 2, 2026, roughly five months from now.

For SMEs that deploy high-risk AI systems, this article creates a concrete obligation: when someone asks why a decision was made, you must be able to give them a clear and meaningful answer.

What Article 86 Actually Says

Article 86(1) states that any affected person subject to a decision taken by a deployer on the basis of the output from a high-risk AI system listed in Annex III (with the exception of systems listed under point 2 of that Annex), where that decision produces legal effects or similarly significantly affects that person in a way they consider to have an adverse impact on their health, safety or fundamental rights, has the right to obtain clear and meaningful explanations of:

  • The role of the AI system in the decision-making procedure
  • The main elements of the decision taken

This is not a right to understand the algorithm's internal mechanics. It is a right to understand how the AI system was used in the process that led to the decision, and what the key factors behind that decision were. The obligation sits with the deployer — the organization that uses the AI system — not the company that built it.

When Does Article 86 Apply?

Four conditions must all be met for this right to be triggered:

1. The AI system must be high-risk and listed in Annex III. Annex III covers AI systems used in areas such as biometric identification, critical infrastructure, education and vocational training, employment and worker management, access to essential private and public services (including credit, insurance, and social benefits), law enforcement, migration and border control, and administration of justice. One caveat: Article 86(1) expressly excludes systems listed under point 2 of Annex III (critical infrastructure). If your AI system falls into any of the other Annex III categories, this article applies.

2. A deployer must have made a decision based on the AI system's output. The deployer is the entity using the AI system in a professional capacity. For most SMEs, this means your company, not the AI vendor.

3. The decision must produce legal effects or similarly significantly affect the person. A legal effect means a change in someone's legal position — a contract denied, a benefit refused, an application rejected. "Similarly significant" effects include decisions that materially impact someone's financial situation, access to services, employment prospects, or educational opportunities, even if they do not strictly alter legal rights.

4. The affected person must request the explanation. Article 86 creates a right that individuals can exercise. It is not an obligation to proactively explain every decision, but rather an obligation to respond when asked.
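
The four conditions above are cumulative, which lends itself to a simple checklist. The sketch below is purely illustrative — the class and field names are our own shorthand, and reducing each condition to a boolean glosses over legal analysis that real assessments require:

```python
from dataclasses import dataclass

@dataclass
class DecisionContext:
    """Illustrative fields only -- each boolean stands in for a legal assessment."""
    annex_iii_high_risk: bool           # condition 1: system is high-risk and listed in Annex III
    decision_based_on_output: bool      # condition 2: deployer decided based on the AI output
    legal_or_significant_effect: bool   # condition 3: legal or similarly significant effect
    explanation_requested: bool         # condition 4: the affected person asked for an explanation

def article_86_right_triggered(ctx: DecisionContext) -> bool:
    """The right to an explanation applies only when all four conditions hold."""
    return (
        ctx.annex_iii_high_risk
        and ctx.decision_based_on_output
        and ctx.legal_or_significant_effect
        and ctx.explanation_requested
    )
```

The useful point the code makes: a single failing condition (for example, no request was made) means the Article 86 right is not triggered, though other transparency obligations may still apply.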

Practical Examples

Consider these scenarios where Article 86 would apply:

Credit decisions. A bank uses an AI credit scoring system to assess loan applications. A customer's application is rejected. Under Article 86, the customer can request an explanation. The bank must explain what role the AI system played in the assessment and what the main factors behind the rejection were — for example, that the AI system flagged insufficient income stability based on employment history patterns, and this output was a significant factor in the lending officer's decision.

Hiring decisions. An HR department uses an AI resume screening tool to shortlist candidates. A candidate who was not shortlisted requests an explanation. The company must explain that an AI system was used to rank applications based on specified criteria, and identify the main elements that led to the candidate not being progressed — such as a mismatch between stated experience and role requirements as assessed by the system.

Insurance pricing. An insurance company uses an AI risk assessment tool to calculate premiums. A customer who receives a higher-than-expected premium can request an explanation of how the AI system contributed to the pricing decision and what factors were most influential.

Benefit eligibility. A public authority uses an AI system to assess eligibility for social benefits. An applicant whose claim is denied can request an explanation of the AI system's role and the main reasons for the denial.

In each case, the deployer does not need to reveal proprietary algorithms or trade secrets. The obligation is to explain the decision-making process and the AI system's role within it in terms the affected person can understand.

How Article 86 Differs from GDPR Article 22

If you are already familiar with GDPR, you may wonder how Article 86 relates to Article 22 of the GDPR, which gives individuals the right not to be subject to solely automated decision-making with legal effects.

The differences are important:

  • GDPR Article 22 applies only when a decision is made solely by automated processing, with no meaningful human involvement. If a human reviews and approves the AI's recommendation, Article 22 generally does not apply.
  • AI Act Article 86 applies regardless of whether the decision was solely automated or involved human oversight. Even when a human makes the final call based on an AI system's output, the affected person still has the right to an explanation.

This is a significant expansion. Many organizations have structured their AI workflows to include human review specifically to avoid GDPR Article 22 obligations. Article 86 closes that gap. A human-in-the-loop does not remove the obligation to explain.

Both provisions can apply simultaneously. If a decision is made solely by an AI system listed in Annex III, the affected person may have rights under both GDPR Article 22 and AI Act Article 86. Organizations need to comply with both frameworks.

What SMEs Need to Do as Deployers

If your organization deploys a high-risk AI system listed in Annex III, you should start preparing now. Here are the concrete steps:

Establish a request handling process. Decide how individuals can submit explanation requests — a dedicated email address, a form on your website, or through your existing customer service channels. Make sure someone is responsible for receiving and routing these requests.

Train your staff. The people making AI-assisted decisions need to understand the AI system well enough to explain its role. This does not mean they need to understand neural network architectures. It means they need to know what data the system considers, what its output represents, and how that output factors into their decision. "The computer said no" is not a clear and meaningful explanation.

Document your decision-making process. For each high-risk AI use case, document the workflow: what input the AI system receives, what output it produces, how that output is used by the human decision-maker, and what other factors are considered. This documentation is the foundation of any explanation you provide.

Set response timelines. While Article 86 does not specify an exact response deadline, explanations should be provided within a reasonable timeframe. Establishing internal SLAs — for example, 30 days from receipt of request — demonstrates good faith and helps manage the process.

Keep records. Log explanation requests and the responses you provide. This creates an audit trail that demonstrates compliance and helps you improve your explanation processes over time.

Review your AI vendor agreements. To explain the AI system's role, you need adequate information from your AI provider about how the system works. Check that your contracts include sufficient transparency provisions. Under Article 13 of the AI Act, providers of high-risk AI systems must supply deployers with instructions for use that enable them to fulfil their obligations, including those under Article 86.

The Exception: Article 86(2)

Article 86(2) provides a limited exception. The right to explanation does not apply where the use of AI systems is subject to exceptions from, or restrictions to, the obligation to provide an explanation under Union or national law. This covers specific contexts such as national security and certain law enforcement scenarios where full transparency could compromise legitimate objectives.

For most SMEs, this exception is not relevant. If you are using AI for commercial purposes — hiring, lending, insurance, customer service — the full obligation applies.

The Timeline

Article 86 takes effect on August 2, 2026, as part of the main body of obligations for high-risk AI systems under the EU AI Act. That is approximately five months from today. Organizations that deploy high-risk AI systems should already be working on their compliance preparations.

The key actions to prioritize between now and August:

  1. Identify which of your AI systems are high-risk under Annex III
  2. Map the decision-making workflows for each system
  3. Create explanation templates for common decision types
  4. Train staff who make AI-assisted decisions
  5. Set up a process for receiving and responding to explanation requests
  6. Review vendor contracts for adequate transparency provisions
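
For step 3, an explanation template need not be elaborate: it just has to cover the two elements Article 86(1) names — the AI system's role and the main elements of the decision. A hypothetical skeleton (wording and field names are ours, not mandated by the Act):

```python
# Hypothetical template covering the two Article 86(1) elements.
EXPLANATION_TEMPLATE = """\
Subject: Explanation of our decision dated {decision_date}

Role of the AI system in the decision:
{ai_role}

Main elements of the decision:
{main_elements}

If you have questions about this explanation, contact {contact}.
"""

def render_explanation(decision_date: str, ai_role: str,
                       main_elements: list[str], contact: str) -> str:
    """Fill the template, formatting the main elements as a bullet list."""
    return EXPLANATION_TEMPLATE.format(
        decision_date=decision_date,
        ai_role=ai_role,
        main_elements="\n".join(f"- {e}" for e in main_elements),
        contact=contact,
    )
```

Pre-drafting these skeletons per decision type means staff only fill in the specifics, which keeps responses consistent and within your SLA.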

Take Action Now

Not sure whether your AI systems qualify as high-risk? Our free risk assessment quiz takes two minutes and gives you a clear picture of where you stand under the EU AI Act. Understanding your risk level is the first step toward compliance — including your Article 86 obligations.

Take the Free Risk Assessment Quiz
