
What Counts as an AI System Under the EU AI Act?


The Question Every SME Should Be Asking

Since the EU AI Act entered into force, we have spoken with hundreds of small and medium-sized businesses across Europe. The single most common question is disarmingly simple: "Does this law even apply to us?"

The answer depends entirely on whether the software tools you build or use qualify as AI systems under the regulation. If they do not, you have zero obligations under the AI Act. None. So getting this classification right is not a technicality — it is the first and most consequential compliance decision you will make.

In February 2025, the European Commission published official Guidelines on the Definition of an AI System (C(2025) 883) to clarify exactly where the line falls. This article breaks down those guidelines into practical terms.

The Official Definition: Article 3(1)

The AI Act defines an "AI system" as:

A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

That is a single sentence with a lot packed into it. The Commission identifies seven cumulative elements that must all be present for something to qualify.

The 7 Cumulative Elements

Every element below must be satisfied. If even one is missing, the system is not an AI system under the Act.

1. Machine-Based System

The system runs on a machine — a computer, server, edge device, or cloud infrastructure. This is the easiest criterion to meet and excludes only purely human processes.

2. Designed to Operate with Varying Levels of Autonomy

The system can operate with some degree of independence from human control. This does not mean full autonomy — even a tool that presents recommendations for a human to approve can meet this criterion. The key word is "varying."

3. May Exhibit Adaptiveness After Deployment

The system may — but does not have to — continue learning or adjusting its behavior after it has been put into use. A system trained once and then frozen still qualifies, because "may" sets a possibility, not a requirement.

4. Explicit or Implicit Objectives

The system is built to achieve some goal, whether that goal is stated explicitly (e.g., "classify this image") or implicit in its design (e.g., a recommendation engine optimizing for engagement).

5. Infers from Input

This is the decisive element. The system must perform inference — it must go beyond mechanically executing pre-defined rules. It must derive outputs that were not directly programmed by a human for every possible input. We will return to this point below because it is the key differentiator.

6. Generates Outputs

The system produces something: predictions, content, recommendations, decisions, or other outputs. A system that processes data purely internally without generating any output would not qualify.

7. Can Influence Physical or Virtual Environments

The outputs have the capacity to affect something — a decision about a person, a control signal to a machine, content displayed to a user, or any other effect on the real or digital world.

"Inference" Is the Key Test

The Commission's guidelines make clear that inference is what separates AI systems from conventional software. Inference means the system derives new information, patterns, or conclusions that go beyond what was explicitly coded.

Think of it this way:

  • A system that follows a flowchart — no matter how complex — is not performing inference. It is executing instructions.
  • A system that examines data and produces outputs that the developer did not explicitly program for each input is performing inference.

The guidelines specifically state that approaches such as machine learning, logic-based reasoning, statistical methods, and knowledge-based techniques all involve inference. The critical point is that the output generation process involves deriving results rather than mechanically applying fixed rules.
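The distinction can be made concrete with a toy sketch. All function names, thresholds, and the "training" method below are hypothetical and deliberately oversimplify the legal test: the first function executes a rule a developer wrote in advance, while the second derives its decision rule from data the developer never inspected.

```python
def flag_invoice_rule_based(amount: float, vendor_blacklisted: bool) -> bool:
    """NOT inference: every output follows from rules the developer wrote."""
    return amount > 10_000 or vendor_blacklisted

def train_threshold(history: list[tuple[float, bool]]) -> float:
    """A toy 'learning' step: derive a decision threshold from labelled data.

    The developer never hard-codes the threshold; it is inferred from input
    (the midpoint between average legitimate and average flagged amounts).
    """
    flagged = [amt for amt, was_flagged in history if was_flagged]
    ok = [amt for amt, was_flagged in history if not was_flagged]
    return (sum(ok) / len(ok) + sum(flagged) / len(flagged)) / 2

def flag_invoice_learned(amount: float, threshold: float) -> bool:
    """Leans toward inference: the rule itself was derived, not programmed."""
    return amount > threshold
```

Both functions solve the same business problem, which is exactly why classification must look at how outputs are produced, not at what the tool does.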

What Is NOT an AI System

Here are concrete examples that fall outside the definition. If your SME only uses tools like these, the AI Act does not apply to you:

  • Robotic Process Automation (RPA) that follows fixed, scripted steps — clicking buttons, copying data between fields, filling forms according to predetermined rules
  • Spreadsheet formulas, even sophisticated ones with nested IF statements, VLOOKUP chains, or pivot tables — these are deterministic calculations
  • Basic if-then-else business logic — approval workflows that route invoices based on amount thresholds, automated email responses triggered by keywords
  • Simple calculators and converters — tax calculators, currency converters, BMI calculators
  • Database queries and reports — SQL queries, BI dashboards that aggregate and display data without predictive modeling
  • Conventional rule-based spam filters — filters that check against fixed keyword lists and sender blacklists
  • Basic inventory management — systems that trigger reorder alerts when stock falls below a set threshold
  • Deterministic pricing engines — systems that apply fixed pricing rules, volume discounts, or seasonal adjustments according to predefined tables

The common thread: these systems do exactly what they were told to do, for every input, in a way the developer explicitly specified.
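A deterministic pricing engine of the kind listed above might look like this sketch (table values and names are hypothetical). Every output is fully determined by rules the developer specified, so no inference is involved:

```python
# Predefined discount table: (minimum quantity, discount rate), checked in order.
VOLUME_DISCOUNTS = [(100, 0.10), (50, 0.05), (0, 0.0)]

def unit_price(base_price: float, quantity: int) -> float:
    """Applies a fixed discount table; the developer specified every outcome."""
    for min_qty, discount in VOLUME_DISCOUNTS:
        if quantity >= min_qty:
            return round(base_price * (1 - discount), 2)
    return base_price
```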

What IS an AI System

These tools cross the inference threshold and fall under the AI Act:

  • Machine learning models of any type — supervised, unsupervised, reinforcement learning, whether you trained them or bought them
  • Large language models and chatbots — ChatGPT, Claude, Copilot, Gemini, and any application built on top of them
  • Computer vision systems — quality inspection cameras that learn defect patterns, facial recognition, object detection
  • Recommendation engines — systems that learn from user behavior to suggest products, content, or actions (not to be confused with rule-based "customers also bought" lists)
  • Natural language processing systems — sentiment analysis, document classification, automated summarization, translation models
  • Predictive analytics — demand forecasting models, credit scoring systems, churn prediction, predictive maintenance
  • Generative AI tools — image generators, code assistants, text generators, music creation tools

The common thread: these systems derive outputs that the developer did not explicitly program for each possible input.
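By contrast, even a very small learned model crosses the inference threshold. The sketch below is a toy nearest-centroid classifier (not any specific product): its decision boundary is derived from training data, and the developer never specifies the output for any particular input.

```python
def fit_centroids(samples: list[tuple[list[float], str]]) -> dict[str, list[float]]:
    """Learn one mean vector (centroid) per label from labelled samples."""
    sums: dict[str, list[float]] = {}
    counts: dict[str, int] = {}
    for features, label in samples:
        counts[label] = counts.get(label, 0) + 1
        prev = sums.get(label, [0.0] * len(features))
        sums[label] = [a + b for a, b in zip(prev, features)]
    return {label: [v / counts[label] for v in sums[label]] for label in sums}

def predict(centroids: dict[str, list[float]], x: list[float]) -> str:
    """Assign the label of the closest centroid (squared Euclidean distance)."""
    def dist(c: list[float]) -> float:
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda label: dist(centroids[label]))
```

Nothing in this code says which label any given input receives; that mapping emerges from the data, which is the essence of the inference criterion.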

Why This Matters for Your Business

The classification has binary consequences:

If your tool is NOT an AI system: You have no obligations under the EU AI Act. You do not need to classify it by risk level, you do not need technical documentation under Annex IV, you do not need a quality management system, and you do not need to register in the EU database. You can stop worrying about this particular regulation for that tool.

If your tool IS an AI system: You need to determine whether you are a provider or deployer, classify the risk level, and comply with the corresponding requirements before the August 2, 2026 deadline.

Getting this right early saves you from either unnecessary compliance costs on tools that are exempt, or dangerous complacency about tools that are covered.

The Gray Areas

Some systems sit in a genuinely ambiguous zone. Here are situations where reasonable people might disagree:

  • "Smart" rule engines with many thousands of rules — If a human wrote every rule, it is probably not AI. But if the rules were generated or optimized by a learning process, it likely is.
  • Statistical models with fixed parameters — A linear regression model with hand-tuned coefficients is borderline. The Commission's guidelines suggest that statistical approaches can constitute inference, but the simpler and more deterministic the model, the weaker the case.
  • Expert systems with human-authored knowledge bases — Classic expert systems that apply rules encoded by domain experts are generally not AI systems. However, if the knowledge base is automatically updated through learning, it may cross the threshold.
  • Hybrid systems — A system that is 90% rule-based but uses an ML model for one sub-component still contains an AI system: that ML sub-component. The AI Act's obligations apply to the AI component, not to the purely rule-based parts around it.

When in doubt, document your reasoning. If a regulator questions your classification, a well-reasoned analysis showing you considered the seven elements will serve you far better than no analysis at all.

Practical Checklist: Is Your Tool an AI System?

Use this checklist for each software tool in your organization. All seven must be "Yes" for the tool to qualify as an AI system under the EU AI Act.

  • [ ] Machine-based? Does it run on computing hardware?
  • [ ] Some autonomy? Can it operate with any degree of independence from continuous human control?
  • [ ] Objectives? Is it designed to achieve explicit or implicit goals?
  • [ ] Processes input? Does it receive and process input data?
  • [ ] Performs inference? Does it derive outputs that go beyond executing pre-defined rules — producing results the developer did not explicitly specify for every possible input? (This is the critical question)
  • [ ] Generates outputs? Does it produce predictions, content, recommendations, decisions, or similar outputs?
  • [ ] Influences environments? Can those outputs affect physical or virtual environments, including decisions about people?

If you answered "No" to any of these — the tool is not an AI system under the EU AI Act and falls outside its scope.

If you answered "Yes" to all seven — the tool is an AI system. Your next step is to determine your role (provider vs. deployer) and classify the risk level.
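If you are inventorying many tools, the checklist can be recorded as a simple data structure. This is an illustrative sketch for internal record-keeping, not an official tool, and all names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    """One tool's answers to the seven checklist questions above."""
    machine_based: bool
    some_autonomy: bool
    has_objectives: bool
    processes_input: bool
    performs_inference: bool
    generates_outputs: bool
    influences_environment: bool

    def is_ai_system(self) -> bool:
        # The elements are cumulative: a single "No" takes the tool out of scope.
        return all(vars(self).values())
```

Pairing each assessment with a short written rationale, especially for the inference question, gives you exactly the documented reasoning recommended above.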

What to Do Next

  1. Inventory your software tools. List every tool your organization builds, deploys, or uses that could potentially involve AI.
  2. Apply the 7-element test to each tool using the checklist above.
  3. Document your assessment. For each tool, record your reasoning — especially around the inference question.
  4. For confirmed AI systems, proceed with risk classification and compliance planning.
  5. For borderline cases, seek specialist guidance or take a precautionary approach.

If you have not started this process yet, our free risk assessment quiz takes two minutes and gives you a starting point for understanding your exposure to the EU AI Act.


This article is based on the European Commission's Guidelines on the Definition of an Artificial Intelligence System (C(2025) 883, published February 6, 2025) and Article 3(1) of Regulation (EU) 2024/1689. It is provided for informational purposes and does not constitute legal advice.
