Chapter VI — Measures in Support of Innovation

Article 59
Further processing of personal data for developing certain AI systems in the public interest in the AI regulatory sandbox
1. In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met:

(a) AI systems shall be developed for safeguarding substantial public interest by a public authority or another natural or legal person and in one or more of the following areas:

(i) public safety and public health, including disease detection, diagnosis, prevention, control and treatment and improvement of health care systems;

(ii) a high level of protection and improvement of the quality of the environment, protection of biodiversity, protection against pollution, green transition measures, climate change mitigation and adaptation measures;

(iii) energy sustainability;

(iv) safety and resilience of transport systems and mobility, critical infrastructure and networks;

(v) efficiency and quality of public administration and public services;

(b) the data processed are necessary for complying with one or more of the requirements referred to in Chapter III, Section 2 where those requirements cannot effectively be fulfilled by processing anonymised, synthetic or other non-personal data;

(c) there are effective monitoring mechanisms to identify if any high risks to the rights and freedoms of the data subjects, as referred to in Article 35 of Regulation (EU) 2016/679 and in Article 39 of Regulation (EU) 2018/1725, may arise during the sandbox experimentation, as well as response mechanisms to promptly mitigate those risks and, where necessary, stop the processing;

(d) any personal data to be processed in the context of the sandbox are in a functionally separate, isolated and protected data processing environment under the control of the prospective provider and only authorised persons have access to those data;

(e) providers can further share the originally collected data only in accordance with Union data protection law; any personal data created in the sandbox cannot be shared outside the sandbox;

(f) any processing of personal data in the context of the sandbox neither leads to measures or decisions affecting the data subjects nor does it affect the application of their rights laid down in Union law on the protection of personal data;

(g) any personal data processed in the context of the sandbox are protected by means of appropriate technical and organisational measures and deleted once the participation in the sandbox has terminated or the personal data has reached the end of its retention period;

(h) the logs of the processing of personal data in the context of the sandbox are kept for the duration of the participation in the sandbox, unless provided otherwise by Union or national law;

(i) a complete and detailed description of the process and rationale behind the training, testing and validation of the AI system is kept together with the testing results as part of the technical documentation referred to in Annex IV;
(j) a short summary of the AI project developed in the sandbox, its objectives and expected results is published on the website of the competent authorities; this obligation shall not cover sensitive operational data in relation to the activities of law enforcement, border control, immigration or asylum authorities.

ELI: http://data.europa.eu/eli/reg/2024/1689/oj (OJ L, 12.7.2024)
2. For the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including safeguarding against and preventing threats to public security, under the control and responsibility of law enforcement authorities, the processing of personal data in AI regulatory sandboxes shall be based on a specific Union or national law and subject to the same cumulative conditions as referred to in paragraph 1.

3. Paragraph 1 is without prejudice to Union or national law which excludes processing of personal data for other purposes than those explicitly mentioned in that law, as well as to Union or national law laying down the basis for the processing of personal data which is necessary for the purpose of developing, testing or training of innovative AI systems or any other legal basis, in compliance with Union law on the protection of personal data.
