Rules for working with safe AI
Do you build AI systems or models? Or do you have one built, for your own use or to sell in the European Union? Or do you deploy an existing AI system, and are you responsible for its use? Your AI system must comply with the rules set out in the European Artificial Intelligence Act (AI Act).
What is the AI Act?
The AI Act obliges providers and deployers of AI systems throughout the EU to protect the rights of individuals and companies in the development and use of AI systems. The Act sets requirements for AI systems that pose a high risk. It bans AI applications that pose unacceptable risks to people and society. The AI Act also sets rules for the transparency of AI systems, and regulates supervision and enforcement at EU and national levels.
What are prohibited AI systems?
You are not allowed to develop or use an AI system with the intent to:
- negatively influence human behaviour in order to limit individuals’ free choice
- exploit vulnerable people based on their age, disability, or situation
- carry out social scoring: rewarding or punishing people through a points system based on behaviour or personal characteristics
- calculate the risk of someone committing a crime, based on their personal characteristics
- perform untargeted scraping: filling facial recognition databases with randomly collected images, for example from surveillance or CCTV cameras or from social media
- recognise emotions in, for example, education or the workplace, unless this is done for special medical or safety reasons
- classify individuals into sensitive categories based on biometric data, such as their origin, health, or sexual orientation
- use biometric data for automatic real-time identification of individuals in public spaces for law enforcement purposes, except in situations where its use is strictly necessary
If you place a prohibited AI system on the market, whether intentionally or unintentionally, citizens or companies that suffer damage may take legal action against you. You may also be fined.
What are high-risk AI systems?
From 2 August 2026, high-risk AI systems must comply with the AI Act. This concerns AI systems that are used in the following areas:
- biometrics
- critical infrastructure
- education and vocational training
- employment, human resource management, and access to self-employment
- essential private and public services and benefits
- law enforcement
- migration, asylum, and border management
- judicial and democratic processes
From 2 August 2027, all high-risk AI systems must comply with the rules. This includes existing products in which AI is deployed as a safety component, and products that are themselves high-risk AI systems. Examples include machinery, medical equipment, and lifts.
Requirements for high-risk AI systems
Do you develop a high-risk AI system that you want to market for use in these areas? Then you must meet a number of conditions to ensure that the AI system works reliably and safely. Among other things, you must:
- have risk and quality management systems
- build in monitoring and human oversight options (see the sketch below)
- be able to provide the technical documentation
- meet the requirements for transparency and informing users
You must use CE marking to show your users that your system has an EU declaration of conformity.
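To make the monitoring and human oversight obligations more concrete, the Python sketch below shows one possible way such safeguards could be built into a decision-making AI system: every model output is logged for auditing, and borderline outputs are routed to a human reviewer instead of being decided automatically. The model scores, threshold, and field names are hypothetical illustrations, not an implementation prescribed by the AI Act.

```python
import logging
from dataclasses import dataclass
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_system_audit")

# Hypothetical decision record, kept for monitoring and auditing.
@dataclass
class Decision:
    input_id: str
    score: float     # model output, e.g. a risk score in [0, 1]
    outcome: str     # "approved", "rejected", or "needs_human_review"
    timestamp: str

REVIEW_THRESHOLD = 0.2  # illustrative: borderline scores go to a human

def decide(input_id: str, score: float) -> Decision:
    """Route borderline model outputs to a human reviewer and log every
    decision, so the system's behaviour can be monitored and audited."""
    if abs(score - 0.5) < REVIEW_THRESHOLD:
        outcome = "needs_human_review"  # human oversight: no automatic decision
    else:
        outcome = "approved" if score >= 0.5 else "rejected"
    decision = Decision(
        input_id=input_id,
        score=score,
        outcome=outcome,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    logger.info("decision=%s", decision)  # monitoring: every decision is logged
    return decision

if __name__ == "__main__":
    decide("application-001", 0.93)  # clear case: decided automatically
    decide("application-002", 0.55)  # borderline: routed to a human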
Exceptions to high-risk areas of application
In some cases, the use of AI systems in high-risk application areas may be allowed, for example if the AI system does not materially influence the outcome of decisions. You must still register the system in the EU database for high-risk AI systems.
Transparency obligations for providers
Do you offer specific AI systems that people have direct contact with, such as chatbots or generative AI? Then you must make sure your customers are aware that they are using an AI system. Images or text created by an AI system must be labelled as AI-generated in a machine-readable format, enabling automatic detection.
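As an illustration of machine-readable labelling, the Python sketch below embeds an "AI-generated" marker in a PNG file's metadata using the Pillow library, and reads it back to show how automatic detection could work. The metadata keys and values are hypothetical: the AI Act does not prescribe this format, and production systems would more likely use a content-provenance standard such as C2PA.

```python
# A minimal sketch of machine-readable labelling via PNG text metadata.
# The "ai_generated" key and the generator name are illustrative only.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_label(image: Image.Image, path: str) -> None:
    meta = PngInfo()
    meta.add_text("ai_generated", "true")           # machine-readable marker
    meta.add_text("generator", "example-model-v1")  # hypothetical model name
    image.save(path, pnginfo=meta)

def is_labelled_ai_generated(path: str) -> bool:
    # Automatic detection: read the marker back from the file's metadata.
    with Image.open(path) as img:
        return img.text.get("ai_generated") == "true"

if __name__ == "__main__":
    save_with_ai_label(Image.new("RGB", (64, 64), "white"), "output.png")
    print(is_labelled_ai_generated("output.png"))  # True
```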
Transparency obligations for deployers
Do you deploy an AI system, making you responsible for its use (gebruiksverantwoordelijke)? Then you must be transparent about this. If the AI system you use classifies individuals’ biometric data, such as fingerprints or facial features, you must inform them about how your system works. And if you use AI to create or edit content such as text or images, you must clearly indicate that the content was created using AI.
Supervision of AI systems
The European Commission enforces the rules for general-purpose AI models. National supervisory authorities monitor the use of prohibited AI systems and check whether high-risk AI systems meet the technical and transparency obligations. The Dutch supervisory bodies also offer a regulatory sandbox (in Dutch), which allows providers of AI systems to find out how they can comply with the rules of the AI Act.
Standards for AI
The European standards for AI are still under development. The most up-to-date versions can be found with CEN-CENELEC Joint Technical Committee 21 (JTC 21). These standards will serve as the basis for what is considered safe and reliable. In the Netherlands, the Dutch Standardisation Institute (Nederlandse Normalisatie-Instituut, NEN) will supervise this.
Amendments
- European Data Act for fairer access to and use of data. Effective date: 12 September 2025