EU rules on trustworthy Artificial Intelligence systems (AI Act)

Published by:
Netherlands Enterprise Agency, RVO
Effective date: 1 August 2024

What has changed?

The EU has introduced rules on the development and use of Artificial Intelligence (AI) systems. These rules are set out in the Artificial Intelligence Act (AI Act). They aim to ensure that only trustworthy AI systems enter the European common market.

The rules apply to providers and deployers of all types of AI systems: for instance, AI systems that help you make decisions, make predictions, create content, or otherwise support people.

Examples of the rules include:

  • You are not allowed to sell or use prohibited AI, such as manipulative AI or AI used to identify emotions in the workplace.
  • You are responsible for ensuring various requirements are met to eliminate or reduce risks before you sell or use high-risk AI systems. For example, you must meet risk management, monitoring, and cybersecurity requirements for AI systems that are used in recruiting, education, or critical infrastructure.
  • Special transparency obligations apply to some AI systems such as chatbots and generative AI (AI that allows you to create text and images) so users know they are dealing with an AI system or with AI-generated content.
  • You must clearly inform consumers that they are interacting with an AI system, and which risks this may involve.
  • Providers of general-purpose AI models (such as models generating images or translating text) are subject to transparency obligations. Those providers must also meet other obligations, for instance to provide information and documentation on their AI model. Additional rules apply if a general-purpose AI model is incorporated into a high-risk AI system.
  • Low-risk AI systems will not be subject to requirements. Examples include low-risk AI systems that optimise electric car charging, or AI that optimises logistics for floriculture.

Which rules apply to you or your AI system depends on:

  1. whether your application is an AI system as defined under the AI Act;
  2. the risk classification of your AI system;
  3. whether you are the provider or deployer of the AI system.

Risk classification of AI systems

The risk classes for AI systems are:

  • Prohibited AI, such as manipulative AI or an emotion recognition system at the workplace.
  • High-risk AI, such as AI systems used for biometric identification, in law enforcement, and human resources.
  • AI with transparency obligations, such as chatbots and content from generative AI.
  • Low-risk AI, such as AI that optimises electric car charging.
  • General-purpose AI models that can perform many different tasks, such as image analysis and text translation. These AI models can be integrated into various downstream systems or applications.

For whom?

  • AI providers: you are an AI provider if you develop an AI system or AI model or if you have an AI system developed that you bring to market or use yourself.
  • AI deployers: you are an AI deployer if you use an AI system under your own authority (in Dutch: gebruiksverantwoordelijke).

Do you want to know if and how the AI Act affects you? Check whether you sell or use an AI system that falls into one of the risk classes, and whether you are an AI provider or an AI deployer.

When?

The AI Act entered into effect on 1 August 2024. The following sections of the AI Act take effect on:

  • 2 February 2025: prohibited AI systems are banned from the EU market.
  • 2 August 2025: general-purpose AI models (large AI models that can be used for many applications) must comply with the AI Act. The European Commission will oversee the AI Act’s enforcement for these models.
  • 2 August 2026: all high-risk AI systems must be compliant. Providers of high-risk AI systems must have a declaration of conformity, and the AI system must bear a CE mark. AI deployers must comply with their obligations.
  • 2 August 2027: the requirements for AI built into regulated products, such as medical applications, will apply. Providers of this type of AI must have a declaration of conformity, and the AI system must bear a CE mark. AI deployers must comply with their obligations.

In 2025, national rules implementing parts of the AI Act are expected to supplement the EU rules, for example designating which authority will be responsible for supervising the AI Act. From then on, this competent authority will supervise AI providers' and AI deployers' compliance with the AI Act.

Do you think government rules are unclear?

Do the rules create unnecessary administrative burdens? Or do you know how the rules could make doing business easier? You can report this (anonymously) to the Regulatory Reporting Centre (Meldpunt regelgeving).

Questions relating to this article?

Please contact the Netherlands Enterprise Agency, RVO