EU AI Act

The EU AI Act is coming: the world’s first comprehensive piece of legislation on artificial intelligence. It sets out rules for the responsible development and deployment of AI by companies, public authorities, and other organisations.


The Act will be implemented in phases and will be fully in force by August 2nd, 2027. A number of AI systems are likely to be prohibited as early as February 2nd, 2025. Other AI systems will have to meet additional requirements from August 2nd, 2026. Preparation for complying with the AI Act should start now. You can find more information in the ‘quick answers’ on this page.

Why this Act? 

The EU AI Act is intended to ensure that everyone across Europe can rest assured that AI systems are secure and that fundamental rights are protected. Although many AI systems pose little risk and AI offers all kinds of benefits, there is another side to AI as well. Existing legislation, such as the General Data Protection Regulation (GDPR) and the Dutch Police Data Act (‘Wet politiegegevens’ (Wpg)), already offers protection, for example when AI systems are used to process personal data, but that protection is inadequate to address all risks associated with AI. Irresponsible use of AI can, for example, lead to discrimination, restrictions of our freedoms, and to people being misled and exploited. The EU AI Act will ensure that developers of AI systems address these risks and that supervision of these efforts is put in place.

What does the EU AI Act regulate? 

The EU AI Act classifies AI systems into risk groups. The higher the risk for citizens and society as a whole, the stricter the rules. Key points regulated by the Act include that:

  • applications that involve an unacceptable risk are prohibited;
  • applications that represent a high risk are subject to stricter rules. Examples of high-risk AI systems include AI systems intended for recruitment and selection purposes or for law enforcement. Deployers of such systems will be under an obligation to, for example, log system activity, set up adequate data management, and ensure that human oversight is possible. They must also be able to show that they comply with these requirements. The EU AI Act contains a list of high-risk systems that have to meet these and other requirements;
  • applications that represent a lower risk must meet various transparency rules. A system that generates artificial content, for example, must make it clear to citizens that the content it creates is artificial;
  • market surveillance authorities will be designated with the power to enforce compliance with the Act, oblige organisations to withdraw a product from the market, and impose fines.

If your organisation develops or deploys an AI system, it is your responsibility to check which category your system falls into. Before the requirements for high-risk AI systems come into force, guidelines will be issued to help organisations assess which risk group their systems fall into. See also: EU AI Act risk groups

To whom do the rules of the EU AI Act apply? 

The EU AI Act will govern both organisations that provide AI and organisations that deploy AI, ranging from public authorities and healthcare institutions to SMEs.

Providers must, for example, make sure that the systems they develop meet the requirements before placing them on the market or putting them into service. They will also be under an obligation to keep monitoring for risks, for example risks that undermine the protection of fundamental rights, and to take action when there is something wrong with their system. Providers will also have to see to it that high-risk systems are sufficiently transparent, so as to ensure that they are deployed correctly.

Deploying organisations will, for example, have a responsibility to use a high-risk AI system in accordance with the instructions for use issued by the provider. For these systems, they must also ensure human oversight, use relevant and sufficiently representative input data, and report incidents to the provider and the market surveillance authority. In certain cases, they must also register their use of a system in a European database.

Additionally, the EU AI Act gives anyone who comes into contact with an AI system the right to:

  • lodge a complaint with a market surveillance authority;
  • in some cases, obtain an explanation of a decision made on the basis of output from a high-risk AI system.

Specific rules for public authorities 

The EU AI Act also provides additional rules for public authorities and entities providing public services. They must always conduct a so-called ‘fundamental rights impact assessment’ for high-risk systems and, from August 2nd, 2026, register their deployment of high-risk AI systems in the European database for high-risk AI systems.

Providers of general-purpose AI models  

There will also be rules governing providers of ‘general-purpose AI models’, also referred to as ‘general-purpose AI’. These are models that form the foundation for other applications, such as the language models used in AI chatbots.

The developers of such models will be required to provide technical documentation on their models, make them ‘explainable’, and have a policy in place to protect copyright. Enforcement will largely be handled at the European level, by the European AI Office.

Quick answers

When will the EU AI Act come into force?

The EU AI Act will take effect in stages. The first rules will probably come into effect in the Netherlands on February 2nd, 2025. By August 2nd, 2027, the full Act will be in effect. See below for a run-down of the main steps in this process.

February 2025

  • AI systems that pose an unacceptable risk are prohibited.

August 2025

  • Market surveillance authorities must be designated to oversee compliance with the EU AI Act in the Netherlands. 
  • The designated market surveillance authorities will be authorised to impose fines.
  • The rules for providers of general-purpose AI models take effect.

August 2026

  • Start of high-risk AI oversight (as defined in Annex III to the EU AI Act).
  • Start of oversight on transparency requirements for certain AI systems. It must be clear to people that they are dealing with an AI system or AI-generated content.
  • Dutch regulatory sandbox launched to help AI system providers with advice in response to complex questions about rules under the EU AI Act.

August 2027

  • Start of oversight on high-risk AI in existing regulated products (Annex I to the EU AI Act). 
  • The EU AI Act is now in full force. 

August 2030

  • End of extended transition period for existing AI at public authorities. 

What can my organisation do in preparation for the EU AI Act?

There are several actions to consider at this stage: 

  • If you are a provider or a deployer of algorithms and AI, take stock of the systems used in your organisation and assess now which risk group each AI system falls into. This way, you will know what requirements your systems will have to meet.
  • Is it already clear that your AI system will be prohibited under the AI Act? You would be well advised to stop selling the system right away. If your organisation uses such a system, try to decommission it as soon as possible. Chances are that using the system already violates existing legislation.
  • For now, see if you can improve transparency about your system, for example by clarifying information about it on your website and by providing clear product information when offering the service. Systems designed to interact with individuals or to generate content, such as deepfakes, will be subject to specific transparency obligations starting August 2026.
  • If you are considering purchasing an AI system, check first whether the terms and conditions of purchase take adequate account of existing and upcoming legislation, such as the EU AI Act and the GDPR. Under the AI Act, developers are obligated to ensure that their systems meet the requirements before placing them on the market. You want to prevent a situation where you develop or implement an AI system that turns out not to comply with the law.
  • Are you a professional working at a public authority? Explore options to conduct a human rights impact assessment (HRIA) and/or to use other practical tools from the Dutch government’s Business Ethics Toolbox.
  • As a public authority, you can already register your algorithmic system in the algorithm register.
  • Some organisations have already appointed an internal supervisory officer for algorithms or an ‘algorithm officer’. The Dutch Data Protection Authority (AP) applauds organisations that are taking their responsibility in this way. 

Who will oversee compliance with the EU AI Act?

In the Netherlands, the government is working on a proposal for the national structure for supervision of the EU AI Act. Multiple supervisory authorities are expected to be designated for different components of the EU AI Act. Which supervisory authority is competent in a specific area will depend on the context in which the AI system is used or developed. The distribution of supervisory roles will be enshrined in Dutch legislation.

The extent to which the Dutch Data Protection Authority (AP) will supervise compliance with the AI Act is therefore not yet fixed. However, as coordinating AI supervisor, in collaboration with other supervisory authorities, we are already contributing to the preparations. In May, the AP, together with the Dutch Authority for Digital Infrastructure (RDI), issued an advisory report to the Dutch government on the establishment of a supervisory structure for the supervision of the AI Act. The AP’s role as coordinating AI supervisor is carried out by the Department for the Coordination of Algorithmic Oversight (DCA).

The Dutch Data Protection Authority (AP) is already the supervisory authority for personal data processing using algorithms.