AI Act comes into effect: work to be done for developers and users


On 1 August 2024, the AI Act officially enters into force. The Act applies throughout the entire European Union, including the Netherlands. Over the coming period, the various requirements that this law imposes on developers and users of artificial intelligence (AI) will apply step by step. The first requirements will apply from February 2025: from that moment, certain AI systems will be prohibited, and organisations that use AI must ensure that their employees are sufficiently AI literate. Developers and users of AI should start preparing for these new requirements as soon as possible.

The AI Act is the world's first comprehensive law on artificial intelligence, setting rules for responsible development and use of AI by companies, governments and other organisations.

Identify risks and take action

For developers and users of AI, it is important to quickly identify which AI systems they offer or use, and in which risk group these fall. Is it a prohibited AI system, a high-risk system or a low-risk system? After that, taking preparatory action is the top priority:

  1. Prohibited AI. Developers have to take these systems off the market, and organisations that use them have to stop doing so. The provisions on prohibited AI will apply from February 2025. Keep in mind that such systems likely already violate existing law, such as legislation in the field of equal treatment, privacy or employment. From August 2025, organisations that develop or use prohibited AI risk hefty fines under the AI Act. In the coming period, the Dutch Data Protection Authority will further clarify which systems do and do not fall under the prohibition.
  2. High-risk AI. These systems must meet requirements such as risk management, data quality, technical documentation and registration, transparency, and human oversight. In addition, a certification mark for these systems is mandatory. Users of AI systems can already ask their provider whether, and to what extent, it is preparing to ensure that the AI system meets the requirements. Governments and other entities performing public tasks are subject to additional requirements, such as carrying out a ‘fundamental rights impact assessment’. If these requirements are not met, developers are not allowed to offer these systems and organisations are not allowed to use them.
  3. AI with limited risk. Systems intended to engage with individuals or that generate content, such as deepfakes, are subject to transparency obligations. If these systems are offered or used, people must be informed about them. Developers must design their systems in such a way that providing this information is possible, and organisations that use the systems must actually inform people. Organisations that use such AI systems should inquire with their provider about the state of play in this regard.

Enhancing AI literacy 

From February 2025, organisations that use AI systems must also have sufficient knowledge about AI. The level of AI literacy of each employee must be in line with the context in which the AI systems are used and with how (groups of) people may be affected by those systems.

For example, the AI literacy obligation means that an HR employee must understand that an AI system may contain biases or ignore essential information, which may lead to an applicant being selected or rejected for the wrong reasons.

And a front-desk employee at a municipality that uses AI systems to verify the identity of citizens must be aware that these systems often do not work equally well for everyone, and that they cannot blindly follow the results produced by this type of AI system.

In the coming period, the Dutch DPA wants to enter into a dialogue with stakeholders on possibilities to ensure that AI knowledge in organisations is at a sufficiently high level.

Supervisors

There is also work to be done for the Dutch government. For example, it must quickly become clear which supervisory authorities in the Netherlands will be responsible for which parts of the supervision of the AI Act, as the Dutch Data Protection Authority (Dutch DPA) and the Dutch Authority for Digital Infrastructure (RDI) wrote in an opinion last June.

