Final recommendation on supervision of AI: sectoral and centrally coordinated


Collaboration, coordination and the best possible use of expertise in the areas of fundamental rights and safety: those elements are at the heart of the final recommendation 'Supervision of AI' presented today by the Dutch Authority for Digital Infrastructure (RDI) and the Dutch Data Protection Authority (Dutch DPA). The recommendation describes how the use of AI can be supervised effectively through an integrated approach.

Artificial intelligence (AI) is developing at breakneck speed and is being used everywhere on an increasingly large scale. The possibilities for application are endless, and AI offers huge opportunities for our society. At the same time, it may involve substantial risks.

The AI Act contains rules for responsible development and use of AI by businesses, governments, and other organisations. Well-organised supervision gives consumers confidence and creates clarity for organisations and the business sector. They can continue to consult the (sectoral) supervisory authorities they already know.

Sectoral approach

Because there are many different AI applications, there are also many different risks. An AI system in a toy entails different risks than, for example, an AI system for recruitment and selection. That is why the RDI and the Dutch DPA recommend aligning the supervision of AI in the various sectors and domains with existing supervision as much as possible.

Products such as machines, lifts or toys already have to comply with various European (safety) rules. Consumers can recognise this by the CE marking. If AI is used in such products, the requirements of the AI Act must be met as well. Supervision of these products can remain with the same supervisory authority. In this way, the best possible use is made of the knowledge, expertise and capacity of the sectoral supervisory authorities, and their mandates remain intact.

When organisations integrate AI applications into products and services that do not currently require a mandatory CE marking, supervision can also be aligned with existing supervisory roles. Examples include applications where AI systems make decisions about people, such as recruitment and selection, assessments in education, or risk selection by government organisations.

Collaboration is crucial

For these cross-sectoral applications, it is important that supervisory authorities collaborate on the basis of their sectoral and domain-specific expertise. For example, they should be able to share signals with one another and collaborate on appropriate interventions. Supervision of the AI Act has to take place in close collaboration between supervisory authorities in order to prevent fragmentation of supervision and to ensure that organisations know what is expected of them.

The final recommendation on supervision of the AI Act gives the RDI and the Dutch DPA coordinating roles. In that expert role, they support and advise other supervisory authorities and facilitate collaboration.

Joint commitment

This final recommendation is the result of intensive collaboration between all supervisory authorities involved, with support from the Inspection Council and the Algorithm & AI Chamber of the Digital Regulation Cooperation Platform (SDT).

Angeline van Dijk, inspector-general of the RDI, says: "This broad involvement underlines the joint commitment to a responsible and safe integration of AI in Dutch society. We look not only at today, but also at what a safe and at the same time innovative AI landscape will ask of us tomorrow."

Dutch DPA chairman Aleid Wolfsen emphasises the protective effect of the AI Act: "People can already come into contact with AI systems every day, and it is important that citizens can be confident that AI is used in a safe and fair manner and with respect for fundamental rights. By collaborating as supervisory authorities, we safeguard the development and safe use of high-risk AI systems across various sectors, based on a uniform interpretation of the AI Act throughout the European Union. As the European Data Protection Board also recently noted, within such a structure the Dutch DPA's role and knowledge enable it to make an important contribution to the supervision of products in which AI makes decisions or assessments about people."

Explanation of final recommendation on 18 November

To provide further explanation of the recommendation on supervision of the AI Act, the RDI and the Dutch DPA are organising a meeting in Utrecht on Monday 18 November from 10:00 to 12:00. At this meeting, the supervisory authorities will explain the recommendation, and there will be room to ask questions and raise points of attention.

Interested parties acting on behalf of representative organisations can register via dca@autoriteitpersoonsgegevens.nl, quoting 'Explanatory meeting about Final recommendation on supervision of AI Act'. Please note that seating capacity is limited.

Publications

Third advice on the supervisory structure for the AI Act
