EU AI Act risk groups

The EU AI Act takes a risk-based approach: the higher the risk to people or society as a whole, the stricter the rules your organisation will face. The Act distinguishes between several risk groups: AI systems and applications posing an unacceptable, high, or limited risk.

If you develop, sell, or deploy AI systems, it is your responsibility to assess which risk group your system falls into. Even though most of the rules of the EU AI Act do not yet apply, you are advised to take them into account now. The first part of the rules will take effect in February 2025.

Unacceptable-risk AI systems: prohibited AI

Systems that involve an unacceptable risk will be prohibited, meaning that you will not be allowed to provide or deploy them. This ban will come into force in February 2025. The systems and applications in question include those that unduly restrict people’s freedom of choice, manipulate or mislead people, or discriminate, such as systems used as part of or for:

  • social scoring based on certain social behaviour or personal traits;
  • predictive policing to assess or predict the risk that a person will commit a criminal offence;
  • creating or adding to facial recognition databases (using data scraping);
  • manipulating or misleading people; 
  • emotion recognition in the workplace and education; 
  • remote biometric identification for law enforcement purposes, although narrow exceptions to this prohibition apply;
  • biometric categorisation, where people are classified in certain sensitive categories based on biometric data.

High-risk AI systems

AI systems classed as ‘high risk’ will be subject to strict obligations. If you are unable to meet these obligations, you will not be allowed to place your system on the market or deploy it. Requirements for these systems will come into effect from August 2026. This includes systems such as those used as part of or for:

  • education or vocational training, where the system determines access or admission to educational institutions and the course of someone’s career. This includes AI systems used to mark exams;
  • employment, management of workers, and access to self-employment, because of the considerable risks posed to people’s future career opportunities and their ability to provide for themselves. Examples include an AI system that automatically selects CVs for the next round in a recruitment procedure; 
  • essential private services and essential public services. These kinds of high-risk systems may have a major impact on, for example, people’s ability to provide for themselves, such as software that determines whether or not someone is eligible for benefits or a loan; 
  • law enforcement, because the use of AI in this context may undermine people’s fundamental rights. This includes AI systems used to evaluate the reliability of evidence;
  • migration, asylum, and border control. This includes automated processing of asylum applications;
  • administration of justice and democratic processes, because the use of AI in this context poses risks to democracy and the rule of law. Examples include an AI system assisting a judicial authority in reaching rulings.

If the system you want to provide or deploy falls into the ‘high risk’ category, it will be subject to rules designed to prevent these risks or mitigate them to an acceptable level. These include rules on risk management, the quality of the data used, technical documentation and registration requirements, and rules on transparency and human oversight. Public authorities and entities providing public services will be required to conduct a ‘fundamental rights impact assessment’ for high-risk systems.

Limited-risk AI systems

AI applications involving only a limited risk will be subject to a number of ‘transparency obligations’. These tend to be systems intended to communicate with people, such as chatbots. They also include AI systems that generate content such as text and images. If you provide or deploy these systems, you will be required to let people know that they are dealing with an AI system. These transparency obligations will come into effect from August 2026.