AI System Risk Classification Explained: From Minimal to Unacceptable
A deep dive into the four risk levels of the EU AI Act. Learn how to determine which category your AI system falls into and what obligations apply.
The risk-based approach explained
Rather than regulating all AI systems equally, the EU AI Act (Regulation (EU) 2024/1689) assigns obligations proportional to the risk that a given AI system poses to health, safety, and fundamental rights. This is a deliberate policy choice: it avoids stifling innovation for low-risk applications while imposing strict safeguards where the potential for harm is greatest.
The framework defines four tiers — unacceptable, high, limited, and minimal risk — each with a distinct set of obligations ranging from an outright ban to no additional legal requirements at all.
The classification of an AI system depends on its intended purpose and the context of use, not on the underlying technology. The same machine learning model can fall into different categories depending on how it is deployed: an emotion recognition model, for example, is prohibited in the workplace, high-risk in many other contexts under Annex III, and permitted where it serves a safety purpose such as monitoring pilot fatigue.
Unacceptable risk — Prohibited AI practices
Article 5 of the EU AI Act lists AI practices that are considered to pose an unacceptable risk and are therefore prohibited. The ban on these practices became enforceable on February 2, 2025.
The following AI practices are banned:
- Social scoring — AI systems that evaluate or classify natural persons or groups based on their social behavior or personal characteristics, where the resulting score leads to detrimental or unfavorable treatment that is unrelated to the context in which the data was generated or that is disproportionate to the behavior. Unlike earlier drafts, the final Act is not limited to scoring by or on behalf of public authorities; private-sector social scoring is banned as well.
- Real-time remote biometric identification in publicly accessible spaces for law enforcement — with strictly limited exceptions for targeted search for specific victims of crime, prevention of a specific and imminent threat to life, and localization of suspects of serious criminal offenses.
- Subliminal, manipulative, or deceptive techniques — AI systems that deploy subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, with the objective or effect of materially distorting behavior, causing or likely to cause significant harm.
- Exploitation of vulnerabilities — targeting specific groups due to age, disability, or social or economic situation in ways likely to cause significant harm.
- Untargeted scraping of facial images — from the internet or CCTV footage to create or expand facial recognition databases.
- Emotion recognition in the workplace and educational institutions — except where used for medical or safety reasons (e.g., monitoring fatigue levels of pilots).
- Biometric categorization to infer sensitive attributes — using biometric data to categorize persons by race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation.
- Individual predictive policing — AI systems that make risk assessments of natural persons to predict criminal offenses based solely on profiling or personality traits.
If your AI system falls into any of these categories, it must be discontinued. There is no grace period — the prohibition is already in effect.
High-risk AI systems
High-risk AI systems are subject to the most extensive obligations in the EU AI Act. Article 6 defines two pathways for a system to be classified as high-risk.
Pathway 1: Annex I — Product safety legislation
AI systems that are safety components of, or are themselves, products already regulated under existing EU harmonization legislation listed in Annex I. This includes medical devices (Regulation (EU) 2017/745), in-vitro diagnostics, civil aviation, motor vehicles, machinery, toys, lifts, equipment for explosive atmospheres, radio equipment, pressure equipment, and more. These systems follow the existing conformity assessment procedures of their sector, with AI Act requirements integrated. The compliance deadline is August 2, 2027.
Pathway 2: Annex III — Standalone high-risk use cases
AI systems used in the following areas, as listed in Annex III, must comply by August 2, 2026:
- Biometrics — Remote biometric identification systems (other than the prohibited real-time law-enforcement use in publicly accessible spaces); biometric categorization based on sensitive or protected attributes; emotion recognition systems.
- Critical infrastructure — AI used as safety components in the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating, and electricity.
- Education and vocational training — AI systems used to determine access to or admission to educational institutions; evaluate learning outcomes; assess the appropriate level of education; or monitor and detect prohibited behavior during tests.
- Employment, workers management, and access to self-employment — AI used for recruitment, screening, filtering, or evaluating candidates; making decisions on promotion, termination, task allocation, or monitoring and evaluation of performance and behavior.
- Access to essential private and public services — AI used to evaluate eligibility for public assistance benefits and services; creditworthiness assessment; risk assessment and pricing in life and health insurance; evaluation and classification of emergency calls; or prioritization of emergency first response services.
- Law enforcement — AI used as polygraphs or to detect emotional state; to assess the reliability of evidence; to predict occurrence or reoccurrence of criminal offenses based on profiling; for profiling during criminal investigations; or for crime analytics regarding natural persons.
- Migration, asylum, and border control management — AI used as polygraphs or to detect emotional state; to assess risks posed by persons entering EU territory; to assist examination of applications for asylum, visa, or residence permits; or for identification purposes.
- Administration of justice and democratic processes — AI used to assist judicial authorities in researching and interpreting facts and law, and in applying the law. Also includes AI intended to influence the outcome of an election or referendum.
The Article 6(3) exception
An AI system listed in Annex III is not considered high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights — including by not materially influencing the outcome of decision-making. This exception does not apply if the system performs profiling of natural persons. Providers who determine their system is not high-risk under this exception must document the assessment and make it available to authorities upon request (Article 6(4)).
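The Act does not prescribe how this assessment must be documented. As a minimal sketch of what a structured record might look like — every field name here is our own hypothetical choice, not a term from the Regulation — consider:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Article6Assessment:
    """Illustrative record of an Article 6(3) 'not high-risk' determination.

    The AI Act does not prescribe a format; it only requires that the
    assessment be documented and made available to national competent
    authorities on request (Article 6(4)). All field names are hypothetical.
    """
    system_name: str
    annex_iii_area: str         # e.g. "Education and vocational training"
    performs_profiling: bool    # if True, the exception can never apply
    influences_decisions: bool  # does the output materially influence outcomes?
    rationale: str              # why there is no significant risk to health,
                                # safety, or fundamental rights
    assessed_on: date
    assessor: str

    def exception_available(self) -> bool:
        # Article 6(3): the exception does not apply to systems that
        # profile natural persons, and presupposes that the system does
        # not materially influence decision-making outcomes.
        return not self.performs_profiling and not self.influences_decisions
```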
Limited risk — Transparency obligations
Article 50 of the EU AI Act defines transparency obligations for certain AI systems that interact with people or generate content. These are sometimes called "limited risk" systems because their primary obligation is to ensure transparency, rather than meeting the full set of high-risk requirements.
Key transparency obligations include:
- AI-powered chatbots and conversational systems — Providers must ensure that natural persons are informed they are interacting with an AI system, unless this is obvious from the circumstances and context of use.
- Deepfakes and AI-generated content — Providers of AI systems that generate synthetic audio, video, image, or text content must ensure that the output is marked in a machine-readable format and is detectable as artificially generated or manipulated (see the marking sketch below).
- Emotion recognition systems — Deployers must inform natural persons exposed to the system about its operation and process personal data in accordance with the GDPR.
- Biometric categorization systems — Deployers must inform natural persons exposed to such systems about their operation.
These transparency requirements complement rather than replace existing obligations under the GDPR, the ePrivacy Directive, and other EU law. Limited-risk systems do not need to undergo conformity assessments or be registered in the EU database.
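Article 50(2) requires machine-readable marking but does not mandate a specific standard; industry initiatives such as C2PA content credentials are one emerging option. As a minimal sketch, assuming a PNG output and using Pillow's text-chunk metadata — the key names below are illustrative, not any official scheme:

```python
# Minimal sketch of machine-readable marking for a generated image using
# Pillow's PNG text metadata. The metadata keys are illustrative only;
# Article 50(2) requires a machine-readable, detectable mark but does not
# prescribe a format.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(in_path: str, out_path: str, model_id: str) -> None:
    meta = PngInfo()
    meta.add_text("ai_generated", "true")  # hypothetical key
    meta.add_text("generator", model_id)   # e.g. "acme-diffusion-v2"
    with Image.open(in_path) as img:
        img.save(out_path, pnginfo=meta)

def is_marked(path: str) -> bool:
    with Image.open(path) as img:
        # PNG text chunks are exposed as a dict on PngImageFile
        return img.text.get("ai_generated") == "true"
```

Note that metadata-only marks are fragile — re-encoding or screenshotting can strip them — so production systems typically combine them with more robust techniques such as watermarking.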
Minimal risk — Voluntary codes of conduct
The majority of AI systems currently in use fall into the minimal risk category. These are AI systems that do not fall into any of the categories above — they are not prohibited, not classified as high-risk, and not subject to specific transparency obligations.
Examples include spam filters, AI-enabled video games, inventory management systems, AI-powered search engines used for general-purpose search, recommendation algorithms for entertainment content, and manufacturing optimization tools.
These systems can be developed and deployed in the EU without additional obligations under the AI Act. However, Article 95 encourages the development of voluntary codes of conduct that apply some of the principles from the high-risk requirements, including environmental sustainability considerations, AI literacy for all stakeholders, inclusive and diverse design teams, voluntary transparency measures, and accessibility for persons with disabilities.
How to classify your system step by step
Follow this decision tree to determine the risk classification of your AI system; a code sketch of the same logic follows the steps:
Step 1: Check against prohibited practices (Article 5). Does your AI system perform any of the practices listed under Article 5? If yes: the system is prohibited and must be discontinued.
Step 2: Check Annex I product safety. Is your AI system a safety component of, or itself, a product covered by existing EU harmonization legislation listed in Annex I? If yes: the system is high-risk under Annex I. Deadline: August 2027.
Step 3: Check Annex III use cases. Is your AI system used in one of the eight areas listed in Annex III? If yes: the system is likely high-risk under Annex III — unless the Article 6(3) exception applies. Deadline: August 2026.
Step 4: Check transparency obligations (Article 50). Does your AI system interact directly with natural persons (chatbot), generate synthetic content, or perform emotion recognition or biometric categorization? If yes: the system has limited risk / transparency obligations.
Step 5: Minimal risk. If your AI system does not fall into any of the above categories, it is classified as minimal risk. No additional obligations apply under the AI Act, though voluntary codes of conduct are encouraged.
An AI system can fall into multiple categories. For example, a chatbot (limited risk/transparency) could also be used for employment screening (high-risk). In such cases, the highest applicable risk classification determines the obligations. Document your classification reasoning thoroughly — providers who determine that their Annex III system is not high-risk must make this assessment available to national competent authorities upon request (Article 6(4)).
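To make the decision tree concrete, here is a minimal Python sketch of the same logic. The boolean parameters are our own simplification: in practice, answering each question requires a careful legal reading of Article 5, Annexes I and III, and Article 50, not a single flag.

```python
# Sketch of the classification decision tree described above. The boolean
# inputs correspond to Steps 1-4; each answer requires legal analysis in
# practice, not a simple flag.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Article 5)"
    HIGH = "high-risk (Annex I or Annex III)"
    LIMITED = "limited risk / transparency (Article 50)"
    MINIMAL = "minimal risk"

def classify(
    prohibited_practice: bool,       # Step 1: any Article 5 practice?
    annex_i_safety_component: bool,  # Step 2: Annex I product or component?
    annex_iii_use_case: bool,        # Step 3: one of the eight Annex III areas?
    article_6_3_exception: bool,     # documented Article 6(3) carve-out
    transparency_triggered: bool,    # Step 4: chatbot, synthetic content,
                                     # emotion recognition, biometric categorization
) -> RiskTier:
    if prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if annex_i_safety_component:
        return RiskTier.HIGH
    if annex_iii_use_case and not article_6_3_exception:
        return RiskTier.HIGH
    if transparency_triggered:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Example: an employment-screening chatbot (Annex III area, no exception)
# classifies as HIGH even though it also triggers transparency duties.
assert classify(False, False, True, False, True) is RiskTier.HIGH
```

The function returns only the dominant tier, mirroring the rule that the highest applicable classification governs; remember that a high-risk system can still carry Article 50 transparency duties in parallel.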
Ready to get compliant?
complixo helps you classify, document, and track EU AI Act compliance in minutes — not months.
Start for free