The Complete EU AI Act Compliance Guide for 2026
Everything you need to know about EU AI Act compliance in 2026: deadlines, requirements, risk categories, and how to prepare your organization.
What is the EU AI Act?
The EU AI Act — officially Regulation (EU) 2024/1689 — is a regulation of the European Parliament and of the Council, published in the Official Journal of the European Union on July 12, 2024. It entered into force on August 1, 2024, with different provisions becoming applicable on a staggered timeline.
It establishes a harmonized legal framework for the development, placing on the market, putting into service, and use of AI systems within the European Union. The regulation takes a risk-based approach: the higher the risk an AI system poses to health, safety, or fundamental rights, the stricter the obligations.
The EU AI Act applies not only to organizations based in the EU, but also to any organization worldwide that places AI systems on the EU market or whose AI system's output is used within the EU. This extraterritorial reach makes it relevant to virtually any company deploying AI that touches European users.
Importantly, the regulation complements existing EU legislation, including the GDPR, the General Product Safety Regulation, and sector-specific rules. It does not replace these frameworks but adds AI-specific requirements on top.
Key deadlines you need to know
The EU AI Act uses a phased implementation timeline. The most critical deadlines are:
February 2, 2025 — Prohibited AI practices and AI literacy. The ban on unacceptable-risk AI systems (Article 5) becomes enforceable. Organizations must also ensure sufficient AI literacy for staff involved with AI systems (Article 4). This deadline has already passed.
August 2, 2025 — GPAI provisions and governance. Obligations for providers of general-purpose AI (GPAI) models take effect (Chapter V). The AI Office governance structure and rules on notified bodies also apply from this date. This deadline has also passed.
August 2, 2026 — High-risk obligations (Annex III). The full set of obligations for high-risk AI systems listed in Annex III becomes applicable. This includes requirements for risk management, data governance, technical documentation, transparency, human oversight, accuracy, robustness, cybersecurity, conformity assessments, registration in the EU database, and post-market monitoring. This is the most impactful deadline for most organizations.
August 2, 2027 — Annex I high-risk (product safety). Obligations for high-risk AI systems embedded in products already covered by existing EU product safety legislation listed in Annex I (e.g., medical devices, machinery, toys, aviation). These follow the existing sectoral conformity assessment procedures.
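To make the staggered timeline easier to operationalize, here is a minimal Python sketch that checks which obligation sets already apply on a given date. The dates come from the list above; the labels and the function itself are our own shorthand, not anything defined by the regulation:

```python
from datetime import date

# Milestones from the phased timeline above; labels are our own shorthand.
MILESTONES = [
    (date(2025, 2, 2), "Prohibited practices ban + AI literacy (Articles 4-5)"),
    (date(2025, 8, 2), "GPAI model obligations + governance (Chapter V)"),
    (date(2026, 8, 2), "High-risk obligations for Annex III systems"),
    (date(2027, 8, 2), "High-risk obligations for Annex I embedded systems"),
]

def applicable_obligations(on: date) -> list[str]:
    """Return the obligation sets already applicable on the given date."""
    return [label for starts, label in MILESTONES if on >= starts]

# Example: what applies on the Annex III deadline itself?
for label in applicable_obligations(date(2026, 8, 2)):
    print(label)
```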
The four risk categories
The EU AI Act classifies AI systems into four risk tiers. Each tier carries different obligations.
Unacceptable risk (Prohibited)
AI systems considered a clear threat to fundamental rights are banned outright under Article 5. Prohibited practices include:
- Social scoring (by public or private actors)
- Real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions)
- Subliminal or manipulative techniques that distort behavior
- Exploitation of vulnerabilities of specific groups
- Untargeted scraping of facial images to build recognition databases
- Emotion recognition in the workplace and educational institutions (except for medical or safety reasons)
- Biometric categorization to infer sensitive attributes
- Individual predictive policing based solely on profiling
High risk
AI systems that pose significant risks to health, safety, or fundamental rights. These are defined in two ways under Article 6:
- Annex I: AI systems that are safety components of, or are themselves, products covered by existing EU harmonization legislation (medical devices, machinery, toys, etc.)
- Annex III: AI systems used in specific high-risk areas including biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration and border control, and administration of justice
High-risk systems face the most extensive obligations (Articles 9-15, 17, and related provisions).
Limited risk
AI systems with specific transparency risks. Under Article 50, these systems must meet transparency obligations: chatbots must disclose they are AI; deepfakes and AI-generated content must be labeled; emotion recognition systems must inform users they are being analyzed. The focus is on ensuring people know when they are interacting with AI or AI-generated content.
Minimal risk
The vast majority of AI systems fall into this category (e.g., spam filters, AI-powered video games, inventory management). These can be developed and used with no additional legal obligations under the AI Act, though providers are encouraged to voluntarily adopt codes of conduct (Article 95).
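As a rough illustration of how the four tiers map onto concrete systems, here is a short Python sketch. The example classifications are illustrative only; classifying a real system requires analyzing Article 5, Article 6, and the annexes against its actual intended purpose:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Article 5)"
    HIGH = "high risk (Annex I or Annex III)"
    LIMITED = "limited risk (Article 50 transparency)"
    MINIMAL = "minimal risk (voluntary codes of conduct, Article 95)"

# Illustrative examples only, not a legal determination.
EXAMPLES = {
    "CV-screening tool used in hiring": RiskTier.HIGH,    # Annex III: employment
    "Customer-support chatbot": RiskTier.LIMITED,         # must disclose it is AI
    "Emotion recognition of employees": RiskTier.UNACCEPTABLE,
    "Spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.value}")
```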
Who does the EU AI Act apply to?
The regulation applies to several categories of actors in the AI value chain, regardless of where they are established, as long as the AI system is placed on the EU market or its output is used in the EU:
- Providers — organizations that develop an AI system (or have it developed) and place it on the market or put it into service under their own name or trademark. They bear the primary compliance burden, including conformity assessments and registration.
- Deployers — organizations that use an AI system under their authority, except where the AI system is used in the course of a personal, non-professional activity. Deployers have obligations around human oversight, monitoring, input data quality, and informing affected individuals.
- Importers — entities that place an AI system from a third country on the EU market. They must ensure the provider has carried out the conformity assessment and that the system bears a CE marking.
- Distributors — entities in the supply chain (other than the provider or importer) that make an AI system available on the EU market.
- Authorized representatives — entities established in the EU, mandated by a non-EU provider to act on their behalf for AI Act compliance.
Notably, the AI Act also applies to providers and deployers located in third countries if the output produced by their AI system is used within the EU.
Penalties for non-compliance
The EU AI Act introduces significant financial penalties, structured in tiers depending on the severity of the violation (Article 99):
- Up to EUR 35 million or 7% of global annual turnover — for violations related to prohibited AI practices (Article 5).
- Up to EUR 15 million or 3% of global annual turnover — for non-compliance with other obligations under the regulation, including high-risk system requirements.
- Up to EUR 7.5 million or 1% of global annual turnover — for supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities.
The "whichever is higher" principle applies — meaning the fine is the larger of the fixed amount or the percentage of turnover. For SMEs and startups, fines are capped proportionally. Member States are responsible for laying down rules on penalties and enforcement.
How to prepare your organization
With the August 2026 deadline for high-risk obligations approaching, here is a practical roadmap for compliance:
1. Inventory your AI systems. Create a comprehensive register of all AI systems your organization develops, deploys, or uses. Include the purpose, data inputs, decision outputs, and affected individuals for each system (a minimal sketch of such a record follows this list).
2. Classify each system by risk level. Determine whether each AI system falls under unacceptable risk, high-risk (Annex I or III), limited risk, or minimal risk. Document the reasoning behind each classification.
3. Verify prohibited practices. Confirm that none of your AI systems engages in a practice prohibited under Article 5. Any system that does must be discontinued immediately: the ban has been enforceable since February 2, 2025.
4. Implement high-risk obligations. For each high-risk system, address the requirements of Articles 9-15: risk management, data governance, technical documentation, logging, transparency, human oversight, and accuracy/robustness/cybersecurity.
5. Establish a quality management system. Article 17 requires providers of high-risk AI systems to put in place a quality management system, including documented policies and procedures covering design, development, testing, and post-market monitoring.
6. Register in the EU database. Providers must register high-risk AI systems in the EU database (Article 49) before placing them on the market or putting them into service. Deployers that are public authorities, or act on their behalf, must also register their use.
7. Ensure AI literacy. Article 4 requires that providers and deployers ensure their staff and other persons dealing with AI systems on their behalf have a sufficient level of AI literacy. This obligation has been in effect since February 2, 2025.
8. Plan for conformity assessment. High-risk AI systems require a conformity assessment (Article 43) before being placed on the market. For most Annex III systems, this can be done via internal control (Annex VI), but some require third-party assessment by a notified body.
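For steps 1 and 2, the sketch below shows what a single inventory record could look like in code. The field names are our own suggestions for capturing what the roadmap asks for, not terms defined by the Act:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in an AI system inventory (roadmap steps 1 and 2)."""
    name: str
    purpose: str
    data_inputs: list[str]
    decision_outputs: list[str]
    affected_individuals: str
    risk_tier: str                 # "unacceptable" | "high" | "limited" | "minimal"
    classification_rationale: str  # documented reasoning behind the tier (step 2)
    role: str                      # "provider" | "deployer" | "importer" | ...

inventory = [
    AISystemRecord(
        name="resume-ranker",
        purpose="Rank incoming job applications",
        data_inputs=["CVs", "application forms"],
        decision_outputs=["shortlist score"],
        affected_individuals="job applicants",
        risk_tier="high",
        classification_rationale="Annex III: employment and worker management",
        role="deployer",
    ),
]

# The high-risk subset drives most of the remaining steps (4 through 8).
high_risk = [r for r in inventory if r.risk_tier == "high"]
print([r.name for r in high_risk])
```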
Looking ahead
The EU AI Act represents a fundamental shift in how AI systems are governed in Europe. While the compliance burden is significant — particularly for high-risk systems — organizations that start preparing now will be well-positioned. The regulation rewards proactive, well-documented compliance processes. Waiting until the last minute is not a viable strategy: building a risk management system, preparing technical documentation, and establishing quality management processes all take time.
Organizations that approach this systematically — inventorying their systems, classifying risks, and methodically addressing obligations — will find that compliance, while demanding, is achievable. The key is to start now.
Ready to get compliant?
complixo helps you classify, document, and track EU AI Act compliance in minutes — not months.
Start for free