Deadlines · February 12, 2026 · Last reviewed: February 12, 2026 · 8 min read

EU AI Act Deadlines 2025-2027: What You Need to Know

A complete timeline of EU AI Act implementation deadlines from 2025 to 2027. Know exactly what's required, when, and how to prepare for each phase.

By complixo Team

The EU AI Act implementation timeline

The EU AI Act (Regulation (EU) 2024/1689) entered into force on August 1, 2024. Unlike many EU regulations that have a single application date, the AI Act uses a phased implementation approach. Different obligations become enforceable at different times, giving organizations time to prepare for increasingly complex requirements.

Understanding this timeline is critical for compliance planning. Missing a deadline means you are already in violation — and with fines reaching up to €35 million or 7% of global annual turnover, whichever is higher, the stakes are significant.

Here is every key deadline, what it requires, and how to prepare.

Phase 1: February 2, 2025 — Prohibited practices & AI literacy

Status: Already in effect.

This was the first major compliance deadline. Two categories of obligations became enforceable:

Prohibited AI practices (Article 5)

All AI practices classified as "unacceptable risk" are now banned. Organizations must have discontinued any AI systems that:

  • Perform social scoring by or on behalf of public authorities
  • Use real-time remote biometric identification in publicly accessible spaces (with narrow law enforcement exceptions)
  • Deploy subliminal, manipulative, or deceptive techniques causing significant harm
  • Exploit vulnerabilities of specific groups (age, disability, social/economic situation)
  • Perform untargeted scraping of facial images for recognition databases
  • Conduct emotion recognition in workplaces or educational institutions (except medical/safety purposes)
  • Use biometric categorization to infer sensitive attributes (race, political opinions, etc.)
  • Engage in individual predictive policing based solely on profiling

AI literacy (Article 4)

All providers and deployers must ensure that personnel dealing with AI systems have sufficient AI literacy. This applies regardless of the AI system's risk classification.

What you should have done

  • Inventoried all AI systems for prohibited practices and discontinued any that qualify
  • Implemented AI literacy training programs appropriate to each role
  • Documented your prohibited practices review and training programs

Phase 2: August 2, 2025 — GPAI and governance

Status: Already in effect.

General-purpose AI (GPAI) obligations (Chapter V)

Providers of general-purpose AI models must now comply with:

  • Technical documentation: Maintain and make available documentation of the model's training process, including data sources, training methodology, and evaluation results
  • Copyright compliance: Implement policies to comply with the EU Copyright Directive, including providing a sufficiently detailed summary of training content
  • Downstream information: Provide adequate information to downstream providers who integrate the GPAI model into their own AI systems
  • Content summary: Publish a sufficiently detailed summary of the content used for training

GPAI with systemic risk

GPAI models classified as having systemic risk (trained with more than 10^25 FLOPs of compute, or designated by the Commission) face additional obligations:

  • Perform model evaluations including adversarial testing
  • Assess and mitigate possible systemic risks
  • Track, document, and report serious incidents to the AI Office
  • Ensure adequate cybersecurity protection
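The systemic-risk presumption above is essentially a numeric test. The sketch below illustrates it; the threshold value comes from the Act, but the function name, signature, and structure are our own illustration, not an official tool:

```python
# Illustrative sketch of the GPAI systemic-risk presumption.
# The 10^25 FLOP threshold is stated in the Act; everything else
# (names, structure) is a hypothetical example.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # cumulative training compute

def presumed_systemic_risk(training_flop: float,
                           commission_designated: bool = False) -> bool:
    """A GPAI model is presumed to have systemic risk if its cumulative
    training compute exceeds 10^25 FLOP, or if the Commission designates it."""
    return commission_designated or training_flop > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(5e25))   # above the threshold -> True
print(presumed_systemic_risk(1e24))   # below the threshold -> False
```

Note that a model below the compute threshold can still be caught by a Commission designation, which is why the function takes a second flag.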

Governance structures

The EU governance framework for AI is now operational:

  • The AI Office within the European Commission oversees GPAI compliance and coordinates enforcement
  • The European Artificial Intelligence Board provides advice and ensures consistent application
  • National competent authorities must be designated by Member States
  • The advisory forum for stakeholder input is established

What you should have done

  • If you provide GPAI models: completed technical documentation, copyright compliance measures, and downstream information obligations
  • Assessed whether your GPAI model meets the systemic risk threshold
  • Identified which national competent authority is relevant for your operations

Phase 3: August 2, 2026 — High-risk obligations (Annex III)

Status: less than six months away. This is the most impactful deadline for most organizations.

This is when the full set of obligations for high-risk AI systems listed in Annex III becomes enforceable. These are AI systems used in specific high-risk areas — not those embedded in already-regulated products (which fall under Annex I with a later deadline).

Annex III high-risk areas

  • Biometrics (remote identification, categorization, emotion recognition)
  • Critical infrastructure (digital infrastructure, traffic, water, gas, heating, electricity)
  • Education and vocational training (admissions, evaluations, behavior monitoring during tests)
  • Employment (recruitment, screening, evaluations, promotion, termination, task allocation)
  • Essential services (public benefits eligibility, creditworthiness, insurance risk, emergency calls)
  • Law enforcement (polygraphs, evidence reliability, crime prediction, profiling, crime analytics)
  • Migration and border control (risk assessment, application examination, identification)
  • Administration of justice and democratic processes (legal research, fact interpretation, influencing elections)

Obligations that apply

For each high-risk AI system, providers must have implemented:

  • Risk management system (Article 9): Continuous, documented process for identifying and mitigating risks
  • Data governance (Article 10): Quality criteria for training, validation, and testing data
  • Technical documentation (Article 11): Comprehensive documentation per Annex IV
  • Record-keeping (Article 12): Automatic event logging capabilities
  • Transparency (Article 13): Clear instructions for deployers
  • Human oversight (Article 14): Mechanisms for effective human control
  • Accuracy, robustness, cybersecurity (Article 15): Appropriate performance levels
  • Quality management system (Article 17): Documented policies and procedures
  • Conformity assessment (Article 43): Self-assessment or third-party assessment completed
  • EU Declaration of Conformity (Article 47): Formal compliance declaration
  • CE marking (Article 48): Marking affixed to the system
  • EU database registration (Articles 49, 71): System registered before market placement

Deployers must:

  • Ensure human oversight per provider instructions (Article 26)
  • Monitor system operation and report malfunctions
  • Conduct a Fundamental Rights Impact Assessment where required (Article 27)
  • Keep logs for at least 6 months (Article 26(6))
  • Inform affected individuals about AI use (Article 26(11))
  • Register use in the EU database for applicable categories

What you should be doing now

  • Complete your AI system inventory if not already done — identify every system that falls under Annex III
  • Finalize risk classification for each system, documenting Article 6(3) exceptions where applicable
  • Implement risk management systems — these take time to build properly
  • Prepare technical documentation — Annex IV is extensive
  • Build logging capabilities into your systems
  • Establish human oversight procedures
  • Begin conformity assessment preparation
  • Conduct FRIAs where required
  • Plan EU database registration

Six months may seem like adequate time, but building a proper quality management system, preparing technical documentation, and conducting conformity assessments are substantial undertakings. Start now if you have not already.

Phase 4: August 2, 2027 — Annex I high-risk (product safety)

This is the final major deadline. Obligations take effect for high-risk AI systems that are safety components of, or are themselves, products covered by existing EU harmonization legislation listed in Annex I.

Annex I product categories

These include AI systems embedded in:

  • Machinery (Regulation (EU) 2023/1230)
  • Toys (Directive 2009/48/EC)
  • Recreational craft (Directive 2013/53/EU)
  • Lifts (Directive 2014/33/EU)
  • Equipment for explosive atmospheres (Directive 2014/34/EU)
  • Radio equipment (Directive 2014/53/EU)
  • Pressure equipment (Directive 2014/68/EU)
  • Cableway installations (Regulation (EU) 2016/424)
  • Personal protective equipment (Regulation (EU) 2016/425)
  • Gas appliances (Regulation (EU) 2016/426)
  • Medical devices (Regulation (EU) 2017/745)
  • In-vitro diagnostics (Regulation (EU) 2017/746)
  • Civil aviation (Regulation (EU) 2018/1139)
  • Motor vehicles (Regulation (EU) 2019/2144)
  • Agricultural and forestry vehicles (Regulation (EU) 167/2013)
  • Marine equipment (Directive 2014/90/EU)
  • Rail (Directive (EU) 2016/797)

Key difference from Annex III

For Annex I systems, the conformity assessment follows the existing sectoral procedures already established under the relevant product safety legislation, with AI Act requirements integrated into those procedures. This means the process may involve existing notified bodies already familiar with the product category.

What you should be doing now

  • Identify any AI systems embedded in products covered by Annex I legislation
  • Begin coordinating with relevant notified bodies in your product sector
  • Integrate AI Act requirements into your existing product safety documentation

Additional dates and obligations

Codes of practice for GPAI (May 2, 2025)

The AI Office published draft codes of practice for GPAI providers. While voluntary, adherence to these codes creates a presumption of compliance with GPAI obligations.

National competent authority designation (August 2, 2025)

Member States were required to designate their national competent authorities by this date. These authorities are responsible for enforcement at the national level.

Post-market monitoring (ongoing from August 2, 2026)

Providers of high-risk AI systems must establish post-market monitoring systems (Article 72) and report serious incidents (Article 73) from the moment their systems are placed on the market after the relevant compliance date.

Penalties applicable from August 2, 2025

While the full set of obligations phases in gradually, the enforcement and penalty provisions (Article 99) have been applicable since August 2, 2025. This means fines can already be imposed for violations of provisions that are currently in effect (prohibited practices and AI literacy).

Summary timeline

  • Feb 2, 2025: Prohibited AI practices banned + AI literacy required
  • Aug 2, 2025: GPAI obligations + governance + penalties enforceable
  • Aug 2, 2026: Full high-risk obligations for Annex III systems
  • Aug 2, 2027: Full obligations for Annex I product-embedded AI systems
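The lead time remaining for each phase is plain date arithmetic. A short sketch, using the dates from the timeline above and this article's publication date as "today":

```python
from datetime import date

# Key application dates from the AI Act's phased timeline.
DEADLINES = {
    "Prohibited practices + AI literacy": date(2025, 2, 2),
    "GPAI + governance + penalties": date(2025, 8, 2),
    "Annex III high-risk obligations": date(2026, 8, 2),
    "Annex I product-embedded AI": date(2027, 8, 2),
}

def days_remaining(deadline: date, today: date) -> int:
    """Days until the deadline; negative means it has already passed."""
    return (deadline - today).days

today = date(2026, 2, 12)  # this article's publication date
for phase, deadline in DEADLINES.items():
    print(f"{phase}: {days_remaining(deadline, today)} days")
```

Running this shows the first two phases as negative (already in effect) and the Annex III deadline well under 200 days out, which is why preparation needs to start immediately.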

Practical advice for compliance planning

1. Do not wait for the deadline. Compliance requires building systems, processes, and documentation — not flipping a switch. Organizations that start 3 months before a deadline will struggle. Start now.

2. Prioritize by impact. Focus first on high-risk systems under Annex III (August 2026 deadline) as these face the most extensive obligations. Minimal-risk systems need no action beyond AI literacy.

3. Build incrementally. The risk management system feeds into technical documentation, which feeds into the conformity assessment. Work sequentially through the obligations — do not try to do everything at once.

4. Use tools, not just consultants. Platforms like complixo can help you systematically track obligations, classify systems, and manage compliance documentation — reducing dependence on expensive external advisors.

5. Monitor national implementation. Each Member State may add specific requirements or guidance. Track developments in the countries where you operate.

Ready to get compliant?

complixo helps you classify, document, and track EU AI Act compliance in minutes — not months.

Start for free