The world's first comprehensive AI regulation. Classify, assess, and document your AI systems before the deadline.
The EU AI Act (Regulation 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. It establishes a risk-based classification system where AI systems are categorized as unacceptable risk, high-risk, limited risk, or minimal risk — each with proportionate obligations.
For high-risk AI systems (Annex III), organizations must implement a quality management system, conduct conformity assessments, maintain technical documentation, and ensure human oversight. Non-compliance can result in fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher.
The regulation applies to providers, deployers, importers, and distributors of AI systems placed on the EU market or whose output is used in the EU — regardless of where the organization is based.
The most important articles and obligations you need to address.
Rules for determining whether an AI system is high-risk based on its intended purpose and Annex III categories.
Requirement for a continuous, iterative risk management process throughout the AI system lifecycle.
Training, validation, and testing data sets must meet quality criteria including relevance, representativeness, and bias examination.
Comprehensive documentation demonstrating compliance, including system architecture, design choices, and risk assessments.
High-risk AI systems must be designed to allow effective human oversight, including the ability to override system decisions.
Providers must implement a quality management system covering development, testing, validation, and post-market monitoring.
Users must be informed when they interact with AI systems such as chatbots, and when they are exposed to emotion recognition systems or AI-generated deepfakes.
Non-compliance fines of up to EUR 35M or 7% of global turnover for prohibited AI practices, and up to EUR 15M or 3% for other violations.
Key enforcement milestones
Feb 2025 — Prohibited AI practices + AI literacy (in effect)
Aug 2025 — GPAI rules + national authorities (in effect)
Aug 2026 — High-risk obligations (Annex III) (deadline)
Aug 2027 — Annex I product safety AI
Purpose-built features to get you from zero to compliant.
Select your AI use case and get instant risk classification with mapped obligations per article.
Pre-built checklists covering all Annex III requirements, from data governance to human oversight.
Structured evidence collection for Art. 11 technical documentation requirements.
Control checks and effectiveness monitoring linked directly to AI Act requirements. Check results become evidence.
5x5 risk matrix for AI-specific risks with likelihood, impact scoring, and mitigation tracking.
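As a rough illustration of how a 5x5 scoring works (a minimal sketch with hypothetical names and thresholds, not the complixo implementation):

```python
# Hypothetical sketch of 5x5 risk matrix scoring. Likelihood and impact
# are each rated 1 (low) to 5 (high); their product (1-25) is bucketed
# into a rating band for mitigation tracking. Thresholds are illustrative.

def risk_score(likelihood: int, impact: int) -> int:
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be between 1 and 5")
    return likelihood * impact

def risk_rating(score: int) -> str:
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

print(risk_rating(risk_score(4, 5)))  # → high
```

A likely-but-severe risk (4 × 5 = 20) lands in the top band and would carry a tracked mitigation.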
Generate PDF, Excel, or Word reports per AI system with full traceability from regulation to evidence.
Yes. The AI Act applies to any organization that places AI systems on the EU market or whose AI system output is used in the EU, regardless of where the company is headquartered. This extraterritorial scope is similar to GDPR.
High-risk AI systems are defined in Annex III and include AI used for biometric identification, critical infrastructure, education, employment, credit scoring, law enforcement, and migration management. These systems must comply with the strictest requirements.
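The classification logic behind such a lookup can be sketched as follows (category names paraphrase Annex III headings; this is an illustrative sketch, not legal advice and not the complixo implementation):

```python
# Hypothetical sketch: map an AI system's use case to an Annex III
# high-risk category. Keys and labels are illustrative paraphrases.
ANNEX_III_CATEGORIES = {
    "biometric_identification": "Annex III(1) - Biometrics",
    "critical_infrastructure": "Annex III(2) - Critical infrastructure",
    "education": "Annex III(3) - Education and vocational training",
    "employment": "Annex III(4) - Employment and worker management",
    "credit_scoring": "Annex III(5) - Access to essential services",
    "law_enforcement": "Annex III(6) - Law enforcement",
    "migration": "Annex III(7) - Migration, asylum and border control",
}

def classify(use_case: str) -> str:
    """Return the high-risk category for a use case, if Annex III lists it."""
    category = ANNEX_III_CATEGORIES.get(use_case)
    return f"high-risk ({category})" if category else "not listed in Annex III"

print(classify("credit_scoring"))
```

A system not matching any Annex III category still needs review against the prohibited-practice and transparency rules; "not listed" does not mean "no obligations".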
Prohibited AI practices apply from February 2025. GPAI rules from August 2025. The main high-risk obligations (Annex III) apply from August 2026. Annex I product safety AI from August 2027.
Fines range from EUR 7.5M or 1% of turnover for supplying incorrect information to authorities, up to EUR 35M or 7% of global annual turnover for prohibited AI practices. For other violations, fines can reach EUR 15M or 3% of turnover. In each tier, the higher of the two amounts applies.
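Because each tier caps the fine at the higher of a fixed amount and a turnover percentage, the exposure is easy to sketch (illustrative arithmetic only):

```python
# Illustrative: AI Act fine ceiling per tier is the higher of the fixed
# cap and the percentage of global annual turnover.

def fine_ceiling(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    return max(fixed_cap_eur, turnover_eur * pct)

# Prohibited-practice tier (EUR 35M / 7%) for EUR 1bn turnover:
print(fine_ceiling(1_000_000_000, 35_000_000, 0.07))  # → 70000000.0
```

For a EUR 1bn-turnover company, the 7% prong (EUR 70M) exceeds the EUR 35M fixed cap; for smaller companies the fixed cap dominates.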
complixo provides automated risk classification based on your AI system's use case, pre-built compliance checklists mapped to specific articles, evidence management for technical documentation, and control testing for validation. All within a structured GRC framework.
Pre-built checks, structured evidence, and audit-ready reports. No credit card required.