Everything you need to know about EU AI Act, GDPR, NIS2, and DORA compliance.
EU AI Act Basics
What is the EU AI Act?
The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. It was adopted by the European Parliament and Council and entered into force on August 1, 2024. It establishes a risk-based approach to regulating AI systems in the European Union.
Who does the EU AI Act apply to?
The Act applies to providers (developers) of AI systems, deployers (organizations using AI), importers, and distributors — regardless of where they are based, as long as their AI system is placed on the EU market or its output is used in the EU.
When does the EU AI Act take effect?
The Act is being phased in: prohibited AI practices and AI literacy obligations have applied since February 2, 2025; GPAI provisions and governance structures since August 2, 2025; high-risk obligations for Annex III systems apply from August 2, 2026; and the remaining obligations, including Annex I high-risk systems, from August 2, 2027.
What are the risk categories under the EU AI Act?
The EU AI Act defines four risk levels: Unacceptable risk (banned), High risk (strict obligations), Limited risk (transparency requirements), and Minimal risk (voluntary codes of conduct). The classification determines which obligations apply to your AI system.
How do I classify my AI system?
Classification depends on the system's intended purpose, the sector it operates in, and its potential impact. High-risk systems are listed in Annex III and include areas like biometrics, critical infrastructure, education, employment, and law enforcement. complixo's classification engine analyzes these factors automatically.
What if my system could fall into multiple categories?
If your AI system has multiple use cases, the highest applicable risk category takes precedence. For example, an AI tool used for both recruitment (high-risk) and internal scheduling (minimal risk) would be classified as high-risk overall.
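To illustrate, here is a minimal TypeScript sketch of the precedence rule (the level names and the classifySystem function are our own illustrations, not terms from the Act or from complixo):

```typescript
// The Act's four tiers, in ascending order of severity.
type RiskLevel = "minimal" | "limited" | "high" | "unacceptable";

const SEVERITY: Record<RiskLevel, number> = {
  minimal: 0,
  limited: 1,
  high: 2,
  unacceptable: 3,
};

// A system's overall classification is the highest level among its use cases.
// Assumes at least one use case is provided.
function classifySystem(useCaseLevels: RiskLevel[]): RiskLevel {
  return useCaseLevels.reduce((worst, level) =>
    SEVERITY[level] > SEVERITY[worst] ? level : worst
  );
}

// The FAQ's example: recruitment (high) + internal scheduling (minimal) => "high".
console.log(classifySystem(["high", "minimal"])); // "high"
```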
What are the requirements for high-risk AI systems?
High-risk AI providers must implement: a risk management system (Art. 9), data governance (Art. 10), technical documentation (Art. 11), logging capabilities (Art. 12), transparency measures (Art. 13), human oversight provisions (Art. 14), and accuracy, robustness, and cybersecurity standards (Art. 15).
Do I need a conformity assessment?
Yes. Providers of high-risk AI systems must complete a conformity assessment before placing the system on the market. Most Annex III systems can use self-assessment (the internal control procedure in Annex VI); biometric systems are the exception and require assessment by a notified body unless the provider has fully applied harmonised standards.
Do I need to register in the EU database?
Providers must register their high-risk AI systems in the EU database before placing them on the market or putting them into service (Article 49). Deployers must register only if they are public authorities or bodies acting on their behalf. The database is managed by the European Commission.
Which AI practices are prohibited?
Article 5 prohibits: social scoring (by public and private actors alike), real-time remote biometric identification in publicly accessible spaces (with narrow exceptions), subliminal manipulation causing harm, exploiting vulnerabilities of specific groups, emotion recognition in workplaces and educational institutions, untargeted scraping of facial images to build facial recognition databases, and predicting criminal behaviour based solely on profiling.
Are there exceptions to the prohibitions?
Limited exceptions allow real-time remote biometric identification in publicly accessible spaces for targeted searches for missing persons, the prevention of an imminent terrorist threat, and locating suspects of serious crimes. These uses require prior authorization by a judicial authority or an independent administrative authority.
Roles & Responsibilities
What is the difference between a provider and a deployer?
A provider develops or has an AI system developed and places it on the market. A deployer is a person or organization that uses an AI system under their authority. Providers have more extensive obligations (design, develop, document), while deployers must ensure proper use, monitoring, and human oversight.
What if I use a third-party AI system?
As a deployer, you're still responsible for using the system in compliance with the Act. This includes ensuring human oversight, monitoring for risks, keeping logs, and informing affected individuals. You should also verify that the provider has fulfilled their obligations.
Documentation & Compliance
What documentation do I need?
For high-risk systems, you need: technical documentation describing the system, its purpose, and design (Art. 11); a risk management plan (Art. 9); data governance documentation (Art. 10); records of automatic logging (Art. 12); and an EU Declaration of Conformity (Art. 47).
How long must I keep records?
Providers must keep technical documentation for 10 years after the AI system has been placed on the market or put into service (Art. 18), and must retain automatically generated logs under their control for at least 6 months (Art. 19). Deployers must likewise keep logs of the system's operation for at least 6 months, or longer if required by other EU or national law (Art. 26).
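As a hedged sketch, these minimum retention periods could be encoded in a simple policy table (the structure below is illustrative only; the Act does not prescribe a format):

```typescript
// Minimum retention periods for high-risk AI systems, per the articles above.
// National or sectoral law may require longer periods.
const RETENTION_POLICY = {
  providerTechnicalDocs: { minimum: "10 years", trigger: "placed on market / put into service" }, // Art. 18
  providerLogs:          { minimum: "6 months", trigger: "log generation" },                      // Art. 19
  deployerLogs:          { minimum: "6 months", trigger: "log generation" },                      // Art. 26
} as const;
```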
Deadlines & Enforcement
What are the penalties for non-compliance?
Fines can reach up to EUR 35 million or 7% of global annual turnover for prohibited practices, EUR 15 million or 3% for most other violations, and EUR 7.5 million or 1% for supplying incorrect information; in each case the higher of the two amounts applies. For SMEs and startups, the lower amount applies instead.
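The caps resolve arithmetically; a short TypeScript sketch (tier values are from the Act, while the maxFine function and its names are illustrative):

```typescript
// Maximum fine tiers under the Act: a fixed cap in EUR and a share of
// worldwide annual turnover.
const FINE_TIERS = {
  prohibitedPractices: { fixed: 35_000_000, turnoverShare: 0.07 },
  otherViolations:     { fixed: 15_000_000, turnoverShare: 0.03 },
  incorrectInfo:       { fixed:  7_500_000, turnoverShare: 0.01 },
} as const;

// The higher of the two amounts applies; for SMEs and startups, the lower.
function maxFine(
  tier: keyof typeof FINE_TIERS,
  annualTurnoverEur: number,
  isSme: boolean
): number {
  const { fixed, turnoverShare } = FINE_TIERS[tier];
  const turnoverBased = annualTurnoverEur * turnoverShare;
  return isSme ? Math.min(fixed, turnoverBased) : Math.max(fixed, turnoverBased);
}

// Example: EUR 1bn turnover, prohibited practice => max(35M, 70M) = EUR 70M.
console.log(maxFine("prohibitedPractices", 1_000_000_000, false)); // 70000000
```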
Who enforces the EU AI Act?
Each EU Member State must designate national competent authorities. The European Commission has established an AI Office to oversee GPAI models and coordinate enforcement. There is also a European AI Board (advisory) and an advisory forum for stakeholder input.
Fundamental Rights Impact Assessments
What is a Fundamental Rights Impact Assessment (FRIA)?
A FRIA is an assessment that certain deployers of high-risk AI systems must carry out before first use (Article 27). It evaluates how the AI system may impact fundamental rights such as non-discrimination, privacy, freedom of expression, and human dignity.
When is a FRIA required?
A FRIA is required for deployers that are bodies governed by public law or private operators providing public services (for example in healthcare, education, or public administration), and for deployers using high-risk AI for creditworthiness assessment or for risk assessment and pricing in life and health insurance.
How do I conduct a FRIA?
A FRIA should describe the deployer's processes, the timeframe and frequency of use, the categories of affected persons, the specific risks of harm, human oversight measures, and the actions to be taken if risks materialize. The completed assessment must be notified to the market surveillance authority and updated whenever significant changes occur.
AI Literacy
What is the AI literacy requirement?
Article 4 requires both providers and deployers to ensure their staff and other persons acting on their behalf have sufficient AI literacy: adequate knowledge to make informed decisions about AI systems, understand their risks, and ensure appropriate use, proportionate to the context.
Who needs AI literacy training?
All staff who operate, oversee, or make decisions about AI systems need appropriate training. This includes technical teams, managers, customer service staff who interact with AI outputs, and decision-makers. The level of training should match their role and the risk level of the AI system.
How should AI literacy be documented?
Organizations should maintain records of training programs, attendee lists, training materials, and competency assessments. While the Act does not prescribe a specific format, documentation should demonstrate that personnel have the knowledge needed for their role in AI system deployment.
General-Purpose AI (GPAI) Models
What is a general-purpose AI (GPAI) model?
GPAI models are AI models trained on broad data at scale that can competently perform a wide range of distinct tasks. Examples include large language models such as GPT, Claude, and Gemini. GPAI provisions took effect on August 2, 2025.
What obligations apply to GPAI providers?
All GPAI providers must: maintain technical documentation, provide information to downstream providers, put in place a policy to comply with EU copyright law, and publish a summary of the content used for training. GPAI models with 'systemic risk' (training compute above 10^25 FLOPs) face additional obligations, including adversarial testing and incident reporting.
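The systemic-risk presumption is a plain compute threshold; a minimal sketch (the function name is hypothetical):

```typescript
// Article 51 presumption: a GPAI model has systemic risk when cumulative
// training compute exceeds 10^25 floating-point operations.
const SYSTEMIC_RISK_FLOPS = 1e25;

function presumedSystemicRisk(trainingFlops: number): boolean {
  return trainingFlops > SYSTEMIC_RISK_FLOPS;
}

console.log(presumedSystemicRisk(5e25)); // true: extra GPAI obligations apply
```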
Do GPAI rules affect deployers?
If you deploy a product built on GPAI (e.g., a chatbot using an LLM), you are primarily governed by the risk category of your specific use case. However, you should ensure your GPAI provider meets their obligations, as their compliance affects your downstream liability.
National Implementation
How is the Netherlands implementing the AI Act?
The Netherlands has designated the Autoriteit Persoonsgegevens (AP) as the primary supervisory authority for AI. The Dutch government is developing national guidance through the Algorithm Register and Algoritmekader. The Netherlands was an early adopter of algorithmic transparency policies.
How is Germany implementing the AI Act?
Germany is setting up its national AI supervisory structure through the Bundesnetzagentur (Federal Network Agency) as the coordinating authority. Multiple sector-specific authorities (BaFin for finance, BfArM for medical devices) will oversee AI in their respective domains.
How is France implementing the AI Act?
France has designated CNIL as a key authority for AI oversight, leveraging its existing data protection expertise. The French government emphasizes a pro-innovation approach, with AI regulatory sandboxes and support for SMEs to achieve compliance.
How is Belgium implementing the AI Act?
Belgium is coordinating its AI Act implementation across federal and regional levels. The GBA/APD (Data Protection Authority) is expected to play a central role. Belgium is also active in the Benelux coordination on AI governance.
Practical Tips
What are the first steps to start compliance?
Start with an inventory: list all AI systems your organization uses or develops. Then classify each by risk level. For high-risk systems, begin with technical documentation and a risk management plan. Use complixo to automate classification and track your checklist progress.
What are the most common compliance mistakes?
Common mistakes include: underclassifying AI systems (missing high-risk use cases), neglecting deployer obligations when using third-party AI, failing to document human oversight procedures, not training staff on AI literacy, and waiting until enforcement deadlines to start preparation.
How can SMEs manage compliance cost-effectively?
SMEs benefit from reduced fines and can use self-assessment for most systems. Prioritize by risk: start with high-risk systems, use tools like complixo instead of expensive consultants, leverage EU AI Act regulatory sandboxes for guidance, and reuse documentation templates across similar systems.
What should I do if I'm unsure about my classification?
First, use complixo's automated classification engine which maps your use case to EU AI Act categories. If still uncertain, consult the European Commission's guidance documents, participate in an AI regulatory sandbox, or seek legal advice. When in doubt, assume the higher risk category — it's better to over-comply than to be caught under-classified.
GDPR Basics
What is the GDPR?
The General Data Protection Regulation (EU 2016/679) is Europe's landmark data protection law. It has applied since May 25, 2018, giving individuals control over their personal data while imposing obligations on organizations that process it. It covers any organization processing the data of EU residents, regardless of where the organization is based.
What is personal data under the GDPR?
Personal data is any information relating to an identified or identifiable natural person. This includes obvious identifiers (name, email, ID number) but also IP addresses, cookie identifiers, location data, and any data that can be combined to identify someone. Special categories include health data, biometric data, racial/ethnic origin, and political opinions.
What are the six legal bases for processing?
The GDPR requires one of six legal bases: (1) Consent of the data subject, (2) Performance of a contract, (3) Legal obligation, (4) Vital interests, (5) Public interest, and (6) Legitimate interests of the controller. Most businesses rely on consent, contract performance, or legitimate interests. Each basis has specific requirements and limitations.
What rights do data subjects have?
Data subjects have eight key rights: right to be informed (Art. 13-14), right of access (Art. 15), right to rectification (Art. 16), right to erasure/right to be forgotten (Art. 17), right to restrict processing (Art. 18), right to data portability (Art. 20), right to object (Art. 21), and rights related to automated decision-making (Art. 22).
GDPR Compliance
Do I need a Data Protection Officer (DPO)?
A DPO is mandatory if you are a public authority, if your core activities require regular and systematic monitoring of data subjects on a large scale, or if you process special categories of data on a large scale. Even if not required, appointing a DPO is recommended as a best practice for accountability.
What is a Data Protection Impact Assessment (DPIA)?
A DPIA (Article 35) is a risk assessment required before processing that is likely to result in a high risk to individuals' rights and freedoms. This includes systematic profiling, large-scale processing of special category data, and large-scale monitoring of public areas. The DPIA must describe the processing, assess necessity and proportionality, and identify measures to address risks.
What are the data breach notification requirements?
Under Articles 33-34, you must notify your supervisory authority within 72 hours of becoming aware of a personal data breach, unless the breach is unlikely to result in a risk to individuals. If the breach is likely to result in a high risk, you must also notify the affected individuals without undue delay. Document all breaches regardless of severity.
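The notification logic reduces to a two-branch decision; a hedged TypeScript sketch (the risk labels and breachDuties function are illustrative simplifications):

```typescript
// Articles 33-34 in miniature: the authority is notified within 72 hours of
// awareness unless the breach is unlikely to result in a risk; affected
// individuals are notified when the risk is high. Every breach is documented.
type BreachRisk = "unlikely" | "risk" | "high";

function breachDuties(risk: BreachRisk, awareAt: Date) {
  const HOUR_MS = 60 * 60 * 1000;
  return {
    documentInternally: true, // always required, whatever the severity
    notifyAuthorityBy:
      risk === "unlikely" ? null : new Date(awareAt.getTime() + 72 * HOUR_MS),
    notifyIndividuals: risk === "high", // without undue delay
  };
}
```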
What are the penalties for GDPR non-compliance?
Fines can reach up to EUR 20 million or 4% of global annual turnover (whichever is higher) for the most serious violations (Articles 83-84). Less severe violations can result in fines up to EUR 10 million or 2% of turnover. National Data Protection Authorities also have powers to issue warnings, reprimands, and processing bans.
GDPR Data Transfers
Can I transfer personal data outside the EU/EEA?
International data transfers require a valid legal mechanism. Options include: adequacy decisions (the destination country has adequate protection), Standard Contractual Clauses (SCCs), Binding Corporate Rules (BCRs), or explicit consent. The EU-US Data Privacy Framework (since July 2023) provides a mechanism for transfers to certified US organizations.
What are Standard Contractual Clauses (SCCs)?
SCCs are standard contract templates adopted by the European Commission that provide appropriate data protection safeguards for international transfers. The current version (adopted June 2021) covers four scenarios: controller-to-controller, controller-to-processor, processor-to-processor, and processor-to-controller. A Transfer Impact Assessment (TIA) may also be required.
How does the GDPR affect cloud services?
Cloud providers processing EU personal data act as data processors and must comply with GDPR. You need a Data Processing Agreement (DPA) with your cloud provider. Verify where data is stored and processed — preferably in the EU/EEA. Major cloud providers (AWS, Azure, Google Cloud, Supabase) offer EU-hosted regions specifically for GDPR compliance.
NIS2 Basics
What is the NIS2 Directive?
The NIS2 Directive (EU 2022/2555) is the EU's updated cybersecurity legislation, replacing the original NIS Directive. It entered into force on January 16, 2023, and EU Member States had until October 17, 2024 to transpose it into national law. NIS2 significantly expands the scope of sectors and entities covered, and introduces stricter cybersecurity requirements.
Who does NIS2 apply to?
NIS2 applies to 'essential' entities (energy, transport, banking, health, digital infrastructure, ICT service management, public administration, space) and 'important' entities (postal services, waste management, chemicals, food, manufacturing, digital providers, research). Size thresholds generally apply: medium-sized entities (50+ employees or EUR 10M+ turnover) and above are in scope.
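As a rough illustration, the scope test combines a sector check with the size threshold (a simplified sketch; national transpositions can extend scope, and some entities are covered regardless of size):

```typescript
// Simplified scope check: a covered sector plus the medium-size threshold
// (50+ employees or more than EUR 10M annual turnover). Some entities
// (e.g. certain digital infrastructure providers) are in scope regardless
// of size, which this sketch ignores.
function inNis2Scope(entity: {
  sectorCovered: boolean; // listed among NIS2's essential or important sectors
  employees: number;
  annualTurnoverEur: number;
}): boolean {
  const mediumOrLarger =
    entity.employees >= 50 || entity.annualTurnoverEur > 10_000_000;
  return entity.sectorCovered && mediumOrLarger;
}
```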
What is the difference between essential and important entities?
Essential entities face proactive supervision (audits, inspections) and higher penalties (up to EUR 10 million or 2% of global turnover, whichever is higher). Important entities face reactive supervision (only after incidents or evidence of non-compliance) and lower penalties (up to EUR 7 million or 1.4% of turnover). Both must implement the same cybersecurity measures.
NIS2 Obligations
What cybersecurity measures does NIS2 require?
Article 21 requires entities to implement appropriate and proportionate measures, including: risk analysis and information system security policies, incident handling, business continuity and crisis management, supply chain security, security in the acquisition, development, and maintenance of network and information systems, vulnerability handling and disclosure, policies to assess the effectiveness of risk-management measures, basic cyber hygiene and training, cryptography and encryption policies, and multi-factor authentication.
What are the NIS2 incident reporting requirements?
Entities must report significant incidents to their national CSIRT/competent authority in three stages: an early warning within 24 hours of awareness, an incident notification within 72 hours with initial assessment, and a final report within one month with detailed description, root cause analysis, and mitigation measures taken. Incidents affecting multiple Member States trigger cross-border notifications.
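A minimal sketch of the three-stage timeline, computed from the moment of awareness (illustrative; 'one month' is approximated as 30 days):

```typescript
// The three NIS2 reporting stages, all measured from the moment of awareness.
const HOUR_MS = 60 * 60 * 1000;

function nis2ReportingDeadlines(awareAt: Date) {
  const t = awareAt.getTime();
  return {
    earlyWarning: new Date(t + 24 * HOUR_MS),         // within 24 hours
    incidentNotification: new Date(t + 72 * HOUR_MS), // within 72 hours
    finalReport: new Date(t + 30 * 24 * HOUR_MS),     // within one month (~30 days)
  };
}
```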
What is supply chain security under NIS2?
NIS2 emphasizes supply chain cybersecurity (Article 21(2)(d)). Organizations must identify and assess cybersecurity risks from their suppliers and service providers, implement appropriate security measures in contracts, and monitor the security posture of critical suppliers. This is especially relevant for ICT service providers and digital infrastructure operators.
What is management body accountability under NIS2?
Article 20 holds management bodies (boards, executives) personally accountable for cybersecurity. They must approve cybersecurity risk-management measures, oversee their implementation, and undergo specific cybersecurity training. Management members can be held personally liable for non-compliance — a significant shift from previous legislation.
DORA Basics
What is DORA?
The Digital Operational Resilience Act (EU 2022/2554) is an EU regulation specifically for the financial sector. It entered into force on January 16, 2023 and applies from January 17, 2025. DORA creates a comprehensive framework for ICT risk management, incident reporting, resilience testing, and third-party risk management for financial entities.
Who does DORA apply to?
DORA applies to virtually all EU financial entities: banks, insurance companies, investment firms, payment institutions, crypto-asset service providers, trading venues, central counterparties, and credit rating agencies. It also directly regulates critical ICT third-party service providers (CTPPs) to the financial sector — a regulatory first.
How does DORA relate to NIS2?
DORA is a sector-specific regulation (lex specialis) that takes precedence over NIS2 for financial entities. Financial entities subject to DORA do not need to separately comply with NIS2's general cybersecurity requirements — DORA is more comprehensive for the financial sector. However, NIS2 still applies to aspects not covered by DORA.
DORA Obligations
What are the five pillars of DORA?
DORA is built on five pillars: (1) ICT Risk Management — comprehensive framework with identification, protection, detection, response, and recovery; (2) ICT Incident Reporting — mandatory reporting of major ICT-related incidents; (3) Digital Operational Resilience Testing — including advanced threat-led penetration testing (TLPT); (4) ICT Third-Party Risk Management — contracts, oversight, exit strategies; (5) Information Sharing — voluntary sharing of cyber threat intelligence.
What does DORA require for ICT third-party risk?
DORA requires financial entities to maintain a register of all ICT third-party service providers, conduct thorough risk assessments before entering contracts, include specific contractual provisions (security, audit rights, exit strategies, data location), and continuously monitor providers. Critical ICT third-party providers (CTPPs) will be directly supervised by European Supervisory Authorities.
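A hedged sketch of what one register entry might capture (field names are illustrative, not the official templates defined by the European Supervisory Authorities):

```typescript
// Illustrative shape of one entry in a register of ICT third-party providers.
interface IctProviderRegisterEntry {
  providerName: string;
  serviceDescription: string;
  supportsCriticalOrImportantFunction: boolean;
  dataLocations: string[]; // where data is stored and processed
  contract: {
    auditRights: boolean;
    exitStrategy: boolean;
    securityRequirements: boolean;
  };
  lastRiskAssessment: Date; // monitored continuously for critical providers
}
```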
What is Threat-Led Penetration Testing (TLPT) under DORA?
Article 26 requires significant financial entities to conduct TLPT at least every three years. TLPT must be performed by qualified external testers using real-world threat intelligence, cover critical and important functions, and be reported to competent authorities. The framework is aligned with the TIBER-EU framework already used in several Member States.
What are the DORA incident reporting timelines?
Financial entities must report major ICT-related incidents to their competent authority in three stages: an initial notification within 4 hours of classification (or 24 hours of detection), an intermediate report within 72 hours with updates on impact and recovery, and a final report within one month with root cause analysis and lessons learned.
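The initial notification combines two clocks, and the deadline is whichever expires first; a minimal sketch (names illustrative):

```typescript
// DORA initial notification: within 4 hours of classifying the incident as
// major, but no later than 24 hours after detection.
const HOUR_MS = 60 * 60 * 1000;

function doraInitialDeadline(detectedAt: Date, classifiedMajorAt: Date): Date {
  const fromClassification = classifiedMajorAt.getTime() + 4 * HOUR_MS;
  const fromDetection = detectedAt.getTime() + 24 * HOUR_MS;
  return new Date(Math.min(fromClassification, fromDetection));
}
```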
Cross-Framework Compliance
How do EU AI Act, GDPR, NIS2, and DORA overlap?
These regulations share common themes: risk-based approach, incident reporting, documentation requirements, and governance accountability. An AI system in financial services may need to comply with all four simultaneously. Common controls like access management, encryption, logging, and incident response can be mapped across frameworks to reduce duplicate effort.
How can I manage multi-framework compliance efficiently?
Use a Common Control Framework (CCF) approach: identify controls that satisfy multiple regulations simultaneously. For example, a robust access control system addresses GDPR Article 32 (security of processing), NIS2 Article 21 (cybersecurity measures), DORA ICT risk management requirements, and EU AI Act Article 15 (security). complixo's cross-framework mapping makes this manageable.
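A minimal sketch of such a mapping for the access-control example above (the structure is illustrative, not complixo's internal schema):

```typescript
// One common control mapped to the provisions it helps satisfy.
const accessControl = {
  control: "Role-based access control with multi-factor authentication",
  satisfies: [
    { framework: "GDPR", provision: "Art. 32 (security of processing)" },
    { framework: "NIS2", provision: "Art. 21 (risk-management measures)" },
    { framework: "DORA", provision: "ICT risk management (protection)" },
    { framework: "EU AI Act", provision: "Art. 15 (robustness and cybersecurity)" },
  ],
} as const;

// Evidence collected once for this control can be reused for every mapped
// provision, which is what removes the duplicate effort.
```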
Which framework should I prioritize?
Start with what applies most broadly to your organization. For most EU businesses: GDPR first (already in effect, highest fines). Then add NIS2 if you're in a covered sector. Financial entities should prioritize DORA. Add EU AI Act if you develop or deploy AI systems. complixo helps you identify which frameworks apply and tracks compliance across all of them simultaneously.
Using complixo
How does complixo help with compliance?
complixo is a GRC + Control Testing platform that helps you manage compliance across EU AI Act, GDPR, NIS2, and DORA from one dashboard. Register your applications, map controls across frameworks, collect evidence, manage risks, run control checks, and generate audit-ready reports. It replaces spreadsheets, consultants, and scattered tools.
Is complixo a substitute for legal advice?
No. complixo helps you document and manage your AI systems' compliance, but it does not provide legal advice. For legal interpretation of specific cases or regulatory questions unique to your situation, we recommend consulting qualified legal counsel.
Where is my data stored?
All data is stored in Frankfurt, Germany (eu-central-1) using Supabase with PostgreSQL. All serverless functions run in the EU. We do not transfer data outside the European Economic Area.