Checklist · January 28, 2026 · Last reviewed: January 28, 2026 · 10 min read

High-Risk AI Obligations: Your Complete Compliance Checklist

If your AI system is classified as high-risk under the EU AI Act, these are the specific obligations you must fulfill. A practical checklist for compliance teams.

By complixo Team

Overview

If your AI system is classified as high-risk under the EU AI Act (Regulation (EU) 2024/1689), you face the most extensive set of obligations in the regulation. For Annex III systems, the compliance deadline is August 2, 2026. For Annex I systems (product safety), the deadline is August 2, 2027.

This checklist covers every requirement so you know exactly what needs to be done.

1. Risk management system (Article 9)

Providers of high-risk AI systems must establish, implement, document, and maintain a risk management system. This is not a one-time exercise — it must be a continuous, iterative process that runs throughout the entire lifecycle of the AI system.

Your risk management system must:

  • Identify and analyze known and reasonably foreseeable risks that the AI system can pose to health, safety, or fundamental rights
  • Estimate and evaluate the risks that may emerge when the system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse
  • Adopt appropriate and targeted risk management measures to address identified risks, prioritizing elimination through design and development choices where possible
  • Test the AI system to ensure it performs consistently for its intended purpose and meets the requirements of the regulation
  • Consider the risk management measures in relation to the effects and possible interactions with other high-risk systems
  • Account for the generally acknowledged state of the art, including as reflected in relevant harmonized standards or common specifications

Testing must be performed against pre-defined metrics and probabilistic thresholds appropriate to the system's intended purpose. Testing procedures must be suitable for that purpose and need not go beyond what is necessary to achieve it.
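
To make this concrete, here is a minimal Python sketch of testing against pre-defined metrics and probabilistic thresholds. The metric names and threshold values are illustrative assumptions, not figures prescribed by the Act, and a real test suite would cover far more dimensions.

```python
# Minimal sketch of pre-defined metrics and probabilistic thresholds for
# risk-management testing. The metric names and threshold values below are
# illustrative assumptions, not figures prescribed by the AI Act.
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricThreshold:
    name: str
    minimum: float  # the system must meet or exceed this value

# Hypothetical thresholds defined *before* testing, as Article 9 expects.
THRESHOLDS = [
    MetricThreshold("accuracy", 0.95),
    MetricThreshold("recall_protected_group", 0.93),
]

def evaluate(results: dict[str, float]) -> list[str]:
    """Return a list of failed checks; empty means all thresholds are met."""
    failures = []
    for t in THRESHOLDS:
        observed = results.get(t.name)
        if observed is None or observed < t.minimum:
            failures.append(f"{t.name}: observed {observed}, required >= {t.minimum}")
    return failures

if __name__ == "__main__":
    # Hypothetical test-run output fed into the check.
    print(evaluate({"accuracy": 0.96, "recall_protected_group": 0.91}))
```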

2. Data governance and management (Article 10)

High-risk AI systems that involve the training of AI models with data must be developed on the basis of training, validation, and testing datasets that meet specific quality criteria.

Data governance practices must address:

  • Relevant design choices for data collection, including the origin, scope, and characteristics of data
  • Data preparation processing operations (annotation, labeling, cleaning, enrichment, aggregation)
  • Formulation of assumptions about what the data is intended to measure and represent
  • Assessment of the availability, quantity, and suitability of the datasets needed
  • Examination for possible biases that are likely to affect health and safety or lead to discrimination
  • Identification of relevant data gaps or shortcomings and how they can be addressed

Training, validation, and testing datasets must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete in view of the intended purpose.
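
As a rough illustration of what automated data governance checks can look like, here is a minimal Python sketch that screens a dataset for missing values and under-represented groups. The column names and the minimum-share floor are our own assumptions, not values taken from the regulation.

```python
# Minimal sketch of automated dataset checks supporting Article 10: error
# screening plus a simple representativeness check. Column names and the
# minimum-share floor are illustrative assumptions, not values from the Act.
from collections import Counter

def check_dataset(rows: list[dict], group_column: str, min_share: float = 0.05) -> list[str]:
    findings = []
    # Error screening: flag records with missing fields.
    incomplete = [i for i, r in enumerate(rows) if any(v is None for v in r.values())]
    if incomplete:
        findings.append(f"{len(incomplete)} records have missing values")
    # Representativeness: flag groups below a minimum share of the data.
    counts = Counter(r[group_column] for r in rows if r.get(group_column) is not None)
    total = sum(counts.values())
    for group, n in counts.items():
        if n / total < min_share:
            findings.append(f"group {group!r} is only {n / total:.1%} of the dataset")
    return findings

if __name__ == "__main__":
    data = [
        {"age_band": "18-30", "label": 1},
        {"age_band": "18-30", "label": 0},
        {"age_band": "65+", "label": None},  # missing label -> flagged
    ]
    print(check_dataset(data, group_column="age_band", min_share=0.4))
```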

3. Technical documentation (Article 11)

Before a high-risk AI system is placed on the market or put into service, the provider must draw up technical documentation. This must be kept up to date and made available to national competent authorities upon request.

Technical documentation must include (per Annex IV):

  • A general description of the AI system including its intended purpose, the provider, and the version
  • A detailed description of the elements of the AI system and its development process
  • Information about the monitoring, functioning, and control of the AI system, including human oversight measures
  • A description of the computational and hardware resources used, the system's expected lifetime, and necessary maintenance
  • The risk management system documentation
  • A description of any change made to the system through its lifecycle
  • The metrics used to measure accuracy, robustness, and cybersecurity (Article 15), including known limitations
  • A detailed description of data governance practices, including data sheets for training, validation, and testing datasets
  • The EU Declaration of Conformity
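
One pragmatic way to keep this documentation complete is to track the Annex IV items as a machine-readable checklist. The sketch below is a minimal illustration; the keys, labels, and report format are our own assumptions.

```python
# Minimal sketch of tracking Annex IV technical-documentation items as a
# machine-readable checklist. The item labels paraphrase the list above;
# the keys and report format are our own assumptions.
ANNEX_IV_ITEMS = {
    "general_description": "General description, intended purpose, provider, version",
    "development_process": "Elements of the system and its development process",
    "oversight_and_control": "Monitoring, functioning, control, human oversight",
    "resources_and_lifetime": "Computational/hardware resources, lifetime, maintenance",
    "risk_management": "Risk management system documentation (Article 9)",
    "change_log": "Changes made through the lifecycle",
    "metrics": "Accuracy, robustness, cybersecurity metrics (Article 15)",
    "data_governance": "Data governance practices and data sheets (Article 10)",
    "declaration_of_conformity": "Copy of the EU Declaration of Conformity",
}

def missing_items(completed: set[str]) -> list[str]:
    """Return human-readable labels for items not yet documented."""
    return [label for key, label in ANNEX_IV_ITEMS.items() if key not in completed]

if __name__ == "__main__":
    done = {"general_description", "risk_management", "metrics"}
    for label in missing_items(done):
        print("MISSING:", label)
```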

4. Record-keeping and logging (Article 12)

High-risk AI systems must be designed and developed with capabilities enabling the automatic recording of events (logs) while the system is operating.

For remote biometric identification systems (Annex III, point 1(a)), logging must provide, at a minimum:

  • Recording of the period of each use of the system (start date and time, end date and time)
  • The reference database against which input data has been checked
  • The input data for which the search has led to a match
  • Identification of the natural persons involved in the verification of results

Deployers must keep the logs automatically generated by those systems for at least six months, unless otherwise provided in applicable EU or national law.
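
Here is a minimal sketch of what a log record covering these minimum fields could look like, assuming a JSON-serialized record appended to tamper-evident storage. The field names and identifiers are hypothetical, not mandated by the Act.

```python
# Minimal sketch of a log record covering the Article 12(3) minimum fields
# for remote biometric identification systems. Field names and the JSON
# serialization are our own assumptions, not mandated by the Act.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class UseLogRecord:
    session_start: str            # start date and time of the use
    session_end: str              # end date and time of the use
    reference_database: str       # database the input was checked against
    matched_inputs: list[str]     # input data that led to a match
    verifying_persons: list[str]  # natural persons who verified the results

if __name__ == "__main__":
    record = UseLogRecord(
        session_start=datetime(2026, 8, 3, 9, 0, tzinfo=timezone.utc).isoformat(),
        session_end=datetime(2026, 8, 3, 9, 12, tzinfo=timezone.utc).isoformat(),
        reference_database="watchlist-v42",       # hypothetical identifier
        matched_inputs=["frame-000183"],
        verifying_persons=["officer-17", "officer-23"],
    )
    print(json.dumps(asdict(record)))  # append to tamper-evident storage
```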

5. Transparency and information to deployers (Article 13)

High-risk AI systems must be designed to ensure that their operation is sufficiently transparent to enable deployers to interpret the output and use it appropriately.

Instructions for use must include:

  • The identity and contact details of the provider
  • The characteristics, capabilities, and limitations of performance of the system
  • Any changes to the system and its performance that were pre-determined by the provider at the initial conformity assessment
  • The human oversight measures and tools enabling deployers to interpret AI system outputs
  • The expected lifetime of the system and necessary maintenance and care measures
  • A description of the mechanisms included for recording logs

6. Human oversight (Article 14)

High-risk AI systems must be designed so that they can be effectively overseen by natural persons during the period they are in use.

Human oversight measures must enable the overseeing individual to:

  • Fully understand the capabilities and limitations of the system and properly monitor its operation
  • Remain aware of the possible tendency to automatically rely or over-rely on the system's output (automation bias)
  • Correctly interpret the system's output taking into account interpretation tools available
  • Decide not to use the system or to disregard, override, or reverse its output in any particular situation
  • Intervene in the operation of the system or interrupt it through a stop button or similar procedure

For remote biometric identification systems (Annex III, point 1(a)), no action or decision may be taken on the basis of the identification unless it has been separately verified and confirmed by at least two natural persons with the necessary competence, training, and authority. This two-person requirement does not apply where EU or national law considers it disproportionate in the areas of law enforcement, migration, border control, or asylum.
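
To illustrate two of these capabilities in code, here is a minimal Python sketch of a human-in-the-loop gate that lets an overseer override the system's output or interrupt it entirely. The predict function and review flow are hypothetical.

```python
# Minimal sketch of a human-in-the-loop gate implementing two Article 14
# capabilities: disregarding/overriding the output and interrupting the
# system. The predict function and review flow are hypothetical.
from typing import Callable, Optional

class HumanOversightGate:
    def __init__(self, predict: Callable[[dict], str]):
        self.predict = predict
        self.stopped = False  # "stop button" state

    def stop(self) -> None:
        """Interrupt the system: no further outputs are produced."""
        self.stopped = True

    def decide(self, inputs: dict, override: Optional[str] = None) -> Optional[str]:
        if self.stopped:
            return None  # system interrupted by the overseer
        output = self.predict(inputs)
        # The overseer may disregard or reverse the output in any situation.
        return override if override is not None else output

if __name__ == "__main__":
    gate = HumanOversightGate(predict=lambda _: "reject")  # hypothetical model
    print(gate.decide({"applicant": "A-1"}))                      # model output
    print(gate.decide({"applicant": "A-2"}, override="approve"))  # human override
    gate.stop()
    print(gate.decide({"applicant": "A-3"}))                      # None: interrupted
```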

7. Accuracy, robustness and cybersecurity (Article 15)

High-risk AI systems must achieve an appropriate level of accuracy, robustness, and cybersecurity throughout their lifecycle.

  • Accuracy: Levels of accuracy and relevant metrics must be declared in the instructions for use
  • Robustness: Systems must be resilient to errors, faults, or inconsistencies. Systems that continue to learn after deployment must mitigate biased outputs due to feedback loops
  • Cybersecurity: Systems must be resilient against data poisoning, model poisoning, adversarial examples, confidentiality attacks, and model flaws
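
Robustness is the most readily testable of the three. Here is a minimal Python sketch of a perturbation probe that checks whether small input noise flips the system's output; the toy model, noise scale, and test inputs are illustrative assumptions.

```python
# Minimal sketch of a perturbation-based robustness probe: check that small
# input noise does not flip the system's output. The toy model, noise scale,
# and stability target are illustrative assumptions, not Act requirements.
import random

def classify(features: list[float]) -> int:
    """Stand-in for the real model: a toy threshold rule."""
    return 1 if sum(features) > 0 else 0

def stability_rate(samples: list[list[float]], noise: float = 0.01, trials: int = 50) -> float:
    stable = 0
    for x in samples:
        baseline = classify(x)
        flips = sum(
            classify([v + random.uniform(-noise, noise) for v in x]) != baseline
            for _ in range(trials)
        )
        stable += flips == 0
    return stable / len(samples)

if __name__ == "__main__":
    random.seed(0)
    test_inputs = [[0.5, 0.4], [-0.3, -0.2], [0.01, -0.005]]  # last one is near the boundary
    print(f"stable under perturbation: {stability_rate(test_inputs):.0%}")
```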

8. Quality management system (Article 17)

Providers must put in place a quality management system documented as written policies, procedures, and instructions. This must include:

  • A strategy for regulatory compliance including conformity assessment procedures
  • Techniques and procedures for design, development, quality control, and quality assurance
  • Examination, test, and validation procedures before, during, and after development
  • Technical specifications and standards to be applied
  • Systems and procedures for data management
  • The risk management system (Article 9)
  • Post-market monitoring system (Article 72)
  • Procedures for reporting serious incidents (Article 73)
  • Communication procedures with authorities, notified bodies, and customers
  • Systems for record-keeping of all relevant documentation
  • Resource management including security-of-supply measures
  • An accountability framework setting out management responsibilities

9. Conformity assessment (Article 43)

Before placing a high-risk AI system on the market, a conformity assessment must be performed.

  • For most Annex III systems, this can be done via internal control (Annex VI) — the provider self-assesses compliance
  • For biometric systems (Annex III, point 1), a third-party assessment by a notified body is required, unless the provider has applied harmonized standards (or common specifications) covering all the requirements, in which case internal control remains an option
  • For Annex I systems, the assessment follows the existing sectoral conformity assessment procedures with AI Act requirements integrated

10. EU Declaration of Conformity (Article 47)

The provider must draw up a written or electronic EU Declaration of Conformity for each high-risk AI system, kept available for 10 years after the system has been placed on the market.

The declaration must state the name and type of the AI system, the provider's identity, that it is issued under the sole responsibility of the provider, that the system conforms with the EU AI Act, references to applied standards, and where applicable the notified body details.

11. CE marking and EU database registration (Articles 48-49)

  • The CE marking must be affixed visibly to the AI system, its packaging, or accompanying documentation before the system is placed on the market
  • Providers must register themselves and each high-risk AI system in the EU database (Article 71) before market placement
  • Deployers of Annex III high-risk systems must also register their use
  • For high-risk systems under Annex III points 1, 6, and 7 in the areas of law enforcement or migration, asylum, and border control, registration takes place in a secure, non-public section of the database

12. Post-market monitoring (Article 72)

Providers must establish a post-market monitoring system that actively and systematically collects, documents, and analyzes relevant data on the performance of the AI system throughout its lifetime.

The system must allow the provider to evaluate continuous compliance and include procedures to take corrective actions, including withdrawal or recall.
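
As a minimal illustration, the following Python sketch collects per-prediction outcomes and flags when a rolling accuracy estimate degrades below a floor. The window size and threshold are our own assumptions; a real post-market monitoring plan covers far more than accuracy.

```python
# Minimal sketch of post-market monitoring: collect per-prediction telemetry
# and raise an alert when a rolling accuracy estimate degrades. The window
# size and alert threshold are our own assumptions, not values from the Act.
from collections import deque

class PostMarketMonitor:
    def __init__(self, window: int = 500, min_accuracy: float = 0.9):
        self.outcomes = deque(maxlen=window)  # True = prediction later confirmed correct
        self.min_accuracy = min_accuracy

    def record(self, prediction_correct: bool) -> None:
        self.outcomes.append(prediction_correct)

    def needs_corrective_action(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough data yet
        accuracy = sum(self.outcomes) / len(self.outcomes)
        return accuracy < self.min_accuracy

if __name__ == "__main__":
    monitor = PostMarketMonitor(window=5, min_accuracy=0.8)
    for outcome in [True, True, False, False, True]:
        monitor.record(outcome)
    print("trigger corrective action:", monitor.needs_corrective_action())
```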

Serious incident reporting (Article 73)

Providers must report any serious incident to the market surveillance authorities of the Member State(s) where the incident occurred. A serious incident is an incident or malfunctioning that directly or indirectly leads to the death of a person or serious harm to a person's health, serious and irreversible disruption of the management or operation of critical infrastructure, infringement of Union-law obligations intended to protect fundamental rights, or serious harm to property or the environment. Reports must be made immediately after the provider establishes a causal link between the AI system and the incident (or the reasonable likelihood of one), and in any event no later than 15 days after the provider becomes aware. Shorter deadlines apply: no later than 10 days in the event of a death, and no later than two days for a widespread infringement or serious disruption of critical infrastructure.
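
Here is a minimal sketch of computing the reporting deadline from the date of awareness, mirroring the deadlines above. This is an illustration only; the incident classification itself requires legal judgment.

```python
# Minimal sketch of computing the Article 73 reporting deadline from the date
# the provider becomes aware of a serious incident. The flags mirror the
# shorter deadlines described above; the function is illustrative only.
from datetime import date, timedelta

def report_due(aware_on: date, death: bool = False, widespread_or_critical: bool = False) -> date:
    if widespread_or_critical:
        return aware_on + timedelta(days=2)   # widespread infringement / critical infrastructure
    if death:
        return aware_on + timedelta(days=10)  # incident involving a death
    return aware_on + timedelta(days=15)      # general serious incident

if __name__ == "__main__":
    print(report_due(date(2026, 9, 1)))              # 2026-09-16
    print(report_due(date(2026, 9, 1), death=True))  # 2026-09-11
```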

Summary table

  • Article 9 — Risk management system (Providers)
  • Article 10 — Data governance (Providers)
  • Article 11 — Technical documentation (Providers)
  • Article 12 — Record-keeping / logging (Providers + Deployers)
  • Article 13 — Transparency (Providers)
  • Article 14 — Human oversight (Providers + Deployers)
  • Article 15 — Accuracy, robustness, cybersecurity (Providers)
  • Article 17 — Quality management system (Providers)
  • Article 43 — Conformity assessment (Providers)
  • Article 47 — EU Declaration of Conformity (Providers)
  • Article 48 — CE marking (Providers)
  • Article 49 — EU database registration (Providers + Deployers)
  • Article 72 — Post-market monitoring (Providers)
  • Article 73 — Serious incident reporting (Providers)

Starting your compliance work now ensures you have ample time to implement these requirements before the August 2, 2026 deadline for Annex III systems. Each obligation builds on the others — the risk management system feeds into technical documentation, which feeds into the conformity assessment. Approach this as a systematic process, not a last-minute checklist.

Ready to get compliant?

complixo helps you classify, document, and track EU AI Act compliance in minutes — not months.

Start for free