Assessment · February 16, 2026 · Last reviewed: February 16, 2026 · 11 min read

FRIA: How to Conduct a Fundamental Rights Impact Assessment

A step-by-step guide to conducting a Fundamental Rights Impact Assessment under Article 27 of the EU AI Act. When it's required, what to include, and common pitfalls.

By complixo Team

What is a Fundamental Rights Impact Assessment?

A Fundamental Rights Impact Assessment (FRIA) is a mandatory evaluation required under Article 27 of the EU AI Act for certain deployers of high-risk AI systems. It requires organizations to systematically assess how their use of a high-risk AI system may affect the fundamental rights of individuals.

The FRIA is distinct from the risk management system required of providers under Article 9. While the risk management system focuses on technical risks and system performance, the FRIA examines the broader societal and individual impact of deploying an AI system — including effects on non-discrimination, privacy, freedom of expression, human dignity, and other rights enshrined in the EU Charter of Fundamental Rights.

The FRIA must be completed before the high-risk AI system is put into use for the first time. It is a deployer obligation — meaning the organization using the AI system (not the one that built it) is responsible for conducting it.

When is a FRIA required?

Article 27 specifies that a FRIA is required when:

1. Bodies governed by public law deploy high-risk AI systems. This includes government agencies, municipalities, public universities, public hospitals, and other entities established under public law.

2. Private entities providing public services deploy high-risk AI systems. This covers private companies operating in areas such as healthcare, education, utilities, public transport, and social services when they are performing a public service function.

3. Deployers of high-risk AI systems referred to in points 5(b) and 5(c) of Annex III, whether they are public or private:

  • Credit institutions using AI for creditworthiness assessment (area 5(b))
  • Insurance companies using AI for risk assessment and pricing of life and health insurance (area 5(c))

Area 5(a) (evaluating eligibility for public assistance benefits and services, or granting, reducing, revoking, or reclaiming such benefits) is not listed separately in Article 27, but deployers in that area are almost always public bodies or private entities providing public services and therefore fall under the first two categories.

In practice, the FRIA requirement captures a wide range of organizations: any public body using high-risk AI, any private entity providing public services with high-risk AI, and specific private-sector use cases in finance and insurance.
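
To make the trigger conditions easier to operationalize, the gating logic can be expressed as a short screening check. The sketch below is illustrative only (the field names and area codes are our own simplification of the summary above) and is no substitute for a legal assessment of borderline cases.

```python
from dataclasses import dataclass

# Annex III areas that trigger a FRIA for private-sector deployers under the
# third limb of Article 27: creditworthiness assessment and life/health
# insurance risk assessment and pricing.
PRIVATE_SECTOR_FRIA_AREAS = {"5(b)", "5(c)"}


@dataclass
class Deployment:
    """Hypothetical summary of a planned high-risk AI deployment."""
    is_public_body: bool            # body governed by public law
    provides_public_service: bool   # private entity performing a public-service function
    annex_iii_area: str             # e.g. "5(b)" for creditworthiness assessment


def fria_required(d: Deployment) -> bool:
    """Rough first-pass screening of the Article 27 trigger conditions."""
    return (
        d.is_public_body
        or d.provides_public_service
        or d.annex_iii_area in PRIVATE_SECTOR_FRIA_AREAS
    )


bank = Deployment(is_public_body=False, provides_public_service=False,
                  annex_iii_area="5(b)")
print(fria_required(bank))  # True: private-sector credit scoring falls under area 5(b)
```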

What must a FRIA include?

Article 27(1) specifies the minimum content. A FRIA must describe:

a) The deployer's processes

How will the AI system be integrated into the deployer's existing processes? This includes the workflow in which the system operates, who interacts with it, and how its outputs feed into decision-making.

b) Timeframe and frequency of use

When and how often will the AI system be used? Is it continuous, periodic, or triggered by specific events? This helps assess the scale of potential impact.

c) Categories of affected persons

Which natural persons and groups will be affected by the AI system? This must be specific — not just "customers" or "citizens," but identified categories such as "loan applicants aged 18-25" or "benefit claimants in municipality X."

d) Specific risks of harm

What are the specific risks that the AI system poses to the fundamental rights of the identified categories of persons? This is the core of the assessment and must be concrete, not generic. Risks should be mapped to specific fundamental rights (a short sketch of such a mapping follows the list below):

  • Non-discrimination (Article 21 Charter): Could the system produce discriminatory outcomes based on protected characteristics (race, gender, age, disability, religion)?
  • Privacy and data protection (Articles 7-8 Charter): Does the system process personal data? How is privacy preserved? What are the risks of data breaches or misuse?
  • Freedom of expression (Article 11 Charter): Could the system restrict or chill legitimate expression?
  • Human dignity (Article 1 Charter): Could the system treat individuals in a way that undermines their dignity?
  • Right to an effective remedy (Article 47 Charter): Can affected persons challenge decisions made with the AI system?
  • Rights of the child (Article 24 Charter): Are children among the affected persons, requiring additional protections?
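
To keep this mapping concrete, each entry in a risk register can name the affected group, the Charter right at stake, the mechanism of harm, and the supporting evidence. The structure, field names, and example values below are illustrative assumptions, not a prescribed format:

```python
from dataclasses import dataclass


@dataclass
class FundamentalRightsRisk:
    """One entry in an illustrative FRIA risk register."""
    affected_group: str    # specific category of persons, not a generic label
    charter_article: str   # the Charter right the risk maps to
    description: str       # concrete mechanism of harm
    evidence: str          # why the risk is plausible (bias audits, provider docs, complaints)


risk_register = [
    FundamentalRightsRisk(
        affected_group="loan applicants aged 18-25",
        charter_article="Article 21 (non-discrimination)",
        description="Thin credit files are scored lower, disadvantaging younger applicants.",
        evidence="Provider bias audit shows a higher false-rejection rate for the under-26 cohort.",
    ),
]

for risk in risk_register:
    print(f"{risk.charter_article}: {risk.affected_group} - {risk.description}")
```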

e) Human oversight measures

What measures are in place to ensure human oversight of the AI system? Who is responsible for reviewing outputs, and what authority do they have to override the system? How are escalation procedures structured?

f) Measures when risks materialize

What actions will be taken if the identified risks actually materialize? This includes incident response procedures, notification processes, remediation steps, and mechanisms for affected persons to seek redress.

Step-by-step guide to conducting a FRIA

Step 1: Define scope and context

Before diving into the assessment, clearly define:

  • Which specific AI system is being assessed (name, version, provider)
  • What is the intended purpose and context of deployment
  • Which organizational unit will deploy it
  • What the expected scale of deployment is (number of affected individuals, geographic scope)

Step 2: Map affected persons and rights

Identify all categories of natural persons who will be directly or indirectly affected by the AI system. For each category, map which fundamental rights could potentially be impacted. Be specific and concrete — generic statements like "privacy may be affected" are insufficient.

Create a matrix: affected person categories on one axis, fundamental rights on the other. For each intersection, assess whether there is a potential impact (positive, negative, or neutral).
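
A minimal sketch of such a matrix, using placeholder person categories and a simple impact scale (none of the names or values below are mandated by the Act):

```python
# Rows: categories of affected persons; columns: fundamental rights.
# Each cell records the assessed impact: "negative", "positive", or "neutral".
affected_groups = [
    "benefit claimants in municipality X",
    "caseworkers using the system",
    "dependants of claimants",
]
rights = ["non-discrimination", "privacy and data protection", "effective remedy"]

# Start with every cell unassessed, then fill cells in as the analysis progresses.
impact_matrix = {group: {right: "not yet assessed" for right in rights}
                 for group in affected_groups}

impact_matrix["benefit claimants in municipality X"]["non-discrimination"] = "negative"
impact_matrix["dependants of claimants"]["privacy and data protection"] = "negative"

# Unassessed intersections are easy to spot when the matrix is printed row by row.
for group, row in impact_matrix.items():
    print(group, row)
```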

Step 3: Assess risks systematically

For each identified potential negative impact, assess the following dimensions (a simple scoring sketch follows the list):

  • Likelihood: How likely is it that this harm will occur? Consider the system's design, the data it uses, known biases, and deployment context.
  • Severity: If the harm occurs, how serious would it be for the affected individuals? Consider both individual and aggregate impact.
  • Reversibility: Can the harm be undone? A wrongly denied loan application can be reconsidered; reputational damage from a wrongly flagged individual may be harder to reverse.
  • Scale: How many people could be affected? A system processing thousands of applications daily has a different risk profile than one used for occasional decisions.
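
These four dimensions can be combined into a simple ordinal score to prioritize risks. The 1-5 scales and the weighting below are assumptions for illustration, not a methodology required by Article 27:

```python
from dataclasses import dataclass


@dataclass
class RiskScore:
    likelihood: int     # 1 (rare) to 5 (near certain)
    severity: int       # 1 (minor) to 5 (severe)
    reversibility: int  # 1 (easily reversed) to 5 (irreversible)
    scale: int          # 1 (few people) to 5 (very large population)

    def priority(self) -> int:
        # Illustrative heuristic: likelihood x severity, nudged upward
        # for harms that are hard to reverse or affect many people.
        return self.likelihood * self.severity + self.reversibility + self.scale


wrongly_denied_benefit = RiskScore(likelihood=2, severity=4, reversibility=4, scale=3)
print(wrongly_denied_benefit.priority())  # 15 -> treat as significant and define mitigations
```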

Step 4: Define mitigation measures

For each significant risk, define concrete mitigation measures across the following categories (see the sketch after this list):

  • Technical safeguards (bias testing, fairness constraints, accuracy thresholds)
  • Organizational measures (human review processes, oversight committees, appeal procedures)
  • Documentation and monitoring (audit trails, performance metrics, regular review cycles)
  • Communication measures (informing affected persons, transparency about AI use)
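
Continuing the illustrative register idea, each significant risk can carry its own mitigation plan organized along these four buckets. The risk names, bucket labels, and example measures below are placeholders:

```python
# Each significant risk carries measures in all four buckets from the list above.
mitigations = {
    "indirect age discrimination in credit decisions": {
        "technical": ["bias testing before each model release", "accuracy threshold per age band"],
        "organisational": ["human review of every rejection", "quarterly oversight committee"],
        "monitoring": ["audit trail of overrides", "monthly false-rejection rate by age band"],
        "communication": ["notice that AI is used", "explanation and appeal route in rejection letters"],
    },
}

# A risk with an empty bucket is a gap worth flagging before deployment.
for risk, plan in mitigations.items():
    gaps = [bucket for bucket, measures in plan.items() if not measures]
    print(risk, "- missing buckets:", gaps or "none")
```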

Step 5: Consult stakeholders

While Article 27 does not mandate public consultation, good practice and several national guidelines recommend engaging with:

  • Representatives of affected groups
  • Data protection officers
  • Legal counsel
  • Domain experts in the relevant sector
  • Works councils or employee representatives where employment decisions are involved

Step 6: Document and submit

The completed FRIA must be:

  • Documented in a structured format
  • Submitted to the relevant market surveillance authority (Article 27(3))
  • Kept updated when significant changes to the AI system or its deployment occur

The AI Office is tasked with developing a template questionnaire to support deployers (Article 27(5)). Until that template is available, the assessment must be comprehensive enough to address all required elements of Article 27(1).
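
Pending the official template, one pragmatic approach is to mirror the six Article 27(1) elements as named fields in the deployer's own documentation. The schema and example values below are a sketch of one possible layout, not an official format:

```python
from dataclasses import dataclass


@dataclass
class FriaRecord:
    """Illustrative structure mirroring Article 27(1)(a)-(f)."""
    deployer_processes: str                 # (a) how the system is embedded in existing workflows
    timeframe_and_frequency: str            # (b) when and how often the system is used
    affected_person_categories: list[str]   # (c) specific groups, not generic labels
    specific_risks_of_harm: list[str]       # (d) concrete risks mapped to Charter rights
    human_oversight_measures: str           # (e) who reviews outputs and with what authority
    measures_if_risks_materialise: str      # (f) incident response, redress, governance
    last_reviewed: str = ""                 # update whenever the system or deployment changes significantly


record = FriaRecord(
    deployer_processes="Scores feed a caseworker dashboard; the final decision stays with staff.",
    timeframe_and_frequency="Continuous use during office hours, roughly 400 applications per week.",
    affected_person_categories=["benefit claimants in municipality X"],
    specific_risks_of_harm=["indirect discrimination via proxies for household composition"],
    human_oversight_measures="Caseworkers can override any score; overrides are reviewed weekly.",
    measures_if_risks_materialise="Suspend automated scoring, notify affected claimants, offer re-assessment.",
    last_reviewed="2026-02-16",
)
```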

Notification to the market surveillance authority

Under Article 27(3), deployers must notify the results of the FRIA to the relevant national market surveillance authority. This notification should include:

  • The completed FRIA document
  • The data protection impact assessment (DPIA) prepared under Article 35 GDPR, where applicable
  • A description of the governance process used to conduct the FRIA

The market surveillance authority may use this information for risk assessment and enforcement purposes.

Relationship with GDPR DPIA

Many high-risk AI systems that require a FRIA will also require a Data Protection Impact Assessment under Article 35 of the GDPR. While there is overlap, the two assessments serve different purposes:

  • GDPR DPIA: Focuses specifically on risks to data protection rights arising from personal data processing
  • FRIA: Covers a broader range of fundamental rights beyond data protection, including non-discrimination, dignity, and access to services

Article 27(4) provides that, where any of the obligations under Article 27 are already met through a DPIA conducted under Article 35 GDPR, the FRIA shall complement that DPIA. Organizations can therefore conduct them as a single, integrated assessment, provided all requirements of both regulations are addressed.

Common pitfalls to avoid

1. Generic risk descriptions. Stating that "discrimination may occur" is not sufficient. Identify specific discrimination vectors (which protected characteristics, through which mechanism, with what evidence).

2. Treating it as a one-time exercise. The FRIA must be updated when significant changes occur — changes to the AI system, changes in deployment context, or when risks materialize in ways not previously anticipated.

3. Ignoring indirect effects. AI systems can affect people who do not directly interact with them. For example, a credit scoring system affects not just the applicant, but potentially their family and dependents.

4. Insufficient human oversight plans. Stating that "a human will review decisions" is not enough. Specify who, how often, with what authority, and what happens when they disagree with the AI.

5. Missing the consultation step. While not strictly mandated by Article 27, failing to consult affected groups weakens the assessment and may be viewed negatively by supervisory authorities.

6. Not linking to the provider's documentation. The FRIA should reference the provider's technical documentation, risk management system documentation, and instructions for use. The deployer cannot assess risks in isolation from the system's technical characteristics.

Key takeaways

  • A FRIA is required for public bodies, public service providers, and specific private deployers before using high-risk AI
  • It must be completed before the AI system is put into use for the first time
  • The assessment covers fundamental rights broadly — not just data protection
  • It must be submitted to the national market surveillance authority
  • Update the FRIA when circumstances change significantly
  • Integrate with the GDPR DPIA where both are required

Ready to get compliant?

complixo helps you classify, document, and track EU AI Act compliance in minutes — not months.

Start for free