This new post forms part of our series on the AI Act.
The Fundamental Rights Impact Assessment (FRIA) is a new obligation introduced by the EU Artificial Intelligence Act (AI Act). Despite its novelty, the origins of this assessment and the logic behind its conception can be found in pre-existing legislation, such as the GDPR, and in the human/fundamental rights impact assessments that are already good business practice when onboarding new technologies or tools or creating new processes. As its name reveals, the intention is to protect individuals’ fundamental rights from adverse impacts of AI systems. Such rights can be diverse but of equal value, such as the right to access healthcare, the right of assembly, or the right to an effective remedy. To ensure such protection, the FRIA seeks to identify specific risks and address them through appropriate mitigation measures.
This blogpost is a practical guide to the scope of a FRIA under the AI Act, the persons or entities responsible for carrying out such an assessment, and the sectors of application, and it outlines a series of factors to be taken into account in order to achieve a comprehensive assessment.
As already explained in our earlier blogpost (The EU’s newly published Artificial Intelligence Act – an overview in 6 questions | Timelex), the AI Act applies across all sectors, governing the development, deployment, and use of AI systems within the EU. It entered into force on 1 August 2024, but the timeline for its application is deliberately staggered. The same blogpost explains in detail what AI systems are and how they are categorised according to the level and scope of risks they generate (the risk-based approach).
The obligation to conduct a FRIA is not applicable to all users (or, in the language of the AI Act, deployers) of high-risk AI systems. Instead, this is an obligation for the following categories:
deployers that are bodies governed by public law, or private entities providing public services, deploying high-risk AI systems;
deployers of high-risk AI systems used to evaluate the creditworthiness of natural persons or establish their credit score, or used for risk assessment and pricing in relation to life and health insurance.
In this regard, a public entity using biometric identification systems to check passengers’ passports at an airport, a banking institution relying on an AI system to categorise loan applicants’ creditworthiness, or a private company using an AI system to evaluate the eligibility of patients for access to a cancer treatment, are all examples of cases where a FRIA must be conducted.
Thus, the scope of the FRIA is broad but with clear limitations.
A FRIA is a prerequisite for the first use of the system but should be updated at a later stage whenever the deployer considers that the relevant factors have changed. More precisely, Article 27 AI Act lists a series of requirements that a FRIA shall take into account (for a detailed analysis see Part V below). If, for example, while using the AI system the deployer notices that vulnerable groups of people (e.g., children, persons with disabilities, etc.) are affected, although this was not initially foreseen, or that the measures chosen to address the risks that arise are not sufficiently effective or tailored to those risks, then a new FRIA shall be conducted to duly reflect such critical changes.
Taking a closer look at the stand-alone systems that fall within the ambit of the FRIA, the AI Act’s Annex III enumerates the following areas:
biometrics;
education and vocational training;
employment, workers’ management and access to self-employment;
access to and enjoyment of essential private services and essential public services and benefits;
law enforcement;
migration, asylum and border control management;
administration of justice and democratic processes.
To further clarify Annex III, recital 96 provides specific examples of essential public services, such as education, healthcare, social services, housing, or the administration of justice.
However, deployers of AI systems for critical infrastructure, namely those used as safety components in the management and operation of critical digital infrastructure, road traffic, or in the supply of water, gas, heating or electricity, are exempted from the obligation to conduct a FRIA.
As Article 27 AI Act explains, a thoroughly conducted FRIA must include the following information:
a description of the deployer’s processes in which the high-risk AI system will be used, in line with its intended purpose;
a description of the period of time within which, and the frequency with which, the high-risk AI system is intended to be used;
the categories of natural persons and groups likely to be affected by its use in the specific context;
the specific risks of harm likely to have an impact on those categories of persons or groups;
a description of the implementation of human oversight measures, according to the instructions for use;
the measures to be taken where those risks materialise, including the arrangements for internal governance and complaint mechanisms.
Subject to some exceptions, once the FRIA is completed, the deployer shall notify the market surveillance authority of its results. For AI systems used in the public sector, deployers are encouraged to consult stakeholders, including representatives of groups of persons likely to be affected by the AI system, independent experts, and civil society organisations, when conducting the FRIA and designing mitigation measures.
Overall, we can highlight the following considerations as regards the FRIA.
a. FRIA follows a human-centered approach. Individuals are the primary factor of consideration at all stages of the assessment.
b. FRIA builds upon the risk-based logic. It aims to detect risks relevant to a broad range of fundamental rights and to design effective mitigating measures before the assessed AI system is used on the market. Hence, it is a proactive measure, similar to the DPIA under the GDPR. The close link with the DPIA is reaffirmed in the AI Act, which states that if any of the obligations laid down for the FRIA have already been met through a prior DPIA, the FRIA shall complement that DPIA. Just as the DPIA has to be completed before the data processing starts, the FRIA must be completed before the AI system is first used by its deployers. Thus, in reality the FRIA can be seen as a risk management tool. Such a tool cannot serve its purpose unless it incorporates in its design the highest fundamental rights guarantees, similarly to the data protection by design and by default principles that underpin the GDPR.
c. FRIA requires various types of expertise from those who conduct it. To understand the impact on fundamental rights, it is necessary to be aware of the way fundamental rights are perceived and applied through legislation and case law. Moreover, this expert knowledge has to be combined with a technical background to decode the particularities of the AI context. Therefore, FRIA can be seen as a multidisciplinary expert exercise that shapes the use of the AI system.
Even though the AI Act provides some guidance on the basic elements of the FRIA, there is no single template to be followed. Deployers are free to choose their own methodology. There is no official template from the AI Office available so far. Such templates can be expected in the future, but organizations wanting to be prepared should become familiar with the process before the entry into application of most of the AI Act’s high-risk provisions in August 2026. Moreover, it remains to be seen to what extent the methodology provided by the AI Office will take into account specific sectors and situations, and whether it will be suitable for large organizations that have their own complex processes as part of a regional or global compliance programme. The same is true for existing AI projects. While the AI Act will initially not apply to them, any significant changes after August 2026 may trigger the application of the AI Act, and with it the FRIA requirements for all users/deployers. Getting started early is therefore beneficial in many cases. Luckily, there is no need to start from scratch. Several stakeholders have already tried to approach the impact of AI systems on fundamental rights as part of more general templates.
To name a few examples, the High-Level Expert Group on AI, a group of experts appointed by the European Commission, drafted the Assessment List for Trustworthy Artificial Intelligence (ALTAI), based on its earlier Ethics Guidelines for Trustworthy AI. This list translates AI principles into a guiding checklist for developers and deployers to understand some ethical challenges, including an initial understanding of fundamental rights impact assessment.
Another example, the Fundamental Rights and Algorithms Impact Assessment (FRAIA) of the Dutch Ministry of the Interior and Kingdom Relations, includes a specific section on fundamental rights, presented as the final step of the AI system assessment after its intended purposes, input, throughput and output have been thoroughly described. It separates fundamental rights into four thematic clusters (rights relating to the person, including a number of social and economic rights; freedom-related rights; equality rights; procedural rights). This results in a large list of rights that must be considered and assessed for each high-risk AI system.
A specific FRIA template has been produced in the context of the ALIGNER project for deploying AI systems for law enforcement purposes within the EU. It assesses the potential challenges to the fundamental rights most likely to be affected by law enforcement systems (i.e., the presumption of innocence, the right to an effective remedy, the right to a fair trial, the right to equality and non-discrimination, the right to freedom of expression and information, the right to respect for private and family life (privacy), and the right to protection of personal data) and measures the impact by examining the severity of the prejudice and the number of affected individuals.
FRIAs could also take inspiration from existing methodologies on human rights impact assessment. For instance, the methodology and toolbox proposed by the Danish Institute for Human Rights draws inspiration from the UN Guiding Principles on Business and Human Rights and guides businesses, mainly those with large-scale projects, on how to conduct human rights due diligence. This is done by analysing the effects that business activities have on rights-holders such as workers, local community members, and consumers, while observing compliance with fundamental principles, such as non-discrimination.
Another example specific to the AI context is the HUDERIA (Human Rights, Democracy and Rule of Law Impact Assessment) framework developed under the auspices of the Council of Europe. Through a five-phase approach (project summary report, context-based risk analysis, stakeholder engagement process, impact mitigation plan, iterative revisitation) this methodology sets a framework for responsible AI governance by capturing both the technical aspects of AI systems and the sociotechnical context of their development and application.
Many other methodologies exist that may provide input for FRIAs, but no single official methodology exists right now that is applicable in all circumstances. This may remain the case even when the AI Office provides guidance. Hence, now and perhaps in the future, organizations may wish to build their own methodology.
When selecting a template (to further develop) and when putting in place processes and procedures for documenting one’s own FRIA, several elements should be considered.
a. Specific context of use of the AI system. It is necessary to list the intended needs and objectives covered by its use, and to identify the categories of affected individuals. Special consideration is required when vulnerable groups are affected. Moreover, it is important to outline the geographical scope of use. This stage requires the collaboration of technical and legal experts in order to understand the current state of the technology used, which features of the AI system may have an adverse impact, and how to describe them using terminology in line with the AI Act. Mapping the social, technical and economic context in which the AI system operates is an important starting point to focus the FRIA on specific fundamental rights. Describing the objectives should not be a mere theoretical exercise but should focus on the specific situation at hand, including a detailed analysis and summary of the legal framework applicable to the specific use case, including sector rules. Moreover, it is useful to justify why the use of an AI system has been favored over more traditional options that do not involve the use of AI.
b. Algorithmic transparency by comprehending the modus operandi of the AI system. This element requires collaboration between providers and deployers. Once the intended uses and objectives have been identified, the deployer should be able to understand and explain how the features of the algorithm satisfy them. The functioning is heavily influenced by the quality of the input data used to train the algorithm. In this step it is fundamental to describe possible sources of bias and how to mitigate them. It is also crucial to describe who will have access to the data and whether data will be shared with third parties and for which purposes (e.g. retraining of the AI system). Furthermore, having identified the categories of potentially affected individuals makes it possible to decide concretely how information about the algorithm should be disseminated in plain and explainable language. The key is to make the AI system accessible to users and affected individuals.
c. Accountability through a detailed analysis of the applicable governance measures. This element is inextricably linked to the high-risk nature of the AI systems that are subject to a FRIA. This means describing how the AI output influences the decisions made and how AI is approached from an organizational perspective. The procedures put in place to make decisions and the intended period of use of the AI system should be explicitly mentioned. Accountability is closely linked to human oversight measures. The organization conducting the FRIA shall demonstrate that it deploys effective governance measures, such as adequately trained staff to ensure that decisions are made responsibly and are reviewed when necessary, and policies for incident handling, data subject rights, incidental findings, etc. Another important parameter of accountability is to describe the existing auditing, safeguarding, and complaint mechanisms. Deployers should have in place clear policies for the regular monitoring of these measures and for updating them when they are no longer effective.
d. Identification of the affected fundamental rights. Once the content, objective, modus operandi, and governance of the AI system have been determined, it is possible to assess whether and which fundamental rights are particularly affected. Assessing the effect on specific fundamental rights involves describing specific risks. In each risk scenario, one has to examine the applicable legislative framework, as supplemented by relevant court decisions that help interpret the law, and see whether specific conditions apply in case of a violation. For more clarity it is useful to categorize fundamental rights according to the core value affected, e.g. health, property, fundamental freedoms, etc.
As for measuring the risk, the AI Act defines “risk” as the combination of the probability of an occurrence of harm (= likelihood) and the severity of that harm. The concepts of “likelihood” and “severity” constitute abstract notions that always require conceptualization, but inspiration for the development of the risk assessment matrix can be drawn from other assessments, such as the DPIA.
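By way of illustration only, the sketch below shows one possible way of combining likelihood and severity scores into a single risk level, loosely modelled on the qualitative matrices commonly used in DPIA practice. The four-point scales, the labels, the multiplication rule and the thresholds are our own assumptions, not something prescribed by the AI Act or by any official guidance.

```python
# Illustrative only: a minimal risk-matrix sketch inspired by common DPIA practice.
# The four-point scales, labels and thresholds are assumptions, not taken from the AI Act.

LIKELIHOOD = {"remote": 1, "possible": 2, "likely": 3, "almost certain": 4}
SEVERITY = {"negligible": 1, "limited": 2, "significant": 3, "severe": 4}


def risk_level(likelihood: str, severity: str) -> str:
    """Combine the likelihood and severity of a harm into a qualitative risk level."""
    score = LIKELIHOOD[likelihood] * SEVERITY[severity]
    if score >= 12:
        return "high"
    if score >= 6:
        return "medium"
    return "low"


# A severe harm (e.g. wrongly denying access to an essential service) rated as
# merely "possible" lands in the medium band, while the same harm rated "likely"
# becomes high risk.
print(risk_level("possible", "severe"))  # -> medium (2 x 4 = 8)
print(risk_level("likely", "severe"))    # -> high   (3 x 4 = 12)
```

On such a matrix, the subjective choice of the likelihood rating alone can move a serious harm between bands, which is exactly the sensitivity discussed in the next paragraph.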
Comparing the FRIA to other existing assessment methodologies, such as human rights impact assessments or social and economic impact assessments, the FRIA seems to be formulated more as a risk assessment tool, even though its name points to the element of impact. This stems from the text of Article 27 AI Act, the reliance on the DPIA, which is a risk assessment, and the inclusion of “risk” in the list of important definitions. According to the most favourable approach, as discussed in the literature, in the context of the FRIA, impact, understood as the severity of the harm, will be measured in conjunction with the likelihood that this harm may occur. Taken together, these parameters eventually result in the measurement of risk. Seen from this perspective, there is room for adjustments that may end up lowering the threshold of protection for fundamental rights. In particular, in a risk assessment a harm (impact) may in itself be serious, but adjusting the likelihood of occurrence, which is a highly subjective element, can artificially reduce the resulting risk level. In this situation, even if the market surveillance authority is notified of the results of the FRIA, it is not obvious from the text of the AI Act that the deployer will be obliged to apply specific mitigation measures or face other concrete consequences. By contrast, in an impact assessment, even if the impact is assessed as being of low importance, the responsible entity shall take appropriate mitigation measures. Therefore, a purely impact-based approach may provide more guarantees than the risk approach, which tends to be more formalistic.
In any case, all these methodologies rely on complex measurements that need to be tailored to the context of each assessment and to the sector and situation at hand.
Overall, while the format of the FRIA remains largely undefined in terms of a strict structure to be followed, and guidance from the AI Office is still expected, Timelex can assist you in decoding the complex legal requirements of the new legislative framework of the AI Act. Being at the forefront of helping market players understand the essentials of the AI Act, through dedicated workshops, seminars, consultations, and participation as the legal and ethical partner in several EU research projects related to the development of AI systems, Timelex is best placed to adapt its vast experience in conducting impact and risk assessments in other fields of digital law to the specificities of the highly technical AI context.