Like in many other sectors, Artificial Intelligence (AI) has the potential to produce significant benefits in Law Enforcement. Law Enforcement Agencies (LEAs) and Law Enforcement Officers (LEOs) deal with huge amounts of data every day, both for intelligence purposes (to prevent crime, to maintain public order, to optimize the use of resources) and when investigating crime. AI can help them find the right pattern, the right information or the right person more quickly, and may at times even reveal valuable information that a human would not have found, given the real constraints in terms of time and resources LEAs face, or even irrespective of the amount of time and resources spent on a case. Hence, whether it is to find a suspect or a victim, to analyze huge amounts of data that form potential evidence, or to support dynamic data-driven patrol routes, AI systems can significantly aid Law Enforcement in their tasks.
In fact, AI already does so in day-to-day practice. Although AI features are not necessarily marketed as such, they have been around for years. (Mobile) forensics software suites, for example, are used widely in the EU and contain many AI features, necessary to help LEAs deal with the ever-increasing amounts of data found on the personal devices they seize as part of an investigation. Without such features, the time and effort required to solve cases would become prohibitively high. Another example is the use of open source intelligence (OSINT) tools. Such tools, which are often intensively data-driven and AI-driven, enable LEAs to use openly available information on the internet to support intelligence activities and investigations. These are just some examples that highlight a general trend of technology supporting LEAs in their demanding tasks.
Despite this huge potential, the use of such tools, if unchecked, could at times raise concerns about privacy, as well as about a potential negative impact on other fundamental rights. The AI Act puts it as follows (recital 59):
“Given their role and responsibility, actions by law enforcement authorities involving certain uses of AI systems are characterized by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person’s liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter. In particular, if the AI system is not trained with high quality data, does not meet adequate requirements in terms of its performance, its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner. Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defense and the presumption of innocence, could be hampered, in particular, where such AI systems are not sufficiently transparent, explainable and documented. It is therefore appropriate to classify as high-risk, insofar as their use is permitted under relevant Union and national law, a number of AI systems intended to be used in the law enforcement context where accuracy, reliability and transparency is particularly important to avoid adverse impacts, retain public trust and ensure accountability and effective redress”.
Hence, the use of AI by Law Enforcement has great potential, but AI tools should generally be used only after due consideration of the potential negative effects and after appropriate governance measures have been put in place.
Because of this power imbalance and the real potential for significant negative impacts if inappropriate tools are used, or if appropriate tools are used inappropriately, a significant number of Law Enforcement-specific AI use cases (or rather AI systems meant for such purposes) are covered by the AI Act.
As explained in previous posts, the AI Act makes a distinction between certain forbidden AI systems, high-risk AI systems (allowed but regulated) and other AI systems that are either not regulated or subject to limited (transparency) requirements.
Importantly, even if an AI system for Law Enforcement purposes is not regulated by the AI Act, it may still be necessary to think about certain compliance elements covered in the AI Act, and governance measures are always necessary. It is important to remember that the AI Act is only one element of the legal framework applying to the use of AI by Law Enforcement. In addition to the AI Act, LEAs have to comply with data protection law (Law Enforcement Directive and the complete framework of national implementation measures), as well as with the national law on policing, on criminal procedure, etc. Hence, the question whether a specific AI tool/system is permissible depends on more than just the AI Act.
The purpose of this blog post, however, is to introduce the AI Act as an important part of the legal landscape throughout Europe. It will cover the following aspects:
The types of Law Enforcement related AI systems (and hence use cases for AI) that are forbidden;
The types of Law Enforcement related AI systems (and hence use cases for AI) that are inherently considered high-risk by the AI Act, meaning they are allowed but strictly regulated;
The relevant roles under the AI Act and how they apply to LEAs;
The responsibilities/requirements for LEAs using an AI system;
The timeline for application of the AI Act;
By way of conclusion: 7 key takeaways for LEAs.
2. Forbidden AI systems
For a limited number of AI systems, the EU legislator has decided that they should not be used, irrespective of the safeguards that are put in place. The AI Act lists a number of specific AI systems relevant to Law Enforcement whose use is forbidden, namely:
AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;
AI systems intended to individually categorize natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation. Note that AI systems meant for labeling or filtering legally acquired data are not covered by this prohibition;
AI systems that make risk assessments of natural persons in order to assess or predict the risk of that person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics. Note that the prohibition does not cover AI systems used to support the human assessment of the involvement of a person in a criminal activity, when this is already based on objective and verifiable facts directly linked to a criminal activity. Such supporting systems will instead qualify as high-risk AI systems. The reason for this is that the EU legislator wants to ensure that, in line with the presumption of innocence, people are judged on their actual behavior, and not purely on the basis of certain risk factors, such as demographic data;
AI systems that use ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement. The most common example of this is live facial recognition software. Note that the prohibition only applies to physical spaces, and hence online facial recognition (e.g. on an image or video) is not covered by this prohibition, even if live (although this is usually not live but post factum, i.e. on images, videos, etc. that have already been recorded). Moreover, there are three exceptions to this rule, namely if strictly necessary for:
The targeted search for specific victims of abduction, trafficking in human beings and sexual exploitation of human beings as well as the search for missing persons; or
The prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or a genuine and present or genuine and foreseeable threat of a terrorist attack; or
The localization or identification of a person suspected of having committed a criminal offence, for the purposes of conducting a criminal investigation, prosecution or executing a criminal penalty for certain serious offences listed in the AI Act, where the Member State concerned punishes this offence by a custodial sentence or a detention order for a maximum period of at least four years.
The exceptions only apply to the extent that Member States decide to partially or fully authorize them. This means that they can decide to only grant some of the three exceptions and not others, and/or that they can limit the list of offences referred to in the third exception. In addition, if Member States do allow the exceptions, they can only do so subject to the requirement of prior authorization granted by a judicial authority or an independent administrative authority of the Member State in which the use is to take place, whose decision is binding, issued upon a reasoned request (with some specific exceptions for situations of urgency). Member States must moreover provide for detailed rules regarding the request, issuance and exercise of, as well as supervision and reporting relating to, such authorizations. This must include necessary and proportionate safeguards and conditions in relation to the use of such systems, in particular as regards the temporal, geographic and personal limitations.
In such cases, the ‘real-time’ remote biometric identification AI system will still be considered a high-risk AI system, since remote biometric identification systems are generally considered high-risk.
When a specific AI system does not exactly match the description of the prohibition, it will be allowed under the AI Act, but as illustrated in the examples above (and below under high-risk), it will normally be regulated as a high-risk system.
Moreover, the AI Act foresees the possibility for the list of forbidden use cases (and high-risk use cases) to be adapted over time. Hence, what is forbidden will evolve together with technological developments in the years to come.
3. High-risk use cases (allowed but regulated)
In addition to the forbidden AI systems, the main relevance of the AI Act for Law Enforcement is that it identifies a number of categories of Law Enforcement-specific AI systems in its Annex III that are considered to be high-risk. They can be grouped into three categories:
Biometrics;
Profiling;
Evaluation of evidence.
These categories, as well as some general notes on the scope of this qualification as high-risk, are discussed below.
3.1 Biometrics
A first group of high-risk systems for Law Enforcement in the AI Act relates to the use of biometrics. AI systems using biometrics in Law Enforcement are nearly always high-risk. A relevant exception concerns biometric identification systems meant only to verify the identity of a person, e.g. AI systems verifying the identity of someone trying to access a building, a device, etc.
Examples of ‘normal’ high-risk biometric systems include biometric categorization systems, e.g. systems to filter or label lawfully acquired data using biometrics, such as facial recognition on images or videos to group photos of the same person in a data set; and emotion recognition systems for Law Enforcement, e.g. AI in bodycam software to predict whether a person is likely to attack. For all biometrics, the AI Act clearly stipulates that this is subject to such systems being permissible under national law (and EU law), highlighting that the AI Act is only one part of the puzzle.
Remote biometric identification systems are treated with an additional level of concern and consideration. As explained above, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement is in principle forbidden, subject to specific exceptions. In so far as the exceptions apply, these systems will be allowed as high-risk, but the Member States are obliged to provide a detailed framework and additional safeguards. Remote biometric identification systems that are not real-time but used after the fact, or any form of online biometric identification (e.g. on the basis of facial recognition in photos or videos), are in principle allowed (so not dependent on Member States passing an exception into national law, and hence they cannot be forbidden), but they are subject to additional requirements on top of the general requirements for high-risk systems, including prior authorization (unless used for the initial identification of a suspect based on objective and verifiable facts) and the requirement that the use be targeted. Hence, both types of remote biometric identification systems are subject to additional requirements that other high-risk Law Enforcement systems are not subject to, with real-time systems being the most strictly regulated.
3.2 Profiling
A second group of high-risk systems for Law Enforcement in the AI Act relates to the use of various profiling techniques. The AI Act defines the following categories:
AI systems meant to assess the risk of a natural person becoming the victim of criminal offences;
AI systems meant to assess the risk of a natural person offending or re-offending, not solely based on profiling (but e.g. based on facts that provide an indication of a higher risk of offending, personality traits and characteristics, or past criminal behavior);
AI systems meant for the profiling of natural persons in the course of the detection, investigation or prosecution of criminal offences.
Notably, the AI Act makes a distinction between profiling for risk assessment and profiling for other Law Enforcement purposes, as well as a distinction between risk assessment systems for victimization and risk assessment systems assessing the risk of offending or re-offending. Note that AI systems meant to assess the risk of offending or re-offending solely based on profiling are forbidden, as mentioned above.
The third bullet above is a general category of AI systems using profiling for various Law Enforcement purposes. This may cover many use cases, and will often include various risk assessments vis-à-vis suspects as well, not aimed at assessing the risk of offending or re-offending, but rather at, for instance, flight risk or their importance in a criminal network.
3.3 Evaluation of evidence
A third main category of AI systems related to Law Enforcement covered by the AI Act concerns systems that relate to the evaluation of evidence. The AI Act specifically mentions:
AI systems meant to act as polygraphs and similar tools;
AI systems intended to evaluate the reliability of evidence in the course of an investigation or prosecution.
Both types of AI systems have an impact on how evidence gathered in a case is assessed, on the definition of further lines of enquiry and potentially down the line on the decision whether a prosecution is brought or not. Hence, the AI Act treats these AI systems with additional care as well.
3.4 Scope and exceptions
It is important to make some general remarks regarding the scope of the high-risk AI systems identified in the AI Act. As with the prohibitions, in principle an AI system is only covered, and therefore only regulated, if it matches the description in the AI Act. This leaves some room for interpretation. What about an AI system that helps investigators identify the best next steps or guides them through the various steps of the technical part of an investigation? While this may be marketed as an AI assistant for investigators, such a system could easily be imagined to have a quite direct impact on the evaluation of the evidence collected so far, and getting stuck in the process might indicate that the evidence will not suffice.
For this reason, the European Commission will provide more guidance on the existing high-risk categories. Moreover, here as well, categories can be adapted and changed in the future to match technological developments and new insights.
In addition, the AI Act recognizes that systems that may at first glance seem to fit the high-risk category, can sometimes present limited risk, namely when:
The AI system only performs a narrow procedural task; or
The AI system merely improves the result of a previously completed human activity; or
The AI system only detects decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment without proper human review (so not a decision support system, but rather something closer to statistical analysis indicating outliers and potential issues in prior (human) decisions); or
The AI system performs only a preparatory task prior to the actual high-risk assessment (by a human or another AI system).
Note that it is the developer of the system that has to claim these exceptions, not the user. In other words, the system should always have been intended for such limited interventions. This is in line with the product regulation nature of the AI Act, which seeks to provide requirements for AI systems intended for a given defined purpose.
AI systems that perform profiling of natural persons cannot benefit from the exception. Interestingly, as with the other qualification elements mentioned, the conditions of this exception are subject to review in the future as well.
4. Roles under the AI Act
The AI Act defines different roles, which are most relevant for LEAs in relation to high-risk systems. The two roles most relevant for LEAs are the following:
Providers are the most regulated stakeholders, because of the decisive role they have in the design and development of an AI system. Providers are the natural or legal persons, public authorities, agencies or other bodies that develop or have developed an AI system and place it on the market or put it into service under their own name or trademark, whether for payment or free of charge;
Deployers are the natural or legal persons, public authorities, agencies or other bodies using an AI system under their authority, except when the AI system is used in the course of a non-professional activity.
In most cases, LEAs will obtain AI tools from a third party, which may be a commercial provider or an organization providing solutions to LEAs for free, e.g. through Europol’s Innovation Lab, EACTDA, etc. Hence, in most cases LEAs will act as a deployer of the AI system.
LEAs developing their own high-risk AI systems would qualify as providers. In particular, this also applies when:
A LEA makes a substantial modification to a high-risk AI system that has already been placed on the market or put into service, in such a way that it remains high-risk;
A LEA modifies the intended purpose of an AI system, including a general-purpose AI system, which has not been classified as high-risk and has already been placed on the market or put into service, in such a manner that the AI system becomes a high-risk AI system.
While LEAs will in most cases be deployers, it may be relevant to know that the AI Act imposes the following conditions on providers:
They must establish and keep up to date a risk management system that deals in particular with the known and the reasonably foreseeable risks related to the use of the system, as well as with reasonably foreseeable misuse, and that proposes risk management and mitigation measures;
They must implement appropriate data management and data governance practices (i.a. relating to design choices, data collection, assumptions, data processing, bias management, quality assurance, etc.);
They must create and maintain technical documentation on the system, which can be made available to authorities;
They must make sure the system has automatic record-keeping/logging capabilities;
They must make sure the system is designed and developed in a way that is sufficiently transparent for deployers to interpret it correctly, providing information to this extent (instructions for use);
They must enable the effective possibility for humans to oversee the high-risk AI system while it is being used (although certain measures may be required from the deployer as well);
They must design and develop the high-risk AI system in such a way that it achieves an appropriate level of accuracy, robustness and cybersecurity, and performs consistently in those respects throughout its lifecycle.
It is important to understand two things in this regard:
These requirements and their conformity assessment are, in the majority of cases of high-risk AI systems for Law Enforcement, based on internal control by the AI provider (the one exception is biometric AI systems). This means that due diligence on the provider is of the utmost importance: can LEAs trust the assessment of the provider? Issues at the level of the provider could after all have a very real negative impact in the field, leading to liability, reputational damage, etc.;
LEAs acting as deployers should understand their own obligations as deployers (see next section) and position them in relation to the provider obligations. The AI Act at times leaves some leeway for providers to push obligations down to the deployer. Human oversight is a good example of that. Despite being of critical importance, it is possible for the provider to argue that many governance measures needed for effective human oversight must be taken at the deployer level, i.e. by the LEA itself, requiring additional time and resources and, importantly, LEA personnel already trained in AI. Here as well, it is important to actively scrutinize the AI tool provider, not only on whether the tool they offer seems promising, but also on whether they have thought about compliance both on their side and on the LEA’s side, and on how they support the LEA in making this process easy and effective, e.g. by having effectively built human oversight measures into the tool itself and by proactively bridging any knowledge gaps through appropriate training offered alongside the tool.
5. Responsibilities for LEAs using an AI system
Assuming that LEAs act as deployers (i.e. use the AI system), the AI Act still imposes a number of significant obligations on them, namely LEAs must:
Take appropriate technical and organisational measures to ensure they use such systems in accordance with the instructions for use accompanying the systems: this requires trained personnel and potentially additional investment (including time and resources);
Assign human oversight to natural persons who have the necessary competence, training and authority, as well as the necessary support, in particular when the deployer exercises control over the high-risk system: as mentioned above, it is important to check to what extent the provider is able to help with training and with supporting the implementation of governance measures (including establishing the appropriate chain of command and how to escalate issues, as well as organizing oversight not only at the level of a given use, but also at the level of regularly assessing the system as a whole);
Ensure that staff dealing with the operation of an AI system has sufficient AI literacy, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups on whom the AI systems are to be used;
Ensure, to the extent that they can exercise control over the input data, that the input data used is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system: LEAs will often have such control in the case of Law Enforcement systems, and here as well providers may help facilitate this process;
Monitor the operation of the high-risk AI system on the basis of the instructions for use and, when relevant, inform providers of issues, as well as others (depending on the case: the distributor, importer or the market surveillance authority) in cases of serious risks and serious incidents, unless this concerns sensitive operational data of LEAs;
Register their use of high-risk systems in an EU database set up for this purpose. For LEAs, the registration is done in a secure non-public section of the EU database and only contains a subset of the information that must normally be registered, in order to respect the LE context; moreover, when they find as part of the registration that the system they intend to use has not yet been registered by the provider, they shall inform the provider or the distributor;
Keep the logs automatically generated by the high-risk AI system, to the extent such logs are under their control, for a period appropriate to the intended purpose of the high-risk AI system, of at least six months;
Inform natural persons of the fact that they are subject to the use of the high-risk AI system in cases covered by Annex III (which includes the LE uses discussed), where the system makes decisions or assists in making decisions related to natural persons. However, the AI Act here refers to Art. 13 of the Law Enforcement Directive, which allows for a more limited or delayed form of information to respect LE purposes;
Cooperate with national competent authorities to implement the AI Act;
When required by applicable data protection law (i.e. the Law Enforcement Directive), carry out a Data Protection Impact Assessment (DPIA), using information provided to them by the provider under its transparency obligation;
Implement a Fundamental Rights Impact Assessment (FRIA) before the first use of the system and update it whenever relevant elements change or are no longer up to date: the AI Office should in the future provide templates (including through an automated tool) for this, which hopefully will also be adapted to the Law Enforcement context. Generally, overlap with the DPIA should be avoided.
Some of these obligations impose significant requirements on Law Enforcement. In particular, the new FRIA obligation (and its interaction with the DPIA) raises some questions in terms of practical implementation. The hope and expectation is that appropriate standardized templates will be provided for this. More challenging still from an operational point of view will be making sure that appropriate competence is developed through training, covering awareness of the AI Act and its obligations, national law relevant to AI use, and the technical and organizational elements needed for proper AI governance within the organization.
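Purely by way of illustration, the sketch below shows one possible way a LEA could keep internal track of the deployer obligations listed above, per high-risk AI system. The structure, field names, statuses and example entries are hypothetical assumptions made for this sketch and are not prescribed by the AI Act; they simply translate the obligations into a simple internal record with an owner per item.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Hypothetical internal record for tracking AI Act deployer obligations per system.
# All names and example values are illustrative, not taken from the AI Act itself.

@dataclass
class DeployerObligation:
    description: str                      # e.g. "Assign trained human oversight"
    responsible: str                      # role or unit within the LEA
    status: str = "open"                  # "open", "in progress" or "done"
    last_reviewed: Optional[date] = None  # date of the last internal review

@dataclass
class HighRiskSystemRecord:
    system_name: str
    provider: str
    intended_purpose: str
    fria_completed: bool = False          # Fundamental Rights Impact Assessment done?
    dpia_completed: bool = False          # Data Protection Impact Assessment done?
    registered_in_eu_database: bool = False
    log_retention_months: int = 6         # AI Act minimum; national law may require more
    obligations: list = field(default_factory=list)

    def open_items(self):
        """Return descriptions of obligations that still need attention."""
        return [o.description for o in self.obligations if o.status != "done"]

# Example usage with a fictitious forensic triage tool
record = HighRiskSystemRecord(
    system_name="Example forensic triage assistant",
    provider="Example vendor",
    intended_purpose="Evaluation of evidence (Annex III)",
    obligations=[
        DeployerObligation("Use per the provider's instructions for use", "Digital forensics unit"),
        DeployerObligation("Assign trained human oversight", "Unit commander"),
        DeployerObligation("Ensure AI literacy of operating staff", "Training department"),
        DeployerObligation("Monitor operation and report serious incidents", "Digital forensics unit"),
    ],
)

print(record.open_items())
```

Whether such a register lives in a spreadsheet, a case management system or a small script is immaterial; the point is that every obligation has a clearly assigned owner and is reviewed regularly.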
6. Timeline for AI Act application and preparation
The AI Act has a staggered application, meaning that different provisions will start to apply at different times. Relevant for Law Enforcement are at least the following dates:
By 2 February 2025 already, the rules on forbidden AI systems will start to apply;
By 2 February 2026, the European Commission should have provided practical guidance with examples to clarify what qualifies as a high-risk AI system under the existing provisions of the AI Act and what does not;
By 2 August 2026, the main provisions of the AI Act, including those relating to the requirements for high-risk systems in Law Enforcement, will start to apply (both for providers and for deployers).
An important element to consider here is the impact for existing tools used by LEAs. Prohibited tools must be discontinued before 2 February 2025. For existing AI tools, and AI features in existing tools and software suites that are high-risk systems, the AI Act in principle will only apply if these are subject to significant changes in their design after 2 August 2026. Only when the AI system is subject to the AI Act will provider and deployer obligations apply.
This may seem like the AI Act only becomes relevant in 2026. That, however, is not the case. First, LEAs should get started today with building the necessary competence, creating awareness and ensuring that sufficient resources will be available to meet the new requirements. This will take time and effort and getting started early will yield the best results.
Second, while it may seem like existing tools will largely not be caught by the AI Act at least for a while, the notion of significant change in design may be less of a threshold than one would think. Recital 177 of the AI Act clarifies that this is equivalent to the notion of substantial modification, which is defined in Article 3 of the AI Act, meaning:
A change, not planned or foreseen in the initial conformity assessment of the provider, as a result of which:
The compliance of the AI system with the AI Act is affected; or
There is a modification of the intended purpose.
This definition is rather broad, and could arguably be interpreted as meaning that any major update that changes the functionalities of a tool will then immediately be caught by the AI Act. If that is the case, deployer obligations for LEAs will also apply as of that update.
The risk of underpreparing is exacerbated by the fact, as mentioned above, that AI features in existing LEA tools and software suites are often not marketed as such at all. LEAs may therefore well be using tools that they do not readily think of as AI, but that qualify as AI, in particular given the very broad definition of AI in the AI Act. Mapping potential instances of AI in existing tools and software would therefore be a very helpful exercise for LEAs wanting to prepare for the impact of the AI Act. This also again highlights the importance of selecting the right vendors, who can help LEAs identify which features and services are AI-driven and can help them prepare for the obligations coming their way.
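Purely as an illustration of what such a mapping exercise could look like in practice, the sketch below inventories existing tools, notes features suspected to be AI-driven and assigns a provisional AI Act classification to prioritize follow-up with vendors. All tool names, vendors, classifications and follow-up actions are hypothetical assumptions for the sake of the example.

```python
from dataclasses import dataclass

# Illustrative sketch of a simple AI mapping exercise: inventory existing tools,
# record which features are (likely) AI-driven, and give each tool a provisional
# AI Act classification so that follow-up work can be prioritized.

@dataclass
class ToolEntry:
    name: str
    vendor: str
    ai_features: list                 # features suspected or confirmed to be AI-driven
    provisional_classification: str   # "prohibited", "high-risk", "limited/none", "unclear"
    follow_up: str                    # next step, e.g. "ask vendor", "legal review"

inventory = [
    ToolEntry(
        name="Mobile forensics suite (example)",
        vendor="Vendor A (hypothetical)",
        ai_features=["image categorisation", "face grouping"],
        provisional_classification="high-risk",
        follow_up="Ask vendor for AI Act conformity documentation",
    ),
    ToolEntry(
        name="OSINT collection tool (example)",
        vendor="Vendor B (hypothetical)",
        ai_features=["entity extraction", "translation"],
        provisional_classification="unclear",
        follow_up="Legal review of intended purpose against Annex III",
    ),
]

# Simple prioritisation: anything provisionally prohibited or high-risk comes first.
priority = [t for t in inventory if t.provisional_classification in ("prohibited", "high-risk")]
for tool in priority:
    print(f"{tool.name}: {tool.provisional_classification} -> {tool.follow_up}")
```

Even a simple inventory of this kind makes it easier to decide which vendor conversations and legal reviews to schedule first.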
7. Key takeaways
By way of conclusion of this introductory overview of the impact of the AI Act on Law Enforcement, these are the key takeaways for LEAs:
LEAs will usually be deployers under the AI Act. Even if deployers are less regulated than providers, there are still significant obligations;
Addressing deployer obligations under the AI Act will require a significant investment of time and resources. Selecting the right tool providers is of paramount importance;
The AI Act will start to apply quite soon, even if it may not seem that way. Starting the preparation as soon as possible is essential. AI literacy, training and creating awareness in an organization take time, as does acquiring additional resources for this purpose;
AI is used in many existing tools and software suites, and LEAs are almost certainly already using AI, especially given the broad definition of AI in the AI Act. A mapping exercise of high-risk and forbidden applications is the best way to get started;
Prohibited practices start to apply as of 2 February 2025. Mapping tools that potentially qualify as such is urgent, as is discontinuing tools that effectively meet the definition of forbidden AI systems;
LEAs should invest in AI (Act) knowledge and follow up on the evolutions to be expected in the coming years. This includes e.g. guidance on high-risk use cases by 2 February 2026 and developments regarding the FRIA (expected around 2 August 2026), as well as training and resources provided to LEAs either for free (e.g. by CEPOL, as part of tools provided for free through the Europol tool repository, or as part of LEA-specific projects like CC4AI/AP4AI) or by private players. The content of this blog post and more is also available as a free e-training for LEAs provided by Timelex and hosted by the Polish Platform for Homeland Security (see information below).
LEAs should, in addition to the specific requirements of the AI Act, make sure that they understand the implications of other legislative instruments for the potential use of certain AI tools, in particular data protection law (the Law Enforcement Directive and its national implementation), national policing law and criminal procedure, as well as any potential future national law on AI, e.g. the national implementation (or lack of implementation) of the exceptions for ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of Law Enforcement.
This blog post was written in the context of the ARICA project https://www.aricaproject.eu/, in which Timelex participates as a partner. As part of that project, Timelex also wrote a white paper which covers, among other topics, the impact of the AI Act for Law Enforcement. Anyone looking for more in-depth information should consider reading the relevant sections of that white paper, available here: https://sparksinthedark.net/investigations-dark-web-arica-project-white-paper/. In addition, any LEAs/LEOs wanting more information on the same topic in a more accessible form could consider registering for the e-training created by Timelex in this same project, hosted by the Polish Platform for Homeland Security. The training is available free of charge. More information on this can be found at https://sparksinthedark.net/impact-ai-act-law-enforcement-training/
Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the granting authority can be held responsible for them.