The EU’s newly published Artificial Intelligence Act – an overview in 6 questions


After three years of legislative work, the EU Artificial Intelligence Act (‘AI Act’) was finally published in the Official Journal on 12 July 2024, becoming the world’s first comprehensive artificial intelligence (AI) regulation.

With this post, we begin a series covering the AI Act, where we will explore specific aspects and delve deeper into the most interesting topics. To put these discussions into perspective, let’s start with some basics, covering six key questions.

1. What is the goal of the AI Act?

While substantive changes were made to the regulation during the legislative process, the general goal has remained the same: to maximise Europe’s potential to compete globally, fostering investment in AI and innovation, while making sure that AI technologies work for people and are a force for good in society. Hence, the AI Act aims to manage and mitigate the risks that AI may present to society if left unchecked, such as risks to the health, safety and fundamental rights and freedoms of natural persons, as well as risks to public interests (e.g., public health, protection of critical infrastructure, environmental impact, etc.). The ambition is to manage these risks appropriately, focusing on robust and trustworthy AI, without creating a burden that might unnecessarily stifle innovation.

Notably, the AI Act does not cover AI liability issues, which will be addressed in the planned AI Liability Directive; it therefore contains no rules on liability for damage caused by the output of an AI system. Nor does the AI Act answer questions related to potential copyright infringement, such as the use of copyrighted works for AI training or outputs that closely resemble authors’ original works.

2. What is regulated by the AI Act?

The AI Act can be seen as a type of product regulation, which will apply to artificial intelligence systems and general-purpose AI models across all sectors and domains. That said, the regulation does not treat all AI systems in the same way. The rules mainly focus on AI systems that are incorporated in, or are themselves, a product already regulated under other EU product rules, as well as on stand-alone AI systems that are not subject to existing product regulation but present a high level of risk. For stand-alone systems, the high risk is linked to specific use cases or application domains, such as biometric identification and categorisation, emotion recognition, critical infrastructure, education and vocational training, employment, access to public services, law enforcement, migration, border control and justice, to name a few.

As regards AI systems that are incorporated in, or are themselves, a product already regulated under other EU product rules, there are two sets of relevant legislation. For some products, covered in Annex I, Section A, the AI Act will complement the existing rules on specific products that may constitute or incorporate AI, such as medical devices or machinery, which are already covered by the Medical Device Regulation or the Machinery Directive, respectively. In those cases, products must comply with the AI Act in addition to those rules.

For certain other products, namely those covered by Section B of Annex I, the AI Act does not apply directly; instead, the AI Act’s requirements for high-risk AI systems will be incorporated indirectly when the European Commission adopts delegated acts under those sectoral laws. Examples include cars and other types of motor vehicles that use AI as a safety component.

More specifically, the AI Act covers:

  • Harmonised rules for the placing on the market, the putting into service, and the use of AI systems in the Union;

  • Prohibitions of certain AI practices;

  • Specific requirements for high-risk AI systems and obligations for operators of such systems;

  • Harmonised transparency rules for certain AI systems;

  • Harmonised rules for the placing on the market of general-purpose AI models;

  • Rules on market monitoring, market surveillance, governance and enforcement;

  • Measures to support innovation, with a particular focus on SMEs, including start-ups.

The AI Act will apply to AI systems placed on the EU market. This means that providers located outside the EU will also have to comply with its rules if they want to make their AI products available in the EU.

There are, however, some exceptions. For example, the AI Act does not apply to:

  • AI systems exclusively for military, defence or national security purposes;

  • AI systems released under free and open-source licences, unless they are placed on the market or put into service as high-risk AI systems, or fall within the scope of the prohibited AI practices or of the specific transparency obligations for providers and deployers of certain AI systems;

  • Natural persons using AI systems in the course of a purely personal, non-professional activity;

  • AI systems or AI models, including their output, specifically developed and put into service for the sole purpose of scientific research and development.

Note that many low-risk AI systems are not subject to any specific rules under the AI Act at all. Understanding how an AI project qualifies is therefore essential to determine whether it is subject to the new act or not.

3. What is the definition of ‘artificial intelligence system’?

Under the AI Act, an ‘AI system’ is a machine-based system that:

  • Is designed to operate with varying levels of autonomy;

  • May exhibit adaptiveness after deployment; and

  • For explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

A key characteristic of AI systems is their capability to infer. Hence, the term does not cover systems that are based on rules defined solely by natural persons to automatically execute operations.

Nonetheless, the AI Act’s definition of AI systems, modelled on the OECD’s conceptualisation of AI, remains very broad, meaning that many organisations are likely using AI systems, sometimes perhaps without realising it. Many well-known and established software suites contain AI features that match this definition and may therefore entail obligations under the AI Act for their users.

4. Who will have obligations under the AI Act?

The requirements under the AI Act are mostly directed at so-called ‘operators’. This is a diverse group of stakeholders that participate in the development and supply chain of AI. They include:

  • Deployers: use an AI system under their authority. The main exception is where the AI system is used in the course of a personal, non-professional activity.

  • Providers: develop an AI system or a general-purpose AI model and place it on the market or put it into service under their own name or trademark (for the first time).

  • Importers: entities located or established in the Union that place on the market an AI system bearing the name or trademark of a natural or legal person established in a third country.

  • Distributors: parties in the supply chain, other than the provider or the importer, that make an AI system available on the EU market.

  • Product manufacturers: place an AI system on the market or put it into service together with their (main) product and under their own name or trademark.

  • Authorised representatives: hold a written mandate from a provider of an AI system or a general-purpose AI model to perform the provider’s obligations under the AI Act on its behalf.

Most companies experimenting with AI in the course of their normal, non-AI-related core activities will qualify as deployers: they merely use an existing system for their own business purposes. However, the AI Act imposes obligations on deployers as well, namely when they use a high-risk AI system, or when they use an AI system that has the potential to mislead or otherwise negatively impact the persons it interacts with, for example deepfakes or emotion recognition systems.

More advanced stakeholders may develop their own AI systems (from scratch or based on existing models) and qualify as providers.

In addition, any party, including deployers, may be requalified or considered as a provider if they:

  • Put their name or trademark on a high-risk AI system that has already been placed on the market or put into service for the first time;

  • Make a substantial modification to a high-risk AI system that has already been placed on the market or put into service, in such a way that it remains high-risk;

  • Modify the intended purpose of an AI system, including a general-purpose AI system, which has not been classified as high-risk and has already been placed on the market or put into service, in such a manner that it becomes a high-risk AI system.

5. What is the ‘risk-based approach’ in the AI Act?

The AI Act applies a so-called ‘risk-based approach’, imposing different levels of regulation depending on the level of inherent risk that an AI system poses to health, safety and fundamental rights. It can be depicted as shown below.

[Image: overview of the AI Act’s risk levels]

Source: https://www.cnil.fr/en/entry-force-european-ai-regulation-first-questions-and-answers-cnil

Prohibited AI systems

These present an unacceptable risk, as they violate EU fundamental rights and values. Examples are AI systems that:

  • Deploy subliminal techniques beyond a person’s consciousness, or purposefully manipulative or deceptive techniques, with the objective of distorting people’s behaviour or impairing their ability to make an informed decision, causing or being reasonably likely to cause significant harm to them or others.

  • Exploit the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, materially distorting their behaviour or the behaviour of their peers in a way that causes them or another person significant harm.

  • Perform ‘social scoring’ – the evaluation or classification of natural persons based on their social behaviour or known, inferred or predicted personal or personality characteristics, leading to unjustified or disproportionate and detrimental or unfavourable treatment of individuals or groups.

  • Predict the risk of a natural person committing a criminal offence based solely on profiling or on assessing personality traits and characteristics (i.e. without taking other objective factors into account, such as presence in an online offender space or evidence of specific risk factors for the crime in question).

  • Create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.

  • Infer the emotions of a natural person in the areas of the workplace and education institutions (except for medical or safety reasons).

  • Perform biometric categorisation that deduces sensitive characteristics, such as race, political opinions or sexual orientation, on the basis of biometric data.

  • Use ‘real-time’ remote biometric identification in publicly accessible spaces for the purposes of law enforcement (forbidden in principle, but with limited exceptions).

Main requirement: the deployment of such AI systems in the EU is forbidden.

High-risk AI systems

These present a high risk, as they impact health, safety or fundamental rights. Examples include:

  • Stand-alone AI systems in specific sectors, use cases and application domains (Annex III) – for example, and subject to specific conditions and exemptions, biometrics, critical infrastructure, AI systems used in education and vocational training, and AI systems used in employment and workforce management.

  • AI systems which are safety components of products, or products themselves (Annex I) – for example, AI systems intended to provide information used to take decisions for diagnostic or therapeutic purposes.

Main requirements: various requirements, including as regards risk management, the quality and relevance of the data sets used, technical documentation and record-keeping, transparency and the provision of information to deployers, human oversight, and robustness, accuracy and cybersecurity. The type of requirements varies between different operators.

Sui generis risk AI systems

These present a risk of deception. Examples include:

  • Chatbots, deepfakes.

Main requirements: mostly transparency requirements to avoid the risk of deception.

Other AI systems

Examples include:

  • Spam filters, document editors, recommender systems, etc.

Main requirements: not regulated by the AI Act, but consumer protection and product safety rules apply.

Additionally, the AI Act regulates general-purpose AI (GPAI) models, i.e. models that have a wide range of possible uses (for example, large language models such as GPT-4). They are subject to a tiered approach, depending on whether a model is a ‘normal’ GPAI model or a GPAI model that poses systemic risk at EU level because of its capabilities and potential impact.

6. When will the AI Act start to apply?

The AI Act was published in the EU’s Official Journal on 12 July 2024 and will enter into force 20 days after publication, i.e. on 1 August 2024. As a general rule, the AI Act shall apply from 2 August 2026. However, some notable exceptions apply:

  • 2 February 2025: bans concerning prohibited AI systems will take effect.

  • 2 August 2025: requirements concerning general-purpose AI will apply.

  • 2 August 2027: requirements applicable to AI systems which are safety components of products, or products themselves, covered by the Union harmonisation legislation listed in Annex I will apply. Please note that the requirements for the stand-alone high-risk use cases or application domains listed in Annex III will apply one year earlier, under the general application rule of the AI Act.

For high-risk AI systems that have been placed on the market or put into service before 2 August 2026, the AI Act will apply only if, from that date, those systems are subject to ‘significant changes in their design’. Regardless of any changes, all high-risk AI systems intended for use by public authorities must fully comply with the new regulation by 2 August 2030 at the latest.

This means that, in principle, existing systems have some additional time before the AI Act applies. Users (deployers) therefore initially only need to consider new systems they take into operation, or systems that are significantly altered, after 2 August 2026.

There is, however, an important side note to be made here. The notion of a ‘significant change in the design’ covers any change to an AI system that was not foreseen or planned in the initial compliance work (the so-called conformity assessment) under the AI Act, and that affects the compliance of the system or results in a modification of its intended purpose. In particular, a change that ‘affects’ compliance seems to be a relatively low bar. Adding new functionalities through an update after 2 August 2026, for example, extending the initial capabilities of the system, may not necessarily feel like a significant design change from the user perspective. Still, it may be enough to trigger new and additional compliance obligations if the change was not already planned and accounted for in the original compliance work. If this threshold is met, both provider and deployer obligations will immediately apply to the existing system.

Lastly, the AI Act is not a self-contained piece of legislation. On the contrary, it contains numerous delegations to the European Commission, the European Artificial Intelligence Board and the AI Office for implementing acts, guidelines, recommendations and standardised templates, among others.

Follow us for more information about the AI Act and to read the upcoming posts in this series, where we will focus on specific AI Act compliance topics.