On April 21, 2021, the European Commission published the much-awaited draft regulation laying down harmonised rules on artificial intelligence (AI Act). The publication is the intermediate outcome of a long discussion and evaluation process. Timelex was privileged to support the Commission as a legal expert in a study on liability for AI, which provided some of the analysis that drove the proposal.
This article briefly discusses the most important areas covered by the AI Act, the approach proposed by the Commission, as well as the next steps to be expected.
The AI Act defines an “artificial intelligence system” (AI system) as software that fulfils the following two conditions:
- it is developed with one or more of the techniques and approaches listed in Annex I of the AI Act (such as machine learning, logic- and knowledge-based approaches, or statistical approaches); and
- it can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments it interacts with.
The high-level purpose of the AI Act is to provide a uniform legal framework for the development, marketing and use of AI systems, in which both the benefits and the risks of AI are adequately addressed at Union level.
Not all AI systems are treated in the same way by the proposed regulation. The Commission’s approach is “risk-based”, classifying AI systems according to the risk they pose to human beings:
- unacceptable risk: AI practices deemed incompatible with Union values, which are prohibited outright;
- high risk: AI systems subject to strict mandatory requirements and conformity assessment;
- limited risk: AI systems subject only to specific transparency obligations;
- minimal risk: all remaining AI systems, which are left essentially unregulated.
The Commission emphasizes that most AI systems should fall into the minimal-risk category. However, this remains to be seen, as some types of high-risk AI systems or even prohibited systems appear to be quite broadly defined.
It will be prohibited to place on the market, put into service or use:
- AI systems that deploy subliminal techniques beyond a person’s consciousness in order to materially distort their behaviour in a way that causes or is likely to cause physical or psychological harm;
- AI systems that exploit vulnerabilities of a specific group of persons due to their age or disability, with the same harmful effect;
- AI systems used by public authorities for “social scoring” of natural persons, where the score leads to detrimental or unfavourable treatment;
- “real-time” remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, subject to a number of narrowly defined exceptions.
AI systems developed or used exclusively for military purposes are outside the scope of the proposed regulation. This limitation is dictated by the legal basis of the act, which focuses on internal market challenges (Article 114 TFEU) and therefore does not allow the inclusion of matters of national defence. Nonetheless, this leaves an important area of development of potentially the most dangerous systems outside the foreseen oversight.
There are two broad categories of high-risk AI systems.
The first one encompasses systems which could produce adverse outcomes for the health and safety of persons, in particular when they operate as components of products. For those systems, both of the following conditions must be fulfilled:
- the AI system is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II; and
- that product is required to undergo a third-party conformity assessment under the legislation in question.
This category of AI systems includes, for example, components of toys, medical devices and lifts, as well as safety features of motor vehicles, rail systems, aircraft and other machinery. A full list of the relevant Union harmonisation legislation is provided in Annex II of the AI Act.
The second category consists of stand-alone AI systems which may pose a threat to the fundamental rights of persons. Those include, for example, AI systems intended to be used:
- for biometric identification and categorisation of natural persons;
- as safety components in the management and operation of critical infrastructure;
- in education and vocational training, e.g. to determine access to institutions or to assess students;
- in employment and workers management, e.g. for recruitment or for evaluating candidates;
- to evaluate access to essential private and public services and benefits, including credit scoring;
- in law enforcement, migration, asylum and border control management;
- in the administration of justice and democratic processes.
The full list is included in Annex III to the AI Act. The Commission will be empowered to amend this already long list by issuing delegated acts in the future, thereby providing somewhat greater flexibility in maintaining its relevance.
High-risk AI systems can only be placed on the EU market or put into service if they comply with strict mandatory requirements. These include in particular:
- establishment of a risk management system;
- data governance, including training, validation and testing of the system with high-quality datasets;
- technical documentation and record-keeping (logging);
- transparency and provision of information to users;
- human oversight;
- an appropriate level of accuracy, robustness and cybersecurity.
Before a high-risk AI system is placed on the market or put into service, it must undergo a conformity assessment procedure. Once the procedure is completed, the system will have to be labelled with a CE marking. Certain high-risk AI systems will also have to be registered in an EU database maintained by the Commission.
The AI Act also provides various obligations related to post-market monitoring. There will also be a regime for notifying national authorities of non-compliance of a high-risk AI system and of any corrective actions taken.
Most of the responsibilities listed above will fall on the so-called “provider” of the AI system, i.e. a person or entity that develops an AI system, or that has an AI system developed, with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge. However, there are also obligations for importers, distributors and users of AI systems. For instance, users of a high-risk AI system must follow its instructions for use and “feed” the system with relevant input data. They must also monitor its operation.
6. What requirements apply to other AI systems?
The proposed AI Act does not provide similarly detailed requirements for lower-risk AI systems. However, transparency rules will apply, which means that users will have to be notified of:
- the fact that they are interacting with an AI system (e.g. a chatbot);
- the fact that they are exposed to an emotion recognition or biometric categorisation system;
- the fact that image, audio or video content has been artificially generated or manipulated (“deep fakes”).
Some exceptions are provided, e.g. where the fact that the user is interacting with an AI system is obvious from the circumstances and the context of use, for systems used to detect or investigate criminal offences, or for scientific or artistic purposes.
Developers of non-high-risk AI may also adhere to voluntary codes of conduct.
The AI Act will set up a European Artificial Intelligence Board (EAIB), made up of representatives of the appropriate regulators from each Member State, as well as the European Data Protection Supervisor and the Commission. The EAIB will be responsible for a number of advisory tasks. Member States will also need to designate national competent authorities.
The national regulators will be charged with enforcing the rules of the AI Act. They will be equipped with the power to impose “GDPR-style” administrative fines. For the most serious infringements, the national authorities will be able to issue fines of up to EUR 30 000 000 or, if the offender is a company, up to 6 % of its total worldwide annual turnover for the preceding financial year, whichever is higher. The highest fines are reserved for placing banned AI systems on the market and for failure to comply with the requirement to train high-risk AI systems with quality datasets.
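To make the “whichever is higher” cap concrete, here is a minimal sketch in Python. The two thresholds (EUR 30 000 000 and 6 % of turnover) come from the proposal itself; the function name and the example turnover figure are purely illustrative assumptions.

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of an administrative fine for the most serious
    infringements under the draft AI Act: EUR 30 000 000 or 6 % of total
    worldwide annual turnover for the preceding financial year,
    whichever is higher."""
    FIXED_CAP_EUR = 30_000_000  # fixed ceiling stated in the proposal
    TURNOVER_RATE = 0.06        # 6 % of total worldwide annual turnover
    return max(FIXED_CAP_EUR, TURNOVER_RATE * worldwide_annual_turnover_eur)

# Hypothetical example: a company with EUR 1 billion in turnover faces a
# ceiling of EUR 60 million, since 6 % of turnover exceeds EUR 30 million.
print(max_fine_eur(1_000_000_000))  # 60000000.0
```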
The AI Act needs to be adopted by the European Parliament and the Member States under the ordinary legislative procedure. Once passed, the law will enter into force on the twentieth day following its publication. As an EU regulation, the AI Act will be directly applicable in all EU countries.
Importantly, there will be a two-year grace period within which AI systems will need to be brought into conformity with its requirements.
Draft of the regulation here
Questions and answers here