Harmonising AI and data protection: Navigating the AI Act and the GDPR

Author info

The rapid adoption of artificial intelligence (AI) systems presents transformative opportunities for organisations but also introduces complex regulatory challenges. At the heart of these challenges lies the need to balance innovation with the protection of fundamental rights, particularly in the area of data protection. This balance is illustrated by the interplay between the European Union's General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AI Act). This blog post explores to what extent the GDPR and the AI Act are to be seen as complementary legislations, their areas of intersection, and the practical challenges organisations face in aligning with both frameworks.

Complementary frameworks with divergent goals?

The GDPR and the AI Act have different objectives. The GDPR, as a fundamental rights regulation, focuses on protecting the fundamental rights of individuals when their personal data is processed. It emphasises principles such as lawfulness, fairness, transparency and data minimisation. In contrast, the AI Act, as product safety legislation, prioritises responsible innovation by aiming to ensure the safe and ethical development and deployment of AI systems, which often rely on the processing of vast amounts of personal data. It also relies on ethical principles that apply to all AI systems, influenced by the work of the OECD and the High-Level Expert Group on AI (HLEG), namely human agency and human oversight, technical robustness and safety, privacy and data governance, transparency, diversity, non-discrimination and fairness, societal and environmental well-being, and accountability. However, unlike the GDPR, the AI Act does not explicitly grant rights to individuals, with a few exceptions. For example, any person who considers that they have suffered an adverse impact on their health, safety or fundamental rights from the use of a high-risk AI system has the right to obtain from the deployer of that system clear and meaningful explanations of the role of the AI system and how it made its decision.

a) Key areas of intersection

  • Data accuracy and data quality: According to the GDPR, personal data must be accurate and kept up to date. The AI Act builds on this principle, requiring high-risk AI systems to use datasets that meet strict quality criteria to prevent bias and ensure an appropriate level of accuracy. An interesting aspect is that, in the context of detecting bias, the AI Act exceptionally allows AI providers to process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons, including those provided in the GDPR.

  • Purpose limitation: Under the GDPR, personal data may only be collected for specific, legitimate purposes and limited to what is strictly necessary, which ensures that AI systems do not use data beyond their intended function. The AI Act reinforces this by requiring high-risk AI systems to have a clearly defined and documented intended purpose, stemming from the specific context and conditions of use that the provider of the AI system has set. The most common way of accessing such information is through the instructions for use, which must delineate the intended purpose of the AI system in clear and plain language.

  • Transparency: Both the GDPR and the AI Act require transparency but approach it differently. Under the GDPR, data subjects must be provided with clear information about how their personal data is being processed. The AI Act requires users to be informed when they are interacting with an AI system. This obligation does not apply to AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences. For high-risk AI systems, the AI Act also requires clear and accessible explanations of how data will be used, with a particular focus on the decision-making processes involved. Additionally, providers of general purpose AI (GPAI) models must prepare and make publicly accessible a sufficiently detailed summary of the content used to train the GPAI model.

  • Automated decision-making: Article 22 of the GDPR gives individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects. The AI Act builds on this, requiring human oversight of high-risk systems to mitigate bias and ensure accountability. The “human-in-the-loop” approach is central to the architecture of the AI Act. It translates into concrete obligations to supervise, intervene in or disregard the output of AI systems when adequately trained staff assess that there is a need to do so. Additionally, the AI Act introduces a right to an explanation of individual decision-making for persons affected by the output of a high-risk AI system.

  • Accountability: Both the GDPR and the AI Act emphasise accountability through structured frameworks. The GDPR focuses on transparency, record keeping, legal basis and safeguards such as Data Protection Impact Assessments (DPIAs). The AI Act builds on this with requirements for risk management, clear documentation, human oversight in high-risk AI systems and incident reporting. An interesting parameter that highlights the importance of fundamental rights in the AI Act is the obligation for deployers of certain AI systems to conduct a Fundamental Rights Impact Assessment (FRIA). More information on this can be found in the dedicated blog post of our series on the AI Act.

  • Security of processing: Both the GDPR and the AI Act emphasise the importance of the security of data processing, with the GDPR requiring risk-based technical and organisational measures (TOMs) to address vulnerabilities. The AI Act complements these requirements by introducing additional measures tailored to AI systems. For high-risk AI systems, organisations must implement comprehensive risk management processes, ensure robust human oversight, and maintain detailed technical documentation. 

b) Practical challenges and considerations

  • Data accuracy: AI systems, especially large language models, frequently encounter difficulties in producing accurate personal data, as highlighted in the complaint filed by NOYB against OpenAI. This creates challenges in meeting the GDPR’s requirement for data accuracy while also ensuring that outputs comply with the reliability standards outlined in the AI Act.

  • Data minimisation: AI systems often require large data sets for training. The larger and more diverse the dataset, the better the AI system can generalise and perform. This dependency on vast amounts of data often conflicts with the GDPR’s principle of data minimisation, which requires that personal data collection and processing be limited to what is strictly necessary for the specified purpose.

  • Explaining AI decisions: The "black box" nature of neural networks in AI models makes explaining their decision-making processes challenging, creating tensions with the transparency requirements of the GDPR and the AI Act. These regulations emphasise the need for clarity in automated decision-making, but understanding complex AI models remains a hurdle. A common consequence is that providers of AI models have difficulty determining a proper legal basis when using personal data to train their systems. Advances in explainable AI (XAI) are helping to bridge this gap: techniques such as feature attribution and visualisation tools are improving the interpretability of AI systems, supporting compliance and fostering trust. Yet, until a definitive solution to this problem is achieved, robust involvement of the human factor in all AI operations remains an essential safeguard.

  • Double roles: The roles defined by the GDPR and the EU AI Act often overlap, which requires organisations to carefully assess their responsibilities. The GDPR differentiates between controllers and processors, while the AI Act distinguishes between providers, who develop AI systems, and deployers, who operate them. Companies that process personal data during the development or use of an AI system will therefore have a dual role and should thoroughly consider what role they play under both the GDPR and the AI Act. This allocation of responsibilities is a cornerstone for the effective exercise of fundamental rights, whether as data subject rights under the GDPR or as indirect rights stemming from concrete obligations in the AI Act. Moreover, the recitals of the AI Act stress that the establishment of harmonised rules for the placing on the market, the putting into service and the use of AI systems should facilitate the effective implementation and enable the exercise of data subjects’ rights and other remedies guaranteed under the GDPR.

  • Regulatory inconsistencies: As highlighted in the Mario Draghi report (The Future of European Competitiveness), the AI Act introduces inconsistencies with the GDPR in regulating personal data processing, particularly concerning biometric data. The GDPR narrowly defines biometric data as information used solely for the unique identification of a person. Although Recital 14 of the AI Act indicates that its definition should align with the GDPR, the Act adopts a broader scope, regulating biometric identification, verification, emotion recognition and categorisation, even in contexts that exceed the GDPR’s stricter criteria.

Towards harmonised compliance

Overall, the relationship between the GDPR and the AI Act underscores the complex but complementary nature of their regulatory scope when it comes to the processing of personal data for the purposes of AI systems. While the GDPR focuses on protecting individuals' rights and ensuring the lawful processing of personal data, the AI Act addresses the safe and ethical development and deployment of AI systems, particularly high-risk applications. The AI Act focuses heavily on the proper monitoring of AI technologies, while the GDPR regulates the conditions of processing and the exercise of data subjects’ rights in each processing operation, including in the context of AI systems. Together, these frameworks aim to balance innovation with the protection of fundamental rights.

Navigating compliance with both regulations presents challenges for organisations. Overlapping roles, tensions between data minimisation principles and AI’s reliance on large datasets, and inconsistencies in the regulation of biometric data demand careful consideration. However, in a European regulatory landscape with a multitude of (partially) overlapping legislative acts on digital matters, the need for complementarity should clearly outweigh any points of tension. Adopting robust data governance practices, leveraging explainable AI techniques, and thoroughly assessing organisational responsibilities are critical steps towards achieving compliance. By aligning with the shared principles of these frameworks, organisations can streamline compliance efforts, foster trust in their AI systems, and support both innovation and legal and ethical accountability. Lastly, coordinated efforts to align the competent authorities under the different frameworks (e.g. DPAs taking a leading role in the enforcement of the AI Act) can be a decisive factor in strengthening compliance efforts.

If you have any questions about this blog post, the GDPR, the AI Act, or their interplay, feel free to reach out to a Timelex lawyer or submit your inquiry through our contact form.