Impact of the AI Act on the financial services sector

In this next post in our series on the AI Act, we will delve deeper into the implications of this new legal framework for the financial services sector.

The use of AI in the financial services sector, including insurance, can enhance the efficiency and accuracy of operations and improve the overall customer experience. Specifically in banking and finance, AI-powered algorithms can analyse vast amounts of data in real time, enabling faster decision-making in areas such as fraud detection, credit scoring, and risk management. For instance, AI could quickly identify unusual transaction patterns to flag potentially fraudulent transactions, thus protecting customers and institutions alike. In the field of insurance, AI can be used for claims processing, underwriting, and customer service, allowing insurance undertakings to assess risks more precisely and to automate certain routine tasks. AI chatbots and virtual assistants can also improve customer interactions by providing personalised financial advice and facilitating transactions across the industry.

The impact of AI on financial services and insurance is substantial and presents new opportunities, but also significant challenges. AI-driven automation can reduce operational costs, speed up processes like loan approvals and claims handling, and offer more personalised, data-driven services. In the insurance sector, AI can more accurately predict customer behaviour, assess claims with minimal human intervention, and improve risk modelling. However, the increasing use of AI also raises concerns about data privacy, job displacement, and the ethical implications of using algorithms for decision-making. As AI continues to shape these industries, financial and insurance companies must strike a balance between harnessing its potential and ensuring transparency, fairness, and security in their practices.

In this blogpost, we will cover three specific questions: 

  1. How does the AI Act impact financial services providers?

  2. What are the data protection implications?

  3. What about high-risk AI systems in this sector?

1. How does the AI Act impact financial services providers?

The AI Act is, first and foremost, a horizontal framework. This means that it covers the placing of AI systems on the EU market, the putting into service of those AI systems, and their use, regardless of the sector in which this takes place. The AI Act, therefore, does not specifically target the financial services sector, but it will be relevant to AI systems placed on the market, put into service or used in this sector.

Financial services providers can act in different capacities under the AI Act. They can either develop their own AI system – in which case they will act as a provider – or they can make use of commercially available AI systems – in which case they will act as a deployer. The general obligations of providers and deployers were discussed in another blogpost in this series.

There are also a few cases where the AI Act imposes specific obligations for providers or deployers in the financial services sector. 

  • For instance, with regard to the quality management system, the AI Act acknowledges that financial institutions acting as providers of AI systems are already subject to requirements regarding their internal governance, arrangements or processes under EU financial services law. As a result, they can comply with the requirement to adopt a quality management system by complying with the rules on internal governance arrangements or processes pursuant to the relevant EU financial services law (article 17(4) AI Act). For the purposes of this framework, EU financial services law includes the rules on credit institutions and investment firms, consumer credit (including mortgages), insurance and reinsurance, and insurance distribution. However, they are still required to implement the risk management system referred to in article 9 of the AI Act, a post-market monitoring system in accordance with article 72 of the AI Act, and procedures related to the reporting of a serious incident in accordance with article 73 of the AI Act.
  • The same principle applies for documentation keeping, where financial institutions can maintain the technical documentation as part of the documentation kept under the relevant EU financial services law (article 18(3) AI Act). The same logic applies to automatically generated logs, where financial institutions can maintain the logs automatically generated by their high-risk AI systems as part of the documentation kept under the relevant financial services law (article 19(2) AI Act).
  • When financial institutions act as deployers, they can fulfil the monitoring obligation by complying with the rules on internal governance arrangements, processes and mechanisms pursuant to the relevant financial services law (article 26(5) AI Act), and maintain their logs as part of the documentation kept pursuant to the relevant Union financial services law (article 26(6) AI Act).

2. What are the data protection implications?

One particular issue regarding the use of AI by financial services providers is not directly covered by the AI Act, but rather by the GDPR. This concerns the question of whether financial services providers have a suitable legal basis to use the data they hold on their customers to build and train their AI models.

This matter came up in a recent decision by the Litigation Chamber of the Belgian Data Protection Authority. In this case, a bank used its customers’ transaction data to build an AI model in order to offer customers more personalised information and third-party discounts. According to the bank, this processing was based on the bank’s legitimate interests and was compatible with the original purpose of the processing of this personal data. While the personalised services themselves are only activated at the request of the customer, one customer claimed that customer data was already being processed to build the AI model even before a customer had opted in to this service. According to this customer, the bank’s privacy policy did not give the impression that this further processing would be compatible with the purposes for which this data was initially collected – namely, to process payment transactions. Moreover, after the customer objected to this processing, the bank needed a month to cease it. By that time, however, the purpose of the processing – building the AI model – would already have been accomplished, thus making an objection to the processing pointless.

The Litigation Chamber found that the bank’s customers were indeed not informed of this processing at the moment they entered into a commercial relationship with the bank. A recent Advocate General’s opinion highlights the need to provide the data subject with meaningful information linked to the technical nature of the field in question, which makes it necessary to ensure that that information is comprehensible and significant for the data subject. The bank’s privacy policy only referenced the re-use of data for marketing and commercial purposes – but this is limited to its banking and insurance activities. The commercial AI model under discussion here goes beyond strictly banking and insurance activities, and is therefore to be considered a new and separate data processing activity. As to whether this processing activity is compatible with the original purposes of the data collection, the Litigation Chamber found that a customer has no reasonable expectation that transaction data would be used for building an AI model, especially not a commercial AI model for personalised information and third-party discounts. As a result, this processing is not compatible with the original purposes of the data collection.

Since this concerns a new and separate processing activity, incompatible with the purposes of the original processing, it requires its own legal basis. Here, the Litigation Chamber found that there is indeed a legitimate interest at stake, and that the bank has demonstrated a need to process this personal data to attain that legitimate interest. As for the assessment of whether the fundamental rights of the data subject take precedence over that legitimate interest, the Litigation Chamber found that the AI model was built on anonymised data, thus limiting the impact on the data subject. As a result, all conditions were fulfilled and legitimate interest can serve as a legal basis for this processing. The Litigation Chamber also considered the one-month period the bank needed to cease the processing after the customer’s objection to be reasonable. The customer’s complaint was therefore rejected.

3. What about high-risk AI systems in this sector?

With regard to the financial sector, recital 58 of the AI Act explains that AI systems used to evaluate the credit score or creditworthiness of customers should be classified as high-risk AI systems. This is because they determine those customers’ access to financial resources or essential services such as housing, electricity, and telecommunication services. However, AI systems provided for by EU law for the purpose of detecting fraud in financial services and for prudential purposes to calculate credit institutions’ and insurance undertakings’ capital requirements should not be considered high-risk.

Moreover, AI systems intended to be used for risk assessment and pricing in relation to customers for health and life insurance can also have a significant impact on persons’ livelihoods and, if not duly designed, developed and used, can infringe their fundamental rights and lead to serious consequences for people’s life and health, including financial exclusion and discrimination. Such AI systems are, therefore, also considered high-risk.

Article 6 of the AI Act defines which AI systems are considered high-risk. These are systems where:

(a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I; and

(b) the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment, with a view to the placing on the market or the putting into service of that product pursuant to the Union harmonisation legislation listed in Annex I.

Additionally, AI systems listed in Annex III are considered high-risk AI systems, unless it can be demonstrated that the AI system does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making. However, if the AI system performs profiling of natural persons, it will always be considered high-risk. Moreover, in order to benefit from the exemption, at least one of the following conditions must be fulfilled:
(a) the AI system is intended to perform a narrow procedural task;
(b) the AI system is intended to improve the result of a previously completed human activity;
(c) the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review; or
(d) the AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III.

With regard to the financial sector, Annex III lists the following as high-risk AI systems:

  • AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud;
  • AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance.

Providers and deployers of those high-risk AI systems in the financial sector are, therefore, subject to the stricter rules imposed on such high-risk AI systems. For instance, deployers of these systems will have to conduct a Fundamental Rights Impact Assessment (FRIA), the results of which must be notified to the market surveillance authority.

However, there are a few deviations from the stricter requirements:

  • Concerning post-market monitoring, providers of AI systems in the financial services sector can rely on a post-market monitoring system and plan already established under EU financial services legislation. In order to ensure consistency, avoid duplications and minimise additional burdens, these providers have a choice of integrating, as appropriate, the necessary elements described in paragraphs 1, 2 and 3 of article 72 of the AI Act into systems and plans already existing under that legislation, provided that it achieves an equivalent level of protection (article 72(4) AI Act). 
  • In terms of market surveillance, for high-risk AI systems placed on the market, put into service, or used by financial institutions regulated by EU financial services law, market surveillance will be conducted by the relevant national authority responsible for the financial supervision of those institutions, insofar as the placing on the market, putting into service, or use of the AI system is in direct connection with the provision of those financial services (article 74(6) AI Act).

However, the exemptions to the Annex III classification discussed above may also apply here. If the AI system, for instance, only detects decision-making patterns and is not meant to replace or influence a previously completed human assessment without proper human review, or if it only performs a preparatory task to an assessment concerning credit scoring, creditworthiness, or insurance risk assessment, the AI system could potentially be considered as not posing a high risk. This, however, requires a careful case-by-case assessment.

Do you have a question about this blogpost or about the application of the AI Act to your financial services organisation? Please contact a Timelex lawyer or ask your question via our contact form.