Matching law & innovation

Leading niche law firm in the heart of Europe

IT contracts / IT litigation / Software /
E-commerce / FinTech / E-government /
E-authentication / Cybercrime

GDPR compliance / DPO as a Service /
GDPR litigation / GDPR certification /
Network & information security (NIS)

Copyright / Trademarks / Domain names /
Patents / Design rights / Trade secrets /
IP contracts / IP litigation

Defamation / Freedom of speech / Content regulation / Electronic communications / Gaming & gambling / Production & license agreements

Impact assessments / Comparative policy assessments / Drafting legislation / Implementing legislation / Regulatory review and evaluation / Fitness checks

Horizon 2020 & Horizon Europe / Licensing / Technology transfers / Open source / Open science / Open data / Knowledge sharing / Ethics / Risk assessment

Latest News

Unethical AI

The Fundamental Rights Impact Assessment (FRIA) is a new obligation introduced by the EU Artificial Intelligence Act (AI Act). Despite its novelty, the origins of this assessment and the logic behind its conception can be found in pre-existing legislation, such as the GDPR, and in human/fundamental rights impact assessments, which are already a good business practice when onboarding new technologies or tools or creating new processes. As its name reveals, the intention is to protect individuals’ fundamental rights from adverse impacts of AI systems. Such rights can be diverse but are of equal value, such as the right to access healthcare, the right of assembly, or the right to an effective remedy. To ensure such protection, the FRIA seeks to identify specific risks and address them through appropriate mitigation measures.

This blog post is a practical guide to the scope of a FRIA under the AI Act, the persons or entities responsible for carrying out the assessment, and the sectors in which it applies, and it outlines a series of factors to take into account in order to achieve a comprehensive assessment.

Impact of the AI Act on the financial services sector

In this next post in our series on the AI Act, we delve deeper into the implications of this new legal framework for the financial services sector.

The use of AI in the financial services sector, including insurance, can enhance the efficiency and accuracy of operations and improve the overall customer experience. Specifically in banking and finance, AI-powered algorithms can analyse vast amounts of data in real time, enabling faster decision-making in areas such as fraud detection, credit scoring, and risk management. For instance, AI can quickly identify unusual transaction patterns and flag potentially fraudulent transactions, thus protecting customers and institutions alike. In the field of insurance, AI can be used for claims processing, underwriting, and customer service, allowing insurance undertakings to assess risks more precisely and to automate certain routine tasks. AI chatbots and virtual assistants can also improve customer interactions by providing personalized financial advice and facilitating transactions across this industry.


Coming Events

Timelex team members will be speaking at the following events:

‘A niche firm that has succeeded in obtaining a strong presence in the market’, Timelex consists of ‘smart and hands-on lawyers’ who are ‘very much up to speed on recent developments’.

Legal 500



A team of experts

Leading Belgian niche law firm in information technology, privacy & data protection, intellectual property, and media & electronic communications law