The AI Act and Financial Services: Risks, Benefits, and Opportunities

This article was prepared by Neil Micallef, Manager (AI Supervision and Market Surveillance) and Annelise Vassallo Seguna, Manager (Legal Counsel) at the Malta Digital Innovation Authority.
Artificial Intelligence (AI) has evolved at an unprecedented pace, becoming deeply rooted in the daily operations of a wide range of industries. The public release of OpenAI’s ChatGPT, built on GPT-3.5, in November 2022 marked a turning point, significantly accelerating AI’s accessibility, capabilities, and adoption across both personal and professional settings. This surge in usage has brought with it a corresponding need to understand AI’s risks, benefits, and opportunities.
In the realm of financial services, AI-driven systems may be used for several purposes. Credit institutions may use AI systems to analyse transactions for fraud detection and prevention, and to flag potential money laundering activity. Insurance companies may apply AI to claims processing and underwriting, and potentially to the pricing of their insurance products. Financial institutions may also consider using AI techniques to perform assessments of their clients, such as creditworthiness checks.
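To give a flavour of how such transaction screening might work in practice, the sketch below uses an unsupervised anomaly detector (scikit-learn’s IsolationForest) to flag unusual transactions. It is a minimal illustration only: the features, data, and threshold are fabricated for demonstration, and a production system would require far more rigorous feature engineering, validation, and human review.

```python
# Minimal, illustrative sketch of unsupervised transaction screening.
# Feature values are fabricated; real systems need far more rigorous
# feature engineering, validation, and human review.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Hypothetical transaction features: amount (EUR) and hour of day.
normal = np.column_stack([
    rng.normal(80, 25, size=1000),   # typical purchase amounts
    rng.normal(14, 3, size=1000),    # mostly daytime activity
])
suspicious = np.array([[4500.0, 3.0], [9800.0, 4.0]])  # large, late-night
transactions = np.vstack([normal, suspicious])

# IsolationForest isolates outliers; -1 marks a flagged transaction.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(transactions)

flagged = transactions[labels == -1]
print(f"{len(flagged)} transactions flagged for human review")
```

In such a setup, flagged transactions would typically be routed to a human analyst rather than acted on automatically, which keeps a person in the decision loop.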
There are also use-cases which are not specific to the financial services domain but which may still be considered for their perceived benefits. These include AI-based software for recruiting new staff, or even for promoting personnel internally. From a client-facing perspective, institutions may also opt for domain-agnostic uses of AI such as interactive chatbots or highly personalised services for their users.
The potential benefits of utilising AI for these purposes are substantial, and include the streamlining of operations, increased efficiency, and the minimisation of repetitive work. Moreover, AI systems can analyse large datasets in depth and at speed, freeing human operators to focus on other tasks. Such analysis can even surface salient patterns in the data that would be difficult to glean through manual checks.
Amid these advancements, one must also consider the risks of using AI systems for these processes. Take the example of an AI system performing risk assessments of individuals for insurance products: the system could carry biases embedded in the data used to train it, or introduced as a byproduct of how it was trained. Consider a system trained solely on individuals of a single nationality, in a young age group, and skewed heavily towards a single biological sex. Such a system could generate unfair risk assessments, possibly outright denying access to insurance products for individuals outside these groups. This example shows how certain uses of AI can negatively affect citizens’ quality of life if appropriate mitigating mechanisms are not in place, and underlines the need for a comprehensive regulatory framework to safeguard the fundamental rights of citizens.
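To make this bias risk concrete, the sketch below shows one simple fairness check among many: comparing a model’s approval rates across demographic groups. The group labels and decisions are entirely fabricated for illustration; in practice such checks would be run on real model outputs and would form only a small part of a broader bias audit.

```python
# Illustrative fairness check: compare approval rates across groups.
# Group labels and decisions are fabricated for demonstration only.
from collections import defaultdict

# Hypothetical (group, approved) outcomes from a risk-assessment model.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True),
    ("group_a", False), ("group_b", True), ("group_b", False),
    ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, approved in decisions:
    counts[group][0] += int(approved)
    counts[group][1] += 1

rates = {g: a / t for g, (a, t) in counts.items()}
print(rates)  # e.g. {'group_a': 0.75, 'group_b': 0.25}

# A large gap in approval rates (the demographic parity difference)
# warrants investigation of the training data and model behaviour.
gap = max(rates.values()) - min(rates.values())
print(f"Demographic parity difference: {gap:.2f}")
```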
The AI Act of the European Union (Regulation (EU) 2024/1689) establishes obligations and guidelines for this purpose. It adopts a risk-based approach, whereby the requirements placed on an AI system are determined by the severity of the risks it might pose.
The AI Act defines four distinct tiers of risk:
- Prohibited AI systems, such as systems performing social scoring, which have been banned from being placed on the market since 2 February 2025,
- High Risk AI systems, such as certain AI systems used for employment,
- Limited Risk AI systems, such as AI chatbots, which must inform users that they are interacting with an AI system, and
- Minimal Risk AI systems, such as AI in video games, which are permitted with the fewest restrictions and may adopt voluntary codes of conduct.
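Purely as an illustration of this tiered structure (and emphatically not a legal classification tool, since tier assignment depends on detailed criteria in the Regulation and its annexes), the four tiers could be represented as follows, with hypothetical example systems drawn from the list above:

```python
# Illustrative mapping of example AI systems to AI Act risk tiers.
# NOT a legal classification tool; actual tier assignment depends on
# detailed criteria in the Regulation and its annexes.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

examples = {
    "social scoring of citizens": RiskTier.PROHIBITED,
    "CV screening for recruitment": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "enemy AI in a video game": RiskTier.MINIMAL,
}

for system, tier in examples.items():
    print(f"{system}: {tier.value} risk")
```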
For the financial sector, the AI Act specifically defines AI systems used for the following purposes as high-risk:
- “AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud”
- “AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance”
- “AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates”
- “AI systems intended to be used to make decisions affecting terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics or to monitor and evaluate the performance and behaviour of persons in such relationships”.
For high-risk AI systems, requirements include undergoing conformity assessments, setting up risk management and quality management systems, preparing comprehensive technical documentation, and ensuring that data governance and human oversight mechanisms are in place. These requirements will apply from 2 August 2026, giving stakeholders over a year to achieve compliance.
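Purely as an organisational aid, and not as a statement of legal requirements beyond those just listed, a team preparing for that deadline might track its readiness with a simple structure such as the hypothetical sketch below:

```python
# Illustrative readiness tracker for the high-risk requirements named
# above; not legal advice, and not an exhaustive list of obligations.
from dataclasses import dataclass

@dataclass
class HighRiskReadiness:
    conformity_assessment: bool = False
    risk_management_system: bool = False
    quality_management_system: bool = False
    technical_documentation: bool = False
    data_governance: bool = False
    human_oversight: bool = False

    def outstanding(self) -> list[str]:
        """Return the names of requirements not yet addressed."""
        return [name for name, done in vars(self).items() if not done]

status = HighRiskReadiness(risk_management_system=True)
print("Outstanding before 2 August 2026:", status.outstanding())
```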
The Malta Digital Innovation Authority (MDIA) will be a market surveillance authority under the EU AI Act. Part of the MDIA’s role will be to investigate, inspect, and take action against non-compliant practices, including the enforcement of administrative penalties.
As part of the MDIA’s mission to create a secure and dependable digital environment, the Authority has actively collaborated with other key authorities such as the Malta Financial Services Authority. Such collaborations are essential, as the AI Act forms part of a wider digital legislative framework in the EU, which also includes the Digital Operational Resilience Act and the Cyber Resilience Act.
The MDIA engages directly with economic operators to assess how the legislation affects them. The Authority also provides targeted, sector-specific information sessions on the EU AI Act to inform stakeholders of the corresponding requirements. As part of its supportive services, the MDIA has set up an AI Service Desk to respond to stakeholder queries regarding the AI Act. The Authority’s Technology Assurance Sandbox provides a platform through which stakeholders can consult technical experts for advice on addressing gaps in their AI systems, such as those concerning regulatory compliance.
Malta’s commitment to safe and effective AI deployment is made tangible through the MDIA’s practical support services and collaborative mindset. This positions the country as a forward-thinking jurisdiction where financial institutions can both meet their regulatory obligations and unlock the full potential of AI technology.