The Artificial Intelligence Act will soon come into force across the EU. FinanceMalta caught up with Alexiei Dingli, Professor of AI at the University of Malta Faculty of ICT, to find out what it will mean for financial services.

Picture: Alexiei Dingli, Professor of AI at the University of Malta Faculty of ICT

Back in 2020, European Commission President Ursula von der Leyen said the years ahead would be the EU’s digital decade, as the bloc sought to strengthen its position vis-à-vis major players like the US, China and India.

However, these are moving targets – meaning that the EU has not yet caught up. Consider, for example, the fact that ChatGPT was launched in the US.

“We are seeing the first models emerging from France now but we in the EU are still trying to catch up and we all need to make sure that we are getting the best out of it…,” Prof. Dingli said.

One area where the EU is proving itself, though, is in ensuring that AI is covered by a robust framework and legislation to protect EU citizens from abusive use of this powerful tool.

As with any technology, the power to do good is matched by the potential for it to be misused. But let us start with the solid reasons in favour of AI. Prof. Dingli did not hesitate: the main opportunity lies in the vast amount of data with which companies are bombarded.

“It is well known that we are ‘drowning in data but starved of information’ – big companies are data rich but admit that they do not do much with it,” he said. “Data is very valuable: it is an immense resource that we are not using.”

The digital decade is not only about keeping up with other countries but also about preparing for the future, in particular the human resources crisis that he believes will begin to bite as early as 2030.

He cited the World Economic Forum’s Future of Jobs Report 2023, in which respondents expected 69 million jobs to be created and 83 million to be displaced over the coming five years: a net decrease of 14 million jobs, or 2% of current employment. The report is perhaps a timely reminder of the speed at which the world is changing: in 2020, the same report had forecast an increase of 97 million new jobs – and a displacement of 85 million – by 2025.

One of the assumptions that Prof. Dingli was quick to challenge is that Malta and other countries facing labour shortfalls could simply ‘import’ people from outside the EU.

“Automation is shifting jobs from low-skilled, low-paid work to high-skilled, high-paid work. We have been noticing that in the Far East there are more and more ‘lights out’ manufacturing companies, which literally work without lights because they are robot-operated. I recently found out that we have one such company here in Malta, which employs 120 engineers to look after the robots.

“This is why I describe this as a transitional phase during which we need to really plan for the upskilling and re-skilling of people who would, in the past, have aspired to a job in manufacturing. This is no longer even an option for them!”

“And it is not only automation: things like ChatGPT are taking over jobs we thought until now only humans could do. So there will be an even bigger problem finding skilled people, while those with no training will be out of a job, not only in their home countries but also overseas.”

One of the problems for policy-makers is the speed at which new technology emerges. Professional networking platform LinkedIn estimated that in the coming five years, 60 per cent of skills would be new ones – but what, exactly, are they? Skills that have not yet even been defined?

However, Prof. Dingli does not believe that there is any option but to embrace change. In spite of the vast range of career paths now available at the University of Malta, there will simply not be enough people to fill the growing number of vacancies. Businesses will therefore have no choice but to move to automation – especially in financial services – albeit with the oversight of specialised people.

“The productivity benefit is massive: reports by the Massachusetts Institute of Technology and Microsoft estimate gains of as much as 40 per cent. Imagine the impact of that on companies!”

However, he also warned that no system is infallible, insisting that the approach should be to see AI as an assistant – rather than as a replacement – to employees, as the need for human oversight could not be overstated.

One of the dilemmas is that one of the only ways to identify and deter the use of AI by hackers and scammers is… to use AI!

“AI is being used to create fake profiles, using your picture, generating new images, creating false documents and even cloning your voice based on a recording of just seconds of your own voice! It is almost impossible for a human to identify whether something is genuine or not, while AI can look at things pixel by pixel to identify patterns invisible to the naked eye. This is why AI is so important for AML and compliance.

“But let us face the facts: there will be a big fight between using AI for good and AI for bad – which is why cybersecurity is going to be such a big issue in the years to come. Laws tend to catch up with technology and not the other way around – which is unfortunate. So expect that both criminals and law enforcement will use AI.”
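The ‘pixel by pixel’ pattern-finding Prof. Dingli describes can be illustrated with a toy example. The sketch below is not a real AML or forensics tool: it implements a simple copy-move check that flags identical pixel blocks repeated within an image, one elementary building block of forgery detection. The image grid and block size are invented purely for illustration.

```python
from collections import defaultdict

def duplicated_blocks(image, size=2):
    """Return groups of coordinates where identical size x size pixel
    blocks appear more than once in the image (a copy-move tell-tale)."""
    seen = defaultdict(list)
    h, w = len(image), len(image[0])
    for y in range(h - size + 1):
        for x in range(w - size + 1):
            # Hashable snapshot of the block starting at (y, x).
            block = tuple(tuple(image[y + dy][x + dx] for dx in range(size))
                          for dy in range(size))
            seen[block].append((y, x))
    return [coords for coords in seen.values() if len(coords) > 1]

# Tiny made-up image of grey values: the 2x2 patch at (0, 0) was "pasted"
# again at (0, 3), invisible at a glance but trivial to find block by block.
img = [
    [10, 10, 50, 10, 10],
    [10, 10, 60, 10, 10],
]
print(duplicated_blocks(img))  # → [[(0, 0), (0, 3)]]
```

Real detection systems look for far subtler statistical patterns, but the principle is the same: machines can compare every pixel neighbourhood exhaustively where a human eye cannot.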

An important part of the EU’s legislation will be to offer protection to EU citizens, something that other countries do not always have. The use of real-time remote biometric identification in public spaces will be banned, for example, apart from narrow exceptions for law enforcement. Beyond such prohibitions, he emphasised, the answer is ‘Explainable AI’.

“Let me give you an example. A man in the US applied for parole from prison. The AI system used by the outsourced contractor turned down his request, and the contractor insisted that the reasoning could not be disclosed – and therefore could not be challenged – because there were commercial implications. Is this best practice? We normally insist on ‘Explainable AI’, which explains its own decisions. We should not trust AI blindly – there could be cases of profiling as well as of gender bias!” he said.

Indeed, accurate as AI systems are – around 95-96 per cent – that still leaves a margin of error of 4-5 per cent, which, taken over whole populations, represents huge numbers of people.
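The arithmetic behind that point is easy to make concrete. A minimal sketch, assuming a hypothetical screened population of 500,000 people (the figure is invented for illustration):

```python
def misclassified(population, accuracy):
    """Number of people a system of the given accuracy gets wrong."""
    return round(population * (1 - accuracy))

# Even a small error rate scales into large absolute numbers of people.
print(misclassified(500_000, 0.95))  # → 25000
print(misclassified(500_000, 0.96))  # → 20000
```

At 95 per cent accuracy, 25,000 people in that hypothetical population would be misclassified; even at 96 per cent, 20,000 would be.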

“So the principle of explainability is very important – and also having humans in the loop to act as gatekeepers.”

Turning to financial services, Prof. Dingli ticked off the various ways in which AI could help, in particular banks and insurance companies, which rely heavily on data – above all digitised data.

“This is the most fertile ground for AI, which can make predictions, organise classifications and implement clustering, for example – tasks which have until now probably been done in spreadsheets by people! ChatGPT and similar technologies can take such tasks even further, creating reports, comparing analyses and tabulating data,” he said.
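As an illustration of the clustering he mentions, here is a minimal sketch – plain Python rather than any specific banking system – that groups hypothetical transaction amounts into two spending segments with a one-dimensional k-means:

```python
from statistics import mean

def kmeans_1d(values, k=2, iters=20):
    """Cluster a list of numbers into k groups by nearest centroid."""
    # Start with centroids spread evenly across the value range.
    lo, hi = min(values), max(values)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # Assign each value to its nearest centroid.
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # Move each centroid to the mean of its assigned values.
        centroids = [mean(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Hypothetical transaction amounts: everyday spending plus a few big-ticket
# purchases separate cleanly into two segments.
amounts = [12, 15, 14, 980, 1020, 13, 995, 16]
centroids, clusters = kmeans_1d(amounts)
print(sorted(round(c) for c in centroids))  # → [14, 998]
```

The same idea, applied to richer customer data and many more dimensions, underpins the segmentation and risk-grouping tasks he refers to.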

He said that AI provided a major opportunity for insurance, which bases risk on data. He referred to his own nationwide traffic management project, which recently secured a €1.3 million investment on the local television programme Shark Tank.

“Say there is a vehicle that is not being driven properly, swerving across lanes, for example. The system I am working on would alert the authorities – without recognising the driver – so that they could send someone to check it out.

“Of course, the aim is to reduce fatalities – around the world, one person dies in a road accident roughly every 26 seconds… But there are other aspects. If you are a considerate driver, should this not be reflected in your premium? Imagine if you could voluntarily activate an app on your mobile phone that would track how you drive. You could then opt for a variable insurance scheme based on how carefully you drive… Something like this would help us to focus more on our driving and take fewer risks…” he said.
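The opt-in scheme he imagines could be sketched as follows. Everything here – the base premium, the event penalties and the maximum discount – is a hypothetical assumption chosen purely for illustration, not a description of any actual product:

```python
BASE_PREMIUM = 400.0  # assumed annual premium in euros
PENALTIES = {"harsh_braking": 2.0, "speeding": 5.0, "lane_swerve": 3.0}

def driving_score(events):
    """Score out of 100, reduced by each risky event the app logged."""
    score = 100.0
    for event in events:
        score -= PENALTIES.get(event, 0.0)
    return max(score, 0.0)

def variable_premium(events, max_discount=0.30):
    """Premium after a discount proportional to the driving score."""
    discount = max_discount * driving_score(events) / 100.0
    return round(BASE_PREMIUM * (1 - discount), 2)

print(variable_premium([]))                 # → 280.0 (careful driver)
print(variable_premium(["speeding"] * 10))  # → 340.0 (riskier driver)
```

A driver with no logged risky events earns the full 30 per cent discount on the assumed €400 premium, while a driver with ten speeding events keeps only half of it – the kind of feedback loop that, as Prof. Dingli suggests, would nudge people towards taking fewer risks.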