Jan Gasiewski
16 Oct 2024
When you read about AI, what percentage of the pieces you encounter are positive versus negative? It doesn’t take a scientist to tell us that, although feel-good pieces are popular with readers, negative headlines generate more revenue and garner more attention. So, if your answer is "mostly negative," you are not alone.
AI has had a rough journey. Just over five years ago, AI was hailed as a technological advancement that would change the world for the better; we seemed one step closer to the realities of The Matrix and 2001: A Space Odyssey. What we didn’t anticipate was how it would actually look and feel if aspects of those cult films materialized. Virtual worlds to escape into and biometric identity checks did come to fruition, as did the need to reimagine privacy and security once technology and our own lives became so interlinked (an underlying theme in The Matrix), especially in the fields of finance and health.
Through a series of blogs, we will delve into the areas of fintech and healthtech that AI impacts, the challenges we face today, and what the future holds. In this first blog, we explore AI’s role in privacy and fraud detection. This is particularly pertinent as regulations such as the FCA’s Consumer Duty, which came into force in 2023, now require financial providers to protect and prioritize customer needs, with fraud being one of the key areas. Further to this, the Payment Systems Regulator (PSR) set a mandatory rule, effective from 7 October 2024, requiring all payment companies that send or receive money to reimburse victims of authorized push payment (APP) fraud within five working days. Although these are positive signs that policymakers, financial institutions, and regulators are working together in the best interest of the consumer, it is clear that, as long as there are technological advancements, new ways to commit financial fraud are close behind. Since AI will continue to evolve and fraud will grow alongside it, it’s best to start thinking about the game plan now and what can be done, with the help of artificial intelligence, to mitigate the inevitable.
The Growing Threat of Fraud in Fintech
Since the dawn of digital banking, a door has remained open to fraudsters looking to target consumers and businesses for financial gain.
According to UK Finance, £1.17 billion was stolen through unauthorized and authorized fraud in 2023, a 4% decrease compared to 2022. They also stated that banks prevented a further £1.25 billion of unauthorized fraud through advanced security systems.
APP fraud (where a fraudster poses as a legitimate payee and tricks people into paying for goods or services that don’t exist) has continued to be a huge source of concern, with £341 million lost in the UK over the course of 2023. Although younger age groups consider APP fraud a primary financial threat, more vulnerable people, such as older generations and those with a disability, tend to be targeted. Other routes to financial fraud include phishing and malware scams that expose account numbers, phone numbers, and passwords, allowing malicious actors to impersonate others.
Although the PSR is hopeful that the amount of APP fraud will dip in 2024, mitigating fraud is a constant game of cat and mouse, costing financial institutions millions of pounds a year in updating infrastructure, educating staff and customers, and refunding victims.
Traditional Fraud Detection Methods and Their Limitations
As long as there’s been a finance system in place, there have been people out there capitalizing on the good nature of others.
In the early days, and to some extent in today’s world, banks used rules-based systems to detect fraud—a straightforward approach that applied predefined rules or criteria to deem a transaction legitimate. For example, before the borderless commerce we are all used to today, there was a time you had to flag your travel plans to your bank so that it would not block a transaction made outside your home country.
Once the system flagged a transaction, a bank representative would assess its legitimacy, which could introduce further delays and, due to human bias, inconsistencies and unfair treatment of some customers. These systems were not just slow; they were also incapable of keeping pace with new technologies and rising societal expectations for fraud detection.
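The rules-based approach described above can be sketched in a few lines. The thresholds, country codes, and rule wording here are illustrative assumptions, not any bank's actual criteria:

```python
# A minimal sketch of a traditional rules-based fraud check.
# The limit, home country, and rule text are illustrative assumptions.

HOME_COUNTRY = "GB"
SINGLE_TRANSACTION_LIMIT = 5_000  # hypothetical limit in GBP

def flag_transaction(amount_gbp, country, travel_notice_countries=()):
    """Return the list of reasons a transaction should be held for review."""
    reasons = []
    if amount_gbp > SINGLE_TRANSACTION_LIMIT:
        reasons.append("amount exceeds single-transaction limit")
    if country != HOME_COUNTRY and country not in travel_notice_countries:
        reasons.append("transaction outside home country with no travel notice")
    return reasons

# A small foreign purchase with no travel notice trips the second rule.
print(flag_transaction(120, "FR"))
# The same purchase passes once the customer has flagged their travel plans.
print(flag_transaction(120, "FR", travel_notice_countries=("FR",)))
```

The brittleness is visible: every rule is static, every threshold is hand-set, and a legitimate traveler who forgets to notify the bank is treated exactly like a fraudster.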
AI-Powered Fraud Detection: How It Works, Benefits, and Use Cases
The key differences between traditional methods of fraud detection and AI-powered fraud detection are the adaptability of the models, the number of datasets, the cleanup of the data, its 24/7 monitoring capabilities, and the ensuing analysis.
AI can identify specific patterns of behavior for each profile more quickly and with a higher rate of success. It can also flag, through various methods including biometrics, when someone is using an unfamiliar device to access their financial data. AI’s ability (through machine learning) to capture and analyze many data points, including historical, location, biometric, and even third-party data, and produce a calculated score for the likelihood of fraud gives it better scalability for the future. It can also analyze historical conversations using Natural Language Processing (NLP) and text analysis techniques to detect phishing or social engineering scams.
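The scoring idea above can be caricatured as a weighted combination of risk signals squashed into a probability-like value. The feature names, weights, and threshold below are illustrative assumptions, not a production model; a real system learns these weights from labeled transaction data rather than hard-coding them:

```python
from math import exp

def fraud_score(txn, profile):
    """Combine weighted risk signals into a 0-1 score via a logistic squash.
    Features and weights are illustrative assumptions, not a real model."""
    signals = [
        (2.0, txn["amount"] > 3 * profile["avg_amount"]),         # unusually large
        (1.5, txn["country"] not in profile["usual_countries"]),  # new location
        (1.0, txn["device_id"] not in profile["known_devices"]),  # new device
        (1.5, txn["hour"] not in profile["active_hours"]),        # odd hour
    ]
    raw = sum(weight for weight, fired in signals if fired)
    return 1 / (1 + exp(-(raw - 2.5)))  # 2.5 shifts the decision boundary

profile = {"avg_amount": 40.0, "usual_countries": {"GB"},
           "known_devices": {"dev-1"}, "active_hours": set(range(8, 23))}
normal = {"amount": 25, "country": "GB", "device_id": "dev-1", "hour": 12}
odd = {"amount": 400, "country": "RU", "device_id": "dev-9", "hour": 3}
print(round(fraud_score(normal, profile), 3))  # low score
print(round(fraud_score(odd, profile), 3))     # high score
```

Unlike the static rules of the previous section, each signal contributes gradually to a single score, so the bank can tune one review threshold rather than maintain dozens of independent rules.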
This is much more beneficial for financial institutions and their customers. Although there is an upfront cost to implementing AI at scale, the long-term cost-effectiveness is undeniable. Plus, the replacement of static rules and human bias with dynamic rules and preventative measures ensures that the customer’s assets are protected.
Despite the harrowing statistics at the top of this blog regarding fraud over the past few years, there are several use cases demonstrating how AI has improved fraud detection. It’s also best to keep in mind that as technology evolves, so will its success rate. It takes time to implement, and the speed at which firms have been able to improve their systems to detect fraud using AI is impressive, especially in comparison to the lifespan of traditional financial services and the use of archaic fraud detection methods.
Use Case One: An EY report this year stated that one firm claimed to have significantly reduced fraud by improving payment validation screening, leading to a 20% reduction in account validation rejection rates.
Use Case Two: The UK government has invested £34m in AI to reduce fraud. In March 2024, for the first time, it added a sanctions data list to its own fraud detection tools to help fight organized crime and sanctions evasion. In the future, these datasets may be made available to financial organizations through third-party access.
Use Case Three: JP Morgan showcased its AI capabilities in a report last year, stating that, in the two years since implementing AI, account validation rejection rates had been cut by 15-20%.
Bonus Use Case: Interpol states that in 2023, scammers stole over $1 trillion from victims around the world. Pig-butchering scams, which exploit vulnerabilities in human relationships, have risen and have even led to more serious offenses such as human trafficking. Interpol noted in its article that "The use of AI, large language models, and cryptocurrencies combined with phishing- and ransomware-as-a-service business models have resulted in more sophisticated and professional fraud campaigns without the need for advanced technical skills, and at relatively little cost."
Challenges of Implementing AI in Fraud Detection
Although AI is here to help fraud detection, it has also been a catalyst for increased fraud. According to a report by Signicat, 42.5% of all fraud attempts in the financial and payments industry now involve AI, and nearly 30% of those attempts are successful.
It’s a complex challenge, as AI systems now need to recognize when AI itself is being used to commit fraud. One thing benevolent AI systems already do to spot malicious actors is track the number, timing, and speed of transactions, which in automated attacks are typically far faster and more atypical than anything a human could replicate. CAPTCHA and behavioral biometrics are also used to determine whether you’re a bot or a human (you have most likely gone through this at least once a week). None of this is perfect, however, and these systems can sometimes be bypassed. Over time this will improve, but for now it remains a hurdle to be dealt with.
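The transaction-velocity signal just described can be sketched as a sliding time window per account. The window size and threshold here are illustrative assumptions; in practice they would be tuned against real traffic:

```python
from collections import deque

class VelocityMonitor:
    """Flag accounts firing transactions faster than a human plausibly could.
    The window and threshold are illustrative assumptions, not tuned values."""

    def __init__(self, max_txns=5, window_seconds=60):
        self.max_txns = max_txns
        self.window = window_seconds
        self.history = {}  # account id -> deque of recent timestamps

    def record(self, account, timestamp):
        """Record one transaction; return True if the burst looks automated."""
        q = self.history.setdefault(account, deque())
        q.append(timestamp)
        # Drop timestamps that have aged out of the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_txns

monitor = VelocityMonitor()
# Ten transactions within three seconds on one account looks bot-like:
flags = [monitor.record("acct-1", t * 0.3) for t in range(10)]
print(flags[-1])  # the burst is eventually flagged
```

A human paced at one transaction every few minutes never fills the window, so the same check passes them silently; only machine-speed bursts trip it.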
Upgrading legacy systems is another battlefield for financial institutions. From firsthand experience at Skillwork, we have supported many financial-focused businesses in updating and improving their infrastructure to meet the needs of today, including the integration of AI. It’s no small task, yet decision-makers know that it's better to be prepared today than to be caught out tomorrow. Financial institutions tend to have a large range of siloed, unformatted data sources, making it hard to unify systems. Plus, if the data is biased in any way, training AI models with that very data can create biased outcomes, which can then lead to legitimate users being flagged as untrustworthy.
AI may also make decisions that cannot be explained, costing time for teams that must report to regulators. Adding to the regulatory burden, businesses must ensure that AI models meet standards such as Know Your Customer (KYC) and Anti-Money Laundering (AML), plus further requirements set out by governing bodies.
Lastly, the conversation around the ethics of AI being in control of financial decisions is ongoing. Some do not agree with the amount of power that will be handed over to AI, nor the amount of personal and sensitive data that it is being given, especially when it directly impacts a user’s access to their financial services.
All in all, it's clear that there are areas for improvement; no technology is perfect on day one. The arguments for AI-powered fraud detection, and its use cases, are a powerful incentive for financial institutions to upgrade their systems. AI's ability to analyze vast amounts of data, detect anomalies in real time, and adapt to evolving fraud patterns offers a powerful defense against increasingly sophisticated threats. Ultimately, it will take a combined effort between human expertise and AI capabilities, as one cannot work without the other. Neither can be fully replaced, and fraud will never be completely eliminated, but together the two can significantly reduce the opportunities fraudsters exploit and create a safer financial system for everyone.