Deepfakes Are a Threat to UK Banks


As fraudsters continue to exploit new technology for their illicit activities, financial institutions find themselves in an endless game of catch-up. One particularly concerning development for UK banks is the surge in deepfake-enabled fraud.

According to a report from Sumsub, deepfake incidents in the UK rose 300% from 2022 to 2023, with AI-driven identity fraud ranking among the top five fraud types in 2023.

The UK’s vulnerability to such attacks is heightened due to its economic prominence, widespread adoption of digital banking, and considerable online presence.

In an interview with the Financial Times, David Duffy, CEO of Virgin Money, expressed unease about the evolving capabilities of generative AI and the alarming potential of voice cloning. He noted that as AI advances, aided by quantum computing, financial crime could take on unprecedented and increasingly worrisome dimensions.

Defending Against Deepfakes

Deepfakes serve as a stark warning for the financial industry, necessitating deepfake detection technology, tighter verification processes, and enhanced voice and video analysis tools, coupled with employee training.
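
As a rough illustration of what "tighter verification processes" can mean in practice, the Python sketch below routes high-risk payment instructions to out-of-band confirmation. It is a minimal, hypothetical example: the `PaymentRequest` type, the `synthetic_media_score` field (assumed to come from an upstream deepfake-detection model), and the thresholds are all illustrative, not any bank's production logic.

```python
from dataclasses import dataclass


@dataclass
class PaymentRequest:
    """Hypothetical payment instruction received over a remote channel."""
    amount_gbp: float
    channel: str                   # e.g. "voice", "video", "app"
    synthetic_media_score: float   # 0.0 (likely genuine) .. 1.0 (likely synthetic)


def requires_step_up(req: PaymentRequest,
                     score_threshold: float = 0.6,
                     amount_threshold: float = 10_000) -> bool:
    """Decide whether to demand out-of-band confirmation before releasing funds.

    Thresholds here are placeholders for illustration, not calibrated values.
    """
    if req.synthetic_media_score >= score_threshold:
        # Media looks synthetic: always require step-up verification.
        return True
    if req.channel in {"voice", "video"} and req.amount_gbp >= amount_threshold:
        # High-value instruction over a cloneable channel: step up anyway.
        return True
    return False


if __name__ == "__main__":
    req = PaymentRequest(amount_gbp=25_000, channel="voice",
                         synthetic_media_score=0.35)
    print("Step-up verification required:", requires_step_up(req))
```

The design point is layering: even when a detection model scores the media as probably genuine, high-value instructions arriving over voice or video still trigger a second, out-of-band check.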

This is even more true as banks increasingly become liable for consumer losses attributed to scams.

The influence of consumer advocacy, as witnessed in the UK, is extending to other countries, exemplified by organizations such as the Consumer Action Law Centre, which is urging Australian banks to protect fraud victims. The UK already allows fraud victims to be reimbursed.

More will need to be done to protect consumers from the potential fallout of compromised personal information and funds. This requires a concerted effort by financial institutions to preserve trust in the financial system. It also requires global collaboration among tech companies, financial institutions, and law enforcement agencies to develop and implement best practices to prevent and mitigate deepfake attacks.
