A recent article in the Wall Street Journal shed light on how fraudsters are using generative AI to perpetrate sophisticated scams and impersonate others with alarming ease. By creating realistic videos that impersonate people their victims know, fraudsters are deceiving victims into transferring large sums of money.
One incident reported in the article tells the story of a man named Guo who fell victim to this type of scam. He received a video call on WeChat from someone impersonating a friend. Within 10 minutes, the scammer convinced Guo to transfer roughly $600,000 to a bank account in Inner Mongolia. Guo complied, believing he was helping a friend in need. Only when he contacted his friend to confirm the transfer did he discover the deception.
Examples such as this highlight the alarming consequences of generative AI in the hands of fraudsters. The ability to create lifelike deepfake videos, coupled with social engineering tactics, is a potent combination that can exploit someone’s trust and vulnerability. As a result, authorities and countries worldwide are grappling with the challenge of regulating this emerging technology. Balancing the benefits of generative AI while safeguarding against fraud and misinformation has become a paramount concern.
According to Javelin Strategy & Research’s Identity Fraud Study, identity fraud scams affected 25 million individuals and resulted in losses of $23 billion in 2022. Notably, identity fraud scams surpassed traditional identity fraud in the number of victims impacted. With the advent of generative AI and the rise of deepfakes, this disconcerting trend is poised to accelerate.
The increased sophistication of fraud schemes driven by AI poses a challenge for both consumers and merchants—and building security measures to combat fraudulent activities has become crucial. Enhanced authentication methods, real-time monitoring, and transaction verification mechanisms will be essential in minimizing the risk of falling victim to AI-driven scams.
All of this is part of the reason generative AI may be deployed far more slowly in the payments industry than some expect.
In his report, Generative AI: It’s Here, and It Defies Static Definition, Christopher Miller, Lead Analyst of Emerging Payments at Javelin Strategy & Research, explains that generative AI will improve the efficiency of repetitive work but is unlikely to alter fundamental processes for some time. For example, a bank will hesitate to accept video verification from customers until it can prove the footage isn’t a deepfake.
Financial institutions, technology companies, and governing bodies must work together to establish frameworks that strike a balance between fostering innovation and ensuring security. Implementing stringent regulations and guidelines that govern the use of generative AI can help deter fraudsters and protect individuals from falling prey to their deceptive tactics.