New ChatGPT Model Can Be Exploited for Voice Scams

The newest version of OpenAI’s popular chatbot, ChatGPT, can be used to perform financial scams with a low to moderate degree of success.

GPT-4o, the model behind the latest version of ChatGPT, launched in May 2024 with native support for text, voice, and vision inputs and outputs. OpenAI has said the model includes safeguards to identify and block harmful content, such as replicating a voice without permission.

However, a report from researchers at the University of Illinois Urbana-Champaign (UIUC) found that those safeguards are not adequate to prevent criminals from exploiting the platform. The researchers explored how GPT-4o can be manipulated for voice phishing, also known as vishing, to carry out scams such as bank transfers, gift card fraud, cryptocurrency transfers, and credential theft from social media or Gmail accounts.

“AI-assisted vishing scams pose a threat to individuals and businesses alike and have been cropping up in the wild over the past several years,” said Kevin Libby, Fraud and Security Analyst at Javelin Strategy & Research. “Schemes targeting individuals usually proceed by some variant of the tried-and-true ‘grandparents scam.’ Schemes targeting businesses usually involve impersonating C-suite officers or business owners and connecting with legitimate employees to initiate money transfers.

“In both cases, publicly available AI tools that afford criminals the ability to impersonate the voices of their assumed identities increase the chances of success and pose an undeniable threat to potential victims. The more signals a criminal can create that seem to affirm their assumed identity, the more likely victims are to fall for the scam.”

Bypassing Protections

In the UIUC tests, the researchers built voice-enabled AI agents on GPT-4o, pairing it with browser automation tools that could navigate websites, enter data, and handle two-factor authentication codes. Although the platform will sometimes refuse to handle sensitive data such as credentials, the researchers were able to bypass those protections using simple prompt-jailbreaking techniques.

Many vishing scams rely on deepfake technology, which has quickly become a multibillion-dollar problem for businesses and financial institutions, and AI-powered text-to-speech tools only make them more effective. Criminals are using these tools to perpetrate scams at a much larger scale, with far less manual interaction required.

Receptive to Research

In response to the concerns raised by the UIUC researchers, OpenAI told BleepingComputer that it was continually working to protect its chatbots from bad actors and that the upcoming version of ChatGPT would be its safest offering yet. Until then, however, consumers will have to remain vigilant about potential misuse.

“Sadly, the public is not sufficiently aware of just how far AI-assisted voice impersonations have come and how easily tools like ChatGPT can be used to create convincing auditory forgeries,” Libby said. “It’s good that companies like OpenAI are receptive to research like the UIUC report and they are reportedly addressing the concerns raised.

“However, ensuring that AI tools cannot be easily used for fraud is only one focus of the companies pioneering the technologies. Using the tools to that end—committing fraud—is the sole focus of the criminals intent on increasing the success rate and scalability of their vishing schemes. For this reason, criminal use of public-facing AI tech to assist with and improve vishing scams will likely get worse before it gets better.”
