Artificial intelligence can sometimes seem like a solution in search of a problem, but one area where it has already made an impact is fraud prevention. In fact, two-thirds of organizations surveyed by IBM reported using AI to detect and combat fraud within their security operations centers, and it’s paying off.
By using strategies such as attack surface management, red-teaming, and posture management, these organizations were able to contain data breaches more quickly and at a much lower cost than those not employing AI. According to IBM’s Cost of a Data Breach Report, companies using AI incurred an average of $2.2 million less in breach costs than those that did not use AI to prevent such attacks.
Overall, the average cost of a data breach jumped to $4.88 million in 2024 from $4.45 million the previous year, the largest annual increase since the pandemic. The gap between organizations using AI and those not using it is stark. Organizations that made extensive use of AI and automation to prevent security breaches saw an average breach cost of $3.76 million, while those not using these tools lost an average of $5.98 million per breach, which accounts for the roughly $2.2 million difference cited above.
A Tool for Criminals
One reason AI has proven so critical is that attackers are also using the technology.
“The use of generative AI by cybercriminals is making it easier for them to socially engineer or trick employees into providing sensitive information,” said Jennifer Pitt, Senior Analyst of Fraud & Security at Javelin Strategy & Research. “There have already been several cases where cybercriminals successfully used voice cloning and/or deepfake images and video to convince even the most security-conscious employees to provide sensitive information to people they thought were executives authorized to obtain the information.”
AI has also helped speed up the detection of data breaches, a key factor in limiting the damage. Organizations extensively using security AI and automation identified and contained breaches nearly 100 days faster, on average, than those without these technologies.
“It is crucial that organizations train employees on how AI is used for social engineering and phishing attacks and encourage employees to challenge anyone who asks for sensitive information,” said Pitt. “Organizations must also implement generative AI solutions that can detect deepfakes and AI-generated content, then learn and adapt quickly to changing attacker strategies. With the growing number of data breaches and AI-related cyberattacks, companies can no longer afford to rely on legacy detection solutions.”
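To make the idea of AI-driven fraud detection a little more concrete, here is a minimal, hypothetical sketch of one common building block: unsupervised anomaly scoring over transaction data. This is not the deepfake-detection tooling Pitt describes, nor anything from IBM’s report; the features, data, and parameters are invented for illustration, using scikit-learn’s IsolationForest.

```python
# Illustrative only: unsupervised anomaly scoring over synthetic
# transaction data. Features and parameters are hypothetical, not
# drawn from IBM's report or any vendor tool.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=7)

# Synthetic "normal" activity: transaction amount (USD) and hour of day.
normal = np.column_stack([
    rng.lognormal(mean=4.0, sigma=0.6, size=2000),  # everyday amounts
    rng.normal(loc=14.0, scale=3.0, size=2000),     # business hours
])

# A few synthetic outliers: large transfers at unusual hours.
suspicious = np.column_stack([
    rng.lognormal(mean=7.5, sigma=0.3, size=15),
    rng.normal(loc=3.0, scale=1.0, size=15),
])

X = np.vstack([normal, suspicious])

# contamination is the analyst's prior on how much of the traffic
# is fraudulent; the model learns what "normal" looks like.
detector = IsolationForest(contamination=0.01, random_state=7)
detector.fit(X)

# predict() returns -1 for suspected anomalies and 1 for inliers.
flags = detector.predict(X)
print(f"Flagged {int((flags == -1).sum())} of {len(X)} transactions for review")
```

In practice, a detector like this would route flagged transactions to an analyst’s review queue rather than block them outright, and it would be retrained as attacker behavior shifts, echoing Pitt’s point that defenses must adapt faster than legacy, rules-only systems.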