Microsoft’s AI Assistant Can Be Exploited by Cybercriminals

microsoft copilot hacker, AI in India's fintech sector, AI-based biometrics fraud

Microsoft’s Copilot has been touted as a productivity enabler, but the ubiquitous artificial intelligence app’s widespread use also exposes vulnerabilities that criminals can exploit.

At the Black Hat security conference, researcher Michael Bargury demonstrated five ways how Copilot, which has become an integral part of Microsoft 365 apps like Word and Outlook, can be manipulated by bad actors.

For instance, after a hacker gains access to a work email, they can use Copilot to mimic the user’s writing style, including emojis, and send convincing email blasts containing malicious links or malware.

“AI’s ability to assist criminals in writing code to scrape information from social media, paired with its ability to match the speech patterns, tone, and style of an impersonated party’s written communication—whether professional or personal—is an insidious combination,” said Kevin Libby, Fraud & Security Analyst at Javelin Strategy & Research. “When used conjointly, these abilities considerably increase the probability of success for a phishing or smishing operation. AI can even help to scale phishing attacks through automation.”

Poisoning Databases

Bargury demonstrated how a hacker with access to an email account can exploit Copilot to access sensitive information, like salary data, without triggering Microsoft’s security protections.

In other scenarios, he showed how an attacker can poison the Copilot’s database by sending a malicious email and then steering Copilot into providing banking details. Additionally, the AI assistant could also be maneuvered into furnishing critical company data, such as upcoming earnings call forecasts.

During the demonstration, Bargury largely used Copilot for its intended purpose, but also introduced  misinformation and gave Copilot misleading instructions to illustrate how easily the AI could be manipulated.

A Glaring Weakness

The demonstration highlighted a glaring weakness in AI: when secure corporate data is combined with unverified external information. Copilot’s flaws raise concerns about AI’s rapid adoption across nearly every industry, especially in large organizations where employees frequently interact with the technology.

AI can also be one of the strongest tools in fraud detection, as it can help companies discover breaches much faster. Still, it’s clear that the technology is still developing, which opens up opportunities for criminals.

“While AI tools promise innumerable benefits, they also pose significant risks,” Libby said. “Criminals can use AI tools to help them with everything from malicious coding of malware, to scraping social media accounts for PII and other information about potential targets to fortify social engineering attacks, to creating deepfakes of CEOs to scam organizations out of tens of millions of dollars per video or audio call.”

According to Wired, after the demonstration, Bargury praised Microsoft and said the tech giant worked hard to make Copilot secure, but he was able to discover the weaknesses by studying the system’s infrastructure. Microsoft’s leadership responded that they appreciated Bargury’s findings and would work with him to analyze them further.

Exit mobile version