Generative AI making it harder to spot fraudulent emails


Cybercriminals are increasingly using generative AI to bypass email security solutions and trick employees. Experts have shared real-world examples of these attacks.

According to Mike Britton, CISO of Abnormal Security, generative AI is making email attacks harder to spot. Before generative AI broke through, cybercriminals relied on reusable formats or templates as their go-to method for building malicious campaigns.

A significant portion of attacks shared common indicators of compromise, such as identical domain names or malicious links, which made them detectable by conventional security software.
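As a rough illustration of that kind of signature matching, a check against a static blocklist might look like the sketch below. The domains and URLs in it are hypothetical placeholders, not indicators from any real campaign.

```python
# Minimal sketch of indicator-of-compromise (IOC) matching: flag an email
# if its sender domain or any embedded URL matches a known-bad list.
# The entries below are hypothetical placeholders, not real indicators.
import re

KNOWN_BAD_DOMAINS = {"payro11-update.example", "hr-benefits-portal.example"}
KNOWN_BAD_URLS = {"http://login-verify.example/reset"}

def flag_email(sender: str, body: str) -> bool:
    """Return True if the email matches a previously seen indicator."""
    sender_domain = sender.rsplit("@", 1)[-1].lower()
    if sender_domain in KNOWN_BAD_DOMAINS:
        return True
    # Pull URLs out of the body and compare them against the blocklist.
    for url in re.findall(r"https?://\S+", body):
        if url.rstrip(".,);") in KNOWN_BAD_URLS:
            return True
    return False

print(flag_email("hr@payro11-update.example", "Please review your benefits."))  # True
```

Checks like this only work for as long as attackers reuse the same strings.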

However, generative AI has quickly become a game-changer, as it enables scammers to craft unique content in milliseconds. This makes detection, which typically relies on matching known malicious text strings, significantly more challenging.

Generative AI has made social engineering attacks and email threats more sophisticated. Cybercriminals can abuse the ChatGPT API to create convincing phishing emails, malware, and fraudulent payment requests.

While OpenAI has implemented safety features in ChatGPT, cybercriminals sidestep them by creating their own malicious versions, such as WormGPT and FraudGPT, to generate deceptive content.

AI also eliminates the grammatical errors and typos that were once clear indicators of an attack, making people more susceptible to falling victim than ever before.

Throughout 2023, Abnormal Security detected a number of email attacks on its customers that were likely generated by AI. The firm used the Giant Language Model Test Room, or GLTR, to analyze the malicious emails and assess how likely it was that they were machine-generated.

The tool color-codes each word according to how predictable it was given the preceding context: green marks words among the model's top 10 predictions, yellow the top 100, red the top 1,000, and purple everything else. Because language models tend to choose highly predictable words, text dominated by green and yellow is a strong hint that it was machine-generated.
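As a rough sketch of how this kind of analysis works, the snippet below ranks each token of a message against a small reference language model and sorts it into GLTR's color buckets. It assumes the Hugging Face transformers library and GPT-2 (the model GLTR was originally built around); it illustrates the general technique, not Abnormal Security's or GLTR's actual implementation.

```python
# GLTR-style analysis sketch: for each token, compute its rank among the
# reference model's predictions, then map the rank to a color bucket.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def rank_tokens(text: str):
    """Return (token, rank) pairs: how highly the model predicted each token."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits            # shape: (1, seq_len, vocab_size)
    results = []
    for pos in range(1, ids.shape[1]):        # predict the token at `pos` from its prefix
        prev_logits = logits[0, pos - 1]
        token_id = ids[0, pos].item()
        # Rank = 1 + number of vocabulary entries the model scored higher.
        rank = int((prev_logits > prev_logits[token_id]).sum().item()) + 1
        results.append((tokenizer.decode([token_id]), rank))
    return results

def bucket(rank: int) -> str:
    """Map a prediction rank to GLTR's color buckets."""
    if rank <= 10:
        return "green"
    if rank <= 100:
        return "yellow"
    if rank <= 1000:
        return "red"
    return "purple"

for token, rank in rank_tokens("Please complete the attached benefits enrollment form."):
    print(f"{token!r:>15} rank={rank:<6} {bucket(rank)}")
```

A mostly green and yellow output suggests the text follows the model's predictions closely, which is what Abnormal Security observed in the examples below.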

The company shared an example of such an AI-generated email attack, in which the threat actor posed as an insurance representative and urged the recipient to open an attachment containing benefits information and an enrollment form to be completed and returned. Recipients were warned that they could lose coverage if they failed to do so.

“Despite a professional facade, our platform determined the attachment likely contains malware, putting the recipient's computer at risk of viruses and credential theft,” writes Britton.

Image: GLTR analysis of the AI-generated insurance phishing email

“As you can see, the majority of the text is highlighted green, indicating that it was likely generated by AI rather than created by a human. You’ll notice that there are also no typos or grammatical errors – signs that have historically been indicative of an attack,” points out Britton.

Another AI-generated attack they detected involved a threat actor posing as a customer service representative from Netflix. The email claimed that the target’s subscription had expired and asked them to log in again. However, the URL led to a malicious site where sensitive information was harvested.

The third example shared by the company featured a cosmetics brand impersonator who attempted invoice fraud. Posing as a business development manager for cosmetics company LYCON, the threat actor tried to get victims to update their billing accounts.

Email recipients were told that irregularities had been noticed in their balance sheets during a mid-year audit. The scam aimed to extract sensitive financial information and reroute payments to the attacker’s bank account.

“We’ve reached a point where only AI can stop AI, and where preventing these attacks and their next-generation counterparts requires using AI-native defenses,” says Britton.

“By understanding the identity of the people within the organization and their normal behavior, the context of the communications, and the content of the email, AI-native solutions can detect attacks that bypass legacy solutions. In fact, this is the only way forward – it is still possible to win the AI arms race, but security leaders must act now to prevent these threats,” he concluded.