Scammers now using AI for A/B testing to craft the most effective trap


Half of all spam emails are now generated by AI. Bad grammar used to be a tell-tale sign of a fraudulent email, but the opposite is now true: humans make more errors than large language models (LLMs).

Cybersecurity company Barracuda, together with researchers from Columbia University and the University of Chicago, analyzed an enormous dataset of unsolicited and malicious emails, spanning from February 2022 to April 2025.

Unsurprisingly, the proportion of AI-written emails has skyrocketed since the release of ChatGPT in November 2022, with threat actors quickly seizing on its capabilities.


The research shows that approximately half of the emails that land in your spam folder are now written using AI. Attackers also rely on AI to launch business email compromise (BEC) attacks. However, the growth of this attack vector has been a lot more modest.


How do AI-written emails stand out?

“AI-generated emails typically showed higher levels of formality, fewer grammatical errors, and greater linguistic sophistication when compared to human-written emails,” Barracuda said.

What’s more, attackers appear to be using AI to test which phrasing is most likely to slip past defense systems and entice potential victims to click.

“This process is similar to A/B testing done in traditional marketing,” the report reads.
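
In marketing, A/B testing means sending two variants of the same message and measuring which one performs better; for scammers, that means which lure draws more clicks. The minimal sketch below uses purely hypothetical variant texts and numbers to show how such a comparison works in general. It is illustrative only and is not taken from the attackers' tooling or from Barracuda's report.

```python
# Illustrative sketch of a generic A/B comparison between two email
# variants. All names and numbers are hypothetical, not data from
# the Barracuda report.
from math import sqrt

def two_proportion_z(clicks_a: int, sends_a: int,
                     clicks_b: int, sends_b: int) -> float:
    """Z-statistic for the difference between two click-through rates."""
    p_a = clicks_a / sends_a
    p_b = clicks_b / sends_b
    pooled = (clicks_a + clicks_b) / (sends_a + sends_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    return (p_b - p_a) / se

# Two reworded variants of the same lure, sent to equal-sized groups.
variant_a = {"sends": 10_000, "clicks": 180}  # "Your invoice is overdue"
variant_b = {"sends": 10_000, "clicks": 240}  # "Action required: unpaid invoice"

z = two_proportion_z(variant_a["clicks"], variant_a["sends"],
                     variant_b["clicks"], variant_b["sends"])
print(f"Variant A click rate: {variant_a['clicks'] / variant_a['sends']:.2%}")
print(f"Variant B click rate: {variant_b['clicks'] / variant_b['sends']:.2%}")
print(f"z-statistic: {z:.2f}")  # |z| > 1.96 suggests the gap is not chance
```

The same kind of comparison could, in principle, be run against spam filters rather than human recipients, which is what the report suggests attackers are doing when they probe for wording that goes undetected.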


Examples of emails detected as LLM-generated. The first is a BEC email; the second and third are spam emails that appear to be reworded variants of each other, with the differences shown in red.