
The La Roux duo might be bulletproof, as their hit song claims, but artificial intelligence (AI) is not. At least not when it's left as the sole line of defense against phishing emails.
I treat most of my emails as spam and feel like senders have gotten used to that. Important emails often get accompanied by a call or a DM on Slack. “That security training email you got, it’s not spam,” the office administrator would say.
Before the sudden dawn of AI, bad grammar and unsolicited communication were telltale signs of a phishing attempt. But since crooks embraced AI tools like ChatGPT just like everybody else, those giveaways are mostly out of the equation.
Cybersecurity professionals employ AI, too, to guard our email. But, as per a new report titled AI Alone is Not Bulletproof by security firm Cofense Intelligence, many phishing emails still reach our inboxes. This is mainly because AI and machine learning (AI/ML) models are trained on historical data, and emerging threats evolve in the blink of an eye.
However, we should give the AI/ML systems guarding our inboxes some credit, as it's not only poor grammar they can recognize. According to Cofense Intelligence, AI tech can understand the logic behind suspicious emails and spot techniques such as typosquatting, where attackers register look-alike domains that exploit common mistypes and visually similar characters, such as vvindows[.]com and wallrnart[.]com.
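To make that concrete, here is a minimal Python sketch of the kind of look-alike check such a filter might encode. The trusted-domain list, homoglyph substitutions, and similarity threshold are illustrative assumptions for the example, not anything taken from Cofense's actual models.

```python
# Minimal sketch of look-alike (typosquatting) domain detection.
# The trusted list, substitution map, and threshold are assumptions.

from difflib import SequenceMatcher

TRUSTED = {"windows.com", "walmart.com", "microsoft.com", "paypal.com"}

# Common visual tricks: "vv" for "w", "rn" for "m", digits for letters.
HOMOGLYPHS = [("vv", "w"), ("rn", "m"), ("0", "o"), ("1", "l"), ("3", "e")]


def normalize(domain: str) -> str:
    """Collapse look-alike character sequences to the letters they imitate."""
    d = domain.lower()
    for fake, real in HOMOGLYPHS:
        d = d.replace(fake, real)
    return d


def looks_typosquatted(domain: str, threshold: float = 0.85) -> bool:
    """Flag a domain that is suspiciously close to (but not exactly) a trusted one."""
    if domain in TRUSTED:
        return False  # exact match is the real site
    norm = normalize(domain)
    return any(
        SequenceMatcher(None, norm, trusted).ratio() >= threshold
        for trusted in TRUSTED
    )


for candidate in ["vvindows.com", "wallrnart.com", "windows.com", "example.org"]:
    print(candidate, "->", looks_typosquatted(candidate))
```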
AI can also recognize malicious sites (or attempt to do so) based on domain registration date, age, and category, and can judge websites by their images and their resemblance to frequently spoofed pages.
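A simple way to picture those registration-based heuristics is a scoring function like the sketch below. The field names, risk categories, and weights are assumptions made for illustration; a real system would pull this metadata from WHOIS and URL-categorization feeds and weigh it far more carefully.

```python
# Hedged sketch of domain-reputation scoring based on registration age and category.
# Weights and category list are illustrative, not from the Cofense report.

from dataclasses import dataclass
from datetime import date

# Categories attackers frequently impersonate (assumed list for this example).
HIGH_RISK_CATEGORIES = {"finance", "shipping", "webmail", "file-sharing"}


@dataclass
class DomainInfo:
    name: str
    registered_on: date   # e.g. supplied by an upstream WHOIS enrichment step
    category: str         # e.g. supplied by a URL-categorization feed


def risk_score(info: DomainInfo, today: date | None = None) -> int:
    """Return a rough 0-100 risk score; higher means more suspicious."""
    today = today or date.today()
    age_days = (today - info.registered_on).days
    score = 0
    if age_days < 30:        # freshly registered domains are a classic phishing tell
        score += 50
    elif age_days < 365:
        score += 25
    if info.category.lower() in HIGH_RISK_CATEGORIES:
        score += 30
    return min(score, 100)


print(risk_score(DomainInfo("wallrnart.com", date(2025, 5, 1), "finance"),
                 today=date(2025, 5, 10)))   # 80: newly registered + high-risk category
```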
“Although AI can help, it can also hurt organizations,” Jacob Malimban, author of the report, writes. That is especially true if the AI model in charge of defense is inadequately trained and left unsupervised.
Yes, modern secure email gateways (SEGs) can block messages with urgent language. However, what happens when a hacker uses automated tools not only to craft an error-free email message but also to scout publicly available resources for employee and company info to make the phishing email more personalized and, therefore, more convincing?
“Attackers can compromise accounts then train AI to copy the victim’s writing style. Combined with using the compromised account to reply to preexisting email threads, new targets may be less vigilant than usual,” the report reads.
In addition, criminals are exploiting deepfakes that can mimic the appearances and voices of trusted contacts with increasing accuracy. No one is safe from these, not even cybersecurity companies like LastPass, which admitted to having suffered a deepfake scam in which attackers impersonated its CEO on a call.
In one case mentioned by Cofense Intelligence, criminals mimicked the CFO and other employees on a conference call, convincing the victim organization to transfer funds. This single attack cost the company $25 million.
AI is also less adept at analyzing emails that require action from the user, such as scanning a QR code. QR code scams, or quishing, are an increasingly popular attack vector because they bypass traditional defense mechanisms.
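Closing that gap means the gateway has to do extra work: render or open image attachments, decode any QR codes, and feed the embedded URLs into the same link checks applied to the email body. The sketch below shows one way that could look; it assumes the third-party pyzbar and Pillow packages, and the is_suspicious_url() helper is a hypothetical stand-in for whatever URL analysis already runs on plain links.

```python
# Sketch: decode QR codes from an image attachment and flag suspicious payload URLs.
# Assumes the pyzbar and Pillow packages; is_suspicious_url() is a placeholder.

from PIL import Image
from pyzbar.pyzbar import decode


def is_suspicious_url(url: str) -> bool:
    # Placeholder for the URL reputation / typosquatting checks that already
    # run on ordinary links in the email body.
    return url.startswith("http://") or "login" in url.lower()


def urls_from_qr_image(path: str) -> list[str]:
    """Decode every QR code found in an image attachment and return its payloads."""
    return [symbol.data.decode("utf-8", errors="replace")
            for symbol in decode(Image.open(path))]


for url in urls_from_qr_image("attachment.png"):
    if is_suspicious_url(url):
        print("Flagged QR payload:", url)
```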
Attackers can apparently also fool defenses simply by embedding a malicious link or QR code in an attached PDF or similar document. They also keep using CAPTCHAs to make AI analysis more difficult.
To us humans, CAPTCHAs create a false sense of security. Threat actors, however, use them for the same reason legitimate companies do: to make sure their malicious sites are visited by humans rather than bots, including automated analysis tools.
And when an attack combines a QR code inside a PDF with a CAPTCHA-protected malicious site, the AI guardian is left all but toothless.
The list is far from complete here, but I suggest you head to the Cofense Intelligence report for a detailed explanation of this mind-boggling innovation on both the attacker and defender sides.
“Offensive AI used by attackers will be better developed (compared to defensive AI) because of no legal, copyright, or ethical constraints,” the report noted.