AI still more of a buzzword than a real tool in cybercriminal underground


Cybercrooks are clearly interested in using AI in their campaigns, but researchers have yet to see them unlock the technology’s power at scale.

In its annual threat report for 2024, Intel 471, a global provider of cyber threat intelligence solutions, celebrates “a year of law enforcement wins” that substantially disrupted the cybercriminal underground.

For instance, the company saw the slow decline of the ransomware-as-a-service leader LockBit following Operation Cronos and, consequently, the rapid rise of RansomHub in its place.


Still, with everyone talking about AI, the report also devotes much of its attention to the developing technology and its impact on the cybercriminal underground. We spoke to Intel 471’s analysts, who told us that the crooks still aren’t able to unlock AI’s power at scale.

In the report, researchers say they observed cybercriminals advertising a handful of AI-based tools last year.

They include an AI-powered data exfiltration and analysis tool; a tool allegedly powered by AI that analyzes, scrapes, and summarizes information about common vulnerabilities and exposures (CVEs); and an AI-based tool that swaps out the details on business invoices, designed to facilitate invoice fraud in business email compromise (BEC) attacks.


AI-based tools mostly helped the crooks bypass verification protocols and stage phishing attacks. AI was also used in election-related disinformation campaigns.

Intel 471 told Cybernews that threat actors usually create specialized AI-based software themselves because they want the tools to “meet their unique needs or conduct a particular kind of cybercriminal activity.”

Is there a developing underground ecosystem for such tools? Well, just as in the legitimate market, AI is still a buzzword that can be leveraged by both sellers and buyers, the analysts say.

“These AI-driven offerings have gained significant traction within the underground, as criminals seek to capitalize on the efficiencies and capabilities that AI can bring to their operations,” Intel 471 told Cybernews.


However, “despite the underground’s obvious interest in AI, we still have not seen cybercriminals unlock the technology’s power at scale,” the analysts pointed out.

According to Intel 471, malicious chatbots have had an average shelf life of weeks to months, with some shutting down after becoming too popular and others performing too poorly to attract customers.

“Guardrails put in place by reputable technology companies have been mostly successful in restraining actors from leveraging existing AI offers for malicious purposes such as developing malware, aside from creating relatively rudimentary variants,” Intel 471 said.

Finally, perfectly legitimate AI tools have been improving and have lately become cheaper. Plus, some are open source. That makes it harder for malicious actors to sell their products at the high prices they commanded even a year ago.

“The landscape for such tools is turbulent, with many of these illicit services lasting a few months before they backtrack/instill guardrails, are taken down, rebrand, or become obsolete. That said, threat actors continue to try and offer them, and they can answer a much broader set of tasks than similar tooling offered in, say, 2022,” analysts told Cybernews.