Cybergang attacks Germany using AI-generated code


Cybercriminals impersonating legitimate German companies are attacking organizations in the country. This time, they’re incorporating an AI-generated dropper into the attack chain to deliver other malware, the cybersecurity company Proofpoint has found.

Proofpoint warns that dozens of organizations across various industries in Germany are receiving emails containing fake invoices in password-protected ZIP files.

The hackers, identified as TA547, provide the password in the email itself (MAR26 in the described case) so that recipients can unpack the malware. The archive contains an LNK file that, when executed, triggers a PowerShell script. The script acts as a dropper that decodes and executes Rhadamanthys, an information stealer used by multiple cybercriminal threat actors.
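As a rough defensive illustration (not part of the Proofpoint report), the delivery pattern described above can be flagged with a simple triage check: a password-protected ZIP attachment that contains a Windows shortcut (.lnk) file. The file name below is hypothetical, and the sketch assumes a standard ZIP archive whose entry names are readable without the password.

```python
import zipfile


def contains_lnk(path: str) -> bool:
    """Return True if the archive lists any Windows shortcut (.lnk) entries."""
    with zipfile.ZipFile(path) as archive:
        # Entry names in a standard password-protected ZIP are readable without
        # the password; the password (MAR26 in the described case) would only be
        # needed to extract the encrypted contents.
        return any(name.lower().endswith(".lnk") for name in archive.namelist())


if __name__ == "__main__":
    # Hypothetical attachment name, used purely for illustration.
    print(contains_lnk("fake_invoice.zip"))
```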


This is one of the first observed cases of AI-generated code being used in a malware delivery chain.

“The actor appeared to use a PowerShell script that researchers suspect was generated by large language models (LLM) such as ChatGPT, Gemini, CoPilot, etc.,” the report reads.

Several signs point to this. When deobfuscated into readable code, the PowerShell script contains detailed, grammatically correct, hyper-specific comments above each component of the script. Few legitimate programmers are that thorough, and code used by threat actors rarely includes any comments at all.
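For a sense of what that looks like, here is a benign, invented Python snippet (not the actual malicious script) written in the over-commented style Proofpoint describes; the file name is hypothetical.

```python
# Illustrative only: harmless code written with a polished, hyper-specific
# comment above every trivial step, the hallmark described in the report.

# Import the standard library module used for hashing data.
import hashlib

# Define the path of the file whose integrity we want to verify.
file_path = "invoice.pdf"

# Open the file in binary mode so the bytes are read exactly as stored on disk.
with open(file_path, "rb") as handle:
    # Read the entire contents of the file into memory.
    data = handle.read()

# Compute the SHA-256 digest of the file contents.
digest = hashlib.sha256(data).hexdigest()

# Print the resulting hexadecimal digest to standard output.
print(digest)
```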

“This is a typical output of LLM-generated coding content and suggests TA547 used some type of LLM-enabled tool to write (or rewrite) the PowerShell or copied the script from another source that had used it,” the researchers explain.

AI-generated malicious code. Image by Proofpoint.

Cybernews ran the provided code sample through several AI content detectors, such as GPTZero and QuillBot, which also assessed the code as at least partially AI-generated. The hackers may have used AI to generate the email lure, too.

Why does it matter? The shift in some techniques indicates that threat actors are increasingly leveraging large language models (LLMs) to build more sophisticated attack chains. LLMs help generate social engineering lures and code, allowing threat actors to scale their malicious activities.

Impersonating email. Image by Proofpoint.

In the recent TA547 attacks, the LLM-generated content did not change the functionality or efficacy of the malware, nor did it affect network defenders’ ability to detect the malicious activity. The AI-written script helped deliver the malware payload but did not alter the payload itself.

Proofpoint describes TA547 as a financially motivated cybercriminal threat actor, considered an initial access broker (IAB), that targets various geographic regions.

“Since 2023, TA547 typically delivers NetSupport RAT but has occasionally delivered other payloads, including StealC and Lumma Stealer (information stealers with similar functionality to Rhadamanthys). They appeared to favor zipped JavaScript attachments as initial delivery payloads in 2023, but the actor switched to compressed LNKs in early March 2024. In addition to campaigns in Germany, other recent geographic targeting includes organizations in Spain, Switzerland, Austria, and the US,” Proofpoint writes.