Cybercriminals are embracing WormGPT, an AI-based tool that automates phishing emails and facilitates business email compromise (BEC) attacks with exceptional grammar in multiple languages. It’s essentially ChatGPT without any ethical boundaries or limitations, security firm SlashNext has discovered.
The new cyber weapon is said to revolutionize phishing attacks by generating convincing, human-like text based on the input it receives, opening a whole new vector for business email compromise attacks.
“This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities,” writes Daniel Kelley, a reformed black hat computer hacker collaborating with the SlashNext team.
The team gained access to the sophisticated tool through an online forum associated with cybercrime.
Cybercriminals can now use the technology to automate the creation of compelling fake emails personalized to recipients, and carry on conversations with minimal personal involvement. This increases both the scale and the success rate of attacks.
Interestingly, WormGPT doesn’t use OpenAI’s tech. It’s based on GPT-J, an open-source large language model released in 2021 with roughly 6 billion parameters, and boasts features including unlimited character support, chat memory retention, and code formatting capabilities. Its performance is described as comparable to an older GPT-3 model.
WormGPT’s author supposedly trained the model on diverse data sources, concentrating mainly on malware-related data.
“We see that malicious actors are now creating their own custom modules similar to ChatGPT, but easier to use for nefarious purposes. Not only are they creating these custom modules, but they are also advertising them to fellow bad actors,” Kelley said.
Experiments with WormGPT showed that unsuspecting account managers would have difficulty distinguishing the fraudulent emails from legitimate ones, as the output is remarkably persuasive, strategically cunning, and grammatically impeccable.
WormGPT is subscription-based and costs 100 euros monthly or 550 euros yearly, while a “private setup” would set adversaries back 5,000 euros. A 5 percent discount is offered with the coupon code “SAGE.” Potential buyers must contact the developer via Telegram.
According to the researchers, companies should train employees to spot BEC attempts, implement strict email verification processes, and regularly test their security measures.
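One concrete layer of strict email verification is checking the Authentication-Results header that a receiving mail server stamps onto incoming messages, and refusing to trust any message whose SPF, DKIM, or DMARC check did not pass. The sketch below, which is illustrative and not from the SlashNext research (the sample message and helper names are invented), shows the idea using only Python’s standard library:

```python
# Minimal sketch: gate incoming mail on the Authentication-Results header
# (RFC 8601), requiring SPF, DKIM, and DMARC to all report "pass".
# A well-written AI-generated phishing email still fails these checks
# if it spoofs the sender's domain.
from email import message_from_string

def auth_results(raw_message: str) -> dict:
    """Parse e.g. 'spf=pass ...; dkim=fail ...' into {'spf': 'pass', ...}."""
    msg = message_from_string(raw_message)
    header = msg.get("Authentication-Results", "")
    results = {}
    for clause in header.split(";")[1:]:      # skip the leading authserv-id
        parts = clause.strip().split("=", 1)
        if len(parts) == 2:
            method = parts[0].strip().lower()
            results[method] = parts[1].split()[0].strip().lower()
    return results

def looks_trustworthy(raw_message: str) -> bool:
    """Require every major authentication method to report 'pass'."""
    results = auth_results(raw_message)
    return all(results.get(m) == "pass" for m in ("spf", "dkim", "dmarc"))

# Invented sample: DKIM and SPF pass, but DMARC alignment failed.
sample = (
    "Authentication-Results: mx.example.com; spf=pass "
    "smtp.mailfrom=example.org; dkim=pass header.d=example.org; "
    "dmarc=fail header.from=example.org\n"
    "From: ceo@example.org\n\nPlease wire 50,000 EUR today.\n"
)
print(looks_trustworthy(sample))  # False: the DMARC failure blocks it
```

Real deployments do this in the mail gateway rather than in application code, but the principle is the same: persuasive wording is no longer a useful signal, so verification must rely on cryptographic and DNS-based checks instead.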
Even ChatGPT, when “jailbroken” with carefully crafted prompts, is able to “facilitate a significant number of criminal activities, ranging from helping criminals to stay anonymous to specific crimes including terrorism and child sexual exploitation,” Europol noted in a recent report.
Malicious actors are now filling dark-web forums with their own custom modules that are specifically trained to help with cybercrimes.
And subsequent iterations of large language models will pose even greater risks, as they will have access to more data and be able to solve more complex problems.
“Dark LLMs trained to facilitate harmful output may become a key criminal business model of the future,” Europol noted.