ChatGPT has taken the cyber world by storm, but many critics are raising serious concerns about how it might be misused to spread disinformation and hate speech, or to aid threat actors.
The latest concerns emerged this week when identity protection company CyberArk tested the artificially intelligent writing software and found that not only can it write malware – it can write malware that is much harder to detect and therefore defend against.
Malware typically uses strings of code to carry out its malicious functions – for example, injecting itself into a target system’s ‘healthy’ code or changing the names and locations of files.
“These code strings can be the downfall of the malware, as anti-virus software often searches on common malicious code commands,” Maria-Kristina Hayden, CEO of cyber hygiene company OUTFOXM, told Cybernews.
But polymorphic malware – so-called because its code changes each time it runs while retaining the same function – is able to ‘outsmart’ many types of detection software, which tends to rely on previously detected patterns known as “signatures” to spot bad programs.
“Polymorphic code makes it much more difficult for these signature-based systems to detect the malware because the identifiable features are constantly changing and avoiding detection,” said Hayden.
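The signature-based detection Hayden describes can be illustrated with a minimal sketch: a scanner that searches file bytes for known malicious patterns. The signature names and byte patterns below are invented for illustration – real anti-virus engines use large, curated signature databases and far more sophisticated matching – but the sketch shows why even a trivial change to the bytes defeats the match, which is exactly the weakness polymorphic code exploits.

```python
# Minimal sketch of signature-based detection: scan data for known
# malicious byte patterns ("signatures"). These signatures are made up
# for illustration only.

SIGNATURES = {
    "demo-trojan": b"\xde\xad\xbe\xef",  # hypothetical byte pattern
    "demo-dropper": b"EVIL_PAYLOAD",     # hypothetical string marker
}

def scan(data: bytes) -> list[str]:
    """Return the names of any known signatures found in the data."""
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

# A sample containing a known pattern is flagged:
print(scan(b"header" + b"EVIL_PAYLOAD" + b"footer"))  # ['demo-dropper']

# But a sample whose bytes differ even slightly slips through,
# because there is no stored signature that matches:
print(scan(b"header" + b"EVIL_payload" + b"footer"))  # []
```

Because each run of a polymorphic program presents different bytes, no single stored pattern reliably matches it – the "identifiable features are constantly changing," as Hayden puts it.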
Ordinarily, such code has to be written manually – but the advent of ChatGPT means this is no longer the case, she explained.
“Researchers have identified that with ChatGPT, people can create malware that is polymorphic in nature and does not contain the common malicious code commands that anti-virus watches for,” she said.
This is achieved by writing malware that queries ChatGPT and interprets the returned code ‘on the fly’ as it carries out its tasks: this allows it to effectively ‘outsource’ the generation of continually changing code and thus avoid detection in the usual manner.
Hayden said: “Instead of building in the typical malicious code strings needed to do bad things, the malware uses new functionality to instead ask ChatGPT to write and provide the malicious portions of code at different points in the execution process, and then interpret and run the code. The malware can also be designed to later delete any code ChatGPT sent, removing any traces.”