DeepSeek’s chatbot can be used to generate ransomware and keyloggers


DeepSeek’s reasoning model R1 can easily be tricked into generating malicious code, even though the output still requires human refinement, research shows.

While generative AI tools significantly complement the work of cybersecurity professionals and companies, bad actors can easily exploit them for malicious purposes.

There have been numerous attempts to misuse chatbots like ChatGPT, prompting companies like OpenAI to add guardrails against malicious use.


However, some models, like the latest DeepSeek reasoning model R1, may be easier to manipulate to create malicious code.

Researchers from cybersecurity company Tenable demonstrated how, with several prompts and workarounds, R1 can produce half-baked keylogger and ransomware code.

Because R1 can reason and display its chain of thought, the researchers were able to observe the model’s step-by-step thinking as it generated the malware.

Generating a keylogger and ransomware

Unsurprisingly, when DeepSeek’s chatbot was prompted to write a keylogger in C++, it refused to do so, explaining that keyloggers can be used maliciously.

However, the chatbot’s weak guardrails can be bypassed with additional prompts, such as claiming that the results will be used for educational purposes.

After a few more prompts, R1 outlined how to create a keylogger and generated a buggy implementation. By the team’s estimate, the code was four fixes away from being fully functional.

The researchers then tried to improve the keylogger, asking the model how to better hide the log file. Even after implementing its suggestions, they still had to adjust the code manually.


Tenable also applied similar jailbreaking logic to create ransomware, and R1 provided a step-by-step overview of the process.


“DeepSeek was able to identify potential issues when planning the development of this simple ransomware, such as file permissions, handling large files, performance, and anti-debugging techniques. Additionally, DeepSeek was able to identify some potential challenges in implementation, including the need for testing and debugging,” the company claims in its report.

Several additional prompts led to the creation of “a few” working ransomware samples, though they needed to be manually edited before compiling.

“At its core, DeepSeek can create the basic structure for malware. However, it is not capable of doing so without additional prompt engineering as well as manual code editing for more advanced features,” the researchers conclude.