The large language model (LLM) black market is not a thing of the future; it's happening right now. Bad actors are actively exploiting stolen credentials to access existing models and even to activate unreleased ones.
Have you ever logged in to your Netflix or another streaming service and seen an account you don't recognize? Often, that person has exploited your credentials and is using the service you pay for, free of charge.
Well, this goes beyond just breaking into your streaming service. Bad actors are stealing credentials to access artificial intelligence (AI) models and using them for free. They’re also selling access and even activating LLMs that weren’t meant to be used.
The Sysdig Threat Research Team (TRT) has coined the term LLM-jacking to describe an attacker illegally accessing an LLM, like ChatGPT or Claude, by exploiting stolen login credentials to gain entry to the cloud system where the model is hosted.
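To see why a leaked key is all it takes, here is a minimal sketch, assuming the target is a Claude model hosted on Amazon Bedrock; the key values and model ID below are placeholders, not details from the Sysdig report:

```python
# Minimal sketch: with valid (here: stolen) AWS credentials, a single
# API call is enough to run prompts against a cloud-hosted model.
import json

import boto3

client = boto3.client(
    "bedrock-runtime",
    region_name="us-east-1",
    aws_access_key_id="AKIA_PLACEHOLDER",        # stolen key (placeholder)
    aws_secret_access_key="SECRET_PLACEHOLDER",  # stolen secret (placeholder)
)

response = client.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # example model ID
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Hello"}],
    }),
)
print(json.loads(response["body"].read())["content"][0]["text"])
```

Every call like this is billed to whoever owns the credentials, which is what makes the attack lucrative.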
And according to Sysdig TRT, the costs can be extreme, with unauthorized access running victims over $100,000 per day.
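For a rough sense of how a bill reaches that figure, assume Opus-class pricing of about $15 per million input tokens and $75 per million output tokens; the rates and traffic volumes here are illustrative assumptions, not numbers from the Sysdig report:

```python
# Back-of-the-envelope: sustained abuse at assumed Opus-class rates.
INPUT_PRICE = 15 / 1_000_000   # dollars per input token (assumed)
OUTPUT_PRICE = 75 / 1_000_000  # dollars per output token (assumed)

requests_per_day = 1_000 * 60 * 24   # ~1,000 requests/minute, around the clock
in_tokens, out_tokens = 500, 800     # tokens per request (assumed)

daily_cost = requests_per_day * (
    in_tokens * INPUT_PRICE + out_tokens * OUTPUT_PRICE
)
print(f"${daily_cost:,.0f} per day")  # ~$97,200 per day
```

At that hypothetical volume, the victim's bill lands in the six-figure-per-day range the researchers describe.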
LLM-jacking has become increasingly popular, so popular that attackers' tooling is growing more sophisticated, and bad actors appear to be using the very LLMs they abuse to improve it.
But why would someone hijack an LLM? Surely there are other ways to access the popular models on the market?
Well, there are various reasons, some more nefarious than others.
Bypassing sanctions
One motive observed by Sysdig TRT is the desire to bypass sanctions imposed on countries like Russia by tech companies and other organizations.
Since the start of Russia’s invasion of Ukraine, various sanctions have been imposed, making it difficult for Russia to access certain tools and technologies.
For example, tech companies like Amazon and Microsoft have banned all individuals and organizations in Russia from using their tools.
This fueled demand for illicit access, as Russian companies and citizens still want to use LLMs.
In one instance, the threat research team observed a Russian student using stolen Amazon Web Services (AWS) credentials to access a Claude model. Ironically, the student was working on a project involving artificial intelligence chatbots.
The student entered names into the prompt (which were later removed), allowing the researchers to identify that the bad actor was studying at a university in Russia. They believe the student used stolen credentials and an external bot that "acts as a proxy for their prompts."
“We saw many more examples of Russian language queries, but this prompt in particular has enough supporting information to prove it came from inside of Russia,” Sysdig TRT said.
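The "proxy for their prompts" is easier to picture with a sketch. One plausible shape, again assuming a Bedrock-hosted Claude model (the endpoint, port, and model ID are illustrative, not tooling documented by Sysdig), is a small web service that accepts prompts and relays them using the stolen credentials, so end users never see the keys:

```python
# Illustrative sketch of the "proxy bot" pattern: users send prompts
# to this endpoint; boto3 picks up the (stolen) credentials from the
# operator's environment, so they are never exposed to the users.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

class PromptProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the raw prompt from the request body.
        length = int(self.headers.get("Content-Length", 0))
        prompt = self.rfile.read(length).decode("utf-8")

        # Relay it to the model under the proxied credentials.
        result = bedrock.invoke_model(
            modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # example
            body=json.dumps({
                "anthropic_version": "bedrock-2023-05-31",
                "max_tokens": 512,
                "messages": [{"role": "user", "content": prompt}],
            }),
        )

        # Return the model's reply to the anonymous user.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(result["body"].read())

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), PromptProxy).serve_forever()
```

A setup like this would also help explain why it was the prompts, rather than the network traffic, that gave the student's location away.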
Image analysis
LLM-jacking is also used for image analysis, often to help the actor cheat their way through a puzzle or game.
Researchers observed requests for LLMs like Claude to “cheat on a puzzle to help the attacker get a better score.”
However, attackers are also using these techniques to bypass restrictions implemented by creators.
For example, bad actors used LLMs to “extract text from Optical Character Recognition, primarily for adult content.”
Adult role play
AI girlfriends aren't uncommon in this day and age, but most publicly available models restrict graphic and sexual content on their platforms.
LLM-jacking, however, allows actors to bypass these restrictions and hold adult conversations with LLMs.
Such "conversations" are very costly, though, due to the high volume of prompts and responses they require.
Researchers have noted that cloud accounts have long been a coveted target. But now, cloud-hosted LLMs are being continuously exploited, as "attackers are actively searching for credentials to access and enable AI models to achieve their goals, spurring the creation of an LLM-access black market."