Musk’s AI model Grok an easy target for malicious actors


Security researchers say that while Grok, an AI model released by Elon Musk’s startup xAI, is interesting, it can cause harm and be misused by malicious actors.

“Grok is an AI modeled after the Hitchhiker’s Guide to the Galaxy, so intended to answer almost anything and, far harder, even suggest what questions to ask!” xAI tweeted last week.

Musk’s company added that the model is still an early beta product and has been trained for only two months. Apparently, the hope is that Grok will keep improving.

However, some security researchers now say that the new model might prove to be quite harmful as Grok is being trained on X’s user data, which is controversial, to say the least.

Musk, however, has embraced this approach, claiming that rival chatbots are trained on "politically correct" data. The billionaire, who has been critical of Big Tech's AI efforts and censorship, said earlier this year that he would launch a "maximum truth-seeking AI" that tries to understand the nature of the universe, rivaling Google's Bard and Microsoft's Bing AI.

The billionaire’s startup says Grok is witty and rebellious, and that it will answer “spicy” questions that most other AI tools refuse. But that’s precisely the problem, says Joseph Thacker, a security researcher at SaaS security company AppOmni.

“I can see Grok struggling due to the data source it uses for training – X. It has an immense amount of data but there’s a lot of toxicity, incorrect information, bias, and racism in there. So while it’s likely to sound very human, there are risks of that surfacing,” said Thacker.

Because it’s trained on X’s user data, Grok is also more likely to exhibit bias, since humans are biased and prone to stereotyping, the researcher added: “That’s what shows up on social media time and time again, so I expect it may be more likely to bubble up in Grok versus other models.”

Yes, Grok and its spicy answers might seem more interesting, but ethical and legal issues may arise if, for instance, someone asks the bot for information on illegal activities or sensitive topics.

Finally, Grok’s personality can actually be a problem and an inconvenience to users. “One nice thing about AI models with less personality is that they feel like they’re a blank slate that can be given whatever personality or tone desired,” said Thacker.

He nevertheless agreed that access to up-to-date X data is extremely useful and could give Grok an edge over other AI systems in some cases – for example, in emergency situations that are widely discussed on X in real-time.

Then again, X is filled with fake news and misinformation these days – even Musk himself posted screenshots containing incorrect information about Sam Bankman-Fried’s recent conviction. Grok got a simple factual point about the trial, the length of time the jury deliberated, wrong.

According to Thacker, Grok also lacks some of the features that other current AI implementations offer. For example, Bard can query an index of the entire internet, while OpenAI’s current GPT-4 implementation can generate images with DALL·E, run code with its data analysis tool, and browse the web.

