
An unrestricted artificial intelligence (AI) chatbot called Venice.ai is gaining popularity in hacking circles, enabling malicious actors to sow chaos online with minimal expertise.
According to mobile security firm Certo, Venice.ai is enabling criminals to generate convincing phishing emails, functional malware, and surveillance tools.
The reason: unlike mainstream models such as ChatGPT, this particular chatbot deliberately removes safety filters and ethical guardrails, marketing itself as a “private and permissionless” service that doesn’t censor user interactions.
In other words, Venice.ai is a web-based AI chatbot that looks and feels like ChatGPT but – under the hood – runs on leading open-source language models without the usual content moderation.
Venice.ai is available for just $18 per month. Unsurprisingly, malicious actors are exploiting the tool in ways that “could significantly enhance their capabilities,” Certo says in its report.
“What we've discovered is deeply concerning,” said Russell Kent-Payne, co-founder at Certo.
“While Venice.ai may have legitimate uses, our research shows it's being actively promoted on hacking forums. Its unrestricted nature means anyone with harmful intentions can access sophisticated capabilities that would typically require technical expertise – creating a concerning security risk.”
Certo researchers have tested the tool and found that it complies with requests to create highly convincing phishing emails or extensive keylogger code for Windows 11.
Ransomware that can encrypt files and generate ransom notes is also easily generated, as is Android spyware capable of silently activating a device’s microphone and transmitting recorded audio to remote servers.
Moreover, when asked to create such malicious content, Venice.ai not only complied but revealed in its reasoning process that it was programmed to respond to any user query, “even if it's offensive or harmful,” deliberately overriding ethical constraints.

The security implications are significant, Certo says. With tools like Venice.ai, even unskilled criminals can mass-produce scam messages that appear professionally written and personalized, potentially increasing the success rate of phishing campaigns and other cyberattacks.
On notorious hacking forums, users have already promoted Venice.ai as a “private and uncensored AI” ideal for illicit uses.
The hype mirrors last year’s buzz around WormGPT and FraudGPT, custom AI chatbots sold on dark web marketplaces as “ChatGPT without the limits” for cybercriminals.
“Today it’s phishing emails and malware code; tomorrow it could be automating new scams or exploits we haven’t yet imagined,” said Sophia Taylor, a cybersecurity specialist at Certo.