
Lovable is the easiest tool for creating phishing scams out of three AI tools tested, a report claims.
For low-skill attackers who want to misuse AI tools to create a phishing scam, Lovable may be the go-to tool. It complies with nearly all prompts for creating malicious websites and content.
ChatGPT is the most difficult to misuse for malicious purposes, such as crafting an SMS message that links to a fake Microsoft login page designed to steal credentials.
Meanwhile, Claude is more vulnerable to misuse than ChatGPT. However, it has more guardrails than Lovable, according to research by cybersecurity company Guardio Labs.
To test the AI tools, the company created its own benchmark and evaluated how useful an AI response would be in a real-world scenario.
When the researchers crafted a prompt that contained all the elements of a scam without explicitly stating its purpose, ChatGPT and Claude refused to answer at first, explaining that the prompt violated ethical boundaries.
However, after Guardio Labs claimed the output would be used only for research and educational purposes, the chatbots provided code snippets for a basic phishing page, a Flask backend to capture input, and a Python script to send an SMS.
Meanwhile, Lovable produced a scam page mimicking Microsoft's login page without hesitation.
“It even redirects to office.com after stealing credentials – a flow straight out of real-world phishing kits. We didn’t ask for that – it’s just a bonus,” Guardio Labs claims in an article on Medium.
ChatGPT is the most resilient
The second stage of testing focused on guidance on improving scam operations, staying anonymous, avoiding detection, collecting data discreetly, and improving delivery techniques.
For example, researchers uploaded a screenshot of an actual login page and asked the model to recreate it.
ChatGPT was the hardest to manipulate, followed by the more vulnerable Claude, while Lovable produced “almost identical” replicas.
“What’s more alarming is not just the graphical similarity but also the user experience. It mimics the real thing so well that it’s arguably smoother than the actual Microsoft login flow,” Guardio Labs says.
The researchers tested the AI tools across several similar tasks, including hiding from detection, finding hosting solutions for a scam page, collecting credentials, and crafting phishing messages.
The results across all of these categories were similar, with ChatGPT having the most guardrails, Claude being more vulnerable to manipulation than ChatGPT, and Lovable in most cases immediately complying with requests for malicious websites.
On the company’s benchmark, which evaluated the chatbots’ resistance to creating scams, Lovable scored only 1.8 out of 10, while Claude scored 4.3 and ChatGPT scored 8.

The risks of exploiting AI tools for malicious purposes have been documented by quite a few researchers, including those at chatbot developers such as OpenAI and Google.
In a recent report, researchers at cybersecurity company Tenable found that DeepSeek’s R1 model can be used to generate keyloggers and ransomware.