LLMs hallucinating phishing links – your AI just sent you to a scam site


Large language models (LLMs) are confidently recommending fake login pages. It’s not just a bug – it’s a blueprint for cybercriminals.

When an artificial intelligence (AI) chatbot serves up a link to a fake Wells Fargo site and presents it as the real deal, you know something’s awry – although usually only after you’ve clicked it and it has stolen your data or scammed you (take your pick).

That, according to Netcraft researchers, is exactly what happened with Perplexity: they asked it for a login page, and it pointed them to a phishing copy instead.


Fake login pages spread

The ominous part is that the system wasn’t tricked – it simply skipped the reputation checks a traditional search engine would apply and served up the wrong answer.

The researchers found that these kinds of hallucinations are actually not that rare.

Netcraft asked AI where to log in to 50 well-known brands and found that over 34% of responses pointed to non-brand-controlled domains – some inactive, some unrelated, all dangerous.
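The core check behind a finding like this is simple: does the domain an AI recommends actually belong to the brand? A minimal sketch of that idea, assuming a hand-maintained allowlist of brand-controlled domains (the domain names and helper functions below are illustrative, not from the Netcraft study):

```python
# Illustrative sketch: flag AI-suggested login URLs whose registrable
# domain is not on a brand's known allowlist.
from urllib.parse import urlparse

# Assumed allowlist of brand-controlled registrable domains (hypothetical data).
BRAND_DOMAINS = {
    "wellsfargo": {"wellsfargo.com"},
    "paypal": {"paypal.com"},
}

def registrable_domain(url: str) -> str:
    """Naive eTLD+1 extraction; production code should use a public-suffix list."""
    host = urlparse(url).hostname or ""
    parts = host.lower().split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

def is_brand_controlled(brand: str, url: str) -> bool:
    """True only if the URL's registrable domain is on the brand's allowlist."""
    return registrable_domain(url) in BRAND_DOMAINS.get(brand, set())

# A real subdomain of the brand passes; a lookalike on another domain does not.
print(is_brand_controlled("wellsfargo", "https://connect.secure.wellsfargo.com/login"))  # True
print(is_brand_controlled("wellsfargo", "https://wellsfargo-login.example.com/"))        # False
```

By this yardstick, any response whose domain falls outside the allowlist counts as "non-brand-controlled" – whether it is parked, unrelated, or actively malicious.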

When your AI concierge is sending you phishing links, it might not be the best hire.


Hallucinations are common

Traditional search results show URLs, snippets, and reputation scores – AI doesn’t.


AI-generated answers often strip away those warning signs and present answers with undeserved confidence.

This means phishing criminals are able to optimize their strategy for chatbots, not just Google.

Thousands of fake support pages and “cracked software” guides are written by (or for) LLMs.

A "genuine fake" sign.
Image by Eye Ubiquitous via Getty

No safety signs are shown

Most of these fake support pages pretend to be official help centers, documentation sites, or troubleshooting guides for crypto wallets, booking platforms, and more.

AI tools – especially newer chatbots and browsers with AI summaries – often treat this fake content as legitimate due to the lack of traditional fact-checking or domain verification.

Netcraft tracked over 17,000 of these AI-written phishing pages hosted on GitBook alone – many targeting crypto users with fake login flows or seed phrase recovery steps.

Hackers have even built fake Solana tools and tutorials designed to trick AI coding assistants into learning – and recommending – malicious code.

Code on a screen.
Image by Picture Alliance via Getty

Endless fake URLs

Brands try to register scammy-looking URLs like paypol-help.com before scammers do, but attackers can generate endless new variants – defensive registration can never keep up.
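Since the variants are endless, defenders score new domains for similarity to a brand name rather than enumerating them. A hedged sketch using plain edit distance – the threshold and token-splitting heuristic below are illustrative assumptions, not Netcraft's actual rules:

```python
# Illustrative sketch: flag domains whose labels sit within a small edit
# distance of a protected brand name (e.g. "paypol" vs "paypal").

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def looks_like_typosquat(domain: str, brand: str, max_dist: int = 2) -> bool:
    """Compare the domain's first label, and its hyphen-split tokens, to the brand."""
    label = domain.split(".")[0]
    tokens = label.split("-") + [label]
    # 0 < distance excludes the brand's own exact-match domains.
    return any(0 < edit_distance(t, brand) <= max_dist for t in tokens)

print(looks_like_typosquat("paypol-help.com", "paypal"))  # True ("paypol" is 1 edit away)
print(looks_like_typosquat("example.com", "paypal"))      # False
```

A rule like this catches whole families of lookalikes at once, which is the point: the defender writes one pattern while the attacker must invent domains one at a time.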

Netcraft uses smart rules, plus machine learning, to stop threats before they’re invented.

One bad link from your AI is all it takes to break user trust, and rebuilding that isn’t so easy.

When the bot gets it wrong, it’s not just a mistake – it’s an open door.
