AI is reshaping threat intelligence for both attackers and defenders – report


After observing more than 2.5 million AI-related posts, ranging from jailbreak prompts and deepfake service ads to phishing toolkits and bespoke language models built for fraud and cybercrime, Flashpoint analysts say the threat landscape is being reshaped significantly.

By now, it’s obvious: artificial intelligence (AI) is a powerful force that can be exploited by threat actors to scale attacks, manipulate perception, and erode trust with a precision and speed simply impossible a few years ago.

Clearly, cybercrime is being reshaped, says Flashpoint, a data and intelligence company, in its new report titled “AI and Threat Intelligence: The Defenders’ Guide.”


“With everything from deepfake-enabled fraud to multilingual phishing campaigns and jailbroken large language models (LLMs), malicious innovation is rapidly advancing – and organizations must outpace it,” explain Flashpoint’s analysts.

They’re tracking these developments in real time across more than 100,000 illicit resources, monitoring everything from dark web marketplaces and Telegram groups to underground LLM communities.


After analyzing over 2.5 million AI-related posts between January and May 2025, Flashpoint has concluded that traditional methods alone can’t keep up with threat actors who adopt AI to boost speed, deception, and reach.

For security and intelligence teams, the question isn’t just how threat actors are using AI; it’s how that activity changes their own risk assessments, workflows, and priorities, the analysts explain.

Cyberattacks on AI-flavored steroids

That’s because the threat landscape is already very different. Threat actors are using AI to impersonate executives, bypass facial recognition, generate multilingual phishing kits, automate reconnaissance across open-source intelligence (OSINT) sources, and scale social engineering attacks with high precision and speed.

For example, following the release of WormGPT and FraudGPT, Flashpoint analysts have seen an uptick in AI tools fine-tuned on malicious datasets: breached credentials, scam scripts, infostealer logs, and malware documentation.


In addition, a new underground economy is forming around jailbreaking. According to Flashpoint, “bypass builders” specialize in defeating the guardrails of mainstream LLMs such as ChatGPT or Gemini to unlock restricted outputs like social engineering scripts, step-by-step hacking tutorials, or bank fraud playbooks.

Additionally, LLMs are being deployed at scale to supercharge disinformation, and vendors are now offering custom face generation for dating scams or audio spoofing for voice verification fraud.

“These services are increasingly offered with optional add-ons like pre-loaded backstories, matching fake documents, and automated scheduling for calls,” says the report.

Finally, adversaries are closing the loop on model tuning. Some malicious LLMs are being refined using underground forum posts, breach dumps, and Telegram logs.

As adversaries use these models to generate outputs, they gather user feedback to fine-tune responses, creating a loop where offensive capability keeps improving over time.


For instance, Flashpoint analysts observed a private Telegram group where users regularly submitted failed prompt attempts back to an LLM developer. These feedback loops led to rapid iteration: new model files were released within days with improved performance and expanded outputs.

Human expertise still vital

Not all hope is lost, however: security teams are also using AI to transform how they respond to threats.


Humans can’t handle this volume of data alone, so defenders use AI to analyze vast datasets in seconds and to automate manual, time-consuming tasks like log analysis, keyword monitoring, and entity extraction. AI can also help detect anomalies in time.
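As a rough illustration of the kind of automation the report describes, the Python sketch below scans a stream of log lines for watchlist keywords and flags hours whose hit counts spike well above the running average. It is a minimal, hypothetical example, not Flashpoint’s tooling; the keywords, threshold, and log format are assumptions made here for demonstration.

```python
import re
from collections import Counter, defaultdict

# Hypothetical watchlist; a real deployment would pull terms from threat intel feeds.
WATCHLIST = ["wormgpt", "fraudgpt", "jailbreak", "infostealer", "deepfake"]

def keyword_hits(log_lines):
    """Count watchlist keyword hits per hour from lines like '2025-05-01T13:42:10 <message>'."""
    hits = defaultdict(Counter)
    for line in log_lines:
        match = re.match(r"(\d{4}-\d{2}-\d{2}T\d{2})", line)
        if not match:
            continue
        hour = match.group(1)
        lowered = line.lower()
        for kw in WATCHLIST:
            if kw in lowered:
                hits[hour][kw] += 1
    return hits

def flag_anomalies(hits, factor=3.0):
    """Flag hours whose total hit count exceeds `factor` times the average hour."""
    totals = {hour: sum(counter.values()) for hour, counter in hits.items()}
    if not totals:
        return []
    baseline = sum(totals.values()) / len(totals)
    return [hour for hour, total in totals.items() if total > factor * baseline]

if __name__ == "__main__":
    sample = [
        "2025-05-01T13:05:11 user posted link to wormgpt build",
        "2025-05-01T13:07:42 jailbreak prompt shared in channel",
        "2025-05-01T14:01:03 unrelated chatter",
    ]
    hourly = keyword_hits(sample)
    print("Hits per hour:", {h: dict(c) for h, c in hourly.items()})
    print("Anomalous hours:", flag_anomalies(hourly))
```

The point of even a toy version like this is scale: the same loop that handles three sample lines handles millions of posts, which is the gap the report says manual review can no longer cover.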

In one case, a Flashpoint customer was monitoring a specific coordination pattern, and Flashpoint’s AI-assisted source discovery uncovered several backup Telegram channels created by the same actor.

“These were flagged within minutes, cutting manual discovery time in half and letting analysts focus on high-value investigation,” says the report.
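The report does not detail how that source discovery works, but a toy sketch can illustrate the general idea: compare newly observed channels against a known actor’s profile (admin handles, title wording) and score the overlap to surface likely backups. The channel names, fields, weights, and threshold below are entirely hypothetical, not Flashpoint’s method.

```python
from dataclasses import dataclass, field

@dataclass
class Channel:
    """Minimal, hypothetical metadata for an observed Telegram channel."""
    name: str
    title: str
    admins: set = field(default_factory=set)

def backup_score(known: Channel, candidate: Channel) -> float:
    """Score overlap between a known actor channel and a candidate (0.0 to 1.0)."""
    admin_overlap = len(known.admins & candidate.admins) / max(len(known.admins), 1)
    known_words = set(known.title.lower().split())
    cand_words = set(candidate.title.lower().split())
    title_overlap = len(known_words & cand_words) / max(len(known_words | cand_words), 1)
    return 0.7 * admin_overlap + 0.3 * title_overlap  # weights are assumptions

if __name__ == "__main__":
    known = Channel("fraud_hub", "Fraud Hub Official", {"actor01", "actor02"})
    candidates = [
        Channel("fraud_hub_backup", "Fraud Hub Backup 2", {"actor01"}),
        Channel("random_news", "Daily News Digest", {"someone_else"}),
    ]
    for c in candidates:
        score = backup_score(known, c)
        if score >= 0.4:  # hypothetical alert threshold
            print(f"Possible backup channel: {c.name} (score {score:.2f})")
```

In this framing, the model’s job is triage: it ranks candidates for an analyst to verify, which matches the report’s claim that the automation cut discovery time rather than replaced the investigation.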


In other words, security teams are beginning to treat AI not as a nice-to-have add-on, but as a core capability: essential for staying ahead of adversaries, and making intelligence truly actionable.

Still, according to Flashpoint, AI should support human expertise rather than replace it: “In critical missions, AI needs to empower people, not distract them.”

If AI is trusted almost blindly, mistakes are more than likely. AI cannot, for instance, predict attacks: analysts can and do use it to spot signals and anomalies earlier, but prediction still requires human intelligence to translate those signals into action.

“It’s still the analyst who draws the conclusion, connects the dots, and makes the final call,” says Flashpoint.
