
Security experts fear that artificial intelligence (AI) agents will soon perform sophisticated and difficult-to-detect cyberattacks at scale. The release of ChatGPT in 2022 transformed the cybercrime landscape with automated phishing, deepfakes, and malware development.
A new ThreatDown report from cybersecurity firm Malwarebytes warns about the imminent rise of autonomous AI attackers and the looming transformation of cybercrime.
“Cybercrime is undergoing a transformation,” said Marcin Kleczynski, founder and CEO of Malwarebytes.
“We're not just seeing a rise in the quantity of attacks, we're seeing entirely new forms of deception and automation that would have been unimaginable just a few years ago.”
AI has made it easy for cybercriminals to research vulnerabilities, compose phishing emails, write code, and create new forms of social engineering with cloned voices and faked likenesses, according to the report.
It lists some of the most notable trends:
- In January 2024, a video conference populated entirely by AI-generated deepfakes of senior executives tricked a finance worker at global engineering firm Arup into handing over $25 million to cybercriminals.
- In 2023, following the release of ChatGPT, researchers at SlashNext reported a massive 1,265% increase in malicious phishing messages.
- AI partly or entirely generated at least 2.3 million product reviews in 2024, according to research by The Transparency Company.
- AI email fraud losses are expected to hit $11.5 billion by 2027, according to the Deloitte Center for Financial Services.
- Propaganda is seeping into chatbots. In 2024, the “Pravda” disinformation network published 3.6 million articles, successfully seeding many popular generative AI tools with Kremlin propaganda.
- Financial institutions have seen an increase in the use of fraudulent, AI-generated identity documents, the US Treasury's FinCEN bureau has warned.
However, it seems that the worst is yet to come as cybercriminals become more adept at using AI agents.
“They will inevitably be used to scale up the number and speed of attacks that require a lot of human labor—including the most dangerous form of cyberattacks, big game ransomware,” the report reads.
Agentic AI will enable hackers to deploy “swarms of malicious agents and scale their attacks enormously,” with those agents operating 24/7.
Researchers have also demonstrated how AI agents can be used for offensive cybersecurity.
Last year, researchers created ReaperAI, a fully autonomous offensive agent that can run attack operations with minimal human oversight. Another AI agent, AutoAttacker, mimics the tactics of ransomware gangs and shows how occasional, labor-intensive attacks could become routine, high-speed operations.
Google’s Big Sleep was the first AI to independently discover a real-world zero-day vulnerability in a widely used software application.
Malwarebytes researchers believe that to counter the growing threat of AI-powered cybercrime, organizations must reduce their attack surface, monitor systems continuously, and respond to alerts immediately.