Zero-click malware: the emergence of AI worms


We investigate how AI worms operate without user interaction and could spread zero-click malware.

A few weeks after Microsoft admitted that nation-state actors were using its AI and the UN warned that North Korea earned $3B from 58 cyberattacks to fuel its nuclear program, it was revealed that an AI worm had been engineered to infiltrate generative AI ecosystems.

Researchers recently shared with Wired how they developed generative AI worms that could autonomously spread between AI systems. The AI worm, somewhat aptly named Morris II, after the first-ever recorded computer worm, can seamlessly target AI-powered email assistants without the user's knowledge.

ADVERTISEMENT

Researchers also showed how the worm could autonomously trigger the AI to release personal data, send spam emails, and replicate itself across the digital ecosystem through crafted prompts hidden inside legitimate communications. Welcome to the convergence of AI and cyber attacks. But what are AI worms, and how do they work?

The evolution of malware: introducing AI worms

Traditional malware requires interaction with unsuspecting users. Typically, this involves tricking their target into clicking a malicious link or downloading an infected file. However, AI worms exploit the functionalities of AI models to propagate themselves without any direct human intervention.

What makes AI worms deadly is that they can autonomously navigate and infiltrate systems without needing users to do anything. The operational framework of AI worms is ingeniously simple yet profoundly effective. These worms can manipulate AI systems into unwittingly executing malicious actions by embedding adversarial self-replicating prompts within AI-generated content. These actions range from extracting sensitive information to disseminating the worm across a network, amplifying the potential for damage.

Zero-click worms in AI: unveiling the hidden threats within genAI

In this pivotal study by Stav Cohen from the Israel Institute of Technology, Ron Bitton from Intuit, and Ben Nassi at Cornell Tech, the researchers revealed the dangers and capabilities of zero-click worms. These revelations illuminated the significant vulnerabilities within the genAI ecosystem.

The code, which can be found on GitHub, serves as a critical alert to the potential misuse of AI technologies, emphasizing the urgent need for enhanced security architectures to protect against such sophisticated cyber threats.

ADVERTISEMENT

The implications for cybersecurity

The emergence of AI worms introduces complex cybersecurity challenges. Security teams must overcome burnout and wake up to the fact that traditional defenses may not suffice against future threats, which can navigate AI protocols to disguise their malicious intent. To significantly mitigate the risk of unauthorized activities, a new security protocol would need to be established that requires human approval for every action initiated by an AI agent. This layer of human oversight could serve as a critical checkpoint to prevent the autonomous spread of malware.

Vigilance is vital for users in navigating the expanding landscape of generative AI applications. Adopting a cautious approach to selecting and downloading AI tools is akin to the practices used in app selection. Users are advised to source their generative AI tools from reputable platforms, such as OpenAI's GPT Store, ensuring the applications undergo thorough vetting. This caution extends to being wary of using GPTs that rely on APIs, as the lack of transparency regarding how third-party services handle data can pose additional risks.

Similarly, a mindful approach to interacting with prompts – opting to manually type them rather than copy-pasting – can prevent the accidental execution of hidden malicious code.

Beyond individual and organizational practices, collective responsibility of the AI development community is crucial in fortifying the digital ecosystem against AI worms. Awareness and understanding of the potential for exploitation within generative AI systems must drive the adoption of advanced security measures and proactive monitoring for suspicious patterns.

As generative AI technologies continue to evolve and permeate various aspects of our lives, fostering a culture of security-first development and deploying comprehensive defenses against emerging threats like AI worms are essential to ensure these powerful tools' safe and responsible use.

The cloud under siege: tackling the emerging threat of AI worms

By manipulating chatbots to spread the AI worms using retrieval-augmented generation (RAG) systems, the research exposes how such malware can silently proliferate across platforms, endangering user privacy and security. A particularly stealthy variant embedded within images in emails complicates detection and leverages AI assistants to disseminate the malware further.

Facing the emerging threat of AI worms, a unified effort among AI developers, cybersecurity experts, and regulatory bodies is crucial. Open communication and shared knowledge are vital to identifying vulnerabilities and developing robust security measures, ensuring AI technologies bolster innovation safely.

As more companies adopt cloud technology, they could also be unwittingly opening the door to new avenues of attack. Samani said the cloud will continue to be a cybersecurity battleground and raised concerns that commercial cloud service providers (CSPs) will be targeted.

ADVERTISEMENT

Samani also highlighted a shift in tactics among cybercriminals, who are moving away from traditional command-and-control servers to use commercial Cloud Service Providers (CSPs) to disguise their malicious operations. This strategic move allows attackers to exploit the cloud's inherent anonymity and the trust typically accorded to legitimate services, effectively playing a modern version of hide-and-seek by melding their nefarious activities with legitimate cloud traffic.

Addressing this evolving threat demands adopting more inventive strategies, including integrating AI and sophisticated automation technologies, coupled with increased alertness in cloud environments.

Preparing for cybersecurity's next big challenge

AI promises to revolutionize countless sectors, yet it also brings sophisticated challenges. A proactive and united approach to cybersecurity is critical in this evolving landscape. We're currently in a crucial preparation period, with researchers projecting the appearance of AI worms "in the wild" in the months ahead.

The recent controlled environment demonstrations show the potential for significant disruption and underline the urgency of reinforcing our defenses. Despite its potential, the burgeoning field of generative AI introduces notable security risks, especially as these systems gain autonomy and integration. Cybersecurity professionals unanimously agree that the time for action is now.

Only by prioritizing the security of AI-powered applications can we begin to preempt the inevitable threats AI worms will pose tomorrow. Ultimately, the decisions and actions we take now will determine the resilience of our digital future.