How cybercriminals weaponize artificial intelligence: AI malware pioneers


Leveraging AI in malware attacks is a relatively new phenomenon; it was not widely observed before the release of generative AI tools.


AI technology can be exploited maliciously in different ways, not just for malware development. For instance, threat actors have used ChatGPT and other generative AI tools to create fabricated content, including text, images, and videos, to launch social engineering attacks.

AI malware pioneers

Several notable threat actors have emerged as pioneers in exploiting AI capabilities for malicious purposes.

CyberAv3ngers: Backed by the Iranian Islamic Revolutionary Guard Corps (IRGC), this group has been at the forefront of using AI for malware development. It primarily targets industrial control systems (ICS) and programmable logic controllers (PLCs), focusing on critical infrastructure sectors, including water and wastewater systems, manufacturing, and energy. Its operations have notably targeted organizations in Israel, the United States, and Ireland. The group's technical capabilities have evolved to include AI-driven vulnerability research in critical infrastructure, automated debugging of malware code, and the development of evasive scripts designed specifically for PLC manipulation. It has demonstrated particular expertise in using AI to develop targeted exploits for specific ICS protocols.

SweetSpecter: This Chinese state-backed threat actor has developed advanced capabilities for misusing OpenAI services. Its operations encompass AI-powered reconnaissance, automated vulnerability research, and sophisticated malware development. What sets it apart is its successful integration of AI into existing malware strains, enabling its creations to slip past traditional security detection methods. The group has excelled at developing anomaly-detection evasion techniques, making its malware very difficult to detect through conventional security measures.

Forest Blizzard: Also known as APT28, this Russian state-sponsored group has distinguished itself through sophisticated AI implementation in cyber operations. Its operations center on creating convincing fake government documents using AI technology. The group has developed advanced phishing attacks by leveraging AI to analyze victims' communication patterns and automate credential-harvesting operations. Its use of AI for document forgery and social engineering campaigns illustrates yet another way cybercriminals can leverage AI to facilitate sophisticated cyberattacks.

Aside from advanced threat actors known to use AI in sophisticated attacks, a growing risk comes from non-technical hacking groups exploiting AI and ML technologies to create or enhance malware. This trend was observed earlier this year with the emergence of a ransomware group called FunkSec. Before examining its attack methods, however, it is essential to define what AI malware entails.

FunkSec is a new ransomware group that first appeared publicly in December 2024. Security researchers believe the group consists of inexperienced hackers, partly because it relies on AI technology to develop its ransomware.


AI malware explained

AI malware refers to malicious software that leverages AI technology at some point in its development or operation. This does not mean the malware must be developed entirely with AI.

For instance, AI or ML components can be added to existing malware strains to improve specific functions, such as evasion, encryption, or communication with command-and-control (C2) servers. A typical ransomware strain could incorporate AI to dynamically adjust its encryption routines or optimize communication with its C2 infrastructure; integrating AI into its core functionality in this way qualifies it as AI malware.

There are several ways in which AI can assist in creating or enhancing existing malware:

Generate code: AI systems like ChatGPT and GitHub Copilot can be misused to write malware code. This approach is often favored by less experienced hackers who lack the technical skills to develop malware independently. For example, AI could assist in generating scripts for keylogging or automated data exfiltration.

Identify security vulnerabilities: AI can identify weaknesses in a target's IT environment, allowing threat actors to exploit them later. For instance, AI-powered network scanners can analyze traffic to detect vulnerable ports, outdated software, or misconfigured services (a minimal port-check sketch follows this list).

Evade detection: AI enhances malware's ability to evade traditional security measures. For example, polymorphic malware can use AI to dynamically alter its codebase or behavior during execution, allowing it to bypass signature-based security solutions (a signature-variation sketch also appears after this list). Attackers have also experimented with adversarial AI to craft payloads that evade machine-learning-based antivirus engines.
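To make the scanning idea concrete, below is a minimal, harmless sketch of the raw signal such tooling starts from: a plain TCP connect check. The address and port list are hypothetical placeholders (the IP sits in a reserved documentation range), and any AI layer would sit on top of results like these, correlating open ports with known-vulnerable software.

```python
import socket

# Hypothetical lab target; only scan systems you are authorized to test.
target = "192.0.2.10"        # reserved documentation-range IP, a placeholder
ports = [22, 80, 443, 3389]  # common services worth checking

def is_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """TCP connect check: a completed handshake means the port accepts
    connections, the raw datapoint a scanner feeds into later analysis."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in ports:
    state = "open" if is_open(target, port) else "closed/filtered"
    print(f"{target}:{port} {state}")
```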
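And here is a minimal, harmless sketch of why even trivial code mutation defeats hash- and signature-based matching. The "payload" is just an innocuous print statement; the point is that semantically identical variants never share a signature.

```python
import hashlib
import random
import string

def signature(data: bytes) -> str:
    """SHA-256 digest, standing in for an antivirus file signature."""
    return hashlib.sha256(data).hexdigest()

# Harmless stand-in for a payload.
payload = b"print('hello world')\n"

def mutate(code: bytes) -> bytes:
    """Append a random no-op comment: the bytes change, the behavior doesn't."""
    junk = "".join(random.choices(string.ascii_letters, k=16))
    return code + f"# {junk}\n".encode()

print("original :", signature(payload))
for i in range(3):
    payload = mutate(payload)
    print(f"variant {i}:", signature(payload))
# Every variant runs identically, yet no two signatures match,
# so a database of known-bad hashes never fires twice.
```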


AI malware types

Different types of AI malware can be grouped based on how they leverage AI to attack targets.


Adaptive malware: This type of malware can modify its source code dynamically based on the target's IT environment, enabling it to evade detection by security solutions. For instance, the malware might connect to a generative AI tool to regenerate its core components on a weekly schedule. By doing so, it adapts its behavior and code signature to bypass the specific security measures deployed in the target environment.

Another example is malware that uses AI to analyze the target's network traffic patterns and then adjusts its communication methods to blend in with normal traffic, making it harder for intrusion detection systems to flag it as suspicious (sketched below).
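Below is a purely illustrative sketch of the blending idea, with no network code at all: instead of checking in on a fixed, easily spotted interval, each check-in delay is resampled from gaps observed in normal traffic. The observed_gaps values are invented placeholders.

```python
import random
import statistics

# Invented inter-request gaps (seconds) observed in the target's normal traffic.
observed_gaps = [2.1, 2.4, 30.0, 2.2, 31.5, 2.3, 29.8, 2.5]

def blended_delay(gaps: list[float]) -> float:
    """Resample an observed gap and add slight jitter, so check-in timing
    follows the network's own rhythm rather than a fixed beacon interval."""
    base = random.choice(gaps)
    jitter = random.uniform(-0.1, 0.1) * base
    return max(0.0, base + jitter)

# A fixed 10-second beacon stands out to an IDS; resampled delays do not.
delays = [round(blended_delay(observed_gaps), 1) for _ in range(5)]
print("next check-in delays:", delays)
print("mean observed gap   :", round(statistics.mean(observed_gaps), 1))
```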

Dynamic malware payloads: Here, the malware uses AI to generate a unique payload for each target device. The payload is the component of the malware that carries out the malicious activity. Malware using this technique can change its payload, or simply load additional malware, after executing on the target device.

For example, imagine a ransomware attack that compromises one device in a network of 500 computers. To spread to other devices, the ransomware connects to an AI solution to create a slightly modified payload for each new target.

The ransomware uses AI to analyze each device's software and hardware configuration. Based on this analysis, it generates a tailored payload that avoids triggering alarms, for example by altering file encryption methods or using different communication protocols on each infected device. This variation in payloads makes it extremely difficult for security solutions that rely on signature-based detection to identify and halt the attack.
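As a toy illustration of that tailoring step, consider the sketch below. The host names, profile fields, and option choices are all invented; the point is that every device receives its own payload configuration, and therefore its own signature.

```python
import hashlib
import json

# Invented per-host profiles, standing in for reconnaissance results.
profiles = {
    "ws-001": {"os": "windows", "edr": True, "open_port": 443},
    "srv-db": {"os": "linux", "edr": False, "open_port": 22},
}

def tailor(host: str, profile: dict) -> dict:
    """Pick per-host options so no two payloads look or behave alike."""
    return {
        # Use a transport that matches traffic the host already sees.
        "transport": "https" if profile["open_port"] == 443 else "ssh",
        # A per-host key means even identical data encrypts differently.
        "session_key": hashlib.sha256(host.encode()).hexdigest()[:16],
    }

for host, profile in profiles.items():
    config = tailor(host, profile)
    blob = json.dumps(config, sort_keys=True).encode()
    print(host, config["transport"], "sig:", hashlib.sha256(blob).hexdigest()[:12])
```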

Content obfuscation: In this type, the malware uses AI to apply various concealment techniques, such as encryption, encoding, polymorphism, or metamorphism, to hide its malicious intent. This makes it challenging for security solutions that rely on behavioral analysis and signature detection to identify and stop the malware.

For example, polymorphic malware might use AI to automatically change its code structure each time it infects a new device, so its signature appears unique for every infection and traditional signature-based detection tools are rendered useless. Another example is malware that uses AI to encrypt its payload differently for each target, making it very hard for security systems to recognize the malicious content without the exact decryption key.

AI can greatly enhance these concealment measures. For instance, AI-powered malware could analyze the target IT environment and dynamically choose the most effective obfuscation method; if the target runs a particular antimalware product, the malware might apply a tailored encoding technique that the product struggles to unpack (a simple key-varied encoding sketch follows).
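As a minimal, harmless sketch of key-varied encoding, the classic XOR scheme below makes the same bytes look completely different under every randomly drawn key; in an AI-assisted scenario, the model's role would be choosing or mutating the scheme per target. The payload is again just an innocuous print statement.

```python
import os

plaintext = b"print('hello world')\n"  # harmless stand-in payload

def xor_encode(data: bytes, key: bytes) -> bytes:
    """Repeating-key XOR: the same input looks entirely different under
    every key, which defeats static pattern matching on the encoded bytes."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

for _ in range(3):
    key = os.urandom(4)                # a fresh key per "infection"
    encoded = xor_encode(plaintext, key)
    print("key", key.hex(), "->", encoded.hex()[:24], "...")
    # XOR is symmetric, so applying the same key restores the plaintext.
    assert xor_encode(encoded, key) == plaintext
```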
