Hackers fail at turning AI into a powerful weapon, but scammers are happy


Hackers are already using AI models to be more productive when researching, troubleshooting code, and creating and localizing content, Google Threat Intelligence Group (GTIG) warns. While their attempts are often unoriginal and unsuccessful, new models and agentic systems pop up every day.

Google's security researchers observed hackers attempting to abuse the tech giant's AI-powered assistant, Gemini. The prompt attacks didn't involve "any original or persistent" techniques, relying instead on "more basic measures and publicly available jailbreak prompts."

Cybercriminals do not seem to have developed any novel capabilities and are experimenting with AI assistants to find productivity gains.


Attackers use LLMs mostly in two ways: to accelerate the generation of code, phishing emails, and other content, or to instruct models to take malicious actions, such as finding and exfiltrating data.

North Korea-backed hackers used Gemini to draft cover letters and research freelance and full-time jobs at foreign companies. One North Korean group researched average salaries for specific jobs and generated proposals for job descriptions.

“They also used Gemini to research topics of strategic interest to the North Korean government, such as South Korean nuclear technology and cryptocurrency,” the researchers said in a report.

North Korean hackers appear pretty tech-savvy and are using other AI tools, like image generators for fake profiles or assistive writing tools for phishing lures.

The most sophisticated North Korean actors used Gemini to research potential infrastructure, free hosting providers, topics of strategic interest, and target organizations.

Iran-linked threat actors were the heaviest users of Gemini, turning to it for a wide range of purposes. They researched defense organizations and vulnerabilities, created content for malicious campaigns, crafted phishing materials, conducted reconnaissance, and generated other content with cybersecurity themes.

GTIG observed over 10 Iran-backed threat groups using Gemini. Another eight information operations groups linked to Iran generated article titles, SEO-optimized content, translations, manipulated or biased text, and other content.

Similarly, Chinese groups used Gemini for reconnaissance, scripting, and troubleshooting code but also focused on topics such as hacking into networks, lateral movement, privilege escalation, data exfiltration, and evading detection. China-linked influence operators used Gemini primarily for general research on a wide variety of topics.


“The most prolific IO actor we track, the pro-China group DRAGONBRIDGE, was responsible for approximately three-quarters of this activity,” GTIG researchers said.

Google observed only limited use of Gemini by Russian government-sponsored groups, which focused on coding tasks, such as converting publicly available malware samples into another coding language or adding new functions. Influence actors mostly focused on researching and creating content about the Russia-Ukraine war and on posting Kremlin-aligned views on Western policy.

“Government-backed attackers attempted to use Gemini for coding and scripting tasks, gathering information about potential targets, researching publicly known vulnerabilities, and enabling post-compromise activities, such as defense evasion in a target environment,” Google researchers summarized.


“Information operations actors attempted to use Gemini for research, content generation, translation, and localization, and to find ways to increase their reach.”

Attempts to use Gemini to enable abuse of Google products or to bypass Google's account verification methods were unsuccessful. Gemini returned safety responses when asked to assist with more elaborate or explicitly malicious tasks.

The dangers lie ahead. While current large language models do not enable breakthrough capabilities for malicious hackers, Google warns that the AI landscape is constantly changing with new AI models and agentic systems, and threat actors are likely to adopt new technologies.

Google believes that AI is poised to transform digital defense, too. Already, large language models are opening new ways to sift through complex telemetry, code securely, discover vulnerabilities, and streamline operations.
