Microsoft: China uses AI to test US voters and sow division


Chinese cyber actors have been doubling down on familiar targets over the past year and are now using AI-generated or AI-enhanced content to achieve their goals, a new report from Microsoft says.

In its latest East Asia report called “Same targets, new playbooks: East Asia threat actors employ unique methods,” the Microsoft Threat Analysis Center (MTAC) says China has increased its use of AI-generated content to sow division in the US and possibly influence the outcome of the presidential election in its favor.

Many of the techniques are well-known. Threat actors affiliated with the Chinese government have built deceptive social media accounts that pose contentious questions about controversial US domestic issues.

Mostly unsuccessful operations

Microsoft, though, interprets this as a way to “better understand the key issues that divide US voters” – in effect, an intelligence-gathering operation ahead of the US presidential election in November.

What’s more, Chinese cyber actors have honed their techniques and keep experimenting with new media in their influence operations, MTAC says. AI-generated content, for instance, is now prevalent in their campaigns.

“The influence actors behind these campaigns have shown a willingness to both amplify AI-generated media that benefits their strategic narratives, as well as create their own video, memes, and audio content,” says the report (PDF).

“Such tactics have been used in campaigns stoking divisions within the United States and exacerbating rifts in the Asia-Pacific region – including Taiwan, Japan, and South Korea.”

The most active threat actor is tracked as Storm-1376, or Spamouflage. According to Microsoft, the group used AI-generated images and text translated into more than 30 languages in an attempt to influence public opinion.

The campaigns mostly ran on X, formerly known as Twitter, a platform that has been bleeding popularity and daily active users. Perhaps unsurprisingly, MTAC says the efforts have so far been unsuccessful in swaying opinion.

Still, Chinese actors have attempted to influence the debate about a range of topics, including “the train derailment in Kentucky in November 2023, the Maui wildfires in August 2023, the disposal of Japanese nuclear wastewater, drug use in the US as well as immigration policies and racial tensions in the country.”

Also targeting Taiwan and other countries in the Asia-Pacific region, such as Myanmar, Chinese groups have used AI-generated news anchors, AI-enhanced videos, AI-generated memes, and AI-generated audio clips.

More dangerous groups lurking

“Despite the chances of such content in affecting election results remaining low, China’s increasing experimentation in augmenting memes, videos, and audio will likely continue – and may prove more effective down the line,” says MTAC.

James Turgal, Optiv’s VP of cyber risk, strategy, and board relations, says that AI tools make cyberattacks bigger, faster, more covert and better able to overcome existing software security tools.

“Chinese operatives have already used AI to generate images for influence operations meant to mimic US voters across the political spectrum and create controversy along racial, economic and ideological lines,” said Turgal.

China is executing more serious hacking operations, too. Key American security agencies and their international partners have recently published a high-level whitepaper to warn businesses of the urgent risk posed by Volt Typhoon, a Chinese state-sponsored hacking group.

According to the US government agencies, Volt Typhoon successfully compromised thousands of internet-connected devices and is believed to be part of a larger effort to compromise Western critical infrastructure, including naval ports, internet service providers, and utilities.

The US is not an angel, of course. In late 2022, Meta’s security team formally confirmed the involvement of the US military in a pro-Western influence operation. Before the network was shut down, it was active in Afghanistan, Algeria, Iran, Iraq, Kazakhstan, Kyrgyzstan, Russia, Somalia, Tajikistan, Uzbekistan, and Yemen.