Meta has disrupted six new campaigns trying to sway public opinion through fake accounts and AI-generated content across the tech giant’s platforms.
On May 29th, Meta published its latest “Adversarial Threat Report,” detailing the coordinated manipulation efforts detected and removed across its platforms in Q1 2024.
Meta calls these threats Coordinated Inauthentic Behavior (CIB), which it defines as coordinated efforts to “manipulate public debate for a strategic goal,” with fake accounts serving as the central tool.
According to the report, Meta identified and disrupted six new campaigns originating from Bangladesh, China, Croatia, Iran, and Israel, as well as a CIB network targeting Moldova and Madagascar. Many of these campaigns were detected and removed early in their efforts to build an audience.
While many see AI as a major threat, Meta has not identified any AI-driven campaigns it was unable to disrupt.
“So far, we have not seen novel GenAI-driven tactics that would impede our ability to disrupt the adversarial networks behind them,” writes Meta in the report.
“We have not seen threat actors use photo-realistic AI-generated media of politicians as a broader trend at this time.”
Meta’s report notes instances of AI being used to generate photos and images, video news readers, and text.
Examples of AI usage include a deceptive network from China sharing AI-generated poster images for a fictitious pro-Sikh activist movement called Operation K.
An Israel-based CIB network posted likely AI-generated comments on the pages of media organizations and public figures. These comments included links to the operation’s websites and were often met with critical responses from genuine users, who called them propaganda.
“We found and removed many of these campaigns early, before they were able to build audiences among authentic communities,” says Meta.
“While we continue to monitor and assess the risks associated with evolving new technologies like AI, what we’ve seen so far shows that our industry’s existing defenses, including our focus on behavior, rather than content, in countering adversarial threat activity, already apply and appear effective.”
Russian disinformation campaign
Meta’s platforms have also seen major tactical changes from the Russian disinformation campaign named Doppelganger, which is primarily focused on weakening international support for Ukraine.
The campaign, described by Meta as a “smash-and-grab” effort, expends significant resources despite facing a high detection rate and daily asset losses.
That persistence is unsurprising for an influence operation reportedly directed by the Russian Presidential Administration during wartime.
Since 2022, Meta claims to have continuously monitored, detected, blocked, and exposed Doppelganger’s attempts. The campaign has largely stopped engaging in specific tactics on Meta’s platforms, including linking to spoofed websites impersonating news media or government agencies, commenting on others’ posts, creating fictitious brands like “Reliable Recent News,” and seeding links to drive off-platform traffic through ads, posts, or comments.
While these are significant on-platform shifts, Meta remains vigilant, as Doppelganger may evolve its tactics. Foreign threats such as Doppelganger are particularly dangerous ahead of the EU parliamentary elections.
However, Meta’s report highlights that the majority of EU-focused inauthentic behavior has been domestic in nature, with operators primarily targeting their own countries and local elections.