Unmasking election meddling: your guide to spotting AI-generated content in US polls


Foreign states are escalating their efforts to influence the outcome of the US presidential election.

In this investigative article, I will examine three reports, one published by a US government agency and two by independent intelligence firms, detailing attempts by three nation-state actors (China, Russia, and Iran) to influence the US election.

What is striking about these attempts is that they leverage artificial intelligence (AI) technologies to reach and influence large numbers of US voters.


The US Office of the Director of National Intelligence (ODNI) published a report in mid-September 2024 discussing the efforts made by foreign nation-state actors to influence the US election using generated and manipulated media. The report highlighted the main methods these actors use to spread false content in an attempt to shape public perception.

Laundering information through prominent figures

Foreign states may first spread false information in niche communities, such as social media groups, WhatsApp group chats, and popular discussion forums. This content is designed to eventually reach well-known figures, such as media personalities and public affairs professionals, who knowingly or unknowingly amplify the message to a broader audience.


Advanced AI-powered disinformation techniques

Emerging AI and Machine Learning (ML) technologies have boosted foreign state actors' ability to generate and disseminate sophisticated disinformation. ML algorithms can now generate hyper-realistic content that can bypass traditional verification mechanisms.

These technologies allow malicious actors to craft narratives that appear to come from authentic sources, carry emotional resonance, and are precisely targeted at specific demographic segments.

Recent geopolitical events provide clear evidence of these advanced capabilities. For instance, in the context of the Russia-Ukraine conflict, Russian state-affiliated threat actors have used unprecedented AI-driven disinformation tactics. They leverage ML algorithms to generate thousands of synthetic social media profiles and news articles designed to manipulate international perception.


For example, these actors have circulated numerous AI-generated videos and images, including fake content portraying Ukrainian military personnel committing fabricated crimes; these videos were crafted to appear authentic and emotionally provocative in order to convince the public.

In the context of the US election, two examples illustrate the use of AI technology to manipulate public opinion:

The first is a fabricated video clip showing Texas Governor Greg Abbott. Suspected nation-state actors, allegedly backed by Russia, altered an interview originally broadcast on Fox News, using AI technology to modify the audio within the clip. The manipulated content falsely showed Governor Abbott saying that 'US President Joe Biden should learn to work with Russian President Vladimir Putin for national interests,' a statement absent from the original interview.

The second is the numerous AI-generated images distributed by Donald Trump supporters showing him with Black voters, intended to encourage African Americans to vote for the Republican party.

[Image: An AI-generated image of Trump | Source: BBC]

Publishing on inauthentic social media accounts or websites

Advances in computing technology have simplified the process of mimicking websites using low-cost software solutions. At the same time, actors can now easily create large numbers of social media accounts and publish diverse content through them.

In the context of US elections, nation-state actors have created fake accounts and entire websites that closely mimic legitimate news sources, designed to deceive audiences and manipulate information consumption.

A report by the intelligence firm Graphika found that the Chinese state-linked influence operation 'Spamouflage' poses as US voters to promote divisive narratives ahead of the 2024 election. The operation created a large number of accounts on various social media platforms that impersonate US voters. Those accounts were used to spread divisive narratives about sensitive social issues in the US, such as guns and immigration.

The company identified 15 Spamouflage accounts on X and one on TikTok posing as US citizens or US-based advocates for peace, human rights, US soldiers, and information integrity, all expressing frustration with American politics and the West.

[Image: Two Twitter accounts identified as part of the Spamouflage campaign | Source: ISD Global]

The CyberCX Intelligence report reveals a sophisticated state-sponsored information manipulation network on X (formerly Twitter) comprising over 5,000 inauthentic accounts controlled through an advanced AI large language model (LLM) system. The network demonstrates a refined approach to narrative manipulation, primarily focusing on 'laundering' politically divisive content by algorithmically rewriting and amplifying political discussions.

Although the current observations suggest a targeted approach potentially linked to election-related influence, the network's underlying AI-driven architecture could be easily adapted to serve different malicious information operations across multiple domains.

Releasing AI-generated leaks

To create controversy or attract media coverage, foreign actors now generate fabricated documents or "leaked" information using AI to produce authentic-looking files. For instance, AI-generated content has been used to forge documents in supposed leaks from official sources to stir political tensions. These fabricated "leaks" are precisely crafted to appear sensitive or confidential, appealing to curiosity while obscuring their synthetic origins.

The convergence of advanced AI technologies and strategic disinformation tactics presents a critical challenge to national governments and election integrity. Governments must apply sophisticated counter-intelligence strategies, while the public must be aware of the numerous techniques used to create fabricated content.

How can the public know if a particular piece of content is disinformation or is generated using AI?

As AI technologies advance, detecting content generated by AI tools becomes increasingly challenging, especially for public users who may lack the technical skills to distinguish fabricated content from actual material. Here are the main methods the public can use to identify whether a particular piece of content, website, or social media post is authentic or fake.

If you’re uncertain whether a particular image or video is AI-generated, you can run it through an automated AI detection service, several of which are available online.
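Most such services expose a simple HTTP upload API. Below is a minimal sketch of what such a query might look like; the endpoint URL, authentication header, and ai_probability response field are hypothetical placeholders, not the API of any real service, so substitute the documented details of whichever detection service you use.

```python
# Minimal sketch of querying an AI-image-detection service over HTTP.
# The endpoint, auth header, and response field are HYPOTHETICAL placeholders;
# substitute the documented details of whichever detection service you use.
import requests

API_URL = "https://api.example-detector.com/v1/detect"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"  # placeholder credential

def check_image(path: str) -> float:
    """Upload an image and return the service's reported 'AI-generated' probability."""
    with open(path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            timeout=30,
        )
    resp.raise_for_status()
    return resp.json()["ai_probability"]  # hypothetical response field

if __name__ == "__main__":
    print(f"Estimated probability of AI generation: {check_image('suspect.jpg'):.0%}")
```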


You can also run a reverse image search to see where the image appears online.
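A search can even be launched directly from a script. The sketch below opens TinEye's and Google Lens's search-by-URL pages in a browser; the query URL patterns are assumptions based on those services' public pages and may change at any time.

```python
# Minimal sketch for kicking off a reverse image search on a publicly
# hosted image. The query URL patterns below are assumptions based on
# TinEye's and Google Lens's public search-by-URL pages.
import urllib.parse
import webbrowser

def reverse_image_search(image_url: str) -> None:
    """Open reverse image search pages for a publicly hosted image."""
    encoded = urllib.parse.quote(image_url, safe="")
    search_pages = [
        f"https://tineye.com/search?url={encoded}",            # TinEye search-by-URL
        f"https://lens.google.com/uploadbyurl?url={encoded}",  # Google Lens by URL
    ]
    for page in search_pages:
        webbrowser.open(page)

if __name__ == "__main__":
    # Hypothetical image URL used purely for illustration.
    reverse_image_search("https://example.com/suspect-image.jpg")
```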

Source verification for websites

  • Check the About Us page. If it is missing, treat the website as suspicious.
  • Check the contact information. It could be on a dedicated webpage or at the bottom of the website's pages. Real organizations list phone numbers and email addresses (not addresses on free email services).
  • Check when the website was created using WHOIS records. Recently created websites (for example, registered only weeks before the election) should be treated with suspicion. Two websites for retrieving domain WHOIS information are GoDaddy's Whois Tool and WHOIS.com; a minimal lookup sketch follows this list.
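For readers comfortable with a terminal, the WHOIS lookup can also be scripted. Here is a minimal sketch, assuming the third-party python-whois package (installable with pip install python-whois); registrars return WHOIS fields inconsistently, so treat the result as a hint rather than proof.

```python
# Minimal domain-age check, assuming the third-party python-whois package
# (pip install python-whois). WHOIS field formats vary by registrar, so
# treat the result as a hint, not proof.
from datetime import datetime

import whois  # provided by python-whois

def domain_age_days(domain: str) -> int:
    """Return the number of days since the domain was first registered."""
    record = whois.whois(domain)
    created = record.creation_date
    if isinstance(created, list):  # some registrars return several dates
        created = min(created)
    return (datetime.now() - created).days

if __name__ == "__main__":
    age = domain_age_days("example.com")
    print(f"Domain registered {age} days ago")
    if age < 90:  # arbitrary illustrative threshold
        print("Recently registered - treat with extra caution.")
```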

Social media account verification

  • Check the account creation date. New accounts with few followers can be highly suspicious.
  • Review posting history. Recently created accounts with high posting activity may be suspicious, as may long-dormant accounts that suddenly show a spike in activity.
  • Look for inconsistencies between the account's join date and other claimed details, such as employment history or graduation year, that do not align with the account's creation date.
  • Examine the profile picture and use reverse image search engines to see where it appears online. You may also use AI image detection tools to check whether the profile picture was generated by AI.
  • A low number of followers combined with high engagement can be suspicious.
  • Check the account creation dates of the account's followers (see the scoring sketch after this list).
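To make these heuristics concrete, here is a hypothetical red-flag counter combining several of them. The thresholds and signals are illustrative assumptions, not a vetted detection model; real platforms rely on far richer behavioral data.

```python
# A hypothetical red-flag counter combining the heuristics above.
# Thresholds and signals are illustrative assumptions, not a vetted model.
from dataclasses import dataclass
from datetime import date

@dataclass
class AccountProfile:
    created: date
    followers: int
    posts_last_30_days: int
    avg_engagements_per_post: float

def suspicion_score(acct: AccountProfile, today: date) -> int:
    """Count how many simple red flags an account trips."""
    flags = 0
    age_days = (today - acct.created).days
    if age_days < 90 and acct.followers < 100:
        flags += 1  # brand-new account with almost no audience
    if age_days < 90 and acct.posts_last_30_days > 300:
        flags += 1  # new account posting at an unusually high rate
    if acct.followers < 100 and acct.avg_engagements_per_post > 50:
        flags += 1  # tiny following but outsized engagement
    return flags

if __name__ == "__main__":
    acct = AccountProfile(date(2024, 8, 1), followers=42,
                          posts_last_30_days=450, avg_engagements_per_post=120.0)
    print("Red flags:", suspicion_score(acct, date(2024, 10, 1)))
```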

Cross-reference news

  • Check whether major news outlets are reporting the same story. Examples of news outlets include CNN, BBC News, The New York Times, Reuters, The Washington Post, and Al Jazeera.