A fake robocall impersonating US President Joe Biden made the rounds in New Hampshire on Tuesday, urging Democrats not to vote in the state’s primary and raising fresh concerns about AI-amplified electoral misinformation.
In the recording, a voice that sounds exactly like Biden’s tells Democratic voters in New Hampshire to “save” their vote for the November general election and to stay at home.
“Voting this Tuesday only enables Republicans in their quest to elect Donald Trump again. Your vote makes a difference this November, not this Tuesday,” says the voice in the recording.
It’s unclear how many people received the deepfake robocall, but Nomorobo, an anti-robocall application, estimated that between 5,000 and 25,000 calls were made. The White House soon confirmed that the call was fake.
Exposed vulnerabilities
"This is deep fake disinformation designed to harm Joe Biden, suppress votes and damage our democracy. It is being referred to law enforcement so that they can determine who is responsible and bring them to justice," said Aaron Jacobs, a spokesperson for the grassroots effort to write-in Biden on the Democratic primary ballot.
At his own request, Biden was not on the New Hampshire Democratic primary ballot this year, but over 100 New Hampshire supporters had launched a campaign to encourage state Democrats to write-in Biden on primary day, which was still permitted.
New Hampshire’s Justice Department said in a statement that, although the voice in the robocall sounds like President Biden’s, the message “appeared to be” artificially generated.
The message also appears to have been spoofed to “show that it had been sent by the treasurer of a political committee that has been supporting the New Hampshire Democratic Presidential Primary write-in efforts for President Biden.”
On Tuesday, the state’s Justice Department advised voters to disregard the message and clarified: “Voting in the New Hampshire Presidential Primary Election does not preclude a voter from additionally voting in the November General Election.”
Sam Gregory, the program director at WITNESS, a project that trains activists to use video safely, ethically, and effectively, said the incident in New Hampshire was problematic – and symptomatic.
In a thread on X, Gregory said the AI-faked robocall exposed vulnerabilities around synthesized audio – it’s easily made and easily shared in messaging apps, calls, or robocalls, for example.
These fakes also contain fewer glitches as the technology improves, and audio lacks a reference point like a reverse image search. Meanwhile, watermarking and labeling have not yet been adopted widely enough to be effective.
What confuses the matter even more, Gregory said, is that claims that audio is AI-faked are also used to dismiss real, compromising leaked audio.
Overconfidence isn’t an option
Biden’s campaign said it was not involved in making the robocall, and Donald Trump’s campaign – the former US president won the Republican primary in New Hampshire – also denied involvement.
According to James Turgal, Optiv’s vice president of cyber risk, strategy, and board relations, the involvement of unfriendly nation-states therefore cannot be ruled out.
“Nation states have consistently used AI to imitate authoritative sources and people, making it easier to deceive specific individuals or the public by impersonating election officials, creating parallel personas or creating or falsifying official election documents,” said Turgal.
“Specifically, Microsoft analysts have warned that Chinese operatives have already used AI to generate images for influence operations meant to mimic US voters across the political spectrum and create controversy along racial, economic, and ideological lines.”
Turgal also warns that it would be unwise to simply rely on existing election software security tools, as these could be easily overcome.
“Researchers are using AI with the assumption that AI enhanced security tools will allow for new defensive capabilities. But make no mistake, generative AI poses a significant threat to election offices and election system workers,” said Turgal, a former Federal Bureau of Investigation official.
“Election offices should have policies in place to defend against social engineering attacks, and all staff must participate in social engineering and deep fake video training that includes information about all forms and attack vectors, including electronic (email, text and social media platforms) in-person and telephone-based attempts.”