Wild West elections warning as AI enters the race


Dirty tricks are already prevalent in election campaigns worldwide. Now imagine artificial intelligence (AI) joining forces with disinformation, lies, and propaganda. Good luck, democracy.

Nothing Is True and Everything Is Possible – that’s the title of a book written in 2015 by Peter Pomerantsev, a British journalist. He was talking about the surreal experience of life in Vladimir Putin's Russia.

In the two decades and more since Putin first came to power in 2000, the Kremlin has indeed created a political system that looks like a normal democracy – regular elections are held, multiple parties exist, and citizens can consume any kind of media.

The problem is that elections are rigged, and the parties are all under Putin’s control. The media? They do what their owners tell them, and the owners obey the presidential administration. Foreign social media is mostly banned, too.

In other words, Russia has masterminded the so-called “imitation democracy.” Play the concerned citizen all you like, Putin seems to say – we’ll do what we want. We’ll even arrange your opinion for you, and use it to support the truly anti-imperialist quest of invading Ukraine.

It might seem bizarre to some, but quite a lot of people in the West are more or less fine with what the Russian government is up to. But – if a tiny bit of “what-about-us-ism” is allowed – that’s maybe because Western democracies aren’t perfect either.

Take the United States. Obviously, the country is more democratic than Russia, but the major election campaigns have for years been financed by outside spending groups called super PACs (Political Action Committees).

Millions, if not billions, of dollars are thrown at gaining crucial influence once a candidate is elected. And if that means it’s special, vested interests that citizens end up voting for, the integrity of the democratic process is genuinely thrown into doubt.

Even more so when AI comes into play. And it’s coming fast. The next US presidential election is still more than a year away but the political season is already looking murkier than ever – thanks to advances in digital technology.

What’s fake and what’s real?

Already in May, when Biden announced he would seek another term in the White House, the Republican National Committee responded with an attack advert that invited Americans to “look into the country’s possible future if Joe Biden is re-elected in 2024.”

The 30-second clip was entirely AI-generated, showing Biden and Vice President Kamala Harris celebrating at an Election Day party, followed by a series of imagined reports about international and domestic crises that the ad suggests would follow a Biden victory.

In this specific case, it’s pretty clear the images aren’t real. For example, there isn’t a hot geopolitical crisis in Taiwan – yet. But it’s also obvious there will be instances where citizens simply won’t be able to distinguish between fake and real.

“We are going to face a Wild West of campaign claims and counter-claims, with limited ability to distinguish fake from real material and uncertainty regarding how these appeals will affect the election,” said Darrell West, a senior fellow in the Center for Technology Innovation within the Governance Studies program at the Brookings Institution, a think tank.

A more serious incident took place in February, before the primary in Chicago’s mayoral race. A video showing candidate Paul Vallas saying “in my day, no one would bat an eye” if a police officer killed more than a dozen people went viral – but, again, it was a digital fabrication.

Vallas never actually said that – generative AI did. He advanced to the run-off but lost to Brandon Johnson by around 26,000 votes. It’s impossible to prove, but the viral fake video may well have cost Vallas crucial support.

There was another deepfake showing Biden declaring a national draft to aid Ukraine’s war effort, and another video depicted Democratic Senator Elizabeth Warren saying that Republicans should be barred from voting in 2024.

Other countries holding elections aren’t immune either. Poland’s main opposition party, the Civic Platform, made headlines last week after publishing a political ad containing a deepfake of Prime Minister Mateusz Morawiecki’s voice.

The video itself doesn’t mention that the voice was generated by AI – a disclaimer was only added to the social media post later, even though labeling AI-generated material is clearly vital.

Turmoil is certainly possible. Marcel Kieltyka, an expert at the anti-disinformation website Demagog, warned: “The voice of a politician generated by AI in which that politician announces controversial decisions or views in the middle of a crisis-type situation such as a war or a natural disaster is a recipe for chaos.”

“Generative AI can be particularly challenging in the context of political campaigns. Not only can the technology rapidly produce targeted campaign emails, texts or videos, it could also be used to mislead voters, impersonate candidates, and compromise elections on an unprecedented scale and speed,” Rennie Westcott, intelligence analyst at Blackbird.AI, also told Cybernews.

“Jewish lasers from space, anyone?”

“It’s the first national campaign season in which widely accessible AI tools allow users to synthesize audio in anyone’s voice, and generate photo-realistic images of anybody doing nearly anything,” the Brennan Center for Justice at New York University (NYU) School of Law recently observed.

Not only that. Voters will soon consume information that is not just curated by AI but is produced by AI – for instance, social media bot accounts with near human-level conversational abilities will surface.

Some of the tools could be used by the candidates’ teams but an average person might also be interested in promoting their favorite – you don’t have to be a coding or video wizard to generate content.

Hostile nations like Russia or China will potentially try to influence the 2024 US presidential election, as Moscow infamously did in 2016. On September 7th, Microsoft researchers said they had found what they believe is a network of fake, Chinese-controlled social media accounts seeking to sway US voters using artificial intelligence.

But you don’t even have to work for a troll farm to wreak havoc among the opposition. In a way, everyone can now be a political content creator.

That’s not necessarily bad news – voter engagement needs to be boosted, especially in the US where, compared to, say, Western Europe, citizens don’t really vote en masse. The problems begin when one decides to try to sway others with a wave of false information.

AI will definitely help make that wave bigger. Deepfake images, audio, and video could prompt an uptick in viral moments around faux scandals or artificial glitches, further warping the nation’s civic conversation at election time, the Brennan Center says.

“Generative AI can develop messages aimed at those upset with immigration, the economy, abortion policy, critical race theory, transgender issues, or the Ukraine war,” said West of the Brookings Institution.

And if respectable media organizations and academic institutions refuse to lend their data for AI training, current models will mostly learn from the abundance of past election disinformation about, say, the security of voting machines and mail voting – not to mention the ad hominem falsehoods related to various candidates.

“Sadly, the majority of Americans now get their news from social media posts where there is little concern for the truth and new policies at websites such as X (formerly Twitter) have made essentially everything fair game,” John Gunn, the chief executive of Token, a cybersecurity company, told Cybernews.

“Jewish lasers from space, anyone? How about a cabal of cannibal pedophiles running a deep state? This is enabling a dangerous manipulation of the populace through the dissemination of misinformation. The capabilities of AI will amplify and exacerbate this destructive trend,” he said.

According to Juliette Powell, who teaches media, technology, and ethics at NYU and is co-author of a new book The AI Dilemma: 7 Principles for Responsible Technology, images of public events are also at risk.

Deepfakes are becoming a big problem in politics. Image by Shutterstock.

“Moving forward, we may only trust images or photographs of events taken from two or three angles because of the problems of manipulating them by AI,” Powell, who thinks developing digital literacy is key, told Cybernews.

This atmosphere of mistrust created by generative AI could even lead to incidents where a candidate, attacked by her opponent over an alleged digital forgery, has to dedicate precious time and resources to disproving the claims.

Let’s not fret in advance

It’s not all doom and gloom, of course – it’s only the beginning, and therefore it’s still possible to correct the course. It’s easy to argue, for instance, that AI can and should be integrated into election campaigns if only because the technology makes everything so much cheaper and faster.

Imagine an unexpected scandal hitting your main rival, opening up a perfect opportunity to hit her with an attack ad. Just a few years ago, producing one would have taken at least a day, but now, with the help of generative AI, a simple text prompt can do it with a fraction of the time, cost, and personnel. That’s extremely handy for candidates with shallow pockets.

“In the coming year, response times may drop to minutes, not hours or days. AI can scan the internet, think about strategy, and come up with a hard-hitting appeal. That could be a speech, press release, picture, joke, or video touting the benefits of one candidate over another,” said West.

Besides, AI enables very precise audience targeting, which is crucial in political campaigns, especially in the US where swing voters are key.

Candidates don’t want to waste money on those who already support or oppose their campaign, and AI can help them with the fine-tuning effort. Campaign teams then save cash on exhausting canvassing missions.

In short, AI in politics is coming whether we like it or not, and banning it would be impractical given its advantages. However, simply labeling AI-generated content as such is not enough, says Josh Amishav, founder and chief executive of Breachsense, a data monitoring company.

“Integrating AI in election campaigns is a complex subject that requires more than just labeling AI-generated content, although that's a good start for transparency. Special attention must be given to algorithmic biases that could unfairly target or exclude certain groups,” said Amishav.

“Human oversight is required to align AI outputs with ethical norms and campaign promises. Additionally, AI-generated content should be fact-checked to prevent misinformation. Responsible use guided by regulations can help mitigate the risks.”

Movement is already visible. In August, the US Federal Election Commission decided to advance a petition by Public Citizen, an advocacy group, that called for a ban on political campaigns distributing fake audio, video, and images of their opponents.

Powell of NYU thinks the European Union has already taken the lead on the subject with its Digital Services Act. The legislation, which went into effect on August 25th, says that tech companies doing business in Europe – including Alphabet, Meta, X, Microsoft, and ByteDance – must now identify images, video, and audio that were generated by AI.

Cole South, founder of PromptFolder, an app that helps users save, share, and discover AI prompts for tools like ChatGPT and Midjourney, says it is important not to exaggerate the impact of deepfakes and manipulative chatbots in advance.

“By implementing robust detection mechanisms, investing in technological advancements, and educating the public, we can counteract their influence and uphold the integrity of our democratic processes,” said South.

