AI-generated content to test democracy as 4 billion cast votes


Davos is debating whether democracy can survive the tech threat as some 4 billion voters head to the polls in pivotal elections this year. Nationalism is on the rise, and AI-enabled tools stand to turbocharge mis- and disinformation campaigns.

Ravi Agrawal, Editor-in-Chief at Foreign Policy magazine, calculates that over 4 billion people around the world will vote in major democratic elections in 2024. India, the US, Bangladesh, Pakistan, and Indonesia all head to the polls; among the world's six most populous democracies, only Nigeria is missing from this year's list.

“This is the year that more people than in any other year in the history of the world will head to elections,” Agrawal said, opening a discussion at the World Economic Forum in Davos. “Now, elections are usually about hope, and there's a lot of hope involved, with 4 billion people heading to the polls. But there's also a lot of fear this year.”

The list of threats grows

There’s cause for concern that AI tools will be layered into democratic processes, Alexandra Reeve Givens, Chief Executive Officer at the Center for Democracy and Technology, warned.

“We already live in a fragmented information ecosystem where there are echo chambers, where there are many different sources of information hitting your average voter, your average citizen, at any given moment,” Reeve Givens said.

Mis- and disinformation, the latter deliberate by definition, already swirl around the political environment and the state of the world, including false claims about candidates. Fake audio and video are on the rise.

“Former President Trump himself has been the victim of fake images of him on the plane with Jeffrey Epstein,” she noted.

In previous elections, voters were already targeted by robocalls and automated text messages carrying incorrect information about their polling location or whether their polling place was open. These manipulated messages were designed to influence voters' behavior, Reeve Givens argued.

Generative AI, combined with leaked personal data, will make it easier than ever to tailor and personalize messages to individual voters. Add underpaid and overworked election officials, who are themselves targeted with phishing and doxing schemes, and the cocktail could prove disastrous.

“That's a parade of horribles. There are hopeful things about how tech helps connect the world and get out the message. But these are threats we have to be really conscious of as we go into this year,” Reeve Givens said.

Check the origin of information like you do with food

André Kudelski, Chief Executive Officer of Kudelski Group, warned that “bad guys” can come from anywhere, from any jurisdiction.

“Regulation is something that can be pretty useful and efficient if the perceived threats come from the same territory as those that you want to protect. And if you have this asymmetry, it's important to come up with some technologies that allow you to fight against this asymmetry,” he observed.

The missing puzzle piece, in his view, is technology to trace and identify a piece of content's origin, whether it's fake or not. Even then, some people may not care whether the content is actually true, but it remains important to enable users to form their own opinions.

“It's like for food. To be able to know what the components are in the food that you are eating. And for the element of video, that is something that you can achieve through a combination of watermarking, some elements of blockchain. So, fundamentally, give more traceability here,” Kudelski said.
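Kudelski didn't describe an implementation, but a minimal sketch of such traceability, assuming a publisher signs a cryptographic hash of a file at publication time, might look like the Python below. The `sign_media` and `verify_media` helpers, the Ed25519 key choice, and the file name are all illustrative assumptions, not a description of Kudelski Group's or anyone else's actual product.

```python
# Minimal content-provenance sketch (illustrative, not any vendor's product):
# the publisher signs a SHA-256 digest of the media file, and anyone holding
# the public key can later verify both the file's integrity and its origin.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def sign_media(path: str, key: ed25519.Ed25519PrivateKey) -> bytes:
    """Sign the SHA-256 digest of the file at `path`."""
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    return key.sign(digest)


def verify_media(path: str, signature: bytes,
                 pub: ed25519.Ed25519PublicKey) -> bool:
    """Return True if the file still matches the publisher's signature."""
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    try:
        pub.verify(signature, digest)
        return True
    except InvalidSignature:  # file was altered, or signed by someone else
        return False


# Stand-in "video" file so the example runs end to end.
with open("report.mp4", "wb") as f:
    f.write(b"fake video payload")

key = ed25519.Ed25519PrivateKey.generate()   # the newsroom's signing key
sig = sign_media("report.mp4", key)
print(verify_media("report.mp4", sig, key.public_key()))  # True: untouched
```

Real provenance efforts, such as the C2PA standard, take the same verification principle further by embedding signed metadata in the file itself, and some pair it with watermarking intended to survive re-encoding.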

Regulation, a double-edged sword

Jan Lipavský, Minister of Foreign Affairs of the Czech Republic, believes that global solutions are needed, since we already communicate through global internet platforms, and that governments must agree worldwide on guiding principles.

“We should be more thinking about the right not to be manipulated,” Lipavský said. “I need to know if the photo or video of something happening is true or if it was created artificially.”

Lipavský thinks it would be a huge mistake for governments to accept that something, AI included, is impossible to regulate and control.

“So some kind of regulation needs to be developed to control those technologies, so that governments can be sure it is not going against their interests,” said Lipavský.

He compared AI regulation to weapons controls, where governments selling arms must ensure that sales do not go against their interests.

Lipavský warned that Russia’s information war supports both left- and right-wing extremists purely for the sake of splitting societies apart.

“We need to be looking for solutions, how the freedom of speech, free journalism needs to be supported. But in the same way, it should not endanger our democratic societies. So, we need more resilient societies. We need companies to understand that corporate social responsibility doesn't only mean doing something nice in a local municipality, but that their tools are not misused for the sake of this. Also, there needs to be some kind of accountability,” Lipavský noted.

Cloudflare’s boss sees AI moderating AI

Matthew Prince, Co-Founder and Chief Executive Officer at Cloudflare, believes that AI may be used for good. AI systems are already finding new threats and vulnerabilities “that no human has ever identified before.” He warns that regulating AI could stifle innovation.

“We sit in front of somewhere between 20 and 25% of the web, and the theory of the company has always been, if we could see enough of what was going on, we could use machine learning and artificial intelligence systems in order to be able to predict new threats before our clients were attacked by them,” Prince said.

Prince noted that AI systems are non-deterministic, and the exact same input can result in different outcomes over time. While you can control the inputs, it is very hard for AI companies to guarantee that “you will never have this output.”
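As a toy illustration of Prince's point (the vocabulary and probabilities below are invented, not taken from any real model): generative systems typically sample the next token from a probability distribution rather than always picking the single most likely one, so the exact same input can produce different outputs on different runs.

```python
# Why identical inputs can yield different outputs: generative models
# *sample* from a next-token distribution instead of always taking the
# single most likely token. Vocabulary and probabilities are made up.
import random

next_token_probs = {"safe": 0.50, "risky": 0.30, "refuse": 0.20}


def sample_token(probs: dict[str, float]) -> str:
    """Draw one token according to its probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]


# The "exact same input" (the same distribution), run five times:
print([sample_token(next_token_probs) for _ in range(5)])
# e.g. ['safe', 'risky', 'safe', 'refuse', 'safe'] -- results vary run to
# run, which is why vendors can't guarantee an output will never appear.
```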

“We need to look at how these systems are being regulated. But we have to be cautious about making sure that we don't shut down innovation as we do that,” Prince said. “We've seen example after example of well-designed, well-restricted AI systems that people have been able to trick.”

Reeve Givens believes that big tech has a duty to help surface the trusted sources of information and set usage and content policies to stop people from using generated content for mass political targeting campaigns.

“I think there's a conversation to be had around the generative AI companies, what their products are able to generate, whether they're automatically labeling,” she said.

She worried that, in this heightened threat environment, a number of social media companies have been scaling back their investments in trust and safety and cutting staff.

However, Reeve Givens also cautioned that the balancing act calls for great care.

“Now, do you want the CEO of a tech company deciding if information should be upranked or the government minister deciding if the information should be upranked? Neither of those solutions is ideal, right?” she said. “We don’t live in a perfect world, and there are plenty of governments around the world right now that are already putting extreme pressure on technology companies, where the words of the opposition are illegal misinformation that can no longer be shown.”

Kudelski agreed: “If we just do things too much by regulation, we cannot even really be sure if something is right or wrong because the perspective may be biased.”

He believes in educating people to understand that there may not be a single reality but many different views, and in letting them form their own opinions about what is right or wrong.

Greater role of media

Prince of Cloudflare added that the media may now play a growing role in using emerging technologies to check what is real and what is not.

“If I were running a media company today, I would be thinking about how we, using the role that we have traditionally played as reporters of what is going on, as the truth-tellers in society at some level, can we be working with or developing ourselves the technology to be able to say: ‘this is something that someone actually said, this is something that was actually generated,’ and I'm going to help you distinguish between one and another,” Prince said.
