Fears that AI-enabled disinformation campaigns and deepfakes on social media would undermine the integrity of elections around the world this year did not materialize, Meta Global Affairs President Nick Clegg says.
As many as two billion people voted in elections across some of the world’s biggest democracies in 2024, including India, Indonesia, Mexico, countries of the European Union, and, of course, the United States.
Throughout the year, though, experts warned of the potential impact of generative AI on elections – including the risk of widespread deepfakes and AI-powered disinformation campaigns.
According to Clegg, however, generative AI did not play a significant role in disrupting campaigns this year, and the predicted risks largely failed to materialize.
“This was the biggest election year in history. Billions of people voted in some of the world’s largest democracies. That meant it was a big year for Meta. While predicted risks like AI didn’t materialize in a significant way, we had to evolve how we dealt with increasingly…”
— Nick Clegg (@nickclegg) December 3, 2024
“While there were instances of confirmed or suspected use of AI in this way, the volumes remained low and our existing policies and processes proved sufficient to reduce the risk around generative AI content,” said Clegg in a blog post.
He added that during the election periods, AI-generated content related to elections, politics, and social topics accounted for less than 1% of all fact-checked misinformation.
Still, in the month leading up to the US presidential election, Meta rejected a total of 590,000 requests to its Imagine AI tool to generate images of President-elect Donald Trump, Vice President-elect JD Vance, Vice President Kamala Harris, Governor Tim Walz, and President Joe Biden.
Clearly, interest in using deepfakes in elections remains real – and the prevalence of AI tools is growing. Perhaps that is why Clegg did not rule out generative AI as a risk in future elections.
Meta said it also closely monitored the use of generative AI by covert influence campaigns, which the company said were organized mostly by Russia and Iran. The firm found, however, that hostile actors made “only incremental productivity and content-generation gains” using generative AI.
Earlier this year, Meta began requiring users and campaigns to label posts and ads created with generative AI. The firm also introduced AudioSeal, a controversial watermarking tool that marks AI-generated speech.