Meta will require advertisers to disclose when digital tools such as AI were used to create or alter political and social issue ads, under a new policy that takes effect in the new year and applies globally on Facebook and Instagram, the company announced in a blog post.
An intense political calendar for 2024 includes presidential elections in the US and the European Parliamentary elections in the EU.
There are concerns that generative AI, including deepfake technology, could be exploited in both events to mislead voters and wage disinformation campaigns.
Meta said that it would require advertisers to disclose whenever a social issue, electoral, or political ad uses a photorealistic image, video, or realistic-sounding audio to depict real people in false scenarios – or shows realistic-looking individuals who do not exist.
Additionally, the company will ask advertisers to reveal if their ads portray fabricated events, alter footage of real events, or depict actual events without authentic images, videos, or audio recordings.
The policy “builds on Meta’s industry-leading transparency measures for political ads,” the company’s president of global affairs, Nick Clegg, said in a post on X.
“These advertisers are required to complete an authorization process and include a ‘Paid for by’ disclaimer on their ads, which are then stored in our public Ad Library for 7 years,” he said.
Advertisers running political and social issue ads will not have to disclose when content is digitally created or altered in an “inconsequential or immaterial” way, Meta said. This includes size adjustments, color corrections, and the cropping or sharpening of an image.
The new policy was announced on the heels of Meta’s decision to bar political campaigns and advertisers from using its new generative AI advertising products.
Meta is the second-biggest digital advertising platform in the world, behind only Alphabet’s Google, which announced the launch of similar generative AI ad tools last week and said it would block “political keywords” from being used as prompts.
In May, Senator Amy Klobuchar (D-MN) and Rep. Yvette Clarke (D-NY) introduced legislation to require a disclaimer on political ads that use images or video generated by AI.
“This is a step in the right direction, but we can’t just rely on voluntary commitments,” Klobuchar said in response to Meta’s new restrictions, adding that guardrails are needed “so AI-manipulated ads don’t upend our elections.”