Google posts new disclosure policy for digitally altered election ads


Google announced updated disclosure requirements on Monday for advertisers using ‘synthetic or digitally altered’ content in all online political ad campaigns.

In an effort to combat election disinformation, the updated ‘Political Content Policy’ covers all digitally manipulated images, audio, and video, effective July 1st, 2024.

“We believe that users should have information to make informed decisions when viewing election ads that contain synthetic content that has been digitally altered or generated,” Google stated.


Google describes ads requiring disclosure as those made with manipulated content that “inauthentically depicts real or realistic-looking people or events.”

This would include synthetic material that:

  • appears to show a person saying or doing something they didn’t say or do
  • alters footage of a real event
  • generates a realistic portrayal of an event to depict scenes that did not actually take place

Meta implemented a similar disclosure policy for AI-generated political content across its social media platforms in February.

[Image: Google AI election ad disclosure. Image by Google]

The updated policy explains that advertisers creating a campaign will be required to select a checkbox labeled “Altered or synthetic content” in the campaign’s settings section.

To streamline the process, Google said it will auto-generate in-ad disclosures for every political ad campaign check-marked for synthetic content that runs in in-stream ad formats on mobile phones, computers, and TVs, as well as in feed, Shorts, and web ad formats on mobile phones.

For all other formats, advertisers who select the ‘Altered or synthetic content’ checkbox will be required to provide their own “prominent disclosure.”

“The disclosure must be clear and conspicuous, and must be placed in a location where it is likely to be noticed by users,” Google said.


Advertisers that violate the new policy will be issued a warning at least seven days before Google suspends their account.

The tech giant provides several examples of acceptable wording advertisers can use in the disclosure:

  • This audio was computer generated
  • This image does not depict real events
  • This video content was synthetically generated

Curbing election misinformation

The rapid growth of generative AI, and its misuse to create misleading text, images, and video that could sabotage the election process, has caused concern among nations worldwide.

A recently released Microsoft Threat Intelligence Election Report found that both Russian and Chinese influence operations were already hard at work launching misinformation campaigns to sway American voters months before the November elections.

Chinese nation-state-backed cyber groups were seen “leveraging generative AI technologies to effectively create and enhance images, memes, and videos,” Microsoft said.

In January, the Microsoft-backed OpenAI announced new AI tools to help fight election disinformation and banned the use of its tech for political campaigning.

In May, the ChatGPT maker revealed it had disrupted five covert influence operations attempting to use OpenAI’s large language models for “deceptive activity” on the web to “manipulate public opinion or influence political outcomes.”

In contrast, social media platform X raised hackles among users after owner Elon Musk cut the platform’s election integrity staff last September, claiming ineptitude.


Ironically, an investigation this past April found that Google’s YouTube platform approved 100% of fake election ads placed by investigators to spread misinformation about upcoming elections in India.