YouTube tells creators to disclose altered or synthetic content


YouTube has introduced a new tool in its Creator Studio that asks creators to disclose whether realistic content has been created or altered using artificial intelligence (AI).

The company announced the requirement in a recent blog post.

An earlier blog post from November revealed that these disclosures will appear as labels, either in the expanded description box or on the video player itself.


YouTube states that it isn’t “requiring creators to disclose content that is clearly unrealistic, animated, includes special effects, or has used generative AI for production assistance.”

Instead, the label is intended to “strengthen transparency” between content creators and their audiences.

YouTube provided examples of content that creators will need to disclose, including using a person's likeness, altering footage of real events or places, and generating realistic scenes.

There are also cases where no disclosure is needed, such as clearly unrealistic content, color adjustments or lighting filters, special effects, beauty filters, and other visual enhancements.

Creators also won't need to reveal whether they used AI in the production process, for example to generate scripts, content ideas, or automatic captions.

Most of the labels will appear in the video's expanded description. However, videos centered on sensitive subjects such as health, news, elections, or finance will carry a more prominent label on the video itself.

The company says the labels will roll out in the coming weeks, appearing first in the YouTube app on phones and then on desktops and other devices.


This development comes at a time when experts and governments are concerned about the rising use of AI and its potential to mislead.

Artificial intelligence is a particularly sticky subject in the world of politics, especially in relation to this year’s presidential election in the US.

A fake robocall impersonating President Joe Biden recently made waves in New Hampshire, with ‘his’ voice heard urging Democrats not to vote in the state's primary, once again raising concerns about AI-amplified electoral misinformation.

In response to concerns, 20 tech companies and counting – including Google, Meta Platforms, Microsoft, and OpenAI – have signed on to a new ‘tech accord’ aimed at preventing the distribution of deceptive AI content during the 2024 global election cycle.