No more unlabeled AI photos on Instagram, Threads, and Facebook


Meta says that in the coming months, users will see labels on content that may have been generated with AI.

On February 6th, Meta announced that it's working with industry partners on common technical standards for identifying AI content, including video and audio.

“As the difference between human and synthetic content gets blurred, people want to know where the boundary lies,” says Nick Clegg, Meta’s President of Global Affairs.

“It’s important that we help people know when photorealistic content they’re seeing has been created using AI.”

Images generated using Meta’s own AI image generator carry the label “Imagined with AI,” along with invisible watermarks and metadata embedded in the image files. Clegg says Meta wants to extend AI labeling to content created with tools from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock.

“We’ve been working with industry partners to align on common technical standards that signal when a piece of content has been created using AI. Being able to detect these signals will make it possible for us to label AI-generated images that users post to Facebook, Instagram and Threads,” says Clegg.
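
The article doesn’t name the standards Meta and its partners are aligning on, but embedded provenance signals of this kind typically live inside the image file itself. The sketch below is a purely illustrative Python snippet that scans a file’s raw bytes for a few assumed marker strings (the marker list, the file name, and the helper function are hypothetical, not Meta’s actual detection pipeline).

```python
# Hypothetical sketch: scan an image file's embedded metadata for
# AI-provenance markers. The marker strings below are illustrative
# assumptions; the article does not specify which standards Meta uses.
from pathlib import Path

# Assumed byte patterns associated with provenance/labeling schemes:
#  - "c2pa"                    : content-credentials manifest marker
#  - "trainedAlgorithmicMedia" : IPTC digital-source-type value for AI media
#  - "Imagined with AI"        : the label Meta attaches to its own images
PROVENANCE_MARKERS = [b"c2pa", b"trainedAlgorithmicMedia", b"Imagined with AI"]


def find_ai_markers(image_path: str) -> list[str]:
    """Return any known provenance markers found in the file's bytes."""
    data = Path(image_path).read_bytes()
    return [marker.decode() for marker in PROVENANCE_MARKERS if marker in data]


if __name__ == "__main__":
    hits = find_ai_markers("example.jpg")  # hypothetical file name
    if hits:
        print("Possible AI-generated image, markers found:", hits)
    else:
        print("No known provenance markers found (not conclusive).")
```

A byte-level scan like this is only a rough heuristic; real provenance checks parse the metadata containers properly and can also verify cryptographic signatures, which is part of why a shared industry standard matters.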

According to Clegg, while AI companies are starting to include these signals in their image generators, they haven’t yet added them to tools that generate audio and video at the same scale, so Meta can’t automatically label that content as AI-generated.

In the meantime, the company is adding a feature that lets people disclose when they share AI-generated video or audio so a label can be attached.

Where digitally manipulated images, video, or audio pose a significant risk of misleading the public, Meta will apply a more prominent label with additional information and context.

Clegg admits that it’s not yet possible to identify all AI-generated content, but says the company is pursuing the solutions that current technology allows.

“We’re working hard to develop classifiers that can help us to automatically detect AI-generated content, even if the content lacks invisible markers. At the same time, we’re looking for ways to make it more difficult to remove or alter invisible watermarks,” he said.
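
Meta hasn’t described how these classifiers work. For illustration only, the sketch below shows the general shape of such a detector, assuming a small binary image classifier in PyTorch; the architecture, data, and labels are placeholders, not Meta’s actual system.

```python
# Hypothetical sketch of the kind of classifier Clegg describes: a small
# CNN trained to separate "AI-generated" from "camera-captured" images.
# Everything here (model size, data, labels) is an illustrative placeholder.
import torch
from torch import nn

class AIImageDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)  # single logit: AI vs. not AI

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# One placeholder training step on random tensors standing in for real images.
model = AIImageDetector()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

images = torch.randn(8, 3, 224, 224)          # batch of 8 RGB images
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = AI-generated, 0 = not

optimizer.zero_grad()
logits = model(images)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")
```

The point of such a classifier, as the quote notes, is to catch AI-generated content even when the invisible watermarks or metadata have been stripped or were never added.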