Meta’s policies on manipulated media were rebuked and called “confusing” by a company-funded oversight board on Monday after the firm allowed an altered video of US President Joe Biden to spread on Facebook.
The video appeared on Facebook in May 2023. It was edited to make it appear as though Biden was inappropriately touching his adult granddaughter’s chest, and it was accompanied by a caption describing the US president as a “sick pedophile.”
In fact, Biden was placing an “I Voted” sticker on his granddaughter Natalie Biden’s chest after she voted in the 2022 midterm elections. He then kissed her on the cheek.
The video spread on social media, but Meta has consistently said that the post did not violate its rules because they apply only to deepfakes – content created by artificial intelligence to impersonate an individual – that alter someone’s speech.
The Biden video did neither. As bizarre as it sounds, Meta said the video did not violate the existing policy because it showed Biden doing something he did not do, rather than saying something he did not say.
The Oversight Board, an independent collective of experts, academics, and lawyers who oversee thorny content decisions on Facebook and Instagram, has now upheld Meta’s decision to leave the video in place.
But the board, which took on the case last October after a Facebook user reported the video, still called on the company to clarify its “confusing” and “incoherent” policies.
“The Board is concerned about the Manipulated Media policy in its current form, finding it to be incoherent, lacking in persuasive justification and inappropriately focused on how content has been created, rather than on which specific harms it aims to prevent (for example, to electoral processes),” said the Oversight Board in a statement.
The independent board added that the policy’s application is too narrow and should be extended to cover audio as well as content that shows people allegedly doing things they actually didn’t do.
“As it stands, the policy makes little sense,” the Oversight Board’s co-chair, Michael McConnell, said.
“It bans altered videos that show people saying things they do not say but does not prohibit posts depicting an individual doing something they did not do. It only applies to video created through AI, but lets other fake content off the hook.”
Late in 2023, Meta barred political campaigns and advertisers from using its new generative AI advertising products. However, AI-generated content is already being used to spread electoral disinformation, lies, and propaganda.
The World Economic Forum’s “Global Risks Report 2024” said this month that generative AI products, including deepfakes, might play a significant role in disrupting election outcomes around the world this year. In November, US citizens will vote in the presidential election.