Meta’s new team targeting disinformation and AI harms in EU elections


Meta has unveiled plans to activate a dedicated team to combat disinformation and harms generated by the use of artificial intelligence (AI) ahead of the upcoming European Parliament elections.

According to Marco Pancini, Meta’s head of EU affairs, the “EU-specific Elections Operations Center” will identify potential threats to the integrity of the vote and put mitigating measures in place in real time.

The team will bring together experts from Meta’s intelligence, data science, engineering, research, operations, content policy, and legal teams to tackle misinformation, influence operations, and risks related to the abuse of AI tools, Pancini added in a blog post.

“Ahead of the elections period, we will make it easier for all our fact-checking partners across the EU to find and rate content related to the elections because we recognize that speed is especially important during breaking news events,” Pancini also said.

Meta is planning to use keyword detection to group election-related content in one place, which will supposedly make it easier for fact-checkers to find.
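Meta has not published how its keyword detection works, but the idea of flagging election-related posts and pooling them into one review queue can be sketched in a few lines. Everything here – the keyword list, the post format, and the function names – is an illustrative assumption, not Meta’s actual system:

```python
# Illustrative sketch only: Meta has not disclosed its detection mechanism.
# ELECTION_KEYWORDS and the sample posts below are assumptions for demonstration.

ELECTION_KEYWORDS = {"election", "ballot", "vote", "european parliament"}

def is_election_related(text: str) -> bool:
    """Flag a post if it contains any election-related keyword (case-insensitive)."""
    lowered = text.lower()
    return any(kw in lowered for kw in ELECTION_KEYWORDS)

posts = [
    "Polls open tomorrow for the European Parliament election",
    "New pasta recipe you have to try",
    "How to register to vote before the deadline",
]

# Group flagged posts into a single queue for fact-checkers to review
review_queue = [p for p in posts if is_election_related(p)]
print(review_queue)
```

A production system would rely on far more than substring matching (multilingual models, classifiers, human triage), but simple keyword grouping captures the stated goal: putting election content in one place so fact-checkers can find it quickly.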

As for the misuse of AI tools, the company plans to add a feature that lets users disclose when they share AI-generated content, and it may apply penalties to those who fail to do so. Notably, “may” is a deliberate word choice on Meta’s part.

That’s because, Pancini writes, “If we determine that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label, so people have more information and context.”

In other words, even a clear case of deceitful AI-generated content might be labeled rather than removed from Meta’s platforms.

Meta has invested more than $20 billion into safety and security and quadrupled the size of its global team working in this area to around 40,000 people since 2016, Pancini said.

This includes 15,000 content reviewers who review content across Facebook, Instagram, and Threads in more than 70 languages – including all 24 official EU languages.

However, The Information reported in mid-February that Meta was actually reducing payments to news organizations that fact-check potential misinformation on WhatsApp, including around elections.

Moreover, in December 2023, Meta said in an updated blog post that it would allow Facebook, Instagram, and Threads users in the US to decide how much fact-checked content they see in each app.

Since it is practically impossible to fact-check every controversial post on these platforms, some misinformation is bound to slip through – especially into the feeds of users who have chosen to hide fact-checked content.

Finally, a Monmouth University poll from June 2023 showed that, despite a massive fact-checking effort, around 30% of Americans still believed President Joe Biden’s victory in the 2020 election resulted from fraud – a share unchanged from three years earlier.

