Ahead of the US elections, Threads is rolling out its own system to fact-check users’ posts, a move that has sparked discussion about potential biases in the fact-checkers themselves.
Instagram head Adam Mosseri announced that Threads will have its own fact-checking system, in which third-party partners rate and flag false content and users are warned before sharing it.
Meta has not yet disclosed which third-party organizations it has partnered with to bring fact-checking to Threads.
While a fact-checking tool for Threads has been discussed since last year, until now the platform has simply matched content against ratings issued on Meta’s other social media platforms, Facebook and Instagram.
"We recently rolled out the ability for our third-party fact-checking partners to review and rate false content on Threads," Mosseri wrote in a post on Threads.
"Previously, we matched near-identical false content on Threads based on what was fact-checked on Facebook and Instagram. Now fact-checkers can rate Threads content on their own.”
“Fact-check the fact-checkers”
The push to invest in fact-checking is most likely driven by the US elections and the need to curb the spread of misinformation.
However, automated fact-checking raises concerns about transparency and the censorship of dissenting voices, and Threads users remain skeptical.
One user claimed his post was flagged as false by a fact-checker whose stated reasoning was that “corporate greed is not the main driver of inflation.”
“Which fact-check partner decided corporate greed is not a main driver of inflation – Wall Street?” commented the user, Mark Buldak.
The fact-checker cited the American conservative magazine The Dispatch as its source. Inflation is a complex phenomenon influenced by many factors, and the company profit figures the user posted were inaccurate. Yet the fact-checker did not state that the numbers were wrong; its reasoning instead opens a broader discussion about what should serve as the basis for a fact-check.
Different ideological perspectives can complicate the process of determining which socioeconomic phenomena are considered facts and which are theories. Automated fact-checkers may struggle to understand this diversity of thought, potentially oversimplifying it to a single "truth."
“Who fact-checks the fact-checkers?” writes another user named Reginald Andreas.
“How can users be certain that the facts are actual facts and not just things that would appeal to what a particular group may BELIEVE to be a fact? If a fact is a fact but is offensive, will it still be flagged?”
“False content = facts I don't like or opinions I don't like. Maybe just say that,” commented a user named Sharyn.
Reducing political content, or censoring it?
Meta’s Transparency Center states that the focus of the fact-checking program is identifying and addressing viral misinformation.
According to the company, it works with independent third-party fact-checking organizations certified through the non-partisan International Fact-Checking Network (IFCN) or the European Fact-Checking Standards Network (EFCSN).
“We show additional information from third-party fact-checkers on the reduced content and display a clear label to warn people that the content has been rated as False, altered or partly false. We generally label but do not limit distribution for content that fact-checkers rate as Missing context,” writes Meta.
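Read as a policy, Meta’s description maps each rating to two possible actions: adding a warning label and reducing distribution. A minimal sketch of that mapping follows, with the rating strings and data structure chosen purely for illustration; this is not Meta’s actual code or API.

```python
# Hypothetical mapping of fact-check ratings to enforcement actions,
# reconstructed from Meta's public description; names are illustrative.
RATING_ACTIONS = {
    "False":           {"label": True, "reduce_distribution": True},
    "Altered":         {"label": True, "reduce_distribution": True},
    "Partly false":    {"label": True, "reduce_distribution": True},
    "Missing context": {"label": True, "reduce_distribution": False},
}

def enforcement_for(rating: str) -> dict:
    """Return the actions applied for a given rating; content with
    no rating is neither labeled nor demoted."""
    return RATING_ACTIONS.get(
        rating, {"label": False, "reduce_distribution": False}
    )

print(enforcement_for("Missing context"))
# -> {'label': True, 'reduce_distribution': False}
```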
Meta’s Threads has been working to reduce political content. Earlier this year, Meta announced a controversial algorithm update to reduce the reach of political content by excluding it from Reels and Explore on Instagram and not recommending it on Threads.
The update drew criticism, as the limits of political content are hard to define and the policy risks censoring voices. According to Human Rights Watch, Meta has been systematically censoring critical Palestinian voices, including content creators, journalists, and activists reporting from the ground in Gaza.
At the same time, Meta lacked classifiers for automatically identifying and removing hate speech in Hebrew until September 2023.
Similar limitations in identifying hate speech have been observed among Ethiopian diasporas since the 2020 outbreak of the Tigray War between the Ethiopian government and the Tigray People’s Liberation Front (TPLF), and in Myanmar, where military violence led to massacres of the Rohingya minority.
At the time, Meta stated that it was working to expand its capabilities to catch hate speech in a wider variety of languages.