By Raphael Tsavkko Garcia
Twitter has decided to step up its game in the fight against misinformation with Birdwatch, but critics point to a lack of information and transparency, and remind the platform that other promising anti-misinformation tools are already available.
In the midst of growing debate about the influence of fake news on public discourse and electoral processes, Twitter is planning to launch a feature called Birdwatch, aimed at expanding the ability to report false content, which would apparently create a crowdsourced database of misinformation reports.
Despite Twitter’s good intentions, initiatives aimed at tackling misinformation are not new, and Brazil has several examples that could not only serve as a model but also indicate a path for the company to follow.
“I’m always in favour of platforms launching tools that enable users to participate more proactively in monitoring things like misinformation,” said Jillian C. York, Director for International Freedom of Expression at the Electronic Frontier Foundation. However, Twitter has not yet made clear “who the moderators are to be able to flag tweets and vote on whether they’re misleading.”
“Birdwatch helps in the fight against misinformation, but is a space to be disputed. Like other tools that collectively build the content of platforms, for example Wikipedia, whenever they can, bad actors will dispute content and world views. We will not escape the need to create a niche pedagogical council that can be the authoritative source and responsible for disseminating the correct information,” explained Yasodara Cordova, MPA/Edward S. Mason Fellow at Harvard University’s Kennedy School.
Experts are also concerned about the focus given by platforms such as Twitter and Facebook to initiatives and measures aimed exclusively at the USA.
“One of my concerns is that a lot of the calls for content moderation are coming in a US context and platforms may apply rules to the US election that will not be applied elsewhere because of their own myopic views,” York pointed out.
What is Twitter’s game?
So far, Birdwatch seems to be simply a moderation tool in which users flag tweets and vote on whether they are trustworthy – and there is indeed a lot to be said about that. Who will be able to vote, what weight each vote will carry, and whether flagged content will be deleted or merely have its reach limited are among the questions yet to be answered.
“I can do annotations on top of tweets marking content as fake news, as inconsistent or anything like it [but] my fear is that after someone does this, the tweet can simply be deleted and disappears from Twitter and we won’t have access to it anymore,” criticises Lucas Lago, researcher and creator of the 7C0 Project.
“Twitter is very limited. In 280 characters, you don’t have much space to explain yourself, so misinformation also happens because of that lack of space – stories get told badly. Suddenly you don’t bring important data to your tweet because there was no space, and the Birdwatch flag option would give you this opportunity to contextualize and tell a better story,” said David Nemer, Assistant Professor of Media Studies at the University of Virginia.
“The point here, again, is that we don’t know yet who will be able to contextualize these tweets, and that’s the big question,” Nemer added.
There is concern about who will have the power to add context to tweets: it is not yet clear who the moderators will be, or who will be able to flag tweets and vote on whether they are misleading.
“But I like this as a concept. I just don’t know who it applies to, and I don’t know if it’s meant to apply purely to paid content moderators, or whether it’s something that will be akin to the concept of super reporters or super flaggers, or whether this will actually be a democratic, egalitarian tool that enables Twitter users to participate in tackling misinformation,” said York.
Lack of transparency
But one of the great problems of Big Tech companies persists.
“I have my doubts about how Birdwatch will work, because anyone can flag a tweet as fake and it’s not clear who will be able to annotate such tweets and add more context to it, if everyone [can do it] I think it loses its meaning as people who created some fake content will be able to create the context to support even more fake news,” said Nemer.
Recalling the recent case where Twitter prevented the circulation of a NY Post story about the alleged involvement of Democratic candidate Joe Biden in his son’s business, Nemer wonders if the same rule would have applied if it had been the NY Times.
“I thought it was wise of Twitter to limit access to the news without verification, but what is strange is that this process has no transparency and both Twitter and Facebook are terrible at it,” he said.
Brazil’s a pioneer: A few examples of projects Twitter should pay attention to
With precisely this trend in mind, initiatives have emerged in Brazil that are in some ways complementary to Birdwatch – although substantially different – trying to improve public debate with quality information or to prevent the creation of alternative narratives.
Citizen and academic initiatives have been emerging for quite some time and a few are worth mentioning.
The 7C0 project – named after the hexadecimal number that equals 1984 in decimal, a reference to George Orwell – is an automated Twitter account that surfaces the deleted tweets of political actors as a way to keep tabs on politicians. Its main goal is to maintain a database of tweets deleted by politicians, preventing them from creating alternative narratives – and freeing the public from depending on, for example, screenshots, which can be easily manipulated.
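The arithmetic behind the project’s name is easy to check: converting the hexadecimal value 7C0 to decimal gives 1984, as a couple of lines of Python confirm.

```python
# The 7C0 project's name, read as a hexadecimal number, equals 1984 in decimal.
hex_name = "7C0"
decimal_value = int(hex_name, 16)  # 7*16**2 + 12*16**1 + 0*16**0 = 1792 + 192 + 0
print(decimal_value)  # 1984
```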
“All people are equal before the law, but the law also demands transparency of the acts of governments and politicians, so when a politician comes to Twitter, they’re in a way making a political speech, they’re participating and making an extension of their work on the internet and therefore in my conception this couldn’t be deleted,” said Lago.
The Radar, created by the fact-checking agency Aos Fatos, monitors how low-quality content and fake news spread on social networks and produces reports on the spread of disinformation in Brazil. The tool also monitors WhatsApp, looking for low-quality content on the popular messaging app, as well as YouTube videos, using an algorithm that scores each message against a series of complex criteria.
The Monitor of Political Debate in the Digital Environment, a project by the Public Policy Research Group for Access to Information at the University of São Paulo (USP), seeks to map, measure and analyze the ecosystem of political debate online. It compiles data and presents analyses of the quality of political debate and polarization in the country, drawing on a database of websites and of Twitter and Facebook profiles relevant to the political debate.
There’s also Brasil WikiEdits, a Twitter bot that monitors changes to Wikipedia made from the networks of the legislative, judiciary and executive branches, mapping potential edits to encyclopedia entries for political purposes and denouncing, like the 7C0 Project, attempts to construct alternative narratives by modifying or deleting entries in the online encyclopedia.
They are “fairly different, yet they all have the objective to bring fresh and trustworthy information to citizens from different approaches (even technological ones), to improve the way information is available on the internet,” explained Lago.
While Brasil WikiEdits and the 7C0 Project serve more as databases, Radar and Monitor are more like what Cordova calls a pedagogical council. That is, voices with authority to guide public debate.
“A disadvantage of these initiatives, though, is that of reach: they are not very well known,” said Nemer.
Despite all the doubts, experts agree that new tools for combating misinformation are welcome and that “Twitter is making interesting attempts to label information that will help people choose their sources better,” explained Cordova.
York agrees, pointing out that sometimes it is important that harmful content is properly marked as such, but that pure removal is not always the best way. She says that “tech companies can also enforce the rules and then that also means you’re not just taking down but labelling it, and I do think that fact-checking efforts and labelling can be a more effective measure than mere takedowns.”
There are two sides to this complicated battle. On one side is the need to moderate or even remove content from circulation, which can trigger a Streisand Effect that ends up amplifying the message of the deleted content. On the other is the need not to let history be erased and narratives be created through the absence of verifiable content, as with Lago’s 7C0 project and Brasil WikiEdits.
For now, we will have to wait and see how Birdwatch behaves once it leaves the beta phase and becomes accessible to the public.
About the author: Raphael Tsavkko Garcia is a Brazilian freelance journalist whose work has been published by Al Jazeera, Foreign Policy, Undark and The Washington Post, among other news outlets. He holds a PhD in Human Rights from the University of Deusto.