Twitter usually booms during global football tournaments, and the Qatar World Cup is no exception. However, advocates have already noticed that the platform, facing a mass exodus of employees, is failing to stop hate speech and racism aimed at players.
Researchers at the Center for Countering Digital Hate (CCDH) have just presented a new analysis showing that Twitter does not remove abusive and often blatantly racist tweets hurled at footballers.
Of 100 abusive tweets reported to Twitter, many included the N-word, monkey or banana emojis, or told players to “go back” to other countries. Yet only one – a tweet that repeated a racial slur 16 times – was removed; the other 99 stayed live.
To some, this is unsurprising. Twitter’s new owner Elon Musk laid off about half of the platform’s workforce soon after buying it, and hundreds of other staff, including all-important content moderation teams, resigned en masse last week.
There might simply not be enough moderators to stop the kind of abuse that targeted three Black English footballers who missed their penalties at the Euro 2020 tournament. Back then, Twitter moved quickly and effectively removed the wave of racist tweets.
This time, the platform might be unable to cope with online abuse, Sanjay Bhandari, the chair of the UK-based anti-discrimination body Kick It Out, told the Guardian.
Musk has insisted that moderation capabilities at Twitter remain strong. He also said he was committed to preventing the platform from becoming a “free-for-all hellscape.”
However, in an update to the platform’s rules on hate speech last week, Musk said “negative/hate tweets” would be “deboosted & demonetized,” but not necessarily removed. He added that users would not encounter such tweets unless they specifically sought them out.
It seems that for FIFA, the International Federation of Association Football, organizer of the World Cup tournaments, this is not enough.
Last week, FIFA and the international players’ union Fifpro announced a Social Media Protection Service (SMPS), available to players from all 32 countries competing at the Qatar World Cup.
The service is supposed to stop footballers from seeing abusive messages when they check their phones in dressing rooms minutes after matches. FIFA will also closely monitor the social media accounts of all World Cup participants and report abusive comments to social networks and law enforcement authorities.
There’s a catch, though. The SMPS only applies to posts on Facebook, Instagram, and YouTube – Twitter is understood to be excluded from the process due to technical issues.
Of which there are many, even if not necessarily technical. Twitter simply has fewer staffers left to handle the influx of rule-breaking tweets that spreads rapidly during popular sports competitions such as the World Cup.
Plus, many teams are down to few or no engineers. Hundreds of them left Twitter after Musk demanded that employees either sign a pledge to work longer hours or resign.
The CCDH also noted that Musk’s claims about a fall in hate speech on Twitter do not stand up to scrutiny. In the first full week under Musk’s ownership, tweets and retweets mentioning the N-word, for example, were triple the 2022 average.