How much will platforms do to eliminate Russian misinformation?
As Russia continues to pump out misinformation about its actions in Ukraine, the major tech platforms have been treading a fine line – and it now seems that the Kremlin has decided that Facebook has taken a step too far.
Photoshopped or years-old images, propaganda pitching Ukraine as the aggressor, and misinformation about NATO have been proliferating across the internet, shared by both Russian sources and those taken in by their claims.
This has, naturally, caused concern. For example, at the end of February, the prime ministers of Poland, Lithuania, Latvia, and Estonia wrote a joint letter to Google, Facebook, and Twitter, calling on them to suspend accounts denying, glorifying, or justifying wars of aggression, war crimes, and crimes against humanity.
"Although the online platforms have undertaken significant efforts to address the Russian government's unprecedented assault on truth, they have not done enough," they wrote.
"Russia's disinformation has been tolerated on online platforms for years; they are now an accessory to the criminal war of aggression the Russian government is conducting against Ukraine and the free world."
Both before and after this call, platforms have taken some action.
Facebook and Instagram owner Meta has established a special operations center to monitor the platform round the clock, with staff including native Russian and Ukrainian speakers.
It's also globally demoting content from Facebook Pages and Instagram accounts belonging to Russian state-controlled media outlets, and starting to label posts that link to them.
On March 1, Google announced that it was blocking YouTube channels connected to the state-run media RT and Sputnik across Europe.
And, it says, YouTube has removed hundreds of channels and thousands of videos for violating its Community Guidelines, including many engaged in what it calls coordinated deceptive practices.
As part of its crackdown, the company is also 'pausing' a number of channels' ability to monetize on YouTube – a move that could mean a detectable downturn in revenues, given that RT, for example, claims more than 10 billion views on the site.
Meanwhile, Microsoft has now banned content and ads from RT and Sputnik, and has removed RT's apps from its store.
Twitter appears to be taking a more limited approach, with head of site integrity Yoel Roth tweeting: "Today, we’re adding labels to Tweets that share links to Russian state-affiliated media websites and are taking steps to significantly reduce the circulation of this content on Twitter."
TikTok, meanwhile, says it has 'increased resources' to fight misinformation, but misleading videos have continued to clock up tens of millions of views.
The companies, however, have been walking a tightrope, fearful of seeing their services shut down in the country altogether. When, in February, Facebook restricted access to Russian state-backed media, the Kremlin responded by limiting access to the platform within Russia; similar moves were made against Twitter.
And, they point out, their platforms can be extremely helpful to those attempting to fight or flee the invasion. Facebook, for example, has made encrypted one-to-one chats available on Instagram for all adults in Ukraine and Russia, and all the major platforms are offering informational services.
Now, their fears have been realized, with Russia announcing last week that access to Facebook would be blocked altogether in the country and Twitter facing major restrictions.
However, this may not have been the platforms' only reason for restricting content: more cynically, there's the question of setting a precedent.
Demands that the platforms restrict content more tightly are unrelenting. If misinformation about Ukraine can be completely removed, how about misinformation about Covid? How about other offensive content?
We can all hope that the situation in Ukraine will be temporary – but calls for greater control of internet content certainly won't.