Turns out that banning lying social media accounts really helps

A new study has shown that banning 70,000 misinformation supersharers after the January 6th, 2021, insurrection in Washington immediately reduced the spread of bogus information on Twitter.

Twitter, now renamed X, suspended the accounts, which were associated with the right-wing QAnon movement, citing their role in spreading misinformation about the 2020 US presidential election. The mass suspension gave misinformation researchers a field day.

They could, and did, analyze the data to see whether the move reduced the amount of misinformation on the platform. A new study published last week in the journal Nature says that it did, suggesting that banning serial spreaders of lies is more effective than deboosting or suppressing individual posts.

According to the researchers, the mass suspension greatly reduced the sharing of links to “low credibility” websites among Twitter users who were following the suspended accounts. What’s more, the move also pushed quite a few other supersharers of misinformation to leave the site voluntarily.

“The results are informative regarding the current role of social media companies in the regulation of speech and how terms-of-use interventions may be applied in other settings,” wrote the study’s co-authors, Stefan McCabe, Diogo Ferrari, Jon Green, David Lazer, and Kevin Esterling.

They point out that platform-wide interventions on speech have received little research attention until now. With the 2024 presidential election approaching, the findings are useful in showing that it’s possible to limit the spread of online lies – if the platforms are willing to do so, of course.

The problem is that they most likely aren’t. Under the leadership of Elon Musk, a self-described free-speech absolutist, X has reinstated many previously banned accounts, including that of former President Donald Trump.

Musk has praised X’s “Community Notes” tool as an alternative to enforcing online speech rules and has said he preferred to limit the reach of more controversial posts rather than remove them or ban accounts.

Another study recently revealed that a small group of humans – the so-called supersharers – are responsible for most misinformation online, rather than automated and AI-powered accounts.

The authors found that supersharers were disproportionately Republican, middle-aged white women residing in three conservative states – Arizona, Florida, and Texas – which are focal points of contentious abortion and immigration battles.

Their neighborhoods had relatively low education levels but relatively high incomes, and, perhaps most importantly, they shared misinformation manually and persistently – that is, the massive volume of content was not automated with the help of technology.