
Elon Musk was not telling the truth when he claimed that hate speech decreased after his takeover of Twitter, later renamed X, according to researchers who examined the platform.
A new study has found that the weekly rate of hate speech is about 50% higher since Musk purchased the microblogging website than in the months prior. This includes the increased use of specific homophobic, transphobic, and racist slurs, researchers said.
The analysis showed that a spike in hate speech occurred just before the Tesla and SpaceX owner acquired X and continued through his seven-month tenure as the company’s chief executive.
Musk bought Twitter in October 2022 for $44 billion and acted as the company’s chief executive until May the following year, when he handed over the reins to Linda Yaccarino, its current boss.
The billionaire said shortly after taking over the platform that “hate speech impressions… continue to decline,” but researchers found that the opposite was true under Musk’s leadership.
“I was not surprised that we observed increased levels of hate speech following Musk’s purchase of X,” Daniel Hickey, lead author of the study from the University of California, Berkeley, told Cybernews in an email.
He said researchers anticipated this outcome due to Musk’s comments about reducing moderation on the platform, the dismissal of trust and safety staff, and the disbandment of the trust and safety advisory council.
Researchers also found that the average number of “likes” on hate posts almost doubled, suggesting that more people were exposed to hate speech on X. Meanwhile, bot accounts and inauthentic activity did not decrease, despite Musk’s pledge to "defeat spam bots or die trying."
Only English-language hate speech was analyzed for the study, which was published in the journal PLOS ONE.
Hate exposure drives users away
While researchers could not draw firm conclusions that the rise in hate and inauthentic activity was a direct result of Musk’s takeover – due to limited information on specific internal changes at X – it coincided with his tenure.
“This evidence does not point in favor of claims made that exposure to hate speech has decreased, and contributes to our concern that more people may be seeing speech that is divisive and harmful to their well-being,” Hickey said.
“If the results were different, we would have been more than happy to report them,” he said, stressing that his team did not have an agenda against X even though it was “worried” about the proliferation of hate on the platform.
This concern has been shared by X users, including in the scientific community, with many migrating to platforms like Bluesky, but also Mastodon and Threads, as less toxic alternatives.
X’s official policy does not allow users to directly attack other people “on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.”
This includes dehumanization of and slurs against groups based on these characteristics, as well as “hateful references” to violent events like genocides and lynchings.