Fueled by egregious online misinformation that spread like wildfire across social media, violent anti-immigration riots erupted across the UK, with asylum seekers becoming the target of mobs of angry British citizens.
In several cities, counter-protesters took to the streets to stand against the riots.
Rioting extremists have been mobilizing and coordinating attacks, resulting in 50 injured police officers and 483 arrests by August 8th. This far-right movement has been organizing on the popular messaging app Telegram, in channels with tens of thousands of members, even after the misinformation was debunked.
All this followed the tragic stabbing murders of three young girls in the seaside town of Southport in northern England. Angry UK citizens converged on Telegram in a knee-jerk reaction to false news and baseless claims blaming the stabbings on a Muslim migrant, when in fact the perpetrator was a UK-born teenager.
As an avid Telegram user myself, I can tell you that the platform is not exactly praised for its aptitude at moderating illicit content, let alone the extremist rhetoric and activity that occur there daily. Hence, a large, shady user base prefers the platform.
Nevertheless, in this case, Telegram has shut down some of the channels in an effort to stanch the activity and prevent extremist groups from coordinating further acts of violence.
In the grand scheme of things, the ongoing battle to preserve free speech and freedom of expression always falls short when bad actors wield it maliciously. Because of recent events, social media platforms have a difficult road ahead.
The battle against misinformation
What began as a tragedy quickly shifted and took on a completely different form. The events that unfolded are a perfect example of the consequences of misinformation and disinformation.
- Misinformation: the spreading of false or inaccurate information, regardless of any intent to deceive. It can be unintentional, propagated by those who do not verify its accuracy.
- Disinformation: false information deliberately crafted to mislead. This is commonly associated with propaganda.
Everyone in the hacker world is intimately familiar with this. Many news platforms are known to shape biased perceptions of events to cater to politically aligned audiences, conveniently omitting certain facts and ultimately distorting the full truth of events.
Regardless of the intent, the consequences are often inescapable.
A day after the July 29th murders, Marc Owen Jones, an associate professor specializing in Middle Eastern studies at Hamad bin Khalifa University in Doha, where he studies information control strategies, said on X that he had been monitoring “at least 27 million impressions [across social media] for posts stating or speculating that the attacker was Muslim, a migrant, refugee or foreigner.”
The false narrative was also propagated by influencer Andrew Tate, who has 9.9 million followers on X and claimed that an “undocumented migrant” who had “arrived on a boat” was responsible for the murders in Southport.
Furthermore, he said, “The soul of the Western man is so broken that when the invaders slaughter your daughters, you do absolutely f****** nothing.” This provocative comment stoked an urgency to respond to the incident while falsely blaming unauthorized immigrants.
Still, the false narrative continued to spread across X and was carried by Channel 3, an entity that claims to operate as a news organization. After the misinformation was proved to be just that, the account issued an apology and corrected the false information.
This is nothing new. For example, last year, the BBC uncovered nearly 800 false accounts on TikTok, which were spreading propaganda and disinformation about the Russia-Ukraine war to mislead public opinion.
The circumstances surrounding the murders of these three young girls have given rise to conspiracy theories about whether the spread of false information was intentional. Either way, the results proliferated like a highly contagious virus: hatred of migrants in general, and of Arab migrants especially.
Following the thread of events, the grief over these murders was unceremoniously hijacked by an agenda that redirected public outcry toward anti-immigration protests, religious differences, and ultimately race itself.
While I could go on about deepfakes and other AI-generated media, historical instances like ‘Pizzagate,’ and the consequences of false information and failing to fact-check, I will try to spare you.
There is a common saying, “The internet and real life are not the same.”
Perhaps the two are more closely related than we think.
Telegram and social media moderation
If the UK riots have taught us anything, it’s that social media platforms are faced with new challenges regarding moderation, especially when platforms can be used to destabilize societies and organize violence.
Because of this, British Prime Minister Keir Starmer warned social media companies that they are required to adhere to laws prohibiting the incitement of violence online, following the violence triggered by misinformation about the deadly mass stabbing earlier in the week.
"Let me also say to large social media companies, and those who run them, violent disorder clearly whipped up online: that is also a crime. It's happening on your premises, and the law must be upheld everywhere," said Starmer during a news conference, where he emphasized the need for a balance for handling content of this nature with free speech.
“Inciting violence online is a criminal offense. That is not a matter of free speech – it is a criminal offense,” said Starmer.
Social media platforms have community rules that users are obligated to follow, but how these rules are enforced is another story entirely.
Why these discussions are only happening now, in the wake of the UK riots, is beyond me and defies explanation, when social media companies like Facebook and Telegram have served for years as operating centers for myriad cybercriminals and marketplaces for CSAM (child sexual abuse material).
While I believe it is vitally important for companies like these to protect free speech, matters that extend beyond the scope of free speech are entirely different and should be the subject of discussion, particularly by those who understand the difference.
Extremism should indeed be moderated. This requires better algorithms and more content analysts, without neglecting other pressing issues. Shockingly, these same platforms often fail to adequately moderate illegal content, which continues to flow with impunity. It will be interesting to see how social media companies plan to moderate any of it.
While racially charged violent riots perpetrated by extremists against migrants and Arab people are unjustifiable, the sexual exploitation of children on these platforms rarely takes center stage and remains insufficiently exposed and addressed.
The balance between free speech and what is considered unlawful will always be delicate, because the fate of free speech will always depend on whoever interprets the law and holds the power to enforce it.
At the end of the day, while society divides against itself over the political concepts and policies of the left and the right, extremism will remain an enemy to any society, and a standing pretext for those with the power to diminish freedom in its name.