X’s misinformation problem laid bare amid Israel-Hamas conflict

Misinformation, propaganda, and graphic footage of the abductions and military operations in Israel and Gaza are spreading like wildfire on social media. The problem is especially acute on X, formerly known as Twitter, where content moderation has been all but dismantled.

“For many reasons, this is the hardest time I've ever had covering a crisis on here,” Justin Peden, an OSINT researcher from Alabama known online as the Intel Crab, posted on X on Monday.

“Credible links are now photos. On-the-ground news outlets struggle to reach audiences without an expensive blue checkmark. Xenophobic goons are boosted by the platform's CEO.”

Most independent researchers have noticed the same: misinformation and propaganda on social media make it harder for people, many of whom now get their news from X, TikTok, or Facebook, to assess what’s going on.

A flood of lies

Here are the hard facts: early on Saturday morning, Hamas, a Palestinian militant group that has ruled the Gaza Strip since 2007, fired thousands of rockets into Israel and, after breaching the barrier that separates Gaza from Israel in multiple places, stormed towns and villages in southern Israel.

More than 700 Israeli civilians and soldiers were killed, and the toll keeps rising. Around 150 people were also taken hostage. In response, Israel has declared war on Hamas and is bombing targets in Gaza non-stop. Palestinian authorities say about 700 people have been killed by the airstrikes. A parallel cyber conflict is also simmering.

However, the social media space has been flooded with old videos, fake photos, and even video game footage at a level most researchers have never seen before. Journalists and analysts find it extremely hard to find unique first-person accounts from Israel or Gaza – instead, they have to sift through previously unseen levels of garbage.

Cyabra, an Israeli analysis firm that tracks bot accounts on Twitter/X, has found a huge number of fake accounts spreading pro-Hamas propaganda on the platform. Cyabra has offered to help journalists and organizations “uncover malicious actors at work.”

The company’s chief executive, Dan Brahmy, wrote on X: “We have analyzed close to 1M posts, pictures, and videos in order to uncover over 70K fake profiles, controlled by the same murderous groups you are seeing on your TV screens to spread disinformation, or to gather sensitive details about their targets.”

Shayan Sardarizadeh, a journalist at BBC Verify, a fact-checking and disinformation team, also said there had been a “deluge” of false posts on X since Saturday’s attacks. What’s more, untrue posts from verified accounts – those that pay for a blue tick – have even been boosted.

“I’ve been fact-checking on Twitter for years, and there's always plenty of misinformation during major events. But the deluge of false posts in the last two days, many boosted via Twitter Blue, is something else. Neither fact-checkers nor Community Notes can keep up with this,” the journalist said.

Endorsement deleted

That’s not entirely surprising, though. X has flagged some posts as misleading or false, but hundreds of messages remain live – at least partly because the platform’s disinformation and election integrity team was recently cut to fewer than ten staff members. That is clearly not enough.

Besides, a review by NewsGuard, an anti-misinformation outfit, recently found that engagement had “soared” by 70% for Russian, Chinese, and Iranian disinformation sources after Elon Musk, who bought the platform in 2022, removed labels from state-run propaganda accounts.

A few days before the Hamas attacks, X also removed headlines from links shared on the platform, making external links hard to distinguish from ordinary photos. Musk said this was a way to “greatly improve the aesthetics.”

Finally, the billionaire himself recommended two accounts for war coverage that have made false claims or antisemitic comments.

Elon Musk recommended two fishy accounts for war coverage. Image by Cybernews.

The @WarMonitors account told a user in June to “go worship a jew lil bro,” and, about a year ago, said that “the overwhelming majority of people in the media and banks are zionists.”

The other account, @sentdefender, regularly posts false content, according to Emerson T. Brooking, a researcher at the Atlantic Council’s Digital Forensic Research Lab: “Absolutely poisonous account. Regularly posting wrong and unverifiable things (‘sources say’). Inserting random editorialization and trying to juice its paid subscriber count.”

Musk, who has called himself a free speech absolutist, threatened to remove his endorsement after @WarMonitors described Gaza militants as “martyrs,” and later deleted the recommendation.

Actions taken

Late on Monday night, X’s Safety account acknowledged that there had been “more than 50 million posts globally focusing on the weekend’s terrorist attack on Israel by Hamas.”

“As the events continue to unfold rapidly, a cross-company leadership group has assessed this moment as a crisis requiring the highest level of response. This means we’re laser focused and dedicated to protecting the conversation on X and enforcing our rules as we continue to assess the situation on the platform,” said the company.

X wrote it had updated its Public Interest Policy and taken action to remove newly created Hamas-affiliated accounts in order to prevent “terrorist content from being distributed online.”

“Community Notes are now live on posts, and new accounts are being enrolled in real time to propose and rate notes. Community Notes typically appear within minutes of content posting,” X wrote about the feature, which lets users add context to potentially misleading posts.