Many people rely on Facebook and Instagram to get their news. However, important information is often buried deep under the trivial content preferred by social media algorithms.
Social media is the primary source of news for nearly half of the US population. However, even when news is breaking, it is often drowned out by trivial posts about, say, animals, or by pretty much anything that ignites drama online.
This trend can be especially frustrating during a state of emergency, when officials are trying to communicate effectively with those affected by a disaster.
Researchers at the Stevens Institute of Technology in New Jersey investigated which posts on X (formerly Twitter) attracted the most attention and engagement before, during, and after four hurricanes: Harvey, Imelda, Laura, and Florence, which devastated the US, Central America, and the Caribbean between 2017 and 2020.
The results showed that important rescue communications were drowned out by a constant stream of content that was more appealing to users but did not serve public safety purposes.
Their findings, published in the International Journal of Disaster Risk Reduction, show that the most popular social media posts were rarely related to public safety or rescue work. Instead, people preferred tweeting about the fate of pets during storms, sharing human interest stories, or arguing about politics and climate change.
“It’s like being at a crowded party – if everyone’s arguing loudly about politics, it’s hard to make yourself heard over the noise,” explains Dr. Jose Ramirez-Marquez of the Stevens School of Systems and Enterprises. He sees this as a communication challenge for the authorities.
According to the authors, social networks need to “step in” and actively amplify official disaster-related information instead of leaving algorithms to regulate themselves, especially since this dynamic can also be exploited to spread false information for malicious purposes.
“As we’ve seen in recent weeks, with the misinformation surrounding natural disasters in Florida, Georgia, and North Carolina, social networks remain highly vulnerable to misinformation,” Dr. Ramirez-Marquez says.
Social media algorithms are a black box

While the inner workings of these algorithms remain mostly a black box, social media’s responsibility in the context of war and natural disasters is still contested, and simply “stepping in” is not as straightforward as it sounds.
Choosing to amplify one piece of information over another demands a high level of transparency, and even then it can interfere with freedom of speech and push users toward a single narrative.
For example, during the coronavirus pandemic, Meta began removing vaccine-related misinformation as part of its fight against disinformation, citing its potential impact on human well-being.
However, human rights organizations have criticized big tech companies, including Meta, for selectively censoring content that has a real-life impact. In February, the company moved to de-amplify political content on Instagram, drawing backlash from users who labeled it “censorship.”
In May, Meta rolled out a fact-checking feature on its other social media platform, Threads. However, this sparked a debate about potential biases among the fact-checkers themselves, who might lean toward a single political viewpoint.
According to Human Rights Watch, Meta has been systematically censoring critical Palestinian voices, including content creators, journalists, and activists reporting from the ground in Gaza. The organization also claims that Meta lacked classifiers for automatically identifying and removing hate speech in Hebrew until September 2023.
Facebook has also been criticized for its role in the genocide of the Rohingya in Myanmar, as the platform failed to act on the hate brewing in Facebook groups that led to real-life violence.