© 2022 CyberNews - Latest tech news, product reviews, and analyses.

UK orders social media platforms to actively look for Russian propaganda

Companies will have to proactively look for and remove disinformation which could be harmful to the UK from foreign state actors.

The UK government is making changes to new internet safety laws, hoping to minimize people’s exposure to state-sponsored or state-linked disinformation.

“The invasion of Ukraine has yet again shown how readily Russia can and will weaponize social media to spread disinformation and lies about its barbaric actions, often targeting the very victims of its aggression,” Digital Secretary Nadine Dorries said. “We cannot allow foreign states or their puppets to use the internet to conduct hostile online warfare unimpeded.”

A new Foreign Interference Offence created by the National Security Bill will be added to the list of priority offences in the Online Safety Bill. Social media platforms, search engines, and other apps that allow people to post content will be required to take a proactive approach to identifying and removing disinformation aimed at interfering with the UK.

Tech companies will be required to deal with content posted by individuals and groups acting on behalf of foreign states with the aim of influencing elections, interfering with court proceedings, or undermining democratic institutions.

The National Security Bill, due in Parliament for Committee Stage next week, establishes a new offense of foreign interference to deter and disrupt state threats activity. The Online Safety Bill, as it is currently drafted, will force companies to take action on state-sponsored disinformation.

According to Security Minister Damian Hinds, companies will need to implement proportionate systems and processes under the Foreign Interference Offence to mitigate the possibility of users encountering illegal content.

“This could include measures such as making it more difficult to create large-scale fake accounts or tackling the use of bots in malicious disinformation campaigns. When moderating their sites, the firms will need to make judgments about the intended effect of content or behavior they have reasonable grounds to believe is state-sponsored disinformation and whether it amounts to misrepresentation,” Hinds said.

Online Safety Bill worries experts

In April, 45 cybersecurity experts warned that the UK's Online Safety Bill could expose users' private messages to third parties.

They sent a letter to Parliament expressing concern that the bill, which is supposed to make the internet a safer place, would force providers to partially or fully abandon end-to-end encryption, exposing private messages to third parties.

“The proposal is ill-suited to address its stated aim and instead places huge risk to all users of private messaging platforms, as well as creating unimplementable and impractical requirements which would be at odds with human rights standards,” the letter says.

The Online Safety Bill aims to protect children and adults from illicit content, including by making cyberflashing illegal and creating a “communication offense” for users who share dangerous content with others. The power to decide what content platforms can show users will rest with Ofcom.

Meaningful impact is unlikely

Social media sites already have an ethical duty to take down misinformation, Matthew Gracey-McMinn, Head of Threat Research at infosecurity company Netacea, told Cybernews.

He believes that this regulation is a step in the right direction; however, it is unlikely that social media platforms will be able to make any meaningful impact on bots in the immediate future.

“Social media sites will need to invest in AI to track and remove malicious actors and content. The National Cyber Security Centre recently stated that the UK should expect a long campaign of cyberattacks from Russia; it's expected that misinformation bots will only increase in aggression,” he said.

Gracey-McMinn pointed out that misinformation campaigns are organized at scale. For example, at one point, Twitter notified 700,000 users who had encountered accounts linked to Russia and the Internet Research Agency (IRA), a Russian agency for influence operations.

Twitter discovered 3,814 accounts linked to the IRA, which had posted over 176,000 tweets, and over 50,000 accounts linked to the Russian government, which had tweeted over a million posts. Some of these posts could still be seen on the platform in 2021.

