UK parliament finally approves Online Safety Bill, concerns remain

Britain’s controversial Online Safety Bill, which sets tougher standards for social media platforms, has finally been passed by parliament. The document has been heavily altered since it was first proposed, but some remain unhappy.

The United Kingdom’s Technology Secretary Michelle Donelan said the bill was a “game-changing” piece of legislation. “Today, this government is taking an enormous step forward in our mission to make the UK the safest place in the world to be online,” she stated.

The bill introduces a new regulatory regime to address illegal and harmful content online. Once it receives royal assent and becomes law, it will impose new duties on social media platforms such as Facebook and TikTok.

The communications watchdog Ofcom will now be the country’s main internet regulator. Social media platforms will have to:

  • remove illegal content quickly or prevent it from appearing in the first place, including content promoting self-harm;
  • prevent children from accessing harmful and age-inappropriate content;
  • enforce age limits and age-checking measures;
  • ensure the risks and dangers posed to children on the largest social media platforms are more transparent, including by publishing risk assessments;
  • provide parents and children with clear and accessible ways to report problems online when they do arise.

Because the bill pays so much attention to child safety online, its supporters have included the National Society for the Prevention of Cruelty to Children, the safety group Internet Watch Foundation, bereaved parents who claim harmful online content contributed to their child’s death, and sexual abuse survivors.

Chris Dimitriadis, Chief Global Strategy Officer at ISACA, the global professional association and learning organisation for information security, called the passing of the bill a “milestone moment in improving internet safety in the UK.”

“If implemented effectively, the legislation will provide appropriate protection for UK citizens to live online safely, keeping them safe from harmful or fraudulent content and creating enhanced digital trust between platforms and users,” Dimitriadis told Cybernews.

If companies do not comply with the new regulation, Ofcom will be able to issue fines of up to 18 million pounds ($22.3 million) or 10% of their annual global turnover, whichever is greater.
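As an illustration of how a greater-of-two-amounts cap works (the turnover figures below are hypothetical, chosen only to show each branch):

```python
# Illustrative sketch of the fine cap: the greater of a fixed £18m
# and 10% of annual global turnover. All amounts in pounds.
FIXED_CAP = 18_000_000

def max_fine(annual_global_turnover: int) -> int:
    """Return the maximum fine Ofcom could levy under this cap."""
    return max(FIXED_CAP, annual_global_turnover // 10)
```

For a smaller firm with, say, £50m turnover, 10% would be £5m, so the £18m fixed cap applies; for a platform with £100bn turnover, the 10% branch dominates and the ceiling is £10bn.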

The 300-page document has been heavily updated since it was first proposed four years ago. New provisions covering trolling, deepfake porn, animal cruelty, and scam ads have been added.

What about encryption?

Most importantly, though, in November 2022, the UK government shifted away from tackling “legal but harmful” content after campaigners and some lawmakers raised concerns that this could curtail free speech.

London had previously said that social media companies could be fined if they failed to stamp out harmful content such as abuse even if it fell below the criminal threshold, while senior managers could have also faced criminal action.

Some concerns remain. The most contentious – and hard to implement – feature of the bill is the requirement to scan encrypted messages for illegal content.

Compliance would likely involve client-side scanning to detect illegal content before it’s encrypted – a suggestion that has already drawn serious criticism from security experts as both unworkable and a threat to privacy and security.
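In rough terms, client-side scanning means checking content against a database of known-illegal material on the user’s device, before the encryption step. A minimal sketch of that flow – the hash set, function names, and the trivial stand-in “cipher” are all hypothetical, and real proposals typically use perceptual rather than exact hashes:

```python
import hashlib
from typing import Optional

# Hypothetical on-device database of hashes of known-illegal content.
KNOWN_ILLEGAL_HASHES = {
    hashlib.sha256(b"example-banned-payload").hexdigest(),
}

def client_side_scan(plaintext: bytes) -> bool:
    """Check the message against the known-illegal database on-device."""
    return hashlib.sha256(plaintext).hexdigest() in KNOWN_ILLEGAL_HASHES

def encrypt(plaintext: bytes) -> bytes:
    # Trivial stand-in for a real end-to-end encryption step.
    return bytes(b ^ 0x5A for b in plaintext)

def send_message(plaintext: bytes) -> Optional[bytes]:
    if client_side_scan(plaintext):
        return None  # blocked before encryption ever happens
    return encrypt(plaintext)
```

The criticism follows directly from this structure: the scanner and its match database sit on the device, outside the encryption boundary, so the same hook could in principle be repurposed to flag arbitrary content.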

Meta’s WhatsApp, Signal, and other similar apps that use end-to-end encryption – which protects messages from being seen by anyone outside the chat – have previously threatened to leave the UK if they’re forced to enable scanning.

However, a few weeks ago, the UK government said that Ofcom could not be required to order scanning unless the “appropriate technology” exists. This appears to have been a sort of climb-down, and the concession was cautiously welcomed by campaigners and tech leaders.

Caution is warranted: Britain has already urged Meta not to roll out end-to-end encryption on Instagram and Facebook Messenger without safety measures to protect children from sexual abuse.

Meta plans to implement end-to-end encryption across Messenger and Instagram direct messages, arguing that the technology reinforces safety and security.