The UK government’s controversial Online Safety Bill has finally become law. Critics and tech firms say the legislation is dangerous, while others argue that even more action is needed.
The Act’s provisions force tech companies to take responsibility for the content they host on their platforms.
The law also gives the UK government the power to force internet companies to remove child sexual abuse material, online scams, anonymous trolls, deepfakes, and other content that the government deems illegal.
Companies that fail to act on the government's requests will face fines of up to £18 million ($22 million), or 10% of their global annual turnover. Tech executives could also face prison time under certain circumstances.
Government “immensely proud”
The Act forces tech firms to protect children from material that may be legal but is still deemed harmful. It also requires pornography sites to prevent children from viewing content through more robust age checks than before.
New offenses have also been created, such as cyber-flashing – sending unsolicited sexual imagery online – and the sharing of deepfake pornography, where AI is used to insert someone’s likeness into pornographic material.
“Today will go down as an historic moment that ensures the online safety of British society not only now, but for decades to come. I am immensely proud of the work that has gone into the Online Safety Act from its very inception to it becoming law today,” said the UK’s Technology Secretary Michelle Donelan.
“The Bill protects free speech, empowers adults, and will ensure that platforms remove illegal content.”
Tech companies aren’t happy – mostly because of what they see as unrealistic demands to prevent illegal content from appearing on their platforms rather than merely removing it. According to the firms, this would mean scanning even end-to-end encrypted messages before they are actually encrypted.
Scanning encrypted messages?
This suggestion has already drawn serious criticism from security experts, who say it is both unworkable and a threat to privacy and security.
Meta’s WhatsApp, Signal, and other similar apps that use end-to-end encryption – which protects messages from being seen by people outside the chat – have previously threatened to leave the UK if they’re forced to enable scanning.
Last year, Meta said that the Online Safety Bill “risks people’s private messages being constantly surveilled and censored.”
“You can’t scan encrypted messages while preserving encryption. End-to-end encryption either protects everything or protects nothing. There’s no way for the government to scan an end-to-end encrypted message without breaking it and putting everyone under threat of hacks and surveillance,” Andy Yen, the CEO of Proton, a privacy-focused email provider, wrote in an op-ed last month.
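Yen’s point can be illustrated with a toy sketch (this is deliberately simplified, not real cryptography – production systems like the Signal protocol are far more sophisticated): once a message is encrypted with a key held only by the endpoints, the ciphertext is unreadable to anyone in between, so any content scanning would have to happen on the device before encryption.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudorandom byte stream from the key (toy key derivation)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR the message with a key-derived stream (toy stream cipher)."""
    return bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR is its own inverse, so the same function decrypts

message = b"meet at noon"
key = b"alice-and-bob-shared-secret"  # known only to the two endpoints

ciphertext = encrypt(key, message)
assert ciphertext != message                      # a server relaying this sees only gibberish
assert decrypt(key, ciphertext) == message        # the endpoints recover the message
assert decrypt(b"wrong-key", ciphertext) != message  # without the key, scanning is impossible
```

The takeaway is structural rather than cryptographic: a scanning obligation cannot be satisfied by inspecting the ciphertext, which is why critics say it inevitably pushes platforms toward client-side scanning or weakened encryption.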
However, the government said in August that the regulator Ofcom would only ask tech firms to access messages once "feasible technology" had been developed. This appears to have been something of a climb-down, and the compromise was cautiously welcomed by campaigners and tech leaders.
Some say that more needs to be done. Adenike Cosgrove, a cybersecurity strategist at Proofpoint, an American cybersecurity company, says more laws have to be passed to ensure that all platforms comply with the Online Safety Act.
“It will be difficult for social media platforms to identify and remove all harmful content, especially where criminals masquerade as legitimate people on dating apps, building relationships with people and often then asking them to move to another platform to continue the conversation,” said Cosgrove.
“With this in mind, we must weigh the potential benefits of the bill against the potential risks and ensure that it is implemented in a way that both protects user safety and privacy.”