Online Safety Act taking effect: don’t suggest friends to children on social media


Stop suggesting friends to children and help fight online grooming, Ofcom, the United Kingdom’s communications watchdog, has told social media platforms.

In its first draft guidance for tech platforms on complying with the newly introduced Online Safety Act, Ofcom – which enforces the law – published a series of recommendations covering harms such as child sexual abuse material, grooming, and fraud.

“Ofcom is exercising its new powers to release draft Codes of Practice that social media, gaming, pornography, search and sharing sites can follow to meet their duties under the Online Safety Act, which came into law last month,” said the regulator.


According to Ofcom, social media platforms should fight online grooming by ensuring that children are not suggested as “friends” to other users by default. Children should also not see lists of suggested friends.

Friend suggestions can be exploited by groomers, and Ofcom stressed that more than one in ten 11-18-year-olds have been sent naked or semi-naked images. The regulator also said that social media platforms should make sure children’s location information is not revealed in their profiles or posts.

“Regulation is here, and we’re wasting no time in setting out how we expect tech firms to protect people from illegal harm online while upholding freedom of expression. Children have told us about the dangers they face, and we’re determined to create a safer life online for young people in particular,” said Dame Melanie Dawes, Ofcom's chief executive.

Children should also not receive messages from people outside their contact lists, and those lists should not be visible to other users. Ofcom said that some 30% of secondary-school-aged children (11-18 years) have received an unwanted friend or follow request.

“If these unwanted approaches occurred so often in the outside world, most parents would hardly want their children to leave the house. Yet somehow, in the online space, they have become almost routine. That cannot continue,” said Dame Melanie.

Ofcom also says that social media platforms should make sure their moderation teams are properly resourced and that people can easily make a complaint to the site. Websites should also identify content containing the web addresses of known child abuse sites.

Search engines are singled out too. They should not index websites previously identified as hosting child abuse material, and their users should have a way to report “search suggestions” they believe point them to illegal content.


Ofcom says it wants to hear what tech platforms think of its plans, which are detailed in over 1,500 pages. But companies that fail to comply with the regulator’s requirements will face fines of up to £18 million ($22 million) or 10% of their global annual turnover. Tech executives could also face prison time under certain circumstances.

So far, tech companies haven’t been happy about the law, mostly because of what they see as unrealistic demands to prevent illegal content from appearing on their platforms in the first place, rather than simply removing it.

According to the firms, this would mean scanning even private messages before they are actually encrypted. Security experts say this is both unworkable and a threat to privacy and security.

However, the government said in August that Ofcom would only ask tech firms to access messages once “feasible technology” had been developed. This appears to have been something of a climbdown.