While social networks are happy to continue growing their audiences and feeding them targeted content, parents and government institutions are becoming increasingly concerned. But despite efforts to rein in the big tech behemoths, regulations are often of dubious value.
The age range of social media users has been widening since the platforms emerged in the 2000s, despite both self-imposed and governmental restrictions.
According to a Common Sense Media survey conducted in 2021, 38% of 8 to 12-year-olds have used social media (up from 31% in 2019).
Self-imposed age limitations
Let’s take a look at the official age restrictions of the most popular social media platforms according to their terms of service in the US:
| Social network | Minimum age | Notes |
| --- | --- | --- |
| | 13 years | 14+ and 18+ content restrictions; age verification by uploading an ID |
| | 13 years | age verification by entering your birthday |
| YouTube | 13 years | 18+ content restrictions and age verification; in the EU, EEA, Switzerland, the UK, and Australia, you may be asked to verify your age with an ID or credit card |
| | 13 years | age verification by entering your birthday |
| | 13 years | 18+ age verification by uploading an ID, recording a selfie, or asking mutual friends |
| TikTok | 13 years | facial age estimation by a third party or a selfie with an ID |
| Telegram | 16 years | no verification for 18+ content |
| Snapchat | 13 years | age verification by entering your birthday |
| X | 13 years | age verification by entering your birthday; ID verification for Premium users to get the “Verified” label |
| | 13 years | age verification by entering your birthday; changing your age to 18+ requires uploading a birth certificate or ID |
| | 13 years | no verification for 18+ content |
| LinkedIn | 16 years | optional verification |
| Discord | 13 years | age verification by entering your birthday |
| Twitch | 13 years | age verification by entering your birthday; changing your age requires uploading a birth certificate or government ID |
The vast majority of social networks set a minimum age of 13 in the US, with Telegram and LinkedIn the sole standouts at 16, unless local laws require a higher minimum.
However, on platforms like X, WhatsApp, WeChat, and Snapchat, age verification amounts to simply entering a date of birth, and 18+ content on Reddit is only a few clicks away. This is where government bodies come in.
Government actions
Recently, there have been a number of government initiatives around the world to raise the minimum age for social media platforms or for specific features.
Kids Online Safety Act (KOSA)
Already passed by the Senate in July 2024, KOSA aims to protect children from harmful content on social media. The bill would impose a “duty of care” on platforms, requiring them to mitigate risks such as addiction and exploitation. Kids’ feeds would have to be free of posts that promote “dangerous acts that are likely to cause serious bodily harm, serious emotional disturbance, or death.”
This could mean changes to design, feed algorithms, and privacy tools, as well as expanded parental controls.
KOSA has already been criticized for its censorship potential and its impact on free speech. After all, there’s no objective way to determine whether a certain post is harmful to minors.
If KOSA were to become law, it would empower state attorneys general and the Federal Trade Commission (FTC) to enforce it, allowing them to sue social networks. However, this scenario now looks less likely, as House Speaker Mike Johnson has called the bill “very problematic.” It must pass before the congressional session ends in January 2025 for President Biden to sign it.
Children and Teens’ Online Privacy Protection Act (COPPA 2.0)
Passed together with KOSA, COPPA 2.0 is an update to a 1998 law aimed at protecting children under 13 online. The new version extends its protections to minors up to 17 years old.
One of the proposed measures is a ban on targeted advertising; another is an “eraser button” that lets children and their parents delete personal data.
Just like KOSA, COPPA 2.0 has drawn criticism in the House. Representative Frank Pallone, for example, argued it would allow parents to “snoop on their teens’ every click online,” undermining the goal of giving minors more privacy.
Stop Addictive Feeds Exploitation (SAFE) for Kids Act
Earlier in 2024, the New York State Governor signed the Stop Addictive Feeds Exploitation (SAFE) for Kids Act, which prevents “social media platforms from providing an addictive feed to children younger than 18 without parental consent and prohibits social media platforms from withholding non-addictive feed products or services where that consent is not obtained.”
Moreover, social platforms will no longer be able to send minors notifications between 12:00 a.m. and 6:00 a.m. without parental consent.
That said, similar state laws have been challenged in court by the social media giants before, and the same can be expected this time.
Australia’s minimum age limit
In September 2024, Australia’s Prime Minister Anthony Albanese announced plans to impose a minimum age for social media. The exact age was not specified, though Albanese suggested it would fall between 14 and 16.
The opposition raised concerns about how such a law would be enforced and whether it would simply push teens to hide their online identities.
What’s next?
It would be unfair to say that the largest social networks do next to nothing to protect minors from harmful content. While some efforts may not be effective, accessing mature content is becoming harder. Users are increasingly asked to confirm their age by taking selfies and uploading IDs, leaving teens in a tighter spot when it comes to faking their age.
Despite the efforts of Meta, Alphabet, and the rest, the US and other governments continue to plan even tighter restrictions. However, the soundness of those plans is being questioned, and not only by teenagers themselves.