
The BBC has reported that TikTok is profiting from teenagers as young as 15 performing sexual livestreams, despite policies that prohibit such activity. If companies like TikTok rely on AI-driven algorithms to police unlawful, dangerous, or sexually explicit uploads, why is this happening?
Algorithms aside, TikTok's own policies ban sexual solicitation as well as conventional solicitation – such as begging for money – both of which are paid for through the platform's digital gifts. Yet, according to moderators, the company knows this is happening and has not intervened.
One explanation is that TikTok takes a cut of up to 70% of those gifts. If the platform enforces its policies only nominally, that would explain why it continues to profit from children begging and from underage sexual content.
The story came to light when three Kenyan women contacted the BBC, disclosing that they had begun producing sexually explicit content in exchange for digital gifts while still in their teens. They also revealed that TikTok livestreams were used to promote and negotiate payment for more explicit material, which was then distributed and monetized on other platforms.
In response to the BBC, TikTok asserted that it has “zero tolerance for exploitation.” That response is expected; it echoes the company's community guidelines.
But if those guidelines go unenforced because of the revenue at stake, the stated policy amounts to little more than a legal posture.
Are AI policing algorithms a failure by design?
Last year, my girlfriend and I took our electric scooters on a weekend trip along the coast. Since we both had phone mounts on our handlebars, she decided to livestream the ride. As soon as she began streaming on TikTok, the platform cut the feed and flagged it as unsafe.
That incident shows how effectively the algorithm can recognize a user livestreaming while operating a moving vehicle – in this case, an electric scooter. The same AI-driven system powers TikTok's feed: it identifies emerging trends, personalizes each user's experience, learns what a user wants to see, and then serves it ad nauseam as part of the company's ad business.
It also means the algorithm can end up promoting sexually explicit and suggestive content to children, much as Facebook (Meta) has been accused of doing. Meta was sued by the Attorney General of New Mexico for turning a blind eye to the problem and was aggressively grilled by Senator Ted Cruz at a Senate Judiciary Committee hearing last year titled “Big Tech and the Online Child Sexual Exploitation Crisis.”
This raises the question: if the AI can detect prohibited content, why does it also recommend it? Only platforms like TikTok, Facebook, Instagram, and the other social media giants can answer that, because the systems are proprietary and otherwise unknowable.
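One way to see the tension is that the ranking objective and the safety policy are separate systems. The following Python sketch is a purely hypothetical illustration of engagement-driven ranking with no safety term in the objective – every name and number here is an assumption, not TikTok's actual model, which is proprietary:

```python
# Hypothetical illustration: rank videos purely by predicted engagement,
# with no content-safety term in the ranking objective.

videos = [
    {"id": "v1", "predicted_watch_seconds": 12.0, "flagged_borderline": False},
    {"id": "v2", "predicted_watch_seconds": 45.0, "flagged_borderline": True},
    {"id": "v3", "predicted_watch_seconds": 30.0, "flagged_borderline": False},
]

# Ranking on engagement alone puts the borderline video at the top of the
# feed unless a separate safety filter intervenes before serving.
feed = sorted(videos, key=lambda v: v["predicted_watch_seconds"], reverse=True)
print([v["id"] for v in feed])  # -> ['v2', 'v3', 'v1']
```

If detection and recommendation are decoupled like this, a video the classifier misses – or merely scores as borderline – is amplified by default.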
What TikTok knows about a user’s age
In a congressional hearing last March with TikTok CEO Shou Zi Chew, it emerged that while the platform relies on age gating – a flawed method easily circumvented by children – the company also scans user-uploaded videos to estimate users' ages. During the hearing, Chew confirmed that TikTok had developed technology to analyze video uploads for age determination, although he did not specify how it works.
TikTok therefore depends on both human moderation and an automated system driven by AI and machine learning to flag and remove content that violates its community guidelines. That system is supposed to delete any obvious sexual content and nudity.
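A hybrid pipeline of this kind typically scores each upload automatically and escalates borderline cases to humans. The sketch below is a hypothetical Python illustration based only on what was stated at the hearing – every field, threshold, and decision rule here is an assumption, since TikTok's real system is undisclosed:

```python
from dataclasses import dataclass

@dataclass
class Upload:
    video_id: str
    declared_age: int      # from age gating (easily falsified by children)
    estimated_age: int     # hypothetical video-based age estimate
    nudity_score: float    # 0.0-1.0 from an automated content classifier

# Hypothetical thresholds; TikTok's real values are not public.
REMOVE_THRESHOLD = 0.90    # auto-remove above this score
REVIEW_THRESHOLD = 0.50    # escalate to a human moderator above this score

def moderate(upload: Upload) -> str:
    """Return a moderation decision for a single upload."""
    # Distrust the self-declared age whenever either signal says "minor".
    likely_minor = min(upload.declared_age, upload.estimated_age) < 18

    if upload.nudity_score >= REMOVE_THRESHOLD:
        return "remove"
    # Borderline scores, or any flagged content from a likely minor,
    # go to a human moderator instead of being auto-allowed.
    if upload.nudity_score >= REVIEW_THRESHOLD or likely_minor:
        return "human_review"
    return "allow"

print(moderate(Upload("a1", declared_age=21, estimated_age=15, nudity_score=0.60)))
print(moderate(Upload("a2", declared_age=25, estimated_age=26, nudity_score=0.95)))
```

The open question is not whether such a pipeline can exist – it clearly does – but whether its human-review tier acts on what it flags.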
According to a former moderator, TikTok has no interest in stopping sex solicitation because of the revenue it earns when users send digital gifts to solicitors. In effect, the way users monetize sex solicitation on the platform is similar in concept to how live cam models operate on adult websites.

Following the money
The BBC investigated TikTok's earnings and found that streams were generating up to $1,000 (£900) per hour. However, the users receiving the digital gifts, which can be converted into money, keep only a portion of that amount.
That is because the tech giant takes up to 70% of a user's gift revenue. Although TikTok staunchly denies taking a cut that large from livestream gifts, a separate BBC investigation found the platform still collecting roughly 70%, even in cases of exploitative begging, such as by families in Syrian refugee camps.
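To make the split concrete, here is the arithmetic on the figures the BBC cites, assuming the full 70% cut (the exact rate is disputed):

```python
# Worked example of the revenue split described by the BBC. The 70% cut
# is the figure from the BBC's reporting; TikTok disputes it.

gross_per_hour = 1_000                           # USD in gifts, per the BBC
platform_cut = 0.70                              # disputed; "up to 70%"

platform_share = gross_per_hour * platform_cut   # $700 to TikTok
creator_share = gross_per_hour - platform_share  # $300 to the streamer

print(f"TikTok keeps ${platform_share:,.0f}/hour; "
      f"the streamer keeps ${creator_share:,.0f}/hour.")
```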
However, the devil is in the details.
Big tech’s role in child exploitation
One thing is clear: given these compounding circumstances, we are living in a world that does not protect children. Nearly every week – nearly every day – I am confronted on social media with reports of children being targeted for sexual exploitation. Why are we even having this discussion in the first place?
If safeguarding children from online threats and exploitation were a priority, it would be pursued with the same determination big tech shows when chasing money.
According to the BBC, TikTok has long been aware of child exploitation on its platform: it conducted its own investigation and analysis three years ago, then disregarded the findings.
Since money talks, the company allegedly prioritized profits over safeguarding against sexual exploitation, according to a lawsuit filed against the platform last year by the US state of Utah. The outcome of that lawsuit is still pending.
The lawsuit alleges “that the live streaming feature enables adult users to give TikTok currency to young users in return for sexual solicitation and exploitation, with the company receiving a percentage of each payment.”
This should be a rallying cry. As the saying goes, the love of money is the root of all evil, and in the cases against big tech giants like Meta, Instagram, TikTok, Cloudflare, and others not mentioned here, that certainly seems to hold true.
On a personal note, these companies claim they are unaware that their platforms are being used as illicit marketplaces for distributing and monetizing Child Sexual Abuse Material (CSAM). However, lawmakers and government officials who have asked the hardball questions know this excuse is both deceptive and untrue.