A banned TikTok would be even more dangerous than a legal one

TikTok, much like other popular social media networks, is swarming with malware. Were the US government to ban it, TikTok users would face even more dangers.

In Washington, a plan is brewing to either ban TikTok or force its Chinese parent company, ByteDance, to divest, with the WSJ hinting that the standoff could turn into a prolonged debate if a decision isn’t made soon.

If banned, TikTok would probably disappear from the official app stores in the US, and while users would likely still be able to reach it using a VPN, updating the app would become a hurdle. This would introduce a whole new level of risk to user privacy and safety online.

Recently, Menlo Security sounded alarm bells, saying that 12.5% of companies had at least one employee accessing TikTok from a browser in the last 30 days alone. Why does that matter? Well, TikTok, given its popularity, is widely abused by criminals, and clicking on TikTok links might simply end up with a device infected with malware.

In TikTok’s defense, that’s certainly not unique to this China-owned social media platform. There have been numerous reports, including this most recent one, of how threat actors are abusing YouTube to spread malicious programs.

From a cybersecurity standpoint, simply banning TikTok might eventually leave its US users – there are approximately 150 million of them – even more prone to cyber risks.

I briefly chatted with Andrew Harding, VP of Security Strategy at Menlo Security, to learn more about the TikTok problem.

If TikTok is banned in the US, will that problem (users accidentally clicking on malware) disappear?

A TikTok ban, like any other prohibition, will create unforeseen problems. Any ban by the US government will only create an artificial demand for TikTok. Federal resources would be better applied to significant threats that harm children and that encourage cybercrime. TikTok is not inherently more harmful than other social media applications.

Why is the US federal government so wound up about one social media application while others exhibit the same traits? There are serious questions about TikTok's privacy policy and the degree to which the company can be trusted to comply with nation-state limits on the use and transmission of user data, but these are data privacy concerns, not malware or credential-stealing concerns. And these concerns exist with every social media application.

Why is it dangerous to access TikTok via a browser?

The danger is not TikTok itself – whatever danger it poses to users, and whatever danger comes from handing over so much data to a social media application, is posed by any application.

TikTok use certainly could threaten productivity within enterprises, but so does playing Solitaire on a PC. TikTok use is rife with privacy concerns, and there is a risk of data protection issues any time data can be uploaded to a third party. There is also danger in banning an application, and even in updating one.

Moreover, orphaned apps and copycat sites can pose additional threats. I expect users might not use TikTok as much if it's limited to access via a browser, but our research shows that it's already in use today.

Companies need to decide whether or not they allow social media content. There are legitimate business uses for social media, of course. Browser security systems can help to simplify whatever policy decisions enterprises and government agencies choose. Read-only use of social media, except where there is a legitimate business purpose, seems like a sensible policy. Blocking social media on work devices also makes sense in many cases.

If it’s so dangerous, why haven't companies blocked TikTok on their corporate network? Or maybe they have?

TikTok is not inherently "so dangerous." Unmanaged browsing is dangerous. Driving users to search the web for TikTok-substitute websites is dangerous. I was able to create 100 potential copycat sites in under an hour with the help of a well-known GenAI tool. Whatever policy an enterprise or agency ultimately chooses to enforce needs to be supported by tools and services.

Can you point me to a real case scenario where TikTok was exploited to spread malware, and a corporate network got infected as a result?

This whole issue has been warped by over-simplified reporting. Most attacks related to TikTok have been related to TikTok trends.

There have been social engineering attacks that exploit the human desires associated with a TikTok trend. One significant malware outbreak posed a real danger: the threat actor exploited end users' desire to "unmask" blurred TikTok content that purportedly hid nude videos. Was this a TikTok threat? The malware was hosted on the leading source code repository. The attack was propagated on a leading digital communication platform that is popular with gamers.

Should they each be blamed and banned? Should TikTok be blamed because the possibility of seeing nude videos clouded the judgment of many users? This social engineering attack and the associated "unfiltered" malware show us that attackers are creative and will exploit human frailties and foibles.

This attack was also particularly damaging because the malware could steal credentials and financial information from victims. Banning TikTok won't stop that.

Enterprises need cost-effective controls that stop phishing "offers" and malware payloads from even getting in front of users. Browser security tools must be readily accessible to aid both users and their respective organizations in adhering to laws and regulations concerning browsing and content. These systems must not only ensure compliance but also protect users from potential cyberattacks.