
Meta has sued the Hong Kong-based maker of CrushAI, a platform capable of creating sexually explicit deepfakes, claiming that it repeatedly circumvented the social media company’s rules to purchase ads on Instagram and Facebook. But isn’t it too late?
Some experts say that Meta's reactive model fundamentally misunderstands how digital harm works in our AI-powered world.
The lawsuit is part of what Meta described as a wider effort to crack down on so-called “nudifying” apps, following claims that the company was failing to adequately address ads for those services on its platforms.
These apps let users generate nude or sexualized images from a photo of someone’s face, without that person’s consent.
As of February, Joy Timeline HK Limited, the maker of CrushAI, also known as Crushmate and by several other names, had run more than 87,000 ads on Meta platforms that violated its rules, according to the complaint Meta filed in Hong Kong district court on Thursday.
“We’ve filed a lawsuit in Hong Kong, where Joy Timeline HK Limited is based, to prevent them from advertising CrushAI apps on Meta platforms,” Meta said in a press release.
“This follows multiple attempts by Joy Timeline HK Limited to circumvent Meta’s ad review process and continue placing these ads, after they were repeatedly removed for breaking our rules.”
Meta alleges Joy Timeline HK Limited violated its rules by creating a network of at least 170 business accounts on Facebook or Instagram to buy the ads.
The app maker also allegedly had more than 55 active users managing over 135 Facebook pages where the ads were displayed. The ads primarily targeted users in the United States, Canada, Australia, Germany and the United Kingdom.
404 Media had already reported in January that 90% of the app’s traffic came from Meta’s platforms. But that was almost half a year ago, so why take action only now?
Ben Colman, co-founder and CEO of the deepfake detection company Reality Defender, argues that Meta’s lawsuit only looks like a step forward. In reality, he says, it exposes a much deeper problem with how tech platforms approach AI-enabled harm.

“Meta's approach to AI-generated harm follows a predictable pattern: wait for victims to report content, then remove it after the damage is done. This reactive model fundamentally misunderstands how digital harm works in our AI-powered world,” Colman wrote on Reality Defender’s website.
“By the time Meta removes the content, screenshots have been taken, links have been shared, and the victim's life has already been altered.”
According to Colman, Meta's message to victims is essentially: “We'll clean up the mess after your life is destroyed.” That's negligence dressed up as action, he said.