Since the fake sexually explicit images of Taylor Swift went viral on social media, Microsoft has fixed its generative artificial intelligence (AI) tool and US lawmakers have proposed new regulation.
The ‘Swift effect’ is real – apparently, it alone can spur tech companies and lawmakers into instituting AI protections.
The industry has long expressed deep concern about the potential for generative AI image-creation tools to be misused, but until now it had never taken specific action. It seems it takes a celebrity for that to happen.
A pornographic AI-generated image of Swift, shared by a user on X, was viewed a staggering 47 million times last week. The image spread to another platform, Telegram.
The images are out there and will probably remain so. But they have at least highlighted the problem of non-consensual deepfake pornography spreading uncontrollably on social media and elsewhere.
Even the White House, mindful that millions of ‘swifties’ could form an important voting bloc in the US presidential election later this year, has weighed in. Its press secretary, Karine Jean-Pierre, called the fake images “alarming.”
Now, X says it is actively curbing the spread of the Swift images, although the platform has already lifted the ban on searches for the popstar.
Additionally, Microsoft has closed the loophole in its Designer AI image generator that could create explicit images of celebrities such as Swift, 404 Media reported.
Previously, users could get around simple name blocks by deliberately misspelling prompts. Now, generating images of celebrities is blocked entirely, although the cat-and-mouse game between malicious actors and companies will surely continue.
Perhaps more importantly, a group of US senators has now introduced the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act, legislation that would “hold accountable those responsible for the proliferation of nonconsensual, sexually-explicit deepfake images and videos.”
Creators of such images would be subject to civil lawsuits over digital forgery, with victims entitled to financial damages as relief.
Deepfake pornography has grown into something of an epidemic. There were almost 280,000 synthetic, non-consensual exploitative videos on the clearnet in 2023, according to a recent report on deepfakes and the rise of nonconsensual synthetic adult content.
The total duration of these videos was 1,249 days and the number of views topped 4.2 billion, the report found.