Artificial intelligence (AI) and pornography may seem like an unlikely pair, but the two are becoming ever more entangled as AI is increasingly used to create pornographic content.
As AI has exploded onto the scene, it has permeated nearly every corner of business and society, including, of all things, pornography.
There are now various sites that can be used to generate pornographic content.
Sites like PornPen and Unstable Diffusion specialize in uncensored image generation.
General-purpose AI tools not intended for this purpose have also been known to create pornography, both legal and illegal.
But what is this AI-generated pseudo-erotica, and how does it affect individuals and the industry?
What are deepfakes?
One type of sexual content that has proliferated over the past year is deepfake pornography.
Deepfakes are fake images, audio, and videos created with deep learning, hence the portmanteau “deepfake.”
Deep learning is a subset of machine learning, which itself falls under the broader umbrella of artificial intelligence.
One example of a popular deepfake is the viral video of the late Queen Elizabeth II giving an alternative version of her annual Christmas message.
This video is an example of how realistic yet uncanny deepfakes can be.
Now imagine your face has been superimposed onto a sexually explicit image or video of a real porn actor.
This not only exploits you, but it also harms the porn actor, whose image or video is being misused for someone else’s gain.
Erika Lust, an award-winning indie porn filmmaker, told Cybernews how this technology has raised “significant concerns as it can be used to create non-consensual explicit content, potentially violating privacy and consent.”
Deepfake porn has become a “new normal,” some researchers posit, as online groups “dedicated to creating and sharing fake nudes” have become increasingly widespread, explains researcher, teacher, and digital rights activist Sophie Maddocks.
Deepfake statistics
Deepfake pornography has grown into something of an epidemic.
Almost 280,000 synthetic, non-consensual exploitative videos were found on the clearnet in 2023, according to ‘The State of Deepfakes and the exponential rise of nonconsensual synthetic adult content in 2023’ by researcher Genevieve Oh.
The combined runtime of these videos amounted to 1,249 days, and they drew over 4.2 billion views, the report states.
Furthermore, the hours of video posted on leading deepfake NCEI websites jumped from 9,300 to nearly 14,000.
This harmful trend has had far-reaching adverse effects on individuals and the industry.
Real stories
The consequences of non-consensual deepfake videos can be devastating.
One example is that of 15-year-old Francesca Mani, reportedly one of around 30 victims of deepfake pornography that circulated at Westfield High School in New Jersey, the MIT Technology Review writes.
Male students at the high school used AI to manipulate images of their classmates, creating sexually explicit imagery of minors.
Alongside Mani, other girls fell victim to non-consensual deepfake pornography, further exemplifying the growing problem that both adults and children face in the evolving digital age.
Recently, it emerged that child sexual abuse material was present in LAION, a major dataset underpinning text-to-image services such as Stable Diffusion and Google’s Imagen.
The Stanford Internet Observatory, which uncovered the material, concluded that machine learning models trained on such data could produce child sexual abuse material.
So, if explicit material is embedded in the data behind AI systems, how can we stop machine learning models from reproducing it?
Effects on the individual
Research shows that AI is being used to manipulate images and exploit individuals, mainly women and children.
Furthermore, the creation of non-consensual deepfake pornography can have profound effects on victims.
“Many victims have experienced depression, thoughts of suicide, PTSD, and sexual trauma after their images were falsified,” Maddocks adds.
Even if images are known to be fake, this content sets out to “sexually shame and humiliate victims who may struggle to reach a place of psychological safety,” as their images cannot be erased from the internet.
“It violates their autonomy and can cause emotional distress, embarrassment, or damage to their reputation. It’s also damaging to the industry as it becomes increasingly difficult for viewers to discern between authentic and manipulated content, potentially leading to misinformation, distrust in media, and distrust towards users' favorite websites.”
Erika Lust
Effects on the industry
In the context of pornography, there is seemingly no space for deepfakes, as “this technology allows for the creation of highly realistic and sometimes non-consensual or illegal content,” Brian Prince, CEO and founder of TopAITools.com, said.
In deepfake pornography, actors’ faces are often replaced with those of celebrities or other notable figures who did not consent to their images being used for this purpose.
The manipulation of porn actors using AI technologies often violates the actors’ consent and autonomy, Prince said.
This can lead to psychological harm, reputational damage, and even legal battles, Prince adds.
These factors are central to what makes deepfakes harmful within the adult entertainment industry, and they raise ethical questions about using someone’s likeness without their approval.
Positive impact
Conversely, AI has significantly improved the user experience and the way people watch pornography.
“AI has revolutionized the industry by personalizing content through algorithms that analyze user preferences and provide tailored suggestions,” said Lust.
Furthermore, AI has enhanced platforms’ ability to monitor and regulate content more effectively, potentially mitigating the spread of harmful and abusive material.
“AI tools can assist in content moderation on porn platforms, flagging inappropriate or illegal material, and ensuring compliance with guidelines and regulations.”
Putting an end to non-consensual deepfakes
Although AI has positively impacted the porn industry, more regulations need to be put in place to stop the proliferation of non-consensual deepfake pornography.
Our experts weighed in on the topic and discussed ways to end non-consensual deepfake pornography.
“Governments and policymakers need to act and create regulations that specifically address the creation and distribution of non-consensual content,” Lust states.
“To halt the generation of non-consensual pornography, we should establish stricter laws, technological measures like watermarking, and better social media content platform enforcement,” said MD Nazmul Hasan, CEO at Microters.
We should also adopt regulations that include penalties for creating and distributing non-consensual AI content, said Prince.
“These laws can outline penalties for those who create, distribute, or profit from such material without consent,” concluded Lust.
Above all, pre-existing laws must be updated to acknowledge and penalize the misuse of someone's likeness for AI-generated pornography.