40% of UK adults unaware that AI-generated abuse material is illegal


A recent survey revealed that 40% of UK adults either didn't know whether AI-generated child sexual abuse material was legal in the UK or believed that it was.

The Lucy Faithfull Foundation, a UK-wide charity dedicated to preventing child sexual abuse, has highlighted a troubling trend: artificial intelligence (AI) is being used to generate child sexual abuse material (CSAM).

The research finds that 66% of UK adults are concerned about advances in AI and the consequences they may have for children.

Meanwhile, a large majority of those surveyed (70%) were unaware that AI is being used to generate CSAM depicting minors.

While the survey found that 88% of people believe that AI-generated sexual imagery of under 18s should be illegal, 40% thought that this type of content was legal in the UK, according to the foundation.

In the UK, it is illegal to generate, distribute, or view sexual images of children under the age of 18. This includes images generated by AI.

The foundation says that AI-generated CSAM can promote the sexualization of real children, and it warns of the consequences that offenders who view these images could face.

Artificial intelligence is not only being used to create explicit images of children from scratch; it is also being used to transform images of real children into CSAM, including images of children who have previously been victims of sexual abuse, according to the foundation.

“Real children who have been abused find themselves victimized again as offenders create new sexual imagery of them and distribute this online,” the foundation said.

The survey reveals significant gaps in the public's understanding of AI and its impact on children, said Donald Findlater, director of the Stop It Now helpline.

As AI continues to advance, Findlater believes it is imperative that people educate themselves about the potential dangers of this technology and how offenders exploit it daily.

In December 2023, The Stanford Internet Observatory identified CSAM in LAION-5B, a large data set used to train services like Stable Diffusion and Google’s Imagen.

Previous Stanford Internet Observatory reports concluded that machine-learning models could produce CSAM, but assumed this was only possible by combining "two concepts," such as children and explicit acts.

Now, it appears that certain machine-learning models may have been trained, in part, on actual CSAM.

