Children in the UK are using AI image generators to make indecent images of other children. It’s a concerning – and illegal – trend, an internet safety group has warned.
The UK Safer Internet Centre (UKSIC) said it received reports from teachers that students were using AI to “create imagery which legally constitutes child sexual abuse material.”
It said “urgent action” was needed to prevent the technology from being abused in schools and help children understand the risks of generating this sort of imagery.
It is illegal in the UK to make, possess, or distribute child sexual abuse imagery. The law applies to photographic and AI-generated content alike, including cartoons and other less realistic depictions.
Children may be experimenting with AI image generators without fully realizing the harm they could cause or the risk of such imagery being shared online, according to UKSIC director David Wright.
“Young people are not always aware of the seriousness of what they are doing, yet these types of harmful behaviors should be anticipated when new technologies, like AI generators, become more accessible to the public,” Wright said.
Schools should nip the problem in the bud by filtering and monitoring their systems, and by seeking support when dealing with incidents and safeguarding matters, he said.
“Although the case numbers are currently small, we are in the foothills and need to see steps being taken now before schools become overwhelmed and the problem grows,” Wright said.
Teachers and parents are urged to talk to children about the risks of such behavior, amid concerns that the images, once leaked to the open web, could lead to further abuse or blackmail.
There is a “real risk” that sex offenders could use fake images to shame and silence their victims, warned Victoria Green, CEO of the Marie Collins Foundation, a charity that supports children affected by technology-enabled abuse.
“The imagery may not have been created by children to cause harm, but, once shared, this material could get into the wrong hands and end up on dedicated abuse sites,” Green said.
In October, the UK-based Internet Watch Foundation warned that AI-generated images of child sexual abuse were now so realistic that many would be indistinguishable from authentic imagery, even to trained analysts.
It said it had discovered “thousands” of child sexual abuse images online and cautioned that more needed to be done to prevent the production of such imagery at scale.