The CEOs of Meta, X, TikTok, Snap, and Discord faced tough questions on efforts to combat online child sexual exploitation at a US Senate hearing on Wednesday, as new reports say the number of AI-generated child abuse images has multiplied by the thousands.
Senator Dick Durbin, the Judiciary Committee's Democratic chairman, cited statistics from the National Center for Missing and Exploited Children nonprofit group that showed financial "sextortion," in which a predator tricks a minor into sending explicit photos and videos, had skyrocketed last year.
"This disturbing growth in child sexual exploitation is driven by one thing: changes in technology," Durbin said during the hearing.
As the hearing kicked off on Wednesday, the committee played a video in which children spoke about being victimized on the social media platforms.
"I was sexually exploited on Facebook," said one child in the video, who appeared in shadow.
In the hearing room, dozens of parents stood waiting for the CEOs to enter, holding pictures of their children.
"Mr. Zuckerberg, you and the companies before us, I know you don't mean it to be so, but you have blood on your hands," said Senator Lindsey Graham, referring to Meta CEO Mark Zuckerberg. "You have a product that's killing people."
Wednesday also marked the first appearance by TikTok CEO Shou Zi Chew before US lawmakers since March, when the Chinese-owned short-video app company faced harsh questioning, including suggestions that the app was damaging children's mental health.
"We make careful product design choices to help make our app inhospitable to those seeking to harm teens," Chew said, adding TikTok's community guidelines strictly prohibit anything that puts "teenagers at risk of exploitation or other harm -- and we vigorously enforce them."
Chew disclosed more than 170 million Americans used TikTok monthly -- 20 million more than the company said last year.
Under questioning by Graham, he said TikTok would spend more than $2 billion on trust and safety efforts, but declined to say how the figure compared to the company's overall revenue.
Zuckerberg scraps Instagram for kids
Zuckerberg, whose Meta owns Facebook and Instagram; X CEO Linda Yaccarino; Snap CEO Evan Spiegel; and Discord CEO Jason Citron also testified.
"We’re committed to protecting young people from abuse on our services, but this is an ongoing challenge," Zuckerberg said in testimony delivered at the hearing. "As we improve defenses in one area, criminals shift their tactics, and we have to come up with new responses."
Zuckerberg reiterated that the company has no plans to move forward with a previous idea to create a kids version of Instagram.
Spiegel said Snap's parental controls resemble "how we believe parents monitor their teens' activity in the real world – where parents want to know who their teens are spending time with but don’t need to listen in on every private conversation."
The committee last year approved several bills, including a measure, first proposed in 2020, that would remove tech firms' immunity from civil and criminal liability under child sexual abuse material laws. None has become law.
Senator Amy Klobuchar on Wednesday questioned what she said was inaction in the tech industry, comparing it to the response shown when a panel blew out of a Boeing plane earlier this month.
"When a Boeing plane lost a door in flight several weeks ago, nobody questioned the decision to ground a fleet. ... So why aren't we taking the same type of decisive action on the danger of these platforms when we know these kids are dying?" Klobuchar said.
AI-generated child abuse content is growing risk
The US National Center for Missing and Exploited Children (NCMEC) said it had received 4,700 reports last year about content generated by artificial intelligence that depicted child sexual exploitation -- a figure the NCMEC told Reuters is expected to grow as AI technology advances.
In recent months, child safety experts and researchers have raised the alarm about the risk that generative AI tech, which can create text and images in response to prompts, could exacerbate online exploitation.
The NCMEC has not yet published the total number of child abuse content reports from all sources that it received in 2023, but in 2022 it received reports of about 88.3 million files.
"We are receiving reports from the generative AI companies themselves, (online) platforms and members of the public. It's absolutely happening," said John Shehan, senior vice president at NCMEC, which serves as the national clearinghouse to report child abuse content to law enforcement.
Researchers at Stanford Internet Observatory said in a report in June that generative AI could be used by abusers to repeatedly harm real children by creating new images that match a child's likeness.
Content flagged as AI-generated is becoming "more and more photorealistic," making it challenging to determine whether the victim is a real person, said Fallon McNulty, director of NCMEC's CyberTipline, which receives reports of online child exploitation.
OpenAI, creator of the popular ChatGPT, has set up a process to send reports to NCMEC, and the organization is in conversations with other generative AI companies, McNulty said.