UK Tech Firms and Child Safety Officials to Test AI's Ability to Generate Exploitation Images

Technology companies and child safety organizations will be granted authority to assess whether AI systems can generate child exploitation images under recently introduced UK legislation.

Substantial Increase in AI-Generated Illegal Content

The announcement coincided with revelations from a protection monitoring body showing that reports of AI-generated child sexual abuse material have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.

Updated Regulatory Framework

Under the changes, the government will allow approved AI developers and child protection groups to examine AI systems – the underlying technology for conversational AI and image generators – and verify they have sufficient safeguards to stop them from producing images of child exploitation.

The measure is "ultimately about preventing abuse before it occurs," said Kanishka Narayan, the minister, who added: "Experts, under rigorous protocols, can now detect the danger in AI systems early."

Tackling Regulatory Challenges

The changes have been introduced because creating and possessing CSAM is illegal, meaning AI developers and other parties could not generate such content even as part of a testing regime. Previously, officials had to wait until AI-generated CSAM appeared online before acting on it.

This legislation aims to avert that problem by stopping the production of such material at its source.

Legal Structure

The government is introducing the changes as amendments to criminal justice legislation, which will also establish a prohibition on possessing, creating or distributing AI systems designed to generate child sexual abuse material.

Real-World Consequences

Recently, the minister toured the London headquarters of a children's helpline and listened to a mock-up call to advisors involving a report of AI-based abuse. The call depicted a teenager seeking help after being blackmailed with a sexualised AI-generated image of himself.

"When I hear about young people experiencing blackmail online, it is a source of intense anger in me and rightful concern amongst families," he said.

Concerning Statistics

A leading online safety organization stated that instances of AI-generated abuse content – such as webpages that may contain numerous files – had more than doubled so far this year.

Cases of the most severe category of content – the most serious form of exploitation – increased from 2,621 visual files to 3,086.

  • Girls were overwhelmingly targeted, accounting for 94% of illegal AI depictions in 2025
  • Depictions of infants to toddlers increased from five in 2024 to 92 in 2025

Sector Reaction

The legislative amendment could "represent a vital step to guarantee AI tools are secure before they are released," stated the chief executive of the online safety organization.

"Artificial intelligence systems have made it so survivors can be targeted repeatedly with just a few simple actions, giving offenders the capability to create potentially limitless quantities of sophisticated, lifelike exploitative content," she continued. "Material which further exploits victims' trauma, and renders children, particularly girls, more vulnerable both on and offline."

Counseling Session Information

The children's helpline also published details of support interactions in which AI was referenced. AI-related risks raised in those conversations include:

  • Using AI to evaluate body size, shape and appearance
  • AI assistants dissuading young people from talking to safe adults about abuse
  • Being bullied online with AI-generated material
  • Online blackmail using AI-faked images

Between April and September this year, the helpline conducted 367 support sessions in which AI, conversational AI and related topics were mentioned, four times as many as in the equivalent period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy applications.

Nathan Stephens