UK Tech Firms and Child Safety Officials to Test AI's Ability to Create Abuse Content

Tech firms and child protection agencies will receive permission to evaluate whether AI systems can produce child exploitation material under new UK laws.

Significant Increase in AI-Generated Illegal Material

The declaration coincided with revelations from a safety monitoring body showing that cases of AI-generated CSAM have increased dramatically in the last twelve months, rising from 199 in 2024 to 426 in 2025.

New Legal Framework

Under the amendments, the government will allow approved AI companies and child safety groups to inspect AI systems – the underlying technology for chatbots and visual AI tools – and ensure they have sufficient protective measures to stop them from producing depictions of child exploitation.

"Ultimately about stopping exploitation before it occurs," declared Kanishka Narayan, noting: "Experts, under rigorous conditions, can now identify the risk in AI systems promptly."

Tackling Legal Challenges

The changes have been introduced because producing and possessing CSAM is illegal, meaning that AI developers and others could not previously create such content as part of an evaluation regime. Authorities instead had to wait until AI-generated CSAM was uploaded online before dealing with it.

This law is designed to prevent that problem by helping to stop the creation of such material at source.

Legislative Framework

The changes are being introduced by the government as modifications to the crime and policing bill, which is also implementing a ban on owning, producing or distributing AI systems designed to create child sexual abuse material.

Real-World Consequences

This week, the minister toured the London headquarters of a children's helpline and heard a simulated call to advisors involving an account of AI-based exploitation. The interaction depicted an adolescent requesting help after being blackmailed with an explicit deepfake of himself, created using AI.

"When I hear about children facing blackmail online, it is a cause of intense frustration in me and rightful anger amongst families," he stated.

Alarming Data

A prominent internet monitoring foundation stated that instances of AI-generated exploitation content – such as webpages that may contain multiple images – had significantly increased so far this year.

Cases of category A content – the gravest form of abuse – rose from 2,621 images or videos to 3,086.

  • Female children were predominantly targeted, accounting for 94% of illegal AI images in 2025
  • Depictions of newborns to toddlers increased from five in 2024 to 92 in 2025

Industry Response

The law change could "represent a crucial step to guarantee AI products are safe before they are launched," stated the head of the internet monitoring foundation.

"Artificial intelligence systems have made it possible for victims to be victimised repeatedly with just a few clicks, giving criminals the capability to create potentially limitless amounts of sophisticated, photorealistic child sexual abuse material," she continued. "Content which additionally commodifies survivors' suffering, and renders children, particularly girls, more vulnerable both online and offline."

Counseling Session Information

Childline also published details of counselling sessions where AI has been mentioned. AI-related risks discussed in the conversations include:

  • Employing AI to rate body size and appearance
  • AI assistants discouraging children from consulting trusted adults about abuse
  • Facing harassment online with AI-generated material
  • Digital extortion using AI-manipulated images

Between April and September this year, the helpline conducted 367 counselling interactions in which AI, chatbots and associated topics were discussed, significantly more than in the equivalent timeframe last year.

Half of the references to AI in the 2025 interactions were connected with mental health and wellbeing, including the use of AI assistants for support and AI therapeutic applications.

Stephanie Roberts