Tech firms and child protection agencies will receive permission to evaluate whether AI systems can produce child exploitation material under new UK laws.
The declaration coincided with revelations from a safety monitoring body showing that cases of AI-generated CSAM have increased dramatically in the last twelve months, rising from 199 in 2024 to 426 in 2025.
Under the amendments, the government will allow approved AI companies and child safety groups to inspect AI systems – the underlying technology for chatbots and visual AI tools – and ensure they have sufficient protective measures to stop them from producing depictions of child exploitation.
The change is "ultimately about stopping exploitation before it occurs," declared Kanishka Narayan, noting: "Experts, under rigorous conditions, can now identify the risk in AI systems promptly."
The changes were needed because it is illegal to produce and possess CSAM, meaning that AI creators and others could not generate such content as part of an evaluation regime. Previously, authorities had to wait until AI-generated CSAM was uploaded online before dealing with it.
The law is designed to prevent that problem by helping to stop the creation of those materials at source.
The changes are being introduced by the government as modifications to the crime and policing bill, which is also implementing a ban on owning, producing or distributing AI systems designed to create child sexual abuse material.
This week, the minister toured the London headquarters of a children's helpline and heard a simulated call to advisors involving an account of AI-based exploitation. The interaction depicted an adolescent requesting help after being blackmailed using an explicit deepfake of himself, created using AI.
"When I hear about children facing blackmail online, it is a cause of intense frustration in me and rightful anger amongst families," he stated.
A prominent internet monitoring foundation stated that instances of AI-generated exploitation content – such as webpages that may contain multiple images – had significantly increased so far this year.
Cases of category A content – the gravest form of abuse – rose from 2,621 images or videos to 3,086.
The law change could "represent a crucial step to guarantee AI products are safe before they are launched," stated the head of the internet monitoring foundation.
"Artificial intelligence systems mean victims can be victimised repeatedly with just a few clicks, giving criminals the capability to create a potentially limitless amount of sophisticated, photorealistic child sexual abuse material," she continued. "Content which additionally commodifies survivors' suffering, and renders children, particularly girls, more vulnerable both online and offline."
Childline also published details of counselling sessions where AI has been mentioned, including the AI-related risks raised in those conversations.
Between April and September this year, the helpline conducted 367 counselling interactions where AI, chatbots and associated topics were discussed, significantly more than in the equivalent timeframe last year.
Half of the mentions of AI in the 2025 interactions related to mental health and wellness, including using AI assistants for support and AI therapeutic applications.