UK Technology Firms and Child Protection Officials to Test AI's Capability to Create Abuse Content
Tech firms and child safety organizations will be granted permission to assess whether artificial intelligence systems can generate child abuse images under recently introduced British legislation.
Significant Increase in AI-Generated Illegal Content
The announcement coincided with revelations from a safety watchdog showing that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
Updated Legal Structure
Under the changes, the government will permit approved AI developers and child protection groups to inspect AI models – the underlying technology for chatbots and image generators – and verify they have sufficient protective measures to prevent them from creating images of child exploitation.
"This is fundamentally about stopping exploitation before it happens," stated Kanishka Narayan, adding: "Experts, under rigorous protocols, can now identify the danger in AI systems early."
Addressing Legal Challenges
The changes have been introduced because it is illegal to produce and possess CSAM, meaning that AI developers and other parties cannot generate such images as part of a testing regime. Previously, officials had to wait until AI-generated CSAM was published online before dealing with it.
This law aims to prevent that problem by enabling authorised testers to stop the production of such images at source.
Legislative Framework
The changes are being introduced by the government as modifications to the crime and policing bill, which is also implementing a prohibition on owning, producing or sharing AI models developed to create child sexual abuse material.
Practical Impact
Recently, the minister toured the London base of Childline and listened to a simulated call to advisers featuring a report of AI-based abuse. The interaction portrayed a teenager seeking help after being blackmailed with a sexualised AI-generated image of themselves.
"When I hear about young people facing blackmail online, it is a source of intense frustration for me and rightful concern among families," he said.
Concerning Data
A leading online safety organization reported that cases of AI-generated abuse content – such as online pages that may include numerous images – had more than doubled so far this year.
Cases of category A content – the most serious form of abuse – rose from 2,621 images or videos to 3,086.
- Female children were overwhelmingly victimized, making up 94% of illegal AI images in 2025
- Portrayals of infants to toddlers rose from five in 2024 to 92 in 2025
Industry Response
The legislative amendment could "represent a crucial step to guarantee AI products are secure before they are launched," stated the head of the internet monitoring foundation.
"AI tools have made it so victims can be victimised all over again with just a few simple actions, giving criminals the ability to make potentially limitless quantities of sophisticated, lifelike child sexual abuse material," she added. "Material which further commodifies victims' trauma, and renders young people, particularly girls, more vulnerable both online and offline."
Support Interaction Data
The children's helpline also published data on counselling sessions in which AI was mentioned. AI-related harms discussed in the conversations include:
- Using AI to assess weight, body shape and appearance
- Chatbots discouraging children from talking to trusted adults about abuse
- Being bullied online with AI-generated material
- Digital extortion using AI-faked pictures
Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and related topics were mentioned, four times as many as in the same period last year.
Half of the AI mentions in the 2025 sessions related to mental health and wellbeing, including using AI chatbots for support and turning to AI therapy apps.