UK Technology Companies and Child Protection Officials to Examine AI's Ability to Create Abuse Images
Technology companies and child protection agencies will receive permission to evaluate whether AI tools can generate child exploitation images under new UK laws.
Substantial Rise in AI-Generated Illegal Content
The announcement came as a safety watchdog published findings showing that reports of AI-generated CSAM have increased dramatically in the last twelve months, growing from 199 in 2024 to 426 in 2025.
New Regulatory Structure
Under the changes, the authorities will permit designated AI developers and child safety groups to inspect AI models – the underlying systems for conversational AI and visual AI tools – and ensure they have sufficient safeguards to prevent them from producing depictions of child exploitation.
"This is ultimately about preventing exploitation before it happens," said Kanishka Narayan, adding: "Specialists, under rigorous conditions, can now detect the risk in AI models promptly."
Tackling Regulatory Obstacles
The amendments have been introduced because it is illegal to create or possess CSAM, meaning that AI developers and other parties cannot generate such content even as part of a testing process. Previously, officials had to wait until AI-generated CSAM was published online before dealing with it.
The new law is designed to avert that issue by helping to stop the creation of such material at source.
Legal Structure
The amendments are being introduced by the government as revisions to the crime and policing bill, which is also establishing a prohibition on possessing, producing or sharing AI models developed to create exploitative content.
Real-World Consequences
This week, the official visited the London base of Childline and listened to a mock-up call to counsellors involving a report of AI-based exploitation. The interaction depicted a teenager requesting help after being blackmailed using a sexualised deepfake of themselves, created using AI.
"When I learn about children facing blackmail online, it is a source of extreme frustration for me and of rightful anger amongst parents," he stated.
Concerning Statistics
A leading internet monitoring foundation reported that cases of AI-generated abuse material – such as online pages that may include numerous files – had more than doubled so far this year.
Instances of category A material – the gravest form of abuse – increased from 2,621 visual files to 3,086.
- Girls were predominantly victimized, accounting for 94% of illegal AI images in 2025
- Depictions of infants to two-year-olds rose from five in 2024 to 92 in 2025
Sector Reaction
The legislative amendment could "constitute a vital step to guarantee AI products are safe before they are launched," commented the chief executive of the online safety organization.
"AI tools have made it possible for survivors to be targeted all over again with just a few clicks, giving criminals the ability to create possibly limitless amounts of sophisticated, lifelike child sexual abuse material," she continued. "Content which additionally exploits survivors' suffering, and renders children, particularly female children, less safe both online and offline."
Counseling Session Data
Childline has also published data from counselling sessions in which AI was mentioned. AI-related risks discussed in the sessions include:
- Using AI to evaluate body size and appearance
- AI assistants discouraging children from consulting trusted guardians about abuse
- Being bullied online with AI-generated material
- Digital extortion using AI-faked pictures
Between April and September this year, the helpline delivered 367 support sessions in which AI, chatbots and related terms were mentioned, significantly more than in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for emotional support and AI therapy apps.