British Tech Firms and Child Protection Officials to Examine AI's Capability to Create Abuse Content
Tech firms and child protection organizations will be granted permission to evaluate whether artificial intelligence tools can produce child exploitation material under recently introduced British laws.
Significant Increase in AI-Generated Harmful Material
The declaration coincided with findings from a safety watchdog showing that reports of AI-generated CSAM have increased dramatically in the last twelve months, rising from 199 in 2024 to 426 in 2025.
Updated Legal Framework
Under the changes, the government will permit designated AI companies and child safety groups to inspect AI models (the foundational technology behind chatbots and visual AI tools) and verify that they have adequate protective measures to prevent them from creating images of child exploitation.
"This is fundamentally about preventing exploitation before it occurs," declared Kanishka Narayan, adding: "Experts, under strict protocols, can now detect risks in AI systems early."
Tackling Legal Challenges
The amendments were introduced because producing and possessing CSAM is against the law, meaning that AI developers and other parties could not generate such content even as part of an evaluation process. Previously, officials had to wait until AI-generated CSAM was published online before addressing it.
This legislation is designed to prevent that problem by enabling officials to stop the creation of those images at their origin.
Legislative Structure
The authorities are introducing the amendments to the criminal justice legislation, which also implements a ban on owning, producing or distributing AI systems developed to generate child sexual abuse material.
Real-World Consequences
This week, the official visited the London base of Childline and listened to a mock-up of a call to counsellors featuring a report of AI-based abuse. The call depicted a teenager seeking help after being blackmailed with an explicit AI-generated image of himself.
"When I hear about young people experiencing extortion online, it causes intense anger in me and justified anger amongst families," he said.
Alarming Data
A prominent internet monitoring foundation reported that instances of AI-generated exploitation content (such as webpages that may contain numerous files) had increased significantly so far this year.
Instances of the most severe material, the gravest form of abuse, rose from 2,621 visual files to 3,086.
- Girls were predominantly victimized, accounting for 94% of prohibited AI depictions in 2025
- Depictions of children aged newborn to two years increased from five in 2024 to 92 in 2025
Sector Response
The legislative amendment could "constitute a crucial step to ensure AI products are secure before they are released," commented the head of the internet monitoring foundation.
"AI tools have made it possible for survivors to be victimised all over again with just a few simple actions, giving criminals the capability to create potentially endless amounts of sophisticated, photorealistic child sexual abuse material," she continued. "Material which further commodifies victims' suffering, and makes young people, especially girls, less safe both online and offline."
Support Session Information
The children's helpline also published details of counselling interactions where AI has been referenced. The AI-related risks mentioned in the conversations include:
- Using AI to evaluate body size and appearance
- Chatbots dissuading children from consulting trusted adults about abuse
- Facing online harassment involving AI-generated material
- Digital blackmail using AI-faked pictures
Between April and September this year, Childline conducted 367 counselling sessions where AI, conversational AI and related topics were discussed, four times as many as in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy applications.