British Tech Companies and Child Protection Agencies to Test AI's Ability to Create Abuse Content
Technology companies and child protection agencies will be granted permission to test whether AI tools can generate child exploitation images under new UK legislation.
Significant Rise in AI-Generated Harmful Content
The announcement came as a safety monitoring body published findings showing that cases of AI-generated child sexual abuse material (CSAM) have risen sharply in the last twelve months, from 199 in 2024 to 426 in 2025.
New Regulatory Structure
Under the amendments, designated AI developers and child safety groups will be permitted to examine AI models – the underlying technology behind chatbots and image-generation tools – to ensure they have sufficient safeguards to prevent them from creating images of child exploitation.
"Fundamentally about stopping exploitation before it happens," declared the minister for AI and online safety, adding: "Specialists, under strict conditions, can now identify the risk in AI models promptly."
Addressing Regulatory Challenges
The amendments close a legal gap: because it is against the law to create and possess CSAM, AI developers and other parties have been unable to generate such images even as part of a testing process. Previously, authorities could act only after AI-generated CSAM had been published online.
This legislation is designed to prevent that problem by helping to halt the creation of such material at source.
Legislative Framework
The changes are being introduced as amendments to the criminal justice legislation, which also establishes a ban on possessing, producing or sharing AI models designed to generate exploitative content.
Real-World Consequences
This week, the official visited Childline's London base and listened to a mock-up of a call to counsellors involving an account of AI-based exploitation. The call depicted an adolescent seeking help after being blackmailed with a sexually explicit AI-generated image of themselves.
"When I learn about young people experiencing blackmail online, it is a cause of intense anger in me and justified anger amongst parents," he said.
Concerning Data
A leading online safety foundation said that reports of AI-generated abuse material – a single report can cover a web page containing numerous images – had risen significantly so far this year.
- Cases of category A content – the gravest form of exploitation – increased from 2,621 images or videos to 3,086
- Girls were overwhelmingly targeted, making up 94% of prohibited AI images in 2025
- Depictions of newborns to two-year-olds rose from five in 2024 to 92 in 2025
Sector Response
The law change could "constitute a vital step to guarantee AI products are safe before they are launched," said the chief executive of the online safety foundation.
"AI tools have enabled so survivors can be victimised repeatedly with just a few clicks, providing criminals the capability to create possibly endless amounts of sophisticated, lifelike child sexual abuse material," she continued. "Material which additionally exploits victims' suffering, and makes children, particularly female children, less safe on and off line."
Counselling Session Data
Childline also released details of counselling sessions in which AI was mentioned. The AI-related risks raised in those conversations include:
- Using AI to rate body size and appearance
- Chatbots dissuading children from talking to trusted adults about abuse
- Being bullied online with AI-generated material
- Online blackmail using AI-faked pictures
Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and related terms were discussed – four times as many as in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.