British Tech Companies and Child Safety Agencies to Examine AI's Capability to Create Exploitation Content

Tech firms and child safety agencies will be granted permission to assess whether artificial intelligence systems can generate child abuse images under recently introduced UK laws.

Substantial Rise in AI-Generated Harmful Content

The announcement coincided with figures from a child protection watchdog showing that cases of AI-generated child sexual abuse material have risen dramatically in the last twelve months, from 199 in 2024 to 426 in 2025.

New Legal Structure

Under the amendments, the government will allow designated AI companies and child safety groups to inspect AI models – the underlying systems for chatbots and visual AI tools – and ensure they have adequate safeguards to stop them from creating depictions of child exploitation.

The measures are "ultimately about stopping abuse before it happens," declared Kanishka Narayan, noting: "Experts, under rigorous protocols, can now detect the danger in AI models early."

Tackling Regulatory Challenges

The changes have been introduced because it is against the law to create and possess CSAM, meaning that AI developers and other parties cannot generate such content even as part of an evaluation process. Previously, officials had to wait until AI-generated CSAM was published online before acting on it.

This law is designed to prevent that problem by enabling testers to halt the production of such images at source.

Legislative Vehicle

The changes are being introduced by the authorities as modifications to the criminal justice legislation, which is also establishing a ban on possessing, creating or distributing AI systems designed to create exploitative content.

Practical Consequences

This week, the minister toured the London headquarters of a children's helpline and listened to a simulated call to counsellors involving a report of AI-based exploitation. The interaction depicted an adolescent seeking help after being extorted with a sexualised AI-generated image of themselves.

"When I hear about young people facing blackmail online, it is a source of extreme frustration to me and of rightful anger amongst families," he stated.

Concerning Data

A prominent internet monitoring organization reported that instances of AI-generated abuse material – such as online pages that may include multiple images – had more than doubled so far this year.

Instances of category A material – the gravest form of abuse – increased from 2,621 images or videos to 3,086.

  • Female children were overwhelmingly targeted, accounting for 94% of illegal AI depictions in 2025
  • Depictions of newborns to two-year-olds rose from five in 2024 to 92 in 2025

Sector Response

The legislative amendment could "represent a vital step to guarantee AI products are safe before they are released," commented the head of the online safety foundation.

"AI tools have made it possible for survivors to be targeted repeatedly with just a few clicks, giving criminals the capability to create a potentially endless supply of sophisticated, lifelike child sexual abuse material," she continued. "Material which further commodifies victims' trauma, and renders young people, particularly girls, more vulnerable on and off line."

Support Interaction Data

Childline also published data on support interactions in which AI was mentioned. AI-related risks raised in the conversations include:

  • Employing AI to rate weight, body and appearance
  • Chatbots discouraging children from consulting trusted adults about harm
  • Being bullied online with AI-generated content
  • Online extortion using AI-faked images

Between April and September this year, Childline conducted 367 support sessions where AI, chatbots and associated terms were discussed, four times as many as in the same period last year.

Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy apps.

Diana Richards