
New laws to target AI-generated child sex abuse

The UK is set to become the first country to make AI-generated child sexual abuse images illegal.

Four new laws will specifically target the use of artificial intelligence (AI) in the creation of child sexual abuse material (CSAM).

The problem

These measures are a response to a significant increase in AI-generated child abuse images, with more than 245 confirmed cases reported in 2024 – up from 51 cases in 2023.

Speaking on Sunday, Home Secretary Yvette Cooper said: “What we’re seeing is that AI is now putting online child abuse on steroids.”

The Internet Watch Foundation (IWF) reports that it found more than 3,500 AI images on a single dark web website over a 30-day period last year. Around one in five of these were classed as the most severe kind of abuse, including images of rape and sexual torture.

AI has been used to generate CSAM in many ways, including superimposing the faces of children onto existing abuse images and even using technology to ‘nudify’ real images of children. The exploitation continues when this newly generated content is used to blackmail children into further abuse, including forcing them to engage in live-streamed abuse.

The National Crime Agency (NCA) estimates that 840,000 adults – 1.6% of the adult population – pose a threat to children nationwide, both online and offline. There are around 800 arrests every month relating to threats posed to children online.

Current challenges

AI technology has enabled the creation of highly realistic images that are nearly indistinguishable from real photographs, making it increasingly difficult to identify and rescue actual victims.

Traditional detection systems use ‘digital fingerprint’ hash values to identify and trace existing CSAM images as they are altered and shared. But AI tools that generate entirely new material produce images with no matching fingerprint, so the content can escape detection and be shared and altered almost instantly.
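As a rough illustration of why entirely new material defeats this approach, here is a minimal sketch of fingerprint matching in Python. It is an assumption-laden simplification: the KNOWN_HASHES set is a hypothetical stand-in for a detection database, and SHA-256 is used for brevity, whereas real systems rely on perceptual hashes designed to survive small alterations. The core limitation is the same either way: a newly generated image matches nothing already on record.

import hashlib
from pathlib import Path

# Hypothetical database of fingerprints of known abuse images
# (in practice, supplied by a detection body; empty here).
KNOWN_HASHES: set = set()

def fingerprint(path: Path) -> str:
    """Return the SHA-256 'digital fingerprint' of a file's raw bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def is_known_material(path: Path) -> bool:
    """Flag a file only if its fingerprint matches a known image.

    An entirely new AI-generated image hashes to a value present in
    no database, so this check returns False for it - which is the
    detection gap described above.
    """
    return fingerprint(path) in KNOWN_HASHES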

New proposed legislation

The new legislation represents a significant step in adapting legal frameworks to address the misuse of AI in creating CSAM.

Criminalising the possession and distribution of AI tools and instructional materials for generating such content aims to deter offenders and protect vulnerable groups.

However, continuous evaluation and adaptation of these laws will be essential to keep pace with technological advancements and emerging threats.

Key provisions:

Criminalisation of AI tools for CSAM

The new laws will make it illegal to possess, create, or distribute AI tools designed to generate child sexual abuse images. Offenders could face up to five years in prison.

Ban on instructional manuals

Possessing manuals that provide guidance on using AI for abusive purposes will also be criminalised, with penalties of up to three years in prison. Anyone who runs or moderates websites that share images or advice with other offenders will also be targeted.

Enhanced border inspections

The UK Border Force will be empowered to inspect digital devices of suspected offenders to prevent the importation and distribution of AI-generated CSAM.

Child safety experts and advocacy groups have welcomed the proposals but emphasised the need for further robust regulation to prevent AI misuse. In response to the announcements, Barnardo’s chief executive Lynn Perry said: “We welcome the Government taking action to tackle the increase in AI-produced child sexual abuse imagery which normalises the abuse of children, putting more of them at risk, both on and offline.

“It is vital that legislation keeps up with technological advances to prevent these horrific crimes.

“Tech companies must make sure their platforms are safe for children. They need to take action to introduce stronger safeguards, and Ofcom must ensure that the Online Safety Act is implemented effectively and robustly.”

We work with survivors of image-based abuse, including CSAM, deepfake images, revenge porn, and sextortion. If you have experienced this type of abuse and would like to speak to someone, please do get in touch with a member of our team for free and confidential advice.
