Growing demand for AI-generated abuse revealed – with offenders wrongly thinking it’s victimless
A new study has revealed an increasing trend among paedophiles to create and share child sexual abuse material (CSAM) using artificial intelligence.
Anglia Ruskin University researchers analysed dark web forums and found they are not just platforms for sharing illegal content; they have evolved into educational hubs where offenders exchange knowledge on how to use AI tools for this purpose.
One of the most concerning revelations in the research was the misconception among offenders that AI-generated CSAM is ‘victimless’.
This belief is dangerously misleading: many of these offenders use real images of children as a base, manipulating them to produce increasingly graphic content.
The findings from this study underscore the urgent need for a better understanding of how AI-generated CSAM is being produced and distributed.
Dr. Deanna Davy, one of the researchers, warns this is a “rapidly growing problem” and stresses the importance of developing more effective strategies to combat it.
The study calls for comprehensive research into the impact of AI-generated CSAM on offender behaviour, as well as its broader implications for online safety and child protection.
This development poses a significant challenge for society, one that demands immediate and concerted action to protect children.