AI-generated videos

In 2025 we assessed 3,443 AI-generated videos that showed realistic child sexual abuse. In addition, one video was assessed as prohibited.

In 2025 we saw over 260 times more AI-generated child sexual abuse videos than in 2024, when we saw just 13.

The breakdown below shows increases across all severity categories.

  • Category A: over 550-fold increase

  • Category B: over 120-fold increase

  • Category C: over 180-fold increase

Nearly two-thirds (65%) of AI videos were assessed as Category A.

In comparison, 43% of the criminal content we assessed that did not involve AI was Category A. This highlights that AI tools are enabling the creation of significantly more severe content at a higher rate, often using real images of victims and survivors to generate this material, which further revictimises children.

AI-generated videos by severity

  • Category A: Videos involving penetrative sexual activity; videos involving sexual activity with an animal; or sadism.
  • Category B: Videos involving non-penetrative sexual activity.
  • Category C: Other indecent videos not falling within categories A or B.

Analyst insight

As the quality of generative AI image tools has increased, we have seen the emergence of photorealistic AI child sexual abuse images. Inevitably, the development of generative AI video tools has brought an alarming increase in the quality of AI child sexual abuse videos.

We first encountered generative AI CSAM videos in early 2024. The first examples we saw were “deepfake” videos – existing adult pornography altered with AI to replace an adult woman’s face with that of a known child victim. Even at this early stage, the effect was convincing. Shortly afterwards came the next step – a fully synthetic AI CSAM video showing a boy and a man in a scenario generated entirely by AI. The video was low quality, more a choppy succession of photos than a true video, but it was our first warning about the direction of generative AI.

The quality of generative AI video has improved dramatically in the last two years. We are now seeing photorealistic CSAM videos showing known victims in entirely new scenarios. In many cases the abuse depicted is more severe than that seen in the original images and videos of a victim. And creators are not only generating videos; they are also making and sharing AI models of known victims that allow others to generate their own CSAM videos of that victim.

“Deepfake” CSAM videos have become effortless to generate. Numerous websites exist that require only a single image of a child to generate a video of them stripping or performing sexual acts. With no safeguards in place, these websites have given anyone the ability to create CSAM.

Generative AI is now limited only by the user’s imagination. In the wrong hands, that has terrible consequences.