AI-generated videos

 


In 2025 we assessed 3,443 AI-generated videos (out of a total of 63,682 videos) that showed photorealistic child sexual abuse. In addition, one video was assessed as prohibited.

In 2025 we saw over 260 times more AI-generated child sexual abuse videos than in 2024, when we saw just 13.

The breakdown below shows the increases we saw across all severity categories applied by our Assessors.

  • Category A: over 550-fold increase

  • Category B: over 120-fold increase

  • Category C: over 180-fold increase

Nearly two-thirds (65%) of AI videos were assessed as Category A.

 

In comparison, criminal assessments of non-AI-generated videos showed that 43% were Category A. This highlights the concern that AI tools are enabling the creation of significantly more severe content, often using real images of victims and survivors, which further increases the revictimisation of children.

AI-generated videos by severity

  • Category A: Videos involving penetrative sexual activity; videos involving sexual activity with an animal; or sadism.
  • Category B: Videos involving non-penetrative sexual activity.
  • Category C: Other indecent videos not falling within categories A or B.

For a deeper look at generative AI and the risks for child sexual abuse, read our AI report here.

IWF Internet Content Analyst
Frontline observations

Some readers may find the following descriptions distressing; please feel free to skip this section.

As the quality of generative AI image tools has increased, we have seen the emergence of photorealistic AI child sexual abuse images. Inevitably, the development of generative AI video tools has brought an alarming rise in the quality of AI child sexual abuse videos.

We first encountered generative AI child sexual abuse videos in early 2024. The first examples we saw were “deepfake” videos: existing adult pornography altered with AI to replace the face of a woman with that of a known child victim. Even at this early stage, the effect was convincing. Shortly after came the next step: a fully synthetic AI sexual abuse video showing a boy and a man in a scenario entirely generated by AI. The video was low quality, more a choppy succession of photos than a proper video, but it was our first warning about the direction of generative AI.

The quality of generative AI videos has improved dramatically in the past two years. We are now seeing photorealistic sexual abuse videos showing known child victims in entirely new scenarios. In many cases the abuse depicted is worse than that seen in the original images and videos of a victim. Creators are not only generating videos; they are also making and sharing models of known victims that allow others to generate their own abuse videos depicting that victim.

“Deepfake” child sexual abuse videos have become effortless to generate. Numerous websites require only a single image of a child to generate a video of them stripping or performing sexual acts. With no safeguards in place, these websites have given anyone the ability to create child sexual abuse videos.

Generative AI is now limited only by the user’s imagination. In the wrong hands, that has terrible consequences.