Self-generated imagery

In the 2024 report, we reduced the publication of URL-level data relating to ‘self-generated’ child sexual abuse imagery, as analysis showed that nearly 91% of webpages contained at least one self-generated image or video.

In 2025, we are able to report more detailed findings derived from our image and video datasets, in which ‘self-generated’ indicators are applied and recorded for each individual image and video.

Self-generated imagery breakdown

[Figure: split between self-generated imagery and other imagery]

Whilst 27% of images and videos may appear to be a stark contrast to the 91% of webpages reported in 2024 as containing self-generated imagery, this difference is not unexpected. We frequently observe a high level of image repetition, both where websites migrate hosting to evade detection and where a relatively small set of images is repeatedly reused by known offending networks.

Intelligrade is designed to prevent the ingestion of duplicate imagery (using PDNA matching), meaning that identical images circulating across multiple webpages are not repeatedly recorded within the system. As a result, the lower percentage reflects the prevalence of repeated imagery rather than a reduction in self-generated content. At present, the volume of repeated imagery itself is not captured within our datasets.

Of the 140,276 items of self-generated imagery, the following breakdown shows the split between images and videos.

Self-generated image and video analysis

[Figure: split between self-generated images and self-generated videos]

Self-generated content represents 45% of all videos processed in 2025, as shown in the figure below.

Self-generated video analysis

[Figure: split between self-generated and other video content]

Analyst insight

‘Self-generated’ is a description of how a piece of child sexual abuse material was first created. Generally, the term applies to CSAM where a child has used a device to capture images or videos of themselves. The child is usually physically alone in this situation, with no offender present.

The vast majority of material is taken within a domestic setting, and for many children this is their bedroom. We see thousands of instances of children interacting with a device in their bedroom, livestreaming with a person who has groomed or coerced them into performing sexually. The offender records the interaction (a process known as ‘capping’) and creates multiple CSAM images and videos from it. This imagery is posted online in large volumes. The child may never know they were recorded with this intention.

Other children may take an image of themselves that is later ‘leaked’, or shared beyond their control, breaching the original boundaries of consent. For older children, this imagery is sometimes found among legal adult pornography, or shared online on websites promoting ‘teen’ content. It may also be shared amongst peers on peer-to-peer systems or apps, or be edited using AI tools to create further harm.

From within the four corners of an image alone, we can never know the exact level of consent behind its creation, nor how it first became available online. However, messages within online interactions, or audio of children talking to remote offenders, reveal insights into how a child came to record themselves. Some children are heavily coerced into sexual behaviours through playful, flirtatious attention, games and emojis. Others are encouraged to exchange ‘nude’ imagery in what is disguised as a relationship or a mutual sexual exchange between peers. We have also observed disturbing instances of humiliating sexual extortion, where a child is threatened with exposure if they do not comply with demands for sexual content.

In so many cases, what starts as one sexual interaction between a child and another person can turn into hundreds of online child sexual images, posts, views and shares.