"Self-generated" images and videos include those where a child or children are seen alone, with no perpetrator physically present with them at the time the imagery was captured. These children are groomed, deceived or extorted into producing and sharing a sexual image or video of themselves by someone who is not physically present in the room with the child. Sometimes children are completely unaware they are being recorded and that there is then a image or video of them being shared by abusers.
We regard "self-generated" child sexual abuse as an inadequate and potentially misleading term: it does not encompass the full range of factors often present within this imagery, and it appears to place the blame with the victims themselves. Children are not responsible for their own sexual abuse. Until a better term is found, however, we will continue to use "self-generated" because it is well recognised within the online safety and law enforcement sectors.
In 2025, we are able to report more detailed findings derived from our image and video datasets. A "self-generated" tag is applied and recorded against each individual image and video in which a child or children appear to be physically alone at the time the content was captured, regardless of how it was created, obtained or shared.
In 2024, we reported that 91% of URLs examined contained child sexual abuse content. In 2025, now that we can provide more detailed findings directly from the images and videos we assess, a stark contrast has emerged. This difference is not unexpected: we frequently observe a high level of image repetition, both where websites migrate hosting to evade detection and where a relatively small set of images is repeatedly reused by known offending networks.
IntelliGrade is designed to prevent the ingestion of duplicate imagery (using PDNA matching), meaning that identical images circulating across multiple webpages are not repeatedly recorded within the system. As a result, the lower percentage may reflect the prevalence of repeated imagery rather than a reduction in "self-generated" content. At present, the volume of repeated imagery itself is not captured within our datasets.
Of the 140,276 items of "self-generated" imagery, 28,666 (20%) were videos and 111,610 (80%) were images.
Of all the images and videos marked as "self-generated", a high proportion (77%) are either videos or 'grid' images: collages made up of multiple images. For videos and grid images we record severity only. This contributes to greater efficiency and helps to protect the wellbeing of our image assessors without compromising the accuracy of our work.
More than half (53%) of this imagery featured Category C sexual activity, which includes, for example, images and videos showing partially nude, nude, or topless sexual posing.
UK-based children who have been coerced into appearing in these types of images or videos can seek help anonymously through the Report Remove service, where they can report their own imagery to us and receive support from Childline.
Some readers may find the following descriptions distressing; please feel free to skip this section.
‘Self-generated’ is a description of how a piece of child sexual abuse material was first created. Generally, the term applies to child sexual abuse imagery where a child has used a device to record themselves. The child or children are physically alone in this situation, with no offender present.
The vast majority of this material is taken within a domestic setting and, for many children, this is their bedroom. We see thousands of instances of children interacting with a device in their bedroom, livestreaming with a person who has groomed or coerced them into performing sexually. The offender records the interaction – a process known as ‘capping’ – and creates multiple images and videos from it. This imagery is posted online in large volumes. The child may never know they were recorded for this purpose.
Other children may take an image of themselves that is later ‘leaked’, or shared beyond their control, breaching the original boundaries of consent. For older children, this imagery is sometimes found among legal adult pornography, or shared online on websites promoting ‘teen’ content. It could also be shared amongst peers on ‘peer-to-peer’ systems or apps, or be edited using AI tools to create further harm.
From the four corners of an image alone, we can never know the exact level of consent behind its creation, nor how it first became available online. However, messages within online interactions, or audio of children talking to remote online offenders, reveal insights into how a child came to record themselves. Some children are heavily coerced into sexual behaviours through playful, flirtatious attention, games and emojis. Others are encouraged to exchange ‘nude’ imagery in what is disguised as a relationship or a mutual sexual exchange between peers. We have also observed disturbing instances of humiliating sexual extortion, where a child is threatened with exposure if they do not comply with demands for sexual content.
In so many cases, what starts as one sexual interaction between a child and another person can turn into hundreds of online child sexual abuse images, posts, views and shares.