AI-generated images

AI-generated images depicting child sexual abuse may appear realistic, resembling genuine photographs, or they may take the form of non-photographic images (NPI), such as computer-generated or illustrated content.

When these images resemble real child sexual abuse material, UK law treats them in the same way as actual photographs. They are included on the IWF URL List and IWF Hash List, and action is taken to have the material removed.

If the images are non-photographic in nature, for example cartoon-style or digitally generated imagery, they are classified as prohibited content and added to the IWF NPI List. 

In all instances, this material is illegal under UK law.

In 2025 we assessed 4,586 AI-generated images as showing realistic child sexual abuse.

AI-generated images by severity over the past two years

64% of the AI-generated images found to be criminal were Category C, depicting nude or partially nude, sexually posed children.

Of the 4,586 criminal images, 180 were grid images: single images made up of multiple sub-images. Because they are commonly arranged in a grid pattern they are often referred to as ‘grids’, although they can adopt other layouts. For these images we record only severity, so they do not feature in the charts below.

In addition, we assessed 81 images as prohibited.

AI-generated images by sex over the past two years

Girls continued to be the children most frequently depicted in AI-generated imagery, accounting for 97% of cases this year, a slight decrease from 99% in 2024.

AI-generated images by age over the past two years

7-10-year-olds continue to be the most common age group seen in AI-generated child sexual abuse images, with 44% falling into this age bracket. However, the biggest growth can be seen in the 3-6-year-old age group, which has risen by 31% since last year.

Analyst Insight

At the hotline we’re sometimes asked how analysts can distinguish an AI-generated image from a photograph. Analysts have often pointed to a particular overly polished look associated with synthetic imagery. In 2025 we found this no longer applied, as image styles became increasingly diverse. While many AI-generated images still look glossy, airbrushed and exceedingly colourful, we’re also seeing images which look completely ordinary. They can be poorly lit, grainy and imperfect, resembling many of the real photographs of children we see daily.

Improved quality and realism are only one part of the picture: as AI models become more sophisticated, users are also able to generate more complex images. For example, it’s possible to instruct a model to piece together parts of multiple images to create a completely new, realistic scene. The resulting outputs are highly detailed and can depict complex interactions between multiple child and adult characters.

With AI tools becoming more accessible and more powerful, we are worried about the threat of AI-generated images being used in sexual extortion schemes. Young people around the world are victimised by extortion gangs, and if perpetrators understand the potential of generative AI, they may take advantage of this new technology in their extortion of children. Widely available nudifying tools make it easy for bad actors to create a compromising image of anyone with minimal effort.

Another important development is the commercialisation of AI-generated CSAM and exploitative imagery online. For example, we came across subscription-based platforms that offered users access to multiple models of children, so they could generate content of their own choosing. Some of these models were clearly based on the likeness of known child sexual abuse victims, leading to further revictimisation.

This year the hotline saw the line between AI-generated and traditional CSAM blur, and we continue to adapt to this new and challenging landscape.