AI‑generated images depicting child sexual abuse can vary in appearance. Some are highly realistic and resemble genuine photographs, while others are non‑photographic images (NPI), such as computer‑generated or illustrated content.
Under UK law, when AI‑generated child sexual abuse images are realistic enough that they cannot be distinguished from real photographs, they are treated the same as actual photographs. They are included on the IWF URL List and IWF Hash List, and action is taken to have the material removed.
If the images are non-photographic in nature, for example cartoon-style or digitally generated imagery, they are classified as prohibited content and added to the IWF NPI List.
In all instances, this material is illegal under UK law.
Whilst AI-generated images accounted for just 1% of the 449,298 sexual abuse images we took action on this year, their impact can be just as harmful. Images of real children, real victims of child sexual abuse, are often used to train the AI models, and we frequently see new imagery of well-known victims being created using this technology. Just like other forms of child sexual abuse, the harm can still be profound and enduring for the children depicted.
64% of the AI-generated images identified as criminal were classified as Category C, depicting nude or partially nude sexually posed children, compared with 50% of the non-AI-generated criminal images assessed.
The AI-generated images we have seen also contained a higher proportion of the most severe material (Category A): 23%, compared with 15% of non-AI-generated images.
Of the 4,586 criminal AI-generated images, 180 were grid images: single images composed of multiple sub-images, most commonly arranged in a grid format (hence “grids”), although other layouts also occur. For these items we record only the severity level, so they do not appear in the charts below.
In addition, we assessed 81 AI-generated images as prohibited.
Girls continued to be the most frequently depicted sex in AI-generated imagery, accounting for 97% of cases this year, a slight decrease from 98% in 2024.
Children aged 7–10 continue to be the most commonly depicted in AI-generated child sexual abuse images, with 44% falling into this age bracket. However, the biggest growth can be seen in the 3–6 age group, which has risen 31% since last year.
Whilst children aged 7–10 were most frequently depicted in the AI-generated imagery seen by the IWF during 2024 and 2025, it is important to note that some children may appear repeatedly within these datasets, so the figures do not necessarily represent unique individuals. We also observe instances in which creators generate large sets of images depicting the same child.
One possible explanation for this age distribution is that certain AI image generators may more easily produce images of younger children. The physical characteristics associated with early puberty (typically ages 11–13, the most commonly depicted age group in non-AI imagery) may be more difficult for these systems to replicate convincingly.
However, our analysts continue to observe that AI-generated sexual abuse imagery is being created and requested across all age groups.
For a deeper look at generative AI and the risks for child sexual abuse, read our AI report here.
Some readers may find the following descriptions distressing; please feel free to skip this section.
At the Hotline, we’re sometimes asked how analysts can distinguish an AI-generated image from a photograph. Analysts have often pointed to a particular overly polished look associated with synthetic imagery. In 2025, this no longer held true, as image styles became increasingly diverse. While many AI-generated images still look glossy, airbrushed and exceedingly colourful, we’re also seeing images that look completely ordinary. They can be poorly lit, grainy and imperfect, resembling many of the real photographs of children we see daily.
Improved quality and realism are only part of the picture: as AI models become more sophisticated, users are also able to generate more complex images. For example, it’s possible to instruct a model to piece together parts of multiple images to create a completely new, realistic scene. The resulting outputs are highly detailed and can depict complex interactions between multiple child and adult characters.
With AI tools becoming more accessible and more powerful, we are worried about the threat of AI-generated images being used in sexual extortion schemes. Young people around the world are victimised by extortion gangs, and perpetrators who understand the potential of generative AI may exploit this new technology in their extortion of children. Widely available nudification tools allow bad actors to create a compromising image of anyone with minimal effort.
Another important development is the commercialisation of AI-generated child sexual abuse material and exploitative imagery online. For example, we identified subscription-based platforms that offered users access to multiple models of children, so they could generate content of their own choosing. Some of these models were clearly based on the likenesses of known child sexual abuse victims, leading to further re-victimisation.
This year, the Hotline saw the line between AI-generated and traditional child sexual abuse material blur, and we continue to adapt to this new and challenging landscape.
Swift action by legislators and technology companies is needed to stop AI technology from being exploited to create child sexual abuse material. This includes regulatory requirements to ensure AI products are safe by design, banning nudification apps and tools, and closing legal loopholes to ensure AI-generated material is treated the same as other forms of CSAM.
Since the IWF first started monitoring AI in early 2023, we’ve seen a rapid, frightening advancement in the ability to artificially generate child sexual abuse imagery. There is no doubt that such imagery is harmful – it revictimises those whose imagery has been used to create this abuse, enables the creation of more extreme imagery, and can increase the risk of progression to contact abuse for those who view such material.
Action by policymakers and technology companies is needed to stop AI technology from being exploited to create child sexual abuse material.
In the UK, the Government has moved decisively to close legal loopholes related to the use of AI in the creation of child sexual abuse material. These measures target both the tools used to generate such material, including a ban on nudification tools, and the guidance used by offenders to exploit AI for this purpose.
The UK Government has also announced plans to allow designated authorities to test and scrutinise AI models to ensure they cannot be used to generate sexual imagery of children. While this is a welcome step, there is currently no legal requirement for companies to conduct or share pre-deployment safety testing of AI systems. We continue to call on companies to make sure the products they build and make available to the global public are safe by design.
At EU level, new legislation has the potential to play a decisive role in tackling the use of AI to create child sexual abuse material, but only if it is implemented and enforced with child protection as a clear priority. The AI Act introduces EU-wide rules intended to ensure AI systems are safe and trustworthy, yet there is currently no guarantee that risks related to child sexual abuse material, including AI-generated child sexual abuse material, will be treated with the urgency and scrutiny they require.
There is a lack of consistency across the EU with regard to legal definitions of child sexual abuse material, and AI-generated child sexual abuse imagery is not illegal in all Member States. The Child Sexual Abuse (CSA) recast Directive, if passed, would criminalise the production and dissemination of AI-generated sexual abuse material, as well as textual and instructional materials intended to facilitate or encourage child sexual abuse.