AI-generated child sexual abuse material


AI-generated child sexual abuse material represents a significant and evolving risk, as advances in technology make it easier to produce realistic and harmful content at scale. Such imagery can contribute to the ongoing exploitation of children, cause lasting harm to victims, and place increasing pressure on safeguarding, legal and regulatory systems. Imagery can be found on both the clear web and the dark web.

The harm caused by AI-driven sexual imagery of children is compounded by the ways in which this content is created. It often draws on real children’s faces or bodies, either directly within the images or indirectly through the data used to train AI systems. Highly realistic material can be generated by modifying existing child sexual abuse content or by using simple prompts to create new abusive imagery within seconds, enabling rapid and large-scale production.

15,036: AI-generated images and videos assessed to contain child sexual abuse over the past two years.


Over the past two years (2024 and 2025), we have assessed more than 15,000 AI-generated images and videos.

The imagery is often created using real victims of abuse and, just like other forms of child sexual abuse material, the harm can be profound and enduring for the children depicted.

In 2025 we recorded 491 reports that contained realistic AI-generated child sexual abuse imagery, an increase of 154% on 2024, when we saw 193. From those reports, 8,029 criminal images and videos were identified, and an additional 82 were prohibited images. Prohibited images are non-photographic images (including computer-generated images (CGI), cartoons, manga images and drawings). They are assessed under a separate legal framework (Section 62 of the Coroners and Justice Act 2009) from indecent images, which fall under the Protection of Children Act 1978 and Section 160 of the Criminal Justice Act 1988.

In 2025 we saw more than 260 times as many AI-generated child sexual abuse videos (3,443) as in 2024, when we saw just 13.

AI-generated child sexual abuse images and videos over the past two years

The chart above illustrates a substantial increase in the number of AI-generated videos depicting child sexual abuse recorded by our analysts this year. This total is influenced by several factors, including where our analysts look for child sexual abuse material online as part of their proactive work, and what is reported to us by the public. In 2025 there was a significant expansion in the availability of open-source video AI models, which may also have contributed to this rise. As the technology has become more accessible, individuals producing this material may have shifted their focus from generating still images to creating video content. This is not something we record, but it is something our analysts tell us they have seen.

2025 has yet again seen the quality of AI-generated videos improve dramatically, and all assessed AI-generated imagery appeared significantly more realistic than in previous years.

AI-generated images and videos by severity over the past two years

The largest proportion (47%) of the AI-generated images assessed as criminal in the past two years have been Category C images, involving nude, partially nude or topless sexual posing of children. Videos, however, tend to depict more severe Category A sexual abuse, with 65% showing penetrative sexual activity, sexual activity with an animal, or sadism.

AI-generated images and videos are more likely than non-AI material to be assessed as Category A, indicating a disproportionate presence of the most severe content within AI-generated material.

Category A assessments were more common in AI-generated videos (65%) than in non-AI videos (46%). A similar pattern was seen in images, with AI-generated images showing a higher Category A rate (33%) than non-AI images (23%).

For more detailed statistics on criminal AI-generated videos and images, see our video insights and image insights.


Frontline observations

IWF Internet Content Analyst

Some readers may find the following descriptions distressing; please feel free to skip this section.

In 2025, the Merriam-Webster dictionary named "slop" its word of the year, defining it as "digital content of low quality that is produced usually in quantity by means of artificial intelligence." This may be an accurate description of how many people today experience generative AI, and it helps explain why some are tempted to dismiss AI-generated child sexual abuse images as crude and obviously fake.

However, the images and videos seen by the Hotline tell a different story. Many of them are created with significant attention to detail, using state-of-the-art models, powerful hardware and complex workflows that can take months to master. In many cases the resulting images are indistinguishable from photographic child sexual abuse material.

In 2025 we saw AI-generated content take on new forms. We encountered AI companion sites which offered explicit conversations with simulated child characters and the ability to generate criminal images alongside the chats. Alarmingly, members of the public found these services by clicking on ads on mainstream social media platforms, making it apparent how easy it is to access this content.

Another important trend was the sharp increase in the number of realistic AI-generated videos. Multiple open-source video-generation models were released in 2025, instantly becoming popular with bad actors. Where previously we saw convincing deepfakes and face-swaps, videos generated from text prompts and images now took centre stage. In the span of a year, we saw them evolve from simple clips of sexually posed children to movies featuring multiple child and adult characters, different locations and scenes. The more recent examples also featured AI-generated audio, including the voices of children.

We also observed that 'nudifying' bots and apps developed new functions. The technology no longer just 'removes' clothing from a static image but can also create sexual videos involving anyone whose image is publicly available.

Finally, we have been monitoring the emergence of the first agentic AI tools and their potential to affect the spread of child sexual abuse content online. With the rise of "vibe coding" and autonomous AI agents, bad actors may be able to build, update and maintain websites and apps more easily than ever. By lowering technical knowledge barriers and improving the resilience of the networks that spread child sexual abuse material, these tools risk increasing the scale of the problem in 2026 and beyond.


Policy overview

Swift action by legislators and technology companies is needed to stop AI technology from being exploited to create child sexual abuse material. This includes regulatory requirements to ensure AI products are safe by design, a ban on nudification apps and tools, and the closing of legal loopholes so that AI-generated material is treated the same as other forms of child sexual abuse material.

Since the IWF first started monitoring AI in early 2023, we’ve seen a rapid, frightening advancement in the ability to artificially generate child sexual abuse imagery. There is no doubt that such imagery is harmful – it revictimises those whose imagery has been used to create this abuse, enables the creation of more extreme imagery, and can increase the risk of progression to contact abuse for those who view such material.

Action by policymakers and technology companies is needed to stop AI technology from being exploited to create child sexual abuse material.

In the UK, the Government has moved decisively to close legal loopholes related to the use of AI in the creation of child sexual abuse material. These measures target both the tools used to generate such material, including a ban on nudification tools, and the guidance used by offenders to exploit AI for this purpose.

The UK Government has also announced plans to allow designated authorities to test and scrutinise AI models to ensure they cannot be used to generate sexual imagery of children. While this is a welcome step, there is still no legal requirement for companies to conduct or share pre-deployment safety testing of AI systems. We continue to call on companies to make sure the products they build and make available to the global public are safe by design.

At EU level, new legislation has the potential to play a decisive role in tackling the use of AI to create child sexual abuse material, but only if it is implemented and enforced with child protection as a clear priority. The AI Act introduces EU-wide rules intended to ensure AI systems are safe and trustworthy, yet there is currently no guarantee that risks related to child sexual abuse material, including AI-generated child sexual abuse material, will be treated with the urgency and scrutiny they require.

There is a lack of consistency across the EU with regard to legal definitions of child sexual abuse material, and AI-generated child sexual abuse imagery is not illegal in all Member States. The Child Sexual Abuse (CSA) recast Directive, if passed, would criminalise the production and dissemination of AI-generated sexual abuse material, as well as textual and instructional materials intended to facilitate or encourage child sexual abuse.