This year in focus

IWF datasets

This year, our Annual Report presents our datasets in more clearly defined and distinct areas, reflecting the different ways data informs and supports our operational work. These datasets consist of reports, URLs, and hashes, each serving a specific purpose within our response to online child sexual abuse imagery.

Reports

Reports account for a significant proportion of our work and are the primary mechanism through which content is brought to our attention. Most reports relate to URLs suspected of hosting child sexual abuse imagery. Reports may be submitted directly by members of the public or generated internally to record our proactive work. Since the launch of our child reporting tool, reports submitted by children and young people are also received and processed in this format.

URLs

URLs (or webpages) provide critical intelligence about where child sexual abuse imagery is being hosted online. Analysis of URL data allows us to identify hosting patterns, the types of websites involved, and repeat offenders. This intelligence is particularly valuable for identifying sites that deliberately generate revenue from hosting this type of content.

Hashes

Our hash dataset contains detailed information about individual images and videos, enabling analysis at a granular level beyond the URL. Hash data includes attributes such as the number of children depicted, estimated age and sex, the type of activity shown, and trend insights, including the identification of self-generated and artificially generated material. This approach allows for more precise trend analysis than URL-level data, where a single webpage may contain hundreds or thousands of images, creating greater uncertainty.

Within the hash dataset, images and videos are treated differently due to their complexity and welfare impact. For images, we record the full range of available data. For videos, which present a higher welfare risk to analysts, we record severity grading alongside trend indicators relating to self-generated and AI-generated content.
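
To make the distinction between image and video records concrete, the sketch below shows one way such hash records might be structured. This is a minimal illustration only: the field names, types, and Severity grades are assumptions drawn from the descriptions above, not the IWF's actual schema.

    from dataclasses import dataclass
    from enum import Enum


    class Severity(Enum):
        """Severity grades mirroring the Sentencing Council categories."""
        A = "A"  # penetrative sexual activity, activity with an animal, or sadism
        B = "B"  # non-penetrative sexual activity
        C = "C"  # other indecent images


    @dataclass
    class ImageHashRecord:
        """Full attribute set recorded for still images (illustrative)."""
        hash_value: str            # e.g. a perceptual or cryptographic hash
        severity: Severity
        child_count: int
        estimated_ages: list[str]  # e.g. estimated age ranges per child depicted
        sexes: list[str]
        activity_type: str
        self_generated: bool
        ai_generated: bool


    @dataclass
    class VideoHashRecord:
        """Reduced attribute set for videos, limiting analyst exposure."""
        hash_value: str
        severity: Severity
        self_generated: bool
        ai_generated: bool

Keeping videos to a reduced record type reflects the welfare trade-off described above: severity and trend indicators are still captured, without requiring the fuller per-child review applied to still images.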

IWF assessment and classification

Our work combines action taken on both public and proactive reports of content hosted online, alongside the hashing and grading of imagery. We also grade images as part of our partnership with the UK Government’s Child Abuse Image Database (CAID), a secure national repository containing images and videos of child sexual abuse material collected by UK police forces and the National Crime Agency. This database plays a critical role in supporting investigations and securing convictions against offenders who create, access, or distribute this material.

In 2025, we assessed over 600,000 images and videos from within CAID to determine whether they met the threshold for criminal classification. A significant proportion of these assessments resulted in non-criminal gradings. In many cases, the imagery did not meet the criminal threshold under UK law, or it was not possible to determine with complete confidence that a child was depicted.

We assess child sexual abuse material according to the levels detailed in the Sentencing Council's Sexual Offences Definitive Guideline. The Indecent Photographs of Children section (page 34) outlines the different categories of child sexual abuse material.

  • Category A: Images involving penetrative sexual activity; images involving sexual activity with an animal; or sadism.
  • Category B: Images involving non-penetrative sexual activity.
  • Category C: Other indecent images not falling within Categories A or B.

Dataset development and welfare considerations

We continue to seek opportunities to expand our datasets and share insights with industry and partners. However, this must be carefully balanced against the welfare of our analysts and assessors. The work we carried out on CAID, together with our proactive work, prompted further internal discussion around how we respond to imagery that does not explicitly depict child sexual abuse but where there is reason to believe a child may have been exploited. As a result, we introduced an ‘exploitative’ category to better reflect and respond to this type of content.

Definition of exploitative content

Exploitative content refers to any material, particularly images, that depicts or implies the sexualisation or victimisation of a minor, even where the content itself may fall short of legal thresholds for criminality.

This includes, but is not limited to:

Borderline criminal sexualised depictions of a child
Content that may not be illegal on its own but is clearly sexualised in nature or intent and raises concern, even though it does not meet the criminal threshold under UK law for child sexual abuse.

Images linked to known exploitation
Lawful images of a child that, when viewed in context, are linked to known or suspected exploitation of the same child, identified through how they are shared, contextual information, metadata, distribution patterns, or known victim identification.

Images believed to depict a child but without confirmed identification
Content where there is high confidence that a child is shown, but where this cannot be confirmed with complete certainty in the absence of independent verification.

We classify this content as exploitative because it contributes to, or risks contributing to, the sexual exploitation or continued harm of a child, even where a single image cannot be graded as illegal.

In October 2025, we began the internal process of grading exploitative images, which included reassessing a number of images that had previously been assessed as not illegal.

In just over two months, our team has assessed more than 72,000 images as belonging to the exploitative category.

Exploitative breakdown

  • Borderline indecent
  • Known or confirmed victims
  • Age in question
  • Borderline indecent & age in question
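
As an illustration of how these indicators might be recorded, the sketch below models them as combinable flags, since the fourth category above is a combination of the first and third. The names are hypothetical assumptions and do not reflect the IWF's internal tooling.

    from enum import Flag, auto


    class ExploitativeIndicator(Flag):
        """Illustrative flags for the exploitative breakdown categories."""
        BORDERLINE_INDECENT = auto()
        KNOWN_OR_CONFIRMED_VICTIM = auto()
        AGE_IN_QUESTION = auto()


    # An image graded as both borderline indecent and age in question
    # carries two indicators at once:
    grading = ExploitativeIndicator.BORDERLINE_INDECENT | ExploitativeIndicator.AGE_IN_QUESTION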

While this category was initially developed for internal use, we intend to expand its application and share this intelligence with industry and partners through our membership services. This approach aims to strengthen protections for victims and support earlier intervention where harm may not yet meet criminal thresholds.

Analyst insight
