All time hash data

Number of criminal hashes

3,224,085

We use our bespoke grading and hashing software, IntelliGrade, to perform an image-level assessment of images and videos and to create a hash of each one. A hash is a type of digital fingerprint, created using a mathematical algorithm, that identifies an image or video of confirmed child sexual abuse.

Each hash is unique. Once an image has been hashed, we can share the hash with industry and law enforcement, eliminating the need to view the criminal imagery.

Hashes are recognised quickly, which means thousands of criminal pictures can be blocked from ever being uploaded to the internet.
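As an illustration only: the report does not describe IntelliGrade's internals, and image-matching systems typically use perceptual rather than cryptographic hashes, but a cryptographic hash shows the core idea of a digital fingerprint, where identical inputs always produce the same digest and any change produces a different one:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # SHA-256 maps any input to a fixed 64-hex-character digest;
    # even a one-byte change yields a completely different hash.
    return hashlib.sha256(data).hexdigest()

a = fingerprint(b"example image bytes")
b = fingerprint(b"example image bytes!")  # one byte appended

print(a == fingerprint(b"example image bytes"))  # True: same input, same fingerprint
print(a == b)                                    # False: any change alters the hash
```

Because only the digest is shared, partners can recognise known material without ever handling or viewing the imagery itself.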

The IWF began hashing child sexual abuse images in 2015.

In 2021 we launched our Taskforce, a team dedicated to assessing images and videos, which was originally funded by Thorn and then by the Home Office as part of a partnership with the UK Government’s Child Abuse Image Database (CAID).

Every piece of child sexual abuse imagery found by our Analysts is viewed by our team of Assessors. Assessors add a severity assessment and metadata to each image and video, always grading with human eyes and thorough quality assurance. These hashes form our Hash List, which when deployed can help reduce the revictimisation of children that occurs with every view, upload and share of child sexual abuse material.
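A minimal sketch of how a deployed hash list can block re-uploads (the structure and values here are hypothetical, and production systems use dedicated matching infrastructure): an incoming file is hashed and the digest is checked against the list before the upload is accepted.

```python
import hashlib

# Hypothetical hash list: digests of known criminal imagery
# (illustrative placeholder values only).
hash_list = {
    hashlib.sha256(b"known-item-1").hexdigest(),
    hashlib.sha256(b"known-item-2").hexdigest(),
}

def allow_upload(file_bytes: bytes) -> bool:
    # Hash the upload and compare against the list; block on a match.
    return hashlib.sha256(file_bytes).hexdigest() not in hash_list

print(allow_upload(b"known-item-1"))  # False: matches the list, upload blocked
print(allow_upload(b"new material"))  # True: no match, upload proceeds
```

Each view, upload, and share prevented this way is one fewer instance of revictimisation, which is why wide deployment of the list matters.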

Our hash data also serves another purpose thanks to how we exchange information with CAID. In addition to their deployment by Members, our hashes are uploaded into the CAID database where they can inform law enforcement's investigative and safeguarding work.

By the end of 2025, we had accumulated a total of 3,224,085 individual hashes of criminal imagery.

This dataset consisted of 3,049,074 images and 175,011 videos, including 8,988 instances of prohibited imagery.

All 3,224,085 unique hashes are available to Members through the IWF Hash List. In 2025, 54 Members subscribed to the Hash List.

Images by sexual activity metadata - all time

Note: Some older hashes are not included in the above table of sexual activity metadata.  

Severity of hashes - all time

Category A: Images depicting penetrative sexual activity, sexual activity involving an animal, or sadistic conduct.
Category B: Images depicting non-penetrative sexual activity.
Category C: Other indecent images that do not fall within Categories A or B.

Category C includes, for example, images showing partially nude, nude, or topless sexual posing.


Analyst Insights

Image Assessors and Internet Content Analysts frequently see imagery that, without additional intelligence, does not clearly depict a child being abused. It could be a close-up image, a pseudo image, or an obscured image that is hard to distinguish. In some cases, it could depict an older teenager whose age is difficult to assess visually.

CAID’s intelligence provides valuable context that helps our Assessors make accurate, verified grading decisions. It can help us understand the context of large image sets, add clarity to otherwise indeterminate imagery, and connect the dots when it comes to complex imagery depicting multiple children.

The intelligence we gain from the CAID database is also valuable to our proactive work. Child sexual abuse imagery of 16-17-year-olds is widely distributed online, and our analysts have no difficulty finding it; sometimes a single search term or a few keywords locate it quickly. To request its removal from a webpage, we must be certain it depicts a child under the age of 18, and this is where CAID intelligence on ‘identified victims’ is vital to our takedown efforts. One piece of CAID intelligence can lead to an analyst removing hundreds of URLs depicting just one older child victim.

We also extract hashes from CAID, to which our Assessors add our grades and metadata before adding them to our own Hash List. Once deployed, CAID-sourced hashes start working online to protect children and internet users.