All-time hash data



Number of criminal hashes

3,224,085

We use our bespoke grading and hashing software, IntelliGrade, to assess images and videos. When an image or video is assessed as child sexual abuse, a hash is created for it. A hash is a digital fingerprint created from a file, such as an image or video, using a mathematical algorithm. When a file is "hashed", the algorithm converts its data into a fixed-length string of letters and numbers, which enables the file to be identified without the imagery needing to be viewed. Each hash is effectively unique to its file.
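As a rough illustration of the principle, the sketch below computes a SHA-256 hash of a file in Python. The file name is hypothetical, and SHA-256 stands in for whichever algorithms a hashing system actually uses; in practice, perceptual hashes, which tolerate minor alterations such as resizing, are often used alongside cryptographic hashes like this one, which change completely if a single byte of the file changes.

```python
import hashlib

def hash_file(path: str, chunk_size: int = 65536) -> str:
    """Return the SHA-256 hex digest of a file.

    The digest is a fixed-length string derived from the file's bytes,
    so the file can later be recognised by its hash alone, without
    anyone needing to view its content.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large video files need not fit in memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(hash_file("example.jpg"))  # hypothetical file; prints a 64-character hex string
```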

We can then share these hashes with industry and law enforcement, allowing them to automatically detect and block previously identified child sexual abuse imagery by matching uploads against our database of known hash values, without needing to view, store or re-share the original content. This also helps to prevent further revictimisation of the children in these images, as it limits the need for the imagery to be viewed, even by those working to remove it.

Checking images or videos against a hash list is a rapid process, quickly preventing thousands of known criminal images from being uploaded to the internet in the first place.
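As a minimal sketch of that matching step, assuming a hypothetical hash list file with one hex digest per line: once the list is loaded into a set, every upload can be checked in effectively constant time before it is accepted.

```python
import hashlib

def file_digest(path: str) -> str:
    """SHA-256 hex digest of a file (as in the earlier sketch)."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def load_hash_list(path: str) -> set[str]:
    """Load known hashes, one hex digest per line, into a set."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def should_block(upload_path: str, known_hashes: set[str]) -> bool:
    """True if the upload matches a previously identified image or video."""
    return file_digest(upload_path) in known_hashes  # O(1) set lookup

# Hypothetical file names, for illustration only.
known = load_hash_list("hash_list.txt")
if should_block("user_upload.jpg", known):
    print("Upload blocked: matches a known hash.")
```

Because only hashes are exchanged, the service never needs to hold or view the original imagery.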

The IWF began hashing child sexual abuse images in 2015.

In 2021 we launched our Taskforce, a team dedicated to assessing images and videos, which was originally funded by Thorn and then by the Home Office as part of a partnership with the UK Government's Child Abuse Image Database (CAID).

Every piece of child sexual abuse imagery found by our Analysts is viewed by our team of Assessors. Assessors add a severity assessment and metadata to each image and video, always grading with human eyes and thorough quality assurance. A dedicated Quality Assurance team reviews assessments to ensure accuracy, consistency and alignment with classification standards across the entire hotline. These hashes form our Hash List, which, when deployed, helps reduce the revictimisation of children by preventing the further viewing, sharing and uploading of child sexual abuse material, interrupting its continued circulation and limiting repeated harm.

Our hash data also serves another purpose thanks to the way we exchange information with CAID. In addition to its deployment by Members, our hash data is uploaded into CAID, where it can inform law enforcement's investigative and safeguarding work.

In 2025 we assessed 512,980 images and videos, adding 333,933 to our Hash List. The remaining files were reviewed through our quality assurance processes and enriched with additional metadata.

By the end of 2025, we had accumulated a total of 3,224,085 individual hashes of criminal imagery.

This dataset consisted of 3,049,074 images and 175,011 videos, including 8,988 instances of prohibited imagery.

All 3,224,085 unique hashes are available to Members through the IWF Hash List and our Image Intercept tool. In 2025, 54 Members subscribed to the Hash List.

What makes IntelliGrade different from other technologies is that it allows us to enrich these hashes with additional contextual metadata, as shown in the chart below. This additional breakdown allows us to clearly indicate the type of sexual activity depicted within the imagery and to ensure it is accurately aligned to each category.

Hashes by sexual activity metadata - all time

Note: Some older hashes were created before we had the ability to assign specific sexual activity categories, so they are not included in the sexual activity metadata chart above.
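To make the idea of an enriched hash concrete, here is a hypothetical record structure in Python; the field names are illustrative assumptions, not IntelliGrade's actual schema.

```python
from dataclasses import dataclass

@dataclass
class EnrichedHash:
    """Hypothetical enriched hash record; fields are illustrative only."""
    hash_value: str             # fixed-length digest of the image or video
    media_type: str             # "image" or "video"
    uk_category: str            # "A", "B" or "C" under the UK guidelines
    sexual_activity: list[str]  # contextual metadata on the activity depicted
    qa_reviewed: bool = False   # set once quality assurance has signed off

record = EnrichedHash(
    hash_value="3a7bd3e2...",   # truncated for illustration
    media_type="image",
    uk_category="B",
    sexual_activity=["non-penetrative sexual activity"],
)
```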

These classifications of sexual activity determine which Category an image is given in accordance with the UK Sentencing Council's Sexual Offences Definitive Guidelines (a simplified mapping is sketched after the list below).

Category A: Penetration, bestiality, sadism or degradation

Category B: Non-penetrative sexual activity, masturbation, inappropriate touching, adult sexual arousal

Category C: Sexual posing with nudity, sexual display of the pubic region, and other indecent images that do not fall within Categories A or B.
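As a rough sketch of how such metadata could drive the category decision, the mapping below is a simplification for illustration only, not the hotline's actual grading logic.

```python
# Illustrative mapping from sexual activity metadata to the UK
# Sentencing Council categories above; not the real assessment logic.
CATEGORY_A = {"penetration", "bestiality", "sadism", "degradation"}
CATEGORY_B = {"non-penetrative sexual activity", "masturbation",
              "inappropriate touching", "adult sexual arousal"}

def uk_category(activities: set[str]) -> str:
    """Return the most severe category supported by the observed activities."""
    if activities & CATEGORY_A:
        return "A"
    if activities & CATEGORY_B:
        return "B"
    return "C"  # sexual posing / other indecent images outside A and B

print(uk_category({"masturbation", "sexual posing"}))  # -> "B"
```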

A key benefit of this enrichment process is that hashes generated by IntelliGrade are compatible with child sexual abuse laws and classifications in the UK, US, Canada, Australia and New Zealand, as well as the Interpol Baseline standard. The Baseline sets evaluation criteria for child sexual abuse material, and material classified under it is illegal across all 196 Interpol member countries.
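One way to picture that compatibility is a single record carrying a label for each scheme it has been assessed against; the sketch below is a purely hypothetical data shape, not the actual dataset format.

```python
from dataclasses import dataclass

@dataclass
class MultiJurisdictionHash:
    """Hypothetical shape: one hash, one label per classification scheme."""
    hash_value: str
    uk_label: str            # e.g. "Category B" under the UK guidelines
    us_label: str            # label under the relevant US classification
    interpol_baseline: bool  # True if the imagery meets the Baseline
                             # criteria and is therefore illegal in all
                             # 196 Interpol member countries
```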

This means we can provide a dataset of hashes of child sexual abuse imagery which is compatible with multiple legal jurisdictions around the world. 

Severity of hashes - all time

Sexual activity classified under the UK Category B definition has been the most prevalent, accounting for 40% of the images and videos on our Hash List.


IWF Image Classification Assessor
Frontline observations

Some readers may find the following descriptions distressing; please feel free to skip this section.

IWF's Image Assessors and Internet Content Analysts frequently see imagery that, without additional intelligence, does not clearly depict a child being abused. It could be a close-up image, a pseudo image, or an obscured image that is hard to distinguish. In some cases, it could depict an older teenager whose age is difficult to assess visually.

Intelligence from the UK Home Office's Child Abuse Image Database (CAID) provides valuable context that helps our assessors make accurate, verified grading decisions. Intelligence can help us understand the context of large image sets, add clarity to otherwise indeterminate imagery, and 'connect the dots' when it comes to complex imagery depicting multiple children.

The intelligence we gain from CAID is also valuable to our proactive work. Child sexual abuse imagery of 16-17-year-olds is widely distributed online, and our analysts have no difficulty finding it; sometimes a single search term or a few keywords locate it quickly. To request its removal from a webpage, we must be certain it depicts a child under the age of 18, and this is where CAID intelligence on 'identified victims' is vital to our takedown efforts. A single piece of CAID intelligence can lead to an analyst removing hundreds of URLs depicting just one older child victim.

We also extract hashes from CAID, to which our assessors add our grades and metadata before adding them to our own Hash List. Once deployed, these CAID-sourced hashes work online to protect children and internet users.
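A minimal sketch of that intake flow, under the assumption of a simple in-memory hash list; the function name and data shapes are hypothetical, and CAID's actual interface is not described here.

```python
# Hypothetical sketch: absorbing CAID-sourced hashes so assessors can
# grade them before they join the Hash List. Names are illustrative only.
def absorb_caid_hashes(hash_list: dict[str, dict], caid_hashes: list[str]) -> list[str]:
    """Return the CAID hashes that still need an assessor's grade."""
    ungraded = [h for h in caid_hashes if h not in hash_list]
    for h in ungraded:
        # Placeholder entry until an assessor adds a grade and metadata.
        hash_list[h] = {"grade": None, "metadata": {}, "source": "CAID"}
    return ungraded

hash_list = {"3a7bd3e2...": {"grade": "B", "metadata": {}, "source": "IWF"}}
print(absorb_caid_hashes(hash_list, ["9f2c61aa...", "3a7bd3e2..."]))  # -> ['9f2c61aa...']
```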

Policy overview

The UK's Online Safety Act aims to hold tech companies legally accountable by placing responsibility on platforms to minimise harm and deliver more positive outcomes for children. It is imperative that this legislation delivers ambitious and effective regulation to ensure services take the necessary steps to combat child sexual abuse material online.

The Online Safety Act 2023 (OSA) introduces measures to protect children and adults online. The Act requires platforms that offer user-to-user services or search engines to remove illegal content, including child sexual abuse material, to address material that is harmful to children, and to enforce age limits for adult content.

However, gaps in the Act and the rule-based nature of its regulations continue to present challenges. In particular, limited clarity on key principles and provisions, including safety by design, technical feasibility and safe harbour, complicates both the implementation and the enforcement of the regime.

Ofcom, the UK's independent communications regulator, has been working to implement the legislation. It has published codes of practice setting out specific steps providers can take to meet their safety obligations, and under the new regulatory framework it has the power to assess and enforce compliance among service providers.

The Illegal Harms Codes set out the steps companies should take to tackle child sexual abuse material and other illegal harms on their services. Notably, the codes require services to use hash matching to detect and remove this material for the first time. With the implementation of the Act, the sharing of pictures and videos showing the sexual abuse of children should become significantly more difficult.

The urgency of these measures is clear. In 2025, we detected more than 312,000 reports containing child sexual abuse images and videos, a 7% increase on the previous year. By holding tech companies legally accountable, the OSA looks to ensure that platforms take the action needed to prevent the creation and sharing of this criminal content. The greater the risk on a service, the more measures and safeguards are needed to keep users safe from harm and to prevent the grooming and exploitation of children. Effective and ambitious regulation is essential to make sure these services deliver real protection and positive outcomes for children.