A guide to how we assess both reports and imagery, highlighting the key outcomes from 2024
The Hotline is split into two workstreams, as shown in the diagram below: the assessment of reports and the assessment of imagery.
The majority of our work (63%) comes from proactive searching online; however, we also receive reports from external sources, including the public, the police, IWF Members, hotline agencies, and other charities and stakeholders in the child protection sector. In addition, we receive reports through our child reporting services, Report Remove and Meri Trustline, through which children can report URLs or their own illegal imagery.
These reports are analysed by our Internet Content Analysts, who determine whether the reported content is criminal under UK law. URLs found to be criminal are actioned and added to the IWF URL List for our Members.
If a child reports a URL, it follows our report assessment process; if they report criminal imagery, it follows the criminal imagery process. Both are set out in the workflow diagram above.
All criminal imagery is then uploaded to our Intelligrade system, where our Taskforce Image Assessors analyse each image or video. These assessments provide data insights based on the images, videos and children seen. Every piece of imagery is given a severity assessment and included on the IWF Hash List.
The IWF has access to the government’s Child Abuse Image Database (CAID), allowing us to assess the images and videos it holds. This data is then shared to support law enforcement and contributes to the IWF Hash List, helping the tech sector find and remove copies of known child sexual abuse images online.
Our Quality Assurance team provide an independent audit across both workstreams within the Hotline to ensure accuracy and consistency.
Dataset tag example
This year we have been able to provide more detailed analysis of different areas of our work, with image, video and child analysis a new focus for 2024. We have created six dataset tags to clearly identify which data, and which part of our workstream, is being referred to throughout all sections of our report.
See Reports assessment for more details on our full report analysis.
* This figure accounts for all illegal hashes in 2024; it includes videos and collage images where individual children are not recorded.
See Imagery assessment for more details on our image and video analysis.
The Internet Content Analysts, often referred to as Analysts, are a team of 17 people. They are responsible for proactively searching for images and videos of child sexual abuse online, responding to public reports, and monitoring new trends. It’s their job to ensure criminal content depicting the sexual abuse of children is removed from the internet.
The Image Classification Assessors are a Taskforce of 11 whose role is to assess images and videos, adding extra metadata to each, such as the age of the child depicted and the type of sexual activity taking place, alongside other information. Once they have added the data, a hash, or “digital fingerprint”, is created. These hashes are then used by our industry partners to prevent the upload, download and further dissemination of these images.
The role of our Quality Assurance team is to support the Hotline. The team of five is deliberately managed by a different Director from the rest of the Hotline, ensuring the Hotline’s work is held to the highest standards. They check assessments for accuracy and consistency and track trends to ensure the IWF remains a trusted and world-leading organisation.