Reports assessed
Over the past year, our Hotline assessed 451,186 reports of suspected child sexual abuse material (CSAM), actioning 311,599 confirmed reports of child sexual abuse and 439 prohibited reports.
Reports to the Internet Watch Foundation (IWF) come from the public, partners, and proactive searching. All reports are manually assessed by our specialist analysts in the UK.
The public can report suspected CSAM via our website or through our 50+ Reporting Portals, available in over 19 languages. Partners, including law enforcement, international hotlines, child protection organisations, and over 200 IWF Members, submit reports through the Hotline, Reporting Portals, or an API.
Since 2014, the IWF has been authorised to proactively search online for child sexual abuse material, which now accounts for a significant portion of our work. Together, these reporting routes enable us to assess hundreds of thousands of reports each year against UK legal guidelines.
This chart compares proactively sourced URLs (where our analysts search for child sexual abuse imagery) and URLs or imagery reported to us via external sources.
Assessed refers to all reports that have been reviewed, whereas actioned refers to the number of those found to contain illegal content.
Externally submitted reports make up 8% of all actioned reports, largely because many reports do not meet the UK legal threshold or fall outside the IWF’s remit.
Reports assessed
This is the total number of reports processed from external sources. Our analysts were able to take action on 22,309 of these reports.
We receive a high volume of reports from the public and from other sources, including the police, IWF Members, hotline agencies and child protection organisations.
This chart shows the percentage of reports from each external source that were actionable (i.e., contained child sexual abuse material).
‘Other’ refers to charities and other stakeholders working within the child protection sector.
Note: These percentages exclude newsgroups and duplicate/'previously actioned' reports.
Our Hotline assessed 161,449 reports submitted by the public.
When correctly identified duplicate reports (i.e. multiple reports of the same URL) are included in the accuracy calculation, public reporting accuracy was 25% (down from 27% in 2024).
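To illustrate what including correctly identified duplicates means for that figure, the sketch below shows one possible way such an accuracy rate could be calculated. It assumes (this is our reading, not a confirmed IWF formula) that a public report pointing at a URL already confirmed to contain criminal imagery counts as a correct identification even though it is not newly actioned. All category names and counts are hypothetical.

```python
from collections import Counter

# Illustrative outcomes of public report assessments; category names
# and counts are hypothetical, not IWF data.
outcomes = Counter(
    newly_actioned=140,       # confirmed CSAM, actioned for the first time
    duplicate_confirmed=110,  # repeat report of a URL already confirmed and actioned
    not_actionable=750,       # no CSAM found, content no longer live, or outside remit
)

total = sum(outcomes.values())

# Raw accuracy: only newly actioned reports count as "correct".
raw_accuracy = outcomes["newly_actioned"] / total

# Accuracy including correctly identified duplicates of confirmed URLs.
accuracy_with_duplicates = (
    outcomes["newly_actioned"] + outcomes["duplicate_confirmed"]
) / total

print(f"{raw_accuracy:.0%} raw, {accuracy_with_duplicates:.0%} including duplicates")
```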
Public Reports
When you first look at our data, it's clear that most of the illegal content we action comes from our proactive work, but our public reports represent something the numbers alone can't tell you; they're often warning signs.
A surge in reports from the public about a particular website can show that a certain type of online offending is spreading, sometimes across multiple platforms or in multiple countries. The comments reporters include in their reports sometimes provide us with context we couldn't find by ourselves. These details help us understand not just what child sexual abuse material (CSAM) is being shared, but how bad actors are sharing it, and what platforms are at risk of being exploited.
The number of non-illegal public reports we receive also captures something we deal with regularly: reporters stumbling across adult content that they are concerned depicts a child. Our analysts are trained to ‘age assess’ the people we view online, and to distinguish child victims in CSAM from legitimate performers in adult pornography. Many public reports we assess are found not to contain CSAM but instead to depict deceptive adult content, where adult performers are styled and described as being far younger than their actual age.
Reports from the public also inform our proactive work, because they highlight patterns of CSAM distribution, and hotspots where CSAM is being posted.
Public reports help shape our overall understanding of child sexual abuse online, particularly sharing habits: how CSAM spreads online and how quickly it moves. This intelligence then helps us determine the best action we can take to disrupt it.
Police
The Hotline receives reports from law enforcement agencies both here in the UK and abroad. Often, they come to us because they need specific insight into how a particular site is accessed, such as a disguised site that masks the display of CSAM. They might ask us for context around a platform’s behaviour, or for further intelligence on what we have documented about a URL they're investigating. They may also report to us sites that they believe contain CSAM, so that we can expedite their removal.
Our focus is entirely on child sexual abuse material online, meaning we have gained expertise that can be shared with law enforcement agencies. Sometimes, our records and evidence are the missing piece that helps identify an offender, support an arrest, issue a warrant, or help explain how distribution networks operate.
Sometimes we receive reports from officers who've attended one of our Open Days, which is especially rewarding, and proves that the knowledge we've shared can be applied in real investigations. Officers supporting children whose indecent imagery has been distributed online can report their intelligence to us. This intelligence can lead to us removing multiple URLs of CSAM depicting a victim. In addition, hashing the imagery can potentially prevent further uploads and sharing.
Reports assessed
Since 2014, the IWF has been legally permitted to actively search for child sexual abuse imagery, a process we refer to as proactive searching.
Proactive searching enables us to review websites known to host, or link to, child sexual abuse content. This allows us to identify, and have removed, significantly more child sexual abuse images and videos from the internet, making it the most efficient method for finding and removing this material.
Our analysts were able to action 287,149 proactive reports, representing almost 100% of all reports generated through proactive searching.
This approach also strengthens protection for victims who report abuse through our child reporting service. Once victims are known to us, we can take further action to prevent the continued circulation of their imagery online.
Proactive searching of the clear web and dark web accounts for a large proportion of our work. We capitalise on the intelligence received from public reports, navigating both popular and obscure corners of the internet to find further instances of CSAM to remove from public view and download for hashing.
Our data shows that our largest volume of proactive work comes from URLs of image hosting services: websites whose purpose is to host imagery, often one image per URL. Our experience tells us that bad actors use image hosting services to populate online forums with thousands of indecent images of children.
In our proactive efforts we isolate these offending URLs and act to remove each one. The forums are so expansive, and are repopulated with CSAM so rapidly and purposefully, that we find huge numbers of URLs this way. After we chase the removal of these vast online collections of CSAM, bad actors appear to quickly re-host their collections of imagery via an alternative hosting company or a different image host. Once again, they re-expose and revictimise the same children, but we proactively pursue removal as many times as it takes.
We also work proactively on locating commercial sites. Once we have located one disguised site, we find that others are simply one click away, as though similar CSAM websites are connected along a digital trail designed to provide more and more indecent content to those who seek it.
We know that indecent imagery of certain child victims has been widely distributed online. By entering the right words into search engines or certain websites, we can often generate hundreds of CSAM URLs to action. Some of those search terms are degrading and abusive words associated with the content of specific child victims. This is how this material is known to bad actors who seek it out, but for us such terminology becomes another tool for finding URLs proactively.
Reports assessed
The IWF Reporting Portals provide a reporting mechanism for online child sexual abuse imagery in countries where no such facility exists. In partnership with local governments, law enforcement, industry, funders, and charities, the portals offer a direct reporting route to our UK-based analysts.
Our analysts were able to action 880 reports received directly from our international Portals.
In 2025, the IWF, funded by Safe Online, commissioned the first independent evaluation of the Portals. Covering more than a decade and 53 countries, it assessed how the Portals operate, their global reach, impact, and key limitations. The full report can be found here.
Our international Portals give people in countries without a hotline a place to turn to when they’re worried about something they’ve seen online. Some countries’ Portals have two versions: one in English and one in a local language, ensuring that the reporting journey is accessible to as many people as possible.
Reports from Portals often stretch us in new ways. We might be facing a language we don’t recognise, a platform that behaves differently in another region, or technology that blocks UK access entirely - but our goal is the same. We use our skills and training to find alternative routes into these sites, assess the material accurately, and ensure child sexual abuse material is never welcome online, no matter who found it or where it’s hidden.
What we learn through our Portals strengthens our knowledge of how to signpost and support international self-reporters. Our Portal partners often provide us with excellent signposting information on local services that we can share with those reporters who approach the IWF directly for help with online content removal and the associated threats and harm they are experiencing.
The intelligence we feed back to our Portal partners can help them understand what their communities are encountering most, and learn more about local experiences in the context of global trends. It can also inform them of any CSAM that is hosted within their country. The Portal relationship turns isolated reports into shared understanding, and that shared understanding into action we can apply all over the internet.
Reports assessed
The IWF currently works with two organisations that provide direct reporting services, enabling children to report child sexual abuse imagery directly to us.
The IWF assesses reported content and takes action where it meets the threshold of illegality. Illegal imagery is given a unique digital fingerprint (hash) and shared with internet companies to help prevent further distribution.
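Purely as an illustration of the hashing concept described above (not the IWF's actual systems or hash formats), the sketch below shows how a platform could compare a cryptographic hash of an uploaded file against a shared list of known hashes and block exact matches; the hash list entry and file name are placeholders.

```python
import hashlib
from pathlib import Path

# Placeholder for a shared list of hex digests of known illegal images
# (real industry hash lists are distributed securely, not hard-coded).
KNOWN_HASHES = {"0" * 64}


def sha256_of_file(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def should_block_upload(path: Path) -> bool:
    """Block an upload whose hash matches a known-image hash.

    A cryptographic hash only matches byte-identical copies; real
    deployments also use perceptual hashing to catch resized or
    re-encoded versions of the same image.
    """
    return sha256_of_file(path) in KNOWN_HASHES


if __name__ == "__main__":
    print(should_block_upload(Path("upload.jpg")))  # hypothetical file
```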
The IWF and NSPCC’s Childline developed the world-first Report Remove tool to help young people remove sexual images or videos of themselves from the internet. Launched in June 2021, the service enables children to report content safely and anonymously, while receiving ongoing support from Childline.
The IWF has worked with the National Crime Agency, UK law enforcement, and the NSPCC to continue promoting Report Remove and improve referrals, including through signposting resources for schools.
In 2025 our analysts were able to action 1,175 reports received through this reporting service for children.
Through our collaboration with the RATI Foundation, an Indian child protection organisation, young people are able to submit online content for assessment by the IWF. Where the content breaches UK law, the IWF will seek to have it blocked and/or removed.
The platform was developed by the non-profit technology organisation Tech Matters in collaboration with the IWF. It builds on Tech Matters’ existing software, Aselo, a cloud-based contact centre platform used by the RATI Foundation to operate Meri Trustline, its helpline supporting children in India who are experiencing online harms.
In 2025 our analysts were able to action 86 reports received through this reporting service for children.
Some of the most challenging reports we handle come from self-reporters: people who discover their own intimate images online and turn to us for help removing them.
Sometimes the images were taken and shared recently, meaning the reporter is still a child. At other times the images were taken years ago but are still online now that the reporter is an adult.
Images can be recirculated for years after they were originally created and self-reporters have described their imagery being found online by someone they know. They tell us how it impacts their life, and how the imagery never seems to go away. International self-reporters tell us that family or community finding out about online indecent imagery puts them at great risk of harm.
For analysts, it’s delicate work and often difficult to navigate. Where it is suitable, we aim to signpost all self-reporters to appropriate organisations for emotional support, whilst we take on the practical work to remove their online indecent imagery.
Unfortunately, a large portion of reports from self-reporters will not be represented in the overall CSAM data this year, because sometimes we cannot take any action. This can be because the reported imagery is in encrypted or private chats that the IWF cannot access. At other times, although the material may be causing harm to the reporter, it does not reach the legal threshold for removal as child sexual abuse material. In some cases, we cannot visually determine the reporter to be a child in the imagery. When we cannot help a self-reporter, we always aim to suggest different organisations that they could turn to for assistance or support.