Where our reports come from

Overview
Proactive search
External report sources
Child reporting services
International reporting portals

Overview

 


Total number of reports assessed

451,210

Over the past year, our Hotline assessed 451,210 reports of child sexual abuse material, actioning 311,610 confirmed cases of child sexual abuse and 439 prohibited reports.

Reports to the Internet Watch Foundation (IWF) come from the public, partners and proactive searching. All reports are manually assessed by our specialist analysts in the UK.

The public can report suspected child sexual abuse material via our website or through 50+ international reporting portals, available in over 19 languages. Additional reports are received from other external sources, including law enforcement, international hotlines, child protection organisations and over 200 IWF Members who submit reports through the Hotline, international reporting portals or an API (Application Programming Interface).

Since 2014, the IWF has been authorised to proactively search online for child sexual abuse material. This proactive searching now accounts for a significant portion of our work, representing 64% of all reports in 2025. 

Together, these reporting routes enable us to assess hundreds of thousands of reports each year against UK legal guidelines.

 

Below is a breakdown, by source, of the reports received in 2025. 

Full details for each source can be found in the tabs at the top of this page.


Breakdown of all reports received

Percentages are rounded to the nearest whole number.

*Child reporting services are Report Remove and Meri Trustline

Externally submitted reports make up 7% of all actioned reports, largely because many reports do not meet the UK legal threshold or fall outside the IWF’s remit.

 


Despite the number of reports we receive from external sources, as well as those that we proactively discover, there are areas of the internet where child sexual abuse material is able to flourish. The rollout of end-to-end encryption (E2EE) messaging without any safeguards means services lose the ability to detect and remove child sexual abuse images and videos. This doesn’t need to be the case. Services must implement pre-encryption checks, such as upload prevention, on E2EE platforms to ensure that known child sexual abuse material is detected and blocked before being shared.

IWF analysts routinely identify child sexual abuse images and videos on public parts of the internet that are linked to E2EE services. While we do not have the ability to see what is happening within E2EE environments, it is clear from just the public-facing parts that images and videos of child sexual abuse are being sold and/or shared in these spaces. The continued rollout of E2EE messaging without any safeguards means that services, and organisations like ours, lose the ability to detect and remove child sexual abuse images and videos. As this blind spot continues to grow, offenders thrive while victims and survivors live with the increasing threat of their abuse resurfacing.

Technology exists that can detect and block known child sexual abuse material prior to an image being shared within an E2EE environment. Upload prevention is a pre-encryption check that ensures known child sexual abuse material is detected and blocked before it can be encrypted. Pre-encryption checks are commonly used in E2EE environments to improve the user experience, and should be extended to the detection of pictures and videos showing the sexual abuse of children. Services must implement upload prevention on their E2EE services and reduce the risk of known child sexual abuse imagery being sent and shared on their platforms. Further investment and innovation in privacy-preserving technologies that can detect and take down this material without compromising message confidentiality is also needed.
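As an illustration only, an upload-prevention check can be thought of as a hash lookup performed on the client before an image is encrypted or sent. This is a minimal sketch under stated assumptions: real deployments use perceptual hashing (so resized or re-encoded copies still match) and a vetted hash list distributed to services, neither of which is shown here. All names and the example hash list are hypothetical.

```python
import hashlib

# Hypothetical set of hashes of known child sexual abuse images, as would
# be distributed to services by a body such as the IWF. Real systems use
# perceptual hashes so altered copies still match; SHA-256 is a stand-in
# that only matches byte-identical files. (The entry below is simply the
# SHA-256 of the placeholder bytes b"test" used for demonstration.)
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def upload_allowed(image_bytes: bytes) -> bool:
    """Pre-encryption check: refuse the upload if the image matches a
    known hash, before the content is ever encrypted or transmitted."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest not in KNOWN_HASHES

# The check runs client-side, so message confidentiality is preserved:
# only a match/no-match decision is made before encryption.
print(upload_allowed(b"test"))          # placeholder bytes in the list: blocked
print(upload_allowed(b"other content"))  # not in the list: allowed
```

Because the lookup happens before encryption, the service never inspects message contents in transit, which is the distinction the paragraph above draws between upload prevention and breaking E2EE.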

Clear action by policymakers is a central part of tackling harms in E2EE environments. As the Online Safety Act is implemented, the UK regulator Ofcom must regularly assess their position on the ‘technical feasibility’ condition to ensure platforms do not use this as a get-out clause to evade compliance with hash-matching requirements. The UK government should also introduce additional legislation that ensures private messaging platforms, including those that are end-to-end encrypted, take the necessary steps to detect and block child sexual abuse material. 

 


In the European Union, the Child Sexual Abuse Regulation must be adopted without further delay. Member States must push for progress on the Regulation to provide a permanent legal basis for detecting child sexual abuse material across the EU.

Victims and survivors of child sexual abuse should be able to live without fear of their images being shared online. Services and lawmakers have a duty to make sure end-to-end encrypted environments are not safe havens for criminals to target children and share child sexual abuse material.

Proactive search

 

Frontline observations

Proactive reports created and assessed

287,273

Since 2014, the IWF has been legally permitted to actively search for child sexual abuse imagery, a process we refer to as proactive searching.

Proactive searching enables us to review websites known to host, or link to, child sexual abuse content. This allows us to identify and have removed significantly more child sexual abuse images and videos from the internet, making it the most efficient method for finding and removing this material.

Our analysts were able to action 287,152 proactive reports.

 

In some cases, reports are created by our analysts for URLs containing child sexual abuse material that, by the time they are processed, have already been removed and/or can no longer be actioned.

Taking action on a report means that the content flagged in the report has been reviewed by trained analysts and confirmed to meet UK legal thresholds for child sexual abuse material. Once confirmed, action can result in:

  • Removal of the content from the internet where possible.

  • Blocking access to the URL so it cannot be viewed in the UK.

  • Referral to law enforcement or other appropriate authorities for further investigation.

  • Recording and monitoring the report internally to inform proactive work and trends analysis.

In simple terms, “taking action” is the process of assessing a report, confirming it is illegal, doing everything possible to stop access to it and supporting safeguarding efforts.

This approach also strengthens protection for victims who report abuse through our child reporting service. Once victims are known to us, we can take further action to prevent the continued circulation of their imagery online.

IWF Internet Content Analyst
Frontline observations

Some readers may find the following descriptions distressing; please feel free to skip this section.

Proactive searching of the clear web and dark web accounts for a large proportion of our work. We capitalise on the intelligence received from public reports and navigate both popular and obscure corners of the internet to find further instances of child sexual abuse imagery that can be removed from public view and downloaded for hashing. 

Our data shows that our largest volume of proactive work comes from URLs of image hosting services: websites whose purpose is to host imagery, often one image per URL. Our experience tells us that bad actors use image hosting services to populate online forums with thousands of indecent images of children.

In our proactive efforts we isolate these offending URLs and act to remove each one. The forums are expansive and repopulated with child sexual abuse material so rapidly and purposefully that we find huge numbers of URLs this way. After we chase the removal of these vast online collections, bad actors appear to quickly move their collections of imagery to an alternative hosting company or a different image host. Once again, they expose and revictimise the same children, but we proactively pursue the URLs’ removal as many times as it takes.

We also work proactively on locating commercial sites. Sometimes these are disguised sites and once we have located one, we find that others are simply a click away, as though similar child sexual abuse websites are connected along a digital trail designed to provide more and more indecent content to those that seek it. 

We know that indecent imagery of certain child victims has been widely distributed online. By entering the right words into search engines or certain websites, we often generate hundreds of child sexual abuse URLs that can be actioned. Some of those search terms are degrading and abusive and are associated with the content of specific child victims. This is how the material is known to bad actors who seek it out, but for us such terminology becomes another tool for finding URLs proactively. 

External report sources

 

Frontline observations

External reports received and assessed

153,485

 

We received 153,485 reports from external sources, including the general public, the police, IWF Members, hotline agencies and other child protection organisations. Our analysts assessed all the reports received to determine if they contained, or were linked to, child sexual abuse imagery.

15% (22,317) of these external reports were actioned as criminal.

Actionable reports from each external source

Public reports are submissions made by members of the public who encounter online content they suspect may contain child sexual abuse material. Individuals provide the relevant URL through an online reporting form, enabling trained analysts to assess the content against UK legal thresholds and take action where appropriate.

In the chart below, we show each reporting source separately and measure its accuracy based on the number of reports it submitted. Each source’s accuracy is calculated independently, meaning it is measured only against its own reports and not compared to any other source.

External reports – % which led to child sexual abuse imagery being processed and actioned

This chart shows the percentage of reports from each external source that were actionable (i.e., contained child sexual abuse material).

Note: These percentages exclude newsgroups and duplicate/'previously actioned' reports.

Limitations of external reporting

 

It is important to recognise that the figures presented may not reflect the content that was visible at the time of reporting. In many cases, URLs reported to us contain no visible content when assessed. This may be because the criminal content was removed from the site before our analysts were able to assess the report, or because the URL had already been received and assessed via another source, resulting in a duplicate report. It may also be that the content is not accessible to us without entering areas of the site that fall outside our remit.

When correctly identified duplicate reports (i.e. multiple reports of the same URL) are included in the calculation, external report source accuracy was 25%.
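The per-source accuracy described above is simple arithmetic: each source’s actionable reports divided by its own total, with correctly identified duplicates either excluded from the denominator or counted as accurate. A sketch with illustrative numbers (the counts below are made up for demonstration and are not figures from this report):

```python
def accuracy(actioned: int, total: int, duplicates: int = 0,
             include_duplicates: bool = False) -> float:
    """Percentage of a source's own reports that led to action.

    Each source is measured only against its own submissions, never
    compared to another source. Correctly identified duplicates
    (multiple reports of the same URL) can either be excluded from
    the calculation or counted as accurate reports, since they
    pointed at material that was genuinely actionable.
    """
    if include_duplicates:
        return 100 * (actioned + duplicates) / total
    return 100 * actioned / (total - duplicates)

# Illustrative numbers only -- not data from the report.
print(round(accuracy(actioned=150, total=1000, duplicates=100), 1))  # 16.7
print(accuracy(150, 1000, 100, include_duplicates=True))             # 25.0
```

Including duplicates raises the headline figure because those reports did identify real material, even though no new action resulted from them.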

IWF Internet Content Analyst
Frontline observations

Some readers may find the following descriptions distressing; please feel free to skip this section.

In the Hotline, we receive reports from a variety of different external sources from all around the world, including both the public and law enforcement agencies.

When you first look at our data, it’s clear that most of the illegal content we action comes from our proactive work, but our public reports represent something the numbers alone can’t tell you; they’re often warning signs. A surge in reports from the public about a particular website can show that a certain type of online offending is spreading, sometimes across multiple platforms or in multiple countries. The comments people include in their reports sometimes provide us with context we couldn’t find by ourselves. These details help us understand not just what child sexual abuse material is being shared, but how bad actors are sharing it, and what platforms are at risk of being exploited.   

The number of non-illegal public reports we receive also captures something we deal with regularly: people stumbling across adult content that they are concerned depicts a child. We are trained on how to ‘age assess’ the people we view online, and how to distinguish child victims in child sexual abuse material from legitimate performers in adult pornography. Many public reports we assess do not contain child sexual abuse material but depict deceptive adult content, where adult performers are styled and described as being far younger than their actual age.

Reports from the public also inform our proactive work, because they highlight patterns of child sexual abuse material distribution, and hotspots where child sexual abuse is being posted. 

Public reports help shape our overall understanding of child sexual abuse online, particularly sharing habits: how child sexual abuse material spreads online and how quickly it moves. This intelligence helps us to determine the best action we can take to disrupt it.

The Hotline also receives reports from law enforcement agencies from all over the world. Often, they come to us because they need specific insight about how a particular site is accessed, such as a disguised site that masks the display of child sexual abuse material. They might ask us for context around a platform’s behaviour; or further intelligence on what we have documented about a URL they’re investigating. They may also report sites that they believe contain child sexual abuse material, so that we can expedite their removal. Sometimes, our records and evidence are the missing piece that helps identify an offender, support an arrest, issue a warrant or help explain how distribution networks operate.  

On occasion we receive reports from officers who’ve attended one of our open days, which is especially rewarding, and proves that the knowledge we’ve shared can be applied in real investigations. Officers supporting children whose indecent imagery has been distributed online can report their intelligence to us. This intelligence can lead to us removing multiple URLs of child sexual abuse material depicting a victim. In addition, hashing the imagery can potentially prevent further uploads and sharing. 

Child reporting services

 

Frontline observations

Child reporting services reports received and assessed

2,078

The IWF currently works with two organisations that provide direct reporting services, enabling children to self-report child sexual abuse imagery directly to us.

The IWF assesses reported content and takes action where it meets the threshold of illegality. Illegal imagery is given a unique digital fingerprint (hash) that is shared with internet companies to help prevent further distribution.
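The “digital fingerprint” step can be sketched as below. This is a simplified illustration, not the IWF’s implementation: in practice hash lists combine perceptual hashes (which survive resizing and re-encoding) with cryptographic ones, whereas the plain SHA-256 shown here only matches byte-identical files. The placeholder byte strings are hypothetical stand-ins for image files.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Cryptographic hash of an image file: identical bytes always
    yield the same fingerprint, so the hash can be shared with
    internet companies without ever sharing the image itself."""
    return hashlib.sha256(image_bytes).hexdigest()

# A service holding only the hash list can screen uploads without
# seeing the original imagery.
hash_list = {fingerprint(b"placeholder-known-image")}

print(fingerprint(b"placeholder-known-image") in hash_list)  # exact copy matches
print(fingerprint(b"altered-copy") in hash_list)             # changed bytes do not
```

The second lookup failing is exactly why perceptual hashing is used alongside cryptographic hashing in real deployments: a single pixel change defeats an exact-match fingerprint.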

Of the 2,078 reports received, we took action on 1,261 that showed child sexual abuse. 

 

Report Remove

The IWF and NSPCC's Childline developed the world-first Report Remove tool to help young people remove sexual images or videos of themselves from the internet. Launched in June 2021, the service enables children to report content safely and anonymously, while receiving ongoing support from Childline.

The IWF has worked with the National Crime Agency, UK law enforcement and the NSPCC to continue promoting Report Remove and improve referrals, including through the development of resources for schools. 

In 2025 our analysts were able to action 1,175 reports received through this reporting service for children.

 

Meri Trustline

Through our collaboration with the RATI Foundation, an Indian child protection organisation, young people are able to submit online content for assessment by the IWF. Where the content breaches UK law, the IWF will seek to have it blocked and/or removed.

The platform was developed by the non-profit technology organisation Tech Matters in collaboration with the IWF. It builds on Tech Matters’ existing software, Aselo, a cloud-based contact centre platform used by the RATI Foundation to operate Meri Trustline, its helpline supporting children in India who are experiencing online harms.

In 2025 our analysts were able to action 86 reports received through this reporting service for children.

 

IWF Internet Content Analyst
Frontline observations

Some readers may find the following descriptions distressing; please feel free to skip this section.

Some of the most challenging reports we handle come from self-reporters who discover their own intimate images online and turn to us for help removing them.  

Sometimes the images were taken and shared recently, meaning the reporter is still a child. At other times the images were taken years ago but appear online now, when the reporter is an adult.  

Images can be recirculated for years after they were originally created and self-reporters have described their imagery being found online by someone they know. They tell us how it impacts their life, and how the imagery never seems to go away. International self-reporters tell us that family or community finding out about online indecent imagery puts them at great risk of harm.  

For analysts, it’s delicate work and often difficult to navigate. Where suitable, we signpost all self-reporters to appropriate organisations for emotional support, while we take on the practical work of removing the indecent imagery.  

Unfortunately, a large portion of reports are not represented in the overall child sexual abuse data this year as we're not always able to take action on them. This can be because the reported imagery is on an encrypted or private chat that the IWF can’t access. At other times, although the material may be causing harm to the reporter, it does not reach the legal threshold for removal as child sexual abuse material. In some cases, we can’t visually determine if the reporter is a child in the imagery. When we are not able to help a self-reporter, we always refer them to different organisations that they could ask for assistance or support. 

International reporting portals

 

Frontline observations

International reporting portal reports received and assessed

8,374

The IWF international reporting portals provide a reporting mechanism for online child sexual abuse imagery in countries where no such facility exists. In partnership with local governments, law enforcement, industry, funders and charities, the portals offer a direct reporting route to our UK-based analysts.

Our analysts were able to action 880 reports received directly from our international portals.

 

In 2025, the IWF, funded by Safe Online, commissioned the first independent evaluation of the portals. Covering more than a decade and 53 countries, it assessed how the portals operate, their global reach, impact, and key limitations. The full report can be found here.

IWF Internet Content Analyst
Frontline observations

Some readers may find the following descriptions distressing; please feel free to skip this section.

Our international portals give people in countries without a hotline a place to report child sexual abuse material they may have stumbled across online. Some countries’ portals have two versions: one in English and one in a local language, ensuring that the reporting journey is accessible to as many people as possible.

Reports from portals often challenge us in new ways. We might be facing a language we don’t recognise, a platform that behaves differently in another region or technology that blocks UK access entirely – but our goal is the same. We use our skills and training to find alternative routes into these sites, assessing the material accurately, and ensuring child sexual abuse material is never welcome online, no matter who found it or where it’s hidden. 

What we learn through our portals strengthens our understanding of how to signpost and support international self-reporters. Our portal partners often provide us with excellent signposting information on local services that we can share with those reporters who approach the IWF directly for help with online content removal and the associated threats and harm they are experiencing.

The intelligence we feed back to our portal partners can help them understand what their communities are encountering online, and learn more about local experiences in the context of global trends. It can also inform them of any child sexual abuse material hosted within their country. The portal relationship turns isolated reports into shared understanding, and that shared understanding into action we can apply throughout the internet.