
Executive summary


 

Headline statistics


 

451,210 reports assessed (+6% YoY). This is equivalent to one report every 70 seconds.

 


 

311,610 reports confirmed as, or linked to, child sexual abuse material (+7% YoY). This is equivalent to one every 101 seconds.

 


 

1,894 Report Remove submissions received (+66%), leading to 1,175 actioned reports.

 


 

63,682 child sexual abuse videos assessed (+50% YoY), including a 29% increase in Category A material, the most severe forms of abuse.

 

Our approach & evidence base

We want to see a safer internet where child sexual abuse and exploitation cannot happen.

 

The Internet Watch Foundation works to identify, remove, and prevent the spread of child sexual abuse material (CSAM) online, including imagery of real children and material generated using AI tools.

The 2025 Annual Data & Insights Report examines how child sexual abuse material is created, distributed, and monetised, and the systemic challenges that allow it to persist online. It highlights several areas of particular concern, including the rapid growth of AI-generated child sexual abuse material, persistent gendered sexual abuse targeting girls, the victimisation of older teenagers, and the infrastructure and commercial incentives that sustain distribution.

 

These findings reflect insights drawn from the IWF’s operational work, including proactive detection activities. They should not be interpreted as a measure of global prevalence.

Our analysis is based on verified, victim-centred assessment by trained analysts and image specialists, drawing on multiple sources of operational data.

Together, these sources provide a multi-layered understanding of how online abuse emerges, spreads, and persists.

 

 

 

This executive summary highlights key findings and priorities for collective action. The full report provides deeper analysis of trends, systemic risks, and the evolving online harm landscape in 2025.

 

 

 

Emerging and persistent harms

In 2025, analysts identified several emerging and evolving risks shaping the online child sexual abuse landscape, including the rapid growth of AI-generated CSAM, persistent gendered sexual abuse targeting girls, and the victimisation of older teenagers. These trends highlight how technological change, social dynamics, and criminal exploitation intersect to create new forms of harm online. 

 


 

AI-generated child sexual abuse material

We saw a sharp rise in the volume, realism, and severity of AI-generated child sexual abuse videos.

 


 

Generative AI tools, including video models, nudification apps, subscription platforms, and agentic AI systems, have lowered technical barriers, enabling offenders with minimal expertise to produce and distribute illegal content at scale. AI is being used to generate synthetic abuse, manipulate images of real children, and produce explicit chats with simulated child characters. Early signs of commercialisation are emerging, with subscription-based services offering tailored content creation.

When AI systems are trained on real victims’ imagery, synthetic material prolongs harm and enables re-victimisation. Some content is used for blackmail or sexually motivated extortion. Open-source AI tools further lower barriers, allowing offenders to adapt and deploy harmful content with minimal oversight.

 

Frontline observations


 

Policy overview

Swift action by legislators and technology companies is needed to stop AI technology from being exploited to create child sexual abuse material and to perpetrate violence against women and girls. This includes regulatory requirements to ensure AI products are safe by design, banning nudification apps and tools, and closing legal loopholes so that AI-generated material is treated the same as other forms of CSAM in jurisdictions beyond the UK.

 

 


 

Gendered sexual harm
(violence against women & girls)

Girls remain disproportionately represented in sexual abuse imagery, both real and AI-generated.


Analysts frequently encounter violent sexualisation, misogynistic framing and degrading scenarios. Voyeuristic and non-consensual material circulates in “exposing” spaces where girls’ bodies are commodified for rating, identification and abusive commentary. AI tools amplify harm by recreating abuse and generating sexualised depictions at scale.

These patterns reflect entrenched gendered sexual violence online, fuelled by societal norms, power imbalances and misogyny. Non-consensual sharing, voyeurism, and AI manipulation strip girls of control over their image, increasing the risk of repeated circulation and re-victimisation.

 


 

Quotes from the frontline
"Sinister sexualisation with violent undertones."

“A sense of commodification and ownership of women’s and girls’ bodies is strongly detectable in certain online spaces.”


 

Older teen victimisation

Older teenagers are increasingly caught in cycles of abuse involving ‘self-generated’ imagery, leaks, AI manipulation and sexual extortion. Boys are disproportionately represented through our child reporting services and in sexually coerced extortion cases.


Images are often self-captured in private settings and later leaked, manipulated or shared under pressure. Once online, content spreads rapidly across platforms, sometimes reaching adult platforms where teens are mistaken for adults. Sextortion cases escalate quickly, with offenders demanding additional images or payments. Some imagery is repackaged into humiliating collages, increasing shame and compliance.

The combination of ‘self-generated’ content, leaks and coercion is creating a fast-growing, interconnected ecosystem of harm. Once shared, images can resurface repeatedly, amplifying distress and risk.

Our Response: The IWF continues to support children through Report Remove, while working with industry to adopt child sexual abuse material hashing, strengthen verification and monitoring processes, and escalate sextortion cases to safeguarding partners. Collaboration with the adult sector, technology platforms and regulators is critical to reduce exposure, protect teens and disrupt exploitation at scale.

Quotes from the frontline
“Pay — or see your images sent to family, friends and schools.”

“What feels alarming is how the extorter threatens the child. They employ emotional manipulation and use intimidating, aggressive language and threats that escalate rapidly after nudes are taken.”

Systemic conditions enabling child sexual abuse material distribution

Responsibility for hosting and blocking child sexual abuse material is fragmented across technical, commercial, and regulatory layers, often spanning jurisdictions with differing laws. Its persistence reflects the combined effects of technology, infrastructure, commercial interests, and scalability pressures, which can overshadow user safety.

 

Child sexual abuse material hosting hotspots

A small number of jurisdictions host a disproportionate share of confirmed child sexual abuse material.


Most confirmed child sexual abuse material URLs are hosted in a small number of jurisdictions, often concentrated on a few high-volume sites. Changes in rankings reflect sites emerging, migrating, or being disrupted. Where takedown is slower or inconsistent, material remains accessible for longer, increasing the risk that it will be copied, redistributed, or reposted elsewhere. The UK demonstrates that rapid, collaborative removal is effective and can limit exposure.

Effective child protection therefore depends on faster, more consistent international enforcement approaches, supported by coordinated action across industry and regulatory partners.

 


 

 


 

Quotes from the frontline
“We monitor removals daily — some hosts act quickly; others require more steps and take longer.”

“There are more factors to consider when reporting child sexual abuse material that is hosted internationally: legal parameters… language barriers… or a company structure that may make it hard to find an abuse contact.”

 

Online recidivism & infrastructure evasion

Child sexual abuse material distribution is becoming more resilient and widespread, with offenders exploiting weaknesses across internet infrastructure to evade detection and quickly rebuild operations.


Offenders increasingly rely on image-hosting services to upload large collections of child sexual abuse material, which are then embedded across forums and blogs. Removed content is rapidly reposted to alternative pre-registered domains or reappears under new domain endings (TLD hopping), often featuring the same material and victims. Legitimate platforms are frequently abused, and takedowns targeting only specific URLs remove content temporarily but do not prevent rapid re-uploads, limiting the overall effectiveness of enforcement. 

This adaptive behaviour creates multi-layered resilience, allowing material to persist across the internet. Without coordinated action across registries, registrars, hosting providers, image hosts, and platforms, these distribution pathways remain open, increasing systemic risk.

Our Response: The IWF uses several tools to disrupt repeat CSAM activity across the internet infrastructure.

Together, these measures target domains, hosting infrastructure and access points. However, lasting systemic impact depends on broader industry alignment and shared responsibility.


 


 

Quotes from the frontline
“The same thousands of files… the same children… re-uploaded almost immediately.”

“One especially persistent method of distribution is image hosting services that have been reported and removed by us, appearing again almost instantly under a slightly different domain name.”

 

Commercialised child sexual abuse material distribution networks

Criminal networks profit from child sexual abuse material by disguising websites, routing users through monetised pathways, and exploiting viral recruitment mechanisms.


Operators hide criminal material behind adult content or maintenance pages, using referrals, viral invites, and AI-driven content to funnel users toward abusive material. Invite Child Abuse Pyramid (ICAP) sites exemplify this approach, combining recruitment and monetisation in structured networks. Delays in takedown of reported ICAP URLs allow offenders to continue distributing content and generating profit. Payment routes may be concealed or routed through encrypted messaging channels, increasing resilience.

Profit incentives embed child sexual abuse material deeper into the online ecosystem, sustaining demand, normalising abuse, and allowing content to persist across multiple sites. Disguised infrastructure, referral systems, digital advertising, and encrypted payments make disruption slower and more complex. Effective mitigation depends on coordinated action across core stakeholders, including financial institutions, connectivity providers, platforms, image-hosting services, and digital advertising networks. 

 


 

Quotes from the frontline
“A persistent, profit-driven network that shifts domains and referral pathways to stay online.”

“We processed more than 10,000 ICAP reports in 2025… These sites have evolved… most recently using AI-generated videos of children on the login page.”

How the IWF tackles child sexual abuse & exploitation online

We combine specialist analysts, technical solutions, and global partnerships to detect, disrupt, remove, and prevent child sexual abuse material (CSAM) at scale. Our work depends on collaboration with industry, regulators, civil society, and law enforcement.

 

Member services

 


Children's services

 


Operational activity

 


What we do

Detect

We use specialised technology to actively find child sexual abuse material and maintain a growing hash database to identify known child sexual abuse material across the internet.

Disrupt

We work with partners to block and disrupt access to child sexual abuse material, using temporary and permanent measures to prevent exposure while content is removed.

Innovate

We co-develop, test and train solutions with technology companies, from small startups to global organisations, to protect children from harm, including on-device AI classifiers and privacy-preserving digital forensics.

Advocate for change

We collaborate with governments, regulators, law enforcement, and tech partners to influence laws, policies, and standards that protect children, promote online safety, and ensure platforms act responsibly. We champion proactive detection, reporting, and removal of child sexual abuse material and embed child protection in emerging technologies.

Educate

We share data, insights, and guidance with the child protection sector, law enforcement, technology companies, educators, parents, and children to help keep them safe online.

How you can help

The scale and complexity of these harms demand coordinated action across sectors, jurisdictions, and systems.

 

Policymakers: Robust child-safety regulation must compel services to prevent, detect, and remove child sexual abuse material, including upload-prevention safeguards, safety-by-design, and coordinated international standards. Urgent implementation closes gaps that allow abuse to persist.

Internet Infrastructure Providers: Companies operating the internet’s core infrastructure, including registries, registrars, hosting providers, filtering companies, search engines and payment providers, should join the IWF. Rapid responses to alerts, proactive blocking tools, and coordinated disruption of redistribution routes help remove child sexual abuse material and limit its spread across the internet’s infrastructure.

Technology Builders: Companies that build platforms, AI systems, and software must ensure their products cannot be misused to generate, manipulate, or distribute child sexual abuse material. Embedding safety-by-design, strong safeguards, and proactive detection, and collaborating with the IWF to share insights and co-develop protective tools, can prevent abuse at scale.

Research partners: We invite researchers and data specialists to share anonymised data, develop analytical tools, and run joint projects. Together, we can identify emerging threats, test interventions, and strengthen evidence-based child protection.

 

 


 

Thanks & forward look

This work is made possible by IWF Members, funders, hotlines, international partners, and law enforcement colleagues. We thank our analysts, assessors, and data specialists, whose expertise underpins these insights.

Looking ahead, we will continue to invest in technology, partnerships, and child-centred services to prevent victimisation and make the internet safer.

 

Together, we can shrink the space in which offenders operate and uphold every child’s right to be safe online.