
The Internet Watch Foundation works to identify, remove, and prevent the spread of child sexual abuse material (CSAM) online, including imagery of real children and material generated using AI tools.
The 2025 Annual Data & Insights Report examines how child sexual abuse material is created, distributed, and monetised, and the systemic challenges that allow it to persist online. It highlights several areas of particular concern.
These findings reflect insights drawn from the IWF’s operational work, including proactive detection activities. They should not be interpreted as a measure of global prevalence.
Our analysis is based on verified, victim-centred assessment by trained analysts and image specialists, drawing on multiple operational data sources.
Together, these sources provide a multi-layered understanding of how online abuse emerges, spreads, and persists.
In 2025, analysts identified several emerging and evolving risks shaping the online child sexual abuse landscape, including the rapid growth of AI-generated CSAM, persistent gendered sexual abuse targeting girls, and the victimisation of older teenagers. These trends highlight how technological change, social dynamics, and criminal exploitation intersect to create new forms of harm online.
We saw a sharp rise in the volume, realism, and severity of AI-generated child sexual abuse videos.
Generative AI tools, including video models, nudification apps, subscription platforms, and agentic AI systems, have lowered technical barriers, enabling offenders with minimal expertise to produce and distribute illegal content at scale. AI is being used to generate synthetic abuse, manipulate images of real children, and produce explicit chats with simulated child characters. Early signs of commercialisation are emerging, with subscription-based services offering tailored content creation.
When AI systems are trained on real victims’ imagery, synthetic material prolongs harm and enables re-victimisation. Some content is used for blackmail or sexually motivated extortion. Open-source AI tools further lower barriers, allowing offenders to adapt and deploy harmful content with minimal oversight.
Swift action by legislators and technology companies is needed to stop AI technology from being exploited to create child sexual abuse material and to perpetrate violence against women and girls. This includes regulatory requirements to ensure AI products are safe by design, banning nudification apps and tools, and closing legal loopholes so that AI-generated material is treated the same as other forms of CSAM in jurisdictions beyond the UK.
Girls remain disproportionately represented in sexual abuse imagery, both real and AI-generated.
Analysts frequently encounter violent sexualisation, misogynistic framing and degrading scenarios. Voyeuristic and non-consensual material circulates in “exposing” spaces where girls’ bodies are commodified for rating, identification and abusive commentary. AI tools amplify harm by recreating abuse and generating sexualised depictions at scale.
These patterns reflect entrenched gendered sexual violence online, fuelled by societal norms, power imbalances and misogyny. Non-consensual sharing, voyeurism, and AI manipulation strip girls of control over their image, increasing the risk of repeated circulation and re-victimisation.
“A sense of commodification and ownership of women’s and girls’ bodies is strongly detectable in certain online spaces.”
Older teenagers are increasingly caught in cycles of abuse involving ‘self-generated’ imagery, leaks, AI manipulation and sexual extortion. Boys are disproportionately represented in reports to our child reporting services and in sexually coerced extortion cases.
Images are often self-captured in private settings and later leaked, manipulated or shared under pressure. Once online, content spreads rapidly across platforms, sometimes reaching adult platforms where teens are mistaken for adults. Sextortion cases escalate quickly, with offenders demanding additional images or payments. Some imagery is repackaged into humiliating collages, increasing shame and compliance.
The combination of ‘self-generated’ content, leaks and coercion is creating a fast-growing, interconnected ecosystem of harm. Once shared, images can resurface repeatedly, amplifying distress and risk.
“What feels alarming is how the extorter threatens the child. They employ emotional manipulation and use intimidating, aggressive language and threats that escalate rapidly after nudes are taken.”
Responsibility for hosting and blocking child sexual abuse material is fragmented across technical, commercial, and regulatory layers, often spanning jurisdictions with differing laws. Its persistence reflects the combined effects of technology, infrastructure, commercial interests, and scalability pressures, which can overshadow user safety.
A small number of jurisdictions host a disproportionate share of confirmed child sexual abuse material.
These URLs are often concentrated in a few high-volume sites, and changes in rankings reflect sites emerging, migrating, or being disrupted. When child sexual abuse material is concentrated on a few high-volume sites in jurisdictions with slower or inconsistent takedown, material remains accessible for longer, increasing the risk it will be copied, redistributed, or reposted elsewhere. The UK demonstrates that rapid, collaborative removal is effective and can limit exposure.
Effective child protection therefore depends on faster, more consistent international enforcement approaches, supported by coordinated action across industry and regulatory partners.
“There are more factors to consider when reporting child sexual abuse material that is hosted internationally: legal parameters… language barriers… or a company structure that may make it hard to find an abuse contact.”
Child sexual abuse material distribution is becoming more resilient and widespread, with offenders exploiting weaknesses across internet infrastructure to evade detection and quickly rebuild operations.
Offenders increasingly rely on image-hosting services to upload large collections of child sexual abuse material, which are then embedded across forums and blogs. Removed content is rapidly reposted to alternative pre-registered domains or reappears under new domain endings (TLD hopping), often featuring the same material and victims. Legitimate platforms are frequently abused, and takedowns targeting only specific URLs remove content temporarily but do not prevent rapid re-uploads, limiting the overall effectiveness of enforcement.
This adaptive behaviour creates multi-layered resilience, allowing material to persist across the internet. Without coordinated action across registries, registrars, hosting providers, image hosts, and platforms, these distribution pathways remain open, increasing systemic risk.
Measures of this kind target domains, hosting infrastructure and access points. However, lasting systemic impact depends on broader industry alignment and shared responsibility.
“One especially persistent method of distribution is image hosting services that have been reported and removed by us, appearing again almost instantly under a slightly different domain name.”
Criminal networks profit from child sexual abuse material by disguising websites, routing users through monetised pathways, and exploiting viral recruitment mechanisms.
Operators hide criminal material behind adult content or maintenance pages, using referrals, viral invites, and AI-driven content to funnel users toward abusive material. Invite Child Abuse Pyramid (ICAP) sites exemplify this approach, combining recruitment and monetisation in structured networks. Delays in takedown of reported ICAP URLs allow offenders to continue distributing content and generating profit. Payment routes may be concealed or routed through encrypted messaging channels, increasing resilience.
Profit incentives embed child sexual abuse material deeper into the online ecosystem, sustaining demand, normalising abuse, and allowing content to persist across multiple sites. Disguised infrastructure, referral systems, digital advertising, and encrypted payments make disruption slower and more complex. Effective mitigation depends on coordinated action across core stakeholders, including financial institutions, connectivity providers, platforms, image-hosting services, and digital advertising networks.
“We processed more than 10,000 ICAP reports in 2025… These sites have evolved… most recently using AI-generated videos of children on the login page.”
We combine specialist analysts, technical solutions, and global partnerships to detect, disrupt, remove, and prevent child sexual abuse material (CSAM) at scale. Our work depends on collaboration with industry, regulators, civil society, and law enforcement.
We use specialised technology to actively find child sexual abuse material and maintain a growing hash database to identify known child sexual abuse material across the internet.
We work with partners to block and disrupt access to child sexual abuse material, using temporary and permanent measures to prevent exposure while content is removed.
We co-develop, test and train solutions with technology companies, from small startups to global organisations, to protect children from harm, including on-device AI classifiers and privacy-preserving digital forensics.
We collaborate with governments, regulators, law enforcement, and tech partners to influence laws, policies, and standards that protect children, promote online safety, and ensure platforms act responsibly. We champion proactive detection, reporting, and removal of child sexual abuse material and embed child protection in emerging technologies.
We share data, insights, and guidance with the child protection sector, law enforcement, technology companies, educators, parents, and children to help keep them safe online.
The scale and complexity of these harms demand coordinated action across sectors, jurisdictions, and systems.
Policymakers: Robust child-safety regulation must compel services to prevent, detect, and remove child sexual abuse material, including upload-prevention safeguards, safety-by-design, and coordinated international standards. Urgent implementation is needed to close the gaps that allow abuse to persist.
Internet Infrastructure Providers: Companies operating the internet’s core infrastructure, including registries, registrars, hosting providers, filtering companies, search engines and payment providers, should join the IWF. Rapid responses to alerts, proactive blocking tools, and coordinated disruption of redistribution routes help remove child sexual abuse material and limit its spread across the internet’s infrastructure.
Technology Builders: Companies that build platforms, AI systems, and software must ensure their products cannot be misused to generate, manipulate, or distribute child sexual abuse material. Embedding safety-by-design, strong safeguards, and proactive detection, and collaborating with the IWF to share insights and co-develop protective tools, can prevent abuse at scale.
Research partners: We invite researchers and data specialists to share anonymised data, develop analytical tools, and run joint projects. Together, we can identify emerging threats, test interventions, and strengthen evidence-based child protection.
This work is made possible by IWF Members, funders, hotlines, international partners, and law enforcement colleagues. We thank our analysts, assessors, and data specialists, whose expertise underpins these insights.
Looking ahead, we will continue to invest in technology, partnerships, and child-centred services to prevent victimisation and make the internet safer.