Amount of AI-enabled child sexual abuse imagery increased in 2025 – report

2026/03/24 15:32
2 min read

MANILA, Philippines – The latest report from the Internet Watch Foundation (IWF), an English charity working to minimize the availability of child sexual abuse material (CSAM) hosted online, said that artificial intelligence-enabled CSAM increased in 2025 compared to the previous year.

The 2026 CSAM report, titled Harm without limits: AI child sexual abuse material through the eyes of our analysts, found that in 2025 the IWF assessed 8,029 AI-generated images and videos as showing realistic child sexual abuse, with the imagery appearing both on regular online platforms and on the dark web. This represents a 14% increase in criminal AI content over the previous year.

This AI-enabled CSAM is said to be increasingly sophisticated: 65% of it, or 2,233 pieces in total, is "realistic full-motion AI video content" graded as Classification A under the UK system.

Classification A material, the most offensive type of CSAM, involves "penetrative sexual activity, sexual activity with an animal, or sadism."

IWF analysts have also identified AI-generated child sexual abuse images shared on AI chatbot services. These services may encourage users to act out simulated child sexual abuse scenarios and, because the underlying generative models are trained on photographed abuse imagery, can directly re-victimize survivors of sexual abuse.

Artificial intelligence tools have also grown advanced enough that creating AI CSAM requires far less effort than before. The IWF says a convergence of AI tools "can now generate abusive imagery with minimal effort, removing the need for technical expertise and significantly lowering barriers to entry."

Alongside the report, polling conducted by Savanta for the IWF showed that more than four in five UK adults want regulation to ensure AI is safe by design.

The full IWF report is available here. – Rappler.com
