The United Nations Children’s Fund (UNICEF) noted there is a rise in the volume of AI-generated sexualized images of children circulating on the internet.

UNICEF urges governments to criminalize AI-generated child abuse material


The United Nations Children’s Fund (UNICEF) on Wednesday pointed to reports of a rapid surge in the volume of AI-generated sexualized images circulating on the internet, mainly cases in which photographs of children have been manipulated and sexualized.

The agency argued that the rise in AI-powered image or video generation tools producing child sexual abuse materials escalates the risks to children through digital technologies. UNICEF also called for urgent action by governments and the industry to prevent the creation and spread of AI-generated sexual content of children.

Sexualized AI-generated content causes long-term emotional harm to children

The UN organization noted that less than five years ago, high-quality generative models required significant computing power and expertise. Today's open-source models, however, make it far easier for perpetrators to create sexual abuse content.

The agency believes that even when no real child is directly involved, such content normalizes the sexualization of children and complicates victim identification. Perpetrators, UNICEF argued, can create realistic sexual images of a child without the child's involvement or awareness.

UNICEF said such content can violate a child’s right to protection without the child even knowing it has happened. Affected children, the agency added, face shame, stigma, judgment from peers and adults, social isolation, and long-term emotional harm.

UNICEF also revealed that the rise in accessibility of AI-powered image and video generation tools has led to a surge in the production and spread of child sexual abuse material (CSAM). The UK’s Internet Watch Foundation (IWF) found approximately 14,000 suspected AI-generated images on a single dark-web forum dedicated to child sexual abuse material in just one month. The report disclosed that a third of the material was criminal and included the first realistic AI-generated videos of child sexual abuse.

The IWF also discovered 3,440 AI-generated videos of child sexual abuse, a 26,362% surge from the 13 videos found the previous year. Of those, 2,230 (65%) were classified as Category A, the most extreme classification, while a further 1,020 (30%) fell under Category B.

The IWF also identified AI CSAM on mainstream platforms, including deepfake nudes created in peer-to-peer contexts targeting girls. The organization also pointed to Korea, where law enforcement reported a tenfold surge in sexual offenses involving AI and deepfake technologies between 2022 and 2024.

In those AI and deepfake cases, teenagers constituted the majority of the accused. A survey by Thorn found that 1 in 10 teens in the U.S. knew of cases where friends had used AI tools to create synthetic non-consensual intimate images of children.

UNICEF, ECPAT International, and INTERPOL also found that across 11 countries, around 1.2 million children had their images manipulated into sexually explicit deepfakes through AI tools in 2025. The agencies also reported that in some of those countries, up to two-thirds of children worry that AI could be used to create fake sexual images of them.

UNICEF argued that parents and caregivers need to be informed about AI-enabled sexual exploitation and abuse. The agency also called for schools to educate students about AI-related risks and the harm they cause to affected individuals.

Countries distance themselves from Grok over its sexualized AI deepfakes

UNICEF’s report comes as Elon Musk’s AI tool Grok rolled out features that prevent it from editing real photos of real people to show them in revealing clothing in countries where doing so is prohibited. The move followed widespread concern over sexualized AI deepfakes.

The UK government called on X to rein in Grok, while the regulator Ofcom said it is working around the clock on the issue. X said in a statement that it had geoblocked all users’ ability to generate images of real people in bikinis, underwear, and similar clothing via the Grok account and within Grok on X in countries where such content is prohibited.

Malaysia and Indonesia also blocked access to Grok last month over its ability to generate sexually explicit deepfakes. Cryptopolitan reported earlier this month that Indonesia had allowed Grok to resume operations after X committed to improving compliance with the country’s laws.

