February 10, 2026 — Ahead of Safer Internet Day, UNICEF has issued an urgent call to action in response to a rapid increase in artificial-intelligence-generated sexualized images of children, warning that “deepfake abuse is abuse — and there is nothing fake about the harm it causes.”

Deepfakes — images, videos, or audio manipulated using AI to appear real — are increasingly being weaponized to create exploitative content involving minors. New findings from a multi-country survey conducted as part of UNICEF-supported Disrupting Harm research reveal the alarming scale of the problem: at least 1.2 million children across 11 countries reported that their images had been manipulated into sexually explicit deepfakes in the past year. In some nations, the rate was as high as one in every 25 children, the equivalent of a single child in a typical classroom.

Many of the manipulated images involve so-called “nudification,” in which AI tools digitally remove or alter clothing. UNICEF emphasized that the emotional and psychological consequences are immediate and profound, including shame, stigma, social isolation, and long-term trauma — even when the child never shared an image or knew the manipulation had occurred.

In a new issue brief, UNICEF explained that technological advances have dramatically lowered the barrier for perpetrators. Less than five years ago, creating such content required proprietary models and sophisticated hardware; today, open-source tools running on consumer-grade computers can produce highly realistic results.

UNICEF also cautioned that risks are magnified when generative AI tools are integrated directly into social-media platforms, allowing manipulated content to spread rapidly. While acknowledging that some developers have adopted safety-by-design approaches, UNICEF warned that protections remain inconsistent across the industry.

UNICEF Calls for Coordinated Action

To confront the growing threat of AI-generated child sexual abuse material, UNICEF urged:

  • Governments to expand legal definitions of child sexual abuse material to explicitly include AI-generated content and to criminalize its creation, possession, procurement, and distribution;
  • AI developers to implement robust safeguards and misuse-prevention measures in their systems; and
  • Digital platforms to prevent circulation of such material in the first place — not merely remove it after the harm has occurred — by strengthening moderation systems and investing in rapid-detection technologies.

UNICEF further stressed the need for resources and training for parents, educators, mental-health professionals, social-service providers, and law-enforcement agencies to better support affected children.

Even when no real child is physically involved in producing the imagery, UNICEF warned, the broader societal harm remains significant: such content normalizes the sexualization of minors and fuels demand for exploitative material. AI-generated images can also complicate criminal investigations and delay efforts to identify and protect victims.

Safer Internet Day serves as a reminder that children’s rights to protection from sexual exploitation — guaranteed under international law — must extend fully into the digital world.

Sauder Schelkopf continues to monitor developments in online child safety, AI-driven harms, and emerging regulatory responses as lawmakers and industry leaders respond to these growing risks.

For more information about this matter or to speak with an attorney, please contact us by completing the form on this page or call 1-888-711-9975.