Several major artificial-intelligence companies have committed to guidelines aimed at preventing the creation of AI-generated child sexual abuse material.

What You Need To Know

  • Several major artificial-intelligence companies have committed to guidelines aimed at preventing the creation of AI-generated child sexual abuse material

  • The companies making the pact include Amazon, Google, Meta, Microsoft and OpenAI, among others

  • Thorn, a nonprofit group that builds technology to protect children from sexual abuse, and All Tech Is Human, an organization focused on promoting responsible tech development, organized the initiative

  • The alliance of AI companies has agreed to develop, deploy and maintain generative AI technologies following principles aimed at making their products less likely to create abuse material, making such content easier to detect reliably and limiting its distribution

The companies making the pact include Amazon, Google, Meta, Microsoft and OpenAI, among others. Thorn, a nonprofit group that builds technology to protect children from sexual abuse, and All Tech Is Human, an organization focused on promoting responsible tech development, organized the initiative, which was announced Tuesday. 

"We're at a crossroads with generative AI, which holds both promise and risk in our work to defend children from sexual abuse,” Rebecca Portnoff, vice president of data science at Thorn, said in a statement. “ … That this diverse group of leading AI companies has committed to child safety principles should be a rallying cry for the rest of the tech community to prioritize child safety.”

The announcement comes a day after Stanford University’s Internet Observatory released a report saying a flood of AI-generated sexual abuse material threatens to overwhelm the tip line at the National Center for Missing and Exploited Children, a nonprofit group that works with federal law enforcement.

Generative artificial intelligence allows users to easily create realistic video, pictures, audio and text. Bad actors may also use AI to manipulate authentic, benign images of children into sexualized content. 

Child safety advocates warn the volume of realistic, AI-generated child abuse images makes the already challenging job of identifying and rescuing real victims even harder. 

The alliance of AI companies has agreed to develop, deploy and maintain generative AI technologies following principles aimed at making their products less likely to create abuse material, making such content easier to detect reliably and limiting its distribution. Each company has committed to release progress updates.

Some of the guidelines call for safeguarding the datasets used to train AI systems against child sexual abuse material, continuously testing models to better understand their capacity to produce abusive content, and investing in ongoing research to keep pace with evolving threats.

The principles are outlined in a paper titled “Safety by Design for Generative AI: Preventing Child Sexual Abuse.”

"It is imperative to prioritize child safety when building new technologies, including generative AI," Laurie Richardson, Google’s vice president of trust and safety, said in a statement. " … We commend Thorn and All Tech Is Human for bringing key players in the industry together to standardize how we combat this material across the ecosystem."

Some of the companies said they already had teams working on the issue but welcomed taking additional steps.

“Today’s commitment marks a significant step forward in preventing the misuse of AI technologies to create or spread child sexual abuse material (AIG-CSAM) and other forms of sexual harm against children,” Microsoft Chief Digital Safety Officer Courtney Gregoire wrote in a blog post. “This collective action underscores the tech industry’s approach to child safety, demonstrating a shared commitment to ethical innovation and the well-being of the most vulnerable members of society.”

Chelsea Carlson, who oversees child safety at OpenAI, said the firm is “committed to working alongside Thorn, All Tech is Human and the broader tech community to uphold the Safety by Design principles and continue our work in mitigating potential harms to children.”

Other companies making commitments are Anthropic, Civitai, Metaphysic, Mistral AI and Stability AI.