Thorn, an organization dedicated to defending children from sexual abuse, and All Tech Is Human, which works to address complex issues at the intersection of technology and society, yesterday announced a commitment from several leading generative AI companies to “develop, build and train generative AI models that proactively address child safety risks.”

Amazon, Anthropic, Civitai, Google, Meta, Metaphysic, Microsoft, Mistral AI, OpenAI and Stability AI have committed to the document Safety by Design for Generative AI: Preventing Child Sexual Abuse, which defines mitigation measures and practical strategies that developers, vendors, data hosting platforms, social platforms, and search engines can adopt to apply these principles.

Among these measures, the AI companies promise to “responsibly source” their training data sets, as well as to “detect and remove child sexual abuse material and child sexual exploitation material from training data, and report any confirmed cases to relevant authorities.”

The document also outlines a series of strategies for prevention and for addressing “misuse by adversaries.” Generative AI models are to be published and distributed only “after having been trained and evaluated to ensure the safety of children, providing protections throughout the process.”