Major AI companies agree to tackle child abuse images
Leading artificial intelligence companies, including Google, Meta, OpenAI, Microsoft and Amazon, have agreed to take steps to keep child sexual abuse material (CSAM) out of the training datasets used to build AI models.
Here's What We Know
The tech giants have signed a new set of principles aimed at limiting the spread of CSAM. They pledge to ensure that their training data is free of such material, to avoid datasets that carry a high risk of containing it, and to remove CSAM images, or links to them, from data sources.
In addition, the companies intend to "stress test" AI models to confirm they do not generate CSAM, and to release models only after they have been assessed for child safety.
Companies such as Anthropic, Civitai, Metaphysic, Mistral AI and Stability AI have also joined the initiative.
The growing popularity of generative AI has fuelled a proliferation of fake images online, including synthetic child sexual abuse imagery. Recent research has revealed links to CSAM in some popular training datasets for AI models.
Thorn, a non-profit organisation dedicated to fighting child abuse that helped develop the guidelines, warns that AI-generated CSAM could hamper efforts to identify victims, create additional demand, and make it easier to find and distribute such material.
Google says that, along with adopting the principles, it has also increased advertising grants to the National Center for Missing &amp; Exploited Children (NCMEC) to promote campaigns that raise awareness and give people tools to identify and report abuse.
Source: The Verge