
Tech giants agree to child safety principles around generative AI

Top companies have pledged to develop, deploy and maintain generative AI models with child safety at the centre (Yui Mok/PA)

Some of the world’s biggest tech and AI firms have agreed to follow new online safety principles designed to combat the creation and spread of AI-generated child sexual abuse material.

Amazon, Google, Meta, Microsoft and ChatGPT creator OpenAI are among the companies to have signed up to the principles, called Safety By Design.

The commitments have been drawn up by child online safety group Thorn and fellow nonprofit All Tech is Human, and see the firms pledge to develop, deploy and maintain generative AI models with child safety at the centre, in an effort to prevent the misuse of the technology in child exploitation.

The principles see firms commit to developing, building and training AI models that proactively address child safety risks, for example by ensuring training data does not include child sexual abuse material, as well as to maintaining safety after release by staying alert and responding to child safety risks as they emerge.

Generative AI tools such as ChatGPT have become the key area of development within the technology sector over the past 18 months, with an array of AI models and content generation tools developed and launched by the major firms.

The rapid rise has seen social media and other platforms flooded with AI-generated words, images and videos, with many online safety groups warning of the implications of more fake and misleading content being seen and spread online.

Earlier this year, children’s charity the NSPCC warned that young people were already contacting Childline about AI-generated child sexual abuse material.

Speaking about the new agreed principles, Dr Rebecca Portnoff, vice president of data science at Thorn, said: “We’re at a crossroads with generative AI, which holds both promise and risk in our work to defend children from sexual abuse.

“I’ve seen first-hand how machine learning and AI accelerate victim identification and child sexual abuse material detection. But these same technologies are already, today, being misused to harm children.

“That this diverse group of leading AI companies has committed to child safety principles should be a rallying cry for the rest of the tech community to prioritise child safety through Safety by Design.

“This is our opportunity to adopt standards that prevent and mitigate downstream misuse of these technologies to further sexual harm against children. The more companies that join these commitments, the better that we can ensure this powerful technology is rooted in safety while the window of opportunity is still open for action.”