TikTok, Snapchat, OnlyFans and others to combat AI-generated child abuse content
Major social platforms, AI companies, governments and NGOs issued a joint statement pledging to combat AI-generated abusive content, such as explicit images of children.
A coalition of major social media platforms, artificial intelligence (AI) developers, governments and non-governmental organizations (NGOs) has issued a joint statement pledging to combat abusive content generated by AI.
On Oct. 30, the United Kingdom published the policy statement, which carries 27 signatories, among them the governments of the United States, Australia, Korea, Germany and Italy, along with the social media platforms Snapchat, TikTok and OnlyFans.
It was also co-signed by the AI companies Stability AI and Ontocord.AI, as well as a number of NGOs working toward internet safety and children’s rights.
The statement says that while AI offers “enormous opportunities” in tackling threats of online child sexual abuse, it can also be utilized by predators to generate such types of material.
It cited data from the Internet Watch Foundation showing that, of 11,108 AI-generated images shared on a dark web forum over a one-month period, 2,978 depicted content related to child sexual abuse.
Related: US President Joe Biden urges tech firms to address risks of AI
The U.K. government said the statement stands as a pledge to “seek to understand and, as appropriate, act on the risks arising from AI to tackling child sexual abuse through existing fora.”
“All actors have a role to play in ensuring the safety of children from the risks of frontier AI.”
It encouraged transparency around plans for measuring, monitoring and managing the ways AI can be exploited by child sexual offenders, as well as country-level policymaking on the topic.
Additionally, it aims to maintain a dialogue around combating child sexual abuse in the AI age. This statement was released in the run-up to the U.K. hosting its global summit on AI safety this week.
Concerns over child safety in relation to AI have been a major topic of discussion in the face of the rapid emergence and widespread use of the technology.
On Oct. 26, 34 U.S. states filed a lawsuit against Meta, the parent company of Facebook and Instagram, over child safety concerns.
Magazine: AI Eye: Get better results being nice to ChatGPT, AI fake child porn debate, Amazon’s AI reviews