
Google unleashes its new image-detection AI on child abuse content online


This article comes from thenextweb.com. Source URL: https://thenextweb.com/artificial-intelligence/2018/09/04/google-sics-its-new-ai-on-child-abuse-images-online/


Google’s latest attempt to battle the spread of child sexual abuse material (CSAM) online comes in the form of an AI that can quickly identify images that haven’t been previously catalogued.

It’s part of the company’s Content Safety API, which is available to NGOs and other bodies working on this issue. By automating the process of rifling through images, the AI not only speeds things up, but also reduces the number of people required to be exposed to it – a job that can take a serious psychological toll.

Given that the UK’s Internet Watch Foundation (IWF) found nearly 80,000 sites hosting CSAM last year, it’s clear that the problem isn’t close to being contained. Google’s been on this mission for several years now, and it’s certainly not the only tech firm that’s taking steps to curb the spread of CSAM on the web.

It previously began removing search results related to such content back in 2013, and subsequently partnered with Facebook, Twitter and the IWF to share lists of file hashes that would help identify and remove CSAM files from their platforms.
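The hash-sharing approach described above can be sketched in a few lines. This is a minimal illustration, not Google's or Facebook's actual system: real deployments use perceptual hashing (such as Microsoft's PhotoDNA) so that re-encoded or slightly altered copies still match, whereas plain SHA-256, used here for simplicity, only matches byte-identical files.

```python
import hashlib

def file_hash(data: bytes) -> str:
    """Return the SHA-256 hex digest of a file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def is_known(data: bytes, known_hashes: set) -> bool:
    """Check whether a file's hash appears in a shared hash list."""
    return file_hash(data) in known_hashes

# Hypothetical shared hash list built from previously identified files.
known = {file_hash(b"example-known-file")}

print(is_known(b"example-known-file", known))   # True: exact match
print(is_known(b"new, unseen content", known))  # False: not in the list
```

The limitation this sketch makes obvious is exactly the one the article raises: a hash list can only flag content that has already been catalogued, which is why a classifier that evaluates new, unseen images is a meaningful step forward.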

Microsoft worked on a similar project back in 2015, and Hollywood star Ashton Kutcher founded Thorn, an NGO focused on building tech tools to fight human trafficking and child sexual exploitation. One of its projects, dubbed Spotlight, helps law enforcement officials by identifying ads on classifieds sites and forums promoting escort services involving minors.

Google’s new AI goes beyond looking at known hashes, so it’ll hopefully be able to tackle new content without having to rely on databases of previously identified CSAM.

Find out more about the new service, which is available for free to NGOs, on this page.

