YouTube removed 6 million crappy videos before anyone ever saw them in Q4 2017
As the world’s largest video platform, YouTube has a responsibility to police its network to ensure that it doesn’t host videos that violate its community guidelines – meaning no sexually explicit, hateful, or gratuitously violent content.
It’s made plenty of mistakes in its effort to do so, including removing legit videos, monetizing channels promoting pedophilia and Nazis, accidentally blocking numerous alt-right channels, and allowing its search engine to autosuggest some disturbing queries. Now, it’s revealed some interesting numbers that illustrate just how much crap it has to deal with.
In its first-ever quarterly Community Guidelines enforcement report, the company noted that it removed some 8.3 million videos that violated its terms of service between October and December 2017 – of which some 6.7 million were flagged automatically by its bots, and 75 percent of those were removed before they racked up a single view.
That’s heartening to know: as numerous stories dating back to 2012 indicate, content moderation jobs at companies like YouTube and Facebook, which require staffers to watch scores of flagged videos that often contain horrific content, can crush employees’ souls. The more of that work we can entrust to machines, the better.
YouTube also received some 9.3 million flags on videos from humans in the last quarter of 2017, with viewers from India, the US, and Brazil leading the charge in reporting clips. The most common reason for flagging a video was because it was found to contain sexually explicit content (accounting for 30 percent of all flags); videos containing spam or misleading content were reported nearly as often, accounting for 26.4 percent of all flags.
The company clearly has a lot more work to do in shaping its strategies for battling unsavory content, but it’s interesting to learn that it’s making great strides both with its own tech and with the help of its community.
Find the full report on this page.