Facebook Gives Us the First Real Look at Its Cesspool of Terrorist Propaganda and Hate Speech
This article is from gizmodo.com. The origin url is: https://gizmodo.com/facebook-gives-us-the-first-real-look-at-its-cesspool-o-1826042898
Although Facebook’s ability to track and remove terrorist content was one of CEO Mark Zuckerberg’s favorite talking points when he testified before Congress last month, a new report from the social media network shows that there’s still a lot of work to do. The company’s first content moderation report, released Tuesday, finds that it’s removing significantly more hate speech, propaganda, graphic violence, and sexual content.
Zuckerberg has faced increased scrutiny over content moderation as horrific tragedies like the ethnic cleansing in Myanmar have been fueled by the organizational potential of Facebook. The report compares the last quarter of 2017 to the first quarter of 2018, and it shows moderation systems are catching more objectionable content. But for most of the categories it monitors, Facebook is unable to say how large the problem actually is, making it difficult to determine whether its algorithmic systems are getting better or whether there’s simply more content that needs to be removed.
“Today, as we sit here, 99 percent of the ISIS and al-Qaeda content that we take down on Facebook, our AI systems flag before any human sees it,” Zuckerberg boasted at his Senate hearing in April. According to today’s report, that number is actually 99.5 percent, up from 96.9 percent the previous quarter. But the number of pieces of terrorist propaganda that were taken down jumped a whopping 73 percent in just a few months. While it couldn’t accurately estimate how prevalent terrorist violations on the network were, it did conclude “that the number of views of terrorist propaganda content related to ISIS, al-Qaeda and their affiliates on Facebook is extremely low” in comparison with other content that violates its policies.
It was able to estimate that fake accounts represent around 3-4 percent of its monthly active user base. The company most recently claimed to have 2.2 billion users, but the veracity of that number is always under threat, with floods of fake accounts being opened on the site at all times. In the first quarter of this year, it removed an astounding 583 million fake accounts, which is actually 111 million fewer removals than in the previous quarter. Again, it’s difficult to say whether that means Facebook’s detection systems are getting better or worse. The percentage of fake accounts it removed before a user reported them declined by less than 1 percent.
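Those two figures together imply a rough range for how many fake accounts are live at any given moment. A minimal back-of-the-envelope sketch, assuming the 3-4 percent estimate applies to the 2.2 billion monthly active users Facebook reported:

```python
# Rough range of fake accounts implied by Facebook's own figures:
# 3-4 percent of 2.2 billion monthly active users.
# Integer arithmetic avoids floating-point rounding on large counts.
monthly_active_users = 2_200_000_000

low = monthly_active_users * 3 // 100   # 3 percent of the user base
high = monthly_active_users * 4 // 100  # 4 percent of the user base

print(f"Implied fake accounts: {low:,} to {high:,}")
# → Implied fake accounts: 66,000,000 to 88,000,000
```

In other words, even after removing 583 million fake accounts in a single quarter, the company's own estimate leaves tens of millions of fakes on the platform at any given time.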
Removals of nudity and sexual content were flat, with 21 million pieces of offending content taken down in each quarter, along with a slight increase in the percentage of violating content that was seen by users before it was removed.
In an analysis accompanying the latest report, Facebook executives explained that hate speech is the most difficult challenge the company faces when automating moderation. “In some cases, our software hasn’t been sufficiently trained to automatically detect violations at scale,” the company wrote. But it said violations like “hate speech or graphic violence … require us to understand context when we review reports and therefore require review by our trained teams.” Graphic violence might have a newsworthy purpose, and hate speech can be subtle depending on who is perceiving it.
Only 38 percent of the hate speech that Facebook removed was taken down before being reported by users. Executives emphasized that “it’s important that people continue to report violations to help our enforcement efforts.” Facebook is simply operating at a scale far bigger than any company should, and AI isn’t going to fix the problem anytime soon. The company has previously said that it will double its team of human moderators to 20,000 employees and contractors by the end of 2018. At the recent congressional hearings, Zuckerberg often pointed out how difficult it is to train algorithms and team members to spot hate speech across different cultures and languages.
Because this is the first moderation report, and it only covers two quarters in the 14-year-old company’s history, we won’t really know the significance of its findings for quite some time. Facebook has promised to release a new report every six months.