In the aftermath of a school shooting that claimed the lives of 14 students and three staff members, students like Emma González, David Hogg, and Cameron Kasky soon became household names. They captured the attention not just of their peers but of the public at large as they toured the country in a push for common-sense gun regulation. The right took notice.
As has become commonplace on social media, partisan politics got in the way of actual debate when National Rifle Association supporters began circulating an image of González ripping apart the United States Constitution.
The image, as it turns out, was a fake.
Faked images aren’t the only hurdle in stopping the spread of misinformation, but in recent years they have become a key vehicle for it. For anyone looking for a technological fix, stopping the spread of false imagery is obviously a great place to start. While we focus on the YouTubes and Facebooks of the world, each of which is floundering in its fight against fake news, maybe it’s third parties we should be looking to for an answer.
Ash Bhat and Rohan Phadte, two UC Berkeley undergrads, think they have that answer, at least for spotting fake images. The duo recently developed a plugin, SurfSafe, that instantly checks photos against more than 100 trusted news sites and fact-checking organizations. The goal, of course, is to spot the fakes before internet users share them. The photo of González, for example, could have been snuffed out early on, before it was viewed, and shared, by millions. “The fake news we care about is the fake news that’s spreading virally,” Bhat told WIRED. “If a piece of fake news is spreading, we’ll have seen it.”
We want SurfSafe to become a solution that’s analogous to anti-virus software. We want to scan your news feed for fake news as you browse.
The solution is a simple one. When a user hovers over a photo, SurfSafe scans its entire database of digital fingerprints looking for a match. The algorithm quickly goes to work looking for the earliest instance of the image appearing on the internet. If it finds a match, it’ll surface the original image on the right side of a user’s screen. Users then have options to tag the image as Photoshopped, misleading, or propaganda — all of which will help train the algorithm as it goes.
The more people who use the plugin, the smarter it will get. Bhat says the average internet user often sees hundreds of thousands of images a day. The plugin saves the signature of all of these images, looking for subtle variations to the fingerprint, or hash, that accompany even minor edits.
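SurfSafe's actual fingerprinting algorithm isn't disclosed in the article, but the general technique it describes, a compact hash that changes only slightly when an image is lightly edited, is known as perceptual hashing. The sketch below is an illustrative stand-in, not SurfSafe's implementation: a simple "difference hash" (dHash) over a toy grayscale grid, where near-duplicate images end up a small Hamming distance apart.

```python
# A minimal sketch of perceptual fingerprinting, assuming a dHash-style
# scheme (SurfSafe's real algorithm is not public). For each pixel, emit
# a 1 if it is brighter than its right-hand neighbor, else 0; the bit
# string is the fingerprint. Minor edits flip only a few bits.

def dhash(pixels):
    """Compute a difference hash from a grid of grayscale values."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Count the bits that differ between two fingerprints."""
    return sum(x != y for x, y in zip(a, b))

# A tiny 4x5 "image" and a lightly edited copy (one pixel brightened).
original = [
    [10, 20, 30, 40, 50],
    [50, 40, 30, 20, 10],
    [15, 25, 35, 45, 55],
    [55, 45, 35, 25, 15],
]
edited = [row[:] for row in original]
edited[0][2] = 60  # the "Photoshopped" change

h1, h2 = dhash(original), dhash(edited)
print(hamming(h1, h2))  # prints 1: a small distance, so likely a match
```

A matching service like the one described above would store these fingerprints for every crawled image and flag any browsed photo whose hash falls within some small distance threshold of a known original.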
If it’s able to attract a few hundred thousand users in its first year, its creators expect the database to contain more than 100 billion fingerprints.
It’s not a perfect solution, Bhat acknowledges, but it’s a good start.
SurfSafe launches today. Chrome users can get it here.