Image: AP

Most people know that algorithms control what you see on Facebook or Google, but automated decision-making is increasingly being used to determine real-life outcomes, affecting everything from how fire departments prevent fires to how police departments prevent crime. Given how large a role these (often secret) systems play in our lives, it's time we understood specifically how algorithms can hurt people. A new report is trying to do just that.

This week, the Future of Privacy Forum released a comprehensive taxonomy of algorithmic harms, the frequently unforeseen negative consequences of automated decision-making. These harms are dangerous because cities and organizations rely on algorithms to make large-scale decisions. Think of it this way: a child abuse caseworker who misses important clues endangers a child. But a predictive assessment algorithm that fails to flag even extreme cases can endanger thousands of children. At its core, the report is about finding the language to address a complicated problem that we should all be thinking more about: algorithms can be unfair, illegal, and useful, all at the same time. So what should we do about them?

“They’re more, sort of, societal, philosophical questions, at the moment,” Lauren Smith, who co-authored the report, tells Gizmodo. “There’s no clear way to create those overall rules.”

To grapple with these questions, the report features two charts. The first lists the potential harms of automated decision-making, categorizing the negative effects of algorithms based on how they hurt people, whether they harm individuals or larger groups, and whether there are existing legal standards for addressing them.

The second chart looks at ways to address, or at least reduce, the harms of these algorithms. For those that may be covered by existing legal standards (or are already illegal), there are clearer ways to mitigate the negative consequences, such as contacting the authorities. For those that aren't, Smith and her team argue, the absence of a law doesn't necessarily mean there should be more rules.


“We wouldn’t characterize this as ‘the ones without legal analogs need legislation,’” Smith told Gizmodo. “The ones with a legal analog represent those core values that we have in society that we’ve already enshrined in law. The tactics for mitigating harms that occur anyway through technology should be distinct from ones that are sort of posing these new questions, introducing these new societal debates.”

Let’s compare two types of algorithmic harms. In the first case, a bank uses an algorithm designed to deny all loan applications from black women. Here, there’s an existing law being broken that can be prosecuted. Now compare that example with “filter bubbles,” the kind of ideological echo chambers on social media that, in the worst cases, can radicalize people to violence. While also potentially dangerous, this problem isn’t covered by any current laws or related “legal analogs,” and Smith isn’t sure it should be.

“Is it the responsibility of the technology platform to analyze your data and say, ‘Well, this person has these views. We want to ensure that 30 percent of the news that they see comes from a different political perspective’? I’m not sure that’s a position that consumers want them to be in.”


As algorithmic tools become entrenched in every aspect of our lives, the tech industry has struggled to define its role in mitigating harm. Should Facebook use AI to filter hate speech? Should Google’s search protocols intervene when high-paying jobs are shown to men rather than women? We are only beginning to develop an understanding of fairness in algorithms, and part of that is understanding the limits on how we can address these problems. Smith points to the design process.

“There’s a big role for design there and a big role for internal processes,” Smith says, “and for [creating] ethical frameworks, or internal IRBs [institutional review boards] to be part of how they’re thinking about and understanding this.”

Reconfiguring the design process so that more ethical and philosophical questions are considered before an algorithm is put into place will go a lot further than simply relying on regulation. The first step is finding the words.


“These technologies are just beginning to evolve, so to study and understand the impacts they’re having will go a long way towards thinking about what harms they may cause and how to mitigate them,” says Smith.