
Google pledges not to develop AI weapons, but says it will still work with the military.

The company has released much-needed guidelines on its approach to AI research

This article comes from theverge.com. The original is available at: https://www.theverge.com/2018/6/7/17439310/google-ai-ethics-principles-warfare-weapons-military-project-maven


Google has released a set of principles to guide its work in artificial intelligence, making good on a promise it made last month following controversy over its involvement in a Department of Defense drone project. The document, titled “Artificial Intelligence at Google: our principles,” does not directly reference that work, but makes clear that the company will not develop AI for use in weaponry. It also outlines a number of broad guidelines for AI, touching on issues like bias, privacy, and human oversight.

While the new principles forbid the development of AI weaponry, they state that Google will continue to work with the military “in many other areas.” Speaking to The Verge, a Google representative said that had these principles been published earlier, the company would likely not have become involved in the Pentagon’s drone project, which used AI to analyze surveillance footage. Although this application was for “non-offensive purposes,” and therefore hypothetically permitted under these guidelines, the representative said it was too close for comfort — suggesting Google will play it safe with future military contracts.

As well as forbidding the development of AI for weapons, the principles say Google will not work on AI surveillance projects that violate “internationally accepted norms,” or projects that contravene “widely accepted principles of international law and human rights.” The company says that its main focuses for AI research are to be “socially beneficial.” This means avoiding unfair bias; remaining accountable to humans and subject to human control; upholding “high standards of scientific excellence”; and incorporating privacy safeguards.

“At Google, we use AI to make products more useful—from email that’s spam-free and easier to compose, to a digital assistant you can speak to naturally, to photos that pop the fun stuff out for you to enjoy,” Google CEO Sundar Pichai wrote in an accompanying blog post. “We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right.”

Google has faced significant scrutiny over its use of AI after its work for the Department of Defense was revealed in a report by Gizmodo earlier this year. Thousands of employees signed an open letter urging Google to cut ties with the program, named Project Maven, and roughly a dozen employees resigned over the company’s continued involvement.

Google says it plans to honor its contract with the Pentagon, but will end its involvement with Project Maven when the contract expires in 2019. A blog post by Google Cloud CEO Diane Greene described the work as simply “low-res object identification using AI.” However, it was reported that the work was, in part, a trial run for Google as it sought a lucrative contract with the Pentagon estimated to be worth $10 billion. IBM, Microsoft, and Amazon are all thought to be competing for it, and a Google representative confirmed to The Verge that the company would continue to pursue parts of the contract — if the work in question fit these new principles.

Google’s decision to outline its ethical stance on AI development comes after years of worry over the impending threat posed by automated systems, as well as more sensational warnings about the development of artificial general intelligence — or AI with human-level intelligence. Just last month, a coalition of human rights and technology groups came together to put out a document titled The Toronto Declaration that calls for governments and tech companies to ensure AI respects basic principles of equality and nondiscrimination.

Over the years, criticism and commentary regarding AI development have come from a wide-ranging group, from pessimists on the subject like Tesla and SpaceX founder Elon Musk to more reasonable voices in the industry like Facebook scientist Yann LeCun. Now, Silicon Valley companies are beginning to put more significant resources toward AI safety research, with help from ethics-focused organizations like the nonprofit OpenAI and other research groups around the world.

However, as Google’s new ethical principles demonstrate, it’s difficult to make rules that are broad enough to encompass a wide range of scenarios, but flexible enough to not exclude potentially useful work. As ever, public scrutiny and debate are necessary to ensure that AI is deployed fairly and in a socially beneficial manner. Google will have to get used to talking about it.

Update June 7th, 5:00PM ET: Updated with additional comment from Google.
