
DeepMind, Skype, and Tesla founders pledge never to build killer robots


This article is from thenextweb.com. Source URL: https://thenextweb.com/artificial-intelligence/2018/07/18/deepmind-skype-and-tesla-founders-pledge-never-to-build-killer-robots/


2,400 researchers and more than 100 tech organizations from around the world called for a global ban on lethal autonomous weapons, and pledged not to manufacture them, in a letter published by Stockholm’s Future of Life Institute.

We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. These currently being absent, we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons. We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.

The signatories include the likes of Elon Musk; DeepMind’s three co-founders, Shane Legg, Mustafa Suleyman, and Demis Hassabis; Skype founder Jaan Tallinn; and some of the world’s leading AI researchers, such as Stuart Russell, Yoshua Bengio, and Jürgen Schmidhuber. Google DeepMind, the XPRIZE Foundation, and Clearpath Robotics were among the tech organizations on the list of signatories.

The website also notes that 26 countries in the United Nations have endorsed a ban on lethal autonomous weapons systems: Algeria, Argentina, Austria, Bolivia, Brazil, Chile, China, Colombia, Costa Rica, Cuba, Djibouti, Ecuador, Egypt, Ghana, Guatemala, Holy See, Iraq, Mexico, Nicaragua, Pakistan, Panama, Peru, State of Palestine, Uganda, Venezuela, and Zimbabwe.

The letter was published today at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) by the Future of Life Institute. Although the institute had helped issue letters from some of the signatories in the past, this is the first time those involved have individually pledged not to develop lethal autonomous weapons.

The move comes at a time when leading technology companies are facing an internal dilemma about developing AI for military use.

In May, about a dozen Google employees left the company over its involvement in the US military’s Project Maven – an artificial intelligence drone program for the Pentagon. About 4,000 other employees signed a petition demanding that the company end its contribution to the program.

The current letter published in Stockholm is not the first open warning about weaponized AI. Last September, more than 100 CEOs of AI and robotics firms across the world signed an open letter to the UN, warning that their work could be repurposed to build killer robots.

Despite these warnings, military operations conducted with autonomous weapons are still on the rise. In January, The Washington Post reported that about 13 armed drones descended from an unknown location and attacked the headquarters of Russia’s military operations in Syria.

India, too, revealed its interest in developing an AI-focused military program during DefExpo 2018 in April. According to the Times of India, the country’s Defense Production Secretary, Ajay Kumar, revealed that the Indian government had set up an AI task force and would prepare a roadmap for building AI-powered weapons.

Even though some companies are calling for a ban on the use of AI for weapons, the field of artificial intelligence will continue to see advancements – and some bad apples will almost certainly look to enhance arms for warfare in this way. This is not unlike the dynamics surrounding chemical weapons.

That said, it is easier to ban advanced weapons at an early stage of development, when only a few countries have the capability to make them. Hopefully, initiatives such as this will spur governments and global organizations to adequately address AI weaponry as the need for oversight and regulation grows worldwide.
