Ex-Google CEO is relatively certain robots aren’t going to kill us for another decade or two
Rest easy, fellow humans; robots probably aren’t going to enslave or eliminate humans for at least another decade or so. Speaking at the Munich Security Conference last week, ex-Google CEO Eric Schmidt downplayed the popular doomsday scenario, stating:
Everyone immediately then wants to talk about all the movie-inspired death scenarios, and I can confidently predict to you that they are one to two decades away. So let’s worry about them, but let’s worry about them in a while.
That’s not exactly comforting.
Rapid advances in artificial intelligence and robotics have amplified the discussion in recent years. With an entire generation raised on science fiction movies depicting robot uprisings, it’s a near-certainty that these are exactly the sort of scenarios we imagine when thinking about the future of robots.
And it’s certainly plausible, but not likely.
Schmidt goes on to say:
The other point that I want to remind everyone, these technologies have serious errors in them, and they should not be used with life-critical decisions. So I would not want to be in an airplane where the computer was making all the general intelligence decisions about flying it. The technology is just not reliable enough ― there are too many errors in its use. It is advisory, it makes you smarter and so forth, but I wouldn’t put it in charge of command and control.
That last sentence is key.
Researchers understand, even if most of us don’t, that AI isn’t as suited to replace humans as it is to augment them. The human brain is complex. And while the average 40-year-old can’t memorize Wikipedia or beat the best poker players, the typical robot can’t handle the simple improvisation that humans excel at.
In fact, most of what AI and robots are good at is menial task work: simple, repeatable objectives that are easy to both define and measure. Robots aren’t all that good at improvising; they need a defined set of rules, and those rules will increasingly need to include failsafe measures that shut the machine down when it fails.
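That combination of a defined rule set plus a failsafe can be sketched in a few lines of Python. This is purely illustrative; every name below is hypothetical and not drawn from any real robotics system:

```python
# Illustrative sketch: a controller that acts only on predefined rules
# and trips a failsafe after repeated failures.

class FailsafeController:
    """Executes only whitelisted commands; halts after too many errors."""

    def __init__(self, rules, max_failures=3):
        self.rules = rules              # command -> action mapping (the "defined set of rules")
        self.max_failures = max_failures
        self.failures = 0
        self.shut_down = False

    def execute(self, command):
        if self.shut_down:
            return "halted"
        action = self.rules.get(command)
        if action is None:              # anything outside the rules counts as a failure
            self.failures += 1
            if self.failures >= self.max_failures:
                self.shut_down = True   # failsafe: stop rather than improvise
                return "halted"
            return "rejected"
        return action()


controller = FailsafeController({"forward": lambda: "moving forward"})
print(controller.execute("forward"))  # a known rule runs: "moving forward"
print(controller.execute("dance"))    # unknown commands are rejected...
print(controller.execute("dance"))
print(controller.execute("dance"))    # ...until the failsafe halts the machine
print(controller.execute("forward"))  # once halted, even valid commands are refused
```

The point of the toy example is the shape of the design: the machine never invents behavior outside its rule table, and the safe response to the unexpected is to stop, not to improvise.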
And while these technologies will continue to improve, sentience isn’t anywhere on the horizon. For a robot to be dangerous, it has to be programmed to be dangerous. So it’s not robots we should fear, but the humans responsible for writing their code.
If you’re looking for a more plausible scenario, it’s this: readers hurling themselves into the nearest body of water after yet another joke framing the latest Boston Dynamics advancement as the one that will ultimately be responsible for our deaths.