A beginner’s guide to AI: Human-level machine intelligence
This article is from thenextweb.com. The origin url is: https://thenextweb.com/artificial-intelligence/2018/11/16/a-beginners-guide-to-ai-human-level-machine-intelligence/
Welcome to TNW’s beginner’s guide to AI. This (currently) five part feature should provide you with a very basic understanding of what AI is, what it can do, and how it works. The guide contains articles on (in order published) neural networks, computer vision, natural language processing, algorithms, and artificial general intelligence.
There are few technologies that inspire the imagination like artificial intelligence. And, in the field of AI, the Holy Grail is living machines.
The quest to imbue machines with the spark of life is an ancient one. The golem of Jewish folklore and Pygmalion’s Galatea are early examples of imagined automatons. But, rather than bring machines to life, we’ve so far only imitated intelligence.
Modern AI, mostly, appears in the form of deep learning. Computer vision, natural language processing, and other machine learning-based technologies have revolutionized the field, seemingly overnight, but it’s hard to cut through the hyperbole and figure out what’s really going on.
You’ll hear and read a lot of different terms being thrown around to describe concepts that seem the same, with few people able to explain the differences. Worse, the experts using these terms often quibble over their definitions.
To clear things up, we need to sort through the nomenclature. The term artificial general intelligence (AGI) is, perhaps, the most popular one for the concept. But there are many others:
- Sentient machines
- Conscious AI (CAI)
- General Artificial Intelligence (GAI)
- Strong AI
- General AI
- Human-level AI
- The Singularity
And there are even terms for machines that are more intelligent than humans: superintelligence and super-intelligent AI (SAI).
If having several terms for the same idea wasn’t difficult enough, there’s no real consensus on exactly what any of them mean. So, if you’re reading this and getting red in the face over the author’s misuse of a term you’re sure means something different: welcome to the ongoing conversation.
Some researchers believe AGI — we’ll just go with that term — can be achieved without a machine necessarily being able to do everything a human can (intellectually speaking). Others have a much stricter definition that says all AI is weak unless it can demonstrate consciousness.
And, of course, there’s a distinction between general intelligence and sentience. Many experts are content to think an autonomous contraption that requires no human input to perform a variety of tasks which imitate human-level intelligence (think: a robot that can charge, repair, and upgrade itself but doesn’t feel emotions) clears the bar for AGI, but not for sentience or life.
The counter-argument to that sentiment is: the human brain may be nothing more than a big neural network, and consciousness is a by-product of its robustness. Sentience, some experts claim, is merely a side-effect of the brain’s function. This might sound counter-intuitive, but there’s plenty of research to support it:
- Is consciousness an illusion?
- Your brain hallucinates your conscious reality
- Is consciousness a byproduct of neural activity in the brain?
- Is the root of consciousness just shared vibrations?
So sentience, consciousness, or ‘being alive,’ might not be something we can explain yet in humans or robots. But that doesn’t mean it’s beyond the realm of possibility that we’ll figure it out tomorrow, a year from now, or in 2029 like famous futurologist Ray Kurzweil thinks.
It even seems reasonable to theorize that AI will become sentient not by design, but as a by-product of the development of more advanced neural networks or some machine learning paradigm that hasn’t been invented yet.
At the end of the day (today, at least) defining what AGI actually is might not be the most pressing matter for the general public. Because, no matter what terminology we use, it hasn’t been achieved yet.
It’s probably fair to say nobody is even close to giving a machine human-level intelligence. To use a popular theoretical benchmark — the so-called “coffee test,” often attributed to Apple co-founder Steve Wozniak — there’s no robot that could walk into your house today and make a pot of coffee using your appliances and dishes. AI simply isn’t powerful enough for that kind of general-purpose decision-making yet.
However, cars can already drive themselves, and you can buy a flying camera that pilots itself today. Those are examples of narrow, or weak, AI — meaning each is designed to do only one or a handful of specific things within certain parameters. But it may not be long before someone designs a machine that Frankensteins a hundred or so impressive narrow AIs together into a seamless package. Arguably, that’s what Sophia’s creators are trying to do.
Ben Goertzel, CEO of SingularityNET and the person responsible for Sophia the robot’s AI, compares the state of AGI research today to the period when the first automobiles were being designed. “I don’t think we’ve even reached the Model T stage yet, but I think we will soon,” he told TNW.
A few years ago AGI was considered a niche line of research. It was the fodder of professional futurologists like Google’s Ray Kurzweil and philosophers like Oxford’s Nick Bostrom. These days, it’s seeing heavy investment and is considered a lucrative and rewarding field to work in.
The companies and organizations deeply involved in AGI development range from major research labs to dozens of startups and hundreds of universities such as MIT, NYU, and Oxford. The development of a human-level artificial intelligence isn’t science fiction, it’s a business strategy.
The bottom line is that AGI, in the limited sense of machines that can operate completely autonomously while performing a series of specific tasks that require human-level intelligence, is something most experts in the field would likely agree is attainable if not inevitable.
Whether or not that aforementioned spark of life — the “Number Five alive!” moment from the movie “Short Circuit” — ever happens, or is even possible, remains a mystery.
Here are some of our favorite TNW articles on the subject of AGI:
- The AI company Elon Musk co-founded intends to create machines with real intelligence
- Human intelligence and AI are vastly different — so let’s stop comparing them
- Google’s top AI scientists: We’re entering phase two
- Stephen Hawking thinks AI will cure disease if it doesn’t kill us all
- When you wish upon an algorithm: Will Sophia ever be real?
And here are a few resources for when you’re ready to dive a little deeper:
- Artificial General Intelligence (various, edited by Ben Goertzel, Cassio Pennachin)
- Superintelligence: Paths, Dangers, Strategies (Nick Bostrom)
- The Singularity Is Near (Ray Kurzweil)
Don’t forget to check out our Artificial Intelligence section for the latest news and analysis from the world of machines that learn.