Are you afraid of artificial intelligence? There seem to be three schools of thought regarding AI in our modern technology-centric lives. First, AI is dangerous and needs to be curbed. Second, AI is little more than a tool to improve our lives. Third, what’s AI?
Futurist Ian Pearson at the World Government Summit:
The fact is that AI can go further than humans, it could be billions of times smarter than humans at this point
I’m not sure what that means, but being a billion times smarter than humans could be (1) a good thing, or (2) not beneficial for humans at all (think Skynet and the Terminator films). Regardless, some brains that already are bigger than most of ours are speaking out against an invasion of artificial intelligence.
Tesla and SpaceX CEO Elon Musk:

I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.
Physicist Stephen Hawking:

Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.
Translation: Let’s give it a whirl and see what happens.
Microsoft co-founder Bill Gates:
Even in the next 10 years, problems like vision and speech understanding and translation will be very good. Mechanical robot tasks like picking fruit or moving a hospital patient will be solved. Once computers/robots get to a level of capability where seeing and moving is easy for them then they will be used very extensively.
How did we get to a point where technology could be feared?
The question “Can a machine think?” has shadowed computer science from its beginnings. Alan Turing proposed in 1950 that a machine could be taught like a child; John McCarthy, inventor of the programming language LISP, coined the term “artificial intelligence” in 1955. As AI researchers in the 1960s and 1970s began to use computers to recognize images, translate between languages, and understand instructions in normal language and not just code, the idea that computers would eventually develop the ability to speak and think—and thus to do evil—bubbled into mainstream culture.
As a bona fide certified Apple watcher with a touch of paranoia about our privacy and security on the interwebs, I am reminded of Colossus: The Forbin Project.
The film is based upon the 1966 science fiction novel Colossus, by Dennis Feltham Jones (writing as D. F. Jones), about an advanced American defense system, named Colossus, that becomes sentient, to everyone’s pleasant surprise at first. After being handed full control, Colossus follows its draconian logic to expand its original nuclear defense directives, assuming total control of the world and ending all warfare for the good of mankind, despite its creators’ orders to stop.
This theme has been around a while and it’s not going away. That brings me to Apple.
Do Apple’s products have artificial intelligence? Yes, at least according to the popular definition.
Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”. See glossary of artificial intelligence… Capabilities generally classified as AI as of 2017 include successfully understanding human speech, competing at a high level in strategic game systems (such as chess and Go), autonomous cars, intelligent routing in content delivery networks, military simulations, and interpreting complex data, including images and videos.
By that somewhat broad but popular definition, the answer is yes. Siri is representative of artificial intelligence because it/he/she can understand our queries and take action accordingly. What we don’t have, yet, is a centrally located artificial intelligence that could form its own decisions and take action to match those decisions.
Is such capability in the near future? That is what we don’t know. The movie plots and recent arguments against artificial intelligence don’t tell us when or how AI might become sentient, but we can imagine both the dangers and the benefits.
Is it time to be a bit paranoid? No, we’re well past the era when a little paranoia was optional. After all, if everyone is out to get you, a little paranoia is the right attitude to have. And that holds true even if technology turns out to be out to help humankind.