Physicist Stephen Hawking says artificial intelligence could be the “worst event in the history of our civilization.” Maybe so, but a growing number of people in the U.S. of A. think it might be a re-election in 2020. Hawking, Tesla’s Elon Musk, and many other scientific and technological thinkers believe A.I. could be a terrible outcome for mankind. Here’s a thought.
Apple’s A.I. will save the world.
The alternatives to Apple’s version of A.I. are obvious: the world ends anyway, or nothing much happens, or something else happens that is worse or better. There’s only one way to find out. Hawking seems to agree with me.
Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.
To be completely fair, there are too many unknowns to know, or at least to know ahead of time. But does it not make sense to be careful about how A.I. is unleashed upon mankind? Look at the changes brought to societies worldwide by the seemingly innocuous advent of the smartphone, which delivers instantaneous information to almost anyone, almost anywhere. With what result?
Are humans getting along better? No. In fact, we as a species seem to be getting worse and misinformation now spreads at the speed of light (well, almost that fast). Hawking again:
Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.
This might be a stretch, but remember a year ago when Apple was considered hopelessly behind in the race toward artificial intelligence, augmented reality, and smartphones? Yet barely a year later, Apple is considered the leader in bringing AR to the masses, and the hottest new product on planet Earth is yet another iPhone. Yes, and this one is another step toward artificial intelligence, because the device can recognize who we are, is just steps away from knowing us and watching us, and a few steps later could be helping us in ways we do not yet fathom.
Siri may seem to be a clumsy imitation of A.I., but have you noticed she’s improved? Her speech recognition is better, and she’s capable of handling more requests. Hey, baby steps: children crawl before they stand, stand before they walk, and walking isn’t that far from running.
So far, Apple’s approach seems pragmatic and cautious. Privacy and security are paramount with such technologies as Touch ID’s fingerprint sensor, Face ID’s facial recognition system, and how Siri interacts with users. Apple seems to take great care to wash and scrub personal information before it is used en masse to improve products. Hawking:
I am an optimist and I believe that we can create AI for the good of the world. That it can work in harmony with us. We simply need to be aware of the dangers, identify them, employ the best possible practice and management, and prepare for its consequences well in advance.
Transform for good? Or, destroy?
I’m not sure how exactly we embed the Three Laws of Robotics into Pokemon Go, Siri, or Face ID, but I agree with Hawking that we should be thinking about it now, so as not to worry about how to overcome some future malevolent digital force which does not want to be tampered with. Isaac Asimov:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
All we need to do is build those three laws into every piece of A.I., every app running a machine, and we’re all good to go, right?
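To see why “just build in the three laws” is not so simple, here is a toy sketch of what such a rule check might naively look like in code. Everything in it (the `Action` type, the boolean flags, the `permitted` function) is invented for illustration; the whole problem is hiding inside innocent-looking flags like `harms_human`, which real systems cannot evaluate so cleanly.

```python
# A hypothetical, naive encoding of Asimov's Three Laws as a rule check.
# This is an illustrative sketch, not a real safety mechanism: it assumes
# "harm" and "order" can be reduced to booleans, which is exactly the
# assumption the article questions.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False       # would this action injure a human?
    ordered_by_human: bool = False  # was this action requested by a human?

def permitted(action: Action, pending_human_order: bool = False) -> bool:
    # First Law: never harm a human, by act or omission.
    if action.harms_human:
        return False
    # Second Law: obey human orders, except where that conflicts
    # with the First Law (already filtered above).
    if pending_human_order and not action.ordered_by_human:
        return False
    # Third Law: self-preservation is implicitly allowed only
    # once the first two checks have passed.
    return True
```

The catch, of course, is the first line of `Action`: deciding whether something “harms a human” is the unsolved part, and no boolean flag answers it.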
Even Asimov came up with another law:
A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
How do you define harm?
Apple’s version of A.I. is a slow walk of technology, and I see that as the better approach, one that could help save the world from its own creations. The real trouble with A.I. begins when artificially intelligent systems start to write themselves, create their own laws, and manufacture their own hardware. What do we do then?