People ask my opinion about the comments made by high-profile individuals who believe AI could be mankind’s downfall -- physicist Stephen Hawking, entrepreneur Elon Musk, and commercial software giants Bill Gates and Bill Joy. When I hear such comments, they describe a technology discipline I don’t recognize. I can’t imagine how one of my algorithms is going to wake up one day and strangle me.
The people making these comments, while brilliant in their own fields, have little direct experience with machine learning, AI, or robotics. Most know something about the underlying mathematics, but none have the formal grounding in machine learning and deep learning that an academic researcher or other professional in the field would have.
What are the experts saying?

The “Killer AI” fear is not supported by people deeply involved in the field of AI and machine learning. In fact, many are taking the time to make public statements to the contrary. For example, Professor Andrew Ng, who founded the Google Brain project and built the famous deep learning network that learned on its own to recognize cats in videos before leaving to become chief scientist at Chinese search engine company Baidu, had this to say:
“Computers are becoming more intelligent and that’s useful as in self-driving cars or speech recognition systems or search engines. That’s intelligence. But sentience and consciousness is not something that most of the people I talk to think we’re on the path to.”
Another perspective comes from Yann LeCun, Facebook’s director of AI research, a legend in neural networks and machine learning, and one of the world’s top experts in deep learning:
“Some people have asked what would prevent a hypothetical super-intelligent autonomous benevolent AI to ‘reprogram’ itself and remove its built-in safeguards against getting rid of humans. Most of these people are not themselves AI researchers, or even computer scientists.”
Then we have the perspective of Michael Littman, an AI researcher and computer science professor at Brown University, and former program chair for the Association for the Advancement of Artificial Intelligence (AAAI):
“There are indeed concerns about the near-term future of AI -- algorithmic traders crashing the economy, or sensitive power grids overreacting to fluctuations and shutting down electricity for large swaths of the population ... These worries should play a central role in the development and deployment of new ideas. But dread predictions of computers suddenly waking up and turning on us are simply not realistic.”
“The popular dystopian vision of AI is wrong for one simple reason: it equates intelligence with autonomy. That is, it assumes a smart computer will create its own goals, and have its own will, and will use its faster processing abilities and deep databases to beat humans at their own game. It assumes that with intelligence comes free will, but I believe those two things are entirely different.”
A Cambrian explosion for machine sentience?

The natural question to pose is how sentience could spring into existence in the first place. It happens all the time in sci-fi. Data on Star Trek: The Next Generation was a sentient android created by Dr. Soong, along with a “brother,” Lore, who used an “emotion chip” to become evil. There’s also the sinister HAL 9000 computer in 2001: A Space Odyssey that terrorizes crew member Dave Bowman. More recently, the “humanics” project on the CBS television show Extant featured human-like robots that one day decided to kill their creators and take over the world. These are all great stories, but they’re all science fiction!
Coming back to reality, how could our current crop of rather pedestrian AI applications (e.g. the so-called “partner” robots being developed by Toyota, Google’s self-driving car project, or smart computer vision systems developed at MIT) suddenly turn evil? I’ve used neural network algorithms in my work, and I’ve gone so far as to write a back-propagation routine -- the process that trains a neural network by feeding prediction errors backward through it to adjust its weights. I’ve stared at the lines of code that make all this happen, and I cannot conceive of a situation where those lines suddenly “jump the tracks” and acquire the ability to feel, perceive, or experience subjectively.
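To make that concrete, here is roughly what such code looks like -- a minimal sketch of forward propagation and back-propagation for a tiny one-hidden-layer network. The shapes, toy data, and learning rate are invented for illustration, not taken from any production system:

```python
import numpy as np

# Toy setup: 4 samples with 3 features each, and made-up target values.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))
y = np.array([[0.0], [1.0], [1.0], [0.0]])

W1 = rng.standard_normal((3, 5)) * 0.1   # input -> hidden weights
W2 = rng.standard_normal((5, 1)) * 0.1   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(1000):
    # Forward pass: data flows through the network to make predictions.
    h = sigmoid(X @ W1)
    y_hat = sigmoid(h @ W2)

    # Backward pass (back-propagation): the prediction error flows
    # backward through the network, yielding a gradient for each layer.
    err = y_hat - y                   # error at the output
    d2 = err * y_hat * (1 - y_hat)    # gradient at the output layer
    d1 = (d2 @ W2.T) * h * (1 - h)    # gradient at the hidden layer

    # Gradient-descent update of the weights.
    W2 -= lr * h.T @ d2
    W1 -= lr * X.T @ d1
```

Every step is arithmetic on arrays of numbers. There is nowhere in this loop for a will, a goal, or an experience to hide.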
More experts need to step up

It is time for more AI professionals and academics to step up and counter this narrative. We don’t want the situation to get out of hand, and we need the science, and the scientists, to stand up for what’s real.
For those still fearful of AI, I suggest you take a class in machine learning. You’ll program your own algorithms and learn the basic mathematics behind statistical learning. Once you reach that level of familiarity, your fears will fade away, guaranteed.
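For a taste of what such a class covers, here is what a typical first exercise looks like -- training a simple classifier on scikit-learn’s bundled iris dataset. The library and dataset are my choices for illustration, not prescribed by any particular course:

```python
# A typical introductory machine-learning exercise: fit a classifier
# to a small labeled dataset and measure its accuracy on held-out data.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# "Learning" here means fitting coefficients that minimize a loss
# function -- statistics, not sentience.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```

Once you have fit a model like this yourself, the idea that it might develop its own agenda starts to look as far-fetched as it is.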