Tesla CEO Elon Musk has received a lot of criticism recently for saying at the National Governors Association meeting, "AI is a fundamental existential risk for human civilization, and I don't think people fully appreciate that." Musk also referred to artificial intelligence (AI) technology as "the scariest problem" and called for government regulation. This is not new rhetoric; we have heard alarming language about AI as an existential threat to humanity for years now.
I would rather be killed by my own stupidity than by the codified morals of a software engineer or the learned morals of an evolving algorithm. But am I scared? No. Do I feel threatened? No.
It is certainly true that we have seen machines, devices, appliances, automobiles and software become increasingly capable over time. They have become increasingly intelligent only to the extent that we apply a machine-motivated definition of intelligence. The truth: Machines have become increasingly capable of performing human tasks.
Since mankind first emerged, humans have transformed objects into tools. By smashing two rocks together, they created spearheads. With the Industrial Revolution, tools became increasingly automated. Today, robots do most of the work in factories. Thermostats respond to temperature changes without our intervention. Still, no one would argue that these automated tools think for themselves.
But is this long-standing paradigm about to change? With the rise of AI, will our tools and machines start to truly think for themselves? Do we risk becoming slaves to machine masters?
The frenzied alarms rest on the belief that once artificial intelligence takes hold, it will advance so quickly that we cannot control its negative effects; machines will develop superintelligence, and the rest is -- or will be -- history.
I do not believe that this will happen, at least not in this form, and not any time soon. Let us look at where artificial intelligence stands today.
[Read the rest of this article at InformationWeek.com]