How Smart Is Artificial Intelligence Really?


(Image: Willyam Bradberry/Shutterstock)

Tesla CEO Elon Musk has received a lot of criticism recently for saying at the National Governors Association meeting, "AI is a fundamental existential risk for human civilization, and I don't think people fully appreciate that." Musk also referred to artificial intelligence (AI) technology as "the scariest problem" and called for government regulation. This is not new rhetoric; we have heard alarming language about AI as an existential threat to humanity for years now.

I prefer to be killed by my own stupidity rather than by the codified morals of a software engineer or the learned morals of an evolving algorithm. But am I scared? No. Do I feel threatened? No.

It is certainly true that we have seen machines, devices, appliances, automobiles and software become increasingly capable over time. They have become increasingly intelligent only to the extent that we apply a machine-motivated definition of intelligence. The truth: Machines have become increasingly capable of performing human tasks.

Since mankind first emerged, humans have transformed objects into tools. By smashing two rocks together, they created spearheads. With the Industrial Revolution, tools became increasingly automated. Today, robots do most of the work in factories. Thermostats respond to temperature changes without our intervention. Still, no one would argue that these automated tools think for themselves.
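
To see how mechanical such automation is, consider that a thermostat's entire "decision making" fits in a few lines of code. Here is a minimal sketch in Python; the setpoint and hysteresis values are illustrative assumptions, not any particular device's specification:

```python
# A sketch of thermostat logic: pure stimulus-response, with no model
# of the world and nothing resembling thought.
def thermostat(current_temp, setpoint=20.0, hysteresis=0.5):
    """Return a heater command based only on the current reading."""
    if current_temp < setpoint - hysteresis:
        return "HEAT_ON"
    if current_temp > setpoint + hysteresis:
        return "HEAT_OFF"
    return "HOLD"  # within the comfort band: do nothing

print(thermostat(18.2))  # -> HEAT_ON
print(thermostat(21.0))  # -> HEAT_OFF
```

The device responds, reliably and tirelessly, but it does not think; everything it will ever do was decided by the person who wrote the rule.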

But is this long-standing paradigm about to change? With the rise of AI, will our tools and machines start to truly think for themselves? Do we risk becoming slaves to machine masters?

The frenzied alarms are rooted in the belief that once artificial intelligence takes hold, it will develop so quickly that we cannot control its negative effects, superintelligence in machines will develop, and the rest is -- or will be -- history.

I do not believe that this will happen, at least not in this form, and not any time soon. Let us look at where artificial intelligence stands today.

[Read the rest of this article at InformationWeek.com]

Oliver Schabenberger is Executive Vice President, Chief Technology Officer, and head of R&D at SAS.

Re: AI threat
  • 9/13/2017 8:29:05 AM

I think the chat bot is a good example of what could happen with more complex bots. Neither of the companies that were working on the chat bots thought they would run off the rails this way, and both were using them for customer-facing applications. If they ran into problems on systems they knew the whole world could see, what makes anyone think private systems with fewer eyes will be any less likely to go in unexpected directions? Especially if the plan is for them to give and take control of other systems?

Re: AI threat
  • 9/10/2017 8:11:13 AM

The weak link in artificial intelligence can, of course, be the expert human who gives the instructions and who inadvertently (or, in the case of a bad actor, deliberately) gives incomplete or wrong commands for the machine to execute. Not knowing all the consequences of complex inputs may not always lead to the "truth" of the matter at hand.

Re: AI threat
  • 9/8/2017 8:23:23 AM

I'm sure not all bots are bad, just as not all are good or well designed. I think we need to be very careful about where we use bots and be certain they are well developed and tested.

Re: AI threat
  • 9/8/2017 8:20:59 AM

@SaneIT true, but does that happen beyond the realm of conversation? The example you referred to was a chatbot. So, yes, it did cause a stir, but it could hardly spark WW III by tampering with controls and causing a disaster that would spark it. That's the kind of scenario Musk envisioned.

Re: AI threat
  • 9/8/2017 8:13:12 AM

While AT&T may not be building bots that make their own decisions, we've already seen that other companies are making bots that do. We only need to look as far as Microsoft and Facebook for admissions that their bots got out of control. Microsoft's Tay went off the rails and had to be shut down, and Facebook recently revealed that its bots created their own language to communicate with each other. For all the "bots only do what we tell them to do" articles, there are multiple examples of bots behaving badly in the real world.

AI threat
  • 9/7/2017 1:49:39 PM

I recently posed this question to Mazin Gilbert, Vice President of Advanced Technology, AT&T Labs: "Can you address concerns about the potential danger of AI raised by people like Elon Musk?"

He answered as follows:

As we get 150 petabytes of data in every day, there is no way for a human to review it all. AI systems are very good at taking in a lot of data and inferring what the data is telling you. It is up to you as a human expert to tell the system what to do in response: "Go shut down the IP address, because when you see that, it's a security threat." It will not take actions that I have not trained it to do. That's not what we're building, not what we're deploying, not what these AI systems are. Their purpose is to help us humans do our jobs better.
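
Gilbert is describing a common division of labor: the model only infers (it scores or flags events), while every action is a rule a human expert wrote. A minimal sketch of that pattern, with the threshold, function names, and toy scoring logic all invented for illustration (none of them come from AT&T):

```python
# Hypothetical sketch of the pattern Gilbert describes: the AI component
# only *infers* (scores an event); every *action* is a human-authored rule.
from typing import Callable

THREAT_THRESHOLD = 0.9  # assumption: a cutoff tuned by a human analyst

def handle_event(event: dict, score_fn: Callable[[dict], float]) -> str:
    score = score_fn(event)  # inference only; no side effects here
    if score >= THREAT_THRESHOLD:
        # Human-defined response: the system cannot invent new actions.
        return f"BLOCK {event['ip']}"
    return "ALLOW"

# Toy scoring function standing in for a trained model.
def toy_score(event: dict) -> float:
    return 0.95 if event.get("failed_logins", 0) > 100 else 0.1

print(handle_event({"ip": "203.0.113.7", "failed_logins": 500}, toy_score))
# -> BLOCK 203.0.113.7
```

The point of the separation is exactly the one Gilbert makes: the system surfaces what the data is telling you, but the set of possible responses is fixed by its human operators.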
