How Smart Is Artificial Intelligence Really?


(Image: Willyam Bradberry/Shutterstock)

Tesla CEO Elon Musk has received a lot of criticism recently for saying at the National Governors Association meeting, "AI is a fundamental existential risk for human civilization, and I don't think people fully appreciate that." Musk also referred to artificial intelligence (AI) technology as "the scariest problem" and called for government regulation. This is not new rhetoric; we have heard alarming language about AI as an existential threat to humanity for years now.

I prefer to be killed by my own stupidity rather than the codified morals of a software engineer or the learned morals of an evolving algorithm. But am I scared? No. Do I feel threatened? No.

It is certainly true that we have seen machines, devices, appliances, automobiles and software become increasingly capable over time. They have become increasingly intelligent only to the extent that we apply a machine-motivated definition of intelligence. The truth: Machines have become increasingly capable of performing human tasks.

Since mankind first emerged, humans have transformed objects into tools. By smashing two rocks together, they created spearheads. With the Industrial Revolution, tools became increasingly automated. Today, robots do most of the work in factories. Thermostats respond to temperature changes without our intervention. Still, no one would argue that these automated tools think for themselves.

But is this long-standing paradigm about to change? With the rise of AI, will our tools and machines start to truly think for themselves? Do we risk becoming slaves to machine masters?

The frenzied alarms are rooted in the belief that once artificial intelligence takes hold, it will develop so quickly that we cannot control its negative effects; superintelligence will emerge in machines, and the rest is -- or will be -- history.

I do not believe that this will happen, at least not in this form, and not any time soon. Let us look at where artificial intelligence stands today.

[Read the rest of this article at InformationWeek.com]

Oliver Schabenberger, EVP, Chief Technology Officer, and head of R&D at SAS



Re: AI threat
  • 9/14/2017 3:43:18 PM

@Lyndon_Henry It most likely does.

BTW, the NYT just published "Finally, Some Answers From Equifax to Your Data Breach Questions." It ends thus:

 At least, the company is starting to engage, which is more than I can say for Experian and TransUnion, which have ignored most of my detailed questions in the past few days, both via email to company spokespeople and on Twitter.

Look, I get the deal here. We all get it now. These companies don't think of us as customers. They think of us as products. They get lenders and others to send over our payment histories to them, aggregate it and resell the data elsewhere. And until recently, they answered to no one, more or less.

Now, however, Equifax has to answer to all of us consumers and others, since they're going to be sued and investigated to kingdom come. And Experian and TransUnion ought to be more forthcoming.

So to all of them, I say: Want fewer freezes? Less Twitter outrage? Answer our reasonable questions, so we can protect ourselves now that it is utterly clear that many of the supposed experts in this industry cannot do so. Silence helps no one at this point.

Re: AI threat
  • 9/14/2017 3:33:43 PM

Ariella writes

The site doesn't allow me to leave a link, so check for Gizmodo's article, "Hackers Have Already Started to Weaponize Artificial Intelligence"

From the Gizmodo article Ariella cites:

"Hackers have been using artificial intelligence as a weapon for quite some time," said Brian Wallace, Cylance Lead Security Data Scientist, in an interview with Gizmodo. "It makes total sense because hackers have a problem of scale, trying to attack as many people as they can, hitting as many targets as possible, and all the while trying to reduce risks to themselves. Artificial intelligence, and machine learning in particular, are perfect tools to be using on their end." These tools, he says, can make decisions about what to attack, who to attack, when to attack, and so on.

...

"Artificial intelligence can be used to mine large amounts of public domain and social network data to extract personally identifiable information like date of birth, gender, location, telephone numbers, e-mail addresses, and so on, which can be used for hacking [a person's] accounts," Dutt told Gizmodo. 

So, putting this all together ... should we assume that the recent huge Equifax hack – which has put just about all Americans' most private individual ID information in the hands of likely criminal hackers – involved AI?


Re: AI threat
  • 9/13/2017 8:34:21 AM

@kq4ym The site doesn't allow me to leave a link, so check for Gizmodo's article, "Hackers Have Already Started to Weaponize Artificial Intelligence"

"the exercise showed that hackers are already in a position to use AI for their nefarious ends. And in fact, they're probably already using it, though it's hard to prove. "

Re: AI threat
  • 9/13/2017 8:29:05 AM

I think the chatbot is a good example of what could happen with more complex bots. Neither of the companies that were working on the chatbots thought they would run off the rails this way, and both were using them for customer-facing applications. If they ran into problems on systems they knew the whole world could see, what makes anyone think private systems with fewer eyes will be any less likely to go in unexpected directions? Especially if the plan is to let them give and take control of other systems?

Re: AI threat
  • 9/10/2017 8:11:13 AM

The weak link in artificial intelligence can, of course, be the expert human who gives the instructions and who inadvertently (or, in the case of a bad guy, deliberately) gives incomplete or wrong commands for the machine to execute. Not knowing all the consequences of complex inputs may not always lead to the "truth" of the matter at hand.

Re: AI threat
  • 9/8/2017 8:23:23 AM

I'm sure not all bots are bad, just as not all are good or well designed. I think we need to be very careful about where we use bots and be certain they are well developed and tested.

Re: AI threat
  • 9/8/2017 8:20:59 AM

@SaneIT True, but does that happen beyond the realm of conversation? The example you referred to was a chatbot. So, yes, it did cause a stir, but it could hardly spark WW III by tampering with controls and causing a disaster that would spark it. That's the kind of scenario Musk envisioned.

Re: AI threat
  • 9/8/2017 8:13:12 AM

While AT&T may not be building bots that make their own decisions, we've already seen that other companies are. We need look no further than Microsoft and Facebook for admissions that their bots got out of control: Microsoft's Tay went off the rails and had to be shut down, and Facebook recently revealed that its bots created their own language to communicate with each other. For all the "bots only do what we tell them to do" articles, there are multiple examples of bots behaving badly in the real world.

AI threat
  • 9/7/2017 1:49:39 PM

I recently posed this question to Mazin Gilbert, Vice President of Advanced Technology, AT&T Labs: "Can you address concerns about the potential danger of AI raised by people like Elon Musk?"

He answered as follows:

As we get 150 petabytes of data in every day, there is no way for a human to review it all. AI systems are very good at taking in a lot of data and inferring what the data is telling you. It is up to you as a human expert to tell the system what to do in response: "Go shut down the IP address, because when you see that, it's a security threat." It will not take actions that I have not trained it to do. That's not what we're building, not what we're deploying, not what these AI systems are. Their purpose is to help us humans do our jobs better.
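The division of labor Gilbert describes -- a model flags anomalies in data too large for humans to review, while the response itself is a rule a human wrote -- can be sketched roughly as follows. Everything here (the IP addresses, the traffic numbers, the thresholds) is hypothetical and illustrates only the general pattern, not AT&T's actual system:

```python
# Hypothetical sketch: a model flags outliers; a human-authored rule responds.
# Names, data, and thresholds are illustrative, not from any real deployment.
from statistics import median

def flag_anomalies(request_counts, threshold=3.5):
    """Flag IPs whose request volume is a high outlier by modified z-score (MAD)."""
    counts = list(request_counts.values())
    med = median(counts)
    mad = median(abs(c - med) for c in counts)  # median absolute deviation
    if mad == 0:
        return set()
    return {ip for ip, c in request_counts.items()
            if 0.6745 * (c - med) / mad > threshold}

def respond(flagged, blocklist):
    """The human-written policy: block flagged IPs. The model never acts on its own."""
    blocklist.update(flagged)  # "go shut down the IP address"
    return blocklist

traffic = {"10.0.0.1": 120, "10.0.0.2": 95, "10.0.0.3": 110, "10.0.0.4": 50000}
blocked = respond(flag_anomalies(traffic), set())  # only 10.0.0.4 is blocked
```

The point of the structure is the one Gilbert makes: the statistical model only surfaces what the data is telling you, while the action taken in response lives entirely in code a human expert chose to write.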
