Should We Fear AI?


As a data scientist I use machine learning methods to solve problems nearly every day. Increasingly, I find myself on the front lines of the “Killer AI” debate that appears with growing frequency in the mainstream press.

People ask my opinion about the comments made by high-profile individuals who believe AI could be mankind’s downfall -- physicist Stephen Hawking, entrepreneur Elon Musk, and software industry giants Bill Gates and Bill Joy. When I hear such comments, I feel as though they describe a technology discipline I don’t recognize. I can’t imagine how one of my algorithms is going to wake up one day and strangle me.


The people making these comments, while brilliant in their own fields, have little direct experience with machine learning, AI, or robotics. Most know something about mathematics, but none has the formal grounding in AI, machine learning, and deep learning that an academic researcher or other professional in the field would have.


What are the experts saying?
The “Killer AI” fear is not shared by people deeply involved in the field of AI and machine learning. In fact, many are taking the time to make public statements to the contrary. For example, Professor Andrew Ng, who founded the Google Brain project and built the famous deep learning network that learned on its own to recognize cats in YouTube videos before leaving to become chief scientist at Chinese search engine company Baidu, had this to say:

“Computers are becoming more intelligent and that’s useful as in self-driving cars or speech recognition systems or search engines. That’s intelligence. But sentience and consciousness is not something that most of the people I talk to think we’re on the path to.”

Another perspective comes from Yann LeCun, who is Facebook’s director of research, a legend in neural networks and machine learning, and one of the world’s top experts in deep learning:

“Some people have asked what would prevent a hypothetical super-intelligent autonomous benevolent AI to ‘reprogram’ itself and remove its built-in safeguards against getting rid of humans. Most of these people are not themselves AI researchers, or even computer scientists.”

Then we have the perspective of Michael Littman, an AI researcher and computer science professor at Brown University, and former program chair for the Association for the Advancement of Artificial Intelligence:

“There are indeed concerns about the near-term future of AI -- algorithmic traders crashing the economy, or sensitive power grids overreacting to fluctuations and shutting down electricity for large swaths of the population ... These worries should play a central role in the development and deployment of new ideas. But dread predictions of computers suddenly waking up and turning on us are simply not realistic.”

Lastly, here’s the perspective of Oren Etzioni, University of Washington computer science professor, and now CEO of the Allen Institute for Artificial Intelligence:

“The popular dystopian vision of AI is wrong for one simple reason: it equates intelligence with autonomy. That is, it assumes a smart computer will create its own goals, and have its own will, and will use its faster processing abilities and deep databases to beat humans at their own game. It assumes that with intelligence comes free will, but I believe those two things are entirely different.”


A Cambrian explosion for machine sentience?
The natural question to pose is just how sentience could spring into existence in the first place. It happens all the time in sci-fi. Mr. Data on Star Trek: The Next Generation was a sentient android created by Dr. Soong, and his brother Lore used an “emotion chip” to become evil. There’s also the sinister HAL 9000 computer in 2001: A Space Odyssey that terrorizes crew member Dave Bowman. More recently, the “humanics” project on the CBS television show Extant featured human-like robots that one day decided to kill their creators and take over the world. These are all great stories, but they’re all sci-fi!

Coming back to reality, how could our current level of rather pedestrian AI applications (e.g. the so-called “partner” robots being developed by Toyota, Google’s self-driving car project, or smart computer vision systems developed at MIT) suddenly turn evil? I’ve used neural network algorithms in my work, and I’ve gone so far as to write the back-propagation routine that feeds prediction errors backward through a network so its weights can be adjusted during training. I’ve stared at the lines of code that make all this happen. But I cannot conceive of a situation where my lines of code suddenly “jump the tracks” and find the ability to feel, perceive, or experience subjectively.
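To see why, consider what such code actually looks like. The following is a minimal sketch of a one-hidden-layer network trained with back-propagation. It is not my production code; the 2-3-1 architecture, the XOR toy data, and the learning rate are illustrative assumptions, but the mechanism is the same.

```python
import numpy as np

# Toy example: a 2-3-1 network learning XOR, trained with back-propagation.
# Architecture, data, and learning rate are illustrative assumptions only.

rng = np.random.default_rng(42)

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])    # inputs
y = np.array([[0.], [1.], [1.], [0.]])                    # targets (XOR)

W1, b1 = rng.normal(scale=0.5, size=(2, 3)), np.zeros(3)  # input -> hidden
W2, b2 = rng.normal(scale=0.5, size=(3, 1)), np.zeros(1)  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(20000):
    # Forward propagation: data flows through the layers to produce predictions.
    h = sigmoid(X @ W1 + b1)
    y_hat = sigmoid(h @ W2 + b2)

    # Back-propagation: push the prediction error backward to get gradients.
    d_out = (y_hat - y) * y_hat * (1 - y_hat)     # error signal at the output layer
    d_hid = (d_out @ W2.T) * h * (1 - h)          # error signal at the hidden layer

    # Gradient-descent weight updates.
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_hid); b1 -= lr * d_hid.sum(axis=0)

# Predictions should now approach [[0], [1], [1], [0]].
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```

Every line is plain arithmetic on arrays of numbers, executed exactly as written. There is nothing here that could spontaneously acquire wants, goals, or the will to do harm.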


More experts need to step up
It is time for more AI professionals and academics to step up and counter this ongoing narrative. We don’t want the situation to get out of hand, and we need science, and scientists, to stand up for what’s real.

For those still fearful of AI, I suggest you take a class in machine learning. You’ll be able to program your own algorithms and understand the basics of the mathematics behind statistical learning. Once you get that level of familiarity, your fears will fade away, guaranteed.
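To give a sense of how approachable that first class is, here is the sort of exercise you might complete in week one. It is only a sketch, assuming you have Python with the open-source scikit-learn library installed; it fits a logistic regression classifier to the library’s bundled iris flower dataset and reports its accuracy.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load a small, well-known dataset of flower measurements.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a basic statistical learning model and check how well it generalizes.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```

Everything in it is curve fitting over a small table of numbers. Seeing that firsthand is exactly what makes the dystopian scenarios feel so remote.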

Daniel D. Gutierrez, Data Scientist

Daniel D. Gutierrez is a Data Scientist with Los Angeles-based Amulet Analytics, a service division of Amulet Development Corp. He's been involved with data science and big data since long before they came into vogue, so imagine his delight when the Harvard Business Review recently deemed "data scientist" the sexiest profession of the 21st century. Previously, he taught computer science and database classes at UCLA Extension for over 15 years and authored three computer industry books on database technology. He also served as technical editor, columnist, and writer at a major monthly computer industry publication for seven years. Follow his data science musings at @AMULETAnalytics.



Re: Built in limitations
  • 3/17/2016 4:55:11 PM

Detecting stress and sentiment is indeed being worked on, but there is clearly room for improvement. I recall a post explaining the failure of Siri, Cortana, etc. to recognize stress adequately in an emergency.

http://q13fox.com/2016/03/14/siri-i-was-raped-study-compares-smartphone-responses-in-crises/

Cases like these can help push the demand for analytics and technology beyond purely commercial purposes and toward addressing broader social concerns.

Re: Built in limitations
  • 3/17/2016 4:49:49 PM

Nice share, Seth, though it does make the head spin to think about how adoption would take place. People really do fear change, sometimes more than necessary. But that fear has to be managed with each innovation introduced.

Re: Built in limitations
  • 3/11/2016 11:01:13 AM

I think the first step toward machines being more compassionate is already in early-stage use, where computers and robots are programmed to detect stress in our speech, along the lines of what Ariella Brown described in a recent blog about companion robots. Another use could be in customer support phone banks. If a caller's stress level grows, as indicated by changes in voice volume, use of nasty words, and changes in cadence, maybe it's a sign that you are at risk of losing the customer and need to escalate the call to a different support person. Although I'm not confident that certain companies really care if you hate their support personnel.
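As a rough illustration of the kind of escalation rule described above, a purely hypothetical sketch might look like the following; the feature names, weights, word list, and threshold are all invented for illustration.

```python
# Hypothetical call-escalation heuristic based on the signals mentioned above:
# louder speech, harsher words, and faster cadence. Features, weights, and the
# threshold are invented for illustration only.

HARSH_WORDS = {"useless", "ridiculous", "terrible"}   # placeholder word list

def stress_score(volume_change_db, words, cadence_change_pct):
    harsh_rate = sum(w.lower() in HARSH_WORDS for w in words) / max(len(words), 1)
    return (0.4 * max(volume_change_db, 0.0) / 10.0      # got noticeably louder
            + 0.4 * harsh_rate * 10.0                     # share of harsh words
            + 0.2 * max(cadence_change_pct, 0.0) / 25.0)  # talking noticeably faster

def should_escalate(score, threshold=1.0):
    return score >= threshold

# Example: caller is 8 dB louder, 2 harsh words out of 40, speaking 30% faster.
words = ["this", "is", "ridiculous", "and", "useless"] + ["filler"] * 35
score = stress_score(8.0, words, 30.0)
print(round(score, 2), should_escalate(score))
```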

 

Re: Built in limitations
  • 3/11/2016 10:23:22 AM

I would surmise that AI, to become more like a sentient being, would have to solve the mystery of how to be "compassionate" in behavior. Human research indicates this may be an innate property of our brains, moderating our fear-and-flee responses. Just how AI could simulate this would be very interesting to contemplate. Should that be possible, we should have no fear of humans being replaced, harmed, or even killed. Or maybe not?

Re: AI threat #2
  • 3/10/2016 10:39:51 AM

@Lyndon. Another issue with the idea of turning the white-collar/tech worker into the "idle not so rich" because of AI is that we have to focus on the word "idle". If most of the population is sitting around contemplating their navels while cashing government stipends, that would call to mind the old saying about the idle mind being the devil's playground.

I'll be the cautious optimist and predict that new types of jobs will emerge in one way or another.

 

Re: AI threat #2
  • 3/10/2016 10:02:35 AM

..

Seth writes


 I've read some articles that predict that we will lose 50% of jobs to technology by 2050. 

If this is true, we are also going to have to change our country politically. My guess is that democratic socialism will win out, with universal health care and even a guaranteed income. We just might have to have a guaranteed income so people can afford to buy the products the technology is creating. It's either that or have millions of hungry and unemployed people and all the civil unrest that will come with it.


In my view, at least one beneficial outcome of Bernie Sanders's campaign has been to make "socialism" a respectable word again in the USA ...

Regarding the "guaranteed income" idea, a March 2 New York Times article is very pertinent to this discussion. The title kinda gives it away : "A Plan in Case Robots Take the Jobs: Give Everyone a Paycheck".

The article focuses on a plan for a "universal basic income" (UBI) which is tersely described this way: "As the jobs dry up because of the spread of artificial intelligence, why not just give everyone a paycheck?" The article elaborates:


Imagine the government sending each adult about $1,000 a month, about enough to cover housing, food, health care and other basic needs for many Americans. ... Rather than a job-killing catastrophe, tech supporters of U.B.I. consider machine intelligence to be something like a natural bounty for society: The country has struck oil, and now it can hand out checks to each of its citizens.


This sounds to me more like a recipe for expanded poverty-level welfare than socialism. However it's characterized, I think it's an epic pipe dream in a society controlled by the ultra-rich, who are getting richer and more powerful. Machines, including robotic technology and AI, are used to try to increase the profits and the rate of profit of this mega-income stratum, and the ultra-rich are not going to happily allow their profit stash to be tapped to subsidize that growing mass of "moochers" and "losers". Also, did I forget to mention that through their wealth they control the basic political process?

If the USA as currently constituted cannot even maintain and replace essential infrastructure (water lines, roads/highways/bridges, railroads, mass transit systems, gas lines, pipelines, public school facilities, for starters), is it plausible that it will blithely start sending out $1,000 monthly checks to all the unemployed?

I do think there's hope, but it will require some very difficult struggle and truly revolutionary changes, not fantasies. 

 

Re: AI threat #2
  • 3/9/2016 3:28:23 PM

@ Lyndon,

 

That is my real fear as well.  I've read some articles that predict that we will lose 50% of jobs to technology by 2050. 

If this is true, we are also going to have to change our country politically. My guess is that democratic socialism will win out, with universal health care and even a guaranteed income. We just might have to have a guaranteed income so people can afford to buy the products the technology is creating. It's either that or have millions of hungry and unemployed people and all the civil unrest that will come with it.

Re: AI threat #2
  • 3/7/2016 7:09:34 PM

..

Jim writes


 I'd disagree with the NY Times ... about calling the Wall Street analyst job replacements "robots". What Wall Street tends to adopt is strictly software, and the fact that they are adopting it isn't new.


 

I think "robots" was used somewhat jocularly, and I often use it that way too. (Some humans also come across as "robotic" ...)

It tends to apply (maybe too loosely) to decision-making software, with elements of AI, that resembles human action.

In my mind, a bona fide robot will be fully autonomous, capable of thinking on its own two feet (or wheels as the case might be) ...

I definitely agree about the idiocy of the housing bubble shenanigans...

 

Re: AI threat #2
  • 3/7/2016 2:36:28 PM

@Lyndon. I'd disagree with the NY Times (and not just because they never hired me when I was a young journalist) about calling the Wall Street analyst job replacements "robots". What Wall Street tends to adopt is strictly software, and the fact that they are adopting it isn't new. What may be new is that the companies are recognizing that all the analysts are doing is acting on (or perhaps parroting) what the software says.

Those analysts didn't seem to mind having computers on board 10 years ago when Wall Street gave us the Great Recession thanks to things like mortgage-backed securities, moving cash from the depository system to high-risk ventures, a housing bubble that was exceeded in idiocy only by the earlier dotcom bubble, and absurd executive and analyst bonuses earned largely by watching automated trades pass through on a computer screen.

Take any of the old lawyer jokes (like "What do you call 1,000 lawyers at the bottom of the ocean? A good start") and substitute "Wall Street analyst" for "lawyer" and millions of Americans wouldn't mind.

 

AI threat #2
  • 3/7/2016 1:46:33 PM

..

Seth writes that "...  I think we have a bit of time before we have to worry about A.I.    If for the only reason that we haven't really figured out how to make better batteries yet."

There seem to be two different AI "threat" potentials that have stirred a lot of concern and discussion: (1) the threat of robotic machines directly posing some harm to humans (e.g., military warfare robots) and (2) the threat of intelligent machines ("robots") filling jobs, replacing humans, and thus increasing widespread unemployment.

The second threat is perhaps more immediate and real. To this point is a Feb 25th New York Times article: "The Robots Are Coming for Wall Street". This has the subheadline: "Hundreds of financial analysts are being replaced with software. What office jobs are next?"

The article presents an interesting discussion of the potential impact of advancing AI technology on jobs and the U.S. workforce, focusing on the replacement of financial analysts. It quotes the head of a Cambridge, MA-based software development firm who dismisses as "cynical" the claim from typical tech entrepreneurs that "we're creating new jobs, we're creating technology jobs...."

Instead, he says, "... we are creating a very small number of high-paying jobs in return for destroying a very large number of fairly high-paying jobs, and the net-net to society, absent some sort of policy intervention or new industry that no one's thought of yet to employ all those people, is a net loss."

At this point, I don't think a definitive assessment on this issue can be made, but I believe it's important to recognize that this is currently a particularly active topic for intense debate.

 
