Machine Learning's Greatest Weakness is Humans


Machine learning, particularly deep learning and cognitive computing, attempts to model the human brain. That seems logical, because the most effective way to establish bilateral understanding with humans is to mimic them. As we observe in everyday experience, machine intelligence isn't perfect, and neither is human intelligence.

Still, understanding human behavior and emotion is critical if machines are going to mimic humans well. Technologists know this, so they're working hard to improve natural language processing, computer vision, speech recognition, and other capabilities that will enable machines to better understand humans and behave more like them.

I imagine that machines will never emulate humans perfectly because they will be able to rapidly identify the flaws in our thinking and behavior and improve upon them. To behave exactly like us would be illogical and ill-advised.


From an analytical perspective, I find all of this fascinating because human behavior is linear and non-linear, rational and irrational, logical and illogical. If you study us at various levels of aggregation, it's possible to see patterns in the way humans behave as a species, why we fall into certain groups, and why we behave the way we do as individuals. I think it would be very interesting to compare what machines have to say about all of that with what psychologists, sociologists, and anthropologists have to say.

Right now we're at the point where we believe that machines need to understand human intelligence. Conversely, humans need to understand machine intelligence.

Why AI is Flawed

Human brain function is not infallible. Our flaws present challenges for machine learning: machines have the capacity to make the same mistakes we do and exhibit the same biases we do, only faster. Microsoft's infamous Twitter bot, Tay, is a good example of that.

And when you model artificial emotional intelligence on human emotion, the results can be entertaining, provocative, or even dangerous.

Training machines, whether for supervised or unsupervised learning, begins with human input, at least for now. In the future, the necessity for that will diminish because many people will be teaching machines the same things. The redundancy will reveal patterns that are easily recognizable, repeatable, and reusable. Open source machine learning libraries are already available, and there will be many more that approximate some aspect of human brain function: cognition, decision-making, reasoning, sensing, and much more.
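To make the "human input" point concrete, here is a minimal sketch of supervised learning using the open source scikit-learn library. The toy feature vectors and labels are illustrative assumptions, not real data; the key idea is that a human supplies the labels the model learns from.

```python
# Minimal supervised-learning sketch (assumes scikit-learn is installed).
# The data below is a made-up toy example for illustration.
from sklearn.tree import DecisionTreeClassifier

# Human-provided training input: feature vectors and their labels.
X_train = [[0, 0], [0, 1], [1, 0], [1, 1]]
y_train = [0, 0, 1, 1]  # labels supplied by a human annotator

model = DecisionTreeClassifier()
model.fit(X_train, y_train)  # the machine learns from human-labeled examples

# The trained model now predicts labels for new inputs on its own.
predictions = model.predict([[1, 1], [0, 0]])
print(list(predictions))  # → [1, 0]
```

Whatever biases or errors exist in the human-supplied labels are learned and reproduced by the model, which is exactly the risk described above.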

Slowly but surely, we're creating machines in our own image.

Lisa Morgan, Freelance Writer

Lisa Morgan is a freelance writer who covers big data and BI for InformationWeek. She has contributed articles, reports, and other types of content to various publications and sites ranging from SD Times to the Economist Intelligence Unit. Frequent areas of coverage include big data, mobility, enterprise software, the cloud, software development, and emerging cultural issues affecting the C-suite.


Re: Moral and ethical decisions
  • 6/28/2017 3:35:35 PM

Probably not unlike asking your car mechanic to make your dinner.

Re: Moral and ethical decisions
  • 6/28/2017 3:26:02 PM

What we may discover by programming A.I. is that what we thought was illogical or irrational behavior in humans actually does have a purpose.

I think much of what is illogical or irrational in humans comes down to a lack of programming or experience in a certain situation. If you have a robot programmed as a mechanic and then ask it to make you a salad, it might behave in ways that seem downright weird.

Re: Moral and ethical decisions
  • 6/16/2017 11:37:56 AM

Either we make computers act more like humans, or we will have to improve humans.

Re: Moral and ethical decisions
  • 6/16/2017 10:28:11 AM

It almost seems that machines will have to be built with good and bad days, or at least some randomness of activity, to simulate how human minds are flawed: sometimes irrational, or exhibiting behaviors that may be predictable but that we just haven't figured out how to predict yet!

Re: Moral and ethical decisions
  • 6/14/2017 12:13:36 PM

To Lisa's point, the technology is not intrinsically nefarious; it can become nefarious in the hands of those who use it for nefarious purposes. Very much the same with the hacking incidents we have seen that harness technology for the purpose of bringing down a system. AI will grow up and have its challenges, but I don't think it's as scary as many are predicting it will be.

Re: Moral and ethical decisions
  • 6/13/2017 5:09:14 PM


Lisa writes:

"There's also the dark underbelly, meaning the use of AI for nefarious purposes. I don't think that issue is being addressed adequately either."

Definitely. I think we can be pretty certain that in today's world, with contestants ranging from the powerful military establishments of technologically advanced nations to fanatically obsessed and dedicated terrorist movements, some pretty darn unsavory implementations of AI will be arriving soon. Actually, I think they're already arriving ...


Re: Moral and ethical decisions
  • 6/12/2017 3:21:48 PM

One of the advantages would be that machines typically don't have good days or bad ones.

Re: Moral and ethical decisions
  • 6/8/2017 7:02:02 PM

AI is being built into so many things. Vendors and some of the big consulting firms are moving extremely quickly, confident that everything will work as planned. Frankly, I think we're getting too confident too soon because we're running so fast.

There's also the dark underbelly, meaning the use of AI for nefarious purposes. I don't think that issue is being addressed adequately either.

Generally, I think more critical thinking needs to be done. Looking at the potential benefits only is naive, in my opinion. Naysaying isn't going to stop the train.

As my favorite saying goes, "Think.  It's not illegal yet."  :)

Moral and ethical decisions
  • 6/8/2017 6:20:42 PM

"From an analytical perspective, I find all of this fascinating because human behavior is linear and non-linear, rational and irrational, logical and illogical. If you study us at various levels of aggregation, it's possible to see patterns in the way humans behave as a species, why we fall into certain groups and why behave the way we do as individuals."

 

This is truly the issue for AI. AI will be very successful in linear decision making and repetitive actions that are predictable, but when it comes to making judgment calls or moral decisions, like choosing whether to drive a car into a school bus or a tree, AI will still stumble. It has its place, but not everywhere.
