Deep Learning Methods and Applications


If I were to show you a picture of a house, you would know it’s a house without even stopping to think about it. Because you have seen hundreds of different types of houses, your brain has come to recognize the features -- a roof, a door, windows, a front stoop -- that make up a house. So, even if the picture only shows part of the house, you still know instantly what you’re looking at. You have learned to recognize houses.

Deep learning is a specialization of artificial intelligence that can train a computer to perform human-like tasks, like recognizing, classifying, and describing images of houses. But how are deep learning methods and applications used in business, and what benefits does deep learning promise for the future of analytics? We turned to Oliver Schabenberger, SAS VP of Analytic Server R&D, to learn more about deep learning and how it works.

How do you define deep learning?

Oliver Schabenberger: Deep learning methods are part of machine learning, which is considered a form of weak artificial intelligence (AI). We say weak AI, because we do not claim to create thinking machines that operate like a human brain. But we do claim that these learning methods can perform specific, human-like tasks in an intelligent way. And we are finding out that these systems of intelligence augmentation can often perform these tasks with greater accuracy, reliability or repeatability than a human.

Some say that deep learning is at the intersection of machine learning and big data, but there’s more to it than that. Let us look at where the "deep" and the "learning" aspects come into play.

Oliver Schabenberger, VP of Analytic Server R&D, SAS

One aspect of deep learning is applying neural network models at greater depth, for greater accuracy in our analytics. A learning system expresses its model, or its environment, as a hierarchy of layers. You can think of layers as representing different types of information about the problem, for example forms of regularities in images: shapes, patterns, edges, and so on. Because of its layered structure and the flow of information between neurons, the neural network is the standard tool for building learning systems. Thanks to advances in computing and algorithms, we can today build neural nets with many more layers than just a few years ago. Deep neural networks are the cornerstone of many learning methods.
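As a rough illustration of that layered structure, here is a minimal forward pass through a small network in plain Python. The layer sizes and random weights are arbitrary stand-ins; a real deep net learns its weights from data and would be built with a library such as TensorFlow or PyTorch.

```python
import random

def relu(x):
    # Simple nonlinearity applied between layers
    return [max(0.0, v) for v in x]

def dense(x, weights, biases):
    # One layer: each neuron takes a weighted sum of the layer below, plus a bias
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, biases)]

def forward(x, layers):
    # Information flows upward: each layer transforms the
    # representation produced by the layer beneath it
    for weights, biases in layers:
        x = relu(dense(x, weights, biases))
    return x

def rand_layer(n_in, n_out):
    return ([[random.gauss(0, 1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

random.seed(0)
# A toy 3-layer ("deep") network: 4 inputs -> 8 -> 8 -> 2 outputs
net = [rand_layer(4, 8), rand_layer(8, 8), rand_layer(8, 2)]
output = forward([0.5, -1.0, 0.25, 2.0], net)
print(len(output))  # 2
```

Stacking more `(weights, biases)` pairs into `net` is exactly what "going deeper" means; what changed in recent years is that hardware and algorithms made training such stacks feasible.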

The second aspect refers to the way in which the systems learn in the sense of improving their performance (speed, accuracy, generalizability) as they consume more data. And it refers to applications where machines perform tasks that a human learned, such as recognizing a pattern, reading text, understanding speech, classifying events and objects. The systems do not solve a problem, they are trained on a problem.

In what way is this artificial intelligence?

Schabenberger: As soon as you say, “artificial intelligence,” people worry that the machines are taking over. That is not the case. The computers are still dumb as bricks. They’re just performing human-like tasks like recognizing patterns, recognizing speech and responding to questions. And their learned capabilities do not generalize to other tasks. For example, AlphaGo, the spectacular deep learning algorithm by Google’s DeepMind, which recently beat the world’s best Go players multiple times, won’t be of much use in classifying images, or of much help emptying your dishwasher. But it is an incredible Go player.

However, there is an interesting parallel between our improved understanding of the functioning in the human neocortex and deep neural network techniques. We have learned that the neocortex, site of many cognitive abilities, propagates incoming signals through layers that find regularities to create representations of things.

Likewise, a neural network algorithm is organized in layers and neurons. But rather than claiming that neural nets have proven useful in cognitive computing because they mimic the human brain, I argue that they are successful because they process data differently than past approaches, which did not operate at all like our neocortex.

Can you give us a deep learning example that's easy to understand?

Schabenberger: For a good example that illustrates deep learning vs. standard techniques, let’s consider the task of playing the game Atari Breakout. First, let’s discuss our options, then we can watch it in action in a YouTube video.

You could write a game bot that knows how to play Atari Breakout. You would program things like the paddle, its movements, the ball, and rules about how the ball bounces when it hits the paddle, walls, or blocks. You capture the logic and strategy of the game in the software itself. You compile and deploy the software and see how well your game bot performs. If you need to improve its game-playing ability, you change the code, recompile, redeploy, and retest.
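A hand-coded bot of this kind might start with rules like the following sketch. The bounce rules and names here are hypothetical illustrations, not from any actual Breakout implementation; the point is that the programmer, not the machine, supplies all the game knowledge.

```python
def bounce(ball_vx, ball_vy, hit):
    """Hard-coded bounce rules for a hypothetical Breakout bot.
    `hit` is 'paddle', 'wall', or 'block' (made-up names for illustration)."""
    if hit == "wall":
        return -ball_vx, ball_vy   # side walls reverse horizontal motion
    if hit in ("paddle", "block"):
        return ball_vx, -ball_vy   # paddle and blocks reverse vertical motion
    return ball_vx, ball_vy        # nothing hit: velocity unchanged

print(bounce(2, -3, "paddle"))  # (2, 3)
```

Every improvement to this bot means editing rules like these by hand and redeploying, which is exactly the cycle the learning approach below avoids.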

Another option is to use a deep learning technique, known as deep reinforcement learning, to solve the problem. You represent the environment with a deep neural network and instruct a program how to move through that environment, how to take actions, and how to reap rewards for taking those actions. So you tell the computer: the reward is the score at the top of the Breakout screen, and your action is to move the paddle. As the computer, that’s all you need to know. You move the paddle and watch what happens to the score. The problem has now turned into an optimization problem: given the current state of the game, what actions need to be taken (how do I move the paddle) to maximize the future reward?
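To make that optimization concrete, here is a minimal tabular Q-learning sketch. DeepMind's actual agent replaced the table with a deep neural network reading raw screen pixels; the states and actions below are simplified stand-ins for illustration.

```python
from collections import defaultdict

# Q[state][action] estimates the future reward of taking `action` in `state`.
# 'left'/'stay'/'right' are simplified stand-in paddle actions.
ACTIONS = ["left", "stay", "right"]
Q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})
alpha, gamma = 0.1, 0.99   # learning rate, discount factor for future reward

def update(state, action, reward, next_state):
    # Nudge the estimate toward: observed reward + best estimated future reward
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

def best_action(state):
    # Act greedily: pick the action with the highest estimated future reward
    return max(Q[state], key=Q[state].get)

# One simulated step: moving right in state 's0' scored a point
update("s0", "right", 1.0, "s1")
print(best_action("s0"))  # right
```

The agent never needs to be told what a ball or a block is; it only sees states, tries actions, and reads the score, just as described above.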

Watch this video on Google’s DeepMind implementation of deep reinforcement learning for Atari Breakout:

The software does not know there is a wall or a block or even a ball. It just knows it has a paddle it can move and it wants to get a high score. Two hours in, it plays like an expert. Nobody had to recompile, redeploy or rerun anything. After four hours, it can beat the game. No domain knowledge is involved.

Where can we learn more about deep learning?

Schabenberger: I’ve just contributed to a new What is Deep Learning? resource that explains why deep learning is important and how it works. You’ll also find links to deep learning webinars and videos of data scientists discussing the topic. It’s a great site for explaining deep learning to colleagues.

* * *

[This article first appeared on the SAS website, SAS Blogs: Connecting you to people, products & ideas from SAS.]

Alison Bolen, Editor of Blogs and Social Content

Alison Bolen is an editor at SAS, where she writes and edits blog content and publishes the Intelligence Quarterly magazine. She recently picked up and moved her family and her home office from Ohio to New York. Since starting at SAS in 1999, Alison has edited print publications, Web sites, e-newsletters, customer success stories and blogs. She has a bachelor's degree in magazine journalism from Ohio University and a master's degree in technical writing from North Carolina State University.



Re: Broad applications
  • 4/27/2016 10:32:23 AM

What happens when a hypochondriac meets Watson? Does Watson just kick them out of the office, or does it prescribe placebos to keep them from coming back? I also think about some of the bots that have tried to pass the Turing test and how they get tripped up. I can imagine Watson being tripped up by a very confused patient and just throwing them out of the diagnostic loop. Having been in hospital rooms with people who really don't know what's going on, I can see an AI getting confused quickly.

Re: Broad applications
  • 4/27/2016 9:04:12 AM

While "recognizing patterns, recognizing speech and responding to questions" is useful and leading to more interesting problems to solve and new technologies, one still has to wonder when or if we'll be able to get machines to make decisions on what we might call "moral" problems. For example, how should a driverless vehicle be programmed to decide whether to swerve to avoid a pedestrian, with the possibility of injuring the car's occupant, vs. hitting the pedestrian and saving the occupant? Just one of billions of sticky machine learning problems to work out.

Re: Broad applications
  • 4/26/2016 10:20:52 AM

Medical diagnosis by Watson is a good example.

With enough cases (thousands?), Watson will not need only honest patients. People lie in ways that can be teased out of the data - just ask a lot of questions and cross-check with the lab test results.

Re: Broad applications
  • 4/26/2016 8:29:55 AM

I think we're just scratching the surface, though. Machines can learn to recognize words and objects, but what to do with them intuitively isn't something I've seen anyone showing off. The Breakout example is a good look at this: all the AI knows is to hit the ball. It doesn't know what the ball does after it is hit; it just knows that if it hits the ball, the score goes up. This isn't to say that it didn't learn to play, but it is missing half of the visual inputs if it is ignoring the wall. This is useful in controlled environments where the same rules always apply. I think of Watson and the proposal to use it in a medical sense to diagnose illnesses. I'm sure it is great at finding illnesses and diseases that match a patient's symptoms, but that assumes the patient is truthful and accurate. Judgment calls when given bad information are going to be something interesting to watch develop in AI.

Re: Broad applications
  • 4/25/2016 6:25:29 PM

Natural progression introduced deep learning. Advances in algorithms, processing power and technology overall allow for such exciting tools.

Broad applications
  • 4/25/2016 5:30:21 PM

The time has come for Deep Learning.

Speech recognition, image recognition, facial recognition, residential architecture, even site selection and store layouts - all can be learned, and computers can offer solutions that will get better with each passing month.
