Deep learning is a specialization of artificial intelligence that can train a computer to perform human-like tasks, like recognizing, classifying, and describing images of houses. But how are deep learning methods and applications used in business, and what benefits does deep learning promise for the future of analytics? We turned to Oliver Schabenberger, SAS VP of Analytic Server R&D, to learn more about deep learning and how it works.
How do you define deep learning?
Oliver Schabenberger: Deep learning methods are part of machine learning, which is considered a form of weak artificial intelligence (AI). We say weak AI, because we do not claim to create thinking machines that operate like a human brain. But we do claim that these learning methods can perform specific, human-like tasks in an intelligent way. And we are finding out that these systems of intelligence augmentation can often perform these tasks with greater accuracy, reliability or repeatability than a human.
Some say that deep learning is at the intersection of machine learning and big data, but there's more to it than that. Let us look at where the "deep" and the "learning" aspects come into play.
One aspect of deep learning is to apply neural network models at greater depth, for greater accuracy in our analytics. A learning system expresses its model, or its environment, as a hierarchy of layers. You can think of layers as representing different types of information about the problem, for example, forms of regularity in images: shapes, patterns, edges, and so on. Because of this layered structure and the flow of information between neurons, the standard tool for building learning systems is the neural network. Thanks to advances in computing and algorithms, we can now build neural networks with many more layers than was possible just a few years ago. Deep neural networks are the cornerstone of many learning methods.
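To make the layer idea concrete, here is a minimal sketch of a forward pass through a small feedforward network in NumPy. The layer sizes and random weights are illustrative assumptions, not taken from any real model; the point is only that each layer transforms the output of the previous one, producing a new representation of the input.

```python
import numpy as np

def relu(x):
    # Rectified linear activation: keeps positive values, zeroes out negatives.
    return np.maximum(0.0, x)

def forward(x, layers):
    # Propagate the input through each layer in turn; every layer
    # builds a higher-level representation of the one before it.
    for W, b in layers:
        x = relu(W @ x + b)
    return x

rng = np.random.default_rng(0)
sizes = [8, 16, 16, 4]  # input -> two hidden layers -> output (illustrative)
layers = [(rng.standard_normal((m, n)) * 0.1, np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]

out = forward(rng.standard_normal(8), layers)
print(out.shape)  # (4,)
```

Stacking more `(W, b)` pairs into `layers` is all it takes to make the network deeper; training those weights is where the "learning" comes in.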
The second aspect refers to the way in which the systems learn, in the sense of improving their performance (speed, accuracy, generalizability) as they consume more data. It also refers to applications where machines perform tasks that a human has learned, such as recognizing a pattern, reading text, understanding speech, or classifying events and objects. The systems do not solve a problem; they are trained on a problem.
In what way is this artificial intelligence?
Schabenberger: As soon as you say, “artificial intelligence,” people worry that the machines are taking over. That is not the case. The computers are still dumb as bricks. They’re just performing human-like tasks like recognizing patterns, recognizing speech and responding to questions. And their learned capabilities do not generalize to other tasks. For example, AlphaGo, the spectacular deep learning algorithm by Google’s DeepMind, which recently beat the world’s best Go players multiple times, won’t be of much use in classifying images, or of much help emptying your dishwasher. But it is an incredible Go player.
However, there is an interesting parallel between our improved understanding of the functioning in the human neocortex and deep neural network techniques. We have learned that the neocortex, site of many cognitive abilities, propagates incoming signals through layers that find regularities to create representations of things.
Likewise, a neural network algorithm is organized in layers and neurons. But rather than claiming that neural nets have proven useful in cognitive computing because they mimic the human brain, I argue that they are successful because they process data differently than past approaches, which did not operate at all like our neocortex.
Can you give us a deep learning example that's easy to understand?
Schabenberger: For a good example that illustrates deep learning vs. standard techniques, let’s consider the task of playing the game Atari Breakout. First, let’s discuss our options, then we can watch it in action in a YouTube video.
You could write a game bot that knows how to play Atari Breakout. You would program things like the paddle, its movements, the ball, and rules about how the ball bounces when it hits the paddle, walls, or blocks. You capture the logic and strategy of the game in the software itself. You compile and deploy the software and see how well your game bot performs. If you need to improve the bot's playing ability, you change the code, recompile, redeploy, and retest.
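As a toy illustration of this hand-coded approach (a hypothetical sketch, not code from any actual bot), a rule-based paddle strategy might be as simple as following the ball's horizontal position:

```python
def paddle_move(ball_x, paddle_x):
    # Hand-coded strategy: the programmer, not the machine, supplies
    # the game knowledge by writing an explicit rule.
    if ball_x < paddle_x:
        return "left"
    if ball_x > paddle_x:
        return "right"
    return "stay"

print(paddle_move(3, 5))  # left
```

Any change to this strategy means editing the rule itself and redeploying, which is exactly the cycle described above.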
Another option is to use a deep learning technique, known as deep reinforcement learning, to solve the problem. You represent the environment with a deep neural network and you instruct a program how to move through the environment, how to take actions, and how to reap a reward for taking an action. So you tell the computer: the reward is the score at the top of the Breakout screen, and your action is to move the paddle. As the computer, that's all you need to know. You move the paddle and watch what happens to the score. The problem has now turned into an optimization problem: given the current state of the game, what actions should be taken (how do I move the paddle?) to maximize the future rewards?
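The core of this optimization can be sketched with tabular Q-learning, a simpler relative of the deep Q-network DeepMind actually used (which replaces the table with a deep neural network). The states and actions here are hypothetical placeholders for game screens and paddle moves:

```python
import random
from collections import defaultdict

ACTIONS = ["left", "stay", "right"]
Q = defaultdict(float)  # Q[(state, action)] -> estimated future reward
alpha, gamma, epsilon = 0.1, 0.99, 0.1  # illustrative hyperparameters

def choose_action(state):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    # Q-learning update: nudge the estimate toward the observed reward
    # plus the discounted value of the best action in the next state.
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# One illustrative transition: a paddle move that earned a point.
update("s0", "right", reward=1.0, next_state="s1")
print(round(Q[("s0", "right")], 3))  # 0.1
```

Notice that nothing here encodes walls, blocks, or a ball; the agent only sees states, actions, and rewards, and improves purely by playing.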
Watch this video on Google’s DeepMind implementation of deep reinforcement learning for Atari Breakout:
The software does not know there is a wall or a block or even a ball. It just knows it has a paddle it can move and it wants to get a high score. Two hours in, it plays like an expert. Nobody had to recompile, redeploy or rerun anything. After four hours, it can beat the game. No domain knowledge is involved.
Where can we learn more about deep learning?
Schabenberger: I’ve just contributed to a new What is Deep Learning? resource that provides a lot of information on why deep learning is important and how it works. You’ll also find links to deep learning webinars and videos of data scientists talking about deep learning over there. It’s a great site for explaining the topic to colleagues.
[This article first appeared on the SAS website, SAS Blogs: Connecting you to people, products & ideas from SAS.]