Machine Learning (ML) and Deep Learning (DL) are two data science terms that have become hot buzzwords in tech, business, and marketing. They are often confused for the same thing, but they are not. Here's the difference: Deep Learning is a subset of Machine Learning, an advanced technique for teaching algorithms to make decisions on their own, and Machine Learning, in turn, is part of the broader field of Artificial Intelligence.
In 2017, AlphaGo, an AI built by Google's DeepMind, defeated the Chinese Go champion Ke Jie. The media described it as a significant breakthrough in AI, ML, and DL. All of those labels were correct, but using them interchangeably confused readers.
Let's separate the facts from the buzz.
Artificial Intelligence was born as an academic discipline in the 1950s, inspired by the visions of science fiction writers. Its goal was to explore whether machines could be capable of independent "thinking." There has never been a consensus on what that means. Is it the ability to learn or solve particular problems (so-called "narrow AI")? Or is "thinking" the capability to come up with creative ideas, or even to become conscious like humans (so-called "general AI")?
Over time, AI research has gone through several periods of optimism and pessimism. The most recent boom started in 2012, when the AlexNet neural network beat its state-of-the-art competitors by a huge margin in the annual ImageNet Large Scale Visual Recognition Challenge. The same year, computer scientist Andrew Ng succeeded in teaching algorithms to recognise cats in YouTube videos. A task trivial for human babies had seemed impossible for the most advanced machines. Ng's neural network was able to identify cats without being given any information about whether, or where, a cat appeared in a video frame. In other words, after watching and analysing millions of frames, the "stupid" network developed its own concept of what a cat looks like. This was the most impressive part of Ng's achievement: the scientist didn't know exactly how his algorithm was "thinking", but the outcome was correct.
The AI boom had begun, fuelled by technology (the wide availability of powerful and cheap GPUs capable of parallel processing, and practically unlimited storage for every kind of data, including images and video), by investment from the most prominent corporations (Google, IBM, Microsoft, Amazon and Facebook), and by crowdsourcing, enabled by releasing the most advanced frameworks to the public. These technologies are also making inroads into fintech.
As a result, every company feels it has to use the AI, ML and DL acronyms to describe its product, be it a dating platform or gluten-free candies. It's a prisoner's dilemma: even if startup founders are sceptical about the hype, they have to use the terms. Otherwise, they seem old-fashioned.
ML is just one of the things an AI is supposed to do. We can say a machine is learning if its algorithms can analyse data and make future decisions based on it. Here are two examples:
- An algorithm is "fed" a set of photographs. People then tag the ones where they can see a cucumber. After parsing a significant amount of such data, and applying a machine learning algorithm, the machine can analyze a new set of pictures and with a high level of confidence identify a cucumber.
- An algorithm analyses the music or video users are playing in an app. It checks whether they listen to or watch all of it, or skip to the next one. Over time, the algorithm can predict which song or movie a particular user is going to like.
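The first example can be sketched as a minimal nearest-neighbour classifier. This is a hypothetical toy, not any real system: the feature values and labels below are invented for illustration, standing in for data extracted from tagged photographs.

```python
import math

# Toy training set: each photo is reduced to two invented features
# (greenness, elongation); label 1 = "cucumber", 0 = "not a cucumber".
training = [
    ((0.9, 0.8), 1),
    ((0.8, 0.9), 1),
    ((0.2, 0.3), 0),
    ((0.3, 0.1), 0),
]

def classify(features):
    """Label a new photo with the label of its closest tagged example."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    nearest = min(training, key=lambda item: dist(item[0], features))
    return nearest[1]

print(classify((0.85, 0.75)))  # near the cucumber examples -> 1
print(classify((0.25, 0.20)))  # near the non-cucumber examples -> 0
```

The more tagged examples the training set contains, the finer the distinctions such a classifier can draw, which is exactly the "more cucumbers, better recognition" effect described below.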
Simple (shallow) ML methods are used in narrow AI applications. They are getting better with time, as the more cucumbers an algorithm is fed, the more precisely it can recognize them.
DL is a set of algorithms that aim to build a hierarchical representation of the data, and it could be applied to the examples above as well. "Hierarchical" means that they work in "layers", learning step by step how to make accurate decisions: first recognising the shape of an object, then its colour, texture, and so on. Unlike shallow ML methods, neural networks can learn on their own which features matter for correctly identifying a cat or a cucumber, or for predicting which song will appeal to a user. While the simplest task-specific ML algorithms need to be told what "to look at", DL algorithms create their own ways to solve a problem.
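The layered idea can be sketched as a tiny feed-forward network in plain Python. Everything here is an assumption for illustration: the weights are made up rather than trained, and a real network would learn them from data.

```python
import math

def sigmoid(x):
    """Squash any number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer: each output neuron mixes all inputs, then squashes."""
    return [
        sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
        for ws, b in zip(weights, biases)
    ]

# The hidden layer might come to detect coarse features (shape, colour);
# the output layer combines them into a decision. Weights are invented.
x = [0.5, 0.9]                                         # input features
h = layer(x, [[1.0, -1.0], [0.5, 0.5]], [0.0, -0.2])   # hidden layer
y = layer(h, [[1.5, -1.0]], [0.1])                     # output layer
print(round(y[0], 3))  # a single score between 0 and 1
```

Training would consist of nudging those weight numbers until the output score agrees with the tagged examples; the layered structure is what lets the network build up its own intermediate features.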
DL algorithms use more sophisticated structures, such as convolutional neural networks, belief networks, or recurrent neural networks. By building them, engineers try to imitate biological nervous systems. However, they are still far from replicating the way animals and humans process data.
The aforementioned AlphaGo uses Deep Reinforcement Learning techniques to compete with human Go players in a game that requires intuition and imagination. Unlike a simple chess-playing algorithm, which is told by people how to react to specific situations, AlphaGo invents its own moves, moves that Go players can in turn learn from.
DL is a set of ML methods that structure algorithms into layers. They are inspired by biological neural networks and allow computers to develop their own "thinking" and make more independent decisions. Neural networks are usually "black boxes": it is hard to explain why a network identified a cucumber in a picture, even when it did so correctly. Simple ML methods, such as a shallow decision tree, are much easier to explain, but they cannot learn as efficiently. DL is much more powerful, able to build hierarchical representations, but it is also prone to overfitting (overtraining), which happens when the algorithm "focuses" on rules that work only on the data it is fed, rather than on rules that carry over to larger sets of new data. As a result, the final model matches the training data perfectly but cannot make accurate predictions in the real world.
Deep Learning methods can take Machine Learning and its advantages to the next level: from recognising cats and cucumbers, grouping emails, and winning chess games, to teaching Go players new moves, driving autonomous cars, and detecting cancer early and automatically.