Basic Machine Learning is quite sensible
ML is just one of the capabilities an AI is expected to have. We can say a machine is learning when its algorithms analyze data and base future decisions on it. Here are two examples:
- An algorithm is "fed" a set of photographs, and people tag the ones in which they can see a cucumber. After parsing a significant amount of such tagged data, and applying a machine learning algorithm, the machine can analyze a new set of pictures and identify cucumbers with a high level of confidence.
- An algorithm analyzes the music or video users are playing in an app. It checks whether they listen to or watch all of it, or skip to the next one. Over time, the algorithm learns to predict which song or movie a particular user will like.
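The cucumber example above can be sketched in a few lines. This is a minimal illustration of supervised learning, assuming toy feature vectors (length in cm, "greenness") stand in for real photo features; a simple nearest-neighbor rule plays the role of the learning algorithm.

```python
# Toy supervised learning: classify a new "photo" by comparing its
# features to human-tagged training examples.
# Labels: 1 = cucumber, 0 = not a cucumber.

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def knn_predict(train, labels, sample, k=3):
    """Predict the majority label among the k nearest training examples."""
    ranked = sorted(range(len(train)), key=lambda i: distance(train[i], sample))
    nearest = [labels[i] for i in ranked[:k]]
    return max(set(nearest), key=nearest.count)

# Human-tagged examples: long and green -> cucumber
train = [(18, 0.9), (20, 0.8), (15, 0.85), (5, 0.2), (6, 0.3), (4, 0.1)]
labels = [1, 1, 1, 0, 0, 0]

print(knn_predict(train, labels, (19, 0.88)))  # → 1 (tagged as a cucumber)
```

The more tagged examples the list contains, the more reliable the prediction becomes, which is exactly the "more cucumbers, better recognition" effect described below.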
Simple (shallow) ML methods are used in narrow AI applications. They improve with time: the more cucumbers an algorithm is fed, the more precisely it recognizes them.
Deep Learning takes Machine Learning to the next level
DL is a set of algorithms that aim to build a hierarchical representation of data, and it can be applied to the examples above as well. "Hierarchical" here means that they work in "layers", learning step by step how to make accurate decisions, e.g. recognizing first the shape of an object, then its color, texture, and so on. Unlike shallow ML methods, neural networks can learn on their own which features matter for correctly identifying a cat or a cucumber, or for predicting which song will appeal to a user. While the simplest task-specific ML algorithms needed to be told what to "look at", DL algorithms invent new ways to solve a problem.
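The layered structure can be sketched as a chain of transformations, each layer feeding the next. This is a minimal forward pass with hand-picked weights for illustration (a real network would learn them from data):

```python
# A tiny "deep" network: each layer transforms the previous layer's
# output, building up a hierarchy of features.

def relu(x):
    return [max(0.0, v) for v in x]

def dense(inputs, weights, biases):
    """One fully connected layer: output_j = sum_i(inputs_i * w[i][j]) + b[j]."""
    return [sum(i * w for i, w in zip(inputs, col)) + b
            for col, b in zip(zip(*weights), biases)]

def forward(x, layers):
    for weights, biases in layers:
        x = relu(dense(x, weights, biases))
    return x

# Two layers: 3 inputs -> 2 hidden units -> 1 output
layers = [
    ([[0.5, -0.2], [0.1, 0.4], [-0.3, 0.2]], [0.0, 0.1]),
    ([[1.0], [0.8]], [0.0]),
]
print(forward([1.0, 2.0, 0.5], layers))
```

In a trained network, the early layers would come to represent simple features (edges, shapes) and the later layers more abstract ones, without anyone specifying those features by hand.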
DL algorithms use more sophisticated structures, such as convolutional neural networks, belief networks, and recurrent neural networks. By building them, engineers try to imitate biological nervous systems, although they are still far from replicating the way animals and humans process data.
The aforementioned Google AlphaGo uses Deep Reinforcement Learning techniques to compete with human players at Go, a game that requires intuition and imagination. Unlike a simple chess-playing algorithm, which people tell how to react to specific situations, AlphaGo invents its own moves, which Go players can then learn from.
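The "invents its own moves" idea is the heart of reinforcement learning. Below is a minimal sketch of tabular Q-learning on an assumed toy problem (a four-state corridor with a reward at the end); it is nowhere near AlphaGo's deep RL, but it shows the same principle: no one tells the agent the right moves, it discovers them by trial, error, and reward.

```python
import random

# Toy corridor: states 0..3, actions 0 (left) and 1 (right);
# reaching state 3 yields a reward of 1.
random.seed(0)
N_STATES, GOAL = 4, 3
q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q-value table
alpha, gamma, epsilon = 0.5, 0.9, 0.2        # learning rate, discount, exploration

for episode in range(200):
    state = 0
    while state != GOAL:
        if random.random() < epsilon:
            action = random.randrange(2)            # explore a random move
        else:
            action = q[state].index(max(q[state]))  # exploit the best known move
        next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Standard Q-learning update toward reward + discounted future value
        q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
        state = next_state

# The learned policy: in every state, "right" now scores higher than "left".
print([q[s].index(max(q[s])) for s in range(GOAL)])
```

No rule "go right" was ever programmed; the preference emerges from experience, just as AlphaGo's moves emerge from self-play rather than human instruction.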
DL is a set of ML methods that structure algorithms into layers. They are inspired by biological neural networks and make it possible for computers to develop their "thinking" and make more independent decisions. Neural networks, however, are usually "black boxes": it is hard to explain why a network identified a cucumber in a picture, even when it did so correctly. Simple ML methods, such as a shallow decision tree, are much easier to explain, but they cannot learn as efficiently. DL is far more powerful, able to build hierarchical representations, but it is also prone to overfitting (overtraining), which happens when the algorithm "focuses" on rules that work on the data it is fed rather than rules that generalize to larger sets of new data. As a result, the final model matches the training data perfectly but cannot make accurate predictions in the real world.
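Overfitting is easy to demonstrate. In this sketch the assumed true relationship is y = 2x plus noise; a "memorizer" model reproduces the training data perfectly (zero training error) yet generalizes worse than a plain linear fit:

```python
import random

random.seed(1)
true = lambda x: 2.0 * x
# Noisy training data and clean test data at points the model never saw
train = [(x, true(x) + random.gauss(0, 0.5)) for x in range(10)]
test = [(x + 0.5, true(x + 0.5)) for x in range(10)]

def memorizer(x):
    """Overfit model: return the y of the nearest memorized training point."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

def linear_fit(points):
    """Ordinary least squares for y = a*x + b."""
    n = len(points)
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b

linear = linear_fit(train)

def mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

print("memorizer train error:", mse(memorizer, train))  # exactly 0: it memorized
print("memorizer test error: ", mse(memorizer, test))
print("linear    test error: ", mse(linear, test))
```

The memorizer "focuses" on the training data, including its noise, and pays for it on new points; the simpler model captures the underlying rule instead.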
Deep Learning methods can take Machine Learning and its advantages to the next level: from recognizing cats and cucumbers, grouping emails, and winning chess games, to teaching Go players new moves, driving autonomous cars, and detecting cancer early and automatically.