Bearing this in mind, there are many tasks right now that lack the data needed to properly train ML models, but such data will become available soon. What is going to generate new sources of data that are currently unavailable, but could be in the near future?
I see three main sources or processes that will make new kinds of data available:
digitization of processes;
sharing data that one party generates and third parties could put to further use;
using ML to collect data that was previously hard or expensive to acquire.
Digitization of processes
The Internet of Things is a hot topic right now, and it will bring a great new pool of data that is not being processed yet. An example from the Gartner report “AI on the Edge: Fusing Artificial Intelligence and IoT Will Catalyze New Digital Value Creation” shows that by combining IoT and ML, Google was able to reduce the cooling costs of its data centres by a whopping 40 percent! You can read more about IoT in the context of Industry 4.0 in our article.
Data sharing
When you hear this and realize that you’re sitting on a pot of gold, you start looking for parties interested in paying for access to data that you yourself cannot make use of. For years, a great example of this has been retailers, who had a lot of information about consumers and their shopping behaviour and shared it with brand owners and marketing agencies so that they could gain insights about customers. Now more and more companies are deciding to share the data they generate – from logistics companies to telecoms.
New ways of acquiring data
And finally, using Machine Learning to get data for Machine Learning sounds odd, but it is often the case! A good example is the segmentation of aerial images. For years it was quite expensive to label photos to get valuable insights, e.g. where a road or a house can be found in the photo. This had to be done manually by humans and, as such, required a significant amount of time and effort.
That’s where machine learning comes in handy – there are already algorithms being developed that can transform pictures into meaningful data that can be used in further analysis, be it a traditional one or one involving machine learning. A company that specializes in producing data this way is, for example, TensorFlight.
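To make the idea concrete, here is a deliberately tiny sketch of the pipeline: a per-pixel classifier turns an image into labels, and the labels are then aggregated into the kind of structured data an analyst can work with. The class names and thresholds are illustrative assumptions of mine – a real system like the ones described above would use a trained neural network in place of the hard-coded rule.

```python
# Toy stand-in for ML-based aerial-image segmentation: a hard-coded rule
# classifies each "pixel" of a tiny grayscale grid, and the resulting labels
# are summarised into structured, analysis-ready data.

ROAD, BUILDING, OTHER = "road", "building", "other"

def classify_pixel(intensity):
    """Stand-in for a per-pixel model prediction (a real system would use a CNN)."""
    if intensity < 60:
        return ROAD        # dark asphalt
    if intensity > 180:
        return BUILDING    # bright rooftops
    return OTHER           # vegetation, water, ...

def segment(image):
    """Turn a 2-D grid of intensities into a 2-D grid of class labels."""
    return [[classify_pixel(px) for px in row] for row in image]

def coverage(labels):
    """Aggregate pixel labels into the tabular data analysts actually want."""
    flat = [lbl for row in labels for lbl in row]
    return {cls: flat.count(cls) / len(flat) for cls in (ROAD, BUILDING, OTHER)}

aerial = [
    [ 40,  45, 200, 210],
    [ 50,  55, 205, 215],
    [120, 130, 140, 150],
]
labels = segment(aerial)
print(coverage(labels))   # fraction of the scene covered by each class
```

The interesting part is the last step: once pixels become labels, questions like “how much of this district is covered by roads?” reduce to simple counting, which is exactly the previously expensive-to-acquire data the text refers to.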
Algorithms and methods
It starts with data, but it’s not only about data. Machine Learning is booming right now, and it doesn’t look as if that is going to change in the near future – a new ML paper is published every 20 minutes!
Let’s look at the most interesting (in my opinion) methods that are being developed, or are about to migrate from the papers to reality in the near future:
Natural Language Processing
When people started to use language, a new era in the history of humankind began.
We are still waiting for the same revolution in human-computer understanding, and we still have a long way to go. But in the meantime, even though the computer may not fully understand us, it can pretend to do so, and still be quite effective in the majority of applications. The field is developing rapidly. In fact, a quarter of all ML articles published lately have been about NLP, and we will see many applications of it, from chatbots through virtual assistants to machine translation.
Reinforcement learning
Reinforcement learning might sound mysterious to people outside of Machine Learning, but it may well be one of the ‘next big things’. It’s been around for years, but only recently has it begun to flourish and gain momentum.
A definition from Wikipedia may shed some light on what it really is: reinforcement learning is an ‘(...) area of machine learning concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward.’ So it’s all about creating programs that interact with an environment (a computer game or a city street) to maximize some reward, taking feedback from the environment. This finds a broad range of applications, from robots figuring out on their own how to walk, run or perform some task, to autonomous cars, to beating human players at games (perhaps the least practical of the three).
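The feedback loop in that definition can be sketched in a few lines. Below is a minimal tabular Q-learning example on a one-dimensional corridor: the agent earns +1 for reaching the goal cell and pays a small cost for every step, so over many episodes it learns to always move right. The environment, rewards and hyperparameters are all illustrative assumptions, not taken from any particular library or paper.

```python
import random

# Tabular Q-learning on a 5-cell corridor. The agent starts at cell 0 and is
# rewarded +1 for reaching cell 4; every other step costs -0.01.

N_STATES = 5          # cells 0..4; cell 4 is the goal
ACTIONS = [-1, +1]    # step left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration

# Q-table: estimated cumulative reward for each (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: move, clamp to the corridor, hand out the reward."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else -0.01
    return nxt, reward

random.seed(0)
for _ in range(500):                    # training episodes
    state = 0
    while state != N_STATES - 1:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# After training, the greedy policy steps right toward the goal from every cell
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

Swap the corridor for a game screen or a street scene and the Q-table for a neural network, and you have the shape of the systems mentioned above – the “taking feedback from the environment” part is exactly the update line in the inner loop.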
Image recognition getting better
Improvement of image recognition is somewhat similar to the NLP topic. In language processing, it’s all about making a computer understand what we are saying, whereas in image recognition we’d like to be on the same page when it comes to visual inputs. There is, however, a significant difference: if a machine can spot a visual pattern that is too complex for us to comprehend, we probably won’t be too picky about it. But it’s a double-edged sword, because machines can sometimes get lost in low-level noise and completely miss the point.
As image recognition improves, algorithms are becoming capable of more and more advanced tasks, performing similarly to humans or even outperforming them. They also never get bored and return results in the blink of an eye. Does that sound like the future of radiologists, surveyors and factory quality specialists? To me, probably yes.
When machine learning won’t help you
After analyzing the factors that impact the capabilities of machine learning, we can better understand how your business and processes can leverage artificial intelligence. You can also get the big picture of what kinds of tasks can be improved or automated with AI-driven solutions. Here, I’d also like to shed some light on the constraints that can hamper the machine learning revolution in your business:
If you have an application in mind but no data is available, or the quality of the data is low, you shouldn’t expect any breakthroughs.
If the application is hard to define or there is no way of measuring how well it’s performed, training a Machine Learning model may be impossible or very limited.
If nothing similar has been researched in the area, you’d probably have to wait some time before a method matures enough to be deployed in production.