The 9th edition of MLguru is here! Prepare for a bunch of hot news from the world of Machine Learning and artificial intelligence.
In this edition you will read about:
Have you ever realized that it's already 5 a.m. and you're still sitting in front of your laptop, watching not-exactly-sophisticated videos that make no sense? It's happened to me a few times, and I always ask myself the same question: "How did I end up here?" I just wanted to watch one video and go to sleep. And yes, that was four hours ago.
From 2010 to 2011, Guillaume Chaslot worked on YouTube's artificial-intelligence recommendation engine, that is, the algorithm that decides what you see next based on your viewing habits and previous searches. One of Chaslot's main tasks was to increase the amount of time people spent on YouTube. At the time, that goal seemed harmless. But nearly a decade later, we can see that it went terribly wrong. Read more.
For years, Apple’s FaceTime has been the most intimate way to talk to someone digitally. But it’s always been imperfect, too. FaceTiming can feel a bit like two people talking while checking their email at the same time.
To alleviate that, Apple has decided to simulate eye contact in FaceTime with digital fakery. App designer Mike Rundle noticed a new feature in iOS 13's third developer beta called FaceTime Attention Correction. Working on the iPhone XS and XS Max, the system appears to use ARKit, the same technology that powers Animoji, to adjust the position of your pupils digitally. Read more.
Megvii, a Chinese AI startup that supplies facial-recognition software for the Chinese government's surveillance program, is expanding its technology beyond humans to recognize the faces of pets. The company says the Megvii app can register your dog simply by scanning its snout with your phone's camera. Just as a phone registers your fingerprint for biometric unlocking, the app asks you to photograph your dog's nose from multiple angles. Megvii says the app is 95 percent accurate and has already reunited 15,000 pets with their owners. Read more.
Any music lovers out there? What if we told you that you could generate 8-bit music with Transformer networks? Chris Donahue and his team have just announced their latest project, LakhNES, a model built on transfer learning: they pre-trained it on the large, heterogeneous Lakh MIDI dataset and then fine-tuned it on NES music. The results are amazing! Read about the project here, or simply check out Chris's profile on Twitter.
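If you're curious what the pre-train/fine-tune recipe looks like in general, here is a deliberately tiny sketch. To be clear: LakhNES fine-tunes a Transformer on MIDI event sequences; the toy logistic-regression task, datasets, and hyperparameters below are our own illustrative assumptions, used only to show the two training phases.

```python
import math
import random

random.seed(0)

def train(w, data, lr, steps):
    """Plain stochastic gradient descent on the logistic loss."""
    for _ in range(steps):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            w = [wi - lr * (p - y) * xi for wi, xi in zip(w, x)]
    return w

def make_data(n, w_true):
    """Synthetic linearly separable data labeled by w_true."""
    data = []
    for _ in range(n):
        x = [random.gauss(0, 1), random.gauss(0, 1)]
        y = 1.0 if sum(wi * xi for wi, xi in zip(w_true, x)) > 0 else 0.0
        data.append((x, y))
    return data

# Phase 1: "pre-train" on a large source dataset
# (stands in for the heterogeneous Lakh MIDI corpus).
source = make_data(500, [1.0, -2.0])
w = train([0.0, 0.0], source, lr=0.1, steps=5)

# Phase 2: fine-tune on a small, related target dataset
# (stands in for NES music), starting from the pre-trained
# weights and using a smaller learning rate.
target = make_data(40, [1.2, -1.8])
w = train(w, target, lr=0.02, steps=10)

correct = sum(
    (1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x)))) > 0.5)
    == (y == 1.0)
    for x, y in target
)
accuracy = correct / len(target)
```

The point is the shape of the workflow, not the model: initialize from weights learned on plentiful data, then adapt to the scarce target domain with gentler updates.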
Are you tired of blindly tweaking layout parameters to visualize your graph? A new machine learning model, created by Oh-Hyun Kwon and his team, gives you a WYSIWYG interface for intuitively producing the layout you want. Check out their demo, or follow Oh-Hyun on Twitter for more details.
A baby can recognize an elephant after seeing two photos, while deep-learning algorithms need to see thousands, if not millions. A teen can learn to drive safely after about 20 hours of practice and manage to avoid crashes without ever experiencing one, while reinforcement-learning algorithms (often combined with deep learning) must go through tens of millions of trials, including many egregious failures. How can we change that? The answer might lie in unsupervised learning. Read more.
Last Friday, we ran a day-long workshop on Machine Learning for Fintech companies. We understand that not all of you could come to London and participate. This is why we are planning a series of free online webinars that will expand your Machine Learning knowledge. Please let us know the topics that you find most interesting. Is it fraud detection? Or maybe Machine Learning in the healthcare industry? Or something else? Drop me an email with your suggestions.
Let’s be in touch!