Core ML vs TensorFlow Lite: Mobile ML Frameworks Comparison


Mateusz Opala

Updated Nov 13, 2024 • 8 min read

Including machine learning (ML) capabilities can add a lot of value to your product. Today’s customers expect applications to be tailored to their precise needs, and machine learning is increasingly becoming the go-to solution.

Machine learning gives machines the power to evolve. Using algorithms that continuously assess and learn from data, ML enables computers to access insights that would often be invisible to the human eye, learn from them, and improve. In other words, computers that use ML systems can recognise patterns in enormous datasets and act upon them.

However, these types of tasks require vast computing power that has historically been beyond the capabilities of pocket devices such as smartphones and tablets. This, of course, hasn’t meant that our mobile applications have been deprived of ML capabilities. Rather, building machine learning functionality into a smartphone app meant offloading the computational heavy lifting to a remote data centre via an Internet connection, rather than running the necessary algorithms on the mobile device itself.

Today, however, developers are hard at work bringing ML capabilities directly onto mobile platforms, with both Apple and Google releasing frameworks that run machine learning models entirely on the device. For iOS, Apple’s machine learning framework is called Core ML, while Google offers TensorFlow Lite, which supports both iOS and Android. Let’s take a look at both platforms and see how they compare.

Core ML

The Core ML framework from Apple allows developers to integrate trained machine learning models into mobile apps. It supports a variety of ML models, including neural networks, tree ensembles, support vector machines, and generalised linear models.

Essentially, the Core ML library enables mobile app developers to train ML models on powerful computers, then ship the trained models to the smartphone, where an optimised version runs locally.
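To make that workflow concrete, here is a minimal sketch using Apple’s coremltools Python package to export a model trained off-device into the .mlmodel format that Core ML loads. The scikit-learn classifier, feature names, and file name are all illustrative choices of ours, not anything Core ML prescribes.

```python
# Sketch: train a simple model on a desktop machine, then export it to
# Core ML's .mlmodel format for bundling into an iOS app.
import coremltools as ct
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# A stand-in training step; in practice this would be your own model.
X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=500).fit(X, y)

# coremltools ships a converter for scikit-learn estimators.
mlmodel = ct.converters.sklearn.convert(
    clf,
    input_features=["sepal_len", "sepal_wid", "petal_len", "petal_wid"],
    output_feature_names="species",
)
mlmodel.save("IrisClassifier.mlmodel")
```

Once the saved file is added to an Xcode project, Xcode generates a Swift class with a typed prediction method, which is what keeps the on-device call down to a few lines of code.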

As Apple states,

“Core ML is optimized for on-device performance, which minimizes memory footprint and power consumption. Running strictly on the device ensures the privacy of user data and guarantees that your app remains functional and responsive when a network connection is unavailable.”

Developers can make use of data from a smartphone’s camera through Apple’s Vision – a framework that performs face and landmark detection, text detection, barcode recognition, image registration, and general feature tracking. Core ML also supports Apple’s Natural Language framework for natural language processing, and GameplayKit for evaluating learned decision trees.

In practice, Core ML allows developers to simplify the integration of machine learning into applications to create various “smart” functions with just a few lines of code. Such functions include image recognition, predictive text input, pattern recognition, face recognition, voice identification, handwriting recognition, and more.

There are some drawbacks and limitations to Core ML, however. For example, Core ML only supports two types of machine learning tasks: regression and classification. While it’s true that classification is arguably the most-used ML task, there are many others, including clustering, ranking, structure prediction, and data compression, most of which are currently beyond the scope of Core ML.

Also, as InfoWorld’s Serdar Yegulalp notes, “There are no provisions within Core ML for model retraining or federated learning, where data collected from the field is used to improve the accuracy of the model. That’s something you would have to implement by hand, most likely by asking app users to opt in for data collection and using that data to retrain the model for a future edition of the app.”

TensorFlow Lite

TensorFlow Lite is Google’s on-device version of its open-source TensorFlow project, which launched in 2015. TensorFlow’s popularity quickly exploded, with the likes of eBay, Uber, Airbnb, and Dropbox using it to power their AI development. Following this, Google produced a slimmed-down version called TensorFlow Mobile, designed to shrink the size and complexity of ML software so that it could run efficiently on smartphones.

TensorFlow Lite is an even leaner version of the library, which helps developers build lightweight ML software for use not only on smartphones but on embedded devices as well. Google says that “Going forward, TensorFlow Lite should be seen as the evolution of TensorFlow Mobile, and as it matures it will become the recommended solution for deploying models on mobile and embedded devices.”

The main purpose of the TensorFlow Lite framework is to bring low-latency inference to mobile and embedded devices, taking advantage of the increasingly common machine learning chips now appearing even in small devices. It’s designed to be lightweight, fast, and optimised for mobile, dramatically improving model loading times and supporting hardware acceleration.
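As a sketch of what that pipeline looks like in practice, the snippet below uses TensorFlow’s TFLiteConverter (as shipped in recent TensorFlow 2.x releases) to turn a trained Keras model into the compact .tflite format, with optional post-training quantisation to shrink it further. The tiny model here is a throwaway stand-in; only the converter calls are the point.

```python
# Sketch: convert a trained TensorFlow/Keras model into the .tflite
# flatbuffer format that TensorFlow Lite runs on mobile and embedded devices.
import tensorflow as tf

# A tiny stand-in network; in practice, load your own trained model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Optional post-training quantisation: trades a sliver of accuracy for a
# smaller, faster model on memory-constrained devices.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_bytes = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_bytes)
```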

Like Core ML, TensorFlow Lite has a number of pre-trained and optimised-for-mobile models that developers can use “out of the box”, and these models can also be tweaked and retrained to suit specific needs.

MobileNets is a family of mobile-first computer vision models that can classify images across 1,000 different object classes with high accuracy. Similar to MobileNets, though larger in size, is the Inception v3 image recognition model, while Smart Reply is an on-device conversational model that provides one-touch replies to incoming chat messages.
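To show the inference side, here is a minimal sketch driving the TensorFlow Lite Interpreter from Python; the Java and Swift APIs on-device follow the same allocate, set, invoke, get flow. The model path is a placeholder, and a zero-filled array stands in for a real preprocessed camera frame.

```python
# Sketch: run a .tflite image classifier (e.g. a MobileNet variant).
# "mobilenet_v1.tflite" is a placeholder path, not a bundled file.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="mobilenet_v1.tflite")
interpreter.allocate_tensors()

input_info = interpreter.get_input_details()[0]
output_info = interpreter.get_output_details()[0]

# Zero array as a stand-in for a real, preprocessed camera frame.
frame = np.zeros(input_info["shape"], dtype=input_info["dtype"])
interpreter.set_tensor(input_info["index"], frame)
interpreter.invoke()

scores = interpreter.get_tensor(output_info["index"])
print("Predicted class index:", int(scores.argmax()))
```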

At present, TensorFlow Lite isn’t a full release, meaning there’s still much more to come as the library grows and more features are added. “With this developer preview, we have intentionally started with a constrained platform to ensure performance on some of the most important common models,” said the TensorFlow development team in a blog post.

“We plan to prioritize future functional expansion based on the needs of our users. The goals for our continued development are to simplify the developer experience, and enable model deployment for a range of mobile and embedded devices.”

Final Thoughts

With Core ML and TensorFlow Lite, it’s now possible to bring machine learning out of the cloud and onto the mobile devices in people’s pockets. The beauty of both frameworks is that they make it exceptionally easy for developers to get started with machine learning. Though still in their relative infancy, these frameworks offer toolkits that provide useful new capabilities in smartphone applications. And there are many benefits to doing machine learning locally rather than in the cloud, including lower latency, greater availability, improved privacy, and lower costs.

If you’re considering creating an application with on-device machine learning capabilities and want to know whether Core ML or TensorFlow Lite will be the most appropriate, get in touch with Netguru today, and we’ll chat through your requirements and advise you on the best path forward.
