Building Brains on the Edge

Running TensorFlow Lite models on microcontrollers

Alasdair Allan

Over the last six months or so we’ve seen a dramatic drop in the amount of computing power necessary to run machine learning models. Machine learning has traditionally been associated with heavy-duty, power-hungry processors, but there’s a huge shift going on in our ability to run models closer to the data.

Announced to the public just three months ago, TensorFlow Lite for Microcontrollers is a massively streamlined version of TensorFlow. Designed to be portable to “bare metal” systems, it needs neither the standard C libraries nor dynamic memory allocation. The core runtime fits in just 16KB on a Cortex-M3, and with enough operators to run a speech keyword detection model takes up a total of 22KB.
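To give a feel for what that looks like in practice, here’s a minimal C++ sketch of the inference loop. Treat it as a sketch rather than gospel: header paths and class names have shifted between releases, g_model stands in for a model flatbuffer compiled into the binary as a C array, and the arena size is a placeholder you’d tune for your own model.

```cpp
#include <cstdint>

#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"

// The model flatbuffer, compiled into the binary as a C array
// (there's no filesystem on bare metal) -- typically generated with xxd.
extern const unsigned char g_model[];

// All working memory comes from a fixed arena: the runtime never calls
// malloc, which is what makes it usable on bare-metal targets.
constexpr int kTensorArenaSize = 10 * 1024;  // placeholder size
static uint8_t tensor_arena[kTensorArenaSize];

int main() {
  static tflite::MicroErrorReporter error_reporter;

  const tflite::Model* model = tflite::GetModel(g_model);

  // AllOpsResolver pulls in every operator; registering only the handful
  // your model needs is how you get down to figures like 22KB.
  static tflite::AllOpsResolver resolver;

  static tflite::MicroInterpreter interpreter(
      model, resolver, tensor_arena, kTensorArenaSize, &error_reporter);
  interpreter.AllocateTensors();  // carve the tensors out of the arena

  TfLiteTensor* input = interpreter.input(0);
  // ... fill input->data.f (or data.int8) with sensor samples ...

  interpreter.Invoke();  // run a single inference

  TfLiteTensor* output = interpreter.output(0);
  // ... act on the scores in output->data ...
  (void)input;
  (void)output;
  return 0;
}
```

The fixed tensor arena is the heart of the design: you hand the interpreter a buffer up front and everything is allocated out of it, which is why the runtime can drop dynamic memory allocation entirely.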

“The kind of AI we can squeeze into a US $30 or $40 system won’t beat anyone at Go, but it opens the door to applications we might never even imagine otherwise.”

Limor Fried

But it is still very much early days for the new framework, with people still feeling around for what might be possible within the constraints imposed by the low-powered hardware. Despite this, in the short time since its release we’ve seen both SparkFun and Adafruit, arguably the two biggest companies in the maker world, jump in feet first with both hardware and software support for machine learning on low-power embedded hardware.

But what’s missing right now is tooling. Making TensorFlow Lite for Microcontrollers available from within the Arduino environment was a big deal, a huge change in the accessibility of machine learning in the emerging edge computing market. However, it’s only a first step.

While you might draw an analogy between a pre-trained model and a binary, and between the data set the model was trained on and source code, it turns out that the data isn’t as useful to you as the model.

Because, taking a step back, the secret behind the recent successes of machine learning isn’t the algorithms; machine learning has been lurking in the background for decades, waiting for us to catch up.

Instead, the success of machine learning has relied heavily on the corpus of training data that companies, like Google, have managed to build up. For the most part these training datasets are the secret sauce, closely held by the companies and people that have them. But those datasets have also grown so large that most people, even if they had them, couldn’t store them or train a new model based on them.

So unlike software, where we want source code rather than binaries, I’d actually argue that with machine learning the majority of us want models, not data. Most of us, whether software developers, hardware people, or makers, should be looking at inferencing, not training.

What we lack, therefore, is simple tooling to help us reuse those models on our own hardware and to easily train new models from the much more limited data sets most of us will have available. More realistically, in a lot of cases that means using transfer learning to retrain existing models on new data.

Building on the initial demo built by the TensorFlow team at Google, Adafruit has invested a lot of time over the last month iterating on the tooling around the speech demo to make it easy to build and deploy models.

They’ve also put together a TensorFlow Lite for Microcontrollers Kit, accompanied by a quick start guide and a guide to training new voice models on your desktop with Docker.

Their latest model is the first ‘triple word’ model for TensorFlow Lite for Microcontrollers that I’ve come across, and it now looks like they might be ready to push beyond the initial demo.

“Our next steps at Adafruit will be to make it easier to install different models and create new ones. With 200 kB of RAM, you could have a model capable of recognizing 10 to 20 words. But even more exciting than voice recognition is the prospect of using these cheap boards to gather data and run models built around very different kinds of signals. Can we use data from the PyGamer’s onboard accelerometer to learn how to distinguish the user’s movements in doing different tasks? Could we pool data and train a model to, say, recognize the sound of a failing servo or a switching power supply?”

Limor Fried
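To make the accelerometer idea concrete, here’s a hedged sketch of what the glue code for such a classifier might look like, reusing an interpreter set up as in the earlier sketch. Everything board-specific is an assumption: read_accelerometer() is a hypothetical helper, and the 128-sample, three-axis input window is a guess at how such a model might be trained.

```cpp
#include "tensorflow/lite/micro/micro_interpreter.h"

// Hypothetical board-specific helper that reads one accelerometer
// sample into x, y, and z.
void read_accelerometer(float* x, float* y, float* z);

// Assumed model input shape: a window of 128 three-axis samples.
constexpr int kWindowSize = 128;

// Fill the input tensor with one window of motion data, run the model,
// and return the index of the highest-scoring gesture class.
int classify_gesture(tflite::MicroInterpreter& interpreter,
                     int num_classes) {
  TfLiteTensor* input = interpreter.input(0);
  for (int i = 0; i < kWindowSize; ++i) {
    float x, y, z;
    read_accelerometer(&x, &y, &z);
    input->data.f[i * 3 + 0] = x;
    input->data.f[i * 3 + 1] = y;
    input->data.f[i * 3 + 2] = z;
  }
  interpreter.Invoke();

  TfLiteTensor* output = interpreter.output(0);
  int best = 0;
  for (int i = 1; i < num_classes; ++i) {
    if (output->data.f[i] > output->data.f[best]) best = i;
  }
  return best;
}
```

The same shape of code would serve the failing-servo idea too; only the front end changes, with a buffer of audio samples in place of the accelerometer window.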

Most of the collective experience with fitting machine learning models onto embedded hardware is in hot word recognition. So while I must admit that I can see why we’ve started out with voice recognition on these low-powered boards, I’m looking forward to seeing TensorFlow Lite for Microcontrollers used for other things.

I was encouraged to see that the SparkFun Edge board was released with a connector to interface with an OmniVision OV7670 camera module, although we haven’t seen models built to take advantage of that quite yet.

However, it looks like Adafruit is thinking along similar lines with their new Braincraft board. Designed in collaboration with Pete Warden, part of the TensorFlow Lite team at Google, who was also involved with the SparkFun Edge, the new board will come both as a standalone board and as a Raspberry Pi HAT.

We’re primarily a visual species, and there are therefore a lot of cameras out in the world. With the addition of machine learning, the camera is probably the best and most flexible sensor we have for making decisions about the world. The new Braincraft board should have either a ‘normal’ CMOS sensor or an infrared sensor like the Panasonic Grid-EYE.

However, the new board should also have other sensors, like an accelerometer, which could be useful for predictive maintenance. That’s something we’ve seen before with the SmartEdge Agile, although that board was tied to the Brainium cloud, whereas the new board should do its processing locally.

The new board is intended to be low-powered, capable of being battery- or solar-powered, and therefore should support some form of low-power, long-range (albeit low-bandwidth) wireless connectivity designed for the Internet of Things, rather than connectivity like Wi-Fi or cellular that needs a bigger power envelope. Right now Adafruit is thinking about NB-IoT for that purpose, although personally I think that LoRaWAN is (arguably) in the lead in that space, and is far more open source and open hardware friendly, since it’s possible to roll your own network very easily.

I know the folks at Adafruit are looking for feedback on what should go into and onto the new Braincraft board, so make sure to comment if you have an opinion on what should be included.

Depending on the state of our technology, computing seems to oscillate between thin and thick client architectures. Either the bulk of our compute power and storage is hidden away in racks of (sometimes distant) servers, or it’s in a mass of distributed systems much closer to home. We’re now on the swing back towards distributed systems once again, and that’s mainly because of machine learning.

Things are moving fast, and it’s going to be a fascinating year for embedded machine learning and computing on the edge.

Alasdair Allan
Scientist, author, hacker, maker, and journalist. Building, breaking, and writing. For hire. You can reach me at 📫 alasdair@babilim.co.uk.