What is Tiny Machine Learning? If you haven’t yet heard of it, you will. Tiny Machine Learning is a set of machine learning libraries that can run on 32-bit microcontrollers and occupy very little memory. Increasingly, 32-bit microcontrollers are so inexpensive that embedding them into a range of sensors and meters, or using them as front-end sentinels for more complex devices, is commonplace.
As a result, this technology targets very low-power IoT applications and can be used to determine when to power up or power down more complex, power-hungry devices. Companies investing in this technology include ARM and Google (through its TensorFlow team), along with several other organizations and their experts in the machine learning and embedded computing spaces.
So, how does this technology interact with the edge? Tiny Machine Learning (TinyML) will drive the need for modern edge data management in more complex devices and gateways as organizations seek to make complex AI applications work on edge-computing devices. Though TinyML does not necessarily require modern edge data management, when it is deployed across a grid of smart meters, sensors or front-end sentinels for more complex devices, the requirement becomes clear.
Modern edge data management is largely defined around platforms with 64-bit processors (MPUs), megabytes of on-chip memory and gigabytes of flash or other secondary storage. These 64-bit systems power smartphones, surveillance cameras and IoT gateways, device categories that already number in the billions.
For a bit of background, these devices originally used 32-bit microcontrollers (MCUs). But as device operations have become networked as well as more sophisticated and able to leverage richer underlying compute resources, 32-bit MCUs have ceded to 64-bit MPUs. As a result, TinyML targets a potential footprint of trillions of devices, orders of magnitude more than the installed base of 64-bit IoT systems. The question is, how does TinyML impact mobile and IoT platforms as it relates to edge data management?
TinyML drives more edge data management
Handling metadata as well as monitoring and managing those downstream 32-bit devices will be performed as a shared task across complex devices and gateways on the edge and in the cloud. In turn, edge data management requires local on-device persistent data management as well as manipulation and analysis of data shared across the edge. Let’s review a few real-world examples to make this more concrete.
It’s important to understand that within a single system, there might be various levels of technology residing together, such as 64-bit MPUs and 32-bit MCUs. For example, consider a smart car: though the main CPU is a 64-bit MPU, there are several 32-bit and 16-bit MCUs managing everything from tire pressure monitoring to the power management of each battery in a hybrid or fully electric vehicle. In an autonomous vehicle, there might even be several MPUs supported by several MCUs. If this sounds far-fetched, keep in mind that most mid-range Ford models carry dozens of MCUs in each car. If you have MCUs handling specific functions, the next logical step is MPUs capable of more complex analysis and of managing those downstream devices, capabilities that come into play well before we reach the level of a Tesla.
Sounds reasonable, but let’s bring this a bit closer to home. A great example I’ve seen recently came from Pete Warden, Google’s evangelist for TensorFlow Lite and TinyML, at a recent ARM conference. In a test of a speech recognition model built with deep learning techniques and occupying only 19 KB on a 32-bit ARM MCU, a device was able to run on extremely low power and recognize certain command words, such as a wake-up phrase.
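To see how a deep learning model can plausibly fit in a footprint that small, a rough back-of-the-envelope sketch helps: after int8 quantization, each parameter costs one byte, so a network of a few thousand parameters lands well under the memory of even a modest MCU. The layer shapes below are illustrative assumptions, not the actual model Warden demonstrated.

```python
# Hypothetical layer shapes for a tiny keyword-spotting network.
# These are illustrative assumptions, not the actual TensorFlow Lite
# Micro speech model; the point is the order of magnitude.
LAYERS = [
    ("conv2d", 8 * (10 * 4 * 1) + 8),  # 8 filters of 10x4x1, plus biases
    ("dense", 4000 * 4 + 4),           # flattened features -> 4 command classes
]

def int8_model_bytes(layers):
    """Weight storage after int8 quantization: one byte per parameter."""
    return sum(params for _name, params in layers)

total = int8_model_bytes(LAYERS)
print(f"{total} bytes (~{total / 1024:.1f} KB)")  # comfortably under 19 KB
```

Even with activation buffers and the interpreter added on top, this is why a useful wake-word detector can live entirely inside an MCU's on-chip memory.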
With a slightly larger, but still small, machine learning routine for machine vision, a device can perform gesture recognition. In and of itself, this isn’t terribly useful. However, combine it with local logic for powering the rest of the system and the accompanying MPU, as well as more robust speech recognition and machine vision, and you can see how these MPU-based functions would take full advantage of the capabilities of downstream TinyML-equipped MCUs.
Of course, in taking in all of this information, whether through speech or gesture recognition, these IoT and mobile devices must be able to quickly and efficiently analyze, determine and perform the required action. This is where data management comes in.
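The "front-end sentinel" pattern described above can be sketched in a few lines: the MCU runs its tiny inference continuously, and only when a detection is confident enough does it power up the larger MPU. The class, threshold and method names here are hypothetical, a minimal sketch of the gating logic rather than any vendor's API.

```python
# Sketch of the sentinel pattern: a low-power MCU keeps the power-hungry
# MPU asleep until a TinyML inference crosses a confidence threshold.
# Threshold and names are illustrative assumptions.
WAKE_THRESHOLD = 0.9

class Sentinel:
    def __init__(self, threshold=WAKE_THRESHOLD):
        self.threshold = threshold
        self.mpu_awake = False

    def on_inference(self, confidence):
        """Called after each wake-word or gesture inference on the MCU."""
        if confidence >= self.threshold and not self.mpu_awake:
            self.mpu_awake = True   # in firmware: assert the MPU's power rail
        elif confidence < self.threshold and self.mpu_awake:
            self.mpu_awake = False  # power the MPU back down between commands
        return self.mpu_awake
```

The design choice worth noting is that all of the frequent, cheap decisions stay on the MCU; the MPU pays its energy cost only for the rare, complex work.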
The role of data management in supporting TinyML on IoT devices
Data management support from the MPU-based mobile or IoT device to the MCU-based device running TinyML can take several forms. Initial training of the TinyML algorithms is done in a back-end laboratory but, once deployed, local persistent data is stored and used to further tune the algorithm to the specific voice patterns of the actual end users.
What this means is that edge data management is necessary to support local machine learning inference. Further, edge data management would facilitate updates to deployed inference routines, as well as to any other code involved in device management. Essentially, edge data management can look across the probability of false positives, improving wake-up command accuracy by running more extensive deep learning routines across both speech and gesture recognition. In addition, it can support predictive maintenance, including storing baseline pattern data for a specific device in the MPU.
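As a concrete sketch of that false-positive tracking, the MPU can persist each wake-word detection locally and flag the model for re-tuning when too many detections turn out to be spurious. The schema, the confirmation signal and the 20% budget below are illustrative assumptions; a real device would use flash-backed storage rather than an in-memory database.

```python
# Hedged sketch: on-device persistence of detections so edge data
# management can measure false positives and trigger re-tuning.
import sqlite3

conn = sqlite3.connect(":memory:")  # on a real device: a flash-backed file
conn.execute("CREATE TABLE detections (ts INTEGER, confirmed INTEGER)")

def log_detection(ts, confirmed):
    """confirmed=1 if the user followed the wake word with a command, else 0."""
    conn.execute("INSERT INTO detections VALUES (?, ?)", (ts, confirmed))

def needs_retuning(max_false_positive_rate=0.2):
    """True when spurious wake-ups exceed the assumed 20% budget."""
    total, false_pos = conn.execute(
        "SELECT COUNT(*), SUM(confirmed = 0) FROM detections").fetchone()
    return total > 0 and false_pos / total > max_false_positive_rate

for ts, ok in [(1, 1), (2, 1), (3, 0), (4, 0), (5, 0)]:
    log_detection(ts, ok)
print(needs_retuning())  # three of five detections were false positives
```

The same local store can feed the heavier cloud-side deep learning runs the article mentions, shipping only aggregated statistics upstream rather than raw audio.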
At the end of the day, the applications of TinyML are endless and go beyond simply powering smart assistants and smart cars. As the human-to-machine process is evaluated, we’ll see a multitude of new use cases in industries such as smart agriculture, intelligent transportation, environmental monitoring and green buildings.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.