IoT is extending networks further and further from conventional workstations and centralized data centers. That trend has, in turn, created the need for computing power closer to those endpoints. Edge computing devices, such as gateways, first addressed that need.
There's now another option: tiny machine learning, or tinyML, which embeds analytics on sensors at the very end (or "very edge") of a connected device ecosystem.
"The question was whether we could perform analytics on the device itself. It was a Mission Impossible kind of thing," said Evgeni Gousev, senior director at Qualcomm Technologies as well as co-founder and board chairman at the tinyML Foundation.
TinyML (a trademark of the tinyML Foundation) is an approach that runs machine learning algorithms on the small, low-power devices at the very end of an IoT ecosystem, and it works even without connectivity.
As such, it ensures that devices can process data exactly where it's created and can detect and respond to issues in real time as programmed -- without depending on bandwidth capacity or network latency.
"It solves cost, power and privacy issues. It's a cheap, democratic way of doing AI," Gousev said.
What's driving AI at the very edge
The ability to instantly understand an environment and respond to issues appropriately is one of IoT's biggest value propositions. Delivering on it requires intelligence -- and fast access to that intelligence. Many organizations have turned to cloud computing to provide it.
"We put AI and machine learning models that kept growing in the cloud where there was massive infrastructure. So, the inference was done there. But if we have a low-power, remote device or poor connectivity, that's not so good," said Pablo Micolini, engineering manager for the innovation and engineering company Theorem.
He believes tinyML is the right choice when the processing has to fit on the tiny devices themselves rather than in the cloud. For example, a wearable medical device designed to detect a critical health problem would need an "always-on" capability to analyze patient data and send out alerts when there's an emergency.
"Latency matters. You can't do all of your processing in the cloud, so certain processing has to be done on the edge where there's less latency. TinyML is even better, with processing on the device and no latency," said Andrew Nelson, principal architect in cloud and data center transformation at Insight Enterprises, an IT services and solutions company.
Gousev agreed with the benefits Nelson cited, noting that organizations have various reasons to want or need tinyML.
He cited numerous "trigger types of use cases" where smart sensors are programmed to detect and then alert to specific triggers. Retailers, for instance, could use tinyML to detect empty store shelves and alert to the need to restock. Hoteliers could use tinyML in an emergency to identify occupied hotel rooms. Managers at elder care facilities could use it to detect falls in individual units. Also, officials could use it to detect and identify certain sounds, as is the case with public safety systems that detect the sound of gunshots.
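The trigger pattern behind these use cases can be sketched as a simple on-device loop: a lightweight model scores each sensor reading locally, and only detected trigger events leave the device. The model, readings and alert format below are illustrative stand-ins, not taken from any specific product; a real deployment would run a quantized neural network on an embedded runtime.

```python
# Illustrative sketch of a tinyML-style trigger loop. The "model" here is a
# stand-in threshold function for a hypothetical shelf-stock sensor; in a
# real system it would be a small quantized neural network.

def tiny_model(reading: float) -> str:
    """Classify one sensor reading (hypothetical units, 0.0-1.0)."""
    return "empty_shelf" if reading < 0.2 else "stocked"

def process_readings(readings, alert):
    """Run inference locally; only trigger events are reported off-device."""
    for r in readings:
        if tiny_model(r) == "empty_shelf":
            alert(f"restock needed (reading={r})")

alerts = []
process_readings([0.9, 0.85, 0.1, 0.7], alerts.append)
print(alerts)  # only the one trigger event is reported, not the raw stream
```

The point of the pattern is that the raw sensor stream never needs to cross the network: bandwidth is consumed only when a trigger fires.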
Not everyone is convinced that tinyML is needed in such cases. Nelson, for one, questioned whether there's significant need for tinyML. He said tinyML may be needed in extreme use cases where scientists may want sensors with embedded intelligence for field research in remote areas. However, he believes organizations can usually use edge devices or cloud resources for their IoT deployments' analytics requirements.
"I haven't seen processing that we can't do on the edge in most cases," he said.
Indeed, experts know that organizations likely won't use tinyML as the only intelligent solution. As Micolini explained, organizations can opt to use tinyML in an IoT deployment if they only need some simple analytics at the endpoint. However, most organizations will design IoT deployments where tinyML will work in conjunction with intelligence both on nearby edge devices and in the cloud.
"This deployment allows you to have simple models on the device complemented by more complex models on the edge and in the cloud," Micolini said.
Benefits, challenges and the future of tinyML
Researchers expect tinyML to grow quickly. ABI Research, a tech market advisory firm, predicts that 2.5 billion devices with tinyML chipsets will ship in 2030, with an increasing focus on low latency, advanced automation and highly power-efficient, low-cost chipsets.
Proponents credit this growth to the benefits described above: tinyML provides instantaneous analytics and response, eliminates latency and works without connectivity. It can also keep the data it processes local, helping to safeguard it from hackers, who generally target centralized data stores.
Despite such benefits, experts said enterprise IT leaders should expect challenges when implementing and scaling tinyML within their organizations.
Micolini said CIOs will find that there's a limited number of technologists who have the mix of hardware and software skills required to work with tinyML and assess which processes should be handled by the embedded analytics and which should go to edge devices or even centralized servers. He added that CIOs will also have to be realistic about the limited intelligence that tinyML can deliver as well as its capacity to be trained and updated once deployed.
Gousev agreed, explaining that tinyML doesn't have as much processing power as edge or cloud systems. For example, it can distinguish a dog from a cat, but it doesn't have the processing power to distinguish breeds.
"By making it small and cheap, you're giving up some of the capabilities of the technology," he said.
That reality -- alongside growing AI capacity in edge devices, the expanding 5G network and other advances in IoT technologies -- may limit the appeal of tinyML and the need for it. Some experts, like Nelson, expect tinyML to be most useful for niche needs, such as deployments in extremely remote locations. Others, though, expect explosive growth in tinyML deployments in the years ahead.
"It's going to be everywhere," Gousev said.