Edge devices' compute demands complicate cloud IoT choices
Cloud vendors want companies to use their platforms for the full scope of their IoT deployments, but that might not be the best choice for those companies.
As edge computing emerges as part of IoT deployments, users must decide not only how often to send data to the cloud, but whether to send it there in the first place. These decisions are particularly pressing for industrial customers and others whose connected devices require more compute power nearby.
For starters, some edge devices have limited or no connectivity, so a steady stream of data transmission back to a cloud platform isn’t feasible. Furthermore, massive cloud data centers are typically located far from the source of IoT data, which adds latency for data that requires quick analysis and decisions, such as telling an autonomous car to change lanes. And some devices must process large volumes of data quickly, or closer to the source of that information for compliance reasons, which in turn necessitates more compute power at the edge.
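The pattern behind those constraints can be sketched in a few lines: an edge gateway makes time-critical decisions locally and queues compact summaries for the cloud only when a connection is available. This is an illustrative sketch, not any vendor's implementation; the class, method names, and thresholds are all assumptions.

```python
from statistics import mean

class EdgeGateway:
    """Hypothetical edge gateway: decide locally, summarize for the cloud."""

    def __init__(self, alert_threshold, batch_size=5):
        self.alert_threshold = alert_threshold
        self.batch_size = batch_size
        self.buffer = []        # readings awaiting summarization
        self.upload_queue = []  # summaries waiting for a connection

    def ingest(self, reading):
        """Process one sensor reading at the edge."""
        if reading > self.alert_threshold:
            return "act_now"    # local decision, no cloud round trip
        self.buffer.append(reading)
        if len(self.buffer) >= self.batch_size:
            # Send a compact summary instead of the raw stream.
            self.upload_queue.append({
                "count": len(self.buffer),
                "mean": mean(self.buffer),
                "max": max(self.buffer),
            })
            self.buffer = []
        return "buffered"

    def flush(self, connected):
        """Upload queued summaries only when a link is available."""
        if not connected:
            return []           # keep summaries until connectivity returns
        sent, self.upload_queue = self.upload_queue, []
        return sent
```

The point of the sketch is the asymmetry: the urgent path never touches the network, while the bulk data is batched and deferred, which is what makes intermittent connectivity tolerable.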
AWS and Microsoft have begun to fill those gaps with IoT services that extend from the cloud to the edge. AWS’ addition of Greengrass, its stripped-down software for edge devices, was particularly striking: for the first time in more than a decade of operations, AWS made its compute capabilities available outside its own data centers. That shift in philosophy illustrates just how much potential AWS sees in this market, and also some of the limitations of a cloud-only approach.
With Greengrass and Azure IoT Edge, users now can streamline their IoT operations under one umbrella, and companies dabbling in IoT may find that attractive. Others may be drawn to the emerging collection of IoT vendors that process data as close to the source as possible.
Major cloud providers take a “cloud down” approach that uses existing big data technologies, but that emphasis doesn’t help if the business value of IoT requires decisions in a short timeframe, said Ramya Ravichandar, director of product management at FogHorn Systems. The startup provides industrial customers with machine learning at the edge, both in partnership and in competition with those cloud providers.
Ravichandar cited the example of a review of assembly line systems, where data is sent back to the cloud to run large-scale machine learning models to improve those systems, potentially across global regions.
“[The cloud is] where you want to leverage heavy duty training on large data stores, because building that model is always going to require bigger [compute power] than what is at the edge,” she said.
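The division of labor Ravichandar describes, heavy training where the large data stores live and lightweight decisions at the edge, can be illustrated with a deliberately tiny sketch. The "model" here is just a statistical threshold; the function names and data are assumptions for illustration, not FogHorn's or any cloud provider's API.

```python
from statistics import mean, stdev

def train_in_cloud(historical_readings):
    """Heavy step: derive an anomaly threshold from a large archive.

    In practice this is where big compute and big data stores matter;
    only the resulting compact model ships to the edge.
    """
    mu = mean(historical_readings)
    sigma = stdev(historical_readings)
    return mu + 3 * sigma  # the entire "model" sent to the edge

def infer_at_edge(reading, threshold):
    """Light step: a single comparison, no network round trip."""
    return reading > threshold

# Hypothetical archive of normal sensor readings (cloud side).
history = [20.0, 21.0, 19.5, 20.5, 20.0, 21.5]
threshold = train_in_cloud(history)
```

However the model is built, the design choice is the same: the artifact deployed to the edge is small and cheap to evaluate, while retraining on the full data store stays in the cloud.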
Users must decide whether there’s value in sending edge device data to the cloud, determine where to store and process the data, weigh latency requirements and risks, and from all that determine costs and how to spread them between the edge and the cloud, said Alfonso Velosa, a Gartner analyst.
“We’re still figuring out how that architecture is going to roll out,” Velosa said. “Many companies are investing in it, but we don’t know the final shape of it.”