Who doesn’t want their car to do some of the driving for them? And wouldn’t most people like to be sure their credit card isn’t used for fraudulent purchases? Activities such as these make people’s lives easier and better, but they are also contributing to the explosive growth of IoT data being generated, processed and stored at the farthest reaches of computer networks. But with many edge data centers limited in space, power and cooling, is storage becoming a problem?
By some estimates, edge computing is growing at more than 37% each year, with much of that growth coming from 5G technologies. Overall, the edge computing market is forecast to reach $43.4 billion by 2027. According to Seagate, IoT edge storage is anticipated to grow over the same period, driven by the proliferation of latency-sensitive services and applications.
The need for speed
Success at the edge begins with real-time decision making and speed; not simply for convenience, but for mission-critical activities and safety. For example, with autonomous vehicles there is no time to move vehicle data to a centralized location for processing, then bring the result back to the edge before forwarding it to the vehicle. Rather, decisions must be made close to where their results play out, over a high-bandwidth connection.
The same is true for content streaming services. For example, Netflix simply couldn’t provide its services if it only operated from a central location; no one location has that much bandwidth. Instead, Netflix caches files in edge locations, which act as content delivery networks. By putting the content at the edge, closer to the user, the service gains much more aggregate bandwidth.
AI and machine learning
Now, consider computational storage. Companies conducting AI decision making with edge devices typically build many of those decisions using prior information. For example, creating machine learning (ML) models for autonomous vehicles and traffic signal coordination may take place with massive data sets in centralized data centers.
The resulting models will reside at the edge, where they are ready for faster decision making, which is referred to as inference. Inference often happens at the edge because it uses a comparatively small model that simply applies what was learned elsewhere through ML. Because inference takes place at the edge, it allows for very low latency between the edge and 5G network-connected devices.
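The split described above can be sketched in a few lines: a model is trained centrally, and only its parameters are shipped to the edge node, where inference is a cheap local computation. The weights, features and decision below are hypothetical placeholders for illustration, not any vendor's actual model.

```python
# Hypothetical weights for a tiny traffic-signal model, assumed to have been
# trained centrally on a large data set (the ML step) and shipped to this
# edge node. The numbers are illustrative placeholders only.
WEIGHTS = [0.8, -0.5, 1.2]
BIAS = -0.3

def infer(features):
    """Inference at the edge: apply the pre-trained model to fresh sensor
    readings locally, avoiding a round trip to a central data center."""
    score = sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS
    return score > 0.0  # e.g., True -> extend the green phase

# Fresh readings from a 5G-connected sensor: queue length, wait time,
# average speed (all scaled). The decision happens next to the device.
reading = [1.0, 0.5, 0.2]
print(infer(reading))  # True
```

The heavy lifting (training) happened elsewhere; the edge node only evaluates a small function, which is why inference can meet tight latency budgets on modest hardware.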
5G devices may send in small amounts of data frequently. Though each transmission requires little bandwidth, it needs low latency so the data can be acted upon quickly. One of the promises of 5G is lower latency, so more devices can be connected to the network without losing performance.
Storage at the edge
While processing at the edge is critical, storage is also essential. However, edge storage presents some interesting problems. Edge data centers are often ad hoc and limited in power, cooling and physical space. Often these distributed data processing and IoT storage locations have no dedicated staff. With these constraints, hardware selection may be limited simply because the facility lacks physical space.
Specialized hardware storage configurations may be discouraged in favor of purchasing very fast and robust equipment that serves multiple purposes and works with many different applications. These storage systems must be easy and flexible to implement, and preferably work on standard hardware and standard Ethernet networks.
For example, there are easy-to-deploy NVMe over TCP (NVMe/TCP) software-defined block storage solutions that work with standard server hardware and network interface cards. They also provide high performance at a low total cost of ownership (TCO).
Solutions like this sit at the base of the network, improving block storage by making it faster as well as lowering latencies, improving bandwidth and increasing availability for local or distributed file systems. These solutions may also allow for storage and compute to scale independently — known as disaggregation — while delivering the performance of NVMe without having to deploy NVMe or other flash directly into the storage nodes.
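Disaggregation means the two resource pools are sized on their own curves: adding compute nodes does not force you to buy more storage, and vice versa. The back-of-the-envelope planner below illustrates the idea; every per-node figure in it is a hypothetical example, not a benchmark of any real product.

```python
# Illustrative capacity planning for a disaggregated edge rack: storage and
# compute nodes are counted independently. All per-node figures below are
# hypothetical examples for illustration only.
def nodes_needed(required, per_node):
    # Ceiling division: you can't deploy a fraction of a node.
    return -(-required // per_node)

STORAGE_TB_PER_NODE = 100   # assumed usable flash per storage node
IOPS_PER_NODE = 500_000     # assumed sustained IOPS per storage node
CORES_PER_NODE = 64         # assumed cores per compute node

def plan(capacity_tb, iops, cores):
    # Storage nodes must satisfy both capacity and IOPS demand.
    storage = max(nodes_needed(capacity_tb, STORAGE_TB_PER_NODE),
                  nodes_needed(iops, IOPS_PER_NODE))
    compute = nodes_needed(cores, CORES_PER_NODE)
    return storage, compute

# Doubling compute demand adds compute nodes without touching storage.
print(plan(300, 1_200_000, 256))  # (3, 4)
print(plan(300, 1_200_000, 512))  # (3, 8)
```

In a converged design, the second call would have forced extra storage along with the extra compute; disaggregation avoids that stranded capacity, which is where much of the TCO benefit comes from.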
High IOPS and lowered TCO
The ability to have hundreds of thousands or even millions of IOPS per node — the figure varies between reads and writes — lowers TCO by providing exceptionally high flash utilization. Some solutions also improve flash endurance. Solutions such as these fit into different environments and may also work with open source solutions.
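One way to see why utilization drives TCO is to compare candidates on cost per delivered IOPS rather than raw IOPS. The calculation below is a simple sketch; the prices, IOPS ratings and utilization figures are hypothetical placeholders, not real product data.

```python
# Comparing candidate storage nodes on TCO: cost per delivered IOPS.
# All prices and performance figures below are hypothetical placeholders.
def cost_per_million_iops(node_cost_usd, iops_per_node, utilization):
    """Effective cost per one million delivered IOPS, after discounting
    the node's raw IOPS by the fraction of flash actually usable."""
    delivered = iops_per_node * utilization
    return node_cost_usd / (delivered / 1_000_000)

# Same hardware price and raw IOPS; only the achievable utilization differs.
print(round(cost_per_million_iops(20_000, 1_000_000, 0.5), 2))  # 40000.0
print(round(cost_per_million_iops(20_000, 1_000_000, 0.9), 2))  # 22222.22
```

With identical hardware, raising flash utilization from 50% to 90% nearly halves the effective cost per delivered IOPS, which is the TCO effect the paragraph above describes.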
When shopping for the right fit for an IoT edge operation, consider whether volumes can be expanded; whether the solution offers consistent response times, which are critical at the edge; and whether it can provide high aggregate writes and strong read performance that allows for high capacity and density.
Other considerations include whether the solution works on proprietary or off-the-shelf hardware and whether it can work with drives of any size. This matters because larger drives consume less physical space in a potentially cramped edge data center location.
IoT storage on the edge is growing exponentially, so it is important companies prepare for and meet all its challenges.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.