Offering a highly distributed, peer-to-peer architecture, object storage is a serious contender in the race to the edge.
Edge computing -- an IT model in which data is processed as close to its originating source as possible -- plays an increasing role in supporting distributed workloads. However, it requires storage systems that can accommodate large data sets and support high-performance applications.
As a result of these requirements, there's growing interest in edge object storage, which organizations often use alongside flash SSDs and NVMe.
IoT drives the need for edge computing
The cloud is valuable for enterprise workloads because it's highly scalable. It eliminates the capital expenditure and administrative overhead of on-premises infrastructure. However, its centralized nature can present some of the same challenges as the traditional data center, particularly when it comes to latency and data volumes.
The types of devices found at the edge -- such as smartphones, machinery sensors and IoT devices -- can generate enormous amounts of data in a short time. In some cases, that data requires immediate processing for organizations to respond quickly to the devices or perform actions, such as shutting down machinery if a sensor shows a problem. If all that data feeds into a centralized platform, such as the cloud, it can overwhelm networks and the platform's systems, which slows performance and increases latency.
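As a minimal sketch of that local decision-making, the snippet below shows an edge node evaluating a sensor reading on the spot so it can act immediately rather than waiting on a round trip to a central platform. The sensor names and the temperature threshold are illustrative assumptions, not details from any specific system.

```python
# Hedged sketch: an edge node checks each reading locally and decides
# whether to act (e.g., shut down machinery) without a cloud round trip.
# TEMP_SHUTDOWN_C is a hypothetical safety threshold.

TEMP_SHUTDOWN_C = 90.0


def evaluate_reading(sensor_id: str, temperature_c: float) -> str:
    """Return the action the edge system should take for one reading."""
    if temperature_c >= TEMP_SHUTDOWN_C:
        return f"shutdown:{sensor_id}"  # respond locally, immediately
    return "ok"


print(evaluate_reading("press-4", 95.2))  # -> shutdown:press-4
print(evaluate_reading("press-4", 61.0))  # -> ok
```

Keeping this check at the edge is what removes the network hop that would otherwise add latency to a time-critical response.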
Devices receive the responses they need much faster when organizations process data closer to those devices. This need to process and manage data near remote devices has given rise to edge computing.
In an edge computing model, organizations send only filtered or aggregated data to a centralized platform, and they can schedule those transmissions for off-peak times. The edge system controls both what data it sends and when it sends it, which reduces the load on the platform's systems and makes network traffic patterns easier to manage.
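The filter-and-aggregate step can be sketched as follows: the edge node keeps raw readings local, collapses them into a compact summary, and queues that summary for a scheduled transfer window. The field names and summary statistics are illustrative assumptions.

```python
# Hedged sketch: aggregate raw readings at the edge and queue only the
# summary for an off-peak upload. Raw data never leaves the edge node.
from statistics import mean


def aggregate(readings: list[float]) -> dict:
    """Collapse raw readings into a compact summary for the central platform."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(mean(readings), 2),
    }


outbox = []  # summaries held until the scheduled transmission window

raw = [20.1, 20.4, 35.9, 20.2]  # raw readings stay local
outbox.append(aggregate(raw))
print(outbox[0]["count"])  # -> 4
```

Shipping a four-field summary instead of every reading is the traffic reduction the article describes, and the `outbox` stands in for whatever batching mechanism defers the actual transfer.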
Edge computing will be essential to handle the influx of data from 5G and IoT devices, as well as autonomous cars, medical equipment, manufacturing systems, surveillance cameras and other devices. To work, however, edge computing requires storage systems that support data-intensive operations.
Object storage on the edge
Edge systems don't operate in isolation from centralized cloud or data center platforms. Instead, they extend those platforms to support the growing number of distributed devices and their data. Storage systems in edge environments must meet the demands of the local processing operations. They must also accommodate the data management requirements that go with decentralized infrastructure.
Some edge environments use file or block storage systems, depending on workloads and data volumes, but these systems have limitations and add complexity to distributed operations. For this reason, many organizations look to object storage to support edge scenarios.
Object storage provides a highly distributed and scalable architecture made up of self-contained units -- or objects -- that each include data, metadata and an identifier key. Because it can handle large volumes of unstructured data, organizations use object storage extensively in public cloud platforms.
Because of its architecture, object storage is well suited for distributed data systems at the edge. It provides a single global namespace that offers a unified management plane for accessing data. It conforms to standard technologies such as HTTP, REST and Amazon S3, which simplifies data access. Its rich metadata makes it easy to search and manage the data, as well as perform advanced and comprehensive analytics.
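The ideas above can be illustrated with a minimal in-memory sketch: self-contained objects bundling data, metadata and an identifier key, held in a flat namespace and searchable by metadata tags. The class and function names are assumptions for illustration; a real object store adds replication, erasure coding, authentication and an S3-compatible API on top.

```python
# Hedged sketch of the object model: each object is a self-contained unit
# (data + metadata + identifier key) in a flat namespace -- no directory
# hierarchy -- and rich metadata makes the store searchable.
from dataclasses import dataclass, field


@dataclass
class StorageObject:
    key: str                                      # unique identifier key
    data: bytes                                   # the payload itself
    metadata: dict = field(default_factory=dict)  # rich, user-defined tags


store: dict[str, StorageObject] = {}  # flat namespace: key -> object


def put(obj: StorageObject) -> None:
    store[obj.key] = obj


def find(**tags) -> list[str]:
    """Return keys of objects whose metadata matches all given tags."""
    return [k for k, o in store.items()
            if all(o.metadata.get(t) == v for t, v in tags.items())]


put(StorageObject("edge/press-4/0001", b"...", {"site": "plant-7"}))
put(StorageObject("edge/press-9/0002", b"...", {"site": "plant-2"}))
print(find(site="plant-7"))  # -> ['edge/press-4/0001']
```

Note that the keys look like paths only by convention; the namespace itself is flat, which is what lets the store scale without the bookkeeping of a hierarchical file system.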
Edge object storage provides almost unlimited scalability. It avoids the complexities that come with a hierarchical file system. It also facilitates efficient disaster recovery because IT can easily replicate objects.
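The replication point can be sketched in a few lines: because each object is self-contained, disaster recovery reduces to a key-for-key copy between sites. The dict-backed "sites" below are a stand-in assumption for real storage clusters.

```python
# Hedged sketch: replicate self-contained objects from an edge site to a
# central site. Dicts stand in for storage clusters; each entry is one object.

edge_site = {"obj-1": b"sensor batch", "obj-2": b"camera still"}
central_site: dict[str, bytes] = {}


def replicate(src: dict, dst: dict) -> int:
    """Copy any objects missing from dst; return how many were copied."""
    copied = 0
    for key, data in src.items():
        if key not in dst:
            dst[key] = data
            copied += 1
    return copied


print(replicate(edge_site, central_site))  # -> 2
print(replicate(edge_site, central_site))  # -> 0 (already in sync)
```

Because objects carry their own data and metadata, this copy needs no knowledge of directory trees or LUN layouts, which is why replication is simpler here than with file or block systems.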
Edge object storage addresses many of the limitations of SAN and NAS systems because it offers a peer-to-peer architecture that simplifies operations and increases flexibility. The same storage operations run in the centralized environment, as well as on edge systems, which provides a consistent and efficient storage infrastructure that spans networks and geographical locations. Object storage also accommodates fluctuating workloads as they evolve. It can handle cloud-native applications that incorporate modern technologies such as containerization and microservices.
Edge object storage and NVMe
Object storage must be able to meet the performance requirements of an edge computing environment.
In the past, object storage was known more for its distributed and scalable nature than for its performance. Metadata added overhead, data modifications could be cumbersome and inherent latencies affected read operations. But the advent of flash SSDs and NVMe changed the role that object storage plays in data centers and edge environments alike.
Several vendors now offer object storage systems that include SSDs, either in all-flash or hybrid configurations. These systems support workloads that require high IOPS and low latency, such as artificial intelligence, deep learning and big data analytics. Organizations also use them in edge environments to process and store the influx of data from distributed devices.
To further improve performance, some all-flash storage systems have added support for NVMe or, by extension, NVMe-oF. NVMe enables applications to take full advantage of the high performance and low latency inherent in flash SSDs. Unlike traditional storage access protocols such as SAS or SATA, NVMe was designed from the ground up to maximize flash performance and reduce latency. NVMe optimizes command submissions and supports parallel operations, which results in much faster data transfers than the older protocols can achieve.
Flash SSDs are ideal for edge computing because they increase storage efficiency. They reduce power usage and the infrastructure footprint. NVMe maximizes flash's inherent capabilities. Together, flash and NVMe make it possible to implement edge object storage.