
Rancher Longhorn hones Kubernetes storage for edge computing

Kubernetes storage products such as Rancher Longhorn target edge computing environments, as adoption of the trend creates a fresh opportunity for container infrastructure.

Updates to SUSE Rancher's Longhorn Kubernetes storage project this week target growing edge computing environments, where IT pros have a chance to begin anew with cloud-native tech.

Version 1.1 of Rancher Longhorn adds new features to support Kubernetes storage in resource-constrained edge environments, such as support for ARM64 low-power chips and built-in local storage replicas that container workloads can fall back on if they lose access to edge networks.
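To illustrate the local-replica idea, here is a minimal sketch of a Longhorn-backed StorageClass created with the official Kubernetes Python client. The provisioner name driver.longhorn.io and the numberOfReplicas, dataLocality and staleReplicaTimeout parameters follow Longhorn's public documentation, but the class name and values here are assumptions for this example, not details supplied by Rancher.

```python
# Sketch: a StorageClass that asks Longhorn to keep a replica on the local
# node ("dataLocality": "best-effort"), so an edge workload can keep reading
# its data if the site loses network connectivity. Names and values are
# illustrative; parameter support may vary by Longhorn version.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

storage_class = client.V1StorageClass(
    api_version="storage.k8s.io/v1",
    kind="StorageClass",
    metadata=client.V1ObjectMeta(name="longhorn-edge"),
    provisioner="driver.longhorn.io",
    parameters={
        "numberOfReplicas": "2",        # keep two copies across edge nodes
        "dataLocality": "best-effort",  # prefer a replica on the consuming node
        "staleReplicaTimeout": "30",    # minutes before a failed replica is cleaned up
    },
    reclaim_policy="Delete",
    allow_volume_expansion=True,
)

client.StorageV1Api().create_storage_class(storage_class)
```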

Rancher Longhorn also offers developer-friendly, automated Kubernetes storage installation workflows. This may further boost its appeal for edge computing, because adoption there is spearheaded by new machine learning, big data and IoT apps created by Agile and DevOps teams.
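For a sense of what that automation can look like, the following is a hedged sketch of a scripted Longhorn install, assuming Longhorn's public Helm chart repository at https://charts.longhorn.io; the chart and namespace names are the commonly documented defaults, not steps prescribed by Rancher in this article.

```python
# Sketch: drive a Helm-based Longhorn install from a Python automation script.
# Assumes helm and a working kubeconfig are available on the machine running it.
import subprocess


def run(cmd):
    """Run one CLI step and raise if it fails, so pipelines stop on errors."""
    subprocess.run(cmd, check=True)


run(["helm", "repo", "add", "longhorn", "https://charts.longhorn.io"])
run(["helm", "repo", "update"])
run([
    "helm", "install", "longhorn", "longhorn/longhorn",
    "--namespace", "longhorn-system", "--create-namespace",
])
```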

"Developers waste a ton of time figuring out storage solutions for containers at the edge, even for simple use cases," said Torsten Volk, managing research director at Enterprise Management Associates. "Longhorn 1.1 ... is tailored to reliably running Kubernetes applications at edge locations without forcing developers into writing a bunch of device-specific code."

IT industry watchers expect edge computing to grow in 2021, driven by 5G wireless networks, the increasing popularity of internet of things (IoT) apps and distributed enterprise IT infrastructures in fields such as data analytics, retail, manufacturing, automotive, energy and telecommunications.


As enterprise edge computing accelerates, it presents a fresh opportunity for Kubernetes vendors such as Rancher. Analysts said Longhorn, bundled with Rancher's lightweight k3s distribution, might fit best as a kind of container-based hyper-converged infrastructure in small edge environments that must be managed remotely and lack a legacy external storage system.

"Edge computing environments are often resource-constrained, which makes Kubernetes and containers a good fit," said Gary Chen, research director of software-defined compute at IDC. "If Kubernetes storage comes with it, that makes life easier for people who probably won't want to put a full enterprise storage system there."

Rancher Longhorn version 1.1 updates aimed at edge computing environments, such as ARM64 support, may also benefit data center users. Low-power ARM64 chips are gaining popularity in cloud environments for their cost advantages. AWS and Docker have a partnership to support ARM64 in the cloud, and one Rancher Kubernetes user who runs on Google Cloud hopes more cloud providers will soon follow suit.

"Data centers, even in the cloud, cost a lot of money -- if you can increase your available compute and reduce its cost, that is a definite win, but it's not doable if the software infrastructure you're relying on doesn't support ARM," said Thomas Ornell, senior IT infrastructure engineer at ABAX, a telematics company in Norway and a user of Rancher and Longhorn. "This edge computing, at least from my perspective, is pretty useful, even when it's not [used at the] edge."

Edge computing renews Kubernetes storage prospects


Longhorn started within Rancher in 2016, and the company donated it to the Cloud Native Computing Foundation (CNCF) in October 2019. Longhorn reached a stable version 1.0 in June 2020. As Longhorn slowly evolved, Kubernetes storage options for cloud and data center deployments also proliferated from startups and established vendors, including Pure Storage's Portworx, Robin.io, Red Hat, MayaData and many others.

That fact isn't lost on Sheng Liang, Rancher's co-founder and now SUSE's president of engineering and innovation.


"There are people who use Longhorn in the cloud, but the fundamental ability to provide persistent storage [for Kubernetes] is solved," he said. "Even in corporate data centers, people are putting together their own machines, where Longhorn is more of a hyper-converged approach."

Rancher focused on edge computing relatively early: it was the first Kubernetes distributor to create a trimmed-down version of the container orchestrator for that market, k3s, which launched in February 2019. But SUSE and Rancher will be far from alone in offering Kubernetes storage tools for edge computing in 2021.

"All major Kubernetes vendors are aware of the importance of including consistent data management for cloud and edge use cases with their Kubernetes platforms," Volk said.

Other large vendors, such as IBM's Red Hat, offer built-in Kubernetes storage for OpenShift with an integrated approach similar to Rancher's Longhorn. In recent releases, Red Hat has also begun to flesh out OpenShift's edge computing support.


Longhorn isn't strictly tied to Rancher Kubernetes -- it can run on its own, and Rancher's management framework supports most third-party Kubernetes distros. But some analysts said Longhorn offers the most value as part of the overall Rancher stack.

"Rancher is trying to create a platform solution for edge with [a] container runtime, [a] thin OS, lightweight orchestration in k3s and an integrated data persistence layer with it," said Arun Chandrasekaran, VP analyst at Gartner. "Hence, I see Longhorn primarily as a platform play by Rancher than as a standalone storage product for edge."

Rancher Longhorn version 1.1 also introduces a sought-after feature for data center users called ReadWriteMany, which allows multiple containers to share storage volumes within a Kubernetes cluster. In the past, directly attached storage volumes within the Kubernetes cluster could be used by only one container at a time. Multiple containers could share NFS storage outside the cluster, but this introduced management complexity, Ornell said; running an NFS server inside the cluster risked circular failures because of interdependencies.

"Any time traffic leaves the Kubernetes network there's a risk of something going wrong," Ornell said. "Overall, this decreases that risk and means we can use the same backup routines [for Kubernetes storage]."
