Run Kubernetes at the edge with these K8s distributions

The idea of edge computing is not as far off as it once was. Evaluate several ways to bring Kubernetes to the edge, and learn when an organization should use each approach.

One of the most important questions IT organizations should ask about edge computing is whether it's an extension of cloud computing. The answer determines the role Kubernetes plays in an edge computing architecture -- so it's critical to understand.

Edge computing basics

The idea of edge computing is to process client data at the periphery of a distributed IT architecture's network, as close to the data's point of origin as possible. This reduces latency, limits exposure to network disruptions and offers other benefits associated with distributed storage and compute resources.

Edge computing devices act on specific missions, rather than as part of a resource pool. This means cloud technology -- which centers on resource pools -- isn't particularly useful at the edge, and deploying Kubernetes there could create more burdens than benefits.

Let's view these benefits and challenges through four Kubernetes variants with edge capabilities: standard, or base, Kubernetes; KubeEdge; K3s; and MicroK8s.

Standard Kubernetes at the edge

First, let's look at how native Kubernetes features apply to edge computing.

Because edge computing consists of small data centers -- rather than specialized servers -- at the edge, it makes sense to use standard Kubernetes technology at the edge as well as in the cloud. Several native Kubernetes features are also important in edge applications when configured properly.

For example, a ReplicaSet keeps a specified number of identical pods running, which gives admins an explicit backup resource and makes edge application failover fast. Pods in a ReplicaSet can mount hostPath volumes to make a database on an edge host available to all the pods that run on that host. Use Kubernetes affinities, taints and tolerations to steer edge pods onto suitable nodes and away from unsuitable ones -- these features prevent an edge pod from landing on a node on the other side of the world. The sketch below shows how the pieces fit together.
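
The following is a minimal sketch, not a drop-in manifest: the node label (node-role.example.com/edge), the taint (edge-only) and the image name are hypothetical placeholders for whatever your own nodes actually carry.

# Edge-pinned Deployment sketch; label, taint and image names are hypothetical
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-app
spec:
  replicas: 2                          # the ReplicaSet keeps two pods running for fast failover
  selector:
    matchLabels:
      app: edge-app
  template:
    metadata:
      labels:
        app: edge-app
    spec:
      affinity:
        nodeAffinity:                  # require nodes that carry the edge label
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.example.com/edge
                operator: Exists
      tolerations:                     # permit scheduling onto nodes tainted edge-only=true:NoSchedule
      - key: edge-only
        operator: Equal
        value: "true"
        effect: NoSchedule
      containers:
      - name: app
        image: registry.example.com/edge-app:1.0
        volumeMounts:
        - name: local-data
          mountPath: /data
      volumes:
      - name: local-data
        hostPath:                      # expose a database directory on the edge host to the pod
          path: /var/lib/edge-db
          type: DirectoryOrCreate

For the toleration to matter, the edge nodes must carry the matching taint -- applied, for example, with kubectl taint nodes <node-name> edge-only=true:NoSchedule -- which keeps untolerated pods off the edge hardware.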

KubeEdge

For explicit separation of edge and cloud -- and, simultaneously, an overarching Kubernetes deployment -- KubeEdge is likely a good solution. KubeEdge runs a cloud-side component alongside the main Kubernetes control plane and uses an edge controller to link edge nodes to the main deployment. The result is similar to a standard Kubernetes deployment that spans both edge and core, but the edge portion is easier to administer: it requires less specific rule-building to direct edge pods to edge nodes properly and to establish backup paths. KubeEdge also includes a lightweight, edge-centric service mesh to access edge elements.
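
Because KubeEdge registers edge hosts as ordinary cluster nodes, targeting them takes little rule-building. The sketch below assumes the node-role.kubernetes.io/edge label that KubeEdge's keadm tooling conventionally applies to edge nodes -- verify the label with kubectl get nodes --show-labels before relying on it -- plus a hypothetical image name.

# Pod pinned to KubeEdge edge nodes; verify the label on your cluster first
apiVersion: v1
kind: Pod
metadata:
  name: edge-sensor-reader
spec:
  nodeSelector:
    node-role.kubernetes.io/edge: ""   # schedule only onto KubeEdge edge nodes
  containers:
  - name: reader
    image: registry.example.com/sensor-reader:1.0   # hypothetical image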

K3s

Another package that can be important to Kubernetes at the edge is K3s, a small-footprint Kubernetes distribution, originally developed by Rancher, that's tailored for edge missions with limited resources. K3s' footprint can be half the size of the standard Kubernetes distro, or even less, and it's fully CNCF-certified, so the same YAML configuration files drive both. K3s creates a separate edge cluster, which provides further isolation between the edge and cloud. This setup benefits scenarios wherein edge pods can't run outside the edge -- for resource or latency reasons, for example. However, K3s has non-redundant elements -- by default, it stores cluster state in an embedded SQLite database rather than a replicated datastore -- that can pose availability risks. And a separate K3s edge cluster is harder to coordinate when admins need to assign the same pods to both edge and cloud nodes.
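
As a sketch of how little edge-specific setup K3s asks for, the server reads its options from /etc/rancher/k3s/config.yaml, where keys mirror the k3s server command-line flags. The label value below is a hypothetical placeholder:

# /etc/rancher/k3s/config.yaml -- minimal edge-server sketch
write-kubeconfig-mode: "0644"          # make the kubeconfig readable without sudo
disable:
  - traefik                            # drop the bundled ingress controller to save resources
node-label:
  - "node-role.example.com/edge=true"  # hypothetical label for edge scheduling rules

Disabling the bundled Traefik ingress controller is one common way to reclaim memory on constrained edge hardware.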

MicroK8s

Some users consider MicroK8s an intermediary between edge clusters and a full, standard Kubernetes deployment. MicroK8s has a small enough footprint to run in environments with limited resources, but it can also orchestrate full-blown cloud resource pools. Thus, MicroK8s is arguably the most edge-agile of the edge Kubernetes options -- and it achieves this agility without complex installation or operation. However, it doesn't support every possible Kubernetes feature: IT organizations with a Kubernetes deployment in place must rework some feature use to match what MicroK8s offers.

Making a decision

The biggest question to ask about running Kubernetes at the edge is whether your IT organization's edge resources are comparable to its cloud resources. If they are, a standard Kubernetes deployment -- with node affinities and related pod-assignment parameters set to steer edge pods to edge nodes -- is the more effective setup. If the edge and cloud environments are symbiotic rather than unified, consider KubeEdge; it should be the default option for most edge users.

The more dissimilar the edge and cloud environments or requirements are, the more logical it is to keep the two separated -- particularly if edge resources are too limited to run standard Kubernetes. If you want common orchestration of both edge and cloud workloads so the cloud can back up the edge, for example, use MicroK8s or a similar distribution. If latency or resource specialization at the edge eliminates the need for that cohesion, K3s is a strong choice.

Just don't assume one kind of Kubernetes distribution fits your IT organization's whole mission.
