To evaluate edge tools and applications appropriately, IT professionals must examine what distinguishes edge computing from other types of computing. Edge computing places processing close to the point of activity. This flies in the face of the modern trend to consolidate computing power in the data center or the cloud to achieve economies of scale.
The justification for abandoning the standard consolidation principle is latency. Edge computing targets real-time missions, in which the application must be tightly coupled to real-world events and actions -- that's why it has to be close.
The most critical term in edge applications is control loop -- the workflow from the point where a real-time event signals the need for an action, through the determination of the proper action to take, and back to the control elements that perform the necessary steps. This cycle synchronizes with a process, such as an assembly line or warehousing, in which any delay between event and control reaction is unacceptable. The control loop must be short and introduce as little latency as possible. Edge placement accomplishes part of that task; the rest comes from network optimization and from the combination of application design and tool selection.
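As a rough illustration, one pass of such a control loop and its latency budget can be sketched in a few lines. All of the names and the 10 ms budget below are hypothetical, not drawn from any particular edge framework:

```python
import time

# Hypothetical end-to-end budget for one control-loop pass (illustrative).
LATENCY_BUDGET_S = 0.010  # 10 ms

def read_sensor():
    # Stand-in for a real sensor read at the point of activity
    return 42.0

def decide(reading):
    # Trivial threshold decision as a placeholder for real control logic
    return "open_valve" if reading > 40.0 else "hold"

def actuate(action):
    # Stand-in for driving the control element
    pass

def control_loop_pass():
    """Run one event -> decision -> action cycle and check the budget."""
    start = time.monotonic()
    actuate(decide(read_sensor()))
    elapsed = time.monotonic() - start
    if elapsed > LATENCY_BUDGET_S:
        # A real system would alarm or fail over here
        print(f"latency budget exceeded: {elapsed * 1000:.2f} ms")
    return elapsed
```

The point of the sketch is the structure: every step between the event and the actuation sits inside the measured window, so anything added to that path -- a network hop, a middleware layer -- consumes the budget.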
Balancing latency with urgency
Traditional applications -- transactional applications -- are almost always constrained by human reaction time or database activity in terms of performance and response time. Edge computing's real-time applications have no such constraints. The performance of network connections and edge applications determines response time; everything that introduces latency must be optimized, or the application's mission is at risk.
This is a major shift in the way developers think about applications. Breaking large, monolithic applications into dozens of microservices, using a service mesh for message exchange and applying similar modern design strategies could prove fatal in a real-time application: every network hop between components adds latency to the control loop.
The application practices and tools appropriate to edge computing reflect the fact that the edge is likely to be positioned in a rack, or a room, close to the processes being controlled, rather than in a data center or the cloud. Edge platforms are not designed for general-purpose computing, so traditional OSes and middleware are not optimal. Embedded-control or real-time OSes and process-specific middleware form the basis of the common edge system. The hardware is usually specialized, so the edge is unlike either the data center or the cloud.
Holdups on edge adoption
The need for point-of-activity hosting makes it likely that a given edge location cannot be backed up by any resource located elsewhere without introducing more latency than the activity can accept. That fact alone significantly reduces the benefit of virtualization in any form. In place of higher availability through rapid redeployment after a failure -- or scaling when load changes significantly -- the edge system must rely on redundancy to improve availability and be engineered for peak loads rather than for scalability. That has a major influence on the tools and platforms used.
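The redundancy trade-off can be quantified with a back-of-the-envelope availability calculation; the figures below are purely illustrative:

```python
# With single-unit availability a, n independent redundant units fail
# together only when all n fail, giving combined availability
# 1 - (1 - a)**n.
def combined_availability(a: float, n: int) -> float:
    return 1 - (1 - a) ** n

# For example, two independent 99% units together approach "four nines":
two_units = combined_availability(0.99, 2)  # ~0.9999
```

This is why a standalone edge site buys availability with local duplicate hardware rather than with the elastic redeployment a cloud resource pool would provide.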
Containers, which facilitate both software portability and easy scaling and redeployment, are less valuable at the edge and can generate unnecessary -- or prohibitive -- latency. Orchestration tools like Kubernetes are also redundant where there are neither containers to deploy nor pools of resources on which to deploy them.
Are more applications moving to the edge?
The edge is, in a process sense, an island. Expect to design programs differently for edge operation, but traditional programming languages work if the platform software supports them.
Edge applications monitor the control loop and enforce a latency budget, but they also require maintenance and management in parallel with cloud and data center applications. Container strategies might not be applicable -- let alone helpful -- where there are no containers or resource pools. Monitoring tools specific to edge OSes are better at managing latency in edge control loops, but tracking the latency budget across a complete transaction can require integrating data from multiple sources.
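Stitching together that end-to-end view might look like the following sketch, which merges per-hop timing records collected by separate monitoring sources. The record shapes and field names are hypothetical:

```python
# Hypothetical timing records from separate monitoring sources
# (sensor gateway, network probe, application log), in milliseconds.
def stitch_latency(hops):
    """Order hops by start time and sum per-hop latencies (ms)."""
    ordered = sorted(hops, key=lambda h: h["start_ms"])
    return sum(h["end_ms"] - h["start_ms"] for h in ordered)

records = [
    {"source": "app",     "start_ms": 3.0, "end_ms": 6.5},
    {"source": "sensor",  "start_ms": 0.0, "end_ms": 1.2},
    {"source": "network", "start_ms": 1.2, "end_ms": 3.0},
]

total_ms = stitch_latency(records)  # 6.5 ms end to end
over_budget = total_ms > 5.0       # compare against an assumed 5 ms budget
```

The integration step -- normalizing and ordering records that arrive from different tools -- is the part that edge-specific monitoring tools rarely do for you.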
Where enterprises want orchestration and automation across their entire IoT infrastructure flow, from edge through cloud to data center, there are versions of Kubernetes designed to support bare-metal clusters. These combine with multicluster management tools, such as Anthos, or a service mesh like Istio, to unify operations. Alternatively, enterprises could use a non-container-centric DevOps tool, like Ansible, Chef or Puppet. However, because edge applications are likely to be less dynamic than cloud applications, it might be easier to manage them separately with the available OS tools.
The edge isn't taking over
A wholesale shift to edge computing is far from certain. While web GUI processes are increasingly latency-sensitive, they still support human reaction time and are so tightly linked to the internet and public cloud services that it's unlikely most will move to the edge.
IoT applications are the primary driver of edge computing. The industrial, manufacturing and transportation applications that would justify the edge most easily are a small part of most enterprises' application inventory. The edge is different and likely to be challenging in terms of tools and practices, but it's not so large, or growing so fast, as to disrupt IT operations. Think of edge computing as an extension of real-time activity rather than as another place to host applications.