Tip

Demystify the cloud and edge computing relationship

Edge computing remains primarily on-prem, but evolving technologies like 5G might enable some workloads to migrate to shared hosting models, creating a hybrid computing ecosystem.

The tech space continues to fill up with buzzwords and loosely defined terms, so it's not surprising that enterprises have questions about the relationship between cloud and edge computing.

Today, more edge computing messages are processed per week than traditional transactions are processed in a year. A single mass retailer generates one and a half billion edge messages per day, for example. These edge messages might include customer interaction data, inventory updates, security camera feeds and POS transactions. However, virtually all edge computing is done on company premises, by company-owned systems and applications. Given that, we can say that today's edge computing is where traditional transaction processing was 20 years ago: on premises.

In this tip, gain a better understanding of the relationship between cloud and edge computing. Will edge computing develop a cloud-like component, just as data center computing has?

Cloud vs. edge

Before we dive in, we need to review the basic definitions of cloud and edge computing.

What is cloud computing?

Cloud computing is computing as a service, meaning it's built as a pool of shared resources offered to buyers on various terms. It is all aimed at replacing traditional in-house transaction processing and computing for missions where a traditional approach isn't economical. In theory, almost anything that can be run in a data center can run in the cloud, though not all applications meet the business case for cloud use.

What is edge computing?

Edge computing moves processing close to the source of the messages or events that edge applications handle, which reduces latency. It also shortens the path each message travels, which makes handling more reliable.

Generally, edge computing is used in process control applications, where a message generated by a real-time IoT system must be analyzed and turned into a command issued to the process. This message/command exchange is called a control loop -- if the process moves rapidly, that loop must be quick, which means the applications and network connections must have low latency.
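The message/command exchange described above can be sketched in a few lines. This is a minimal illustration, not a real process-control system: the event fields, handler logic, and 100 ms budget are all assumptions chosen for the example.

```python
# Minimal sketch of an edge control loop: an event arrives from a sensor,
# the handler turns it into a command, and the elapsed time is checked
# against a latency budget. All names and numbers are illustrative.
import time

LOOP_BUDGET_S = 0.100  # e.g., a 100 ms control-loop latency goal

def handle_event(event):
    """Turn a process event into a command (stand-in for real logic)."""
    return {"actuator": event["source"], "action": "adjust"}

def control_loop(event):
    start = time.perf_counter()
    command = handle_event(event)
    elapsed = time.perf_counter() - start
    return command, elapsed, elapsed <= LOOP_BUDGET_S

cmd, elapsed, within_budget = control_loop({"source": "valve-7", "reading": 3.2})
print(cmd, within_budget)
```

In a real deployment, the elapsed time would also include network transit to and from wherever `handle_event` is hosted, which is exactly where the cloud-versus-edge placement decision bites.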

Figure: Cloud computing is centralized and provides high processing and compute power, whereas edge computing is decentralized and provides low latency.

Milliseconds matter

Almost all edge applications have several control loops, reflecting pieces of a real-time process that all fit together. For example, a car manufacturer might have the following loops:

  • Loop 1. Order and receive car parts.
  • Loop 2. Bring the parts to the assembly line.
  • Loop 3. Put the parts on a car.

The first control loop is almost identical to a traditional order flow -- used to analyze supply chain fulfillment -- so current cloud technology can handle it. But can cloud technology handle the second and third loops, which require faster processes? It depends on whether the manufacturer can host a cloud-like resource pool close enough to the actual industrial process to meet latency goals.

These latency goals start with the movements required in the real-world elements of the assembly process. If a machine must command the insertion of a bolt into a hole as it passes by, it operates within a time limit set by the speed of the hole's movement and the speed of the bolt-insertion mechanism. The machine also needs to pick up a bolt for the next hole. A set of positioning events signals the timing of these steps, all of which must be turned into commands within the constraints of the physical system.

Generally, the faster the physical process, the shorter the interval available for processing -- that is, the shorter the control loop. Similarly, the more steps a process has, the more events must be handled within a given physical interval, which also shortens the control loop.
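The bolt-insertion reasoning above reduces to simple arithmetic. The line speed, hole spacing, and number of events per hole below are hypothetical values chosen to make the math concrete:

```python
# Back-of-the-envelope deadline for the bolt-insertion example.
# All three inputs are assumptions, not figures from the article.
line_speed_m_per_s = 0.5   # assembly line speed
hole_spacing_m = 0.15      # distance between successive bolt holes
events_per_hole = 3        # e.g., position, pick-up, and insert signals

interval_s = hole_spacing_m / line_speed_m_per_s      # time per hole: 0.3 s
deadline_per_event_s = interval_s / events_per_hole   # 0.1 s per event

print(f"{deadline_per_event_s * 1000:.0f} ms per event")  # 100 ms per event
```

Double the line speed or the event count and the per-event deadline halves, which is why faster processes with more steps push hosting toward the premises.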

Latency economics

Today, processes controlled by edge computers on-premises rarely tolerate control-loop latencies greater than several hundred milliseconds and often require less than 100 milliseconds of latency. This level of latency is challenging for cloud applications because of the following factors:

  • The distance between the process and the cloud host.
  • The electronic network delay in moving between the points.
  • The delay in scheduling the shared resources of the cloud.

Cloud hosting economics depend on economies of scale that are sensitive to application behavior. One user's peak must fit into other users' valleys of usage; if every user demanded constant processing, there would be little or no economy of scale in hosting. The more stringent the requirement to schedule an edge resource quickly enough to meet latency goals, the harder it is to avoid dedicating that resource rather than sharing it.

The most economically efficient hosting pools would draw users from a wide area to achieve optimum economies of scale. Better economics means a greater pool-to-user distance and greater latency. Generally, enterprises report that latency increases at the rate of 1 millisecond per 60 miles of distance, to which access network latency must be added.
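The 1 millisecond per 60 miles rule of thumb makes it easy to budget a control loop. In this sketch, the pool distance, access-network latency, and processing time are assumed values, and the helper function is hypothetical:

```python
# Apply the article's rule of thumb: ~1 ms of one-way latency per 60 miles,
# plus access-network latency, for a full event-to-command round trip.
def control_loop_latency_ms(distance_miles, access_ms, processing_ms):
    """Round trip: propagation out and back, access delay each way, plus processing."""
    propagation_ms = 2 * (distance_miles / 60.0)  # 1 ms per 60 miles, each direction
    return propagation_ms + 2 * access_ms + processing_ms

# A hypothetical pool 300 miles away, 10 ms access latency, 20 ms processing:
latency = control_loop_latency_ms(300, 10, 20)
print(latency)  # 50.0 -- already half of a 100 ms control-loop budget
```

At these assumed numbers, the distance and access network alone consume 30 ms, which shows why tight control loops pull hosting pools closer to the process.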

One option for lower latency is 5G connectivity, which can make the difference between a connection that meets application control-loop latency requirements and one that fails them. For example, applications that serve mobile process elements, such as transportation, or a public utility that can't use wireline connectivity for its process elements, could benefit from 5G. This is why such applications are the single largest driver of private 5G in today's market.

It is unlikely that cloud-like edge computing based on resource pools will completely displace short-control-loop premises edge hosting. This shouldn't be a surprise because cloud computing cannot economically replace all data center computing.

However, it is likely that a combination of rethinking the architecture of current edge applications, growth in the use of cloud computing and improvements in network and virtualization technologies will result in the migration of some of the deeper control-loop components of edge applications to shared hosting in the future.

Tom Nolle is founder and principal analyst at Andover Intel, a consulting and analysis firm that looks at evolving technologies and applications first from the perspective of the buyer and the buyer's needs. By background, Nolle is a programmer, software architect, and manager of software and network products. He has provided consulting services and technology analysis for decades.
