
Where is the edge in edge computing?

The center of gravity for computing has expanded and contracted over the years.

When individual computers were expensive, users congregated around those scarce resources. Departmental minicomputer servers, and especially PCs, pushed computing out toward the periphery. Public cloud computing then started to pull compute inward again.

However, a variety of trends — especially IoT — are drawing many functions out to the periphery again.

The edge isn’t just the very edge

The concept of edge computing as we understand it today dates back to the 1990s, when Akamai started to provide content delivery networks at the network edge. We also see echoes of edge computing in client-server computing and other architectures that distribute computing power between a user's desk and a server room somewhere. Later, IoT sensors, IoT gateways and pervasive computing more broadly highlighted that not everything could be centralized.

One thing these historical antecedents have in common is that each was a targeted approach to a fairly specific problem. That pattern continues today to some extent. "Centralize where you can and distribute where you must" remains a good rule of thumb when thinking about distributed architectures, but today there are far more patterns and far more complexity.

Why must you sometimes distribute?

Centralized computing has a lot of advantages. Computers sit in a controlled environment, benefit from economies of scale and can be managed more easily. There's a reason the industry has generally moved from server closets to data centers. But you can't always centralize everything. Consider some of the things you need to think about when designing a computing architecture, such as bandwidth, latency and resiliency.

For bandwidth, moving bits around costs money, both in networking gear and in other ways. You might not want to stream movies from a central server to each user individually. This is the type of fan-out problem that Akamai originally solved. Alternatively, you may be collecting a lot of data at the edge that doesn't need to be stored permanently or that can be aggregated in some manner before sending it home. This is the fan-in problem.
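
To make the fan-in case concrete, here's a minimal sketch of an edge gateway, assuming Python and using purely hypothetical names (Reading, send_upstream): rather than forwarding every raw reading, the gateway reduces each batch to a compact summary before sending it home.

# A minimal sketch of edge fan-in. Reading and send_upstream are
# hypothetical names for illustration, not a real library's API.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Reading:
    sensor_id: str
    value: float

def aggregate(readings: list[Reading]) -> dict:
    """Reduce a batch of raw readings to one compact summary record."""
    values = [r.value for r in readings]
    return {"count": len(values), "min": min(values),
            "max": max(values), "mean": mean(values)}

def send_upstream(summary: dict) -> None:
    # Placeholder for the real uplink, e.g., MQTT or HTTPS to a central service.
    print("sending summary:", summary)

# 1,000 raw readings become a single record on the wire.
batch = [Reading("temp-1", 20.0 + i * 0.001) for i in range(1000)]
send_upstream(aggregate(batch))

The design choice is simple arithmetic: one summary record crosses the uplink instead of a thousand raw messages.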

Moving bits also takes time, creating latency. Process control loops or augmented reality applications may not be able to afford the delays associated with communication back to a central server. Even under ideal conditions, such communications are constrained by the speed of light and, in practice, take much longer.
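
Some rough arithmetic shows why. Light in optical fiber travels at roughly 200,000 km/s, or about 200 km per millisecond, so distance alone puts a hard floor under round-trip time. A quick Python sketch:

# Best-case round-trip time over fiber, ignoring routing and queuing.
# Light in fiber covers roughly 200 km per millisecond.
FIBER_KM_PER_MS = 200.0

def best_case_rtt_ms(distance_km: float) -> float:
    return 2 * distance_km / FIBER_KM_PER_MS

for km in (10, 100, 1500, 5000):
    print(f"{km:>5} km away -> at least {best_case_rtt_ms(km):.1f} ms round trip")

Real round trips are typically several times worse once routing, queuing and processing are added, so a control loop that needs single-digit-millisecond responses simply can't talk to a server 1,500 kilometers away.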

Furthermore, you can't depend on communication links always being available; perhaps cell reception is bad. It may be possible to add resiliency that limits how many people or devices a failure affects, or to continue providing service, even if degraded, when there's a network failure.
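
One common pattern for degraded-but-still-useful service is a local fallback: try the central service and, when the link is down, serve the last answer you saw. The sketch below assumes Python; fetch_central and the cache layout are hypothetical stand-ins.

# Degraded mode at the edge: fall back to a local cache when the uplink fails.
local_cache: dict[str, str] = {"config": "v42"}  # seeded during an earlier sync

def fetch_central(key: str) -> str:
    """Stand-in for a call to a central service; here the uplink is down."""
    raise ConnectionError("uplink unavailable")

def lookup(key: str) -> str:
    try:
        value = fetch_central(key)
        local_cache[key] = value  # refresh the cache while the link is healthy
        return value
    except ConnectionError:
        # Serve the last known value instead of failing outright.
        if key in local_cache:
            return local_cache[key] + " (stale)"
        raise

print(lookup("config"))  # prints "v42 (stale)"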

Edge computing can also raise issues like data sovereignty, where you want to control the proliferation of information outside a defined geographic area for security or regulatory reasons.

Why are we talking so much about edge computing today?

None of this is really new. Certainly there have been echoes of edge computing for as long as we’ve had a mix of large computers living in an air-conditioned room somewhere and smaller ones that weren’t.

Today we're seeing a particularly stark contrast. The overall trend for the last decade has been to centralize cloud services in relatively concentrated, scale-up data centers, driven by economies of scale, efficiency gains through resource sharing and the availability of widespread high-bandwidth connectivity to those sites.

Edge computing has emerged as a countertrend that decentralizes cloud services and distributes them to many small, scale-out sites close to end users or distributed devices. This countertrend is fueled by emerging use cases like IoT, augmented reality, virtual reality, robotics, machine learning and telco network functions, all of which are best served by placing service provisioning closer to users, for both technical and business reasons. Traditional enterprises are also starting to expand their use of distributed computing to support richer functions in their remote and branch offices, retail locations and manufacturing plants.

There are many edges

There is no single edge, but a continuum of edge tiers with different properties in terms of distance to users, number of sites, size of sites and ownership. The terminology used for these different edge locations varies both across and within industries. For example, the edge for an enterprise might be a retail store, a factory or a train. For an end user, it’s probably something they own or control like a house or a car.

Service providers have several edges. There's the edge at the device: some sort of standalone device, perhaps a sensor in an IoT context. There's the edge where the provider terminates the access link, which can often be viewed as a gateway. There can also be multiple aggregation tiers, which are all edges in their own right and may have significant computing power.

This is all to say that the edge has gotten quite complicated. It’s not just the small devices that the user or the physical world interacts with any longer. It’s not even those plus some gateways. It’s really a broader evolution of distributed computing.

