Edge computing is defined as a distributed computing paradigm that brings computation and data storage closer to where IoT devices are deployed, which improves response times and saves bandwidth. However, this definition is too technical to convey the true power of edge computing.
Instead, edge computing can be redefined as a computing paradigm that brings actionable intelligence and insights as close as possible to the locations where IoT is deployed, maximizing the anticipated value proposition of such deployments. The reason for this redefinition is evident: The value-generation power of edge computing matters most.
The value drivers
It is critical to understand the basis for a redefinition that emphasizes value proposition. There are several drivers behind any IoT product or service. They can be broadly categorized as business, technology, device and data drivers, as shown in Figure 1. Ultimately, the collective goal of these drivers is to enable swift, intelligent and actionable decisions.
Business drivers -- also known as functional or operational drivers -- are pivotal to any IoT implementation. Business drivers aim to use IoT-driven intelligent decisions to reap benefits such as productivity gains, increased revenues, improved customer satisfaction through innovative products and services and, more critically, lives saved. Technology drivers complement the business drivers.
As new technologies emerge that make previously unexplored products and services possible, new business drivers are realized. Technologies such as low-power wireless, high-speed processors and network virtualization simplify previously complex IoT solutions and thereby remove barriers to IoT adoption. Technologies also accelerate the advancement of devices capable of capturing richer and previously inconceivable data.
A network of interconnected devices can bring once-dreamt capabilities, such as augmented reality and intelligent digital twins, to reality. For example, an oil and natural gas engineer could inspect hundreds or thousands of miles of gas pipelines on a computer and take remedial actions. That is made possible only by digital twins aided by meta-cognitive, autonomous sensing devices that constantly monitor and report the health of critical components along the pipeline.
Effectively, edge computing is an extension of cloud computing, with the caveat that edge computing is more decentralized and distributed than traditional cloud computing because of IoT's inherent mobility requirements. The degree of distribution on the front end, closer to IoT locations, differs significantly from that on the back-end server. While, by definition, edge computing intends to be as close as possible to IoT devices, organizations still implement edge computing the same way as cloud computing for the following critical reasons:
- Edge computing security products and practices are still evolving, and it is a known issue that edge devices can be easy targets for hackers to infiltrate.
- Edge computing architecture patterns are still evolving. Even though performance and latency requirements are different in edge computing, the data generated by the edge devices still gets stored in centrally managed cloud storage systems.
- The CPU-intensive, memory-intensive and low-latency disk input/output resources needed for executing complex machine learning and deep data processing models are more readily available in the traditional cloud computing environment than at the edge.
- Other technology capabilities and services, such as serverless and managed container services, which are far more advanced and easier to access in the cloud, lag at the edge.
For these reasons, data insights and intelligence that drive actionable IoT decisions must rely on centrally managed, high-power computing in the cloud. Cloud computing adds extra network and processing latencies, which are undesirable for time-critical, sense-and-respond IoT implementations.
Essentially, these challenges pose severe barriers to realizing the full potential of IoT implementations, particularly in cases where intelligent sense-and-respond actions would result in significant value, including saving lives and preventing loss of property and assets.
The emerging architecture
Several experts in the industry have proposed developing a reliable, robust reference architecture for edge computing. While the goal of such architecture development is to bring powerful computing to the edge, many proposed architectures still fail to decentralize edge computing sufficiently away from the cloud.
Here, I propose a reference architecture that takes a layered approach to decentralize edge computing and address numerous known challenges. Figure 2 represents the proposed reference architecture.
The architecture has three distinct layers: device, edge and cloud. The edge layer is central to the reference architecture that addresses edge computing requirements. The following are the key responsibilities of the edge layer:
- receiving, processing and forwarding data flow from the device layer;
- providing time-sensitive services, such as edge security and privacy protection;
- edge data analysis;
- intelligent computing; and
- IoT process optimization and real-time control.
The edge layer comprises three sublayers, distinguished by their data processing capacity: the near-edge, mid-edge and far-edge layers.
Near-edge layer. The near-edge layer contains edge controllers that collect data from the device layer, perform preliminary data thresholding or filtering and control flow down to the devices. Because of device heterogeneity in the device layer, the edge controllers in the near-edge layer must support a wide array of communication protocols. The edge controllers also interface with upper layers to receive operational instructions or data-driven decisions and translate them into control flow instructions for programmable logic controllers or action modules, which are then transmitted to the devices. As a result, the near-edge layer must exhibit microsecond latency while interfacing with the device layer. Such latency becomes mandatory when the call to action is time critical, such as the expected transient response of a self-driving vehicle when a pedestrian suddenly enters its field of vision.
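The preliminary thresholding described above can be sketched in a few lines of Python. This is a minimal, illustrative sketch, not a real controller API: the function name, the running-mean approach and the threshold value are all assumptions made for the example.

```python
# Hypothetical near-edge controller sketch: discard routine readings and
# forward only values that deviate significantly from the running mean.
# All names and thresholds are illustrative assumptions.

def threshold_filter(readings, threshold=2.0):
    """Return only readings that deviate from the running mean by more
    than `threshold`; everything else is filtered out at the near edge."""
    forwarded = []
    total, count = 0.0, 0
    for value in readings:
        count += 1
        total += value
        mean = total / count
        if abs(value - mean) > threshold:
            forwarded.append(value)  # significant: forward to mid-edge
    return forwarded

print(threshold_filter([10.0, 10.1, 9.9, 15.0, 10.0]))  # -> [15.0]
```

A real controller would apply such a filter per sensor channel, but the principle is the same: only the surprising data travels upstream.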
Mid-edge layer. The mid-edge layer contains edge gateways and is primarily responsible for exchanging data with the near-edge and the far-edge layers through wired and wireless networks. This layer has more storage and computing resources compared to the near-edge layer. More involved data processing can occur at this layer by combining information from multiple devices. The expected latency in this layer is milliseconds to seconds. Because this layer has storage capability, data and intelligence derived from the data processing can be cached locally to support future processing. The edge gateway in the mid-edge is also responsible for transferring control flow from the upper layers to the near-edge layer and managing the equipment in both mid- and near-edge layers.
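The gateway's local caching and cross-device aggregation could look like the following sketch. The class and method names, the device identifier and the summary fields are hypothetical, chosen only to illustrate the mid-edge role described above.

```python
# Hypothetical mid-edge gateway sketch: cache readings from near-edge
# controllers per device and combine them into compact summaries for
# the far-edge layer. Names are illustrative assumptions.
from collections import defaultdict

class EdgeGateway:
    def __init__(self):
        self.cache = defaultdict(list)  # local per-device storage

    def ingest(self, device_id, value):
        """Store a reading received from a near-edge controller."""
        self.cache[device_id].append(value)

    def summarize(self, device_id):
        """Combine cached readings into a summary for the far edge."""
        values = self.cache[device_id]
        return {"device": device_id, "count": len(values),
                "avg": sum(values) / len(values)}

gw = EdgeGateway()
for v in (20.0, 22.0, 21.0):
    gw.ingest("pump-1", v)
print(gw.summarize("pump-1"))
```

Keeping the raw readings cached locally is what lets the gateway support future processing without a round trip to the cloud.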
Far-edge layer. The far-edge layer contains powerful edge servers responsible for performing more complex and critical data processing and making directional decisions based on the data collected from the mid-edge layer. Essentially, the edge servers in the far-edge layer form a mini computing platform with more powerful storage and computing resources. The far-edge layer processes bulk data by using more complex machine learning algorithms. The layer analyzes more data from different equipment to achieve process optimization or evaluates the best measures to take over a wider area for an extended period, usually with longer latency. The far-edge layer also acts as a bridge between the cloud and the edge layers.
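As a toy stand-in for the heavier machine learning this layer would run, the sketch below scores gateway summaries across a fleet of devices with a simple z-score test. The function name, the summary format and the cutoff are assumptions for illustration only.

```python
# Hypothetical far-edge sketch: flag devices whose average reading
# deviates strongly from the rest of the fleet, using a basic z-score.
# A stand-in for the more complex ML models run at this layer.
import statistics

def anomalous_devices(summaries, z_cutoff=2.0):
    """Return device IDs whose average deviates from the fleet mean
    by more than `z_cutoff` standard deviations."""
    avgs = [s["avg"] for s in summaries]
    mean = statistics.mean(avgs)
    stdev = statistics.pstdev(avgs) or 1.0  # guard against zero spread
    return [s["device"] for s in summaries
            if abs(s["avg"] - mean) / stdev > z_cutoff]

fleet = [{"device": f"sensor-{i}", "avg": 10.0} for i in range(7)]
fleet.append({"device": "sensor-7", "avg": 50.0})
print(anomalous_devices(fleet))  # -> ['sensor-7']
```

The point is architectural rather than statistical: this kind of fleet-wide comparison needs data from many gateways, which is exactly why it belongs in the far-edge layer rather than on an individual controller.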
The following table captures the different layers in an example connected vehicle architecture.
The real value proposition
As evident from the reference architecture and the example implementation, edge computing makes the following possible:
- intelligence derivation closer to the devices or things on the internet;
- a conduit for bidirectional information exchange with different data and intelligence junctions; and
- minimization of decision-to-action latency.
Intelligence and decision-to-action latency are crucial for the expected value proposition. It is clear from the connected vehicle example that the emerging architecture brings the intelligence as close to the IoT edge as possible. It is also clear that the more complex the intelligence derivation, the farther its computation is from IoT devices. However, in several sense-and-respond IoT implementations, the complex intelligence would need to be closer to the device layer and at low latency.
A cloud-based AI model holds no value if it predicts a collision after the vehicles have already collided. Implementing time-critical intelligence farther from the device layer and at higher latency is an antipattern -- a common response to a problem that is usually ineffective -- that undermines the full potential of edge computing. It is as if the edge in edge computing were interpreted as the edge of the cloud instead of the edge of IoT devices.
Think of any IoT deployment. The real value lies in swiftly developing actionable decisions by uncovering intelligence and insights from data generated at the edge. Figure 3 depicts the concepts of IoT intelligence, decision-to-action latency and the resulting value proposition. The IoT value proposition can advance exponentially by incrementally bringing the intelligence closer to device locations, aided by edge computing. Figure 3 also demonstrates that the lower the intelligence latency, the greater the value proposition. Effectively, the figure implies that edge computing must become richer and more powerful in computing resources to derive complex intelligence at low latency close to the device layer.
Feasible example applications
Consider agricultural IoT where tractors, soil sensors, sensors on the plows and water systems are all interconnected to collect data and mine for intelligence. There may not be internet connectivity in such remote locations to send the data to cloud servers for processing. Even if there is connectivity, the delays in receiving intelligence to determine when to fertilize and water the farm might not be acceptable. Instead, local computing with edge capabilities would turn data into actions instantaneously.
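The "data into actions instantaneously" step in the farming scenario can be sketched as a local decision rule that never leaves the field. The function name, the moisture thresholds and the action strings are hypothetical, chosen purely to illustrate an on-site edge decision.

```python
# Hypothetical on-farm edge decision sketch: turn a soil-moisture
# reading into an immediate irrigation action with no cloud round trip.
# Thresholds and action names are illustrative assumptions.

def irrigation_action(soil_moisture_pct, rain_expected):
    """Decide locally whether to start or stop irrigation."""
    if soil_moisture_pct < 25 and not rain_expected:
        return "start_irrigation"
    if soil_moisture_pct > 60:
        return "stop_irrigation"
    return "no_action"

print(irrigation_action(18, rain_expected=False))  # -> start_irrigation
```

Because the rule runs on local edge hardware, the decision is made even when the farm has no internet connectivity at all.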
Edge computing also shines in IoT use cases where data is generated rapidly. For example, airplanes have sensors on almost every component, such as the engine, the landing gear, the body, the wings and the tires. On average, airplanes generate 60 GB to 100 GB of data per flight and, with further advancements, that figure could grow to 5 TB to 8 TB in the next 10 years, according to a Forbes article. It would be practically infeasible to upload such data to the cloud in real time, yet that data could provide valuable insights to the pilots and onboard crew during the flight. Instead, edge processors installed within the plane can process the data and extract the required intelligence instantaneously.
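One standard way an onboard edge processor copes with such data rates is to reduce the raw stream to running statistics as readings arrive, so only compact summaries, not gigabytes of raw samples, need to be retained or transmitted. The sketch below uses Welford's online algorithm; the class name and sample values are illustrative assumptions, not any avionics API.

```python
# Hypothetical onboard edge sketch: reduce a high-rate sensor stream to
# running mean and variance (Welford's online algorithm) so raw samples
# can be discarded after processing. Illustrative only.

class RunningStats:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations

    def update(self, x):
        """Fold one new reading into the running statistics."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / self.n if self.n else 0.0

stats = RunningStats()
for reading in (100.0, 102.0, 98.0, 101.0):
    stats.update(reading)
print(round(stats.mean, 2))  # -> 100.25
```

Each update is constant time and constant memory, which is what makes this kind of reduction feasible at flight-time data rates.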
The following are the key takeaways from the arguments made in support of edge computing:
- Key drivers for IoT implementations aim to deliver intelligent answers with the primary goal of making lives better.
- The intelligence in an IoT-enabled innovative solution is mainly attributed to data-driven insights.
- New architecture patterns are emerging that emphasize the need for low-latency edge intelligence.
- Research and field experience suggest that the more low-latency intelligence is generated closer to edge devices, the greater the potential value proposition.
About the author
Murali Kashaboina is a technology-driven business-oriented leader and entrepreneur who brings IT experience in directing value-driven digital strategies and initiatives. He also presents on machine learning, AI, big data and IoT, and speaks at industry conferences. Kashaboina was inducted into Forbes' Business Council, conferred the Top 50 Tech Leaders Award by InterCon World in 2019 and named one of the Top 20 Inspiring Entrepreneurs 2020 in the USA of Indian Origin Award by the Asia Pacific Entrepreneur Magazine.
Kashaboina co-founded Entrigna, an AI-driven real-time decisions company specializing in IoT, industrial IoT and big data custom solutions. Before founding Entrigna, he served as managing director of enterprise architecture at United Airlines and played a leadership role in the merger of United and Continental Airlines. More recently, Kashaboina served as CTO for a major healthcare organization in the Pacific Northwest, focusing on healthcare innovation and driving the smart hospital agenda. He currently serves as an advisor to BData Inc., a digital healthcare company, and as chief innovation officer at NuEra Automotive Solutions, based in Irving, Texas.
Kashaboina holds a master's degree in mechanical engineering from the University of Dayton, Ohio, and is currently preparing for his Ph.D. in data science and AI.