Edge computing. IoT. AI.
It's hard to read a headline lately without at least one of these technologies making an appearance. To say they are overhyped is an understatement. Many enterprises struggle to plan a deployment of even one of the three, let alone all of them combined. And even setting the hype aside, all three technologies are relatively new.
So, how can an enterprise plan for a deployment of AI and IoT at the edge? The answer is to look at the three technology topics individually, then view them as a unit, to have any chance at successful implementation.
Edge computing is all about reducing latency between where a condition that needs handling emerges and where the handling process takes place, which is sometimes called shortening the control loop.
Cloud data centers are often hundreds or even thousands of miles away from the place where user devices connect, and that can mean a round-trip delay of 100 milliseconds or more. Since this is additive to any processing time in the application, total delay time can amount to as much as a half-second, which is unacceptable in many scenarios -- think connected healthcare devices or autonomous vehicles.
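Because the network round trip is additive to processing time, the delay budget is simple arithmetic. A minimal sketch, using illustrative figures rather than measurements:

```python
# Illustrative latency budget: network round-trip time adds directly to
# whatever time the application spends processing. The numbers below are
# assumptions for the sake of comparison, not benchmarks.

def total_delay_ms(network_rtt_ms: float, processing_ms: float) -> float:
    """Total response time seen by the device: round trip plus processing."""
    return network_rtt_ms + processing_ms

cloud = total_delay_ms(network_rtt_ms=100, processing_ms=400)  # distant cloud
edge = total_delay_ms(network_rtt_ms=5, processing_ms=400)     # nearby edge site

print(f"cloud: {cloud} ms, edge: {edge} ms")
```

Moving the processing to the edge attacks only the network term, which is why edge computing matters most when that term dominates the budget.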
Cloud providers know latency can be a problem, which is why they offer on-premises hosting of some or all of their IoT features. Running AI at the edge is also a viable model.
The next place to look for practical strategies is IoT, which, in a practical context, doesn't have much to do with the internet at all. It's about harnessing the raw information from sensors and enabling machine control by applications rather than by people. This raw information almost always arrives as events -- signals that something has happened or that a status has changed -- for example, the temperature of a freezer, the location of an item in the supply chain or the rotation of a conveyor belt in a manufacturing plant.
Cloud providers offer event-processing and IoT tools, including the serverless dynamic hosting of processes on demand. These tools are normally run in the cloud data center and collect events from multiple sources, even from multiple geographic areas. But when combined with edge computing, this cloud event processing can support both low-latency responses to time-critical events and more complex multi-event analysis. Having the same cloud platform at the network edge and deeper in the cloud facilitates development of IoT applications.
AI comes into this picture in interpreting these events. Some events are simple, meaning the mere occurrence of the event should signal the execution of a task or process. For example, if the smoke detector activates, enable the fire alarm.
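Simple events like this need no interpretation at all; the event's name alone selects the response. A minimal sketch of that pattern, with illustrative event names and actions:

```python
# Minimal "simple event" handling: the mere occurrence of an event
# triggers a fixed task. Event names and zones are illustrative.

actions = {}

def on(event_name):
    """Register a handler to run whenever the named event occurs."""
    def register(handler):
        actions[event_name] = handler
        return handler
    return register

@on("smoke_detected")
def enable_fire_alarm(event):
    return f"fire alarm enabled in {event['zone']}"

def dispatch(event):
    """Look up the handler by event name; ignore unknown events."""
    handler = actions.get(event["name"])
    return handler(event) if handler else None

print(dispatch({"name": "smoke_detected", "zone": "warehouse 3"}))
```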
Most events, however, depend on analyzing the exact event in context, either with other events or with broader conditions, such as time of day, weather and so forth. Long-standing strategies like complex event processing already do this, but they depend on being able to define all event and contextual relationships. AI offers another path, one where machine learning can watch events and responses and learn what to do. Likewise, neural networks can introduce almost-human judgment.
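The contrast between the two approaches can be sketched side by side. The rule below hand-codes one event/context relationship in the complex-event-processing style, while the stub class stands in for a learned policy; the thresholds, hours and feature names are all illustrative assumptions, and a real system would train an actual classifier:

```python
# Contextual event interpretation two ways: a hand-written rule (the
# complex-event-processing style) vs. a learned-policy stub. All
# event names, hours and features here are illustrative assumptions.

from datetime import datetime

def rule_based(event, now=None):
    """CEP style: every event/context relationship is defined by hand."""
    now = now or datetime.now()
    after_hours = now.hour < 6 or now.hour >= 22
    if event["name"] == "door_opened" and after_hours:
        return "raise_alert"
    return "ignore"

class LearnedPolicy:
    """ML style: watch events and responses, and learn what to do.
    A real system would train a classifier; this stub just remembers
    observed (features -> response) pairs."""
    def __init__(self):
        self.memory = {}
    def observe(self, features, response):
        self.memory[features] = response
    def decide(self, features):
        return self.memory.get(features, "ignore")

policy = LearnedPolicy()
policy.observe(("door_opened", "after_hours"), "raise_alert")
print(policy.decide(("door_opened", "after_hours")))
```

The practical difference is scale: the rule approach requires someone to enumerate every relationship in advance, while the learning approach discovers them from observed events and responses.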
Combining AI and IoT at the edge
While each of these three concepts has obvious value and obvious areas where they can be applied, the real magic happens when all three are deployed together.
The primary value of combining AI, IoT and edge computing is their ability to generate fast, appropriate responses to events signaled by IoT sensors. Virtual and augmented reality applications demand this kind of response, as do enterprise applications in process control and the movement of goods. The cooperation inherent in manufacturing, warehousing, sales and delivery will likely create the sweet spot for an IoT-enabled AI edge. Such activities form a chain of movement of goods that cross many different companies and demand coordination that a single-company IoT model could not provide.
Deploying a combination of the three technologies is difficult, however. Fortunately, some basic rules will help.
Rule No. 1. Apply AI at the IoT edge where event generation and control response are concentrated in a single facility or campus. When you move AI to an edge location, the AI will see less because it only receives and processes data from that location. If you use AI at one edge site to generate control outputs in another site, you'll add latency to your application and defeat the value of edge computing. This means you need to have sensors and controllers in the same general location and place your AI there, too. This will keep your control loops short.
Focusing edge AI on a common facility will also reduce the risk of losing the connection between the sensors and controllers and the AI edge. Local connectivity is more reliable than a carrier network service, and if you have power backup available in the facility, you can even ride out power failures. If that's your goal, make sure the network features used by your edge AI application are also in the facility so their power will be backed up, too.
Rule No. 2. Think of edge AI as an extension of deep AI. Unless the IoT mission of each of your AI edge facilities is different, you'll want to apply the lessons learned at one facility to the others. Machine learning can be an edge application if each edge facility is unique. If not, think of it as a collective application hosted at a location where data from all facilities is available for analysis.
Deep AI is crucial in analyzing how the event-to-control feedback loop actually changes conditions, making it an event-control-measure path. The goal is to learn whether the control responses initiated locally actually brought about the optimum result. The decisions made by this deep AI learning can then be fed back to edge locations, either in the form of neural network updates or manually, as needed, to constantly optimize the way AI responds to events.
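The feedback loop above can be sketched as a small simulation: edge sites act locally and record outcomes, while a central process aggregates across sites and pushes updated parameters back out. The threshold-averaging "learning" step is a deliberate placeholder for real model training, and all class and parameter names are assumptions:

```python
# Sketch of the event-control-measure loop: edges act locally and record
# outcomes; a central "deep" process analyzes across sites and feeds
# updates back. Averaging thresholds is a stand-in for real training.

class EdgeSite:
    def __init__(self, threshold):
        self.threshold = threshold      # local control parameter
        self.outcomes = []              # measured results of past responses
    def control(self, reading):
        acted = reading > self.threshold
        self.outcomes.append((reading, acted))  # the "measure" step
        return acted
    def apply_update(self, new_threshold):
        self.threshold = new_threshold  # update fed back from deep AI

def deep_update(sites):
    """Central analysis across all sites; here, just average the local
    thresholds -- a placeholder for learning from pooled outcome data."""
    new = sum(s.threshold for s in sites) / len(sites)
    for s in sites:
        s.apply_update(new)
    return new

sites = [EdgeSite(70.0), EdgeSite(90.0)]
deep_update(sites)
print([s.threshold for s in sites])  # every edge now shares the update
```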
Rule No. 3. Think event flows, not workflows, in your application planning. Most enterprise development practices were weaned on transaction processing, and transactions are multistep, contextual, update-centric forms of work. Their rate of arrival can be predicted fairly well, and once a transaction is initiated, the flow of information it triggers is usually predictable.
Events are simply signals of conditions or changes in conditions. They can't be predicted as easily, and some initiate a cascade of activity from edge to data center, while others are actioned locally with IoT at the edge and generate no deeper impact at all. The trick is to imagine all possible normal and abnormal conditions to predict events and size your resources appropriately.
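One way to make that planning concrete is to classify which events stay at the edge and which escalate, then size the edge-to-data-center path for the worst burst you can imagine. The event categories, rates and sizes below are illustrative assumptions, not recommendations:

```python
# Sketch of event-flow planning: only some events cascade past the edge,
# so capacity is sized from peak rate and escalation fraction. The
# categories and figures are illustrative assumptions.

LOCAL_ONLY = {"temp_reading", "belt_rotation"}   # actioned at the edge
ESCALATE = {"freezer_failure", "line_stoppage"}  # cascade to the data center

def route(event_name):
    """Decide where an event is handled; unknowns stay local by default."""
    return "data_center" if event_name in ESCALATE else "edge"

def uplink_bytes_per_sec(peak_events_per_sec, escalation_fraction,
                         bytes_per_event):
    """Rough uplink sizing for the worst anticipated burst."""
    return peak_events_per_sec * escalation_fraction * bytes_per_event

# e.g. 10,000 events/s at peak, 5% escalating, ~512 bytes each
print(uplink_bytes_per_sec(10_000, 0.05, 512), "bytes/s")
```

The same exercise applied to compute at the edge site, rather than the uplink, gives the local resource sizing.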
Edge computing, IoT and AI are three of the hottest and least-understood topics in technology, so the pace of change in this space is nothing short of breathtaking. Keep the basic rules covered here in mind as you review developments and try to be realistic in assessing how fast the technologies will advance. We still have a long road to travel.