edge AI

What is edge AI?

Edge artificial intelligence (edge AI) is a paradigm for crafting AI workflows that span centralized data centers (the cloud) and devices outside the cloud that are closer to humans and physical things (the edge). This stands in contrast to the more common practice of developing and running AI applications entirely in the cloud, which people have begun to call cloud AI. It also differs from older AI development approaches in which people crafted AI algorithms on desktops and then deployed them on desktops or special hardware for tasks such as reading check numbers.

The edge is often characterized as a physical thing, such as a network gateway, smart router or intelligent 5G cell tower. However, this misses the value that edge AI brings to devices such as cellphones, autonomous cars and robots. A more helpful way to understand the edge is as a means of extending digital transformation practices innovated in the cloud out to the world.

Earlier types of edge innovation applied cloud best practices to improve the efficiency, performance, management, security and operations of computers, smartphones, vehicles, appliances and other devices. Edge AI focuses on best practices, architectures and processes for extending data science, machine learning and AI outside the cloud.

How edge AI works

Until recently, most AI applications were developed using symbolic AI techniques that hard-coded rules into applications such as expert systems or fraud detection algorithms. In some cases, nonsymbolic AI techniques such as neural networks were developed for specific applications, such as optical character recognition for check numbers or typed text.

Over time, researchers discovered ways to scale up deep neural networks in the cloud for training AI models and generating responses based on input data, which is called inferencing. Edge AI extends AI development and deployment outside of the cloud.

In general, edge AI is used for inferencing, while cloud AI is used to train new algorithms. Inferencing requires significantly less processing power and energy than training. As a result, well-designed inferencing algorithms are sometimes run on existing CPUs or even less capable microcontrollers in edge devices. In other cases, highly efficient AI chips improve inferencing performance, reduce power consumption or both.
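
To make this concrete, here is a minimal on-device inferencing sketch in Python using TensorFlow Lite, a common runtime for edge inferencing. The model file name is a placeholder, and the sketch assumes a cloud-trained classifier has already been converted and copied to the device:

    # Minimal on-device inferencing sketch with TensorFlow Lite.
    # "model.tflite" is a placeholder for a cloud-trained, converted model.
    import numpy as np
    import tensorflow as tf  # many devices use the lighter tflite_runtime instead

    # Load the model and allocate its tensors once at startup.
    interpreter = tf.lite.Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # A random array standing in for a captured sensor frame.
    frame = np.random.random_sample(input_details[0]["shape"]).astype(
        input_details[0]["dtype"])

    # Inference runs locally; no network round trip to the cloud.
    interpreter.set_tensor(input_details[0]["index"], frame)
    interpreter.invoke()
    scores = interpreter.get_tensor(output_details[0]["index"])
    print("Top class:", int(np.argmax(scores)))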

Benefits of edge AI

Edge AI has several benefits over cloud AI, including the following:

  • Reduced latency/higher speeds. Inferencing is performed locally, eliminating the delay of communicating with the cloud and waiting for a response; a rough way to measure this difference is sketched after this list.
  • Reduced bandwidth requirement and cost. Edge AI reduces the bandwidth and associated costs for shipping voice, video and high-fidelity sensor data over cell networks.
  • Increased data security. Data is processed locally, which reduces the risk that sensitive data is stored in the cloud or intercepted in transit.
  • Improved reliability/autonomous technology. The AI can continue to operate even if the network or cloud service goes down, which is critical for applications such as autonomous cars and industrial robots.
  • Reduced power. Many AI tasks can be performed with less energy on the device than would be required to send the data to the cloud, thus extending battery life.
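
The latency point in particular is easy to demonstrate. The following Python sketch is purely illustrative: a matrix multiplication stands in for a real model, and the 100-millisecond round trip is an assumed figure rather than a measurement of any particular network:

    # Illustrative comparison of local inferencing vs. a simulated cloud
    # round trip. The matmul stands in for a real model; the 100 ms delay
    # is an assumption, not a measurement of any real network.
    import time
    import numpy as np

    weights = np.random.rand(256, 256)
    frame = np.random.rand(256)

    def edge_inference(x):
        return weights @ x  # runs on the device itself

    def cloud_inference(x, rtt_seconds=0.1):
        time.sleep(rtt_seconds)  # simulated network round trip
        return weights @ x       # same model, hosted remotely

    for name, fn in [("edge", edge_inference), ("cloud", cloud_inference)]:
        start = time.perf_counter()
        fn(frame)
        print(f"{name}: {(time.perf_counter() - start) * 1000:.1f} ms")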

Edge AI use cases and industry examples

Common edge AI use cases include speech recognition, fingerprint detection, face-ID security, fraud detection and autonomous driving systems. Edge AI can combine the power of the cloud with the benefits of local operation to improve the performance of AI algorithms over time.

One example might be an autonomous navigation system in which the AI algorithms are trained in the cloud, but the inferencing runs on the car to control steering, acceleration and braking. Data scientists develop better self-driving algorithms in the cloud and push these models out to the vehicles.

In some cars, these systems continue to simulate controlling the car even when a person is driving. The system then notes when the human does something the AI did not expect, captures relevant video and uploads this to the cloud to improve the algorithm. The main control algorithm is then enhanced using input from all the vehicles in the fleet, which is pushed out in the next update.
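
That feedback loop can be sketched as a simple disagreement detector. Everything below is hypothetical: the model call, the steering signal and the upload queue are placeholders for whatever a real vehicle stack would provide:

    # Hypothetical shadow-mode sketch: the model predicts a steering angle
    # while the human drives, and large disagreements flag the surrounding
    # video for upload so the cloud-side training set can be improved.
    from collections import deque

    DISAGREEMENT_THRESHOLD = 5.0       # degrees of steering; assumed value
    recent_frames = deque(maxlen=120)  # rolling buffer of recent camera frames
    upload_queue = []                  # clips queued for upload to the cloud

    def shadow_step(frame, human_steering, model):
        """Run the model in shadow mode and capture disagreements."""
        recent_frames.append(frame)
        predicted_steering = model(frame)  # local, on-vehicle inference
        if abs(predicted_steering - human_steering) > DISAGREEMENT_THRESHOLD:
            # The human did something the model did not expect: save the
            # recent video context for cloud-side retraining.
            upload_queue.append(list(recent_frames))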

Here are a few other ways that edge AI might work in practice:

  • Speech recognition algorithms transcribe speech on mobile devices.
  • Google's photo-editing AI automatically generates realistic background imagery to replace people or objects removed from a picture.
  • Wearable health monitors assess heart rate, blood pressure, glucose levels and breathing locally using AI models developed in the cloud.
  • A robot arm gradually learns a better way of grasping a particular kind of package and shares this with the cloud to improve other robots.
  • Amazon Go's cashier-less stores use models trained in the cloud and edge AI to automatically count items placed into a shopper's bag, eliminating a separate checkout process.
  • Smart traffic cameras automatically adjust light timings to optimize traffic.

Edge AI vs. cloud AI

The distinction between edge AI and cloud AI is best understood by starting with how these ideas evolved. There were mainframes, desktop computers, smartphones and embedded systems long before there was a cloud or an edge. Applications for all these devices were developed at a slow pace using Waterfall development practices, with teams attempting to cram as much functionality and testing as possible into annual updates.

The cloud brought attention to various ways to automate many data center processes. This allowed teams to adopt more Agile development practices. Some large cloud applications now get updated dozens of times a day. This makes it easier to develop application functionality in smaller chunks. The concept of the edge suggests a way of extending these more modular development practices beyond the cloud to edge devices. Edge AI focuses on extending AI development and deployment workflows to run on mobile phones, smart appliances, autonomous vehicles, factory equipment and remote edge data centers.

There are different degrees of edge AI. At a minimum, an edge device like a smart speaker may detect only a wake word locally and send all other speech to the cloud. More sophisticated edge AI devices, such as 5G access servers, could provide AI capabilities to nearby devices. The Linux Foundation's LF Edge group counts light bulbs, mobile devices, on-premises servers, 5G access devices and smaller regional data centers as different types of edge devices.

Edge AI and cloud AI work together, with varying contributions from the edge and the cloud to both training and inferencing. At one end of the spectrum, the edge device sends raw data to the cloud for inference and waits for a response. In the middle, the edge device runs inference locally using models trained in the cloud. At the other end, the edge device also plays a larger role in training the AI models.
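
The middle of that spectrum, local inference with a cloud fallback, might look like the hypothetical Python sketch below. The endpoint URL, confidence threshold and JSON format are illustrative assumptions, not a real API:

    # Hypothetical hybrid pattern: answer locally when the on-device model
    # is confident, otherwise ship the raw data to the cloud for inference.
    import json
    import urllib.request
    import numpy as np

    CLOUD_ENDPOINT = "https://example.com/v1/infer"  # placeholder URL
    CONFIDENCE_FLOOR = 0.7                           # assumed threshold

    def classify(frame, local_model):
        scores = local_model(frame)  # cheap local inference first
        if float(np.max(scores)) >= CONFIDENCE_FLOOR:
            return int(np.argmax(scores))  # confident: answer on-device
        # Fall back to the cloud and wait for a response.
        request = urllib.request.Request(
            CLOUD_ENDPOINT,
            data=json.dumps({"frame": frame.tolist()}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            return json.load(response)["label"]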

The future of edge technology

Edge AI is still an emerging field and growing fast. The LF Edge group predicts that the power footprint of edge devices will increase from 1 GW in 2019 to 40 GW by 2028. Edge AI is expected to grow along with it. Today, consumer devices, including smartphones, wearables and intelligent appliances, make up the bulk of edge AI use cases. But enterprise edge AI is likely to grow faster with the expansion of cashier-less checkout, intelligent hospitals, smart cities, Industry 4.0 and supply chain automation.

New AI technology, such as federated learning, could also improve the privacy and security of edge AI. In traditional AI development, a relevant subset of raw data is pushed up to the cloud to enhance training. In federated learning, each edge device computes AI training updates locally, and only these model updates, rather than the actual data, are pushed to the cloud. This reduces privacy concerns.
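
A minimal federated-averaging sketch makes the idea concrete. It assumes a simple linear model whose weights are a NumPy vector: each device trains on data that never leaves it, and the cloud averages only the resulting weight deltas:

    # Minimal federated-averaging sketch in NumPy. Each device trains on
    # its own private data and shares only a weight delta; the cloud
    # averages the deltas into a new global model.
    import numpy as np

    rng = np.random.default_rng(0)

    def local_update(weights, x, y, lr=0.1, steps=10):
        """One device's local training: gradient descent on squared error."""
        w = weights.copy()
        for _ in range(steps):
            grad = 2 * x.T @ (x @ w - y) / len(y)
            w -= lr * grad
        return w - weights  # only this delta leaves the device

    # Three devices, each holding private data the cloud never sees.
    devices = []
    for _ in range(3):
        x = rng.normal(size=(32, 3))
        y = x @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=32)
        devices.append((x, y))

    # The cloud holds the global model and aggregates updates each round.
    global_weights = np.zeros(3)
    for _ in range(5):
        deltas = [local_update(global_weights, x, y) for x, y in devices]
        global_weights += np.mean(deltas, axis=0)

    print("learned weights:", np.round(global_weights, 2))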

Another significant change could be improved edge AI orchestration. Today, most edge AI algorithms run inferencing only against data seen directly by the device itself. In the future, more sophisticated tools might run local inferencing using data from a collection of sensors adjacent to the device.

The development and operations of AI models are at a much earlier stage of maturity than DevOps practices for application development in general. Compared to traditional application development, data scientists and data engineers face numerous data management challenges. These are even more complicated for edge AI workflows, which involve orchestrating data, modeling and deployment processes that span edge devices and the cloud.

Over time, the tools for these processes are likely to improve, making it easier to scale edge AI applications and explore new AI architectures. For example, experts are already exploring ways to deploy edge AI into 5G edge data centers closer to mobile and IoT devices. New tools could also allow enterprises to explore new kinds of federated data warehouses with data stored closer to the edge.
