Although it’s hardly a secret, a steep rise in the number of connected devices around us is set to change the way we live, work and interact with technology. By 2025, forecasts indicate that there will be as many as 75 billion smart devices globally, ushering in a new era of hyper-connectivity. These devices will not only collect data but also produce and process information directly on the products closest to their users, at the edge. Increased functionality and computing power available at the edge is already changing the way companies design and build products, from intelligent construction site video surveillance to oil rig maintenance. In this article, I will unpack how taking data processing out of the cloud and moving it to the edge can positively impact reliability, privacy and latency.
What exactly is the edge?
Edge computing refers to applications, services and processing performed outside of a central data center and closer to end users. The definition of “closer” falls along a spectrum and depends highly on networking technologies used, the application characteristics and the desired end-user experience.
While edge applications do not need to communicate with the cloud, they may still interact with servers and internet-based applications. Many of the most common edge devices feature physical sensors and actuators, such as temperature sensors, lights and speakers, and moving computing power closer to this hardware in the physical world makes sense. Do you really need to rely on a cloud server when asking your lamp to dim the lights? With collection and processing power now available on the edge, companies can significantly reduce the volume of data that must be moved to and stored in the cloud, saving themselves time and money in the process.
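As a loose illustration of that data-reduction argument, consider an edge device that samples a sensor many times per second but uploads only a per-window summary. The function and variable names below are hypothetical, not any real device API; it is a minimal sketch of local aggregation, assuming one reading every 100 ms over a minute:

```python
# Sketch: aggregate raw sensor readings locally and send one summary
# record per window, instead of shipping every sample to the cloud.
# All names (aggregate_window, raw, summary) are illustrative only.

def aggregate_window(readings):
    """Reduce a window of raw samples to a single summary record."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }

# 600 raw samples (one per 100 ms over a minute) become one record:
raw = [20.0 + 0.01 * i for i in range(600)]
summary = aggregate_window(raw)
```

Here, 600 readings collapse into a single four-field record before anything leaves the device, which is the kind of reduction in transferred and stored data the paragraph above describes.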
The stakes are high
With edge computing set to change the way we live and work, it’s critical for businesses to understand what’s at stake for their business models, customer experiences and workforces. Edge computing impacts three dimensions: reliability, privacy and latency — each with profound implications for companies and consumers alike. Additionally, the convergence of edge computing and artificial intelligence is unlocking new opportunities for companies in 2020 and beyond.
A primary motivator driving edge computing’s adoption is the need for robust and reliable technology in hard-to-reach environments. Many industrial and maintenance businesses simply cannot rely on internet connectivity for mission-critical applications. Wearables must also be resilient enough to perform without 4G. For these use cases and many more, offline reliability makes all the difference.
Protecting privacy is both a potential asset and a risk for businesses in a world where data breaches occur regularly. Consumers have become wary that their smart speakers — or the people behind them — are always listening and, rightfully, companies largely reliant on cloud technology have been scrutinized for what they know about users and what they do with that information.
Edge computing helps alleviate some of these concerns by bringing processing and collection into the environments where the data is produced. The leading voice assistants on the market today, by contrast, systematically centralize, store and learn from every interaction end users have with them. Their records include raw audio data and the outputs of every algorithm involved, attached to logs of every action taken by the assistant. The latest research and innovations also suggest that interactions are set to become significantly smoother and more relevant based on additional information about end users’ tastes, contacts, habits and so on.
This creates a paradox for voice companies and others that rely on the cloud: for AI-powered voice assistants to be relevant and useful, they must know more personal information about their users. Moving processing power to the edge is the only way to offer the same level of performance without compromising on privacy.
In the simplest terms, latency refers to the time difference between an action and a response. You may have experienced latency on a smartphone if you noticed a slight delay between touching an app’s icon and the app opening. For many industrial use cases, however, more is at stake than a poor user experience. For manufacturing companies, mission-critical systems cannot afford the delay of sending information to off-site cloud databases. Cutting power to a machine a split second too late is the difference between avoiding and incurring physical damage.
When the computing is on the edge, latency is no longer an issue. Customers and workers won’t have to wait while data travels to and from a cloud server; their maintenance reports, shipping lists and error logs are recorded and tracked in real time.
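The action-to-response gap can be made concrete with a toy comparison. The sketch below simulates a cloud round trip with a fixed, made-up 50 ms network delay each way and compares it with the same decision made locally; the delay value, function names and the power-cutoff threshold are all illustrative assumptions, not measurements:

```python
# Sketch: latency as the gap between action and response.
# A "cloud" decision pays a simulated network round trip; an "edge"
# decision runs locally. NETWORK_DELAY_S is a hypothetical value.

import time

NETWORK_DELAY_S = 0.05  # assumed 50 ms each way to a distant data center

def process(reading):
    """Decide whether to cut power (illustrative threshold)."""
    return reading > 100.0

def cloud_decision(reading):
    time.sleep(NETWORK_DELAY_S)   # request travels to the data center
    verdict = process(reading)
    time.sleep(NETWORK_DELAY_S)   # response travels back
    return verdict

def edge_decision(reading):
    return process(reading)       # no network hop

start = time.perf_counter()
cloud_decision(120.0)
cloud_ms = (time.perf_counter() - start) * 1000

start = time.perf_counter()
edge_decision(120.0)
edge_ms = (time.perf_counter() - start) * 1000
```

Under these assumptions the cloud path can never respond in under 100 ms, while the edge path answers in a fraction of a millisecond, which is the gap that matters when a machine must be shut off in time.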
Local computing power becomes the norm
We are living in a centralized world, whether we think about it that way or not. Every time you turn on your mobile phone or open a SaaS application, you are essentially engaging with an interface that represents what is occurring on a cloud server. In his 2016 talk, “The End of Cloud Computing,” Andreessen Horowitz’s Peter Levine outlined a vision for the future of edge computing. “Your car is basically a data center on wheels. A drone is a data center on wings,” Levine quipped. Nearly three years later, Levine’s words couldn’t be more prophetic. With more and more applications capable of functioning in local environments thanks to innovations in edge computing, decentralization is becoming far more than a trendy buzzword, and companies and consumers alike are benefiting from improved reliability, privacy and latency across their IoT devices.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.