Edge computing risked becoming an IoT cliché. The phrase kept popping up everywhere with no clear consensus on what it meant or whether it mattered.
But as a former Chair of the Bluetooth SIG and now a CTO for wireless semiconductors, I’ve seen enough of the evolution of wireless technology to conclude that edge computing is anything but a fad. In fact, it’s a necessary progression for connected devices and applications.
For example, early chips for Bluetooth Low Energy — the low-power variant of Bluetooth introduced in 2010 — had sufficient onboard processing power to maintain the Bluetooth wireless link and run simple applications, but could do no more. Yet, these chips formed the foundation of a whole generation of brand-new Bluetooth-enabled devices — such as wireless heart rate monitors — that could link to a smartphone for the very first time.
Better chips mean better features
Bluetooth Low Energy chips evolved and gained more processing power — and associated memory — to power products such as activity monitors that could measure not just heart rate, but also steps taken, distance walked and calories burned. And to keep consumers happy, developers had to add ever more enhanced features. Examples included sleep quality measurement — time taken to fall asleep, hours slept, number of awakenings — and analysis, including the quality and quantity of deep sleep cycles, to keep sales of these “wearables” booming.
Later, these devices formed the basis of the latest generation of medical wearables. These include diabetes monitors that track blood sugar levels and fall monitors for the elderly that allow concerned relatives and caregivers to know someone is OK based on their routine daily activity.
Bluetooth solved many IoT problems
The relevance of this trip down Bluetooth memory lane is that it can be seen, in some ways, as a precursor of the internet of things. The main difference for the emerging IoT market is where edge computing fits in, now that cloud computing and analytics seem to promise effectively unlimited processing power.
Using the example of wearables, you’ll see the kind of edge computing now required for IoT reveal itself like invisible ink under an ultraviolet light.
The evolution of wearables required each generation to monitor and collate a greater number of measurements (raw data). Developers found optimal ways of doing this by processing raw data locally (on the edge of the application using the Bluetooth chips’ increasingly powerful onboard processors) and then forwarding to a smartphone app and the cloud (for data sharing and tracking) only the essential information (desired data).
The technology enabled continuous (low-latency) monitoring, and the modest Bluetooth wireless throughput was sufficient to update apps and cloud servers of the key tracking information without requiring extended on-air duration that would otherwise be needed to stream raw data. (Short on-air duration also minimizes power consumption — vital for any battery-driven device such as an IoT enterprise sensor.) Sending only the key information also minimized the impact on the user’s cellphone data allowance (data cost).
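The pattern described above — process the raw sensor stream on-device and transmit only the essential information — can be sketched in a few lines. This is an illustrative example only; the function name, window size and fields are hypothetical, not taken from any real wearable firmware.

```python
from statistics import mean

def summarize_heart_rate(raw_samples):
    """Reduce a window of raw heart-rate samples (e.g., one per second)
    to the handful of values worth sending over the radio.

    raw_samples: list of beats-per-minute readings taken on-device.
    Returns a small summary dict instead of the full raw stream,
    cutting on-air duration, power draw and cellular data cost.
    """
    return {
        "samples": len(raw_samples),
        "avg_bpm": round(mean(raw_samples), 1),
        "min_bpm": min(raw_samples),
        "max_bpm": max(raw_samples),
    }

# One minute of raw readings (60 values) collapses to four numbers:
raw = [72, 75, 74, 73] * 15  # stand-in for real sensor data
packet = summarize_heart_rate(raw)
print(packet)
```

Sending `packet` rather than `raw` is the whole trade: the edge device spends a little local compute to avoid streaming sixty values per minute over the air.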
Things go wrong, hackers never quit
Because users didn’t always carry their smartphones, wearables had to operate autonomously when not connected. Resiliency was built into the systems. They didn’t depend on a continuous network or internet connection for successful operation (redundancy). If the network or internet link failed, the wearable (edge device) waited patiently until it could reestablish the connection. It then transferred any new key information.
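That wait-then-transfer behavior is a classic store-and-forward queue. Here is a minimal sketch of the idea; the class and callback names are illustrative, and `send` stands in for whatever transport (Bluetooth, cellular) a real device would use.

```python
from collections import deque

class EdgeBuffer:
    """Store-and-forward queue: keep key readings locally while the
    link is down, then drain the backlog once it is reestablished."""

    def __init__(self, send, maxlen=1000):
        self.send = send                       # callable(reading) -> True if delivered
        self.pending = deque(maxlen=maxlen)    # oldest readings dropped if full

    def record(self, reading):
        """Queue a new reading, then attempt to drain the backlog."""
        self.pending.append(reading)
        self.flush()

    def flush(self):
        """Transfer queued readings in order until the link fails again."""
        while self.pending:
            if not self.send(self.pending[0]):
                break                          # link still down; wait patiently
            self.pending.popleft()

# Simulated flaky link: delivery fails until 'online' flips to True.
online = False
sent = []
buf = EdgeBuffer(lambda r: online and (sent.append(r) or True))

buf.record({"fall_detected": False, "t": 1})   # offline: stays queued
online = True
buf.record({"fall_detected": True, "t": 2})    # online: backlog drains in order
print(sent)
```

The key property is that nothing is lost during an outage (up to the buffer limit) and readings arrive at the cloud in the order they were taken.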
Once personal health or safety data (for example, fall detection) was transmitted across the wireless link, data protection became vital. It’s unfortunate, but an aggressive army of hackers was only too keen to find and exploit any security weakness. Security and encryption mechanisms suddenly became key elements of any edge computing capability.
The cloud won’t do the heavy lifting
I suggest that today’s Bluetooth Low Energy wearables look a lot like forerunners of enterprise IoT devices. Their developers solved many of the same challenges facing today’s IoT, albeit on a much smaller scale. The scaling will come from low-power wide area network (LPWAN) wireless technologies. Some of the most promising examples globally are those using the latest cellular LTE Cat-M1 and NB-IoT standards. These will allow IoT devices to connect to the cloud over long distances through existing telecoms infrastructure.
But even with straight-to-cloud connectivity, edge computing may be even more important for IoT than it was for Bluetooth. As deployments scale to millions and billions of sensors, costs such as power consumption and network data can ramp quickly. The ability to transfer only the most relevant data could therefore be vital for any commercially viable application. And the sheer volume of raw data generated could pose a challenge for cloud services anyway. Most of the raw computing will have to be done on the edge of the application.
Edge computing is here to stay
As such, I think cloud and edge computing in combination will open up great opportunities. Finding the right balance between edge and cloud, however, requires knowledge of the full end-to-end application, including the cost of power, data transfer and cloud services. Developing end-to-end prototypes may be the easiest way to find the optimal balance. And thanks to simplified development tools, kits, software and cloud technologies, this task will only get easier.
But one thing is for sure: Edge computing is here to stay. And it will be a key enabler of a commercially viable IoT.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.