
IoT: The potential, and the performance challenges

New products recently introduced by Amazon, Google and Apple make it clear that IoT will soon become interwoven in all aspects of our lives — from our homes and appliances to everyday services such as healthcare and public utilities.

Each IoT device will have unique performance requirements (speed, or low latency, and availability), depending on the criticality of the service being supported. If your Alexa speaker goes down, that may be annoying, but it’s not as detrimental or hazardous as a connected inhaler or ingestible sensor going down; these devices are among the latest IoT healthcare advances and can be vital in keeping patients on track with treatment plans.

Recent estimates predict especially strong growth for IoT spending in healthcare and B2B sectors, particularly industrial and discrete manufacturing, transportation and logistics, and utilities. But the extent to which these prognostications will become reality — and IoT achieves its full growth and adoption potential — will depend on performance assurances. Organizations relying on IoT will need complete faith that their devices’ internet connections are consistently fast and reliable enough to support mission-critical tasks.

We hear a lot about security, data privacy and integration as ongoing obstacles to more prolific IoT adoption, but we don’t always hear about performance challenges, even though widespread IoT device outages are not out of the ordinary. While Amazon struggled with site availability issues on this year’s Prime Day, a less-reported story was the fact that other Amazon services, notably Alexa, also experienced outages that day. Like the site outage, this was believed to be the result of huge web traffic spikes and not enough servers within Amazon’s infrastructure.

Performance challenges ahead

In some ways, what’s happening in the IoT world parallels what has happened, and continues to happen, in the traditional web world. As more people come online in more far-flung geographies, huge infrastructures are being built to serve content to them. But more infrastructure — like the cloud, content delivery networks (CDNs), domain name system servers and more — inadvertently introduces more points of potential failure and therefore more performance risks.

Ask any IT person responsible for digital services what it’s like trying to deliver strong performance across all this complexity and you’re likely to hear the words challenging, frustrating, nerve-wracking or perhaps even nearly impossible. In a sense, our insatiable endeavor to deliver high-performing online services to more end users worldwide has yielded an infrastructure so vast and unwieldy that it has, in some ways, become a major hairball to contend with.

Most IT managers have resigned themselves to the fact that round-the-clock, perfect, blip-free digital service performance is not possible. They have instead shifted their focus to proactively identifying and addressing growing hot spots — for example, a gradually slowing cloud service provider — before end users are impacted, as well as finding and fixing the root causes of performance issues quickly and accurately when problems inevitably do occur.
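The "growing hot spot" idea above — catching a gradually slowing provider before end users notice — can be sketched as a simple rolling comparison against an early baseline. This is an illustrative stand-in, not any vendor's algorithm; the window size and threshold are assumptions:

```python
from statistics import mean

def detect_slowdown(latencies_ms, window=5, threshold=1.5):
    """Return the index of the first rolling window whose average
    latency exceeds `threshold` times the initial baseline, or
    None if no sustained slowdown appears."""
    if len(latencies_ms) < 2 * window:
        return None  # not enough samples to compare
    baseline = mean(latencies_ms[:window])  # early samples set the norm
    for i in range(window, len(latencies_ms) - window + 1):
        if mean(latencies_ms[i:i + window]) > threshold * baseline:
            return i  # sustained breach, worth investigating
    return None

# A hypothetical cloud service drifting from ~100 ms to ~180 ms:
samples = [100, 102, 98, 101, 99, 120, 135, 150, 165, 180, 178, 182]
print(detect_slowdown(samples))  # → 6
```

Real monitoring products use far more sophisticated anomaly detection, but the principle is the same: compare recent behavior to an established norm and alert before the drift becomes an outage.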

Monitoring can help — but are you ready for it?

Synthetic monitoring is one key to this challenge. In the traditional web world, synthetic monitoring works by generating simulated end-user traffic from the cloud, and pinging websites, mobile sites and applications at regular intervals from key geographic regions in order to get an accurate depiction of true end-user performance. Synthetic monitoring data combined with advanced analytics allows organizations to then drill down and identify the root cause of a performance issue, whether within or beyond one’s own firewall — anything from an overloaded server within one’s own data center, to a slow API call within a multi-step transactional process, to a regional CDN demonstrating high response times.
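A single synthetic check of the kind described above — simulated traffic pinging a site at intervals and recording availability and response time — can be sketched with nothing but the standard library. This is a minimal illustration, not a real monitoring agent; commercial tools also replay multi-step transactions and run from many geographic vantage points:

```python
import time
from urllib.request import urlopen
from urllib.error import URLError

def probe(url, timeout=5.0):
    """One synthetic check: fetch `url` and report whether it was
    available and how long the response took, as a cloud-based
    agent would on a schedule from each region."""
    start = time.monotonic()
    try:
        with urlopen(url, timeout=timeout) as resp:
            ok = 200 <= resp.status < 400  # treat redirects as healthy
    except (URLError, OSError):
        ok = False  # timeout, DNS failure, refused connection, etc.
    return {"url": url,
            "available": ok,
            "response_ms": (time.monotonic() - start) * 1000.0}

# Scheduled from several regions, these samples feed the analytics
# layer that drills down to root cause:
# result = probe("https://example.com/")
```

Feeding these samples into trend analysis is what lets teams distinguish, say, a slow regional CDN from an overloaded origin server.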

However, making sense of synthetic monitoring data and deriving actionable insights from it ultimately depends on an organization’s ability to harness, analyze and discern trends across voluminous data sets. This challenge grows exponentially for companies serving end users in a wider range of geographies, which may mean supporting multiple regional website or mobile site versions.

Flash forward to the world of IoT, where connected devices can communicate with each other through the MQTT protocol. Similar monitoring techniques can be applied to MQTT not just to measure the speed and availability of device-to-device connections, but also to assess whether the devices are functioning properly based on sensor data.
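The same heartbeat pattern translates to MQTT: publish a timestamped synthetic message to a topic and measure how long delivery to a subscriber takes. In the sketch below a toy in-process broker stands in for a real MQTT broker so the example is self-contained; with an actual MQTT client library the publish/subscribe calls follow the same shape:

```python
import time
from collections import defaultdict

class ToyBroker:
    """In-process stand-in for an MQTT broker — just enough pub/sub
    to illustrate the monitoring pattern. A real deployment would
    use an MQTT client against an actual broker."""
    def __init__(self):
        self.subs = defaultdict(list)  # topic -> subscriber callbacks

    def subscribe(self, topic, callback):
        self.subs[topic].append(callback)

    def publish(self, topic, payload):
        for cb in self.subs[topic]:
            cb(topic, payload)

def check_link(broker, topic="monitor/heartbeat"):
    """Synthetic device-to-device check: publish a timestamped
    heartbeat and measure delivery latency at the subscriber."""
    result = {}
    def on_message(topic, sent_at):
        result["latency_ms"] = (time.monotonic() - sent_at) * 1000.0
    broker.subscribe(topic, on_message)
    broker.publish(topic, time.monotonic())
    return result.get("latency_ms")  # None means the heartbeat never arrived

print(check_link(ToyBroker()))  # a small number of milliseconds
```

The topic name `monitor/heartbeat` is an assumption for illustration; in practice the check would run against the topics the devices actually use.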

For example, many IoT devices are used to assess various environmental or external factors, such as thermometers and pressure gauges. A thermometer may be performing well (it is available and communicating with other devices quickly), but is it actually reporting the right temperature? Corroborating sensor data with environmental data can show if the sensors are actually working, as well as which sensors on a device may be working the best. Device manufacturers can then use this information to continually hone and improve their product designs.
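The corroboration step might look like the following sketch: compare each sensor's reported value against an independent reference measurement and flag sensors outside tolerance. The sensor IDs and tolerance value are hypothetical:

```python
def corroborate(readings, reference_c, tolerance_c=0.5):
    """Compare each sensor's reported temperature against an
    independent reference. A sensor can be fast and available yet
    still fail this check by reporting bad data.

    `readings` maps hypothetical sensor IDs to reported Celsius
    values; `tolerance_c` is an assumed acceptable error band."""
    report = {}
    for sensor_id, value_c in readings.items():
        error = abs(value_c - reference_c)
        report[sensor_id] = {"error_c": error, "ok": error <= tolerance_c}
    return report

# Illustrative data: t-03 responds quickly but reads ~3 °C high.
readings = {"t-01": 21.4, "t-02": 21.6, "t-03": 24.9}
print(corroborate(readings, reference_c=21.5))
```

Ranking sensors by their error over time is what lets a manufacturer see which designs "work best" and feed that back into the product.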

The latest figures show there are currently about 4.1 billion internet users worldwide. In an IoT world, the end user is the device — and there are expected to be 31 billion IoT-connected devices by the end of 2020. If collecting and making sense of the huge volume of end-user performance data (measuring the reliability and speed of web-to-end user interactions) around the world has been tough, you can imagine what it’s going to be like for IoT data, where we’re measuring not just speed and availability, but actual product functionality across hundreds of millions of sensors. If end-user performance data is a tidal wave many organizations are still grappling with, IoT performance and functional data is going to be a tsunami.

In order for IoT to reach its full potential and widest adoption — particularly within the critical healthcare and B2B sectors, where rock-solid performance and reliable functionality is not just a nice-to-have, but a must-have — those responsible for managing device performance will have no choice but to get their arms around this. Fortunately, the traditional web world offers years of experience and many valuable lessons that can provide a logical starting point.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
