
How to overcome IoT performance testing challenges

Applications that interact with things rather than human users introduce new testing challenges, but IT pros can use design best practices to ensure IoT applications will scale.

Testing applications that don't interact with human users means IT pros must know the difference between human-driven and thing-driven applications.

Most organizations use applications that interact with human users and have decades of experience testing them. They must adjust their testing processes to handle applications that interact with things, such as IoT sensors and smart thermostats. Developers can fix some problems during IoT performance testing, but others can only be addressed properly during application design.

Understand differences between two types of applications

Five primary differences separate human-driven applications from IoT applications, and each can affect IoT performance testing:

  1. Large, highly variable activity levels. IoT networks have tens of thousands of devices. All of them might be active or inactive at any given time, making workloads large and highly variable.
  2. Stateless operation. Human-driven applications typically involve a connected sequence of steps, called state or context. IoT sensors cannot maintain context themselves, so IoT applications must manage state for stateless elements.
  3. No subjective feedback. Many testing procedures rely on humans to provide feedback on the effectiveness of the UI or the acceptability of response times; IoT devices can't provide that.
  4. Wide distribution. An IoT network could spread over a large area subject to considerable variation in network performance and latency. Testing it effectively may require a similarly wide distribution of test data injection points.
  5. Diverse device interfaces. IoT device inventories change, and new devices often use different data formats and connection protocols. These differences should not percolate into the deeper application framework; otherwise, every new device would have to run through the entire test process.

The problems created in testing IoT application functions relate largely to the second of the differences above. Stateless application development isn't new, but many enterprise development teams have rarely, if ever, developed this way. It's best to think of an application as a state or event system with a number of operating states to interpret device input, including starting up, operating and recovering. Developers need a systematic way of viewing an application to test all its features and every type of event in every possible state.
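As a rough illustration, the sketch below models an IoT application as a small state machine. The state names, event types and transition table are hypothetical, chosen only to show how a test suite can systematically exercise every type of event in every possible state.

```python
from enum import Enum, auto

class AppState(Enum):
    STARTING_UP = auto()
    OPERATING = auto()
    RECOVERING = auto()

class DeviceEvent(Enum):
    READING = auto()
    HEARTBEAT = auto()
    FAULT = auto()

# Hypothetical transition table: (current state, event) -> next state.
# Any (state, event) pair missing here is a gap the test suite should expose.
TRANSITIONS = {
    (AppState.STARTING_UP, DeviceEvent.HEARTBEAT): AppState.OPERATING,
    (AppState.OPERATING, DeviceEvent.READING): AppState.OPERATING,
    (AppState.OPERATING, DeviceEvent.FAULT): AppState.RECOVERING,
    (AppState.RECOVERING, DeviceEvent.HEARTBEAT): AppState.OPERATING,
}

def next_state(state: AppState, event: DeviceEvent) -> AppState:
    """Return the next state, or stay in the current state if the pair is undefined."""
    return TRANSITIONS.get((state, event), state)

# Exhaustive functional check: walk every state/event combination.
for state in AppState:
    for event in DeviceEvent:
        print(state.name, event.name, "->", next_state(state, event).name)
```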

Device variability creates IoT performance testing challenges

The variable activity of devices can also affect functional testing. For applications that must correlate events from multiple devices to analyze state and event context, event distribution and timing pose major testing problems. Instead of relying on random test data generators, testers need specific, repeatable sequences of events to test properly.
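One way to replace purely random generators, sketched below with hypothetical device IDs and payloads, is to script deterministic event sequences so a correlation test sees the same ordering and relative timing on every run.

```python
import time

# A scripted sequence: (delay in seconds, device_id, event payload).
# The ordering and spacing are part of the test, not left to chance.
SCENARIO = [
    (0.0, "motion-01", {"type": "motion", "detected": True}),
    (0.5, "door-07",   {"type": "contact", "open": True}),
    (2.0, "motion-01", {"type": "motion", "detected": False}),
]

def replay(scenario, send):
    """Replay a scripted event sequence against the application under test."""
    start = time.monotonic()
    for delay, device_id, payload in scenario:
        # Sleep until the scripted offset so relative timing is reproducible.
        time.sleep(max(0.0, start + delay - time.monotonic()))
        send(device_id, payload)

# Example: print instead of posting to the real ingestion endpoint.
replay(SCENARIO, lambda dev, evt: print(dev, evt))
```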


The sheer scale of IoT data also affects performance testing. The best performance tests work the IoT application at the level expected in production, plus the designed-in excess that handles peak loads. Fortunately, the same tools used for performance test injection in human-driven applications will serve IoT applications, provided testing generates workloads with the proper distribution of injection points.

Teams often can't replicate the scale of a live IoT deployment, however. When full-scale testing isn't practical, the best strategy is to gather data from successively larger-scale tests and plot a performance-versus-load curve. Developers should expect to run at least a half-dozen tests. After the third, they can use the results to draw a curve that predicts later test results, and they can stop testing when the curve correctly anticipates the results of larger-scale tests. They should run one last test at very high volume -- even if they can't replicate the real distribution of IoT sources -- to validate application performance beyond the design load.
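A minimal sketch of the curve-fitting step follows, assuming latency results from a handful of successively larger tests; the device counts and latencies are illustrative only.

```python
import numpy as np

# Hypothetical results: simulated device count vs. mean response time in ms.
loads     = np.array([1_000, 2_000, 4_000, 8_000])
latencies = np.array([42.0, 55.0, 83.0, 150.0])

# Fit a simple quadratic performance-versus-load curve to the early tests.
coeffs = np.polyfit(loads, latencies, deg=2)
curve = np.poly1d(coeffs)

# Predict the next, larger test; if the measured value lands close to this,
# the curve is trustworthy and further intermediate tests can be skipped.
print("Predicted latency at 16,000 devices: %.1f ms" % curve(16_000))
```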

Apply good design practices

Device testing in IoT is a problem of both test procedures and design. Most IoT pioneers have learned that if IoT applications must be customized for every new device, device introduction will be a major problem, likely riddled with errors. The key to device testing is to first design the application to abstract devices by class. That means that motion or temperature sensors should present a common interface to the rest of the application. Adapt each specific device to that common abstraction. This limits device testing to testing a device against that class-specific standard. If it's a motion sensor, does it present the same data as all members of that class after being adapted?
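As a sketch of that class abstraction, assuming a hypothetical vendor payload format, each concrete device adapts its own output to one common motion-sensor interface, so device testing reduces to checking the adapter's output against the class-level standard.

```python
from abc import ABC, abstractmethod

class MotionSensor(ABC):
    """Common abstraction every motion sensor presents to the application."""
    @abstractmethod
    def read(self) -> dict:
        """Return {'detected': bool, 'timestamp': float} regardless of vendor."""

class VendorAMotion(MotionSensor):
    def __init__(self, raw_payload: dict):
        self._raw = raw_payload          # e.g. {"m": 1, "ts": 1714000000}

    def read(self) -> dict:
        # Adapt the vendor-specific fields to the class-level standard.
        return {"detected": bool(self._raw["m"]), "timestamp": float(self._raw["ts"])}

def conforms_to_class(sensor: MotionSensor) -> bool:
    """Class-level acceptance test: does the adapted device look like any other member?"""
    reading = sensor.read()
    return set(reading) == {"detected", "timestamp"} and isinstance(reading["detected"], bool)

print(conforms_to_class(VendorAMotion({"m": 1, "ts": 1714000000})))
```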

It's good software design practice to define APIs so that the important properties of a device or software element are exposed in a harmonized way. Unexposed internal features and elements cannot be manipulated from the outside. Certify APIs against this policy through a software peer review or the imposition of a fixed set of APIs created by application architects and promulgated downward to all developers.
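One hedged illustration of a fixed, architect-defined contract, using a hypothetical thermostat example: only harmonized properties are exposed through the API, and internal calibration details stay private and untestable from the outside.

```python
from typing import Protocol

class ThermostatAPI(Protocol):
    """Fixed contract defined by the architects; every implementation must match it."""
    def temperature_c(self) -> float: ...
    def setpoint_c(self) -> float: ...

class SmartThermostat:
    def __init__(self, raw_adc: int):
        self._raw_adc = raw_adc          # internal detail, not part of the API
        self._offset = -0.5              # vendor calibration, also hidden

    def temperature_c(self) -> float:
        return self._raw_adc * 0.01 + self._offset

    def setpoint_c(self) -> float:
        return 21.0

def heating_error(device: ThermostatAPI) -> float:
    # Callers manipulate the device only through the harmonized surface.
    return device.temperature_c() - device.setpoint_c()

print(heating_error(SmartThermostat(raw_adc=2200)))
```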

Functional testing of APIs for IoT should be very similar to functional testing of any set of modern APIs, provided that testing follows the design rules above. Pay special attention if an API conveys state or context. State control is essential in nearly all IoT applications. A number of different mechanisms can pass state information to a service or microservice where it's needed, and all of them involve an API. That means API testing has to face a concurrency problem.
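A small concurrency probe, assuming a hypothetical state store behind the API, illustrates the kind of race an API test has to provoke deliberately rather than hope to hit by accident.

```python
import threading

class DeviceStateStore:
    """Stand-in for the service that holds state on behalf of stateless devices."""
    def __init__(self):
        self._state = {}
        self._lock = threading.Lock()

    def update(self, device_id: str, reading: int):
        # The lock is the behavior under test: remove it and the
        # concurrent test below should start losing updates.
        with self._lock:
            self._state[device_id] = self._state.get(device_id, 0) + reading

store = DeviceStateStore()
threads = [threading.Thread(target=lambda: [store.update("sensor-1", 1) for _ in range(10_000)])
           for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()

# With correct state handling this prints 80000 every time.
print(store._state["sensor-1"])
```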

All IoT applications are asynchronous, meaning that they deal with real-world events as they happen, rather than reading data from a file where I/O remains synchronized with program behavior. This means that testing must simulate the same kind of behavioral disorganization that real-world, event-driven systems exhibit. Purely random test data may serve, but only if the real IoT devices generate random data. It's better to have tests driven by a series of device emulator systems, each of which injects device-appropriate data at the scale and distribution needed.
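A minimal asyncio sketch of that emulator approach, with hypothetical device names and cadences: each emulator injects device-appropriate data on its own jittered schedule, so the application under test sees the disordered arrivals a real event-driven system would produce.

```python
import asyncio, random

async def emulate_device(device_id: str, interval: float, send):
    """One emulator: inject readings at roughly the device's own cadence."""
    for _ in range(5):
        # Jitter the interval so arrivals interleave unpredictably across devices.
        await asyncio.sleep(interval * random.uniform(0.5, 1.5))
        await send(device_id, {"value": random.random()})

async def main():
    async def send(device_id, payload):
        print(device_id, payload)       # replace with a post to the ingestion API

    await asyncio.gather(
        emulate_device("thermostat-1", 1.0, send),
        emulate_device("motion-3", 0.2, send),
        emulate_device("meter-9", 2.5, send),
    )

asyncio.run(main())
```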
